There is a third, pytype (by Google), which I found pretty good, but it rarely gets mentioned. However, like the others it is slow, so I hope this one is fast and supports all the pytype features (especially being able to type-check un-annotated code).
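To make "type-check un-annotated code" concrete, here is a hypothetical snippet with no annotations at all; a checker that infers types (as pytype aims to do) should be able to flag the bad call without any hints from the author:

```python
# No annotations anywhere - a purely inference-based checker can still
# work out that `x` has to support division by an int.
def half(x):
    return x / 2

half("10")  # passing a str: unsupported operand types for /, the kind of bug inference catches
```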
To round out the big 4, there's also Pyre from Meta. I haven't used it myself, as when I last checked it covered only a small number of the typing PEPs, but I've heard some good words about it.
Looking at the nice demo, I think defaulting to asking for confirmation when there is ambiguity, instead of dazzling the user with `mergiraf solve` magic, would help; there is already a `mergiraf review`. Then add a confirmation prompt, an option to undo the resolution completely, or to do it on a file-by-file basis (with help on what command to run next).
Ah, more complex than I thought: "venvstacks allows you to package Python applications and all their dependencies into a portable, deterministic format, without needing to include copies of these large Python frameworks in every application archive.", and in "Caveats and Limitations" (please, all projects should have one): "does NOT support combining arbitrary virtual environments with each other".
Is there a helper to merge venv1 and venv2, or to create venv2 so that it uses venv1's dependencies and both are merged on load?
The hard part is figuring out what "merge" means for your use case. If there's a definite set of packages that should be in the environment, all already at definite locations on the local drive, there are many possible approaches (copying, `.pth` files, hard links, and symlinks should also work) to stitching together the venv you want. But you can't just feed the individual package paths to Python, because `sys.path` entries are places where Python will look for the top-level package folders (and top-level module `.py` files), not the paths to individual importable things.
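If you do want the stitching approach, here is a minimal sketch of the `.pth` route (all paths are hypothetical, and it assumes both venvs use the same Python version and compatible packages):

```python
# Run with venv2's interpreter. Drops a .pth file into venv2's site-packages
# so that venv1's site-packages is appended to sys.path at startup; the
# standard `site` module reads .pth files and adds each listed directory.
import sysconfig
from pathlib import Path

venv1_site = Path("/opt/venvs/venv1/lib/python3.12/site-packages")  # hypothetical location
venv2_site = Path(sysconfig.get_paths()["purelib"])  # site-packages of the current (venv2) interpreter

(venv2_site / "venv1_extra.pth").write_text(f"{venv1_site}\n", encoding="utf-8")
# From the next interpreter start inside venv2, packages installed in venv1
# are importable too; venv2's own copies still come first on sys.path.
```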
More importantly, at runtime you can only have one version of a given package, because imports are resolved by name at runtime and cached. Pip won't normally put multiple versions of the same library into the same environment; you can possibly force it to (or, more likely, explicitly do it yourself), but then everything that wants that library will get whichever version is `import`ed and cached first, which will generally be whichever one is found first on `sys.path` when the first `import` statement is reached at runtime. (Yes, the problem is the same if you only have one venv in the first place, in the sense that your dependency graph could be unsolvable. But naively merging venvs could mean not noticing the problem until, at runtime, something tries to import something else, gets an incompatible version, and fails.)
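A small self-contained sketch of that "first one found on `sys.path` wins, then stays cached" behaviour (the module name and paths are made up for the illustration):

```python
# Two directories each contain a different "version" of a module named mylib;
# whichever directory appears earlier on sys.path is imported and cached.
import sys
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
for version in ("1.0", "2.0"):
    pkg_dir = root / f"v{version}"
    pkg_dir.mkdir()
    (pkg_dir / "mylib.py").write_text(f'__version__ = "{version}"\n')

sys.path.insert(0, str(root / "v2.0"))
sys.path.insert(0, str(root / "v1.0"))  # earlier entry shadows the later one

import mylib
print(mylib.__version__)        # -> 1.0

# Once cached in sys.modules, later sys.path changes don't matter:
sys.path.remove(str(root / "v1.0"))
import mylib                    # served from the sys.modules cache
print(mylib.__version__)        # still 1.0
```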
For example, there can be a "dev" group that includes the "test", "mkdocs", and "nuitka" groups (Nuitka wants to be run with the venv it builds the binary for, so to keep that venv minimal it gets its own group).
My understanding is that the entire reason venv exists is that Python's library system is nothing but dependency spaghetti: whatever is needed by one project conflicts with whatever is needed by another, so you have to give them bespoke library environments where those conflicts won't interfere with one another.
From that perspective, "merging" them directly defeats the purpose. What is needed is a better library ecosystem.
venvs are used to isolate groups of dependencies, but it's not just about conflicts. Many other languages expect you to do the same thing; people complain less because the language statically resolves imports and can support multiple versions of a library in the same environment; and/or because the ecosystem has conventions that let the tooling detect the "current environment" more reliably; and/or because the standard installer isn't itself a library that defaults to appearing in every environment and installing specifically into "its own" environment; and/or (possibly the biggest one) because they don't have to worry about building and installing complex multi-language projects where the one they're using is just providing a binding.
An important reason for using them is to test deployment: if your code works in a venv that only has specific things installed (not just some default "sandbox" where you install everything), then you can be sure that you didn't forget to list a dependency in the metadata. Plus you can test with ranges of versions of your dependencies and confirm which ones your library will work with.
They're also convenient for making neat, isolated installations for applications. Pipx wraps pip and venv to do this, and as I understand it there's similarly uvx for uv. This is largely about making sure you avoid "interference", but also about pinning specific versions of dependencies. It also lowers the bar somewhat for your users: they still need to have a basic idea of what Python is and know that they have a suitable version of Python installed, but you can tell them to install Pipx if they don't have it and run a single install command, instead of having to wrestle with both pip and venv and then also know how to access the venv's console scripts that might not have been put on PATH.
And with just 3 layers: Runtime, Framework, Application. But at least you are not switching tools, and it presumably would prevent you from installing LARGE_v1.1 and then, in a later layer, installing TINY_v2.2, which upgrades LARGE to v1.2 so that your Docker images are now twice the size.
GitHub provided a way to contribute, but also to avoid learning to rebase, thus making it more welcoming to devs who only know about commit and pull - that is what made it so popular. The squash-then-rebase or merge step is done on the server side. Plus it has a very "harmless" UI, but that hides a lot of details (patchsets), and the layout wastes so much space imo.
This also means devs could avoid learning more about git, and this lowest-common-denominator git workflow makes it so frustrating for those of us who learned git all the way. I can't even mark a PR as "do not squash" to prevent it from being merged in the default way, which throws out all the history.
IMO you are spot on. GitHub's worst sin is that it has mis-educated new generations of developers. My 16yo son uses GitHub every day; I've needed to explain fetch + rebase to him several times. It just doesn't seem to stick; it seems foreign to the entire community he's collaborating with.
Yeah, this annoys me too. I actually find the "forced merge" (`--no-ff`) style even worse because you can never tell if you're looking at "real" commits or just crap because they couldn't be bothered to rebase.
Must say, though, I think "history" is completely the wrong way to think about version control. It's not about tracking history, it's about tracking versions. History is the crap: unrebased commits. Rebasing turns the history (throwaway works in progress) into versions.
It's a bit cumbersome, and I think it's only recently that you can make longer dependency chains. It's certainly not automated away with just git commands, but maybe there is a GitLab API way. The only way I know is to "edit" the PR (or MR, in GitLab speak) and paste the URL into some "depends on" field, then save.
There are certainly other problems as well. For example, you might have MR 1 from feature1 to master, and MR 2 from feature2 to master which in turn depends on MR 1. Most likely your feature2 branch is off your feature1 branch, so it contains feature1's changes when compared to master, and that's what is shown in the GitLab review UI. This makes reviewing MR 2's changes in parallel with MR 1 frankly impossible.
Having said that, I still think this would be the right way to organize this kind of work; GitLab's execution is just not great, unfortunately. All of this is probably impossible in GitHub too. I wonder if Gerrit gets this right; I have no experience with it.
edit:
One interesting point about MR dependencies in GitLab is that I think you can depend on MRs from other projects. This is sometimes useful if you have dependent changes across projects.
I usually just point the target branch of MR 2 at MR 1's branch. After merging MR 1, GitLab automatically changes it to the default branch, so it's more or less okay.
However, this makes updating these MRs very rebase-heavy, and as said in the OP, that is hostile to reviewers.
Attention Set <3 - that alone is worth another post. Gerrit really is one of the best-kept dev secrets, and if you never had the luck of seeing it in person at a company where you worked, well...
Makes me wonder what other git or dev-in-general blindspots I have.
Yeh, Attention Set is a game changer. We (Aviator) also took inspiration from (ahem.. copied) Gerrit's attention set:
https://docs.aviator.co/attentionset
There is one thing I miss on Gerrit when you push a stack of commits: a central place to talk about the whole of the stack, not just individual commits. This "big picture" (but still technical) stuff too often happens in the issue tracker. But where to place it, I have no idea. The stack is just too ephemeral and can be completely different on the next push.
Yeah, but you can't really discuss the topic itself, right?
I do think this is a weakness of Gerrit. It doesn't really capture "big picture" stuff nearly so well. At least on GH you can read the top-level comment, which is independent of the commits inside it. Most of the time I was deep in Gerrit doing review or writing patches, it was because the architectural decisions had already been made elsewhere.
I guess it's one of the tradeoffs of Gerrit being only a code review tool. Phabricator also didn't suffer from this so much, because you could just create a ticket to discuss things in the exact same space. Gerrit is amazingly extensible, though, so plugging this in is definitely possible, at least.
On a mailing list, you used to be able to write up the big picture in the "cover letter" (patch#0). Design-level discussions would generally occur in a subthread of patch#0. Also, once the patch set was fully reviewed, the maintainer could choose to apply the patches on a side branch at first (assuming the series was originally posted with proper "--base" information), then merge said side branch into master at once. This would preserve proper development history, plus the merge commit provided space for capturing the big-picture language from the cover letter in the git commit log.
That's exactly what Gerrit can do. When you push an x-b-c-d-e chain, these show up stacked in the UI, but you can easily cherry-pick b onto main (check that CI passes, plus the usual review) and rebase everything on top of that. If it is x, the bottom one, you can directly submit it and continue with the others.
All this rebasing sounds like constant pain to pull from Gerrit. Does it actually create new branches v2 v3 after a rebase? Or how do I switch my local checkout to the rebased branch from remote?
That's just a `git pull --rebase` away (or configure pull to always rebase instead of merge): all the non-merged commits are rebased onto the new upstream, and during the rebase the already-merged commit is dropped. The next `git push origin HEAD:refs/for/main` will then push the remaining commits.
https://github.com/google/pytype?tab=readme-ov-file#pytype--...