Ugh, I really wish this had been written in Go or Rust. Just something that produces a single binary executable and doesn't require you to install a runtime like Node.
Projects like this have to update frequently, so having a mechanism like npm or pip or whatever to handle that automatically is probably easier. It's not like the program is doing heavy lifting anyway; unless you're committing outright programming felonies, there shouldn't be any issues on modern hardware.
It's the only argument I can think of; something like Go would be goated for this use case in principle.
> A single, pre-compiled binary is convenient for the user's first install only.
It's not.
It's convenient for CI, for deployment, for packaging, for running multiple versions. It's extremely simple to update (just replace the binary with another one).
Now, e.g. "just replacing one file with another" may not have convenience commands like "npm update". But it's not hard.
My point is that a pre-compiled binary is far more convenient for *everyone involved in the delivery pipeline*, including the end-user. Especially for delivering updates.
As someone who's packaged JavaScript (Node), Ruby, Go, and Rust tools into .debs, snaps, and rpms: packaging against a dynamic runtime (node, ruby, rvm, etc.) is a giant PITA that will break on a significant number of users' machines, and will probably break on everyone's machine at some point. Whereas packaging a binary is as simple as it can get: most such packages need only one dependency that everyone and his dog already has: libc.
> My point is that a pre-compiled binary is far more convenient for *everyone involved in the delivery pipeline*, including the end-user. Especially for delivering updates.
The easiest is running "sudo apt update && sudo apt upgrade" and having my whole system updated, instead of writing some script to fetch it from some GitHub releases page and hoping it hasn't been hijacked.
Having a sensible project is what makes it easy down the line (including not depending on GNU libc if it isn't needed, since some people use musl). And I believe it's easy to set up a repository if your code is proprietary (you just need to support the most likely distributions, like Ubuntu, Fedora, SUSE's Tumbleweed, ...).
It’s a standalone binary. It doesn’t require anything at all. It’s literally just one file you can put anywhere you like. It doesn’t need a third-party package manager.
Unless you build self-updating in, which Google certainly has experience with, in part to avoid clients lagging behind. Because aside from being a hindrance (refusing to start and telling the client to update), there's no way you can actually force them to run an upgrade command.
How so? Doesn't it also make updates pretty easy? Have the precompiled binary know how to download the new version. Sure, there are considerations for backing up the old version, but it's not much work, and it frees you up from being tied to one specific ecosystem.
That's not an argument about the relative difficulty of "updating a binary file" vs "updating via pip"; it's merely addressing what your work deems important and possible.
(Aside from the fact that allowing "use pip" completely defeats the purpose of any of these other mechanisms, so it's a poster-child example of security theater.)
Just `wget -O ~/.local/bin/gemini-cli https://ci.example.com/assets/latest/gemini-cli` (or the curl version thereof).
It can pick the file off GitHub, a CI's assets, a package repo, a simple FTP server, an HTTP fileserver, over SSH, from a local cache, etc. It's so simple that one doesn't need a package manager. So there commonly is no package manager.
Yet in this thread people are complaining that "a single binary" is hard to manage/update/install because there's no package manager to do that with. It's not there because managing/updating/installing is so simple that you don't need a package manager!
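And if the worry is a hijacked download, the one-liner extends naturally; a minimal sketch, assuming the vendor publishes a SHA-256 checksum alongside the binary (the URLs here are placeholders):

```sh
#!/bin/sh
set -eu
url="https://ci.example.com/assets/latest/gemini-cli"

# Fetch the binary and its published checksum (assumed to exist next to it).
curl -fsSLo /tmp/gemini-cli "$url"
curl -fsSLo /tmp/gemini-cli.sha256 "$url.sha256"

# Refuse to install anything that doesn't match the published checksum.
(cd /tmp && sha256sum -c gemini-cli.sha256)

install -m 0755 /tmp/gemini-cli "$HOME/.local/bin/gemini-cli"
```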
> is so simple that you don't need a package manager!
You might not know the reasons people use package managers. Installing this "simple" way makes it quite difficult to update and remove compared to using a package manager. And although those steps are also "simple", it's quite a mess to manage packages manually instead of using such battle-tested systems.
> You might not know the reasons people use package managers.
People use package managers for the following:
- to manage dependencies
- to update stuff to a specific version or the latest version
- to downgrade stuff
- to install stuff
- to remove stuff
Any of these, except for the dependency management, is a single command, or easy to do manually, with a single compiled binary (sketched below). They are so simple that they can easily be built into the tool, or handled by your OS's package manager, or done with a "shell script" that the vendor can provide (instead of, or next to, the precompiled binary).
I did not say manually; you inferred that, but I never meant it. On the contrary: because it's so simple, automating it, or having your distro, OS, or package manager do it for you, is trivial. As opposed to that awful "curl example.com/install.sh | sudo bash" or those horrible built-in updaters (that always start nagging when I open the app, the one moment I don't want to be bothered by updates because I need the app now).
The only reason one would then need a package manager is to manage dependencies. But precompiled binaries like Go's or Rust's are typically statically compiled, so they have no (or at most one) dependency.
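To make that concrete, here's roughly what each of those tasks looks like for a single static binary; the paths, URLs, and version numbers are made up for illustration:

```sh
# install: drop the binary somewhere on PATH and make it executable
curl -fsSLo ~/.local/bin/tool https://example.com/tool/v1.2.0/tool && chmod +x ~/.local/bin/tool

# update: same command, pointed at the newer release
curl -fsSLo ~/.local/bin/tool https://example.com/tool/v1.3.0/tool && chmod +x ~/.local/bin/tool

# downgrade: same command, pointed at the older release
curl -fsSLo ~/.local/bin/tool https://example.com/tool/v1.1.0/tool && chmod +x ~/.local/bin/tool

# remove
rm ~/.local/bin/tool
```

Each of those is a one-liner that a distro package, a vendor script, or the tool itself can wrap.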
Imagine the ease of a single ".tar.gz" or so that includes the correct Python version, all pip packages, all env vars, config files, and is executable. If you distribute that, what do you still need pip for? If you distribute that, how simple would turning it into a .deb, snap, dmg, Flatpak, AppImage, brew package, etc. be? (Answer: a lot easier than doing this for the "directory of .py files". A LOT.)
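As a sketch, the "executable" inside such a tarball could just be a tiny launcher that points the bundled interpreter at the bundled packages; every path and name below is invented for illustration:

```sh
#!/bin/sh
# bin/mytool - launcher shipped inside the tarball; everything it needs sits next to it.
here="$(cd "$(dirname "$0")/.." && pwd)"

export PYTHONHOME="$here/python"              # the bundled Python build
export PYTHONPATH="$here/lib/site-packages"   # the vendored pip packages

exec "$here/python/bin/python3" "$here/lib/mytool/main.py" "$@"
```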
> Imagine the ease of a single ".tar.gz" or so that includes the correct Python version, all pip packages, all env vars, config files, and is executable. If you distribute that, what do you still need pip for?
pip is there so you don't need to do that. In the deployment world, you really want one version of everything per system and to know that everything is in sync. To get that, the solution was a distribution of software plus a tool to manage it. We then extended that to programming-language ecosystems, and pip is part of the result.
But on a workstation, a lot of people want the latest, so the next solution was to abstract the programming-language ecosystem away from the distribution (and you may not have a choice in the case of macOS). What we get is directory-restricted interactions (go, npm, ...) or shell magic so the tooling thinks it's the system (virtualenvs, ...).
It's a neat trick, but the only reason to do so is if you want to distribute a compiled version of the software to customers. If the user has access to the code, it's better to adapt the software to the system (repositories, Flatpak, ...) or build a system around it (VMs, containers, ...).
You'd think that, but a globally installed npm package is annoying to update: you have to do it manually, and I so rarely need to update my other global npm packages that, personally at least, I always forget to do it.
I feel like Cargo or Go modules could absolutely do the same thing as the mess of build scripts they have in this repo, and arguably better.
I don't think that's the main reason. I just installed this and peeked in node_modules. There are a lot of random deps, probably for the various local capabilities, and it was probably easier to find those libs in the Node ecosystem than elsewhere.
Also, react-reconciler caught my eye. Apparently that's a dependency of ink, which lets you write text-based UIs in React.
You don't have to believe me if you don't want to. But I strongly advise everyone who still uses prettier to try a formatter written in Rust, for example dprint. It's a world of difference.
The question is whether what makes it useful is actually being in the terminal (limited, glitchy, awkward interaction) or whether it's being able to run next to files on a remote system. I suspect the latter.
Eh, I can't see how your comment is relevant to the parent thread. Creating a CLI in Go is barely more complicated than in JS. Rust probably is, but people aren't asking for that.
They wrote the CLI "GUI" in React using ink, which is JS-only. I don't know what the Golang way of doing this would be, but maybe it's harder if you want the same result.
There are many GUI-building libraries in Go. Sure, you wouldn't be writing JSX (and I agree it's an interesting idea), but that doesn't mean it's any more work to get things rendered in a terminal with other approaches, especially with these AI assistants to help you finish the boring parts.
If the UI is complicated at all, React is a well-established way to do that easily. The one-off tools will be harder, and even the AI won't know them as well as it knows React.
I really don't mind either way. My extremely limited experience with Node indicates they have installation, packaging and isolation polished very well.
Note, I haven't checked that this actually works, although if it's straightforward Node code without any weird extensions it should work in Bun at least. I'd be curious to see how the exe size compares to Go and Rust!
I was going to say the same thing, but they couldn’t resist turning the project into a mess of build scripts that hop around all over the place manually executing node.
I guess it needs to start various processes for the MCP servers and whatnot? Just spawning another Node is the easy way to do that, but a bit annoying, yeah.
That is a point, not a line. An extra 2MB of source is probably a 60MB executable, as you are measuring the runtime size. Two "hello worlds" are 116MB? Who measures executables in megabits?
It depends a lot on what the executable does. I don't know the hello-world size, but anecdotally I remember seeing several Go binaries in the single-digit megabyte range. I know the code size is somewhat larger than one might expect because Go keeps some type info around for reflection whether you use it or not.
The Golang runtime is big enough by itself that it makes a real difference for some WASM applications, and people are using Rust instead purely because of that.
From my perspective, I'm totally happy to use pnpm to install and manage this. Even if it were a native tool, NPM might be a decent distribution mechanism (see e.g. esbuild).
Obviously everybody's requirements differ, but Node seems like a pretty reasonable platform for this.
As a longtime user of NPM, and overall a fan of JS and TS and even their runtimes: NPM is a dumpster fire, and forcing end users to use it is brittle, lazy, and hostile. A small set of dependencies will easily result in thousands (if not tens of thousands) of transitive dependency files being installed.
If you have to run endpoint protection, that will blast your CPU with load, and it makes moving or even deleting that folder needlessly slow. It also shifts the hosting burden of NPM onto every user, who must each install the dependencies, instead of onto the CI instances, which isn't very nice to our hosts. Dealing with that once during your build phase and then packaging that mess up is the nicer way to distribute things that depend on NPM to end users.
I ran the npm install command in their README; it took a few seconds, then it worked. Subsequent runs don't have to redownload stuff. It's 127MB, which is big for an executable but not a real problem. Where is the painful part?
Language choice is orthogonal to distribution strategy. You can make single-file builds of JavaScript (or Python or anything) programs! It's just a matter of packaging, and there are packaging solutions for both Bun and Node. Don't blame the technology for people choosing not to use it.
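For example, Bun can bundle a script, its dependencies, and the runtime into a single self-contained executable; a minimal sketch (the entry-point path and output name are just examples):

```sh
# Compile a Node-style CLI into one standalone binary with Bun.
bun build ./src/index.ts --compile --outfile mycli

# The result runs without a system-wide Node or Bun install.
./mycli --help
```

Node also has its own single-executable application mechanism, though it takes a few more steps.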
My thoughts exactly. Neither Rust nor Go, nor even C/C++, which I could accept if there were some native OS dependencies. Maybe this is a hint about who its main audience could be.