It's really unfortunate the term "free software" took off rather than e.g. "libre software", since it muddies discussions like this. The point of "free software" is not "you don't have to pay," it's that you have freedom in terms of what you do with the code running on your own machine. Selling free software is not incompatible with free software: it's free as in freedom, not as in free beer.
Nobody in this comments thread appears to be confused by or misusing the term "free software". We're talking about free software vs (commercial) proprietary software.
> I am still surprised most Linux Distros haven't changed their package managers to allow for selling of proprietary solutions directly
Free packages remain unaffected, but now there are optional commercial options you can pay for which fund the free (as in free money) infrastructure you already take advantage of, so that these projects are fully sustainable. I imagine some open source projects could even set themselves up to receive donations directly via package managers.
I promise you, everybody understands the general idea, but adding a built-in store to your operating system is far from a neutral action that has no second- or third-order effects. It isn't that it somehow affects "free" packages. Incoming text wall, because I am not very good at being terse.
- It creates perverse incentives for the promotion of free software.
If development of the operating system is now funded by purchases of proprietary commercial software in the app store, it naturally incentivizes them to sell more software via the app store. This naturally gives an incentive to promote commercial software over free software, contrary to the very mission of free software. They can still try to avoid this, but I think the incentive gets worse due to the next part (because running a proper software store is much more expensive.)
Free software can be sold, too, but in most cases it just doesn't make very much sense. If you try to coerce people into paying for free software that can also be obtained free of charge, it basically puts it on the same level as any commercial proprietary software. And if said commercial software is "freemium", that basically incentivizes users to just go with the freemium proprietary option instead, one that is not only not free software but also often arguably outright manipulative toward the user. I don't really think free software OS vendors want to encourage this kind of thing.
- It might break the balance that makes free software package repositories work.
Software that is free as in beer will naturally compete favorably against software that costs money, as the difference between $0 and $1 is the biggest leap. Instead of selling software you can own, many (most?) commercial software vendors have shifted to "freemium" models where users pay for subscriptions or "upsells" inside of apps.
In commercial app stores, strict rules, and even unfair practices likely to be outlawed, are used to force vendors through a standardized IAP system. This has many downsides for competition, but it does act as a (weak) check against abusive vendors who would institute even worse practices if left to their own devices. Worse, though, is that proprietary software is hard to vet: the most scalable way to analyze it is black-box analysis, which is easily defeated by any vendor who wants to. Android and iOS rely on a combination of OS-level sandboxing and authorization, plus many automated and ostensibly human reviews.
I am not trying to say that what commercial app stores do is actually effective or works well; in fact, that only serves to help my point here. Free software app stores are not guaranteed to be free of malware any more than anything else is, but they have a pretty decent track record, and part of the reason is that packaging is done by people who are essentially volunteers working on the OS, and very often third parties to the software itself. The packages themselves are often reviewed by multiple people to uphold standards, and many OSes take the opportunity to limit or disable unwanted anti-features like telemetry.

Because the software is free, it is possible to look at the actual changes that go into each release if you so please. I often do look at the commit logs and diffs from release to release when reviewing package updates in Nixpkgs, especially since it's a good way to catch things that need updating in the package but aren't immediately apparent (e.g., in NixOS, a dlopen dependency added by a new feature wouldn't show up anywhere obvious).
Proprietary software is a totally different ball game. Maintainers can't see what's going on, and more often than not, it is simply illegal for them to attempt to do so in any comprehensive way, depending on where they live.
If the distributions suddenly become app store vendors, they will wind up needing to employ more people full time to work on security and auditing. Volunteers doing stuff for free won't scale well to a proper, real software store. Which further means that they need to make sure they're actually getting enough revenue for it to be self-sustaining, which again pushes perverse incentives to sell software.
What they wanted to do is build a community-driven OS built on free software by volunteers and possibly non-profit employees, and what they got was a startup business. Does that not make the problem apparent yet?
- It makes the OS no longer neutral to software stores.
Today, Flatpak and Steam are totally neutral and have roughly equal footing with any other software store; they may be installed by default in some cases, but they are strictly vendor neutral (except, obviously, in SteamOS). If the OS itself ships one, it lives in a privileged position that other software stores don't. This winds up with the exact same sorts of problems that occur on Windows, macOS, iOS, and Android. You can, of course, try to behave in a benevolent manner, but even better than trying to behave benevolently is putting yourself in as few situations as possible where you would need to, for the sake of the health of the ecosystem. :)
--
I think you could probably find some retorts to this if you wanted. It's not impossible to make this model work, and some distributions do make it work, at least as far as they have gotten so far. But with that said, I will state again my strongly held belief: it isn't that projects like Debian or Arch Linux couldn't figure out how to sell software, or don't know that they can.
Most people's mental model of Claude Code is that "it's just a TUI" but it should really be closer to "a small game engine".
For each frame our pipeline constructs a scene graph with React then
-> lays out elements
-> rasterizes them to a 2d screen
-> diffs that against the previous screen
-> finally uses the diff to generate ANSI sequences to draw
We have a ~16ms frame budget, so we have roughly 5ms to go from the React scene graph to ANSI written out.
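The interesting step in that pipeline is the last one: diffing the rasterized screen and emitting only the ANSI needed to repaint what changed. A minimal sketch of that step (hypothetical names, not Claude Code's actual implementation), assuming each frame has already been rasterized to an array of text rows:

```typescript
// Hypothetical sketch of the diff -> ANSI step; assumes each frame is
// rasterized to an array of text rows before diffing.
function diffToAnsi(prev: string[], next: string[]): string {
  let out = "";
  for (let row = 0; row < next.length; row++) {
    if (prev[row] !== next[row]) {
      // Move the cursor to the changed row (1-based), clear it, redraw it.
      out += `\x1b[${row + 1};1H\x1b[2K${next[row]}`;
    }
  }
  return out; // only changed rows are rewritten, keeping per-frame output small
}

const frame1 = ["> hello", "status: idle"];
const frame2 = ["> hello!", "status: idle"];
// Only row 1 differs, so only row 1 gets a cursor-move + erase + redraw.
process.stdout.write(diffToAnsi(frame1, frame2));
```

The payoff is that an unchanged frame costs zero bytes of terminal output, which is what makes a 16ms budget plausible at all over something like SSH.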
This is just the sort of bloated overcomplication I often see in first iteration AI generated solutions before I start pushing back to reduce the complexity.
Usually, after 4-5 iterations, you can get something that has shed 80-90% of the needless complexity.
My personal guess is this is inherent in the way LLMs integrate knowledge during training. You always have a tradeoff in contextualization vs generalization.
So the initial response is often a hack plugged together from 5 different approaches; your pushback provides focus and constraints that steer it toward a more internally aligned solution.
Ok I’m glad I’m not the only one wondering this. I want to give them the benefit of the doubt that there is some reason for doing it this way but I almost wonder if it isn’t just because it’s being built with Claude.
Counterpoint: Vim has existed for decades and does not use a bloated React rendering pipeline, and doesn't corrupt everything when it gets resized, and is much more full featured from a UI standpoint than Claude Code which is a textbox, and hits 60fps without breaking a sweat unlike Claude Code which drops frames constantly when typing small amounts of text.
Yes, I'm sure it's possible to do better with customized C, but vim took a lot longer to write. And again, fullscreen apps aren't the same as what Claude Code is doing, which is erasing and re-rendering much more than a single screenful of text.
It's possible to handle resizes without all this machinery, most simply by clearing the screen and redrawing everything when a resize occurs. Some TUI libraries will automatically do this for you.
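In Node, the clear-and-redraw strategy is only a few lines, since SIGWINCH surfaces as a real `resize` event on `process.stdout`. A sketch under that assumption (`renderFrame` is a hypothetical stand-in for whatever your app draws):

```typescript
// Simplest resize strategy: clear the screen and repaint everything.
// renderFrame() is a hypothetical stand-in for the app's actual drawing code.
function renderFrame(columns: number, rows: number): string {
  // "\x1b[2J" erases the screen, "\x1b[H" homes the cursor; then repaint all.
  return "\x1b[2J\x1b[H" + `terminal is ${columns}x${rows}\n`;
}

// Node delivers SIGWINCH as a "resize" event on process.stdout.
process.stdout.on("resize", () => {
  const { columns = 80, rows = 24 } = process.stdout;
  process.stdout.write(renderFrame(columns, rows));
});
```

It's wasteful compared to diffing, but resizes are rare and human-initiated, so the flicker of one full repaint is usually acceptable.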
Programs like top, emacs, tmux, etc are most definitely not implemented using this stack, yet they handle resizing just fine.
That doesn't work if you want to preserve scrollback behavior, I think. It only works if you treat the terminal as a grid of characters rather than a width-elastic column into which you pour information from the top.
Yes yes I'm familiar with the tweet. Nonetheless they drop frames all the time and flicker frequently. The tweet itself is ridiculous when counterpoints like Vim exist, which is much higher performance with much greater complexity. They don't even write much of what the tweet is claiming. They just use Ink, which is an open-source rendering lib on top of Yoga, which is an open-source Flexbox implementation from Meta.
What? Technology has stopped making sense to me. Drawing a UI with React and rasterizing it to ANSI? Are we competing to see what the least appropriate use of React is? Are they really using React to draw a few boxes of text on screen?
There is more than meets the eye for sure. I recently compared a popular TUI library in Go (Bubble Tea) to the most popular Rust library (Ratatui). They use significantly different approaches for rendering. From what I can tell, neither is insane. I haven’t looked to see what Claude Code uses.
You can do it, and it may be OK for a single user with idle waiting times, but performance/throughput will be roughly halved (closer to 2/3), and free context will be more limited with 8xH200 vs 16xH100 (assuming a decent interconnect). Depending a bit on the use case and workload, 16xH100 (or 16xB200) may be a better config for cost optimization. There is often a huge economy of scale with such large mixture-of-experts models, to the point where it can even be cheaper to use 96 GPUs instead of just 8 or 16. The reasons are complicated and involve better prefill caching and less memory transfer per node.
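Back-of-the-envelope arithmetic makes the "closer to 2/3" and the free-context point concrete. Using public spec-sheet figures (H100 SXM at 80 GB / 3.35 TB/s, H200 at 141 GB / 4.8 TB/s per GPU; treat these as assumptions):

```typescript
// Aggregate memory and bandwidth for each config. Per-GPU figures are
// public spec-sheet values (assumed): H100 SXM 80 GB / 3.35 TB/s,
// H200 141 GB / 4.8 TB/s.
const cfg = (n: number, memGB: number, bwTBs: number) => ({
  totalMemGB: n * memGB,
  totalBwTBs: n * bwTBs,
});

const h200x8 = cfg(8, 141, 4.8);   // 1128 GB, 38.4 TB/s
const h100x16 = cfg(16, 80, 3.35); // 1280 GB, 53.6 TB/s

// Decode is largely memory-bandwidth bound, so 8xH200 lands at roughly
// 38.4 / 53.6 ≈ 0.72 of the 16xH100 aggregate bandwidth, and it also has
// ~150 GB less total memory, i.e. less headroom for KV cache / free context.
console.log(h200x8, h100x16);
```

This ignores interconnect and parallelism overheads, so treat it as an upper bound on the 8-GPU config, not a benchmark.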
FWIW, Ink is working on an incremental rendering system: they have a flag to enable it. It's currently pretty buggy though unfortunately. Definitely wish Anthropic would commit some resources back to the project they're built on to help fix it...
It worked out poorly for America — we got stuck in a long expensive war that we got basically nothing from — but for the average Iraqi? I'd much rather be an Iraqi citizen than an Iranian one, and that wouldn't have been true in the 90s. Saddam was pretty evil — and a bad leader. Iraq's GDP per capita is 6x higher today than it was in 2002, a year before the invasion.
It worked out pretty poorly for the average Iraqi. Hundreds of thousands of Iraqis were killed (some estimates put it at around 1 million), and millions of people became refugees.
Citing the relative GDP per capita number is reductive and doesn’t give a good picture of the average person’s life.
GDP should be banned as a proxy for quality of life; it's insane how many people still cite it when it has been shown to neglect so much of what actually counts. To OP: go check out Doughnut Economics; the book does a good job clearing up economic fallacies and the mismodeling of such things.
This is a pretty wild counter-factual. Reminds me of a report I saw about a hipster cafe existing in Baghdad 2025 as proof of success of the US invasion. What would the alternative have been? How do you factor in the loss of life? I suppose the real answer is asking Iraqis...
Broccoli has 2.8g of protein per 100g. Beef has 26g per 100g, and chicken has 27g. If you're trying to get protein, broccoli isn't going to do much, and I think it's good that the government is being honest about that. A chart that listed broccoli as a major source of protein would be misleading. Broccoli is a good source of many nutrients, and the chart calls it out as such, but it is not an effective source of protein.
If you compare protein per kJ instead, broccoli has 0.021g protein per kJ whereas lean beef mince has 0.028g per kJ. Much more similar. Although of course you would also need some higher-density protein foods so you don't have too much volume to eat.
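The per-kJ arithmetic is just protein divided by energy, both per 100g. A quick check reproducing the ratios above (the kJ-per-100g figures are approximate label-style values assumed for illustration):

```typescript
// Protein density per unit of energy. Both inputs are per 100 g, so the
// 100 g cancels out. Energy figures (kJ/100g) are approximate assumptions.
const proteinPerKJ = (proteinG: number, energyKJ: number) =>
  proteinG / energyKJ;

const broccoli = proteinPerKJ(2.8, 133); // ≈ 0.021 g protein per kJ
const leanBeef = proteinPerKJ(26, 930);  // ≈ 0.028 g protein per kJ

console.log(broccoli.toFixed(3), leanBeef.toFixed(3));
```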
But that is a kind of silly way to compare. Broccoli isn't very filling _and_ it doesn't have very much protein in it. That doesn't change the fact that it lacks protein.
The question is if I'm preparing a meal that I want to be filling, healthy, and energizing, how should I do it. Broccoli isn't a good answer to the protein part of that question.
Normalising by mass is a poor way to assess food's protein content since different foods have greatly different water contents. E.g. beef jerky has much higher protein per 100g than beef largely because it's dried (admittedly, probably also because they use leaner cuts)
Beef has ~3x more protein per gram than legumes. It is much more protein-dense than vegetables or legumes.
Similarly, it's a "complete" protein, whereas most vegetables and legumes are missing necessary amino acids.
The downside of beef isn't the "density" of nutrients: the downside is high saturated fat. Chicken breast, though, is similarly high in protein without the saturated fat downside.
> most vegetables and legumes are missing necessary amino acids
In practice, there's no evidence of amino acid deficiency in vegans/vegetarians except ones that restrict even further (potato diet, fruitarians, etc)
https://pmc.ncbi.nlm.nih.gov/articles/PMC6893534/
Besides the ever-popular soybean being a complete protein, if you have normal variety in your diet, it's just not something you have to worry about.
>In practice, there's no evidence of amino acid deficiency in vegans/vegetarians
That is not what your linked article says. It says there is no evidence of protein deficiency, and that the deficiency of amino acids is overstated. Not that there is no deficiency.
And vegan/vegetarian health is really a 2nd order variable here. Vegans and vegetarians could have massive amino acid surpluses and it remains a fact that vegetable proteins lack useful amino acids that meat has. Maybe the vegetarians are eating lots of eggs. Maybe they are taking lots of supplements. Maybe they are actually eating meat despite calling themselves vegans and vegetarians. It doesn't matter. There really is no disputing the fact about the composition of meat/vegetable protein.
> a fact that vegetable proteins lack useful amino acids that meat has.
This isn't a problem since you only need nine essential amino acids, and they are present in adequate quantity in various vegetables and shrooms. The others are synthesized by one's body.
Only if those vegetables or shrooms were grown in natural sunlight (no greenhouse plastic/glass involved) and in a soil with abundant minerals, macronutrients, and high brix value.
The fat is an excellent source of energy, though, and it's very hard to get fat by eating fat because it's essentially hormonally inert. I.e., eating fat doesn't trigger insulin, which is the hormone that enables body fat accumulation.
So the problem with steak isn't the steak itself; it's the "steak dinner", where the meat comes with sides such as french fries and drinks such as beer.
> The downside of beef isn't the "density" of nutrients: the downside is high saturated fat.
There are other downsides to beef, such as the batshit crazy use of ecosystems and resources required to produce it at industrial scale.
Got a (beef) cow roaming in your yard, somehow getting by on whatever grows out of the ground? Enjoy your steak! Generating 6x the calories via a water-intensive cover crop to feed the cow so you can eat it later? Just say no.
This is orthogonal to nutritious eating habits; I don't think the food pyramid should lie about nutrition due to ecological concerns. (I do think the food pyramid should be a little more concerned about saturated fat than it is, though — which is why I called out chicken as an alternative, and elsewhere also mentioned fish.)
Worth noting that, like amino acids, there are essential fatty acids as well, and most people have poor nutrition there... red meat isn't "only" saturated fat but has a fairly balanced fatty acid profile. You can have too much, but in moderate cuts it isn't too bad.
I usually suggest around 0.5g fat per 1g protein as a minimum, higher if keto/carnivore.
That's true, although fish has a better balance of essential fatty acids than red meat. Oddly enough, wagyu has a (much) better fatty acid profile than other types of beef, so you can justify the occasional wallet splurge on health grounds!
The GPT-5 series is a new model, based on the o1/o3 series. It's very much inaccurate to say that it's a routing system and prompt chain built on top of 4o. 4o was not a reasoning model and reasoning prompts are very weak compared to actual RLVR training.
No one knows whether the base model has changed, but 4o was not a base model, and neither is 5.x. Although I would be kind of surprised if the base model hadn't also changed, FWIW: they've significantly advanced their synthetic data generation pipeline (as made obvious via their gpt-oss-120b release, which allegedly was entirely generated from their synthetic data pipelines), which is a little silly if they're not using it to augment pretraining/midtraining for the models they actually make money from. But either way, 5.x isn't just a prompt chain and routing on top of 4o.
Prior to 5.2 you couldn't expect to get good answers to questions about anything after March 2024. It was arguing with me that Bruno Mars did not have two hit songs in the last year. It's clear that in 2025 OpenAI used the old 4o base model and tried to supercharge it using RLVR. That had very mixed results.