They did not. Anthropic is protecting its huge asset: the Claude Code value chain, which has proven itself to be a winner among devs (me included, after trying everything under the sun in 2025). If anything, Anthropic's mistake is that they are incapable of monetizing their great models in the chat market, where ChatGPT reigns: i.e. Anthropic did not invest in image generation; Google did, and Gemini has a shot at the market now.
Apparently nobody gets the Anthropic move: they are only good at coding and that's a very thin layer. Opencode and other tools are fair game for collecting inputs and outputs that can later be used to train their own models - not necessarily being done now, but they could; Cursor did it. Opencode also makes it all easily swappable: just eval something by popping in another API key and see whether Codex or GLM can replicate the CC solution. Oh, it can! So let's cancel Claude and save big bucks!
Even though CC the agent supports external providers (via the ANTHROPIC_BASE_URL env var), they are working hard on making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc). The move totally makes sense, like it or not.
It's all easily swappable without OpenCode. Just symlink CLAUDE.md -> AGENTS.md and run `codex` instead of `claude`.
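In shell terms, the swap described above is roughly the following (a sketch, assuming CLAUDE.md is the file you actually maintain; the link direction is a judgment call, whichever file is "real" becomes the target):

```shell
# Reuse an existing CLAUDE.md as the AGENTS.md that codex reads.
touch CLAUDE.md             # stand-in here for your real instructions file
ln -sf CLAUDE.md AGENTS.md  # AGENTS.md now resolves to CLAUDE.md
readlink AGENTS.md          # prints: CLAUDE.md
```

Edits to either name then land in the same file, so both `claude` and `codex` see the same instructions.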
> they are working hard on making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc).
Every feature you listed has an open-source MCP server implementation, which means every agent that supports MCP already has all those features. MCP is so epic because it has already nailed the commodification coffin firmly shut. Besides, Anthropic has way less funding than OAI or Google. They wouldn't win a moat-building race even if there were one.
That said, the conventional wisdom is that lowering switching costs benefits the underdogs, because the incumbents have more market share to lose.
Models each have their own, often competing, quirks in how they utilize AGENTS.md and CLAUDE.md. It's very likely a CLAUDE.md written for use with Claude Code uses prompting techniques that result in worse output if taken directly and used with Codex. For example, Anthropic recommends putting info that an agent must adhere to in statements like "MUST run tests after writing code" and other all-caps directives, whereas people have found that using the same language with GPT-5.2 results in less instruction-following and more timid responses than if the AGENTS.md were written without them.
In my experience, even a version upgrade of the same model will tend to break many assumptions about its quirks, so most people don't have time to try to optimize for them anyway. This is the wrong technology if you're that concerned about reliability.
> i.e. Anthropic did not invest in image generation; Google did, and Gemini has a shot at the market now.
They're after the enterprise market - where office / workspace + app + directory integration, security, safety, compliance etc. are more important. 80% of their revenue is from enterprise - less churn, much higher revenue per W/token, better margins, better $/user.
Microsoft adopting the Anthropic models into copilot and Azure - despite being a large and early OpenAI investor - is a much bigger win than yet another image model used to make memes for users who balk at spending $20 per month.
Same with the office connector - which is only available to enterprises[0] (further speaking to where their focus is). There hasn't yet been a "claude code" moment for office productivity, but Anthropic are the closest to it.
[0] This may be a mistake as Claude Code has been adopted from the ground up
Anthropic is rather obnoxious about training on user data, and I wonder if enterprises (and small businesses!) will grow up soon and start using competing products instead.
(Not that Google is amazing in this regard — their purchasable product options are all over the place to the point where it might be nearly a full time (human!) job to keep track of how to correctly purchase Gemini. Gemini itself seems incapable of figuring this out, or at least I haven’t found the right prompt yet. Gemini is absolutely amazing at hallucinating Google product offerings. OpenAI, on the other hand, seems to have nailed this.)
Usually you can see it when someone nags about “call us” pricing that is targeted at enterprise. People that nag about it are most likely not the customers someone wants to cater to.
When I was a software developer, I mostly griped about this when I wanted to experiment to see if I would even ask my larger enterprise if they would be interested in looking into it. I always felt like companies were killing a useful marketing stream from the enterprise's own employees. I think Tailscale has really nailed it, though. They give away the store to casual users, but make it so that a business will want to talk to sales to get all the features they need with better pricing per user. Small businesses can survive quite well on the free plan.
I'm sure everyone "wants to" land a many million dollar deal with a big company that has mild demands, but that doesn't mean those naggers are bad customers. Bad customers have much more annoying and unreasonable demands than a pricing sheet.
I don’t think anyone lands contracts with “mild demands”.
Most of the time you want to cut off ‘non-customers’ as soon as possible and not leave ‘big fish’ without a direct contact person who can explain stuff. People just clicking around on their own will make assumptions that need to be addressed in a way that wastes no one's time.
> Most of the time you want to cut off ‘non customers’ as soon as possible
If you mean this literally, one of the best ways to turn non-customers into customers is to give them a way to pay you. Which means telling them the price. If you're implying something else by ‘non customers’ then I'm missing the implication.
> and not leave ‘big fish’ without a direct contact person who can explain stuff
You can give a contact person and have a list of prices.
> People just clicking around on their own will make assumptions that need to be addressed in a way that wastes no one's time.
Making everyone call you to negotiate is going to waste time.
Definitely there exist customers one must fire, but the flip side is, some of them might have genuine complaints.
... an extremely popular marketing tool ... sending an equally excessive amount of data above what they were paying for. They were far less adamant about the product, and on some days I didn't even want them as a customer. If there was a minor blip in the service, they were the first to complain. Reminder, [Sentry] was still a side project at the time so I had a day-job. That meant it was often stressful for Chris and I to deal w/ customer support, and way more stressful dealing with outages.
We had one customer who loved the product, and one who didn't. Both of these customers had such extreme volumes of data that it had a tangible infrastructure cost associated with hosting them. We knew the best thing to do was to find a way to be able to charge them more money for the amount of data they sent. So we set off to build that and then followed up with each customer.
To our surprise, the customer that loved the product didn't want to pay more. The customer who was constantly complaining immediately jumped on the opportunity. What's the lesson to take away from this?
... when I was a teenager I worked at Burger King, and there was an anecdote I will never forget: for every customer that complains, there are nine more with a similar experience. I've cemented this in my philosophy around development, to the point where I now believe over-rotating on negative feedback is actually just biasing towards the customers who truly see the value in what you're offering. The customer that was complaining really valued our product, whereas the customer that was happy was simply content.
I am curious how big of a chance they have. I could imagine many enterprises that are already (almost by default) Microsoft customers (Windows, Office, Entra etc.) will just default to Copilot (and maybe Azure) to keep everything neatly integrated.
So an enterprise that is very dedicated to using everything Microsoft would then have to go through the trouble of using Claude as their AI just because it is slightly better for coding.
I have a feeling I am missing something here though, I would be happy for anyone to educate me!
I think at the current price point the capability of office copilot (which I don't use; I've only read reviews) is basically that of an email writer/summarizer/meeting-notes taker.
It can't hold a candle to Opus 4.5, which can now create and modify financial models from PDFs, augmented with web search and the Excel skill (gpt-5.2 can do this too). That said, the market IS smaller.
This is really not the point. Anthropic isn't cutting off third-party access. You can use their models via API all you want. Why are people conflating these issues? Anthropic doesn't owe anyone an offer of its "unlimited" pro tiers outside of Claude Code. It's not hard to build your own Opencode and use API keys. A CLI interface by itself is not a moat.
People should take this as a lesson on how much we are being subsidized right now.
Claude code runs into use limitations for everyone at every tier. The API is too expensive to use and it's _still_ subsidized.
I keep repeating myself but no one seems to listen: quadratic attention means LLMs will always cost astronomically more than you expect after running the pilot project.
Going from 10k loc to 100k loc isn't a 10x increase, it's a 99x increase. Going from 10k loc to 1m loc isn't a 100x increase, it's a 9999x increase. This is fundamental to how transformers work and is the _best case scenario_. In practice things are worse.
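The arithmetic is just the square of the context ratio, minus the baseline. A back-of-envelope sketch (treating cost as purely proportional to n², which the comment itself notes is the best case):

```python
# Quadratic attention: cost grows with the square of the context length,
# so the *increase* over baseline is (n_new / n_old)^2 - 1.
def cost_increase(n_old, n_new):
    return (n_new / n_old) ** 2 - 1

print(cost_increase(10_000, 100_000))    # 99.0  -> a 99x increase
print(cost_increase(10_000, 1_000_000))  # 9999.0 -> a 9999x increase
```

Real bills depend on tokens rather than lines of code, and on how much of the codebase actually lands in context, which is what the replies below argue about.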
I don't see LLMs ingesting the LoCs. I see CC finding and grepping and reading file contents piecewise, precisely because it is too expensive to ingest a whole project.
So what you say is not true: cost does not directly correlate with LoC.
There are high-quality linear or linear-ish attention implementations for context scales around 100k...1M. The price of context can be made linear and moderate, and it can be greatly improved by implementing prompt caching and passing the savings to users. Gpt-5.2-xhigh is good at this and in my experience has markedly higher intelligence and accuracy than opus-4.5, while enjoying a lower price per token.
>Claude code runs into use limitations for everyone at every tier
What do you mean by this? I know plenty of people who never hit the upgraded Opus 4.5 limits anymore even on the $100 plan, even those who used to hit the limits on the $200 plan w/ Opus 4 and Opus 4.1.
>The API is too expensive to use and it's _still_ subsidized.
What do you mean by saying the API is subsidized? Anthropic is a private company that isn't required to (and doesn't) report detailed public financial statements. The company operating at a loss doesn't mean all inference is operating at a loss; it means the company is spending an enormous amount of money on R&D. The fact that the net loss is shrinking over time suggests that inference is producing a net profit. In this business, there is an enormous up-front cost to train a model. That model then generates initially large but gradually diminishing revenue until it is deprecated. So at any given snapshot in time, while large ongoing R&D expenditure on the next model likely keeps the company's overall net profit negative, it's entirely possible that several, if not many or even most, of the previously trained models have fully recouped their training costs in inference revenue.
It's fairly obvious that the monthly subscriptions are subsidized to gain market share the same way Uber rides were on early on, but what indication do you have that the PAYG API is being subsidized? How would total losses have shrunk from $5.6B in 2024 to just $3B in 2025 while ARR grew from ~$1B to ~$7B over the same time period (one where usage of the platform dramatically expanded) if PAYG API inference wasn't running at a net profit for the company?
>quadratic attention means LLMs will always cost astronomically more than you expect after running the pilot project
This is only true as long as O(n²) quadratic attention remains the prevailing paradigm. As Qwen3-Next and Nemotron 3 Nano have shown with hybrid linear attention + sparse quadratic layers and a hybrid Mamba SSM, not all modern, performant LLMs necessarily need to run strictly O(n²) quadratic attention models. Sure, these aren't frontier models competitive with Opus 4.5 or Gemini 3 Pro or GPT 5.2 xhigh, but these aren't experimental tiny toy models like RWKV or Falcon Mamba that serve as little more than PoCs for alternative architectures, either. Qwen3-Next and Nemotron 3 Nano are solid players in their respective local weight classes.
It might make sense from Anthropic's perspective, but as a user of these tools I think it would be a huge mistake to build your workflow around Claude Code when they are pushing vendor lock-in this aggressively.
Making this mistake could end up being the AI equivalent of choosing Oracle over Postgres
As a user of Claude Code via the API (the expensive way), Anthropic's "huge mistake" is capping monthly spend (billed in advance and pay-as-you-go, some $500 - $1500 at a time, by credit card) at just $5,000 a month.
It's a supposedly professional tool with a value proposition that requires being in your work flow. Are you going to keep using a power drill on your construction site that bricks itself the last week or two of every month?
An error message says to contact support. They then point you to an enterprise plan for 150 seats when you have only a couple dozen devs. Note that 5000 / 25 = 200 ... coincidence? Yeah, you are forbidden from giving them more than a Max-like $200/dev/month for the usage-based API that's "so expensive".
They are literally saying "please don't give us any more money this month, thanks".
I imagine a combination of stop loss and market share. If larger shops use up compute, you can't capture as many customers by headcount.
// There was a figure around o3, an astonishing model punching far above the weights (ahem) of models that came after, suggesting the thinkiest mode cost on the order of $3,500 per deep research run. Perhaps OpenAI can afford that, while Anthropic can't.
That leads to the obvious question: is the API next on the chopping block? Or would they just increase API pricing to a point where they are A) making a profit off it and B) nobody would use the API just for a different client?
I'm pretty sure everyone is pricing their APIs to break even, maybe turning a profit if people use caching properly (as GPT-5 can if you mark the prompts properly).
Sounds plausible they're not really making any. Arbitrary and inflexible pricing policies aren't unusual, but it sounds easy enough for a new rapidly-growing company to let the account managers decide which companies they might have a chance of upselling 150 seat enterprise licenses to and just bill overage for everyone else...
Their target is the enterprise anyway, so they are apparently willing to enrage their non-CC user base over vendor lock-in.
But this is not the equivalent of Oracle over Postgres; those are different technology stacks that each implement an independent relational database. Here we're talking about Opencode, which depends on Claude models to work "as a better Claude" (according to the enraged users in the webs). Of course, one can still use OC with a bazillion other models, but Anthropic is saying that if you want the Claude Code experience, you gotta use the CC agent, period.
Now put yourself in the Anthropic support person's shoes, and suppose you have to answer an issue from a Claude Max user who is mad that OC is throwing errors when calling a tool during a vibe session, probably because the multi-million-dollar Sonnet model is telling OC to do something it can't, because it's not the Claude agent. Claude models are fine-tuned for their agent! If the support person replies "OC is an unsupported agent for Claude Code Max" you get an enraged customer anyway, so you might as well cut the whole thing off at the root.
If you’ve only got a CLAUDE.md and sub-agent definitions in markdown, it is pretty easy to do at the moment, although more of their feature set is moving in a direction that doesn’t have 1:1 equivalents in other tools.
The client is closed source for a reason and they issued DMCA takedowns against people who published sourcemaps for a reason.
I'd rather have a product that is only good at one single thing than mid at everything, especially when the developer experience is much more consistent for me than using Gemini and ChatGPT; at this point I keep ChatGPT only for productivity reasons and sometimes for making better prompts for Claude (when I don't use Claude to make a better prompt). Once Anthropic realized they were discounting token usage for Claude Code, they should have made that more explicit, and the API key situation too (but hindsight is 20/20): either block third-party apps from the start or have you make a separate API key with no discount. Even then, though, this could have pissed off developers.
> Anthropic is protecting its huge asset: the Claude Code value chain
Why is that their “huge asset?” The genesis of this complaint is that Opencode et al replace everything but the LLM, so it seems like the latter is the true “huge asset.”
If Claude Code is being offered at or near operational breakeven, I don’t see the advantage of lock-in. If it’s being offered at a subsidy, then it’s a hint that Claude Code itself is medium-term unsustainable.
“Training data” is a partial but not full explanation of the gap, since it’s not obviously clear to me how Anthropic can learn from Claude Code sessions but not OpenCode sessions.
If developers are using Claude Code with its quirks, Anthropic controls the backend LLM.
If developers are using OpenCode, it's easy for developers to try different LLMs and maybe substitute it (temporarily or permanently).
In the enterprise market, once they choose a tool they tend to stay with it even if it is not the best; the cost and timeframe of changing are too high.
If developers could swap LLMs freely in their own tool, that is a big missed opportunity for Anthropic. Not a user-friendly move, but the norm in enterprise.
Right now, most enterprises are experimenting with different LLMs, and once they choose, they will be locked in for a long time.
If they can't choose because their coding agent doesn't let them, they will be locked to that.
Anthropic and OpenAI are essentially betting that a somewhat small difference in accuracy translates to a huge advantage, and continuing to be the one that's slightly but consistently better than others is the only way they can justify investments in them at all. It's natural to then consider that an agent trained to use a specific tool will be better at using that tool. If Claude continues to be slightly better than other models at coding, and Claude Code continues to be slightly better than OpenCode, combined it can be difficult to beat them even at a cheaper price. Right now, even though Kimi K2 and the likes are cheaper with OpenCode and perform decently, I spend more than 10x the amount on Claude Code.
In that case though, why the lock-in? If the combination really does have better performance than competitors’ offerings, then Anthropic should encourage an open ecosystem, confident in winning the comparison.
I imagine they do not see it as a level playing field. If OpenCode can draw on Claude Code credits but cannot draw on Codex ones (we've just had a tweet promising to fix this, more or less), then it can be construed as an advantage on the part of OpenAI. Personally I think it's idiotic and companies should stop penny-pinching in situations where people are already paying $200, there can be no more value extraction at this price point.
The problem is that the second you stop subsidizing Claude Code and start making money on it, the incentive to use it over opencode disappears. If opencode is the better tool than claude code - and that's the reason people are using their claude subscription with it instead of claude code - people will end up switching to it.
Maybe they can hope to murder opencode in the meantime with predatory pricing and build an advantage they don't currently have. It seems unlikely though - the fact that they're currently behind proves the barrier to building this sort of tool isn't that high, and there are lots of developers who build their own tooling for fun whom you can't really starve out of doing that.
I'm not convinced that attempting to murder opencode is a mistake - if you're losing you might as well try desperate tactics. I think the attempt is a pretty clear signal that Anthropic is losing, though.
It’s possible that tokens become cheap enough that they don’t need to raise prices to make a profit. The latest Opus is a third of the price of the previous one.
Then the competitors drop prices though. The current justification for claude code is just that it's an order of magnitude (or more) cheaper per token than comparable alternatives. That's a terrible business model to be stuck in.
If everyone is dropping prices in this scenario then I don’t see how the user eventually gets squeezed.
I mean, I guess they could do a bait and switch (drop prices so low that Anthropic goes bankrupt, then raise prices) but that’s possible in literally any industry, and seems unlikely given the current number of competitors.
I am pretty sure most people get Anthropic's move. I also think "getting it" is perfectly compatible with being unhappy about it and voicing that opinion online.
Agreed. The system is ALL about who controls the customer relationship.
If Anthropic ended up in a position where they had to beg various client providers to be integrated (properly), had to compete with other LLMs on the same clients, and could be swapped out at a moment's notice, they would just become a commodity and lose all leverage. They don't want to end up in such a situation. They do need to control the delivery of the product end-to-end to ensure that they control the customer relationship and the quality.
This is also going to be KEY in terms of democratizing the AI industry for small startups because this model of ai-outside-tools-inside provides an alternative to tools-outside-ai-inside platforms like Lovable, Base44 and Replit which don't leave as much flexibility in terms of swapping out tooling.
> Anthropic's mistake is that they are incapable of monetizing their great models in the chat market
The types of people who would use this tool are precisely the types of people who don't pay for licenses or tools. They're in a race to the bottom and they don't even know it.
> and that's a very thin layer
I don't think Anthropic understands the market they just made massive investments in.
>They did not. Anthropic is protecting its huge asset: the Claude Code value chain
That's just it: it has been proven over and over with alternatives that CC isn't the moat Anthropic seems to think it is. This is made evident by the fact that they're pouring R&D into DE/WM automation while CC has had all the same issues for months/years - it's as if they think CC is complete.
if anything MCP was a bigger moat than CC.
Also: I don't get the opencode reference. Yes, it's nice - but codex and gemini-cli are largely compatible with CC-generated codebases.
There will be some initial bumpiness as you tell the agent to append the claude.md file to all agent reads (or better yet, just merge it into the agents file), but that's about as rough as it'll get.
They’re betting that the stickiness of today’s regular users is more valuable than the market research and training data they were receiving from those nerdy, rule-breaking users.
> they are working hard on making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc). The move totally makes sense, like it or not.
I don't understand, why would other models not be able to support any, or some, or even a particular single one of these? I don't even see most of these as relevant to the model itself, but rather the harness/agentic framework around it. You could argue these require a base degree of model competence for following instructions, tool calling, etc, but these things are assumed for any SOTA model today, we are well past this. Almost all of these things, if not all, are already available in other CLI + IDE-based agentic coding tools.
i think they're trading future customer acquisition and model quality for the current claude code userbase, which they might also lose from this choice.
the reason i got the subscription wasn't to use claude code. when i subscribed you couldn't even use it for claude code. i got it because i figured i could use those tokens for anything, and as i figured out useful stuff, i could split it off onto api calls.
now that exploration of "what can i do with claude" will need to happen elsewhere, and the results of a working thing will want to stay with the model it's working with.
It's crazy how bad the interface is. I'm generally a fan of the model's performance, but there is not a day when their CLI won't flash random parts of scrollback or have a second of input lag while just typing the initial prompt (how is that even possible? you are not doing anything yet!). If this is their "premier tool", no vending machine business can save them.
> making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc)
I use CC as my harness but switch between third party models thanks to ccs. If Anthropic decided to stop me from using third party models in CC, I wouldn't just go "oh well, let's buy another $200/mo Claude subscription now". No. I'd be like: "Ok, I invested in CC—hooks/skills/whatever—but now let's ask CC to port them all to OpenCode and continue my work there".
That's a legal non-starter for all car companies. They would be made liable for every car incident where self-driving vehicles were spotted in close vicinity, independently of whether the suit is legit. A complete nightmare, and totally unrelated to the tech. Makers would spend more time and tech resources covering their asses in court than building safe cars.
Google is in that game too with "AI Mode", stealing traffic from ChatGPT.
But as OP and other threads here highlighted, the other ~half of the gold sits in more fenced communities like WhatsApp, IG, Telegram, and other messaging and non-digital communities that are getting their "news" and "information" from viral shorts from IG, TikTok and YT Shorts.
3% market share is 150 million active users give or take. That's no death by any count in the software world.
Gosh, I really wish Mozilla would just dig into their user base and find a way to become adequately sustainable... or find a way to work better as a foundation that is NOT sustained by Google, i.e. like the Wikimedia Foundation. I do spend a LOT of time in FF; can't anyone see there's value beyond selling ads and personal info that could make Mozilla more sustainable, dependable, and resilient?
Just to set the record straight on how and why these acquisitions go at IBM. This is a first-hand account from working at and with IBM and competitors, and being in the room as the tech-guy accessory to murder.
IBM lives off huge multi-year contract deals with its customers, each worth many millions of dollars. IBM has many of these contracts, maybe ~2000 of them around the planet, including with your own government wherever it is that you live. This is ALL that matters to IBM. ALL. That. Matters.
These huge contracts get renegotiated every X years. IBM renewal salespeople are tough and rough, and they spend every minute of every hour between renewals grooming the decision makers, sponsors, champions, and stakeholders (and their families) within these big corporations. Every time you see an IBM logo at a sports event (and there are many IBM-sponsored events), that's not IBM marketing to you, the ad-viewer. They are there to groom their stakeholders, who fight hard to be in the best IBM-sponsored seats at those venues, and at the glamorous pre- and after-parties, celebs included. IBM also sponsors other stuff, even special programs at universities. Who goes to these universities? Oh, you bet: the stakeholders' kids, who get the IBM treatment and IBM scholarships at those places.
But the grooming is not enough. The renewal itself is not usually at risk - who has the balls to uninstall IBM from a large corp? What is at risk is IBM's growth, which is fueled by price increases at every renewal point, not by the sale of new software or new clients - there are no new clients for IBM anywhere anymore! These price increases need to happen, not just because of inflation but because of the stock price and bonuses that keep the renewal army and management going strong, since this is a who-knows-who business. To justify the price increase internally at those huge client corps (not to the stakeholder but to their bosses, boards, users, etc.), IBM needs to throw a bone into these negotiations. The bone is whatever acquisition you see them make: Red Hat, Hashicorp... or developments like Watson. Or whatever. They are only interested in acquiring products or entering markets that can be thrown into those renewal negotiations, with very few exceptions. Why Confluent? Well, because they probably did their research and decided that existing Confluent licenses can be applied to one (yeah, one) or many renewal contracts as growth fuel for at least 1-to-N iterations of renewals.
Renewal contracts correspond to anywhere from 60% to 95% of IBM's revenue, depending on how you account for the consulting arm and "new" business (software/hw sales/subscriptions). I particularly have not seen lots of companies hiring IBM consultants "just because we love IBM consultants and their rates", so consulting at a site is always tied to the renewal somehow, even if billed separately or not billed at all. Same for new software sales: if a company wants something from IBM's catalog of its own whim and will, that will probably just get packed into the next renewal, because that's stakeholder leverage for justifying the renewal's increased base rate. Remember, a lot of IBM's mainframes are not even sold; they are just rentals.
Most IBM investment into research programs, new tech (quantum computing!), etc. is there just to help the renewals and secure a new govt deal here and there. How? Well, maybe the increase in the renewal for, say, the State of Illinois contract gets a bone thrown in for a new "Quantum Research Center (by IBM)" at some U of I campus or tech park, that the now-visionary Governor will happily cut the ribbon for, do the photo op, and give the speech. Oh wait! I swear I made this up as an example, but this one is actually true, lol:
Having worked in a government agency that ditched IBM, let me offer a view of what that looks like from the customer side:
IBM bought a company whose product we'd been using for a while, and had a perpetual license for. A few years after the purchase, IBM tried to slip a clause into a support renewal that said we were "voluntarily" agreeing to revoke the perpetual license and move to a yearly per-seat license. Note: this was in a contract with the government, for support, not for the product itself. They then tried to come after us for seat licenses costs. Our lawyers ripped them apart, as you can't add clauses about licensing for software to a services contract, and we immediately tore out the product and never paid IBM another dime.
I tell this story not to be all "cool story, bro", but to point out that IBM does focus on renewal growth, but they're not geniuses; they're just greedy assholes who sometimes push for growth in really stupid ways.
Yes, there was a reason: Perl took inspiration from Lisp (everything is a list), and everyone knows how quickly C's variadic arguments get nasty.
So @_ was a response to that issue, given that Perl was meant to be dynamic and untyped, and there were no IDEs or linters that could type-check and refactor code based on function signatures.
JS had the same issue forever and finally got a rest/spread operator in ES6. Python had variadic arguments from the start but no rest operator until Python 3. Perl already had spread/rest for varargs in the late '80s. For familiarity, Perl chose the @ sigil, which had meant varargs in the Bourne shell ($@) since the '70s.
Not only normal positional arguments like we get in C or Pascal: there are keyword arguments, optional arguments, and a rest argument, which is the closest to Perl's @_. And that's not even getting into destructuring lambda lists, which are available for macros, or typed lambda lists for methods.
Netflix didn't just kill the movie theater; it killed social life. Well, streaming, feeds and their algorithms in general did, but Netflix very much owned the narrative of what to do on a weekend night.
This is pure anecdata, certainly, but I've spoken with (or overheard) a few neighborhood hospitality business owners who had to close or cut back due to the constant decline in people leaving the house just to meet in a bar or coffee shop. Only sports nights keep them going, because watching sports online remains expensive in most places.
Maybe it's just my observation, or my neck of the woods, but it seems to fit the general sentiment of a reduced social environment on the streets in certain parts of the world.
Fine, but there's a noticeable asymmetry in how the three languages get treated. Go gets dinged for hiding memory details from you. Rust gets dinged for making mutable globals hard and for conceptual density (with a maximally intimidating Pin quote to drive it home). But when Zig has the equivalent warts they're reframed as virtues or glossed over.
Mutable globals are easy in Zig (presented as freedom, not as "you can now write data races").
Runtime checks you disable in release builds are "highly pragmatic," with no mention of what happens when illegal behavior only manifests in production.
The standard library having "almost zero documentation" is mentioned but not weighted as a cost the way Go's boilerplate or Rust's learning curve are.
The RAII critique is interesting but also somewhat unfair, because Rust has arena allocators too, and nothing forces fine-grained allocation. The difference is that Rust makes the safe path easy and the unsafe path explicit, whereas Zig trusts you to know what you're doing. That's a legitimate design choice.
The article frames Rust's guardrails as bureaucratic overhead while framing Zig's lack of them as liberation, which is grading on a curve. If we're cataloging trade-offs honestly, all three languages deserve the same scrutiny.
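On the arena point: nothing in Rust forces fine-grained allocation, and you can sketch coarse-grained allocation in safe Rust without any crate. This is a toy, index-based arena written for illustration (a hypothetical type, not a real library API):

```rust
// Toy index-based arena: coarse-grained allocation in safe Rust.
// Everything is freed at once when the arena is dropped.
struct Arena<T> {
    items: Vec<T>,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { items: Vec::new() }
    }

    // Hand out an index instead of a reference, sidestepping borrow issues.
    fn alloc(&mut self, value: T) -> usize {
        self.items.push(value);
        self.items.len() - 1
    }

    fn get(&self, id: usize) -> &T {
        &self.items[id]
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.alloc(1);
    let b = arena.alloc(2);
    assert_eq!(arena.get(a) + arena.get(b), 3);
    println!("sum = {}", arena.get(a) + arena.get(b));
}
```

Real arena crates (bumpalo, typed-arena) do this properly with lifetimes, but the point stands: allocation granularity is a choice in Rust, not a mandate.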
> you control the universe and nobody can tell you what to do
Global mutable variables are as easy in Rust as in any other language. Unlike other languages, Rust also provides better things that you can use instead.
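A minimal sketch of one of those "better things", using only the standard library: an atomic static gives you a mutable global without touching `unsafe` or `static mut` at all.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A mutable global in safe Rust: no `unsafe`, no data races.
static COUNTER: AtomicU64 = AtomicU64::new(0);

fn bump() -> u64 {
    COUNTER.fetch_add(1, Ordering::Relaxed) + 1
}

fn main() {
    assert_eq!(bump(), 1);
    assert_eq!(bump(), 2);
    println!("counter = {}", COUNTER.load(Ordering::Relaxed));
}
```

For non-atomic data, `static FOO: Mutex<T>` works the same way: mutation goes through a type that makes the synchronization explicit.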
A second component is that statics require const initializers, so for most of Rust's history, if you wanted a non-trivial global it was either a lot of faffing about or a third-party crate (lazy_static, once_cell).
Since 1.80 the vast majority of uses are a LazyLock away.
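For instance (the config map here is made up, but `std::sync::LazyLock` is the real standard-library API stabilized in Rust 1.80):

```rust
use std::collections::HashMap;
use std::sync::LazyLock;

// A non-trivial global: initialized lazily and thread-safely on first
// access, with no third-party crate required.
static CONFIG: LazyLock<HashMap<&'static str, u32>> = LazyLock::new(|| {
    let mut m = HashMap::new();
    m.insert("retries", 3);
    m.insert("timeout_ms", 500);
    m
});

fn main() {
    // First access runs the closure; later accesses reuse the value.
    assert_eq!(CONFIG.get("retries"), Some(&3));
    println!("timeout_ms = {}", CONFIG["timeout_ms"]);
}
```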
I don't think it's that it's specifically hard; it's more that it would have needed more plumbing in the language than the authors thought was worth the baggage, so they let the community solve it. Like the whole async-runtime debate.
Yeah, now they are part of Anthropic, who haven't figured out monetization themselves. Yikes!
I'm a user of Bun and an Anthropic customer. Claude Code is great, and it's definitely where their models shine. Outside of that, Anthropic sucks: their apps and web UI are complete crap, borderline unusable, and the models are just meh. I get it, CC's head probably got a power play here, given that his department is carrying the company and that his secret sauce, according to Oven's marketing, was Bun. In fact, VS Code's Claude backend is distributed as a bun-compiled binary, and the guy has been featured on the front page of the Bun website for at least a week or so. So they bought the kid the toy he asked for.
Anthropic urgently needs, instead, to acquire a good team behind a good chatbot and make something minimally decent. Then make their models work for everything else as well as they do for code.
> Yeah, now they are part of Anthropic, who haven't figured out monetization themselves.
Anthropic are on track to reach $9BN in annualised revenue by the end of the year, and the six-month-old Claude Code already accounts for $1BN of that.
Not sure that counts as "figured out monetization" when no AI company is even close to being profitable -- getting some money for running far more expensive setups is not nothing, but it's also not success.
Monetisation is not profitability, it’s just the existence of a revenue stream. If a startup says they are pre-monetisation it doesn’t mean they are bringing in money but in the red, it means they haven’t created any revenue streams yet.
How is their Web app any different than any other AI? I feel like it’s on par with all of them. It works great for me. Although I mostly use Claude code.
As far as the data goes, adjusted for inflation, tuition and fees have eased up in the last ~5 years [1]. But overall, college enrollment has been going down anyway [2], except for 2025, which hints at a slight rebound.
So I'd say we have to consider the full set of drivers that can correlate: overall rising cost of living making it very expensive to be at a university full-time, general labor market sentiment which is mostly down since covid, interest rates and debt risk which are still high despite recent cuts, etc.