MCP isn't a 96-page contract that covers every eventuality. It's a gentleman's agreement sealed with a firm handshake. And trying to write that 96-page contract now would be incredibly unwise.
But this person was laid off. His help was (apparently) not appreciated, and he's not helping anyone by sitting alone in his car in the parking lot.
Do you think it is healthy behavior to go to a parking lot at 0900 every day and do nothing because you mentally cannot face the idea of not going to an office?
That's just your take. We don't know where he sat in the team, so we can't conclude that his teammates didn't appreciate him. He didn't make the cut based on unknown metrics from upper management, but they have their own reasons for doing things.
Getting into the parking lot of the old office sounds way healthier than not making it out of bed at all.
What a weird dichotomy. It's not between "sitting in your old employer's parking lot" and "lying in bed all day"; it's between "sitting in your old employer's parking lot" and "learning new skills", "finding a new job", "discovering new hobbies", "spending more time with your loved ones" or almost anything else.
Instead he chose to sit alone in a parking lot so he could feel "normal". Feeling compelled to do a specific action (excluding things like breathing) just to feel normal has a name, and that name is "addiction". It is not usually considered a good thing.
He didn't just drive there and sit in the car for a week or so either, which could be a shock reaction or wanting to keep the routine going whilst looking for the next thing to do... He was doing this for 6-8 months. It reveals a lot about a "rational" crowd.
They could go anywhere though - why not go to a coffee shop at 9 with a laptop or on a morning hike? I agree sitting in bed depressed would be bad but it seems like avoiding the issue to specifically sit in the parking lot of an old employer.
At minimum I think it would be healthier to tie part of your identity to an aspect of your career you enjoy rather than to a specific employer itself.
You've cherry-picked a situation where there is an obvious social norm being broken. A better example would be going to the park and sitting on the bench you used to sit on with your ex. I agree with GP that this is healthier than lying despondent in bed.
Coping mechanisms are complex and diverse. The individual in question lost a major source of meaning-making in their life and was struggling to cope with that loss. I don't believe this is any less healthy than other common responses, which range from social withdrawal to substance abuse.
SQLite as a database for web services had a little bit of a boom due to:
1. People gaining newfound appreciation of having the database on the same machine as the web server itself. The latency gains can be substantial and obviously there are some small cost savings too as you don't need a separate database server anymore. This does obviously limit you to a single web server, but single machines can have tons of cores and serve tens of thousands of requests per second, so that is not as limiting as you'd think.
2. Tools like Litestream will continuously back up all writes to object storage, so that one web server having a hardware failure is not a problem as long as your SLA allows downtimes of a few minutes every few years. (and let's be real, most small companies for which this would be a good architecture don't have any SLA at all)
3. SQLite has concurrent writes now, so it's gotten much more performant in situations with multiple users at the same time.
So for specific use cases it can be a nice setup because you don't feel the downsides (yet) but you do get better latency and simpler architecture. That said, there's a reason the standard became the standard, so unless you have a very specific reason to choose this I'd recommend the "normal" multitier architectures in like 99% of cases.
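As a rough illustration of that same-machine setup, here's a minimal Python sketch (the file name, pragmas and table are illustrative assumptions, not anything prescribed by SQLite):

    import sqlite3

    # Open the database file that lives on the same disk as the web server process.
    conn = sqlite3.connect("app.db")

    # WAL mode lets readers proceed while the single writer commits; NORMAL
    # synchronous is a common pairing that trades a tiny durability window
    # for much cheaper commits.
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=NORMAL")

    conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)")
    conn.execute("INSERT INTO events (payload) VALUES (?)", ("hello",))
    conn.commit()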
Just to clarify: Unless I've missed something, this is only with WAL mode and concurrent reads at the same time as writes, I don't think it can handle multiple concurrent writes at the same time?
As I understand it, there can be concurrent writes as long as they don't touch the same data (the same database pages, to be exact). Also, the actual COMMIT part is still serialized and you need to begin your transactions with BEGIN CONCURRENT. If two transactions do conflict, the later one will be forced to ROLLBACK, although you can still try again. It is up to the application to do this.
Also just a note: BEGIN CONCURRENT is not in mainline SQLite releases. You need to build your own from a branch. Not a huge deal but just something to note.
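For what it's worth, the retry-on-conflict flow described above might look roughly like this in Python (a hypothetical sketch: the accounts table is made up, and per the note above it assumes the linked SQLite was built from the begin-concurrent branch, since a stock build will reject BEGIN CONCURRENT):

    import random
    import sqlite3
    import time

    # Autocommit mode so we can issue BEGIN/COMMIT ourselves instead of the
    # module's implicit transaction handling.
    conn = sqlite3.connect("app.db", isolation_level=None)

    def transfer(conn, src, dst, amount, retries=5):
        for attempt in range(retries):
            try:
                conn.execute("BEGIN CONCURRENT")
                conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
                conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))
                conn.execute("COMMIT")  # commits are still serialized
                return True
            except sqlite3.OperationalError:
                # Page-level conflict (or busy database): roll back and retry with backoff.
                try:
                    conn.execute("ROLLBACK")
                except sqlite3.OperationalError:
                    pass
                time.sleep(random.uniform(0, 0.01 * (attempt + 1)))
        return False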
I’m a fan of SQLite but just want to point out there’s no reason you can’t have Postgres or some other rdbms on the same machine as the webserver too. It’s just another program running in the background bound to a port similar to the web server itself.
A couple thousand simultaneous should be fine, depending on total system load, whether you're running on spinning disks or on SSDs, your p50/p99 latency demands, and of course you'd need to enable the WAL pragma in the first place so reads don't block the writer. Run an experiment to be sure about your specific situation.
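Something like this quick-and-dirty sketch (file name, table and row count are made up) gives a ballpark for single-writer throughput on your particular hardware:

    import sqlite3
    import time

    conn = sqlite3.connect("bench.db")
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)")

    n = 10_000
    start = time.perf_counter()
    for i in range(n):
        conn.execute("INSERT INTO t (v) VALUES (?)", (f"row {i}",))
        conn.commit()  # one transaction per insert: the pessimistic case
    elapsed = time.perf_counter() - start
    print(f"{n / elapsed:.0f} single-row commits per second")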
The energy is not free, since the solar panels cost money and don't last forever. Even at optimistic prices, it's still something like 0.03 USD/kWh. Install them on a boat and they have to deal with constant vibrations, humid conditions, seagulls shitting all over them, etc etc etc.
I used to work on ships and almost everything constantly breaks down without constant maintenance. I bet it would be much cheaper to put the solar panels on land and charge the ship when it's in port.
That may all be true, but there are other benefits that could make it worth it. For example it could be, in theory, self-sufficient forever if something else breaks down making it unable to maneuver. Then you can at least sit in the middle of the sea and have your heating and cooking and desalination working until you repair the propulsion.
There's something funny to me about taking your experience with solar on a small sailboat and extrapolating it to a commercial ferry that would need a very large solar installation. Something tells me the experience isn't transferable.
The point isn't to power the main drive, the point is to offset the energy used elsewhere on the ship.
My experience sailing and dealing with vessels from 30ft to 180ft gives me a perspective that you probably don't have.
Providing solar panels along the roof would give the ship a few kWh of energy that would otherwise be drawn from the main batteries. This would extend the range of the ship by 5-10%.
The ship battery is 40,000 kWh and uses at least 10,000 kWh per crossing, with 10 minutes to recharge. A handful of kWh is negligible because this isn't a sailboat.
The electricity sector in Uruguay has 98% renewable power
For how much cost? The range of the ship is already handled well by the batteries. An extra 5-10% isn’t going to meaningfully add value nor reduce fuel costs. There’s no way to recapture the capital expenditure such solar panels would require.
The 5-10% number is completely invented. I doubt it's half as high as 5%, but until and unless someone does the maths, there's no point in speculating.
The math has been done many times for solar panels on the roof of cars, and it's not worthwhile. Ships are not the same though.
At any rate, it's inevitably far more sensible to put a larger solar panel + battery installation at a fixed place on land, and charge vehicles from that.
The journey it makes is 90 minutes and it can charge for that journey in 8 minutes. Offloading and onloading the thousands of passengers (and 220 cars!) takes much longer than the 8 minutes for the battery to charge.
I wouldn't go that far. Not at hull speed. But a good fraction of it. The Silent 60, for example.
Full throttle you’ll be out of juice in a week. Hull speed maybe a month. Depending on wave conditions. But going, stopping, having lunch, enjoying the day, going again, enjoying tomorrow, you can be out there as long as you have provisions.
There is a big difference between mounting solar on your personal sailboat and installing it on a large commercial passenger ship. The regulations are totally different.
I was bored so I did the math and you are not correct. Even if you don't care about the people themselves, a normal citizen in an industrialized society like Israel has about 40 years of working life. Let's assume for simplicity that some rockets would hit children but others would hit retired people, on average hitting people when they're halfway through their career and would have 20 years of productive work left.
According to Wikipedia [1], Israel generates about 60 USD of GDP per hour worked, which at 40 hours per week, 50 weeks worked per year, over 20 years comes to about 40,000 hours of work and ~2.4 million USD of GDP generated. At an income tax of about 30% [2], that means an income for the state of about 720k USD equivalent. If the person dies due to a rocket attack, the state would miss out on that. Iron Dome interceptors are quite cheap compared to that, and the laser intercepts should be an order of magnitude cheaper still.
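Spelled out, the back-of-envelope arithmetic is just this (all inputs being the rough assumptions above, not official figures):

    gdp_per_hour = 60               # USD of GDP per hour worked (approx.)
    hours_per_year = 40 * 50        # 40 hours/week, 50 weeks/year
    years_remaining = 20            # assumed remaining working life
    tax_share = 0.30                # rough effective income tax rate

    hours = hours_per_year * years_remaining    # 40,000 hours
    gdp = hours * gdp_per_hour                  # ~2,400,000 USD
    state_income = gdp * tax_share              # ~720,000 USD
    print(hours, gdp, state_income)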
This doesn't even take into account the sunk costs that industrialized nations incur by every citizen having to attend school for about the first two decades of their lives, mostly funded by the state. That represents a tremendous investment into human capital that would be lost if you let your citizens get shot up in preventable rocket attacks.
So no, human lives are not actually cheap when viewed through the lens of a country, even when completely excluding morals and only looking at it financially. They are in fact quite valuable.
One life can cost as much as you calculated. However, if the attack will kill an unproductive (elderly, disabled or other) person then it could be a net gain instead of loss for the economy.
Perhaps, but while you can maybe predict where the rocket will fall, you cannot reliably predict who it might kill if it hits. People move around, and even if you can see it will hit a home for the elderly, you cannot see how many (grand)children are currently visiting. The opposite is also true: a rocket hitting a child care facility would cause double the economic damage. That is why I used an average in my previous post.
In any case, elderly and disabled are not as useless to the economy as you might suppose. There are many disabled who are economically productive. One of the most capable colleagues I've ever had was a blind programmer. Grandparents often provide things like babysitter services that don't show up in formal GDP measurements but are very valuable nonetheless. Don't count out the contribution of people to society just because they don't have a normal job.
Both sides are right. Life is cheap in many developing nations. My hope is that this tech could help governments in those regions to protect their citizens even when their GDP returns are significantly lower.
This essay, like so many others, mistakes the task of "building" software for the task of "writing" software. Anyone in the world can already get cheap, mass-produced software to do almost anything they want their computer to do. Compilers spit out a new build of any program on demand within seconds, and you can usually get both source code and pre-compiled copies over the internet. The "industrial process" (as TFA puts it) of production and distribution is already handled perfectly well by CI/CD systems and CDNs.
What software developers actually do is closer to the role of an architect in construction or a design engineer in manufacturing. They design new blueprints for the compilers to churn out. Like any design job, this needs some actual taste and insight into the particular circumstances. That has always been the difficult part of commercial software production and LLMs generally don't help with that.
It's like thinking the greatest barrier to producing the next great Russian literary novel is not speaking Russian. That is merely the first and easiest barrier, but after learning the language you are still no Tolstoy.
You're getting caught up on the technical meaning of terms rather than what the author actually wrote.
They're explicitly saying that most software will no longer be artisanal - a great literary novel - and will instead become industrialized - mass-produced paperback garbage. But they're also saying that good software, like good literature, will continue to exist.
Yes, I read the article. I still think it's incorrect. Most software (especially by usage) is already not artisanal. You get the exact same browser, database server and (whatsapp/signal/telegram/whatever) messenger client as basically everyone else. Those are churned out by the millions from a common blueprint and designed by teams and teams of highly skilled specialists using specialized tooling, not so different from the latest iPhone or car.
As such, the article's point fails right at the start when it tries to argue that software production is not already industrial. It is. But if you look at actual industrial design processes, their equivalent of "writing the code" is relatively small. Quality assurance, compliance with various legal requirements, balancing different requirements for the product at hand, endless meetings with customer representatives to figure out requirements in the first place: that is where most of the time goes, and those are exactly the places where LLMs are not very good. So the part that is already fast will get faster and the slow part will stay slow. That is not a recipe for revolutionary progress.
I think the author of the post envisions more code authoring automation, more generated code/tests/deployment, exponentially more. To the degree that what we have now would seem "quaint", as he says.
Your point that most software uses the same browsers, databases, tooling and internal libraries points to a weakness, a sameness that can be exploited by current AI to push that automation capability much further. Hell, why even bother with any of the generated code and infrastructure being "human readable" anymore? (Of course, there are all kinds of reasons that is bad, but just watch that "innovation" get a marketing push and take off. Which would only mean we'd need viewing software to make whatever was generated readable - as if anyone would read to understand hundreds of thousands of lines of generated complex anything.)
LLMs produce human readable output because they learn from human readable input. It's a feature. It allows it to be much less precise than byte code, for example, which wouldn't help at all.
There is a large mass of unwritten software. It would add value but it is too bespoke to already have an open source solution. Think about a non-profit organization working with proprietary file formats and databases. They will be able to generate automation tools that they could otherwise not afford. This will be repeated over and over. This is what I think the author is getting at.
> You get the exact same browser, database server and (whatsapp/signal/telegram/whatever) messenger client as basically everyone else.
Hey! I'm going to passionately defend my choice over a really minor difference. I mean do you see how that app does their hamburger menu?! It makes the app utterly unusable!
Maybe I'm exaggerating here, but I've heard things pretty close to this in "Chrome vs Firefox" and "Signal vs ..." threads. People are really passionate about tiny details. Or at least they think that's what they're passionate about.
Unfortunately I think what they don't realize is that passion often hinders that revolutionary progress you speak of. It just creates entrenched players and monopolies in domains where it should be near trivial to move (browsers are definitely trivial to jump ship)
> It just creates entrenched players and monopolies in domains where it should be near trivial to move (browsers are definitely trivial to jump ship)
I think this is understating the cost of jumping. Basically zero users care about the "technological" elements of their browser (e.g. the render engine, JS engine, video codecs) so long as it offers feature equivalence, but they do care a lot about comparatively "minor" UX elements (e.g. password manager, profile sync, cross-platform consistency, etc) which probably actually dominate their user interaction with the browser itself and thus understandably prove remarkably sticky ("minor" here is in terms of implementation complexity versus the rest of a browser).
Yeah, I think you're right. It's the little things that get people upset rather than the big things, weirdly enough. But I think people should have a bit more introspection. Are their complaints things they seriously care about, or justifications for their choices? Can they themselves tell the difference? It might seem obvious, but the easiest person to fool is yourself, and we're all experts at it.
I guess two things can be true at the same time. And I think AI will likely matter a lot more than detractors think, and nowhere near as much as enthusiasts think.
Perhaps a good analogy is the spreadsheet. It was a complete shift in the way that humans interacted with numbers. From accounting to engineering to home budgets - there are few people who haven't used a spreadsheet to "program" the computer at some point.
It's a fantastic tool, but has limits. It's also fair to say people use (abuse) spreadsheets far beyond those limits. It's a fantastic tool for accounting, but real accounting systems exist for a reason.
Similarly, AI will allow lots more people to "program" their computer. But making the programming task go away just exposes limitations in other parts of the "development" process.
To your analogy I don't think AI does mass-produced paperbacks. I think it is the equivalent of writing a novel for yourself. People don't sell spreadsheets, they use them. AI will allow people to write programs for themselves, just like digital cameras turned us all into photographers. But when we need it "done right" we'll still turn to people with honed skills.
I think existing skilled programmers are leveraging AI to increase productivity.
I think there are some people with limited, or no, programming experience who are vibe coding small apps out of nothing. But I think this is a tiny fraction of people. As much as the AI might write code, the tools used to do that, plus compile, distribute etc are still very developer focused.
Sure, one day my pastor might be able to download and install some complete environment which allows him to create something.
Maybe it'll design the database for him, plus install and maintain the local database server for him (or integrate with a cloud service.)
Maybe it'll get all the necessary database and program security right.
Maybe it'll integrate well with other systems, from email to text-import and export. Maybe that will all be maintainable as external services change.
Maybe it'll be able to do support when the printing stops working, or it all needs to be moved to a new machine.
Maybe this environment will be stable enough for the years and decades that the program will be used for. Maybe updating or adding to the program along the way won't break existing things.
Maybe it'll work so well it can be distributed to others.
All this without my pastor even needing to understand what a "variable" is.
That day may come. But, as well as it might or might not write code today, we're a long long way from this future. Mass producing software is a lot more than writing code.
We could have LLMs capable of doing all that for your pastor right now and it would still take time before these systems could effectively reason through troubleshooting this bespoke software. Right now the effectiveness of LLM-powered troubleshooting of software platforms relies upon the gravity induced by millions of programmers sharing experiences on more or less the same platforms: gigabytes to terabytes of text training data on all sorts of things that go bonkers on each platform.
We are now undergoing a Cambrian explosion of bespoke software vibe coded by a non-technical audience, and each one brings with it new sets of failure modes only found in their operational phase. And compared to the current state, effectively zero training data to guide their troubleshooting response.
Non-linearly increasing the surface area of software to debug, and inversely decreasing the training data to apply to that debugging activity will hopefully apply creative pressure upon AI research to come up with more powerful ways to debug all this code. As it stands now, I sure hope someone deep into AI research and praxis sees this and follows up with a comment here that prescribes the AI-assisted troubleshooting approach I’m missing that goes beyond “a more efficient Google and StackOverflow search”.
Also, the current approach is awesome for me to come up to speed on new applications of coding and new platforms I’m not familiar with. But for areas that I’m already fluent in and the areas my stakeholders especially want to see LLM-based amplification, either I’m doing something wrong or we’re just not yet good at troubleshooting legacy code with them. There is some uncanny valley of reasoning I’m unable to bridge so far with the stuff I’m already familiar with.
>All this without my pastor even needing to understand what a "variable" is.
Missing the point. The barrier to making software has been lowered substantially. This now makes mediocre devs less mediocre, and for a lot of businesses out there being slightly less mediocre is all they need most of the time. Needing decent devs only 20-40% of the time is already a big win in terms of expenses: make small, quick, mediocre software and later bring in a decent dev for a couple of months to clean it up, as opposed to paying and keeping that dev for several years to build the software from scratch.
Yes, it is not very efficient, but neither are those COBOL apps in old banks. It's always about being just good enough that it works, not beautifully crafted software that never breaks. The market can stay alive longer than you can keep a high-salary job as a very experienced dev when you are competing against 100 other similarly experienced devs for your job.
This was already true before LLMs. "Artisanal software" was never the norm. The tsunami of crap just got a bit bigger.
Unlike clothing, software always scaled. So, it's a bit wrongheaded to assume that the new economics would be more like the economics of clothing after mass production. An "artisanal" dress still only fits one person. "Artisanal" software has always served anywhere between zero people and millions.
LLMs are not the spinning jenny. They are not an industrial revolution, even if the stock market valuations assume that they are.
Agreed, software was always kind of mediocre. This is expected given the massive first mover advantage effect. Quality is irrelevant when speed to market is everything.
Unlike speed to market, quality doesn't manifest in an obvious way, but I've watched several companies lose significant market share because they didn't appreciate software quality.
“Garbage books” are mass-printed, but aren’t mass-written in a mass production sense. Mass production is about producing fairly exact copies of something that was designed once. The design part has always remained more artisanal than industrial. It’s only the production based on the design (or manuscript) that is industrial.
The difference with software is that software is design all the way down. It only needs to be written once, similar to how a mass-produced item needs only be designed once. The copying that corresponds to mass production is the deployment and execution of the software, not the writing of it.
The syntactic representation will become that. At the end of the day it's just math ops and state sync of memory and display. Even semantic objects like an OS's protected memory are a special case of access control that can be mathematically computed around. There is nothing important about special semantics.
The user experience will be less constrained as the self-arrangement of pixels improves and users no longer run into designer constraints, usually due to the lack of granularity some button widget or layout framework is capable of.
"Artisanal" software engineers probably never were their own self selected identity.
I have been writing code since the late 80s, when Windows and commercial Unix were too expensive and we all wrote shoddy but functional kernels. Who does that now? Most gigs these days are glue code to fetch/cache deps and template concrete config values for frameworks. Artisanal SaaS configuration is not artisanal software engineering.
And because software engineers were their own worst enemy over the last decade, living big as they ate others' jobs and industries, hate for the industry has gone mainstream. That is something politicians have to react to. Non-SWEs don't want to pay middle men to use their property. GenAI can get them to that place.
As an art teacher once said: making things for money is not the practice of a craft, it's just capitalism. Anyone building SaaS apps through contemporary methods is a Subway sandwich artist, not the old-timey well-rounded farmer and hunter who also bakes bread.
Isn't this already the case? Your company doesn't build its own word processor, they license it from Microsoft, or they pay Google for G Suite, or whatever. Great books are sold in paperback, after all.
What he's missing is that there's always been a market for custom-built software by non-professionals. For instance, spreadsheets. Back in the 1970s engineers and accountants and people like that wrote simple programs for programmable calculators. Today it's Python.
The most radical development in software tools, I think, would be more tools for non-professional programmers to program small tools that put their skills on wheels. I did a lot of biz dev around something that encompassed "low code/no code", but a revolution there involves smoothing out 5-10 obstacles of a definite Ashby character; if you fool yourself into thinking you can get away with ignoring the last 2 required requirements, you get just another Wix that people will laugh at. For now, AI coding doesn't have that much to offer the non-professional programmer, because a person without insight into the structure of programs, project management and a sense of what quality means will go in circles at best.
I think the thinking in the article is completely backwards about the economics. The point of software is that you can write it once and the cost to deploy a billion units is trivial in comparison. Sure, AI slop can put the "crap" in "app", but if you have any sense you don't go cruising the app store for trash; you find out about best-of-breed products or products that are the thin edge of a long wedge (like the McDonald's app, which is valuable because it has all the stores backing it).
> What software developers actually do is closer to the role of an architect in construction or a design engineer in manufacturing. They design new blueprints for the compilers to churn out. Like any design job, this needs some actual taste and insight into the particular circumstances. That has always been the difficult part of commercial software production and LLMs generally don't help with that.
As Bryan Cantrill commented (quoting Jeff Bonwick, co-creator of ZFS): code is both information about the machine and the machine:
Whereas an architect creates blueprints, which are information that gets constructed into a building or other physical object, and a design engineer likewise creates documents that are information that gets turned into machines, when a developer writes code they are generating information that acts like a machine.
Software has a duality of being both.
How does one code and not create a machine? Produce a general architecture in UML?
I think what Cantrill is getting at here is that a running program necessarily consists of both code and hardware. If the software is missing, the hardware will be idling. If the hardware is not present, then the software will be just bytes on a storage device. It's only the combination of hardware and software that makes a working system.
What software developers produce is not a machine by itself. It's at most a blueprint for a machine that can be actualized by combining it with specific hardware. But this is getting a bit too philosophical and off track: LLMs can help produce source code for a specific program faster, but they are not very good at determining whether a specific program should be built at all.
> I think what Cantrill is getting at here is that a running program necessarily consists of both code and hardware.
"The thing that is remarkable about it is that it has this property of being information—that we made it up—but it is also machine, and it has these engineered properties. And this is where software is unlikely anything we have ever done, and we're still grappling on that that means. What does it mean to have information that functions as machine? It's got this duality: you can see it as both."
It's not about software and hardware needing each other, but rather about the strange 'nature' of software.
He has made the point before:
> We suffer -- tremendously -- from a bias from traditional engineering that writing code is like digging a ditch: that it is a mundane activity best left to day labor -- and certainly beneath the Gentleman Engineer. This belief is profoundly wrong because software is not like a dam or a superhighway or a power plant: in software, the blueprints _are_ the thing; the abstraction _is_ the machine.
(Perhaps @bcantrill will notice this and comment.)
> If the hardware is not present, then the software will be just bytes on a storage device.
And what do you mean by "hardware" and what is meant by 'running software'? If you see a bunch of C or Python or assembly code, and you read through it, is it 'running' in your brain? Do you need 'real' CPUs or can you run software on stuff that is not made of silicon but the carbon between your ears?
Yes, the point I was making (and as you point out, have been making for the last quarter century) is that we err when not making this realization -- and indeed, I think the linked piece is exactly backwards because it doesn't understand this. That is, the piece views a world of LLM-authored/-assisted software as "industrialized" when I view it as the opposite of this: because software costs nothing to replicate (because the blueprints are the machine!), pre-LLM ("handcrafted") software is already tautologically industrialized. Lowering the barrier to entry of software with LLMs serves to allow for more bespoke software -- and it is, if anything, a kind of machine-assisted de-industrialization of software.
> Lowering the barrier to entry of software with LLMs serves to allow for more bespoke software -- and it is, if anything, a kind of machine-assisted de-industrialization of software.
Instead of people downloading / purchasing the same bits for a particular piece of software, which is cookie-cutter like a two-piece from Men's Suit Warehouse, we can ask an LLM for a custom bit of code: everyone getting a garment from Savile Row.
I've worked with a lot of people involved in the process who happily request their software get turned into spaghetti. Often because some business process "can't" be changed, but mostly because decision makers do not know / understand what they're asking for in the larger scheme of things.
A good engineer can help mitigate that, but only so much. So you end up with industrial sludge to some extent anyway if people in the process are not thoughtful.
> It's like thinking the greatest barrier to producing the next great Russian literary novel is not speaking Russian.
The article is very clearly not saying anything like that. It's saying the greatest barrier to making throwaway comments on Russian social media is not speaking Russian.
Roughly the entire article is about LLMs making it much cheaper to make low quality software. It's not about masterpieces.
And I think it's generally true of all forms of generative AI, what these things excel at the most is producing things that weren't valuable enough to produce before. Throwaway scripts for some task you'd just have done manually before is a really positive example that probably many here are familiar with.
But making stuff that wasn't worth making before isn't necessarily good! In some cases it is, but it really sucks if we get garbage blog posts and readmes and PRs flooding our communication channels because they're suddenly cheaper to produce than whatever minimal value someone gets out of foisting them on us.
Yeah, that's right. Although it's a bit of both. Vibe-coded stuff above a certain size is mostly low-quality code even if the software itself is still perfectly reasonable, simply because the code issues haven't stacked up enough to make it fall over yet; as it gets a bit bigger it becomes low-quality software as well.
> It's like thinking the greatest barrier to producing the next great Russian literary novel is not speaking Russian. That is merely the first and easiest barrier, but after learning the language you are still no Tolstoy.
And what do you feel is the role of universities? Certainly not just to learn the language right? I'm going through a computer engineering degree and sometimes I feel completely lost with an urge to give up on everything, even though I am still interested in technology.
As others have said, you're missing the author's point. The author is claiming that the act of writing software is getting industrialized by LLMs. LLMs will produce small, useful, but completely disposable programs that under the previous "artisanal" model would normally take me or another programmer an hour or so to write or debug. Or for something a bit more complicated, it can be vibe coded in 10 minutes, whereas it otherwise would have taken 10 hours to write and debug. You wouldn't want to use this sort of software extensively or for very long, just like you probably wouldn't frame a photo posted on social media. It might just be something to do some random task with your computer that is nontrivial that no other software tool does out of the box.
I'm kinda hoping that eventually each ractor will run in its own Ruby::Box and that each box will get garbage collected individually, so that you could have separate GCs per ractor, BEAM-style. That would allow them to truly run in parallel. One benefit should be to cut down p99 latency, since far fewer requests would be interrupted by garbage collection.
I'm not actually in need of this feature at the moment, but it would be cool and I think it fits very well with the idea of ractors as being completely separated from each other. The downside is of course that sharing objects between ractors would get slower as you'd need to copy the objects instead of just sharing the pointer, but I bet that for most applications that would be negligible. We could even make it so that on ractor creation you have to pass in a box for it to live in, with the default being either a new box or the box of the parent ractor.
They already truly run in parallel in Ruby 4.0. The overwhelming majority of contention points have been removed in the last year.
Ruby::Box wouldn't help reduce contention further; it actually makes it worse, because with Ruby::Box classes and modules have an extra indirection to go through.
The one remaining contention point is indeed garbage collection. There is a plan for Ractor local GC, but it wasn't sufficiently ready for Ruby 4.0.
I know they run truly parallel when they're doing work, but GC still stops the world, right?
Regarding the extra indirection in your second paragraph: I don't understand why that would be necessary. Can't you just have completely separate boxes with their own copies of all classes etc., or does that use too much memory? (Maybe some COW scheme might work, doodling project for the holidays acquired haha)
Anyway, very cool work and I hope it keeps improving! Thanks for 4.0 byroot!
Yes, Ractor local GC is the one feature that didn't make it into 4.0.
> Can't you just have completely separate boxes with their own copies of all classes etc, or does that use too much memory?
Ruby::Box is kinda complicated and still needs a lot of work, so it's unclear what the final implementation will look like. Right now there is no CoW or any type of sharing for most classes, except for core classes.
Core classes are the same object (pointer) across all boxes, however they have a constant and method table for each box.
But overall what I meant to say is that Box wouldn't make GC any easier for Ractors.