Hacker News | captain_coffee's comments

Some sort of AI crash / bubble bursting is expected, to be honest - now whether that will take the rest of the US economy down with it... debatable. Any strong opinions on this?

> now whether that will take the rest of the US economy down with it... debatable.

In the grand scheme of GDP, the US hasn't seen much growth in anything else this decade, all while massively increasing spending to prevent post-COVID recessions.

It certainly doesn't look good. But this was being set up for 30 years as we outsourced our strong manufacturing base to make the top brass richer in the short run. So I do think the house of cards falls if AI does.

The sad part is that we may have been able to weather the storm under the right leadership. But that sure isn't the leadership in the White House right now.


I'm not an expert, my knowledge is just from reading around a lot, but I think there are some stats that would suggest the US is particularly exposed:

- At points, AI investment has actually accounted for more spending than US consumer spending[0]; there's some debate on this[1], but if true it supports a narrative of the US being 'propped up' by AI investment.

- US GDP growth was strong last year, but behind quite a lot of other similar countries like the UK, Germany and Japan, which doesn't suggest a comparatively strong economy.

- The US is actively increasing its borrowing substantially (Big Beautiful Bill) while lowering its currency's value through trade wars and unpredictability (see the bond market). That reduces its ability to use its wealth to borrow its way out of a financial crash (like with the 2008 crash, or Covid).

This could be a little overblown and is hard to tell; the US is definitely an extremely wealthy country, even if it's comparatively less wealthy than a few years prior.

[0] https://fortune.com/2025/08/06/data-center-artificial-intell...

[1] https://www.cnbc.com/2026/01/26/ai-wasnt-the-biggest-engine-...


On the first point: it might be useful to separate investment flows from the rest of the US's economic activity.

AI investment is propping up capital flows and the GDP statistic, and is responsible for most of the gains on SPX, but it's still a small fraction of the economy.


Is there really a bubble though?

Most of the activity is with the same old big tech stocks, and the largest investment by far is not even market driven. Stargate is defense spending.

AI doesn't have to sway consumers, and it doesn't even have to work that well now or ever for governments to keep pumping money into it. The whole point of Stargate is to de-risk with reduced need for security clearances to handle big data (whistleblowing) and eventually get away from foreign tech. Also, there are a ton of businesses who have always done things on-premises for compliance and they can now cut costs by migrating to these government vetted data centers.

It genuinely shocks me how rarely anyone brings this up. It's been very loudly said by Trump and OpenAI since he took office, and it was going to happen regardless of who was elected.


What else does the economy consist of these days? It's pretty much already in a recession if you exclude the big AI companies.

Besides, basically every company has been desperately shoving AI into all their products. Throwing all of that out when the bubble pops won't be pretty.


I imagine deprecated AI features will be like the soft varnish surfaces of some 90s cars after 10 years - disgustingly sticky, shedding flakes left and right, and in hindsight an obviously stupid idea that wasn't tested sufficiently before pushing it on consumers.

Yes, the concentration of wealth led to the AI boom, and it's going to lead to the crash for sure. The AI boom was nothing but a crypto bubble. And since it's making up a large majority of the investment right now, I would say that's the only reason we didn't have a crash last year.

> Europe just became a lower-cost extension of Silicon Valley.

Pretty much spot on. The UK included in the above definition of Europe as well.


Me personally: absolutely not - and I fundamentally do not understand the need for something like this. I would never use such a tool under any possible circumstance knowing what I know about the current technology underpinning these clankers.

This feels on par with Microsoft's push to shove Copilot down everyone's throat at every possible step, whether we like/need it or not.


Could you give only a few examples from those PLENTY of other places that are better? Like the first 10 that come to your mind, you made me very curious!

Not the person you asked but I am thinking of France and Germany. France has Station F (https://en.wikipedia.org/wiki/Station_F).

Portugal, Spain and Romania are also rising contenders. I am also very dubious of the UK being "world first".


I was thinking the exact same thing: either a troll or someone who should be actively ignored.

So ... will the crash be bigger than the one that caused the Great Depression, or smaller? Any bets?

The Great Depression saw 25% unemployment and a third of farmers losing their land. Millions could die if it gets that bad today.

This sounds good to the "earth is overpopulated" crowd.

Not even close. The 2008 financial crisis is a better comparison. And even then, I think most of the negative effect will come from investors pulling money away from everything non-AI, rather than from OpenAI/Anthropic/Oracle crashing and burning.

Before 2030, especially if the frontier AI labs begin to IPO before then.

I'm really eager to see how Anthropic's IPO this year (if it even happens) will pan out.

> in 2026 devs write 90% of the code using natural language, through an LLM.

That is literally not true - is the author speaking about what he personally sees at his specific workplace(s)?

If 90% of the code at any given company is LLM-generated, that is either a doomed company or a company that doesn't write any relevant code to begin with.

I literally cannot imagine a serious company in which that is a viable scenario.


I can believe code being LLM-generated after the work is cut up into small slices that are carefully reviewed.

But having 20 copies of Claude Code running simultaneously and claiming the code works so well you don't need testers == high on your own supply.


Sadly, I'm seeing a LOT of this kind of usage. So much so that I know a couple of people who brag about how many they have running at the same time, pretty much all the time.

Including Steve Yegge, whose Gas Town orchestrator of LLMs is... wild. And complicated.

https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...


I think he smokes more weed than I ever did.

Now that I've read that link (well... a bit, I just couldn't at a certain point) I totally understand this comment.

> high on your own supply.

reminds me of a bar owner who died of liver failure. people said he himself was his best customer


It's insane to me seeing this kind of thing. I write 100% of my code by hand. Of the developers I know, they write >95% of their code by hand.

>We are entering an era where the Brain of the application (the orchestration of models, the decision-making) might remain in Python due to its rich AI ecosystem, but the Muscle, the API servers, the data ingestion pipelines, the sidecars, will inevitably move to Go and Rust. The friction of adopting these languages has collapsed, while the cost of not adopting them (in AWS bills and carbon footprint) is rising.

This is the most silicon valley brain thing I've seen for a while

We're entering an era where I continue to write applications in C++ like I've always done, because it's the right choice for the job, except I might evaluate AI as an autocomplete assistant at some point. Code quality and my understanding of that code remain high, which lets me deliver at a much faster pace than someone spamming LLM agent orchestration, and debuggability remains excellent.

90% of code written by devs is not written by AI. If this is true for you, try a job where you produce something of value instead of working at some random Silicon Valley startup.


That depends on how you define "doomed". Most screwed up companies don't go belly up overnight. They get sold as fixer-uppers and passed between bigger firms and given different names until, finally, it is sold for parts. The way this works is that all parties behave as if the company is the opposite of doomed. It's in a sense correct. The situation hardly seems doomed if everyone has enough time to make their money and split before the company's final death twitches cannot be denied, in which case the company accomplished its mission. That of course doesn't mean everything from its codebase to its leadership didn't lack excellence the whole time.

Yeah, I would say it's pretty variable, and it depends on what you mean by the word write.

I've recently joined a startup whose stack is Ruby on Rails + PostgreSQL. Whilst I've used PostgreSQL, and am extremely familiar with relational databases (especially SQL Server), I've never been a Rubyist - never written a line of Ruby until very recently in fact - and certainly don't know Rails, although the MVC architecture and the way projects are structured feels very comfortable.

We have what I'll describe as a prototype that I am in the process of reworking into a production app by fixing bugs, and making some pretty substantial functional improvements.

I would say, out of the gate, 90%+ of the code I'm merging is initially written by an LLM for which I'm writing prompts... because I don't know Ruby or Rails (although I'm picking them up fast), and rather than scratch my head and spend a lot of time going down a Google and Stackoverflow black hole, it's just easier to tell the LLM what I want. But, of course, I tell it what I want like the software engineer I am, so I keep it on a short leash where everything is quite tightly specified, including what I'm looking for in terms of structure and architectural concerns.

Then the code is fettled by me to a greater or lesser extent. Then I push and PR, and let Copilot review the code. Any good suggestions it makes I usually allow it to either commit directly or raise a PR for. I will often ask it to write automated tests for me. Once it's PRed everything, I then both review and test its code and, if it's good, merge into my PR, before running through our pipeline and merging everything.

Is this quicker?

Hmm.

It might not be quicker than an experienced Rails developer would make progress, but it's certainly a lot quicker than I - a very inexperienced Rails developer - would make progress unaided, and that's quite an important value-add in itself.

But yeah, if you look at it from a certain perspective, an LLM writes 90% of my code, but the reality is rather more nuanced, and so it's probably more like 50 - 70% that remains that way after I've got my grubby mitts on it.


This is exactly how I use AI as well in codebases and languages I’m not familiar with.

I’m a bit concerned we might be losing something without the google and stack overflow rabbit holes, and that’s the context surrounding the answer. Without digging through docs you don’t see what else is there. Without the comments on the SO answer you might miss some caveats.

So while I’m faster than I would have been, I can’t help but wonder if I’m actually stunting my learning curve and might end up slower in the long term.


Yeah, I had that concern as well, but I tend to find that both ChatGPT and Claude do a pretty good job of explaining the issues around the solutions I'm using them for, so I think - maybe not all - but a good portion of that learning is still happening. No doubt this is a result of the way I prompt them, because it does involve a lot of discussion, iteration, and back and forth.

Of course, you do have to know what you're doing well enough to spot a hallucination, and those do occur but, I mean, how often do you go down a blind alley or rethink your approach with coding, particularly with a new platform or framework, anyway? It doesn't feel like I'm wasting a huge amount of time here.


So let me get this straight - you vibe code, make what you consider as necessary changes to the LLM-generated code, create PRs that get to be reviewed by another AI tool (Copilot), potentially make changes based on Copilot's suggestions and at the end, when you are satisfied with that particular PR you merge it yourself without having any other human reviewing it and then continue to the next PR.

Did I get that right or did I miss anything?


Not quite: remember I said I fettled the code myself? This might involve rewriting, refactoring, reorganising, and I'll make changes to Copilot's PRs as well, but still the percentage of code written by LLM remains what I'd consider to be very high.

I've been an engineer for more than 25 years so I know how to architect an application from the highest level down to the fine details, and I can obviously code in a variety of languages. Even with Ruby and Rails I know it well enough to read and understand the code already. Plus I obviously know HTML and JavaScript, and have used various CSS frameworks and overlays. (Though the truth is I basically hate fussing with CSS [and its various proxies], and I'm very far indeed from the king of UX or design anyway, so having an LLM to assist with that is a godsend.)

The strict definition of vibe coding, as far as I've understood it, is that you don't touch or even really look at the code at all, and make all modifications, do debugging, etc., via prompting the LLM or using an agent. Vibe coding tends to break down in critical areas like security, or when the contours and wrinkles of the domain become too complex, so I'd never trust that approach for anything substantial or where security is a critical concern. For most organisations the data that I'm working with in our app is going to be second only to financial information in terms of sensitivity, and it's a complex domain, so the problems we're solving are simply not amenable to a pure vibe-coding approach.

Going back to the original point on how much code LLMs are writing: with other languages that I'm more familiar with, where it might be quicker for me to write the code to do what I need myself rather than write a spec for the LLM, and modify afterwards, you would probably see quite a different picture.


Yes OK, but that was not the point that I was trying to make.

You are developing in a language + framework that you are not familiar with, without having any human feedback in the process at all - see where I am going with this?

Even with a quarter of a century of experience, you are still in what can basically be referred to as uncharted territory.


Define what you mean by uncharted territory?

- I've never used a new language or framework before? Nope

- I've never used both a new language and a new framework at the same time? Nope

- I've never tried to do real work in a new language and new framework that I'm not particularly familiar with? Nope

- I've never shipped work I've done in a new language and new framework that I'm not particularly familiar with? Nope

What specifically do you think is so magic about Ruby and/or Rails that you think I won't have encountered something similar in some other form - in some other language or framework - before?

And do you not think that the reviewing and changes that I am making to the code, the prompting I'm doing, the decisions I'm making about which changes to integrate and which to ignore, etc., as a human being, count as feedback?

At the end of the day I'm building a web app, and soon enough an API, and the concerns in building this web app and API are exactly the same as the concerns I've had in building other web apps and APIs.

The only difference is the language and the framework, and I've learned new languages and frameworks before. And, although I'm using an LLM to help, I've been doing that for the past 3 years using different stacks and the only notable difference is that LLMs are quite a lot better at helping to develop software than they were 3 years ago.

So how is this uncharted territory?

---

And, as an aside, not an issue you've raised but one I commonly see on HN: people moaning about job ads demanding X years of experience in this or that framework or language (with the parody being that the desired framework or language has only existed for X years, or maybe not even that long), and how this isn't really relevant to whether an experienced software engineer can do the advertised job or not. I happen to agree with that perspective but here's the thing: you can't have it both ways. Either that's true or it's not true, and I'm operating (as I have done before and as you can probably tell) on the basis that it is true.


This seems like a really short-sighted view. 6 months from now you'll be much more inexperienced than if you just went through the initial struggle (with an LLM's help!)

I'm not sure what you mean by this. As I mentioned in another comment, both ChatGPT and Claude do a pretty good job of explaining around the solution we're working on - because of the way I'm prompting them, no doubt - and it's not as if I haven't also worked through both Ruby and Rails tutorials.

What I'd observe is that it's simply a different way of learning, but to me it seems like it's working at least decently well, and it means that I'm able to devote more time to thinking about and working on our roadmap and higher level issues without completely screwing my work/life balance - and, critically, time with family because the kids are growing up fast.


What does the struggle with an LLMs help look like?

"Explain this to me" until you're able to complete the task, instead of "do this for me"

If you break down the problem to small enough chunks, these basically become the same thing. The hardest part of new languages is the syntax and new APIs, so you end up getting the code anyway.

It seems like it may be true, but pointlessly true. I.e. yes, 90% of code is probably written by LLMs now - but that high number exists because there is such a gigantic volume of garbage being generated.

The problem is not coding (for me). The problem is thinking for a long time about what to code; the execution is merely a side effect of my thinking. The LLM has helped me execute faster. It's not a silver bullet, and I do review the outputs carefully. But I won't pretend it hasn't made me much more productive.

If there's a human in the loop, actually reading the plans and generated code, then it's possible to have 90% of my code generated by an LLM and maintain reasonable quality.

In my experience, most NodeJS shops do this. Because, on the surface, LLMs seem good at giving you a quick solution for JS code. Whether it's a real solution or patchwork is up for debate, but for most mid-level to junior devs it's good enough to get the job done. Now, multiply this workflow 10x for 10 employees. That's how you end up with a complete rewrite and hiring a senior consultant.

they just have to keep repeating it

the first 10 what? Years? It's actually not like that: https://www.government.nl/topics/income-tax/shortening-30-pe...

From 1 January 2024, expats who meet the conditions receive the following tax benefits:

- 30% tax free for the first 20 months;

- 20% tax free for the next 20 months;

- 10% tax free for the last 20 months.

So that's a tapered reduction over the first 5 years, and the amount of money that you gain after tax is between negligible and insultingly small.
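To put rough numbers on the taper, here's a minimal sketch. The tier percentages and 20-month periods come from the linked government page quoted above; the salary figure and the flat 49.5% marginal rate are assumptions of mine purely for illustration, not from the thread.

```python
# Rough sketch of the benefit from the tapered Dutch 30% ruling (2024 tiers
# quoted above). ASSUMPTIONS: a hypothetical flat gross salary and a flat
# 49.5% marginal rate on income that would otherwise be taxed.
MONTHS_PER_TIER = 20
TIERS = [0.30, 0.20, 0.10]   # tax-free fraction of salary per 20-month tier
MARGINAL_RATE = 0.495        # hypothetical flat marginal rate

def total_benefit(monthly_gross: float) -> float:
    """Extra net income over the full 60 months versus having no ruling."""
    return sum(
        monthly_gross * tier * MARGINAL_RATE * MONTHS_PER_TIER
        for tier in TIERS
    )

# Average tax-free fraction over the 5 years: (0.30 + 0.20 + 0.10) / 3 = 20%,
# i.e. the headline "30%" only applies for the first 20 months.
```

Under those assumptions a €6,000/month gross salary works out to roughly €35,640 extra net over 5 years - real numbers will differ with actual brackets and caps.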

Basically, in its current form "the Dutch 30% ruling" is not really worth it. If you want to move to the Netherlands, do it for other reasons; the advertisement of this mechanism feels borderline disingenuous in its current form.


I think it was like that some years ago. Now, as you said, it's really useless. 20 months is just enough time to find an apartment, furnish it and get used to the place.

Afterwards you have to pay some of the highest taxes in the world....


more like 50+ % less salary, just saying

I doubt Americans will even pick up the phone or respond to LinkedIn messages / emails when they see the budgets for software engineering roles in the EU.

I am saying that as a European, just to be clear.


Not everyone is optimizing for total comp. Some are optimizing for better lives. It's not a wild concept, considering how many people get pulled into startups, 90% of which fail, under the guise of "mission" and below-market comp. Do you pick a mostly assured better quality of life? Or an equity payout lottery ticket/fairy tale? Certainly, there is a minority of folks making wild comp at FAANG, but that is a privileged minority of total tech and IT workers.

> Some are optimizing for better lives

Of course. I just hope these people know that for example healthcare in Europe is by no means free.


It's not free, but it's much cheaper. (And yes, that includes taxation.)

https://www.oecd.org/en/data/indicators/health-spending.html

https://commons.wikimedia.org/wiki/File:OECD_health_expendit...

As a bonus, all that spending doesn't buy the US better outcomes.

https://ourworldindata.org/us-life-expectancy-low#life-expec...


My health insurance for a family of four in Spain is $2k/year. In the US, it was exceeding $25k/year with premiums, copays, deductibles, etc. While not free, it is accessible.

There was a time in my life we had to decide in the middle of the night if we could afford to take one of our children to the ER in the US when they were a newborn. I will never have that feeling in Europe, and that is priceless. Tax me more, I will happily contribute to a functioning governance system. I like taxes, with them I contribute to civilization. As an American, I am all in on Europe. It's not perfect, but the bar is in hell.

We Asked 300 People About Health Care Costs. The Numbers Are Shocking. - https://www.nytimes.com/2026/01/22/opinion/health-insurance-... | https://archive.today/MnYz9 - January 22nd, 2026


That article is just mind-blowing. My country's health service is far from perfect, but that is insanity.

This is a component of what those who can qualify for some sort of visa are fleeing. The economics are undeniable.

I have questions about this! Is there a way to get in touch with you?

I mean, the issue here is you're arguing on Hacker News. The vast majority of people on this site in the USA just don't have these issues. Health care is taken care of by the employer, and they are paid more.

Nobody in Europe thinks that healthcare just exists for free, but rather that it should be available to those who need it for free, and they are happy to pay for that via tax.

I think you're not quite understanding just how bad EU pay is for software. Frankly, with the $$ you're basically always going to come out ahead with the higher comp, especially since US software companies normally offer great healthcare and comparable vacation.

I know several folks who've migrated from US -> EU tech roles in the last few years. Yes, you earn less and pay (somewhat) more taxes. But if you have a few kids the difference in cost of education pretty much wipes out the difference, and some folks really value the stress reduction of a robust social safety net (layoff protections, healthcare coverage while unemployed, etc)

With a baby on the way, I'd seriously consider it for their lifetime benefits. Where does one begin looking?

I don't know about France, but here in Denmark you'd generally find tech jobs on LinkedIn.

If you have a decent amount of experience I don't think you'd be looking for very long.

But as stated by other commenters, the salaries are lower and the taxes higher.

What you get back is great worker protection, child care, free education and generally a feeling of safety for yourself and your family. We also have a democracy that offers more than two choices!

