Publishing early work feels pointless until you look back and realise the later stuff couldn't exist without it. Same goes for any expressive work. Sounds like a platitude, but it really is all about the process.
It's related to survivorship bias or whichever: successful writers have usually been writing for years already, but you, the potential writer, only discover them once they're established. Few people will actually have followed them as they progressed through the years.
Anyway, point is, you read a good post whose goodness was in part due to the thousand posts before it, think "I need to be as good as this", and you'll fail. I'm sure there's a word for that too.
Agree with this. Constraining generation with physics, legality, or even tooling limits turns the model into a search-and-validate engine instead of a word predictor. Closer to program synthesis.
The real value is upstream: defining a problem space so well that the model is boxed into generating something usable.
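A minimal sketch of that generate-and-validate framing, purely illustrative: a toy generator stands in for the model, and a hard arithmetic constraint stands in for the physics/legality/tooling check.

```python
# Generate-and-validate loop: the "search-and-validate engine" framing.
# toy_generator() is a stand-in for an LLM call; validate() encodes the
# hard constraint (here, a toy arithmetic spec, not a real system).
import random
from typing import Optional

def toy_generator() -> str:
    """Stand-in for the model: proposes a candidate arithmetic expression."""
    ops = ["+", "-", "*"]
    terms = [str(random.randint(1, 9)) for _ in range(3)]
    return f"{terms[0]} {random.choice(ops)} {terms[1]} {random.choice(ops)} {terms[2]}"

def validate(candidate: str, target: int) -> bool:
    """Hard constraint: the candidate must evaluate to the target value."""
    try:
        # Real validators would be linters, type checkers, simulators, legal rules...
        return eval(candidate) == target
    except Exception:
        return False

def search_and_validate(target: int, budget: int = 10_000) -> Optional[str]:
    """Sample candidates until one passes validation or the budget runs out."""
    for _ in range(budget):
        candidate = toy_generator()
        if validate(candidate, target):
            return candidate
    return None

if __name__ == "__main__":
    print(search_and_validate(24))  # e.g. "4 * 8 - 8"
```

The generator is deliberately dumb; the point is that the validator, i.e. the well-defined problem space, is what turns random proposals into usable output.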
Shift feels real. LLMs don't replace devs, but they do compress the value curve. The top 10% get even more leverage, and the bottom 50% become harder to justify.
What worries me isn't layoffs but that entry-level roles become rare, and juniors stop building real intuition because the LLM handles all the hard thinking.
You get surface-level productivity but long-term skill rot.
> juniors stop building real intuition because the LLM handles all the hard thinking. You get surface-level productivity but long-term skill rot.
This was a real problem pre-LLM anyway. A popular article from 2012, How Developers Stop Learning[0], coined the term "expert beginner" for developers who displayed moderate competency at typical workflows, e.g. getting a feature to work, without a deeper understanding of lower levels, or a wider high-level view.
Ultimately most developers don't care; they want to collect a paycheck and go home. LLMs don't change this; the dev who randomly adds StackOverflow snippets to "fix" a crash without understanding the root cause was never going to gain a deeper understanding, the same way the dev who blindly copy&pastes from an LLM won't either.
> Ultimately most developers don't care; they want to collect a paycheck and go home. LLMs don't change this; the dev who randomly adds StackOverflow snippets to "fix" a crash without understanding the root cause was never going to gain a deeper understanding, the same way the dev who blindly copy&pastes from an LLM won't either.
I read this appraisal of what "most devs" want/care about on HN frequently. Is there actually any evidence to back this up? e.g. broad surveys where most devs say they're just in it for the paycheck and don't care about the quality of their work?
To argue against myself: modern commercial software is largely a dumpster fire, so there could well be truth to the idea!
> I read this appraisal of what "most devs" want/care about on HN frequently. Is there actually any evidence to back this up? e.g. broad surveys where most devs say they're just in it for the paycheck and don't care about the quality of their work?
Almost every field I've ever seen is like that. Most people don't know what they're doing and hate their jobs, whatever the field. We've managed to make even the conceptually most fulfilling jobs awful (teaching, medicine, etc.).
You could say the same sort of thing about compilers, or higher-level languages versus lower-level languages.
That's not to say that you're wrong. Most people who use those things don't have a very good idea of what's going on in the next layer down. But it's not new.
Complex technology --> moat --> barrier to entry --> regulatory capture --> monopoly == winner take all --> capital consolidation
A tale as old as time. It's a shame we can't seem to remember this lesson repeating itself over and over and over again every 20-30-50 years. Probably because the winners keep throwing billions at capitalist supply-side propaganda.
Cursor's doc indexing is actually one of the few AI coding features that feels like it saves time. Embedding full doc sites, deduping nav/header junk, then letting me reference @docs inline genuinely improves context grounding instead of the model guessing at APIs.
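Roughly the kind of pipeline that implies, sketched below; this is not Cursor's actual implementation. Bag-of-words cosine similarity stands in for a real embedding model, and NAV_JUNK is a made-up boilerplate filter.

```python
# Doc indexing sketch: strip nav/header junk, index pages, pull the closest
# ones back into context on an @docs-style reference. Illustrative only.
import math
import re
from collections import Counter

NAV_JUNK = re.compile(r"^(Home|Docs|Search|Sign in|Table of contents)\b", re.I)

def clean(page: str) -> str:
    """Drop nav/header-style lines before indexing."""
    return "\n".join(line for line in page.splitlines()
                     if not NAV_JUNK.match(line.strip()))

def embed(text: str) -> Counter:
    """Stand-in embedding: lowercase word counts."""
    return Counter(re.findall(r"[a-z0-9_]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class DocIndex:
    def __init__(self):
        self.pages = []  # (url, cleaned_text, vector)

    def add(self, url: str, raw_page: str) -> None:
        text = clean(raw_page)
        self.pages.append((url, text, embed(text)))

    def lookup(self, query: str, k: int = 3):
        """Return the k closest pages to splice into the model's context."""
        qv = embed(query)
        ranked = sorted(self.pages, key=lambda p: cosine(qv, p[2]), reverse=True)
        return [(url, text) for url, text, _ in ranked[:k]]

# Hypothetical usage:
index = DocIndex()
index.add("https://example.com/api",
          "Docs\nSearch\nThe retry() helper accepts max_attempts and backoff.")
print(index.lookup("how do I configure retry backoff?"))
```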
LLMs shift the bottleneck: it becomes less about typing code and more about spotting when something's subtly wrong. You still need real judgment, just applied to different layers. The skills that atrophy are the surface-level ones. The deeper ones (debugging, systems thinking, knowing what not to trust) become more important.
I don't think the limit is in what LLMs can evaluate; given the right context, they're good at assessing quality. The problem is what actually gets retrieved and surfaced in the first place. If the upstream search doesn't rank high-quality or relevant material well, the LLM never sees it. It's not a judgment problem so much as a selection problem.
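A toy illustration of that split: the judge (stand-in for the LLM) only ever scores what the cheap first-stage retriever surfaced, so upstream recall bounds downstream quality. The documents and scoring here are invented for the example.

```python
# Selection vs. judgment: a strong judge can't rescue a weak retriever.
def keyword_retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """First stage: rank by naive keyword overlap and keep the top k."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def judge(doc: str) -> float:
    """Stand-in for the LLM's quality assessment (purely illustrative)."""
    return len(doc)  # pretend longer == more thorough

docs = [
    "quick note on retries",
    "retries and backoff: a quick note",
    "an in-depth treatment of idempotency, exponential backoff and jitter",  # best, but shares few keywords
]
pool = keyword_retrieve("quick note retries", docs)
best = max(pool, key=judge)
# The in-depth doc never reaches the judge because the retriever didn't
# surface it: the failure is selection, not judgment.
print(best)
```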
Pretty cool.
However, truly reliable, scalable LLM systems will need structured, modular architectures, not just brute-force long prompts. Think agent architectures with memory, state, and tool abstractions, not just bigger and bigger context windows.
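A hand-wavy sketch of that shape, assuming nothing about any particular framework: explicit memory, state, and a tool registry, with only a relevant slice of memory ever entering the prompt. call_model is a stub for whatever backend you'd actually use.

```python
# Modular agent sketch: memory, state, and tools as first-class pieces
# instead of one giant prompt. Everything here is illustrative.
from dataclasses import dataclass, field
from typing import Callable

def call_model(prompt: str) -> str:
    """Placeholder for an actual LLM call."""
    return "tool:search weather in Berlin"

@dataclass
class Agent:
    tools: dict[str, Callable[[str], str]]           # named capabilities, not prompt text
    memory: list[str] = field(default_factory=list)  # durable context, selected as needed
    state: dict = field(default_factory=dict)        # explicit task state, not implicit prompt state

    def step(self, user_input: str) -> str:
        # Only the relevant slice of memory goes into the prompt.
        context = "\n".join(self.memory[-5:])
        decision = call_model(f"{context}\nUser: {user_input}")
        if decision.startswith("tool:"):
            name, _, arg = decision[5:].partition(" ")
            result = self.tools.get(name, lambda a: f"unknown tool {name}")(arg)
            self.memory.append(f"{name}({arg}) -> {result}")
            return result
        self.memory.append(decision)
        return decision

agent = Agent(tools={"search": lambda q: f"stub results for {q!r}"})
print(agent.step("what's the weather in Berlin?"))
```

The point of the structure is that memory selection, tool dispatch, and state tracking can each be tested and swapped independently, which a single ever-growing prompt can't give you.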