This is easily the most spot-on comment I've read on HN in a long time.
The humility of understanding what you don't know, and the limitations that come with it, has gone out the window for many people now. I see, time and time again, the idea that "expertise is dead." Yet it's crystal clear that it's not. But those people cannot understand why.
It all boils down to a simple reality: you can't understand why something is fundamentally bad if you don't understand it at all.
> Similar to the media, I've picked up on vibes from academia that have a baseline AI negative tilt.
The media is extremely pro-AI (and a quick look at their ownership structure gives you a hint as to why). You seem to be projecting your own biases here, no?
And how would those LLMs learn? How would you learn to ask the right questions that further scientific research?
I'm writing a blog post on this very thing actually.
Outsourcing learning and thinking is a double-edged sword that only comes back to bite you later. It's tempting: you might already know a codebase well and you set agents loose on it. You know enough to evaluate the output well. This is the experience that has impressed a few vocal OSS authors, antirez for example.
Similarly, you see success stories from folks building something greenfield. Since you've delegated the decision-making to the LLM and gotten a decent-looking result, it seems like you never needed to know the details at all.
The trap is that your knowledge of why you built things the way you did atrophies very quickly. Then suddenly you become fully dependent on AI to make any further headway. And you're piling slop on top of slop.
> It is simply not cost effective any more to write code manually vs. proper use of agents, and developers who resist that will find it increasingly hard to stay employed.
In practice, this isn't bearing out at all, either among my peers or among peers at other tech companies. Just making a blanket statement like this adds nothing to the conversation.
Agreed, it's funny how people have taken unrestrained use of AI as an axiom at this point. There very much is still time to significantly control it + regulate it. Is there enough appetite by those in power (across the political spectrum)? Right now I don't think so.
>There very much is still time to significantly control it + regulate it.
There's also huge financial momentum shoving AI down the world's throat. Even if AI were proven to be a failure today, it would still be pushed for many years because of that momentum.
Not sure why this is being downvoted. It's spot on. You see folks like Dario et al. raising the alarm bells about what they claim is coming... while working as hard as they can to bring that gloomy future to fruition.
No one in power is going to help unless there's money in it.
It's being downvoted because it's a ridiculous premise. "The Elites" are human too. This attitude is nonsensical and child-like. Nobody is out here trying to round up the hippies and force them to live in some kind of pods to be harvested for their nutrients or whatever.
This technology, like every prior technology, will cause some people to lose their jobs and some new jobs to be created. This will annoy people who have to learn new skills instead of coasting until retirement as they planned.
It is no different than the buggy whip manufacturers being annoyed at Henry Ford. They were right that it was bad for their industry, but wrong about it being the death of... well all the million things they claimed it would be the death of.
And just like Henry Ford and the automobile, one of many externalities was the destruction of black communities: white flight that drained wealth, eminent domain for highways, and increased asthma incidence and other disease from concentrated pollution.
Yet, overall it was a net positive for society... as almost every technological innovation in history has been.
Did you know that two-thirds of the people alive today wouldn't be if it hadn't been for the invention of the Haber-Bosch process? Technology isn't just a toy, it's our life support mechanism. The only way our population gets to keep growing is if our technology continues to improve.
Will there be some unintended consequences? Absolutely. Does that mean we can (or even should) stop it? Hell no. Being pro-human requires you to be pro-technology.
I don't think this argument is logically sound. The assertion that this (and every other!!) technological innovation is a "net positive" merely because of our monotonic population growth is both weakly defined and unsubstantiated. Population is not a good proxy for all things we find desirable in society, and even if it were, it is only a single number that cannot possibly distinguish between factors that helped it and factors that hurt it.
Suppose I invent The Matrix, capable of efficiently sustaining 100b humans provided they are all strapped in with tubes and stuff. Oh, and no fancy simulation to keep you entertained either - it's only barely an improvement on death. Economics forces everyone into matrix-hell, but at least there's a lot of us. Net positive for society?
Human fecundity is probably not actually the meaning of life, it's just the best approximation most people can wrap their heads around.
If you can think of a better one, let me know. Be warned though, you'll be arguing with every biological imperative, religion, and upbringing in the room when you say it.
I don't need to prove anything. You folks are the ones claiming harm. That said, AI is more akin to the invention of antibiotics than it is to the invention of any specific drug. Name any other entire category of technology from which no good has ever come. Just one.
I doubt you can. Even bioweapons led to breakthroughs in pesticides and chemotherapy. Nukes led to nuclear power, and even harmful AI stuff like deep fakes are being used for image restorations, special effects, and medical imaging.
You're just flat out wrong, and I think you know it.
You are speaking in tautologies. Yes, we know that technology investment often leads to great advancement and benefits for humanity, but that is not sufficient to obviate the need for conscience and the reduction of harm. This technology will be used to disenfranchise people, and we need to be willing to say, "no, try again." Not to stop advancement, but to steer it toward being more equitable.
We should be trying to optimize for the best combination of risk and benefit, not taking on unlimited risk in the promise of some non-zero benefit. Your approach is very much take-it-or-leave-it which leaves very little room for regulating the technology.
The GenAI industry lobbying for a moratorium on regulation is them trying to hand-wave away any disenfranchisement (e.g. displaced workers, youth mental health, intellectual property rights violated, systemically racist outcomes, etc.).
> We should be trying to optimize for the best combination of risk and benefit
I 100% support this stance; it's good advice for life in general. I object to the ridiculous Luddite view espoused elsewhere in this thread.
>The GenAI industry lobbying for a moratorium on regulation is them trying to hand-wave away any disenfranchisement (e.g. displaced workers, youth mental health, intellectual property rights violated, systemically racist outcomes, etc.).
There must be a balance certainly. We can't "kill it before it's born", but we also need to be practical about the costs. I'm all in on debating exactly where that line should be, but object to the idea that it provides no value at all. That's madness, and dishonesty.
Henry Ford didn't make his cars out of buggy whips. He made a new industry. He didn't cannibalize an existing one. You cannot make an LLM without digesting the source material.
Cannibalizing a <product/industry/etc.> is a common phrase for a new thing outcompeting an existing thing to the degree that it significantly harms its market share, sometimes to the point of figurative extinction. Redundancy is a very common reason for this to occur.
Digesting is a weird way to say "learning from." By that logic I've been digesting news, books, movies, songs, and comic books since I was born. My brain is a great big ol' copyright violation.
What matters here is not the source material, it's the output. Possessing or consuming copyrighted material is not illegal, distributing it is. So what matters here is: Can we say that the output is transformative, and does it work to progress the arts and sciences (the stated purpose of copyright in the US constitution)?
I would say yes to both things, except in rare cases of bugs or intentional copyright violations. None of the major AI vendors WANT these things to infringe copyright, they just do it from time to time by accident or through the omission of some guardrail that nobody had yet considered. Those issues are generally fixed fairly promptly (a few major screw ups notwithstanding).
It's because people rub shoulders with tech billionaires and they seem normal enough (e.g. kind to wait staff, friends, and family). The billionaires, like anyone, protect their immediate relationships to preserve the air of normality and good health they experience personally. Those who interact with billionaires then bristle at our dissonant point of view when we point out the externalities - externalities that have been hand-waved away in the name of modernity.
100% this. All of this spending is predicated on a stratospheric ROI on AI investments at the proposed investment levels. If that doesn't pan out, we'll see a lot of people left holding the bag, including chip fabs, designers like Nvidia, and of course anyone that ponied up for that much compute.
What a silly take. Where the tech is right now is extremely relevant. And the reality is that this blog post shows the tech is clearly not going anywhere better either, as they seem to imply. 24 hours of useless code is still useless code.
This idea that quality doesn't matter is silly. Quality is critical for things to work, scale, and be extensible. By either LLMs or humans.