> He’s said that he sees nuclear energy as one of the best ways to solve the problem of growing demand for AI, and the energy that powers the technology, without relying on fossil fuels.
Of all the things a scarce planetary resource could be spent on.
I'm an AI pessimist, but even I see the value in the current large language models. If we can do it for the price of a few cracked atoms, that would be better.
I think what would make LLMs much more viable is a stronger sense of credibility. They've developed a reputation for being very confident generators of bullshit, and that's completely fatal in compliance-oriented environments.
If I ask Zesty Autocorrect to generate a snippet for me, but then I have to carefully inspect and review it to make sure it actually does what I wanted, I may as well spend the same amount of time just building what I wanted in the first place. Trading off 30 minutes of coding and testing for 45 minutes of code review and testing isn't a win.
I will admit to having found a use case for an AI model; I spend a fair bit of time using Stable Diffusion locally trying to roll some new desktop wallpaper, but if my anime-style vampire prince has a hallucinated third nipple, nobody's going to die or go bankrupt.
I'm an AI optimist, but LLMs are still too primitive, and burning resources on them this early is a waste. Imagine nuclear-powered ENIACs. The research should continue; Close AI hasn't reached the pinnacle of AI.