> If only climate scientists were given .01% of the credence these buffoons get.
Climate scientists are extrapolating complex models of a system that's literally planet-sized, with a lot of chaotic elements, and where it takes decades before you can distinguish real changes from noise. And, while their extrapolations are plausible, many of them are highly sensitive to parameters we only have approximate measurements for. Finally, the threat itself is quite abstract, multifaceted, and set to play out over decades - as are any mitigation methods proposed.
The "buffoons", on the other hand, are extrapolating from well-established mathematical and CS theorems, using clear logic and common sense, both of which point to the same conclusions. Moreover, the last few years - and especially the last few months - provide ample and direct evidence that their overall extrapolations are on point. The threat itself is rather easy to imagine, even if through anthropomorphism, and set to play out near-instantly. There are no known workable solutions.
It's not hard to see why the latter group has it easier - at least now. A year ago, it was they who got .01% of the credence the climate people got. But now they have a proof of concept, one that everyone can play with, for free, to see that it's real.
The buffoons are definitely talking about a scary beast that's easy to imagine, because we've been watching it in Terminator movies for forty years. But this is not that beast.
The beast here is humanity, and capitalism - by which I mean the idea that you can collect money without working, and that this is at all ethically permissible. The threat of AI is what is happening with kids' books on the Kindle platform, where a deluge of ChatGPT-generated kids' books is gaming the algorithm and filling kids' heads with the inane doggerel this thing spits out, which people seem to believe passes for "writing".
And people keep saying how amazing the writing is. Show me some writing by an AI that a kindergartener couldn't do better. What they produce is not writing; it's a simulacrum of the form of a story, but there is nothing in it that constitutes art - just an assemblage of plagiarized structures and sequences. A Mad Lib.
Everyone is freaking out, and the people who should be calming folks down and pushing for a rational distribution of this new tool - which will be extremely useful for some things, eventually - are abdicating their responsibility in hopes of lots of money landing in their bank accounts.
When silent movies came out, there were people who freaked out and couldn't handle seeing pictures move, even though the pictures weren't actually moving. It was an illusion of movement. This is an illusion of AI. It's just a parlor trick, like a Victorian séance where your grandpa banged on the table: scary, because they set the whole scenario up so you would only look at the stuff they wanted you to see. We still spend months assembling a single shot of a movie, and even if AI starts doing some of that work, all that work still has to happen; the pictures still don't move. A hundred years from now, what you're all freaking out about still won't be intelligent.
I do agree this is a world-changing technology, but not in the way they're telling you it is, and the only body I see approaching this with even a shred of rational thinking is the EU parliament. The danger is what people will do with it; the fact is, it's out, and it's not going back in the bottle.
We don't solve this by building a moat around a private corporation and attempting to pitchfork all the AI into the castle. Using this technology requires two things: a bit of Python, and a LOT of compute capacity. The first is the actual hard part. The second is in theory easier for a capitalist to muster, but we can get it in other ways, without handing control of our society to private equity. It's time we got straight on that.
The AI apocalypse only happens if we cling to capitalism as the organizing principle of our society. What this is definitely going to kill is capitalism, because capitalists are already using it to take huge bites of the meat on our limbs. Ever seen a baboon eat lunch? That's us right now, the baboon's lunch. As long as we tolerate this idea that people who have money should be able to do whatever they want, yes, AI will kill us (edit: because it works for free, however absurdly badly).
How many submarines, how many Martin Shkrelis, before we recognize the real threat?