It wouldn't have to want to kill everyone. As long as it doesn't want to not kill everyone, the side effects of it getting what it wants could be catastrophic.
> and we don't notice
How well do we understand what's going on inside ChatGPT? How well will we understand the next one?
> and forget to shut it off
Earlier I would have argued that sufficiently advanced AI could prevent itself from being shut off via Things You Didn't Expect, and would instrumentally want to preserve its existence. But these days, people are giving ChatGPT not just internet access but even actively handing it control over various processes. At this rate, the first superhuman AI will face not an impermeable box but a million conveniently labeled levers!
> Earlier I would have argued that sufficiently advanced AI could prevent itself from being shut off via Things You Didn't Expect
There's a good argument along these lines that I keep reposting whenever someone asks why we can't just shut the AI off. "All you gotta do is push a button, sir?"