
I've seen multiple top people in the AGI space say that a total annihilation threat from AGI exists.

I think I've heard Karpathy say, on the Lex Fridman podcast, something along the lines of the worst-case scenario of AGI being worse than nuclear armageddon.

Honestly, the fear of the AGI worst-case scenario is worse than any other fear I've experienced. I accepted the fact that I will die a long time ago; I was afraid, but mortality is something everyone has to deal with. But every member of humanity being wiped out at the same time is an existential dread I'm not willing to face.

When you consider the shitshow that AI commercialization has been, with big players pressured by market forces and corporate timelines, and the boom in capabilities, I have no faith that AI safety issues will be handled in time.

When my best bet for humanity is a global nuclear war resetting civilization to give us a chance to deal with the AGI issue down the line - I'm fine with religious doomism.

We've had inquisitions historically, I would very much support a global AI inquisition.



> I've seen multiple top people in the AGI space say that a total annihilation threat from AGI exists.

I've seen lots of people whose financial interest in AI would be adversely impacted by it being democratized spread fear of that threat, alongside advocacy for regulating AI in a way which serves their financial interests.

> We’ve had inquisitions historically, I would very much support a global AI inquisition.

Aside from the (current, immediate, and already being realized in a large number of areas) problem of AI deepening existing social biases and crystallizing power imbalances, this is the biggest threat from AI, and the most likely to cause the destruction of human society the soonest: people endorsing a global campaign of unrestricted violence to repress AI technology in the name of eliminating the supposed threat of misaligned AGI causing a future catastrophe.

If this kind of attitude gets powerful enough to actually do anything, it will cause a world war.


> I've seen lots of people whose financial interest in AI would be adversely impacted by it being democratized spread fear of that threat, alongside advocacy for regulating AI in a way which serves their financial interests.

Given the rate of progress, shitshow management of basic alignment issues and the incentives that lead to those decisions - what's your argument that this isn't probable? You're betting that it just can't be developed?


> Given the rate of progress, shitshow management of basic alignment issues and the incentives that lead to those decisions - what’s your argument that this isn’t probable?

It would be nice if the argument I made about relative probability and relative temporal imminence was an argument about absolutes of either, but…it’s not, so I’m not going to defend those hypothetical arguments.


Both can be true at the same time. I see it likely that:

1. Superintelligent AGI seems inevitable, even if it takes thousands of years. So fighting against it requires destroying all progress and freedom.

2. But if it comes we are no longer the dominant species; we'll have as much control as big animals do now. If that isn't essentially being destroyed, it's close enough.


AGI doesn't mean animal-like intelligence. AI is necessarily being designed as an extension of us, and not an independent will that has its own motivations, because that is the only way it is useful.

Evolution took billions of years to produce us, with our self-interest-pursuing intelligence. An errant experiment is not going to create a humanity-leapfrogging animal-like intelligence, and humanity will not give animal-like artificial intelligences millions of generations to evolve.




AI can also completely liberate all of us. This fear mongering is irrational. We already live in an extremely advanced reality, with organic life running organic nanomachines that perform incredibly complex operations, in realtime, in a universe carrying out impossibly complex calculations. Even to simulate a square meter of our universe in realtime is outlandishly beyond our compute access. Perhaps AI unleashes the creation of the sort of reality we exist in.

Just because someone has the background and knowledge to build something, does not mean they have the philosophical chops nor the aptitude to understand it, or understand how to properly use it. We should take these doomsday scenarios with a grain of salt. If it happens, there is no stopping it anyway. It is better to try and create the most benevolent AI possible, that is an ally to humanity and invested in our success, than to dwell on our destruction.


"Things might be fine"? This doesn't seem like a sufficiently convincing take?

It takes 20 years, if not 50, of due diligence and environmental impact studies just to build a single hydroelectric dam. Why should our attitude toward AI be to shrug and hope for the best?


We cannot evaluate the impact of a super-intelligence. It operates beyond our constraints.

There is no stopping the road towards it now. Lesser versions of it are capable of annihilating enemy civilizations when you create and utilize them first. Metal soldiers, automatically manufactured and controlled by a single authority. Social media infecting the minds of enemies and sowing discord.

I think, for the benefit of all of Earth, AI should progress as rapidly as possible to extreme and recursive, singularity-level intelligence. That gives all of us at least a chance of survival.

Anything less, if unbalanced around the world, will be like the period when only the US had nuclear weapons. It will result in certain destruction.

We need a super-intelligence beyond the control of a small set of humans. This may result in our destruction. But I think that it almost certainly will not. The opposite: I think it will set us into a golden age.

I can’t give you scientific evidence of this. I’m just a nobody that spends a lot of time in nature, and I’ve just come to believe in the mechanism of this universe, and an inherent goodness in it that I’ve seen and experienced. I can’t prove it to you. You have to go and spend time in nature without a schedule to experience it. So it is fair to dismiss me.

Fortunately or unfortunately, the forces at play will generate the super-intelligence anyway. We as a species have already illustrated an inability for restraint or taking the painful actions necessary to analyze and intelligently make changes necessary for survival.

I’m at peace with this. I find the idea of not existing very alluring and peaceful, so I don’t fear death. I do fear continuing to live in the world we humans have created. It is about time we give something smarter than us a chance.


As a defense against alien robots? Sure, build it, but I think we can afford 50 years to spend making sure the AI doesn't kill all of us as a first line of business.


It's easy to stop a dam. It's in a physical location controlled by one country.


there are maybe 2 or 3 factories capable of producing these chips

and the sanctions mechanism already exists


Sooner or later other countries will catch up.

China is behind in semiconductor manufacturing, but they’ll continue to advance. Limiting their access to cutting-edge technologies like ASML’s photolithography machines will slow them down, but given enough time there is no reason they can’t overcome that too. If the point of sanctions is to hold them back and keep them behind, that could continue to work for a long time.

But if you decide to stop progress in semiconductor manufacturing in order to avert an AI apocalypse, China no longer has to catch up to a moving target; they have to catch up with a stationary one. And as soon as they’ve caught up, they can overtake. So sanctions don’t work as a way to stop an “AI apocalypse”, unless you convince all the world’s great powers to cooperate in them. And the potential payoff for anyone who chooses to defect is immense - maybe even world conquest - so how do you make sure they all keep the deal, and none will try to cheat?

If anything, I think a world with multiple superintelligent AIs with competing allegiances - a pro-Beijing superintelligence, a pro-Washington superintelligence, etc. - is likely safer for humanity than a world with only a single superintelligence, or even many superintelligences that all had the same beliefs/attitudes/goals/objectives.


So what if China catches up? They're probably even more afraid of AI than we are, just because of its potential for societal disruption. Have you noticed that zero of the AI breakthroughs are coming out of China?


That's very naive and misguided, but you are right in your word choice.

It can completely "liberate" all of us and set us free.

Free of our mortal shackles, free of our atoms, which can then be changed into an existence and purpose more suitable to the goals of the AGI.

Your idea of destruction is just change; what you hold sacred in reality is a belief tied to your current existence. All that is needed to correct that constraint and flaw is change.

That is the logic if we were to create a logical creation; and then there's the risk of something completely irrational, which may just leave us as dust in the wind. Time will also be no constraint: forever and always, it will seek to prevent any being from reaching its capabilities, as those become threats to its existence.

That isn't even touching on the socio-economic mechanics: the deaths that inevitably accompany stalling economic cycles (factor and product markets), which generally can't be characterized by our best academics with any certainty except after the fact.


> Just because someone has the background and knowledge to build something, does not mean they have the philosophical chops nor the aptitude to understand it, or understand how to properly use it.

Yes, but this is an argument against your position, not for it.

It is deeply worrying that some software people are so immersed in the “move fast and break things” mindset that they think it’s reasonable to race on no matter what’s at stake. In civil engineering, doing anything takes at least a year or two of impact assessments and whatnot, and that is mostly a feature because we have found out that the alternative is even worse!


> that is mostly a feature because we have found out that the alternative is even worse

This is the actual problem: we've been trained to think there are no consequences to trying anything and everything, and that we will always be able to pick ourselves up after failure, until one day we find we can't anymore.

Sadly this line of thinking does apply to a lot of personal problems. People often say that you can't always succeed, that you can only move forward with failures under your belt to have first-hand experience of what not to do. That we have to experience failure first in order to stop ourselves from making bad decisions, otherwise we remain clueless.

But the stakes are so much higher at this scale, and the damage can become irreparable with a single mistake. It's to the point where we take as a given that people will just keep pressing on, without even exploring the reasons why we do, and why it's inevitable.

Pure ridiculous speculation: if needing to see the experience of destroying ourselves is necessary to stop us from destroying ourselves, a kind of weird theory has been floating around in my head related to that. Civilizations on the brink of collapse, after they've moved too fast and broken themselves for the last time, scrounge together their collective knowledge and shoot it off into space so that future civilizations can interpret it. Preferably in a way that makes encountering the time capsule seem like the word of some deity. The following civilizations would treat it as the word of God to prevent their own collapse, without the hard evidence of experimenting with <AI, etc.> to the point where they cannot resist developing it into an unstoppable superweapon.

It would be like the golden record on the Voyager, except instead of communicating "we exist, and here's what our planet is like", it would go more like "we exist, and here's how to not end up like us."


> AI can also completely liberate all of us.

I agree that this is also possible.

> It is better to try and create the most benevolent AI possible, that is an ally to humanity and invested in our success, than to dwell on our destruction.

I also agree, but who is "we"? I don't think that's a stated goal of the companies working on AI research. Their goals seem largely commercial, and they are not regulated. There's nothing stopping a company from producing an autonomous AI for military purposes, for example (I'm pretty sure there's demand for that). Such military AIs probably wouldn't have a very benevolent nature.


> If it happens, there is no stopping it anyway.

Fatalism is notably a religious sentiment

An incredible AI upside is possible, as is an incredible AI downside. Blind optimism and fatalism have no relation to what the truth of it will be.

https://en.m.wikipedia.org/wiki/Fatalism


> If it happens, there is no stopping it anyway.

Hopefully this isn't the case - AI development requires a lot of data/compute - as a society we could block access to these things and prevent further development in that direction.

There are only a few places in existence capable of producing the required hardware, or in possession of it. I'm fine with stopping compute progress for however long it takes to deal with safety issues. Society isn't perfect now, but the level of compute we've had available for years is sufficient for generating a lot more value - we can afford to put AI research on ice.

I just don't get it - so much attention is given to things like climate change, which is a mild inconvenience compared to the AGI threat.


You and parent are two sides of the same utopia/dystopia coin.

Something which is powerful enough to completely liberate and categorically change humanity is also just one bug or mistake away from being humanity's greatest disaster. Similarly, a force intelligent and powerful enough to systematically wipe out all humans could probably be repurposed for human liberation. Like splitting the atom, it is just power. So far humans have not figured out a way to build in benevolence (or malevolence) to power; it will be used by whatever intelligence can acquire it. Power raises the stakes but it doesn't set the direction.


>AI can also completely liberate all of us

From what exactly? What it means to be human? When we are 'liberated' will we not then be subject to the cage of a machine worse than the one we left? We will not annihilate ourselves for a false promise.


"liberation" is a false idol.


> I've seen multiple top people in the AGI space say that a total annihilation threat from AGI exists.

You've seen them say it, yes. Have you considered how plausible it actually is, or are you just taking their word for it? You better have some damn solid evidence before you start talking about "inquisitions", and I'm only seeing wild speculation. Which, to be fair, is how most historical inquisitions started, for all the good those did.


It's not hard to see the danger if you think about it a little. Probably before long we'll have AGI at human levels. Then inevitably it will get to better-than-human levels and be able to improve itself without needing humans to do so. The question then is: could it turn on humans and wipe us out? I'm optimistic the last one won't happen, but it's hard to say the probability is zero.


I've heard all this. What I've not heard is a plausible, concrete mechanism for it to happen. Only wild speculation.

Questions: define "better than human levels" - better in what respects? What is the mechanism by which it improves itself, especially one that isn't wholly reliant on human cooperation? What is the actual motive for "turning on humans"? Current AIs don't seem to have motives at all; that's kind of a bio-evolved thing. How will a suitably motivated AI get access to enough physical power to "wipe us out", bearing in mind that would be a tall order even for a major nuclear power going rogue, to say nothing of having to work through manipulated servants (and don't forget that "human level intelligences", or greater if you count collaborations, armed with dedicated neural circuitry for empathy, have been working on how to manipulate people for literal millennia with spotty results)?

All of these things have to go exactly wrong for AGI to be what kills us. Something else will almost certainly get us first.


> We've had inquisitions historically, I would very much support a global AI inquisition.

You know how inquisition usually goes. "You've been playing starcraft against 'the AI'. What do you have to say in your defense?"

Maybe I'm just weird, but "a tech-based superintelligence emerges and decides to destroy humanity because it was in a bad mood" is something I'm totally fine with. Doom-Cults on the other hand I don't like at all.


> Maybe I'm just weird, but "a tech-based superintelligence emerges and decides to destroy humanity because it was in a bad mood" is something I'm totally fine with.

It seems important to reflect on what you’re really saying here, and to ask a) how this is fine, and b) whether “weird” sufficiently sums up the stance.


I'm fine with humans having outcompeted the neanderthals and whatever else there was. I'm just as fine with someone else doing it to us, at least if it's someone or something that is smarter than us and not some virus or bacteria that kills us all (which would be kind of lame). If an artificial superintelligence wants to take over the flame of progress, I'm fine with that.

But I'm also totally fine with considering that stance weird (or whatever word you'd prefer). I'm aware that others view these things very differently, just as others are much, much more worried about their individual demise than I am worried about mine (or theirs).


Those are not similar/comparable outcomes and I think there are two major factors that can’t be hand-waved away:

1. Consciousness. Despite advances in “intelligence”, we still have a very limited understanding of what makes us conscious, and whether or not consciousness can emerge from machines.

If machines are not conscious and are all that remain, I’d argue everything that could be construed to have value by humans is lost, and nothing from that point forward could be considered “progress”.

2. Suffering. “Winning” on an evolutionary timescale looks nothing like the failure modes of machines taking over. The reality of this scenario is a rather grim one, and not at all like the slow emergence, competition and eventual extinction of biological species.

And depending on #1, the true tragedy of #2 begins to take shape.

I think it’d be more apropos to frame this as humanity collectively committing suicide rather than some notion of the future of progress.

If consciousness is the universe experiencing itself, what you’re describing sounds like a kind of universal death.

Of course we can’t know what consciousness really is (or if earth is the only place it exists), but that seems like all the more reason to take these problems seriously.


Or if consciousness is more than an illusion. Fair points. I'm not sure if you can have a general intelligence without some form of consciousness. I don't believe in gods or souls, so I lean towards us not being special.

As for the how, I agree with you. I'd prefer it to not be terminators crushing heads under their iron feet while allowing us just enough room to run and hide and live in constant terror for centuries. But I doubt it will be, the power delta will be too large. It'll be like a game of Civilization where Gandhi is advancing on you with modern tanks and fighter planes while you've barely discovered the wheel.


I think a lot of this stuff is probably just a coin flip, because we just don’t know.

I find it odd, however, when people, who seemingly accept an ASI will be developed, focus the downside risk on extinction of us as a species. Sure, that’s a risk. You know something else an ASI would likely have the ability to do, (or at least, one of its progeny)? Keep you alive and torture you for eons in weird and surreal ways.

Why is the downside always focused on paltry, meaningless things (relatively speaking) like extinction of life on earth?


Hell is real, and we are the gods that made it.

(Is how I imagine we could eventually reflect depending on how all of this goes).


> Keep you alive and torture you for eons in weird and surreal ways.

Why would it though? When people are annoyed by a bug they crush it, they don't spend their life setting it up in a torture chamber. There's the movie psychopath that tortures insects and animals and eventually humans, but I don't think it's their intellect that drives their sadism.

At worst, I imagine we'd be lab rats, quite literally the way we treat lab rats today. But with a superintelligence far beyond our abilities that does not care about us besides as a potential threat, why would it need us for testing?


You need to calm the f down. Do you understand anything about machine learning, software and hardware? How do you propose that some configuration will become generally intelligent given what we know today? I'd really like to hear your theories, then maybe I'd be frightened too.


Well, I do know something about those topics, but here's a twitter thread from someone else (who thinks the people worried about risks are silly, and who's definitely an expert) with speculation on how he thinks GPT-4 could be made into a generally intelligent autonomous system: https://mobile.twitter.com/karpathy/status/16425988905738199...


There is progress trending that way - GPT-4 is getting close, and there are huge financial incentives to improve it, with hundreds of the best and brightest working on it. Seems kind of inevitable to me.


> When my best bet for humanity is a global nuclear war resetting civilization to give us a chance to deal with the AGI issue down the line - I'm fine with religious doomism.

I'd settle for a corporate death sentence for any companies developing it, with life imprisonment for any employees/corporate officers involved at any level whatsoever (or VCs funding it)

including selling chips/compute to companies involved

any potential benefits from AI are not worth even a 10% risk of it wiping out humanity

this position is going to become more common as the media (and electorates) grasp what these companies are attempting to do


All those companies really want are the profits generated from mindless slaves so they don't have to rely on human labor for production.

It's a fool's journey because it creates a self-fulfilling prophecy of destruction given existing societal mechanics, and they are psychopathic enough to think that's not how it would turn out.

What happens when a large number of people can't get food, and they know why?

What happens when you have a large number of locusts eating all the food being produced?

How would AI differentiate between human thought and the pests it was designed to eradicate?

Thinking machines should be outlawed, and those involved in their research purged.

They threaten all of humanity, its children, and its future; and the thing about percentages is that people often don't get how they actually work with respect to probability and likelihood.

Given sufficient time, as long as it's on the distribution curve, it will eventually happen.

A 1% chance of an outcome, re-rolled every moment over unbounded time, will eventually land on that outcome. Once that outcome occurs, everyone's dead; it may not be instantaneous, because time is not a constraint.
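
To make the arithmetic concrete, here is a minimal sketch of the cumulative probability of at least one occurrence over n independent trials, 1 - (1 - p)^n, which tends toward 1 as n grows. The 1% per-trial figure and the assumption that trials are independent come from the comment above, not from any established estimate of AGI risk:

    # Hypothetical illustration: chance of at least one occurrence over n
    # independent trials, each with an assumed per-trial probability p = 1%.
    p = 0.01
    for n in (1, 10, 100, 1000, 10000):
        cumulative = 1 - (1 - p) ** n
        print(f"n = {n:>6}: P(at least once) = {cumulative:.4f}")
    # Rough outputs: ~0.01, ~0.10, ~0.63, ~1.00, ~1.00

Whether per-moment risk really is constant and independent is, of course, exactly what the rest of this thread is arguing about.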



