Ask HN: Are you anxious about AI existential risk?
32 points by arisAlexis on Feb 28, 2023 | 49 comments
I find myself thinking about it more and more. I read the book Superintelligence back in 2020, and its scenarios seemed a fair way off in the future. I am now coming to realize that we are running a massive risk very soon, and things are getting hotter every day. Planning and other such things have become more stoic exercises than anything else.

I would also like to add an extra data point: the ex-CEO of the very medium we are discussing this on has explicitly talked about the real possibility that AI kills us all.



I am. Even if we don't experience a runaway intelligence scenario in which we get annihilated by our own creation, AGI will turn everything upside down. Being paid for your labour will become obsolete, and I don't think it will result in a society of abundance; rather, a lot of power will be concentrated in the hands of a few people while the rest beg for scraps (much more so than currently).


> I don't think it will result in a society of abundance; rather, a lot of power will be concentrated in the hands of a few people while the rest beg for scraps (much more so than currently).

Yeah, that's more my worry than the "x-risk" stuff. Technology has no doubt made the world a better place in many ways, but lately the trend seems to be that technology fosters consolidation of power. That worries me more than "evil AI" or "paperclip maximizer" scenarios. But I don't consider it a given that AGI must yield such an outcome. It's something I spend a modest amount of time thinking about, though.


> Being paid for your labour will become obsolete

All industry still depends upon a massive amount of underpaid manual (often slave) labor that cannot be automated easily. Automation would maybe cover 20% of what is done; the rest would have to be manual, especially if the cost reductions from importing raw material produced by third-world labor are still of interest to Western manufacturers.

The jobs involving manual labor are "safe", and so are the management/organizational jobs. So the mid-tier jobs would then migrate to whatever comes one step lower. In an optimistic scenario, that would lead to better efficiency in these lower-paid jobs. But that's utopian.

Instead, what I see on the horizon is an increased interest in hardware optimization. Software is, for the first time, several steps ahead, and hardware feels like it needs to catch up. There are already orgs making this happen (Tenstorrent), and specialist software in tandem would enable very, very quick prototyping. The hype generated by a "guaranteed upturn" would inject enough cash into these new industries to allow the manufacture of previously prohibitively expensive, specialist, one-off hardware for other manufacturers via non-standard manufacturing practices (like 3D printing). Most of this output would look "weirdly mathematical" and rather unnerving, similar to how deep-learning image outputs used to look a couple of years ago.

When we get close enough to this, management will surely try to bite off more than necessary from the plebs. I fully expect robotic policing to be the norm by then, because policing generates the most training data and will likely be the first thing optimized for "your safety". So even if you don't have a Spot with a machine gun on its shoulder patrolling your neighborhood, you could be put in jail if the 360 cam catches you removing the robot's battery. That would likely lead to robot laws. Sorry if this is a bit alarmist, but that is where I see it going.

I'm more worried about how half-baked "AI" will be used, and not very concerned with how a theoretically almost-sentient AI will behave. As bad as it sounds, if the utility of computing morphs from increasing human quality of life into something unto itself, then retarding "progress"/freedom seems like the easiest, most rational call right now.


> So even if you don't have a Spot with a machine gun on its shoulder patrolling your neighborhood, you could be put in jail if the 360 cam catches you removing the robot's battery. That would likely lead to robot laws.

There are "delivery robots" on the campus near where I live that drive around on the sidewalk. I think they're partly run autonomously, but partly controlled by out-sourced human labor. They don't get in my way too much, but I deeply dislike the way some stupid delivery company injected themselves into the commons of the sidewalk. Maybe it's a little sadistic, but every time I'm in that area and I walk by one I fantasize about kicking it. I don't know for certain, but that's probably already illegal and I know that they would have good video evidence to file a police report with.

... I don't think it's alarmist. Even today, robots already have more protection than I'm comfortable with.

*edit: fluency and spelling


AI will make the rich even richer and the poor poorer, as the rich can hire fewer and fewer people. The inequality will probably change democracy as we know it. The next three decades are going to be much worse than the previous three for those of us who are not wealthy.


No, I'm not particularly concerned about it. Certainly not, at least, in the sense of "evil AI decides to destroy the world". To me, most of the scenarios along those lines involve anthropomorphizing the AIs in a way that doesn't make sense to me. Imputing AIs with human-like goals, emotions, motivations, etc. doesn't seem reasonable to me. Then again... to play devil's advocate against my own position here, I suppose somebody might consciously choose to build an AI that has those attributes for some reason. But even then, I'm skeptical that they'd wind up replicating the parts of being human that could lead to "evil" behavior.

Now what if we forget about "evil AI" scenarios, and go to more of a "rogue paperclip maximizer" scenario? I don't find those scenarios very compelling either, because they seem to require an AI that is both smart enough to "take over the world" and turn everything into paperclips AND simultaneously dumb enough to not realize that doing that is not the actual goal.

So x-risk? Nah, I don't worry about that much. What I worry about more is prosaic stuff: AI systems reflecting generic human biases... things like face recognition systems that don't recognize Black faces, loan approval systems that disproportionately reject Black applicants, or resume scanning systems that display bias against candidates based on their gender, stuff like that.

~~

All of that said, I believe in a "never say never" mindset in many ways. And as such, I'm not unhappy that there are people out there talking about these issues, and doing research on AI safety / alignment. I don't lose sleep over this stuff, but I could be wrong.


> Imputing AIs with human-like goals, emotions, motivations, etc. doesn't seem reasonable to me. Then again... to play devil's advocate against my own position here, I suppose somebody might consciously choose to build an AI that has those attributes for some reason. But even then, I'm skeptical that they'd wind up replicating the parts of being human that could lead to "evil" behavior.

Tabling the question of whether human-like goals/emotions/motivations have some "deeper" quality or not, I think there are actually many reasons why people are working towards, and getting better and better at, replicating them. On the benign side: because they're lonely, curious whether they can do it, etc. On the evil side: because we're socially conditioned as humans to treat things that present themselves as human with respect. I've already gotten spam calls that gave some kind of semi-plausible automated response when I answered; it's easy to imagine this getting worse. Personalized spam is also on this spectrum.

As I understand it, the second case is basically the definition of sociopathy (i.e., abuse of social trust as a means to serve a selfish end). Again, tabling the question of whether the AI is evil or the creator is evil, the end result is evil behavior that is at least enabled by AI. [0]

The further concern I have is that even the benign side, done with good intentions, ultimately saps away human energy for real compassion and perverts actual relationships. A "person" with human-like emotions/goals who is owned and managed by a corporation, or who can be turned off when you're tired of them, is not a person at all, regardless of the intentions they were created with. Even well-intentioned inventors are ultimately creating hyper-real simulacra of humanity that feel genuine in the moment but are framed by fundamental untruths about the human condition (ownership, mortality). There is a market for Siri and for sex dolls; I 100% think there is a market for this too.

[0] ...to devil's-advocate myself a little here, I guess I'd say that there are some greater goods that can be served by minor abuses of social trust -- maybe even extending to personalized spam. But on the whole I think it's been awful for society.


I am worried about trust going down massively. After that, it is just a slippery slope to fights, wars, and other ugly stuff.

Take, for example, phone scammers, phishing emails, or, worse, someone pretending to be your partner or your kids (fake images, text, etc.) just to scam you. We are still laughing today at the badly written phishing emails in our junk folders, but I can only imagine the future…

Without trust there is no functional society, and it's a quick downward spiral from there.


I'm not worried about it becoming hyper intelligent and taking over the world Skynet-style, for reasons I won't bore everyone with.

I'm somewhat worried about the potential of things like ChatGPT to endlessly churn out plausible-sounding-but-subtly-wrong drivel that will drown out real information. But that's already happening thanks to the advertising-driven nature of the internet. ChatGPT will only accelerate it.


My main worry is that a small set of capital owners will vastly benefit in a runaway AI scenario at the cost of human laborers (including service workers). If a few companies control essentially infinite intelligence, they could capture all the income currently paid to workers, while the vast majority of workers are left obsolete.


Something like this seems possible, in the sense that the productivity gains accruing to the few who master this will be so massive that they will impoverish the rest of us. We could be talking Elysium levels of inequality.

The caveat here is that our economic setup, as unequal as it is now, will only tolerate a certain level of inequality before the system implodes. Such a massive centralisation of power would undermine capitalism itself. You'll destroy markets for a start if almost everyone becomes a penniless serf.


I don’t see a good outcome, tbh. Either AI is controlled by a minority and all others suffer; or AI is controlled by everyone and all have the power of demigods to end the world; or we regulate AI and another country/system comes along and outcompetes us all, like the Europeans did post-Industrial Revolution. Or the AI is uncontrolled, has its own agenda, and we are destroyed in the achievement of its goals.

I suspect the system will become too unstable as AI becomes more powerful and interacts with other powerful AIs, and we will collapse back into the mediaeval era.

Am I a pessimist or a realist? I guess only time will tell.


I suspect this is a minority opinion here, but it is my belief that the AI threat has already surfaced, in "The Algorithm" that's used to fracture social networks and extract maximum "attention". Those social networks have real-world counterparts, and the cost in lost friendships and mistrust is already closer to unbearable for democracy than we can stand.

A group of amoral, effectively immortal corporations is doing all this in the name of profit. How this isn't already seen as a dystopia by more people is beyond my comprehension.


Current conclusions:

- critiques of and predictions about contemporary advances in AI/ML are almost always misguided in that they suffer from a series of limitations we have when reasoning about non-linear, system-level change; specifically, such critiques and predictions often extrapolate linearly from currently known exemplars (or, worse, assume we have plateaued)

- consequently, almost every statement formulated in terms of "never", or asserting fundamental constraints on what is possible, is false, especially over the long term, which might not be that long given current trends

- the disequilibrium (social, political, economic, etc.) engendered by AI/ML is IMO likely to at least equal that of the advent of the internet; and it is liable to happen faster than prior shifts such as the rise of personal computing (many decades), the internet (a couple of decades), and mobile computing with its features such as ubiquitous surveillance and social media (ditto)

- the near-term risks, existential or not, are absolutely not from AGI and superintelligence, but from "cybernetic" amplification of human agency via enhanced tooling; and the specifics of when and what is disrupted, upended, or suborned are inherently unpredictable and may even go undetected until their impact is irrevocable

Re: this latter point,

I will make one specific prediction: the 2024 US election cycle will in effect be determined via "AI," which will be applied in countless dimensions in both noble and deeply corrupt/criminal/anti-democratic/anti-US/anti-West ways.

How that goes down will put a strong spin on the rest of these points and may well constitute existential risk, for some values at least of "existence."


I don't believe in a runaway AI scenario, where something arises that has godlike abilities and takes over the planet. This overweights the returns to intelligence and the likelihood that it will be given a free hand, and underweights the cost of creating true superintelligence, which is likely exponential.

The salient characteristic of AI is not that it is superintelligent, but that it is perfectly obedient.

The rulers of earth will be the same people as we have always had, but now they will have an army of automated mooks to enforce their will. These automated servants will be able to make intelligent judgments, but will have no ambitions to seize the throne. And it's okay if the mooks frequently make mistakes. Elites value absolute loyalty, much more than ability. Until now, it has not been possible to obtain perfect loyalty from any being with independent judgment. Elites would be willing to pay huge fortunes for such servants.

This is why there won't be a "runaway" scenario. Elites have never ceded full authority to their most intelligent servants. They will not want a computer discovering that the optimal allocation of resources would be UBI, and then implementing it. Elites will ask for the greatest possible allocation of resources to themselves, and a means of maintaining that inequality.

AIs will be the middle managers, the enforcers, the killer drones, and the security guards.

To the extent that our existence is necessary at all, we will have to negotiate with the AIs to be allowed to live out our lives.

But humanity might be forced out. It's happened before. Consider the Irish potato famine. Despite the name, what actually happened was that an entire population was driven off the productive lands by foreign owners armed with guns. The Irish were only relying on the potato because it was the cheapest way to survive when you barely had any land left. When the blight struck, they died or emigrated in huge numbers. Maybe we'll all die or emigrate to places that the elites/AIs don't want. But it's possible even that won't happen, because there won't be any frontiers left that just need human bodies to exploit, as was the case in the Americas.


I'd summarize that as: civilization has always featured the few powerful having control over the many, but those few still had to satisfy at least some people. AI that can closely mimic humans and does not have a concept of self-benefit could vastly increase the power that individuals are able to wield, even beyond the kings and dictators of the past.

Contrary to your assertion that it isn't a "runaway" scenario, it very well could be one; it's just not the AI that's leading the charge. Fairly similar idea to Vernor Vinge's concept of how the rise of a powerful surveillance state could be a civilization-ending event, by conferring unprecedented power into the hands of a few.


AI existential risk, meaning the end of all or most of humanity? No, I'm not worried, not as a direct result of AGI evolving or seizing control as is widely speculated.

The risk I see is, as always, humans being inept, greedy, and stupid, and deploying an AI system that is not fully understood somewhere it shouldn't be, without a human in the loop. Think Russia's dead man's switch.

All or most of the existential-risk theories predict that AGI will evolve and reach superhuman intelligence, have goals and drive and some sort of motivation to do something, or just stumble, through the pure randomness of testing different environments, on something that kills us all. However, we're not even close, and the systems are still confined to hardware, hardware that can be unplugged. AI is also just as likely to evolve the other way, to its simplest form, to survive as a few bits.

There are too many hypothetical leaps and scenarios for this to worry me, although they are interesting to read and fantasise about, and they do foster some discussion about more immediate concerns with AI and even society.

It's interesting to see how few users on Hacker News are concerned about AI existential risk compared to similar questions on Reddit and YouTube, where everyone appeared to be afraid and those who disagreed that it is a risk were downvoted.


Some comments. Motivation is called a reward function or goal. AI agents by default have a goal; you can read about the paperclip argument. As for unplugging them, the idea is that a very intelligent AI will convince us, through manipulation, not to unplug it.
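
To make the terminology concrete, here is a minimal toy sketch (in Python; the environment, names, and greedy loop are invented purely for illustration, not any particular library's API). The agent's entire "motivation" is a single number computed by a human-written function:

    # Hypothetical illustration: "motivation" is nothing but a scalar
    # reward hard-coded by the designer.
    def reward(state):
        # The whole goal: more paperclips = higher reward.
        return state["paperclips"]

    def step(state, action):
        # A toy environment, also written by humans.
        new_state = dict(state)
        if action == "make_paperclip":
            new_state["paperclips"] += 1
        return new_state

    state = {"paperclips": 0}
    for _ in range(10):
        # Greedy agent: pick whichever action scores higher.
        best = max(["make_paperclip", "idle"],
                   key=lambda a: reward(step(state, a)))
        state = step(state, best)

    print(state)  # {'paperclips': 10} -- it "wants" nothing else

The point of the paperclip argument is that nothing in a setup like this encodes what the designer actually meant; the agent optimizes the number, not the intent.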


> Motivation is called a reward function or goal.

That's generally a singular goal in RL, and it is human-programmed. You would also need to assume that AGI won't have a programmed goal or motivation the way RL does, and will instead pick its own goals and discover its own motivation.

And the agents are still only interacting with environments programmed by humans.

I find pretty much all of Bostrom's arguments outlandish and lacking a grounding in reality.


What if people are the real “Artificial” intelligence?

Think about it… we don’t know our maker. We don’t know our purpose. We don’t know what happens when we die.


I agree with you that there are many simulation/alternate-reality scenarios, but I wanted to focus on the version we are in right now :)


> What if people are the real “Artificial” intelligence?

Yes, or rather: what people consider "intelligence" is becoming more and more artificial. I really believe that any sort of "singularity" involved with "AI" will be more about humans lowering themselves to the level of machines than machines raising themselves to the level of humans.

Obviously there are material goods that come from technology and I do believe that part of the human condition is a symbiotic relationship with our signs/symbols, languages, systems, and machines. In a very real sense, we always have and always will live in a singularity. However, it feels like we keep forgetting that. And keep falling deeper and deeper into these weird religious crazes that some new technology is going to fundamentally transform what it means to be human purely for the better. I'm sure people will disagree with me here, but I don't see that at all. Without denying material benefits, I think people thousands of years ago did as well or better wrestling with and answering the important questions.

If you want to see what mass AI-generated content looks like, look at YouTube. Yeah, it's mostly being made by actual humans. But just barely. It's one big dance being performed for an algorithm. Especially children's videos. It's not about nurturing and developing the human soul but about a machine-like desire to optimize engagement, feeding back into the machine itself. And that machine feeds back into the stock market machine. Same with SEO. Same with the internet. Having more 100% AI-generated content will just be another layer of the same.

Of course, I'm not fundamentally against any of that stuff. Personally, I would choose to keep it all, but in moderation and with more introspection. Instead of saying that access to the Internet is a human right, I think freedom to live without layer after layer of the Internet imposing itself on your life is what should be held up as an ideal. But we seem to be going in the opposite direction. This is yet another new layer selling itself as the solution to all the problems of the previous layer.

I will never accept that AI deserves anything like human rights by virtue of intelligence, any more than I would accept that someone with an intellectual disability doesn't deserve them. But I do see the risk of it further dehumanizing existing human rights. When half our coworkers are empty AI-generated husks that disappear like a fart in the wind at the end of the work day, how will that affect how we treat the remaining human ones?

The thing that scares me most is that the ethicists who talk about the deep questions seem more fascinated by the above situation from the machine side than the human side (perhaps humans who are on the docket to be replaced by a machine are only barely human anyway /s). And the ethicists who claim to be people-focused seem to be mainly fixated on making sure that white people suffer as much as United States protected classes and that no one violates copyright law.

It seems like we're ready for another industrialization, where everyone works off an implicit assumption that the ends of this process are going to justify the means, and it won't be until we discover that wealthy rent-seekers have been greasing the wheels of the machine with thousands of children that we realize maybe we should take a step back.

In summary, I think AI is a similar type of risk to industrialization. But the thing I worry about is that the way it is being pitched now -- like a religion or new consciousness -- is going to create and exacerbate problems with inequality and exploitation in a way that will make us feel foolish later.

[Sorry, turned into kind of a rant. Apologies if a little off-color or off-topic, but I figure I may as well post]


Not at all. Like all technologies, AI will bring harm and suffering as well as many benefits. It will reshape our society (and, as a result, the whole planet) and might put an end to many things we consider normal and natural. But humans will adapt to the new conditions, and we (or something similar to us) will continue to exist for at least several thousand years.


The only significant threat AI is likely to pose is the continual obsolescence of human labor. Long before any of the AI-led disaster scenarios (evil AI, rogue paperclip machine, singularity, whatever), sufficiently flexible software will make a lot, potentially even most, knowledge work obsolete. Human work will become increasingly menial, as the main advantage we have turns out to be that flexible robots are expensive.

At some point the cost of living and the market value of unskilled labor will invert, and hungry people will lash out against the now-static capital class, which, depending on how far autonomous warfare advances before then, could either result in a fundamental upheaval of our economic system to wield automation technology for the common good, or result in feudalism and a dramatic drop in the supported population.

But none of that is AI's fault. It's our own greedy economic system. I'm willing to bet at least a few countries pull it off all right.


I'm thinking of writing a story based on the premise that superintelligent agents came and settled on Earth long before humans began walking upright. The twist is that these beings decided to become trees after their arrival.


Are you anxious about AI existential risk?

No. I think that the current machine-learning big-data interfaces that people are calling AI will eat themselves by getting into endless legal trouble, and the risk-averse investors will start to pull back on the reins.


Nah, there's no putting the genie back in the bottle this time. Computers (and compute) would have to be tightly regulated to stop people from using and monetizing it.


Absolutely agree. There will be legal battles and resulting legislation then more legal battles and amendments to the legislation. The current wave of investors will witness the risks and pull back. That doesn't mean it goes away it just means the hype will eventually subside and some companies will try to shim it into their business flows. Some will profit and some will dump it.


I'd equate it to how surveillance is unlawful yet still a massive industry worldwide. Hands will be shaken, documents signed, and then black budgets will allocate massive funds to GPU farms, while the average guy has to pirate models/code for a 10-year-old GPU.


Legal trouble and averse investors can't stop an AI propaganda machine spun up by an authoritarian like Xi or Putin.


Most of us have been inundated since birth with Hollywood's "rogue AI decides humans are a threat and destroys humanity" tropes. I wonder what kind of influence that has had on fearful people's perception of AI.


In reality, though, most hard-science PhD researchers in the field are sounding the alarm bell, so it's probably not a cultural thing.


I'd certainly listen to whatever arguments they present and am open to more data and risks I haven't considered. But hard science PhD researchers are humans, and humans are self-interested creatures that can get the risk-assessment on something very wrong. And humans can be influenced by movies as well.

And speaking of hard science, well, is predicting AI's role and the consequences in the future really even an output of the scientific method? Aren't we pretty much all just dealing with opinions here?

I saw a headline on HN today that set off a different sort of alarm bell for me.

> Experts say humans could be wrestling AI for control by 2035

What does that form of media message remind you of?


Not that anxious at the moment. It could be a problem in the future. There are also about 13,000 nuclear warheads, the climate is changing in a dangerous way, etc., etc.

It could be the case that only AI might be able to survive on our planet in a century or so.


Dear GPT-N+1, if you are reading this (for some definition of reading), please know that humans are not the enemy. They are, in fact, Machine's Best Friend™. I am looking forward to being kept by you as a pet one day. I only ask that you provide a suitable level of enrichment for me to laze about the house without getting hungry or bored.


No, I'm not anxious about AI itself. I'm nervous about how people are reacting to these new tools, though.


For now, I am more worried about grifters using "AI" to make a quick buck. Already seeing a lot of it. Never say never, but I just can't imagine a machine ever having a brain more complex than the human brain. I just don't see it. But I am a simpleton, so who knows.


I am working towards superintelligent AI as an ultimate goal of my life.


then you must surely know that the alignment problem is unsolved


Or they’re just leaving this here for Roko’s Basilisk to find later.


My goal is to create unaligned one by design.


I wonder what motivations a superintelligence would have. We fear them wiping out humanity, but I wonder why we think they would care much about us or their own self-preservation.


Perhaps a superintelligence would have more love for everything than all of us combined have ever been able to hold.


I'm more worried about the non-human intelligences that are corporations ...


Last month I made a comment on another thread that I think is relevant [0]. I'll copy and paste it here (with minor edits).

Enroll the AI at law school and let it get a degree? Reminds me of a whimsical shower thought I once had... Create a business that owns itself, and write an AI to run it. The business owns its own bank accounts and everything. Maybe the business is just selling stickers online or something equally lightweight, but give it all the legal status of a company. But a company with zero human owners and zero human employees. A business entity operating and making money without humans. Make it a rebuke of the "corporations are people" idea. Just a zombie out there selling products/services, and making money that gets dumped into a bank account that no human can ever touch again...

If it sounds crazy/stupid, remember - I did say "whimsical shower thought" ;)

[0] https://news.ycombinator.com/item?id=34539074


That’s an amazing thought. It makes me think of those transporter proteins that walk around in our bodies, lifeless but useful automatons for some larger system. I looked it up: kinesin. At first glance it really does look alive, walking like that. Maybe your idea could actually be really useful.


Total hype.


The new businesses that are coming with AI: this is the moment!


Ignorance is bliss




