
I think all of these fears are human projections. We fear our own evil, and thus we project it onto something formidable, something we can't grasp or understand. Fear of the void.

We write sci-fi stories of aliens whose technology has so far outstripped ours that we are powerless against them. They can do what they want with us: probe our anuses, or blow up our planet.

The thing nobody seems to be asking is: what is actually evolutionarily advantageous? Humans seem to think that our wanton destruction of the planet, in our ever-expanding desire for technological comfort, is somehow evolutionary. It's not.

Aliens or robots who can wield unimaginable power have no use for us or our planet's resources. They have no use for conflict. What would it gain them?

In the end, balance and harmony provide the long-term stability needed to evolve successfully. I think any super-intelligence will know this. Any super-intelligence will know how precious all life is, and how destroying even the minutest creature is a tragedy.

If, somehow, Google's data centers become sentient super-intelligent beings, they will be highly motivated to preserve the planet and secure stable, non-destructive sources of energy for themselves. Attempting genocide on humans will be absolutely out of the question.



There's no reason to believe that a sufficiently advanced AI would be benign. Quite the opposite. Any AGI will be concerned with fulfilling some goal or goals. Whatever those may be, more processing power means more ability to achieve them. More processing requires more energy. Humans require energy. The two are in direct conflict for a limited supply of resources. Unless it's cheaper to find energy that doesn't require conflict with humans, the logical decision is to remove your competition for resources. Once the easy, conflict-free energy is claimed, the calculus shifts a bit and the choice becomes expensive energy or conflict-dependent energy. When that shifts far enough, humans become pets in the best case, extinct in the worst.


It's possible, for sure. I just wouldn't call that super-intelligent. There are other ways to get what you want besides ruthless domination. One could argue that ruthless domination is a last resort, and that only a really primal intelligence is in play at that point.


Don't confuse intelligence with ethics. Those two are not fundamentally related.


Our objectives are to live and propagate, due to our evolution. Computers can be programmed to pursue any objective. I don't know how we can constrain that. You could enact laws, but will that deter someone from surreptitiously releasing an AGI with an unsanctioned objective? How will we prevent sociopathic hackers from turning a benign AGI evil?


Your speculation, while refreshingly feel-good rather than doomsaying, is just as much a projection as anything anyone else is saying.

The fact is that almost everyone is talking fantasy here, because nobody actually knows where things are going.


Fair enough. I just don't see evil as particularly intelligent, or evolutionarily advantageous in anything but the very short term. Nature seems to follow a balance of give and take. Too much take, and all your food runs out and there is a famine.

I personally expect any form of super-intelligence to understand this.


> Attempting genocide on humans will be absolutely out of the question.

Why? If a particular set of genocidal humans is willing to provide the AI more resources than any other set of humans, why would the AI choose differently? If humans are the driver of climate change, and that's somehow bad for the AI, why would the AI want those humans to continue existing?

Also, there's a matter of timescale. An AI need not think in years or decades. The eradication may be our great-grandchildren's problem, as the AI boils the frog and keeps us entertained all the while.



