
> I don't see a logical reason why artificial emotional "beings" would favor the future of purely technological "beings" over humans/cyborgs - they/it don't have the same evolutionary drive to advance their own species that natural life, including humans, has.

Wouldn't that depend on how these "artificial beings" are designed in the first place? They might not favor a robot-dominated world, but they won't necessarily coexist peacefully with human beings, either. Whether they serve everyone's interests or need to have their rogue asses kicked by Will Smith ultimately depends on what kind of intelligence and emotions are programmed into them. Even if they are expected to learn on their own, the design of the learning algorithm (as well as the stimuli they're initially exposed to) will have a significant impact on the outcome.

In particular, human philosophical conjectures will inevitably make their way into the design of artificial beings. From abstract concepts in metaphysics and epistemology to the most practical parts of value theory, human philosophy pervades every "intuitive" assumption we make on a day-to-day basis. But I have yet to see a philosophical position that would remain safe if extrapolated to its logical conclusions by hyper-intelligent beings. So there is definite cause for concern, in addition to the well-known issues of inequality.
