
The concern is over how to eliminate a certain class of bugs from our software as it becomes so complex that no human understands it. If you ask your robot to get you a glass of milk, and it finds you are out, you don't want it to rob the local grocery store, killing anyone who tries to stop it. Fixing such bugs by enumerating things not to do isn't viable -- we have to build machines that share our values, or stick to Oracle-type machines that just don't care about the physical world.


That's the thing - I don't agree with your premise that humans and robots/AI would be as separate as you frame it. "Vanilla" humans may not understand the AI, but the people working with the stuff would surely be vastly enhanced humans, cyborgs.

I agree that we have to create AIs that share our values. However, I don't understand how or why we would fail to. We obviously create AIs to serve us, and in order to serve us independently, without needing manual input of tasks (which would just make it an advanced computer), an AI needs to understand us.

I simply don't understand how the default AI would be detrimental to humans. What purpose would such an AI serve, and why would we create it?


The mind design space is HUGE. (http://lesswrong.com/lw/rm/the_design_space_of_mindsingenera...)

The "default AI" is a program that we build. That's all we know. Most programs that we build do not properly represent human values. If they don't properly take those into account, then we lose things we care about. The AI that "optimizes our supply chain for paper clips" will, in the limiting case, consume humans and the environment and the earth and the sun in order to produce as many paperclips as possible and distribute them as widely as possible. A "default AI" will not care about its survival or the survival of its creators.



