Hacker News
Digital immortality, true AI and the destruction of mankind (fakeguido.blogspot.com)
10 points by daivd on June 16, 2010 | hide | past | favorite | 7 comments


The reason this isn't the accepted ideal method for building an AI is that it -doesn't build an AI-.

A decision tree attached to your incoming email doesn't constitute an AI; it's only automation. It isn't making decisions, and a structure of these decision trees, even one complicated enough to perfectly mimic a person, is not an intelligence; it's a philosophical zombie.

Yeah, it is theoretically possible to build a model of me that mimics me entirely by clearly defining the decision trees, but that's not the problem of AI: it's easy enough to teach a system to do something, but hard to build something that actually learns.
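The distinction being drawn can be made concrete with a minimal sketch, in Python, of the kind of rule-based email automation the comment describes; the rules, senders, and folder names here are hypothetical illustrations, not anyone's actual system:

```python
# A hand-built decision tree for routing email. Every behavior is a
# fixed rule an author typed in; nothing in it updates from experience,
# which is the commenter's point about automation vs. learning.

def triage(email):
    """Route an email dict with 'sender' and 'subject' keys to a folder."""
    subject = email["subject"].lower()
    if email["sender"].endswith("@work.example.com"):
        if "urgent" in subject:
            return "inbox/priority"
        return "inbox/work"
    if "unsubscribe" in subject:
        return "spam"
    return "inbox"

print(triage({"sender": "boss@work.example.com", "subject": "URGENT: report"}))
# -> inbox/priority
```

Stacking more such branches can mimic ever more behavior, but the system never revises its own rules; that revision step is what the learning problem asks for.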


A philosophical zombie is not a decision tree incapable of learning. A philosophical zombie is something that is exactly like a human in every way except that it lacks consciousness. The concept has more than a few problems - see http://lesswrong.com/lw/p7/zombies_zombies/

If a decision tree (or a clever algorithm or whatever) REALLY did perfectly mimic a thinking human (including our ability to learn), do you still think it wouldn't REALLY be intelligent? That there's some sort of important, qualitative difference between inputs being sifted through a decision tree and sensory inputs and feedback loops building up neuron action potentials? That may be true, but the evidence is heavily stacked against such a worldview.

That said, I agree that this is a bad approach to AI. It's not like people haven't been trying to stitch together various subproblems of intelligence for the better part of 50 years.


But 50 years is hardly a long time, is it?


I am not so sure.

Is it possible to create a standalone complex where consciousness arises or do we need human experience to bootstrap the process?

That to me seems to be the question.

Logic needs variables to work. The more traces you can capture from the physical world and put together in the digital world, the more variables you have to work with. That seems like a good start in my opinion, regardless of whether it creates true AI.

I wrote a post about it recently, The Ghost Protocol http://000fff.org/the-ghost-protocol-digital-identity-for-im...

Also, John Smart has his Brain Preservation Institute with the idea of CyberTwins.

http://www.brainpreservation.org/

But this is all very speculative of course.


"The reason this isn't the accepted ideal method for building an AI is that it -doesn't build an AI-."

You started off with a promising first sentence, then fell into a philosophical debate that people couldn't resist getting trapped in.

So, more to the point, what this does build is nothing more and nothing less than a uselessly complicated software system. It's all fun and games at first, but then the heuristics start interacting in weird ways, then they start interacting in really weird ways, and you start stacking heuristics on top of heuristics for applying heuristics, and before long you have a system that demonstrates rare flashes of brilliance but in practice is so unpredictable and opaque you can't use it.

You end up with demo-ware.


"and before long you have a system that demonstrates rare flashes of brilliance but in practice is so unpredictable and opaque you can't use it"

_That_ actually seems to resemble human psychology to some degree ...


And some philosophers say we are only automatons. Make it complicated enough, and it's intelligent. Or indistinguishable from intelligence. Use the word "zombie" if you like; that's just emotional sniping.



