
I definitely think you are on to something. The attempts at artificial intelligence I am aware of all consist of some sort of optimization, so trying to find a good objective to optimize seems a very reasonable thing to try.

(This is just my speculation, so take it with a grain of salt)

Here I think it is reasonable to look to human motivation. Maybe by making an agent that optimizes what a human brain optimizes, we could see similar behaviour?

A reasonable start is Maslow's hierarchy of needs.

1. Biological and physiological needs. For an embodied AI, this could correspond to integrity checks passing, battery charging, and servicing.

2. Safety needs. I think these emerge from prediction + physiological needs.

3. After that we have social needs. This one is a little bit tricky. Maybe we could put in a hard coded facial expression detector?

4. Esteem needs. Social+prediction

5. Cognitive needs. I have no idea how this could be implemented

6. Aesthetic needs. I think these are pretty much hard-coded in humans, but are quite complex. Coding this will be ugly (irony)

7. Self-actualization???
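To make the hierarchy idea concrete, here is a toy sketch (purely my own invention, not an established architecture) of Maslow-style prepotency: the agent always attends to the lowest-level need that is not yet sufficiently satisfied. All names, thresholds, and values are made up for illustration.

```python
# Toy sketch of a Maslow-style drive system: lower-level needs
# dominate behaviour until they are satisfied past a threshold.
from dataclasses import dataclass

@dataclass
class Need:
    name: str
    level: int           # 1 = physiological ... 7 = self-actualization
    satisfaction: float  # 0.0 (starved) .. 1.0 (fully met)
    threshold: float     # below this, the need demands attention

def active_need(needs):
    """Return the lowest-level need still under its threshold."""
    for need in sorted(needs, key=lambda n: n.level):
        if need.satisfaction < need.threshold:
            return need
    return None  # everything met; free to "self-actualize"

needs = [
    Need("battery", level=1, satisfaction=0.2, threshold=0.5),
    Need("safety",  level=2, satisfaction=0.9, threshold=0.4),
    Need("social",  level=3, satisfaction=0.1, threshold=0.6),
]
print(active_need(needs).name)  # -> battery (lowest unmet level wins)
```

The point of the sketch is just the ordering: even though the social need is the most starved here, the physiological one wins because it sits lower in the hierarchy.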

Now, from 1 and 3 it is reasonable to suppose (provided the optimizer is good enough) that we could train the AI the way one trains a dog. You give a command, the AI obeys, you smile at it or pet it (-> reward).

It does something bad, you punish it.

In order for the optimization procedure not to take an unreasonably long time, I think it is important that the initial state has some instincts.

Make a sound when the battery is low. Pay attention to sounds that are speechlike.

Giving it something akin to filial imprinting could also be a good idea.

Extensive research on neural basis for motivation should be prioritized in my opinion.


