Fools learn from experience. The wise learn from others' experience.

(even knowing this, I am usually a fool. Also, could someone please tl;dr "inner vs outer losses" for me? advthanksance)

Edit: am I right to interpret "pain as grounding" as roughly parallel to the I term of a PID controller?
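For concreteness, here is the textbook PID step I have in mind (a generic sketch, nothing taken from the article; the gain values are arbitrary placeholders). The point of the analogy would be that the I term accumulates error over time, so a small but persistent error eventually forces a correction:

    # Generic textbook PID update (illustrative sketch only).
    # error = setpoint - measured value
    def pid_step(error, integral, prev_error, kp=1.0, ki=0.1, kd=0.01, dt=1.0):
        integral += error * dt                  # I term: error accumulates
        derivative = (error - prev_error) / dt  # D term: rate of change
        output = kp * error + ki * integral + kd * derivative
        return output, integral, error          # carry state to next step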



The problem with learning from others' experience is... where does this learning process come from?

Learning from watching others is actually quite difficult. As Terence said, "when two do the same, it's not the same." No two sets of pixel observations will be identical. How do you map a third-person view onto your own first-person view? You need sophisticated algorithms before 'learning from others' experience' is even an option. We struggle to get robots to learn from imitation. Human children routinely 'over-imitate' because they can't distinguish which parts of a sequence of actions are necessary and which are optional or can be done sloppily while still achieving the goal. Indeed, how do you even know what the 'goal' was? People differ greatly in abilities, preferences, and knowledge, so you would seem to require theory of mind just to begin. (I'm reminded of an argument I once saw about kittens: cats can't learn from observation; when a kitten 'imitates' its mother, it is actually just becoming interested in the same object or place, and then independently inventing, by the usual cat trial and error, whatever useful behavior it was.)


I currently believe the third-person view comes before the first-person view. There's substantial evolutionary pressure to be able to predict one's predators and prey; even quite limited theories of mind come in handy there. Consciousness, however, isn't required. I'll argue that the first-person view is an accident: once we have a system suitable for modelling others, it makes sense to apply it to the creature we have the most substantial data stream on, namely ourselves.

This ordering might explain why our self-awareness is sometimes far from optimal.

For how little theory of mind it takes to play antagonistic games, see Shannon's 1953 mind-reading machine: https://this1that1whatever.com/miscellany/mind-reader/Shanno...

(or try to play fetch with a dog, mixing two types of throws, and see how quickly it learns your tells)
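Here's a minimal sketch of the trick in Python (my own simplification; Shannon's actual machine was relay hardware with a richer eight-situation memory): remember what the human did after each recent (won/lost, repeated/switched) situation, and bet that the tendency recurs.

    import random

    class MindReader:
        # Matching-pennies predictor: the machine wins a round by
        # matching the human's play, and exploits human non-randomness.
        def __init__(self):
            self.table = {}        # (won, repeated) -> repeated again?
            self.last_play = None  # human's previous choice, "H" or "T"
            self.state = None      # (won last round, repeated last round)

        def predict(self):
            if self.last_play is None:
                return random.choice("HT")
            # Recall the tendency for the current situation, if any;
            # otherwise fall back to a coin flip.
            repeat = self.table.get(self.state, random.choice([True, False]))
            return self.last_play if repeat else ("T" if self.last_play == "H" else "H")

        def update(self, human_play, machine_guess):
            won = (machine_guess == human_play)
            if self.last_play is not None:
                repeated = (human_play == self.last_play)
                if self.state is not None:
                    self.table[self.state] = repeated  # remember the tendency
                self.state = (won, repeated)
            self.last_play = human_play

Against a truly random opponent it can't beat 50%, but people (and, apparently, dogs reading throws) leak patterns.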



