
It kind of depends. You can broadly call any kind of search “reasoning”. But search requires 1) enumerating your possible options and 2) assigning some value to those options. Real world problem solving makes both of those extremely difficult.

First, unlike in chess, there’s a functionally infinite number of actions you can take in real life. So even just enumerating actions to argmax over is hard.

Second, you need some value function that tells you how good an action is in order to argmax. But the value of many actions is impossible to know in practice, because of hidden information and the chaotic nature of the world (butterfly effect).
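To make the two requirements concrete, here is a minimal sketch of "search as reasoning": pick the action that maximizes a value function. The names `argmax_action` and `value` are hypothetical stand-ins, not anything from a real system; the point is that the code only works when both assumptions hold.

```python
def argmax_action(actions, value):
    """Return the action with the highest estimated value.

    Requires (1) a finite, enumerable action set and
    (2) a computable value for every action -- exactly the
    two assumptions that break down in the real world.
    """
    return max(actions, key=value)

# Toy chess-like example where both assumptions hold:
scores = {"e4": 0.5, "d4": 0.4, "Nf3": 0.3}
best = argmax_action(scores.keys(), value=lambda a: scores[a])
print(best)  # e4
```

With a functionally infinite action set, the enumeration in `max(...)` never terminates; with an unknowable `value`, the key function has nothing to compute.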



Doesn't something about AlphaGo also involve "infinitely" many possible outcomes? Yet they cracked it, right?


Go is played on a 19x19 board. At the beginning of the game the first player has 361 possible moves. The second player then has 360 possible moves. There is always a finite and relatively “small” number of options.

I think you are thinking of the fact that it had to be approached differently from minimax in chess, because a brute-force decision tree grows far too fast to search well. So they had to learn models for actions and values instead.
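Some rough arithmetic shows why the brute-force tree is hopeless for Go. Assuming a uniform branching factor (an idealization: real games don't branch uniformly, and ~35 for chess vs. ~250 for mid-game Go are commonly quoted ballpark figures, not exact values):

```python
def tree_size(branching: int, depth: int) -> int:
    """Leaf count of a uniform game tree searched to `depth` plies."""
    return branching ** depth

# Chess, ~35 legal moves on average, 6 plies deep:
print(tree_size(35, 6))   # ~1.8 billion positions

# Go, ~250 legal moves mid-game, same 6 plies:
print(tree_size(250, 6))  # ~2.4e14 positions, ~100,000x more
```

That gap is why exhaustive minimax that is workable for chess engines doesn't transfer to Go, and why learned policy and value models were needed to prune the search.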

In any case, Go is a perfect information game, which as I mentioned before, is not the same as problems in the real world.



