
Well, depending on how deep you think winning at Go is, there are, yes.

But we probably underestimate the value of our "contemporary contextual richness", i.e. relationships/correspondences that are not apparent (not yet known) yet turn out to be important, valuable, and easy to comprehend. Spotting those is mostly only possible because we spend our lives (mostly pretty successfully) in this extremely complex and ever-changing environment.

AI/ML/LLMs would first need to get up to speed, I guess, to be able to have these insights and to provide them at the right time. (Otherwise... it's probably already in the training data. Or not that deep. Or too deep.)



2 comments here:

  - these days, everything is called machine learning... AlphaGo is a great AI achievement, but I don't really consider it ML. It's classic AI augmented with NNs, IIRC. However, I'm willing to concede that it's (a|my) taxonomy issue.

  - however, being on the receiving end of Stockfish and friends (chess engines), I see "just moves, no insights". Even the insight that pushing the h-pawn is often better than previously thought, for example, came from humans reverse-engineering the engines' results.


Hm, okay, what do you consider ML then? (And what AI, and how much is the overlap?) For me AlphaGo is more ML than AI. (Exactly because of the "moves no insight" that you mention.)


well, the wikipedia definition:

>Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data and thus perform tasks without explicit instructions.

So it's about learning via statistics and data. If you don't use data or statistics, it's not ML. Classic chess engines definitely do not fit there: they descend the game tree and evaluate it with an algorithm like minimax. No statistics. No learning. No training (although the static evaluation function might have been trained/tweaked via NNs). I don't know the details of AlphaGo, but I'm guessing it's similar: the concept of the game is hardcoded (game tree, ...) while the evaluation of a position is done via an NN. The training can be done via games against itself.
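The game-tree descent described above can be sketched in a few lines. This is a toy illustration of minimax, not a real engine: the tree and the leaf scores are made up, and a real engine would add alpha-beta pruning, depth limits, and a proper static evaluation function.

```python
# Minimal minimax sketch over a toy game tree (illustration only).
# Internal nodes are lists of children; leaves are static evaluation
# scores from the maximizing player's point of view. The maximizing
# and minimizing players alternate by depth.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Toy tree: root (max player) -> two min nodes -> leaf evaluations.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # max(min(3, 5), min(2, 9)) = max(3, 2) = 3
```

The point of the sketch: there is no data and no statistics anywhere in the search itself; "intelligence" comes entirely from exhaustive lookahead plus the evaluation function at the leaves (which is the one place NNs can be plugged in).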


> I don't know the details of AlphaGo

As far as I know, ML is the whole thing that makes it better than the older engines.

https://miro.medium.com/v2/resize:fit:4000/format:webp/1*0pn...


this is a really nice overview! thx.



