
I recently wrote a chess AI in JavaScript and found that it couldn't evaluate nearly as many moves per second as I expected, even though the implementation was fairly light (though not extensively optimized).
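
To get a feel for where the time goes, a useful first measurement is a raw node count at a fixed depth (a perft-style walk of the move tree) divided by elapsed time. A minimal sketch follows; the board object with generateMoves(), makeMove(), and undoMove() is a hypothetical stand-in for whatever move generator you use:

    // Count how many positions a fixed-depth walk of the move tree
    // visits, then divide by elapsed time for nodes per second.
    // The board API (generateMoves/makeMove/undoMove) is a
    // hypothetical stand-in -- adapt it to your own move generator.
    function countNodes(board, depth) {
      if (depth === 0) return 1;
      let nodes = 0;
      for (const move of board.generateMoves()) {
        board.makeMove(move);
        nodes += countNodes(board, depth - 1);
        board.undoMove();
      }
      return nodes;
    }

    const start = Date.now();
    const nodes = countNodes(board, 5);   // depth 5, adjust to taste
    const seconds = (Date.now() - start) / 1000;
    console.log(nodes + " nodes, " + Math.round(nodes / seconds) + " nodes/sec");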

My feeling is that for serious AIs that need to perform huge tree evaluations (whether classical alpha-beta or something closer to Monte Carlo regret-minimizing searches), there are several orders of magnitude of speedup available: huge gains from subtle tradeoffs (larger static memory allocations, memory caching strategies, etc.) and deep heuristics that can prune the search space by huge percentages.
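
To make the pruning and caching points concrete, here's a minimal alpha-beta sketch (negamax form) with a transposition table: the beta cutoff is what discards whole subtrees, and the table trades memory for time by reusing results for positions reached through different move orders. It assumes the same hypothetical board interface as above, plus evaluate() (a score from the side to move) and hash(); it's an illustration of the idea, not a tuned engine:

    // Alpha-beta (negamax) with a transposition table.
    // Assumes the hypothetical board from above, plus evaluate()
    // and hash(). Mate/stalemate handling omitted for brevity.
    const EXACT = 0, LOWER = 1, UPPER = 2;
    const table = new Map();

    function alphaBeta(board, depth, alpha, beta) {
      // Probe the table: reuse a stored result if it was searched
      // at least this deep and its bound type lets us use it here.
      const hit = table.get(board.hash());
      if (hit && hit.depth >= depth) {
        if (hit.flag === EXACT) return hit.score;
        if (hit.flag === LOWER && hit.score >= beta) return hit.score;
        if (hit.flag === UPPER && hit.score <= alpha) return hit.score;
      }

      if (depth === 0) return board.evaluate();

      const alphaOrig = alpha;
      let best = -Infinity;
      for (const move of board.generateMoves()) {
        board.makeMove(move);
        const score = -alphaBeta(board, depth - 1, -beta, -alpha);
        board.undoMove();
        if (score > best) best = score;
        if (best > alpha) alpha = best;
        if (alpha >= beta) break;   // beta cutoff: prune remaining moves
      }

      // Record whether this score is exact or only a bound.
      const flag = best <= alphaOrig ? UPPER : best >= beta ? LOWER : EXACT;
      table.set(board.hash(), { depth, score: best, flag });
      return best;
    }

Move ordering is the other big lever: the earlier the best move is searched, the more often the beta cutoff fires, and with good ordering alpha-beta visits roughly the square root of the nodes a plain minimax would.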

Highly tuned implementations like these - or the graphics effects of the demoscene - have always fascinated me as an art form.

In practice, though, the industry seems to value more code and more features over these incredible feats of superimplementation.


