There are about a zillion implementations of vanilla A* on the internet. What's particularly interesting about this one? Do you have some cool new heuristic function or some nice insight into how the search works? The as-is version is pretty crappy. I mean, I can't even see the nodes expanded!
As far as I can tell there is nothing particularly interesting about this implementation. It's cool because it lets you play with it, and because the code is pretty short, which makes it easy for anyone unfamiliar with the algorithm to grok how it works. It also has some nice documentation: http://www.matthewtrost.org/projects/astar/documentation.htm...
Actually, I found the documentation rather poor. It does a good job of explaining the API but does little to give a would-be user any insight into why the search works, how it works, or how efficient it is. It doesn't even discuss why (or when) I might choose one of the three given heuristic functions over the others.
i.e. despite its length, it's not actually a very useful document.
While I agree that what you listed would be useful to include, I think it's important to note that this is documentation of his implementation (what the various parameters are for) and not of how A* works. To that end he did a good job. I especially like the little ideas he throws in under "Extending astar.js." He links to the Wikipedia entry for those who want to know more about how A* works.
Suppose the C++ STL documentation, such as it is, just linked you to CLRS. Would you regard it as adequate, or would you demand more? I don't expect a rigorous academic treatment here, but at least something telling me (a) how efficient the implementation is and (b) why (or when) I might want to use one provided feature over another. For example: why is the Euclidean heuristic "slow"? Is it a shitty implementation? Is it because it underestimates the goal distance too much? Another gripe: the author doesn't even tell me what kind of open and closed list implementations are in use!
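For what it's worth, here's roughly what the usual grid heuristics look like (my own sketch with my own function names, not the author's code). My guess at "slow": Euclidean pays a sqrt per call, and on an 8-connected grid it's also a weaker lower bound than octile distance, so the search expands more nodes. But I shouldn't have to guess; the documentation should say.

    // Sketch only -- not from astar.js. Each heuristic estimates the
    // remaining cost from node a to goal b on a grid.
    function manhattan(a, b) {
      // Cheap; exact for unblocked 4-connected grids.
      return Math.abs(a.x - b.x) + Math.abs(a.y - b.y);
    }
    function octile(a, b) {
      // Exact for unblocked 8-connected grids with diagonal cost sqrt(2).
      var dx = Math.abs(a.x - b.x), dy = Math.abs(a.y - b.y);
      return Math.max(dx, dy) + (Math.SQRT2 - 1) * Math.min(dx, dy);
    }
    function euclidean(a, b) {
      // Straight-line distance: a sqrt per call, and never larger than
      // the octile distance, so it's admissible but less informed.
      var dx = a.x - b.x, dy = a.y - b.y;
      return Math.sqrt(dx * dx + dy * dy);
    }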
I don't mean to be a hater, but this fails to contribute something novel, it fails as a learning resource that teaches something interesting, and it fails as a well-documented programming hack.
Even if it's not the best implementation, it could be considered as scaffolding upon which to improve.
While you clearly have enough experience with A* to tell the difference between a good and bad implementation, I've not got the same background (I've been reading about it, looking at Flash implementations, and trying to figure out how the heck I was going to do it in JS+Canvas).
Maybe there's a short list of glaring inefficiencies or places where this implementation is behind the curve. What's it missing?
I'm sure that for the OP this is a reasonably significant achievement, to me it's voodoo... and to John Carmack it's something else.
I doubt Carmack would be so derisive of someone's professional development. Would you tell an 8 year old that their finger paintings were pretty crappy too?
Criticism can be valid and encouraging if delivered in the right way.
Perhaps we have different expectations of what we think should appear on Hacker News.
I come here to find out about cool new stuff that hackers elsewhere in the world are working on. To me, an individual learning a fundamental AI algorithm isn't all that interesting. Particularly as the OP in this case has nothing to add to the discussion; he's just crowing to the world: "Look! I implemented someone else's idea!"
That's ridiculous. 90% of the articles on the homepage at all times are garbage tech gossip. I don't see you insulting those authors. You're either a troll or just a jerk.
Wtf? This is a news site. How is somebody learning something fundamental news? Would you like to see a front-page story every time some kid completes an algorithms assignment? I don't. Not because the learning is unimportant, simply because it's most interesting to the person doing the learning.
All I'm saying is this: if you've learned something cool, and you want to tell the world about it, you need to present it in a way such that your learning contributes something to the learning of others. For example: explaining a complicated idea in really simple terms that anyone can understand (such as this cute example of the Halting Problem proof: http://www.lel.ed.ac.uk/~gpullum/loopsnoop.html).
Another way to make your learning interesting to other people is to present an implementation that provides insight into the subject. The author tried to do this but I argue his execution is poor. I learned nothing from his app and nothing from his documentation. No implementation insights, no analysis, no beautiful code, nothing. Nada. Zip. Zilch. It's just a web-based A* implementation. Of which there are zillions.
Excuse me? The study of better heuristics to guide search is a very active area of AI research. See, for example, Sturtevant et al.'s 2009 work on True Distance Heuristics, or their 2011 work on compressed heuristic functions presented at AAAI last week.
No, it's more general. The central idea is to identify "interesting" nodes in explicit graphs which can be used to improve heuristic accuracy. You might even be able to do this for implicit graphs though that extension is not straightforward. I suspect you'd need some kind of pattern database approach.
Harabor and Grastien had a nice AAAI 2011 paper on speeding up heuristic search, though their technique is specific to 2D grids.
Exactly: my point was that you can't improve much on heuristics for a 2D grid. The standard heuristics are already admissible and simple; pretty much all you can do is speed up the search itself.
Well, just to put in a hopeful word here: for about a century mathematicians figured the most efficient way to multiply square matrices was O(n^3). Yet today the best known exponent is significantly lower, and even in practice Strassen-style algorithms beat the naive approach.
Huh? The True Distance Heuristic stuff to which I refer can speed up search quite dramatically. The trick is to find a nice tradeoff between improved heuristic accuracy and memory overhead.
Basically: the less accurate A*'s heuristic function is, whatever the domain, the more nodes it has to expand to reach the goal. That's because with a poor heuristic the fringe expands in all directions rather than strictly toward the goal. Manhattan and Octile are not great heuristics because they assume the path to the goal is always unblocked, so you end up exploring dead ends and other areas that look promising but which the optimal path never crosses.
You can see the difference in performance by comparing heuristic functions: for example, the special case of A* where h(n) = 0 (i.e. Dijkstra's algorithm) vs. the Manhattan distance heuristic. The latter is more informed, and the search proceeds faster.
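To make that concrete, here's a toy A* where the heuristic is just a parameter (my own sketch; cost(a, b), n.neighbours(), and the node id fields are assumed helpers, and a real implementation would use a binary heap for the open list instead of re-sorting an array):

    // Pass h = zero and this is Dijkstra; pass h = manhattan and it's A*.
    function search(start, goal, h) {
      var open = [{ node: start, g: 0, f: h(start, goal) }];
      var bestG = {};
      bestG[start.id] = 0;
      var closed = {};
      while (open.length > 0) {
        open.sort(function (a, b) { return a.f - b.f; }); // pop lowest f = g + h
        var cur = open.shift();
        if (cur.node.id === goal.id) return cur.g; // optimal for consistent h like these
        if (closed[cur.node.id]) continue;         // skip stale duplicate entries
        closed[cur.node.id] = true;
        cur.node.neighbours().forEach(function (n) {
          var g = cur.g + cost(cur.node, n);
          if (bestG[n.id] === undefined || g < bestG[n.id]) {
            bestG[n.id] = g;
            open.push({ node: n, g: g, f: g + h(n, goal) });
          }
        });
      }
      return Infinity; // goal unreachable
    }

    function zero() { return 0; } // h(n) = 0 everywhere: Dijkstra's algorithm

With h = zero the fringe grows uniformly in every direction; with manhattan the f-values rank nodes nearer the goal first, so the fringe gets pulled toward it.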
The paper to which I refer builds a database of costs between every node and certain landmark points. Given such information you can now better estimate the distance between two points by computing a distance differential.
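The core trick is the triangle inequality. Here's a minimal sketch of the differential idea (mine, not the paper's code; it assumes dist[L][n] holds precomputed true shortest-path costs from each landmark L to every node id n, e.g. from one Dijkstra flood per landmark):

    // n and goal are node ids; landmarks is an array of landmark ids.
    function differentialH(dist, landmarks, n, goal) {
      var best = 0;
      for (var i = 0; i < landmarks.length; i++) {
        var d = dist[landmarks[i]];
        // Triangle inequality: |d(L,n) - d(L,goal)| <= d(n,goal),
        // so this estimate never overshoots -- it stays admissible.
        var est = Math.abs(d[n] - d[goal]);
        if (est > best) best = est;
      }
      return best;
    }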
Bottom line: their search is several times faster than vanilla A* with a plain-jane Octile or Manhattan heuristic. More generally, the more landmarks you use, the better the heuristic. However, the better performance comes from storing more costs, which can introduce a significant memory overhead.
The details are in the papers I mentioned if you want to know more.