
Is this research focused on search on two-dimensional grids?


No, it's more general. The central idea is to identify "interesting" nodes in explicit graphs that can be used to improve heuristic accuracy. You might even be able to do this for implicit graphs, though that extension is not straightforward. I suspect you'd need some kind of pattern database approach.

Harabor and Grastien had a nice AAAI 2011 paper on speeding up heuristic search -- which is specific to 2D grids.


Exactly, my point was that you can't improve much on the heuristics for a 2D grid. They're already admissible and simple; pretty much all you can do is speed up the search itself.


Well, just to put in a hopeful word here: for roughly a century mathematicians figured that the most efficient way to multiply square matrices was O(n^3). Yet today it's significantly less than that, and subcubic algorithms like Strassen's even pay off in practice.


Huh? The True Distance Heuristic stuff to which I refer can speed up search quite dramatically. The trick is to find a nice tradeoff between improved heuristic accuracy and memory overhead.


Can you explain how it speeds up search dramatically on a 2D grid? The only calculation one needs to do is "|x1 - x2| + |y1 - y2|".


Basically: the less accurate AStar's heuristic function is, whatever the domain, the more nodes it has to expand to reach the goal. That's because with a poor heuristic the fringe is expanded in all directions rather than strictly in the direction of the goal. Manhattan and Octile are not great heuristics because they assume the path to the goal is always unblocked. So you end up exploring dead-ends and other areas that look promising but which the optimal path does not cross.

You can see the difference in performance by comparing different heuristic functions; for example the special case of AStar where h(n) = 0 (i.e. Dijkstra's algorithm) vs the Manhattan distance heuristic. The latter is more informed and the search proceeds faster.
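
To make that concrete, here's a rough Python sketch (my own illustration, not code from any of the papers mentioned) that runs AStar on a tiny 4-connected grid with a wall in the way and counts node expansions for h(n) = 0 versus the Manhattan heuristic. The grid, start and goal are made up for the example:

    import heapq

    GRID = [  # 0 = free, 1 = blocked; the wall forces a detour
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
    ]
    START, GOAL = (0, 0), (4, 0)

    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def zero(a, b):  # h(n) = 0 turns AStar into Dijkstra
        return 0

    def neighbours(node):
        x, y = node
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < len(GRID) and 0 <= ny < len(GRID[0]) and GRID[nx][ny] == 0:
                yield (nx, ny)

    def astar(start, goal, h):
        """Return (path cost, number of node expansions)."""
        open_list = [(h(start, goal), 0, start)]
        g = {start: 0}
        closed = set()
        expansions = 0
        while open_list:
            f, cost, node = heapq.heappop(open_list)
            if node in closed:
                continue
            closed.add(node)
            expansions += 1
            if node == goal:
                return cost, expansions
            for nb in neighbours(node):
                new_cost = cost + 1
                if new_cost < g.get(nb, float("inf")):
                    g[nb] = new_cost
                    heapq.heappush(open_list, (new_cost + h(nb, goal), new_cost, nb))
        return None, expansions

    print("h = 0 (Dijkstra):", astar(START, GOAL, zero))
    print("Manhattan       :", astar(START, GOAL, manhattan))

Both runs return the same optimal cost on this map, but the Manhattan run expands noticeably fewer nodes, which is exactly the effect a more informed heuristic buys you.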

The paper to which I refer builds a database of costs between every node and certain landmark points. Given such information you can now better estimate the distance between two points by computing a distance differential.

Bottom line: their search is several times faster than vanilla AStar using a plain-jane Octile or Manhattan heuristic. More generally, the more landmarks you use, the better the heuristic. However, the better performance comes at the cost of storing more distances, which can introduce a significant memory overhead.
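
For anyone who wants to see the landmark idea in miniature, here's a rough sketch (again my own illustration, not the authors' code). Precompute true shortest-path costs from a few landmark cells, then lower-bound the distance from n to the goal with the maximum over landmarks of |d(L, goal) - d(L, n)|; by the triangle inequality this never overestimates, so it stays admissible. The grid and the landmark choice below are made up for the example:

    from collections import deque

    GRID = [  # same toy map as the AStar sketch above
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
    ]

    def bfs_costs(source):
        """True shortest-path cost from `source` to every reachable cell."""
        dist = {source: 0}
        queue = deque([source])
        while queue:
            x, y = queue.popleft()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if (0 <= nx < len(GRID) and 0 <= ny < len(GRID[0])
                        and GRID[nx][ny] == 0 and (nx, ny) not in dist):
                    dist[(nx, ny)] = dist[(x, y)] + 1
                    queue.append((nx, ny))
        return dist

    LANDMARKS = [(0, 0), (4, 4)]  # e.g. two opposite corners of the map
    LANDMARK_COSTS = [bfs_costs(L) for L in LANDMARKS]

    def landmark_h(n, goal):
        """Differential heuristic: best lower bound any landmark provides."""
        best = 0
        for costs in LANDMARK_COSTS:
            if n in costs and goal in costs:
                best = max(best, abs(costs[goal] - costs[n]))
        return best

    # For a node stuck behind the wall the difference is stark:
    print(landmark_h((2, 0), (4, 0)))  # prints 6, the true detour cost; Manhattan says 2

Plugging landmark_h into the AStar sketch above in place of manhattan gives a tighter estimate wherever the straight-line guess is badly blocked, and every extra landmark means one more table of costs to keep around, which is the memory tradeoff mentioned above.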

The details are in the papers I mentioned if you want to know more.



