But you can't go out and buy more cache. If you have an algorithm whose working set spills out of L2 cache (let's just assume that anything needing a log-time search structure won't fit in L1), you can easily gain a factor of 5 or 10 by packing your data.
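
To make the arithmetic concrete, here's a rough C sketch (the node layout, key count, and sizes are illustrative assumptions, not from the article): the same million keys stored as pointer-chasing tree nodes versus a packed sorted array.

    #include <stdio.h>
    #include <stdint.h>

    /* Pointer-based search tree node: on LP64 this is 4 bytes of key,
       4 bytes of padding, and two 8-byte child pointers = 24 bytes/key,
       before any allocator overhead. */
    struct tree_node {
        uint32_t key;
        struct tree_node *left, *right;
    };

    int main(void)
    {
        const size_t n = 1u << 20;  /* ~1M keys */
        printf("tree nodes:   %zu bytes\n", n * sizeof(struct tree_node)); /* ~24 MB */
        printf("packed array: %zu bytes\n", n * sizeof(uint32_t));         /* ~4 MB  */
        return 0;
    }

A binary search over the packed array touches the same log2(N) elements, but 4 MB has a fighting chance of staying resident in L2/L3, while the 24 MB of scattered nodes all but guarantees a cache miss per level.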

Main memory latency is a huge factor in modern performance analysis, and I was sad to see that it wasn't even covered in the article. The biggest win for a B-tree for in-memory use IMHO isn't actually the size advantage; it's that you can prefetch each node and pay roughly one memory round trip per level, logB(N) of them, instead of the full round-trip latency for each of log2(N) lookups in a binary tree.
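
A minimal sketch of what that looks like, assuming the GCC/Clang __builtin_prefetch intrinsic and an invented node layout (B, the struct, and the 64-byte line size are all assumptions):

    #include <stddef.h>

    #define B 15            /* keys per node: the node spans a few cache lines */
    #define CACHE_LINE 64

    typedef struct Node {
        int nkeys;
        int keys[B];
        struct Node *child[B + 1];  /* all NULL in leaves */
    } Node;

    static void prefetch_node(const Node *n)
    {
        /* Issue a load for every cache line of the node up front, so the
           misses overlap instead of arriving one at a time during the scan. */
        const char *p = (const char *)n;
        for (size_t off = 0; off < sizeof(Node); off += CACHE_LINE)
            __builtin_prefetch(p + off);
    }

    const Node *btree_find(const Node *n, int key)
    {
        while (n) {
            int i = 0;
            while (i < n->nkeys && n->keys[i] < key)
                i++;
            if (i < n->nkeys && n->keys[i] == key)
                return n;
            n = n->child[i];
            if (n)
                prefetch_node(n);  /* fetch all of the next node's lines in parallel */
        }
        return NULL;
    }

A binary tree can't do this: each level's address is only known after the previous miss resolves, so its log2(N) misses serialize, whereas here each level costs roughly one round trip no matter how many cache lines the node spans.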


