Hacker News

You bring up very valid points. I was speaking in terms of asymptotic runtime analysis, which does not care about constant overhead. However, the cost per operation depends on the architecture that runs it.


You cannot talk about asymptotic behavior ("as n grows to infinity") and "regardless of whether n is small" at the same time. If you're talking asymptotics, you should ignore the constant. If you're talking about small n, then you can't ignore the constant.
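To make this concrete, here's a toy cost model (the constants are invented purely for illustration): an O(n log n) algorithm with a large per-operation constant loses to an O(n^2) algorithm with a small constant until n passes a crossover point.

```java
// Toy cost models illustrating why constants matter for small n.
// The constants (8.0 and 1.0) are made up for illustration only.
public class Crossover {
    // "Fast" asymptotically: O(n log n), but with a large constant.
    static double fast(int n) { return 8.0 * n * (Math.log(n) / Math.log(2)); }

    // "Slow" asymptotically: O(n^2), but with a small constant.
    static double slow(int n) { return 1.0 * n * n; }

    public static void main(String[] args) {
        for (int n : new int[]{4, 16, 64, 256}) {
            System.out.printf("n=%d fast=%.0f slow=%.0f%n", n, fast(n), slow(n));
        }
    }
}
```

With these numbers the quadratic algorithm is cheaper up through n=16 and the n log n algorithm only wins somewhere before n=64; the real crossover on real hardware is exactly what library implementers have to measure.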


Yes, which is why you get these somewhat arbitrary cutoffs, like the ones in Java's sort implementation, whose efficiency varies with the processor architecture. These cutoffs between different algorithms are vital, though, for implementing fast multiplication of large numbers: GMP has a different set of multiplication cutoffs for every supported CPU architecture.
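The cutoff idea can be sketched like this: a merge sort that falls back to insertion sort below a threshold, because insertion sort's tiny constant beats the recursion overhead for small subarrays. The CUTOFF value here is illustrative, not the JDK's actual number; in practice it would be tuned per architecture, which is exactly the GMP situation described above.

```java
import java.util.Arrays;

// Hybrid sort sketch: below CUTOFF, insertion sort's low constant
// overhead beats merge sort despite its worse asymptotics. The best
// cutoff depends on the CPU (cache sizes, branch prediction), which is
// why libraries tune it per architecture. CUTOFF = 32 is a guess.
public class HybridSort {
    static final int CUTOFF = 32; // illustrative, not a tuned value

    // Sorts the half-open range a[lo..hi).
    public static void sort(int[] a, int lo, int hi) {
        if (hi - lo <= CUTOFF) {
            insertionSort(a, lo, hi);
            return;
        }
        int mid = lo + (hi - lo) / 2;
        sort(a, lo, mid);
        sort(a, mid, hi);
        merge(a, lo, mid, hi);
    }

    static void insertionSort(int[] a, int lo, int hi) {
        for (int i = lo + 1; i < hi; i++) {
            int key = a[i], j = i - 1;
            while (j >= lo && a[j] > key) { a[j + 1] = a[j]; j--; }
            a[j + 1] = key;
        }
    }

    static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = Arrays.copyOfRange(a, lo, hi);
        int i = 0, j = mid - lo, k = lo;
        while (i < mid - lo && j < hi - lo)
            a[k++] = (tmp[i] <= tmp[j]) ? tmp[i++] : tmp[j++];
        while (i < mid - lo) a[k++] = tmp[i++];
        while (j < hi - lo) a[k++] = tmp[j++];
    }
}
```

Picking CUTOFF by benchmarking on each target CPU is what turns this from a textbook exercise into the per-architecture tables GMP ships.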




