
The article says this about insertion sort:

This is the fastest sorting algorithm for small arrays, up to maybe 20, 30 elements. Even in software!

No no no. Insertion sort is fastest only when the items being sorted are already in nearly sorted order. Its runtime always depends on the number of items, n, being sorted. For a completely randomized array, it runs in O(n^2) regardless of how small n is. Quicksort and mergesort will do better in these situations.



Any source for that? I am pretty sure insertion sort has a lower constant factor than quicksort and mergesort. O(n^2) with a low constant can beat O(n log n) when n is small.

Wikipedia also agrees with me about quicksort being slower for small inputs:

https://en.wikipedia.org/wiki/Quicksort#Optimizations

EDIT: Wikipedia says the same about merge sort.

https://en.wikipedia.org/wiki/Merge_sort#Comparison_with_oth...
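
To make the constant-factor argument concrete, here is a minimal insertion sort sketch (Java; the function name and the lo/hi bounds are my own, chosen so it can double as the small-array step of a hybrid sort). The inner loop is just a compare and an array write, with no recursion and no extra allocation, which is where the small constant comes from:

  // Plain insertion sort over a[lo, hi): the inner loop shifts larger
  // elements one slot right until a[i] finds its place. Worst case is
  // O(n^2) moves, but each step is only a compare plus an array write.
  static void insertionSort(int[] a, int lo, int hi) {
    for (int i = lo + 1; i < hi; i++) {
      int key = a[i];
      int j = i - 1;
      while (j >= lo && a[j] > key) {
        a[j + 1] = a[j];
        j--;
      }
      a[j + 1] = key;
    }
  }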


You bring up valid points. I was speaking in terms of asymptotic runtime analysis, which ignores constant overhead. However, the cost per operation depends on the architecture that runs it.


You cannot talk about asymptotic behavior ("as n grows to infinity") and "regardless of how small n is" at the same time. If you're talking asymptotics, you should ignore the constant. If you're talking about small n, then you can't ignore the constant.
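
To put some illustrative (entirely made-up) constants on that: at n = 20, n^2 = 400 while n * log2(n) is roughly 86. If each insertion-sort step were, say, five times cheaper than a quicksort step, 400 / 5 = 80 is already below 86, so the quadratic algorithm wins at that size despite its worse growth rate. Where the real crossover sits depends entirely on those per-operation constants.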


Yes, which is why arbitrary cutoffs like the one in Java's sort have varying efficiency depending on the processor architecture. Cutoffs between different algorithms are also vital for implementing fast multiplication of large numbers: GMP has a different set of multiplication cutoffs for every supported CPU architecture.
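
As a rough sketch of how such a cutoff is usually wired in (reusing the insertionSort sketched above; CUTOFF = 16 is an illustrative value, not Java's actual threshold, and hybridSort/partition are my own names, not any library's API):

  // Illustrative hybrid: quicksort that hands ranges at or below CUTOFF
  // to insertion sort. The best CUTOFF depends on the CPU and the data,
  // which is why libraries keep per-architecture tuning tables.
  static final int CUTOFF = 16;

  // Simple Lomuto partition on a[lo, hi) using the last element as pivot.
  static int partition(int[] a, int lo, int hi) {
    int pivot = a[hi - 1];
    int i = lo;
    for (int j = lo; j < hi - 1; j++) {
      if (a[j] < pivot) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
        i++;
      }
    }
    int t = a[i]; a[i] = a[hi - 1]; a[hi - 1] = t;
    return i;
  }

  static void hybridSort(int[] a, int lo, int hi) {
    if (hi - lo <= CUTOFF) {
      insertionSort(a, lo, hi);  // small range: quadratic but cheap per step
      return;
    }
    int p = partition(a, lo, hi);
    hybridSort(a, lo, p);        // sort elements left of the pivot
    hybridSort(a, p + 1, hi);    // sort elements right of the pivot
  }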



