
This assumes tasks can be classified as small or large before starting them, which repeats a mistake you pointed out: our estimates need to be accurate and high-confidence. In practice, task estimates have high variance, and some tasks turn out to be inordinately complex in ways that aren't apparent at first. A second assumption is that all tasks have the same priority and equal utility.

Neither assumption holds for software tasks, so the assumptions behind the queueing policy don't apply here.
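A quick sketch of the first point, with made-up numbers (lognormal sizes and a hypothetical multiplicative estimation error, neither from the thread): sorting jobs by noisy estimates instead of true sizes erodes the advantage of shortest-job-first scheduling.

```python
import random

def mean_flow_time(sizes_in_order):
    # All jobs released at t=0; average completion time of a fixed order.
    elapsed, total = 0.0, 0.0
    for s in sizes_in_order:
        elapsed += s
        total += elapsed
    return total / len(sizes_in_order)

rng = random.Random(0)
true_sizes = [rng.lognormvariate(0, 1.5) for _ in range(2000)]
# Hypothetical noisy estimates: true size times a lognormal error factor.
estimates = [s * rng.lognormvariate(0, 1.5) for s in true_sizes]

oracle = sorted(true_sizes)                        # perfect shortest-first
by_estimate = [s for _, s in sorted(zip(estimates, true_sizes))]

print(mean_flow_time(oracle))       # lower bound: shortest-first is optimal
print(mean_flow_time(by_estimate))  # worse, because estimates are noisy
print(mean_flow_time(true_sizes))   # arrival order, no sorting at all
```

Shortest-processing-time-first provably minimizes mean flow time when sizes are known exactly, so the oracle ordering is a lower bound; the gap between it and the estimate-based ordering is what high-variance estimates cost you.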



One of the neat things about the "always pre-empt the currently processing task" policy is that you don't need to know the size of any task! All you need to know is that task sizes are variable, which you can learn from past data.
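A minimal simulation of that idea (my own sketch, with assumed arrival rate and size distribution, not numbers from the thread): compare first-come-first-served against preempt-on-arrival (preemptive LIFO with resume) in a single-server queue fed heavy-tailed job sizes. The preemptive policy never looks at a job's size, yet tends to win on mean sojourn time precisely because sizes are highly variable.

```python
import random

def make_jobs(n, lam, alpha, xmin, seed):
    # Poisson arrivals at rate lam; Pareto(alpha) service times, which are
    # heavy-tailed (infinite variance for alpha <= 2 -- wildly variable sizes).
    rng = random.Random(seed)
    t, jobs = 0.0, []
    for _ in range(n):
        t += rng.expovariate(lam)
        jobs.append((t, xmin / rng.random() ** (1 / alpha)))
    return jobs

def mean_sojourn(jobs, policy):
    sojourn = []
    if policy == "fcfs":
        free = 0.0                        # when the server next goes idle
        for arr, size in jobs:
            done = max(free, arr) + size
            sojourn.append(done - arr)
            free = done
    else:                                 # "plifo": newest arrival preempts
        stack, now = [], 0.0              # stack of [arrival, remaining work]
        for arr, size in jobs:
            while stack and now + stack[-1][1] <= arr:
                a, rem = stack.pop()      # top job finishes before new arrival
                now += rem
                sojourn.append(now - a)
            if stack:
                stack[-1][1] -= arr - now # partial progress on top job
            now = arr
            stack.append([arr, size])
        while stack:                      # drain the stack after last arrival
            a, rem = stack.pop()
            now += rem
            sojourn.append(now - a)
    return sum(sojourn) / len(sojourn)

# Mean size is alpha*xmin/(alpha-1) = 3, so load = 0.2 * 3 = 0.6.
jobs = make_jobs(n=50_000, lam=0.2, alpha=1.5, xmin=1.0, seed=42)
print(mean_sojourn(jobs, "fcfs"))   # inflated by rare huge jobs blocking the line
print(mean_sojourn(jobs, "plifo"))  # typically far smaller, using no size info
```

The intuition: under FCFS, one monster job delays everything behind it, and with heavy tails the mean waiting time is dominated by those monsters; preempting on every arrival lets small latecomers slip past without ever estimating anything.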



