We're talking about occasional hiccups, not an average-case response-latency overhead. You can't get worst-case latency of 2-10us with a non-realtime kernel. Even a page fault could take longer than that.
> You can't get worst-case latency of 2-10us with a non-realtime kernel. Even a page fault could take longer than that.
You obviously can, and this has nothing to do with the kernel being real-time or not.
There is no situation I can think of where a page fault should occur on a properly set up system running production networking software, meaning no swap, huge pages (HugeTLB), and proper memory management.
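For what it's worth, here's a minimal sketch (Linux-specific, error handling mostly elided, pool size picked arbitrarily) of the kind of startup ritual such software typically goes through: lock everything into RAM, grab huge pages if available, and pre-fault the working set so no major fault can hit the hot path later.

```c
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>

#define POOL_SIZE (256UL * 1024 * 1024)   /* arbitrary pre-allocated working set */

int main(void)
{
    /* Lock current and future mappings into RAM so nothing can be
       swapped out and no major fault can occur on the hot path. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        return 1;

    /* Try to back the pool with huge pages to cut TLB pressure;
       fall back to regular pages if none are configured. */
    void *pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (pool == MAP_FAILED)
        pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (pool == MAP_FAILED)
        return 1;

    /* Touch every byte up front: all faults happen here, before the
       latency-sensitive loop starts. */
    memset(pool, 0, POOL_SIZE);

    /* ... run the packet-processing loop out of this pool ... */
    return 0;
}
```

Combined with no swap configured and pinned/isolated cores, that's how the page-fault count stays at zero once the hot loop starts.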
If you can think of "no situation" where a server may incur a page fault, forced preemption, or need to perform any I/O to a service/database, then I hope you at least recognise that your world in no way represents the state of server software at large, because none of those things is true for the vast majority of server software.
In a former life I worked on some safety-critical onboard avionics software for a supersonic platform, and 2us was around the upper-limit worst-case latency (i.e. you'll kill someone if you miss that deadline); still, those aren't the kind of requirements the vast majority of software finds itself under.
When working over the internet, some of the very best services are at >10ms ping latency anyway, where a 500us hiccup is imperceptible.
> If you can think of "no situation" where a server may incur a page fault, forced preemption, or need to perform any I/O to a service/database, then I hope you at least recognise that your world in no way represents the state of server software at large
I won't deny that the majority of software out there is not latency sensitive, but this article is specifically targeting the software that is _not_ using garbage collection, arguing that GC is undeservedly overlooked there. OP even adds that GC is a "solved problem" because some GC implementation has a 500us worst-case latency.
My point is that the article's author, and OP, are mistaken: if you are in the category of "I write server-side software without GC" (e.g. C/C++), then 500us is horribly wrong.
Your point that 500us is fine for most software out there is surely true, but not relevant, because if that is the case, you're probably not using C/C++, and thus this article is not targeting you.
In _my world_, as you phrase it, traffic is unshaped. You need to be able to support line rate, otherwise packets are dropped and all hell breaks loose.
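To put a number on "line rate" (my arithmetic, not figures from the thread): at 10GbE with minimum-size frames you get roughly 67ns per packet, so a single 500us stall leaves on the order of 7,000 packets with nowhere to go but a buffer or the floor.

```c
#include <stdio.h>

/* Back-of-the-envelope per-packet budget at 10GbE line rate with
   minimum-size Ethernet frames (illustrative numbers, not measurements). */
int main(void)
{
    double link_bps   = 10e9;                   /* 10GbE                        */
    double frame_bits = 84.0 * 8.0;             /* 64B frame + 20B preamble/IFG */
    double pps        = link_bps / frame_bits;  /* ~14.88M packets/s            */
    double budget_ns  = 1e9 / pps;              /* ~67ns per packet             */

    double stall_ns   = 500.0 * 1000.0;         /* one 500us hiccup             */
    double backlog    = stall_ns / budget_ns;   /* packets piled up meanwhile   */

    printf("budget: %.1f ns/packet (%.2fM pps)\n", budget_ns, pps / 1e6);
    printf("a 500us stall piles up ~%.0f packets\n", backlog);
    return 0;
}
```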
Your "properly set up system" is apparently doing nothing other than running your single job. The vast majority of real-world systems have to deal with antagonists.
All of the characteristics you mention are true on production systems used in large-scale fleets... and yet "bumps" happen... because there's never just one thing happening. It's all the things, and it's always changing.
I'm gonna guess you do finance. A six-microsecond fiber one-way is a thing of beauty. There are certain technical luxuries associated with that domain, and rarely any requirement to exhibit high utilization rates... or deal with antagonists running on the same hardware.
Finance is one such use case, but there are a lot more, and those are the use cases for people not using GC, which is why I find this article (and the comment saying 500us is a solved problem) pedantic.
I wrote code profilers, for instance, which also need perfectly predictable latency. I worked on L2 and L3 networking applications (bridging, routing, forwarding) that need line-rate support. People working on audio sampling or codecs have the same constraints, etc.
There's a whole world of applications where 500us is ridiculously slow. The article takes the OS as an example, but if my OS had 500us random latency spikes, I would be horrified.
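Spikes like that are easy to observe for yourself. Here's a crude C probe (a stripped-down cousin of what tools like cyclictest do; iteration count chosen arbitrarily) that spins on the monotonic clock and reports the worst gap between two consecutive reads; preemptions, faults, and similar hiccups show up as outliers.

```c
#include <stdio.h>
#include <time.h>

/* Crude hiccup probe: spin on the monotonic clock and record the largest
   gap between two consecutive reads. On an idle, well-tuned machine the
   worst gap stays small; preemption and faults show up as big outliers. */
static long ns_between(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) * 1000000000L + (b->tv_nsec - a->tv_nsec);
}

int main(void)
{
    struct timespec prev, now;
    long worst_ns = 0;

    clock_gettime(CLOCK_MONOTONIC, &prev);
    for (long i = 0; i < 100000000L; i++) {   /* a few seconds of spinning */
        clock_gettime(CLOCK_MONOTONIC, &now);
        long gap = ns_between(&prev, &now);
        if (gap > worst_ns)
            worst_ns = gap;
        prev = now;
    }

    printf("worst observed gap: %ld ns (%.1f us)\n", worst_ns, worst_ns / 1000.0);
    return 0;
}
```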
> because if that is the case, you're probably not using C/C++
The point is that this claim is wrong more often than it should be, i.e. C/C++ is still used more than it should be, partly because these GC myths persist; hence the article.
I think in the end we're debating whether the glass is half full or half empty.
I claim that people are not ignorant: if they use C/C++, they are aware of what a GC implies, and they cannot afford it.
The article claims that people are somehow misled into thinking they _need_ C/C++ when a GC'd language would be alright.
I don't think people are dumb. I think that, given the choice, any sane person would pick Python or something similar to write an application, and that assuming the only reason they don't is that they don't know any better is pedantic.
It's a mistake to conclude that people make rational choices based on the premise that they aren't dumb. Counterexamples abound.
It's literally true that people who have never programmed in a language with GC don't know what it's like, but they absorb folklore about its difficulties or unsuitability from places like HN. Yet such content almost always comes from extremes that aren't applicable to most development, thus producing an unintentionally skewed picture.
Articles exactly like this one are needed for balance, and to encourage more experimentation with safer choices.