The actual algorithm (which is pretty sensible in the absence of delayed ack) is fundamentally a feature of the TCP stack, which in most cases lives in the kernel. To implement the direct equivalent in userspace against the sockets API would require an API to find out about unacked data and would be clumsy at best.
With that said, I'm pretty sure it is a feature of the TCP stack only because the TCP stack is the layer they were trying to solve this problem at, and it isn't clear at all that "unacked data" is particularly better than a timer -- and of course if you actually do want to implement application layer Nagle directly, delayed acks mean that application level acking is a lot less likely to require an extra packet.
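For concreteness, here's roughly what the timer variant could look like in userspace. This is a minimal sketch in Python; the class name, the 200 ms flush delay, and the ~MSS-sized threshold are all my own illustrative choices, not anything from a real stack:

```python
import socket
import threading

class BatchingSender:
    # Timer-based batching: a rough userspace analogue of Nagle.
    # We can't ask the kernel "is there unacked data?" through the
    # sockets API, so instead we coalesce small writes and flush when
    # the buffer passes a size threshold or a short timer fires.

    def __init__(self, sock: socket.socket,
                 flush_after: float = 0.2,   # seconds; arbitrary choice
                 max_buffer: int = 1400):    # roughly one MSS of payload
        self.sock = sock
        self.flush_after = flush_after
        self.max_buffer = max_buffer
        self.buf = bytearray()
        self.lock = threading.Lock()
        self.timer = None

    def send(self, data: bytes) -> None:
        with self.lock:
            self.buf += data
            if len(self.buf) >= self.max_buffer:
                self._flush_locked()
            elif self.timer is None:
                # Small write: arm the timer instead of sending now.
                self.timer = threading.Timer(self.flush_after, self.flush)
                self.timer.start()

    def flush(self) -> None:
        with self.lock:
            self._flush_locked()

    def _flush_locked(self) -> None:
        if self.timer is not None:
            self.timer.cancel()
            self.timer = None
        if self.buf:
            self.sock.sendall(bytes(self.buf))
            self.buf.clear()
```

The tradeoff is the same one Nagle was designed around: the timer adds up to flush_after of latency to the tail of every small burst, whereas the unacked-data heuristic only delays when the network hasn't caught up yet.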
Speaking of Slashdot, some fairly frequent poster back around 2001/2002 had a signature that was something like
mv /bin/laden /dev/null
and then someone explained how that was broken: even if that succeeds, what you've done is replace the device file /dev/null with the regular file that was previously at /bin/laden, and then whenever other things redirect their output to /dev/null they'll be overwriting that random file rather than having their output discarded immediately, which is moderately bad.
Your version will just fail (even assuming root) because mv won't let you replace a file with a directory.
Sure, in the middle of a magnitude 9 earthquake I'd rather be in the middle of a suburban golf course (as long as it is far from any tsunami-prone coast) than in any building, but I don't spend the majority of my time outside.
Two issues:
1. If you're making this choice during an earthquake, "outside" is often not a grassy field but rather the fall zone for debris from whatever building you're exiting.
2. If the earthquake is big/strong enough that you're in any real danger of building-level issues, the shaking will be strong enough that if you try to run outside you're very likely to just fall and injure yourself.
As someone who is super nearsighted, the smaller screen on a phone is great for reading, especially in contexts like bedtime reading where I want to have my glasses off.
I have read many hundreds of books this way.
The problem with a tablet is that most tablets, especially the sort that are good for seeing entire as-printed pages at once, are too big for me to keep the entire screen in focus without wearing glasses. (With that said, foldables improve things here: the aspect-ratio bottleneck is typically width, so being able to double the width on the fly makes such things more readable.)
There's a weird fetishization of long uptimes. I suspect some of this dates from the bad old days when Windows would outright crash after 50 days of uptime.
In the modern era, a lightly (or at least stably) loaded system lasting for hundreds or even thousands of days without crashing or needing a reboot should be a baseline unremarkable expectation -- but that implies that you don't need security updates, which means the system needs to not be exposed to the internet.
On the other hand, every time you do a software update you put the system in a weird spot that is potentially subtly different from where it would be on a fresh reboot, unless you restart all of userspace (at which point you might as well just reboot).
And of course FreeBSD hasn't implemented kernel live patching -- but then, that isn't a "long uptime" solution anyway, the point of live patching is to keep the system running safely until your next maintenance window.
> There's a weird fetishization of long uptimes. I suspect some of this dates from the bad old days when Windows would outright crash after 50 days of uptime.
My recollection is that, usually, it crashed more often than that. The 50 days thing was IIRC only the time for it to be guaranteed to crash (due to some counter overflowing).
> In the modern era, a lightly (or at least stably) loaded system lasting for hundreds or even thousands of days without crashing or needing a reboot should be a baseline unremarkable expectation -- but that implies that you don't need security updates, which means the system needs to not be exposed to the internet.
Or that the part of the system which needs the security updates not be exposed to the Internet. Other than the TCP/IP stack, most of the kernel is not directly accessible from outside the system.
> On the other hand, every time you do a software update you put the system in a weird spot that is potentially subtly different from where it would be on a fresh reboot, unless you restart all of userspace (at which point you might as well just reboot).
You don't need a software update for that. Normal use of the system is enough to make it gradually diverge from its "clean" after-boot state. For instance, if you empty /tmp on boot, any temporary file is already a subtle difference from how it would be on a fresh reboot.
Personally, I consider having to reboot due to a security fix, or even a stability fix, to be a failure. It means that, while the system didn't fail (crash or be compromised), it was vulnerable to failure (crashing or being compromised). We should aim to do better than that.
> My recollection is that, usually, it crashed more often than that. The 50 days thing was IIRC only the time for it to be guaranteed to crash (due to some counter overflowing).
I had forgotten about this issue (I never got a Windows 9x system to survive more than a few days without crashing), and apparently it was a 32-bit millisecond counter that would overflow after 49.7 days.
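For what it's worth, the arithmetic on that counter is easy to check:

```python
# A 32-bit millisecond tick counter wraps after 2**32 ms:
print(2**32 / (1000 * 60 * 60 * 24))  # -> 49.71... days
```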
> unless you restart all of userspace (at which point you might as well just reboot).
I can't speak for FreeBSD, but on my OpenBSD system hosting ssh, smtp, http, dns, and chat (prosody) services, restarting userspace is nothing to sweat. Not because restarting a particular service is easier than on a Linux server (`rcctl restart foo` vs `systemctl restart foo`), but because there are far fewer background processes and you know what each of them does; the system is simpler and more transparent, inducing less fear about breaking or missing a service. Moreover, init(1) itself is rarely implicated by a patch, and everything else (rc) is non-resident shell scripts, whereas who knows whether you can avoid restarting any of the constellation of systemd's own services, especially given their many library dependencies.
If you're running pet servers rather than cattle, you may want to avoid a reboot if you can. Maybe a capacitor is about to die and you'd rather deal with it at some future inopportune moment rather than extending the present inopportune moment.
There is a lot of OT, safety, and security infrastructure that must be run on premises in large orgs and requires four to five nines of availability. Much of the underlying network, storage, and compute infra for these OT and SS solutions runs proprietary OSs based on a BSD OS. BSD OSs are chosen specifically for their performance, security, and stability. These solutions will often run for years without a reboot. If a patch is required to resolve a defect or vulnerability, it generally does not require a reboot of the kernel, and even so these solutions usually have HA/clustering capabilities to allow for NDU (non-disruptive upgrades) and zero downtime of the IT infra solution.
It's from a bygone era. An era when you'd lose hours of work if you didn't go File -> Save (or Ctrl-S, if you were obsessive). If you reboot, you lose all of the work and configuration that you haven't saved to disk. Computers were scarce back in those days: there was one in the house, in the den, for the family. These days I've got a dozen of them and everything autosaves. But that's where it comes from.
Home computers seem more scarce to me today than they did ~25 years ago.
Sure: People have smart TVs and tablets and stuff, which variously count as computing devices. And we've broadly reached saturation on pocket-supercomputer adoption.
But while it was once common to walk into a store and find a wide array of computer-oriented furniture for sale, or visit a home and see a PC-like device semi-permanently set up in the den, it seems to be something that almost never happens anymore.
So, sure: Still-usable computers are cheap today. You've got computers wherever you want them, and so do I. But most people? They just use their phone these days.
(The point? Man, I don't have a point sometimes. Sometimes, it's just lamentations.)
> But while it was once common to walk into a store and find a wide array of computer-oriented furniture for sale, or visit a home and see a PC-like device semi-permanently set up in the den, it seems to be something that almost never happens anymore.
My experience is the opposite: due to the increasing popularity of PC gaming, furniture stores now carry gaming-oriented desks and chairs that they didn't sell before.
Are you sure you're not thinking of "SmartDay" days that are part of the SmartRate program?
Flex Alerts are a CAISO thing and ultimately about grid stability. SmartRate/SmartDay is ultimately about PG&E's marginal cost of production. The two are certainly correlated -- at the very least, a Flex Alert day is almost guaranteed to be a SmartDay.
Notably, the SmartRate program is capped at 15 days per year, and in practice PG&E will keep a few in reserve for surprise late season events, but even if there are no Flex Alert days they're still going to be called on electricity-is-expensive-even-if-the-grid-is-stable days.
It has been nearly 20 years, but my rule of thumb was that I wouldn't leave until I had done *three* review passes of the test. That is, quadruple checking: completing the exam and then reviewing my answers three times. That is pretty far into diminishing returns for catching my own errors.
That *almost* never happens, but there are exceptions -- sometimes they really do give way way more time than you need, especially if you are already strong at the material in question.
With that said, the key point is that the time tradeoff for leaving early is terrible in typical college classes that put heavy weight on exams. The first 10-20 minutes of double checking in particular is very likely worth 5+ hours of homework or study time in terms of points toward the grade.
Huh? There are plenty of good reasons to complain about the 5V5A thing, but "fire hazard" is not one of them.
It is even entirely within spec for a PD power supply to offer a 5V5A PDO, as long as it is only doing so with a 5A capable cable (i.e. 100W or 240W). 5V5A is no more a fire hazard than 20V5A.
The spec violation isn't that it negotiates 5V5A when available, but that it isn't willing to buck from 9V or 15V to get those 25W which means that power supply compatibility is incredibly limited.
It's a shame that RPi didn't just adopt a proper PD interface for power. For that matter, if they had USB-C + TB/USB4 with display support, then I could just plug it into my display without any other cables like I do my laptop, with all the peripherals connected to the display.
Any currently existing (to say nothing of two years ago) "TB/USB4" chipset would dramatically increase the price of something with a retail price on the order of $50.
With that said, DisplayPort Alternate Mode would be considerably more straightforward.
Apparently the RPi 5's SoC already supports USB-C display alt-mode... unfortunately they don't do proper PD negotiation, which would not be considerably more expensive. There are cheap vape pens that support PD negotiation properly.
Are you sure these "cheap vape pens" don't just use 5V3A, which doesn't require any PD negotiation at all? (a lot of them screw even that much up, and a lot of people confuse "PD negotiation" with simply having the right resistors on the CC pins)
There are real cost savings here -- the RPi5 avoided the need for a buck circuit, and for that matter probably a dedicated PD controller chip.
In contrast, in the context of a "cheap vape pen" you have a battery which means you need to be able to convert to (and from!) battery voltage, so you need that conversion circuitry anyway.
Even the voltage doesn't match the spec (the Pi power supply provides 5.1 volts, not 5.0 volts!). That is because historically the Pi had shitty cables, with high resistance and voltage drops. 5V5A is not even in spec; the limit for 5 volts is 3 amps!
> fire hazard than 20V5A
That would be 100 watts! Many people just grab any USB-C cable and solder it directly to the GPIO power pins. But good luck with that!
Initial batches of the Pi4 did not even have a resistor to request 3.0 amps!
The point is that power dissipation in a cable is a function of the current going through it. The cable will get exactly as hot carrying 5 amps at 5 volts as it will at 20 or 48 volts.
(now, that is more *wasteful* -- you lose the same amount of power to heat carrying 25W at 5V5A as you do at 100W 20V5A, but that's 4x the relative waste in power)
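To put numbers on it (using a made-up 0.1 ohm round-trip cable resistance purely for illustration):

```python
# Cable heating depends only on current: P_loss = I^2 * R.
R = 0.1  # ohms; hypothetical round-trip cable resistance
for volts, amps in [(5, 5), (20, 5)]:
    loss = amps**2 * R  # identical 2.5 W in both cases
    total = volts * amps
    print(f"{volts}V{amps}A: {loss:.1f} W of heat, "
          f"{loss/total:.1%} of the {total} W negotiated")
# -> 5V5A:  2.5 W of heat, 10.0% of the 25 W negotiated
# -> 20V5A: 2.5 W of heat,  2.5% of the 100 W negotiated
```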
> Many people just grab any USB-C cable and solder it directly to the GPIO power pins.
You're not going to get *any* 5 amp mode out of a standard PD power supply unless the cable indicates it is 5 amp capable, which isn't going to happen unless that "any usbc cable" has the right emarker on it.
> limit for 5 volts is 3 amps.
There is no such limit.
What there is is two things:
1. There is a standard set of PDOs that a standard "X watt" PD power supply is supposed to provide: 5V3A, 9V3A, 15V3A, and 20V5A (then 28, 36, and 48 volts for EPR), with the highest one limited by the power limit of the supply. These only go up to 3 amps until you get to 20 volts.
2. Devices are supposed to support those standard PDOs.
Anything other than those standard PDOs is optional (at least before 3.2 which starts introducing AVS as a requirement at 27W+). 12V support is common, as for that matter is PPS support. 5A support below 20V in fixed PDOs is 100% allowed but is super rare.
(5A lower voltage PPS is a different story, but unfortunately the RPi5 doesn't know how to negotiate 5V PPS. That is a shame because it would 100x its power supply compatibility because most chargers targeting higher end Samsung phones support it.)
A power supply is 100% allowed to support 5V5A. It just isn't required to. It would have been 100% legitimate for the RPi5 to have a buck circuit to handle a standard 27W 9V3A power supply and then turn that buck off if the power supply and cable support 5V5A.
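A sketch of that sink-side policy (hypothetical PDO tuples and function names, not a real PD stack):

```python
# Hypothetical sink policy: take 5V5A with the buck bypassed when the
# supply offers it and the cable's emarker says 5A, otherwise fall
# back to a standard PDO and buck it down to 5V.
def pick_pdo(source_pdos, cable_rated_5a):
    offers = set(source_pdos)  # fixed PDOs as (volts, max_amps)
    if cable_rated_5a and (5, 5) in offers:
        return (5, 5), "buck bypassed"
    if (9, 3) in offers:
        return (9, 3), "buck 9V -> 5V, 27W"
    return (5, 3), "buck bypassed, limited to 15W"

# A standard 27W supply (5V3A, 9V3A) with an ordinary 3A cable:
print(pick_pdo([(5, 3), (9, 3)], cable_rated_5a=False))
# -> ((9, 3), 'buck 9V -> 5V, 27W')
```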
> Initial batches of the Pi4 did not even have a resistor to request 3.0 amps!
To be precise, it had *a* resistor (connected to the shorted together CC pins) when it was supposed to have one separate resistor for each pin, and that broke cables with emarkers.
At the very least, the benefits of QUIC are very very dubious for low RTT connections like inside a datacenter, especially when you're losing a bunch of hardware support and moving a fair bit of actual work to userspace where threads need to be scheduled etc. On the other hand Cloudflare to backend is not necessarily low RTT and likely has nonzero congestion.
With that said, I am 100% in agreement that the primary benefits of QUIC in most cases would be between client and CDN, whereas the costs are comparable at every hop.
Is CF typically serving from the edge, or from the location nearest the origin server? I imagine it would be from the edge so that it can CDN what it can. So... most of the time it won't be a low-latency connection from CF to the backend. Unless your backend is globally distributed too.
The somewhat moribund Foresight Exchange, a ~30-year-old play-money idea-futures market, has discussed this idea a lot over the years, even to the point of having a number of "True" claims of exactly the form described in the article, such as http://www.ideosphere.com/fx-bin/Claim?claim=T2015