One of the neater aspects of HP-UX is that, given the breadth of pre-2000s HP, HP-UX ran on a number of different devices. You can almost (_almost_ - it's a stretch) think of it as a precursor to how Linux proliferated on routers and smartphones.
While you'd _expect_ to find HP-UX racked in a datacenter, you can also find it on workstations, where its proprietary VUE desktop environment eventually morphed into CDE (which, ironically, I've only ever used on Solaris).
It powered at least one early, pre-laptop-form-factor portable PC, the HP Integral. And you can also find it running on oscilloscopes, logic analyzers, and other test equipment from the 80s and 90s.
Doing connected work from the subway has gotten much, much easier in the last few years. I attribute that to three things:
1. Cell service has become low-latency. This is very different from "fast", which it has also become! When I started working from the train (on HSPA+), pings in the hundreds of milliseconds were the norm. My first step was usually to SSH into a remote machine and just let the text lag. Nowadays, I can run a Web browser locally without issue.
2. Cell service has, at the same time, become ubiquitous in subway tunnels. When I started, there were some areas that dropped down to EDGE (unusable), and some areas that had no service at all. Now, there is exactly one place on the Boston transit system - Back Bay Station - where I lose cell service.
3. Noise cancelling tech has gotten better. It's not just about noise cancelling headphones: both of my laptops (a 2024 MBP and a ThinkPad P14s) have microphones that can filter out screeching wheels and noisy teenagers quite well. That means I can take meetings without making them miserable for the people on the other end.
These, honestly, have been a huge game-changer for me. The ability to take a 30-minute meeting while commuting, where otherwise I would've had to get in early or stay late at work, does wonders for my ability to have a life outside of work.
> 2. Cell service has, at the same time, become ubiquitous in subway tunnels.
Not in New York, unfortunately. All of the stations have cell service, and one tunnel (14th Street L train tunnel under the East River), but everywhere else has no service between stations. It’s an annoying limitation that most cities seem to have fixed by now.
Sadly, you don't even need to engage directly with these companies to be affected. Case in point: e-mail.
I host my own e-mail. Valid SPF, not on any spam blacklists, good reputation score on my static IP.
At the beginning of November, I lost the ability to send e-mail to Gmail - it was all rejected as, quote, "possibly spammy". Double-checked SPF and DMARC... Double-checked documentation... Spent time setting up DKIM on my mail server, even though I send nowhere near enough mail to merit it. Nothing got through for two weeks.
Google Postmaster Tools were totally unhelpful, telling me _that_ I was being blocked, but not _why_ I was being blocked. There is a community support forum where I posted - it hasn't seen a response since I posted in November. There was also a support portal where I could, in theory, contact a human. I sent something in there, and am still awaiting a reply.
Now remember, Gmail isn't just for @gmail.com addresses. Gmail hosts my accountant's domain. Gmail hosts the domain for a club that I'm part of. Gmail hosts friends who also have their own domains. Gmail hosts... well, probably a solid half of the Internet's e-mail.
My only way out of this nightmare was to reach out to a contact at Google, who - having an @google.com e-mail - was also unable to receive e-mail from me, and made the case to the right folks internally that I couldn't send important messages to him. A few days later, I could magically send e-mail to Google again.
Do I have any idea what I did? No. Do I have any idea what they resolved? Also no. Can I prevent it in the future? Who knows!
I'm increasingly of the opinion that the modern practice of not telling people why they've been blocked -- or even that they've been blocked -- was devised by sadists to satisfy their proclivity.
The core of the flaw is that actual fraudsters and spammers are repeat players and ordinary people aren't. The bad guys expect to be blocked, so they test for it. They check if their messages are getting through and then notice immediately when they stop. Whereas real people expect their messages to go through, because why wouldn't they when they've done nothing wrong? And then become isolated and depressed because it seems like everyone they know is suddenly ignoring them.
The bad guys create thousands of accounts and play multi-armed bandit, so when some of them get blocked they can identify why by comparing them to the ones that didn't, or create new ones and try new things until something works, and thereby learn what not to do. Whereas real people have no idea what sort of thing is going to arouse the Dalek either before or after their primary account is exterminated.
So it's a practice that creates a large increase in the false positive rate (normal people have no way to know how to avoid it) in exchange for a small decrease in the false negative rate (bad guys figure it out quickly). In a context where false positives cost a zillion times more than false negatives because the bad guys treat accounts as a fungible commodity they acquire in bulk whereas innocent people often have their whole lives tied to one account.
And all of that is only disguising the real problem, which is that people get blocked having done nothing wrong. If you were expected to point them to the spam they sent or the fraud they attempted then you wouldn't be able to do it when they'd done no such thing, and then "we can't tell anyone because it would help the bad guys" is used to paper over the fact that you couldn't tell them regardless. When the decision was made by an opaque AI and then reviewed by no one, there isn't actually a reason, there's just a machine that turns you off.
Towards the end of using self-hosted email at $dayjob, a couple of years ago now, Google started bouncing [some of] our email.
The headers of the bounce messages included a description of the problem (as they perceived it) and a link for background reading.
I never followed up on it personally (that wasn't my job anymore because reasons), but the bounces seemed descriptive-enough for someone who was paid to care about it to make it work.
I also host my own email. In my case, Google always routes the first email I send to a new Gmail address to spam. After the recipient marks the email as good, future emails are received as expected. The only way around this is to advise the recipient via Gmail that I've sent them an email via a different route, so that they can check their spam folder and mark the email as good. This has been going on for at least two years.
Basically, Google are shadow-banning me till they get caught. I think this should be illegal.
> All if, else if constructs will contain either a final else clause or a comment indicating why a final else clause is not necessary.
I actually do this as well, but in addition I log out a message like, "value was neither found nor not found. This should never happen."
This is incredibly useful for debugging. When code is running at scale, nonzero probability events happen all the time, and being able to immediately understand what happened - even if I don't understand why - has been very valuable to me.
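A minimal sketch of that pattern (hypothetical names, and Rust just for illustration):

    fn describe(count: i64) -> &'static str {
        if count > 0 {
            "found"
        } else if count == 0 {
            "not found"
        } else {
            // Should never happen for a count, but log it so production
            // occurrences are immediately visible, even before the "why" is known.
            eprintln!("count was negative ({count}); this should never happen");
            "invalid"
        }
    }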
I like Rust's matching for this reason: you need to cover all branches.
In fact, not using a default (the else-clause equivalent) is ideal if you can explicitly cover all cases, because then if the possibilities expand (say, a new value in an enum) the compiler will nag you to cover the new case, which might otherwise slip by.
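A minimal Rust sketch of that (hypothetical enum; the point is the absence of a `_` arm):

    enum State {
        Idle,
        Running,
    }

    fn label(s: State) -> &'static str {
        match s {
            State::Idle => "idle",
            State::Running => "running",
            // No catch-all arm: if a new variant (say State::Paused) is added
            // later, this match stops compiling until the new case is handled.
        }
    }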
Rust is a bit smarter than that, in that it checks exhaustiveness of possible states for more than just enums:
    fn g(x: u8) {
        match x {
            0..=10 => {},
            20..=200 => {},
        }
    }
That for example would complain about the ranges 11 to 19 and 201 to 255 not being covered.
You could try to map ranges to enum values, but then nobody would guarantee that you covered the whole range while mapping to enums so you’d be moving the problem to a different location.
Rust's approach is not flawless: for larger data types like i32, or for floats, it can't check full coverage (I suppose for performance reasons), but it's still quite useful.
In principle, C compilers can do this too (https://godbolt.org/z/Ev4berx8d), although you need to trick them into doing it for you. This could certainly be improved.
The compiler also tells you that even if you cover all enum members, you still need a `default` to cover everything, because C enums allow non-member values.
Same. I go one step further and create a macro _STOP, which is defined as whatever your language's DebugBreak() is. And if it's really important, _CRASH (this forces me to fix the issue immediately).
That is not the same thing at all. Unreachable means that entire branch cannot be taken, and the compiler is free to inject optimizations assuming that's the case. It doesn't need to crash when that assumption is violated - indeed, it probably won't. It's the equivalent of having something like
    x->foo();
    if (x == null) {
        return error…;
    }
This literally caused a security vulnerability in the Linux kernel: because it's UB to dereference null (even in the kernel, where engineers assumed it had well-defined semantics), the compiler elided the null pointer check, which then created the vulnerability.
I would say that using unreachable() in mission-critical software is super dangerous, more so than an allocation failing. You want to remove all potential for UB (i.e. safe Rust with no or minimal unsafe, not sprinkling in UB as a form of documentation).
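For illustration, a hedged Rust sketch of the difference between the safe and UB flavors (the std items are real; the surrounding functions are made up):

    fn bucket(x: u8) -> u8 {
        match x {
            0..=10 => 0,
            11..=200 => 1,
            // Safe: panics if ever reached, so a violated assumption fails loudly.
            _ => unreachable!("x out of expected range: {x}"),
        }
    }

    fn bucket_ub(x: u8) -> u8 {
        match x {
            0..=10 => 0,
            11..=200 => 1,
            // UB if ever reached: the compiler may assume this arm is impossible
            // and optimize on that basis, much like the elided null check above.
            _ => unsafe { std::hint::unreachable_unchecked() },
        }
    }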
You're right, the thing I linked to does exactly that. I should have read it more closely.
The projects that I've worked on unconditionally define it as a thing that crashes (e.g. `std::abort` with a message). They don't actually use that C/C++ thing (because C23 is too new), and apparently it would be wrong to do so.
For many types of projects and approaches, avoiding UB is necessary but not at all sufficient. It's perfectly possible to have critical bugs that can cause loss of health or life or loss of millions of dollars, without any undefined behavior being involved.
Funnily enough, Rust's pattern matching, an innovation among systems languages without GCs (a small space inhabited by languages like C, C++ and Ada), may matter more regarding correctness and reliability than its famous borrow checker.
Possibly, I am not sure, though Delphi, a successor language, doesn't seem to advertise itself as having pattern matching.
Maybe it is too primitive to be considered proper pattern matching, as pattern matching is known these days. Pattern matching has actually evolved quite a bit over the decades.
I likewise have a circa 1997 LaserJet that I refuse to give up. Both the printer and scanner still function flawlessly, every time I need them to - something that few printers today seem capable of.
I switched to 64-bit Windows in 2006. The printer supports PCL drivers, but there are no 64-bit drivers for the scanner. Luckily, I was able to keep it going by running 32-bit Windows in a VM, and passing the parallel port through.
I switched to a laptop without a parallel port in 2019 (thank you, Lenovo, for keeping the parallel port on docks as long as you did). At that point, I bought a JetDirect that supports both printing and scanning over the network. CUPS and SANE both support it out of the box.
Those 90s LaserJets were genuinely incredible, and aside from (understandably) dog-slow PostScript processing, I think they were a pinnacle of office printer engineering.
We had one keep on trucking for... geez, as far as I'm aware it's still out there.
Much like the author, I consider myself to not use my phone too much. That said, it's probably just as far from the truth for me as it is for him.
Microsoft Authenticator is the biggest offender that comes to mind - without it, I cannot work. My company requires that we share our location to access systems (it's to enforce compliance controls that data stays in the country), so I can no longer use an offline MFA strategy like a U2F token or a TOTP key - I _have_ to use Microsoft Authenticator.
This seems to be how a lot of modern history is found.
I recently got to talk to a big-ish name in the Boston music scene, who republished one of his band's original 1985 demos after cleaning the signal up with AI. He told me that he found that tape in a bedroom drawer.
There's a certain meaningfulness ascribed to deliberately taking time for something.
I actively listen to a vinyl record when I cue it up. I let the radio sputter in the background while I work.
I actively read a book when I have a night or weekend to myself. I let Hacker News articles go in one ear and out the other, even if I tell myself I spent some time reading before bed.
I actively figure out what's going on in the world when "what's going on in the world" becomes too dire for me to ignore. I fall asleep to the 10:00 news.
It surprises me _not in the least_ that I'd spend time with something that I want to make time for, and not just something I've allowed to become part of my routine.
I wrote a web application during an internship, circa 2011. I had no existing platform/framework to work with, no mentorship (the team wasn't really prepared to support an intern), and most importantly, an Apache web server running in Cygwin, with no PHP runtime installed. No one so much as told me what language I'd be writing at this job.
The Web development I'd done up to that point consisted of raw HTML/CSS, with some ASP.NET or PHP running on the backend. I'd never written a line of JavaScript in my life.
It was at this point that I "discovered" a winning combination: HTML, CSS, and JavaScript running in the user's browser. The backend was a set of C# applications which wrote to standard out, and which could be invoked directly by Apache's mod_cgi, since C# compiles down to Windows executables. There were countless better solutions at the time - ASP.NET and PHP (which I'd already used), FastCGI, WSGI, and others were all a thing by then - but I'd never heard of them.
I outputted a JavaScript object (I had no idea what JSON was at the time, or that I was effectively outputting it) and read it back into the browser using a thin wrapper around XMLHttpRequest. I then iterated over the output and transformed the data into tables. jQuery was a thing at that point, but likewise, I'd never heard of it.
Say what you will about the job, the team, the mentorship (or lack thereof) - it took them three months before they realized I'd written C# at a Java shop, and by that point the thing was already being used widely across engineering.
The important takeaway: that "winning combination" of some minimal JavaScript and CGI was the perfect ratio of simple, approachable, and powerful to enable me to finish the task at hand, and in a way that (at least until anybody saw the architecture) everybody was thrilled with. It didn't require a deeper understanding of a framework to bootstrap it from nothing. Write an HTTP response to standard out, formatted as an object, and you were on your way.
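To make that shape concrete, here's a hedged sketch of such a CGI program - the original was a C# console app behind Apache's mod_cgi; this is just an illustrative Rust equivalent, not the actual code:

    fn main() {
        // mod_cgi passes request metadata via environment variables.
        let query = std::env::var("QUERY_STRING").unwrap_or_default();

        // A CGI response is just headers, a blank line, then the body on stdout.
        println!("Content-Type: application/json");
        println!();
        // Sketch only: real code would escape `query` before embedding it in JSON.
        println!("{{\"query\": \"{}\", \"rows\": [1, 2, 3]}}", query);
    }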
This architecture is also wonderful for diagnosis and investigation. You have effectively broken down the problem into CLI tools that can be independently tested and invoked with problematic requests.
It is also the pattern many APIs and function-based architectures are embracing again, partly for that reason (directly, or as a side effect of good decoupling being a core goal).
I don't think such usage is malicious, so much as ignorant - it's sometimes hard to know that a behavior _isn't_ part of the API, especially if the API is poorly documented to begin with.
I maintain a number of such poorly-documented systems (you could, loosely, call them "APIs") for internal customers. We've had a number of scenarios where we've found a bug, flagged it as a breaking change (which it is), said "there's _no way_ anybody's depending on that behavior", only to have one or two teams reach out and say yes, they are in fact depending on that behavior.
For that reason, many of those types of changes end up shipping with a "bug flag". The default is the correct behavior; the flag switches back to the buggy behavior, to keep the internal teams happy. It's then up to us to drive the users to change their ways, which... doesn't always happen efficiently, let's say.
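As a hedged sketch of what such a flag can look like (names invented; the real systems obviously differ):

    struct Options {
        // "Bug flag": opt back into the pre-fix behavior until dependent teams migrate.
        legacy_rounding: bool, // defaults to false, i.e. the corrected behavior
    }

    fn round_to_dollars(cents: i64, opts: &Options) -> i64 {
        if opts.legacy_rounding {
            // Old, buggy behavior preserved verbatim for teams that depend on it:
            // truncates instead of rounding.
            (cents / 100) * 100
        } else {
            // Corrected behavior: round to the nearest dollar.
            ((cents + 50) / 100) * 100
        }
    }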