That's the gist of his whole talk – that doing things "the UNIX way" (which can be defined to various degrees of specificity) has been cargo-culted, and that we should reexamine whether solutions that were pragmatic 50+ years ago are still the best we can do.
The specific reason I mentioned it was that his initial example was about how much extra ceremony and boilerplate are needed when you have to pretend that USB interfaces are actually magic files and directories.
Not sure if I'm just misunderstanding the article, but it feels like an overengineered solution, reminiscent of SAML's replacement instructions (a hardcoded and admittedly much better option --- but still in a similar vein of "text replacement hacks").
I know it's not the most elegant thing ever, but if it needs to be JSON at the post-signing level, why not just use something like `["75cj8hgmRg+v8AQq3OvTDaf8pEWEOelNHP2x99yiu3Y","{\"foo\":\"bar\"}"]` --- in other words, encode the JSON being signed as a string.
This would then ensure that, even if the "outer" JSON is parsed and re-encoded, the string is unmodified. It'll even survive weird parsing and re-encoding, which the regex replacement option might not (unless it's tolerant of whitespace changes).
(or, for the extra paranoid: encode the latter to base64 first and then as a string, yielding something like `["75cj8hgmRg+v8AQq3OvTDaf8pEWEOelNHP2x99yiu3Y","eyJmb28iOiJiYXIifQ"]` --- this way, it doesn't look like JSON anymore, for any parsers that try to be too smart)
If the outer needs to be an object (as opposed to an array), this is also trivially adapted, of course: `{"hmac":"75cj8hgmRg+v8AQq3OvTDaf8pEWEOelNHP2x99yiu3Y","json":"{\"foo\":\"bar\"}"}`.
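A rough sketch of what I mean, in Python (the key and the "hmac"/"json" field names are just placeholders for illustration):

```python
import hashlib
import hmac
import json

SECRET = b"shared-secret"  # placeholder key, purely for illustration

def wrap(payload: dict) -> str:
    # Serialize the inner object once; these exact bytes are what gets signed.
    inner = json.dumps(payload, separators=(",", ":"))
    tag = hmac.new(SECRET, inner.encode("utf-8"), hashlib.sha256).hexdigest()
    # The signed text travels as an ordinary JSON string, so re-parsing and
    # re-encoding the outer object can't change the bytes that were signed.
    return json.dumps({"hmac": tag, "json": inner})

print(wrap({"foo": "bar"}))   # -> {"hmac": "...", "json": "{\"foo\":\"bar\"}"}
```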
You can, and this will be simple and reliable... but that's solving a different (and easier) problem than the one in the post. In the post, the author still wants to have parsable JSON _and_ a signature. Think middleware which can check the signature but cannot alter the contents, followed by a backend expecting nice JSON. Or a logging middleware which looks at individual fields. Or a load balancer which checks the "user" and "project" fields. Or a WAF checking for the right fields. In other words:
> Anyone who cares about validating the signature can, and anyone who cares that the JSON object has a particular structure doesn’t break (because the blob is still JSON and it still has the data it’s supposed to have in all the familiar places).
As the author mentions, you can compromise by having "hmac", "json" and "user" (for routing purposes only), but this will increase the overall size. This is approach 2 in the blog.
> in other words, encode the JSON being signed as a string. This would then ensure that, even if the "outer" JSON is parsed and re-encoded, the string is unmodified. It'll even survive weird parsing and re-encoding, which the regex replacement option might not (unless it's tolerant of whitespace changes).
Would it be guaranteed to survive even standard parsing?
It wouldn't surprise me at all, for example, if there are JSON parsers out there that, on reading, map `\u0009` and `\t` to the same string, so that they can only round-trip one of those strings. Similarly, there's the pair of `\uabcd` and `\uABCD`. There probably are others.
Presumably when receiving the object, you'd first unescape the string (which should yield a unique output unless you have big parser bugs), check the UTF-8 bytes of the unescaped string against the signature, and only then decode the unescaped string as the inner JSON object. It shouldn't matter how exactly the string is escaped, as long as it can be unescaped successfully.
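Right --- roughly like this (Python sketch, assuming the `{"hmac": ..., "json": ...}` shape from upthread and a shared placeholder key):

```python
import hashlib
import hmac
import json

SECRET = b"shared-secret"  # same placeholder key as on the signing side

def unwrap(wire: str) -> dict:
    outer = json.loads(wire)   # however the outer object was re-escaped or
    inner = outer["json"]      # re-spaced, this yields the original inner string
    expected = hmac.new(SECRET, inner.encode("utf-8"), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, outer["hmac"]):
        raise ValueError("bad signature")
    return json.loads(inner)   # parse the inner JSON only after the check passes
```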
There are many ways to represent the JSON as binary… and all are equally valid. The easiest case to think about is with and without whitespace. Because what HMAC cares about are the byte[] values, not alphanumeric tokens.
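For instance (quick Python illustration, placeholder key):

```python
import hashlib
import hmac
import json

key = b"shared-secret"  # placeholder key
obj = {"foo": "bar"}

compact = json.dumps(obj, separators=(",", ":"))   # '{"foo":"bar"}'
spaced = json.dumps(obj)                           # '{"foo": "bar"}'

# Same JSON value, different bytes, therefore different MACs.
print(hmac.new(key, compact.encode(), hashlib.sha256).hexdigest())
print(hmac.new(key, spaced.encode(), hashlib.sha256).hexdigest())
```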
Then, if you couple this with sending data through a proxy (maybe invisible to the developers), which may or may not alter that text representation, you end up with a mess. If you base64 encode the JSON, you now lose any benefit you might gain from those intermediate proxies, as they can’t read the payload…
base64 is often even larger than an escaped JSON string, and not human-readable at all.
I'll take stringified json-in-json 90% of the time, thanks. If you're using JSON, you're already choosing an inefficient, human-oriented language anyway; a small bit more overhead doesn't hurt.
(Obviously neither of these is a good option; just defer your parsing so you retain the exact byte sequences while checking, and then parse the substring. You shouldn't be parsing before checking anyway. But when you can't trust people to do that...)
I've recently moved from Svelte (initially 4, then 5) to Vue 3, and much prefer it.
The big issue for me was the lack of support for nested observables in Svelte, which caused no end of trouble; plus a lack of portals (though maybe the new snippets fix that?).
Mind, you'll be unable to send emails to Microsoft-owned accounts (@outlook.com, @hotmail.com, and similar).
That's because Microsoft, in their infinite wisdom, decided that a reasonable default was to use a whitelist of allowed senders, blocking everyone else by default.
There is supposedly a process to get that unlocked, but they never replied to my own request ...
My own bank used to have [A-Za-z0-9] passwords with a character limit + browser certificates. I thought the former was pretty bad (and it's always alarming, as it tends to imply they're storing it in plaintext if they even care what the characters are ...).
Then they got bought by another bank ... and now, they require *6-digit* PINs/passwords + no certificates. Yes, there's 2FA involved now, but seriously, 6 digits?
This is exactly the approach I'm planning on for my own language.
I have a 3-way split:
1) core.* - basic support things like builtin traits, etc; this is the only namespace/prefix that is "special" to the compiler
2) std.* - things you might see in a minimalist stdlib. String manipulation, some useful algorithms, math, etc.
3) etc.* - technically part of the package ecosystem, but officially supported. All the usual suspects like advanced I/O, networking, base64, and whatnot would be found here. Making it part of package management means it can be versioned independently from the compiler and, in principle, even have breaking changes between major versions (not that I'm planning to, but design mistakes happen)
I find that, at least with dentists, the quality seems to be an inverse function of their experience.
I've gone through a lot of dentists recently (long story, but nothing to do with the quality of dental work), and I've consistently found that the younger/"inexperienced" dentists use more modern/advanced[^1] techniques, whereas the older ones tend to favor sticking to what they learned in medical school years ago, plus an occasional conference or such, as opposed to their very foundation being built on more up-to-date knowledge.
[^1] Unlike in software, this often translates to "better", at least from my experience as a patient.
---
Apparently this is somewhat of a problem in computer science for 50-something-year-olds, who can sometimes find it hard to land a job. Companies prefer younger, more "malleable" candidates.
---
There is also the general fact of life that experience often brings hubris & arrogance. This is definitely not always true, but it's another case where more experience is actually worse.
The following is anecdotal but I have to mirror your observation.
I've seen both sides of the coin where a son followed in his father's footsteps. The father was an old, stodgy pain in the ass with ancient practices.
The son opened his own practice with modern offices and a lot of software-based systems, both for office work and patient care.
The son was decent and the technology helped, but the best dentist I've had was another super old dentist who ended up adopting some of the tech while also being an absolute magician in his work due to his experience. His ability to ascertain edge cases from things such as a cavity X-ray really made him a top-tier dentist. That's something tech cannot always make up for. It's raw intuition from years of experience. It was a heartbreaking event when he decided to retire. :/
There is a good argument to be had for younger dentists adopting new technology and learning the latest skills, but it isn't always a perfect fit.
The best dentist seems to be the old person open to new ideas (in my experience).
I always imagine that people working in dentistry end up getting incredibly jaded. There's only so much neglect of basic personal care that anyone can face. Over and over again. And people don't listen to the recommendations because it is all going to be fixed by the dentist.
The younger dental professionals I have come across tend to have real passion and make me go "oh wow, they are really into this and inspiring!". Which seems to wear off over time. Maybe it's just me projecting from what it's like to work with computers because it certainly feels familiar.
I've more or less accepted that dentists and other related fields are mostly licensed so they
1. don't kill you doing something stupid
2. if they make a big mistake, know how to deal with it and get help immediately
This isn't a jab at dentists at all, as I feel like I've been lucky enough to have good dentists that did great work at reasonable fees.
I've also dealt with:
1. an orthodontist that apparently just ground the enamel off my teeth while I was a child, for cosmetic reasons
2. a dentist that somehow cut a square pocket into my tooth to treat a tiny cavity. This caused stress fractures in the tooth originating from the corners. The dentist who repaired it even said "I do not understand how this was even done. It should not be possible to do with normal dentistry tools"
I don't have a dog in this fight (there are benefits of experience and also benefits of fresh ideas), but in no way does the parent comment prove the reverse of what they're arguing. It is not an example of an exception proving the rule.
Anecdata ahead: I'm about to leave a dentist after 15 years because I feel more and more like a cow with an insurance payout for udders. She used to be good, but now it seems like she's just making sure there is not a penny of insurance money left at the end of the year. Still got $50 left? Hey, how about some fluoride goo before you go, because "insurance pays for it".
One thing to consider if there is a shift in this behavior at a small independent practice you use (like dentist or veterinarian offices) is the aggressive acquisition of these firms by private equity over the last few years. Oftentimes the owners / partners get a huge payout and stay on as the face of the practice while taking marching orders from the new owners to maximize profit over quality of care. This acquisition is often invisible to patients / customers except insofar as quality of care declines, people at the office seem more stressed out, and more procedures get ordered that you don’t have the expertise to assess the need for.
I'm wondering if a similar progression is happening with MDs. OneMedical <-> Amazon (corporate rather than PE), for example. Current provider was solo, then went to OneMedical, and now is an Amazon employee (indirectly).
Familiarity breeds contempt. My 3rd-to-last DDS drilled a cavity freehand in like 5 minutes without being all that careful. Seemed like the goal was maximizing the number of procedures while minimizing the amount of care and time spent.
I'm working towards my own language, and I'm sort of stuck on this issue too. Namely:
1. Should the "default" (syntactically sugared) arrays in the language (e.g. `int[]`) be restricted to simple contiguous storage, or should they also hold a (potentially negative) stride, etc.?
2. Should I have multidimensional arrays? (to be clear, I'm talking about contiguous blocks of data, not arrays-holding-pointers-to-arrays)
3. Should slicing result in a view, or a copy?
(naturally, some of these options conflict)
I'm honestly leaning towards:
1. "fat" arrays;
2. multidimensional arrays;
3. slicing being views.
This allows operations like reversing, "skipping slices" (e.g. `foo[::2]` in Python), and even broadcasts (`stride=0`) to be represented by the same data structure.
It's ostensibly even possible to transpose matrices and such, all without actual copying of the data.
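As a minimal sketch of the bookkeeping I have in mind (1-D only, Python purely to illustrate --- none of this is my language's actual syntax or design):

```python
from dataclasses import dataclass

@dataclass
class View:
    data: list    # backing storage, shared between views
    offset: int   # index of the first visible element
    length: int   # number of visible elements
    stride: int   # step between elements; may be 0 (broadcast) or negative

    def __getitem__(self, i: int):
        return self.data[self.offset + i * self.stride]

    def reversed(self) -> "View":
        # Reversal just points at the last element and negates the stride.
        return View(self.data, self.offset + (self.length - 1) * self.stride,
                    self.length, -self.stride)

    def every(self, step: int) -> "View":
        # "Skipping slices" like foo[::2] only touch length and stride.
        return View(self.data, self.offset,
                    (self.length + step - 1) // step, self.stride * step)

xs = View([10, 20, 30, 40, 50], offset=0, length=5, stride=1)
print([xs.reversed()[i] for i in range(5)])   # [50, 40, 30, 20, 10]
print([xs.every(2)[i] for i in range(3)])     # [10, 30, 50]
```

A multidimensional version would just carry a (shape, stride) pair per dimension --- which is exactly the extra `size_t` per dimension I mention below.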
But there are good arguments for the reverse, too. E.g. "fat" arrays need 1 more `size_t` of storage per dimension (to store the stride), compared to "thin" arrays. And a compiler can optimize thin ones much better due to more guarantees (for example, a copy can use vector instructions instead of having to check stride first). Plus "thin" arrays are definitely the more common scenario and it simplifies the language.
So, yeah, still not quite sure despite liking the concept. One obvious option would be for `int[]` to be a thin array while also offering `DataView<int>` [1]. The problem with that is that people implementing APIs will default to the easy thing (`int[]`) instead of the more-appropriate-but-more-letters `DataView<int>`. Ah, dilemmas.
[1] (not actual syntax, but I figured I'd use familiar syntax to get the point across)
"Everything is a file" is not a bad abstraction for some things. It feels like Linux went the route of a golden hammer here.