Many years ago, I was leading a team that implemented a hyperfast XML, XSLT, and XPath parser/processor from the ground up in C/C++. This was for a customer project. It also pulled off some pretty neat optimizations, like binary XSLT compilation, both in-memory and filesystem caching, and threading. On the server side, you could often skip the template file loading and parsing stages, parallelize processing, and do live, streaming generation. There was also a dynamic plugin extension system and a push/pull event model. The benchmarks were so much better than what was out there. Plus, it was embeddable in both server and client app code.
Would have been great if it had been open-sourced, but they paid for all the development and owned the codebase. They wanted to use it to dynamically generate content for every page for every unique device and client that hit their server. They had the infrastructure to do that for millions of users. The processing could be done on the server for plain web browsers or embedded inside a client binary app, so live rendering to native could be done on-device.
Back then, it was trivial to generate XML on the fly from a SQL database, then send that back as-is, or render it to XHTML or any other custom presentation format via XSLT. Through XSD schemas, the format was self-documenting and could be validated. XSLT also helped push standardization on XHTML and harness the chaos of mismatched HTML versions across browsers. It was also a great way to inject semantic web tags into the output.
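Just to give a flavor of that pipeline, here's a minimal sketch using Python's lxml; the XML payload and the stylesheet are invented for illustration, not from the original engine:

```python
# Minimal sketch: render ad-hoc XML to XHTML via XSLT with lxml.
# The document and stylesheet below are made-up examples.
from lxml import etree

xml_doc = etree.XML(b"""
<products>
  <product><name>Widget</name><price>9.99</price></product>
</products>
""")

xslt_doc = etree.XML(b"""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/products">
    <html xmlns="http://www.w3.org/1999/xhtml">
      <body>
        <ul>
          <xsl:for-each select="product">
            <li><xsl:value-of select="name"/>: <xsl:value-of select="price"/></li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt_doc)   # compile the stylesheet once, reuse per request
print(str(transform(xml_doc)))     # XHTML out
```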
But I always thought it got dragged down with the overloaded weight of SOAP. Once REST and Node showed up, everyone headed for new pastures. Then JS in browsers begat SPAs, so rendering could be done in the front-end. Schema validation moved to ad-hoc tools like Swagger/OpenAPI. Sadly, we don't really have a semantic web alternative now and have to rely on best guesses via LLMs.
For a brief moment, it looked like the dream of a hyper-linked, interconnected, end-to-end structured, realtime semantic web might be realizable. Aaand, then it all went poof.
TL;DR: The XML/XSLT stack nailed a lot of the requirements. It just got too heavy and lost out to lighter-weight options.
In a past life I had a Wall of Shame of headlines on firmware update fails.
The lesson was that you built firmware updates up front, right into your development process, so they became a non-event. You put in lots of tests, including automatic verification and rollback recovery. You made it so everyone was 100% comfortable pushing out updates, like every hour. It wasn't this big, scary release thing.
You did binary deltas so each update was small, and trickle-downloaded during downtime. You did A/B partitions, or if you had the flash space, A/B/C (the current firmware, the new update, and the last known good image). Bricking devices and recalls are expensive and cause reputational damage. Adding OTA requires WiFi, BLE, or cellular, which increases BOM cost and backend support. The trade-off is manual updates requiring dealership visits or on-site tech support calls with USB keys; that doesn't scale well. For consumer devices, it leads to lots of unpatched, out-of-date devices, increasing support costs and legal risk. OTA also lets you push out in stages and do blue-green deployment testing.
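To make the A/B(/C) idea concrete, here's a toy sketch of slot selection with rollback; the field names and attempt limit are assumptions for illustration, not any specific bootloader's API:

```python
# Toy sketch of A/B/C boot-slot selection with automatic rollback.
# Field names and the attempt limit are invented for illustration.
from dataclasses import dataclass

MAX_BOOT_ATTEMPTS = 3

@dataclass
class Slot:
    name: str
    verified: bool                 # signature/checksum already checked
    boot_attempts: int = 0         # bumped by the bootloader each try
    confirmed_good: bool = False   # set once the app reports itself healthy

def pick_boot_slot(active: Slot, candidate: Slot, golden: Slot) -> Slot:
    """Try the freshly written update until it proves itself, else fall back."""
    if candidate.verified and candidate.boot_attempts < MAX_BOOT_ATTEMPTS:
        candidate.boot_attempts += 1
        return candidate           # boot the new firmware
    if active.confirmed_good:
        return active              # roll back to the previous working firmware
    return golden                  # last resort: the "last known good" image
```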
For security, you had on-device asymmetric keys and signed each update, then rotated the keys so if someone reverse-engineered the firmware, it wouldn't be a total loss. Ideally, add a TPM to the BOM with multiple key slots and a hardware encryption engine. Anyone thinking about shipping unencrypted firmware or baking symmetric encryption keys into firmware should be publicly flogged.
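As a rough illustration of the signing flow (a sketch, not the original setup), here's Ed25519 sign/verify with the Python cryptography package; key storage, rotation, and TPM integration are left out:

```python
# Sketch: sign an update image on the build server, verify it on the device.
# Uses the `cryptography` package's Ed25519 primitives; this is illustrative,
# not a drop-in secure-boot implementation.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Build side: the private key stays on the signing server / HSM.
signing_key = ed25519.Ed25519PrivateKey.generate()
firmware_image = b"...update payload..."
signature = signing_key.sign(firmware_image)

# Device side: only the public key is baked into the bootloader.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, firmware_image)  # raises on any mismatch
    print("signature OK, safe to flash")
except InvalidSignature:
    print("reject the update, keep current firmware")
```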
You also needed a data migration system so user customizations weren't wiped out. My newish car, to this day, resets most user settings when it gets an OTA. No wonder people turn off automatic updates.
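A simple way to do that is a chain of versioned transforms applied to the stored settings on first boot after an update; the version numbers and field names below are invented for illustration:

```python
# Toy sketch of settings migration across firmware versions.
# Version numbers and field names are made up.

def v1_to_v2(s: dict) -> dict:
    s = dict(s)
    s["units"] = s.get("units", "metric")      # new field gets a default
    s["_version"] = 2
    return s

def v2_to_v3(s: dict) -> dict:
    s = dict(s)
    s["theme"] = s.pop("color_theme", "auto")  # renamed field carries over
    s["_version"] = 3
    return s

MIGRATIONS = {1: v1_to_v2, 2: v2_to_v3}

def migrate_settings(settings: dict, target_version: int) -> dict:
    version = settings.get("_version", 1)
    while version < target_version:
        settings = MIGRATIONS[version](settings)
        version = settings["_version"]
    return settings
```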
The really good systems also used realistic device simulators to measure impact before even pushing things out. And you definitely tested for communication failures and interruptions. Like, yoink out a power-line mid-update and then watch what happens after power is back on. Yes, it's costly and time-consuming, but consider the alternatives.
The ones that failed the most were the teams that spent months or years developing the basic system, then tacked the update mechanism on at the end as part of deployment. Since firmware update wasn't as sexy as developing cool new tech, it was doled out to lower-tier devs who didn't know what they were doing. And doing it at the end of the project meant it was often the least-tested feature.
The other sin was waiting months before rolling out updates, so there were lots of changes packed into one update, which made a small failure have a huge blast radius.
These were all technical management failures. Designing a robust update system should be right up front in the project plan, built by your best engineers, and then included in the CI/CD pipeline.
Just for context, the worst headline I had was for update failure in a line of hospital infant incubators.
Great insights into what goes on when developing firmware and updating it! I've never worked on that side of things, so I was always curious what the development and testing process looked like.
100% agree with the statement of the problem. Only partly on board with the possible solutions. There are many other ways, and most likely no single solution that fits all.
Kudos to CF for pointing out the dips in the road ahead. A big, juicy problem to tackle.
One of the main points of using a UUID as a record identifier was to make it so people couldn't guess adjacent records by incrementing or decrementing the ID. If you add sequential ordering, won't that defeat the purpose?
Seems like it would be wise to add caveats about using this form in external-facing applications or APIs.
There are still 62 bits of random data. Even if you know the EXACT millisecond the row was created (you likely won't), you'd still need to make 1 billion guesses per second for about 73 years on average.
Ideally you have some sort of rate limit on your APIs...
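Back-of-the-envelope on that 73-year figure, assuming the attacker already knows the exact millisecond and can somehow make a billion guesses per second:

```python
# Rough math for brute-forcing the 62 random bits of a UUIDv7,
# assuming the exact millisecond timestamp is already known.
keyspace = 2 ** 62                    # possible values of the random portion
guesses_per_second = 1e9              # a wildly generous attacker budget
seconds_per_year = 365 * 24 * 3600

years_to_exhaust = keyspace / guesses_per_second / seconds_per_year  # ~146 years
years_on_average = years_to_exhaust / 2                              # ~73 years
print(f"~{years_to_exhaust:.0f} years worst case, ~{years_on_average:.0f} on average")
```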
UUIDv7 has a 48-bit timestamp, 12 bits that either provide sub-millisecond precision or are random (in Postgres they provide precision), and another 62 bits that are chosen at random.
A UUIDv7 leaks to the outside when it was created, but guessing the next value is still completely infeasible. 62 bits is plenty of security if each attempt requires an API request.
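For reference, the bit packing can be sketched by hand (recent Python releases also add a native uuid.uuid7(); the sub-millisecond field is simplified to plain random bits here):

```python
# Sketch of the UUIDv7 bit layout: 48-bit ms timestamp | 4-bit version |
# 12 bits (sub-ms precision or random) | 2-bit variant | 62 random bits.
import os
import time
import uuid

def uuid7_sketch() -> uuid.UUID:
    ts_ms = time.time_ns() // 1_000_000                              # Unix time in ms
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF           # 12 bits (random here)
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 random bits

    value = (ts_ms & ((1 << 48) - 1)) << 80   # timestamp in the top 48 bits
    value |= 0x7 << 76                        # version nibble = 7
    value |= rand_a << 64
    value |= 0b10 << 62                       # RFC 4122/9562 variant bits
    value |= rand_b
    return uuid.UUID(int=value)

print(uuid7_sketch())
```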
... and the next person working on the system thinks "well, this thing is unpredictable, so it's OK if I leak an unsalted hash of it". If they think at all, which is far from certain.
Why does everybody want to find excuses to leave footguns around?
Was going to ask how much all this cost, but this sort of answers it:
> "Managing Cost and Usage Limits: Chaining agents, especially in a loop, will increase your token usage significantly. This means you’ll hit the usage caps on plans like Claude Pro/Max much faster. You need to be cognizant of this and decide if the trade-off—dramatically increased output and velocity at the cost of higher usage—is worth it."
Any post about IoT security that doesn't mention or link to Shodan (https://www.shodan.io) is missing a lot of context. It's way worse than you think.
Also, with tools like Chip Whisperer (https://www.newae.com/chipwhisperer) the physical security of the hardware root of trust needs to be reevaluated.