At one point, with plain HTTP, your ISP could run its own cache, large corporate IT networks could have a cache, and so on, which made caching very efficient. But it was horrible for privacy. Now we have CDN edge caching, but nothing like the multi-layer caching that was possible with plain HTTP.
It's $165 per 10 years if you don't lose it, or $65 if you just need it in place of a national ID (i.e. no international travel). I think anyone can save up that much in 10 years; renewals are a bit cheaper, btw.
> Local state ID cards don't prove citizenship.
No, but to get a Real ID in any state you have to prove you're in the country legally, and in some states to get any form of ID you have to prove that.
Governments aren’t just rolling out Digital IDs. They’re rolling out the platform to enable them to require that you authenticate with a range of apps and websites, ostensibly to keep children safe, with the real purpose being to link your unique identifier to all your online activity. They can then easily build an overall picture of who you are from that ID. Potentially, all this data can be fed into a pre-crime AI.
> Governments aren’t just rolling out Digital IDs. They’re rolling out the platform to enable them to require that you authenticate with a range of apps and websites, ostensibly to keep children safe, with the real purpose being to link your unique identifier to all your online activity.
This is just straight-up not true for the EUDI, which is probably the most serious and advanced approach to digital ID. The wallets are decentralized, and the government does not see the individual authentication transaction in any way.
Part of a Digital ID system is an identity provider that implements protocols such as OAuth 2 and OIDC. Once this is in place, the government that owns the Digital ID system can mandate that platforms such as social networks, search engines, email providers, etc. link the users in its jurisdiction to its Digital ID via OAuth/OIDC. As this isn't as onerous as reviewing identity documents, governments can make this a requirement for a large range of platforms, even quite small ones.
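To make the linking concrete: here's a minimal sketch (hypothetical issuer, client ID, and token values, all made up for illustration) of what a relying party receives in an OIDC ID token. The claim names ("iss", "sub", "aud") are the standard OIDC ones; "sub" is the stable subject identifier for the user at that identity provider.

```typescript
// Hypothetical ID token payload; values are invented for illustration.
const payload = {
  iss: "https://id.example.gov",   // assumed identity-provider URL
  sub: "a1b2c3d4-stable-user-id",  // same value on every login
  aud: "social-network-client-id", // the relying party
  exp: 1900000000,
};

// A JWT is three base64url segments: header.payload.signature.
const encode = (obj: object): string =>
  Buffer.from(JSON.stringify(obj)).toString("base64url");
const idToken = `${encode({ alg: "RS256", typ: "JWT" })}.${encode(payload)}.sig`;

// The relying party decodes the payload and keys the account on "sub",
// so every platform using this IdP sees the same identifier for the user.
const decoded = JSON.parse(
  Buffer.from(idToken.split(".")[1], "base64url").toString("utf8"),
);
console.log(decoded.sub);
```

If the "sub" handed out is the same across platforms, it becomes the join key the parent comment is worried about.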
Yes, I realise governments already have some powers to view private data, but they have to do a lot of legwork to link data to specific people. They'll always get false positives, false negatives, duplicates, etc. And they'll miss a number of platforms that have data on the person of interest. Digital ID combined with a mandatory identity platform and data retention requirements will make law enforcement far more efficient and give governments unprecedented power over what we see, hear and say online. The government will have a complete list of all the platforms on which you authenticated with their Digital ID.
We're already sleepwalking into this. In Australia, we have the under-16 social media ban taking effect next month. We're also in the process of rolling out our Digital ID, which has an OAuth/OIDC-based identity system. Numerous government departments have already integrated with it. It opens up to private sector integrations in December 2026, just in time for all involved in the under-16 social media ban to realise it's not working effectively and for Digital ID to save the day. The law states that Digital ID is a voluntary means of identification and other methods should always be offered, but the UX of OAuth 2 vs. uploading photos of your ID documents and a selfie, and waiting for it to be reviewed, will make Digital ID the de facto standard for Australians proving their age and, in the process, permanently linking their Digital ID Identifier to all their social media accounts. That includes "anonymous" ones like Reddit. And integrators can apply for an exemption to Digital ID being voluntary on their platform, making the case that the per-user cost of complying with the law without Digital ID is prohibitively expensive.
Once Australia rolls this out to social networks, it will keep expanding until virtually everything is captured.
> Once this is in place, the government that owns the Digital ID system can mandate that platforms such as social networks, search engines, email providers, etc. link the users in its jurisdiction to its Digital ID via OAuth/OIDC
Governments can do that today already. Digital IDs don't contribute anything to this. They just make our lives easier, not governments'.
> but they have to do a lot of legwork to link data to specific people. They'll always get false positives, false negatives, duplicates, etc.
Those false positives/negatives, duplicates affect real people too. That's just a case for digital IDs, not against.
> and, in the process, permanently linking their Digital ID Identifier to all their social media accounts
How do you reach that conclusion? How are they permanently linked? It's perfectly possible to verify your age digitally without permanently linking your ID with your social accounts.
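For what it's worth, OIDC already anticipates this: it defines pairwise subject identifiers, where the IdP hands each relying party a different "sub" for the same user, so two platforms can't join accounts on it. A sketch, assuming an HMAC-based derivation (the spec doesn't mandate a specific algorithm, only that the value be stable, unique per sector, and irreversible; the secret and names here are invented):

```typescript
import { createHmac } from "node:crypto";

// Derive a per-relying-party "sub" from an IdP-held secret, the relying
// party's sector identifier, and the internal user ID. Different sectors
// yield unrelated identifiers for the same user.
function pairwiseSub(idpSecret: string, sectorIdentifier: string, userId: string): string {
  return createHmac("sha256", idpSecret)
    .update(`${sectorIdentifier}|${userId}`)
    .digest("hex");
}

const subForSiteA = pairwiseSub("idp-secret", "social-a.example", "user-42");
const subForSiteB = pairwiseSub("idp-secret", "social-b.example", "user-42");
console.log(subForSiteA !== subForSiteB); // true: sites can't link accounts on "sub"
```

Whether a given government deployment actually uses pairwise identifiers is a policy choice, which is really the crux of this thread.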
> Once Australia rolls this out to social networks, it will keep expanding until virtually everything is captured.
Again, that can be done without digital IDs. You're holding the wrong front here. Privacy invading laws should be fought, but the public shouldn't be kept away from the convenience and privacy gains of digital IDs. It makes no sense.
To set this up, you have to scan the chip on your passport. It's essentially the same data on both chips; one is just in my phone's secure enclave and the other is in the passport's embedded NFC chip.
And, specifically, frictionless perfect enforcement. Kind of like CCTV you can pull on request after a crime, vs proactive permanent ubiquitous surveillance (looking at you, Flock Safety).
It feels healthier for the enforcement apparatus to have a budget, in terms of materiel, personnel, or time, that requires some degree of priority-setting. That priority-setting is by its nature a politically responsive process. And it’s compatible with the kind of situation that allows Really Quite Good enforcement, but not of absolutely everything absolutely all the time.
Otherwise ossification feels like exactly the word, as you said, stavros: if it costs nothing for the system to enforce stuff that was important in the hazy past but is no longer relevant, nobody wants to be the one blamed for formally easing restrictions just in case something new bad happens; 20 years later you’re still taking off your shoes at the airport. (I know, I know, they finally quit that. Still took decades. And the part that was cost-free—imaging your genitalia—continues unabated.)
Since most of these "digital ID" manifestations are just pixels on a screen, they are not a problem to fake pixel-perfect.
I did some limited travel during the COVID era, including areas that did not want to recognise my country's digital vaccination certificate. I presented them with a pixel-perfect picture of their own country's digital vaccination certificate. It's easy to copy from a screen of a friend, and it's not complicated to create your own Apple Wallet pass that looks like the one you want.
I was showing a real QR code -- that was issued to a person who wasn't me. As soon as that produced a big green checkmark on anyone's QR scanner, I was in.
If they need to match it with the info on paper, it's not clear what the case for "digital ID" is. If one needs to present "digital ID + paper ID", one can simply present the paper ID as they do today.
That's kind of a theoretical discussion by now. With the whole COVID thing behind us, we can look at all the money that was spent worldwide to create vaccination certificates, sign them, build the distribution network, distribute the certificates, build the verifying scanners, purchase them en masse, and pay the thousands of people standing at the entrances of numerous shopping malls using those scanners to check QR codes, only to end up with a system that is trivially bypassed by a JPEG file.
My argument is that “digital ids can’t be faked” is a bad argument, and if you rely on it to prove a point then it might be a weak proof.
(Digital IDs indeed can’t be faked but usually they are a part of a process that can be easily bypassed by using something that presents itself as a valid Digital ID even if it’s not.)
I don't think they will, as this would leave a significant portion of the population without IDs. The fallback will always be there.
Credit cards are a great example: they can't be faked, yet while the cryptographers sit on their high hill patting themselves on the back for a job well done, credit card fraud runs into billions of dollars every month. It doesn't happen because of fake cards; it happens by exploiting flaws in the whole process that the (non-fakeable) card is a part of.
I know a guy who went to jail for that. He was in the news and everything. Banned from this country for life. I warned him that what he was doing was a stupid idea; he was even doing it for others, who also got arrested...
I don't know what "that" was, and again, I had both the vaccination and the digital certificate to prove it; the system in place would not accept the real documents, so I fed it with other documents that it did accept.
The people who check your QR code with scanners on the entrance to a shopping mall (and refuse to let you in unless the scanner shows a green mark) are not the police nor the prosecution, and I have a good case to present to a judge in any case.
"The guy who went to jail" could be unvaccinated (or even infected) and presenting other people's certificates to enter an area for vaccinated people only (e.g. hospitals) where he might have endangered other people's lives; that's something that might be deserving jail time. I was vaccinated however, and by all means had the right to enter that shopping mall; I just wasn't able to prove it to the imperfect system that was there to check.
Isn't this just seeing a slippery slope and deciding to build a terrace[1], in that the existence of a digital ID doesn't automatically lead to a mandate to carry one—any more than the existence of a physical ID card does?
A physical ID can, depending on the validation process.
Digital ID doesn't have to report your location either, depending on the implementation. It's not like it's a given a digital ID system has to give your location.
An SSH key is a digital ID. Does it report your location when you use it? A GPG key can be a digital ID. Does it report your location when you sign something?
At best, a digital ID is equivalent but with an additional attack surface, and it's just more accessible.
You normally aren't carrying your passport with you, right? So even if lower security, the chance of that information being swiped is generally lower.
Phones are pretty high profile targets, this makes them more so.
I do like the idea and the convenience, but I'm definitely wary of these things too. Especially in the modern tech world, where security is often treated as an afterthought because it has less impact on sales. I'm pretty sure it is always cheaper to implement security up front, but right now we're not great at playing long games and we like to gamble. Humans have always been pretty bad at opportunity costs: we see the dollars spent now, and that seems to carry far more weight than what we save later.
On the other hand, US citizens currently aren't legally required to walk around with their IDs on them. That's not true for non-citizens, btw. You should only have to give the officer your name, but they can detain you while they "verify your identity." With an ID becoming frictionless and more commonly carried, will this law change? Can we trust that it'll stay the same given our current environment of more frequent ID requests? (I'm trying to stay a bit apolitical; let's not completely open up that issue here.) I'd say at best it is "of concern." But we do live in a world run by surveillance capitalism.
There's a really good example I like of opportunity cost that shows the perverse nature of how we treat them. Look at the Y2K bug. Here on HN most of us know this was a real thing that would have cost tons of money had we not fixed it. But we did. The success was bittersweet though, as the lack of repercussions (the whole point of fixing the problem!) resulted in people believing the issue was overblown. Most people laugh at Y2K as if it was a failed doomsday prediction rather than a success story of how we avoided a "doomsday" (to be overly dramatic) situation. So we create a situation where you're damned if you do and damned if you don't. If you do fix a problem, people treat you as if you were exaggerating the problem. If you don't fix the problem you get lambasted for not having foreseen the issue, but you do tend to be forgiven for fixing it.
Just remember, CrowdStrike's stock is doing great[0] ($546). Had you bought the dip ($218) you'd have made a 150% ROI. They didn't even drop to where they were a year prior, so had you bought in July of 2023 ($144) and sold at the dip you'd have still made a 50% profit in that year... (and 280% if you sold today).
Convince me we're good at playing the long game... Convince me we're not acting incredibly myopically... Convince me CrowdStrike learned their lesson and the same issue won't happen again...
You're ignoring the benefits though - it will help adapt more services to work online and reduce bureaucracy.
Look at Germany, where they outright refuse to acknowledge emails as legal notification/correspondence, so everything still gets sent as letters and faxes. It's extremely slow and cumbersome.
Also, it will help security, since the central service can authenticate you instead of every little hotel and bank branch keeping a copy of your passport.
As someone who has self-hosted bare-metal Kubernetes on my own rack: it's a lot of work to get set up. We used Red Hat OpenShift, which has a pretty good out-of-the-box solution, but the learning curve was relatively high.
That being said, once it was set up, there was not a lot of maintenance. Kubernetes is quite resilient when set up properly, and the cost savings were significant.
Try Talos next time. It took minutes to set up. Red Hat docs and product names scare me since they are intentionally obtuse. I thought I wanted OpenShift, but there's no way I'm paying, and I couldn't figure out how to even get started. Talos was such a breeze.
The thing with OpenShift (https://github.com/okd-project/okd) is you can set it up, and then run basically one command (oc new-app .) to push almost any app to Kubernetes. All bells and whistles included.
I mean RisingWave, the solution mentioned in the article, is a complete startup rewriting things in Rust mostly to avoid the larger Java solutions like Flink and Spark...
The comparison is a little pears-to-apples. Similar nutrition, but different enough that you can't draw conclusions. The hardware in the Ceph test is only capable of at most 1.7 TiB/s of traffic (optimally, without any overhead whatsoever).
I also assume that the batch size (block size) is different enough that this alone would make a big difference.
That difference is still pronounced, yes. But the workload is so different; AI training is hardly random reads. It's still not a comparison that should lead you to any conclusions.
A lot of web frameworks etc. do most of the instrumentation for you these days. For instance, using opentelemetry-js and self-hosting something like https://signoz.io should take less than an hour to spin up, and you get a ton of data without writing any custom code.
Context propagation isn't trivial on a multi-threaded async runtime. There are several ways to do it, but JVM agents that instrument bytecode are popular because they work transparently.
While that’s true, if you’ve already solved punching correlation IDs and A/B testing (feature flags per request) through your stack, then you can use the same solution for all three. In fact, you really should.
Ours was old, so it was based on domains <dry heaving sounds>, but by the time I left the project there were just a few places left where anyone touched raw domains directly, and you could have switched to AsyncLocalStorage in a reasonable amount of time.
The simplest thing that could work is to pass the original request or response context everywhere but that… has its own struggles. It’s hell on your function signatures (so I sympathize with my predecessors not doing that but goddamn) and you really don’t want an entire sequence diagram being able to fire the response. That’s equivalent to having a function with 100 return statements in it.
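For anyone who hasn't made that migration: a minimal sketch of the AsyncLocalStorage approach (all names here are made up), where the per-request context rides along implicitly instead of being threaded through every function signature.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Hypothetical request context; in a real app this would hold the
// correlation ID, feature flags, span context, etc.
type RequestContext = { correlationId: string };
const als = new AsyncLocalStorage<RequestContext>();

// Deep in the call stack, with no context parameter in sight:
async function innerWork(): Promise<string> {
  await new Promise((r) => setTimeout(r, 10)); // the store survives the await
  return als.getStore()?.correlationId ?? "none";
}

async function handleRequest(correlationId: string): Promise<string> {
  // Everything called (sync or async) inside run() sees this store.
  return als.run({ correlationId }, () => innerWork());
}

handleRequest("req-123").then((id) => console.log(id)); // prints "req-123"
```

Concurrent requests each get their own store, which is exactly what made the old `domain` module (mostly) replaceable for this use case.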
"It might help to go over a non-exhaustive list of things the official SDK handles that our little learning library doesn’t:
- Buffer and batch outgoing telemetry data in a more efficient format. Don’t send one-span-per-HTTP-request in production. Your vendor will want to have words.
- Gracefully handle errors, wrap this library around your core functionality at your own peril"
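The buffering/batching point is worth a sketch. Roughly what a batch processor does (the Span type and the injected send() here are stand-ins, not the real SDK's API): flush when the batch fills up or a timer fires, so you make one request per batch instead of one per span.

```typescript
// Stand-in span type for illustration.
type Span = { name: string };

class BatchExporter {
  private buffer: Span[] = [];
  private timer?: ReturnType<typeof setTimeout>;

  constructor(
    private send: (batch: Span[]) => void, // e.g. one OTLP/HTTP POST per batch
    private maxBatch = 512,
    private flushMs = 5000,
  ) {}

  export(span: Span): void {
    this.buffer.push(span);
    if (this.buffer.length >= this.maxBatch) {
      this.flush(); // size trigger: batch is full
    } else if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.flushMs); // time trigger
    }
  }

  flush(): void {
    if (this.timer) { clearTimeout(this.timer); this.timer = undefined; }
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.send(batch); // one request for the whole batch
  }
}
```

The real SDK's batch processor adds the parts that hurt to get wrong: retries, backpressure limits, and a graceful flush on process shutdown.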
Maybe the confusion here is in comparing different things.
The InfluxData docs you're linking to are similar to Observability vendor docs, which do indeed amount to "here's the endpoint, plug it in here, add this API key, tada".
But OpenTelemetry isn't an observability vendor. You can send to an OpenTelemetry Collector (and the act of sending is simple), but you also need to stand that thing up and run it yourself. There's a lot of good reasons to do that, but if you don't need to run infrastructure right now then it's a lot simpler to just send directly to a backend.
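For reference, standing up a Collector is mostly one small YAML file. A minimal pipeline (the backend endpoint here is a made-up placeholder) that receives OTLP, batches, and forwards:

```yaml
# Minimal OpenTelemetry Collector config sketch; endpoint is hypothetical.
receivers:
  otlp:
    protocols:
      http:
      grpc:
processors:
  batch:
exporters:
  otlphttp:
    endpoint: https://backend.example.com:4318  # your vendor or self-hosted backend
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

The operational cost is less the config than running and monitoring the process itself, which is why skipping the Collector and sending straight to a backend is the simpler first step.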
Would it be more helpful if the docs on OTel spelled this out more clearly?
The problem is ecosystem-wide: the documentation starts at 8/10 difficulty and is written for observability nerds, where easy things are hard and hard things are slightly harder.
I understand the role that all the different parts of OTel play in the ecosystem vs. InfluxDB, but if you pay attention to that documentation page, it starts off with the easiest thing (here's how you manually send one metric) and then ramps up the capabilities and functionality from there. The OTel docs slam you straight into "here's a complete observability stack for logs, metrics, and traces for your whole k8s deployment".
However, since OTel is not a backend, there's no pluggable endpoint + API key you can just start sending to. Since you were comparing the relative difficulties of sending data to a backend, that's why I responded in kind.
I do agree that it's more complicated, there's no argument there. And the docs have a very long way to go to highlight easier ways to do things and ramp up in complexity. There's also a lot more to document since OTel is for a wider audience of people, many of whom have different priorities. A group not talked about much in this thread is ops folks who are more concerned with getting a base level of instrumentation across a fleet of services, normalizing that data centrally, pulling in from external sources, and making sure all the right keys for common fields are named the right way. OTel has robust tools for (and must document) these use cases as well. And since most of us who work on it do so in spare time, or a part-time capacity at work, it's difficult to cover it all.