Most of them switched to stupid apps described above. 6 to 8 char passwords, 6 char PIN codes etc. I don't know how they pass security audits, unless the audits are merely a protection tax.
Is that enough though? You may have wildcards on domains that aren't even in public DNS, and you may forget to replace the cert "somewhere". For that reason it is better to either dump a list of domains from your local DNS, or have e.g. Zabbix or another agent on every host machine checking the cert files for you.
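A minimal sketch of the "agent on every host" idea, assuming your hostnames come from a local DNS dump (the file name, threshold, and hosts are all made up for illustration):

```python
import socket
import ssl
from datetime import datetime, timezone

def days_left(not_after: str) -> float:
    """Days until a cert's notAfter timestamp, e.g. 'Jun  1 12:00:00 2030 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

def check_host(host: str, port: int = 443, warn_days: float = 14.0) -> None:
    """Fetch the cert actually served by host:port and warn if it expires soon."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    remaining = days_left(cert["notAfter"])
    if remaining < warn_days:
        print(f"WARNING: {host} cert expires in {remaining:.1f} days")

# Hypothetical: feed it whatever your local DNS dump produces.
# for host in open("domains.txt"):
#     check_host(host.strip())
```

This checks what is actually served on each host, so a wildcard that was replaced "almost everywhere" still gets caught.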
That's exactly my point: while this service sounds quite useful for many common cases, it's going to fail in cases where there's not a 1-to-1 certificate-to-server mapping. Even outside of wildcards, you have to account for cases where the cert might be installed on N load balancers.
If you're using a cert on multiple IPs, or IPv4+v6, SSLBoard will monitor all IPs. It's not foolproof, but it covers most common practices. btw wildcard certs don't have a good reputation (blast radius)...
I'd say that load balancers (one-address-to-N-servers) count as a common practice, but I otherwise agree in that regard.
Regarding wildcard certs, eh. I wouldn't say they have a bad reputation. Sure, greater blast radius. But sometimes it can certainly simplify things to use one. Your ACME client configuration is easier and your TLS terminator configuration often becomes easier when the terminator would otherwise need to switch based on SNI.
one-address-to-N-servers is perfect if the N servers don't all terminate TLS themselves. If they do, it becomes impossible to test which certificate each backend is actually serving. I've seen this fail before (TLS tests flip-flop between good and bad from one check to the next).
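One way to make that flip-flopping visible is to pin the check to each backend IP instead of the load-balanced name; a sketch, where the IPs and SNI name are placeholders:

```python
import hashlib
import socket
import ssl

def fingerprint(der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der).hexdigest()

def served_cert_sha256(ip: str, sni: str, port: int = 443) -> str:
    """Handshake with one specific backend, presenting `sni`, and return the
    fingerprint of whatever certificate that backend serves."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we want the raw cert, not validation
    with socket.create_connection((ip, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=sni) as tls:
            der = tls.getpeercert(binary_form=True)
    return fingerprint(der)

# Hypothetical backends behind one load-balanced address:
# prints = {ip: served_cert_sha256(ip, "www.example.com")
#           for ip in ("10.0.0.1", "10.0.0.2", "10.0.0.3")}
# if len(set(prints.values())) > 1:
#     print("backends serve different certs:", prints)
```

If the fingerprints differ across backends, one of them was missed in the last renewal, which is exactly the good/bad flip-flop described above.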
As for wildcard certs, I agree there are use cases where we really need them, like dynamic subdomains ({customer}.status.com).
Can you share how they make ACME client configuration easier?
> Can you share how they make ACME client configuration easier?
It's not a profound difference, but you don't need to add each name to your config. Depending on the team's tooling and processes, that may be inconsequential. But in a setting where config management isn't handled super well, where the TLS terminator is a resource shared by multiple, distinct teams, this is a simplification that can make a difference at the margin.
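As a concrete sketch of that difference, here are two hypothetical certbot invocations (the domains and webroot path are placeholders, not from this thread):

```shell
# Per-name: every new service means editing this list and re-running:
certbot certonly --webroot -w /var/www/html \
  -d app.example.com -d api.example.com -d mail.example.com

# Wildcard: one line covers any future subdomain, at the cost of needing
# a DNS-01 challenge (and a bigger blast radius if the key leaks):
certbot certonly --manual --preferred-challenges dns \
  -d 'example.com' -d '*.example.com'
```

In a shop where the cert config is shared across teams, never having to touch that `-d` list again is the simplification being described.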
Think less Cloudflare-scale, and more SMB scale (especially in a Windows shop or recovering Windows shop with a different kind of technical culture than what we might all be implicitly imagining).
I'm working on something that could help: linking SSLBoard with software that makes issuance and distribution of certs easier, i.e. a proper CLM. It's not cloud-based, for security reasons. In that context, we know your wildcard certs because we issue them, and we could know where they are if we distribute them...
Please get in touch with me (chris@sslboard.com) if you're interested in early access and having a word in the development of the product!
I didn't realize you were behind SSLBoard. I think you should've disclosed that involvement at the beginning. I see now that it's in your bio, but disclosure is still on you.
Blame the devs as well: lots of useless junk code and libraries for things that could've been a couple of lines of code end up bloating a site and making it slow - and then they need a CDN and caching solutions like Cloudflare. 4/5 of the web wouldn't need any of it if their sites were optimized from the start.
DDOS attacks do happen, but considering the size of the web, the chance it will affect you is very low. AI bots are more of a stress test than a DDOS, figure out your bottlenecks and fix them.
E.g. for a frontend, give yourself a budget of 1MB for a static site and 2MB for a dynamic one, and go from there.
Because they don't have a vision for the entire OS apart from sticking a Copilot button in every app. Current Windows has so many features that I'm sure even current MS employees have little idea about them, and because of that they've been rewritten many times over. Sometimes I wonder if they even know what's going on in the registry, or how you're supposed to figure out certain policies in hybrid mode - between Defender/Intune and GPOs. Good luck figuring out Windows Hello vs. convenience PIN, or what to do when Defender goes haywire and starts blocking stuff, etc.
In Windows you have 5 versions of apps for each era of Windows; in M365 you have 5 different dashboards showing the same information just a little bit differently, so you have to know all of them if you need info A and B.
But at least in admin.microsoft you have Copilot and Agents above Users and Groups, because ef'in up your muscle memory is important ...
We've been using Element/Matrix for quite some time now and are fairly happy with it for the most part. The only major hiccup was hosting providers, not the software itself, per se.
We originally signed up with element.io back when they were called vector.im. Service was good, but a year or two in they decided they wanted to focus on those sweet, sweet enterprise licences and the pricing changes were untenable for our little 15 person operation. (I bear them little ill will for this, gotta do what you gotta do and all that, but it was a real PITA at the time.)
We moved to etke.cc who have been quite good. They were responsive to my modest support requests, and apart from being initially a bit surprised we wanted an unfederated server (which to their credit they dealt with with alacrity and aplomb) it's been a service we've just used and not had to otherwise think about.
The only sticking point was that there was no way to migrate our messages from the older service. If memory serves, this was due to a deficiency in either Matrix or Synapse around changing domains (we were originally on an element.io customer subdomain). So the moral of the story, I guess, is to always use your own domain if you can. I don't know if the migration story has improved in the years since.
If we had to leave Element/Matrix for whatever reason I would definitely look at Zulip, based on the many recommendations I see for it here. I think back when we went with Element I was quite interested in Zulip, but there just weren't any good hosting options at the time and we didn't want to self-host (time-sink vs $$-sink).
It is highly unlikely consumer GPUs will use HBM any time soon. At least I don't see it happening before 2030 or 2033. HBM is expensive, anywhere between 3-8x the cost of GDDR, and GDDR is already more expensive than LPDDR. And that is without factoring in the current DRAM pricing situation.
That is a value for the entire GPU; what about the memory part itself? Also, consumers don't need 300GB of it (yet).
But to answer - memory is progressing very slowly. DDR4 to DDR5 was not even a meaningful jump. Even PCIe SSDs are slowly catching up to it which is both funny and sad.
As for the use case: I use my memory as a cache for everything. Every system I've used in the last 15-20 years, I've maxed out the memory on. I never cared much about the speed of my storage, because after loading everything into RAM, the system and apps feel a lot more responsive. The difference on older systems with HDDs was especially noticeable, but even on SSDs things have not improved much, due to latencies. Of course, any webapp connecting to the network will negate these benefits, but it makes a difference with desktop apps.
These days I even have enough memory to be able to run local test VMs so I don't need to use server resources.
It's important to note that the `unsafe` keyword is poorly named. What it does is unlock a few more capabilities, in exchange for the programmer becoming responsible for upholding the invariants the spec requires. It should really be called "assured" or something. The programmer is taking the wheel from the compiler and promising to drive safely.
As for why there is unsafe in the kernel? There are things, especially in a kernel, that cannot be expressed in safe Rust.
Still, having smaller sections of unsafe is a boon because you isolate these locations of elevated power, meaning they are auditable and obvious. Rust also excels at wrapping unsafe in safe abstractions that are impossible to misuse. A common comparison point is that in C your entire program is effectively unsafe, whereas in Rust it's a subset.
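The textbook illustration of such a wrapper (not from this thread, but the standard library does exactly this) is `split_at_mut`: one small unsafe block behind an interface that safe callers can't misuse:

```rust
// Hand out two non-overlapping mutable views into one slice. The borrow
// checker can't prove the halves don't alias, but we can, so we promise
// it with `unsafe` - and the assert is what makes that promise sound.
fn split_at_mut(slice: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
    let len = slice.len();
    assert!(mid <= len); // without this, callers could fabricate out-of-bounds slices
    let ptr = slice.as_mut_ptr();
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}
```

Callers of this function stay entirely in safe Rust; the elevated power is confined to the one auditable block inside.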
EDIT: Hacker News has limited my ability to respond. Please keep in mind that Rust has a large number of active fans, who may have biases for whatever reasons.
> Still, having smaller sections of unsafe is a boon because you isolate these locations of elevated power, meaning they are auditable and obvious.
The Rustonomicon makes it very clear that it is generally insufficient to only verify correctness of Rust-unsafe blocks. If the absence of UB in a Rust-unsafe block depends on Rust-not-unsafe code in the surrounding module, potentially the whole module has to be verified for correctness. And that assumes that the module has correct encapsulation, otherwise even more may have to be verified. And a single buggy change to Rust-not-unsafe code can cause UB, if a Rust-unsafe block somewhere depends on that code to be correct.
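A toy illustration of that point, made up for this comment but loosely modeled on the Rustonomicon's Vec example: the unsafe block below is sound only because every safe method in the module maintains the `len` invariant.

```rust
// `len` is the module-wide invariant: every unsafe block below assumes
// slots 0..len are initialized and len never exceeds 4.
pub struct TinyBuf {
    data: [u8; 4],
    len: usize,
}

impl TinyBuf {
    pub fn new() -> Self {
        TinyBuf { data: [0; 4], len: 0 }
    }

    pub fn push(&mut self, b: u8) {
        assert!(self.len < 4, "buffer full"); // safe code upholding the invariant
        self.data[self.len] = b;
        self.len += 1;
    }

    pub fn last(&self) -> Option<u8> {
        if self.len == 0 {
            return None;
        }
        // SAFETY: sound only because *every* safe method keeps len <= 4.
        Some(unsafe { *self.data.get_unchecked(self.len - 1) })
    }

    // A perfectly "safe"-looking addition elsewhere in the module would
    // silently turn `last` into UB:
    // pub fn set_len(&mut self, n: usize) { self.len = n; }
}
```

Note that the buggy `set_len` contains no unsafe code at all; auditing only the `unsafe` block would never find it, which is exactly the Rustonomicon's warning.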
You need unsafe Rust for FFI - interfacing with the rest of the kernel, which is still C, uses raw pointers, has no generics, doesn't track ownership, etc. One day there might be enough Rust in the kernel to have pure-Rust subsystem APIs which would no longer require unsafe blocks to use. This would reverse the requirements, as C would be a second-class citizen with these APIs (not that C would notice or care). How far Rust gets pushed remains to be seen, but it might take a long time to get there.
I was referring to the current unsafe blocks used for Rust->C FFI. Obviously OS code in any language will need to perform low-level operations, those unsafe blocks are never going away.
> I was referring to the current unsafe blocks used for Rust->C FFI.
You need direct shared mutable memory access with runtime locking even in the pure-Rust parts. That's kinda what OSes need, actually. Some things (Maybe DMA, possibly Page Table mutation, register saving/loading, as a few examples) can't be compile-time checked.
In fact, I would guess that if you gradually moved the Linux code over to Rust, at the end of it you'd still have maybe 50% of it in unsafe blocks.
So, no - your claim is no different than "if it compiles it works".
Rust is very nice for encapsulation. C isn't great at that, and of course C can't express the idea that whatever we've encapsulated is now safe to use this way; in C everything looks equally safe/unsafe.
It's worth noting that "aliasing" in Rust and C typically mean completely unrelated things.
Strict aliasing in C roughly means that if you initialize memory as a particular type, you can only access it as that type or as one of a short list of aliasable types like char. Rust has no such restriction and no concept of strict aliasing like this. In Rust, "type aliasing" is allowed, so long as you respect size, alignment, and representability rules.
Aliasing safety in Rust roughly means that you cannot hold an exclusive reference to an object while any other reference to it is active (reality is a little more involved than that, but not a lot). C has no such rule.
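A compiling sketch of that rule (the function and values are made up): shared borrows may overlap each other, but an exclusive borrow can only begin once no other borrow of the same value is live.

```rust
fn demo() -> i32 {
    let mut x = 42;

    let r1 = &x; // shared borrow
    let r2 = &x; // any number of shared borrows may coexist
    let sum = *r1 + *r2;
    // r1 and r2 are last used above, so their borrows end here (NLL)

    let m = &mut x; // exclusive borrow: legal only because no other
                    // borrow of `x` is still live at this point
    *m += sum;
    x
}
```

Uncommenting an extra use of `r1` after `m` is created would make the program fail to compile, which is the "no mutable aliases" rule doing its job.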
It's very unfortunate that such similar names were given to these different concepts.
No. Aliasing is a single idea, an alias is another name for the same thing. The concept translates well from its usual English meaning.
The C "strict aliasing" rule is that with important exceptions the name for a thing of type T cannot also be an alias to a thing of type S, and char is an important exception. Linux deliberately switches off this rule.
Rust's rule is that there mustn't be mutable aliases. We will see why that's important in a moment.
Aliasing is an impediment to compiler optimisation. If you've been watching Matt's "Advent of Compiler Optimisation" videos (or reading the accompanying text), it's been covered a little bit there. Matt uses C and C++ in those videos, so if you're scared of Rust you needn't fear it in the AoCO.
But why mutation? Well, the optimisations concern modification. The optimiser does its job by rewriting what you asked for as something (possibly not something you could have expressed at all in your chosen language) that has the same effect but is faster or smaller. Rewrites which avoid "spilling" a register (writing its value to memory) often improve both size and speed of the software, but if there is aliasing then spilling will be essential because the other aliases are referring to the same memory. If there's no modification it doesn't matter, copies are all identical anyway.
Fair enough, I meant in terms of what rules and restrictions exist around aliasing, which are different between the two, but my wording was indeed off.
I am not sure how I feel about this solution. It is already painful to deal with certs on every single piece of IT equipment. Unless you create and manage your own CA, which is an extra burden, what is the point of this? This will only create more janky scripts and annoyances for very little benefit.
What's next? Enforcing email signing with SMIME or PGP?
supply chain - if you put some 3rd-party script link, ad, or tracking on your page, or even just update dependencies to a bad version (like the npm packages hack), TLS won't save you when that service or dependency gets hacked
The biggest culprit is the ad network script. Whether it's a script tag, an iframe, or an image pixel, it's basically allowing the browser to send your visit event and user-agent information (or the Chrome updated headers) to that 3rd party, and if it's using JSONP, it can call back a function on the page to inject malware that can take over your browser. Ask me how I know.