Hacker News | kbolino's comments

How many CAs do you think there are? How many countries do you think they operate in?

Maybe we could augment the old EV cert indicator with a flag icon, but now there's yet another thing that users have to pay attention to. Maybe the CA/Browser Forum could run a clearinghouse for company names, but apart from trivial examples, there might very well be legitimate cases of two companies with the same name in the same country, just in different industries. Now do we augment the indicator with an industry icon too? Then the company changes its name, or forms a subsidiary relationship, or what have you. Now do we need to put "Meta (formerly Facebook)" or "Facebook (division of Meta)" etc. in the name?

There are just so many problems with the EV cert approach at Internet scale, and they're largely unsolvable with current infrastructure and end-user expectations.


I'm genuinely surprised that usize <=> pointer convertibility exists. Even Go has different types for pointer-width integers (uintptr) and sizes of things (int/uint). I can only guess that Rust's choice was seen as a harmless simplification at the time. Is it something that can be fixed with editions? My guess is no, or at least not easily.

There is a cost to having multiple language-level types that represent the exact same set of values, as C does (and as is really noticeable in C++). Rust made an early, fairly explicit decision that a) usize is a distinct fundamental type from the other types, and not merely a target-specific typedef, and b) it would not introduce more types for things like uindex or uaddr or uptr, which are the same as usize on nearly every platform.

Rust's initial guarantee was worded so that usize is sufficient to roundtrip a pointer (making it effectively uptr), and there remains concern among several of the maintainers about breaking that guarantee, despite people on the only target that would be affected basically saying they'd rather see it broken. The more fundamental problem is that many crates are perfectly happy opting out of compiling for weirder platforms--I've designed some stuff that relies on 64-bit system properties, and I'd rather like to have the ability to say "no compile for you on platforms where usize-is-not-u64" and get impl From<usize> for u64 and impl From<u64> for usize in return. If you've got something like that, it also provides a neat way to say "I don't want to opt out of [or into] compiling for usize≠uptr" while keeping backwards compatibility.
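A rough sketch of the opt-out described above, under today's language rules (names are mine; std deliberately omits From<usize> for u64, and the orphan rule prevents a crate from adding that impl itself, hence free functions instead):

```rust
// Hypothetical sketch: a crate that assumes usize is u64 can refuse to
// build on any other target, after which the conversions are lossless.
#[cfg(not(target_pointer_width = "64"))]
compile_error!("this crate only supports targets where usize is 64 bits");

#[cfg(target_pointer_width = "64")]
pub fn usize_to_u64(x: usize) -> u64 {
    x as u64 // lossless: usize is exactly 64 bits on this target
}

#[cfg(target_pointer_width = "64")]
pub fn u64_to_usize(x: u64) -> usize {
    x as usize // likewise lossless
}

fn main() {
    assert_eq!(usize_to_u64(usize::MAX), u64::MAX);
    assert_eq!(u64_to_usize(42), 42usize);
}
```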

If you want to see some long, gory debates on the topic, https://internals.rust-lang.org/t/pre-rfc-usize-is-not-size-... is a good starting point.


> ...not to introduce more types for things like uindex or uaddr or uptr, which are the same as usize on nearly every platform. ... there remains concern among several of the maintainers about breaking that guarantee, despite the fact that people on the only target that would be affected basically saying they'd rather see that guarantee broken.

The proper approach to resolving this in an elegant way is to make the guarantee target-dependent. Require all depended-upon crates to acknowledge that usize might differ from uptr in order to unlock building for "exotic" architectures, much like how no-std works today. That way "nearly every platform" can still rely on the guarantee with no rise in complexity.


I brought up Go because it was designed around the same time and, while it gets a lot of flak for some of its other design decisions, this particular one seems prescient. However, I would be remiss if I gave the impression that the reasoning behind the decision was anticipation of some yet unseen future; the reality was that int and uint (which are not aliases for sized intN or uintN) were not initially the same as ptrdiff_t and size_t (respectively) on all platforms. Early versions of Go for 64-bit systems had 32-bit int and uint, so naturally uintptr had to be different (and it's also not an alias). It was only later that int and uint became machine-word-sized on all platforms, which made uintptr seem a bit redundant. However, this distinction is fortuitous for CHERI etc. support. Still, Go on CHERI with 128-bit uintptr might break some code; such code is likely in violation of the unsafe pointer rules anyway: https://pkg.go.dev/unsafe#Pointer

Yet Rust is not Go and this solution is probably not the right one for Rust. As laid out in a link on a sibling comment, one possibility is to do away with pointer <=> integer conversions entirely, and use methods on pointers to access and mutate their addresses (which may be the only thing they represent on some platforms, but is just a part of their representation on others). The broader issue is really about evolving the language and ecosystem away from the mentality that "pointers are just integers with fancy sigil names".


I'd say that, even more than pointer sizes, the idea that a pointer is just a number really needs to die; it is in no way the forward-looking decision expected of a modern language.

Pointers should at no point be converted into numbers and back as that trips up many assumptions (special runtimes, static/dynamic analysis tools, compiler optimizations).

Additionally, I would make it a priority that writing FFIs should be as easy as possible and require as little human deliberation as possible. Even if Rust is safe, its safety can only be assumed as long as the underlying external code upholds the invariants.

Which is a huge risk factor for Rust, especially in today's context of the Linux kernel. If I have an object created/handled by external native code, how do I make sure that it respects Rust's lifetime/aliasing rules?

What's the exact list of rules my C code must conform to?

Are there any static analysis/fuzzing tools that can verify that my code is indeed compliant?


> Which is a huge risk factor for Rust, especially in today's context of the Linux kernel. If I have an object created/handled by external native code, how do I make sure that it respects Rust's lifetime/aliasing rules?

Can you expand on this point? Are you worried about whether the external code is going to free the memory out from under you? That is part of a guarantee, but the compiler cannot guarantee what happens at runtime no matter what the author of a language wants; the CPU will do what it's told, and it couldn't care about Rust's guarantees even if you built your code entirely with Rust.

When you are interacting with the real world and real things, you need to work with different assumptions: if you don't trust that the data will remain unmodified, then copy it.

No matter how many abstractions you put on top of it, there is still lightning in a rock messing with 1s and 0s.


C doesn't require convertibility to an integer and recognizes that pointers may have atypical representations. Casting to integer types has always been implementation defined. [u]intptr_t is optional specifically to allow such platforms to claim standard conformance.

I know, but from what you wrote, Rust kinda assumes pointers are ints (because they are the same type as array indices)

So as not to only hate on Rust, I want to remark that baking in the idea that pointers can be used for arithmetic isn't a good one either.


> Is it something that can be fixed with editions? My guess is no, or at least not easily.

Assuming I'm reading these blog posts [0, 1] correctly, it seems that the size_of::<usize>() == size_of::<*mut u8>() assumption is changeable across editions.

Or at the very least, if that change (or a similarly workable one) isn't possible, both blog posts do a pretty good job of pointedly not saying so.

[0]: https://faultlore.com/blah/fix-rust-pointers/#redefining-usi...

[1]: https://tratt.net/laurie/blog/2022/making_rust_a_better_fit_...
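For what it's worth, a crate can already make that assumption explicit and fail the build if it ever stops holding. A minimal sketch, not any official mechanism:

```rust
use std::mem::size_of;

// Compile-time check: fails to build on any hypothetical target where the
// assumption breaks; on every target Rust ships today, it passes.
const _: () = assert!(size_of::<usize>() == size_of::<*mut u8>());

fn main() {
    println!("usize and *mut u8 are both {} bytes here", size_of::<usize>());
}
```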


Great info!

Personally, I like 3.1.2 from your link [0] best, which involves getting rid of pointer <=> integer casts entirely, and just adding methods to pointers, like addr and with_addr. This needs no new types and no new syntax, though it does make pointer arithmetic a little more cumbersome. However, it also makes it much clearer that pointers have provenance.

I think the answer to "can this be solved with editions" is more "kinda" rather than "no"; you can make hard breaks with a new edition, but since the old editions must still be supported and interoperable, the best you can do with those is issue warnings. Those warnings can then be upgraded to errors on a per-project basis with compiler flags and/or Cargo.toml options.


> Personally, I like 3.1.2 from your link [0] best, which involves getting rid of pointer <=> integer casts entirely, and just adding methods to pointers, like addr and with_addr. This needs no new types and no new syntax, though it does make pointer arithmetic a little more cumbersome. However, it also makes it much clearer that pointers have provenance.

Provenance-related work seems to be progressing at a decent pace, with some provenance-related APIs stabilized in Rust 1.84 [0, 1].

[0]: https://blog.rust-lang.org/2025/01/09/Rust-1.84.0/

[1]: https://doc.rust-lang.org/std/ptr/index.html#provenance
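A minimal illustration of those stabilized APIs (addr and with_addr), which avoid the `as usize` / `as *const T` casts entirely:

```rust
fn main() {
    let x = 0u32;
    let p: *const u32 = &x;

    // Inspect the address without an `as usize` cast. The returned
    // integer carries no provenance; only the pointer does.
    let a: usize = p.addr();

    // Construct a new pointer from an address by reusing p's provenance,
    // rather than conjuring a pointer out of a bare integer.
    let q: *const u32 = p.with_addr(a);
    assert_eq!(q, p);
}
```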


Indeed, the only missing piece really then is the ability to warn/error when exposed provenance and/or as-casts are used.

AIUI, this particular change would only impact some newly supported architectures going forward. These don't need to be buildable with older editions.

This feels antithetical to two of the main goals of editions.

One of those goals is that code which was written for an older edition will continue to work. You should never be forced to upgrade editions, especially if you have a large codebase which would require significant modifications.

The other goal is that editions are interoperable, i.e. that code written for one edition can rely on code written for a different edition. Editions are set on a per-crate basis, this seems to be the case for both rustc [1] and of course cargo.

As I see it, what you're saying would mean that code written for this new edition initially couldn't use most of the crates on crates.io as dependencies. This would then create pressure on those crates' authors to update their edition. And all of this would be kind of pointless fragmentation and angst, since most of those crates wouldn't be doing funny operations on pointers anyway. It might also have the knock-on effect of making new editions much more conservative, since nobody would want to go through that headache again, thus undermining another goal of editions.

[1]: https://rustc-dev-guide.rust-lang.org/guides/editions.html


If the part about how "most of those crates wouldn't be doing funny operations on pointers" can be verified automatically, in a way that preserves safety guarantees when usize != uaddr/uptr, then those crates can continue to build without transitioning to a newer edition. Otherwise, upgrading them is the right move. Other code targeting earlier editions of Rust would still build; it would simply need a compiler version update when depending on the newly-upgraded crates.

Aren't browsers generally implementing their own DNS resolution (via DoH) nowadays anyway? Not sure it helps that much, but operating systems not enforcing/delivering DNSSEC seems like a side-stepped problem now.

No, not as a general rule they aren't. And remember, the DNSSEC record delivery problem isn't an issue for the majority of all browser sessions, just a small minority that are on paths that won't pass DNSSEC records reliably. Since you can't just write those paths off, and you can't really reliably detect them, you end up needing a resolution fallback --- at which point you might as well not be using DANE.

This was a big enough issue that there was a whole standards push to staple DNSSEC records to the TLS handshake (which then fell apart).


There are Extended Validation (EV) certificates, and for a couple of years browsers gave them special treatment (typically, a green lock indicator instead of gray, sometimes accompanied by the validated business name). However, they were eventually demoted to the same appearance as ordinary Domain Validation (DV) certificates for a couple reasons:

1) This is not as useful as it sounds. Business names are not unique, and the legal entity behind a legitimate business may have a different name that no one has ever heard of.

2) Validation gets dicier as the world gets opened up and as laws and customs change. The higher tier confers greater prestige and legitimacy, but the only discriminator really backing it is money.


Yea, this was what I thought I'd dealt with before but I couldn't remember.

It's too bad the same hasn't happened to software notarization and signing systems.

People will argue that having payments enforced some accountability, but I'm not really convinced.


There were two main issues.

1) A lot of these businesses have customers outside the U.S. Those customers, including some foreign governments, did not want their data to be snooped by the U.S. government. The business risk here is loss of customers.

2) There is no such thing as a private backdoor. If one entity (admittedly a very well resourced one) can snoop, so can others. The publicity also entices new players to enter the game. The business risk here is loss of reputation.


But why?

There's no reliable source of truth for your home network. Neither the local (m)DNS nor the IP addresses nor the MAC addresses hold any extrinsic meaning. You could certainly run the standard ACME challenges, but neither success nor failure would carry much weight.

And then the devices themselves have no way of knowing your hub/router/AP is legitimate. You'd have to have some way of getting the CA certificate on to them that couldn't be easily spoofed.

EDIT: There is a draft for a new ACME challenge called dns-persist-01, which mentions IoT, but I'm not really sure how it helps that use case exactly: https://datatracker.ietf.org/doc/html/draft-ietf-acme-dns-pe...


Absent widespread adoption of DNSSEC, which has just not happened at all, I don't see any alternative.

The authentication must be done before the encryption parameters are negotiated, in order to protect against man-in-the-middle attacks. There must be some continuity between the two as well, since the authenticated party (both parties can be authenticated, but only one has to be) must digitally sign its parameters.

Any competing authentication scheme would therefore have to operate at a lower layer with even more fundamental infrastructure, and the only thing we've really got that fits the bill is DNS.

EDIT: A standard exists for this already, it's called DANE, though it has very little support: https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...


This applies to grandparent too (for the record I largely agree with them) but the issue isn't just "authenticity" but "identification" -- there's no real attestation about who is in on the other end of the site. This identity was once at least somewhat part of the certificate itself.

Yes, it is fair to say that domain names are not the sum total of identity. However, the EV certificate experience showed that, at least in terms of WebPKI and the open Internet, there really isn't anything better than domains yet.

We have clear and seemingly easy go-to examples like proving that yes, this is THE Microsoft, and not a shady fly-by-night spoof in a non-extradition territory, but apart from the headline companies--who as of late seem to like changing their names anyway--this actually isn't easy at all.

Walled gardens like app stores have different trade-offs, admittedly.


JAR has additional structure to it, though it's mostly optional stuff, like metadata and code signing:

https://docs.oracle.com/en/java/javase/21/docs/specs/jar/jar...


Don't forget to delete META-INF!

There's no sweet spot I've found. I don't work for Cloudflare but when I did have a status indicator to maintain, you could never please everyone. Users would complain when our system was up but a dependent system was down, saying that our status indicator was a lie. "Fixing" that by marking our system as down or degraded whenever a dependent system was down led to the status indicator being not green regularly, causing us to unfairly develop a reputation as unreliable (most broken dependencies had limited blast radius). The juice no longer seemed worth the squeeze and we gave up on automated status indicators.

> "Fixing" that by marking our system as down or degraded whenever a dependent system was down led to the status indicator being not green regularly, causing us to unfairly develop a reputation as unreliable (most broken dependencies had limited blast radius).

This seems like an issue with the design of your status page. If the broken dependencies truly had a limited blast radius, that should've been able to be communicated in your indicators and statistics. If not, then the unreliable reputation was deserved, and all you did by removing the status page was hide it.


> all you did by removing the status page was hide it

True, but everyone that actually made the company work was much happier for it.


> whenever a dependent system was down led to the status indicator being not green regularly, causing us to unfairly develop a reputation as unreliable (most broken dependencies had limited blast radius)

You are responsible for your dependencies, unless they are specific integrations. Either switch to more reliable dependencies or add redundancy so that you can switch between providers when one is down.


The headline status doesn't have to be "worst of all systems". Pick a key indicator, and as long as it doesn't look like it's all green regardless of whether you're up or down, users will imagine that "green headline, red subsystems" means whatever they're observing, even if that makes the status display utterly uninterpretable from an outside perspective.
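One hedged sketch of that idea (all names and the weighting scheme are hypothetical): roll the headline up from core subsystems only, and let dependency incidents surface as per-row detail.

```rust
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Status { Up, Degraded, Down }

#[allow(dead_code)]
struct Subsystem { name: &'static str, core: bool, status: Status }

// Headline = worst status among core subsystems; non-core outages stay
// visible in their own rows but don't flip the headline indicator.
fn headline(subs: &[Subsystem]) -> Status {
    subs.iter()
        .filter(|s| s.core)
        .map(|s| s.status)
        .max() // derived Ord: Up < Degraded < Down
        .unwrap_or(Status::Up)
}

fn main() {
    let subs = [
        Subsystem { name: "api", core: true, status: Status::Up },
        Subsystem { name: "third-party-webhooks", core: false, status: Status::Down },
    ];
    // Core is up, so the headline stays green despite the dependency outage.
    assert_eq!(headline(&subs), Status::Up);
}
```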

While it has nothing to do with how responsive your mouse feels, as that is measured in milliseconds while CAS latency is measured in nanoseconds, there has indeed been a small regression with DDR5 memory compared to the 3 previous generations. The best DDR2-4 configurations could fetch 1 word in about 6-7 ns while the best DDR5 configurations take about 9-10 ns.

https://en.wikipedia.org/wiki/CAS_latency#Memory_timing_exam...
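As a rough back-of-the-envelope (the example kits below are illustrative, not measurements): first-word latency is the CAS latency in cycles divided by the I/O clock, which for DDR is half the data rate.

```rust
// Latency in ns = CL cycles / (data rate in MT/s / 2) * 1000.
fn cas_ns(cl: u32, data_rate_mts: u32) -> f64 {
    cl as f64 * 2000.0 / data_rate_mts as f64
}

fn main() {
    // A fast DDR4 kit vs. a typical DDR5 kit:
    println!("DDR4-3200 CL14: {:.2} ns", cas_ns(14, 3200)); // 8.75 ns
    println!("DDR5-6000 CL30: {:.2} ns", cas_ns(30, 6000)); // 10.00 ns
}
```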

