Both are also early Go engineers who hacked on the Go stdlib for years. Most people in the Go community know them. Great people, and the idea speaks for itself. I wish them the best of luck.
Color me confused, but what do containers add to FreeBSD beyond jails? Jails have their own IP addresses and root filesystem, plus they use the host OS's version of libc and OpenSSL/LibreSSL and all the other core utils.
Is it the convenience utilities for building and running container images?
However, I still can't pinpoint what the value proposition is compared to using jails. Is there anybody around here able and willing to shed some light? I know I didn't use Cunningham's Law to start the conversation like a clever netizen but maybe, just this once, a good faith response to a good faith question is possible.
HN is a pretty simple, efficient, monolithic web application. Some updates might need a restart. It's OK for some web requests to fail during that time. HN isn't life-critical with six-nines uptime requirements.
Tbh like 99% of web apps aren’t critical - most of them are for buying something or providing infrastructure to make it easier to buy something anyway.
It’s fine if your online shop is down for a few minutes (of course the business won’t see it like that but it’s true)
A sales site being down might lose you a sale. But the simplicity might save you so much more than that loses you. And often the complexity of high-availability infrastructure results in more downtime than it prevents.
For stuff like HN, I like the peek behind the scenes it provides. It's all just software written by some humans, and way too often people take themselves and their shitty software way too seriously.
I feel like this obsession with zero downtime has gotten a bit silly. Sure, for some things it's damn near required (though I imagine that's fewer things than most people think), but it 100% does not matter even a little bit if HN is unavailable for 10 seconds or so.
Everyone went right for “downtime” - no, that’s not an issue. I would have expected a configuration change, which wouldn’t require a restart but might indeed result in downtime.
Or even a configuration change that some control system notices and does restart the service.
It’s the manual, hands-on connotation (maybe only in my mind) of telling someone a restart is involved. Automate this stuff - don’t want the code in the server all year? Fine, have a process rebuild and relaunch on a schedule that makes sense. Might have downtime, but definitely have less hands-on.
Just to talk about a different direction here for a second:
Something that I find to be a frustrating side effect of malware issues like this is that it seems to result in well-intentioned security teams locking down the data in apps.
The justification is quite plausible -- in this case WhatsApp messages were being stolen! But the thing is... that if this isn't what they steal they'll steal something else.
Meanwhile locking down those apps so the only apps with a certain signature can read from your WhatsApp means that if you want to back up your messages or read them for any legitimate purpose you're now SOL, or reliant on a usually slow, non-automatable UI-only flow.
I'm glad that modern computers are more secure than they have been, but I think that defense in depth by locking down everything and creating more silos is a problem of its own.
I agree with this, just to note for context though: This (or rather the package that was forked) is not a wrapper of any official WhatsApp API or anything like that, it poses as a WhatsApp client (WhatsApp Web), which the author reverse engineered the protocol of.
So users go through the same steps as if they were connecting another client to their WhatsApp account, and the client gets full access to all data of course.
From what I understand WhatsApp is already fairly locked down, so people had to resort to this sort of thing – if WA had actually offered this data via a proper API with granular permissions, there might have been a lower chance of this happening.
I could certainly see the value in this in principle but sadly the labyrinthine mess that is the Apple permission system (in which they learned none of the lessons of early UAC) illustrates the kind of result that seems to arise from this.
A great microcosm illustration of this is automation permission on macOS right now: there's a separate allow dialog for every single app. If you try to use a general purpose automation app it needs to request permission for every single app on your computer individually the first time you use it. Having experienced that in practice it... absolutely sucks.
At this point it makes me feel like we need something like an async audit API. Maybe the OS just tracks and logs all of your apps' activity and then:
1) You can view it of course.
2) The OS monitors for deviations from expected patterns for that app globally (kinda like Microsoft's SmartScreen?)
3) Your own apps can get permission to read this audit log if you want to analyze it your own way and/or be more secure. If you're more paranoid maybe you could use a variant that kills an app in a hurry if it's misbehaving.
Sadly you can't even implement this as a third party thing on macOS at this point because the security model prohibits you from monitoring other apps. You can't even do it with the user's permission because tracing apps requires you to turn SIP off.
> Maybe the OS just tracks and logs all of your apps' activity
The problem here is that, like so many social-media apps, the first thing the app will do is scrape as much as it possibly can from the device, lest it lose access later, at which point auditing it and restricting its permissions is already too late.
Give an inch, and they’ll take a mile. Better to make them justify every millimetre instead.
We're not in 1980 anymore. Most people need zero, and even power users need at most one or two apps that need that full access to the disk.
In macOS, for example, the sandbox and the file dialog already allow opening any file, bundle or folder on the disk. I haven't really come across any app that does better browsing than this dialog, but if there's any, it should be a special case. Funny enough, WhatsApp on iOS is an app that reimplements the photo browser, as a dark pattern to force users to either give full permission to photos or suffer.
The only time where the OS file dialog becomes limited is when a file is actually "multiple files". Which is 1) solvable by bundles or folders and 2) a symptom of developers not giving a shit about usability.
I think you misunderstood. If the OS becomes the arbiter of what can and cannot be accessed, it's a slippery slope to the OS becoming a walled garden in which only approved apps and developers are allowed to operate. Of course that is a pretty large generalization, but we already see it with mobile devices and are starting to see it with Windows and macOS.
I don't think we should be handing more power to OS makers and away from users. There has to be a middle ground between walled gardens and open systems. It would be much better for node & npm to come up with a solution than locking down access.
> Meanwhile locking down those apps so the only apps with a certain signature can read from your WhatsApp means that if you want to back up your messages or read them for any legitimate purpose you're now SOL, or reliant on a usually slow, non-automatable UI-only flow.
...and this gives them more control, so they can profit from it. Corporate greed knows no bounds.
> I'm glad that modern computers are more secure than they have been
I'm not. Back when malware was more prevalent among the lower class, there was also far more freedom and interoperability.
The virus-infested computers caused by scam versions of Neopets are not dissimilar to Windows today.
Live internet popups you didn't ask for, live tracking of everything you do, new buttons suddenly appearing in every toolbar. All of it slowing down your machine.
It seems to me the only adequate solution regarding any of these types of security and privacy vs data sharing and access matters, is going to be an OS and system level agent that can identify and question behaviors and data flows (AI firewall and packet inspection?), and configure systems in line with the user’s accepted level of risk and privacy.
It is already a major security and privacy risk for users to rely on the beneficence and competence of developers (let alone corporations and their constant shady practices/rug-pulls), as all the recent malware and large scale supply chain compromises have shown. I find the only acceptable solution would be to use AI to help users (and devs, for that matter) navigate and manage the exponential complexity of privacy and security.
For a practical example, imagine your iOS AI Agent notifying you that as you had requested, it is informing you that it adjusted the Facebook data sharing settings because the SOBs changed them to be more permissive again after the last update. It may even then suggest that since this is the 5685th shady incident by Facebook, that it may be time to adjust the position towards what to share on Facebook.
That could also extend to the subject story; where one’s agent blocks and warns of the behavior of a library an app uses, which is exfiltrating WhatsApp messages/data and sending it off device.
Ideally such malicious code will soon also be identified way sooner as AI agents can become code reviewers, QA, and even maintainers of open source packages/libraries, which would intercept such behaviors well before being made available; but ultimately, I believe it should all become a function of the user’s agent looking out for their best interests on the individual level. We simply cannot sustain “trust me, bro” security and privacy anymore…especially since as has been demonstrated quite clearly, you cannot trust anyone anymore in the west, whether due to deliberate or accidental actions, because the social compact has totally broken down… you’re on your own… just you and your army of AI agents in the matrix.
That's the funny thing about those here in the spirit of Hacker News. We want to build – to hack.
It's all well and good for us all to use Linux to side-step this, but sometimes (shock, horror), we even want to _share_ those hacks with other people!
As such, it's kinda nice if the Big Tech software on those devices didn't lock all of our friends in tiny padded cells 'for their own safety'.
I don't really know what I'm doing, but: why couldn't messages be stored encrypted on a blockchain, with a system where both users in a one-on-one conversation agree on a key, or have their own keys, that grants permission for 'their' messages? Then you'd never be locked into a private software / private database / private protocol. You could read your messages at any point with your key.
A huge fraction of the knee-jerk reactions here seem to miss the key point that the post is trying to get across:
> In the mid-2010s, during Furman’s tenure running economic policy under Obama, the company sold its defense business, offshored production, and slashed research, a result of pressure from financiers on Wall Street.
> Mesdag engaged in a proxy fight to wrest control of the company from its engineering founders, accusing one of its founders and iRobot Chairman Colin Angle of engaging in “egregious and abusive use of shareholder capital” for investing in research.
Yes Roomba sucks at this point. We get it. Thing is, if you slash research... that's what eventually becomes of your product.
This is what's wrong with investing overall: 1Q future blindness.
We'd have almost nothing if it weren't for university partnerships and corporate R&D way back when. There's no way to accomplish this now except to stay private.
Well, they took most of that money, and then just bought back their own stock. It's something more than just 1Q blindness and failure to understand the importance of research.
A company that does cutting-edge R&D for defense contracts and consumer small appliances is destined for trouble. They are two very different lines of business. While you might make an argument about synergy, the problem stems from the investors who are investing in two very different lines of business. Ultimately one of them was going to win. That offshoring turns suppliers into competitors is a well-known issue in the consumer small appliance world, and it looks like they were not ready for it.
Interestingly enough, the R&D portion that was sold off became Endeavor Robotics, which was sold to Teledyne FLIR Systems and seems to be doing fine.
Their research wasn't on vacuum cleaners. It was building robots for the military and space. That's exactly what investors were complaining about -- the research wasn't leading to better vacuum cleaners. It was a distraction and not what investors wanted their money being used for.
It's crazy that the Dodge brothers destroyed the company/shareholder relationship for every contemporary and future US-based corporation and then died.
To pre-empt the folks who'll come in here and laugh about how Rust should be preventing memory corruption... I'll just directly quote from the mailing list:
Rust Binder contains the following unsafe operation:
// SAFETY: A `NodeDeath` is never inserted into the death list
// of any node other than its owner, so it is either in this
// death list or in no death list.
unsafe { node_inner.death_list.remove(self) };
This operation is unsafe because when touching the prev/next pointers of
a list element, we have to ensure that no other thread is also touching
them in parallel. If the node is present in the list that `remove` is
called on, then that is fine because we have exclusive access to that
list. If the node is not in any list, then it's also ok. But if it's
present in a different list that may be accessed in parallel, then that
may be a data race on the prev/next pointers.
And unfortunately that is exactly what is happening here. In
Node::release, we:
1. Take the lock.
2. Move all items to a local list on the stack.
3. Drop the lock.
4. Iterate the local list on the stack.
Combined with threads using the unsafe remove method on the original
list, this leads to memory corruption of the prev/next pointers. This
leads to crashes like this one:
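To make those four steps concrete, here is a heavily simplified, hypothetical sketch of the pattern (standalone Rust, not the actual Binder code): one thread drains the shared list into a local one, drops the lock, and iterates, while another thread still treats the nodes as members of the original list. With std's owned LinkedList this is merely a logic error; with the kernel's intrusive list and the unsafe remove(), it is a race on the prev/next pointers.

use std::collections::LinkedList;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let deaths = Arc::new(Mutex::new(LinkedList::from([1, 2, 3])));

    let release = {
        let deaths = Arc::clone(&deaths);
        thread::spawn(move || {
            // 1. Take the lock. 2. Move all items to a local list. 3. Drop the lock.
            let local = std::mem::take(&mut *deaths.lock().unwrap());
            // 4. Iterate the local list with no lock held. In the real code these are
            // intrusive nodes, so an unlink racing with this walk corrupts prev/next.
            for d in &local {
                println!("releasing {d}");
            }
        })
    };

    // Another thread still believes the nodes are in the original list and
    // "removes" one there, concurrently with the iteration above.
    deaths.lock().unwrap().pop_front();

    release.join().unwrap();
}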
> So the prediction that incautious and unverified unsafe {} blocks would cause CVEs seems entirely accurate.
This is one CVE, the first, caused by a mistake made using unsafe Rust. But it was revealed along with 159 new kernel CVEs found in C code.[0]
It may just be me, but it seems wildly myopic to draw conclusions about Rust, or even unsafe Rust, from one CVE. More CVEs will absolutely happen. But even true Rust haters have to recognize that the tide of CVEs in kernel C code runs at something like 19+ per day. What kind of case can you make that "incautious and unverified unsafe {} blocks" are worse than that?
> Github says 0.3% of the kernel code is Rust. But even normalized to lines of code, I think counting CVEs would not measure anything meaningful.
Your sense seems more than a little unrigorous. 1/160 = 0.00625. So, several orders of magnitude fewer CVEs per line of code.
And remember, this is also the first Rust kernel CVE, and any fair metric would count both any new C kernel CVEs and those that have already accrued against the same C code, if comparing raw lines of code.
But taking a one-week snapshot and saying Rust doesn't compare favorably to C, when Rust CVEs are 1/160 and C CVEs are 159/160, is mostly nuts.
I'm more interested in the % of rust code that is marked unsafe. If you can write a kernel with 1% unsafe, that sounds pretty great. If the nature of dealing with hardware (AFAIK most of a kernel is device drivers) means something higher, maybe 10%, then maybe safety becomes difficult, especially because unsafety propagates in an unclear way, since safe code becomes unsafe to some degree when it calls into it.
I'm also curious about the percentage of implicit unsafe code in C, given there are still compilers and linters checking something, just not at the level of lifetimes etc in Rust. But I guess this isn't easy to calculate.
I like rust for low level projects and see no need to pick C over it personally - but I think it's fair to question the real impact of language safety in a realm that largely has to be unsafe. There's no world where Rust is more unsafe than C though so it's all academic. I just wonder if there's been any analysis on this, in close to metal applications like a kernel.
> I'm more interested in the % of rust code that is marked unsafe.
I think you should be less interested in the % unsafe than in what the unsafe is used to do, that is, its likelihood to cause UB, etc. If it's unsafe to interface with C code, or unsafe to do a completely safe transmute, I'm not sure one should care.
> There's no world where Rust is more unsafe than C though so it's all academic
I think Rust is more unsafe than C due to supply chain issues in the Rust ecosystem, which have not fully materialized yet. Rust certainly has an advantage in terms of memory safety, but I do not believe it is nearly as big as people like to believe compared to a C project that actually cares about memory safety and applies modern tooling to address safety. There seems to be a lot of confirmation bias. I also believe Rust is much safer for average coders doing average projects by being much safer by default.
> I think Rust is more unsafe than C due to supply chain issues in the Rust ecosystem
This is such an incredibly cheap shot. First, the supply chain issues referenced have nothing to do with Rust, the language, itself. Second, Rust's build system, cargo, may have these issues, but cargo's web fetch features simply aren't used by the Linux kernel.
So -- we can have a debate about which is a better world to live in, one with or without cargo, but it really has nothing to do with Linux kernel security.
It would probably have to be normalized to something slightly different, as the lines of code necessary for a feature vary by language. But even with the sad state of CVE quality, I would certainly prefer a language that deflects CVEs, for a kernel that runs both in places with no updates and in places with forced updates for relevant or irrelevant CVEs.
The kernel policy for CVEs is any patch that is backported, no? So this is just the first Rust patch, post being non-experimental, that was backported?
Isn't it obvious that the primary source of CVEs in Rust programs would be the portions of the program where the human is in charge of correctness instead of the compiler?
The relevant question is whether it results in fewer and less severe CVEs than code written in C. So far the answer seems to be a resounding yes
"Cause" seems unsubstantiated: I think to justify "cause," we'd need strong evidence that the equivalent bug (or worse) wouldn't have happened in C.
Or another way to put it: clearly this is bad, and unsafe blocks deserve significant scrutiny. But it's unclear how this would have been made better by the code being entirely unsafe, rather than a particular source of unsafety being incorrect.
But it didn't promise to be the solution either. Rust has never claimed, nor have its advocates claimed, that unsafe Rust can eliminate memory bugs. Safe Rust can do that (assuming any unsafe code relied upon is sound), but unsafe cannot be and has never promised to be bug free.
Except that it didn't fail to be the solution: the bug is localized to an explicit escape hatch in Rust's safety rules, rather than being a latent property of the system.
(I think the underlying philosophical disagreement here is this: I think software is always going to have bugs, and that Rust can't - and doesn't promise - to perfectly eliminate them. Instead, what Rust does promise - and deliver on - is that the entire class of memory safety bugs can be eliminated by construction in safe Rust, and localized when present to errors in unsafe Rust. Insofar as that's the promise, Rust has delivered here.)
You can label something an "explicit escape hatch" or a "latent property of the system", but in the end such labels are irrelevant. While I agree that it may be easier to review unsafe blocks in Rust compared to reviewing pointer arithmetic, union accesses, and free in C because "unsafe" is a bit more obvious in the source, I think selling this as a game changer was always an exaggeration.
Having written lots of C and C++ before Rust, this kind of local reasoning + correctness by construction is absolutely a game changer. It's just not a silver bullet, and efforts to miscast Rust as incorrectly claiming to be one seem heavy-handed.
Google's feedback seems to suggest Rust actually might be a silver bullet, in the specific sense meant in the "No Silver Bullet" essay.
That essay doesn't say that silver bullets are a panacea or cure all, instead they're a decimal order of magnitude improvement. The essay gives the example of Structured Programming, an idea which feels so obvious to us today that it's unspoken, but it's really true that once upon a time people wrote unstructured programs (today the only "language" where you even could do this is assembly and nobody does it) where you just jump arbitrarily to unrelated code and resume execution. The result is fucking chaos and languages where you never do that delivered a huge improvement even before I wrote my first line of code in the 1980s.
Google did find that sort of effect in Rust over C++.
That's not how it works. A larger codebase to scrutinize means there's more chance of missing a memory safety bug. If you can keep the Rust unsafe blocks bug-free, you don't need to worry about memory safety in safe Rust anymore. They're talking about attention getting divided all over the code where this distinction doesn't exist (like in C code). They always have been.
On top of that, there is something else they say. You have to uphold the invariants inside the unsafe blocks. Rust for Linux documents these invariants as well. The invariant was wrong in this case. The reason I mention this is because this practice has forced even C developers to rethink and improve their code.
Rust specifies very clearly what sort of error it eliminates and where it does that. It reduces the surface area of memory safety bugs to unsafe blocks, and gives you clear guidelines on what you need to ensure manually within the unsafe block to avoid any memory safety bugs. And even when you make a human error in that task, Rust makes it easy to identify them.
There are clear advantages here in terms of the effort required to prevent memory safety bugs, and in making your responsibilities explicit. This has been their claim consistently. Yet, I find that these have to be repeated in every discussion about Rust. It feels like some critics don't care about these arguments at all.
Obviously. If you use a language which inherently makes memory safety bugs in regular code impossible, all memory safety bugs will be contained to the "trust me, I know what I'm doing - no need to check this" bypass sections. Similarly, all drownings happen in the presence of water.
The important thing to remember is that in this context C code is one giant unsafe {} block, and you're more likely to drown in the sea than in a puddle.
> I know nothing about Rust. But why is unsafe needed?
The short of it is that for fundamental computer science reasons the ability to always reject unsafe programs comes at the cost of sometimes being unable to verify that an actually-safe program is safe. You can deal with this either by accepting this tradeoff as it is and accepting that some actually-safe programs will be impossible to write, or you can add an escape hatch that the compiler is unable to check but allows you to write those unverifiable programs. Rust chose the latter approach.
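The canonical small example of that escape hatch is slice splitting (essentially the sketch from the Rust book, nothing Binder-specific): the function is perfectly safe to call, but the borrow checker alone can't prove the two halves never alias, so the body needs an unsafe block while the interface stays safe.

fn split_at_mut(slice: &mut [u32], mid: usize) -> (&mut [u32], &mut [u32]) {
    let len = slice.len();
    let ptr = slice.as_mut_ptr();
    assert!(mid <= len);
    // SAFETY: [0, mid) and [mid, len) do not overlap and both lie inside the
    // original slice, so the two &mut returned here can never alias.
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut data = [1, 2, 3, 4, 5];
    let (left, right) = split_at_mut(&mut data, 2);
    left[0] = 10;
    right[0] = 30;
    println!("{data:?}"); // [10, 2, 30, 4, 5]
}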
> Kinda sounds a lock would make this safe?
There was a lock, but it looks like it didn't cover everything it needed to.
I think you missed the parent's point. We all universally acknowledge the need for the unsafe{} keyword in general; what the parent is saying is: given the constraint of a lock, could this code not have obviated the need for an unsafe block entirely, thus rendering the memory-safety issue impossible?
Ah, I see that interpretation now that you spelled it out for me.
Here's what `List::remove` says on its safety requirements [0]:
/// Removes the provided item from this list and returns it.
///
/// This returns `None` if the item is not in the list. (Note that by the safety requirements,
/// this means that the item is not in any list.)
///
/// # Safety
///
/// `item` must not be in a different linked list (with the same id).
pub unsafe fn remove(&mut self, item: &T) -> Option<ListArc<T, ID>> {
At least if I'm understanding things correctly, I don't think that that invariant is something that locks can protect in general. I can't say I'm familiar enough with the code to say whether some other code organization would have eliminated the need for the unsafe block in this specific case.
Sorry, but this is like saying 'when I am not wrong, I am right 100% of the time'.
The devs didn't write unsafe Rust to experience the thrills of living dangerously, they wrote it because the primitives were impossible to express in safe Rust.
If I were to write a program in C++ that has a thread-safe doubly linked list in it, I'd be willing to bet that linked list will have safety bugs, not because C++ is an unsafe language, but because multi-threading is hard. In fact, I believe most memory safety errors today occur in the presence of multi-threading.
Rust doesn't offer me any way of making sure my code is safe in this case; I have to do the due diligence of trying my best and still accept that bugs might happen, because this is a hard problem.
The difference between Rust and C++ in this case, is that the bad parts of Rust are cordoned off with glowing red lines, while the bad parts of C++ are not.
This might help me in minimizing the attack surface in the future, but I suspect Rust's practical benefits will end up less impactful than advertised, even when the language is fully realized and at its best, because most memory safety issues occur in code that cannot be expressed in safe Rust, and doing it the safe Rust way is not feasible for some technical reason.
If rust is so inflexible that it requires the use of unsafe to solve problems, that's still rust's fault. You have to consider both safe rust behaviour as well as necessary unsafe code.
This is sort of the exact opposite of reality: the point of safe Rust is that it's safe so long as Rust's invariants are preserved, which all other safe Rust preserves by construction. So you only need to audit unsafe Rust code to ensure the safety of a Rust codebase.
(The nuance being that sometimes there's a lot of unsafe Rust, because some domains - like kernel programming - necessitate it. But this is still a better state of affairs than having no code be correct by construction, which is the reality with C.)
I've written lots of `forbid(unsafe_code)` in Rust; it depends on where in the stack you are and what you're doing.
But as the adjacent commenter notes: having unsafe is not inherently a problem. You need unsafe Rust to interact with C and C++, because they're not safe by construction. This is a good thing!
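For instance, the C-interop case looks roughly like this (a minimal, Unix-only sketch that declares POSIX getpid by hand, not code from any particular project): the foreign call itself requires unsafe, but a one-line safe wrapper keeps the rest of the program free of it.

unsafe extern "C" {
    fn getpid() -> i32; // libc's getpid(), declared by hand for this example
}

// Safe wrapper: getpid takes no pointers, cannot fail, and touches no memory,
// so exposing it as a safe function is sound.
fn current_pid() -> i32 {
    // SAFETY: getpid has no preconditions.
    unsafe { getpid() }
}

fn main() {
    println!("running as pid {}", current_pid());
}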
I think unsafe Rust is harder to write than C. However, that's because unsafe Rust makes you think about the invariants that you'd need to preserve in a correct C program, so it's no harder to write than correct C.
In other words: unsafe Rust is harder, but only in an apples-and-oranges sense. If you compare it to the same diligence you'd need to exercise in writing safer C, it would be about the same.
Safe Rust has more strict aliasing requirements than C, so to write sound unsafe Rust that interoperates with safe Rust you need to do more work than the equivalent C code would involve. But per above, this is the apples-and-oranges comparison: the equivalent C code will compile, but is statistically more likely to be incorrect. Moreover, it's going to be incorrect in a way that isn't localizable.
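A tiny illustration of that stricter aliasing point (hypothetical code, with the forbidden variant deliberately left commented out): writing through a raw pointer twice is fine, but materializing two live &mut from it, which is routine with plain pointers in C, is undefined behavior in Rust and is flagged by Miri.

fn bump_twice(p: *mut u32) {
    unsafe {
        // Fine: each access goes through the raw pointer, and no aliasing
        // mutable references are ever created.
        *p += 1;
        *p += 1;

        // This variant would be UB under Rust's aliasing rules, even though
        // the equivalent two-pointer code in C is unremarkable:
        //
        //     let a = &mut *p;
        //     let b = &mut *p; // second live &mut to the same data
        //     *a += 1;         // using `a` after `b` exists is UB
        //     *b += 1;
    }
}

fn main() {
    let mut x = 0u32;
    bump_twice(&mut x);
    println!("{x}"); // 2
}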
Ultimately every program depends on things beyond any compiler's ability to verify: for example, the calls into code not written in that language being correct, or, even more fundamentally, the silicon itself (both the parts handling IO and the parts doing the computation) being correct, even if you're writing some embedded program that has no interfaces to foreign code at all.
The promise of Rust isn't that it can make this fundamentally non-compiler-verifiable (i.e. unsafe) dependency go away; it's that you can wrap the dependency in abstractions that make it safe for users of the dependency, provided the dependency is written correctly.
In most domains Rust doesn't necessitate writing new unsafe code; you rely on the existing unsafe code in your dependencies, which is shared, battle-tested, and reasonably scoped. This is all Rust, or any programming language, can promise. The demand that the dependency tree contain no unsafe isn't the same as the domain necessitating no unsafe; it's the impossible demand that the low-level abstractions every domain relies on be written without unsafe.
Almost all of them. It would be far shorter to list the domains which require unsafe. If you're seeing programmers reach for unsafe in most projects, either you're looking at a lot of low level hardware stuff (which does require unsafe more often than not), or you are seeing cases where unsafe wasn't required but the programmer chose to use it anyway.
Ultimately all software has to touch hardware somewhere. There is no way to verify that the hardware always does what it is supposed to do, because reality is not a computer. At the bottom of every dependency tree in any Rust code there always has to be unsafe code. But because Rust is the way it is, those interfaces are the only places you need to check for incorrectly written code. Everywhere else that just uses safe code is automatically correct, as long as the unsafe code was correct.
And that is fine, because those upstream deps can locally ensure that those sections are correct, without any risk that some unrelated code might misuse them unsafely. There is an actual rigorous mathematical proof of this. You have no such guarantees in C/C++.
> And a bug in one crate can cause UB in another crate if that other crate is not designed well and correctly.
Yes! Failure to uphold invariants of the underlying abstract model in an unsafe block breaks the surrounding code, including other crates! That's exactly consistent with what I said. There's nothing special about the stdlib. Like all software, it can have bugs.
What the proof states is that two independently correct blocks of unsafe code cannot, when used together, be incorrect. So the key value there is that you only have to reason about them in isolation, which is not true for C.
I think you're misunderstanding GP. The claim is that the only party responsible for ensuring correctness is the one providing a safe API to unsafe functionality (the upstream dependency in GP's comment). There's no claim that upstream devs are infalliable nor that the consequences of a mistake are necessarily bounded.
Those guys were writing a lot of unsafe rust and bumped into UB.
I sound like an apologist, but the Rust team stated that "memory safety is preserved as long as Rust's invariants are". Feels really clear; people keep missing this point for some reason, almost as if it's a gotcha that unsafe Rust behaves in the same memory-unsafe way as C/C++, when that's exactly the point.
Your verification surface is smaller and has a boundary.
> Any large Rust project I check has tons of unsafe in its dependency tree.
This is an argument against encapsulation. All Rust code eventually executes `unsafe` code, because all Rust code eventually interacts with hardware/OS/C-libraries. This is true of all languages. `unsafe` is part of the point of Rust.
And all of it is eventually run on an inherently unsafe CPU.
I cannot understand why we are continuing to have to re-litigate the very simple fact that small, bounded areas of potential unsafety are less risky and difficult to audit than all lines of code being unsafe.
It's just moving the goalposts. "If it compiles it works" to "it eliminates all memory bugs" to "well, it's safer than c...".
If Rust doesn't live up to its lofty promises, then it changes the cost-benefit analysis. You might give up almost anything to eliminate all bugs, a lot to eliminate all memory bugs, but what would you give up to eliminate some bugs?
Can you show me an example of Rust promising "if it compiles it works"? This seems like an unrealistic thing to believe, and I've never heard anybody working on or in Rust claim that this is something you can just provide with absolute confidence.
The cost-benefit argument for Rust has always been mediated by the fact that Rust will need to interact with (or include) unsafe code in some domains. Per above, that's an explicit goal of Rust: to provide sound abstractions over unsound primitives that can be used soundly by construction.
> Can you show me an example of Rust promising "if it compiles it works"? [...] and I've never heard anybody working on or in Rust claim that this is something you can just provide with absolute confidence.
I have heard it and I've stated it before. It's never stated in absolute confidence. As I said in another thread, if it was actually true, then Rust wouldn't need an integrated unit testing framework.
It's referring to the experience that Rust learners have, especially when writing relatively simple code, that it tends to be hard to misuse libraries in a way that looks correct and compiles but actually fails at runtime. Rust cannot actually provide this guarantee; it's impossible in any language. However, there are a lot of common simple tasks (where there's not much complex internal logic that could be subtly incorrect) where the interfaces provided by the libraries you depend on are designed to leverage the type system such that it's difficult to accidentally misuse them.
Take something like initializing an HTTP client: the interfaces make it impossible to obtain an improperly initialized client instance. This is an especially distinct feeling if you're used to dynamic languages, where you often have no assurance at all that you didn't typo a field name.
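In practice that usually means a builder-style API along these lines (a hypothetical Client type, not any real HTTP library): the only way to get a Client at all is through a builder that demands the required fields up front, so an uninitialized client simply cannot exist.

struct Client {
    base_url: String,
    timeout_secs: u64,
}

struct ClientBuilder {
    base_url: String,  // required, supplied up front
    timeout_secs: u64, // optional, has a default
}

impl ClientBuilder {
    fn new(base_url: &str) -> Self {
        ClientBuilder { base_url: base_url.to_string(), timeout_secs: 30 }
    }
    fn timeout_secs(mut self, secs: u64) -> Self {
        self.timeout_secs = secs;
        self
    }
    fn build(self) -> Client {
        Client { base_url: self.base_url, timeout_secs: self.timeout_secs }
    }
}

fn main() {
    // There is no way to construct a Client without going through the builder,
    // and the builder cannot be created without a base URL.
    let client = ClientBuilder::new("https://example.com").timeout_secs(10).build();
    println!("{} ({}s timeout)", client.base_url, client.timeout_secs);
}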
I've seen (and said) "if it compiles it works," but only when preceded by softening statements like "In my experience," or "most of the time." Because it really does feel like most of the time, the first time your program compiles, it works exactly the way you meant it to.
I can't imagine anybody seriously making that claim as a property of the language.
Yeah, I think the experiential claim is reasonable. It's certainly my experience that Rust code that compiles is more confidence-inspiring than Python code that syntax-checks!
6 days ago: Their experience with Rust was positive for all the commonly cited reasons - if it compiles it works
8 days ago: I have to debug Rust code waaaay less than C, for two reasons: (2) Stronger type system - you get an "if it compiles it works" kind of experience
4 months ago: I've been writing Rust code for a while and generally if it compiles, it works.
5 months ago: If it’s Rust, I can just do stuff and I’ve never broken anything. Unit tests of business logic are all the QA I need. Other than that, if it compiles it works.
9 months ago: But even on a basic level Rust has that "if it compiles it works" experience which Go definitely doesn't.
Some people claim that the quote is hyperbolic because it only covers memory errors. But this bug is a memory error, so ...
GP isn't asking for examples of just anyone making that statement. They're asking for examples of Rust making that promise. Something from the docs or the like.
> Some people claim that the quote is hyperbolic because it only covers memory errors. But this bug is a memory error, so ...
It's a memory error involving unsafe code, so it would be out of scope for whatever promises Rust may or may not have made anyways.
I think it's pretty reasonable to interpret "Language X promises Y" as tantamount to said promise appearing in Language X's definition and/or docs. Claims from devs in their official capacities are likely to count as well.
On the other hand, what effectively random third parties say doesn't matter all that much IMO when it comes to these things because what they think a language promises has little to no bearing on what the language actually promises. If I find a bunch of randos claiming Rust promises to give me a unicorn for my birthday it seems rather nonsensical to turn around and criticize Rust for not actually giving me a unicorn in my birthday.
What docs or language definition are you even talking about? Rust doesn't have an official spec. How can Rust make such a promise if it does not even have an official spec or documentation where it can make it?
As the other commenter said, said promises are made by people. The problem comes from the fact that these people are not always just random internet strangers who don't know a thing about programming who say random stuff that crosses their mind. Sometimes, it comes from the authority of the Rust compiler developers themselves (who apparently also don't seem to know anything about programming, considering that they have made such retarded claims...).
Just look at any talk given by them, or any post made on any forum, or, most importantly, the many instances where such statements are made on the Rust book (which is not an official spec for the language, but it is the closest thing we have, ignoring Ferrocene's spec because rustc is not based on that...).
Also most public speakers who were extremely vocal about Rust and made cookie cutter and easy to digest content for beginner programmers were dead set on selling the language through empty promises and some weird glorification of its capabilities, bordering the behaviour of a cult. Cue in, No Boilerplate, Let's Get Rusty, etc... all of these people have said many times the "if it compiles, you know it works!" statement, which is very popular among Rust programmers, and we all know that that is not true, because anyone with any experience with Rust will be able to tell you that with unsafe Rust, you can shoot yourself in the foot.
Stop selling smoke, this is a programming language, why must it also be a cult?
> What docs or language definition are you even talking about?
I thought it was pretty clear from context that I was speaking more generally there. Suppose not.
> As the other commenter said, said promises are made by people.
Sure, but when the developers of a language collectively agree that the language they all work on should make a particular promise, I think it's reasonable to condense that to <language> promises <X> rather than writing everything out over and over.
It's kind of similar to how one might say "Company X promises Y" rather than "The management of company X promises Y". It's a convenient shorthand that I think is reasonably understood by most people.
> Rust doesn't have an official spec. How can Rust make such a promise if it does not even have an official spec or documentation where it can make it?
Rust does have official documentation [0]?
And that being said, I don't think a language needs an official spec to make a promise. As far as most programmers are concerned, I'm pretty sure the promises made in language/implementation docs are good enough. K&R was generally good enough for most C programmers before C had a spec, after all (and arguably even after to some extent) :P
> Sometimes, it comes from the authority of the Rust compiler developers themselves []. Just look at any talk given by them, or any post made on any forum
Does it? At least from what I can remember off the top of my head, I don't think I've seen such a claim from official Rust devs speaking in their capacity as such. Perhaps you might have links to such?
> or, most importantly, the many instances where such statements are made on the Rust book
Are there "many instances"? `rg 'compiles.*it.*works'` turns up precisely one (1) instance of that statement in the Rust book [1], and slight variations on that regex don't turn up any additional instances. What the book says portrays that statement in a slightly different light than you seem to think:
> Note: A saying you might hear about languages with strict compilers, such as Haskell and Rust, is “If the code compiles, it works.” But this saying is not universally true. Our project compiles, but it does absolutely nothing! If we were building a real, complete project, this would be a good time to start writing unit tests to check that the code compiles and has the behavior we want.
I wouldn't claim that my search was comprehensive, though, and I also can't claim to know the Rust Book from cover to cover. Maybe you know some spots I missed?
> which is not an official spec for the language, but it is the closest thing we have
I believe that particular honor actually goes to the Rust Reference [2].
> Also most public speakers who were extremely vocal about Rust and made cookie cutter and easy to digest content for beginner programmers were dead set on selling the language through empty promises and some weird glorification of its capabilities, bordering the behaviour of a cult. Cue in, No Boilerplate, Let's Get Rusty, etc... all of these people have said many times the "if it compiles, you know it works!" statement, which is very popular among Rust programmers, and we all know that that is not true, because anyone with any experience with Rust will be able to tell you that with unsafe Rust, you can shoot yourself in the foot.
Again, I don't think what unrelated third parties say has any bearing on what a language actually promises. Rust doesn't owe me a unicorn on my birthday no matter how many enthusiastic public speakers I find.
I've also said it, with the implication that the only remaining bugs are likely to be ones in my own logic. Like, suppose I'm writing a budget app and haven't gone to the lengths of making Debit and Credit their own types. I can still accidentally subtract a debit from a balance instead of adding to it. But unless I've gone out of my way to work around Rust's protections, e.g. with unsafe, I know that parts of my code aren't randomly mutating immutables, or opening up subtle use-after-free situations, etc. Now I can spend all my time concentrating on the program's logic instead of tracking those other thousands of gotchas.
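The "own types" approach mentioned above would look roughly like this (a hypothetical sketch, not from any real budget app): once Debit and Credit are distinct newtypes, mixing them up stops compiling instead of silently corrupting the balance.

#[derive(Debug, Clone, Copy)]
struct Debit(i64);
#[derive(Debug, Clone, Copy)]
struct Credit(i64);
#[derive(Debug, Clone, Copy)]
struct Balance(i64);

impl Balance {
    fn apply_debit(self, d: Debit) -> Balance {
        Balance(self.0 - d.0)
    }
    fn apply_credit(self, c: Credit) -> Balance {
        Balance(self.0 + c.0)
    }
}

fn main() {
    let balance = Balance(1_000);
    let rent = Debit(300);
    let salary = Credit(500);
    let balance = balance.apply_debit(rent).apply_credit(salary);
    // balance.apply_credit(rent); // would not compile: expected `Credit`, found `Debit`
    println!("{balance:?}"); // Balance(1200)
}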
It's not moving the goalposts at all. I'm not a Rust programmer, but for years the message has been the same. It's been monotonous and tiring, so I don't know why you think it's new.
Safe Rust code is safe. You know where the unsafe code is, because it's marked as unsafe. Yes, you will need some unsafe code in any notable project, but at least you know where it is. If you don't babysit your unsafe code, you get bad things. Someone didn't do the right thing here, and I'm sure there will be a post-mortem and lessons learned.
To be comparable, imagine in C you had to mark potentially UB code with ub{} to compile. Until you get that, Rust is still a clear leader.
That's like saying that if c is so inflexible it requires the use of inline assembly to solve problems, it's C's fault if inline assembly causes undefined behavior.
> If rust is so inflexible that it requires the use of unsafe to solve problems...
Thankfully, it doesn't. There are very few situations which require unsafe code, though a kernel is going to run into a lot of those by virtue of what it does. But the vast majority of the time, you can write Rust programs without ever once reaching for unsafe.
What's the alternative that preserves safe-by-default while still allowing unlimited flexibility to accidentally break things? I mean, Rust allows inline assembly because there are situations where you absolutely must execute specific opcodes, but darned if I want that to be the common case.
Yes. When writing unsafe, you have to assume you can never trust anything coming from safe rust. But you are also provided far fewer rakes to step on when writing unsafe, and you (ideally) are writing far fewer lines of unsafe code in a Rust project than you would for equivalent C.
Rust is written in Rust, and we still want to be able to e.g. call C code from Rust. (It used to be the case that external C code was not always marked unsafe, but this was fixed recently).
samdoesnothing is making a legitimate point about needing to consider the prevalence of unsafe in a Rust program. That he's being downvoted to hell is everything wrong with HN.
I really hope that Japanese developers take advantage of this situation to show off at least some of the creativity that's possible when we're not quite as limited by Apple's restrictions. I don't doubt that from a 'security' point of view, Apple is going to continue to enforce all sorts of things that make it wildly more difficult to use the supercomputer in our pockets than I would like. But nonetheless, perhaps this gives a bit of room to be a little more illustrative.
I know that from time to time people have argued that jailbreaking should have resulted in more creativity if it were going to, but with that tiny, tiny market, it's hard to believe that many developers, relatively speaking, would have been able to go hard at building something custom and impressive. With this larger market, hopefully folks will get the chance to do that now.
Based on other HN rules thus far, I tend to think that this just results in more comments pointing out that you're violating a rule.
In many threads, those comments can be just as annoying and distracting as the ones being replied to.
I say this as someone who to my recollection has never had anyone reply with a rule correction to me -- but I've seen so many of them over the years and I feel like we would fill up the screen even more with a rule like this.
> David Crawshaw - before this, CTO and co-founder of Tailscale
> Josh Bleecher Snyder - was a Director of Engineering at Braintree, amongst other things