Hacker News | smallstepforman's comments

Linux has overcommit, so failing malloc hasn't been a thing for over a decade. Zig is late to the party, since it strong-arms devs to cater to a scenario which no longer exists.

On Linux you can turn this off. On some OSes it's off by default, especially in embedded, which is a major area of native coding. If you don't want to handle allocation failures in your app, you can abort.

Also, malloc can fail even with overcommit if you accidentally pass an obviously incorrect size like -1.
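This failure mode is easy to observe from safe Rust; a minimal sketch using the standard library's fallible allocation API (`try_reserve`), where the absurd size is rejected regardless of the kernel's overcommit setting:

```rust
fn main() {
    let mut v: Vec<u8> = Vec::new();
    // A request for usize::MAX bytes can never be satisfied; it is
    // rejected immediately, overcommit or not.
    let result = v.try_reserve(usize::MAX);
    assert!(result.is_err());
    println!("huge allocation rejected: {}", result.is_err());
}
```

The same thing happens in C when `malloc((size_t)-1)` returns NULL: the allocator itself rejects sizes it can prove are unsatisfiable before the kernel is ever involved.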


I had a teacher who said "a good programmer looks both ways before crossing a one-way street".

Funny that you mentioned Erlang, since actors and message passing are tricky to implement in Rust (yes, I’ve seen Tokio). There is a reason why Rust doesn't have a nice GUI library, or a nice game engine. Resources must be shared, and there is more to sharing than memory ownership.

You need to be pragmatic and practical. Extra-large codebases have controllers/managers that must be accessible by many modules. Passing dozens of local references to said “global” around instead of using a single global makes code less practical.

One of my favorite talks of all-time is the GDC talk on Overwatch's killcam system. This is the thing that when you die in a multiplayer shooter you get to see the last ~4 seconds of gameplay from the perspective of your killer. https://www.youtube.com/watch?v=A5KW5d15J7I

The way Blizzard implemented this is super super clever. They created an entirely duplicate "replay world". When you die the server very quickly "backfills" data in the "replay world". (The server doesn't send all data initially, to help prevent cheating.) The camera then flips to render the "replay world" while the "gameplay world" continues to receive updates. After a few seconds the camera flips back to the "gameplay world", which is still up-to-date and ready to rock.

Implementing this feature required getting rid of all their evil dirty global variables. Because pretty much every time someone asserted "oh, we'll only ever have one of these!" that turned out to be wrong. This is a big part of the talk. Mutable globals are bad!

> Extra large codebases have controllers/managers that must be accessible by many modules.

I would say in almost every single case the code is better and cleaner to not use mutable globals. I might make a begrudging exception for logging. But very begrudgingly. Go/Zig/Rust/C/C++ don't have a good logging solution. Jai has an implicit context pointer which is clever and interesting.

Rust uses the unsafe keyword as an "escape hatch". If I wrote a programming language I probably would, begrudgingly, allow mutable globals. But I would hide their declaration and usage behind the keyword `unsafe_and_evil`. Such that every single time a programmer either declared or accessed a mutable global, they would have to type out `unsafe_and_evil` and acknowledge their misdeeds.
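Rust today already has a milder version of exactly that ceremony: a mutable global can only be declared `static mut` and only touched inside an `unsafe` block. A toy sketch (the counter is illustrative, and single-threaded by assumption):

```rust
// A mutable global. Every read or write of it below must be
// wrapped in `unsafe` -- the moral equivalent of `unsafe_and_evil`.
static mut HIT_COUNT: u64 = 0;

fn record_hit() {
    // SAFETY: this example is single-threaded; real code would need
    // synchronization, or better, an AtomicU64 instead.
    unsafe { HIT_COUNT += 1 };
}

fn main() {
    record_hit();
    record_hit();
    let hits = unsafe { HIT_COUNT }; // even reading requires `unsafe`
    println!("hits = {hits}");
}
```

The friction is the point: every access site is greppable and visibly marked as the programmer's misdeed.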


If we're getting philosophical, we can identify a hierarchy of globals:

1. Read-only (`const`s in Rust). These are fine, no objections.

2. Automatic-lazily-initialized write-once, read-only thereafter (`LazyLock` in Rust). These are also basically fine.

3. Manually-initialized write-once, read-only thereafter (`OnceLock` in Rust). These are also basically fine, but slightly more annoying because you need to be sure to manually cover all possible initialization pathways.

4. Write-only. This is where loggers are, and these are also basically fine.

5. Arbitrary read/write. This is the root of all evil, and what we classically mean when we say "global mutable state".
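Levels 1 through 3 map directly onto standard-library constructs; a minimal Rust sketch (the names and values are illustrative):

```rust
use std::sync::{LazyLock, OnceLock};

// 1. Read-only global: fine.
const MAX_RETRIES: u32 = 3;

// 2. Automatically lazily initialized on first access,
//    read-only thereafter: fine.
static HOSTNAME: LazyLock<String> = LazyLock::new(|| {
    std::env::var("HOSTNAME").unwrap_or_else(|_| "localhost".into())
});

// 3. Manually initialized write-once: basically fine, but every
//    startup path must remember to call set() before the first get().
static CONFIG: OnceLock<String> = OnceLock::new();

fn main() {
    CONFIG.set("debug=false".to_string()).expect("set exactly once");
    println!(
        "retries={} host={} config={}",
        MAX_RETRIES,
        &*HOSTNAME,
        CONFIG.get().unwrap()
    );
}
```

A second `CONFIG.set(...)` returns `Err`, which is what makes level 3 "write-once" rather than level 5.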


Good list.

2 and 3 are basically fine, just so long as you don’t rely on initialization order and don’t have meaningful cleanup. The C++ static initialization order fiasco is a great pain. Crash-on-shutdown bugs are soooo common with globals.

4 I have to think about.

And yes 5 is the evilness.


Could you describe what you would consider a good logging solution?

Haven’t found it yet! Jai’s implicit context pointer is interesting. Need to work with it more. It still has lots of limitations. But interesting angle.

This is a great example of something that experience has dragged me, kicking and screaming, into grudgingly accepting: That ANY time you say “We will absolutely always only need one of these, EVER” you are wrong. No exceptions. Documents? Monitors? Mouse cursors? Network connections? Nope.

Testing is such a good counter example. "We will absolutely always only need one of these EVER". Then, uh, can you run your tests in parallel on your 128-core server? Or are you forced to run tests sequentially one at a time because it either utterly breaks or accidentally serializes when running tests in parallel? Womp womp sad trombone.
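The fix that makes tests parallelizable is usually the same one every time: replace the "only one of these, ever" global with per-instance state. A hypothetical sketch (`Recorder` is a made-up name):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// The "we'll only ever need one" global. Tests that reset it
// stomp on each other when run in parallel.
static EVENTS: AtomicU64 = AtomicU64::new(0);

// The fix: each test constructs its own recorder instead.
struct Recorder {
    events: u64,
}

impl Recorder {
    fn new() -> Self {
        Recorder { events: 0 }
    }
    fn record(&mut self) {
        self.events += 1;
    }
}

fn main() {
    let mut a = Recorder::new();
    let mut b = Recorder::new();
    a.record();
    a.record();
    b.record();
    // Independent instances can't interfere, no matter how many
    // cores run them concurrently.
    assert_eq!(a.events, 2);
    assert_eq!(b.events, 1);
    let _ = EVENTS.load(Ordering::Relaxed); // the global we'd like to delete
    println!("per-instance state keeps parallel tests independent");
}
```

Rust's test harness runs tests in parallel by default, which is exactly how this class of bug gets discovered.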

There was an interesting proposal in the rust world to try and handle that with a form of implicit context arguments... I don't have time to track down all the various blogposts about it right now but I think this was the first one/this comment thread will probably have links to most of it: https://internals.rust-lang.org/t/blog-post-contexts-and-cap...

Anyways, I think there are probably better solutions to the problem than globals, we just haven't seen a language quite solve it yet.


Don't forget that people also view Netflix on TVs, and a large number of physical TVs were made before AV1 was specced. So 30% overall may also mean 70% on modern devices.

I once saw a C64 1541 disk drive play the US national anthem by moving the floppy drive head across the disk in patterns to produce mechanical noise which could be interpreted as a melody/tune. So a combination of a cheap built-in speaker and a floppy disk could have produced stereo sound!!!!

Meanwhile, 8-year-old Amigas were playing 12-bit digital stereo sound on 4 channels … on a preemptive multitasking system on a machine with less RAM and a < 8 MHz CPU.


Why stop with only 2 OS’s? I triple boot with Haiku.


It already exists (any Open Source OS).


Actually, for Robotics hardware is a solved problem. Software is struggling to keep up.


> Actually, for Robotics hardware is a solved problem.

I understand the sentiment but this couldn't be further from the truth. There are no robotic hand models that get close to the fidelity of humans (or even other primates).

The technology just doesn't exist yet, motors are a terrible muscle replacement. Even completely without software, a puppeteered hand model would be revolutionary.


There were snipers on all sides doing immoral things. The war in Sarajevo started when a Serbian wedding was shot up in April 1992, causing the Serbs to panic and want to secede from Bosnia & Herzegovina. Then you end up with "rights for this ethnic group but not the other", and a civil war breaks out with geopolitical meddling from foreign wanna-be powers.


No please, stop puking this bullshit... if you believe that, you also believe Princip's assassination of Archduke Ferdinand was the underlying cause of WW1 (and not the tensions among the great colonial powers of the time).


My biggest beef with exceptions is invisible code flow. Add the attribute throws<T> to the function signature (so that it's visible) and enforce handling by generating a compiler error if ignored. Bubbling up errors is OK. In essence, this is Result<T, E>. That's OK. Even for constructors.

What I dislike is having a mechanism to skip 10 layers of bubbling deep inside the call stack by a “mega” throw of type <N> which none of the layers know about. Other than
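The visible-in-the-signature bubbling described above is exactly what Rust's `?` operator gives you; a minimal sketch (`parse_port` is a made-up example):

```rust
use std::num::ParseIntError;

// The error type is part of the signature, so the flow is visible.
// Each layer must either handle the error or explicitly forward it
// exactly one level with `?` -- there is no invisible ten-layer jump.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    let n: u16 = s.parse()?; // bubbles the parse error to our caller
    Ok(n)
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not-a-port").is_err());
    println!("ok");
}
```

Ignoring the returned `Result` also triggers a `#[must_use]` compiler warning, which is the "enforce handling" half of the wish.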


I would also like to have type-checked exceptions in C++ (as long as we can also template over exception specifications).

But what I would really want is noexcept regions:

   <potentially throwing code>...
   noexcept {
      <only noexcept code here> ...
   }
   <potentially more throwing code>...
i.e. a way to mark regions of code that cannot deal with unwind and must only call non-throwing operations.


Don't think of the uncaught type as a "mega" throw. It's just a distinct type of error that nobody specified that they can handle. If you truly worry about the caller missing something, then somewhere in there you can catch anything and translate into a recognizable exception. This is easiest to understand in a library. The interface functions can catch all and translate into one particular exception type for "unknown" or generic errors. Then, that will be caught by anyone using the thing as documented. This only works if it's just reporting a non-fatal error. In case of a fatal error, it can't be handled, so the translation is kind of pointless.
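The "catch everything at the interface and translate" pattern has a close Rust analog using `catch_unwind`; a hypothetical sketch (`api` and `LibError` are made-up names):

```rust
use std::panic;

#[derive(Debug, PartialEq)]
enum LibError {
    Unknown,
}

// Library interface function: catch anything the internals "throw"
// (panic) and translate it into one documented error type, the way a
// C++ wrapper would catch (...) and rethrow a library_error.
fn api(input: i64) -> Result<i64, LibError> {
    panic::catch_unwind(|| {
        // Internal code that may fail in ways the caller never
        // specified -- here, a divide-by-zero panic.
        100 / input
    })
    .map_err(|_| LibError::Unknown)
}

fn main() {
    assert_eq!(api(4), Ok(25));
    // The unanticipated failure surfaces as the generic error
    // (the default panic hook still prints a message to stderr).
    assert_eq!(api(0), Err(LibError::Unknown));
    println!("ok");
}
```

As the comment notes, this only makes sense for non-fatal errors; a truly fatal condition gains nothing from being relabeled "unknown".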

