
> C, a useless language without effective scopes

Mutexes can be handled safely in C. It's "just another flavor" of resource management, which does take quite a bit of discipline. Cascading error paths / exit paths help.
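
To illustrate what I mean by cascading exit paths, here's a minimal sketch (hypothetical function, buffer size chosen arbitrarily): the mutex is released on the same unwinding path as every other resource, so there is exactly one unlock per lock no matter which step fails.

    #include <pthread.h>
    #include <stdlib.h>

    #define BUF_SIZE 4096                 /* hypothetical */

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    int process(void)
    {
        int ret = -1;
        char *buf = malloc(BUF_SIZE);

        if (buf == NULL)
            goto out;

        if (pthread_mutex_lock(&lock) != 0)
            goto out_free;

        /* ... critical section: use buf and the shared state ... */
        ret = 0;

        pthread_mutex_unlock(&lock);
    out_free:
        free(buf);
    out:
        return ret;
    }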


Might want to move foo.add() out of the lock scope (assuming foo is a thread-private resource):

    value = nil
    lock {
      if (data.size() > 0) {
        value = data.pop()
      }
    }
    if (value) {
        foo.add(value)
    }
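
In C with pthreads, the same shape would be roughly as follows (data_lock guards the shared container; stack_size/stack_pop/foo_add are hypothetical helpers):

    pthread_mutex_lock(&data_lock);
    int have_value = (stack_size(&data) > 0);
    int value = have_value ? stack_pop(&data) : 0;
    pthread_mutex_unlock(&data_lock);

    if (have_value)
        foo_add(&foo, value);    /* thread-private, no lock held */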


I think it's fair to comment not only on the subject, but on the writing itself, too.

And it might help Justine improve her writing (and reach a larger audience -- after all, blog posts are meant to reach an audience, aren't they?). Of course you can always say, "if you find yourself alienated, it's your loss".


This is how it should be. IIRC -- apologies, can't find a source -- Ulrich Drepper wrote somewhere about NPTL that its mutexes were not particularly lightweight, but that you should design your program for low contention anyway.

For highly contended data structures, spinlocks (and nowadays explicit atomics) are likely better.
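
For instance (a sketch, not a benchmark): a hot but trivially small critical section, such as a statistics counter, needs no mutex at all once you reach for C11 atomics.

    #include <stdatomic.h>

    static atomic_long hits;

    void record_hit(void)
    {
        /* relaxed ordering is enough for a plain counter */
        atomic_fetch_add_explicit(&hits, 1, memory_order_relaxed);
    }

    long read_hits(void)
    {
        return atomic_load_explicit(&hits, memory_order_relaxed);
    }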


> remove locks from code and replace with some kind of queue or messaging abstraction

Shared-nothing message passing reflects the underlying (modern) computer architecture more closely, so I'd call the above a good move. Shared memory / symmetric multiprocessing is an abstraction that leaks like a sieve; it no longer reflects how modern computers are built (multiple levels of CPU caches, cores, sockets, NUMA, etc.).


If you are doing pure shared-nothing message passing, you do not need coherent caches; in fact, cache coherency gets in the way of pure message passing.

Vice versa, if you do pure message passing, you are not benefiting from hardware-accelerated cache coherency, and you are leaving performance (and usability) on the floor.


That's good to hear! I am pretty removed from the underlying hardware now, so it makes me happy to hear that a better way of doing things is catching on even in low-level land.


Agreed; this is what I've always (silently) thought of those fat binaries. Absolute stroke of genius, no doubt, and also a total abomination (IMO) from a sustainability perspective.


I also meant to comment about the grandstanding in her post.

Technical achievement aside, when a person invents something new, the burden is on them to prove that the new thing is a suitable replacement for / improvement over the existing stuff. "I'm starting to view /not/ using [cosmo] in production as an abandonment of professional responsibility" is emotional manipulation -- it's guilt-tripping. Professional responsibility is the exact opposite of what she suggests: it's not jumping on the newest bandwagon. "a little rough around the edges" is precisely what production environments don't want; predictability/stability is frequently more important than peak performance / microbenchmarks.

Furthermore,

> The C library is so deeply embedded in the software supply chain, and so depended upon, that you really don't want it to be a planet killer.

This is just underhanded. She implicitly called glibc and musl "planet killers".

First, technically speaking, it's just not true; and even if the implied statement were remotely true (i.e., if those mutex implementations were in fact responsible for a significant share of the cycles in actual workloads), the emotional load / snide remark ("planet killer") would still be unjustified.

Second, she must know very well that whenever the efficiency of computation improves, we don't use the gains to run the same workloads as before at lower cost / with a smaller environmental footprint. Instead, we keep all CPUs pegged all the time, and efficiency improvements only ever translate to larger profit. A faster mutex, too, translates to more $$$ pocketed, and not to less energy consumed.

I find her tone of voice repulsive.


I agree overall with your sentiment but wanted to comment on one of your statements that I perceived to be hyperbole.

> Second, she must know very well that whenever the efficiency of computation improves, we don't use the gains to run the same workloads as before at lower cost / with a smaller environmental footprint. Instead, we keep all CPUs pegged all the time, and efficiency improvements only ever translate to larger profit. A faster mutex, too, translates to more $$$ pocketed, and not to less energy consumed.

It depends on the use case. If you can serve the same number of users / requests with fewer machines, then you buy and run fewer machines. (Yes, saving energy, but also saving on both capex and opex.)

Also, when you're talking about anything resembling interactivity (as you might in the context of, say, a webserver), you really don't want to run anywhere close to 100% average utilization. With unbounded queues, you end up with arbitrarily high wait times; with bounded queues, you end up shedding load with 503s, 429s, and the like.
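
To put a rough number on that: in the textbook M/M/1 model, the mean time a request spends in the system is 1/(mu - lambda), so as utilization rho = lambda/mu approaches 1, latency grows without bound -- which is exactly the "arbitrarily high wait times" above.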

That said, my experience with modern webservers is that you generally don't rely on mutexes for synchronizing most work across worker threads, and instead you try to keep your workloads as embarrassingly parallel as possible.


> or just want to experience and enjoy my excellent (and occasionally eclectic) taste

Thanks for the good laugh :)

Seriously though, your CV is impressive. I hope you'll land well, and quickly. In my (very recent) job-hunting experience, the job market is currently mortally ill; the more senior and experienced you are, the more the insane interviewing and HR practices (and the inexplicable rejections) will hurt your soul.

A friend of mine sent me the following links:

https://danluu.com/hiring-lemons/

https://danluu.com/programmer-moneyball/

https://danluu.com/algorithms-interviews/

Good luck!


> For me, GitHub PR review drives me crazy. It's good for exactly one round of exchange. After that nobody can tell what the heck is going on.

Matches my experience totally. It devolves into a heap of garbage. In comparison, with (incremental) mailing list-based review, it's not difficult to go up to v7 or so.

> But on non-subjective metrics it seems like LLVM PRs on GitHub are gathering noticeably less discussion than they used to enjoy as Phabricator diffs.

That could be a consequence of GitHub making it harder to comment sensibly.


Whenever you force-push v2, v3, v4 of your branch called "foobar", you can also push branches called "foobar-v2", "foobar-v3", "foobar-v4" (pointing to the identical commit hashes, respectively). The "foobar" branch is what refreshes the PR. There are no PRs for the versioned (and effectively read-only) branches; they are there for reviewer reference.
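
Concretely, assuming the remote is called "origin", each round looks something like this:

    git push --force origin foobar     # refreshes the open PR
    git push origin foobar:foobar-v2   # read-only snapshot for reviewers

Reviewers can then compare, say, origin/foobar-v2 against origin/foobar-v3 to see what changed between rounds.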

