
My guess is that the problem being solved is how to get acquired by a big Linux vendor.

I thought it was how to plug the user freedom hole. Profits are leaking because users can leave the slop ecosystem and install something that respects their freedom. It's been solved on mobile devices and it needs to be solved for desktops.

Please don't. C packaging in distros is working fine and doesn't need to turn into crap like the other language-specific package managers. If you don't know how to use pkgconf then that's your problem.

When I used to work with C many years ago, it was basically: download the headers and the binary file for your platform from the official website, place them in the header/lib paths, update the linker step in the Makefile, #include where it's needed, then use the library functions. It was a little bit more work than typing "npm install", but not so much as to cause headaches.
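Concretely, the whole workflow was something like this (a sketch with a hypothetical libfoo; exact paths vary by platform):

  # headers into the header path, binary into the lib path
  sudo cp foo.h /usr/local/include/
  sudo cp libfoo.so /usr/local/lib/ && sudo ldconfig

  # then the linker step is one extra flag each
  cc -I/usr/local/include -L/usr/local/lib -o app main.c -lfoo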

What do you do when the code you downloaded refers to symbols exported by libraries not already on your system? How do you figure out where those symbols should come from? What if it expects version-specific behavior and you’ve already installed a newer version of libwhatever on your system (I hope your distro package manager supports downgrades)?
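When it goes wrong, the detective work tends to look like this (a sketch for a Debian-ish system; the multiarch path and symbol name are illustrative):

  # which installed library defines the symbol the linker says is undefined?
  for f in /usr/lib/x86_64-linux-gnu/*.so; do
      nm -D --defined-only "$f" 2>/dev/null | grep -q ' SSL_read$' && echo "$f"
  done

  # which package ships the shared object the loader can't find? (needs apt-file)
  apt-file search libssl.so.3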

These are very, very common problems; not edge cases.

Put another way: y'all know we got all these other package management/containerization/isolation systems in large part because people tried the C-library-install-by-hand/system-package-all-the-things approaches and found them severely lacking, right? CPAN was considered a godsend for a reason. NPM, for all its hilarious failings, even more so.


> These are very, very common problems; not edge cases.

Honestly? Over the course of my career, I've only rarely encountered these sorts of problems. When I have, they've come from poorly engineered libraries anyway.


Here is a thought experiment (for devs who buy into package managers). Take the hash of a program and all its dependencies. Behavior is different for every unique hash. With package managers, that hash is different on every system, including hashes in the future that are unknowable to you (i.e. future "compatible" versions of libraries).
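To make the experiment concrete, here's a rough way to compute such a hash for a dynamically linked program on Linux (a sketch; ldd output varies, and the dynamic loader itself is skipped):

  # fingerprint the binary plus every shared library it actually resolves
  { sha256sum ./app; ldd ./app | awk '$3 ~ /^\// {print $3}' | xargs sha256sum; } | sha256sum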

That risk/QA load can be worth it, but is not always. For an OS, it helps to be able to upgrade SSL (for instance).

In my use cases, all this is a strong net negative. npm-based projects randomly break when new "compatible" versions of libraries get installed for new devs. C/C++ projects don't build because of include/lib path issues, or because some specific version isn't installed, or who knows what.

If I need you to install the SDL 2.3.whatever libraries exactly, or use react 16.8.whatever to be sure the app runs, what's the point of using a complex system that will almost certainly ensure you have the wrong version? Just check it in, either by an explicit version or by committing the library's code and building it yourself.


Check it in and build it yourself using the common build system that you and the third party dependency definitely definitely share, because this is the C/C++ ecosystem?

You are conflating development with distribution of binaries (a problem which interpreted languages do not have, I hasten to add).

1. The accepted solution to what you're describing in terms of development is passing appropriate flags to `./configure`, specifying the paths for the alternative versions of the libraries you want to use (short sketch after point 2). This is as simple as it gets.

As for where to get these libraries in the event that the distro doesn't provide the right version: `./configure` is basically a script, so there's nothing stopping you from printing a couple of FTP mirrors in its output to be used as targets for wget.

2. As for the problem of distributing binaries and related up-to-date libraries, the appropriate solution is a distro package manager. A C package manager wouldn't come into this equation at all, unless you wanted to compile from scratch to account for your specific circumstances, in which case, goto 1.
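For point 1, a minimal sketch (hypothetical paths, assuming an autoconf-style package):

  # point the build at a locally unpacked copy of libfoo
  ./configure CPPFLAGS="-I$HOME/opt/foo/include" LDFLAGS="-L$HOME/opt/foo/lib"

  # or, if the package finds its dependencies via pkg-config
  PKG_CONFIG_PATH="$HOME/opt/foo/lib/pkgconfig" ./configure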


And with header-only libraries (like stb) it's even less than that.

I primarily write C nowadays to regain sanity from doing my day job, and the fact that there is zero bit rot and no setup/fixing/fiddling needed to get things running stands in stark contrast to the horrors I have to deal with professionally.
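For the unfamiliar, the entire integration of stb_image is a copied header plus two lines in one translation unit (a sketch assuming stb_image.h sits next to your sources):

  /* exactly one .c file defines the implementation */
  #define STB_IMAGE_IMPLEMENTATION
  #include "stb_image.h"

  #include <stdio.h>

  int main(void)
  {
      int w, h, n;
      unsigned char *px = stbi_load("test.png", &w, &h, &n, 0);
      if (!px)
          return 1;
      printf("%dx%d, %d channels\n", w, h, n);
      stbi_image_free(px);
      return 0;
  }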


And then you get some minor detail different from the compiled library and boom, UB, because some struct is laid out differently, or the calling convention is wrong, or you compiled with a different -std, or …
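A contrived but runnable sketch of the layout half of the problem (hypothetical structs; in real life the mismatch corrupts memory silently inside the library):

  #include <stdio.h>

  /* layout the library binary was compiled against */
  struct point_v1 { double x, y; };

  /* layout in the newer header you compiled against */
  struct point_v2 { double x, y, z; };

  int main(void)
  {
      /* the library reads/writes one size, the caller allocates the other:
         any such mismatch across the ABI boundary is undefined behavior */
      printf("library expects %zu bytes, caller allocates %zu bytes\n",
             sizeof(struct point_v1), sizeof(struct point_v2));
      return 0;
  }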

Which is exactly why you should leave it to the distros to construct a consistent build environment. If your distro regularly gets this wrong then you do have a problem.

Well, if you're fine with using 3-year-old versions of those libraries, packaged by severely overworked maintainers who at one point seriously considered blindly converting everything into Flatpaks and shipping those simply because they can't muster enough manpower, sure.

"But you can use 3rd party repositories!" Yeah, and I also can just download the library from its author's site. I mean, if I trust them enough to run their library, why do I need opinionated middle-men?


If this is a concern (which it rarely is) then you can pitch in with distro packaging. Volunteers are always welcome.

> "But you can use 3rd party repositories!"

That's not something I said.


>(which it rarely is)

You're saying it's _rare_ for developers to want to advance a dependency past the ancient version contained in <whatever the oldest release they want to support>?

Speaking for the robotics and ML space, that is simply the opposite of a true statement where I work.

Also doesn't your philosophy require me to figure out the packaging story for every separate distro, too? Do you just maintain multiple entirely separate dependency graphs, one for each distro? And then say to hell with Windows and Mac? I've never practiced this "just use the system package manager" mindset so I don't understand how this actually works in practice for cross-platform development.


I agree entirely. C doesn't need this. That I don't have to deal with such a thing has become a new and surprising advantage of the language for me.

I find this sentiment bewildering. Can you help me understand your perspective? Is this specifically C or C++? How do you manage a C/C++ project across a team without a package manager? What is your methodology for incorporating third party libraries?

I have spent the better part of 10 years navigating around C++'s deplorable dependency management story with a slurry of Docker and apt, which had better not be part of everyone's story about how C is just fine. I've now been moving our team to Conan, which is also a complete shitshow for the reasons outlined in the article: there is still an imaginary line where Conan lets go and defers to "system" dependencies, with a completely half-assed and non-functional system for communicating and resolving those dependencies, which doesn't work at all once you need to cross-compile.


You're confusing two different things.

For most C and C++ software, you use the system packaging which uses libraries that (usually) have stable ABIs. If your program uses one of those problematic libraries, you might need to recompile your program when you update the library, but most of the time there's no problem.

For your company's custom mission critical application where you need total control of the dependencies, then yes you need to manage it yourself.


Ok - it sounds like you’re right, but I think despite your clarification I remain confused. Isn’t the linked post all about how those two things always have a mingling at the boundary? Like, suppose I want to develop and distribute a c++ user-space application in a cross platform way. I want to manage all my dependencies at the language level, and then there’s some collection of system libraries that I may or may not decide to rely on. How do I manage and communicate that surface area in a cross platform and scalable way? And what does this feel like for a developer - do you just run tests for every supported platform in a separate docker container?

What "distro" package manager is available on Windows and macOS? vcpkg doesn't provide binary packages and has quite a few autotools-shaped holes. Homebrew is great as long as you're building for your local machine's macOS version and architecture, but if you want to support an actual user community you're SOL.

I mean … it clearly isn’t working well if problems like “what is the libssl distribution called in a given Linux distro’s package manager?” and “installing a MySQL driver in four of the five most popular programming languages in the world requires either bundling binary artifacts with language libraries or invoking a compiler toolchain in unspecified, unpredictable, and failure-prone ways” are both incredibly common and incredibly painful for many/most users and developers.

The idea of a protocol for “what artifacts in what languages does $thing depend on and how will it find them?” as discussed in the article would be incredibly powerful…IFF it were adopted widely enough to become a real standard.


Assuming that your distro is, say, Debian, then you'll know the answer to that is always libssl-dev, and if you cannot find it then there's a handy search tool (both CLI and web page: https://packages.debian.org) to help you.
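And once it's installed, pkg-config hands you the compile and link flags (Debian again; the output line is what it typically prints):

  $ sudo apt-get install libssl-dev
  $ pkg-config --cflags --libs openssl
  -lssl -lcrypto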

I'm not very familiar with MySQL, but for C (which is what we're talking about here) I typed mysql here and it gave me a bunch of suggestions: https://packages.debian.org/search?suite=default&section=all... Debian doesn't ship binary blobs, so I guess that's not a problem.

"I have to build something on 10 different distros" is not actually a problem that many people have.

Also, let the distros package your software. If you're not doing that, or if you're working against the distros, then you're storing up trouble.


Actually "build something on 10 different distros" is not a problem either, you just make 10 LXC containers with those distros on a $20/mo second-hand Hetzner box, sick Jenkins with trivial shell scripts on them and forget about it for a couple years or so until a need for 11th distro arrives, in which case you spend half an hour or so to set it up.

> what is the libssl distribution called in a given Linux distro’s package manager?

I think you're going to need to know that either way if you want to run a dynamically linked binary using a library provided by the OS. A package manager (for example Cargo) isn't going to help here because you haven't vendored the library.

To match the npm or pip model you'd go with nix or guix or cmake and you'd vendor everything and the user would be expected to build from scratch locally.

Alternatively you could avoid having to think about distro package managers by distributing with something like flatpak. That way you only need to figure out the name of the libssl package the one time.

Really issues shouldn't arise unless you try to use a library that doesn't have a sane build system. You go to vendor it and it's a headache to integrate. I guess there's probably more of those in the C world than elsewhere but you could maybe just try not using them?


I've contemplated this quite a bit (and I personally maintain a C++ artifact that I deploy to production machines, and I generally prefer not to use containers for it), and I think I disagree.

Distributions have solved a very specific problem quite nicely: they are building what is effectively one application (the distro) with many optional pieces, it has one set of dependencies, and the users update the whole thing when they update. If the distro wants to patch a dependency, it does so. ELF programs that set PT_INTERP to /lib/ld-linux-[arch].so.1 opt in to the distro's set of dependencies. This all works remarkably well and a lot of tooling has been built around it.

But a lot of users don't work in this model. We build C/C++ programs that have their own set of dependencies. We want to try patching some of them. We want to try omitting some. We want to write programs that are hermetic in the sense that we are guaranteed to notice if we accidentally depend on something that's actually an optional distro package. The results ... are really quite bad, unless the software you are building is built within a distro's build system.

And the existing tooling is terrible. Want to write a program that opts out of the distro's library path? Too bad -- PT_INTERP really really wants an absolute path, and the one and only interpreter reliably found at an absolute path will not play along. glibc doesn't know how to opt out of the distro's library search path. There is no ELF flag to do it, nor is there an environment variable. It doesn't even really support a mode where PT_INTERP is not used but you can still do dlopen! So you can't do the C equivalent of Python venvs without a giant mess.
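The nearest approximation today is bundling your libraries next to the binary and baking in a relative rpath (a sketch; patchelf is a third-party tool, and the interpreter itself still comes from the distro):

  # prefer ./libs over the distro's search path at run time
  cc -o app main.c -L./libs -lfoo '-Wl,-rpath,$ORIGIN/libs'

  # or retrofit an already-built binary
  patchelf --set-rpath '$ORIGIN/libs' ./app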

pkgconf does absolutely nothing to help. Sure, I can write a makefile that uses pkgconf to find the distro's libwhatever, and if I'm willing to build from source on each machine (or I'm writing the distro itself), and if libwhatever is an acceptable version, and if the distro doesn't have a problematic patch to it, then it works. This is completely useless for people like me who want to build something remotely portable. So instead people use enormous kludges like Dockerfile to package the entire distro with the application in a distinctly non-hermetic way.
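The pkg-config dance that makefile wraps is just one command (a sketch with the hypothetical libwhatever):

  # expand libwhatever's compile and link flags in place
  cc $(pkg-config --cflags libwhatever) -o app main.c $(pkg-config --libs libwhatever)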

Compare to solutions that actually do work:

- Nix is somewhat all-encompassing, but it can simultaneously run multiple applications with incompatible sets of dependencies.

- Windows has a distinct set of libraries that are on the system side of the system vs ISV boundary. They spent decades doing an admirable job of maintaining the boundary. (Okay, they seem to have forgotten how to maintain anything in 2026, but that's a different story.) You can build a Windows program on one machine and run it somewhere else, and it works.

- Apple bullies everyone into only targeting a small number of distros. It works, kind of. But ask people who like software like Aperture whether it still runs...

- Linux (the syscall interface, not GNU/Linux) outdoes Microsoft in maintaining compatibility. This is part of why Docker works. Note that Docker and all its relatives basically completely throw out the distro model of interdependent packages all with the same source. OCI tries to replace it with a sort-of-tree of OCI layers that are, in theory, independent, but approximately no one actually uses it as such and instead uses Docker's build system and layer support as an incredibly poorly functioning and unreliable cache.

- The BSDs are basically the distro model except with one single distro each that includes the kernel.

I would love functioning C virtual environments. Bring it on, please.


> C packaging in distros is working fine

GLIBC_2.38 not found


Like, seriously. It's impossible to run Erlang/OTP 21.0 on a modern Ubuntu/Debian because of libssl/glibc shenanigans, so your best bet is to take a container with the userspace of Ubuntu 16 (which somehow works just fine on a modern kernel, what a miracle! Why can't Linux's userspace do something like that?) and install it in there. Or just listen to the "JuST doN'T rUN ouTdaTED SoftWAre" advice. Yeah, thanks a lot.

If you have a distro-supplied binary that doesn't link with the distro-supplied glibc, something is very very wrong.

If you're supplying your own binaries and not compiling/linking them against the distro-supplied glibc, that's on you.


Linking against every distro-supplied glibc to distribute your own software is as unrealistic as getting distributions to distribute your software for you. The model is backwards from what users and developers expect.

But that's not the point I'm making. I'm attacking the idea that they're "working just fine" when the above is a bug that nearly everyone hits in the wild as a user and a developer shipping software on Linux. It's not the only one caused by the model, but it's certainly one of the most common.


It's hardly unrealistic - most free software has been packaged, by each distro. Very handy for the developer: just email the distro maintainers (or post on your mailing list) that the new version is out, they'll get round to packaging it. Very handy for the user, they just "apt install foo" and ta-da, Foo is installed.

That was very much the point of using a Linux distro (the clue is in the name!) Trying to work in a Windows/macOS way where the "platform" does fuck-all and the developer has to do it all themselves is the opposite of how distros work.


User now waits for 3rd party "maintainers" to get around to manipulating the software they just want to use from the 1st party developer they have a relationship with. If ever.

I understand this is how distros work. What I'm saying is that the distros are wrong, this is a bad design. It leads to actual bugs and crashes for users. There have been significant security mistakes made by distro maintainers. Distros strip bug fixes and package old versions. It's a mess.

And honestly, a lot of software is not free and won't be packaged by distros. Most software I use on my own machines is not packaged by my distro. ALL the software I use professionally is vendored independently of any distribution. And when I've shipped to various distributions in the past, I go to great lengths to never link anything if possible that could be from the distro, because my users do not know how to fix it.


No, just no.

Using system/distro packages is great when you're writing server software and need your base system to be stable.

But for software distributed to users, this model fails hard. You generally need to ship across OSes and OS versions, and for that you need consistent library versions. Your software being broken because a distro maintainer decided that a 3-year-old version of your dependency is close enough is terrible.


If your software is not being distributed by that distribution and is using some external download tool, it is inherently not supported, and the only way to make sure it works is to compile from source.

^ This.

Plus, we already have great C package management. It's called CMake.


CMake is not a package management tool, it is a build tool. It can be abused to do package management, but that isn't what it is for.

I hate autotools, but I have Stockholm syndrome, so I still use it.

I hated autotools until I had to use CMake. Now I still hate autotools, but I hate CMake more.

It's not so hard once you learn it. Of course, you will carry that trauma with you, and rightly so. ;)

Top-up subscriptions required for AI features? I wish every service would do this!

It would be amazing if the tech leadership is so far gone they think AI is so great that everyone will beg to pay for it and it gets locked behind subscription fees. The only way it could get better is if the users that are paying get some kind of flair or label so I can ignore them.

Now if only the feature were installed only after you paid.

Are you saying we cannot talk about the bad things the US has done?

No, I'm saying we can, unlike in China. Besides that point, I think GP is arguing that China is villainized more than the US.

I'm pretty sure that if you criticise the US on something they care about, your posts will disappear from social media pretty quickly. Not because of political censorship, but because of Trust and Safety violations.

There's so many ways to do this, but a simpler method is to hide a small logic block (somewhere in the 10 billion transistors of your CPU) that detects a specific, long sequence of bits and invokes the kill switch.
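In hardware terms that's little more than a wide shift register compared against a constant. A toy C model of the idea (the pattern value is, of course, made up):

  #include <stdbool.h>
  #include <stdint.h>

  #define KILL_PATTERN 0xDEADBEEFCAFEF00Dull  /* hypothetical trigger sequence */

  /* clock one observed bit in; returns true when the last 64 bits match */
  static bool kill_switch_step(uint64_t *reg, unsigned bit)
  {
      *reg = (*reg << 1) | (bit & 1u);
      return *reg == KILL_PATTERN;
  }

  int main(void)
  {
      uint64_t reg = 0;
      /* feed the trigger pattern in bit by bit, MSB first */
      for (int i = 63; i >= 0; i--)
          if (kill_switch_step(&reg, (unsigned)((KILL_PATTERN >> i) & 1u)))
              return 0;  /* detector fired on the final bit */
      return 1;
  }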

The author should really upload these to the Internet Archive.

The emulator (which appears to be for DOS) seems a strange thing to include on the disc:

  ><fs> file /ddman.exe 
  MS-DOS executable, MZ for MS-DOS

TFA states they will do so once they receive the expected takedown notice from Sony.

Can confirm the bit about phone sockets. Our old house came with a phone socket by the bed in both bedrooms. Also a TV aerial socket in the master bedroom (not in the second bedroom though, no TV for you!)

The journal name ("Management Science") is a bit of a giveaway too.

Join me in my new business endeavor where we found the Journal for Journal Science.

If the document was some standard thing, like say a standard tax form that every CZ resident needs, then I'd absolutely use Google Translate / AI translation on it. The assumption is that there's no specific trick or catch in the document.

If it was a customized contract then I'd want to use a local legal professional who could also speak English.


Douglas Adams's prediction seems more apposite:

"Meanwhile, the poor Babel fish, by effectively removing all barriers to communication between different races and cultures, has caused more and bloodier wars than anything else in the history of creation."


