
Rolling releases are always going to have more problems than stable, tested releases. This is simply a mathematical and statistical fact.

For a new user switching to Linux, a stable Ubuntu-based distro is the best choice. It has the best software support, the best informational support (distro articles, googling problems), and the best quality (because of its popularity).

I have been using Mint since 2014 and am still on the same install from 9 years ago. I update software packages whenever. With the big OS updates that come twice a year, I wait an extra 2 months just in case and then move to the new version. 9 years and counting.

Now imagine an Arch user and how many times something would break over 9 years of operation... It is very telling that, when discussing this topic, Arch users reach for arguments like "Arch doesn't really break that much" without realizing what others hear in that. "THAT much", Karl!..



With Arch vs. Ubuntu, it is more about "mostly unmodified upstream software" vs. "Canonical-customized software" than about "stay on the bleeding edge" vs. "be stuck with year-old packages".

I have been using the same Arch install on my desktop for the last 7 or so years. So far, the only time it broke for me was during the whole pulseaudio -> pipewire jump, and half of that was probably due to my pulseaudio tweaks. There's always Manjaro (I use this on my laptop) or other semi-rolling release distros too: you are still subscribed to the same repositories, but your updates lag by a month or two and are released to you only after the maintainers are happy with their tests.


> Rolling releases are always going to have more problems than stable, tested releases. This is simply a mathematical and statistical fact.

lol.

I feel bad responding with just "lol", but I hope you do actually see how this is quite silly. There is indeed no simple mathematical or statistical fact that says running out of date software with downstream custom patches is more reliable than running the latest version from the developer. If anything, it can cause a lot of difficult-to-detect stability issues that may not impact all users or all use cases.

Not only that, but a big problem with "stable" distros is that most people don't want to use e.g. OBS from 2 years ago, so they need some way to run the latest software. Flatpak or Snap? Sure, but then how do you use e.g. OBS plugins? Suddenly, you are back at "OK, maybe I need a PPA", at which point you need to do things that will inevitably make your system less stable, because you are now running somewhat of a "snowflake" configuration that gets more unique every time you add a new PPA or non-trivial modification. Whereas on Arch, you just install it from the package manager or, at worst, the AUR, which, thanks to the vastly simpler package management system, is a lot less prone to breakage.

Let me summarize:

- I don't agree that rolling release distros are inherently less stable overall. Stable distros are "tested" but blood, sweat and tears can only go so far. For some really compelling evidence, please ask the Linux kernel folks how their 'LTS' project went. As it turns out, maintaining LTS software is non-trivial in and of itself, and it introduces new problems that did not exist originally.

- Even if rolling release distros were inherently less stable, the reduced need to rely on third-party repositories, packages, and even out-of-package-manager installations (thanks to the more up-to-date packages) would offset a lot of that, considering a large part of the problem with Debian/Ubuntu is also simply that installations get borked too easily.

- Even if that weren't true, Arch's much simpler package management has less of a tendency to get tangled up in impossible-to-resolve dependency issues. Part of this is due to the nature of the AUR vs third-party PPAs, and part of it is just that it's literally much simpler overall, so there's less to go wrong. (Arch packages often err on the side of being less modular, which has its downsides but it certainly simplifies many things.)

- Even if that weren't true, my general experience running mainline Linux and bleeding-edge packages for years as my primary operating environment suggests it's actually not very common to be hit with rolling release breakages. People who release software usually don't just blindly ship broken shit. Yes, regressions happen, that's a fact of life, but it doesn't follow that you're better off using old versions of things. There is a balance: you don't get to have the newest fixes and never have a regression. In my opinion, frontloading the pain of rather occasional regressions is well worth getting the benefits of the new versions sooner, especially if rolling back is easy. And that, among other reasons, is exactly why immutable Linux is the future.

P.S.: Yes, "that much" is a perfectly valid thing to say. I have spent many, many, many hours debugging Debian and Ubuntu issues, so the problem isn't that those distros never break and Arch does. The problem is that all distros break, and Debian and Ubuntu installs, despite being stable distros, mysteriously seem to have the most trouble, especially during upgrades. And zero isn't an option. Windows installations break too. Sometimes, especially recently, every Windows installation breaks at once.


Using a stable distro doesn't mean that you stop getting new releases of consumer software. It only means more vetting, and that fundamental changes (like switching from X to Wayland) are not going to happen all of a sudden. Read Ubuntu's or Mint's "What's new" posts to understand what kind of changes we are talking about.

There is a reason why Release Candidates exist. There is a reason why testing exists and why, despite it, there will always be things that don't work the first time around and require bug-fix releases.

And yes, stable releases being more stable than bleeding edge is a mathematical/statistical fact. You can prove it yourself with the magic of a spreadsheet:

- Make a graph that grows X1% on average. This indicates the quality of the software (it is getting better, after all; otherwise there would be no point in updating). But that is only the average growth: in practice it is random; on any given day it can grow or decline by X2, pretty severely too, but on average it grows.

- You can play with the X1 and X2 numbers.

- Calculate the chance that an update leaves you better off than the previous state for both cases: (1) update 365 times a year, (2) update 2 times a year. Also take into account that a drop in quality does not weigh the same as a gain of the same size; there is a good reason why people are risk-averse. A quality drop means all kinds of trouble and should be counted as negatively impacting the following X3 days (play with this number too).

If you do that, you will mathematically prove me right. :-)
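If you'd rather not build the spreadsheet, here is a rough Python sketch of the same thought experiment. It implements only the model described above, nothing more; X1, X2 and X3 are placeholder knobs to play with, not measured data.

    import random

    # Placeholder knobs (the X1, X2, X3 from the comment above) -- tune freely.
    X1 = 0.05      # average daily quality growth, in percent
    X2 = 1.0       # daily random swing (standard deviation), in percent
    X3 = 7         # "pain days" charged for every update that is a regression
    DAYS = 365
    TRIALS = 2000

    def simulate(updates_per_year):
        """Return (share of updates that improve things, average yearly pain score)."""
        improved, total, pain = 0, 0, 0
        for _ in range(TRIALS):
            # Upstream quality: a noisy walk that trends upward on average.
            q = [100.0]
            for _ in range(DAYS):
                q.append(q[-1] * (1 + random.gauss(X1, X2) / 100))
            # The user only samples quality at the moments they actually update.
            step = DAYS // updates_per_year
            seen = q[0]
            for day in range(step, DAYS + 1, step):
                total += 1
                if q[day] >= seen:
                    improved += 1          # the update left you better off
                else:
                    pain += X3             # a regression hurts for X3 days
                seen = q[day]
        return improved / total, pain / TRIALS

    for n in (365, 2):
        share, score = simulate(n)
        print(f"{n:>3} updates/year: {share:.0%} of updates improve, avg pain score {score:.1f}")

The point of the exercise is only to compare the two update cadences under the same assumptions; how the comparison comes out depends entirely on what you plug in for X1, X2 and X3.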


> P.S.: Yes, "that much" is a perfectly valid thing to say.

I don't think they want 0 breaks, but one thing those stable distros can do is isolate you from breaking changes. Think semantic versioning, but for package lists. Especially when there are packages that don't play well with each other; iirc gtk2 vs. gtk3 was one such thing? There's a reason a lot of people still use Debian etc. on servers: they just chose a configuration that works, and they don't care about any breaking updates as long as they keep getting stable security patches.



