We reuse FreeBSD's WiFi drivers (I assumed responsibility for that port some years ago), so most things that work on FreeBSD work on Haiku.
FreeBSD has fallen behind a bit, so you may want to check model numbers carefully before buying something. Intel hardware of the "9200" series and anything before it generally works (though it may not have all features enabled.) Atheros hardware is the next best, with anything before the "Killer" series especially well supported.
Slightly older ThinkPads tend to do very well. I have an E550 (from 2015) and a T60 (I dunno, 2009?) that are pretty great. My main machine is a custom Ryzen desktop where I carefully picked out hardware that I knew either was already supported or would soon be (e.g. when I bought it, the NVMe driver was still in an unstable state and I had more work to do on it, but now it's pretty solid.)
FreeBSD has one WiFi driver ("iwm", which on Haiku we call "idualwifi7260") that supports 802.11ac hardware, and supposedly the 802.11 stack is ready for ac (there is an out-of-tree driver that crashes a lot but does get ac speeds), but nobody has added ac support to the "iwm" driver itself.
Actually both my laptop and desktop use that driver, so who knows, maybe I'll poke at adding 802.11ac support to it one of these days.
I would donate also. The only reason I don't run it on my newer laptop is lack of wifi. Using Linux after FreeBSD is like getting dessert at McDonald's after a Michelin-star meal.
FreeBSD does have some kind of effort underway to adopt Linux drivers. While that may expand compatibility greatly, it would in my view be a major loss. FreeBSD WiFi drivers more than make up for in quality what they lack in quantity; Linux drivers are generally the bottom of the barrel, and this often translates directly to poor user experiences with them. FreeBSD drivers may work on less hardware ... but when they do work, they are often much more of a sure thing.
(Haiku has its own troubles with FreeBSD drivers, for sure, but generally this seems to hold true: either it works nearly perfectly or it completely fails to recognize or initialize the hardware. Almost all the tickets about such drivers on our tracker, past or present, have been along those lines.)
You've put into words something I've been thinking about, and have been somewhat remorseful about: when FreeBSD just exists as a different kernel to run the same Linux stuff, what's the project really? Legacy?
I find that I tend to prefer the FreeBSD take on things as well, so the Linux angle has just felt weird these past few years.
I believe that's one ongoing effort to solve the issue (it would also bring things like 802.11ax support). It's just short on dev resources, I think - but that's from me just browsing the situation over the past few weeks, mind you. An actual FreeBSD dev should correct me if I'm wrong.
I also believe there's a hack floating around to forward ac from a Linux VM, which some use.
The single biggest thing for me is the unified system design and implementation.
That is, it's not like Linux/BSD desktop environments where the kernel, display server, window manager, desktop shell, file manager, distribution ... are all developed by separate teams in separate code repositories with separate goals, schedules, standards, etc. In Haiku, you can change the UI toolkit, display server, and init system all in a single commit to one repository.
This has a massive array of advantages. It means we never go back and forth about where responsibility for a bug lies, only where it should be fixed. It means we can decide to go with or against trends and standards as it makes sense to (our package manager is probably the biggest example of this.)
Virtually everything else I like about Haiku stems from this, whether it's the timeless UI, the overall system architecture, or even the code itself (which is a genuine pleasure to just read, not something that one can often say about any project.)
I booted Haiku on a Linux laptop that uses btrfs, and it wasn't able to mount the Linux partitions. (It mounted them as almost empty, with some broken entries, so it looks like btrfs support needs more work.)
I would expect that ext4 is the most popular and thus more likely to be stable.
Linux can at least read BFS, and Haiku can read and, at least in theory, write ext4, but you may want to double-check that on the bugtracker; I think fully functional ext4 write support is a recent development.
Great. As long as both OSes can read the whole disk, I can make a usable dual-boot system.
The worst irritation with dual-boot is having to reboot to access data that's on the other OS's disk, but this way I can boot and test Haiku without having to move all my data over.
R1's original goal was "feature-complete replacement for BeOS R5". I think we have at this point achieved that, but we also have a vague goal of "usable, stable, daily-driver OS" which we are not quite there for (mostly on the stability and daily-driver front; there are users who use it as a daily driver, but in a limited fashion.)
The Haiku kernel and CLI already support multiple users; you can add users and SSH in as them already. Permissions checks aren't quite there yet, and the GUI is totally non-multiuser-aware. It's an R2 requirement, but it could come sooner...
Honestly, I have no idea, because I haven't even so much as looked at what the actual implementation of modules is like in C++20 yet. Haiku uses almost none of the STL as it is; we write almost all of our own containers and use C++ merely as "C with Classes" more than anything else.
I think if we had sufficient time and energy, we might have started our own programming language that takes a lot from C++ but diverges sharply after the "C with Classes" part (we have joked about it before, at least.) For one, some of the paradigms we use a lot in Haiku might serve well as baked-in language features, or could be taken further with compiler support. Memory safety is another big one; I know Rust is now the "C++ successor with memory safety," but to me it does not quite fit the bill, though we have an especially esoteric view of what "C++" is. (Admittedly, I haven't put in the time to really learn Rust, though some of the other Haiku contributors are fans, and the Rust port to Haiku is sufficiently solid at this point.)
The biggest thing I think we would ultimately change in any wildly hypothetical programming language we might come up with, though, would probably be ABI stability. C++ is just a huge pain to keep ABI-stable (C is as well, to a lesser extent), and there are all kinds of tricks now possible in compilers, in ELF, etc., so there is clearly room for a slightly different language design, coupled with a radically different language ABI, that would make ABI stability much less of a chore to maintain. (We are very big on dynamic linking and stable ABIs, something Rust seems to have basically given up on, if it ever really tried, and the same goes for Go and other newer languages.)
I would imagine that modules instead of headers might come along as part of that ABI-stability work, if nothing else.
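(To illustrate one of those ELF-level tricks: GNU ld already lets you attach version tags to exported symbols through a linker version script, so a shared library can evolve while old entry points keep resolving for existing binaries. A minimal sketch, with a made-up library name:

    LIBFOO_1.0 {
        global:
            foo_init;
        local:
            *;
    };

built with something like "gcc -shared -fPIC -Wl,--version-script=libfoo.map -o libfoo.so.1 foo.c". glibc leans on this mechanism heavily to stay backwards-compatible; it's exactly the kind of machinery a new language ABI could use by design rather than as an afterthought.)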
Rust has optional support for the stable C ABI. I'm not sure what you mean by the C ABI being "a huge pain to keep stable"; it's simple, the requirements for stability are well known, and it's also the main choice for FFI in a variety of language ecosystems.
The requirements for stability being well known do not make them easy to follow. If you really want stability, you basically have to eschew all use of structs, or add struct versioning, to begin with. Then you can't add or remove parameters from functions, but instead must add new functions only. Those are of course just the first two things; in practice, most libraries do not even try to follow them, and just bump the SONAME version with anything besides a minor bugfix release.
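(A quick sketch of what those first two rules look like in practice; everything here is hypothetical, a header for a library exposing a C ABI, compilable as C++:

    // Hypothetical C-ABI header. The struct carries its own size so the
    // library can tell which "version" the caller was compiled against.
    #include <stddef.h>

    extern "C" {

    struct foo_options {
        size_t struct_size;  // caller sets this to sizeof(foo_options)
        int    flags;        // present since v1
        int    timeout_ms;   // added in v2; old callers never set it
    };

    // The v1 entry point keeps its signature forever...
    int foo_init(const foo_options* opts);

    // ...and new capabilities arrive as new functions, never as
    // changed parameter lists.
    int foo_init_ex(const foo_options* opts, const char* config_path);

    }  // extern "C"

Inside the library, foo_init() has to check opts->struct_size before reading timeout_ms, so old and new callers can keep linking against the same .so.)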
Because in reality there is no such thing as a C ABI, only the OS ABI.
On OSes that happen to be written in C, most devs tend to mistake the OS ABI for a C ABI and then use the two interchangeably.
A new OS update can bring changes, even on platforms whose OS isn't written in C, and different C compilers can opt for different kinds of ABI, which rules out cross-compiler linkage without extra steps.
I like the idea of more operating systems to pick from. I'd love to try Haiku or BSD one day soon. What motivates people to invest in these very niche systems?
I'd love to play around with them for fun but is there more to it?
A while ago (2011ish maybe?) I had a hand-me-down laptop whose processor speed was measured in MHz (it even had a floppy drive!). The previous owners had Windows XP on it, but that didn't really run well. I tried various Linux distros/desktop environments/window managers and found that even running a barebones AwesomeWM setup was sluggish. I decided to give Haiku a shot and was surprised by how smooth it felt. I'm pretty sure the only sluggishness I had when opening applications or booting up was because the hard drive was very slow. I don't know how they did it, but somehow it worked wonderfully on this really weak laptop. (Unfortunately I couldn't get the floppy drive working; I don't remember why, but some driver issue blocked me.)
Since then I've kept an eye on it and plan on going back to it with some more powerful and better supported hardware to really get to play around with stuff like the interesting filesystem and get some things I use all the time ported over. Would love to switch over from mainly using Linux to having a Linux home server, and using Haiku as a daily driver and sshing in for Linuxy stuff.
One of the other Haiku developers had this [1] response to an inquiry about why Haiku is so fast:
> The system is not all that well optimized, uses a 15-year-old compiler which does not use any modern CPU features, and by default, the kernel is built in debug mode which makes it much slower than it could be.
> How do other operating systems still manage to feel slower? I have no idea.
(Those kernel debug options are no joke; they are a massive slowdown. The ones for the TCP stack alone take network throughput down to 1/5 of what it is without them; the ones for SMP, the virtual memory manager, lock facilities, etc. combined make the system visibly less snappy. We disable these on beta builds, but they are enabled on nightlies, and back in 2011 they were on by default.)
I'm not familiar with OS internals, but I believe having a system-wide consistent design (the whole of Haiku is a monorepo) would enable you to simplify a lot of stuff.
Also, many modern distros and desktop environments are bloated: you have a choice between a full-featured, resource-hungry desktop (GNOME/KDE) and an efficient minimalist window manager (i3/Sway). I believe the same applies to the different parts/layers of the system.
Haiku doesn't have the same feature set (yet) as full-featured modern desktops and doesn't have to deal with dozens of compatibility layers, so maybe that's also part of the reason it fares better.
Doesn't that apply to everyone's OS of choice though? Actually maybe it doesn't... But it's not very helpful unless you can be at least a little bit specific about why it makes you happy.
> Doesn't that apply to everyone's OS of choice though?
Do people have love deep in their hearts for Windows or even macOS 11?
It's hard to explain these days, but back in the late 90s it was just so far ahead of everything else except maybe NeXT, and NeXT was entirely out of reach for 99% of the world.
The UI was ridiculously smooth and fast and everything worked together like a well oiled machine.
Here they play an MP3 and a video that continue to render while you move the windows around, on a 133 MHz machine, without it even breaking a sweat. Clearly that's nothing today, but it was unheard of at the time.
Beyond that it had amazing features you still don't see on operating systems today, like the file system being an actual queryable database. Common metadata, like ID3 tags from MP3s, was entered into it and was queryable.
The standard email client stored emails as individual files and just queried the file system. I believe the address book did the same for people.
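(For the curious, that wasn't a private trick of the mail app; any program can issue live BFS attribute queries through the Storage Kit. A rough sketch of what that looks like from C++, with the attribute name and pattern chosen purely for illustration and written from memory:

    // Minimal sketch of a BFS attribute query via Haiku's BQuery.
    #include <Entry.h>
    #include <Path.h>
    #include <Query.h>
    #include <SupportDefs.h>
    #include <Volume.h>
    #include <VolumeRoster.h>
    #include <stdio.h>

    int main()
    {
        BVolume bootVolume;
        BVolumeRoster().GetBootVolume(&bootVolume);

        BQuery query;
        query.SetVolume(&bootVolume);
        // Ask the file system itself; the attribute is indexed by BFS,
        // so no directory crawling is involved.
        query.SetPredicate("MAIL:subject == \"*BeOS*\"");
        query.Fetch();

        BEntry entry;
        while (query.GetNextEntry(&entry) == B_OK) {
            BPath path;
            entry.GetPath(&path);
            printf("%s\n", path.Path());
        }
        return 0;
    }

The email client and address book were, more or less, thin GUIs over queries like that one.)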
The tabs of the individual windows stack together across apps! There were just so many little wonderful fit-and-finish things like that you don't get these days.
> Do people have love deep in their hearts for Windows
Windows 2000, yes.
Amazing OS, insanely fast, and lightweight. With only a handful of background running processes and a couple dozen services it was possible to know exactly what was running on your computer at any time.
On at least one occasion I was able to detect malware on my machine by noticing unexpected background network traffic via the light on my network hub blinking when it shouldn't have been.
Windows XP will go down in history as the OS that an entire generation is nostalgic for, but Windows 2000 was its fiddly, hard-to-set-up, but rock-solid-once-running older brother.
Win2K wasn't that hard to set up, and it got better when XP came out because driver support expanded. I've certainly had more difficulty setting up Linux today than I ever did Win2K in the past.
And getting gaming to work was a separate challenge.
It wasn't terrible, but it did require knowing what chipset your motherboard was running, and you had to know that certain magic patches needed to be applied to the OS.
> On at least one occasion I was able to detect malware on my machine by noticing unexpected background network traffic via the light on my network hub blinking when it shouldn't have been.
That's golden. Would deserve a blogpost on its own.
If you weren't actively loading a webpage or playing a game, there wouldn't be any network traffic.
This was back before the days of auto updates or telemetry!
So basically if I was doing something locally, and I saw the lights blink on my hub, and my (tiny!) process list didn't have any obvious suspects, I knew something was up.
Malware wasn't nearly as well hidden back then, so uncovering it wasn't all that hard.
I fondly remember that period indeed. But reading your comment I realized it's been a while since I could tell whether my computer having network activity was suspicious or not, and I'm guessing a lot of younger people don't even realize that could be a thing.
> Malware wasn't nearly as well hidden back then, so uncovering it wasn't all that hard.
Yeah catpicture.jpg.exe was definitely easier to identify than modern viruses are.
I may contribute to Haiku, now that they are putting full-time resources into place.
Why would I do this? It is more than nostalgia or fun, although that is part of it. I believe that diversity is vital to keeping the technological landscape healthy. Most people are happy to go with the status quo, fewer are willing to work to make positive changes, and even fewer are willing to fund those efforts.
Will something come of Haiku because of this? Directly or indirectly, yes. Someone will be working on a vision that may differ from that of Apple/Microsoft/Linux, etc. This will have a ripple effect, as either Haiku succeeds, or those who work on it take their viewpoints and knowledge to other companies and efforts. Either way, this sort of diversity helps keep the technological ecosystem from becoming more and more of a monoculture.
I use OpenBSD because it's a very pragmatic choice for servers. Nearly everything is "off" by default and I can add only the specific things I need, reducing the attack surface. Pledge/unveil makes sandboxing my applications very simple. The pf firewall is much easier to use with confidence than iptables. The manpages are great and the system is small relative to most Linux distros, so you can understand how everything works and fits together -- great for infra. It's well-designed and consistent throughout because the kernel, OS, and all core tools are designed by the same people. As of yet, no conntrack-style edge cases requiring days digging through kernel code. :)
This of course involves many trade-offs (losing access to common "modern" tech like containers, slower performance) but for my company the trade-offs were justified.
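(To give a flavor of how low-friction that sandboxing is, here's a minimal sketch; the path and promise strings are just illustrative:

    // Minimal OpenBSD sandboxing sketch: restrict the filesystem view
    // with unveil() and the allowed classes of syscalls with pledge().
    #include <err.h>
    #include <unistd.h>

    int main()
    {
        // Only /var/www is visible to this process, read-only.
        if (unveil("/var/www", "r") == -1)
            err(1, "unveil");
        if (unveil(NULL, NULL) == -1)   // lock the unveil list
            err(1, "unveil");

        // From here on, only stdio, read-only file access, and network
        // sockets are permitted; any other syscall kills the process.
        if (pledge("stdio rpath inet", NULL) == -1)
            err(1, "pledge");

        // ... application code ...
        return 0;
    }

A couple of calls near the top of main() and the kernel enforces the rest.)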
> The pf firewall is much easier to use with confidence than iptables
Tell me more! I cut my teeth on ipchains/iptables back in the early 90's, and feel very familiar with it. That said, I know that doesn't mean it's the best/easiest at all. I've tried to grok pf once or twice, but never ended up getting very far. I wanted to ask: did you begin with pf, or did you come from something else? More or less, I'm trying to figure out if pf was difficult for me just because it wasn't my first.
I came from iptables and still have the misfortune of needing to use that in my job. :)
My favorite feature of pf is the configuration file: a human-readable (and writable) file that sets the state in a consistent way, rather than iptables' preferred mechanism of a bunch of CLI commands executed in just the right sequence. I want a guaranteed consistent state when I'm configuring firewalls across a fleet, and pf.conf makes that not only easy to achieve, but the default behavior. Just modify the conf file, copy it to all the boxes, reload pf, and your firewalls are all updated and guaranteed to be in the same state, regardless of whatever state they were in before.
You can read the pf.conf file from top to bottom to figure out what's going to happen to any packet. All the rules are included right there.
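(For a taste of what that reads like, a minimal pf.conf sketch; the interface name and ports are just an example, not a recommended ruleset:

    # /etc/pf.conf -- the whole firewall state in one declarative file
    ext_if = "em0"

    set skip on lo              # don't filter loopback
    block in all                # default-deny inbound
    pass out all keep state     # allow outbound, track connections

    # allow SSH and web traffic in on the external interface
    pass in on $ext_if proto tcp to port { 22 80 443 } keep state

You can syntax-check it with "pfctl -nf /etc/pf.conf" before pushing it out, and reload it everywhere with "pfctl -f /etc/pf.conf".)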
Maybe to get a more considered word in about features you need? I can't really think of anything specific OS-wise, but I'm thinking: if you rely on some cool cross-app functionality (e.g., macOS's drag-icon-from-toolbar-to-move-corresponding-file), you have a good shot at contributing to/owning that feature on a small OS project vs. something like Fedora, which would require a lot more buy-in.
> I'd love to try Haiku or BSD one day soon. What motivates people to invest in these very niche systems?
My interest in FreeBSD began about 12 years ago, when a friend of mine told me about the BSD operating systems. He said that the one he was using (OpenBSD) was very secure and had good documentation, and that these BSD operating systems (FreeBSD, OpenBSD, and others) are each developed "as a whole", as opposed to Linux, which I had recently begun seriously using, but which is developed as a bunch of separate, much more loosely tied projects that are then bundled together in the form of various distros.
I installed FreeBSD and liked it a lot. I just felt at home, somehow. And for a good while I was running FreeBSD also on my desktop and my laptop.
Fast-forward to present day. On my laptop I run macOS. On my servers I run FreeBSD. My MacBook Pro M1 laptop is my daily driver. I have a desktop that I run Linux on but I rarely boot it because mostly I have no reason to. Almost everything I do I can do with my MacBook Pro M1 and with my servers that run FreeBSD.
But even though I like FreeBSD so much, I feel and fear that Linux keeps advancing in much bigger strides than FreeBSD, because so many more people contribute to Linux than develop FreeBSD.
I really want to get into eBPF on Linux soon and explore that. It seems like it could give me insight into the execution of the software I develop, maybe even more than is possible with DTrace. And I want to explore what can be done on Linux using kTLS and eBPF together. And I am curious to find out more about things like what they talk about at https://pchaigno.github.io/ebpf/2020/11/04/hxdp-efficient-so...
And all of those things have me thinking a lot about whether the positives of using FreeBSD (jails, OpenZFS in base, a system that is developed as a whole, etc) actually justify staying with FreeBSD. Or if I should ditch FreeBSD and focus my energy on Linux instead of on FreeBSD.
Haiku tangent:
I'm no Haiku master, but as I understand it a proper Haiku should succinctly convey the experience of a moment in time. That's really the hallmark of a Haiku. Use of "seasonal words" is a traditional, but not strict, requirement. And the 5-7-5 thing is somewhat misleading, since the Japanese (rough) equivalent of a "syllable" is shorter than an English syllable, such that a 5-7-5 Haiku in English tends to take ~30% longer to say than a Haiku in Japanese. (This is what I can recall off the top of my head from The Haiku Handbook, which is much more worth reading than my comment if you're interested in Haiku.)
I ran BeOS many, many years ago on a PowerComputing Mac-clone and remember it fondly. I had high hopes that when LG acquired WebOS that perhaps some form of BeOS as a general computing platform might resurface.
One question I've always had about Haiku is how faithful it is to the underlying implementation and architecture of BeOS, not so much its resultant API compatibility. Because it was the guts of BeOS which seemed to make it so special, not its component interfaces.
The BeOS internal architecture changed a lot from release to release. The kernel, file system, all the kits were constantly undergoing massive change and I don't think there was an ABI until release four. We broke binaries all the time!
Most of the code wasn't open sourced, but it leaked and yellowTab released a version using the actual BeOS source code. The entire system compressed down to 90MB and you could ftp all.tgz and build the whole system.
Looking at the architecture critically, is it a good idea to have a C++ API and have to deal with fragile base classes? I remember stuffing classes full of placeholders to create space in the vtable for future changes.
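(For anyone who hasn't seen that trick, it looks something like this; the class here is made up, but Haiku's public kits follow the same pattern:

    // Sketch of the "reserved slots" idiom for keeping a C++ ABI stable
    // despite fragile base classes. Class and method names are invented.
    #include <cstdint>

    class BWidget {
    public:
        virtual ~BWidget() {}

        virtual void Draw() {}
        virtual void MouseDown() {}

    private:
        // Placeholder virtuals: each one occupies a vtable slot now, so a
        // later release can turn one into a real method without shifting
        // the offsets that already-compiled binaries depend on.
        virtual void _ReservedWidget1() {}
        virtual void _ReservedWidget2() {}
        virtual void _ReservedWidget3() {}
        virtual void _ReservedWidget4() {}

        // Same idea for data: reserve space so the object size (and the
        // layout subclasses build on) never changes between releases.
        uint32_t _reserved[4];
    };

It works, but every future change has to fit into the slots you guessed you would need, which is part of why keeping a C++ API ABI-stable is such a chore.)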
BeOS was well documented and the BeBook is still nice to read. Looking at the Haiku source, I see that they have all the kits, with the same API. When I look at the source for Looper, the code is a shadowy reflection of the "real" source. It may even be much better!
If Haiku feels good to use and program, then I think it is faithful to BeOS. Cyril was constantly improving the kernel, Dominic was constantly improving the filesystem, Benoit was constantly improving the window server, Pavel was constantly improving Tracker, etc. There wasn't really a dogma about what BeOS should be, except fast, responsive, forward-looking, and fun for users and developers.
Ignoring the leaks and looking just at the officially released code of Tracker and Deskbar, or the FAT driver, or the filesystem cache (from the samples in the Be File System book) ... I think the claim that Haiku's code is much better than BeOS's was is not really a hard one to make ;)
Some of our community members who remember BeOS well and still boot it up from time to time remark that Haiku feels like a much more polished and stable system than BeOS at this point, even when using classic applications.
Palm acquired the rights to BeOS, some part of it was used in the guts of webOS for their run at the new-wave smartphone market, and then webOS was eventually acquired by LG, which used it to power some number of smart TVs. If I'm remembering correctly.
Funny, then, that even BeOS or a sliver of it lived on in a mobile operating system. See this conversation about how Linux seems to almost overwhelmingly dominate the mobile OS space, with iOS as the most significant exception.
For what it's worth, a couple of years ago I saw an ATM reboot... and saw the OS/2 splash screen. I wonder if it's popular in that industry, or if it was an older/one-off machine?
WebPositive is based on modern WebKit, but quite a few features have not yet been enabled (because they depend on platform code that is not implemented) or don't function so well. But there is potential for sure.
Nothing prohibits porting Chromium or Firefox; they are just huge projects with a massive surface area. (Even WebKit, smaller than both of those, is larger than all of Haiku itself.) So we have put our time into WebKit instead.
There is Qt Creator already in the package repositories, and I think someone got Code::Blocks to at least build if not start. I've heard NetBeans used to work, though I'm not sure it still does. Eclipse would require much more work.
If you can build Qt5 on Haiku, then it may be possible to get its embedded version of Chromium running. It's largely self-contained, but some of the dependencies contain finicky code, like the SIMD JPEG library or FFmpeg.
People have attempted building QtWebEngine, yes, but it still has a surprising amount of OS-specific code within it. I think it may have gotten so far as displaying webpages, but then crashed incessantly. Plus, I'm not sure which browser shells that use QtWebEngine are still actively developed; Falkon looks to be dead...
Not terrible, but the email app is atrocious. The abstraction is great (imagine browsing emails like you do in Explorer; it's so native), but it fails to deal with modern mail (90,000 items in your inbox, filters, etc.) well.
Thanks for explaining. As a foreigner (to you US folks) this was not obvious to me. I always assumed "Inc" meant for-profit, like "SA(RL)" does in France. Around here non-profits have a different legal status.
Well, I do have a real name, and if you look in the right place you can find it, but >=99.9% of people in the Haiku community and those who have heard of it know me only as "waddlesplash", so why bother announcing with anything else?
I would love to go about writing a toy operating system, but I only know Python, and I feel like I would need to learn a lower level programming language (Rust, C++, or C) before I could even start.
I wonder if it would be simpler building on top of one of the Linux or BSD kernels (I've heard NetBSD's is pretty cool).
You have a long way to go but at least people have worn the path before you [1] so that it's easier than ever to find out what you need to do and how to do it.
In many countries a for-profit company cannot accept "free" contributions, as this is in breach of minimum-wage and volunteering regulations. E.g. here you can only volunteer for charities.
Got an example of a case where this caused a problem?
I can't imagine how that would work. While I can imagine that a company may not instruct people to work for them for free, wouldn't that also prevent me from making code freely available that a company then just uses?
It makes no sense. If that were the case, companies would not be able to use any software that is available for free.
There can't be many countries where that is actually the case.
I think, though, that it would be more accurate to say that there are now two haiku formats: the Japanese one, and the English one, which was originally a misunderstanding of the Japanese one but which now has a life of its own, for better or for worse.
I get it, but the "English version" you speak of is so often just one thought in seventeen syllables split into three lines at otherwise arbitrary points. There's no meaningful construction beyond fitting the thought into 17 syllables. If one is going to do the 5/7/5 style, make them actual phrases, rather than disjoint groups of words that only make sense in the whole.
Or you can go make a (tax-deductible) donation to Haiku, Inc. directly to support my contract: https://www.haiku-inc.org/donate/