I've tried to abstract everything architecture-dependent, but there is still a lot of x86-specific code outside of the architecture-specific directories. I've been planning on adding support for new platforms, but I haven't yet had the motivation to start.
I'm not familiar with architectures other than x86, so I can't really say what it will take. At least a lot of work :D
I've managed to compile the GNU toolchain (binutils + gcc). gcc seems to work fine; it can properly compile source code into assembly. There are some problems with binutils that prevent it from creating object files or linking. I haven't dug deeper to see where the problem actually is.
Wow, I envy you! :) I have been working on a really simple self-compiling C compiler for some time and was wondering if you'd be interested in adding it to your OS haha. But if GCC works fine, then I don't see a reason :D
Pretty much what others said. You should read through https://wiki.osdev.org/Getting_Started and note that it will take a lot of time if you decide to go for developing an OS.
I do plan to port more! I want to keep the base OS free of 3rd-party code, but ports are a really nice way to get things running that I have not yet written.
I have some ports locally that are not yet working. I have git, binutils, gcc, and make all compiling, but they are giving some weird errors. Probably a bug in my libc or syscalls.
Oh yeah, sorry. I'm missing documentation on how to access the GUI environment. You have to enter it using the `start-gui` command. After that, doom should start by running `doom` in the GUI terminal.
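So the full sequence is just:

```
start-gui   # enter the GUI environment from the default shell
doom        # then run this in the GUI terminal
```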
Performance in the web emulator is really bad though, so don't expect much :D
Yeah, basically every commonly used device has its protocol standardized, although there are some devices where the manufacturer has to provide the drivers. All of the devices I have written drivers for have had their specifications publicly available for free (e.g. NVMe at https://nvmexpress.org/specifications).
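To give an idea of how far the public spec gets you, here's a rough sketch (not my actual driver code) of reading the NVMe controller's version register; the spec defines the register layout at the start of the BAR0 MMIO region, and `nvme_regs`/`kprintf` here are stand-ins for whatever your kernel provides:

```c
#include <stdint.h>

/* Stand-in for the kernel's logging function. */
extern void kprintf(const char *fmt, ...);

/* Register offsets from the NVMe base specification:
 *   0x00: CAP - controller capabilities (64-bit)
 *   0x08: VS  - version (32-bit: major in bits 31:16, minor in bits 15:8)
 */
#define NVME_REG_VS 0x08

/* Assume the kernel has already mapped the controller's BAR0 here. */
static volatile uint8_t *nvme_regs;

static uint32_t nvme_read32(uint32_t off)
{
    return *(volatile uint32_t *)(nvme_regs + off);
}

void nvme_print_version(void)
{
    uint32_t vs = nvme_read32(NVME_REG_VS);
    kprintf("NVMe controller version %u.%u\n",
            (vs >> 16) & 0xffff, (vs >> 8) & 0xff);
}
```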
I'm not really familiar with how Linux or Windows handles drivers. While compiling the Linux kernel, you specify which drivers you want to build into the kernel and which ones you want as modules. Usually the most common ones are compiled alongside the kernel, so there isn't really a need to install them later, just to load the driver modules. There are also devices that work with just a generic driver but would have more features with a specific one (e.g. LED settings on a gaming mouse). I think Windows maybe installs these optional drivers.
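If I remember right, that's the `=y` vs `=m` distinction in the kernel's `.config` (these are real Kconfig symbols, but any driver works the same way):

```
# Built into the kernel image:
CONFIG_NVME_CORE=y
# Built as a loadable module (e1000e.ko), loaded with `modprobe e1000e`:
CONFIG_E1000E=m
```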
Microsoft, having the resources it does, was able to design and implement a stable driver ABI. In fact, they also have a stable userspace ABI. Both have evolved, but that's not the point.
Linux, conversely (whether by design or by limitation), does not have stable ABIs (although userspace compatibility isn't terrible). Even though you can build kernel drivers as modules (and then load/unload them), those modules are unique to each specific kernel build.
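You can see this for yourself: every module records the "vermagic" of the kernel it was built against, and the kernel refuses to load a module whose vermagic doesn't match. Something like (output varies per build):

```
$ modinfo -F vermagic e1000e.ko
6.1.0 SMP preempt mod_unload modversions
```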
Remember, those guys didn't write the compiler & linker toolchains they were using, the way Microsoft was able to.
I will be honest: I understand why Linus & Co. have decided to keep it this way. It encourages hardware manufacturers to submit either their drivers or their specifications to the upstream kernel, which promotes software freedom. It is a noble goal and has served them well.
BUT - their decision has caused me no small amount of consternation over the years as a system administrator. Once you use Linux for something that is not "server software" (be it auth, file sharing, web, etc.), you are generally using it to drive some piece of hardware (CNCs, industrial tools, cough phone systems cough). Vendors, especially those that deal in low-volume / high-margin products, do not want to release their source code. They're allergic to the very idea. So I have gotten stuck in outdated-kernel hell on several occasions because the kernel devs decided to change internal interfaces in a point release that my driver software relied upon.
I so wish that Linux would move to a stable driver ABI. It would make administration & upgrades so much easier, especially on the embedded side.
But I also know that it'll never happen.
FWIW, the "no stable kernel ABI" is unique to Linux, although no one really does this like Microsoft (i.e. the BSDs can break between releases, but I do believe NetBSD is superior to FreeBSD in this respect).
I do maybe 95% of testing in a VM. It is way faster and much more convenient. I do test on real hardware regularly, though. It's always cool to see stuff actually running on bare metal, and bare metal is not as forgiving as VMs can be.
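For reference, a typical VM run is just a QEMU one-liner (the image name here is just an example, it's whatever your build produces):

```
qemu-system-x86_64 -m 512M -drive file=disk.img,format=raw -serial stdio
```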
Generally I decide on a feature I want to add. Then I do a general overview of the corresponding specifications and sometimes look at how existing OSes handle it. I try to build some kind of mental model of the system and what it needs. Then I basically just write whatever I come up with on the spot.
I have a really bad habit of not writing docs or taking notes. Basically I just hold everything in my head (and forget it by the time I need that information again). For some more complex stuff I do draw diagrams and write notes, but I pretty much only keep those locally for myself.
> I have a really bad habit of not writing docs or taking notes. Basically I just hold everything in my head (and forget it by the time I need that information again).
This was me for a very long time. I've started making notes now, knowing that I will most likely forget (some of it). I still have all sorts of files with my notes scattered around, though, i.e. the notes are somewhat disorganized. I thought of using Obsidian for them and I have tried it, but I don't use it consistently; I just go for my XTerm window with emacs or vim.