<i>So with the 80386, Intel finally abandoned their failed approach of segmented address spaces and joined the linear rest of the world. (Of course the 386 is technically still segmented, but let's ignore that).</i>
That seems an odd interpretation of how they extended the 286's protected mode on the 386. The 286 converted the fixed address+64k segment registers into 'selectors' indexing the LDT/GDT, which added permissions/etc. via a segment descriptor structure, transparently cached along with the segment 'base' in generally invisible portions of the register. The problem with this approach was the same as CHERI's/etc.: it requires a fat pointer comprising segment+offset, which to this day remains problematic with standard C, where certain classes of programmers expect that sizeof(void *) == sizeof(int) (or long).
Along comes the 386, which adds a further size field (limit) to the segment descriptor, measured in either bytes or pages.
And of course it added the ability to back linear addresses with paging, if enabled.
It's entirely possible to run the 386 in an object=segment mode where each data structure exists in its own segment descriptor and the hardware enforces range checking, and heap compaction/etc. can happen automatically by simply copying the segment to another linear address and adjusting the base address. By today's standards the number of outstanding segment descriptors is limiting, but remember 1985, when a megabyte of RAM was a pretty reasonable amount...
The idea that someone would create a couple of descriptors with base=0:limit=4G and set all the segment registers to them, in order to assure that int=void *, is a sorta-known possible misuse of the core architecture. Of course this basically requires paging, as the processor then needs to deal with the fact that it likely doesn't actually have 4G of RAM, and the permissions model is then enforced at a 4K granularity. That leaves open all the issues C has with buffer overflows, code + data permission mixing, etc. It's not a better model, just one that's easier to reason about initially, but for actual robust software it starts to fall apart for long-running processes due to address space fragmentation and a lot of other related problems.
AKA, it wasn't necessarily the best choice, and we have been dealing with the repercussions of lazy OS/systems programmers for the 40 years since.
PS: Intel got (and gets) a lot of hate from people wanting to rewrite history by ignoring the release dates of many of these architectural advancements. E.g., the entire segment register 'fiasco' is a far better solution than the banked memory systems available in most other 8/16-bit machines. The 68000 is fully a year later in time, and makes no real attempt at being backwards compatible with the 6800, unlike the 8086, which is clearly intended to be a replacement for the 8080.
> (Of course the 386 is technically still segmented, but let's ignore that)
Yes, the 80386 was still technically segmented, but the overwhelming majority of operating systems (95%+) effectively abandoned segmentation for memory protection and organization, except for very broad categories such as kernel vs. user space.
Instead, they configured the 80386 registers to provide a large linear address space for user processes (and usually for the kernel as well).
> The idea that someone would create a couple descriptors with base=0:limit=4G and set all the segment register to them, in order to assure that int=void * is sorta a known possible misuse of the core architecture
The thing that you mischaracterize as a "misuse" of the architecture wasn't just some corner case that was remotely "possible", it was what 95% of the industry did.
The 8086 wasn't so much a design as a stopgap hail-mary pass following the fiasco of the iAPX 432. And the VAX existed long before the 8086.
I think my point revolves more around what the HW designers were enabling. If they thought that the flat model was the right one, they would have just kept doing what the 286 did, and fixed the segment sizes at 4G.
Yes. The point is that the hardware designers were wrong in thinking that the segmented model was the right one.
The hardware designers kept enabling complex segmented models using complex segment machinery. Operating system designers fixed the segments as soon as the hardware made that possible in order to enable a flat (paged) memory model and never looked back.
But were the software people actually right, or did they just follow the well-trodden path of VMS / UNIX, instead of making full use of the x86 hardware?
Having separate segments for every object is problematic because of pointer size and limited number of selectors, but even 3 segments for code/data/stack would have eliminated many security bugs, especially at the time when there was no page-level NX bit. For single-threaded programs, the data and stack segment could have shared the same address space but with a different limit (and the "expand-down" bit set), so that 32-bit pointers could reach both using DS, while preventing [SS:EBP+x] from accessing anything outside the stack.
Inasmuch as hardware exists to run software, software is the customer, and so the hardware people were wrong by definition: they created a product that their customers weren't asking for, didn't want, and had no use for.
Might segmentation have been better if the software had wanted it? Well, it's a counterfactual, so in some sense we can't know. And we can argue why we believe one or the other is better, but the evidence seems to be pretty overwhelming. It's not that there weren't (and aren't) operating systems that use segmentation, but somehow their "better" memory model didn't take the world by storm.
It's the security extension from 1996, which has a section on keyboard security.
And it's crazy to me that anyone can claim X11 can't be offloaded, which it's been doing for decades. From all the crazy blit/pattern HW acceleration, to GL/Vulkan implementations, to the fact that the entire server can be on the other side of a network pipe, meaning it could be anywhere, including entirely encapsulated on a graphics card/smart NIC/etc.
And if you're talking about the Xlib serialization, that was largely fixed with XCB.
The KDE blog entry reads like a modern political platform denying climate change, or claiming renewable energy can replace traditional energy sources on the grid.
Head firmly stuck in the sand, ignoring the cases that make many of those statements flatly false. Take nvidia support, for example: nvidia support on Linux is in the 'good luck' category, especially on any Optimus laptop, where one is lucky if the power management works, much less multi-screen docking/undocking, plus a heap of other issues. Then please clarify which actual driver stack one is running (nouveau, vs. the nvidia-provided binaries, vs. nvidia open source). To claim it's great with Wayland ignores core failures that still exist.
It's the same with X11 forwarding, which, like copy-paste, has been steadily degrading to the point where all the dbus/etc. services being depended on make double-digit percentages of applications not work with 'ssh -X'. And oh wow, waypipe. It seems all of Windows/OSX and KDE/Gnome are steadily shooting themselves in the foot.
I'm sorta happy I pulled my financial support not long ago; there are a couple of 'toxic' people in the distro/DE community who are pushing their own agendas, everyone else be damned. And weirdly enough, it seems those people aren't doing it for some corporate/whatever reason, but just to wave a flag about their accomplishments. The entire reason most people claim Wayland is 'better' is largely FUD, but that doesn't stop the true believers.
The 1996 extension had severe limitations. Untrusted clients have no clipboard, but also no GPU acceleration at all, and other features were barely tested with it, so it was somewhat random whether they would work. It breaks a ton of applications and was therefore used by approximately no one.
Ok, so instead of a couple of UAC-style prompts for screen readers, macro recording, desktop sharing, etc., and some tweaks to GDK, we got what? An entire new GDK windowing backend and a pile of broken applications? And it's been decades?
And it's not like actual flaws people found couldn't be fixed.
Did you consider that maybe when you hold an opinion different from the people actually knowledgeable about a topic (like the people developing desktop environments and the former developers of X building Wayland) it might be because you are wrong and have a poor understanding of the field, and not because they want to annoy you?
The flaws were not limited to the poor 1996 security extension. These kinds of half-broken extensions are everywhere in X11. At some point, if the tweaks you have to make amount to rewriting the whole rendering pipeline and adding new APIs for the most significant systems, what you are doing is strictly equivalent to writing a new piece of software, which is exactly what the people behind Wayland did.
And don't worry, the change-averse people you see here complaining about limitations fixed years ago would be complaining the same if the effort had gone into rewriting part of X11. That's life. Armchair complainers and keyboard warriors will complain while actual doers push things forward.
> Which is a load of FUD, the X11 security extensions from (checks google) 1996, restrict this.
Wait, what? X11 has extensions? As in, it can be "extended"? And has had the same thing since (for the sake of dialogue) 1996?
That's why it must die. We need a monolithic window system, with clear versions, all incompatible with each other. Only then can real progress be made. /s
ECC! I don't care what BS people say about ZFS/Btrfs/whatever: if a bit flips on the way to your storage, hopefully the checksum fails and nothing bad happens.
If you flip a bit in memory, on the way to the disk, then it's corrupt at rest, and future reads will likely propagate the error.
Sure, who cares, a glitch here or there in your kid's first birthday video. Better hope the glitch is there, rather than in, say, the bit of code computing the sector offsets/whatever.
Stories like this hit the media every couple years, so if it can happen on the big fancy EMC/whatever then it can happen on your little NAS in the closet.
So, just pay the little extra for the CPU+MB+RAM that protects your data from the NIC all the way to the HD.
Asahi is also still a platform with a huge pile of out-of-tree patches on top, because the platform itself is pretty unusual, requiring, for example, a 16K-page-size kernel, unlike pretty much every other ARM Linux platform.
I was going to write a snarky comment, but in the spirit of "if any of Qualcomm leadership is listening" I'm going to ask a question:
Why is any of this needed when the kernel is full of platforms that are forward compatible with the Linux kernel and boot and generally operate on day one, without a huge pile of changes?
What does it benefit the user to have a huge pile of proprietary implementations of devices they frankly don't care about? Ex: just about anything related to power management? Why can't QC adhere to industry standards when they implement standard devices, ex: USB? Why can't these platforms adhere to industry standard firmware interfaces rather than custom mailbox interfaces?
And generally considered unconstitutional, until suddenly it wasn't, just like GWB nationalizing airport screening under the TSA, thereby creating the single largest case of the federal government pilfering through everyday Americans' persons and property hunting for things that are legal to own. Which was also widely considered unconstitutional, until it wasn't.
And go read the 4th Amendment, with the understanding that no one who signed it thought anything in the Constitution authorized any part of the federal government to ignore the absolutist language the Bill of Rights is written in. The assumption was that if there arose a need to justify the federal government searching people like this, it needed a supermajority to pass an amendment to fix it.
Right, and the reason this has been going on for nearly a quarter century in the USA is that it was widely considered an unconstitutional national passport until 9/11, and it got bipartisan pushback from a number of states following its passage.
The federal government passed it along with the authoritarian wishlists various agencies had been salivating over for 40+ years and had been unable to get passed, until under the guise of saving us from the 'terrorists'. Now, 25 years later, it turns out the actual terrorists were probably just domestic authoritarians. The guys living in caves weren't really a threat, and could have been dealt with without passing a bunch of stuff affecting every single citizen of the country.