How are you defining "general-purpose OS"? Are you saying IoT and robotics shouldn't use a Linux kernel at all? Or just not your general purpose distros? I would be interested to hear more of your logic here, since it seems like using the same FOSS operating system across various uses provides a lot of value to everyone.
I think I want at least a hard real-time OS in any computer that can move physical objects. The Linux kernel cannot be it: a hard RTOS cannot have virtual memory (a page-table walk on a TLB miss takes unpredictable time), and many other mechanisms that are desirable in a desktop/server OS are ill-suited for an RTOS.
The scheduler must be tuned differently, I/O must be done differently. It is not only «this process has RT priority, don't preempt it», it is the design of the whole kernel.
Better yet, this OS should be verified (like seL4). But I understand that it is a pipe dream. Heck, even an RTOS is a pipe dream.
About IoT: this word means nothing. Is a connected TV IoT? I have no problem with Linux inside it. My lightbulb that can be turned on and off via ZigBee? Why do I need Linux there? My battery-powered weather station (because I cannot run 220V wiring into the backyard)? Better not: I need an as-low-power-as-possible solution.
To be honest, I think even using one kernel for different servers is technically wrong, because an RDBMS, a file server and a computational node need very different priorities in kernel tuning too. I prefer the network stack of FreeBSD, the file-server capabilities (native ZFS & co.) of Solaris, the transaction processing of Tandem/HPE NonStop OS, and the Wayland/GPU/desktop support of Linux. But everything bar Linux is effectively dead. And Linux is only «good enough» at everything: mediocre.
I understand the value of unification, but as an engineer I'm sad.
I work on embedded devices, fairly powerful ones to be fair, and I think systemd is really great, useful software. There's a ton of stuff I can do quite easily with systemd that would take a ton of effort to do reliably with sysvinit.
It's definitely pretty opinionated, and I frequently have to explain to people why "After=" doesn't mean "Wants=", but the result is way more robust than any alternative I'm familiar with.
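To make the distinction concrete, here's a hypothetical pair of directives (unit and binary names invented) showing why "After=" alone is not enough: it only orders startup, while "Wants=" actually pulls the other unit in.

    # sensors.service (hypothetical example)
    [Unit]
    Description=Read sensors and publish measurements
    # Ordering only: if broker.service is part of the same transaction,
    # start it before this unit. This line alone will NOT start it.
    After=broker.service
    # Dependency: actually pull broker.service into the transaction.
    # Without this, After= is a no-op when nothing else starts the broker.
    Wants=broker.service

    [Service]
    ExecStart=/usr/bin/read-sensors

    [Install]
    WantedBy=multi-user.target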
If you're on a system so constrained that running systemd is a burden, you are probably already using something like buildroot/yocto and have a high degree of control over which init system you use.
I don't believe there are any serious technical obstacles to providing a graphical installer in something like an initramfs environment. Many distros do provide graphical installation mechanisms using PXE, which loads the kernel and installer initramfs over the network (and is similar in the sense that it won't touch local storage unless you tell it to).
I don't have a machine around to quickly check, but I thought the Arch install media used squashfs? In which case I wouldn't have thought it was safe to blow away the backing store.
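If someone has a booted live environment handy, something like this should show whether a squashfs is backing the live root (both tools ship with util-linux):

    # list any squashfs mounts (the read-only live image, if present)
    findmnt -t squashfs
    # or show filesystem types for each block/loop device
    lsblk -f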
And if Linux$oft suddenly decides every user's system needs a backdoor or that every system must automatically phone home with your entire browsing data, then, well, too bad, so sad of course!
Doing secure boot properly is kind of difficult. There are a bunch of TPM measurement registers for various bits and bobs (kernel, initramfs, cmdline, lots more). Using UKIs simplifies it a lot, but it’s not trivial to do right at the moment.
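As a rough sketch of the UKI route (this is systemd's ukify; paths and key names here are placeholders, and exact flags vary a bit between versions):

    # Bundle kernel + initramfs + cmdline into one signed EFI binary,
    # so they are verified and measured as a single unit
    ukify build \
        --linux=/boot/vmlinuz \
        --initrd=/boot/initramfs.img \
        --cmdline="root=/dev/mapper/root rw quiet" \
        --secureboot-private-key=db.key \
        --secureboot-certificate=db.crt \
        --output=/boot/EFI/Linux/myos.efi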
Banks don't use these things because they provide any real security. They use them because the platform company calls it a "security feature" and banks add "security features" to their checklists.
The way you defeat things like that is through political maneuvering and guile rather than submission to their artificial narrative. Publish your own papers and documentation that recommend apps not support any device with that feature, or require it to be off, because it allows malware to use the feature to evade malware scans, etc. Or point out that it prevents devices with known vulnerabilities from being updated to third-party firmware that has the patch: the OEM stopped issuing patches, but the more secure third-party firmware can't sign an attestation. In other words, the device that can do the attestation is vulnerable and the device that can't is patched.
The way you break the duopoly is by getting open platforms that refuse to support it to have enough market share that they can't ignore it. And you have to solve that problem before they would bother supporting your system even if you did implement the treachery. Meanwhile implementing it makes your network effect smaller because then it only applies to the devices and configurations authorized to support it instead of every device that would permissionlessly and independently support ordinary open protocols with published specifications and no gatekeepers.
Another point is that often the apps banks make are developed by third-party outsourcing firms (even if within the same developed country). If someone uses a MITM proxy or logcat to see some traffic and publishes it, the bank gets bad publicity. So to prevent this, the banks and their devs declare that anything that is not stock (i.e. a non-stock ROM) is bad.
FOSS is also something many app-focused software devs don't like in their products. While people in cloud and infra like it, app devs like these tools while developing or building a company, but not in the resulting end-user apps.
Remote attestation absolutely provides increased security. Mobile banking fraud rates are substantially lower than desktop/browser banking fraud rates, and attestation is a major reason why.
I think every computing professional needs to spend at least a year trying to secure a random company's Windows network to appreciate how impossible this actually is without hardware-based roots of trust like TPMs and HSMs.
It's not even that. The main reason is probably that attackers are going to be writing code to automate their attacks, and desktops are easier to develop on than phones, so that's what they use with no reason to do otherwise.
Even if you stopped supporting desktops, then they would just reverse engineer the mobile app instead of the web app and extract the attestation keys from any unpatched model of phone and still run their code on a server, and then it would show up as "mobile fraud" because they're pretending to be a phone instead of a desktop, when in reality it was always a server rather than a phone or a desktop.
And even if attestation actually worked (which it doesn't), that still wouldn't prevent fraud, because it only tries to prove that the person requesting the transfer is using a commercial device. If the user's device is compromised then it doesn't matter if it can pass attestation because the attacker is only running the fake, credential stealing "bank app" on the user's device, not the real bank app. Then they can run the official bank app on an official device and use the stolen credentials to transfer the money. The attestation buys you nothing.
All this theatre is turning out to be nothing more than giving up the agency we have today (nice things) for a risk-averse, knee-jerk runaround with glaring ulterior motives... just like the scan-your-face+ID push for services.
Would YOU be willing to use a bank that refused to use TLS? I didn't think so. How is you refusing to accept remote attestation and the bank refusing to connect to you any different?
Because banking has existed and operated fine for countless decades without it (attestation).
Also, as there is ample discussion elsewhere, having attestation does NOT eliminate the ability for your account to become compromised.
As restated.
"If the user's device isn't compromised then everything is fine regardless of whether or not it can pass attestation. If the user's device is compromised, the device doesn't need to pass attestation to run a fake bank app and steal the user's credentials. Once the attacker has the user's credentials they can use them to transfer money regardless of whether or not they have to use a different device that can pass attestation.
It doesn't really provide any security."
IT DOES however completely rewrite the paradigm of general purpose computing in very asymmetrical ways.
If the user's device isn't compromised then everything is fine regardless of whether or not it can pass attestation. If the user's device is compromised, the device doesn't need to pass attestation to run a fake bank app and steal the user's credentials. Once the attacker has the user's credentials they can use them to transfer money regardless of whether or not they have to use a different device that can pass attestation.
It doesn't really provide any security.
On top of that, there are tons of devices that can pass attestation that have known vulnerabilities, so the attacker could just use one of those (or extract the keys from it) if they had any reason to. But in the mobile banking threat model they don't actually need to.
It's not a matter of being hard. It's like trying to prevent theft by forcing everyone to wear a specific brand of shoes. The fact that the shoe company insists that it's useful is not evidence that it is.
It's not that you can't solve the problem, it's that you can't solve the problem using that mechanism. Attestation is useless for this.
The thing that would actually work for this is to have an open standard supported by PCs and phones to read the chip in payment/ATM cards, because then you could do "card-present" transactions remotely. You touch your card to the phone/PC and enter your PIN to authorize a new merchant. That actually solves the problem because then instead of the bank trusting every commercially available phone on the market, they only trust the specific card that they mailed to the cardholder, and you can only authorize a new merchant with physical possession of the card because it contains a private key. But that doesn't require attestation because then you don't need the keys to be in the phone since they're in the card.
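A crude illustration of the idea with plain openssl (a real card does this inside its secure element with an EMV-style protocol; the file names are made up):

    # "card" side: sign the bank's challenge with the key that never
    # leaves the card
    openssl dgst -sha256 -sign card_private.key -out response.sig challenge.bin

    # bank side: verify the response against the public key it recorded
    # for the specific card it mailed to the customer
    openssl dgst -sha256 -verify card_public.pem -signature response.sig challenge.bin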
Well, it depends. I can still do banking from my desktop computer today because there is no way our banks can attest that we're running our browsers on their approved hardware+software stack. Of course they could already disable banking from the browser entirely, but if they keep it open yet require attestation in your browser once that becomes possible, I don't think that's a good thing.
It would, but how, and who would run it? Ideally someone like the Linux Foundation would sit in on the White House meetings or EU meetings. But they don't. Governments don't understand. I once participated in a youth meeting with MEPs - most of them have only iPhones. Most (not all) lawmakers live on a different planet.
Also, IIRC, the Linux Foundation etc. are not interested in doing that kind of standardisation.
Isn't it possible to force TPM measurements for stuff like the kernel command line or initramfs hash to match in order to decrypt the rootfs? Or make things simpler with UKIs?
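Yes, that's roughly what systemd-cryptenroll does. A sketch, with the device path and PCR selection as examples only (which PCRs cover the cmdline/initrd depends on your boot chain):

    # Bind a LUKS volume to the TPM so it only auto-unlocks when the
    # selected PCRs match the measured boot state
    # (PCR 7 = Secure Boot policy, PCR 11 = UKI measured by systemd-stub)
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7+11 /dev/nvme0n1p2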
Most of the firmwares I've used lately seem to allow adding custom secureboot keys.
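For example with sbctl, assuming the firmware is in setup mode (paths illustrative):

    sbctl status                            # Secure Boot / setup mode state
    sbctl create-keys                       # generate your own PK/KEK/db keys
    sbctl enroll-keys --microsoft           # enroll them, keeping Microsoft's keys
    sbctl sign -s /boot/EFI/Linux/myos.efi  # sign and track your bootloader/UKI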
I mean, isn't that just about what happened to Docker?
They wrote a really nice wrapper around cgroups/ns/tarball hosting and then struggled to monetize it because a large portion of their users are exactly the kind of people who could set up a curlftpfs document cloud.
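A toy version of the core of it, using plain util-linux as root (the image tarball name is hypothetical; real Docker adds cgroup limits, overlayfs layers, image distribution, a REST API, and so on):

    # unpack an image tarball and run a shell in fresh namespaces
    mkdir rootfs && tar -xf image.tar -C rootfs
    unshare --fork --pid --mount --uts --ipc --net \
        chroot rootfs /bin/sh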
Shouldn't you already be using low privilege accounts for stuff like gathering information about prod?
Overprivileged accounts are a huge anti-pattern for humans too. People make mistakes. Insider threats happen. Part of ops is making sure users don't have the privileges to do damage without appropriate authorization.
I've never read Peter Thiel's books, but isn't that kinda a part of his playbook? Monopolies, but driving progress? "Competition is for losers"? I never fully understood it because it seems like then you're just competing with yourself.