Unless you have untrusted users with SSH access, you can get away with a lot. I've reviewed many of the major Linux patches over the past several years and found we weren't actually impacted by most of them.
For example, I don't need the Zombieload/MDS patches as I don't have anyone running untrusted code on the servers, and I didn't need the rds_tcp vulnerability patch from last week because I don't have the RDS modules loaded on any of my servers. I didn't need client-side OpenSSH patches on these servers either, nor the OpenSSL patches for SSL over UDP. Typically a quick check with ansible is all it takes to confirm whether these things are or aren't real risks for you.
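For illustration, the ad-hoc checks I mean are along these lines (a sketch -- inventory names are whatever yours are, and the rds example just matches the CVE above):

    # is the rds module loaded anywhere? (prints "rds not loaded" where it isn't)
    ansible all -b -m shell -a "lsmod | grep '^rds' || echo 'rds not loaded'"

    # is anything other than sshd listening on a port?
    ansible all -b -m shell -a "ss -tulnp | grep -v sshd"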
EDIT:
Just looking at some CVE lists... Assuming the entire attack surface is the kernel and pre-auth OpenSSH, it looks like you may be in the clear running stock Ubuntu Server Minimal 14.04, a five-year-old OS, today.
Kernel vulnerabilities in the TCP stack or related code that result in code execution are few and far between. OpenSSH vulnerabilities... well, the last pre-auth OpenSSH vulnerability, one of the two in its entire lifetime, had the severe consequence of... being able to check usernames too fast.
Please let me know if I've missed a big one, but I don't see anything that could be used to do more than DoS a system like this running an old kernel and OpenSSH server.
The thing about Zombieload/MDS (well, not really those, they're really more theoretical attacks... But Meltdown/Spectre in general, and any other local root exploit) is that they turn a remote shell, and perhaps a very limited one, into a remote root shell.
Not having any ports open is one thing, but I do think your attack surface is larger than just the kernel and OpenSSH. Does Ubuntu not have UPnP open by default?
But things connecting from your computer to the outside world can also be exploited. Just the very first one I thought of, dhcpcd, has a recent CVE. And there are many more programs on a default Ubuntu install that connect to the outside world without user interaction -- are you willing to let a vulnerability in any one of those become a remote root shell?
> The thing about Zombieload/MDS (well, not really those, they're really more theoretical attacks... But Meltdown/Spectre in general, and any other local root exploit) is that they turn a remote shell, and perhaps a very limited one, into a remote root shell.
There are also many, many other local exploits in Linux that don't get nearly as much PR, and if an attacker wants to take advantage of one they can basically just wait. Local privescs are pretty common, as the attack surface is massive.
Isolating to separate kernels in separate VMs or, better, separate physical hardware is always better than relying on Linux's privilege separation. All but my development servers could be run as root with no significantly greater risk.
> Not having any ports open is one thing, but I do think your attack surface is larger than just the kernel and OpenSSH. Does Ubuntu not have UPnP open by default?
On Ubuntu server as configured by this provider at least, this is all I have exposed in netstat -nlput
> But things connecting from your computer to the outside world can also be exploited. Just the very first one I thought of, dhcpcd, has a recent CVE. And there are many more programs on a default Ubuntu install that connect to the outside world without user interaction -- are you willing to let a vulnerability in any one of those become a remote root shell?
Not really a serious concern: dhcpcd isn't running on any of my servers. Sorry if this was confusing -- I meant Ubuntu Server... not much runs, really. Yes, of course I wouldn't suggest browsing the web or similar operations, which opens up a massive attack surface, but for a server the attack surface is much narrower. Not much phones home except perhaps an update check, if you have that enabled.
> Isolating to separate kernels in separate VMs or, better, separate physical hardware is always better than relying on Linux's privilege separation.
Sure, you can also do more. You can also air-gap your machines and sneaker-net everything that's needed, or if your server needs to send updates you can send UDP over a tx-only link (use an optical link and only connect the tx.)
But there's a cost-benefit analysis here. Discounting MDS is one thing; I actually agree with Intel's risk assessment on it, biased as they are. But generally, installing security updates on an LTS distro is easy and painless; there's no real reason not to do it.
> All but my development servers could be run as root with no significantly greater risk.
Are we operating under a different definition of "risk" here? Running servers as root definitely increases risk. As root you can do much more persistent damage when an attack does happen, basically putting the machine in a state where the only solution is to wipe and install from scratch.
> As root you can do much more persistent damage when an attack does happen, basically putting the machine in a state where the only solution is to wipe and install from scratch.
In any reasonable project or company, a server that malicious actors ever had access to is counted as completely compromised, no matter what permissions they had. There's basically no other option than wipe and reinstall, since the OS can't really perform a trusted self-check. For all you know, there's a rootkit living in the bootloader.
Of course, even the hardware can't really be trusted, but that's another level of risk management, while "wipe and reinstall" (or wipe and restore from backups) is an industry standard.
Ubuntu Server's default install comes with (at least) a DHCP client and an NTP client. Maybe more things; those are just the two I checked.
Sure, you can use static IPs, and disable NTP, and take other steps to harden your server and reduce your attack surface. But remote exploits for random default programs are routinely discovered, so defense in depth is just a good idea.
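If you'd rather enumerate that surface on a given box than guess, something like this does it (a rough sketch; it shows the same information as the netstat check mentioned elsewhere in the thread):

    # every listening TCP/UDP socket and the process that owns it
    sudo ss -tulnp

    # everything long-running, network-facing or not
    systemctl list-units --type=service --state=running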
That's my bad; I was thinking of some personal scenarios in which I've had low-value servers running single applications connected to the internet for years without updates at all.
That's dangerous thinking. Any unpatched service can turn into a pivot point. If the same folks who manage more critical pieces of infrastructure log in there, it can almost certainly be used to pivot onto their other systems.
I think you're thinking too big here. This is a single Scaleway machine, by itself. It has only one user (me), and one listening process (two if counting ssh). Even if you gained root access to it, there's literally nothing you could do that I would care too much about, including taking it offline.
That wasn't meant to be a point of pride or accomplishment, more a testament to the reliability of Scaleway. I've been super happy with them.
It's just the security cargo cultists. Some were arguing that I should be turning on Spectre/Meltdown mitigations on my Hadoop cluster. It's my cluster, dude. My engineers have the right to run code on it. If they don't and they're running code on it, I've already lost the game. If you can even contact one of my machines, the game is up. What even is the threat model here for Spectre/Meltdown?
They have no sense of risk. Just security cargo-cultists.
I don't know that it's cargo-cult behavior, but maybe it's a lack of perspective in general. I work in security, and yes, it's good practice to patch all the things, but only in that it's the easiest default policy that makes things happen. If you have to pick and choose, you need to understand things well enough to be able to judge.
As a security consultant, I think that kind of perspective is where I can help add value to our clients; our usual point of contact is a project manager, whose eyes tend to glaze over when given a big vulnerability report, or worse, a spreadsheet. To them, every line feels like some sort of crisis. Now if I can get them to patch in a timely fashion, there is at least no pile of years-old issues, and we can take the time to discuss the few that remain.
Very true. We have some cluster users on Gentoo who are happy that they can simply flip off all those pesky performance-eating security mitigations system-wide. Not only in the kernel, but also on the userspace side: PIC/PIE/SSP/etc.
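For the kernel side at least, checking and flipping that is trivial these days (roughly -- assumes a kernel with the umbrella switch, 5.2+ or a distro backport):

    # what the running kernel thinks it is (or isn't) mitigating
    grep . /sys/devices/system/cpu/vulnerabilities/*

    # kernel command-line parameter to turn the whole lot off at boot
    #   mitigations=off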
> Even if you gained root access to it, there's literally nothing you could do that I would care too much about, including taking it offline.
Well, the person who has to deal with it being used as a jump point or IRC relay hiding some third party's behavior might care.
Providing only one small service and OpenSSH and not being tied into other infrastructure directly means it's not really a desirable target for the rest of the project, but it also means it's not likely to cause too much of a problem if it gets a reboot every once in a while.
The added benefit is that it gives you the occasional point in time to make sure everything is running cleanly with all the updates applied. You ensure SSH and the geolocation service are restarted after they get updates, right? What about after glibc updates? What about after a zlib update?
If you really want to make sure updates are applied, you want to make sure any prior version of the updated code that's still active in memory is cleared out. Knowing whether that's been accomplished isn't always easy, but one easy way to do it is a reboot after an update.
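If you'd rather check than reboot, one rough way is to look for processes that still have deleted (i.e. replaced-on-disk) files mapped:

    # PIDs whose memory maps still reference files deleted or replaced on disk
    sudo grep -l '(deleted)' /proc/[0-9]*/maps 2>/dev/null

It's noisy -- plenty of processes legitimately map deleted temp files -- but anything still running against a replaced glibc or zlib will show up. A reboot remains the only check that's trivial to reason about.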
Is it really dangerous to log into compromised machines, though?
I think unless one does something stupid like SSH agent forwarding or using shared passwords, it should be safe even if the machine is totally compromised.
If you fetch a remote binary or script to your dev machine and run it, your dev machine could be compromised -- but I am not sure why anyone would want to do this.
If you enable X forwarding, then the compromised machine can own you. But you should not have any X apps on the remote server to begin with.
If you are transferring files, then vulnerabilities in rsync/rcp could get you. But those would have to be on the client side, your desktop machine -- and hopefully this machine is well patched.
If you are using IP filters / the machine is on LAN, then yes, it could be bad. But in this case, the machine was on the public network.
There was an old bug where the "get window title" escape sequence could put stuff into the terminal's input buffer, but it was fixed years ago.
Don't get me wrong, I think you do want to keep the machines up to date, and one should always enable unattended updates.
But I also believe in defense-in-depth, so if one is "managing more critical pieces of infrastructure", they should always assume the remote machine they are managing is compromised, and always take precautions.
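For what it's worth, the client-side precautions mostly boil down to a couple of ssh options (the host name here is obviously a placeholder; the same options can live in ~/.ssh/config):

    # log in with agent forwarding, X11 forwarding and any configured forwardings off
    ssh -o ForwardAgent=no -o ForwardX11=no -o ClearAllForwardings=yes admin@possibly-compromised-host

ForwardAgent and ForwardX11 already default to no in stock OpenSSH; the point of spelling them out is to make sure nothing in your config quietly re-enables them.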
Generally speaking you're correct, though - keep your client up to date and you'll be protected from a hacked server.
Clients in general expose much larger attack surfaces in many cases, so they will likely have more frequent and significant security patches. There's a lot more to attack in a web browser than in, say, Nginx.
It seems to be something that is technically possible, but almost no one uses it, because reboots aren't that bad -- especially in the age of Docker, where you just destroy the whole OS when you update.
Not really. It's technically possible with Ksplice, but almost no distro actually supports it.
Beyond the kernel, you have various libs and binaries that will be replaced during upgrades. All can usually/mostly be restarted without a reboot, but just upgrading packages alone won't guarantee all running processes have been updated.
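In practice the usual answer there is to run something like needrestart (or the older checkrestart from debian-goodies) after upgrading:

    # report services that are still running pre-upgrade binaries or libraries
    sudo needrestart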
The core live-patching code behind kpatch/kGraft has been upstream since Linux 4.0, and both Red Hat and SUSE support it (in fact, many security patches are released this way). I believe some less enterprise-y distros like Fedora and Ubuntu support it too.
The issue isn't whether it's supported; the problem is that live patching is limited in what it can patch (when functions are inlined it can become impossible to patch them, and so on). So while a machine with 4 years of uptime might be live patched, there are some security issues that cannot be patched that way (for instance, the retpoline patches for Spectre require every indirect call to be compiled differently, and that requires a reboot).
Ubuntu supports it officially, and so does Fedora. From my experience it works more or less fine on CentOS, so probably RHEL too. For SUSE there is kGraft, so basically >90% of the install base supports live patching.
I don't think it's part of the usual Ubuntu distro. I understood you need to register to get it, and it's free (as in beer) only for limited use cases. I don't remember the details.
I run it on all dedicated servers, as well as managed servers where we can easily pass the cost on.
They're currently releasing livepatches across all the kernel builds to address the Intel MDS stuff (at least the kernel-based mitigations) and it's all very pleasant and hands-off.
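For reference, setting it up is roughly this (details vary a bit by release, and the token comes from the Livepatch page on Canonical's site):

    # one-time setup: install the client and attach your Livepatch token
    sudo snap install canonical-livepatch
    sudo canonical-livepatch enable <TOKEN>

    # afterwards, see which fixes have been applied to the running kernel
    sudo canonical-livepatch status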
It's cool but we run Linux in VMs. The VMs can complete a reboot in less than 20s. It's fast enough that it doesn't register on uptime monitors. Live patching adds complexity for not a lot of benefit.
Not to mention that if you're trying to be rebootless you have to worry about running services holding old versions of libraries in memory. Sure there's checkrestart/needrestart, but when reboots are so fast it doesn't matter much.
I don't think Scaleway has upgraded their bare-metal ARM kernel (their first ARM generation) for a very long time. So if you don't build your own, there are no kernel patches.
(Please correct me if I'm wrong. Being wrong here would be good news.)
CrayLink was awesome. I worked for a web hoster in the late 90s that was big on SGI and we'd abuse temporary CrayLinks between two Origin 200s to upgrade storage or spin off sites to a new server with minimal service impact.
There are plenty of System X HPC installs. They just no longer come from IBM. Even when IBM still owned the line, it was being outsourced. The University of California SRCS system from ~10 years ago was an iDataPlex sold to us by IBM. Most of the boards had Asus marked on them.
Also remember the System X line was sold way after the Think lines.
The HP Compaq merger as well as the Agilent spin-off in many ways marked the transition from an engineering company to a bunch of vacuum cleaner salespersons.
PS: Yes it's unfair to blame all this on Compaq, probably more a result of increasingly expensive semiconductor R&D.
My rule of thumb for buying laptops (since the mid 2000s) has been that Compaq is the bottom-rung cheap brand that should always be avoided. Not sure their survival has been a good thing.
In 2006, every single one of my coworkers bought a brand-new MacBook, and within a month every single one of my coworkers had a MacBook in the shop.
I bought a Compaq laptop with a 64-bit CPU for under $1000. It ran flawlessly for over a decade, needing only a new battery. I eventually gave it to my parents who still have it.
Brand necrophilia. Compaq consistently built better gear than HP before being absorbed. HP used that brand for their junk as a way of getting back at Compaq.
Wrong. Compaq had much higher DOA and other defect rates in the mid '90s. They relied on customers' institutional memory from the '80s, when they really were the best.
Counterpoint: my Compaq Presario 1210 survived for about 20 years before it finally stopped POSTing. Even the original hard drive still worked (albeit with a range of bad sectors around which I had to partition).
The consumer gear was trash. The server lines were an entirely different story. Also worth noting that by the mid 2000s, Compaq was just a branding on the consumer side. The hardware was all the same old HP consumer junk.
Logging in with @cyphar:cyphar.com works for me. I host my own homeserver so I did have to set it up, but if you are using someone else's homeserver that is all you need to do. The homeserver and identity server information is all provided with the new .well-known/matrix/client support.
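The discovery bit is just a small JSON document served from the domain in your MXID; for a hypothetical example.com it looks something like:

    curl https://example.com/.well-known/matrix/client
    # {"m.homeserver": {"base_url": "https://matrix.example.com"}}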
If you're not a matrix.org user, you either enter your full MXID to discover your server, or manually enter your homeserver's URL under Advanced. If this isn't working, please yell...