I don't understand the apparent lack of hypervisor-based kernel protections in desktop Linux. There seems to be a significant opportunity for improvement beyond the basics of KASLR, stack canaries, and shadow stacks. Yet I don't see much work in this area on the Linux desktop, and the people far smarter than me who develop the kernel every day have not seen fit to produce the specific advanced protections I get into below. Where is the gap in my understanding? Is this so difficult or costly that the open source community cannot afford it?
Windows PCs, recent Macs, iPhones, and a few Android vendors such as Samsung run their kernels atop a hypervisor. This design permits introspection and enforcement of security invariants from outside, or underneath, the kernel. Common mitigations include protecting critical data such as page table entries, function pointers, and SELinux policy decisions, which raises the bar on injecting code into the kernel. Hypervisor-enforced kernel integrity seems to be a popular and at least somewhat effective mitigation on those platforms, yet it is not common on desktop Linux.
Meanwhile, in the desktop Linux world, users are lucky if a distribution even implements secure boot and offers signed kernels. Popular software packages often require short-circuiting this mechanism so the user can build and install kernel modules, such as the NVIDIA and VirtualBox drivers. SELinux is uncommon, so on most installations root access is more or less equivalent to kernel privilege, including the ability to introduce arbitrary code into the kernel. TPM-based disk encryption is only officially supported (and experimentally at that) by Ubuntu and is usually tied to secure boot; elsewhere users are largely on their own. Taken together, this feels like a missed opportunity for additional defense in depth.
It’s easy to put code in the kernel. I can do it in a couple of minutes for a "hello world" module. It’s really cool that I can do this, but is it a good idea? Shouldn’t somebody try and stop me?
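For reference, this is roughly all a "hello world" module takes. It's just a generic out-of-tree sketch (built with the usual obj-m Makefile against the running kernel's headers), nothing specific to any distro:

    /* hello.c - minimal "hello world" kernel module sketch.
     * Build out of tree with a one-line Makefile (obj-m += hello.o), then
     * `make -C /lib/modules/$(uname -r)/build M=$PWD modules`, and load
     * with insmod/modprobe as root.
     */
    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal hello-world module");

    static int __init hello_init(void)
    {
            pr_info("hello: module loaded\n");
            return 0;   /* 0 = success; nonzero aborts the load */
    }

    static void __exit hello_exit(void)
    {
            pr_info("hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);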
Please insert your unsigned modules into my brain-kernel. What have I failed to understand, or why is this the design of the kernel today? Is it an intentional omission? Is it somehow contrary to the desktop Linux ethos?
It’s easy to put code in the kernel. I can do it in a couple of minutes for a “hello world” module. It’s really cool that I can do this, but is it a good idea? Shouldn’t somebody try and stop me?
Yes, not being root stops you. Don't run untrusted code as root.
My illustration is meant to highlight the lack of care that is taken w.r.t. kernel code compared to systems that require code signing. If some privileged process is compromised, it can simply ask the kernel to insert a module with arbitrary code. Should processes be able to do this? For many systems, the answer is no: only otherwise authenticated code can run in the kernel. No userspace process has the right to insert arbitrary code. A system with a complete secure boot implementation and signed kernel modules prevents even root from inserting an unauthorized module. Indeed, on Android on a Samsung device with RKP, unconfined root still cannot insert a kernel module that isn't signed by Samsung. The idea of restricting even root from doing dangerous things isn't new. SELinux uses rules to enforce similar concepts.
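To make that concrete: module loading ultimately goes through init_module(2)/finit_module(2), which require CAP_SYS_MODULE, and with signature enforcement or lockdown active the kernel can refuse the module even for root. A rough sketch of what a userspace loader does (the hello.ko path is just a placeholder):

    /* modload.c - sketch: try to load a module via finit_module(2) and
     * report why the kernel refused it. Without CAP_SYS_MODULE this fails
     * with EPERM; with module signature enforcement or lockdown it can be
     * rejected even as root. "hello.ko" is a placeholder path.
     */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = open("hello.ko", O_RDONLY);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            if (syscall(SYS_finit_module, fd, "", 0) != 0) {
                    fprintf(stderr, "finit_module: %s\n", strerror(errno));
                    return 1;
            }
            puts("module loaded");
            return 0;
    }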
Yes, not being root is a useful step, but protecting the kernel from root might still be desirable, and many systems try to do this. Exploits can sometimes get untrusted code running as root on otherwise reasonably secure systems. It's nice if we can have layered security that goes beyond the root boundary, so I ask: why don't we have this today when other systems do?
if you don't want modules, you can compile a monolithic kernel. i have done so for a few years. it saves time if you run something like gentoo or LFS, because you don't need an initrd or any mechanism for loading modules. it has the downside of not being able to change some parameters at runtime, i.e. you have to reboot and pass different parameters via the bootloader. you can then also switch off support for loading modules.
I should not be forbidden from running my own code on my own hardware, right? But I should be protected from random code taking over my entire system, right? That's why Linux restricts certain operations to root.
You absolutely can if you want to. Xen has been around for decades, and most people who do GPU passthrough are technically already doing something like this with pure Linux. Xen is the closest to what Microsoft does: with Hyper-V you technically run the hypervisor first and Windows on top of it, which is similar to Xen and its special dom0.
But fundamentally the hard part is this: the freedom of Linux brings an infinite combination of possible distros, kernels, modules and software. Each module is compiled for the exact version of the kernel you run. The module must be signed by the same key as the kernel, and each distro has its own set of kernels and modules. Those keys need to be trusted by the bootloader. So when you go and try to download the new NVIDIA driver directly from their site, you run into problems. And somehow this entire mess needs to link back to one source of trust at the root of the chain.
Microsoft on the other hand controls the entire OS experience, so who signs what is pretty straightforward. Windows drivers are also very portable: one driver can work from Windows Vista to 11, so it's easy to evaluate one developer and sign their drivers. That's just one signature. And the Microsoft root cert is preloaded on every motherboard, so it just works.
So Linux distros that do support secure boot properly will often have to prompt the user to install their own keys (which is a UX nightmare of its own), because FOSS likes to do things right by giving full control to the user. Ideally you manage your own keys, so that not even a developer from a distro can build a signed kernel/module to exploit you: you are the root of trust. That's also a UX nightmare, because average users are good at losing keys and locking themselves out.
It's kind of a huge mess in the end, to solve problems very few users have or care about. On Linux it's not routine to install kernel-mode malware like Vanguard or EAC. We use sandboxing a lot via Flatpak, Docker and the like. You often get your apps from your distro, which you trust, or from Flathub, which you also trust. The kernel is very rarely compromised, and it's pretty easy to clean up afterwards too. It just hasn't been a problem. Users running malware on Linux is already very rare, so protecting against rogue kernel modules and the like just isn't needed badly enough for anyone to be interested in spending the time to implement it.
But as a user armed with a lot of patience, you can make it all work, and you'll be the only one in the world who can get in. Secure boot with systemd-cryptenroll using the TPM is a fairly common setup. If you're a corporate IT person you can lock down Linux a lot with secure boot, module signing, SELinux policies and restricted executables. The tools are all there for you to do it as a user, and you get to custom-tailor it specifically for your environment too! You can remove every single driver and feature you don't need from the kernel, sign that, and have a massively reduced attack surface. Don't need modules? Disable runtime module loading entirely. Mount /home noexec. If you really care about security you can make it way, way stronger than Windows with everything enabled, and you don't even need a hypervisor to do that.
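For the "disable runtime module loading" part, the usual route is just putting kernel.modules_disabled = 1 in /etc/sysctl.d/; the sketch below only illustrates the underlying one-way write (needs root, and it can't be undone until reboot):

    /* disable_modules.c - sketch: flip the one-way kernel.modules_disabled
     * sysctl so no further modules can be loaded until reboot. Normally
     * you'd use sysctl(8) or a sysctl.d drop-in; this just shows the write.
     */
    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/proc/sys/kernel/modules_disabled", "w");
            if (!f) {
                    perror("open modules_disabled");
                    return 1;
            }
            if (fputs("1\n", f) == EOF || fclose(f) == EOF) {
                    perror("write modules_disabled");
                    return 1;
            }
            puts("runtime module loading disabled until reboot");
            return 0;
    }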
This is a question I myself have wondered about for a long while now. Before the Arch warriors come in to shout about how Secure Boot is evil and also useless, and how everything Windows, Mac, and so on does for security is only needed because they're insecure and not free and spyware and other angry words: I agree with your assessment.
The problem is that while Linux is well tested in server environments, it is still an insignificant factor on the desktop. Servers are very well locked down in a lot of cases, so if something makes its way into the system itself, many security mitigations along the way have already failed.
Desktops are different because the user is a lot more likely to install/run/browse to stuff that is dangerous.
Right now, the only saving grace for Linux is that malware targets Windows and Android primarily, the most commonly used operating systems. What's the point of targeting less than 4 percent of the world when you can target 90 percent of the world?
This will change if "The year of Linux desktop" actually happens and people start mass using Linux desktops. You can bet on more Linux malware happening.
One consideration is that on a Linux server, the data of interest to attackers is more likely to be accessible by some low-privileged daemon like a SQL server. Compromising the kernel in such a fundamental way doesn't provide anything of value on its own, so defenses perhaps are not as mature along this plane. It's enough to get to the database. You might go for the kernel to move laterally, but the kernel itself isn't the gold and jewels.
Server environments are much more tightly controlled, as you mentioned. I feel like there are more degrees of trust (or distrust) on a user's system than on a server (configured top to bottom by an expert), both for that reason and because of the differences in use case, and desktop Linux doesn't really express this idea as well as maybe it should. It places a lot of trust in the user, to say the least, and that's not ideal for security.
I think secure boot is a great idea. There must be a way to have layered security without abusing it to lock out users from their owned machines.
You have absolutely zero clue what in the world you are talking about 😂😂😂😂
You're commenting as if there is a difference between a "desktop" and "server" install, when in practice there is none. It's not Windows with different tiered builds by price. 😭
Incorrect. The difference is not that there's a server edition or a desktop edition (for many Linux distros there very much are server and desktop editions, even if the only difference is which packages are installed by default), but that when you properly set up a server with internet-exposed services, you usually are smart enough, have gone to school for this, have learned from experience, or all of the above, and know how to secure a Linux system for server use; you end up with a configuration that would be inconvenient at best for a desktop, but is more secure for the purposes of a server. In addition, when running a server you stick to what you need; you don't arbitrarily download stuff onto it, as that could break your live service(s) if something goes wrong.
The average desktop user does not have the experience or knowledge to lock down their system like Fort Knox, nor do they have the willpower to resist clicking on, downloading and running what they shouldn't, so if most everyone stopped using Windows and jumped to Linux, you would see a lot more serious issues than the occasional half-assed attempt at Linux malware.
A bit late, but: the Linux kernel can prevent unsigned code from being inserted into it. The signing key is generated by whoever builds the kernel image and is automatically used to sign all modules that are built with the kernel.
This is only enforced if lockdown is enabled, which happens automatically if you use secure boot. The distros I use support secure boot, so I have it enabled, for example.
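If you want to check what mode you actually ended up in, the lockdown LSM exposes its state through securityfs; here's a minimal sketch that just reads it (assuming securityfs is mounted at the usual /sys/kernel/security and the lockdown LSM is built in):

    /* lockdown_state.c - sketch: print the kernel lockdown state. The file
     * lists the available modes with the active one in brackets, e.g.
     * "none [integrity] confidentiality".
     */
    #include <stdio.h>

    int main(void)
    {
            char buf[128];
            FILE *f = fopen("/sys/kernel/security/lockdown", "r");
            if (!f) {
                    perror("open lockdown");
                    return 1;
            }
            if (fgets(buf, sizeof(buf), f))
                    printf("lockdown: %s", buf);
            fclose(f);
            return 0;
    }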
What would that accomplish? If someone manages to compromise the kernel you are in trouble. Linux uses a monolithic design, so there is no isolation between kernel modules. The kernel would need a total redesign and rewrite to make it run as a microkernel. Microkernels are problematic in general, since they add a lot of complexity.
It does appear to take an interesting approach, using VMs to separate out the system components and applications, but I don't think it introspects into those VMs to verify, from outside the VM looking in, that the parts are behaving correctly.
It's a really cool OS I haven't heard about before though!