This looks simple enough, I'll have a crack at it this weekend. Thank you
the drivers are blacklisted on the host at boot
This is the problem I was alluding to, though I'm surprised you are still able to see the console despite the driver being blacklisted. I have heard of people using scripts to manually detach the GPU and attach it to a VM, but it sounds like you don't need that, which is interesting
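For anyone else reading: the boot-time blacklisting people mean is usually just a couple of modprobe snippets plus an initramfs rebuild, roughly like this (the PCI IDs and driver names are placeholders, use whatever lspci -nn reports for your card):

```
# find the GPU's vendor:device IDs -- the ones below are placeholders
lspci -nn | grep -iE 'vga|3d'

# /etc/modprobe.d/vfio.conf -- claim the card with vfio-pci before the real driver loads
options vfio-pci ids=1234:5678,1234:5679
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci

# /etc/modprobe.d/blacklist-gpu.conf -- keep the host driver off the card entirely
blacklist nouveau
blacklist nvidia

# rebuild the initramfs so this applies at boot
sudo update-initramfs -u    # Debian/Ubuntu; use dracut -f on Fedora/RHEL
```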
I was wrong, got confused about how secure boot and disk encryption worked 😅
I'll admit I've done this too 😅 Not ideal but a good idea nonetheless
I was confused about how secure boot and disk encryption worked, ignore me 😅
Actually that might work. I thought that secure boot and disk encryption would prevent mounting the disk to a different system, but now I can't think of any reason why it would. Good idea
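As far as I can tell now, as long as you know the passphrase you can just unlock it from whatever system the disk ends up plugged into, something like this (device names are placeholders for wherever the disk shows up):

```
# from a live/rescue environment, unlock and mount the encrypted root
sudo cryptsetup luksOpen /dev/sdb2 cryptroot    # /dev/sdb2 is a placeholder
sudo mount /dev/mapper/cryptroot /mnt           # if it's LVM-on-LUKS, vgchange -ay and mount the LV instead

# optionally chroot in to fix sshd, configs, packages, etc.
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo chroot /mnt
```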
That sounds brilliant. Have any resources to learn how to do something like this? I've never created custom boot entries before
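From a quick look it seems like it's mostly just adding an entry to /etc/grub.d/40_custom and regenerating the config? Something roughly like this, though I'm guessing at the kernel paths and UUID:

```
# appended to /etc/grub.d/40_custom (below the existing header lines)
menuentry "Rescue (no GPU passthrough)" {
    # copy the search/linux/initrd lines from your normal entry in
    # /boot/grub/grub.cfg, then drop the vfio-pci.ids=... / blacklist options
    search --no-floppy --fs-uuid --set=root PLACEHOLDER-UUID
    linux  /vmlinuz-PLACEHOLDER root=/dev/mapper/cryptroot ro
    initrd /initrd.img-PLACEHOLDER
}

# regenerate grub.cfg so the entry shows up
sudo update-grub    # or grub-mkconfig -o /boot/grub/grub.cfg
```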
It only hijacks the GPU when I start the VM
How did you do this? All the tutorials I read hijack the GPU at startup. Do you have to manually detach the GPU from the host before assigning it to the VM?
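Is it something like a libvirt hook that only detaches it when that particular VM starts? Total guess on my part, but I imagine something along these lines (VM name and PCI address made up):

```
#!/bin/bash
# /etc/libvirt/hooks/qemu -- libvirt runs this on guest lifecycle events
VM="gaming-vm"             # placeholder guest name
GPU="pci_0000_01_00_0"     # placeholder GPU address, as libvirt names it

if [ "$1" = "$VM" ] && [ "$2" = "prepare" ]; then
    virsh nodedev-detach "$GPU"      # take the GPU away from the host
elif [ "$1" = "$VM" ] && [ "$2" = "release" ]; then
    virsh nodedev-reattach "$GPU"    # hand it back when the VM shuts down
fi
```

Or is it just libvirt doing it on its own because the hostdev is set to managed='yes'? Genuinely curious.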
Serial is still a thing.
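Takes maybe ten minutes to set up while everything still works, roughly like this (ttyS0 at 115200 is the usual default, adjust for your hardware):

```
# /etc/default/grub -- mirror console output to the serial port
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"

sudo update-grub
sudo systemctl enable --now serial-getty@ttyS0.service   # login prompt on the serial port

# from another machine with a null-modem / USB-serial cable attached:
picocom -b 115200 /dev/ttyUSB0
```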
Good to know 👍
Get a cheap video card.
I'd be tempted to just pass it through as well 😅
Live CD.
Doesn't work if you have an encrypted disk (nevermind, I was wrong about this)
Or a USB to VGA adapter.
A server-class system with a BMC.
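Then you get a console and power control completely out of band, e.g. with ipmitool (host/user/password are obviously placeholders):

```
# serial-over-LAN console via the BMC, from any other machine
ipmitool -I lanplus -H bmc.example.lan -U admin -P 'changeme' sol activate

# or just power-cycle the box remotely
ipmitool -I lanplus -H bmc.example.lan -U admin -P 'changeme' chassis power cycle
```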
Interesting ideas, I'll look into them thanks
As mentioned in another reply, this doesn't work if you have an encrypted disk. The price for security I suppose
Edit: nevermind I thought that secure boot and disk encryption would prevent you from mounting the disk to another system, but that appears to be wrong
A rescue ISO doesn't work if you have an encrypted disk. I thought everybody encrypted their disks nowadays.
If you don’t have a live boot option you can also pull the disk and fix it on another machine, or put a different boot disk in the system entirely.
This is an interesting idea though, as long as the other machine has a different GPU then the system shouldn't hijack it on startup.
You can probably also disable hardware virtualization extensions in the BIOS to break the VM so it doesn't steal the graphics card.
AFAIK GPU passthrough is usually configured to detach the GPU from the host automatically on startup. So even if all VMs were broken, the GPU would still be detached. However, as another commenter pointed out, it's possible to detach it manually, which might be safer against accidental lockouts.
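By manual I mean leaving the host driver alone at boot and only doing the detach right before starting the VM, something like this (PCI address and VM name are placeholders):

```
# detach only when you're about to hand the GPU to the VM
virsh nodedev-detach pci_0000_00_02_0    # placeholder address, e.g. an iGPU at 00:02.0
virsh start gaming-vm                    # placeholder VM name

# and give it back afterwards
virsh shutdown gaming-vm
virsh nodedev-reattach pci_0000_00_02_0
```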
If you want to lock down the web server and ssh behind a VPN, that's where you can fuck up and lock yourself out though.
How to troubleshoot broken ssh if you do GPU passthrough?
This hasn't happened to me yet but I was just thinking about it. Let's say you have a server with an iGPU, and you use GPU passthrough to let VMs use the iGPU. And then one day the host's ssh server breaks, maybe you did something stupid or there was a bad update. Are you fucked? How could you possibly recover, with no display and no SSH? The only thing I can think of is setting up serial access for emergencies like this, but I rarely hear about serial access nowadays so I wonder if there's some other solution here.
Fair enough. I'm considering a similar setup myself, so thanks
What do you use it for? You don't get the privacy benefits when running it locally though right? Since your IP gives away your identity