Proxmox vs virt-manager

I mentioned in my first post this year that I would cover many (if not most) of the technology projects that I’m working on in 2023. I’m well into it, and I haven’t posted about any of them (yet). One of the reasons is that I’m so deeply absorbed (in the best sense) that I haven’t wanted to “slow down” to blog about them.

But, of course, if I don’t post about them while they’re fresh, I’m not going to get the important details right, and capturing those details is the point of blogging.

This post is out of order, since I had a number of successes (and frustrations, but no failures yet!) in my 2023 technology odyssey, and I’m choosing to start with this one, which came a number of projects in (but is related to many of them). Since this one is finished, I’m posting it first.

So, a number of the projects I’m working on (or have in mind) will run in VMs (each possibly in its own VM). I wanted to run them on a separate physical host in my home (technically known as a “home lab”). I bought a machine to dedicate to that (another post coming on that at some point).

I decided that I would install Proxmox as the base OS. People call it a bare-metal hypervisor, but as I’ve learned (I didn’t know this either), it actually runs Debian beneath the surface. I read a ton about Proxmox, Xen, XCP-NG, and KVM (virt-manager/virsh and Cockpit). On my laptop, I have run VirtualBox for over a dozen years, and even though I’m perfectly comfortable with it, I knew it wouldn’t be my “remote” virtualization server of choice.

I had no trouble installing Proxmox and getting it up and running. I was annoyed (mildly) by all of the nagging to sign up for a paid subscription, given that I’m running it in a home lab and it’s an open source project. An occasional gentle reminder that a support subscription is available would be fine, but not what they were doing. That’s not the reason I’m no longer running Proxmox (oh wait, did I need to say spoiler alert before writing that?).

Proxmox is reasonably attractive and reasonably easy to figure out (with tons of documentation and forum posts to help you if you can’t figure it out easily). That’s all good. Except when it isn’t…

Under the covers, Proxmox runs KVM/QEMU (whereas Xen and XCP-NG are built on the Xen hypervisor). I had a running VM that I wanted to migrate over to Proxmox (supposedly a very easy task). That VM was a Home Assistant OS install. It was in qcow2 format (the native disk format of QEMU). I was under the impression (apparently incorrectly!) that since QEMU is the virtualization engine behind every KVM-based system, qcow2 would be the default disk format everywhere and would just work. I figured I’d merely have to copy over the qcow2 image and “load” it into Proxmox.

Nope! I had to convert the qcow2 image into a raw image (easy enough) and then load it in. It loaded fine.
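For reference, here’s a minimal sketch of that convert-and-import dance, written as a thin Python wrapper around the standard tools (qemu-img for the conversion and Proxmox’s qm CLI for the import). The paths, VM ID, and storage name are placeholders, not the ones I actually used:

```python
#!/usr/bin/env python3
"""Sketch of converting a qcow2 image to raw and importing it into Proxmox.
Paths, the VM ID, and the storage name below are hypothetical."""
import subprocess

QCOW2_IMAGE = "/root/haos.qcow2"  # hypothetical path to the existing qcow2 image
RAW_IMAGE = "/root/haos.raw"      # hypothetical path for the converted raw image
VM_ID = "100"                     # hypothetical Proxmox VM ID (an empty VM created first)
STORAGE = "local-lvm"             # hypothetical Proxmox storage name

# Step 1: convert the qcow2 image to a raw image.
subprocess.run(
    ["qemu-img", "convert", "-f", "qcow2", "-O", "raw", QCOW2_IMAGE, RAW_IMAGE],
    check=True,
)

# Step 2: import the raw image as a new disk attached to the VM.
subprocess.run(["qm", "importdisk", VM_ID, RAW_IMAGE, STORAGE], check=True)
```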

But did it? Sure, it was correctly loaded into the new VM, but it wouldn’t boot. I spent the next two days (possibly three!) trying every possible combination of things that you could tweak in Proxmox, including SeaBIOS (legacy BIOS boot) and UEFI boot. I booted (successfully) off of CD-ROM ISO images with various bootloaders on them (SuperGRUB, etc.) and also SystemRescueCd (and some of its kin). Those all booted successfully, so the issue wasn’t the base Proxmox install.
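(For the record, flipping between those firmware options is just a couple of qm commands. Here’s a rough sketch, again as a thin Python wrapper around the qm CLI, with a made-up VM ID and storage name:)

```python
#!/usr/bin/env python3
"""Sketch of switching a Proxmox VM between legacy BIOS and UEFI boot.
The VM ID and storage name are placeholders."""
import subprocess

VM_ID = "100"          # hypothetical VM ID
STORAGE = "local-lvm"  # hypothetical storage for the EFI vars disk

# Switch the VM's firmware to UEFI (OVMF)...
subprocess.run(["qm", "set", VM_ID, "--bios", "ovmf"], check=True)
# ...and give it a small EFI vars disk, which OVMF needs to store its settings.
subprocess.run(["qm", "set", VM_ID, "--efidisk0", f"{STORAGE}:1"], check=True)

# Or flip back to the default legacy SeaBIOS boot:
# subprocess.run(["qm", "set", VM_ID, "--bios", "seabios"], check=True)
```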

Once those CDs were live, I was able to mount the Home Assistant (HA) drive. I tried doing various low-level things to it (updating its grub config among other things). Each time I reverted to booting off of the HA disk, it failed to boot (though after updating grub, I got a little further, as in I was dropped to a grub shell!).

All the while, I was Googling my brains out (almost literally). I followed every piece of advice that other people were getting when they had VMs that weren’t booting. Nothing even came close to working.

I had to take a breath and consider why I chose Proxmox to begin with. There were two reasons:

  1. It felt like I would have to reboot the underlying system less often than if I installed KVM/QEMU on top of a regular Linux distribution (I’ll explain below).
  2. While it was less complicated to administer than XCP-NG (from my research), it still had high-end capabilities like migrating a running VM.

When I spent some time thinking about it, I realized that migrating a running VM (and some of the other cool features) just weren’t things I needed, or was even likely to ever use. I’m not load-balancing a big cluster here, just running a few local VMs for specific purposes (the first being home automation via HA).

Now the explanation from #1 above. On my cloud servers I run Ubuntu (because that’s the default install for most of those providers, and I’m familiar enough with administering it to be 100% comfortable accepting that as a default).

However, on my home machines (laptop and many servers, both physical and VM) I run Arch Linux (my long-time preferred OS, for 11 years now). Arch is a rolling-release OS, meaning it is constantly updated; there is no big-bang upgrade on some regular cycle.

As such, there could be a kernel update every single day (for example). I tend to run my installs on a bleeding-edge basis, accepting all the updates and rebooting whenever a new kernel or systemd update is applied. That’s more reboots than a normal Linux sysadmin would accept, but hey, these are all in my home, and I’m not running a business on them…

Back to our saga, I thought Proxmox would at least mean that the machine running multiple VMs wouldn’t have to reboot as often, leaving the VMs to be individually restarted only when they had their own necessary update installed.

After those days of frustration, I decided to revert to what I know personally. I wiped Proxmox off the machine and installed Arch Linux (ah, a breath of very fresh air). I installed the full virtualization suite of KVM/QEMU on top of that.

Then I fired up virt-manager, imported the qcow2 image of HA, and voilà, it booted on the first try.
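I did the import through the virt-manager GUI, but the equivalent can be scripted with virt-install. Here’s a rough sketch of that route; the VM name, path, memory, and CPU counts are placeholders, and I’m assuming OVMF (UEFI firmware) is installed, since HAOS images are built to boot via UEFI:

```python
#!/usr/bin/env python3
"""Sketch of importing an existing qcow2 disk into libvirt via virt-install
(the CLI counterpart of what virt-manager does in its GUI). Names and paths
are hypothetical."""
import subprocess

QCOW2_IMAGE = "/var/lib/libvirt/images/haos.qcow2"  # hypothetical image location
VM_NAME = "homeassistant"                           # hypothetical domain name

subprocess.run(
    [
        "virt-install",
        "--name", VM_NAME,
        "--memory", "4096",
        "--vcpus", "2",
        "--import",  # boot from the existing disk instead of running an installer
        "--disk", f"path={QCOW2_IMAGE},format=qcow2,bus=virtio",
        "--os-variant", "generic",
        "--boot", "uefi",  # HAOS images are typically booted with UEFI firmware
        "--network", "network=default",
        "--noautoconsole",
    ],
    check=True,
)
```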

Since then, I’ve been able to do a number of extremely cool things (all of which would have worked fine on Proxmox as well!), but that’s a topic for another day.

I want to be super clear that I think Proxmox is a very cool project and I would have enjoyed using it. It might be my one very specific case that failed (though I also tried to install HA from scratch from an ISO and I failed at that as well!). I know from reading the forums that many people are running HA under Proxmox, so it could simply be me being stupid (certainly a distinct possibility), but I’m happy I’m back in the land of the familiar.

