LXC-First: Why I Prefer Containers Over Full VMs in My Homelab
When I first started building out my homelab, everything ran as a full VM.
Jellyfin? VM. Radarr/Sonarr? VM. Utility tools? VM.
It worked, but it felt heavy. Every new service meant another virtual machine, another OS to patch, more RAM overhead, more storage eaten by base images and snapshots.
At some point, I started experimenting with LXC containers on Proxmox—and never really looked back. These days, my default is:
If it can run well as an LXC, it will run as an LXC.
Here’s why.
What I Mean by “LXC-First”
On Proxmox, LXC containers sit between “bare metal” and “full VM”:
- They share the host kernel instead of running their own.
- They can be resource-limited like VMs (CPU/RAM/disk quotas).
- They still feel like separate “machines” with their own IPs, packages, users, and services.
“LXC-first” for me means:
- All typical services (media stack, monitoring, small web apps, utilities) go into unprivileged LXC containers by default.
- Only specialized workloads (Windows, kernel modules, GPU-heavy edge cases, AD DCs, etc.) get a full VM.
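To make that concrete, here's roughly what the default path looks like on Proxmox. This is a minimal sketch: the container ID, the template filename, and the storage names are examples from my setup, not fixed values.

```bash
# Create a small unprivileged Debian container
# (ID 101, the template filename, and "local-lvm" are examples)
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname jellyfin \
  --unprivileged 1 \
  --cores 2 --memory 1024 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101
```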
Reason 1: Resource Efficiency
Less RAM overhead
Each VM needs:
- Its own kernel
- Its own init system
- Background services you don’t really care about
Each LXC only needs:
- What you install inside it
- A small bit of metadata on the host
In practice, this means I can:
- Run more services on the same hardware.
- Give each service “just enough” RAM without feeling wasteful.
- Reserve the “fat” resources (lots of RAM, vCPUs) for workloads that truly need them.
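And when "just enough" turns out to be too much, trimming a container back down is a one-liner. A sketch with an example ID and values; memory and core changes like this normally apply to a running container without a reboot:

```bash
# Shrink a utility container that never needs more than 512 MB
pct set 101 --memory 512 --swap 256 --cores 1
```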
Storage that isn’t stupid
VM disks tend to:
- Grow as monolithic images.
- Duplicate base OS installs across multiple VMs.
With LXCs:
- Root filesystems are usually small.
- Data is split cleanly onto host-mounted storage:
  - /mnt/storage/media
  - /mnt/storage/downloads
  - /mnt/storage/appdata/...
Backups are smaller, and I don’t have five copies of the same 10 GB OS image wasting space.
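Attaching that host storage to a container is similarly cheap. A sketch using my example paths and a hypothetical container ID:

```bash
# Bind-mount the host's media directory into container 101
pct set 101 --mp0 /mnt/storage/media,mp=/mnt/media
```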
Reason 2: Faster Deployments and Recovery
When everything is a VM, you get used to:
- Waiting for ISOs to boot.
- Clicking through installers.
- Running full-OS updates for each VM.
With LXC:
- Creating a new container from a template takes seconds, not minutes.
- Most of the time I can:
- Clone an existing “base” LXC.
- Change the hostname and IP.
- Install one or two packages.
- If something gets messed up, I can:
- Restore an LXC snapshot almost instantly.
- Or just destroy and recreate it from scratch and reattach the data mount.
Fail fast, rebuild fast, move on.
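In pct terms, that whole loop is only a handful of commands. The IDs, hostname, addresses, and snapshot name below are examples:

```bash
# Clone a prepared "base" container and give it a new identity
pct clone 100 110 --hostname radarr --full
pct set 110 --net0 name=eth0,bridge=vmbr0,ip=192.168.1.50/24,gw=192.168.1.1

# Snapshot before risky changes; roll back if things go sideways
pct snapshot 110 pre-update
pct rollback 110 pre-update
```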
Reason 3: Cleaner Separation of Data and OS
I want application configs and media data separate from the “OS” itself.
With LXCs, it’s natural to:
- Use bind mounts from the host (see the config sketch after this list):
  - /mnt/storage/media → /mnt/media
  - /mnt/storage/downloads → /mnt/downloads
  - /mnt/storage/appdata/jellyfin → /config
- Treat the container as “disposable glue”:
- App binaries and dependencies are inside the LXC.
- All important data is outside, on the host.
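On the host, those bind mounts are just mount-point lines in the container's config. A sketch for a hypothetical container 101, using my example paths:

```
# /etc/pve/lxc/101.conf (excerpt)
mp0: /mnt/storage/media,mp=/mnt/media
mp1: /mnt/storage/downloads,mp=/mnt/downloads
mp2: /mnt/storage/appdata/jellyfin,mp=/config
```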
If the container dies, gets corrupted, or I want to switch distros:
- Destroy the container.
- Recreate it from a newer template.
- Reattach the same mount points.
- Reinstall the app and point it at the existing config/data.
It feels closer to the container workflow you see with Docker, but still with full “VM-like” logins and a persistent filesystem.
Reason 4: Easier Maintenance
LXC makes day-to-day maintenance a bit more sane.
Unified kernel updates
- The host handles kernel updates.
- LXCs share that kernel, so I’m not patching 10 different kernels that all think they need a reboot.
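The userland inside each container still needs its own package updates, but those are easy to drive from the host. A rough sketch, assuming every container is Debian/Ubuntu-based:

```bash
# Update packages inside every container, from the host
# (pct exec only works on running containers; stopped ones will error)
for id in $(pct list | awk 'NR>1 {print $1}'); do
  pct exec "$id" -- apt-get update
  pct exec "$id" -- apt-get -y dist-upgrade
done
```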
Simpler backup strategy
For VMs, you’re basically backing up:
- Entire disk images, including OS + data mixed together.
For LXCs, I typically:
- Back up just the container config and OS filesystem.
- Keep big media datasets backed up/snapped separately at the storage layer.
Restoring becomes more flexible, and backups are less bloated.
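Helpfully, vzdump skips bind mounts on LXC containers by default, which is exactly what this model wants: the backup captures the config and root filesystem, and the big datasets stay on the storage layer. A sketch, with an example ID and storage name:

```bash
# Snapshot-mode backup of one container; bind mounts are not included
vzdump 101 --mode snapshot --storage backups --compress zstd
```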
Reason 5: Enough Isolation Without Overkill
Could I run everything in a full VM for maximum isolation? Yes.
Do I need that level of isolation for Jellyfin, Radarr, Sonarr, and small utility services? Not really.
LXC gives me:
- Separate namespaces (PIDs, networking, mount points).
- Per-container resource limits.
- Unprivileged containers for a good security baseline.
And in Proxmox:
- I still manage them like “machines” with their own IPs.
- They integrate cleanly with my network (VLANs, firewalls, etc.).
It’s a nice middle ground between “one big host with everything installed” and “one VM per app.”
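As a sketch of what that looks like on disk, here's a config excerpt for a hypothetical container: unprivileged mode on, a VLAN tag, and the Proxmox firewall enabled on its NIC:

```
# /etc/pve/lxc/105.conf (excerpt)
unprivileged: 1
cores: 2
memory: 1024
net0: name=eth0,bridge=vmbr0,tag=20,firewall=1,ip=dhcp
```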
When I Still Choose a Full VM
LXC is not a silver bullet. I still use VMs for:
1. Windows and GUI-heavy workloads
- Anything Windows is automatically a VM.
- Some legacy apps simply expect a full OS + GUI + drivers.
2. Domain controllers / AD / critical infrastructure
- For AD DCs and similar core infra, I prefer VMs:
- Well-tested, well-understood behavior under virtualization.
- Easy integration with backup/DR tooling that expects VMs.
3. Kernel-level features and odd hardware
LXCs share the host kernel, so if I need:
- Custom kernel modules,
- Very niche drivers,
- Or a clean boundary for weird hardware experiments,
I’ll spin up a VM.
4. Isolation for “sketchy” workloads
If I’m testing:
- Questionable scripts,
- Tools I don’t fully trust,
- Or something I might blow up,
I’d rather blow up a VM boundary than risk the host or its shared kernel.
Example Layout: LXC-First Homelab
A simplified version of how I think about workload placement:
LXCs (default):
- Jellyfin / Emby / Plex
- Radarr / Sonarr / Lidarr / Prowlarr / Bazarr
- qBittorrent + VPN
- Paperless-ngx
- Monitoring / metrics / dashboards
- Small web apps and utilities
VMs:
- Windows Server (RDS, AD, legacy apps)
- Any Windows desktop VM
- Experimental Linux with unusual kernels or drivers
- Firewall/Router VMs (if not on dedicated hardware)
This keeps the majority of services lightweight and quick to manage, while reserving VMs for the stuff that actually benefits from them.
Downsides of LXC You Should Know About
It’s not all perfect. Some common gotchas:
- Certain apps assume they are on bare metal or a full VM.
- Permissions can get weird with bind mounts and unprivileged containers.
- Some Docker-in-LXC setups need extra tuning (nesting, cgroup tweaks; see the sketch below).
- Debugging kernel-level issues is trickier because the host and all LXCs share the same kernel.
You need to be comfortable with:
- Understanding how the host and container interact.
- Dealing with UID/GID mapping.
- Reading Proxmox docs/logs when something doesn’t behave as expected.
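For the Docker-in-LXC case, the usual starting point is enabling nesting (and often keyctl) on the container; a sketch with an example ID:

```bash
# Allow running Docker inside an unprivileged LXC
pct set 120 --features nesting=1,keyctl=1
```

As for the UID/GID weirdness: by default, an unprivileged container maps container UID 0 to host UID 100000, so files created by root inside the container show up on bind-mounted host storage as owned by UID 100000. Once that offset clicks, most of the permission mysteries stop being mysteries.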
Final Thoughts
Going LXC-first in my homelab has:
- Reduced resource waste.
- Made spinning up new services trivial.
- Simplified backups and restores.
- Kept my environment more maintainable over time.
Full VMs still have a place, but they’re no longer the default—they’re the exception.
If you’re currently running everything as a VM, try this:
- Pick one low-risk service (like a small web app or utility).
- Move it into an LXC with bind-mounted storage.
- Live with it for a while.
- If it behaves, repeat for the next app.
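If it helps, here's that first migration end to end as a sketch. Every ID, name, and path is an example; swap in your own:

```bash
# 1. Create a small unprivileged container for the low-risk service
pct create 130 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname first-lxc --unprivileged 1 \
  --memory 512 --rootfs local-lvm:4 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp

# 2. Bind-mount its existing data from the host
pct set 130 --mp0 /mnt/storage/appdata/first-lxc,mp=/config

# 3. Start it and get a shell inside to install the app
pct start 130
pct enter 130
```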
You might find yourself quietly shifting to an LXC-first mindset too.