Introduction

Building a Pi cluster has been on my bucket list for a while. With the Raspberry Pi 5's increased performance, it finally felt worthwhile to build a true mini data center. I went all-in with four Pi 5s running 64-bit Raspberry Pi OS, installed Pimox (Proxmox VE on ARM), and configured LXC containers for lightweight virtualization. To top it off, I used Ceph for distributed shared storage and implemented high availability (HA) across the nodes.

Here’s the full breakdown of how I built and configured my highly available Pi cluster.

Hardware Setup

Parts List:

  • 4x Raspberry Pi 5 (8GB)
  • 4x 128GB USB 3.0 Flash Drives (storage)
  • 4x 64GB Micro SD Cards
  • 4x Waveshare POE Hats
  • 4x Cat 6 Keystone Couplers
  • 1x Raspberry Pi Rack Mount (Holds 4 Pis)

Networking:

All Pi 5s are connected to a Gigabit switch, with static IPs set outside the DHCP scope.
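For reference, one way to pin a static IP on Raspberry Pi OS Bookworm (which uses NetworkManager by default) is via nmcli. The connection name, interface, and addresses below are placeholders for my layout; adjust them to your own network:

```shell
# Assumptions: wired connection named "Wired connection 1", LAN 192.168.1.0/24,
# router at 192.168.1.1, and this node pinned to 192.168.1.101.
sudo nmcli con mod "Wired connection 1" \
  ipv4.method manual \
  ipv4.addresses 192.168.1.101/24 \
  ipv4.gateway 192.168.1.1 \
  ipv4.dns "192.168.1.1"

# Re-apply the connection so the new address takes effect
sudo nmcli con up "Wired connection 1"
```

Repeat on each node with its own address, keeping all four outside the router's DHCP pool.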

Software Installation

1. Base OS - Raspberry Pi OS (Debian Bookworm, 64-bit)

Installed the Raspberry Pi OS (Lite) 64-bit on each Pi using Raspberry Pi Imager. I chose the Lite version for performance and flexibility.

Post-boot, I did basic configuration:

sudo apt update && sudo apt upgrade -y
sudo raspi-config  # Set hostname, enable SSH, etc.

2. Installing Pimox (Proxmox on ARM)

On each Pi, I installed Pimox 8 (Proxmox VE 8 for ARM). Here’s the short version:

# Debian base repos (Bookworm split firmware out into non-free-firmware)
echo "deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware" | sudo tee /etc/apt/sources.list
sudo apt update
sudo apt install -y curl gnupg2 lsb-release

# Proxmox release signing key (ships as a binary keyring, no dearmor needed)
sudo curl -fsSL https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -o /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

# Note: the stock Proxmox repo publishes amd64 packages only. On the Pi this
# entry has to point at the Pimox / Proxmox-Port ARM repository instead --
# check the Pimox project docs for the current URL.
echo "deb http://deb.proxmox.com/debian/pve bookworm pve-no-subscription" | sudo tee /etc/apt/sources.list.d/pve.list
sudo apt update
sudo apt install -y proxmox-ve

⚠️ Note: Some packages, such as QEMU/KVM, are skipped or replaced because they aren't fully supported on ARM, so I used LXC containers exclusively.

After reboot, the web UI was available on https://<pi-ip>:8006.

Cluster Configuration

1. Join Nodes into a Cluster

I designated one Pi as the primary Proxmox node and added the rest using the Web UI:

  • Datacenter → Cluster → Create Cluster on Node 1
  • On Nodes 2–4: Join Cluster via Web UI or CLI

Example CLI (on joining node):

pvecm add <primary-node-ip>

All nodes now show under a single datacenter.
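The Web UI steps above wrap a couple of CLI commands; creating the cluster and verifying quorum look roughly like this (the cluster name picluster is just my own choice):

```shell
# On Node 1: create the cluster
pvecm create picluster

# Afterwards, from any node: confirm all members joined and quorum is healthy
pvecm status
```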

2. LXC Containers for Services

I deployed most of my services in LXC containers, which are:

  • Lightweight
  • Fast to spin up
  • ARM-compatible

I created templates on the primary node and cloned them to others.

Example LXC containers:

  • Pi-hole x2 (DNS)
  • Uptime Kuma
  • Tailscale
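As a sketch, spinning up one of these containers from the CLI looks like the following. The VMID, template filename, and storage names are placeholders; the exact arm64 template name will depend on what pveam lists on your node:

```shell
# Refresh the template catalog and fetch a Debian template
# (template filename may differ on ARM builds -- check `pveam available`)
pveam update
pveam download local debian-12-standard_12.2-1_arm64.tar.zst

# Create an unprivileged container for the first Pi-hole instance
pct create 101 local:vztmpl/debian-12-standard_12.2-1_arm64.tar.zst \
  --hostname pihole01 \
  --cores 1 --memory 512 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --rootfs local-lvm:4 \
  --unprivileged 1

pct start 101
```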

3. Ceph Shared Storage

Why Ceph?

Ceph provides distributed, redundant, self-healing storage, which makes it ideal for HA and shared container volumes.

Setup Overview:

  1. Install Ceph via Proxmox Web UI (or CLI) on all nodes
  2. Add each Flash Drive as a Ceph OSD
  3. Create a Ceph pool (e.g. rbd)
  4. Add it as a storage option in Proxmox
  5. Use it for LXC Container storage

Example CLI snippet (comments note where each command runs):

pveceph install                          # every node
pveceph init --network 192.168.1.0/24    # once, on the first node only
pveceph mon create                       # on each node that should run a monitor
pveceph osd create /dev/sda              # every node, once per drive

After adding all OSDs, I created the storage pool. Size 2 with min_size 1 keeps two replicas and stays writable with only one, which suits a small homelab but offers less safety margin than Ceph's default of three:

pveceph pool create rbd --size 2 --min_size 1
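Step 4 (exposing the pool as Proxmox storage) can also be done from the CLI; the storage ID ceph-ct below is an arbitrary name of mine:

```shell
# Register the Ceph pool as Proxmox storage for container root disks and images
pvesm add rbd ceph-ct --pool rbd --content rootdir,images

# Confirm the new storage is active
pvesm status
```

Alternatively, `pveceph pool create` accepts an `--add_storages` flag to create the pool and register the storage in one step.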

4. High Availability (HA) Configuration

With Ceph set up and shared storage in place, I configured HA:

  • Enabled the HA Manager in Proxmox
  • Added critical LXC containers to the HA resource list
  • Defined groups (e.g., proxgroup, pihole01, and pihole02)

Now, if one node goes offline, the HA manager automatically restarts its containers on the surviving nodes, using the shared Ceph volume for their root disks.
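Roughly the same setup via the ha-manager CLI; the group name and node/container IDs here reflect my layout and would differ per cluster:

```shell
# Create an HA group spanning all four nodes
# (pi1..pi4 are my node names -- substitute your own)
ha-manager groupadd proxgroup --nodes pi1,pi2,pi3,pi4

# Put a container under HA management, pinned to that group
ha-manager add ct:101 --group proxgroup --state started

# Check which node each HA resource is running on
ha-manager status
```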

Monitoring and Maintenance

  • The Proxmox Web UI provides dashboards for cluster health, Ceph status, and per-node CPU, memory, and storage usage.

Closing Thoughts

This Raspberry Pi 5 cluster has exceeded my expectations. Thanks to Pimox, LXC, and Ceph, I now have a fully functioning, fault-tolerant, and low-power cluster perfect for testing services and hosting homelab apps.

If you’re looking to get into clustering, virtualization, or just love a good DIY tech project — this setup is a solid start.