
Building a High-Availability Homelab with Proxmox VE & Ceph on a Budget (2025)

Achieving high availability (HA) in your homelab—ensuring services stay online even if one server fails—used to require expensive enterprise gear. However, with open-source tools like Proxmox VE and Ceph distributed storage, you can build a resilient HA homelab on the cheap using budget-friendly or used hardware.

This guide walks through the concepts and steps for setting up a Proxmox VE cluster with integrated Ceph storage.

Why Proxmox VE + Ceph for HA?

  • Proxmox VE: A powerful open-source virtualization platform (based on Debian Linux) that supports KVM virtualization and LXC containers. It has built-in clustering and HA features.
  • Ceph: A highly scalable, software-defined storage solution that provides object, block, and file storage. When integrated with Proxmox, it allows for distributed, redundant storage across your cluster nodes, enabling live migration and HA for VMs and containers.
  • Cost-Effective: Leverages open-source software and commodity or used hardware.
  • Scalability: Easily add more nodes to increase compute and storage capacity.
  • Resilience: Tolerates node failures without significant service interruption (depending on configuration).

Hardware Considerations for a Budget HA Cluster

Building a low-power, budget-friendly homelab cluster in 2025 often means turning to used enterprise gear or budget mini PCs.

  • Nodes (Minimum 3 Recommended): You need at least 3 nodes for reliable Ceph quorum and Proxmox HA. Popular budget options include:
    • Used Mini PCs (Dell OptiPlex Micro, HP EliteDesk Mini, Lenovo ThinkCentre Tiny): Look for models with decent CPUs (Intel Core i5 6th gen+), upgradeable RAM (16GB+ per node recommended), and ideally 2+ network ports (or ability to add one).
    • Used Enterprise Servers (Dell PowerEdge R620/R720, HP ProLiant DL360/DL380 G8+): More powerful but consume more power and generate more noise/heat. Check compatibility.
  • RAM: Crucial for both Proxmox and Ceph. Aim for 16GB+ per node if possible, 32GB+ is better for larger setups.
  • Storage:
    • OS Drive: Small SSD (120GB+) per node for Proxmox OS.
    • Ceph OSDs (Object Storage Daemons): Drives dedicated to Ceph storage. You need at least one per node (more is better). SSDs offer the best performance, but HDDs can be used for bulk storage. Consider used enterprise SSDs for better endurance.
    • Ceph DB/WAL (Optional but Recommended): A faster SSD (NVMe if possible) per node for the BlueStore DB/WAL (the modern replacement for the journal) can significantly boost Ceph write performance, especially if using HDDs for OSDs.
  • Networking:
    • Cluster Network: Dedicated, high-speed network (1Gbps minimum, 10Gbps+ recommended) for Proxmox cluster communication and Ceph traffic. Use a separate physical network or VLANs (a VLAN example follows this list).
    • Public Network: For VM/container traffic and management access.
    • Managed Switch: Recommended for VLANs and potentially higher speeds (10GbE).
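
If a node has only a single NIC, the cluster/Ceph traffic can still be separated with a VLAN instead of a second port. Below is a minimal sketch of a VLAN-aware bridge in /etc/network/interfaces; the interface name (eno1), addresses, and VLAN ID 20 are assumptions for illustration, and the switch port must carry that VLAN tagged:

  # /etc/network/interfaces (excerpt) -- single-NIC node, VLAN 20 carries cluster/Ceph traffic
  auto vmbr0
  iface vmbr0 inet static
      address 192.168.1.11/24
      gateway 192.168.1.1
      bridge-ports eno1
      bridge-stp off
      bridge-fd 0
      bridge-vlan-aware yes
      bridge-vids 2-4094

  # VLAN sub-interface of the bridge, used only for corosync and Ceph traffic
  auto vmbr0.20
  iface vmbr0.20 inet static
      address 10.10.10.11/24

With two physical NICs you can skip the VLAN entirely and give the second port its own bridge, as shown in the setup section below.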
Budget hardware shopping ideas:

  • Used Dell OptiPlex Micro PCs (eBay): Compact, energy-efficient business PCs that make excellent Proxmox nodes.
  • Used enterprise SSDs (eBay): Cost-effective storage options for Ceph with higher durability than consumer drives.
  • 10GbE SFP+ network cards (Amazon): Network interface cards for high-speed interconnect between cluster nodes.
  • Enterprise NVMe SSDs (Amazon): High-performance drives for Ceph DB/WAL storage to improve write performance.

Setup Overview

  1. Install Proxmox VE: Install Proxmox on the OS drive of each node.
  2. Configure Networking: Set up static IPs for management and dedicate a network interface (or VLAN) for the cluster/Ceph network on each node.
  3. Create Proxmox Cluster: Designate one node as the first cluster member, then join the other nodes.
  4. Install Ceph: Use the Proxmox GUI to install Ceph packages on all nodes.
  5. Configure Ceph Monitors (MONs): Create Ceph Monitors on at least 3 nodes for quorum.
  6. Configure Ceph Managers (MGRs): Create Ceph Managers (usually co-located with MONs).
  7. Create Ceph OSDs: Wipe the designated storage drives and create OSDs on each node using those drives. Configure Journal/DB devices if using them.
  8. Create Ceph Pools: Define storage pools (e.g., one for VM disks, one for backups) with desired replication levels (size=3 recommended for HA) and placement groups.
  9. Add Ceph Storage to Proxmox: Add the Ceph pools as RBD (RADOS Block Device) storage resources within the Proxmox Datacenter view.
  10. Configure Proxmox HA: Define HA groups, specify desired VM/CT start order and priorities, and enable HA for critical workloads.

Key Steps & Configuration (Conceptual)

1. Proxmox Installation & Networking

  • Download Proxmox VE ISO and create bootable USB.
  • Install on each node.
  • During setup or via /etc/network/interfaces, configure static IPs (a full example follows this list):
    • vmbr0: Management bridge (uses one physical NIC).
    • vmbr1 (or similar): Dedicated cluster/Ceph network bridge (uses second physical NIC or VLAN).
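
As a concrete reference, here is a minimal sketch of /etc/network/interfaces for a node with two NICs. The interface names (eno1, eno2) and subnets (192.168.1.0/24 for management, 10.10.10.0/24 for cluster/Ceph) are assumptions; adapt them to your hardware:

  # /etc/network/interfaces -- example for the first node
  auto lo
  iface lo inet loopback

  iface eno1 inet manual
  iface eno2 inet manual

  # Management/public bridge (GUI access and VM traffic)
  auto vmbr0
  iface vmbr0 inet static
      address 192.168.1.11/24
      gateway 192.168.1.1
      bridge-ports eno1
      bridge-stp off
      bridge-fd 0

  # Dedicated cluster/Ceph bridge on the second NIC (no gateway needed)
  auto vmbr1
  iface vmbr1 inet static
      address 10.10.10.11/24
      bridge-ports eno2
      bridge-stp off
      bridge-fd 0

Apply changes with ifreload -a (or a reboot), and keep the cluster-network addresses permanent: corosync and Ceph do not take kindly to IP changes.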

2. Creating the Cluster

  • On Node 1: Datacenter -> Cluster -> Create Cluster. Give it a name.
  • On Node 2 & 3: Datacenter -> Cluster -> Join Cluster. Enter Node 1’s IP and root password. Select the dedicated cluster network link.
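
If you prefer the shell, the same steps map to pvecm. A sketch using the example addresses from the networking section (Node 1 at 192.168.1.11 for management, 10.10.10.x for the cluster link); exact link options may vary slightly between Proxmox versions:

  # On node 1: create the cluster and pin corosync to the dedicated network
  pvecm create homelab --link0 10.10.10.11

  # On node 2 (and similarly node 3): join via node 1's management IP,
  # telling corosync which local address to use for the cluster link
  pvecm add 192.168.1.11 --link0 10.10.10.12

  # Verify quorum from any node
  pvecm status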

3. Installing & Configuring Ceph

  • In Proxmox GUI (on any node): Select a Node -> Ceph -> Install Ceph. Choose the latest stable version (e.g., Quincy, Reef). Repeat on all nodes.
  • Monitors/Managers: Select Node -> Ceph -> Monitor/Manager. Create MONs and MGRs on at least 3 nodes, ensuring they use the dedicated Ceph network IP.
  • OSDs: Select Node -> Ceph -> OSD. Click Create: OSD. Select the drive(s) intended for Ceph. If using a separate faster drive for DB/WAL (Journal), specify it here. Repeat for all designated OSD drives on all nodes.
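
The GUI steps in this section map onto the pveceph CLI. A rough sketch, assuming a hypothetical data disk /dev/sdb and NVMe DB/WAL device /dev/nvme0n1 on each node (flag names can differ slightly between releases, so check pveceph help on your version):

  # On every node: install the Ceph packages (same as the GUI wizard)
  pveceph install

  # On at least three nodes: create a monitor and a manager
  pveceph mon create
  pveceph mgr create

  # Create an OSD from a wiped disk, optionally placing its DB/WAL on a faster device
  pveceph osd create /dev/sdb --db_dev /dev/nvme0n1

  # Check overall cluster health when done
  ceph -s

If a disk still carries old partitions or filesystem signatures, wipe it first; OSD creation will typically refuse a device that is not clean.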

4. Creating Ceph Pools & Adding Storage

  • Select Node -> Ceph -> Pools. Click Create.
    • Name: e.g., vm-storage
    • Size: 3 (means 3 copies of data will be stored across different OSDs/nodes)
    • Min Size: 2 (allows writes even if one copy is temporarily unavailable)
    • Application: RBD
    • Adjust Placement Groups (PGs) based on OSD count (use pgcalc tool if needed).
  • Datacenter -> Storage -> Add -> RBD.
    • ID: e.g., ceph-rbd
    • Pool: Select the pool created (vm-storage).
    • Monitor Hosts: Enter IPs of your Ceph monitors.
    • Username: admin (default)
    • Enable KRBD for better performance.
    • Select Content types (Disk image, Container).
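
On the CLI, the pool and the storage entry can be sketched like this, reusing the names from above (vm-storage, ceph-rbd). For Ceph managed by the same Proxmox cluster, the monitor hosts can usually be omitted because the local ceph.conf is used:

  # Replicated pool: 3 copies, writes still accepted with 2 copies available
  pveceph pool create vm-storage --size 3 --min_size 2

  # Register the pool as RBD storage for VM disks and container volumes
  pvesm add rbd ceph-rbd --pool vm-storage --content images,rootdir --krbd 1

(pveceph pool create also accepts an --add_storages flag that performs both steps in one go.)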

5. Configuring Proxmox HA

  • Datacenter -> HA -> Groups. Create a group (e.g., critical-vms). Add nodes to the group. Set priorities.
  • Select a VM/CT -> More -> Manage HA.
    • Select State: started.
    • Choose the HA Group.
  • Proxmox will now attempt to restart HA-managed VMs/CTs on other available nodes if their current node fails.
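
The equivalent ha-manager commands, assuming a hypothetical group critical-vms and VM ID 100 (node names and priorities are placeholders):

  # Create an HA group; a higher number means a more preferred node
  ha-manager groupadd critical-vms --nodes "node1:2,node2:1,node3:1"

  # Put a VM under HA management and request that it be kept running
  ha-manager add vm:100 --state started --group critical-vms

  # Inspect the HA stack's view of resources and nodes
  ha-manager status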

Important Considerations

  • Network Speed: The cluster/Ceph network is critical. 10GbE significantly improves performance, especially during OSD recovery or VM migration.
  • Ceph size and min_size: Understand these parameters. size=3, min_size=2 is standard for 3+ nodes, providing redundancy against one node failure.
  • Placement Groups (PGs): Proper PG calculation is important for data distribution and performance. Use an online PG calculator, or adjust manually with ceph osd pool set <pool_name> pg_num <value> (and the matching pgp_num); recent Ceph releases can also manage this automatically via the PG autoscaler.
  • Testing: Regularly test HA failover by simulating node failures (gracefully shutting down a node) to ensure it works as expected; the commands after this list help verify the result.
  • Backups: HA is not a backup! Implement a separate backup strategy (e.g., Proxmox Backup Server) for disaster recovery.
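
A few commands that help with those checks, runnable from any node:

  # Cluster quorum and membership
  pvecm status

  # Ceph health, OSD layout, and per-OSD usage
  ceph -s
  ceph osd df tree

  # State of HA-managed resources (watch a VM restart elsewhere during a failover test)
  ha-manager status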

Conclusion

Building a Proxmox HA cluster with Ceph on a budget takes effort but provides incredible resilience and flexibility for your homelab. By carefully selecting used hardware and following the steps for configuring Proxmox high availability and Ceph, you can create a powerful DIY hyperconverged infrastructure without breaking the bank. Remember that networking performance and sufficient RAM are key to a smooth experience.