I’ve been using this setup for almost three years now. It originally started as a test environment for Proxmox on low-powered hardware, but over time it proved stable and capable enough to replace my other VMware test infrastructure and the systems running on it, and I slowly made the full switch to Proxmox for my virtualization needs.

This setup is a great example of how second-hand hardware from places like eBay, OfferUp, or Craigslist, with a few upgrades, can be more than enough to start and run a reliable homelab without investing in enterprise gear or building out white box servers, although I still personally prefer white box servers.
Hardware & Specs
This cluster runs on two Dell OptiPlex 3040 Mini PCs that started as spare workstations. They are compact, low-power, and surprisingly effective for virtualization. Despite their age and relatively low core count, they have handled a handful of side projects and some more important workloads pretty well up to this point: a Windows domain, Microsoft DHCP, a few other Windows-based services, and a dozen or so Linux VMs and appliances. Keep in mind that nothing running here is CPU-intensive for a sustained period of time, and nothing is truly critical.

The specs on these little PCs might shock you, but again, nothing is CPU-intensive.
Hardware Breakdown
| Component | Specification | Cost |
| --- | --- | --- |
| Nodes | 2x Dell OptiPlex 3040 Mini PCs (Leftover) | $240 (2x $120 Used) |
| CPU | Intel Core i3-6100T (2 Cores / 4 Threads @ 3.2GHz) | $0 |
| RAM | 32GB DDR3 (Each Node) | $0 |
| Primary Storage | 960GB SanDisk Ultra II SSD (Each Node) | $160 (2x $80 New) |
| Proxmox OS Storage | JOIOT 120GB Portable External SSD (Each Node) | $80 (2x $40 New) |
| Secondary Network | Cable Matters USB 3.0 to Gigabit Ethernet (Each Node) | $40 (2x $20 New) |
| Hypervisor | Proxmox VE 8.x (Free) | $0 |
This setup started as a test environment for Proxmox, but over time it became my primary virtualization lab. Upgrading the storage and adding network separation allowed it to handle multiple VMs and management traffic efficiently and reliably. I would not normally recommend a USB-to-Ethernet adapter on a server, but in this case it has been fine.
Storage Layout
Since these machines only support one internal drive, I had to use external storage for Proxmox OS while keeping the internal SSDs for VM storage.
| Storage | Type | Purpose |
| --- | --- | --- |
| JOIOT 120GB SSD | External USB SSD Drive | Proxmox OS Installation |
| SanDisk Ultra II 960GB SSD | Internal SSD | Primary VM Disk Storage |
| local | Directory | Proxmox OS Installation |
| local-lvm | LVM-Thin | Default VM Disk Storage (unused; no VMs stored here) |
| local-vm-data-01 / 02 | LVM-Thin | Primary VM Disk Storage |
| os_iso | SMB/CIFS | ISO Image Repository |
| PVECLUSTER01-Backup | Proxmox Backup Server | VM Backups |
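
For reference, storage entries like these can be created with the pvesm CLI (they end up in /etc/pve/storage.cfg). A minimal sketch of how this layout might be defined; the volume group name, server addresses, and share names below are placeholders, not my actual values:

```
# Thin pool for VM disks on the internal SanDisk SSD
# ("vg-vmdata" and "vmdata" are placeholder VG/pool names)
pvesm add lvmthin local-vm-data-01 --vgname vg-vmdata --thinpool vmdata --content images,rootdir

# ISO repository on an SMB/CIFS share (server/share/user are placeholders)
pvesm add cifs os_iso --server 192.168.1.50 --share os_iso --content iso --username labuser

# Proxmox Backup Server storage (server/datastore are placeholders;
# the password and fingerprint get set when you add it for real)
pvesm add pbs PVECLUSTER01-Backup --server 192.168.1.60 --datastore pvecluster01 --username backup@pbs
```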
Why an external SSD for Proxmox? Initially, I tried a pair of SanDisk USB flash drives with ZFS, but they showed early signs of reliability issues. So I switched them out for the JOIOT external USB 3.0 SSDs, partly for the form factor and partly because the manufacturer claims they use reliable NAND flash. They have held up over the last three years with these nodes running pretty much 24/7.
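
Consumer USB SSDs are still a gamble, so it is worth keeping an eye on them. Proxmox can run smartmontools for this; the device name below is a placeholder, so confirm it first:

```
# Find the external OS SSD's device name
lsblk -o NAME,MODEL,SIZE,TRAN

# Dump SMART health and wear attributes (/dev/sdb is a placeholder)
smartctl -a /dev/sdb

# Some USB-to-SATA bridges need the SAT passthrough flag:
# smartctl -d sat -a /dev/sdb
```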

Network Layout
| Interface | Type | Traffic |
| --- | --- | --- |
| enp2s0 | Network Device (Built-in Gigabit Ethernet) | Management |
| enx5c857e3e8efe | Network Device (USB 3.0 Gigabit Ethernet) | VM VLAN Trunk |
| vmbr0 | Linux Bridge | Management (enp2s0) |
| vmbr1 | Linux Bridge | VM VLAN Trunk (enx5c857e3e8efe) |

The built-in Gigabit Ethernet adapter on each node handles Proxmox management traffic, while the USB-to-Ethernet adapter carries VM network traffic to keep the two separated.
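
On Debian/Proxmox this split lives in /etc/network/interfaces. A rough sketch of what each node's config could look like, with placeholder addresses and a VLAN-aware bridge on the USB NIC:

```
auto lo
iface lo inet loopback

iface enp2s0 inet manual

iface enx5c857e3e8efe inet manual

# Management bridge on the built-in NIC (address/gateway are placeholders)
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.21/24
        gateway 192.168.1.1
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0

# VM trunk bridge on the USB NIC; no host IP, tagged VLANs only
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enx5c857e3e8efe
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```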

Final Thoughts
In the end, this 2-node Proxmox cluster is nice and simple in how the components work together. Using the internal SSDs for VM storage keeps data access fast for the virtual machines, while the external SSDs for the Proxmox OS were a workaround for the lack of drive bays in the machines themselves; they have worked well for the last couple of years.
Using that extra USB-to-Ethernet dongle isolates VM network traffic from the host management network, which also carries the latency-sensitive cluster (corosync) traffic. This helps keep the Proxmox cluster healthy and stable.
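
If you want to sanity-check that side of things, a couple of commands show quorum state and the links corosync is using; the output will vary by setup:

```
# Show cluster membership and quorum status
pvecm status

# Show corosync link status per node
corosync-cfgtool -s
```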
Recently I’ve been testing a Kubernetes cluster on top of this one, which adds another level of flexibility for application deployments and has been very interesting to work with. These nodes also back up their VMs to a Proxmox Backup Server.
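
For what it's worth, a one-off backup to that PBS storage from the CLI looks roughly like this; VMID 100 is a placeholder, and recurring jobs are better set up under Datacenter → Backup in the UI:

```
# Snapshot-mode backup of a single VM to the PBS storage defined earlier
vzdump 100 --storage PVECLUSTER01-Backup --mode snapshot
```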