Proxmox VE 3.3 2-node cluster with GlusterFS

This article covers the setup of a simple and cost-effective 2-node Proxmox VE cluster featuring locally installed GlusterFS as the shared filesystem for cluster VMs. Though this solution is not intended for mission-critical or enterprise needs, its simplicity and ability to run on bargain-priced hardware make it interesting for non-profit organisations, labs or clustering enthusiasts.



This HOWTO assumes you have two freshly installed Proxmox VE 3.3 nodes, pve-node-01 and pve-node-02, both connected to the same private network (10.10.0.0/24 in this HOWTO). Keep in mind that your actual setup may differ and you may need to adapt some of the commands in this HOWTO to suit your needs.
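Both nodes must be able to resolve each other's hostname. If you do not run local DNS, add entries to /etc/hosts on both nodes; the sketch below uses the hypothetical addresses 10.10.0.1 and 10.10.0.2, so substitute your own:

10.10.0.1    pve-node-01
10.10.0.2    pve-node-02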

Please keep in mind:

NB! All actions must be performed on both nodes, unless instructed otherwise!

First of all, install GlusterFS server:
apt-get install glusterfs-server

Then shrink the /dev/pve/data logical volume to free some space for GlusterFS:
umount /var/lib/vz
fsck.ext4 -f /dev/pve/data
resize2fs /dev/pve/data 300G
lvchange -an /dev/pve/data
lvreduce -L320G /dev/pve/data
lvchange -ay /dev/pve/data
resize2fs /dev/pve/data
mount /var/lib/vz

In my virtualization lab both servers have 1 TB HDDs. I’ve shrunk the LV with local storage to 320 GB, leaving the remaining space for GlusterFS. Please make your own decision regarding LV sizes based on your needs and HDD capacity.
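If you are unsure how much space you have to work with, check the current layout first; vgs shows free space in the pve volume group and lvs the current logical volume sizes:

vgs pve
lvs pve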

Now create logical volume to hold GlusterFS bricks:
lvcreate -n glusterfs -l100%FREE pve
mkfs.ext4 /dev/pve/glusterfs
Add the line /dev/pve/glusterfs /glusterfs ext4 defaults 0 2 to /etc/fstab.
mkdir /glusterfs
mount /glusterfs
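Verify that the new filesystem is mounted where the GlusterFS bricks will live:

mount | grep glusterfs
df -h /glusterfs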

Create GlusterFS volume “default”:
mkdir /glusterfs/default
Only on pve-node-01: gluster peer probe pve-node-02
Only on pve-node-02: gluster peer probe pve-node-01
Only on pve-node-01 (next three lines):
gluster volume create default replica 2 transport tcp pve-node-01:/glusterfs/default pve-node-02:/glusterfs/default
gluster volume start default
gluster volume set default auth.allow 10.10.0.*
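Before moving on, check that both peers are connected and that the volume is started with both bricks in the replica:

gluster peer status
gluster volume info default
gluster volume status default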

Create Proxmox VE cluster:
Only on pve-node-01: pvecm create pve-cluster
Only on pve-node-02: pvecm add pve-node-01 (use the hostname or IP of the first node)
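Verify that the cluster sees both members before going further:

pvecm status
pvecm nodes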

Edit cluster.conf the tricky Proxmox way. First copy it to a new file:
cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new
Add the parameters two_node="1" expected_votes="1" to the cman element of cluster.conf.new (so it reads <cman two_node="1" expected_votes="1">, keeping any attributes already present) and increment config_version. This avoids quorum loss in situations when only one cluster member is active. Activate the cluster.conf changes with the WWW GUI (Datacenter → HA, Activate button).
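If you want to double-check the pending edit before clicking Activate:

diff -u /etc/pve/cluster.conf /etc/pve/cluster.conf.new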

Now use the Proxmox VE WWW GUI to add GlusterFS volume “default” as storage for the cluster (Datacenter → Storage → Add → GlusterFS).
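For reference, the resulting entry in /etc/pve/storage.cfg should look roughly like the sketch below; the exact options come from what you select in the GUI dialog, so treat this as an assumption and compare against your own file:

glusterfs: default
        volume default
        server pve-node-01
        content images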
OK! At this point you already have a Proxmox VE 3.3 cluster with a shared FS and working online migration, but you still have no HA. So let’s go further and finish it all! 😉

High Availability

You need fencing up and running to use Proxmox VE High Availability features. Though it is generally reasonable to use professional fencing devices like BMCs, PDUs or smart switches, we will stick to the bargain price level, where none of such professional devices are available.

To work around this situation we create a “dummy” fence agent, which will deceive our cluster by reporting success on any action. NB! Never do this in production!

Create file /usr/sbin/fence_dummy:

echo "success: dummy $2"
exit 0

And make it executable: chmod +x /usr/sbin/fence_dummy
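A quick sanity check (the arguments are arbitrary, the agent ignores them and always succeeds):

/usr/sbin/fence_dummy node pve-node-02
echo $?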

Now edit the copied /etc/pve/cluster.conf.new again (do not forget to increment config_version every time you change it).
This is how the fencedevices section of cluster.conf.new should look:

    <fencedevices>
        <fencedevice agent="fence_dummy" name="dummy"/>
    </fencedevices>

Fence settings must also be added to the clusternode sections of cluster.conf.new:

    <clusternode name="pve-node-01" nodeid="1" votes="1">
        <method name="1">
          <device name="dummy"/>
    <clusternode name="pve-node-02" nodeid="2" votes="1">
        <method name="1">
          <device name="dummy"/>

NB! Do not forget to activate cluster changes using WWW GUI.

Make sure you have line FENCE_JOIN="yes" in /etc/default/redhat-cluster-pve
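You can verify it quickly:

grep FENCE_JOIN /etc/default/redhat-cluster-pve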

Reboot both nodes, one by one. Everything should be OK now. Check cluster member availability with the pvecm and clustat utilities, and do not forget to add the KVM VMs you want to be highly available as HA-managed resources in the WWW GUI (Datacenter → HA).
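For example, after both nodes are back up, pvecm nodes and clustat show cluster membership and service status, and fence_tool ls (from the stock cluster tools) additionally shows that both nodes joined the fence domain:

pvecm nodes
clustat
fence_tool ls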