Deploy High Availability Cluster on RHEL 8 Using Shared Storage

Ach.Chusnul Chikam
6 min read · Jun 25, 2021

A high-availability cluster, also called a failover or active-passive cluster, is one of the most widely used cluster types in production environments. This type of cluster keeps services available even if one of the cluster nodes fails. If the server running the application fails for some reason (for example, a hardware failure), the cluster software (Pacemaker) restarts the application/resource on another node.

High availability is mainly used for databases, custom applications, and file sharing. Failover is not just starting an application; it involves a series of associated operations such as mounting filesystems, configuring networks, and starting dependent applications.

Lab Environment:

RHEL 8 supports failover clustering using Pacemaker. Because failover is a series of operations, we need to configure the filesystem and network as cluster resources. For the filesystem, we will use shared storage served from an iSCSI target.

Below are my environment specifications:
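
The original specification table was an image; the following summary is reconstructed from the values used throughout this guide:

  • ha1.localdomain (192.168.1.218): cluster node 1, RHEL 8
  • ha2.localdomain (192.168.1.219): cluster node 2, RHEL 8
  • ha-storage.localdomain (192.168.1.220): iSCSI storage node, exports the /dev/sdb disk
  • 192.168.1.100: floating virtual IP for the clustered service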

Prerequisites:

  • Operating System RHEL 8
  • Red Hat Developer Subscription (register here)
  • Cluster nodes can reach the storage node

Prepare All Nodes:

On all nodes, register the system to RHSM using the Developer Subscription, then update the system

# subscription-manager register 
# subscription-manager attach --auto
# subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
# dnf update -y

Configure /etc/hosts on all the cluster nodes

192.168.1.218   ha1 ha1.localdomain
192.168.1.219 ha2 ha2.localdomain
192.168.1.220 ha-storage ha-storage.localdomain
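
As a quick check of the "cluster nodes can reach the storage node" prerequisite and of the entries above, you can verify name resolution and connectivity from each cluster node; this is an optional step, not part of the original walkthrough:

[root@ha1 ~]# ping -c 2 ha2
[root@ha1 ~]# ping -c 2 ha-storage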

Shared Storage

Shared storage is one of the most important resources in a RHEL high-availability cluster, as it holds the data of the running application. All nodes in the cluster have access to the shared storage, so the active node always sees the most recent data. In this lab, we configure the cluster with iSCSI storage for demonstration purposes; in production environments, SAN storage is the most widely used type of shared storage.

Setup Cluster Nodes

Execute on both cluster nodes (ha1 and ha2). Note that the last two commands (pcs host auth and pcs cluster setup) only need to be run on one of the nodes.

[root@ha1 ~]# dnf install -y pcs pacemaker fence-agents-all pcp-zeroconf psmisc policycoreutils-python-utils lvm2 chrony iscsi-initiator-utils
[root@ha1 ~]# firewall-cmd --permanent --add-service=high-availability
[root@ha1 ~]# firewall-cmd --reload
[root@ha1 ~]# systemctl start pcsd
[root@ha1 ~]# systemctl enable --now pcsd
[root@ha1 ~]# systemctl status pcsd
[root@ha1 ~]# passwd hacluster
[root@ha1 ~]# pcs host auth ha1.localdomain ha2.localdomain
[root@ha1 ~]# pcs cluster setup mycluster --start ha1.localdomain ha2.localdomain

Start and Verify Cluster

[root@ha1 ~]# pcs cluster enable --all
[root@ha1 ~]# systemctl enable corosync && systemctl enable pacemaker
[root@ha1 ~]# corosync-cfgtool -s
[root@ha1 ~]# corosync-cmapctl | grep members
[root@ha1 ~]# pcs status

Disable Fencing (optional)

Disabling STONITH (fencing) is acceptable only in a lab or test environment; a production cluster should keep fencing enabled and configure a working STONITH device instead.

[root@ha1 ~]# pcs property set stonith-enabled=false
[root@ha1 ~]# pcs status

Create an Active/Passive HA LVM Cluster

Update the /etc/lvm/lvm.conf file on all the cluster nodes

[root@ha1 ~]# vi /etc/lvm/lvm.conf

Set the value of system_id_source to uname

system_id_source = "uname"
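
To confirm the change on each node, you can check the setting and the resulting system ID; this verification step is my addition, and lvm systemid should print the hostname reported by uname -n:

[root@ha1 ~]# grep system_id_source /etc/lvm/lvm.conf
[root@ha1 ~]# lvm systemid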

Rebuild the initramfs on all the cluster nodes, then reboot

[root@ha1 ~]# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
[root@ha1 ~]# dracut -f -v
[root@ha1 ~]# reboot

Setup Shared Storage (iSCSI)

Execute on Storage Node (ha-storage)

[root@ha-storage ~]# yum install -y targetcli 
[root@ha-storage ~]# timedatectl set-ntp yes
[root@ha-storage ~]# fdisk -l | grep -i sd
[root@ha-storage ~]# pvcreate /dev/sdb
[root@ha-storage ~]# vgcreate vg_iscsi /dev/sdb
[root@ha-storage ~]# lvcreate -l 100%FREE -n lv_iscsi vg_iscsi

Get the initiator name of each cluster node

First Node (ha1),

[root@ha1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:dd802b97ab60

Second Node (ha2),

[root@ha2 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:1cc56373abf8

Execute targetcli on the storage node to get an interactive iSCSI CLI prompt

[root@ha-storage ~]# targetcli

Inside the targetcli shell, create a block backstore on the logical volume, an iSCSI target, a LUN, and ACLs for both cluster-node initiators (the original post showed these commands in a screenshot).
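
A sketch of an equivalent targetcli session is shown below. The backstore name block_iscsi is my own choice, the target IQN matches the one discovered later in this guide (running iscsi/ create without an argument generates an IQN of this form automatically), and the ACLs use the initiator names read from the two cluster nodes above.

/> backstores/block create name=block_iscsi dev=/dev/vg_iscsi/lv_iscsi
/> iscsi/ create iqn.2003-01.org.linux-iscsi.ha-storage.x8664:sn.4cb9b5f7b85c
/> iscsi/iqn.2003-01.org.linux-iscsi.ha-storage.x8664:sn.4cb9b5f7b85c/tpg1/luns create /backstores/block/block_iscsi
/> iscsi/iqn.2003-01.org.linux-iscsi.ha-storage.x8664:sn.4cb9b5f7b85c/tpg1/acls create iqn.1994-05.com.redhat:dd802b97ab60
/> iscsi/iqn.2003-01.org.linux-iscsi.ha-storage.x8664:sn.4cb9b5f7b85c/tpg1/acls create iqn.1994-05.com.redhat:1cc56373abf8
/> saveconfig
/> exit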

Set up the target service and firewall

[root@ha-storage ~]# systemctl enable --now target
[root@ha-storage ~]# systemctl status target
[root@ha-storage ~]# firewall-cmd --permanent --add-port=3260/tcp
[root@ha-storage ~]# firewall-cmd --reload

Set NTP on both cluster nodes

[root@ha1 ~]# timedatectl set-ntp yes
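
Optionally, verify that time synchronization is active; chrony was installed on the cluster nodes earlier, so chronyc should report reachable sources:

[root@ha1 ~]# timedatectl
[root@ha1 ~]# chronyc sources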

Discover Shared Storage

Discover the target from both cluster nodes (the iSCSI initiator package, iscsi-initiator-utils, was already installed in the cluster setup step)

[root@ha1 ~]# iscsiadm -m discovery -t st -p 192.168.1.220
Output:
192.168.1.220:3260,1 iqn.2003-01.org.linux-iscsi.ha-storage.x8664:sn.4cb9b5f7b85c

Login to the target storage

[root@ha1 ~]# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.ha-storage.x8664:sn.4cb9b5f7b85c -p 192.168.1.220 -l
Output:
Logging in to [iface: default, target: iqn.2003-01.org.linux-iscsi.ha-storage.x8664:sn.4cb9b5f7b85c, portal: 192.168.1.220,3260]
Login to [iface: default, target: iqn.2003-01.org.linux-iscsi.ha-storage.x8664:sn.4cb9b5f7b85c, portal: 192.168.1.220,3260] successful.

Enable and start the initiator service.

[root@ha1 ~]# systemctl enable --now iscsid
[root@ha1 ~]# systemctl status iscsid
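
Optionally, confirm that the iSCSI session to the target is established on each node:

[root@ha1 ~]# iscsiadm -m session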

Setup and Create LVM

On each cluster node, check whether the new disk from the iSCSI server is visible. In my case, /dev/sdb is the shared disk from the iSCSI storage

[root@ha1 ~]# fdisk -l | grep -i sd
[root@ha1 ~]# lsblk

Set up LVM storage on one of the cluster nodes

[root@ha1 ~]# pvcreate /dev/sdb
[root@ha1 ~]# vgcreate vgpool /dev/sdb
[root@ha1 ~]# vgs -o+systemid
[root@ha1 ~]# lvcreate -l 100%FREE -n lvdata vgpool
[root@ha1 ~]# mkfs.ext4 /dev/vgpool/lvdata

Create a directory as a mount point and create a test file inside

[root@ha1 ~]# mkdir -p /share
[root@ha1 ~]# mount /dev/vgpool/lvdata /share
[root@ha1 ~]# echo "tes HA" > /share/tes-HA.txt
[root@ha1 ~]# df -h /share/tes-HA.txt
[root@ha1 ~]# cat /share/tes-HA.txt
[root@ha1 ~]# umount /share

Setup Cluster Resources

Add the cluster resources using pcs

[root@ha1 ~]# pcs resource create Virtual_IP IPaddr2 ip=192.168.1.100 cidr_netmask=24 op monitor interval=30s
[root@ha1 ~]# pcs resource create My_VG ocf:heartbeat:LVM-activate vgname=vgpool activation_mode=exclusive vg_access_mode=system_id --group HA-LVM
[root@ha1 ~]# pcs resource create My_FS Filesystem device="/dev/mapper/vgpool-lvdata" directory="/share" fstype="ext4" --group HA-LVM
[root@ha1 ~]# pcs status

Set constraints on the cluster

[root@ha1 ~]# pcs constraint order Virtual_IP then My_FS
[root@ha1 ~]# pcs constraint colocation add My_FS with Virtual_IP INFINITY
[root@ha1 ~]# pcs constraint location My_VG prefers ha1.localdomain=100
[root@ha1 ~]# pcs constraint location My_VG prefers ha2.localdomain=50
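
As an optional check that is not part of the original walkthrough, list the constraints and the current resource placement to confirm everything was recorded as intended:

[root@ha1 ~]# pcs constraint
[root@ha1 ~]# pcs status resources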

Test Failover LVM-HA

  • Node ha1
[root@ha1 ~]# df -h /share
[root@ha1 ~]# cat /share/tes-HA.txt
[root@ha1 ~]#
[root@ha1 ~]# pcs node standby ha1.localdomain
[root@ha1 ~]# pcs status
  • Node ha2
[root@ha2 ~]# df -h /share
[root@ha2 ~]# cat /share/tes-HA.txt
[root@ha2 ~]#
[root@ha2 ~]# lsblk
[root@ha2 ~]# pcs status
  • Node ha1
[root@ha1 ~]# pcs node unstandby ha1.localdomain
[root@ha1 ~]#
[root@ha1 ~]# pcs status
[root@ha1 ~]# df -h /share
[root@ha1 ~]# cat /share/tes-HA.txt



#RHEL8 #RedHat #LVM-HA #HighAvailability #StayHealth


Ach.Chusnul Chikam

Cloud Consultant | RHCSA | CKA | AWS SAA | OpenStack Certified | OpenShift Certified | Google Cloud ACE | LinkedIn: https://www.linkedin.com/in/achchusnulchikam