A High-Availability Cluster with Failover (Active/Passive) built on Linux Pacemaker is widely used for small-scale production machines. The cluster provides continuous service to clients: if one member fails, services shift to another member of the cluster. So let's configure an Active/Passive Pacemaker cluster on RHEL 7/CentOS 7.

I am building this setup on a KVM host machine, but it should work the same for two hardware machines with SAN or NFS storage, with a bit of extra integration for the storage part, because a cluster is only effective when it has enough fault resistance. I will write further posts on iSCSI and DRBD integration with the cluster.

For this setup, we are using the latest CentOS 7 release as of October 2017.


Two KVM guests with CentOS 7.4.1708, set up as mentioned below.

  1. Each Linux machine has two Ethernet cards configured in bonding
  2. The host machine has shared disks for MySQL and FTP data, attached to both guest machines
  3. The cluster has two groups: MySQL and vsftpd
  4. The cluster has six resources, three for each group (virtual IP, service, and filesystem)
  5. Each group runs on one host only; in case of any fault it shifts to the other host
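If the bonding setup from step 1 needs verifying, a quick check might look like the sketch below. The bond device name bond0 is an assumption; adjust it to your naming.

```shell
# Show bonding mode and slave link status (assumes the bond is named bond0)
cat /proc/net/bonding/bond0
# Confirm the bond interface carries the node IP
ip addr show bond0
```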

(Terminal diagram of the cluster layout omitted.)
Host Details
Guest1 Name : mysql-pri
Guest1 IP :

Guest2 Name : mysql-sec
Guest2 IP :

Version used in Setup

[root@mysql-pri ~]# uname  -r

[root@mysql-pri ~]# cat /etc/redhat-release 
CentOS Linux release 7.4.1708 (Core) 

[root@mysql-pri ~]# pcs --version

[root@mysql-pri ~]# pacemakerd --version
Pacemaker 1.1.16-12.el7_4.4
Written by Andrew Beekhof

[root@mysql-pri ~]# corosync -v
Corosync Cluster Engine, version '2.4.0'
Copyright (c) 2006-2009 Red Hat, Inc.

Installation of Cluster software

Now let's install the cluster software on both machines.

yum install pcs -y

Use the above command on both machines to install the pacemaker packages (pcs pulls in pacemaker and corosync as dependencies).
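On RHEL 7/CentOS 7, cluster traffic must also be allowed through the firewall. If firewalld is running, the predefined high-availability service covers the needed ports (2224/tcp for pcsd and the corosync UDP ports, among others):

```shell
# Allow cluster traffic through firewalld (run on both nodes)
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload
```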

Configuration Steps

Cluster processes run through the hacluster user, so we have to set a password for that user. Please assign the same password on both nodes.

passwd hacluster
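For scripted setups, the password can also be set non-interactively. The password below is a placeholder, and the --stdin option is specific to RHEL/CentOS builds of passwd:

```shell
# Placeholder password - replace with your own; use the SAME value on both nodes
echo "ClusterPass123" | passwd --stdin hacluster
```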

Now start the pcsd service and enable it so that it starts automatically at boot time.

systemctl start pcsd.service
systemctl enable pcsd.service

Cluster creation

Now we create the cluster and add members to it. But before doing that, we have to authorize both nodes and save the authorization tokens, as shown below.

[root@mysql-pri ~]# pcs cluster auth mysql-pri mysql-sec
Username: hacluster
mysql-pri: Authorized
mysql-sec: Authorized

In the above command, use the same password that was assigned to the hacluster user. These tokens are saved in the /var/lib/pcsd/tokens file.

Now we can create the cluster and add both members in the same command.

[root@mysql-pri ~]# pcs cluster setup --start --name MySQL_cluster mysql-pri mysql-sec
Destroying cluster on nodes: mysql-pri, mysql-sec...
mysql-sec: Stopping Cluster (pacemaker)...
mysql-pri: Stopping Cluster (pacemaker)...
mysql-sec: Successfully destroyed cluster
mysql-pri: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'mysql-pri', 'mysql-sec'
mysql-pri: successful distribution of the file 'pacemaker_remote authkey'
mysql-sec: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
mysql-pri: Succeeded
mysql-sec: Succeeded

Starting cluster on nodes: mysql-pri, mysql-sec...
mysql-pri: Starting Cluster...
mysql-sec: Starting Cluster...

Synchronizing pcsd certificates on nodes mysql-pri, mysql-sec...
mysql-pri: Success
mysql-sec: Success
Restarting pcsd on the nodes in order to reload the certificates...
mysql-pri: Success
mysql-sec: Success

In the above output you can see that we created the cluster named MySQL_cluster and added both nodes to it.

Now we need to enable the cluster on both nodes, as shown below.

[root@mysql-pri ~]# pcs cluster enable --all
mysql-pri: Cluster Enabled
mysql-sec: Cluster Enabled
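Before adding resources, it is worth confirming that corosync itself sees both ring members. A sketch of the checks:

```shell
# Ring status as seen by the local corosync daemon
corosync-cfgtool -s
# Membership information as reported through pcs
pcs status corosync
```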

So we have an enabled cluster working on two nodes. Next we need to add resources on top of it for the cluster to manage between the nodes. But first, let's check the status of the cluster with the command below.

[root@mysql-pri ~]# pcs status
Cluster name: MySQL_cluster
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: mysql-pri (version 1.1.16-12.el7_4.2-94ff4df) - partition with quorum
Last updated: Fri Sep 29 00:11:57 2017
Last change: Fri Sep 29 00:10:27 2017 by hacluster via crmd on mysql-pri

2 nodes configured
0 resources configured

Online: [ mysql-pri mysql-sec ]

No resources

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

We can see the cluster is working with two nodes and no resources configured on it. So now it's time to add some resources.

Disabling Stonith

We are not going to use STONITH in this cluster because we are building it on KVM guest machines, which don't support any hardware STONITH device. I will cover software STONITH options, such as SBD (STONITH Block Device), in another post.

[root@mysql-pri ~]# pcs property set stonith-enabled=false
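To confirm the property took effect:

```shell
# List cluster-wide properties; stonith-enabled should show false
pcs property list
```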

Addition of resources

As mentioned above, we are using two services as resources in the cluster. Details are below.

MySQL  -- MySQL Filesystem / MySQL virtual IP / MySQL Service

FTP    -- FTP Filesystem   / FTP Virtual IP  / FTP Service

FTP Virtual IP :
MySQL Virtual IP :

FTP FileSystem : /var/ftp
MySQL FileSystem : /var/lib/mysql

These filesystems are disk images hosted on the KVM host machine and shared with both cluster nodes, but mounted only through the cluster services. It's the same setup we would do on a VMware server for clustered Linux machines. I have configured LVM over those disks.

I am going to add the above resources to the cluster one by one. Resource service and filesystem creation are not covered in this post; I assume readers know how to install and configure FTP, MySQL, and LVM on a Linux machine.
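For reference, the shared-disk preparation for the MySQL group could look roughly like the sketch below. The device name /dev/vdb is an assumption for the disk exposed by the KVM host; the volume group and logical volume names match the resources configured later. Run the creation steps on one node only.

```shell
# Assumed shared disk: /dev/vdb (adjust to your environment)
pvcreate /dev/vdb
vgcreate mysql_vg /dev/vdb
lvcreate -n lv1 -l 100%FREE mysql_vg
mkfs.ext4 /dev/mysql_vg/lv1   # becomes /dev/mapper/mysql_vg-lv1
```

On the second node, a `vgscan` is enough to pick up the new volume group; do not mount the filesystem by hand, since the cluster will manage mounting.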

Let’s add FTP group resource first.

[root@mysql-pri ~]# pcs resource create vsftpd_fs ocf:heartbeat:Filesystem device="/dev/mapper/vsftpd_vg-lv1" directory="/var/ftp" fstype="ext4" --group vsftpd

[root@mysql-pri ~]# pcs resource create vsftpd_vip ocf:heartbeat:IPaddr2 ip= cidr_netmask=24 --group vsftpd

[root@mysql-pri ~]# pcs resource create vsftpd_ser service:vsftpd --group vsftpd

In the above commands, we created the resource group vsftpd and added three resources to it. Resources in a group are colocated on the same node and started in the order they were added.

Now we add the second resource group and its resources in the same way.

[root@mysql-pri ~]# pcs resource create MySQL_fs ocf:heartbeat:Filesystem device="/dev/mapper/mysql_vg-lv1" directory="/var/lib/mysql" fstype="ext4" --group MySQL

[root@mysql-pri ~]# pcs resource create MySQL_vip ocf:heartbeat:IPaddr2 ip= cidr_netmask=24 --group MySQL

[root@mysql-pri mysql]# pcs resource create MySQL_ser ocf:heartbeat:mysql config="/etc/my.cnf"  --group MySQL

Now we have two groups with six resources spread across them.

Let’s see status of Cluster.

[root@mysql-pri ~]# pcs status
Cluster name: MySQL_cluster
Stack: corosync
Current DC: mysql-pri (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Tue Oct 24 11:28:23 2017
Last change: Tue Oct 24 11:21:51 2017 by root via cibadmin on mysql-pri

2 nodes configured
6 resources configured

Online: [ mysql-pri mysql-sec ]

Full list of resources:

 Resource Group: vsftpd
     vsftpd_ser	(service:vsftpd):	Started mysql-pri
     vsftpd_fs	(ocf::heartbeat:Filesystem):	Started mysql-pri
     vsftpd_vip	(ocf::heartbeat:IPaddr2):	Started mysql-pri
 Resource Group: MySQL
     MySQL_fs	(ocf::heartbeat:Filesystem):	Started mysql-sec
     MySQL_ser	(ocf::heartbeat:mysql):	Started mysql-sec
     MySQL_vip	(ocf::heartbeat:IPaddr2):	Started mysql-sec

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

So we have configured a two-node cluster that tolerates the failure of either machine, giving the applications longer uptime.
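A simple way to test the failover is to put the active node into standby and watch the groups migrate, then bring it back:

```shell
# Drain mysql-pri; any groups running there should move to mysql-sec
pcs cluster standby mysql-pri
pcs status
# Make the node eligible to host resources again
pcs cluster unstandby mysql-pri
```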