

Integrate DRBD with Pacemaker Clusters on RHEL7 / CentOS7

This post will show you how to integrate DRBD with Pacemaker clusters on RHEL7 / CentOS7.

How to Integrate DRBD with Pacemaker Clusters on RHEL7 / CentOS7


This article describes the step-by-step procedure to integrate DRBD with Pacemaker clusters for a highly available Apache web server on RHEL7 / CentOS7.

Together, Pacemaker and DRBD provide a complete, cost-effective solution for many high-availability situations.

This article covers only the procedure to integrate DRBD with Pacemaker clusters on Linux, so refer to the links below to learn more about DRBD and Pacemaker.

What is DRBD, How DRBD works - DRBD Tutorial for Beginners
How to Install and Configure DRBD Cluster on RHEL7 / CentOS7


Our Lab Setup:
Hostname : node1 and node2
Operating System : CentOS 7.1 - 64 Bit
Management IP Address : 192.168.2.10(node1) and 192.168.2.20(node2)
Dedicated network device for Data replication : 172.16.2.61 and 172.16.2.62 respectively.
Dedicated Local disk : /dev/sdb, preferably the same size on each node.
Required Packages : pcs, pacemaker, fence-agents-all, psmisc, policycoreutils-python, httpd*, drbd*-utils, kmod-drbd*

Use the following procedure to integrate DRBD with Pacemaker Clusters for High Availability Apache Web server on RHEL7 / CentOS7.

Prerequisites:
1. Make sure both node1 and node2 are reachable from each other.
2. Add an entry for each node in the /etc/hosts file for name resolution, or configure the nodes in DNS (a sample /etc/hosts is shown after this list). Refer to this link to configure a DNS server on RHEL7 / CentOS7.
3. A yum repository must be enabled; refer to this link to configure a YUM repo server on Linux.
4. An internet connection is needed if you prefer to install the DRBD software over the internet. If you are using Linux VMs and don't know how to provide internet access to them, refer to this link to configure internet access for Linux guest virtual machines.
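
For reference, a minimal /etc/hosts on both nodes could look like the example below, using the management IPs from our lab setup:
192.168.2.10    node1
192.168.2.20    node2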

Steps involved
1. Install Apache Web server packages (httpd).
2. Install Cluster Software (Pacemaker)
3. Configure the Cluster with Pacemaker and Corosync.
4. Download DRBD and Install the DRBD Packages
5. Configure DRBD
6. Initialize the DRBD meta data storage
7. Define the DRBD Primary node
8. Create and mount the filesystem on the DRBD device
9. Create cluster resources and set properties
10. Testing the fail-over manually.


1. Install Apache Web server packages (httpd)


Install the Apache web server package "httpd" on both nodes using yum to avoid dependency issues.
yum -y install httpd*

Note : If firewalld is enabled in your environment, allow the apache (http) service to accept connections on both nodes.
firewall-cmd --permanent --add-service=http
firewall-cmd --reload

2. Install Cluster Software (Pacemaker)


Install the cluster packages on both nodes using yum to avoid dependency issues.
yum install -y pacemaker pcs fence-agents-all psmisc policycoreutils-python

Note : If firewalld is enabled in your environment, allow the cluster-related (high-availability) services to accept connections on both nodes.
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload

3. Configure the Cluster with Pacemaker and Corosync.


Once the cluster packages are installed, set a password for the user "hacluster" on both nodes. This user was created during the cluster package installation and is used to authenticate the cluster nodes.
echo "redhat" | passwd --stdin hacluster

On both nodes, start and enable the pcsd daemon.
systemctl start pcsd
systemctl enable pcsd

To configure corosync on the first node "node1", first authenticate the cluster node membership using the user "hacluster".
[root@node1 ~]# pcs cluster auth node1 node2 -u hacluster
Password:
node1: Authorized
node2: Authorized

Enter the hacluster password when prompted as above, or pass it on the command line as below.
[root@node1 ~]# pcs cluster auth node1 node2 -u hacluster -p redhat

where "redhat" is my hacluster’s user password.

Let's define the cluster name and the cluster nodes on the first node "node1" using the pcs command, which generates and synchronizes the corosync configuration.
[root@node1 ~]# pcs cluster setup --name mycluster node1 node2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
node1: Succeeded
node2: Succeeded
Synchronizing pcsd certificates on nodes node1, node2...
node1: Success
node2: Success
Restaring pcsd on the nodes in order to reload the certificates...
node1: Success
node2: Success

Where “mycluster” is my cluster name.

On node1, start and enable the cluster.
[root@node1 ~]# pcs cluster start --all
[root@node1 ~]# pcs cluster enable --all

Once the cluster has started, the corosync configuration file /etc/corosync/corosync.conf will be generated on both nodes.

On both nodes, check the cluster status.
pcs status

Set the cluster properties.
pcs property set stonith-enabled=false

STONITH, also known as fencing, protects your data from being corrupted by rogue nodes or unintended concurrent access. We disable this feature here since we do not have a shared filesystem that requires fencing.
pcs property set no-quorum-policy=ignore
pcs property set default-resource-stickiness="INFINITY"
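
In a two-node cluster, quorum is lost as soon as one node goes down, so no-quorum-policy=ignore lets the surviving node keep running the resources, and the INFINITY stickiness keeps resources where they are after a fail-over instead of moving them back. As a quick check, you can list the properties you just set:
pcs property list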

4. Download DRBD and Install the DRBD Packages


Please refer to the links below to learn more about DRBD in detail.
What is DRBD, How DRBD works - DRBD Tutorial for Beginners
How to Install and Configure DRBD Cluster on RHEL7 / CentOS7

Preferred methods to obtain the DRBD packages are:
Method 1 : Download the latest DRBD sources from the DRBD project sponsor's site at http://oss.linbit.com/drbd/ and compile them from source.
Method 2 : Enable the ELRepo repository on RHEL7, CentOS7 and SL7.
We prefer method 2, so install the ELRepo repository release package on both nodes.
rpm -ivh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

Import the Public Key on both nodes.
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org

Install the DRBD packages on both nodes
yum -y install drbd*-utils kmod-drbd*
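
Note : On ELRepo these wildcards typically match the DRBD 8.4 packages. Assuming the 8.4 branch is what your repository provides, the equivalent explicit command would be:
yum -y install drbd84-utils kmod-drbd84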

Note : If you get a "public key is not installed" error on any node, import all the public keys available in the directory /etc/pki/rpm-gpg/ as below and retry the installation.
rpm --import /etc/pki/rpm-gpg/*

On both nodes, load the DRBD module manually or reboot the nodes.
modprobe drbd

On both nodes, verify that the module is loaded.
lsmod | grep drbd
drbd                  405309  0
libcrc32c              12644  2 xfs,drbd

Note : If firewalld is enabled in your environment, allow the DRBD replication port (7788/tcp) on both nodes.
firewall-cmd --permanent --add-port=7788/tcp
firewall-cmd --reload

5. Configure DRBD


Let's create a DRBD resource file "testdata1.res". We recommend always choosing a unique name that identifies the resource's purpose, and the file name must end with the ".res" extension.
vi /etc/drbd.d/testdata1.res

Copy the simple configuration below into the file testdata1.res.
resource testdata1 {
  protocol C;
  on node1 {
    device /dev/drbd0;
    disk /dev/vgdrbd/vol1;
    address 172.16.2.61:7788;
    meta-disk internal;
  }
  on node2 {
    device /dev/drbd0;
    disk /dev/vgdrbd/vol1;
    address 172.16.2.62:7788;
    meta-disk internal;
  }
}

Where,
resource testdata1 - Defines the resource name "testdata1".
protocol C - The resource is configured to use fully synchronous replication (Protocol C). To know more about this, refer to this link - Understand the replication protocol.
on node1 - Defines the first node's name and its options within the statement.
device /dev/drbd0 - Defines the logical device name used for the DRBD block device (should be the same on both nodes).
disk /dev/vgdrbd/vol1 - Defines the dedicated local disk used for replication.
address 172.16.2.61:7788 - Defines that node's dedicated replication network address; the resource uses TCP port 7788.
meta-disk internal - Defines that internal metadata is used.
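
Before copying the file to the other node, you can let drbdadm parse the configuration as a quick syntax check; if the file is valid it simply prints the resource definition back:
[root@node1 ~]# drbdadm dump testdata1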

Copy the DRBD resource file to the other node.
[root@node1 ~]# scp /etc/drbd.d/testdata1.res node2:/etc/drbd.d/

On both nodes, create the LVM volume, since we defined /dev/vgdrbd/vol1 in the configuration file. You can also use the raw block device /dev/sdb directly if you don't prefer LVM.
pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
vgcreate vgdrbd /dev/sdb
Volume group "vgdrbd" successfully created
lvcreate -n vol1 -l100%FREE vgdrbd
Logical volume "vol1" created.

6. Initialize DRBD meta data storage.


On both nodes, use the "drbdadm" command to initialize the metadata storage.
[root@node1 ~]# drbdadm create-md testdata1
[root@node2 ~]# drbdadm create-md testdata1
The above commands will give output like the one below.
--==  Thank you for participating in the global usage survey  ==--
The server's response is:
you are the 10680th user to install this version
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
success

Start and enable the DRBD daemon.
On both nodes, start the service (start them close together so the peers can connect to each other).
systemctl start drbd
systemctl enable drbd
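
At this point, before a primary has been chosen, both nodes should report a Connected, Secondary/Secondary state with Inconsistent data. You can verify this with:
cat /proc/drbd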

7. Define the DRBD Primary node


Let's make the first node "node1" the DRBD primary.
[root@node1 ~]# drbdadm primary testdata1

Use "--force" if the command fails and you need to promote the node forcefully; this triggers the initial full synchronization to node2.
[root@node1 ~]# drbdadm primary testdata1 --force

Check the DRBD status.
[root@node1 ~]# cat /proc/drbd
version: 8.4.7-1 (api:1/proto:86-101)
GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by phil@Build64R7, 2016-01-12 14:29:40
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:1048508 nr:0 dw:0 dr:1049236 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
[root@node1 ~]# drbd-overview
0:testdata1/0  Connected Primary/Secondary UpToDate/UpToDate

8. Create and mount the filesystem on the DRBD device


On the first node "node1", create a filesystem, mount the volume, and write some test "index.html" content.
[root@node1 ~]# mkfs.ext3 /dev/drbd0
[root@node1 ~]# mount /dev/drbd0 /mnt
[root@node1 ~]# echo "Welcome to www.learnitguide.net tutorial blog" > /mnt/index.html
[root@node1 ~]# umount /mnt

9. Create cluster resources and set properties


Let's create the cluster resources VirtIP, Httpd, DrbdData, DrbdDataClone and DrbdFS on the first node. These are the resources the cluster will bring up on the other node in case of any failure.

Create IP Address resource.
[root@node1 ~]# pcs resource create VirtIP ocf:heartbeat:IPaddr2 ip=192.168.2.100 cidr_netmask=32 op monitor interval=30s

Where, VirtIP is the resource name.
ocf:heartbeat:IPaddr2 defines three things about the resource being added.
The first field 'ocf' is the standard to which the resource script conforms and tells the cluster where to find it. "pcs resource standards" is the command to list the available resource standards.
The second field (heartbeat) is standard-specific; for OCF resources, it tells the cluster which OCF namespace the resource script is in. "pcs resource providers" is the command to list the available OCF resource providers.
The third field (IPaddr2) is the name of the resource script. "pcs resource agents ocf:heartbeat" is the command to see all the resource agents available for a specific OCF provider.
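
For convenience, these are the exploration commands mentioned above; they are read-only and can be run on either node:
pcs resource standards
pcs resource providers
pcs resource agents ocf:heartbeat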

Create Httpd service resource.
[root@node1 ~]#  pcs resource create Httpd ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=1min
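
Note : The ocf:heartbeat:apache resource agent monitors Apache through its server-status URL. If the Httpd monitor operation fails in your environment, enable the status handler on both nodes, for example with a small /etc/httpd/conf.d/status.conf like the sketch below (assuming the default status URL):
<Location /server-status>
    SetHandler server-status
    Require local
</Location>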

Set the colocation and start/stop ordering constraints.
[root@node1 ~]# pcs constraint colocation add Httpd with VirtIP INFINITY
[root@node1 ~]# pcs constraint order VirtIP then Httpd
Adding VirtIP Httpd (kind: Mandatory) (Options: first-action=start then-action=start)


Integrate the DRBD resource into the cluster.

pcs can queue up several changes into a file and commit them all at once; we will use this to add the DRBD resources in a single transaction.

[root@node1 ~]# pcs cluster cib drbd_cfg


Create DRBD data device resource.
[root@node1 ~]#  pcs -f drbd_cfg resource create DrbdData ocf:linbit:drbd drbd_resource=testdata1 op monitor interval=60s

Where "testdata1" is the DRBD resource name which we have created in the configuration file..

Create the DRBD master/slave (clone) resource.
Create a master/slave resource so that DRBD runs on both nodes at the same time, with only one node promoted to master.
[root@node1 ~]#  pcs -f drbd_cfg resource master DrbdDataClone DrbdData master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

After you are done with all the changes, you can commit them all at once by pushing the drbd_cfg file into the live CIB.
[root@node1 ~]# pcs cluster cib-push drbd_cfg
CIB updated

Check the cluster resource status
[root@node1 ~]# pcs status resources
VirtIP (ocf::heartbeat:IPaddr2):       Started node1
Httpd  (ocf::heartbeat:apache):        Started node1
Master/Slave Set: DrbdDataClone [DrbdData]
Masters: [ node1 ]
Slaves: [ node2 ]

Create DRBD filesystem resource.

[root@node1 ~]# pcs cluster cib fs_cfg

Let's create a filesystem resource to mount the DRBD device on the mount point /var/www/html.
[root@node1 ~]#  pcs  -f fs_cfg resource create DrbdFS Filesystem device="/dev/drbd0" directory="/var/www/html" fstype="ext3"

Set the colocation and ordering constraints for the resources.
[root@node1 ~]# pcs  -f fs_cfg constraint colocation add DrbdFS with DrbdDataClone INFINITY with-rsc-role=Master
[root@node1 ~]# pcs  -f fs_cfg constraint order promote DrbdDataClone then start DrbdFS
Adding DrbdDataClone DrbdFS (kind: Mandatory) (Options: first-action=promote then-action=start)
[root@node1 ~]# pcs -f fs_cfg constraint colocation add Httpd with DrbdFS INFINITY
[root@node1 ~]# pcs -f fs_cfg constraint order DrbdFS then Httpd
Adding DrbdFS Httpd (kind: Mandatory) (Options: first-action=start then-action=start)
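
Before pushing the changes, you can review all the constraints queued in the shadow CIB file:
[root@node1 ~]# pcs -f fs_cfg constraint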

Push the changes to the live CIB.
[root@node1 ~]# pcs cluster cib-push fs_cfg
CIB Updated

Check the cluster resource status
[root@node1 ~]# pcs status resources
VirtIP (ocf::heartbeat:IPaddr2):       Started node1
Httpd  (ocf::heartbeat:apache):        Started node1
Master/Slave Set: DrbdDataClone [DrbdData]
Masters: [ node1 ]
Slaves: [ node2 ]
DrbdFS (ocf::heartbeat:Filesystem):    Started node1

By default, the operation timeout for all resources' start, stop, and monitor operations is 20 seconds. We will raise the global operation timeout default to 240 seconds.
[root@node1 ~]# pcs resource op defaults timeout=240s
[root@node1 ~]# pcs resource op defaults
timeout: 240s

Now all the resources are running on node1, so we can access our web server successfully at the URL http://192.168.2.100
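
You can verify this from any machine that can reach the virtual IP; the response should be the test content we wrote to index.html earlier:
curl http://192.168.2.100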

10. Testing the fail-over manually.


Currently all the resources are running on node1. Let's stop the cluster on node1; all the resources should fail over to the second node "node2" without any issue.
On node1:
[root@node1 ~]# pcs cluster stop node1
node1: Stopping Cluster (pacemaker)...
node1: Stopping Cluster (corosync)...

The cluster is stopped on node1. Let's go to node2 and check the status.

On node2:
[root@node2 ~]# pcs status resources
VirtIP (ocf::heartbeat:IPaddr2):       Started node2
Httpd  (ocf::heartbeat:apache):        Started node2
Master/Slave Set: DrbdDataClone [DrbdData]
Masters: [ node2 ]
Stopped: [ node1 ]
DrbdFS (ocf::heartbeat:Filesystem):    Started node2

All the resources are running on node2 as expected, so the fail-over happened successfully and the web server is now served from node2.
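
To bring node1 back into the cluster after the test, start the cluster on it again. Because we set default-resource-stickiness to INFINITY earlier, the resources should stay on node2 rather than failing back automatically.
[root@node1 ~]# pcs cluster start node1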

Hope you now have a good idea of how to integrate DRBD with Pacemaker clusters on RHEL7 / CentOS7.

Keep practicing and have fun. Leave your comments if any.




