How to Install and Configure DRBD Cluster on RHEL7 / CentOS7

This article walks through the step-by-step procedure to install and configure DRBD on RHEL7 / CentOS7. In a previous post, we explained what DRBD (Distributed Replicated Block Device) is, how DRBD works, DRBD operations, DRBD replication modes, DRBD architecture, and DRBD administration commands.

Our Lab Environment
1. RHEL7 or CentOS7 - 64-bit, 2 nodes (node1 and node2)
2. A dedicated local disk on each node (/dev/sdb, preferably the same size on both).
3. A dedicated network device and IP address on each node for replication.
4. A yum repository enabled; refer to this link to configure a yum repo server.
5. Firewall port 7788 (TCP) allowed through iptables/firewalld.
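The firewall rule from point 5 can be applied as follows. This is a sketch assuming the default firewalld setup on RHEL7/CentOS7; the commented lines are the equivalent for systems still running the legacy iptables service. Run as root on both nodes.

```shell
# Open the DRBD replication port (TCP 7788) with firewalld.
firewall-cmd --permanent --add-port=7788/tcp
firewall-cmd --reload

# Equivalent for the legacy iptables service:
# iptables -I INPUT -p tcp --dport 7788 -j ACCEPT
# service iptables save
```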

Steps involved to install and configure DRBD on RHEL7 / CentOS7
1. Download and install the DRBD packages.
2. Configure DRBD.
3. Initialize the metadata storage on each node.
4. Start and enable the DRBD daemon.
5. Enable the DRBD primary node.
6. Create and mount the filesystem on the DRBD device.
7. Test the configuration.


1. Download DRBD and Install the DRBD Packages
Preferred methods to download the DRBD packages are:

Method 1 : Download the latest DRBD packages from the DRBD project's sponsor, LINBIT, at http://oss.linbit.com/drbd/ and compile them from source.
Method 2 : Enable the ELRepo repository on RHEL7, CentOS7 and SL7.

We prefer method 2, which uses the ELRepo repository (note: the drbd84 packages ship from ELRepo, not EPEL). Install the ELRepo release package on both nodes.
rpm -ivh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Import the ELRepo public key on both nodes.
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org
Before installing, find the latest DRBD version available through yum.
yum info *drbd* | grep Name
Name        : drbd84-utils
Name        : drbd84-utils-sysvinit
Name        : kmod-drbd84
The output above shows that the latest available version is drbd84 (i.e. DRBD 8.4).

Install the DRBD 8.4 packages on both nodes
yum -y install drbd84-utils kmod-drbd84
Note : If the installation fails on either node with a "public key is not installed" error, import all the public keys under /etc/pki/rpm-gpg/ as shown below and retry the installation.
rpm --import /etc/pki/rpm-gpg/*
Verify that the drbd kernel module is loaded using the command below:
lsmod | grep -i drbd
If it is not loaded, locate the drbd.ko module with find and load it with insmod (the kernel version in the path will differ on your system):
find / -name drbd.ko
insmod /usr/lib/modules/3.10.0-514.16.1.el7.x86_64/weak-updates/drbd84/drbd.ko
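Alternatively, modprobe resolves the module path and its dependencies automatically, and a file in /etc/modules-load.d/ makes the module load on every boot (the standard systemd mechanism on RHEL7). A sketch, to be run as root on both nodes:

```shell
# Load the drbd module now; modprobe finds drbd.ko without a full path.
modprobe drbd

# Load it automatically at boot via systemd's modules-load mechanism.
echo drbd > /etc/modules-load.d/drbd.conf

# Confirm it is loaded.
lsmod | grep -i drbd
```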

2. Configuring DRBD in Linux
The main DRBD configuration file is /etc/drbd.conf. Normally, this file is just a skeleton with the following contents:
include "/etc/drbd.d/global_common.conf";
include "/etc/drbd.d/*.res";
By convention, /etc/drbd.d/global_common.conf contains the global and common sections of the DRBD configuration, whereas the “.res” files contain one resource section each.

Let's create a DRBD resource file, "testdata1.res". We recommend always choosing a unique name that identifies the resource's purpose, and the file name must end with the ".res" extension.
vi /etc/drbd.d/testdata1.res
Copy the simple configuration below into testdata1.res.
resource testdata1 {
        protocol C;
        on node1 {
                device /dev/drbd0;
                disk /dev/sdb;
                address <node1-replication-ip>:7788;
                meta-disk internal;
        }
        on node2 {
                device /dev/drbd0;
                disk /dev/sdb;
                address <node2-replication-ip>:7788;
                meta-disk internal;
        }
}
resource testdata1 - Defines the resource name, "testdata1".
protocol C - Configures the resource for fully synchronous replication (Protocol C). To learn more, refer to http://www.learnitguide.net/2016/07/what-is-drbd-how-drbd-works-drbd.html
on node1 - Defines the first node's name and the options that apply to it.
device /dev/drbd0 - Defines the logical block device name used for the DRBD device (must be the same on both nodes).
disk /dev/sdb - Defines the dedicated local disk used for replication (need not be the same on both nodes).
address - Defines the dedicated replication address; the resource uses TCP port 7788.
meta-disk internal - Configures the resource to use internal metadata.
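Since the resource file must be identical on both nodes, it can help to generate it from a couple of variables instead of editing the stanzas by hand. A minimal sketch; the two IP addresses are placeholders for your dedicated replication addresses, and the file is written to the current directory here (copy it to /etc/drbd.d/ on a real node):

```shell
# Placeholder replication addresses -- substitute your own.
NODE1_IP=${NODE1_IP:-192.168.10.1}
NODE2_IP=${NODE2_IP:-192.168.10.2}

# Write the resource file; both node stanzas stay consistent by construction.
cat > testdata1.res <<EOF
resource testdata1 {
        protocol C;
        on node1 {
                device /dev/drbd0;
                disk /dev/sdb;
                address ${NODE1_IP}:7788;
                meta-disk internal;
        }
        on node2 {
                device /dev/drbd0;
                disk /dev/sdb;
                address ${NODE2_IP}:7788;
                meta-disk internal;
        }
}
EOF
```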

Copy the DRBD resource file to the other node.
[root@node1 ~]# scp /etc/drbd.d/testdata1.res node2:/etc/drbd.d/
3. Initialize the metadata storage on each node
[root@node1 ~]# drbdadm create-md testdata1
[root@node2 ~]# drbdadm create-md testdata1
The commands above produce output like the following:
  --==  Thank you for participating in the global usage survey  ==--
The server's response is:
you are the 10680th user to install this version
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
4. Starting and Enabling the DRBD Daemon
Start and enable the DRBD service on both nodes at about the same time (on startup, DRBD waits for its peer node to connect):
systemctl start drbd
systemctl enable drbd
5. Enabling the DRBD Primary node
Let's make the first node, "node1", the primary for the resource.
[root@node1 ~]# drbdadm primary testdata1
If that fails (common on the very first promotion, before the initial sync has marked either side's data consistent), force it:
[root@node1 ~]# drbdadm primary testdata1 --force

Checking the DRBD Status
[root@node1 ~]# cat /proc/drbd
version: 8.4.7-1 (api:1/proto:86-101)
GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by phil@Build64R7, 2016-01-12 14:29:40
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:1048508 nr:0 dw:0 dr:1049236 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
[root@node1 ~]# drbd-overview
 0:testdata1/0  Connected Primary/Secondary UpToDate/UpToDate
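For scripting or monitoring, the connection state and roles can be pulled out of /proc/drbd with awk. A sketch; the here-document reproduces the sample status above so the pipeline can be tried without a live cluster (on a real node, use status=$(cat /proc/drbd) instead):

```shell
# Sample /proc/drbd content (on a real node: status=$(cat /proc/drbd)).
status=$(cat <<'EOF'
version: 8.4.7-1 (api:1/proto:86-101)
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:1048508 nr:0 dw:0 dr:1049236 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
EOF
)

# Extract the value after "cs:" (connection state) and "ro:" (roles).
cstate=$(printf '%s\n' "$status" | awk 'match($0, /cs:[A-Za-z]+/)   {print substr($0, RSTART+3, RLENGTH-3)}')
roles=$(printf '%s\n' "$status"  | awk 'match($0, /ro:[A-Za-z\/]+/) {print substr($0, RSTART+3, RLENGTH-3)}')

echo "connection=$cstate roles=$roles"
# -> connection=Connected roles=Primary/Secondary
```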
6. Create and mount the Filesystem of DRBD device
Create a filesystem, mount the volume, and write some data on the first node, "node1".
[root@node1 ~]# mkfs.ext3 /dev/drbd0
[root@node1 ~]# mount /dev/drbd0 /mnt
[root@node1 ~]# touch /mnt/testfile
[root@node1 ~]# ll /mnt/
total 16
drwx------ 2 root root 16384 Jul  8 20:29 lost+found
-rw-r--r-- 1 root root     0 Jul  8 20:31 testfile
7. Testing the DRBD configuration
Let's test the configuration by moving the primary role from "node1" to the second node, "node2", and verifying that the data was replicated.

Unmount the volume /dev/drbd0 on the first node, "node1".
[root@node1 ~]# umount /mnt
Demote the first node, "node1", from primary to secondary:
[root@node1 ~]# drbdadm secondary testdata1
Promote the second node, "node2", from secondary to primary:
[root@node2 ~]# drbdadm primary testdata1
Mount the volume and check that the data is available:
[root@node2 ~]# mount /dev/drbd0 /mnt
[root@node2 ~]# ll /mnt
total 16
drwx------ 2 root root 16384 Jul  8 20:29 lost+found
-rw-r--r-- 1 root root     0 Jul  8 20:31 testfile
Yes, the data written to node1's disk was successfully replicated to node2's disk. That's all for the basic DRBD configuration. Refer to this link to see how to integrate DRBD replication with a Pacemaker cluster.
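After a switchover like this, a quick scripted health check is handy. Below is a sketch of one possible helper (the function name drbd_healthy is our own): it succeeds only when the status line shows cs:Connected and ds:UpToDate/UpToDate. The sample line lets it be exercised without a live cluster; on a real node, feed it the output of grep 'cs:' /proc/drbd.

```shell
# Succeed (exit 0) only when the resource is connected and both disks
# are up to date; anything else is treated as degraded.
drbd_healthy() {
    case $1 in
        *cs:Connected*ds:UpToDate/UpToDate*) return 0 ;;
        *) return 1 ;;
    esac
}

# Sample status line (on a real node: sample=$(grep 'cs:' /proc/drbd)).
sample=' 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----'

if drbd_healthy "$sample"; then
    echo "resource healthy"
else
    echo "resource degraded"
fi
```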

Thanks for reading our post. We appreciate your feedback; leave your comments below.
