# Starting the cluster

Use this procedure to start cluster services.

1. Log in to Active-Node as root, or as a user with superuser privileges.
2. In a separate terminal session, log in to Passive-Node as root, or as a user with superuser privileges.
3. Start DRBD.
   1. On Active-Node, start DRBD.

      ```
      drbdadm up all
      ```

      The following example output is expected:

      ```
      Moving the internal meta data to its proper location
      Internal drbd meta data successfully moved.
      ```

   2. On Passive-Node, start DRBD.

      ```
      drbdadm up all
      ```

      The output of this command should match the output on Active-Node.

   3. On both nodes, confirm that the DRBD device for the serviced thin pool is larger.

      ```
      lsblk -ap --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
      ```
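To spot the size change quickly, the `lsblk` output can be narrowed to DRBD rows. A minimal sketch, assuming the NAME column comes first, as in the invocation above; verify the column order on your hosts:

```shell
# Filter table rows whose first column (NAME) mentions drbd,
# printing the name and size columns only.
drbd_rows() {
  awk '$1 ~ /drbd/ { print $1, $2 }'
}

# Usage on a live node:
#   lsblk -ap --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT | drbd_rows
```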
4. On Active-Node, wait for disks to synchronize.

   ```
   watch drbd-overview
   ```

   Do not proceed until the status of all devices is UpToDate/UpToDate.
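Instead of watching manually, the wait can be scripted. A minimal sketch, assuming `drbd-overview` prints one line per device that begins with the device's minor number and a colon and shows the paired disk states as, for example, `UpToDate/UpToDate`; adjust the patterns to match the output in your environment:

```shell
# Succeed only if every device line on stdin reports UpToDate
# on both sides of the pair.
all_up_to_date() {
  # Keep device lines (leading "N:"), drop fully synced ones,
  # and negate: any survivor means synchronization is incomplete.
  ! grep -E '^[[:space:]]*[0-9]+:' | grep -v 'UpToDate/UpToDate' | grep -q .
}

# Usage on a live node:
#   until drbd-overview | all_up_to_date; do sleep 5; done
```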
5. On Active-Node, start cluster services for the distributed file system.

   ```
   pcs cluster unstandby --all
   ```
6. On either node, confirm the identity of the primary node.

   ```
   pcs status resources
   ```

   The first section of the output shows the status of the nodes.
7. On Active-Node, resize the volume group that contains the serviced thin pool.
   1. Identify the volume group that contains the serviced thin pool.

      ```
      lvs --options=lv_name,vg_name,lv_size
      ```

      Typically, the logical volume is serviced-pool and the containing volume group is serviced.

   2. Display information about the volume group.

      Replace Volume-Group with the name of the volume group identified in the previous substep:

      ```
      vgdisplay Volume-Group
      ```

   3. Identify the DRBD device associated with the serviced thin pool.

      Typically, the serviced thin pool is associated with /dev/drbd2. To verify the configuration in your environment, review /etc/drbd.d/serviced-dfs.res.

   4. Resize the physical volume that backs the volume group.

      Replace DRBD-Device with the DRBD device associated with the serviced thin pool:

      ```
      pvresize DRBD-Device
      ```

   5. Display information about the volume group again.

      Replace Volume-Group with the name of the volume group identified in substep 1:

      ```
      vgdisplay Volume-Group
      ```

      The size of the volume group should be larger.
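Substeps 3 and 4 can be scripted against the resource file. A minimal sketch, assuming the file uses the standard DRBD `device /dev/drbdN;` stanza syntax; the serviced thin pool may be only one of several devices declared in the file, so confirm which device applies before resizing anything:

```shell
# Print every device path declared in a DRBD resource file.
# Assumes stanzas of the form "device /dev/drbdN;" (standard
# DRBD configuration syntax).
list_drbd_devices() {
  awk '$1 == "device" { gsub(/;/, "", $2); print $2 }' "$1"
}

# Usage on a live node:
#   list_drbd_devices /etc/drbd.d/serviced-dfs.res
#   pvresize /dev/drbd2   # only after confirming the device
```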
8. On Active-Node, resize the serviced thin pool.
   1. Start the serviced group cluster services.

      ```
      pcs resource enable serviced-group
      ```

   2. Add space to the data storage area of the serviced thin pool.

      In the following command:

      - Replace Total-Size with the sum of the existing device size plus the space to add to the device, in gigabytes. Include the units identifier, G.
      - Replace Volume-Group with the name of the LVM volume group identified in the previous step.
      - Replace Logical-Volume with the name of the logical volume identified in the previous step.

      ```
      lvextend -L Total-Size Volume-Group/Logical-Volume
      ```

   3. Display information about LVM logical volumes on the host.

      ```
      lvs --options=lv_name,vg_name,lv_size
      ```

      The result should show the larger size of the logical volume.
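The size arithmetic above can be sanity-checked with a quick sketch. Note that `lvextend -L` sets an absolute size, while `lvextend -L+` adds an increment; since Total-Size is the new total, the absolute form applies. The numbers and volume names below are hypothetical:

```shell
# Hypothetical example: the pool is currently 16 GB and you want
# to add 10 GB, so the new total is 26 GB.
existing_gb=16
add_gb=10
total_gb=$(( existing_gb + add_gb ))

# lvextend -L sets an absolute size; -L+ adds an increment.
# Either of the following would grow the example pool to 26 GB:
#   lvextend -L 26G serviced/serviced-pool
#   lvextend -L +10G serviced/serviced-pool
echo "${total_gb}G"   # -> 26G
```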
9. On Active-Node, start the storage resource.
   1. Start the storage service.

      ```
      pcs resource enable serviced-storage
      ```

   2. Confirm that the resource started correctly.

      ```
      pcs status
      ```
10. Optional: On Active-Node, increase the size of the application tenant volume.
    1. Display the device mapper name of the serviced thin pool.

       ```
       grep -E '^[[:space:]]*SERVICED_DM_THINPOOLDEV' /etc/default/serviced \
         | sed -e 's/.*=//'
       ```

       Typically, the name is /dev/mapper/serviced-serviced--pool.

    2. Increase the size of the tenant device.

       In the following command:

       - Replace Device-Mapper-Name with the device mapper name of the thin pool.
       - Replace Tenant-ID with the identifier of the tenant device.
       - Replace Total-Size with the sum of the existing device size plus the space to add to the device, in gigabytes. Include the units identifier, G.

       ```
       serviced-storage resize -d /opt/serviced/var/volumes \
         -o dm.thinpooldev=Device-Mapper-Name Tenant-ID Total-Size
       ```
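The extraction in substep 1 can be checked against a sample line. A minimal sketch, assuming the variable appears in /etc/default/serviced in the usual `NAME=value` form; the sample value is illustrative:

```shell
# Apply the sed half of the pipeline from substep 1 to a sample
# line; everything up to the "=" is stripped, leaving the path.
sample='SERVICED_DM_THINPOOLDEV=/dev/mapper/serviced-serviced--pool'
dm_name=$(printf '%s\n' "$sample" | sed -e 's/.*=//')
echo "$dm_name"   # -> /dev/mapper/serviced-serviced--pool
```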