
Pacemaker PCS Cookbook for Managing Linux Clusters

Srini Rao

Pacemaker and pcs on NFS servers managing cluster resources


This cookbook mostly uses various commands to show how to manage Pacemaker cluster resources.
Five types of resources are primarily configured in our Pacemaker setup:
vip

Virtual IP resource managed by Pacemaker

luksOpenLvm

LUKS open resource for LVM volumes that are set up with LUKS

Filesystem

XFS file system resource that needs to be monitored

nfs-server

NFS server resource

LVM

Logical Volume Manager resource

Before getting started, here is the current cluster status.


[A3][root@nfs3 ~]# pcs status
Cluster name: nfs
Last updated: Fri Sep 16 12:44:41 2016
Last change: Tue Aug 16 17:59:49 2016 by root via crmd on nfs4
Stack: cman
Current DC: nfs3 (version 1.1.14-8.el6-70404b0) - partition with quorum
2 nodes and 43 resources configured
Online: [ nfs3 nfs4 ]
Full list of resources:
 Resource Group: ha-nfsserver
     vip (ocf::heartbeat:IPaddr2): Started nfs3
     luksOpen-3600a098038303753723f494743616663p1_lvm (ocf::heartbeat:luksOpenLvm): Started nfs3
     luksOpen-3600a098038303753723f494743616669p1_lvm (ocf::heartbeat:luksOpenLvm): Started nfs3
     lvm_iscsi1 (ocf::heartbeat:LVM): Started nfs3
     fs-vg_iscsi1-lv_iscsi1 (ocf::heartbeat:Filesystem): Started nfs3
     pseudo-vg_iscsi1-lv_iscsi1 (ocf::heartbeat:Filesystem): Started nfs3
     luksOpen-3600a098038303753723f49474361666cp1_lvm (ocf::heartbeat:luksOpenLvm): Started nfs3
     luksOpen-3600a098038303753723f49474361662fp1_lvm (ocf::heartbeat:luksOpenLvm): Started nfs3
     lvm_iscsi2 (ocf::heartbeat:LVM): Started nfs3
     fs-vg_iscsi2-lv_iscsi2 (ocf::heartbeat:Filesystem): Started nfs3
     pseudo-vg_iscsi2-lv_iscsi2 (ocf::heartbeat:Filesystem): Started nfs3
     luksOpen-3600a098038303753723f494743616661p1_lvm (ocf::heartbeat:luksOpenLvm): Started nfs3
     luksOpen-3600a098038303753723f494743616662p1_lvm (ocf::heartbeat:luksOpenLvm): Started nfs3
     lvm_iscsi3 (ocf::heartbeat:LVM): Started nfs3
     fs-vg_iscsi3-lv_iscsi3 (ocf::heartbeat:Filesystem): Started nfs3
     pseudo-vg_iscsi3-lv_iscsi3 (ocf::heartbeat:Filesystem): Started nfs3
     luksOpen-3600a098038303753723f494743616664p1_lvm (ocf::heartbeat:luksOpenLvm): Started nfs3
     luksOpen-3600a098038303753723f494743616665p1_lvm (ocf::heartbeat:luksOpenLvm): Started nfs3
     lvm_iscsi4 (ocf::heartbeat:LVM): Started nfs3
     fs-vg_iscsi4-lv_iscsi4 (ocf::heartbeat:Filesystem): Started nfs3
     pseudo-vg_iscsi4-lv_iscsi4 (ocf::heartbeat:Filesystem): Started nfs3

Create a resource
To create a new resource:
#pcs resource create fs-vg_iscsi7-lv_iscsi7 ocf:heartbeat:Filesystem \
    device=/dev/mapper/vg_iscsi7-lv_iscsi7 directory=/iscsi7 fstype=xfs \
    fast_stop="no" force_unmount="safe" \
    op stop on-fail=stop timeout=200 \
    op monitor on-fail=stop timeout=200 OCF_CHECK_LEVEL=10
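Since the existing resources in this cluster all live in the ha-nfsserver resource group, a newly created resource would typically be added to that group as well. A sketch, assuming the group and resource names above:

```shell
# Append the new Filesystem resource to the existing resource group;
# --before/--after can control its position within the group
pcs resource group add ha-nfsserver fs-vg_iscsi7-lv_iscsi7
```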

Delete a resource
To delete a resource:
[A3][root@nfs3 ~]# pcs resource delete fs-vg_iscsi7-lv_iscsi7

Show the resource


[A3][root@nfs3 ~]# pcs resource show fs-vg_iscsi7-lv_iscsi7
Resource: fs-vg_iscsi7-lv_iscsi7 (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/mapper/vg_iscsi7-lv_iscsi7 directory=/iscsi7 fstype=xfs
Operations: monitor interval=30s (fs-vg_iscsi7-lv_iscsi7-monitor-interval-30s)
start interval=0s timeout=20s (fs-vg_iscsi7-lv_iscsi7-start-interval-0s)
stop interval=0s timeout=20s (fs-vg_iscsi7-lv_iscsi7-stop-interval-0s)

Manually Moving Resources Around the Cluster


To move a resource to another node:
[A3][root@nfs3 ~]# pcs resource move fs-vg_iscsi7-lv_iscsi7

To move the resource to nfs4 (the other node):


[A3][root@nfs3 ~]# pcs resource move fs-vg_iscsi7-lv_iscsi7 nfs4

To move a resource back to its original node nfs3:


[A3][root@nfs3 ~]# pcs resource clear fs-vg_iscsi7-lv_iscsi7

Note: You can also use the move command to move a resource back to its original node; however, that
won't clear the location constraint that the earlier move command generated. Thus, it's better to use
'resource clear' to return the resource to its normal status.
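To see the location constraint that a move leaves behind, you can list the cluster's constraints (a sketch; the constraint IDs shown on your cluster will differ):

```shell
# List all constraints; a move typically adds a location constraint
# preferring the target node, which 'pcs resource clear' removes
pcs constraint
```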


Note 2: When moving a resource, any other resources that have constraints tied to the resource being
moved will be moved as well.

Moving Resources Due to Failure


By default, there is no threshold defined, so Pacemaker will move a resource to another node whenever
it fails. To set the threshold to 3, run
# pcs resource meta fs-vg_iscsi7-lv_iscsi7 migration-threshold=3

The command above configures the resource fs-vg_iscsi7-lv_iscsi7 to move after 3 failures.
Note: after a resource moves due to failure, it will not run on the original node until the failcount is
reset or the failure timeout is reached.
To set the default threshold to 3, so all resources in the cluster will move after 3 failures:
# pcs resource defaults migration-threshold=3
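The failure timeout mentioned above is itself a meta attribute. A sketch of setting it so that failcounts expire automatically (the 60s value is only an illustration):

```shell
# After 60 seconds, recorded failures expire and the resource may run
# on the original node again without a manual failcount reset
pcs resource meta fs-vg_iscsi7-lv_iscsi7 failure-timeout=60s
```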

To show current failcount


[A3][root@nfs3 ~]# pcs resource failcount show luksOpen-3600a098038303753723f49474361665ap1_lvm
No failcounts for luksOpen-3600a098038303753723f49474361665ap1_lvm

To clear the failcount, run


[A3][root@nfs3 ~]# pcs resource failcount reset luksOpen-3600a098038303753723f49474361665ap1_lvm

Note: the threshold only applies to normal (monitor) failures, not to start and stop operations.
Start failures cause the failcount to be set to INFINITY and thus always cause the resource to move
immediately.
Stop failures are slightly different and crucial. If a resource fails to stop and STONITH is enabled, then
the cluster will fence the node in order to be able to start the resource elsewhere. If STONITH is not
enabled, then the cluster has no way to continue and will not try to start the resource elsewhere, but will
try to stop it again after the failure timeout.
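Since the behavior on stop failures depends on whether STONITH is enabled, it is worth checking that cluster property (a sketch):

```shell
# Show the stonith-enabled cluster property (defaults to true)
pcs property show stonith-enabled
```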

Disabling and Banning Cluster Resources


To stop the nfs-server resource on a node without letting it start on another node:
#pcs resource disable nfs-server

To start the nfs-server resource and return it to its normal state:
#pcs resource enable nfs-server

To ban a resource on a node


#pcs resource ban nfs-server nfs4

If no node is specified, the resource is banned on the node where it is currently running.


To remove the ban constraint, run


#pcs resource clear nfs-server

To debug a resource start:


#pcs resource debug-start nfs-server

Disabling Monitor Operations


To disable the monitor operation for a resource:
#pcs resource update pseudo-vg_iscsi7-lv_iscsi7 op monitor enabled="false"

To enable the monitor operation for a resource:


#pcs resource update pseudo-vg_iscsi7-lv_iscsi7 op monitor enabled="true"
To permanently stop monitoring a resource, just delete the monitor operation.
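A sketch of deleting the monitor operation outright (assuming the same resource name as above):

```shell
# Permanently remove the monitor operation from the resource
pcs resource op remove pseudo-vg_iscsi7-lv_iscsi7 monitor
```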

Managed Resources
To set a resource to the unmanaged state. Compared to deleting a resource, an unmanaged resource
remains in the cluster configuration, but Pacemaker does not manage it.
#pcs resource unmanage nfs-server

To set a resource to managed state


#pcs resource manage nfs-server
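A related technique (not shown in the original, mentioned here as an assumption about your setup): instead of unmanaging resources one by one, the whole cluster can be put into maintenance mode, which leaves every resource running but unmanaged:

```shell
# All resources become unmanaged until maintenance mode is turned off
pcs property set maintenance-mode=true
# ... perform maintenance ...
pcs property set maintenance-mode=false
```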
