SBD-based STONITH as a 1st-level device, backed by existing IPMI devices on the 2nd level
Check that the multipath (mpath) device has been created:
lsblk
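For reference, the relevant lsblk rows look roughly like the hypothetical excerpt below (sizes and the parent disk name are illustrative); the key thing to confirm is a TYPE=mpath entry named mpathi:
sdb                8:16   0  10G  0 disk
└─mpathi         253:3    0  10G  0 mpath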
Install the sbd package on both nodes:
yum install -y sbd
Initialize SBD metadata on the shared multipath device:
pcs stonith sbd device setup --device=/dev/disk/by-id/dm-name-mpathi
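To double-check the initialization, you can dump the on-disk header directly with the standard sbd CLI shipped with the package:
sbd -d /dev/disk/by-id/dm-name-mpathi dump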
Load the watchdog module on both nodes:
modprobe softdog
Add the module to autoload; otherwise the cluster will not start right after fencing:
echo softdog > /etc/modules-load.d/watchdog.conf
systemctl restart systemd-modules-load
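Confirm on each node that the module is loaded and the watchdog device exists:
lsmod | grep softdog
ls -l /dev/watchdog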
pcs stonith sbd enable --device=/dev/disk/by-id/dm-name-mpathi
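To review what was written to the cluster configuration, pcs can print the SBD settings (subcommand availability depends on your pcs version):
pcs stonith sbd config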
Restart the cluster so the SBD changes take effect:
pcs cluster stop --all
pcs cluster start --all
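Once both nodes are back, a quick sanity check:
pcs status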
Check the SBD status. Expected output:
[root@inteln13-m07 ~]# pcs stonith sbd status --full
SBD STATUS
<node name>: <installed> | <enabled> | <running>
srvnode-2: YES | YES | YES
srvnode-1: YES | YES | YES
Messages list on device '/dev/disk/by-id/dm-name-mpathi':
0 srvnode-1 clear
1 srvnode-2 clear
SBD header on device '/dev/disk/by-id/dm-name-mpathi':
==Dumping header on disk /dev/disk/by-id/dm-name-mpathi
Header version : 2.1
UUID : c65594e1-42e0-432c-8490-8533e2a69c94
Number of slots : 255
Sector size : 512
Timeout (watchdog) : 5
Timeout (allocate) : 2
Timeout (loop) : 1
Timeout (msgwait) : 10
==Header on disk /dev/disk/by-id/dm-name-mpathi is dumped
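Note the msgwait timeout (10 s in this dump). A common SBD guideline, not specific to this setup, is to keep Pacemaker's stonith-timeout comfortably above msgwait; the value below is only illustrative:
pcs property set stonith-timeout=24s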
Create a dedicated fence_sbd resource for each node, with a location constraint controlling where each may run:
pcs stonith create sbd_c1 fence_sbd devices=/dev/disk/by-id/dm-name-mpathi
pcs constraint location sbd_c1 avoids srvnode-2
pcs stonith create sbd_c2 fence_sbd devices=/dev/disk/by-id/dm-name-mpathi
pcs constraint location sbd_c2 avoids srvnode-1
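The constraints can be listed with:
pcs constraint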
Expected output for pcs stonith show --full:
Resource: stonith-c1 (class=stonith type=fence_ipmilan)
Attributes: auth=PASSWORD delay=5 ipaddr=10.230.244.136 lanplus=true login=admin passwd=admin! pcmk_host_check=static-list pcmk_host_list=srvnode-1 power_timeout=40
Operations: monitor interval=10s (stonith-c1-monitor-interval-10s)
Resource: stonith-c2 (class=stonith type=fence_ipmilan)
Attributes: auth=PASSWORD ipaddr=10.230.244.137 lanplus=true login=admin passwd=admin! pcmk_host_check=static-list pcmk_host_list=srvnode-2 power_timeout=40
Operations: monitor interval=10s (stonith-c2-monitor-interval-10s)
Resource: sbd_c1 (class=stonith type=fence_sbd)
Attributes: devices=/dev/disk/by-id/dm-name-mpathi
Operations: monitor interval=60s (sbd_c1-monitor-interval-60s)
Resource: sbd_c2 (class=stonith type=fence_sbd)
Attributes: devices=/dev/disk/by-id/dm-name-mpathi
Operations: monitor interval=60s (sbd_c2-monitor-interval-60s)
Register the fencing levels so that SBD is tried first and IPMI is used as the fallback:
pcs stonith level add 1 srvnode-1 sbd_c1
pcs stonith level add 2 srvnode-1 stonith-c1
pcs stonith level add 1 srvnode-2 sbd_c2
pcs stonith level add 2 srvnode-2 stonith-c2
Expected output for pcs stonith show --full:
Resource: stonith-c1 (class=stonith type=fence_ipmilan)
Attributes: auth=PASSWORD delay=5 ipaddr=10.230.244.136 lanplus=true login=admin passwd=admin! pcmk_host_check=static-list pcmk_host_list=srvnode-1 power_timeout=40
Operations: monitor interval=10s (stonith-c1-monitor-interval-10s)
Resource: stonith-c2 (class=stonith type=fence_ipmilan)
Attributes: auth=PASSWORD ipaddr=10.230.244.137 lanplus=true login=admin passwd=admin! pcmk_host_check=static-list pcmk_host_list=srvnode-2 power_timeout=40
Operations: monitor interval=10s (stonith-c2-monitor-interval-10s)
Resource: sbd_c1 (class=stonith type=fence_sbd)
Attributes: devices=/dev/disk/by-id/dm-name-mpathi
Operations: monitor interval=60s (sbd_c1-monitor-interval-60s)
Resource: sbd_c2 (class=stonith type=fence_sbd)
Attributes: devices=/dev/disk/by-id/dm-name-mpathi
Operations: monitor interval=60s (sbd_c2-monitor-interval-60s)
Target: srvnode-1
Level 1 - sbd_c1
Level 2 - stonith-c1
Target: srvnode-2
Level 1 - sbd_c2
Level 2 - stonith-c2
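Finally, fencing can be tested end to end. Note that the command below actually fences (reboots) the target node, so run it only on a node you are prepared to lose:
pcs stonith fence srvnode-2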