Saturday, December 26, 2015

HP-UX MC/ServiceGuard 2-node cluster with iSCSI LUNs for MCSG shared package LVM

Creation of a 2-node HP-UX MC/ServiceGuard cluster using Integrity physical servers


Create a VG on the first HP-UX server (ivm1l); this VG must be present for the initial cluster configuration.

Pre-work:

The 2-node HP-UX cluster will have the nodes ivm1l.example.com and ivmu2.example.com.

The shared LUNs for the package VG, LVs, and filesystems come from an iSCSI target server configured on CentOS.


For the quorum mechanism we configure a Quorum Server on a PA-RISC HP-UX 11.31 system, which serves as the Quorum Server for this 2-node ServiceGuard cluster.

Refer to the link below for iSCSI target (on CentOS 7) and iSCSI initiator configuration on HP-UX 11i v3:

http://hpux-interview-questions.blogspot.in/2015/12/iscsi-server-target-on-centos7-for.html 
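Before initializing the shared LUN with pvcreate (as shown below), it is worth confirming that the iSCSI LUN is actually visible on each node. A hedged sketch of that check, assuming standard HP-UX 11i v3 tooling (actual device file names will differ per system):

```shell
# Confirm the iSCSI LUN shows up as a disk device on this node
ioscan -fnC disk

# Create any missing device files for newly discovered hardware
insf -e
```

If the LUN does not appear, revisit the iSCSI initiator configuration from the link above before proceeding.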

For Quorum Server configuration on HP-UX, see the link below:

http://hpux-interview-questions.blogspot.in/2015/12/11iv3-qs-configuration-steps-on-rp3410.html

root@ivm1l@/tmp#pvcreate -f /dev/rdsk/c7t0d1
Physical volume "/dev/rdsk/c7t0d1" has been successfully created.
root@ivm1l@/tmp#vgcreate /dev/lock /dev/dsk/c7t0d1
Increased the number of physical extents per physical volume to 10239.
/dev/lock /dev/dsk/c7t0d1
Volume group "/dev/lock" has been successfully created.
Volume Group configuration for /dev/lock has been saved in /etc/lvmconf/lock.conf

Create the MAP file for the VG, SCP it to the other node, and IMPORT the configuration there:


root@ivm1l@/tmp#vgexport -p -v -s -m /tmp/lock.map lock
Beginning the export process on Volume Group "lock".
vgexport: Volume group "lock" is still active.
/dev/dsk/c7t0d1
vgexport: Preview of vgexport on volume group "lock" succeeded.
root@ivm1l@/tmp#scp -pr /tmp/lock.map ivmu2:/tmp/
lock.map 100% 22 0.0KB/s 0.0KB/s 00:00
root@ivm1l@/tmp#


On the other node, run a vgimport:

vgimport -v -s -m /tmp/lock.map /dev/lock
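After the import it can be useful to verify the VG on the second node before any cluster configuration. A hypothetical sanity check, assuming the same device path seen later in the cmquerycl output (/dev/dsk/c7t0d1 on ivmu2):

```shell
# On ivmu2: activate the imported VG, confirm it matches node 1, then
# deactivate it again -- Serviceguard will manage activation once the
# VG is under cluster control.
vgchange -a y /dev/lock
vgdisplay -v /dev/lock     # PV count, extents, and LV list should match ivm1l
vgchange -a n /dev/lock
```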

For the initial configuration of the cluster, ensure that the /.rhosts and /etc/cmcluster/cmclnodelist files are configured and that the /etc/hosts file is properly updated on both nodes.

ON BOTH NODES

/etc/hosts

192.168.10.62 ivm1l ivm1l.example.com
192.168.10.61 ivmu2 ivmu2.example.com

/.rhosts and /etc/cmcluster/cmclnodelist

192.168.10.62 root
ivm1l root
ivm1l.example.com root
192.168.10.61 root
ivmu2 root
ivmu2.example.com root
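With the trust files in place, a quick hypothetical check before running cmquerycl is to confirm that passwordless remote execution works in both directions (cmquerycl relies on it to probe the remote node):

```shell
# From ivm1l -- should print the remote hostname without a password prompt
remsh ivmu2 hostname

# Run the reverse check from ivmu2 as well
remsh ivm1l hostname
```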

RUN cmquerycl to generate the initial cluster configuration


root@ivm1l@/tmp#cmquerycl -v -q 192.168.10.50 -n ivm1l -n ivmu2 -C /etc/cmcluster/cmclconfig.ascii

Number of configured IPv6 interfaces found: 0.
Warning: Unable to determine local domain name for ivm1l
check_cdsf_group, no cdsf group specified.
Looking for other clusters ... Done
Gathering storage information
Found 23 devices on node ivm1l
Found 23 devices on node ivmu2
Analysis of 46 devices should take approximately 5 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 2 volume groups on node ivm1l
Found 2 volume groups on node ivmu2
Analysis of 4 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Note: Disks were discovered which are not in use by either LVM or VxVM.
Use pvcreate(1M) to initialize a disk for LVM or,
use vxdiskadm(1M) to initialize a disk for VxVM.
Gathering network information
Beginning network probing
Completed network probing

Node Names: ivm1l
ivmu2

Bridged networks (local node information only - full probing was not performed):

2 lan1 (ivm1l)

4 lan1 (ivmu2)

IP subnets:

IPv4:

192.168.10.0 lan1 (ivm1l)
lan1 (ivmu2)

IPv6:

Possible Heartbeat IPs:

IPv4:

192.168.10.0 192.168.10.62 (ivm1l)
192.168.10.63 (ivmu2)

IPv6:

Route Connectivity (local node information only - full probing was not performed):

IPv4:

1 192.168.10.0

Possible IP Monitor Subnets:

IPv4:

192.168.10.0 Polling Target 192.168.10.51

IPv6:

Possible Cluster Lock Devices:

Quorum Server: 192.168.10.50 16 seconds

LVM volume groups:

/dev/vg00 ivm1l

/dev/lock ivm1l
ivmu2

/dev/vg00 ivmu2

LVM physical volumes:

/dev/vg00
/dev/disk/disk2_p2 64000/0xfa00/0x0 ivm1l

/dev/lock
/dev/dsk/c7t0d1 255/0/5.0.0.1 ivm1l

/dev/dsk/c7t0d1 255/0/6.0.0.1 ivmu2

/dev/vg00
/dev/disk/disk3_p2 64000/0xfa00/0x1 ivmu2

LVM logical volumes:

Volume groups on ivm1l:
/dev/vg00/lvol1 FS MOUNTED /stand
/dev/vg00/lvol2
/dev/vg00/lvol3 FS MOUNTED /
/dev/vg00/lvol4 FS MOUNTED /tmp
/dev/vg00/lvol5 FS MOUNTED /home
/dev/vg00/lvol6 FS MOUNTED /opt
/dev/vg00/lvol7 FS MOUNTED /usr
/dev/vg00/lvol8 FS MOUNTED /var

Volume groups on ivmu2:
/dev/vg00/lvol1 FS MOUNTED /stand
/dev/vg00/lvol2
/dev/vg00/lvol3 FS MOUNTED /
/dev/vg00/lvol4 FS MOUNTED /tmp
/dev/vg00/lvol5 FS MOUNTED /home
/dev/vg00/lvol6 FS MOUNTED /opt
/dev/vg00/lvol7 FS MOUNTED /usr
/dev/vg00/lvol8 FS MOUNTED /var
Warning: Failed to find a configuration that satisfies the minimum network configuration requirements.
Minimum network configuration requirements are:
- 2 or more heartbeat networks OR
- 1 heartbeat network with local switch (HP-UX Only) OR
- 1 heartbeat network using APA with 2 trunk members (HP-UX Only) OR
- 1 heartbeat network using bonding (mode 1) with 2 slaves (Linux Only)

Writing cluster data to /etc/cmcluster/cmclconfig.ascii.

EDIT the ASCII file and make changes as per your setup (we made only minimal changes here): change the cluster name, set the IP of the quorum server, and modify other parameters if you wish.
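For reference, the minimal edits in cmclconfig.ascii typically look like the fragment below. This is a sketch assuming the names and addresses from the cmquerycl output above; QS_HOST is normally pre-filled when cmquerycl is run with -q, and QS_POLLING_INTERVAL is shown at its default (in microseconds):

```
CLUSTER_NAME            ivm12_cluster

# Quorum server parameters (verify the address matches your QS system)
QS_HOST                 192.168.10.50
QS_POLLING_INTERVAL     300000000

NODE_NAME               ivm1l
  NETWORK_INTERFACE     lan1
    HEARTBEAT_IP        192.168.10.62

NODE_NAME               ivmu2
  NETWORK_INTERFACE     lan1
    HEARTBEAT_IP        192.168.10.63
```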


root@ivm1l@/tmp#vi /etc/cmcluster/cmclconfig.ascii
"/etc/cmcluster/cmclconfig.ascii" 430 lines, 19550 characters
# **********************************************************************
# ********* HIGH AVAILABILITY CLUSTER CONFIGURATION FILE ***************
# ***** For complete details about cluster parameters and how to *******
# ***** set them, consult the Serviceguard manual. *********************
# **********************************************************************

# Enter a name for this cluster. This name will be used to identify the
# cluster when viewing or manipulating it.

CLUSTER_NAME ivm12_cluster


# The HOSTNAME_ADDRESS_FAMILY parameter specifies the Internet Protocol address
# family to which Serviceguard will attempt to resolve cluster node names and
# quorum server host names.
# If the parameter is set to IPV4, Serviceguard will attempt to resolve the names
# to IPv4 addresses only. This is the default value.
# If the parameter is set to IPV6, Serviceguard will attempt to resolve the names
# to IPv6 addresses only. No IPv4 addresses need be configured on the system or
# listed in the /etc/hosts file except for IPv4 loopback address.
# If the parameter is set to ANY, Serviceguard will attempt to resolve the names
# to both IPv4 and IPv6 addresses. The /etc/hosts file on each node must contain
# entries for all IPv4 and IPv6 addresses used throughout the cluster including
"/etc/cmcluster/cmclconfig.ascii" 430 lines, 19550 characters
root@ivm1l@/tmp#
root@ivm1l@/tmp#vi /etc/cmcluster/cmclconfig.ascii
root@ivm1l@/tmp#

Check the cluster configuration file using cmcheckconf

root@ivm1l@/tmp#cmcheckconf -v -C /etc/cmcluster/cmclconfig.ascii
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cmclconfig.ascii.
MAX_CONFIGURED_PACKAGES configured to 300.
Checking nodes ... Done
Checking existing configuration ... Done
MAX_CONFIGURED_PACKAGES configured to 300.
Gathering storage information
Found 2 devices on node ivm1l
Found 2 devices on node ivmu2
Analysis of 4 devices should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 2 volume groups on node ivm1l
Found 2 volume groups on node ivmu2
Analysis of 4 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Begin file consistency checking
/etc/cmcluster/cmclfiles2check is the same across nodes ivm1l ivmu2
/etc/hosts is the same across nodes ivm1l ivmu2
WARNING: /etc/nsswitch.conf permissions could not be checked on nodes ivm1l ivmu2

WARNING: /etc/nsswitch.conf owner could not be checked on nodes ivm1l ivmu2

WARNING: /etc/nsswitch.conf checksum could not be checked on nodes ivm1l ivmu2

/etc/nsswitch.conf is the same across nodes ivm1l ivmu2
/etc/services is the same across nodes ivm1l ivmu2
/etc/cmcluster/cmignoretypes.conf is the same across nodes ivm1l ivmu2
/etc/cmcluster/cmknowncmds is the same across nodes ivm1l ivmu2
/etc/cmcluster/cmnotdisk.conf is the same across nodes ivm1l ivmu2
Command 'cat /etc/cmcluster/cmclfiles2check | /usr/sbin/cmcompare -W -v -n ivm1l -n ivmu2' exited with status 2
WARNING: Unable to check consistency of all files listed in /etc/cmcluster/cmclfiles2check
Minimum network configuration requirements for the cluster have
not been met. Minimum network configuration requirements are:
- 2 or more heartbeat networks OR
- 1 heartbeat network with local switch (HP-UX Only) OR
- 1 heartbeat network using APA with 2 trunk members (HP-UX Only) OR
- 1 heartbeat network using bonding (mode 1) with 2 slaves (Linux Only)
Maximum configured packages parameter is 300.
Verified 0 new packages.
Total number of packages configured in the cluster is 0.
Creating the cluster configuration for cluster ivm12_cluster
Adding node ivm1l to cluster ivm12_cluster
Adding node ivmu2 to cluster ivm12_cluster
cmcheckconf: Verification completed. No errors found.
Use the cmapplyconf command to apply the configuration.


Apply the cluster configuration file to create the cluster

root@ivm1l@/tmp#cmapplyconf -v -C /etc/cmcluster/cmclconfig.ascii
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cmclconfig.ascii
MAX_CONFIGURED_PACKAGES configured to 300.
check_cdsf_group, no cdsf group specified.
Checking nodes ... Done
Checking existing configuration ... Done
MAX_CONFIGURED_PACKAGES configured to 300.
Gathering storage information
Found 2 devices on node ivm1l
Found 2 devices on node ivmu2
Analysis of 4 devices should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 2 volume groups on node ivm1l
Found 2 volume groups on node ivmu2
Analysis of 4 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Begin file consistency checking
/etc/cmcluster/cmclfiles2check is the same across nodes ivm1l ivmu2
/etc/hosts is the same across nodes ivm1l ivmu2
WARNING: /etc/nsswitch.conf permissions could not be checked on nodes ivm1l ivmu2

WARNING: /etc/nsswitch.conf owner could not be checked on nodes ivm1l ivmu2

WARNING: /etc/nsswitch.conf checksum could not be checked on nodes ivm1l ivmu2

/etc/nsswitch.conf is the same across nodes ivm1l ivmu2
/etc/services is the same across nodes ivm1l ivmu2
/etc/cmcluster/cmignoretypes.conf is the same across nodes ivm1l ivmu2
/etc/cmcluster/cmknowncmds is the same across nodes ivm1l ivmu2
/etc/cmcluster/cmnotdisk.conf is the same across nodes ivm1l ivmu2
Command 'cat /etc/cmcluster/cmclfiles2check | /usr/sbin/cmcompare -W -v -n ivm1l -n ivmu2' exited with status 2
WARNING: Unable to check consistency of all files listed in /etc/cmcluster/cmclfiles2check
Minimum network configuration requirements for the cluster have
not been met. Minimum network configuration requirements are:
- 2 or more heartbeat networks OR
- 1 heartbeat network with local switch (HP-UX Only) OR
- 1 heartbeat network using APA with 2 trunk members (HP-UX Only) OR
- 1 heartbeat network using bonding (mode 1) with 2 slaves (Linux Only)
Maximum configured packages parameter is 300.
Configured 0 new packages.
Total number of packages configured in the cluster is 0.
Creating the cluster configuration for cluster ivm12_cluster
Adding node ivm1l to cluster ivm12_cluster
Adding node ivmu2 to cluster ivm12_cluster
Marking/unmarking volume groups for use in the cluster
Completed the cluster creation
root@ivm1l@/tmp#


View the created cluster

root@ivm1l@/tmp#cmviewcl

CLUSTER           STATUS
ivm12_cluster     down

  NODE            STATUS        STATE
  ivm1l           down          unknown
  ivmu2           down          unknown
root@ivm1l@/tmp#
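At this point the cluster is created but not yet running, which is why cmviewcl reports both nodes as down. As a hypothetical next step (not shown in this session), the cluster would typically be started and the status re-checked:

```shell
# Start the cluster on all configured nodes
cmruncl -v

# Both nodes should now report STATUS up and STATE running
cmviewcl -v
```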
