Saturday, December 26, 2015

11iv3 QS Configuration steps on the RP3410 Server


Download the Quorum Server software from the HP software site. The software is available for HP-UX as well as for Red Hat Linux.


Install the QS software on 192.168.10.50 (the Quorum Server host), then:

Create the folder /var/adm/qs
Create the folder /etc/cmcluster
Create the file /etc/cmcluster/qs_authfile
Add the hostnames of all the cluster nodes to the authfile, as sketched below
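A minimal sketch of these setup commands on the QS host (node names taken from this example's authfile listing):

mkdir -p /var/adm/qs /etc/cmcluster
cat > /etc/cmcluster/qs_authfile <<EOF
ivmu2
ivm1l
ivmu2.example.com
ivm1l.example.com
EOF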


root@node3l@/etc/cmcluster#cat qs_authfile
ivmu2
ivm1l
ivmu2.example.com
ivm1l.example.com


Add the inittab entry for QS

root@node3l@/etc/cmcluster#grep -i qs /etc/inittab
qs:345:respawn:/usr/lbin/qs >> /var/adm/qs/qs.log 2>&1
root@node3l@/etc/cmcluster#

Re-read the inittab file

root@node3l@/etc/cmcluster#init q
root@node3l@/etc/cmcluster#

Verify the QS processes are running


root@node3l@/etc/cmcluster#ps -ef | grep -i qs
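It can also help to confirm that the QS daemon is listening; a quick check, assuming the default hacl-qs port 1238/tcp:

netstat -an | grep 1238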

HP-UX MC/ServiceGuard 2-node cluster with ISCSI LUNs for MCSG shared package LVM

Creation of a 2-node HP-UX MC/ServiceGuard cluster using Integrity physical servers


Create a VG on HP-UX server 1; this VG has to be present for the initial cluster configuration.

Pre-work:

The 2-node HP-UX cluster will have the nodes ivm1l.example.com and ivmu2.example.com.

The shared LUNs for the package VG, LVs, and filesystems come from an iSCSI target server configured on a CentOS system.


For the quorum mechanism we configure a Quorum Server on a PA-RISC HP-UX 11.31 system, which serves this 2-node ServiceGuard cluster.

Refer to the link below for iSCSI target (on CentOS 7) and iSCSI initiator configuration on HP-UX 11iv3:

http://hpux-interview-questions.blogspot.in/2015/12/iscsi-server-target-on-centos7-for.html

For Quorum Server configuration on HP-UX, see:

http://hpux-interview-questions.blogspot.in/2015/12/11iv3-qs-configuration-steps-on-rp3410.html

root@ivm1l@/tmp#pvcreate -f /dev/rdsk/c7t0d1
Physical volume "/dev/rdsk/c7t0d1" has been successfully created.
root@ivm1l@/tmp#vgcreate /dev/lock /dev/dsk/c7t0d1
Increased the number of physical extents per physical volume to 10239.
/dev/lock /dev/dsk/c7t0d1
Volume group "/dev/lock" has been successfully created.
Volume Group configuration for /dev/lock has been saved in /etc/lvmconf/lock.conf
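On LVM releases where vgcreate does not create the VG group device node automatically, you may need to create it by hand first; a hedged sketch (the minor number 0x010000 is illustrative and must be unique per VG):

mkdir /dev/lock
mknod /dev/lock/group c 64 0x010000
vgcreate /dev/lock /dev/dsk/c7t0d1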

Create the map file for the VG, scp it to the other node, and import the configuration there.


root@ivm1l@/tmp#vgexport -p -v -s -m /tmp/lock.map lock
Beginning the export process on Volume Group "lock".
vgexport: Volume group "lock" is still active.
/dev/dsk/c7t0d1
vgexport: Preview of vgexport on volume group "lock" succeeded.
root@ivm1l@/tmp#scp -pr /tmp/lock.map ivmu2:/tmp/
lock.map 100% 22 0.0KB/s 0.0KB/s 00:00
root@ivm1l@/tmp#


On the other node, do a vgimport:

vgimport -v -s -m /tmp/lock.map /dev/lock
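A quick sanity check after the import (a sketch; deactivate the VG again afterwards, since the cluster software will manage activation):

vgchange -a y lock
vgdisplay -v lock
vgchange -a n lock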

For the initial configuration of the cluster, ensure that the /.rhosts and /etc/cmcluster/cmclnodelist files are configured and that the /etc/hosts file is properly updated on both nodes.

ON BOTH THE NODES

/etc/hosts

192.168.10.62 ivm1l ivm1l.example.com
192.168.10.61 ivmu2 ivmu2.example.com

/.rhosts and /etc/cmcluster/cmclnodelist

192.168.10.62 root
ivm1l root
ivm1l.example.com root
192.168.10.61 root
ivmu2 root
ivmu2.example.com root
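A sketch of one way to keep these files identical on both nodes (assuming remsh/rcp access is already working between them):

rcp /etc/cmcluster/cmclnodelist ivmu2:/etc/cmcluster/cmclnodelist
rcp /.rhosts ivmu2:/.rhosts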

Run cmquerycl to generate the initial cluster configuration


root@ivm1l@/tmp#cmquerycl -v -q 192.168.10.50 -n ivm1l -n ivmu2 -C /etc/cmcluster/cmclconfig.ascii

Number of configured IPv6 interfaces found: 0.
Warning: Unable to determine local domain name for ivm1l
check_cdsf_group, no cdsf group specified.
Looking for other clusters ... Done
Gathering storage information
Found 23 devices on node ivm1l
Found 23 devices on node ivmu2
Analysis of 46 devices should take approximately 5 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 2 volume groups on node ivm1l
Found 2 volume groups on node ivmu2
Analysis of 4 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Note: Disks were discovered which are not in use by either LVM or VxVM.
Use pvcreate(1M) to initialize a disk for LVM or,
use vxdiskadm(1M) to initialize a disk for VxVM.
Gathering network information
Beginning network probing
Completed network probing

Node Names: ivm1l
ivmu2

Bridged networks (local node information only - full probing was not performed):

2 lan1 (ivm1l)

4 lan1 (ivmu2)

IP subnets:

IPv4:

192.168.10.0 lan1 (ivm1l)
lan1 (ivmu2)

IPv6:

Possible Heartbeat IPs:

IPv4:

192.168.10.0 192.168.10.62 (ivm1l)
192.168.10.63 (ivmu2)

IPv6:

Route Connectivity (local node information only - full probing was not performed):

IPv4:

1 192.168.10.0

Possible IP Monitor Subnets:

IPv4:

192.168.10.0 Polling Target 192.168.10.51

IPv6:

Possible Cluster Lock Devices:

Quorum Server: 192.168.10.50 16 seconds

LVM volume groups:

/dev/vg00 ivm1l

/dev/lock ivm1l
ivmu2

/dev/vg00 ivmu2

LVM physical volumes:

/dev/vg00
/dev/disk/disk2_p2 64000/0xfa00/0x0 ivm1l

/dev/lock
/dev/dsk/c7t0d1 255/0/5.0.0.1 ivm1l

/dev/dsk/c7t0d1 255/0/6.0.0.1 ivmu2

/dev/vg00
/dev/disk/disk3_p2 64000/0xfa00/0x1 ivmu2

LVM logical volumes:

Volume groups on ivm1l:
/dev/vg00/lvol1 FS MOUNTED /stand
/dev/vg00/lvol2
/dev/vg00/lvol3 FS MOUNTED /
/dev/vg00/lvol4 FS MOUNTED /tmp
/dev/vg00/lvol5 FS MOUNTED /home
/dev/vg00/lvol6 FS MOUNTED /opt
/dev/vg00/lvol7 FS MOUNTED /usr
/dev/vg00/lvol8 FS MOUNTED /var

Volume groups on ivmu2:
/dev/vg00/lvol1 FS MOUNTED /stand
/dev/vg00/lvol2
/dev/vg00/lvol3 FS MOUNTED /
/dev/vg00/lvol4 FS MOUNTED /tmp
/dev/vg00/lvol5 FS MOUNTED /home
/dev/vg00/lvol6 FS MOUNTED /opt
/dev/vg00/lvol7 FS MOUNTED /usr
/dev/vg00/lvol8 FS MOUNTED /var
Warning: Failed to find a configuration that satisfies the minimum network configuration requirements.
Minimum network configuration requirements are:
- 2 or more heartbeat networks OR
- 1 heartbeat network with local switch (HP-UX Only) OR
- 1 heartbeat network using APA with 2 trunk members (HP-UX Only) OR
- 1 heartbeat network using bonding (mode 1) with 2 slaves (Linux Only)

Writing cluster data to /etc/cmcluster/cmclconfig.ascii.

Edit the ASCII file and make changes as per your setup (only minimal changes were made in this example): change the cluster name, set the IP address of the quorum server, and modify other parameters if you wish.


root@ivm1l@/tmp#vi /etc/cmcluster/cmclconfig.ascii
"/etc/cmcluster/cmclconfig.ascii" 430 lines, 19550 characters
# **********************************************************************
# ********* HIGH AVAILABILITY CLUSTER CONFIGURATION FILE ***************
# ***** For complete details about cluster parameters and how to *******
# ***** set them, consult the Serviceguard manual. *********************
# **********************************************************************

# Enter a name for this cluster. This name will be used to identify the
# cluster when viewing or manipulating it.

CLUSTER_NAME ivm12_cluster


# The HOSTNAME_ADDRESS_FAMILY parameter specifies the Internet Protocol address
# family to which Serviceguard will attempt to resolve cluster node names and
# quorum server host names.
# If the parameter is set to IPV4, Serviceguard will attempt to resolve the names
# to IPv4 addresses only. This is the default value.
# If the parameter is set to IPV6, Serviceguard will attempt to resolve the names
# to IPv6 addresses only. No IPv4 addresses need be configured on the system or
# listed in the /etc/hosts file except for IPv4 loopback address.
# If the parameter is set to ANY, Serviceguard will attempt to resolve the names
# to both IPv4 and IPv6 addresses. The /etc/hosts file on each node must contain
# entries for all IPv4 and IPv6 addresses used throughout the cluster including
"/etc/cmcluster/cmclconfig.ascii" 430 lines, 19550 characters
root@ivm1l@/tmp#
root@ivm1l@/tmp#vi /etc/cmcluster/cmclconfig.ascii
root@ivm1l@/tmp#
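A minimal sketch of the parameters typically edited in cmclconfig.ascii for this setup (the cluster name, node, and QS entries match this example; the QS timing values shown are illustrative):

CLUSTER_NAME            ivm12_cluster
QS_HOST                 192.168.10.50
QS_POLLING_INTERVAL     300000000
QS_TIMEOUT_EXTENSION    2000000
NODE_NAME               ivm1l
  NETWORK_INTERFACE     lan1
    HEARTBEAT_IP        192.168.10.62
NODE_NAME               ivmu2
  NETWORK_INTERFACE     lan1
    HEARTBEAT_IP        192.168.10.63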

Check the cluster configuration file using cmcheckconf

root@ivm1l@/tmp#cmcheckconf -v -C /etc/cmcluster/cmclconfig.ascii
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cmclconfig.ascii.
MAX_CONFIGURED_PACKAGES configured to 300.
Checking nodes ... Done
Checking existing configuration ... Done
MAX_CONFIGURED_PACKAGES configured to 300.
Gathering storage information
Found 2 devices on node ivm1l
Found 2 devices on node ivmu2
Analysis of 4 devices should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 2 volume groups on node ivm1l
Found 2 volume groups on node ivmu2
Analysis of 4 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Begin file consistency checking
/etc/cmcluster/cmclfiles2check is the same across nodes ivm1l ivmu2
/etc/hosts is the same across nodes ivm1l ivmu2
WARNING: /etc/nsswitch.conf permissions could not be checked on nodes ivm1l ivmu2

WARNING: /etc/nsswitch.conf owner could not be checked on nodes ivm1l ivmu2

WARNING: /etc/nsswitch.conf checksum could not be checked on nodes ivm1l ivmu2

/etc/nsswitch.conf is the same across nodes ivm1l ivmu2
/etc/services is the same across nodes ivm1l ivmu2
/etc/cmcluster/cmignoretypes.conf is the same across nodes ivm1l ivmu2
/etc/cmcluster/cmknowncmds is the same across nodes ivm1l ivmu2
/etc/cmcluster/cmnotdisk.conf is the same across nodes ivm1l ivmu2
Command 'cat /etc/cmcluster/cmclfiles2check | /usr/sbin/cmcompare -W -v -n ivm1l -n ivmu2' exited with status 2
WARNING: Unable to check consistency of all files listed in /etc/cmcluster/cmclfiles2check
Minimum network configuration requirements for the cluster have
not been met. Minimum network configuration requirements are:
- 2 or more heartbeat networks OR
- 1 heartbeat network with local switch (HP-UX Only) OR
- 1 heartbeat network using APA with 2 trunk members (HP-UX Only) OR
- 1 heartbeat network using bonding (mode 1) with 2 slaves (Linux Only)
Maximum configured packages parameter is 300.
Verified 0 new packages.
Total number of packages configured in the cluster is 0.
Creating the cluster configuration for cluster ivm12_cluster
Adding node ivm1l to cluster ivm12_cluster
Adding node ivmu2 to cluster ivm12_cluster
cmcheckconf: Verification completed. No errors found.
Use the cmapplyconf command to apply the configuration.
(Note: the first cmapplyconf attempt below fails because a wrong filename was given; the correct file is cmclconfig.ascii.)

root@ivm1l@/tmp#cmapplyconf -v -C /etc/cmcluster/cmcluster.ascii
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cmcluster.ascii
cmapplyconf: Nonexistent file: /etc/cmcluster/cmcluster.ascii.


Apply the cluster configuration file to create the cluster

root@ivm1l@/tmp#cmapplyconf -v -C /etc/cmcluster/cmclconfig.ascii
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cmclconfig.ascii
MAX_CONFIGURED_PACKAGES configured to 300.
check_cdsf_group, no cdsf group specified.
Checking nodes ... Done
Checking existing configuration ... Done
MAX_CONFIGURED_PACKAGES configured to 300.
Gathering storage information
Found 2 devices on node ivm1l
Found 2 devices on node ivmu2
Analysis of 4 devices should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 2 volume groups on node ivm1l
Found 2 volume groups on node ivmu2
Analysis of 4 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Begin file consistency checking
/etc/cmcluster/cmclfiles2check is the same across nodes ivm1l ivmu2
/etc/hosts is the same across nodes ivm1l ivmu2
WARNING: /etc/nsswitch.conf permissions could not be checked on nodes ivm1l ivmu2

WARNING: /etc/nsswitch.conf owner could not be checked on nodes ivm1l ivmu2

WARNING: /etc/nsswitch.conf checksum could not be checked on nodes ivm1l ivmu2

/etc/nsswitch.conf is the same across nodes ivm1l ivmu2
/etc/services is the same across nodes ivm1l ivmu2
/etc/cmcluster/cmignoretypes.conf is the same across nodes ivm1l ivmu2
/etc/cmcluster/cmknowncmds is the same across nodes ivm1l ivmu2
/etc/cmcluster/cmnotdisk.conf is the same across nodes ivm1l ivmu2
Command 'cat /etc/cmcluster/cmclfiles2check | /usr/sbin/cmcompare -W -v -n ivm1l -n ivmu2' exited with status 2
WARNING: Unable to check consistency of all files listed in /etc/cmcluster/cmclfiles2check
Minimum network configuration requirements for the cluster have
not been met. Minimum network configuration requirements are:
- 2 or more heartbeat networks OR
- 1 heartbeat network with local switch (HP-UX Only) OR
- 1 heartbeat network using APA with 2 trunk members (HP-UX Only) OR
- 1 heartbeat network using bonding (mode 1) with 2 slaves (Linux Only)
Maximum configured packages parameter is 300.
Configured 0 new packages.
Total number of packages configured in the cluster is 0.
Creating the cluster configuration for cluster ivm12_cluster
Adding node ivm1l to cluster ivm12_cluster
Adding node ivmu2 to cluster ivm12_cluster
Marking/unmarking volume groups for use in the cluster
Completed the cluster creation
root@ivm1l@/tmp#


View the created cluster

root@ivm1l@/tmp#cmviewcl

CLUSTER STATUS
ivm12_cluster down

NODE STATUS STATE
ivm1l down unknown
ivmu2 down unknown
root@ivm1l@/tmp#
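The cluster has been created but is still down. Starting it (a step not shown in the original transcript) would be:

cmruncl -v
cmviewcl -v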

ISCSI Server (Target) on CentOS7 for ISCSI initiator on HP-UX 11iv3 (11.31)

In this example we create the iSCSI target server on CentOS 7 to provide the MC/ServiceGuard shared LUNs for an HP-UX 2-node cluster. If all you have are iSCSI LUNs, you will need to use a Quorum Server for MC/ServiceGuard cluster creation (A.11.20).


NOTE: HP-UX MC/ServiceGuard does not support iSCSI LUNs as the first cluster lock PV/VG or in a cluster lock LUN configuration. They can, however, be used for the shared LUNs that make up the packages' shared LVM filesystems.


For the quorum mechanism while using iSCSI storage on HP-UX in an MC/ServiceGuard configuration, the Quorum Server has to be used (if you do not have a shared FC LUN that can be used as the lock PV/VG or lock LUN).

Current Setup

2 x HP-UX 11iv3 servers (these are the iSCSI initiators)

ivm1l.example.com

ivmu2.example.com

ISCSI target (CentOS 7 System)

centos7.example.com

The CentOS server has a VG in which LVs are created; these LVs will be presented as the iSCSI LUNs to the HP-UX servers.

Each LUN is assigned to both HP-UX systems, since the LUNs will form the clustered VGs for the packages in the MC/ServiceGuard cluster.

iSCSI Target configuration on RHEL/CentOS 7

On the CentOS 7 server, install scsi-target-utils:


yum install -y scsi-target-utils

Note that on CentOS 7 the scsi-target-utils RPM comes from EPEL, so the EPEL repo needs to be enabled on the machine first:

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo rpm -Uvh epel-release-7*.rpm


Create the LVs for use as the iSCSI backing store

These LVs, created in a VG on the CentOS 7 server, will be the backing devices for the iSCSI LUNs accessed by the iSCSI initiators on the HP-UX ServiceGuard members.


vgdisplay -v | grep -i "Lv path"

LV Path /dev/centos/rxquorum
LV Path /dev/centos/vm1_root
LV Path /dev/centos/vm2_root
LV Path /dev/centos/vm12_hpsrp
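For reference, a hedged sketch of how such LVs could be created in the centos VG (sizes are illustrative, roughly matching the LUN sizes reported later by tgt-admin -s):

lvcreate -L 40G -n rxquorum centos
lvcreate -L 50G -n vm1_root centos
lvcreate -L 60G -n vm2_root centos
lvcreate -L 100G -n vm12_hpsrp centos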


Create the target configuration file /etc/tgt/targets.conf as below


[root@centos7 ~]# cat /etc/tgt/targets.conf
# This is a sample config file for tgt-admin.
#
# The "#" symbol disables the processing of a line.

# Set the driver. If not specified, defaults to "iscsi".
default-driver iscsi

# Set iSNS parameters, if needed
#iSNSServerIP 192.168.111.222
#iSNSServerPort 3205
#iSNSAccessControl On
#iSNS On

# Continue if tgtadm exits with non-zero code (equivalent of
# --ignore-errors command line option)
#ignore-errors yes


###### For the node 192.168.10.62 ivm1l first ServiceGuard node #######

<target iqn.2015-12.com.example:san4>
backing-store /dev/centos/vm12_hpsrp
initiator-address 192.168.10.62
</target>



<target iqn.2015-12.com.example:san5>
backing-store /dev/centos/rxquorum
initiator-address 192.168.10.62
</target>


<target iqn.2015-12.com.example:san6>
backing-store /dev/centos/vm1_root
initiator-address 192.168.10.62
</target>

<target iqn.2015-12.com.example:san7>
backing-store /dev/centos/vm2_root
initiator-address 192.168.10.62
</target>


###### For the node 192.168.10.63 ivmu2 second ServiceGuard node #######

<target iqn.2015-12.com.example:san4>
backing-store /dev/centos/vm12_hpsrp
initiator-address 192.168.10.63
</target>

<target iqn.2015-12.com.example:san5>
backing-store /dev/centos/rxquorum
initiator-address 192.168.10.63
</target>


<target iqn.2015-12.com.example:san6>
backing-store /dev/centos/vm1_root
initiator-address 192.168.10.63
</target>

<target iqn.2015-12.com.example:san7>
backing-store /dev/centos/vm2_root
initiator-address 192.168.10.63
</target>
[root@centos7 ~]#
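Equivalently (an untested sketch), both initiator addresses can be listed in a single stanza per target instead of repeating the target blocks:

<target iqn.2015-12.com.example:san4>
    backing-store /dev/centos/vm12_hpsrp
    initiator-address 192.168.10.62
    initiator-address 192.168.10.63
</target>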

Also, it can be useful to check the port currently used by the iSCSI target server:

[root@centos7 ~]# netstat -an | grep -i 3260
tcp 0 0 0.0.0.0:3260 0.0.0.0:* LISTEN
[root@centos7 ~]#


Finally, open TCP port 3260 in the firewall configuration.

(CentOS/RHEL 7 use firewalld in place of the iptables service found on CentOS/RHEL 6.)

[root@centos7 ~]# firewall-cmd --permanent --add-port=3260/tcp
success
[root@centos7 ~]#
[root@centos7 ~]# firewall-cmd --reload

success
[root@centos7 ~]#
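To confirm the port is now open, a quick check:

firewall-cmd --list-ports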


If you want to disable the Linux firewall entirely (not recommended), you can do:

[root@centos7 ~]# systemctl stop firewalld
[root@centos7 ~]# systemctl disable firewalld
[root@centos7 ~]#
[root@centos7 ~]# systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
Active: inactive (dead)

Dec 01 09:36:36 centos7.example.com systemd[1]: Starting firewalld - dynamic firewall daemon...
Dec 01 09:36:38 centos7.example.com systemd[1]: Started firewalld - dynamic firewall daemon.
Dec 01 10:05:53 centos7.example.com systemd[1]: Stopping firewalld - dynamic firewall daemon...
Dec 01 10:05:54 centos7.example.com systemd[1]: Stopped firewalld - dynamic firewall daemon.
Dec 01 22:54:44 centos7.example.com systemd[1]: Stopped firewalld - dynamic firewall daemon.
[root@centos7 ~]#


Start the tgtd service (this makes the iSCSI LUNs visible to the iSCSI initiator client servers):
service tgtd restart
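Since the status output below shows the tgtd unit disabled, you would likely also want it to start at boot; a sketch:

systemctl enable tgtd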


See the status of the tgtd service


[root@centos7 ~]# service tgtd status
Redirecting to /bin/systemctl status tgtd.service
tgtd.service - tgtd iSCSI target daemon
Loaded: loaded (/usr/lib/systemd/system/tgtd.service; disabled)
Active: active (running) since Tue 2015-12-01 23:13:44 EST; 19min ago
Process: 5890 ExecStop=/usr/sbin/tgtadm --op delete --mode system (code=exited, status=0/SUCCESS)
Process: 5881 ExecStop=/usr/sbin/tgt-admin --update ALL -c /dev/null (code=exited, status=0/SUCCESS)
Process: 5879 ExecStop=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
Process: 6030 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v ready (code=exited, status=0/SUCCESS)
Process: 5932 ExecStartPost=/usr/sbin/tgt-admin -e -c $TGTD_CONFIG (code=exited, status=0/SUCCESS)
Process: 5930 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
Process: 5928 ExecStartPost=/bin/sleep 5 (code=exited, status=0/SUCCESS)
Main PID: 5927 (tgtd)
CGroup: /system.slice/tgtd.service
└─5927 /usr/sbin/tgtd -f

Dec 01 23:13:44 centos7.example.com tgtd[5927]: tgtd: bs_thread_open(412) 16
Dec 01 23:13:44 centos7.example.com tgtd[5927]: tgtd: device_mgmt(246) sz:26...t
Dec 01 23:13:44 centos7.example.com tgtd[5927]: tgtd: bs_thread_open(412) 16
Dec 01 23:13:44 centos7.example.com tgtd[5927]: tgtd: device_mgmt(246) sz:26...t
Dec 01 23:13:44 centos7.example.com tgtd[5927]: tgtd: bs_thread_open(412) 16
Dec 01 23:13:44 centos7.example.com systemd[1]: Started tgtd iSCSI target da....
Dec 01 23:14:15 centos7.example.com tgtd[5927]: tgtd: sbc_mode_page_update(8...0
Dec 01 23:14:15 centos7.example.com tgtd[5927]: tgtd: sbc_mode_page_update(8...0
Dec 01 23:14:15 centos7.example.com tgtd[5927]: tgtd: sbc_mode_page_update(8...0
Dec 01 23:29:14 centos7.example.com tgtd[5927]: tgtd: sbc_mode_page_update(8...0
Hint: Some lines were ellipsized, use -l to show in full.
[root@centos7 ~]#


See the status of the exported iSCSI LUNs

It is worth noting that each of these LV-backed iSCSI devices is shared with both HP-UX systems.



[root@centos7 ~]# tgt-admin -s
Target 1: iqn.2015-12.com.example:san1
System information:
Driver: iscsi
State: ready
I_T nexus information:
I_T nexus: 3
Initiator: iqn.2015-12.com.example:ivmu2.example.com alias:
Connection: 0
IP Address: 192.168.10.63
I_T nexus: 6
Initiator: iqn.2015-12.com.example:ivm1l.example.com alias:
Connection: 0
IP Address: 192.168.10.62
I_T nexus: 10
Initiator: iqn.2015-12.com.example:ivmu2.example.com alias:
Connection: 0
IP Address: 192.168.10.63
I_T nexus: 14
Initiator: iqn.2015-12.com.example:ivm1l.example.com alias:
Connection: 0
IP Address: 192.168.10.62
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 107374 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/centos/vm12_hpsrp
Backing store flags:
Account information:
ACL information:
192.168.10.62
192.168.10.63
Target 2: iqn.2015-12.com.example:san2
System information:
Driver: iscsi
State: ready
I_T nexus information:
I_T nexus: 4
Initiator: iqn.2015-12.com.example:ivmu2.example.com alias:
Connection: 0
IP Address: 192.168.10.63
I_T nexus: 7
Initiator: iqn.2015-12.com.example:ivm1l.example.com alias:
Connection: 0
IP Address: 192.168.10.62
I_T nexus: 11
Initiator: iqn.2015-12.com.example:ivmu2.example.com alias:
Connection: 0
IP Address: 192.168.10.63
I_T nexus: 15
Initiator: iqn.2015-12.com.example:ivm1l.example.com alias:
Connection: 0
IP Address: 192.168.10.62
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00020000
SCSI SN: beaf20
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00020001
SCSI SN: beaf21
Size: 42950 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/centos/vm12_quor
Backing store flags:
Account information:
ACL information:
192.168.10.62
192.168.10.63
Target 3: iqn.2015-12.com.example:san3
System information:
Driver: iscsi
State: ready
I_T nexus information:
I_T nexus: 5
Initiator: iqn.2015-12.com.example:ivmu2.example.com alias:
Connection: 0
IP Address: 192.168.10.63
I_T nexus: 8
Initiator: iqn.2015-12.com.example:ivm1l.example.com alias:
Connection: 0
IP Address: 192.168.10.62
I_T nexus: 12
Initiator: iqn.2015-12.com.example:ivmu2.example.com alias:
Connection: 0
IP Address: 192.168.10.63
I_T nexus: 16
Initiator: iqn.2015-12.com.example:ivm1l.example.com alias:
Connection: 0
IP Address: 192.168.10.62
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00030000
SCSI SN: beaf30
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00030001
SCSI SN: beaf31
Size: 53687 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/centos/vm1_root
Backing store flags:
Account information:
ACL information:
192.168.10.62
192.168.10.63
Target 4: iqn.2015-12.com.example:san4
System information:
Driver: iscsi
State: ready
I_T nexus information:
I_T nexus: 1
Initiator: iqn.2015-12.com.example:ivm1l.example.com alias:
Connection: 0
IP Address: 192.168.10.62
I_T nexus: 2
Initiator: iqn.2015-12.com.example:ivmu2.example.com alias:
Connection: 0
IP Address: 192.168.10.63
I_T nexus: 9
Initiator: iqn.2015-12.com.example:ivmu2.example.com alias:
Connection: 0
IP Address: 192.168.10.63
I_T nexus: 13
Initiator: iqn.2015-12.com.example:ivm1l.example.com alias:
Connection: 0
IP Address: 192.168.10.62
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00040000
SCSI SN: beaf40
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00040001
SCSI SN: beaf41
Size: 64425 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/centos/vm2_root
Backing store flags:
Account information:
ACL information:
192.168.10.62
192.168.10.63
[root@centos7 ~]#



ISCSI Initiator Configuration on HP-UX


To configure the HP-UX iSCSI initiator, perform the following steps. These are performed on both HP-UX servers that are to be part of the cluster.

# export PATH=$PATH:/opt/iscsi/bin

Ensure that the iSCSI initiator software is installed on the HP-UX servers; install it if it is missing. Then display the iSCSI initiator name that was configured.

root@ivmu2@/#swlist -l product | grep -i iscsi
ISCSI-SWD B.11.31.03g HP-UX iSCSI Software Initiator
root@ivmu2@/#

See the initial ISCSI configuration

root@ivmu2@/#iscsiutil -l
Initiator Name : iqn.1986-03.com.hp:ivmu2.1964a4de-5026-11dc-ad1a-e8cc2250d732
Initiator Alias :

Authentication Method :
CHAP Method : CHAP_UNI
Initiator CHAP Name :
CHAP Secret :
NAS Hostname :
NAS Secret :
Radius Server Hostname :
Header Digest : None,CRC32C (default)
Data Digest : None,CRC32C (default)
SLP Scope list for iSLPD :
root@ivmu2@/#


See the current ISCSI Client Configuration on the HP-UX servers

root@ivm1l@/#
root@ivm1l@/#iscsiutil -l
Initiator Name : iqn.2015-12.com.example:ivm1l.example.com
Initiator Alias :

Authentication Method : None
CHAP Method : CHAP_UNI
Initiator CHAP Name :
CHAP Secret :
NAS Hostname :
NAS Secret :
Radius Server Hostname :
Header Digest : None,CRC32C (default)
Data Digest : None,CRC32C (default)
SLP Scope list for iSLPD :
root@ivm1l@/#

Set the authentication method to None

root@ivmu2@/#iscsiutil -t authmethod None


Change the iSCSI initiator name.

iscsiutil -i -N <initiator name in iqn or eui format>

iscsiutil -i -N iqn.2015-12.com.example:ivm1l.example.com


Add the ISCSI target server (192.168.10.87) to both the HP-UX servers

iscsiutil -a -I 192.168.10.87


See the ISCSI Target configured for the HP-UX server



root@ivm1l@/#iscsiutil -pD
Discovery Target Information
----------------------------

Target # 1
-----------
IP Address : 192.168.10.87
iSCSI TCP Port : 3260
iSCSI Portal Group Tag : 1

User Configured:
----------------

Authentication Method : None
CHAP Method : CHAP_UNI
Initiator CHAP Name :
CHAP Secret :
Header Digest : None,CRC32C (default)
Data Digest : None,CRC32C (default)
root@ivm1l@/#



root@ivm1l@/#uptime
9:51pm up 5 mins, 1 user, load average: 0.03, 0.03, 0.01

Scan the iSCSI disks; in the listing below, the IET VIRTUAL-DISK entries are the iSCSI LUNs.

root@ivm1l@/#ioscan -fnC disk
Class I H/W Path Driver S/W State H/W Type Description
=======================================================================
disk 0 0/1/1/0.0.0.0.0 sdisk CLAIMED DEVICE SEAGATE ST9300605SS
/dev/dsk/c0t0d0 /dev/rdsk/c0t0d0
disk 1 0/1/1/0.0.0.1.0 sdisk CLAIMED DEVICE SEAGATE ST9300605SS
/dev/dsk/c0t1d0 /dev/dsk/c0t1d0s2 /dev/rdsk/c0t1d0 /dev/rdsk/c0t1d0s2
/dev/dsk/c0t1d0s1 /dev/dsk/c0t1d0s3 /dev/rdsk/c0t1d0s1 /dev/rdsk/c0t1d0s3
disk 6 255/0/0.0.0.1 sdisk CLAIMED DEVICE IET VIRTUAL-DISK
/dev/dsk/c2t0d1 /dev/rdsk/c2t0d1
disk 12 255/0/4.0.0.1 sdisk CLAIMED DEVICE IET VIRTUAL-DISK
/dev/dsk/c6t0d1 /dev/rdsk/c6t0d1
disk 13 255/0/5.0.0.1 sdisk CLAIMED DEVICE IET VIRTUAL-DISK
/dev/dsk/c7t0d1 /dev/rdsk/c7t0d1
disk 14 255/0/6.0.0.1 sdisk CLAIMED DEVICE IET VIRTUAL-DISK
/dev/dsk/c8t0d1 /dev/rdsk/c8t0d1
disk 4 255/1/0.0.0 sdisk CLAIMED DEVICE TEAC DVD-ROM DW-224EV
/dev/dsk/c1t0d0 /dev/rdsk/c1t0d0

root@ivm1l@/#ioscan -NH 64000
H/W Path Class Description
==================================================
64000/0x0 usbmsvbus USB Mass Storage
64000/0x0/0x0 escsi_ctlr USB Mass Storage Virt Ctlr
64000/0x0/0x0.0x0 tgtpath usb target served by usb_ms_scsi driver, target port id 0x0
64000/0x0/0x0.0x0.0x0 lunpath LUN path for disk5
64000/0x2 iscsi iSCSI Virtual Root
64000/0x2/0x0 escsi_ctlr iSCSI Virtual Controller
64000/0x2/0x0.0x0 tgtpath iscsi target served by isvctlr driver, target port id 0x0
64000/0x2/0x0.0x0.0x0 lunpath LUN path for ctl7
64000/0x2/0x0.0x0.0x1000000000000 lunpath LUN path for disk11
64000/0x2/0x0.0x4 tgtpath iscsi target served by isvctlr driver, target port id 0x4
64000/0x2/0x0.0x4.0x0 lunpath LUN path for ctl1
64000/0x2/0x0.0x4.0x1000000000000 lunpath LUN path for disk7
64000/0x2/0x0.0x5 tgtpath iscsi target served by isvctlr driver, target port id 0x5
64000/0x2/0x0.0x5.0x0 lunpath LUN path for ctl5
64000/0x2/0x0.0x5.0x1000000000000 lunpath LUN path for disk15
64000/0x2/0x0.0x6 tgtpath iscsi target served by isvctlr driver, target port id 0x6
64000/0x2/0x0.0x6.0x0 lunpath LUN path for ctl6
64000/0x2/0x0.0x6.0x1000000000000 lunpath LUN path for disk10
64000/0xfa00 esvroot Escsi virtual root
64000/0xfa00/0x0 disk SEAGATE ST9300605SS
64000/0xfa00/0x1 disk SEAGATE ST9300605SS
64000/0xfa00/0x2 disk TEAC DVD-ROM DW-224EV
64000/0xfa00/0x6 ctl IET Controller
64000/0xfa00/0x7 disk IET VIRTUAL-DISK
64000/0xfa00/0x8 ctl IET Controller
64000/0xfa00/0x9 ctl IET Controller
64000/0xfa00/0xa ctl IET Controller
64000/0xfa00/0xb disk IET VIRTUAL-DISK
64000/0xfa00/0xc disk IET VIRTUAL-DISK
64000/0xfa00/0xd disk IET VIRTUAL-DISK
root@ivm1l@/#



root@ivm1l@/tmp#ioscan -kfnC disk
Class I H/W Path Driver S/W State H/W Type Description
=======================================================================
disk 0 0/1/1/0.0.0.0.0 sdisk CLAIMED DEVICE SEAGATE ST9300605SS
/dev/dsk/c0t0d0 /dev/rdsk/c0t0d0
disk 1 0/1/1/0.0.0.1.0 sdisk CLAIMED DEVICE SEAGATE ST9300605SS
/dev/dsk/c0t1d0 /dev/rdsk/c0t1d0
/dev/dsk/c0t1d0s1 /dev/rdsk/c0t1d0s1
/dev/dsk/c0t1d0s2 /dev/rdsk/c0t1d0s2
/dev/dsk/c0t1d0s3 /dev/rdsk/c0t1d0s3
disk 6 255/0/0.0.0.1 sdisk CLAIMED DEVICE IET VIRTUAL-DISK
/dev/dsk/c2t0d1 /dev/rdsk/c2t0d1
disk 12 255/0/4.0.0.1 sdisk CLAIMED DEVICE IET VIRTUAL-DISK
/dev/dsk/c6t0d1 /dev/rdsk/c6t0d1
disk 13 255/0/5.0.0.1 sdisk CLAIMED DEVICE IET VIRTUAL-DISK
/dev/dsk/c7t0d1 /dev/rdsk/c7t0d1
disk 14 255/0/6.0.0.1 sdisk CLAIMED DEVICE IET VIRTUAL-DISK
/dev/dsk/c8t0d1 /dev/rdsk/c8t0d1
disk 4 255/1/0.0.0 sdisk CLAIMED DEVICE TEAC DVD-ROM DW-224EV
/dev/dsk/c1t0d0 /dev/rdsk/c1t0d0
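With the iSCSI LUNs visible on both nodes, the shared package VG can be prepared the same way as the lock VG earlier; a hedged sketch (device file, VG/LV names, and sizes are illustrative):

pvcreate -f /dev/rdsk/c8t0d1
vgcreate /dev/vgpkg /dev/dsk/c8t0d1
lvcreate -L 1024 -n lvdata vgpkg
newfs -F vxfs /dev/vgpkg/rlvdata
vgexport -p -v -s -m /tmp/vgpkg.map vgpkg      # then scp the map file and vgimport on the other node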


Please see the other blog post in which the ServiceGuard 2-node cluster configuration was done using these iSCSI LUNs. If all of the shared LUNs are iSCSI only, you will need the Quorum Server configured to get the base cluster up and running: up to the above-mentioned ServiceGuard version A.11.20, an iSCSI LUN is not supported as a lock LUN or as a first cluster lock PV/VG.

See the latest release notes of ServiceGuard A.11.20 on docs.hp.com


Refer to the link below for iSCSI target (on CentOS 7) and iSCSI initiator configuration on HP-UX 11iv3:

http://hpux-interview-questions.blogspot.in/2015/12/iscsi-server-target-on-centos7-for.html

For Quorum Server configuration on HP-UX, see:

http://hpux-interview-questions.blogspot.in/2015/12/11iv3-qs-configuration-steps-on-rp3410.html