Ensure that both physical Integrity (IA) systems can see and access the shared LUNs presented from the iSCSI or FC SAN.
The two physical systems here are ivm1l and ivmu2, which:
1) Run HP-UX 11iv3 DCOE.
2) Have the latest products and bundles for HP-UX SRP and HP 9000 Containers installed.
3) Have the latest versions of the dependent products installed, as per the latest HP 9000 Containers and HP-UX SRP administration, installation, and configuration guide available on the HP site.
4) Have the latest Aries patches, and a Perl version as recommended, so that the HP-UX system can work as an HP-UX SRP global system hosting system SRPs or HP 9000 containers. (A quick verification sketch follows this list.)
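As a quick check on both systems, the installed bundles can be listed with swlist; this is a minimal sketch, and the grep patterns below are illustrative rather than the exact bundle names:
# list installed bundles and filter for the products of interest
swlist -l bundle | grep -iE 'srp|hp9000|perl'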
root@ivmu2@/#ioscan -kfnNC disk
Class I H/W Path Driver S/W State H/W Type Description
===================================================================
disk 2 64000/0xfa00/0x0 esdisk CLAIMED DEVICE SEAGATE ST9300605SS
/dev/disk/disk2 /dev/rdisk/disk2
disk 3 64000/0xfa00/0x1 esdisk CLAIMED DEVICE SEAGATE ST9300605SS
/dev/disk/disk3 /dev/rdisk/disk3
/dev/disk/disk3_p1 /dev/rdisk/disk3_p1
/dev/disk/disk3_p2 /dev/rdisk/disk3_p2
/dev/disk/disk3_p3 /dev/rdisk/disk3_p3
disk 5 64000/0xfa00/0x2 esdisk CLAIMED DEVICE TEAC DVD-ROM DW-224EV
/dev/disk/disk5 /dev/rdisk/disk5
disk 9 64000/0xfa00/0x8 esdisk CLAIMED DEVICE IET VIRTUAL-DISK
/dev/disk/disk9 /dev/rdisk/disk9
disk 10 64000/0xfa00/0xb esdisk CLAIMED DEVICE IET VIRTUAL-DISK
/dev/disk/disk10 /dev/rdisk/disk10
disk 11 64000/0xfa00/0xc esdisk CLAIMED DEVICE IET VIRTUAL-DISK
/dev/disk/disk11 /dev/rdisk/disk11
disk 15 64000/0xfa00/0xd esdisk CLAIMED DEVICE IET VIRTUAL-DISK
/dev/disk/disk15 /dev/rdisk/disk15
root@ivmu2@/#
+++++++++++++++++++++++++++++++++++
Remove the vPars v6 and Integrity VM host software, since an IVM host cannot work as a global system hosting SRP or HP 9000 containers.
An Integrity Virtual Machine host cannot work as an SRP or HP 9000 system container host (an SRP global system). If you see such a conflict, either unconfigure the IVM host or remove the IVM host and its related software.
To uninstall at the bundle level, run:
swremove -x enforce_dependencies=false -x autoreboot=true BB068AA
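To confirm the bundle before removal (and to verify it is gone afterwards), swlist can be used; BB068AA is the Integrity VM bundle removed above:
# list the Integrity VM bundle, if installed
swlist -l bundle BB068AA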
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
On the first system we will create the volume group, logical volumes, and file systems for the container import.
We will use /dev/disk/disk9, which is a 100 GB disk.
root@ivmu2@/#ioscan -m dsf /dev/rdisk/disk9
Persistent DSF Legacy DSF(s)
========================================
/dev/rdisk/disk9 /dev/rdsk/c6t0d1
root@ivmu2@/#
Create the VG, LVs, and file systems for the container file systems, for recovery of the HP 9000 image that was taken as a tar backup of a running HP-UX 11iv3 system on a PA-RISC platform.
Create the PV
root@ivmu2@/#pvcreate -f /dev/rdisk/disk9
Physical volume "/dev/rdisk/disk9" has been successfully created.
Create the VG
root@ivmu2@/#vgcreate /dev/node4lvg /dev/disk/disk9
Increased the number of physical extents per physical volume to 25599.
Volume group "/dev/node4lvg" has been successfully created.
Volume Group configuration for /dev/node4lvg has been saved in /etc/lvmconf/node4lvg.conf
Create the mount point on the global system and assign the ownership and permissions shown below, to mount the root of the HP 9000 container to be imported.
Note that the root of an HP 9000 container is /var/hpsrp/<HP9000ContainerName>.
root@ivmu2@/#mkdir -p /var/hpsrp/node4l
root@ivmu2@/#chown root:sys /var/hpsrp/node4l
root@ivmu2@/#chmod 755 /var/hpsrp/node4l
Create the LV that will hold the root of the HP 9000 container.
root@ivmu2@/#lvcreate -L 6120 -n rootlv node4lvg
Logical volume "/dev/node4lvg/rootlv" has been successfully created with
character device "/dev/node4lvg/rrootlv".
Logical volume "/dev/node4lvg/rootlv" has been successfully extended.
Volume Group configuration for /dev/node4lvg has been saved in /etc/lvmconf/node4lvg.conf
Create the file system on the LV created above for the HP 9000 container root.
root@ivmu2@/#
root@ivmu2@/#mkfs -F vxfs -o version=4 /dev/node4lvg/rrootlv
version 4 layout
6266880 sectors, 6266880 blocks of size 1024, log size 16384 blocks
largefiles supported
Mount it and ensure that it appears mounted in the bdf output on the global system.
root@ivmu2@/#mount -F vxfs /dev/node4lvg/rootlv /var/hpsrp/node4l
root@ivmu2@/#bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1441792 238280 1194136 17% /
/dev/vg00/lvol1 1835008 445560 1378640 24% /stand
/dev/vg00/lvol8 8912896 1717192 7149624 19% /var
/dev/vg00/lvol7 7798784 3270000 4493472 42% /usr
/dev/vg00/lvol4 524288 20816 499544 4% /tmp
/dev/vg00/lvol6 11206656 5742376 5421704 51% /opt
/dev/vg00/lvol5 131072 5480 124616 4% /home
/dev/node4lvg/rootlv
6266880 18010 5858323 0% /var/hpsrp/node4l
Create the LVs and file systems for the remaining HP 9000 container file systems: /stand, /var, /usr, /opt, /tmp, and /home.
root@ivmu2@/#lvcreate -L 2046 -n standlv /dev/node4lvg
Warning: rounding up logical volume size to extent boundary at size "2048" MB.
Logical volume "/dev/node4lvg/standlv" has been successfully created with
character device "/dev/node4lvg/rstandlv".
Logical volume "/dev/node4lvg/standlv" has been successfully extended.
Volume Group configuration for /dev/node4lvg has been saved in /etc/lvmconf/node4lvg.conf
root@ivmu2@/#lvcreate -L 16384 -n varlv /dev/node4lvg
Logical volume "/dev/node4lvg/varlv" has been successfully created with
character device "/dev/node4lvg/rvarlv".
Logical volume "/dev/node4lvg/varlv" has been successfully extended.
Volume Group configuration for /dev/node4lvg has been saved in /etc/lvmconf/node4lvg.conf
root@ivmu2@/#lvcreate -L 7168 -n usrlv /dev/node4lvg
Logical volume "/dev/node4lvg/usrlv" has been successfully created with
character device "/dev/node4lvg/rusrlv".
Logical volume "/dev/node4lvg/usrlv" has been successfully extended.
Volume Group configuration for /dev/node4lvg has been saved in /etc/lvmconf/node4lvg.conf
root@ivmu2@/#lvcreate -L 5120 -n tmplv /dev/node4lvg
Logical volume "/dev/node4lvg/tmplv" has been successfully created with
character device "/dev/node4lvg/rtmplv".
Logical volume "/dev/node4lvg/tmplv" has been successfully extended.
Volume Group configuration for /dev/node4lvg has been saved in /etc/lvmconf/node4lvg.conf
root@ivmu2@/#lvcreate -L 25600 -n optlv /dev/node4lvg
Logical volume "/dev/node4lvg/optlv" has been successfully created with
character device "/dev/node4lvg/roptlv".
Logical volume "/dev/node4lvg/optlv" has been successfully extended.
Volume Group configuration for /dev/node4lvg has been saved in /etc/lvmconf/node4lvg.conf
root@ivmu2@/#lvcreate -L 1024 -n homelv /dev/node4lvg
Logical volume "/dev/node4lvg/homelv" has been successfully created with
character device "/dev/node4lvg/rhomelv".
Logical volume "/dev/node4lvg/homelv" has been successfully extended.
Volume Group configuration for /dev/node4lvg has been saved in /etc/lvmconf/node4lvg.conf
root@ivmu2@/#
root@ivmu2@/#
root@ivmu2@/#
Create the file systems on the LVs created above for the HP 9000 container image restore.
root@ivmu2@/#for i in rhomelv roptlv rstandlv rtmplv rusrlv rvarlv
> do
> mkfs -F vxfs -o largefiles /dev/node4lvg/$i
> done
version 7 layout
1048576 sectors, 1048576 blocks of size 1024, log size 16384 blocks
largefiles supported
version 7 layout
26214400 sectors, 26214400 blocks of size 1024, log size 65536 blocks
largefiles supported
version 7 layout
2097152 sectors, 2097152 blocks of size 1024, log size 16384 blocks
largefiles supported
version 7 layout
5242880 sectors, 5242880 blocks of size 1024, log size 16384 blocks
largefiles supported
version 7 layout
7340032 sectors, 7340032 blocks of size 1024, log size 16384 blocks
largefiles supported
version 7 layout
16777216 sectors, 16777216 blocks of size 1024, log size 65536 blocks
largefiles supported
Ensure that, for now, only the HP 9000 container root is mounted.
root@ivmu2@/#bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1441792 238288 1194120 17% /
/dev/vg00/lvol1 1835008 445560 1378640 24% /stand
/dev/vg00/lvol8 8912896 1717192 7149624 19% /var
/dev/vg00/lvol7 7798784 3270000 4493472 42% /usr
/dev/vg00/lvol4 524288 20816 499544 4% /tmp
/dev/vg00/lvol6 11206656 5742376 5421704 51% /opt
/dev/vg00/lvol5 131072 5480 124616 4% /home
/dev/node4lvg/rootlv
6266880 18010 5858323 0% /var/hpsrp/node4l
Create the mount points for the remaining file systems of the HP 9000 container to be restored.
root@ivmu2@/#mkdir -p /var/hpsrp/node4l/var
root@ivmu2@/#mkdir -p /var/hpsrp/node4l/usr
root@ivmu2@/#mkdir -p /var/hpsrp/node4l/opt
root@ivmu2@/#mkdir -p /var/hpsrp/node4l/tmp
root@ivmu2@/#mkdir -p /var/hpsrp/node4l/home
root@ivmu2@/#
root@ivmu2@/#
root@ivmu2@/#
Set the proper ownership and permissions on the file-system mount points for the HP 9000 container to be restored.
root@ivmu2@/#chown bin:bin /var/hpsrp/node4l/var/
root@ivmu2@/#chown bin:bin /var/hpsrp/node4l/usr
root@ivmu2@/#chown bin:bin /var/hpsrp/node4l/opt
root@ivmu2@/#chown bin:bin /var/hpsrp/node4l/tmp
root@ivmu2@/#chown root:root /var/hpsrp/node4l/home
root@ivmu2@/#
root@ivmu2@/#
root@ivmu2@/#
Mount the file systems to which the HP 9000 image will be restored from the tar backup of the HP 9000 PA-RISC system taken previously.
root@ivmu2@/#mount /dev/node4lvg/rootlv /var/hpsrp/node4l
mount /dev/node4lvg/varlv /var/hpsrp/node4l/var
mount /dev/node4lvg/usrlv /var/hpsrp/node4l/usr
mount /dev/node4lvg/optlv /var/hpsrp/node4l/opt
mount /dev/node4lvg/tmplv /var/hpsrp/node4l/tmp
mount /dev/node4lvg/homelv /var/hpsrp/node4l/home
Ensure that these are mounted on the global system as shown here.
root@ivmu2@/#bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1441792 238288 1194120 17% /
/dev/vg00/lvol1 1835008 445560 1378640 24% /stand
/dev/vg00/lvol8 8912896 1717192 7149624 19% /var
/dev/vg00/lvol7 7798784 3270000 4493472 42% /usr
/dev/vg00/lvol4 524288 20816 499544 4% /tmp
/dev/vg00/lvol6 11206656 5742376 5421704 51% /opt
/dev/vg00/lvol5 131072 5480 124616 4% /home
/dev/node4lvg/rootlv
6266880 18011 5858321 0% /var/hpsrp/node4l
/dev/node4lvg/varlv
16777216 70756 15662314 0% /var/hpsrp/node4l/var
/dev/node4lvg/usrlv
7340032 19291 6863202 0% /var/hpsrp/node4l/usr
/dev/node4lvg/optlv
26214400 73069 24507505 0% /var/hpsrp/node4l/opt
/dev/node4lvg/tmplv
5242880 18777 4897604 0% /var/hpsrp/node4l/tmp
/dev/node4lvg/homelv
1048576 17749 966408 2% /var/hpsrp/node4l/home
Mount the NFS share holding the tar backup of the HP 9000 container on the global system.
In this case the backup is named node3l.tar, kept on the NFS server 192.168.10.87, exported at /nfslv.
root@ivmu2@/#
root@ivmu2@/#mkdir -p /nfslv
root@ivmu2@/#mount 192.168.10.87:/nfslv /nfslv
root@ivmu2@/#bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1441792 238288 1194120 17% /
/dev/vg00/lvol1 1835008 445560 1378640 24% /stand
/dev/vg00/lvol8 8912896 1717200 7149616 19% /var
/dev/vg00/lvol7 7798784 3270000 4493472 42% /usr
/dev/vg00/lvol4 524288 20816 499544 4% /tmp
/dev/vg00/lvol6 11206656 5742376 5421704 51% /opt
/dev/vg00/lvol5 131072 5480 124616 4% /home
/dev/node4lvg/rootlv
6266880 18011 5858321 0% /var/hpsrp/node4l
/dev/node4lvg/varlv
16777216 70756 15662314 0% /var/hpsrp/node4l/var
/dev/node4lvg/usrlv
7340032 19291 6863202 0% /var/hpsrp/node4l/usr
/dev/node4lvg/optlv
26214400 73069 24507505 0% /var/hpsrp/node4l/opt
/dev/node4lvg/tmplv
5242880 18777 4897604 0% /var/hpsrp/node4l/tmp
/dev/node4lvg/homelv
1048576 17749 966408 2% /var/hpsrp/node4l/home
192.168.10.87:/nfslv
103080888 7844528 89977096 8% /nfslv
root@ivmu2@/#
_____________________________________________
Create a VG map file and import the VG on the other node.
root@ivmu2@/#vgexport -p -v -s -m /tmp/node4lvg.map /dev/node4lvg
Beginning the export process on Volume Group "/dev/node4lvg".
vgexport: Volume group "/dev/node4lvg" is still active.
/dev/disk/disk9
vgexport: Preview of vgexport on volume group "/dev/node4lvg" succeeded.
root@ivmu2@/#ls -la /tmp/node4lvg.map
-rw-r--r-- 1 root sys 82 Nov 11 02:47 /tmp/node4lvg.map
SCP the map file to the other node of the cluster
root@ivmu2@/#scp -pr /tmp/node4lvg.map ivm1l:/tmp/
node4lvg.map 100% 82 0.1KB/s 0.1KB/s 00:00
root@ivmu2@/#
Preview the import first using the -p option of vgimport (see the sketch below), then perform the actual import of the VG on the other node. Make sure the VG is activated on only one node at a time until it is configured as a clustered VG under Serviceguard control; failing to do so may corrupt the data in the LVs.
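A minimal preview sketch using the map file copied above; the -p flag makes vgimport report what it would do without modifying anything:
# preview only: nothing is created or modified
cd /tmp
vgimport -p -v -s -m node4lvg.map /dev/node4lvg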
root@ivm1l@/tmp#cd /tmp; vgimport -v -s -m node4lvg.map /dev/node4lvg
Beginning the import process on Volume Group "/dev/node4lvg".
Logical volume "/dev/node4lvg/rootlv" has been successfully created
with lv number 1.
Logical volume "/dev/node4lvg/standlv" has been successfully created
with lv number 2.
Logical volume "/dev/node4lvg/varlv" has been successfully created
with lv number 3.
Logical volume "/dev/node4lvg/usrlv" has been successfully created
with lv number 4.
Logical volume "/dev/node4lvg/tmplv" has been successfully created
with lv number 5.
Logical volume "/dev/node4lvg/optlv" has been successfully created
with lv number 6.
Logical volume "/dev/node4lvg/homelv" has been successfully created
with lv number 7.
vgimport: Volume group "/dev/node4lvg" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.
Ensure that the VG has been imported on the other node and that it appears in the /etc/lvmtab file.
root@ivm1l@/tmp#
root@ivm1l@/tmp#
root@ivm1l@/tmp#strings /etc/lvmtab
/dev/vg00
/dev/disk/disk2_p2
/dev/lock
/dev/dsk/c7t0d1
/dev/node4lvg
/dev/dsk/c6t0d1
root@ivm1l@/tmp#
++++++++++++++++++++++++++++++++++++++++++
Start the container restore on the first system, where the file systems for the HP 9000 container restore are still mounted.
This may take considerable time, depending on the amount of data in the tar backup of the HP 9000 system.
The syntax is: hp9000_recover_image <container_root> <backup_file>
/opt/HP9000-Containers/bin/hp9000_recover_image /var/hpsrp/node4l /nfslv/node3l.tar
HP 9000 root: /var/hpsrp/node4l
Image file : /nfslv/node3l.tar
Default options for tar: -xf
Additional options to use []:
/var/hpsrp/node4l is not empty
Do you want to continue? [no]:
root@ivmu2@/#/opt/HP9000-Containers/bin/hp9000_recover_image /var/hpsrp/node4l /nfslv/>
HP 9000 root: /var/hpsrp/node4l
Image file : /nfslv/node3l.tar
Default options for tar: -xf
Additional options to use []:
/var/hpsrp/node4l is not empty
Do you want to continue? [no]: yes
Do you want to re-direct stdout? [yes]:
Do you want to run recovery in the background? [no]:
Recovery for HP 9000 system container
About to execute chroot /var/hpsrp/node4l /tmp/tar -xf /tmp/HP9000_IMAGE/node3l.tar> /tmp/hp9000_recover.log 2>&1
Start recovery now? [yes]:
Executing recovery ...
Recovery reported some error - check /tmp/hp9000_recover.log
You may ignore errors relating to /dev recovery
root@ivmu2@/#
++++++++++++++++++++++++++++++++++++++++++++++++++++
srp -a: add the container configuration. (After the restore of the tar backup is complete, add the container to the global system using srp -a.)
NOTE: As this container is to be integrated with Serviceguard, while adding the container make sure of the following:
set autostart to no;
specify the IP address, but do NOT add it to the netconf file.
root@ivmu2@/var/hpsrp#srp -a node4l -t hp9000sys
Enter the requested values when prompted, then press return.
Enter "?" for help at prompt. Press control-c to exit.
Services to add: [cmpt,admin,init,prm,network,provision]
Autostart container at system boot? [yes] no
Root user password :
Reenter root user password:
Configure DNS Resolver? [no]
Use rules to restrict unsupported commands? [no]
List of Unix user names for container administrator: [root]
PRM group name to associate with this container: [node4l]
PRM group type (FSS, PSET): [FSS]
PRM FSS group CPU shares: [10]
PRM FSS group CPU cap (press return for no cap): []
PRM group memory shares: [10]
PRM group memory cap (press return for no cap): []
PRM group shared memory (press return for no dedicated memory): []
IP address: 192.168.10.125
Add IP address to netconf file? [yes] no
The following template variables have been set to the values shown:
assign_ip = no
autostart = no
ip_address = 192.168.10.125
root_password = ******
Press return or enter "yes" to make the selected modifications with these
values. Do you wish to continue? [yes] yes
add compartment rules succeeded
add RBAC admin role for compartment succeeded
add prm rules succeeded
Mounting loopback (LOFS) filesystems ...
Generating swlist ...
++++++++++++++++++++
Do a test start and stop of the container on the first global system, where the HP 9000 container data was restored and the container was added.
root@ivmu2@/#srp -start node4l
HP-UX SRP Container start-up in progress
________________________________________
Setting up Containers ..................................... OK
Setting hostname .......................................... OK
Start containment subsystem configuration ................. OK
Start Utmp Daemon : manages User Accounting Database ...... OK
Recover editor crash files ................................ OK
List and/or clear temporary files ......................... OK
Clean up old log files .................................... OK
Start system message logging daemon ....................... OK
Checking user database .................................... OK
Starting HP-UX Secure Shell ............................... OK
Start NFS core subsystem .................................. OK
Start NIS server subsystem ................................ OK
Start ldap client daemon .................................. N/A
Start NIS/LDAP server subsystem ........................... N/A
Start NIS client subsystem ................................ OK
Start lock manager subsystem .............................. OK
Start NFS client subsystem ................................ OK
Start AUTOFS subsystem .................................... OK
Finish containment subsystem configuration ................ FAIL *
Start Internet services daemon ............................ OK
Start remote system status daemon ......................... N/A
Starting sendmail [Done] Starting sm-client [Done] ........ OK
Starting outbound connection daemons for DDFA software .... N/A
Start DCE daemons ......................................... N/A
Starting the password/group assist subsystem .............. OK
Start print spooler ....................................... N/A
Start clock daemon ........................................ OK
Initialize Software Distributor agent daemon .............. OK
Starting the Winbind Daemon ............................... N/A
Starting HP-UX Apache-based Web Server .................... N/A
Starting HP-UX Tomcat-based Servlet Engine ................ N/A
Starting the HPUX Webproxy subsystem ...................... N/A
Start CDE login server .................................... OK
Starting PRNGD (Pseudo Random Number Generator Daemon) .... N/A
* - An error has occurred !
* - Refer to the file //etc/rc.log for more information.
The HP-UX SRP Container is ready.
Verify whether the virtual IP of the container is seen in the netstat -in output on the global system. (It will not be assigned automatically, as we did not add the IP to the container's netconf file while performing srp -a; the Serviceguard package will assign it instead.)
root@ivmu2@/#netstat -in
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
lan1 1500 192.168.10.0 192.168.10.63 96608 0 33917 0 0
lan0 1500 192.168.10.0 192.168.10.125 8 0 8 0 0
lo0 32808 127.0.0.0 127.0.0.1 4346 0 4346 0 0
Stop the SRP on the first global system.
root@ivmu2@/#srp -stop
Do you wish to stop all containers (yes/no) : [no] yes
stopping node4l
HP-UX SRP Container stop in progress
____________________________________
Stop CDE login server ........................................... OK
Stopping HP-UX Apache-based Web Server .......................... OK
Stopping HP-UX Tomcat-based Servlet Engine. ..................... N/A
Stopping the HPUX Webproxy subsystem ............................ OK
Shutting down the Winbind Daemon ................................ OK
Shutting down Perf Agent software ............................... OK
Stopping the perfd daemon. ...................................... OK
Stop clock daemon ............................................... OK
Stop print spooler .............................................. OK
Stopping HP-UX Secure Shell ..................................... FAIL *
Stop DCE daemons ................................................ OK
Stopping outbound connection daemons for DDFA software .......... N/A
Shutting down sendmail [Done] Shutting down sm-client [Done] .... OK
Stopping remote system status daemon ............................ N/A
Stopping Internet services daemon ............................... OK
Stop AUTOFS subsystem ........................................... OK
Stop NFS client subsystem ....................................... OK
Stop lock manager subsystem ..................................... OK
Stop NIS client subsystem ....................................... OK
Stop ldap client daemon ......................................... OK
Stop NIS/LDAP server subsystem .................................. N/A
Stop NIS server subsystem ....................................... OK
Stop NFS core subsystem ......................................... OK
Stop system message logging daemon .............................. OK
Stop Software Distributor agent daemon .......................... OK
Stop Utmp Daemon ................................................ OK
Killing user processes .......................................... OK
Umounting all directories ....................................... OK
* - An error has occurred !
* - Refer to the file //etc/rc.log for more information.
HP-UX SRP Container transition to run-level 0 is complete.
root@ivmu2@/#
root@ivmu2@/#
root@ivmu2@/#srp -l
Name Type Template Enabled Services
----------------------------------------------------------------------
node4l hp9000sys hp9000sys admin,cmpt,init,network,prm,provision
root@ivmu2@/#srp -replace node4l -service network
root@ivmu2@/#srp -status
NAME TYPE STATE SUBTYPE ROOTPATH
node4l hp9000sys stopped none /var/hpsrp/node4l
root@ivmu2@/#
root@ivmu2@/#
+++++++++++++++++++++++++++++++
Export only the container configuration, using an xfile (exchange file) that can then be used to import the container on the other system.
root@ivmu2@/#srp -export node4l -xfile /node4l_xfile
/var/hpsrp/node4l will require 304407K octets in the exchange file
Enter the requested values when prompted, then press return.
Enter "?" for help at prompt. Press control-c to exit.
Save SRP container directories in exchange file? [no]
Press return or enter "yes" to process this template.
Do you wish to continue? [yes] yes
export compartment rules succeeded
export RBAC admin role for compartment succeeded
export prm rules succeeded
export ipfilter rules succeeded
export ipsec rules succeeded
archiving ...
export system product list succeeded
export compartment network service rules succeeded
export provision service succeeded
root@ivmu2@/#
+++++++++++++++++++++++++++
Stop the container if it is running on the first global system.
Unmount the file systems related to the container that are still mounted on the first global system.
Deactivate the VG on the first global system before activating it on the second global system, to avoid data corruption.
scp the export file to the other node:
root@ivmu2@/#scp -pr /node4l_xfile ivm1l:/
node4l_xfile 100% 90KB 90.0KB/s 90.0KB/s 00:00
===========================
Import the container on the other node; log in to the other global system to perform the import there.
root@ivm1l@/#srp -l
No SRP containers configured.
Perform the import of the container.
root@ivm1l@/#srp -import -xfile node4l_xfile autostart=no
running fitness tests ...
The container's home directory, "/var/hpsrp/node4l", is not fully provisioned.
The home directory is either empty or some of the container's files are missing.
The import fails because the container root is not yet available there. Activate the VG and mount only the container root before importing on the second server, which will host the container.
Ensure that the VG is deactivated on the first node, and only then activate it on the second node using vgchange -a y.
After the VG activation, mount the container root, and only the container root, for the container import.
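A minimal sketch of the hand-off, using the VG and mount point created earlier (unmount any remaining container file systems first, as noted above):
# on ivmu2 (first global system): deactivate the VG
vgchange -a n /dev/node4lvg
# on ivm1l (second global system): activate the VG and mount only the container root
vgchange -a y /dev/node4lvg
mount /dev/node4lvg/rootlv /var/hpsrp/node4l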
root@ivm1l@/#mount /dev/node4lvg/rootlv /var/hpsrp/node4l
root@ivm1l@/#bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1441792 239136 1193272 17% /
/dev/vg00/lvol1 1835008 389968 1433800 21% /stand
/dev/vg00/lvol8 72376320 6418856 65452112 9% /var
/dev/vg00/lvol7 7798784 3270872 4492608 42% /usr
/dev/vg00/lvol4 524288 21632 498864 4% /tmp
/dev/vg00/lvol6 11206656 5742376 5421704 51% /opt
/dev/vg00/lvol5 131072 5480 124616 4% /home
/dev/node4lvg/rootlv
6266880 323245 5572314 5% /var/hpsrp/node4l
Start the import of the container.
root@ivm1l@/#srp -import -xfile node4l_xfile autostart=no
running fitness tests ...
==============================================================================
Container "node4l" exported from ivmu2 on Wed Nov 11 03:58:00 MST 2015
==============================================================================
Enter the requested values when prompted, then press return.
Enter "?" for help at prompt. Press control-c to exit.
Default run-level: [3]
Primary network interface name: [] lan1
The following template variables have been set to the values shown:
iface = lan1
Press return or enter "yes" to make the selected modifications with these
values. Do you wish to continue? [yes]
import compartment rules succeeded
import RBAC admin role for compartment succeeded
import prm rules succeeded
import ipfilter rules succeeded
import ipsec rules succeeded
creating mount points ...
restoring filesystems: this will take some time ...
Configuring device files ...
Configuring srp user and group ids ...
WARNING: touch: /var/hpsrp/node4l/var/adm/sw/hp9000_needs_recovery cannot create
import compartment network service rules succeeded
added ip address (192.168.10.125) for interface (lan1:1)
import ipaddress succeeded
import provision service succeeded
List the containers and ensure that the container was imported.
root@ivm1l@/#srp -l
Name Type Template Enabled Services
----------------------------------------------------------------------
node4l hp9000sys hp9000sys admin,cmpt,init,network,prm,provision
Start the container to test it on node 2.
root@ivm1l@/#srp -start node4l
grep: can't open /opt/ssh/etc/sshd_config
HP-UX SRP Container start-up in progress
________________________________________
Setting up Containers ..................................... OK
Setting hostname .......................................... OK
Start containment subsystem configuration ................. OK
Start Utmp Daemon : manages User Accounting Database ...... OK
Recover editor crash files ................................ OK
List and/or clear temporary files ......................... OK
Clean up old log files .................................... OK
Start system message logging daemon ....................... OK
Checking user database .................................... OK
Starting HP-UX Secure Shell ............................... OK
Start NFS core subsystem .................................. OK
Start NIS server subsystem ................................ OK
Start ldap client daemon .................................. N/A
Start NIS/LDAP server subsystem ........................... N/A
Start NIS client subsystem ................................ OK
Start lock manager subsystem .............................. OK
Start NFS client subsystem ................................ OK
Start AUTOFS subsystem .................................... OK
Finish containment subsystem configuration ................ FAIL *
Start Internet services daemon ............................ OK
Start remote system status daemon ......................... N/A
Starting sendmail [Done] Starting sm-client [Done] ........ OK
Starting outbound connection daemons for DDFA software .... N/A
Start DCE daemons ......................................... N/A
Starting the password/group assist subsystem .............. OK
Start print spooler ....................................... N/A
Start clock daemon ........................................ OK
Initialize Software Distributor agent daemon .............. OK
Starting the Winbind Daemon ............................... N/A
Starting HP-UX Apache-based Web Server .................... N/A
Starting HP-UX Tomcat-based Servlet Engine ................ N/A
Starting the HPUX Webproxy subsystem ...................... N/A
Start CDE login server .................................... OK
Starting PRNGD (Pseudo Random Number Generator Daemon) .... N/A
* - An error has occurred !
* - Refer to the file //etc/rc.log for more information.
The HP-UX SRP Container is ready.
Verify that the container has come up without issues on the second node
root@ivm1l@/#srp -status
NAME TYPE STATE SUBTYPE ROOTPATH
node4l hp9000sys started none /var/hpsrp/node4l
root@ivm1l@/#bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1441792 239360 1193048 17% /
/dev/vg00/lvol1 1835008 389968 1433800 21% /stand
/dev/vg00/lvol8 72376320 6418896 65452064 9% /var
/dev/vg00/lvol7 7798784 3270872 4492608 42% /usr
/dev/vg00/lvol4 524288 21632 498864 4% /tmp
/dev/vg00/lvol6 11206656 5742376 5421704 51% /opt
/dev/vg00/lvol5 131072 5480 124616 4% /home
/dev/node4lvg/rootlv
6266880 323303 5572258 5% /var/hpsrp/node4l
/var/hpsrp/node4l 6266880 323303 5572258 5% /var/hpsrp/node4l
/dev/node4lvg/usrlv
7340032 2241068 4780284 32% /var/hpsrp/node4l/usr
/dev/node4lvg/varlv
16777216 1876194 13969777 12% /var/hpsrp/node4l/var
/usr/lib/hpux32 7798784 3270872 4492608 42% /var/hpsrp/node4l/usr/lib/hpux32
/usr/lib/hpux64 7798784 3270872 4492608 42% /var/hpsrp/node4l/usr/lib/hpux64
/var/hpsrp/node4l/dev/node4lvg/tmplv
5242880 18784 4897596 0% /var/hpsrp/node4l/tmp
/var/hpsrp/node4l/dev/node4lvg/optlv
26214400 3839542 20976546 15% /var/hpsrp/node4l/opt
/var/hpsrp/node4l/dev/node4lvg/homelv
1048576 17911 966374 2% /var/hpsrp/node4l/home
root@ivm1l@/#
Editing the Serviceguard package configuration files to package the HP 9000 container into the MC/ServiceGuard cluster:
Create a directory /etc/cmcluster/node4l.
Copy all the files from /opt/hpsrp/example/serviceguard/srp_as_sg_package to /etc/cmcluster/node4l (a sketch of both steps follows).
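A minimal sketch of those two steps:
# create the package directory and copy the example scripts into it
mkdir -p /etc/cmcluster/node4l
cp -p /opt/hpsrp/example/serviceguard/srp_as_sg_package/* /etc/cmcluster/node4l/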
root@ivmu2/opt/hpsrp/example/serviceguard/srp_as_sg_package#ls -la
total 208
dr-xr-xr-x 2 bin bin 8192 Nov 8 06:17 .
dr-xr-xr-x 3 bin bin 96 Nov 8 06:17 ..
-rw-r--r-- 1 bin bin 8863 Apr 24 2014 README
-rwxr-xr-x 1 bin bin 2805 Apr 24 2014 srp_control_script
-rwxr-xr-x 1 bin bin 1811 Apr 24 2014 srp_monitor_script
-rw-r--r-- 1 bin bin 44216 Apr 24 2014 srp_package.conf
-rwxr-xr-x 1 bin bin 4538 Apr 24 2014 srp_route_script
-rw-r--r-- 1 bin bin 1597 Apr 24 2014 srp_script.incl
root@ivmu2@/opt/hpsrp/example/serviceguard/srp_as_sg_package#
Edit the config files.
We edited three of the files:
srp_control_script
srp_package.conf
srp_script.incl
After editing, the files look like this:
+++++++++++++++++++++++++++++++++++++++++
srp_package.conf
root@ivmu2@/etc/cmcluster/node4l#cat srp_package.conf | grep -v ^#
package_name node4l
package_description "Serviceguard Package"
module_name sg/basic
module_version 1
module_name sg/failover
module_version 1
module_name sg/priority
module_version 1
module_name sg/service
module_version 1
module_name sg/external
module_version 1
module_name sg/volume_group
module_version 1
module_name sg/filesystem
module_version 1
module_name sg/package_ip
module_version 1
module_name sg/acp
module_version 1
module_name sg/pr_cntl
module_version 1
package_type failover
node_name *
auto_run yes
node_fail_fast_enabled no
run_script_timeout no_timeout
halt_script_timeout no_timeout
successor_halt_timeout no_timeout
script_log_file $SGRUN/log/$SG_PACKAGE.log
operation_sequence $SGCONF/scripts/sg/volume_group.sh
operation_sequence $SGCONF/scripts/sg/filesystem.sh
operation_sequence $SGCONF/scripts/sg/package_ip.sh
operation_sequence $SGCONF/scripts/sg/external.sh
operation_sequence $SGCONF/scripts/sg/service.sh
failover_policy configured_node
failback_policy manual
priority no_priority
concurrent_vgchange_operations 1
enable_threaded_vgchange 0
vgchange_cmd "vgchange -a e"
cvm_activation_cmd "vxdg -g \${DiskGroup} set activation=exclusivewrite"
vxvol_cmd "vxvol -g \${DiskGroup} startall"
vxvm_dg_retry no
deactivation_retry_count 2
kill_processes_accessing_raw_devices no
concurrent_fsck_operations 1
concurrent_mount_and_umount_operations 1
fs_mount_retry_count 0
fs_umount_retry_count 1
ip_subnet 192.168.10.0
ip_address 192.168.10.125
external_script /etc/cmcluster/node4l/srp_route_script
external_script /etc/cmcluster/node4l/srp_control_script
service_name monitor_sshd_srp_as_sg_package
service_cmd "/etc/cmcluster/node4l/srp_monitor_script"
service_restart none
service_fail_fast_enabled no
service_halt_timeout 300
user_name john
user_host cluster_member_node
user_role package_admin
fs_name /dev/node4lvg/rootlv
fs_directory /var/hpsrp/node4l
fs_type "vxfs"
fs_mount_opt "-o rw"
fs_umount_opt ""
fs_fsck_opt ""
vg node4lvg
root@ivmu2@/etc/cmcluster/node4l#
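Note that vgchange_cmd above is set to "vgchange -a e", so Serviceguard activates the volume group in exclusive mode; this enforces the one-node-at-a-time activation rule that we had to observe manually during the import steps.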
++++++++++++++++++++++++++++
srp_control_script
root@ivmu2@/etc/cmcluster/node4l#cat srp_control_script | grep -v ^#
if [[ -z $SG_UTILS ]]
then
. /etc/cmcluster.conf
SG_UTILS=$SGCONF/scripts/mscripts/utils.sh
fi
if [[ -f ${SG_UTILS} ]]; then
. ${SG_UTILS}
if (( $? != 0 ))
then
echo "ERROR: Unable to source package utility functions file: ${SG_UTILS}"
exit 1
fi
else
echo "ERROR: Unable to find package utility functions file: ${SG_UTILS}"
exit 1
fi
sg_source_pkg_env $*
. `dirname $0`/srp_script.incl
function srp_validate
{
/sbin/srp -status $SRP_NAME
return $?
}
function srp_start
{
/sbin/srp -start $SRP_NAME
return $?
}
function srp_stop
{
/sbin/srp -stop $SRP_NAME
return $?
}
sg_log 5 "SRP start/stop script"
typeset -i exit_val=0
case ${1} in
start)
srp_start
exit_val=$?
;;
stop)
srp_stop
exit_val=$?
;;
validate)
srp_validate
exit_val=$?
;;
*)
sg_log 0 "INFO: Unknown operation: $1"
;;
esac
exit $exit_val
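The control script simply maps Serviceguard's start, stop, and validate operations onto srp -start, srp -stop, and srp -status for the container named in srp_script.incl.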
+++++++++++++++++++++++++++++++++
srp_script.incl
root@ivm1l@/etc/cmcluster/node4l#cat srp_script.incl | grep -v ^#
SRP_NAME=node4l
SRP_TYPE=`/opt/hpsrp/bin/srp -status $SRP_NAME | tail -1 | awk '{print $2}'`
SRP_SG_MANAGED_IP[0]=192.168.10.125
SRP_SG_GATEWAY[0]=192.168.10.51
if [[ x$SRP_TYPE = "xworkload" ]]
then
SRP_PIDFILE[0]="/var/hpsrp/$SRP_NAME/opt/ssh/sshd.pid"
elif [[ x$SRP_TYPE = "xsystem" ]]
then
SRP_PIDFILE[0]="/var/hpsrp/$SRP_NAME/var/run/sshd.pid"
fi
root@ivm1l@/etc/cmcluster/node4l#
++++++++++++++++
Copy all the files in /etc/cmcluster/node4l to the other node at the same location.
Run cmcheckconf and then cmapplyconf on the package configuration file.
root@ivm1l@/etc/cmcluster/node4l#cmcheckconf -v -P srp_package.conf
Begin package verification...
Checking existing configuration ... Done
Attempting to validate package node4l.
Validating package node4l via /etc/cmcluster/scripts/mscripts/master_control_script.sh ...
Waiting for up to 1200 seconds for the validation.
On node ivmu2, validation of package node4l succeeded with:
NAME TYPE STATE SUBTYPE ROOTPATH
node4l hp9000sys stopped none /var/hpsrp/node4l
On node ivm1l, validation of package node4l succeeded with:
NAME TYPE STATE SUBTYPE ROOTPATH
node4l hp9000sys stopped none /var/hpsrp/node4l
Validation for package node4l succeeded via /etc/cmcluster/scripts/mscripts/mast
Gathering storage information
Found 1 devices on node ivm1l
Found 1 devices on node ivmu2
Analysis of 2 devices should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 1 volume groups on node ivm1l
Found 1 volume groups on node ivmu2
Analysis of 2 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
ERROR : Package node4l has an iSCSI disk /dev/dsk/c6t0d1 configured in volume gr
but the persistent reservation module pr_cntl is not included.
Include sg/pr_cntl module as part of the package.
Maximum configured packages parameter is 300.
Verified 1 new packages.
Total number of packages configured in the cluster is 0.
cmcheckconf: Unable to verify package.
+++++++++++++++++++++++++++
Add the sg/pr_cntl module to the package config file if iSCSI is being used.
The lines below were added to the package config file, and the file was copied again to the other node at the same location.
module_name sg/pr_cntl
module_version 1
root@ivm1l@/etc/cmcluster/node4l#scp -pr * ivmu2:/etc/cmcluster/node4l/
root@ivm1l@/etc/cmcluster/node4l#cmcheckconf -v -P srp_package.conf
Begin package verification...
Checking existing configuration ... Done
Attempting to validate package node4l.
Validating package node4l via /etc/cmcluster/scripts/mscripts/master_control_script.sh ...
Waiting for up to 1200 seconds for the validation.
On node ivmu2, validation of package node4l succeeded with:
NAME TYPE STATE SUBTYPE ROOTPATH
node4l hp9000sys stopped none /var/hpsrp/node4l
On node ivm1l, validation of package node4l succeeded with:
NAME TYPE STATE SUBTYPE ROOTPATH
node4l hp9000sys stopped none /var/hpsrp/node4l
Validation for package node4l succeeded via /etc/cmcluster/scripts/mscripts/master_control_script.sh.
Gathering storage information
Found 1 devices on node ivm1l
Found 1 devices on node ivmu2
Analysis of 2 devices should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 1 volume groups on node ivm1l
Found 1 volume groups on node ivmu2
Analysis of 2 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Maximum configured packages parameter is 300.
Verified 1 new packages.
Total number of packages configured in the cluster is 0.
cmcheckconf: Verification completed. No errors found.
Use the cmapplyconf command to apply the configuration.
root@ivm1l@/etc/cmcluster/node4l#cmapplyconf -v -P srp_package.conf
Begin package verification...
Checking existing configuration ... Done
Attempting to add package node4l.
Validating package node4l via /etc/cmcluster/scripts/mscripts/master_control_script.sh ...
Waiting for up to 1200 seconds for the validation.
On node ivmu2, validation of package node4l succeeded with:
NAME TYPE STATE SUBTYPE ROOTPATH
node4l hp9000sys stopped none /var/hpsrp/node4l
On node ivm1l, validation of package node4l succeeded with:
NAME TYPE STATE SUBTYPE ROOTPATH
node4l hp9000sys stopped none /var/hpsrp/node4l
Validation for package node4l succeeded via /etc/cmcluster/scripts/mscripts/master_control_script.sh.
Gathering storage information
Found 1 devices on node ivm1l
Found 1 devices on node ivmu2
Analysis of 2 devices should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 1 volume groups on node ivm1l
Found 1 volume groups on node ivmu2
Analysis of 2 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Maximum configured packages parameter is 300.
Configured 1 new packages.
Total number of packages configured in the cluster is 1.
Adding the package configuration for package node4l.
Modify the package configuration ([y]/n)? y
Completed the cluster update
root@ivm1l@/etc/cmcluster/node4l#
root@ivm1l@/etc/cmcluster/node4l#
root@ivm1l@/etc/cmcluster/node4l#cmviewcl
CLUSTER STATUS
ivm12_cluster up
NODE STATUS STATE
ivm1l up running
ivmu2 up running
UNOWNED_PACKAGES
PACKAGE STATUS STATE AUTO_RUN NODE
node4l down halted disabled unowned
root@ivm1l@/etc/cmcluster/node4l#
root@ivm1l@/etc/cmcluster/node4l#
Start the package on the first node, which will start the HP 9000 container as the Serviceguard package.
root@ivm1l@/etc/cmcluster/node4l#cmrunpkg -v node4l
Running package node4l on node ivm1l
Successfully started package node4l on node ivm1l
cmrunpkg: All specified packages are running
root@ivm1l@/etc/cmcluster/node4l#cmviwecl
sh: cmviwecl: not found.
root@ivm1l@/etc/cmcluster/node4l#cmviewcl
CLUSTER STATUS
ivm12_cluster up
NODE STATUS STATE
ivm1l up running
PACKAGE STATUS STATE AUTO_RUN NODE
node4l up running disabled ivm1l
NODE STATUS STATE
ivmu2 up running
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Confirm that the container has come up on the first server and that the container file systems are seen mounted both from the global system and from a local container login. Also verify that the logical IP of the container is assigned to an interface on the global system.
root@ivm1l@/etc/cmcluster/node4l#bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1441792 239416 1192992 17% /
/dev/vg00/lvol1 1835008 389968 1433800 21% /stand
/dev/vg00/lvol8 72376320 6419048 65451912 9% /var
/dev/vg00/lvol7 7798784 3270872 4492608 42% /usr
/dev/vg00/lvol4 524288 21632 498864 4% /tmp
/dev/vg00/lvol6 11206656 5742376 5421704 51% /opt
/dev/vg00/lvol5 131072 5480 124616 4% /home
/dev/node4lvg/rootlv
6266880 323734 5571853 5% /var/hpsrp/node4l
/var/hpsrp/node4l 6266880 323734 5571853 5% /var/hpsrp/node4l
/dev/node4lvg/usrlv
7340032 2241068 4780284 32% /var/hpsrp/node4l/usr
/dev/node4lvg/varlv
16777216 1875990 13969984 12% /var/hpsrp/node4l/var
/usr/lib/hpux32 7798784 3270872 4492608 42% /var/hpsrp/node4l/usr/lib/hpux32
/usr/lib/hpux64 7798784 3270872 4492608 42% /var/hpsrp/node4l/usr/lib/hpux64
/var/hpsrp/node4l/dev/node4lvg/tmplv
5242880 18784 4897596 0% /var/hpsrp/node4l/tmp
/var/hpsrp/node4l/dev/node4lvg/optlv
26214400 3839542 20976546 15% /var/hpsrp/node4l/opt
/var/hpsrp/node4l/dev/node4lvg/homelv
1048576 17911 966374 2% /var/hpsrp/node4l/home
root@ivm1l@/etc/cmcluster/node4l#netstat -in
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
lan1 1500 192.168.10.0 192.168.10.62 247823 0 136738 0 0
lo0 32808 127.0.0.0 127.0.0.1 7767 0 7767 0 0
lan1:1 1500 192.168.10.0 192.168.10.125 40 0 48 0 0
root@ivm1l@/etc/cmcluster/node4l#
Test the failover and failback in the same way as for any other Serviceguard package; a sketch follows.
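A minimal test sketch, assuming the package and node names used throughout; cmhaltpkg, cmrunpkg, and cmviewcl are the standard Serviceguard commands for this:
# fail the package over from ivm1l to ivmu2
cmhaltpkg node4l
cmrunpkg -v -n ivmu2 node4l
cmviewcl -v -p node4l
# fail it back to ivm1l
cmhaltpkg node4l
cmrunpkg -v -n ivm1l node4l
cmviewcl -v -p node4l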