Monday, August 22, 2011

How to Mount Cloned Volume Groups in AIX

Today, most SAN storage vendors provide some kind of volume or LUN cloning capability. The names and underlying mechanics differ from vendor to vendor, but the end result is much the same: an exact, point-in-time copy of a primary volume or LUN. NetApp's name for this technology is FlexClone.

Typically, creating a clone of a LUN and mounting the file system on the original server is a trivial process. The process becomes more complex if volume management is involved. Server-based volume management software provides many benefits, but it complicates matters when LUN clones are used. In the case of IBM's Logical Volume Manager (LVM), mounting clones on the same server results in duplicate volume group information. Luckily, AIX allows LVM to carry duplicate physical volume IDs (PVIDs) for a "short period" of time without crashing the system. I'm not sure exactly what a "short period" of time equates to, but in my testing I didn't experience a crash.
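
As a quick check after mapping cloned LUNs, duplicate PVIDs can be spotted directly from lspv output (the PVID is the second column). This one-liner is just a convenience, not part of the procedure below:

lspv | awk '{ print $2 }' | grep -v none | sort | uniq -d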

The process to "import" a cloned volume group for the first time is disruptive in that the original volume group must be exported. It is necessary to have the original volume group exported so that the physical volume IDs (PVIDs) on the cloned LUNs can be regenerated. The recreatevg command is used to generate new PVIDs and to rename the volume names in the cloned volume group. Note that the /etc/filesystem entries need to be manually updated because the recreatevg command prepends /fs to the original mount point names for the clones. Once the /etc/filesystem file is updated, the original volume group can be re-imported with importvg.

Subsequent refreshes of previously imported clones can be accomplished without exporting the original, because the ODM remembers the previous PVID-to-hdisk association. It does not reread the actual PVID from the disk until an operation is performed against the volume group. The recreatevg command changes the PVIDs and logical volume names on the cloned volume group without affecting the source volume group.

Process for the initial import of a cloned volume group (a condensed command sketch follows the list):


  1. Clone the LUNs comprising the volume group
    1. Make sure to clone in a consistent state
  2. Unmount and export original volume groups
    1. Use df to associate file systems to volumes
    2. Unmount file systems
    3. Use lsvg to list the volume groups
    4. Use lspv to view the PVIDs for each disk associated with the volume groups
    5. Note the volume group names and which disks belong to each VG that will be exported
    6. Use varyoffvg to vary off each affected VG
    7. Use exportvg to export the VGs
  3. Bring in the new VG
    1. Execute cfgmgr to discover new disks
    2. Use lspv to identify the duplicate PVIDs
    3. Execute recreatevg on each new VG, listing all disks associated with the volume group, with the -y option to name the VG
    4. Use lspv to verify no duplicate PVIDs
  4. Import the original volume groups
    1. Execute importvg with the name of one member hdisk and the -y option with the original VG name
    2. Mount the original file systems.
  5. Mount the cloned file systems
    1. Make mount point directories for the cloned file systems
    2. Edit /etc/filesystems to update the mount points for the cloned VG file systems
    3. Use mount command to mount the cloned file systems
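
To tie the steps together, here is a condensed command-level sketch of the initial import, using purely hypothetical names (an original VG appvg containing hdisk4 and hdisk5, mounted at /appdata, with the clone surfacing as hdisk6 and hdisk7); substitute your own VG, hdisk, and mount point names:

umount /appdata                        # unmount the file systems in the original VG
varyoffvg appvg                        # vary off the original VG
exportvg appvg                         # export it so the cloned PVIDs can be regenerated
cfgmgr                                 # discover the newly mapped clone LUNs
lspv                                   # note the duplicate PVIDs on the new hdisks
recreatevg -y appclvg hdisk6 hdisk7    # new PVIDs, LV names, and /fs-prefixed stanzas
importvg -y appvg hdisk4               # re-import the original VG
mount /appdata                         # remount the original file systems
mkdir /appdataclone                    # mount point for the clone
vi /etc/filesystems                    # change the clone stanza's /fs mount point to /appdataclone
mount /appdataclone
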
The subsequent import of a cloned volume group differs in that only the cloned volume group needs to be unmounted and varied offline prior to the clone refresh. Remember the hdisk numbers involved in each clone volume group that is to be refreshed. Once refreshed, use exportvg to export the volume group. Afterward, issue the recreatevg command, naming each hdisk associated with the volume group and its previous VG name. Now the volumes and file systems are available. Prior to mounting, the /etc/filesystems entries need to be updated to correct the mount points.

Process to refresh a cloned volume group (again, a condensed command sketch follows the list):

  1. Unmount and vary off the cloned volume groups to be refreshed
    1. Execute umount on associated file systems
    2. Use varyoffvg to vary off each target VG
  2. Refresh the clones on the storage system
  3. Bring in the refreshed clone VGs
    1. Execute cfgmgr
      1. Use lspv and notice that ODM remembers the hdisk/PVID and volume group associations
    2. Use exportvg to export the VGs noting the hdisk numbers for each VG
    3. Execute recreatevg on each refreshed VG, naming all disks associated with the volume group, with the -y option to set the VG back to its original name
    4. Now lspv displays new unique PVIDs for each hdisk
  4. Mount the refreshed clone file systems
    1. Edit /etc/filesystems to correct the mount points for each volume
    2. Issue mount command to mount the refreshed clones
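
Again as a condensed sketch with the same hypothetical names (clone VG appclvg on hdisk6 and hdisk7, mounted at /appdataclone); note that the original VG stays online throughout a refresh:

umount /appdataclone                   # unmount the clone file systems
varyoffvg appclvg                      # vary off the clone VG
# (refresh the clones on the storage system)
cfgmgr                                 # rescan; the ODM still associates hdisk6/hdisk7 with appclvg
exportvg appclvg                       # export the stale clone VG definition
recreatevg -y appclvg hdisk6 hdisk7    # regenerate PVIDs under the original clone VG name
vi /etc/filesystems                    # correct the /fs-prefixed mount points again
mount /appdataclone
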
See the example below for a first-time import of two cloned volume groups, logvg2 and datavg2, consisting of 2 and 4 disks respectively:



bash-3.00# df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 1048576 594456 44% 13034 17% /
/dev/hd2 20971520 5376744 75% 49070 8% /usr
/dev/hd9var 2097152 689152 68% 11373 13% /var
/dev/hd3 2097152 1919664 9% 455 1% /tmp
/dev/hd1 1048576 42032 96% 631 12% /home
/dev/hd11admin 524288 523488 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 4194304 3453936 18% 9152 3% /opt
/dev/livedump 524288 523552 1% 4 1% /var/adm/ras/livedump
/dev/pocdbbacklv 626524160 578596720 8% 8 1% /proddbback
/dev/fspoclv 1254359040 1033501496 18% 2064 1% /cl3data
/dev/fspocdbloglv 206438400 193491536 7% 110 1% /cl3logs
/dev/poclv 1254359040 1033501480 18% 2064 1% /proddb
/dev/pocdbloglv 206438400 193158824 7% 115 1% /proddblog
/dev/datalv2 836239360 615477152 27% 2064 1% /datatest2
/dev/loglv2 208404480 195088848 7% 118 1% /logtest2
bash-3.00#

bash-3.00$ umount /datatest2/
bash-3.00# umount /logtest2/
bash-3.00# lsvg
rootvg
pocdbbackvg
dataclvg
logsclvg
pocvg
pocdblogvg
datavg2
logvg2
bash-3.00# varyoffvg datavg2



NOTE: remember the hdisk and VG names for the exported VGs.



bash-3.00# lspv
hdisk0 00f62aa942cec382 rootvg active
hdisk1 none None
hdisk2 00f62aa997091888 pocvg active
hdisk3 00f62aa9a608de30 dataclvg active
hdisk4 00f62aa9a60970fc logsclvg active
hdisk10 00f62aa9972063c0 pocdblogvg active
hdisk11 00f62aa997435bfa pocdbbackvg active
hdisk5 00f62aa9a6798a0c datavg2
hdisk6 00f62aa9a6798acf datavg2
hdisk7 00f62aa9a6798b86 datavg2
hdisk8 00f62aa9a6798c36 datavg2
hdisk9 00f62aa9a67d6c9c logvg2 active
hdisk12 00f62aa9a67d6d51 logvg2 active
bash-3.00# varyoffvg logvg2
bash-3.00# lsvg
rootvg
pocdbbackvg
dataclvg
logsclvg
pocvg
pocdblogvg
datavg2
logvg2
bash-3.00# exportvg datavg2
bash-3.00# exportvg logvg2
bash-3.00#

bash-3.00# cfgmgr
bash-3.00# lspv
hdisk0 00f62aa942cec382 rootvg active
hdisk1 none None
hdisk2 00f62aa997091888 pocvg active
hdisk3 00f62aa9a608de30 dataclvg active
hdisk4 00f62aa9a60970fc logsclvg active
hdisk10 00f62aa9972063c0 pocdblogvg active
hdisk11 00f62aa997435bfa pocdbbackvg active
hdisk5 00f62aa9a6798a0c None
hdisk6 00f62aa9a6798acf None
hdisk7 00f62aa9a6798b86 None
hdisk8 00f62aa9a6798c36 None
hdisk13 00f62aa9a6798a0c None
hdisk14 00f62aa9a6798acf None
hdisk15 00f62aa9a6798b86 None
hdisk9 00f62aa9a67d6c9c None
hdisk12 00f62aa9a67d6d51 None
hdisk16 00f62aa9a6798c36 None
hdisk17 00f62aa9a67d6c9c None
hdisk18 00f62aa9a67d6d51 None
bash-3.00#



Notice the duplicate PVIDs. Run recreatevg for each newly mapped clone volume group, naming all of the new disks that belong to it.



bash-3.00# recreatevg -y dataclvg2 hdisk13 hdisk14 hdisk15 hdisk16
dataclvg2
bash-3.00# recreatevg -y logclvg2 hdisk17 hdisk18
logclvg2
bash-3.00# importvg -y datavg2 hdisk5
datavg2
bash-3.00# importvg -y logvg2 hdisk9
logvg2
bash-3.00# lspv
hdisk0 00f62aa942cec382 rootvg active
hdisk1 none None
hdisk2 00f62aa997091888 pocvg active
hdisk3 00f62aa9a608de30 dataclvg active
hdisk4 00f62aa9a60970fc logsclvg active
hdisk10 00f62aa9972063c0 pocdblogvg active
hdisk11 00f62aa997435bfa pocdbbackvg active
hdisk5 00f62aa9a6798a0c datavg2 active
hdisk6 00f62aa9a6798acf datavg2 active
hdisk7 00f62aa9a6798b86 datavg2 active
hdisk8 00f62aa9a6798c36 datavg2 active
hdisk13 00f62aa9c63a5ec2 dataclvg2 active
hdisk14 00f62aa9c63a5f9b dataclvg2 active
hdisk15 00f62aa9c63a6070 dataclvg2 active
hdisk9 00f62aa9a67d6c9c logvg2 active
hdisk12 00f62aa9a67d6d51 logvg2 active
hdisk16 00f62aa9c63a6150 dataclvg2 active
hdisk17 00f62aa9c63bf6b2 logclvg2 active
hdisk18 00f62aa9c63bf784 logclvg2 active
bash-3.00#



Notice that the PVIDs are all unique now.

Remount the original file systems:



bash-3.00# mount /datatest2
bash-3.00# mount /logtest2
bash-3.00#



Create the new mount points and review /etc/filesystems (it will be edited below):



bash-3.00# mkdir /dataclone1test2
bash-3.00# mkdir /logclone1test2
bash-3.00# cat /etc/filesystems

/fs/datatest2:
        dev = /dev/fsdatalv2
        vfs = jfs2
        log = /dev/fsloglv03
        mount = true
        check = false
        options = rw
        account = false

/fs/logtest2:
        dev = /dev/fsloglv2
        vfs = jfs2
        log = /dev/fsloglv04
        mount = true
        check = false
        options = rw
        account = false

/datatest2:
        dev = /dev/datalv2
        vfs = jfs2
        log = /dev/loglv03
        mount = true
        check = false
        options = rw
        account = false

/logtest2:
        dev = /dev/loglv2
        vfs = jfs2
        log = /dev/loglv04
        mount = true
        check = false
        options = rw
        account = false

bash-3.00#



Notice that the recreatevg command prefixed the cloned duplicates' mount points with /fs. The logical volume names were also changed to prevent duplicate entries in /dev. Update /etc/filesystems so the clone stanzas use the mount points created previously.
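
For example, the first clone stanza would end up looking something like this after the edit (only the stanza name, which is the mount point, changes; the device and log names stay as recreatevg generated them):

/dataclone1test2:
        dev = /dev/fsdatalv2
        vfs = jfs2
        log = /dev/fsloglv03
        mount = true
        check = false
        options = rw
        account = false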



bash-3.00# mount /dataclone1test2
Replaying log for /dev/fsdatalv2.
bash-3.00# mount /logclone1test2
Replaying log for /dev/fsloglv2.
bash-3.00# df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 1048576 594248 44% 13064 17% /
/dev/hd2 20971520 5376744 75% 49070 8% /usr
/dev/hd9var 2097152 688232 68% 11373 13% /var
/dev/hd3 2097152 1919664 9% 455 1% /tmp
/dev/hd1 1048576 42032 96% 631 12% /home
/dev/hd11admin 524288 523488 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 4194304 3453936 18% 9152 3% /opt
/dev/livedump 524288 523552 1% 4 1% /var/adm/ras/livedump
/dev/pocdbbacklv 626524160 578596720 8% 8 1% /proddbback
/dev/fspoclv 1254359040 1033501496 18% 2064 1% /cl3data
/dev/fspocdbloglv 206438400 193491536 7% 110 1% /cl3logs
/dev/poclv 1254359040 1033501480 18% 2064 1% /proddb
/dev/pocdbloglv 206438400 193158824 7% 115 1% /proddblog
/dev/datalv2 836239360 615477152 27% 2064 1% /datatest2
/dev/loglv2 208404480 195088848 7% 118 1% /logtest2
/dev/fsdatalv2 836239360 615477160 27% 2064 1% /dataclone1test2
/dev/fsloglv2 208404480 195744288 7% 114 1% /logclone1test2
bash-3.00#
