We have a fairly large ~46 TB database that needed to get off an older SAN array due to EMC’s extortion-like support pricing model.
My goal is to take the shortest outage possible while getting all 46 TB of data migrated to disks on the new array.
The minimal outage does require taking the database down and briefly unmounting the filesystems. But a 5 minute outage to migrate 46 TB of data seems tolerable given our alternatives.
For the purposes of this article, I’ll illustrate the technique with the simpler CTD disk names from a setup with internal-only disk, rather than the eyechart CTD labels on the real server. The database server I’m working on has hundreds of LUNs, all with device names similar to:
c3t60060480000190100478533031383339d0s2
So, we’ll go with internal disk c1t0d0s5, mapped to metadevice d51 and mounted as /d1001, as the source disk holding good data. The destination disk is c1t1d0s5, mapped to metadevice d52, as the migration target.
Let’s make sure we know our source disk, so I’m creating a marker file named after the metadevice.
bash-3.00# metastat -ap | grep d51
d51 1 1 c1t0d0s5
bash-3.00# mount | grep d51
/d1001 on /dev/md/dsk/d51 read/write/setuid/devices...
bash-3.00# touch /d1001/I_am_d51
bash-3.00# ls /d1001
I_am_d51 lost+found ...
Our original filesystem is mounted at /d1001; first, unmount it.
bash-3.00# umount /d1001
Now create the mirror and attach d51, which holds our good data, as its only submirror.
bash-3.00# metainit d50 -m d51
d50: Mirror is setup
To confirm that our data is still there, mount our new mirror and take a look.
bash-3.00# mount /dev/md/dsk/d50 /d1001
bash-3.00# ls /d1001
I_am_d51 lost+found
For my purposes, I would repeat the above unmount and one-sided mirror attach for every mountpoint (a loop is sketched below). At that point the database can be restarted.
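Here’s a minimal sketch of that loop. The mirrors.txt mapping file, with one “mirror submirror mountpoint” triple per line, is a hypothetical convenience for this sketch, not part of the actual procedure:

# hypothetical mirrors.txt, one "mirror submirror mountpoint" per line:
#   d50 d51 /d1001
#   d60 d61 /d1002
while read mirror sub mnt; do
    umount "$mnt"                       # outage starts here
    metainit "$mirror" -m "$sub"        # one-sided mirror over the good data
    mount "/dev/md/dsk/$mirror" "$mnt"  # remount on the new mirror device
done < mirrors.txt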
Next we create the new side of the SDS mirror and attach it, so the data synchronizes while the database is running.
Use format to label the new incoming disk (c1t1d0s5), then use metainit to initialize the new submirror.
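Assuming the incoming disk has the same size and geometry as the source (worth verifying first), a common shortcut is to copy the source disk’s VTOC instead of hand-slicing it in format:

bash-3.00# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2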
bash-3.00# metainit d52 1 1 c1t1d0s5
We can now attach the new disk, which represents our new SAN array.
bash-3.00# metattach d50 d52
d50: submirror d52 is attached
bash-3.00# metastat d50
d50: Mirror
    Submirror 0: d51
      State: Okay
    Submirror 1: d52
      State: Resyncing
    Resync in progress: 0 % done
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 117726144 blocks (56 GB)

d51: Submirror of d50
    State: Okay
    Size: 117726144 blocks (56 GB)
    Stripe 0:
        Device     Start Block  Dbase  State  Reloc  Hot Spare
        c1t0d0s5          0     No     Okay   Yes

d52: Submirror of d50
    State: Resyncing
    Size: 117726144 blocks (56 GB)
    Stripe 0:
        Device     Start Block  Dbase  State  Reloc  Hot Spare
        c1t1d0s5          0     No     Okay   Yes
bash-3.00# metastat d50 | grep progress
    Resync in progress: 33 % done
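Rather than re-running that by hand, a simple loop can watch the resync; the 60 second interval here is arbitrary. Once the sync completes, the “Resync in progress” line disappears from the metastat output, grep stops matching, and the loop exits:

bash-3.00# while metastat d50 | grep progress; do sleep 60; done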
When d50 has completed syncing, we can detach the original disk, which makes it possible to decommission the old array.
bash-3.00# metastat d50
d50: Mirror
    Submirror 0: d51
      State: Okay
    Submirror 1: d52
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 117726144 blocks (56 GB)

d51: Submirror of d50
    State: Okay
    Size: 117726144 blocks (56 GB)
    Stripe 0:
        Device     Start Block  Dbase  State  Reloc  Hot Spare
        c1t0d0s5          0     No     Okay   Yes

d52: Submirror of d50
    State: Okay
    Size: 117726144 blocks (56 GB)
    Stripe 0:
        Device     Start Block  Dbase  State  Reloc  Hot Spare
        c1t1d0s5          0     No     Okay   Yes

Device Relocation Information:
Device   Reloc  Device ID
c1t0d0   Yes    id1,sd@n500000e0125d2140
c1t1d0   Yes    id1,sd@n500000e0125e77c0
We can now disconnect our original disk and retire our array.
bash-3.00# metadetach d50 d51
d50: submirror d51 is detached
bash-3.00# df -h | grep d1001
/dev/md/dsk/d50         55G    56M    55G     1%    /d1001
bash-3.00# ls /d1001
I_am_d51 lost+found
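After the detach, the d51 metadevice still exists even though it no longer serves data. Assuming the old disk is done for good, it can be cleared so the old LUN is fully released:

bash-3.00# metaclear d51
d51: Concat/Stripe is cleared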
As a note, if there are dozens of mountpoints to synchronize, they should be done in small groups or serially. Having two dozen metadevices resyncing at the same time will certainly affect the performance of the database.
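A minimal sketch of serializing the attaches, again using a hypothetical mapping file (attach.txt, one “mirror destination-submirror” pair per line); each attach blocks until its resync finishes before the next one starts:

# hypothetical attach.txt:
#   d50 d52
#   d60 d62
while read mirror dest; do
    metattach "$mirror" "$dest"
    # block until this resync completes before starting the next
    while metastat "$mirror" | grep -q progress; do
        sleep 60
    done
done < attach.txt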