ASM online migration of LUNs between storage arrays

ASM online disk migration from one storage array to another, with zero downtime for the databases and their dependent applications... sounds interesting. This was one of the tasks I was involved in recently, across multiple Red Hat Linux clusters with ASM disk groups set up with "external" redundancy.

Actually, this is a fairly straightforward activity if all the prerequisites are met and some planning is in place.

Here's the high-level workflow...

Environment: multiple ASM disk groups with external redundancy, on RHEL
  • List disks using /etc/init.d/oracleasm listdisks
  • Identify the multipath device to ASM disk mapping using
            /sbin/blkid | grep oracleasm | grep mapp
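    To see the mapping one disk at a time, a minimal bash sketch (assuming the ASM labels show up as LABEL="<disk>" in the blkid output):

            #!/bin/bash
            # Print each ASM disk label next to the /dev/mapper device carrying it
            for disk in $(/etc/init.d/oracleasm listdisks); do
                /sbin/blkid | grep oracleasm | grep mapper | grep "LABEL=\"${disk}\""
            done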

  • Add the new LUNs at the OS level and ensure multipath has been set up for the new LUNs
  • Create a single whole-disk partition on each new LUN using fdisk or parted
  • To discover the new partitions on all remaining nodes, flush and rediscover the multipath maps with the commands below (a combined sketch follows)
             multipath -F
             multipath -v3 >&-
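
    Putting the last two bullets together, a rough sketch; the /dev/mapper names are placeholders for the new LUNs:

            #!/bin/bash
            # Partition one new LUN (run once, on one node); device name is a placeholder
            parted -s /dev/mapper/mpathnew01 mklabel msdos
            parted -s /dev/mapper/mpathnew01 mkpart primary 1MiB 100%
            # On each remaining node: flush unused maps, rediscover, then map the partition
            multipath -F
            multipath -v3 >&-
            /sbin/kpartx -a /dev/mapper/mpathnew01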

  • Create the new ASM disks as below
/etc/init.d/oracleasm createdisk <disk_name> /dev/mapper/
Create a simple shell script to execute the commands and avoid any typos, as you'll have multiple disks to add; use a distinct naming prefix to distinguish the new array's disks from the current array's (a sketch follows).
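
    A minimal sketch of such a script; the NEWARR prefix and device names are hypothetical:

            #!/bin/bash
            # Label each new LUN for ASM; the NEWARR prefix marks the new array's disks
            # (device and label names below are placeholders)
            i=1
            for dev in /dev/mapper/mpathnew01p1 /dev/mapper/mpathnew02p1; do
                /etc/init.d/oracleasm createdisk NEWARR_DATA0${i} ${dev}
                i=$((i+1))
            done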


  • Run /etc/init.d/oracleasm scandisks and listdisks on all the nodes to ensure the new ASM disks are identified (see the loop below)
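    The remaining nodes can be covered with a small loop; the hostnames here are placeholders:

            #!/bin/bash
            # Scan for the new ASM disks on every other cluster node (hostnames are placeholders)
            for node in racnode2 racnode3; do
                ssh root@${node} "/etc/init.d/oracleasm scandisks && /etc/init.d/oracleasm listdisks"
            done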
  • Change the rebalance power limit of the disk group using ...
         ALTER DISKGROUP DATA REBALANCE POWER 6;
  • Add the new ASM disks to the disk group using asmca as the grid OS user (a SQL alternative follows this step)
  • Wait for the rebalance operation to complete:
         SELECT * FROM GV$ASM_OPERATION;

The query returns no rows once the rebalance operation is complete.
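
As an aside, instead of asmca the new disks can be added in SQL*Plus as SYSASM. A sketch with hypothetical disk labels (ORCL: is the ASMLib discovery prefix); note that ADD DISK and DROP DISK can even be combined in a single statement so only one rebalance runs:

         -- Hypothetical labels; adjust to your new array's disk names
         ALTER DISKGROUP DATA ADD DISK
           'ORCL:NEWARR_DATA01',
           'ORCL:NEWARR_DATA02'
           REBALANCE POWER 6;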

  • Once the rebalance is complete, drop the old set of disks using asmca as the grid OS user (SQL sketch below)
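
    The SQL*Plus equivalent, with hypothetical disk names (these are the NAME values from V$ASM_DISK, not OS paths):

         -- Hypothetical disk names from V$ASM_DISK.NAME
         ALTER DISKGROUP DATA DROP DISK DATA_0000, DATA_0001
           REBALANCE POWER 6;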


  • Wait for the rebalance operation to complete:
         SELECT * FROM GV$ASM_OPERATION;

  • The above query returns no rows once the rebalance operation is complete; a polling sketch follows
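    If you'd rather not re-run the query by hand, a minimal bash sketch that polls until the rebalance finishes (it assumes the grid user's ORACLE_HOME and ORACLE_SID are already set):

            #!/bin/bash
            # Poll GV$ASM_OPERATION once a minute until no rebalance is running
            while :; do
                n=$(printf "set heading off feedback off pagesize 0\nselect count(*) from gv\$asm_operation;\n" | sqlplus -s / as sysasm | tr -d '[:space:]')
                [ "${n}" = "0" ] && break
                sleep 60
            done
            echo "Rebalance complete"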
  • Verify the status of disks using...
     select disk_number, mount_status, header_status, mode_status, state, redundancy, failgroup, path
       from v$asm_disk order by path;

  • If a disk's HEADER_STATUS is 'FORMER', proceed with the deletedisk command as below...
     /etc/init.d/oracleasm deletedisk ASM_DATA01
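
    With several old disks this is loop material as well; the labels below are placeholders for the old array's disks:

            #!/bin/bash
            # Remove the old array's ASM labels (placeholder label names)
            for d in ASM_DATA01 ASM_DATA02; do
                /etc/init.d/oracleasm deletedisk ${d}
            done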

  • Advise the sysadmin to tidy up the multipath configuration (see the flush sketch below)
  • Advise the storage admin to unpresent the LUNs from the hosts
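
For the tidy-up, something along these lines on each node before the LUNs are unpresented; the map names are placeholders:

        #!/bin/bash
        # Flush the old array's multipath maps on this node (placeholder map names)
        for m in mpathold01 mpathold02; do
            multipath -f ${m}
        done
        # Then remove any corresponding entries from /etc/multipath.conf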