ASM online migration of LUNs between Storage Arrays

ASM online disk migration from one disk array to another, with zero downtime to the databases and their dependent applications...sounds interesting! This was one of the tasks I was involved in recently, for multiple Red Hat Linux clusters with ASM disk groups set up with "external" redundancy...

This is actually a fairly straightforward activity if all the prerequisites are met and some planning is in place.

Here's the high-level workflow...

Multiple ASM Disk Groups with External redundancy
RHEL OS
  • List disks using /etc/init.d/oracleasm listdisks
  • Identify the multipath device to ASM device mapping using
            /sbin/blkid |grep oracleasm|grep mapp

  • Add the new LUNs at the OS level and ensure multipath has been set up for them
  • Create whole partitions on the new LUNs using fdisk or parted
  • To discover the partitions on all remaining nodes, use the below multipath commands to flush and rediscover:
             multipath -F
             multipath -v3 >&-

  • Create the new ASM disks as below:
/etc/init.d/oracleasm createdisk  /dev/mapper/
Create a simple shell script to execute these commands and avoid typos, as you'll have multiple disks to add; use a distinct naming convention to distinguish the new array's disks from the current array's.


  •  Run /etc/init.d/oracleasm scandisks/listdisks on all the nodes to ensure the new ASM disks are identified
  • Change the rebalance power limit of the disk group using ...
         ALTER DISKGROUP DATA REBALANCE POWER 6;
  • Add the new ASM disks to the disk group using asmca as grid OS user
  • Wait for the rebalance operation to complete..
         SELECT * FROM GV$ASM_OPERATION;

The above query should return no rows once the rebalance operation is complete.

  • Once the rebalance is complete, drop the old set of disks using asmca as grid OS user


  • Wait for the rebalance operation to complete..
         SELECT * FROM GV$ASM_OPERATION;

  • The above query should return no rows once the rebalance operation is complete.
  • Verify the status of disks using...
     select disk_number,mount_status,header_status,mode_status,state,redundancy,failgroup,path        
               from v$asm_disk order by path;

  • If a disk's HEADER_STATUS is 'FORMER', proceed with the deletedisk command as below...
     /etc/init.d/oracleasm deletedisk ASM_DATA01

  • Advise the sysadmin to tidy up the multipath configuration.
  • Advise the storage admin to unpresent the LUNs from the host.
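The createdisk step above can be sketched as a small generator script; all disk names and device paths below are hypothetical placeholders (a NEW_ prefix is one way to distinguish the new array's disks). It prints the commands rather than executing them, so they can be reviewed before running as root:

```shell
#!/bin/bash
# Sketch only: generate the oracleasm createdisk commands for review.
# Disk names and multipath partition paths are hypothetical examples.
gen_createdisk_cmds() {
  # map of new ASM disk name -> multipath partition (placeholders)
  local -A disks=(
    [NEW_DATA01]=/dev/mapper/mpatha-part1
    [NEW_DATA02]=/dev/mapper/mpathb-part1
  )
  local name
  for name in "${!disks[@]}"; do
    # print instead of execute, so the commands can be eyeballed first
    echo "/etc/init.d/oracleasm createdisk ${name} ${disks[$name]}"
  done
}

gen_createdisk_cmds
```

Redirect the output to a file, review it, then run it as root on one node and follow up with scandisks on the others.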


Notes on MS SQL to Oracle data migration using SQL Developer

A requirement came up to migrate some MS SQL data from SharePoint into Oracle; most of this data has HTML formatting tags and is more than 4000 characters long, requiring CLOB columns on the Oracle end.

The first option was to try SQL Developer. As the source was given as an Access database file, I tried to connect to Access and noticed there's NO Access connection tab in SQL Developer 4.1.1: it uses JDK 8, and there's no JDBC-ODBC bridge in JDK 8, as per this post https://community.oracle.com/thread/3642078. So I had to launch SQL Developer 4.0.2, which has the "Access" tab.

The next issue was that connecting to Access gave a "Data source name not found and no default driver specified" error, due to the missing 64-bit MS Access driver (I was using a 64-bit OS). So I uninstalled my Office suite, installed the Access driver from http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=13255, and could then connect to Access. However, the source data was supposed to be extracted by a query over some of the Access base tables, and for some reason the Access connection shows the queries as 'views'.

So I decided to switch to an MS SQL connection (which was the original source for this Access file) and connected SQL Developer to MS SQL using the jTDS driver.
From there on, it was easier than expected to load the data using the 'Copy To Oracle' option on the MS SQL tables, and it nicely converted the nvarchar data into CLOB columns.
One small glitch, however: it failed to load the data from an MS SQL table that has ":" in its name, but strangely it didn't throw any errors; it simply said the load completed, with neither the table nor the data present in Oracle. Also, for the tables that worked OK, the Oracle table names got trimmed to the 30-character limit on the Oracle end.

So I renamed the tables on the MS SQL end to remove the ":" character, and the load succeeded; the table record counts and max(length(column_name)) returned the same values on both ends...a pleasing result in the end!
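The rename-and-trim behaviour above can be sketched as a tiny helper that derives an Oracle-safe name from an MS SQL table name, stripping the ":" character and trimming to the classic 30-character identifier limit. The function and sample names are mine, not a SQL Developer feature:

```shell
#!/bin/bash
# Hypothetical helper: derive an Oracle-safe table name from an
# MS SQL table name by removing ':' and trimming to 30 characters
# (the pre-12.2 Oracle identifier limit).
oracle_safe_name() {
  local name="${1//:/}"   # drop ':' characters (they broke the load)
  echo "${name:0:30}"     # trim to Oracle's 30-char identifier limit
}

oracle_safe_name "Sites:UserData"                            # -> SitesUserData
oracle_safe_name "AllDocs_With_A_Very_Long_SharePoint_Name"  # -> AllDocs_With_A_Very_Long_Share
```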



Setting up SSH Tunneling connection in SQL Developer 4.1.1



I was looking at using SQL Developer to set up SSH tunneling for hosts where I don't have port 1521 open but do have SSH access, and thatjeffsmith.com explains it quite well, showing how to add it directly in the connection tab itself.

However, with the version I'm using (4.1.1.19), I couldn't find the 'SSH' tab in the 'Advanced' section of the new connection window, so I spent some time working out how to do it...here's how I've done it...hope it's helpful for someone out there looking for notes on this...

Click on the View ---> SSH menu item...this opens the below work area under the connections list...


Right-click on SSH Hosts and click "New SSH Host"




and fill in the host, credentials, and local port forwarding details as below...
The Host in the top section is the remote host (either a hostname or an IP address), and you can use a private key here if that's what you use to connect. One thing to note is that the private key should be in OpenSSH format; I had mine in PuTTY format, so I used PuTTYgen to convert it into OpenSSH format as below...

    Open your private key in PuTTYgen
    From the top menu, choose "Conversions" -> "Export OpenSSH key"
    Save the new OpenSSH key when prompted.




Once the above details are in place, save the connection by clicking 'OK', then right-click it and choose 'Connect' to establish the tunnel.

Now set up the database connection as below...the good thing here is that the SSH host connection we set up earlier can be shared by a number of database connections, say if you have many databases on the host or want separate connections for different database users, etc.
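For reference, what the SSH host entry does under the covers is equivalent to a plain local port forward on the command line. A minimal sketch (user, host names, and ports are placeholders) that builds the equivalent ssh command:

```shell
#!/bin/bash
# Sketch: the command-line equivalent of SQL Developer's SSH tunnel.
# All user names, host names, and ports below are placeholders.
build_tunnel_cmd() {
  local local_port="$1" db_host="$2" db_port="$3" ssh_user="$4" ssh_host="$5"
  # -N: no remote command, just forward; -L: local port forward
  echo "ssh -N -L ${local_port}:${db_host}:${db_port} ${ssh_user}@${ssh_host}"
}

# Forward local port 1521 to the database listener via the SSH host,
# then point the SQL Developer connection at localhost:1521.
build_tunnel_cmd 1521 dbhost.example.com 1521 oracle remote.example.com
```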






RMAN Duplicate reports datafile xx not processed because file is offline

I was performing a database clone from RAC to RAC and came across this interesting issue…
RMAN duplicate completed the control file restore and then started reporting the below messages…and finally failed to open the database...

database mounted
datafile 73 not processed because file is offline
datafile 6 not processed because file belongs to tablespace with one or more offline immediate datafile (X_STATS_INDX)
datafile 10 not processed because file belongs to tablespace with one or more offline immediate datafile (X_STATS_INDX)
datafile 12 not processed because file belongs to tablespace with one or more offline immediate datafile (X_STATS_INDX)

We were doing a clone as of 40 days back, but all the backups were available online on TAPE, which we had verified earlier via the list backup command and confirmed with the storage admin, so it was a bit confusing why it was failing here…

In the end, we queried the creation timestamp of the first datafile reported as 'offline', and it had been added on the very day we were restoring up to…our refresh timestamp happened to fall between the datafile addition timestamp and the nightly backup, a very infrequent occurrence.

We were refreshing this DB up to 10th June 2015, 2300 hrs; the datafile was added around 11:10 AM, and the backup happened after 2300 hrs…


SELECT FILE#,NAME,TO_CHAR(CREATION_TIME,'DD-MON-YY HH24:MI:SS') FROM V$DATAFILE WHERE FILE#=73;


When we changed the timestamp to 2 AM the next day, everything worked perfectly fine.

It appears to be a bug with the duplicate option in 11.2.0.3, fixed in 11.2.0.4.
However, this is not an issue if a traditional restore/recovery method is used.
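A simple pre-flight check before a duplicate can catch this scenario: compare each datafile's creation time with the most recent backup that precedes the intended UNTIL TIME, and if a datafile is newer than that backup, move the restore point past the next backup. A sketch of the comparison (timestamps are the illustrative values from above; in practice you'd feed it the V$DATAFILE output, and this uses GNU date):

```shell
#!/bin/bash
# Sketch: flag datafiles created after the most recent backup that
# precedes the UNTIL TIME - no backup piece inside the restore window
# would contain them, which is what tripped up RMAN duplicate here.
until_needs_adjusting() {
  local file_created="$1" last_backup_before_until="$2"
  # compare as epoch seconds (GNU date -d)
  [ "$(date -d "$file_created" +%s)" -gt \
    "$(date -d "$last_backup_before_until" +%s)" ]
}

# Datafile added 11:10 on the 10th; last backup before our 2300 UNTIL
# TIME was the nightly run on the 9th -> move the UNTIL TIME forward.
if until_needs_adjusting "2015-06-10 11:10:00" "2015-06-09 23:00:00"; then
  echo "move UNTIL TIME past the next backup of this datafile"
fi
```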


CATALOG A BACKUPPIECE WITH 'SBT_TAPE'

I had another requirement today to catalog a backup piece with my SBT_TAPE and couldn't find the syntax readily available anywhere...

Here's the syntax that worked...

RMAN>  CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'ENV=(NSR_SERVER=,NSR_GROUP=,NSR_DATA_VOLUME_POOL=,NSR_CLIENT=)';

Set SBT_TAPE as your default device type...this can be reverted later if required...

RMAN>  CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';

RMAN> CATALOG DEVICE TYPE 'SBT_TAPE' BACKUPPIECE '0rns43vq_1_1';


The PARMS could be different for different media vendors but the syntax is similar...

RMAN-20207: UNTIL TIME or RECOVERY WINDOW is before RESETLOGS time


Wish you all a very Happy 2015!

OK...here's the requirement...I have a full 'online' backup of a RAC database and need to recover from it multiple times, opening the database with the 'alter database open resetlogs;' statement each time.

It's like my gold backup, which I want to restore from whenever I want...

Take a backup...
rman target / nocatalog

run {
allocate channel c1 device type disk;
allocate channel c2 device type disk;
backup as compressed backupset incremental level 0 database TAG FULL_BACKUP_05JAN2015
plus archivelog TAG ARCH_BACKUP_05JAN2015;
backup spfile;
backup current controlfile;
}

I get these tags populated from variables, but I'm hard-coding them here to keep it simple...

Perform 1st restore and recovery as below...

startup nomount;
set DBID xxxxxxx;
restore controlfile from 'c-xx-xxxx'; (this is from the last autobackup)
alter database mount;
list incarnation of database;
restore and recover the database as below…
using tape channels, as the backup has already been removed from disk and is only available via the SBT_TAPE channel.

run {
allocate channel t1 type 'SBT_TAPE';
send 'NSR_ENV=(NSR_SERVER=xxxx,NSR_GROUP=xxxx,NSR_DATA_VOLUME_POOL=xxxxx,NSR_CLIENT=xxxxxx,NSR_SAVESET_BROWSE="xxxxx",NSR_SAVESET_RETENTION="xxxxx")';
SET UNTIL TIME "TO_DATE('06-JAN-2015 11:15:00','DD-MON-YYYY HH24:MI:SS')"; (I have all the archive logs up to and after this point on disk and tape)
restore database;
recover database;
release channel t1;
}

You can allocate multiple channels depending on your media/tape system configuration.

This worked perfectly fine…and I opened the database as below…

RMAN> alter database open resetlogs;

Come the next day…I have a requirement to restore from the same backup, so I run the same process again…and this time it fails...

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 01/09/2015 14:26:50
RMAN-20207: UNTIL TIME or RECOVERY WINDOW is before RESETLOGS time

Now it failed with the above errors...

The cause of the issue is the implicit crosscheck and cataloging done by RMAN during the restore and recover phase…
….
….
….
executing command: SET until clause

Starting restore at 09-JAN-2015 14:26:45
Starting implicit crosscheck backup at 09-JAN-2015 14:26:45
Crosschecked 16 objects
Finished implicit crosscheck backup at 09-JAN-2015 14:26:47

Starting implicit crosscheck copy at 09-JAN-2015 14:26:47
Finished implicit crosscheck copy at 09-JAN-2015 14:26:47

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +XXX/XXXXXX/AUTOBACKUP/2015_01_08/s_868464001.6080.868464003
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_09/thread_1_seq_8.1623.868492825
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_09/thread_2_seq_5.1489.868492827
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_09/thread_1_seq_9.5858.868528837
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_09/thread_1_seq_10.5893.868532601
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_09/thread_2_seq_6.5852.868536051
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_09/thread_1_seq_11.5800.868536649
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_08/thread_2_seq_1.6077.868463987
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_08/thread_1_seq_6110.536.868462613
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_08/thread_1_seq_1.6085.868465953
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_08/thread_1_seq_2.6141.868465963
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_08/thread_1_seq_6111.6443.868462613
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_08/thread_1_seq_3.6625.868465973
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_08/thread_2_seq_2.529.868465975
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_08/thread_1_seq_4.528.868465985
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_08/thread_1_seq_5.527.868465995
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_08/thread_1_seq_6.1982.868466007
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_08/thread_2_seq_3.5702.868466011
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_08/thread_1_seq_7.531.868466023
File Name: +XXX/XXXXXX/ARCHIVELOG/2015_01_08/thread_2_seq_4.1877.868485729

released channel: t1
released channel: t2
released channel: t3
released channel: t4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 01/09/2015 14:26:50
RMAN-20207: UNTIL TIME or RECOVERY WINDOW is before RESETLOGS time

RMAN> **end-of-file**


As you can see above, RMAN catalogs the archive logs generated post-resetlogs and picks up the new incarnation information…

The way I've resolved the issue is as follows…

First, delete the archive logs generated after the last resetlogs operation…their timestamps are after our recovery point...

Those are basically the post-resetlogs files in the list above (the ones whose log sequence numbers restart from 1)…

Then run the same process again…this time RMAN has no new archive logs to catalog…and it worked perfectly fine.

This is especially useful if you need to restore and recover from a gold backup multiple times and don't want to worry about multiple incarnations of the database…and trying to reset them, etc.
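The cleanup step can be scripted so it is repeatable before each restore. A sketch that generates the RMAN command to remove archive logs created after the resetlogs point (the timestamp is a placeholder, and as always the generated command should be reviewed before running it in an RMAN session):

```shell
#!/bin/bash
# Sketch: generate an RMAN command that deletes archive logs created
# after the last resetlogs, so the implicit catalog step can't pick
# up the new incarnation. The timestamp is a placeholder.
gen_cleanup_cmd() {
  local resetlogs_time="$1"   # e.g. '08-JAN-2015 20:00:00'
  cat <<EOF
DELETE NOPROMPT ARCHIVELOG FROM TIME "TO_DATE('${resetlogs_time}','DD-MON-YYYY HH24:MI:SS')";
EOF
}

# review the generated command, then run it in an RMAN session
gen_cleanup_cmd "08-JAN-2015 20:00:00"
```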