Cross-platform Database Migration from Big to Little Endian formats (Solaris to Linux)

Had an interesting requirement recently to migrate a 1 TB+ database from a Solaris SPARC 64-bit environment to Linux x86 64-bit, and here's the step-by-step process that worked.

This is a big-endian to little-endian format change, so it requires converting the tablespaces/datafiles to the target endian format.


• Determine whether the source tablespaces (to be transported) are self-contained.

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('TABLESPACE1,TABLESPACE2,....', TRUE);

After executing the above procedure, ensure no violations exist by running the query below.
SELECT * FROM TRANSPORT_SET_VIOLATIONS;

Generate a transportable tablespace set by putting the tablespaces in read-only mode:

Alter tablespace tablespace1 read only;
Alter tablespace tablespace2 read only;
Alter tablespace tablespace3 read only;

• Export the Source tablespace metadata as below

expdp system/xxxxx directory=dpump_dir1 \
  dumpfile=export_solaris_tablespaces.dmp \
  logfile=export_solaris_tablespaces.log \
  transport_tablespaces=tablespace1,tablespace2,tablespace3

• Convert the tablespaces using RMAN utility as below:

rman target / nocatalog


convert tablespace tablespace1,tablespace2,tablespace3
to platform 'Linux x86 64-bit' format '/u02/oradata/target SID/%U';


In this case, /u02/oradata is an NFS mount of the Linux file system, so the converted datafiles are created directly on the target server.


If you use a local file system, then the files need to be copied via scp or some other utility.
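For example, a rough sketch assuming a hypothetical target host name and the same directory layout on both sides (both are placeholders for your environment):

scp /u02/oradata/<target_sid>/* oracle@<linux_host>:/u02/oradata/<target_sid>/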


After this step, the tablespaces in the source DB can be put back in read-write mode.

• Export the source database metadata using the command below.


export_metadata.par contents:


userid="/ as sysdba"
directory=dpump_dir1
dumpfile=expdp_full_norows.dmp
logfile=expdp_full_norows.log
full=y
content=metadata_only


expdp parfile=export_metadata.par

• Create the Database using DBCA on the target Linux host

Use the same character set and national character set as the source DB.
Verify that the 'db_files' setting is the same as or greater than the source DB value.
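To cross-check these on the source (and again on the new target), something like the below helps:

SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

show parameter db_files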


• Recreate all the users, roles, sys privs in the target database.

Prepare scripts using the Source DB and execute them on the target DB.
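One way to prepare those scripts is DBMS_METADATA on the source DB; a rough sketch (SCHEMA1/SCHEMA2 are placeholders, and get_granted_ddl raises an error for a user that has no grants of a given type, so adjust the list as needed):

SET LONG 100000 PAGESIZE 0 LINESIZE 200

SELECT dbms_metadata.get_ddl('USER', username)
  FROM dba_users WHERE username IN ('SCHEMA1','SCHEMA2');

SELECT dbms_metadata.get_granted_ddl('ROLE_GRANT', username)
  FROM dba_users WHERE username IN ('SCHEMA1','SCHEMA2');

SELECT dbms_metadata.get_granted_ddl('SYSTEM_GRANT', username)
  FROM dba_users WHERE username IN ('SCHEMA1','SCHEMA2');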


As the tablespaces for the users are not yet available, you can temporarily set USERS as the default tablespace for the schemas and change it back to the original tablespace later.

• Import the datafiles on the target host as below:

impdp system parfile=import_tts.par


import_tts.par contents:
directory=dpump_dir1
dumpfile=export_solaris_tablespaces.dmp
logfile=import_user_tablespace.log
transport_datafiles= '/u02/oradata/target SID/xxx.dbf','xxxxx','xxxx'

• Change the tablespaces to read write state on target.

alter tablespace tablespace1 read write;
alter tablespace tablespace2 read write;
alter tablespace tablespace3 read write;

• Import the full metadata-only export into the target server.

Contents of the par file:


userid="/ as sysdba"
directory=DPUMP_DIR1
dumpfile=expdp_full_norows.dmp
logfile=impdp_full_norows.log
full=y
content=metadata_only
table_exists_action=skip

• Generate a script on the source side to capture object privileges from dba_tab_privs and execute it on the target instance.
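A simple generator query along these lines works (schema names are placeholders; spool the output and run it on the target):

SELECT 'grant '||privilege||' on '||owner||'.'||table_name||' to '||grantee||
       CASE WHEN grantable = 'YES' THEN ' with grant option' END||';'
  FROM dba_tab_privs
 WHERE owner IN ('SCHEMA1','SCHEMA2');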

• Execute utlrp.sql at this stage and compare the object counts/invalid objects between the source and target instances.
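For the comparison, a count per schema and object type on both sides is usually enough; for example:

@?/rdbms/admin/utlrp.sql

SELECT owner, object_type, COUNT(*)
  FROM dba_objects
 GROUP BY owner, object_type
 ORDER BY owner, object_type;

SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE status = 'INVALID';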

• Change the default tablespace of users back to their original tablespaces
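Assuming the original defaults were captured from dba_users before the migration, this is just (schema and tablespace names are placeholders):

alter user schema1 default tablespace tablespace1;
alter user schema2 default tablespace tablespace2;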


• Shut down the target instance and take a full backup of the DB.







Root.sh fails while installing 11.2.0.3 Grid Infrastructure



Had this issue recently with root.sh failing on node2 while building a multi-node Linux 11.2.0.3 Grid Infrastructure installation; it reported ORA-15018, ORA-15072 and ORA-15080, which are ASM-related errors.

However, oracleasm scandisks and listdisks were working fine on all the nodes and were listing the ASM disks correctly.

Further investigation revealed that on node2, where root.sh was failing with the above errors, '/usr/sbin/oracleasm querydisk -d disk_name' showed the device as '/dev/sdc1', whereas all the other nodes were showing the correct multipath device names starting with dm-xx.

Changing the ORACLEASM_SCANORDER and ORACLEASM_SCANEXCLUDE parameters as below, followed by a restart of ASMLib, didn't work either.

ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
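The restart and rescan referred to here were along these lines (run as root; the disk name is a placeholder):

# /etc/init.d/oracleasm restart
# /usr/sbin/oracleasm scandisks
# /usr/sbin/oracleasm listdisks
# /usr/sbin/oracleasm querydisk -d DISK1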


This has been documented in MOS note 1059847.1, which is not yet reflected in the standard installation documentation for Linux.

So as a last resort, I deconfigured the clusterware on node2 using rootcrs.pl -deconfig with the force option, followed by a restart of ASMLib, and that picked up the right multipath device for the ASM disk.

# $GRID_HOME/crs/install/rootcrs.pl -deconfig -force

After this step, root.sh worked perfectly fine on this node and all other remaining nodes.




Recovering lost objects using RMAN duplicate database on the same host



Had a requirement today to restore a schema in a database running on Windows without affecting the rest of the database. Incomplete recovery or flashback database was clearly not an option, as the corruption was limited to one schema which has its own tablespace.


So the options were to either perform a Tablespace Point-in-Time Recovery (TSPITR), or duplicate the database to a previous point in time and then compare the objects and import them into the production database.


We preferred option 2, and here are the steps to achieve it:

  • create a new instance/service using oradim
          oradim -new -sid dupdb

  • create a simple pfile using the basic parameters as below
          control_files='d:\new_path\control01.ctl' ....
          db_name=dupdb
          db_file_name_convert='d:\oradata\origdb','d:\oradata\dupdb'
          log_file_name_convert='d:\oradata\origdb','d:\oradata\dupdb' 
          and the usual cache size settings follow...

  • make sure the directories mentioned in the above parameters are created.
  • set ORACLE_SID=dupdb
          sqlplus /nolog
          connect / as sysdba
          create spfile='%ORACLE_HOME%/dbs/spfileDUPDB.ora' from pfile='d:\new_path\pfile\initdupdb.ora';
          startup nomount pfile='d:\new_path\pfile\initdupdb.ora';
          exit

  • Start the rman session as below (if there's no catalog database)
          rman target sys@origdb auxiliary / nocatalog
  •  duplicate target database to dupdb until time "to_date('201109151000','YYYYMMDDHH24MI')";
           (The skip tablespace option can be used if the DB size is large and only selected tablespaces are required; see the sketch after this list.)
  • Once the recovery is complete, start the export of the schema using expdp.
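For reference, a rough sketch of the duplicate command with the skip tablespace option (the tablespace names here are placeholders):

          duplicate target database to dupdb
            until time "to_date('201109151000','YYYYMMDDHH24MI')"
            skip tablespace ts_not_needed1, ts_not_needed2;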
Recovery on the Production DB...
  • take a RMAN or export backup of the original production database at this stage...
  • Drop the objects or the schema in the original production DB (the drop and import steps below will require a short downtime)
  • import the schema using impdp xxxx
  • recompile the objects using utlrp or dbms_utility.compile_schema
And we got the dropped objects back into the DB successfully.





Cloning a db in 'noarchivelog' mode using 'create controlfile set database' option

This clones a DB in 'noarchivelog' mode on the same server without using RMAN backups and the duplicate option.



1. Identify the list of data files, control files, redo logs of the db to be cloned
2. Create backup control file from trace
3. Create pfile from spfile (if it’s not already available)
4. Copy it as the clone's init.ora and edit the below:
               a. Path of ‘control files’ to reflect the new directory path
               b. dump directories etc
               c. db_name is set to “clonedb”
5. alter system switch logfile
6. shutdown immediate
7. cp -rp the datafiles and redo logs to the new clonedb path
8. edit the controlfile trace script from step 2 above, similar to the below:
CREATE CONTROLFILE SET DATABASE "CLONEDB" RESETLOGS NOARCHIVELOG
......
.......
LOGFILE
GROUP 1 '/u01/oradata/clonedb/redo01.log' SIZE 50M,
........
DATAFILE
'/u01/oradata/clonedb/system01.dbf',
......
.......
CHARACTER SET WE8ISO8859P1;
RECOVER DATABASE USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;
ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/oradata/clonedb/temp01.dbf'
SIZE ......;


9. export ORACLE_SID=clonedb
10. sqlplus /nolog
    connect / as sysdba
    startup nomount pfile='<path to the init.ora edited in step 4>'
11. Run the create-controlfile script prepared at step 8
12. Create spfile from pfile
13. Shutdown immediate
14. Create password file
15. Startup
16. Show parameter db_name


The above process doesn't change the DBID; if that too needs to be changed, use the dbnewid (nid) utility.
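A rough sketch of the nid run, assuming the steps above are complete (the sys password is a placeholder; nid changes the DBID and then shuts the database down, after which it must be opened with resetlogs):

export ORACLE_SID=clonedb
sqlplus / as sysdba
  startup mount
  exit
nid target=sys/xxxx
sqlplus / as sysdba
  startup mount
  alter database open resetlogs;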


The only reason you need to copy the redo logs as well is that if the 'recover database using backup controlfile' step asks for a change from a redo log, you need to manually provide the path of that redo log.
If you switch the logfile prior to shutdown this won't be required, but it's better to copy them anyway.


Installing Oracle Developer Suite on Windows 7

Was recently trying to install Oracle Developer Suite 10.1.2.0.2 on a new Windows 7 64-bit laptop and the Universal Installer was exiting with failed OS pre-requisite checks. Here's the procedure to make it work.


  • right-click on setup -> Troubleshoot compatibility -> Troubleshoot program
  • Click on the check box “The program worked in earlier versions of Windows but won’t install or run now...”
  • On the screen “Which version of Windows did this program work on before...”, I selected Windows XP (Service Pack 2)
  • Then click “Test compatibility settings for the program...”
  • It’ll come back with “Settings applied: Windows compatibility mode - Windows XP (Service Pack 2)”
  • Click on "Start the program"
  • Click “Next...”, and once the Universal Installer has launched successfully, click “Yes, save these settings for this program” and close the troubleshooter.

Halfway through the installation it complained: “install has encountered an error while attempting to verify your virtual memory settings...please verify that the sum of the initial sizes of the paging file is at least 256 MB.” The fix was to update the page file size: uncheck ‘Automatically manage paging file size for all drives’ and set the value using the ‘Custom size’ setting, with an initial size above the value quoted in the error.

After this step it complained again about the environment variable ‘PATH’ being more than 1023 characters; I had to clean up some of the values from the PATH variable, and the installation then finished successfully.




RAC to Single Instance conversion...

Had this requirement recently to convert a RAC database to a single instance DB, and here's the process that worked for me. BTW, this is a 10.2.x cluster.

  • backup the db's
  • shutdown the cluster db's using 'srvctl stop database -d <db_name>'
  • check the status of nodeapps using 'srvctl status nodeapps -n '
  • stop the nodeapps on all the nodes 'srvctl stop nodeapps -n '...repeat this for all the nodes
  • stop the cluster CRS services as root..'/etc/init.d/init.crs stop'
  • gather the patches installed using 'opatch lsinventory'
  • shutdown the nodes ...and restart the node that will be used for single instance db's.
  • present the storage to this single node as ext3 filesystems
  • verify all the filesystems are present ....
  • rename the oracle_home directory ...say db_1 as db_1.RAC (if you need to drop the home for space reasons, make sure you backup the tns file for ref.)
  • reinstall the single instance binaries
  • apply all the interim patches (using the info collected above using lsinventory)
  • remove all the cluster related settings from the old init.ora
  • change the cluster_database=false and cluster_database_instances=1
  • startup the instance using the modified pfile
  • disable thread 2 using 'alter database disable thread 2'
  • drop the log groups belonging to thread 2 (and 3, etc.); you may need to switch/archive the logs first if ARCHIVED=NO (see the sketch after this list)
  • drop the thread 2 undo tablespace
  • run a 'show parameter cluster' to verify there are no references to a second instance
  • shutdown immediate
  • create spfile from pfile;
  • startup
  • create the listener
  • update the tnsnames.ora as required
  • update the oracle autostart scripts as required
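The thread 2 cleanup referred to above looked roughly like this (the group numbers and the undo tablespace name are placeholders; check v$log for your own layout):

alter database disable thread 2;

select thread#, group#, status, archived from v$log order by thread#, group#;

alter database drop logfile group 3;
alter database drop logfile group 4;

drop tablespace undotbs2 including contents and datafiles;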

OWB 11.2.0.2 upgrade issues

We were trying to upgrade our OWB installation from 10.2.0.5 to 11.2.0.2, and during the repository export phase it was constantly hanging at 95%. It hangs at the same point (95%) every time we try the export, and the object it's stuck on is the "wb_rt_audit_scripts" table.


We tried increasing the Java heap size to 1280M, as suggested in one of the metalink notes, by modifying the -Xmx setting in the 'reposinst.sh' script; however, that didn't resolve the hang.


Further analysis suggested the CLOB column 'script' in the wb_rt_audit_scripts table was quite large (around 900 MB in our DB), so we purged the audit data using purge_audit_template.sql, which is available under the /rtp/sql directory of the source 10.2 home.
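To see how big that LOB segment actually is before purging, something like this helps (the repository owner name is a placeholder):

select l.table_name, l.column_name, l.segment_name, round(s.bytes/1024/1024) size_mb
  from dba_lobs l, dba_segments s
 where l.owner = 'REP_OWNER'
   and l.table_name = 'WB_RT_AUDIT_SCRIPTS'
   and s.owner = l.owner
   and s.segment_name = l.segment_name;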





The options that we used were as below...

@purge_audit_template.sql rep_owner DEPLOYMENT ALL null null


@purge_audit_template.sql MY_RUNTIME EXECUTION "number_task_errors > 0" null null



After running the above, we then shrank the LOB using:

alter table rep_owner.wb_rt_audit_scripts modify lob(script) (shrink space);



And that released the space back into the tablespace.



After this step we retried the export; it finished quite quickly and the issue was resolved.