Cross-platform Database Migration from Big to Little Endian formats (Solaris to Linux)

Had an interesting requirement recently to migrate a 1 TB+ database from a Solaris SPARC 64-bit environment to Linux x86 64-bit. Here's the step-by-step process that worked perfectly.

This is a big-endian to little-endian format change, so it requires converting the tablespaces/datafiles to the target endian format.
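The endian formats of both platforms can be confirmed upfront from v$transportable_platform, e.g. (platform names as they appear in that view):

SELECT platform_name, endian_format
FROM v$transportable_platform
WHERE platform_name IN ('Solaris[tm] OE (64-bit)', 'Linux x86 64-bit');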


• Determine whether the source tablespaces (to be transported) are self-contained.

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('TABLESPACE1,TABLESPACE2,...', TRUE);

After executing the above procedure, ensure no violations exist by running the query below.
SELECT * FROM TRANSPORT_SET_VIOLATIONS;

Generate a transportable tablespace set by placing the tablespaces in read-only mode:

Alter tablespace tablespace1 read only;
Alter tablespace tablespace2 read only;
Alter tablespace tablespace3 read only;

• Export the Source tablespace metadata as below

expdp system/xxxxx directory=dpump_dir1 \
  dumpfile=export_solaris_tablespaces.dmp \
  logfile=export_solaris_tablespaces.log \
  transport_tablespaces=tablespace1,tablespace2,tablespace3

• Convert the tablespaces using RMAN utility as below:

rman target / nocatalog


convert tablespace tablespace1,tablespace2,tablespace3
to platform 'Linux x86 64-bit' format '/u02/oradata/<target_SID>/%U';


In this case, /u02/oradata is an NFS mount of the Linux file system, so the converted datafiles get created directly on the target server.
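For reference, the mount was along these lines (linuxhost is a placeholder; check Oracle's recommended NFS mount options for datafiles on your platform):

mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 linuxhost:/u02/oradata /u02/oradata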


If you use a local file system instead, the converted files need to be copied over via scp or some other utility.
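For example, assuming the converted files were written to a local staging directory such as /u02/stage (paths and host name are placeholders):

scp /u02/stage/*.dbf oracle@linuxhost:/u02/oradata/<target_SID>/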


After this step, the tablespaces in the source DB can be put back in read-write mode.
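For example, on the source:

alter tablespace tablespace1 read write;
alter tablespace tablespace2 read write;
alter tablespace tablespace3 read write;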

• Export the full Source database metadata using the below command.


export_metadata.par contents:


userid="/ as sysdba"
directory=dpump_dir1
dumpfile=expdp_full_norows.dmp
logfile=expdp_full_norows.log
full=y
content=metadata_only


expdp parfile=export_metadata.par

• Create the Database using DBCA on the target Linux host

Use the same character set and national character set as the source DB.
Verify and ensure the 'db_files' setting is the same as or greater than the source DB value.
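A quick check that can be run on both source and target from SQL*Plus:

show parameter db_files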


• Recreate all the users, roles and system privileges in the target database.

Prepare scripts using the Source DB and execute them on the target DB.


As the tablespaces for the users are not available yet, you can temporarily set 'users' as the default tablespace of the schemas and change it back to the original tablespace later.
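One way to generate the user DDL on the source is DBMS_METADATA (a sketch only; extend the exclusion list and add roles/system grants as needed):

set long 100000 pagesize 0
SELECT dbms_metadata.get_ddl('USER', username)
FROM dba_users
WHERE username NOT IN ('SYS','SYSTEM');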

• Import the datafiles on target host as below:

impdp system parfile=import_tts.par


import_tts.par contents:
directory=dpump_dir1
dumpfile=export_solaris_tablespaces.dmp
logfile=import_user_tablespace.log
transport_datafiles='/u02/oradata/<target_SID>/xxx.dbf','xxxxx','xxxx'

• Change the tablespaces to read write state on target.

alter tablespace tablespace1 read write;
alter tablespace tablespace2 read write;
alter tablespace tablespace3 read write;

• Import the full metadata-only export (no rows) into the target server.

Contents of the par file:


userid="/ as sysdba"
directory=DPUMP_DIR1
dumpfile=expdp_full_norows.dmp
logfile=impdp_full_norows.log
full=y
content=metadata_only
table_exists_action=skip
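
Then run the import, assuming the above is saved as import_metadata.par:

impdp parfile=import_metadata.par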

• Generate a script on the Source side to capture privileges from dba_tab_privs and execute it on the target instance.
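A sketch of such a generator script (filter the owners/grantees to suit your environment):

set pagesize 0 linesize 200
SELECT 'grant ' || privilege || ' on ' || owner || '.' || table_name ||
       ' to ' || grantee ||
       DECODE(grantable, 'YES', ' with grant option', '') || ';'
FROM dba_tab_privs
WHERE owner NOT IN ('SYS','SYSTEM');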

• Execute utlrp.sql at this stage and compare the object count/invalid objects between Source and Target instances.
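For example:

@?/rdbms/admin/utlrp.sql

SELECT owner, object_type, COUNT(*)
FROM dba_objects
WHERE status = 'INVALID'
GROUP BY owner, object_type;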

• Change the default tablespace of users back to their original tablespaces
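For example (user and tablespace names are placeholders):

alter user app_user default tablespace tablespace1;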


• Shut down the target instance and take a full backup of the DB.


Root.sh fails while installing 11.2.0.3 Grid Infrastructure



Had this issue recently with root.sh failing on node2 while building a multi-node Linux 11.2.0.3 Grid Infrastructure installation; it failed with ORA-15018, ORA-15072 and ORA-15080, which are ASM-related errors.

However, oracleasm scandisks and listdisks were working fine on all the nodes and were listing the ASM disks correctly.

Further investigation revealed that on node2, where root.sh was failing with the above errors, /usr/sbin/oracleasm querydisk -d disk_name showed the device as '/dev/sdc1', whereas all the other nodes showed the correct multipath device names starting with dm-xx.
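The per-node check was along these lines (disk name is a placeholder):

/usr/sbin/oracleasm querydisk -d DISK1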

Changing the ORACLEASM_SCANORDER & ORACLEASM_SCANEXCLUDE parameters as below, followed by a restart of asmlib, didn't work either.

ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
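
These parameters live in /etc/sysconfig/oracleasm, and asmlib was restarted afterwards with (syntax may vary by distro/release):

# service oracleasm restart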


This has been documented in MOS note 1059847.1, which hasn't yet made it into the standard installation doc for Linux.

So as a last resort, I deconfigured the clusterware on node2 using rootcrs.pl -deconfig with the force option, followed by a restart of asmlib, and that picked up the right multipath device for the ASM disk.

# $GRID_HOME/crs/install/rootcrs.pl -deconfig -force
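
After the deconfigure, asmlib was restarted and root.sh rerun on that node:

# service oracleasm restart
# $GRID_HOME/root.sh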

After this step, root.sh worked perfectly fine on this node and on all the remaining nodes.