sqlplus hangs....


Came across this bug today where sqlplus hangs while everything else on this old Linux 3 VM (an app server) appears to be running fine. Could this be due to some missing libraries, or a memory leak? Load appears normal.

strace shows an endless stream of 'times(NULL)' system calls, and it turns out to be a bug in the 10.2.0.1 Oracle OCI client. Apparently this bug (#4612267) presents itself once the server uptime exceeds 248 days, and in this case the uptime is exactly 249 days.
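To see what the hung sqlplus is actually doing, attaching strace to its process is enough. A minimal sketch (the PID is just an example):

# attach to the hung sqlplus process
strace -p 12345
# ...the output loops endlessly on times(NULL) calls

# cross-check the server uptime against the 248-day threshold
uptime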

This is a legacy system that has gone untouched for a long time; a good reminder to upgrade and patch systems regularly.

Finally, a simple restart of the app server fixed the issue and gave us another 248 days to think about the patch ;-)


NLS_LENGTH_SEMANTICS & ORA-01450



The NLS_LENGTH_SEMANTICS setting allows us to specify the length of a column in characters or bytes, and 'BYTE' is its default value.

Had an index creation failure today during an OBIEE install with ORA-01450 (maximum key length exceeded); this is in a Unicode AL32UTF8 database with nls_length_semantics=CHAR set.

A quick check of dba_tab_columns reveals the data_length being 4x the defined size of the column, due to the AL32UTF8 character set (up to 4 bytes per character) and the nls_length_semantics value of CHAR.

SQL> select COLUMN_NAME, DATA_TYPE, DATA_LENGTH, CHAR_LENGTH, CHAR_USED
       from dba_tab_columns
      where table_name = 'MDS_COMPONENTS';

COLUMN_NAME                DATA_TYPE    DATA_LENGTH  CHAR_LENGTH  C
-------------------------- ------------ -----------  -----------  -
COMP_COMMENT               VARCHAR2            4000         4000  C
COMP_ID                    VARCHAR2            3200          800  C
COMP_VALUE                 VARCHAR2            4000         4000  C
COMP_LOCALNAME             VARCHAR2            3200          800  C
COMP_PREFIX                VARCHAR2             508          127  C
COMP_NSID                  NUMBER                22            0
.....
.....

10 rows selected.

and a desc mds_components shows the columns defined with CHAR semantics, e.g. VARCHAR2(800 CHAR), due to the above settings.



COMP_PREFIX VARCHAR2(127 CHAR)
COMP_LOCALNAME VARCHAR2(800 CHAR)
COMP_VALUE VARCHAR2(4000 CHAR)
COMP_ID VARCHAR2(800 CHAR)
COMP_COMMENT VARCHAR2(4000 CHAR)


However, more interestingly, an 'alter system set nls_length_semantics=BYTE scope=both' doesn't actually come into effect until the instance is restarted, even though it accepts 'both' as the scope.

Anyway, changing nls_length_semantics to BYTE finally fixed the index creation failure.
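For reference, a minimal sketch (table and column names are made up) of how the two semantics show up in an AL32UTF8 database:

alter session set nls_length_semantics = CHAR;
create table t_char_demo (c1 varchar2(800));   -- data_length becomes 3200 bytes

alter session set nls_length_semantics = BYTE;
create table t_byte_demo (c1 varchar2(800));   -- data_length stays 800 bytes

select table_name, column_name, data_length, char_length, char_used
  from user_tab_columns
 where table_name in ('T_CHAR_DEMO', 'T_BYTE_DEMO');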



Deploying OEM 12c Management agents








OEM 12c agents can be deployed in a few ways, and here's a simple procedure for the 'silent install' option using the response file method.

  • Extract the Management agent software from the OMS for your platform.
                (assuming the OMS_HOME is "/u01/app/middleware/oms")

             /u01/app/middleware/oms/bin/emcli login -username=sysman
             /u01/app/middleware/oms/bin/emcli sync
             /u01/app/middleware/oms/bin/emcli get_supported_platforms

Here's the command used to extract the binaries for Linux x86-64 (by default, the OMS host will have the agent binaries for the platform on which the OMS is running).

/u01/app/middleware/oms/bin/emcli get_agentimage -destination="/tmp/agent_Linux_x86_64" -platform="Linux x86-64" -version=12.1.0.1.0

This will generate the "12.1.0.1.0_AgentCore_226.zip" file under the destination directory (/tmp/agent_Linux_x86_64 in this case).
  • Copy this zip file to a temporary location on the host where the agent needs to be installed.
  • Unzip the file and update the response file (agent.rsp) as below.
               OMS_HOST="OMS Server"
               EM_UPLOAD_PORT="4889"
               AGENT_REGISTRATION_PASSWORD="xxxxxx"
               #AGENT_INSTANCE_HOME=
               AGENT_PORT=3872
               b_startAgent=true
               ORACLE_HOSTNAME="Host where the Management Agent needs to be installed"
               s_agentHomeName="agent12c"



  • Deploy the agent as below
/tmp/agentDeploy.sh AGENT_BASE_DIR=/u00/app/oracle/product/11.2.0/agent12c RESPONSE_FILE=/tmp/agent.rsp

Please ensure AGENT_BASE_DIR is NOT a parent directory of any other Oracle software installation directories. Otherwise you will run into serious issues if you ever need to uninstall, as the entire AGENT_BASE_DIR directory is removed when the agent software is deinstalled.

Once the agent installation is complete, it'll prompt you to run root.sh, which completes the agent setup, and the uploads commence immediately.
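A quick sanity check after root.sh; this assumes the agent instance home defaulted to agent_inst under the AGENT_BASE_DIR used above:

/u00/app/oracle/product/11.2.0/agent12c/agent_inst/bin/emctl status agent
/u00/app/oracle/product/11.2.0/agent12c/agent_inst/bin/emctl upload agent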


 

OEM 12c, Advanced Installation



Just finished an advanced install of OEM 12c 12.1.0.1 (Bundle Patch 1), and it was much easier than I expected.

For those of you planning to install OEM 12c, there's a new Bundle Patch 1 version (12.1.0.1) available on the OTN download page, which is the newest patchset available after the October 2011 product release.

*This post was first written in Dec 2012; however, OEM 12c Release 2 is already out as of Feb 2013, and if you're planning to install or upgrade now, it's recommended to go with the newest release ...
"Oracle Enterprise Manager Cloud Control 12c Release 2 Plug-in Update 1 (12.1.0.2) New!"


Here's a quick summary of the installation steps that I've followed.
  • Install Oracle Database Server 11.2.0.3 (with the January 2012 PSU)
  • Create a database for EM repository
  • Download the OEM 12c (12.1.0.1) software from the OTN enterprise-manager/downloads page and unpack it into a temporary folder
  • After the usual 'support credentials' and 'search for updates' sections, the installer will prompt you to choose the type of installation: 'Simple' or 'Advanced'.
  • Once you select 'Advanced' and provide the 'middleware home' location, say '/u01/app/middleware', it moves into the 'select plug-ins' section
  • In this section, the Oracle Database, Oracle Fusion Middleware, Oracle Exadata and Oracle MOS (My Oracle Support) plug-ins are selected by default.
  • The next section is the WebLogic configuration, where you set passwords for the WebLogic admin and Node Manager accounts and also provide the OMS instance base location, which is '/u01/app/middleware/gc_inst'
  • The next section is the database details: the usual host, service name, port and SYS password for the repository setup
  • After this point, the installer will perform a few database configuration checks and might ask you to disable the automatic statistics collection job and also increase open_cursors, etc. (see the sketch after this list)
  • The next section will ask you to choose the SYSMAN and agent registration passwords and also the tablespace/datafile details in the repository database.
  • The final screen is the port configuration section, where you can go with the default ports or modify them per your standards.
    The important ones here are the EM upload HTTP and HTTPS (SSL) ports, usually 4889 and 1159, and also the management agent port, 3872.
Once you review the summary page and continue, the install begins; after the usual root.sh, the OEM 12c installation finished within an hour and OEM 12c is ready to monitor the managed targets.
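For the repository prerequisite checks mentioned in the list, this is roughly what the installer asked for in this environment; a sketch only, and the open_cursors value shown is just the commonly suggested minimum:

-- disable the automatic optimizer statistics collection job for the duration of the install
exec dbms_auto_task_admin.disable('auto optimizer stats collection', null, null);

-- bump open_cursors to at least 300
alter system set open_cursors = 300 scope=both;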



Cross-platform Database Migration from Big to Little Endian formats (Solaris to Linux)

Had an interesting requirement recently to migrate a 1 TB+ database from a Solaris SPARC 64-bit to a Linux x86 64-bit environment, and here's the step-by-step process that worked perfectly.

This is a big-endian to little-endian format change, so it requires converting the tablespace datafiles to the target endian format.
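A quick way to confirm the endian formats involved (the platform names below are as they appear in v$transportable_platform):

select platform_id, platform_name, endian_format
  from v$transportable_platform
 where platform_name in ('Solaris[tm] OE (64-bit)', 'Linux x86 64-bit');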


• Determine whether the source tablespaces (to be transported) are self-contained.

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('TABLESPACE1,TABLESPACE2,....', TRUE);

After executing the above procedure, ensure no violations exist by running the query below.
SELECT * FROM TRANSPORT_SET_VIOLATIONS;

• Generate a transportable tablespace set; first, place the tablespaces in read-only mode:

Alter tablespace tablespace1 read only;
Alter tablespace tablespace2 read only;
Alter tablespace tablespace3 read only;

• Export the Source tablespace metadata as below

expdp system/xxxxx
directory=dpump_dir1
dumpfile=export_solaris_tablespaces.dmp
logfile=export_solaris_tablespaces.log
transport_tablespaces=tablespace1,tablespace2,tablespace3

• Convert the tablespaces using RMAN utility as below:

rman target / nocatalog


convert tablespace tablespace1,tablespace2,tablespace3
to platform 'Linux x86 64-bit' format '/u02/oradata/target SID/%U';


In this case, /u02/oradata is an NFS mount of the Linux file system, so the converted datafiles get created directly on the target server.


If you use a local file system, then the files need to be copied via scp or some other utility.


After this step, the tablespaces in the source DB can be put back into read-write mode.

• Export the source database metadata using the command below.


export_metadata.par contents:


userid="/ as sysdba"
directory=dpump_dir1
dumpfile=expdp_full_norows.dmp
logfile=expdp_full_norows.log
full=y
content=metadata_only


expdp parfile=export_metadata.par

• Create the Database using DBCA on the target Linux host

Use the same character set and national character set as the source DB.
Verify and ensure the 'db_files' setting is the same as or greater than the source DB value.
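A couple of quick checks worth running on both source and target:

select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

show parameter db_files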


• Recreate all the users, roles and system privileges in the target database.

Prepare scripts using the Source DB and execute them on the target DB.


As the tablespaces for the users are not available, you can set 'users' as the default tablespaces of the schemas temporarily and change it back to the original tablespace later.
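One way to build those scripts on the source DB is with dbms_metadata; a minimal sketch (the schema names are only examples):

set long 100000 pagesize 0 linesize 200

-- CREATE USER statements (these reference the original default tablespaces,
-- which do not exist yet on the target; hence the temporary 'users' default above)
select dbms_metadata.get_ddl('USER', username) || ';'
  from dba_users
 where username in ('APP_SCHEMA1', 'APP_SCHEMA2');

-- role and system privilege grants
select dbms_metadata.get_granted_ddl('ROLE_GRANT', grantee) || ';'
  from (select distinct grantee from dba_role_privs where grantee in ('APP_SCHEMA1', 'APP_SCHEMA2'));

select dbms_metadata.get_granted_ddl('SYSTEM_GRANT', grantee) || ';'
  from (select distinct grantee from dba_sys_privs where grantee in ('APP_SCHEMA1', 'APP_SCHEMA2'));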

• Import the datafiles on the target host as below:

impdp system parfile=import_tts.par


import_tts.par contents:
directory=dpump_dir1
dumpfile=export_solaris_tablespaces.dmp
logfile=import_user_tablespace.log
transport_datafiles= '/u02/oradata/target SID/xxx.dbf','xxxxx','xxxx'

• Change the tablespaces to read write state on target.

alter tablespace tablespace1 read write;
alter tablespace tablespace2 read write;
alter tablespace tablespace3 read write;

• Import the full export (metadata only) into the target server.

Contents of the par file:


userid="/ as sysdba"
directory=DPUMP_DIR1
dumpfile=expdp_full_norows.dmp
logfile=impdp_full_norows.log
full=y
content=metadata_only
table_exists_action=skip

• Generate a script on the source side to capture privileges from dba_tab_privs and execute it on the target instance.
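A minimal sketch of such a script (the schema owner list is just an example):

select 'grant ' || privilege || ' on ' || owner || '.' || table_name ||
       ' to ' || grantee ||
       decode(grantable, 'YES', ' with grant option') || ';' as grant_stmt
  from dba_tab_privs
 where owner in ('APP_SCHEMA1', 'APP_SCHEMA2');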

• Execute utlrp.sql at this stage and compare the object counts and invalid objects between the source and target instances.
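The recompile and a quick invalid-object comparison can look like this:

SQL> @?/rdbms/admin/utlrp.sql

-- run on both source and target and compare the results
select owner, count(*) invalid_count
  from dba_objects
 where status = 'INVALID'
 group by owner
 order by owner;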

• Change the default tablespace of users back to their original tablespaces


• Shut down the target instance and take a full backup of the DB.







Root.sh fails while installing 11.2.0.3 Grid Infrastructure



Had this issue recently with root.sh failing on node2 while building a multi-node Linux 11.2.0.3 Grid Infrastructure installation; it failed with ORA-15018, ORA-15072 and ORA-15080, which are all ASM-related.

However, oracleasm scandisks and listdisks were working fine on all the nodes and were listing the ASM disks correctly.

Further investigation revealed that on node2, where root.sh was failing with the above errors, '/usr/sbin/oracleasm querydisk -d disk_name' showed the device as '/dev/sdc1', whereas all the other nodes were showing the correct multipath device names starting with dm-xx.

Changing the ORACLEASM_SCANORDER and ORACLEASM_SCANEXCLUDE parameters as below, followed by a restart of asmlib, didn't work either.

ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
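These parameters live in /etc/sysconfig/oracleasm; after editing them, asmlib was restarted and the mapping re-checked (the disk name here is just an example):

service oracleasm restart
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm querydisk -d DATA01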


This is documented in MOS note 1059847.1, which has not made it into the standard installation doc for Linux yet.

So as a last resort, I deconfigured the clusterware on node2 using rootcrs.pl -deconfig with the force option, followed by a restart of asmlib, and that picked up the right multipath device for the ASM disk.

#$GRID_HOME/crs/install/rootcrs.pl -deconfig -force

After this step, root.sh worked perfectly fine on this node and all other remaining nodes.




Recovering lost objects using RMAN duplicate database on the same host



Had a requirement today to restore a schema in a database running on Windows, without affecting the rest of the database. Incomplete recovery or a flashback database is clearly not an option, as the corruption was limited to one schema which has its own tablespace.


So the options are to either perform a
"Tablespace Point-in-Time Recovery (TSPITR)"
  or
"duplicate database to a previous point in time" and then compare the objects and import them into the production database.


We preferred option 2, and here are the steps to achieve it:

  • create a new instance/service using oradim
          oradim -new -sid dupdb

  • create a simple pfile using the basic parameters as below
          control_files='d:\new_path\control01.ctl' ....
          db_name=dupdb
          db_file_name_convert='d:\oradata\origdb','d:\oradata\dupdb'
          log_file_name_convert='d:\oradata\origdb','d:\oradata\dupdb' 
          and the usual cache size settings follow...

  • make sure the directories mentioned in the above parameters are created.
  • set ORACLE_SID=dupdb
          sqlplus /nolog
          connect / as sysdba
          create spfile='%ORACLE_HOME%/dbs/spfileDUPDB.ora' from pfile='d:\new_path\pfile\initdupdb.ora';
          startup nomount pfile='d:\new_path\pfile\initdupdb.ora';
          exit

  • Start the rman session as below (if there's no catalog database)
          rman target sys@origdb auxiliary / nocatalog
  • duplicate target database to dupdb until time "to_date('201109151000','YYYYMMDDHH24MI')";
           (The skip tablespace option can be used if the DB size is large and only selected tablespaces are required)
  • Once the recovery is complete, export the affected schema from the duplicate using expdp (a sketch of the export/import round trip follows this list).
Recovery on the Production DB...
  • take a RMAN or export backup of the original production database at this stage...
  • Drop the schema or the objects in the original schema in the production DB (the drop and import steps below will require a short downtime)
  • import the schema using impdp xxxx
  • recompile the objects using utlrp or dbms_utility.compile_schema
And we got the dropped objects back into the DB successfully.
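For reference, a sketch of the expdp/impdp round trip mentioned above (the schema, directory and file names are examples, not the actual ones used):

expdp system@dupdb schemas=APP_SCHEMA directory=DATA_PUMP_DIR dumpfile=app_schema.dmp logfile=exp_app_schema.log

impdp system@origdb schemas=APP_SCHEMA directory=DATA_PUMP_DIR dumpfile=app_schema.dmp logfile=imp_app_schema.log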