Thursday, September 27, 2012

DB - Character Set conversion using CSSCAN/CSALTER

Character Set conversion using CSSCAN/CSALTER (Master Note 260192.1):
------------------------------------------------------------------------------------------------------
1. Installing Csscan in Oracle RDBMS 11.2.0.1
export ORACLE_SID=<your SID>   (on Windows: set ORACLE_SID=<your SID>)
sqlplus /nolog
SQL> conn / as sysdba
SQL> set TERMOUT ON
SQL> set ECHO ON
SQL> spool csminst.log
SQL> -- note the drop user
SQL> drop user csmig cascade;
SQL> @?/rdbms/admin/csminst.sql

Check csminst.log for errors. The CSMIG user is locked by default.
SQL> alter user csmig identified by csmig account unlock;
User altered.
2. Check current character set
SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- --------------------
OBIQA     READ WRITE
SQL> SELECT value FROM NLS_DATABASE_PARAMETERS WHERE parameter='NLS_CHARACTERSET';
VALUE
----------------------------------------
WE8MSWIN1252
2.a) Invalid objects.
conn / AS sysdba
SELECT owner,
object_name,
object_type,
status
FROM dba_objects
WHERE status ='INVALID'
/

If there are any invalid objects, run utlrp.sql:
SQL> @?/rdbms/admin/utlrp.sql

2.b) Orphaned Data Pump master tables (10g and up) – follow Note 336014.1.

2.c) Leftover temporary tables using CHAR semantics – revisit before CSALTER. These can be ignored if they exist.

3. Run CSSCAN to check LOSSY objects

When run without the EXCLUDE option, CSSCAN reports the table XDB.XDB$COMPLEX_TYPE as lossy. This is a known 11gR2 code bug, so use the EXCLUDE option and run as below:

csscan \"sys/manager@<sid> as sysdba\" FULL=Y FROMCHAR=WE8MSWIN1252 TOCHAR=WE8MSWIN1252 LOG=dbcheck CAPTURE=N ARRAY=1000000 PROCESS=2 USER=XDB EXCLUDE='(XDB.XDB$COMPLEX_TYPE)'

After this, there must be no "Lossy" data in the summary report.

4. Run CSSCAN with the target character set, here UTF8:

csscan \"sys/manager@obiqa as sysdba\" FULL=Y TOCHAR=UTF8 LOG=TOUTF8 CAPTURE=Y ARRAY=1000000 PROCESS=2

5. Dealing with "Truncation" data – in this case there is no truncation data to deal with.

6. Dealing with "Convertible" data.

6.a) "Convertible" application data:

When using Csalter/ALTER DATABASE CHARACTER SET, all user/application data flagged as "Convertible" needs to be exported and then truncated or deleted. This means ALL data listed as "Convertible" in the [Application Data Conversion Summary] NEEDS to be exported and truncated.
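The CSSCAN categories can be sketched in a few lines of Python, using the cp1252 codec as a stand-in for WE8MSWIN1252. This `classify` helper and the byte limits are illustrative assumptions only, not Oracle code:

```python
# Rough sketch (not Oracle code) of how CSSCAN classifies character data
# when moving from WE8MSWIN1252 (Python's "cp1252") to UTF8.
def classify(raw: bytes, max_bytes: int) -> str:
    try:
        text = raw.decode("cp1252")
    except UnicodeDecodeError:
        return "LOSSY"            # bytes invalid in the declared source charset
    converted = text.encode("utf-8")
    if converted == raw:
        return "CHANGELESS"       # same byte sequence in both charsets
    if len(converted) > max_bytes:
        return "TRUNCATION"       # converted value no longer fits the column
    return "CONVERTIBLE"          # changes, but fits: export/truncate/import

# Examples against a hypothetical VARCHAR2(10 BYTE) column:
print(classify(b"abc", 10))                        # CHANGELESS: pure ASCII
print(classify("résumé".encode("cp1252"), 10))     # CONVERTIBLE: each é grows to 2 bytes
print(classify("ééééééééé".encode("cp1252"), 10))  # TRUNCATION: 18 bytes in UTF-8
```

Only the "Convertible" and "Truncation" rows need manual handling; "Changeless" data is byte-identical in the new character set and is left alone.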

We have the following tables in the Convertible section under Application Data.

USER.TABLE

--------------------------------------------------

APEX_030200.WWV_FLOW_BANNER

APEX_030200.WWV_FLOW_BUTTON_TEMPLATES

APEX_030200.WWV_FLOW_CUSTOM_AUTH_SETUPS

APEX_030200.WWV_FLOW_FLASH_CHART_SERIES

APEX_030200.WWV_FLOW_LIST_TEMPLATES

APEX_030200.WWV_FLOW_PAGE_GENERIC_ATTR

APEX_030200.WWV_FLOW_PAGE_PLUGS

APEX_030200.WWV_FLOW_PAGE_PLUG_TEMPLATES

APEX_030200.WWV_FLOW_PROCESSING

APEX_030200.WWV_FLOW_ROW_TEMPLATES

APEX_030200.WWV_FLOW_SHORTCUTS

APEX_030200.WWV_FLOW_STEPS

APEX_030200.WWV_FLOW_STEP_PROCESSING

APEX_030200.WWV_FLOW_TEMPLATES

APEX_030200.WWV_FLOW_WORKSHEETS

DAC.W_ETL_TAG

INFORM.OPB_EXPRESSION

INFORM.OPB_MAPPING

INFORM.OPB_SRC

INFORM.OPB_SRC_FLD

INFORM.OPB_TARG

INFORM.OPB_TARG_FLD

6.b) "Convertible" Data Dictionary data:

* For 10g and up you do not need to take action on "Convertible" Data Dictionary CLOB data; convertible CLOB in Data Dictionary objects is handled by Csalter. Convertible CHAR, VARCHAR2 and LONG data listed under the "[Data Dictionary Conversion Summary]" header in toutf8.txt, however, does need action.

7. Take a COLD backup of the database (a cp -rp copy of the DB files):

cp -rp obiqa obiqa_bak/
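A minimal sketch of that copy step with a quick sanity check; the directory and file names below are placeholders created just for this demo, so substitute the real oradata paths:

```python
# Demo of the cold-backup copy step with a post-copy sanity check
# (temporary placeholder paths stand in for the real oradata directories).
import shutil, tempfile
from pathlib import Path

src = Path(tempfile.mkdtemp()) / "obiqa"
src.mkdir()
(src / "system01.dbf").write_text("stand-in datafile")

dst = Path(tempfile.mkdtemp()) / "obiqa_bak"
shutil.copytree(src, dst)   # like cp -rp: recursive, preserves timestamps/permissions

# verify: source and backup contain the same file names
src_files = sorted(p.name for p in src.rglob("*") if p.is_file())
dst_files = sorted(p.name for p in dst.rglob("*") if p.is_file())
print(src_files == dst_files)   # True
```

The point of the check is cheap insurance: if the cold copy is incomplete, CSALTER is the wrong time to find out.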

8. Take export backup of schemas/tables reported in the TOUTF8.txt

CREATE OR REPLACE DIRECTORY DATA_PUMP AS '/home/oracle/char_conv';

Export examples:

expdp tables=MGMT_IP_ELEM_DEFAULT_PARAMS,MGMT_IP_REPORT_ELEM_PARAMS,MGMT_IP_SQL_STATEMENTS directory=DATA_PUMP dumpfile=sysman.dmp logfile=sysman.exp.log

 expdp schemas=SIEBEL directory=DATA_PUMP dumpfile=SIEBEL.dmp logfile=SIEBEL.exp.log

 exp file=sys.dmp log=sys.exp.log tables="SYS.WRI\$_OPTSTAT_HISTGRM_HISTORY","SYS.HISTGRM\$" statistics=none grants=y indexes=y buffer=10240000

 expdp schemas=APEX_030200 directory=DATA_PUMP dumpfile=APEX_030200.dmp logfile=APEX_030200.exp.log parallel=4

impdp schemas=SIEBEL directory=DATA_PUMP dumpfile=SIEBEL.dmp logfile=SIEBEL.imp.log parallel=4

 DAC

expdp tables=W_ETL_TAG directory=DATA_PUMP dumpfile=W_ETL_TAG.dmp logfile=W_ETL_TAG.exp.log

impdp tables=W_ETL_TAG directory=DATA_PUMP dumpfile=W_ETL_TAG.dmp logfile=W_ETL_TAG.imp.log

 INFORM

expdp tables=OPB_EXPRESSION,OPB_MAPPING,OPB_SRC,OPB_SRC_FLD,OPB_TARG,OPB_TARG_FLD directory=DATA_PUMP dumpfile=inform.dmp logfile=inform.exp.log

impdp tables=OPB_EXPRESSION,OPB_MAPPING,OPB_SRC,OPB_SRC_FLD,OPB_TARG,OPB_TARG_FLD directory=DATA_PUMP dumpfile=inform.dmp logfile=inform.imp.log

9. Run the checks from section 7 of the MOS note ("Before using Csalter / Alter Database Character Set check the database for:").

There must be no mismatches reported.

10. Run CSSCAN again and confirm the requirements:

csscan \"sys/manager@obiqa as sysdba\" FULL=Y TOCHAR=UTF8 LOG=TOUTF8FIN CAPTURE=Y ARRAY=1000000 PROCESS=2

In TOUTF8FIN.txt the messages below must appear:

The data dictionary can be safely migrated using the CSALTER script
and
All character type application data remain the same in the new character set

Also check that

select value from csm$parameters where name='TO_CHARSET_NAME';

returns the character set you wish to convert to.

11. The character set conversion step:

show parameter CLUSTER_DATABASE
-- if you are using an spfile, note the current values of:
show parameter job_queue_processes
show parameter aq_tm_processes
-- (Bug 6005344, fixed in 11g)
-- then do:

shutdown immediate
startup restrict
SPOOL Nswitch.log

ALTER SYSTEM ENABLE RESTRICTED SESSION;

 @?/rdbms/admin/csalter.plb

>>Alter the database character set...

>>CSALTER operation completed, please restart database

-- restore the values noted before the conversion:
ALTER SYSTEM SET job_queue_processes=1000 SCOPE=BOTH;
ALTER SYSTEM SET aq_tm_processes=2 SCOPE=BOTH;

shutdown
startup

12. Verify the character set conversion:

SQL> SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET' ;

VALUE$
--------------------------------------------------------------------------------
UTF8

13. Reload the Data Pump packages after the change to UTF8 (10g and up).

  • For Oracle versions 10.2, 11.1 and 11.2:
1. catdph.sql re-installs the Data Pump types and views:

SQL> @$ORACLE_HOME/rdbms/admin/catdph.sql

Note: If XDB is installed, it is also required to run the catmetx.sql script:

SQL> @$ORACLE_HOME/rdbms/admin/catmetx.sql

2. prvtdtde.plb re-installs the tde_library packages:

SQL> @$ORACLE_HOME/rdbms/admin/prvtdtde.plb

3. catdpb.sql re-installs the Data Pump packages:

SQL> @$ORACLE_HOME/rdbms/admin/catdpb.sql

4. dbmspump.sql re-installs the DBMS Data Pump objects:

SQL> @$ORACLE_HOME/rdbms/admin/dbmspump.sql

5. To recompile invalid objects, if any:

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

SQL> select count(*) from dba_objects where status <> 'VALID';

  COUNT(*)
----------
         3

 
14. Import the exported data.

14.a) When using Csalter/Alter Database to go to UTF8 (or AL32UTF8), and there was NO "Truncation" data, only "Convertible" and "Changeless", in the csscan done in point 4:

SQL> alter system set BLANK_TRIMMING=TRUE scope=spfile;

 Bounce the DB
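Why BLANK_TRIMMING matters here can be sketched as follows. This is a Python illustration under an assumed CHAR(10 BYTE) column, not Oracle internals: CHAR values are blank-padded, and after conversion to UTF8 the padded value can exceed the column's byte limit even though the real data still fits; BLANK_TRIMMING=TRUE lets the insert succeed by dropping only trailing blanks.

```python
# Illustration only (not Oracle internals): a blank-padded CHAR(10 BYTE)
# value that fits in WE8MSWIN1252 but overflows in UTF-8 unless trailing
# blanks may be trimmed, which is what BLANK_TRIMMING=TRUE permits.
def fits(value: str, max_bytes: int, blank_trimming: bool) -> bool:
    if len(value.encode("utf-8")) <= max_bytes:
        return True
    # with BLANK_TRIMMING=TRUE, the value is accepted when only
    # trailing blanks push it past the limit
    return blank_trimming and len(value.rstrip(" ").encode("utf-8")) <= max_bytes

padded = "résumé" + " " * 4                    # 10 chars = 10 bytes in cp1252
print(fits(padded, 10, blank_trimming=False))  # False: 12 bytes in UTF-8
print(fits(padded, 10, blank_trimming=True))   # True: trimmed value is 8 bytes
```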

 15. Import back the Exported Data

Drop the exported tables/schemas before starting the import.

 CREATE OR REPLACE DIRECTORY DATA_PUMP AS '/home/oracle/char_conv';

expdp schemas=APEX_030200 directory=DATA_PUMP dumpfile=APEX_030200.dmp logfile=APEX_030200.exp.log parallel=4

impdp schemas=APEX_030200 directory=DATA_PUMP dumpfile=APEX_030200.dmp logfile=APEX_030200.imp.log parallel=4

-- It is better to export only the necessary tables and drop those before import; the full schema import increases the number of invalid objects.

 DAC

expdp tables=W_ETL_TAG directory=DATA_PUMP dumpfile=W_ETL_TAG.dmp logfile=W_ETL_TAG.exp.log

impdp tables=W_ETL_TAG directory=DATA_PUMP dumpfile=W_ETL_TAG.dmp logfile=W_ETL_TAG.imp.log

 INFORM

expdp tables=OPB_EXPRESSION,OPB_MAPPING,OPB_SRC,OPB_SRC_FLD,OPB_TARG,OPB_TARG_FLD directory=DATA_PUMP dumpfile=inform.dmp logfile=inform.exp.log

impdp tables=OPB_EXPRESSION,OPB_MAPPING,OPB_SRC,OPB_SRC_FLD,OPB_TARG,OPB_TARG_FLD directory=DATA_PUMP dumpfile=inform.dmp logfile=inform.imp.log

Import examples:

 impdp tables=MGMT_IP_ELEM_DEFAULT_PARAMS,MGMT_IP_REPORT_ELEM_PARAMS,MGMT_IP_SQL_STATEMENTS directory=DATA_PUMP dumpfile=sysman.dmp logfile=sysman.imp.log

 impdp schemas=SIEBEL directory=DATA_PUMP dumpfile=SIEBEL.dmp logfile=SIEBEL.imp.log

 imp fromuser=sys touser=sys file=sys.dmp tables="WRI\$_OPTSTAT_HISTGRM_HISTORY","HISTGRM\$" ignore=y log=sys.imp.log

Saturday, September 1, 2012

Snapshot Standby Clone Process:

Note: Before performing this, both PROD and DR must be in sync; only then go ahead.
Sync the database from the EBS DR host.

E.g. here both PROD and DR are in complete sync:
SQL> select max(sequence#) from v$archived_log where applied='YES';
MAX(SEQUENCE#)
--------------
         35178
SQL> select max(sequence#) from v$archived_log ;
MAX(SEQUENCE#)
--------------
         35178
Cancel the recovery from DR:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.
SQL> select log_mode,flashback_on from v$database;
LOG_MODE     FLASHBACK_ON
------------ ------------------
ARCHIVELOG   NO
SQL> alter database flashback on;
Database altered.
SQL> CREATE RESTORE POINT SAFE_PROD_TO_SUPT_POINT GUARANTEE FLASHBACK DATABASE;

SQL> select log_mode,flashback_on from v$database;
LOG_MODE     FLASHBACK_ON
------------ ------------------
ARCHIVELOG   YES

Convert the physical standby database into a snapshot standby database:
SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
Database altered.
Shut down the DR database now:
SQL> shut immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
Start the database :
SQL> startup
ORACLE instance started.
Total System Global Area 5344731136 bytes
Fixed Size                  2159472 bytes
Variable Size            1073745040 bytes
Database Buffers         4227858432 bytes
Redo Buffers               40968192 bytes
Database mounted.
Database opened.
SQL> select FLASHBACK_ON,open_mode from v$database;
FLASHBACK_ON       OPEN_MODE
------------------ ----------
YES                READ WRITE

Switch the log file on the DR side and shut down the database.
SQL> alter system switch logfile;
System altered.
SQL> select DATABASE_ROLE,name from v$database;
DATABASE_ROLE    NAME
---------------- ---------
SNAPSHOT STANDBY PROD

Shut down all the services on the target (TARGET) side and drop the DB.
SQL> shut immediate;
SQL>startup mount restrict;
SQL> select name,open_mode from v$database;
SQL> drop database;
Copy the files from DR to TARGET using an RMAN script.
Once the copy has completed, perform the following on the DR side:
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 5344731136 bytes
Fixed Size                  2159472 bytes
Variable Size            1140853904 bytes
Database Buffers         4160749568 bytes
Redo Buffers               40968192 bytes
SQL> alter database mount;
Database altered.
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
Database altered.
The database needs to be shut down here:
SQL> shut immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
Start the database in nomount state, mount it as a standby, and start the recovery again:
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 5344731136 bytes
Fixed Size                  2159472 bytes
Variable Size            1140853904 bytes
Database Buffers         4160749568 bytes
Redo Buffers               40968192 bytes
SQL> alter database mount standby database;
Database altered.

SQL> Alter database recover managed standby database disconnect from session;
Database altered.
SQL> alter database flashback off;
SQL> select FLASHBACK_ON,open_mode from v$database;
FLASHBACK_ON       OPEN_MODE
------------------ --------------------
NO                 READ WRITE

Verify from the DR alert log that recovery is happening; you should see output like the below:
[oracle@DRdbnode]$ tail -f ./admin/DR_dbnodedr01/diag/rdbms/drdb/DRDB/trace/alert_DRDB.log
Primary database is in MAXIMUM PERFORMANCE mode
Thu Aug 12 08:50:16 2012
Media Recovery Log /archivelogs/DRSB/DRDB_1_35180_752148892.arc
Thu Aug 12 08:50:50 2012
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[1642]: Assigned to RFS process 29995
RFS[1642]: Identified database type as 'physical standby'
Thu Aug 12 08:52:06 2012
Media Recovery Waiting for thread 1 sequence 35181 (in transit)
Thu Aug 12 09:00:37 2012