Thursday, May 30, 2013

Tips About FNDLOAD [ID 735338.1]


What can be done?
The following entities can be migrated with FNDLOAD:
· Concurrent Programs, Executables
· Request Groups, Request Sets
· Profile Options
· Key and Descriptive Flexfields
· Menus and Responsibilities
· Forms and Form Functions/Personalizations
· Attachments
· Messages
· Value Sets and Values
· Lookup Types
· User Responsibilities
· Printer Definitions
· FND Dictionary
· Help Configuration
· Document Sequences
· Alerts
· Concurrent Manager Schedules

FNDLOAD can also be executed as a concurrent program.

Some advantages of using FNDLOAD
1. Because downloaded data is stored in a text file, version control is possible
2. No license cost: the tool ships with E-Business Suite (investment = $0)
3. Minimal learning curve, which is a relief for developers and DBAs
4. Fully supported and recommended by Oracle
5. Capture the migrations in a file and replay it during installations, clones, etc. to migrate in batch
6. Pinpoint when something happened and where (which database) easily
7. The AOL data migration process is simplified and streamlined
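Advantage 5 above (batch replay of captured migrations) can be sketched as a simple driver loop. This is only a dry-run sketch: the directory, file names, credentials, and the echo-instead-of-execute behavior are all hypothetical stand-ins, not part of FNDLOAD itself.

```shell
# Sketch: replay all captured .ldt files against a target instance in batch.
# LDT_DIR, APPS_CRED, and the stand-in data files are hypothetical.
LDT_DIR=./migration/ldt
LCT_FILE='$FND_TOP/patch/115/import/afcpprog.lct'
APPS_CRED='apps/pwd'

mkdir -p "$LDT_DIR"
# Stand-in data files so the loop below has something to iterate over.
touch "$LDT_DIR/XX_PROG_A.ldt" "$LDT_DIR/XX_PROG_B.ldt"

for ldt in "$LDT_DIR"/*.ldt; do
  # Dry run: echo the FNDLOAD command instead of executing it.
  echo FNDLOAD "$APPS_CRED" 0 Y UPLOAD "$LCT_FILE" "$ldt" -
done > upload_commands.txt

cat upload_commands.txt
```

In a real migration the `echo` would be dropped so each command executes, and the generated list doubles as a record of exactly what was uploaded.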

Some disadvantages of using FNDLOAD
1. Applications patching mechanisms use FNDLOAD heavily, so the possibility of negative impact is not zero
2. UPLOAD_MODE=REPLACE applies only to menus
3. No validation against migrating database/instance-sensitive data

Syntax
The Generic Loader is a concurrent program named FNDLOAD. The concurrent executable takes the following parameters:

FNDLOAD apps/pwd 0 Y mode configfile datafile entity [ param ...]
where

<apps/pwd>
The APPS schema and password in the form username/password[@connect_string]. If
connect_string is omitted, it is taken in a platform-specific manner from the environment
using the name TWO_TASK.

< 0 Y >
Concurrent program flags.

mode
UPLOAD or DOWNLOAD. UPLOAD causes the datafile to be uploaded to the database. DOWNLOAD causes the loader to fetch rows and write them to the datafile.

<congle>
The configuration file to use (usually with a suffix of .lct, but not enforced or supplied by the loader).

<datale>
The data file to write (usually with a suffix of .ldt, but not enforced or supplied by the loader). If the data file already exists, it will be overwritten.

<entity>
The entity (or entities) to upload or download. When uploading, specify "-" to upload all entities in the data file.

< [param] >
Zero or more additional parameters are used to provide bind values in the access SQL (both UPLOAD and DOWNLOAD). Each parameter is in the form NAME=VALUE. NAME should not conflict with an attribute name for the entities being loaded.

Modes of Operation
The mode drives the whole flow; it is always either UPLOAD or DOWNLOAD.

Example of download
FNDLOAD apps/pwd 0 Y DOWNLOAD ${FND_TOP}/patch/115/import/afcpprog.lct myfile.ldt PROGRAM CONCURRENT_PROGRAM_NAME=<concurrent_program_short_name> APPLICATION_SHORT_NAME=<application_short_name>

Example of Upload
FNDLOAD apps/pwd 0 Y UPLOAD ${FND_TOP}/patch/115/import/afcpprog.lct myfile.ldt - CUSTOM_MODE=FORCE (CUSTOM_MODE=FORCE is an undocumented parameter)

What are FNDLOAD Options?
· Application level OR single entity level download
o (Example) Download all the profile options of Inventory or just the INV: Default Primary UOM
· Entire OR partial upload of a data file
o (Example) Upload the entire myfile.ldt (entity "-") or just a single entity by naming it, with mode UPLOAD or UPLOAD_PARTIAL
o An entity name is required for UPLOAD_PARTIAL mode
· Custom mode force update
o To override Oracle's upload algorithm and update the custom AOL data regardless, use CUSTOM_MODE=FORCE
o UPLOAD_MODE=REPLACE (only for menus)
· Support for NLS uploads
o NLS data uploads for translated columns are supported; use UPLOAD_MODE=NLS

Where Are the Configuration Files Located?
By default, Oracle delivers configuration files that can be used to download most entities.
· Configuration files with extension .lct
o On Unix - all the configuration files are in $FND_TOP/patch/115/import directory
o On Unix Oracle also places the original configuration files in $FND_TOP/admin/import directory
· Data files with extension .ldt
FNDLOAD File Structure
· The configuration files (.lct) are delivered and maintained by Oracle.
· They contain entity definitions, parent-child relationships, and user input parameters identified by :NAME.
· Downloading a parent automatically downloads all children - (Example) Concurrent Program download.
· The data files (.ldt) have both entity definition and the data.
· They also record the version and the location of the configuration file (.lct) that was used.
· Without the configuration file, a data file is useless.
· Without the data file, a configuration file is meaningless.

FNDLOAD Files
· Key files: .lct and .ldt
· FNDLOAD must be run as the APPS user, not as APPLSYS or any other user; otherwise an ORA-06550 error is raised.
· Both are easily readable, editable and portable.
· Do not modify Oracle .lct files.
· Use a favorite editor to manipulate only the .ldt files but be cautious about data type, length, delimiter placements etc.
· Use the log file output or .ldt file contents for quick file comparisons and to answer questions such as: Why can this be accessed? What are that profile option's name, value, and level? Which value set is used for that DFF segment ATTRIBUTE10?
· Partial string searches (e.g., which value set has "Priority" in its WHERE clause)
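The quick-comparison idea above can be sketched with plain diff and grep. The two .ldt files below are tiny hand-written stand-ins for real loader output, just to show the pattern of spotting a changed attribute between two instances.

```shell
# Sketch: compare two downloaded .ldt files to spot changed attributes.
# The file contents are simplified stand-ins for real loader output.
cat > prog_dev.ldt <<'EOF'
BEGIN PROGRAM "XX_MY_PROG" "XX"
  EXECUTION_METHOD_CODE = "P"
  ENABLED_FLAG = "Y"
EOF
cat > prog_prod.ldt <<'EOF'
BEGIN PROGRAM "XX_MY_PROG" "XX"
  EXECUTION_METHOD_CODE = "P"
  ENABLED_FLAG = "N"
EOF

# Show only the attribute lines that differ between the two instances.
diff prog_dev.ldt prog_prod.ldt | grep '^[<>]' || true
```

Here the diff immediately surfaces that ENABLED_FLAG differs between the two environments, which is exactly the kind of question the bullet above is about.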

Some sample examples

1 - Printer Styles

FNDLOAD apps/pwd@seeddb 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcppstl.lct file_name.ldt STYLE PRINTER_STYLE_NAME="printer style name"
2 - Lookups

FNDLOAD apps/pwd@seeddb 0 Y DOWNLOAD $FND_TOP/patch/115/import/aflvmlu.lct file_name.ldt FND_LOOKUP_TYPE APPLICATION_SHORT_NAME="prod" LOOKUP_TYPE="lookup name"
3 - Descriptive Flexfield with all of specific Contexts

FNDLOAD apps/pwd@seeddb 0 Y DOWNLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt DESC_FLEX P_LEVEL=:COL_ALL:REF_ALL:CTX_ONE:SEG_ALL? APPLICATION_SHORT_NAME="prod" DESCRIPTIVE_FLEXFIELD_NAME="desc flex name" P_CONTEXT_CODE="context name"
4 - Multiple Flexfields

Use a combination of APPLICATION_SHORT_NAME and DESCRIPTIVE_FLEXFIELD_NAME, i.e., APPLICATION_SHORT_NAME=PER will download all PER flexfields; DESCRIPTIVE_FLEXFIELD_NAME=PER_% will download all flexfields whose names start with 'PER_'.

FNDLOAD apps/apps 0 Y DOWNLOAD $FND_TOP/patch/115/import/afffload.lct
file_name.ldt DESC_FLEX DESCRIPTIVE_FLEXFIELD_NAME="PER_%"


5 - Key Flexfield Structures

FNDLOAD apps/pwd@seeddb 0 Y DOWNLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt KEY_FLEX P_LEVEL=:COL_ALL:FQL_ALL:SQL_ALL:STR_ONE:WFP_ALL:SHA_ALL:CVR_ALL:SEG_ALL? APPLICATION_SHORT_NAME="prod" ID_FLEX_CODE="key flex code" P_STRUCTURE_CODE="structure name"
6 - Concurrent Programs

FNDLOAD apps/pwd@seeddb 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpprog.lct file_name.ldt PROGRAM APPLICATION_SHORT_NAME="prod" CONCURRENT_PROGRAM_NAME="concurrent name"
7 - Value Sets

FNDLOAD apps/pwd@seeddb 0 Y DOWNLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt VALUE_SET FLEX_VALUE_SET_NAME="value set name"
8 - Value Sets with values

FNDLOAD apps/pwd@seeddb 0 Y DOWNLOAD $FND_TOP/patch/115/import/afffload.lct file_name.ldt VALUE_SET_VALUE FLEX_VALUE_SET_NAME="value set name"
9 - Profile Options

FNDLOAD apps/pwd@seeddb 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscprof.lct file_name.ldt PROFILE PROFILE_NAME="profile option" APPLICATION_SHORT_NAME="prod"
10 - Request Group

FNDLOAD apps/pwd@seeddb 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpreqg.lct file_name.ldt REQUEST_GROUP REQUEST_GROUP_NAME="request group" APPLICATION_SHORT_NAME="prod"
11 - Request Sets

FNDLOAD apps/pwd@seeddb 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcprset.lct file_name.ldt REQ_SET APPLICATION_SHORT_NAME="prod" REQUEST_SET_NAME="request set"
12 - Responsibilities

FNDLOAD apps/pwd@seeddb 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscursp.lct file_name.ldt FND_RESPONSIBILITY RESP_KEY="responsibility"


13 - Responsibilities with all Security Groups

FNDLOAD apps/<PASSWORD> 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscursp.lct <USER>.ldt
FND_USER USER_NAME="<USER>" SECURITY_GROUP=% DATA_GROUP_NAME=%

14 - Menus

FNDLOAD apps/pwd@seeddb 0 Y DOWNLOAD $FND_TOP/patch/115/import/afsload.lct file_name.ldt MENU MENU_NAME="menu_name"
15 - Forms/Functions/Personalizations: Refer to the System Administrator's Guide on dependencies

FNDLOAD <userid>/<password> 0 Y DOWNLOAD $FND_TOP/patch/115/import/affrmcus.lct
<filename.ldt> FND_FORM_CUSTOM_RULES form_name=<form name>
OR

FNDLOAD <userid>/<password> 0 Y DOWNLOAD $FND_TOP/patch/115/import/afsload.lct file_name.ldt FUNCTION FUNCTION_NAME=<function_name>
OR

FNDLOAD <userid>/<password> 0 Y DOWNLOAD $FND_TOP/patch/115/import/afsload.lct file_name.ldt FORM FORM_NAME=<form_name>
OR

FNDLOAD <userid>/<password> 0 Y DOWNLOAD $FND_TOP/patch/115/import/affrmcus.lct <filename.ldt> FND_FORM_CUSTOM_RULES function_name=<function name>

16 - User/Responsibilities

FNDLOAD apps/pwd@seeddb 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscursp.lct file_name.ldt FND_USER

17 - Alert

FNDLOAD apps/pwd 0 Y DOWNLOAD
$ALR_TOP/patch/115/import/alr.lct my_file.ldt ALR_ALERTS
APPLICATION_SHORT_NAME=FND ALERT_NAME="alert name to download"
18 - Blob

With Release 12.1.1, FNDLOAD supports BLOB data (upload / download ) to better serve content-rich applications.

FNDLOAD apps/pwd 0 Y mode configfile datafile entity [ param ... ]

19 - Overwrite custom definitions

FNDLOAD apps/apps 0 Y UPLOAD $FND_TOP/patch/115/import/.lct $XX_TOP/import/.ldt CUSTOM_MODE=FORCE

20 - Load an NLS Language

FNDLOAD <APPS USR>/<APPS PWD> 0 Y UPLOAD <controlfile.lct> <datafile.ldt> \
- UPLOAD_MODE=NLS CUSTOM_MODE=FORCE WARNINGS=TRUE

21 - Migrate the role registration process from one instance to another

a. Navigate to the path: $FND_TOP/patch/115/import/US/umxrgsvc.ldt

b. The following command can be used to download:

FNDLOAD apps/<PASSWD>@(instance name) 0 Y DOWNLOAD $FND_TOP/patch/115/import/umxrgsvc.lct umxrgsvc.ldt UMX_REG_SERVICES REG_SERVICE_CODE=UMX

c. The following command can be used to upload:

FNDLOAD apps/<PASSWD>@(instance name) 0 Y UPLOAD $FND_TOP/patch/115/import/umxrgsvc.lct umxrgsvc.ldt UMX_REG_SERVICES REG_SERVICE_CODE=UMX
22 - Transfer Custom Messages to another Instance

a. Download the message from the source instance.

FNDLOAD apps/apps 0 Y DOWNLOAD @FND:patch/115/import/afmdmsg.lct password.ldt FND_NEW_MESSAGES APPLICATION_SHORT_NAME=FND MESSAGE_NAME=PASSWORD-INVALID-NO-SPEC-CHAR

b. Move the custom LDT file (password.ldt) over to the destination instance.

c. Upload the custom message to the destination instance.

FNDLOAD apps/apps 0 Y UPLOAD @FND:patch/115/import/afmdmsg.lct password.ldt FND_NEW_MESSAGES APPLICATION_SHORT_NAME=FND CUSTOM_MODE=FORCE

23 - Download UMX Roles and Role Assignment data from one instance and upload to another.

To download from one instance:

FNDLOAD <username/pwd@sid> 0 Y DOWNLOAD $FND_TOP/patch/115/import/afrole.lct
umxroles.ldt WF_ROLE ORIG_SYSTEM=UMX%

To upload to another instance:

FNDLOAD <username/pwd@sid> 0 Y UPLOAD $FND_TOP/patch/115/import/afrole.lct
umxroles.ldt

References:

· Oracle Applications Systems Administrator Guide - Configuration

Notes:

1. Give special attention when downloading menus or responsibilities. If several developers are modifying responsibilities and menus, be very careful: otherwise untested forms, functions, and menus may become available in the client's production environment alongside the tested ones.

2. Be very careful when downloading flexfields that reference value sets with independent values for GL segment codes; doing so can extract test data in GL codes that is not applicable to production.

3. There are several variations possible for FNDLOAD.  For example, restricting the download and upload to specific segments within Descriptive Flexfields.

4. FNDLOAD is very reliable and stable, if used properly.

5. Test FNDLOAD migrations thoroughly to ensure no unexpected data is introduced.

6. As the name suggests, FNDLOAD is useful for FND-related objects. However, most implementations also need to migrate setups in Financials and Oracle HRMS from one environment to another; Oracle iSetup can be used for this. Some of the things that can be migrated using Oracle iSetup are GL Set of Books, HR Organization Structures, HRMS Employees, Profile Options Setup, Suppliers, Customers, Tax Codes & Tax Rates, Financials Setup, Accounting Calendars, Chart of Accounts, and GL Currencies.

How to Update Profile option values which are pointing to source instance of DB-Refresh/Clone:



sqlplus apps/<apps password>

SQL> set linesize 140
set pagesize 132
column NAME format A40
column VALUE format A80 wrap
select n.user_profile_option_name NAME,
    v.profile_option_value VALUE
    from apps.fnd_profile_options p,
         apps.fnd_profile_option_values v,
         apps.fnd_profile_options_tl n
    where p.profile_option_id = v.profile_option_id (+)
    and p.profile_option_name = n.profile_option_name
    and v.level_id=10001
    AND n.LANGUAGE='US'
 and upper(v.profile_option_value) like upper('%prod%');

Note: If the output shows PROD entries, run the update below, supplying the source and destination values.

Replace source-instance paths with target-instance paths using the DML below.

SQL> update fnd_profile_option_values
set PROFILE_OPTION_VALUE= '&TargetPath'
where PROFILE_OPTION_VALUE = '&SourcePath';

++++++++++++++++++

set linesize 140
set pagesize 132
column NAME format A40
column VALUE format A80 wrap
select   
n.user_profile_option_name NAME,
v.profile_option_value VALUE
from apps.fnd_profile_options p,
    apps.fnd_profile_option_values v,
      apps.fnd_profile_options_tl n
where p.profile_option_id = v.profile_option_id (+)
and p.profile_option_name = n.profile_option_name
and v.level_id=10001
AND n.LANGUAGE='US'
and upper(n.user_profile_option_name) like upper('%&profile_name%');

If you need to check Discoverer:

Enter value for profile_name: ICX: Discoverer

Wednesday, May 29, 2013

Verify Workflow Mailer Configuration and WF Mailer IMAP/SMTP Settings Using the Query Below:



Log in as the applmgr user on the apps node and start the application processes:
cd $COMMON_TOP/admin/scripts/$CONTEXT_NAME
adstrtal.sh apps/<apps pwd>

sqlplus apps/<apps pwd>

set lines 130
set pages 100
col value format a30
select p.parameter_id,p.parameter_name,v.parameter_value value
from fnd_svc_comp_param_vals_v v,
fnd_svc_comp_params_b p,
fnd_svc_components c
where c.component_type = 'WF_MAILER'
and v.component_id = c.component_id
and v.parameter_id = p.parameter_id
and p.parameter_name in
('OUTBOUND_SERVER', 'INBOUND_SERVER',
'ACCOUNT', 'FROM', 'NODENAME', 'REPLYTO','DISCARD' ,'PROCESS','INBOX', 'TEST_ADDRESS')
order by p.parameter_name;

For performance issues:

Are messages in WF_DEFERRED and WF_NOTIFICATION_OUT processed?

a. Execute the following queries:

SQL> select substr(wfd.corrid,1,40) corrid,
decode(wfd.state,
0, '0 = Ready',
1, '1 = Delayed',
2, '2 = Retained',
3, '3 = Exception',
to_char(substr(wfd.state,1,12))) State,
count(*) COUNT
from applsys.wf_deferred wfd
group by wfd.corrid, wfd.state;

b.
SQL> select substr(wfd.corrid,1,40) corrid,
decode(wfd.state,
0, '0 = Ready',
1, '1 = Delayed',
2, '2 = Retained',
3, '3 = Exception',
to_char(substr(wfd.state,1,12))) State,
count(*) COUNT
from applsys.wf_notification_out wfd
group by wfd.corrid, wfd.state;

c. Provide the output of the $FND_TOP/sql/wfmlrdbg.sql script

d. Upload the latest log file. The following query will return the mailer log file and its location.

set linesize 155;
set pagesize 200;
set verify off;
column MANAGER format a15;
column MEANING format a15;
SELECT concurrent_queue_name manager, fcp.last_update_date, fcp.concurrent_process_id pid, meaning, fcp.logfile_name
FROM fnd_concurrent_queues fcq, fnd_concurrent_processes fcp, fnd_lookups flkup
WHERE concurrent_queue_name in ('WFMLRSVC')
AND fcq.concurrent_queue_id = fcp.concurrent_queue_id
AND fcq.application_id = fcp.queue_application_id
AND flkup.lookup_code=fcp.process_status_code
AND lookup_type ='CP_PROCESS_STATUS_CODE'
AND meaning='Active';

e. Output of $FND_TOP/sql/wfver.sql

f. Also check the mail_status of the notifications:

select mail_status, count(*)
from wf_notifications
where status = 'OPEN'
group by mail_status
order by 2 desc;

 +++++++++1501822.1++++++++

Cloning R12: ORA-06502: PL/SQL: numeric or value error trying to update table fnd_profile_option_values [ID 1501822.1]



Solution

1) Run the following query to determine the profile_option_id associated with profile_option_name = 'UTL_FILE_LOG':

select profile_option_id,
profile_option_name
from fnd_profile_options
where profile_option_name = 'UTL_FILE_LOG';

2) Run the following query to determine the profile_option_value based on the retrieved profile_option_id:

select profile_option_id,
profile_option_value,
application_id,
level_id,
level_value
from fnd_profile_option_values
where profile_option_id = <value from query 1>;

3) Manually update the profile option in table fnd_profile_option_values to be just /usr/tmp:

update fnd_profile_option_values
set profile_option_value = '/usr/tmp'
where profile_option_id=<value from query 1>;

4) Rerun perl adcfgclone.pl dbTier.

How to Change The Generic Logo to Your Company's Logo in EBS R12:


How to Change the iRecruitment Logo in Release 12

In Release 12, the file being displayed in iRecruitment pages is:
FNDSSCORP.gif.

This image is located in:
$OA_MEDIA/FNDSSCORP.gif

This has been changed from 11i, which was IRCBRAND_MED.gif.

1. Replace this file with your new image, renamed to FNDSSCORP.gif.
Note: Suppose your custom image is SVR_LOGO.jpg;
you can add the SVR_LOGO.jpg file to $OA_MEDIA across all middle tiers on the instance.

grep -i media $CONTEXT_FILE
         <media oa_var="s_installedFrom">SHARED</media>
         <OA_MEDIA oa_var="s_oamedia">/d01/oracle/SVRUAT/apps/apps_st/comn/java/classes/oracle/apps/media</OA_MEDIA>

2. Bounce Apache.

This will display the new logo as small. To change the display size of the logo to medium or large, follow the instructions in the Framework Development Guide for Release 12, because the profile 'FND: Branding Size' that controlled this in 11i is no longer supported in Release 12.
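Step 1 above (staging the image into $OA_MEDIA with a backup of the seeded file) can be sketched as follows. This is a local demonstration only: OA_MEDIA points at a scratch directory, and the file contents are stand-ins; on a real instance the path comes from the context file (s_oamedia) and must be updated on every middle tier.

```shell
# Sketch: stage a custom logo into $OA_MEDIA and back up the original.
# OA_MEDIA here is a scratch directory, not a real instance path.
OA_MEDIA=./oa_media_demo
mkdir -p "$OA_MEDIA"
printf 'original-gif' > "$OA_MEDIA/FNDSSCORP.gif"   # stand-in for Oracle's seeded logo
printf 'company-logo' > SVR_LOGO.jpg                # stand-in custom image

# Back up the seeded file before overwriting it, as you would on a real system.
cp "$OA_MEDIA/FNDSSCORP.gif" "$OA_MEDIA/FNDSSCORP.gif.orig"
cp SVR_LOGO.jpg "$OA_MEDIA/FNDSSCORP.gif"

ls "$OA_MEDIA"
```

Keeping the `.orig` backup makes it trivial to revert the branding change without re-extracting the seeded image from a patch.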

 Set the POR: Hosted Images Directory profile option to the virtual directory /OA_MEDIA/ or a subdirectory (e.g., /OA_MEDIA/US/ or /OA_MEDIA/pictures/).

Note:

POR: Hosted Images Directory = /OA_MEDIA/

$OA_MEDIA 

 SVRLogo.jpg
+++++++++++++++++++
742130.1,731847.1 & 252387.1
+++++++++++++++++++

R12: Forms Timeout More Than 2 Hrs Is Not Working After R12 Upgrade


How to Change User Session Timeout in e-Business Suite R12

Solution:

To implement the solution, please execute the following steps:

Section A: Take a backup of the CONTEXT_FILE.

Section B:

1. Edit the <CONTEXT_NAME>.xml file in $INST_TOP/appl/admin directory

2. Ensure that value of profile option "ICX: Session Timeout" and autoconfig variable "s_oc4j_sesstimeout" in context file match.

e.g., if you need the session to time out after 2 hours, then the profile option "ICX: Session Timeout" and the autoconfig variable "s_oc4j_sesstimeout" should both be set to 120.

grep -i s_oc4j_sesstimeout $CONTEXT_FILE
                        <oc4j_session_timeout oa_var="s_oc4j_sesstimeout">120</oc4j_session_timeout>
3. Run autoconfig and retest the issue.

Note: This note applies to servlet mode and the default mode; it will not work in socket mode.

How to Change User Session Timeout in e-Business Suite R12:

Please perform the steps given below:

1. Edit the .xml file in $INST_TOP/appl/admin directory
2. Ensure that value of profile option "ICX: Session Timeout" and autoconfig variable "s_oc4j_sesstimeout" in context file match.
    e.g., if you need the session to time out after 45 minutes, then the profile option "ICX: Session Timeout" and the autoconfig variable "s_oc4j_sesstimeout" should both be set to 45.
 grep -i s_sesstimeout $CONTEXT_FILE
                        <session_timeout oa_var="s_sesstimeout">7200000</session_timeout>
3. Run autoconfig and retest the issue.
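The match required in step 2 between the profile option and the context variable can be sketched as a small check script. The context snippet and the ICX_TIMEOUT value below are stand-ins; in practice the profile value would come from a fnd_profile_option_values query and the snippet from the real $CONTEXT_FILE.

```shell
# Sketch: verify "ICX: Session Timeout" matches s_oc4j_sesstimeout.
# demo_context.xml is a minimal stand-in for the real context file.
cat > demo_context.xml <<'EOF'
<oa_context>
  <oc4j_session_timeout oa_var="s_oc4j_sesstimeout">120</oc4j_session_timeout>
</oa_context>
EOF
ICX_TIMEOUT=120   # stand-in for the queried profile option value

# Extract the numeric value of s_oc4j_sesstimeout from the context file.
CTX_TIMEOUT=$(sed -n 's/.*s_oc4j_sesstimeout">\([0-9]*\)<.*/\1/p' demo_context.xml)
if [ "$CTX_TIMEOUT" = "$ICX_TIMEOUT" ]; then
  echo "OK: both timeouts are $CTX_TIMEOUT minutes"
else
  echo "MISMATCH: context=$CTX_TIMEOUT profile=$ICX_TIMEOUT"
fi
```

A check like this is handy after clones, where the context file and the profile table can easily drift apart.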

+++++++++++++++++++++
 734077.1, 1067115.1
+++++++++++++++++++++

Target Node/Queue Unavailable or Concurrent Manager Issues after DB Refresh / Cloning:



Solution A:

Step 1: Stop the application services

SQL> EXEC FND_CONC_CLONE.TARGET_CLEAN;

Step 2: Run cmclean.sql script 

Step 3:  Run autoconfig on dbTier then appsTier.


Start all application services and check whether managers are up.

Solution B:

Apply this solution only if the one above does not work.

SQL> select node_name,target_node,control_code from fnd_concurrent_queues;

SQL> update apps.fnd_concurrent_queues set node_name = 'Node NAME' where node_name='Existing Node Name';

SQL> select NODE_NAME,NODE_MODE,STATUS from fnd_nodes;

SQL> desc fnd_concurrent_queues

SQL> select control_code,target_node,node_name,CONCURRENT_QUEUE_NAME from fnd_concurrent_queues;

SQL> UPDATE fnd_concurrent_queues set control_code = null;

SQL> UPDATE fnd_concurrent_queues set target_node = 'Node Name';

SQL> UPDATE fnd_concurrent_queues set node_name = 'Node Name';

SQL> update FND_CONCURRENT_QUEUES set control_code = null where concurrent_queue_name = 'OAMGCS_NODENAME'; -- supply the node name (your existing node name)

SQL> Commit;

SQL> select control_code,target_node,node_name,CONCURRENT_QUEUE_NAME from fnd_concurrent_queues;

+++++++++Manjunatha++++++
732709.1,555081.1 & 466532.1
++++++++++++++++++++++



Wednesday, May 22, 2013

Autoconfig Failed with Error: FsCtxFileException: Start of Root Element Expected



Error:

Context Value Management will now update the Context file
ERROR: FsCtxFile.XMLParseException
oracle.xml.parser.v2.XMLParseException: Start of root element expected.
        at oracle.xml.parser.v2.XMLError.flushErrors1(XMLError.java:320)
        at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:367)
        at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser


Solution:

To implement the solution, please execute the following steps:

1. Connect to SQL*PLUS as "APPLSYS" user.
(The "APPLSYS" password is always the same as the "APPS" user.)

2. Backup the FND_OAM_CONTEXT_FILES table, for example:

SQL> CREATE TABLE fnd_oam_context_files_bak AS SELECT * FROM fnd_oam_context_files;

3. Truncate the FND_OAM_CONTEXT_FILES table, for example:

TRUNCATE TABLE fnd_oam_context_files;

4. Re-run autoconfig on all nodes (DB + application) to repopulate the data.
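Steps 2-3 above can be sketched as a script that writes the repair statements to a file for review before a DBA runs them as APPLSYS in SQL*Plus. The script file name is hypothetical; generating and reviewing first avoids truncating the wrong table by accident.

```shell
# Sketch: write the FND_OAM_CONTEXT_FILES repair statements to a script
# that can be reviewed and then run as APPLSYS in SQL*Plus.
cat > fix_ctx_files.sql <<'EOF'
-- Back up first, then truncate; autoconfig repopulates the table.
CREATE TABLE fnd_oam_context_files_bak AS SELECT * FROM fnd_oam_context_files;
TRUNCATE TABLE fnd_oam_context_files;
EOF

cat fix_ctx_files.sql
```

After review, the script would be executed with something like `sqlplus applsys/<pwd> @fix_ctx_files.sql`, followed by autoconfig on all nodes.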

++++++++++++++++++
876806.1,463895.1
++++++++++++++++++

Friday, May 17, 2013

Usage and Storage Management of SYSAUX tablespace occupants SM/AWR, SM/ADVISOR, SM/OPTSTAT and SM/OTHER [ID 329984.1]



Question:

Can Tablespace Point-In-Time recovery (TSPITR) be performed on the SYSAUX tablespace?

Answer:

No, point-in-time recovery cannot be performed on the SYSAUX tablespace.


Pre steps:

SQL> @$ORACLE_HOME/rdbms/admin/AWRINFO.sql

Step 2:
SQL>  SELECT occupant_name, space_usage_kbytes FROM V$SYSAUX_OCCUPANTS;
SQL>  select * from DBA_HIST_WR_CONTROL;
SQL> select occupant_name, space_usage_kbytes from v$sysaux_occupants;
SQL> set linesize 120
set pagesize 100

COLUMN "Item" FORMAT A25
COLUMN "Space Used (GB)" FORMAT 999.99
COLUMN "Schema" FORMAT A25
COLUMN "Move Procedure" FORMAT A40

SELECT  occupant_name "Item",
    space_usage_kbytes/1048576 "Space Used (GB)",
    schema_name "Schema",
    move_procedure "Move Procedure"
FROM v$sysaux_occupants
ORDER BY 1
/



1. Investigate how much stats history we are keeping:
                SQL> select dbms_stats.get_stats_history_retention from dual;

                SQL> select dbms_stats.get_stats_history_availability from dual;
               
                SQL> select min(SAVTIME), max(SAVTIME) from WRI$_OPTSTAT_HISTHEAD_HISTORY;
               

2. If the data is older than 31 days, drill down to the object level:

        a. Set the retention on the history table to 31 days.
       
                SQL> exec dbms_stats.alter_stats_history_retention(31);

                b. SQL> col Mb form 9,999,999
                     col SEGMENT_NAME form a40
                     col SEGMENT_TYPE form a6
                     set lines 120
                     select sum(bytes/1024/1024) Mb, segment_name,segment_type from dba_segments
                     where tablespace_name = 'SYSAUX'
                     and segment_name like '%OPT%'
                     and segment_type='INDEX'
                     group by segment_name,segment_type order by 1 asc;

                c. exec DBMS_STATS.PURGE_STATS(SYSDATE - <number of days of data to keep>);
                   exec DBMS_STATS.PURGE_STATS(SYSDATE-31); --> most likely it will fail.

3. Apply RDBMS patch 14373728
Note: Before downloading the patch, refer to (ID 224346.1) for the compatible version.
                1. Download and apply patch 14373728 for the appropriate DB version.
                2. Set "autoextend on" on temp datafiles.
                3. spool the output from the post-patch install tasks,
                4. review the log to make sure that the script ran okay
                5. Wait for the retention period(31 days) to lapse, then check if p_old has been dropped
                6. If not, then perform a manual drop in case it was taking longer for the p_old to drop because of its size

                This patch recreates the WRI$_OPTSTAT_HISTHEAD_HISTORY table with partitions, so next time it is easier for Oracle to look at a partition and drop it
                according to the retention period. Currently the table is huge, and recreating it with partitions takes time, so it is necessary
                to set autoextend on for the temp datafiles. After the post-patch steps run successfully, the table should be partitioned; if we
                get the DDL for the table we should see the following:
               
                 SQL> col subobject_name format a15;
                 SQL> alter session set nls_date_format ='MON-DD-YY HH24:MI';
               
               
                SQL> select object_name, subobject_name, object_type, status, last_ddl_time
                     from dba_objects where object_name like '%WRI$_OPTSTAT%'
                     order by 1;
                    
                     In the result we will see few tables are created with partition:
                                OBJECT_NAME                              SUBOBJECT_NAME                 OBJECT_TYPE         STATUS  LAST_DDL_TIME
---------------------------------------- ------------------------------ ------------------- ------- ---------------
I_WRI$_OPTSTAT_AUX_ST                                                   INDEX               VALID   17-JUL-10
I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST                                           INDEX               VALID   17-JUL-10
I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST                                          INDEX               VALID   17-JUL-10
WRI$_OPTSTAT_SYNOPSIS$                   SYS_SUBP168                    TABLE SUBPARTITION  VALID   12-MAY-12
WRI$_OPTSTAT_SYNOPSIS$                   SYS_SUBP169                    TABLE SUBPARTITION  VALID   12-MAY-12
WRI$_OPTSTAT_SYNOPSIS_HEAD$                                             TABLE               VALID   17-JUL-10
WRI$_OPTSTAT_SYNOPSIS_PARTGRP                                           TABLE               VALID   17-JUL-10
WRI$_OPTSTAT_TAB_HISTORY                                                TABLE               VALID   17-JUL-10

SQL>  select dbms_metadata.get_ddl('TABLE','WRI$_OPTSTAT_HISTHEAD_HISTORY','SYS') from dual;
DBMS_METADATA.GET_DDL('TABLE','WRI$_OPTSTAT_HISTHEAD_HISTORY','SYS')
--------------------------------------------------------------------------------

  CREATE TABLE "SYS"."WRI$_OPTSTAT_HISTHEAD_HISTORY"
   (    "OBJ#" NUMBER NOT NULL ENABLE,
        "INTCOL#" NUMBER NOT NULL ENABLE,
        "SAVTIME" TIMESTAMP (6) WITH TIME ZONE,
        "FLAGS" NUMBER,
        "NULL_CNT" NUMBER,
        "MINIMUM" NUMBER,
        "MAXIMUM" NUMBER,
        "DISTCNT" NUMBER,
        "DENSITY" NUMBER,
        "LOWVAL" RAW(32),
        "HIVAL" RAW(32),
        "AVGCLN" NUMBER,
        "SAMPLE_DISTCNT" NUMBER,
        "SAMPLE_SIZE" NUMBER,
        "TIMESTAMP#" DATE,
        "SPARE1" NUMBER,
        "SPARE2" NUMBER,
        "SPARE3" NUMBER,
        "SPARE4" VARCHAR2(1000),
        "SPARE5" VARCHAR2(1000),
        "SPARE6" TIMESTAMP (6) WITH TIME ZONE,
        "EXPRESSION" CLOB,
        "COLNAME" VARCHAR2(30),
        "SAVTIME_DATE" DATE GENERATED ALWAYS AS (TRUNC("SAVTIME")) VIRTUAL VISIBLE
   ) PCTFREE 1 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "SYSAUX"
 LOB ("EXPRESSION") STORE AS BASICFILE (
  TABLESPACE "SYSAUX" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION
  NOCACHE LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)) ENABLE ROW MOVEMENT

Note: To generate index rebuild statements:
SQL> select 'alter index '||segment_name||' rebuild online parallel (degree 14);' from dba_segments where tablespace_name = 'SYSAUX'
and segment_name like '%OPT%' and segment_type='INDEX';


4. The old partition should be dropped automatically. If not, wait for the retention period and check the size of the indexes. If it is still not dropped, drop it manually:

                -- drop indexes: if they get timeout they need to repeat these statements until they can be dropped.
                                drop index i_wri$_optstat_hh_obj_icol_st;
                                drop index i_wri$_optstat_hh_st;

                -- drop partition p_old
                                alter table WRI$_OPTSTAT_HISTHEAD_HISTORY drop partition P_OLD;

                -- recreate the indexes
                                create unique index i_wri$_optstat_hh_obj_icol_st on wri$_optstat_histhead_history (obj#, intcol#, savtime, colname) tablespace sysaux;
                                create index i_wri$_optstat_hh_st on wri$_optstat_histhead_history (savtime) tablespace sysaux;

                >> repeating similar procedure for histogram history table:

                -- drop indexes: if they get timeout they need to repeat these statements until they can be dropped.
                                drop index i_wri$_optstat_h_obj#_icol#_st;
                                drop index i_wri$_optstat_h_st;

                -- drop partition p_old
                                alter table wri$_optstat_histgrm_history drop partition P_OLD;

                -- recreate indexes
                                create index i_wri$_optstat_h_obj#_icol#_st on wri$_optstat_histgrm_history (obj#, intcol#, savtime, colname) tablespace sysaux;
                                create index i_wri$_optstat_h_st on wri$_optstat_histgrm_history (savtime) tablespace sysaux;

NOTE
======
SM/OPTSTAT

select dbms_stats.get_stats_history_retention from dual;
exec dbms_stats.alter_stats_history_retention(10);
If you would like to purge data from before a specific timestamp, use the PURGE_STATS procedure. For example:

exec DBMS_STATS.PURGE_STATS(to_timestamp_tz('01-09-2006 00:00:00 Europe/London','DD-MM-YYYY HH24:MI:SS TZR'));

Note:
Run it at a time when there is no stats-gathering activity on the system and overall system activity is minimal.

+++++++++++++++++++++
243246.1, 782974.1, 950128.1
++++++++++++++++++++++