Problem
The administrator is trying to run the staging table script. To do so, the administrator launches Oracle SQL*Plus and runs a script similar to the following:
declare
  xx number;
begin
  -- Call the function
  xx := schemaname.usp_triggerimportbatchjobs('IMP', '#ST_EXTDIM1', '1', '', 'ADM', 1, to_date('01-NOV-2007'));
end;
However, Oracle SQL*Plus returns an error.
Symptom
declare
*
ERROR at line 1:
ORA-20001: -1060
ORA-06512: at "schemaname.usp_triggerimportbatchjobs", line 217
ORA-06512: at line 5
Cause
The column 'BATCH_ID' inside the table 'XSTAGEDIM1' incorrectly already has its Batch ID filled in.
- In other words, the entry is not blank (which the script expects it to be).
Diagnosing The Problem
Oracle error code 'ORA-20001' means that an application-specific error code follows.
- TIP: For more details on this, see separate IBM Technote #1347672.
In this case, the related Controller-specific code/meaning is:
- -1060 = No rows were updated in staging table, possibly wrong importid was sent in
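As an illustration only (not the actual Controller source code), this is the typical PL/SQL pattern that produces such errors: the application raises the generic ORA-20001 via RAISE_APPLICATION_ERROR and passes its own product-specific code as the message text. The variable name and the check are hypothetical:

```sql
-- Hypothetical sketch of how an application surfaces a product-specific
-- code (-1060) inside the generic ORA-20001 application error.
DECLARE
  v_rows_updated NUMBER := 0;  -- e.g. SQL%ROWCOUNT after the staging UPDATE
BEGIN
  IF v_rows_updated = 0 THEN
    -- -20001 is the generic application error number;
    -- '-1060' is the product-specific code reported back to the caller
    RAISE_APPLICATION_ERROR(-20001, '-1060');
  END IF;
END;
/
```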
Resolving The Problem
Delete the entries inside BATCH_ID and re-run.
Steps:
NOTE: Before proceeding, back up the Controller database (e.g. use EXP to create a DMP file) as a precaution.
- Launch "Oracle Enterprise Manager Console" (Standalone)
- Locate and open the Controller database
- Locate and open the Controller schema (user), for example "ControllerLive"
- Locate the table 'XSTAGEDIM1'
- Right-click on 'XSTAGEDIM1' and choose 'View/Edit Contents'
- Notice that there is a column called 'BATCH_ID'. If the rows beneath this column name are filled in (for example with '35'), then a BATCH_ID number has already been assigned.
- Delete the numbers inside this column
- Test by re-running the script.
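Alternatively, the same cleanup can be done directly from SQL*Plus. This is a sketch using the same placeholder schema name as the script above; take the database backup first:

```sql
-- Clear any previously assigned batch IDs in the staging table,
-- so the import function finds the blank rows it expects.
UPDATE schemaname.XSTAGEDIM1
   SET BATCH_ID = NULL;

COMMIT;
```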
Historical Number
1039666
July 16, 2019
During the 12c database creation process, you may see an ORA-20001 error in the alert log file when the "SYS.ORA$AT_OS_OPT_SY_<NN>" auto job runs. To fix the error, it is necessary to drop the job and recreate it. The errors will be as follows.
ORA-12012: error on auto execute of job "SYS"."ORA$AT_OS_OPT_SY_72"
ORA-20001: Statistics Advisor: Invalid task name for the current user
ORA-06512: at "SYS.DBMS_STATS", line 47207
ORA-06512: at "SYS.DBMS_STATS_ADVISOR", line 882
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 20059
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 22201
ORA-06512: at "SYS.DBMS_STATS", line 47197
First of all, it is necessary to ensure that the tasks are created correctly with the following command.
SQL> EXEC dbms_stats.init_package();

PL/SQL procedure successfully completed.
Afterwards, it is necessary to identify the owner of the job with the following query.
SQL> select name, ctime, how_created, owner_name
     from sys.wri$_adv_tasks
     where name in ('AUTO_STATS_ADVISOR_TASK', 'INDIVIDUAL_STATS_ADVISOR_TASK');
If you see an output like the one below, you should connect with SYS. If the job owner is a different user, you should connect with that user.
NAME                           CTIME      HOW_CREATED  OWNER_NAME
------------------------------ ---------- ------------ ----------
AUTO_STATS_ADVISOR_TASK        13-OCT-18  CMD          SYS
INDIVIDUAL_STATS_ADVISOR_TASK  13-OCT-18  CMD          SYS
In this case, you should connect as SYS via SQL*Plus, then drop and recreate the tasks correctly.
Drop operations can be done as follows.
SQL> DECLARE
       v_tname VARCHAR2(32767);
     BEGIN
       v_tname := 'AUTO_STATS_ADVISOR_TASK';
       DBMS_STATS.DROP_ADVISOR_TASK(v_tname);
     END;
     /

PL/SQL procedure successfully completed.

SQL> DECLARE
       v_tname VARCHAR2(32767);
     BEGIN
       v_tname := 'INDIVIDUAL_STATS_ADVISOR_TASK';
       DBMS_STATS.DROP_ADVISOR_TASK(v_tname);
     END;
     /

PL/SQL procedure successfully completed.
It should then be re-created as follows.
SQL> EXEC DBMS_STATS.INIT_PACKAGE();

PL/SQL procedure successfully completed.
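To confirm the recreation worked, the dictionary query from earlier can simply be rerun; both tasks should reappear with a fresh creation date (this assumes the tasks are owned by SYS, as in the example above):

```sql
-- Re-check that both advisor tasks exist again after INIT_PACKAGE
select name, ctime, how_created, owner_name
from   sys.wri$_adv_tasks
where  name in ('AUTO_STATS_ADVISOR_TASK', 'INDIVIDUAL_STATS_ADVISOR_TASK');
```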
Then there will be no errors in the alert.log file.
While a 12.2 database is being started by srvctl, the alert log shows the following messages:
Unable to obtain current patch information due to error: 20001, ORA-20001: Latest xml inventory is not loaded into table
ORA-06512: at «SYS.DBMS_QOPATCH», line 777
ORA-06512: at «SYS.DBMS_QOPATCH», line 864
ORA-06512: at «SYS.DBMS_QOPATCH», line 2222
ORA-06512: at «SYS.DBMS_QOPATCH», line 740
ORA-06512: at «SYS.DBMS_QOPATCH», line 2247
===========================================================
Dumping current patch information
===========================================================
Unable to obtain current patch information due to error: 20001
===========================================================
DBMS_QOPATCH was introduced in the 12.1 database as a new feature, 'Queryable OPatch'. It is implemented with a PL/SQL package (DBMS_QOPATCH) and a set of tables and directories. In order to understand what the errors really are, let's do some detailed research.
system@orcl> select * from dba_directories where directory_name like 'OPATCH%';

OWNER  DIRECTORY_NAME     DIRECTORY_PATH                                     ORIGIN_CON_ID
------ ------------------ -------------------------------------------------- -------------
SYS    OPATCH_INST_DIR    /u01/app/oracle/product/12.2.0/dbhome_1/OPatch                 0
SYS    OPATCH_SCRIPT_DIR  /u01/app/oracle/product/12.2.0/dbhome_1/QOpatch                0
SYS    OPATCH_LOG_DIR     /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log              0
system@orcl> exit
[oracle@host01]$ ls -lrt /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log
-rw-r----- 1 oracle osasm    120 Feb  9 17:07 qopatch.log
-rw-r--r-- 1 oracle osasm 144227 Feb 10 17:32 qopatch_log.log
[oracle@host01]$
A good starting point is the log file qopatch_log.log, whose modification time is very close to the time when the errors were reported in the alert log:
LOG file opened at 02/10/18 17:32:22
KUP-05007: Warning: Intra source concurrency disabled because the preprocessor option is being used.
Field Definitions for table OPATCH_XML_INV
Record format DELIMITED BY NEWLINE
Data in file has same endianness as the platform
Reject rows with all null fields
Fields in Data Source:
XML_INVENTORY CHAR (100000000)
Terminated by "UIJSVTBOEIZBEFFQBL"
Trim whitespace same as SQL Loader
KUP-04095: preprocessor command /u01/app/oracle/product/12.2.0/dbhome_1/QOpatch/qopiprep.bat encountered error
"/u01/app/oracle/product/12.2.0/dbhome_1/QOpatch/qopiprep.bat[55]: /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/stout_orcl.txt: cannot create [Permission de"
The database startup got ORA-20001 while accessing the external table OPATCH_XML_INV, which has the preprocessor command '$ORACLE_HOME/QOpatch/qopiprep.bat'. The table definition is:
system@orcl> select owner, table_name from dba_external_tables where table_name='OPATCH_XML_INV';
OWNER TABLE_NAME
———- ———————
SYS OPATCH_XML_INV
system@orcl> select dbms_metadata.get_ddl('TABLE','OPATCH_XML_INV','SYS') from dual;

DBMS_METADATA.GET_DDL('TABLE','OPATCH_XML_INV','SYS')
---------------------------------------------------------------------------------
CREATE TABLE "SYS"."OPATCH_XML_INV" SHARING=METADATA
( "XML_INVENTORY" CLOB
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY "OPATCH_SCRIPT_DIR"
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE CHARACTERSET UTF8
DISABLE_DIRECTORY_LINK_CHECK
READSIZE 8388608
preprocessor opatch_script_dir:'qopiprep.bat'
BADFILE opatch_script_dir:'qopatch_bad.bad'
LOGFILE opatch_log_dir:'qopatch_log.log'
FIELDS TERMINATED BY 'UIJSVTBOEIZBEFFQBL'
MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
(
xml_inventory CHAR(100000000)
)
)
LOCATION
( "OPATCH_SCRIPT_DIR":'qopiprep.bat'
)
)
REJECT LIMIT UNLIMITED
system@orcl>
The file 'qopatch_log.log' is defined as the log file of the external table, and is generated by the external table utility whenever the table 'OPATCH_XML_INV' is accessed.
According to the table's definition, the PREPROCESSOR-specified command (script file) 'qopiprep.bat' converts 'raw' data into records of the table before the table becomes accessible. That is why the execution error of the script 'qopiprep.bat' shows up in the log 'qopatch_log.log'. The log shows that the file 'stout_orcl.txt' cannot be created while line 55 of the script is being executed. The script code looks as follows:
54 rm -rf $ORABASE/rdbms/log/xml_file_$DBSID.xml
55 $ORACLE_HOME/OPatch/opatch lsinventory -xml $ORABASE/rdbms/log/xml_file_$DBSID.xml
   -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORABASE/rdbms/log/stout_$DBSID.txt
56 cat $ORABASE/rdbms/log/xml_file_$DBSID.xml | sed 's/^ *//' | tr '\n' ' '
57 echo "UIJSVTBOEIZBEFFQBL"
58 rm $ORABASE/rdbms/log/xml_file_$DBSID.xml
59 rm $ORABASE/rdbms/log/stout_$DBSID.txt
Check the permission of the log directory $ORABASE/rdbms/log (here $ORABASE is the same as $ORACLE_HOME):
[oracle@host01]$ ls -ld $ORACLE_HOME/rdbms/log
drwxr-xr-x 3 oracle oinstall 14 Feb 10 18:36 /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log
[oracle@host01]$ id -a
uid=504(oracle) gid=512(oinstall) groups=512(oinstall),513(dba),515(asmdba),519(osasm),520(osdba)
[oracle@host01]$ ls -ld $ORACLE_HOME
drwxr-xr-x 77 oracle oinstall 81 Feb 10 19:22 /u01/app/oracle/product/12.2.0/dbhome_1
The database home owner 'oracle' is also the owner of the log directory, and both 'sqlplus' and 'srvctl' are executed by 'oracle' to start the database, so all read/write privileges should be inherited from user 'oracle', who has full control over the log directory. However, this is only true for sqlplus, not for srvctl.
Is srvctl accessing the external table as a user other than oracle? Let's try to prove it by adding a touch command to the script:
54 rm -rf $ORABASE/rdbms/log/xml_file_$DBSID.xml
55 touch /tmp/stout_$DBSID.test
56 $ORACLE_HOME/OPatch/opatch lsinventory -xml $ORABASE/rdbms/log/xml_file_$DBSID.xml -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORABASE/rdbms/log/stout_$DBSID.txt
57 cat $ORABASE/rdbms/log/xml_file_$DBSID.xml | sed 's/^ *//' | tr '\n' ' '
58 echo "UIJSVTBOEIZBEFFQBL"
Try to start the database with sqlplus and srvctl respectively:
[oracle@host01]$ . oraenv
ORACLE_SID = [orcl] ? orcl
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@host01]$ ls -l /tmp/stout*.test
/tmp/stout*.test: No such file or directory
[oracle@host01]$ sqlplus / as sysdba
<<message truncated>>
SQL> startup
ORACLE instance started.
<<message truncated>>
Database opened.
SQL> exit
[oracle@host01]$ ls -l /tmp/stout*.test
-rw-r--r-- 1 oracle oinstall 0 Feb 10 20:10 /tmp/stout_orcl.test
[oracle@host01]$
[oracle@host01]$ rm /tmp/stout_orcl.test
rm: remove /tmp/stout_orcl.test (yes/no)? yes
[oracle@host01]$ srvctl stop database -db orcl
[oracle@host01]$ srvctl start database -db orcl
[oracle@host01]$ ls -l /tmp/stout*.test
-rw-r--r-- 1 grid oinstall 0 Feb 10 20:13 /tmp/stout_orcl.test
See, the external table (running the script qopiprep.bat) is accessed as grid when srvctl is used, but as oracle with sqlplus. Here, grid is the owner of the standalone Grid Infrastructure (Oracle Restart) home. What if the external table is accessed directly from sqlplus?
[oracle@host01]$ ls -l /tmp/stout*.test
/tmp/stout*.test: No such file or directory
[oracle@host01]$
[oracle@host01]$ sqlplus system/oracle
<<message truncated>>
SQL> select count(*) from SYS.OPATCH_XML_INV;
COUNT(*)
———-
1
SQL> exit
[oracle@host01]$ ls -l /tmp/stout*.test
-rw-r--r-- 1 oracle oinstall 0 Feb 10 20:39 /tmp/stout_orcl.test
[oracle@host01]$ rm /tmp/stout_orcl.test
rm: remove /tmp/stout_orcl.test (yes/no)? yes
[oracle@host01]$ sqlplus system/oracle@host01/orcl
<<message truncated>>
SQL> select count(*) from SYS.OPATCH_XML_INV;
select count(*) from SYS.OPATCH_XML_INV
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04095: preprocessor command
/u01/app/oracle/product/12.2.0/dbhome_1/QOpatch/qopiprep.bat encountered error
"/u01/app/oracle/product/12.2.0/dbhome_1/QOpatch/qopiprep.bat[56]:
/u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/stout_orcl.txt: cannot create
[Permission de"
SQL> exit
[oracle@host01]$
[oracle@host01]$ ls -l /tmp/stout*.test
-rw-r--r-- 1 grid oinstall 0 Feb 10 20:41 /tmp/stout_orcl.test
Apparently, it succeeded when logged onto the database locally (bypassing the listener), and failed when connecting remotely (going through the listener). And the listener is running out of the Oracle Restart home, whose owner is grid:
[oracle@host01]$ ps -ef | grep tnslsnr
grid 1887 1 0 Jan 11 ? 19:54 /u01/app/12.2.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
oracle 64664 36089 0 20:56:12 pts/3 0:00 grep tnslsnr
Although this sometimes made sense in previous versions (10g? 11g?), it did not happen in 12.1. Therefore, it should be treated as a bug :(.
As a temporary workaround, write permission can be granted to the group of the log directory, since grid is a member of oinstall:
[oracle@magnum]$ id -a grid
uid=506(grid) gid=512(oinstall) groups=512(oinstall),514(asmadmin),515(asmdba),516(asmoper),519(osasm),520(osdba),513(dba)
[oracle@host01]$ cd $ORACLE_HOME/rdbms
[oracle@host01]$ ls -ld log
drwxr-xr-x 3 oracle oinstall 16 Feb 10 20:39 log
[oracle@host01]$ chmod g+w log
[oracle@host01]$ ls -ld log
drwxrwxr-x 3 oracle oinstall 16 Feb 10 20:39 log
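With the group write bit in place, the remote test from earlier should now succeed. A quick re-check, sketched here with the same connect string used above; with g+w on $ORACLE_HOME/rdbms/log, the preprocessor can write its temporary files even when it runs as the grid user:

```sql
-- Connect through the listener again (e.g. sqlplus system/oracle@host01/orcl)
-- and re-query the external table; the ORA-29913/KUP-04095 errors
-- should no longer appear.
select count(*) from SYS.OPATCH_XML_INV;
```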
I have now encountered this issue multiple times in my own environment. I searched MOS and tried different things, but I couldn't solve it yet. My upgrade fails with ORA-20001 during the datapatch run, and I'd like to show you how to bring the upgrade to a successful end.
It all starts with an upgrade
In my lab environment I often upgrade two databases in parallel: an 11.2.0.4 database and a 12.2.0.1 database. The 11.2.0.4 always upgrades flawlessly, but the 12.2.0.1 occasionally fails. This is the screen AutoUpgrade shows me, but this is not an AutoUpgrade problem. It is not even an upgrade problem but a patching problem.
First of all, my source databases are patched to the most recent patch bundles; at the time of writing, that means the July 2020 patch bundles.
I use AutoUpgrade here simply because it is easier. Why should I type commands or click through DBUA screens? Especially as DBUA wouldn't be able to upgrade two databases in parallel.
This is my config file:
global.autoupg_log_dir=/home/oracle/upg_logs
#
# Database number 1
#
upg1.dbname=DB12
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=DB12
upg1.log_dir=/home/oracle/upg_logs
upg1.upgrade_node=localhost
upg1.target_version=19
upg1.timezone_upg=no
upg1.restoration=no
#
# Database number 2
#
upg2.dbname=FTEX
upg2.start_time=NOW
upg2.source_home=/u01/app/oracle/product/11.2.0.4
upg2.target_home=/u01/app/oracle/product/19
upg2.sid=FTEX
upg2.log_dir=/home/oracle/upg_logs
upg2.upgrade_node=localhost
upg2.target_version=19
upg2.timezone_upg=no
upg2.restoration=no
Nothing unusual.
Upgrade is failing
And while the 11.2.0.4 database upgrades nicely, the 12.2.0.1 fails.
$ java -jar $OH19/rdbms/admin/autoupgrade.jar -config UP19.cfg -mode deploy
AutoUpgrade tool launched with default options
Processing config file ...
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
2 databases will be processed
Type 'help' to list console commands
upg> -------------------------------------------------
Errors in database [DB12]
Stage          [DBUPGRADE]
Operation      [STOPPED]
Status         [ERROR]
Info           [
Error: UPG-1400
UPGRADE FAILED [DB12]
Cause: Database upgrade failed with errors
For further details, see the log file located at
/home/oracle/upg_logs/DB12/103/autoupgrade_20200730_user.log]
-------------------------------------------------
Logs: [/home/oracle/upg_logs/DB12/103/autoupgrade_20200730_user.log]
-------------------------------------------------
Job 102 completed
Let me find out what happened.
autoupgrade_user.log
As the output proposed, I will start with: /home/oracle/upg_logs/DB12/103/autoupgrade_20200730_user.log
+---------+-------------+
|     DB12|UPGRADE [95%]|
+---------+-------------+
2020-07-30 12:24:30.259 ERROR
DATABASE NAME: DB12
         CAUSE: ERROR at Line 756922 in [/home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db120.log]
        REASON: Error: prereq checks failed!
        ACTION: [MANUAL]
       DETAILS:
2020-07-30 12:24:30.287 ERROR Database Upgrade Error in File [/home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db120.log] on Database [/home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db120.log]
2020-07-30 12:24:31.945 ERROR UPGRADE FAILED [DB12]
2020-07-30 12:24:31.945 ERROR Exception Error in Database Upgrade [UPG-1400#UPGRADE FAILED [DB12]]
2020-07-30 12:24:31.945 INFO End Upgrade on Database [DB12]
2020-07-30 12:24:34.524 ERROR UPGRADE FAILED [DB12]
2020-07-30 12:24:34.533 ERROR Exception Error in Database Upgrade [DB12]
2020-07-30 12:24:34.570 ERROR db12 Return status is ERROR
2020-07-30 12:24:34.572 ERROR Dispatcher failed: AutoUpgException [UPG-1400#UPGRADE FAILED [DB12]]
2020-07-30 12:24:34.574 INFO Starting error management routine
2020-07-30 12:24:34.585 INFO Ended error management routine
2020-07-30 12:24:34.590 ERROR Error running dispatcher for job 103
Cause: Database upgrade failed with errors
2020-07-30 12:24:34.590 ERROR Dispatcher failed: Error: UPG-1400
UPGRADE FAILED [DB12]
Cause: Database upgrade failed with errors
For further details, see the log file located at
/home/oracle/upg_logs/DB12/103/autoupgrade_20200730_user.log; aborting job 103 for database DB12
Hm … CAUSE: ERROR at Line 756922 in [/home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db120.log]
The main worker’s catupgr0.log
This is the main worker’s upgrade logfile.
[..]
Serial   Phase #:105  [DB12] Files:1    Time: 3s
Serial   Phase #:106  [DB12] Files:1    Time: 0s
Serial   Phase #:107  [DB12] Files:1    Time: 23s
------------------------------------------------------
Phases [0-107]         End Time:[2020_07_30 12:24:07]
------------------------------------------------------
Grand Total Time: 1869s

*** WARNING: ERRORS FOUND DURING UPGRADE ***

1. Evaluate the errors found in the upgrade logs
   and determine the proper action.
2. Rerun the upgrade when the problem is resolved

REASON:
ERRORS FOUND: During Upgrade
FILENAME: /home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db120.log AT LINE NUMBER: 754911
------------------------------------------------------
Identifier DATAPATCH_ 20-07-30 12:21:32
SCRIPT    = [/home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db12_datapatch_upgrade.log]
ERROR     = [Error: prereq checks failed!]
STATEMENT = []
------------------------------------------------------

LOG FILES: (/home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db12*.log)

Upgrade Summary Report Located in:
/home/oracle/upg_logs/DB12/103/dbupgrade/upg_summary.log

End of Input Commands
------------------------------------------------------

Start of DataPatch Logs
------------------------------------------------------
stdout from running datapatch to install upgrade SQL patches and PSUs:
SQL Patching tool version 19.8.0.0.0 Production on Thu Jul 30 12:19:29 2020
Copyright (c) 2012, 2020, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_30237_2020_07_30_12_19_29/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done
Bootstrapping registry and package to current versions...done
Error: prereq checks failed!
verify_queryable_inventory returned ORA-20001: Latest xml inventory is not loaded into table
Prereq check failed, exiting without installing any patches.

Please refer to MOS Note 1609718.1 and/or the invocation log
/u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_30237_2020_07_30_12_19_29/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Thu Jul 30 12:21:31 2020

stderr from running datapatch to install upgrade SQL patches and PSUs:

End of DataPatch Logs
------------------------------------------------------
So the upgrade itself completed successfully, but datapatch had an issue in the post-upgrade phase.
ORA-20001
And this seems to be the problem:
Error: prereq checks failed! verify_queryable_inventory returned ORA-20001: Latest xml inventory is not loaded into table
The most famous ORA-20001. On My Oracle Support you will find this note: MOS Note 1602089.1 – Queryable Patch Inventory – Issues/Solutions for ORA-20001: Latest xml inventory is not loaded into table. But I need more information to get an idea of what's wrong in my case.
And again, for the record: I upgrade to a 19.8.0 home here, but I saw the same issue with 19.7.0, too.
sqlpatch_invocation.log
Ok, let's go to the next logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_30237_2020_07_30_12_19_29/sqlpatch_invocation.log. Maybe it has more details?
[2020-07-30 12:21:29] verify_queryable_inventory returned ORA-20001: Latest xml inventory is not loaded into table
[2020-07-30 12:21:29] Error: prereq checks failed!
[2020-07-30 12:21:29] verify_queryable_inventory returned ORA-20001: Latest xml inventory is not loaded into table
[2020-07-30 12:21:29] Prereq check failed, exiting without installing any patches.
...
[2020-07-30 12:21:29] *** END final state end of patching ***
[2020-07-30 12:21:31] Please refer to MOS Note 1609718.1 and/or the invocation log
[2020-07-30 12:21:31] /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_30237_2020_07_30_12_19_29/sqlpatch_invocation.log
[2020-07-30 12:21:31] for information on how to resolve the above errors.
That is not really helpful.
But there is a sqlpatch_debug.log in the same directory, and it contains this error sequence:
[2020-07-30 12:21:29] QPI: Caught error while querying opatch_xml_inv
[2020-07-30 12:21:29] QPI: in REFRESH_OPATCH_DATA, ERROR code -29913 :
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04095: preprocessor command /u01/app
[2020-07-30 12:21:29] QPI: Releasing lock SUCCESSFUL, qp_result=0 at : 30-JUL-20 12.21.29.797707000 PM +02:00
[2020-07-30 12:21:29] QPI: Releasing lock FAILED, qp_result=4 at : 30-JUL-20 12.21.29.798750000 PM +02:00
[2020-07-30 12:21:29] QPI: in get_pending_activity, ERROR code -20001 :
ORA-20001: Latest xml inventory is not loaded into table
Unfortunately, this doesn't bring me forward either, as my error pattern isn't covered in MOS Note 1602089.1.
alert.log?
Maybe the alert.log has more information?
SERVER COMPONENT id=DP_UPG_BGN: timestamp=2020-07-30 12:19:27
2020-07-30T12:19:47.883482+02:00
XDB initialized.
2020-07-30T12:19:49.188762+02:00
QPI: opatch file present, opatch
QPI: qopiprep.bat file present
2020-07-30T12:21:32.945903+02:00
SERVER COMPONENT id=DP_UPG_END: timestamp=2020-07-30 12:21:32
No.
Basically, I know that the inventory couldn't be queried at the end of the upgrade. But my database is upgraded, so there is no need to restore it. But how do I solve this?
Query the inventory or cleanup metadata
At this stage I wanted to save my upgrade. So I queried the inventory myself, and it takes quite a bit of time, as you can see here:
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jul 30 15:18:48 2020
Version 19.8.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0

SQL> set timing on
SQL> select * from OPATCH_XML_INV;

XML_INVENTORY
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <InventoryInstance> <ora

Elapsed: 00:00:35.22
SQL> exit
It may also be a solution to clean up the metadata with:
EXEC dbms_qopatch.clean_metadata();
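Either way, once the metadata is cleaned up (or the inventory has been warmed up by the full query above), the queryable inventory should respond again. A quick sanity check, sketched on the assumption that DBMS_QOPATCH is functional at this point:

```sql
-- Ask the queryable inventory for the patch list; if this returns the
-- inventory XML without ORA-20001, datapatch's prereq check should pass too.
select dbms_qopatch.get_opatch_lsinventory() from dual;
```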
Complete the upgrade
AutoUpgrade still waits for me, which is another reason why I don't use the DBUA (it typically is non-resumable). I will now resume my upgrade:
upg> lsj
+----+-------+-----------+---------+--------+--------------+--------+-----------------+
|Job#|DB_NAME|      STAGE|OPERATION|  STATUS|    START_TIME| UPDATED|          MESSAGE|
+----+-------+-----------+---------+--------+--------------+--------+-----------------+
| 102|   FTEX|POSTUPGRADE|  STOPPED|FINISHED|20/07/30 11:47|12:30:11|Completed job 102|
| 103|   DB12|  DBUPGRADE|  STOPPED|   ERROR|20/07/30 11:48|12:24:34|         UPG-1400|
+----+-------+-----------+---------+--------+--------------+--------+-----------------+
Total jobs 2

upg> resume -job 103
Resuming job: [103][DB12]
The output looks promising:
And yes, my database upgrade finished now.
Alternative Workaround
You may use the following workaround; it increases the timeout for reading the inventory:
alter system set "_xt_preproc_timeout"=180 scope=both;
In this case, I set it to 180 seconds.
Annotation:
In a previous version of this blog post, I recommended setting _bug27355984_xt_preproc_timeout instead of _xt_preproc_timeout. Please use _xt_preproc_timeout, since otherwise the upgrade to Oracle Database 21c and 23c will fail due to this parameter in your spfile.
Summary
The root cause for this issue may have been incorrect metadata from a previous patch run. Let me emphasize again that this is not an upgrade or AutoUpgrade error. It happens when datapatch gets invoked.
I'm very happy that AutoUpgrade (and the command line upgrade with "dbupgrade -R") are both resumable. Otherwise I wouldn't have had a chance to fix this so easily. In case you still use the DBUA, you can jump in with "dbupgrade -R" and complete the upgrade after cleaning up the metadata.
Why do I write such a blog post? When I hit such an issue even with my limited number of upgrades, you may see it one day, too. And the solution is simple. But as I couldn't find the key in MOS, I would like to show you here how easily you can resume a seemingly failed database upgrade, even when the upgrade itself completed fully.
Further Links and Information
- Opatch … oh Opatch … why is Opatch so stubborn? – Dec 15, 2018
- MOS Note:1602089.1 – Queryable Patch Inventory – Issues/Solutions for ORA-20001: Latest xml inventory is not loaded into table
–Mike
Cause: Due to invalid objects in the SYS schema, DBMS_UTILITY.COMPILE_SCHEMA could not recompile them along with their dependent objects.
SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
       181
SQL> exec dbms_utility.compile_schema('SYS');
BEGIN dbms_utility.compile_schema('SYS'); END;

*
ERROR at line 1:
ORA-20001: Cannot recompile SYS objects
ORA-06512: at "SYS.DBMS_UTILITY", line 387
ORA-06512: at line 1
Workaround: the utlrp.sql script can be used to revalidate the objects.
SQL> @?/rdbms/admin/utlrp

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN              2020-12-18 14:24:13

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>         WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>         WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#

PL/SQL procedure successfully completed.

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END              2020-12-18 14:24:32

DOC> The following query reports the number of objects that have compiled
DOC> with errors.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0

Function created.

PL/SQL procedure successfully completed.

Function dropped.

...(14:24:56) Starting validate_apex for APEX_190200
...(14:24:57) Checking missing sys privileges
...(14:24:57) Re-generating APEX_190200.wwv_flow_db_version
... wwv_flow_db_version is up to date
...(14:24:57) Key object existence check
...(14:24:58) Setting DBMS Registry for APEX to valid
...(14:24:58) Exiting validate_apex

PL/SQL procedure successfully completed.
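After utlrp.sql finishes, rerunning the earlier count confirms the cleanup; given the zero error counts reported above, it should now return 0 instead of 181:

```sql
-- Re-check for invalid objects after running utlrp.sql;
-- ideally the count is now 0.
select count(*) from dba_objects where status = 'INVALID';
```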
A client ran into an error on one of their Application Express reports this week. At first they thought it was caused by some changes they made to data in a table. Upon investigation, that wasn’t necessarily the case.
The error encountered:
report error:
ORA-20001: Error fetching column value: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
Let’s start at the beginning. I was contacted by a long-time client the other day. I was part of a team that helped create an application in Oracle Application Express that collects information from organizations and gives out grants to the ones selected. There are deadline dates for each quarter and if applications are submitted within a certain quarter their application is marked with the specified deadline date. This makes it easy to pull the information when the deadline hits. Recently, they went from four deadlines in a fiscal year to three. The client updated the table to put this into effect. The errors on the report seemed to have appeared after the changes were made.
First things first: I had to narrow down whether this data actually caused the error, or whether something else caused it around the same time by coincidence. So I took to Google. Most of my reading gave the same cause: trying to fit too much of something into a space that wasn't big enough. Some people said it was a VARCHAR2 variable that wasn't large enough to contain the information in some PL/SQL code, and some said it was a list of values (LOV) trying to pull too much information from a table using a query. It actually ended up being the latter.
We had a view set up that was being queried to give the report result set. One of the columns returned was ORG_ID, which is just a number identifier for each organization. Under the column attributes for this column in the report, 'Display As' was set to 'Display As Text (based on LOV, does not save state)'. We had an LOV set up to display the organization name, with the return value being the organization ID. This is where the problem arose: this LOV was returning too much information. Not necessarily too many rows, but too many bytes. Changing 'Display As' back to 'Standard Report Column' remedied the issue.
Now that I narrowed down the problem, I just had to add the organization’s name back into the report. I updated the view to include the new column and voila! No more error.
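The fix can be sketched in SQL. All table, view, and column names here are hypothetical, purely to illustrate joining the organization name directly into the report's view so the report no longer needs the LOV lookup at render time:

```sql
-- Hypothetical sketch: expose the organization name as a regular view
-- column (shown as a Standard Report Column), instead of translating
-- ORG_ID through an LOV while the report renders.
CREATE OR REPLACE VIEW grant_report_v AS
SELECT a.application_id,
       a.org_id,
       o.org_name,          -- added column, replaces the LOV translation
       a.deadline_date,
       a.amount_requested
FROM   grant_applications a
JOIN   organizations      o ON o.org_id = a.org_id;
```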
I’m not sure if this will help anyone else running into the problem, but I hope that it at least saves someone a bit of time. I never really found a straight answer searching through Google and the Oracle forums. All of the answers I came across were very cryptic, so I hope this is exactly the opposite!
UPDATE: I was contacted by another company that ran into the same error. They ended up with a different issue producing the same codes: their LOV was actually returning too many records. Dropping the number of records returned down to a lower number fixed the issue. Because we were just displaying ours based on an LOV and not actually giving the user a dropdown to choose from, our case was easily remedied. With the issue of too many records in an LOV dropdown, a question you may want to ask yourself is, "Does having this many records in a dropdown make this cumbersome for the user?" In my opinion, yes. There are other ways to let the user select from a list that don't require them to scroll... and scroll... and scroll... through hundreds of options. An alternative could be the Popup LOV, which provides a search box as part of its functionality.