Posted by Pavan DBA on April 23, 2012
Friends, I want to share an interesting bug that I faced with the Data Pump utility.
Recently, while taking an export backup of a database using expdp with the full=y option, I got the error below:
UDE-00008: operation generated ORACLE error 31626
ORA-31626: job does not exist
ORA-39086: cannot retrieve job information
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2772
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3886
ORA-06512: at line 1
I thought there was some issue, searched My Oracle Support, and found that it is bug 5969934, which can occur in any version from 10.2.0.2 through 11.2.0.3.
The reason is as follows.
The expdp client makes calls to the DBMS_DATAPUMP package to start and monitor the export job. Once the export job is underway, the client just monitors the job status by issuing DBMS_DATAPUMP.GET_STATUS. Therefore, if the export logfile says "job successfully completed", the dump file generated by the job should be fine.
You can simply ignore the errors, since the dump file is still valid for an import.
In the 10.2.0.2 release, there were a number of problems that caused the expdp and impdp clients to exit prematurely, interpreting a nonfatal error as a fatal one, giving the appearance that the job had failed when it hadn’t. In fact, inspection of the log file, if one was specified for the job, showed that the job ran successfully to completion. Often a trace file written by one of the Data Pump processes would provide more detail on the error that had been misinterpreted as a fatal one. Many of these errors involved the queues used for communication between the Data Pump processes, but there were other issues as well.
With each subsequent release, these problems have been addressed, and the client has become more robust and rarely, if ever, runs into situations like this. However, this is the result of many bug fixes in subsequent releases, some in Data Pump and some in supporting layers. It’s impossible to know, at this point, what combination of bug fixes would address this specific failure, and even if that was possible, it wouldn’t address other possible failures that look very similar on the client side.
Relying on information in the log file is one way to verify that the job actually completed successfully. Problems like this one became much more intermittent by the 10.2.0.4 release and are rarely, if ever, seen in 11.1 or later.
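For context, a full export that produces such a logfile is typically started along these lines (the directory object, dump file, and log file names here are illustrative, not the exact ones used for this post):
expdp system/<password> full=y directory=DUMP_DIR dumpfile=fullexp_%U.dmp logfile=fullexp.log filesize=5G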
So, as per the above comments, I checked my logfile and found that the job was successful (below are the last lines of the logfile):
Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
/dbexports/fullexp_21apr2012_01.dmp
/dbexports/fullexp_21apr2012_02.dmp
/dbexports/fullexp_21apr2012_03.dmp
/dbexports/fullexp_21apr2012_04.dmp
/dbexports/fullexp_21apr2012_05.dmp
/dbexports/fullexp_21apr2012_06.dmp
/dbexports/fullexp_21apr2012_07.dmp
/dbexports/fullexp_21apr2012_08.dmp
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at 18:23:21
Any time you face this error, don't panic; check your logfile first 🙂
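A quick check on the logfile (the path here is illustrative) is enough to confirm whether the job actually finished:
tail -5 /dbexports/fullexp_21apr2012.log
grep "successfully completed" /dbexports/fullexp_21apr2012.log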
Hope this helps and HAPPY LEARNING !!!
A short post about the ORA-31626 error. I found that a scheduled export and import job was broken in an Oracle database, failing with ORA-31626: job does not exist. After checking the scheduled jobs I realized that the problem was not the scheduler job itself but the Data Pump job (I should have read the error more carefully: it is the Data Pump job that does not exist).
And it all seems related to being cost-effective in the cloud…
The full error:
Error at line 2
ORA-20010: ORA-20010: ORA-31626: job does not exist -
ORA-06512: at "<user>.<program>", line 169
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_ISCHED", line 150
ORA-06512: at "SYS.DBMS_SCHEDULER", line 441
ORA-06512: at line 2
Script Terminated on line 2.
At first it looked like a scheduler job problem, but it turned out to be a Data Pump job left behind in a kind of 'stale' state.
Tue Jun 05 09:00:49 Central Europe Daylight Time 2018
Errors in file <path><file>53244.trc:
ORA-12012: error on auto execute of job 53244
ORA-20010: ORA-20010: ORA-39002: invalid operation - ORA-06512: at "SYS.DBMS_DATAPUMP", line 4882
ORA-06512: at "<owner>.<program>", line 146
ORA-06512: at "<owner>.<program>", line 169
What datapump jobs are there?
SELECT owner_name, job_name, operation, job_mode, state, attached_sessions FROM dba_datapump_jobs;
The job was still there, but in state 'NOT RUNNING', and that is the one causing the error.
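If many Data Pump jobs exist, a narrowed-down variant of the query above (added here for convenience, not part of the original post) shows only the suspect ones:
SELECT owner_name, job_name, state FROM dba_datapump_jobs WHERE state = 'NOT RUNNING';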
Finding out the master table:
SELECT o.status, o.object_id, o.object_type, o.owner||'.'||object_name "OWNER.OBJECT" FROM dba_objects o, dba_datapump_jobs j WHERE o.owner=j.owner_name AND o.object_name=j.job_name AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;
Drop this master table:
drop table <owner>.<object_name>;
And the job is running without problems.
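As a small aside, assuming the recycle bin is enabled, dropping the master table with PURGE (or purging the recycle bin afterwards) avoids leaving a BIN$ copy of it behind:
drop table <owner>.<object_name> purge;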
But what causes this failure? Most likely this: because we run the database in the cloud, the server is shut down at regular intervals when it is not being used, to save costs, and nobody realized that this job was scheduled around that time.
So be aware of this side effect when shutting servers down to save costs in the cloud.
Sources:
--> Note: How To Drop A NOT RUNNING DataPump Job? (Doc ID 1456784.1)
Lately I have had to do a lot of export and import operations between various 10g and 11g databases, using both Original Export (exp/imp) and Data Pump Export (expdp/impdp), and I ran into a number of problems.
The main point
Original Export will eventually die off completely. This is stated explicitly in the note Feature Obsolescence - Original Export 10.2 [ID 345187.1]: bug fixing ended back on 31.07.2010, and extended support ends a year from now, on 31.07.2013; after that it only coasts along on inertia. So it is better to move to Data Pump Export, especially if you are on 11g. Original Export makes sense only in two cases: 1) when you need to import a file that was created with Original Export some time ago; 2) when you hit a problem that is fixed in Original Export but not in Data Pump Export, for example with chr(0) (see below).
What to read
A couple of notes worth reading:
Master Note for Data Pump [ID 1264715.1] and Master Note for Export and Import [ID 1264691.1].
********************************************************************************
Problems I ran into myself
Original Export
- ORA-01400 cannot insert NULL into IMP-00019 IMP-00003
- ORA-01406 fetched column value was truncated
- ORA-01455 converting column overflows integer datatype
Datapump Export
- The chr(0) problem: ORA-39126 ORA-06502 LPX-00216 ORA-06512
- ORA-31633: unable to create master table ORA-31626 job does not exist ORA-00955
- ORA-39065 unexpected master process exception in DISPATCH
********************************************************************************
1. The chr(0) problem: ORA-39126 ORA-06502 LPX-00216 ORA-06512
Applies to Data Pump Export. During import (impdp) an error occurs; in my case it looked like this:
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_DDLS [TABLE:"UDB_BUF"."CMP_DETAILS"]
ORA-06502: PL/SQL: numeric or value error
LPX-00216: invalid character 0 (0x0)
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 9001
----- PL/SQL Call Stack -----
object line object
handle number name
0x2146dcf18 20462 package body SYS.KUPW$WORKER
0x2146dcf18 9028 package body SYS.KUPW$WORKER
0x2146dcf18 16665 package body SYS.KUPW$WORKER
0x2146dcf18 3956 package body SYS.KUPW$WORKER
0x2146dcf18 9725 package body SYS.KUPW$WORKER
0x2146dcf18 1775 package body SYS.KUPW$WORKER
0x2146e8f38 2 anonymous block
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_DDLS [TABLE:"UDB_BUF"."CMP_DETAILS"]
ORA-06502: PL/SQL: numeric or value error
LPX-00216: invalid character 0 (0x0)
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 9001
----- PL/SQL Call Stack -----
object line object
handle number name
0x2146dcf18 20462 package body SYS.KUPW$WORKER
0x2146dcf18 9028 package body SYS.KUPW$WORKER
0x2146dcf18 16665 package body SYS.KUPW$WORKER
0x2146dcf18 3956 package body SYS.KUPW$WORKER
0x2146dcf18 9725 package body SYS.KUPW$WORKER
0x2146dcf18 1775 package body SYS.KUPW$WORKER
0x2146e8f38 2 anonymous block
Job "REG_RT"."IMP_TO_UDB_BUF" stopped due to fatal error at 11:09:35
Cause of the error:
The code of procedures or packages, or object definitions, uses chr(0). In this particular case it is a function-based index:
CREATE INDEX cmp_det_intersect_search_i
ON cmp_details (UPPER (store_no), NVL (lvl1_num, CHR (0)), NVL (lvl2_num, CHR (0)), NVL (lvl3_num, CHR (0)))
TABLESPACE index_tbsp;
This problem was fixed in Original Export but not in Data Pump Export (see, for example, Bug 3591564: ORA-1756 IMPORTING FUNCTIONAL INDEX).
Possible solutions:
1). Use Original Export.
2). Drop the objects containing chr(0) before the export and recreate them manually after the import.
3). (may not help) Exclude the objects containing chr(0) during the import (see the example after this list).
4). Change the application so that it no longer uses chr(0).
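A minimal sketch of option 3, assuming the only offending object is the function-based index from the error above (the directory, dump file, and parfile names are illustrative). Putting the filter in a parameter file, for example imp_udb_buf.par, avoids shell quoting issues:
directory=DUMP_DIR
dumpfile=udb_buf.dmp
exclude=INDEX:"IN ('CMP_DET_INTERSECT_SEARCH_I')"
and then run: impdp system/<password> parfile=imp_udb_buf.par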
********************************************************************************
2. ORA-01400 cannot insert NULL into IMP-00019 IMP-00003
Applies to Original Export. During import (imp) the error below occurs, even though the ROW_MODE column does not actually contain any NULL values at all.
IMP-00019: row rejected due to ORACLE error 1400
IMP-00003: ORACLE error 1400 encountered
ORA-01400: cannot insert NULL into ("UDB_BUF"."ADR_ADDRESS_VOC"."ROW_MODE")
Column : 1000375427
Column : 60
Column : кв.6
Column : 45000000000
Column :
Column :
Column :
Column : 1000375327
Column :
Column : Н
Column : 27
Column : 10-AUG-1999:10:11:56
Column : 27
Column : 23-AUG-1999:17:42:54
Column : 27
Column : Н
Column :
Column : 601
Column : 6
Column : 1
Column : 10-AUG-1999:10:11:56
Column :
Column : 46000000001
Column :
Cause of the error:
In 11gR1 a new kind of column was introduced that does not store the column's default value in the data block; when empty data is read, NULL values of such a column are replaced with the defined default. However, this does not work with DIRECT=Y. For details see ORA-1400 During Import of Export Dump Written in Direct Path Mode [ID 826746.1].
Possible solutions:
1). Use Data Pump Export.
2). Use Original Export with DIRECT=N (see the sketch below).
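A minimal sketch of option 2 (the account, schema, and file names are illustrative, not taken from the original post):
exp system/<password> owner=UDB_BUF direct=n file=udb_buf.dmp log=udb_buf_exp.log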
********************************************************************************
3. ORA-31633: unable to create master table ORA-31626 job does not exist ORA-00955
Applies to Data Pump Export. During import (impdp) the following error occurs:
ORA-31626: job does not exist
ORA-31633: unable to create master table "REG_RT.IMP_TO_UDB_BUF"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 863
ORA-00955: name is already used by an existing object
Cause of the error:
This error usually occurs when an import process is interrupted and a new import process is then started with the same job name (the JOB_NAME parameter). For details see DataPump Export or Import Fails With ORA-31633 ORA-6512 ORA-955 [ID 556425.1].
Possible solutions:
Usually it is enough to simply drop the table named in the error message, REG_RT.IMP_TO_UDB_BUF, although the note advises first making sure that the job is not running; both steps are sketched below.
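The check and the drop, using the owner and job name from the error message above:
SELECT state FROM dba_datapump_jobs WHERE owner_name = 'REG_RT' AND job_name = 'IMP_TO_UDB_BUF';
DROP TABLE reg_rt.imp_to_udb_buf;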
********************************************************************************
4. ORA-01406 fetched column value was truncated
Applies to Original Export. During export (exp) the following error occurs:
EXP-00008: ORACLE error 1406 encountered
ORA-01406: fetched column value was truncated
EXP-00000: Export terminated unsuccessfully
Cause of the error:
This error usually occurs when you take a FULL export from an 11g server using an exp client whose version is lower than 11. For details see Full Export Fails With Error ORA-1406 When Exporting 11g Database [ID 553993.1].
Possible solutions:
The note suggests some approaches that are not entirely clear to me:
1). Change the database character set to AL32UTF8 (this may be far from simple).
2). Apply patch 6804150 (I could not find it, but it seems to be included in patch set 10.2.0.5; where should it be installed, on the server or on the client?).
3). Install patch set 10.2.0.5 (only for 10g? Where, server or client?).
My approaches:
1) Try exporting only a specific schema, i.e. OWNER=XXX instead of FULL=Y (see the sketch after this list).
2) Use Data Pump Export.
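A minimal sketch of approach 1 (the account and file names are illustrative):
exp system/<password> owner=REG_RT file=reg_rt.dmp log=reg_rt_exp.log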
********************************************************************************
5. ORA-01455 converting column overflows integer datatype
Applies to Original Export. During export (exp) the following error occurs:
EXP-00008: ORACLE error 1455 encountered
ORA-01455: converting column overflows integer datatype
EXP-00000: Export terminated unsuccessfully
Cause of the error:
This error usually occurs when you export from an 11.2 server using an exp client whose version is lower than 11.2. For details see EXP: ORA-1455 is raised when exporting from an 11.2 database using a 9i, 10g or 11gR1 exp utility [ID 1381690.1]. In 11.2 a database is created with DEFERRED_SEGMENT_CREATION=TRUE by default, i.e. segments for empty tables are not created; they are created on the first insert into the table.
Possible solutions:
The note suggests the following approaches:
1). Use Data Pump Export (expdp).
2). Allocate at least one extent for the empty tables. Since I am exporting a single schema, I use the query
SELECT 'ALTER TABLE ' || table_name || ' ALLOCATE EXTENT;'
FROM dba_tables
WHERE segment_created = 'NO' AND owner IN ('REG_RT');
which gives me a set of statements to run (this is the approach that worked for me; a PL/SQL variant is sketched after this list).
3). Read note 1083330.1 (I have not read it).
4). Recreate the database with DEFERRED_SEGMENT_CREATION=FALSE (not acceptable for a production database).
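A minimal, untested PL/SQL sketch of option 2 that runs the generated statements in one go (the schema name is taken from the query above; adjust as needed):
BEGIN
  FOR t IN (SELECT owner, table_name
              FROM dba_tables
             WHERE segment_created = 'NO'
               AND owner = 'REG_RT') LOOP
    -- allocate an initial extent so the empty table is picked up by the pre-11.2 exp client
    EXECUTE IMMEDIATE 'ALTER TABLE "' || t.owner || '"."' || t.table_name || '" ALLOCATE EXTENT';
  END LOOP;
END;
/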
********************************************************************************
6. ORA-39065 unexpected master process exception in DISPATCH
Applies to Data Pump Export. During export (expdp) the following error occurs:
ORA-39006: internal error
ORA-39065: unexpected master process exception in DISPATCH
ORA-01403: no data found
ORA-39097: Data Pump job encountered unexpected error 100
I installed 11.2.0.3 + Patch 10; on 11.2.0.3 Patch 8 there were no such problems. In other words, I am using a CPU (Critical Patch Update) I had never used before, so the CPU immediately comes under suspicion, although most likely the problem is not the CPU itself but something that breaks during its installation.
Cause of the error:
There can be many causes.
Possible solutions:
Since there are many causes, there are also many things to try:
1). Make sure there is only one DUAL table (in my case there was only one).
select owner, object_name, object_type from dba_objects where object_name='DUAL';
OWNER|OBJECT_NAME|OBJECT_TYPE
SYS|DUAL|TABLE
PUBLIC|DUAL|SYNONYM
2). Increase streams_pool_size (default = 0; I increased it to 128 MB, which did not help; the ALTER SYSTEM statements for items 2 and 3 are sketched after this list).
3). Increase aq_tm_processes (default = 1; I increased it to 5, which did not help).
4). What did help was the note DataPump Export Started Failing After Applying CPU Patch [ID 453796.1]: the Data Pump metadata in the METANAMETRANS$ table is lost and has to be restored (run everything as SYS). First check that this is indeed the case:
select count(*) from metanametrans$; (=0)
Recreate the metadata:
@$ORACLE_HOME/rdbms/admin/catmet2.sql
@$ORACLE_HOME/rdbms/admin/utlrp.sql
Check again:
select count(*) from metanametrans$; (=3302 for 11.2.0.3 Patch 10)
5). If that does not help, you can also try DataPump Import Or Export (IMPDP/EXPDP) Fails With Errors ORA-31626 ORA-31637 [ID 345198.1].
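For reference, items 2 and 3 above can be applied with statements along these lines (the values are simply the ones the author tried, not recommendations):
ALTER SYSTEM SET streams_pool_size = 128M SCOPE=BOTH;
ALTER SYSTEM SET aq_tm_processes = 5 SCOPE=BOTH;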
Problem: during an Oracle Data Pump export on 10.2.0.4, the errors below were encountered.
. . exported "MZS_OWNER"."READING" 3.269 GB 12277747 rows
UDE-00008: operation generated ORACLE error 31626
ORA-31626: job does not exist
ORA-39086: cannot retrieve job information
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2772
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3886
ORA-06512: at line 1
Solution: this is covered in the Oracle MOS note
DataPump Export (EXPDP) Client Gets UDE-8 ORA-31626 ORA-39086 [ID 549781.1].
Check the expdp logfile first; if it completed successfully as below, there is no issue:
Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
H:ORACLEORAEXPMZSTWCPDPDUMPPROD_MZS_SCHEMAS_20120612.DMP
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at 14:12:39
--------------------
However, reviewing the log file shows that the job "successfully completed".
Cause
This issue has been discussed in Bug 5969934 EXPDP CLIENT GETS UDE-00008 ORA-31626 WHILE THE SERVER SIDE EXPORT IS OK
Solution
The expdp client makes calls to the DBMS_DATAPUMP package to start and monitor the export job. Once the export job is underway, the client just monitors the job status by issuing DBMS_DATAPUMP.GET_STATUS. Therefore, if the export logfile says "job successfully completed", the dump file generated by the job should be fine.
You can simply ignore the errors, since the dump file is still valid for an import.
In the 10.2.0.2 release, there were a number of problems that caused the expdp and impdp clients to exit prematurely, interpreting a nonfatal error as a fatal one, giving the appearance that the job had failed when it hadn’t. In fact, inspection of the log file, if one was specified for the job, showed that the job ran successfully to completion. Often a trace file written by one of the Data Pump processes would provide more detail on the error that had been misinterpreted as a fatal one. Many of these errors involved the queues used for communication between the Data Pump processes, but there were other issues as well.
With each subsequent release, these problems have been addressed, and the client has become more robust and rarely, if ever, runs into situations like this. However, this is the result of many bug fixes in subsequent releases, some in Data Pump and some in supporting layers. It’s impossible to know, at this point, what combination of bug fixes would address this specific failure, and even if that was possible, it wouldn’t address other possible failures that look very similar on the client side.