Oracle error 1406 encountered


Lately I have had to run many export and import operations between various 10g and 11g databases, using both Original Export (exp/imp) and Data Pump Export (expdp/impdp), and I ran into a number of problems.

The main point

Original Export is going away for good. This is stated explicitly in the note Feature Obsolescence — Original Export 10.2 [ID 345187.1]. Bug fixing ended back on 31.07.2010, and extended support ends a year from now, on 31.07.2013; after that it only coasts along on inertia. So it is better to move to Data Pump Export, especially if you are on 11g. Original Export is worth using in only two cases: 1) when you need to import a dump file that was created with Original Export some time ago; 2) when you hit a problem that is fixed in Original Export but not in Data Pump Export, for example with chr(0) (see below).

Further reading

A couple of notes worth reading:
Master Note for Data Pump [ID 1264715.1] and Master Note for Export and Import [ID 1264691.1].

********************************************************************************

Problems I ran into myself

Original Export

  • ORA-01400 cannot insert NULL into IMP-00019 IMP-00003
  • ORA-01406 fetched column value was truncated
  • ORA-01455 converting column overflows integer datatype

Data Pump Export

  • The chr(0) problem. ORA-39126 ORA-06502 LPX-00216 ORA-06512
  • ORA-31633: unable to create master table ORA-31626 job does not exist ORA-00955
  • ORA-39065 unexpected master process exception in DISPATCH

********************************************************************************

1. The chr(0) problem. ORA-39126 ORA-06502 LPX-00216 ORA-06512

Applies to Data Pump Export. The error occurs during import (impdp); in my case it looked like this:

Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_DDLS [TABLE:"UDB_BUF"."CMP_DETAILS"]
ORA-06502: PL/SQL: numeric or value error
LPX-00216: invalid character 0 (0x0)
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 9001
----- PL/SQL Call Stack -----
object      line  object
handle    number  name
0x2146dcf18     20462  package body SYS.KUPW$WORKER
0x2146dcf18      9028  package body SYS.KUPW$WORKER
0x2146dcf18     16665  package body SYS.KUPW$WORKER
0x2146dcf18      3956  package body SYS.KUPW$WORKER
0x2146dcf18      9725  package body SYS.KUPW$WORKER
0x2146dcf18      1775  package body SYS.KUPW$WORKER
0x2146e8f38         2  anonymous block
Job "REG_RT"."IMP_TO_UDB_BUF" stopped due to fatal error at 11:09:35

Cause:

Procedure or package code, or object definitions, use chr(0). In this particular case it is a function-based index:

CREATE INDEX cmp_det_intersect_search_i
ON cmp_details (UPPER (store_no), NVL (lvl1_num, CHR (0)), NVL (lvl2_num, CHR (0)), NVL (lvl3_num, CHR (0)))
TABLESPACE index_tbsp;

This problem was fixed in Original Export, but not in Data Pump Export (see, for example, Bug 3591564: ORA-1756 IMPORTING FUNCTIONAL INDEX).

Workarounds:

1). Use Original Export.
2). Drop the objects that contain chr(0) before the export, then recreate them manually after the import.
3). (may not help) On import, exclude the objects that contain chr(0).
4). Change the application so it no longer uses chr(0).
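To find candidates for options 2 and 3 before the export, queries like the following can help (a sketch: the UDB_BUF owner filter comes from the example above; note that DBA_IND_EXPRESSIONS.COLUMN_EXPRESSION is a LONG column, so it cannot be filtered with LIKE and has to be inspected by eye):

```sql
-- Function-based index expressions: inspect the output for CHR(0) manually,
-- COLUMN_EXPRESSION is a LONG and cannot be filtered in SQL
SELECT index_owner, index_name, table_name, column_expression
  FROM dba_ind_expressions
 WHERE index_owner = 'UDB_BUF';

-- Stored PL/SQL that mentions CHR(0), allowing for spacing such as CHR (0)
SELECT DISTINCT owner, name, type
  FROM dba_source
 WHERE owner = 'UDB_BUF'
   AND REGEXP_LIKE(text, 'CHR[[:space:]]*\([[:space:]]*0[[:space:]]*\)', 'i');
```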

********************************************************************************

2. ORA-01400 cannot insert NULL into IMP-00019 IMP-00003

Applies to Original Export. The error below occurs during import (imp), even though the ROW_MODE column does not actually contain any NULL values at all.

IMP-00019: row rejected due to ORACLE error 1400
IMP-00003: ORACLE error 1400 encountered
ORA-01400: cannot insert NULL into ("UDB_BUF"."ADR_ADDRESS_VOC"."ROW_MODE")
Column : 1000375427
Column : 60
Column : кв.6
Column : 45000000000
Column :
Column :
Column :
Column : 1000375327
Column :
Column : Н
Column : 27
Column : 10-AUG-1999:10:11:56
Column : 27
Column : 23-AUG-1999:17:42:54
Column : 27
Column : Н
Column :
Column : 601
Column : 6
Column : 1
Column : 10-AUG-1999:10:11:56
Column :
Column : 46000000001
Column :

Cause:

11gR1 introduced a new kind of column that does not store the column's default value in the data blocks. When such a column comes back empty, its NULL values are replaced with the defined default. This does not work with DIRECT=Y, though. See ORA-1400 During Import of Export Dump Written in Direct Path Mode [ID 826746.1] for details.

Workarounds:

1). Use Data Pump Export.
2). Use Original Export with DIRECT=N.

********************************************************************************

3. ORA-31633: unable to create master table ORA-31626 job does not exist ORA-00955

Applies to Data Pump Export. The error occurs during import (impdp):

ORA-31626: job does not exist
ORA-31633: unable to create master table "REG_RT.IMP_TO_UDB_BUF"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 863
ORA-00955: name is already used by an existing object

Cause:

This error usually occurs when an import process is interrupted and a new import is then started with the same job name (the JOB_NAME parameter). See DataPump Export or Import Fails With ORA-31633 ORA-6512 ORA-955 [ID 556425.1] for details.

Workarounds:

Usually it is enough to drop the table named in the error message, REG_RT.IMP_TO_UDB_BUF, although the note advises first making sure the job is no longer running.
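A sketch of that check and cleanup, using the job and table names from the error above:

```sql
-- Make sure no Data Pump job is still attached to the master table
SELECT owner_name, job_name, state
  FROM dba_datapump_jobs;

-- If the job is gone or NOT RUNNING, drop the orphaned master table and retry
DROP TABLE reg_rt.imp_to_udb_buf;
```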

********************************************************************************

4. ORA-01406 fetched column value was truncated

Applies to Original Export. The error occurs during export (exp):

EXP-00008: ORACLE error 1406 encountered
ORA-01406: fetched column value was truncated
EXP-00000: Export terminated unsuccessfully

Cause:

This error usually occurs when doing a FULL export from an 11g server using an exp version lower than 11 (the client). See Full Export Fails With Error ORA-1406 When Exporting 11g Database [ID 553993.1] for details.

Workarounds:

The note suggests approaches that are not entirely clear to me:

1). Change the database character set to AL32UTF8 (which may be anything but simple).
2). Apply patch 6804150 (I could not find it, but it seems to be included in patch set 10.2.0.5; where should it go, the server or the client?).
3). Install patch set 10.2.0.5 (only for 10g? On the server or the client?).

My own approaches:

1) Try exporting only a specific schema, i.e. OWNER=XXX instead of FULL=Y.
2) Use Data Pump Export.

********************************************************************************

5. ORA-01455 converting column overflows integer datatype

Applies to Original Export. The error occurs during export (exp):

EXP-00008: ORACLE error 1455 encountered
ORA-01455: converting column overflows integer datatype
EXP-00000: Export terminated unsuccessfully

Cause:

This error usually occurs when exporting from an 11.2 server with an exp version lower than 11.2 (the client). See EXP: ORA-1455 is raised when exporting from an 11.2 database using a 9i, 10g or 11gR1 exp utility [ID 1381690.1] for details. In 11.2 a database is created with DEFERRED_SEGMENT_CREATION=TRUE by default, i.e. segments for empty tables are not created; they are created on the first insert into the table.

Workarounds:

The note suggests the following:

1). Use Data Pump Export (expdp).
2). Allocate at least one extent for each empty table. Since I am exporting a single schema, I use the query

SELECT 'ALTER TABLE ' || table_name || ' ALLOCATE EXTENT;'
FROM dba_tables
WHERE segment_created = 'NO' AND owner IN ('REG_RT');

which generates a set of statements to run (this is the approach that worked for me).
3). Read note 1083330.1 (I have not).
4). Recreate the database with DEFERRED_SEGMENT_CREATION=FALSE (not acceptable for a production database).
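As a softer variant of option 4, the parameter is dynamic, so it can at least be switched off without recreating the database (a sketch; this only affects tables created from then on, and existing empty tables still need ALLOCATE EXTENT as in the query above):

```sql
ALTER SYSTEM SET deferred_segment_creation = FALSE;
```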

********************************************************************************

6. ORA-39065 unexpected master process exception in DISPATCH

Applies to Data Pump Export. The error occurs during export (expdp):

ORA-39006: internal error
ORA-39065: unexpected master process exception in DISPATCH
ORA-01403: no data found
ORA-39097: Data Pump job encountered unexpected error 100

This happened after I installed 11.2.0.3 + Patch 10; on 11.2.0.3 Patch 8 there were no such problems. In other words, I was running a CPU I had never used before, so the CPU immediately came under suspicion, although the problem most likely comes not from the CPU itself but from something breaking during its installation.

Cause:

There can be many causes.

Workarounds:

Since there are many causes, there are also many possible fixes:

1). Make sure there is only one DUAL table (in my case there was).

select owner, object_name, object_type from dba_objects where object_name='DUAL';
OWNER|OBJECT_NAME|OBJECT_TYPE
SYS|DUAL|TABLE
PUBLIC|DUAL|SYNONYM

2). Increase streams_pool_size (default 0; I raised it to 128 MB, which did not help).
3). Increase aq_tm_processes (default 1; I raised it to 5, which did not help).
4). What did help was the note DataPump Export Started Failing After Applying CPU Patch [ID 453796.1]: the Data Pump metadata in the METANAMETRANS$ table has been lost and must be restored (run everything as SYS). First confirm that this is the case:

select count(*) from metanametrans$;  (=0)

Recreate the metadata:
@$ORACLE_HOME/rdbms/admin/catmet2.sql
@$ORACLE_HOME/rdbms/admin/utlrp.sql

Check again:
select count(*) from metanametrans$; (=3302 для 11.2.0.3 Patch 10)

5). If that does not help, you can also try DataPump Import Or Export (IMPDP/EXPDP) Fails With Errors ORA-31626 ORA-31637 [ID 345198.1].
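For reference, items 2 and 3 above can be applied with ALTER SYSTEM (a sketch; the values are the ones tried above, and SCOPE=BOTH assumes the instance runs on an spfile):

```sql
ALTER SYSTEM SET streams_pool_size = 128M SCOPE = BOTH;
ALTER SYSTEM SET aq_tm_processes = 5 SCOPE = BOTH;
```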

Recently one of my colleagues ran into the issue "Oracle database error 1406: ORA-01406: fetched column value was truncated" while trying to create an extract of his Tableau workbook.

He approached me, and I saw he was creating the extract from the .twbx version. So I thought to first convert the dashboard to .twb and then try creating the extract. The process started but failed again with the same error.

(Tableau error snapshot)

I then decided to check his data source and found it uses multiple database tables joined together along with Custom SQL. The Custom SQL looked like this:

SELECT distinct to_date('1-Feb-2014') as Start_date,
to_date('31-Jan-2015') as End_Date,
to_date(fs.TRADE_DATE) as TRADE_DATE, fs.SID
FROM VIP_HO.F_SUMM FS
where fs.TRADE_DATE between '1-Feb-2014' and '31-Jan-2015'

I suspected an issue with the Custom SQL, so I removed the entire Custom SQL and tried to create the extract. To my excitement, it worked fine. Now I knew with confidence that something was wrong in the Custom SQL piece.

I then went into the database and checked the datatypes of the columns in the table VIP_HO.F_SUMMARY against the Custom SQL to ensure all data types and conversions were correct. What was strange was that he used to_date(fs.TRADE_DATE) although the column in Oracle was already defined as DATE. I could not understand the reason; however, I changed his query by dropping the to_date function on the fs.TRADE_DATE column.

So the final query looked like this:

SELECT distinct to_date('1-Feb-2014') as Start_date,
to_date('31-Jan-2015') as End_Date,
fs.TRADE_DATE as TRADE_DATE,
fs.SID FROM VIP_HO.F_SUMMARY FS
where fs.TRADE_DATE between '1-Feb-2014' and '31-Jan-2015'

I saved the dashboard and tried to generate the extract again, and guess what: it worked!

Later I also searched for this issue and learned it can also arise if the database column buffer area is small compared to the data fetched by the columns of the SELECT statement.

The database manual describes the cause as follows:
Cause: In a host language program, a FETCH operation was forced to truncate a character string. The program buffer area for this column was not large enough to contain the entire string. The cursor return code from the fetch was +3.
Action: Increase the column buffer area to hold the largest column value or perform other appropriate processing.

Some experts say that even Tableau calculated fields holding floats with several decimal places can cause this trouble, and rounding the calculated variable fixes the issue.

From what I understand, such issues occur only with Custom SQL code in Tableau; the chance of an issue with regular database tables is minimal. At least I have not seen one.
Thanks for reading. Feel free to comment and share your view.

Problem

A parallel job with an Oracle Connector stage failed with the following errors while extracting records containing a NUMBER column without precision and scale.

Item #: 11
Event ID: 11
Timestamp: 2011-11-11 11:11:11
Type: Fatal
User Name: dsadm
Message Id: IIS-CONN-ORA-01024
Message: src_table,0: While reading data for column NUMCOL, the connector received Oracle error code ORA-1406.
(CC_OraStatement::logArrayReturnCodes, file CC_OraStatement.cpp, line 3565)

Cause

An inappropriate SQL type was specified for the problem NUMBER column in the Columns tab of the Oracle Connector stage.
For example, if you specify Decimal(38,10) for a NUMBER column and a value with 39 significant decimal digits is stored in the column, a truncated value will be fetched and the "ORA-01406: fetched column value was truncated" error will occur.

Resolving The Problem

To prevent the ORA-01406 error for NUMBER datatype without precision/scale,

  • Use the SQL type Double for the Oracle NUMBER column in the Columns tab of the Oracle Connector stage GUI. This can be achieved simply by loading a table definition imported by the Oracle Connector, not by the OCI9 plug-in, into the Oracle Connector stage GUI.
  • If the table definition from the OCI9 stage is preferred, set a precision/scale large enough to cover all the values in the problem NUMBER column. Or, cast the NUMBER to NUMBER(p,s) in a user-defined query and use Decimal(p,s) in the Oracle Connector stage GUI.

Alternatively, you can make the job ignore the ORA-1406 error and continue running by setting the "Fail for data truncation = No" property, which is under Usage -> Session in the Oracle Connector stage GUI.
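A sketch of the cast approach from the second bullet, reusing the column and table names from the error log above; the NUMBER(p,s) chosen must match the Decimal(p,s) set in the stage and be wide enough for every stored value:

```sql
SELECT CAST(numcol AS NUMBER(38,10)) AS numcol
  FROM src_table;
```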


Error ORA-1406 Received During Export After Upgrade to 10.2 (Doc ID 367831.1)

Last updated on MARCH 12, 2019

Applies to:

Symptoms

Errors received during export after upgrading to 10G R2:

Auditing is enabled. The initialization parameter AUDIT_TRAIL is set to a value other than NONE or FALSE.

Changes

Oracle version upgraded from 9.2.0.x to 10.2.0.1

Cause

To view full details, sign in with your My Oracle Support account.


Full Export Fails With Error ORA-1406 When Exporting 11g Database (Doc ID 553993.1)

Last updated on JANUARY 30, 2022

Applies to:

Symptoms

Full database export using traditional exp utility version 9.2.0.8 fails with errors EXP-8 ORA-1406 during exporting roles from a database version 11.1.0.x.

Example exp command:

The problem affects all the exp utilities with release lower than 11g.

Changes

Cause

To view full details, sign in with your My Oracle Support account.


The connector received oracle error code ora 1406

Let me bump an old thread: for some reason errors like these started appearing in the log.

This seems to point to a shortage of read buffer. I increased Buffer size to 50 K, but the errors keep coming; apparently that is not the right buffer ;)
Does anyone know where else this can be tuned?

P.S. Strictly speaking this topic belongs in the administration forum; perhaps the moderators will move it?

In other words, this error occurs when a row fetched from the database is too long and has to be truncated. The suggested action is to increase the buffer size beyond the maximum possible length of a row fetched from the database. For the example above I solved it very simply: in Control Panel -> Administrative Tools -> ODBC Data Sources I opened the settings of the data source in question and, on the Oracle tab, set the "Fetch buffer size" parameter to 128000 instead of the default 64000.

For other applications that use ODBC to connect to Oracle, the problem should be solved in a similar way; you just need to find where to set the new buffer size.


9 Handling Runtime Errors

An application program must anticipate runtime errors and attempt to recover from them. This chapter provides an in-depth discussion of error reporting and recovery. You learn how to handle errors and status changes using the SQLSTATE status variable, as well as the SQL Communications Area (SQLCA) and the WHENEVER directive. You also learn how to diagnose problems using the Oracle Communications Area (ORACA). This chapter contains the following topics:

The Need for Error Handling

A significant part of every application program must be devoted to error handling. The main reason for error handling is that it allows your program to continue operating in the presence of errors. Errors arise from design faults, coding mistakes, hardware failures, invalid user input, and many other sources.

You cannot anticipate all possible errors, but you can plan to handle certain kinds of errors that are meaningful to your program. For the Pro*C/C++ Precompiler, error handling means detecting and recovering from SQL statement execution errors. You can also prepare to handle warnings such as "value truncated" and status changes such as "end of data." It is especially important to check for error and warning conditions after every SQL data manipulation statement, because an INSERT, UPDATE, or DELETE statement might fail before processing all eligible rows in a table.

Error Handling Alternatives

There are several alternatives that you can use to detect errors and status changes in the application. This chapter describes these alternatives, however, no specific recommendations are made about what method you should use. The method is, after all, dictated by the design of the application program or tool that you are building.

Status Variables

You can declare a separate status variable, SQLSTATE or SQLCODE, examine its value after each executable SQL statement, and take appropriate action. The action might be calling an error-reporting function, then exiting the program if the error is unrecoverable. Or, you might be able to adjust data or control variables and retry the action.

See Also:

  • "The SQLSTATE Status Variable" and "Declaring SQLCODE" for complete information about these status variables.

The SQL Communications Area

Another alternative that you can use is to include the SQL Communications Area structure (sqlca) in your program. This structure contains components that are filled in at runtime after the SQL statement is processed by Oracle.

In this guide, the sqlca structure is commonly referred to using the acronym for SQL Communications Area (SQLCA). When this guide refers to a specific component in the C struct, the structure name (sqlca) is used.

The SQLCA is defined in the header file sqlca.h, which you include in your program using either of the following statements:

  • EXEC SQL INCLUDE SQLCA;
  • #include <sqlca.h>

Oracle updates the SQLCA after every executable SQL statement. (SQLCA values are unchanged after a declarative statement.) By checking Oracle return codes stored in the SQLCA, your program can determine the outcome of a SQL statement. This can be done in the following two ways:

  • Implicit checking with the WHENEVER directive
  • Explicit checking of SQLCA components

You can use WHENEVER directives, code explicit checks on SQLCA components, or do both.

The most frequently-used components in the SQLCA are the status variable (sqlca.sqlcode) and the text associated with the error code (sqlca.sqlerrm.sqlerrmc). Other components contain warning flags and miscellaneous information about the processing of the SQL statement.

SQLCODE (upper case) always refers to a separate status variable, not a component of the SQLCA. SQLCODE is declared as an integer. When referring to the component of the SQLCA named sqlcode, the fully-qualified name sqlca.sqlcode is always used.

When more information is needed about runtime errors than the SQLCA provides, you can use the ORACA. The ORACA is a C struct that handles Oracle communication. It contains cursor statistics, information about the current SQL statement, option settings, and system statistics.

See Also:

  • "Using the SQL Communications Area (SQLCA)" for complete information about the SQLCA structure.
  • "Using the Oracle Communications Area (ORACA)" for complete information about the ORACA.

The SQLSTATE Status Variable

The precompiler command line option MODE governs ANSI/ISO compliance. When MODE=ANSI, declaring the SQLCA data structure is optional. However, you must declare a separate status variable named SQLCODE. SQL92 specifies a similar status variable named SQLSTATE, which you can use with or without SQLCODE.

After executing a SQL statement, the Oracle Server returns a status code to the SQLSTATE variable currently in scope. The status code indicates whether the SQL statement executed successfully or raised an exception (error or warning condition). To promote interoperability (the ability of systems to exchange information easily), SQL92 predefines all the common SQL exceptions.

Unlike SQLCODE, which stores only error codes, SQLSTATE stores error and warning codes. Furthermore, the SQLSTATE reporting mechanism uses a standardized coding scheme. Thus, SQLSTATE is the preferred status variable. Under SQL92, SQLCODE is a "deprecated feature" retained only for compatibility with SQL89 and likely to be removed from future versions of the standard.

Declaring SQLSTATE

When MODE=ANSI, you must declare SQLSTATE or SQLCODE. Declaring the SQLCA is optional. When MODE=ORACLE, if you declare SQLSTATE, it is not used.

Unlike SQLCODE, which stores signed integers and can be declared outside the Declare Section, SQLSTATE stores 5-character null-terminated strings and must be declared inside the Declare Section. You declare SQLSTATE as

char SQLSTATE[6];

SQLSTATE must be declared with a dimension of exactly 6 characters.

SQLSTATE Values

SQLSTATE status codes consist of a 2-character class code immediately followed by a 3-character subclass code. Aside from class code 00 ("successful completion"), the class code denotes a category of exceptions. And, aside from subclass code 000 ("not applicable"), the subclass code denotes a specific exception within that category. For example, the SQLSTATE value '22012' consists of class code 22 ("data exception") and subclass code 012 ("division by zero").

Each of the five characters in a SQLSTATE value is a digit (0..9) or an uppercase Latin letter (A..Z). Class codes that begin with a digit in the range 0..4 or a letter in the range A..H are reserved for predefined conditions (those defined in SQL92). All other class codes are reserved for implementation-defined conditions. Within predefined classes, subclass codes that begin with a digit in the range 0..4 or a letter in the range A..H are reserved for predefined subconditions. All other subclass codes are reserved for implementation-defined subconditions. Figure 9-1 shows the coding scheme.

Figure 9-1 SQLSTATE Coding Scheme

Table 9-1 shows the classes predefined by SQL92.


ORA-01406 error, not support oracle function ? #62


binlaniua commented Mar 31, 2015

error

Error: ORA-01406: fetched column value was truncated


cjbj commented Mar 31, 2015

@binlaniua can you reproduce this with a simpler table and post the CREATE TABLE statement so we can see the data types?

wilkolazki commented Mar 31, 2015

Hi,
I have the same problem. Unfortunately I did not manage to reduce the data to locate the exact cause.
Anyway, my query has about 70 columns and I am trying to fetch about 100,000 rows. The same query runs without problems in PL/SQL Developer.

Tomorrow I will try to reduce the column list until I find the problematic column. Then I will try to reproduce it on artificial data.

Best regards,
SWilk

cjbj commented Mar 31, 2015

Let us know your NLS environment too.

wilkolazki commented Mar 31, 2015

wilkolazki commented Mar 31, 2015

I have narrowed the issue down to one column.

I will try to create a sample CREATE TABLE script tomorrow.

cjbj commented Apr 1, 2015

@wilkolazki how are you fetching 100K rows? Do you bump up maxRows that big?

wilkolazki commented Apr 1, 2015

Not exactly. There are about that many rows in the view I am fetching data from. At the moment I am experimenting and fetching the first "random" 100 rows (no ORDER BY clause). I was planning to bump maxRows to that size to fetch all of them once there were no more errors; if that proved impossible, I was going to sort the rows and fetch them all with paging. Either way, I have to get all of them somehow and save them as JSON.

wilkolazki commented Apr 1, 2015

I was able to reproduce this error on artificial data:

The query without the OFFSET clause gives the ORA-01406 error. If I uncomment the OFFSET clause and skip at least one row, the query works correctly. It does not matter whether I limit the rows from 1 to 100 or from 0 to 99: if the query tries to fetch all rows it throws the error; if the result set is reduced, it works.

There is one more weird thing. There is something wrong with charset encoding. The "working" query returns malformed: [ '002ZAŻÓŁĆ GĘLĽ JAŃ ZAŻ' ] instead of [ '002ZAŻÓŁĆ GĘŚLĄ JAŹŃ ZAŻÓŁĆ GĘŚLĄ J' ].

I hope this helps to reproduce the issue.
Sorry for being so chaotic in this post.

cjbj commented Apr 1, 2015

@wilkolazki To run with my AL32UTF8 database, I had to increase the column size so the INSERT worked. I didn’t see a 1406 in various tests. However, with NLS_LANG=POLISH_POLAND.AL32UTF8 and changing the CREATE to use VARCHAR(51), I didn’t get any rows returned by the query nor any error.

I know characterset interaction of Node/v8/node-oracledb is yet to be reviewed. I’ll mark this as a bug so it gets looked at when that review happens, if not sooner.

Can you post your OS, Oracle client library version, and database version? These are always useful.

If anyone else has a testcase, please post it.

wilkolazki commented Apr 2, 2015

@cjbj, here are the system details:

Client library version: Release 11.2.0.3.0 - Production

Anyway, please ignore my previous comment:

There is one more weird thing. There is something wrong with charset encoding. The "working" query returns malformed: [ '002ZAŻÓŁĆ GĘLĽ JAŃ ZAŻ' ] instead of [ '002ZAŻÓŁĆ GĘŚLĄ JAŹŃ ZAŻÓŁĆ GĘŚLĄ J' ].

I have tested the same query from PHP and the encoding was malformed in the same way. Something went wrong when I wrote the test insert. I have corrected it, so the test data now contains proper Polish characters; the character encoding conversion works properly.

Even after correcting the data, I still get an error when trying to fetch an unlimited result set. The error has something to do with the maxRows option.

So, the table contains 100 rows. If I fetch them all and set maxRows = 100, I get an error. If I set maxRows = 101, the query executes correctly.

