It so happens that the product we develop works with several relational databases. Right now those are MS SQL, Postgres, and Oracle. We have had deployments on many others, from MySQL to the (probably) late Firebird and such exotics as Sybase and DB2, but that is another story.
While MS SQL and Postgres are more or less familiar and predictable, Oracle has some surprise in store for us every time. A perceptive reader will immediately remark that "our hands grow from the wrong place" and we "simply don't know how to cook it", but if you, dear reader, would like to know how varchar (or rather varchar2) in the Almighty Oracle differs from its siblings, read on.
Like all modern systems, we store data in Unicode (currently UTF-8). Why can this matter for relational databases?
Well, for example, if your database mixes Unicode and non-Unicode data types, some drivers cannot handle that. jTDS, a JDBC driver for MS SQL Server, can work either in Unicode mode or in ANSI mode. So if you decide to "save space" and create a non-Unicode column (varchar/char), you get a unicode->ansi conversion on every insert into the table and will most likely achieve the opposite effect (slower inserts at the very least, and possibly slower searches too).
So, the story. Our application server validates the maximum allowed field length before inserting (a caveat: the check runs against our internal metadata, not the database schema), yet despite this we occasionally catch an error like ORA-12899: value too large for column under Oracle.
What gives? The scripts are generated in roughly the same way for all databases, but the problem occurs only sometimes and only under Oracle.
I won't keep you in suspense. It turned out we had read the specification of the varchar2 type, in which the data is stored, inattentively.
Let's change the column size, say, like this:
alter table address modify street varchar2(150);
What do you think 150 is: a length in characters (as in other databases, generally)? Hint: no. Most likely it is in bytes.
In characters it would be:
alter table address modify street varchar2(150 char);
That is, without specifying the char or byte qualifier we end up in the gray zone of the database's default settings. And in every database we could reach (including production databases, and not only ours), the default is bytes.
Now recall that in UTF-8 a single character can take from one to four bytes (usually 1 byte for ANSI, 2 for Russian and a few other lucky scripts, and up to 4 for ideographs).
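Those byte widths are easy to verify; a quick Python sketch (illustrative, not from the original post):

```python
# UTF-8 width varies per character, which is why a varchar2(n)
# column measured in bytes may hold far fewer than n characters.
for ch in ("a", "ю", "€", "好", "😀"):
    print(ch, len(ch.encode("utf-8")))
# a 1   (ASCII)
# ю 2   (Cyrillic)
# € 3
# 好 3  (CJK ideograph)
# 😀 4  (outside the Basic Multilingual Plane)
```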
What kind of wild default is that for Unicode databases!? And yet that is exactly what is enabled "out of the box". Yes, yes, I understand: legacy, backward compatibility from the days when Unicode was not even on the drawing board, pride that a backup from '86 can still be restored with the latest imp, all of that.
And why did the error occur only sometimes and only for some columns? Because the tool we use to generate the database was originally smart enough to spell out the char suffix explicitly for every column right in create table.
Takeaways:
It wouldn't hurt to check now and then whether the enemy has crept in, or, if you are brave enough, to change this default setting.
A script to determine the default value:
SELECT value FROM NLS_DATABASE_PARAMETERS WHERE parameter = 'NLS_LENGTH_SEMANTICS';
A script to check that everything is "OK" in your database:
SELECT TABLE_NAME, COLUMN_NAME, DATA_LENGTH, CHAR_USED
FROM USER_TAB_COLUMNS
WHERE DATA_TYPE = 'VARCHAR2' AND CHAR_USED = 'B'
ORDER BY TABLE_NAME, COLUMN_NAME;
P.S. To be clear, it is perfectly fine to have byte sizes where you expect them (for example, where the data is 100% ANSI characters), but for Unicode text… I'll go cry about it some more…
P.P.S. A regexp you can try for finding "gray zone" scripts: varchar2\(\s*\d+\s*\)
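For illustration, here is how that pattern could be applied to a DDL script in Python (the GRAY_ZONE name and the sample DDL are mine, not from the post):

```python
import re

# Flag varchar2 declarations without an explicit CHAR/BYTE qualifier,
# i.e. columns that silently fall back to the database default semantics.
GRAY_ZONE = re.compile(r"varchar2\(\s*\d+\s*\)", re.IGNORECASE)

ddl = """
create table address (
    street varchar2(150),          -- gray zone: default semantics
    city   varchar2(100 char),     -- explicit: safe for Unicode
    zip    varchar2(10 byte)       -- explicit: intentional bytes
);
"""
print(GRAY_ZONE.findall(ddl))  # ['varchar2(150)']
```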
P.P.P.S. Searching for an answer to this question on StackOverflow.
P.P.P.P.S. And here is what Oracle thinks about changing the NLS_LENGTH_SEMANTICS parameter to something more sensible: "Oracle strongly recommends that you do NOT set the NLS_LENGTH_SEMANTICS parameter to CHAR in the instance or server parameter file. This may cause many existing installation scripts to unexpectedly create columns with character length semantics, resulting in runtime errors, including buffer overflows." https://docs.oracle.com/cd/E2469301/server.11203/e24448/initparams149.htm
ORA-12899 means that the value being inserted or updated is larger than the column's allowable length as defined in the table.
What can you do about larger values? Make the column wider? Yes, that is the correct answer. The formal solution to ORA-12899 is to make the column wide enough to accommodate potentially larger values.
In this post, I’ll talk about a special case, it’s about NLS_LENGTH_SEMANTICS. Let’s see how I found ORA-12899 when I migrated some tables from one database (UTF8) to another (AL32UTF8).
SQL> INSERT INTO "ERPAPP"."FIN_NORTH_HIST"@ORA12C SELECT * FROM "ERPAPP"."FIN_NORTH_HIST";
INSERT INTO "ERPAPP"."FIN_NORTH_HIST"@ORA12C SELECT * FROM "ERPAPP"."FIN_NORTH_HIST"
*
ERROR at line 1:
ORA-12899: value too large for column "ERPAPP"."FIN_NORTH_HIST"."MAT"
(actual: 3, maximum: 1)
ORA-02063: preceding line from ORA12C
Check Column Definition
Let’s see the definition of said column in the target database.
SQL> column data_type format a10;
SQL> select data_type, char_length, char_used from dba_tab_cols where owner = 'ERPAPP' and table_name = 'FIN_NORTH_HIST' and column_name = 'MAT';
DATA_TYPE CHAR_LENGTH C
---------- ----------- -
CHAR 1 B
Here, the value B in the CHAR_USED column of DBA_TAB_COLS means BYTE. That is to say, the length semantics of this column is BYTE. By the way, you'd better know the difference between BYTE and CHAR in NLS_LENGTH_SEMANTICS before reading further.
Solution to ORA-12899
Since BYTE semantics with the same declared length is not safe for data migration, I changed the semantics of this column from BYTE to CHAR, as databases with multi-byte languages require. In UTF-8, each character takes 1 to 4 bytes of storage.
SQL> alter table "ERPAPP"."FIN_NORTH_HIST" modify(MAT char(1 char));
Table altered.
Or you can just increase the column width and stay with BYTE.
SQL> alter table "ERPAPP"."FIN_NORTH_HIST" modify(MAT char(4 byte));
Either way should work.
Check Column Definition
Let's check the definition of this column again.
SQL> select data_type, char_length, char_used from dba_tab_cols where owner = 'ERPAPP' and table_name = 'FIN_NORTH_HIST' and column_name = 'MAT';
DATA_TYPE CHAR_LENGTH C
---------- ----------- -
CHAR 1 C
OK! Now the semantics of this column is CHAR, let’s do it again.
SQL> INSERT INTO "ERPAPP"."FIN_NORTH_HIST"@ORA12C SELECT * FROM "ERPAPP"."FIN_NORTH_HIST";
1468224039 rows created.
SQL> COMMIT;
Commit complete.
Good! No ORA-12899.
There's still a question in my mind: why would this happen? Since AL32UTF8 is in fact a superset of UTF8, there should have been no column-width problem during the data migration. I suspect the column data was encoded in a non-UTF8-based character set in the first place.
You may want to learn about a similar error message ORA-01438 caused by larger values being inserted in NUMBER columns.
The message displayed with error ORA-12899 is self-explanatory.
This error comes up when you try to insert into a table column that is not big enough to hold the data being inserted. Suppose you define a column with a width of 10 characters (VARCHAR2(10)); if you later try to insert a value longer than 10 characters, ORA-12899 is returned. To solve this problem you can either reduce the width of the data being inserted or increase the column width. Note that with a multi-byte database character set, a VARCHAR2 column with a width of 10 may not store exactly 10 characters. Following is one example to produce ORA-12899.
SQL> drop table test;

Table dropped.

SQL> create table test(name varchar2(4));

Table created.

SQL> insert into test values ('aaaaaaaaa');
insert into test values ('aaaaaaaaa')
*
ERROR at line 1:
ORA-12899: value too large for column "SYS"."TEST"."NAME" (actual: 9, maximum: 4)

SQL> insert into test values ('aaaa');

1 row created.
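The length check Oracle performs here can be sketched in Python for both semantics (fits_column is a hypothetical helper of mine, not an Oracle API):

```python
def fits_column(value: str, max_len: int, semantics: str = "BYTE",
                encoding: str = "utf-8") -> bool:
    """Mimic Oracle's VARCHAR2 length check under BYTE or CHAR semantics."""
    if semantics == "CHAR":
        return len(value) <= max_len          # count characters
    return len(value.encode(encoding)) <= max_len  # count encoded bytes

print(fits_column("aaaa", 4))                    # True: 4 bytes
print(fits_column("aaaaaaaaa", 4))               # False: 9 bytes, as above
print(fits_column("тест", 4))                    # False: 8 bytes under BYTE
print(fits_column("тест", 4, semantics="CHAR"))  # True: 4 characters
```

The last two calls show why ASCII-only tests never catch the problem: the same 4-character value passes or fails depending on the semantics.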
Problem
ORA-12899 Value of STATE too large for column while creating order, value too large for column
Symptom
When creating an order, if one adds special characters to a CHAR field whose length equals the column size, the following error appears. The same is true when inserting special-character values into any column of type CHAR in the database whose size equals the field length.
Error Message
ErrorDescription="ORA-12899: value too large for column "YANTRA_ADM"."YFS_PERSON_INFO"."STATE" (actual: 39, maximum: 35)"
Resolving The Problem
Check the NLS_LENGTH_SEMANTICS parameter in the NLS_INSTANCE_PARAMETERS table. Changing the value to CHAR will resolve the issue: issue alter session set nls_length_semantics = CHAR prior to running any create table scripts.
Setting this attribute ensures that field sizes are not limited by the number of bytes the data type can store. For example, VARCHAR2(40) would then be able to store 40 Japanese characters, instead of only the roughly 40/3 ≈ 13 characters that fit into 40 bytes in the UTF-8 character set.
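The 3-bytes-per-character arithmetic is quick to check in Python, assuming UTF-8 and a CJK string (illustrative only):

```python
# A 40-character CJK string needs 3 bytes per character in UTF-8,
# so it overflows VARCHAR2(40 BYTE) but fits VARCHAR2(40 CHAR).
name = "東" * 40                     # 40 ideographs
print(len(name))                     # 40 characters
print(len(name.encode("utf-8")))     # 120 bytes
```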
For an existing table:

(1) Take a backup of the table if need be (one can use the exp utility).
(2) Drop the table.
(3) Change the NLS_LENGTH_SEMANTICS parameter to CHAR.
(4) Recreate the table.

To prove that a change in the length semantics parameter will help, try the following steps in a local DB schema:

(1) Make sure that NLS_LENGTH_SEMANTICS is set to BYTE.
(2) Create a dummy table with one column as CHAR(30).
(3) Try inserting data with special characters that is 30 characters in length.
(4) This insertion should fail.
(5) Issue the alter system command to set the NLS_LENGTH_SEMANTICS parameter to CHAR.
(6) Drop the table and recreate it.
(7) Try the same insertion of data as in step 3.

This should be proof enough that one can resolve the problem by dropping the table and recreating it after modifying the value of the NLS_LENGTH_SEMANTICS parameter.
Often times, as our companies grow and evolve in response to an expansion in the form of client-base, staff, profit or markets, the data associated with this growth will also change. Data systems like Oracle have the innate ability to remain fairly flexible in regards to working with this variance in information. Still, even the most versatile database systems require maintenance and refining in the face of increased data traffic. This work is essential to accommodating any constraints on memory or necessary redefinitions of parameters. The ORA-12899 error is representative of an instance in which either an uptick in data or a user error is forcing Oracle to stall during its requested action.
The Problem
The ORA-12899 is an Oracle error that occurs when the value entered into the column string is too large. What this means is that an attempt was made by the user to update or insert a column with a value that is too wide for the destination column. The name of a particular column will be given and the actual width of the value, as well as the maximum width permitted for the column, will be associated with that. As mentioned, the value can be given in the form of characters. In the event that the width is reported in characters, this will mean that character length semantics are operating for the column. Otherwise the width will be reported in bytes.
Essentially, this error results from trying to push through a value, or set of values, that exceed the specified maximum width of the column. So, how does a user correct this type of error?
The Solution
To begin, open up the OERR utility. You will need the full ORA-12899 message to get proper feedback on the error; it provides more information and allows further investigation. Typically, the error can be traced to one of three places. First, check the SQL statements that have been generated. Second, check the source and destination column data types to discover whether they are compatible with the current formats. Third, look at the width of the destination column where the value is being assigned, to make sure it is large enough to accommodate the maximum value you anticipate assigning.
Let us now turn to an example that corrects ORA-12899. Suppose the user has created the following table:
CREATE TABLE Clients(
ClientID varchar2(9) PRIMARY KEY,
Client_Contact varchar2(40) NOT NULL,
Address varchar(20) NOT NULL,
Zip varchar2(5) NOT NULL,
Client_Number varchar2(11) NOT NULL);
The user then attempts to issue an INSERT VALUES statement that looks something like this:
INSERT INTO Clients VALUES(
727546345,
'Roger Holcomb',
'—Benjamin Road Syracuse',
'-----', 05307623754)
The user might try to run the statement from here, but will receive the following error message:
Error starting at line 7 in command: INSERT INTO Clients VALUES(727546345, 'Ricky Galorey', '18 Benjamin Road Syracuse', '13208', 05307623754) Error report: SQL Error: ORA-12899: value too large for column "ORGANIZATIONS"."MARKET"."ADDRESS" (actual: 25, maximum: 20) 12899. 00000 - "value too large for column %s (actual: %s, maximum: %s)"
This error statement indicates that the variable ‘Address’ cannot hold more than twenty characters as that would exceed the width of the column parameter. When we look back at the address value (’18 Benjamin Road Syracuse’) we can see that the total number of characters (25) exceeds the maximum number allowed for the width of the column. To correct this, the user can change the VARCHAR2 for address to an amount that can accommodate the typical address length that their company would input.
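You can verify the reported width directly; counting the characters in Python gives the same 25 the error message reports:

```python
# The error reports actual: 25 against a varchar2(20) Address column.
address = "18 Benjamin Road Syracuse"
print(len(address))  # 25 -> 5 characters over the column width
```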
Looking forward
The positive aspect of the ORA-12899 error is that once the user knows how to diagnose the problem, developing a solution becomes a very routine process. If you find that you still have difficulties integrating solutions for an ORA-12899, or perhaps you want to learn more about developing tables for your database, contact a licensed Oracle consultant for more information.
November 5, 2020
When you start an import operation, you can get "ORA-12899: value too large for column" and "KUP-11007: conversion error loading table" errors.
ORA-12899: Value too Large for Column
KUP-11007: conversion error loading table
Error details are as follows.
KUP-11007: conversion error loading table "MSDBA"."TEST_TABLE"
ORA-12899: value too large for column ACCOUNT_ID (actual: 31, maximum: 29)
KUP-11009: data for row: ACCOUNT_ID : 0X'3238363032353338C7DD465445484132303130303530313135'
KUP-11007: conversion error loading table "MSDBA"."TEST_TABLE"
ORA-12899: value too large for column ACCOUNT_ID (actual: 31, maximum: 29)
KUP-11009: data for row: ACCOUNT_ID : 0X'3238363032353336C7454CDD4B434132303130303531323135'
KUP-11007: conversion error loading table "MSDBA"."TEST_TABLE"
ORA-12899: value too large for column ACCOUNT_ID (actual: 30, maximum: 29)
KUP-11009: data for row: ACCOUNT_ID : 0X'3238363032353334534F46554FD04C32303039313230373136'
KUP-11009: data for row
This "ORA-12899: value too large for column, KUP-11007: conversion error loading table" error is related to a character set mismatch between the source ( WE8ISO8859P9 ) and the target ( AL32UTF8 ).
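The growth is visible if you decode a few bytes of the dumped row yourself; a Python sketch, assuming the KUP-11009 hex dump above is ISO-8859-9 data:

```python
# In WE8ISO8859P9 (ISO-8859-9, Turkish) every character is one byte;
# re-encoded as AL32UTF8, the Turkish letters grow to two bytes each,
# which is what overflows the BYTE-sized column during import.
raw = bytes.fromhex("C7DD465445")    # a fragment of the first failing row
text = raw.decode("iso8859-9")       # 'ÇİFTE'
print(len(raw))                      # 5 bytes in the source character set
print(len(text.encode("utf-8")))     # 7 bytes in the target (AL32UTF8)
```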
Find Character Set of Oracle
You can check the character set of an Oracle database with the following script.
SQL> select * from nls_database_parameters where parameter='NLS_CHARACTERSET';

PARAMETER                      VALUE
------------------------------ ----------------------------------------
NLS_CHARACTERSET               WE8ISO8859P9
There are two solutions to these errors.
The first solution is very easy: increase the size of the related column as follows.
Alter Table Modify Column in Oracle
The column size was 30 bytes in the target; I will increase it to 36 as follows.
alter table MSDBA.TEST_TABLE modify ACCOUNT_ID VARCHAR2(36 Byte);
TABLE_EXISTS_ACTION=TRUNCATE
Then start the import again with TABLE_EXISTS_ACTION=TRUNCATE; if you start it with the replace or skip option, you will get the same error again. So start impdp with the TABLE_EXISTS_ACTION=TRUNCATE option as follows.
impdp "/ as sysdba" directory=VIN dumpfile=Part1_Prime%U.dmp TABLE_EXISTS_ACTION=truncate REMAP_TABLESPACE=TBS_DATA:TBS_MSD logfile=Part1_Prime_imp.log parallel=96 remap_schema=ACCOUNT:MSDBA TABLES=MSDBA.TEST_TABLE
The second solution is more difficult, because it requires changing the character set of the target. That is mostly not possible in production databases, but if you are working on a test, development, or bugfix database, you may change it as follows to solve this error.

sql> shutdown;
sql> startup restrict;
sql> ALTER DATABASE CHARACTER SET INTERNAL_USE WE8ISO8859P9;
sql> shutdown;
sql> startup;
You can read the following article, if you want to learn more details about Export – import.
Oracle Data Pump – Export Import ( Expdp Impdp ) Tutorial with Examples-4