Error: invalid byte sequence for encoding "UTF8": 0x00

I get the following error when inserting data from mysql into postgres.

Do I have to manually remove all null characters from my input data?
Is there a way to get postgres to do this for me?

ERROR: invalid byte sequence for encoding "UTF8": 0x00

asked Aug 28, 2009 at 15:13 by ScArcher2

PostgreSQL doesn’t support storing NULL (0x00) characters in text fields (this is obviously different from the database NULL value, which is fully supported).

Source: http://www.postgresql.org/docs/9.1/static/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS-UESCAPE

If you need to store the NULL character, you must use a bytea field — which should store anything you want, but won’t support text operations on it.

Given that PostgreSQL doesn’t support it in text values, there’s no good way to get it to remove it. You could import your data into bytea and later convert it to text using a special function (in perl or something, maybe?), but it’s likely going to be easier to do that in preprocessing before you load it.
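To make the distinction concrete, here is a minimal psql sketch (the table and column names are invented for illustration): a zero byte is rejected in a text literal, but the same bytes go into bytea without complaint.

-- a zero byte in a text literal is rejected outright:
-- SELECT E'a\x00b';   -- ERROR: invalid byte sequence for encoding "UTF8": 0x00

-- bytea stores the same bytes without complaint
CREATE TABLE raw_import (payload bytea);
INSERT INTO raw_import (payload) VALUES ('\x610062'::bytea);  -- 'a', 0x00, 'b'
SELECT octet_length(payload) FROM raw_import;                 -- 3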

answered Aug 28, 2009 at 18:06 by Magnus Hagander


If you are using Java, you could just replace the \u0000 characters before the insert, like so:

myValue.replaceAll("\u0000", "")

The solution was provided and explained by Csaba in the following post:

https://www.postgresql.org/message-id/1171970019.3101.328.camel%40coppola.muc.ecircle.de

Respectively:

in Java you can actually have a "0x0" character in your string, and
that's valid Unicode. So that's translated to the character 0x0 in
UTF8, which in turn is not accepted because the server uses null
terminated strings... so the only way is to make sure your strings
don't contain the character '\u0000'.

answered Aug 22, 2017 at 6:24 by David Dal Busco


Just regex out null bytes:

s/\x00//g;

answered Jan 8, 2013 at 16:12 by hicham


Only this regex worked for me:

sed 's/\0//g'

So as you get your data, do this: $ get_data | sed 's/\0//g', which will output your data without 0x00.

answered Oct 5, 2018 at 15:21 by techkuz

You can first insert the data into a bytea ("blob") field and then copy it to a text field with the following function:

CREATE OR REPLACE FUNCTION blob2text() RETURNS void AS $$
DECLARE
    ref record;
    i integer;
BEGIN
    -- "my_table", "blob_field" and "text_field" are placeholders for your own names
    FOR ref IN SELECT id, blob_field FROM my_table LOOP

        -- find 0x00 and replace it with a space (ASCII 32)
        i := position(E'\\000'::bytea in ref.blob_field);
        WHILE i > 0 LOOP
            ref.blob_field := set_byte(ref.blob_field, i - 1, 32);
            i := position(E'\\000'::bytea in ref.blob_field);
        END LOOP;

        UPDATE my_table SET text_field = encode(ref.blob_field, 'escape') WHERE id = ref.id;
    END LOOP;
END; $$ LANGUAGE plpgsql;

SELECT blob2text();

answered Oct 13, 2009 at 6:15 by Raido

If you need to store null characters in text fields and don’t want to change the column type from text, then you can follow my solution too:

Before insert:

myValue = myValue.replaceAll("\u0000", "SomeVerySpecialText")

After select:

myValue = myValue.replaceAll("SomeVerySpecialText", "\u0000")

I used "null" as my SomeVerySpecialText, since I am sure there is no "null" string anywhere in my values at all.

answered Nov 26, 2018 at 10:04 by Ismail Yavuz

This kind of error can also happen when using COPY with an escaped string containing NULL values (\x00), such as:

"H\x00\x00\x00tj\xA8\x9E#D\x98+\xCA\xF0\xA7\xBBl\xC5\x19\xD7\x8D\xB6\x18\xEDJ\x1En"

If you use COPY without specifying the format 'CSV', postgres by default will assume the format 'text'. This has a different interaction with backslashes; see text format.

If you’re using COPY or a file_fdw, make sure to specify format 'CSV' to avoid this kind of error, as in the sketch below.
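For illustration, a minimal hedged sketch of both cases (the file path, server, table, and column names are all invented):

COPY my_table FROM '/path/to/data.csv' WITH (FORMAT csv);

-- or, for a foreign table via file_fdw:
CREATE EXTENSION IF NOT EXISTS file_fdw;
CREATE SERVER file_server FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE my_foreign_table (col1 text, col2 text)
    SERVER file_server
    OPTIONS (filename '/path/to/data.csv', format 'csv');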

answered Aug 6, 2019 at 2:36 by Steve Chavez

I’ve spent the last 8 hours trying to import the output of 'mysqldump --compatible=postgresql' into PostgreSQL 8.4.9, and I’ve read at least 20 different threads here and elsewhere about this specific problem, but found no real usable answer that works.

MySQL 5.1.52 data dumped:

mysqldump -u root -p --compatible=postgresql --no-create-info --no-create-db --default-character-set=utf8 --skip-lock-tables rt3 > foo

PostgreSQL 8.4.9 server as destination

Loading the data with 'psql -U rt_user -f foo' is reporting (many of these; here’s one example):

psql:foo:29: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".

According to the following, there are no NULL (0x00) characters in the input file.

database-dumps:rcf-temp1# sed 's/\x0/ /g' < foo > nonulls
database-dumps:rcf-temp1# sum foo nonulls
04730 2545610 foo
04730 2545610 nonulls
database-dumps:rcf-temp1# rm nonulls

Likewise, another check with Perl shows no NULLs:

database-dumps:rcf-temp1# perl -ne '/\x00/ and print;' foo
database-dumps:rcf-temp1#

As the "HINT" in the error mentions, I have tried every possible way to set 'client_encoding' to 'UTF8'. I succeed, but it has no effect toward solving my problem.

database-dumps:rcf-temp1# psql -U rt_user --variable=client_encoding=utf-8 -c "SHOW client_encoding;" rt3
 client_encoding
-----------------
 UTF8
(1 row)

database-dumps:rcf-temp1#

Perfect, yet:

database-dumps:rcf-temp1# psql -U rt_user -f foo --variable=client_encoding=utf-8 rt3
...
psql:foo:29: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
...

Barring the "According to Hoyle" correct answer, which would be fantastic to hear, and knowing that I really don’t care about preserving any non-ASCII characters for this seldom-referenced data, what suggestions do you have?

Update: I get the same error with an ASCII-only version of the same dump file at import time. Truly mind-boggling:

database-dumps:rcf-temp1# # convert any non-ASCII character to a space
database-dumps:rcf-temp1# perl -i.bk -pe 's/[^[:ascii:]]/ /g;' mysql5-dump.sql
database-dumps:rcf-temp1# sum mysql5-dump.sql mysql5-dump.sql.bk
41053 2545611 mysql5-dump.sql
50145 2545611 mysql5-dump.sql.bk
database-dumps:rcf-temp1# cmp mysql5-dump.sql mysql5-dump.sql.bk
mysql5-dump.sql mysql5-dump.sql.bk differ: byte 1304850, line 30
database-dumps:rcf-temp1# # GOOD!
database-dumps:rcf-temp1# psql -U postgres -f mysql5-dump.sql --variable=client_encoding=utf-8 rt3
...
INSERT 0 416
psql:mysql5-dump.sql:30: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encod.
INSERT 0 455
INSERT 0 424
INSERT 0 483
INSERT 0 447
INSERT 0 503
psql:mysql5-dump.sql:36: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encod.
INSERT 0 502
INSERT 0 507
INSERT 0 318
INSERT 0 284
psql:mysql5-dump.sql:41: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encod.
INSERT 0 382
INSERT 0 419
INSERT 0 247
psql:mysql5-dump.sql:45: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encod.
INSERT 0 267
INSERT 0 348
^C

One of the tables in question is defined as:

                                        Table "public.attachments"
     Column      |            Type             |                        Modifie
-----------------+-----------------------------+--------------------------------
 id              | integer                     | not null default nextval('atta)
 transactionid   | integer                     | not null
 parent          | integer                     | not null default 0
 messageid       | character varying(160)      |
 subject         | character varying(255)      |
 filename        | character varying(255)      |
 contenttype     | character varying(80)       |
 contentencoding | character varying(80)       |
 content         | text                        |
 headers         | text                        |
 creator         | integer                     | not null default 0
 created         | timestamp without time zone |
Indexes:
    "attachments_pkey" PRIMARY KEY, btree (id)
    "attachments1" btree (parent)
    "attachments2" btree (transactionid)
    "attachments3" btree (parent, transactionid)

I do not have the liberty to change the type for any part of the DB schema. Doing so would likely break future upgrades of the software, etc.

The likely problem column is ‘content’ of type ‘text’ (perhaps others in other tables as well). As I already know from previous research, PostgreSQL will not allow NULL in ‘text’ values. However, please see above where both sed and Perl show no NULL characters, and then further down where I strip all non-ASCII characters from the entire dump file but it still barfs.

Symptoms

  • When migrating Stash’s datastore to a PostgreSQL database, the following error is shown in the administration web interface:

    Stash could not be migrated to the new database. PostgreSQL does not allow null characters (U+0000) in text columns. See the following knowledge base to solve the problem: https://confluence.atlassian.com/x/OwOCKQ
  • When restoring a backup to a Stash instance that uses a PostgreSQL database, the restore fails and the following error appears in the atlassian-stash.log:

    Caused by: org.postgresql.util.PSQLException: ERROR: invalid byte sequence for encoding "UTF8": 0x00
        at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2198) ~[postgresql-9.3-1102.jdbc41.jar:na]
        at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1927) ~[postgresql-9.3-1102.jdbc41.jar:na]
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255) ~[postgresql-9.3-1102.jdbc41.jar:na]
        at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:561) ~[postgresql-9.3-1102.jdbc41.jar:na]
        at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:419) ~[postgresql-9.3-1102.jdbc41.jar:na]
        at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:365) ~[postgresql-9.3-1102.jdbc41.jar:na]
        at com.jolbox.bonecp.PreparedStatementHandle.executeUpdate(PreparedStatementHandle.java:203) ~[bonecp-0.7.1.RELEASE.jar:0.7.1.RELEASE]
        at com.atlassian.stash.internal.backup.liquibase.DefaultLiquibaseDao.insert(DefaultLiquibaseDao.java:272) ~[stash-dao-impl-3.6.0-SNAPSHOT.jar:na]
        ... 39 common frames omitted

Cause

This problem occurs because PostgreSQL does not allow null characters (U+0000) in its text data types. As a result, when migrating or restoring a backup to a PostgreSQL database, the operation can fail with the error above. This problem is restricted to PostgreSQL.  Other databases supported by Stash are not affected by null characters.

Resolution

Follow the steps below to sanitize the source database and then re-run the migration or restore.

  1. Stop Stash.
  2. Find and remove the null characters (U+0000) in the source database text columns. Most likely candidates are comments (sta_comment table) or plugin settings (plugin_setting table).
    To remove the null characters on those tables, run the following SQL queries on the source database.
      1. If the source database is MySQL:

        SELECT * FROM sta_comment WHERE comment_text like concat('%', 0x00, '%');
        UPDATE sta_comment SET comment_text = replace(comment_text, 0x00, '') WHERE comment_text like concat('%', 0x00, '%');
        SELECT * FROM plugin_setting WHERE key_value like concat('%', 0x00, '%');
        UPDATE plugin_setting SET key_value = replace(key_value, 0x00, '') WHERE key_value like concat('%', 0x00, '%');
      2. If the source database is Oracle:

        SELECT * FROM sta_comment WHERE instr(comment_text, unistr('\0000')) > 0;
        UPDATE sta_comment SET comment_text = replace(comment_text, unistr('\0000')) WHERE instr(comment_text, unistr('\0000')) > 0;
        SELECT * FROM plugin_setting WHERE instr(key_value, unistr('\0000')) > 0;
        UPDATE plugin_setting SET key_value = replace(key_value, unistr('\0000')) WHERE instr(key_value, unistr('\0000')) > 0;
      3. If the source database is Microsoft SQL Server, execute the following T-SQL code (note that a custom function is used because the built-in REPLACE function cannot replace null characters):

        IF OBJECT_ID (N'dbo.removeNullCharacters', N'FN') IS NOT NULL
            DROP FUNCTION removeNullCharacters;
        GO
        CREATE FUNCTION dbo.removeNullCharacters(@s nvarchar(max))
        RETURNS nvarchar(max)
        AS
        BEGIN
                DECLARE @c nchar(1)
                DECLARE @p int
                DECLARE @ret nvarchar(max)
                IF @s is NULL
                        SET @ret = @s
                ELSE
                BEGIN
                        SET @p = 0
                        SET @ret = ''
                        WHILE (@p <= LEN(@s))
                        BEGIN
                                SET @c = SUBSTRING(@s, @p, 1)
                                IF @c <> nchar(0)
                                BEGIN
                                        SET @ret = @ret + @c
                                END
                                SET @p = @p + 1
                        END
                END
                RETURN @ret
        END;
        SELECT * FROM sta_comment WHERE cast(comment_text AS varchar) like '%' + char(0) +'%';
        UPDATE sta_comment SET comment_text = dbo.removeNullCharacters(comment_text) WHERE cast(comment_text AS varchar) like '%' + char(0) +'%';
        SELECT * FROM plugin_setting WHERE cast(key_value AS varchar) like '%' + char(0) +'%';
        UPDATE plugin_setting SET key_value = dbo.removeNullCharacters(key_value) WHERE cast(key_value AS varchar) like '%' + char(0) +'%';
      4. If the source database is HSQLDB, either:

        • Migrate the database to an intermediate external database (such as MySQL), or

        • Find the problematic rows using the following queries and manually edit them to remove the null characters (U+0000);

          SELECT * FROM sta_comment WHERE comment_text like U&'%\0000%';
          SELECT * FROM plugin_setting WHERE key_value like U&'%\0000%';

          Note: Before accessing Stash’s HSQLDB (internal database) with an external tool, ensure Stash is not running.
          Note: Stash’s HSQLDB database (its internal database) can be opened by any database management tool that supports the JDBC protocol (such as DbVisualizer), using the following settings: 

          • Database driver: HSQLDB Server
          • Database driver location: STASH_INSTALL/atlassian-stash/WEB-INF/lib/hsqldb-2.2.4.jar (where STASH_INSTALL is the path to the Stash installation directory)

          • Database user: SA
          • JDBC URL: jdbc:hsqldb:file:STASH_HOME/shared/data/db;shutdown=true;hsqldb.tx=mvlocks (where STASH_HOME is the path to the Stash home directory)

  3. Re-create the PostgreSQL database (using the settings highlighted here) used in the original migration if it is not empty (for safety reasons, Stash blocks any migration to a non-empty database).
  4. Start Stash.
  5. Initiate the migration or the restoration of the backup once more.
  6. If the migration or restoration still fails, use the following instructions to diagnose the cause:
    1. Turn on PostgreSQL statement logging (see the sketch after this list).
    2. Recreate the target PostgreSQL database to ensure it is empty.
    3. Restart the migration or the backup restoration to trigger the error again.
    4. Consult the PostgreSQL statement log to determine which SQL INSERT failed. This will indicate which table still contains the null characters which have to be sanitized as described above.
    5. Restart from step 1 until the migration or restore succeeds.
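For the statement-logging step of these diagnostics, a minimal sketch of enabling it from SQL (works on PostgreSQL 9.4+; on older versions, set log_statement = 'all' in postgresql.conf and reload instead):

-- log every statement the server executes, then reload the configuration
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();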


Describe what is not working as expected.
While inserting data from a file, we are getting this error message.

Description of the Issue
We have many columns in a table, and there are many such tables; we cannot make the change on each and every table.

Is there any other way we can do it? When we do it in MSSQL it automatically removes the space, but in Postgres we have this issue, I think.

Workaround
We have to replace the space in the column.

Exception message:
Stack trace:
at Npgsql.NpgsqlConnector.<>c__DisplayClass161_0.<g__ReadMessageLong|0>d.MoveNext()
— End of stack trace from previous location where exception was thrown —
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Npgsql.NpgsqlConnector.<>c__DisplayClass161_0.<g__ReadMessageLong|0>d.MoveNext()
— End of stack trace from previous location where exception was thrown —
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Threading.Tasks.ValueTask`1.get_Result()
at Npgsql.NpgsqlDataReader.d__46.MoveNext()
— End of stack trace from previous location where exception was thrown —
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Npgsql.NpgsqlDataReader.NextResult()
at Npgsql.NpgsqlCommand.d__100.MoveNext()
— End of stack trace from previous location where exception was thrown —
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Npgsql.NpgsqlCommand.ExecuteDbDataReader(CommandBehavior behavior)
at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)

C# .NET
Npgsql version: 4.0.4
PostgreSQL version: 11.1
Operating system: Windows Server 2012 R2


In this article, we will see how you can fix the error 'invalid byte sequence for encoding UTF8' while restoring a PostgreSQL database. At work, I got a task to move DBs with ASCII encoding to UTF8 encoding. Let me first confess that the ASCII DBs were not created intentionally; someone created them by accident! Having an ASCII-encoded DB is very dangerous, and it should be moved to UTF8 encoding as soon as possible. So the initial plan was to create an archive dump of each DB with pg_dump, create a new DB with UTF8 encoding, and restore the dump to the new DB using pg_restore. The plan worked for most of the DBs, but failed for one DB with the error below.

DETAIL: Proceeding with relation creation anyway.
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 35091; 0 2527787452 TABLE DATA my_table release
pg_restore: [archiver (db)] COPY failed for table "my_table": ERROR: invalid byte sequence for encoding "UTF8": 0xa5
CONTEXT: COPY my_table, line 41653
WARNING: errors ignored on res

As the error says, there are some invalid UTF8 characters in the table "my_table" which prevent pg_restore from restoring that particular table. I did a lot of research and googling to see what to do. I will list out all the steps I took.

Assume 'my_db' and 'my_table' are the database name and table name respectively.

Step 1:

Dump the database, excluding the particular table 'my_table'. I would suggest dumping the database in archive format to save time and disk space.

pg_dump -Fc -T 'my_table' -p 1111  -f dbdump.pgd my_db

Step 2:

Create the new database with UTF8 encoding and restore the dump.
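The post doesn't show the creation command itself; a minimal sketch (the database name matches the restore command below, and TEMPLATE template0 is needed when the new encoding differs from the template database):

CREATE DATABASE my_new_db ENCODING 'UTF8' TEMPLATE template0;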

pg_restore -p 2222 -j 8 -d my_new_db dbdump.pgd

The restoration should be successful as we didn’t restore the offending table.

Step 3:

Dump the offending table ‘my_table’ in plain text format.

pg_dump -Fp -t 'my_table' -p 1111 my_db >  my_db_table_only.sql

Step 4:

Now we have the table data in plain text. Let's find the invalid UTF8 characters in the file by running the command below (make sure the locale is set to UTF-8).

# grep -naxv '.*'   my_db_table_only.sql
102:2010-03-23 ��ԥ�	data1 data2

� represents an invalid UTF8 character, and it is present on line 102 of the file.

Step 5:

Find which charset the invalid UTF8 characters belong to.

#grep -naxv '.*' my_db_table_only.sql > test.txt 
#file -i test.txt
test.txt: text/plain; charset=iso-8859-1

As per the output, those characters belong to iso-8859-1. The charset may be different in your case.

Step 6:

Let’s convert iso-8859-1 to UTF8 using the iconv command.

#grep -naxv '.*' my_db_table_only.sql |  iconv --from-code=ISO-8859-1 --to-code=UTF-8
102:2010-03-23 ¥Êԥ¡ data1 data2

Now you have the characters in UTF8 encoding, so you can just replace ��ԥ� with ¥Êԥ¡ on line 102 of the dump file (I used the nano editor to do this; I ran into issues with Vim).

I know that replacing characters manually could be a pain in the ass if there are a lot of invalid UTF8 characters. We can run iconv on the whole file as shown below.

iconv --from-code=ISO-8859-1 --to-code=UTF-8 my_db_table_only.sql  > my_db_table_only_utf8.sql

But I won’t recommend this, as it may change valid characters (e.g. Chinese characters) to something else. If you plan to run iconv on the whole file, make sure only invalid UTF8 characters were converted by taking a diff of both files.

Step 7:

Once the characters are replaced, restore the table to the database.

psql -p 2222 -d my_new_db -f my_db_table_only.sql

No more "invalid byte sequence for encoding UTF8" errors. Thanks for taking the time to read my blog, and please share your thoughts in the comments.

By Anders Cornell, Jr. DBA

PostgreSQL is a great piece of software. Its features are well-designed, and they compose elegantly. It’s among the most versatile and reliable software I’ve ever used and its comprehensive superiority over other relational database products leads me to think of PostgreSQL as the data-store that can do anything. But today I’m here to discuss something that PostgreSQL can’t do: handle null characters (also known as zero bytes) in text values.

Conventionally, a zero byte is reserved to mark the end of a text string, so a zero byte inside a string is a contradiction. If a text string were to contain a zero byte, then that string would be truncated by any software that relies on C’s null-terminator convention. Text encodings have evolved since this convention was established, but no widely-used encoding has introduced a new meaning for the zero byte. So, regardless of encoding, if a zero byte is encountered within a string, it is probably safe to assume that it is there by mistake.

In contrast, modern systems treat zero bytes with much more passivity. Strings are no longer null-terminated in the software of this century; instead, every string is stored with an explicit length. This describes almost all software written in C++, Java, Python, Ruby, Go, JavaScript, Clojure, or Rust, for example, as well as most newer C code that does serious text handling. Application software that treats a zero byte as a string terminator is now the exception.

As a result, mistake or not, zero bytes occur in text nowadays. Cosmic rays, buggy UI code, and meddlesome users are all capable of producing a text string containing them.

Ideally, these zero bytes and other splashes of definite meaninglessness in text could be summarily rejected as errors, but human language is horrifically complicated, and in practice text validation must be conservative. Absent higher-level, application-specific validation, the most you can do without stepping on someone’s toes is to verify that the bytes of a string decode to a sequence of valid characters in your chosen character set.

Since it’s 2020, your chosen character set is Unicode, encoded with UTF-8. In UTF-8, a zero byte represents the code point U+0000 (NULL), just as a 0x61 byte represents U+0061 (LATIN SMALL LETTER A). The Unicode Standard does designate some code points “noncharacters” that “should never be interchanged,” but U+0000 is not one of them. In other words, there is no basis for rejecting null characters in the standard. Accordingly, UTF-8 decoders and Unicode normalization routines, as found in modern library code, do not reject null characters.

But PostgreSQL, whose backend codebase is over 30 years old and written in C, does. Try, for example:

postgres=# SELECT e'string with a \x00 byte';
ERROR: invalid byte sequence for encoding "UTF8": 0x00

PostgreSQL’s UTF-8 text type does not allow zero bytes. In other words, there are valid Unicode text strings that PostgreSQL cannot store as text.

The implications are far-reaching due to the foundational nature of the text type. For example, jsonb represents JSON string values internally as text, which means that, counterintuitively, it is possible to give PostgreSQL some valid JSON and get back an error:

postgres=# SELECT '{"a_json_object": "with_a_u0000_byte"}'::jsonb;
ERROR: unsupported Unicode escape sequence
LINE 1: SELECT '{"a_json_object": "with_a_u0000_byte"}'::jsonb;
^
DETAIL: u0000 cannot be converted to text.
CONTEXT: JSON data, line 1: {"a_json_object":...

To give another example, it is similarly impossible to use PostgreSQL’s text-search features on a document containing null characters, since such a document is not representable as text.
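A quick way to see this boundary (the byte string here is invented for illustration): the bytes cannot even be converted to text, so they never reach to_tsvector.

-- bytea can hold the zero byte, but converting it to text fails,
-- so the document never reaches the full-text search machinery
SELECT to_tsvector('english', convert_from('\x64006f63'::bytea, 'UTF8'));
-- ERROR: invalid byte sequence for encoding "UTF8": 0x00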

To be fair, for a system that cannot accept text with null characters, PostgreSQL handles text with null characters commendably. It’s careful not to let a null character slip in through DML, and throws a descriptive error rather than silently truncating the string. Furthermore, the frontend-backend protocol does not use null-termination, sparing database driver code the responsibility of catching zero bytes. It’s hard to imagine how a system that uses null-termination internally could behave better in the face of null characters. Most software written in C does much worse, and PostgreSQL’s diligence does a lot to head off potential null-byte injection vulnerabilities.

However, PostgreSQL comes in last among relational databases in null character support. Oracle, MS SQL Server, and even MariaDB, for all their faults, treat U+0000 like any other character.

If one accepts that text can contain null characters, but still wants to store such text in PostgreSQL, there are two workarounds:

  1. Strip null characters out, or replace them with a different character (I suggest U+FFFD REPLACEMENT CHARACTER) before passing text values to the database. Otherwise, PostgreSQL will abort the transaction and throw an error. The possibility of null characters must be considered at every point where your application hands a string off to the database. Even though null characters are rare and meaningless, the need to remove them will present an unexpected burden.
  2. Abandon text and store the UTF-8 bytes of the string in PostgreSQL as bytea instead (see the sketch after this list). Zero bytes are, of course, permitted in bytea values, so a column of type bytea can store any valid UTF-8 string. (Not to mention any invalid one.) With bytea, INSERTing a valid string will never cause an “invalid byte sequence” error, and when later SELECTed, a string will return from the database with all its characters intact. This transparency comes at the expense of all of PostgreSQL’s text-processing features. JSON, full-text search, locale-aware comparison, regular expressions and more are unavailable when using bytea.
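As a concrete illustration of option 2, a minimal hedged sketch (the table and column names are invented):

-- store the raw UTF-8 bytes, zero byte and all
CREATE TABLE docs (id serial PRIMARY KEY, body bytea);
INSERT INTO docs (body)
    VALUES (convert_to('before', 'UTF8') || '\x00'::bytea || convert_to('after', 'UTF8'));
-- the zero byte survives the round trip
SELECT octet_length(body) FROM docs;  -- 12: 6 + 1 + 5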

One of these approaches must be identified and implemented on an application-by-application basis. As long as PostgreSQL rejects null characters, individual engineering teams will continue to spend time working around the problem. Some teams have been lucky, their systems never encountering a null character, and haven’t had to spend time implementing a workaround. This will become less common as UTF8-encoded Unicode becomes the standard approach to text representation and the null terminator convention dies off—both of these processes are well underway.

There is only one solution. It will be difficult, but it is necessary. PostgreSQL must learn to accept null characters.

Supporting null characters will be a breaking change, but on the bright side, it need not break applications or database drivers—the frontend-backend protocol could be respecified to allow embedded nulls, without changes to client-side code, because text values in the protocol are already length-prefixed, not null-terminated. Text values are also length-prefixed in tuples on disk, so no change is needed in the disk data format either.

Here’s my four-step plan for fixing PostgreSQL to support null characters:

  1. Switch to a length-prefixed string representation for all internal text-processing code: Datum (already length-prefixed) instead of char *. Null characters are still disallowed, but at this point, absent extensions, the database could handle them correctly.
  2. Deprecate public functions (functions in both the C and fmgr sense) that use the cstring data type. This includes supporting and preferring Datum-based I/O functions for new base types defined in extensions, instead of the cstring-based ones that are currently required.
  3. Introduce a cluster-wide, off-by-default configuration option for allowing null characters. Turning the option on will break extensions that use the deprecated functions, and applications that assume null characters are illegal.
  4. Far in the future, when the null-termination convention is but a distant memory, allow null characters by default.

Null-termination is a relic, and beginning to show its age. A previous generation of developers may protest, but in software that handles UTF-8 text, erroring on zero bytes should be considered a bug. To keep its status as a reliable, enterprise-grade, production-ready relational database and worthy core component in modern software stacks, PostgreSQL must make this difficult transition.

How has the null-character bug affected your company? Did you discover this blog post after a single zero byte crashed your entire application? What do you think of my four-step plan? Let’s make PostgreSQL better together. Leave a comment below.

If you need to store UTF8 data in your database, you need a database that accepts UTF8. You can check your database’s encoding in pgAdmin: just right-click the database and select “Properties”.

But this error seems to be telling you that there is some invalid UTF8 data in the source file. It means that the copy utility detected or assumed that you are loading a UTF8 file.

If you are running some variant of Unix, you can check the encoding (more or less) with file.

$ file yourfilename
yourfilename: UTF-8 Unicode English text

(I think this will also work on a Mac in the terminal.) Not sure how to do it under Windows.

If you use that same utility on a file that came from a Windows system (that is, a file that is not encoded in UTF8), it will probably show something like this:

$ file yourfilename
yourfilename: ASCII text, with CRLF line terminators

If things stay weird, you might try to convert your input data to a known encoding, to change your client encoding, or both. (We’re really stretching the limits of my knowledge about encodings here.)

You can use the iconv utility to change the encoding of the input data.

iconv -f original_charset -t utf-8 originalfile > newfile

You can change the psql (client) encoding by following the instructions in Character Set Support. On that page, search for the phrase “To enable automatic character set conversion”.
