Error insert or update on table violates foreign key constraint

I am new to PostgreSQL. I have 3 tables; one table references the other 2 tables' primary keys. But I couldn't insert data into Table3. See the code below: DROP TABLE Table1 CASCADE; CR...

There are a few problems with your tables. I’ll try to address the foreign keys first, since your question asked about them :)

But before that, we should note that the two sets of tables (the first three you created and the second set, which you created after dropping the first) are essentially the same. The definition of Table3 in your second attempt has syntax and logical errors, but the basic idea is:

CREATE TABLE table3 (   
  "ID" bigint NOT NULL DEFAULT '0',   
  "DataID" bigint DEFAULT NULL,   
  "Address" numeric(20) DEFAULT NULL,   
  "Data" bigint DEFAULT NULL,
   PRIMARY KEY ("ID"),   
   FOREIGN KEY ("DataID") REFERENCES Table1("DataID") on delete cascade on update cascade,   
   FOREIGN KEY ("Address") REFERENCES Table2("Address") on delete cascade on update cascade
);

This definition tells PostgreSQL roughly the following: «Create a table with four columns; one will be the primary key (PK), the others can be NULL. If a new row is inserted, check DataID and Address: if they contain a non-NULL value (say 27856), then check Table1 for that DataID and Table2 for that Address. If there is no such value in those tables, return an error.» This last point is what you saw first:

ERROR: insert or update on table "Table3" violates foreign key constraint "Table3_DataID_fkey"
DETAIL: Key (DataID)=(27856) is not present in table "Table1".

It is that simple: if there is no row in Table1 where DataID = 27856, then you can’t insert that row into Table3.

If you need that row, you should first insert a row into Table1 with DataID = 27856, and only then try to insert into Table3. If this is not what you want, please describe in a few sentences what you want to achieve, and we can help you find a good design.
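A minimal sketch of the required order (the value 27856 comes from your error message; the remaining columns of Table1 are not shown in your question, so only "DataID" appears here):

-- 1. Create the parent row first.
INSERT INTO Table1 ("DataID") VALUES (27856);

-- 2. Only now can the child row reference it; the FK check succeeds.
INSERT INTO Table3 ("ID", "DataID") VALUES (1, 27856);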


And now about the other problems.

You define your PKs as

CREATE TABLE all_your_tables (
    first_column NOT NULL DEFAULT '0',   
    [...]
    PRIMARY KEY ("ID"),  

A primary key means that all the values in it are different from each other, that is, they are UNIQUE. If you give a static DEFAULT (like '0') to a UNIQUE column, you will experience bad surprises all the time. This is what you got in your third error message.

Furthermore, '0' means a text string, not a number (bigint or numeric in your case). Simply use 0 instead (or don’t use a default at all, as I wrote above).
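You can reproduce the surprise easily. With a static default on a primary key column, the second insert that relies on the default collides with the first (a sketch using a hypothetical table, not one from your schema):

CREATE TABLE demo ("ID" bigint NOT NULL DEFAULT 0, PRIMARY KEY ("ID"));

INSERT INTO demo DEFAULT VALUES;  -- inserts ID = 0
INSERT INTO demo DEFAULT VALUES;  -- ERROR: duplicate key value violates unique constraint "demo_pkey"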

And a last point (I may be wrong here): in Table2, your Address field is set to numeric(20). At the same time, it is the PK of the table. The column name and the data type suggest that this address can change in the future. If this is true, then it is a very bad choice for a PK. Think about the following scenario: you have an address ‘1234567890454’, which has a child in Table3 like

ID        DataID           Address             Data
123       3216547          1234567890454       654897564134569

Now that address happens to change to something other. How do you make your child row in Table3 follow its parent to the new address? (There are solutions for this, but can cause much confusion.) If this is your case, add an ID column to your table, which will not contain any information from the real world, it will simply serve as an identification value (that is, ID) for an address.
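One hedged sketch of such a design (the column names here are made up, not taken from your schema): give the address its own surrogate key and reference that, so the real-world address value can change freely:

CREATE TABLE table2 (
  "ID"      bigserial PRIMARY KEY,   -- stable surrogate key
  "Address" numeric(20) UNIQUE       -- the real-world value, free to change
);

CREATE TABLE table3 (
  "ID"        bigint PRIMARY KEY,
  "AddressID" bigint REFERENCES table2 ("ID")  -- children keep pointing at the same ID
);

When the address changes, only table2."Address" is updated; no child row has to move.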

Recall the weather and cities tables from Chapter 2. Consider the following problem: You want to make sure that no one can insert rows in the weather table that do not have a matching entry in the cities table. This is called maintaining the referential integrity of your data. In simplistic database systems this would be implemented (if at all) by first looking at the cities table to check if a matching record exists, and then inserting or rejecting the new weather records. This approach has a number of problems and is very inconvenient, so PostgreSQL can do this for you.

The new declaration of the tables would look like this:

CREATE TABLE cities (
        name     varchar(80) primary key,
        location point
);

CREATE TABLE weather (
        city      varchar(80) references cities(name),
        temp_lo   int,
        temp_hi   int,
        prcp      real,
        date      date
);

Now try inserting an invalid record:

INSERT INTO weather VALUES ('Berkeley', 45, 53, 0.0, '1994-11-28');
ERROR:  insert or update on table "weather" violates foreign key constraint "weather_city_fkey"
DETAIL:  Key (city)=(Berkeley) is not present in table "cities".

The behavior of foreign keys can be finely tuned to your application. We will not go beyond this simple example in this tutorial, but just refer you to Chapter 5 for more information. Making correct use of foreign keys will definitely improve the quality of your database applications, so you are strongly encouraged to learn about them.
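As a small taste of that tuning (this is standard PostgreSQL syntax, not part of the tutorial's own example): the reference can state what should happen to weather rows when their city disappears:

CREATE TABLE weather (
        city      varchar(80) references cities(name) ON DELETE CASCADE,
        temp_lo   int,
        temp_hi   int,
        prcp      real,
        date      date
);

-- Deleting a city now deletes its weather rows as well;
-- ON DELETE RESTRICT (or the default, NO ACTION) would instead reject the delete.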

You can’t disable a foreign key constraint in Postgres like you can in Oracle. However, you can remove the foreign key constraint from a column and then re-add it.

Here’s a quick test case in five steps:

  1. Drop the big and little tables if they exist. The first drop statement requires a cascade because the dependent little table holds a foreign key constraint against the primary key column of the big table. The second drop statement does not require the cascade keyword because there is no dependent foreign key constraint.
    DROP TABLE IF EXISTS big CASCADE;
    DROP TABLE IF EXISTS little;
  2. Create the big and little tables:
    -- Create the big table.
    CREATE TABLE big
    ( big_id     SERIAL
    , big_text   VARCHAR(20) NOT NULL
    , CONSTRAINT pk_little_1 PRIMARY KEY (big_id));
     
    -- Display the big table.
    \d big
     
    -- Create little table.
    CREATE TABLE little
    ( little_id     SERIAL
    , big_id        INTEGER     NOT NULL
    , little_text   VARCHAR(20) NOT NULL
    , CONSTRAINT fk_little_1 FOREIGN KEY (big_id) REFERENCES big (big_id));
     
    -- Display the little table.
    \d little

    If you fail to designate the big_id column as a primary key, Postgres will raise the following exception:

    ERROR:  there is no unique constraint matching given keys for referenced table "big"
  3. Insert a non-compliant row in the little table. An insert statement into the little table with a value for the foreign key column that does not exist in the big_id column of the big table fails with the following error:
    ERROR:  insert or update on table "little" violates foreign key constraint "fk_little_1"
    DETAIL:  Key (big_id)=(2) is not present in table "big".

    The insert into the little table succeeds once you first insert a row into the big table whose primary key matches the little table's foreign key value. The following two insert statements add a row to the big and little tables respectively:

    -- Insert into a big table.
    INSERT INTO big
    (big_text)
    VALUES
    ('Cat in the Hat 2');
     
    -- Insert into a little table.
    INSERT INTO little
    (big_id
    ,little_text)
    VALUES
    ( 2
    ,'Thing 3');

    Then, you can query it like this:

    SELECT *
    FROM   big b JOIN little l ON b.big_id = l.big_id;
     big_id |     big_text     | little_id | big_id | little_text 
    --------+------------------+-----------+--------+-------------
          1 | Cat in the Hat 1 |         1 |      1 | Thing 1
          1 | Cat in the Hat 1 |         2 |      1 | Thing 2
          2 | Cat in the Hat 2 |         3 |      2 | Thing 3
    (3 rows)
  4. You can drop a foreign key constraint with the following syntax:
    ALTER TABLE little DROP CONSTRAINT fk_little_1;
  5. You can add a foreign key constraint with the following syntax:
    ALTER TABLE little ADD CONSTRAINT fk_little_1 FOREIGN KEY (big_id) REFERENCES big (big_id);
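If the little table already contains rows when you re-add the constraint, PostgreSQL scans the whole table during the ALTER. On a large table you can split that work in two (standard syntax, shown here as an alternative to step 5, not as part of the original test case):

    -- Re-add without scanning existing rows; only new writes are checked.
    ALTER TABLE little ADD CONSTRAINT fk_little_1
      FOREIGN KEY (big_id) REFERENCES big (big_id) NOT VALID;

    -- Later, scan the table and mark the constraint as validated.
    ALTER TABLE little VALIDATE CONSTRAINT fk_little_1;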

As always, I hope this helps you solve problems.

Using Hangfire.PostgreSql version 1.9.5, I found an issue when trying to enqueue Hangfire jobs.

The jobs are being enqueued within a TransactionScope that I manage. I originally got the following error:

System.ArgumentException: The transaction specified for TransactionScope has a different IsolationLevel than the value requested for the scope. (Parameter 'transactionOptions')
   at System.Transactions.TransactionScope..ctor(TransactionScopeOption scopeOption, TransactionOptions transactionOptions, TransactionScopeAsyncFlowOption asyncFlowOption)
   at System.Transactions.TransactionScope..ctor(TransactionScopeOption scopeOption, TransactionOptions transactionOptions)
   at Hangfire.PostgreSql.PostgreSqlStorage.CreateTransaction(Nullable`1 isolationLevel)
   at Hangfire.PostgreSql.PostgreSqlStorage.UseTransaction[T](DbConnection dedicatedConnection, Func`3 func, Nullable`1 isolationLevel)
   at Hangfire.PostgreSql.PostgreSqlConnection.CreateExpiredJob(Job job, IDictionary`2 parameters, DateTime createdAt, TimeSpan expireIn)
   at Hangfire.Client.CoreBackgroundJobFactory.<>c__DisplayClass14_0.<Create>b__2(Int32 _)
   at Hangfire.Client.CoreBackgroundJobFactory.RetryOnException[T](Int32& attemptsLeft, Func`2 action)
--- End of stack trace from previous location ---
   at Hangfire.Client.CoreBackgroundJobFactory.RetryOnException[T](Int32& attemptsLeft, Func`2 action)
   at Hangfire.Client.CoreBackgroundJobFactory.Create(CreateContext context)
   at Hangfire.Client.BackgroundJobFactory.<>c__DisplayClass12_0.<CreateWithFilters>b__0()
   at Hangfire.Client.BackgroundJobFactory.InvokeClientFilter(IClientFilter filter, CreatingContext preContext, Func`1 continuation)
   at Hangfire.Client.BackgroundJobFactory.<>c__DisplayClass12_1.<CreateWithFilters>b__2()
   at Hangfire.Client.BackgroundJobFactory.CreateWithFilters(CreateContext context, IEnumerable`1 filters)
   at Hangfire.Client.BackgroundJobFactory.Create(CreateContext context)
   at Hangfire.BackgroundJobClient.Create(Job job, IState state)
   --- End of inner exception stack trace ---
   at Hangfire.BackgroundJobClient.Create(Job job, IState state)
   at Hangfire.BackgroundJobClientExtensions.Create[T](IBackgroundJobClient client, Expression`1 methodCall, IState state)
   at Hangfire.BackgroundJobClientExtensions.Enqueue[T](IBackgroundJobClient client, Expression`1 methodCall)

After finding that jobs are created using IsolationLevel = ReadCommitted, I updated my transactions to do the same. I now get the following error whenever trying to enqueue jobs:

Npgsql.PostgresException (0x80004005): 23503: insert or update on table "state" violates foreign key constraint "state_jobid_fkey"

DETAIL: Detail redacted as it may contain sensitive data. Specify 'Include Error Detail' in the connection string to include this information.
   at Npgsql.Internal.NpgsqlConnector.<ReadMessage>g__ReadMessageLong|213_0(NpgsqlConnector connector, Boolean async, DataRowLoadingMode dataRowLoadingMode, Boolean readingNotifications, Boolean isReadingPrependedMessage)
   at Npgsql.NpgsqlDataReader.NextResult(Boolean async, Boolean isConsuming, CancellationToken cancellationToken)
   at Npgsql.NpgsqlDataReader.NextResult()
   at Npgsql.NpgsqlCommand.ExecuteReader(CommandBehavior behavior, Boolean async, CancellationToken cancellationToken)
   at Npgsql.NpgsqlCommand.ExecuteReader(CommandBehavior behavior, Boolean async, CancellationToken cancellationToken)
   at Npgsql.NpgsqlCommand.ExecuteNonQuery(Boolean async, CancellationToken cancellationToken)
   at Npgsql.NpgsqlCommand.ExecuteNonQuery()
   at Dapper.SqlMapper.ExecuteCommand(IDbConnection cnn, CommandDefinition& command, Action`2 paramReader) in /_/Dapper/SqlMapper.cs:line 2858
   at Dapper.SqlMapper.ExecuteImpl(IDbConnection cnn, CommandDefinition& command) in /_/Dapper/SqlMapper.cs:line 581
   at Dapper.SqlMapper.Execute(IDbConnection cnn, String sql, Object param, IDbTransaction transaction, Nullable`1 commandTimeout, Nullable`1 commandType) in /_/Dapper/SqlMapper.cs:line 452
   at Hangfire.PostgreSql.PostgreSqlWriteOnlyTransaction.<>c__DisplayClass8_0.<SetJobState>b__0(IDbConnection con)
   at Hangfire.PostgreSql.PostgreSqlWriteOnlyTransaction.<Commit>b__4_0(DbConnection connection, DbTransaction transaction)
   at Hangfire.PostgreSql.PostgreSqlStorage.<>c__DisplayClass23_0.<UseTransaction>b__0(DbConnection connection, DbTransaction transaction)
   at Hangfire.PostgreSql.PostgreSqlStorage.<>c__DisplayClass24_0`1.<UseTransaction>b__0(DbConnection connection)
   at Hangfire.PostgreSql.PostgreSqlStorage.UseConnection[T](DbConnection dedicatedConnection, Func`2 func)
   at Hangfire.PostgreSql.PostgreSqlStorage.UseTransaction[T](DbConnection dedicatedConnection, Func`3 func, Func`1 transactionScopeFactory)
   at Hangfire.PostgreSql.PostgreSqlStorage.UseTransaction(DbConnection dedicatedConnection, Action`2 action, Func`1 transactionScopeFactory)
   at Hangfire.PostgreSql.PostgreSqlWriteOnlyTransaction.Commit()
   at Hangfire.Client.CoreBackgroundJobFactory.<>c__DisplayClass14_0.<Create>b__3(Int32 attempt)
   at Hangfire.Client.CoreBackgroundJobFactory.<>c__DisplayClass15_0.<RetryOnException>b__0(Int32 attempt)
   at Hangfire.Client.CoreBackgroundJobFactory.RetryOnException[T](Int32& attemptsLeft, Func`2 action)
--- End of stack trace from previous location ---
   at Hangfire.Client.CoreBackgroundJobFactory.RetryOnException[T](Int32& attemptsLeft, Func`2 action)
   at Hangfire.Client.CoreBackgroundJobFactory.RetryOnException(Int32& attemptsLeft, Action`1 action)
   at Hangfire.Client.CoreBackgroundJobFactory.Create(CreateContext context)
   at Hangfire.Client.BackgroundJobFactory.<>c__DisplayClass12_0.<CreateWithFilters>b__0()
   at Hangfire.Client.BackgroundJobFactory.InvokeClientFilter(IClientFilter filter, CreatingContext preContext, Func`1 continuation)
   at Hangfire.Client.BackgroundJobFactory.<>c__DisplayClass12_1.<CreateWithFilters>b__2()
   at Hangfire.Client.BackgroundJobFactory.CreateWithFilters(CreateContext context, IEnumerable`1 filters)
   at Hangfire.Client.BackgroundJobFactory.Create(CreateContext context)
   at Hangfire.BackgroundJobClient.Create(Job job, IState state)
  Exception data:
    Severity: ERROR
    SqlState: 23503
    MessageText: insert or update on table "state" violates foreign key constraint "state_jobid_fkey"
    Detail: Detail redacted as it may contain sensitive data. Specify 'Include Error Detail' in the connection string to include this information.
    SchemaName: hangfire
    TableName: state
    ConstraintName: state_jobid_fkey
    File: d:\pginstaller_12.auto\postgres.windows-x64\src\backend\utils\adt\ri_triggers.c
    Line: 2474
    Routine: ri_ReportViolation

I get the foreign key violation even when starting with a fresh database. This appears to be a new issue, as I’d previously had success with enqueuing jobs, even while using the default IsolationLevel.

.NET 6
Visual Studio 2022 (17.0.5)
Hangfire 1.7.28
Hangfire.AspNetCore 1.7.28
Hangfire.PostgreSql 1.9.5
Microsoft.EntityFrameworkCore 6.0.1
Npgsql.EntityFrameworkCore.PostgreSQL 6.0.2

Constraints are an important concept in every relational database system, and they guarantee the correctness of your data. While constraints are essential, there are situations when it is required to disable or drop them temporarily. The reason could be performance related, because it is faster to validate the constraints all at once after a data load. The reason could also be that you need to load data and you do not know whether the data is ordered in such a way that all foreign keys will validate while the data is loaded. In such a case it is required to either drop the constraints or disable them until the data load is done. Validation of the constraints is then deferred until all your data is there.

As always, let's start with a simple test case: two tables, the second one referencing the first:

postgres=# create table t1 ( a int primary key
postgres(#                 , b text
postgres(#                 , c date
postgres(#                 );
CREATE TABLE
postgres=# create table t2 ( a int primary key
postgres(#                 , b int references t1(a)
postgres(#                 , c text
postgres(#                 );
CREATE TABLE

Two rows for each of them:

postgres=# insert into t1 (a,b,c) values(1,'aa',now());
INSERT 0 1
postgres=# insert into t1 (a,b,c) values(2,'bb',now());
INSERT 0 1
postgres=# insert into t2 (a,b,c) values (1,1,'aa');
INSERT 0 1
postgres=# insert into t2 (a,b,c) values (2,2,'aa');
INSERT 0 1

Currently the two tiny tables look like this:

postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | text    |           |          | 
 c      | date    |           |          | 
Indexes:
    "t1_pkey" PRIMARY KEY, btree (a)
Referenced by:
    TABLE "t2" CONSTRAINT "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)

postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)

postgres=# 

Let's assume we want to load some data provided by a script. As we do not know the ordering of the data in the script, we decide to disable the foreign key constraint on the t2 table and validate it after the load:

postgres=# alter table t2 disable trigger all;
ALTER TABLE

The syntax might look a bit strange, but it actually does disable the foreign key, and it would have disabled all the foreign keys if there had been more than one. It becomes clearer when we look at the table again:

postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)
Disabled internal triggers:
    "RI_ConstraintTrigger_c_16460" AFTER INSERT ON t2 FROM t1 NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE FUNCTION "RI_FKey_check_ins"()
    "RI_ConstraintTrigger_c_16461" AFTER UPDATE ON t2 FROM t1 NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE FUNCTION "RI_FKey_check_upd"()

“ALL” means: please also disable the internal triggers that are responsible for verifying the constraints. One restriction of the “ALL” keyword is that you need to be a superuser to use it. Trying it with a normal user will fail:

postgres=# create user u1 with login password 'u1';
CREATE ROLE
postgres=# \c postgres u1
You are now connected to database "postgres" as user "u1".
postgres=> create table t3 ( a int primary key
postgres(>                 , b text
postgres(>                 , c date
postgres(>                 );
CREATE TABLE
postgres=> create table t4 ( a int primary key
postgres(>                 , b int references t3(a)
postgres(>                 , c text
postgres(>                 );
CREATE TABLE
postgres=> alter table t4 disable trigger all;
ERROR:  permission denied: "RI_ConstraintTrigger_c_16484" is a system trigger
postgres=> 
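Another superuser-only alternative, mentioned here as an aside (it is a standard PostgreSQL setting, not something used elsewhere in this post): instead of touching the triggers per table, you can switch off internal trigger firing for the current session:

-- Requires superuser; internal FK triggers stop firing for this session only.
SET session_replication_role = 'replica';
-- ... load data ...
SET session_replication_role = 'origin';

The same caveat discussed below applies: nothing that was loaded in the meantime gets validated afterwards.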

What you can do as a regular user is disable the user triggers:

postgres=> alter table t4 disable trigger user;
ALTER TABLE

As I do not have any triggers, this of course does not make much sense here. Coming back to our initial t1 and t2 tables: as the foreign key is currently disabled, we can insert data into the t2 table that would violate the constraint:

postgres=# select * from t1;
 a | b  |     c      
---+----+------------
 1 | aa | 2019-11-27
 2 | bb | 2019-11-27
(2 rows)

postgres=# select * from t2;
 a | b | c  
---+---+----
 1 | 1 | aa
 2 | 2 | aa
(2 rows)

postgres=# insert into t2 (a,b,c) values (3,3,'cc');
INSERT 0 1
postgres=# 

There clearly is no matching parent for this row in the t1 table but the insert succeeds, as the foreign key is disabled. Time to validate the constraint:

postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)
Disabled internal triggers:
    "RI_ConstraintTrigger_c_16460" AFTER INSERT ON t2 FROM t1 NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE FUNCTION "RI_FKey_check_ins"()
    "RI_ConstraintTrigger_c_16461" AFTER UPDATE ON t2 FROM t1 NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE FUNCTION "RI_FKey_check_upd"()

postgres=# alter table t2 enable trigger all;
ALTER TABLE
postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)

postgres=# alter table t2 validate constraint t2_b_fkey;
ALTER TABLE
postgres=# 

Surprise, surprise: PostgreSQL does not complain about the invalid row. Why is that? If we ask the pg_constraint catalog table, the constraint is recorded as validated:

postgres=# select convalidated from pg_constraint where conname = 't2_b_fkey' and conrelid = 't2'::regclass;
 convalidated 
--------------
 t
(1 row)

It even stays validated if we disable it once more:

postgres=# alter table t2 disable trigger all;
ALTER TABLE
postgres=# select convalidated from pg_constraint where conname = 't2_b_fkey' and conrelid = 't2'::regclass;
 convalidated 
--------------
 t
(1 row)

That implies that PostgreSQL will not validate the constraint when we re-enable the internal triggers, and it will not re-check all the data as long as the recorded status is valid. What we really need to do to get the constraint validated is to invalidate it first:

postgres=# alter table t2 alter CONSTRAINT t2_b_fkey not valid;
ERROR:  ALTER CONSTRAINT statement constraints cannot be marked NOT VALID

It seems this is not the correct way of doing it. The correct way is to drop the foreign key and then re-create it with the NOT VALID flag:

postgres=# alter table t2 drop constraint t2_b_fkey;
ALTER TABLE
postgres=# delete from t2 where a in (3,4);
DELETE 2
postgres=# alter table t2 add constraint t2_b_fkey foreign key (b) references t1(a) not valid;
ALTER TABLE
postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a) NOT VALID

Now we have the desired state and we can insert our data:

postgres=# insert into t2(a,b,c) values (3,3,'cc');
ERROR:  insert or update on table "t2" violates foreign key constraint "t2_b_fkey"
DETAIL:  Key (b)=(3) is not present in table "t1".

Surprise again. Creating a NOT VALID constraint only tells PostgreSQL not to scan the whole table to check that all existing rows are valid. For newly inserted or updated rows the constraint is still checked, and this is why the insert fails.

What options do we have left? The obvious one is this:

  • Drop all the foreign keys.
  • Load the data.
  • Re-create the foreign keys as NOT VALID to avoid the costly scan of the tables; only new data will be validated.
  • Validate the constraints when there is less load on the system.
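Sketched on the t2 table from above, that procedure looks like this:

-- 1. Drop the foreign key.
ALTER TABLE t2 DROP CONSTRAINT t2_b_fkey;

-- 2. Load the data in any order.

-- 3. Re-create it as NOT VALID: existing rows are not scanned yet.
ALTER TABLE t2 ADD CONSTRAINT t2_b_fkey
  FOREIGN KEY (b) REFERENCES t1(a) NOT VALID;

-- 4. Validate later, when the system is less loaded.
ALTER TABLE t2 VALIDATE CONSTRAINT t2_b_fkey;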

Another possibility would be this:

postgres=# alter table t2 alter constraint t2_b_fkey deferrable;
ALTER TABLE
postgres=# begin;
BEGIN
postgres=# set constraints all deferred;
SET CONSTRAINTS
postgres=# insert into t2 (a,b,c) values (3,3,'cc');
INSERT 0 1
postgres=# insert into t2 (a,b,c) values (4,4,'dd');
INSERT 0 1
postgres=# insert into t1 (a,b,c) values (3,'cc',now());
INSERT 0 1
postgres=# insert into t1 (a,b,c) values (4,'dd',now());
INSERT 0 1
postgres=# commit;
COMMIT

The downside is that this only works until the next commit, so you have to do all your work in one transaction. The key point of this post is that the assumption that the following will validate your data is false:

postgres=# alter table t2 disable trigger all;
ALTER TABLE
postgres=# insert into t2 (a,b,c) values (5,5,'ee');
INSERT 0 1
postgres=# alter table t2 enable trigger all;
ALTER TABLE
postgres=# 

This will only validate new data but it does not guarantee that all the rows satisfy the constraint:

postgres=# insert into t2 (a,b,c) values (6,6,'ff');
ERROR:  insert or update on table "t2" violates foreign key constraint "t2_b_fkey"
DETAIL:  Key (b)=(6) is not present in table "t1".
postgres=# select * from t2 where b = 5;
 a | b | c  
---+---+----
 5 | 5 | ee
(1 row)

postgres=# select * from t1 where a = 5;
 a | b | c 
---+---+---
(0 rows)

Finally: there is another way of doing it, but it directly updates the pg_constraint catalog table, and that is something you should _not_ do (never update internal tables directly!):

postgres=# delete from t2 where b = 5;
DELETE 1
postgres=# alter table t2 disable trigger all;
ALTER TABLE
postgres=# insert into t2 values (5,5,'ee');
INSERT 0 1
postgres=# alter table t2 enable trigger all;
ALTER TABLE
postgres=# update pg_constraint set convalidated = false where conname = 't2_b_fkey' and conrelid = 't2'::regclass;
UPDATE 1
postgres=# alter table t2 validate constraint t2_b_fkey;
ERROR:  insert or update on table "t2" violates foreign key constraint "t2_b_fkey"
DETAIL:  Key (b)=(5) is not present in table "t1".
postgres=# 

In this case the constraint is fully validated, because it is recorded as invalid in the catalog.

Conclusion: Do not rely on assumptions, always carefully test your procedures.


In SQL, the concept of foreign keys is an important one that can be found in all professional databases used in the industry. The core idea is to prevent your PostgreSQL database from storing inconsistent data by enforcing constraints ensuring the correctness of your tables (at least as far as relations between objects are concerned). Referential integrity is therefore one of the most important concepts ever invented in IT.

However, foreign keys will introduce some issues which you have to take care of when writing applications. If there are no foreign keys, you can insert data into any table in any order. PostgreSQL does not care. However, if a foreign key is in place, order starts to matter (at least in a typical scenario but more on that later).

To show the importance of order, we have to create a data model first:

CREATE TABLE t_currency
(
	id		int,
	shortcut	char (3),
	PRIMARY KEY (id)
);

CREATE TABLE t_location
(
	id		int,
	location_name	text,
	PRIMARY KEY (id)
);

CREATE TABLE t_product
(
	id		int,
	name		text,
	currency_id	int	REFERENCES t_currency (id),
	PRIMARY KEY (id)
);

CREATE TABLE t_product_desc
(
	id		int,
	product_id	int	REFERENCES t_product (id),
	description	text,
	PRIMARY KEY (id)
);

CREATE TABLE t_product_stock
(
	product_id	int	REFERENCES t_product (id),
	location_id	int	REFERENCES t_location (id),
	amount		numeric	CHECK (amount >= 0)
);

We want to store currencies, products, and product descriptions, as well as locations and stock. Basically it is a very simple data model. Let us see what happens if we insert into the product table:

test=# INSERT INTO t_product VALUES (1, 'PostgreSQL consulting', 1);
ERROR:  insert or update on table "t_product" violates foreign key constraint "t_product_currency_id_fkey"
DETAIL:  Key (currency_id)=(1) is not present in table "t_currency".
test=# INSERT INTO t_product VALUES (1, 'PostgreSQL consulting', NULL);
INSERT 0 1

Logically the first INSERT fails because currency number 1 does not exist yet. If we want to INSERT anyway, we have to use a NULL value (= unknown currency). In other words: we have to fill the currency table first, then insert locations, and so on. The order does matter in the default case.

Determining the correct insertion order for foreign keys

If you have to start with an existing data model, it can be a bit hard to wrap your head around this stuff. Populating an empty data model can be a bit tricky. So why not write a query telling us the order in which we are supposed to insert data?

Well, here is that magic query…

WITH RECURSIVE fkeys AS (
   /* source and target tables for all foreign keys */
   SELECT conrelid AS source,
          confrelid AS target
   FROM pg_constraint
   WHERE contype = 'f'
),
tables AS (
      (   /* all tables ... */
          SELECT oid AS table_name,
                 1 AS level,
                 ARRAY[oid] AS trail,
                 FALSE AS circular
          FROM pg_class
          WHERE relkind = 'r'
            AND NOT relnamespace::regnamespace::text LIKE ANY
                    (ARRAY['pg_catalog', 'information_schema', 'pg_temp_%'])
       EXCEPT
          /* ... except the ones that have a foreign key */
          SELECT source,
                 1,
                 ARRAY[ source ],
                 FALSE
          FROM fkeys
      )
   UNION ALL
      /* all tables with a foreign key pointing a table in the working set */
      SELECT fkeys.source,
             tables.level + 1,
             tables.trail || fkeys.source,
             tables.trail @> ARRAY[fkeys.source]
      FROM fkeys
         JOIN tables ON tables.table_name = fkeys.target
      /*
       * Stop when a table appears in the trail the third time.
       * This way, we get the table once with "circular = TRUE".
       */
      WHERE cardinality(array_positions(tables.trail, fkeys.source)) < 2
),
ordered_tables AS (
   /* get the highest level per table */
   SELECT DISTINCT ON (table_name)
          table_name,
          level,
          circular
   FROM tables
   ORDER BY table_name, level DESC
)
SELECT table_name::regclass,
       level
FROM ordered_tables
WHERE NOT circular
ORDER BY level, table_name;

The query is not trivial to read, but I have done my best to document it a bit. Basically, the PostgreSQL system tables have all the information we need to determine the correct order. Here is the output:

   table_name    | level
-----------------+-------
 t_currency      |     1
 t_location      |     1
 t_product       |     2
 t_product_desc  |     3
 t_product_stock |     3
(5 rows)

As you can see, the query has correctly given us the tables in the desired order. First, we have to insert into all tables at level one and so on. If we stick to this order, referential integrity will always be ensured (assuming the data is correct).

Making use of “initially deferred” constraints

In some cases, the insertion order can be a nasty thing to deal with. What if we had the means to tell PostgreSQL to ignore the order and check integrity on commit instead? This is exactly what “initially deferred” does. Here is how it works:

BEGIN;

CREATE TABLE t_currency
(
	id            int,
	shortcut      char (3),
	PRIMARY KEY (id)
);

CREATE TABLE t_product
(
	id            int,
	name          text,
	currency_id   int	REFERENCES t_currency (id)
				INITIALLY DEFERRED,
	PRIMARY KEY (id)
);

INSERT INTO t_product VALUES (1, 'PostgreSQL support', 1);
INSERT INTO t_currency VALUES (1, 'EUR');

COMMIT;

In this case, we can modify data in any order we want. As long as integrity is guaranteed to be intact at the end of the transaction, PostgreSQL is perfectly fine. PostgreSQL will postpone the constraint check and take some burden off the developer.
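The check mode can also be toggled per transaction with SET CONSTRAINTS. An INITIALLY DEFERRED constraint can be forced back to immediate checking inside a transaction (the currency id 99 below is a deliberately non-existent value, chosen for illustration):

BEGIN;
SET CONSTRAINTS ALL IMMEDIATE;  -- this transaction's FKs are now checked per statement
INSERT INTO t_product VALUES (2, 'PostgreSQL training', 99);  -- fails right away, not at COMMIT
ROLLBACK;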

Finally …

If you want to learn more about advanced SQL, you might want to take a look at my blog about some more advanced windowing functions (with ties). So put on your tie and read to learn more.
