Contents
- Postgres – ERROR: prepared statement "S_1" already exists
- New, Better Answer
- Old Answer
- prepared statement "s0" already exists #11643
- Comments
- Bug description
- Supabase Connection Pooling
- Prisma PGBouncer
- How to reproduce
- Expected behavior
- Prisma information
- Environment & setup
- Prisma Version
- Error 42P05 (prepared statement "_auto2" already exists) #4082
- Comments
- Inconsistent error about prepared statement already existing #151
- Comments
- Issue Description and Expected Result
- Database
Postgres – ERROR: prepared statement "S_1" already exists
When executing batch queries via JDBC to pgbouncer, I get the following error:
org.postgresql.util.PSQLException: ERROR: prepared statement "S_1" already exists
I've found bug reports around the web, but they all seem to deal with Postgres 8.3 or below, whereas we're working with Postgres 9.
Here's the code that triggers the error:
this.getJdbcTemplate().update("delete from xx where username = ?", username);
this.getJdbcTemplate().batchUpdate("INSERT INTO xx(a, b, c, d, e) " +
        "VALUES (?, ?, ?, ?, ?)", new BatchPreparedStatementSetter() {
    @Override
    public void setValues(PreparedStatement ps, int i) throws SQLException {
        ps.setString(1, value1);
        ps.setString(2, value2);
        ps.setString(3, value3);
        ps.setString(4, value4);
        ps.setBoolean(5, value5);
    }
    @Override
    public int getBatchSize() {
        return something();
    }
});
Anyone seen this before?
Edit 1:
This turned out to be a pgBouncer issue that occurs when using anything other than session pooling. We were using transaction pooling, which apparently can't support prepared statements. By switching to session pooling, we got around the issue.
Unfortunately, this isn't a good fix for our use case. We have two separate uses for pgBouncer: one part of our system does bulk updates which are most efficient as prepared statements, and another part needs many connections in very rapid succession. Since pgBouncer doesn't allow switching back and forth between session pooling and transaction pooling, we're forced to run two separate instances on different ports just to support our needs.
Edit 2:
I ran across this link, where the poster has rolled a patch of his own. We're currently looking at implementing it for our own uses. Preliminary testing shows that it works well, but time will tell whether it proves safe and effective.
New, Better Answer
To discard session state and effectively forget the "S_1" prepared statement, use the server_reset_query option in the PgBouncer config.
Old Answer
See http://pgbouncer.projects.postgresql.org/doc/faq.html#_how_to_use_prepared_statements_with_transaction_pooling
Switching into session mode is not an ideal solution. Transaction pooling is much more efficient. But for transaction pooling you need stateless DB calls.
I think you have three options:
- Disable PS in the jdbc driver,
- manually deallocate them in your Java code,
- configure pgbouncer to discard them on transaction end.
I would try option 1 or option 3, depending on how your app actually uses them.
For more info, read the docs:
http://pgbouncer.projects.postgresql.org/doc/config.html (search for server_reset_query),
or google for this:
postgresql jdbc +preparethreshold
Disabling prepared statements in JDBC: the correct way to do this for JDBC is to add the parameter "prepareThreshold=0" to the connection string.
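Both suggested fixes are configuration changes rather than code changes. A minimal sketch, assuming a stock PgBouncer install (DISCARD ALL is the standard way to drop session state on PostgreSQL 8.3 and later):

```ini
; pgbouncer.ini - a sketch, not a complete configuration.
; server_reset_query runs when a server connection is released back
; to the pool, discarding prepared statements and other session state.
server_reset_query = DISCARD ALL
```

On the JDBC side, option 1 is a connection-string parameter, for example jdbc:postgresql://localhost:6432/mydb?prepareThreshold=0 (host, port, and database name here are placeholders).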
Source
prepared statement "s0" already exists #11643
Bug description
I’m connecting to Supabase with their connection pooling enabled and I’m seeing the following 500 errors after multiple refreshes of the same page, until eventually it forces me to have to restart my server.
Supabase Connection Pooling
Connection String (Pool Mode = Transaction):
postgres://postgres:[YOUR-PASSWORD]@[host].supabase.co:6543/postgres
With Supabase’s connection pooling I still see the 500 error that is shown below.
Prisma PGBouncer
Connection String:
postgresql://postgres:[PASSWORD]@[HOST].supabase.co:5432/postgres?pgbouncer=true
With Prisma’s PG Bouncer I am still getting the same error after multiple refreshes of the same page. This is very problematic because very quickly I am getting these errors. Even when testing locally.
How to reproduce
Expected behavior
Prisma queries should be executed normally when connected to a pgbouncer pool or to supabase’s connection pooling.
Prisma information
Environment & setup
- OS: Mac OS 12.0.1
- Database: PostgreSQL 12
- Node.js version: v16.3.0
Prisma Version
You need to combine the connection pooled connection string from Supabase (port 6543 ) with adding &pgbouncer=true to the connection string to get rid of this problem. The addition to the connection string tells Prisma that it is talking to a server running PgBouncer — which is the case for Supabase’s connection pooled connection string of course.
Can you confirm that solves the problem?
I ended up using your solution and now my connection string is:
postgres://postgres:[YOUR-PASSWORD]@[host].supabase.co:6543/postgres?pgbouncer=true
However, when using prisma locally, after multiple browser refreshes I encounter the issue once again:
prepared statement "s0" already exists. I can confirm this does not occur in production, but only when testing locally. I'm not sure how to fix this, and it has become a constant everyday issue.
I’ll even restart my db server — and I will still face the same issue.
It seems odd because these errors are intermittent and only occur locally. In production all works fine.
I’ve seen the error message go up to:
prepared statement "s91" already exists
That is super weird. Can you produce a repo with your setup and an app that can trigger this problem?
Yes, I was able to create a repro to reproduce this problem. Do you have an email or slack where I can send you the credentials to access it?
I believe you might be able to even reproduce this yourself actually — this might be the issue (not 100% sure yet though):
- Create a utilsDb folder and export a function that contains a Prisma query. Mind you, we will only use this query function in getServerSideProps since we can’t do this in the client.
Example exported function:
utilsDb/index.ts
And in one of your NextJS pages add the following:
After a few page refreshes you will begin to see the errors:
prepared statement "s0" already exists and with each new page refresh the numbers will go up:
s0, s2, s5, s10, etc.
Please let me know if with this you are able to reproduce the error. And if so, I ask:
Why can't we decouple queries into a utilsDb folder that is only called in getServerSideProps and never in the client?
jan@prisma.io or @janpio here on Github or our public Slack. (My NextJS knowledge is minimal, so a fully set up repository would be very helpful.)
I have also emailed you the credentials to the database. Feel free to share those internally as it’s a newly created db for the reproduction.
From my local testing — it took about 10-15 page refreshes and then the errors appeared.
It doesn’t seem that having the prisma query within utilsDb actually matters, whether or not the prisma calls are decoupled, the errors still persist.
Hi @janpio, were you able to reproduce this issue?
Hey, sorry — forgot to get back to you.
I could not reproduce this at all unfortunately. No matter how often I reloaded the page you mentioned in the repro and email, it always loaded successfully. I also made sure the code actually executed, e.g. by changing the sort order of results in location.ts, and that change applied — but still no errors. Anything I could be missing? Are you maybe also using Migrations when reproducing? That should use a different, non-pooled database connection string.
Reached out via email.
This problem ultimately turned out to be caused by use of a pooled connection string (port 6543 at Supabase) without supplying pgbouncer=true. Once that was added, the error went away.
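The working setup can be expressed as a hedged sketch of the Prisma datasource configuration; the host and password are placeholders, and port 6543 is Supabase's pooled (PgBouncer) endpoint:

```prisma
// schema.prisma - sketch; credentials and host are placeholders.
datasource db {
  provider = "postgresql"
  // pgbouncer=true tells Prisma it is talking to PgBouncer, so it
  // avoids features (like named prepared statements) that a
  // transaction-mode pooler cannot track per client connection.
  url = "postgres://postgres:[YOUR-PASSWORD]@[host].supabase.co:6543/postgres?pgbouncer=true"
}
```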
Source
Error 42P05 (prepared statement "_auto2" already exists) #4082
We’re currently upgrading from Postgres 9.6 -> 13.4 and npgsql 3.2.4.1 -> 5.0.10 and while performing some somewhat intense stress testing, we started encountering 42P05 errors (prepared statement «_auto2» already exists). It’s an interesting bug because I observed that once the error occurs for the first time, it tends to taint DB access thereafter, as that same 42P05 keeps occurring.
I managed to repro this with both npgsql versions above and with increased probability by reducing Max Auto Prepare from our usual 256 to 2. I won’t go into too much detail about our project setup because I managed to write an isolated test case which triggers it. I’ve added a test called PreparedStatementAlreadyExistsFailure() to AutoPrepareTests.cs which describes the issue. I only have a limited understanding of the npgsql codebase so I might be way off here but as far as I can tell, it occurs when:
- Statement 1 (S1) is auto prepared in _autoX
- S1 is scheduled for replacement by a new candidate, S2
- A query containing S2 is executed and it errors out before S2 is processed so it never gets prepared and S1/_autoX never gets properly closed.
- S2 is unprepared (catch clause at bottom of NpgsqlDataReader.NextResult() )
- S3 is scheduled to replace the unprepared S2 in PreparedStatementManager. No cleanup occurs.
- When the code attempts to prepare S3, it results in a 42P05 since S1/_autoX was never closed in step 3.
Basically, it looks like the code is assuming that an unprepared statement S2 can be replaced without closing it, which is true. But in some cases, we might need to close the statement S1 which S2 itself was meant to close.
I’ve fixed it in PreparedStatementManager by simply transitively setting StatementBeingReplaced of a candidate statement to that of the unprepared statement. This seems to resolve it but I’m not sure whether it’s correct in all contexts or whether it introduces any other issues? Unfortunately, I don’t know enough about the Postgres protocols to reason about it 100%.
Note that there’s also another commit there where I tweaked the for loop in PreparedStatementManager so that it exits as soon as it finds an unprepared statement and allocates its slot. Otherwise, if there are several unprepared slots, it ends up assigning all of them to a single candidate.
This might be a terrible idea but I'm wondering if it might be worth adding a safeguard against 42P05 in the case of auto-prepared slots? Mainly because even if bugs in that area are rare and failures are even rarer, it seems potentially catastrophic when the PreparedStatementManager bookkeeping does get out of sync with reality. Not sure if it'd be viable to deallocate the offending slot?
This bug is slightly more likely to manifest in our case because we’re using a DB access pattern not dissimilar to that described here (https://hackage.haskell.org/package/postgresql-simple-0.6.4/docs/Database-PostgreSQL-Simple-Transaction.html#v:withTransactionSerializable), so we naturally expect plenty of 40001s as part of ordinary DB chit-chat.
I’ll send a pull request through shortly. Note that I branched off v5.0.10 since that’s the version we’re using and because I had trouble getting v6 compiling. Let me know if you want me to change/rejig anything.
Source
Inconsistent error about prepared statement already existing #151
Issue Description and Expected Result
I’m running R Markdown documents with SQL chunks on a scheduler, and using odbc::odbc() to establish my connection to my Redshift database.
These documents run daily, and I’d say about 30% of the time the runs result in this error:
The strange thing is the error is inconsistent — I’m not changing the code. Like I mentioned, it works about 70% of the time, the successes seem random.
These queries are just SELECT queries. I’m not, as far as I know, creating prepared statements.
Database
I’ve googled the error but almost nothing comes up. Any ideas?
Hey @timabe! Thanks for the note. It sounds like a prepared statement may be created "behind the scenes" by some of the functions you are calling. Can you provide a minimal reproducible example that shows some of the functions that you are calling in this pipeline?
I suspect that the name there ("_PLAN0x506b440") is randomly generated, and so if the same name gets randomly generated twice in the wrong context then an error is thrown. This would explain the intermittent nature of the issue.
This happens to me as well.
I can’t provide an example, since I am working on internal redshift AWS databases, but in a task where I query a list of tables with a for loop, using dbplyr to summarize the files, I will get the following error in random and inconsistent places. This doesn’t occur using RODBC queries.
Error: 'SELECT count(*) AS "n()"
FROM "datatable1"'
nanodbc/nanodbc.cpp:1587: 42P05: ERROR: prepared statement "_PLAN0x10f82370" already exists;
Error while executing the query
I just updated all my packages including DBI, tidyverse and dbplyr to make sure that wasn’t causing the problem, but it didn’t help.
It may be that Redshift does not like how these consecutive SQL queries are being sent over to it: in the first example, how it goes from SQL chunk to chunk, and in the second example, how it loops and sends SQL statements. I'd suggest trying Sys.sleep(1) to pause between each SQL statement to see if that reduces the incidents.
I have tried inserting Sys.sleep() as well as disconnecting and reconnecting between queries. The latter helps a bit more, but not consistently, and obviously this gets cumbersome. As I mentioned, this doesn't happen with, for example, a loop using RODBC's sqlQuery.
I tested against a SQL Azure database and couldn't hit this issue. I think we need a repro like colearendt suggests, with instructions starting from scratch, as in: install DB, run this R script. It might help to provide a docker image if the setup is more complex, etc.
Source
Bug description
I'm connecting to postgres on DigitalOcean with pgbouncer, configured as described here: https://www.prisma.io/docs/concepts/database-connectors/postgresql#configuring-an-ssl-connection. I also appended pgbouncer=true&statement_cache_size=0 to the connection string, as suggested by @ryands17.
The error ConnectorError: prepared statement "s0" already exists is thrown for every query when connecting to pgbouncer with SSL.
Full error:
PrismaClientUnknownRequestError3 [PrismaClientUnknownRequestError]:
Invalid `prisma.siteMember.findMany()` invocation:
Error occurred during query execution:
ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Error { kind: Db, cause: Some(DbError { severity: "ERROR", parsed_severity: Some(Error), code: SqlState("42P05"), message: "prepared statement "s0" already exists", detail: None, hint: None, position: None, where_: None, schema: None, table: None, column: None, datatype: None, constraint: None, file: Some("prepare.c"), line: Some(463), routine: Some("StorePreparedStatement") }) }) })
at PrismaClientFetcher.request (/XXXXXXXXXXX/node_modules/@prisma/client/runtime/index.js:78125:15)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:97:5) {
clientVersion: '2.13.0'
}
How to reproduce
- Set up a managed database on DigitalOcean.
- Create a pgbouncer pool.
- Connect with the SSL config above.
- Run any query.
Expected behavior
Prisma queries should be executed normally when connected to a pgbouncer-pooled db.
Prisma information
Connection string: postgres://<XXXXXXXXXXXX>/default?sslmode=require&&sslcert=db-ssl.crt&sslpassword=<XXXXX>&sslidentity=db-ssl.p12&pgbouncer=true&statement_cache_size=0
Environment & setup
- OS: Mac OS 10.15.4
- Database: PostgreSQL 12
- Node.js version: 14.4.0
- Prisma version: 2.13.0
@prisma/cli : 2.13.0
@prisma/client : 2.13.0
Current platform : darwin
Query Engine : query-engine 833ab05d2a20e822f6736a39a27de4fc8f6b3e49 (at node_modules/@prisma/engines/query-engine-darwin)
Migration Engine : migration-engine-cli 833ab05d2a20e822f6736a39a27de4fc8f6b3e49 (at node_modules/@prisma/engines/migration-engine-darwin)
Introspection Engine : introspection-core 833ab05d2a20e822f6736a39a27de4fc8f6b3e49 (at node_modules/@prisma/engines/introspection-engine-darwin)
Format Binary : prisma-fmt 833ab05d2a20e822f6736a39a27de4fc8f6b3e49 (at node_modules/@prisma/engines/prisma-fmt-darwin)
Studio : 0.329.0
Complete Debug Log
tryLoadEnv Environment variables not found at null +0ms
tryLoadEnv Environment variables not found at undefined +0ms
tryLoadEnv No Environment variables loaded +1ms
tryLoadEnv Environment variables not found at null +10ms
tryLoadEnv Environment variables not found at undefined +0ms
tryLoadEnv No Environment variables loaded +0ms
prisma-client { clientVersion: '2.13.0' } +0ms
prisma-client Prisma Client call: +77ms
prisma-client prisma.siteMember.findMany({
prisma-client where: {
prisma-client userId: 1
prisma-client },
prisma-client include: {
prisma-client site: {
prisma-client include: {
prisma-client posts: {
prisma-client select: {
prisma-client title: true
prisma-client }
prisma-client }
prisma-client }
prisma-client }
prisma-client },
prisma-client orderBy: {
prisma-client createdAt: 'desc'
prisma-client }
prisma-client }) +2ms
prisma-client Generated request: +0ms
prisma-client query {
prisma-client findManySiteMember(
prisma-client where: {
prisma-client userId: 1
prisma-client }
prisma-client orderBy: [
prisma-client {
prisma-client createdAt: desc
prisma-client }
prisma-client ]
prisma-client ) {
prisma-client id
prisma-client siteId
prisma-client userId
prisma-client role
prisma-client createdAt
prisma-client updatedAt
prisma-client addedBy
prisma-client site {
prisma-client id
prisma-client name
prisma-client createdAt
prisma-client updatedAt
prisma-client posts {
prisma-client title
prisma-client }
prisma-client }
prisma-client }
prisma-client }
prisma-client +0ms
engine {
engine cwd: '/Users/XXXXXXX/prisma'
engine } +0ms
engine Search for Query Engine in /Users/XXXXXXX/node_modules/.prisma/client +1ms
plusX Execution permissions of /Users/XXXXXXX/node_modules/.prisma/client/query-engine-darwin are fine +0ms
engine {
engine flags: [
engine '--enable-raw-queries',
engine '--unix-path',
engine '/tmp/prisma-ccda131f44c9c5e83960f619.sock'
engine ]
engine } +1ms
engine stdout {
timestamp: 'Dec 21 16:17:46.726',
level: 'INFO',
fields: { message: 'Starting a postgresql pool with 1 connections.' },
target: 'quaint::pooled'
} +48ms
engine stdout {
timestamp: 'Dec 21 16:17:48.659',
level: 'INFO',
fields: {
message: 'Started http server on http+unix:///private/tmp/prisma-ccda131f44c9c5e83960f619.sock'
},
target: 'query_engine::server'
} +2s
engine Search for Query Engine in Users/XXXXXXX/node_modules/.prisma/client +5ms
plusX Execution permissions of /Users/XXXXXXX/node_modules/.prisma/client/query-engine-darwin are fine +2s
engine Client Version 2.13.0 +25ms
engine Engine Version query-engine 833ab05d2a20e822f6736a39a27de4fc8f6b3e49 +0ms
engine {
engine error: PrismaClientUnknownRequestError3 [PrismaClientUnknownRequestError]: Error occurred during query execution:
engine ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Error { kind: Db, cause: Some(DbError { severity: "ERROR", parsed_severity: Some(Error), code: SqlState("42P05"), message: "prepared statement "s0" already exists", detail: None, hint: None, position: None, where_: None, schema: None, table: None, column: None, datatype: None, constraint: None, file: Some("prepare.c"), line: Some(463), routine: Some("StorePreparedStatement") }) }) })
engine at NodeEngine.graphQLToJSError (/Users/XXXXXXX/node_modules/@prisma/client/runtime/index.js:27363:14)
engine at /Users/XXXXXXX/node_modules/@prisma/client/runtime/index.js:27285:24
engine at /Users/XXXXXXX/node_modules/@risingstack/react-easy-state/dist/cjs.es6.js:243:63
engine at Object.batchedUpdates$1 (/Users/XXXXXXX/node_modules/react-dom/cjs/react-dom.development.js:21856:12)
engine at batch (/Users/XXXXXXX/node_modules/@risingstack/react-easy-state/dist/cjs.es6.js:243:30)
engine at Object.apply (/Users/XXXXXXX/node_modules/@risingstack/react-easy-state/dist/cjs.es6.js:264:16)
engine at runMicrotasks (<anonymous>)
engine at processTicksAndRejections (internal/process/task_queues.js:97:5) {
engine clientVersion: '2.13.0'
engine }
engine } +280ms
prisma-client Error: Error occurred during query execution:
prisma-client ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Error { kind: Db, cause: Some(DbError { severity: "ERROR", parsed_severity: Some(Error), code: SqlState("42P05"), message: "prepared statement "s0" already exists", detail: None, hint: None, position: None, where_: None, schema: None, table: None, column: None, datatype: None, constraint: None, file: Some("prepare.c"), line: Some(463), routine: Some("StorePreparedStatement") }) }) })
prisma-client at NodeEngine.graphQLToJSError (/Users/XXXXXXXnode_modules/@prisma/client/runtime/index.js:27363:14)
prisma-client at /Users/XXXXXXX/node_modules/@prisma/client/runtime/index.js:27285:24
prisma-client at /Users/Users/XXXXXXX/node_modules/@risingstack/react-easy-state/dist/cjs.es6.js:243:63
prisma-client at Object.batchedUpdates$1 (/Users/XXXXXXX/node_modules/react-dom/cjs/react-dom.development.js:21856:12)
prisma-client at batch (/Users/XXXXXXX/node_modules/@risingstack/react-easy-state/dist/cjs.es6.js:243:30)
prisma-client at Object.apply (/Users/XXXXXXX/node_modules/@risingstack/react-easy-state/dist/cjs.es6.js:264:16)
prisma-client at runMicrotasks (<anonymous>)
prisma-client at processTicksAndRejections (internal/process/task_queues.js:97:5) +2s
PrismaClientUnknownRequestError3 [PrismaClientUnknownRequestError]:
Invalid `prisma.siteMember.findMany()` invocation:
Error occurred during query execution:
ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Error { kind: Db, cause: Some(DbError { severity: "ERROR", parsed_severity: Some(Error), code: SqlState("42P05"), message: "prepared statement "s0" already exists", detail: None, hint: None, position: None, where_: None, schema: None, table: None, column: None, datatype: None, constraint: None, file: Some("prepare.c"), line: Some(463), routine: Some("StorePreparedStatement") }) }) })
at PrismaClientFetcher.request (/Users/XXXXXXX/node_modules/@prisma/client/runtime/index.js:78125:15)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:97:5) {
clientVersion: '2.13.0'
}
The following guide describes issues relating to Prisma Migrate that can occur in production, and how to resolve them.
This guide does not apply for MongoDB. Instead of migrate deploy, db push is used for MongoDB.
Failed migration
A migration might fail if:
- You modify a migration before running it and introduce a syntax error
- You add a mandatory (NOT NULL) column to a table that already has data
- The migration process stopped unexpectedly
- The database shut down in the middle of the migration process
Each migration in the _prisma_migrations table has a logs column that stores the error.
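To inspect those records you can query the table directly; the following is a sketch assuming psql access to the production database (the column names match Prisma's default bookkeeping table):

```sql
-- List migrations that never finished, with the recorded error text.
SELECT migration_name, started_at, logs
FROM _prisma_migrations
WHERE finished_at IS NULL
ORDER BY started_at DESC;
```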
There are two ways to deal with failed migrations in a production environment:
- Roll back, optionally fix issues, and re-deploy
- Manually complete the migration steps and resolve the migration
Option 1: Mark the migration as rolled back and re-deploy
The following example demonstrates how to roll back a migration, optionally make changes to fix the issue, and re-deploy:
1. Mark the migration as rolled back — this updates the migration record in the _prisma_migrations table to register it as rolled back, allowing it to be applied again:
   prisma migrate resolve --rolled-back "20201127134938_added_bio_index"
2. If the migration was partially run, you can either:
   - Modify the migration to check if a step was already completed (for example: CREATE TABLE ... IF NOT EXISTS), OR
   - Manually revert the steps that were completed (for example, delete created databases)
   If you modify the migration, make sure you copy it back to source control to ensure that the state of your production database is reflected exactly in development.
3. Fix the root cause of the failed migration, if relevant — for example, if the migration failed due to an issue with the SQL script itself. Make sure that you copy any changed migrations back to source control.
4. Re-deploy the migration:
   prisma migrate deploy
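As an illustration of making a partially-run migration safe to re-run, each step can be guarded so it becomes a no-op when the object already exists; the table and index names below are hypothetical:

```sql
-- Hypothetical migration step rewritten to be idempotent.
-- IF NOT EXISTS makes each statement a no-op when the object was
-- already created by the partial run.
CREATE TABLE IF NOT EXISTS "Post" (
    id    INTEGER PRIMARY KEY,
    title TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS "Post_title_idx" ON "Post" (title);
```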
Option 2: Manually complete migration and resolve as applied
The following example demonstrates how to manually complete the steps of a migration and mark that migration as applied.
1. Manually complete the migration steps on the production database. Make sure that any manual steps exactly match the steps in the migration file, and copy any changes back to source control.
2. Resolve the migration as applied — this tells Prisma Migrate to consider the migration successfully applied:
   prisma migrate resolve --applied "20201127134938_my_migration"
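Run together, the two steps might look like this from a shell; remaining.sql is a hypothetical file containing only the statements that did not run during the partial migration, and the migration name is a placeholder:

```shell
# Apply the outstanding statements by hand (hypothetical file).
psql "$DATABASE_URL" -f remaining.sql

# Tell Prisma Migrate to consider the migration successfully applied.
npx prisma migrate resolve --applied "20201127134938_my_migration"
```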
Fixing failed migrations with migrate diff and db execute
To help with fixing a failed migration, Prisma provides the following commands for creating and executing a migration file:
- prisma migrate diff, which diffs two database schema sources to create a migration taking one to the state of the second. You can output either a summary of the difference or a SQL script. The script can be output into a file via > file_name.sql or be piped to the db execute --stdin command.
- prisma db execute, which applies a SQL script to the database without interacting with the Prisma migrations table.
These commands are available in Preview in versions 3.9.0 and later (with the --preview-feature CLI flag), and generally available in versions 3.13.0 and later.
This section gives an example scenario of a failed migration, and explains how to use migrate diff and db execute to fix it.
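As a sketch, generating and applying a forward script could look like this; the flag names follow the Prisma CLI, while the URLs and file names are placeholders:

```shell
# Produce a SQL script taking the live database to the state described
# by the local Prisma schema (placeholder connection string).
npx prisma migrate diff \
  --from-url "$DATABASE_URL" \
  --to-schema-datamodel prisma/schema.prisma \
  --script > forward.sql

# Apply the script without touching the _prisma_migrations table.
npx prisma db execute --url "$DATABASE_URL" --file forward.sql
```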
Example of a failed migration
Imagine that you have the following User model in your schema, in both your local development environment and your production environment:
schema.prisma
model User {
  id   Int    @id
  name String
}
At this point, your schemas are in sync, but the data in the two environments is different.
You then decide to make changes to your data model, adding another Post model and making the name field on User unique:
schema.prisma
model User {
  id    Int     @id
  name  String  @unique
  email String?
}

model Post {
  id    Int    @id
  title String
}
You create a migration called ‘Unique’ with the command prisma migrate dev -n Unique, which is saved in your local migration history. Applying the migration succeeds in your dev environment, and now it is time to release to production.
Unfortunately, this migration can only be partially executed: making the name field on User unique fails with the following error:
ERROR 1062 (23000): Duplicate entry 'paul' for key 'User_name_key'
This is because there is non-unique data in your production database (e.g. two users with the same name).
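To see which rows are affected before deciding how to proceed, a query along these lines (table and column names taken from the example schema above) lists every duplicated value:

```sql
-- List each name that occurs more than once in User
SELECT name, COUNT(*) AS occurrences
FROM "User"
GROUP BY name
HAVING COUNT(*) > 1;
```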
You now need to recover manually from the partially executed migration. Until you recover from the failed state, further migrations using prisma migrate deploy are impossible.
At this point there are two options, depending on what you decide to do with the non-unique data:
- You realize that non-unique data is valid and you cannot move forward with your current development work. You want to roll back the complete migration. To do this, see Moving backwards and reverting all changes
- The existence of non-unique data in your database is unintentional and you want to fix that. After fixing, you want to go ahead with the rest of the migration. To do this, see Moving forwards and applying missing changes
Moving backwards and reverting all changes
In this case, you need to create a migration that takes your production database to the state of your data model before the last migration.
-
First you need your migration history at the time before the failed migration. You can either get this from your git history, or locally delete the folder of the last failed migration in your migration history.
-
You now want to take your production environment from its current failed state back to the state specified in your local migrations history:
-
Run the following prisma migrate diff command:
npx prisma migrate diff --from-url "$DATABASE_URL_PROD" --to-migrations ./migrations --script > backward.sql
This will create a SQL script file containing all changes necessary to take your production environment from its current failed state to the target state defined by your migrations history.
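The exact contents of backward.sql depend on how far the failed migration got before the error. Purely as an illustration, if the Post table had already been created and the email column added before the unique constraint failed, the generated script might resemble:

```sql
-- Hypothetical rollback: undo only the steps that did succeed.
-- The real script is generated by prisma migrate diff and may differ.
ALTER TABLE "User" DROP COLUMN "email";
DROP TABLE "Post";
```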
-
Run the following prisma db execute command:
npx prisma db execute --url "$DATABASE_URL_PROD" --file backward.sql
This applies the changes in the SQL script against the target database without interacting with the migrations table.
-
Run the following prisma migrate resolve command:
npx prisma migrate resolve --rolled-back Unique
This will mark the failed migration called ‘Unique’ in the migrations table on your production environment as rolled back.
-
Your local migration history now yields the same result as the state your production database is in. You can now modify the data model again to create a migration that suits your new understanding of the feature you’re working on (with non-unique names).
Moving forwards and applying missing changes
In this case, you need to fix the non-unique data and then go ahead with the rest of the migration as planned:
-
The error message from trying to deploy the migration to production already told you there was duplicate data in the name column. You need to either alter or delete the offending rows.
-
Continue applying the rest of the failed migration to get to the data model defined in your schema.prisma file:
-
Run the following prisma migrate diff command:
npx prisma migrate diff --from-url "$DATABASE_URL_PROD" --to-schema-datamodel schema.prisma --script > forward.sql
This will create a SQL script file containing all changes necessary to take your production environment from its current failed state to the target state defined in your schema.prisma file.
-
Run the following prisma db execute command:
npx prisma db execute --url "$DATABASE_URL_PROD" --file forward.sql
This applies the changes in the SQL script against the target database without interacting with the migrations table.
-
Run the following prisma migrate resolve command:
npx prisma migrate resolve --applied Unique
This will mark the failed migration called ‘Unique’ in the migrations table on your production environment as applied.
-
Your local migration history now yields the same result as the state your production environment is in. You can now continue using the familiar migrate dev / migrate deploy workflow.
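Analogously to the rollback case, the contents of forward.sql depend on where the migration failed; if only the unique constraint was missing after the partial run, the script generated by migrate diff might contain little more than:

```sql
-- Hypothetical forward fix: add the unique index that originally failed,
-- to be applied after the duplicate rows have been altered or deleted.
CREATE UNIQUE INDEX "User_name_key" ON "User"(name);
```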
Migration history conflicts
prisma migrate deploy issues a warning if an already applied migration has been edited; however, it does not stop the migration process. To remove the warnings, restore the original migration from source control.
Prisma Migrate and PgBouncer
You might see the following error if you attempt to run Prisma Migrate commands in an environment that uses PgBouncer for connection pooling:
Error: undefined: Database error
Error querying the database: db error: ERROR: prepared statement "s0" already exists
See Prisma Migrate and PgBouncer workaround for further information and a workaround. Follow GitHub issue #6485 for updates.