In this article, we will see how to resolve PostgreSQL errors of the form "cannot execute ... in a read-only transaction", for example:
ERROR: cannot execute CREATE DATABASE in a read-only transaction
ERROR: cannot execute DROP DATABASE in a read-only transaction
ERROR: cannot execute CREATE TABLE in a read-only transaction
ERROR: cannot execute DROP TABLE in a read-only transaction
ERROR: cannot execute INSERT in a read-only transaction
Reason:
The reason is simple: the PostgreSQL server is in recovery (read-only) or standby mode.
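A quick way to confirm this before changing anything is to run the following in psql; it returns t (true) while the server is in recovery/standby mode:
SELECT pg_is_in_recovery();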
Resolution:
1. We have to take the server out of recovery (read-only) or standby mode.
2. Go to your data (main) directory and remove the standby.signal and recovery.signal files.
Before performing the following action, take a backup of these files in some other location.
File location (on Ubuntu; change to your respective version number):
/var/lib/postgresql/13/main
3. After removing the signal files, restart the PostgreSQL cluster.
sudo systemctl restart postgresql
4. You will then be able to connect to the server and perform all actions as usual.
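As an illustration, on Ubuntu with PostgreSQL 13 (adjust the path and version to your installation; the backup directory below is just an example), steps 2 and 3 might look like this:
sudo mkdir -p /root/signal-backup                                                      # example backup location
sudo mv /var/lib/postgresql/13/main/standby.signal /root/signal-backup/ 2>/dev/null    # moving doubles as backup + removal
sudo mv /var/lib/postgresql/13/main/recovery.signal /root/signal-backup/ 2>/dev/null   # only the file(s) that exist will be moved
sudo systemctl restart postgresql
Only do this if the server is genuinely not meant to be a standby any more; removing the signal files on a replica that should keep replicating turns it into an independent, writable server.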
I’m trying to set up the pgexercises data on my local machine. When I run:
psql -U <username> -f clubdata.sql -d postgres -x
I get the error:
psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction
Why did it create a read-only database on my local machine? Can I change this?
asked Jul 2, 2015 at 13:48
Normally the most plausible reasons for this kind of error are:
- trying CREATE statements on a read-only replica (the entire instance is read-only).
- <username> has default_transaction_read_only set to ON
- the database has default_transaction_read_only set to ON
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>. And it wouldn’t work if any of the reasons above were directly applicable.
One possibility that would technically explain this would be that default_transaction_read_only is ON in the postgresql.conf file, and set to OFF for the database postgres, the one that the invocation of psql connects to, through an ALTER DATABASE statement that supersedes the configuration file.
That would be why CREATE DATABASE works, but then as soon as it connects to a different database with \c, the default_transaction_read_only setting of the session would flip to ON.
But of course that would be a pretty weird and unusual configuration.
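To check whether such a per-database override actually exists, one way (assuming you can connect to any database at all) is to look at the effective setting and at pg_db_role_setting; the ALTER DATABASE at the end only illustrates how the hypothesized configuration would have been created:
SHOW default_transaction_read_only;               -- effective value for the current session
SELECT d.datname, s.setconfig                     -- per-database overrides, if any
FROM pg_db_role_setting s
JOIN pg_database d ON d.oid = s.setdatabase;
ALTER DATABASE postgres SET default_transaction_read_only = off;   -- illustration only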
answered Jul 2, 2015 at 16:32
Daniel Vérité
Reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal with dropdb exercises and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
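To double-check the result afterwards, you could connect to the newly created database and confirm the session defaults are no longer read-only, e.g.:
\c exercises
SHOW default_transaction_read_only;   -- expected: off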
answered Jul 4, 2015 at 4:45
ltrainpr
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE TABLE in a read-only transaction and others.
They all followed a cannot execute INSERT in a read-only transaction. It was like the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled when the database could no longer write anything. I am using Postgres on Azure. I don’t know if the same effect would happen if I was on a dedicated server.
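If you suspect the same cause, a rough way to check from SQL (Azure’s portal also exposes storage metrics) is to see whether the read-only default has been flipped and how much space the databases are using:
SHOW default_transaction_read_only;                                        -- on if the server has been forced read-only
SELECT pg_size_pretty(pg_database_size(current_database()));               -- size of the current database
SELECT pg_size_pretty(sum(pg_database_size(datname))) FROM pg_database;    -- total size across all databases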
answered Jan 15, 2019 at 4:52
Brad Mathews
I had the same issue with a PostgreSQL UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it will return either true or false:
SELECT pg_is_in_recovery()
true -> the database has read-only access
false -> the database has full access
If it returns true, check with the DBA team for full access, and also try ping from the command prompt to ensure connectivity:
ping <database hostname or dns>
Also verify whether you have a primary and a standby node for the database.
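To verify the primary/standby layout, pg_is_in_recovery() can be run on each node; on the primary, a quick way to list the connected standbys is:
SELECT client_addr, state, sync_state
FROM pg_stat_replication;    -- one row per standby connected to this primary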
answered Jun 16, 2021 at 16:22
Bharathiraja
In my case I had a master and replication nodes, and the master node became a replication node, which I believe switched it into hot_standby mode. So I was trying to write data into a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns true then it is "read-only", and I suppose you should switch to using whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks go to Craig Ringer!
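If the node really should become writable again (rather than pointing the application at the current master), the standby has to be promoted. A minimal sketch, assuming PostgreSQL 12 or later and that no other primary is still running (to avoid split-brain):
SELECT pg_promote();    -- run on the standby; PostgreSQL 12+
-- alternatively, from the shell: pg_ctl promote -D /path/to/data/directory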
answered May 6, 2021 at 9:12
vladimirror
DBeaver: in my case, the connection’s read-only option was enabled (the original answer showed a screenshot of the setting), which turned out to be the cause.
answered Oct 4, 2022 at 4:00
This doesn’t quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection URL), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
answered May 7, 2020 at 15:35
ps2goat
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
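While the WAL replay is still running, you can watch its progress from a read-only session, for example:
SELECT pg_last_wal_replay_lsn(), pg_last_xact_replay_timestamp();   -- how far recovery has replayed so far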
answered Dec 6, 2019 at 15:28
Kevin Parker
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up on the Dataclips tab, I did this:
Choose Resources from (Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
Choose Settings out of (Overview, Durability, Settings, Dataclips).
Then, in Administration -> Database Credentials, choose View Credentials...
Then open a terminal, fill in that info here, and press Enter:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy-paste it from there and you can run Postgres commands.
answered May 27, 2021 at 18:42
shekhar g h
I suddenly started facing this error with Postgres installed on my Windows machine when I was running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new connection.
answered Apr 8, 2022 at 9:55
Muhammad Awais
I just had this error. My cause was not granting permission on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
answered Oct 13, 2022 at 16:18
Ron Piggott
If you are facing this issue with an RDS cluster, check your endpoint and use the writer instance endpoint; then it should work.
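To confirm which instance a session actually landed on and whether it is read-only, a couple of generic checks (standard PostgreSQL, not RDS-specific) are:
SELECT inet_server_addr(), inet_server_port();   -- address of the instance this session is connected to
SHOW transaction_read_only;                      -- typically on when connected to a reader/replica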
answered Nov 23, 2022 at 11:46
Aditya Y
Relevant system information:
- PostgreSQL 12.2 on x86_64-pc-linux-gnu, compiled by gcc, 64-bit
- TimescaleDB version (output of \dx in psql): 1.7.0
- Installation method: Yum
Describe the bug
Calling drop_chunks() on a hypertable successfully drops chunks even with transactions set to read_only.
It’s worth noting it fails as expected when cascade_to_materializations is set to true and there are materializations to cascade to.
To Reproduce
Steps to reproduce the behavior:
- Create a hypertable and add some data (a minimal setup sketch follows these steps).
- Create a continuous aggregate on that hypertable.
- set transactions to read only
set default_transaction_read_only=on;
- Call drop_chunks:
SELECT count(*)
FROM drop_chunks(INTERVAL '4 days', 'some_hypertable', cascade_to_materializations => TRUE);
It will fail:
ERROR: cannot execute DELETE in a read-only transaction
CONTEXT: SQL statement "DELETE FROM _timescaledb_internal._materialized_hypertable_5 AS D WHERE D.chunk_id = $1"
Set cascade_to_materializations to FALSE
SELECT count(*)
FROM drop_chunks(INTERVAL '4 days', 'some_hypertable', cascade_to_materializations => FALSE);
and it will succeed.
This ALSO succeeds regardless of the cascade setting if the hypertable does not have any materializations attached to it.
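For steps 1-2 above, a minimal setup could look like the following sketch (table and column names are made up for illustration; continuous-aggregate syntax as used by TimescaleDB 1.x):
-- 1. create a hypertable and add some data
CREATE TABLE some_hypertable(time timestamptz NOT NULL, device int, value float);
SELECT create_hypertable('some_hypertable', 'time');
INSERT INTO some_hypertable
SELECT t, 1, random()
FROM generate_series(now() - interval '10 days', now(), interval '1 hour') AS t;

-- 2. create a continuous aggregate on that hypertable (TimescaleDB 1.x syntax)
CREATE VIEW some_hypertable_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', time) AS bucket, avg(value)
FROM some_hypertable
GROUP BY 1;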
Expected behavior
I would expect read_only transactions to cause drop_chunks to fail.
Actual behavior
Chunks were actually dropped if no CASCADE action happened.
I am trying to set up 2-node BDR replication and I am getting the following error while running bdr_init_copy:
/usr/pgsql-9.4/bin/bdr_init_copy -d "host=10.10.145.111 dbname=apimgtdb port=5432" -U postgres --local-dbname="host=10.10.145.110 dbname=apimgtdb port=5432" -n nodeawswest -D /postgres/pgsql-9.4/data
< 2015-06-05 14:56:55.846 PDT >LOG: database system was interrupted; last known up at 2015-06-05 14:49:09 PDT
< 2015-06-05 14:56:55.951 PDT >LOG: entering standby mode
< 2015-06-05 14:56:55.951 PDT >LOG: starting up replication identifier with ckpt at 9/F5000060
< 2015-06-05 14:56:55.956 PDT >LOG: redo starts at 9/F5000060
< 2015-06-05 14:56:55.972 PDT >LOG: consistent recovery state reached at 9/F5000128
< 2015-06-05 14:56:55.972 PDT >LOG: database system is ready to accept read only connections
< 2015-06-05 14:56:55.977 PDT >LOG: started streaming WAL from primary at 9/F6000000 on timeline 1
< 2015-06-05 14:56:56.639 PDT >LOG: recovery stopping at restore point "bdr_6157013497737678047", time 2015-06-05 14:56:57.137408-07
< 2015-06-05 14:56:56.640 PDT >LOG: recovery has paused
< 2015-06-05 14:56:56.640 PDT >HINT: Execute pg_xlog_replay_resume() to continue.
< 2015-06-05 14:56:56.821 PDT >ERROR: cannot execute DELETE in a read-only transaction
< 2015-06-05 14:56:56.821 PDT >STATEMENT: DELETE FROM pg_catalog.pg_shseclabel WHERE provider = 'bdr';
< 2015-06-05 14:56:56.837 PDT >LOG: received smart shutdown request
< 2015-06-05 14:56:56.838 PDT >FATAL: terminating walreceiver process due to administrator command
< 2015-06-05 14:56:56.840 PDT >LOG: shutting down
< 2015-06-05 14:56:58.169 PDT >LOG: database system is shut down
Craig Ringer
asked Jun 5, 2015 at 22:26
Summary
Jira will display the following error in the logs and fail to run properly.
Environment
Jira Data Center connected to an Amazon Aurora PostgreSQL-compatible clustered database.
Diagnosis
The following errors can be observed in the atlassian-jira.log file:
2022-02-10 15:43:51,539+0000 Caesium-1-2 ERROR anonymous System [c.a.beehive.db.DatabaseClusterLock] Unable to unlock com.atlassian.beehive.db.DatabaseClusterLock@37b9c552, Number of retries exceeded, rethrowing .Last events on this node at 1644507831539: [lastError 103ms ago, lastLock 49842ms ago, lastUnlock 105882ms ago, lastRenewal 164346ms ago] Lock statistics: [failByError=3, lastAccess=1644507831436, lockSuccess=1038, unlockSuccess=1037, averageHoldTimeMillis=2644]
com.querydsl.core.QueryException: Caught PSQLException for update public.clusterlockstatus
set locked_by_node = ?, update_time = ?
where clusterlockstatus.lock_name = ? and clusterlockstatus.locked_by_node = ?
Caused by: org.postgresql.util.PSQLException: ERROR: cannot execute UPDATE in a read-only transaction
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2552)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2284)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:322)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:481)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:401)
at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:130)
at org.apache.commons.dbcp2.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:98)
at org.apache.commons.dbcp2.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:98)
at com.atlassian.jira.ofbiz.sql.PreparedStatementWrapper.executeUpdate(PreparedStatementWrapper.java:47)
at com.atlassian.jira.diagnostic.connection.DiagnosticPreparedStatement.lambda$executeUpdate$7(DiagnosticPreparedStatement.java:69)
Cause
This error typically occurs when clustered databases such as Amazon Aurora are in recovery or have recently had a failover.
You can confirm with the following commands:
- The PostgreSQL cluster is in recovery:
SELECT pg_is_in_recovery();
- The parameter default_transaction_read_only is set to on:
SHOW default_transaction_read_only;
Solution
Ensure that you are using the correct validation query in the dbconfig.xml file as noted in Surviving Connection Closures:
| Database type | Validation query | Validation query timeout |
|---|---|---|
| MySQL | | 3 |
| Microsoft SQL Server | | N/A |
| Oracle | | N/A |
| PostgreSQL | | N/A |
If the validation query is not correct, Jira will need to be restarted.