Error must be owner of database

I am working through Michael Hartl’s excellent tutorial but when trying to prepare the test database with the command:
bundle exec rake db:test:prepare
I get this error message:

ERROR: must be owner of database sample_app_test…

which I never got when using the development database, because I had created the following database role for my Rails app:

CREATE ROLE demo_app WITH CREATEDB LOGIN

(this is using PostgreSQL)

Does anyone understand why this is failing in the test environment?
TIA…

Did you check the ownership of the test DB? Try running the \l command in the psql console and check the owners. You can also try the following statement:

ALTER DATABASE sample_app_test OWNER TO demo_app;
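
If \l shows a different owner, the ALTER DATABASE above has to be run by the current owner or a superuser. A minimal sketch, using the role and database names from the question:

sudo -u postgres psql -c 'ALTER DATABASE sample_app_test OWNER TO demo_app;'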

First post, writing this down for posterity. I was having the same problem but was able to fix it. You just have to make sure you are signed in as a superuser when you create your databases (or at least the one that is throwing the error).

I was logging into psql with this command:

sudo sudo -u postgres psql 

And created my databases. This is bad. You want to log in as the postgres superuser instead:

sudo su - postgres

And then after you’re logged in to postgres:

psql 

Then create your databases. You can kill your old databases with the command:

DROP DATABASE "database_to_drop";

Recreate them and you should be good to go!
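
For the names used in the question, the whole sequence might look like this (a sketch; adjust the names to your setup):

sudo su - postgres
psql

and then, inside psql:

DROP DATABASE sample_app_test;
CREATE DATABASE sample_app_test OWNER demo_app;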

What you see there with psql's \l only means that the Postgres role admin has been granted a couple of privileges by the Postgres role postgres, which is the default superuser and probably the actual owner of the database.

Quoting the manual:

If a superuser chooses to issue a GRANT or REVOKE command, the command
is performed as though it were issued by the owner of the affected
object. In particular, privileges granted via such a command will
appear to have been granted by the object owner.

The privileges granted are:

C -- CREATE
c -- CONNECT
T -- TEMPORARY

CONNECT and TEMPORARY on databases are also default privileges for PUBLIC, but what we see here has been granted explicitly.

The privilege to drop the DB is not granted, and it cannot be: only the owner of the database (or a superuser) can drop it. Hence the error message:

ERROR: must be owner of database

Typically, each shell user connects with a Postgres role of the same name. So, given you have the necessary sudo rights, this should do it:

sudo -u postgres dropdb my_database_name_here
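
To double-check the current owner first, the same approach works with psql's \l (the database name is a placeholder):

sudo -u postgres psql -c '\l my_database_name_here'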

Or switch to the shell user postgres as demonstrated here:

  • PostgreSQL error: Fatal: role «username» does not exist

But you may have to disconnect other users first, as sketched below. See:

  • Force drop db while others may be connected
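
On PostgreSQL 9.2 or later, a sketch of disconnecting the other sessions first, run as a superuser (the database name is again a placeholder):

SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'my_database_name_here'
  AND pid <> pg_backend_pid();
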
ISSUE TYPE
  • Bug Report
SUMMARY

I get several errors in the postgres pod after a fresh install.

ENVIRONMENT
  • AWX version: 9.3
  • AWX install method: kubernetes
  • Ansible version: 2.9.4
  • Operating System: Ubuntu 18.04
  • Web Browser: Firefox / chromium

Helm postgresql chart: postgresql-8.3.0 (app version 11.6.0)

STEPS TO REPRODUCE

Inventory file

localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env python"

[all:vars]
dockerhub_base=ansible
# Kubernetes Install
kubernetes_context=awx-prp
kubernetes_namespace=awx-prp
kubernetes_deployment_name=awx-prp
kubernetes_deployment_replica_size=1
tiller_namespace=kube-system

pg_volume_capacity=20

# Kubernetes Ingress Configuration
kubernetes_ingress_hostname=cloud-awx-z00n-prp.cloud.620nm.net
kubernetes_ingress_annotations={'kubernetes.io/ingress.class': 'nginx'}
kubernetes_ingress_tls_secret=wildcard.cloud.620nm.net

# Common Docker parameters
awx_task_hostname=awx
awx_web_hostname=awxweb
postgres_data_dir="~/.awx/pgdocker"
host_port=80
host_port_ssl=443
#ssl_certificate=
docker_compose_dir="~/.awx/awxcompose"

# Set pg_hostname if you have an external postgres server, otherwise
# a new postgres service will be created
# pg_hostname=postgresql
pg_username=awx
# pg_password should be random 10 character alphanumeric string, when postgresql is running on kubernetes
# NB: it's a limitation of the "official" postgres helm chart
pg_password="<10 character alphanumeric string >"
pg_database=awx
pg_port=5432
#pg_sslmode=require

# pg_admin_password=postgrespass

# RabbitMQ Configuration
rabbitmq_password="< rabbitmq_password >"
rabbitmq_erlang_cookie=cookiemonster

# This will create or update a default admin (superuser) account in AWX, if not provided
# then these default values are used
admin_user=admin
admin_password="< awx_password >"

# Whether or not to create preload data for demonstration purposes
create_preload_data=False

# AWX Secret key
# It's *very* important that this stay the same between upgrades or you will lose the ability to decrypt
# your credentials
secret_key=awxsecret
EXPECTED RESULTS

No error in postgres pod

ACTUAL RESULTS

On the first playbook execution, the «kubernetes : create django super user if it does not exist» task fails.

TASK [kubernetes : Check for Tower Super users] *******************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "kubectl -n awx-prp exec ansible-tower-management --  bash -c \"echo 'from django.contrib.auth.models import User; nsu = User.objects.filter(is_superuser=True).count(); exit(0 if nsu > 0 else 1)' | awx-manage shell\"\n", "delta": "0:00:02.624181", "end": "2020-03-31 12:28:16.431460", "msg": "non-zero return code", "rc": 1, "start": "2020-03-31 12:28:13.807279", "stderr": "command terminated with exit code 1", "stderr_lines": ["command terminated with exit code 1"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [kubernetes : create django super user if it does not exist] *************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true}

If I re-run the playbook, Ansible completes successfully, but in the postgres log I have:

ERROR:  must be owner of database awx
STATEMENT:  ALTER DATABASE "awx" OWNER TO "awx"
kubectl exec -it awx-prp-postgresql-postgresql-0 -- /bin/bash
I have no name!@awx-prp-postgresql-postgresql-0:/$ 
I have no name!@awx-prp-postgresql-postgresql-0:/$ psql --user awx
Password for user awx: 
psql (11.6)
Type "help" for help.

awx=> \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 awx       | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres         +
           |          |          |             |             | postgres=CTc/postgres+
           |          |          |             |             | awx=CTc/postgres
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(4 rows)

awx=> \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 awx       | Create DB                                                  | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

awx=> 
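
A possible workaround, assuming the postgres superuser password is available (e.g. set via pg_admin_password), is to fix the ownership by hand from inside the pod:

kubectl exec -it awx-prp-postgresql-postgresql-0 -- psql -U postgres -c 'ALTER DATABASE "awx" OWNER TO "awx";'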

ADDITIONAL INFORMATION

ansible-playbook install.yml -i ../inventory_os_k8s_prp

playbook logs

PLAY [Build and deploy AWX] ***************************************************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [localhost]

TASK [check_vars : include_tasks] *********************************************************************************************************************************************************************************
skipping: [localhost]

TASK [check_vars : include_tasks] *********************************************************************************************************************************************************************************
included: /home/mrkw3241/work/k8s/awx/installer/roles/check_vars/tasks/check_docker.yml for localhost

TASK [check_vars : postgres_data_dir should be defined] ***********************************************************************************************************************************************************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [check_vars : host_port should be defined] *******************************************************************************************************************************************************************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [image_build : Set global version if not provided] ***********************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Verify awx-logos directory exists for official install] ***************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Copy logos for inclusion in sdist] ************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Set sdist file name] **************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : AWX Distribution] *****************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Stat distribution file] ***********************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Clean distribution] ***************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Build sdist builder image] ********************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Build AWX distribution using container] *******************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Build AWX distribution locally] ***************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Set docker build base path] *******************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Set awx_web image name] ***********************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Set awx_task image name] **********************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Ensure directory exists] **********************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Stage sdist] **********************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Template web Dockerfile] **********************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Template task Dockerfile] *********************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Stage launch_awx] *****************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Stage launch_awx_task] ************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Stage google-cloud-sdk.repo] ******************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Stage supervisor.conf] ************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Stage supervisor_task.conf] *******************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Stage settings.py] ****************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Stage requirements] ***************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Stage config watcher] *************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Stage Makefile] *******************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Build base web image] *************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Build base task image] ************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Tag task and web images as latest] ************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_build : Clean docker base directory] ******************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_push : Authenticate with Docker registry if registry password given] **********************************************************************************************************************************
skipping: [localhost]

TASK [image_push : Remove web image] ******************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_push : Remove task image] *****************************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_push : Tag and push web image to registry] ************************************************************************************************************************************************************
skipping: [localhost]

TASK [image_push : Tag and push task image to registry] ***********************************************************************************************************************************************************
skipping: [localhost]

TASK [image_push : Set full image path for Registry] **************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : fail] ******************************************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : include_tasks] *********************************************************************************************************************************************************************************
skipping: [localhost] => (item=openshift_auth.yml) 
skipping: [localhost] => (item=openshift.yml) 

TASK [kubernetes : include_tasks] *********************************************************************************************************************************************************************************
included: /home/mrkw3241/work/k8s/awx/installer/roles/kubernetes/tasks/kubernetes_auth.yml for localhost
included: /home/mrkw3241/work/k8s/awx/installer/roles/kubernetes/tasks/kubernetes.yml for localhost

TASK [kubernetes : Set the Kubernetes Context] ********************************************************************************************************************************************************************
changed: [localhost]

TASK [kubernetes : Get Namespace Detail] **************************************************************************************************************************************************************************
changed: [localhost]

TASK [kubernetes : Create AWX Kubernetes Project] *****************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Set postgresql service name] *******************************************************************************************************************************************************************
ok: [localhost]

TASK [kubernetes : Get Kubernetes API version] ********************************************************************************************************************************************************************
changed: [localhost]

TASK [kubernetes : Extract server version from command output] ****************************************************************************************************************************************************
ok: [localhost]

TASK [kubernetes : Use kubectl or oc] *****************************************************************************************************************************************************************************
ok: [localhost]

TASK [kubernetes : set_fact] **************************************************************************************************************************************************************************************
ok: [localhost]

TASK [kubernetes : Record deployment size] ************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "kubectl get sts  awx-prp  -n awx-prp -o=jsonpath='{.status.replicas}'\n", "delta": "0:00:00.453192", "end": "2020-03-31 12:25:54.961335", "msg": "non-zero return code", "rc": 1, "start": "2020-03-31 12:25:54.508143", "stderr": "Error from server (NotFound): statefulsets.apps \"awx-prp\" not found", "stderr_lines": ["Error from server (NotFound): statefulsets.apps \"awx-prp\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [kubernetes : Set expected post-deployment Replicas value] ***************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Delete existing Deployment] ********************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Get Postgres Service Detail] *******************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "kubectl describe svc awx-prp-postgresql -n awx-prp", "delta": "0:00:00.367317", "end": "2020-03-31 12:25:55.641523", "msg": "non-zero return code", "rc": 1, "start": "2020-03-31 12:25:55.274206", "stderr": "Error from server (NotFound): services \"awx-prp-postgresql\" not found", "stderr_lines": ["Error from server (NotFound): services \"awx-prp-postgresql\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [kubernetes : Template PostgreSQL Deployment (OpenShift)] ****************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Deploy and Activate Postgres (OpenShift)] ******************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Create Temporary Values File (Kubernetes)] *****************************************************************************************************************************************************
changed: [localhost]

TASK [kubernetes : Populate Temporary Values File (Kubernetes)] ***************************************************************************************************************************************************
changed: [localhost]

TASK [kubernetes : Deploy and Activate Postgres (Kubernetes)] *****************************************************************************************************************************************************
changed: [localhost]

TASK [kubernetes : Remove tempfile] *******************************************************************************************************************************************************************************
changed: [localhost]

TASK [kubernetes : Set postgresql hostname to helm package service (Kubernetes)] **********************************************************************************************************************************
ok: [localhost]

TASK [kubernetes : Wait for Postgres to activate] *****************************************************************************************************************************************************************
Pausing for 60 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [localhost]

TASK [kubernetes : Check if Postgres 9.6 is being used] ***********************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Set new pg image] ******************************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Wait for change to take affect] ****************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Set env var for pg upgrade] ********************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Wait for change to take affect] ****************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Set env var for new pg version] ****************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Wait for Postgres to redeploy] *****************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Wait for Postgres to finish upgrading] *********************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Unset upgrade env var] *************************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Wait for Postgres to redeploy] *****************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Set task image name] ***************************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Set web image name] ****************************************************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Generate SSL certificates for RabbitMQ, if needed] *********************************************************************************************************************************************
skipping: [localhost]

TASK [kubernetes : Determine StatefulSet api version] *************************************************************************************************************************************************************
ok: [localhost]

TASK [kubernetes : Render deployment templates] *******************************************************************************************************************************************************************
ok: [localhost] => (item=None)
ok: [localhost] => (item=None)
ok: [localhost] => (item=None)
ok: [localhost]

TASK [kubernetes : Apply Deployment] ******************************************************************************************************************************************************************************
changed: [localhost] => (item=None)
changed: [localhost] => (item=None)
changed: [localhost] => (item=None)
changed: [localhost]

TASK [kubernetes : Delete any existing management pod] ************************************************************************************************************************************************************
changed: [localhost]

TASK [kubernetes : Template management pod] ***********************************************************************************************************************************************************************
ok: [localhost]

TASK [kubernetes : Create management pod] *************************************************************************************************************************************************************************
changed: [localhost]

TASK [kubernetes : Wait for management pod to start] **************************************************************************************************************************************************************
FAILED - RETRYING: Wait for management pod to start (60 retries left).
changed: [localhost]

TASK [kubernetes : Migrate database] ******************************************************************************************************************************************************************************
changed: [localhost]

TASK [kubernetes : Check for Tower Super users] *******************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "kubectl -n awx-prp exec ansible-tower-management --  bash -c \"echo 'from django.contrib.auth.models import User; nsu = User.objects.filter(is_superuser=True).count(); exit(0 if nsu > 0 else 1)' | awx-manage shell\"\n", "delta": "0:00:02.624181", "end": "2020-03-31 12:28:16.431460", "msg": "non-zero return code", "rc": 1, "start": "2020-03-31 12:28:13.807279", "stderr": "command terminated with exit code 1", "stderr_lines": ["command terminated with exit code 1"], "stdout": "", "stdout_lines": []}
...ignoring

TASK [kubernetes : create django super user if it does not exist] *************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true}

PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost                  : ok=30   changed=15   unreachable=0    failed=1    skipped=57   rescued=0    ignored=3   

kubectl logs awx-prp-postgresql-postgresql-0

postgres logs

+ id
current user id: uid=0(root) gid=0(root) groups=0(root),1001
+ echo current user id: uid=0(root) gid=0(root) groups=0(root),1001
+ mkdir -p /bitnami/postgresql/data
+ chmod 700 /bitnami/postgresql/data
+ find /bitnami/postgresql -mindepth 1 -maxdepth 1 -not -name .snapshot -not -name lost+found
+ xargs chown -R 1001:1001
+ chmod -R 777 /dev/shm
postgresql 10:26:17.28 
postgresql 10:26:17.29 Welcome to the Bitnami postgresql container
postgresql 10:26:17.29 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
postgresql 10:26:17.29 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
postgresql 10:26:17.29 Send us your feedback at containers@bitnami.com
postgresql 10:26:17.29 
postgresql 10:26:17.31 INFO  ==> ** Starting PostgreSQL setup **
postgresql 10:26:17.36 INFO  ==> Validating settings in POSTGRESQL_* env vars..
postgresql 10:26:17.36 INFO  ==> Loading custom pre-init scripts...
postgresql 10:26:17.36 INFO  ==> Initializing PostgreSQL database...
postgresql 10:26:17.38 INFO  ==> postgresql.conf file not detected. Generating it...
postgresql 10:26:17.39 INFO  ==> pg_hba.conf file not detected. Generating it...
postgresql 10:26:17.39 INFO  ==> Generating local authentication configuration
postgresql 10:26:18.53 INFO  ==> Starting PostgreSQL in background...
postgresql 10:26:18.91 INFO  ==> Creating user awx
postgresql 10:26:18.92 INFO  ==> Grating access to "awx" to the database "awx"
postgresql 10:26:18.94 INFO  ==> Configuring replication parameters
postgresql 10:26:18.96 INFO  ==> Configuring fsync
postgresql 10:26:18.97 INFO  ==> Loading custom scripts...
postgresql 10:26:18.97 INFO  ==> Enabling remote connections
postgresql 10:26:18.98 INFO  ==> Stopping PostgreSQL...

postgresql 10:26:19.99 INFO  ==> ** PostgreSQL setup finished! **
postgresql 10:26:20.04 INFO  ==> ** Starting PostgreSQL **
2020-03-31 10:26:20.063 GMT [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2020-03-31 10:26:20.063 GMT [1] LOG:  listening on IPv6 address "::", port 5432
2020-03-31 10:26:20.066 GMT [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-03-31 10:26:20.080 GMT [213] LOG:  database system was shut down at 2020-03-31 10:26:19 GMT
2020-03-31 10:26:20.085 GMT [1] LOG:  database system is ready to accept connections
2020-03-31 10:27:12.953 GMT [268] LOG:  incomplete startup packet
2020-03-31 10:27:14.254 GMT [269] LOG:  incomplete startup packet
2020-03-31 10:27:25.429 GMT [283] ERROR:  relation "conf_setting" does not exist at character 158
2020-03-31 10:27:25.429 GMT [283] STATEMENT:  SELECT "conf_setting"."id", "conf_setting"."created", "conf_setting"."modified", "conf_setting"."key", "conf_setting"."value", "conf_setting"."user_id" FROM "conf_setting" WHERE ("conf_setting"."key" IN ('ACTIVITY_STREAM_ENABLED', 'ACTIVITY_STREAM_ENABLED_FOR_INVENTORY_SYNC', 'ORG_ADMINS_CAN_SEE_ALL_USERS', 'MANAGE_ORGANIZATION_AUTH', 'TOWER_URL_BASE', 'PROXY_IP_WHITELIST', 'LICENSE', 'REDHAT_USERNAME', 'REDHAT_PASSWORD', 'INSTALL_UUID', 'CUSTOM_VENV_PATHS', 'AD_HOC_COMMANDS', 'ALLOW_JINJA_IN_EXTRA_VARS', 'AWX_PROOT_ENABLED', 'AWX_PROOT_BASE_PATH', 'AWX_PROOT_HIDE_PATHS', 'AWX_PROOT_SHOW_PATHS', 'AWX_ISOLATED_VERBOSITY', 'AWX_ISOLATED_CHECK_INTERVAL', 'AWX_ISOLATED_LAUNCH_TIMEOUT', 'AWX_ISOLATED_CONNECTION_TIMEOUT', 'AWX_ISOLATED_HOST_KEY_CHECKING', 'AWX_ISOLATED_KEY_GENERATION', 'AWX_ISOLATED_PRIVATE_KEY', 'AWX_ISOLATED_PUBLIC_KEY', 'AWX_RESOURCE_PROFILING_ENABLED', 'AWX_RESOURCE_PROFILING_CPU_POLL_INTERVAL', 'AWX_RESOURCE_PROFILING_MEMORY_POLL_INTERVAL', 'AWX_RESOURCE_PROFILING_PID_POLL_INTERVAL', 'AWX_TASK_ENV', 'INSIGHTS_TRACKING_STATE', 'PROJECT_UPDATE_VVV', 'AWX_ROLES_ENABLED', 'AWX_COLLECTIONS_ENABLED', 'PRIMARY_GALAXY_URL', 'PRIMARY_GALAXY_USERNAME', 'PRIMARY_GALAXY_PASSWORD', 'PRIMARY_GALAXY_TOKEN', 'PRIMARY_GALAXY_AUTH_URL', 'PUBLIC_GALAXY_ENABLED', 'GALAXY_IGNORE_CERTS', 'STDOUT_MAX_BYTES_DISPLAY', 'EVENT_STDOUT_MAX_BYTES_DISPLAY', 'SCHEDULE_MAX_JOBS', 'AWX_ANSIBLE_CALLBACK_PLUGINS', 'DEFAULT_JOB_TIMEOUT', 'DEFAULT_INVENTORY_UPDATE_TIMEOUT', 'DEFAULT_PROJECT_UPDATE_TIMEOUT', 'ANSIBLE_FACT_CACHE_TIMEOUT', 'MAX_FORKS', 'LOG_AGGREGATOR_HOST', 'LOG_AGGREGATOR_PORT', 'LOG_AGGREGATOR_TYPE', 'LOG_AGGREGATOR_USERNAME', 'LOG_AGGREGATOR_PASSWORD', 'LOG_AGGREGATOR_LOGGERS', 'LOG_AGGREGATOR_INDIVIDUAL_FACTS', 'LOG_AGGREGATOR_ENABLED', 'LOG_AGGREGATOR_TOWER_UUID', 'LOG_AGGREGATOR_PROTOCOL', 'LOG_AGGREGATOR_TCP_TIMEOUT', 'LOG_AGGREGATOR_VERIFY_CERT', 'LOG_AGGREGATOR_LEVEL', 'LOG_AGGREGATOR_AUDIT', 'BROKER_DURABILITY', 'AUTOMATION_ANALYTICS_LAST_GATHER', 'AUTOMATION_ANALYTICS_GATHER_INTERVAL', 'SESSION_COOKIE_AGE', 'SESSIONS_PER_USER', 'AUTH_BASIC_ENABLED', 'OAUTH2_PROVIDER', 'ALLOW_OAUTH2_FOR_EXTERNAL_USERS', 'LOGIN_REDIRECT_OVERRIDE', 'PENDO_TRACKING_STATE', 'CUSTOM_LOGIN_INFO', 'CUSTOM_LOGO', 'MAX_UI_JOB_EVENTS', 'UI_LIVE_UPDATES_ENABLED', 'SOCIAL_AUTH_ORGANIZATION_MAP', 'SOCIAL_AUTH_TEAM_MAP', 'SOCIAL_AUTH_USER_FIELDS', 'AUTH_LDAP_SERVER_URI', 'AUTH_LDAP_BIND_DN', 'AUTH_LDAP_BIND_PASSWORD', 'AUTH_LDAP_START_TLS', 'AUTH_LDAP_CONNECTION_OPTIONS', 'AUTH_LDAP_USER_SEARCH', 'AUTH_LDAP_USER_DN_TEMPLATE', 'AUTH_LDAP_USER_ATTR_MAP', 'AUTH_LDAP_GROUP_SEARCH', 'AUTH_LDAP_GROUP_TYPE', 'AUTH_LDAP_GROUP_TYPE_PARAMS', 'AUTH_LDAP_REQUIRE_GROUP', 'AUTH_LDAP_DENY_GROUP', 'AUTH_LDAP_USER_FLAGS_BY_GROUP', 'AUTH_LDAP_ORGANIZATION_MAP', 'AUTH_LDAP_TEAM_MAP', 'AUTH_LDAP_1_SERVER_URI', 'AUTH_LDAP_1_BIND_DN', 'AUTH_LDAP_1_BIND_PASSWORD', 'AUTH_LDAP_1_START_TLS', 'AUTH_LDAP_1_CONNECTION_OPTIONS', 'AUTH_LDAP_1_USER_SEARCH', 'AUTH_LDAP_1_USER_DN_TEMPLATE', 'AUTH_LDAP_1_USER_ATTR_MAP', 'AUTH_LDAP_1_GROUP_SEARCH', 'AUTH_LDAP_1_GROUP_TYPE', 'AUTH_LDAP_1_GROUP_TYPE_PARAMS', 'AUTH_LDAP_1_REQUIRE_GROUP', 'AUTH_LDAP_1_DENY_GROUP', 'AUTH_LDAP_1_USER_FLAGS_BY_GROUP', 'AUTH_LDAP_1_ORGANIZATION_MAP', 'AUTH_LDAP_1_TEAM_MAP', 'AUTH_LDAP_2_SERVER_URI', 'AUTH_LDAP_2_BIND_DN', 'AUTH_LDAP_2_BIND_PASSWORD', 'AUTH_LDAP_2_START_TLS', 'AUTH_LDAP_2_CONNECTION_OPTIONS', 'AUTH_LDAP_2_USER_SEARCH', 'AUTH_LDAP_2_USER_DN_TEMPLATE', 'AUTH_LDAP_2_USER_ATTR_MAP', 'AUTH_LDAP_2_GROUP_SEARCH', 
'AUTH_LDAP_2_GROUP_TYPE', 'AUTH_LDAP_2_GROUP_TYPE_PARAMS', 'AUTH_LDAP_2_REQUIRE_GROUP', 'AUTH_LDAP_2_DENY_GROUP', 'AUTH_LDAP_2_USER_FLAGS_BY_GROUP', 'AUTH_LDAP_2_ORGANIZATION_MAP', 'AUTH_LDAP_2_TEAM_MAP', 'AUTH_LDAP_3_SERVER_URI', 'AUTH_LDAP_3_BIND_DN', 'AUTH_LDAP_3_BIND_PASSWORD', 'AUTH_LDAP_3_START_TLS', 'AUTH_LDAP_3_CONNECTION_OPTIONS', 'AUTH_LDAP_3_USER_SEARCH', 'AUTH_LDAP_3_USER_DN_TEMPLATE', 'AUTH_LDAP_3_USER_ATTR_MAP', 'AUTH_LDAP_3_GROUP_SEARCH', 'AUTH_LDAP_3_GROUP_TYPE', 'AUTH_LDAP_3_GROUP_TYPE_PARAMS', 'AUTH_LDAP_3_REQUIRE_GROUP', 'AUTH_LDAP_3_DENY_GROUP', 'AUTH_LDAP_3_USER_FLAGS_BY_GROUP', 'AUTH_LDAP_3_ORGANIZATION_MAP', 'AUTH_LDAP_3_TEAM_MAP', 'AUTH_LDAP_4_SERVER_URI', 'AUTH_LDAP_4_BIND_DN', 'AUTH_LDAP_4_BIND_PASSWORD', 'AUTH_LDAP_4_START_TLS', 'AUTH_LDAP_4_CONNECTION_OPTIONS', 'AUTH_LDAP_4_USER_SEARCH', 'AUTH_LDAP_4_USER_DN_TEMPLATE', 'AUTH_LDAP_4_USER_ATTR_MAP', 'AUTH_LDAP_4_GROUP_SEARCH', 'AUTH_LDAP_4_GROUP_TYPE', 'AUTH_LDAP_4_GROUP_TYPE_PARAMS', 'AUTH_LDAP_4_REQUIRE_GROUP', 'AUTH_LDAP_4_DENY_GROUP', 'AUTH_LDAP_4_USER_FLAGS_BY_GROUP', 'AUTH_LDAP_4_ORGANIZATION_MAP', 'AUTH_LDAP_4_TEAM_MAP', 'AUTH_LDAP_5_SERVER_URI', 'AUTH_LDAP_5_BIND_DN', 'AUTH_LDAP_5_BIND_PASSWORD', 'AUTH_LDAP_5_START_TLS', 'AUTH_LDAP_5_CONNECTION_OPTIONS', 'AUTH_LDAP_5_USER_SEARCH', 'AUTH_LDAP_5_USER_DN_TEMPLATE', 'AUTH_LDAP_5_USER_ATTR_MAP', 'AUTH_LDAP_5_GROUP_SEARCH', 'AUTH_LDAP_5_GROUP_TYPE', 'AUTH_LDAP_5_GROUP_TYPE_PARAMS', 'AUTH_LDAP_5_REQUIRE_GROUP', 'AUTH_LDAP_5_DENY_GROUP', 'AUTH_LDAP_5_USER_FLAGS_BY_GROUP', 'AUTH_LDAP_5_ORGANIZATION_MAP', 'AUTH_LDAP_5_TEAM_MAP', 'RADIUS_SERVER', 'RADIUS_PORT', 'RADIUS_SECRET', 'TACACSPLUS_HOST', 'TACACSPLUS_PORT', 'TACACSPLUS_SECRET', 'TACACSPLUS_SESSION_TIMEOUT', 'TACACSPLUS_AUTH_PROTOCOL', 'SOCIAL_AUTH_GOOGLE_OAUTH2_KEY', 'SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET', 'SOCIAL_AUTH_GOOGLE_OAUTH2_WHITELISTED_DOMAINS', 'SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS', 'SOCIAL_AUTH_GOOGLE_OAUTH2_ORGANIZATION_MAP', 'SOCIAL_AUTH_GOOGLE_OAUTH2_TEAM_MAP', 'SOCIAL_AUTH_GITHUB_KEY', 'SOCIAL_AUTH_GITHUB_SECRET', 'SOCIAL_AUTH_GITHUB_ORGANIZATION_MAP', 'SOCIAL_AUTH_GITHUB_TEAM_MAP', 'SOCIAL_AUTH_GITHUB_ORG_KEY', 'SOCIAL_AUTH_GITHUB_ORG_SECRET', 'SOCIAL_AUTH_GITHUB_ORG_NAME', 'SOCIAL_AUTH_GITHUB_ORG_ORGANIZATION_MAP', 'SOCIAL_AUTH_GITHUB_ORG_TEAM_MAP', 'SOCIAL_AUTH_GITHUB_TEAM_KEY', 'SOCIAL_AUTH_GITHUB_TEAM_SECRET', 'SOCIAL_AUTH_GITHUB_TEAM_ID', 'SOCIAL_AUTH_GITHUB_TEAM_ORGANIZATION_MAP', 'SOCIAL_AUTH_GITHUB_TEAM_TEAM_MAP', 'SOCIAL_AUTH_AZUREAD_OAUTH2_KEY', 'SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET', 'SOCIAL_AUTH_AZUREAD_OAUTH2_ORGANIZATION_MAP', 'SOCIAL_AUTH_AZUREAD_OAUTH2_TEAM_MAP', 'SOCIAL_AUTH_SAML_SP_ENTITY_ID', 'SOCIAL_AUTH_SAML_SP_PUBLIC_CERT', 'SOCIAL_AUTH_SAML_SP_PRIVATE_KEY', 'SOCIAL_AUTH_SAML_ORG_INFO', 'SOCIAL_AUTH_SAML_TECHNICAL_CONTACT', 'SOCIAL_AUTH_SAML_SUPPORT_CONTACT', 'SOCIAL_AUTH_SAML_ENABLED_IDPS', 'SOCIAL_AUTH_SAML_SECURITY_CONFIG', 'SOCIAL_AUTH_SAML_SP_EXTRA', 'SOCIAL_AUTH_SAML_EXTRA_DATA', 'SOCIAL_AUTH_SAML_ORGANIZATION_MAP', 'SOCIAL_AUTH_SAML_TEAM_MAP', 'SOCIAL_AUTH_SAML_ORGANIZATION_ATTR', 'SOCIAL_AUTH_SAML_TEAM_ATTR') AND "conf_setting"."user_id" IS NULL) ORDER BY "conf_setting"."id" ASC
2020-03-31 10:27:25.647 GMT [283] ERROR:  relation "django_migrations" does not exist at character 124
2020-03-31 10:27:25.647 GMT [283] STATEMENT:  SELECT "django_migrations"."id", "django_migrations"."app", "django_migrations"."name", "django_migrations"."applied" FROM "django_migrations" WHERE ("django_migrations"."app" = 'main' AND "django_migrations"."name" IN ('0005a_squashed_v310_v313_updates', '0005b_squashed_v310_v313_updates'))
2020-03-31 10:27:25.662 GMT [283] ERROR:  relation "django_migrations" does not exist at character 124
2020-03-31 10:27:25.662 GMT [283] STATEMENT:  SELECT "django_migrations"."id", "django_migrations"."app", "django_migrations"."name", "django_migrations"."applied" FROM "django_migrations" WHERE ("django_migrations"."app" = 'main' AND NOT ("django_migrations"."name"::text LIKE '%squashed%')) ORDER BY "django_migrations"."id" DESC  LIMIT 1
2020-03-31 10:27:25.665 GMT [283] ERROR:  relation "django_migrations" does not exist at character 124
2020-03-31 10:27:25.665 GMT [283] STATEMENT:  SELECT "django_migrations"."id", "django_migrations"."app", "django_migrations"."name", "django_migrations"."applied" FROM "django_migrations" WHERE ("django_migrations"."app" = 'main' AND NOT ("django_migrations"."name"::text LIKE '%squashed%')) ORDER BY "django_migrations"."id" DESC  LIMIT 1
2020-03-31 10:27:25.676 GMT [283] ERROR:  relation "django_migrations" does not exist at character 124
2020-03-31 10:27:25.676 GMT [283] STATEMENT:  SELECT "django_migrations"."id", "django_migrations"."app", "django_migrations"."name", "django_migrations"."applied" FROM "django_migrations" WHERE ("django_migrations"."app" = 'main' AND NOT ("django_migrations"."name"::text LIKE '%squashed%')) ORDER BY "django_migrations"."id" DESC  LIMIT 1
2020-03-31 10:27:25.681 GMT [283] ERROR:  relation "django_migrations" does not exist at character 124
2020-03-31 10:27:25.681 GMT [283] STATEMENT:  SELECT "django_migrations"."id", "django_migrations"."app", "django_migrations"."name", "django_migrations"."applied" FROM "django_migrations" WHERE ("django_migrations"."app" = 'main' AND NOT ("django_migrations"."name"::text LIKE '%squashed%')) ORDER BY "django_migrations"."id" DESC  LIMIT 1
2020-03-31 10:27:25.690 GMT [283] ERROR:  relation "django_migrations" does not exist at character 124
2020-03-31 10:27:25.690 GMT [283] STATEMENT:  SELECT "django_migrations"."id", "django_migrations"."app", "django_migrations"."name", "django_migrations"."applied" FROM "django_migrations" WHERE ("django_migrations"."app" = 'main' AND NOT ("django_migrations"."name"::text LIKE '%squashed%')) ORDER BY "django_migrations"."id" DESC  LIMIT 1
2020-03-31 10:27:25.692 GMT [283] ERROR:  relation "django_migrations" does not exist at character 124
2020-03-31 10:27:25.692 GMT [283] STATEMENT:  SELECT "django_migrations"."id", "django_migrations"."app", "django_migrations"."name", "django_migrations"."applied" FROM "django_migrations" WHERE ("django_migrations"."app" = 'main' AND NOT ("django_migrations"."name"::text LIKE '%squashed%')) ORDER BY "django_migrations"."id" DESC  LIMIT 1
2020-03-31 10:28:19.603 GMT [359] ERROR:  column "deleted_actor" of relation "main_activitystream" does not exist at character 85
2020-03-31 10:28:19.603 GMT [359] STATEMENT:  INSERT INTO "main_activitystream" ("actor_id", "operation", "timestamp", "changes", "deleted_actor", "action_node", "object_relationship_type", "object1", "object2", "setting") VALUES (NULL, 'create', '2020-03-31T10:28:19.598577+00:00'::timestamptz, '{"username": "admin", "first_name": "", "last_name": "", "email": "root@localhost", "is_superuser": true, "password": "hidden", "id": 1}', NULL, 'ansible-tower-management', '', 'user', '', '{}') RETURNING "main_activitystream"."id"

kubectl logs awx-prp-0 -f --all-containers

all containers awx-prp-0 logs

Using /etc/ansible/ansible.cfg as config file
127.0.0.1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
2020-03-31 10:27:21.608 [info] <0.8.0> Feature flags: list of feature flags found:
2020-03-31 10:27:21.608 [info] <0.8.0> Feature flags: feature flag states written to disk: yes
2020-03-31 10:27:21.634 [info] <0.239.0> 
    "elapsed": 0,
    "match_groupdict": {},
    "match_groups": [],
    "path": null,
    "port": 5432,
    "search_regex": null,
    "state": "started"
}
Using /etc/ansible/ansible.cfg as config file
127.0.0.1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
 Starting RabbitMQ 3.7.21 on Erlang 22.1.7
 Copyright (C) 2007-2019 Pivotal Software, Inc.
 Licensed under the MPL.  See https://www.rabbitmq.com/

  ##  ##
  ##  ##      RabbitMQ 3.7.21. Copyright (C) 2007-2019 Pivotal Software, Inc.
  ##########  Licensed under the MPL.  See https://www.rabbitmq.com/
  ######  ##
  ##########  Logs: <stdout>

              Starting broker...
2020-03-31 10:27:21.635 [info] <0.239.0> 
 node           : rabbit@10.233.93.26
 home dir       : /var/lib/rabbitmq
 config file(s) : /etc/rabbitmq/rabbitmq.conf
 cookie hash    : at619UOZzsenF44tSK3ulA==
    },
 log(s)         : <stdout>
 database dir   : /var/lib/rabbitmq/mnesia/rabbit@10.233.93.26
2020-03-31 10:27:21.650 [info] <0.239.0> Running boot step pre_boot defined by app rabbit
    "changed": false,
    "elapsed": 1,
    "match_groupdict": {},
    "match_groups": [],
    "path": null,
    "port": 11211,
    "search_regex": null,
    "state": "started"
}
Using /etc/ansible/ansible.cfg as config file
Using /etc/ansible/ansible.cfg as config file
127.0.0.1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
2020-03-31 10:27:21.650 [info] <0.239.0> Running boot step rabbit_core_metrics defined by app rabbit
2020-03-31 10:27:21.650 [info] <0.239.0> Running boot step rabbit_alarm defined by app rabbit
2020-03-31 10:27:21.654 [info] <0.245.0> Memory high watermark set to 6416 MiB (6727725875 bytes) of 16040 MiB (16819314688 bytes) total
2020-03-31 10:27:21.658 [info] <0.247.0> Enabling free disk space monitoring
2020-03-31 10:27:21.658 [info] <0.247.0> Disk free limit set to 50MB
2020-03-31 10:27:21.661 [info] <0.239.0> Running boot step code_server_cache defined by app rabbit
2020-03-31 10:27:21.662 [info] <0.239.0> Running boot step file_handle_cache defined by app rabbit
2020-03-31 10:27:21.662 [info] <0.250.0> Limiting to approx 1048476 file handles (943626 sockets)
2020-03-31 10:27:21.662 [info] <0.251.0> FHC read buffering:  OFF
2020-03-31 10:27:21.663 [info] <0.251.0> FHC write buffering: ON
2020-03-31 10:27:21.663 [info] <0.239.0> Running boot step worker_pool defined by app rabbit
    },
    "changed": false,
    "elapsed": 0,
    "match_groupdict": {},
    "match_groups": [],
2020-03-31 10:27:21.663 [info] <0.240.0> Will use 4 processes for default worker pool
2020-03-31 10:27:21.663 [info] <0.240.0> Starting worker pool 'worker_pool' with 4 processes in it
2020-03-31 10:27:21.664 [info] <0.239.0> Running boot step database defined by app rabbit
2020-03-31 10:27:21.664 [info] <0.239.0> Node database directory at /var/lib/rabbitmq/mnesia/rabbit@10.233.93.26 is empty. Assuming we need to join an existing cluster or initialise from scratch...
2020-03-31 10:27:21.664 [info] <0.239.0> Configured peer discovery backend: rabbit_peer_discovery_k8s
2020-03-31 10:27:21.665 [info] <0.239.0> Will try to lock with peer discovery backend rabbit_peer_discovery_k8s
2020-03-31 10:27:21.665 [info] <0.239.0> Peer discovery backend does not support locking, falling back to randomized delay
2020-03-31 10:27:21.665 [info] <0.239.0> Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping randomized startup delay.
2020-03-31 10:27:21.686 [info] <0.239.0> k8s endpoint listing returned nodes not yet ready: 10.233.93.26
2020-03-31 10:27:21.686 [info] <0.239.0> All discovered existing cluster peers: 
2020-03-31 10:27:21.686 [info] <0.239.0> Discovered no peer nodes to cluster with
2020-03-31 10:27:21.691 [info] <0.43.0> Application mnesia exited with reason: stopped
2020-03-31 10:27:21.743 [info] <0.239.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2020-03-31 10:27:21.771 [info] <0.239.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2020-03-31 10:27:21.799 [info] <0.239.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2020-03-31 10:27:21.799 [info] <0.239.0> Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping registration.
2020-03-31 10:27:21.800 [info] <0.239.0> Running boot step database_sync defined by app rabbit
2020-03-31 10:27:21.800 [info] <0.239.0> Running boot step feature_flags defined by app rabbit
2020-03-31 10:27:21.800 [info] <0.239.0> Running boot step codec_correctness_check defined by app rabbit
2020-03-31 10:27:21.800 [info] <0.239.0> Running boot step external_infrastructure defined by app rabbit
2020-03-31 10:27:21.800 [info] <0.239.0> Running boot step rabbit_registry defined by app rabbit
2020-03-31 10:27:21.801 [info] <0.239.0> Running boot step rabbit_auth_mechanism_cr_demo defined by app rabbit
2020-03-31 10:27:21.801 [info] <0.239.0> Running boot step rabbit_queue_location_random defined by app rabbit
2020-03-31 10:27:21.801 [info] <0.239.0> Running boot step rabbit_event defined by app rabbit
2020-03-31 10:27:21.801 [info] <0.239.0> Running boot step rabbit_auth_mechanism_amqplain defined by app rabbit
2020-03-31 10:27:21.802 [info] <0.239.0> Running boot step rabbit_auth_mechanism_plain defined by app rabbit
2020-03-31 10:27:21.802 [info] <0.239.0> Running boot step rabbit_exchange_type_direct defined by app rabbit
2020-03-31 10:27:21.802 [info] <0.239.0> Running boot step rabbit_exchange_type_fanout defined by app rabbit
2020-03-31 10:27:21.802 [info] <0.239.0> Running boot step rabbit_exchange_type_headers defined by app rabbit
2020-03-31 10:27:21.802 [info] <0.239.0> Running boot step rabbit_exchange_type_topic defined by app rabbit
2020-03-31 10:27:21.803 [info] <0.239.0> Running boot step rabbit_mirror_queue_mode_all defined by app rabbit
2020-03-31 10:27:21.803 [info] <0.239.0> Running boot step rabbit_mirror_queue_mode_exactly defined by app rabbit
2020-03-31 10:27:21.803 [info] <0.239.0> Running boot step rabbit_mirror_queue_mode_nodes defined by app rabbit
2020-03-31 10:27:21.803 [info] <0.239.0> Running boot step rabbit_priority_queue defined by app rabbit
    "path": null,
    "port": 5432,
    "search_regex": null,
    "state": "started"
}
Using /etc/ansible/ansible.cfg as config file
127.0.0.1 | SUCCESS => {
2020-03-31 10:27:21.803 [info] <0.239.0> Priority queues enabled, real BQ is rabbit_variable_queue
2020-03-31 10:27:21.803 [info] <0.239.0> Running boot step rabbit_queue_location_client_local defined by app rabbit
2020-03-31 10:27:21.803 [info] <0.239.0> Running boot step rabbit_queue_location_min_masters defined by app rabbit
2020-03-31 10:27:21.803 [info] <0.239.0> Running boot step kernel_ready defined by app rabbit
2020-03-31 10:27:21.803 [info] <0.239.0> Running boot step rabbit_sysmon_minder defined by app rabbit
2020-03-31 10:27:21.804 [info] <0.239.0> Running boot step rabbit_epmd_monitor defined by app rabbit
2020-03-31 10:27:21.804 [info] <0.424.0> epmd monitor knows us, inter-node communication (distribution) port: 25672
2020-03-31 10:27:21.805 [info] <0.239.0> Running boot step guid_generator defined by app rabbit
2020-03-31 10:27:21.806 [info] <0.239.0> Running boot step rabbit_node_monitor defined by app rabbit
2020-03-31 10:27:21.806 [info] <0.428.0> Starting rabbit_node_monitor
2020-03-31 10:27:21.807 [info] <0.239.0> Running boot step delegate_sup defined by app rabbit
2020-03-31 10:27:21.807 [info] <0.239.0> Running boot step rabbit_memory_monitor defined by app rabbit
2020-03-31 10:27:21.807 [info] <0.239.0> Running boot step core_initialized defined by app rabbit
2020-03-31 10:27:21.807 [info] <0.239.0> Running boot step upgrade_queues defined by app rabbit
2020-03-31 10:27:21.833 [info] <0.239.0> message_store upgrades: 1 to apply
2020-03-31 10:27:21.833 [info] <0.239.0> message_store upgrades: Applying rabbit_variable_queue:move_messages_to_vhost_store
2020-03-31 10:27:21.833 [info] <0.239.0> message_store upgrades: No durable queues found. Skipping message store migration
2020-03-31 10:27:21.833 [info] <0.239.0> message_store upgrades: Removing the old message store data
2020-03-31 10:27:21.834 [info] <0.239.0> message_store upgrades: All upgrades applied successfully
2020-03-31 10:27:21.866 [info] <0.239.0> Running boot step rabbit_connection_tracking defined by app rabbit
2020-03-31 10:27:21.866 [info] <0.239.0> Running boot step rabbit_connection_tracking_handler defined by app rabbit
2020-03-31 10:27:21.866 [info] <0.239.0> Running boot step rabbit_exchange_parameters defined by app rabbit
2020-03-31 10:27:21.866 [info] <0.239.0> Running boot step rabbit_mirror_queue_misc defined by app rabbit
2020-03-31 10:27:21.867 [info] <0.239.0> Running boot step rabbit_policies defined by app rabbit
2020-03-31 10:27:21.867 [info] <0.239.0> Running boot step rabbit_policy defined by app rabbit
2020-03-31 10:27:21.868 [info] <0.239.0> Running boot step rabbit_queue_location_validator defined by app rabbit
2020-03-31 10:27:21.868 [info] <0.239.0> Running boot step rabbit_vhost_limit defined by app rabbit
2020-03-31 10:27:21.868 [info] <0.239.0> Running boot step rabbit_mgmt_reset_handler defined by app rabbitmq_management
2020-03-31 10:27:21.868 [info] <0.239.0> Running boot step rabbit_mgmt_db_handler defined by app rabbitmq_management_agent
2020-03-31 10:27:21.868 [info] <0.239.0> Management plugin: using rates mode 'basic'
2020-03-31 10:27:21.868 [info] <0.239.0> Running boot step recovery defined by app rabbit
2020-03-31 10:27:21.869 [info] <0.239.0> Running boot step load_definitions defined by app rabbitmq_management
2020-03-31 10:27:21.869 [info] <0.239.0> Applying definitions from /etc/rabbitmq/rabbitmq_definitions.json
2020-03-31 10:27:21.869 [info] <0.239.0> Asked to import definitions. Acting user: rmq-internal
2020-03-31 10:27:21.870 [info] <0.239.0> Importing users...
2020-03-31 10:27:21.870 [info] <0.239.0> Creating user 'awx'
2020-03-31 10:27:21.871 [info] <0.239.0> Setting user tags for user 'awx' to [administrator]
2020-03-31 10:27:21.872 [info] <0.239.0> Importing vhosts...
2020-03-31 10:27:21.872 [info] <0.239.0> Adding vhost 'awx'
2020-03-31 10:27:21.880 [info] <0.471.0> Making sure data directory '/var/lib/rabbitmq/mnesia/rabbit@10.233.93.26/msg_stores/vhosts/5A7PFSQL4N28PQOL0INDFDDLU' for vhost 'awx' exists
2020-03-31 10:27:21.882 [info] <0.471.0> Starting message stores for vhost 'awx'
2020-03-31 10:27:21.883 [info] <0.475.0> Message store "5A7PFSQL4N28PQOL0INDFDDLU/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2020-03-31 10:27:21.884 [info] <0.471.0> Started message store of type transient for vhost 'awx'
2020-03-31 10:27:21.884 [info] <0.478.0> Message store "5A7PFSQL4N28PQOL0INDFDDLU/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2020-03-31 10:27:21.884 [warning] <0.478.0> Message store "5A7PFSQL4N28PQOL0INDFDDLU/msg_store_persistent": rebuilding indices from scratch
2020-03-31 10:27:21.885 [info] <0.471.0> Started message store of type persistent for vhost 'awx'
2020-03-31 10:27:21.886 [info] <0.239.0> Importing user permissions...
2020-03-31 10:27:21.886 [info] <0.239.0> Setting permissions for 'awx' in 'awx' to '.*', '.*', '.*'
2020-03-31 10:27:21.887 [info] <0.239.0> Importing topic permissions...
2020-03-31 10:27:21.887 [info] <0.239.0> Importing parameters...
2020-03-31 10:27:21.888 [info] <0.239.0> Importing global parameters...
2020-03-31 10:27:21.888 [info] <0.239.0> Importing policies...
2020-03-31 10:27:21.890 [info] <0.239.0> Importing queues...
2020-03-31 10:27:21.890 [info] <0.239.0> Importing exchanges...
2020-03-31 10:27:21.890 [info] <0.239.0> Importing bindings...
2020-03-31 10:27:21.891 [info] <0.239.0> Running boot step empty_db_check defined by app rabbit
2020-03-31 10:27:21.891 [info] <0.239.0> Running boot step rabbit_looking_glass defined by app rabbit
2020-03-31 10:27:21.891 [info] <0.239.0> Running boot step rabbit_core_metrics_gc defined by app rabbit
2020-03-31 10:27:21.891 [info] <0.239.0> Running boot step background_gc defined by app rabbit
2020-03-31 10:27:21.891 [info] <0.239.0> Running boot step connection_tracking defined by app rabbit
2020-03-31 10:27:21.893 [info] <0.239.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@10.233.93.26'
2020-03-31 10:27:21.895 [info] <0.239.0> Setting up a table for per-vhost connection counting on this node: 'tracked_connection_per_vhost_on_node_rabbit@10.233.93.26'
2020-03-31 10:27:21.896 [info] <0.239.0> Running boot step routing_ready defined by app rabbit
2020-03-31 10:27:21.896 [info] <0.239.0> Running boot step pre_flight defined by app rabbit
2020-03-31 10:27:21.896 [info] <0.239.0> Running boot step notify_cluster defined by app rabbit
2020-03-31 10:27:21.896 [info] <0.239.0> Running boot step networking defined by app rabbit
2020-03-31 10:27:21.899 [info] <0.524.0> started TCP listener on [::]:5672
2020-03-31 10:27:21.899 [info] <0.239.0> Running boot step direct_client defined by app rabbit
2020-03-31 10:27:21.901 [info] <0.570.0> Peer discovery: enabling node cleanup (will remove nodes not known to the discovery backend). Check interval: 10 seconds.
2020-03-31 10:27:21.929 [info] <0.580.0> Management plugin: HTTP (non-TLS) listener started on port 15672
2020-03-31 10:27:21.929 [info] <0.686.0> Statistics database started.
2020-03-31 10:27:21.929 [info] <0.685.0> Starting worker pool 'management_worker_pool' with 3 processes in it
2020-03-31 10:27:22.027 [info] <0.8.0> Server startup complete; 5 plugins started.
 * rabbitmq_management
 * rabbitmq_web_dispatch
 * rabbitmq_peer_discovery_k8s
 * rabbitmq_peer_discovery_common
 * rabbitmq_management_agent
 completed with 5 plugins.

    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "elapsed": 3,
    "match_groupdict": {},
    "match_groups": [],
    "path": null,
    "port": 11211,
    "search_regex": null,
    "state": "started"
}
Using /etc/ansible/ansible.cfg as config file

Usually, all that’s necessary to build an extension is to run the following commands:

make
make install
make installcheck

Followed by a SQL command to create the extension in a database (replace foobar with the name of the extension):

CREATE EXTENSION foobar;

Troubleshooting

If you encounter an error such as:

"Makefile", line 8: Need an operator

You need to use GNU make, which may well be installed on your system as gmake:

gmake
gmake install
gmake installcheck

If you encounter an error such as:

make: pg_config: Command not found

Be sure that you have pg_config installed and in your path. If you used a package management system such as RPM to install PostgreSQL, be sure that the -devel package is also installed. If necessary, tell the build process where to find it:

env PG_CONFIG=/path/to/pg_config make && make installcheck && make install

If you encounter an error such as:

ERROR:  must be owner of database regression

You need to run the test suite using a super user, such as the default “postgres” super user:

make installcheck PGUSER=postgres

Once the extension is installed, you can add it to a database. If you’re running PostgreSQL 9.1.0 or greater, it’s as simple as connecting to a database as a super user and running (replace foobar with the name of the extension):

CREATE EXTENSION foobar;

If you’ve upgraded your cluster to PostgreSQL 9.1 and already had the extension installed, you can upgrade it to a properly packaged extension with:

CREATE EXTENSION foobar FROM unpackaged;

For versions of PostgreSQL less than 9.1.0, you’ll need to run the installation script (replace foobar with the name of the extension):

psql -d mydb -f /path/to/pgsql/share/contrib/foobar.sql

If you want to install the extension and all of its supporting objects into a specific schema, use the PGOPTIONS environment variable to specify the schema, like so:

PGOPTIONS=--search_path=extensions psql -d mydb -f foobar.sql

Credits

This troubleshooting guide was originally written by David E. Wheeler. Copied from his hostname extension README.

  • vmamontov

Hi everyone!

Please tell me, how do I rename a PostgreSQL database? I tried it in pgAdmin and got must be owner of database. I ran ALTER DATABASE name RENAME TO newname in the console and nothing happened.


  • Asked

    more than three years ago

  • 12610 views


You’re doing something wrong.

postgres=# \l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
-----------+----------+----------+------------+------------+-----------------------
 test      | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 


postgres=# alter database test rename to test1;
ALTER DATABASE
postgres=# \l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
-----------+----------+----------+------------+------------+-----------------------
 test1     | postgres | UTF8     | en_US.utf8 | en_US.utf8 |

You must put a “;” at the end of the query.
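
If the role you are connected as is not the owner, a superuser can reassign ownership first. A minimal sketch (your_role is a placeholder); run it from a different database, since you cannot rename the database you are currently connected to:

ALTER DATABASE name OWNER TO your_role;  -- as a superuser
ALTER DATABASE name RENAME TO newname;   -- now succeeds for your_role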



In part one, Migrating PostgreSQL Databases From On-Prem to the Cloud Using AWS RDS, we showed you how to perform the migration. In this blog, we will explore some common errors that might pop up during the migration.

BACKUP ERROR: Permission Denied to Backup Schema

/usr/pgsql-10/bin/pg_dump -v source_database -h onpremdbserver.domain.com -p 5432 -U source_appuser -Fd -j 100 -f /dbbackup/10/source_database   --no-owner --no-privileges
pg_dump: last built-in OID is 16383
pg_dump: reading extensions
pg_dump: identifying extension members
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: [archiver (db)] query failed: ERROR:  permission denied for schema source_schema
pg_dump: [archiver (db)] query was: LOCK TABLE source_schema.table1 IN ACCESS SHARE MODE

RESOLUTION – Ensure source_appuser has the proper access; in this case, make source_appuser the owner:

postgres=> \c source_database
psql (10.5, server 10.6)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
You are now connected to database "source_database" as user "postgres".
source_database=> \dn+
                               List of schemas
     Name      |   Owner   |   Access privileges    |      Description       
---------------+-----------+------------------------+------------------------
 public        | postgres | postgres=UC/postgres+| standard public schema
               |           | =UC/postgres          |
 source_schema | postgres |                        |
(2 rows)
 
source_database=> alter schema source_schema owner to source_appuser ;
ALTER SCHEMA
 
source_database=> \dt source_schema.table*
             List of relations
    Schema     |  Name  | Type  |   Owner  
---------------+--------+-------+-----------
 source_schema | table1 | table | postgres
 source_schema | table2 | table | postgres
 source_schema | table3 | table | postgres
(3 rows)
 
source_database=> alter table source_schema.table1 owner to source_appuser ;
ALTER TABLE
source_database=> alter table source_schema.table2 owner to source_appuser ;
ALTER TABLE
source_database=> alter table source_schema.table3 owner to source_appuser ;
ALTER TABLE
source_database=> \dt source_schema.table*
                List of relations
    Schema     |  Name  | Type  |     Owner     
---------------+--------+-------+----------------
 source_schema | table1 | table | source_appuser
 source_schema | table2 | table | source_appuser
 source_schema | table3 | table | source_appuser
(3 rows)
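
If the schema holds many tables, changing owners one by one gets tedious. A hedged sketch of a catalog-driven loop (same schema and role names as above; run it as a role allowed to alter the tables):

DO $$
DECLARE
    r record;
BEGIN
    -- reassign every table in source_schema to source_appuser
    FOR r IN SELECT tablename FROM pg_tables WHERE schemaname = 'source_schema'
    LOOP
        EXECUTE format('ALTER TABLE source_schema.%I OWNER TO source_appuser', r.tablename);
    END LOOP;
END $$;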

BACKUP ERROR: Server Version Mismatch

# Backup database...
/usr/pgsql-10/bin/pg_dump -v source_database -h onpremdbserver.domain.com -p 5432 -U source_appuser -Fd -j 100 -f /dbbackup/10/source_database   --no-owner --no-privileges
pg_dump: server version: 10.6; pg_dump version: 9.6.9
pg_dump: aborting because of server version mismatch

RESOLUTION – Use the correct PostgreSQL binary (i.e., /usr/pgsql-10/bin instead of /usr/pgsql-9.6/bin)
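
A quick way to confirm the client and server versions before retrying (a sketch; the host and user names are the ones used in this example):

/usr/pgsql-10/bin/pg_dump --version
psql -h onpremdbserver.domain.com -p 5432 -U source_appuser -d source_database -c "SHOW server_version;"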

# Backup database...
/usr/pgsql-10/bin/pg_dump -v source_database -h onpremdbserver.domain.com -p 5432 -U source_appuser -Fd -j 100 -f /dbbackup/10/source_database   --no-owner --no-privileges
pg_dump: last built-in OID is 16383
pg_dump: reading extensions
pg_dump: identifying extension members
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined access methods
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined operator families
pg_dump: reading user-defined text search parsers
pg_dump: reading user-defined text search templates
pg_dump: reading user-defined text search dictionaries
pg_dump: reading user-defined text search configurations
pg_dump: reading user-defined foreign-data wrappers
pg_dump: reading user-defined foreign servers
pg_dump: reading default privileges
pg_dump: reading user-defined collations
pg_dump: reading user-defined conversions
pg_dump: reading type casts
pg_dump: reading transforms
pg_dump: reading table inheritance information
pg_dump: reading event triggers
pg_dump: finding extension tables
pg_dump: finding inheritance relationships
pg_dump: reading column info for interesting tables
pg_dump: finding the columns and types of table "source_schema.table1"
pg_dump: finding the columns and types of table "source_schema.table2"
pg_dump: finding the columns and types of table "source_schema.table3"
pg_dump: finding default expressions of table "source_schema.table3"
pg_dump: flagging inherited columns in subtables
pg_dump: reading indexes
pg_dump: reading extended statistics
pg_dump: reading constraints
pg_dump: reading triggers
pg_dump: reading rewrite rules
pg_dump: reading policies
pg_dump: reading row security enabled for table "source_schema.table1"
pg_dump: reading policies for table "source_schema.table1"
pg_dump: reading row security enabled for table "source_schema.table2"
pg_dump: reading policies for table "source_schema.table2"
pg_dump: reading row security enabled for table "source_schema.table3_id_seq"
pg_dump: reading policies for table "source_schema.table3_id_seq"
pg_dump: reading row security enabled for table "source_schema.table3"
pg_dump: reading policies for table "source_schema.table3"
pg_dump: reading publications
pg_dump: reading publication membership
pg_dump: reading publication membership for table "source_schema.table1"
pg_dump: reading publication membership for table "source_schema.table2"
pg_dump: reading publication membership for table "source_schema.table3"
pg_dump: reading subscriptions
pg_dump: reading large objects
pg_dump: reading dependency data
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
pg_dump: saving search_path =
pg_dump: saving database definition
pg_dump: dumping contents of table "source_schema.table1"
pg_dump: finished item 3797 TABLE DATA table1
pg_dump: dumping contents of table "source_schema.table3"
pg_dump: finished item 3800 TABLE DATA table3
pg_dump: dumping contents of table "source_schema.table2"
pg_dump: finished item 3798 TABLE DATA table2

RESTORATION ERROR: Must Be the Owner of Extension plpgsql

# Restore database...
/usr/pgsql-10/bin/pg_restore -v -d dest_database_newdb -hdest_dbinstance.cluster-awsrdsguid.us-east-1.rds.amazonaws.com -p5432 -U dest_rds_superuser -j 120 -Fd /dbbackup/10/source_database   --no-owner --no-privileges
pg_restore: connecting to database for restore
pg_restore: processing item 3803 ENCODING ENCODING
pg_restore: processing item 3804 STDSTRINGS STDSTRINGS
pg_restore: processing item 3805 SEARCHPATH SEARCHPATH
pg_restore: processing item 3806 DATABASE source_database
pg_restore: processing item 3 SCHEMA public
pg_restore: creating SCHEMA "public"
pg_restore: processing item 3807 COMMENT SCHEMA public
pg_restore: creating COMMENT "SCHEMA public"
pg_restore: processing item 6 SCHEMA source_schema
pg_restore: creating SCHEMA "source_schema"
pg_restore: processing item 1 EXTENSION plpgsql
pg_restore: creating EXTENSION "plpgsql"
pg_restore: processing item 3808 COMMENT EXTENSION plpgsql
pg_restore: creating COMMENT "EXTENSION plpgsql"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3808; 0 0 COMMENT EXTENSION plpgsql
pg_restore: [archiver (db)] could not execute query: ERROR:  must be owner of extension plpgsql
    Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
 
ERROR:  permission denied to create database

RESOLUTION: RDS does not allow a true superuser, so the COMMENT ON EXTENSION statement fails; the error can be safely ignored.

RESTORATION ERROR: Permission Denied to Drop Database

ERROR:
postgres=> drop database dest_database_newdb ;
ERROR:  must be owner of database dest_database_newdb

RESOLUTION

postgres=> grant dest_rds_superuser to postgres ;
GRANT ROLE
postgres=> drop database dest_database_newdb ;
DROP DATABASE

RESTORATION ERROR: Permission Denied to Alter Role

# Grant dest_rds_superuser createdb...
/usr/pgsql-10/bin/psql -E -e postgres -hdest_dbinstance.cluster-awsrdsguid.us-east-1.rds.amazonaws.com -p5432 -Udest_rds_superuser -c"alter role dest_rds_superuser createdb"
alter role dest_rds_superuser createdb
ERROR:  permission denied

RESOLUTION

postgres=> grant rds_superuser to dest_rds_superuser ;
GRANT ROLE
 
 # Grant dest_rds_superuser createdb...
 /usr/pgsql-10/bin/psql -E -e postgres -hdest_dbinstance.cluster-awsrdsguid.us-east-1.rds.amazonaws.com -p5432 -Udest_rds_superuser -c"alter role dest_rds_superuser createdb"
 alter role dest_rds_superuser createdb
 ALTER ROLE

RESTORATION ERROR: Permission Denied to Create Database

ERROR:  permission denied to create database

RESOLUTION

Roles must have the CREATEDB privilege; grant it to avoid this error:
postgres=> alter role dest_rds_superuser createdb ;


RESTORATION ERROR: Database Already Exists

# Create database...
/usr/pgsql-10/bin/psql -E -e postgres -hdest_dbinstance.cluster-awsrdsguid.us-east-1.rds.amazonaws.com -p5432 -Udest_rds_superuser -c"create database dest_database_newdb"
create database dest_database_newdb
ERROR:  database "dest_database_newdb" already exists
# Restore database...
/usr/pgsql-10/bin/pg_restore -v -d dest_database_newdb -hdest_dbinstance.cluster-awsrdsguid.us-east-1.rds.amazonaws.com -p5432 -U dest_rds_superuser -j 120 -Fd /dbbackup/10/source_database   --no-owner --no-privileges
pg_restore: connecting to database for restore
pg_restore: processing item 3803 ENCODING ENCODING
pg_restore: processing item 3804 STDSTRINGS STDSTRINGS
pg_restore: processing item 3805 SEARCHPATH SEARCHPATH
pg_restore: processing item 3806 DATABASE source_database
pg_restore: processing item 3 SCHEMA public
pg_restore: creating SCHEMA "public"
pg_restore: processing item 3807 COMMENT SCHEMA public
pg_restore: creating COMMENT "SCHEMA public"
pg_restore: processing item 6 SCHEMA source_schema
pg_restore: creating SCHEMA "source_schema"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 6; 2615 20233 SCHEMA source_schema source_appuser
pg_restore: [archiver (db)] could not execute query: ERROR:  schema "source_schema" already exists
    Command was: CREATE SCHEMA source_schema;
pg_restore: processing item 1 EXTENSION plpgsql
pg_restore: creating EXTENSION "plpgsql"
pg_restore: processing item 3808 COMMENT EXTENSION plpgsql
pg_restore: creating COMMENT "EXTENSION plpgsql"
pg_restore: [archiver (db)] Error from TOC entry 3808; 0 0 COMMENT EXTENSION plpgsql
pg_restore: [archiver (db)] could not execute query: ERROR:  must be owner of extension plpgsql
    Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
pg_restore: processing item 197 TABLE table1
pg_restore: creating TABLE "source_schema.table1"
pg_restore: [archiver (db)] Error from TOC entry 197; 1259 20234 TABLE table1 source_appuser
pg_restore: [archiver (db)] could not execute query: ERROR:  relation "table1" already exists
    Command was: CREATE TABLE source_schema.table1 (
    id integer
);
pg_restore: processing item 198 TABLE table2
pg_restore: creating TABLE "source_schema.table2"
pg_restore: [archiver (db)] Error from TOC entry 198; 1259 20237 TABLE table2 source_appuser
pg_restore: [archiver (db)] could not execute query: ERROR:  relation "table2" already exists
    Command was: CREATE TABLE source_schema.table2 (
    id integer
);
pg_restore: processing item 200 TABLE table3
pg_restore: creating TABLE "source_schema.table3"
pg_restore: [archiver (db)] Error from TOC entry 200; 1259 20242 TABLE table3 source_appuser
pg_restore: [archiver (db)] could not execute query: ERROR:  relation "table3" already exists
    Command was: CREATE TABLE source_schema.table3 (
    id integer NOT NULL,
    name character varying
);
pg_restore: processing item 199 SEQUENCE table3_id_seq
pg_restore: creating SEQUENCE "source_schema.table3_id_seq"
pg_restore: [archiver (db)] Error from TOC entry 199; 1259 20240 SEQUENCE table3_id_seq source_appuser
pg_restore: [archiver (db)] could not execute query: ERROR:  relation "table3_id_seq" already exists
    Command was: CREATE SEQUENCE source_schema.table3_id_seq
    AS integer
    START WITH 1
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1;
pg_restore: processing item 3809 SEQUENCE OWNED BY table3_id_seq
pg_restore: creating SEQUENCE OWNED BY "source_schema.table3_id_seq"
pg_restore: processing item 3675 DEFAULT table3 id
pg_restore: creating DEFAULT "source_schema.table3 id"
pg_restore: entering main parallel loop
pg_restore: launching item 3797 TABLE DATA table1
pg_restore: launching item 3798 TABLE DATA table2
pg_restore: launching item 3800 TABLE DATA table3
pg_restore: launching item 3810 SEQUENCE SET table3_id_seq
pg_restore: pg_restore: executing SEQUENCE SET table3_id_seq
processing data for table "source_schema.table2"
pg_restore: finished item 3798 TABLE DATA table2
pg_restore: finished item 3810 SEQUENCE SET table3_id_seq
pg_restore: processing data for table "source_schema.table3"
pg_restore: processing data for table "source_schema.table1"
pg_restore: finished item 3797 TABLE DATA table1
pg_restore: finished item 3800 TABLE DATA table3
pg_restore: finished main parallel loop
WARNING: errors ignored on restore: 6

RESOLUTION – Ensure you manually remove the existing database before executing the script.
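
A sketch of that manual cleanup with the stock dropdb utility, reusing the connection parameters from this example (on RDS you may first need the GRANT shown under “Permission Denied to Drop Database” above):

/usr/pgsql-10/bin/dropdb -h dest_dbinstance.cluster-awsrdsguid.us-east-1.rds.amazonaws.com -p 5432 -U dest_rds_superuser dest_database_newdb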

# Restore database...
/usr/pgsql-10/bin/pg_restore -v -d dest_database_newdb -hdest_dbinstance.cluster-awsrdsguid.us-east-1.rds.amazonaws.com -p5432 -U dest_rds_superuser -j 120 -Fd /dbbackup/10/source_database   --no-owner --no-privileges
pg_restore: connecting to database for restore
pg_restore: processing item 3803 ENCODING ENCODING
pg_restore: processing item 3804 STDSTRINGS STDSTRINGS
pg_restore: processing item 3805 SEARCHPATH SEARCHPATH
pg_restore: processing item 3806 DATABASE source_database
pg_restore: processing item 3 SCHEMA public
pg_restore: creating SCHEMA "public"
pg_restore: processing item 3807 COMMENT SCHEMA public
pg_restore: creating COMMENT "SCHEMA public"
pg_restore: processing item 6 SCHEMA source_schema
pg_restore: creating SCHEMA "source_schema"
pg_restore: processing item 1 EXTENSION plpgsql
pg_restore: creating EXTENSION "plpgsql"
pg_restore: processing item 3808 COMMENT EXTENSION plpgsql
pg_restore: creating COMMENT "EXTENSION plpgsql"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3808; 0 0 COMMENT EXTENSION plpgsql
pg_restore: [archiver (db)] could not execute query: ERROR:  must be owner of extension plpgsql
    Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
 
 
 
pg_restore: processing item 197 TABLE table1
pg_restore: creating TABLE "source_schema.table1"
pg_restore: processing item 198 TABLE table2
pg_restore: creating TABLE "source_schema.table2"
pg_restore: processing item 200 TABLE table3
pg_restore: creating TABLE "source_schema.table3"
pg_restore: processing item 199 SEQUENCE table3_id_seq
pg_restore: creating SEQUENCE "source_schema.table3_id_seq"
pg_restore: processing item 3809 SEQUENCE OWNED BY table3_id_seq
pg_restore: creating SEQUENCE OWNED BY "source_schema.table3_id_seq"
pg_restore: processing item 3675 DEFAULT table3 id
pg_restore: creating DEFAULT "source_schema.table3 id"
pg_restore: entering main parallel loop
pg_restore: launching item 3797 TABLE DATA table1
pg_restore: launching item 3798 TABLE DATA table2
pg_restore: launching item 3800 TABLE DATA table3 

RESTORATION ERROR: Schema Already Exists

# Create schema...
/usr/pgsql-10/bin/psql -E -e -hdest_dbinstance.cluster-awsrdsguid.us-east-1.rds.amazonaws.com -p5432 dest_database_newdb -Udest_rds_superuser -c"create schema source_schema"
create schema source_schema
ERROR:  schema "source_schema" already exists
 
# Restore database...
/usr/pgsql-10/bin/pg_restore -v -d dest_database_newdb -hdest_dbinstance.cluster-awsrdsguid.us-east-1.rds.amazonaws.com -p5432 -U dest_rds_superuser -j 120 -Fd /dbbackup/10/source_database --schema=source_schema  --no-owner --no-privileges
pg_restore: connecting to database for restore
pg_restore: processing item 3803 ENCODING ENCODING
pg_restore: processing item 3804 STDSTRINGS STDSTRINGS
pg_restore: processing item 3805 SEARCHPATH SEARCHPATH
pg_restore: processing item 6 SCHEMA source_schema
pg_restore: processing item 197 TABLE table1
pg_restore: creating TABLE "source_schema.table1"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 197; 1259 20234 TABLE table1 source_appuser
pg_restore: [archiver (db)] could not execute query: ERROR:  relation "table1" already exists
    Command was: CREATE TABLE source_schema.table1 (
    id integer
);
 
 
 
pg_restore: processing item 198 TABLE table2
pg_restore: creating TABLE "source_schema.table2"
pg_restore: [archiver (db)] Error from TOC entry 198; 1259 20237 TABLE table2 source_appuser
pg_restore: [archiver (db)] could not execute query: ERROR:  relation "table2" already exists
    Command was: CREATE TABLE source_schema.table2 (
    id integer
);
 
  
pg_restore: processing item 200 TABLE table3
pg_restore: creating TABLE "source_schema.table3"
pg_restore: [archiver (db)] Error from TOC entry 200; 1259 20242 TABLE table3 source_appuser
pg_restore: [archiver (db)] could not execute query: ERROR:  relation "table3" already exists
    Command was: CREATE TABLE source_schema.table3 (
    id integer NOT NULL,
    name character varying
);
 
pg_restore: processing item 199 SEQUENCE table3_id_seq
pg_restore: creating SEQUENCE "source_schema.table3_id_seq"
pg_restore: [archiver (db)] Error from TOC entry 199; 1259 20240 SEQUENCE table3_id_seq source_appuser
pg_restore: [archiver (db)] could not execute query: ERROR:  relation "table3_id_seq" already exists
    Command was: CREATE SEQUENCE source_schema.table3_id_seq
    AS integer
    START WITH 1
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1;
 
 
pg_restore: processing item 3806 SEQUENCE OWNED BY table3_id_seq
pg_restore: creating SEQUENCE OWNED BY "source_schema.table3_id_seq"
pg_restore: processing item 3675 DEFAULT table3 id
pg_restore: creating DEFAULT "source_schema.table3 id"
pg_restore: entering main parallel loop
pg_restore: launching item 3797 TABLE DATA table1
pg_restore: launching item 3798 TABLE DATA table2
pg_restore: launching item 3800 TABLE DATA table3
pg_restore: launching item 3807 SEQUENCE SET table3_id_seq
pg_restore: pg_restore: processing data for table "source_schema.table2"
processing data for table "source_schema.table1"
pg_restore: executing SEQUENCE SET table3_id_seq
pg_restore: finished item 3797 TABLE DATA table1
pg_restore: finished item 3798 TABLE DATA table2
pg_restore: finished item 3807 SEQUENCE SET table3_id_seq
pg_restore: processing data for table "source_schema.table3"
pg_restore: finished item 3800 TABLE DATA table3
pg_restore: finished main parallel loop
WARNING: errors ignored on restore: 4

RESOLUTION – Either drop or rename the existing schema. I like renaming until everything is validated:

postgres=> \c dest_database_newdb
psql (10.5, server 10.6)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
You are now connected to database "dest_database_newdb" as user "postgres".
dest_database_newdb=> \dn
          List of schemas
     Name      |       Owner        
---------------+--------------------
 public        | postgres
 source_schema | dest_rds_superuser
(2 rows)
 
dest_database_newdb=> alter schema source_schema rename to source_schema_old;
ALTER SCHEMA
dest_database_newdb=> \dn
            List of schemas
       Name        |       Owner        
-------------------+--------------------
 public            | postgres
 source_schema_old | dest_rds_superuser
(2 rows)
 
 
# Restore database...
/usr/pgsql-10/bin/pg_restore -v -d dest_database_newdb -hdest_dbinstance.cluster-awsrdsguid.us-east-1.rds.amazonaws.com -p5432 -U dest_rds_superuser -j 120 -Fd /dbbackup/10/source_database --schema=source_schema  --no-owner --no-privileges
pg_restore: connecting to database for restore
pg_restore: processing item 3803 ENCODING ENCODING
pg_restore: processing item 3804 STDSTRINGS STDSTRINGS
pg_restore: processing item 3805 SEARCHPATH SEARCHPATH
pg_restore: processing item 6 SCHEMA source_schema
pg_restore: processing item 197 TABLE table1
pg_restore: creating TABLE "source_schema.table1"
pg_restore: processing item 198 TABLE table2
pg_restore: creating TABLE "source_schema.table2"
pg_restore: processing item 200 TABLE table3
pg_restore: creating TABLE "source_schema.table3"
pg_restore: processing item 199 SEQUENCE table3_id_seq
pg_restore: creating SEQUENCE "source_schema.table3_id_seq"
pg_restore: processing item 3806 SEQUENCE OWNED BY table3_id_seq
pg_restore: creating SEQUENCE OWNED BY "source_schema.table3_id_seq"
pg_restore: processing item 3675 DEFAULT table3 id
pg_restore: creating DEFAULT "source_schema.table3 id"
pg_restore: entering main parallel loop
pg_restore: launching item 3797 TABLE DATA table1
pg_restore: launching item 3798 TABLE DATA table2
pg_restore: launching item 3800 TABLE DATA table3
pg_restore: launching item 3807 SEQUENCE SET table3_id_seq
pg_restore: processing data for table "source_schema.table1"
pg_restore: processing data for table "source_schema.table2"
pg_restore: executing SEQUENCE SET table3_id_seq
pg_restore: finished item 3807 SEQUENCE SET table3_id_seq
pg_restore: processing data for table "source_schema.table3"
pg_restore: finished item 3797 TABLE DATA table1
pg_restore: finished item 3798 TABLE DATA table2
pg_restore: finished item 3800 TABLE DATA table3
pg_restore: finished main parallel loop


In some situations database indexes can be corrupted, and you need to rebuild them with the REINDEX command.

How to reindex a database

Rebuilding indexes is very simple:

  1. Connect to the database
  2. Execute the following query (or use the reindexdb sketch just below this list):
    REINDEX DATABASE db_name;
  3. If you are using multiple databases, repeat the steps for every database.
  4. If Postgres.app shows the reindex warning, you can now hide it by clicking “More Info” and then “Hide this Warning”
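
For the command line, PostgreSQL ships an equivalent reindexdb utility; a minimal sketch, assuming your shell user can connect to the server:

reindexdb --verbose db_name
# or reindex every database in the cluster in one go:
reindexdb --all --verbose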

Why should I reindex my databases?

In some situations indexes can become corrupted.
This can happen because of bugs in PostgreSQL or because of changes in macOS.

Bugs in PostgreSQL that require a REINDEX are typically mentioned in the PostgreSQL release notes.

Changes in macOS that require a REINDEX are unfortunately not documented anywhere. The only problematic change we are aware of currently is that the default text sort order (collation) has changed in macOS 11, which means indexes created or updated on earlier versions of macOS are no longer valid and must be rebuilt.

How do I know if my database is affected?

It is very hard to tell if indexes have actually been corrupted or not.
Postgres.app tries to detect some scenarios where an index could have been corrupted, and shows a warning suggesting to reindex your database.

If you are unsure, we recommend to perform the reindex operation to be safe.

How long does reindexing take?

For small databases it will take just a few seconds, but if you have a lot of data it could take a few minutes.

You can use the command REINDEX (VERBOSE) DATABASE db_name; to see status messages if you have a large database.

Please note that to perform the reindex, any concurrent writing transactions need to come
to an end, and new transactions and sessions may need to wait for the reindex to finish. If
any client keeps a writing transaction open, the reindex operation will block and wait for
it without any warning. You can cancel and restart the operation if needed.
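
If the operation appears stuck, a hedged way to check whether it is waiting on a lock (see also the wait_event hint in the Advanced Stuff section below):

SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';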

What happens if I don’t reindex?

  • Queries could return incorrect results (most likely missing rows)
  • UNIQUE columns may contain duplicate values

Troubleshooting

  • If you see ‘ERROR: must be owner of database’, you must connect to the database either as a superuser or as the owner of the database
  • If you see errors like ERROR: could not create unique index, take note of the
    message and the details, connect to the database in question, and manually
    resolve the unique conflict (a sketch follows this list). When querying the data, try to avoid using indexes, e.g.
    by issuing SET enable_indexscan = off; SET enable_indexonlyscan = off; SET enable_bitmapscan = off;
    in the session you use for this. Then retry the reindex operation.
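
A sketch of hunting down such a duplicate, assuming a hypothetical unique index on users(email); substitute the table and column named in the error message:

SET enable_indexscan = off;
SET enable_indexonlyscan = off;
SET enable_bitmapscan = off;
-- rows that collide under the unique constraint:
SELECT email, count(*) FROM users GROUP BY email HAVING count(*) > 1;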

Background Info on Collation Changes

If not explicitly requested otherwise (ICU collations), PostgreSQL uses the collations
(language-dependent rules for sorting text) provided by the operating system. Postgres.app
has set the default locale (and thus collation) to ‘en_US.UTF-8’ when initialising a new
cluster since PostgreSQL 9.4.1 (end of 2014). However, UTF-8 based collations are not
actually implemented in macOS (as in most BSD systems), so in practice the sort order
followed byte order; see Issue #216.

With the update to macOS 11, Apple started to use the ISO8859-1 collations for about half
of the available locales, including the default locale of Postgres.app, ‘en_US.UTF-8’. As
database indexes store the order of elements at the time they are inserted, corruption
can happen if the sorting rules change later.

Is my database affected by macOS collation changes?

Postgres.app records the version of macOS where initdb was called, and also all versions
of macOS that the server was started on. Since this information is not available for old
data directories, Postgres.app guesses the macOS version used to call initdb for
existing data directories based on the install history of macOS updates. If Postgres.app
detects that the data directory was used both pre macOS 11 and post macOS 11 or the macOS
version used for initdb of a data directory is unknown, it shows the reindex warning.

So the warning is a good indicator, but may not be absolutely accurate. If you prefer to
do a manual assessment, here are some guidelines:

You are not affected if:

  • The database cluster (‘Server’ in Postgres.app) was initialized on macOS 11 or later.
  • You are still using macOS 10.15 or earlier.
  • initdb was run manually (not with the button ‘Initialize’ within Postgres.app) with
    --no-locale, --lc-collate or --locale set to “C” or “POSIX” or to an unaffected
    locale and no other libc-based collations are used on columns or in indexes. Database
    default collation can be shown with SELECT datname, datcollate FROM pg_database;,
    use of object-level collations can be determined by joining pg_depend with
    pg_collation (see the sketch after this list).
  • The database was restored from a logical dump (pg_dump / pg_dumpall) after updating
    the OS version (exception: range partition keys, see below).
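
A hedged sketch of that pg_depend join; it lists relations recording a dependency on a libc collation other than C or POSIX, and should be treated as a starting point rather than an exhaustive check:

SELECT DISTINCT d.objid::regclass AS object, c.collname
FROM pg_depend d
JOIN pg_collation c ON d.refclassid = 'pg_collation'::regclass
                   AND d.refobjid = c.oid
WHERE d.classid = 'pg_class'::regclass
  AND c.collprovider = 'c'              -- libc-provided collations
  AND c.collname NOT IN ('C', 'POSIX');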

You are likely affected if:

  • The OS was updated from macOS 10.15 or earlier to macOS 11 or later ‘in-place’ with
    an existing PostgreSQL database.
  • You used Migration Assistant or TimeMachine to copy the user data containing a
    PostgreSQL data directory from a Mac using macOS 10.15 or earlier to macOS 11 or later.
  • You used pg_upgrade on macOS 11 or later to update a cluster initialized on macOS 10.15 or
    earlier.
  • You copied over a data directory or a physical backup (pg_basebackup) from a Mac
    using macOS 10.15 or earlier to macOS 11 or later or vice versa.

There is no relation to the version of Postgres.app or PostgreSQL, or to the
architecture (Apple Silicon / Intel) in use.

Advanced Stuff

  • You can monitor the reindex progress:
    • To see the finished objects, add -v to the reindexdb command line or (VERBOSE)
      to REINDEX DATABASE command
    • To see progress during index creation in PostgreSQL 11 or newer have a look to the
      pg_stat_progress_create_index view
      in psql: TABLE pg_stat_progress_create_index \watch 1
    • To monitor if the process waits to get a lock, look for wait_event ‘Lock’ in the
      pg_stat_activity view
  • To limit the time heavyweight locks are required, it is possible to run the reindex in
    batches, in parallel and concurrently. See documentation for the parameters.
  • You can limit the reindex-operation to indexes that contain text based columns. To
    display these:

    SELECT indrelid::regclass::text, indexrelid::regclass::text, collname, pg_get_indexdef(indexrelid) 
    FROM (SELECT indexrelid, indrelid, indcollation[i] coll FROM pg_index, generate_subscripts(indcollation, 1) g(i)) s 
    JOIN pg_collation c ON coll=c.oid
    WHERE collprovider IN ('d', 'c') AND collname NOT IN ('C', 'POSIX');
    
  • You can further limit the reindex operation to the indexes that actually became invalid
    with the help of the extension amcheck:
    bt_index_check('<index_name>', is_unique);
  • If you use clustered tables, you should recluster them after reindexing:
    clusterdb -a or
    CLUSTER <table_name>;.
  • In case you use partitioned tables with Range Partitioning on a text based column, it
    is possible, a repartitioning is needed. One way to do this (PostgreSQL 12 onwards) uses
    pg_dump with the --load-via-partition-root option.
  • In case you use CHECK constraints that depend on character order it is possible you
    have undetected constraint violations in the existing data.
    Unlike the UNIQUE violations mentioned before, these need to be searched for and
    resolved manually.
  • If you previously created indexes on text-based columns for use with pattern matching
    queries (LIKE, ~ operators), you may want to create additional indexes with the
    matching xxx_pattern_ops opclass
    to restore support for this.
  • When using streaming replication, the collations on the standby need to match those on
    the primary; otherwise, corrupting the standby is likely, just as when one of the
    machines’ OS is updated across the macOS 10.15 / macOS 11 boundary.

Further reading

  • PostgreSQL Wiki on collation changes
  • Docs on REINDEX command
  • PostgresApp Ticket #216
  • PostgresApp Ticket #665

ERROR: must be owner of database databaseName — postgres error

Hi everyone!

Today’s post is about Postgres SQL.

In my environment we are using Postgres SQL as one option for the backend of AWS (Amazon Web Services).

That’s right. We also have another option — MySQL.

This post however will focus on the Postgres SQL error:

“ERROR: must be owner of database databaseName”

Scenario:
I encountered this error when I was trying to drop a database from an AWS cluster.

Amazon allows you to group databases inside clusters. These clusters play much the same role as the schemas that hold our databases in SQL Server.

You can read more about AWS clusters here.

Solution:
This is very easy; the fix is right there in the error. All I have to do is make myself the owner of the database so I can drop it.

I. If your user account does not already have read/write privileges, then you will need to sign in to the Postgres cluster with an admin account so you can grant read/write access to your user account.

II. If your user account does have read/write privileges, then you will be able to alter the database inside the cluster and assign yourself as the owner OR you could do it with the admin user account in step 1.

Note: Run the command below from a different database inside the same cluster as the database whose ownership you want to change.

ALTER DATABASE databaseName OWNER TO userName;
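
Putting the pieces together, a sketch of a full psql session (adminUser, userName, and databaseName are the placeholders from this post; membership in the owning role is enough to pass the owner check for the drop):

\c postgres
-- on AWS you may first need: GRANT userName TO adminUser;
ALTER DATABASE databaseName OWNER TO userName;
DROP DATABASE databaseName;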

If you need to read further on granting permissions in Postgres, please check out this previous post here.

You’re all set!

If you have any questions related to this post, please put them below.


Thank you and Happy reading,

-marshé hutchinson
#learnSQLwithme
