Disk full error: could not resize shared memory segment

I have a dashboard with a number of panels (around 6) that display data-point charts by querying a dockerised instance of a PostgreSQL database.

The panels were working fine until very recently; now some of them stop working and report an error like this:

pq: could not resize shared memory segment "/PostgreSQL.2058389254" to 12615680 bytes: No space left on device

Any idea why this happens, and how to work around it? The Docker container runs on a remote host accessed via SSH.
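The segment named in the error (/PostgreSQL.2058389254) is a POSIX shared memory file: with dynamic_shared_memory_type = posix (the default on Linux), PostgreSQL creates these under /dev/shm, so it is that mount inside the container that runs out, not the root filesystem. As a first check, a sketch assuming the container name fiware-postgres from the compose file in EDIT-3 below:

$ docker exec fiware-postgres df -h /dev/shm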

EDIT

Disk space:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       197G  140G   48G  75% /
devtmpfs        1.4G     0  1.4G   0% /dev
tmpfs           1.4G  4.0K  1.4G   1% /dev/shm
tmpfs           1.4G  138M  1.3G  10% /run
tmpfs           1.4G     0  1.4G   0% /sys/fs/cgroup
/dev/dm-16       10G   49M   10G   1% /var/lib/docker/devicemapper/mnt/a0f3c5ab84aa06d5b2db00c4324dd6bf7141500ff4c83e23e9aba7c7268bcad4
/dev/dm-1        10G  526M  9.5G   6% /var/lib/docker/devicemapper/mnt/8623a774d736ed3dc0d2db89b7d07cae85c3d1bcafc245180eec4ffd738f93a5
shm              64M     0   64M   0% /var/lib/docker/containers/260552ebcdf2bf0961329108d3d975110f8ada0a41325f5e7dd81b8ddad9d18b/mounts/shm
/dev/dm-4        10G  266M  9.8G   3% /var/lib/docker/devicemapper/mnt/6f873e62607e7cac4c4b658c72874c787b90290f74d1159eca81af61cb467cfb
shm              64M   50M   15M  78% /var/lib/docker/containers/84c66d9fb5b6ae023d051766f4d35ced87a519a1fee68ca5c89d61ff87cf1e5a/mounts/shm
/dev/dm-2        10G  383M  9.7G   4% /var/lib/docker/devicemapper/mnt/cb3df1ae654ed78802c2e5bd7a51a1b0bdd562855a7c7803750b80b33f5c206e
shm              64M     0   64M   0% /var/lib/docker/containers/22ba2ae2b6859c24623703dcb596527d64257d2d61de53f4d88e00a8e2335211/mounts/shm
/dev/dm-3        10G   99M  9.9G   1% /var/lib/docker/devicemapper/mnt/492a19fc8f3e254c4e5cc691c3300b5fee9d1a849422673bf0c19b4b2d1db571
shm              64M     0   64M   0% /var/lib/docker/containers/39abe855a9b107d4921807332309517697f024b2d169ebc5f409436208f766d0/mounts/shm
/dev/dm-7        10G  276M  9.8G   3% /var/lib/docker/devicemapper/mnt/55c6a6c17c892d149c1cc91fbf42b98f1340ffa30a1da508e3526af7060f3ce2
shm              64M     0   64M   0% /var/lib/docker/containers/bf2e7254cd7e2c6000da61875343580ec6ff5cbf40c017a398ba7479af5720ec/mounts/shm
/dev/dm-8        10G  803M  9.3G   8% /var/lib/docker/devicemapper/mnt/4e51f48d630041316edd925f1e20d3d575fce4bf19ef39a62756b768460d1a3a
shm              64M     0   64M   0% /var/lib/docker/containers/72d4ae743de490ed580ec9265ddf8e6b90e3a9d2c69bd74050e744c8e262b342/mounts/shm
/dev/dm-6        10G   10G   20K 100% /var/lib/docker/devicemapper/mnt/3dcddaee736017082fedb0996e42b4c7b00fe7b850d9a12c81ef1399fa00dfa5
shm              64M     0   64M   0% /var/lib/docker/containers/9f2bf4e2736d5128d6c240bb10da977183676c081ee07789bee60d978222b938/mounts/shm
/dev/dm-5        10G  325M  9.7G   4% /var/lib/docker/devicemapper/mnt/65a2bf48cbbfe42f0c235493981e62b90363b4be0a2f3aa0530bbc0b5b29dbe3
shm              64M     0   64M   0% /var/lib/docker/containers/e53d5ababfdefc5c8faf65a4b2d635e2543b5a807b65a4f3cd8553b4d7ef2d06/mounts/shm
/dev/dm-9        10G  1.2G  8.9G  12% /var/lib/docker/devicemapper/mnt/3216c48346c3702a5cd2f62a4737cc39666983b8079b481ab714cdb488400b08
shm              64M     0   64M   0% /var/lib/docker/containers/5cd0774a742f54c7c4fe3d4c1307fc93c3c097a861cde5f611a0fa9b454af3dd/mounts/shm
/dev/dm-10       10G  146M  9.9G   2% /var/lib/docker/devicemapper/mnt/6a98acd1428ae670e8f1da62cb8973653c8b11d1c98a8bf8be78f59d2ddba062
shm              64M     0   64M   0% /var/lib/docker/containers/a878042353f6a605167e7f9496683701fd2889f62ba1d6c0dc39c58bc03a8209/mounts/shm
tmpfs           285M     0  285M   0% /run/user/0
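Note the shm lines: Docker mounts a separate 64 MB tmpfs on /dev/shm for each container by default, and one of them is already at 78% use even though the root filesystem has 48G free. The configured limit can also be read from a running container; a sketch, again assuming the fiware-postgres container from EDIT-3 (prints the size in bytes, 67108864 for the 64 MB default):

$ docker inspect --format '{{.HostConfig.ShmSize}}' fiware-postgres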

EDIT-2

$ df -ih
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/vda1         13M  101K   13M    1% /
devtmpfs         354K   394  353K    1% /dev
tmpfs            356K     2  356K    1% /dev/shm
tmpfs            356K   693  356K    1% /run
tmpfs            356K    16  356K    1% /sys/fs/cgroup
/dev/dm-16        10M  2.3K   10M    1% /var/lib/docker/devicemapper/mnt/a0f3c5ab84aa06d5b2db00c4324dd6bf7141500ff4c83e23e9aba7c7268bcad4
/dev/dm-1         10M   19K   10M    1% /var/lib/docker/devicemapper/mnt/8623a774d736ed3dc0d2db89b7d07cae85c3d1bcafc245180eec4ffd738f93a5
shm              356K     1  356K    1% /var/lib/docker/containers/260552ebcdf2bf0961329108d3d975110f8ada0a41325f5e7dd81b8ddad9d18b/mounts/shm
/dev/dm-4         10M   11K   10M    1% /var/lib/docker/devicemapper/mnt/6f873e62607e7cac4c4b658c72874c787b90290f74d1159eca81af61cb467cfb
shm              356K     2  356K    1% /var/lib/docker/containers/84c66d9fb5b6ae023d051766f4d35ced87a519a1fee68ca5c89d61ff87cf1e5a/mounts/shm
/dev/dm-2         10M  5.6K   10M    1% /var/lib/docker/devicemapper/mnt/cb3df1ae654ed78802c2e5bd7a51a1b0bdd562855a7c7803750b80b33f5c206e
shm              356K     1  356K    1% /var/lib/docker/containers/22ba2ae2b6859c24623703dcb596527d64257d2d61de53f4d88e00a8e2335211/mounts/shm
/dev/dm-3         10M  4.6K   10M    1% /var/lib/docker/devicemapper/mnt/492a19fc8f3e254c4e5cc691c3300b5fee9d1a849422673bf0c19b4b2d1db571
shm              356K     1  356K    1% /var/lib/docker/containers/39abe855a9b107d4921807332309517697f024b2d169ebc5f409436208f766d0/mounts/shm
/dev/dm-7         10M  7.5K   10M    1% /var/lib/docker/devicemapper/mnt/55c6a6c17c892d149c1cc91fbf42b98f1340ffa30a1da508e3526af7060f3ce2
shm              356K     1  356K    1% /var/lib/docker/containers/bf2e7254cd7e2c6000da61875343580ec6ff5cbf40c017a398ba7479af5720ec/mounts/shm
/dev/dm-8         10M   12K   10M    1% /var/lib/docker/devicemapper/mnt/4e51f48d630041316edd925f1e20d3d575fce4bf19ef39a62756b768460d1a3a
shm              356K     1  356K    1% /var/lib/docker/containers/72d4ae743de490ed580ec9265ddf8e6b90e3a9d2c69bd74050e744c8e262b342/mounts/shm
/dev/dm-6        7.9K  7.3K   623   93% /var/lib/docker/devicemapper/mnt/3dcddaee736017082fedb0996e42b4c7b00fe7b850d9a12c81ef1399fa00dfa5
shm              356K     1  356K    1% /var/lib/docker/containers/9f2bf4e2736d5128d6c240bb10da977183676c081ee07789bee60d978222b938/mounts/shm
/dev/dm-5         10M   27K   10M    1% /var/lib/docker/devicemapper/mnt/65a2bf48cbbfe42f0c235493981e62b90363b4be0a2f3aa0530bbc0b5b29dbe3
shm              356K     1  356K    1% /var/lib/docker/containers/e53d5ababfdefc5c8faf65a4b2d635e2543b5a807b65a4f3cd8553b4d7ef2d06/mounts/shm
/dev/dm-9         10M   53K   10M    1% /var/lib/docker/devicemapper/mnt/3216c48346c3702a5cd2f62a4737cc39666983b8079b481ab714cdb488400b08
shm              356K     1  356K    1% /var/lib/docker/containers/5cd0774a742f54c7c4fe3d4c1307fc93c3c097a861cde5f611a0fa9b454af3dd/mounts/shm
/dev/dm-10        10M  5.2K   10M    1% /var/lib/docker/devicemapper/mnt/6a98acd1428ae670e8f1da62cb8973653c8b11d1c98a8bf8be78f59d2ddba062
shm              356K     1  356K    1% /var/lib/docker/containers/a878042353f6a605167e7f9496683701fd2889f62ba1d6c0dc39c58bc03a8209/mounts/shm
tmpfs            356K     1  356K    1% /run/user/0

EDIT-3
postgres container service:

version: "3.5"
services:

  # other containers go here..

  postgres:
    restart: always
    image: postgres:10
    hostname: postgres
    container_name: fiware-postgres
    expose:
      - "5432"
    ports:
      - "5432:5432"
    networks:
      - default
    environment:
      - "POSTGRES_PASSWORD=password"
      - "POSTGRES_USER=postgres"
      - "POSTGRES_DB=postgres"
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    build:
      context: .
      shm_size: '4gb'
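Note that shm_size under build: only applies while the image is built; at runtime the container still gets Docker's 64 MB /dev/shm default, which matches the shm mounts in the df output above. The runtime knob is shm_size at the service level (supported since compose file format 3.5). A sketch of that placement, untested against this exact stack:

  postgres:
    image: postgres:10
    shm_size: '4gb'   # runtime /dev/shm size; the shm_size under build: does not do this
    # ...rest of the service definition unchanged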

Database size:

postgres=# SELECT pg_size_pretty( pg_database_size('postgres'));
 pg_size_pretty
----------------
 42 GB
(1 row)

EDIT-4

Sorry, but none of the workarounds related to this question actually work, including this one.
On the dashboard I have 5 panels intended to display data points. The queries are similar, except that each one selects a different parameter: temperature, relativeHumidity, illuminance, particles or O3. This is the query:

SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time,
avg(attrvalue::float) as illuminance
FROM urbansense.weather WHERE attrname='illuminance' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;

The only difference is in the WHERE attrname=#parameterValue condition. I modified the postgresql.conf file to write logs, but the logs don't seem to provide helpful hints. Here are the logs:

$ vim postgres-data/log/postgresql-2019-06-26_150012.log
.
.
2019-06-26 15:03:39.298 UTC [45] LOG:  statement: SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time,
        avg(attrvalue::float) as o3
        FROM urbansense.airquality WHERE attrname='O3' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;
2019-06-26 15:03:40.903 UTC [41] ERROR:  could not resize shared memory segment "/PostgreSQL.1197429420" to 12615680 bytes: No space left on device
2019-06-26 15:03:40.903 UTC [41] STATEMENT:  SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time,
        avg(attrvalue::float) as illuminance
        FROM urbansense.weather WHERE attrname='illuminance' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;
2019-06-26 15:03:40.905 UTC [42] FATAL:  terminating connection due to administrator command
2019-06-26 15:03:40.905 UTC [42] STATEMENT:  SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time,
        avg(attrvalue::float) as illuminance
        FROM urbansense.weather WHERE attrname='illuminance' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;
2019-06-26 15:03:40.909 UTC [43] FATAL:  terminating connection due to administrator command
2019-06-26 15:03:40.909 UTC [43] STATEMENT:  SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time,
        avg(attrvalue::float) as illuminance
        FROM urbansense.weather WHERE attrname='illuminance' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;
2019-06-26 15:03:40.921 UTC [1] LOG:  worker process: parallel worker for PID 41 (PID 42) exited with exit code 1
2019-06-26 15:03:40.922 UTC [1] LOG:  worker process: parallel worker for PID 41 (PID 43) exited with exit code 1
2019-06-26 15:07:04.058 UTC [39] LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp39.0", size 83402752
2019-06-26 15:07:04.058 UTC [39] STATEMENT:  SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800)as time,
        avg(attrvalue::float) as relativeHumidity
        FROM urbansense.weather WHERE attrname='relativeHumidity' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;
2019-06-26 15:07:04.076 UTC [40] LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp40.0", size 83681280
2019-06-26 15:07:04.076 UTC [40] STATEMENT:  SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800)as time,
        avg(attrvalue::float) as relativeHumidity
        FROM urbansense.weather WHERE attrname='relativeHumidity' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;
2019-06-26 15:07:04.196 UTC [38] LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp38.0", size 84140032

Anyone with an idea how to solve this?
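One hint is in the log itself: the "parallel worker for PID 41" lines show the failing query ran with parallel workers, and the segments that no longer fit are the ones parallel query creates in /dev/shm. A sketch of a quick test, run in the same session before the panel query, is to switch parallelism off and re-run:

postgres=# SET max_parallel_workers_per_gather = 0;

If the query then completes (likely more slowly), the fix is either a larger /dev/shm for the container (see the note under EDIT-3) or permanently reduced parallelism.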

I have a PostgreSQL 11.2 (Debian 11.2-1.pgdg90+1) running in a docker container.

I get the following error with some queries:

could not resize shared memory segment "/PostgreSQL.860708388" to 536870912 bytes: No space left on device

system:

  • Ubuntu 18.04.5
  • memory: 126 GB
  • cores: 64

Disk usage is fine, as are quotas.

Postgres settings:

  • work_mem: 30GB
  • dynamic_shared_memory_type: posix
  • max_parallel_workers_per_gather: 32
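The last two settings interact badly with the first: a Parallel Hash is allowed a combined budget of roughly work_mem multiplied by the number of participating processes, so as a rough worst-case bound (assuming one leader plus 32 workers):

30 GB × (32 + 1) ≈ 990 GB of dynamic shared memory for a single hash join

far more than any /dev/shm tmpfs can hold, which is why the failure shows up as "No space left on device" rather than an out-of-memory error.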

asked Sep 2, 2020 at 9:16 by Dmitriy Grankin
Lowering max_parallel_workers_per_gather to 8 seems to have fixed the problem in my case.

answered Sep 2, 2020 at 9:39 by Dmitriy Grankin
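For reference, this setting can be changed without a server restart; a sketch of applying it persistently from psql as a superuser:

postgres=# ALTER SYSTEM SET max_parallel_workers_per_gather = 8;
postgres=# SELECT pg_reload_conf();

Lowering work_mem from 30GB would shrink the same work_mem × workers product from the other side.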

On Mon, Dec 16, 2019 at 12:49:06PM -0600, Justin Pryzby wrote:
>A customer’s report query hit this error.
>ERROR: could not resize shared memory segment "/PostgreSQL.2011322019" to 134217728 bytes: No space left on device
>
>I found:
>https://www.postgresql.org/message-id/flat/CAEepm%3D2D_JGb8X%3DLa-0PX9C8dBX9%3Dj9wY%2By1-zDWkcJu0%3DBQbA%40mail.gmail.com
>
>work_mem | 128MB
>dynamic_shared_memory_type | posix
>version | PostgreSQL 12.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23), 64-bit
>Running centos 6.9 / linux 2.6.32-696.23.1.el6.x86_64
>
>$ free -m
>             total       used       free     shared    buffers     cached
>Mem:          7871       7223        648       1531          5       1988
>-/+ buffers/cache:       5229       2642
>Swap:         4095       2088       2007
>
>$ mount | grep /dev/shm
>tmpfs on /dev/shm type tmpfs (rw)
>
>$ du -hs /dev/shm
>0 /dev/shm
>
>$ df /dev/shm
>Filesystem     1K-blocks  Used Available Use% Mounted on
>tmpfs            4030272    24   4030248   1% /dev/shm
>
>Later, I see:
>$ df -h /dev/shm
>Filesystem      Size  Used Avail Use% Mounted on
>tmpfs           3.9G  3.3G  601M  85% /dev/shm
>
>I can reproduce the error running a single instance of the query.
>
>The query plan is 1300 lines long, and involves 482 "Scan" nodes on a table
>which currently has 93 partitions, and for which current partitions are
>"daily". I believe I repartitioned its history earlier this year to "monthly",
>probably to avoid "OOM with many sorts", as reported here:
>https://www.postgresql.org/message-id/20190708164401.GA22387%40telsasoft.com
>

Interestingly enough, I ran into the same ERROR (not sure if the same
root cause) while investigating bug #16104 [1], i.e. on a much simpler
query (single join).

This particular machine is a bit smaller (only 8GB of RAM and less
disk space) so I created a smaller table with "just" 1.5B rows:

create table test as select generate_series(1, 1500000000)::bigint i;
set work_mem = '150MB';
set max_parallel_workers_per_gather = 8;
analyze test;

explain select count(*) from test t1 join test t2 using (i);

                                            QUERY PLAN
---------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=67527436.36..67527436.37 rows=1 width=8)
   ->  Gather  (cost=67527435.53..67527436.34 rows=8 width=8)
         Workers Planned: 8
         ->  Partial Aggregate  (cost=67526435.53..67526435.54 rows=1 width=8)
               ->  Parallel Hash Join  (cost=11586911.03..67057685.47 rows=187500024 width=0)
                     Hash Cond: (t1.i = t2.i)
                     ->  Parallel Seq Scan on test t1  (cost=0.00..8512169.24 rows=187500024 width=8)
                     ->  Parallel Hash  (cost=8512169.24..8512169.24 rows=187500024 width=8)
                           ->  Parallel Seq Scan on test t2  (cost=0.00..8512169.24 rows=187500024 width=8)
(9 rows)

explain analyze select count(*) from test t1 join test t2 using (i);

ERROR: could not resize shared memory segment "/PostgreSQL.1743102822" to 536870912 bytes: No space left on device

Now, work_mem = 150MB might be a bit too high considering the machine
only has 8GB of RAM (1GB of which is shared_buffers). But that's still
just 1.2GB of RAM, and this is not an OOM. This actually fills the
/dev/shm mount, which is limited to 4GB on this box:

bench ~ # df | grep shm
shm 3994752 16 3994736 1% /dev/shm

So somewhere in the parallel hash join, we allocate 4GB of shared segments …

The filesystem usage from the moment of the query execution to the
failure looks about like this:

Time      fs   1K-blocks     Used  Available Use% Mounted on
------------------------------------------------------------
10:13:34 shm 3994752 34744 3960008 1% /dev/shm
10:13:35 shm 3994752 35768 3958984 1% /dev/shm
10:13:36 shm 3994752 37816 3956936 1% /dev/shm
10:13:39 shm 3994752 39864 3954888 1% /dev/shm
10:13:42 shm 3994752 41912 3952840 2% /dev/shm
10:13:46 shm 3994752 43960 3950792 2% /dev/shm
10:13:49 shm 3994752 48056 3946696 2% /dev/shm
10:13:56 shm 3994752 52152 3942600 2% /dev/shm
10:14:02 shm 3994752 56248 3938504 2% /dev/shm
10:14:09 shm 3994752 60344 3934408 2% /dev/shm
10:14:16 shm 3994752 68536 3926216 2% /dev/shm
10:14:30 shm 3994752 76728 3918024 2% /dev/shm
10:14:43 shm 3994752 84920 3909832 3% /dev/shm
10:14:43 shm 3994752 84920 3909832 3% /dev/shm
10:14:57 shm 3994752 93112 3901640 3% /dev/shm
10:15:11 shm 3994752 109496 3885256 3% /dev/shm
10:15:38 shm 3994752 125880 3868872 4% /dev/shm
10:16:06 shm 3994752 142264 3852488 4% /dev/shm
10:19:57 shm 3994752 683208 3311544 18% /dev/shm
10:19:58 shm 3994752 1338568 2656184 34% /dev/shm
10:20:02 shm 3994752 1600712 2394040 41% /dev/shm
10:20:03 shm 3994752 2125000 1869752 54% /dev/shm
10:20:04 shm 3994752 2649288 1345464 67% /dev/shm
10:20:08 shm 3994752 2518216 1476536 64% /dev/shm
10:20:10 shm 3994752 3173576 821176 80% /dev/shm
10:20:14 shm 3994752 3697864 296888 93% /dev/shm
10:20:15 shm 3994752 3417288 577464 86% /dev/shm
10:20:16 shm 3994752 3697864 296888 93% /dev/shm
10:20:20 shm 3994752 3828936 165816 96% /dev/shm

And at the end, the contents of /dev/shm looks like this:

-rw------- 1 postgres postgres  33624064 Dec 16 22:19 PostgreSQL.1005341478
-rw------- 1 postgres postgres   1048576 Dec 16 22:20 PostgreSQL.1011142277
-rw------- 1 postgres postgres   1048576 Dec 16 22:20 PostgreSQL.1047241463
-rw------- 1 postgres postgres  16777216 Dec 16 22:16 PostgreSQL.1094702083
-rw------- 1 postgres postgres 268435456 Dec 16 22:20 PostgreSQL.1143288540
-rw------- 1 postgres postgres 536870912 Dec 16 22:20 PostgreSQL.1180709918
-rw------- 1 postgres postgres      7408 Dec 14 15:43 PostgreSQL.1239805533
-rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.1292496162
-rw------- 1 postgres postgres 268435456 Dec 16 22:20 PostgreSQL.138443773
-rw------- 1 postgres postgres   4194304 Dec 16 22:15 PostgreSQL.1442035225
-rw------- 1 postgres postgres  67108864 Dec 16 22:20 PostgreSQL.147930162
-rw------- 1 postgres postgres  16777216 Dec 16 22:20 PostgreSQL.1525896026
-rw------- 1 postgres postgres  67108864 Dec 16 22:20 PostgreSQL.1541133044
-rw------- 1 postgres postgres  33624064 Dec 16 22:14 PostgreSQL.1736434498
-rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.1845631548
-rw------- 1 postgres postgres  33624064 Dec 16 22:19 PostgreSQL.1952212453
-rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.1965950370
-rw------- 1 postgres postgres   8388608 Dec 16 22:15 PostgreSQL.1983158004
-rw------- 1 postgres postgres  33624064 Dec 16 22:19 PostgreSQL.1997631477
-rw------- 1 postgres postgres  16777216 Dec 16 22:20 PostgreSQL.2071391455
-rw------- 1 postgres postgres   2097152 Dec 16 22:20 PostgreSQL.210551357
-rw------- 1 postgres postgres  67108864 Dec 16 22:20 PostgreSQL.2125755117
-rw------- 1 postgres postgres   8388608 Dec 16 22:14 PostgreSQL.2133152910
-rw------- 1 postgres postgres   2097152 Dec 16 22:20 PostgreSQL.255342242
-rw------- 1 postgres postgres   2097152 Dec 16 22:20 PostgreSQL.306663870
-rw------- 1 postgres postgres 536870912 Dec 16 22:20 PostgreSQL.420982703
-rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.443494372
-rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.457417415
-rw------- 1 postgres postgres   4194304 Dec 16 22:20 PostgreSQL.462376479
-rw------- 1 postgres postgres  16777216 Dec 16 22:16 PostgreSQL.512403457
-rw------- 1 postgres postgres   8388608 Dec 16 22:14 PostgreSQL.546049346
-rw------- 1 postgres postgres    196864 Dec 16 22:13 PostgreSQL.554918510
-rw------- 1 postgres postgres    687584 Dec 16 22:13 PostgreSQL.585813590
-rw------- 1 postgres postgres   4194304 Dec 16 22:15 PostgreSQL.612034010
-rw------- 1 postgres postgres  33624064 Dec 16 22:19 PostgreSQL.635077233
-rw------- 1 postgres postgres      7408 Dec 15 17:28 PostgreSQL.69856210
-rw------- 1 postgres postgres 268435456 Dec 16 22:20 PostgreSQL.785623413
-rw------- 1 postgres postgres   4194304 Dec 16 22:14 PostgreSQL.802559608
-rw------- 1 postgres postgres  67108864 Dec 16 22:20 PostgreSQL.825442833
-rw------- 1 postgres postgres   8388608 Dec 16 22:15 PostgreSQL.827813234
-rw------- 1 postgres postgres 268435456 Dec 16 22:20 PostgreSQL.942923396
-rw------- 1 postgres postgres 536870912 Dec 16 22:20 PostgreSQL.948192559
-rw------- 1 postgres postgres   2097152 Dec 16 22:20 PostgreSQL.968081079

That's a lot of shared segments, considering there are only ~8 workers
for the parallel hash join. And some of the segments are 512MB, so not
exactly tiny/abiding by the work_mem limit :-(

I'm not very familiar with the PHJ internals, but this seems a bit
excessive. I mean, how am I supposed to limit memory usage in these
queries? Why shouldn't this be subject to work_mem?

regards

[1] https://www.postgresql.org/message-id/flat/16104-dc11ed911f1ab9df%40postgresql.org


Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24×7 Support, Remote DBA, Training & Services
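A closing note on the thread above: when the immediate limit is an undersized /dev/shm tmpfs (the 4 GB mount in this reproduction, or Docker's 64 MB default), it can be enlarged on a live system; a sketch, assuming root and an 8 GB target:

# mount -o remount,size=8G /dev/shm

Reducing max_parallel_workers_per_gather or work_mem, as discussed throughout, caps the demand instead.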
