DBMS error 53200: error out of shared memory

1C 8 + PostgreSQL out of memory
   Johan

06.09.17 — 08:02

Good afternoon, I'm asking for help with a problem.

I'm running 1C 8.3.9.2309 + PostgreSQL 9.4.2-1.1C. When loading a database from a .dt file I get error 53200: error out of memory, detail: failed on request of size 536870912.

OS: Windows Server 2016 Standard; the same problem occurs on other OSes as well.

I've tried changing the PostgreSQL config settings; they are currently as follows.

These are the main ones of interest, I believe:

shared_buffers = 64MB            # min 128kB

temp_buffers = 256MB            # min 800kB

work_mem = 128MB                # min 64kB

maintenance_work_mem = 256MB        # min 1MB

effective_cache_size = 6GB

——————————————

Total RAM: 16 GB.

Notably, only this one specific database fails to load (it is intact; it was tested, a BP 2.0 base).

I also tried increasing the page file on drive C.

Has anyone run into this same problem? Who solved it, and how?

   Arh01

1 — 06.09.17 — 08:10

What's the bitness of your PostgreSQL?

   Johan

2 — 06.09.17 — 08:12

(1) 64-bit, same as Windows Server.

   Johan

3 — 06.09.17 — 08:21

Here's what the log says:

pg_authid_rolname_index: 1024 total in 1 blocks; 552 free (0 chunks); 472 used

  MdSmgr: 8192 total in 1 blocks; 6544 free (0 chunks); 1648 used

  LOCALLOCK hash: 8192 total in 1 blocks; 2880 free (0 chunks); 5312 used

  Timezones: 79320 total in 2 blocks; 5968 free (0 chunks); 73352 used

  ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used

2017-09-04 18:55:43 MSK ERROR:  out of memory

2017-09-04 18:55:43 MSK DETAIL:  Failed on request of size 536870912.

2017-09-04 18:55:43 MSK CONTEXT:  COPY config, line 328, column binarydata

2017-09-04 18:55:43 MSK STATEMENT:  COPY Config FROM STDIN BINARY

   rphosts

4 — 06.09.17 — 08:21

Try setting work_mem a bit higher than what is being requested.

   Johan

5 — 06.09.17 — 08:22

I tried setting 256 and 512.

   rphosts

6 — 06.09.17 — 08:22

And by the way, does this .dt file load anywhere at all? Are you sure it isn't corrupted?

   rphosts

7 — 06.09.17 — 08:23

(5) Translate "on request of size 536870912".

   Johan

8 — 06.09.17 — 08:25

(6) Yes, it loads as a file-based infobase.

   Johan

9 — 06.09.17 — 08:27

(6) And it's definitely not corrupted: I ran "test and repair" and chdbfl, no errors.

   Johan

10 — 06.09.17 — 08:30

(7) A request of size 536870912; I assume there isn't enough memory to load some table.

   Johan

11 — 06.09.17 — 08:31

So where do I increase it? Or maybe some other parameter needs to be increased?

   rphosts

12 — 06.09.17 — 08:34

(11) Is 536 greater than 512?

   Johan

13 — 06.09.17 — 08:36

(12) I see what you're getting at; I tried setting 1024 as well.

   Johan

14 — 06.09.17 — 08:36

(12) But not everywhere.

   Johan

15 — 06.09.17 — 08:39

I'll try setting work_mem > 536, but I can only try it this evening.

   rphosts

16 — 06.09.17 — 08:52

(15) Set it well above that right away, just to be sure.

   Johan

17 — 06.09.17 — 09:07

(16) Yes, I'll set 1024 and report back with the result.

   Asmody

18 — 06.09.17 — 09:20

And temp_buffers too.

   Asmody

19 — 06.09.17 — 09:30

It's recommended to make shared_buffers larger: 1/4 to 1/3 of RAM.

maintenance_work_mem: 1/2 of RAM or more (up to RAM minus shared_buffers).

   Johan

20 — 06.09.17 — 09:40

(19) If I'm not mistaken, I tried setting shared_buffers and temp_buffers above 1 GB, and then the PostgreSQL service stops starting.

   Johan

21 — 06.09.17 — 11:01

(16) I tried setting work_mem, shared_buffers and temp_buffers to 1024; the error still appears.

   Johan

22 — 06.09.17 — 11:04

I noticed something: in the PostgreSQL properties there's a version string that reads: PostgreSQL 9.4.2, compiled by Visual C++ build 1500, 32-bit.

   Johan

23 — 06.09.17 — 11:06

I don't like that line: it suggests 32-bit Visual C++ components are in use. I checked an older, long-established server also running PostgreSQL, and there it says 64-bit. Could that be the issue?

   Johan

24 — 06.09.17 — 11:26

Looks like yes. Hooray, sort of: it's loading and no longer throwing the error.

   Johan

25 — 06.09.17 — 11:36

Victory! Guys, my sincere apologies: I spent 3 days looking in the wrong direction. I installed the wrong distribution, the generic one, when I needed postgresql-9.4.2-1.1C_x64.

   Arh01

26 — 06.09.17 — 11:49

(25) So now you've learned to tell x64 applications from x86?

   dezss

27 — 06.09.17 — 11:55

(12) They're equal (536870912 bytes is exactly 512 MB).

  

Johan

28 — 06.09.17 — 12:35

(26) Yes, I installed it without checking. The description in pg and the service both said 64-bit, but when I saw the version string, that's when I got suspicious.
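The root cause in this thread was a 32-bit PostgreSQL build on a 64-bit OS: a 32-bit server process cannot satisfy a single 512 MB allocation no matter how the memory parameters are tuned. As a quick sanity check, the version string reported by the server reveals the build's bitness (the build number shown below is illustrative):

```sql
-- The trailing "64-bit"/"32-bit" in version() reflects the compiled
-- server binary, not the operating system it runs on.
SELECT version();
-- e.g. "PostgreSQL 9.4.2, compiled by Visual C++ build 1800, 64-bit"
```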

I have a query that inserts a given number of test records.
It looks something like this:

CREATE OR REPLACE FUNCTION _miscRandomizer(vNumberOfRecords int)
RETURNS void AS $$
declare
    -- declare all the variables that will be used
begin
    select into vTotalRecords count(*) from tbluser;
    vIndexMain := vTotalRecords;

    loop
        exit when vIndexMain >= vNumberOfRecords + vTotalRecords;

        -- set some other variables that will be used for the insert
        -- insert record with these variables in tblUser
        -- insert records in some other tables
        -- run another function that calculates and saves some stats regarding inserted records

        vIndexMain := vIndexMain + 1;
        end loop;
    return;
end
$$ LANGUAGE plpgsql;

When I run this query for 300 records it throws the following error:

********** Error **********

ERROR: out of shared memory
SQL state: 53200
Hint: You might need to increase max_locks_per_transaction.
Context: SQL statement "create temp table _counts(...)"
PL/pgSQL function prcStatsUpdate(integer) line 25 at SQL statement
SQL statement "SELECT prcStatsUpdate(vUserId)"
PL/pgSQL function _miscrandomizer(integer) line 164 at PERFORM

The function prcStatsUpdate looks like this:

CREATE OR REPLACE FUNCTION prcStatsUpdate(vUserId int)
RETURNS void AS
$$
declare
    vRequireCount boolean;
    vRecordsExist boolean;
begin
    -- determine if this stats calculation needs to be performed
    select into vRequireCount
        case when count(*) > 0 then true else false end
    from tblSomeTable q
    where [x = y]
      and [x = y];

    -- if above is true, determine if stats were previously calculated
    select into vRecordsExist
        case when count(*) > 0 then true else false end
    from tblSomeOtherTable c
    inner join tblSomeTable q
       on q.Id = c.Id
    where [x = y]
      and [x = y]
      and [x = y]
      and vRequireCount = true;

    -- calculate counts and store them in temp table
    create temp table _counts(...);
    insert into _counts(x, y, z)
    select uqa.x, uqa.y, count(*) as aCount
    from tblSomeOtherTable uqa
    inner join tblSomeTable q
       on uqa.Id = q.Id
    where uqa.Id = vUserId
      and qId = [SomeOtherVariable]
      and [x = y]
      and vRequireCount = true
    group by uqa.x, uqa.y;

    -- if stats records exist, update them; else - insert new
    update tblSomeOtherTable 
    set aCount = c.aCount
    from _counts c
    where c.Id = tblSomeOtherTable.Id
      and c.OtherId = tblSomeOtherTable.OtherId
      and vRecordsExist = true
      and vRequireCount = true;

    insert into tblSomeOtherTable(x, y, z)
    select x, y, z
    from _counts
    where vRecordsExist = false
      and vRequireCount = true;

    drop table _counts;
end;
$$ LANGUAGE plpgsql;

It looks like the error is the result of memory building up somewhere, but since I create the temp table, use it, and drop it right away (thus, to my understanding, releasing memory), I don't see how that would be possible.
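A likely explanation, sketched here as an assumption rather than something stated in the post: dropping a temp table does release its storage, but the locks taken by CREATE and DROP are held until the surrounding transaction commits, and a function call (including its whole loop) runs inside a single transaction, so lock entries accumulate across iterations. A psql session can make this visible:

```sql
-- Locks from a created-and-dropped temp table persist until COMMIT.
BEGIN;
CREATE TEMP TABLE _probe(x int);
DROP TABLE _probe;

-- Still shows lock entries for this backend, even though the table is gone:
SELECT locktype, mode
FROM pg_locks
WHERE pid = pg_backend_pid();
COMMIT;
```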

Update

I updated the prcStatsUpdate function to reflect the actual function I have; I just replaced table and column names with generic ones. The reason I didn't post it the first time is that it's mostly very simple SQL operations, and I assumed there couldn't be any issues with it.

Also, where does line counting start? It says the error is on line 25, but that can't be right if you count from the beginning, since line 25 is a condition in a where clause. Does counting start from begin?

Any ideas?

Answer by Ramon Berg

"Out of shared memory": some of you might have seen that error message in PostgreSQL already. But what does it really mean, and how can you prevent it? The problem is actually not as obscure as it might seem at first glance: max_locks_per_transaction is the critical configuration parameter you need to tune to avoid trouble.

Since we are reading from a table, PostgreSQL has to keep an ACCESS SHARE lock, which only ensures that the table cannot be dropped or modified (= DDL) in a way that harms concurrent SELECT statements. The more tables a transaction touches, the more entries pg_locks will have; in case of heavy concurrency, the number of entries can become a problem.

If we create a few thousand tables in a single transaction, PostgreSQL has to lock all of them and eventually errors out with "out of shared memory". Remember: the database uses a fixed-size shared memory area to store those locks.

Later we will also look at partitioning: a parent table is created, then 1000 partitions are added (for the sake of simplicity, each partition is only allowed to hold exactly one row), and a simple SELECT statement is executed; such a statement is guaranteed to read all partitions.

Let us run a simple script:

BEGIN;

SELECT 'CREATE TABLE a' || id || ' (id int);' 
       FROM generate_series(1, 20000) AS id;

\gexec

Let us see what the SELECT statement produced …

BEGIN
          ?column?          
----------------------------
 CREATE TABLE a1 (id int);
 CREATE TABLE a2 (id int);
 CREATE TABLE a3 (id int);
 CREATE TABLE a4 (id int);
 CREATE TABLE a5 (id int);
...

And now let us see what PostgreSQL does:

...
CREATE TABLE
CREATE TABLE
ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
ERROR:  current transaction is aborted, commands ignored until end of transaction block
ERROR:  current transaction is aborted, commands ignored until end of transaction block
ERROR:  current transaction is aborted, commands ignored until end of transaction block
ERROR:  current transaction is aborted, commands ignored until end of transaction block
...

The logical question is: What is the size of this memory field? Two parameters come into play:

test=# SHOW max_connections;
 max_connections
-----------------
 100
(1 row)

test=# SHOW max_locks_per_transaction;
 max_locks_per_transaction
---------------------------
 64
(1 row)

The number of locks we can keep in shared memory is max_connections x max_locks_per_transaction. Keep in mind that row-level locks are NOT relevant here: they are stored in the tuples themselves, not in the shared lock table, so you can easily do a

SELECT * FROM billions_of_rows FOR UPDATE;

without running out of shared memory.
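The capacity formula above can be evaluated directly from SQL and compared with the current contents of pg_locks. A small sketch (the simple product ignores the extra slots reserved for prepared transactions):

```sql
-- Approximate size of the shared lock table:
-- max_connections x max_locks_per_transaction (6400 with the defaults shown below).
SELECT current_setting('max_connections')::int
     * current_setting('max_locks_per_transaction')::int AS lock_capacity;

-- Object-level locks currently held across all sessions:
SELECT count(*) AS object_locks
FROM pg_locks
WHERE locktype = 'relation';
```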

How can you figure out what is currently going on? To demonstrate what you can do, I have prepared a small example:

test=# CREATE TABLE t_demo (id int);
CREATE TABLE

First of all, you can create a simple table. Internally, PostgreSQL identifies objects by number, not by name. To fetch the object ID of a simple table, try the following statement:

test=# SELECT oid, relkind, relname
		FROM 	pg_class
 		WHERE relname = 't_demo';
  oid   | relkind | relname
--------+---------+---------
 232787 | r       | t_demo
(1 row)

In my example, the object id is 232787. Let us figure out where this number pops up:

test=# BEGIN;
BEGIN
test=# SELECT * FROM t_demo;
 id
----
(0 rows)

test=# \x
Expanded display is on.
test=# SELECT * FROM pg_locks WHERE relation = '232787';
-[ RECORD 1 ]------+----------------
locktype           | relation
database           | 187812
relation           | 232787
page               |
tuple              |
virtualxid         |
transactionid      |
classid            |
objid              |
objsubid           |
virtualtransaction | 3/6633
pid                | 106174
mode               | AccessShareLock
granted            | t
fastpath           | t

Let us take a look at the following example:

BEGIN;

CREATE TABLE t_part (id int) PARTITION BY LIST (id);

SELECT 'CREATE TABLE t_part_' || id
	|| ' PARTITION OF t_part FOR VALUES IN ('
	|| id || ');'
FROM 	generate_series(1, 1000) AS id;

\gexec

SELECT count(*) FROM t_part;

The following listing shows which SQL the script has generated to create partitions:

                              ?column?                              
--------------------------------------------------------------------
 CREATE TABLE t_part_1 PARTITION OF t_part FOR VALUES IN (1);
 CREATE TABLE t_part_2 PARTITION OF t_part FOR VALUES IN (2);
 CREATE TABLE t_part_3 PARTITION OF t_part FOR VALUES IN (3);
 CREATE TABLE t_part_4 PARTITION OF t_part FOR VALUES IN (4);
 CREATE TABLE t_part_5 PARTITION OF t_part FOR VALUES IN (5);
...

After running the

SELECT count(*) FROM t_part

statement, the important observation is:

SELECT 	count(*)
FROM 	pg_locks
WHERE 	mode = 'AccessShareLock';
 count
-------
  1004
(1 row)

Answer by Michael Cunningham

A quick fix for the PSQLException "error: out of shared memory" is to increase max_locks_per_transaction.

Did you try increasing max_locks_per_transaction?

– Mike Sherrill 'Cat Recall', May 11 '13 at 0:56

Did you look up what max_locks_per_transaction controls?

– Mike Sherrill 'Cat Recall', May 11 '13 at 10:45
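Acting on that hint means raising the parameter and restarting the server, since max_locks_per_transaction cannot be changed at runtime. A minimal sketch; the value 256 is an illustrative choice, not a recommendation from the thread, and ALTER SYSTEM is only available from PostgreSQL 9.4 on (older versions need a postgresql.conf edit):

```sql
-- Writes the setting to postgresql.auto.conf; it takes effect only
-- after a full server restart (postmaster-level parameter).
ALTER SYSTEM SET max_locks_per_transaction = 256;

-- Equivalent postgresql.conf line for pre-9.4 servers:
--   max_locks_per_transaction = 256    # min 10, default 64
```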


Answer by Ares Buchanan

But the problem remains if PostgreSQL is just restarted with service postgresql restart; I suspect max_locks_per_transaction won't change anything.

Have you tried actually following the hint's suggestion? I have uncommented max_locks_per_transaction = 64 # min 10 in /etc/postgresql/9.3/main/postgresql.conf so far.

– 48347, Sep 29 '14 at 13:03

The default max_locks_per_transaction is 64 to begin with, so uncommenting that line didn't effectively change it.

– yieldsfalsehood, Sep 29 '14 at 13:12

I've been performing fairly intensive schema dropping and creating on a PostgreSQL server, and now it complains:

WARNING:  out of shared memory
ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.

MORE INFO 1409291350: some details are missing, but I've kept the core SQL results.

postgres=# SELECT version();
PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit

And:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.1 LTS
Release:        14.04
Codename:       trusty

Answer by Anais Anderson

Normally we see "out of shared memory" errors due to disk space or kernel parameter issues, and it seems your kernel parameters have default values. Verify your disk space, or set the kernel parameters based on your RAM size.

Moreover, I get the message "out of shared memory", not "out of memory". I see a constant increase of shared memory, more or less 200 kB every minute. When all users stop it decreases and sometimes it stops growing, but then it resumes without a clear reason.

We are going crazy with "out of shared memory" errors, and we can't figure out the reason.

We have PostgreSQL "PostgreSQL 9.6.9 on x86_64-pc-linux-gnu (Ubuntu 9.6.9-2.pgdg16.04+1), compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 64-bit". The server has 92 GB of RAM; it is a mixed environment (mostly OLTP, with some DW), with 100 sessions constantly open (a CRM) and some long queries run every half hour.

Everything works fine, except that after a day and a half we start receiving a lot of "out of shared memory" messages. I am sure it is not related to the usual max_locks_per_transaction issue, because we have set max_locks_per_transaction to 384, and when we receive these messages we have no more than 50-100 locks in total.

Restarting the server usually buys us another day and a half, and then the messages start again. Looking at the log, we see that this error starts seemingly at random, sometimes on very small queries returning a few kilobytes of data. We have tried a lot of different configurations, including pgtune and pgconfig 2.0.

Currently, we have: 
max_connections = 200
shared_buffers = 23GB
effective_cache_size = 69GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 4
effective_io_concurrency = 2
work_mem = 60293kB
min_wal_size = 2GB
max_wal_size = 4GB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_locks_per_transaction = 384
but we have also tried work_mem at 130 MB, shared_buffers up to a maximum of 40 GB, and effective_cache_size at 4 GB.

shared memory limits are very big: 
max number of segments = 4096
max seg size (kbytes) = 18014398509465599
max total shared memory (kbytes) = 18014398442373116
min seg size (bytes) = 1

thanks





Answer by Keira Juarez

PostgreSQL's architecture is based on three fundamental parts: processes, memory, and disk. The shared buffer pool is where PostgreSQL loads pages with tables and indexes from disk, so it can work directly from memory and reduce disk access. When you have confirmed that PostgreSQL is responsible for the issue, the next step is to check why. If you know the PostgreSQL process has high memory utilization but the logs didn't help, another tool that can be useful here is pg_top.

Checking both the PostgreSQL and system logs is definitely a good way to get more information about what is happening in your database/system. You could see messages like:

Resource temporarily unavailable

Out of memory: Kill process 1161 (postgres) score 366 or sacrifice child

Or even various database error messages like:

FATAL:  password authentication failed for user "username"

ERROR:  duplicate key value violates unique constraint "sbtest21_pkey"

ERROR:  deadlock detected

Answer by Derrick Hunter

LOAD DATABASE
    FROM mysql://dps:@localhost/XXXXXXX
    INTO postgresql://xxxxx:[email protected]/XXXXXXX

WITH workers = 4, concurrency = 2;

