PostgreSQL error: out of shared memory

I have a query that inserts a given number of test records.
It looks something like this:

CREATE OR REPLACE FUNCTION _miscRandomizer(vNumberOfRecords int)
RETURNS void AS $$
declare
    -- declare all the variables that will be used
begin
    select into vTotalRecords count(*) from tbluser;
    vIndexMain := vTotalRecords;

    loop
        exit when vIndexMain >= vNumberOfRecords + vTotalRecords;

        -- set some other variables that will be used for the insert
        -- insert record with these variables in tblUser
        -- insert records in some other tables
        -- run another function that calculates and saves some stats regarding inserted records

        vIndexMain := vIndexMain + 1;
    end loop;
    return;
end
$$ LANGUAGE plpgsql;

When I run this query for 300 records it throws the following error:

********** Error **********

ERROR: out of shared memory
SQL state: 53200
Hint: You might need to increase max_locks_per_transaction.
Context: SQL statement "create temp table _counts(...)"
PL/pgSQL function prcStatsUpdate(integer) line 25 at SQL statement
SQL statement "SELECT prcStatsUpdate(vUserId)"
PL/pgSQL function _miscrandomizer(integer) line 164 at PERFORM

The function prcStatsUpdate looks like this:

CREATE OR REPLACE FUNCTION prcStatsUpdate(vUserId int)
RETURNS void AS
$$
declare
    vRequireCount boolean;
    vRecordsExist boolean;
begin
    -- determine if this stats calculation needs to be performed
    select into vRequireCount
        case when count(*) > 0 then true else false end
    from tblSomeTable q
    where [x = y]
      and [x = y];

    -- if above is true, determine if stats were previously calculated
    select into vRecordsExist
        case when count(*) > 0 then true else false end
    from tblSomeOtherTable c
    inner join tblSomeTable q
       on q.Id = c.Id
    where [x = y]
      and [x = y]
      and [x = y]
      and vRequireCount = true;

    -- calculate counts and store them in temp table
    create temp table _counts(...);
    insert into _counts(x, y, z)
    select uqa.x, uqa.y, count(*) as aCount
    from tblSomeOtherTable uqa
    inner join tblSomeTable q
       on uqa.Id = q.Id
    where uqa.Id = vUserId
      and qId = [SomeOtherVariable]
      and [x = y]
      and vRequireCount = true
    group by uqa.x, uqa.y;

    -- if stats records exist, update them; else - insert new
    update tblSomeOtherTable 
    set aCount = c.aCount
    from _counts c
    where c.Id = tblSomeOtherTable.Id
      and c.OtherId = tblSomeOtherTable.OtherId
      and vRecordsExist = true
      and vRequireCount = true;

    insert into tblSomeOtherTable(x, y, z)
    select x, y, z
    from _counts
    where vRecordsExist = false
      and vRequireCount = true;

    drop table _counts;
end;
$$ LANGUAGE plpgsql;

It looks like the error is the result of memory building up somewhere, but since I create the temp table, use it, and drop it right away (thus, to my understanding, releasing the memory), I don’t see how that is possible.
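Worth noting for context: the locks taken by create temp table and drop table are released only when the surrounding transaction commits, and the entire _miscRandomizer call runs as a single transaction, so every loop iteration leaves lock entries behind even though the table itself is gone. A minimal sketch of one way around this (the columns are placeholders, like the ones in the function above) is to create the temp table once per session and reuse it, instead of creating and dropping it on every call:

-- Create the temp table only if this session does not already have one.
-- ON COMMIT DELETE ROWS empties it automatically at transaction end.
create temp table if not exists _counts(
    x int,
    y int,
    aCount bigint
) on commit delete rows;

-- Clear leftovers from a previous call inside the same transaction.
delete from _counts;

-- ... fill and read _counts as before, but do NOT drop it at the end:
-- the session then holds locks on a single relation instead of
-- accumulating entries for a freshly created table on every iteration.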

Update

I updated the prcStatsUpdate function to represent the actual function that I have; I just replaced the table and column names with generic ones. The reason I didn’t post it the first time is that it is mostly very simple SQL operations, and I assumed there could not be any issue with them.

Also, where does the line counting start? The error says line 25, but that just can’t be true if you count from the beginning, since line 25 there is a condition in a where clause. Do you start counting from begin?

Any ideas?

“out of shared memory”: some of you might have seen that error message in PostgreSQL before. But what does it really mean, and how can you prevent it? The problem is actually not as obscure as it might seem at first glance: max_locks_per_transaction is the critical configuration parameter you need to use to avoid trouble.

“out of shared memory”: when it happens

Most of the shared memory used by PostgreSQL is of a fixed size. This is true for the I/O cache (shared buffers) and for many other components as well. One of those components has to do with locking. If you touch a table inside a transaction, PostgreSQL has to track your activity to ensure that a concurrent transaction cannot drop the table you are about to touch: a DROP TABLE (or some other DDL) has to wait until all reading transactions have terminated. The trouble is that you have to store the information about this tracked activity somewhere, and this is exactly the point you have to understand.

Let us run a simple script:

BEGIN;

SELECT 'CREATE TABLE a' || id || ' (id int);' 
       FROM generate_series(1, 20000) AS id;

\gexec

What this script does is start a transaction and generate 20,000 CREATE TABLE statements. It simply generates SQL which is then executed automatically (\gexec treats each row of the previous statement’s result as an SQL statement and runs it).

Let us see what the SELECT statement produced …

BEGIN
          ?column?          
----------------------------
 CREATE TABLE a1 (id int);
 CREATE TABLE a2 (id int);
 CREATE TABLE a3 (id int);
 CREATE TABLE a4 (id int);
 CREATE TABLE a5 (id int);
...

And now let us see what PostgreSQL does:

...
CREATE TABLE
CREATE TABLE
ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
ERROR:  current transaction is aborted, commands ignored until end of transaction block
ERROR:  current transaction is aborted, commands ignored until end of transaction block
ERROR:  current transaction is aborted, commands ignored until end of transaction block
ERROR:  current transaction is aborted, commands ignored until end of transaction block
...

After a few thousand tables, PostgreSQL errors out: “out of shared memory”. Note that we created all those tables in a single transaction: PostgreSQL had to lock every one of them and eventually ran out of memory. Remember: the database uses a fixed-size shared memory area to store those locks.

The logical question is: what is the size of this memory area? Two parameters come into play:

test=# SHOW max_connections;
 max_connections
-----------------
 100
(1 row)

test=# SHOW max_locks_per_transaction;
 max_locks_per_transaction
---------------------------
 64
(1 row)

The number of locks we can keep in shared memory is max_connections x max_locks_per_transaction. Keep in mind that row-level locks are NOT relevant here. You can easily do a …

SELECT * FROM billions_of_rows FOR UPDATE;

… without running out of memory, because row locks are stored in the rows on disk and not in RAM. Therefore the number of tables is relevant, not the number of rows.
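To put concrete numbers on that formula (the precise formula also adds max_prepared_transactions, which defaults to 0, so treat this as an approximation):

-- Rough capacity of the shared lock table with the settings shown above:
-- max_connections (100) x max_locks_per_transaction (64) = 6,400 entries.
SELECT current_setting('max_connections')::int
     * current_setting('max_locks_per_transaction')::int AS approx_lock_slots;
-- With the defaults this returns 6400, which is why a single transaction
-- trying to create 20,000 tables has to fail.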

Inspecting pg_locks

How can you figure out what is currently going on? To demonstrate what you can do, I have prepared a small example:

test=# CREATE TABLE t_demo (id int);
CREATE TABLE

First of all, you can create a simple table.
As you might know, names are not what PostgreSQL works with internally; only numbers (object IDs) count. To fetch the object ID of the table, try the following statement:

test=# SELECT oid, relkind, relname
       FROM pg_class
       WHERE relname = 't_demo';
  oid   | relkind | relname
--------+---------+---------
 232787 | r       | t_demo
(1 row)

In my example, the object id is 232787. Let us figure out where this number pops up:

test=# BEGIN;
BEGIN
test=# SELECT * FROM t_demo;
 id
----
(0 rows)

test=# \x
Expanded display is on.
test=# SELECT * FROM pg_locks WHERE relation = '232787';
-[ RECORD 1 ]------+----------------
locktype           | relation
database           | 187812
relation           | 232787
page               |
tuple              |
virtualxid         |
transactionid      |
classid            |
objid              |
objsubid           |
virtualtransaction | 3/6633
pid                | 106174
mode               | AccessShareLock
granted            | t
fastpath           | t

Since we are reading from the table, you can see that PostgreSQL has to keep an ACCESS SHARE lock, which only ensures that the table cannot be dropped or modified (by DDL) in a way that harms concurrent SELECT statements.
The more tables a transaction touches, the more entries pg_locks will contain. Under heavy concurrency, the sheer number of entries can become a problem.
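If you want to watch that pressure build up while the problem is being reproduced, a simple sketch is to aggregate pg_locks, for example:

-- Count relation-level lock entries per backend and lock mode;
-- a backend holding thousands of entries is the usual suspect.
SELECT pid, mode, count(*) AS lock_entries
FROM pg_locks
WHERE locktype = 'relation'
GROUP BY pid, mode
ORDER BY lock_entries DESC;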

PostgreSQL partitioning and how it relates to “out of shared memory”

If you are running a typical application, “out of shared memory” errors are rare, because the overall number of relevant locks is usually quite low. However, if you rely heavily on excessive partitioning, life is different: in PostgreSQL, a partition is basically a normal table, and it is treated as such. Locking can therefore become an issue.

Let us take a look at the following example:

BEGIN;

CREATE TABLE t_part (id int) PARTITION BY LIST (id);

SELECT 'CREATE TABLE t_part_' || id
    || ' PARTITION OF t_part FOR VALUES IN ('
    || id || ');'
FROM generate_series(1, 1000) AS id;

\gexec

SELECT count(*) FROM t_part;

First of all, a parent table is created; then 1000 partitions are added. For the sake of simplicity, each partition is only allowed to hold exactly one id value, but let’s not worry about that for now. Following that, a simple SELECT statement is executed; such a statement is guaranteed to read all the partitions.

The following listing shows which SQL the script has generated to create partitions:

                              ?column?                              
--------------------------------------------------------------------
 CREATE TABLE t_part_1 PARTITION OF t_part FOR VALUES IN (1);
 CREATE TABLE t_part_2 PARTITION OF t_part FOR VALUES IN (2);
 CREATE TABLE t_part_3 PARTITION OF t_part FOR VALUES IN (3);
 CREATE TABLE t_part_4 PARTITION OF t_part FOR VALUES IN (4);
 CREATE TABLE t_part_5 PARTITION OF t_part FOR VALUES IN (5);
...

After running the

SELECT count(*) FROM t_part

statement, the important observation is this:

SELECT count(*)
FROM pg_locks
WHERE mode = 'AccessShareLock';
 count
-------
  1004
(1 row)

PostgreSQL already needs more than 1000 locks to do this. Partitioning therefore increases the pressure on this shared memory area and makes “out of shared memory” more likely. If you use partitioning heavily, it can make sense to raise max_locks_per_transaction.
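For reference, a sketch of how the parameter can be raised; the value 256 below is just an example, and either way the change requires a server restart:

-- Option 1: edit postgresql.conf directly:
--   max_locks_per_transaction = 256
-- Option 2 (PostgreSQL 9.4 and later): let the server rewrite its
-- own configuration, then restart:
ALTER SYSTEM SET max_locks_per_transaction = 256;
-- After the restart, verify the new value:
SHOW max_locks_per_transaction;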

Answer by Michael Cunningham

A quick fix for the PSQLException “out of shared memory” error is to increase max_locks_per_transaction.

Comments:

Did you try increasing max_locks_per_transaction? – Mike Sherrill ‘Cat Recall’, May 11 ’13 at 0:56

Did you look up what max_locks_per_transaction controls? – Mike Sherrill ‘Cat Recall’, May 11 ’13 at 10:45

Answer by Ares Buchanan

I’ve been performing fairly intensive schema dropping and creation on a PostgreSQL server, and now it complains:

WARNING:  out of shared memory
ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.

The problem remains even if PostgreSQL is simply restarted with service postgresql restart, so I suspect max_locks_per_transaction isn’t the only thing that needs tuning.

Comments:

Have you tried actually following the hint’s suggestion? So far I have uncommented max_locks_per_transaction = 64 # min 10 in /etc/postgresql/9.3/main/postgresql.conf. – 48347, Sep 29 ’14 at 13:03

The default max_locks_per_transaction is 64 to begin with, so uncommenting that line didn’t effectively change anything. – yieldsfalsehood, Sep 29 ’14 at 13:12

MORE INFO 1409291350: some details are missing, but here is the core SQL output:

postgres=# SELECT version();
 PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit

And:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.1 LTS
Release:        14.04
Codename:       trusty

Answer by Anais Anderson

Normally we see “out of shared memory” errors caused by disk space or kernel parameter issues, and it seems your kernel parameters have default values. Verify your disk space, or set the kernel parameters based on your RAM size.

Moreover, I get the message “out of shared memory”, not “out of memory”.

I see a constant increase of shared memory, more or less 200 kB every minute. When all users stop, it decreases and sometimes it stops growing, but then it starts again without a clear reason.

We are going crazy with "out of shared memory" errors, and we can't figure out the reason. We have PostgreSQL "PostgreSQL 9.6.9 on x86_64-pc-linux-gnu (Ubuntu 9.6.9-2.pgdg16.04+1), compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 64-bit". The server has 92 GB of RAM and is a mixed environment (mostly OLTP, with some DW), with 100 sessions constantly open (a CRM) and some long queries run every half hour.

Everything works fine, except that after a day and a half we start receiving a lot of "out of shared memory" messages. I am sure it is not related to the usual max_locks_per_transaction issue, because we have set max_locks_per_transaction to 384, and when we receive these messages we have no more than 50-100 locks in total. Restarting the server usually keeps things working for another day and a half, and then the messages start again.

Looking at the log, we see that this error starts seemingly at random, sometimes on very small queries returning a few kilobytes of data. We have tried a lot of different configurations; we have tried pgtune and pgconfig 2.0.
Currently, we have: 
max_connections = 200
shared_buffers = 23GB
effective_cache_size = 69GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 4
effective_io_concurrency = 2
work_mem = 60293kB
min_wal_size = 2GB
max_wal_size = 4GB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_locks_per_transaction = 384
but we have also tried work_mem at 130 MB, shared_buffers up to a maximum of 40 GB, and effective_cache_size at 4 GB.

shared memory limits are very big: 
max number of segments = 4096
max seg size (kbytes) = 18014398509465599
max total shared memory (kbytes) = 18014398442373116
min seg size (bytes) = 1

thanks

Answer by Keira Juarez

PostgreSQL’s architecture is based on three fundamental parts: processes, memory, and disk. The shared buffer pool is where PostgreSQL loads pages of tables and indexes from disk so that it can work directly from memory, reducing disk access. Once you confirm that PostgreSQL is responsible for this issue, the next step is to check why. If you know that the PostgreSQL process has high memory utilization but the logs didn’t help, another tool that can be useful here is pg_top.

Checking both the PostgreSQL and system logs is definitely a good way to get more information about what is happening in your database and system. You could see messages like:

Resource temporarily unavailable

Out of memory: Kill process 1161 (postgres) score 366 or sacrifice child

Or even multiple database message errors like:

FATAL:  password authentication failed for user "username"

ERROR:  duplicate key value violates unique constraint "sbtest21_pkey"

ERROR:  deadlock detected

Answer by Derrick Hunter

LOAD DATABASE
    FROM mysql://dps:@localhost/XXXXXXX
    INTO postgresql://xxxxx:[email protected]/XXXXXXX

WITH workers = 4, concurrency = 2;

SQL Error [53200]: ERROR: out of shared memory. Hint: You might need to increase max_locks_per_transaction

When running queries against the (Postgres) database, the following error occurred:

24.02.21 13:50:38.219,main,ERROR,ExecSql,null
com.bssys.db.jdbc.DBSQLException: ERROR: out of shared memory
Hint: You might need to increase max_locks_per_transaction.

Detailed information about the parameter is available here. In short:

max_locks_per_transaction (integer) controls the average number of object locks allocated for each transaction; individual transactions can lock more objects, as long as the locks of all transactions fit in the lock table.

The default value is 64.

The related parameter max_pred_locks_per_transaction (integer) sits right next to it.

The postgresql.conf file (in Postgres/data/) originally reads:

#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------

#deadlock_timeout = 1s
#max_locks_per_transaction = 64         # min 10
                                        # (change requires restart)
#max_pred_locks_per_transaction = 64    # min 10
                                        # (change requires restart)

I changed it to the following:

#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------

#deadlock_timeout = 1s
max_locks_per_transaction = 256         # min 10
                                        # (change requires restart)
max_pred_locks_per_transaction = 1000   # min 10
                                        # (change requires restart)

P.S. After making the change, remove the “#” character at the beginning of the line (the setting must be uncommented).
