An I/O error occurred while sending to the backend

I'm seeing the following error from Postgres while running some automated tests:

    2020-03-06 23:32:57,051 WARN main c.z.h.p.ProxyConnection - HikariPool-2 - Connection org.postgresql.jdbc.PgConnec...

I enabled query logging and managed to find the offending INSERT:

insert into "myschema"."mytable" ("custcode", "custcar", "custdob", "closed") values ('a33113f2-930c-47de-95a6-b9e07650468a', 'hellow world', '2020-02-02 01:00:00+00:00', 'f')

The table is partitioned on the "custdob" column, with these partitions:

\d+ mytable
                                                           Table "myschema.mytable"
   Column   |           Type           | Collation | Nullable |                Default                 | Storage  | Stats target | Description 
------------+--------------------------+-----------+----------+----------------------------------------+----------+--------------+-------------
 id         | bigint                   |           | not null | nextval('mytable_id_seq'::regclass)    | plain    |              | 
 custcode   | uuid                     |           | not null |                                        | plain    |              | 
 custcar    | character varying        |           | not null |                                        | extended |              | 
 custdob    | timestamp with time zone |           | not null |                                        | plain    |              | 
 closed     | boolean                  |           | not null | false                                  | plain    |              | 
Partition key: RANGE (custdob)
Partitions: mytable_201902_partition FOR VALUES FROM ('2019-02-01 00:00:00+00') TO ('2019-03-01 00:00:00+00'),
            mytable_201903_partition FOR VALUES FROM ('2019-03-01 00:00:00+00') TO ('2019-04-01 00:00:00+00'),
            mytable_201908_partition FOR VALUES FROM ('2019-08-02 00:00:00+00') TO ('2019-09-01 00:00:00+00'),
            mytable_202003_partition FOR VALUES FROM ('2020-03-01 00:00:00+00') TO ('2020-04-01 00:00:00+00'),
            mytable_202004_partition FOR VALUES FROM ('2020-04-01 00:00:00+00') TO ('2020-05-01 00:00:00+00'),
            mytable_000000_partition DEFAULT

Notice the INSERT targets February's partition, but that partition is missing on my CI server, so the row should go to the DEFAULT partition. The issue is that the DEFAULT partition has this constraint:

"mytable_partition_check" CHECK (custdob < '2019-08-02 00:00:00+00'::timestamp with time zone)

So Postgres seems to be hitting a bug: it can't insert a record for February while that constraint is in place. If I drop the constraint and re-issue the offending INSERT, it succeeds.
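For CI, an alternative to dropping the constraint may be to create the missing monthly partition so the row routes there instead of to the DEFAULT partition. A minimal sketch, reusing the table and column names from the \d+ output above and following the existing partition naming convention (the JDBC URL and credentials are placeholders):

    // Sketch: create the February 2020 partition that is missing on the CI
    // server, so rows with custdob in that month no longer fall through to
    // the DEFAULT partition and its CHECK constraint.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateMissingPartition {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydatabase", "myuser", "mypassword");
                 Statement st = conn.createStatement()) {
                // Postgres verifies that no existing DEFAULT-partition row falls
                // inside the new range, so this briefly locks the default partition.
                st.execute("CREATE TABLE myschema.mytable_202002_partition "
                         + "PARTITION OF myschema.mytable "
                         + "FOR VALUES FROM ('2020-02-01 00:00:00+00') TO ('2020-03-01 00:00:00+00')");
            }
        }
    }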

System information:

  • Windows 10 Enterprise 1909
  • DBeaver version 7.3.5

Describe the problem you’re observing:

Not sure exactly what the issue is, but if a query takes longer than 2 minutes I get this error.

Error Log:
org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [08006]: An I/O error occurred while sending to the backend.
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:133)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:509)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$0(SQLQueryJob.java:440)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:168)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:427)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:812)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:3226)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:118)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:168)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:116)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:4516)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: org.postgresql.util.PSQLException: An I/O error occurred while sending to the backend.
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:335)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:327)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:130)
… 12 more
Caused by: java.io.EOFException
at org.postgresql.core.PGStream.receiveChar(PGStream.java:308)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1952)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
… 20 more
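For what it's worth, an 08006 that reliably appears once a query runs past a fixed interval (here about 2 minutes) is often a connection being cut by an idle timeout somewhere between client and server (NAT, firewall, load balancer) rather than by Postgres itself. One thing worth trying is enabling TCP keepalives and making sure no client-side read timeout is set. A minimal pgJDBC sketch with placeholder host and credentials (in DBeaver, the same tcpKeepAlive and socketTimeout driver properties can be set on the connection):

    // Sketch: connect with TCP keepalives on and no client-side read timeout,
    // then run a query that deliberately takes longer than two minutes.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.Properties;

    public class LongQueryTest {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "myuser");            // placeholder credentials
            props.setProperty("password", "mypassword");
            props.setProperty("tcpKeepAlive", "true");      // OS probes keep the link alive
            props.setProperty("socketTimeout", "0");        // 0 = wait indefinitely for results
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydatabase", props);
                 Statement st = conn.createStatement()) {
                st.execute("SELECT pg_sleep(180)");         // simulates a >2 minute query
            }
        }
    }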

I know this is a duplicate question, but I couldn't find a solution to it.

I have hosted my application on Amazon EC2, and I am using PostgreSQL.

While running my application in the Amazon cloud, I am getting the exception org.postgresql.util.PSQLException: An I/O error occured while sending to the backend.

The detailed stack trace is:

org.postgresql.util.PSQLException: An I/O error occured while sending to the backend.
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:281)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:331)
    at com.spy2k3.core.business.processor.ProcessorImpl.executeUpdate(ProcessorImpl.java:237)
    at com.spy2k3.core.business.object.BusinessObject.executeUpdate(BusinessObject.java:54)
    at com.spy2k3.core.business.object.LoginObject.deleteSession(LoginObject.java:127)
    at com.spy2k3.core.business.processor.LoginProcessor.userValidation(LoginProcessor.java:79)
    at com.spy2k3.core.business.processor.LoginProcessor.execute(LoginProcessor.java:30)
    at com.spy2k3.core.business.processor.ProcessorImpl.process(ProcessorImpl.java:73)
    at com.spy2k3.core.handler.request.RequestHandler.doService(RequestHandler.java:90)
    at com.spy2k3.core.handler.AbstractHandler.doPost(AbstractHandler.java:25)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:118)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
    at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
    at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:799)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:705)
    at org.apache.tomcat.util.net.TcpWorkerThread.runIt(PoolTcpEndpoint.java:577)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:129)
    at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:143)
    at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:112)
    at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:71)
    at org.postgresql.core.PGStream.ReceiveChar(PGStream.java:269)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1700)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
    ... 37 more

Tests:

1. I connected to my remote PostgreSQL server via pgAdmin from my local system, and I could connect and execute queries.

2. I connected to my remote server via PuTTY and could successfully execute queries. Example:

    [root@ip-xx-xxx-xx-xxx bin]# psql -U myuser -d mydatabase
    psql (9.2.4)
    Type "help" for help.

    mydatabase=# SELECT USERID FROM MY_MAST_LOGINSESSION WHERE SESSIONID='5DFD5D1E09D523695D6057SOMETHING';
     userid
    --------
    (0 rows)

3. When I connected to my remote database via JDBC from my application, it connected successfully, but it takes too much time to execute queries there.

Can you suggest any way to track down this time delay?

UPDATE:

Digging deeper into the problem, I found that the delay happens only for specific queries such as DELETE and UPDATE. Queries such as INSERT and SELECT execute fine.

What is special about the DELETE and UPDATE queries is that they return nothing.

So the actual problem is that the querying client (say, psql) is waiting for the database server's response, but for these queries the server returns nothing. The client keeps waiting, and after the timeout it throws an exception.

But I was unable to find what to change to solve this problem.
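For tracking down where the time goes, one option (assuming a reasonably modern pgJDBC driver; the host, credentials, and timeout values below are placeholders, while the table comes from the psql example above) is to time the statement on the client and put explicit bounds on how long the driver will wait:

    // Rough diagnostic sketch: time a DELETE from JDBC, with keepalives on and
    // explicit timeouts so a stuck read fails fast instead of hanging.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class TimedDelete {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:postgresql://my-ec2-host:5432/mydatabase"
                       + "?tcpKeepAlive=true&socketTimeout=60";   // 60s read timeout
            try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
                 PreparedStatement ps = conn.prepareStatement(
                         "DELETE FROM MY_MAST_LOGINSESSION WHERE SESSIONID = ?")) {
                ps.setQueryTimeout(30);                           // cancel after 30 seconds
                ps.setString(1, "5DFD5D1E09D523695D6057SOMETHING");
                long start = System.nanoTime();
                int deleted = ps.executeUpdate();
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println("deleted " + deleted + " row(s) in " + ms + " ms");
            }
        }
    }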

Yes, you are right.

It is a problem related to multiple threads working on the same PreparedStatement.

I have modified the code to run with a single thread, and it runs very smoothly.

Previously there was more than one thread working on the same prepared statements (they were created at server startup). Even though the code already included some synchronization, it seems I was losing control of the threads.

I will modify the code and make sure every thread has its own PreparedStatements.

I thought that sharing the same prepared statements was more efficient.

Thanks
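For reference, a minimal sketch of the per-thread pattern described above, assuming connections come from a shared, thread-safe DataSource (the table and column names are hypothetical): each worker prepares its own statement instead of sharing one created at startup.

    // Sketch: each worker thread gets its own Connection and PreparedStatement,
    // so no two threads can touch the same statement's parameters concurrently.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.sql.DataSource;

    public class PerThreadWorker implements Runnable {
        private final DataSource pool;      // shared and thread-safe; statements are not
        private final String sessionId;

        PerThreadWorker(DataSource pool, String sessionId) {
            this.pool = pool;
            this.sessionId = sessionId;
        }

        @Override
        public void run() {
            try (Connection conn = pool.getConnection();
                 PreparedStatement ps = conn.prepareStatement(
                         "DELETE FROM sessions WHERE session_id = ?")) {  // private to this thread
                ps.setString(1, sessionId);
                ps.executeUpdate();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }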

Kris Jurka wrote:

This appears to be a thread safety related problem. I believe your code has one thread setting the parameter values and another thread executing the prepared statement at the same time. The executor does two passes through the parameter list, once to calculate a total message length and another time to send the values. If the contents change between the length calculation and the message sending we’ll have the wrong length and the whole client-server communication gets messed up. The attached test case demonstrates this failure mode.

I’m unsure how hard we should try to fix this, there are a couple of approaches:

1) Do nothing. It’s really a client problem and they shouldn’t be setting and executing at the same time.

2) Just copy the parameters at execution time so we get a consistent view of them. This may not be exactly what the user wants though if the order things actually execute is: execute, set, copy instead of execute, copy, set.

3) Go through all the PreparedStatement functions making most of them synchronized so that you cannot set while an execute is running.

Kris Jurka
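(Kris's attached test case isn't reproduced here. As a rough, hypothetical sketch, the failure mode he describes, with one thread re-binding a parameter while another thread executes the same statement, could look like this:)

    // Hypothetical sketch of the race: the setter thread swaps a parameter
    // between values of different lengths while the executor thread runs the
    // same PreparedStatement, so the value can change between the driver's
    // length-calculation pass and its send pass.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class SharedStatementRace {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydatabase", "myuser", "mypassword");
            PreparedStatement shared = conn.prepareStatement("INSERT INTO t (v) VALUES (?)");

            Thread setter = new Thread(() -> {
                String[] values = {"short", "a much longer value than the first one"};
                for (int i = 0; i < 100_000; i++) {
                    try { shared.setString(1, values[i % 2]); } catch (Exception ignored) { }
                }
            });
            Thread executor = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) {
                    try { shared.executeUpdate(); } catch (Exception e) {
                        e.printStackTrace();     // typically surfaces as an I/O or protocol error
                        return;
                    }
                }
            });
            setter.start();
            executor.start();
            setter.join();
            executor.join();
            conn.close();
        }
    }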

Sergi Vera wrote:

Hi!

I've been a little busy these days and was unable to work on this, but I've made the tcpdump session that you requested, and

here are the results

Kris Jurka wrote:

Sergi Vera wrote:

Thanks Kris for the help

Adding loglevel=2 didn't add any more info to the logs, and it will not be easy to make a self-contained program, but I have attached the result of

The loglevel=2 logging goes to the driver's System.out, not into the server error log.

tcpdump -vvv -i lo -w pgsqlerror2.dat

This only captures the start of each packet so it doesn’t have the whole thing. Could you recapture with:

tcpdump -n -w pgsqlerror3.dat -s 1514 -i any tcp port 5432

This ups the capture size (-s 1514) and also filters out the unrelated UDP traffic you’ve got going on.

Browsing through the first failing pgsql data chunk, one can see that:

http://img139.imageshack.us/my.php?image=pantallazolm8.png

The last data has column length -1, which seems strange, even though I don't know anything about this particular protocol.

-1 length indicates a NULL value, so that’s expected.
