okhttp3 throwing java.io.IOException: shutdown

I have configured the HTTP client with a 2-second connect timeout and a 10-second read timeout.
But in some cases the HTTP client takes more than 12 seconds to throw an exception, with a maximum observed time of over 900 seconds.
Exception:
java.io.IOException: shutdown
at okhttp3.internal.framed.FramedConnection.newStream(FramedConnection.java:259) ~
at okhttp3.internal.framed.FramedConnection.newStream(FramedConnection.java:245) ~
at okhttp3.internal.http.Http2xStream.writeRequestHeaders(Http2xStream.java:135) ~
at okhttp3.internal.http.HttpEngine$NetworkInterceptorChain.proceed(HttpEngine.java:748) ~
It looks like it is not caused by the read timeout, because in that case the client should have thrown a read-timeout exception after 10 seconds.
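For reference, here is a minimal sketch of how those timeouts are usually configured on OkHttp 3.x; the exact builder calls are an assumption, since the question does not show the client setup. Note that connectTimeout and readTimeout only bound individual socket operations; an overall call timeout (callTimeout) was only added in later OkHttp releases (3.12+).
import java.util.concurrent.TimeUnit;
import okhttp3.OkHttpClient;

public class ClientFactory {
    // Sketch: 2s connect / 10s read timeouts, matching the question's settings.
    // These timers cover socket-level operations; on their own they do not cap
    // the total duration of a call.
    public static OkHttpClient newClient() {
        return new OkHttpClient.Builder()
                .connectTimeout(2, TimeUnit.SECONDS)
                .readTimeout(10, TimeUnit.SECONDS)
                .build();
    }
}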

Related

I/O timeout exception (java.net.ConnectException) When calling API

I'm using HttpClient to call an API and get its response, and I have set a 60-second timeout. During those 60 seconds, Java retries the connection every 20 seconds and logs the exception. At the end of the 60 seconds it stops retrying.
My question is: is this I/O exception caused by the API? I set a timeout higher than the interval at which the exception is thrown (every 20 seconds).
Here's the code and the log:
HttpClient client = new HttpClient();
GetMethod getMethod = new GetMethod(GET_TICKETS_URL);
getMethod.setRequestHeader("Content-Type", "application/json");
getMethod.setRequestHeader("Accept", "application/json");
getMethod.getParams().setSoTimeout(60000);
logger.info("Calling service: " + getMethod.getPath());
client.getHttpConnectionManager().getParams().setConnectionTimeout(60000);
client.getHttpConnectionManager().getParams().setSoTimeout(60000);
client.getParams().setSoTimeout(60000);
client.getParams().setConnectionManagerTimeout(60000);
int getTicketsResponse = client.executeMethod(getMethod);
[2020-08-27 13:41:25,215] pool-3-thread-1 br.com.pfm.tasks.baml.task Task INFO - Calling service: /
[2020-08-27 13:41:46,228] pool-3-thread-1 org.apache.commons.httpclient.HttpMethodDirector INFO - I/O exception (java.net.ConnectException) caught when processing request: Connection timed out: connect
[2020-08-27 13:41:46,228] pool-3-thread-1 org.apache.commons.httpclient.HttpMethodDirector INFO - Retrying request
[2020-08-27 13:42:07,242] pool-3-thread-1 org.apache.commons.httpclient.HttpMethodDirector INFO - I/O exception (java.net.ConnectException) caught when processing request: Connection timed out: connect
[2020-08-27 13:42:07,242] pool-3-thread-1 org.apache.commons.httpclient.HttpMethodDirector INFO - Retrying request
[2020-08-27 13:42:28,258] pool-3-thread-1 org.apache.commons.httpclient.HttpMethodDirector INFO - I/O exception (java.net.ConnectException) caught when processing request: Connection timed out: connect
[2020-08-27 13:42:28,258] pool-3-thread-1 org.apache.commons.httpclient.HttpMethodDirector INFO - Retrying request
[2020-08-27 13:42:49,271] pool-3-thread-1 br.com.pfm.tasks.baml.task ERROR - java.net.ConnectException: Connection timed out: connect
An I/O exception occurs when there is a problem reading from the URL. If you can capture the full stack trace, it will give insight into the issue; it could be that the server is unavailable, or that it returned an HTTP 4xx or higher status.
So please add this to your catch block; it will help you get an idea of what went wrong.
catch (Exception e) {
    e.printStackTrace();
}
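Putting that together, here is a sketch of the question's call wrapped so the full stack trace is printed and the connection is released (Commons HttpClient 3.x; the URL is a placeholder for the question's GET_TICKETS_URL constant):
import java.io.IOException;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpException;
import org.apache.commons.httpclient.methods.GetMethod;

public class TicketClient {
    // Placeholder endpoint; substitute the real GET_TICKETS_URL.
    private static final String GET_TICKETS_URL = "http://example.invalid/tickets";

    public static void main(String[] args) {
        HttpClient client = new HttpClient();
        GetMethod getMethod = new GetMethod(GET_TICKETS_URL);
        getMethod.getParams().setSoTimeout(60000);
        try {
            int status = client.executeMethod(getMethod);
            System.out.println("HTTP status: " + status);
        } catch (HttpException e) {
            e.printStackTrace();   // protocol-level failure
        } catch (IOException e) {
            e.printStackTrace();   // transport failure, e.g. ConnectException
        } finally {
            getMethod.releaseConnection();
        }
    }
}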
Based on the somewhat incomplete information presented here, a stack trace will likely reveal that the point of failure is where you kick off the HTTP transaction (that is, where you actually tell it to go ahead and connect).
Assuming that you've set everything up correctly in GetMethod (and I can't tell), may I recommend that you check that the target you are trying to reach is actually reachable from the machine? Try a wget, or if it's on a desktop, try to access it with your browser. If that works, it's likely you set something up incorrectly in your GetMethod code; posting that would be very helpful for troubleshooting further.

Azure App Service - Spring Boot - Hikari Errors

I have deployed a Spring Boot application with a database-based job queue on App Service.
Yesterday I performed a few scale-out and scale-in operations while the application was running to see how it would behave.
At some point (not necessarily related to the scaling operations) the application started to throw Hikari errors.
com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection#1ae66f34 (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
com.zaxxer.hikari.pool.ProxyConnection : HikariPool-1 - Connection org.postgresql.jdbc.PgConnection#1ef85079 marked as broken because of SQLSTATE(08006), ErrorCode(0)
The following are stack traces from my scheduled job in spring and other information:
org.postgresql.util.PSQLException: An I/O error occurred while sending to the backend.
Caused by: javax.net.ssl.SSLException: Connection reset by peer (Write failed)
Suppressed: java.net.SocketException: Broken pipe (Write failed)
Caused by: java.net.SocketException: Connection reset by peer (Write failed)
Next, the following errors appeared:
WARN 1 --- [ scheduling-1] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection#48d0d6da (This connection has been closed.).
Possibly consider using a shorter maxLifetime value.
org.springframework.jdbc.support.MetaDataAccessException: Error while extracting DatabaseMetaData; nested exception is java.sql.SQLException: Connection is closed
Caused by: java.sql.SQLException: Connection is closed
The code, which is invoked periodically (every 500 milliseconds), is here:
@Scheduled(fixedDelayString = "${worker.delay}")
@Transactional
public void execute() {
    jobManager.next(jobClass).ifPresent(this::handleJob);
}
Update: The above code does nothing almost all the time, since there was no traffic on the website.
Update 2: I've checked the Postgres logs and found this:
2020-07-11 22:48:09 UTC-5f0866f0.f0-LOG: checkpoint starting: immediate force wait
2020-07-11 22:48:10 UTC-5f0866f0.f0-LOG: checkpoint complete (240): wrote 30 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.046 s, sync=0.046 s, total=0.437 s; sync files=13, longest=0.009 s, average=0.003 s; distance=163 kB, estimate=13180 kB
2020-07-11 22:48:10 UTC-5f0866ee.68-LOG: received immediate shutdown request
2020-07-11 22:48:10 UTC-5f0a3f41.8914-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0a3f41.8914-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
// Same text about 10 times
2020-07-11 22:48:10 UTC-5f0866f2.7c-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0866ee.68-LOG: src/port/kill.c(84): Process (272) exited OOB of pgkill.
2020-07-11 22:48:10 UTC-5f0866f1.fc-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0866f1.fc-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0866f1.fc-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0866ee.68-LOG: archiver process (PID 256) exited with exit code 1
2020-07-11 22:48:11 UTC-5f0866ee.68-LOG: database system is shut down
It looks like it is a problem with the Azure PostgreSQL server, which shut itself down. Am I reading this right?
As the message in your logs suggests, have you tried setting the maxLifetime property for HikariCP? I think after setting that property this issue should be resolved.
Based on Hikari doc (https://github.com/brettwooldridge/HikariCP) --
maxLifetime
This property controls the maximum lifetime of a connection in the pool. An in-use connection will never be retired, only when it is closed will it then be removed. On a connection-by-connection basis, minor negative attenuation is applied to avoid mass-extinction in the pool. We strongly recommend setting this value, and it should be several seconds shorter than any database or infrastructure imposed connection time limit. A value of 0 indicates no maximum lifetime (infinite lifetime), subject of course to the idleTimeout setting. The minimum allowed value is 30000ms (30 seconds). Default: 1800000 (30 minutes)
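As a sketch of what that could look like (the JDBC URL, credentials and pool size are placeholders; in Spring Boot the same setting is usually exposed as the property spring.datasource.hikari.max-lifetime):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolFactory {
    // Sketch: keep maxLifetime a few seconds below any connection limit imposed
    // by the database or the Azure infrastructure, as the Hikari docs recommend.
    public static HikariDataSource newDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://<host>:5432/<db>");   // placeholder
        config.setUsername("<user>");                              // placeholder
        config.setPassword("<password>");                          // placeholder
        config.setMaxLifetime(600_000);    // 10 minutes, in milliseconds
        config.setMaximumPoolSize(10);     // arbitrary example size
        return new HikariDataSource(config);
    }
}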

Why doesn't Future.get(...) kill the thread?

I have an application that uses a Future for asynchronous execution.
I passed a timeout to the get method, expecting the thread to be killed after 10 seconds if it does not get a response:
Future<RecordMetadata> meta = producer.send(record, new ProducerCallBack());
RecordMetadata data = meta.get(10, TimeUnit.SECONDS);
But the thread gets killed after 60 seconds:
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1124)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:823)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:760)
at io.khinkali.KafkaProducerClient.main(KafkaProducerClient.java:49)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
What am I doing wrong?
From the docs:
The threshold for time to block is determined by max.block.ms after
which it throws a TimeoutException.
Check Kafka Appender config in logback.xml, look for:
<producerConfig>max.block.ms=60000</producerConfig>
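If the producer is created in code rather than through the logback appender, the same limit can be lowered via the producer properties. A sketch (the broker address is a placeholder):
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerFactory {
    // Sketch: cap the time send() may block waiting for metadata at 10s
    // instead of the 60s default (max.block.ms).
    public static KafkaProducer<String, String> newProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "10000");
        return new KafkaProducer<>(props);
    }
}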
I passed a timeout to the get method, expecting the thread to be killed after 10 seconds if it does not get a response:
If we are talking about Future.get(...) there is nothing about it that "kills" the thread at all. To quote from the javadocs, the Future.get(...) method:
Waits if necessary for at most the given time for the computation to complete, and then retrieves its result, if available.
If the get(...) method times out, it will throw TimeoutException, but your thread is free to continue running. If you want to stop the thread, you'll need to catch TimeoutException and then call meta.cancel(true), but even that doesn't guarantee that the thread will be "killed". It causes the thread to be interrupted, which means that certain methods will throw InterruptedException, or the thread needs to check Thread.currentThread().isInterrupted().
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Yeah this timeout has nothing to do with the Future.get(...) timeout.
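Here is a self-contained sketch of that behaviour, using a plain ExecutorService instead of the Kafka producer so it runs on its own:
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> future = pool.submit(() -> {
            Thread.sleep(60_000);   // simulates a slow call
            return "done";
        });
        try {
            // Bounds only how long *we* wait, not how long the task runs.
            System.out.println(future.get(10, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            // The worker thread is still sleeping; this merely interrupts it.
            future.cancel(true);
        } catch (ExecutionException e) {
            e.printStackTrace();
        } finally {
            pool.shutdownNow();
        }
    }
}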

TimeoutException for CloseableHttpAsyncClient

I am trying to consume an API using CloseableHttpAsyncClient. I am making calls to the API with a connection pool of 45 and a timeout of 5 minutes. However, I get the following error:
java.util.concurrent.TimeoutException: null
at org.apache.http.nio.pool.AbstractNIOConnPool.processPendingRequest(AbstractNIOConnPool.java:364)
at org.apache.http.nio.pool.AbstractNIOConnPool.processNextPendingRequest(AbstractNIOConnPool.java:344)
at org.apache.http.nio.pool.AbstractNIOConnPool.release(AbstractNIOConnPool.java:318)
at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.releaseConnection(PoolingNHttpClientConnectionManager.java:303)
at org.apache.http.impl.nio.client.AbstractClientExchangeHandler.releaseConnection(AbstractClientExchangeHandler.java:239)
at org.apache.http.impl.nio.client.MainClientExec.responseCompleted(MainClientExec.java:387)
at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:168)
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:326)
at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)
at java.lang.Thread.run(Thread.java:745)
On reducing the connection pool size to 10, the error is thrown less often.
I am using a singleton instance of CloseableHttpAsyncClient and do not close it, to keep calls fast.
This is how I am calling it:
httpclient.execute(post, new FutureCallback<HttpResponse>(....));
I think it is NOT coming from the API side.
Any idea why this exception occurs, and whether it is related to the connection pool?
Looking at the source code, this happens when you set connectionRequestTimeout to a value different from the default (-1) in the connection manager (e.g. PoolingNHttpClientConnectionManager). If the pool is very busy and the timeout is not long enough, it will throw this exception.
To solve it, either increase the connectionRequestTimeout of the connection manager pool or set it to -1 (for an indefinite wait).
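A sketch of what that configuration could look like with HttpAsyncClient 4.x; the timeout value is illustrative, and the pool sizes simply mirror the question's 45 connections:
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;

public class AsyncClientFactory {
    // Sketch: connectionRequestTimeout bounds how long a request may wait for a
    // free connection from the pool; -1 waits indefinitely.
    public static CloseableHttpAsyncClient newClient() {
        RequestConfig requestConfig = RequestConfig.custom()
                .setConnectionRequestTimeout(30_000)   // or -1 for an indefinite wait
                .build();
        CloseableHttpAsyncClient httpclient = HttpAsyncClients.custom()
                .setDefaultRequestConfig(requestConfig)
                .setMaxConnTotal(45)
                .setMaxConnPerRoute(45)
                .build();
        httpclient.start();
        return httpclient;
    }
}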

Apache Camel FTP error

I am working on a piece of code that routes from an FTP input to a bean process and then to an FTP output. The process takes about 15 minutes to complete, and when it finishes, Camel tries to delete the input file from the FTP input route. Currently the FTP server throws an error:
13 Jan 2015 18:21:26 DEBUG org.apache.camel.component.file.remote.FtpOperations.deleteFile - Deleting file: ../flex-brazil/Portal_Forn/request_appr/364/NF_4.txt
13 Jan 2015 18:21:26 WARN org.slf4j.helpers.MarkerIgnoringBase.warn - Error during commit. Exchange[org.apache.camel.component.file.GenericFileMessage#3207779]. Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - File operation failed: 421 Timeout (900 seconds): closing control connection.
FTP response 421 received. Server closed connection.. Code: 421]
org.apache.camel.component.file.GenericFileOperationFailedException: File operation failed: 421 Timeout (900 seconds): closing control connection.
FTP response 421 received. Server closed connection.. Code: 421
at org.apache.camel.component.file.remote.FtpOperations.getCurrentDirectory(FtpOperations.java:701)
at org.apache.camel.component.file.remote.FtpOperations.deleteFile(FtpOperations.java:224)
at org.apache.camel.component.file.strategy.GenericFileDeleteProcessStrategy.commit(GenericFileDeleteProcessStrategy.java:71)
at org.apache.camel.component.file.GenericFileOnCompletion.processStrategyCommit(GenericFileOnCompletion.java:124)
at org.apache.camel.component.file.GenericFileOnCompletion.onCompletion(GenericFileOnCompletion.java:80)
at org.apache.camel.component.file.GenericFileOnCompletion.onComplete(GenericFileOnCompletion.java:54)
at org.apache.camel.util.UnitOfWorkHelper.doneSynchronizations(UnitOfWorkHelper.java:100)
at org.apache.camel.impl.DefaultUnitOfWork.done(DefaultUnitOfWork.java:228)
at org.apache.camel.util.UnitOfWorkHelper.doneUow(UnitOfWorkHelper.java:61)
at org.apache.camel.processor.CamelInternalProcessor$UnitOfWorkProcessorAdvice.after(CamelInternalProcessor.java:613)
at org.apache.camel.processor.CamelInternalProcessor$UnitOfWorkProcessorAdvice.after(CamelInternalProcessor.java:581)
at org.apache.camel.processor.CamelInternalProcessor$InternalCallback.done(CamelInternalProcessor.java:240)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:173)
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:401)
at org.apache.camel.component.file.remote.RemoteFileConsumer.processExchange(RemoteFileConsumer.java:99)
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:201)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:165)
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:187)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:114)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.commons.net.ftp.FTPConnectionClosedException: FTP response 421 received. Server closed connection.
at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:367)
at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:294)
at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:483)
at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:608)
at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:582)
at org.apache.commons.net.ftp.FTP.pwd(FTP.java:1454)
at org.apache.commons.net.ftp.FTPClient.printWorkingDirectory(FTPClient.java:2658)
at org.apache.camel.component.file.remote.FtpOperations.getCurrentDirectory(FtpOperations.java:697)
... 25 more
Is there any parameter to make Camel send keepalives to the FTP server to avoid this situation?
I suggest splitting the whole process into two routes:
Reader route: only reads the file from the FTP server.
Process and writer route: processes the incoming message and writes the response to the FTP server.
The advantage is that the connection is closed after reading and deleting, without the exception.
When a new message is written back to the server, a new connection is created.
The first route can only close the connection if the processing is asynchronous. For this purpose you can use queues for reading and writing, e.g. seda or jms, as in the sketch below.
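A sketch of that split in the Camel Java DSL (host, credentials, paths and the processing step are placeholders):
import org.apache.camel.builder.RouteBuilder;

public class FtpSplitRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Short-lived route: pick up the file, hand it off, delete it immediately.
        from("ftp://user@host/inbox?password=secret&delete=true")
            .to("seda:process");

        // Long-running route: ~15 minutes of work, then write the result back
        // over a freshly opened FTP connection.
        from("seda:process")
            .process(exchange -> {
                // placeholder for the bean that does the actual work
            })
            .to("ftp://user@host/outbox?password=secret");
    }
}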
You may consider setting a bigger timeout option; the default is 30s.
To set the timeout to 15 minutes: ftp://user@host?timeout=900000
Keep in mind, however, that the connection may still be closed by the FTP server.
