Context:
We have a Spring Boot (2.3.1.RELEASE) web app
It's written in Java 8 but runs inside a container on Java 11 (openjdk:11.0.6-jre-stretch).
It has a DB connection and an upstream service that is called via HTTPS with a simple RestTemplate#exchange call (this is important!)
It is deployed inside of a Kubernetes cluster (not sure if this is important)
Problem:
Every day, I see a small percentage of requests towards the upstream service fail with this error: I/O error on GET request for "https://upstream.xyz/path": Connection reset; nested exception is javax.net.ssl.SSLException: Connection reset
The errors are totally random and happen intermittently.
We have had a similar error (javax.net.ssl.SSLProtocolException: Connection reset) that was related to JRE 11 and its TLS 1.3 negotiation issue. We updated our Docker image to the one mentioned above, and that fixed it.
This is the stack trace from the error:
java.net.SocketException: Connection reset
at java.base/java.net.SocketInputStream.read(Unknown Source)
at java.base/java.net.SocketInputStream.read(Unknown Source)
at java.base/sun.security.ssl.SSLSocketInputRecord.read(Unknown Source)
at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(Unknown Source)
at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(Unknown Source)
at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(Unknown Source)
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at org.springframework.http.client.HttpComponentsClientHttpRequest.executeInternal(HttpComponentsClientHttpRequest.java:87)
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:53)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:739)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:674)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:583)
....
Configuration:
public static RestTemplate create(final int maxTotal, final int defaultMaxPerRoute,
                                  final int connectTimeout, final int readTimeout,
                                  final String userAgent) {
    final Registry<ConnectionSocketFactory> schemeRegistry = RegistryBuilder.<ConnectionSocketFactory>create()
            .register("http", PlainConnectionSocketFactory.getSocketFactory())
            .register("https", SSLConnectionSocketFactory.getSocketFactory())
            .build();
    // Pooled connections are reused across requests
    final PoolingHttpClientConnectionManager connManager = new PoolingHttpClientConnectionManager(schemeRegistry);
    connManager.setMaxTotal(maxTotal);
    connManager.setDefaultMaxPerRoute(defaultMaxPerRoute);
    final CloseableHttpClient httpClient = HttpClients.custom()
            .setConnectionManager(connManager)
            .setUserAgent(userAgent)
            .setDefaultRequestConfig(RequestConfig.custom()
                    .setConnectTimeout(connectTimeout) // milliseconds
                    .setSocketTimeout(readTimeout)     // milliseconds
                    .setExpectContinueEnabled(false)
                    .build())
            .build();
    return new RestTemplateBuilder()
            .requestFactory(() -> new HttpComponentsClientHttpRequestFactory(httpClient))
            .build();
}
Has anyone experienced this issue?
When I turn on debug logging on the HTTP client, the output is overflowing with noise and I am unable to discern anything useful...
We had a similar problem when migrating to AWS/Kubernetes.
I've found out why.
You're using a connection pool. By default, the PoolingHttpClientConnectionManager reuses connections, so they are not closed immediately when a request completes. This saves resources by avoiding a new connection setup for every request.
A Kubernetes cluster uses NAT (Network Address Translation) for outgoing connections. When a connection is idle for a certain amount of time, it is removed from the NAT table and silently broken. This causes the seemingly random SSLExceptions.
On AWS, connections are removed from the NAT table after being idle for 350 seconds. Other Kubernetes environments may use different settings.
See https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-troubleshooting.html
The solution:
Disable connection-reuse:
final CloseableHttpClient closeableHttpClient = HttpClients.custom()
.setConnectionReuseStrategy(NoConnectionReuseStrategy.INSTANCE)
.setConnectionManager(poolingHttpClientConnectionManager)
.build();
Or, let the httpClient evict connections that are idle for too long:
return HttpClients.custom()
.evictIdleConnections(300, TimeUnit.SECONDS) //Read the javadocs, may not be used when the instance of HttpClient is created inside an EJB container.
.setConnectionManager(poolingHttpClientConnectionManager)
.build();
Or call setKeepAliveStrategy(...) on the HttpClientBuilder with a custom ConnectionKeepAliveStrategy that never returns -1 or a timeout longer than 300 seconds.
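For illustration, a minimal sketch of such a strategy (assuming Apache HttpClient 4.x, which matches the classes in the stack trace; the 300-second cap mirrors the AWS NAT idle limit above):

return HttpClients.custom()
        .setConnectionManager(poolingHttpClientConnectionManager)
        .setKeepAliveStrategy((response, context) -> {
            // Honor the server's Keep-Alive header when present...
            long serverKeepAlive = DefaultConnectionKeepAliveStrategy.INSTANCE
                    .getKeepAliveDuration(response, context);
            long cap = TimeUnit.SECONDS.toMillis(300);
            // ...but treat -1 (no header) and anything above the cap as "use the cap".
            return (serverKeepAlive > 0 && serverKeepAlive < cap) ? serverKeepAlive : cap;
        })
        .build();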
I will share my experience with this error, as it is probably the same problem you are facing; the stack trace I had was comparable.
The fact that it happens randomly is the key phrase that makes me suspect it is the same problem.
HTTP connections are made through an HTTP client library (Apache HttpClient).
The HTTP client usually manages a reusable pool of connections, and this pool has a limit. In our case, the pool sometimes (randomly) became fully occupied, leaving no more free connections to use.
You can either increase the pool size, or
implement a back-off retry mechanism that retries the HTTP request when it fails to obtain a connection from the pool (see the sketch below).
If you are wondering how to tune the underlying HTTP client used by Spring Boot, check out this post.
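As a rough sketch of that back-off idea (illustrative only: the method name and retry parameters are invented here, and ResourceAccessException is the exception RestTemplate wraps I/O errors in):

public <T> ResponseEntity<T> exchangeWithRetry(RestTemplate restTemplate, String url, Class<T> type) {
    int maxAttempts = 3;      // hypothetical values
    long backoffMillis = 500;
    for (int attempt = 1; ; attempt++) {
        try {
            return restTemplate.exchange(url, HttpMethod.GET, null, type);
        } catch (ResourceAccessException e) { // covers "I/O error on GET request"
            if (attempt >= maxAttempts) {
                throw e; // give up after the last attempt
            }
            try {
                Thread.sleep(backoffMillis * attempt); // linear back-off between attempts
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw e;
            }
        }
    }
}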
I guess the issue is related to k8s.
If you use flannel as the k8s network, check the flannel status to see whether it has restarted repeatedly, using the command below:
kubectl get pod -n kube-system | grep flannel
What version is your Linux kernel? If it is not 4.x or above, please upgrade to 4.x.
# check the Linux kernel version
uname -sr
# upgrade steps
1)
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel -y install kernel-lt
2) open and edit /etc/default/grub, and set "GRUB_DEFAULT=0"
3) grub2-mkconfig -o /boot/grub2/grub.cfg
4) reboot
I hope this helps solve the issue.
An SSL stack trace like this can have many different causes that have nothing to do with SSL itself. The stack trace alone will not help you much, and the issue has nothing to do with Spring, RestTemplate, etc.
What will help you is implementing a logging/monitoring/tracing framework (I use Elasticsearch). Monitor the behavior for a couple of days and record as much information in the logs as you need, such as the container id and connection details (when it was initiated, etc.). You might find, for example, that the error occurs after a connection has lived for a certain amount of time (e.g. 1 hour), and that the issue goes away if you simply make connections live for less time.
This way you may be able to fix the issue without figuring out the root cause, which could take many days of work and get you nowhere; tinkering with the connection parameters may resolve it instead. But for that you need more visibility, as the information you have posted so far is not enough to troubleshoot the issue.
Related
The following exception occurs intermittently when making requests with Jetty's HttpClient:
java.net.BindException: Address already in use: connect
at java.base/sun.nio.ch.Net.connect0(Native Method)
at java.base/sun.nio.ch.Net.connect(Unknown Source)
at java.base/sun.nio.ch.Net.connect(Unknown Source)
at java.base/sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
at org.eclipse.jetty.io.ClientConnector.connect(ClientConnector.java:433)
at org.eclipse.jetty.client.AbstractConnectorHttpClientTransport.connect(AbstractConnectorHttpClientTransport.java:73)
at org.eclipse.jetty.client.HttpClient$1.connect(HttpClient.java:602)
at org.eclipse.jetty.client.HttpClient$1.succeeded(HttpClient.java:579)
at org.eclipse.jetty.client.HttpClient$1.succeeded(HttpClient.java:575)
at org.eclipse.jetty.util.SocketAddressResolver$Async.lambda$resolve$1(SocketAddressResolver.java:181)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Unfortunately, I don't know much about the specific scenario in which this happens.
We are using an Eclipse Distribution of OpenJDK 17.0.4 with Jetty 10.0.11 on a Windows machine.
Does anyone have an idea what the cause could be?
Quick research (e.g. questions 1, 2, and 3) suggests that the system is running out of "ephemeral ports", so it's a TCP issue.
If that's the case here too, what am I doing wrong, and how can I fix the problem with a Jetty client?
Most Common Cause
You've specified an HttpClient.setBindAddress(SocketAddress) that forces the client to use the same local InetAddress every time it wants to connect out to a remote server.
Either be careful of how you use the local bind address, or just don't set the local bind address on the HttpClient.
Jetty's HttpClient is essentially calling java.net.Socket.bind(SocketAddress) and that's what is causing your problem.
Most projects do not need a local bind address when initiating an outgoing connection request. If your organization requires it, then you'll have to deal with everything that it entails, namely that if one client application is using that bind address, another client application cannot. Not even 2 clients in the same application can share it.
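For illustration, the pattern being warned against looks roughly like this (the address and port are made-up examples):

HttpClient httpClient = new HttpClient();
// Pins every outgoing connection to one fixed local ip + port; concurrent
// connections then collide on it with "Address already in use: connect".
httpClient.setBindAddress(new InetSocketAddress("192.168.1.50", 45000));
httpClient.start();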
Common Cause
Your local address (ip + port) space has exhausted all of its outgoing ports. This can happen if you are too aggressive with client socket creation but lazy with socket usage and cleanup (such as opening a connection and making a request, but not reading the response, and possibly not closing properly).
Since you are on Windows (hopefully Windows 10 or better), use the netstat command line tool to check your open sockets; you have likely run out of usable local address combos (ip + port). Pay attention to the TIME_WAIT entries, as those indicate sockets that were closed improperly.
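For example, to list the sockets stuck in TIME_WAIT from a Windows command prompt:

netstat -ano | findstr TIME_WAIT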
SOLVED
Ultimately, the solution noted in the similar ticket below worked for us. When we first tried it, our configuration parser was mangling the URL and stripping &connectTimeout=15000&socketTimeout=60000 from it, which invalidated that test.
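For reference, this is the URL shape that worked once the parser stopped stripping the parameters (host and database name are placeholders):

jdbc:mysql://host:3306/dbname?connectTimeout=15000&socketTimeout=60000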
I'm having trouble getting a Tomcat JDBC connection pool to fail over to a new DB server using Amazon RDS' Multi-AZ feature. When failover occurs, the application server gets hung up trying to borrow a connection from the pool. It's similar to this question, but the solution to that user's problem did not help me, so I suspect it's not quite the same: Configure GlassFish JDBC connection pool to handle Amazon RDS Multi-AZ failover
The sequence goes a bit like this:
A successful request is made; log output is as expected.
Fail-over is initiated (via reboot with failover in RDS).
A request is made that times out; log messages from the request appear as expected up until a connection is borrowed from the pool.
Subsequent requests generate no log messages; they time out as well.
After some amount of time, the daemon eventually starts printing log messages again, as if it had successfully connected to the database and were performing operations. This can take just over 16 minutes; the client has long since timed out.
If I wait about 50 minutes and try again, the system eventually accepts connections again.
Notes
If I tell Tomcat to shut down, there are exceptions in the logs about it being unable to clean up resources, and about my servlet still processing a request. Most notable (in my opinion) is the following stack trace, indicating that at least one thread is stuck waiting on a socket read from MySQL.
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
java.net.SocketInputStream.read(SocketInputStream.java:171)
java.net.SocketInputStream.read(SocketInputStream.java:141)
com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:114)
com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:161)
com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:189)
com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3116)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3573)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4113)
com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570)
com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2812)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2761)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:894)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:732)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:441)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:384)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:716)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:579)
org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:174)
org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:111)
<...>
If I restart Tomcat, things return to normal as soon as it comes back. Obviously, long term this is not a workable solution for maintaining uptime.
Environment Details
Database: MySQL (using mysql-connector-java 5.1.26) (server 5.5.53)
Tomcat 8.0.45
I've gone through several changes to my configuration while trying to solve this issue. At the time of this writing the following related settings are in place:
jre/lib/security/java.security -- I'm under the impression that the default value for Oracle Java 1.8 is 30s with no security manager. I set these to zero just to be positive this isn't the issue.
networkaddress.cache.ttl: 0
networkaddress.cache.negative.ttl: 0
connection pool settings
testOnBorrow: true
testOnConnect: true
testOnReturn: true
jdbc url parameters
connectTimeout:60000
socketTimeout:60000
autoReconnect:true
Update
Still no solution found.
Added logging to confirm that this was not a DNS caching issue. The IP address logged immediately before the timeout matches the IP address of the 'new' master RDS host.
For reference the following block represents the properties I'm using to initialize my data source. I’m configuring the pool in code rather than JNDI, with some elements pulled out of our app's config file. I’ve pasted the code below along with comments indicating what the config values are for the tests I’ve been running.
PoolProperties p = new PoolProperties();
p.setUrl(config.get("JDBC_URL")); // jdbc:mysql://host:3306/dbname
p.setDriverClassName("com.mysql.jdbc.Driver");
p.setUsername(config.get("JDBC_USER"));
p.setPassword(config.get("JDBC_PASSWORD"));
p.setJmxEnabled(true);
p.setTestWhileIdle(false);
p.setTestOnBorrow(true);
p.setValidationQuery("SELECT 1");
p.setValidationInterval(30000);
p.setTimeBetweenEvictionRunsMillis(30000);
p.setMaxActive(Integer.parseInt(config.get("MAX_ACTIVE"))); //45
p.setInitialSize(10);
p.setMaxWait(5);
p.setRemoveAbandonedTimeout(Integer.parseInt(config.get("REMOVE_ABANDONED_TIMEOUT"))); //600
p.setMinEvictableIdleTimeMillis(Integer.parseInt(config.get("DB_EVICTION_TIMEOUT"))); //60000
p.setMinIdle(Integer.parseInt(config.get("DB_MIN_IDLE"))); //50
p.setLogAbandoned(Boolean.parseBoolean(config.get("LOG_ABANDONED"))); //true
p.setRemoveAbandoned(true);
p.setJdbcInterceptors(
        "org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;" +
        "org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;" +
        "org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer");
// make sure new connections have auto commit true
p.setDefaultAutoCommit(true);
I am using the qpid-jms-client.jar library to form a connection with a broker.
My code is:
Properties properties = new Properties();
properties.load(this.getClass().getResourceAsStream("jndi.properties"));
Context context = new InitialContext(properties);
System.setProperty("javax.net.ssl.trustStore", "C:/Users/xxxxx/qpid.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "test123");
ConnectionFactory factory = (ConnectionFactory) context.lookup("myFactoryLookup");
Destination queue = (Destination) context.lookup("myQueueLookup");
Connection connection = factory.createConnection("<my-username>", "<my-password>");
connection.setExceptionListener(new MyExceptionListener());
connection.start();
My jndi.properties file is:
java.naming.factory.initial=org.apache.qpid.jms.jndi.JmsInitialContextFactory
connectionfactory.myFactoryLookup=amqps://esesslx0100.se:9443
queue.myQueueLookup=emft_input
topic.myTopicLookup=topic
destination.topicExchange=amq.topic
jms.user=test
Now the above code gives me this error:
Connection ExceptionListener fired, exiting.
javax.jms.JMSException: Cannot send to a non-connected transport.
at org.apache.qpid.jms.exceptions.JmsExceptionSupport.create(JmsExceptionSupport.java:66)
at org.apache.qpid.jms.exceptions.JmsExceptionSupport.create(JmsExceptionSupport.java:88)
at org.apache.qpid.jms.JmsConnection.onAsyncException(JmsConnection.java:1188)
at org.apache.qpid.jms.JmsConnection.onConnectionFailure(JmsConnection.java:1104)
at org.apache.qpid.jms.provider.amqp.AmqpProvider.fireProviderException(AmqpProvider.java:847)
at org.apache.qpid.jms.provider.amqp.AmqpProvider.pumpToProtonTransport(AmqpProvider.java:820)
at org.apache.qpid.jms.provider.amqp.AmqpProvider.access$300(AmqpProvider.java:90)
at org.apache.qpid.jms.provider.amqp.AmqpProvider$16.run(AmqpProvider.java:683)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Cannot send to a non-connected transport.
at org.apache.qpid.jms.transports.netty.NettyTcpTransport.checkConnected(NettyTcpTransport.java:279)
at org.apache.qpid.jms.transports.netty.NettyTcpTransport.allocateSendBuffer(NettyTcpTransport.java:176)
at org.apache.qpid.jms.provider.amqp.AmqpProvider.pumpToProtonTransport(AmqpProvider.java:806)
... 9 more
Since the broker is configured with SASL, I am also providing my username and password. I currently have no idea why this error occurs. I've looked around on the internet, but there is no clear explanation as to why it would occur with Qpid. Any ideas why this error occurs?
My trustStore file is correct, since I have verified SSL connectivity using it.
Turning up the client's logging might give some more trace/debug information. Both the 'regular logging' and the 'protocol trace logging' (if it even gets that far) might be useful. See http://qpid.apache.org/releases/qpid-jms-0.22.0/docs/index.html#logging for more details.
As for the issue here, where it seems the TCP connection is being cut, the server logs could also be useful in giving a reason.
You don't mention which server you are using here, but you have mentioned ActiveMQ and RabbitMQ in other related questions. It's unclear how far the connection gets, but if the server is RabbitMQ, one potential explanation might be: https://github.com/rabbitmq/rabbitmq-amqp1.0/issues/47. As mentioned in another answer, this may not matter due to other issues; e.g. I previously didn't have much success using the JMS client or some other AMQP 1.0 clients against RabbitMQ due to an issue I reported that stops them in their tracks when creating consumers and producers: https://github.com/rabbitmq/rabbitmq-amqp1.0/issues/34
Thanks for all your replies. In my case, it turns out all the libraries and configuration I was using were indeed correct. I contacted the team that manages the broker, and it turned out the user I was trying to connect with had privilege issues. As soon as they gave my user the correct rights, I was able to form a successful connection and transmit and receive messages.
In my case, this error occurred simply because, while I was in the office, our company proxy was filtering the traffic out when I connected programmatically. Using Service Bus Explorer still worked, though, as it must have picked up the machine settings that add the proxy.
We were using a publicly hosted Service Bus, so I tried from my home network without the proxy and everything worked as expected. Using Wireshark or a similar program would probably have helped identify this quicker. Hope this helps someone.
This line shows your error:
Caused by: java.io.IOException: Cannot send to a non-connected transport.
That's saying your connection is configured incorrectly. One thing I see is that you're not referencing the properties in your property file. For example,
this:
ConnectionFactory factory = (ConnectionFactory) context.lookup("myFactoryLookup");
Destination queue = (Destination) context.lookup("myQueueLookup");
Should probably be this:
ConnectionFactory factory = (ConnectionFactory) context.lookup("connectionfactory.myFactoryLookup");
Destination queue = (Destination) context.lookup("queue.myQueueLookup");
A client application was built using JDeveloper 10.1.3.2 and runs on an OC4J server. This application sends data to an external server application. It worked for quite a long time without any issue. Lately, a connection issue occurred and the following stack trace was generated:
com.sun.xml.ws.client.ClientTransportException: HTTP transport error: java.net.SocketException: Connection reset
at com.sun.xml.ws.transport.http.client.HttpClientTransport.getOutput(HttpClientTransport.java:133)
at com.sun.xml.ws.transport.http.client.HttpTransportPipe.process(HttpTransportPipe.java:153)
at com.sun.xml.ws.transport.http.client.HttpTransportPipe.processRequest(HttpTransportPipe.java:93)
at com.sun.xml.ws.transport.DeferredTransportPipe.processRequest(DeferredTransportPipe.java:105)
at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:629)
at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:588)
at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:573)
at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:470)
at com.sun.xml.ws.client.Stub.process(Stub.java:319)
at com.sun.xml.ws.client.sei.SEIStub.doProcess(SEIStub.java:157)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:109)
at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:89)
at com.sun.xml.ws.client.sei.SEIStub.invoke(SEIStub.java:140)
at $Proxy44.sendRem(Unknown Source)
After googling, I found a good discussion about the error: sockets - What's causing my java.net.SocketException: Connection reset?. One answer in this link says that the issue is mostly on the client side, because if it came from the server side the exception would be (SocketException: Connection reset by peer).
What I did:
I tried increasing the socket timeout for OC4J with the help of this forum post: How to change OC4J HTTP Timeout. I changed the property oracle.j2ee.http.socket.timeout from 500 to 5000 (10 times longer).
But the error is still there. So, any suggestions to overcome this issue?
Note: I am able to telnet to the external server's IP and port, and it works fine.
-------------------------------------------------------- Update 1 --------------------------------------------------------
I increased the clock skew settings on the server where the client application is running, using the following flags on server startup:
-Dweblogic.wsee.security.clock.skew=72000000
-Dweblogic.wsee.security.delay.max=72000000
But no luck; the problem is not resolved.
-------------------------------------------------------- Update 2 --------------------------------------------------------
I realized that the problem is not from the application at all; I tested the external URL using SoapUI and got the same Connection reset error. I think this update clearly shows that there is nothing wrong with the program code. But where should I check now? What is the starting point to overcome the issue? Any clue would be helpful.
As you can see from Update 2 in the question, the problem was not from the client application, because the same error occurred from SoapUI.
The problem was that the machine running the client application had low bandwidth, which was not enough for the API communication. Using a simple speed test, I found that the upload bandwidth was low compared to the minimum requirements given by the server application team.
I confirmed this by monitoring network usage with Resource Monitor in Windows while the client application was running, and by using an online speed check.
To solve the issue, the bandwidth of the machine running the client application has to be increased.
I have a Tomcat based web application. I am intermittently getting the following exception,
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:532)
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:501)
at org.apache.coyote.http11.InternalInputBuffer$InputStreamInputBuffer.doRead(InternalInputBuffer.java:563)
at org.apache.coyote.http11.filters.IdentityInputFilter.doRead(IdentityInputFilter.java:124)
at org.apache.coyote.http11.AbstractInputBuffer.doRead(AbstractInputBuffer.java:346)
at org.apache.coyote.Request.doRead(Request.java:422)
at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:290)
at org.apache.tomcat.util.buf.ByteChunk.substract(ByteChunk.java:431)
at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:315)
at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:200)
at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
Unfortunately, I don't have access to the client, so I am just trying to confirm the various reasons this can happen:
1. The server is trying to read data from the request, but it is taking longer than the timeout for the data to arrive from the client. The timeout here would typically be the Tomcat connector's connectionTimeout attribute.
2. The client has a read timeout set, and the server is taking longer than that to respond.
3. One of the threads I went through said this can happen with high concurrency and keepalive enabled.
For #1, the initial value I had set was 20 sec; I have bumped it up to 60 sec and will test to see if anything changes.
Meanwhile, if any of you can provide your expert opinion on this, that would be really helpful, as would any other reason you can think of that might cause this issue.
The server is trying to read data from the request, but it is taking longer than the timeout for the data to arrive from the client. The timeout here would typically be the Tomcat connector's connectionTimeout attribute.
Correct.
The client has a read timeout set, and the server is taking longer than that to respond.
No. That would cause a timeout at the client.
One of the threads I went through said this can happen with high concurrency and keepalive enabled.
That is obviously guesswork, and completely incorrect. It happens if and only if no data arrives within the timeout. Period. Load and keepalive and concurrency have nothing to do with it whatsoever.
It just means the client isn't sending. You don't need to worry about it. Browser clients come and go in all sorts of strange ways.
Here are the basic instructions:
Locate the "server.xml" file in the "conf" folder beneath Tomcat's base directory (i.e. %CATALINA_HOME%/conf/server.xml).
Open the file in an editor and search for <Connector.
Locate the relevant connector that is timing out - this will typically be the HTTP connector, i.e. the one with protocol="HTTP/1.1".
If a connectionTimeout value is set on the connector, it may need to be increased - e.g. from 20000 milliseconds (= 20 seconds) to 120000 milliseconds (= 2 minutes). If no connectionTimeout property value is set on the connector, the default is 60 seconds - if this is insufficient, the property may need to be added.
Restart Tomcat
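For example, a connector entry with an increased timeout might look like this (the port and values are illustrative):

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="120000"
           redirectPort="8443" />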
Connection.Response resp = Jsoup.connect(url) //
.timeout(20000) //
.method(Connection.Method.GET) //
.execute();
Actually, the error occurs when you have a slow internet connection, so try increasing the timeout value; the code above then worked for me.
I had the same problem while trying to read data from the request body. In my case it occurred randomly, and only for mobile client devices. I increased the connectionUploadTimeout to 1 min, as suggested by this link.
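For reference, a connector configured that way might look like the following (attribute names are from the Tomcat HTTP connector documentation; the values are illustrative). Note that connectionUploadTimeout only takes effect when disableUploadTimeout is set to false:

<Connector port="8080" protocol="HTTP/1.1"
           disableUploadTimeout="false"
           connectionUploadTimeout="60000" />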
I have the same issue. The java.net.SocketTimeoutException: Read timed out error happens on Tomcat under macOS 11.1, but it works perfectly on macOS 10.13. Same Tomcat folder, same WAR file. I have tried setting the timeout values higher, but nothing I do works.
If I run the same Spring Boot code in a regular Java application, outside Tomcat 9.0.41 (I tried other versions too), then it also works.
macOS 11.1 appears to be interfering with Tomcat.
As another test, if I copy the WAR file to an AWS EC2 instance, it works fine there too.
I have spent several days trying to figure this out, but cannot resolve it.
Suggestions very welcome! :)
This happened in my application: I was using a single object that was called by multiple functions, and those calls were not thread-safe.
Something like this:
class A {
    private final B b = new B(); // one shared instance, used from multiple threads

    void function1() {
        b.doSomething();
    }

    void function2() {
        b.doSomething();
    }
}
As they were not thread-safe, I was getting these errors:
redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketException: Socket is closed
and
redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out
This is how I fixed it:
class A {
    void function1() {
        B b = new B(); // a fresh instance per call, so no shared state
        b.doSomething();
    }

    void function2() {
        B b = new B();
        b.doSomething();
    }
}
Hope it helps
It means your server's response timed out. This can be caused by the server configuration or by a slow network.
I am using 11.2 and was receiving timeouts.
I resolved it by using the version of jsoup below.
<dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.7.2</version>
    <scope>compile</scope>
</dependency>