I have a Tomcat-based web application, and I am intermittently getting the following exception:
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:532)
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:501)
at org.apache.coyote.http11.InternalInputBuffer$InputStreamInputBuffer.doRead(InternalInputBuffer.java:563)
at org.apache.coyote.http11.filters.IdentityInputFilter.doRead(IdentityInputFilter.java:124)
at org.apache.coyote.http11.AbstractInputBuffer.doRead(AbstractInputBuffer.java:346)
at org.apache.coyote.Request.doRead(Request.java:422)
at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:290)
at org.apache.tomcat.util.buf.ByteChunk.substract(ByteChunk.java:431)
at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:315)
at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:200)
at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
Unfortunately I don't have access to the client, so I am just trying to confirm the various reasons this can happen:
1. The server is trying to read data from the request, but it's taking longer than the timeout value for the data to arrive from the client. The timeout here would typically be the Tomcat connector's connectionTimeout attribute.
2. The client has a read timeout set, and the server is taking longer than that to respond.
3. One of the threads I went through said this can happen under high concurrency and with keep-alive enabled.
For #1, the initial value I had set was 20 seconds; I have bumped this up to 60 seconds and will test to see if there are any changes.
Meanwhile, if any of you can provide your expert opinion on this, that'll be really helpful, as would any other reason you can think of that might cause this issue.
Server is trying to read data from the request, but it's taking longer than the timeout value for the data to arrive from the client. The timeout here would typically be the Tomcat connector's connectionTimeout attribute.
Correct.
Client has a read timeout set, and server is taking longer than that to respond.
No. That would cause a timeout at the client.
One of the threads I went through said this can happen with high concurrency and if keep-alive is enabled.
That is obviously guesswork, and completely incorrect. It happens if and only if no data arrives within the timeout. Period. Load, keep-alive, and concurrency have nothing to do with it whatsoever.
It just means the client isn't sending. You don't need to worry about it. Browser clients come and go in all sorts of strange ways.
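The mechanism is easy to reproduce in isolation. A minimal self-contained sketch (port 9000 is an arbitrary choice): Tomcat's connectionTimeout is ultimately applied as the socket's SO_TIMEOUT, and a blocking read throws SocketTimeoutException when no data arrives within it.

import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadTimeoutDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9000);
             Socket client = new Socket("localhost", 9000);
             Socket peer = server.accept()) {      // server side never writes
            client.setSoTimeout(2000);             // fail blocked reads after 2 s
            client.getInputStream().read();        // blocks: no data ever arrives
        } catch (SocketTimeoutException expected) {
            System.out.println("Read timed out, as expected");
        }
    }
}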
Here are the basic instructions:
Locate the "server.xml" file in the "conf" folder beneath Tomcat's base directory (i.e. %CATALINA_HOME%/conf/server.xml).
Open the file in an editor and search for <Connector.
Locate the relevant connector that is timing out - this will typically be the HTTP connector, i.e. the one with protocol="HTTP/1.1".
If a connectionTimeout value is set on the connector, it may need to be increased - e.g. from 20000 milliseconds (= 20 seconds) to 120000 milliseconds (= 2 minutes). If no connectionTimeout property value is set on the connector, the default is 60 seconds - if this is insufficient, the property may need to be added.
Restart Tomcat
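For reference, a hedged sketch of what the adjusted connector entry might look like (the port and redirectPort values here are the stock defaults, not values taken from this question):

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="120000"
           redirectPort="8443" />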
// Client side: raise jsoup's read timeout (in milliseconds) for slow connections
Connection.Response resp = Jsoup.connect(url)
        .timeout(20000)
        .method(Connection.Method.GET)
        .execute();
The error can occur when the connection is slow, so try increasing the timeout; with a larger value the code above worked for me.
I had the same problem while trying to read data from the request body. In my case it occurred randomly, and only for mobile client devices, so I increased the connectionUploadTimeout to one minute, as suggested by this link.
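For reference, a hedged sketch of the connector attributes involved; per the Tomcat connector documentation, connectionUploadTimeout applies while a data upload is in progress and only takes effect when disableUploadTimeout is set to false (the other attributes here are placeholders):

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           disableUploadTimeout="false"
           connectionUploadTimeout="60000" />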
I have the same issue. The java.net.SocketTimeoutException: Read timed out error happens on Tomcat under macOS 11.1, but it works perfectly on macOS 10.13. Same Tomcat folder, same WAR file. I have tried setting timeout values higher, but nothing I do works.
If I run the same Spring Boot code in a regular Java application, outside Tomcat 9.0.41 (I tried other versions too), then it also works.
macOS 11.1 appears to be interfering with Tomcat.
As another test, if I copy the WAR file to an AWS EC2 instance, it works fine there too.
I have spent several days trying to figure this out, but cannot resolve it.
Suggestions very welcome! :)
This happened to my application: I was using a single object that was being called from multiple functions, and those calls were not thread-safe.
Something like this:

class A {
    private B b; // one shared instance, used concurrently from several callers

    void function1() {
        b.doSomething();
    }

    void function2() {
        b.doSomething();
    }
}
Because they were not thread-safe, I was getting these errors:
redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketException: Socket is closed
and
redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out
This is how I fixed it:

class A {
    void function1() {
        B b = new B(); // each call gets its own instance
        b.doSomething();
    }

    void function2() {
        B b = new B();
        b.doSomething();
    }
}
Hope it helps.
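If the shared object is a Jedis connection, as the exceptions suggest, an alternative to creating a fresh object per call is a connection pool; a minimal sketch, assuming a local Redis on the default port:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

class A {
    // The pool itself is thread-safe; individual Jedis instances are not.
    private final JedisPool pool = new JedisPool("localhost", 6379);

    void function1() {
        try (Jedis jedis = pool.getResource()) { // borrow a dedicated connection
            jedis.set("key1", "value1");
        }                                        // returned to the pool on close
    }

    void function2() {
        try (Jedis jedis = pool.getResource()) {
            jedis.get("key1");
        }
    }
}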
It means the response from your server timed out. It can be caused by the server configuration or by the network.
I am using 11.2 and was receiving timeouts.
I resolved it by using the version of jsoup below.
<dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.7.2</version>
    <scope>compile</scope>
</dependency>
SOLVED
Ultimately, the solution noted in the similar ticket linked below worked for us. When we tried it initially, our configuration parser was mangling the URL and removing &connectTimeout=15000&socketTimeout=60000 from it, which invalidated that test.
I'm having trouble getting a Tomcat JDBC connection pool to fail over to a new DB server using Amazon RDS's Multi-AZ feature. When fail-over occurs, the application server gets hung up trying to borrow a connection from the pool. It's similar to this question; however, the solution to that user's problem did not help me, so I suspect it's not quite the same: Configure GlassFish JDBC connection pool to handle Amazon RDS Multi-AZ failover
The sequence goes a bit like this:
1. Successful request is made, log output as expected.
2. Fail-over initiated (via reboot with failover in RDS).
3. Request made that times out; log messages from the request appear as expected up until a connection is borrowed from the pool.
4. Subsequent requests generate no log messages; they time out as well.
5. After some amount of time, the daemon will eventually start printing more log messages as if it successfully connected to the database and is performing operations. This can take just over 16 minutes to occur; the client has long since timed out.
6. If I wait about 50 minutes and try again, eventually the system will finally accept connections again.
Notes
If I tell Tomcat to shut down, there are exceptions in the logs about it being unable to clean up resources and about my servlet still processing a request. Most notable (in my opinion) is the following stack trace, indicating that at least one thread is stuck waiting for communication over a socket from MySQL.
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
java.net.SocketInputStream.read(SocketInputStream.java:171)
java.net.SocketInputStream.read(SocketInputStream.java:141)
com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:114)
com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:161)
com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:189)
com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3116)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3573)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4113)
com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570)
com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2812)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2761)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:894)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:732)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:441)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:384)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:716)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:579)
org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:174)
org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:111)
<...>
If I restart Tomcat, things return to normal as soon as it comes back. Obviously, long-term this is probably not a workable solution for maintaining uptime.
Environment Details
Database: MySQL (using mysql-connector-java 5.1.26) (server 5.5.53)
Tomcat 8.0.45
I've gone through several changes to my configuration while trying to solve this issue. At the time of this writing the following related settings are in place:
jre/lib/security/java.security -- I'm under the impression that the default DNS cache TTL for Oracle Java 1.8 is 30 seconds with no security manager. I set these to zero just to be positive this isn't the issue.
networkaddress.cache.ttl: 0
networkaddress.cache.negative.ttl: 0
connection pool settings
testOnBorrow: true
testOnConnect: true
testOnReturn: true
jdbc url parameters
connectTimeout:60000
socketTimeout:60000
autoReconnect:true
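Assembled into the JDBC URL, these parameters look something like this (the host and database name are placeholders):
jdbc:mysql://host:3306/dbname?connectTimeout=60000&socketTimeout=60000&autoReconnect=true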
Update
Still no solution found.
Added logging to confirm that this was not a DNS caching issue. The IP address logged immediately before the timeout matches the IP address of the 'new' master RDS host.
For reference the following block represents the properties I'm using to initialize my data source. I’m configuring the pool in code rather than JNDI, with some elements pulled out of our app's config file. I’ve pasted the code below along with comments indicating what the config values are for the tests I’ve been running.
PoolProperties p = new PoolProperties();
p.setUrl(config.get("JDBC_URL")); // jdbc:mysql://host:3306/dbname
p.setDriverClassName("com.mysql.jdbc.Driver");
p.setUsername(config.get("JDBC_USER"));
p.setPassword(config.get("JDBC_PASSWORD"));
p.setJmxEnabled(true);
p.setTestWhileIdle(false);
p.setTestOnBorrow(true);
p.setValidationQuery("SELECT 1");
p.setValidationInterval(30000);
p.setTimeBetweenEvictionRunsMillis(30000);
p.setMaxActive(Integer.parseInt(config.get("MAX_ACTIVE"))); //45
p.setInitialSize(10);
p.setMaxWait(5); // milliseconds
p.setRemoveAbandonedTimeout(Integer.parseInt(config.get("REMOVE_ABANDONED_TIMEOUT"))); //600
p.setMinEvictableIdleTimeMillis(Integer.parseInt(config.get("DB_EVICTION_TIMEOUT"))); //60000
p.setMinIdle(Integer.parseInt(config.get("DB_MIN_IDLE"))); //50
p.setLogAbandoned(Boolean.parseBoolean(config.get("LOG_ABANDONED"))); //true
p.setRemoveAbandoned(true);
p.setJdbcInterceptors(
"org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"+
"org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;"+
"org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer");
// make sure new connections have auto commit true
p.setDefaultAutoCommit(true);
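The properties are then handed to the pool's DataSource implementation; a minimal sketch of the usual wiring:

import org.apache.tomcat.jdbc.pool.DataSource;

DataSource datasource = new DataSource();
datasource.setPoolProperties(p);
// Connections are then borrowed as usual:
// java.sql.Connection con = datasource.getConnection();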
JMeter: I keep getting 'java.net.ConnectException: Connection timed out: connect'?
I have created a load test which tests a specific URL with 200 users.
When running the load test for one iteration, I keep getting connection timeouts.
I have made the following changes listed here: https://msdn.microsoft.com/en-us/library/aa560610(v=bts.20).aspx
But the issue is still there:
You most probably don't have access to the target host from the machine where you run the test.
Did you configure a proxy, the way your web browser is probably configured?
http://jmeter.apache.org/usermanual/get-started.html#proxy_server
But if the failure is only partial, then your server might be overloaded and rejecting some requests.
My expectation is that the "problematic" requests are simply not able to finish in 20 seconds (most probably you have modified the Connect timeout and set this value in the HTTP Request or HTTP Request Defaults).
20 seconds looks like a long response time to me, so your finding indicates a performance problem in the application under test.
Going forward, if you would like to see a more "human readable" message in the results file, switch to a Duration Assertion instead of setting timeouts at the protocol level.
See the How to Use JMeter Assertions in Three Easy Steps article for more information on conditionally failing JMeter requests.
Please check the configuration of the client machine from which you are running your tests; it may be that it cannot handle 200 threads. Run the test in increments, i.e. try with 10, 50, 70 threads and so on, and check from which step onwards you get the error. It is also advisable not to include listeners during load testing.
Please check the best practices for load testing using JMeter:
http://jmeter.apache.org/usermanual/best-practices.html
I'm using SoapUI 4.6.0 to hit a WCF web service, and when I have really large message payloads, I'm seeing the following error:
Error getting response; java.net.SocketException: Connection reset
The WCF service has around 10 methods, each with progressively larger inputs (e.g., 10 int properties, 50 int properties, 100 int properties, etc.). This works with the smaller messages, but at around 2000-3000 int properties the error occurs.
The call appears to succeed on the server side, and with this coming from Java, I'm assuming I'm butting up against some size limitation or configuration in the client. Is this something I can tweak within SoapUI, the Java runtime, or elsewhere?
For me, the trick that worked was adding the entry below to the SoapUI-5.2.0.vmoptions file (it can be found in the bin directory of the installation):
-Dsoapui.https.protocols=SSLv3,TLSv1.2
Normally a connection reset means that one of the underlying servers timed out waiting for data from another server or application and reset the connection.
You should try out the suggestions #kroonwijk gave; they'll tell you which server is causing the reset and what is causing it to reset the connection.
Also see What's causing my java.net.SocketException: Connection reset?
If the above solutions don't work for you, then try this:
1. Close SoapUI.
2. Go to the SoapUI directory, for example: C:\Program Files\SmartBear\SoapUI-5.3.0\
3. Rename the directory "jre" to "jre.ignore".
Done. Open SoapUI and it should work now.
I'm trying to configure spymemcached to retrieve data from a memcached server (I tried both 1.2 and 1.4). I configured it with the values provided in their wiki (http://code.google.com/p/spymemcached/wiki/SpringIntegration). However, if I inject that bean as a MemcachedClient into my class, every time I try to access the cache I get a timeout. My line of code is as simple as this:
MyClass object = (MyClass) memcachedClient.get(cacheKey);
At this moment the value is not in the cache, so I would expect the call to return null. Instead, all I'm getting is a CXF exception (this is a web service), in which the cause is:
Caused by: net.spy.memcached.OperationTimeoutException: Timeout waiting for value
at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:1003)
at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:1018)
There is no info in the logs (although I'm still trying to switch them to DEBUG; for now I only see spymemcached logs at INFO). Has anyone had similar issues? I can access the memcached server via telnet, and a get correctly returns END.
Thanks.
The problem was using the BINARY protocol. Switching to TEXT works fine. I guess the installed build of memcached didn't support the binary protocol; however, it was not an easy catch!
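For anyone hitting the same thing, the protocol is selected when the client is built; a hedged sketch using spymemcached's ConnectionFactoryBuilder (host and port are placeholders):

import java.net.InetSocketAddress;
import java.util.Collections;
import net.spy.memcached.ConnectionFactoryBuilder;
import net.spy.memcached.MemcachedClient;

// Force the text protocol instead of BINARY
MemcachedClient client = new MemcachedClient(
        new ConnectionFactoryBuilder()
                .setProtocol(ConnectionFactoryBuilder.Protocol.TEXT)
                .build(),
        Collections.singletonList(new InetSocketAddress("localhost", 11211)));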
I encounter this error after a number of HTTPS requests. Does anyone have any idea what the reason could be? It seems to be related to SSL, but it was previously working fine. I really don't understand what could have caused this issue.
Error commiting response: java.io.IOException: Broken pipe at
sun.nio.ch.FileDispatcher.write0(Native Method) at
sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29) at
sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104) at
sun.nio.ch.IOUtil.write(IOUtil.java:75) at
sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:302) at
com.sun.enterprise.server.ss.ASOutputStream.write(ASOutputStream.java:120) at
com.sun.net.ssl.internal.ssl.OutputRecord.writeBuffer(OutputRecord.java:283) at
com.sun.net.ssl.internal.ssl.OutputRecord.write(OutputRecord.java:272) at
com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:666) at
com.sun.net.ssl.internal.ssl.AppOutputStream.write(AppOutputStream.java:59) at
org.apache.coyote.http11.InternalOutputBuffer.commit(InternalOutputBuffer.java:602) at
com.sun.enterprise.web.connector.grizzly.ProcessorTask.action(ProcessorTask.java:721) at
org.apache.coyote.Response.action(Response.java:188) at
org.apache.coyote.Response.sendHeaders(Response.java:380) at
org.apache.coyote.tomcat5.OutputBuffer.doFlush(OutputBuffer.java:357) at
org.apache.coyote.tomcat5.OutputBuffer.close(OutputBuffer.java:318) at
org.apache.coyote.tomcat5.CoyoteResponse.finishResponse(CoyoteResponse.java:528) at
org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:192) at
com.sun.enterprise.web.connector.grizzly.ProcessorTask.doProcess(ProcessorTask.java:604) at
com.sun.enterprise.web.connector.grizzly.ProcessorTask.process(ProcessorTask.java:475) at
com.sun.enterprise.web.connector.grizzly.ProcessorTask.doTask(ProcessorTask.java:426) at
com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:281) at
com.sun.enterprise.web.connector.grizzly.WorkerThread.run(WorkerThread.java:83)
I don't know about sun.nio.ch, but...
This is a standard, annoying error you sometimes get in Java web apps. You get it when a client requests a URL and then either hits stop in the browser or clicks away to another URL. The app is complaining that it wasn't able to send the complete response.
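Since the client has already gone away, there is usually nothing to recover; catching and logging the IOException is about all you can do. A hedged sketch (buildCsv is a hypothetical helper standing in for whatever produces the response body):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DownloadServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        try {
            resp.setContentType("text/csv");
            resp.getOutputStream().write(buildCsv());
            resp.getOutputStream().flush();
        } catch (IOException e) {
            // Broken pipe: the client hit stop or navigated away mid-response.
            log("Client aborted the download: " + e.getMessage());
        }
    }

    private byte[] buildCsv() { // hypothetical placeholder
        return "a,b,c\n1,2,3\n".getBytes(StandardCharsets.UTF_8);
    }
}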
In my case this was happening when I generated an Excel or CSV file on the server for local download (it crashed at HttpServletResponse.getOutputStream().flush()).
However, it turned out to be related to the browser configuration, in my case Chrome 32-bit on Windows 7. Nothing on the server side.
For three days I investigated my web application in depth, looking for the cause of the problem, and checked many Apache libraries, etc.
Then I tried to perform the same action from another computer, and finally from the same one with Firefox. No problem.
Finally I discovered that the cause was Chrome's default cache size.
I changed it with --disk-cache-size=2147483648 (at the end of the shortcut's target) and the problem disappeared.
I hope this can save someone some time.
A Java NIO Pipe is a one-way data connection between two threads. A Pipe has a source channel and a sink channel. You write data to the sink channel. This data can then be read from the source channel.
Now, coming to the problem: whenever the sink channel is full (reads are not fast enough to free space in the buffer), the pipe is closed, so any writes that come after this point will fail.
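For context, a minimal self-contained sketch of a Pipe (single-threaded here, just to show data flowing from sink to source):

import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.charset.StandardCharsets;

public class PipeDemo {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();

        // Writer side: data goes into the sink channel.
        ByteBuffer out = ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8));
        while (out.hasRemaining()) {
            pipe.sink().write(out);
        }

        // Reader side: the same bytes come back out of the source channel.
        ByteBuffer in = ByteBuffer.allocate(16);
        int n = pipe.source().read(in);
        System.out.println(new String(in.array(), 0, n, StandardCharsets.UTF_8));
    }
}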