I have a statement that takes about 20 minutes to run, which is of the form:
create table new_table diststyle key distkey(column1) sortkey(column2)
as (select ....);
When I run it using an SQL IDE or the psql command-line client, the statement executes successfully, but when I run it from my Java program, the server closes the connection after 10 minutes with the following exception:
org.springframework.jdbc.UncategorizedSQLException: StatementCallback; uncategorized SQLException for SQL [create table new_table diststyle key distkey(column1) sortkey(column2) as (select ....);];
SQL state [HY000]; error code [600001]; [Amazon](600001) The server closed the connection.;
nested exception is java.sql.SQLException: [Amazon](600001) The server closed the connection.
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:84) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:419) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:538) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
at com.abc.mypackage.MyClass.myMethod(Myclass.java:123) [classes/:?]
Caused by: java.sql.SQLException: [Amazon](600001) The server closed the connection.
at com.amazon.support.channels.TLSSocketChannel.read(Unknown Source) ~[?:?]
Caused by: com.amazon.support.exceptions.GeneralException: [Amazon](600001) The server closed the connection.
at com.amazon.support.channels.TLSSocketChannel.read(Unknown Source) ~[?:?]
I'm using org.apache.commons.dbcp2.BasicDataSource to create connections. I've tried extending the timeout via defaultQueryTimeout, maxConnLifetimeMillis and socketTimeout but to no avail. The server keeps closing the connection after the same 10 minutes.
dataSource = new BasicDataSource();
dataSource.setUsername(dbUser);
dataSource.setPassword(dbPassword);
dataSource.setUrl(dbUrl);
dataSource.setDefaultAutoCommit(true);
dataSource.setTestOnBorrow(true);
dataSource.setTestOnReturn(true);
dataSource.setDriverClassName("com.amazon.redshift.jdbc41.Driver");
dataSource.setDefaultQueryTimeout(7200);
dataSource.setMaxConnLifetimeMillis(7200000);
dataSource.addConnectionProperty("socketTimeout", "7200");
How do I keep the connection alive for longer?
P.S. I do not have any problems establishing connections and running queries that take less than 10 minutes to finish.
You might want to extend your socket timeout.
Currently it is only 7200 ms:
dataSource.addConnectionProperty("socketTimeout", "7200");
Check whether the Redshift cluster has a workload management (WLM) policy that is timing out queries after 10 minutes.
Your Java code might also be setting this policy.
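For the second point, here is a minimal sketch of how a session-level timeout could be inspected and cleared over JDBC. It assumes statement_timeout is the relevant session parameter (a standard Redshift setting; verify against your cluster) and that dataSource is the BasicDataSource from the question; the WLM queue timeout itself is configured on the cluster and cannot be changed from the driver.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class StatementTimeoutCheck {
    // Print the session's statement_timeout (0 means disabled) and clear it
    // for this session so the server does not cancel long-running statements.
    static void checkAndClear(DataSource dataSource) throws SQLException {
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            try (ResultSet rs = stmt.executeQuery("SHOW statement_timeout;")) {
                if (rs.next()) {
                    System.out.println("statement_timeout = " + rs.getString(1));
                }
            }
            stmt.execute("SET statement_timeout TO 0;"); // remove any per-session limit
        }
    }
}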
You need to set the tcpKeepAlive time to 1 minute or less when getting the connection to the Redshift cluster:
Properties props = new Properties();
props.setProperty("user", user);
props.setProperty("password", password);
// Ask the driver to send TCP keep-alive probes so idle periods on the socket
// do not look like a dead connection to intermediate network devices.
props.setProperty("tcpKeepAlive", "true");
props.setProperty("TCPKeepAliveMinutes", "1");
Connection conn = DriverManager.getConnection(
        "jdbc:redshift://" + endpoint + ":" + port + "/" + database, props);
OP here: I was able to make it work by writing wrappers over BasicDataSource and Connection to poll the active connection with isValid(int) every few minutes (anything more frequent than once per 10 minutes works). In hindsight, it seems that most timeout-related properties on BasicDataSource apply to connections that are sitting idle in the pool, not to connections in use. setDefaultQueryTimeout and tcpKeepAlive + TCPKeepAliveMinutes did not work.
P.S. It has been a while since I resolved this problem and I do not have the code for the wrappers now. Here's a brief description of the wrappers.
The WrappedConnection class takes a Connection object (conn) and a TimerTask object (timerTask) in its constructor and implements the Connection interface by simply delegating to conn. timerTask calls this.isValid(100) every few minutes for as long as the connection is active. WrappedConnection.close stops timerTask and then calls conn.close.
WrappedBasicDataSource implements the DataSource interface, redirecting methods to a BasicDataSource object. WrappedBasicDataSource.getConnection gets a connection from the aforementioned BasicDataSource and wraps it, together with a new TimerTask, in a WrappedConnection.
I might have missed some details, but this is the gist of it; a rough sketch of the idea follows.
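For reference, a minimal sketch of that idea (names are hypothetical, not the original wrappers): a dynamic proxy delegates every Connection call to the real connection, while a daemon Timer calls isValid() every 5 minutes to keep traffic flowing on the socket; close() cancels the timer before closing the real connection.
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Timer;
import java.util.TimerTask;
import org.apache.commons.dbcp2.BasicDataSource;

public class KeepAliveDataSource {

    private final BasicDataSource delegate;

    public KeepAliveDataSource(BasicDataSource delegate) {
        this.delegate = delegate;
    }

    public Connection getConnection() throws SQLException {
        Connection conn = delegate.getConnection();

        // Ping the connection every 5 minutes (any interval under 10 minutes works)
        // so the server never sees it as idle while a long statement is running.
        Timer timer = new Timer(true);
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                try {
                    conn.isValid(100); // short round trip, mirrors the approach described above
                } catch (SQLException ignored) {
                    // connection already gone; the next real call will surface the error
                }
            }
        }, 5 * 60 * 1000L, 5 * 60 * 1000L);

        // Wrap the connection so close() also stops the keep-alive timer.
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                (proxy, method, args) -> {
                    if ("close".equals(method.getName())) {
                        timer.cancel();
                    }
                    return method.invoke(conn, args);
                });
    }
}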
Related
As mentioned here, we have almost the same issue in a Java project (Play Framework) with neo4j-ogm and the ogm-bolt-driver (3.2.27). The connection sometimes resets, throws an exception, and (mostly) reconnects on the next request.
Nodes are queried from the database via repositories and a neo4j-ogm session (which is opened for small units of work).
Any advice would be appreciated.
play.api.UnexpectedException: Unexpected exception[ConnectionException: Connection to the database failed]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:358)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:264)
at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:430)
at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:422)
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:454)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: org.neo4j.ogm.exception.ConnectionException: Connection to the database failed
at org.neo4j.ogm.drivers.bolt.driver.BoltDriverExceptionTranslator.translateExceptionIfPossible(BoltDriverExceptionTranslator.java:38)
at org.neo4j.ogm.session.Neo4jSession.doInTransaction(Neo4jSession.java:601)
at org.neo4j.ogm.session.Neo4jSession.doInTransaction(Neo4jSession.java:558)
at org.neo4j.ogm.session.delegates.LoadOneDelegate.load(LoadOneDelegate.java:87)
at org.neo4j.ogm.session.delegates.LoadOneDelegate.load(LoadOneDelegate.java:53)
at org.neo4j.ogm.session.Neo4jSession.load(Neo4jSession.java:178)
at modules.shared.repositories.CrudRepository.find(CrudRepository.java:50)
at config.secured.Secured.getUserFromSession(Secured.java:88)
at config.secured.Secured.getUser(Secured.java:56)
at config.secured.Secured.getUser(Secured.java:39)
at config.secured.DataWizardSecurity$AuthenticatedAction.call(DataWizardSecurity.java:63)
at config.filter.SafeFormFactoryToRequest.call(SafeFormFactoryToRequest.java:24)
at config.filter.Messaged.call(Messaged.java:29)
at play.core.j.JavaAction.$anonfun$apply$8(JavaAction.scala:175)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:672)
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:431)
at play.core.j.HttpExecutionContext.$anonfun$execute$1(HttpExecutionContext.scala:64)
at play.api.libs.streams.Execution$trampoline$.execute(Execution.scala:70)
at play.core.j.HttpExecutionContext.execute(HttpExecutionContext.scala:59)
at scala.concurrent.impl.Promise$Transformation.submitWithValue(Promise.scala:393)
at scala.concurrent.impl.Promise$DefaultPromise.submitWithValue(Promise.scala:302)
at scala.concurrent.impl.Promise$DefaultPromise.dispatchOrAddCallbacks(Promise.scala:276)
at scala.concurrent.impl.Promise$DefaultPromise.map(Promise.scala:146)
at scala.concurrent.Future$.apply(Future.scala:672)
at play.core.j.JavaAction.apply(JavaAction.scala:176)
at play.api.mvc.Action.$anonfun$apply$4(Action.scala:82)
at play.api.libs.streams.StrictAccumulator.$anonfun$mapFuture$4(Accumulator.scala:168)
at scala.util.Try$.apply(Try.scala:210)
at play.api.libs.streams.StrictAccumulator.$anonfun$mapFuture$3(Accumulator.scala:168)
at scala.Function1.$anonfun$andThen$1(Function1.scala:85)
at scala.Function1.$anonfun$andThen$1(Function1.scala:85)
at scala.Function1.$anonfun$andThen$1(Function1.scala:85)
at scala.Function1.$anonfun$andThen$1(Function1.scala:85)
at scala.Function1.$anonfun$andThen$1(Function1.scala:85)
at play.api.libs.streams.StrictAccumulator.run(Accumulator.scala:199)
at play.core.server.AkkaHttpServer.$anonfun$runAction$4(AkkaHttpServer.scala:417)
at akka.http.scaladsl.util.FastFuture$.strictTransform$1(FastFuture.scala:41)
at akka.http.scaladsl.util.FastFuture$.$anonfun$transformWith$3(FastFuture.scala:51)
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:448)
... 12 common frames omitted
Caused by: org.neo4j.driver.exceptions.ServiceUnavailableException: Connection to the database failed
at org.neo4j.driver.internal.util.Futures.blockingGet(Futures.java:143)
at org.neo4j.driver.internal.InternalSession.beginTransaction(InternalSession.java:98)
at org.neo4j.driver.internal.InternalSession.beginTransaction(InternalSession.java:92)
at org.neo4j.ogm.drivers.bolt.transaction.BoltTransaction.newOrExistingNativeTransaction(BoltTransaction.java:62)
at org.neo4j.ogm.drivers.bolt.transaction.BoltTransaction.<init>(BoltTransaction.java:50)
at org.neo4j.ogm.drivers.bolt.driver.BoltDriver.lambda$null$0(BoltDriver.java:128)
at org.neo4j.ogm.session.transaction.DefaultTransactionManager.openTransaction(DefaultTransactionManager.java:75)
at org.neo4j.ogm.session.Neo4jSession.beginTransaction(Neo4jSession.java:530)
at org.neo4j.ogm.session.Neo4jSession.doInTransaction(Neo4jSession.java:580)
... 49 common frames omitted
Suppressed: org.neo4j.driver.internal.util.ErrorUtil$InternalExceptionCause: null
at org.neo4j.driver.internal.async.inbound.ChannelErrorHandler.transformError(ChannelErrorHandler.java:127)
at org.neo4j.driver.internal.async.inbound.ChannelErrorHandler.fail(ChannelErrorHandler.java:113)
at org.neo4j.driver.internal.async.inbound.ChannelErrorHandler.exceptionCaught(ChannelErrorHandler.java:99)
at org.neo4j.driver.internal.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
at org.neo4j.driver.internal.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281)
at org.neo4j.driver.internal.shaded.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:273)
at org.neo4j.driver.internal.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1377)
at org.neo4j.driver.internal.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302)
at org.neo4j.driver.internal.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281)
at org.neo4j.driver.internal.shaded.io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:907)
at org.neo4j.driver.internal.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:125)
at org.neo4j.driver.internal.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:177)
at org.neo4j.driver.internal.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
at org.neo4j.driver.internal.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
at org.neo4j.driver.internal.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
at org.neo4j.driver.internal.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at org.neo4j.driver.internal.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: java.net.SocketException: Connection reset
at java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:345)
at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:376)
at org.neo4j.driver.internal.shaded.io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253)
at org.neo4j.driver.internal.shaded.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132)
at org.neo4j.driver.internal.shaded.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350)
at org.neo4j.driver.internal.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151)
at org.neo4j.driver.internal.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
at org.neo4j.driver.internal.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
at org.neo4j.driver.internal.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
at org.neo4j.driver.internal.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at org.neo4j.driver.internal.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.neo4j.driver.internal.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:832)
As I mention in my answer here, this exception comes from the Neo4j Java Driver, and there may be several reasons for it.
For instance, it can happen when the driver acquires a connection from its internal connection pool that has been idle for too long. Some cloud providers limit the idle time of connections through their load balancers, and the driver may fail as soon as it tries to use such a connection.
You can enable the connection liveness check to help with this. It ensures that a short network exchange happens for every connection that has been idle longer than the configured timeout; if the exchange fails, another connection is picked or established before one is handed out for further driver use. This costs an extra network round trip, but it is likely to prevent driver failures that would otherwise bubble up to higher-level retry logic with potentially exponential delays. The timeout can be set to zero if you want every connection to be tested.
Sample configuration:
var config = Config.builder()
        .withConnectionLivenessCheckTimeout(60_000, TimeUnit.MILLISECONDS)
        .build();
var driver = GraphDatabase.driver(URL, AuthTokens.basic(username, password), config);
I'm trying to understand why this error occurs:
javax.net.ssl|WARNING|01|main|2021-07-04 12:08:30.668 CEST|SSLSocketImpl.java:497|SSLSocket duplex close failed (
"throwable" : {
java.net.SocketException: Socket is closed
at java.base/java.net.Socket.shutdownInput(Socket.java:1538)
at java.base/sun.security.ssl.BaseSSLSocketImpl.shutdownInput(BaseSSLSocketImpl.java:216)
at java.base/sun.security.ssl.SSLSocketImpl.shutdownInput(SSLSocketImpl.java:751)
at java.base/sun.security.ssl.SSLSocketImpl.bruteForceCloseInput(SSLSocketImpl.java:701)
at java.base/sun.security.ssl.SSLSocketImpl.duplexCloseOutput(SSLSocketImpl.java:562)
at java.base/sun.security.ssl.SSLSocketImpl.close(SSLSocketImpl.java:486)
at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.close(SSLSocketImpl.java:1034)
at com.ibm.db2.jcc.t4.a0.j(a0.java:343)
at com.ibm.db2.jcc.t4.b.freeTransport_(b.java:5523)
at com.ibm.db2.jcc.t4.a.close_(a.java:455)
at com.ibm.db2.jcc.am.Agent.close(Agent.java:345)
at com.ibm.db2.jcc.t4.b.b(b.java:965)
at com.ibm.db2.jcc.t4.b.a(b.java:804)
at com.ibm.db2.jcc.t4.b.a(b.java:441)
at com.ibm.db2.jcc.t4.b.a(b.java:414)
at com.ibm.db2.jcc.t4.b.<init>(b.java:352)
at com.ibm.db2.jcc.DB2SimpleDataSource.getConnection(DB2SimpleDataSource.java:233)
at com.ibm.db2.jcc.DB2SimpleDataSource.getConnection(DB2SimpleDataSource.java:200)
at com.ibm.db2.jcc.DB2SimpleDataSource.getConnection(DB2SimpleDataSource.java:182)
at com.example.MainApplication.main(MainApplication.java:36)}
)
javax.net.ssl|ALL|01|main|2021-07-04 12:08:30.668 CEST|SSLSocketImpl.java:1217|Closing output stream
Exception in sql: com.ibm.db2.jcc.am.SqlNonTransientConnectionException
DB2 SQL Error: SQLCODE=-20157, SQLSTATE=08004, SQLERRMC=WEBADMIN;QUIESCE DATABASE;;, DRIVER=4.25.13
com.ibm.db2.jcc.am.SqlNonTransientConnectionException: DB2 SQL Error: SQLCODE=-20157, SQLSTATE=08004
The exception is thrown when I invoke the getConnection() method of the DriverManager:
connection = DriverManager.getConnection(DB_URL, properties);
I'm using Java 11.0.11 from Oracle (non-OpenJDK).
Talk to your DBA team or whoever manages the database: you get this exception because someone (or some job) has put the database into a quiesced state that is used for maintenance activity.
Normally this is a temporary situation; the database (or Db2 instance) needs to be brought back to normal mode with an UNQUIESCE action once the maintenance activity is complete. After the unquiesce, your connection should succeed as normal.
The SQLCODE (-20157) and SQLERRMC (WEBADMIN;QUIESCE DATABASE;) in the message tell you the cause of this exception.
Look up SQL20157N in the Db2 documentation for the detailed explanation.
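If the application needs to react to this state rather than just fail, a small sketch like the one below (a hypothetical helper, reusing the DB_URL and properties idea from the question) can detect the quiesce condition by its SQLSTATE and SQLCODE and back off instead of treating it as a hard error.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class QuiesceAwareConnect {
    // SQLSTATE 08004 with SQLCODE -20157 is what the Db2 driver reports while
    // the database is quiesced (see the message above).
    static Connection connect(String dbUrl, Properties props) throws SQLException {
        try {
            return DriverManager.getConnection(dbUrl, props);
        } catch (SQLException e) {
            if ("08004".equals(e.getSQLState()) && e.getErrorCode() == -20157) {
                // Database is quiesced for maintenance; retry after the DBA runs UNQUIESCE.
                System.err.println("Database quiesced, retry later: " + e.getMessage());
            }
            throw e;
        }
    }
}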
I have a weak connection to the database server and a time-consuming query that sometimes fails with this exception:
Caused by: java.sql.SQLException: I/O Error: Socket closed
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2481)
at net.sourceforge.jtds.jdbc.TdsCore.getNextRow(TdsCore.java:805)
at net.sourceforge.jtds.jdbc.JtdsResultSet.next(JtdsResultSet.java:611)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.DataInputStream.readFully(DataInputStream.java:195)
at net.sourceforge.jtds.jdbc.SharedSocket.readPacket(SharedSocket.java:885)
at net.sourceforge.jtds.jdbc.SharedSocket.getNetPacket(SharedSocket.java:731)
at net.sourceforge.jtds.jdbc.ResponseStream.getPacket(ResponseStream.java:477)
at net.sourceforge.jtds.jdbc.ResponseStream.read(ResponseStream.java:146)
at net.sourceforge.jtds.jdbc.TdsData.readData(TdsData.java:901)
at net.sourceforge.jtds.jdbc.TdsCore.tdsRowToken(TdsCore.java:3175)
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2433)
How can the connection interruption be handled? Currently I have to manually re-run the operation until it executes successfully; maybe it could be done at the JDBC driver level.
P.S. The socketTimeout property doesn't seem to affect this.
Are you sure this is because the query/procedure takes a long time? A "Socket closed" error normally means something went wrong with the network connection itself, i.e. something your driver could not control...
In general, I would suggest moving to a pool-based mechanism, which allows greater control over your persistence-layer interaction.
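Since the question mentions manually re-running the operation, here is a minimal application-level retry sketch (a hypothetical helper, not something jTDS provides): the driver cannot resume a broken result-set stream, so the whole query has to be retried on a fresh connection.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class RetryingQuery {
    // Re-run the query up to maxAttempts times (maxAttempts >= 1),
    // taking a fresh connection from the pool on each attempt.
    static int runWithRetry(DataSource ds, String sql, int maxAttempts) throws SQLException {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try (Connection conn = ds.getConnection();
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                int rows = 0;
                while (rs.next()) {
                    rows++; // process each row here
                }
                return rows;
            } catch (SQLException e) {
                last = e; // "Socket closed" / connection reset: try again on a new connection
            }
        }
        throw last;
    }
}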
Problem description: MongoDB version is 3.4.
We are not doing anything unusual, just normal queries and writes; since this is the testing phase, the QPS is small.
Questions:
1. How is this exception produced?
2. What configuration or adjustment needs to be done?
02-01 15:11:47 WARN - Got socket exception on connection [connectionId{localValue:43}] to 172.16.199.96:22001. All connections to 172.16.199.96:22001 will be closed.
02-01 15:11:47 INFO - Closed connection [connectionId{localValue:43}] to 172.16.199.96:22001 because there was a socket exception raised by this connection.
org.springframework.data.mongodb.UncategorizedMongoDbException: Exception receiving message; nested exception is com.mongodb.MongoSocketReadException: Exception receiving message
at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:107)
at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:2135)
at org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:1978)
at org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:1784)
at org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:1767)
at org.springframework.data.mongodb.core.MongoTemplate.find(MongoTemplate.java:641)
at org.springframework.data.mongodb.core.MongoTemplate.findOne(MongoTemplate.java:606)
at org.springframework.data.mongodb.core.MongoTemplate.findOne(MongoTemplate.java:598)
at com.xxx.xxx.xxx.xxx(xxxService.java:46)
at com.xxx.xxx.xxx.xxx(xxxService.java:157)
at com.xxx.xxx.xxx.xxx(xxxService.java:142)
at com.xxx.xxx.xxx.xxx(xxxService.java:87)
at com.alibaba.dubbo.common.bytecode.Wrapper2.invokeMethod(Wrapper2.java)
at com.alibaba.dubbo.rpc.proxy.javassist.JavassistProxyFactory$1.doInvoke(JavassistProxyFactory.java:46)
at com.alibaba.dubbo.rpc.proxy.AbstractProxyInvoker.invoke(AbstractProxyInvoker.java:72)
at com.alibaba.dubbo.rpc.protocol.InvokerWrapper.invoke(InvokerWrapper.java:53)
at com.alibaba.dubbo.rpc.filter.ExceptionFilter.invoke(ExceptionFilter.java:64)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:69)
at com.alibaba.dubbo.monitor.support.MonitorFilter.invoke(MonitorFilter.java:75)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:69)
at com.alibaba.dubbo.rpc.filter.TimeoutFilter.invoke(TimeoutFilter.java:42)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:69)
at com.alibaba.dubbo.rpc.protocol.dubbo.filter.TraceFilter.invoke(TraceFilter.java:78)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:69)
at com.alibaba.dubbo.rpc.filter.ContextFilter.invoke(ContextFilter.java:61)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:69)
at com.alibaba.dubbo.rpc.filter.GenericFilter.invoke(GenericFilter.java:132)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:69)
at com.alibaba.dubbo.rpc.filter.ClassLoaderFilter.invoke(ClassLoaderFilter.java:38)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:69)
at com.alibaba.dubbo.rpc.filter.EchoFilter.invoke(EchoFilter.java:38)
at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:69)
at com.alibaba.dubbo.rpc.protocol.dubbo.DubboProtocol$1.reply(DubboProtocol.java:100)
at com.alibaba.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.handleRequest(HeaderExchangeHandler.java:98)
at com.alibaba.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.received(HeaderExchangeHandler.java:170)
at com.alibaba.dubbo.remoting.transport.DecodeHandler.received(DecodeHandler.java:52)
at com.alibaba.dubbo.remoting.transport.dispatcher.ChannelEventRunnable.run(ChannelEventRunnable.java:81)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.mongodb.MongoSocketReadException: Exception receiving message
at com.mongodb.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:483)
at com.mongodb.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:228)
at com.mongodb.connection.UsageTrackingInternalConnection.receiveMessage(UsageTrackingInternalConnection.java:96)
at com.mongodb.connection.DefaultConnectionPool$PooledConnection.receiveMessage(DefaultConnectionPool.java:440)
at com.mongodb.connection.CommandProtocol.execute(CommandProtocol.java:112)
at com.mongodb.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:168)
at com.mongodb.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:289)
at com.mongodb.connection.DefaultServerConnection.command(DefaultServerConnection.java:176)
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:216)
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:207)
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:113)
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:516)
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:510)
at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:431)
at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:404)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:510)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:81)
at com.mongodb.Mongo.execute(Mongo.java:836)
at com.mongodb.Mongo$2.execute(Mongo.java:823)
at com.mongodb.DBCursor.initializeCursor(DBCursor.java:870)
at com.mongodb.DBCursor.hasNext(DBCursor.java:142)
at org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:1964)
... 37 common frames omitted
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at com.mongodb.connection.SocketStream.read(SocketStream.java:85)
at com.mongodb.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:494)
at com.mongodb.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:224)
... 57 common frames omitted
Java version 1.8.
Spring Boot version 1.5.3.
Deployed with Docker.
mongo.hosts=ip:port,ip:port,ip:port
mongo.database.name=dbname
mongo.username=username
mongo.password=pwd
mongo.connections.per.host=32
mongo.max.wait.time=2000
mongo.connect.timeout=2000
You can try autoConnectRetry, which simply means the driver will automatically attempt to reconnect to the server(s) after unexpected disconnects. In production environments you usually want this set to true.
This is from another post: How to configure MongoDB Java driver MongoOptions for production use?
For everybody who is experiencing the same random MongoSocketReadException: you may need the socketTimeoutMS or maxIdleTimeMS parameters instead. The autoConnectRetry parameter is no longer exposed in the MongoDB connection string.
Our situation: we switched to the MongoDB Atlas serverless solution for our development and testing environments, and ever since then we got this MongoSocketReadException roughly every 15 minutes, or randomly. We are also behind an enterprise firewall.
According to https://www.mongodb.com/docs/v6.0/tutorial/connection-pool-performance-tuning/:
a misconfigured firewall closes a socket connection incorrectly and the driver cannot detect that the connection closed improperly.
You need to use socketTimeoutMS to ensure that sockets are always closed: set socketTimeoutMS to two or three times the length of the slowest operation that the driver runs.
This matters because socketTimeoutMS defaults to 0, which never times out.
The maxIdleTimeMS parameter can also affect the connection: if the socket is closed but the client side does not detect it, the connection keeps waiting in the idle pool and is never closed. It also defaults to 0, meaning it waits forever with no upper bound.
So setting this to a small value may help the driver close the problematic connection with its dead socket before it tries to reach the database over that same connection, presuming the connection is still there.
So our solution:
...mongodbUri...?socketTimeoutMS=150000&maxIdleTimeMS=150000
We changed socketTimeoutMS from 0 to 150 s (150000 ms) and did the same for maxIdleTimeMS.
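If the timeouts are configured in code rather than in the URI, the equivalent settings in the 4.x sync Java driver would look roughly like the sketch below. We use the URI parameters above; the builder methods here are the standard driver API, but treat the snippet as an assumption to verify against your driver version, and the connection string is a placeholder.
import java.util.concurrent.TimeUnit;
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class MongoTimeoutSettings {
    public static void main(String[] args) {
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString("mongodb://user:pwd@host:port/dbname"))
                // socketTimeoutMS equivalent: fail reads on a dead socket after 150 s
                .applyToSocketSettings(b -> b.readTimeout(150, TimeUnit.SECONDS))
                // maxIdleTimeMS equivalent: drop pooled connections idle for more than 150 s
                .applyToConnectionPoolSettings(b -> b.maxConnectionIdleTime(150, TimeUnit.SECONDS))
                .build();
        try (MongoClient client = MongoClients.create(settings)) {
            System.out.println(client.listDatabaseNames().first());
        }
    }
}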
I tried to set a connection timeout with the following code:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ConnectionTimeout {
    public static void main(String[] args) throws Exception {
        String entry = "jdbc:oracle:thin:@xxx:1521:xxx";
        Class.forName("oracle.jdbc.driver.OracleDriver");
        DriverManager.setLoginTimeout(1);
        Connection con = DriverManager.getConnection(entry, "username", "password");
        Statement s = con.createStatement();
        s.execute("select 1 from dual");
        s.close();
        con.close();
    }
}
The instance xxx does not exist. But I get the following exception:
Exception in thread "main" java.sql.SQLException: E/A-Exception: Socket is not connected
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:387)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:439)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:165)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:801)
at java.sql.DriverManager.getConnection(Unknown Source)
at java.sql.DriverManager.getConnection(Unknown Source)
at my.package.connection.timeout.ConnectionTimeout.main(ConnectionTimeout.java:22)
How can I implement a timeout for connections to an Oracle database instance that does not exist or is not available?
Edit:
If I set DriverManager.setLoginTimeout(30) to 30 seconds, the exception happens just as fast as before!
Your DriverManager.setLoginTimeout(1); sets the maximum time in seconds that the driver will wait while connecting to the database. In your case it is set to 1.
To have no limit, use setLoginTimeout(0); 0 means no limit.
I hope this helps.
Update: if your instance xxx doesn't exist, how would you expect the Oracle driver to connect to the database? It won't make any difference how long you set your loginTimeout; there is no host to connect to.
Note that the Javadoc describes the timeout as being in seconds, but in the Oracle JDBC implementation it is treated as milliseconds.
You can try specifying the value in milliseconds instead.
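As an illustration of that workaround (a hedged sketch: oracle.net.CONNECT_TIMEOUT is the thin driver's documented connect-timeout property, and the host details and credentials are placeholders), the timeout can also be passed as an explicit connection property in milliseconds rather than through DriverManager.setLoginTimeout:
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class OracleConnectTimeout {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "username");
        props.setProperty("password", "password");
        // Thin driver connect timeout, given here in milliseconds (5 seconds).
        props.setProperty("oracle.net.CONNECT_TIMEOUT", "5000");
        try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:@xxx:1521:xxx", props)) {
            System.out.println("connected");
        }
    }
}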