I am trying to execute a query that sometimes takes a huge amount of time, which leads to a closed connection: the connection gets closed before the query executes/commits. I want to recover from the error, get a new connection, and then retry.
Caused by: java.sql.SQLRecoverableException: Closed Connection
at oracle.jdbc.driver.OracleStatement.ensureOpen(OracleStatement.java:4051)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1473)
at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:406)
at com.fimt.sat.testora12date.dao.DateSaverGetterDao.testAbandonedConnectionWithDS(DateSaverGetterDao.java:73)
You can catch this particular exception and retry:
public void save(MyData data) {
    try {
        ...
    } catch (SQLRecoverableException e) {
        // Better: use a parameter to cap the
        // maximum number of retries.
        // Optionally retry on a secondary thread,
        // delayed by a certain number of seconds.
        save(data);
    }
}
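A bounded variant of that idea, which also acquires a fresh connection per attempt, may be safer. In the sketch below, MAX_RETRIES, getConnection(), and the SQL text are illustrative placeholders, not part of the original answer:
// Sketch only: bounded retry on SQLRecoverableException, taking a fresh
// connection on each attempt. getConnection(), the SQL, and MAX_RETRIES
// are illustrative placeholders.
// imports: java.sql.*
private static final int MAX_RETRIES = 3;

public void save(MyData data) throws SQLException {
    for (int attempt = 1; ; attempt++) {
        try (Connection con = getConnection(); // fresh connection per attempt
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO my_table (payload) VALUES (?)")) {
            ps.setString(1, data.toString());
            ps.executeUpdate();
            return; // success
        } catch (SQLRecoverableException e) {
            if (attempt >= MAX_RETRIES) {
                throw e; // give up after MAX_RETRIES attempts
            }
            // optionally sleep/back off here before retrying
        }
    }
}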
The answer by Davide Lorenzo MARINO is great, unless you have a query heavy enough that it still fails to execute even after several such recoveries.
I'm not an Oracle professional, but what I've found is that you can tune some kind of RAC failover that will keep your transaction alive even after the timeout is exceeded.
But generally my view is that it is better to split the data somehow into multiple queries.
Better to split/bucket the data in the select query and commit at intervals. E.g. if the query covers a duration of 2 months, bucket/split it into 15-day periods, run the necessary statements for each bucket, and commit it, as in the sketch below.
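A minimal sketch of that bucketing loop; the table name, date column, start date, and getConnection() helper are illustrative assumptions:
// Sketch: process a 2-month range in 15-day buckets, committing each one.
// Table/column names and getConnection() are illustrative placeholders.
// imports: java.sql.*, java.time.LocalDate
LocalDate start = LocalDate.of(2023, 1, 1);
LocalDate end = start.plusMonths(2);
try (Connection con = getConnection()) {
    con.setAutoCommit(false);
    for (LocalDate from = start; from.isBefore(end); from = from.plusDays(15)) {
        LocalDate to = from.plusDays(15).isBefore(end) ? from.plusDays(15) : end;
        try (PreparedStatement ps = con.prepareStatement(
                "DELETE FROM my_table WHERE event_date >= ? AND event_date < ?")) {
            ps.setDate(1, java.sql.Date.valueOf(from));
            ps.setDate(2, java.sql.Date.valueOf(to));
            ps.executeUpdate();
        }
        con.commit(); // commit each 15-day bucket instead of one huge transaction
    }
}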
I got the following error upon connecting to the DB and accessing the tables.
"An error was encountered performing the requested operation"
Closed Connection
Vendor code 17008.
I was able to resolve this after downloading the latest version of SQL Developer, 19.2.1.247. The earlier version was 3.x.
The DB team for our application has reported an ASYNC_NETWORK_IO issue along with a big but optimized query that executes in around 36 seconds and returns around 644,000 rows.
The main reason behind this might be one of these:
A. A problem with the client application
B. Problems with the network (but we have 1 Gb Ethernet)
So this might be a code-side issue, because the session must wait for the client application to process the data received from SQL Server before it can signal SQL Server that it is ready to accept new data for processing.
This is a common scenario that may reflect bad application design, and it is the most common cause of excessive ASYNC_NETWORK_IO wait type values.
Here is how we are getting data from the DB in code:
try {
    queue.setTickets(jdbcTemplate.query(sql, params, new QueueMapper()));
} catch (DataAccessException e) {
    logger.error("get(QueueType queueType, long ticketId)", e);
}
Can anyone advise me on this?
Thanks in advance.
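For reference, a common mitigation for high ASYNC_NETWORK_IO waits is to consume rows as they arrive instead of materializing all ~644,000 in one list. A sketch with Spring's RowCallbackHandler, assuming jdbcTemplate is a NamedParameterJdbcTemplate and QueueMapper is a RowMapper, as the code above suggests; handle(...) is a hypothetical fast per-row consumer:
// Sketch: stream rows instead of building one 644,000-element list.
// imports: java.sql.ResultSet, java.sql.SQLException,
//          org.springframework.jdbc.core.RowCallbackHandler
final QueueMapper mapper = new QueueMapper();
jdbcTemplate.query(sql, params, new RowCallbackHandler() {
    private int rowNum = 0;
    @Override
    public void processRow(ResultSet rs) throws SQLException {
        handle(mapper.mapRow(rs, rowNum++)); // handle(...) is hypothetical; keep it fast
    }
});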
I am trying to set the write timeout in Cassandra with the Java driver. SocketOptions allows me to set a read and connect timeout, but not a write timeout.
Does anyone know a way to do this without changing cassandra.yaml?
thanks
Altober
The name is misleading, but SocketOptions.getReadTimeoutMillis() applies to all requests from the driver to cassandra. You can think of it as a client-level timeout. If a response hasn't been returned by a cassandra node in that period of time an OperationTimeoutException will be raised and another node will be tried. Refer to the javadoc link above for more nuanced information about when the exception is raised to the client. Generally, you will want this timeout to be greater than your timeouts in cassandra.yaml, which is why 12 seconds is the default.
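For completeness, a minimal sketch of raising that client-level timeout via SocketOptions (the contact point is a placeholder):
// Sketch: raise the driver's read timeout above the server-side timeouts.
// imports: com.datastax.driver.core.*
Cluster cluster = Cluster.builder()
        .addContactPoint("127.0.0.1") // placeholder contact point
        .withSocketOptions(new SocketOptions().setReadTimeoutMillis(15000))
        .build();
Session session = cluster.connect();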
If you want to effectively manage timeouts at the client level, you can control this on a per query basis by using executeAsync along with a timed get on the ResultSetFuture to give up on the request after a period of time, i.e.:
ResultSet result = session.executeAsync("your query").get(300, TimeUnit.MILLISECONDS);
This will throw a TimeoutException if the request hasn't been completed in 300 ms.
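A slightly fuller sketch of the same pattern, including giving up cleanly; the query string is a placeholder:
// Sketch: per-query client-side timeout via a timed get on the future.
// imports: com.datastax.driver.core.*, java.util.concurrent.*
ResultSetFuture future = session.executeAsync("SELECT * FROM my_table"); // placeholder query
try {
    ResultSet result = future.getUninterruptibly(300, TimeUnit.MILLISECONDS);
    // process result ...
} catch (TimeoutException e) {
    future.cancel(true); // stop waiting client-side; the node may still finish the query
}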
In a batch script I use a loop to execute a bunch of SQL (HQL) statements against a Teradata database. After some iterations I receive the following error:
Teradata database: 3130 Response limit exceeded
Now the documentation suggests (as does the answer to this question) that this is due to too many open result sets for the same session.
Now the session and the ResultSet are managed by the EntityManager, and I wonder if there is a way to avoid closing and reopening the connection in this case via JPA/Hibernate.
I have tried entityManager.clear() and flush() without any effect.
Is there a way to handle this better? Maybe I am missing something. My "batch" runs under Spring 2.5 in a "cli" mode.
In my case it turned out to be a row with large BLOB data. After refining the steps I could retrieve the data without 3130 popping up.
I'm using the latest Derby, 10.11.1.1.
Doing something like this:
DriverManager.registerDriver(new org.apache.derby.jdbc.EmbeddedDriver());
java.sql.Connection connection = DriverManager.getConnection("jdbc:derby:filePath", ...);
Statement stmt = connection.createStatement();
stmt.setQueryTimeout(2); // shall stop the query after 2 seconds, but it does nothing
stmt.executeQuery(strSql);
stmt.cancel(); // this would in fact run in another thread
I get the exception "java.sql.SQLFeatureNotSupportedException: Caused by: ERROR 0A000: Feature not implemented: cancel".
Do you know if there is a way to make this work? Or is it really not implemented in Derby, so I would need to use a different embedded database? Any tip for a free DB I could use instead of Derby that supports SQL timeouts?
As stated in the Java docs:
void cancel() throws SQLException
Cancels this Statement object if both the DBMS and driver support aborting an SQL statement. This method can be used by one thread to cancel a statement that is being executed by another thread.
and it throws:
SQLFeatureNotSupportedException - if the JDBC driver does not support this method
You could go with MySQL.
There are many embedded databases available; you can go through this list:
embedded database
If you get Feature not implemented: cancel, then that is definite: cancel is not supported.
From this post by H2's author it looks like H2 supports two ways to timeout your queries, both through the JDBC API and through a setting on the JDBC URL.
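A minimal sketch of the JDBC-API route with H2 (a trivial placeholder query; a long-running one would be cancelled after the timeout):
// Sketch: H2 honors Statement.setQueryTimeout, unlike embedded Derby.
// imports: java.sql.*
try (Connection con = DriverManager.getConnection("jdbc:h2:mem:test");
     Statement stmt = con.createStatement()) {
    stmt.setQueryTimeout(2);        // seconds
    stmt.executeQuery("SELECT 1");  // placeholder; a slow query would time out after ~2 s
}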
Actually I found that there is a lock wait (deadlock) timeout in Derby as well, only it is set to 60 seconds by default and I never had the patience to reach it :).
So the correct answer would be:
stmt.setQueryTimeout(2); truly seems not to work
stmt.cancel(); truly seems not to be implemented
But luckily a timeout in the database manager exists, and it is set to 60 seconds. See Derby dead-locks.
The time can be changed using this command:
statement.executeUpdate("CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY(" +
"'derby.locks.waitTimeout', '5')");
And it works :)
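Note this caps lock waits, not CPU-bound query time. If you need to detect it firing, SQLState 40XL1 is Derby's lock-timeout state; a sketch:
// Sketch: lower the lock wait timeout, then detect when it fires.
// imports: java.sql.*
statement.executeUpdate("CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY(" +
        "'derby.locks.waitTimeout', '5')");
try {
    statement.executeQuery(strSql);
} catch (SQLException e) {
    if ("40XL1".equals(e.getSQLState())) {
        // the statement waited more than 5 seconds for a lock
    }
}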
I presume I have a close-to-default HikariConfig with maximumPoolSize set to 5.
The problem I faced is that there are many attempts to connect to the database even after the first one fails. I mean, for instance, if the password I'm using to connect to Oracle is wrong and the connection fails, there are still further attempts to connect to the database, which lock the account as a result.
Question: What HikariCP setting is supposed to be used to limit the number of connection attempts to 1?
Thanks for any information!
### UPDATE
env.conf:
jdbc {
  test1 {
    datasourceClassName="oracle.jdbc.pool.OracleDataSource"
    dataSourceUrl=.....jdbc url
    dataSourceUser=USER
    dataSourcePassword=password
    setMaximumPoolSize = 5
    setJdbc4ConnectionTest = true
  }
}
The conf file is read by means of ConfigFactory, and a HikariConfig is created based on it (setDriverClassName, etc.).
Output of HikariConfig:
autoCommit.....................true
connectionTimeOut..............30000
idleTimeOut....................600000
initializationFailFast.........false
isolateInternalQueries.........false
jdbc4ConnectionTest............test
maxLifetime....................1800000
minimumIdle....................5
As explained at the end of https://github.com/brettwooldridge/HikariCP/issues/312, HikariCP will keep trying to acquire a connection. The acquireRetries parameter was removed deliberately, so the way forward is to configure the right username/password, since the DB only locks the account after authentication failures.
Here's an excerpt from the issue; HikariCP intends to retry forever:
Back to acquireRetries... Without a concept of acquireRetries, how
long does the dedicated thread continue to try to create a new
connection? Forever. The background creation thread will continue to
try to add a connection to the pool forever, or until one of three
conditions is met:
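One thing worth noting from the config dump above: initializationFailFast is false. Setting it to true makes pool construction fail on a bad first connection instead of starting the background retry loop; this only covers startup, not runtime. A sketch, assuming a HikariCP 2.x version that still has this setter (newer versions replace it with setInitializationFailTimeout(ms)):
// Sketch: fail pool construction on the first bad connection instead of retrying.
// imports: com.zaxxer.hikari.*
HikariConfig config = new HikariConfig();
config.setDataSourceClassName("oracle.jdbc.pool.OracleDataSource");
config.setUsername("USER");       // values from the env.conf above
config.setPassword("password");
config.setMaximumPoolSize(5);
config.setInitializationFailFast(true); // throw on startup if the first connection fails
HikariDataSource ds = new HikariDataSource(config); // PoolInitializationException on bad credentials
After a successful start, the housekeeping thread will still retry replacement connections in the background, so correct credentials remain the only real protection against account lockout.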