Handle HikariCP Oracle connection attempts - java

Assume I have a close-to-default HikariConfig with maximumPoolSize set to 5.
The problem I'm facing is that there are many attempts to connect to the database even after the first one fails. For instance, if the password I'm using to connect to Oracle is wrong and the connection fails, there are further attempts to connect, which lock the account as a result.
Question: which HikariCP setting can be used to limit the number of connection attempts to 1?
Thanks for any information!
### UPDATE
env.conf:
jdbc {
  test1 {
    datasourceClassName = "oracle.jdbc.pool.OracleDataSource"
    dataSourceUrl = .....jdbc url
    dataSourceUser = USER
    dataSourcePassword = password
    setMaximumPoolSize = 5
    setJdbc4ConnectionTest = true
  }
}
The conf file is read by means of ConfigFactory, and a HikariConfig is created from it (setDriverClassName etc.).
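A minimal sketch of that wiring, assuming Typesafe Config's ConfigFactory and the property names from the conf file above (the exact data-source property mapping is an assumption):

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import com.zaxxer.hikari.HikariConfig;

public class ConfLoader {
    public static HikariConfig fromConf() {
        // "env.conf" and the "jdbc.test1" path mirror the file shown above
        Config conf = ConfigFactory.load("env.conf").getConfig("jdbc.test1");
        HikariConfig hc = new HikariConfig();
        hc.setDataSourceClassName(conf.getString("datasourceClassName"));
        // OracleDataSource exposes setURL/setUser/setPassword; HikariCP
        // matches data-source properties to setters by name
        hc.addDataSourceProperty("url", conf.getString("dataSourceUrl"));
        hc.addDataSourceProperty("user", conf.getString("dataSourceUser"));
        hc.addDataSourceProperty("password", conf.getString("dataSourcePassword"));
        hc.setMaximumPoolSize(conf.getInt("setMaximumPoolSize"));
        return hc;
    }
}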
Output of HikariConfig:
autoCommit.....................true
connectionTimeOut..............30000
idleTimeOut....................600000
initializationFailFast.........false
isolateInternalQueries.........false
jdbc4ConnectionTest............true
maxLifetime....................1800000
minimumIdle....................5

https://github.com/brettwooldridge/HikariCP/issues/312 — as explained at the end of this issue, HikariCP will keep trying to acquire a connection; the acquireRetries parameter was removed deliberately. So the way forward is to configure the right username/password, since the DB only locks the account after authentication failures.
Here's an extract from the issue. HikariCP intends to retry forever:
Back to acquireRetries... Without a concept of acquireRetries, how long does the dedicated thread continue to try to create a new connection? Forever. The background creation thread will continue to try to add a connection to the pool forever, or until one of three conditions is met:
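Given that the pool retries forever by design, one workaround (a sketch of the idea, not a HikariCP setting) is to probe the credentials once with plain JDBC before building the pool, so a wrong password fails a single login attempt instead of being retried until the account locks. The setInitializationFailFast call reflects the older API visible in the config dump above:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class PoolStartup {
    public static HikariDataSource createPool(String url, String user, String password) throws SQLException {
        // Probe once outside the pool: a bad password fails exactly one
        // login attempt here instead of being retried by the pool's
        // connection-adder thread until Oracle locks the account.
        try (Connection probe = DriverManager.getConnection(url, user, password)) {
            // success: credentials are valid
        }
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(url);
        config.setUsername(user);
        config.setPassword(password);
        config.setMaximumPoolSize(5);
        // Older HikariCP versions fail pool construction immediately if the
        // first connection cannot be obtained (newer ones use initializationFailTimeout).
        config.setInitializationFailFast(true);
        return new HikariDataSource(config);
    }
}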

Related

JDBC Connections not released till end of thread

I have a WebSphere application server with an MSSQL database connection with 10 connections in the pool. When my worker thread runs, I have a process that runs the following:
Pseudo code:
for (String item : items) {
    Connection connection = null;
    Statement statement = null;
    try {
        connection = getConnection();
        statement = connection.createStatement();
        String sql = "exec someStoredProcedure '" + item + "';";
        statement.execute(sql);
    } finally {
        // real code includes null checks and try/catch for errors
        statement.close();
        connection.close();
    }
}
My problem is that if my loop is larger than 10, it will hang until it hits the timeout and errors with "ConnectionWaitTimeoutException: J2CA1010E: Connection not available; timed out waiting for 180 seconds".
I would think that closing the statements and connections would allow me to reuse the connections from the pool. The connections do get closed/collected after the thread finishes. I've updated my code to reuse the same connection through the for loop, and I can run the process rapid-fire over and over without problems, because each thread seems to clean up after itself. Any idea what's going on or how to resolve it? I'm worried that in the future I'll have a process that needs more than 10 connections over the course of its run.
The behavior that you describe sounds inconsistent with how the connection pool is supposed to work. However, there are a number of important details that impact connection reuse which are not apparent from your writeup.
You are either using sharable or unsharable connections. You can find that out by looking at the resource reference that you used for the DataSource lookup. For example, @Resource with shareable = false, or a deployment descriptor resource reference with <res-sharing-scope>Unshareable</res-sharing-scope>, is unsharable; if you set true for the former or Shareable for the latter, or omit the attribute, then the connection is sharable. If you didn't use a resource reference, connections will typically be sharable, unless you explicitly overrode that with advanced configuration.
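For illustration, an unsharable resource reference via annotation might look like this (the JNDI name jdbc/myDataSource is a placeholder):

import javax.annotation.Resource;
import javax.sql.DataSource;

public class WorkerBean {
    // Unsharable reference: each getConnection() returns an independent
    // handle that goes back to the pool on close() (outside a global tx).
    @Resource(name = "jdbc/myDataSource", shareable = false)
    private DataSource dataSource;
}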
If you are using an unsharable connection and the entire code block is not encompassed within a global transaction, then every connection.close will return the connection to the pool to become immediately reusable. If you are in a global transaction, then each connection will remain unavailable after close because there is outstanding work on it that still needs to be committed or rolled back.
If you are using a sharable connection and each of the connection requests match (same user/password or absence thereof for all), then you should keep getting the same underlying connection back and should not run out of connections. If however, the connection requests do not match, then the connections are kept around for the duration of the scope (could be a transaction or a request scope like servlet boundary) and you will keep getting a new connection with each request until you run out.
If the above isn't enough to figure out the issue, add more detail to your scenario indicating the sharability, how you request each connection, and the placement of transaction and request boundaries, so that a more detailed answer for your specific scenario can be given.

Using galera mariadb with jdbc

I've created a mariadb cluster and I'm trying to get a Java application to be able to failover to another host when one of them dies.
I've created an application that creates a connection with "jdbc:mysql:sequential://host1,host2,host3/database?socketTimeout=2000&autoReconnect=true". The application makes a query in a loop every second. If I kill the node where the application is currently executing the query (Statement.executeQuery()), I get a SQLException because of a timeout. I can catch the exception and re-execute the statement, and I see that the request is sent to another server, so failover works in that case. But I was expecting that executeQuery() would not throw an exception and would silently retry on another server automatically.
Am I wrong in assuming that I shouldn't have to handle the exception and explicitly retry the query? Is there something more I need to configure for that to happen?
It is dangerous to auto reconnect for the following reason. Let's say you have this code:
BEGIN;
SELECT ... FROM tbl WHERE ... FOR UPDATE;
(line 3)
UPDATE tbl ... WHERE ...;
COMMIT;
Now let's say the server crashes at (line 3). The transaction will be rolled back. In my fabricated example, that only involves releasing the lock on tbl.
Now let's say that some other connection succeeds in performing the same transaction on the same row while you are auto-reconnecting.
Now, with auto-reconnect, the first thread is oblivious that the first half of the transaction was rolled back and proceeds to do the UPDATE based on data that is now out of date.
You need to get an exception so that you can go back to the BEGIN so that you can be "transaction safe".
You need this anyway: with Galera, and no crashes, a similar thing could happen. Two threads perform that transaction on two different nodes at the same time... Each succeeds until it gets to the COMMIT, at which point the Galera magic happens and one of the COMMITs is told to fail. The 'right' response is to replay the entire transaction on the server that was chosen for failure.
Note that Galera, unlike non-Galera, requires checking for errors on COMMIT.
More Galera tips (aimed at devs and DBAs migrating from non-Galera)
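A rough sketch of what replaying the entire transaction might look like in JDBC; doTransaction and MAX_RETRIES are illustrative placeholders, not part of any driver API:

import java.sql.Connection;
import java.sql.SQLException;

public class GaleraRetry {
    private static final int MAX_RETRIES = 3; // illustrative limit

    // Re-runs the whole transaction if COMMIT is rejected, so no work
    // from a rolled-back first half is ever built upon.
    public static void runWithReplay(Connection con) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                con.setAutoCommit(false);  // BEGIN
                doTransaction(con);        // SELECT ... FOR UPDATE; UPDATE ...
                con.commit();              // Galera certification happens here
                return;
            } catch (SQLException e) {
                con.rollback();
                if (attempt >= MAX_RETRIES) {
                    throw e;               // give up, surface the error
                }
                // otherwise loop and replay the entire transaction
            }
        }
    }

    private static void doTransaction(Connection con) throws SQLException {
        // placeholder for the SELECT ... FOR UPDATE / UPDATE pair shown above
    }
}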
Failover doesn't mean that the application doesn't have to handle exceptions.
The driver will try to reconnect to another server when the connection is lost.
If the driver fails to reconnect to another server, a SQLNonTransientConnectionException will be thrown; pools will automatically discard those connections.
If the connection is recovered, there are some marginal cases where relaunching the query is safe: when the query is not in a transaction and the connection is currently in read-only mode (using Spring @Transactional(readOnly = true), for example). In those cases the MariaDB Java connector will relaunch the query automatically, no exception will be thrown, and failover is transparent.
The driver cannot re-execute the current query during a transaction.
Even without a transaction, if the query is an UPDATE command, the driver cannot know whether the last request was received by the database server and executed.
The driver will then throw an SQLException (with an SQLState beginning with "25" = INVALID_TRANSACTION_STATE), and it's up to the application to handle those cases.
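In application code, that could translate to checking the SQLState, roughly like this sketch (the helper name executeWithFailoverCheck is illustrative):

import java.sql.SQLException;
import java.sql.Statement;

public class FailoverAware {
    // Returns true if the statement must be verified/replayed by the caller
    // because failover left its outcome unknown.
    static boolean executeWithFailoverCheck(Statement st, String sql) throws SQLException {
        try {
            st.executeUpdate(sql);
            return false;
        } catch (SQLException e) {
            // SQLState beginning with "25" = INVALID_TRANSACTION_STATE:
            // the driver reconnected but cannot tell whether the server
            // executed the statement, so the application has to decide.
            if (e.getSQLState() != null && e.getSQLState().startsWith("25")) {
                return true;
            }
            throw e; // any other failure propagates unchanged
        }
    }
}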

Error: System Resource Exceeded

A console application executing under:
1). Multiple threads
2). Connection pooling (as the database connections could range from 5 to 30) against Microsoft Access, using DBCP.
While executing this application on my machine (I have not tested the database limit) it works fine. But whenever I deploy the same application on another machine, it generates an error.
I'm wondering why this is happening, as the only difference here is the machine; it works perfectly on my end.
I don't know much about connection pooling, but it seems I have implemented what I understood of it:
public class TestDatabases implements Runnable {
    public static Map<String, Connection> correctDatabases;

    @Override
    public void run() {
        // validating the databases using DBCP
        datasource.getConnection(); // obtaining the java.sql.Connection from the DataSource
        // if validated successfully, putting them in correctDatabases
    }
}
The above is run using an ExecutorService sized to the number of databases.
Finally, I'm putting the connections into a static collection of type Map<String, Connection> and making use of it throughout the application. In other words, I'm collecting the connection string along with its Connection in a Map.
In other parts of my application I'm simply dealing with multiple threads coming along with the connection URL. So, to perform any database operation I call:
Connection con = TestDatabases.correctDatabases.get(connectUrl);
On that machine, this application works fine for around ~5 databases, and the error is always generated when I fire a query using the above Connection (con) via stmt.executeQuery(query);
As I'm not able to reproduce this issue on my machine, it seems something is going wrong with the connection pooling, or I have not configured my application to deal with connection pooling correctly.
Just for your information, I do close the Connection in a finally block where my application terminates, and this application uses the Quartz Scheduler as well. For connection pooling, the following setUp method in the TestDatabases class is called:
public synchronized DataSource setUp() throws Exception {
    Class.forName(RestConnectionValidator.prop.getProperty("driverClass")).newInstance();
    log.debug("Class loaded.");
    connectionPool = new GenericObjectPool();
    log.debug("Connection pool made.");
    connectionPool.setMaxActive(100);
    ConnectionFactory cf = new DriverManagerConnectionFactory(
            RestConnectionValidator.prop.getProperty("connectionUrl") + new String(get().toString().trim()),
            "", "");
    PoolableConnectionFactory pcf =
            new PoolableConnectionFactory(cf, connectionPool, null, null, false, true);
    return new PoolingDataSource(connectionPool);
}
Following is the error I'm getting (on the other machine):
java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] System resource exceeded.
Following is the Database Path:
jdbc:odbc:DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\\DataSources\\PR01.mdb
Each of those databases is not very heavy (~5 to 15 MB total size).
So, I'm left with the following solutions:
1). Fix the connection pooling, or migrate to a newer pool such as c3p0, DBPool, or BoneCP.
2). Introduce a batch concept, in which I schedule my application for each group of 4 databases. That could be very expensive to deal with, as at any time another schedule may also collapse.
I'm pretty sure this is a Java-related error, but I can't fathom why.
I've just migrated to BoneCP, which solved my problem. I guess that in a multi-threaded environment DBCP was not handing out connections from the pool but was instead trying to hit the database again and again. Maybe I could have solved the DBCP issue, but migrating to BoneCP also brings a performance advantage.
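For reference, a BoneCP setup replacing the DBCP setUp() above might look roughly like this; the pool sizing values are illustrative assumptions, and RestConnectionValidator is the same properties holder used above:

import com.jolbox.bonecp.BoneCPDataSource;
import javax.sql.DataSource;

public class BoneCPSetup {
    public static DataSource setUp() throws Exception {
        Class.forName(RestConnectionValidator.prop.getProperty("driverClass"));
        BoneCPDataSource ds = new BoneCPDataSource();
        ds.setJdbcUrl(RestConnectionValidator.prop.getProperty("connectionUrl"));
        ds.setUsername("");
        ds.setPassword("");
        // sizing below is illustrative, not taken from the question
        ds.setPartitionCount(1);
        ds.setMinConnectionsPerPartition(5);
        ds.setMaxConnectionsPerPartition(30);
        return ds;
    }
}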

Java JDBC : establish a persistent connection which lasts days

I am currently working on a Java project which implements web scraping, and I am facing a weird issue so far.
Here is what I do:
Get a URL connection to a page of a website
Parse the HTML code to get some content (OpenData)
Add the content to my database
Move on to the next page and go back to step 1
This actually takes very long and can last for days, so I need to leave the script running. The problem is that sometimes it stops for no reason (no errors, no messages, no window close; it just literally stops, and I need to press one of my buttons to restart it). I have implemented a short piece of code which restarts the application from where it stopped. I believe it's a connection problem with the database, so I would like to know how I could fix it.
I use a static class which creates an instance of itself at the beginning of the application, and then I use static methods from this class to run my queries, like this for example:
ConnexionBDD.con.prepareStatement(query);
public static Connection loadDriver() {
    try {
        Class.forName(Driver);
        con = DriverManager.getConnection(ConnectionString, user, pwd);
    } catch (ClassNotFoundException e) {
        System.err.println("Class not found: Class.forName(...)");
    } catch (SQLException e) {
        System.err.println(e.getMessage());
    }
    return con;
}
I am not sure I am doing the right thing to make my connection last (in theory) forever, eventually closing it when iteration over my links has finished.
You're jumping the gun a bit here. There's no evidence that the database connection is actually the problem. Usually if you were having DB connection issues you'd be getting an exception from the connection when you try to perform operations on it, a timeout, etc.
You need to:
Add detailed logging to your application, so you can see what it's doing as it progresses, and what it's trying to do when it stops; and
Run it with -Xdebug and other suitable options for remote debugging, so you can attach a debugger to it when it stops and examine its state to see what it is doing at the time. Use the debugger user interface from NetBeans, Eclipse, or whatever you prefer to attach to the program when the logging indicates that it's stopped progressing.
For logging, you can use java.util.logging. See the javadoc and the logging overview docs.
Here's an example of how to do remote debugging with Eclipse. You'll be able to find similar guides for your chosen IDE. Java also has a command line debugger, but it's pretty painful.
You also need to check whether the program might be crashing or exiting, rather than just stopping work. You should capture any standard error output from the program and check the program's return code from the shell. Also look for hs_err_pid* files in the directory the program runs in, in case there's a JVM crash, though that should produce output on stderr as well.
You should also:
Set an application_name when you establish a connection to PostgreSQL, so you can easily see what your client is doing with the database. You can specify application_name as a JDBC connection parameter, or run a SET application_name = 'blah' statement after connecting.
When logging (or however you currently tell that your program is no longer progressing) indicates that the program has stopped working, examine pg_stat_activity in the server, looking at the entry/entries for your application. See if the connection is idle, idle in transaction, or running a statement, and what that statement is. If it's running a statement, query against pg_locks to see if it's blocked on an ungranted lock.
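For instance, the connection parameter route might look like this; ApplicationName is the parameter the PostgreSQL JDBC driver recognizes, and "scraper" is a placeholder name:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

public class PgConnect {
    public static Connection open(String url, String user, String pwd) throws SQLException {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", pwd);
        // shows up in pg_stat_activity.application_name on the server
        props.setProperty("ApplicationName", "scraper");
        Connection con = DriverManager.getConnection(url, props);
        // equivalent alternative: set it with a statement after connecting
        try (Statement st = con.createStatement()) {
            st.execute("SET application_name = 'scraper'");
        }
        return con;
    }
}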

Hibernate check connection to database in Thread ( every time period )

What is the best/optimal way to monitor the connection to a database?
I'm writing a Swing application. What I want it to do is check the connection with the database at regular intervals. I've tried something like this:
org.hibernate.Session session = null;
try {
    System.out.println("Check session!");
    session = HibernateUtil.getSessionFactory().openSession();
} catch (HibernateException ex) {
    // a connection problem would surface here
} finally {
    if (session != null) { // avoid a NullPointerException when openSession() failed
        session.close();
    }
}
But that doesn't work.
The second question coming to my mind is how this session closing will affect other queries.
Use a connection pool like c3p0 or DBCP. You can configure such a pool to monitor the connections it holds: before passing a connection to Hibernate, after receiving it back, or periodically. If a connection is broken, the pool will transparently close it, discard it, and open a new one, without you noticing.
Database connection pools are better suited to multi-user, database-heavy applications where several connections are open at the same time, but I don't think one is overkill here. Pools work just fine bound to a maximum of 1 connection.
UPDATE: every time you try to access the database, Hibernate will ask the DataSource (connection pool) for a connection. If no active connection is available (e.g. because the database is down), this will throw an exception. If you want to know in advance that the database is unavailable (even when the user is not doing anything), unfortunately you need a background thread checking the database once in a while.
However, merely opening a session might not be enough in some configurations. You'd better run some dummy, cheap query like SELECT 1 (in raw JDBC).
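A minimal sketch of such a background check, assuming a DataSource handed over by the pool (the 30-second interval is an arbitrary choice):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;

public class DbHealthCheck {
    public static void start(DataSource ds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try (Connection con = ds.getConnection();
                 Statement st = con.createStatement()) {
                st.execute("SELECT 1"); // cheap round trip to the database
            } catch (SQLException e) {
                // database unreachable: notify the UI, log, etc.
                System.err.println("DB check failed: " + e.getMessage());
            }
        }, 0, 30, TimeUnit.SECONDS);
    }
}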
