My application has multiple threads that need to access the database, and I am using org.apache.tomcat.jdbc.pool.DataSource as a JDBC connection pool.
In some cases users execute stored procedures that might affect the database session context/variables before executing another query to retrieve some data.
After a thread is finished, the connection is closed, but because of the way pooled connections work, the connection is not actually closed; it is returned to the connection pool for reuse. The problem is similar to this question.
The problem is that these variables usually affect the data retrieved from the database. So when another thread acquires a connection on which these variables/context were set and then queries the database, the retrieved data is affected by that context/those variables.
What is the best way to handle such an issue? I have outlined some solutions, but I am not sure which is best practice, nor how to implement all of them:
Execute a procedure/statement that resets the session/variables before releasing the connection back to the pool:
This solution could work, but the issue is that my application uses different databases. For now MySQL and Oracle are supported. For Oracle we have the RESET_PACKAGE procedure, but there is no equivalent for MySQL. Is there a way to do it in MySQL? Also, what happens when new databases are supported?
Is there a way to force or explicitly close the actual/physical connection instead of just returning it to the pool? Or is there a pool property that forces the pool to close the connection?
Does rollback to a savepoint revert db session variables / context?
Savepoint savepoint = connection.setSavepoint();
// execute some procedure that affects the session state
connection.rollback(savepoint);
connection.close();
Or is there any other way to force clearing the session, or to restart/close the actual connection?
My question is related to this question.
On MySQL you might perform the sequence:
BEGIN;
COMMIT RELEASE;
as your validationQuery. The RELEASE part disconnects the current session and creates a new one (cf. MySQL Documentation).
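Taking this route, the validation query is configured on the pool itself. A sketch of a Tomcat context.xml Resource entry, assuming the Tomcat JDBC pool; the JNDI name, URL, and credentials are placeholders:

```xml
<!-- testOnBorrow="true" makes the pool run validationQuery every time
     a connection is borrowed, so the session is reset on each checkout. -->
<Resource name="jdbc/MyDB" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="com.mysql.cj.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/mydb"
          username="user" password="secret"
          testOnBorrow="true"
          validationQuery="BEGIN; COMMIT RELEASE;"/>
```

Note that running multiple statements in one validation query may require driver support (for MySQL Connector/J, the allowMultiQueries URL option), so treat this as a starting point rather than a drop-in fix.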
Related
According to Lettuce, we don't need a connection pool; it uses a single thread-safe shared connection:
https://github.com/lettuce-io/lettuce-core/wiki/Connection-Pooling
But according to HikariCP, we need a pool of connections, preferably of size connections = ((core_count * 2) + effective_spindle_count):
https://github.com/brettwooldridge/HikariCP/wiki/About-Pool-Sizing
I am confused: why is pooling unnecessary in one case but required in the other?
HikariCP is a JDBC connection pool. Connection pooling is required because database operations can take a long time, and while a connection is busy you cannot run another query over it, so it is better to have multiple open connections ready. Read more here
On the other hand, Redis is not a traditional database. Redis is a key-value in-memory store/cache used for fast access. There you do not need a connection pool because each operation is simple and fast: you ask Redis whether "key1" exists, and you either get the data or you don't. With a database, on the other hand, you run long-running SQL and sometimes stored procedures, depending on complexity; it is not a one-step operation.
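The HikariCP sizing formula quoted in the question is simple arithmetic and easy to sanity-check; a minimal sketch (the class and method names are mine, not part of HikariCP):

```java
public class PoolSizing {

    // HikariCP's suggested starting point for pool size:
    // connections = (core_count * 2) + effective_spindle_count
    public static int suggestedPoolSize(int coreCount, int effectiveSpindleCount) {
        return (coreCount * 2) + effectiveSpindleCount;
    }

    public static void main(String[] args) {
        // effective_spindle_count of 1 assumes a single local disk
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println(suggestedPoolSize(cores, 1));
    }
}
```

The formula is a starting point for tuning, not a hard rule; HikariCP's wiki itself recommends benchmarking from there.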
In Spring, suppose method A has the @Transactional annotation but doesn't call a DAO or execute any SQL query.
Does method A then never take a DB connection?
The @Transactional annotation doesn't cause a database connection to be established, because it wouldn't know which database to connect to.
Remember, a program might connect to more than one database.
When is a DB connection opened in Java?
That is really a broad question.
The answer depends on how the application code configured the connection to the DB.
Generally for "real" applications, you don't open a connection for each client request.
It would be inefficient.
Instead, when the application starts, a component called a connection pool creates a specific number of connections; that number can increase or decrease according to actual client demand. These connections are kept in memory.
Then, when client code requests a connection, the pool provides one.
A database transaction, represented in Spring by @Transactional, is a different thing.
It symbolizes a unit of work performed within a database management system.
Regarding:
"let method A with @Transactional annotation don't call DAO and execute any SQL query. then, method A never take a db connection?"
Even without @Transactional, a query needs a connection to be executed.
If code does not perform any query, there is little risk that it borrows a connection from the pool.
So the answer is: method A will not take a DB connection.
Assuming that you are using Spring Boot with Spring Data JPA:
With the default configuration (spring.jpa.open-in-view set to true), each request is bound to a Hibernate Session object, and database access goes through that object.
If database access happens, the session object borrows a database connection from the connection pool, which was initialized during the application's startup phase; if no database access happens, it does nothing.
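For reference, the setting mentioned above is an ordinary Spring Boot property; a minimal application.properties fragment:

```properties
# Defaults to true: a Hibernate Session is bound to each web request.
# Setting it to false releases the session (and its connection) as soon
# as the transaction completes, instead of holding it for view rendering.
spring.jpa.open-in-view=false
```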
I have a severe problem with the database connection in my web application. Since I use a single database connection for the whole application, obtained from a singleton Database class, concurrent DB operations (two users) cause the database to roll back transactions.
This is my static method used:
All threads/servlets call static Database.doSomething(...) methods, which in turn call the method below.
private static /* synchronized */ Connection getConnection(final boolean autoCommit) throws SQLException {
    if (con == null) {
        con = new MyRegistrationBean().getConnection();
    }
    con.setAutoCommit(true); //TODO
    return con;
}
What's the recommended way to manage this DB connection (or connections), so that I don't run into the same problem?
Keeping a Connection open forever is a very bad idea. It doesn't have an endless lifetime, your application may crash whenever the DB times out the connection and closes it. Best practice is to acquire and close Connection, Statement and ResultSet in the shortest possible scope to avoid resource leaks and potential application crashes caused by the leaks and timeouts.
Since connecting to the DB is an expensive task, you should consider using a connection pool to improve connection performance. A decent application server / servlet container usually already provides a connection pool feature in the form of a JNDI DataSource. Consult its documentation for details on how to create it. In the case of, for example, Tomcat, you can find it here.
Even when using a connection pool, you still have to write proper JDBC code: acquire and close all the resources in the shortest possible scope. The connection pool will in turn take care of actually closing the connection or just releasing it back to the pool for further reuse.
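The "shortest possible scope" advice maps directly onto try-with-resources; a minimal sketch, assuming a DataSource obtained from JNDI or a pool (the table name and query are made up for illustration):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ShortScopeJdbc {

    // Acquire Connection, PreparedStatement and ResultSet in the narrowest
    // scope; try-with-resources closes all three in reverse order even when
    // an exception is thrown, which hands a pooled connection back to the pool.
    public static int countUsers(DataSource dataSource) throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM users");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}
```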
You may get some more insights out of this article on how to do the JDBC basics the proper way. As a completely different alternative, learn EJB and JPA. It will abstract away all the JDBC boilerplate for you into one-liners.
Hope this helps.
See also:
Is it safe to use a static java.sql.Connection instance in a multithreaded system?
Am I Using JDBC Connection Pooling?
How should I connect to JDBC database / datasource in a servlet based application?
When is it necessary or convenient to use Spring or EJB3 or all of them together?
I don't have much experience with PostgreSQL, but all the web applications I've worked on have used a single connection per set of actions on a page, closing and disposing of it when finished.
This allows the server to pool connections and stops problems such as the one that you are experiencing.
The singleton should be the JNDI pool DataSource itself; the Database class with getConnection(), query methods, et al. should NOT be a singleton, but can be static if you prefer.
In this way the pool exists indefinitely, available to all users, while query blocks use dataSource.getConnection() to draw a connection from the pool, execute the query, and then close the statement, result set, and connection (to return it to the pool).
Also, JNDI lookup is quite expensive, so it makes sense to use a singleton in this case.
I'm creating a server-side Java task that executes the same SQL UPDATE every 60 seconds, forever, so it is an ideal case for a java.sql.PreparedStatement.
I would rather reconnect to the database every 60 seconds than assume that a single connection will still be working months into the future. But if I have to regenerate a new PreparedStatement each time I open a new connection, that seems to defeat the purpose.
My question is: since the PreparedStatement is created from a java.sql.Connection, does that mean the connection must be maintained in order to use the PreparedStatement efficiently, or is the prepared statement held in the database and not recompiled with each new connection? I'm using PostgreSQL at present, but may not always.
I suppose I could keep the connection open and reopen it only when an exception occurs while attempting an update.
Use a database connection pool. It will keep connections alive in an idle state even after you close them, and it also improves your application's performance.
Regardless of which connection created the PreparedStatement, the SQL statement will be cached by the database engine, and there won't be any problems recreating the PreparedStatement object.
Set your connection timeout to the SQL execution time plus a few minutes.
Now, you can take two different approaches here:
Check whether the connection is valid before executing the update; if false is returned, open a new connection:
if (connection == null || !connection.isValid(0)) {
    // open new connection and prepared statement
}
Write a stored procedure in the DB and call it, passing the necessary params. This is an alternative approach.
Regarding your approach of closing and opening the DB connection every 60 seconds for the same prepared statement: it does not sound like a good idea.
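The isValid check from the answer above can be factored into a small helper so the task reconnects only when needed; a sketch, where using a Supplier as the connection factory is my own convention, not a JDBC API:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.function.Supplier;

public class ConnectionKeeper {

    // Returns the current connection if it is still usable; otherwise asks
    // the supplied factory for a fresh one. isValid(seconds) pings the
    // server, so a connection dropped between runs is detected here.
    public static Connection ensureValid(Connection current,
                                         Supplier<Connection> opener) throws SQLException {
        if (current == null || !current.isValid(2)) {
            return opener.get();
        }
        return current;
    }
}
```

After a reconnect, the PreparedStatement must be recreated from the new Connection, since a statement object is tied to the connection that prepared it.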
In JDBC, can we say the transaction begins as soon as we get the connection and finishes when we close the connection? Is this right? If yes, can we say that for different requests sharing the same connection, even uncommitted transactions will be visible to all requests?
@BalusC - This is not correct, really. By default autocommit is set to true, which means that a transaction begins before any JDBC operation and finishes RIGHT AFTER that single operation - not when the connection is closed.
But you are right that sharing a connection is indeed not good. If you want to multi-thread your DB access, it's best to use a thread pool (look for ThreadPoolExecutor in java.util.concurrent) and give each thread its own connection. A ConnectionPool is also a good option, but I would rather limit it through the ThreadPool - this way there is never a thread waiting for a connection from the pool.
That's right; that's the default behaviour. You can take over this control by setting auto-commit to false with connection.setAutoCommit(false) after retrieving the connection, and committing the transaction with connection.commit() after doing all the queries.
However, sharing a connection between different requests (threads) is in itself already bad design; that way, your application is not thread-safe. You do not want to share the same connection among different threads. If all you want is to eliminate the cost of connecting to the database, you should consider using a connection pool.
The first rule when you access the database: every non-transactional operation should:
1. Open a connection (if you have a connection pool, get a connection from the pool).
2. Create and execute the statement.
3. If it is a read query, map the result set.
4. Close the result set.
5. Close the statement.
6. Close the connection.
If you want your operations to run in a transaction, consider this approach:
Operation 1:
1. Get the shared connection.
2. Create/execute the statement.
3. If it is a read query, map the result set.
4. Close the result set.
5. Close the statement.
Operation 2:
Same as operation 1.
and the transaction:
public void updateInTransaction() throws SQLException {
    Connection conn = pool.getConnection(); // or you can create a new connection
    try {
        conn.setAutoCommit(false);
        operation1(conn);
        operation2(conn);
        conn.commit();
    } finally {
        conn.close(); // returns the connection to the pool
    }
}
This is just basics for small applications.
If you are developing a bigger application, you should consider using a framework such as JdbcTemplate from Spring.