I am curious to know: if I create a SQL Statement for an SQLite database file in Java as
public Statement getStatement() throws SQLException
{
    if (connection == null || connection.isClosed())
    {
        connection = DriverManager.getConnection("jdbc:sqlite:" + filePath);
    }
    return connection.createStatement(); // connection is a private field of type java.sql.Connection
}
The Statement returned by this method is used for executing INSERT, SELECT, and UPDATE SQL in different scenarios, and multiple threads or functions will be inserting into, updating, or selecting from the database.
Now, if I do not close the statements, are there chances of memory leaks?
I do close all the ResultSet objects obtained by executing SELECT SQL.
I know it is good practice to close statements, but what are the adverse effects if I do not do it?
SQLite enforces sequential write-access to the database (one process at a time). This makes it vital that you close the database connection when you have completed any INSERT or UPDATE operations. If you don't, you might receive "DatabaseObjectNotClosed" exceptions when your script next attempts to write to it. Not to mention memory leaks, and possible performance decrease.
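One way to guarantee the close happens is to scope each Statement and ResultSet to a single operation with try-with-resources. This is a sketch, not the asker's code: it assumes the Xerial sqlite-jdbc driver, and the `buildUrl`/`countRows` helpers and table name are hypothetical.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SqliteExample {

    // Hypothetical helper: builds the SQLite JDBC URL from a file path.
    static String buildUrl(String filePath) {
        return "jdbc:sqlite:" + filePath;
    }

    // Instead of handing out a long-lived Statement, open and close one per
    // operation; try-with-resources releases everything even on exceptions.
    static int countRows(String filePath, String table) throws SQLException {
        try (Connection conn = DriverManager.getConnection(buildUrl(filePath));
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM " + table)) {
            return rs.next() ? rs.getInt(1) : 0;
        } // rs, stmt, and conn are all closed here, in reverse order
    }
}
```

For SQLite in particular, closing the connection promptly also releases the write lock for other processes.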
PreparedStatement ps = null;

public void executeQueries() {
    try {
        ps = conn.prepareStatement(Query1);
        // Execute Query1 here and do the processing.
        ps = conn.prepareStatement(Query2);
        // Execute Query2 here and do the processing.
        // ... more queries
    } catch (SQLException e) {
        // handle the exception
    } finally {
        try {
            if (ps != null) {
                ps.close(); // At this point, would the caching of queries in the DB be lost?
            }
        } catch (SQLException ignore) {
        }
    }
}
In my application, I call the method executeQueries() frequently.
My question is: if I close the PreparedStatement in the finally block inside this frequently-called method, would the database system discard the cached information? If yes, can I make a global PreparedStatement for the entire application, as there are many Java classes in my application that query the database?
Thank you!
Update: The question has been marked duplicate, but the linked thread does not answer my question at all. AFAIK, the database system stores executed queries in a cache, along with their execution plans. This is where PreparedStatement performs better than Statement. However, I am not sure whether the information related to the query is removed once the PreparedStatement is closed.
Specifically with regard to MySQL, according to
8.10.3 Caching of Prepared Statements and Stored Programs
The server maintains caches for prepared statements and stored programs on a per-session basis. Statements cached for one session are not accessible to other sessions. When a session ends, the server discards any statements cached for it.
So closing a PreparedStatement would not remove the statement(s) from the cache, but closing the Connection presumably would.
... unless the application uses a connection pool, in which case closing the Connection may not necessarily end the database session; it may keep the session open and just return the connection to the pool.
Then there's also the question of whether the statements are actually being PREPAREd on the server. That is controlled by the useServerPrepStmts connection string attribute. IIRC, by default, server-side prepared statements are not enabled.
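With Connector/J, server-side preparation and the driver's own statement cache are switched on via connection-string properties. A sketch (the base URL is hypothetical; the property names are real Connector/J options):

```java
public class MysqlUrl {
    // Appends Connector/J properties enabling server-side prepared
    // statements and the driver's client-side PreparedStatement cache.
    static String withServerPrepStmts(String baseUrl) {
        return baseUrl
                + "?useServerPrepStmts=true"  // actually PREPARE on the server
                + "&cachePrepStmts=true"      // let the driver cache PreparedStatements
                + "&prepStmtCacheSize=250";   // cache capacity, per connection
    }

    public static void main(String[] args) {
        System.out.println(withServerPrepStmts("jdbc:mysql://localhost/test"));
    }
}
```

With `cachePrepStmts=true`, closing a PreparedStatement may just return it to the driver's pool rather than deallocating it on the server.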
I know that, when a prepared statement is used the first time, the JDBC driver keeps the compiled prepared statement somewhere so that the next time it can be accessed more efficiently.
Now, suppose I have this situation:
public class MyDao {
    public void doQuery() throws SQLException {
        try (PreparedStatement stmt = connection.prepareStatement(MY_STMT)) {
            // execute stmt and process the results here
        }
    }
}
Will both of the following snippets keep the compiled prepared statement in memory?
Snippet 1:
MyDao dao = new MyDao();
dao.doQuery(); //first one, expensive
dao.doQuery(); //second one, less expensive as it has been already compiled
Snippet 2:
MyDao dao = new MyDao();
dao.doQuery(); //first one, expensive
MyDao dao2 = new MyDao();
dao2.doQuery(); //will it be expensive or less expensive?
I am afraid that, by creating a new DAO object, the driver will see that prepared statement as a new one and compile it again instead of reusing the cached version.
And, if that's not the case, is there any situation in which the driver will "forget" the compiled statement and have to compile it again?
Thanks
The most basic scenario for prepared statement reuse is that your code keeps the PreparedStatement open and reuses it. Your example code does not fit this criterion because you close the prepared statement. On the other hand, keeping a prepared statement open across multiple method invocations is usually not a good plan because of potential concurrency problems (e.g. if multiple threads use the same DAO, you could end up executing weird combinations of parameter values from different threads).
Some JDBC drivers have an (optional) cache (pool) of prepared statements internally for reuse, but that reuse will only happen if an attempt is made to prepare the same statement text again on the same physical connection. Check the documentation of your driver.
On a separate level, it is possible that the database system will cache the execution plan for a prepared statement, and it can (will) reuse that if the same statement text is prepared again (even for different connections).
You're correct: it will be compiled again. A PreparedStatement is only reused if you actually use the statement object itself multiple times (i.e., you call executeQuery on it multiple times).
However, I wouldn't worry too much about the cost of compiling the statement. If your query takes more than a few milliseconds, the cost of compiling will be insignificant. The overhead of compiling statements only becomes apparent when doing thousands of operations per second.
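The reuse described above can be sketched as follows: one PreparedStatement, prepared once, executed once per parameter value. The table and column names are assumptions for illustration.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

public class PriceLoader {
    // Assumed schema, for illustration only.
    static final String SQL = "SELECT price FROM items WHERE id = ?";

    // One PreparedStatement, many executions: the statement is compiled once
    // and only the bound parameter changes between executeQuery() calls.
    static void printPrices(Connection conn, List<Integer> ids) throws SQLException {
        try (PreparedStatement stmt = conn.prepareStatement(SQL)) {
            for (int id : ids) {
                stmt.setInt(1, id);
                try (ResultSet rs = stmt.executeQuery()) { // each ResultSet still closed per iteration
                    if (rs.next()) {
                        System.out.println(id + " -> " + rs.getBigDecimal(1));
                    }
                }
            }
        }
    }
}
```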
Do a benchmark. It is the best way to get certainty about the performance difference. The statement is not necessarily recompiled server-side every time: depending on your RDBMS, it may cache previously compiled statements. To maximize the cache-hit probability, always submit exactly the same parameterized SQL text, and do it over the same connection.
I am working on a program that allows multiple users to access a DB (MySQL), and at various times I'm getting an SQLException: Lock wait timeout exceeded.
The connection is created using:
conn = DriverManager.getConnection(connString, username, password);
conn.setAutoCommit(false);
and the calls all go through this bit of code:
try {
saveRecordInternal(record);
conn.commit();
} catch (Exception ex) {
conn.rollback();
throw ex;
}
Where saveRecordInternal has some internal logic, saving the given record. Somewhere along the way is the method which I suspect is the problem:
private long getNextIndex() throws Exception {
String query = "SELECT max(IDX) FROM MyTable FOR UPDATE";
PreparedStatement stmt = conn.prepareStatement(query);
ResultSet rs = stmt.executeQuery();
if (rs.next()) {
return (rs.getLong("IDX") + 1);
} else {
return 1;
}
}
This method is called by saveRecordInternal at some point during its operation, if needed.
For reasons that are currently beyond my control I cannot use auto-increment index, and anyway the index to-be-inserted is needed for some internal-program logic.
I would assume having either conn.commit() or conn.rollback() called would suffice to release the lock, but apparently it's not. So my question is - Should I use stmt.close() or rs.close() inside getNextIndex? Would that release the lock before the transaction is either committed or rolled back, or would it simply ensure the lock is indeed released when calling conn.commit() or conn.rollback()?
Is there anything else I'm missing / doing entirely wrong?
Edit: At the time the lock occurs all connected clients seem to be responsive, with no queries currently under-way, but closing all connected clients does resolve the issue. This leads me to think the lock is somehow preserved even though the transaction (supposedly?) ends, either by committing or rolling back.
Even though not closing a Statement or ResultSet is a bad idea, that method does not seem responsible for the error you are receiving. getNextIndex() creates a local Statement and ResultSet but never closes them. Close them right there, or create the Statement and ResultSet objects in saveRecordInternal() and pass them as parameters, or better, create them at your starting point and reuse them. Finally, close them when they are no longer needed (in the following order: ResultSet, Statement, Connection).
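A sketch of getNextIndex() with the statement and result set closed via try-with-resources (the wasNull handling is an addition: MAX(IDX) returns NULL on an empty table, which the original code would silently read as 0):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class IndexDao {

    // Pure helper: next index given the current maximum (null when the table is empty).
    static long nextIndex(Long currentMax) {
        return currentMax == null ? 1L : currentMax + 1;
    }

    static long getNextIndex(Connection conn) throws SQLException {
        String query = "SELECT max(IDX) FROM MyTable FOR UPDATE";
        try (PreparedStatement stmt = conn.prepareStatement(query);
             ResultSet rs = stmt.executeQuery()) {
            if (rs.next()) {
                long max = rs.getLong(1);
                return nextIndex(rs.wasNull() ? null : max);
            }
            return 1L;
        } // rs and stmt are closed here; the row lock itself is still held
          // until conn.commit() or conn.rollback()
    }
}
```

Note that closing the statement does not release the FOR UPDATE lock; only ending the transaction does.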
The error simply means that a lock was held on some DB object (connection, table, row, etc.) while another thread/process needed it at the same time, had to wait because it was already locked, and the wait timed out after taking longer than expected.
Refer to "How to avoid Lock wait timeout exceeded exception?" to learn more about this issue.
All in all, this is an environment-specific issue and needs to be debugged on your machine with extensive logging turned on.
Hope it helps!
From the statements above I don't see any locks that remain open!
In general MySql should release the locks whenever a commit or rollback is called, or when the connection is closed.
In your case
SELECT max(IDX) FROM MyTable FOR UPDATE
would result in locking the whole table, but I assume that this is the expected logic! You lock the table until the new row is inserted and then release it to let the others insert.
I would test with:
SELECT IDX FROM MyTable FOR UPDATE Order by IDX Desc LIMIT 1
to make sure that the lock remains open even when locking a single row.
If that is not the case, it might be a lock timeout due to a very large table.
So, what I think is happening here: your query is trying to execute against some table, but that table is locked by another process. Until the older lock is released from the table, your query waits to execute, and if the lock is not released in time, you get the lock wait timeout exception.
You can also take a look on table level locks and row level locks.
In brief: a table-level lock locks the whole table, and while the lock is held you won't be able to execute any other query on the same table,
while
a row-level lock locks only a specific row, so queries against the table's other rows can still execute.
You can also check whether some query has been running against the table for a long time, preventing yours from executing and causing the exception. How to check this varies by database, but you can look up the query for listing open connections or long-running queries on your specific database.
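On MySQL specifically, open InnoDB transactions and the query each one is running can be listed from information_schema.innodb_trx. A sketch (MySQL-only; other databases expose different diagnostic views):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LockInspector {
    // MySQL diagnostics: currently open InnoDB transactions and their queries.
    static final String OPEN_TRX =
            "SELECT trx_id, trx_started, trx_query FROM information_schema.innodb_trx";

    static void dump(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(OPEN_TRX)) {
            while (rs.next()) {
                System.out.println(rs.getString("trx_id") + " started "
                        + rs.getString("trx_started") + ": " + rs.getString("trx_query"));
            }
        }
    }
}
```

Running this while the timeout is occurring shows which transaction is still open and holding the lock; `SHOW FULL PROCESSLIST` gives a per-connection view.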
I'm creating a server-side Java task that executes the same SQL UPDATE every 60 seconds, forever, so it is ideal for using a java.sql.PreparedStatement.
I would rather re-connect to the database every 60-seconds than assume that a single connection will still be working months into the future. But if I have to re-generate a new PreparedStatement each time I open a new connection, it seems like it is defeating the purpose.
My question is: since the PreparedStatement is created from a java.sql.Connection, does the connection have to be kept open in order to use the PreparedStatement efficiently, or is the prepared statement held in the database and not recompiled with each new connection? I'm using PostgreSQL at present, but may not always.
I suppose I could keep the connection open and then re-open only when an exception occurs while attempting an update.
Use a database connection pool. A pool keeps connections alive (idle) even after you "close" them, which also improves your application's performance.
Regardless of which connection created the PreparedStatement, the SQL statement can be cached by the database engine, so there shouldn't be any problem with recreating the PreparedStatement object.
Set your connection timeout to the SQL execution time+few minutes.
Now, you can take two different approaches here:
Check before executing the update whether the connection is still valid; if false is returned, open a new connection:
if (connection == null || !connection.isValid(0)) {
    // open a new connection and prepared statement
}
Write a stored procedure in the DB and call it, passing the necessary params; this is an alternate approach.
Regarding your approach of closing and opening the DB connection every 60 seconds for the same prepared statement: it does not sound like a good idea.
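The validity check from the first approach can be combined with the 60-second schedule like this. The URL, table, and column names are assumptions, and the sketch re-prepares the statement only after an actual reconnect:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PeriodicUpdater {
    // Hypothetical URL and SQL, for illustration.
    static final String URL = "jdbc:postgresql://localhost/mydb";
    static final String SQL = "UPDATE heartbeat SET last_seen = now() WHERE id = ?";

    private Connection conn;
    private PreparedStatement stmt;

    // A (re)connect is needed when there is no connection yet,
    // the driver reports it invalid, or the validity check itself fails.
    static boolean needsReconnect(Connection c) {
        try {
            return c == null || !c.isValid(2); // 2-second validation timeout
        } catch (SQLException e) {
            return true;
        }
    }

    void runOnce() throws SQLException {
        if (needsReconnect(conn)) {
            conn = DriverManager.getConnection(URL);
            stmt = conn.prepareStatement(SQL); // re-prepare only after reconnecting
        }
        stmt.setInt(1, 1);
        stmt.executeUpdate();
    }

    void start() {
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
            try {
                runOnce();
            } catch (SQLException e) {
                conn = null; // force a reconnect on the next tick
            }
        }, 0, 60, TimeUnit.SECONDS);
    }
}
```

A connection pool gives you the same keep-alive-and-validate behavior without hand-rolling it.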
In the tutorial "Using Prepared Statements" it states that they should always be closed. Suppose I have a function
getPrice() {
}
that I expect to be called multiple times per second. Should this method open and close the PreparedStatement on every single call? That seems like a lot of overhead.
First of all, a PreparedStatement is never "opened"; it is just a prepared statement that gets executed. The statement is sent to the RDBMS, which executes the SQL compiled into the PreparedStatement. The connection should be open for the duration of the SQL querying and closed when no further RDBMS calls are needed.
You can use as many Statement/PreparedStatement objects as you require, provided that you close their ResultSet and the PreparedStatement itself once you are done with them, and then close the RDBMS connection.
Should this method be opening and closing the PreparedStatement with every single method call?
If you are creating the PreparedStatement object within the method, then you must close it, once you are done with it. You may reuse the PreparedStatement object for multiple executions, but once you are done with it, you must close it.
This is because, although all Statement objects (including PreparedStatements) are supposed to be closed on invoking Connection.close(), it is rarely the case. In certain JDBC drivers, especially that of Oracle, the driver will be unable to close the connection if the connection has unclosed ResultSet and Statement objects. This would mean that, on these drivers:
You should never lose a reference to a PreparedStatement object. If you do, then the connection will not be closed, until garbage collection occurs. If you are reusing PreparedStatement instances for different SQL statements, it is easy to forget this.
You should close the PreparedStatement once you no longer need it. Only then can the Connection.close() actually tear down the physical connection.
As the example in the tutorial shows you should close it after all your queries have been performed.
Once the statement is closed, the RDBMS may release all resources associated with it. To use it further, you would have to re-prepare the very same statement.
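A middle ground for a hot method like getPrice() is to prepare once per DAO instance and close the statement together with the DAO. A sketch under an assumed schema; note that a PreparedStatement shared this way must not be used from multiple threads at once:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Prepare once, execute on every getPrice() call, close with the DAO.
public class PriceDao implements AutoCloseable {
    static final String SQL = "SELECT price FROM products WHERE id = ?"; // assumed schema

    private final PreparedStatement stmt;

    public PriceDao(Connection conn) throws SQLException {
        this.stmt = conn.prepareStatement(SQL); // compiled once, not per call
    }

    public double getPrice(int id) throws SQLException {
        stmt.setInt(1, id);
        try (ResultSet rs = stmt.executeQuery()) { // the ResultSet is still closed per call
            return rs.next() ? rs.getDouble(1) : 0.0;
        }
    }

    @Override
    public void close() throws SQLException {
        stmt.close(); // release driver/server resources when the DAO is done
    }
}
```

This avoids per-call prepare overhead while still guaranteeing the statement is eventually closed.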
I think that after every database interaction, every component such as the Statement and ResultSet must be closed, except for the Connection if you intend to perform more operations.
And there is no need to worry if you are creating the prepared statement again and again, because as you will be using the same statement text each time, there won't be any performance issue.
Yes, there are no issues with creating the prepared statement n number of times, because you will be using the same statement in all the places. There is no performance concern here.
Thanks