There is something I don't understand about java.sql.Connection.commit().
I am using Derby (Java DB) as my database server.
When I call setAutoCommit(false), I expect my changes not to be committed until I explicitly call the commit() method.
But in fact, they are still committed even if I never call commit().
When I run a SELECT * on my table to print its contents, I can see that the rows have been added even though I didn't explicitly commit anything.
Could someone give me an explanation, please?
con.setAutoCommit(false);
PreparedStatement updateHair = null;
PreparedStatement addMan = null;
try {
    String updateString =
        "update PERSONNE " +
        "set haircolor = 'RED' where haircolor = 'SHAVE'";
    String updateStatement =
        "insert into personne values " +
        "(3,'MICHEL','SHAVE')";
    addMan = con.prepareStatement(updateStatement);
    addMan.executeUpdate();
    updateHair = con.prepareStatement(updateString);
    updateHair.executeUpdate();
} catch (SQLException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
Auto-commit means that each individual SQL statement is treated as a transaction and is automatically committed right after it is executed. The default is for a SQL statement to be committed when it is completed, not when it is executed. A statement is completed when all of its result sets and update counts have been retrieved. In almost all cases, however, a statement is completed, and therefore committed, right after it is executed.
The way to allow two or more statements to be grouped into a transaction is to disable the auto-commit mode.
con.setAutoCommit(false);
When the auto-commit mode is disabled, no SQL statements are committed until you call the method commit explicitly. All statements executed after the previous call to the method commit are included in the current transaction and committed together as a unit.
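For example, here is a minimal sketch of grouping the two statements from the question into one transaction (assuming con is the same open Connection as above and the surrounding method declares throws SQLException):
con.setAutoCommit(false);
try (PreparedStatement addMan = con.prepareStatement(
         "insert into personne values (3, 'MICHEL', 'SHAVE')");
     PreparedStatement updateHair = con.prepareStatement(
         "update PERSONNE set haircolor = 'RED' where haircolor = 'SHAVE'")) {
    addMan.executeUpdate();
    updateHair.executeUpdate();
    con.commit();    // both changes are committed together as one unit
} catch (SQLException e) {
    con.rollback();  // neither change is applied
    throw e;
}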
-- EDIT_1
Your updates may be getting committed because you're closing your Connection without calling rollback().
If a Connection is closed without an explicit commit or rollback, the behaviour depends on the database.
It is strongly recommended that an application explicitly commits or
rolls back an active transaction prior to calling the close method. If
the close method is called and there is an active transaction, the
results are implementation-defined.
Connection.close()
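To keep the outcome independent of such implementation-defined behaviour, here is a minimal sketch of the recommended pattern (the jdbc:derby URL is only illustrative):
Connection con = DriverManager.getConnection("jdbc:derby:myDB"); // illustrative URL
con.setAutoCommit(false);
boolean committed = false;
try {
    // ... prepare and execute statements ...
    con.commit();
    committed = true;
} finally {
    if (!committed) {
        con.rollback(); // make the rollback explicit instead of relying on close()
    }
    con.close();
}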
Related
I'm having a little problem with transactions using JDBC.
I want to start an IMMEDIATE transaction which in pure SQL is:
BEGIN IMMEDIATE;
With SQLite over JDBC you cannot do this directly. Issuing BEGIN IMMEDIATE from a statement while autocommit is enabled doesn't give you a usable transaction: calling commit() on the connection afterwards results in an "autocommit is enabled" error.
db = DriverManager.getConnection("jdbc:sqlite:sqlite.db");
// start a transaction using an sql query...
db.createStatement().execute("BEGIN IMMEDIATE");
// create another statement because this is running from another method...
stmt = db.createStatement();
stmt.executeUpdate("UPDATE table SET column='value' WHERE id=1");
// this will cause an error(exception): AUTOCOMMIT IS ENABLED.
db.commit();
The code above will throw an AUTOCOMMIT IS ENABLED exception.
However, there is also a problem when disabling autocommit, because doing so already starts a transaction by itself. Consider the code below:
db = DriverManager.getConnection("jdbc:sqlite:ez-inventory.db");
// creating the statement before or after setAutoCommit still produces the same exception.
db.setAutoCommit(false);
db.createStatement().execute("BEGIN IMMEDIATE");
This code will throw another exception:
[SQLITE_ERROR] SQL error or missing database (cannot start a
transaction within a transaction)
There is a setTransactionIsolation method on the Connection, but it is not for transaction locking; it is for isolation levels. I need to start a transaction using one of the SQLite transaction modes: DEFERRED, IMMEDIATE, or EXCLUSIVE.
Is this possible with SQLite JDBC?
OK, I got it! You should create a Properties object with a transaction_mode key set to the desired transaction mode, and pass that Properties object as a parameter when creating your new SQL Connection instance.
import java.sql.*; // <-- bad practice.. just too lazy to put the needed imports one by one for this example
import java.util.Properties;

public void immediate_transaction_example() throws SQLException {
    // create a Properties object with a transaction_mode value
    Properties sqlprop = new Properties();
    sqlprop.put("transaction_mode", "IMMEDIATE"); // <-- can be DEFERRED, IMMEDIATE, or EXCLUSIVE
    Connection db = DriverManager.getConnection("jdbc:sqlite:sqlite.db", sqlprop); // <-- pass the properties to the new connection instance
    db.setAutoCommit(false); // <-- this will automatically begin the transaction with the specified transaction mode...
    // other new transaction attempts with immediate transaction mode will be blocked until the connection is closed.
    try {
        // proceed with the transaction here...
        db.createStatement().execute("INSERT INTO table (id, value) VALUES (1, 'myvalue')");
        db.createStatement().execute("INSERT INTO table (id, value) VALUES (2, 'myvalue')");
        // no errors, proceed
        db.commit();
    } catch (SQLException ex) {
        // there was an error!
        db.rollback();
    }
    db.close(); // <-- you need to close the connection for sqlite to create a new immediate transaction.
}
Note: This uses xerial's sqlite-jdbc module.
Module Link: https://github.com/xerial/sqlite-jdbc
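As an aside, xerial's driver also ships an org.sqlite.SQLiteConfig helper that builds the same Properties for you. A minimal sketch follows; the method and enum names are from memory, so treat them as an assumption and check your driver version:
// requires org.sqlite.SQLiteConfig from sqlite-jdbc on the classpath
SQLiteConfig config = new SQLiteConfig();
config.setTransactionMode(SQLiteConfig.TransactionMode.IMMEDIATE); // or DEFERRED / EXCLUSIVE
Connection db = DriverManager.getConnection("jdbc:sqlite:sqlite.db", config.toProperties());
db.setAutoCommit(false); // the transaction will begin with the configured mode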
We've been using a pattern like this for a while to ensure a specific operation is executed with BATCH NOWAIT, for performance reasons.
try {
session.createSQLQuery("ALTER SESSION SET COMMIT_LOGGING='BATCH' COMMIT_WAIT='NOWAIT'").executeUpdate();
// Do the operation (which also calls transaction.commit())
return callback.apply(session);
} finally {
session.createSQLQuery("ALTER SESSION SET COMMIT_LOGGING='IMMEDIATE' COMMIT_WAIT='WAIT'").executeUpdate();
}
This has worked fine in Hibernate 4. As of Hibernate 5, the last statement fails because it's not inside a transaction (as it's just been committed).
javax.persistence.TransactionRequiredException: Executing an update/delete query
It isn't an update or a delete, but executeUpdate() is the only method you can call to execute this statement without returning any rows. It shouldn't need to be in a transaction since session variables apply to the entirety of the connection, and it does need to be executed to restore the session variables because a connection pool is in use.
I've tried using one of the query methods instead, but this statement has -1 rows, and it won't let me stack SELECT 1 FROM DUAL on the end.
Is there any way to execute a native query from Hibernate that's neither update/delete or results-returning, outside of a transaction?
Using the underlying Connection directly bypasses Hibernate's checks and allows me to execute such a statement in peace.
try {
session.doWork(conn ->
conn.createStatement().execute("ALTER SESSION SET COMMIT_LOGGING='BATCH' COMMIT_WAIT='NOWAIT'")
);
return callback.apply(session);
} finally {
session.doWork(conn ->
conn.createStatement().execute("ALTER SESSION SET COMMIT_LOGGING='IMMEDIATE' COMMIT_WAIT='WAIT'")
);
}
I am trying to better instrument which web applications make use of Oracle (11g) connections in our Tomcat JDBC connection pool when a connection is created and closed; this way, we can see what applications are using connections by monitoring the V$SESSION table. This is working, but since adding this "instrumentation" I am seeing ORA-01000: maximum open cursors exceeded errors being logged and noticing some connections being dropped out of the pool during load testing (which is probably fine as I have testOnBorrow enabled, so I'm assuming the connection is being flagged as invalid and dropped from the pool).
I have spent the better part of the week scouring the internet for possible answers. Here is what I have tried (all result in the open cursors error after a period of time)...
The below methods are all called the same way...
On Create
We obtain a connection from the pool
We call a method that executes the below code, passing in the context name of the web application
On Close
We have the connection being closed (returned to the pool)
Before we issue close() on the connection, we call a method that executes the code below, passing in "Idle" as the name to store in V$SESSION
Method 1:
CallableStatement cs = connection.prepareCall("{call DBMS_APPLICATION_INFO.SET_MODULE(?,?)}");
try {
cs.setString(1, appId);
cs.setNull(2, Types.VARCHAR);
cs.execute();
log.trace(">>> Executed Oracle DBMS_APPLICATION_INFO.SET_MODULE with module_name of '" + appId + "'");
} catch (SQLException sqle) {
log.error("Error trying to call DBMS_APPLICATION_INFO.SET_MODULE('" + appId + "')", sqle);
} finally {
cs.close();
}
Method 2:
I upgraded to the 12c OJDBC driver (ojdbc7) and used the native setClientInfo method on the connection...
// requires ojdbc7.jar and oraclepki.jar to work (setEndToEndMetrics is deprecated in ojdbc7)
connection.setClientInfo("OCSID.CLIENTID", appId);
Method 3:
I'm currently using this method.
String[] app_instrumentation = new String[OracleConnection.END_TO_END_STATE_INDEX_MAX];
app_instrumentation[OracleConnection.END_TO_END_CLIENTID_INDEX] = appId;
connection.unwrap(OracleConnection.class).setEndToEndMetrics(app_instrumentation, (short)0);
// in order for this to be sent, a query needs to be sent to the database - this works fine when a
// connection is created, but when it is closed, we need a little something to get the change into the db
// try using isValid()
connection.isValid(1);
Method 4:
String[] app_instrumentation = new String[OracleConnection.END_TO_END_STATE_INDEX_MAX];
app_instrumentation[OracleConnection.END_TO_END_CLIENTID_INDEX] = appId;
connection.unwrap(OracleConnection.class).setEndToEndMetrics(app_instrumentation, (short)0);
// in order for this to be sent, a query needs to be sent to the database - this works fine when a
// connection is created, but when it is closed, we need a little something to get the change into the db
if ("Idle".equalsIgnoreCase(appId)) {
Statement stmt = null;
ResultSet rs = null;
try {
stmt = connection.createStatement();
rs = stmt.executeQuery("select 1 from dual");
} finally {
if (rs != null) {
rs.close();
}
if (stmt != null) {
stmt.close();
}
}
}
When I query for open cursors, I notice the following SQL being returned on the account being used in the pool (for each connection in the pool)...
select NULL NAME, -1 MAX_LEN, NULL DEFAULT_VALUE, NULL DESCR
This does not explicitly exist anywhere in our code, so I can only assume it is coming from the pool when running the validation query (select 1 from dual) or from the setEndToEndMetrics method (or from the DBMS_APPLICATION_INFO.SET_MODULE proc, or from the isValid() call). I tried to be explicit in creating and closing Statement (CallableStatement) and ResultSet objects in methods 1 and 4, but they made no difference.
I don't want to increase the number of allowed cursors, as this will only delay the inevitable (and we have never had this issue until I added in the "instrumentation").
I've read through the excellent post here (java.sql.SQLException: - ORA-01000: maximum open cursors exceeded), but I must still be missing something. Any help would be greatly appreciated.
So Mr. Poole's statement: "that query looks like it's getting fake metadata" set off a bell in my head.
I started to wonder if it was some unknown remnant of the validation query being run on the testOnBorrow attribute of the pool's datasource (even though the validation query is defined as select 1 from dual). I removed this from the configuration, but it had no effect.
I then tried removing the code that sets the client info in V$SESSION (Method 3 above); Oracle continued to show that unusual query and after only a few minutes, the session would hit the maximum open cursors limit.
I then found that there was a "logging" method in our DAO class that logged some metadata from the connection object (values for settings like current auto commit, current transaction isolation level, JDBC driver version, etc.). Within this logging was the use of the getClientInfoProperties() method on the DatabaseMetaData object. When I looked at the JavaDocs for this method, it became crystal clear where that unusual query was coming from; check it out...
ResultSet java.sql.DatabaseMetaData.getClientInfoProperties() throws SQLException
Retrieves a list of the client info properties that the driver supports. The result set contains the following columns
1. NAME String=> The name of the client info property
2. MAX_LEN int=> The maximum length of the value for the property
3. DEFAULT_VALUE String=> The default value of the property
4. DESCRIPTION String=> A description of the property. This will typically contain information as to where this property is stored in the database.
The ResultSet is sorted by the NAME column
Returns:
A ResultSet object; each row is a supported client info property
You can clearly see that unusual query (select NULL NAME, -1 MAX_LEN, NULL DEFAULT_VALUE, NULL DESCR) matches what the JavaDocs say about the DatabaseMetaData.getClientInfoProperties() method. Wow, right!?
This is the code that was performing the function. As best as I can tell, it looks correct from a "closing of the ResultSet" standpoint - I'm not sure what was happening that would keep the ResultSet open, since it is clearly being closed in the finally block.
log.debug(">>>>>> DatabaseMetaData Client Info Properties (jdbc driver)...");
ResultSet rsDmd = null;
try {
boolean hasResults = false;
rsDmd = dmd.getClientInfoProperties();
while (rsDmd.next()) {
hasResults = true;
log.debug(">>>>>>>>> NAME = '" + rsDmd.getString("NAME") + "'; DEFAULT_VALUE = '" + rsDmd.getString("DEFAULT_VALUE") + "'; DESCRIPTION = '" + rsDmd.getString("DESCRIPTION") + "'");
}
if (!hasResults) {
log.debug(">>>>>>>>> DatabaseMetaData Client Info Properties was empty (nothing returned by jdbc driver)");
}
} catch (SQLException sqleDmd) {
log.warn("DatabaseMetaData Client Info Properties (jdbc driver) not supported or no access to system tables under current id");
} finally {
if (rsDmd != null) {
rsDmd.close();
}
}
Looking at the logs, when an Oracle connection was used, the >>>>>>>>> DatabaseMetaData Client Info Properties was empty (nothing returned by jdbc driver) line was logged, so an exception wasn't being thrown, but no record was being returned either. I can only assume that the ojdbc6 (11.2.0.x.x) driver doesn't properly support the getClientInfoProperties() method - it is weird (I think) that an exception wasn't being thrown, as the query itself is missing the FROM keyword (it won't run when executed in TOAD for example). And no matter what, the ResultSet should have at least been getting closed (the connection itself would still be in use though - maybe this causes Oracle to not release the cursors even though the ResultSet was closed).
So all of the work I was doing was in a branch (I mentioned in a comment to my original question that I was working in trunk - my mistake - I was in a branch that was already created thinking it was based on trunk code and not modified - I failed to do my due diligence here), so I checked the SVN commit history and found that this additional logging functionality was added by a fellow teammate a couple of weeks ago (fortunately it hasn't been promoted to trunk or to higher environments - note this code works fine against our Sybase database). My update from the SVN branch brought in his code, but I never really paid attention to what got updated (my bad). I spoke with him about what this code was doing against Oracle, and we agreed to remove the code from the logging method. We also put in place a check to only log the connection metadata when in our development environment (he said he added this code to help troubleshoot some driver version and auto commit questions he had). Once this was done, I was able to run my load tests without any open cursor issues (success!!!).
Anyway, I wanted to answer this question because when I searched for select NULL NAME, -1 MAX_LEN, NULL DEFAULT_VALUE, NULL DESCR and ORA-01000 open cursors, no credible hits were returned (the majority of the hits just said to make sure you are closing your connection resources, i.e., ResultSets, Statements, etc.). I think this shows that the database metadata query issued through JDBC against Oracle was the culprit behind the ORA-01000 error. I hope this is useful to others. Thanks.
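For reference, here is a rough sketch of what the guarded logging ended up looking like (isDevelopmentEnvironment, connection, and log are just illustrative names from our codebase, not part of any API):
if (isDevelopmentEnvironment) {
    DatabaseMetaData dmd = connection.getMetaData();
    // try-with-resources guarantees the ResultSet (and its cursor) is released
    try (ResultSet rsDmd = dmd.getClientInfoProperties()) {
        while (rsDmd.next()) {
            log.debug(">>>>>>>>> NAME = '" + rsDmd.getString("NAME")
                    + "'; DEFAULT_VALUE = '" + rsDmd.getString("DEFAULT_VALUE") + "'");
        }
    } catch (SQLException sqleDmd) {
        log.warn("Client info properties not supported by this driver", sqleDmd);
    }
}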
I have two delete operations on two different tables. After deleting, I need to know whether both queries executed. This is my code.
properties.load(inputStream);
String sql = properties.getProperty("deleteImageByImageID");
ps = DBConnection.prepareStatement(sql);
ps.setString(1, imageID);
sql = properties.getProperty("deleteAnnImageByImageID");
ps2 = DBConnection.prepareStatement(sql);
ps2.setString(1, imageID);
int count = ps.executeUpdate();
count += ps2.executeUpdate();
But now I changed the code by adding
DBConnection.setAutoCommit(false);
....
DBConnection.commit();
Now how do I know whether both statements were executed successfully (i.e., both deletes happened)?
Your code should look like this:
connection.setAutoCommit(false);
ps = DBConnection.prepareStatement(sql);
ps.setString(1, imageID);
ps2 = DBConnection.prepareStatement(sql);
ps2.setString(1, imageID);
int count = ps.executeUpdate();
count += ps2.executeUpdate();
connection.commit();
The counts returned by the executeUpdate() calls are the number of rows that will be affected when the transaction commits. If the transaction rolls back, then no rows will be affected.
Now how do I know whether both the statements were executed successfully??
Depends what you mean by "executed successfully":
If you mean, "without SQL errors" and the like, then you know that it has happened if there are no SQL exceptions in the prepare, set and execute statements. If any of the SQL statements fails, the transaction won't be commit-able.
If you mean that the changes were written safely to disk (or whatever), then you know that it has happened if the commit didn't throw an exception.
If you mean that the changes were what you expected, then all of the above, AND the counts are what you expected.
Firstly, depending on the severity of the failure, an exception may be thrown, so make sure you use a try/catch; if an exception is thrown, you want to handle it.
Secondly, in your catch block you will need to roll back the transaction.
DbConnection.rollback();
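Putting it together, here is a minimal sketch using the names from the question (DBConnection, ps, ps2, properties, and imageID are assumed to be the same objects as in your code):
DBConnection.setAutoCommit(false);
try {
    ps = DBConnection.prepareStatement(properties.getProperty("deleteImageByImageID"));
    ps.setString(1, imageID);
    ps2 = DBConnection.prepareStatement(properties.getProperty("deleteAnnImageByImageID"));
    ps2.setString(1, imageID);
    int count = ps.executeUpdate();
    count += ps2.executeUpdate();
    DBConnection.commit();   // only reached if neither delete threw an exception
} catch (SQLException e) {
    DBConnection.rollback(); // undoes whichever delete (if any) already ran
    throw e;
} finally {
    DBConnection.setAutoCommit(true);
}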
In JDBC, can I use a single Statement object to call executeQuery("") multiple times? Is it safe? Or should I close the Statement object after each query and create a new one for the next query?
E.G:
Connection con;
Statement s;
ResultSet rs;
ResultSet rs2;
try
{
con = getConnection();
// Initially I was creating the Statement object in an
// incorrect way. It was just intended to be a pseudocode.
// But too many answerers misinterpretted it wrongly. Sorry
// for that. I corrected the question now. Following is the
// wrong way, commented out now.
// s = con.prepareStatement();
// Following is the way which is correct and fits for my question.
s = con.createStatement();
try
{
rs = s.executeQuery(".......................");
// process the result set rs
}
finally
{
close(rs);
}
// I know what to do to rs here
// But I am asking, should I close the Statement s here? Or can I use it again for the next query?
try
{
rs2 = s.executeQuery(".......................");
// process the result set rs2
}
finally
{
close(rs2);
}
}
finally
{
close(s);
close(con);
}
Yes, you can re-use a Statement (specifically a PreparedStatement), and in general you should do so with JDBC. It would be inefficient and bad style to throw away your statement and immediately create another identical Statement object. As far as closing it goes, it is appropriate to close it in a finally block, just as you do in this snippet.
For an example of what you're asking check out this link: jOOq Docs
I am not sure why you are asking. The API design and documentation show it is perfectly fine (and even intended) to reuse a Statement object for multiple execute, executeUpdate and executeQuery calls. If it wouldn't be allowed that would be explicitly documented in the Java doc (and likely the API would be different).
Furthermore the apidoc of Statement says:
All execution methods in the Statement interface implicitly close a statment's [sic] current ResultSet object if an open one exists.
This is an indication that you can use it multiple times.
TL;DR: Yes, you can call execute on single Statement object multiple times, as long as you realize that any previously opened ResultSet will be closed.
Your example incorrectly uses PreparedStatement, and you cannot (or: should not) be able to call any of the execute... methods accepting a String on a PreparedStatement:
SQLException - if [...] the method is called on a PreparedStatement or CallableStatement
But to answer for PreparedStatement as well: the whole purpose of a PreparedStatement is to precompile a statement with parameter placeholders and reuse it for multiple executions with different parameter values.
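For illustration, a minimal sketch of that reuse (con is assumed to be an open Connection; the table and column names are made up):
try (PreparedStatement ps = con.prepareStatement("select name from personne where id = ?")) {
    for (int id : new int[] { 1, 2, 3 }) {
        ps.setInt(1, id);
        try (ResultSet rs = ps.executeQuery()) { // each ResultSet is closed before the statement runs again
            while (rs.next()) {
                System.out.println(rs.getString("name"));
            }
        }
    }
}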
I can't find anything in the API docs that would state, that you shouldn't call executeQuery() on a given PreparedStatement instance more than once.
However, your code does not close the PreparedStatement (a call to executeQuery() would throw a SQLException in that case), but rather the ResultSet that is returned by executeQuery(). A ResultSet is automatically closed when you re-execute a PreparedStatement. Depending on your circumstances you should close it when you don't need it anymore; I would close it, because I think it's bad style not to do so.
UPDATE: Oops, I missed your comment between the two try blocks. If you close your PreparedStatement at this point, you shouldn't be able to call executeQuery() again without getting a SQLException.
A Prepared Statement tells the database to remember your query and to be prepared to accept parameterized variables to execute in that query. It's a lot like a stored procedure.
Prepared Statement accomplishes two main things:
It automatically escapes your query variables to help guard against SQL Injection.
It tells the database to remember the query and be ready to take variables.
Number 2 is important because it means the database only has to interpret your query once, and then it has the procedure ready to go. So it improves performance.
You should not close a prepared statement and/or the database connection between execute calls. Doing so is incredibly inefficient and causes more overhead than using a plain old Statement, since you instruct the database each time to create the procedure and remember it. Even if the database is configured for "hot spots" and remembers your query anyway after you close the PreparedStatement, you still incur network overhead as well as a small processing cost.
In short, keep the Connection and PreparedStatement open until you are done with them.
Edit: To comment on not returning a ResultSet from the execution, this is fine. executeQuery will return the ResultSet for whatever query just executed.
Firstly, I am confused about your code:
s = con.prepareStatement();
Does it work? I can't find such a method in the Java API; it requires at least one parameter. Maybe you meant to invoke this method instead:
s = con.createStatement();
I just ran my own code against DB2, executing two queries with a single Statement instance without closing it between the two operations. It works fine. I think you can try it yourself too.
String sql = "";
String sql2 = "";
String driver = "com.ibm.db2.jcc.DB2Driver";
String url = "jdbc:db2://ip:port/DBNAME";
String user = "user";
String password = "password";
Class.forName(driver).newInstance();
Connection conn = DriverManager.getConnection(url, user, password);
Statement statement = conn.createStatement();
ResultSet resultSet = statement.executeQuery(sql);
int count = 0;
while (resultSet.next()) {
count++;
}
System.out.println("Result row count of query number one is: " + count);
count = 0;
resultSet = statement.executeQuery(sql2);
while (resultSet.next()) {
count++;
}
System.out.println("Result row count of query number two is: " + count);