I am developing a high-load application using the Tomcat JDBC connection pool and an Oracle database. It is very important that my app uses very short DB query timeouts (no longer than 3 seconds) so that long-running queries or database slowness cannot block the whole application. To simulate long-running queries I have put the DB into the QUIESCE state using the ALTER SYSTEM QUIESCE RESTRICTED statement.
But it looks like the timeout values have no effect: when I start testing my application, it just hangs...
Here is my JDBC pool configuration:
String connprops = "oracle.net.CONNECT_TIMEOUT=3000;oracle.jdbc.ReadTimeout=3000;"
+ "oracle.net.READ_TIMEOUT=3000";
pp.setConnectionProperties(connprops);
pp.setDriverClassName("oracle.jdbc.OracleDriver");
pp.setTestOnBorrow(true);
pp.setTestOnConnect(true);
pp.setTestOnReturn(true);
pp.setTestWhileIdle(true);
pp.setMaxWait(2000);
pp.setMinEvictableIdleTimeMillis(20000);
pp.setTimeBetweenEvictionRunsMillis(20000);
pp.setValidationInterval(3000);
pp.setValidationQuery("SELECT 1 FROM DUAL");
pp.setMaxAge(3000);
pp.setRemoveAbandoned(true);
pp.setRemoveAbandonedTimeout(3);
pp.setJdbcInterceptors("org.apache.tomcat.jdbc.pool.interceptor.QueryTimeoutInterceptor(queryTimeout=3)");
dataSource = new DataSource();
dataSource.setPoolProperties(pp);
This is how I work with connections (pretty simple):
Connection conn = dataSource.getConnection();
Statement stmt = null;
ResultSet rs = null;
try {
    stmt = conn.createStatement();
    rs = stmt.executeQuery(/*some select query*/);
    if (rs.next()) {
        result = rs.getInt(1);
        /*process the result*/
    }
    rs.close();
    stmt.close();
    conn.close();
}
catch (Exception e) {
    logger.error("Exception: " + e.getMessage(), e);
} finally {
    if (conn != null) {
        if (rs != null)
            rs.close();
        if (stmt != null)
            stmt.close();
        conn.close();
    }
}
Any ideas? Thanks in advance!
Try using this configuration instead:
String connprops = "oracle.net.CONNECT_TIMEOUT=\"3000\";oracle.jdbc.ReadTimeout=\"3000\";"
+ "oracle.net.READ_TIMEOUT=\"3000\"";
java.util.Properties silently ignores all non-String values; see its getProperty implementation:
public String getProperty(String key) {
    Object oval = super.get(key);
    String sval = (oval instanceof String) ? (String) oval : null; // <- !!!!
    return ((sval == null) && (defaults != null)) ? defaults.getProperty(key) : sval;
}
You should probably also use java.sql.Statement's query timeout:
stmt.setQueryTimeout(3); // int seconds
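For illustration, here is a hedged sketch of where that could go in the query code from the question (the SQL is a placeholder and try-with-resources is used for brevity):
try (Connection conn = dataSource.getConnection();
     Statement stmt = conn.createStatement()) {
    stmt.setQueryTimeout(3); // seconds; the driver is expected to cancel the statement once this elapses
    try (ResultSet rs = stmt.executeQuery("SELECT 1 FROM DUAL" /* your select query */)) {
        if (rs.next()) {
            int result = rs.getInt(1);
            // process the result
        }
    }
} catch (SQLException e) {
    logger.error("Exception: " + e.getMessage(), e);
}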
Related
I'm not sure of the best practice for this, but my overall problem is that I can't figure out why my connection isn't closing.
I'm basically iterating through a list and inserting the items into a table. Before I insert an item, I check whether it is a duplicate; if it is, I update the row instead of inserting it. As of now, I only get through 13 iterations before the debug output tells me a connection was not closed.
Since I have two connections, I'm having trouble figuring out where I'm supposed to close them, and I was trying to use other examples to help. Here is what I have:
Connection con = null;
PreparedStatement stmt = null;
PreparedStatement stmt2 = null;
ResultSet rs = null;
Connection con2 = null;

for (Object itemId : aList.getItemIds()) {
    try {
        con = cpds2.getConnection();
        stmt = con.prepareStatement("select [ID] from [DB].[dbo].[Table1] WHERE [ID] = ?");
        stmt.setInt(1, aList.getItem(itemId).getBean().getID());
        rs = stmt.executeQuery();
        // if the row is already there, update the data
        if (rs.isBeforeFirst()) {
            System.out.println("Duplicate");
            stmt2 = con2.prepareStatement("UPDATE [DB].[dbo].[Table1] SET "
                    + "[DateSelected]=GETDATE() where [ID] = ?");
            stmt2.setInt(1, aList.getItem(itemId).getBean().getID());
            stmt2.executeUpdate();
        } // end if inserting duplicate
        else {
            con2 = cpds2.getConnection();
            System.out.println("Insertion");
            stmt.setInt(1, aList.getItem(itemId).getBean().getID());
            // Otherwise, insert them as if they were new
            stmt2 = con.prepareStatement("INSERT INTO [DB].[dbo].[Table1] ([ID],[FirstName],"
                    + "[LastName],[DateSelected]) VALUES (?,?,?,?)");
            stmt2.setInt(1, aList.getItem(itemId).getBean().getID());
            stmt2.setString(2, aList.getItem(itemId).getBean().getFirstName());
            stmt2.setString(3, aList.getItem(itemId).getBean().getLastName());
            stmt2.setTimestamp(4, new Timestamp(new Date().getTime()));
            stmt2.executeUpdate();
        } // End Else
    } catch (Exception e) {
        e.printStackTrace();
    } // End Catch
    finally {
        try { if (rs != null) rs.close(); } catch (Exception e) {}
        try { if (stmt2 != null) stmt2.close(); } catch (Exception e) {}
        try { if (stmt != null) stmt.close(); } catch (Exception e) {}
        try { if (con2 != null) con2.close(); } catch (Exception e) {}
        try { if (con != null) con.close(); } catch (Exception e) {}
    } // End Finally
} // end for loop

Notification.show("Save Complete");
This is my pooled connection:
// Pooled connection
cpds2 = new ComboPooledDataSource();
try {
    cpds2.setDriverClass("net.sourceforge.jtds.jdbc.Driver");
} catch (PropertyVetoException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
} // loads the jdbc driver
cpds2.setJdbcUrl("jdbc:jtds:sqlserver://SERVERNAME;instance=DB");
cpds2.setUser("username");
cpds2.setPassword("password");
cpds2.setMaxStatements(180);
cpds2.setDebugUnreturnedConnectionStackTraces(true); // to help debug
cpds2.setUnreturnedConnectionTimeout(2);             // to help debug
My main questions are: am I closing my connections correctly? Is my connection pool set up correctly?
Should I be closing the connections inside the for loop or outside of it?
Is my problem with c3p0, or with jTDS?
It's great that you are being careful to robustly close() your resources, but this is overly complicated.
Unless you are using a pretty old version of Java (anything prior to Java 7), you can use try-with-resources, which really simplifies this kind of code. Working with two different Connections in one logical unit of work invites misunderstandings. Resources should be close()d as locally to their use as possible, rather than deferring everything to the end.
Your exception handling is dangerous. If an Exception occurs that you don't understand, you might want to print its stack trace, but your code should also signal that whatever you were doing didn't work. Instead you swallow the Exception and even show "Save Complete" despite it; see the sketch below for one way to propagate the failure.
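For instance, a hedged sketch of propagating the failure instead of swallowing it (wrapping in RuntimeException is just one option):
try {
    // ... prepare and execute the insert/update for this item, as in your loop body ...
} catch (Exception e) {
    // Log for diagnostics, then rethrow so the caller never reports "Save Complete" after a failure.
    e.printStackTrace();
    throw new RuntimeException("Saving item failed", e);
}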
All that said, your life might be made much easier by a MERGE statement, which I think SQL Server supports.
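For instance, a hedged, untested sketch of a single-statement upsert (it reuses the table and column names from your code and assumes [ID] identifies the row):
String merge =
        "MERGE [DB].[dbo].[Table1] AS t " +
        "USING (VALUES (?, ?, ?)) AS s ([ID], [FirstName], [LastName]) " +
        "ON t.[ID] = s.[ID] " +
        "WHEN MATCHED THEN UPDATE SET [DateSelected] = GETDATE() " +
        "WHEN NOT MATCHED THEN INSERT ([ID], [FirstName], [LastName], [DateSelected]) " +
        "VALUES (s.[ID], s.[FirstName], s.[LastName], GETDATE());";
try (PreparedStatement stmt = con.prepareStatement(merge)) {
    stmt.setInt(1, aList.getItem(itemId).getBean().getID());
    stmt.setString(2, aList.getItem(itemId).getBean().getFirstName());
    stmt.setString(3, aList.getItem(itemId).getBean().getLastName());
    stmt.executeUpdate(); // one round trip covers both the duplicate and the new-row case
}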
Here is an (untested, uncompiled) example reorganization:
try ( Connection con = cpds2.getConnection() ) {
    for (Object itemId : aList.getItemIds()) {
        boolean id_is_present = false;
        try ( PreparedStatement stmt = con.prepareStatement("select [ID] from [DB].[dbo].[Table1] WHERE [ID] = ?") ) {
            stmt.setInt(1, aList.getItem(itemId).getBean().getID());
            try ( ResultSet rs = stmt.executeQuery() ) {
                id_is_present = rs.next();
            }
        }
        if ( id_is_present ) {
            System.out.println("Duplicate");
            try ( PreparedStatement stmt = con.prepareStatement("UPDATE [DB].[dbo].[Table1] SET [DateSelected]=GETDATE() where [ID] = ?") ) {
                stmt.setInt(1, aList.getItem(itemId).getBean().getID());
                stmt.executeUpdate();
            }
        } else {
            System.out.println("Insertion");
            try ( PreparedStatement stmt = con.prepareStatement("INSERT INTO [DB].[dbo].[Table1] ([ID],[FirstName], [LastName],[DateSelected]) VALUES (?,?,?,?)") ) {
                stmt.setInt(1, aList.getItem(itemId).getBean().getID());
                stmt.setString(2, aList.getItem(itemId).getBean().getFirstName());
                stmt.setString(3, aList.getItem(itemId).getBean().getLastName());
                stmt.setTimestamp(4, new Timestamp(new Date().getTime()));
                stmt.executeUpdate();
            }
        }
    }
    Notification.show("Save Complete");
}
I'm writing an application with a Java form and SQLite, and I have a function to connect to the database and fetch data, like this:
public ResultSet getResultSet(String query) {
    Connection conn = null;
    ResultSet rs = null;
    try {
        // create a database conn
        conn = DriverManager.getConnection("jdbc:sqlite:C:\\Users\\nguye_000\\Desktop\\qlct.db");
        Statement statement = conn.createStatement();
        statement.setQueryTimeout(30);
        rs = statement.executeQuery(query);
        return rs;
    }
    catch (SQLException e) {
        // if the error message is "out of memory",
        // it probably means no database file is found
        System.err.println(e.getMessage());
    }
    finally {
        try {
            if (conn != null)
                conn.close();
        }
        catch (SQLException e) {
            // conn close failed.
            System.err.println(e);
        }
    }
    return rs;
}
Why do I get an error when I call it from my main function? Here is the call and the output I get:
Database db = new Database();
ResultSet rs = db.getResultSet("SELECT * FROM qlct_options");
while (rs.next()) {
    // read the result set
    System.out.println("id = " + rs.getInt("option_id"));
    System.out.println("name = " + rs.getString("option_key"));
    System.out.println("value = " + rs.getString("option_value"));
}
id = 0
name = null
value = null
Exception in thread "main" java.sql.SQLException: [SQLITE_MISUSE] Library used incorrectly (out of memory)
at org.sqlite.core.DB.newSQLException(DB.java:890)
at org.sqlite.core.DB.newSQLException(DB.java:901)
at org.sqlite.core.DB.throwex(DB.java:868)
at org.sqlite.jdbc3.JDBC3ResultSet.next(JDBC3ResultSet.java:83)
at qlct.Qlct.main(Qlct.java:18)
Java Result: 1
BUILD SUCCESSFUL (total time: 0 seconds)
I want to be able to run any query by calling the getResultSet() function and passing the query into it. I don't want to write code like this http://chuoidichvu.com/downloads/Database.java, where every query needs its own try{} catch{} finally{} block. That's too cumbersome!
You return a ResultSet from your function after having closed the connection (in the finally block of the outer try).
This causes an error, because you can only read the ResultSet while the Statement and the Connection are still open.
To achieve your objective (reusing a single connect-and-query function for your SQLite database), you have to rearrange the abstraction you are trying to define. For instance, you could pass your method a lambda expression (if you are using Java 8) that processes the result inside it.
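Here is a minimal sketch of that idea (assuming Java 8; the RowHandler interface and the query method are illustrative, not an existing API):
import java.sql.*;

public class Database {

    @FunctionalInterface
    public interface RowHandler {
        void handle(ResultSet rs) throws SQLException;
    }

    // Runs the query and hands each row to the caller while the connection is still open.
    public void query(String sql, RowHandler handler) {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:C:\\Users\\nguye_000\\Desktop\\qlct.db");
             Statement statement = conn.createStatement();
             ResultSet rs = statement.executeQuery(sql)) {
            while (rs.next()) {
                handler.handle(rs); // rows are consumed before the connection closes
            }
        } catch (SQLException e) {
            System.err.println(e.getMessage());
        }
    }
}
Usage would then look like this:
new Database().query("SELECT * FROM qlct_options", rs -> {
    System.out.println("id = " + rs.getInt("option_id"));
    System.out.println("name = " + rs.getString("option_key"));
    System.out.println("value = " + rs.getString("option_value"));
});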
I'm doing an Oracle SELECT FOR UPDATE from Java. Sometimes it works, and sometimes it hangs with a locked session that I cannot remove (I have to kill the session manually).
It works fine in most scenarios, but when I deployed it on two servers (as a web service) and called both at once, this happened. I can't tell whether it's a problem with my code.
My code:
public boolean checkJobStatus(long taskId)
{
    Connection con = null;
    PreparedStatement selectForUpdate = null;
    String lastJobStatus = null;
    boolean runNow = false;
    try
    {
        con = conPool.getConnection();
        con.setAutoCommit(false);
        selectForUpdate = con.prepareStatement("SELECT LAST_JOB_STATUS FROM ADM_JOB WHERE TASK_ID = ? FOR UPDATE ");
        selectForUpdate.setLong(1, taskId);
        ResultSet resultSet = selectForUpdate.executeQuery();
        while (resultSet.next())
        {
            if (resultSet.getObject("LAST_JOB_STATUS") == null)
            {
                lastJobStatus = ScheduledJob.STATUS_FAILED;
            }
            else
            {
                lastJobStatus = resultSet.getString("LAST_JOB_STATUS");
            }
        }
        if (ScheduledJob.STATUS_RUNNING.equalsIgnoreCase(lastJobStatus) || ScheduledJob.STATUS_STARTED.equalsIgnoreCase(lastJobStatus))
        {
            runNow = false;
            // commit n update setting autocommit to true
            selectForUpdate = con.prepareStatement("UPDATE ADM_JOB SET LAST_JOB_STATUS =? WHERE TASK_ID = ?");
            selectForUpdate.setString(1, lastJobStatus);
            selectForUpdate.setLong(2, taskId);
            selectForUpdate.executeUpdate();
        }
        else
        {
            runNow = true;
            // commit n update setting autocommit to true
            selectForUpdate = con.prepareStatement("UPDATE ADM_JOB SET LAST_JOB_STATUS =? WHERE TASK_ID = ?");
            selectForUpdate.setString(1, ScheduledJob.STATUS_STARTED);
            selectForUpdate.setLong(2, taskId);
            selectForUpdate.executeUpdate();
            con.commit();
            con.setAutoCommit(true);
        }
    } catch (SQLException e)
    {
        Logger.getLogger("").log(Level.SEVERE, "Error in getting database connection", e);
        try
        {
            con.rollback(); // rolling back the row lock in case of a exception
        } catch (SQLException e1)
        {
            e1.printStackTrace();
        }
    }
    finally
    {
        DBUtility.close(selectForUpdate);
        DBUtility.close(con);
    }
    return runNow;
}
The commit occurs only in the else branch. If that branch isn't taken, the transaction is never committed or rolled back, so a second thread hangs forever on the SELECT ... FOR UPDATE.
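One way to guarantee the row lock is always released is to commit (or roll back) on every path. Here is a hedged sketch of that reorganization, reusing the names from your code and eliding the statements themselves:
public boolean checkJobStatus(long taskId) {
    Connection con = null;
    PreparedStatement selectForUpdate = null;
    boolean runNow = false;
    try {
        con = conPool.getConnection();
        con.setAutoCommit(false);
        // ... the SELECT ... FOR UPDATE and the status UPDATE, exactly as in your code ...
        con.commit(); // commit on every successful path, not only in the else branch
    } catch (SQLException e) {
        Logger.getLogger("").log(Level.SEVERE, "Error while checking job status", e);
        if (con != null) {
            try {
                con.rollback(); // release the row lock on failure
            } catch (SQLException e1) {
                e1.printStackTrace();
            }
        }
    } finally {
        DBUtility.close(selectForUpdate);
        DBUtility.close(con); // the lock was already released by commit() or rollback() above
    }
    return runNow;
}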
My DB is Oracle.
I know that a Statement can combine SQL statements (insert, delete, or update) into one single batch. Here is my code:
DBConnection db = new DBConnection();
Connection c = db.getConn();
Statement s = null;
try
{
    String sql = "insert into t1(id, name) values ('10', 'apple')";
    String sql1 = "insert into t1(id, name) values ('14', 'pie')";
    String sql2 = "delete from t1 where id = '10'";
    s = c.createStatement();
    s.addBatch(sql);
    s.addBatch(sql1);
    s.addBatch(sql2);
    int[] re = s.executeBatch();
    ...
My question is: can a PreparedStatement do this, and how?
You can create a batch with PreparedStatement.addBatch() and execute it with PreparedStatement.executeBatch().
For more about PreparedStatement, see the documentation.
If I'm not mistaken, you want to do something like this:
public void save(List<Entity> elements) throws SQLException {
    Connection connection = null;
    PreparedStatement statement = null;
    try {
        connection = database.getConnection();
        statement = connection.prepareStatement(SQL_INSERT);
        for (int i = 0; i < elements.size(); i++) {
            Entity element = elements.get(i);
            statement.setString(1, element.getProperty1());
            statement.setString(2, element.getProperty2());
            // ...
            statement.addBatch();
            if ((i + 1) % 200 == 0) {
                statement.executeBatch(); // Execute every 200 items.
            }
        }
        statement.executeBatch();
    } finally {
        if (statement != null) try { statement.close(); } catch (SQLException e) { /* ignored */ }
        if (connection != null) try { connection.close(); } catch (SQLException e) { /* ignored */ }
    }
}
In this case I am executing every 200 items; you can choose your own batch size if you wish. But do test it, because it also depends on the driver's limitations for batch operations.
Statement:
Use for general-purpose access to your database. Useful when you are using static SQL statements at runtime. The Statement interface cannot accept parameters.
PreparedStatement:
Use when you plan to use the SQL statements many times. The PreparedStatement interface accepts input parameters at runtime.
CallableStatement:
Use when you want to access database stored procedures. The CallableStatement interface can also accept runtime input parameters.
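For completeness, a minimal hedged sketch of calling a stored procedure with a CallableStatement (the procedure get_customer_name and its parameters are hypothetical):
import java.sql.*;

public class ProcedureExample {
    // Assumes a procedure like: get_customer_name(IN p_id INT, OUT p_name VARCHAR)
    public String lookupName(Connection connection, int id) throws SQLException {
        try (CallableStatement cs = connection.prepareCall("{call get_customer_name(?, ?)}")) {
            cs.setInt(1, id);                                   // runtime input parameter
            cs.registerOutParameter(2, java.sql.Types.VARCHAR); // declare the output parameter
            cs.execute();
            return cs.getString(2);                             // read the OUT parameter
        }
    }
}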
I have to modify a few tables in one function. They must all succeed, or all fail. If one operation fails, I want them all to fail. I have the following:
public void foo() throws Exception {
    Connection conn = null;
    try {
        conn = ...;
        conn.setAutoCommit(false);
        grok(conn);
        conn.commit();
    }
    catch (Exception ex) {
        // do I need to call conn.rollback() here?
    }
    finally {
        if (conn != null) {
            conn.close();
            conn = null;
        }
    }
}

private void grok(Connection conn) throws Exception {
    PreparedStatement stmt = null;
    try {
        // modify table "apple"
        stmt = conn.prepareStatement(...);
        stmt.executeUpdate();
        stmt.close();

        // modify table "orange"
        stmt = conn.prepareStatement(...);
        stmt.executeUpdate();
        stmt.close();
        ...
    }
    finally {
        if (stmt != null) {
            stmt.close();
        }
    }
}
I'm wondering if I need to call rollback() in the case that something goes wrong during this process.
Other info: I'm using connection pooling. In the sample above, I'm also making sure to close each PreparedStatement using finally statements as well, just left out for brevity.
Thank you
Yes, you need to call rollback() if any of your statements fails or if you detect an exception before calling commit(). This is an old post, but the accepted answer is wrong. You can try it for yourself: throw an exception before the commit and observe that your inserts still make it into the database if you do not manually roll back.
JDBC Documentation
https://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html#call_rollback
Example of correct usage from the doc:
public void updateCoffeeSales(HashMap<String, Integer> salesForWeek)
        throws SQLException {

    PreparedStatement updateSales = null;
    PreparedStatement updateTotal = null;

    String updateString =
        "update " + dbName + ".COFFEES " +
        "set SALES = ? where COF_NAME = ?";

    String updateStatement =
        "update " + dbName + ".COFFEES " +
        "set TOTAL = TOTAL + ? " +
        "where COF_NAME = ?";

    try {
        con.setAutoCommit(false);
        updateSales = con.prepareStatement(updateString);
        updateTotal = con.prepareStatement(updateStatement);

        for (Map.Entry<String, Integer> e : salesForWeek.entrySet()) {
            updateSales.setInt(1, e.getValue().intValue());
            updateSales.setString(2, e.getKey());
            updateSales.executeUpdate();

            updateTotal.setInt(1, e.getValue().intValue());
            updateTotal.setString(2, e.getKey());
            updateTotal.executeUpdate();
            con.commit();
        }
    } catch (SQLException e) {
        JDBCTutorialUtilities.printSQLException(e);
        if (con != null) {
            try {
                System.err.print("Transaction is being rolled back");
                con.rollback();
            } catch (SQLException excep) {
                JDBCTutorialUtilities.printSQLException(excep);
            }
        }
    } finally {
        if (updateSales != null) {
            updateSales.close();
        }
        if (updateTotal != null) {
            updateTotal.close();
        }
        con.setAutoCommit(true);
    }
}
You don't need to call rollback(). If the connection closes without completing commit() it will be rolled back.
You don't need to set conn to null either; and since the try block starts after conn is initialized (assuming ... cannot evaluate to null), you don't need the != null check in the finally block either.
If you call "commit" then the transaction will be committed. If you have multiple insert/update statements and one of them fails, committing will cause the inserts/updates that didn't fail to commit to the database. So yes, if you don't want the other statements to commit to the db, you need to call rollback. What you are essentially doing by setting autocommit to false is allowing multiple statements to be committed or rolledback together. Otherwise each individual statement will automatically commit.