Why does a connection throw a TimeoutException without commit - java

I have a DAO class with the method below, which I call inside a transaction manager. When I run it without the "conn.commit()" line it throws a timeout exception, but when I run it with that line it works fine. What is the problem? As far as I know it is not necessary to commit if you do not modify the database.
@Override
public List<String> getLinks(int id) throws SQLException {
    List<String> list = new ArrayList<>();
    Connection conn = factory.newConnection();
    Statement statement = null;
    ResultSet rs = null;
    try {
        String expression = "select link from users.links where id=" + id + " order by id_link desc";
        statement = conn.createStatement();
        rs = statement.executeQuery(expression);
        while (rs.next()) {
            list.add(rs.getString("link"));
        }
        // !!!!!!!!!!!!! without next line method throw TimeoutException
        conn.commit(); // <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
        // !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
        return list;
    } catch (SQLException e) {
        rollBackQuietly(conn);
        e.printStackTrace();
    } finally {
        closeQuaitly(rs);
        closeQuaitly(statement);
        closeQuaitly(conn);
    }
    return null;
}

Seeing as the commit() call comes after the line that is throwing the exception, the problem must appear only after repeated invocations of this method (useful information to include in your question). That leads me to believe that your Connection factory is re-using Connections and is handing out "stale" ones (Connections which have been sitting around too long and are no longer usable). If that is the case, you need to make your factory manage Connections better. If it is a reasonably built connection pool, it probably has a feature like "test while idle" or "test on get" which you need to enable.
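If the factory is backed by a c3p0 pool — an assumption, since the question does not say what factory.newConnection() does — validation could be enabled roughly along these lines (a sketch, not the actual factory, with placeholder URL and credentials):

import java.beans.PropertyVetoException;
import javax.sql.DataSource;
import com.mchange.v2.c3p0.ComboPooledDataSource;

// Sketch only: assumes the connection factory wraps a c3p0 ComboPooledDataSource.
public static DataSource buildValidatedDataSource() throws PropertyVetoException {
    ComboPooledDataSource ds = new ComboPooledDataSource();
    ds.setDriverClass("com.mysql.jdbc.Driver");
    ds.setJdbcUrl("jdbc:mysql://localhost:3306/users");
    ds.setUser("dbuser");
    ds.setPassword("dbpass");
    // Validate a connection every time it is handed out ("test on get").
    ds.setTestConnectionOnCheckout(true);
    // Also test connections that have been idle, here every 300 seconds ("test while idle").
    ds.setIdleConnectionTestPeriod(300);
    // Cheap statement used for the validation checks.
    ds.setPreferredTestQuery("SELECT 1");
    return ds;
}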

Related

MYSQL Insert query not updating data to the table in JAVA

I am working on a Java program which needs to update a database from text files. I have successfully inserted and updated data, but I am facing a problem with this method. The query runs without error and gives me a response, but the database table is not updated.
private void filteData() {
    System.out.println("filteData");
    Statement statementAtenLogInsert = null;
    Statement statementqCheck = null;
    Statement statementUpdateProLog = null;
    Statement statementEnterError = null;
    ResultSet rs = null;
    int rcount;
    // Update successfull attendance_test
    String attenLogInsertSuccess = "INSERT INTO attendance_log (user_id, check_in, check_out) SELECT user_id, check_in, check_out FROM process_log WHERE flag = 'S'";
    try {
        statementAtenLogInsert = connection.createStatement();
        statementAtenLogInsert.execute(attenLogInsertSuccess);
        int qSuccess = statementAtenLogInsert.executeUpdate(attenLogInsertSuccess);
        System.out.println("qSuccess " + qSuccess);
        if (qSuccess > 0) {
            String deleteProcessLog = "DELETE FROM process_log WHERE flag = 'S'";
            statementAtenLogInsert.execute(deleteProcessLog);
        }
    } catch (SQLException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
Here the attenLogInsertSuccess and deleteProcessLog queries are not working; nothing happens on the database table side. But qSuccess gives me a value, which means attenLogInsertSuccess is being executed. Still, nothing changes on the MySQL side.
You need to close your connection so the changes are flushed to the database.
Try adding connection.close(); somewhere in your pipeline. Typically you close the connection in a finally block to ensure it is always closed, but it appears you have defined your connection elsewhere, presumably for re-use in the calling function.
You also need to close your statements before closing the connection. See this similar answer for the pattern.
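As a sketch of that pattern (dataSource and the method name are assumptions, not the asker's code; the commit is only relevant if auto-commit has been disabled on the connection):

// Sketch, not the asker's exact code: dataSource is an assumed javax.sql.DataSource;
// try-with-resources closes the statement first, then the connection.
private void filteDataSketch() throws SQLException {
    String attenLogInsertSuccess =
            "INSERT INTO attendance_log (user_id, check_in, check_out) "
          + "SELECT user_id, check_in, check_out FROM process_log WHERE flag = 'S'";
    try (Connection conn = dataSource.getConnection();
         Statement stmt = conn.createStatement()) {
        int qSuccess = stmt.executeUpdate(attenLogInsertSuccess);
        System.out.println("qSuccess " + qSuccess);
        if (qSuccess > 0) {
            stmt.executeUpdate("DELETE FROM process_log WHERE flag = 'S'");
        }
        // Only needed if auto-commit was disabled on this connection:
        // conn.commit();
    }
}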

JDBC Too Many Connections Error

I know this is probably similar to other questions (originally, before I tried something new, it was a bit more unique, but that never solved the main problem), but I need to discuss this with someone who can help, because I could never figure out what is causing this despite reading various posts on this site. The bottom line is that I need to keep making many sequential queries, but I end up creating too many connections.
What my program does is display data about each member. The structure is a sort of tree or network: in order to get the data for each member, you have to go through every other member that points to that member (the child's data), then the data of the member that points to that one (the grandchild's data), and so on. That is why I need to keep making queries, since I need the data of each child. Each node has, I think, a minimum of 5 children, and on my 34th member I got the "Too Many Connections" error.
I have read up on how to open and close Connections, but am I still doing it incorrectly? I have tried raising the maximum number of connections, but that is not really a long-term solution for me. Here is how I do it:
public class SQLConnect {

    private Connection con;
    private Statement st;
    private ResultSet rs;

    public SQLConnect() {
        try {
            Class.forName("com.mysql.jdbc.Driver");
            con = DriverManager.getConnection("jdbc:mysql://localhost:3306/dbname?zeroDateTimeBehavior=convertToNull", "root", "");
            st = con.createStatement();
        } catch (ClassNotFoundException | SQLException ex) {
            System.out.println("Error in constructor: " + ex);
        }
    }

    // this method gets called before I make another query
    public void reconnect() {
        try {
            st.close();
            con.close();
            if (con.isClosed()) {
                con = DriverManager.getConnection("jdbc:mysql://localhost:3306/dbname", "root", "");
                st = con.createStatement();
            }
        } catch (SQLException ex) {
            Logger.getLogger(SQLConnect.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    // sample method on how I do queries
    public ResultSet getMemberViaMemberId(String mID) {
        try {
            String query = "CALL getMemberViaMemberId(" + mID + ");"; // procedure call
            rs = st.executeQuery(query);
        } catch (Exception ex) {
            System.out.println("Error: " + ex);
        }
        return rs;
    }
} // end of class
The way I call it in my JForm is this:
SQLConnect connect;

public Class() {
    connect = new SQLConnect();
}

public void methodThatGetsCalledALot(String current_id) {
    connect.reconnect(); // refer to SQLConnect class displayed above
    ResultSet member = connect.getMemberViaMemberId(current_id);
    try {
        if (member.next()) {
            lastName = member.getString("last_name");
            firstName = member.getString("first_name");
        }
        // display data...
    } catch (SQLException ex) {
    }
}
The code:
connect.reconnect();
ResultSet rs = connect.callSQLMethod();
is the most essential bit and is called by every class and by every method that needs to fetch data. I have to admit that I never bother closing the ResultSet, because it is often inside a loop and gets replaced with new data anyway.
Again, my problem is: I can't continue fetching data because of too many connections. Am I really closing things properly, or am I missing something? Any suggestions on how to fix this? If my question is too confusing, I can add more details if required. If anyone is keen on helping me out directly, I would be happy to take this to email. Thank you, and Happy New Year, by the way.
You seem to be creating a lot of connections and recursing with the ResultSet open. Don't create new connections all the time; all you need is one connection, so don't reconnect constantly. You actually don't need the reconnect method at all (unless your connection closes automatically, in which case you can check whether it is closed before executing a query). And you need to close the ResultSet once you are done retrieving values.
All you need is the data, not the ResultSet. So take the data and release the resource, i.e. the ResultSet. Concretely:
In getMemberViaMemberId, don't return a ResultSet. In that method itself, iterate through the ResultSet, create an object for each row, store it in a collection, and return that collection after closing the ResultSet. And don't call the reconnect method at all, as sketched below.
Close the single connection that you have when exiting the program.
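A rough sketch of that change, assuming a hypothetical Member value class and keeping the single long-lived con field from the constructor (the stored procedure is called with a bound parameter instead of string concatenation; requires java.util.List/ArrayList and java.sql.CallableStatement imports):

// Sketch only: Member is an assumed value class holding the columns the caller needs.
public List<Member> getMemberViaMemberId(String mID) {
    List<Member> members = new ArrayList<>();
    // con is the single long-lived connection created in the constructor.
    try (CallableStatement cs = con.prepareCall("{CALL getMemberViaMemberId(?)}")) {
        cs.setString(1, mID);                      // bind instead of concatenating
        try (ResultSet rs = cs.executeQuery()) {   // ResultSet is closed when done
            while (rs.next()) {
                members.add(new Member(rs.getString("last_name"),
                                       rs.getString("first_name")));
            }
        }
    } catch (SQLException ex) {
        System.out.println("Error: " + ex);
    }
    return members;
}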

Unexpected NPE thrown when trying to put resultset of query into a map

I am getting a NullPointerException at the indicated line in the following code, when I put the values from the ResultSet into a map. It works most of the time, but sometimes it throws an NPE. It is a service-type application which runs in the background; every method opens a connection to the database and closes it after use. There is a synchronized lock around every method so that no conflict occurs while connecting to the DB.
public Configuration getConfiguration() {
    String sql = "SELECT * FROM tbl_settings;";
    HashMap<String, String> map = new HashMap<String, String>();
    synchronized (_synchObject) {
        try {
            open();
            PreparedStatement stmt = (PreparedStatement) conn.prepareStatement(sql);
            ResultSet rs = stmt.executeQuery();
            while (rs.next()) {
                map.put(rs.getString("Field"), rs.getString("Value")); // NPE is thrown in this line: 495
            }
        } catch (SQLException e) {
            logger.error(e);
        } finally {
            try {
                close();
            } catch (SQLException e) {
                logger.error(e);
            }
        }
        return new Configuration(map);
    }
}
The method open() here simply creates the connection to the DB:
private void open() throws SQLException {
    String connectionString = ConfigSettings.getInstance().getDatabaseConnectionString();
    conn = (Connection) DriverManager.getConnection(connectionString);
}
The exception thrown:
Exception in thread "Thread-2" java.lang.NullPointerException
at com.mysql.jdbc.ResultSetImpl.buildIndexMapping(ResultSetImpl.java:683)
at com.mysql.jdbc.ResultSetImpl.findColumn(ResultSetImpl.java:1042)
at com.mysql.jdbc.ResultSetImpl.getString(ResultSetImpl.java:5202)
at com.twora.entryexit.db.DatabaseAccess.getConfiguration(DatabaseAccess.java:495)
at com.twora.entryexit.model.Configuration.getInstance(Configuration.java:33)
at com.twora.entryexit.service.runnables.IntervalledService.runSpecific(IntervalledService.java:33)
at com.twora.entryexit.service.runnables.RunningService.run(RunningService.java:38)
at java.lang.Thread.run(Thread.java:745)
I am running the query once per run and reading the ResultSet immediately after that. What should I do to overcome this?
You are getting an exception in the driver code that builds the internal maps in the result set, the ones that map column names and labels to their index in the query. This has nothing to do with the arguments of your getString calls.
What is most likely happening is that your thread synchronization is flawed and two threads are working on the same result set object at the same time. The result set comes from the connection object, so two threads sharing the same connection can end up sharing the same result set.
Those maps are cleared when close() is called on the result set, so one thread is closing the result set while another thread is still using it. That also explains why it works some of the time; it all depends on the timing.
From your code it is not clear what object you synchronize on. You need to ensure that you are not using the same connection object in two separate threads at the same time, and your synchronization is currently failing to do that, for example by giving each call its own connection as sketched below.
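One possible sketch, assuming it is acceptable for this method to open its own short-lived connection (using the ConfigSettings connection string from the question), so that no statement or result set is ever shared between threads:

// Sketch only: each call gets its own connection, statement and result set,
// so no synchronization on a shared conn field is needed for this method.
public Configuration getConfiguration() {
    String sql = "SELECT * FROM tbl_settings";
    HashMap<String, String> map = new HashMap<String, String>();
    String connectionString = ConfigSettings.getInstance().getDatabaseConnectionString();
    try (Connection localConn = DriverManager.getConnection(connectionString);
         PreparedStatement stmt = localConn.prepareStatement(sql);
         ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
            map.put(rs.getString("Field"), rs.getString("Value"));
        }
    } catch (SQLException e) {
        logger.error(e);
    }
    return new Configuration(map);
}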

JDBC: optimizing MySQL requests across multiple threads

I'm building a web crawler and I'm looking for the best way to handle the requests and connections between my threads and the database (MySQL).
I have 2 types of threads:
Fetchers: they crawl websites. They produce URLs and add them into 2 tables: table_url and table_file. They select from table_url to continue the crawl, and update table_url to set visited=1 when they have read a URL, or visited=-1 while they are reading it. They can delete rows.
Downloaders: they download files. They select from table_file and update table_file to change the Downloaded column. They never insert anything.
Right now I'm working with this:
I have a connection pool based on c3p0.
Every target (website) has these variables:
private Connection connection_downloader;
private Connection connection_fetcher;
I create both connections only once, when I instantiate a website. Then every thread uses those connections based on its target.
Every thread has these variables:
private Statement statement;
private ResultSet resultSet;
Before every query I open a SQL Statement:
public static Statement openSqlStatement(Connection connection) {
    try {
        return connection.createStatement();
    } catch (SQLException e) {
        e.printStackTrace();
    }
    return null;
}
And after every query I close the statement and ResultSet with:
public static void closeSqlStatement(ResultSet resultSet, Statement statement) {
    if (resultSet != null) try { resultSet.close(); } catch (SQLException e) { e.printStackTrace(); }
    if (statement != null) try { statement.close(); } catch (SQLException e) { e.printStackTrace(); }
}
Right now my select method only works with a single select (I never have to select more than one for now, but this will change soon) and is defined like this:
public static String sqlSelect(String Query, Connection connection, Statement statement, ResultSet resultSet) {
    String result = null;
    try {
        resultSet = statement.executeQuery(Query);
        resultSet.next();
        result = resultSet.toString();
    } catch (SQLException e) {
        e.printStackTrace();
    }
    closeSqlStatement(resultSet, statement);
    return result;
}
And insert, delete and update queries use this function:
public static int sqlExec(String Query, Connection connection, Statement statement) {
    int ResultSet = -1;
    try {
        ResultSet = statement.executeUpdate(Query);
    } catch (SQLException e) {
        e.printStackTrace();
    }
    closeSqlStatement(resultSet, statement);
    return ResultSet;
}
My question is simple: can this be made faster? I'm also concerned about mutual exclusion, to prevent one thread from updating a link while another is doing so.
I believe your design is flawed. Having one connection assigned full-time to each website will severely limit your overall throughput.
As you already have a connection pool set up, it's perfectly okay to fetch a connection right before you use it (and return it afterwards).
Likewise, try-with-resources for closing all your ResultSets and Statements will make the code more readable, and using PreparedStatement instead of Statement would not hurt either.
One Example (using a static dataSource() call to access your pool):
public static String sqlSelect(String id) throws SQLException {
    try (Connection con = dataSource().getConnection();
         PreparedStatement ps = con.prepareStatement("SELECT row FROM table WHERE key = ?")) {
        ps.setString(1, id);
        try (ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
                return rs.getString(1);
            } else {
                throw new SQLException("Nothing found");
            }
        }
    } catch (SQLException e) {
        e.printStackTrace();
        throw e;
    }
}
Following the same pattern, I suggest you create methods for all the different inserts/updates/selects your application uses as well, each using a connection only for the short time inside the DB logic.
I cannot see a real advantage in having all the database code inside your crawler threads.
Why don't you use a static class with the sqlSelect and sqlExec methods, but without the Connection and ResultSet parameters? Both connection objects would be static as well. Make sure the connection objects are valid before using them.
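For instance, an update helper in that style might look like the sketch below; dataSource() is the same assumed static accessor to the c3p0 pool used in the example above, and parameters are bound rather than concatenated:

// Sketch only: a pooled, parameterized update helper reached through the
// assumed static dataSource() accessor.
public static int sqlExec(String query, Object... params) throws SQLException {
    try (Connection con = dataSource().getConnection();
         PreparedStatement ps = con.prepareStatement(query)) {
        for (int i = 0; i < params.length; i++) {
            ps.setObject(i + 1, params[i]);   // JDBC parameters are 1-based
        }
        return ps.executeUpdate();            // rows affected
    }
}

// Usage, e.g. from a fetcher thread:
// sqlExec("UPDATE table_url SET visited = ? WHERE url = ?", 1, url);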

Java memory leak caused by MySQL libraries

I have a thread that queries and updates a database with some values. I commented out all the operations done on the data and just left the lines you can see below. Even running the program with only these lines, I still get a memory leak.
This is what it looks like in VisualVM
Mysql Class
public ResultSet executeQuery(String Query) throws SQLException {
    statement = this.connection.createStatement();
    resultSet = statement.executeQuery(Query);
    return resultSet;
}

public void executeUpdate(String Query) throws SQLException {
    Statement tmpStatement = this.connection.createStatement();
    tmpStatement.executeUpdate(Query);
    tmpStatement.close();
    tmpStatement = null;
}
Thread file
public void run() {
    ResultSet results;
    String query;
    int id;
    String IP;
    int port;
    String SearchTerm;
    int sleepTime;
    while (true) {
        try {
            query = "SELECT * FROM example WHERE a='0'";
            results = this.database.executeQuery(query);
            while (results.next()) {
                id = results.getInt("id");
                query = "UPDATE example SET a='1' WHERE id='" + id + "'";
                SearchTerm = null;
                this.database.executeUpdate(query);
            }
            results.close();
            results = null;
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
From researching the web, the same problem has happened to many other people:
https://forum.hibernate.org/viewtopic.php?f=1&t=987128
Is bone cp or mysql.jdbc.JDBC4Connection known for leaking?
and a few more if you google "jdbc4resultset memory leak".
The leak is yours, not MySQL's. You aren't closing the statement.
The design of your method is poor. It should close the statement, and it should return something that will survive closing of the statement, such as a CachedRowSet.
Or else it should not exist at all. It's only three lines, and it doesn't support query parameters, so it isn't really much use. I would just delete it.
You also appear to have statement as an instance member, which is rarely if ever correct. It should be local to the method. At present your code isn't even thread-safe.
You should also be closing the ResultSet in a finally block to ensure it gets closed. Ditto the Statement.
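A sketch of the CachedRowSet variant of that executeQuery method, assuming the result set is small enough to hold in memory (RowSetProvider and CachedRowSet come from the standard javax.sql.rowset package):

import javax.sql.rowset.CachedRowSet;
import javax.sql.rowset.RowSetProvider;

// Sketch only: the statement and driver ResultSet are closed before returning,
// and the caller gets a disconnected CachedRowSet instead of a live ResultSet.
public CachedRowSet executeQuery(String query) throws SQLException {
    try (Statement stmt = this.connection.createStatement();
         ResultSet rs = stmt.executeQuery(query)) {
        CachedRowSet cached = RowSetProvider.newFactory().createCachedRowSet();
        cached.populate(rs);   // copies the rows, so rs and stmt can be closed safely
        return cached;
    }
}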
Make sure that you are explicitly closing the database connections.
