How does LIMIT in a MySQL query make it possible to cancel a stream - java

I am wondering how LIMIT in a query prevents the application thread reading from a MySQL stream from hanging in the close operation, and why LIMIT enables query canceling, which otherwise does not work.
final Statement statement = connection.createStatement(
        java.sql.ResultSet.TYPE_FORWARD_ONLY,
        java.sql.ResultSet.CONCUR_READ_ONLY);
statement.setFetchSize(Integer.MIN_VALUE); // switches the driver to streaming mode
// Statement statement = connection.createStatement(); // this can be canceled

new Thread(new Runnable() {
    // tries to cancel the query after streaming starts
    @Override
    public void run() {
        try {
            Thread.sleep(5);
            statement.cancel(); // does nothing when streaming
            // statement.close(); // makes the application thread hang
        } catch (SQLException | InterruptedException e) {
            e.printStackTrace();
        }
    }
}).start();

// adding a LIMIT to the query makes it possible to cancel the stream,
// even if the limit is not yet reached
ResultSet resultSet = statement.executeQuery("SOME_LONG_RUNNING_QUERY");
int i = 0;
while (resultSet.next()) {
    System.out.println(++i);
}
connection.close();
A regular (non-streaming) query can be safely canceled, with or without a limit. In streaming mode, however, the close/cancel operations simply make the application thread hang or do nothing, presumably while it performs a blocking read on the socket.
If I add some large LIMIT to the long-running query then, as expected, the cancel() operation results in:
com.mysql.jdbc.exceptions.jdbc4.MySQLQueryInterruptedException: Query execution was interrupted
I understand there are a couple of questions on this matter, but none of them discusses the aspects below:
Why does LIMIT make it possible to cancel a streaming query?
Can this bug/feature be relied upon, can it change in future releases, and is there any official explanation?

LIMIT makes it so that only a certain number of records are pulled from the database. It is useful, and the best option, when you have a large query that is liable to hang.
In your case, when you stream a query that has both a limit and a close statement, execution follows the order of operations. Since the LIMIT comes first, it ends your query. This would explain why execution never reaches the close statement even though you have one, and why you receive the exception instead.
I hope this clears up some of the issues you are having.
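For illustration only, here is a minimal sketch of the difference the question describes (big_table and the row bound are hypothetical placeholders; the behavior is as reported in the question, not a documented contract):

// streaming statement as above, with fetch size Integer.MIN_VALUE
// without a LIMIT, statement.cancel() from the other thread has no effect:
// resultSet = statement.executeQuery("SELECT * FROM big_table");

// with a large LIMIT, even one that is never reached, cancel() interrupts
// the query with MySQLQueryInterruptedException:
resultSet = statement.executeQuery("SELECT * FROM big_table LIMIT 100000000");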

Related

Multiple DB connections using a connection pool vs Single connection with multiple statements

I am developing a server working with MySQL, and I have been trying to understand the advantage of working with a connection pool versus a single connection that is kept open and passed down to the different methods throughout the application.
I understand the idea of working with a connection pool; however, there could be scenarios where the pool creates a bottleneck that would not exist when working without it.
Let me explain my meaning with code.
Let's say the following method is called connectionPoolSize + 1 (e.g. 10) times simultaneously, meaning that we have exhausted the connections in the pool; the last query attempt will fail since no connection is available:
public void getData() throws Exception {
    Connection con = null;
    Statement s = null;
    ResultSet rs = null;
    try {
        con = connectionPool.getConnection();
        s = con.createStatement();
        rs = s.executeQuery("SELECT * FROM MY_TABLE;");
        // Some long process that takes a while....
    } catch (Exception e) {
        throw new Exception(e.getMessage());
    } finally {
        rs.close();
        s.close();
        con.close();
    }
}
However, if we are using a single connection that is kept open and that all methods can use, none of the methods needs to wait for the connection to be returned to the pool (which, as we saw above, could take some time).
E.g., calling this method 10 times as well would work:
public void getData(Connection con) throws Exception {
    Statement s = null;
    ResultSet rs = null;
    try {
        s = con.createStatement();
        rs = s.executeQuery("SELECT * FROM MY_TABLE;");
        // Some long process that takes a while....
        // But this time we don't care that this will take time,
        // since nobody is waiting for us to release the connection
    } catch (Exception e) {
        throw new Exception(e.getMessage());
    } finally {
        rs.close();
        s.close();
    }
}
Obviously the statements and result sets will still be kept open until the method finishes, but this doesn't affect the connection itself, so it doesn't hold back any other attempt to use that connection.
I assume there is some further insight that I am missing; I understand the standard is to work with connection pools, so how do you handle these issues?
It depends on your use case. Suppose you are building a web application that will be used by multiple users simultaneously. If you have a single connection, all the queries from the various user threads will be queued, and the single db connection will process them one by one. So in a multi-user system (which covers most normal cases), a single db connection will be a bottleneck and won't work. Additionally, you need to take care of thread safety if you are writing and committing data to the db.
If you need truly simultaneous query execution in the db, then you should go ahead with a connection pool. Different user threads can then use different connections and execute their queries in parallel.
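To make the contrast concrete, here is a minimal sketch of parallel execution against a pool, assuming an already-configured javax.sql.DataSource-backed pool (the pool setup and table name are illustrative):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.sql.DataSource;

public class ParallelQueries {
    // pool is assumed to be a configured connection pool (e.g. c3p0, HikariCP)
    static void runQueries(DataSource pool, int users) {
        ExecutorService workers = Executors.newFixedThreadPool(users);
        for (int i = 0; i < users; i++) {
            workers.submit(() -> {
                // each task borrows its own connection, so the queries run in parallel
                try (Connection con = pool.getConnection();
                     Statement s = con.createStatement();
                     ResultSet rs = s.executeQuery("SELECT * FROM MY_TABLE")) {
                    while (rs.next()) { /* process row */ }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        workers.shutdown();
    }
}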
Connection pools are used to keep a number of open connections ready for use and to eliminate the need to open a new connection each time one is required.
If your application is single threaded then you probably don’t need a pool and can use a single connection instead.
Even though sharing a connection between multiple threads is permitted there are some pitfalls of this approach. Here is a description for Java DB: https://docs.oracle.com/javadb/10.8.3.0/devguide/cdevconcepts89498.html. You should check if this is also the case for MySQL.
In many cases it is easier to have an individual connection for each thread.
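One common way to give each thread its own connection is a ThreadLocal; a minimal sketch, assuming a placeholder MySQL URL and ignoring connection cleanup for brevity:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class PerThreadConnection {
    // each thread lazily opens and keeps exactly one connection of its own
    private static final ThreadLocal<Connection> CONNECTION =
            ThreadLocal.withInitial(() -> {
                try {
                    return DriverManager.getConnection("jdbc:mysql://localhost/db"); // placeholder URL
                } catch (SQLException e) {
                    throw new RuntimeException(e);
                }
            });

    public static Connection get() {
        return CONNECTION.get();
    }
}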

How to safely kill a query which has timed out

I'm using PostgreSQL JDBC, and I have a connection for some select and insert queries. Some queries take some time, so I added a timeout. The problem is that the timeout closes the connection, but the query is still executed in the db server and it creates locks.
Simplified code for the problem (the real code is much more complex and bigger, but that doesn't matter):
PGPoolingDataSource source = new PGPoolingDataSource();
source.setUrl(url);
source.setUser(user);
source.setPassword(password);
source.setMaxConnections(10);
source.setSocketTimeout(5); // Timeout of 5 seconds

// Insert a row of data, and trigger a timeout
Connection con = source.getConnection();
con.setAutoCommit(false);
try {
    Statement st2 = con.createStatement();
    st2.execute("insert into data.person values (4, 'a')");
    Statement st3 = con.createStatement();
    st3.executeQuery("select pg_sleep(200)"); // A query which takes a long time and causes a timeout
    con.commit();
    con.close();
} catch (SQLException ex) {
    if (!con.isClosed()) {
        con.rollback();
        con.close();
    }
    ex.printStackTrace();
}

Connection con2 = source.getConnection();
con2.setAutoCommit(false);
try {
    Statement st2 = con2.createStatement();
    // This insert query is blocked because the previous query is still executing,
    // the rollback hasn't happened yet, and the row with id 4 is not released
    st2.execute("insert into data.person values (4, 'b')");
    con2.commit();
    con2.close();
} catch (SQLException ex) {
    if (!con2.isClosed()) {
        con2.rollback();
        con2.close();
    }
    ex.printStackTrace();
}
(data.person is a table with id and name.)
The timeout closes the connection, and execution never even reaches the line con.rollback();. I have read that when an exception occurs on a query, a rollback happens in the background, so that part is OK.
But the query takes a long time (a few hours), and as a result the rollback will only occur after the big select query has finished. So I can't add the row to data.person for several hours (the second time I try to insert, I get a timeout exception because the insert waits for the lock to be released...).
I have read that I can use the PostgreSQL function pg_terminate_backend to terminate the query, so that I can then execute the insert query a second time.
My questions are:
1) How safe is it?
2) How common is this solution?
3) Is there a safer solution that JDBC or PostgreSQL provides?
pg_terminate_backend will work and is the safe and correct procedure if you want to interrupt the query and close the database connection.
There is also pg_cancel_backend which will interrupt the query but leave the connection open.
These functions require that you know the process ID of the session backend process, which you can get with the pg_backend_pid function.
You must run these statements on a different database connection than the original one!
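In JDBC terms, that could look like the following sketch (the pid plumbing is illustrative; the cancel must run on a second connection, as noted above):

// on the worker connection, before running the long query:
int pid;
try (Statement st = con.createStatement();
     ResultSet rs = st.executeQuery("SELECT pg_backend_pid()")) {
    rs.next();
    pid = rs.getInt(1);
}

// later, from a different connection, interrupt that session's query:
try (Connection admin = source.getConnection();
     Statement st = admin.createStatement()) {
    st.execute("SELECT pg_cancel_backend(" + pid + ")");
    // or pg_terminate_backend(pid) to also close the session
}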
Another, probably simpler method is to set the statement_timeout. This can be set in the configuration file or for an individual session or transaction. To set it for a transaction, use:
BEGIN; -- in JDBC, use setAutoCommit(false)
SET LOCAL statement_timeout = 30000; -- in milliseconds
SELECT /* your query */;
COMMIT;
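Translated to the question's JDBC code, a sketch:

con.setAutoCommit(false);                              // BEGIN
try (Statement st = con.createStatement()) {
    st.execute("SET LOCAL statement_timeout = 30000"); // in milliseconds
    st.executeQuery("select pg_sleep(200)");           // now fails with an SQLException after 30 s
}
con.commit();

Recent versions of the PostgreSQL JDBC driver also implement the standard Statement.setQueryTimeout(int seconds), which achieves a similar per-statement cutoff without hand-written SET statements.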

Why am I getting MySQL lag in my Minecraft Bukkit plugin?

I made a Bukkit plugin with MySQL in it, and I need to know why I am getting lag whenever this code runs. I run the server on my own system and the MySQL server on HostGator. Here's my code:
openConnection();
try {
    int level1 = 0;
    if (playerDataContainsPlayer(p)) {
        PreparedStatement sql = connection.prepareStatement(
                "SELECT level FROM `player_data` WHERE player=?;");
        sql.setString(1, p.getName());
        ResultSet result = sql.executeQuery();
        result.next();
        level1 = result.getInt("level");
        PreparedStatement levelUpdate = connection.prepareStatement(
                "UPDATE `player_data` SET level=? WHERE player=?;");
        levelUpdate.setInt(1, level1 + 1);
        levelUpdate.setString(2, p.getName());
        levelUpdate.executeUpdate();
        levelUpdate.close();
        result.close();
        sql.close();
    } else {
        PreparedStatement newPlayer = connection.prepareStatement(
                "INSERT INTO `player_data` values(?,0,1,0);");
        newPlayer.setString(1, p.getName());
        newPlayer.execute();
        newPlayer.close();
    }
} catch (Exception e1) {
    e1.printStackTrace();
} finally {
    closeConnection();
}
Here is my openConnection method:
public synchronized static void openConnection() {
    try {
        connection = DriverManager.getConnection(""); // I know it's empty because I don't want to give that info out
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Here's my closeConnection method:
public synchronized static void closeConnection() {
    try {
        connection.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
There are a few things you can do to reduce your query latency:
If your app is query intensive, use persistent connections and keep them open instead of opening a new connection every time you need to access the database.
Run the MySQL server locally to speed up connection times.
Index the search fields of your tables (e.g. player on player_data) to have the search run faster.
Run the MySQL server on a powerful, dedicated machine with SSD drives and lots of RAM, and set the proper parameters on my.cnf (worker threads, max processes, max number of connections, memory limit, buffer sizes) to make use of that RAM and processing power and speed up search and processing times. Things like this question and answers may help you with the memory settings, but the best you can do is your own, exhaustive, online research and testing. Do your homework!
Use some kind of caching system to speed up reading (like memcached).
If your app is data intensive and has to support a huge number of connections, get a higher bandwidth or even consider setting up a cluster to balance the load.
Reduce the number of queries! You don't need to query the database twice to increase the level!
Try:
if (playerDataContainsPlayer(p)) {
    PreparedStatement levelUpdate = connection.prepareStatement(
        "UPDATE player_data SET level=level+1 WHERE player=?;"
    );
    levelUpdate.setString(1, p.getName());
    levelUpdate.executeUpdate();
    levelUpdate.close();
} else {
    ...
}
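Going further, if player_data has a unique key on player, the existence check can be dropped entirely with MySQL's INSERT ... ON DUPLICATE KEY UPDATE, so a single statement handles both branches (a sketch; the column list is assumed from the question's INSERT, and the unique key is an assumption):

PreparedStatement upsert = connection.prepareStatement(
    "INSERT INTO player_data VALUES (?, 0, 1, 0) " +
    "ON DUPLICATE KEY UPDATE level = level + 1;"); // assumes a unique key on player
upsert.setString(1, p.getName());
upsert.executeUpdate();
upsert.close();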
Sounds like you are running your query on the main server thread. You really shouldn't do this, especially if your SQL server isn't on the local machine.
Have a read of the tutorial about how to run more CPU intensive or longer running tasks in the background to avoid this type of performance loss.
What you need to do is put your code into a BukkitRunnable:
public class ExampleTask extends BukkitRunnable {

    private final JavaPlugin plugin;

    public ExampleTask(JavaPlugin plugin) {
        this.plugin = plugin;
    }

    @Override
    public void run() {
        // Put your task's code here
    }
}
And then, to run the code in its own thread and leave the main server thread to take care of the game uninterrupted, schedule your task asynchronously like so:
BukkitTask task = new ExampleTask(this.plugin).runTaskAsynchronously(this.plugin);
This should avoid the lag you mention. Just be careful about concurrency issues and note that Bukkit's docs specify that no Bukkit API interactions should happen inside asynchronous tasks. So just perform your query and any validation/parsing in the Task and pass the results back to the server thread for use in-game if needed.
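A hedged sketch of that hand-off (plugin, player and queryLevelFromDatabase are illustrative placeholders; runTaskAsynchronously and runTask are the Bukkit scheduler calls):

new BukkitRunnable() {
    @Override
    public void run() {
        final int level = queryLevelFromDatabase(); // JDBC work, off the main thread

        // hop back onto the main server thread before touching the Bukkit API
        new BukkitRunnable() {
            @Override
            public void run() {
                player.sendMessage("Your level is " + level);
            }
        }.runTask(plugin);
    }
}.runTaskAsynchronously(plugin);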

NullPointerException, at thread

I'm coding a Java application that decodes TCAP frames read from a text file and then inserts the decoded data into a database (Oracle). At the beginning, decoding and insertion are performed perfectly, but after a certain number of records have been decoded and inserted, it starts triggering these errors in the thread that handles the insertion into the database:
"java.lang.OutOfMemoryError: unable to create new native thread"
"Exception in thread "Thread-465" java.lang.NullPointerException"
Code extract:
public void run() {
    Conn_BD connexion = new Conn_BD("thin:@localhost:1521:XE", "SYSTEM", "SYSTEM");
    java.sql.Connection cn = connexion.connect();
    try {
        Statement instruction = cn.createStatement();
        instruction.executeUpdate("update tcapBegin set " +
            trame + "='" + trame_val + "' where " + message + " like '" + trameId + "'");
        cn.close();
    } catch (SQLException e) {
        System.out.print(e);
    }
}
Does anyone have an idea to resolve this problem?
Instead of instantiating a thread per insert (or whatever other action you do), try to create a queue of "tasks", where each task represents an insert that such a thread should perform.
When you have such a queue, you need a thread that "pushes" tasks into the queue and threads that perform the actual work by "pulling" them out of the queue and executing them.
By working this way you won't need a thread per task; instead you'll be able to use a small set of general-purpose threads that take a task from the queue, execute it, and return to the queue for more work.
P.S. When you reuse your threads, don't create a connection in the run method; you don't have to recreate the connection each time.
Read about Executors and Thread Pooling
See Producer Consumer
See DB Connection pooling
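As a minimal sketch of that pattern with java.util.concurrent (the frame type and the insert body are placeholders):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class InsertQueue {
    // a small, fixed set of worker threads replaces one thread per insert
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    public void submitAll(List<String> frames) {
        for (String frame : frames) {
            workers.submit(() -> insertIntoDatabase(frame)); // one task per insert
        }
        workers.shutdown(); // stop accepting tasks; queued ones still run
    }

    private void insertIntoDatabase(String frame) {
        // the JDBC update goes here, using a shared or pooled connection
    }
}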
You have this statement at the beginning of the thread:
Conn_BD connexion = new Conn_BD("thin:@localhost:1521:XE", "SYSTEM", "SYSTEM");
It seems you are creating a new connection every time a new thread is created. Creating a connection and then executing a statement takes time, so by the time your first connection gets closed, so many other connections have been created that you cannot create any more.
A better option would be to use one static reference for the connection:
private static Conn_BD connexion = new Conn_BD("thin:@localhost:1521:XE", "SYSTEM", "SYSTEM");
private static java.sql.Connection cn = connexion.connect();

public void run() {
    Statement instruction = cn.createStatement();
    // code here
    instruction.close();
}
Once all the threads are done executing, close the connection.

Is this use of PreparedStatements in a Thread in Java correct?

I'm still an undergrad just working part time and so I'm always trying to be aware of better ways to do things. Recently I had to write a program for work where the main thread of the program would spawn "task" threads (for each db "task" record) which would perform some operations and then update the record to say that it has finished. Therefore I needed a database connection object and PreparedStatement objects in or available to the ThreadedTask objects.
This is roughly what I ended up writing. Is creating a PreparedStatement object per thread a waste? I thought static PreparedStatements could create race conditions...
Thread A: stmt.setInt();
Thread B: stmt.setInt();
Thread A: stmt.execute();
Thread B: stmt.execute();
A's version never gets executed.
Is this thread safe? Is creating and destroying PreparedStatement objects that are always the same not a huge waste?
public class ThreadedTask implements Runnable {
    private final PreparedStatement taskCompleteStmt;

    public ThreadedTask() {
        //...
        taskCompleteStmt = Main.db.prepareStatement(...);
    }

    public void run() {
        //...
        taskCompleteStmt.executeUpdate();
    }
}

public class Main {
    public static final Connection db = DriverManager.getConnection(...);
}
I believe it is not a good idea to share database connections (and prepared statements) between threads. JDBC does not require connections to be thread-safe, and I would expect most drivers to not be.
Give every thread its own connection (or synchronize on the connection for every query, but that probably defeats the purpose of having multiple threads).
Is creating and destroying PreparedStatement objects that are always the same not a huge waste?
Not really. Most of the work happens on the server, and will be cached and re-used there if you use the same SQL statement. Some JDBC drivers also support statement caching, so that even the client-side statement handle can be re-used.
You could see substantial improvement by using batched queries instead of (or in addition to) multiple threads, though. Prepare the query once, and run it for a lot of data in a single big batch.
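A sketch of the batched variant (the table, column and id source are illustrative placeholders):

PreparedStatement ps = connection.prepareStatement(
    "UPDATE tasks SET complete = 1 WHERE id = ?");
for (long id : finishedTaskIds) { // finishedTaskIds is a placeholder collection
    ps.setLong(1, id);
    ps.addBatch();                // queue the parameter set client-side
}
ps.executeBatch();                // one round trip for the whole batch
ps.close();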
Thread safety is not the issue here. Everything looks syntactically and functionally fine, and it should work, for about half an hour. Leaking of resources is the real issue here: the application will crash after roughly that long because you never close the statements and connections after use. The database will in turn sooner or later close the connection itself so that it can reclaim it.
That said, you don't need to worry about caching of prepared statements. The JDBC driver and the DB will take care of that task. Worry instead about resource leaking, and make your JDBC code as solid as possible:
public class ThreadedTask implements Runnable {
    public void run() {
        Connection connection = null;
        PreparedStatement statement = null;
        try {
            connection = DriverManager.getConnection(url);
            statement = connection.prepareStatement(sql);
            // ...
        } catch (SQLException e) {
            // Handle?
        } finally {
            if (statement != null) try { statement.close(); } catch (SQLException logOrIgnore) {}
            if (connection != null) try { connection.close(); } catch (SQLException logOrIgnore) {}
        }
    }
}
To improve connecting performance, make use of a connection pool like c3p0 (this by the way does not mean that you can change the way how you write the JDBC code; always acquire and close the resources in the shortest possible scope in a try-finally block).
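For reference, acquiring connections through c3p0 might look like this sketch (the URL and credentials are placeholders):

import java.sql.Connection;
import com.mchange.v2.c3p0.ComboPooledDataSource;

ComboPooledDataSource pool = new ComboPooledDataSource();
pool.setJdbcUrl("jdbc:mysql://localhost/db"); // placeholder URL
pool.setUser("user");
pool.setPassword("password");

Connection connection = pool.getConnection(); // borrowed from the pool
try {
    // ... short-scoped JDBC work ...
} finally {
    connection.close(); // returns the connection to the pool rather than closing it
}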
You're best off using a connection pool and having each thread request a connection from the pool. Create your statements on the connection you're handed, remembering to close it, and thereby release it back to the pool, when you're done. The benefit of using the pool is that you can easily increase the number of available connections should you find that thread concurrency is becoming an issue.
