Why am I getting MySQL lag in my Minecraft Bukkit plugin?

I made a Bukkit plugin that uses MySQL, and I need to know why I get lag whenever this code runs. I run the Minecraft server on my own machine, and the MySQL server is hosted with HostGator. Here's my code:
openConnection();
try {
    int level1 = 0;
    if (playerDataContainsPlayer(p)) {
        PreparedStatement sql = connection.prepareStatement(
                "SELECT level FROM `player_data` WHERE player=?;");
        sql.setString(1, p.getName());
        ResultSet result = sql.executeQuery();
        result.next();
        level1 = result.getInt("level");
        PreparedStatement levelUpdate = connection.prepareStatement(
                "UPDATE `player_data` SET level=? WHERE player=?;");
        levelUpdate.setInt(1, level1 + 1);
        levelUpdate.setString(2, p.getName());
        levelUpdate.executeUpdate();
        levelUpdate.close();
        result.close();
        sql.close();
    } else {
        PreparedStatement newPlayer = connection.prepareStatement(
                "INSERT INTO `player_data` values(?,0,1,0);");
        newPlayer.setString(1, p.getName());
        newPlayer.execute();
        newPlayer.close();
    }
} catch (Exception e1) {
    e1.printStackTrace();
} finally {
    closeConnection();
}
Here is my openConnection method:
public static synchronized void openConnection() {
    try {
        // left empty on purpose; I don't want to give that info out
        connection = DriverManager.getConnection("");
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Here's my closeConnection method:
public static synchronized void closeConnection() {
    try {
        connection.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

There are a few things you can do to reduce your query latency:
If your app is query intensive, use persistent connections and keep them open instead of opening a new connection every time you need to access the database (see the sketch after the code sample below).
Run the MySQL server locally to cut connection times.
Index the search fields of your tables (e.g. player on player_data) so lookups run faster.
Run the MySQL server on a powerful, dedicated machine with SSDs and lots of RAM, and set the proper parameters in my.cnf (worker threads, max processes, max number of connections, memory limits, buffer sizes) to make use of that RAM and processing power and speed up search and processing times. Existing Q&As on memory settings may help, but the best you can do is your own exhaustive online research and testing. Do your homework!
Use some kind of caching system (such as memcached) to speed up reads.
If your app is data intensive and has to support a huge number of connections, get more bandwidth or even consider setting up a cluster to balance the load.
Reduce the number of queries! You don't need to query the database twice to increase the level.
Try:
if (playerDataContainsPlayer(p)) {
    PreparedStatement levelUpdate = connection.prepareStatement(
            "UPDATE player_data SET level=level+1 WHERE player=?;");
    levelUpdate.setString(1, p.getName());
    levelUpdate.executeUpdate();
    levelUpdate.close();
} else {
    ...
}
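On the first point, here is a minimal sketch of keeping one connection open for the plugin's lifetime instead of wrapping every query in openConnection()/closeConnection(). The Database class name is an assumption for illustration, and the URL stays blanked out as in the question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class Database {

    private static final String URL = ""; // left empty, as in the question
    private static Connection connection;

    // reuse the one open connection instead of opening a new one per query;
    // isClosed() is a rough guard against the server having dropped it
    public static synchronized Connection getConnection() throws SQLException {
        if (connection == null || connection.isClosed()) {
            connection = DriverManager.getConnection(URL);
        }
        return connection;
    }
}

A step up from this is a real connection pool (HikariCP, c3p0), which validates and replaces broken connections for you; but even this singleton removes the per-query connect/teardown, which over a remote link to shared hosting is typically the dominant cost.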

Sounds like you are running your query on the main server thread. You really shouldn't do this, especially if your SQL server isn't on the local machine.
Have a read of the tutorial on running CPU-intensive or longer-running tasks in the background to avoid this type of performance loss.
What you need to do is put your code into a BukkitRunnable:
public class ExampleTask extends BukkitRunnable {

    private final JavaPlugin plugin;

    public ExampleTask(JavaPlugin plugin) {
        this.plugin = plugin;
    }

    @Override
    public void run() {
        // Put your task's code here
    }
}
And then, to run the code in its own thread, leaving the main server thread to take care of the game uninterrupted, schedule your task like so:
BukkitTask task = new ExampleTask(this.plugin).runTaskAsynchronously(this.plugin);
This should avoid the lag you mention. Just be careful about concurrency issues, and note that Bukkit's docs specify that no Bukkit API interactions may happen inside asynchronous tasks. So perform only your query and any validation/parsing in the task, and pass the results back to the main server thread for use in-game if needed.
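A hedged sketch of that handoff using nested BukkitRunnables; loadLevel is a hypothetical blocking JDBC helper, and plugin and p are assumed to come from the surrounding plugin code:

new BukkitRunnable() {
    @Override
    public void run() {
        // off the main thread: safe to block on JDBC here
        final int level = loadLevel(p.getName()); // hypothetical helper running the SELECT

        // hop back onto the main thread before touching the Bukkit API
        new BukkitRunnable() {
            @Override
            public void run() {
                p.sendMessage("You are now level " + level);
            }
        }.runTask(plugin);
    }
}.runTaskAsynchronously(plugin);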

Related

Multiple DB connections using a connection pool vs Single connection with multiple statements

I am developing a server that works with MySQL, and I have been trying to understand the advantage of working with a connection pool versus a single connection that is kept open and passed down to the different methods throughout the application.
The idea of working with a connection pool is understood; however, there are scenarios where it could create a bottleneck that would not exist without the pool.
Let me explain using code.
Say the following method is called connectionPoolSize + 1 times simultaneously (e.g. 10), so that we exhaust the connections in the pool; the last query attempt will fail since no connection is available:
public void getData() {
    Connection con = null;
    Statement s = null;
    ResultSet rs = null;
    try {
        con = connectionPool.getConnection();
        s = con.createStatement();
        rs = s.executeQuery("SELECT * FROM MY_TABLE;");
        // Some long process that takes a while....
    } catch (Exception e) {
        throw new RuntimeException(e);
    } finally {
        rs.close();
        s.close();
        con.close();
    }
}
However, if we use a single connection that is kept open and shared by all methods, no method ever has to wait for a connection to be returned to the pool (which, as we saw above, can take some time).
E.g. calling this method 10 times as well would work:
public void getData(Connection con) {
    Statement s = null;
    ResultSet rs = null;
    try {
        s = con.createStatement();
        rs = s.executeQuery("SELECT * FROM MY_TABLE;");
        // Some long process that takes a while....
        // But this time we don't care that it takes time,
        // since nobody is waiting for us to release the connection
    } catch (Exception e) {
        throw new RuntimeException(e);
    } finally {
        rs.close();
        s.close();
    }
}
Obviously the statements and result sets are still kept open until the method finishes, but this doesn't affect the connection itself, so it doesn't hold back any other attempt to use that connection.
I assume there is some further insight I am missing. I understand the standard is to work with connection pools, so how do you handle these issues?
It depends on your use case. Suppose you are building a web application that will be used by multiple users simultaneously. If you have a single connection, the queries from the different user threads will be queued, and the single DB connection will process them one by one. So in a multi-user system (which is the normal case), a single DB connection will be a bottleneck and won't work. Additionally, you need to take care of thread safety when writing and committing data to the DB.
If you need truly simultaneous query execution in the DB, go with a connection pool. Then different user threads can use different connections and execute queries in parallel.
Connection pools are used to keep a number of opened connections ready for use and to eliminate the need to open a new connection each time it is required.
If your application is single threaded then you probably don’t need a pool and can use a single connection instead.
Even though sharing a connection between multiple threads is permitted there are some pitfalls of this approach. Here is a description for Java DB: https://docs.oracle.com/javadb/10.8.3.0/devguide/cdevconcepts89498.html. You should check if this is also the case for MySQL.
In many cases it is easier to have an individual connection for each thread.
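To make the trade-off concrete, here is a hedged sketch of the pooled approach, where each caller borrows its own connection and returns it via try-with-resources. HikariCP and the URL/credentials are assumptions for illustration; any JDBC pool looks much the same:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolExample {

    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/test"); // placeholder URL
        config.setUsername("user");    // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(10); // upper bound on truly parallel queries
        HikariDataSource pool = new HikariDataSource(config);

        // each thread borrows a connection; close() returns it to the pool
        try (Connection con = pool.getConnection();
             Statement s = con.createStatement();
             ResultSet rs = s.executeQuery("SELECT * FROM MY_TABLE")) {
            while (rs.next()) {
                // process the row, holding the connection as briefly as possible
            }
        }

        pool.close();
    }
}

The pool size, not the sharing model, then becomes the tuning knob: if callers routinely hold connections through long processing, the fix is to shorten the hold time (copy the results out and process after releasing) or raise the pool size, not to fall back to one shared connection.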

How to send request to cassandra at a particular rate using Guava RateLimiter?

I am using the DataStax Java driver 3.1.0 to connect to a Cassandra cluster, and my Cassandra cluster version is 2.0.10. I am writing asynchronously with QUORUM consistency:
private final ExecutorService executorService = Executors.newFixedThreadPool(10);
private final Semaphore concurrentQueries = new Semaphore(1000);

public void save(String process, int clientid, long deviceid) {
    String sql = "insert into storage (process, clientid, deviceid) values (?, ?, ?)";
    try {
        BoundStatement bs = CacheStatement.getInstance().getStatement(sql);
        bs.setConsistencyLevel(ConsistencyLevel.QUORUM);
        bs.setString(0, process);
        bs.setInt(1, clientid);
        bs.setLong(2, deviceid);

        concurrentQueries.acquire();
        ResultSetFuture future = session.executeAsync(bs);
        Futures.addCallback(future, new FutureCallback<ResultSet>() {
            @Override
            public void onSuccess(ResultSet result) {
                concurrentQueries.release();
                logger.logInfo("successfully written");
            }

            @Override
            public void onFailure(Throwable t) {
                concurrentQueries.release();
                logger.logError("error= ", t);
            }
        }, executorService);
    } catch (Exception ex) {
        logger.logError("error= ", ex);
    }
}
My save method above will be called from multiple threads at a very fast rate. If I write faster than my Cassandra cluster can handle, it starts throwing errors, and I want all my writes to reach Cassandra without any loss.
Question:
I was thinking of using some sort of queue or buffer to enqueue requests (e.g. java.util.concurrent.ArrayBlockingQueue). "Buffer full" would mean that clients should wait. The buffer would also be used to re-enqueue failed requests; to be fair, though, failed requests should probably be put at the front of the queue so they are retried first. We would also need to handle the situation where the queue is full and new failed requests arrive at the same time. A single-threaded worker would then pick requests from the queue and send them to Cassandra. Since it would not do much, it is unlikely to become a bottleneck. This worker could apply its own rate limits, e.g. based on timing with com.google.common.util.concurrent.RateLimiter.
What is the best way to implement this queue or buffer feature so that it also applies a particular Guava rate limit while writing into Cassandra? If there is a better approach, let me know as well. I want to write to Cassandra at 2000 requests per second (this should be configurable so that I can play with it to find the optimal setting).
As noted in the comments below, if memory keeps increasing we can use a Guava Cache or CLHM to keep dropping old records so the program doesn't run out of memory. We will have around 12GB of memory on the box, and these records are very small, so I don't see that being a problem.
If I write faster than my Cassandra cluster can handle, it starts throwing errors, and I want all my writes to reach Cassandra without any loss.
The DataStax driver lets you configure the number of connections per host and the number of concurrent requests per connection (see the PoolingOptions settings).
Adjust these settings to decrease the pressure on the Cassandra cluster.
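A hedged sketch of both knobs for driver 3.x: PoolingOptions to bound the pressure on the cluster, and Guava's RateLimiter for the configurable 2000 writes per second the question asks about. The contact point and the numbers are placeholders to tune, not recommendations:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;
import com.google.common.util.concurrent.RateLimiter;

public class WriteThrottling {

    // configurable write rate; acquire() blocks each caller just long enough
    // to keep the overall rate at the configured permits per second
    static final RateLimiter WRITE_RATE = RateLimiter.create(2000.0);

    public static Cluster buildCluster() {
        PoolingOptions poolingOptions = new PoolingOptions()
                .setConnectionsPerHost(HostDistance.LOCAL, 2, 4)       // core/max connections per node
                .setMaxRequestsPerConnection(HostDistance.LOCAL, 512); // cap in-flight requests per connection

        return Cluster.builder()
                .addContactPoint("127.0.0.1") // placeholder contact point
                .withPoolingOptions(poolingOptions)
                .build();
    }
}

With this in place, calling WRITE_RATE.acquire() in save() just before session.executeAsync(bs) throttles producers to the target rate, while the existing semaphore still caps the number of in-flight queries.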

How does LIMIT in MySQL query make it possible to cancel stream

I am wondering how LIMIT in a query prevents the application thread reading from a MySQL stream from hanging in the close operation, and why LIMIT enables query cancellation, which otherwise does not work.
Statement statement = connection.createStatement(
        java.sql.ResultSet.TYPE_FORWARD_ONLY,
        java.sql.ResultSet.CONCUR_READ_ONLY);
statement.setFetchSize(Integer.MIN_VALUE); // streaming mode
// Statement statement = connection.createStatement(); // this can be canceled

new Thread(new Runnable() {
    // tries to cancel the query after streaming starts
    @Override
    public void run() {
        try {
            Thread.sleep(5);
            statement.cancel(); // does nothing when streaming
            // statement.close(); // makes the application thread hang
        } catch (SQLException | InterruptedException e) {
            e.printStackTrace();
        }
    }
}).start();

// adding LIMIT to the query makes it possible to cancel the stream,
// even if the limit is not yet reached
ResultSet resultSet = statement.executeQuery("SOME_LONG_RUNNING_QUERY");
int i = 0;
while (resultSet.next()) {
    System.out.println(++i);
}
connection.close();
A regular (non-streaming) query can be safely cancelled, with or without LIMIT. In streaming mode, however, the close/cancel operations simply make the application thread hang or do nothing, presumably while it performs a blocking read on the socket.
If I add some large LIMIT to the long-running query then, as expected, the cancel() operation results in:
com.mysql.jdbc.exceptions.jdbc4.MySQLQueryInterruptedException: Query execution was interrupted
I understand there are a couple of questions on this matter, but none of them discusses the aspects below:
Why does LIMIT make it possible to cancel a streaming query?
Can this bug/feature be relied upon, could it change in future releases, and is there any official explanation?
LIMIT makes it so that only a certain number of records are pulled from the database. It is useful, and the best thing to reach for, when you have a large query that is liable to hang.
In your case, when you stream a query that has both a limit and a close statement, the order of operations applies: since the LIMIT is reached first, it ends your query. That would explain why the close statement is never reached and why you receive the exception instead.
I hope this clears up some of the issues you are having.
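If the goal is simply a cancellable long read, one hedged alternative (not from the original answer) is to avoid an unbounded stream altogether and page through the table with LIMIT/OFFSET, checking a cancellation flag between chunks. The table and column names here are illustrative:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.concurrent.atomic.AtomicBoolean;

public class ChunkedReader {

    // set from another thread to stop between chunks
    final AtomicBoolean cancelRequested = new AtomicBoolean(false);

    void readAll(Connection connection) throws SQLException {
        final int chunk = 10_000;
        int offset = 0;
        boolean more = true;
        while (more && !cancelRequested.get()) {
            try (PreparedStatement ps = connection.prepareStatement(
                    "SELECT id, payload FROM big_table ORDER BY id LIMIT ? OFFSET ?")) {
                ps.setInt(1, chunk);
                ps.setInt(2, offset);
                try (ResultSet rs = ps.executeQuery()) {
                    int rows = 0;
                    while (rs.next()) {
                        rows++; // process the row here
                    }
                    more = (rows == chunk); // a short page means we reached the end
                    offset += rows;
                }
            }
        }
    }
}

Keyset pagination (WHERE id > ? ... LIMIT ?) scales better than OFFSET for deep pages, but the shape is the same: every individual query is bounded, so no single call can block indefinitely.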

Which is a suitable architecture?

I have been testing a socket-connection program designed so that the socket connection runs in its own thread and enqueues incoming data, while a separate DB-processor thread picks messages from the queue and runs a number of SQL statements. I notice the bottleneck is the DB processing. Is what I am doing the right architecture, or should I change or improve the design flow?
The requirement is to capture data via socket connections and run through a db process then store it accordingly.
public class cServer {

    private final LinkedBlockingQueue<String> databaseQueue = new LinkedBlockingQueue<String>();

    class ConnectionHandler implements Runnable {

        private final Socket receivedSocketConn1;

        ConnectionHandler(Socket receivedSocketConn1) {
            this.receivedSocketConn1 = receivedSocketConn1;
        }

        // gets data from an inbound connection and queues it for database update
        public void run() {
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(receivedSocketConn1.getInputStream()))) {
                String message;
                while ((message = in.readLine()) != null) {
                    databaseQueue.add(message); // put on the db queue
                }
            } catch (IOException e) {
                e.printStackTrace(System.out);
            }
        }
    }

    class DatabaseProcessor implements Runnable {

        private Connection dbconn;

        public void run() {
            // open the database connection once, up front
            createConnection();
            while (true) {
                try {
                    // keep taking messages added by ConnectionHandler; for each one
                    // I run a number of queries (selects, inserts and updates)
                    String message = databaseQueue.take();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }

        void createConnection() {
            System.out.println("Create Connection");
            try {
                dbconn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/test1?" + "user=user1&password=*******");
                dbconn.setAutoCommit(false);
            } catch (Throwable ex) {
                ex.printStackTrace(System.out);
            }
        }
    }

    public static void main(String[] args) {
        cServer server = new cServer();
        new Thread(server.new DatabaseProcessor()).start(); // starts the db worker
        try {
            final ServerSocket serverSocketConn = new ServerSocket(8000);
            while (true) {
                try {
                    Socket socketConn1 = serverSocketConn.accept();
                    new Thread(server.new ConnectionHandler(socketConn1)).start();
                } catch (Exception e) {
                    e.printStackTrace(System.out);
                }
            }
        } catch (Exception e) {
            e.printStackTrace(System.out);
        }
    }
}
It's hard (read: impossible) to judge an architecture without the requirements, so I will just make some up:
Maximum throughput:
Don't use a database; write to a flat file, possibly stored on something fast like a solid-state disk.
Guaranteed persistence (if the user gets an answer that isn't an error, the data must be stored securely):
Make the whole thing single-threaded and save everything in a database with redundant disks. Make sure you have a competent DBA who knows about backup and recovery, and test those at regular intervals.
Minimum time for finishing the user request:
Your approach seems reasonable.
Minimum time for finishing the user request + maximum throughput + good persistence (whatever that means):
Your approach seems good. You might plan for multiple threads processing the DB requests (see the sketch below). But test how much (more) throughput you actually get and where precisely the bottleneck is (network, DB CPU, IO, lock contention...). Make sure you don't introduce bugs by using a concurrent approach.
Generally, your architecture sounds correct. You need to make sure that your two threads are correctly synchronised when reading/writing from/to the queue.
I am not sure what you mean by the DB processing being the bottleneck. If DB processing takes a long time and you end up with a long queue, there's not much you can do apart from having multiple threads perform the DB processing (assuming the processing can be parallelised, of course) or doing some performance tuning in the DB thread.
If you post the specific code that you believe is causing the problem, we can have another look.
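A hedged sketch of that multi-worker variant, reusing the databaseQueue idea from the question; the worker count and JDBC URL are placeholders to measure against the real bottleneck, not recommendations:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class MultiWorkerDb {

    static final LinkedBlockingQueue<String> databaseQueue = new LinkedBlockingQueue<String>();
    static final String URL = "jdbc:mysql://localhost:3306/test1?user=user1&password=secret"; // placeholder

    public static void main(String[] args) {
        final int workers = 4; // tune against measured throughput
        ExecutorService dbWorkers = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            dbWorkers.submit(new Runnable() {
                public void run() {
                    // one connection per worker: JDBC connections aren't meant to be shared
                    try (Connection con = DriverManager.getConnection(URL)) {
                        while (!Thread.currentThread().isInterrupted()) {
                            String message = databaseQueue.take();
                            // run the selects/inserts/updates for this message
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } catch (SQLException e) {
                        e.printStackTrace();
                    }
                }
            });
        }
    }
}

Note that multiple workers give up the strict ordering the single DatabaseProcessor provided, so this only applies when the queued messages are independent of each other.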
You don't need two threads for this simple task. Just read the socket and execute the statements.

Is this use of PreparedStatements in a Thread in Java correct?

I'm still an undergrad just working part time, so I'm always trying to be aware of better ways to do things. Recently I had to write a program for work where the main thread spawns "task" threads (one for each DB "task" record) which perform some operations and then update the record to say they have finished. Therefore I needed a database connection object and PreparedStatement objects in, or available to, the ThreadedTask objects.
This is roughly what I ended up writing. Is creating a PreparedStatement object per thread a waste? I thought static PreparedStatements could create race conditions:
Thread A: stmt.setInt();
Thread B: stmt.setInt();
Thread A: stmt.execute();
Thread B: stmt.execute();
A's version never gets executed.
Is this thread safe? Is creating and destroying PreparedStatement objects that are always the same not a huge waste?
public class ThreadedTask implements Runnable {

    private final PreparedStatement taskCompleteStmt;

    public ThreadedTask() {
        //...
        taskCompleteStmt = Main.db.prepareStatement(...);
    }

    public void run() {
        //...
        taskCompleteStmt.executeUpdate();
    }
}

public class Main {
    public static final Connection db = DriverManager.getConnection(...);
}
I believe it is not a good idea to share database connections (and prepared statements) between threads. JDBC does not require connections to be thread-safe, and I would expect most drivers to not be.
Give every thread its own connection (or synchronize on the connection for every query, but that probably defeats the purpose of having multiple threads).
Is creating and destroying PreparedStatement objects that are always the same not a huge waste?
Not really. Most of the work happens on the server, and will be cached and re-used there if you use the same SQL statement. Some JDBC drivers also support statement caching, so that even the client-side statement handle can be re-used.
You could see substantial improvement by using batched queries instead of (or in addition to) multiple threads, though. Prepare the query once, and run it for a lot of data in a single big batch.
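A minimal sketch of that batched variant, assuming a hypothetical tasks table and a list of finished task IDs; one prepared statement, one round trip for the whole batch:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchComplete {

    // marks every task in taskIds complete with a single batched statement
    static void markComplete(Connection connection, List<Long> taskIds) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(
                "UPDATE tasks SET complete = 1 WHERE id = ?")) {
            for (long id : taskIds) {
                ps.setLong(1, id);
                ps.addBatch();   // queue this set of parameters
            }
            ps.executeBatch();   // one round trip for the whole batch
        }
    }
}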
Thread safety is not the issue here. Everything looks syntactically and functionally fine, and it should work for about half an hour. Leaking of resources is the real issue. The application will crash after about half an hour because you never close the statements and connections after use. The database will in turn sooner or later close the connection itself so that it can claim it back.
That said, you don't need to worry about caching of prepared statements; the JDBC driver and the DB take care of that. Rather, worry about resource leaking and make your JDBC code as solid as possible:
public class ThreadedTask implements Runnable {

    public void run() {
        Connection connection = null;
        Statement statement = null;
        try {
            connection = DriverManager.getConnection(url);
            statement = connection.prepareStatement(sql);
            // ...
        } catch (SQLException e) {
            // Handle?
        } finally {
            if (statement != null) try { statement.close(); } catch (SQLException logOrIgnore) {}
            if (connection != null) try { connection.close(); } catch (SQLException logOrIgnore) {}
        }
    }
}
To improve connection performance, make use of a connection pool like c3p0 (this, by the way, does not mean that you can change the way you write the JDBC code; always acquire and close the resources in the shortest possible scope, in a try-finally block).
You're best off using a connection pool and having each thread request a connection from the pool. Create your statements on the connection you're handed, remembering to close it, and thereby release it back to the pool, when you're done. The benefit of using the pool is that you can easily increase the number of available connections should you find that thread concurrency is becoming an issue.
