Updating MySQL Database every second - java

I'm making an online game. I'm currently testing it with 300 players and I have a problem: I have to update about 300 rows in the database every second, but the update takes far too long. It takes about 11143 ms (11 s), which is much too slow for a task that has to finish in under one second. I'm running the updates from Java; I already tried PHP and it's just as slow. The update SQL query is very simple:
String query5 = "UPDATE naselje SET zelezo = " + zelezo + ", zlato = " + zlato + ", les = " + les + ", hrana = " + hrana + " WHERE ID =" + ID;
Does anyone know how to make these per-second database updates faster, or is there another way to update the game resources (gold, wood, food, ...)?
My configuration:
Intel Core i5 M520 2.40GHz
6 GB RAM

You are probably updating each row separately; you need to use a batch update instead.
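For reference, a minimal JDBC sketch of what a batched update could look like (the open conn connection, the Player class and its getters are assumptions; the table and column names are taken from the question):

// Sketch only: one PreparedStatement, all ~300 rows added to a single batch,
// committed in one transaction. Assumes an open java.sql.Connection `conn`
// and a hypothetical Player class holding the per-player resource values.
String sql = "UPDATE naselje SET zelezo = ?, zlato = ?, les = ?, hrana = ? WHERE ID = ?";
conn.setAutoCommit(false);
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    for (Player p : players) {
        ps.setInt(1, p.getZelezo());
        ps.setInt(2, p.getZlato());
        ps.setInt(3, p.getLes());
        ps.setInt(4, p.getHrana());
        ps.setInt(5, p.getId());
        ps.addBatch();
    }
    ps.executeBatch();   // one round trip instead of 300 (add rewriteBatchedStatements=true to the MySQL JDBC URL)
    conn.commit();
} catch (SQLException e) {
    conn.rollback();
    throw e;
}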

Switch to PDO if you are not already on it, and use transactions. Also, convert your tables from MyISAM to InnoDB.
InnoDB works better with larger tables that are frequently read and written.
This is one of the workloads it was designed to handle: many SELECT/UPDATE/INSERT statements that are very similar in style.
It is also good coding practice to use transactions when handling multiple consecutive calls of this kind.
Use this Google Search to learn more about PHP PDO and MySQL transactions.
Example:
With Transactions
$pdo = new PDO(...);
$pdo->beginTransaction();
for ($i = 0; $i < 1001; $i++) {
    $pdo->query("UPDATE table SET column='$var' WHERE ID = $i");
}
$pdo->commit();

Related

What can be causing a foreign key exception when FOREIGN_KEY_CHECKS is 0?

I'm trying to fix a little plugin that I'm making for Minecraft servers. The plugin tries to adjust automatically to the server's needs: it first renames the existing tables to _old tables and creates new ones, then uses some objects holding manual migration decisions to parse or update specific information into the new tables, copies across all the data that isn't already duplicated, and finally removes the old tables.
The code is kind of messy; I haven't had much time lately, but I was hoping to get a free week to rewrite the whole plugin. Everything used to work fine, until one day I updated the plugin on a test server that uses MySQL. I had been running the same code on that server without problems, but after some time without using it, it no longer works.
This is the part of the code that is failing:
protected boolean tables() {
    boolean update = false, result = update;
    if (!this.sql.execute(
            "CREATE TABLE IF NOT EXISTS information(param VARCHAR(16),value VARCHAR(16),CONSTRAINT PK_information PRIMARY KEY (param));",
            new Data[0]))
        return false;
    List<String> tlist = new ArrayList<>();
    try {
        this.sql.execute("SET FOREIGN_KEY_CHECKS=0;", new Data[0]);
        ResultSet set = this.sql.query("SELECT value FROM information WHERE `param`='version';", new Data[0]);
        String version = "";
        if (set.next())
            version = set.getString(1);
        if (!version.equals(MMOHorsesMain.getPlugin().getDescription().getVersion())) {
            update = true;
            ResultSet tables = this.sql.query("SHOW TABLES;", new Data[0]);
            while (tables.next()) {
                String name = tables.getString(1);
                if (!name.equals("information")) {
                    if (!this.sql.execute("CREATE TABLE " + name + "_old LIKE " + name + ";", new Data[0]))
                        throw new Exception();
                    if (!this.sql.execute("INSERT INTO " + name + "_old SELECT * FROM " + name + ";", new Data[0]))
                        throw new Exception();
                    tlist.add(name);
                }
            }
            String remove = "";
            for (String table : tlist)
                remove = String.valueOf(remove) + (remove.isEmpty() ? "" : ",") + table;
            this.sql.reconnect();
            this.sql.execute("DROP TABLE IF EXISTS " + remove + ";", new Data[0]);
The database stores an extra value: the plugin version. I use it to check whether the database comes from another version and, if so, regenerate it. This works fine on SQLite; the problem only appears on MySQL.
The first part fetches the current version and compares it. The plugin starts by disabling foreign key checks. This isn't the nicest part but, as I said, I haven't had time to rework this code; it also comes from a decompiled build, because due to some GitHub issues I lost part of the latest changes. If an update is required, it turns every table into an _old table. Everything works fine up to here: the data is copied into the _old tables and handled correctly. The problem appears when it has to remove the original tables.
DROP TABLE IF EXISTS cosmetics,horses,inventories,items,trust,upgrades;
This is the SQL statement used to remove the original tables. I'm not sure whether it works this way, but it seems that the _old tables inherit the foreign keys of the original tables, and when I try to drop the originals it isn't allowed, even with FOREIGN_KEY_CHECKS set to 0. I added a debug check beforehand to confirm the checks were disabled, and they were. To simulate the environment people usually run this in, I'm using a prebuilt Minecraft hosting setup from a friend, with MariaDB 10.4.12.
I've asked him whether he updated it since the last time I prepared this server, but I'm still waiting for an answer. In any case, whether it's a newer or older MariaDB version, I'm trying to make the plugin as flexible as possible so it adapts to different versions without problems. Everything else seems to work, but since I can't delete the original tables, I can't replace them with the new format.
I hope this is just an error that happens with certain DB configurations, but I'd like an answer from someone knowledgeable to make sure I didn't upload a broken version.
Thank you nicomp, the answer was to keep the same session. My reconnect method is not very flexible; it came out of some bad experiences with high latency and very short-lived sessions, where the connection dropped easily and was detected incorrectly, so the code kept reconnecting and thereby wiping the session configuration (including FOREIGN_KEY_CHECKS=0).
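For anyone hitting the same thing: SET FOREIGN_KEY_CHECKS=0 is a session variable, so it only applies to the connection that issued it, and a reconnect in between silently resets it to 1. A minimal plain-JDBC sketch of keeping the whole sequence on one connection (the conn variable is an assumption; table names are taken from the question):

try (Statement st = conn.createStatement()) {
    st.execute("SET FOREIGN_KEY_CHECKS=0");          // affects THIS connection only
    st.execute("CREATE TABLE horses_old LIKE horses");
    st.execute("INSERT INTO horses_old SELECT * FROM horses");
    // ... same for the other tables ...
    st.execute("DROP TABLE IF EXISTS cosmetics,horses,inventories,items,trust,upgrades");
    st.execute("SET FOREIGN_KEY_CHECKS=1");          // restore the default before reusing the connection
}
// Reconnecting between the SET and the DROP opens a new session where
// FOREIGN_KEY_CHECKS is back to 1, which is why the DROP was rejected.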

Java PersistenceManager fetch returns fewer entities

In Java, fetching entities with a query sometimes returns fewer entities than expected in rare cases. I am using the JDO PersistenceManager. Is it fine to keep using it, or do I need to switch to a low-level Datastore fetch to solve this?
String query = "CUID == '" + cuidKey + "' && staffKey == '" + staffKey +"'&& StartTimeLong >= "+ startDate + " && StartTimeLong < " + endDate + " && status == 'confirmed'";
List<ResultJDO> tempResultList = jdoUtils.fetchEntitiesByQueryWithRangeOrder(ResultJDO.class, query, null, null, "StartTimeLong desc");
In rare cases the query returns only 4 entities, but most of the time it returns all 5.
jdoUtils is a PersistenceManager object.
Should I switch to a low-level Datastore fetch to get exact results?
I have tried researching the library you mentioned and similar issues, and have found nothing so far. With so little information it's hard to know why this is happening or how to fix it.
That said, the recommended way to interact programmatically with Google Cloud Platform products is through Google's client libraries, since they are tested and known to work in almost all cases. Using them also lets you open GitHub issues if you find a problem, so the developers can address it. For the rare cases where you need functionality they don't cover, you can open a feature request or call the APIs directly.
In addition to Google's libraries, there are two other Java options under active development: Objectify and Catatumbo.
I would suggest switching to the Java Datastore client libraries. You can find examples of interacting with Datastore in link1 and link2, and community-shared code samples on this programcreek page.
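As a rough illustration (not your exact schema: the kind name "Result" and the property names are lifted from your query string and may differ), the same query could look like this with the official com.google.cloud.datastore client library; note that combining equality filters with an inequality on StartTimeLong typically requires a composite index:

import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Query;
import com.google.cloud.datastore.QueryResults;
import com.google.cloud.datastore.StructuredQuery.CompositeFilter;
import com.google.cloud.datastore.StructuredQuery.OrderBy;
import com.google.cloud.datastore.StructuredQuery.PropertyFilter;

Datastore datastore = DatastoreOptions.getDefaultInstance().getService();

// Same filters as the JDO query string, expressed with the client library.
Query<Entity> query = Query.newEntityQueryBuilder()
        .setKind("Result")                               // assumed kind name
        .setFilter(CompositeFilter.and(
                PropertyFilter.eq("CUID", cuidKey),
                PropertyFilter.eq("staffKey", staffKey),
                PropertyFilter.ge("StartTimeLong", startDate),
                PropertyFilter.lt("StartTimeLong", endDate),
                PropertyFilter.eq("status", "confirmed")))
        .setOrderBy(OrderBy.desc("StartTimeLong"))
        .build();

QueryResults<Entity> results = datastore.run(query);
while (results.hasNext()) {
    Entity entity = results.next();
    // map the entity's properties back onto your result object here
}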

How to append SQL query results to an existing file in Java?

I am currently working on Java code that queries a database and extracts the results to a file.
So far there's no problem with small queries.
But I will soon have to extract large volumes of data, and I have been trying for a few days to implement the most efficient solution in order to limit memory consumption as much as possible.
As soon as I run a large query, memory on both the source and target machines gets saturated.
The Java version I use on the Red Hat Linux environment is java-1.8.0.
So far I have been able to redirect the result of my query to a file, but after reading a lot of documentation I can see there are many different ways to limit memory consumption.
DriverManager.registerDriver(new com.wily.introscope.jdbc.IntroscopeDriver());
Connection conn = DriverManager.getConnection("jdbc:introscope:net//" +
        user + ":" + password + "#" + hostname + ":" + port);

String query = "select * from metric_data"
        + " where agent='"
        + agents_filter
        + "' and metric='"
        + metrics_filter
        + "' and timestamp between "
        + queryInterval;

Statement ps = conn.createStatement();
ResultSet rs = ps.executeQuery(query);
rs.setFetchSize(Size);
ResultSetMetaData rsm = rs.getMetaData();

File output = new File("result");
PrintWriter out = new PrintWriter(new BufferedWriter(
        new OutputStreamWriter(
                new FileOutputStream(output), "UTF-8")), false);

for (int i = 1; i <= rs.getMetaData().getColumnCount(); i++) {
    String colName = rs.getMetaData().getColumnName(i);
    out.print(" " + colName + "\t\t" + "|");
}

while (rs.next()) {
    for (int i = 1; i <= rs.getMetaData().getColumnCount(); i++) {
        String colValue = rs.getString(i);
        out.print(" " + colValue + "\t" + "|");
    }
    out.println();
}

out.close();
out.flush();
rs.close();
ps.close();
conn.close();
Currently the whole result is loaded into memory and then written to my file, but as soon as the query is too large I get the following messages:
Exception in thread "PO:client_main Mailman 2" java.lang.OutOfMemoryError: Java heap space
Exception in thread "UnknownHub Hub Receive 1" java.lang.lang.OutOfMemoryError: Java heap space
I would like to be able to write, say, 1000 lines at a time to the file so as not to saturate memory.
The files can sometimes reach 40 GB.
Execution time is not really a problem, but memory consumption is a really important criterion.
I am far from being a Java professional, which is why I need a little help from you.
Thank you in advance for your time.
Constructing your SQL string by concatenating strings is a security hole. Imagine those variables holding something like "1'; DROP ALL TABLES; --". Even if you know the strings are 'safe' here, code changes, and you should not adopt bad habits. Fix this; you can use PreparedStatement to fix it.
Metadata isn't free. Cache that stuff. Specifically, cache the value of rs.getMetaData().getColumnCount() instead of calling it on every iteration.
For real speed here, run an SQL command that tells the DB engine to dump the data directly to a file, and then transfer that file if it isn't on the local host. You can't really go any faster than that.
You can't flush after close, and close implies flush, so you can simply remove the flush() line.
Assuming your fetch size isn't ludicrously large, nothing in this code should by itself cause an out-of-memory error. So either it's the repeated invocations of getMetaData (in which case caching the column count fixes your problem), or the DB engine and/or its JDBC driver is badly written; I haven't heard of Introscope, which is why I mention it. If the driver is the culprit, at best you can use SQL OFFSET and LIMIT to split your query into 'pages' and avoid grabbing too many results at once, but without an ORDER BY in your SQL the DB engine is technically allowed to change the row order on you, and with paging the process may become quite slow.
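Putting those points together, a hedged sketch (splitting queryInterval into two bound parameters and the fetch size value are assumptions; check what the Introscope driver actually supports):

// PreparedStatement with bound parameters, column count cached once,
// rows streamed to the file as they arrive instead of being collected in memory.
String sql = "select * from metric_data where agent = ? and metric = ? and timestamp between ? and ?";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setString(1, agents_filter);
    ps.setString(2, metrics_filter);
    ps.setLong(3, intervalStart);      // hypothetical: queryInterval split into two bounds
    ps.setLong(4, intervalEnd);
    ps.setFetchSize(1000);             // hint to the driver to fetch in chunks, not all at once
    try (ResultSet rs = ps.executeQuery();
         PrintWriter out = new PrintWriter(new BufferedWriter(
                 new OutputStreamWriter(new FileOutputStream("result"), "UTF-8")))) {
        ResultSetMetaData meta = rs.getMetaData();
        int columnCount = meta.getColumnCount();   // cached once, not re-read per row
        for (int i = 1; i <= columnCount; i++) {
            out.print(" " + meta.getColumnName(i) + "\t\t|");
        }
        out.println();
        while (rs.next()) {
            for (int i = 1; i <= columnCount; i++) {
                out.print(" " + rs.getString(i) + "\t|");
            }
            out.println();   // BufferedWriter flushes to disk in chunks, never holding the whole file
        }
    }
}
// try-with-resources closes (and flushes) everything; no explicit flush() after close() is needed.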

Java out of heap space error if I use 'or' in sql instead of 'in'

I am using Spring and Hibernate in my project, and a few days ago I found that the dev environment had crashed with a Java out-of-heap-space error. After some preliminary analysis using heap analysis tools and VisualVM, I found that the problem was one SELECT SQL query. I rewrote the SQL in a different way, which solved the memory issue, but now I am not sure why the previous SQL caused it.
Note: The method is inside a DAO and is called in a while loop with a batch size of 800 until all the data is pulled. The table holds around 20 million rows.
For each call, a new Hibernate session is created and destroyed.
Previous SQL:
@Override
public List<Book> getbookByJournalId(UnitOfWork uow, List<Journal> batch) {
    StringBuilder sb = new StringBuilder();
    sb.append("select i from Book i where ( ");
    if (batch == null || batch.size() <= 0)
        sb.append("1=0 )");
    else {
        for (int i = 0; i < batch.size(); i++) {
            if (i > 0)
                sb.append(" OR ");
            sb.append("( i.journalId='" + batch.get(i).journalId() + "')");
        }
        sb.append(")");
        sb.append(" and i.isDummy=:isNotDummy and i.statusId !=:BookStatus and i.BookNumber like :Book ");
    }
    Query query = uow.getSession().createQuery(sb.toString());
    query.setParameter("isNotDummy", Definitions.BooleanIdentifiers_Char.No);
    query.setParameter("Book", "%" + Definitions.NOBook);
    query.setParameter("BookStatus", Definitions.BookStatusID.CLOSED.getValue());
    List<Book> bookList = (List<Book>) query.getResultList();
    return bookList;
}
Rewritten SQL:
#Override
public List<Book> getbookByJournalId(UnitOfWork uow,
List<Journal> batch) {
List<String> bookIds = new ArrayList<>();
for(Journal J : batch){
bookIds.add(J.getJournalId());
}
StringBuilder sb = new StringBuilder();
sb.append("select i from Book i where i.journalId in (:bookIds) and i.isDummy=:isNotDummy and i.statusId !=:BookStatus and i.BookNumber like :Book");
Query query = uow.getSession().createQuery(sb.toString());
query.setParameter("isNotDummy", Definitions.BooleanIdentifiers_Char.No);
query.setParameter("Book", "%" + Definitions.NOBook);
query.setParameter("BookStatus", Definitions.BookStatusID.CLOSED.getValue());
query.setParameter("specimenNums",specimenNums);
query.setParameter("bookIds", bookIds);
List<Book> bookList = (List<Book>) query.getResultList();
return bookList;
}
When you build dynamic SQL statements, you miss out on the database's ability to cache the statement, indexes and even entire tables to optimise your data retrieval. That said, dynamic SQL can still be a practical solution.
But you need to be a good citizen on both the application and database servers, by being very efficient with your memory usage. For a solution that needs to scale to 20 million rows, I recommend a more disk-based approach, using as little RAM as possible (i.e. avoiding large in-memory arrays).
Problems I can see with the first statement are the following:
Up to 800 OR conditions may be added to the first statement for each batch. That makes for a very long SQL statement (not good). This, I believe [please correct me if I'm wrong], has to be held in the JVM heap and then passed to the database.
Java may not release this statement from the heap straight away, and garbage collection might be too slow to keep up with your code, increasing RAM usage. You shouldn't rely on it to clean up after you while your code is running.
If you run this code in parallel, many Hibernate sessions risk opening many database sessions too. I believe you should use only one session for this, unless there is a specific reason not to. Creating and destroying sessions you don't need just creates unnecessary traffic on the servers and the network.
If you are running this code serially, then why drop the session when you can reuse it for the next batch? You may have a valid reason, but the question must be asked.
In the second statement, building the bookIds list again uses RAM in the JVM heap, and the where i.journalId in (:bookIds) part of the SQL will still be lengthy. Not as bad as before, but I think still too long.
You would be much better off doing the following:
Create a table on the database, with batchNumber, bookId and perhaps some meta-data, such as flags or timestamps. Join the Book table to your new table using a static statement, and pass in the batchNumber as a new parameter.
create table Batch
(
id integer primary key,
batchNumber integer not null,
bookId integer not null,
processed_datetime timestamp
);
create unique index Batch_Idx on Batch (batchNumber, bookId);
-- Put this statement into a loop, or use INSERT/SELECT if the data is available in the database
insert into Batch (batchNumber, bookId) values (:batchNumber, :bookId);
-- Updated SQL statement. This is now static. Note that batchNumber needs to be provided as a parameter.
select i
from Book i
inner join Batch b on b.bookId = i.journalId
where b.batchNumber = :batchNumber
and i.isDummy=:isNotDummy and i.statusId !=:BookStatus and i.BookNumber like :Book;
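A hedged Java sketch of how the batch-table approach might be wired up through Hibernate (the Batch table and batchNumber come from the SQL above; uow.getSession() and the entity names follow the question; adjust the bookId column type if journalId is not numeric):

Session session = uow.getSession();

// 1. Load this batch's journal ids into the Batch table with one JDBC batch insert.
session.doWork(connection -> {
    try (PreparedStatement ps = connection.prepareStatement(
            "insert into Batch (batchNumber, bookId) values (?, ?)")) {
        for (Journal j : batch) {
            ps.setInt(1, batchNumber);
            ps.setString(2, j.getJournalId());   // assumes a string id; match the column type
            ps.addBatch();
        }
        ps.executeBatch();
    }
});

// 2. Run the static join; only batchNumber changes between calls,
//    so the database can reuse the cached statement plan.
List<Book> books = session.createNativeQuery(
        "select i.* from Book i"
        + " inner join Batch b on b.bookId = i.journalId"
        + " where b.batchNumber = :batchNumber"
        + " and i.isDummy = :isNotDummy and i.statusId != :BookStatus and i.BookNumber like :Book",
        Book.class)
    .setParameter("batchNumber", batchNumber)
    .setParameter("isNotDummy", Definitions.BooleanIdentifiers_Char.No)
    .setParameter("BookStatus", Definitions.BookStatusID.CLOSED.getValue())
    .setParameter("Book", "%" + Definitions.NOBook)
    .getResultList();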

Android Improve SQLite Performance Optimise

I have an onCreate that currently opens an SQLite database and reads off 4 values. I then have a conditional: depending on which activity sent the user there, it either displays those values, or updates two values and then displays the others.
If I run this activity without updating the database it is lightning fast, whereas if I run the two queries that write to the database it can be sluggish. Is there anything I can do to optimise this?
The problem is that the display stays on the previous activity until the SQLite update has completed; that seems to be the issue.
Sorry for what is most likely a rubbish explanation. Please feel free to ask me to better describe anything.
Any help appreciated.
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.reason);
    // Opens DB connection
    type = b.getString("TYPE");
    get();
    if (type.equals("next")) { update(); }
    db.close();
}

public void get() {
    Cursor b = db.rawQuery("SELECT * FROM " + DB_TABLE2 + " WHERE _id='1'", null);
    b.moveToFirst();
    id = b.getInt(b.getColumnIndex("nextq"));
    nextvalue = b.getInt(b.getColumnIndex(type));
    if (nextvalue == 0) { nextvalue = 1; }
    b.close();
    nextvalue++;
}

public void update() {
    db.execSQL("UPDATE " + DB_TABLE2
            + " SET nextq='" + nextvalue + "'"
            + " WHERE _id='1'");
    db.execSQL("UPDATE " + DB_TABLE
            + " SET answered_correctly='" + anscorrect + "' , open ='1' WHERE _id='" + id + "'");
}
Enclose all of your updates inside a single transaction. Not only is it better from a data integrity point of view, but it's also much faster.
So, put a db.beginTransaction() at the start of your update(), and a db.setTransactionSuccessful() followed by db.endTransaction() at the end.
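A sketch of the question's update() wrapped that way (table names and fields come from the question):

public void update() {
    db.beginTransaction();
    try {
        db.execSQL("UPDATE " + DB_TABLE2
                + " SET nextq='" + nextvalue + "' WHERE _id='1'");
        db.execSQL("UPDATE " + DB_TABLE
                + " SET answered_correctly='" + anscorrect + "', open='1' WHERE _id='" + id + "'");
        db.setTransactionSuccessful();   // mark the transaction as committable
    } finally {
        db.endTransaction();             // commits if marked successful, rolls back otherwise
    }
}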
You can do something like this, but be warned: the PRAGMA synchronous setting can be dangerous, as it turns off safety features in SQLite. Having said that, it sped up my inserts to roughly 0.5 ms per row, going from 350 ms down to 15-20 ms for one table, and from 5000-9000 ms down to roughly 300 ms for another.
// this cut down insert time from 250-600 ms to 14-30 ms;
// with PRAGMA synchronous set to OFF, it drops to ~0.5 ms/row for member names
final InsertHelper ih = new InsertHelper(database, SQLiteHelper.TABLE_MEMBERS);
final int nameColumn = ih.getColumnIndex(SQLiteHelper.MEMBER_TABLE_MEMBERNAME);
final long startTime = System.currentTimeMillis();
try {
    database.execSQL("PRAGMA synchronous=OFF");
    database.setLockingEnabled(false);
    database.beginTransaction();
    for (int i = 0; i < Members.size(); i++) {
        ih.prepareForInsert();
        ih.bind(nameColumn, Members.get(i));
        ih.execute();
    }
    database.setTransactionSuccessful();
} finally {
    database.endTransaction();
    database.setLockingEnabled(true);
    database.execSQL("PRAGMA synchronous=NORMAL");
    ih.close();
    if (Globals.ENABLE_LOGGING) {
        final long endtime = System.currentTimeMillis();
        Log.i("Time to insert Members: ", String.valueOf(endtime - startTime));
    }
}
The main things you want are the InsertHelper, the setLockingEnabled(false) call, and the execSQL("PRAGMA ...") call. Keep in mind, as I said, that using the last two can potentially corrupt the DB if your phone loses power, but they speed up inserts greatly. I learned about this here: http://www.outofwhatbox.com/blog/2010/12/android-using-databaseutils-inserthelper-for-faster-insertions-into-sqlite-database/#comment-2685
You can also ignore my logging stuff; I had it in there for some rough benchmarking to see how long things took.
Edit: To explain briefly what those options do: I'm disabling safety and integrity features in SQLite in order to pipe data straight into the database. Since this now happens so fast (around 14-20 ms on average), the risk is acceptable. If it were taking seconds, I wouldn't risk it, because if something goes wrong you could end up with a corrupted DB. The synchronous option is the greatest risk of all, so judge whether you want to take that risk with your data. I recommend using timing code like mine to see how long your inserts take each time you try something, and then decide what level of risk you accept. Even if you don't use those two options, the other features (InsertHelper and the beginTransaction block) will still improve your database work greatly.
Either create a new thread for the database work and use a callback to update the UI, or, if the UI does not depend on the database change, just create the new thread. Executing database work on the UI thread will always hurt UI responsiveness a bit. Check out AsyncTask, or just create a new thread if the UI doesn't need a callback on completion.
Just be careful not to get too carried away with thread creation :)
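As a rough sketch of the AsyncTask route (method names come from the question; AsyncTask was the standard tool at the time this was written):

// Runs the database writes off the UI thread, then lets the UI update when they finish.
new AsyncTask<Void, Void, Void>() {
    @Override
    protected Void doInBackground(Void... params) {
        update();          // the two UPDATE statements from the question
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        // callback on the UI thread: refresh views here now that the write is done
    }
}.execute();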
