Android: improve SQLite performance (Java)

I have an onCreate that opens an SQLite database and reads off four values. Depending on which activity sent the user there, it then either displays those values, or updates two of them and displays the others.
If I run this activity without updating the database it is lightning fast, but if I run the two write queries it becomes sluggish. Is there anything I can do to optimise this?
The symptom is that the display stays on the previous activity until the SQLite update has completed.
Sorry for what is most likely a rubbish explanation; feel free to ask me to clarify anything.
Any help appreciated.
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.reason);
    // Opens DB connection
    type = b.getString("TYPE");   // b: presumably the extras Bundle from the calling activity
    get();
    if (type.equals("next")) { update(); }
    db.close();
}
public void get() {
    Cursor b = db.rawQuery("SELECT * FROM " + DB_TABLE2 + " WHERE _id='1'", null);
    b.moveToFirst();
    id = b.getInt(b.getColumnIndex("nextq"));
    nextvalue = b.getInt(b.getColumnIndex(type));
    if (nextvalue == 0) { nextvalue = 1; }
    b.close();
    nextvalue++;
}
public void update() {
    db.execSQL("UPDATE " + DB_TABLE2
            + " SET nextq='" + nextvalue + "'"
            + " WHERE _id='1'");
    db.execSQL("UPDATE " + DB_TABLE
            + " SET answered_correctly='" + anscorrect + "', open='1'"
            + " WHERE _id='" + id + "'");
}

Enclose all of your updates inside a single transaction. Not only is it better from a data integrity point of view, but it's also much faster.
So, put a db.beginTransaction() at the start of your update(), and a db.setTransactionSuccessful() followed by db.endTransaction() at the end.
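Applied to the update() method above, a minimal sketch (assuming db is the same open SQLiteDatabase the question uses):
public void update() {
    db.beginTransaction();
    try {
        db.execSQL("UPDATE " + DB_TABLE2
                + " SET nextq='" + nextvalue + "' WHERE _id='1'");
        db.execSQL("UPDATE " + DB_TABLE
                + " SET answered_correctly='" + anscorrect + "', open='1'"
                + " WHERE _id='" + id + "'");
        db.setTransactionSuccessful();   // without this, endTransaction() rolls everything back
    } finally {
        db.endTransaction();
    }
}
Both updates are then committed in a single journal commit instead of two, which is where most of the speedup comes from.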

You can do something like this, but be warned: the PRAGMA synchronous setting can be dangerous, as it turns off durability safeguards in SQLite. That said, it cut my insert time down to roughly 0.5 ms per row: one table went from 350 ms down to 15-20 ms, and another from 5000-9000 ms down to roughly 300 ms.
// This cut insert time down from 250-600 ms to 14-30 ms.
// With PRAGMA synchronous set to OFF, it drops to about 0.5 ms/row for member names.
final InsertHelper ih = new InsertHelper(database, SQLiteHelper.TABLE_MEMBERS);
final int nameColumn = ih.getColumnIndex(SQLiteHelper.MEMBER_TABLE_MEMBERNAME);
final long startTime = System.currentTimeMillis();
try {
    database.execSQL("PRAGMA synchronous=OFF");
    database.setLockingEnabled(false);
    database.beginTransaction();
    for (int i = 0; i < Members.size(); i++) {
        ih.prepareForInsert();
        ih.bind(nameColumn, Members.get(i));
        ih.execute();
    }
    database.setTransactionSuccessful();
} finally {
    database.endTransaction();
    database.setLockingEnabled(true);
    database.execSQL("PRAGMA synchronous=NORMAL");
    ih.close();
    if (Globals.ENABLE_LOGGING) {
        final long endTime = System.currentTimeMillis();
        Log.i("Time to insert Members: ", String.valueOf(endTime - startTime));
    }
}
The main things you want are the InsertHelper, setLockingEnabled, and the PRAGMA synchronous execSQL calls. Keep in mind, as I said, that the last two can potentially corrupt the DB if your phone loses power, but they can speed up inserts greatly. I learned about this from here: http://www.outofwhatbox.com/blog/2010/12/android-using-databaseutils-inserthelper-for-faster-insertions-into-sqlite-database/#comment-2685
You can also ignore my logging code; I had it in there for benchmarking, to see how long things took.
Edit: To explain briefly what those options do: I'm disabling safety and integrity features in SQLite so that data can essentially be piped straight into the database. Since the whole operation now completes so quickly (around 14-20 ms on average), the risk is acceptable to me. If it were taking seconds, I wouldn't risk it, because if something goes wrong mid-write you can end up with a corrupted DB. The synchronous option is the greatest risk of all, so judge whether you want to take that risk with your data. I'd recommend using timing code like mine to see how long your inserts take with each approach, and then decide what level of risk you're comfortable with. Even if you don't use those two settings, the other features (InsertHelper and the transaction) will still improve your database work greatly.

Either create a new thread for the database work and use a callback for the UI update, or, if the UI doesn't depend on the database change, just create the new thread. Executing database work on the UI thread will always hurt UI responsiveness. Check out AsyncTask, or just spawn a plain thread if the UI doesn't need a callback on completion.
Just be careful not to get too careless with thread creation :)
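For example, a minimal AsyncTask sketch (update() is the question's DB method; showValues() is a placeholder for whatever refreshes your views):
new AsyncTask<Void, Void, Void>() {
    @Override
    protected Void doInBackground(Void... params) {
        update();        // DB work happens off the UI thread
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        showValues();    // back on the UI thread once the DB work is done
    }
}.execute();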

Related

What can cause a foreign key exception when FOREIGN_KEY_CHECKS is 0?

I'm trying to fix a little plugin that I'm making for Minecraft servers. The plugin uses code that tries to adjust automatically to the server's needs: first it converts the existing tables into _old tables and creates the new ones, then, using some objects that encode human decisions about how to parse or migrate specific information, it copies all the data that isn't already duplicated into the new tables, and finally it removes the old ones.
The code is kind of messy; I haven't had much time lately, but I'm trying to free up a week to rewrite the whole plugin. Everything was working fine until one day I updated the plugin on a test server that uses MySQL. I had been using the same code all along without problems on that server, but after some time without touching it, it no longer works.
This is the part of the code that is failing:
protected boolean tables() {
    boolean update = false, result = update;
    if (!this.sql.execute(
            "CREATE TABLE IF NOT EXISTS information(param VARCHAR(16),value VARCHAR(16),CONSTRAINT PK_information PRIMARY KEY (param));",
            new Data[0]))
        return false;
    List<String> tlist = new ArrayList<>();
    try {
        this.sql.execute("SET FOREIGN_KEY_CHECKS=0;", new Data[0]);
        ResultSet set = this.sql.query("SELECT value FROM information WHERE `param`='version';", new Data[0]);
        String version = "";
        if (set.next())
            version = set.getString(1);
        if (!version.equals(MMOHorsesMain.getPlugin().getDescription().getVersion())) {
            update = true;
            ResultSet tables = this.sql.query("SHOW TABLES;", new Data[0]);
            while (tables.next()) {
                String name = tables.getString(1);
                if (!name.equals("information")) {
                    if (!this.sql.execute("CREATE TABLE " + name + "_old LIKE " + name + ";", new Data[0]))
                        throw new Exception();
                    if (!this.sql.execute("INSERT INTO " + name + "_old SELECT * FROM " + name + ";", new Data[0]))
                        throw new Exception();
                    tlist.add(name);
                }
            }
            String remove = "";
            for (String table : tlist)
                remove = String.valueOf(remove) + (remove.isEmpty() ? "" : ",") + table;
            this.sql.reconnect();
            this.sql.execute("DROP TABLE IF EXISTS " + remove + ";", new Data[0]);
The database also stores the plugin version. I use it to check whether the database comes from another version and, if so, regenerate it. This works fine on SQLite; the problem only appears on MySQL.
The first part reads the current version and compares it. The plugin starts by disabling foreign key checks. This isn't the nicest approach but, as I said, I haven't had time to rework this code; it also comes from a decompiled build, because due to some GitHub issues I lost part of the latest changes. If an update is required, it turns every table into an _old table. Everything works fine up to here: the data is copied into the _old tables and handled correctly. The problem comes when it has to remove the original tables.
DROP TABLE IF EXISTS cosmetics,horses,inventories,items,trust,upgrades;
This is the SQL statement used to remove the original tables. I'm not sure whether this is what's happening, but it seems the _old tables inherit the foreign keys of the original tables, and dropping the originals isn't allowed, even though FOREIGN_KEY_CHECKS is 0. I added a debug check beforehand to confirm the checks really were disabled, and they were. To simulate the environment people usually run this in, I'm using a prebuilt Minecraft hosting setup from a friend, running MariaDB 10.4.12.
I've asked him whether he has updated it since I last set up this server, but I'm still waiting for an answer. Either way, whether it's a newer or older MariaDB version, I'm trying to make the plugin as flexible as possible so it adapts to different versions without problems. Everything else seems to work, but since I can't delete the original tables, I can't replace them with the new format.
I hope this is just something that happens with certain DB configurations, but I'd like an answer from someone knowledgeable, to make sure I didn't upload a broken version.
Thank you nicomp, the answer was to keep the same session. SET FOREIGN_KEY_CHECKS=0 only applies to the current session, and my reconnect() call right before the DROP opened a new session, discarding that setting. My reconnect method isn't very flexible; it comes from some bad experiences with high latency and sessions that died after about a second, so it was incorrectly detecting the connection as dead, reconnecting, and wiping the session configuration.

Updating MySQL Database every second

I'm making an online game. I'm testing it with 300 players and I have a problem: I have to update about 300 rows in the database every second, but the update takes too long. It takes about 11143 ms (11 s), which is far too much for a task that must finish in under 1 s. I'm doing these updates from Java; I already tried PHP and it's the same. The UPDATE query itself is very simple:
String query5 = "UPDATE naselje SET zelezo = " + zelezo + ", zlato = " + zlato + ", les = " + les + ", hrana = " + hrana + " WHERE ID =" + ID;
So does anyone know how to make these per-second database updates faster, or have any other solution for updating the game resources (gold, wood, food, ...)?
My configuration:
Intel Core i5 M520 2.40GHz
6 GB RAM
You are probably updating each row separately; you need to use a batch update.
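A minimal JDBC batching sketch (table and column names taken from the query above; conn and the Player class/collection are assumed placeholders for your own objects):
String sql = "UPDATE naselje SET zelezo = ?, zlato = ?, les = ?, hrana = ? WHERE ID = ?";
conn.setAutoCommit(false);                       // wrap the whole batch in one transaction
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    for (Player p : players) {                   // players: your 300 in-memory game objects
        ps.setInt(1, p.zelezo);
        ps.setInt(2, p.zlato);
        ps.setInt(3, p.les);
        ps.setInt(4, p.hrana);
        ps.setInt(5, p.id);
        ps.addBatch();                           // queue the update instead of executing it now
    }
    ps.executeBatch();                           // send the queued updates together
    conn.commit();
}
With MySQL Connector/J, adding rewriteBatchedStatements=true to the JDBC URL additionally lets the driver collapse the batch into far fewer network round trips.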
Switch to PDO if you are not already on it, and use transactions. Also, restructure your tables to use InnoDB instead of MyISAM.
InnoDB works better with large tables that are frequently read and written; many similar SELECT/UPDATE/INSERT statements are exactly the kind of workload it was designed to handle.
It is also good coding practice to use transactions when making multiple consecutive calls of this kind.
Use this Google search to learn more about PHP PDO and MySQL transactions.
Example:
With Transactions
$pdo = new PDO(...);
$pdo->beginTransaction();
$stmt = $pdo->prepare("UPDATE table SET column = ? WHERE ID = ?");
for ($i = 0; $i < 1001; $i++) {
    $stmt->execute([$var, $i]);
}
$pdo->commit();

Populate table from database

I'm trying to retrieve rows from my database and populate a table. I don't understand where the problem is with this code:
if ((report.getMsg() == "selectEventoAll") && (report.getEsito() == 1))
{
    DefaultTableModel dtm = new DefaultTableModel();
    eventi_tb.setModel(dtm);
    try
    {
        ResultSet res_eventi = report.getRes();
        i = 0;
        Object[][] datiEventi = new Object[report.getRowCount()][5];
        while (res_eventi.next())
        {
            j = 0;
            while (j < 5)
            {
                datiEventi[i][j] = res_eventi.getObject(j + 2);
                j++;
            }
            dtm.addRow(datiEventi[i]);
            i++;
        }
    }
This is a bad design. You're mixing your UI and database together. Your code is no good if you change from Swing to a web UI. It's harder to test and debug this way, too.
Break the problem into two pieces: database access and Swing display.
Have one object that does nothing but query for results and load them into a data structure.
Have another that does nothing but accept a data structure and load it into your Swing UI for display.
Your application will have the database decoupled from the user interface. Your testing and debugging life will be easier.
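As a rough sketch of that split (class names, the table name, and the connection handling here are made up for illustration):
import java.sql.*;
import java.util.*;
import javax.swing.JTable;
import javax.swing.table.DefaultTableModel;

// Database side: knows nothing about Swing.
class EventoDao {
    List<Object[]> loadEventi(Connection conn) throws SQLException {
        List<Object[]> rows = new ArrayList<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM eventi")) {   // table name assumed
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                Object[] row = new Object[cols];
                for (int c = 0; c < cols; c++) {
                    row[c] = rs.getObject(c + 1);   // JDBC columns are 1-based
                }
                rows.add(row);
            }
        }
        return rows;
    }
}

// UI side: knows nothing about JDBC.
class EventoView {
    void show(JTable table, List<Object[]> rows) {
        DefaultTableModel dtm = new DefaultTableModel();
        dtm.setColumnCount(rows.isEmpty() ? 0 : rows.get(0).length);
        for (Object[] row : rows) {
            dtm.addRow(row);
        }
        table.setModel(dtm);
    }
}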
Posting more code, and perhaps an error message, would help us help you faster than guessing.
The loop over the columns in the result set looks suspicious to me. JDBC column indices run from 1 to the number of columns, but you start at 2 (getObject(j + 2) with j starting at 0). Why? If your query returns five or fewer columns, you'll get an exception there.
Are you sure that your ResultSet contains any rows?
Are you sure that some exception is not occurring before the call to addRow? You're in a try block, what does the catch block do?

SQLite DB is locked exception, how do I unlock it if I haven't ever toyed with it?

java.sql.SQLException: database is locked
at org.sqlite.DB.throwex(DB.java:288)
at org.sqlite.NestedDB.prepare(NestedDB.java:115)
at org.sqlite.DB.prepare(DB.java:114)
at org.sqlite.Stmt.executeQuery(Stmt.java:89)
When I make a query I get this exception. I read up on it on SA and Google, and the most common conclusion is that someone started making another query which never finished. The problem I'm having is that I've never made a query on this DB on this machine before. I downloaded the db file from where I hosted it (I created it earlier) and haven't done anything with it, so I don't know why it would be locked. When I do a query using a program called SQLite Database Browser, it works just fine. Thanks for the help, I'll provide more info if need be, just let me know.
adapter = new DbAdapter();
ResultSet info;
ResultSet attributes;
for (int i = 1; i < 668; i++) {
    if (i % 50 == 0) {
        System.out.print('.');
    }
    info = adapter.makeQuery("SELECT * FROM vehicles WHERE id = '" + i + "'");
    attributes = adapter.makeQuery("SELECT * FROM vehicle_moves WHERE vehicle_id = '" + i + "'");
    if (info.next()) {
        base = new (info, attributes);
    }
    vehicleArray[i] = base;
}
System.out.println("Done.");
info.close();
attributes.close();
adapter.close();
Above is the code where this is occurring. I did some homework throughout my code and sure enough the problem is in this code, other DB queries work just fine. Anything jump out at you guys?
SQLite itself can most certainly handle running a query while the results of another query are being processed; it would be terribly useless if it couldn't. What's more likely to cause problems is having two connections to the database open at once. I don't know that DbAdapter class at all (not what package it is in, or what module provides it), but if it assumes it can open many connections, or if it isn't maintaining proper connection hygiene, that could certainly cause the sort of problem you're seeing. Look there first.
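To illustrate the kind of connection hygiene to aim for, here is a rough sketch using plain JDBC with the sqlite-jdbc driver (the DB path and column index are made up; DbAdapter's internals are unknown, so treat this as the pattern, not a drop-in fix):
import java.sql.*;

public class Db {
    private static Connection conn;

    // One shared connection for the whole app, instead of opening a new one per query.
    public static synchronized Connection get() throws SQLException {
        if (conn == null || conn.isClosed()) {
            conn = DriverManager.getConnection("jdbc:sqlite:vehicles.db");   // path assumed
        }
        return conn;
    }

    // Statements and result sets are closed promptly via try-with-resources.
    public static void printVehicle(int id) throws SQLException {
        try (PreparedStatement ps = get().prepareStatement("SELECT * FROM vehicles WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(2));   // column index assumed for illustration
                }
            }
        }
    }
}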

improving speed of query processing

I'm having major issues with my query processing time :(
I think it is because the query is getting recompiled every time, but I don't see any way around it.
The following is the query/snippet of code:
private void readPerformance(String startTime, String endTime,
        String performanceTable, String interfaceInput) throws SQLException, IOException {
    String interfaceId, iDescp, iStatus = null;
    String dtime, ingress, egress, newLine, append, routerId = null;
    StringTokenizer st = null;
    stmtD = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
            java.sql.ResultSet.CONCUR_READ_ONLY);
    stmtD.setFetchSize(Integer.MIN_VALUE);
    BufferedReader interfaceRead = new BufferedReader(new FileReader(interfaceInput));
    BufferedWriter pWrite = new BufferedWriter(new FileWriter("performanceInput.txt"));
    while ((newLine = interfaceRead.readLine()) != null) {
        st = new StringTokenizer(newLine, ",");
        while (st.hasMoreTokens()) {
            append = st.nextToken() + CSV + st.nextToken() + st.nextToken() + CSV + st.nextToken();
            System.out.println(append + " ");
            iStatus = st.nextToken().trim();
            interfaceId = st.nextToken().trim();
            append = append + CSV + iStatus + CSV + interfaceId;
            System.out.println(append + " ");
            pquery = "Select d.dtime, d.ifInOctets, d.ifOutOctets from " + performanceTable + "_1_60"
                    + " AS d Where d.id = " + interfaceId
                    + " AND dtime BETWEEN " + startTime + " AND " + endTime;
            rsD = stmtD.executeQuery(pquery);
            /* interface query */
            while (rsD.next()) {
                dtime = rsD.getString(1);
                ingress = rsD.getString(2);
                egress = rsD.getString(3);
                pWrite.write(append + CSV + dtime + CSV + ingress + CSV + egress + NL);
            } // end while
        } // end while
    } // end while
    pWrite.close();
    interfaceRead.close();
    rsD.close();
    stmtD.close();
}
My interfaceId value keeps changing, so I have put the query inside the loop, which means the query gets recompiled many times.
Is there a better way? Can I use a stored procedure from Java? If so, how? I don't have much knowledge of them.
The current processing time is almost 60 minutes :( and the generated text file is over 300 MB.
Please help!
Thank you.
You can use a PreparedStatement with parameters, which avoids recompiling the query. Since performanceTable is constant, it can be baked into the prepared query; the remaining variables, used in the WHERE condition, are set as parameters.
Outside the loop, create a prepared statement, rather than a regular statement:
PreparedStatement stmtD = conn.prepareStatement(
"Select d.dtime,d.ifInOctets, d.ifOutOctets from "+performanceTable+"_1_60 AS d"+
" Where d.id = ? AND dtime BETWEEN ? AND ?");
Then later, in your loop, set the parameters:
stmtD.setInt(1, Integer.parseInt(interfaceId));   // JDBC uses setInt, not setInteger
stmtD.setString(2, startTime);
stmtD.setString(3, endTime);
ResultSet rsD = stmtD.executeQuery();             // note: no SQL passed in here
It may also be worth checking the query plan with MySQL's EXPLAIN to see whether the query itself is part of the bottleneck. There is also quite a bit of diagnostic string concatenation going on in the function; once the query is working, removing that may improve performance further.
Finally, note that even if the query is fast, network latency may slow things down. JDBC provides batch execution of multiple statements to help reduce the overall per-statement latency; see addBatch/executeBatch on Statement and PreparedStatement.
More information is required, but I can offer some general questions and suggestions. It may have nothing to do with compilation of the query plan (that would be unusual):
Are the id and dtime columns indexed?
How many times does a query get executed in the 60mins?
How much time does each query take?
If the time per query is large, then the problem is the query execution itself, not the compilation; check the indexes as described above.
If there are many, many queries, then the sheer volume of queries may be the problem. Using PreparedStatement (see mdma's answer) may help, or you can batch the interfaceIds with an IN clause, running one query for every 100 interfaceIds rather than one for each.
EDIT: As a matter of good practice you should ALWAYS use PreparedStatement. It correctly handles datatypes such as dates, so you don't have to worry about formatting them into correct SQL syntax, and it also prevents SQL injection.
From the looks of things you are kicking off many separate SELECT queries (possibly hundreds, based on your file size).
Instead of doing that, build a comma-delimited list of all the interfaceId values from your input file, then make one SQL call using the IN keyword. You know that performanceTable, startTime, and endTime aren't changing, so the query would look something like this:
SELECT d.dtime,d.ifInOctets, d.ifOutOctets
FROM MyTable_1_60 as d
WHERE dtime BETWEEN '08/14/2010' AND '08/15/2010'
AND d.id IN ( 10, 18, 25, 13, 75 )
Then you can open your output file and dump the whole result set in one pass.
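A rough Java sketch of that approach, reusing the variables from the question's method (the assumption here is that the interface id is the last comma-separated token on each input line; adjust the tokenizing to match your real format):
// Collect every interfaceId from the input file first...
List<String> ids = new ArrayList<>();
String line;
while ((line = interfaceRead.readLine()) != null) {
    StringTokenizer st = new StringTokenizer(line, ",");
    String id = null;
    while (st.hasMoreTokens()) {
        id = st.nextToken().trim();   // keep the last token as the id (assumed)
    }
    ids.add(id);
}

// ...then run a single query with an IN list instead of one query per id.
// d.id is included in the SELECT so each row can be matched back to its interface.
String query = "SELECT d.id, d.dtime, d.ifInOctets, d.ifOutOctets FROM " + performanceTable + "_1_60 AS d"
        + " WHERE d.dtime BETWEEN " + startTime + " AND " + endTime
        + " AND d.id IN (" + String.join(",", ids) + ")";
ResultSet rsD = stmtD.executeQuery(query);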
