My question concerns precompiled SQL statements created with SQLiteDatabase.compileStatement. In every example I've seen, the statement is first compiled, then parameters are bound, and then it is executed.
However, I have the impression that precompiled statements only pay off if I compile them once, somewhere at the beginning, and then reuse the compiled statements many times, only binding new parameters each time.
Hence the question: is there a good practice for where (at what moment, in which method of which class) to compile those statements, and how to execute them afterwards?
I was considering extending the SQLiteOpenHelper class (I have to extend it anyway to override the onCreate and onUpgrade methods), first precompiling all my SQL statements in its constructor, and then adding my own methods that provide access to the database (i.e., bind parameters and execute the statements).
Is that a good approach? If not, what is a good practice?
Just in order to clarify the situation, an example:
public class MySQLStatements {

    private final SQLiteDatabase db;
    private SQLiteStatement recordInsert;

    public MySQLStatements(SQLiteDatabase db) {
        this.db = db;
    }

    public void tableCreateExecute() {
        db.execSQL("CREATE TABLE ...");
    }

    public long recordInsertExecute(Object par1, Object par2 /* etc. */) {
        // Compile the SQL statement only if it hasn't been compiled before.
        if (recordInsert == null) {
            recordInsert = db.compileStatement(recordInsertSql());
        }
        // ... bind the parameters here ...
        return recordInsert.executeInsert();
    }
}
The code above is not fully correct Java, but I believe it shows what I want to achieve: the SQL statement is compiled only when it is used for the first time, and is then kept for future use.
The question is: what do you think about such a solution?
And another question: at what moment can I create the MySQLStatements object? Can it be done in the constructor of my class derived from SQLiteOpenHelper, as the last thing it does? If so, is it safe to obtain the database there via getWritableDatabase? And in onCreate and onUpgrade, can I ignore the db parameter that is passed in and instead use the one obtained earlier in the constructor (via getWritableDatabase)?
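The compile-on-first-use idea in recordInsertExecute can be factored out into a small helper. Below is a minimal sketch using a plain memoizing holder; it deliberately uses no Android classes (so it runs anywhere), but in the real class the supplier would call db.compileStatement(...). The class name Lazy and the example strings are my own; note this version is not thread-safe, which is usually acceptable for a helper confined to one thread.

```java
import java.util.function.Supplier;

// Caches the result of an expensive factory call the first time it is needed.
final class Lazy<T> {
    private final Supplier<T> factory;
    private T value;

    Lazy(Supplier<T> factory) {
        this.factory = factory;
    }

    T get() {
        if (value == null) {            // same null-check as recordInsertExecute
            value = factory.get();      // e.g. db.compileStatement(INSERT_SQL)
        }
        return value;
    }
}
```

With this helper, each statement field becomes something like `Lazy<SQLiteStatement> recordInsert = new Lazy<>(() -> db.compileStatement(INSERT_SQL));` and every call site just uses `recordInsert.get()`.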
Related
I am getting a "parameter index out of range" error when executing a prepared statement. I have several other statements working correctly. The only difference with this query is that it's the only UPDATE; the rest are all INSERTs, DELETEs, etc. Any guidance on what I may be doing wrong would be greatly appreciated.
sqlStatement = "UPDATE customer SET customerName = ?, addressId = ? WHERE customerId = ?;";
StatementHandler.setPreparedStatement(ConnectionHandler.connection, sqlStatement);
StatementHandler.getPreparedStatement().setString(1, name);
StatementHandler.getPreparedStatement().setInt(2, AddressDAO.getAddressId(address));
StatementHandler.getPreparedStatement().setInt(3, customerId);
StatementHandler.getPreparedStatement().executeUpdate();
Error:
java.sql.SQLException: Parameter index out of range (3 > number of parameters, which is 1).
I have put a couple of print statements in the middle of the code block, and it seems to fail on the third parameter. All values coming in are valid and match the types being assigned. MySQL is being used, and the statement works fine when executed in the console.
Thank you for reading and any help you can provide.
Edit: Here is the StatementHandler class I am using as well. I am combing through it to see what else I should add to help get this figured out. Thank you for the comments!
public class StatementHandler {

    /**
     * create statement reference
     */
    private static PreparedStatement preparedStatement;

    /**
     * method to create statement object
     */
    public static void setPreparedStatement(Connection connection, String sqlStatement) throws SQLException {
        preparedStatement = connection.prepareStatement(sqlStatement);
    }

    /**
     * getter to return statement object
     */
    public static PreparedStatement getPreparedStatement() {
        return preparedStatement;
    }
}
Your snippet doesn't make it clear, but I can guess. I'll list a series of conclusions I'm drawing; you'd have to double-check these:
1. StatementHandler is a class (not a variable). (Reason: you've capitalized it.)
2. setPreparedStatement and getPreparedStatement are static methods in the StatementHandler class. (Follows naturally.)
3. You are using multiple threads. (Reason: that would be sufficient to explain this problem.)
4. You aren't synchronizing. (Reason: same as #3.)
Then this result is obvious: You can't do that. Your entire VM has one global 'prepared statement' with multiple threads calling setPreparedStatement and getPreparedStatement in more or less arbitrary orders. One thread calls setPreparedStatement, then another thread does, then the first tries to get the prepared statement the other one set, and it all goes to hades in a handbasket.
You can't do it this way. Heck, you can't even share a connection between two threads (as they'd be getting in each other's way and messing up your transactions).
If you don't quite get what static does (and it is, admittedly, a bit of an advanced topic), don't use it at all. You can write pretty much all the Java you'd ever want without static methods. The one exception is public static void main, which must be static; just make it the one-liner new MyClass().go();, with go() being a non-static method, and you're good to go.
I'd like to go one step further than rzwitserloot and presume that your AddressDAO uses StatementHandler, too.
The query behind AddressDAO.getAddressId(address) probably has one parameter, which matches the "1" from the exception, and it replaces the preparedStatement before the third parameter is set.
As proof, it would be sufficient to assign the result of AddressDAO.getAddressId(address) to a variable (and use it afterwards) before setting the prepared statement.
Alternatively, fetch the prepared statement into a local variable once and use that variable afterwards.
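To make the point concrete, here is a small, self-contained sketch. It uses plain strings instead of real PreparedStatements, and a hypothetical stand-in for AddressDAO.getAddressId, to show how a nested call routed through the same static holder silently replaces the statement the outer code is about to use:

```java
// Minimal stand-in for the static holder in the question.
class Holder {
    private static String statement;

    static void set(String s) { statement = s; }
    static String get() { return statement; }
}

class Demo {
    // Stand-in for AddressDAO.getAddressId(address): it also goes
    // through the shared static holder, replacing the UPDATE statement.
    static int getAddressId() {
        Holder.set("SELECT addressId FROM address WHERE address = ?"); // 1 parameter
        return 42;
    }

    static String reproduce() {
        Holder.set("UPDATE customer SET customerName = ?, addressId = ? WHERE customerId = ?"); // 3 parameters
        int addressId = getAddressId();   // quietly overwrites the holder
        // The outer code now binds parameter 3 against the 1-parameter SELECT:
        return Holder.get();              // not the UPDATE anymore
    }
}
```

Calling Demo.reproduce() returns the SELECT, not the UPDATE, which is exactly the "3 > 1" mismatch in the exception; fetching the address id into a variable before calling setPreparedStatement avoids the overwrite.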
Here's a little bit of background to give you some context to my question.
I am working on a project with a local company through my college, and the program that I am building must interact with a database and perform basic CRUD operations. The company is insisting that my code be unit tested and that I mock the connection to the database when performing unit tests. I have tried testing my code on a separate database but was told that doing this is an implementation test, not a unit test.
I have written a java class that contains methods which simply call and execute other java.sql methods, such as createStatement() and executeUpdate(...), so that instead of writing four or five lines of code to interact with the database, I can just call another piece of code to automate it slightly for me. Here is an example of one of the methods within my class:
public boolean insertIntoTable(Connection connection, IQueryBuilder queryBuilder, String tableName, String[] dbFields, String[] dbFieldsValues) {
    String query = queryBuilder.insertIntoStatement(tableName, dbFields, dbFieldsValues);
    try {
        Statement st = connection.createStatement();
        st.executeUpdate(query);
        st.close();
        connection.close();
        return true;
    } catch (SQLException exception) {
        return false;
    }
}
The above insertIntoTable method only relies on two other pieces of code: Connection and IQueryBuilder. It returns true if all of the lines within the try execute without fail, and false otherwise.
Connection is an interface from the java.sql package, so we know that its method calls and implementations should work properly.
IQueryBuilder is another interface of mine that returns formatted SQL Strings intended to be used by the executeUpdate method of the Statement interface. The implementation of IQueryBuilder that I will be using for this method has been unit tested and approved by the company, so I am assuming the passed implementation functions properly as well.
Thus, we reach my question. How do I unit test something like insertIntoTable that doesn't necessarily have any business logic and must use a mock connection?
Furthermore, what exactly am I testing here? That the method returns true or false? I can mock the connection to the database all I want, no problem. However, I feel that if I mock the connection I'm not really testing anything, since there's no way to know whether my code truly worked.
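One common answer: the unit under test here is the orchestration, so the test asserts interactions, i.e. that insertIntoTable asks the connection for a statement, executes the builder's query, closes both resources, and returns true. With a mocking library (Mockito etc.) this is a few lines; purely as an illustration using only the JDK, the sketch below hand-rolls a mock Connection with java.lang.reflect.Proxy and records every call. The insertIntoTable copy inside is a simplified stand-in for the method in the question (query builder omitted).

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

class MockDemo {
    // Records every method name invoked on the mocked JDBC objects.
    static List<String> calls = new ArrayList<>();

    static Connection mockConnection() {
        Statement st = (Statement) Proxy.newProxyInstance(
                MockDemo.class.getClassLoader(),
                new Class<?>[] { Statement.class },
                (proxy, method, args) -> {
                    calls.add("Statement." + method.getName());
                    // executeUpdate returns an int row count; everything else used here is void.
                    return method.getReturnType() == int.class ? 1 : null;
                });
        return (Connection) Proxy.newProxyInstance(
                MockDemo.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                (proxy, method, args) -> {
                    calls.add("Connection." + method.getName());
                    return method.getName().equals("createStatement") ? st : null;
                });
    }

    // Simplified stand-in for the insertIntoTable method in the question.
    static boolean insertIntoTable(Connection connection, String query) {
        try {
            Statement st = connection.createStatement();
            st.executeUpdate(query);
            st.close();
            connection.close();
            return true;
        } catch (java.sql.SQLException e) {
            return false;
        }
    }
}
```

The test then asserts on the recorded interactions rather than on real database state: a statement was created, the update executed, and both resources closed. Whether the SQL itself is correct is exactly what such a unit test does not cover; that is the integration test's job.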
I am writing an integration test between my JPA layer and the database to check that the SQL I've written is correct. The real database is Oracle; unfortunately, for reasons out of my control, my test database has to be Derby, so naturally there are some differences. For example, my JPA class has the following SQL String constant:
private static final String QUERY = "Select * from Users where regexp_like(user_code, '^SS(B)?N')";
Because Derby doesn't support regexp_like, I am using JMockit's Deencapsulation.setField to change the SQL on the fly, e.g.:
@Test
public void testMyDaoFind() {
    new Expectations() {
        {
            Deencapsulation.setField(MyClass.class, "QUERY", "Select * from Users");
        }
    };
    dao.findUsers();
}
Now, ignoring the fact that this isn't a good test, since it doesn't exercise the actual query that will run on the real database (this is purely to satisfy my curiosity as to what is going on), I am getting an SQL exception from EclipseLink/Derby complaining that regexp_like is not recognized as a function or a procedure.
If I place a break point on the line in the DAO that attempts to get the result list, I can see from a new watch that
JMockit has substituted the query correctly
getResultList() returns the data I'm expecting to see
If, however, I let the test run all the way through, I get the aforementioned exception?!
Strings in Java are not handled the way you are thinking. The Java source compiler replaces reads from fields holding string literals with the fixed "address" where the string is stored (in the class' constant pool); the field is not read anymore at runtime. So, even if JMockit replaces the string reference stored in the field, it makes no difference as that reference isn't seen by the client code using the field.
(BTW, why is the test putting the call to Deencapsulation.setField inside an expectation block? Such blocks are only meant for recording expectations...)
Bottom line, there is no way to achieve what you're trying to. Instead, either use an Oracle database for integration testing, or make all SQL code portable, avoiding RDBMS-specific functions such as regexp_like.
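The compile-time inlining can be observed directly. In the sketch below (with hypothetical field names), any expression built only from literals and static final String constants is itself a compile-time constant and gets interned, whereas the same expression over a non-final field is evaluated at runtime and produces fresh objects. This also points at the practical workaround: a non-final (non-constant) field really is read at runtime, so a tool like Deencapsulation.setField can take effect on it.

```java
class Queries {
    // A "constant variable" (JLS 4.12.4): reads of it are replaced by the
    // literal at compile time, so later changes to the field are invisible.
    static final String CONSTANT = "select * from Users";

    // Not a constant variable: every read really goes to the field at runtime.
    static String mutable = "select * from Users";
}
```

Concatenations involving only Queries.CONSTANT are folded and interned into one shared object, while concatenations over Queries.mutable build new Strings on every evaluation, and reassigning Queries.mutable is immediately visible to readers of the field.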
Abstract:
An application I work on uses TopLink, and I'm having trouble finding out if and when TopLink automatically uses bind variables.
Problem Description:
Let's say I need to do something akin to validating whether a vehicle full of people can travel somewhere, where each person could invalidate the trip, and I want to provide error messages so each person can get their restrictions removed before the trip starts. A simple way to do that is to validate each member of the list and display a list of errors. Let's say their info is stored in an Oracle database and I query for each rider's info using their unique id; this query will be executed once per member of the list. A naïve implementation would cause a hard parse, and a new execution path, for every query, despite only the unique id changing.
I've been reading about bind variables in sql, and how they allow for reuse of an execution path, avoiding cpu intensive hard parses.
A couple links on them are:
http://www.akadia.com/services/ora_bind_variables.html
https://oracle-base.com/articles/misc/literals-substitution-variables-and-bind-variables
An application I work on uses TopLink and does something similar to the situation described above. I'm looking to make the validation faster without changing the implementation much.
If I do something like the following:
Pseudo-code
public class UserValidator {

    private static final DataReadQuery GET_USER_INFO;

    static {
        GET_USER_INFO = new DataReadQuery(
                "select * from schema.userInfo ui where ui.id = #accountId");
        GET_USER_INFO.bindAllParameters();
        GET_USER_INFO.cacheStatement();
        GET_USER_INFO.addArgument("accountId", String.class);
    }

    void validate() {
        List<String> listOfUserAccountIds = getUserAccountIdList();
        for (String userAccountId : listOfUserAccountIds) {
            List<String> args = new ArrayList<>(1);
            args.add(userAccountId);
            doSomethingWithInfo(getUnitOfWork().executeQuery(GET_USER_INFO, args));
        }
    }
}
The Question:
Will a new execution path be parsed for each execution of GET_USER_INFO?
What I have found:
If I understand the bindAllParameters function inside the DatabaseQuery class correctly, it is simply type validation to stop SQL injection attacks.
There is also a shouldPrepare function inside the same class, but that seems to have more to do with allowing dynamic SQL where the number of arguments is variable. A prepared DatabaseQuery has its SQL written once, with just the values of the variables changing based on the argument list passed in, which sounds like simple substitution rather than bind variables.
So I'm at a lost.
This seems answered by the TopLink documentation:
By default, TopLink enables parameterized SQL but not prepared
statement caching.
So parameterized SQL (prepared statements) is used by default, just not statement caching. This means subsequent queries pay the cost of re-preparing the statement each time unless the driver optimizes this away. See the TopLink documentation for more information on the optimizations available within TopLink.
I am writing an Android app, in Java, which uses an SQLite database containing dozens of tables. I have a few Datasource classes set up to pull data from these tables and turn them into their respective objects. My problem is that I do not know the most efficient way to structure code that accesses the database in Java.
The Datasource classes are getting very repetitive and taking a long time to write. I would like to refactor the repetition into a parent class that will abstract away most of the work of accessing the database and creating objects.
The problem is, I am a PHP (loosely-typed) programmer and I'm having a very hard time solving this problem in a strictly-typed way.
Thinking in PHP, I'd do something like this:
public abstract class Datasource {
protected String table_name;
protected String entity_class_name;
public function get_all () {
// pseudo code -- assume db is a connection to our database, please.
Cursor cursor = db.query( "select * from {this.table_name}");
class_name = this.entity_class_name;
entity = new $class_name;
// loops through data in columns and populates the corresponding fields on each entity -- also dynamic
entity = this.populate_entity_with_db_hash( entity, cursor );
return entity;
}
}
public class ColonyDatasource extends Datasource {
public function ColonyDataSource( ) {
this.table_name = 'colony';
this.entity_class_name = 'Colony';
}
}
Then new ColonyDatasource().get_all() would get all the rows in the colony table and return a bunch of Colony objects, and creating the data source for each table would be as easy as writing a class that holds little more than a mapping from table information to class information.
Of course, the problem with this approach is that I have to declare my return types and can't use variable class names in Java. So now I'm stuck.
What should one do instead?
(I am aware that I could use a third-party ORM, but my question is how someone might solve this without one.)
First: you don't want to do these lines in your Java code:
class_name = this.entity_class_name;
entity = new $class_name;
It is possible to do what you are suggesting, and in languages such as Java it is called reflection.
https://en.wikipedia.org/wiki/Reflection_(computer_programming)
In this (and many cases) using reflection to do what you want is a bad idea for many reasons.
To list a few:
It is VERY expensive
You want the compiler to catch any mistakes, eliminating as many runtime errors as possible.
Java isn't really designed to quack like a duck: What's an example of duck typing in Java?
Your code should be structured in a different way to avoid this type of approach.
Sadly, I do believe that because Java is strictly typed, you can't automate this part of your code:
// loops through data in columns and populates the corresponding fields on each entity -- also dynamic
entity = this.populate_entity_with_db_hash( entity, cursor );
Unless you do it through reflection. Or shift approaches entirely and begin serializing your objects (not recommending it, just saying it's an option!). Or do something similar to Gson (https://code.google.com/p/google-gson/), i.e. turn the db hash into a JSON representation and then use Gson to turn that into an object.
What you could do is automate the "get_all" portion in the abstract class, since that would be nearly identical in every subclass, and use an interface so that the abstract class can safely call a method of its extending object. This gets you most of the way towards your "automated" approach, reducing the amount of code you must retype.
To do this we must consider the fact that Java has:
Generics (https://en.wikipedia.org/wiki/Generics_in_Java)
Function overloading.
Every Object in Java extends from the Object class, always.
Very Liskov-like https://en.wikipedia.org/wiki/Liskov_substitution_principle
Package scope: What is the default scope of a method in Java?
Try something like this (highly untested, and it most likely won't compile as-is):
// Notice the default (package-private) scoping
interface DataSourceInterface {
    // This is to allow our GenericDataSource to call a method that isn't defined yet.
    Object cursorToMe(Cursor cursor);
}

// Notice how we implement here, but with no implemented function declarations!
public abstract class GenericDataSource implements DataSourceInterface {

    protected SQLiteDatabase database;

    // And here we see Generics and Objects being friends to do what we want.
    // This basically says ? (wildcard) will be a list of random things,
    // but we do know that these random things extend from Object.
    protected List<? extends Object> getAll(String table, String[] columns) {
        List<Object> items = new ArrayList<Object>();
        Cursor cursor = database.query(table, columns, null, null, null, null, null);
        cursor.moveToFirst();
        while (!cursor.isAfterLast()) {
            // And see how we can call "cursorToMe" without error!
            // Depending on the extending class, cursorToMe will return
            // all sorts of different objects, but it will be an Object nonetheless!
            Object object = this.cursorToMe(cursor);
            items.add(object);
            cursor.moveToNext();
        }
        // Make sure to close the cursor
        cursor.close();
        return items;
    }
}

// Here we extend the abstract class, which also carries the implements clause.
// Therefore we must implement the function "cursorToMe".
public class ColonyDataSource extends GenericDataSource {

    protected String[] allColumns = {
            ColonyOpenHelper.COLONY_COLUMN_ID,
            ColonyOpenHelper.COLONY_COLUMN_TITLE,
            ColonyOpenHelper.COLONY_COLUMN_URL
    };

    // Notice our function overloading!
    // This getAll also widens the access modifier to allow more access.
    public List<Colony> getAll() {
        // See how we are casting to the proper list type?
        // We know that our getAll from super will return a list of Colonies.
        return (List<Colony>) super.getAll(ColonyOpenHelper.COLONY_TABLE_NAME, allColumns);
    }

    // Notice: here we actually implement our db-hash-to-object mapping.
    // This is the part that could otherwise only be done through reflection or the like,
    // so it is better to just have your DataSource object do what it knows how to do.
    public Colony cursorToMe(Cursor cursor) {
        Colony colony = new Colony();
        colony.setId(cursor.getLong(0));
        colony.setTitle(cursor.getString(1));
        colony.setUrl(cursor.getString(2));
        return colony;
    }
}
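As a follow-up refinement: with a type parameter on the base class, the unchecked cast in each subclass disappears entirely. The sketch below is my own variant; it replaces android.database.Cursor with a minimal Row interface so it can run outside Android. On-device, rowToEntity would take a Cursor and getAll would walk it with the moveToFirst/moveToNext loop exactly as above.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for a cursor row, so the sketch runs without Android.
interface Row {
    Object get(int column);
}

// The type parameter T removes the cast: getAll already returns List<T>.
abstract class TypedDataSource<T> {
    private final List<Row> table;   // stands in for the SQLite table/cursor

    protected TypedDataSource(List<Row> table) {
        this.table = table;
    }

    // Each concrete data source still owns the row-to-entity mapping.
    protected abstract T rowToEntity(Row row);

    public List<T> getAll() {
        List<T> items = new ArrayList<>();
        for (Row row : table) {      // on Android: the cursor loop
            items.add(rowToEntity(row));
        }
        return items;
    }
}

class Colony {
    final long id;
    final String title;

    Colony(long id, String title) {
        this.id = id;
        this.title = title;
    }
}

class ColonyDataSource extends TypedDataSource<Colony> {
    ColonyDataSource(List<Row> table) {
        super(table);
    }

    @Override
    protected Colony rowToEntity(Row row) {
        return new Colony((Long) row.get(0), (String) row.get(1));
    }
}
```

Callers now get `List<Colony> all = new ColonyDataSource(rows).getAll();` with no cast and no unchecked warning; the compiler checks the mapping end to end.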
If your queries are virtually identical except for certain parameters, consider using prepared statements and binding
In SQLite, do prepared statements really improve performance?
Another option that I have yet to explore fully is the Java Persistence API (JPA); there are projects that implement annotations along these lines. The majority of these come in the form of an ORM, which provides you with data access objects (http://en.wikipedia.org/wiki/Data_access_object).
An open-source project called Hibernate seems to be one of the go-to ORM solutions in Java, but I have also heard that it is a very heavy solution, especially when you start considering a mobile app.
An Android-specific ORM solution is OrmLite (http://ormlite.com/sqlite_java_android_orm.shtml); it is inspired by Hibernate but very much stripped down, without as many dependencies, for the very purpose of running on an Android phone.
I have read that people transition from one to the other very nicely.