INSERT INTO unique field - Java

I'm creating a parcel machine program. Every parcel has a unique parcelID which is exported to a MySQL DB. The problem is that every time I run the program, it starts counting parcelID from 0. I'm looking for a solution that will let me check the last parcelID in the database and create the next row after it.
Now it looks like this:
1. I create a new row in the DB (successfully) from the Java program.
2. I close the program after some time.
3. I run the program again and can't add another row because of the error "duplicate entry '1' for key 'PRIMARY'".
public static void post() throws Exception {
    int parcelID = Parcel.generateID();
    int clientMPNumber = Parcel.typeClientNumber();
    int orderPassword = Parcel.generatePass();
    try {
        Connection con = getConnection();
        PreparedStatement posted = con.prepareStatement(
            "INSERT INTO Parcels.Orders (parcelID, clientMPNumber, orderPassword) VALUES ('"
            + parcelID + "', '" + clientMPNumber + "', '" + orderPassword + "')");
        posted.executeUpdate();
    }
    catch (Exception e) {
        System.out.println(e);
    }
    finally {
        System.out.println("Insert completed");
    }
}
and the method is:
public static int generateID() {
    parcelID = parcelID + 1;
    return parcelID;
}

I'd let the database do the heavy lifting for you: just define the parcelID column as SERIAL (i.e., an auto-increment column) instead of trying to set its value yourself.

You shouldn't use ID generation in Java; just create an AUTO_INCREMENT column in the database table.

As described here, define your primary key column to auto-increment on each insert so your Java code doesn't have to calculate the primary key value manually each time.
If that is not a possibility, you need to show how you declare and initialize parcelID. In your current code, parcelID looks to be a class-level field that gets initialized to zero on each run, so you always get the same value: 1. You would need to initialize it with the last value from the database.
Also, implement the suggestion mentioned in the comments to your question regarding PreparedStatement placeholders.
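For illustration, a minimal sketch of the corresponding DDL issued through JDBC (the table and column names are taken from the question; the column types are assumptions):
try (Statement ddl = con.createStatement()) { // con is an open java.sql.Connection
    // With AUTO_INCREMENT, MySQL assigns parcelID itself on every INSERT,
    // so the Java code never has to compute or supply it.
    ddl.executeUpdate(
        "CREATE TABLE Parcels.Orders ("
        + " parcelID INT NOT NULL AUTO_INCREMENT PRIMARY KEY,"
        + " clientMPNumber INT,"
        + " orderPassword INT)");
}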

There are a couple of things to attend to.
// parcelID should be an INT AUTO_INCREMENT primary key.
try (PreparedStatement posted = con.prepareStatement(
        "INSERT INTO Parcels.Orders (clientMPNumber, orderPassword) "
        + "VALUES (?, ?)",
        Statement.RETURN_GENERATED_KEYS)) {
    posted.setInt(1, clientMPNumber);
    posted.setInt(2, orderPassword);
    posted.executeUpdate();
    try (ResultSet rsKey = posted.getGeneratedKeys()) {
        if (rsKey.next()) {
            int parcelID = rsKey.getInt(1);
            return parcelID; // Or such
        }
    }
}
The database can deal with automatic numbering best, so that two transactions at the same time do not steal the same "next" number.
You should close things like Connection, PreparedStatement and ResultSet. This is best done with the slightly awkward try-with-resources syntax, which closes them automatically, even on exception or return.
PreparedStatements should be used with ? placeholders. This takes care of escaping special characters like ' in the password, and also prevents SQL injection.
Stylistically it is better to catch SQLException rather than Exception; better still, declare throws SQLException and let the caller handle it.

Related

JavaFX Address Book updating contacts with same info (SQL)

So I am finishing up an Address book in JavaFX. The user can add, edit, delete and view all the contacts that are stored in the database. However, I have a small problem: if multiple records share one piece of information, then when I edit it for one of them, it updates it for all of them. Here's an example:
Let's say there are three people in my DB (John, Charli, and another John),
and let's say all three people have different phone numbers. I want to update the first John's phone number, but when I change it, it sets both Johns' phone numbers to the new value I entered.
This issue 100% has something to do with how I wrote my SQL for the update function. Here's what I have now.
public void updateData(String column, String newValue, String id) throws SQLException {
    String updateQuery = "UPDATE contacts SET " + column + " = ? WHERE " + column + " = ? ";
    try {
        PreparedStatement psmt = DBConnect.getConnection().prepareStatement(updateQuery);
        psmt.setString(1, newValue);
        psmt.setString(2, id);
        psmt.executeUpdate();
    } catch (SQLException ex) {
        ex.printStackTrace();
    }
}
updateData is called by methods whose fields I want to change, like so:
public void changePhoneNumberCellEvent(CellEditEvent editedCell) throws SQLException {
    Person person = table_contact.getSelectionModel().getSelectedItem();
    String oldPhoneNumber = person.getPhone_number();
    person.setPhone_number(editedCell.getNewValue().toString());
    updateData("phone_number", editedCell.getNewValue().toString(), oldPhoneNumber);
}
Now, I know my SQL is breaking because it finds multiple rows with the same value in, for example, the first_name column. Is there a way I can edit this SQL so it only changes the info of the person I want to change? I did some research and the DISTINCT keyword was brought up, but I don't think that can help, because two people could share the same first name/last name, etc.
Thanks.
Always use a primary key as the unique identifier for a record. When updating or deleting, you should then always match on that unique identifier. Otherwise your queries will cause side effects, for example deleting multiple records that share the same data in a column.
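A minimal sketch of what that could look like here, assuming the contacts table has an integer id primary key and the Person model exposes it via getId() (both are assumptions, not shown in the question):
public void updateData(String column, String newValue, int id) throws SQLException {
    // The column name must come from a fixed whitelist, never from user input.
    String updateQuery = "UPDATE contacts SET " + column + " = ? WHERE id = ?";
    try (PreparedStatement psmt = DBConnect.getConnection().prepareStatement(updateQuery)) {
        psmt.setString(1, newValue);
        psmt.setInt(2, id);
        psmt.executeUpdate();
    }
}
The caller would then pass person.getId() instead of the old phone number, so exactly one row is matched no matter how many contacts share a value.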

Spring jdbc template executing batch updates has bad performance [duplicate]

I'm trying to find the fastest way to do batch inserts.
I tried to insert several batches with jdbcTemplate.update(String sql), where sql was built with a StringBuilder and looked like:
INSERT INTO TABLE(x, y, i) VALUES(1,2,3), (1,2,3), ... , (1,2,3)
Batch size was exactly 1000. I inserted nearly 100 batches.
I checked the time using StopWatch and found out insert time:
min[38ms], avg[50ms], max[190ms] per batch
I was glad but I wanted to make my code better.
After that, I tried to use jdbcTemplate.batchUpdate like this:
jdbcTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
    @Override
    public void setValues(PreparedStatement ps, int i) throws SQLException {
        // ...
    }

    @Override
    public int getBatchSize() {
        return 1000;
    }
});
where sql looked like:
INSERT INTO TABLE(x, y, i) VALUES(1,2,3);
and I was disappointed! jdbcTemplate executed each single insert of the 1000-line batch separately. I looked at mysql_log and found a thousand inserts there.
I checked the time using StopWatch and found out insert time:
min[900ms], avg[1100ms], max[2000ms] per Batch
So, can anybody explain to me why jdbcTemplate does separate inserts in this method? Why is the method named batchUpdate?
Or maybe I am using this method in the wrong way?
These parameters in the JDBC connection URL can make a big difference in the speed of batched statements; in my experience, they speed things up:
?useServerPrepStmts=false&rewriteBatchedStatements=true
See: JDBC batch insert performance
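For illustration, a sketch of where those flags go (host, database, and credentials are placeholders):
String url = "jdbc:mysql://localhost:3306/mydb"
    + "?useServerPrepStmts=false&rewriteBatchedStatements=true";
// rewriteBatchedStatements=true lets the MySQL driver rewrite a JDBC batch
// into multi-row INSERT ... VALUES (...), (...) statements on the wire.
Connection con = DriverManager.getConnection(url, "user", "password");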
I found a major improvement setting the argTypes array in the call.
In my case, with Spring 4.1.4 and Oracle 12c, for insertion of 5000 rows with 35 fields:
jdbcTemplate.batchUpdate(insert, parameters); // Takes 7 seconds
jdbcTemplate.batchUpdate(insert, parameters, argTypes); // Takes 0.08 seconds!!!
The argTypes param is an int array where you set each field in this way:
int[] argTypes = new int[35];
argTypes[0] = Types.VARCHAR;
argTypes[1] = Types.VARCHAR;
argTypes[2] = Types.VARCHAR;
argTypes[3] = Types.DECIMAL;
argTypes[4] = Types.TIMESTAMP;
.....
I debugged org\springframework\jdbc\core\JdbcTemplate.java and found that most of the time was consumed trying to determine the type of each field, and this was done for every record.
Hope this helps!
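To make the shape of that call concrete, here is a sketch with made-up names (the table, columns, and the buildRows() helper are illustrative; the answer's real insert had 35 fields):
int[] argTypes = { Types.VARCHAR, Types.VARCHAR, Types.DECIMAL };
String insert = "INSERT INTO my_table (a, b, c) VALUES (?, ?, ?)";
List<Object[]> parameters = buildRows(); // hypothetical helper; one Object[3] per row
// With argTypes supplied, Spring skips the per-value type lookup described above.
jdbcTemplate.batchUpdate(insert, parameters, argTypes);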
I also faced the same issue with the Spring JDBC template. Probably the statement was executed and committed on every insert, or in chunks, which slowed things down.
I replaced the jdbcTemplate.batchUpdate() code with plain JDBC batch insertion code and found a major performance improvement.
DataSource ds = jdbcTemplate.getDataSource();
Connection connection = ds.getConnection();
connection.setAutoCommit(false);
String sql = "insert into employee (name, city, phone) values (?, ?, ?)";
PreparedStatement ps = connection.prepareStatement(sql);
final int batchSize = 1000;
int count = 0;
for (Employee employee : employees) {
    ps.setString(1, employee.getName());
    ps.setString(2, employee.getCity());
    ps.setString(3, employee.getPhone());
    ps.addBatch();
    ++count;
    if (count % batchSize == 0 || count == employees.size()) {
        ps.executeBatch();
        ps.clearBatch();
    }
}
connection.commit();
ps.close();
Check this link as well
JDBC batch insert performance
Simply use a transaction. Add @Transactional to the method.
Be sure to declare the correct TX manager if using several datasources: @Transactional("dsTxManager"). I have a case inserting 60000 records. It takes about 15 s, with no other tweak:
@Transactional("myDataSourceTxManager")
public void save(...) {
    ...
    jdbcTemplate.batchUpdate(query, new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            ...
        }

        @Override
        public int getBatchSize() {
            if (data == null) {
                return 0;
            }
            return data.size();
        }
    });
}
Change your SQL insert to a single-row statement with placeholders, INSERT INTO TABLE(x, y, i) VALUES(?,?,?). The framework then creates the loop for you.
For example:
public void insertBatch(final List<Customer> customers) {
    String sql = "INSERT INTO CUSTOMER " +
        "(CUST_ID, NAME, AGE) VALUES (?, ?, ?)";
    getJdbcTemplate().batchUpdate(sql, new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            Customer customer = customers.get(i);
            ps.setLong(1, customer.getCustId());
            ps.setString(2, customer.getName());
            ps.setInt(3, customer.getAge());
        }

        @Override
        public int getBatchSize() {
            return customers.size();
        }
    });
}
If you have something like this, Spring will do something like:
for (int i = 0; i < getBatchSize(); i++) {
    // execute the prepared statement with the parameters for the current iteration
}
The framework first creates a PreparedStatement from the query (the sql variable), then the setValues method is called and the statement is executed. That is repeated as many times as you specify in the getBatchSize() method. So the right way to write the insert statement is with only one VALUES clause.
You can take a look at http://docs.spring.io/spring/docs/3.0.x/reference/jdbc.html
I also had a bad time with the Spring JDBC batch template. In my case it would be insane to use pure JDBC, so instead I used NamedParameterJdbcTemplate, which was a must-have in my project. But it was very slow at inserting hundreds of thousands of rows into the database.
To see what was going on, I sampled it with VisualVM during the batch update, and voilà: what was slowing the process down was that, while setting the parameters, Spring JDBC was querying the database for the metadata of each parameter, and it seemed to be doing so for every parameter of every row, every time. So I just taught Spring to ignore the parameter types (as the Spring documentation warns in the section on batch operations with a list of objects):
@Bean(name = "named-jdbc-tenant")
public synchronized NamedParameterJdbcTemplate getNamedJdbcTemplate(@Autowired TenantRoutingDataSource tenantDataSource) {
    System.setProperty("spring.jdbc.getParameterType.ignore", "true");
    return new NamedParameterJdbcTemplate(tenantDataSource);
}
Note: the system property must be set before creating the JDBC template object. It might be possible to just set it in application.properties, but this solved it and I never touched it again.
I don't know if this will work for you, but here's a Spring-free way that I ended up using. It was significantly faster than the various Spring methods I tried. I even tried using the JDBC template batch update method the other answer describes, but even that was slower than I wanted. I'm not sure what the deal was and the Internets didn't have many answers either. I suspected it had to do with how commits were being handled.
This approach is just straight JDBC using the java.sql packages and PreparedStatement's batch interface. This was the fastest way that I could get 24M records into a MySQL DB.
I more or less just built up collections of "record" objects and then called the below code in a method that batch inserted all the records. The loop that built the collections was responsible for managing the batch size.
I was trying to insert 24M records into a MySQL DB and it was going ~200 records per second using Spring batch. When I switched to this method, it went up to ~2500 records per second, so my 24M record load went from a theoretical 1.5 days to about 2.5 hours.
First create a connection...
Connection conn = null;
try {
    Class.forName("com.mysql.jdbc.Driver");
    conn = DriverManager.getConnection(connectionUrl, username, password);
} catch (SQLException e) {
} catch (ClassNotFoundException e) {
}
Then create a prepared statement and load it with batches of values for insert, and then execute as a single batch insert...
PreparedStatement ps = null;
try {
    conn.setAutoCommit(false);
    ps = conn.prepareStatement(sql); // INSERT INTO TABLE(x, y, i) VALUES(?,?,?)
    for (MyRecord record : records) {
        try {
            ps.setString(1, record.getX());
            ps.setString(2, record.getY());
            ps.setString(3, record.getI());
            ps.addBatch();
        } catch (Exception e) {
            ps.clearParameters();
            logger.warn("Skipping record...", e);
        }
    }
    ps.executeBatch();
    conn.commit();
} catch (SQLException e) {
} finally {
    if (null != ps) {
        try { ps.close(); } catch (SQLException e) {}
    }
}
Obviously I've removed error handling, and the query and the Record object are notional and whatnot.
Edit:
Since your original question was comparing the insert into foobar values (?,?,?), (?,?,?)...(?,?,?) method to Spring batch, here's a more direct response to that:
It looks like your original method is likely the fastest way to do bulk data loads into MySQL without using something like the "LOAD DATA INFILE" approach. A quote from the MySQL docs (http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html):
If you are inserting many rows from the same client at the same time, use INSERT statements with multiple VALUES lists to insert several rows at a time. This is considerably faster (many times faster in some cases) than using separate single-row INSERT statements.
You could modify the Spring JDBC Template batchUpdate method to do an insert with multiple VALUES specified per 'setValues' call, but you'd have to manually keep track of the index values as you iterate over the set of things being inserted. And you'd run into a nasty edge case at the end when the total number of things being inserted isn't a multiple of the number of VALUES lists you have in your prepared statement.
If you use the approach I outline, you could do the same thing (use a prepared statement with multiple VALUES lists) and then when you get to that edge case at the end, it's a little easier to deal with because you can build and execute one last statement with exactly the right number of VALUES lists. It's a bit hacky, but most optimized things are.
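A sketch of that combined approach, reusing the conn, records, and MyRecord names from the code above (rowsPerInsert is an assumption): one statement shape for the full chunks, and a final statement sized exactly for the leftover rows:
int rowsPerInsert = 100;
int offset = 0;
while (offset < records.size()) {
    int rows = Math.min(rowsPerInsert, records.size() - offset);
    // Build "INSERT INTO TABLE(x, y, i) VALUES (?,?,?), (?,?,?), ..." with `rows` lists.
    StringBuilder sb = new StringBuilder("INSERT INTO TABLE(x, y, i) VALUES ");
    for (int r = 0; r < rows; r++) {
        sb.append(r == 0 ? "(?,?,?)" : ", (?,?,?)");
    }
    try (PreparedStatement multiPs = conn.prepareStatement(sb.toString())) {
        for (int r = 0; r < rows; r++) {
            MyRecord record = records.get(offset + r);
            multiPs.setString(r * 3 + 1, record.getX());
            multiPs.setString(r * 3 + 2, record.getY());
            multiPs.setString(r * 3 + 3, record.getI());
        }
        multiPs.executeUpdate();
    }
    offset += rows;
}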
Solution given by @Rakesh worked for me.
Significant improvement in performance: the earlier time was 8 min; with this solution it takes less than 2 min.
DataSource ds = jdbcTemplate.getDataSource();
Connection connection = ds.getConnection();
connection.setAutoCommit(false);
String sql = "insert into employee (name, city, phone) values (?, ?, ?)";
PreparedStatement ps = connection.prepareStatement(sql);
final int batchSize = 1000;
int count = 0;
for (Employee employee : employees) {
    ps.setString(1, employee.getName());
    ps.setString(2, employee.getCity());
    ps.setString(3, employee.getPhone());
    ps.addBatch();
    ++count;
    if (count % batchSize == 0 || count == employees.size()) {
        ps.executeBatch();
        ps.clearBatch();
    }
}
connection.commit();
ps.close();
I encountered a serious performance issue with JdbcBatchItemWriter.write() (link) from Spring Batch and found that the write logic eventually delegates to JdbcTemplate.batchUpdate().
Adding the Java system property spring.jdbc.getParameterType.ignore=true fixed the performance issue entirely (from 200 records per second to ~5000).
The fix was tested and works on both PostgreSQL and MS SQL (it might not be dialect specific).
... and, ironically, Spring documents this behaviour in a "note" section: link
In such a scenario, with automatic setting of values on an underlying PreparedStatement, the corresponding JDBC type for each value needs to be derived from the given Java type. While this usually works well, there is a potential for issues (for example, with Map-contained null values). Spring, by default, calls ParameterMetaData.getParameterType in such a case, which can be expensive with your JDBC driver. You should use a recent driver version and consider setting the spring.jdbc.getParameterType.ignore property to true (as a JVM system property or in a spring.properties file in the root of your classpath) if you encounter a performance issue — for example, as reported on Oracle 12c (SPR-16139).
Alternatively, you might consider specifying the corresponding JDBC types explicitly, either through a 'BatchPreparedStatementSetter' (as shown earlier), through an explicit type array given to a 'List<Object[]>' based call, through 'registerSqlType' calls on a custom 'MapSqlParameterSource' instance, or through a 'BeanPropertySqlParameterSource' that derives the SQL type from the Java-declared property type even for a null value.
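As a small illustration of the registerSqlType option mentioned in that quote (the parameter name and value here are made up):
MapSqlParameterSource params = new MapSqlParameterSource();
params.addValue("phone", employee.getPhone()); // value may be null
// Register the JDBC type up front so Spring never has to ask the driver for it.
params.registerSqlType("phone", Types.VARCHAR);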

Best way to find out if user is already registered

How do I find out that a user is already registered, e.g. in a web application? Say I have a database of 1 million users. Comparing every single row in the database each time is inefficient. Is there a more optimal approach?
Comparing every single row in the database each time is inefficient
I gather that you're hauling the entire DB table contents into Java's memory by a SELECT * FROM User and then looping over every row in a while loop and comparing its username by equals() like below, is that true?
public boolean exists(String username) throws SQLException {
    // ... Declare, etc.
    statement = connection.prepareStatement("SELECT * FROM User");
    resultSet = statement.executeQuery();
    while (resultSet.next()) {
        if (username.equals(resultSet.getString("username"))) {
            return true;
        }
    }
    return false;
    // ... Close, etc (in finally!)
}
Then that's indeed very inefficient. You should index the username column (it probably already is indexed, given a UNIQUE constraint) and then make use of the SQL WHERE clause. It will return exactly zero or one row, and the DB will do its very best to find it, which is usually much, much faster than the above Java approach.
public boolean exists(String username) throws SQLException {
    // ... Declare, etc.
    statement = connection.prepareStatement("SELECT id FROM User WHERE username = ?");
    statement.setString(1, username);
    resultSet = statement.executeQuery();
    return resultSet.next();
    // ... Close, etc (in finally!)
}
In a nutshell: as long as you make proper use of DB indexes and write SQL queries that return exactly the information you need, without any need for filtering in Java or additional queries afterwards, it will be the most efficient approach.
You can optimize your query. Maintain a unique username (or whatever your unique parameter is) for each user, and when someone tries to register, pass that username to the query and check whether it is already registered or not.
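A minimal sketch of that check, assuming the same User table as in the answer above:
public boolean isRegistered(Connection connection, String username) throws SQLException {
    try (PreparedStatement statement = connection.prepareStatement(
            "SELECT COUNT(*) FROM User WHERE username = ?")) {
        statement.setString(1, username);
        try (ResultSet resultSet = statement.executeQuery()) {
            resultSet.next(); // COUNT always returns exactly one row
            return resultSet.getInt(1) > 0;
        }
    }
}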

Total Number of Rows: ResultSet getRow Method

Read the following code:
public class selectTable {
    public static ResultSet rSet;
    public static int total = 0;

    public static ResultSet onLoad_Opetations(Connection Conn, int rownum, String sql) {
        int rowNum = rownum;
        int totalrec = 0;
        try {
            Conn = ConnectionODBC.getConnection();
            Statement stmt = Conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
            String sqlStmt = sql;
            rSet = stmt.executeQuery(sqlStmt);
            total = rSet.getRow();
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
        System.out.println("Total Number of Records=" + totalrec);
        return rSet;
    }
}
The following code doesn't show the actual total:
total = rSet.getRow();
My JTable displays 4 records, but total = 0. When I evaluate it in the debugger, it shows:
total = (int) 0;
rather than total = (int) 4.
And if I use rSet.last(); before the line total = rSet.getRow();, then total shows the accurate value, 4, but rSet returns nothing and the JTable is empty.
Update me!
BalusC's answer is right! But to spell it out with your instance variables:
rSet.last();
total = rSet.getRow();
and then, which you are missing:
rSet.beforeFirst();
The remaining code stays the same, and you will get your desired result.
You need to call ResultSet#beforeFirst() to put the cursor back to before the first row before you return the ResultSet object. This way the user will be able to use next() the usual way.
resultSet.last();
rows = resultSet.getRow();
resultSet.beforeFirst();
return resultSet;
However, you have bigger problems with the code given so far. It's leaking DB resources, and it is also not a proper OOP approach. Look up the DAO pattern. Ultimately you'd like to end up with:
public List<Operations> list() throws SQLException {
    // Declare Connection, Statement, ResultSet, List<Operation>.
    try {
        // Use Connection, Statement, ResultSet.
        while (resultSet.next()) {
            // Add new Operation to list.
        }
    } finally {
        // Close ResultSet, Statement, Connection.
    }
    return list;
}
This way the caller has just to use List#size() to know about the number of records.
The getRow() method retrieves the current row number, not the number of rows. So before starting to iterate over the ResultSet, getRow() returns 0.
To get the actual number of rows returned after executing your query, there is no free method: you are supposed to iterate over it.
Yet, if you really need to retrieve the total number of rows before processing them, you can:
1. Call ResultSet.last()
2. Call ResultSet.getRow() to get the total number of rows
3. Call ResultSet.beforeFirst()
4. Process the ResultSet normally
As others have answered, there is no way to get the count of rows without iterating to the end. You could do that, but you may not want to; note the following points:
1. For many RDBMS systems, ResultSet is a streaming API, meaning it does not load (or maybe even fetch) all the rows from the database server. See this question on SO. By iterating to the end of the ResultSet you may add significantly to the time taken to execute in certain cases.
2. A default ResultSet object is not updatable and has a cursor that moves forward only. This means that unless you execute the query with ResultSet.TYPE_SCROLL_INSENSITIVE, rSet.beforeFirst() will throw an SQLException. The reason is that a scrollable cursor comes with a cost. According to the documentation, it may throw SQLFeatureNotSupportedException even if you create a scrollable cursor.
3. Populating and returning a List<Operations> means that you will also need extra memory. For very large result sets this will not work at all.
So the big question is: which RDBMS? All in all, I would suggest not logging the number of records.
A better way would be to use SQL's SELECT COUNT statement: when you need the count of the number of rows returned, execute a second query that returns exactly that number.
try {
    Conn = ConnectionODBC.getConnection();
    Statement stmt = Conn.createStatement();
    String sqlStmt = sql;
    String sqlrow = "SELECT COUNT(*) FROM (" + sqlStmt + ") rowquery";
    ResultSet total = stmt.executeQuery(sqlrow);
    total.next();
    int rowcount = total.getInt(1);
}
The getRow() method will always yield 0 right after a query:
ResultSet.getRow(): retrieves the current row number.
Second, you output totalrec but never assign anything to it.
You can't get the number of rows returned in a ResultSet without iterating through it. And why would you return a ResultSet without iterating through it? There'd be no point in executing the query in the first place.
A better solution would be to separate persistence from view. Create a separate Data Access Object that handles all the database queries for you. Let it get the values to be displayed in the JTable, load them into a data structure, and then return it to the UI for display. The UI will have all the information it needs then.
I have solved that problem. The only thing I do is:
private int num_rows;
And then, in your method that uses the ResultSet, put this code:
while (this.rs.next()) {
    this.num_rows++;
}
That's all
The best way to get the number of rows from a ResultSet is to run a COUNT query against the database and then read the value with the rs.getInt(1) method.
From my code, look at it:
String query = "SELECT COUNT(*) FROM table";
ResultSet rs = new DatabaseConnection().selectData(query);
rs.next();
int rows = rs.getInt(1);
This returns the number of rows fetched from the database as an int. Here, DatabaseConnection().selectData() is my own code for accessing the database.
I was also stuck here but then solved...

How to protect against SQL injection when the WHERE clause is built dynamically from search form?

I know that the only really correct way to protect SQL queries against SQL injection in Java is using PreparedStatements.
However, such a statement requires that the basic structure (selected attributes, joined tables, the structure of the WHERE condition) does not vary.
I have here a JSP application that contains a search form with about a dozen fields. But the user does not have to fill in all of them, just the ones he needs. Thus my WHERE condition is different every time.
What should I do to still prevent SQL injection?
Escape the user-supplied values? Write a wrapper class that builds a PreparedStatement each time? Or something else?
The database is PostgreSQL 8.4, but I would prefer a general solution.
Thanks a lot in advance.
Have you seen the NamedParameterJdbcTemplate?
The NamedParameterJdbcTemplate class adds support for programming JDBC statements using named parameters (as opposed to programming JDBC statements using only classic placeholder ('?') arguments).
You can do stuff like:
String sql = "select count(0) from T_ACTOR where first_name = :first_name";
SqlParameterSource namedParameters = new MapSqlParameterSource("first_name", firstName);
return namedParameterJdbcTemplate.queryForInt(sql, namedParameters);
and build your query string dynamically, and then build your SqlParameterSource similarly.
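For illustration, a sketch of doing both together (the field variables and the "1 = 1" seed clause are illustrative):
StringBuilder sql = new StringBuilder("select count(0) from T_ACTOR where 1 = 1");
MapSqlParameterSource params = new MapSqlParameterSource();
if (firstName != null && !firstName.isEmpty()) {
    sql.append(" and first_name = :first_name"); // clause and value are added together
    params.addValue("first_name", firstName);
}
if (lastName != null && !lastName.isEmpty()) {
    sql.append(" and last_name = :last_name");
    params.addValue("last_name", lastName);
}
int count = namedParameterJdbcTemplate.queryForObject(sql.toString(), params, Integer.class);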
I think that fundamentally, this question is the same as the other questions that I referred to in my comment above, but I do see why you disagree: you're changing what's in your WHERE clause based on what the user supplied.
That still isn't the same as using user-supplied data in the SQL query, though, which you definitely want to use PreparedStatement for. It's actually very similar to the standard problem of needing to use an in statement with PreparedStatement (e.g., where fieldName in (?, ?, ?) but you don't know in advance how many ? you'll need). You just need to build the query dynamically, and add the parameters dynamically, based on information the user supplied (but not directly including that information in the query).
Here's an example of what I mean:
// You'd have just the one instance of this map somewhere:
Map<String,String> fieldNameToColumnName = new HashMap<String,String>();

// You'd actually load these from configuration somewhere rather than hard-coding them
fieldNameToColumnName.put("title", "TITLE");
fieldNameToColumnName.put("firstname", "FNAME");
fieldNameToColumnName.put("lastname", "LNAME");
// ...etc.

// Then in a class somewhere that's used by the JSP, have the code that
// processes requests from users:
public AppropriateResultBean[] doSearch(Map<String,String> parameters)
    throws SQLException, IllegalArgumentException
{
    StringBuilder sql;
    String columnName;
    List<String> paramValues;
    AppropriateResultBean[] rv;

    // Start the SQL statement; again you'd probably load the prefix SQL
    // from configuration somewhere rather than hard-coding it here.
    sql = new StringBuilder(2000);
    sql.append("select appropriate,fields from mytable where ");

    // Loop through the given parameters.
    // This loop assumes you don't need to preserve some sort of order
    // in the params, but is easily adjusted if you do.
    paramValues = new ArrayList<String>(parameters.size());
    for (Map.Entry<String,String> entry : parameters.entrySet())
    {
        // Only process fields that aren't blank.
        if (entry.getValue().length() > 0)
        {
            // Get the DB column name that corresponds to this form
            // field name.
            columnName = fieldNameToColumnName.get(entry.getKey());
            // ^-- You'll probably need to prefix this with something, it's not likely to be part of this instance
            if (columnName == null)
            {
                // Somehow, the user got an unknown field into the request
                // and that got past the code calling us (perhaps the code
                // calling us just used `request.getParameterMap` directly).
                // We don't allow unknown fields.
                throw new IllegalArgumentException(/* ... */);
            }
            if (paramValues.size() > 0)
            {
                sql.append("and ");
            }
            sql.append(columnName);
            sql.append(" = ? ");
            paramValues.add(entry.getValue());
        }
    }

    // I'll assume no parameters is an invalid case, but you can adjust the
    // below if that's not correct.
    if (paramValues.size() == 0)
    {
        // My read of the problem being solved suggests this is not an
        // exceptional condition (users frequently forget to fill things
        // in), and so I'd use a flag value (null) for this case. But you
        // might go with an exception (you'd know best), either way.
        rv = null;
    }
    else
    {
        // Do the DB work (below)
        rv = this.buildBeansFor(sql.toString(), paramValues);
    }

    // Done
    return rv;
}

private AppropriateResultBean[] buildBeansFor(
    String sql,
    List<String> paramValues
)
    throws SQLException
{
    PreparedStatement ps = null;
    Connection con = null;
    ResultSet rs = null;
    int index;
    AppropriateResultBean[] rv;

    assert sql != null && sql.length() > 0;
    assert paramValues != null && paramValues.size() > 0;

    try
    {
        // Get a connection
        con = /* ...however you get connections, whether it's JNDI or some conn pool or ... */;

        // Prepare the statement
        ps = con.prepareStatement(sql);

        // Fill in the values
        index = 0;
        for (String value : paramValues)
        {
            ps.setString(++index, value);
        }

        // Execute the query
        rs = ps.executeQuery();
        /* ...loop through results, creating AppropriateResultBean instances
         * and filling in your array/list/whatever...
         */
        rv = /* ...convert the result to what we'll return */;

        // Close the DB resources (you probably have utility code for this)
        rs.close();
        rs = null;
        ps.close();
        ps = null;
        con.close(); // ...assuming pool overrides `close` and expects it to mean "release back to pool", most good pools do
        con = null;

        // Done
        return rv;
    }
    finally
    {
        /* If `rs`, `ps`, or `con` is !null, we're processing an exception.
         * Clean up the DB resources *without* allowing any exception to be
         * thrown, as we don't want to hide the original exception.
         */
    }
}
Note how we use information the user supplied us (the fields they filled in), but we didn't ever put anything they actually supplied directly in the SQL we executed, we always ran it through PreparedStatement.
The best solution is to use a middle tier that does data validation and binding and acts as an intermediary between the JSP and the database.
There might be a list of column names, but it's finite and countable. Let the JSP worry about making the user's selection known to the middle tier; let the middle tier bind and validate before sending it on to the database.
Here is a useful technique for this particular case, where you have a number of clauses in your WHERE but you don't know in advance which ones you need to apply.
Will your user search by title?
select id, title, author from book where title = :title
Or by author?
select id, title, author from book where author = :author
Or both?
select id, title, author from book where title = :title and author = :author
Bad enough with only 2 fields. The number of combinations (and therefore of distinct PreparedStatements) goes up exponentially with the number of conditions. True, chances are you have enough room in your PreparedStatement pool for all those combinations, and to build the clauses programmatically in Java, you just need one if branch per condition. Still, it's not that pretty.
You can fix this in a neat way by simply composing a SELECT that looks the same regardless of whether each individual condition is needed.
I hardly need mention that you use a PreparedStatement as suggested by the other answers, and a NamedParameterJdbcTemplate is nice if you're using Spring.
Here it is:
select id, title, author
from book
where coalesce(:title, title) = title
and coalesce(:author, author) = author
Then you supply NULL for each unused condition. coalesce() is a function that returns its first non-null argument. Thus if you pass NULL for :title, the first clause is where coalesce(NULL, title) = title which evaluates to where title = title which, being always true, has no effect on the results.
Depending on how the optimiser handles such queries, you may take a performance hit. But probably not in a modern database.
(Though similar, this problem is not the same as the IN (?, ?, ?) clause problem where you don't know the number of values in the list, since here you do have a fixed number of possible clauses and you just need to activate/disactivate them individually.)
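For completeness, a sketch of the Java side with NamedParameterJdbcTemplate (bookRowMapper and the two filter variables are assumptions; the explicit Types.VARCHAR avoids untyped-NULL problems on some drivers):
MapSqlParameterSource params = new MapSqlParameterSource()
    .addValue("title", titleFilter, Types.VARCHAR)    // null when the condition is unused
    .addValue("author", authorFilter, Types.VARCHAR); // null when the condition is unused
List<Book> books = namedParameterJdbcTemplate.query(
    "select id, title, author from book"
    + " where coalesce(:title, title) = title"
    + " and coalesce(:author, author) = author",
    params, bookRowMapper);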
I'm not sure whether Java has an equivalent of the quote() method that was widely used in PHP's PDO; that would allow a more flexible query-building approach.
Also, one possible idea could be to create a special class that processes the filter criteria and saves all the placeholders and their values onto a stack, as sketched below.
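A sketch of that idea (all names are illustrative; nothing here is an existing API):
class WhereBuilder {
    // Needs java.sql.Connection, PreparedStatement, SQLException, java.util.ArrayList, java.util.List.
    private final StringBuilder clause = new StringBuilder();
    private final List<Object> values = new ArrayList<Object>();

    // Column names must come from your own whitelist, never from user input.
    WhereBuilder add(String column, Object value) {
        clause.append(clause.length() == 0 ? " where " : " and ")
              .append(column).append(" = ?");
        values.add(value);
        return this;
    }

    PreparedStatement prepare(Connection con, String baseSql) throws SQLException {
        PreparedStatement ps = con.prepareStatement(baseSql + clause);
        for (int i = 0; i < values.size(); i++) {
            ps.setObject(i + 1, values.get(i)); // placeholders and values stay in sync
        }
        return ps;
    }
}
Usage would then be something like new WhereBuilder().add("title", title).add("author", author).prepare(con, "select id, title, author from book"), adding only the conditions the user actually filled in.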
