I want to be able to update parts of a row in a table in an Oracle database. The table has a number column (which is the primary key) and 5 other columns.
The method takes an object and compares it with the object with the same primary key in the database. It should then compare the columns and update only those that have changed. I've thought of a few different ways of doing this:
Perform a check for every single possible permutation of changes (a very long-winded way of doing it).
For example:
public boolean updateOrder(Order o, Connection con) {
int rowUpdated = 0;
String SQLString = "";
Order origOrder = getOrder(o.getOno(), con);
if (origOrder.getCustomerNo() != o.getCustomerNo()
&& origOrder.getEmployeeNo() == o.getEmployeeNo()
&& origOrder.getReceived().compareTo(o.getReceived()) == 0
&& origOrder.getBeginDate().compareTo(o.getBeginDate()) == 0
&& origOrder.getEndDate().compareTo(o.getEndDate()) == 0
&& origOrder.getProjectLocation().compareTo(o.getProjectLocation()) == 0) {
SQLString = "UPDATE ORDERS SET "
+ "CNO = " + o.getCustomerNo()
+ "where ONO = " + o.getOno();
}
PreparedStatement statement = null;
try {
//== insert value----- Unit of work start
con.setAutoCommit(false);
statement = con.prepareStatement(SQLString);
rowUpdated = statement.executeUpdate();
etc...
Or just update every column every time (pretty simple, though I'm afraid it might go wrong).
Does anyone have a clever way of doing this?
Why do you want to check whether something has changed? Just perform the update.
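For example, here is a minimal sketch of updating every column with bind variables (only CNO and ONO appear in your code, so the other column names and the parameter types used here are guesses):
// Sketch only: column names other than CNO/ONO and the setter types are assumptions.
String sql = "UPDATE ORDERS SET CNO = ?, ENO = ?, RECEIVED = ?, "
        + "BEGINDATE = ?, ENDDATE = ?, LOCATION = ? WHERE ONO = ?";
try (PreparedStatement ps = con.prepareStatement(sql)) {
    ps.setInt(1, o.getCustomerNo());
    ps.setInt(2, o.getEmployeeNo());
    ps.setObject(3, o.getReceived());
    ps.setObject(4, o.getBeginDate());
    ps.setObject(5, o.getEndDate());
    ps.setString(6, o.getProjectLocation());
    ps.setInt(7, o.getOno());
    return ps.executeUpdate() == 1;   // one row matched the primary key
}
The bind variables also spare you the manual string concatenation from the question.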
If you really need to make the check, push the comparison logic into a method of the Order class.
if(origOrder.hasChanged(o)) {
// perform update
}
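A minimal sketch of such a method, using the getters from your code (it assumes the date and location getters return Comparable values):
// Inside the Order class -- a sketch only; adapt it to your actual field types.
public boolean hasChanged(Order other) {
    return getCustomerNo() != other.getCustomerNo()
            || getEmployeeNo() != other.getEmployeeNo()
            || getReceived().compareTo(other.getReceived()) != 0
            || getBeginDate().compareTo(other.getBeginDate()) != 0
            || getEndDate().compareTo(other.getEndDate()) != 0
            || getProjectLocation().compareTo(other.getProjectLocation()) != 0;
}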
P.S. Variable names like o are not very meaningful or helpful.
Using Java, I want to obtain all the IDs from my database table where GW_STATUS is equal to 0. I used the following SQL statement to achieve this.
PreparedStatement get_id = con.prepareStatement("SELECT ID from SF_MESSAGES where GW_STATUS = 0");
Once the IDs have been obtained, I want to update GW_STATUS to 1 according to their ID, as demonstrated in the code below, but only one row is updated when I execute the code.
PreparedStatement update = con.prepareStatement("update SF_MESSAGES set GW_STATUS=? where ID = ?");
update.setInt(1,1);
ResultSet x = get_id.executeQuery();
while(x.next()){
int uber = x.getInt(1);
int array[] = new int[] {uber};
for (int value : array) {
System.out.println("Value = " + value); //Successfully obtains and prints each ID from the databse table
update.setInt(2,value); // Only one ID is updated therefore only field updated
}
}
int result = update.executeUpdate();
System.out.println(result + " Records updated");
I've tried using another update statement within the for loop to update every ID obtained, but that doesn't work either. How can I successfully update every row according to its ID?
You can make the whole process much simpler. It turns out that you just want to update the SF_MESSAGES rows which have GW_STATUS equal to 0, so your query can look like the following:
update SF_MESSAGES set GW_STATUS=1 where GW_STATUS=0
Therefore, you do not have to fetch the IDs and loop over them, so it is a more efficient solution.
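For instance, a minimal sketch of running that single statement, reusing the Connection con from your code:
// One statement updates every matching row; there is no need to fetch the IDs first.
try (PreparedStatement update = con.prepareStatement(
        "update SF_MESSAGES set GW_STATUS = ? where GW_STATUS = ?")) {
    update.setInt(1, 1);
    update.setInt(2, 0);
    int result = update.executeUpdate();
    System.out.println(result + " Records updated");
}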
I'm developing a program that scrapes the web for certain data and feeds it back to the database. The problem is that I don't want duplicate entries of the same data when the crawlers run a second time. If some attributes have changed but the majority of the data is still the same, I'd like to update the DB entry rather than simply add a new one. I know how to do this in code, but I was wondering if it could be done better.
The way the update works right now:
//This method calls several other methods to check if the event in question already exists. If it does, it updates it using the id it returns.
//If it doesn't exist, -1 is returned as an id.
public static void check_event(Event event)
{
int id = -1;
id = check_exact_event(event); //Check if an event exists with the same title, location and time.
if(id > 0)
{
update_event(event, id);
Logger.log("EventID #" + id + " found using exact comparison");
return;
}
id = check_similar_event_titles(event); //Check if event exists with a different (but similar) title
if(id > 0)
{
update_event(event, id);
Logger.log("EventID #" + id + " found using similar title comparison");
return;
}
id = check_exact_image(event); //Check if event exists with the exact same image
if(id > 0)
{
update_event(event, id);
Logger.log("EventID #" + id + " found using image comparison");
return;
}
//Otherwise insert new event
create_new_event(event);
}
This works, but it's not very pleasing to the eye. What's the best way to go about this?
Personally I can't see anything wrong with your code; it is clean and effective.
If you really want to change it, you could do it in a single if statement:
public static void check_event(Event event) {
    int id = -1;
    if ((id = check_exact_event(event)) > 0
            || (id = check_similar_event_titles(event)) > 0
            || (id = check_exact_image(event)) > 0) {
        update_event(event, id);
        return; // an existing event was updated, so don't insert a new one
    }
    create_new_event(event);
}
But I can't see much gain in doing it this way.
I have a SQL query as shown below.
SELECT O_DEF,O_DATE,O_MOD from OBL_DEFINITVE WHERE OBL_DEFINITVE_ID =?
A collection of IDs is passed to this query, which is run as a batch query. It executes 10000 times to retrieve values from the database (someone else's mess).
public static Map getOBLDefinitionsAsMap(Collection oblIDs)
throws java.sql.SQLException
{
Map retVal = new HashMap();
if (oblIDs != null && (!oblIDs.isEmpty()))
{
BatchStatementObject stmt = new BatchStatementObject();
stmt.setSql("SELECT O_DEF,O_DATE,O_MOD from OBL_DEFINITVE WHERE OBL_DEFINITVE_ID=?");
stmt.setParameters(
PWMUtils.convertCollectionToSubLists(oblIDs, 1));
stmt.setResultsAsArray(true);
QueryResults rows = stmt.executeBatchSelect();
int rowSize = rows.size();
for (int i = 0; i < rowSize; i++)
{
QueryResults.Row aRow = (QueryResults.Row) rows.getRow(i);
CoblDefinition ctd = new CoblDefinition(aRow);
retVal.put(aRow.getLong(0), ctd);
}
}
return retVal;
}
Now we have identified that if the query is modified to
SELECT O_DEF,O_DATE,O_MOD from OBL_DEFINITVE WHERE OBL_DEFINITVE_ID in (???)
then we can reduce it to 1 query.
The problem here is that MSSQL Server throws an exception:
Prepared or callable statement has more than 2000 parameter
And we're stuck here. Can someone provide a better alternative to this?
There is a maximum number of allowed parameters, let's call it n. You can do one of the following:
If you have m*n + k parameters, you can create m batches (or m+1 batches if k is not 0). If you have 10000 parameters and the maximum allowed is 2000 parameters, you will only need 5 batches.
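For example, here is a minimal sketch of that chunking approach (the 2000 limit, the method itself and the plain JDBC calls are illustrative assumptions; your BatchStatementObject wrapper is not used):
// Sketch only: run the IN-query once per chunk of at most MAX_PARAMS IDs.
static final int MAX_PARAMS = 2000;

static void selectInChunks(Connection con, List<Long> ids) throws SQLException {
    for (int from = 0; from < ids.size(); from += MAX_PARAMS) {
        List<Long> chunk = ids.subList(from, Math.min(from + MAX_PARAMS, ids.size()));
        String placeholders = String.join(",", Collections.nCopies(chunk.size(), "?"));
        String sql = "SELECT O_DEF, O_DATE, O_MOD FROM OBL_DEFINITVE"
                + " WHERE OBL_DEFINITVE_ID IN (" + placeholders + ")";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            for (int i = 0; i < chunk.size(); i++) {
                ps.setLong(i + 1, chunk.get(i));
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // build your CoblDefinition / map entry from each row here
                }
            }
        }
    }
}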
Another solution is to generate the query string in your application, adding your parameters as strings. This way you will run your query only once. This is an obvious optimization in speed, but you'll have a query string generated in your application. You would build your where clause like this:
String myWhereClause = "where TaskID = " + taskIDs[0];
for (int i = 1; i < numberOfTaskIDs; i++)
{
myWhereClause += " or TaskID = " + taskIDs[i];
}
It looks like you are using your own wrapper around PreparedStatement and addBatch(). You are clearly reaching a limit on how many statements/parameters can be batched at once. You will need to call executeBatch periodically (e.g. every 100 or 1000 statements) instead of letting the batch build up until the limit is reached.
Edit: Based on the comment below I reread the problem. The solution: make sure you use fewer than 2000 parameters when building the query, breaking it up into two or more queries as required.
I currently have an ArrayList holding objects of a class I have created. I then loop through the ArrayList, comparing data from it against some global variables that are loaded elsewhere. However, this ArrayList is constantly growing and will eventually have about 115 elements, so towards the end it takes a very long time to search through. The function that does this is also called once for every line I read from a text file, and the text file is usually around 400-500 lines long, so as you can tell it is a very slow process even when testing on small files. Is there a way to speed this up, maybe by using another collection instead of an ArrayList? My reason for using the ArrayList is that I have to know which index a match is found at.
Here is the class:
private ArrayList<PanelData> panelArray = new ArrayList<PanelData>(1);
public class PanelData {
String dev = "";
String inst = "";
double tempStart = 0.0;
double tempEnd = 0.0;
}
Function:
public void panelTimeHandler (double timeStart, double timeEnd) throws SQLException {
PanelData temps = new PanelData();
temps.dev = devIDStr;
temps.inst = instanceStr;
temps.tempStart = timeStart;
temps.tempEnd = timeEnd;
boolean flag = false;
if(!flag)
{
panelArray.add(temps);
flag = true;
}
for(int i = 0; i < panelArray.size(); ++i ) {
if(panelArray.get(i).dev.equals(devIDStr) && panelArray.get(i).inst.equals(instanceStr)) {
if(panelArray.get(i).tempStart <= timeStart && panelArray.get(i).tempEnd >= timeEnd ) {
//Do Nothing
}
else
{
temps.dev = devIDStr;
temps.inst = instanceStr;
temps.tempStart = timeStart;
temps.tempEnd = timeEnd;
insert();
panelArray.set(i, temps);
}
}
else
{
temps.dev = devIDStr;
temps.inst = instanceStr;
temps.tempStart = timeStart;
temps.tempEnd = timeEnd;
panelArray.add(temps);
insert();
}
}
}
If there is something more you would like to see just ask, thanks. Beef.
Update: Added insert() function
private void insert() throws SQLException
{
stmt = conn.createStatement();
String sqlStm = "update ARRAY_BAC_SCH_Schedule set SCHEDULE_TIME = {t '" + finalEnd + "'} WHERE SCHEDULE_TIME >= {t '" + finalStart + "'} AND" +
" SCHEDULE_TIME <= {t '" + finalEnd + "'} AND VALUE_ENUM = 0 AND DEV_ID = " + devIDStr + " and INSTANCE = " + instanceStr;
int updateSuccess = stmt.executeUpdate(sqlStm);
if (updateSuccess < 1)
{
sqlStm = "insert into ARRAY_BAC_SCH_Schedule (SITE_ID, DEV_ID, INSTANCE, DAY, SCHEDULE_TIME, VALUE_ENUM, Value_Type) " +
" values (1, " + devIDStr + ", " + instanceStr + ", " + day + ", {t '" + finalStart + "'}, 1, 'Unsupported')";
stmt.executeUpdate(sqlStm);
sqlStm = "insert into ARRAY_BAC_SCH_Schedule (SITE_ID, DEV_ID, INSTANCE, DAY, SCHEDULE_TIME, VALUE_ENUM, Value_Type) " +
" values (1," + devIDStr + ", " + instanceStr + ", " + day + ", {t '" + finalEnd + "'}, 0, 'Unsupported')";
stmt.executeUpdate(sqlStm);
}
if(stmt!=null)
stmt.close();
}
Update:
Thank you to Matteo. I realized I was adding to the array even when I hadn't yet found a match: if the match wasn't found until the 10th element, the first 9 iterations would each add to the array, which created many extra elements and was why it was so slow. I added some breaks and did a little tweaking in the function, and it improved the performance a lot. Thanks for all the input.
You can use a LinkedHashSet. It seems you only add elements to the end of the list, which is exactly what a LinkedHashSet does when inserting an element.
Note, however, that a LinkedHashSet will not allow duplicates, since it is a set.
Checking whether an element exists will be O(1) using contains().
Using the LinkedHashSet will also allow you to keep track of where an element was added, and iterating it will be in order of insertion.
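A minimal sketch, reusing the temps object from your function and assuming PanelData is given equals() and hashCode() overrides based on dev and inst (it currently has none):
// Insertion order is preserved; contains() is O(1) once equals/hashCode exist.
Set<PanelData> panelSet = new LinkedHashSet<PanelData>();
panelSet.add(temps);
boolean alreadySeen = panelSet.contains(temps);
for (PanelData p : panelSet) {
    // iterates in the order the elements were added
}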
What about using a HashMap?
I would create a small class for the key:
class Key {
String dev, instr;
// TODO: implement equals & hashCode
}
and create the map:
Map<Key, PanelData> map = new HashMap...
then you can easily find the element you need by invoking map.get(new Key(...)).
Instead of creating a new class, you could also tweak the PanelData class, overriding equals and hashCode so that two instances are equal iff their dev and inst fields are equal. In this case, your map becomes:
Map<PanelData, PanelData> map ...
// to add:
map.put(temps, temps)
// to search:
PanelData elem = map.get(new PanelData(desiredDev, desiredInstr));
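A sketch of what those overrides might look like inside PanelData (field names taken from your class):
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof PanelData)) return false;
    PanelData other = (PanelData) o;
    return dev.equals(other.dev) && inst.equals(other.inst);
}

@Override
public int hashCode() {
    return 31 * dev.hashCode() + inst.hashCode();
}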
Quite a few optimizations are possible here.
1) The call panelArray.get(i) is used repeatedly. Declare a PanelData variable outside the loop and assign it once at the beginning of each iteration:
PanelData pd = null;
for (int i = 0; i < panelArray.size(); ++i) {
pd = panelArray.get(i);
...
}
2) If your dataset allows it, consider using a few maps to help speed up lookup times:
HashMap<String, PanelData> devToPanelDataMapping = new HashMap<String,PanelData>();
HashMap<String, PanelData> instToPanelDataMapping = new HashMap<String,PanelData>();
3) Consider hashing your strings into ints or longs since String.equals() is slow compared to (int == int)
4) If the ArrayList will be read only, perhaps a multithread solution may help. The thread that reads lines from the text file can hand out individual lines of data to different 'worker' threads.
1) Create panelArray with the max expected size + 10% when you first create it.
List<PanelData> panelArray = new ArrayList<PanelData>(130) - this will prevent dynamic reallocations of the backing array, which will save processing time.
2) What does insert() do? Odds are that is your resource hog.
This problem might best be solved with a different data structure such as a HashMap or SortedSet.
In order to use a HashMap, you would need to define a class that can produce a hash code for the dev and inst string pairs. One solution is something like:
public class DevAndInstPair
{
private String dev, inst;
@Override
public int hashCode() {
return ((dev.hashCode() * 0x490aac18) ^ inst.hashCode());
}
@Override
public boolean equals(Object o) {
if (o == null || !(o instanceof DevAndInstPair)) {
return false;
}
DevAndInstPair other = (DevAndInstPair) o;
return (dev.equals(other.dev) && inst.equals(other.inst));
}
}
You would then use HashMap<DevAndInstPair, PanelData> as the map type.
Alternatively, if you know that a certain character never appears in dev strings, then you can use that character as a delimiter separating the dev value from the inst value. Supposing that this character is a hyphen ('-'), the key values would be dev + '-' + inst and the key type of the map would be String.
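A minimal sketch of that delimiter-key idea, assuming '-' really never appears in dev:
Map<String, PanelData> byKey = new HashMap<String, PanelData>();
byKey.put(temps.dev + '-' + temps.inst, temps);                 // store
PanelData existing = byKey.get(devIDStr + '-' + instanceStr);   // look up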
To use SortedSet, you would either have PanelData implement Comparable<PanelData> or write a class implementing Comparator<PanelData>. Remember that the compare operation must be consistent with equals.
A SortedSet is somewhat trickier to use than a HashMap, but I personally think that it is the more elegant solution to this problem.
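For example, a sketch of a Comparator that is consistent with equals on dev and inst (Java 8 syntax, field names from the question):
Comparator<PanelData> byDevAndInst = Comparator
        .comparing((PanelData p) -> p.dev)
        .thenComparing(p -> p.inst);
SortedSet<PanelData> panels = new TreeSet<PanelData>(byDevAndInst);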
I want to get the size of the ResultSet inside the while loop.
I tried the code below and got the results I want, but it seems to mess up result.next(), and the while loop only runs once when I do this.
What's the proper way of doing this?
result.first();
while (result.next()){
System.out.println(result.getString(2));
System.out.println("A. " + result.getString(5) + "\n" + "B. " + result.getString(6) + "\n" + "C. " + result.getString(7) + "\n" + "D. " + result.getString(8));
System.out.println("Answer: ");
answer = inputquiz.next();
result.last();
if (answer.equals(result.getString(10))) {
score++;
System.out.println(score + "/" + result.getRow());
} else {
System.out.println(score + "/" + result.getRow());
}
}
What's the proper way of doing this?
Map it to a List<Entity>. Since your code is far from self-documenting (you're using indexes instead of column names), I can't give a well-suited example, so I'll use a Person as an example.
First create a javabean class representing whatever a single row contains.
public class Person {
private Long id;
private String firstName;
private String lastName;
private Date dateOfBirth;
// Add/generate c'tors/getters/setters/equals/hashcode and other boilerplate.
}
(a half-decent IDE like Eclipse can autogenerate them)
Then let JDBC do the following job.
List<Person> persons = new ArrayList<Person>();
while (resultSet.next()) {
Person person = new Person();
person.setId(resultSet.getLong("id"));
person.setFirstName(resultSet.getString("firstName"));
person.setLastName(resultSet.getString("lastName"));
person.setDateOfBirth(resultSet.getDate("dateOfBirth"));
persons.add(person);
}
// Close resultSet/statement/connection in finally block.
return persons;
Then you can just do
int size = persons.size();
And then to substitute your code example
for (int i = 0; i < persons.size(); i++) {
Person person = persons.get(i);
System.out.println(person.getFirstName());
int size = persons.size(); // Do with it whatever you want.
}
See also:
How to check if there is zero-or-one result or one-or-more results and their size
You could call result.last() and then result.getRow() (which retrieves the current row number) to get the count. But it will have to load all the rows, and if it's a big result set it might not be very efficient. The best way to go about it is to do a SELECT COUNT(*) on your query beforehand and get the count, like it's demonstrated in this post.
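For instance, a sketch of the COUNT(*) approach; the table name is a placeholder since the original query isn't shown:
// Get the row count up front with a separate query.
try (PreparedStatement countStmt = con.prepareStatement(
             "SELECT COUNT(*) FROM quiz_questions");   // placeholder table name
     ResultSet countRs = countStmt.executeQuery()) {
    int total = countRs.next() ? countRs.getInt(1) : 0;
    System.out.println("Total rows: " + total);
}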
This is a tricky question.
Normally, result.last() scrolls to the end of the ResultSet, and you can't go back.
If you created the statement using one of the createStatement or prepareStatement methods with a "resultSetType" parameter, and you've set the parameter to ResultSet.TYPE_SCROLL_INSENSITIVE or ResultSet.TYPE_SCROLL_SENSITIVE, then you can scroll the ResultSet using first() or relative() or some other methods.
However, I'm not sure if all databases / JDBC drivers support scrollable result sets, and there are likely to be performance implications in doing this. (A scrollable result set implies that either the database or the JVM needs to buffer the entire resultset somewhere ... or recalculate it ... and that's expensive for a large resultset.)
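For example, a minimal sketch of creating a scrollable statement; con and sql stand in for your existing Connection and query:
// TYPE_SCROLL_INSENSITIVE lets you call last(), getRow() and beforeFirst().
Statement stmt = con.createStatement(
        ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery(sql);   // sql is your existing SELECT statement
rs.last();
int size = rs.getRow();   // total number of rows
rs.beforeFirst();         // rewind so rs.next() starts at the first row again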
A way of getting the size of a ResultSet, with no need for an ArrayList etc.:
int size =0;
if (rs != null)
{
rs.beforeFirst();
rs.last();
size = rs.getRow();
}
Now you will get the size. And if you want to print the ResultSet, use the following line of code before printing as well:
rs.beforeFirst();
There is also another way to get the count from the DB.
Note: this column only gets updated when the DBAs gather real-time statistics.
select num_rows from all_Tables where table_name ='<TABLE_NAME>';