Solve Concurrent Update/Delete Statements Java Oracle

The problem I have right now deals with SQL UPDATE and DELETE statements running concurrently. If the program is only run by one person after another there are no problems; however, if two people decide to run the program at the same time it might fail.
What my program does:
The program is about food: each item has a description and the date that description was made. As people enter the description of a food, it gets entered into a database from which you can quickly retrieve the description. If the description is, let's say, 7 days old then we delete it because it's outdated. However, if a user enters a food already in the database with a different description, then we update it and change the date. The deletion happens after the update/insertion (records that don't need updating are inserted, and then the program checks for outdated entries in the database and deletes them).
The problem:
Two people run the program, and right as one person is trying to update a food, the other clears it out with the deletion because it has just finished. The update will not happen, and the program will continue with the rest of the updates (I read that this is because my driver doesn't stop; some drivers stop updating if there is an error).
What I want to do:
I want my program to stop at the bad update, or grab that food's position and restart the process/thread. The restart would include sorting out which foods need to be updated or inserted; therefore the bad record would be moved into the inserting method and not the update, the update would continue where it left off, and all would be well.
I know this is not the only way, so different methods of solving this problem are welcome. I have read that you can use an upsert statement, but that also has race conditions. (A question about the upsert statement: if I make the upsert method synchronized, will it avoid race conditions?)
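For reference, an upsert in Oracle is usually written as a MERGE statement; here is a minimal JDBC sketch with illustrative table and column names (food, name, description, last_update are assumptions, not from the question). Note that making the Java method synchronized only serializes calls within one JVM, so it would not remove races between two separately running copies of the program; the MERGE itself, however, is a single atomic statement on the database side.
// Sketch only: conn, foodName and newDescription are assumed to exist.
String mergeSql =
        "MERGE INTO food f " +
        "USING (SELECT ? AS name, ? AS description FROM dual) src " +
        "ON (f.name = src.name) " +
        "WHEN MATCHED THEN UPDATE SET f.description = src.description, f.last_update = SYSDATE " +
        "WHEN NOT MATCHED THEN INSERT (name, description, last_update) " +
        "VALUES (src.name, src.description, SYSDATE)";
try (PreparedStatement merge = conn.prepareStatement(mergeSql)) {
    merge.setString(1, foodName);
    merge.setString(2, newDescription);
    merge.executeUpdate();  // atomic update-or-insert executed by the database
}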
Thanks

There are different practical solutions to your problem, depending on your JDBC connection management.
If the application is a client-server one and each client uses a dedicated persistent connection (i.e. it opens a JDBC connection at program startup and closes it at shutdown), you can use a SELECT ... FOR UPDATE statement.
You issue the SELECT ... FOR UPDATE when displaying records to the user, and when the user performs an action you do what is needed and commit.
This approach serializes the database operations, and if you show and lock multiple records it may not be feasible.
A second approach is usable when you have a web application with a connection pool, or when you don't have a dedicated connection you can use for both the read and the update/delete operation. In that case you have this scenario:
user 1 selects his data with JDBC connection 1
user 2 selects his data (the same as user 1) with JDBC connection 2
user 2 submits data, causing some deletions, with JDBC connection 3
user 1 submits data with JDBC connection 2 and loses an update because the data was deleted
Since you cannot rely on the same JDBC connection to lock the data you read, you can issue a SELECT ... FOR UPDATE before updating the data and check whether the data is still there. If it is, you can update it (and it will not be deleted by other sessions, since every DELETE on the same rows waits for your SELECT ... FOR UPDATE transaction to terminate); if the data is not there because it was deleted while the user was viewing it, you must re-insert it. Your DELETE statement must have a filter on the date column that represents the last update.
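A minimal sketch of that flow in plain JDBC, assuming an illustrative food table with name, description and last_update columns (none of these names come from the question):
void saveDescription(Connection conn, String foodName, String newDescription) throws SQLException {
    conn.setAutoCommit(false);
    // Lock the row if it still exists; concurrent DELETEs now wait for our commit.
    try (PreparedStatement lock = conn.prepareStatement(
            "SELECT description FROM food WHERE name = ? FOR UPDATE")) {
        lock.setString(1, foodName);
        Timestamp now = new Timestamp(System.currentTimeMillis());
        try (ResultSet rs = lock.executeQuery()) {
            if (rs.next()) {
                try (PreparedStatement upd = conn.prepareStatement(
                        "UPDATE food SET description = ?, last_update = ? WHERE name = ?")) {
                    upd.setString(1, newDescription);
                    upd.setTimestamp(2, now);
                    upd.setString(3, foodName);
                    upd.executeUpdate();
                }
            } else {
                // The row was deleted while the user was viewing it: re-insert it.
                try (PreparedStatement ins = conn.prepareStatement(
                        "INSERT INTO food (name, description, last_update) VALUES (?, ?, ?)")) {
                    ins.setString(1, foodName);
                    ins.setString(2, newDescription);
                    ins.setTimestamp(3, now);
                    ins.executeUpdate();
                }
            }
        }
    }
    conn.commit();
}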
You can use other approaches and avoid the SELECT ... FOR UPDATE, using for example an
update food_table set last_update=? where id=? and last_update=<the last update you have in the Java program>
and then checking that the update statement did update a row (in JDBC, executeUpdate returns the number of rows modified, but you did not specify whether you are using "plain" JDBC or some sort of framework); if it did not update a row you must issue the insert statement.
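With plain JDBC the check on the returned row count could look roughly like this (food_table, id and last_update come from the statement above; the surrounding variables are illustrative):
// Optimistic update: the WHERE clause only matches if nobody changed or deleted the row meanwhile.
try (PreparedStatement upd = conn.prepareStatement(
        "UPDATE food_table SET last_update = ? WHERE id = ? AND last_update = ?")) {
    upd.setTimestamp(1, newTimestamp);
    upd.setLong(2, id);
    upd.setTimestamp(3, lastUpdateReadInJava);  // the last_update value previously read by the program
    if (upd.executeUpdate() == 0) {
        // No row matched: it was deleted (or modified) by another session, so insert it instead.
        try (PreparedStatement ins = conn.prepareStatement(
                "INSERT INTO food_table (id, last_update) VALUES (?, ?)")) {
            ins.setLong(1, id);
            ins.setTimestamp(2, newTimestamp);
            ins.executeUpdate();
        }
    }
}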

Set the transaction isolation level to SERIALIZABLE in your Java code. Then your statements should look like:
update food_table set update_time = ? where ....
delete from food_table where update_time < ?
You may get a serialization exception in either case. In the case of the update you will need to re-insert the entry; in the case of the delete, just ignore it and run it again.
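A minimal JDBC sketch of that pattern (the WHERE clause, variable names and retry handling are assumptions, not part of the answer above):
conn.setAutoCommit(false);
conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
try {
    try (PreparedStatement upd = conn.prepareStatement(
            "UPDATE food_table SET update_time = ? WHERE name = ?")) {
        upd.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
        upd.setString(2, foodName);
        if (upd.executeUpdate() == 0) {
            // Nothing was updated: the row is gone, so re-insert the entry here.
        }
    }
    conn.commit();
} catch (SQLException e) {
    conn.rollback();
    // A serialization failure (ORA-08177 in Oracle) lands here: retry the whole transaction.
}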

Related

How does JDBC implement ResultSet.TYPE_SCROLL_SENSITIVE?

When we query a database and retrieve a list of records, let's say we executed this query: SELECT * FROM students OFFSET 1 LIMIT 100. If, while we are working with these items, another thread changes one of these records, will our ResultSet object get updated automatically? If yes, then how? Is Java constantly running queries to retrieve the data and compare, or does the database notify it of the change? If it's a notification from the database, could you please give me some examples of how MySQL and PostgreSQL do this?
Is it updated if another thread from our Java code updates those records, or will we still see the update even if a connection from another source updates them?
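For context, scroll sensitivity is something you request when creating the Statement, and JDBC also exposes an explicit ResultSet.refreshRow() call; a minimal sketch (note that driver support varies and many drivers silently downgrade the requested type):
// Ask the driver for a scroll-sensitive, read-only ResultSet.
Statement stmt = conn.createStatement(
        ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery("SELECT * FROM students OFFSET 1 LIMIT 100");
System.out.println("type actually granted: " + rs.getType());  // may be downgraded to TYPE_SCROLL_INSENSITIVE
while (rs.next()) {
    rs.refreshRow();  // explicitly re-fetches the current row from the database
    // ... work with the (possibly refreshed) row
}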

The strange behaviour of the Oracle "insert into" command

I'm observing a strange situation with the "insert into" command.
I'll try to explain the situation from my point of view.
There is a TEMP_LINKS table in my database and the application inserts data into it.
Say the query lives in insert1.sql:
insert into TEMP_LINK (ID, SIDE)
select ID, SIDE
from //inner query//
group by ID, SIDE;
commit;
and there is a java1 class which executes it:
...
executeSqlScript(getResource("path-to-query1"));
...
After that, another java2 class makes another insert into the same TEMP_LINK table:
...
executeSqlScript(getResource("path-to-query2"));
...
where query2 looks like:
insert into TEMP_LINK (ID, SIDE)
select ID, 'B'
from (
    select ID
    from ...tables
    where ..conditions
    minus (
        select ID
        from ..tables
        union
        select ID
        from TEMP_LINKS
    )
);
commit;
Both java1 and java2 are executed in different threads, and java1 finishes earlier than java2.
But from time to time the second insert (from query2) doesn't insert any data at all. I see "Update count 0" in the log, and in TEMP_LINKS there is data only from query1.
If I run the application again the issue disappears and both queries insert their data properly.
Earlier I tried to put both queries into one SQL file, but the issue appeared there too.
So maybe someone has ideas about what I should do, because I'm out of them. One interesting fact: the SQL "minus" operation is used only once, in query2.
A big difference between Oracle and SQL Server: Oracle NEVER blocks a read. This is true even when records are locked. The following is a simplified explanation. Oracle uses the System Change Number (SCN) at the time a transaction starts to determine the state of the database for that transaction. All sorts of things can happen (inserts, updates, deletes), but the transaction sees the database as it was at the start of that transaction. Changes only matter at the point where the commit/rollback is executed.
In your situation, if the second query starts before the first has committed, the second won't see any changes the first has made, even after the first commits. You need to synchronize those transactions. The easiest way is to combine them into a single sequential execution. Oracle has many more complex synchronization methods; I would not go that route in this situation.
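A sketch of the "single sequential execution" suggestion, reusing the asker's executeSqlScript/getResource helpers (their exact signatures and transaction handling are assumptions): run both scripts on one thread, one after the other, so query2 only starts after query1's commit.
// Run the scripts sequentially instead of in two threads.
public void loadTempLinks() {
    executeSqlScript(getResource("path-to-query1"));  // inserts into TEMP_LINK and commits
    // query2 now starts after query1's commit, so its SELECT ... MINUS sees those rows.
    executeSqlScript(getResource("path-to-query2"));
}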

Counting Number Of Specific Record In Database

I have an application which needs to be aware of the latest number of certain records in a database table. The solution should work without changing the database code or adding triggers or functions to it, so I need a database-vendor-independent solution.
My program is written in Java, but the database could be SQLite, MySQL, PostgreSQL or MSSQL. For now I'm doing it like this:
In a separate thread that is set as a daemon, my application sends a simple command through JDBC to the database to learn the latest number of records matching a condition:
while(true){
SELECT COUNT(*) FROM Mytable WHERE exited='1'
}
and this sort of code causes the database to lock, slows down the whole system and generates huge DB logs, which finally brings the whole thing down!
How can I do it the right way, so that I always have the latest number of certain records, or only count when the number has changed?
A SELECT statement should not, by itself, have the behavior that you are describing. For instance, nothing is logged for a SELECT. Now, it is possible that concurrent insert/update/delete statements are going on, and that these cause problems because the SELECT locks the table.
Two general things you can do:
Be sure that the comparison uses the same type. So, if exited is a number, do not use single quotes (mixing types can confuse some databases).
Create an index on (exited). In basically all databases, this is a single command: create index idx_mytable_exited on mytable(exited).
If locking and concurrent transactions are an issue, then you will need to do more database-specific things to avoid that problem.
As others have said, make sure that exited is indexed.
Also, you can set the transaction isolation for your query to do a "dirty read"; this tells the database server that you do not need to wait for other processes' transactions to commit, and that you instead want to read the current value of exited on rows that are being updated by those other processes.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED is the standard syntax for a "dirty read".
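In JDBC the same thing is usually set on the Connection rather than with raw SQL; a minimal sketch (dirty reads are only honoured by engines that support READ UNCOMMITTED, e.g. SQL Server and MySQL; PostgreSQL treats it as READ COMMITTED):
// Poll with "dirty read" isolation so the count query does not wait on writers.
conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
try (PreparedStatement ps = conn.prepareStatement(
             "SELECT COUNT(*) FROM Mytable WHERE exited = 1");  // no quotes, assuming exited is numeric
     ResultSet rs = ps.executeQuery()) {
    if (rs.next()) {
        int latestCount = rs.getInt(1);
        // ... compare with the previous count, act only if it changed
    }
}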

Webapp Java synchronized object acquire

I have a situation in my Java, Spring-based web app. My server generates coupons (a number mixed with letters, all random but unique); each coupon can be applied or used by one and only one logged-in customer. They are shown on the front end to all users, and then get accepted/selected by customers. But once accepted by one customer, a coupon gets assigned to him and is not available to anyone else.
I tried synchronizing the code block which checks whether the coupon is already applied/availed. It worked, but in cases where two users click to avail it at exactly the same time it fails (it gets allocated to both).
Please help.
Do not use synchronization for this. You can store the state of the coupons in a database and work on that data in a DB transaction, using locks. So:
User tries the coupon, you get the ID
Start a DB transaction, get the coupon row from it, and lock it
Do what you need to, then invalidate the coupon
End the DB transaction, release the lock
The database does not necessarily need to be a standalone RDBMS; in a simple case even SQLite is sufficient. Anyway, DBs most certainly handle race conditions better than you (or most of us) can.
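A minimal JDBC sketch of those steps, assuming a hypothetical coupons table with code and used_by columns (conn, couponCode and customerId are assumed to exist):
conn.setAutoCommit(false);
try (PreparedStatement lock = conn.prepareStatement(
        "SELECT used_by FROM coupons WHERE code = ? FOR UPDATE")) {  // step 2: get the coupon row and lock it
    lock.setString(1, couponCode);
    try (ResultSet rs = lock.executeQuery()) {
        if (rs.next() && rs.getString("used_by") == null) {
            try (PreparedStatement claim = conn.prepareStatement(
                    "UPDATE coupons SET used_by = ? WHERE code = ?")) {  // step 3: invalidate the coupon
                claim.setString(1, customerId);
                claim.setString(2, couponCode);
                claim.executeUpdate();
            }
            conn.commit();    // step 4: end the transaction, releasing the row lock
        } else {
            conn.rollback();  // unknown code, or already taken by someone else
        }
    }
}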
If you prefer to avoid database transactions, you can use a Set with all the generated coupons and a second set referencing only the available coupons. When a user selects a coupon, remove it from the available set inside a synchronized block; a second user will then fail to obtain it.
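A sketch of that in-memory variant (class and method names are illustrative); the point is that Set.remove() inside the synchronized method succeeds for exactly one caller:
import java.util.HashSet;
import java.util.Set;

class CouponRegistry {
    private final Set<String> allCoupons = new HashSet<>();       // every generated coupon
    private final Set<String> availableCoupons = new HashSet<>(); // coupons nobody has taken yet

    synchronized void register(String couponCode) {
        allCoupons.add(couponCode);
        availableCoupons.add(couponCode);
    }

    // remove() returns true only for the first caller; the second user fails to obtain the coupon.
    synchronized boolean tryAcquire(String couponCode) {
        return availableCoupons.remove(couponCode);
    }
}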

is Select Before Update a good approach or vice versa?

I am developing an application using a plain JDBC connection. The application is built with Java/Java EE, Spring MVC 3.0 and SQL Server 2008 as the database. I am required to update a table based on a non-primary-key column.
Now, before updating the table we had to decide on an approach, as the table may contain a huge amount of data. The update query will be executed in a batch, and we are required to design the application so that it doesn't hog system resources.
Now, we had to decide between the following two approaches:
1. SELECT DATA BEFORE YOU UPDATE or
2. UPDATE DATA AND THEN SELECT MISSING DATA.
Select data before update is only beneficial if the chance of failure is high, i.e. if a batch of 100 update queries is executed and only 20 rows are updated successfully, then this approach should be taken.
Update data and then check missing data is beneficial only when the failed records are far fewer. With this approach one database select call can be avoided: after a batch update, take the count of records updated and execute the select query if and only if that count does not match the number of queries.
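A sketch of the second approach with plain JDBC batching (the statement and the input structure are illustrative): executeBatch() reports a per-statement count, so the extra SELECT is only issued when some update matched no row.
// Approach 2: batch the updates, then query for missing rows only if some update matched nothing.
try (PreparedStatement ps = conn.prepareStatement(
        "UPDATE my_table SET status = ? WHERE business_key = ?")) {  // non-primary-key column, illustrative
    for (String[] row : rowsToUpdate) {   // rowsToUpdate: illustrative List<String[]> of {status, businessKey}
        ps.setString(1, row[0]);
        ps.setString(2, row[1]);
        ps.addBatch();
    }
    int[] counts = ps.executeBatch();
    int missed = 0;
    for (int c : counts) {
        if (c == 0) missed++;  // this update matched no row (drivers may also return Statement.SUCCESS_NO_INFO)
    }
    if (missed > 0) {
        // Only now run the SELECT to find which records were not updated.
    }
}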
We are totally unaware of the production environment, but we want to cover all possibilities and want a faster system. I need your input on which is the better approach.
Since there is a 50:50 chance between successful updates and faster selects, it's hard to tell from the scenario described. You would probably want an adaptive approach: collect constant feedback on how many updates were successful over a period of time, and then decide on the basis of that data whether to update before selecting or to select before updating.
