I have a situation in my Java, Spring-based web app. My server generates coupons (random but unique strings of digits and letters). Each coupon can be applied or used by one and only one logged-in customer. Coupons are shown on the front end to all users, and a customer can then accept/select one; once accepted by one customer it is assigned to him and no longer available to anyone else.
I tried synchronizing the code block that checks whether the coupon is already applied/availed. It mostly worked, but in cases where two users click "avail" at exactly the same time, it fails (the coupon gets allocated to both).
Please help.
Do not use synchronization for this. You can store the state of the coupons in a database and work on that data in a DB transaction, using locks. So:
User tries the coupon, you get the ID
Start a DB transaction, get the coupon row from it, and lock it
Do what you need to, then invalidate the coupon
End the DB transaction, release the lock
The database does not necessarily need to be a standalone RDBMS; in a simple case, even SQLite is sufficient. Anyway, databases almost certainly handle race conditions better than you (or most of us) can.
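For example, here is a minimal JDBC sketch of that flow, assuming a coupons table with code and customer_id columns (both names invented here; note that SQLite has no SELECT ... FOR UPDATE, though its single-writer locking serializes writes anyway):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CouponService {

    private final DataSource dataSource;

    public CouponService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /** Returns true if this customer won the coupon, false if it was already taken. */
    public boolean claim(String couponCode, long customerId) throws SQLException {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            try {
                try (PreparedStatement lock = con.prepareStatement(
                        "SELECT customer_id FROM coupons WHERE code = ? FOR UPDATE")) {
                    lock.setString(1, couponCode);
                    try (ResultSet rs = lock.executeQuery()) {
                        // Unknown coupon, or somebody already claimed it.
                        if (!rs.next() || rs.getObject("customer_id") != null) {
                            con.rollback();
                            return false;
                        }
                    }
                }
                try (PreparedStatement claim = con.prepareStatement(
                        "UPDATE coupons SET customer_id = ? WHERE code = ?")) {
                    claim.setLong(1, customerId);
                    claim.setString(2, couponCode);
                    claim.executeUpdate();
                }
                con.commit(); // the commit releases the row lock
                return true;
            } catch (SQLException e) {
                con.rollback();
                throw e;
            }
        }
    }
}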
If you prefer to avoid database transactions, you can keep a Set of all the generated coupons and a second set referencing only the available coupons. When a user selects a coupon, remove it from the available set inside a synchronized block (or use a concurrent set, as sketched below); the second user then fails to obtain it.
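A sketch of that idea, using a concurrent set so that the removal itself is the atomic check (class and method names are made up):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class InMemoryCoupons {

    private final Set<String> available = ConcurrentHashMap.newKeySet();

    public void publish(String couponCode) {
        available.add(couponCode);
    }

    /** Set.remove is atomic here: of two simultaneous callers, exactly one gets true. */
    public boolean tryClaim(String couponCode) {
        return available.remove(couponCode);
    }
}

The caveat is that this state lives on a single node and is lost on restart, which is why the transactional approach above is usually the safer choice.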
I have an application which needs to be aware of the latest number of certain records in a database table. The solution should work without changing the database code or adding triggers or functions to it, so I need a database-vendor-independent solution.
My program is written in Java, but the database could be SQLite, MySQL, PostgreSQL, or MSSQL. For now I am doing it like this:
In a separate daemon thread, my application repeatedly sends a simple command through JDBC to learn the latest number of records matching a condition:
while (true) {
    executeQuery("SELECT COUNT(*) FROM Mytable WHERE exited='1'");
}
This sort of polling causes the database to lock, slows down the whole system, and generates huge DB logs, which finally brings the whole thing down!
How can I do this the right way, so that I always have the latest number of those records, or only count when the number changes?
A SELECT statement should not -- by itself -- have the behavior that you are describing. For instance, nothing is logged with a SELECT. Now, it is possible that concurrent insert/update/delete statements are going on, and that these cause problems because the SELECT locks the table.
Two general things you can do:
Be sure that the comparison is of the same type. So, if exited is a number, do not use single quotes (mixing of types can confuse some databases).
Create an index on (exited). In basically all databases, this is a single command: create index idx_mytable_exited on mytable(exited).
If locking and concurrent transactions are an issue, then you will need to do more database specific things, to avoid that problem.
As others have said, make sure that exited is indexed.
Also, you can set the transaction isolation on your query to do a "dirty read"; this indicates to the database server that you do not need to wait for other processes' transactions to commit, and instead you wish to read the current value of exited on rows that are being updated by those other processes.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED is the standard syntax for using "dirty read".
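In JDBC you do not have to issue that SQL yourself; the isolation level can be set on the connection. A minimal sketch (note that not every database honors READ UNCOMMITTED; PostgreSQL, for instance, treats it as READ COMMITTED. The unquoted 1 follows the earlier advice about types):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ExitedCounter {

    /** Counts rows without waiting on other sessions' uncommitted changes. */
    static int countExited(Connection connection) throws SQLException {
        connection.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
        try (Statement st = connection.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT COUNT(*) FROM Mytable WHERE exited = 1")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}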
The problem I have right now involves concurrent SQL UPDATE and DELETE statements. If the program is only run by one person after another, there are no problems; however, if two people run it at the same time, it might fail.
What my program does:
The program is about food; every food has a description and the date that description was made. As people enter descriptions of foods, they get entered into a database from which the descriptions can be quickly retrieved. If a description is, let's say, 7 days old, we delete it because it is outdated. However, if a user enters a food already in the database with a different description, we update the row and change the date. The deletion happens after the update/insertion (foods that don't need updating are inserted, and then the program checks the database for outdated rows and deletes them).
The problem:
Two people run the program, and right as one person is trying to update a food, the other person's run deletes it, because their deletion pass has just finished. The update does not happen, and the program continues with the rest of the updates. (I have read that this is because my driver doesn't stop; some drivers stop updating when there is an error.)
What I want to do:
I want my program to stop at the failed update, or grab that food record and restart the process/thread. Restarting would include re-sorting which foods need to be updated versus inserted; that way, the failed record would be routed to the insert method instead of the update. The update would continue where it left off, and all would be well.
I know this is not the only way, so other methods of solving this problem are welcome. I have read that you can use an upsert statement, but that also has race conditions. (A question about the upsert statement: if I make the upsert method synchronized, will it no longer have race conditions?)
Thanks
There are different practical solutions to your problem, depending on your JDBC connection management.
If the application is a client-server one and each client uses a dedicated, persistent connection (i.e. it opens a JDBC connection at program startup and closes it at program shutdown), you can use a select for update statement.
You issue the select for update when displaying records to the user, and when the user performs their action, you do what is needed and commit.
This approach serializes the database operations, and if you display and lock multiple records, it may not be feasible.
A second approach is usable when you have a web application with a connection pool, or whenever you cannot use the same dedicated connection for both the read and the update/delete operation. In that case you have this scenario:
user 1 selects their data with jdbc connection 1
user 2 selects their data (the same as user 1) with jdbc connection 2
user 2 submits data, causing some deletions, with jdbc connection 3
user 1 submits data with jdbc connection 2 and loses an update, because the data was already deleted
Since you cannot rely on the same JDBC connection to lock the data you read, you can issue a select for update before updating the data and check whether the rows are still there. If they are, you can update them (and they will not be deleted by other sessions, since every delete command on the same rows waits for your select for update to terminate); if they are not there because they were deleted while the user was viewing them, you must reinsert them. Your delete statement must filter on the date column that represents the last update.
You can use other approaches and avoid the select for update, using for example an
update food_table set last_update = ? where id = ? and last_update = <the last update you have in the java program>
You must then check that the update statement actually updated a row (in JDBC, executeUpdate returns the number of rows modified, though you did not specify whether you are using plain JDBC or some framework); if it did not update a row, you must issue the insert statement instead.
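In plain JDBC, that check could look like the following sketch (food_table and its columns come from the example above; everything else is invented):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class FoodUpdater {

    /**
     * Returns true if the optimistic update won. False means the row was deleted
     * (or changed) in the meantime, and the caller must fall back to an insert.
     */
    static boolean updateFood(Connection con, long id, Timestamp lastUpdateSeen)
            throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE food_table SET last_update = ? "
              + "WHERE id = ? AND last_update = ?")) {
            ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
            ps.setLong(2, id);
            ps.setTimestamp(3, lastUpdateSeen);
            return ps.executeUpdate() == 1; // 0 rows => lost the race, re-insert
        }
    }
}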
Set the transaction isolation level to serializable in your Java code. Then your statements would look like:
update food_table set update_time = ? where ....
delete from food_table where update_time < ?
You may get a serialization exception in either case. In the case of the update, you will need to reinsert the entry. In the case of the delete, just ignore the error and run it again.
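A sketch of such a retry loop in plain JDBC; SQLSTATE 40001 is the standard code for a serialization failure, but drivers vary, so treat that check as an assumption to verify:

import java.sql.Connection;
import java.sql.SQLException;

public class SerializableRunner {

    interface SqlWork {
        void run(Connection con) throws SQLException;
    }

    static void runSerializable(Connection con, SqlWork work) throws SQLException {
        con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        con.setAutoCommit(false);
        while (true) {
            try {
                work.run(con);
                con.commit();
                return;
            } catch (SQLException e) {
                con.rollback();
                if (!"40001".equals(e.getSQLState())) {
                    throw e; // a real error, not a serialization failure
                }
                // Serialization failure: just run the work again.
            }
        }
    }
}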
HELP!
The case is that two or more transactions try to affect the same client's monetary account in some external system. I need the second transaction to be held back until the first one has finished.
consider:
- there are two or more transactions trying to affect the same balance
- there are multiple clients at the same time
- working with 1000 TPS with 100ms avg per transaction
ideas:
- as we are working with multiple threads to support 1000 TPS, I am trying to create queues keyed by client ID, using some kind of work manager that limits each client to one thread. So if I receive 2 requests with the same clientID at the same time, the second one can be queued dynamically (see the sketch after this list).
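A plain-Java sketch of that idea, giving each client ID its own single-thread executor (all names here are invented, and a production version would need to bound and retire idle executors):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PerClientSerializer {

    private final ConcurrentMap<String, ExecutorService> queues =
            new ConcurrentHashMap<>();

    /** Work for the same clientId runs one task at a time, in arrival order. */
    public void submit(String clientId, Runnable transaction) {
        queues.computeIfAbsent(clientId, id -> Executors.newSingleThreadExecutor())
              .submit(transaction);
    }
}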
tools
I'm trying to use Oracle tools, for example:
- Fusion Middleware: using the WorkManager based on message context [not sure if this is possible, because it looks like the context can only be based on session data]. I like WorkManager because it has no performance issues.
- Oracle OCEP: creating a dynamic queue using CQL [not sure about feasibility or performance].
- Oracle Advanced Queuing: maybe possible with transaction groups.
Thanks for any ideas.
I hope I got your problem.
In your question you asked whether it is possible to perform a second transaction on a row before the first one has completed. It is not! A database that follows the ACID paradigm has to be consistent, so you cannot "overtake" the first transaction. If you want to do that, you should use a NoSQL database (like MongoDB, ...), where consistency guarantees are not as strong.
But maybe you want to know whether there is an Oracle view that shows if a row is locked or not. Let's assume such a view existed. You would check it, and if there were no lock, you would start your update/delete. But you cannot be sure this will work, because even 1 ms after you checked, another process can put a lock on the row.
The only thing you can do is put a select ... for update NOWAIT before your UPDATE/DELETE statement.
If the row is locked, you will get an exception (ORA-00054: resource busy). This is the recommended, out-of-the-box way to let the database manage row-level locking for you!
See the following example with the emp table. Note: to check this out, run this code in two different sessions at the same time.
declare
  l_sal number;
  resource_busy exception;                   -- declare your own exception
  pragma exception_init(resource_busy, -54); -- connect your exception with ORA-00054
begin
  select sal
    into l_sal
    from emp
   where empno = 7934
     for update NOWAIT; -- fail immediately instead of waiting for the lock
  update emp
     set sal = sal + 100
   where empno = 7934;
exception
  when resource_busy then
    null; -- in your case, simply do nothing if the row is locked
end;
I would like to get some advice on designing count-based access control. For example, I want to restrict the number of users that a customer can create in my system based on their account. So by default a customer can create 2 users, but if they upgrade their account they get to create 5 users, and so on.
There are a few more features that I need to restrict on a similar basis.
The application follows a generic model so every feature exposed has a backing table and we have a class which handles the CRUD operation on that table. Also the application runs on multiple nodes and has a distributed cache.
The approach that I am taking to implement this is as follows:
- I have a new table which captures the functionality to control and the allowed limit (stored per customer).
- I intercept the create method for all tables and check whether the table in question needs access control applied. If so, I fetch the count of created entities and compare it against the limit to decide whether to allow the creation.
- I am using the database to handle synchronization in case of concurrent requests. So after the create method is called, I update the table using the following where clause:
where ( count_column + 1 ) = #countInMemory#
i.e. the update succeeds only if the value stored in the DB plus 1 equals the value in memory. This ensures that even if two threads attempt a create at the same time, only one of them can successfully update; the thread that updates wins and the other is rolled back. This way I do not need to synchronize any code in the application.
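Concretely, the guarded update can be written like this in JDBC (table and column names are placeholders for whatever the generic layer maps to):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class LimitGuard {

    /**
     * Returns true if this request won the increment; false means a concurrent
     * request got there first and this creation should be rolled back.
     */
    static boolean incrementCount(Connection con, long customerId, int countInMemory)
            throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE entity_limits SET count_column = count_column + 1 "
              + "WHERE customer_id = ? AND count_column + 1 = ?")) {
            ps.setLong(1, customerId);
            ps.setInt(2, countInMemory);
            return ps.executeUpdate() == 1;
        }
    }
}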
I would like to know if there is any other / better way of doing this. My application runs on Oracle and MySQL DB.
Thanks for the help.
When you roll back, do you retry (after fetching the new user count) or do you fail? I recommend the former, assuming that the new fetched user count would permit another user.
I've dealt with a similar system recently, and here are a few things to consider:
- Do you want CustomerA to be able to transfer their users to CustomerB? (This assumes that customers are not independent; for example, in our system CustomerA might be an IT manager and CustomerB an accounting manager working for the same company, and when one of CustomerA's employees moves to accounting, he wants this reflected in CustomerB's account.)
- What happens to a customer's users when the customer is deleted? (In our case another customer/manager would need to adopt them, or else they would be deleted.)
- How are you storing the customer's user limit? In a separate table (e.g. a customer has type "Level2," and the customer-type table says that "Level2" customers can create 5 users), in the customer's row (more error-prone, but it would allow a per-customer override of the max user count), or a combination (a type column that says they can have 5 users, plus an override column granting an additional 3)?
But that's beside the point. Your DB synchronization is fine.
I have a simple domain model as follows
Driver - key(string), run-count, unique-track-count
Track - key(string), run-count, unique-driver-count, best-time
Run - key(?), driver-key, track-key, time, boolean-driver-updated, boolean-track-updated
I need to be able to update a Run and a Driver in the same transaction, as well as a Run and a Track in the same transaction (obviously to make sure I don't update the statistics twice, or miss an increment of a counter).
Now I have tried assigning, as the Run key, a composite key made up of driver-key/track-key/run-key (string).
This lets me update the Run entity and the Driver entity in one transaction.
But if I try updating the Run and Track entities together, it complains that it cannot transact over multiple entity groups; it says that it has both the Driver and the Track in the transaction and can't operate on both:
tx.begin();
Run run = pm.getObjectById(Run.class, runKey);       // pm: the PersistenceManager
Track track = pm.getObjectById(Track.class, trackKey);
// This is where it fails: the transaction now spans two entity groups.
incrementCounters();
updateUpdatedFlags();
tx.commit();
Strangely enough when I do a similar thing to update Run and Driver it works fine.
Any suggestions on how else I can map my domain model to achieve the same functionality?
With Google App Engine, all of the datastore operations in a single transaction must be on entities in the same entity group. This is because your data is usually stored across multiple tables, and Google App Engine cannot do transactions across multiple tables.
Entities with owned one-to-one and one-to-many relationships are automatically in the same entity group. So if an entity contains a reference to another entity, or a collection of entities, you can read or write to both in the same transactions. For entities that don't have an owner relationship, you can create an entity with an explicit entity group parent.
You could put all of the objects in the same entity group, but you might get some contention if too many users are trying to modify objects in an entity group at the same time. If every object is in its own entity group, you can't do any meaningful transactions. You want to do something in between.
One solution is to have Track and Run in the same entity group. You could do this by having Track contain a List of Runs (if you do this, then Track might not need run-count, unique-driver-count and best-time; they could be computed when needed). If you do not want Track to have a List of Runs, you can use an unowned one-to-many relationship and specify that the entity group parent of the Run is its Track (see "Creating Entities With Entity Groups" on this page). Either way, if a Run is in the same entity group as its Track, you can do transactions that involve a Track and some/all of its Runs.
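With the low-level datastore API, assigning that entity group parent looks roughly like this (the kind and key names are just examples):

import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;

public class RunFactory {

    /** Creates a Run whose entity group parent is its Track. */
    static Entity newRun(String trackName, String runName) {
        Key trackKey = KeyFactory.createKey("Track", trackName);
        // With trackKey as parent, this Run lives in the Track's entity group,
        // so one transaction can read and write both the Track and this Run.
        return new Entity("Run", runName, trackKey);
    }
}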
For many large systems, instead of using transactions for consistency, changes are made through operations that are idempotent. For instance, if Driver and Run were not in the same entity group, you could update the run-count for a Driver by first doing a query to get the count of all runs before some date in the past, and then, in a transaction, updating the Driver with the new count and the date at which it was computed.
Keep in mind when using dates that machines can have some kind of a clock drift, which is why I suggested using a date in the past.
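A rough sketch of that recount pattern; every helper here (runDao, driverDao, inTransaction) is hypothetical and stands in for your actual datastore access:

// Sketch only: runDao, driverDao, inTransaction, and Driver are hypothetical
// stand-ins for your real datastore access code.
void recomputeRunCount(String driverKey) {
    long cutoff = System.currentTimeMillis() - 60_000; // a date safely in the past
    long count = runDao.countRunsBefore(driverKey, cutoff); // plain query, no transaction
    inTransaction(() -> {
        Driver driver = driverDao.get(driverKey);
        // Idempotent: re-running with the same cutoff changes nothing.
        if (driver.getCountComputedAt() < cutoff) {
            driver.setRunCount(count);
            driver.setCountComputedAt(cutoff);
            driverDao.save(driver);
        }
    });
}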
I think I found a lateral but clean solution which still makes sense in my domain model.
The domain model changes slightly as follows:
Driver - key(string-id), driver-stats - ex. id="Michael", runs=17
Track - key(string-id), track-stats - ex. id="Monza", bestTime=157
RunData - key(string-id), stat-data - ex. id="Michael-Monza-20101010", time=148
TrackRun - key(Track/string-id), track-stats-updated - ex. id="Monza/Michael-Monza-20101010", track-stats-updated=false
DriverRun - key(Driver/string-id), driver-stats-updated - ex. id="Michael/Michael-Monza-20101010", driver-stats-updated=true
I can now update the statistics of a Track with the statistics from a Run atomically (i.e. exactly once), immediately or in my own time. (And the same for the Driver/Run statistics.)
So basically I have to expand a little the way I model my problem, in a non-conventional relational way. What do you think?
I realize this is late, but...
Have you seen this method for Bank Account transfers?
http://blog.notdot.net/2009/9/Distributed-Transactions-on-App-Engine
It seems to me that you could do something similar by breaking your increment counters into two steps with an IncrementEntity, processing that, and picking up the pieces later if a transaction fails, etc.
From the blog:
In a transaction, deduct the required amount from the paying account, and create a Transfer child entity to record this, specifying the receiving account in the 'target' field, and leaving the 'other' field blank for now.
In a second transaction, add the required amount to the receiving account, and create a Transfer child entity to record this, specifying the paying account in the 'target' field, and the Transfer entity created in step 1 in the 'other' field.
Finally, update the Transfer entity created in step 1, setting the 'other' field to the Transfer we created in step 2.
The blog has code examples in Python, but it should be easy to adapt.
There's an interesting Google I/O session on this topic: http://www.google.com/events/io/2010/sessions/high-throughput-data-pipelines-appengine.html
I guess you could update the Run stats in one transaction and then fire two tasks to update the Driver and the Track individually.
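With the task queue API, that could look something like the following sketch (the handler URLs are invented); the boolean-*-updated flags in your model are exactly what would let those handlers be retried safely:

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

public class StatsFanout {

    /** After committing the Run, fan out the two statistics updates as tasks. */
    static void enqueueStatUpdates(String runKey) {
        Queue queue = QueueFactory.getDefaultQueue();
        queue.add(TaskOptions.Builder.withUrl("/tasks/updateDriver").param("runKey", runKey));
        queue.add(TaskOptions.Builder.withUrl("/tasks/updateTrack").param("runKey", runKey));
    }
}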