In my web service I have a user table that contains a column called "HITS". Each time a user consumes the web service, that user's HITS count is incremented by 1, which is a write operation and therefore runs into optimistic/pessimistic locking.
The issue I am facing is a high-concurrency scenario in which 100 hits arrive simultaneously for the same user.
Can anyone suggest a strategy to avoid the delays caused by locking a user's row to update HITS when the web service is used simultaneously from the same user's account?
OR
You could also suggest a way to update some variable(s) which I can then monitor every 12 hours to generate the HITS value, by summing up the variable(s) or something like that?
I am using Hibernate and an EJB stateless session bean.
Just insert one record for every hit into a table all_hits...
Then, on some schedule, select from that table, insert the aggregated counts into a hits_summary table, and delete the originals.
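A minimal JDBC sketch of that idea (the all_hits/hits_summary table layouts and the hit_time cutoff are assumptions for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class HitLogger {

    // called on every web-service invocation: a blind INSERT, so no row contention
    public void logHit(Connection conn, long userId) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO all_hits (user_id, hit_time) VALUES (?, CURRENT_TIMESTAMP)")) {
            ps.setLong(1, userId);
            ps.executeUpdate();
        }
    }

    // run on a schedule (e.g. every 12 hours): fold the raw rows into the summary;
    // the cutoff keeps hits that arrive mid-run out of both statements
    public void summarize(Connection conn, Timestamp cutoff) throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement sum = conn.prepareStatement(
                 "INSERT INTO hits_summary (user_id, hits) " +
                 "SELECT user_id, COUNT(*) FROM all_hits WHERE hit_time < ? GROUP BY user_id");
             PreparedStatement del = conn.prepareStatement(
                 "DELETE FROM all_hits WHERE hit_time < ?")) {
            sum.setTimestamp(1, cutoff);
            del.setTimestamp(1, cutoff);
            sum.executeUpdate();
            del.executeUpdate();
            conn.commit();
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }
}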
I have a situation in my Java, Spring-based web app. My server generates coupons (a number mixed with letters, all random but unique); each coupon can be applied or used by one and only one logged-in customer. The coupons are shown on the front end to all users, and then get accepted/selected by customers. But once accepted by one customer, a coupon gets assigned to him and is no longer available to anyone else.
I tried synchronizing the code block which checks whether the coupon is already applied/availed. It worked, but in cases where two users click to avail it at the exact same time, it fails (the coupon gets allocated to both).
Please help.
Do not use synchronization for this. You can store the state of the coupons in a database, and work on these data in a DB transaction, using locks. So:
User tries the coupon, you get the ID
Start a DB transaction, get the coupon row from it, and lock it
Do what you need to, then invalidate the coupon
End the DB transaction, release the lock
The database does not necessarily need to be a standalone RDBMS; in a simple case even SQLite is sufficient. In any case, databases most certainly handle race conditions better than you (or most of us) can.
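A rough JDBC sketch of those four steps (the coupons table layout is an assumption for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CouponClaimer {

    // returns true if this customer won the coupon, false if it was already taken
    public boolean claim(Connection conn, String couponCode, long customerId) throws Exception {
        conn.setAutoCommit(false);
        try {
            try (PreparedStatement lock = conn.prepareStatement(
                    "SELECT status FROM coupons WHERE code = ? FOR UPDATE")) { // row locked here
                lock.setString(1, couponCode);
                try (ResultSet rs = lock.executeQuery()) {
                    if (!rs.next() || !"AVAILABLE".equals(rs.getString(1))) {
                        conn.rollback(); // unknown code, or someone else got here first
                        return false;
                    }
                }
            }
            try (PreparedStatement take = conn.prepareStatement(
                    "UPDATE coupons SET status = 'TAKEN', customer_id = ? WHERE code = ?")) {
                take.setLong(1, customerId);
                take.setString(2, couponCode);
                take.executeUpdate();
            }
            conn.commit(); // ends the transaction and releases the lock
            return true;
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }
}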
If you prefer to avoid database transactions, you can use a Set holding all the generated coupons plus a set referencing only the available coupons. When a user selects a coupon, remove it from the available set in a synchronized block; the second user then fails to obtain it.
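In code that can be a concurrent set, which even avoids the explicit synchronized block (a sketch, single-JVM only):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class CouponPool {

    // only the coupons that are still available; the full list lives elsewhere
    private final Set<String> available = ConcurrentHashMap.newKeySet();

    public void publish(String couponCode) {
        available.add(couponCode);
    }

    // Set.remove is atomic: of two simultaneous callers, exactly one gets true
    public boolean tryClaim(String couponCode) {
        return available.remove(couponCode);
    }
}

Note that this only works while all requests hit the same JVM; with several nodes you are back to the database (or a distributed lock).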
HELP!
The case is that two or more transactions are trying to affect the same client's monetary account in some external system. I need the second transaction to not be performed until the first one has finished.
consider:
- there are two or more transactions trying to affect the same balance
- there are multiple clients at the same time
- working at 1000 TPS with a 100 ms average per transaction
ideas:
- as we are working with multiple threads to support 1000 TPS, I am trying to create queues keyed by client ID, using some kind of work manager that limits processing to one thread per client. So if I have 2 requests with the same client ID at the same time, the second can be queued dynamically.
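Conceptually, what I am after is something like this plain java.util.concurrent sketch (before committing to a specific Oracle tool):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PerClientSerializer {

    // one single-threaded queue per client: work for the same client runs
    // strictly in order, while different clients run in parallel
    private final ConcurrentMap<String, ExecutorService> queues = new ConcurrentHashMap<>();

    public void submit(String clientId, Runnable transaction) {
        queues.computeIfAbsent(clientId, id -> Executors.newSingleThreadExecutor())
              .submit(transaction);
    }

    // in a real system the idle executors would have to be capped or evicted,
    // otherwise this map grows with the number of distinct clients
}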
tools
I am trying to use Oracle tools, for example:
- Fusion Middleware: using the WorkManager based on message context [not sure if this is possible, because it looks like the context can only be based on session data]. I like WorkManager because it has no performance issues.
- Oracle OCEP: creating a dynamic queue using CQL [not sure about feasibility and performance].
- Oracle Advanced Queuing: maybe possible with transaction groups.
Thanks for any ideas.
I hope I got your problem.
In your question you asked whether it is possible to perform a second transaction on a row before the first one has completed. This is impossible! A database which follows the ACID paradigm has to be Consistent, so you can't "overtake" the first transaction. If you want that kind of behavior, you would have to use NoSQL databases (like MongoDB, ...) where consistency is not that strong.
But maybe you want to know whether there is an Oracle view to figure out if a row is locked or not. Let's assume there were such a view. You would check it, and if there is no lock, you would start your update/delete. But you can't be sure that this will work, because even 1 ms after you checked, another process can put a lock on the row.
The only thing you can do is to put a "select ... for update NOWAIT" before your UPDATE/DELETE statement.
If the row is locked, you will get an exception (ORA-00054: resource busy). This is the recommended, "out of the box" way to let the database manage row-level locking for you.
See the following example with the emp table. Note: to check this out, run this code in two different sessions at the same time.
declare
  l_sal number;
  resource_busy exception;                    -- declare your own exception
  pragma exception_init (resource_busy, -54); -- connect your exception with ORA-00054
begin
  select sal
    into l_sal
    from emp
   where empno = 7934
     for update nowait;

  update emp
     set sal = sal + 100
   where empno = 7934;
exception
  when resource_busy then
    null; -- in your case, simply do nothing if the row is locked
end;
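If you drive this from JDBC instead of PL/SQL, the same pattern looks roughly like this (a sketch, assuming ORA-00054 surfaces as vendor error code 54 on the SQLException):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class NoWaitUpdate {

    // returns false if the row was locked by another session
    public static boolean tryRaiseSal(Connection conn, int empno) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement lock = conn.prepareStatement(
                "SELECT sal FROM emp WHERE empno = ? FOR UPDATE NOWAIT")) {
            lock.setInt(1, empno);
            lock.executeQuery();
        } catch (SQLException e) {
            if (e.getErrorCode() == 54) { // ORA-00054: resource busy
                conn.rollback();
                return false;
            }
            throw e;
        }
        try (PreparedStatement upd = conn.prepareStatement(
                "UPDATE emp SET sal = sal + 100 WHERE empno = ?")) {
            upd.setInt(1, empno);
            upd.executeUpdate();
        }
        conn.commit();
        return true;
    }
}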
I have a multi-threaded client/server system with thousands of clients continuously sending data to the server that is stored in a specific table. This data is only important for a few days, so it's deleted afterwards.
The server is written in J2SE, the database is MySQL, and my table uses the InnoDB engine. It contains some millions of entries (and is indexed properly for this usage).
One scheduled thread is running once a day to delete old entries. This thread could take a large amount of time for deleting, because the number of rows to delete could be very large (some millions of rows).
On my specific system deletion of 2.5 million rows would take about 3 minutes.
The inserting threads (and reading threads) get a timeout error telling me
Lock wait timeout exceeded; try restarting transaction
How can I simply detect that state from my Java code? I would prefer handling the situation on my own instead of just waiting. But the more important point is: how do I prevent that situation?
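For the detection part, I imagine something along these lines (a sketch, assuming MySQL reports this as vendor error code 1205 on the SQLException):

import java.sql.PreparedStatement;
import java.sql.SQLException;

public class LockTimeoutAware {

    private static final int MYSQL_LOCK_WAIT_TIMEOUT = 1205; // "Lock wait timeout exceeded"

    // returns false instead of failing when the insert hits a lock wait timeout
    public static boolean tryInsert(PreparedStatement insert) throws SQLException {
        try {
            insert.executeUpdate();
            return true;
        } catch (SQLException e) {
            if (e.getErrorCode() == MYSQL_LOCK_WAIT_TIMEOUT) {
                return false; // caller decides: retry after a pause, skip, or log
            }
            throw e;
        }
    }
}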
Could I use
conn.setTransactionIsolation( Connection.TRANSACTION_READ_UNCOMMITTED )
for the reading threads, so they get their information regardless of whether it is completely up to date (which is absolutely OK for this use case)?
What can I do to my inserting threads to prevent blocking? They purely insert data into the table (primary key is the tuple userid, servertimemillis).
Should I change my deletion thread? It purely deletes data for a given userid where servertimemillis is past some specialtimestamp.
Edit:
When reading the MySQL documentation, I wonder if I can't simply define the connection for inserting and deleting rows with
conn.setTransactionIsolation( Connection.TRANSACTION_READ_COMMITTED )
and achieve what I need. It says that UPDATE and DELETE statements which use a unique index with a unique search condition lock only the matching index entry, not the gap before it, and so rows can still be inserted into that gap. It would be great to get your experience on that, since I can't simply try it in production, and it is a big effort to simulate it in a test environment.
In your deletion thread, try first loading the IDs of the records to be deleted and then deleting them one at a time, committing after each delete.
If you run the thread that does the huge delete once a day and it takes 3 minutes, you can split it to smaller transactions that delete a small number of records, and still manage to get it done fast enough.
A better solution:
First of all. Any solution you try must be tested prior to deployment in production. Especially a solution suggested by some random person on some random web site.
Now, here's the solution I suggest (making some assumptions regarding your table structure and indices, since you didn't specify them):
- Alter your table. It is not recommended to have a primary key of multiple columns in InnoDB, especially in large tables (since the primary key is automatically included in all other indices). See the answer to this question for more reasons. You should add a unique RecordID column as primary key (I'd recommend a long identifier, i.e. BIGINT in MySQL).
- Select the rows for deletion: execute "SELECT RecordID FROM YourTable WHERE ServerTimeMillis < ?".
- Commit, to quickly release the lock on the ServerTimeMillis index (which I assume you have).
- For each RecordID, execute "DELETE FROM YourTable WHERE RecordID = ?".
- Commit after each record or after every X records (I'm not sure whether that makes much difference). Perhaps even one commit at the end of all the DELETE commands will suffice, since with my suggested new logic only the deleted rows should be locked.
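A rough JDBC sketch of the select-then-delete steps (table/column names as assumed above; the commit interval is something to tune):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class OldRowPurger {

    public void purge(Connection conn, long cutoffMillis, int commitEvery) throws Exception {
        conn.setAutoCommit(false);
        // collect the ids first, then commit to release any locks taken by the read
        List<Long> ids = new ArrayList<>();
        try (PreparedStatement sel = conn.prepareStatement(
                "SELECT RecordID FROM YourTable WHERE ServerTimeMillis < ?")) {
            sel.setLong(1, cutoffMillis);
            try (ResultSet rs = sel.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getLong(1));
                }
            }
        }
        conn.commit();
        // delete by primary key so only the touched rows are ever locked
        try (PreparedStatement del = conn.prepareStatement(
                "DELETE FROM YourTable WHERE RecordID = ?")) {
            int n = 0;
            for (long id : ids) {
                del.setLong(1, id);
                del.executeUpdate();
                if (++n % commitEvery == 0) {
                    conn.commit(); // keep each transaction, and thus each lock, short
                }
            }
            conn.commit();
        }
    }
}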
As for changing the isolation level: I don't think you have to. I can't say whether you safely can, since I don't know the logic of your server and how it would be affected by such a change.
You can try to replace your one huge DELETE with multiple shorter DELETE ... LIMIT n statements, with n determined by testing (not too small, which causes many queries, and not too large, which causes long locks). Since each lock would then last only a few ms (or seconds, depending on your n), you could let the delete thread run continuously, provided it can keep up; again, n can be adjusted so that it does.
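For example (a sketch; the chunk size is hard-coded and the table name assumed):

import java.sql.Connection;
import java.sql.PreparedStatement;

public class ChunkedDelete {

    // keeps deleting small chunks until no old rows remain; with auto-commit on,
    // each DELETE is its own short transaction, so locks are held only briefly
    public static void purgeInChunks(Connection conn, long cutoffMillis) throws Exception {
        conn.setAutoCommit(true);
        try (PreparedStatement del = conn.prepareStatement(
                "DELETE FROM YourTable WHERE ServerTimeMillis < ? LIMIT 10000")) {
            del.setLong(1, cutoffMillis);
            while (del.executeUpdate() > 0) {
                // optionally sleep a little here to give the inserting threads more room
            }
        }
    }
}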
Also, table partitioning can help.
I would like to get some advice on designing count-based access control. For example, I want to restrict the number of users that a customer can create in my system based on their account. So by default a customer can create 2 users, but if they upgrade their account they get to create 5 users, and so on.
There are a few more features that I need to restrict on a similar basis.
The application follows a generic model so every feature exposed has a backing table and we have a class which handles the CRUD operation on that table. Also the application runs on multiple nodes and has a distributed cache.
The approach that I am taking to implement this is as follows:
- I have a new table which captures the functionality to control and the allowed limit (stored per customer).
- I intercept the create method for all tables and check whether the table in question needs access control applied. If so, I fetch the count of created entities and compare it against the limit to decide whether to allow the creation.
- I am using the database to handle synchronization in case of concurrent requests. After the create method is called, I update the table using the where clause
where ( count_column + 1 ) = #countInMemory#
i.e. the update succeeds only if the value stored in the DB + 1 equals the value in memory. This ensures that even if two threads attempt a create at the same time, only one of them can successfully update; the thread that successfully updates wins and the other one is rolled back. This way I do not need to synchronize any code in the application.
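A JDBC sketch of that conditional update (feature_limits and used_count are invented names for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;

public class CountGuard {

    // returns true if this thread's create "won" the race; on false, roll back the create
    public boolean bumpCount(Connection conn, long customerId, int countInMemory) throws Exception {
        try (PreparedStatement upd = conn.prepareStatement(
                "UPDATE feature_limits SET used_count = used_count + 1 " +
                "WHERE customer_id = ? AND (used_count + 1) = ?")) {
            upd.setLong(1, customerId);
            upd.setInt(2, countInMemory);
            return upd.executeUpdate() == 1; // 0 rows matched => another thread got there first
        }
    }
}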
I would like to know if there is any other / better way of doing this. My application runs on Oracle and MySQL DB.
Thanks for the help.
When you roll back, do you retry (after fetching the new user count) or do you fail? I recommend the former, assuming that the newly fetched user count would permit another user.
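Roughly like this (a sketch; fetchCount and tryCreateAndBumpCount are hypothetical hooks standing in for your existing queries):

public abstract class RetryingCreator {

    private static final int MAX_RETRIES = 3;

    // hypothetical hooks: your existing count query and your conditional create+update
    protected abstract int fetchCount(long customerId);
    protected abstract boolean tryCreateAndBumpCount(long customerId, int expectedNewCount);

    // returns true once a user was created, false if out of quota or too much contention
    public boolean createUser(long customerId, int limit) {
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            int current = fetchCount(customerId);
            if (current >= limit) {
                return false; // genuinely out of quota
            }
            if (tryCreateAndBumpCount(customerId, current + 1)) {
                return true;  // our conditional update won
            }
            // lost the race: another thread created a user first; re-read and retry
        }
        return false; // persistent contention; surface an error to the caller
    }
}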
I've dealt with a similar system recently, and there are a few things to consider:
- Do you want CustomerA to be able to transfer their users to CustomerB? (This assumes that customers are not independent; for example, in our system CustomerA might be an IT manager and CustomerB might be an accounting manager working for the same company, and when one of CustomerA's employees moves to accounting, he wants this to be reflected in CustomerB's account.)
- What happens to a customer's users when the customer is deleted? (In our case another customer/manager would need to adopt them, or else they would be deleted.)
- How are you storing the customer's user limit: in a separate table (e.g. a customer has type "Level2", and the customer-type table says that "Level2" customers can create 5 users), in the customer's row (more error-prone, but it also allows a per-customer override of the max user count), or a combination (a type column that says they can have 5 users, plus an override column that says they can have an additional 3)?
But that's beside the point. Your DB synchronization is fine.
I am developing an application using a normal JDBC connection. The application is built with Java/Java EE, Spring MVC 3.0, and SQL Server 2008 as the database. I am required to update a table based on a non-primary-key column.
Before updating the table we had to decide on an approach, as the table may contain a huge amount of data. The update query will be executed in batches, and we are required to design the application so that it doesn't hog system resources.
We had to decide between either of these approaches:
1. SELECT DATA BEFORE YOU UPDATE, or
2. UPDATE DATA AND THEN SELECT MISSING DATA.
1. Select data before update is beneficial only if the chances of failure are high, i.e. if out of a batch of 100 update queries only 20 rows are updated successfully, then this approach should be taken.
2. Update data and then check for missing data is beneficial only when the failed records are far fewer. With this approach one database select call can be avoided: after a batch update, take the count of records updated, and execute the select query if and only if that count mismatches the number of queries.
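For approach 2, the per-statement update counts that a JDBC batch returns already tell you which rows were missed, without any extra SELECT up front (a sketch; table and column names are invented):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class BatchUpdater {

    // returns how many statements in the batch matched no row at all
    public int updateBatch(Connection conn, List<String> keys) throws Exception {
        int misses = 0;
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE target_table SET processed = 1 WHERE business_key = ?")) {
            for (String key : keys) {
                ps.setString(1, key);
                ps.addBatch();
            }
            // note: some drivers may return Statement.SUCCESS_NO_INFO instead of real counts
            for (int count : ps.executeBatch()) {
                if (count == 0) {
                    misses++; // this row was missing; only now is a follow-up SELECT needed
                }
            }
        }
        return misses;
    }
}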
We know nothing about the system in the production environment, but we want to cater for all possibilities and want a faster system. I need your input on which approach is better.
Since there is a 50:50 chance between successful updates and faster selects, it's hard to tell from the scenario described. You probably want a fuzzy-logic approach: collect constant feedback on how many updates were successful over a period of time, and then decide, on the basis of that data, whether to update before select or select before update.