Concurrent update in MySQL row using trigger - Java

I have a SOAP-based web service with Java + MySQL.
The web service saves generated documents and sends them back as the response. Each user has a limited number of documents available. This service provides documents to external systems, so I have to know the number of documents available for a specific user at any time.
To handle this I built a trigger that updates the user row when a new document is created:
CREATE TRIGGER `Service`.`discount_doc_fromplan`
AFTER INSERT ON `Service`.`Doc` FOR EACH ROW
UPDATE `Service`.`User` SET User.DocAvailable = User.DocAvailable - 1 WHERE User.id = NEW.idUser;
The problem comes when a user tries to create two or more documents at the same time from their systems. This gives me a "Deadlock found when trying to get lock" error.
Does anybody have an idea how to avoid the deadlock while still keeping the number of available documents correct? This is my first web service. Thanks.

You are trying to implement your business logic inside a database trigger. Instead of a trigger, you can implement this logic in either (1) your web service application middleware or (2) a stored procedure. I prefer approach (1). The basic idea in either case is to collect all inserts into the Doc table by a user in a cumulative counter and, at the end of all inserts, update the User table with DocAvailable = DocAvailable - counter in one go. You can do this in a transaction so that you can roll back in case of a problem. You will have to read the available Doc quota for the user before starting the transaction.
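A minimal JDBC sketch of approach (1). The table and column names (Doc, User.DocAvailable, idUser) come from the question; the Doc column list and the connection handling are placeholders:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class DocService {
    // Insert a batch of documents and apply one cumulative quota update,
    // all in a single transaction, instead of firing a trigger per row.
    public void saveDocuments(Connection con, long userId, List<String> docs)
            throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement insertDoc = con.prepareStatement(
                 "INSERT INTO Doc (idUser, content) VALUES (?, ?)"); // 'content' is a placeholder column
             PreparedStatement updateQuota = con.prepareStatement(
                 "UPDATE User SET DocAvailable = DocAvailable - ? "
               + "WHERE id = ? AND DocAvailable >= ?")) {
            for (String doc : docs) {
                insertDoc.setLong(1, userId);
                insertDoc.setString(2, doc);
                insertDoc.executeUpdate();
            }
            updateQuota.setInt(1, docs.size());
            updateQuota.setLong(2, userId);
            updateQuota.setInt(3, docs.size());
            // 0 rows updated means the user no longer has enough quota.
            if (updateQuota.executeUpdate() == 0) {
                throw new SQLException("Not enough documents available");
            }
            con.commit();
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}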

Related

Microservice Pattern To Retrieve Data From Old Database And Write To New Database

I am given a situation where there is a database that has been in use for the last 6 months. From now on, a new database will be used. All insert operations will happen in the new database, but for retrievals (all gets), a search has to be made in both the old and the new database. How can a microservice be designed, and how can the database configuration be done, to achieve this?
Though not ideal, you can define multiple DataSources in your Spring Boot project. Define a controller that intercepts the get call and routes it to a service which has the logic to query the two different sources and build the response for your REST queries. You can find an example here:
https://www.baeldung.com/spring-data-jpa-multiple-databases
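For illustration, a rough sketch of that service layer with two Spring Data repositories, one bound to each DataSource as in the Baeldung article above (the repository and entity names here are hypothetical):

import java.util.Optional;
import org.springframework.stereotype.Service;

@Service
public class CustomerLookupService {
    // Hypothetical repositories, each configured against its own DataSource.
    private final CustomerNewRepository newDb;
    private final CustomerOldRepository oldDb;

    public CustomerLookupService(CustomerNewRepository newDb, CustomerOldRepository oldDb) {
        this.newDb = newDb;
        this.oldDb = oldDb;
    }

    // All insertions go to the new database only.
    public Customer save(Customer c) {
        return newDb.save(c);
    }

    // Reads try the new database first, then fall back to the old one.
    public Optional<Customer> findById(Long id) {
        return newDb.findById(id).or(() -> oldDb.findById(id));
    }
}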
Another thing you can do is introduce Elasticsearch: index all your old DB data, route the get/inquiry calls to it, and fire queries against Elasticsearch rather than the DB.

Designing a count based access control

I would like to get some advice on designing a count-based access control. For example, I want to restrict the number of users that a customer can create in my system based on their account. So by default a customer can create 2 users, but if they upgrade their account they get to create 5 users, and so on.
There are a few more features that I need to restrict on a similar basis.
The application follows a generic model so every feature exposed has a backing table and we have a class which handles the CRUD operation on that table. Also the application runs on multiple nodes and has a distributed cache.
The approach that I am taking to implement this is as follows
- I have a new table which captures the functionality to be controlled and the allowed limit (stored per customer).
- I intercept the create method for all tables and check if the table in question needs access control applied. If so, I fetch the count of created entities and compare it against the limit to decide whether to allow the creation.
- I am using the database to handle synchronization in case of concurrent requests. So after the create method is called, I update the table using the following WHERE clause:
where ( count_column + 1 ) = #countInMemory#
That is, the update will succeed only if the value stored in the DB + 1 equals the value in memory. This ensures that even if two threads attempt a create at the same time, only one of them will successfully update. The thread that successfully updates wins and the other one is rolled back. This way I do not need to synchronize any code in the application.
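For illustration, that conditional update in JDBC might look like this (the table and column names are made up; #countInMemory# is the count read at the start of the request):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CountGuard {
    // Returns true only if this request won the race: the UPDATE matches
    // zero rows if another thread has already bumped the counter.
    public boolean tryIncrementCount(Connection con, long customerId, int countInMemory)
            throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE feature_usage SET count_column = count_column + 1 "
              + "WHERE customer_id = ? AND ( count_column + 1 ) = ?")) {
            ps.setLong(1, customerId);
            ps.setInt(2, countInMemory);
            return ps.executeUpdate() == 1;
        }
    }
}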
I would like to know if there is any other / better way of doing this. My application runs on Oracle and MySQL DB.
Thanks for the help.
When you roll back, do you retry (after fetching the new user count) or do you fail? I recommend the former, assuming that the new fetched user count would permit another user.
I've dealt with a similar system recently, and a few things to consider:
- Do you want CustomerA to be able to transfer their users to CustomerB? (This assumes that customers are not independent; for example, in our system CustomerA might be an IT manager and CustomerB might be an accounting manager working for the same company, and when one of CustomerA's employees moves to accounting he wants this to be reflected by CustomerB's account.)
- What happens to a customer's users when the customer is deleted? (In our case another customer/manager would need to adopt them, or else they would be deleted.)
- How are you storing the customer's user limit: in a separate table (e.g. a customer has type "Level2," and the customer-type table says that "Level2" customers can create 5 users), in the customer's row (which is more error prone, but would also allow a per-customer override on their max user count), or a combination (a customer has a type column that says they can have 5 users, and an override column that says they can have an additional 3 users)?
But that's beside the point. Your DB synchronization is fine.
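If you do retry as recommended above, a small bounded loop around the conditional update keeps things predictable. A sketch reusing the hypothetical tryIncrementCount from the question's example, where readCurrentCount stands in for the SELECT that fetches the current count:

// Retry the optimistic update a few times before giving up.
public void createWithLimit(Connection con, long customerId, int allowedLimit)
        throws SQLException {
    for (int attempt = 0; attempt < 3; attempt++) {
        int current = readCurrentCount(con, customerId); // hypothetical SELECT helper
        if (current >= allowedLimit) {
            throw new IllegalStateException("User limit reached");
        }
        if (tryIncrementCount(con, customerId, current + 1)) {
            return; // this request won; proceed with the create
        }
        // Another request updated first: re-read the count and try again.
    }
    throw new IllegalStateException("Could not update the count after 3 attempts");
}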

Duplicates because Oracle is too fast or multithread?

I have a problem with duplicate records arriving in our database via a Java web service, and I think it's to do with Oracle processing threads.
Using an iPhone app we built, users add bird observations to a new site they visit on holiday. They create three records at "New Site A" (for example). The iPhone packages each of these three records into separate JSON strings containing the same date and location details.
On Upload, the web service iterates through each JSON string.
Iteration/Observation 1: The service checks the database to see if the site exists and, if not, creates a new site and adds the observation into a hanging table.
Iteration/Obs 2: The site should now exist in the database, but it isn't found by the same site check as in Iteration 1, and a second new site is created.
Iteration/Obs 3: The check for the existing site NOW WORKS, and the third observation is attached to one of the existing sites. So the web service and database code does work.
The web service commits at the end of each iteration.
Is the reason the second iteration doesn't find the new site in the database a delay in the Oracle commit after it's called from the Java code, so that Oracle has already started processing iteration 2 by the time iteration 1 is truly complete? Or is it possible that Oracle is running each iteration on a separate thread?
One solution we thought about was to use Thread.sleep(1000) in the web service, but I'd rather not penalize the iPhone users.
Thanks for any help you can provide.
Iain
Sounds like a race condition to me. Probably your observations 1 and 2 are arriving very close to each other, so that 1 is still processing when 2 arrives. Oracle is ACID-compliant, meaning your transaction for observation 2 cannot see the changes made in transaction one unless transaction one completed before transaction two started.
If you need check-then-create functionality, you'd best synchronize it at a single point in your back end.
Also, add a constraint in your DB to avoid the duplication at all costs.
It's not an Oracle problem; Thread.sleep would be a poor solution, especially since you don't know the root cause.
Your description is confusing. Are the three JSON strings sent in one HTTP request? Does the order matter, or does processing any of them first set up the new location for the ones that follow?
What's a "hanging table"?
Is this a parent-child relation between location and observation? So the unit of work is to INSERT a new location into the parent table followed by three observations in the child table that refer back to the parent?
I think it's a problem with your queries and how they're written. I can promise you that Oracle is fast enough for this trivial problem. If it can handle NASDAQ transaction rates, it can handle your site.
I'd write your DAO for Observation this way:
public interface ObservationDao {
    void saveOrUpdate(Observation observation);
}
Keep all the logic inside the DAO. Test it outside the servlet and put it aside. Once you have it working you can concentrate on the web app.
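For illustration, a JDBC sketch of what saveOrUpdate could look like. The schema, the Observation getters, and the unique constraint on site(name, loc) are all assumptions, not the poster's actual code; the constraint is what actually prevents the duplicate "New Site A" rows, while the catch block just recovers gracefully when the race happens:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcObservationDao implements ObservationDao {
    private final Connection con;

    public JdbcObservationDao(Connection con) {
        this.con = con;
    }

    // Find-or-create the site, then insert the observation, in one transaction.
    @Override
    public void saveOrUpdate(Observation obs) {
        try {
            con.setAutoCommit(false);
            long siteId = findOrCreateSite(obs.getSiteName(), obs.getLocation());
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO observation (site_id, details) VALUES (?, ?)")) {
                ps.setLong(1, siteId);
                ps.setString(2, obs.getDetails());
                ps.executeUpdate();
            }
            con.commit();
        } catch (SQLException e) {
            try { con.rollback(); } catch (SQLException ignored) { }
            throw new RuntimeException(e);
        }
    }

    private long findOrCreateSite(String name, String loc) throws SQLException {
        Long id = selectSiteId(name, loc);
        if (id != null) {
            return id;
        }
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO site (name, loc) VALUES (?, ?)", new String[] {"id"})) {
            ps.setString(1, name);
            ps.setString(2, loc);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                keys.next();
                return keys.getLong(1);
            }
        } catch (SQLException duplicate) {
            // A racing request inserted the same site first and the unique
            // constraint rejected ours, so read back the row it created.
            return selectSiteId(name, loc);
        }
    }

    private Long selectSiteId(String name, String loc) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT id FROM site WHERE name = ? AND loc = ?")) {
            ps.setString(1, name);
            ps.setString(2, loc);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getLong(1) : null;
            }
        }
    }
}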

Posting updates to "a" wall

I have a Facebook-like application - a virtual whiteboard for multiple 'teams' who share a 'wall' common to their project. There are about 9-12 entities for which I capture the data. I'm trying to have the user's homepage display a feed of the activities that have happened since their last login - like how Facebook posts notifications:
"[USER] has done [some activity] on [some entity] - 20 minutes ago"
where [...] are clickable links and the activities are primarily (rather only) CRUD.
I'll have to persist these updates. I'm using MySQL as the backend DB and thought of having an update table per project that could store the activities. But it seems there would need to be one trigger per table, which would just be redundant. Moreover, it's difficult to nail down the tabular schema for that update table, since there are many different entities.
The constraint is to use MySQL but I'm open to other options of "how" to achieve this functionality.
Any ideas?
PS: Using jQuery + REST + Restlet + Glassfish + MySQL + Java
It doesn't have to be handled at the database level. You can have a transaction logging service that you call in each operation. Each transaction gets a (unique, sequential) key.
Store the key of the last item the person saw, show any updates where the key is higher, then update the last key seen.
A periodic routine can go through the user accounts and see what is the lowest seen transaction log key across all users (i.e. what is the newest log entry that all users have already seen) and delete/archive any entries with a key <= that one.
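A sketch of such a logging service backed by a single generic table, so there is no trigger or per-entity schema to maintain. The table and column names are illustrative; the AUTO_INCREMENT id serves as the unique, sequential key:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Assumed schema:
//   activity_log(id BIGINT AUTO_INCREMENT, user_id, verb, entity_type, entity_id, created_at)
public class ActivityLogService {
    private final Connection con;

    public ActivityLogService(Connection con) {
        this.con = con;
    }

    // Call this from each CRUD operation in the service layer.
    public void log(long userId, String verb, String entityType, long entityId)
            throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO activity_log (user_id, verb, entity_type, entity_id, created_at) "
              + "VALUES (?, ?, ?, ?, NOW())")) {
            ps.setLong(1, userId);
            ps.setString(2, verb);
            ps.setString(3, entityType);
            ps.setLong(4, entityId);
            ps.executeUpdate();
        }
    }

    // Everything newer than the last key this user saw, oldest first.
    public List<String> updatesSince(long lastSeenKey) throws SQLException {
        List<String> feed = new ArrayList<>();
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT user_id, verb, entity_type, entity_id "
              + "FROM activity_log WHERE id > ? ORDER BY id")) {
            ps.setLong(1, lastSeenKey);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    feed.add(rs.getLong("user_id") + " " + rs.getString("verb") + " "
                           + rs.getString("entity_type") + " " + rs.getLong("entity_id"));
                }
            }
        }
        return feed;
    }
}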

Using memcache in google app engine

I have created an app using GAE. I am expecting 100k requests daily. At present, for each request the app needs to look up 4 tables and 8 different columns before performing the needed task.
These 4 tables are my master tables, having 5k, 500, 200 and 30 records. Together they are under 1 MB (the limit).
Now I want to put my master records in memcache for faster access and fewer RPC calls. Whenever a user updates a master I'll replace the memcache object.
I need community suggestions about this.
Is it OK to change the current design?
How can I put 4 master table data in memcache?
Here is how the application works currently:
100s of users access the same application page.
They provide a unique identification token and 3 more parameters (let's say p1, p2 and p3).
My servlet receives the request.
The application fetches the user table by token and checks the enabled state.
The application fetches another table (say, department) and checks for p1's existence. If it exists, it checks the enabled status.
If the above returns true, a service table is queried based on parameter p2 to check whether this service is enabled for this user, and the Service EndDate is checked.
Based on p3's length, another table is checked for availability.
You shouldn't be thinking in terms of inserting tables into memcache. Instead, use an 'optimistic cache' strategy: any time you need to perform an operation that you want cached, first attempt to look it up in memcache; if that fails, fetch it from the datastore, then store it in memcache. Here's an example:
def cached_get(key):
    entity = memcache.get(str(key))
    if not entity:
        entity = db.get(key)
        memcache.set(str(key), entity)
    return entity
Note, though, that caching individual entities is fairly low return - the datastore is fairly fast at doing fetches. Caching query results or rendered pages will give a much better improvement in speed.
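Since the question mentions a servlet, here is roughly the same look-aside pattern with the GAE Java APIs (a sketch; error handling and cache expiration policy omitted):

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class CachedDatastore {
    private final MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
    private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

    public Entity cachedGet(Key key) throws EntityNotFoundException {
        Entity entity = (Entity) cache.get(key.toString());
        if (entity == null) {
            entity = datastore.get(key); // cache miss: fetch and populate
            cache.put(key.toString(), entity);
        }
        return entity;
    }
}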
