Task
A user may collect ONE bonus on the website at most ONCE per day.
Server structure:
two web servers
one database
These two web servers are deployed with the same code and are connected to the same database.
Requests are randomly directed to one of the web servers.
What I have
I wrote the whole business logic (checking whether this is the user's first request of the day, whether the user is legit, etc.) in one stored procedure.
I use @Transactional from the Spring Framework in the hope of making the procedure I wrote transactional in the DB.
Problem
A user sent 10,000 requests at the same time; two of them were directed to the two different servers and invoked the procedure simultaneously, which means the user received TWO bonuses that day.
Help
So, from what I understand, @Transactional in the Spring Framework controls the code's transaction boundaries, but does not lock the DB directly? Requests hitting Server A can still access tables in the DB while Server B is running methods annotated with @Transactional?
And how can I use a transaction in a stored procedure in Oracle?
Thanks in advance.
Each connection to Oracle is a transaction; transactions are implicit in Oracle, unlike SQL Server where you need to issue BEGIN TRAN. If you have a web farm/garden connecting directly to the database, and a load-balancing switch sends request A to web server 1 and request B to web server 2, you are going to get two transactions against the database. By default, Oracle's isolation level is READ COMMITTED. If something in your stored procedure queries to check whether the bonus has already been applied, you want to SELECT ... FOR UPDATE to lock the row, so the other transaction is blocked from reading it until the first transaction finishes its update. Also, have you considered using sticky sessions to keep each session pinned to one web server? Otherwise, I would consider some middleware code following the CQRS pattern as another alternative, applied before the requests reach the database: http://blog.trifork.com/2010/01/27/cqrs-designing-domain-events/
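A minimal PL/SQL sketch of that SELECT ... FOR UPDATE pattern. The procedure and table names (claim_daily_bonus, app_user, user_bonus) are assumptions for illustration, and whether the procedure commits itself or leaves that to the caller's @Transactional boundary depends on your setup:

```sql
CREATE OR REPLACE PROCEDURE claim_daily_bonus(p_user_id IN NUMBER) AS
  v_dummy NUMBER;
  v_count NUMBER;
BEGIN
  -- Serialize on the user row: a concurrent call from the other web
  -- server blocks here until this transaction commits or rolls back.
  SELECT 1 INTO v_dummy
    FROM app_user
   WHERE user_id = p_user_id
     FOR UPDATE;

  -- Safe to check now: no other transaction can pass the lock above.
  SELECT COUNT(*) INTO v_count
    FROM user_bonus
   WHERE user_id = p_user_id
     AND bonus_date = TRUNC(SYSDATE);

  IF v_count = 0 THEN
    INSERT INTO user_bonus (user_id, bonus_date)
    VALUES (p_user_id, TRUNC(SYSDATE));
    -- ... grant the bonus ...
  END IF;
END;
```

As a belt-and-braces addition, a unique constraint on (user_id, bonus_date) guarantees that even if the locking is ever bypassed, a second insert for the same day fails instead of granting a second bonus.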
Related
We have an application behind a load balancer that only supports round-robin, no sticky sessions.
The Spring Boot application runs in an OpenShift cluster, with a number of pods in a deployment. We have at least two pods, but might scale this up to 10 or 20, depending on load.
The application uses Spring Session and Spring Security. Spring Session is configured to use MongoDB (not Redis), since the application is already using MongoDB for other purposes.
During functional testing with low to moderate load, we noticed issues with session attributes "going missing": the code that updates these entries runs successfully, but after the request is finished, the older contents of the attributes are back in the session. This happens randomly.
Testing with a single instance of the application, no such observations were made.
To me, this smells like a race condition between the write back of the session object to Mongo, with some HTTP request on one pod racing the write back in another pod, and the "wrong one" winning.
Is this a valid usage scenario for Spring Session with MongoDB? In other words, is this supposed to work?
If it is supposed to work, how can I find out what's happening, and what can I do to solve the issue?
If it's not supposed to work, is there a Spring Session setup that would allow a cross application server sharing of session state without race conditions?
We spent a lot of time with the DB team trying to figure out if this had anything to do with the MongoDB client connection or server configuration, but after some more thorough research, I've found the culprit: it's spring-session-data-mongodb, because it fails to implement delta updates.
https://github.com/spring-projects/spring-session-data-mongodb/issues/106
The problem is that there is no logic to check whether a write to the session repository is necessary, nor any tracking of which attributes of the session have been changed. The session is written back at the end of every request unconditionally.
If you have multiple concurrent requests, as any normal web application does, the session state that survives is from the request that started last and finished first. So a simple image retrieval (handled through a Spring handler) will cause the session to be written back. If, like us, you have a login handler that takes a substantial amount of time (up to 2 seconds) because it retrieves a bunch of user info from a backend system, the image request will have started after the login request but finished before it. spring-session-data-mongodb then decides that the session state from the login handler is stale and fails to save it.
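The unconditional full-session write-back can be shown deterministically without Mongo. This is a toy model, not the library's actual code: each "request" loads a copy of the whole session and saves the whole thing back, so whichever save lands last wins and the other request's attributes go missing (in our production case, the later save was additionally rejected as stale, but the root cause is the same missing delta tracking):

```java
import java.util.HashMap;
import java.util.Map;

public class SessionRace {
    // stands in for the session document stored in MongoDB
    static Map<String, String> store = new HashMap<>(Map.of("user", "anonymous"));

    static Map<String, String> load() { return new HashMap<>(store); }

    // full, unconditional write-back at the end of every "request"
    static void save(Map<String, String> session) { store = new HashMap<>(session); }

    static void simulate() {
        Map<String, String> slowLogin = load();  // slow login request starts first
        Map<String, String> image = load();      // image request starts second

        image.put("lastAsset", "logo.png");
        save(image);                             // image request finishes first

        slowLogin.put("user", "alice");
        save(slowLogin);                         // login finishes last: full overwrite
    }

    public static void main(String[] args) {
        simulate();
        System.out.println(store.get("user"));      // alice
        System.out.println(store.get("lastAsset")); // null: the image request's attribute is gone
    }
}
```

A delta-aware store would merge the two non-overlapping attribute updates instead of overwriting the whole document.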
So until that bug is fixed, we will need to use a different repository like Redis.
I have a Java web application deployed on two VMs, with NLB (Network Load Balancing) set up for them. My application uses sessions. I am confused about how the user session is managed across both VMs. For example: if I make a request that goes to VM1, a user session is created there. If my second request goes to VM2 and wants to access the session data, how would it find the session that was created on VM1?
Please help me clear up this confusion.
There are several solutions:
configure the load balancer to be sticky: i.e. requests belonging to the same session always go to the same VM. The advantage is that this solution is simple. The disadvantage is that if one VM fails, half of the users lose their sessions
configure the servers to use persistent sessions. If sessions are saved to a central database and loaded from this central database, then both VMs will see the same data in the session. You might still want to have sticky sessions to avoid concurrent accesses to the same session
configure the servers in a cluster, and distribute/replicate the sessions across all the nodes of the cluster
avoid using sessions, and just use a signed cookie to identify the users (and possibly carry a little additional information). A JSON Web Token could be a good solution. Get everything else from the database when you need it. This ensures scalability and failover and, IMO, often makes things simpler on the server rather than more complicated.
You'll have to look in the documentation of your server to see what is possible with that server, or use a third-party solution.
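The signed-cookie option above can be sketched with the JDK's built-in HMAC support. The cookie layout (payload + "." + signature) and the helper names are illustrative, not a standard; a real JWT adds a header and base64url-encoded JSON claims on top of the same idea:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SignedCookie {
    // shared by all VMs behind the load balancer; no session store needed
    private static final byte[] KEY =
            "change-me-server-secret".getBytes(StandardCharsets.UTF_8);

    static String hmac(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
        return Base64.getUrlEncoder().withoutPadding()
                     .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    }

    // cookie value = payload + "." + signature, e.g. "userId=42.AbC..."
    static String sign(String payload) throws Exception {
        return payload + "." + hmac(payload);
    }

    // any VM holding KEY can verify; note a production version should use a
    // constant-time comparison such as MessageDigest.isEqual
    static boolean verify(String cookie) throws Exception {
        int dot = cookie.lastIndexOf('.');
        if (dot < 0) return false;
        String payload = cookie.substring(0, dot);
        return hmac(payload).equals(cookie.substring(dot + 1));
    }

    public static void main(String[] args) throws Exception {
        String c = sign("userId=42");
        System.out.println(verify(c));       // true
        System.out.println(verify(c + "x")); // false: tampered cookie
    }
}
```

Because verification only needs the shared key, any VM can authenticate the request regardless of which VM issued the cookie, which is exactly what makes this option load-balancer friendly.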
We can use distributed Redis to store the session and that could solve this problem.
I am working on an application where I have to consolidate data from clients into a central DB.
The problem is how to monitor data changes (save, update, delete) on the clients in real time. Is there any way to do that?
I am using Hibernate to fetch data from the clients in batches. Checking every row of data and comparing it cell by cell with the central DB is not practical. There are about 25 tables I have to work on.
Appreciate any help or hint.
Regards
DB2 cannot do this for you.
You need two things:
Catch all write operations in your web application. If you know the actions/URLs/servlets, that is your starting point. Or give Hibernate listeners a chance.
If you need real-time updates of your web clients, you should look at WebSockets. They allow server-to-client communication: with a WebSocket you can tell your web clients to update their data once it has been changed by another client.
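A framework-agnostic sketch of the first point: broadcast every write as a change event to subscribers (for example, a WebSocket endpoint that forwards it to browsers). The interfaces here are made up for illustration; with Hibernate you would publish from its PostInsertEventListener / PostUpdateEventListener / PostDeleteEventListener callbacks instead:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ChangeFeed {
    public enum Op { SAVE, UPDATE, DELETE }

    // what we broadcast to interested parties
    public record Change(Op op, String table, Object id) {}

    public interface Listener { void onChange(Change c); }

    private final List<Listener> listeners = new CopyOnWriteArrayList<>();

    public void subscribe(Listener l) { listeners.add(l); }

    // call this from your write paths (servlets/DAOs) or from the
    // Hibernate event-listener callbacks mentioned above
    public void publish(Change c) { listeners.forEach(l -> l.onChange(c)); }

    public static void main(String[] args) {
        ChangeFeed feed = new ChangeFeed();
        feed.subscribe(c -> System.out.println(c.op() + " on " + c.table() + " id=" + c.id()));
        feed.publish(new Change(Op.UPDATE, "customer", 7));
        // prints: UPDATE on customer id=7
    }
}
```

This sidesteps row-by-row comparison entirely: instead of diffing 25 tables against the central DB, you record each change at the moment it happens.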
If a Java-based web application needs to update clients based on a database record update using server-sent events, are there ways of getting the database updates via a callback mechanism from the database to a servlet, so that the servlet does not need to continuously poll the database to detect updates?
There is nothing in the JDBC specification (that I know of) that will allow you to do this. If anything, this is database-specific functionality. Oracle supports such functionality starting with Oracle 11g:
http://docs.oracle.com/cd/E18283_01/java.112/e16548/dbchgnf.htm
I was asked the following question in an interview and couldn't answer that.
How do you include a JDBC operation, a web service call, and a JMS operation in one single transaction? That means if one of them fails, all of them have to be rolled back.
I have heard about the two-phase commit protocol and Oracle XA in the case of database transactions involving multiple databases, but I am not sure whether the same can be used here.
The critical factor is that the web services you connect to must have been built with a web services framework that supports transactions. JBoss Narayana is one such framework. Once the web service endpoint you are connecting to is on such a framework, it's just a matter of configuring Spring to use the appropriate client.
In the case of Narayana, the Spring config for transactions with web services is shown at http://bgshinhung.blogspot.ca/2012/10/integrating-spring-framework-jetty-and.html.
You are never going to be able to do this in a completely bomb-proof way, as the systems are separate. A failure in one stage (for example, the power on your server being cut between the SQL commit and the JMS commit) will leave the SQL commit in place.
The only way to resolve that would be to keep some record of partial commits somewhere and scan it on startup to fix any resulting problems; but then, what happens if you have a failure while processing or keeping that list?
Essentially, the solution is to do your own implementation of the multi-stage commit and rollback process, wrapping the three operations you need to make. If any of the operations fails, you need to reverse any that have been done so far (preferably using an internal transaction mechanism; if not, by issuing reversing commands).
There are a lot of corner cases and potential ways for a system like this to fail, though, so really the first approach should be to consider whether you can redesign the system so you don't need to do this at all!
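A sketch of that hand-rolled multi-stage commit with reversing commands (the Step interface and its names are illustrative, not a library API): steps run in order, and on the first failure every completed step is compensated in reverse order.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class CompensatingTransaction {
    public interface Step {
        void execute() throws Exception;
        void compensate();   // the reversing command; best effort only
    }

    /** Runs steps in order; on failure, compensates completed steps in reverse. */
    public static boolean run(Step... steps) {
        Deque<Step> done = new ArrayDeque<>();
        try {
            for (Step s : steps) {
                s.execute();
                done.push(s);        // LIFO: last completed is reversed first
            }
            return true;
        } catch (Exception e) {
            while (!done.isEmpty()) {
                done.pop().compensate();
            }
            return false;
        }
    }
}
```

The caveats from the answer apply in full: if the process dies between an execute and its compensate, the system is left inconsistent, which is exactly why a persisted record of partial commits (and the redesign question) comes up.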
It may be a trick question, and the right answer is "it cannot be done".
But I would try to pseudo-code something like this:
try {
    jdbc.startTransaction();
    JMS.startTransaction();
    jdbc.doSomeStuff();
    JMS.doSomeStuff();
    // call the web service before committing: it is the one participant
    // without rollback, so commit the other two only once it succeeds
    if (webService.doSomeStuff() == fail) { throw new Exception(); }
    jdbc.commit();
    JMS.commit();
}
catch (Exception e) {
    jdbc.rollback();
    JMS.rollback();
}
You prepare one service that supports rollback, and a second service that supports rollback. Then you try the web service, and if it fails, you roll back the two that can be rolled back.
Maybe there is also a way to implement rollback for your web service.
We had a similar situation: a web service pushes the data, and we have to read the XML stream and persist it to the DB (Oracle). The implementation we followed:
The web service sends a SOAP message containing the XML stream data.
All request SOAP messages are pushed to JMS.
The respective listener reads the stream and persists the data into temporary tables.
If the request is processed successfully, the data is moved from the temp tables to the actual tables.
On any error, roll back.
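The temp-to-actual handoff above could look like this in PL/SQL. The table and column names (payment_stage, payment, request_id) are hypothetical:

```sql
-- Hypothetical tables: payment_stage (temporary) and payment (actual)
BEGIN
  INSERT INTO payment (id, amount, created_at)
  SELECT id, amount, created_at
    FROM payment_stage
   WHERE request_id = :req_id;

  DELETE FROM payment_stage
   WHERE request_id = :req_id;

  COMMIT;               -- move and cleanup succeed or fail together
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;           -- keep the staged rows for inspection and retry
    RAISE;
END;
```

Because the move and the cleanup share one database transaction, a failure anywhere leaves the staged data intact for a later retry.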
Hope the above points help.
To my mind, it looks like the interviewer wanted to gauge your ability to think in terms of enterprise-wide distribution. A few points:
JDBC is used for database connectivity.
A web service is probably a mechanism to send a control command to a server from any client.
JMS is mainly used for alerts about what is happening in the system.
My guess is that your interviewer had a typical scenario in mind, something like the following:
Data is on one tier (cluster, or machine).
Clients may be of any kind: mobile app, iOS, Objective-C, browser, etc.
JMS is configured to listen to topics (or that is what he wishes he could do).
Now probably the best approach is to write a JMS subscriber which decides what to do in its onMessage() method. As an example, suppose a payment request is initiated from a client via a web service. This triggers a JMS publisher to tell a DAO to make the necessary internal connection to the database; when the transaction is in the middle, and again when it finishes, a message is published to the subscriber. You would have fine-grained control of every step, as each would be configured to be published through JMS. Though this is difficult to achieve, this could be your interviewer's expected approach. (This is only my guess, please note.)