Java EE - Singleton EJB with Concurrent Access to Synchronized Method - java

I have an EJB as below. This has been created solely for test purposes - I'm "sleeping" the thread as I want to simulate the case where the query is scheduled again before the synchronized method has finished executing.
The observed behaviour is as expected - but is this the correct way to poll the database for, for example, rows that have been inserted so that some processing can be performed before they are updated? I want the method to be synchronized because I don't want another call to modify the database state while I am still processing the rows from a previous call.
@Singleton
public class MyResource {

    @PersistenceContext(unitName = "MyMonitor")
    private EntityManager em;

    @Schedule(second = "*", minute = "*", hour = "*")
    public synchronized void checkDb() throws SQLException, InterruptedException {
        List<Clients> l =
            em.createQuery("from Clients cs", Clients.class).getResultList();
        Thread.sleep(2000);
        System.out.println(l.size());
    }
}

You should not implement a single point of database access yourself just to make sure that records are not changed during an update. For that, you want to use database locking. In Java EE / JPA 2.0 you have several locking modes at hand; check out, for example, this Oracle blog or this wikibook article. Concerning the other components trying to write while the lock is held, you have to react to the lock exception and implement some sort of retry mechanism.
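For illustration only, here is a minimal sketch of pessimistic locking with JPA 2.0, reusing the Clients entity and query from the question; the surrounding transaction and the retry handling are only indicated, not implemented:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.PessimisticLockException;

public class ClientProcessor {

    private final EntityManager em;

    public ClientProcessor(EntityManager em) {
        this.em = em;
    }

    // Must run inside an active transaction: the rows stay locked until commit,
    // so a concurrent writer either waits or fails with a lock exception.
    public void processClients() {
        try {
            List<Clients> rows = em.createQuery("from Clients cs", Clients.class)
                                   .setLockMode(LockModeType.PESSIMISTIC_WRITE)
                                   .getResultList();
            // ... process and update the locked rows here ...
        } catch (PessimisticLockException e) {
            // Another transaction holds the lock: back off and retry later.
        }
    }
}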

Related

Preventing concurrent access to a method in servlet

I have a method in a servlet that inserts tutoring bookings in the database. This method has a business rule that checks whether the tutor of this session is already busy at that date and hour. The code looks something like this:
class BookingService {
    public void insert(Booking t) {
        if (available(t.getTutor(), t.getDate(), t.getTime())) {
            bookingDao.insert(t);
        } else {
            // reject
        }
    }
}
The problem is that multiple users may simultaneously try to book the same tutor on the same date and time, and there is nothing that prevents them both from passing the test and inserting their bookings. I've tried making insert() synchronized and using locks, but it doesn't work. How can I prevent concurrent access to this method?
Using synchronized is an inadequate way to try to solve this problem:
First, you will have coded your application so that only one instance can be deployed at a time. This isn’t just about scaling in the cloud. It is normal for an IT department to want to stand up more than one instance of an application so that it is not a single point of failure (so that in case the box hosting one instance goes down, the application is still available). Using static synchronized means that the lock doesn’t extend beyond one application classloader, so multiple instances can still interleave their work in an error-prone way.
If you should leave the project at some point, later maintainers may not be aware of this issue and may try to deploy the application in a way you did not intend. Using synchronized means you will have left a land mine for them to stumble into.
Second, using the synchronized block is impeding the concurrency of your application since only one thread can progress at a time.
So you have introduced a bottleneck, and at the same time cut off operations’ ability to work around the bottleneck by deploying a second instance. Not a good solution.
Since the posted code shows no signs of where transactions are, I’m guessing either each DAO creates its own transaction, or you’re connecting in autocommit mode. Databases provide transactions to help with this problem, and since the functionality is implemented in the database, it will work regardless of how many application instances are running.
An easy way to fix the problem which would avoid the above drawbacks would be to put the transaction at the service layer so that all the DAO calls would execute within the same transaction. You could have the service layer retrieve the database connection from a pool, start the transaction, pass the connection to each DAO method call, commit the transaction, then return the connection to the pool.
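As an illustration only, the service-layer transaction could look roughly like the sketch below; the DataSource field and the Connection-taking DAO signatures are assumptions, since the question does not show them:

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

class BookingService {

    private DataSource dataSource;   // assumed connection pool
    private BookingDao bookingDao;

    public void insert(Booking t) throws SQLException {
        Connection con = dataSource.getConnection();
        try {
            con.setAutoCommit(false);
            // Both calls use the same connection, i.e. run in the same transaction.
            if (available(con, t.getTutor(), t.getDate(), t.getTime())) {
                bookingDao.insert(con, t);
            } else {
                // reject
            }
            con.commit();
        } catch (SQLException e) {
            con.rollback();
            throw e;
        } finally {
            con.close();             // returns the connection to the pool
        }
    }
}

Note that a transaction by itself does not close the race unless the availability check also locks the relevant rows (e.g. SELECT ... FOR UPDATE) or a unique constraint rejects the second booking, so one of those should still be used inside the transaction.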
One way you could solve the problem is by using a synchronized block. There are many things you could choose as your locking object - for the moment this should be fine:
class BookingService {
    public void insert(Booking t) {
        synchronized (this) {
            if (available(t.getTutor(), t.getDate(), t.getTime())) {
                bookingDao.insert(t);
            } else {
                // reject
            }
        }
    }
}
If you have more than one instance of the servlet, then you should use a static object as a lock.
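A minimal sketch of that variant (the lock field name is just illustrative):

class BookingService {
    // One lock shared by all BookingService instances in this classloader
    private static final Object LOCK = new Object();

    public void insert(Booking t) {
        synchronized (LOCK) {
            if (available(t.getTutor(), t.getDate(), t.getTime())) {
                bookingDao.insert(t);
            } else {
                // reject
            }
        }
    }
}

As the previous answer points out, this still only covers a single JVM; across several application instances you need database-level coordination.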

How to delete entities in (multithreaded) transactional Spring/JPA

Assume there is a service component making use of a CrudRepository of Book entities. Assume one of the methods of the service should be transactional, and should (among other things) delete one entity from the database with transactional semantics (i.e. should the delete be impossible to perform, all effects should be rolled back).
Roughly,
@Component
public class TraService {

    @Autowired
    BookRepo repo;

    @Transactional
    public void removeLongest() {
        // some repo.find's and business logic --> Book toDel
        repo.delete(toDel);
    }
}
Now this should work in a multithreaded context, e.g. in Spring MVC. For simplicity I launch 2 threads, each on a task provided with a reference to the TraService bean. Logs show that indeed two EntityManagers are created and bound to the respective threads. However, as the first thread succeeds with the delete, the other throws
org.springframework.orm.jpa.JpaOptimisticLockingFailureException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1;
from which I do not know how to recover (i.e. I suspect the rollback is not complete, and the code the thread was supposed to execute after calling the transactional method will not be executed). Worker's code:
public void run() {
    service.removeLongest(); // transactional
    System.out.println("Code progressing really well " + Thread.currentThread()); // not executed on the thread with the exception
}
How do we properly handle such transactional deletes in Spring/JPA?
Short answer: The correct behaviour on optimistic lock exceptions is to catch the exception and retry.
Long answer: Optimistic locking is a concurrency-control strategy that assumes that
"multiple transactions can frequently complete without interfering with each other"
(from: https://en.wikipedia.org/wiki/Optimistic_concurrency_control)
Optimistic locking exists mainly for performance reasons, and is usually implemented with a version counter field that is incremented on each modification. If the version counter changes during a transaction, that means a concurrent modification has happened, and that causes the exception to be thrown.
Pessimistic locking will instead prevent any concurrent modification up front; it is easier to manage, but has worse performance.
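A minimal retry sketch around the worker code from the question; the retry budget and catching Spring's generic OptimisticLockingFailureException (the parent of the JpaOptimisticLockingFailureException seen in the stack trace) are assumptions, so adjust them to your stack:

import org.springframework.dao.OptimisticLockingFailureException;

public void run() {
    int attempts = 3;                    // arbitrary retry budget
    while (attempts-- > 0) {
        try {
            service.removeLongest();     // transactional
            break;                       // success: stop retrying
        } catch (OptimisticLockingFailureException e) {
            // The other thread won the race and this transaction was rolled back.
            // Loop and retry against the fresh database state.
        }
    }
    System.out.println("Code progressing really well " + Thread.currentThread());
}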

What happens when a #Transactional annotated method is hit in parallel by multiple instances?

Please correct me if I am wrong somewhere.
I am having an issue where my transactions are not being saved to the database and some sort of race is occurring which screws up the data. The app is hit in parallel by multiple instances. I have used @Transactional, which I know is to do a transaction with the database, and the transaction is committed when the method returns.
The question is: does hitting it through multiple instances still maintain this one-transaction-per-hit behaviour, or does it not handle the situation, so the data gets screwed up because of the race?
Can a solution be suggested for the given condition?
The @Transactional is not related to synchronization. It just makes sure that your flow either succeeds or fails. Each hit has its own flow and its own success or failure.
I guess what you're experiencing is due to the use of shared data.
For example, if you have a class Foo that looks like this:
public class Foo {

    private static boolean flag = true;

    @Transactional
    public void doSomething() {
        flag = false;
    }
}
In this case it doesn't matter that you have many Foo instances because they all use the same flag.
Another scenario would be if you have one instance of Foo (very common if you use something like Spring) and you have data that is changed for this instance. You can look at the same Foo example and just remove the static from flag:
public class Foo {

    private boolean flag = true;

    @Transactional
    public void doSomething() {
        flag = false;
    }
}
In either of those cases you need to synchronize the data changes somehow. It has nothing to do with #Transactional.
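For instance, a minimal sketch of making that shared flag thread-safe (using AtomicBoolean is just one option; a synchronized block on a shared lock works equally well, and the Spring @Transactional import is an assumption):

import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.transaction.annotation.Transactional;

public class Foo {

    private final AtomicBoolean flag = new AtomicBoolean(true);

    @Transactional
    public void doSomething() {
        // The transactional behaviour is unchanged; only the shared
        // state is now updated in a thread-safe way.
        flag.set(false);
    }
}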
Those transactions are database transactions; the behavior is database-engine dependent, but it usually works this way (a minimal JDBC sketch follows the list):
1. A thread enters the method.
2. Another thread enters the same (or any other) transactional method. It does not block, as @Transactional is not about synchronization.
3. One thread executes a query that locks a database resource, e.g. SELECT * FROM MYTABLE FOR UPDATE;.
4. Another thread tries to execute anything that needs the same database resource, e.g. UPDATE MYTABLE SET A = A + 1;, and it blocks.
5. The thread that acquired the lock in step 3 completes the transactional method successfully, making an implicit commit, or fails, making an implicit rollback.
6. The blocked thread wakes up and continues, as it can now get the resource that was locked.
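A minimal JDBC sketch of that locking sequence; the table and column names come from the example statements above, while the dataSource is an assumption:

// Thread A: acquires the row locks and holds them until commit/rollback.
try (Connection con = dataSource.getConnection()) {
    con.setAutoCommit(false);
    try (Statement st = con.createStatement();
         ResultSet rs = st.executeQuery("SELECT * FROM MYTABLE FOR UPDATE")) {
        // ... work with the locked rows ...
        con.commit();        // releases the locks
    } catch (SQLException e) {
        con.rollback();      // also releases the locks
        throw e;
    }
}

// Thread B: this statement blocks until thread A commits or rolls back.
// UPDATE MYTABLE SET A = A + 1;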

How do I block a query thread when another thread is updating the database?

I have a server that makes a child thread for every user that connects to the server. The child server class has the run method and other methods.
One method searches a MySQL database with SELECT.
Another method updates the database.
How can I block the method that searches the database while another thread uses the method that updates the database?
The proper way to handle your requirement is to do all database operations within a transaction. This avoids any need for mutual exclusion of the database code and also guarantees isolation between your Java process and any other database client doing its own operations.
The best way to do this is a database transaction with a proper isolation level.
Below are some isolation levels in MySQL (a short JDBC sketch of setting one follows the list):
Read uncommitted
Read committed
Repeatable reads
Serializable
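For illustration, setting the isolation level on a plain JDBC connection; the dataSource and the chosen level are assumptions:

try (Connection con = dataSource.getConnection()) {
    con.setAutoCommit(false);
    // SERIALIZABLE is the strictest level; pick the weakest one that
    // still gives the guarantees you need.
    con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);

    // ... SELECT and UPDATE statements here run in one isolated transaction ...

    con.commit();
}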
Not sure if this is a good design, but if you want a mutex on a method, declare the method as synchronized, like:
public synchronized void putInDbase(String value) {
    // only one thread will execute here at a time
}
But if you have two separate methods within the same class and you want to synchronize the actual code dealing with the database, you can use synchronized blocks:
public class myDbase {

    public void search() {
        synchronized (this) {
            // database code
        }
    }

    public void update() {
        synchronized (this) {
            // database code
        }
    }
}

Spring transactions and their interaction with the synchronized keyword

I have a DAO class that uses Spring JDBC to access an SQLite database. I have declared transactions on the DAO methods themselves since my service layer never combines queries in a transaction.
Since I use a few worker threads in parallel but only one thread can update an SQLite DB at the same time, I use synchronized to serialize access to the DAO.
At first, I synchronized externally from my service class, for example:
synchronized (dao) {
    dao.update(...);
}
Then, I figured I might as well get rid of the external synchronization and put synchronized on the DAO method itself:
public synchronized void update(...) {
    // Spring JDBC calls here
}
The strange thing is: my queries now take twice the time they used to!
Why?
Well, one difference is obvious:
synchronized (dao) {
    // here you are synchronizing on the transactional proxy
}
public synchronized void update(...) {
    // and here you are synchronizing on the target class, *inside* the proxy
}
What the implications of this are depends on your other code, but that's the obvious difference.
My guess is that your update method or entire class is annotated with @Transactional or wrapped in a transactional proxy through other means. This means that whenever you call the DAO's method, the transactional proxy retrieves a DB connection from the pool, opens a transaction and then calls the real method.
In your first scenario you synchronize before even reaching the proxy, so none of the connection and transaction work happens while you wait. In the second scenario the waiting happens after that.
If multiple threads try to perform simultaneous updates, only one of them will actually be updating, while the rest first open new connections and then wait for DAO access. As a consequence, instead of one connection being constantly reused, you will have multiple connections in use. I can only guess how much this really affects performance, but you can experiment with different pool sizes, starting with one.
