How to create a singleton Java class with multiple JVM support? - java

For example, I have a DBManager.java singleton class which I have to deploy in a clustered environment.
It is a web-based application, with the following deployment strategy:
Apache Load Balancer --> Tomcat 6 (3 Servers in cluster).
I have to maintain single instance of DBManager for 3 tomcat instances.
My code is
package com.db.util;

public class DBManager {

    // volatile is needed for double-checked locking to be safe:
    // without it, another thread may observe a partially constructed instance.
    private static volatile DBManager singleInstance;

    private DBManager() {}

    public static DBManager getSingleInstance() {
        if (singleInstance == null) {
            synchronized (DBManager.class) {
                if (singleInstance == null) {
                    singleInstance = new DBManager();
                }
            }
        }
        return singleInstance;
    }
}
I have been searching for a solution to this problem and found something like the JGroups API.
Can this be achieved using JGroups? Any idea how to implement that?

Java gives you a singleton in each JVM instance; you need some kind of coordination between the instances so that at any given time one of them is active, but if the active one dies a different instance becomes active.
Some app servers have built-in capabilities to control such coordinated worker instances; I don't know whether Tomcat has such a function.
Building such functionality yourself is surprisingly difficult, see this question and note that that question gives links to a useful library - which to me looks quite complex to use.
However in your case you have a database, and that gives you a point of coordination. I haven't designed this in detail, but I reckon it's possible to create a reservation scheme using a dedicated row in a control table. It will be a bit tricky to do this efficiently, balancing the speed of detection of an instance death with the overheads of polling the database to see which instance is active, but it seems doable.
The idea is that the record contains a "reservedUntil" timestamp and a "processId". Each process reads the record; if it contains its own id and the timestamp has not yet expired, it knows it can work. When the time is nearly expired, the active process updates the timestamp using an optimistic-locking-style "UPDATE ... WHERE timestamp == old timestamp" to manage race conditions. Each non-active process waits until the timestamp it last read has expired and then attempts to take control by updating the record, again using an optimistic-locking UPDATE ... WHERE. Usually that attempt to take control will fail, but if it succeeds we now have a new active instance, and due to optimistic locking we can only ever get one active instance.
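Not designed in detail either, but here is a minimal sketch of that reservation scheme, assuming a control table singleton_lock(id, process_id, reserved_until) seeded with a single row; the renew and take-over cases are compressed into one atomic UPDATE, and all table and class names are made up for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import javax.sql.DataSource;

public class ActiveInstanceElector {

    private static final long LEASE_MILLIS = 30000L;

    private final DataSource dataSource;
    private final String processId; // unique per Tomcat instance, e.g. host name + startup time

    public ActiveInstanceElector(DataSource dataSource, String processId) {
        this.dataSource = dataSource;
        this.processId = processId;
    }

    /** Returns true if this instance currently holds (or has just acquired) the lease. */
    public boolean tryBecomeOrStayActive() {
        String sql = "UPDATE singleton_lock SET process_id = ?, reserved_until = ? "
                + "WHERE id = 1 AND (process_id = ? OR reserved_until < ?)";
        Timestamp now = new Timestamp(System.currentTimeMillis());
        Timestamp newExpiry = new Timestamp(now.getTime() + LEASE_MILLIS);
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setString(1, processId);
            ps.setTimestamp(2, newExpiry);
            ps.setString(3, processId);
            ps.setTimestamp(4, now);
            // The single-row UPDATE is atomic, so at most one instance can win:
            // a competing instance whose lease check fails simply updates 0 rows.
            return ps.executeUpdate() == 1;
        } catch (SQLException e) {
            return false; // on any error, assume we are not the active instance
        }
    }
}

Each instance calls tryBecomeOrStayActive() periodically (well before LEASE_MILLIS expires) and only does the singleton work while it returns true.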

Singleton ensures only one instance of the class in a given JVM.
What is the issue with multiple DBManagers, one for each JVM, in your case?

Related

Singleton returns null when accessed by threads

As the title states, I'm trying to troubleshoot an issue where some threads which read data from a Singleton get a null value. My investigation into our logs reads as though it's a concurrency issue.
The Singleton is defined as follows:
@Singleton
public class StaticDatabaseEntries {

    private static final Map<String, Thing> databaseEntries = new HashMap<>();

    @Lock(LockType.READ)
    public Thing getThing(String index) {
        return databaseEntries.get(index);
    }
}
At first I was under the impression that only one element within the data was corrupted, as access to the same item repeatedly returns null. Further debug entries show that the issue appears isolated to a specific thread. It's as though once whatever induces the null return occurs on a thread, it continues to happen, but only on that thread.
An earlier version of this class did not apply LockType.READ, so per the specification LockType.WRITE is assumed. I deployed an update with the correct lock to enable concurrent reads. This did not improve the situation.
The data is loaded into the HashMap from a database upon deployment and remains unchanged for the duration. Since the class isn't tagged with @Startup, the application instead uses a context listener to trigger the loading of the entries from the database.
With threads primarily performing read activity I don't believe a switch to ConcurrentHashMap is beneficial. I am considering removing the static final modifiers, as they seem unnecessary when the container is managing concurrent access and the singleton lifecycle. I have experienced side effects when the container cannot subclass/proxy things which are marked as final in EJBs.
The other possibility I've considered is that there is some manner of bug in the container software. This is running on an older Java 7 and JBoss EAP 6. Worst case I'll have to forego the singleton pattern and instead load the entries from the database on demand.
In general: if you work with threads, even read activity can cause problems when you are calling a method on an object that isn't thread-safe. This is exactly what often leads to hard-to-detect errors in larger projects.
HashMap isn't thread-safe!
You should switch to ConcurrentHashMap.
For further details, have a look at this article: https://www.baeldung.com/java-concurrent-map
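A sketch of what that could look like for the bean from the question; the load method, presumably called from your context listener, is an assumption, and Thing is the type from the question:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;

@Singleton
@Lock(LockType.READ) // reads may run concurrently; loading takes an explicit WRITE lock
public class StaticDatabaseEntries {

    // ConcurrentHashMap safely publishes entries written by the loader
    // to the request threads that read them later.
    private final Map<String, Thing> databaseEntries = new ConcurrentHashMap<>();

    @Lock(LockType.WRITE)
    public void load(Map<String, Thing> entries) {
        databaseEntries.putAll(entries);
    }

    public Thing getThing(String index) {
        return databaseEntries.get(index);
    }
}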

Is Session.sendToTarget() thread-safe?

I am trying to integrate QFJ into a single-threaded application. At first I was trying to utilize QFJ with my own TCP layer, but I haven't been able to work that out. Now I am just trying to integrate an initiator. Based on my research into QFJ, I would think the overall design should be as follows:
The application will no longer be single-threaded, since the QFJ initiator will create threads, so some synchronization is needed.
Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on.
There are 2 aspects to the integration of the initiator into my application:
Receiving side (fromApp callback): I believe this is straightforward, I simply push messages to a thread-safe queue consumed by my MainProcessThread.
Sending side: I'm struggling to find documentation on this front. How should I handle synchronization? Is it safe to call Session.sendToTarget() from the MainProcessThread? Or is there some synchronization I need to put in place?
As Michael already said, it is perfectly safe to call Session.sendToTarget() from multiple threads, even concurrently. But as far as I see it you only utilize one thread anyway (MainProcessThread).
The relevant part of the Session class is in method sendRaw():
private boolean sendRaw(Message message, int num) {
    // sequence number must be locked until application
    // callback returns since it may be effectively rolled
    // back if the callback fails.
    state.lockSenderMsgSeqNum();
    try {
        // .... some logic here
    } finally {
        state.unlockSenderMsgSeqNum();
    }
}
Other points:
Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on.
Will you always use only one Session? If yes, then there is no use in utilizing the ThreadedSocketInitiator, since all it does is create a thread per Session.
The application will no longer be single threaded, since the QFJ initiator will create threads
As already stated in Use own TCP layer implementation with QuickFIX/J, you could try passing an ExecutorFactory. But this might not be applicable to your specific use case.
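For reference, the receiving-side hand-off described in the question could look roughly like this; it assumes quickfix.ApplicationAdapter is available (otherwise implement quickfix.Application directly), and the queue and its MainProcessThread consumer are placeholders:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import quickfix.ApplicationAdapter;
import quickfix.Message;
import quickfix.SessionID;

public class QueueingApplication extends ApplicationAdapter {

    // Consumed by the single MainProcessThread; the QFJ callback thread only enqueues.
    private final BlockingQueue<Message> inbound = new LinkedBlockingQueue<>();

    @Override
    public void fromApp(Message message, SessionID sessionId) {
        inbound.offer(message);
    }

    public BlockingQueue<Message> getInbound() {
        return inbound;
    }
}

On the sending side, Session.sendToTarget() can then be called directly from the MainProcessThread, as discussed above.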

Preventing concurrent access to a method in servlet

I have a method in a servlet that inserts tutoring bookings in the database. This method has a business rule that checks if the tutor of this session is already busy on that date and hour. The code looks something like this:
class BookingService {
    public void insert(Booking t) {
        if (available(t.getTutor(), t.getDate(), t.getTime())) {
            bookingDao.insert(t);
        } else {
            // reject
        }
    }
}
The problem is that multiple users may simultaneously try to book the same tutor on the same date and time, and there is nothing that prevents them both from passing the test and inserting their bookings. I've tried making insert() synchronized and using locks, but it doesn't work. How can I prevent concurrent access to this method?
Using synchronized is an inadequate way to try to solve this problem:
First, you will have coded your application so that only one instance can be deployed at a time. This isn’t just about scaling in the cloud. It is normal for an IT department to want to stand up more than one instance of an application so that it is not a single point of failure (so that in case the box hosting one instance goes down the application is still available). Using static synchronized means that the lock doesn’t extend beyond one application classloader so multiple instances can still interleave their work in an error prone way.
If you should leave the project at some point, later maintainers may not be aware of this issue and may try to deploy the application in a way you did not intend. Using synchronized means you will have left a land mine for them to stumble into.
Second, using the synchronized block is impeding the concurrency of your application since only one thread can progress at a time.
So you have introduced a bottleneck, and at the same time cut off operations’ ability to work around the bottleneck by deploying a second instance. Not a good solution.
Since the posted code shows no signs of where transactions are, I’m guessing either each DAO creates its own transaction, or you’re connecting in autocommit mode. Databases provide transactions to help with this problem, and since the functionality is implemented in the database, it will work regardless of how many application instances are running.
An easy way to fix the problem which would avoid the above drawbacks would be to put the transaction at the service layer so that all the DAO calls would execute within the same transaction. You could have the service layer retrieve the database connection from a pool, start the transaction, pass the connection to each DAO method call, commit the transaction, then return the connection to the pool.
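A rough sketch of that shape, assuming a JDBC DataSource and a hypothetical BookingDao whose methods accept the Connection; note that for the availability check itself to be race-free, the database still needs either a unique constraint on (tutor, date, time) or a locking read such as SELECT ... FOR UPDATE inside the transaction:

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class BookingService {

    private final DataSource dataSource;
    private final BookingDao bookingDao; // hypothetical DAO whose methods take a Connection

    public BookingService(DataSource dataSource, BookingDao bookingDao) {
        this.dataSource = dataSource;
        this.bookingDao = bookingDao;
    }

    public void insert(Booking t) throws SQLException {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false); // one transaction spans the check and the insert
            try {
                if (bookingDao.available(con, t.getTutor(), t.getDate(), t.getTime())) {
                    bookingDao.insert(con, t);
                    con.commit();
                } else {
                    con.rollback(); // reject
                }
            } catch (SQLException e) {
                con.rollback();
                throw e;
            }
        }
    }
}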
One way you could solve the problem is by using a synchronized block. There are many things you could choose as your locking object - for the moment this should be fine:
class BookingService {
    public void insert(Booking t) {
        synchronized (this) {
            if (available(t.getTutor(), t.getDate(), t.getTime())) {
                bookingDao.insert(t);
            } else {
                // reject
            }
        }
    }
}
If you have more than one instance of the servlet, then you should use a static object as a lock.

Play! Framework: Reuse of instances over multiple requests

I'm developing an application using the Play! Framework which makes heavy use of the javax.script package, including the ScriptEngine. As ScriptEngines are expensive to create, it would make sense to reuse them across multiple requests (I don't mind creating multiple ScriptEngines, say one per thread - at least I won't be creating a ScriptEngine for each request over and over).
I think this case is not restricted to ScriptEngines; there might be something in the framework I'm not aware of to handle such cases.
Thank you for any ideas you have!
Malax
Play is stateless, so there is no "session-like" mechanism to link an object to a user. You may have 2 alternatives:
Use the Cache. Store the ScriptEngine in the cache with a unique ID, and add a method that checks if it's still there. Something like:
public static ScriptEngine getScriptEngine(Long userId) {
    String key = "MY_ENGINE" + userId;
    ScriptEngine eng = (ScriptEngine) Cache.get(key);
    if (eng == null) {
        // create the engine on first use; "JavaScript" is just an example engine name
        eng = new ScriptEngineManager().getEngineByName("JavaScript");
        Cache.put(key, eng);
    }
    return eng;
}
Or create a singleton object that contains a static instance of the ScriptEngine so it's always there once the server starts.
I would say the Cache one is the best approach.
EDIT: on your comment, this will depend on the situation:
If you want to reuse an Engine across multiple requests of a single user (that is, each user has his own ScriptEngine to work with), the cache method works, as the cache links the Engine to the user id. This would solve any threading issue too.
Otherwise, if you want to reuse it across multiple requests of multiple users, the static method is a better approach. But as you mention the access won't be thread safe, in Play or in any system.
I'm thinking your best bet is to work asynchronously with them. I don't know how you will use the ScriptEngines, but try to do something like this:
On request, store an entry in a db table marking a ScriptEngine processing request
In the same request, launch an asynchronous job (or have one running every 30 seconds)
The job will read the first entry of the table, remove it, do the task, return answer to the user. The job may have a pool of ScriptEngine to work with.
As jobs are not launched again while a current job is working, if you have enough requests the job will never stop working. If it does, it means that you don't need engines at that time, and they will be recreated on demand.
This way you work linearly with a pool, ignoring threading issues. If you can't do this, then you need to fix the thread-safety of your ScriptEngine, as you can't expect to share an object that isn't thread-safe in a server environment which spawns multiple threads :)
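A rough sketch of such a job in Play 1.x, where ScriptRequest stands in for whatever model/table you use to queue the work; its fields and finder methods, and the engine name, are made up for illustration:

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import play.jobs.Every;
import play.jobs.Job;

@Every("30s")
public class ScriptWorkJob extends Job {

    // One engine is enough here: as noted above, a new run of the job is not
    // launched while the previous run is still working, so access stays single-threaded.
    private static final ScriptEngine engine =
            new ScriptEngineManager().getEngineByName("JavaScript");

    @Override
    public void doJob() throws Exception {
        ScriptRequest req = ScriptRequest.findFirstPending(); // hypothetical finder
        while (req != null) {
            Object out = engine.eval(req.script);
            req.result = String.valueOf(out);
            req.done = true; // or delete the entry, as described above
            req.save();
            req = ScriptRequest.findFirstPending();
        }
    }
}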
Why don't you implement a script-engine pool? Each request then gets an instance from the pool, the same way as with a JDBC connection pool.
But make sure the ScriptEngine is stateless.
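A minimal pool along those lines, borrowing the borrow/return pattern of a JDBC connection pool; the engine name and pool size are only examples, and as said, the engines must be stateless (or be reset between uses):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ScriptEnginePool {

    private final BlockingQueue<ScriptEngine> pool;

    public ScriptEnginePool(int size) {
        pool = new ArrayBlockingQueue<ScriptEngine>(size);
        ScriptEngineManager manager = new ScriptEngineManager();
        for (int i = 0; i < size; i++) {
            pool.add(manager.getEngineByName("JavaScript"));
        }
    }

    /** Blocks until an engine is free, runs the script, then returns the engine to the pool. */
    public Object eval(String script) throws Exception {
        ScriptEngine engine = pool.take();
        try {
            return engine.eval(script);
        } finally {
            pool.offer(engine);
        }
    }
}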

Handling a timeout in EJB3 without using threads

I have the following situation. I have a job that:
May time out after a given amount of time, and if that occurs, needs to throw an exception
If it does not time out, will return a result
If this job returns a result, it must be returned as quickly as possible, because performance is very much an issue. Asynchronous solutions are hence off the table, and naturally tying up the system by hammering isn't an option either.
Lastly, the system has to conform to the EJB standard, so AFAIK using ordinary threads is not an option, as this is strictly forbidden.
Our current solution uses a thread that will throw an exception after having existed for a certain amount of time without being interrupted by an external process, but as this clearly breaks the EJB standard, we're trying to solve it with some other means.
Any ideas?
Edited to add: Naturally, a job which has timed out needs to be removed (or interrupted) as well.
Edited to add 2:
This issue doesn't seem to have any solution, because detecting a deadlock seems to be mostly impossible sticking to pure EJB3 standards. Since Enno Shioji's comments below reflect this, I'm setting his suggestion as the correct answer.
This is more like a request for clarification, but it's too long to fit as a comment..
I'm not sure how you are doing it right now, since from what you wrote, just using the request processing thread seems to be the way to go. Like this:
// Some webservice method (synchronous)
public Result process(Blah blah) {
    try {
        return getResult(TimeUnit.SECONDS, 10);
    } catch (InterruptedException e) {
        // No result within 10 seconds!
        throw new ServiceUnavailableException("blah");
    }
}
I'm not sure why you are creating threads at all. If you are forced to use threads because the getResult method doesn't time out at all, you would have a thread leak. If it times out after a longer time and you thus want to "shortcut" your reply to the user, that would be the only case I'd consider using a thread in the way I imagine you are using it. This could result in threads piling up under load, and I'd strive to avoid such a situation.
Maybe you can post some code and let us know why you are creating threads in your service at all?
Also, what's your client interface? Sounds like it's a synchronous webservice or something?
In that case, if I were you I would use a HashedWheelTimer as a singleton... this mechanism should work great with your requirement (here is an implementation). However, this unfortunately seems to conflict with the ban on threading AND the ban on singletons in the EJB spec. In reality, though, there really isn't a problem if you do this. See this discussion for example. We have also used the singleton pattern in our EJB app, which ran on JBoss. However, if this isn't a viable choice, then I might look at isolating the processing in its own JVM by defining a new web service (and deploying it in a web container or something), and calling that service from the EJB app. This would obviously incur a performance hit, though, and now you would have another whole new app.
With Bean Managed Transactions, the timeout for a specific transaction can be specified using the UserTransaction interface:
Modify the timeout value that is associated with transactions started by the current thread with the begin method.
void setTransactionTimeout(int seconds) throws SystemException
The transaction will time out after the specified number of seconds and may not be propagated further. If the exception is not thrown implicitly, you can throw it explicitly based on the result.
It will return a result on successful completion within the specified time.
You can use this with stateless session beans, so there should not be a performance issue.
It's part of the EJB standard, so implementation will not be an issue.
With a little bit of work, it should work fine in the given scenario.
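A sketch of what that could look like in a bean-managed-transaction session bean; runJob() and the String result are placeholders for the actual work:

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class TimedWorkBean {

    @Resource
    private UserTransaction userTransaction;

    public String doWork() throws Exception {
        // Must be called before begin(): it applies to transactions started afterwards.
        userTransaction.setTransactionTimeout(10);
        userTransaction.begin();
        try {
            String result = runJob(); // the actual work; rolled back on timeout
            userTransaction.commit();
            return result;
        } catch (Exception e) {
            userTransaction.rollback();
            throw e; // surfaces the timeout (or any other failure) to the caller
        }
    }

    private String runJob() {
        // ... job logic elided ...
        return "done";
    }
}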
Edit: You can also use server-specific properties to manage the transaction timeout.
JBoss: The annotation @TransactionTimeout(100) can be applied at either class or method level.
WebLogic: Specify the parameters in weblogic-ejb-jar.xml:
<transaction-descriptor>
    <trans-timeout-seconds>100</trans-timeout-seconds>
</transaction-descriptor>
GlassFish: Use the optional cmt-timeout-in-seconds element in sun-ejb-jar.xml.
Stick the process and its timeout thread into a class annotated with @WebService, put that class into a WAR, then invoke the web service from your EJB.
WARs don't have the same limitations or live under the same contract that EJBs do, so they can safely run threads.
Yes, I consider this a "hack", but it meets the letter of the requirements, and it's portable.
You can create threads using the commonj WorkManager. There are implementations built into WebSphere and Weblogic as they proposed the standard, but you can also find implementations for other appservers as well.
Basically, the WorkManager allows you to create managed threads inside the container, much like using an Executor in regular Java. Your only other alternative would be to use MDB's, but that would be a 'heavier' solution.
Since I don't know your actual platform, you will have to google commonj with your platform yourself 8-)
Here is a non IBM or Oracle solution.
Note: This is not an actual standard, but it is widely available for different platforms and should suit your purposes nicely.
For EJBs, there is a concept of "Container Managed Transactions". By specifying @TransactionAttribute on your bean, or on a specific method, the container will create a transaction whenever the method(s) are invoked. If the execution of the code takes longer than the transaction threshold, the container will throw an exception. If the call finishes under the transaction threshold, it will return as usual. You can catch the exception in your calling code and handle it appropriately.
For more on container managed transactions, check out: http://java.sun.com/j2ee/tutorial/1_3-fcs/doc/Transaction3.html and http://download.oracle.com/javaee/5/tutorial/doc/bncij.html
You could use @Timeout. Something like:
@Stateless
public class TimedBean {

    @Resource
    private TimerService timerService;

    private static final AtomicInteger counter = new AtomicInteger(0);
    // a concurrent map is needed here: the worker thread and the container's
    // timeout callback thread both access it
    private static final Map<Integer, AtomicBoolean> canIRunStore = new ConcurrentHashMap<>();

    public void doSomething() {
        Integer myId = counter.getAndIncrement();
        AtomicBoolean canIRun = new AtomicBoolean(true);
        canIRunStore.put(myId, canIRun);
        timerService.createTimer(1000, 0, myId);
        while (canIRun.get() /* && some other condition */) {
            // do my work ... until timeout ...
        }
    }

    @Timeout
    @PermitAll
    public void timeout(Timer timer) {
        Integer expiredId = (Integer) timer.getInfo();
        AtomicBoolean canHeRun = canIRunStore.get(expiredId);
        canIRunStore.remove(expiredId);
        canHeRun.set(false);
    }
}
