We have a situation where we want to perform some tasks at the end of a request or transaction. More specifically, we need to collect some data during that request and at the end use that data to do some automatic database updates.
This process should be as transparent as possible, i.e. users of the EJBs that require this should not have to worry about it.
In addition we can't control the exact call stack since there are multiple entry points to the process.
To achieve our goal, we're currently considering the following concept:
1. Certain low-level operations (that are always called) fire a CDI event.
2. A stateless EJB listens for those events and, upon receiving one, collects the data and stores it into a scoped CDI bean (either request scope or conversation scope would be fine); see the sketch after this list.
3. At the end of the request another event is fired, which causes the data in the scoped CDI bean to be processed.
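To make steps 1 and 2 concrete, here is a minimal sketch of what they look like for us (class and event names are simplified placeholders, each class in its own file):

import java.util.ArrayList;
import java.util.List;
import javax.ejb.Stateless;
import javax.enterprise.context.RequestScoped;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

// Fired by the low-level operations (step 1), e.g. via an injected Event<DataCollectedEvent>.
public class DataCollectedEvent {
    private final String data;
    public DataCollectedEvent(String data) { this.data = data; }
    public String getData() { return data; }
}

// Request-scoped holder for everything collected during the request.
@RequestScoped
public class CollectedData {
    private final List<String> entries = new ArrayList<String>();
    public void add(String entry) { entries.add(entry); }
    public List<String> getEntries() { return entries; }
}

// Stateless EJB observing the events and filling the holder (step 2).
@Stateless
public class DataCollector {

    @Inject
    private CollectedData collectedData;

    public void onDataCollected(@Observes DataCollectedEvent event) {
        collectedData.add(event.getData());
    }
}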
So far, we managed to get steps 1 and 2 up and running.
However, the problem is step 3:
As I already said, there are multiple entry points to the process (originating from web requests, scheduled jobs or remote calls) and thus we thought of the following approach:
3a. A CDI extension scans all beans and adds an annotation to every EJB.
3b. An interceptor is registered for the added annotation and thus on every call to an EJB method the interceptor is invoked.
3c. The first invocation of that interceptor will fire an event after the invoked method has returned.
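For illustration, the interceptor from steps 3b/3c currently looks roughly like this (CollectsData is the binding annotation added by the extension, RequestEndedEvent is our own marker event; the open question is marked in the code):

import javax.enterprise.event.Event;
import javax.inject.Inject;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

@Interceptor
@CollectsData // interceptor binding added to every EJB by the CDI extension (step 3a)
public class RequestEndInterceptor {

    @Inject
    private Event<RequestEndedEvent> requestEnded;

    @AroundInvoke
    public Object aroundInvoke(InvocationContext ctx) throws Exception {
        // Step 3c: only the outermost (first) invocation should fire the event...
        boolean firstInvocation = isFirstInvocation();
        Object result = ctx.proceed();
        if (firstInvocation) {
            requestEnded.fire(new RequestEndedEvent());
        }
        return result;
    }

    private boolean isFirstInvocation() {
        // ...but how do we detect that? This is exactly the problem described below.
        throw new UnsupportedOperationException("open question");
    }
}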
And here's the problem (again in the 3rd step :) ):
How would the interceptor know whether it was the first invocation or not?
We thought of the following, but none of it has worked so far:
get a request/conversation scoped bean
fails because no context is active
get the request/conversation context and activate it (which should then mark the first invocation, since in subsequent ones the context would already be active)
the system created another request context, so WELD ended up with at least two active request contexts and complained about it
the conversation context either stayed active or was deactivated prematurely (we couldn't yet figure out why)
start a long-running conversation and end it after the invocation
failed because there was no active request context :(
Another option we didn't try yet but which seems to be discouraged:
use a ThreadLocal to either store some context data or at least to use invocation context propagation as described here: http://blog.dblevins.com/2009/08/pattern-invocationcontext-propagation.html
However, AFAIK there's no guarantee that the request will be handled entirely by the same thread, so wouldn't even invocation context propagation break when the container decides to switch to another thread?
So, thanks to all who endured with me and read through all that lengthy description.
Any ideas of how to solve this are welcome.
Btw, here are some of the software components/standards we're using (and which we can't switch):
JBoss 7.1.0.Final (along with WELD and thus CDI 1.0)
EJB 3.1
Hibernate 3.6.9 (can't switch to 4.0.0 yet)
UPDATE:
With the suggestions you gave so far, we came up with the following solution:
use a request scoped object to store the data in
the first time an object is stored in that object an event is fired
a listener is invoked before the end of the transaction (using @Observes(during=BEFORE_COMPLETION) - thanks, @bkail)
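In code, the listener from the last step looks roughly like this (FirstDataStoredEvent is the event fired the first time something is stored; CollectedData is the request-scoped holder from the sketch above):

import javax.ejb.Stateless;
import javax.enterprise.event.Observes;
import javax.enterprise.event.TransactionPhase;
import javax.inject.Inject;

@Stateless
public class EndOfTransactionProcessor {

    @Inject
    private CollectedData collectedData;

    // Delivered by the container just before the surrounding transaction completes.
    public void process(@Observes(during = TransactionPhase.BEFORE_COMPLETION) FirstDataStoredEvent event) {
        // perform the automatic database updates based on collectedData here
    }
}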
This works so far but there's still one problem:
We also have MBeans that are managed by CDI and automatically registered to the MBean server. Thus those MBeans can get EJB references injected.
However, when we try to call an MBean method which in turn calls an EJB and thus causes the above process to start, we get a ContextNotActiveException. This indicates that within JBoss the request context is not started when executing an MBean method.
This also doesn't work when using JNDI lookups to get the service instead of DI.
Any ideas on this as well?
Update 2:
Well, seems like we got it running now.
Basically we did what I described in the previous update and solved the problem with the context not being active by creating our own scope and context (which is activated the first time an EJB method is called and deactivated when the corresponding interceptor finishes).
Normally we should have been able to do the same with request scope (at least if we didn't miss anything in the spec), but due to a bug in JBoss 7.1 there is not always an active request context when calling EJBs from MBeans or scheduled jobs (which do a JNDI lookup).
In the interceptor we could try to get an active context and on failure activate one of those present in the bean manager (most likely EjbRequestContext in that case) but despite our tests we'd rather not count on that working in every case.
A custom scope, however, should be independent from any JBoss scope and thus should not interfere here.
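For reference, the skeleton of our custom scope looks roughly like this (heavily simplified; a thread-local store is enough, assuming the call stays on one thread between activate() and deactivate(), and the context is registered via AfterBeanDiscovery#addContext() in the extension). The interceptor calls activate() before the first EJB invocation and deactivate() in a finally block:

import java.lang.annotation.Annotation;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.HashMap;
import java.util.Map;
import javax.enterprise.context.NormalScope;
import javax.enterprise.context.spi.Context;
import javax.enterprise.context.spi.Contextual;
import javax.enterprise.context.spi.CreationalContext;

// The scope annotation; the data-holder bean is annotated with it instead of @RequestScoped.
@NormalScope
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE, ElementType.METHOD, ElementType.FIELD })
public @interface ProcessingScoped {
}

// The context backing the scope.
@SuppressWarnings("unchecked")
public class ProcessingContext implements Context {

    private static final ThreadLocal<Map<Contextual<?>, Object>> store = new ThreadLocal<Map<Contextual<?>, Object>>();

    public void activate() {
        store.set(new HashMap<Contextual<?>, Object>());
    }

    public void deactivate() {
        // (destruction of the stored bean instances omitted for brevity)
        store.remove();
    }

    public Class<? extends Annotation> getScope() {
        return ProcessingScoped.class;
    }

    public <T> T get(Contextual<T> contextual, CreationalContext<T> creationalContext) {
        Map<Contextual<?>, Object> beans = store.get();
        T instance = (T) beans.get(contextual);
        if (instance == null) {
            instance = contextual.create(creationalContext);
            beans.put(contextual, instance);
        }
        return instance;
    }

    public <T> T get(Contextual<T> contextual) {
        Map<Contextual<?>, Object> beans = store.get();
        return beans == null ? null : (T) beans.get(contextual);
    }

    public boolean isActive() {
        return store.get() != null;
    }
}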
Thanks to all who answered/commented.
So there's one last problem though: whose answer should I accept, as you all helped us head in the right direction? - I'll try to solve that myself and attribute those points to jan - he's got the fewest :)
Do the job in a method which is annotated with @PreDestroy.
@Named
@RequestScoped
public class Foo {

    @PreDestroy
    public void requestDestroyed() {
        // Here.
    }

}
It's invoked right before the bean instance is destroyed by the container.
What you're looking for is SessionSynchronization. This lets an EJB tie in to the transaction lifecycle and be notified when transactions are being completed.
Note that I am being specific about transactions; you mention "requests and transactions" and I don't know if you specifically mean EJB transactions or something tied to your application.
But I'm talking about EJB Transactions.
The downside is that it is only invoked when the specific EJB is invoked, not to "all" transactions in general. But that may well be appropriate anyway.
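A quick sketch of what implementing it looks like (the bean name is just an example; note that only stateful session beans with container-managed transactions can implement SessionSynchronization):

import javax.ejb.SessionSynchronization;
import javax.ejb.Stateful;

@Stateful
public class CollectingBean implements SessionSynchronization {

    public void afterBegin() {
        // a transaction has just started for this bean
    }

    public void beforeCompletion() {
        // last chance to do work inside the transaction, e.g. flush the collected data
    }

    public void afterCompletion(boolean committed) {
        // the transaction is over; 'committed' tells you whether it succeeded
    }

    // ... business methods ...
}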
Finally, be careful in these interim callback areas -- I've had weird things happen with transactional behavior at these lifecycle methods. In the end, I ended up putting stuff into a local, memory-based queue that another thread reaped for committing to JMS or whatever. The downside is that they were tied to the transaction at hand; the upside was that they actually worked.
Phew, that's a complex scenario :)
From how I understand what you've tried so far you are pretty advanced with the CDI techniques - there is nothing big you are missing.
I would say that you should be able to activate a conversation context at the entry point (you've probably seen the relevant documentation?) and work with it for the whole processing. It might actually be worthwhile to consider implementing a scope of your own. I did that once in a distantly related scenario where we could not tell whether we had been invoked by an HTTP request or by EJB remoting.
But to be honest, all this feels far too complex. It's a rather fragile construct of interceptors notifying each other with events which all in all seems just too easy to break.
Can it be that there is another approach which better fits your needs? E.g. you might try to hook into the transaction management itself and execute your data accumulation from there?
Related
I was trying to understand how the Spring Listener Container handles the transactions within a retry context.
I configured something like this:
<rabbit:listener-container connection-factory="connectionFactory"
                           transaction-manager="chainedTransactionManager"
                           channel-transacted="true"
                           advice-chain="retryAdvice">
    <rabbit:listener ref="myMessageProcessor" queue-names="test.messages" method="handleMessage"/>
</rabbit:listener-container>
And I was hoping that the transaction would be contained within the retry, such that if my transaction fails for any reason, I can decide to retry for specific exceptions and for others just send the message to my DLQ.
However, I was surprised to notice that the retry code is contained within the transaction code and not the other way around, which seemed more sensible.
In other words, Spring listener seems to do:
doIntransaction -> doWithRetry -> invokeMyCode
I was hoping it would be like this:
doWithRetry -> doIntransaction -> invokeMyCode
My plan was to use a ChainedTransactionManager containing both a JpaTransactionManager and a RabbitTransactionManager here to handle both, the acknowledgement of the messages I read, and the commit of the messages I sent during this transaction and retry my entire transaction depending on certain conditions, but this does not seem to work that way.
Not only that, but after an exception occurs within a transaction, the context might become useless. I need a new transaction for the retry to make sense.
And there is the problem that any exceptions happening during the commit/rollback phase won't be retried, because they occur outside the retry context. I assume they're only retried depending on the ErrorHandler configuration, and not based on my advised code. Unfortunately the ErrorHandler does not have a back off policy or the useful RetryContext details counting the number of times I have retried a transaction.
What would be the right or most recommended way to configure a listener with a transaction manager and retry functionality like in this case?
I haven't tried it, but you should be able to achieve your desired behavior by removing the transaction manager from the container and adding a normal Spring TransactionInterceptor to the advice chain (after the retry advice).
When the container has the transaction manager, you are telling it to start the transaction before invoking the listener (which is wrapped in the advice chain).
However, you might get some noise in the log, since the container will probably still try to ack/commit the delivery because it "thinks" it's using local transactions (whereas the interceptor would have already done that, if it has the RabbitTransactionManager configured).
As long as you don't include the RabbitTransactionManager in the chainedTransactionManager this won't happen though; the container will simply use a local transaction.
If you include the RTM, you might need to use manual acks or add a dummy transaction manager to the container to prevent that.
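Untested, but in Java configuration that suggestion would look something like this sketch (RetryThenTxConfig and the bean names are placeholders, and it assumes your JpaTransactionManager is available as its own bean; only the JPA manager goes into the TransactionInterceptor and the container gets no transaction manager at all):

import org.aopalliance.aop.Advice;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.adapter.MessageListenerAdapter;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.transaction.interceptor.MatchAlwaysTransactionAttributeSource;
import org.springframework.transaction.interceptor.TransactionInterceptor;

@Configuration
public class RetryThenTxConfig {

    @Bean
    public TransactionInterceptor jpaTransactionInterceptor(JpaTransactionManager jpaTransactionManager) {
        // Only the JPA transaction manager; the transacted channel still handles the Rabbit side.
        return new TransactionInterceptor(jpaTransactionManager, new MatchAlwaysTransactionAttributeSource());
    }

    @Bean
    public SimpleMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory,
            @Qualifier("retryAdvice") Advice retryAdvice,
            TransactionInterceptor jpaTransactionInterceptor,
            @Qualifier("myMessageProcessor") Object myMessageProcessor) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("test.messages");
        container.setChannelTransacted(true);
        // No transaction manager on the container: the retry advice now wraps the transaction interceptor.
        container.setAdviceChain(new Advice[] { retryAdvice, jpaTransactionInterceptor });
        container.setMessageListener(new MessageListenerAdapter(myMessageProcessor, "handleMessage"));
        return container;
    }
}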
Let us know how you make out; I can take a look tomorrow.
EDIT
As discussed below, using stateful retry is a simpler solution since the message is rejected and redelivered. But, you need a messageId header (or a custom key generator).
I have an ATG application running on JBoss as an app server. The request-scoped component (bean), say CartManager, has a method addToBag(...).
Since it has request scope, my understanding is that it's instantiated upon each request and the app server guarantees that only one thread has access to that instance.
We're experiencing a concurrency issues so I just want to rule out one possible explanation.
You are likely experiencing an issue with users double clicking on a button (quite common for the Add To Bag button). Within ATG there is a way to counter this and it is called the RepeatingRequestMonitor.
Essentially it keeps track of requests executing the current handler and either blocks or allows a subsequent request for the same handler.
In the shopping cart process it is already implemented in the PurchaseProcessFormHandler, so if you extend this particular FormHandler you can use its accessor methods.
I need to run some time-consuming task from a controller. To do it I have implemented an @Async method in my service so that the controller can return immediately (for example with a 202 Accepted status).
The problem is that the task needs access to some session-scoped beans. With this approach I am getting org.springframework.beans.factory.BeanCreationException: Error creating bean with name (...): Scope 'session' is not active for the current thread (...).
The result is the same when I manually create an ExecutorService instead of using @Async.
Is it possible to somehow make a worker thread attached to the current session?
EDIT
The purpose is to implement a bulk operation, providing a way to monitor the status of processing. Something like described in this answer: https://stackoverflow.com/a/28787774/718590
If I run it synchronously, there will be no indication of the status (how many items processed), and a request timeout may occur.
If I understand correctly, you want to be able to start long-running asynchronous processing from a Spring web application, follow its progress from the session that started it, and have the processing use beans contained in the session.
For a good separation of concerns, I would never have an asynchronous thread know about a session. The session is related to HTTP and can be destroyed at any time before the thread finishes (or, in race conditions, even begins) its processing.
IMHO, a correct design would be to create a class containing all the information shared between the web part and the asynchronous processing: the status (whatever it may be), the user that started the processing if that is relevant, and every other relevant piece of information. In your controller (or preferably in the service method called by the controller) you prepare an object of that class and pass it to the @Async method. Then, before returning, the controller stores the object in the session. That way:
the asynchronous processing has all the information it requires, even if the session is destroyed later. It does not need to know about the session; it only cares about its processing and updates its status
the session of the web application knows that the asynchronous processing is running, how it was started and what the current status is
It can be adapted to your real problem, but this should meet your requirements.
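A rough sketch of that design (BulkStatus, ReportService and BulkController are hypothetical names, each class in its own file; it assumes @EnableAsync or <task:annotation-driven/> is configured, and error handling is omitted):

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.http.HttpSession;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Controller;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.ResponseStatus;

// Plain object shared between the web side and the async side; no session access needed.
public class BulkStatus {
    private final AtomicInteger processed = new AtomicInteger();
    public void itemDone() { processed.incrementAndGet(); }
    public int getProcessed() { return processed.get(); }
}

@Service
public class ReportService {

    @Async
    public void processItems(List<String> items, BulkStatus status) {
        for (String item : items) {
            // ... the actual time-consuming work, using only what was passed in ...
            status.itemDone();
        }
    }
}

@Controller
public class BulkController {

    @Autowired
    private ReportService reportService;

    @RequestMapping(value = "/bulk", method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.ACCEPTED)
    public void start(@RequestBody List<String> items, HttpSession session) {
        BulkStatus status = new BulkStatus();
        session.setAttribute("bulkStatus", status); // the session only keeps a handle on the status
        reportService.processItems(items, status);  // the async thread never touches the session
    }

    @RequestMapping(value = "/bulk/status", method = RequestMethod.GET)
    @ResponseBody
    public int status(HttpSession session) {
        BulkStatus status = (BulkStatus) session.getAttribute("bulkStatus");
        return status == null ? 0 : status.getProcessed();
    }
}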
Suppose I let my customer reserve seats on a plane using a Stateful Session Bean. If the client explicitly calls my Remove method, all of his reservations will be cancelled and the bean is removed afterwards.
However, in case the client is idle for some time and the bean gets passivated, if the bean times out while passivated, it will be deleted without any of my methods being called. Hence, I'd be very grateful if someone could show me how I can make sure that the reservations are cancelled if the bean gets deleted. If I use the @PreDestroy annotation, will it solve this problem?
Best regards,
James Tran
It is quite possible for the @PreDestroy method to not be invoked. The EJB 3.1 specification explicitly states this:
4.6.3 Missed PreDestroy Calls
The Bean Provider cannot assume that the container will always invoke the PreDestroy lifecycle callback interceptor method(s) (or ejbRemove method) for a session bean instance. The following scenarios result in the PreDestroy lifecycle callback interceptor method(s) not being called for an instance:
• A crash of the EJB container.
• A system exception thrown from the instance’s method to the container.
• A timeout of client inactivity while the instance is in the passive state. The timeout is specified by the Deployer in an EJB container implementation-specific way.
The specification also details how resources may be removed if the @PreDestroy method is not invoked in such scenarios:
For example, if a shopping cart component is implemented as a session bean, and the session bean stores the shopping cart content in a database, the application should provide a program that runs periodically and removes “abandoned” shopping carts from the database.
In your case, it would depend on how you are storing the state of your reservations. If they are persisted in the database, then I would suggest employing the same approach as mandated in the specification. You could use the EJB Timer service to perform this activity periodically, or use a scheduler like Quartz. Note that it is imperative to distinguish between the contents of passivated session bean instances that no longer exist and those that will be made ready once again.
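For instance, a periodic cleanup with the EJB timer service could look like this rough sketch (the Reservation entity, its fields and the two-hour cutoff are hypothetical, purely for illustration):

import java.util.Date;
import java.util.concurrent.TimeUnit;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Singleton
public class AbandonedReservationCleaner {

    @PersistenceContext
    private EntityManager em;

    // Automatic calendar-based timer: runs at the start of every hour.
    @Schedule(hour = "*", minute = "0", persistent = false)
    public void removeAbandonedReservations() {
        Date cutoff = new Date(System.currentTimeMillis() - TimeUnit.HOURS.toMillis(2));
        em.createQuery("DELETE FROM Reservation r WHERE r.confirmed = false AND r.lastTouched < :cutoff")
          .setParameter("cutoff", cutoff)
          .executeUpdate();
    }
}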
A passivated bean will get destroyed on timeout and hence any method annotated with @PreDestroy will do what you are looking for.
While client A is active, A's instance of the stateful bean will not be shared with client B until A's instance is destroyed. See the diagram in this article for further reading.
Yes, it should. The method annotated with @PreDestroy will be called prior to bean removal (even if it times out in the passivated state).
I have the following situation. I have a job that:
It may time out after a given amount of time, and if that occurs it needs to throw an exception
If it does not time out, it will return a result
If this job returns a result, it must be returned as quickly as possible, because performance is very much an issue. Asynchronous solutions are hence off the table, and naturally tying up the system by hammering isn't an option either.
Lastly, the system has to conform to the EJB standard, so AFAIK using ordinary threads is not an option, as this is strictly forbidden.
Our current solution uses a thread that will throw an exception after having existed for a certain amount of time without being interrupted by an external process, but as this clearly breaks the EJB standard, we're trying to solve it with some other means.
Any ideas?
Edited to add: Naturally, a job which has timed out needs to be removed (or interrupted) as well.
Edited to add 2:
This issue doesn't seem to have any solution, because detecting a deadlock seems to be mostly impossible sticking to pure EJB3 standards. Since Enno Shioji's comments below reflect this, I'm setting his suggestion as the correct answer.
This is more like a request for clarification, but it's too long to fit as a comment..
I'm not sure how you are doing it right now, since from what you wrote, just using the request processing thread seems to be the way to go. Like this:
// Some web service method (synchronous)
public Result process(Blah blah) {
    try {
        // block on the request-processing thread until a result arrives or 10 seconds pass
        return getResult(10, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
        // No result within 10 seconds!
        throw new ServiceUnavailableException("blah");
    }
}
I'm not sure why you are creating threads at all. If you are forced to use threads because the getResult method doesn't time out at all, you would have a thread leak. If it times out after a longer time and thus you want to "shortcut" your reply to the user, that would be the only case in which I'd consider using a thread the way I imagine you are using it. This could result in threads piling up under load, and I'd strive to avoid such a situation.
Maybe you can post some code and let us know why you are creating threads in your service at all?
Also, what's your client interface? Sounds like it's a synchronous webservice or something?
In that case, if I were you I would use a HashedWheelTimer as a singleton... this mechanism should work great for your requirement (here is an implementation). However, this unfortunately seems to conflict with the ban on threading AND the ban on singletons in the EJB spec. In reality, though, there really isn't a problem if you do this. See this discussion for example. We have also used the singleton pattern in our EJB app, which used JBoss. However, if this isn't a viable choice then I might look at isolating the processing in its own JVM by defining a new web service (and deploying it in a web container or something), and calling that service from the EJB app. This would obviously incur a performance hit, and now you would have a whole new app.
With Bean-Managed Transactions, the timeout for a specific transaction can be specified by using the UserTransaction interface:
Modify the timeout value that is associated with transactions started by the current thread with the begin method.
void setTransactionTimeout(int seconds) throws SystemException
The transaction will time out after the specified number of seconds and may not be propagated further. If an exception is not thrown implicitly, you can throw one explicitly based on the result.
It will return a result on successful completion within the specified time.
You can use it with stateless session beans, so there should not be a performance issue.
It's part of the EJB standard, so implementing it will not be an issue.
With a little bit of a workaround it should work fine in the given scenario, as sketched below.
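A minimal sketch of the bean-managed-transaction approach (doWork() stands in for the actual job; exception handling kept simple):

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.Status;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class TimedJobBean {

    @Resource
    private UserTransaction utx;

    public String runWithTimeout() throws Exception {
        utx.setTransactionTimeout(10); // must be called before begin()
        utx.begin();
        boolean committed = false;
        try {
            String result = doWork();  // the actual long-running job
            utx.commit();              // fails if the timeout marked the transaction rollback-only
            committed = true;
            return result;
        } finally {
            if (!committed && utx.getStatus() != Status.STATUS_NO_TRANSACTION) {
                utx.rollback();        // clean up; callers can map this to a timeout exception
            }
        }
    }

    private String doWork() {
        return "done"; // placeholder for the real work
    }
}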
Edit: You can also use server-specific properties to manage the transaction timeout.
JBoss: The annotation @TransactionTimeout(100) can be applied at either the class or method level.
WebLogic: Specify the parameters in weblogic-ejb-jar.xml:
<transaction-descriptor>
    <trans-timeout-seconds>100</trans-timeout-seconds>
</transaction-descriptor>
GlassFish: Use the optional cmt-timeout-in-seconds element in sun-ejb-jar.xml.
Stick the process and its timeout thread into a class annotated with @WebService, put that class into a WAR, then invoke the web service from your EJB.
WARs don't have the same limitations or live under the same contract that EJBs do, so they can safely run threads.
Yes, I consider this a "hack", but it meets the letter of the requirements, and it's portable.
You can create threads using the commonj WorkManager. There are implementations built into WebSphere and WebLogic, as they proposed the standard, but you can also find implementations for other app servers as well.
Basically, the WorkManager allows you to create managed threads inside the container, much like using an Executor in regular Java. Your only other alternative would be to use MDBs, but that would be a 'heavier' solution.
Since I don't know your actual platform, you will have to google commonj with your platform yourself 8-)
Here is a non-IBM/Oracle solution.
Note: This is not an actual standard, but it is widely available for different platforms and should suit your purposes nicely.
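A rough sketch of what scheduling work through commonj looks like (the JNDI name is server-specific; adjust it for your platform):

import java.util.Collections;
import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkItem;
import commonj.work.WorkManager;

public class WorkManagerExample {

    public void submit() throws Exception {
        // Look up the container-managed WorkManager (the name varies per app server).
        WorkManager workManager =
                (WorkManager) new InitialContext().lookup("java:comp/env/wm/default");

        WorkItem item = workManager.schedule(new Work() {
            public void run() {
                // the actual task, executed on a container-managed thread
            }

            public boolean isDaemon() {
                return false; // short-lived work
            }

            public void release() {
                // the container asks the work to stop
            }
        });

        // Optionally wait for completion with a timeout (in milliseconds).
        workManager.waitForAll(Collections.singletonList(item), 10000);
    }
}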
For EJBs, there is a concept of "Container-Managed Transactions". By specifying @TransactionAttribute on your bean, or on a specific method, the container will create a transaction whenever the method(s) are invoked. If the execution of the code takes longer than the transaction timeout, the container will throw an exception. If the call finishes within the timeout, it will return as usual. You can catch the exception in your calling code and handle it appropriately.
For more on container managed transactions, check out: http://java.sun.com/j2ee/tutorial/1_3-fcs/doc/Transaction3.html and http://download.oracle.com/javaee/5/tutorial/doc/bncij.html
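A minimal sketch of that idea (bean and method names are made up; exactly how a timed-out call surfaces to the caller - typically wrapped in a javax.ejb.EJBException - depends on the container and on how the transaction timeout is configured):

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class CmtJobBean {

    // The container opens a new transaction around this method; if the work exceeds
    // the configured transaction timeout, the transaction is rolled back.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public String doJob() {
        // long-running work goes here
        return "result";
    }
}

// Caller side (e.g. another EJB or a servlet):
// try {
//     return cmtJobBean.doJob();
// } catch (javax.ejb.EJBException e) {
//     // treat as timed out / unavailable
// }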
You could use @Timeout. Something like:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import javax.annotation.Resource;
import javax.annotation.security.PermitAll;
import javax.ejb.Stateless;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerService;

@Stateless
public class TimedBean {

    @Resource
    private TimerService timerService;

    private static final AtomicInteger counter = new AtomicInteger(0);
    private static final Map<Integer, AtomicBoolean> canIRunStore = new ConcurrentHashMap<Integer, AtomicBoolean>();

    public void doSomething() {
        Integer myId = counter.getAndIncrement();
        AtomicBoolean canIRun = new AtomicBoolean(true);
        canIRunStore.put(myId, canIRun);
        // single-action timer: fires once after 1000 ms
        timerService.createTimer(1000, myId);

        while (canIRun.get() /* && some other condition */) {
            // do my work ... until timeout ...
        }
    }

    @Timeout
    @PermitAll
    public void timeout(Timer timer) {
        Integer expiredId = (Integer) timer.getInfo();
        AtomicBoolean canHeRun = canIRunStore.remove(expiredId);
        if (canHeRun != null) {
            canHeRun.set(false); // tells doSomething() to stop looping
        }
    }
}
}