I have two ManagedBeans.
Concerning my problem, they do the following:
First:
@ManagedBean
public class Provider {
    @Inject
    private Event<ProvideEvent> event;
    private static boolean handling = false;

    public void provide(@Observes ConsumeEvent consume) {
        if (!handling) {
            // provide some stuff
            event.fire(new ProvideEvent(stuff));
        }
    }
}
Second:
@ManagedBean
@SessionScoped
public class Consumer {
    @Inject
    private Event<ConsumeEvent> event;
    private static boolean handling = false;

    @PostConstruct
    public void initialize() {
        event.fire(new ConsumeEvent());
    }

    public void consume(@Observes ProvideEvent providedStuff) {
        if (!handling) {
            // use the provided stuff
        }
    }
}
This happens, when the website is called:
1. Consumer is instantiated.
2. Consumer fires the event.
3. Provider is instantiated.
4. provide() is called.
5. A NEW CONSUMER IS INSTANTIATED
6. consume() is called.
As you can see, I had to use a boolean "handling" flag to keep the application from looping infinitely.
Why is the container not using the instantiated SessionScoped ManagedBean? I thought SessionScoped ManagedBeans are like Singleton for the Session?
I guess I could work around this by:
A: Using static variables for the changed properties.
B: Implementing the Observer-Pattern manually.
But there has to be an easier way here!?
I believe the problem could be that you fire the event in the @PostConstruct method of your Consumer.
From the javadocs:
This method MUST be invoked before the class is put into service.
As far as I understand, that results in a race condition. The Provider is probably firing the second event before your Consumer instance finishes executing initialize() and the container puts it into service. Hence, it won't receive the event. I'm too inexperienced with Java EE to give good advice on how to prevent that race condition, though. I would probably work around it with an ugly SynchronousQueue as a meeting point.
Additional info: the default with @Observes is to create a new instance of the event receiver if none exists (is in service). That's why another Consumer is created. Use @Observes(notifyObserver = Reception.IF_EXISTS) to only notify existing instances that are in service.
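For illustration, a minimal sketch (not from the original post) of how the question's consume() observer could be declared with that setting; Observes and Reception come from javax.enterprise.event, other imports omitted as elsewhere in the post:
@ManagedBean
@SessionScoped
public class Consumer {

    public void consume(@Observes(notifyObserver = Reception.IF_EXISTS) ProvideEvent providedStuff) {
        // Only an existing, in-service Consumer instance is notified; the container
        // no longer instantiates a new Consumer just to deliver the event.
    }
}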
I thought SessionScoped ManagedBeans are like Singleton for the Session?
No, it just defines the lifetime of the object(s). It doesn't really enforce session-singleton behavior. The container would probably prefer the existing instance though, if it was in service at the time the second event is fired.
Related
I have a weird issue where a @TransactionalEventListener does not fire correctly or behave as expected when triggered by another @TransactionalEventListener.
The general flow is:
1. AccountService publishes an Event (to AccountEventListener)
2. AccountEventListener listens for the Event
3. It performs some processing and then publishes another Event (to MailEventListener)
4. MailEventListener listens for the Event and performs some processing
So here are the classes (excerpt).
public class AccountService {

    @Transactional
    public User createAccount(Form registrationForm) {
        // Some processing

        // Persist the entity
        this.accountRepository.save(userAccount);

        // Publish the Event
        this.applicationEventPublisher.publishEvent(new RegistrationEvent());

        return userAccount;
    }
}
public class AccountEventListener {

    @TransactionalEventListener
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public MailEvent onAccountCreated(RegistrationEvent registrationEvent) {
        // Some processing

        // Persist the entity
        this.accountRepository.save(userAccount);

        return new MailEvent();
    }
}
public class MailEventListener {

    private final MailService mailService;

    @Async
    @EventListener
    public void onAccountCreated(MailEvent mailEvent) {
        this.mailService.prepareAndSend(mailEvent);
    }
}
This code works, but my intention is to use @TransactionalEventListener in my MailEventListener class. However, the moment I change from @EventListener to @TransactionalEventListener in the MailEventListener class, the MailEvent no longer gets picked up.
public class MailEventListener {

    private final MailService mailService;

    @Async
    @TransactionalEventListener
    public void onAccountCreated(MailEvent mailEvent) {
        this.mailService.prepareAndSend(mailEvent);
    }
}
MailEventListener was never triggered. So I went to the Spring documentation, and it states that @Async @EventListener is not supported for an event that is published by the return value of another event listener. So I changed to using ApplicationEventPublisher in my AccountEventListener class.
public class AccountEventListener {

    @TransactionalEventListener
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void onAccountCreated(RegistrationEvent registrationEvent) {
        // Some processing

        this.accountRepository.save(userAccount);

        this.applicationEventPublisher.publishEvent(new MailEvent());
    }
}
Once I changed to the above, my MailEventListener picks up the event sent from AccountEventListener, but the webpage hangs when the form is submitted, throws an exception after a while, and then sends me about 9 copies of the same email to my email account.
I added some logging and found out that my AccountEventListener (this.accountRepository.save()) actually ran 9 times before hitting the exception, which I believe causes my MailEventListener to execute 9 times as well, and that is why I received 9 mails in my inbox.
Here are the logs on Pastebin.
I'm not sure why and what is causing it to run 9 times. There is no loop or anything in my methods, be it in AccountService, AccountEventListener, or MailEventListener.
Thanks!
So I went to the Spring documentation, and it states that @Async @EventListener is not supported for an event that is published by the return value of another event listener. So I changed to using ApplicationEventPublisher in my AccountEventListener class.
Your understanding is incorrect.
The documentation says:
This feature is not supported for asynchronous listeners.
It does not mean:
@Async @EventListener is not supported for an event that is published by the return value of another event listener.
It means:
This feature does not support events returned from an @Async @EventListener.
Your setup:
@Async
@TransactionalEventListener
public void onAccountCreated(MailEvent mailEvent) {
    this.mailService.prepareAndSend(mailEvent);
}
It does not work because, as stated in the documentation:
If the event is not published within the boundaries of a managed transaction, the event is discarded unless the fallbackExecution() flag is explicitly set. If a transaction is running, the event is processed according to its TransactionPhase.
If you use the debugger, you can see that when your event is returned from an event listener, it is published after the transaction has committed, hence the event is discarded.
So if you set fallbackExecution = true as stated in the documentation, your event will be listened to correctly:
@Async
@TransactionalEventListener(fallbackExecution = true)
public void onAccountCreated(MailEvent mailEvent) {
    this.mailService.prepareAndSend(mailEvent);
}
The repeated behavior looks like some retry behavior: the connections queue up, exhaust the pool, and throw the exception. Unless you provide minimal source code to reproduce the problem, I can't identify it.
Update
Reading your code, the root cause is clear now.
Look at your setup for POST /registerPublisherCommon:
1. MailPublisherCommonEvent and AccountPublisherCommonEvent are subevents of BaseEvent.
2. createUserAccountPublisherCommon publishes an event of type AccountPublisherCommonEvent.
3. MailPublisherCommonEventListener is registered to handle MailPublisherCommonEvent.
4. AccountPublisherCommonEventListener is registered to handle BaseEvent and ALL SUB-EVENTS of it.
5. AccountPublisherCommonEventListener also publishes MailPublisherCommonEvent (which is also a BaseEvent).
Reading 4 and 5, you can see the root cause: AccountPublisherCommonEventListener publishes MailPublisherCommonEvent, which is also handled by itself, hence the infinite event processing occurs.
To resolve it, simply narrow down the type of event it can handle like you did.
Note
Your setup for MailPublisherCommonEvent works regardless of the fallbackExecution flag because you're publishing it INSIDE A TRANSACTION, not OUTSIDE A TRANSACTION (by returning it from an event listener) as you specified in your question.
For what it's worth, I found out what is causing the looping and how to resolve it, but I still cannot understand why it happens this way.
And correct me if I'm wrong, but setting fallbackExecution = true isn't really the answer to the issue.
Based on the Spring documentation, the event is processed according to its TransactionPhase. I had @Transactional(propagation = Propagation.REQUIRES_NEW) in my AccountEventListener class, which should be a transaction by itself, and MailEventListener should only execute at the phase that is, by default, AFTER_COMMIT for @TransactionalEventListener.
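For reference, a short sketch (not from the original post; MailEvent is the event class from the question) that spells out the default phase explicitly:
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

public class MailEventListenerExplicitPhase {

    // Equivalent to a bare @TransactionalEventListener: the listener is bound
    // to the AFTER_COMMIT phase of the transaction that published MailEvent.
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onAccountCreated(MailEvent mailEvent) {
        // runs only after that transaction commits successfully
    }
}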
I set up a Git repository to reproduce the issue, and doing so allowed me to discover what really went wrong. Having said that, I still do not understand the root cause of it.
Before I do, there are some things I am not 100% sure about; it's just my guess/understanding at this moment.
As mentioned in the Spring Documentation,
If the event is not published within the boundaries of a managed transaction, the event is discarded unless the fallbackExecution() flag is explicitly set. If a transaction is running, the event is processed according to its TransactionPhase.
My guess as to why the MailEventListener class did not pick up the event when using the event as the return type (letting Spring publish it automatically) is that it is published outside the boundaries of a managed transaction. That is why, if you set fallbackExecution = true in MailEventListener, it will run: it no longer matters whether it is within a transaction or not.
Note: the classes mentioned above are taken from my initial post. The classes below are named slightly differently but are essentially the same, just with different names.
Now, back to the point where I said I found the answer as to why it causes the loop.
Basically, it happens when the parameter declared on the listener is the BaseEvent.
So assuming that I have the following classes:
public class BaseEvent {

    private final User userAccount;

    public BaseEvent(User userAccount) {
        this.userAccount = userAccount;
    }

    public User getUserAccount() {
        return userAccount;
    }
}

public class AccountPublisherCommonEvent extends BaseEvent {
    public AccountPublisherCommonEvent(User userAccount) {
        super(userAccount);
    }
}

public class MailPublisherCommonEvent extends BaseEvent {
    public MailPublisherCommonEvent(User userAccount) {
        super(userAccount);
    }
}
And the listener classes (notice that the parameter is the BaseEvent):
public class AccountPublisherCommonEventListener {

    private final AccountRepository accountRepository;
    private final ApplicationEventPublisher eventPublisher;

    // Notice that the parameter is the BaseEvent
    @TransactionalEventListener
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void onAccountPublisherCommonEvent(BaseEvent accountEvent) {
        User userAccount = accountEvent.getUserAccount();
        userAccount.setUserFirstName("common");
        this.accountRepository.save(userAccount);
        this.eventPublisher.publishEvent(new MailPublisherCommonEvent(userAccount));
    }
}
public class MailPublisherCommonEventListener {

    @Async
    @TransactionalEventListener
    public void onMailPublisherCommonEvent(MailPublisherCommonEvent mailEvent) {
        log.info("Sending common email ...");
    }
}
Basically, if the setup of the listener is as above, you enter a loop and hit an exception, as mentioned by the previous poster:
The repeated behavior looks like some retry behavior: the connections queue up, exhaust the pool, and throw the exception.
And to resolve the issue, simply define the event classes to listen for on the annotation (notice the addition of ({AccountPublisherCommonEvent.class})):
public class AccountPublisherCommonEventListener {

    private final AccountRepository accountRepository;
    private final ApplicationEventPublisher eventPublisher;

    // Notice the addition of ({AccountPublisherCommonEvent.class})
    @TransactionalEventListener({AccountPublisherCommonEvent.class})
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void onAccountPublisherCommonEvent(BaseEvent accountEvent) {
        User userAccount = accountEvent.getUserAccount();
        userAccount.setUserFirstName("common");
        this.accountRepository.save(userAccount);
        this.eventPublisher.publishEvent(new MailPublisherCommonEvent(userAccount));
    }
}
An alternative would be changing the parameter type to the actual event class instead of the BaseEvent class, I suppose. No changes are required to the MailPublisherCommonEventListener.
By doing so, it no longer loops or hits the exception, and the behavior is what I wanted and expected.
I would appreciate it if anyone could explain exactly why placing the BaseEvent as the parameter causes the looping to occur. Here's the link to the Git repo for the proof of concept. Hope I'm making some sense here.
Thank you.
In the official documentation, I read an article about Providers for lazy loading. However, I can't understand why the code below delays creation, because I can't find any annotation or code that corresponds to lazy loading.
Here is the code:
public class DatabaseTransactionLog implements TransactionLog {

    private final Provider<Connection> connectionProvider;

    @Inject
    public DatabaseTransactionLog(Provider<Connection> connectionProvider) {
        this.connectionProvider = connectionProvider;
    }

    public void logChargeResult(ChargeResult result) {
        /* only write failed charges to the database */
        if (!result.wasSuccessful()) {
            Connection connection = connectionProvider.get();
        }
    }
}
Where in the world can we see the special point that causes the delay of loading?
Creating a connection may be expensive, and it may not always be needed. Therefore, rather than creating a connection at injection time, the Guice framework allows the injection of a Provider, which will create the dependency when its get() method is called.
The delay is in the way you call provider.get(), and it's delayed relative to the time the constructors are called for each dependency. In the example you have, the constructor for DatabaseTransactionLog gets called, but no connection is created at that time. A Connection is only created when the method logChargeResult is called (because of the provider.get() call in it).
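To make the call-time behaviour concrete, here is a minimal, hypothetical sketch (ExpensiveConnection and LazyDemo are invented stand-ins, not the Connection from the docs) using com.google.inject.Provider directly:
import com.google.inject.Provider;

class ExpensiveConnection {
    ExpensiveConnection() {
        // In a real application this is where the costly work would happen.
        System.out.println("ExpensiveConnection created");
    }
}

public class LazyDemo {
    public static void main(String[] args) {
        // The provider only captures HOW to create the dependency;
        // no ExpensiveConnection exists at this point.
        Provider<ExpensiveConnection> provider = ExpensiveConnection::new;
        System.out.println("Provider available, nothing created yet");

        // The expensive object comes into existence only here, at get() time.
        ExpensiveConnection connection = provider.get();
    }
}
The injected Provider in the docs example behaves the same way: DatabaseTransactionLog only stores the provider in its constructor, and a Connection only comes into existence at the connectionProvider.get() call inside logChargeResult().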
What happens when a LockType.WRITE method in a singleton, container-managed session bean, which is LockType.READ at class level, calls another method within the same bean that is LockType.READ?
@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
@Lock(LockType.READ)
public class EmployeBean implements Employee {

    @Lock(LockType.WRITE)
    public Employee update() {
        //update
    }

    public void calculate() {
        //calculate and set
    }
}
With the above bean, is it correct to have an implementation like this? What happens when update() is being executed and, at the same time, some other service calls calculate()? Will the second call wait until update() finishes, or does calculate() execute in parallel? I believe that if it does run in parallel, there is a high chance of corrupting the data or ending up with a data mismatch.
The calculate() method could be made private and used only from a WRITE-protected method. That way it is ensured that there cannot be a mismatch because of concurrent requests, as sketched below.
I wanted to know the impact and follow the correct approach for handling concurrent requests in a case like the above.
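A minimal sketch of that arrangement (following the suggestion above; the Employee interface and the method bodies are placeholders from the question):
import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;

@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
@Lock(LockType.READ)
public class EmployeBean implements Employee {

    @Lock(LockType.WRITE)
    public Employee update() {
        // ... update state ...
        calculate(); // runs in the same thread, inside the WRITE lock held by update()
        return null; // placeholder
    }

    // Private, so it is never exposed as a business method with its own lock;
    // it only executes while a caller already holds the WRITE lock.
    private void calculate() {
        // ... calculate and set ...
    }
}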
I have a simple JSP page that hits a servlet. In the servlet I call a method from another class, and in this method I declare a static variable and set a value to it. Then the servlet's task is over, so control goes back to the JSP page (or to a page that I forward the request and response to).
Is what just happened termed a session?
The value set to that static variable remains the same for all the sessions that come next! Why is this happening? Didn't the earlier session end? If it has ended, then why does the value I set for the static variable still remain in my subsequent sessions? Please correct me if I am wrong. Help me to learn! Stack Overflow has never let me down! Thanks in advance.
Static fields in a class will live until the class itself is unloaded and garbage collected. So static fields in a servlet will not only live across all the sessions but across the whole application, in this case until the web application is undeployed.
In fact, it is not wise to have any field in a servlet unless that field cannot be modified after being initialized, or it is injected by the container, like an EJB or a CDI bean. This is because a single servlet instance is used to serve several requests made to the server, so even if you have a non-static field in your servlet and you update it, its value can be modified by two or more requests happening at the same time. Try to keep variables in the shortest possible scope, for example inside a method only.
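A small hypothetical example of the pitfall described above (CounterServlet and the hits field are invented for illustration):
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CounterServlet extends HttpServlet {

    private static int hits = 0; // shared by ALL requests and ALL sessions until undeploy

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        hits++; // not thread-safe: concurrent requests may interleave this update
        int localValue = hits; // local variable: lives only for this single request
        resp.getWriter().println("Hits since deployment: " + localValue);
    }
}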
More info:
How do servlets work? Instantiation, sessions, shared variables and multithreading
From the comments, it looks like your real problem is a design to support synchronization across several threads. A better option would be creating an object instance that will be shared among your threads, then using a final non-static field as the synchronization point:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class MyClass {
    final Object lock = new Object();
    //other fields in the class...
}

class Multijobs {

    class Job implements Runnable {
        MyClass myClass;

        public Job(MyClass myClass) {
            this.myClass = myClass;
        }

        @Override
        public void run() {
            //handle the job here...
            //using the synchronization point
            synchronized (myClass.lock) {
            }
        }
    }

    static final int NUM_THREADS = 10;

    public void executeSeveralJobs() {
        ExecutorService executorService = Executors.newFixedThreadPool(NUM_THREADS);
        MyClass myClass = new MyClass();
        executorService.execute(new Job(myClass));
        executorService.execute(new Job(myClass));
        //initialize the jobs and add them to the ExecutorService
        //...
        executorService.shutdown();
        //...
    }
}
I am facing a problem converting my state pattern from plain Java to Spring DI, since I am new to Spring.
I made a project using the state pattern, but I took the approach that every state knows its successor states, not the context class.
The context class has a field currentState of type IState, and it has a method setState(IState state).
IState has one method, goNext(Context context).
In the context class I have a while(keepOn) loop; keepOn is true and becomes false in ExitState to stop processing. In this loop I call currentState.goNext().
Each state makes some database transactions and web service calls and, depending on the result, sets the next state using context.setState(new StateFour()), for example.
The first state is set by the client after creating the context.
Code sample:
public interface IState {
    public void goNext(Context context);
}

public class StateOne implements IState {

    public void goNext(Context context) {
        // do some logic
        if (user.getTitle().equals("manager")) {
            context.setState(new StateThree());
        } else if (user.getTitle().equals("teamLead")) {
            context.setState(new StateTwo());
        } else {
            context.setState(new ExitState());
        }
    }
}

public class Context {

    private boolean keepOn = true;
    private IState currentState;

    public void setState(IState state) {
        currentState = state;
    }

    // the loop described above, wrapped in a method (method name is illustrative)
    public void process() {
        while (keepOn) {
            currentState.goNext(this);
        }
    }
}
Now I am trying to use annotation-based Spring DI. The problem I am facing is that the context would annotate the currentState field with @Autowired, but I need the Spring container to apply the same logic: if I am in state one and the "if" branch succeeds, inject state three; in the "else if" branch, inject state two; otherwise inject ExitState.
If I use @Qualifier(value = "stateOne"), it specifies only the first state implementing the interface; for the other states, which I set depending on the situation, I don't know how to specify them in Spring.
Also, org.springframework.core.Ordered needs the order of the beans specified in advance, but I don't know the values I will receive from the database or web service in advance; it has to be decided at runtime.
So is it possible to replace this plain Java with Spring DI, and how?
Thanks in advance for any help, and sorry for the length.
You should use ApplicationContext. Example below:
// Inject the application context into your bean
@Autowired
ApplicationContext applicationContext;

// Get a bean from the context (equivalent to @Autowired)
applicationContext.getBean(StateThree.class);
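Applied to the question's StateOne, that suggestion might look roughly like this (a sketch only; IState, Context, User, and the state classes are assumed to exist as in the question, and loadUser() is an invented placeholder for however the state obtains the user):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;

@Component
public class StateOne implements IState {

    @Autowired
    private ApplicationContext applicationContext;

    @Override
    public void goNext(Context context) {
        // do some logic, e.g. load the current user as in the question
        User user = loadUser();

        // the next state is looked up from the container instead of created with "new",
        // so each state can itself be a Spring bean with injected dependencies
        if (user.getTitle().equals("manager")) {
            context.setState(applicationContext.getBean(StateThree.class));
        } else if (user.getTitle().equals("teamLead")) {
            context.setState(applicationContext.getBean(StateTwo.class));
        } else {
            context.setState(applicationContext.getBean(ExitState.class));
        }
    }

    private User loadUser() {
        // however the question's code obtains the user (database / web service call)
        return null; // placeholder
    }
}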
The most versatile way to autowire the state is by registering a resolvable dependency with a ConfigurableListableBeanFactory. As the dependency you could drop in your own implementation of org.springframework.beans.factory.ObjectFactory<T>, which gets the current user and creates/fetches the state to be injected.
This is exactly what happens when you, for instance, autowire a field of type HttpServletRequest. A RequestObjectFactory will get the current request and inject it using this implementation:
// org.springframework.web.context.support.WebApplicationContextUtils
private static class RequestObjectFactory implements ObjectFactory<ServletRequest>, Serializable {

    @Override
    public ServletRequest getObject() {
        return currentRequestAttributes().getRequest();
    }

    @Override
    public String toString() {
        return "Current HttpServletRequest";
    }
}
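A rough sketch of applying the same approach to the question's states (CurrentStateObjectFactory and StateWiringConfig are invented names; IState and StateOne are assumed from the question):
import java.io.Serializable;
import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class StateWiringConfig {

    // Mirrors the RequestObjectFactory above: because the factory is Serializable and
    // IState is an interface, autowired IState fields receive a proxy that calls
    // getObject() on each access, so the state can be chosen at use time.
    static class CurrentStateObjectFactory implements ObjectFactory<IState>, Serializable {
        @Override
        public IState getObject() {
            // inspect the current user / database result and return the matching state
            return new StateOne(); // placeholder
        }
    }

    @Bean
    static BeanFactoryPostProcessor stateResolvableDependency() {
        return (ConfigurableListableBeanFactory beanFactory) ->
                beanFactory.registerResolvableDependency(IState.class, new CurrentStateObjectFactory());
    }
}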