Providers for lazy loading in Guice - java

In the official documentation, I read about Providers for lazy loading. However, I can't understand why the code below delays the creation of a dependency, because I can't find any annotation or code that corresponds to the lazy loading.
Here is the code:
public class DatabaseTransactionLog implements TransactionLog {
    private final Provider<Connection> connectionProvider;

    @Inject
    public DatabaseTransactionLog(Provider<Connection> connectionProvider) {
        this.connectionProvider = connectionProvider;
    }

    public void logChargeResult(ChargeResult result) {
        /* only write failed charges to the database */
        if (!result.wasSuccessful()) {
            Connection connection = connectionProvider.get();
        }
    }
}
Where exactly is the special point that causes the delayed loading?

Creating a connection may be expensive, and it may not always be needed. Therefore, rather than creating a connection at injection time, Guice allows the injection of a Provider, which creates the dependency only when its get() method is called.
The delay lies in when you call provider.get(), and it is delayed relative to the time the constructors of the dependencies are called. In your example, the constructor of DatabaseTransactionLog is called, but no Connection is created at that time. A Connection is only created when logChargeResult is called (because of the connectionProvider.get() call in it).
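To make the laziness visible in isolation, here is a minimal, runnable sketch. The Connection class below is a stand-in for an expensive dependency (not java.sql.Connection or anything from Guice's API); the print statements show that nothing is constructed until get() is invoked:

import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Provider;

public class LazyDemo {
    // stand-in for an expensive-to-create dependency
    static class Connection {
        Connection() { System.out.println("expensive connection created"); }
    }

    public static void main(String[] args) {
        Injector injector = Guice.createInjector();
        // obtaining the Provider does NOT construct a Connection
        Provider<Connection> provider = injector.getProvider(Connection.class);
        System.out.println("provider obtained, no connection yet");
        // only this call runs the Connection constructor
        Connection connection = provider.get();
    }
}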

Modifying annotation value in superclass and dynamically instantiating child classes with new value

We are using Spring Cloud Stream as the underlying implementation for event messaging in our microservice-based architecture. We wanted to go a step further and provide an abstraction layer between our services and the Spring Cloud Stream library to allow for dynamic channel subscriptions without too much boilerplate configuration code in the services themselves.
The original idea was as follows:
The messaging-library provides a BaseHandler abstract class which all individual services must implement. All handlers of a specific service would listen to the same input channel, though only the one corresponding to the type of the event to handle would be called. This looks as follows:
public abstract class BaseEventHandler<T extends Event> {

    @StreamListener
    public abstract void handle(T event);
}
Each service offers its own events package, which contains N EventHandlers. These are plain POJOs which must be instantiated programmatically. This would look as follows:
public class ServiceEventHandler extends BaseEventHandler<ImportantServiceEvent> {

    @Override
    public void handle(ImportantServiceEvent event) {
        // todo stuff
    }
}
Note that these are simple classes and not Spring beans at this point, with ImportantServiceEvent implementing Event.
Our messaging-library is scanned on start-up as early as possible, and performs handler initialization. To do this, the following steps are done:
We scan all available packages in the classpath which provide some sort of event handling and retrieve all subclasses of BaseEventHandler.
We retrieve the @StreamListener annotation in the hierarchy of the subclass, and change its value to the corresponding input channel for this service.
Since our handlers might need to speak to some other application components (repositories etc.), we use DefaultListableBeanFactory to instantiate our handlers as singletons, as follows:
val bean = beanFactory.createBean(eventHandlerClass, AutowireCapableBeanFactory.AUTOWIRE_BY_TYPE, true);
beanFactory.registerSingleton(eventHandlerClass.getSimpleName(), bean);
After this, we ran into several issues.
The Spring Cloud Stream @StreamListener annotation cannot be inherited, as it is a method annotation. Despite this, some mechanism seems to be able to find it on the parent (as the StreamListenerAnnotationBeanPostProcessor is registered) and attempts to perform post-processing when the ServiceEventHandler is initialized. Our assumption is that Spring Cloud Stream uses something like AnnotatedElementUtils.findAllMergedAnnotations().
As a result, we thought we might be able to alter the annotation value on the base class prior to each instantiation of a child class. The idea was that, although our BaseEventHandler would simply end up with whatever value was set last at the end of this initialization phase, each child class would be instantiated with the correct channel name at its moment of instantiation, since we do not expect to rebind. However, this is not the case: the value of the @StreamListener annotation that is used is always the one on the base.
The question is then: is what we want possible with Spring Cloud Stream? Or is it rather a plain Java problem that we have here (does not seem to be the case)? Did the Spring Cloud Stream team foresee a use case like this, and are we simply doing it completely wrong?
This question was also posted on the Spring Cloud Stream tracker in case it might help garner a bit more attention.
Since the same people monitor SO and GitHub issues, it's rather pointless to post in both places. Stack Overflow is preferred for questions.
You should be able to subclass the BPP; it specifically has this extension point:
/**
 * Extension point, allowing subclasses to customize the {@link StreamListener}
 * annotation detected by the postprocessor.
 *
 * @param originalAnnotation the original annotation
 * @param annotatedMethod the method on which the annotation has been found
 * @return the postprocessed {@link StreamListener} annotation
 */
protected StreamListener postProcessAnnotation(StreamListener originalAnnotation, Method annotatedMethod) {
    return originalAnnotation;
}
Then override the bean definition with yours:

@Bean(name = STREAM_LISTENER_ANNOTATION_BEAN_POST_PROCESSOR_NAME)
public static StreamListenerAnnotationBeanPostProcessor streamListenerAnnotationBeanPostProcessor() {
    return new StreamListenerAnnotationBeanPostProcessor();
}

@Transactional method throwing "no transactional EntityManager available"

I have a method that is annotated @Transactional. In fact, I have a method calling a method that calls a method, and all three are @Transactional. The transactional logic worked fine until I pulled a few methods out into an abstract class for some code reuse, which appears to have broken my logic somehow.
The transactional method is from an abstract class; here is a partial snippet of the relevant parts (I have to rewrite this by hand, so forgive me for typos):
public abstract class ReadWriteService<ReadEntityTemplate extends IEntity, WriteEntityTemplate extends IEntity> {

    // extends JpaRepository, created using @EnableJpaRepositories
    private SearchRepository<WriteEntityTemplate, String> writeRepository;

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public ReadEntityTemplate save(final WriteEntityTemplate entity) {
        if (entity == null) return null;
        WriteEntityTemplate returnValue = writeRepository.save(entity);
        postSave(returnValue); // checks our security logic
        flush();
        ReadEntityTemplate returnEntity = find(returnValue.getId());
        // required to detect changes made to the view by our save
        em.refresh(returnEntity);
        return returnEntity;
    }
It's written this way because we are using views, so the return value may be modified by the find() against the view. This logic worked in the past, and still works for a number of calls.
The method that fails is:
@Override
@Transactional
public void configure(EntityFileConfig config) throws ClassNotFoundException {
    // load config from file
    for (EntityConfig entityConfig : entityConfigs) {
        EntityType entityType = entityTypeService_.find(entityConfig.getKey());
        if (entityType == null) {
            entityType = EntityType.createByRequiredFields(entityConfig.getKey());
        }
        // update entityType to reflect config file.
        entityType = entityTypeService_.save(entityType);
        for (String permissionName : entityConfig.getPermissions()) {
            if (!entityTypeService_.hasPermission(entityType, permissionName)) {
                Permission permission = permissionSetup.getPermission(permissionName);
                if (permission != null)
                    // fails on the line below
                    permissionService_.addPermission(entityType, permission);
            }
        }
    }
}
Both the entityTypeService and the permissionService extend the abstract class above and use the same save method without alteration; addPermission is a for loop that calls save on each permission.
The entityTypeService works, but the permissionService fails. When the permissionService is called and I check em.isTransactionalEntity, it returns false.
All @Transactional annotations are using the Spring annotation, not the javax one.
Actually, it seems as if a few of the permissions would save and others wouldn't, almost as if it were non-deterministic, but this may simply be due to modifying a database file that had some of the values already set and thus didn't need to run some of the logic the first time through.
I've done quite a bit of stumbling around but am no closer to determining what would cause my transaction to end. I had thought perhaps it was the @PersistenceContext, since the JPA repositories get their EntityManager through a different approach than autowiring with @PersistenceContext, but if that were the case, wouldn't everything fail?
Any help would be appreciated; I'm pretty stumped on the cause of this.
Assuming you have enabled @EnableTransactionManagement on a @Configuration class:
Since you didn't set any propagation on @Transactional, the default value is REQUIRED. That means all of these methods must run as part of a transaction. Since one of the methods from your abstract class is not taking part in the @Transactional transaction, you get the error.
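For reference, a minimal configuration sketch enabling both transaction management and the JPA repositories (the base package name is an assumption):

import javax.persistence.EntityManagerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(basePackages = "com.example.repositories") // hypothetical package
public class PersistenceConfig {

    @Bean
    public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
        // binds the JPA EntityManager to Spring-managed transactions,
        // which is what @Transactional and @PersistenceContext rely on
        return new JpaTransactionManager(emf);
    }
}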

What happens when a LockType.READ method is called by a LockType.WRITE method

What happens when a LockType.WRITE method in a singleton, container-managed session bean (which is LockType.READ at class level) calls another method within the same bean that is LockType.READ?
@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
@Lock(LockType.READ)
public class EmployeBean implements Employee {

    @Lock(LockType.WRITE)
    public Employee update() {
        // update and return the employee
        return null; // placeholder
    }

    public void calculate() {
        // calculate and set
    }
}
With the above bean, is it correct to have an implementation like this? What happens when update() is being executed and, at the same time, some other service calls calculate()? Will the service wait until update() finishes, or does it execute calculate() in parallel? I believe that if it does go on in parallel, there is a high chance of corrupting the data or ending up with a data mismatch.
The calculate method could be made private and used only from a WRITE-protected method. That way it is ensured that there cannot be a mismatch because of concurrent requests.
I wanted to know the impact and to follow the correct approach for handling concurrent requests in a case like the above.
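A sketch of that suggestion, assuming container-managed concurrency as in the bean above: calculate() is reachable only through a WRITE-locked wrapper, so it can never run concurrently with other locked business methods.

import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;

@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
@Lock(LockType.READ)
public class SafeEmployeeBean {

    @Lock(LockType.WRITE)
    public void recalculate() {
        calculate(); // runs under the exclusive WRITE lock of recalculate()
    }

    // not an EJB business method, so no lock of its own; only reachable
    // through the WRITE-locked wrapper above
    private void calculate() {
        // calculate and set
    }
}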

Java shutdown hook across different JVMs

Can I attach a Java shutdown hook across JVMs?
I mean, can I attach a shutdown hook from my JVM to a WebLogic server running in a different JVM?
The shutdown hook part is in Runtime.
The across-JVM part you'll have to implement yourself, because only you know how your JVMs can discover and identify each other.
It could be as simple as creating a listening socket at JVM1 startup and sending the port number of JVM2 to it; JVM1 would then send a shutdown notification to JVM2 (to that port) in its shutdown hook.
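A minimal sketch of the notifying side of that idea (host, port, and the message format are made up; JVM2's listening side is not shown):

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ShutdownNotifier {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            // assumes JVM2 registered itself as listening on localhost:9999
            try (Socket socket = new Socket("localhost", 9999);
                 OutputStream out = socket.getOutputStream()) {
                out.write("SHUTDOWN\n".getBytes(StandardCharsets.UTF_8));
            } catch (IOException e) {
                // JVM2 may already be gone; nothing sensible to do here
            }
        }));
        // ... application work ...
    }
}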
The short answer is: you can, but not out of the box, and there are some pitfalls, so please read the section on pitfalls at the end.
A shutdown hook must be a Thread object (see Runtime.addShutdownHook(Thread)) that the JVM can access; thus it must be instantiated within that JVM.
The only way I see to do it is to implement a Runnable that is also Serializable, plus some kind of remote service (e.g. RMI) to which you can pass that SerializableRunnable. This service must then create a Thread, pass the SerializableRunnable to that Thread's constructor, and add it as a shutdown hook to the Runtime.
But there is another problem in this case: the SerializableRunnable has no references to objects within the remote service's JVM, and you have to find a way for the SerializableRunnable to obtain them or to get them injected. So you have the choice between a ServiceLocator and a dependency injection mechanism. I will use the service locator pattern for the following examples.
I would suggest defining an interface like this:
public interface RemoteRunnable extends Runnable, Serializable {

    /**
     * Called after de-serialization from a remote invocation to give the
     * RemoteRunnable a chance to obtain service references of the JVM it has
     * been de-serialized in.
     */
    public void initialize(ServiceLocator sl);
}
The remote service method could then look like this:
public class RemoteShutdownHookService {

    public void addShutdownhook(RemoteRunnable rr) {
        // Since an instance of a RemoteShutdownHookService is an object of the remote
        // JVM, it can provide a mechanism that gives access to objects in that JVM.
        // Either through a service locator
        ServiceLocator sl = ...;
        rr.initialize(sl);
        // or through dependency injection.
        // In the case of dependency injection, the initialize method of RemoteRunnable
        // can be omitted.
        // A short Spring example:
        //
        // AutowireCapableBeanFactory beanFactory = .....;
        // beanFactory.autowireBean(rr);
        Runtime.getRuntime().addShutdownHook(new Thread(rr));
    }
}
and your RemoteRunnable might look like this:
public class SomeRemoteRunnable implements RemoteRunnable {

    private static final long serialVersionUID = 1L;

    private SomeServiceInterface someService;

    @Override
    public void run() {
        // call someService on shutdown
        someService.doSomething();
    }

    @Override
    public void initialize(ServiceLocator sl) {
        someService = sl.getService(SomeServiceInterface.class);
    }
}
Pitfalls
There is only one problem with this approach that is not obvious: the RemoteRunnable implementation class must be available on the remote service's classpath. Thus you cannot just create a new RemoteRunnable class and pass an instance of it to the remote service; you always have to add it to the remote JVM's classpath.
So this approach only makes sense if the RemoteRunnable implements an algorithm that can be configured by the state of the RemoteRunnable.
If you want to dynamically add arbitrary shutdown hook code to the remote JVM without the need to modify the remote JVM's classpath, you must use a dynamic language and pass that script to the remote service, e.g. Groovy.
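For completeness, a hypothetical client-side usage, assuming addShutdownhook is exposed through an RMI remote interface (IShutdownHookService below is not part of the code above, and the bind name is made up):

import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;

public class ShutdownHookClient {

    // hypothetical RMI view of the remote service above
    public interface IShutdownHookService extends Remote {
        void addShutdownhook(RemoteRunnable rr) throws RemoteException;
    }

    public static void main(String[] args) throws Exception {
        // look up the remote service and register the serializable hook
        IShutdownHookService service =
                (IShutdownHookService) Naming.lookup("rmi://remotehost/shutdownHookService");
        service.addShutdownhook(new SomeRemoteRunnable());
    }
}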

Java EE Firing an Event creates a new Instance

I have two ManagedBeans.
Concerning my problem, they do the following:
First:
@ManagedBean
public class Provider {

    private Event<ProvideEvent> event;

    private static boolean handling = false;

    public void provide(@Observes ConsumeEvent consume) {
        if (!handling) {
            ProvidedStuff stuff = ...; // provide some stuff
            event.fire(new ProvideEvent(stuff));
        }
    }
}
Second:
@ManagedBean
@SessionScoped
public class Consumer {

    private Event<ConsumeEvent> event;

    @PostConstruct
    public void initialize() {
        event.fire(new ConsumeEvent());
    }

    private static boolean handling = false;

    public void consume(@Observes ProvideEvent providedStuff) {
        if (!handling) {
            // use the provided stuff
        }
    }
}
This happens when the website is called:
1. Consumer is instantiated.
2. Consumer fires the event.
3. Provider is instantiated.
4. provide() is called.
5. A NEW CONSUMER IS INSTANTIATED
6. consume() is called.
As you can see, I had to use a boolean "handling" to keep the application from looping infinitely.
Why is the container not using the instantiated SessionScoped ManagedBean? I thought SessionScoped ManagedBeans are like Singleton for the Session?
I guess I could work around this by:
A: Using static variables for the changed properties.
B: Implementing the Observer-Pattern manually.
But there has to be an easier way here!?
I believe the problem could be that you fire the event in the @PostConstruct method of your Consumer.
From the javadocs:
This method MUST be invoked before the class is put into service.
As far as I understand, that results in a race condition. The Provider is probably firing the second event before your Consumer instance finishes executing initialize() and the container puts it into service. Hence, it won't receive the event. I'm too inexperienced with Java EE to give good advice on how to prevent that race condition, though. I would probably work around it with an ugly SynchronousQueue as a meeting point.
Additional info: the default with @Observes is to create a new instance of the event receiver if none exists (i.e. none is in service). That's why another Consumer is created. Use @Observes(notifyObserver = Reception.IF_EXISTS) to only notify existing instances that are in service.
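Applied to the Consumer above, the observer method would then look like this (a sketch; the imports are from CDI 1.x under javax.*):

import javax.enterprise.event.Observes;
import javax.enterprise.event.Reception;

public class Consumer {

    // only called if a Consumer instance already exists in the session;
    // the container will not create a new instance just to deliver the event
    public void consume(@Observes(notifyObserver = Reception.IF_EXISTS) ProvideEvent providedStuff) {
        // use the provided stuff
    }
}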
I thought SessionScoped ManagedBeans are like Singleton for the Session?
No, it just defines the lifetime of the object(s). It doesn't really enforce session-singleton behavior. The container would probably prefer the existing instance though, if it was in service at the time the second event is fired.
