iPojo instance creation and management - java

I am currently having lots of trouble with iPojo leaks caused by constructed instances that we forget to dispose of. I see this as an inevitable drawback of imperative instantiation via the iPojo Factory technique: you say when you need your service by calling factory.createComponentInstance(config), so you also have the responsibility of saying when you are done with it. This forces me to keep two references: one for the service that I want to consume, and a second one for the iPojo ComponentInstance, so that when the consumer is done it can call componentInstance.dispose(). If it doesn't, there's a leak.
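To make the pattern concrete, here is a minimal sketch of the imperative approach described above (the Consumer class, its fields, and its lifecycle methods are illustrative, not from my actual code):

import java.util.Hashtable;
import org.apache.felix.ipojo.ComponentInstance;
import org.apache.felix.ipojo.Factory;

public class Consumer {
    private final Factory factory;       // the iPojo factory of the service implementation
    private ComponentInstance instance;  // kept around only so dispose() can be called later

    public Consumer(Factory factory) {
        this.factory = factory;
    }

    public void start() throws Exception {
        // imperative instantiation: we say when we need the service...
        instance = factory.createComponentInstance(new Hashtable<>());
        // ...then look up the service it publishes and use it
    }

    public void stop() {
        // ...and we must also say when we are done; forgetting this is the leak
        if (instance != null) {
            instance.dispose();
        }
    }
}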
Is there a more declarative way to do this where the consumer doesn't need to handle the lifecycle of the iPojo service and its instance?
To simplify my use case, imagine that there's a UI with a button in it, and every time the button is pressed, I need a new, unique instance of an iPojo service. Ideally, the instance would be GC'd when it goes out of scope, without the consumer having to do anything.
Maybe my mistake is using services as instances, but I have three reasons to use a service instead of a normal class and calling new:
1. The service impl should be substitutable.
2. The consumer should depend on an interface, not an implementation/provider, not only because of #1 but also because of the many transitive dependencies pulled in when depending on a concrete impl.
3. The service impl has some dependencies of its own that I'm hoping will be injected by iPojo (dependency injection).
As a second request, does anyone know of any open-source, real (i.e. not dummy/demo) projects using iPojo that I can use as examples of good iPojo usage?

Instead of creating a component instance, you probably should use a custom 'creation strategy'. That way you will have only one component instance, but several managed 'implementation' instances (service objects). You decide when these objects are created and disposed. More information is available at http://felix.apache.org/documentation/subprojects/apache-felix-ipojo/apache-felix-ipojo-userguide/describing-components/providing-osgi-services.html#service-serving-object-creation.
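To illustrate the idea, here is a hedged sketch of the declarative side, assuming the strategy attribute of @Provides described in the linked documentation (the exact strategy value, or a custom CreationStrategy class name, is what that page covers; the service and method names below are illustrative):

import org.apache.felix.ipojo.annotations.Component;
import org.apache.felix.ipojo.annotations.Provides;

@Component
@Provides(strategy = "INSTANCE") // a distinct service object per consumer; see the docs above
public class MyServiceImpl implements MyService {

    // the implementation's own dependencies are still injected by iPojo (@Requires, etc.)

    @Override
    public void doWork() {
        // ...
    }
}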
About a project using iPOJO, you can have a look at the Wisdom Framework, which relies on iPOJO: http://wisdom-framework.org (code available at github.com/wisdom-framework/wisdom/).

Related

How to make a state available to all beans in a "session"?

I have the following design. When a client makes a request to the server, the server creates a state that holds all sorts of info. There are various stateless and stateful beans which need to read and write to this state. Refer to this unprofessional diagram:
The ComputationCycle class is where the processing starts, and it works in phases. During each phase it calls upon other Manager classes (which behave like utility classes) to help with the computation (the diagram shows only one phase). The state is read from and written to both by the CC class and by the managers; both are stateless.
State holds the Employee, Department and Car classes (in some irrelevant data structure), which are stateful. These classes can also call the Manager classes. This is done with a simple @Inject Manager1, the same way CC uses managers.
My problem is how to access the stateful state (and its contained classes) from the stateless classes (and from the Car, Department and Employee classes too, although I think solving one will solve the other). I can't inject a stateful bean into a stateless bean. So after the client makes a request and the computation cycle starts, how do I access the state related to this request?
One solution is to pass the state to every method in the stateless classes, but this is really cumbersome and bloated because all methods will have an "idiotic" State argument everywhere.
How can I make this design work the way I want it to?
I can't inject a stateful bean into a stateless bean.
You can absolutely inject dependencies this way.
If the stateful bean is @RequestScoped, any call into the stateless bean on that thread that hits a CDI-injected contextual reference (in other words, a proxy) will find its way to the right instance of the stateful bean.
As long as you use CDI, you don't need to trouble yourself with trying to stash things away in your own threadlocals.
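A minimal sketch of that combination, assuming CDI and EJB annotations (the RequestState and ComputationCycle shapes below are illustrative):

// RequestState.java
import javax.enterprise.context.RequestScoped;

@RequestScoped
public class RequestState {
    private String clientId;

    public String getClientId() { return clientId; }
    public void setClientId(String clientId) { this.clientId = clientId; }
}

// ComputationCycle.java
import javax.ejb.Stateless;
import javax.inject.Inject;

@Stateless
public class ComputationCycle {

    // CDI injects a proxy; each call made on the request's thread resolves
    // to the RequestState instance bound to the current request
    @Inject
    private RequestState state;

    public void runPhase() {
        String id = state.getClientId();
        // ... read and write the per-request state here ...
    }
}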
Buyer beware: ThreadLocal will possibly do what you want, along with a static accessor. However, this class is prone to causing memory leaks if you are not extremely careful to remove each entry at the end of the request. In addition, you seem to be using EJB; I assume they are all in the same JRE. I use ThreadLocal quite a bit in similar situations, and I've had no problems. I use ServletContextListeners to null the static reference to the ThreadLocal when the context shuts down, although that has been problematic on some older web app servers, so I make sure the ThreadLocal exists before attempting to use it.
EJB can "talk" to each other across servers. It sounds local all your EJB are running in the same context.
Create a class that holds your state.
Extend ThreadLocal (you can do this anonymously) and override initialValue() to return a new instance of your class.
Create a utility class to hold the ThreadLocal as a static field. Don't make it final. Create static fetch and remove methods that call ThreadLocal.get() and remove(). Create a static destroy() method that is called when your context shuts down (see ServletContextListener). A sketch of this utility follows below.
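A hedged sketch of that utility (class, field, and method names are illustrative):

public final class StateHolder {

    // not final, so destroy() can drop it when the context shuts down
    private static ThreadLocal<State> CURRENT = new ThreadLocal<State>() {
        @Override
        protected State initialValue() {
            return new State();   // override initialValue() to create the state lazily
        }
    };

    private StateHolder() { }

    public static State fetch() {
        ensureExists();
        return CURRENT.get();
    }

    public static void remove() {
        if (CURRENT != null) {
            CURRENT.remove();     // call at the end of every request to avoid leaks
        }
    }

    // call from ServletContextListener.contextDestroyed()
    public static void destroy() {
        CURRENT = null;
    }

    private static void ensureExists() {
        if (CURRENT == null) {
            CURRENT = new ThreadLocal<State>() {
                @Override
                protected State initialValue() {
                    return new State();
                }
            };
        }
    }

    // the class that holds your per-request state
    public static class State {
        // fields such as the Employee/Department/Car structures go here
    }
}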

What are EJB callbacks and why do we need them?

I've just started to look at Java EE, but I'm struggling to understand what callbacks are exactly and what they are used for.
Does anyone have a clear explanation of what they are? I've looked around the site but been unable to find much information.
The Formal Definition
A callback is a mechanism by which the life cycle of an enterprise bean can be intercepted.
A Practical Example
I think a single example will help show off the usefulness of these callback annotations. Let's take a look at the @PreDestroy callback. From the JBoss docs on EJB, we can see that:
PreDestroy - is invoked when the bean is removed from the pool or destroyed.
Now, say you've got a bean that holds some kind of file resource. You want to ensure that when the bean is destroyed, the file lock goes with it. Well, we know that it's "risky" practice to wait for the Garbage Collector to handle these things for us; we don't know when it's going to run.
But what we can do is put in place some logic that is called when the bean is removed.
@PreDestroy
public void cleanUp() {
    // Clean up your FileOutputStreams etc.
}
In your bean, it's very clear that this method is executed when the bean is destroyed and it requires no extra code from the outside. This ensures that your resources are cleaned up, as and when the bean is destroyed.
Callbacks are your primary opportunity to execute custom code at specific points in the EJB's (or the container's) lifecycle.
So take, for example, the case where you want to initialize specific fields or components inside the EJB, after the EJB has been instantiated but before it starts to service requests.
You'll implement the @PostConstruct callback method. A method annotated with this is an advertisement to the Java EE runtime that the method must be run immediately after an instance of that class has been created. A common use of this annotation is to instantiate class-level variables or to prepare shared resources.
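For example, a minimal sketch (the bean and field names are illustrative):

import java.util.ArrayList;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.ejb.Stateless;

@Stateless
public class ReportService {

    private List<String> templates;

    @PostConstruct
    public void init() {
        // runs once, right after the container creates this instance and
        // performs injection, but before any business method is invoked
        templates = new ArrayList<>();
        templates.add("default");
    }
}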
The Java EE specification designates several annotations such as this one as lifecycle callbacks. What this means is that at startup, the container knows to scan the deployment for artifacts that implement any of the available callbacks. In doing so, it knows to notify the interested components (EJBs, CDI components, JAX-WS bean implementations) of specific events, or to call specific methods when specific actions occur in the app server.
The callback mechanism is in itself an implementation of the Callback pattern (or event-driven programming, if you're coming from a UI programming world).
Further Reading:
Oracle's intro to lifecycle callbacks

dropwizard-guice: Order of Managed objects

I have two classes, Foo and Bar, which implement Managed.
I am using 'dropwizard-guice' with enableAutoConfig (Dropwizard Guice) to automatically add bundles and managed objects. But AutoConfig adds the managed objects in random order.
But in my case, I am injecting a singleton Foo instance into Bar, and I always want Foo to be created and added first and to be destroyed after Bar. Is there a way to achieve the required ordering?
So, looking at the code, managed objects are simply added to a list. This means that the order in which you add them will be the order in which they are executed. Now, there might be subtleties that will screw you, so I would not rely on that.
The lifecycle in DW is handled by Jetty. So the functionality that starts/stops your beans lives there.
I would implement a custom solution, and since you are using Guice this will be fairly straightforward and easy.
Add a new managed interface "MyManaged"
This will enable you to have two different types of managed objects. MyManaged can also implement something sortable (or whatever you need to create an order), and that way you will be able to control the execution order exactly.
Add a new Container "MyManagedContainer"
This one will be responsible for your MyManaged classes. It must implement Managed and will be handled by DW. So basically you wrap your own managed objects into a Managed object, so that you have control over what to do.
In MyManagedContainer's start/stop methods, you simply delegate to your own objects' start/stop methods.
Create everything in Guice.
Guice offers you MultiBindings: https://github.com/google/guice/wiki/Multibindings
So you create your Foo and Bar; they both implement MyManaged and some sort of ordering.
You bind them and inject them as a Set into MyManagedContainer, and you add MyManagedContainer to Dropwizard's Managed lifecycle.
Tada, you now have exactly controlled execution order.
I apologise for the lack of code, but I have not in fact implemented this. I also use guicey (which has internal support for multibindings and much much more) instead of guice.
Let me know if you need more help with this.
Thanks,
Artur
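For completeness, here is a hedged sketch of the approach Artur describes (the MyManaged and MyManagedContainer names follow the answer; everything else, including the order() method, is illustrative):

import java.util.Comparator;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import javax.inject.Inject;
import io.dropwizard.lifecycle.Managed;

// Your own lifecycle contract, with an explicit order.
interface MyManaged {
    int order();
    void start() throws Exception;
    void stop() throws Exception;
}

// A single Dropwizard Managed that drives all MyManaged instances in order.
class MyManagedContainer implements Managed {

    private final List<MyManaged> ordered;

    @Inject
    MyManagedContainer(Set<MyManaged> managed) { // the set comes from a Guice multibinding
        this.ordered = managed.stream()
                .sorted(Comparator.comparingInt(MyManaged::order))
                .collect(Collectors.toList());
    }

    @Override
    public void start() throws Exception {
        for (MyManaged m : ordered) {
            m.start();                           // Foo (lower order) before Bar
        }
    }

    @Override
    public void stop() throws Exception {
        for (int i = ordered.size() - 1; i >= 0; i--) {
            ordered.get(i).stop();               // reverse order: Bar before Foo
        }
    }
}

In your Guice module you would then bind Foo and Bar via Multibinder.newSetBinder(binder(), MyManaged.class), and register the container with Dropwizard using environment.lifecycle().manage(injector.getInstance(MyManagedContainer.class)).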

On properly implementing complex service layers

I have the following situation:
Three concrete service classes implement a service interface: one is for persistence, another deals with notifications, and the third deals with adding points to specific actions (gamification). The interface has roughly the following structure:
public interface IPhotoService {
    void upload();
    Photo get(Long id);
    void like(Long id);
    // etc...
}
I did not want to mix the three types of logic into one service (or, even worse, into the controller class) because I want to be able to change them (or shut them off) without any problems. The problem comes when I have to inject a concrete service into the controller to use. Usually, I create a fourth class, named roughly ApplicationNamePhotoService, which implements the same interface and works as a wrapper (mediator) between the other three services: it gets input from the controller and calls each service correspondingly. It is a working approach, though one that creates a lot of boilerplate code.
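For concreteness, such a wrapper might look roughly like this (the delegation order and constructor wiring are illustrative):

public class ApplicationNamePhotoService implements IPhotoService {

    private final IPhotoService persistence;
    private final IPhotoService notifications;
    private final IPhotoService gamification;

    public ApplicationNamePhotoService(IPhotoService persistence,
                                       IPhotoService notifications,
                                       IPhotoService gamification) {
        this.persistence = persistence;
        this.notifications = notifications;
        this.gamification = gamification;
    }

    @Override
    public void upload() {
        persistence.upload();      // save first
        gamification.upload();     // then award points
        notifications.upload();    // then notify
    }

    @Override
    public Photo get(Long id) {
        return persistence.get(id);
    }

    @Override
    public void like(Long id) {
        persistence.like(id);
        gamification.like(id);
        notifications.like(id);
    }
}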
Is this the right approach? Currently, I am not aware of a better one, although I would highly appreciate knowing whether it is possible to declare the execution sequence declaratively (in the context) and to inject the controller with an on-the-fly generated wrapper instance.
Also, it would be nice to cache some stuff between the three services. For example, all of them use DAOs, i.e. they sometimes make the same calls to the DB over and over again. If all the logic were in one place that could have been avoided, but now... I know that it is possible to enable some request- or session-based caching. Can you suggest some example code? BTW, I am using Hibernate for the persistence part. Is there already some caching provided (probably, if they reside in the same transaction or something; with that one I am totally lost)?
The service layer should consist of classes with methods that are units of work with actions that belong in the same transaction. It sounds like you are mixing service classes when they could be in the same class and method. You can inject service classes into one another when required too, rather than create another "mediator".
It is perfectly acceptable to "mix the three types of logic"; in fact, it is preferable if they form an expected use case/unit of work.
For caching, I would look at Ehcache, which is, I believe, well integrated with Hibernate.
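A hedged sketch of what that looks like on the Hibernate side (the entity name is illustrative, and the exact property names and region factory class depend on your Hibernate/Ehcache versions):

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE) // second-level cached entity
public class Photo {

    @Id
    private Long id;

    // ... other fields ...

    // In persistence.xml / hibernate.cfg.xml you would also enable something like:
    //   hibernate.cache.use_second_level_cache = true
    //   hibernate.cache.region.factory_class = org.hibernate.cache.ehcache.EhCacheRegionFactory
}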

Is there ever a case for 'new' when using dependency injection?

Does dependency injection mean that you don't ever need the 'new' keyword? Or is it reasonable to directly create simple leaf classes such as collections?
In the example below I inject the comparator, query and dao, but the SortedSet is directly instantiated:
public Iterable<Employee> getRecentHires()
{
    SortedSet<Employee> entries = new TreeSet<Employee>(comparator);
    entries.addAll(employeeDao.findAll(query));
    return entries;
}
Just because Dependency Injection is a useful pattern doesn't mean that we use it for everything. Even when using DI, there will often be a need for new. Don't delete new just yet.
One way I typically decide whether or not to use dependency injection is whether or not I need to mock or stub out the collaborating class when writing a unit test for the class under test. For instance, in your example you (correctly) are injecting the DAO because if you write a unit test for your class, you probably don't want any data to actually be written to the database. Or perhaps a collaborating class writes files to the filesystem or is dependent on an external resource. Or the behavior is unpredictable or difficult to account for in a unit test. In those cases it's best to inject those dependencies.
For collaborating classes like TreeSet, I normally would not inject those because there is usually no need to mock out simple classes like these.
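As an illustration of that split, a test can hand the class a stubbed DAO while the method still builds its own TreeSet. The EmployeeFinder, EmployeeDao, Query, and test wiring below are illustrative, not taken from the question:

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class EmployeeFinderTest {

    // hand-rolled stub standing in for the real, database-backed DAO
    static class StubEmployeeDao implements EmployeeDao {
        @Override
        public List<Employee> findAll(Query query) {
            return Arrays.asList(new Employee("Ann"), new Employee("Bob"));
        }
    }

    public void recentHiresAreSortedByComparator() {
        EmployeeFinder finder = new EmployeeFinder(
                new StubEmployeeDao(),                   // injected, so easily stubbed
                Comparator.comparing(Employee::getName), // injected comparator
                new Query());                            // injected query
        Iterable<Employee> hires = finder.getRecentHires(); // TreeSet is created inside with 'new'
        // assert on the contents and ordering of 'hires' here
    }
}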
One final note: when a field cannot be injected for whatever reason, but I still would like to mock it out in a test, I have found the Junit-addons PrivateAccessor class helpful to be able to switch the class's private field to a mock object created by EasyMock (or jMock or whatever other mocking framework you prefer).
There is nothing wrong with using new the way it's shown in your code snippet.
Consider the case of wanting to append String snippets. Why would you want to ask the injector for a StringBuilder?
In another situation I've faced, I needed to have a thread running in accordance with the lifecycle of my container. In that case, I had to do a new Thread() because my Injector was created after the callback method for container startup was called. And once the injector was ready, I hand-injected some managed classes into my Thread subclass.
Yes, of course.
Dependency injection is meant for situations where there could be several possible instantiation targets, of which the client may not be aware (or between which it may not be capable of choosing) at compile time.
However, there are enough situations where you do know exactly what you want to instantiate, so there is no need for DI.
This is just like invoking functions in object-oriented languages: just because you can use dynamic binding doesn't mean you can't use good old static dispatch (e.g., when you split your method into several private operations).
My thinking is that DI is awesome and great for wiring layers, and also for the pieces of your code that need to be flexible to potential change. Sure, we can say everything can potentially need changing, but we all know that in practice some stuff just won't be touched.
So when DI is overkill I use 'new' and just let it roll.
For example: wiring the Model to the View to the Controller layer is always done via DI. Any algorithms my app uses: DI. Any pluggable reflective code: DI. The database layer: DI. But pretty much any other object used in my system is handled with a plain 'new'.
Hope this helps.
It is true that in today's framework-driven environment you instantiate objects less and less. For example, Servlets are instantiated by the servlet container, beans in Spring are instantiated by Spring, etc.
Still, when using a persistence layer, you will instantiate your persisted objects before they have been persisted. When using Hibernate, for example, you will call new on your persisted object before calling save on your HibernateTemplate.
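A minimal sketch of that point: the entity is created with plain 'new', and only the persistence call goes through the injected HibernateTemplate. The Employee entity and repository names are illustrative, and the HibernateTemplate package varies by Spring/Hibernate version (e.g. org.springframework.orm.hibernate5):

import org.springframework.orm.hibernate5.HibernateTemplate;

public class EmployeeRepository {

    private final HibernateTemplate hibernateTemplate; // injected by Spring

    public EmployeeRepository(HibernateTemplate hibernateTemplate) {
        this.hibernateTemplate = hibernateTemplate;
    }

    public void hire(String name) {
        Employee employee = new Employee(name); // plain 'new' for the entity
        hibernateTemplate.save(employee);       // persistence is delegated to the framework
    }
}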
