Dagger 2 and use of @Singleton - java

I have inherited a Java web service project that is using Dagger 2. Based on my so-far limited understanding of Dagger, I am confused as to why every single class that is injected has the singleton annotation on it in the Dagger module classes. If I were creating this application without Dagger they would not all be singletons; is this something specific to Dagger, or have the previous developers simply misused it?

[...] every single class that is injected has the singleton annotation [...] is this something specific to dagger [...]?
Nope. @Singleton is the only scope included with Dagger by default, but you can also create custom scopes, use @Reusable (which may create multiple objects but will reuse them if possible), or use no scope at all.
or have the previous developers simply misused Dagger?
If possible you should just ask them. If every object is a @Singleton, it looks like they did not invest a lot of thought in the setup and just copy-pasted declarations; at least that would be my assumption.

From the section about @Reusable in the user guide:
Sometimes you want to limit the number of times an @Inject-constructed class is instantiated or a @Provides method is called, but you don't need to guarantee that the exact same instance is used during the lifetime of any particular component or subcomponent. This can be useful in environments such as Android, where allocations can be expensive.
Two main differences:
A @Singleton-annotated class is guaranteed to always give the same instance. That is needed if we keep global state in it; @Reusable gives no such guarantee.
Whenever a class requests an instance of a @Singleton-annotated class, double-checked locking is performed (which is slow). In the case of @Reusable, it isn't.
I'd use the @Reusable scope for classes that are expensive to build (for example, I use it for my Retrofit instance - though to be honest I've never run performance tests to check whether using this annotation is worth it at all).
On the other hand, I use a @Singleton-annotated class for the cache.
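A minimal sketch of what such a module could look like; EmployeeCache and the base URL are placeholders, and whether @Reusable actually pays off should be measured rather than assumed:

import javax.inject.Singleton;

import dagger.Module;
import dagger.Provides;
import dagger.Reusable;
import retrofit2.Retrofit;

@Module
class NetworkModule {

    // Expensive to build but holds no shared mutable state: @Reusable is enough.
    // Dagger may create more than one instance but will try to reuse an existing one.
    @Provides
    @Reusable
    Retrofit provideRetrofit() {
        return new Retrofit.Builder()
                .baseUrl("https://example.invalid/") // placeholder URL
                .build();
    }

    // Holds shared state (the cached entries), so exactly one instance is required.
    @Provides
    @Singleton
    EmployeeCache provideEmployeeCache() {
        return new EmployeeCache(); // hypothetical cache class
    }
}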
Also, if you have a class which keeps encapsulated global state like this:
class StateWrapper {
    final State state;

    @Inject
    StateWrapper(State state) {
        this.state = state;
    }
}
I mean the state is de facto kept in the State class: do not annotate StateWrapper as @Singleton, always annotate the smallest part, in this case the State class.
(This hint is taken from the video)
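In Dagger terms that hint could look roughly like this, assuming State has a constructor Dagger can call:

import javax.inject.Inject;
import javax.inject.Singleton;

// Scope the class that actually owns the state; StateWrapper above stays unscoped
// and simply receives the single shared State instance wherever it is injected.
@Singleton
class State {

    @Inject
    State() {
        // the global state is initialized and kept here
    }
}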

How to make a state available to all beans in a "session"?

I have the following design. When a client makes a request to the server, the server creates a state that holds all sorts of info. There are various stateless and stateful beans which need to read and write to this state. Refer to this unprofessional diagram:
The ComputationCycle class is where the processing starts, and it works in phases. During each phase it calls upon other Manager classes (which behave like utility classes) to help with the computation (the diagram shows only one phase). The state is read and written to both from the CC class and the managers, both of which are stateless.
State holds Employee, Department and Car classes (in some irrelevant data structure), which are stateful. These classes can also call the Manager classes. This is done with a simple @Inject Manager1, the same way CC uses managers.
My problem is how to access the stateful state (and its contained classes) from the stateless classes (and from the Car, Department and Employee classes too, although I think solving one will solve the other). I can't inject a stateful bean into a stateless bean. So after the client makes a request and the computation cycle starts, how do I access the state related to this request?
One solution is to pass the state to every method in the stateless classes, but this is really cumbersome and bloaty because all methods will have an "idiotic" State argument everywhere.
How can I make this design work the way I want it to?
I can't inject a stateful bean into a stateless bean.
You can absolutely inject dependencies this way.
If the stateful bean is @RequestScoped, any call into the stateless bean on that thread that hits a CDI-injected contextual reference (in other words, a proxy) will find its way to the right instance of the stateful bean.
As long as you use CDI, you don't need to trouble yourself with trying to stash things away in your own threadlocals.
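A rough sketch of that wiring; the class names mirror the question and are otherwise illustrative:

import javax.ejb.Stateless;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;

@RequestScoped
class RequestState {
    // per-request data (Employee, Department, Car, ...) lives here
}

@Stateless
public class ComputationCycle {

    // CDI injects a contextual reference (a proxy); every call made on the
    // request's thread is routed to that request's RequestState instance.
    @Inject
    private RequestState state;

    public void runPhase() {
        // read from / write to 'state' without passing it through every method
    }
}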
Buyer beware: ThreadLocal, along with a static accessor, will possibly do what you want. However, this class is prone to causing memory leaks if you are not extremely careful to remove each entry at the end of the request. In addition, you seem to be using EJB; I assume they are all in the same JRE. I use ThreadLocal quite a bit in similar situations, and I've had no problems. I use ServletContextListeners to null the static reference to the ThreadLocal when the context shuts down, although that has been problematic on some older web app servers, so I make sure the ThreadLocal exists before attempting to use it.
EJB can "talk" to each other across servers. It sounds local all your EJB are running in the same context.
Create a class that holds your state.
Extend ThreadLocal--you can do this anonymously--and override initialValue() to return a new instance of your class.
Create a utility class to hold the ThreadLocal as a static field; don't make it final. Create static fetch and remove methods that call ThreadLocal.get() and remove(). Create a static destroy() method that is called when your context shuts down (see ServletContextListener).
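A sketch of those steps, assuming the state class is called RequestState:

public final class RequestStateHolder {

    // Static but deliberately not final, so destroy() can drop the reference.
    private static ThreadLocal<RequestState> holder = new ThreadLocal<RequestState>() {
        @Override
        protected RequestState initialValue() {
            return new RequestState();
        }
    };

    public static RequestState fetch() {
        return holder.get();
    }

    // Must be called at the end of every request, or the entry leaks.
    public static void remove() {
        if (holder != null) {
            holder.remove();
        }
    }

    // Called from ServletContextListener.contextDestroyed().
    public static void destroy() {
        holder = null;
    }
}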

CDI (Weld SE) not injecting inner dependencies when using Producer method

I'm using WELD SE on a standalone java project which seemed to work fine till I started using producers.
The producer method works - the container uses it - but it never injects the inner dependencies of the produced bean. When I remove the producer, it works normally. I can't find the cause even after a long search of the spec and Google.
Example of a Producer:
@ApplicationScoped
public class LaminaValidadorProducer {

    private static final String XSD_PATH = getConfig("processador.xsd.path");
    private static final Map<VersaoLamina, String> XSD_PER_VERSION = new HashMap<>();

    static {
        XSD_PER_VERSION.put(VersaoLamina.V1, getConfig("processador.lamina.xsd.file"));
        XSD_PER_VERSION.put(VersaoLamina.V2, getConfig("processador.laminav2.xsd.file"));
    }

    @Produces
    public LaminaValidador buildValidador() {
        return new LaminaValidador(XSD_PATH, XSD_PER_VERSION);
    }
}
LaminaValidador is injected normally, but its inner attributes (marked with @Inject) are not being injected. The project has a beans.xml with bean-discovery-mode="all".
Any clues on what's happening?
This is not only a matter of Weld SE; it is in fact the desired/expected behaviour of CDI.
The reason is that normally, if you do not have producers, CDI creates the bean classes for you (by calling the no-args constructor, or one with injection points) and subsequently resolves the injection points within the bean (and does some other things, see the spec). In other words, you leave the lifecycle management to the CDI container.
On the other hand, using a producer is usually a way to create a bean out of a class where:
whose lifecycle you cannot control yourself, e.g. EntityManager
that integrates with other frameworks which have complex initialization
where you need to check external configuration before calling a certain constructor
or where you want a bean for a primitive type (int)
and many many more use cases
Now, this means you are responsible for the creation of the bean, and that includes any fields within. The container just takes the producer as a way to create a full-blown bean and assumes you took care of whatever the initialization required.
From your question I judge you need the injection point resolution inside the produced bean. There is no easy way, if any, to "enforce" that resolution manually, due to the static nature of CDI (and other, more complex reasons). Hence I would propose a different approach: leverage constructor injection, or maybe initializer methods? If you provide more information, I might be able to help further.
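One way to keep the container in the loop is to declare the dependencies as parameters of the @Produces method (CDI treats producer method parameters as injection points) and hand them to the bean's constructor. This sketch assumes LaminaValidador can be given a constructor that accepts them; SomeDependency stands in for whatever was @Inject-ed inside the bean:

@Produces
public LaminaValidador buildValidador(SomeDependency dependency) {
    // 'dependency' is resolved by CDI, so the produced bean still receives a
    // container-managed collaborator even though we call 'new' ourselves.
    return new LaminaValidador(XSD_PATH, XSD_PER_VERSION, dependency);
}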

Calling one DAO from another DAOFactory

Currently, my application architecture flows like this:
View → Presenter → Some asynchronous executor → DAOFactory → DAO (interface) → DAO (Impl)
For the time being, this kind of architecture works; mainly because I've only been needing one kind of DAO at the moment. But as the requirement grows, I'd need to expand to multiple DAOs, each with their own implementation on how to get the data.
Here's an illustration of my case:
The main headache comes from FooCloudDao which loads data from an API. This API needs some kind of authentication method - a string token that was stored during login (say, a Session object - yes, this too has its own DAO).
It's tempting to just pass a Session instance through FooDaoFactory, just in case there's no connection, but it seems hackish and counter-intuitive. The next thing I could imagine is to access SessionDAOFactory from within FooDaoFactory to get an instance of a Session (and then pass that when I need a FooCloudDAO instance).
But as I said, I'm not sure whether or not I can do a thing like this - well, maybe I could, but is this really the correct way of doing it?
I presume your problem is actually that FooCloudDao has different "dependencies" than other components, and you want to avoid passing the dependencies through every class on the way.
Although there are quite a few design patterns which would kind of solve your problem, I would suggest taking a look at Dependency Injection / Inversion of Control principles and frameworks. What you would do with this is:
You would create an interface for what your FooCloudDao needs, for example:
interface ApiTokenProvider {
    String getToken();
}
You would create an implementation of that interface which would get the token from the session or wherever that thing comes from:
class SessionBasedApiTokenProvider implements ApiTokenProvider {

    @Override
    public String getToken() {
        // get the token from the session here
        return null; // placeholder
    }
}
The class above would need to be registered with the IoC container of your choice as the implementation of the ApiTokenProvider interface (so that whoever asks for ApiTokenProvider is decoupled from the actual implementation - the container hands out the proper implementation).
You would have something called constructor injection on your FooCloudDao class (this is later used by the container to "inject" your dependency):
public FooCloudDao(ApiTokenProvider tokenProvider) {
    // store the provider so that the class can use it later where needed
}
Your FooDaoFactory would use the IoC container to resolve the FooCloudDao with all its dependencies (so you would not instantiate FooCloudDao with new).
When following these steps you will make sure that:
FooDaoFactory stays free of dependencies that are merely passed through
you make your code much more testable, because you can test your FooCloudDao without the real session (you can just hand in a fake implementation of the interface)
and all other benefits which come with Inversion of Control...
Note on the session: if you run into the problem of getting hold of the session inside SessionBasedApiTokenProvider, most of the time the session itself is also registered with the IoC container and injected where needed.
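As an illustration, if Guice were the container (any IoC container works similarly; the module below is hypothetical), FooDaoFactory would resolve the DAO instead of new-ing it:

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

public class FooDaoFactory {

    // In a real application the Injector is usually created once at startup and shared.
    private final Injector injector = Guice.createInjector(new AbstractModule() {
        @Override
        protected void configure() {
            // whoever asks for ApiTokenProvider gets the session-based implementation
            bind(ApiTokenProvider.class).to(SessionBasedApiTokenProvider.class);
        }
    });

    public FooCloudDao createFooCloudDao() {
        // the container builds FooCloudDao and supplies its ApiTokenProvider
        return injector.getInstance(FooCloudDao.class);
    }
}

For Guice to pick the FooCloudDao(ApiTokenProvider) constructor, that constructor would also need an @Inject annotation (see the Guice question below).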

Guice: Do I have to annotate every class of an object graph with @Inject?

I'd like to introduce Guice for the use of an existing mid-sized project.
For my needs I require a custom scope (session is too big, while request is too small for my project).
Imagine that I request guice to provide me an instance of Class A which has direct and indirect dependencies to many other classes (composition).
My custom provider is able to provide the instance of classes which are used as constructor arguments of all involved classes.
Question:
Do I really have to put an @Inject (and my custom scope) annotation on the constructors of all involved classes, or is there a way that Guice only requires these annotations on the top-level class which I request, with all further dependencies resolved by "asking" my custom scope for a provider of the dependent types?
If this is true, it would increase the effort of introducing Guice, because I would have to adjust more than 1000 classes. Any help and experience with introducing Guice is appreciated.
First of all, it's possible to use Guice without putting an @Inject annotation anywhere. Guice supports Provider bindings, @Provides methods and constructor bindings, all of which allow you to bind types however you choose. However, for its normal operation it requires @Inject annotations to serve as metadata telling it what dependencies a class requires and where it can inject them.
The reason for this is that otherwise it cannot deterministically tell what it should inject and where. For example, classes may have multiple constructors, and Guice needs some way of choosing one to inject that doesn't rely on any guessing. You could say "well, my classes only have one constructor so it shouldn't need @Inject on that", but what happens when someone adds a new constructor to a class? Then Guice no longer has a basis for deciding, and the application breaks. Additionally, this all assumes that you're only doing constructor injection. While constructor injection is certainly the best choice in general, Guice allows injection into methods (and fields) as well, and the problem of needing to specify the injection points of a class explicitly is even stronger there, since most classes have many methods that are not used for injection and at most a few that are.
In addition to @Inject's importance in telling Guice where to inject, it also serves as documentation of how a class is intended to be used - that the class is part of an application's dependency-injection-wired infrastructure. It also helps to be consistent in applying @Inject annotations across your classes, even if it wouldn't currently be absolutely necessary on some that just use a single constructor. I'd also note that you can use JSR-330's @javax.inject.Inject annotation in Guice 3.0 if a standard Java annotation is preferable to a Guice-specific one.
I'm not too clear on what you mean by asking the scope for a provider. Scopes generally do not create objects themselves; they control when to ask the unscoped provider of a dependency for a new instance and how to control the scope of that instance. Providers are part of how they operate, of course, but I'm not sure if that's what you mean. If you have some custom way of providing instances of objects, Provider bindings and @Provides methods are the way to go for that and don't require @Inject annotations on the classes themselves.
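For example, a @Provides method lets Guice construct a class that carries no @Inject annotations at all; LegacyService and Configuration below are made-up names standing in for such classes:

import com.google.inject.AbstractModule;
import com.google.inject.Provides;

public class LegacyModule extends AbstractModule {

    // LegacyService has no @Inject anywhere; this method tells Guice exactly
    // which constructor to call and with which arguments.
    @Provides
    LegacyService provideLegacyService(Configuration config) {
        return new LegacyService(config.getEndpoint());
    }
}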
No, you don't.
Guice does not require you to inject every single object. Guice will try to create only injected objects, so you can put @Inject on just the objects that you want to be injected.
On the scope bit: a scope essentially controls how your objects get created by Guice. When you write your own custom scope you can keep a data structure that controls the way objects are created. When you scope a class with your custom annotation, Guice will call your scope method before creation, with a Provider for that class. You can then decide whether to create a new object or use an existing one from a data structure (such as a HashMap). If you want to use an existing one you fetch and return it; otherwise you call provider.get() and return that.
Notice this:
public <T> Provider<T> scope(final Key<T> key, final Provider<T> unscoped) {
    return new Provider<T>() {
        public T get() {
            Map<Key<?>, Object> scopedObjects = getScopedObjectMap(key);

            @SuppressWarnings("unchecked")
            T current = (T) scopedObjects.get(key);
            if (current == null && !scopedObjects.containsKey(key)) {
                current = unscoped.get();
                scopedObjects.put(key, current);
            }
            // what you return here is going to be injected ....
            // in this scope object you can have a data structure that holds all references
            // and choose to return one of those instead, depending on your logic and external
            // dependencies such as session variables etc...
            return current;
        }
    };
}
Here's a tutorial: http://code.google.com/p/google-guice/wiki/CustomScopes
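Hooking such a scope into Guice would then look roughly like this; the annotation name is illustrative, and MyCustomScope is assumed to be a class implementing com.google.inject.Scope with the scope() method shown above:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

import com.google.inject.AbstractModule;
import com.google.inject.ScopeAnnotation;

// Custom scope annotation used on the classes (or bindings) you want scoped.
@ScopeAnnotation
@Retention(RetentionPolicy.RUNTIME)
@interface MyCustomScoped {
}

class MyCustomScopeModule extends AbstractModule {

    @Override
    protected void configure() {
        // Register the Scope implementation for the annotation.
        bindScope(MyCustomScoped.class, new MyCustomScope());
    }
}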
At the most basic level, the @Inject annotation identifies the things Guice will need to set for you. You can have Guice inject into a field directly, into a method, or into a constructor. You must use the @Inject annotation every time you want Guice to inject an object.
Here is a Guice tutorial.

Is there ever a case for 'new' when using dependency injection?

Does dependency injection mean that you don't ever need the 'new' keyword? Or is it reasonable to directly create simple leaf classes such as collections?
In the example below I inject the comparator, query and dao, but the SortedSet is directly instantiated:
public Iterable<Employee> getRecentHires()
{
    SortedSet<Employee> entries = new TreeSet<Employee>(comparator);
    entries.addAll(employeeDao.findAll(query));
    return entries;
}
Just because Dependency Injection is a useful pattern doesn't mean that we use it for everything. Even when using DI, there will often be a need for new. Don't delete new just yet.
One way I typically decide whether or not to use dependency injection is whether or not I need to mock or stub out the collaborating class when writing a unit test for the class under test. For instance, in your example you (correctly) are injecting the DAO because if you write a unit test for your class, you probably don't want any data to actually be written to the database. Or perhaps a collaborating class writes files to the filesystem or is dependent on an external resource. Or the behavior is unpredictable or difficult to account for in a unit test. In those cases it's best to inject those dependencies.
For collaborating classes like TreeSet, I normally would not inject those because there is usually no need to mock out simple classes like these.
One final note: when a field cannot be injected for whatever reason, but I still would like to mock it out in a test, I have found the Junit-addons PrivateAccessor class helpful to be able to switch the class's private field to a mock object created by EasyMock (or jMock or whatever other mocking framework you prefer).
There is nothing wrong with using new the way it's shown in your code snippet.
Consider the case of wanting to append String snippets: why would you want to ask the injector for a StringBuilder?
In another situation that I've faced, I needed to have a thread running in accordance with the lifecycle of my container. In that case, I had to do a new Thread() because my Injector was created after the callback method for container startup was called. And once the injector was ready, I hand-injected some managed classes into my Thread subclass.
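That combination of 'new' plus hand-injection could look something like this; Guice's injectMembers is the relevant call, and the worker class and its dependency are made up:

import com.google.inject.Inject;
import com.google.inject.Injector;

class WorkerThread extends Thread {

    @Inject
    SomeManagedClass managed; // hypothetical managed dependency, populated by injectMembers()

    @Override
    public void run() {
        // use 'managed' for the lifetime of the container
    }
}

class ContainerLifecycle {

    void onStartup(Injector injector) {
        WorkerThread worker = new WorkerThread(); // has to be created with 'new'
        injector.injectMembers(worker);           // @Inject fields are filled in afterwards
        worker.start();
    }
}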
Yes, of course.
Dependency injection is meant for situations where there could be several possible instantiation targets, of which the client may not be aware (or capable of making a choice) at compile time.
However, there are enough situations where you do know exactly what you want to instantiate, so there is no need for DI.
This is just like invoking functions in object-oriented languages: just because you can use dynamic binding doesn't mean that you can't use good old static dispatch (e.g., when you split your method into several private operations).
My thinking is that DI is awesome and great for wiring layers, and also for the pieces of your code that need to be flexible to potential change. Sure, we can say everything can potentially need changing, but we all know that in practice some stuff just won't be touched.
So when DI is overkill I use 'new' and just let it roll.
For example, wiring the Model to the View to the Controller layer is always done via DI. Any algorithms my apps use: DI. Any pluggable reflective code: DI. The database layer: DI. But pretty much any other object used in my system is handled with a plain 'new'.
Hope this helps.
It is true that in today's framework-driven environment you instantiate objects less and less. For example, Servlets are instantiated by the servlet container, beans in Spring are instantiated by Spring, and so on.
Still, when using a persistence layer, you will instantiate your persisted objects before they are persisted. When using Hibernate, for example, you will call new on your persisted object before calling save on your HibernateTemplate.
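For instance, with Spring's HibernateTemplate (the Employee entity and service class here are just placeholders):

import org.springframework.orm.hibernate5.HibernateTemplate;

public class EmployeeService {

    private final HibernateTemplate hibernateTemplate; // injected collaborator

    public EmployeeService(HibernateTemplate hibernateTemplate) {
        this.hibernateTemplate = hibernateTemplate;
    }

    public void hire(String name) {
        // the persisted object itself is created with a plain 'new'...
        Employee employee = new Employee(name);
        // ...and only then handed over to the injected persistence infrastructure
        hibernateTemplate.save(employee);
    }
}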
