Let's say I have a controller RequestController in Spring which is marked as a singleton. Inside this controller there is a builder which is injected using dependency injection. The main goal of this class is to receive requests and build responses.
@Singleton
class RequestController {

    private ResponseBuilder responseBuilder;

    private RequestController(ResponseBuilder responseBuilder) {
        this.responseBuilder = responseBuilder;
    }

    public Response getResponse(Request request) {
        return responseBuilder.getRequest(request).build();
    }
}
My question:
What kind of pitfalls does this code hide? What could go wrong when we try to use it in a normal Spring application? @Singleton is only information that this class will be created only once per application.
I know that the builder should be thread-safe since it will be responsible for handling multiple requests. But is anything else dangerous here?
Before going into your question, there's one thing to be mentioned. You have declared your class RequestController as a singleton with @Singleton. If your class is a singleton you should make sure it is immutable, i.e. there is no state change after creation, so I assume it is a stateless class. You also don't need a private constructor, which looks rather messy. Instead you can use:
@Inject
public RequestController(ResponseBuilder responseBuilder) {
    this.responseBuilder = responseBuilder;
}
Please note that, if you maintain a good design, the ResponseBuilder class should be injectable, and the RequestController class should be obtained only by injection.
Back to your problem: I think you have almost no problem. We should be very careful about what our real problem is, rather than about what patterns we could drag and drop into our code.
Making a controller a singleton is pretty much fine. But I don't see any need to apply the Builder pattern to your ResponseBuilder (you could also change the name). If your classes are rather simple and have a limited number of operations (a sign of a good design), you will rarely need the Builder pattern. Using it here is like tearing paper with an axe when your bare hands would do.
We should only use Builder in situations where we have to create a class with a considerably large scope for some reason. You can find a good example of the Builder pattern in Hamcrest, a matcher library for Java testing. It uses builders for some classes just to make life easier for the programmer by providing multiple chained operations on a single object.
Thanks for the reply... I have an idea of what can be wrong with this code. Of course I should use injection on ResponseBuilder. But classically the Builder pattern has state, so in our stateless controller we are using a stateful class, the ResponseBuilder. If the builder is not thread-safe it can cause concurrency problems (race conditions): whenever multiple threads access the builder's fields, they can observe different state. If we make the builder thread-safe it will work, but a new problem arises: only one thread will be able to use it at a time, which can become a bottleneck for requests (multiple threads call our method but are blocked by the synchronized builder). Give me a shout if my thinking is right :)
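One way out of that trade-off is to keep the controller a singleton but stop sharing the builder. A minimal sketch, assuming JSR-330 style injection (javax.inject) with a Provider bound to hand out a new builder per call; the getRequest method name follows the question's code:

import javax.inject.Inject;
import javax.inject.Provider;
import javax.inject.Singleton;

@Singleton
class RequestController {

    // The Provider hands out builder instances instead of one shared builder.
    private final Provider<ResponseBuilder> builders;

    @Inject
    public RequestController(Provider<ResponseBuilder> builders) {
        this.builders = builders;
    }

    public Response getResponse(Request request) {
        // Fresh, thread-confined builder per call: no shared mutable state
        // to race on and no synchronization bottleneck.
        return builders.get().getRequest(request).build();
    }
}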
Related
Service1 injects Repository1. Service2 injects Repository2.
Suppose two different scenarios:
1)
Some method of Service2 needs to retrieve data from Repository1.
Should Service2 inject Service1 or Repository1 when both of them provide respective get() method?
2) Some method of Service1 at its end should call another method from Service2. Is it a bad practice to inject Service2 into Service1 for such needs? Is it a good practice to use event-listening techniques like AOP for such needs?
There are many factors to consider here when we talk about best practices.
As a good start, try to understand the concept of SOLID principles.
Generally, it is good to have multiple classes with very focused roles that call each other, rather than combining all functionality in one class. This gives high reusability and minimal code duplication, which in turn gives maintainability.
For scenario 1.)
It is perfectly fine to have a service calling another service if the business code defined in that method is the same business functionality needed by the other service. This follows the DRY principle: no redundant code.
But it is also perfectly fine to just call the DAO directly from a service, instead of calling a different service to do that for you, if it is a simple call with no further business logic. Especially if the two services are in the same module, there is no strong reason to make another service a bridge class for an obvious single line of code, unless you want to abstract it; in your case it's just a simple get call (see the sketch below).
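A minimal sketch of that direct-call option, assuming Spring constructor injection; all class and method names below are illustrative, not taken from your code:

import org.springframework.stereotype.Service;

@Service
public class Service2 {

    private final Repository2 repository2;
    private final Repository1 repository1; // injected directly, no Service1 bridge

    // Spring injects both repositories through the constructor.
    public Service2(Repository2 repository2, Repository1 repository1) {
        this.repository2 = repository2;
        this.repository1 = repository1;
    }

    public Entity1 lookup(long id) {
        // A plain get with no business logic around it: routing this call
        // through Service1 would add indirection for nothing.
        return repository1.get(id);
    }
}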
For scenario 2.)
Another thing to consider is modularity and direction of dependency. If each service calls the other, there could be a problem in your design. As much as possible, avoid circular dependencies between modules, because they lead to spaghetti code; it is better to extract the shared code into a class in a common module that many modules can use.
Final note: as Robert Martin says, you won't write the cleanest code in one round. The best code is forged by continuous refactoring and cleanup. To quote Robert Martin,
The Boy Scouts have a rule: "Always leave the campground cleaner than you found it."
I am not greatly experienced with this problem, but personally I would avoid coupling controllers. My first approach would be to try to create an interface that fits all models, if possible. It would then be possible to create a model that wires multiple models together to access the data you need without adding references to the controller. For instance:
class Model1 implements iModel { }

class Model2 implements iModel { }

class ModelWrapper implements iModel {

    private iModel model1;
    private iModel model2;

    public ModelWrapper(iModel model1, iModel model2) {
        this.model1 = model1;
        this.model2 = model2;
    }

    public SomeDataType getSomeValue() {
        // Build the combined result from both wrapped models.
        SomeDataType result = new SomeDataType();
        result.param1 = model1.method();
        result.param2 = model2.method();
        return result;
    }
}
I am sure there is a better way to handle the number of models passed into the constructor, and also a way to search each model for the data you are looking for. If the data is not found, a null reference, or better a custom exception, could be returned or thrown. If the implementation is consistent, perhaps the wrapper could combine all models and allow access to many custom combinations. At least this way, when requirements change, you can simply add an additional wrapper to get what you need without changing the current implementation.
Perhaps a more experienced developer will build on my response to provide you a better implementation, but I hope this helps.
For a school project, I need to write a simple server in Java that continuously listens on an incoming directory and moves files from this directory to somewhere else. The server needs to log info and error messages, so I thought I could use the Proxy pattern for this. Thus, I created the following ServerInterface:
public interface ServerInterface extends Runnable {
    public void initialize(String repPath, ExecutorInterface executor, File propertiesFile) throws ServerInitException;
    public void run();
    public void terminate();
    public void updateHTML();
    public File[] scanIncomingDir();
    public List<DatasetAttributes> moveIncomingFilesIfComplete(File[] incomingFiles);
}
Then I've created an implementation Server representing the real object and a class ProxyServer representing the proxy. The Server furthermore has a factory method that creates a ProxyServer object but returns it as a ServerInterface.
The run-method on the proxy-object looks like this:
@Override
public void run() {
    log(LogLevels.INFO, "server is running ...");
    while (!stopped) {
        try {
            File[] incomingContent = scanIncomingDir();
            moveIncomingFilesIfComplete(incomingContent);
            updateHTML();
            pause();
        } catch (Exception e) {
            logger.logException(e, new Timestamp(timestampProvider.getTimestamp()));
            pause();
        }
    }
    log(LogLevels.INFO, "server stopped");
}
The functions called within the try statement simply log something and then propagate the call to the real object. So far, so good. But now that I've implemented the run-method this way in the proxy object, the run-method in the real object becomes obsolete and is just empty (the same goes for the terminate-method).
So I ask my-self: is that ok? Is that the way the proxy pattern should be implemented?
The way I see it, I'm mixing up "real" and "proxy" behaviour... Normally, the real server should be "stuck" in the while-loop, not the proxy server, right? I tried to avoid mixing this up, but neither approach was satisfying:
I could implement the run-method in the real object and then hand the proxy object over to the real object in order to still be able to log during the while-loop. But then the real object would do some logging, which is what I tried to avoid by writing a proxy in the first place.
I could say that only the proxy server is Runnable, thus deleting run and terminate from the interface, but this would break up the Proxy pattern.
Should I maybe use another design? Or am I seeing a problem where there is none?
You're definitely thinking in the right way. You've hit upon an interesting notion.
Features like logging, as you've described, are an example of what we call cross-cutting concerns in Aspect Oriented Programming.
A cross-cutting concern is a requirement that will be used in many objects, and therefore it has a tendency to break object-oriented programming. What does this mean?
If you try to create a class that is all about moving files from place A to place B, and the implementation of a method to do that first talks about logging (and then transactions, and then security), then that isn't very OO, is it? It breaks the Single Responsibility Principle.
Enter Aspect Oriented Programming
This is the reason we have AOP - it exists to modularize and encapsulate these cross-cutting concerns. It works as follows:
Define all the places where we want the cross-cutting feature to be applied.
Use the intercept design pattern to "weave" in that feature.
Ways we can "weave" in a requirement with AOP
One way is to use a Java DynamicProxy, as you've described; a minimal sketch follows this list. This is the default in, for example, the Spring Framework. It only works for interfaces.
Another way is to use a byte-code engineering library such as ASM, cglib or Javassist - these intercept the classloader to provide a new subclass at runtime.
A third way is to use compile-time weaving - changing the code (or byte-code) at compile time.
One more way is to use a Java agent (an argument to the JVM).
The latter two options are supported by AspectJ.
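As an illustration of the first option, here is a minimal sketch of a JDK dynamic proxy that weaves logging around every call to the real server. It assumes the ServerInterface from your question; the wrap factory method and the System.out logging are inventions for the example:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class LoggingHandler implements InvocationHandler {

    private final ServerInterface target;

    public LoggingHandler(ServerInterface target) {
        this.target = target;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // The cross-cutting concern lives here, in exactly one place.
        System.out.println("entering " + method.getName());
        try {
            return method.invoke(target, args);
        } finally {
            System.out.println("leaving " + method.getName());
        }
    }

    // Factory: wraps a real server in a logging proxy.
    public static ServerInterface wrap(ServerInterface real) {
        return (ServerInterface) Proxy.newProxyInstance(
                ServerInterface.class.getClassLoader(),
                new Class<?>[] { ServerInterface.class },
                new LoggingHandler(real));
    }
}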
In Conclusion:
It sounds as though you're moving towards Aspect Oriented Programming (AOP), so please check this out. Note also that the Spring Framework has a lot of features to simplify the application of AOP, though in your case, given this is a school assignment, it's probably better to delve into the core concepts behind AOP itself.
NB: If you're building a production-grade server, logging may be a full-blown feature and thus worth handling with AOP; in other cases it's probably simple enough to just in-line it.
You should use the Observer pattern in this case:
The observer pattern is a software design pattern in which an object,
called the subject, maintains a list of its dependents, called
observers, and notifies them automatically of any state changes,
usually by calling one of their methods.
Your Observable will observe changes in the directory, either by periodic polling or, as was already suggested here, with a WatchService. Changes in the directory will notify the Observer, which will take the action of moving the files. Both the Observable and the Observer should log their actions.
You should also know that the Observer pattern became part of the Java JDK through java.util.Observable and java.util.Observer.
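A minimal sketch along those lines, using java.util.Observable and java.util.Observer (note that both are deprecated since Java 9); the directory names and the renameTo-based move are illustrative:

import java.io.File;
import java.util.Observable;
import java.util.Observer;

// The subject: polls the incoming directory and notifies observers.
class DirectoryPoller extends Observable {

    private final File incoming = new File("incoming");

    public void pollOnce() {
        File[] files = incoming.listFiles();
        if (files != null && files.length > 0) {
            setChanged();           // mark that our state changed ...
            notifyObservers(files); // ... and push the files to all observers
        }
    }
}

// The observer: reacts to notifications by logging and moving the files.
class FileMover implements Observer {

    @Override
    public void update(Observable source, Object arg) {
        for (File f : (File[]) arg) {
            System.out.println("moving " + f.getName());
            f.renameTo(new File("archive", f.getName()));
        }
    }
}

// Wiring: poller.addObserver(new FileMover()); then call pollOnce() on a timer.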
You can make your proxy aware of the real object. Basically, your proxy will delegate the call to the run method to the real implementation.
Before the delegation, the proxy first logs the startup. After delegation, the proxy logs the "shutdown":
// Snapshot of what the run method implementation
// could look like in your proxy.
public ServerInterfaceProxy(ServerInterface target) {
    this.proxiedTarget = target;
}

public void run() {
    log(LogLevels.INFO, "server is running ...");
    this.proxiedTarget.run();
    log(LogLevels.INFO, "server stopped");
}
This implementation can also be perceived as an instance of the Decorator pattern. IMHO, to some extent (when it comes to implementation) Proxy and Decorator are equivalent: both intercept/capture the behavior of a target.
Look at Java 7's WatchService (in java.nio.file).
Using Proxy behaviour for this is almost certainly overkill.
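For completeness, a minimal sketch of the WatchService approach; the incoming and archive directory names are assumptions:

import java.nio.file.*;

public class IncomingDirWatcher {

    public static void main(String[] args) throws Exception {
        Path incoming = Paths.get("incoming");
        Path archive = Paths.get("archive");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        incoming.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

        while (true) {
            WatchKey key = watcher.take(); // blocks until events arrive
            for (WatchEvent<?> event : key.pollEvents()) {
                Path file = incoming.resolve((Path) event.context());
                Files.move(file, archive.resolve(file.getFileName()));
            }
            key.reset(); // re-arm the key for further events
        }
    }
}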
My original question was quite incorrect. I have classes (not POJOs) which have shortcut methods for business logic classes, to give the consumer of my API the ability to use it like this:
Connector connector = new ConnectorImpl();
Entity entity = new Entity(connector);
entity.createProperty("propertyName", propertyValue);
entity.close();
Instead of:
Connector connector = new ConnectorImpl();
Entity entity = new Entity();
connector.createEntityProperty(entity, "propertyName", propertyValue);
connector.closeEntity(entity);
Is it good practice to create such shortcut methods?
Old question
At the moment I am developing a small framework and have a pretty nice separation of the business logic into different classes (connectors, authentication tokens, etc.), but one thing still bothers me. I have methods that manipulate POJOs, like this:
public class BusinessLogicImpl implements BusinessLogic {
    public void closeEntity(Entity entity) {
        // Business logic
    }
}
And POJO entities which also have a close method:
public class Entity {

    private BusinessLogic businessLogic; // provided by the framework

    public void close() {
        businessLogic.closeEntity(this);
    }
}
Is it good practice to provide two ways to do the same thing? Or is it better to remove all "proxy" methods from the POJOs for clarity's sake?
You should remove the methods from the "POJOs"... They aren't really POJOs if you encapsulate functionality like this. The reason for this comes from SOA design principles, which basically say you want loose coupling between the different layers of your application.
If you are familiar with inversion-of-control containers, like Google Guice or the Spring Framework, this separation is a requirement. For instance, let's say you have a CreditCard POJO, a CreditCardProcessor service, and a DebugCreditCardProcessor service that doesn't actually charge the card money (for testing).
@Inject
private CardProcessor processor;
...
CreditCard card = new CreditCard(...params...);
processor.process(card);
In my example, I am relying on an IoC container to provide me with a CardProcessor. Whether this is the debug one, or the real one... I don't really care and neither does the CreditCard object. The one that is provided is decided by your application configuration.
If you had coupling between the processor and the credit card, so that you could say card.process(), you would always have to pass the processor into the card constructor. CreditCards can be used for other things besides processing, however. Perhaps you just want to load a CreditCard from the database and get the expiration date; it shouldn't need a processor for this simple operation.
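To make that concrete, a sketch of the coupled design this paragraph warns about; the constructor signature is hypothetical:

public class CreditCard {

    private final String number;
    private final CardProcessor processor; // forced dependency, even for plain reads

    public CreditCard(CardProcessor processor, String number) {
        this.processor = processor;
        this.number = number;
    }

    public void process() {
        processor.process(this);
    }

    public String getNumber() {
        // Even this simple read is only reachable after supplying a processor.
        return number;
    }
}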
You may argue: "The credit card could get the processor from a static factory." While true, singletons used this way are widely regarded as an anti-pattern because they keep global state in your application.
Keeping your business logic separate from your data model is always a good thing to do to reduce the coupling required. Loose coupling makes testing easier, and it makes your code easier to read.
I do not see your case as "two methods", because the logic of the implementation is kept in businessLogic. It would be akin to asking whether it is a good idea for java.lang.System to have both getProperties() and getProperty(String); the latter is not so much a different method as a shortcut to the same functionality.
But, in general, no, it is not good practice. Mainly because:
a) if the way to do that thing changes in the future, you have to remember to touch both implementations.
b) when reading your code, other programmers will wonder whether the two methods exist because they do different things.
Also, it does not fit very well with assigning responsibility for a given task to a specific class, which is one of the tenets of OOP.
Of course, all absolute rules may have special cases where some consideration (mainly performance) suggests breaking the rule. Think about whether you actually gain something by doing so, and document it heavily.
I have a singleton class called SingletonController1.
This SingletonController1 instantiates a bunch of other singleton classes.
SingletonController1 {
    Authenticator - Singleton;
    DBAccessor - Singleton;
    RiskAccessor - Singleton;
}
My question is, what if I rework this design to:
SingletonController2 {
    Authenticator - non-singleton;
    DBAccessor - non-singleton;
    RiskAccessor - non-singleton;
}
As long as SingletonController2 is the only class that instantiates those three non-Singleton classes, wouldn't this be functionally the same as the previous design?
Cheers
Functionality will be the same, but flexibility is much greater in the second case, as the non-singleton classes can be reused elsewhere in your application/system. If they don't need to be singletons, let them not be singletons.
Yes. These two designs accomplish the same thing, given your condition that no class other than SingletonController2 instantiates Authenticator, DBAccessor and RiskAccessor.
I think you are on the right track, but push it further: go right back to the root of your program and you will find you only need one singleton. There's a logical step after that too.
Lately, what I've been doing is using dependency injection frameworks for object creation. They can make a class into a singleton with a single line of configuration that describes how the class is created. That way, if you ever need more than one instance of an object, you just delete that line and adjust the calling code slightly. I've only used a framework built for Unity 3D, so I don't know for certain whether frameworks outside Unity 3D support this, but I have a good feeling they do.
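JVM frameworks do support this. A minimal sketch with Google Guice (my assumption, since the answer above used a Unity 3D framework), where a single binding line decides whether Authenticator behaves as a singleton:

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Singleton;

class Authenticator { }

public class AppModule extends AbstractModule {

    @Override
    protected void configure() {
        // Delete this line (or just the .in(...) clause) and callers get a
        // fresh Authenticator per injection instead of one shared instance.
        bind(Authenticator.class).in(Singleton.class);
    }

    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new AppModule());
        Authenticator a = injector.getInstance(Authenticator.class);
        Authenticator b = injector.getInstance(Authenticator.class);
        System.out.println(a == b); // true while the singleton binding is in place
    }
}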
Does dependency injection mean that you don't ever need the 'new' keyword? Or is it reasonable to directly create simple leaf classes such as collections?
In the example below I inject the comparator, query and dao, but the SortedSet is directly instantiated:
public Iterable<Employee> getRecentHires() {
    SortedSet<Employee> entries = new TreeSet<Employee>(comparator);
    entries.addAll(employeeDao.findAll(query));
    return entries;
}
Just because Dependency Injection is a useful pattern doesn't mean that we use it for everything. Even when using DI, there will often be a need for new. Don't delete new just yet.
One way I typically decide whether or not to use dependency injection is whether or not I need to mock or stub out the collaborating class when writing a unit test for the class under test. For instance, in your example you (correctly) are injecting the DAO because if you write a unit test for your class, you probably don't want any data to actually be written to the database. Or perhaps a collaborating class writes files to the filesystem or is dependent on an external resource. Or the behavior is unpredictable or difficult to account for in a unit test. In those cases it's best to inject those dependencies.
For collaborating classes like TreeSet, I normally would not inject those because there is usually no need to mock out simple classes like these.
One final note: when a field cannot be injected for whatever reason, but I still would like to mock it out in a test, I have found the JUnit-addons PrivateAccessor class helpful for switching the class's private field to a mock object created by EasyMock (or jMock or whatever other mocking framework you prefer).
There is nothing wrong with using new the way it's shown in your code snippet.
Consider the case of wanting to append String snippets. Why would you want to ask the injector for a StringBuilder?
In another situation I faced, I needed a thread running in accordance with the lifecycle of my container. In that case, I had to do a new Thread() because my injector was created after the container-startup callback was called. Once the injector was ready, I hand-injected some managed classes into my Thread subclass.
Yes, of course.
Dependency injection is meant for situations where there could be several possible instantiation targets of which the client may not be aware (or between which it cannot choose) at compile time.
However, there are enough situations where you do know exactly what you want to instantiate, so there is no need for DI.
This is just like invoking functions in object-oriented languages: just because you can use dynamic binding doesn't mean you can't use good old static dispatch (e.g., when you split your method into several private operations).
My thinking is that DI is awesome and great for wiring layers, and also for the pieces of your code that need to be flexible to potential change. Sure, we can say everything can potentially need changing, but we all know that in practice some stuff just won't be touched.
So when DI is overkill I use 'new' and just let it roll.
For example: wiring the Model to the View to the Controller layer is always done via DI. Any algorithms my app uses: DI. Any pluggable reflective code: DI. The database layer: DI. But pretty much any other object in my system is handled with a plain 'new'.
Hope this helps.
It is true that in today's framework-driven environment you instantiate objects less and less. For example, Servlets are instantiated by the servlet container, beans in Spring are instantiated by Spring, etc.
Still, when using a persistence layer, you will instantiate your persisted objects before they are persisted. When using Hibernate, for example, you will call new on your persisted object before calling save on your HibernateTemplate.
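A minimal sketch of that flow, assuming Hibernate's plain Session API (HibernateTemplate works the same way); the Employee entity is illustrative:

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class EmployeeRepository {

    private final SessionFactory sessionFactory; // this part is injected

    public EmployeeRepository(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void hire(String name) {
        // The transient entity is created with plain new; DI only provided
        // the infrastructure (the SessionFactory), not the domain object.
        Employee employee = new Employee(name);
        Session session = sessionFactory.getCurrentSession();
        session.save(employee);
    }
}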