On properly implementing complex service layers - java

I have the following situation:
Three concrete service classes implement a service interface: one is for persistence, the other deals with notifications, the third deals with adding points to specific actions (gamification). The interface has roughly the following structure:
public interface IPhotoService {
    void upload();
    Photo get(Long id);
    void like(Long id);
    // etc...
}
I did not want to mix the three types of logic into one service (or even worse, into the controller class) because I want to be able to change them (or shut them off) without any problems. The problem comes when I have to inject a concrete service into the controller to use. Usually, I create a fourth class, named roughly ApplicationNamePhotoService, which implements the same interface and works as a wrapper (mediator) around the other three services: it receives input from the controller and calls each service in turn. It is a working approach, though one that creates a lot of boilerplate code.
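For reference, this is roughly what such a wrapper looks like with plain constructor injection. It is only a sketch: the bean names in the @Qualifier annotations and the use of @Primary are assumptions about how the three concrete services are registered, not something from the question.
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Primary;
import org.springframework.stereotype.Service;

@Service
@Primary // assumption: make this the bean the controller receives when it asks for IPhotoService
public class ApplicationNamePhotoService implements IPhotoService {

    private final IPhotoService persistence;
    private final IPhotoService notifications;
    private final IPhotoService gamification;

    public ApplicationNamePhotoService(@Qualifier("persistencePhotoService") IPhotoService persistence,
                                       @Qualifier("notificationPhotoService") IPhotoService notifications,
                                       @Qualifier("gamificationPhotoService") IPhotoService gamification) {
        this.persistence = persistence;
        this.notifications = notifications;
        this.gamification = gamification;
    }

    @Override
    public void like(Long id) {
        persistence.like(id);   // store the like
        notifications.like(id); // notify the photo owner
        gamification.like(id);  // award points for the action
    }

    @Override
    public void upload() {
        persistence.upload();
        notifications.upload();
        gamification.upload();
    }

    @Override
    public Photo get(Long id) {
        return persistence.get(id); // only the persistence service can answer a read
    }
}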
Is this the right approach? Currently, I am not aware of a better one, although I would highly appreciate knowing whether it is possible to declare the execution sequence declaratively (in the context) and to inject the controller with an on-the-fly generated wrapper instance.
Also, it would be nice to cache some data between the three services. For example, all of them use DAOs, i.e. they sometimes make the same calls to the DB over and over again. If all the logic were in one place that could have been avoided, but now... I know that it is possible to enable some request- or session-based caching. Can you suggest some example code? By the way, I am using Hibernate for the persistence part. Is there already some caching provided (probably, if the calls reside in the same transaction or something - with that one I am totally lost)?

The service layer should consist of classes whose methods are units of work - actions that belong in the same transaction. It sounds like you are spreading logic across service classes when it could live in the same class and method. You can also inject service classes into one another when required, rather than create another "mediator".
It is perfectly acceptable to "mix the three types of logic"; in fact, it is preferable if together they form an expected use case/unit of work.
For caching, I would look at Ehcache, which is, I believe, well integrated with Hibernate.
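Two things are worth knowing here. First, Hibernate's first-level cache is always on: within one Session/transaction, loading the same entity by id repeatedly results in a single SELECT, so if your three services share a transaction-scoped Session they already benefit. Second, for caching across sessions you enable the second-level cache, for which Ehcache is a common provider. A hedged sketch of what that looks like on an entity follows; the property names are the standard Hibernate ones, but the exact region factory class depends on your Hibernate/Ehcache versions.
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Properties to switch the second-level cache on (hibernate.cfg.xml / persistence.xml):
//   hibernate.cache.use_second_level_cache = true
//   hibernate.cache.region.factory_class   = org.hibernate.cache.ehcache.EhCacheRegionFactory

@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Photo {

    @Id
    private Long id;

    // other fields, getters and setters ...
}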


Spring entities and business logic?

I am developing a Spring application with the three layers most Spring apps have: REST controllers in front, services in the middle, and JPA repositories behind them. The entities mapped to the DB are, in my case, plain old Java objects (POJOs) with only some fields, getters and setters, which is what I usually prefer; I don't want to put any business logic in there. However, in this project I find that in a lot of services I am repeating the same piece of code, something like this:
User user = userRepository.findUserByName("some name here");
if (user == null) {
    throw new UserNotFoundException("User not found");
}
Now, this is not only for a single entity; there are many other similar repeated parts too. So I have started to worry about it and to look for places to push that code and eliminate the repetition. One thing that makes sense, as stated in domain-driven design, is to put that business logic inside the entity, so that it holds both data and part of the business logic. Is that a common practice?
Pretty much looks like a simple code reuse problem. If you are always throwing the same exception in all contexts then what about implementing a findExistingUserByName method on the repository that throws if the user doesn't exist?
Your code would become:
User user = userRepository.findExistingUserByName("username");
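A minimal sketch of such a method, assuming Spring Data JPA (which allows default methods on repository interfaces) and that UserNotFoundException is an unchecked exception - both assumptions, since the question doesn't say:
import org.springframework.data.jpa.repository.JpaRepository;

public interface UserRepository extends JpaRepository<User, Long> {

    // the existing derived query, which may return null
    User findUserByName(String name);

    // new method: throws instead of returning null, so callers stop repeating the check
    default User findExistingUserByName(String name) {
        User user = findUserByName(name);
        if (user == null) {
            throw new UserNotFoundException("User not found: " + name);
        }
        return user;
    }
}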
If you do not want to change the repository contract you could also implement a UserFinderService at the application level which wraps over a UserRepository and provides that service-level behavior.
Another more generic idea could be to implement a generic method and make it available to your application services either by inheritance, composition or a static class which would allow you to do something like:
withExistingAggregate<User>(userRepository.findUserByName("username"), (User user) -> ...)
You can return Optional<User> from the repository in this and similar cases.
Then your service code will look like:
userRepository.findUserByName("some name here")
        .ifPresent(user -> doSomethingWithUser(user));
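And if the caller actually needs the user and the original "not found" exception, the same Optional keeps the check in one expression. A sketch, assuming findUserByName is declared to return Optional<User> and UserNotFoundException is unchecked:
User user = userRepository.findUserByName("some name here")
        .orElseThrow(() -> new UserNotFoundException("User not found"));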

MVC practices. Service within another service

Service1 injects Repository1. Service2 injects Repository2.
Suppose two different scenarios:
1) Some method of Service2 needs to retrieve data from Repository1. Should Service2 inject Service1 or Repository1, when both of them provide a respective get() method?
2) Some method of Service1, at its end, should call another method from Service2. Is it a bad practice to inject Service2 into Service1 for such needs? Is it a good practice to use event-listening techniques like AOP for such needs?
There are many factors to consider here when we talk about best practices.
As a good start, try to understand the SOLID principles.
Generally, it is good to have multiple classes with very focused roles that call each other, rather than combining all functionality in one class. That gives high reusability and little code duplication, which in turn gives maintainability.
For scenario 1.)
It is perfectly fine to have a service calling another service if the business code defined in that method is the same business functionality needed by the other service. This follows the DRY principle: no redundant code.
But it is also perfectly fine to call the DAO directly from a service, instead of calling a different service to do that for you, if it is just a simple call with no further business logic. Especially if the two services are in the same module anyway, there is no strong reason to make another service a bridge class for an obviously simple single line of code, unless you want to abstract it; in your case, it's just a simple get call.
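As a rough sketch of the two choices for scenario 1 (all class, method and type names below are illustrative, not from the question):
import org.springframework.stereotype.Service;

@Service
public class Service2 {

    // Option A: inject Service1 when its method carries business rules you want to reuse.
    private final Service1 service1;

    // Option B: inject Repository1 directly when all you need is a plain lookup.
    private final Repository1 repository1;

    public Service2(Service1 service1, Repository1 repository1) {
        this.service1 = service1;
        this.repository1 = repository1;
    }

    public SomeData loadViaService(Long id) {
        return service1.get(id);    // goes through Service1's logic/validation
    }

    public SomeData loadDirectly(Long id) {
        return repository1.get(id); // bypasses Service1 for a simple read
    }
}
In practice you would normally pick one of the two dependencies rather than hold both; they are shown side by side here only to contrast the options.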
For scenario 2.)
Another thing to consider is modularity and the direction of dependencies. If the services call each other, there could be a problem in your design; as much as possible, avoid circular dependencies between different modules, because this can lead to spaghetti code. It is better to extract the shared code into a separate class declared in a common module that can be shared by many modules.
As a final note, and as Robert Martin says, you won't be able to write the cleanest code in one round. The best code is forged by continuous refactoring and code cleanup. To quote Robert Martin,
The Boy Scouts have a rule: "Always leave the campground cleaner than you found it."
I am not greatly experienced with this problem, but personally I would avoid coupling controllers. My first approach would be to try to create an interface that fits all models, if possible. It would then be possible to create a model that wires multiple models together to access the data you need without adding references to the controller. For instance:
class Model1 implements iModel { /* ... */ }
class Model2 implements iModel { /* ... */ }

class ModelWrapper implements iModel {

    private final iModel model1;
    private final iModel model2;

    public ModelWrapper(iModel model1, iModel model2) {
        this.model1 = model1;
        this.model2 = model2;
    }

    public SomeDataType getSomeValue() {
        // combine values from both wrapped models into one result object
        SomeDataType someObject = new SomeDataType();
        someObject.param1 = model1.method();
        someObject.param2 = model2.method();
        return someObject;
    }
}
I am sure there is a better way to handle the number of models passed into the constructor, and also a way to search each model for the data you are looking for. If the data is not found, a null reference could be returned or, better, a custom exception thrown. If the implementation is consistent, perhaps the wrapper could combine all models and allow access to many custom combinations. At least this way, when requirements change you can simply add an additional wrapper to get what you need without changing the current implementation.
Perhaps a more experienced developer will build on my response to provide you a better implementation, but I hope this helps.

Calling one DAO from another DAOFactory

Currently, my application architecture flows like this:
View → Presenter → Some asynchronous executor → DAOFactory → DAO (interface) → DAO (Impl)
For the time being, this kind of architecture works, mainly because I've only needed one kind of DAO so far. But as the requirements grow, I'll need to expand to multiple DAOs, each with its own implementation of how to get the data.
Here's an illustration of my case:
The main headache comes from FooCloudDao which loads data from an API. This API needs some kind of authentication method - a string token that was stored during login (say, a Session object - yes, this too has its own DAO).
It's tempting to just pass a Session instance through FooDaoFactory, just in case there's no connection, but it seems hackish and counter-intuitive. The next thing I could imagine is to access the SessionDAOFactory from within FooDaoFactory to get hold of a Session instance (and then pass that along when I need a FooCloudDAO instance).
But as I said, I'm not sure whether I could do a thing like this - well, maybe I could, but is this really the correct way of doing it?
I presume your problem is actually that FooCloudDao has different "dependencies" than other components, and you want to avoid passing the dependencies through every class on the way.
Although there are quite a few design patterns which would kind of solve your problem, I would suggest taking a look at Dependency Injection / Inversion of Control principles and frameworks. What you would do with this is:
You would create an interface for what your FooCloudDao needs, for example:
interface ApiTokenProvider {
    String getToken();
}
You would create an implementation of that interface which gets the token from the session, or wherever that thing comes from:
class SessionBasedApiTokenProvider implements ApiTokenProvider {

    @Override
    public String getToken() {
        // get the token from the session here
        return null; // placeholder for the real session lookup
    }
}
The class defined above would need to be registered with the IoC container of your choice as the implementation of the ApiTokenProvider interface (so that whoever asks for an ApiTokenProvider is decoupled from the actual implementation -> the container hands them the proper implementation).
You would have something called constructor injection on your FooCloudDao class (this is later used by the container to "inject" your dependency):
public FooCloudDao(ApiTokenProvider tokenProvider) {
    // store the provider so that the class can use it later where needed
}
Your FooDaoFactory would use the IoC container to resolve the FooCloudDao with all its dependencies (so you would not instantiate the FooCloudDao with new).
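A minimal Spring-flavoured sketch of those last steps; the configuration class, bean methods and factory shape are assumptions (any IoC container offers equivalents), and each class would live in its own file:
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DaoConfig {

    @Bean
    ApiTokenProvider apiTokenProvider() {
        // whoever asks for an ApiTokenProvider gets the session-based implementation
        return new SessionBasedApiTokenProvider();
    }

    @Bean
    FooCloudDao fooCloudDao(ApiTokenProvider tokenProvider) {
        // constructor injection: FooCloudDao never touches the session itself
        return new FooCloudDao(tokenProvider);
    }
}

class FooDaoFactory {

    private final ApplicationContext context;

    FooDaoFactory() {
        this.context = new AnnotationConfigApplicationContext(DaoConfig.class);
    }

    FooCloudDao createFooCloudDao() {
        // the container resolves the dependency chain instead of `new` being used here
        return context.getBean(FooCloudDao.class);
    }
}
In a real application the context usually already exists and the factory (or the DAO's consumers) would simply be beans inside it; the standalone context here only keeps the sketch self-contained.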
When following these steps you will make sure that:
FooDaoFactory stays free of dependencies that are only passed through
you make your code much more testable, because you can test your FooCloudDao without the real session (you can just give it a fake implementation of the interface)
and all other benefits which come with Inversion of Control...
Note on the session: if you run into the problem of getting hold of the session inside the SessionBasedApiTokenProvider - most of the time the session itself is also registered with the IoC container and injected where needed.

How to decouple a module which listens on a hibernate event from the entities themselves?

I have a layered web application driven by Spring/JPA/Hibernate, and I'm now trying to integrate Elasticsearch (a search engine).
What I want to do is capture all postInsert/postUpdate events and send those entities to Elasticsearch so that it will reindex them.
The problem I'm facing is that my "dal-entities" project will have a runtime dependency on the "search-indexer" and the "search-indexer" will have a compile dependency on "dal-entities" since it needs to do different things for different entities.
I thought about making the "search-indexer" part of the DAL (since it can be argued that it performs operations on the data), but even then it would belong in the DAO section.
I think my question can be rephrased as: how can I have logic in a Hibernate event listener that cannot be encapsulated solely in the entities project (since it is not its responsibility)?
Update
The reason the dal-entities project depends on the indexer is that I need to configure the listener in the Spring configuration file which is responsible for the JPA context (and which obviously resides in dal-entities).
The dependency is not compile-time scope but runtime scope (since at runtime the Hibernate context will need that listener).
The answer is Interfaces.
Rather than depend on the various classes directly (in either direction), you can depend on interfaces that surface the capabilities you need. This way you are not directly dependent on the classes: the interfaces required by the "dal-entities" can, for example, live in the same package as the dal-entities, and the indexer simply implements them.
This doesn't fully remove the dependency, but it does give you much looser coupling and makes your application a bit more flexible.
If you are still worried about things being too tightly coupled or if you really don't want the two pieces to be circularly dependent at all, then I would suggest you re-think your application design. Asking another question here on SO with more details about some of your code and how it could be better structured would be likely to get some good advice on how to improve the design.
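Concretely, a small interface owned by dal-entities (or a tiny shared module) is enough to flip the dependency; the names below are invented for illustration. The Hibernate listener configured in the JPA context then only holds an EntityIndexer reference, so dal-entities never compiles against the search-indexer:
// lives in dal-entities (or a small shared module): pure abstraction, no search code
public interface EntityIndexer {
    void index(Object entity);
}

// lives in search-indexer: depends on dal-entities, never the other way round
public class ElasticSearchIndexer implements EntityIndexer {

    @Override
    public void index(Object entity) {
        // map the entity and push it to the Elasticsearch index here
    }
}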
Hibernate supports PostUpdateEventListener and PostInsertEventListener.
Here is a good example that might suit your case.
The main concept is being able to detect when your entity has changed and react to it, as shown here:
public class ElasticSearchListener implements PostUpdateEventListener {

    @Override
    public void onPostUpdate(PostUpdateEvent event) {
        if (event.getEntity() instanceof ElasticSearchEntity) {
            callSearchIndexerService(event.getEntity());
            // or: InjectedClass.act(event.getEntity());
            // or: callWebService(InjectedClassUtility.modifyData(event.getEntity()));
        }
    }
}
Edit
You might consider injecting the class that you want to keep isolated from the project (the one that holds the logic) using Spring.
Another option might be calling an external web service that is not dependent on your code, passing it either your original project object or one that has been modified by a utility to fit Elasticsearch.

Java custom annotation to restrict access to a method

I am building an application with two layers: a web layer and a business layer.
Inside the business layer I have some public methods that can be called either from within the business layer or from the web layer.
I only want some of these methods to be called from the web layer (the safe ones).
I was wondering if I can create an annotation in my business layer, for example @Public, meaning I can call this method from the web layer, and @Private, meaning I should not use this method from the web layer.
And when I try to call a @Private method from the web layer (in Eclipse), could it give me a warning?
Also: is there a way to automatically list all these private and public methods?
AFAIK you can't make Eclipse use annotations to determine whether you can access a method from a certain file. For this to be possible, Eclipse would have to know whether the file is part of the web layer or the business layer.
In order to list all methods having a certain annotation, you could use reflection at runtime. In Eclipse there might be filters, but I don't know of any annotation-based ones.
Maybe you should choose another approach; I'll briefly describe how we do it:
We have two interfaces that our services may implement:
one public interface that contains all the methods the web layer may see
one private interface that contains all the methods internal to the business logic
We split those interfaces into two Eclipse projects - one public API project and one implementation project that contains the services and the internal API - and only allow the web layer access to the public API project.
Since our services (EJB 3.0) need an interface, we have to add the internal one if we have internal methods. However, with other technologies (like EJB 3.1) you might just provide the public interface.
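For illustration, the split looks roughly like this (the names are invented; the point is that the web layer's project only ever references the public interface, and each type would live in its own file/project as described above):
// public API project - the only project the web layer is allowed to depend on
public interface OrderServicePublic {
    void placeOrder(Long orderId);
}

// implementation project - internal API, invisible to the web layer
public interface OrderServiceInternal {
    void recalculateTotals(Long orderId);
}

// implementation project - the bean implements both interfaces
public class OrderServiceBean implements OrderServicePublic, OrderServiceInternal {

    @Override
    public void placeOrder(Long orderId) {
        // safe to call from the web layer
    }

    @Override
    public void recalculateTotals(Long orderId) {
        // business-layer internal
    }
}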
Another approach might be to split the interfaces into two packages, e.g. myproject.api.pub (public is a keyword) and myproject.api.internal, and then use package-based filters in Eclipse.
The first thing that comes to mind is that this would only be needed in a badly designed two-layer app.
You can use access control to make sure that the web GUI can only reach the safe methods, and keep the other methods internal to the business layer.
It is probably possible to simply make those "public" methods that you don't want the web interface to use private; that way you can still use them from the public, safe methods in the business logic.
Though without knowing how your project is set up, giving concrete examples is kind of impossible.
But say you have:
com.somecompany.gui > contains all web GUI stuff
com.somecompany.logic > contains business logic.
In the logic package, you create classes that have public methods to be used from the GUI, and private or - if needed by other logic components - package-private methods that cannot be accessed from the gui package. That way you can separate your logic from the interface without needing the annotation you want to create.
In general I'd say: yes, it could work. At least to produce compiler warnings.
We have the @Override annotation for methods. This annotation is used for a similar reason: to verify at compile time that certain conditions are met. In this case: the annotated method overrides a method from a superclass, or it implements an interface method or an abstract method. If the verifier finds out that this is not the case, the compiler will produce a compile-time error.
So it should be possible here too. We could think of annotations like:
@Layer("servicelayer") // class annotation
@Private(layer = "servicelayer") // method annotation
And now we could verify at compile time that annotated methods are only called from classes that carry the same layer annotation. If the condition is not met, the compiler could produce a warning (in other words: the compiler could detect if we accidentally call an internal service-layer method from a web-layer class).
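The annotations themselves would be easy to declare; the real work lies in the verifier (for example an annotation processor or a static-analysis rule). A sketch of just the declarations, with retention chosen so a compile-time tool can read them from class files (each annotation would go in its own file):
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.CLASS) // kept in the class file for a compile-time checker
public @interface Layer {
    String value();
}

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.CLASS)
public @interface Private {
    String layer();
}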
