My original question was quite imprecise. I have classes (not POJOs) that expose shortcut methods for the business logic classes, so that the consumer of my API can write:
Connector connector = new ConnectorImpl();
Entity entity = new Entity(connector);
entity.createProperty("propertyName", propertyValue);
entity.close();
Instead of:
Connector connector = new ConnectorImpl();
Entity entity = new Entity();
connector.createEntityProperty(entity, "propertyName", propertyValue);
connector.closeEntity(entity);
Is it good practice to create such shortcut methods?
Old question
At the moment I am developing a small framework and have a pretty nice separation of the business logic into different classes (connectors, authentication tokens, etc.), but one thing still bothers me. I have methods that manipulate POJOs, like this:
public class BusinessLogicImpl implements BusinessLogic {
    public void closeEntity(Entity entity) {
        // business logic
    }
}
And POJO entities which also have a close method:
public class Entity {
    public void close() {
        businessLogic.closeEntity(this);
    }
}
Is it good practice to provide two ways to do the same thing? Or is it better to remove all such "proxy" methods from the POJOs for clarity's sake?
You should remove the methods from the "POJOs"... They aren't really POJOs if you encapsulate functionality like this. The reasoning comes from SOA design principles, which basically say you want loose coupling between the different layers of your application.
If you are familiar with inversion-of-control containers like Google Guice or the Spring Framework, this separation is a requirement. For instance, let's say you have a CreditCard POJO and a CreditCardProcessor service, plus a DebugCreditCardProcessor service that doesn't actually charge the card any money (for testing).
@Inject
private CardProcessor processor;
...
CreditCard card = new CreditCard(...params...);
processor.process(card);
In my example, I am relying on an IoC container to provide me with a CardProcessor. Whether this is the debug one, or the real one... I don't really care and neither does the CreditCard object. The one that is provided is decided by your application configuration.
If you had coupling between the processor and the credit card, so that I could say card.process(), you would always have to pass the processor into the card constructor. CreditCards can be used for other things besides processing, however. Perhaps you just want to load a CreditCard from the database and get the expiration date... It shouldn't need a processor for that simple operation.
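For contrast, here is a minimal sketch of that coupled design (PaymentProcessor and CoupledCreditCard are invented names for illustration, not the code above):

interface PaymentProcessor { // stand-in for the processor service
    void charge(CoupledCreditCard card);
}

class CoupledCreditCard {

    private final PaymentProcessor processor;
    private final String expirationDate;

    CoupledCreditCard(PaymentProcessor processor, String expirationDate) {
        this.processor = processor; // the service is baked into the data object
        this.expirationDate = expirationDate;
    }

    String getExpirationDate() {
        return expirationDate; // even this simple read required a processor
    }

    void process() {
        processor.charge(this);
    }
}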
You may argue: "The credit card could get the processor from a static factory." While true, singletons are widely regarded as an anti-pattern because they require keeping global state in your application.
Keeping your business logic separate from your data model is always a good thing to do to reduce the coupling required. Loose coupling makes testing easier, and it makes your code easier to read.
I do not see your case as "two methods", because the logic of the implementation is kept in businessLogic. It would be akin to asking whether it is a good idea for java.lang.System to have both a getProperties() method and a getProperty(String) method; the latter is not so much a different method as a shortcut to the same data.
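To make the analogy concrete:

public class ShortcutDemo {
    public static void main(String[] args) {
        // getProperty(String) is just a shortcut over the same data that
        // getProperties() already exposes.
        String direct = System.getProperty("java.version");
        String viaMap = System.getProperties().getProperty("java.version");
        System.out.println(direct.equals(viaMap)); // true
    }
}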
But, in general, no, it is not good practice. Mainly because:
a) if the way to do that thing changes in the future, you need to remember that you have to touch two implementations.
b) when reading your code, other programmers will wonder if there are two methods because they are different.
Also, it does not fit very well with assigning responsibilities for a given task to a specific class, which is one of the tenets of OOP.
Of course, all absolute rules may have a special case where some considerations (mainly performance) may suggest breaking the rule. Think if you win something by doing so and document it heavily.
Related
Let's say there is a simple REST controller in a Spring Boot application. This controller receives a CalculationDetails object which is commonly referred to as a request DTO.
@PostMapping
public CalculationResponse calculateAverage(@RequestBody CalculationDetails details) {
    return calculationService.calculateAverage(details);
}
public record CalculationDetails(@JsonProperty("value_1") BigDecimal bd1,
                                 @JsonProperty("value_2") BigDecimal bd2,
                                 @JsonProperty("sourceList") List<CalculationSource> sources) {}
Now, the CalculationDetails class is also used in other service methods and mapper classes which use the same data structure, but these services don't represent the same "feature".
I'm asking myself which of these approaches would make more sense:
Move the CalculationDetails class to a "common" package and reuse it as is in other services.
Keep the CalculationDetails class specific to this controller/service invocation and create a copy, although the copy would look identical (at least for now).
Approach 2 would surely pay off in terms of decoupling, but it comes at the cost of duplication. Approach 1 would allow me to reuse existing code; however, wouldn't it create tighter coupling?
I would say, go for the first approach. Let's say you add 10 more features that need the same DTO with the same structure. Then you would need to duplicate the same code 10 more times. In my eyes that is a form of technical debt and it increases the maintenance cost (in case you change the structure of the DTO).
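To make that concrete, here is a minimal single-file sketch of approach 1 (apart from CalculationDetails itself, all names are invented): two otherwise unrelated features consume the same DTO type, so a structural change is made in exactly one place.

import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.List;

// The shared DTO, e.g. living in a "common" package (JSON annotations omitted here).
record CalculationSource(String name, BigDecimal value) {}
record CalculationDetails(BigDecimal bd1, BigDecimal bd2, List<CalculationSource> sources) {}

// Feature A
class AverageService {
    BigDecimal calculateAverage(CalculationDetails details) {
        return details.bd1().add(details.bd2())
                      .divide(BigDecimal.valueOf(2), 2, RoundingMode.HALF_UP);
    }
}

// Feature B reuses the very same type.
class ReportingService {
    int countSources(CalculationDetails details) {
        return details.sources().size();
    }
}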
I have the following class hierarchy
Promotion - abstract
- Coupon
- Sales
- Deals
(Coupons, Sales and Deals are all subclasses of Promotion).
and would like to determine the type of the object when exchanging data between the REST API (JSON) and the client (Angular). Users can submit a Coupon, a Deal or a Sale. For instance, when a coupon is sent from the client, I want to be able to know that it is a coupon so that I can call the correct method.
To solve this problem I have declared a variable and an abstract method in Promotion.
protected String promotionType = getPromotionType();
protected abstract String getPromotionType();
In the subclasses, for instance in Coupon, I have something like this:
@Override
protected String getPromotionType() {
    return "coupon";
    // or: return this.getClass().getSimpleName().toLowerCase();
}
This will automatically initialize the promotionType variable, so that in the controllers I can check whether the object is a Coupon, a Sales or a Deals. Remember that JSON sends data as strings, so I must have a way to determine the type of the incoming object.
In this case I will have a single controller to handle all my CRUD operations. In my controller method I will do something like:
@PostMapping
public void create(@RequestBody Promotion promotion) {
    // and inside here I will check the type via promotionType
}
Here I am using Promotion as the argument type instead of any of the subclasses in the create() method.
My question is, is it the best way to solve this?
Or do I have to create a separate Controller for each of the subclasses? I am looking for the best way to do it in the real world.
I am using Hibernate for my mappings.
My question is, is it the best way to solve this?
Answers to this question will always be opinion-based, especially as we don't know your entire application, not only technically but business-wise, nor how the client code consumes and displays the data.
Or do I have to create a separate Controller for each of the subclasses?
No, not necessarily. If the code is simple and will probably stay simple - sometimes you can anticipate this - it doesn't make sense to inflate the code. Having three controllers instead of a single PromotionController will very likely increase redundant code. Otherwise, if the subclasses are rather heterogeneous, three controllers could be more advisable.
Another thought: if you have a (human) client that manages only the Deals, and that client has special requirements leading to a bunch of customized REST interfaces only for Deals, you'd probably want a separate controller.
I am looking for the best way to do it in the real world.
There is no best way. Five developers will probably have five opinions on how to solve this. And even if one is more reasonable for the time being, that may change the next day due to new or changed business requirements.
The best way is to discuss this in the team, build a common understanding and, if unsure, let the lead architect decide which way to go. IMO, your approach seems quite OK. That's my 2 cents.
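One more note: if the promotionType string is also meant to tell Jackson which concrete subclass to deserialize the request body into, Jackson's polymorphic type handling can do that mapping for you. A rough sketch (the per-subclass type names here are assumptions):

import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;

// Jackson reads the promotionType property and instantiates the matching
// subclass, so the controller receives a Coupon, Sales or Deals directly.
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "promotionType")
@JsonSubTypes({
    @JsonSubTypes.Type(value = Coupon.class, name = "coupon"),
    @JsonSubTypes.Type(value = Sales.class, name = "sales"),
    @JsonSubTypes.Type(value = Deals.class, name = "deals")
})
public abstract class Promotion {
    // fields shared by Coupon, Sales and Deals
}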
Service1 injects Repository1. Service2 injects Repository2.
Suppose two different scenarios:
1) Some method of Service2 needs to retrieve data from Repository1. Should Service2 inject Service1 or Repository1, given that both of them provide a respective get() method?
2) Some method of Service1, at its end, should call another method from Service2. Is it bad practice to inject Service2 into Service1 for such needs? Is it good practice to use event-listening techniques like AOP for such needs?
There are many factors to consider here when we talk about best practices.
As a good start, try to understand the concept of SOLID principles.
Generally, it is good to have multiple classes with very focused roles that call each other, rather than combining all functionality in one class. That gives high reusability and little code duplication, which in turn gives maintainability.
For scenario 1.)
It is perfectly fine to have a service calling another service if the business code defined in that method is the same business functionality needed by the other service. This follows the DRY principle: no redundant code.
But it is also perfectly fine to call the DAO directly from a service, instead of having a different service do that for you, if it is just a simple call with no further business logic. Especially if the two services are in the same module anyway, there is no strong reason to make another service a bridge class for an obviously simple single line of code, unless you want to abstract it - but in your case it's just a simple get call.
For scenario 2.)
Another thing to consider is modularity and the direction of dependencies. If the services call each other, there could be a problem in your design. As much as possible, avoid circular dependencies between different modules, because they can lead to spaghetti code. It is better to extract the shared code into a separate class declared in a common module that can be shared by many modules.
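As a rough sketch of that last point (Service1, Service2 and Repository1 are from your question; SharedLookup is an invented name), both services can depend on a small shared component instead of on each other, so the dependency arrows point one way only:

interface Repository1 {
    String get(long id); // stand-in for the real query
}

class SharedLookup { // would live in the common module
    private final Repository1 repository1;

    SharedLookup(Repository1 repository1) {
        this.repository1 = repository1;
    }

    String byId(long id) {
        return repository1.get(id);
    }
}

class Service1 {
    private final SharedLookup lookup;
    Service1(SharedLookup lookup) { this.lookup = lookup; }
}

class Service2 {
    private final SharedLookup lookup; // no need to inject Service1
    Service2(SharedLookup lookup) { this.lookup = lookup; }
}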
Final note: as Robert Martin says, you won't write the cleanest code in one go. The best code is forged by continuous refactoring and code cleanup. To quote Robert Martin:
The Boy Scouts have a rule: "Always leave the campground cleaner than you found it."
I am not greatly experienced with this problem, but personally I would avoid coupling controllers. My first approach would be to create an interface that fits all models, if possible. It would then be possible to create a model that wires multiple models together to access the data you need without adding references to the controller. For instance:
class Model1 implements iModel {}

class Model2 implements iModel {}

class ModelWrapper implements iModel {
    private iModel model1;
    private iModel model2;

    public ModelWrapper(iModel model1, iModel model2) {
        this.model1 = model1;
        this.model2 = model2;
    }

    public SomeDataType getSomeValue() {
        SomeDataType someObject = new SomeDataType();
        someObject.param1 = model1.method();
        someObject.param2 = model2.method();
        return someObject;
    }
}
I am sure there is a better way to handle the number of models passed into the constructor, and also a way to search each model for the data you are looking for. If the data is not found, a null reference or, better, a custom error could be returned or thrown. If the implementation is consistent, perhaps the wrapper could combine all models and allow access to many custom combinations. At least this way, when requirements change, you can simply add an additional wrapper to get what you need without changing the current implementation.
Perhaps a more experienced developer will build on my response to provide you with a better implementation, but I hope this helps.
I have the following situation:
Three concrete service classes implement a service interface: one is for persistence, another deals with notifications, and the third adds points for specific actions (gamification). The interface has roughly the following structure:
public interface IPhotoService {
    void upload();
    Photo get(Long id);
    void like(Long id);
    // etc...
}
I did not want to mix the three types of logic in one service (or, even worse, in the controller class) because I want to be able to change them (or shut them off) without any problems. The problem comes when I have to inject a concrete service into the controller. Usually, I create a fourth class, named roughly ApplicationNamePhotoService, which implements the same interface and works as a wrapper (mediator) between the other three services: it gets input from the controller and calls each service correspondingly. It is a working approach, but one that creates a lot of boilerplate code.
Is this the right approach? Currently I am not aware of a better one, but I would highly appreciate knowing whether it is possible to declare the execution sequence declaratively (in the context) and to inject the controller with an on-the-fly generated wrapper instance.
Also, it would be nice to cache some things between the three services. For example, all of them use DAOs, i.e. they sometimes make the same calls to the DB over and over again. If all the logic were in one place, that could have been avoided, but now... I know that it is possible to enable some request- or session-based caching. Can you suggest some example code? BTW, I am using Hibernate for the persistence part. Is there already some caching provided (probably, if the calls reside in the same transaction or something - with that one I am totally lost)?
The service layer should consist of classes whose methods are units of work - actions that belong in the same transaction. It sounds like you are mixing service classes when they could be in the same class and method. You can also inject service classes into one another when required, rather than create another "mediator".
It is perfectly acceptable to "mix the three types of logic"; in fact, it is preferable if they form an expected use case/unit of work.
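As a rough sketch of what that could look like (the collaborator names below are assumptions, not your actual classes), one service method forms the unit of work and delegates to injected collaborators:

class Photo {}

interface PhotoRepository { Photo load(Long id); }
interface NotificationService { void notifyOwner(Photo photo); }
interface GamificationService { void awardPoints(Photo photo); }

class PhotoService {

    private final PhotoRepository photos;
    private final NotificationService notifications;
    private final GamificationService gamification;

    PhotoService(PhotoRepository photos,
                 NotificationService notifications,
                 GamificationService gamification) {
        this.photos = photos;
        this.notifications = notifications;
        this.gamification = gamification;
    }

    // One unit of work: persistence, notification and gamification are
    // collaborators inside a single method, not three parallel IPhotoService impls.
    void like(Long id) {
        Photo photo = photos.load(id);
        notifications.notifyOwner(photo);
        gamification.awardPoints(photo);
    }
}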
For caching, I would look at Ehcache, which is, I believe, well integrated with Hibernate.
I've taken the plunge and used Guice for my latest project. Overall impressions are good, but I've hit an issue that I can't quite get my head around.
Background: It's a Java6 application that accepts commands over a network, parses those commands, and then uses them to modify some internal data structures. It's a simulator for some hardware our company manufactures. The changes I make to the internal data structures match the effect the commands have on the real hardware, so subsequent queries of the data structures should reflect the hardware state based on previously run commands.
The issue I've encountered is that the command objects need to access those internal data structures. Those structures are being created by Guice because they vary depending on the actual instance of the hardware being emulated. The command objects are not being created by Guice because they're essentially dumb objects: they accept a text string, parse it, and invoke a method on the data structure.
The only way I can get this all to work is to have those command objects be created by Guice and pass in the data structures via injection. It feels really clunky and totally bloats the constructor of the data objects.
What have I missed here?
Dependency injection works best for wiring services. It can be used to inject value objects, but this can be a bit awkward especially if those objects are mutable.
That said, you can use Providers and @Provides methods to bind objects that you create yourself.
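For example, a rough sketch of the @Provides route (HardwareState and TextCommand are invented stand-ins for your internal structures and command objects):

import com.google.inject.AbstractModule;
import com.google.inject.Provides;

class HardwareState {} // the Guice-managed internal data structure

class TextCommand {
    private final HardwareState state;

    TextCommand(HardwareState state) {
        this.state = state;
    }

    void execute(String line) {
        // parse the line and mutate the hardware state
    }
}

class SimulatorModule extends AbstractModule {

    // The "dumb" command object is still constructed by hand; only its
    // dependency is supplied by the injector when a TextCommand is requested.
    @Provides
    TextCommand provideTextCommand(HardwareState state) {
        return new TextCommand(state);
    }
}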
Assuming that responding to a command is not that different from responding to a http request, I think you're going the right path.
A commonly used pattern in http applications is to wrap logic of the application into short lived objects that have both parameters from request and some backends injected. Then you instantiate such object and call a simple, parameterless method that does all magic.
Maybe scopes could inspire you somehow? Look into the documentation and some code examples for the technical details. In code it looks more or less like this. Here's how it might work for your case:
class MyRobot {
    Scope myScope;
    Injector i;

    public void doCommand(Command c) {
        myScope.seed(Key.get(Command.class), c);
        i.getInstance(Handler.class).doSomething();
    }
}
class Handler {
    private final Command c;
    private final Hardware h;

    @Inject
    public Handler(Command c, Hardware h) {
        this.c = c;
        this.h = h;
    }

    public boolean doSomething() {
        h.doCommand(c);
        // or c.modifyState(h) if you want c to access the internals of h
        return true;
    }
}
Some people frown upon this solution, but I've seen this in code relying heavily on Guice in the past in at least two different projects.
Granted, you'll inject a few value objects in the constructors, but if you think of them not as value objects but rather as parameters of the class that change its behaviour, it all makes sense.