Drawbacks of a Pseudo Strategy Pattern - Java

I'm working on a project where I need to call many different services, and I wanted to abstract out as much of the common logic as I can. The kicker is that I also want to return custom objects instead of something like raw JSON. As I designed a way of doing this, I arrived at a paradigm that reminds me of a Strategy Pattern, but I don't quite think it fits. I'm wondering if there are any design flaws or dangers in how I've done this.
Basically, I've created an abstract class (ServiceCall) that holds the common logic for calling a service (hold an HTTP client and get a JSON response from the desired service). This is all done with the public method call(Map<String, String> params, Map<String, String> headers). The abstract class also defines two abstract methods: createDTO(Map<String, String> params, Map<String, String> headers) and parseResponseToObject(String json). I'll explain the purpose of these in just a second.
Each call to a different service is created with a class that extends ServiceCall and implements the abstract methods. createDTO creates a custom object that contains all the information needed to call the service (URL and headers). parseResponseToObject takes a JSON response and turns it into the Java object I want to use later in my code.
Here is a simple implementation of what I did
ServiceCall
public abstract class ServiceCall {

    protected abstract Object parseResponseToObject(String json);

    protected abstract CustomServiceDTO createDTO(Map<String, String> keys,
            Map<String, String> headers);

    public Object call(Map<String, String> params, Map<String, String> headers) {
        // create and configure a CustomServiceDTO to call services with
        CustomServiceDTO dto = createDTO(params, headers);
        String json;
        try {
            // make the service request
            json = getServiceResponse(dto);
        } catch (Exception e) {
            return new CustomServiceError(e);
        }
        // parse the response into the desired java object
        return parseResponseToObject(json);
    }

    private String getServiceResponse(CustomServiceDTO dto) {
        // use common logic to call the service using the information provided
        // by the dto
        String dummyResponse = "{\"prop\":\"value\"}";
        return dummyResponse;
    }
}
SampleImplementation
public class AddressCall extends ServiceCall {

    @Override
    protected Object parseResponseToObject(String json) {
        return new Address(json);
    }

    @Override
    protected CustomServiceDTO createDTO(Map<String, String> keys,
            Map<String, String> headers) {
        CustomServiceDTO dto = new CustomServiceDTO();
        // a Map has no positional access, so entries are looked up by key
        dto.setUrl("www.getMyDummyAddress.com/" + keys.get("param") + "=" + keys.get("value"));
        dto.setHeaders(headers);
        return dto;
    }
}
The main drawback I see is that it feels a little strange that createDTO and parseResponseToObject are never called directly.
The main advantage for me is the ease of use and having my responses returned as Java objects.
What I really want to know is: are there any other concerns or drawbacks to this paradigm? It's very nice in the code, but I admit it seems a bit different from how Java is normally used.

This is not different from how Java is normally used, this is called a Template Method design pattern.
It's pretty common, except for the use of Object, which is better replaced by a generic type.
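To make that concrete, here is a minimal sketch of the same Template Method shape with a type parameter replacing Object. The class names, the trivial "parser", and the dummy JSON response are all illustrative stand-ins, not the original poster's code:

```java
import java.util.Map;

// Template Method with a type parameter instead of Object (illustrative sketch)
abstract class TypedServiceCall<T> {

    protected abstract T parseResponseToObject(String json);

    protected abstract String buildUrl(Map<String, String> params);

    public T call(Map<String, String> params) {
        // the fixed algorithm: build the request, fetch, then parse
        String json = getServiceResponse(buildUrl(params));
        return parseResponseToObject(json);
    }

    private String getServiceResponse(String url) {
        // stand-in for the real HTTP call
        return "{\"prop\":\"value\"}";
    }
}

class StringLengthCall extends TypedServiceCall<Integer> {

    @Override
    protected Integer parseResponseToObject(String json) {
        return json.length(); // deliberately trivial "parse" for demonstration
    }

    @Override
    protected String buildUrl(Map<String, String> params) {
        return "example.com/" + params.get("id");
    }
}
```

Callers then get a typed result without casting, e.g. `Integer len = new StringLengthCall().call(Map.of("id", "42"));`, and a wrong assignment becomes a compile error instead of a runtime ClassCastException.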


How to avoid null implementation of interface methods

There may be some problems in my design too. Here is my problem:
I have AbstractCustomAuthHandler which first;
gets IUser (fetches users via the implementing logic)
then calls the IUser's IRole object's function (fetches roles via the implementing logic)
So at the beginning of the design;
every IUser implementation has some IRole logic
those are separated because they are separate REST calls in separate microservices
but I related them with an IUser has-an IRole relation
But now there are some implementations where an IUser implementation should not have an IRole object. For now I'm returning null for these implementations, and I don't like it. I thought about splitting the interfaces but couldn't find a solution that satisfies AbstractCustomAuthHandler too. Here is a diagram and the code:
Here is the some part of AbstractCustomAuthHandler
IUser userAuth = this.getUserAuth();
final Response userResponse = userAuth.findUserBy();
// ...
Map<String, Object> attributes = userAuth.getMappedAttributes();
// ...
IRole roleAuth = userAuth.getRoleAuth();
if (roleAuth != null)
{
    final Response rolesResponse = roleAuth.findRolesBy();
}
// ....
Here is AuthMethodWithoutRole that I have problem about returning null
public class AuthMethodWithoutRole implements IUser
{
    @Override public Response findUserBy( )
    {
        // some logic
    }

    @Override public IRole getRoleAuth( )
    {
        return null;
    }
}
Here is IUser interface
public interface IUser extends IAuth
{
Response findUserBy();
IRole getRoleAuth();
}
Here is IRole interface
public interface IRole
{
Response findRolesBy( );
}
Why not just create a class NullRole that implements IRole?
Then you don't need AuthMethodWithoutRole at all: you can just use your default AuthMethod dealing with a NullRole.
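A minimal Null Object sketch of that suggestion — the Response stub below is an assumption standing in for the real response type, which is not shown in the post:

```java
// Stub for the real response type (assumed here to expose an emptiness check)
interface Response {
    boolean isEmpty();
}

interface IRole {
    Response findRolesBy();
}

// Null Object pattern: a do-nothing IRole, so callers never need a null check
class NullRole implements IRole {
    @Override
    public Response findRolesBy() {
        // return an empty response instead of null
        return () -> true;
    }
}
```

With this in place, AbstractCustomAuthHandler can call roleAuth.findRolesBy() unconditionally and simply get an empty result for role-less auth methods.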
If you really want to remove the role check from AbstractCustomAuthHandler, you should rethink your design. You could move the logic that uses the role into the IUser class/subclasses.
That way, each IUser implementation will use it if required.
This is close to a DDD approach: make objects collaborate according to their nature/definition, and don't let an artificial object (AbstractCustomAuthHandler) perform the whole logic.
This logic :
IUser userAuth = this.getUserAuth();
final Response userResponse = userAuth.findUserBy();
// ...
Map<String, Object> attributes = userAuth.getMappedAttributes();
// ...
IRole roleAuth = userAuth.getRoleAuth();
if (roleAuth != null)
{
    final Response rolesResponse = roleAuth.findRolesBy();
}
would be done in IUser :
IUser userAuth= this.getUserAuth();
Response response = userAuth.computeResponse(...);
Or maybe :
ResponsePart responsePart = userAuth.computeSomePartOfTheResponse(...);
// and use responsePart to complete the logic.
Of course, an IUser subclass could rely on some base method defined in a superclass or in the interface to perform the common logic.
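As a sketch of that direction — the method name computeRoles and the List-of-String result are assumptions for illustration, not from the original code — each implementation answers the role question itself, so the handler never inspects an IRole reference:

```java
import java.util.Collections;
import java.util.List;

// Sketch: each IUser implementation decides internally whether roles apply
interface IUser {
    List<String> computeRoles();
}

class UserWithRoles implements IUser {
    @Override
    public List<String> computeRoles() {
        // in real code this would delegate to the role microservice call
        return List.of("ADMIN");
    }
}

class UserWithoutRoles implements IUser {
    @Override
    public List<String> computeRoles() {
        // no role concept for this auth method: an empty list, never null
        return Collections.emptyList();
    }
}
```

The handler then just iterates whatever list comes back, with no null check and no knowledge of which auth methods have roles.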
If you don't want to change your approach — that is, you want to keep retrieving role information from the IUser object so that another class (AbstractCustomAuthHandler) can use it — then you need to manipulate IUser instances uniformly in the class that uses them.
So providing an implementation with empty or null roles is required even for subclasses that don't have them.
I don't think that is a design issue if you follow this approach. As an improvement, you could consider:
Defining a default implementation in the interface that returns null.
Or changing the return type to Optional<IRole> and defining a default implementation in the interface that returns an empty Optional.
This would give :
public interface IUser extends IAuth
{
    Response findUserBy();
    default Optional<IRole> getRoleAuth() { return Optional.empty(); }
}
Now override the method only when it is required.

ClassCastException while returning generic type

I'm writing a Spring application and I want to divide it into several layers to separate the domain from the framework completely.
So my domain methods return Vavr's Either<Error, T>, and my controllers use a simple resolver for all of those methods.
<T> ResponseEntity resolve(Either<Error, T> either) {
    return either
            .map(this::successResponse)
            .getOrElseGet(this::failureResponse);
}

private ResponseEntity<Object> successResponse(Object obj) {
    return new ResponseEntity<>(mapper.toDto(obj), HttpStatus.OK);
}
The problem is, I don't want to return my domain entities from controllers, so I wrote generic mapper, which will translate each domain object to its dto with single method.
public <T, Y> Y toDto(T domainObject) {
    if (domainObject instanceof Reservation) {
        return (Y) reservationMapper.toDto((Reservation) domainObject);
    }
    return null;
}
Reservation and ReservationMapper are concrete implementations for one domain; they can of course be replaced with a better solution, but that's not the point here.
The problem is, it isn't working. It throws:
java.lang.ClassCastException: class
com.johndoe.reservationsystem.adapter.dto.ReservationDto cannot be
cast to class org.springframework.util.MultiValueMap
I found a workaround: create an empty abstract class Dto, make all my DTO classes extend it, and return Dto instead of Y in the toDto method. The point is, I don't really like it and I'd like to find a better solution.
It's probably not necessary, but here is the implementation of ReservationMapper:
class ReservationMapper {
    ReservationDto toDto(Reservation reservation) {
        return ReservationDto.builder()
                .id(reservation.getId())
                .ownerId(reservation.getOwnerId())
                .reservedObjectId(reservation.getReservedObjectId())
                .title(reservation.getTitle())
                .description(reservation.getDescription())
                .startDate(reservation.getStartDate())
                .endDate(reservation.getEndDate())
                .build();
    }
}
Your method signature
public <T, Y> Y toDto(T domainObject)
is causing the problem because, in the context of your invocation
return new ResponseEntity<>(mapper.toDto(obj), HttpStatus.OK)
it is interpreted as being able to return a MultiValueMap<String, String>, which makes the call conform to the constructor
public ResponseEntity(MultiValueMap<String, String> headers, HttpStatus status)
See the javadoc: ResponseEntity(MultiValueMap<String, String>, HttpStatus)
By modifying your toDto method to make it explicit that it definitely does not return a MultiValueMap, you can work around this problem. For example, just specifying
public Object toDto(Object domainObject)
for the conversion method will probably do it. Or you could constrain your generic arguments to make it explicit that the method cannot return an instance of an arbitrary type inferred from the invocation context.
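The inference problem can be reproduced without Spring. In this sketch, FakeResponseEntity is an illustrative stand-in for ResponseEntity's overloaded constructors (everything here is assumed, not the original code): with an unbounded <Y> return type, overload resolution picks the more specific Map overload and the compiler inserts a cast that fails at runtime, while an Object return type forces the body overload.

```java
import java.util.Map;

// Stand-in for ResponseEntity's overloaded constructors (illustrative only):
// one overload takes a body (Object), the other takes headers (Map).
class FakeResponseEntity {
    final String chosenOverload;

    FakeResponseEntity(Object body, int status) {
        this.chosenOverload = "body";
    }

    FakeResponseEntity(Map<String, String> headers, int status) {
        this.chosenOverload = "headers";
    }
}

class Mapper {
    // Object return type: only the body overload applies, no inserted cast.
    static Object toDto(Object domainObject) {
        return "dto-for-" + domainObject;
    }

    // Unbounded Y: the compiler infers Y = Map<String, String> to match the
    // more specific overload and inserts a checkcast at the call site,
    // which fails at runtime because a String comes back.
    @SuppressWarnings("unchecked")
    static <T, Y> Y toDtoGeneric(T domainObject) {
        return (Y) ("dto-for-" + domainObject);
    }
}
```

This mirrors the reported error: the DTO is cast to MultiValueMap not because of anything the mapper does at runtime, but because the invocation context let the compiler infer the wrong target type.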

java open closed principle for multiple services

Let's say I wanted to define an interface which represents a call to a remote service.
Both services have different request and response types.
public interface ExecutesService<T,S> {
    public T executeFirstService(S obj);
    public T executeSecondService(S obj);
    public T executeThirdService(S obj);
    public T executeFourthService(S obj);
}
Now, let's see implementation
public class ServiceA implements ExecutesService<Response1,Request1>
{
    public Response1 executeFirstService(Request1 obj)
    {
        //This service call should not be executed by this class
        throw new UnsupportedOperationException("This method should not be called for this class");
    }
    public Response1 executeSecondService(Request1 obj)
    {
        //execute some service
    }
    public Response1 executeThirdService(Request1 obj)
    {
        //execute some service
    }
    public Response1 executeFourthService(Request1 obj)
    {
        //execute some service
    }
}
public class ServiceB implements ExecutesService<Response2,Request2>
{
    public Response2 executeFirstService(Request2 obj)
    {
        //execute some service
    }
    public Response2 executeSecondService(Request2 obj)
    {
        //This service call should not be executed by this class
        throw new UnsupportedOperationException("This method should not be called for this class");
    }
    public Response2 executeThirdService(Request2 obj)
    {
        //This service call should not be executed by this class
        throw new UnsupportedOperationException("This method should not be called for this class");
    }
    public Response2 executeFourthService(Request2 obj)
    {
        //execute some service
    }
}
In another class, depending on some value in the request, I create an instance of either ServiceA or ServiceB.
I have questions regarding the above:
Is the use of a generic interface ExecutesService<T,S> good when you want to provide subclasses which require different request and response types?
How can I do the above better?
Basically, your current design violates the open closed principle: what if you wanted to add an executeFifthService() method to the ServiceA, ServiceB, etc. classes?
It is not a good idea to update all of your ServiceA, B, etc. classes; in simple words, classes should be open for extension but closed for modification.
Instead, you can use the approach below:
ExecutesService interface:
public interface ExecutesService<T,S> {
    public T executeService(S obj);
}
ServiceA Class:
public class ServiceA implements ExecutesService<Response1,Request1> {
    //load the list of service classes supported by ServiceA during startup, e.g. from properties
    List<Class<?>> supportedListOfServices = new ArrayList<>();

    public Response1 executeService(Request1 request1, Service<Response1,Request1> service) {
        if (!supportedListOfServices.contains(service.getClass())) {
            throw new UnsupportedOperationException("This service should not be called for this class");
        } else {
            return service.execute(request1);
        }
    }
}
Similarly, you can implement ServiceB as well.
Service interface:
public interface Service<T,S> {
    public T execute(S s);
}
FirstService class:
public class FirstService implements Service<Response1,Request1> {
    public Response1 execute(Request1 req) {
        //execute the first service call and return its response
    }
}
Similarly, you need to implement SecondService, ThirdService, etc.. as well.
So, in this approach, you basically pass the Service to be actually called (it could be FirstService, SecondService, etc.) at runtime, and ServiceA validates whether it is in supportedListOfServices; if not, it throws an UnsupportedOperationException.
The important point here is that you don't need to update any of the existing services to add new functionality (unlike your design, where you need to add executeFifthService() to ServiceA, B, etc.); rather, you add one more class called FifthService and pass it in.
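Here is a hedged, runnable sketch of this registry idea. The types are deliberately simplified — String stands in for the request/response types and java.util.function.Function stands in for the Service interface — and all names are illustrative:

```java
import java.util.Set;
import java.util.function.Function;

// Sketch of the registry idea: the caller passes the concrete service
// strategy, and the executor only runs it if its class is on the supported
// list (loaded at startup in the real design).
class ServiceExecutor {
    private final Set<Class<?>> supported;

    ServiceExecutor(Set<Class<?>> supported) {
        this.supported = supported;
    }

    String execute(String request, Function<String, String> service) {
        if (!supported.contains(service.getClass())) {
            throw new UnsupportedOperationException(
                    "Service not supported: " + service.getClass().getName());
        }
        return service.apply(request);
    }
}

// One concrete strategy; adding another requires no change to ServiceExecutor.
class UppercaseService implements Function<String, String> {
    @Override
    public String apply(String request) {
        return request.toUpperCase();
    }
}
```

Adding a "fifth service" now means writing one new class and registering it in the supported set, leaving the executor untouched.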
I would suggest you create two different interfaces, each handling its own request and response types.
Of course you could develop an implementation with one generic interface handling all the logic, but from my point of view it makes the code more complex and dirty.
Regards
It does not really make sense to have an interface if you know that, for one case, most of its methods are not supported and should not be called by the client.
Why provide the client an interface that is error-prone to use?
I think you should have two distinct APIs in your use case, that is, two classes (if an interface is no longer required) or two interfaces.
However, that doesn't mean the two APIs cannot share a common ancestor interface if it makes sense for processing where instances should be interchangeable because they rely on the same operation contract.
Is the use of a generic interface (ExecutesService) good in the case
where you want to provide subclasses which require different Request
and Response?
It is not classic class derivation, but in some cases it is desirable, as it allows a common interface for implementations that have similar enough methods but don't use the same return type or parameter types in their signatures:
public interface ExecutesService<T,S>
It allows you to define a contract where classic derivation cannot.
However, this way of implementing a class doesn't necessarily let you program to the interface, as the declared type specifies particular type arguments:
ExecutesService<String, Integer> myVar = ...;
cannot be interchanged with
ExecutesService<Boolean, String> otherVar
as in myVar = otherVar.
I think your question is related to this problem.
You are manipulating implementations that have close enough methods but not really the same behavior, so you end up mixing concepts that have no relation to each other.
With classic inheritance (without generics), you would probably have introduced distinct interfaces very quickly.
I think it is not a good idea to implement an interface and make it possible to call unsupported methods. It is a sign that you should split your interface into two or three, depending on the concrete situation, such that each class implements all methods of its interface.
In your case I would split the interface into three, using inheritance to avoid duplication. Please see the example:
public interface ExecutesService<T, S> {
    T executeFourthService(S obj);
}

public interface ExecutesServiceA<T, S> extends ExecutesService<T, S> {
    T executeSecondService(S obj);
    T executeThirdService(S obj);
}

public interface ExecutesServiceB<T, S> extends ExecutesService<T, S> {
    T executeFirstService(S obj);
}
Please also note that the public modifier is redundant on interface methods.
Hope this helps.

How to use inheritance with ModelMapper's PropertyMap

I have been using ModelMapper to convert Identity objects (entities) to DTO objects. I want to implement generic property conversions in a generic class for all entities (GenericEToDtoPropertyMap) and explicit property conversions in separate child classes for each entity (PersonEToDTOPropertyMap for the entity Person). To make it clearer, this is my code:
Generic property map:
public class GenericEToDtoPropertyMap<E extends Identity, DTO extends PersistentObjectDTO> extends PropertyMap<E, DTO> {
    @Override
    protected void configure() {
        // E.oid to DTO.id conversion
        map().setId(source.getOid());
    }
}
Specific property map for entity Person:
public class PersonEToDTOPropertyMap extends GenericEToDtoPropertyMap<Person, PersonDTO> {
    @Override
    protected void configure() {
        super.configure();
        // implement explicit conversions here
    }
}
Usage of property map:
modelMapper = new ModelMapper();
Configuration configuration = modelMapper.getConfiguration();
configuration.setMatchingStrategy(MatchingStrategies.STRICT);
modelMapper.addMappings(new PersonEToDTOPropertyMap());
// convert person object
PersonDTO personDto = modelMapper.map(person, PersonDTO.class);
The problem is that the generic conversions do not apply. In my case person.oid does not get copied to personDto.id. It works correctly only if I remove the part:
map().setId(source.getOid());
from the GenericEToDtoPropertyMap.configure() method and put it in the PersonEToDTOPropertyMap.configure() method.
I guess, it has something to do with ModelMapper using Reflection to implement the mappings, but it would be nice if I could use inheritance in my property maps. Do you have any idea how to do this?
I just found an answer from the creator of ModelMapper, Jonathan Halterman:
https://groups.google.com/forum/#!topic/modelmapper/cvLTfqnHhqQ
This sort of mapping inheritance is not currently possible.
So I guess I have to implement all conversions in the child-class
Well, after 5 years... I found a solution (I had the same problem).
After viewing some posts and searching for answers, I found this post. The problem happens because the configure() method restricts what you can do inside it, so I thought of creating a method outside it to build the custom format. This is my code:
public class UserMap extends PropertyMap<User, UserLoginDTO> {
    @Override
    protected void configure() {
        using(generateFullname()).map(source, destination.getFullName());
    }

    private Converter<User, String> generateFullname() {
        return context -> {
            User user = context.getSource();
            return user.getName() + " " + user.getFirstLastname() + " " + user.getSecondLastname();
        };
    }
}
so you can call it from your main method or wherever you need it, like this:
ModelMapper modelMapper = new ModelMapper();
modelMapper.addMappings(new UserMap());
and the output is:
UserLoginDTO{id='001', fullName='Jhon Sunderland Rex'}
As you can see, the logic lives outside the configure method and it works. I hope this helps others.
PS: sorry for my English

Java: pass parameter in constructor or method?

Currently I have a class, TransactionData, which is just a little bit more than a POJO. I build the object from an HttpServletRequest. What I do:
public class TransactionData
{
    // ...
    public TransactionData(HttpServletRequest request) throws IOException
    {
        // do actual work here
    }
}
There are many WTFs here, the most disturbing being that TransactionData is tightly coupled to HttpServletRequest. What I thought: create an interface, TransactionDataExtractor, with an extract() method, so that I can implement different classes to build the object.
public interface TransactionDataExtractor
{
    public TransactionData extract();
}
But how do I pass the stuff needed to build the TransactionData to every implementation? The first thing that came to mind was to use different constructors, like this:
public class TransactionDataExtractorRequest implements TransactionDataExtractor
{
    private HttpServletRequest httpRequest;

    public TransactionDataExtractorRequest(HttpServletRequest httpRequest)
    {
        this.httpRequest = httpRequest;
    }

    public TransactionData extract()
    {
        // do whatever is required
    }
}
But in this case, whenever I need to build a new TransactionData object I have to create a new TransactionDataExtractorRequest — an implicit dependency I don't like at all.
The other alternative I could think of was passing an Object parameter to extract() and casting it whenever required, giving up type safety and introducing a lot of ugly boilerplate code:
public TransactionData extract(Object o)
{
    HttpServletRequest h;
    if (o instanceof HttpServletRequest)
    {
        h = (HttpServletRequest) o;
    }
    //...
}
I don't know if I have made myself clear. I do feel like I'm missing something, I know the solution is very simple but I can't get hold of it.
Any thoughts?
TIA.
EDIT: the problem might even be that my hunch is completely wrong, and I may dismiss it without any regret.
If your only problem is ensuring type safety when passing the source object to extract(), you can use generics:
public interface TransactionDataExtractor<E> {
public TransactionData extract(E source);
}
public class TransactionDataExtractorRequest
implements TransactionDataExtractor<HttpServletRequest> {
public TransactionData extract(HttpServletRequest source) { ... }
}
I do feel like I'm missing something ... Any thoughts?
Well my thought is that you are attempting to solve a problem that isn't really a problem. There's no obvious (to me) reason why the coupling you are trying to get rid of is actually harmful. Certainly, your attempts to remove the coupling are not making the code any easier to understand.
In case you rely on request parameters only, you can call request.getParameterMap() and use the Map instead (if you need headers, getHeaders()).
Creating/reusing a TransactionDataExtractorRequest instance isn't a problem, IMHO. You'd need to distinguish between the parameter types somewhere anyway, and if you decouple TransactionData from the parameter types by using some sort of factory, what's wrong with that?
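For illustration, such a factory might be sketched as a registry keyed by source type. Everything below — the class names, the String payload, the register/extract methods — is an assumption for the sake of the example, not from the original post:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Simplified stand-in for the real TransactionData
class TransactionData {
    final String payload;

    TransactionData(String payload) {
        this.payload = payload;
    }
}

// Registry-style factory: maps a source type to the extractor that knows
// how to build TransactionData from it, keeping casts in one checked place.
class ExtractorFactory {
    private final Map<Class<?>, Function<Object, TransactionData>> extractors = new HashMap<>();

    <S> void register(Class<S> sourceType, Function<S, TransactionData> extractor) {
        // Class.cast keeps the unchecked conversion confined to this line
        extractors.put(sourceType, src -> extractor.apply(sourceType.cast(src)));
    }

    TransactionData extract(Object source) {
        Function<Object, TransactionData> extractor = extractors.get(source.getClass());
        if (extractor == null) {
            throw new IllegalArgumentException("no extractor for " + source.getClass());
        }
        return extractor.apply(source);
    }
}
```

A servlet-based extractor would be registered once for HttpServletRequest, and callers never see the cast or the concrete extractor class.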
I'm still not convinced that much is gained by removing the dependency on HttpServletRequest, but I would suggest something along the lines of:
public class TransactionData {
    public TransactionData(TransactionDataOptions options) throws IOException {
        // do actual work here
    }
}
//TransactionData wants some state that it currently gets from a HttpServletRequest,
//figure out what that state is, and abstract an interface for accessing it
public interface TransactionDataOptions {
    //getters for things that TransactionData needs
}

//take all the code that pulls state out of the HttpServletRequest, and move it here
public class TransactionDataHttpOptions implements TransactionDataOptions {
    private HttpServletRequest request;

    //getter implementations that pull the required information out of the request
    public TransactionDataHttpOptions(HttpServletRequest request) {
        this.request = request;
    }
}
//now you can also do this, and use TransactionData even without a HttpServletRequest
public class TransactionDataMapOptions implements TransactionDataOptions {
    private Map<String, Object> map;

    //getter implementations that pull the required information out of the map
    public TransactionDataMapOptions(Map<String, Object> map) {
        this.map = map;
    }
}
If you go this route, then TransactionDataHttpOptions is the only object with a dependency on HttpServletRequest. And since it is basically a wrapper that is intended to work with a HttpServletRequest I think that should be fine.
