Java open/closed principle for multiple services

Let's say I wanted to define an interface which represents a call to a remote service.
Both services have different request and response types.
public interface ExecutesService<T,S> {
public T executeFirstService(S obj);
public T executeSecondService(S obj);
public T executeThirdService(S obj);
public T executeFourthService(S obj);
}
Now, let's see the implementation:
public class ServiceA implements ExecutesService<Response1,Request1>
{
public Response1 executeFirstService(Request1 obj)
{
//This service call should not be executed by this class
throw new UnsupportedOperationException("This method should not be called for this class");
}
public Response1 executeSecondService(Request1 obj)
{
//execute some service
}
public Response1 executeThirdService(Request1 obj)
{
//execute some service
}
public Response1 executeFourthService(Request1 obj)
{
//execute some service
}
}
public class ServiceB implements ExecutesService<Response2,Request2>
{
public Response2 executeFirstService(Request2 obj)
{
//execute some service
}
public Response2 executeSecondService(Request2 obj)
{
//This service call should not be executed by this class
throw new UnsupportedOperationException("This method should not be called for this class");
}
public Response2 executeThirdService(Request2 obj)
{
//This service call should not be executed by this class
throw new UnsupportedOperationException("This method should not be called for this class");
}
public Response2 executeFourthService(Request2 obj)
{
//execute some service
}
}
In another class, depending on some value in the request, I am creating an instance of either ServiceA or ServiceB.
I have questions regarding the above:
Is the use of a generic interface ExecutesService<T,S> good in the case where you want to provide subclasses which require different Request and Response types?
How can I do the above better?

Basically, your current design violates the open/closed principle: what if you wanted to add an executeFifthService() method to the ServiceA, ServiceB, etc. classes?
It is not a good idea to update all of your ServiceA, ServiceB, etc. classes; in simple words, classes should be open for extension but closed for modification.
Rather, you can consider the approach below:
ExecutesService interface:
public interface ExecutesService<T, S> {
    public T executeService(S obj, Service<T, S> service);
}
ServiceA Class:
public class ServiceA implements ExecutesService<Response1, Request1> {
    // class names of the services supported by ServiceA, loaded during startup (e.g. from properties)
    private final List<Class<?>> supportedListOfServices = new ArrayList<>();
    public Response1 executeService(Request1 request1, Service<Response1, Request1> service) {
        if (!supportedListOfServices.contains(service.getClass())) {
            throw new UnsupportedOperationException("This method should not be called for this class");
        }
        return service.execute(request1);
    }
}
Similarly, you can implement ServiceB as well.
Service interface:
public interface Service<T,S> {
public T execute(S s);
}
FirstService class:
public class FirstService implements Service<Response1, Request1> {
    public Response1 execute(Request1 req) {...}
}
Similarly, you need to implement SecondService, ThirdService, etc. as well.
So, in this approach, you are basically passing the Service to be actually called (it could be FirstService, SecondService, etc.) at runtime, and ServiceA validates whether it is in supportedListOfServices; if not, it throws an UnsupportedOperationException.
The important point here is that you don't need to update any of the existing services to add new functionality (unlike your design, where you would need to add executeFifthService() to ServiceA, ServiceB, etc.); rather, you add one more class called FifthService and pass it in.
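For illustration, a hypothetical caller could wire this up as follows (Request1/Response1 come from your question; how supportedListOfServices gets populated is omitted, and the no-arg Request1 constructor is an assumption):
ExecutesService<Response1, Request1> serviceA = new ServiceA();
Service<Response1, Request1> firstService = new FirstService();
// assuming Request1 has a no-arg constructor for this sketch
Response1 response = serviceA.executeService(new Request1(), firstService);
// Adding a FifthService later only means adding one new class; ServiceA itself stays untouched.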

I would suggest creating two different interfaces, each of which handles its own request and response types.
Of course you could develop an implementation with one generic interface handling all the logic, but in my view it would make the code more complex and dirty.
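A rough sketch of the two-interface suggestion (the interface names here are made up; the request/response types are from the question):
public interface ExecutesServiceOne {
    Response1 executeService(Request1 request);
}
public interface ExecutesServiceTwo {
    Response2 executeService(Request2 request);
}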

It doesn't really make sense to have an interface if you know that, for one case, most of the interface's methods are not supported and so should not be called by the client.
Why provide the client with an interface that is error-prone to use?
I think you should have two distinct APIs in your use case, that is, two classes (if an interface is no longer required) or two interfaces.
However, that doesn't mean the two APIs cannot share a common ancestor interface if it makes sense for some processing where instances should be interchangeable because they rely on the same operation contract.
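For example, such a common ancestor might look like this (a sketch under the assumption that only a single execute operation is shared; the names are made up, the request/response types are from the question):
public interface RemoteCall<T, S> {
    T execute(S request);
}
public interface FirstApi extends RemoteCall<Response1, Request1> { /* first-service-specific methods */ }
public interface SecondApi extends RemoteCall<Response2, Request2> { /* second-service-specific methods */ }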
Is the use of a generic interface (ExecutesService) good in the case where you want to provide subclasses which require different Request and Response types?
It is not classic class derivation, but in some cases it is desirable, as it allows a common interface to be used for implementations that have similar enough methods but don't use the same return type or parameter types in their signatures:
public interface ExecutesService<T,S>
It allows you to define a contract where classic derivation cannot.
However, this way of implementing a class doesn't necessarily allow you to program to the interface, because the declared type specifies particular type arguments:
ExecutesService<String, Integer> myVar = ...;
cannot be interchanged with:
ExecutesService<Boolean, String> otherVar
that is, myVar = otherVar will not compile.
I think that your question is related to this problem.
You manipulate implementations that have similar-looking methods but not really the same behavior.
So you end up mixing things from two concepts that have no relation to each other.
By using classic inheritance (without generics), you would probably have introduced distinct interfaces very quickly.

I think it is not a good idea to implement an interface in a way that makes it possible to call unsupported methods. It is a sign that you should split your interface into two or three, depending on the concrete situation, in such a way that each class implements all methods of the interface it implements.
In your case I would split the interface into three, using inheritance to avoid duplication. Please see the example:
public interface ExecutesService<T, S> {
T executeFourthService(S obj);
}
public interface ExecutesServiceA<T, S> extends ExecutesService<T, S> {
T executeSecondService(S obj);
T executeThirdService(S obj);
}
public interface ExecutesServiceB<T, S> extends ExecutesService<T, S> {
T executeFirstService(S obj);
}
Please also note that the public modifier is redundant on interface methods.
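With this split, an implementation only exposes the methods it really supports; for instance (a sketch reusing the request/response types from the question, bodies elided):
public class ServiceA implements ExecutesServiceA<Response1, Request1> {
    public Response1 executeSecondService(Request1 obj) {...}
    public Response1 executeThirdService(Request1 obj) {...}
    public Response1 executeFourthService(Request1 obj) {...}
}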
Hope this helps.

Related

Retry failed service call but use different implementation

The code I'm working with has the following structure.
public interface SomeService {
Optional<SomeClass> getThing();
// more methods
}
public abstract class SomeServiceBase implements SomeService {
    public Optional<SomeClass> getThing() {
        // logic
        return this.onGetThing();
    }
    protected abstract Optional<SomeClass> onGetThing();
}
Additionally, there are three different classes that extend SomeServiceBase; each one calls a different third-party external API to get results, and they all implement their own version of onGetThing().
class FooService extends SomeServiceBase { @Override protected Optional<SomeClass> onGetThing() {...} }
class DooService extends SomeServiceBase { @Override protected Optional<SomeClass> onGetThing() {...} }
class RooService extends SomeServiceBase { @Override protected Optional<SomeClass> onGetThing() {...} }
There's a factory service that wires up all three of the above services and returns the right one based on a "Provider" that is passed in from the client to the API.
Optional<SomeClass> myThing = SomeServiceFactory.getService(provider).getThing();
What I need to do is: if FooService doesn't return a result, I want to retry with DooService. But I'm struggling to find a good way to implement this in a somewhat generic, reusable way. Any help is appreciated. Let me know if I need to provide more details.
Maybe you could take a look at the Circuit Breaker pattern.
It allows you to use a "fallback" if the original call raised an exception.
If I may summarize with your sample:
A circuit breaker is provided/developed around the FooService
If everything is fine on the FooService, the original response is given back
Else, if the FooService does not provide a response or throws an exception, you go to the linked fallback
In your fallback you implement the call to the DooService
You can give Resilience4J a try (it has samples for different kinds of implementations) or the Netflix circuit breaker (now deprecated)
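If you don't want to pull in a library yet, a minimal hand-rolled fallback in the spirit of your factory could look like this (a sketch; the two provider variables are placeholders, not names from your code):
// Try the preferred provider first; if it yields no result, retry with the fallback provider.
Optional<SomeClass> myThing = SomeServiceFactory.getService(fooProvider).getThing();
if (!myThing.isPresent()) {
    myThing = SomeServiceFactory.getService(dooProvider).getThing();
}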

Does the strategy always need to be passed from the client code in the Strategy pattern?

I have below piece of code:
public interface SearchAlgo { public Items search(); }
public class FirstSearchAlgo implements SearchAlgo { public Items search() {...} }
public class SecondSearchAlgo implements SearchAlgo { public Items search() {...} }
I also have a factory to create instances of the above concrete classes based on the client's input. The SearchAlgoFactory code below is just for context.
public class SearchAlgoFactory {
    ...
    public static SearchAlgo getSearchInstance(String arg) {
        if ("First".equals(arg)) return new FirstSearchAlgo();
        if ("Second".equals(arg)) return new SecondSearchAlgo();
        throw new IllegalArgumentException("Unknown search algorithm: " + arg);
    }
}
}
Now, I have a class that takes input from the client, gets the algorithm from the factory, and executes it.
public class Manager{
public Items execute(String arg) {
SearchAlgo algo = SearchAlgoFactory.getSearchInstance(arg);
return algo.search();
}
}
Question:
I feel that I am using both the Factory and Strategy patterns, but I am not sure, because all the examples I have seen have a Context class that executes the strategy, and the client provides the strategy it wants to use. So, is this a correct implementation of Strategy?
When it comes to implementing design patterns, it is much more important to understand what they do than to conform to some gold-standard reference implementation. And it looks like you understand the strategy pattern.
The important thing about strategies is that the implementation is external to some client code (usually called the context) and that it can be changed at runtime. This can be done by letting the user provide the strategy object directly. However, introducing another level of indirection through your factory is just as viable. Your Manager class acts as the context you see in most UML diagrams.
So, yes. In my opinion, your code implements the strategy pattern.
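For comparison, the "textbook" variant where the client hands the strategy to the context directly would look roughly like this (only a sketch, not a required change):
public class Manager {
    private final SearchAlgo algo;
    public Manager(SearchAlgo algo) { this.algo = algo; }
    public Items execute() { return algo.search(); }
}
// client code picks the strategy, possibly still via the factory
Items items = new Manager(new FirstSearchAlgo()).execute();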

How to avoid null implementation of interface methods

There may be some problem in my design too. Here is my problem:
I have an AbstractCustomAuthHandler which first
gets an IUser (gets users with the implementing logic)
then calls the IUser's IRole object's function (gets roles with the implementing logic)
So at the beginning of the design:
every IUser implementation has some IRole logic
those are separated because they are separate REST calls in separate microservices
but I related them with an IUser-has-an-IRole relation
But now there are some IUser implementations that should not have an IRole object. So for now I'm returning null for these implementations, and I don't like it. I thought about splitting the interfaces but couldn't find a solution that also satisfies AbstractCustomAuthHandler. Here is the code:
Here is some part of AbstractCustomAuthHandler:
IUser userAuth = this.getUserAuth();
final Response userResponse = userAuth.findUserBy();
// ...
Map<String, Object> attributes = userAuth.getMappedAttributes();
// ...
IRole roleAuth = userAuth.getRoleAuth();
if (roleAuth != null)
{
    final Response rolesResponse = roleAuth.findRolesBy();
}
// ....
Here is AuthMethodWithoutRole, where I have the problem of returning null:
public class AuthMethodWithoutRole implements IUser
{
    @Override public Response findUserBy()
    {
        // some logic
    }
    @Override public IRole getRoleAuth()
    {
        return null;
    }
}
Here is the IUser interface:
public interface IUser extends IAuth
{
Response findUserBy();
IRole getRoleAuth();
}
Here is the IRole interface:
public interface IRole
{
Response findRolesBy( );
}
Why not just create a class NullRole implements IRole?
Then you do not need AuthMethodWithoutRole; you can just use your default AuthMethod dealing with a "NullRole".
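A sketch of that null-object idea (what an "empty" Response should contain depends on your code, so the emptyRolesResponse() helper below is hypothetical):
public class NullRole implements IRole
{
    @Override public Response findRolesBy()
    {
        // Return a neutral "no roles" Response instead of null.
        // emptyRolesResponse() is a hypothetical helper; build whatever
        // represents "no roles" in your Response type.
        return emptyRolesResponse();
    }
}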
If you really want to remove the role check from AbstractCustomAuthHandler, you should rethink your design. You could move the logic that uses the role into the IUser class/subclasses.
In this way each IUser implementation will use it if required.
This sounds like a DDD approach: make objects collaborate according to their nature/definition and don't let an artificial object (AbstractCustomAuthHandler) perform the whole logic.
This logic:
IUser userAuth = this.getUserAuth();
final Response userResponse = userAuth.findUserBy();
// ...
Map<String, Object> attributes = userAuth.getMappedAttributes();
// ...
IRole roleAuth = userAuth.getRoleAuth();
if (roleAuth != null)
{
    final Response rolesResponse = roleAuth.findRolesBy();
}
would be done in IUser:
IUser userAuth= this.getUserAuth();
Response response = userAuth.computeResponse(...);
Or maybe:
ResponsePart responsePart = userAuth.computeSomePartOfTheResponse(...);
// and use responsePart to complete the logic.
Of course an IUser subclass could rely on some base method defined in a superclass or in the interface to perform the common logic.
If you don't want to change your approach, that is, you want to go on retrieving role information for the IUser object in order to let another class (AbstractCustomAuthHandler) use it, you need to manipulate IUser uniformly in the class that manipulates them.
So providing an implementation with empty or null roles is required even for subclasses that don't have them.
I don't think it is a design issue if you follow this approach. As an improvement you could consider:
Define a default implementation in the interface that returns null.
Or change the return type to Optional<IRole> and define a default implementation in the interface that returns an empty Optional.
This would give:
public interface IUser extends IAuth
{
    Response findUserBy();
    default Optional<IRole> getRoleAuth() { return Optional.empty(); }
}
Now override the method only when it is required.
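With the Optional variant, the AbstractCustomAuthHandler snippet from your question could then consume it without a null check (a sketch):
IUser userAuth = this.getUserAuth();
final Response userResponse = userAuth.findUserBy();
// ...
userAuth.getRoleAuth().ifPresent(roleAuth -> {
    final Response rolesResponse = roleAuth.findRolesBy();
    // ...
});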

webTarget.request returns type not acceptable by builder.method

I am trying to unit-test a REST API function.
builder = webTarget.request();
returns builder of the type
javax.ws.rs.client.Invocation.Builder
But if I take that builder and call builder.method("POST", entity) on it, the method called looks like this:
public Response method(final String name, final Entity<?> entity) throws ProcessingException {
requestContext.setMethod(name);
storeEntity(entity);
return new JerseyInvocation(this).invoke();
}
And the last line uses a different builder as "this":
org.glassfish.jersey.client.JerseyInvocation.Builder
And the run fails on that line.
I am looking at it and it is driving me crazy: how can it be that the function is called as a member of one class, but when "this" is used in that method, a completely different class is used?
Both Invocation and Invocation.Builder are interfaces. The WebTarget.request() contract is to return an Invocation.Builder. These are all interfaces we are talking about here: WebTarget, Invocation, Invocation.Builder. This is the contract designed by the JAX-RS specification. It is up to the JAX-RS implementation to implement these interfaces. The Jersey implementations are JerseyWebTarget, JerseyInvocation, and JerseyInvocation.Builder, respectively.
It's the same as if I created something like this:
public interface Model {}
public interface Service {
    Model getModel();
}
public class ModelImpl implements Model {}
public class ServiceImpl implements Service {
    @Override
    public Model getModel() {
        return new ModelImpl();
    }
}
There's nothing special going on here. The Service contract says that the getModel() method returns a Model, which is an interface; the actual return value will be of type ModelImpl, the implementation. Polymorphism at work.
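To make that concrete (a small sketch using the classes above):
Service service = new ServiceImpl();
Model model = service.getModel();       // declared type is the Model interface
System.out.println(model.getClass());   // at runtime prints class ModelImpl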

Benefits of Using Generics in a Base Class that Also Implement the Same Class

I recently ran across this scenario in code that I didn't write, and while there may be some design benefit to this approach, I can't seem to squeeze the rationale out of my own brain. So before I go and look foolish, I'm hoping for some feedback here.
Service interface something like this:
public interface Service {...}
Then, a base class that adds a generic reference to the Service interface where T extends the Service, but then the overall base class also implements the interface. Something like this:
public class ServiceBase<T extends Service> implements Service {...}
Why would you do this? I'm noticing that in practice the extension of ServiceBase always uses the same class name as T as the one that is being declared; so there's not really any magic polymorphic benefit here. Something like this:
public class MyService extends ServiceBase<MyService> {...}
and, the MyService class is never a container for the generic (e.g., I don't believe this is signaling some kind of self-containing list, where MyService could contain a list of MyServices).
Any ideas/thoughts on why someone would do this?
Why would you do this? I'm noticing that in practice the extension of ServiceBase always uses the same class name as T as the one that is being declared; so there's not really any magic polymorphic benefit here.
Generics don't exist to create magic polymorphism. They are mainly a way to add constraints on types at compile time in order to reduce clumsy casts and type errors at runtime.
In your case, suppose that the ServiceBase class is abstract and has a process() method which needs to create, at each call, a new instance of the concrete class declared in the parameterized type.
We call this abstract method createService().
Without using generics, we could declare the method like this: public abstract ServiceBase createService().
ServiceBase without generics
public abstract class ServiceBase implements Service {
    public abstract ServiceBase createService();
    @Override
    public void process() {
        createService().process();
    }
}
With this declaration, the concrete class may return any instance of ServiceBase.
For example, the following child class will compile because we are not forced to change the return type of createService() to the current declared type.
MyService without generics
public class MyService extends ServiceBase {
    @Override
    public ServiceBase createService() {
        return new AnotherService();
    }
}
But if I use generics in the base class:
ServiceBase with generics
public abstract class ServiceBase<T extends Service> implements Service {
    public abstract T createService();
    @Override
    public void process() {
        createService().process();
    }
}
The concrete class has no choice; it is forced to change the return type of createService() to the parameterized type declared.
MyService with generics
public class MyService extends ServiceBase<MyService> {
    @Override
    public MyService createService() {
        return new MyService();
    }
}
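For contrast, the implementation that was possible in the non-generic example is now rejected by the compiler (shown as a comment sketch):
// public class MyService extends ServiceBase<MyService> {
//     @Override
//     public AnotherService createService() {   // compile error: return type is not MyService
//         return new AnotherService();
//     }
// }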
I made up an example using your class and interface declarations (except that I made ServiceBase abstract) which should illustrate the use of the generic types:
public interface Service {
int configure(String cmd);
}
public abstract class ServiceBase<T extends Service> implements Service {
private ServiceManager manager;
public ServiceBase(ServiceManager manager){
this.manager = manager;
}
public final void synchronize(T otherService){
manager.configureEnvironment(otherService.configure("syncDest"), configure("syncSrc"));
synchronizeTo(otherService);
}
protected abstract void synchronizeTo(T otherService);
}
public class ProducerService extends ServiceBase<ConsumerService> {
public ProducerService(ServiceManager manager) {
super(manager);
}
@Override
protected void synchronizeTo(ConsumerService otherService) {
/* specific code for synchronization with consumer service*/
}
@Override
public int configure(String cmd) { ... }
}
public class ConsumerService extends ServiceBase<ProducerService> {
public ConsumerService(ServiceManager manager) {
super(manager);
}
@Override
protected void synchronizeTo(ProducerService otherService) {
/* specific code for synchronization with producer service */
}
@Override
public int configure(String cmd) { ... }
}
Imagine we have services managed by a ServiceManager which can configure the environment of the services so that they are ready for synchronization with each other. How a configure command is interpreted is up to the specific service. Therefore a configure() declaration resides in our interface.
The ServiceBase handles the basic synchronization stuff that has to happen generally when two services want to synchronize. The individual implementations of ServiceBase shouldn't have to deal with this.
However ServiceBase doesn't know how a specific implementation of itself synchronizes to a specific other implementation of service. Therefore it has to delegate this part of synchronization to its subclass.
Now generics come into play. ServiceBase also doesn't know which type of service it is able to synchronize to. It also has to delegate this decision to its subclass, which it can do using the construct T extends Service.
Given this structure, now imagine two concrete subclasses of ServiceBase: ProducerService and ConsumerService. The consumer service can only synchronize to the producer service and the other way around. Therefore the two classes declare ServiceBase<ConsumerService> and ServiceBase<ProducerService>, respectively.
Conclusion
Just like abstract methods can be used by superclasses to delegate the implementation of functionality to their subclasses, generic type parameters can be used by superclasses to delegate the "implementation" of type placeholders to their subclasses.
You haven't posted any of the definitions of these classes where the type parameter is used (which would most likely convey the rationale behind this design, or maybe the lack of it...), but in all cases, a type parameter is a way of parameterizing a class, just like a method can be parameterized.
The ServiceBase class implements a Service. This tells us that it implements the contract (methods) of a Service (to be more precise, subclasses of it can act as the implementation).
At the same time, ServiceBase takes a type argument that is a subtype of Service. This tells us that a service implementation probably has a "relationship" with another implementation type (possibly the same type as the current one). This relationship could be anything needed by the specific design requirement, e.g. the type of Service that this implementation can delegate to, the type of Service that can call this service, etc.
The way I read the following declaration
public class ServiceBase<T extends Service> implements Service {...}
is roughly: ServiceBase is a base implementation of a service, which can have a statically typed relationship with some other type of service.
These two aspects are completely independent.
