There may be some problem in my design too. Here is my problem:
I have an AbstractCustomAuthHandler which first:
gets an IUser (fetches users via the implementing logic),
then calls the IUser's IRole object's function (fetches roles via the implementing logic).
So at the beginning of the design:
every IUser implementation has some IRole logic;
those are separated because they are separate REST calls in separate microservices,
but I related them with an "IUser has an IRole" relation.
But now there are some IUser implementations that should not have an IRole object. For now I'm returning null for these implementations, and I don't like it. I thought about splitting the interfaces, but couldn't find a solution that also satisfies AbstractCustomAuthHandler. Here is the code:
Here is the relevant part of AbstractCustomAuthHandler:
IUser userAuth = this.getUserAuth();
final Response userResponse = userAuth.findUserBy();
// ...
Map<String, Object> attributes = userAuth.getMappedAttributes();
// ...
IRole roleAuth = userAuth.getRoleAuth();
if (roleAuth != null)
{
    final Response rolesResponse = roleAuth.findRolesBy();
}
// ....
Here is AuthMethodWithoutRole, where I have the problem of returning null:
public class AuthMethodWithoutRole implements IUser
{
    @Override
    public Response findUserBy()
    {
        // some logic
    }

    @Override
    public IRole getRoleAuth()
    {
        return null;
    }
}
Here is the IUser interface:
public interface IUser extends IAuth
{
    Response findUserBy();
    IRole getRoleAuth();
}
Here is the IRole interface:
public interface IRole
{
    Response findRolesBy();
}
Why not just create a class NullRole implements IRole?
Then you do not need AuthMethodWithoutRole; you can just use your default AuthMethod dealing with a NullRole.
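For illustration, here is a minimal sketch of such a NullRole; the Response.empty() factory is an assumption, so use whatever "empty result" construction your Response type actually offers:
public class NullRole implements IRole
{
    @Override
    public Response findRolesBy()
    {
        // No roles to fetch: return a neutral, empty response instead of
        // null, so AbstractCustomAuthHandler never needs a null check.
        return Response.empty(); // hypothetical factory for an empty Response
    }
}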
If you really want to remove the role check from AbstractCustomAuthHandler, you should rethink your design. You could move the logic that uses the role into the IUser class/subclasses.
In this way each IUser implementation will use it if required.
This approach sounds like a DDD approach: make objects collaborate according to their nature/definition, and don't let an artificial object (AbstractCustomAuthHandler) perform the whole logic.
This logic:
IUser userAuth = this.getUserAuth();
final Response userResponse = userAuth.findUserBy();
// ...
Map<String, Object> attributes = userAuth.getMappedAttributes();
// ...
IRole roleAuth = userAuth.getRoleAuth();
if (roleAuth != null)
{
    final Response rolesResponse = roleAuth.findRolesBy();
}
would be done in IUser:
IUser userAuth = this.getUserAuth();
Response response = userAuth.computeResponse(...);
Or maybe:
ResponsePart responsePart = userAuth.computeSomePartOfTheResponse(...);
// and use responsePart to complete the logic.
Of course, an IUser subclass could rely on some base method defined in a superclass or in the interface to perform the common logic.
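For example, here is a hedged sketch of a role-aware implementation under this redesign, assuming IUser is reworked to expose computeResponse() instead of getRoleAuth():
public class AuthMethodWithRole implements IUser
{
    private final IRole roleAuth;

    public AuthMethodWithRole(IRole roleAuth)
    {
        this.roleAuth = roleAuth;
    }

    @Override
    public Response findUserBy()
    {
        // user lookup logic for this auth method
    }

    @Override
    public Response computeResponse()
    {
        final Response userResponse = findUserBy();
        // This implementation knows it needs roles; a role-less
        // implementation would simply return userResponse here.
        final Response rolesResponse = roleAuth.findRolesBy();
        return rolesResponse; // or merge userResponse and rolesResponse as needed
    }
}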
If you don't want to change your approach, that is, you want to keep retrieving role information from the IUser object so that another class (AbstractCustomAuthHandler) can use it, you need to manipulate IUser objects uniformly in the class that manipulates them.
So providing an implementation with empty or null roles is required even for subclasses that don't have roles.
I don't think that it is a design issue if you follow this approach. As an improvement you could consider one of the following:
Define a default implementation in the interface that returns null.
Or change the return type to Optional<IRole> and define a default implementation in the interface that returns an empty Optional.
This would give:
public interface IUser extends IAuth
{
    Response findUserBy();
    default Optional<IRole> getRoleAuth() { return Optional.empty(); }
}
Now override the method only when it is required.
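On the handler side, the null check then collapses into an ifPresent call; a minimal sketch reusing the names above:
// An empty Optional simply skips the role lookup; no null check needed.
userAuth.getRoleAuth().ifPresent(roleAuth -> {
    final Response rolesResponse = roleAuth.findRolesBy();
    // ... use rolesResponse
});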
I am trying to unit test a REST API function.
builder = webTarget.request();
returns builder of the type
javax.ws.rs.client.Invocation.Builder
But if I take that builder and call builder.method("POST", entity) on it, the method called looks like this:
public Response method(final String name, final Entity<?> entity) throws ProcessingException {
    requestContext.setMethod(name);
    storeEntity(entity);
    return new JerseyInvocation(this).invoke();
}
And the last line uses a different builder as "this":
org.glassfish.jersey.client.JerseyInvocation.Builder
And the run fails on that line.
I am looking at it and it is driving me crazy: how can it be that the function is called as a member of one class, but when "this" is used in that method, an absolutely different class is used?
Both Invocation and Invocation.Builder are interfaces. The WebTarget.request() contract is to return an Invocation.Builder. These are all interfaces we are talking about here: WebTarget, Invocation, Invocation.Builder. This contract is designed by the JAX-RS specification. It is up to the JAX-RS implementation to implement these interfaces. Jersey's implementations are JerseyWebTarget, JerseyInvocation, and JerseyInvocation.Builder, respectively.
It's the same as if I created something like this:
public interface Model {}
public interface Service {
    Model getModel();
}
public class ModelImpl implements Model {}
public class ServiceImpl implements Service {
    @Override
    public Model getModel() {
        return new ModelImpl();
    }
}
There's nothing special going on here. The Service contract says that the getModel() method returns a Model, which is an interface; the actual return value will be of type ModelImpl, the implementation. Polymorphism at work.
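A short illustrative usage (hypothetical calling code):
Service service = new ServiceImpl();  // declared type: the interface
Model model = service.getModel();     // static type Model, runtime type ModelImpl
System.out.println(model.getClass()); // prints: class ModelImpl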
Let's say I wanted to define an interface which represents a call to a remote service.
Both services have different request and response types.
public interface ExecutesService<T,S> {
    public T executeFirstService(S obj);
    public T executeSecondService(S obj);
    public T executeThirdService(S obj);
    public T executeFourthService(S obj);
}
Now, let's see the implementations:
public class ServiceA implements ExecutesService<Response1,Request1>
{
    public Response1 executeFirstService(Request1 obj)
    {
        //This service call should not be executed by this class
        throw new UnsupportedOperationException("This method should not be called for this class");
    }
    public Response1 executeSecondService(Request1 obj)
    {
        //execute some service
    }
    public Response1 executeThirdService(Request1 obj)
    {
        //execute some service
    }
    public Response1 executeFourthService(Request1 obj)
    {
        //execute some service
    }
}
public class ServiceB implements ExecutesService<Response2,Request2>
{
    public Response2 executeFirstService(Request2 obj)
    {
        //execute some service
    }
    public Response2 executeSecondService(Request2 obj)
    {
        //This service call should not be executed by this class
        throw new UnsupportedOperationException("This method should not be called for this class");
    }
    public Response2 executeThirdService(Request2 obj)
    {
        //This service call should not be executed by this class
        throw new UnsupportedOperationException("This method should not be called for this class");
    }
    public Response2 executeFourthService(Request2 obj)
    {
        //execute some service
    }
}
In another class, depending on some value in the request, I am creating an instance of either ServiceA or ServiceB.
I have questions regarding the above:
Is the use of a generic interface ExecutesService<T,S> good in a case where you want to provide subclasses which require different Request and Response types?
How can I do the above better?
Basically, your current design violates the open/closed principle: what if you wanted to add an executeFifthService() method to the ServiceA, ServiceB, etc. classes?
It is not a good idea to update all of your ServiceA, ServiceB, etc. classes; in simple words, classes should be open for extension but closed for modification.
Instead, you can use the approach below:
ExecutesService interface:
public interface ExecutesService<T,S> {
    public T executeService(S obj, Service<T,S> service);
}
ServiceA Class:
public class ServiceA implements ExecutesService<Response1,Request1> {
    // list of service classes supported by ServiceA,
    // loaded during startup (e.g. from properties)
    private final List<Class<?>> supportedListOfServices = new ArrayList<>();

    public Response1 executeService(Request1 request1, Service<Response1,Request1> service) {
        if (!supportedListOfServices.contains(service.getClass())) {
            throw new UnsupportedOperationException("This method should not be called for this class");
        } else {
            return service.execute(request1);
        }
    }
}
Similarly, you can implement ServiceB as well.
Service interface:
public interface Service<T,S> {
    public T execute(S s);
}
FirstService class:
public class FirstService implements Service<Response1,Request1> {
    public Response1 execute(Request1 req) {
        //execute the first service
    }
}
Similarly, you need to implement SecondService, ThirdService, etc. as well.
So, in this approach, you basically pass the Service to be actually called (it could be FirstService, SecondService, etc.) at runtime, and ServiceA validates whether it is in supportedListOfServices; if not, it throws an UnsupportedOperationException.
The important point here is that you don't need to update any of the existing services to add new functionality (unlike your design, where you would need to add executeFifthService() to ServiceA, ServiceB, etc.); rather, you add one more class called FifthService and pass it in.
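A hypothetical caller would then look like this (the request1 instance is assumed to exist):
// The concrete Service is chosen at runtime and passed in;
// ServiceA only verifies that it supports it before delegating.
ExecutesService<Response1, Request1> serviceA = new ServiceA();
Response1 response = serviceA.executeService(request1, new FirstService());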
I would suggest creating two different interfaces, each of which handles its own request and response types.
Of course you can develop an implementation with one generic interface handling all the logic, but from my point of view it may make the code more complex and dirty.
It makes little sense to have an interface if you know that, for one case, most of the interface's methods are not supported and so should not be called by the client.
Why provide the client with an interface that is error-prone to use?
I think that you should have two distinct APIs in your use case, that is, two classes (if an interface is no longer required) or two interfaces.
However, that doesn't mean the two APIs cannot share a common interface ancestor, if it makes sense for some processing where instances should be interchangeable because they rely on the same operation contract.
Is the use of a generic interface (ExecutesService) good in a case where you want to provide subclasses which require different Request and Response types?
It is not classic class derivation, but in some cases it is desirable, as it allows a common interface to be used for implementations that have similar enough methods but don't share the same return type or parameter types in their signatures:
public interface ExecutesService<T,S>
It allows you to define a contract where classic derivation cannot.
However, this way of implementing a class doesn't necessarily allow programming by interface, as the declared type specifies particular type parameters:
ExecutesService<String, Integer> myVar;
cannot be interchanged with:
ExecutesService<Boolean, String> otherVar;
that is, myVar = otherVar does not compile.
I think that your problem is related to this.
You manipulate implementations that have close enough methods but do not really share the same behavior.
So you end up mixing things from two concepts that have no relation to each other.
By using classic inheritance (without generics), you would probably have introduced distinct interfaces very quickly.
I think it is not a good idea to implement an interface and make it possible to call unsupported methods. It is a sign that you should split your interface into two or three, depending on the concrete situation, in such a way that each class implements all methods of the interface it implements.
In your case I would split the interface into three, using inheritance to avoid duplication. Please see the example:
public interface ExecutesService<T, S> {
    T executeFourthService(S obj);
}
public interface ExecutesServiceA<T, S> extends ExecutesService<T, S> {
    T executeSecondService(S obj);
    T executeThirdService(S obj);
}
public interface ExecutesServiceB<T, S> extends ExecutesService<T, S> {
    T executeFirstService(S obj);
}
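An implementing class then only sees the operations it actually supports; a short sketch:
// ServiceB implements only the operations it really provides:
// no UnsupportedOperationException stubs are needed anymore.
public class ServiceB implements ExecutesServiceB<Response2, Request2> {
    @Override
    public Response2 executeFirstService(Request2 obj) {
        //execute some service
    }
    @Override
    public Response2 executeFourthService(Request2 obj) {
        //execute some service
    }
}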
Please also take into account that it is redundant to place the public modifier on interface methods.
Hope this helps.
Assume we have following classes:
public class User
{
    //User Definitions Go Here
}
public class Product
{
    //Product Definitions Go Here
}
public class Order
{
    //Order Definitions Go Here
}
Given the above models, should I create only one repository, like:
public interface IRepository
{
    //IRepository Definition Goes Here
}
Or is it better to have multiple repositories:
public interface IUserRepository
{
    //IUserRepository Definition Goes Here
}
public interface IProductRepository
{
    //IProductRepository Definition Goes Here
}
public interface IOrderRepository
{
    //IOrderRepository Definition Goes Here
}
And what are the pros and cons of each?
There is no "must". You create as many as the app needs. You could have a repository interface for each business object as well as a generic interface.
Something like
interface ICRUDRepo<T> //where T is always a Domain object
{
    T Get(Guid id);
    void Add(T entity);
    void Save(T entity);
}

//then it's best (for maintainability) to define a specific interface for each case
interface IProductsRepository : ICRUDRepo<Product>
{
    //additional methods if needed by the domain use cases only
    //this searches the storage for Products matching a certain criteria,
    //then returns a materialized collection of products
    //which satisfy the given criteria
    IEnumerable<Product> GetProducts(SelectByDate criteria);
}
It's all about having a clean and clear abstraction which allows proper decoupling of the Domain from persistence.
The generic abstraction is there so that we save a few keystrokes and maybe have some common extension methods. However, using a common generic interface for these purposes doesn't really count as DRY.
If you adopt the first approach, you avoid repeating yourself, satisfying the DRY principle, but you break the separation of concerns principle by lumping unconnected items into one interface and any implementing class.
If you adopt the second approach, you implement good separation of concerns, but you risk repeating yourself, breaking the DRY principle.
One solution is a third way: do a mixture.
public interface IRepository<T>
{
    IEnumerable<T> Query { get; }
    void Add(T entity);
    void Delete(T entity);
}
public interface IUserRepository : IRepository<User> { }
public interface IProductRepository : IRepository<Product> { }
public interface IOrderRepository : IRepository<Order> { }
This approach then satisfies both principles.
Consider the following ServerResource derived type:
public class UserResource extends ServerResource {
    @Get
    public User getUser(int id) {
        return new User(id, "Mark", "Kharitonov");
    }
}
(Yes, it always returns the same user no matter the given id).
Is it possible to make it work in Restlet? Because, as far as I understand, the expected signature of the GET handler is:
Representation get();
OR
Representation get(Variant v); // (no idea what it means yet)
Now I understand that I can implement the non-type-safe GET handler to extract the arguments from the request and then invoke getUser, after which I compose the respective Representation instance from the result and return it. But this is boilerplate code; it does not belong with the application code, its place is inside the framework. At least, this is how it is done by OpenRasta, the REST framework I have been using in .NET.
Thanks.
You should remove the parameter from the signature:
@Get
public User getUser() {
    // the id arrives as a query parameter, so extract and parse it here
    final int id = Integer.parseInt(getQuery().getFirstValue("id"));
    return new User(id, "Mark", "Kharitonov");
}
No need to override the get() methods in this case, as the @Get annotation will be automatically detected.
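For completeness, here is a sketch of how such a resource is typically attached in a Restlet application; the /user path is an assumption:
import org.restlet.Application;
import org.restlet.Restlet;
import org.restlet.routing.Router;

public class MyApplication extends Application {
    @Override
    public Restlet createInboundRoot() {
        Router router = new Router(getContext());
        // A GET on /user?id=5 is then routed to UserResource.getUser()
        router.attach("/user", UserResource.class);
        return router;
    }
}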
I have a factory class use case I want to implement with Guice, but I am not sure how.
I have an abstract class named Action which represents the different kinds of actions the user could perform in my app.
Each of the actions is a subclass of Action, and each of them also has an identifier of String type.
Because Actions are heavy objects, I don't want them all instantiated at once, so I provide a factory that instantiates each of them depending on the ID the client asks for.
The Factory Interface looks like:
public interface ActionFactory {
    Action getActionByID(String id);
}
Our implementation of this factory uses a HashMap to maintain the relationship between the String ID and a so-called ActionInstantiator that provides the concrete Action instance.
The implementation looks like this:
public class ActionFactoryImpl implements ActionFactory {
    private HashMap<String, ActionInstantiator> actions;
    private static ActionFactoryImpl instance;

    protected ActionFactoryImpl() {
        this.actions = new HashMap<String, ActionInstantiator>();
        this.buildActionRelationships();
    }

    public static ActionFactoryImpl instance() {
        if (instance == null)
            instance = new ActionFactoryImpl();
        return instance;
    }

    public Action getActionByID(String id) {
        ActionInstantiator ai = this.actions.get(id);
        if (ai == null) {
            String errMessage = "Error. No action with the given ID:" + id;
            MessageBox.alert("Error", errMessage, null);
            throw new RuntimeException(errMessage);
        }
        return ai.getAction();
    }

    protected void buildActionRelationships() {
        this.actions.put("actionAAA", new ActionAAAInstantiator());
        this.actions.put("actionBBB", new ActionBBBInstantiator());
        // ...
    }
}
So a client that uses this factory and wants an ActionAAA instance calls it like this:
Action action = ActionFactoryImpl.instance().getActionByID(actionId);
where actionId was obtained at runtime from the database.
I found that some kind of annotation injection could do something similar, but I don't think that would work in my case, because I only know which instance the user requires at runtime, so I couldn't annotate it in the code.
I'm new to Guice, so maybe this is something very common that I couldn't find in the docs; I apologize if that is the case.
Any help will be appreciated.
Regards
Daniel
You want to use the Multibindings extension, specifically MapBinder. You probably want your ActionInstantiator type to implement Provider<Action>. Then you can do:
MapBinder<String, Action> mapbinder
= MapBinder.newMapBinder(binder(), String.class, Action.class);
mapbinder.addBinding("actionAAA", ActionAAAInstantiator.class);
// ...
Then you can inject a Map<String, Provider<Action>> wherever you want. You'll also be able to inject things into your ActionInstantiators.
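For illustration, here is a hypothetical consumer (ActionDispatcher is an invented name) that replaces the hand-rolled singleton factory:
import java.util.Map;
import javax.inject.Inject;
import javax.inject.Provider;

// Guice injects the map populated by the MapBinder; each Action is
// only instantiated when its Provider is actually invoked.
public class ActionDispatcher {
    private final Map<String, Provider<Action>> actions;

    @Inject
    ActionDispatcher(Map<String, Provider<Action>> actions) {
        this.actions = actions;
    }

    public Action getActionByID(String id) {
        Provider<Action> provider = actions.get(id);
        if (provider == null) {
            throw new RuntimeException("No action with the given ID: " + id);
        }
        return provider.get(); // lazy instantiation happens here
    }
}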