I'm interested in opinions on the best way to handle the concept of "entitlement" using either Spring Security or Shiro.
For example, imagine a JAX-RS endpoint that has a signature like this:
AccountDetails getAccountDetails(String accountId);
Using Spring Security, I might annotate an implementation like:
@Secured(AUTHORIZED_USER)
public AccountDetails getAccountDetails(String accountId) { ... }
or using Shiro,
@RequiresAuthentication
public AccountDetails getAccountDetails(String accountId) { ... }
What I am looking for, however, are recommendations on "best practices" for how to ensure that the user has permission to access the particular account id (which I think is called "entitlement management").
I could imagine a couple of different approaches:
@Secured(AUTHORIZED_USER)
@AccountEntitled
public AccountDetails getAccountDetails(@Account String accountId) { ... }
(which strikes me as not completely straightforward using Spring Security, but I'd love to be wrong).
Or, I could imagine introducing an AccountId domain object, and a factory which will only succeed in turning a String into an AccountId if the principal held by the current security context allows that user to see that account. But that starts to get a bit messy.
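For instance, a rough sketch of that factory idea might look like this (all names are hypothetical and Shiro-flavored; a Spring Security variant would consult its security context instead):

import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authz.AuthorizationException;

// Hypothetical sketch: an AccountId can only be obtained through the factory method,
// which consults the current subject before handing one out.
public final class AccountId {

    private final String value;

    private AccountId(String value) {
        this.value = value;
    }

    public String value() {
        return value;
    }

    public static AccountId entitled(String accountId) {
        // "account:read:<id>" is an assumed wildcard-permission convention
        if (!SecurityUtils.getSubject().isPermitted("account:read:" + accountId)) {
            throw new AuthorizationException("Not entitled to account " + accountId);
        }
        return new AccountId(accountId);
    }
}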
On the whole, I don't want to invent new concepts here; this seems like bread & butter stuff, but I've not had much luck finding credible recommendations around best practices here.
Thanks for any suggestions.
It sounds like what you are trying to do is implement row-level security for specific accounts. There are other Stack Overflow questions (How to implement row-level security in Java? and Database independent row level security solution) that discuss potential solutions to this very problem. Additionally, the link provided in the first answer discusses implementing row-level security with Spring and Hibernate. However, the higher-ranked answer recommends implementing row-level security directly at the database level.
Having worked with Shiro I can say that it can be done. However you must implement your own security structures (Realms, Permissions, Annotations) to accommodate the type of functionality you describe. One approach would be to add an annotation similar to what you have in your last example that indicates the method requires a permission check. This annotation would be tied to an Interceptor which would in turn generate the appropriate permission and then call to the security framework to verify the permission.
It would look something like this.
Method:
@RequiresAuthentication
@Entitled
public AccountDetails getAccountDetails(@Account String accountId) {...}
Interceptor:
@Interceptor
@Entitled
public class EntitledInterceptor {

    @AroundInvoke
    public Object interceptOrder(InvocationContext ctx) throws Exception {
        // return type is AccountDetails
        // parameter[0] is accountId
        Permission p = new CustomPermission(ctx.getMethod().getReturnType(),
                ctx.getParameters()[0]);
        if (SecurityUtils.getSubject().isPermitted(p)) {
            return ctx.proceed();
        } else {
            throw new RowLevelSecurityException("No access!");
        }
    }
}
Realm:
public boolean isPermitted(PrincipalCollection principals, Permission p) {
    if (p instanceof CustomPermission) {
        CustomPermission cp = (CustomPermission) p;
        Class<?> type = cp.getType();                  // AccountDetails
        Integer id = (Integer) cp.getId();             // accountId
        Integer userId = (Integer) principals.getPrimaryPrincipal(); // or username
        return customPermissionCheckingLogic(userId, type, id);
    }
    return false;
}
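For reference, CustomPermission is not a Shiro built-in; a minimal sketch of such a carrier class (assuming the Realm above performs the actual check) could look like this:

import org.apache.shiro.authz.Permission;

// Minimal sketch: the permission merely carries the domain type and row id;
// the decision itself is made in the Realm's isPermitted() shown above.
public class CustomPermission implements Permission {

    private final Class<?> type;
    private final Object id;

    public CustomPermission(Class<?> type, Object id) {
        this.type = type;
        this.id = id;
    }

    public Class<?> getType() {
        return type;
    }

    public Object getId() {
        return id;
    }

    @Override
    public boolean implies(Permission p) {
        return false; // unused here; the Realm short-circuits on instanceof
    }
}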
Obviously this implementation relies on CDI and you having a way to determine what table(s) to check based on the object type provided (JPA annotations work in this regard). Additionally there may be ways to hook into Shiro's annotation scanning to provide more direct/native permission functionality than what I've done here.
Documentation on CDI interceptors.
Related
I'm trying to understand the SRP, and most of the SO threads didn't answer this particular query I'm having.
Use-case
I'm trying to send an email to the user's email address to verify him whenever he tries to register/create a user account on a website.
Without SRP
class UserRegistrationRequest {
    String name;
    String emailId;
}

class UserService {
    Email email;

    boolean registerUser(UserRegistrationRequest req) {
        //store req data in database
        sendVerificationEmail(req);
        return true;
    }

    //Assume UserService class also has other CRUD operation methods()
    void sendVerificationEmail(UserRegistrationRequest req) {
        email.setToAddress(req.getEmailId());
        email.setContent("Hey User, this is your OTP: " + Random.newRandom(100000));
        email.send();
    }
}
The above UserService class violates the SRP, as we are clubbing UserService's CRUD operations and the verification-email code into one single class.
Hence I do,
With SRP
class UserService {
    EmailService emailService;

    boolean registerUser(UserRegistrationRequest req) {
        //store req data in database
        sendVerificationEmail(req);
        return true;
    }

    //Assume UserService class also has other CRUD operation methods()
    void sendVerificationEmail(UserRegistrationRequest req) {
        emailService.sendVerificationEmail(req);
    }
}
class EmailService {
    Email email;

    void sendVerificationEmail(UserRegistrationRequest req) {
        email.setToAddress(req.getEmailId());
        email.setContent("Hey User, this is your OTP: " + Random.newRandom(100000));
        email.send();
    }
}
But even with SRP, the UserService class still holds the sendVerificationEmail() behaviour, even though this time it doesn't hold the entire logic of sending the email.
Aren't we again clubbing CRUD operations and sendVerificationEmail() into one single class, even after applying SRP?
Your feeling is absolutely right. I agree with you.
I think your problem starts with your naming style, since you seem to be quite aware what SRP means. Class names like '...Service' or '...Manager' carry a very vague meaning or semantics. They describe a more generalized context or concept. In other words a '...Manager' class invites you to put everything inside and it still feels right, because it's a manager.
When you get more concrete by trying to focus on the true concepts of your classes or their responsibilities, you will automatically find more specific names with a stronger meaning or semantics. This will really help you to split up classes and to identify responsibilities.
SRP:
There should never be more than one reason to change a certain module.
You could start with renaming the UserService to UserDatabaseContext. Now this would automatically force you to only put database related operations into this class (e.g. CRUD operations).
You can even get more specific here. What are you doing with a database? You read from and write to it. Obviously two responsibilities, which means two classes: one for read operations and another responsible for write operations. These could be very general classes that can read or write anything. Let's call them DatabaseReader and DatabaseWriter, and since we are trying to decouple everything, we are going to use interfaces everywhere. This way we get the two interfaces IDatabaseReader and IDatabaseWriter. These types are very low level, since they know the concrete database (e.g. Microsoft SQL Server or MySQL), how to connect to it, and the exact language to query it (e.g. SQL):
// Knows how to connect to the database
interface IDatabaseWriter {
    void create(Query query);
    void insert(Query query);
    ...
}

// Knows how to connect to the database
interface IDatabaseReader {
    QueryResult readTable(string tableName);
    QueryResult read(Query query);
    ...
}
On top, you could implement a more specialized layer of read and write operations, e.g. for user-related data. We would introduce an IUserDatabaseReader and an IUserDatabaseWriter interface. These interfaces don't know how to connect to the database or what type of database is used. They only know what information is required to read or write user details (e.g. using a Query object that is transformed into a real query by the low-level IDatabaseReader or IDatabaseWriter):
// Knows only about structure of the database (e.g. there is a table called 'user')
// Implementation will internally use IDatabaseWriter to access the database
interface IUserDatabaseWriter {
    void createUser(User newUser);
    void updateUser(User user);
    void updateUserEmail(long userKey, Email emailInfo);
    void updateUserCredentials(long userKey, Credential userCredentials);
    ...
}

// Knows only about structure of the database (e.g. there is a table called 'user')
// Implementation will internally use IDatabaseReader to access the database
interface IUserDatabaseReader {
    User readUser(long userKey);
    User readUser(string userName);
    Email readUserEmail(string userName);
    Credential readUserCredentials(long userKey);
    ...
}
We are still not done with the persistence layer. We can introduce another interface, IUserProvider. The idea is to decouple the database access from the rest of our application. In other words, we consolidate the user-related data query operations into this type. So, IUserProvider will be the only type that has direct access to the data layer. It forms the interface to the application's persistence layer:
interface IUserProvider {
    User getUser(string userName);
    void saveUser(User user);
    User createUser(string userName, Email email);
    Email getUserEmail(string userName);
}
The implementation of IUserProvider is the only class in the whole application that has direct access to the data layer, by referencing IUserDatabaseReader and IUserDatabaseWriter. It wraps reading and writing of data to make data handling more convenient. The responsibility of this type is to provide user data to the application:
class UserProvider implements IUserProvider {
    IUserDatabaseReader userReader;
    IUserDatabaseWriter userWriter;

    // Constructor
    public UserProvider(IUserDatabaseReader userReader,
                        IUserDatabaseWriter userWriter) {
        this.userReader = userReader;
        this.userWriter = userWriter;
    }

    public User getUser(string userName) {
        return this.userReader.readUser(userName);
    }

    public void saveUser(User user) {
        this.userWriter.updateUser(user);
    }

    public User createUser(string userName, Email email) {
        User newUser = new User(userName, email);
        this.userWriter.createUser(newUser);
        return newUser;
    }

    public Email getUserEmail(string userName) {
        return this.userReader.readUserEmail(userName);
    }
}
Now that we tackled the database operations we can focus on the authentication process and continue to extract the authentication logic from the UserService by adding a new interface IAuthentication:
interface IAuthentication {
    void logIn(string userName);
    void logOut(User user);
    void registerUser(UserRegistrationRequest registrationData);
}
The implementation of IAuthentication implements the special authentication procedure:
class EmailAuthentication implements IAuthentication {
    EmailService emailService;
    IUserProvider userProvider;

    // Constructor
    public EmailAuthentication(IUserProvider userProvider,
                               EmailService emailService) {
        this.userProvider = userProvider;
        this.emailService = emailService;
    }

    public void logIn(string userName) {
        Email userEmail = this.userProvider.getUserEmail(userName);
        this.emailService.sendVerificationEmail(userEmail);
    }

    public void logOut(User user) {
        // logout
    }

    public void registerUser(UserRegistrationRequest registrationData) {
        this.userProvider.createUser(registrationData.getUserName(), registrationData.getEmail());
        this.emailService.sendVerificationEmail(registrationData.getEmail());
    }
}
To decouple the EmailService from the EmailAuthentication class, we can remove the dependency on UserRegistrationRequest by letting sendVerificationEmail() take an Email parameter object instead:
class EmailService {
    Email email;

    void sendVerificationEmail(Email userEmail) {
        email.setToAddress(userEmail.getEmailId());
        email.setContent("Hey User, this is your OTP: " + Random.newRandom(100000));
        email.send();
    }
}
Since the authentication is defined by an interface, IAuthentication, you can create a new implementation at any time when you decide to use a different procedure (e.g. WindowsAuthentication), without modifying existing code. This will also work with the IDatabaseReader and IDatabaseWriter once you decide to switch to a different database (e.g. Sqlite). The IUserDatabaseReader and IUserDatabaseWriter implementations will still work without any modification.
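For illustration, such an alternative could be stubbed like this (a hypothetical sketch, not part of the design above):

// Hypothetical stub: swaps the verification procedure, same contract.
class WindowsAuthentication implements IAuthentication {

    public void logIn(string userName) {
        // delegate to the OS account instead of sending an email
    }

    public void logOut(User user) {
        // ...
    }

    public void registerUser(UserRegistrationRequest registrationData) {
        // ...
    }
}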
With this class design, you now have exactly one reason to modify each existing type:
EmailService when you need to change the implementation (e.g. use a different email API)
IUserDatabaseReader or IUserDatabaseWriter when you want to add additional user related read or write operations (e.g. to handle user role)
provide new implementations of IDatabaseReader or IDatabaseWriter when you want to switch the underlying database or need to modify database access
implementations of IAuthentication when the procedure changes (e.g. using built-in OS authentication)
Now everything is cleanly separated. Authentication doesn't mix with CRUD operations. We have an additional layer between application and persistence layer to add flexibility regarding the underlying persistence system. So CRUD operations don't mix with the actual persistence operations.
As a tip: in the future, start with the thinking (design) part first: what must my application do?
handle authentication
handle users
handle a database
handle email
create user responses
show view pages to the user
etc.
As you can see, you can start to implement each step or requirement separately. But this doesn't mean each requirement is realized by exactly one class. As you remember, we split up database access into four responsibilities or classes: reading and writing against the real database (low level), and reading and writing against the database abstraction layer to reflect concrete use cases (high level). Using interfaces adds flexibility and testability to the application.
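To see how the pieces compose, here is a hypothetical wiring example; the concrete MySqlDatabaseReader/MySqlDatabaseWriter and UserDatabaseReader/UserDatabaseWriter classes are assumed implementations of the interfaces above:

// Hypothetical composition root: low-level database access at the bottom,
// user-specific persistence in the middle, authentication on top.
IDatabaseReader lowLevelReader = new MySqlDatabaseReader("jdbc:mysql://localhost/app");
IDatabaseWriter lowLevelWriter = new MySqlDatabaseWriter("jdbc:mysql://localhost/app");

IUserDatabaseReader userReader = new UserDatabaseReader(lowLevelReader);
IUserDatabaseWriter userWriter = new UserDatabaseWriter(lowLevelWriter);

IUserProvider userProvider = new UserProvider(userReader, userWriter);
IAuthentication authentication = new EmailAuthentication(userProvider, new EmailService());

// Each layer only knows the interface directly below it, so any piece can be swapped.
authentication.registerUser(new UserRegistrationRequest("jane", "jane@example.com"));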
There is already a great answer to this question by @BionicCode. I just want to add a short summary and some of my thoughts on the matter.
The SRP can be a tricky one.
In my experience, the granularity of the responsibilities and the number of abstractions that you place in your system will affect its ease of use and its size.
You can add a lot of abstractions and break everything down into very small components. This indeed is something that we should strive for.
Now the question then is: When to stop?
This will depend on:
The size of your application
What parts of it will change more frequently than others
Whether you need to compose objects together, or whether your modules are mostly independent of one another and you don't reuse many objects
How much time you have
The size of your team
A lot of other stuff...
Let's start with how big the team is.
One reason we break our code into separate modules and classes into separate files is so that we can work in a team and avoid too many merges in our favorite source control system. If you need to change a file that contains a component of your system and someone else needs to change it too, this may get ugly pretty fast. Now if you do separate modules using SRP, you get more but smaller modules that most of the time will change independently of one another.
What if the team isn't that big and our modules are not that big either? Do you need to generate more of them?
Here's an example.
Let's say that you have a mobile application that has settings. We may say that containing these settings is one responsibility, and add one interface IApplicationSettings to hold all of them.
In the case where we have 30 settings this interface will be huge and that's bad. It also means that we are probably violating the SRP again as this interface will probably hold settings for multiple different categories.
So we decide to apply the Interface Segregation Principle and the SRP and divide the settings into multiple interfaces: ISomeCategorySettings, IAnotherCategorySettings, etc.
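A sketch of that segregation (category names made up for illustration): one concrete settings object can still implement all categories, while each consumer sees only the slice it needs.

// Hypothetical category interfaces.
interface IDisplaySettings {
    int getBrightness();
}

interface ISyncSettings {
    boolean isWifiOnlySync();
}

// One concrete object implements all categories...
class ApplicationSettings implements IDisplaySettings, ISyncSettings {
    public int getBrightness() { return 80; }
    public boolean isWifiOnlySync() { return true; }
}

// ...but a consumer depends only on the narrow interface it actually needs.
class ScreenDimmer {
    private final IDisplaySettings settings;
    ScreenDimmer(IDisplaySettings settings) { this.settings = settings; }
}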
Now let's say that our application isn't too big (yet) and we have 5 settings. Even if they are from different categories, is it bad to keep these settings in one interface?
I would say that it's fine to have all settings in one interface as long as it doesn't start to slow us down or get ugly (30 or more settings!).
Is it that bad to construct an email and send it from your service object? This indeed is something that can get ugly pretty quickly, so you better move this responsibility from the service object to an EmailSender fast.
If you have a service object that contains 5 methods, do you really need to break it into 5 different objects, one for every operation? If these methods are big, yes. If they are small, keeping them in one object isn't that big of a problem.
SRP is great, but take granularity into account and choose it wisely based on code size, team size etc.
I have managed to add custom directives to the GraphQL schema but I am struggling to work out how to add a custom directive to a field definition. Any hints on the correct implementation would be very helpful.
I am using GraphQL SPQR 0.9.6 to generate my schema
ORIGINAL ANSWER: (now outdated, see the 2 updates below)
It's currently not possible to do this. GraphQL SPQR v0.9.9 will be the first to support custom directives.
Still, in 0.9.8 there's a possible work-around, depending on what you're trying to achieve. SPQR's own meta-data about a field or a type is kept inside custom directives. Knowing that, you can get a hold of the Java method/field underlying the GraphQL field definition. If what you want is e.g. an instrumentation that does something based on a directive, you could instead obtain any annotations on the underlying element, having the full power of Java at your disposal.
The way to get the method would be something like:
Operation operation = Directives.getMappedOperation(env.getField()).get();
Resolver resolver = operation.getApplicableResolver(env.getArguments().keySet());
Member underlyingElement = resolver.getExecutable().getDelegate();
UPDATE:
I posted a huge answer on this GitHub issue. Pasting it here as well.
You can register an additional directive as such:
generator.withSchemaProcessors(
(schemaBuilder, buildContext) -> schemaBuilder.additionalDirective(...));
But (according to my current understanding), this only makes sense for query directives (something the client sends as a part of the query, like @skip or @deferred).
Directives like @dateFormat simply make no sense in SPQR: they're there to help you when parsing SDL and mapping it to your code. In SPQR, there's no SDL and you start from your code.
E.g. @dateFormat is used to tell you that you need to provide date formatting to a specific field when mapping it to Java. In SPQR you start from the Java part and the GraphQL field is generated from a Java method, so the method must already know what format it should return. Or it has an appropriate annotation already. In SPQR, Java is the source of truth. You use annotations to provide extra mapping info. Directives are basically annotations in SDL.
Still, field or type level directives (or annotations) are very useful in instrumentations. E.g. if you want to intercept field resolution and inspect the authentication directives.
In that case, I'd suggest you simply use annotations for the same purpose.
public class BookService {

    @Auth(roles = {"Admin"}) //example custom annotation
    public Book addBook(Book book) { /*insert a Book into the DB */ }
}
As each GraphQLFieldDefinition is backed by a Java method (or a field), you can get the underlying objects in your interceptor or wherever:
GraphQLFieldDefinition field = ...;
Operation operation = Directives.getMappedOperation(field).get();

//Multiple methods can be hooked up to a single GraphQL operation. This gets the @Auth annotations from all of them
Set<Auth> allAuthAnnotations = operation.getResolvers().stream()
        .map(res -> res.getExecutable().getDelegate()) //get the underlying method
        .filter(method -> method.isAnnotationPresent(Auth.class))
        .map(method -> method.getAnnotation(Auth.class))
        .collect(Collectors.toSet());
Or, to inspect only the method that can handle the current request:
DataFetchingEnvironment env = ...; //get it from the instrumentation params
Auth auth = operation.getApplicableResolver(env.getArguments().keySet()).getExecutable().getDelegate().getAnnotation(Auth.class);
Then you can inspect your annotations as you wish, e.g.
Set<String> allNeededRoles = allAuthAnnotations.stream()
        .flatMap(auth -> Arrays.stream(auth.roles()))
        .collect(Collectors.toSet());
if (!currentUser.getRoles().containsAll(allNeededRoles)) {
    throw new AccessDeniedException(); //or whatever is appropriate
}
Of course, there's no real need to actually implement authentication this way, as you're probably using a framework like Spring or Guice (maybe even Jersey has the needed security features), that already has a way to intercept all methods and implement security. So you can just use that instead. Much simpler and safer. E.g. for Spring Security, just keep using it as normal:
public class BookService {

    @PreAuthorize(...) //standard Spring Security
    public Book addBook(Book book) { /*insert a Book into the DB */ }
}
Make sure you also read my answer on implementing security in GraphQL if that's what you're after.
You can use instrumentations to dynamically filter the results in the same way: add an annotation on a method, access it from the instrumentation, and process the result dynamically:
public class BookService {

    @Filter("title ~ 'Monkey'") //example custom annotation
    public List<Book> findBooks(...) { /*get books from the DB */ }
}
new SimpleInstrumentation() {

    // You can also use beginFieldFetch and then onCompleted instead of instrumentDataFetcher
    @Override
    public DataFetcher<?> instrumentDataFetcher(DataFetcher<?> dataFetcher, InstrumentationFieldFetchParameters parameters) {
        GraphQLFieldDefinition field = parameters.getEnvironment().getFieldDefinition();
        Optional<String> filterExpression = Directives.getMappedOperation(field)
                .map(operation ->
                        operation.getApplicableResolver(parameters.getEnvironment().getArguments().keySet())
                                .getExecutable().getDelegate()
                                .getAnnotation(Filter.class).value()); //get the filtering expression from the annotation
        return filterExpression.isPresent()
                ? env -> filterResultBasedOnExpression(dataFetcher.get(parameters.getEnvironment()), filterExpression.get())
                : dataFetcher;
    }
}
For directives on types, again, just use Java annotations. You have access to the underlying types via:
Directives.getMappedType(graphQLType).getAnnotation(...);
This, again, probably makes sense only in instrumentations. I say that because normally the directives provide extra info for mapping SDL to a GraphQL type. In SPQR you map a Java type to a GraphQL type, so a directive makes no sense in that context in most cases.
Of course, if you still need actual GraphQL directives on a type, you can always provide a custom TypeMapper that puts them there.
For directives on a field, it is currently not possible in 0.9.8.
0.9.9 will have full custom directive support on any element, in case you still need them.
UPDATE 2: GraphQL SPQR 0.9.9 is out.
Custom directives are now supported. See issue #200 for details.
Any custom annotation meta-annotated with @GraphQLDirective will be mapped as a directive on the annotated element.
E.g. imagine a custom annotation @Auth(requiredRole = "Admin") used to denote access restrictions:
@GraphQLDirective //Should be mapped as a GraphQLDirective
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD}) //Applicable to methods
public @interface Auth {
    String[] requiredRole();
}
If a resolver method is then annotated with @Auth:
@GraphQLMutation
@Auth(requiredRole = {"Admin"})
public Book addBook(Book newBook) { ... }
The resulting GraphQL field will look like:
type Mutation {
    addBook(newBook: BookInput): Book @auth(requiredRole : "Admin")
}
That is to say, the @Auth annotation got mapped to a directive due to the presence of the @GraphQLDirective meta-annotation.
Client directives can be added via: GraphQLSchemaGenerator#withAdditionalDirectives(java.lang.reflect.Type...).
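For example (a sketch; bookService and MyClientDirective are placeholders for your own service instance and directive type):

GraphQLSchema schema = new GraphQLSchemaGenerator()
        .withOperationsFromSingleton(bookService)          // your service instance
        .withAdditionalDirectives(MyClientDirective.class) // hypothetical client-directive type
        .generate();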
SPQR 0.9.9 also comes with ResolverInterceptors which can intercept the resolver method invocation and inspect the annotations/directives. They are much more convenient to use than Instrumentations, but are not as general (have a much more limited scope). See issue #180 for details, and the related tests for usage examples.
E.g. to make use of the @Auth annotation from above (note that @Auth does not need to be a directive for this to work):
public class AuthInterceptor implements ResolverInterceptor {

    @Override
    public Object aroundInvoke(InvocationContext context, Continuation continuation) throws Exception {
        Auth auth = context.getResolver().getExecutable().getDelegate().getAnnotation(Auth.class);
        User currentUser = context.getResolutionEnvironment().dataFetchingEnvironment.getContext();
        if (auth != null && !currentUser.getRoles().containsAll(Arrays.asList(auth.requiredRole()))) {
            throw new IllegalAccessException("Access denied"); // or return null
        }
        return continuation.proceed(context);
    }
}
If @Auth is a directive, you can also get it via the regular API, e.g.
List<GraphQLDirective> directives = dataFetchingEnvironment.getFieldDefinition().getDirectives();
DirectivesUtil.directivesByName(directives);
It's a RESTful web app. I am using Hibernate Envers to store historical data. Along with the revision number and timestamp, I also need to store other details (for example: IP address and authenticated user). Envers provides multiple ways to have a custom revision entity, which is awesome. I am facing a problem in setting the custom data on the revision entity.
@RevisionEntity(MyCustomRevisionListener.class)
public class MyCustomRevisionEntity extends DefaultRevisionEntity {
    private String userName;
    private String ip;
    //Accessors
}
public class MyCustomRevisionListener implements RevisionListener {
    public void newRevision(Object revisionEntity) {
        MyCustomRevisionEntity customRevisionEntity = (MyCustomRevisionEntity) revisionEntity;
        //Here I need userName and IP address passed as arguments somehow, so that I can set them on the revision entity.
    }
}
Since the newRevision() method does not take any additional arguments, I cannot pass my custom data (like username and IP) to it. How can I do that?
Envers also provides another approach as:
An alternative method to using the org.hibernate.envers.RevisionListener is to instead call the getCurrentRevision( Class revisionEntityClass, boolean persist ) method of the org.hibernate.envers.AuditReader interface to obtain the current revision, and fill it with desired information.
So using the above approach, I'll have to do something like this:
Change my current DAO method from:
public void persist(SomeEntity entity) {
    ...
    entityManager.persist(entity);
    ...
}
to
public void persist(SomeEntity entity, String userName, String ip) {
    ...
    //Do the intended work
    entityManager.persist(entity);
    //Do the additional work
    AuditReader reader = AuditReaderFactory.get(entityManager);
    MyCustomRevisionEntity revision = reader.getCurrentRevision(MyCustomRevisionEntity.class, false);
    revision.setUserName(userName);
    revision.setIp(ip);
}
I don't feel very comfortable with this approach, as keeping audit data seems a cross-cutting concern to me. And I obtain the userName, IP, and other data through the HTTP request object. So all that data will need to flow down right from the entry point of the application (controller) to the lowest layer (DAO layer).
Is there any other way in which I can achieve this? I am using Spring.
I am imagining something like Spring keeping information about the 'stack' to which a particular method invocation belongs, so that when newRevision() is invoked, I know which particular invocation at the entry point led to this invocation. And also, I can somehow obtain the arguments passed to the first method of the call stack.
One good way to do this would be to leverage a ThreadLocal variable.
As an example, Spring Security has a filter that initializes a thread-local variable stored in SecurityContextHolder, and then you can access this data from that specific thread simply by doing something like:
SecurityContext ctx = SecurityContextHolder.getContext();
Authentication authentication = ctx.getAuthentication();
So imagine an additional interceptor that your web framework calls, which either adds the extra information to the Spring Security context (perhaps in a custom user-details object, if you are using Spring Security) or populates your own holder & context object with the information the listener needs.
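A minimal sketch of that custom holder approach, assuming a simple UserContext value object and a servlet filter (all names are hypothetical except the servlet API itself):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

// Hypothetical value object carrying the audit data.
class UserContext {
    private final String userName;
    private final String ipAddress;

    UserContext(String userName, String ipAddress) {
        this.userName = userName;
        this.ipAddress = ipAddress;
    }

    String getUserName() { return userName; }
    String getIpAddress() { return ipAddress; }
}

// ThreadLocal-backed holder, analogous to Spring's SecurityContextHolder.
class UserContextHolder {
    private static final ThreadLocal<UserContext> CONTEXT = new ThreadLocal<>();

    static void setUserContext(UserContext ctx) { CONTEXT.set(ctx); }
    static UserContext getUserContext() { return CONTEXT.get(); }
    static void clear() { CONTEXT.remove(); }
}

// Servlet filter that populates the holder for the duration of the request.
class UserContextFilter implements Filter {
    public void init(FilterConfig filterConfig) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        try {
            UserContextHolder.setUserContext(
                    new UserContext(http.getRemoteUser(), http.getRemoteAddr()));
            chain.doFilter(req, res);
        } finally {
            UserContextHolder.clear(); // threads are pooled, so always clean up
        }
    }
}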
Then it becomes a simple:
public class MyRevisionEntityListener implements RevisionListener {

    @Override
    public void newRevision(Object revisionEntity) {
        // If you use Spring Security, you could read from SecurityContextHolder here instead.
        final UserContext userContext = UserContextHolder.getUserContext();
        MyRevisionEntity mre = MyRevisionEntity.class.cast(revisionEntity);
        mre.setIpAddress(userContext.getIpAddress());
        mre.setUserName(userContext.getUserName());
    }
}
This feels like the cleanest approach to me.
It is worth noting that the other API, getCurrentRevision(Class, boolean), was deprecated as of Hibernate 5.2 and is scheduled for removal in 6.0. While an alternative means may be introduced, the intended way to perform this type of logic is using a RevisionListener.
I am struggling to define where the validation process would best be placed across the different layers of the application. (I am not talking about user input validation here; I'm really talking about object consistency.)
A simple case:
A Blog entity which has a field List<Comment>, and a method
boolean addComment(Comment comment)
I want to check whether the comment parameter of boolean addComment(Comment comment) is null, in which case the method would return false.
To me, such a check could be done in both the Service and Entity layers to ensure that everything is consistent at any layer.
But it seems redundant, and something tells me that only one layer should have that responsibility.
I would say the highest one in the stack, thus the Service layer should do this validation? But when I'm writing my unit tests, it feels wrong to not make that check again in the Entity layer.
My recommendation is to put these checks at the "public" interfaces of the services. With any public method, you can't give any kind of guarantee as to the quality of the input.
Here is the reasoning:
Services may present functionality to internal code clients
As well as being exposed, through a controller, to a web service.
DAOs should never be publicly exposed, so they should never need entity validation. Realistically, though, they will get exposed. If you make sure that only services call DAOs (and only the relevant services call the appropriate DAOs), then you realize DAOs are the wrong place for validation.
Services represent logical choke points in the code where easy validation can occur.
The easiest way to enforce this logic is to create an aspect and put the validation code in there.
<aop:aspect ref="validator" order="3">
    <aop:before method="doValidation" pointcut="execution(public * com.mycompany.myapp.services.*.*(..))"/>
</aop:aspect>
So, this aspect bean example covers all public methods in the service layer.
@Aspect
public class ServiceValidator {
    private Validator validator;

    public ServiceValidator() {
    }

    public ServiceValidator(Validator validator) {
        this.validator = validator;
    }

    public void doValidation(JoinPoint jp) {
        for (Object arg : jp.getArgs()) {
            if (arg != null) {
                // uses hibernate validator
                Set<ConstraintViolation<Object>> violations = validator.validate(arg);
                if (violations.size() > 0) {
                    // do something
                }
            }
        }
    }
}
I have written some code which I thought was quite well-designed, but then I started writing unit tests for it and stopped being so sure.
It turned out that in order to write some reasonable unit tests, I need to change some of my variables' access modifiers from private to default, i.e. expose them (only within the package, but still...).
Here is some rough overview of my code in question. There is supposed to be some sort of address validation framework that enables address validation by different means, e.g. validating addresses via some external web service, or against data in the DB, or by any other source. So I have a notion of Module, which is just this: a separate way to validate addresses. I have an interface:
interface Module {
    public void init(InitParams params);
    public ValidationResponse validate(Address address);
}
There is some sort of factory that, based on a request or session state, chooses a proper module:
class ModuleFactory {
    Module selectModule(HttpRequest request) {
        Module module = chooseModule(request); // analyze request and choose a module
        module.init(createInitParams(request)); // init module
        return module;
    }
}
And then, I have written a Module that uses some external web service for validation, and implemented it like this:
class WebServiceModule implements Module {
    private WebServiceFacade webservice;

    public void init(InitParams params) {
        webservice = new WebServiceFacade(createParamsForFacade(params));
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = webservice.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
So basically I have this WebServiceFacade which is a wrapper over external web service, and my module calls this facade, processes its response and returns some framework-standard response.
I want to test whether WebServiceModule processes responses from the external web service correctly. Obviously, I can't call the real web service in unit tests, so I'm mocking it. But then again, in order for the module to use my mocked web service, the field webservice must be accessible from the outside. It breaks my design and I wonder if there is anything I could do about it. Obviously, the facade cannot be passed in init parameters, because ModuleFactory does not and should not know that it is needed.
I have read that dependency injection might be the answer to such problems, but I can't see how. I have not used any DI frameworks before (like Guice), so I don't know if one could easily be used in this situation. But maybe it could?
Or maybe I should just change my design?
Or screw it and make this unfortunate field package private (but leaving a sad comment like // default visibility to allow testing (oh well...) doesn't feel right)?
Bah! While I was writing this, it occurred to me that I could create a WebServiceProcessor which takes a WebServiceFacade as a constructor argument and then test just the WebServiceProcessor. This would be one of the solutions to my problem. What do you think about it? I have one problem with that, because then my WebServiceModule would be sort of useless, just delegating all its work to other components; I would say: one layer of abstraction too far.
Yes, your design is wrong. You should do dependency injection instead of new ... inside your class (which is also called a "hardcoded dependency"). Inability to easily write a test is a perfect indicator of a wrong design (read about the "Listen to your tests" paradigm in Growing Object-Oriented Software, Guided by Tests).
BTW, using reflection or dependency breaking framework like PowerMock is a very bad practice in this case and should be your last resort.
I agree with what yegor256 said and would like to suggest that the reason why you ended up in this situation is that you have assigned multiple responsibilities to your modules: creation and validation. This goes against the Single responsibility principle and effectively limits your ability to test creation separately from validation.
Consider constraining the responsibility of your "modules" to creation alone. When they only have this responsibility, the naming can be improved as well:
interface ValidatorFactory {
    public Validator createValidator(InitParams params);
}
The validation interface becomes separate:
interface Validator {
    public ValidationResponse validate(Address address);
}
You can then start by implementing the factory:
class WebServiceValidatorFactory implements ValidatorFactory {
    public Validator createValidator(InitParams params) {
        return new WebServiceValidator(new ProdWebServiceFacade(createParamsForFacade(params)));
    }
}
This factory code becomes hard to unit-test, since it is explicitly referencing prod code, so keep this impl very concise. Put any logic (like createParamsForFacade) on the side, so that you can test it separately.
The web service validator itself only gets the responsibility of validation, and takes in the façade as a dependency, following the Inversion of Control (IoC) principle:
class WebServiceValidator implements Validator {
    private final WebServiceFacade facade;

    public WebServiceValidator(WebServiceFacade facade) {
        this.facade = facade;
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = facade.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
Since WebServiceValidator is not controlling the creation of its dependencies anymore, testing becomes a breeze:
@Test
public void aTest() {
    WebServiceValidator validator = new WebServiceValidator(new MockWebServiceFacade());
    ...
}
This way you have effectively inverted the control of the creation of the dependencies: Inversion of Control (IoC)!
Oh, and by the way, write your tests first. This way you will naturally gravitate towards a testable solution, which is usually also the best design. I think that this is due to the fact that testing requires modularity, and modularity is coincidentally the hallmark of good design.