Implementation of the Factory Design Pattern in Java

I am developing a small application for my client and I tried to apply the Factory Method design pattern there. I am not sure if I have done it correctly.
Basically I have an abstract class Scheme that is extended by concrete Schemes (AccountScheme, ContactScheme, OrderScheme etc.). Each class consists mainly of instance variables and a method responsible for transforming the Scheme into an actual system object (AccountScheme will eventually be used to create an Account, ContactScheme a Contact, and so on).
I also have a SchemeFactory class which has a static method createScheme taking two parameters: the type of system object the Scheme should be able to transform into, and a JSON String which will be parsed into the Scheme object itself.
And finally there is an ApiService class which handles REST requests and uses the SchemeFactory to create Schemes (using the request body). The Schemes are processed after that, and at a certain point, if needed, the particular system object is created (using the Scheme) and inserted into the database.
I believe the UML diagram (it is my first one) would look something like this:
UML Diagram

The concept is correct.
However, your UML does not show the abstract class. In your case, you can have something like this (as described in your UML):
class SchemaFactory {
    public static Schema getSchema(String type, String json) {
        if (type.equals("account"))
            return new AccountSchema(json);
        else if (type.equals("contact"))
            return new ContactSchema(json);
        else if (type.equals("order"))
            return new OrderSchema(json);
        throw new IllegalArgumentException("Unknown schema type: " + type);
    }
}
The interface:
interface Schema {
}
The implementation of AccountSchema:
class AccountSchema implements Schema {
    AccountSchema(String json) {
        // use json
    }
}
The abstract class is optional for the pattern. It's useful if you would like to force the Schemas to pass the JSON to the abstract class's constructor, but a schema class can still fake it, like:
public class FakeSchema extends AbstractSchema {
    public FakeSchema() {
        super(null);
    }
}
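If the if-else chain grows, one alternative sketch (assuming Java 9+ for Map.of; the Schema/AccountSchema names follow the answer's code, with minimal stand-in bodies) is to register each constructor in a map keyed by type name:

```java
import java.util.Map;
import java.util.function.Function;

interface Schema {}

// Minimal stand-ins for the concrete schemas from the answer above.
class AccountSchema implements Schema { AccountSchema(String json) { /* use json */ } }
class ContactSchema implements Schema { ContactSchema(String json) { /* use json */ } }
class OrderSchema implements Schema { OrderSchema(String json) { /* use json */ } }

class SchemaFactory {
    // Each entry maps a type name to the matching Schema constructor.
    private static final Map<String, Function<String, Schema>> REGISTRY = Map.of(
            "account", AccountSchema::new,
            "contact", ContactSchema::new,
            "order", OrderSchema::new);

    public static Schema getSchema(String type, String json) {
        Function<String, Schema> ctor = REGISTRY.get(type);
        if (ctor == null) {
            throw new IllegalArgumentException("Unknown schema type: " + type);
        }
        return ctor.apply(json);
    }
}
```

Adding a new schema then only means adding one map entry instead of another else-if branch.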

Related

Does the strategy always need to be passed from the client code in the Strategy Pattern?

I have the following piece of code:
public interface SearchAlgo { public Items search(); }
public class FirstSearchAlgo implements SearchAlgo { public Items search() {...} }
public class SecondSearchAlgo implements SearchAlgo { public Items search() {...} }
I also have a factory to create instances of above concrete classes based on client's input. Below SearchAlgoFactory code is just for the context.
public class SearchAlgoFactory {
    ...
    public static SearchAlgo getSearchInstance(String arg) {
        if (arg.equals("First")) return new FirstSearchAlgo();
        if (arg.equals("Second")) return new SecondSearchAlgo();
        throw new IllegalArgumentException("Unknown algorithm: " + arg);
    }
}
Now, I have a class that takes input from client, get the Algo from Factory and executes it.
public class Manager {
    public Items execute(String arg) {
        SearchAlgo algo = SearchAlgoFactory.getSearchInstance(arg);
        return algo.search();
    }
}
Question:
I feel that I am using both the Factory and Strategy patterns, but I am not sure, because in all the examples I have seen there is a Context class to execute the strategy, and the client provides the strategy they want to use. So, is this a correct implementation of Strategy?
When it comes to implementing design patterns, it is much more important to understand what they do than to conform to some gold-standard reference implementation. And it looks like you understand the strategy pattern.
The important thing about strategies is that the implementation is external to some client code (usually called the context) and that it can be changed at runtime. This can be done by letting the user provide the strategy object directly. However, introducing another level of indirection through your factory is just as viable. Your Manager class acts as the context you see in most UML diagrams.
So, yes. In my opinion, your code implements the strategy pattern.
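For contrast, the "textbook" variant the examples usually show, where the client hands the strategy to the context directly, can be sketched like this (Items is reduced to a placeholder type; FirstSearchAlgo returns a stand-in result):

```java
interface Items {}

interface SearchAlgo {
    Items search();
}

class FirstSearchAlgo implements SearchAlgo {
    public Items search() {
        return new Items() {}; // stand-in result for the sketch
    }
}

// The context holds whichever strategy the caller chose.
class SearchContext {
    private final SearchAlgo algo;

    SearchContext(SearchAlgo algo) {
        this.algo = algo;
    }

    Items executeSearch() {
        return algo.search();
    }
}
```

With the factory in between, the only difference is that the context (your Manager) obtains the strategy from the factory instead of receiving it pre-built; the pattern is the same.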

Designing a custom workflow in Java and Spring

I am working on a Spring 2.0.1.RELEASE application.
Brief of Application:
1. I have separate Transformer beans that transform my DTOs to domain objects and vice versa.
2. I have separate Validator beans that validate the domain object being passed.
3. I have Service classes that take care of applying rules and calling the persistence layer.
Now, I want to build a workflow in my application, where I will just call the start of the workflow and the steps below will be executed in order, with exception handling done per step:
1. First, Transformation: the transformToDomain() method will be called for that object type.
2. Second, Validator: the class's valid() method will be called for that object.
3. Third, Service: the class's save() method will be called for that object.
4. Fourth, Transformation: the transformToDTO() method will be called for that object type.
After this my workflow ends and I will return the DTO object as the response of my REST API.
Exception handling is something I also want to take care of: if a particular exception handler exists for that step, call it; otherwise call the global exception handler.
I designed a prototype of this, but I am looking for expert advice on how this can be achieved with a better design in Java.
An explanation with an example considering the above use case would be highly appreciated.
I'm not so sure that what you are describing is a workflow system in its true sense; perhaps a Chain of Responsibility is more what you are talking about?
Following what you described as a sequence of execution, here is a simplified example of how I would implement the chain:
Transformer.java
public interface Transformer<IN, OUT> {
    OUT transformToDomain(IN dto);
    IN transformToDTO(OUT domainObject);
}
Validator.java
public interface Validator<T> {
    boolean isValid(T object);
}
Service.java
public interface Service {
    void save(Object object);
}
And the implementation that binds everything:
ProcessChain.java
public class ProcessChain {
    private Transformer transformer;
    private Service service;
    private Validator validator;

    Object process(Object dto) throws MyValidationException {
        Object domainObject = transformer.transformToDomain(dto);
        boolean isValid = validator.isValid(domainObject);
        if (!isValid) {
            throw new MyValidationException("Validation message here");
        }
        service.save(domainObject);
        return transformer.transformToDTO(domainObject);
    }
}
I haven't specified any Spring-related things here because your question seems to be a design question rather than a technology question.
Hope this helps
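The chain above uses raw types; a typed sketch is possible if generic parameters are carried end to end (note this makes Service generic too, which differs from the answer's `void save(Object)` signature, and swaps the checked MyValidationException for an unchecked stand-in):

```java
// Stand-ins for the interfaces above, with type parameters bound end to end.
interface Transformer<IN, OUT> {
    OUT transformToDomain(IN dto);
    IN transformToDTO(OUT domainObject);
}

interface Validator<T> {
    boolean isValid(T object);
}

interface Service<T> {
    void save(T object);
}

class ValidationException extends RuntimeException {
    ValidationException(String message) { super(message); }
}

// Typed chain: the compiler now checks that each step's output feeds the next step's input.
class TypedProcessChain<DTO, DOMAIN> {
    private final Transformer<DTO, DOMAIN> transformer;
    private final Validator<DOMAIN> validator;
    private final Service<DOMAIN> service;

    TypedProcessChain(Transformer<DTO, DOMAIN> transformer,
                      Validator<DOMAIN> validator,
                      Service<DOMAIN> service) {
        this.transformer = transformer;
        this.validator = validator;
        this.service = service;
    }

    DTO process(DTO dto) {
        DOMAIN domainObject = transformer.transformToDomain(dto);
        if (!validator.isValid(domainObject)) {
            throw new ValidationException("Validation message here");
        }
        service.save(domainObject);
        return transformer.transformToDTO(domainObject);
    }
}
```

The trade-off is one chain instance per DTO/domain pair instead of a single untyped chain, in exchange for compile-time checking of each step.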
A brief summary of what I implemented, without much hassle.
This is how I created the flow of handlers:
Stream.<Supplier<RequestHandler>>of(
        TransformToDomainRequestHandler::new,
        ValidateRequestHandler::new,
        PersistenceHandler::new,
        TransformToDTORequestHandler::new)
    .sequential()
    .map(Supplier::get)                   /* create the handler instance */
    .reduce((processed, unProcessed) -> { /* chains all handlers together */
        /* walk to the tail of the already-chained handlers */
        RequestHandler previous = processed;
        RequestHandler target = previous.getNextRequestHandler();
        while (target != null) {
            previous = target;
            target = target.getNextRequestHandler();
        }
        previous.setNextRequestHandler(unProcessed);
        return processed;
    }).get();
This is my RequestHandler, which all other handlers extend:
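The RequestHandler base class itself is not shown above; a minimal hypothetical sketch of what such a chain element could look like, assuming only the getNextRequestHandler/setNextRequestHandler accessors used in the stream snippet:

```java
// Hypothetical base class matching the accessors used in the stream snippet above.
abstract class RequestHandler {
    private RequestHandler next;

    RequestHandler getNextRequestHandler() {
        return next;
    }

    void setNextRequestHandler(RequestHandler next) {
        this.next = next;
    }

    // Each concrete handler performs its step, then delegates to the next handler in the chain.
    Object handle(Object request) {
        Object result = process(request);
        return next != null ? next.handle(result) : result;
    }

    protected abstract Object process(Object request);
}
```

Each concrete handler (transform, validate, persist) would then only override process(), and per-step exception handling could wrap the process() call inside handle().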

Spring Data Rest ResourceProcessor for Projection Exception

I have created the following Projection
@Projection(name = "select", types = {Organisation.class})
public interface OrganisationSelectProjection {
    Long getId();
    String getName();
}
I basically want to use this in a "Select" component, so I need the least data possible. I also wanted to remove all the links with a ResourceProcessor, so I created this:
@Bean
public ResourceProcessor<Resource<OrganisationSelectProjection>> organisationProcessor() {
    return resource -> {
        resource.removeLinks();
        return resource;
    };
}
However, it looks like this breaks the entire API: whatever endpoint I hit, I get the following exception message: org.springframework.hateoas.PagedResources cannot be cast to org.springframework.hateoas.Resource
Any idea what I have done wrong?
If you'd like to keep the anonymous class in place, using ResourceSupport instead of Resource can resolve the issue.
@Bean
public ResourceProcessor<ResourceSupport> organisationProcessor() {
    return resource -> {
        resource.removeLinks();
        return resource;
    };
}
But in this case the processor will be applied to each and every resource regardless of the type of its content (though you can check the type inside the method body).
Instead of an anonymous descendant of ResourceProcessor, you can define it as a stand-alone class:
public class OrganizationResourceProcessor implements ResourceProcessor<Resource<OrganisationSelectProjection>> {
    @Override
    public Resource<OrganisationSelectProjection> process(Resource<OrganisationSelectProjection> resource) {
        resource.removeLinks();
        return resource;
    }
}

@Bean
public OrganizationResourceProcessor organisationProcessor() {
    return new OrganizationResourceProcessor();
}
The issue you experienced has to do with type erasure: there are no type parameters in the anonymous class implementation. Your definition is type-safe, but it lacks the type-related data used at runtime to determine whether a particular ResourceProcessor can process a resource. So spring-data-rest thinks the anonymous organisationProcessor can process PagedResources and feeds one to the processor, where the ClassCastException happens.
Not everything spring-data-rest puts through ResourceProcessors is a Resource. There can be different descendants of the org.springframework.hateoas.ResourceSupport class (like PagedResources in your case), and the majority of them do not inherit from Resource.

How to use Spring MVC @JsonView when returning an object hierarchy from a Rest Controller

I'm building an application which uses Spring MVC 4.10 with Jackson 2.3.2.
I have a Project class which has children Proposal objects and a Customer object. These Proposal objects are complex and I want to return a summarized JSON view of them. A similar situation happens with the Customer object. I'm trying to implement this with #JsonView annotations.
I wanted to ask if extending the views of the member object classes in the container object class view is the way to do this or, if not, if there is a cleaner way to implement this that I am unaware of.
Context
Before today, I was under the false impression that you could annotate your controller with multiple views and that the resulting JSON representation would be filtered accordingly.
@JsonView({Project.Extended.class, Proposal.Summary.class, Customer.Summary.class})
@Transactional
@RequestMapping(value="/project", method=RequestMethod.GET)
public @ResponseBody List<Project> findAll() {
    return projectDAO.findAll();
}
Where each class had its own JsonView annotations and interfaces
e.g.:
public class Customer {
    ...
    public interface Summary {}
    public interface Normal extends Summary {}
    public interface Extended extends Normal {}
}
Nevertheless, it is only the first view in the array that gets taken into account. According to https://spring.io/blog/2014/12/02/latest-jackson-integration-improvements-in-spring
Only one class or interface can be specified with the @JsonView annotation, but you can use inheritance to represent JSON View hierarchies (if a field is part of a JSON View, it will be also part of parent view). For example, this handler method will serialize fields annotated with @JsonView(View.Summary.class) and @JsonView(View.SummaryWithRecipients.class):
and the official documentation in http://docs.spring.io/spring/docs/current/spring-framework-reference/html/mvc.html#mvc-ann-jsonview
To use it with an @ResponseBody controller method or controller methods that return ResponseEntity, simply add the @JsonView annotation with a class argument specifying the view class or interface to be used:
So, I ended up extending the views of the members in the view of the container object, like this
@Entity
public class Project {
    ...
    public static interface Extended extends Normal, Proposal.Extended {}
    public static interface Normal extends Summary, Customer.Normal {}
    public static interface Summary {}
}
and changed my controller to this
@JsonView(Project.Extended.class)
@Transactional
@RequestMapping(value="/project", method=RequestMethod.GET)
public @ResponseBody List<Project> findAll() {
    return projectDAO.findAll();
}
This seems to do the trick, but I couldn't find documentation or discussion about this situation. Is this the intended use of JsonViews or is it kind of hackish?
Thank you in advance
-Patricio Marrone
I believe you have configured your views as necessary. The root of the issue is not Spring's @JsonView, but rather Jackson's implementation of views. As stated in Jackson's view documentation:
Only single active view per serialization; but due to inheritance of Views, can combine Views via aggregation.
So, it appears that Spring is simply passing on and adhering to the limitation set in place by Jackson 2.
I use Jersey+Jackson but ran into just the same problem.
Here is a trick I use in my application to allow requiring several JSON Views during serialization. I bet it is also possible with Spring MVC instead of Jersey, but I'm not 100% sure. It also does not seem to have performance issues. Maybe it is a bit complicated for your case, but if you have a large object with a big number of possible views, it may be better than doing a lot of inheritance.
So I use the Jackson filter approach to require several views during serialization. However, I haven't found a way to avoid putting @JsonFilter("name") above the classes to map, which does not make it so clean. But at least I mask it in a custom annotation:
@Retention(RetentionPolicy.RUNTIME)
@JacksonAnnotationsInside
@JsonFilter(JSONUtils.JACKSON_MULTIPLE_VIEWS_FILTER_NAME)
public @interface JsonMultipleViews {}
The filter itself looks like this:
public class JsonMultipleViewsFilter extends SimpleBeanPropertyFilter {
    private Collection<Class<?>> wantedViews;

    public JsonMultipleViewsFilter(Collection<Class<?>> wantedViews) {
        this.wantedViews = wantedViews;
    }

    @Override
    public void serializeAsField(Object pojo, JsonGenerator jgen, SerializerProvider provider, PropertyWriter writer) throws Exception {
        if (include(writer)) {
            JsonView jsonViewAnnotation = writer.getAnnotation(JsonView.class);
            // serialize the field only if there is no @JsonView annotation or, if there is one, check that at least
            // one of the view classes above the field fits one of the required classes; if yes, serialize the field,
            // if no, skip the field
            if (jsonViewAnnotation == null || containsJsonViews(jsonViewAnnotation.value())) {
                writer.serializeAsField(pojo, jgen, provider);
            }
        } else if (!jgen.canOmitFields()) {
            // since 2.3
            writer.serializeAsOmittedField(pojo, jgen, provider);
        }
    }

    private boolean containsJsonViews(Class<?>[] viewsOfProperty) {
        for (Class<?> viewOfProperty : viewsOfProperty) {
            for (Class<?> wantedView : wantedViews) {
                // check also subclasses of the required view class
                if (viewOfProperty.isAssignableFrom(wantedView)) {
                    return true;
                }
            }
        }
        return false;
    }

    @Override
    protected boolean include(BeanPropertyWriter writer) {
        return true;
    }

    @Override
    protected boolean include(PropertyWriter writer) {
        return true;
    }
}
I can use this filter like this:
public static String toJson(Object object, Collection<Class<?>> jsonViewClasses) throws JsonProcessingException {
    // if no JSON view class is provided, just map without the view approach
    if (jsonViewClasses.isEmpty()) {
        return mapper.writeValueAsString(object);
    }
    // if only one JSON view class is provided, use the out-of-the-box Jackson mechanism for handling JSON views
    if (jsonViewClasses.size() == 1) {
        return mapper.writerWithView(jsonViewClasses.iterator().next()).writeValueAsString(object);
    }
    // if more than one JSON view class is provided, use the custom filter to serialize with multiple views
    JsonMultipleViewsFilter jsonMultipleViewsFilter = new JsonMultipleViewsFilter(jsonViewClasses);
    return mapper.writer(new SimpleFilterProvider()    // use the filter approach when serializing
            .setDefaultFilter(jsonMultipleViewsFilter) // set it as the default filter in case of an error in the filter name
            .addFilter(JACKSON_MULTIPLE_VIEWS_FILTER_NAME, jsonMultipleViewsFilter) // set the custom filter for multiple views by name
            .setFailOnUnknownId(false))                // if the filter is unknown, don't fail, use the default one
        .writeValueAsString(object);
}
After that, Jersey allows us to add Jersey filters when the application starts up (it goes through each endpoint in each controller at startup, and at that moment we can easily bind the Jersey filters wherever the @JsonView annotation above the endpoint has multiple values).
In the Jersey filter for a @JsonView annotation with multiple values above an endpoint, once it is bound at startup to the correct endpoints depending on the annotations, we can easily override the response entity by calling that utils method:
toJson(previousResponseObjectReturned, viewClassesFromAnnotation);
There is no reason to provide the code of the Jersey filter here since you're using Spring MVC. I just hope that it's easy to do the same in Spring MVC.
The Domain Object would look like this:
@JsonMultipleViews
public class Example {
    private int id;
    private String name;

    @JsonView(JsonViews.Extended.class)
    private String extendedInfo;

    @JsonView(JsonViews.Meta.class)
    private Date updateDate;

    public static class JsonViews {
        public interface Min {}
        public interface Extended extends Min {}
        public interface Meta extends Min {}
        //...
        public interface All extends Extended, Meta {} // interfaces are needed for multiple inheritance of views
    }
}
We can omit putting Min.class on those fields that are always required regardless of view. We just put Min in the required views, and it will serialize all fields that have no @JsonView annotation.
The All.class view is required in my case: if we have a specific set of views for each domain class (like in my case) and then need to map a complex model consisting of several domain objects that both use the views approach (some view for object one, but all views for object two), it's easier to put it above the endpoint like this:
@JsonView({ObjectOneViews.SomeView.class, ObjectTwoViews.All.class})
because if we omit ObjectTwoViews.All.class here and ask for only ObjectOneViews.SomeView.class, the fields that are marked with the annotation in object two will not be serialized.

Java OO concept: better to flatten, or to include an unnecessary step to simplify the usage?

Say I have a JSON parser class for type DataObject, as follows:
class JsonDataParser {
    public DataObject parseData(String data) {
        // Parse data
        return dataObject;
    }
}
Now I have to connect to the server to get the JSON, so I have this:
class DataRetriever {
    public String getData() {
        // Get data from server using URL
        return jsonDataString;
    }
}
The question is: is it better to code as follows in the main class (Method A)
DataObject d = new JsonDataParser().parseData(new DataRetriever().getData());
Or to modify DataRetriever to include the JSON parsing inside (Method B), as follows:
class DataRetriever {
    public DataObject getData() {
        // Get data from server using URL
        return new JsonDataParser().parseData(jsonDataString);
    }
}
The result would be the same in both cases, but conceptually, is it better to code using Method A or Method B?
I would normally create a controller class following Method A, but I'm not sure if this is efficient or not.
EDIT:
In fact, I'm thinking of using generic types and an interface when parsing JSON. So I prepared the following:
public interface IJsonParser<T> {
    public T parseJSON(String jsonString);
    public T parseJSON(JSONObject jsonObject);
}
and then added it to DataRetriever's constructor as follows:
class DataRetriever<T> {
    private IJsonParser<T> parser;

    public DataRetriever(IJsonParser<T> parser) {
        this.parser = parser;
    }

    public T getData() {
        // Get data from server using URL
        return parser.parseJSON(jsonDataString);
    }
}
Ultimately, I was thinking of allowing multiple types of parser, so that each time we can supply our own parser to add flexibility. Is this considered a malpractice of sorts?
Here you should not be looking at efficiency, as both will be equally efficient. Rather, you should think about the future extensibility of the class.
In that regard, I think Method B would be good, as you can change the parser to some other one in the future.
class DataRetriever {
    private DataParser dataParser;

    public DataRetriever() {
        dataParser = new JsonDataParser(); // tomorrow this can be changed to something else
    }

    public DataObject getData(String data) {
        // Get data from server using URL
        return dataParser.parseData(data);
    }
}
UPDATE:
If you are looking for a proper OOP solution for this, then you should go with the factory pattern, which will return an instance of the parser based on the type you provide.
class ParserFactory {
    // get parser
    public DataParser createParser(int type) {
        // creates the new parser based on type and returns it
    }
}
This would be better, as creating the parser would be the responsibility of the factory, which makes more sense IMO.
Link for Factory Pattern
It depends on how often you call the method. Generally you would put the new JsonDataParser() somewhere the object stays alive until the next parseData(...), for example as an object property of DataRetriever.
EDIT:
This in turn requires your DataRetriever object to stay alive. In case this is for a website (e.g. Tomcat or another servlet container), you should look into Tomcat's feature for putting objects into the servlet or application context.
That depends on your usage of the getData() method. In the first case, you retrieve the String from the method and later use a JsonDataParser object to convert it. In the second case, you tightly couple your DataRetriever class to the JsonDataParser object, which is not considered good practice: suppose you want to change your JsonDataParser class later; you would then have to change DataRetriever as well. So from a design perspective, the first approach is better.
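The tight coupling warned about here can be avoided by injecting the parser behind an interface, a minimal sketch (names are illustrative, and the parser body is reduced to a pass-through):

```java
interface DataParser {
    Object parseData(String data);
}

class JsonDataParser implements DataParser {
    public Object parseData(String data) {
        // real code would parse the JSON here; simplified to a pass-through for the sketch
        return data;
    }
}

class DataRetriever {
    private final DataParser parser; // depends on the abstraction, not the concrete class

    DataRetriever(DataParser parser) {
        this.parser = parser;
    }

    Object getData(String rawData) {
        // real code would fetch rawData from the server first
        return parser.parseData(rawData);
    }
}
```

Swapping JsonDataParser for another DataParser implementation then requires no change to DataRetriever at all.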