Best design pattern to implement upload feature - Java

I am working on a web application based on Spring MVC. We have various screens for adding different domain components (e.g. account details, employee details, etc.). I need to implement an upload feature for each of these domain components, i.e. to upload accounts, upload employee details, etc., which will be provided in a CSV file (open the file, parse its contents, validate and then persist).
My question is: which design pattern should I consider for implementing such a requirement, so that the upload feature (open the file, parse its contents, validate and then persist) becomes generic? I was thinking about using the Template Method pattern.
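For concreteness, here is a rough sketch of what I have in mind with the Template Method approach (the names are just illustrative):

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

abstract class CsvUploadTemplate<T> {
    // The template method: the fixed sequence of upload steps.
    public final void upload(File csvFile) throws IOException {
        List<String[]> rows = readCsv(csvFile);
        List<T> items = new ArrayList<>();
        for (String[] row : rows) {
            T item = parse(row); // domain-specific
            validate(item);      // domain-specific
            items.add(item);
        }
        persist(items);          // domain-specific
    }

    // Shared step: open and tokenize the CSV file (simplified; a real
    // implementation would use a CSV library to handle quoting etc.).
    protected List<String[]> readCsv(File csvFile) throws IOException {
        List<String[]> rows = new ArrayList<>();
        for (String line : java.nio.file.Files.readAllLines(csvFile.toPath())) {
            rows.add(line.split(","));
        }
        return rows;
    }

    protected abstract T parse(String[] row);
    protected abstract void validate(T item);
    protected abstract void persist(List<T> items);
}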
Any suggestions, pointers or links would be highly appreciated.

I am not going to answer your question. That said, let me answer your question! ;-)
I think that design patterns should not be a concern in this stage of development. In spite of their greatness (and I use them all the time), they should not be your primary concern.
My suggestion is to implement the first upload feature, then the second, and then look at what they have in common and extract a "mother" class. When you come to a third class, repeat the process of generalization. The generic class will emerge naturally from this process.
Sometimes I believe people tend to over-engineer and over-plan. I am in good company: http://www.joelonsoftware.com/items/2009/09/23.html. Obviously, I am not advocating designing no software at all - that never works well. Nevertheless, looking for similarities after some of the code has been implemented and then refactoring may achieve better results (have you read http://www.amazon.com/Refactoring-Improving-Design-Existing-Code/dp/0201485672/ref=sr_1_1?ie=UTF8&qid=1337348138&sr=8-1 yet? It is old but still great!).

A Strategy pattern may be useful here for the uploader. The Uploader class would be a sort of container/manager class that simply holds a parsing attribute and a persistence attribute. Both of these attributes would be typed as an interface (or abstract base class) and would have multiple implementations. Even though you say it will always be CSV and Oracle, this approach is future-proof and also separates the parsing/validation from the persistence code.
Here's an example:
class Uploader {
    private Parser parser;
    private Persistence persistence;

    public void setParser(Parser parser) { this.parser = parser; }
    public void setPersister(Persistence persistence) { this.persistence = persistence; }

    public void upload() {
        parser.read();
        parser.parse();
        parser.validate();
        persistence.persist(parser.getData());
    }
}

interface Parser {
    void read();
    void parse();
    void validate();
    String getData();
}

interface Persistence {
    void persist(String data);
}

class CsvParser implements Parser {
    // implement everything here
}

// more Parser implementations as needed

class DbPersistence implements Persistence {
    // implement everything here
}

class NwPersistence implements Persistence {
    // implement everything here
}

// more Persistence implementations as needed

You could use an Abstract Factory pattern.
Have an upload interface, implement it for each of the domain objects, and construct the right implementation in the factory based on the class passed in.
E.g.
Uploader uploader = UploadFactory.getInstance(Employee.class);
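A minimal sketch of how such a factory might look (the domain classes and Uploader implementations here are hypothetical):

import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

interface Uploader {
    void upload(File csvFile);
}

// Hypothetical domain and uploader classes, purely for illustration.
class Employee {}
class Account {}
class EmployeeUploader implements Uploader {
    public void upload(File csvFile) { /* parse, validate, persist employees */ }
}
class AccountUploader implements Uploader {
    public void upload(File csvFile) { /* parse, validate, persist accounts */ }
}

class UploadFactory {
    private static final Map<Class<?>, Supplier<Uploader>> REGISTRY = new HashMap<>();
    static {
        REGISTRY.put(Employee.class, EmployeeUploader::new);
        REGISTRY.put(Account.class, AccountUploader::new);
    }

    // Returns a fresh Uploader for the given domain class.
    public static Uploader getInstance(Class<?> domainClass) {
        Supplier<Uploader> supplier = REGISTRY.get(domainClass);
        if (supplier == null) {
            throw new IllegalArgumentException("No uploader registered for " + domainClass);
        }
        return supplier.get();
    }
}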


Where to put the behaviour of DTOs? Object vs data structure (clean code)

A similar question was posted here: Clean code - how to design this class?
I still haven't found an answer though; I'm confused!
I have read the book "Clean Code" too. In one part, the author says you shouldn't mix data structures and objects: a thing should either be a data structure with no behaviour or an object with behaviour.
In my application we have data transfer objects which carry data from external services. These DTOs have just accessors and mutators, so I was considering them to be of the data structure type.
However, Robert Martin says in his book that client.isMarried() is better than isMarried(client). I find this logical, as the isMarried function uses attributes only from the Client class - it is cleaner.
In many areas of my application we need some behaviour on certain DTOs, and I'm confused about where to put this behaviour.
We have made Utils classes that contain business logic, like:
class ClientUtils {
    boolean isMarried(Client client) { ... }
    String getCompleteName(Client client) { ... }
}
Should this go in the service layer, even though these methods don't manipulate anything other than the input object and don't interact with another layer (DAL, services, ...)?
Since you can't change the Client class due to the external library constraint, I wouldn't extend it. I suggest making a ClientInfo wrapper class that "has a" Client member instead.
class ClientInfo {
    private Client myClient;

    public ClientInfo(Client c) {
        myClient = c;
    }

    public boolean isMarried() { ... }
    public String getCompleteName() { ... }
}
If you ask me, a Utils class just means you have random static methods lingering somewhere that contain actual business logic. Why not keep DTOs as DTOs, and create a ClientManager class that has the isMarried method?
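A minimal sketch of that idea (the Client accessors used here are hypothetical stand-ins for whatever the external DTO actually exposes):

// Stub of the external DTO, for illustration only; the real Client comes
// from the external service and cannot be changed.
class Client {
    private String firstName;
    private String lastName;
    private String maritalStatus;
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
    public String getMaritalStatus() { return maritalStatus; }
}

// The DTO stays a pure data carrier; the manager holds the behaviour.
class ClientManager {
    public boolean isMarried(Client client) {
        return "MARRIED".equals(client.getMaritalStatus());
    }

    public String getCompleteName(Client client) {
        return client.getFirstName() + " " + client.getLastName();
    }
}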
The ClientInfo approach that wraps the external object is another option, possibly driven by Domain Driven Security.

Best practice to associate message and target class instance creation

The program I am working on has a distributed architecture, more precisely the broker-agent pattern. The broker sends messages to its corresponding agent to tell the agent to execute a task. Each message contains the target task information (the task name, configuration properties needed for the task to perform, etc.). On the agent side, each task is implemented in a separate class, like:
public class Task1 {}
public class Task2 {}
public class Task3 {}
...
Messages are in JSON format like:
{
    "taskName": "Task1", // put the class name here
    "config": {
    }
}
So what I need is to associate the message sent from the broker with the right task in the agent side.
I know one way is to put the target task class name in the message, so that the agent can create an instance of that task class from the name extracted from the message, using reflection, like:
Class.forName(className).getConstructor(String.class).newInstance(arg);
I want to know the best practice for implementing this association. The number of tasks is growing, and I think writing raw strings is error-prone and hard to maintain.
If you're that specific about classnames you could even think about serializing task objects and sending them directly. That's probably simpler than your reflection approach (though even tighter coupled).
But usually you don't want that kind of coupling between Broker and Agent. A broker needs to know which task types there are and how to describe the task in a way that everybody understands (like in JSON). It doesn't / shouldn't know how the Agent implements the task. Or even in which language the Agent is written. (That doesn't mean that it's a bad idea to define task names in a place that is common to both code bases)
So you're left with finding a good way to construct objects (or call methods) inside your agent based on some string. And the common solution for that is some form of factory pattern like: http://alvinalexander.com/java/java-factory-pattern-example - also helpful: a Map<String, Factory> like
interface Task {
    void doSomething();
}

interface Factory {
    Task makeTask(String taskDescription);
}

Map<String, Factory> taskMap = new HashMap<>();

void init() {
    taskMap.put("sayHello", new Factory() {
        @Override
        public Task makeTask(String taskDescription) {
            return new Task() {
                @Override
                public void doSomething() {
                    System.out.println("Hello " + taskDescription);
                }
            };
        }
    });
}

void onTask(String taskName, String taskDescription) {
    Factory factory = taskMap.get(taskName);
    if (factory == null) {
        System.out.println("Unknown task: " + taskName);
        return; // nothing to execute
    }
    Task task = factory.makeTask(taskDescription);
    // execute task somewhere, e.g. on its own thread
    new Thread(task::doSomething).start();
}
http://ideone.com/We5FZk
And if you want it fancy, consider annotation-based reflection magic. It depends on how many task classes there are: the more there are, the more effort is worth putting into an automagic solution that hides the complexity from you.
For example, the map above could be filled automatically by adding some classpath scanning for classes of the right type carrying an annotation that holds the string(s). Or you could let some DI framework inject all the things that need to go into the map. In larger projects, DI usually solves these kinds of issues really well: https://softwareengineering.stackexchange.com/questions/188030/how-to-use-dependency-injection-in-conjunction-with-the-factory-pattern
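A hedged sketch of the annotation idea (the annotation and registry names are made up, and registration is explicit here; real classpath scanning would need a library such as Reflections):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.HashMap;
import java.util.Map;

// Marks a Task implementation with the name used in broker messages.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface TaskName {
    String value();
}

interface Task {
    void doSomething();
}

@TaskName("sayHello")
class SayHelloTask implements Task {
    private final String description;
    public SayHelloTask(String description) { this.description = description; }
    public void doSomething() { System.out.println("Hello " + description); }
}

class TaskRegistry {
    private final Map<String, Class<? extends Task>> taskMap = new HashMap<>();

    // Explicit list here; classpath scanning could discover these instead.
    @SafeVarargs
    public final void register(Class<? extends Task>... types) {
        for (Class<? extends Task> type : types) {
            TaskName name = type.getAnnotation(TaskName.class);
            if (name != null) {
                taskMap.put(name.value(), type);
            }
        }
    }

    // Assumes every task class has a public single-String constructor.
    public Task create(String taskName, String taskDescription) throws Exception {
        Class<? extends Task> type = taskMap.get(taskName);
        if (type == null) {
            throw new IllegalArgumentException("Unknown task: " + taskName);
        }
        return type.getConstructor(String.class).newInstance(taskDescription);
    }
}

Usage would then be along the lines of new TaskRegistry().register(SayHelloTask.class) followed by create("sayHello", "world").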
And besides writing your own distribution system, you can probably use existing ones (and reuse rather than reinvent is a best practice). Maybe http://www.typesafe.com/activator/template/akka-distributed-workers or, more generally, http://twitter.github.io/finagle/ work in your context. But there are way too many other open-source distributed systems covering different aspects to name all the interesting ones.

What's a good design pattern to implement a network protocol (XML)?

I want to implement a network protocol. To obtain a maintainable design, I am looking for fitting patterns.
The protocol is based on XML and should be read with Java. To simplify the discussion, I assume this example grammar:
<User>
    <GroupList>
        <Group>group1</Group>
        <Group>group2</Group>
    </GroupList>
</User>
Short question:
What is a good design pattern to parse such a thing?
Long version:
I have found this and this question where different patterns (mostly state pattern) are proposed.
My current (but lacking) solution is the following:
I create a class for each possible entry in the XML to contain the data, plus a nested parser. Thus I have User, User.Parser, ... as classes.
Further, there is a ParserSelector that has a Map<String,XMLParser> in which all possible subentries get registered.
For each parser a ParserSelector gets instantiated and set up.
For example the ParserSelector of the GroupList.Parser has one entry: The mapping from the string "Group" to an instance of Group.Parser.
If I did not use the ParserSelector class, I would have to write this block of code into every single parser.
The problem is now how to get the read data to the superobjects.
The Group.Parser would create a Group object with content group1.
This object must now be registered in the GroupList object.
I have read of using Visitor or Observer patterns but do not understand how they might fit here.
I give some pseudo code below to illustrate the problem.
As you can see, I have to check the type via instanceof, because the type information is not available statically.
I thought it should be possible to solve this more cleanly (more maintainably) using polymorphism in Java.
But then I always face the problem that Java only does dynamic binding on overriding.
Thus I cannot add a parameter to the XMLParser.parse(...) method to allow "remote updating" as in a visitor/observer-like approach.
Side remark: The real grammar is "deep", i.e. there are quite a lot of XML entries (here only three: User, GroupList and Group), while most of them may contain only very few different subentries (here User and GroupList may contain only one kind of subentry, while Group contains only text).
Here are some lines of pseudo Java code to illustrate the problem:
class User extends AbstractObject {
    static class Parser implements XMLParser {
        ParserSelector ps = ...; // initialized with GroupList.Parser

        void parse(XMLStreamReader xsr) {
            XMLParser p = ps.getParser(...); // the corresponding parser;
            // statically we only know that it is an XMLParser
            p.parse(...);
            if (p instanceof GroupList.Parser) {
                // set the group list in the User object
            }
        }
    }
}

class GroupList extends AbstractObject { ... }
class Group extends AbstractObject { ... }

class ParserSelector {
    Map<String, XMLParser> map = new HashMap<>();

    void registerParser(...) { ... } // registers a possible parser for subentries

    XMLParser getParser(String elementName) {
        return map.get(elementName); // returns the parser registered under the given name
    }
}

interface XMLParser {
    void parse(XMLStreamReader xsr);
}

abstract class AbstractObject {}
To finish this question:
I ended up with JAXB. In fact, I was not aware that it makes it easy to create an XML Schema from Java source code (using annotations).
Thus I just have to write classical Java transfer objects, and the API handles the conversion to and from XML quite well.
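For illustration, here is a minimal sketch of what a JAXB mapping for the example grammar might look like (the annotation placement is just one of several possible mappings):

import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.*;
import java.io.StringReader;
import java.util.List;

@XmlRootElement(name = "User")
@XmlAccessorType(XmlAccessType.FIELD)
class User {
    // Maps <GroupList><Group>...</Group></GroupList> onto a plain list.
    @XmlElementWrapper(name = "GroupList")
    @XmlElement(name = "Group")
    List<String> groups;
}

class Demo {
    public static void main(String[] args) throws Exception {
        String xml = "<User><GroupList><Group>group1</Group>"
                   + "<Group>group2</Group></GroupList></User>";
        User user = (User) JAXBContext.newInstance(User.class)
                .createUnmarshaller()
                .unmarshal(new StringReader(xml));
        System.out.println(user.groups); // prints [group1, group2]
    }
}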

I can't unit test my class without exposing private fields -- is there something wrong with my design?

I have written some code which I thought was quite well-designed, but then I started writing unit tests for it and stopped being so sure.
It turned out that in order to write some reasonable unit tests, I needed to change some of my variables' access modifiers from private to default, i.e. expose them (only within the package, but still...).
Here is a rough overview of the code in question. It is supposed to be an address validation framework that enables address validation by different means, e.g. validating via some external web service, against data in the DB, or by any other source. So I have the notion of a Module, which is just this: a separate way to validate addresses. I have an interface:
interface Module {
    public void init(InitParams params);
    public ValidationResponse validate(Address address);
}
There is some sort of factory that, based on the request or session state, chooses a proper module:
class ModuleFactory {
    Module selectModule(HttpRequest request) {
        Module module = chooseModule(request); // analyze request and choose a module
        module.init(createInitParams(request)); // init module
        return module;
    }
}
And then I have written a Module that uses an external web service for validation, implemented like this:
class WebServiceModule implements Module {
    private WebServiceFacade webservice;

    public void init(InitParams params) {
        webservice = new WebServiceFacade(createParamsForFacade(params));
    }

    public ValidationResponse validate(Address address) {
        WebServiceResponse wsResponse = webservice.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
So basically I have this WebServiceFacade, which is a wrapper over the external web service, and my module calls this facade, processes its response and returns some framework-standard response.
I want to test whether WebServiceModule processes responses from the external web service correctly. Obviously, I can't call the real web service in unit tests, so I'm mocking it. But then, in order for the module to use my mocked web service, the webservice field must be accessible from the outside. That breaks my design, and I wonder if there is anything I can do about it. Obviously, the facade cannot be passed in the init parameters, because ModuleFactory does not and should not know that it is needed.
I have read that dependency injection might be the answer to such problems, but I can't see how. I have not used any DI framework before, like Guice, so I don't know whether it could easily be used in this situation. But maybe it could?
Or maybe I should just change my design?
Or screw it and make this unfortunate field package private (but leaving a sad comment like // default visibility to allow testing (oh well...) doesn't feel right)?
Bah! While I was writing this, it occurred to me that I could create a WebServiceProcessor which takes a WebServiceFacade as a constructor argument, and then test just the WebServiceProcessor. That would be one solution to my problem. What do you think about it? My one reservation is that my WebServiceModule would then be sort of useless, just delegating all its work to other components - one layer of abstraction too far, I would say.
Yes, your design is wrong. You should use dependency injection instead of new ... inside your class (which is also called a "hardcoded dependency"). Inability to easily write a test is a perfect indicator of a wrong design (read about the "listen to your tests" paradigm in Growing Object-Oriented Software, Guided by Tests).
BTW, using reflection or dependency breaking framework like PowerMock is a very bad practice in this case and should be your last resort.
I agree with what yegor256 said, and would like to suggest that the reason you ended up in this situation is that you have assigned multiple responsibilities to your modules: creation and validation. This goes against the Single Responsibility Principle and effectively limits your ability to test creation separately from validation.
Consider constraining the responsibility of your "modules" to creation alone. When they only have this responsibility, the naming can be improved as well:
interface ValidatorFactory {
    public Validator createValidator(InitParams params);
}
The validation interface becomes separate:
interface Validator {
    public ValidationResponse validate(Address address);
}
You can then start by implementing the factory:
class WebServiceValidatorFactory implements ValidatorFactory {
    public Validator createValidator(InitParams params) {
        return new WebServiceValidator(new ProdWebServiceFacade(createParamsForFacade(params)));
    }
}
This factory code is hard to unit-test, since it explicitly references prod code, so keep this implementation very concise. Put any logic (like createParamsForFacade) to one side, so that you can test it separately.
The web service validator itself only gets the responsibility of validation, and takes in the façade as a dependency, following the Inversion of Control (IoC) principle:
class WebServiceValidator implements Validator {
    private final WebServiceFacade facade;

    public WebServiceValidator(WebServiceFacade facade) {
        this.facade = facade;
    }

    public ValidationResponse validate(Address address) {
        WebServiceResponse wsResponse = facade.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
Since WebServiceValidator is not controlling the creation of its dependencies anymore, testing becomes a breeze:
@Test
public void aTest() {
    WebServiceValidator validator = new WebServiceValidator(new MockWebServiceFacade());
    ...
}
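The MockWebServiceFacade above could be as simple as the following hand-rolled stub, assuming WebServiceFacade is (or has been extracted to) an interface; WebServiceResponse.ok() is a hypothetical factory method for a canned response:

class MockWebServiceFacade implements WebServiceFacade {
    @Override
    public WebServiceResponse validate(Address address) {
        // No network call: always return a canned successful response.
        return WebServiceResponse.ok();
    }
}

A mocking library such as Mockito would achieve the same without the extra class.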
This way you have effectively inverted the control of the creation of the dependencies: Inversion of Control (IoC)!
Oh, and by the way, write your tests first. This way you will naturally gravitate towards a testable solution, which is usually also the best design. I think this is because testing requires modularity, and modularity happens to be the hallmark of good design.

How to instantiate several OSGi services?

In the context of an Eclipse RCP application, I decided to use OSGi services to provide "interfaces" out of a plugin (i.e. a bundle).
In one of my plugins I have the following Parser interface:
public interface Parser {
    public void start(File file);
    public boolean hasNext();
    public Object next();
}
Consumer plugins will use this interface to parse files. Because several parse operations can be in progress at the same time, and because an implementation of this interface needs several private "state" fields, each consumer of this service must use a dedicated service instance.
In this case, the default solution provided by many OSGi tutorials, registering ONE service instance in the start method of the parser bundle, doesn't work. What is the best way to handle such a situation?
I can create a ParserFactory service with one unique method:
public Parser create(File file);
??
Any comment is welcome,
As you're suggesting, I would change your service interface to be a provider of Parsers.
And your Parser is just an Iterator, so maybe something like
public interface ParserFactory<T> {
    /**
     * Iterating on the returned object provides Ts
     * parsed from the InputStream.
     *
     * @param input must be closed by the returned object
     *              when done iterating.
     */
    Iterable<T> createParser(InputStream input);
}
Using an InputStream or Reader also makes it more flexible than requiring a File.
Have a look at the OSGi ServiceFactory; this allows you to instantiate services for different requesting bundles. You can read more about it in section 5.6 of the core specification.
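A hedged sketch of the ServiceFactory approach (MyParser is a hypothetical implementation of the Parser interface above; note that the framework caches one instance per requesting bundle, not one per call):

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceFactory;
import org.osgi.framework.ServiceRegistration;

public class ParserActivator implements BundleActivator {
    public void start(BundleContext context) {
        context.registerService(Parser.class.getName(), new ServiceFactory<Parser>() {
            public Parser getService(Bundle bundle, ServiceRegistration<Parser> registration) {
                // Called once per requesting bundle; each bundle gets its own instance.
                return new MyParser();
            }

            public void ungetService(Bundle bundle, ServiceRegistration<Parser> registration, Parser service) {
                // Release any per-bundle resources here if needed.
            }
        }, null);
    }

    public void stop(BundleContext context) {
        // Services registered through this context are unregistered automatically.
    }
}

If you truly need a new Parser per parse operation (not per bundle), the ParserFactory service you suggested is the more direct fit.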
