I want to know when we need to use the abstract factory pattern.
Here is an example; I want to know if it is necessary.
The UML
The above is the abstract factory pattern, as recommended by my classmate.
The following is my own implementation. I do not think it is necessary to use the pattern here.
Here is the core code:
package net;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class Test {
    public static void main(String[] args) throws IOException, InstantiationException, IllegalAccessException, ClassNotFoundException {
        DaoRepository dr = new DaoRepository();
        AbstractDao dao = dr.findDao("sql");
        dao.insert();
    }
}
class DaoRepository {
    Map<String, AbstractDao> daoMap = new HashMap<String, AbstractDao>();

    public DaoRepository() throws IOException, InstantiationException, IllegalAccessException, ClassNotFoundException {
        Properties p = new Properties();
        p.load(DaoRepository.class.getResourceAsStream("Test.properties"));
        initDaos(p);
    }

    public void initDaos(Properties p) throws InstantiationException, IllegalAccessException, ClassNotFoundException {
        String[] daoArray = p.getProperty("dao").split(",");
        for (String dao : daoArray) {
            AbstractDao ad = (AbstractDao) Class.forName(dao).newInstance();
            daoMap.put(ad.getID(), ad);
        }
    }

    public AbstractDao findDao(String id) {
        return daoMap.get(id);
    }
}
abstract class AbstractDao {
    public abstract String getID();
    public abstract void insert();
    public abstract void update();
}

class SqlDao extends AbstractDao {
    public SqlDao() {}
    public String getID() { return "sql"; }
    public void insert() { System.out.println("sql insert"); }
    public void update() { System.out.println("sql update"); }
}

class AccessDao extends AbstractDao {
    public AccessDao() {}
    public String getID() { return "access"; }
    public void insert() { System.out.println("access insert"); }
    public void update() { System.out.println("access update"); }
}
And the content of Test.properties is just one line:
dao=net.SqlDao,net.AccessDao
So, can anyone tell me whether the pattern is necessary in this situation?
-------------------The following is added to explain the real situation--------------
I used the DAO example because it is common and everyone knows it.
In fact, what I am working on now is not related to DAOs. I am building a web service that contains some algorithms to convert a file to other formats, for example net.CreatePDF, net.CreateWord, etc. It exposes two interfaces to the client: getAlgorithms and doProcess.
The getAlgorithms call returns the ids of all the algorithms; each id is related to the corresponding algorithm.
A user who calls the doProcess method also provides the id of the algorithm he wants.
All the algorithms extend AbstractAlgorithm, which defines a run() method.
I use an AlgorithmRepository to store all the algorithms (loaded from the properties file in which the web service admin configures the concrete Java classes of the algorithms). That is to say, the doProcess interface exposed by the web service is executed by the concrete algorithm.
I can give a simple example:
1) The user sends a getAlgorithms request:
http://host:port/ws?request=getAlgorithms
The user then gets the list of algorithms embedded in XML:
<AlgorithmsList>
  <algorithm>pdf</algorithm>
  <algorithm>word</algorithm>
</AlgorithmsList>
2) The user sends a doProcess request to the server:
http://xxx/ws?request=doProcess&algorithm=pdf&file=http://xx/Test.word
When the server receives this type of request, it gets the concrete algorithm instance from the AlgorithmRepository according to the "algorithm" parameter (pdf in this request), and calls:
AbstractAlgorithm algo = AlgorithmRepository.getAlgo("pdf");
algo.run();
Then a PDF file is sent to the user.
BTW, in this example each algorithm is similar to SqlDao and AccessDao.
Here is the image:
The design image
Now, does the AlgorithmRepository need to use the Abstract Factory pattern?
The main difference between the two approaches is that the top one uses different DAO factories to create DAOs, while the bottom one stores a set of DAOs and returns references to them from the repository.
The bottom approach has a problem if multiple threads need access to the same type of DAO concurrently, as JDBC connections are not synchronised.
This can be fixed by having each DAO implement a newInstance() method which simply creates and returns a new DAO:
abstract class AbstractDao {
    public abstract String getID();
    public abstract void insert();
    public abstract void update();
    public abstract AbstractDao newInstance();
}

class SqlDao extends AbstractDao {
    public SqlDao() {}
    public String getID() { return "sql"; }
    public void insert() { System.out.println("sql insert"); }
    public void update() { System.out.println("sql update"); }
    public AbstractDao newInstance() { return new SqlDao(); }
}
The repository can then use the DAOs it holds as factories for the DAOs it returns (in which case I would rename the Repository to Factory), like this:
public AbstractDao newDao(String id) {
    return daoMap.containsKey(id) ? daoMap.get(id).newInstance() : null;
}
Update
As for your question whether your web service should implement a factory or can use the repository as you described: again, the answer depends on the details.

- For web services it is normal to expect multiple concurrent clients.
- Therefore the instances executing the process for two clients must not influence each other, which means they must not have shared state.
- A factory delivers a fresh instance on every request, so no state is shared when you use a factory pattern.
- If (and only if) the instances in your repository are stateless, your web service can also use the repository as you describe; for this they probably need to instantiate other objects to actually execute the process based on the request parameters passed, as sketched below.
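To make this concrete, here is a minimal sketch of how the AlgorithmRepository could hand out a fresh algorithm instance per request, borrowing the newInstance() prototype idea from the DAO code above (the class shapes are assumptions, not your actual code):

import java.util.HashMap;
import java.util.Map;

abstract class AbstractAlgorithm {
    public abstract String getID();
    public abstract void run();
    // Prototype hook: each concrete algorithm knows how to create a fresh copy,
    // so concurrent requests never share an instance.
    public abstract AbstractAlgorithm newInstance();
}

class AlgorithmRepository {
    private final Map<String, AbstractAlgorithm> prototypes = new HashMap<String, AbstractAlgorithm>();

    public void register(AbstractAlgorithm algo) {
        prototypes.put(algo.getID(), algo);
    }

    // Returns a fresh instance per call instead of the shared prototype.
    public AbstractAlgorithm getAlgo(String id) {
        AbstractAlgorithm proto = prototypes.get(id);
        return proto == null ? null : proto.newInstance();
    }
}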
If you are asking to compare the two designs from the UML, the second API in the UML has the following disadvantage:
the caller needs to explicitly specify the type of DAO in the call to getDAO(). Instead, the caller shouldn't have to care which type of DAO it works with, as long as the DAO complies with the interface. The first design allows the caller to simply call createDAO() and get an interface to work with. This way the control of which implementation to use is more flexible, and the caller doesn't carry this responsibility, which improves the overall coherence of the design.
Abstract Factory is useful if you need to separate multiple dimensions of choices in creating something.
In the common example case of windowing systems, you want to make a family of widgets for assorted windowing systems, and you create a concrete factory per windowing system which creates widgets that work in that system.
In your case of building DAOs, it is likely useful if you need to make a family of DAOs for the assorted entities in your domain, and want to make a "sql" version and an "access" version of the entire family. This is I think the point your classmate is trying to make, and if that's what you're doing it's likely to be a good idea.
If you have only one thing varying, it's overkill.
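To make the family case concrete, here is a minimal sketch of the two-dimensional situation (the UserDao/OrderDao entities are hypothetical): each concrete factory produces a whole family of DAOs for one backend, which is exactly when Abstract Factory pays off.

interface UserDao { void insert(); }
interface OrderDao { void insert(); }

// The abstract factory: one creation method per member of the family.
interface DaoFactory {
    UserDao createUserDao();
    OrderDao createOrderDao();
}

// One concrete factory per backend; all DAOs it creates belong together.
class SqlDaoFactory implements DaoFactory {
    public UserDao createUserDao() { return new SqlUserDao(); }
    public OrderDao createOrderDao() { return new SqlOrderDao(); }
}

class AccessDaoFactory implements DaoFactory {
    public UserDao createUserDao() { return new AccessUserDao(); }
    public OrderDao createOrderDao() { return new AccessOrderDao(); }
}

class SqlUserDao implements UserDao { public void insert() { System.out.println("sql user insert"); } }
class SqlOrderDao implements OrderDao { public void insert() { System.out.println("sql order insert"); } }
class AccessUserDao implements UserDao { public void insert() { System.out.println("access user insert"); } }
class AccessOrderDao implements OrderDao { public void insert() { System.out.println("access order insert"); } }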
Related
I have the below piece of code:
public interface SearchAlgo { public Items search(); }
public class FirstSearchAlgo implements SearchAlgo { public Items search() {...} }
public class SecondSearchAlgo implements SearchAlgo { public Items search() {...} }
I also have a factory to create instances of the above concrete classes based on the client's input. The SearchAlgoFactory code below is just for context.
public class SearchAlgoFactory {
    ...
    public static SearchAlgo getSearchInstance(String arg) {
        // Compare strings with equals(), not ==
        if ("First".equals(arg)) return new FirstSearchAlgo();
        if ("Second".equals(arg)) return new SecondSearchAlgo();
        throw new IllegalArgumentException("Unknown algorithm: " + arg);
    }
}
Now, I have a class that takes input from the client, gets the algorithm from the factory and executes it.
public class Manager {
    public Items execute(String arg) {
        SearchAlgo algo = SearchAlgoFactory.getSearchInstance(arg);
        return algo.search();
    }
}
Question:
I feel that I am using both the Factory and the Strategy pattern, but I am not sure, because in all the examples I have seen there is a Context class that executes the strategy, and the client provides the strategy it wants to use. So, is this a correct implementation of Strategy?
If it comes to implementing design patterns, it is much more important to understand what they do than to conform to some gold standard reference implementation. And it looks like you understand the strategy pattern.
The important thing about strategies is that the implementation is external to some client code (usually called the context) and that it can be changed at runtime. This can be done by letting the user provide the strategy object directly. However, introducing another level of indirection through your factory is just as viable. Your Manager class acts as the context you see in most UML diagrams.
So, yes. In my opinion, your code implements the strategy pattern.
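For comparison, here is a minimal sketch of the textbook variant in which the client hands the strategy to the context directly, reusing the SearchAlgo and Items types from the question:

public class Manager {
    private final SearchAlgo algo;

    // The strategy is supplied by the client instead of resolved by a factory.
    public Manager(SearchAlgo algo) {
        this.algo = algo;
    }

    public Items execute() {
        return algo.search();
    }
}

// Usage: new Manager(new FirstSearchAlgo()).execute();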
I am working on a REST API where I have an interface that defines a list of methods which are implemented by 4 different classes, with the possibility of adding many more in the future.
When I receive an HTTP request from the client there is some information included in the URL which will determine which implementation needs to be used.
Within my controller, I would like to have the end-point method contain a switch statement that checks the URL path variable and then uses the appropriate implementation.
I know that I can define and inject the concrete implementations into the controller and then select the one I would like to use in each particular case in the switch statement, but this doesn't seem very elegant or scalable, for two reasons:
1. I now have to instantiate all of the services, even though I only need to use one.
2. The code could be much leaner, since I am literally calling the same method that is defined in the interface with the same parameters. In the example it is not really an issue, but as the list of implementations grows, so does the number of cases and the redundant code.
Is there a better solution for this type of situation? I am using Spring Boot 2 and JDK 10; ideally, I'd like to implement the most modern solution.
My Current Approach
@RequestMapping(Requests.MY_BASE_API_URL)
public class MyController {

    //== FIELDS ==
    private final ConcreteServiceImpl1 concreteService1;
    private final ConcreteServiceImpl2 concreteService2;
    private final ConcreteServiceImpl3 concreteService3;

    //== CONSTRUCTORS ==
    @Autowired
    public MyController(ConcreteServiceImpl1 concreteService1, ConcreteServiceImpl2 concreteService2,
                        ConcreteServiceImpl3 concreteService3) {
        this.concreteService1 = concreteService1;
        this.concreteService2 = concreteService2;
        this.concreteService3 = concreteService3;
    }

    //== REQUEST MAPPINGS ==
    @GetMapping(Requests.SPECIFIC_REQUEST)
    public ResponseEntity<?> handleSpecificRequest(@PathVariable String source,
                                                   @RequestParam String start,
                                                   @RequestParam String end) {
        source = source.toLowerCase();
        if (MyConstants.SOURCES.contains(source)) {
            switch (source) {
                case "value1":
                    concreteService1.doSomething(start, end);
                    break;
                case "value2":
                    concreteService2.doSomething(start, end);
                    break;
                case "value3":
                    concreteService3.doSomething(start, end);
                    break;
            }
        } else {
            // An invalid source path variable was received
        }
        // Return something after additional processing
        return null;
    }
}
In Spring you can get all implementations of an interface (say T) by injecting a List<T> or a Map<String, T> field. In the second case the names of the beans will become the keys of the map. You could consider this if there are a lot of possible implementations or if they change often; it lets you add or remove an implementation without changing the controller.
Injecting a List and injecting a Map each have some benefits and drawbacks in this case. If you inject a List, you would probably need to add some method to map the name to the implementation. Something like:
interface MyInterface {
    (...)
    String name();
}
This way you could transform it to a Map<String, MyInterface>, for example using the Streams API. While this would be more explicit, it would pollute your interface a bit (why should it be aware that there are multiple implementations?).
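For instance, a sketch of that transformation, assuming the name() method above and a List<MyInterface> injected by Spring:

import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Build a lookup map keyed by each implementation's self-reported name().
Map<String, MyInterface> asMap(List<MyInterface> implementations) {
    return implementations.stream()
            .collect(Collectors.toMap(MyInterface::name, Function.identity()));
}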
When using the Map you should probably name the beans explicitly, or even introduce an annotation, to follow the principle of least astonishment. If you name the beans implicitly via the class name or the method name of the configuration class, you could break the app by renaming those (and in effect changing the URL), even though renaming is usually a safe operation.
A simplistic implementation in Spring Boot could look like this:
@SpringBootApplication
public class DynamicDependencyInjectionForMultipleImplementationsApplication {

    public static void main(String[] args) {
        SpringApplication.run(DynamicDependencyInjectionForMultipleImplementationsApplication.class, args);
    }

    interface MyInterface {
        Object getStuff();
    }

    class Implementation1 implements MyInterface {
        @Override public Object getStuff() {
            return "foo";
        }
    }

    class Implementation2 implements MyInterface {
        @Override public Object getStuff() {
            return "bar";
        }
    }

    @Configuration
    class Config {

        @Bean("getFoo")
        Implementation1 implementation1() {
            return new Implementation1();
        }

        @Bean("getBar")
        Implementation2 implementation2() {
            return new Implementation2();
        }
    }

    @RestController
    class Controller {

        private final Map<String, MyInterface> implementations;

        Controller(Map<String, MyInterface> implementations) {
            this.implementations = implementations;
        }

        @GetMapping("/run/{beanName}")
        Object runSelectedImplementation(@PathVariable String beanName) {
            return Optional.ofNullable(implementations.get(beanName))
                    .orElseThrow(UnknownImplementation::new)
                    .getStuff();
        }

        @ResponseStatus(BAD_REQUEST)
        class UnknownImplementation extends RuntimeException {
        }
    }
}
It passes the following tests:
@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMockMvc
public class DynamicDependencyInjectionForMultipleImplementationsApplicationTests {

    @Autowired
    private MockMvc mockMvc;

    @Test
    public void shouldCallImplementation1() throws Exception {
        mockMvc.perform(get("/run/getFoo"))
                .andExpect(status().isOk())
                .andExpect(content().string(containsString("foo")));
    }

    @Test
    public void shouldCallImplementation2() throws Exception {
        mockMvc.perform(get("/run/getBar"))
                .andExpect(status().isOk())
                .andExpect(content().string(containsString("bar")));
    }

    @Test
    public void shouldRejectUnknownImplementations() throws Exception {
        mockMvc.perform(get("/run/getSomethingElse"))
                .andExpect(status().isBadRequest());
    }
}
Regarding two of your doubts:
1. Instantiating the service objects should not be an issue, as this is a one-time job, and the controller is going to need them to serve all types of requests.
2. You can use exact path mappings to get rid of the switch case, for example:
@GetMapping("/specificRequest/value1")
@GetMapping("/specificRequest/value2")
@GetMapping("/specificRequest/value3")
Each of the above mappings goes on a separate method which deals with its specific source value and invokes the respective service method, as sketched below.
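For illustration, a minimal sketch of two such handler methods (the method names are hypothetical; the services are the ones from your controller):

@GetMapping("/specificRequest/value1")
public ResponseEntity<?> handleValue1(@RequestParam String start, @RequestParam String end) {
    // Only the service responsible for "value1" is involved here.
    concreteService1.doSomething(start, end);
    return ResponseEntity.ok().build();
}

@GetMapping("/specificRequest/value2")
public ResponseEntity<?> handleValue2(@RequestParam String start, @RequestParam String end) {
    concreteService2.doSomething(start, end);
    return ResponseEntity.ok().build();
}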
Hope this helps to make the code cleaner and more elegant.
There is one more option: separating this at the service layer and having only one endpoint to serve all types of source. But as you said, there is a different implementation for each source value, which means that a source is nothing but a resource for your application, and having a separate URI/separate method makes perfect sense here. A few advantages that I see with this are:
- It makes it easy to write test cases.
- You can scale each source without impacting any other source/service.
- Your code deals with each source as an entity separate from the other sources.
The above approach should be fine when you have limited source values. If you have no control over the source values, then we need to redesign further, differentiating sources by one more value like sourceType etc., and then having a separate controller for each group of sources.
Let's say I wanted to define an interface which represents a call to a remote service.
The services have different request and response types:
public interface ExecutesService<T, S> {
    public T executeFirstService(S obj);
    public T executeSecondService(S obj);
    public T executeThirdService(S obj);
    public T executeFourthService(S obj);
}
Now, let's look at the implementations:
public class ServiceA implements ExecutesService<Response1, Request1> {
    public Response1 executeFirstService(Request1 obj) {
        // This service call should not be executed by this class
        throw new UnsupportedOperationException("This method should not be called for this class");
    }
    public Response1 executeSecondService(Request1 obj) {
        // execute some service
    }
    public Response1 executeThirdService(Request1 obj) {
        // execute some service
    }
    public Response1 executeFourthService(Request1 obj) {
        // execute some service
    }
}
public class ServiceB implements ExecutesService<Response2, Request2> {
    public Response2 executeFirstService(Request2 obj) {
        // execute some service
    }
    public Response2 executeSecondService(Request2 obj) {
        // This service call should not be executed by this class
        throw new UnsupportedOperationException("This method should not be called for this class");
    }
    public Response2 executeThirdService(Request2 obj) {
        // This service call should not be executed by this class
        throw new UnsupportedOperationException("This method should not be called for this class");
    }
    public Response2 executeFourthService(Request2 obj) {
        // execute some service
    }
}
In another class, depending on some value in the request, I create an instance of either ServiceA or ServiceB.
I have questions regarding the above:
Is the use of a generic interface ExecutesService<T,S> good in the case where you want to provide subclasses which require different Request and Response types?
How can I do the above better?
Basically, your current design violates the open/closed principle: what if you wanted to add an executeFifthService() method to the ServiceA, ServiceB, etc. classes?
It is not a good idea to have to update all of your ServiceA, ServiceB, etc. classes; in simple words, classes should be open for extension but closed for modification.
Rather, you can use the approach below:
ExecutesService interface:
public interface ExecutesService<T, S> {
    public T executeService(S obj, Service<T, S> service);
}
ServiceA Class:
public class ServiceA implements ExecutesService<Response1, Request1> {
    List<Class> supportedListOfServices = new ArrayList<>();
    // load the list of service classes supported by ServiceA during startup from properties

    public Response1 executeService(Request1 request1, Service<Response1, Request1> service) {
        if (!supportedListOfServices.contains(service.getClass())) {
            throw new UnsupportedOperationException("This method should not be called for this class");
        } else {
            return service.execute(request1);
        }
    }
}
Similarly, you can implement ServiceB as well.
Service interface:
public interface Service<T, S> {
    public T execute(S s);
}
FirstService class:
public class FirstService implements Service<Response1, Request1> {
    public Response1 execute(Request1 req) {
        // execute the actual first service logic
    }
}
Similarly, you need to implement SecondService, ThirdService, etc. as well.
So, in this approach, you basically pass the Service to be actually called (it could be FirstService, SecondService, etc.) at runtime, and ServiceA validates whether it is in its supportedListOfServices; if not, it throws an UnsupportedOperationException.
The important point here is that you don't need to update any of the existing services to add new functionality (unlike your design, where you would need to add executeFifthService() to ServiceA, ServiceB, etc.); rather, you add one more class called FifthService and pass it in, as in the call below.
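A hypothetical call site under this design, reusing the names from the code above:

ServiceA serviceA = new ServiceA();
// FirstService is chosen at runtime; adding a FifthService later
// requires a new class but no change to ServiceA itself.
Response1 response = serviceA.executeService(new Request1(), new FirstService());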
I would suggest you create two different interfaces, each of which handles its own request and response types.
Of course you can develop an implementation with one generic interface handling all the logic, but from my point of view it may make the code more complex and dirty.
Regards
It makes little sense to have an interface if you know that, for one case, most of its methods are not supported and so should not be called by the client.
Why provide the client an interface that could be error-prone to use?
I think that you should have two distinct APIs in your use case, that is, two classes (if the interface is not required any longer) or two interfaces.
However, that doesn't mean the two APIs cannot share a common ancestor interface if it makes sense for some processing where instances should be interchangeable because they rely on the same operation contract.
Is the use of a generic interface (ExecutesService) good in the case where you want to provide subclasses which require different Request and Response types?
It is not classic class derivation, but in some cases it is desirable, as it allows a common interface to be used for implementations that have similar enough methods but don't use the same return type or parameter types in their signatures:

public interface ExecutesService<T, S>

It allows you to define a contract where classic derivation cannot.
However, this way of implementing a class doesn't necessarily let you program to the interface, as the declared type specifies particular type arguments. A variable declared as:

ExecutesService<String, Integer> myVar;

cannot be interchanged with:

ExecutesService<Boolean, String> otherVar;

as in myVar = otherVar.
I think that your question is a problem related to this.
You manipulate implementations that have methods close enough to each other but that don't really share the same behavior.
So you end up mixing things from two concepts that have no relation between them.
By using classic inheritance (without generics), you would probably have introduced distinct interfaces very quickly.
I think it is not a good idea to implement an interface in a way that makes it possible to call unsupported methods. It is a sign that you should split your interface into two or three, depending on the concrete situation, in such a way that each class implements all the methods of the interface it implements.
In your case I would split the interface into three, using inheritance to avoid duplication. Please see the example:
public interface ExecutesService<T, S> {
    T executeFourthService(S obj);
}

public interface ExecutesServiceA<T, S> extends ExecutesService<T, S> {
    T executeSecondService(S obj);
    T executeThirdService(S obj);
}

public interface ExecutesServiceB<T, S> extends ExecutesService<T, S> {
    T executeFirstService(S obj);
}
Please also take into account that it is redundant to use the public modifier on interface methods.
Hope this helps.
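For illustration, a sketch of how ServiceA would then implement only the methods it actually supports (types as in the question; the bodies are placeholders):

public class ServiceA implements ExecutesServiceA<Response1, Request1> {
    public Response1 executeSecondService(Request1 obj) { /* execute some service */ return null; }
    public Response1 executeThirdService(Request1 obj) { /* execute some service */ return null; }
    public Response1 executeFourthService(Request1 obj) { /* execute some service */ return null; }
    // There is no executeFirstService here: the compiler now enforces what
    // the UnsupportedOperationException only caught at runtime.
}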
While working on a web application, I need to get a set of classes at a few steps, and I am thinking of separating this logic into a simple factory, so that based on the class type we can create a class instance and initialize it with default values.
The current structure of the class hierarchy is:
public interface DataPopulator<Source, Target> {
    // some method declarations
}
Abstract class
public abstract class AbstractDataPopulator<Source, Target> implements DataPopulator<Source, Target> {
    // some common implementation
}
And now the classes which will be used as actual implementations, like:
Type1Populator extends AbstractDataPopulator
Type2Populator extends AbstractDataPopulator
Each of these implementations needs a set of common dependencies based on what functionality is executed by those populators.
As of now I am creating instances with new and then filling in those dependencies with simple setter methods.
I am thinking about creating a simple factory, like:

public interface PopulatorFactory {
    <T extends Object> T create(String className) throws Exception;
    <T extends Object> T create(Class populatorClass) throws Exception;
}
Abstract class
public abstract class DefaultPopulatorFactory implements PopulatorFactory {
    public <T> T create(final Class populatorClass) throws Exception {
        // Class.forName() takes a String; given a Class we can instantiate it directly
        return (T) populatorClass.newInstance();
    }
    // other methods.
}
Implementation classes
public class Type1PopulatorFactory extends DefaultPopulatorFactory {
    public <T> T create(final Class populatorClass) throws Exception {
        final T populator = super.create(populatorClass);
        return populator;
    }
}
I also want to initialize the newly created instances with some default values specific to each implementation, but I'm not sure what the best way to do this is.
Should I define another method, say initDefaults?
What is the best way to pass those dependencies to these populators?
Is the approach outlined by me fine, or is it overly complicated?
In cases when you are building not-so-trivial objects it is usually better to use the Builder pattern instead of a Factory.
In your case, if you don't need external data sources, you can simply write constructors for your classes in which you supply the default values, and get rid of the contraption in your question.
If you use the Builder pattern, you can simplify your framework by using a Builder object for the common data and a SomeOtherBuilder which extends Builder and adds the custom values of the specialized implementation. You can give your classes constructors which take a Builder object.
public class Builder {
    // your fields go here
}

public class SomeOtherBuilder extends Builder {
    // your specialized fields go here
}

public class YourClass {
    public YourClass(Builder builder) {
        // construct it here
    }
}
You can also make your classes generic, using something like T extends Builder.
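To make that a bit more concrete, here is a minimal sketch with hypothetical fields (dataSource, batchSize), showing the common builder carrying shared dependencies and a specialized builder adding its own defaults:

public class Builder {
    // Common dependency shared by all populators (hypothetical field).
    protected String dataSource;

    public Builder dataSource(String dataSource) {
        this.dataSource = dataSource;
        return this;
    }
}

public class Type1Builder extends Builder {
    // Default specific to Type1, overridable by the caller (hypothetical field).
    protected int batchSize = 100;

    public Type1Builder batchSize(int batchSize) {
        this.batchSize = batchSize;
        return this;
    }
}

public class Type1Populator {
    private final String dataSource;
    private final int batchSize;

    public Type1Populator(Type1Builder b) {
        this.dataSource = b.dataSource;
        this.batchSize = b.batchSize;
    }
}

// Usage:
// Type1Builder b = new Type1Builder().batchSize(50);
// b.dataSource("jdbc:h2:mem:test");
// Type1Populator p = new Type1Populator(b);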
Using the generic DAO pattern, I define the generic interface:
public interface GenericDao<T extends DataObject, ID extends Serializable> {
    T save(T t);
    void delete(ID id);
    T findById(ID id);
    Class<T> getPersistentClass();
}
I then implemented a default GenericDaoImpl implementation to perform these functions, with the following constructor:
public GenericDaoImpl(Class<T> clazz) {
    this.persistentClass = clazz;
    DaoRegistry.getInstance().register(clazz, this);
}
The point of the DaoRegistry is to look up a DAO by the class associated with it. This allows me to extend GenericDaoImpl and override methods for objects that require special handling:
DaoRegistry.getInstance().getDao(someEntity.getClass()).save(someEntity);
While it works, there are a few things that I don't like about it:
- DaoRegistry is a singleton.
- The logic of calling save is complicated.
Is there a better way to do this?
Edit
I am not looking to debate whether Singleton is an anti-pattern or not.
First of all, what is your problem with DaoRegistry being a singleton?
Anyway, you could have an abstract base class for your entities that implements save like this:
public T save() {
    return DaoRegistry.getInstance().getDao(this.getClass()).save(this);
}
Then you could simply call someEntity.save().
Or it may be more straightforward if the entity classes themselves implemented the whole GenericDao interface (the save, delete and find methods), so that the contents of your GenericDaoImpl would live in the base class of your entities.
It could be better to use an instance of DaoRegistry instead of static methods; it would make things more manageable for test configurations. You could implement it as:
@Component("daoRegistry")
public class DaoRegistry {

    @Autowired
    private List<GenericDao> customDaos;

    private GenericDao defaultDao = new GenericDaoImpl();

    public <T> GenericDao getDao(Class<T> clazz) {
        // search customDaos for a matching clazz, return defaultDao otherwise
    }
}
Also, you could add a save method to it and rename it accordingly. All customised DAOs should be available as beans.
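For illustration, a sketch of how that lookup might be implemented, assuming GenericDao exposes getPersistentClass() as in the question:

public <T> GenericDao getDao(Class<T> clazz) {
    // Pick the custom DAO registered for this entity class, if any.
    for (GenericDao dao : customDaos) {
        if (clazz.equals(dao.getPersistentClass())) {
            return dao;
        }
    }
    return defaultDao;
}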