What are the best practices for Facade pattern? - java

I have my code working, but I don't know whether the way I implemented it is appropriate. Basically, I want to keep to the pattern without violating it.
The code looks like this:
Package Model (with setters/getters omitted):
public class CA {
    private Integer in;
    private Integer jn;
}

public class CB {
    private Integer kn;
    private Integer ln;
}

public class CC {
    private static CC instancia;
    private CA a;
    private CB b;

    public static CC getInstancia() {
        if (instancia == null) {
            instancia = new CC();
        }
        return instancia;
    }
}
Package Business:
class CCBusiness {
    static CC c = CC.getInstancia();

    void alter(Integer input) {
        c.getCA().setIn(input);
        Integer num = c.getCB().getLn();
    }
}
Package Facade:
class FacadeOne {
    private final CCBusiness ccBusiness = new CCBusiness();

    void methodOne() {
        ccBusiness.alter(1);
        // And more xxBusiness.xx()
    }
}
The real code is more complex, but to explain my doubts, I think this should work.
In one facade I call several Business objects, but is it appropriate for one Business (in this case, the one for the CC class) to modify attributes of other classes (in this case, the ones inside CC)? Should I create CABusiness and CBBusiness?
Because, as I understand it, one Business can't call another Business, so the second one would have to be parameterized to receive the object from FacadeOne (if I create CABusiness and CBBusiness)?

I think some clarifications might help you: the facade pattern helps you have a single point of access for several classes that are hidden behind the facade and thus hidden from the outside world. Usually those classes form some kind of module or logical unit.
What you are struggling with is the structure behind the facade and its hierarchy. This is hard to analyse without knowing the whole picture, but from the information I have it would be best to have several of your Business classes, which can be called individually from the facade. Cross-calls between the Business objects only create the chance to spaghettify your code.
As for best practices and techniques, the simplest one is to draw a sketch of your classes, which usually clarifies a lot. And you're already halfway to UML-based documentation. :-)
By the way, avoid giving your classes names like CA, CB... It's the same as naming variables a001, a002... Descriptive names do a lot for readability!

By having a Facade you can get away with calling multiple CxBusiness objects and integrating their operations into a meaningful result. That is the purpose of a Facade, to simplify the interaction with the Business layer by hiding away interactions of 5 different components behind a concise and clear operation: methodOne.
For the individual CxBusiness however, you want to avoid cross-calling among each other; otherwise, you will end up with a complex dependency structure that could potentially run into circular references. Keep each CxBusiness as the sole wrapper for each Cx model and you will reduce the number of unwanted side-effects when interacting with them. Any interactions among these will take place in the facade.
Furthermore, enforce this pattern by having the facade depend upon interfaces rather than concrete classes: ICABusiness, ICCBusiness, etc. Then, the only way to access any model should be through these interfaces, and obviously, you should not have a concrete CxBusiness with a ICxBusiness member (no cross-dependencies). Once you put these restrictions in place, the implementation itself will flow towards a more modular and less coupled design.
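As a rough sketch of that last point (the method names here are invented for illustration, not taken from the question):
// Sketch only: the facade holds interface references; concrete CxBusiness
// classes implement them and never reference one another.
interface ICABusiness {
    void updateIn(Integer input);
}

interface ICCBusiness {
    Integer readLn();
}

class FacadeOne {
    private final ICABusiness caBusiness;
    private final ICCBusiness ccBusiness;

    FacadeOne(ICABusiness caBusiness, ICCBusiness ccBusiness) {
        this.caBusiness = caBusiness;
        this.ccBusiness = ccBusiness;
    }

    void methodOne() {
        // Any interaction between the Business objects happens here,
        // never inside the Business classes themselves.
        caBusiness.updateIn(1);
        Integer num = ccBusiness.readLn();
        // ... combine results as needed
    }
}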

Related

How to eliminate hard dependencies on Java Beans

I have a question about the DIP principle. One of the guidelines says that we should not hold references to a concrete class (if it changes, then I'll have to modify all clients that use it). So, how can I follow this guideline when I use POJOs? For example:
I have a Bean 'Foo' with some attributes (it could represent a Domain object)
class Foo {
    private String one;
    private String two;
    // getters and setters
}
Multiple clients instantiate this object, for example, to persist it in the Database
class Client1 {
    private FooDao dao;

    Client1(FooDao dao) {
        this.dao = dao;
    }

    public void persist() {
        // hard coding
        Foo foo = new Foo();
        foo.setOne("something...");
        dao.save(foo);
    }
}

class Client2 {
    private FooDao dao;

    Client2(FooDao dao) {
        this.dao = dao;
    }

    public void persist() {
        Foo foo = new Foo();
        foo.setOne("something...");
        foo.setTwo("something...");
        dao.save(foo);
    }
}
If I add or change any attribute of the 'Foo' class, every client would have to change; following this guideline, how can I avoid that?
Thanks!
The comment from #chrylis is spot on. Robert Martin covers this in chapter 6 of Clean Code: Objects and Data Structures.
Objects hide their data behind abstractions and expose functions that operate on that data. Data structures expose their data and have no meaningful functions. (page 95)
The definition of OOP where everything is an object, and there are no data structures, is naive.
Mature programmers know that the idea that everything is an object is a myth. Sometimes you really do want simple data structures with procedures operating on them. (page 97)
So what about classes that expose both data and behavior?
Confusion sometimes leads to unfortunate hybrid structures that are half object and half data structure. They have functions that do significant things, and they also have either public variables or public accessors and mutators that, for all intents and purposes, make the private variables public, tempting other external functions to use those variables the way a procedural program would use a data structure.
Such hybrids make it hard to add new functions but also make it hard to add new data structures. They are the worst of both worlds. Avoid creating them. (page 99)
To the original question: the Dependency Inversion Principle applies to objects, not to data structures like Java Beans.
I think you're taking this a little too literally.
I had the pleasure of attending a talk given by Venkat Subramaniam, which talked about DIP.
You're right when you say that you should be relying on abstractions, not concretions, but in my notes from that talk, I have the footnote, "take this with a grain of salt."
In your case, you're going to want to take this with a grain of salt, since there's a fairly strong code smell here - you're exposing the use of this bean to all consumers who need it, which implicitly creates a dependency on it. This violates the Single Responsibility Principle, since this bean is being used in more places than it probably should be.
Since it seems like you're talking about a database abstraction, perhaps you would want to look into a DTO which would be exposed between services to carry information between them, and let your bean handle the internals.
To your point...
if it change[s] then I'll have to modify all clients that use it
...this is true if you remove functionality. If you add new functionality, you can let your downstream clients just ignore that functionality. If you want to change existing functionality, you have to allow the clients a path to migration.
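As a rough illustration of the DTO idea mentioned above (FooDto and FooService are invented names for this sketch), the clients could depend on a small transfer object and a service interface instead of constructing the bean themselves:
// Hypothetical sketch: clients depend on a service interface and a small DTO,
// so they no longer construct or mutate the Foo bean directly.
class FooDto {
    final String one;
    final String two;

    FooDto(String one, String two) {
        this.one = one;
        this.two = two;
    }
}

interface FooService {
    void persist(FooDto dto);
}

class DefaultFooService implements FooService {
    private final FooDao dao;

    DefaultFooService(FooDao dao) {
        this.dao = dao;
    }

    @Override
    public void persist(FooDto dto) {
        // Mapping from the DTO to the internal bean happens in one place only.
        Foo foo = new Foo();
        foo.setOne(dto.one);
        foo.setTwo(dto.two);
        dao.save(foo);
    }
}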
You need to define an interface for the functionality you would like to add.
interface Functionality {
    void persist();
}
Each class except the manager needs to implement the interface:
class Client1 implements Functionality {
    // Your code...
}
Add a high-level class that does not work directly with the low-level classes:
class ManageClients {
    Functionality func;

    public void setClient(Functionality f) {
        func = f;
    }

    public void manage() {
        func.persist();
    }
}
The ManageClients class doesn't require changes when new clients are added.
There is minimal risk of affecting existing functionality in ManageClients, since we don't change it.
There is no need to redo the unit tests for ManageClients.
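A short usage sketch of the classes above (assuming a FooDao instance is available from elsewhere):
// Sketch: the manager only ever sees the Functionality interface.
void runClients(FooDao dao) {
    ManageClients manager = new ManageClients();

    manager.setClient(new Client1(dao));
    manager.manage(); // delegates to Client1.persist()

    manager.setClient(new Client2(dao));
    manager.manage(); // delegates to Client2.persist()
}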

How to refactor two complex classes with a lot of similar methods?

Each of the classes contains about 30 methods, and almost half of them are the same or very similar. Soon I am going to add a third class that is in the same situation as these two. I feel it is a mess to maintain or change them. How can I refactor to avoid the duplicated code?
Here is a simplified version:
public class A extends ContentPanel {
    private AMenuProvider menuProvider;
    private ADefinitionTree tree;

    public void sameMethod1() {
        ...
        menuProvider.do();
        tree.doSomething();
        ...
    }

    public void sameMethod2() {
        ...
        menuProvider.do();
        tree.doSomething();
        ...
    }

    public void differentMethodFromA() {
        ... // uses menuProvider and tree
    }

    ...
    // 10 similar methods and 20 different methods
}

public class B extends ContentPanel {
    private BMenuProvider menuProvider;
    private BDefinitionTree tree;

    public void sameMethod1() {
        ...
        menuProvider.do();
        tree.doSomething();
        ...
    }

    public void sameMethod2() {
        ...
        menuProvider.do();
        tree.doSomething();
        ...
    }

    public void differentMethodFromB() {
        ... // uses menuProvider and tree
    }

    ...
    // 10 similar methods and 20 different methods
}
NOTE: BMenuProvider vs AMenuProvider and ADefinitionTree vs BDefinitionTree could be very different implementation, but they provide a lot of same methods. Each of them has some unique methods which the other does not have.
I thought about creating an abstract class and extending it, but it seems ugly wherever I put the menuProvider and tree attributes. I am not sure whether there are any design patterns or solutions for this. Please help me refactor the classes so that I can remove the duplicated code.
"S" in S.O.L.I.D:
The single responsibility principle states that every module or class
should have responsibility over a single part of the functionality
provided by the software, and that responsibility should be entirely
encapsulated by the class. All its services should be narrowly aligned
with that responsibility.
"I":
The interface-segregation principle (ISP) states that no client should
be forced to depend on methods it does not use.[1] ISP splits
interfaces which are very large into smaller and more specific ones so
that clients will only have to know about the methods that are of
interest to them. Such shrunken interfaces are also called role
interfaces.[2] ISP is intended to keep a system decoupled and thus
easier to refactor, change, and redeploy.
"D":
A. High-level modules should not depend on low-level modules. Both
should depend on abstractions. B. Abstractions should not depend on
details. Details should depend on abstractions.
You are violating (at least) these three sacred rules of OO. A and B should depend on abstractions (interfaces). The same methods should be abstracted into one interface and the differing methods into separate interfaces. S.O.L.I.D is more fundamental than design patterns; design patterns are based on SOLID. Learn it first and you will have no more problems like this.
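A minimal sketch of that advice, using the shared methods from the question (the interface names are invented):
// Sketch: shared methods live in one interface, class-specific ones in others.
interface CommonPanelOperations {
    void sameMethod1();
    void sameMethod2();
    // ... the other shared methods
}

interface AOperations {
    void differentMethodFromA();
}

interface BOperations {
    void differentMethodFromB();
}

// A and B then declare:
// public class A extends ContentPanel implements CommonPanelOperations, AOperations { ... }
// public class B extends ContentPanel implements CommonPanelOperations, BOperations { ... }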
There are basically two ways to reuse code (when code is duplicated across two or more classes).
Inheritance: If all the participating classes (sharing the duplicated code) can logically be put into a hierarchy where an "is-a" relationship holds, create a base class, pull the common methods up into it, and customize the subclass methods. (Consider using the Template Method pattern if a method's implementation varies across subclasses, as in the sketch below.)
Composition: If the participating classes do not follow an "is-a" relationship and hence cannot be placed in a hierarchy, create a class, put all the reusable methods in it, and use that class via composition wherever you want to reuse the methods.
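As a sketch of the inheritance route applied to the question's A and B, assuming a common MenuProvider/DefinitionTree pair of interfaces can be extracted from AMenuProvider/BMenuProvider and ADefinitionTree/BDefinitionTree (that extraction is an assumption, and do() is renamed to doIt() here because do is a reserved word in Java):
public abstract class AbstractContentPanel extends ContentPanel {

    // Each subclass supplies its own flavour-specific collaborators.
    protected abstract MenuProvider menuProvider();
    protected abstract DefinitionTree tree();

    // The shared methods live here exactly once.
    public void sameMethod1() {
        menuProvider().doIt();
        tree().doSomething();
    }

    public void sameMethod2() {
        menuProvider().doIt();
        tree().doSomething();
    }
}

public class A extends AbstractContentPanel {
    private final AMenuProvider menuProvider = new AMenuProvider();
    private final ADefinitionTree tree = new ADefinitionTree();

    @Override protected MenuProvider menuProvider() { return menuProvider; }
    @Override protected DefinitionTree tree() { return tree; }

    public void differentMethodFromA() {
        // uses menuProvider and tree
    }
}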

What is the best way to refactor Utility class in java (static classes)

I am thinking about refactoring some of our utility classes (static classes).
Static classes are very hard to test, and the main problem is that they make our code very tightly coupled, with a lot of dependencies.
What is the best design pattern to use for this refactoring?
I thought about an immutable object with a builder, but I am not sure.
Consider this as the code I want to refactor:
public class UtilTest {

    public static boolean isEligibleItem(Item item) {
        if (isCondition1(item)) {
            return isCondition2(item);
        }
        return false;
    }

    public static boolean isCondition1(Item item) {
        // go to service that goes to the database
        return false;
    }

    public static boolean isCondition2(Item item) {
        // go to service that goes to the database
        return false;
    }
}
If I want to test my isEligibleItem() method, I need to mock the two methods that go to the database.
I cannot do that as they are static, and I want to avoid using PowerMock.
The Utility Class Anti-Pattern
The reason people say static methods are hard to test is more about how they tightly couple unrelated classes together over time, reduce cohesion, and introduce invisible side effects. These three things are far more important than hand-wavy complaints about unit testing.
Testing Interactions
It is more about testing the interactions with other code than testing the static method itself. This is where Java really needed Functions as first class objects to begin with.
Classes with nothing but static methods are definitely a code smell in most cases. There are exceptions, but this anti-pattern tends to get abused by beginners and old timers from non-object oriented languages.
Exceptions to the Rule - Immutable
The exceptions are mainly things that might be considered missing from a class that is marked final and immutable, like String.
Having a Strings class with generalized static methods is not so bad, because String is immutable (no side effects) and you cannot add anything to the String class, so you do not have many alternatives. The same goes for Integer and the like; Guava uses this naming convention and it works for these immutable objects.
Side Effects
static methods tend to introduce lots of side effects. Methods that take an object and manipulate it in some opaque manner are bad; worse are those that then look up other objects and manipulate them as well, based on the instance that was passed in. They obfuscate what is going on, are tightly coupled, and have low cohesion.
High Cohesion
High cohesion is not talked about as much as coupling, but it is just as important. They are two sides of the same coin, and ignoring one causes the other to suffer.
These static methods should be on the classes that they take as an argument; they are tightly coupled to those classes. In this case, why are they not on the Item class?
As soon as you add another static method that takes SomeOtherItem, you have indirectly coupled unrelated classes together.
The easiest way to remedy this is to move things closer to where they belong, in this case the Item class.
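For example, a sketch of that move (ConditionService is an invented interface standing in for whatever database-backed service the checks currently call):
// Sketch: the eligibility rule lives on Item; the database-backed checks sit
// behind an interface so they can be mocked in tests.
public interface ConditionService {
    boolean isCondition1(Item item);
    boolean isCondition2(Item item);
}

public class Item {
    // existing Item fields ...

    public boolean isEligible(ConditionService conditions) {
        return conditions.isCondition1(this) && conditions.isCondition2(this);
    }
}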
Factory/Provider Pattern
If you have things that really are general, or things that cannot be added to a class because it is final or for some other reason, working with interfaces and the Provider pattern is your best approach; using a Factory to produce the Provider instances is even better.
Then you can use something like Guice to inject whatever implementation you need, depending on whether it is a test or not.
There is even a hybrid Utility pattern that has the implementation injected from a Provider, giving you the convenience of static methods together with the flexibility and maintainability of not hard-wiring them.
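A rough sketch of that hybrid shape (all names here are invented):
// Sketch: static entry points remain for convenience, but the real work is
// delegated to a provider that tests (or a DI container such as Guice) can swap.
interface EligibilityProvider {
    boolean isEligible(Item item);
}

final class Eligibility {
    private static volatile EligibilityProvider provider = new DatabaseEligibilityProvider();

    private Eligibility() {}

    static void setProvider(EligibilityProvider p) {
        provider = p;
    }

    static boolean isEligibleItem(Item item) {
        return provider.isEligible(item);
    }
}

class DatabaseEligibilityProvider implements EligibilityProvider {
    @Override
    public boolean isEligible(Item item) {
        // the real implementation would call the database-backed services
        return false;
    }
}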
A simple translation to a more testable setup would be:
public class UtilTest {
    private final MyDatabaseService service;

    public UtilTest(MyDatabaseService service) {
        this.service = service;
    }

    public boolean isEligibleItem(Item item) {
        if (isCondition1(item)) {
            return isCondition2(item);
        }
        return false;
    }

    public boolean isCondition1(Item item) {
        this.service.goToDataBase();
        return false;
    }

    public boolean isCondition2(Item item) {
        this.service.goToDataBase2();
        return false;
    }
}
This doesn't eliminate all the problems but it's a start, you can test your class with a mocked up database service.
If you want to push things further, you can declare an interface with all the methods you want UtilTest to expose (you might want to rename the class as well...), and make UtilTest implement it. All the code using UtilTest should be rewritten to use the interface instead, and then you can mock UtilTest completely and directly. Whether this is worthwhile depends a lot on how complicated UtilTest is in reality. If the tasks it performs are relatively simple, you'll probably think it's more hassle than it's worth. If however there's some heavy processing going in there, you'd definitely want to make it easily mockable.
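A sketch of that extraction (the interface name is invented; the class from the previous snippet would simply declare that it implements it):
// Callers program against this interface, so the implementation can be mocked directly.
public interface ItemEligibility {
    boolean isEligibleItem(Item item);
}

// public class UtilTest implements ItemEligibility { ... }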

Architecture/Design of a pipeline-based system. How to improve this code?

I have a pipeline-based application that analyzes text in different languages (say, English and Chinese). My goal is to have a system that can work in both languages, in a transparent way. NOTE: This question is long because it has many simple code snippets.
The pipeline is composed of three components (let's call them A, B, and C), and I've created them in the following way so that the components are not tightly coupled:
public class Pipeline {
    private A componentA;
    private B componentB;
    private C componentC;

    // I really just need the language attribute of Locale,
    // but I use it because it's useful to load language-specific ResourceBundles.
    public Pipeline(Locale locale) {
        componentA = new A();
        componentB = new B();
        componentC = new C();
    }

    public Output runPipeline(Input input) {
        Language lang = LanguageIdentifier.identify(input);
        ResultOfA resultA = componentA.doSomething(input);
        ResultOfB resultB = componentB.doSomethingElse(resultA); // uses result of A
        return componentC.doFinal(resultA, resultB); // uses result of A and B
    }
}
Now, every component of the pipeline has something inside which is language specific. For example, in order to analyze Chinese text, I need one lib, and for analyzing English text, I need another different lib.
Moreover, some tasks can be done in one language and cannot be done in the other. One solution to this problem is to make every pipeline component abstract (to implement some common methods), and then have a concrete language-specific implementation. Exemplifying with component A, I'd have the following:
public abstract class A {
    private CommonClass x; // common to all languages
    private AnotherCommonClass y; // common to all languages

    abstract SomeTemporaryResult getTemp(Input input); // language specific
    abstract AnotherTemporaryResult getAnotherTemp(Input input); // language specific

    public ResultOfA doSomething(Input input) {
        // template method
        SomeTemporaryResult t = getTemp(input); // language specific
        AnotherTemporaryResult tt = getAnotherTemp(input); // language specific
        return new ResultOfA(t, tt, x.get(), y.get());
    }
}

public class EnglishA extends A {
    private EnglishSpecificClass something;
    // implementation of the abstract methods ...
}
In addition, since each pipeline component is very heavy and I need to reuse them, I thought of creating a factory that caches the components for later use, using a map keyed by the language, like so (the other components would work in the same manner):
public enum AFactory {
    SINGLETON;

    private Map<String, A> cache; // this map will only have one or two keys; is there anything more efficient I can use instead of HashMap?

    public A getA(Locale locale) {
        // look up by locale.getLanguage(), insert if it doesn't exist, et cetera
        return cache.get(locale.getLanguage());
    }
}
So, my question is: what do you think of this design? How can it be improved? I need the "transparency" because the language can change dynamically, based on the text that is being analyzed. As you can see from the runPipeline method, I first identify the language of the Input, and then, based on this, I need to switch the pipeline components to the identified language. So, instead of invoking the components directly, maybe I should get them from the factory, like so:
public Output runPipeline(Input input) {
    Language lang = LanguageIdentifier.identify(input);
    ResultOfA resultA = AFactory.getA(lang).doSomething(input);
    ResultOfB resultB = BFactory.getB(lang).doSomethingElse(resultA);
    return CFactory.getC(lang).doFinal(resultA, resultB);
}
Thank you for reading this far. I very much appreciate every suggestion that you can make on this question.
The factory idea is good, as is the idea, if feasible, to encapsulate the A, B, & C components into single classes for each language. One thing that I would urge you to consider is to use Interface inheritance instead of Class inheritance. You could then incorporate an engine that would do the runPipeline process for you. This is similar to the Builder/Director pattern. The steps in this process would be as follows:
get input
use factory method to get the correct interface (English/Chinese)
pass interface into your engine
runPipeline and get result
On the extends vs implements topic, Allen Holub goes a bit over the top to explain the preference for Interfaces.
Follow-up to your comments:
My interpretation of the application of the Builder pattern here would be that you have a Factory that returns a PipelineBuilder. The PipelineBuilder in my design is one that encompasses A, B, & C, but you could have separate builders for each if you like. This builder is then given to your PipelineEngine, which uses the Builder to generate your results.
As this makes use of a Factory to provide the Builders, your idea above for a Factory remains intact, complete with its caching mechanism.
With regard to your choice of abstract extension, you do have the choice of giving your PipelineEngine ownership of the heavy objects. However, if you do go the abstract way, note that the shared fields that you have declared are private and therefore would not be available to your subclasses.
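My reading of that suggestion, as a sketch (the component interfaces and the engine are invented names, and the factories from the question are assumed to be keyed by the identified Language):
// Sketch: the engine depends only on interfaces; the factories pick the
// language-specific implementations after the language has been identified.
interface AComponent { ResultOfA doSomething(Input input); }
interface BComponent { ResultOfB doSomethingElse(ResultOfA a); }
interface CComponent { Output doFinal(ResultOfA a, ResultOfB b); }

class PipelineEngine {
    public Output run(Input input) {
        Language lang = LanguageIdentifier.identify(input);

        AComponent a = AFactory.SINGLETON.getA(lang);
        BComponent b = BFactory.SINGLETON.getB(lang);
        CComponent c = CFactory.SINGLETON.getC(lang);

        ResultOfA resultA = a.doSomething(input);
        ResultOfB resultB = b.doSomethingElse(resultA);
        return c.doFinal(resultA, resultB);
    }
}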
I like the basic design. If the classes are simple enough, I might consider consolidating the A/B/C factories into a single class, as it seems there could be some sharing in behavior at that level. I'm assuming that these are really more complex than they appear, though, and that's why that is undesirable.
The basic approach of using Factories to reduce coupling between components is sound, imo.
If I'm not mistaken, what you are calling a factory is actually a very nice form of dependency injection. You are selecting an object instance that is best able to meet the needs of your parameters and returning it.
If I'm right about that, you might want to look into DI platforms. They do what you did (which is pretty simple, right?) then they add a few more abilities that you may not need now but you may find would help you later.
I'm just suggesting you look at what problems are solved now. DI is so easy to do yourself that you hardly need any other tools, but they might have found situations you haven't considered yet. Google finds many great looking links right off the bat.
From what I've seen of DI, it's likely that you'll want to move the entire creation of your "Pipe" into the factory, having it do the linking for you and just handing you what you need to solve a specific problem, but now I'm really reaching--my knowledge of DI is just a little better than my knowledge of your code (in other words, I'm pulling most of this out of my butt).

Should I not subclass by type of object if there are many types?

I am working with a log of events where there are about 60 different "types" of events. Each event shares about 10 properties, and then there are subcategories of events that share various extra properties.
How I work with these events does depend on their type or what categorical interfaces they implement.
But it seems to be leading to code bloat. I have a lot of redundancy in the subclass methods because they implement some of the same interfaces.
Is it more appropriate to use a single event class with a "type" property and write logic that checks type and maintain some organization of categories of types (e.g. a list of event types that are category a, a second list that are category b, etc)? Or is the subclass design more appropriate in this case?
First Approach:
public interface Category1 {}
public interface Category2 {}

public abstract class Event {
    private base properties...;
}

public class EventType1 extends Event implements Category1, Category2 {
    private extra properties ...;
}

public class EventType2 extends Event implements Category3, Category4 {
    private extra properties ...;
}
Second Approach:
public enum EventType { TYPE1, TYPE2, TYPE3, ... }

public class Event {
    private union of all possible properties;
    private EventType type;
}
My personal opinion is that a single event object seems appropriate, because, if I am thinking about it correctly, there is no need to use inheritance to represent the model; it is really only the behavior and my conditions that change based on the type.
I need to have code that does stuff like:
if(event instanceof Category1) {
...
}
This works well in the first approach in that instead of instanceof I can just call the method on the event and implement "the same code" in each of the appropriate subclasses.
But the second approach is so much more concise. Then I write stuff like:
if(CATEGORY1_TYPES.contains(event.getEventType()) {
...
}
And all my "processing logic" can be organized into a single class and none of it is redundantly spread out among the subclasses. So is this a case where although OO appears more appropriate, it would be better not too?
I would go with the object per event type solution, but I would instead group commonly used combinations of interfaces under (probably abstract) classes providing their skeletal implementations. This greatly reduces the code bloat generated by having many interfaces, but, on the other hand, increases the number of classes. But, if used properly and reasonably, it leads to cleaner code.
Inheritance can be limiting if you decide to extend an abstract base class of a
particular Category interface, because you might need to implement another Category as well.
So, here is a suggested approach:
Assuming you need the same implementation for a particular Category interface method (regardless of the Event), you could write an implementation class for each Category interface.
So you would have:
public class Category1Impl implements Category1 {
...
}
public class Category2Impl implements Category2 {
...
}
Then for each of your Event classes, just specify the Category interfaces it implements, and keep a private member instance of the Category implementation class (so you use composition, rather than inheritance). For each of the Category interface methods, simply forward the method call to the Category implementation class.
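A sketch of that forwarding (handleA and handleB are invented method names, since the question shows the Category interfaces as empty):
// Sketch: the event implements the Category interfaces but forwards the calls
// to shared implementation objects, so category behaviour is written only once.
public class EventType1 extends Event implements Category1, Category2 {
    private final Category1 category1 = new Category1Impl();
    private final Category2 category2 = new Category2Impl();

    @Override
    public void handleA() {
        category1.handleA();
    }

    @Override
    public void handleB() {
        category2.handleB();
    }
}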
Since I didn't really get the answers I was looking for I am providing my own best guess based on my less than desirable learning experience.
The events themselves actually don't have behaviors, it is the handlers of the events that have behaviors. The events just represent the data model.
I rewrote the code to just treat events as object arrays of properties so that I can use Java's new variable arguments and auto-boxing features.
With this change, I was able to delete around 100 gigantic classes of code and accomplish much of the same logic in about 10 lines of code in a single class.
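Roughly along these lines (a reconstruction, since the rewritten code isn't shown in the question):
// Rough sketch: one Event class, a type tag, and a property bag built with
// varargs; auto-boxing lets primitives be passed straight in.
public class Event {
    private final EventType type;
    private final Object[] properties;

    public Event(EventType type, Object... properties) {
        this.type = type;
        this.properties = properties;
    }

    public EventType getType() {
        return type;
    }

    public Object property(int index) {
        return properties[index];
    }
}

// Event e = new Event(EventType.TYPE1, "user42", 17, true);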
Lesson(s) learned: It is not always wise to apply OO paradigms to the data model. Don't concentrate on providing a perfect data model via OO when working with a large, variable domain; OO design sometimes benefits the controller more than the model. Don't focus on optimization upfront either, because usually a 10% performance loss is acceptable and can be regained by other means.
Basically, I was over-engineering the problem. It turns out this is a case where proper OO design is overkill and turns a one-night project into a 3 month project. Of course, I have to learn things the hard way!
It depends on if each type of event inherently has different behavior that the event itself can execute.
Do your Event objects need methods that behave differently per type? If so, use inheritance.
If not, use an enum to classify the event type.
Merely having a large number of .java files isn't necessarily bad. If you can meaningfully extract a small number (2-4 or so) of Interfaces that represent the contracts of the classes, and then package all of the implementations up, the API you present can be very clean, even with 60 implementations.
I might also suggest using some delegate or abstract classes to pull in common functionality. The delegates and/or abstract helpers should all be package-private or class-private, not available outside the API you expose.
If there is considerable mixing and matching of behavior, I would consider using composition of other objects, then have either the constructor of the specific event type object create those objects, or use a builder to create the object.
perhaps something like this?
class EventType {
    protected EventPropertyHandler handler;

    public EventType(EventPropertyHandler h) {
        handler = h;
    }

    void handleEvent(Map<String, String> properties) {
        handler.handle(properties);
    }
}

abstract class EventPropertyHandler {
    abstract void handle(Map<String, String> properties);
}

class SomeHandler extends EventPropertyHandler {
    void handle(Map<String, String> properties) {
        String value = properties.get("somekey");
        // do something with value..
    }
}

class EventBuilder {
    public static EventType buildSomeEventType() {
        return new EventType(new SomeHandler());
    }
}
There are probably some improvements that could be made, but that might get you started.
