Code design to make adding similar tasks later easy - java

For example, I have an app that can download videos. Since the download tasks are similar, I create a base class for downloading.
public abstract class Download {
public abstract void run();
}
For each concrete website from which videos can be downloaded, I create a child class of the base class:
public class DownloadYouTube extends Download {
public void run() {
}
}
public class DownloadVimeo extends Download {
public void run() {
}
}
To see which site the user wants to download from, I create an enum and switch over it to create the right object; then I call the common method run().
public enum WEBSITE {
YOUTUBE,
VIMEO
}
public void startDownload(WEBSITE website) {
Download download = null;
switch (website) {
case YOUTUBE:
download = new DownloadYouTube();
break;
case VIMEO:
download = new DownloadVimeo();
break;
}
download.run();
}
Later, other people may want to add new websites. With this design that is not too easy: people have to edit three places. They have to alter the enum, add a new case, and write the class itself.
It would be much better if they only had to write the class.
Is there any common code design or other advice to handle such a situation better than this?

As a possible solution, you can add an abstract factory method to your enum which creates the necessary Download object.
So WEBSITE becomes not just a list of websites you support, but also encapsulates behaviour for each of them:
public enum WEBSITE {
YOUTUBE {
@Override
public Download createDownload() {
return new DownloadYouTube();
}
},
VIMEO {
@Override
public Download createDownload() {
return new DownloadVimeo();
}
};
public abstract Download createDownload();
}
public void startDownload(WEBSITE website) {
website.createDownload().run();
}
With such an approach it will be impossible to add a new WEBSITE without defining how it should be handled.

Create a map! A map is a data structure that lets you look up a value by a key, so an instance of each of your download classes can be accessed by providing a string (your website variable can become the key).
import java.util.HashMap;
import java.util.Map;
Map<String, Download> downloaders = new HashMap<String, Download>();
downloaders.put("Youtube", new DownloadYouTube());
downloaders.put("Vimeo", new DownloadVimeo());
// Look up the right downloader by its key and run it.
downloaders.get("Youtube").run();
// Or iterate over all downloaders, using the keySet method.
for (String key : downloaders.keySet()) {
Download d = downloaders.get(key);
d.run();
}
NOTE: If you intend to use multiple instances of the same Download class, this solution will not work as posted here.
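If you do need a fresh instance for every download, one variation (just a sketch, assuming Java 8+; not part of the original suggestion) is to store factories instead of ready-made instances:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Supplier;

    Map<String, Supplier<Download>> factories = new HashMap<>();
    factories.put("Youtube", DownloadYouTube::new);
    factories.put("Vimeo", DownloadVimeo::new);

    // Every lookup creates a brand-new Download instance.
    Download download = factories.get("Youtube").get();
    download.run();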

What you are trying to decide upon has been done before. There is a design pattern called the Template Method.
The key idea behind a template method is that the skeleton of an algorithm is enforced, while the details fall to subclasses. You have a
public interface DownloadTask {
public List<Parameter> getParameters();
public void setParameters(Map<Parameter, Object> values);
public Future<File> performDownload();
}
with two concrete implementations
public class VimeoDownload implements DownloadTask {
...
}
public class YoutubeDownload implements DownloadTask {
...
}
or if you really want an enum, then
public enum DefaultDownload implements DownloadTask {
YOUTUBE,
VIMEO;
}
but I don't think an enum will buy you as much as you might think. If you implement the interface in an enum, you must either share the methods across all constants, like so
public enum DefaultDownload implements DownloadTask {
YOUTUBE,
VIMEO;
public void someMethod(...) {
}
}
or declare them individually
public enum DefaultDownload implements DownloadTask {
YOUTUBE() {
public void someMethod(...) {
}
},
VIMEO() {
public void someMethod(...) {
}
};
}
And both scenarios put you at risk of breaking all downloads to add one new download.
Then you have the actual template method that ties it all together
public class Downloader {
/* asynchronous API */
public Future<File> doDownload(DownloadTask task) {
Map<Parameter, Object> paramValues = new HashMap<>();
List<Parameter> params = task.getParameters();
for (Parameter param : params) {
Object value = getValue(param);
paramValues.put(param, value);
}
task.setParameters(paramValues);
return task.performDownload();
}
/* synchronous API */
public File doDownloadAndWait(DownloadTask task) throws InterruptedException, ExecutionException {
Future<File> future = doDownload(task);
return future.get();
}
}
The actual steps I provided in DownloadTask are not the ones you will likely need. Change them to suit your needs.
Some people don't use an interface for the DownloadTask; that's fine. You can use an abstract class, like so:
public abstract class DownloadTask {
public final Future<File> doDownload() {
this.getParameters();
this.setParameters(...);
return this.performDownload();
}
public final File doDownloadAndWait() throws InterruptedException, ExecutionException {
Future<File> future = this.doDownload();
return future.get();
}
protected abstract List<Parameter> getParameters();
protected abstract void setParameters(Map<Parameter, Object> values);
protected abstract Future<File> performDownload();
}
This is actually a better object-oriented design, but one must take care. Keep the inheritance tree shallow (one or two parents before hitting a standard Java library class) for future code maintenance. And don't be tempted to make the template methods (doDownload and doDownloadAndWait) non-final. After all, that's the key to this pattern: the template of operations is fixed. Without that, there's hardly a pattern to follow and things can become a jumbled mess.
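As a rough illustration of the abstract-class variant (the class name, Parameter constants, and download body below are assumptions for the sketch, not part of the original answer), a concrete subclass only fills in the protected steps:

    import java.io.File;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.Future;

    public class YoutubeDownloadTask extends DownloadTask {

        private Map<Parameter, Object> values;

        @Override
        protected List<Parameter> getParameters() {
            // Declare which parameters this site needs (assumed Parameter constants).
            return Arrays.asList(Parameter.URL, Parameter.QUALITY);
        }

        @Override
        protected void setParameters(Map<Parameter, Object> values) {
            this.values = values;
        }

        @Override
        protected Future<File> performDownload() {
            // Placeholder for the real site-specific download logic.
            return CompletableFuture.supplyAsync(() -> new File("video.mp4"));
        }
    }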

You are headed in the right direction...
Implementing an interface instead of extending a class will make your life easier. Also, it's kind of annoying that you have to update an enum AND modify a case statement AND write a class to add a new handler.
Editing 3 places is annoying, 2 is good, 1 is awesome.
We could get yours down to 2 pretty easily:
public enum WEBSITE {
YOUTUBE(new DownloadYouTube()),
VIMEO(new DownloadVimeo());
public final Download download;
WEBSITE(Download dl)
{
download = dl;
}
}
public void startDownload(WEBSITE website) {
website.download.run();
}
There, now you only edit two places: the enum definition and the new class. The big problem with enums is that you almost always have to edit 2 places (you have to update the enum to point at whatever you added). In this case, the enum isn't helping you at all.
My first rule of enums is that if you don't individually address each value of the enum in a different location of your code, you shouldn't be using an enum.
You can get it down to 1 edit location, but with a new class it's pretty hard--the difficulty is that Java doesn't have "Discovery", so you can't just ask it for all the classes that implement "Download".
One way might be to use Spring and annotations; Spring can scan classes. Another is a runtime annotation. The worst is probably looking at the directory containing your "Download" classes and trying to instantiate each one.
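For what it's worth, the JDK's built-in java.util.ServiceLoader gets you close to one edit location: each new Download implementation is registered in a provider-configuration file under META-INF/services (named after the fully qualified Download type) instead of in an enum or switch. A sketch, assuming each Download implementation exposes a name() method so the right one can be picked (that method is my assumption, not part of the original code):

    import java.util.ServiceLoader;

    public class DownloadRegistry {
        public static Download forSite(String site) {
            // Iterates every Download implementation registered in META-INF/services.
            for (Download candidate : ServiceLoader.load(Download.class)) {
                if (candidate.name().equalsIgnoreCase(site)) {
                    return candidate;
                }
            }
            throw new IllegalArgumentException("No downloader registered for " + site);
        }
    }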
As is, your solution isn't bad. 2 locations is often a pretty good balance.

It would be much better if they only had to write the class.
You can use class literals as opposed to enums:
startDownload(DownloadYouTube.class);
Simply have startDownload accept a Class<? extends Download> and instantiate it:
public void startDownload(Class<? extends Download> type) {
try {
Download download = type.newInstance();
download.run();
} catch(Exception e) {
e.printStackTrace();
}
}
Now all you have to do is create a new class that extends/implements Download
class DownloadYouTube extends/implements Download {
}
Download should be an interface if it only consists of public abstract methods:
interface Download {
void run();
}
class DownloadYouTube implements Download {
//...
}
The startDownload method above would stay the same regardless.
WEBSITE should actually be Website. Type identifiers should start with an uppercase and use camel casing standards:
enum Website { }
Although an enum is not needed for my solution above.
You should check to see if you really need a new instance every call. It seems you could just pass a URL to the Download#run if you adjusted it.
If a new instance isn't needed every call:
Website exists for simple access to the different downloaders.
That means YOUTUBE should give access to DownloadYoutube.
You could turn Download into an interface:
interface Download {
void run();
}
DownloadYoutube and DownloadVimeo could all implement this:
class DownloadYoutube implements Download {
public void run() {
//...
}
}
class DownloadVimeo implements Download {
public void run() {
//...
}
}
Website can also implement this:
enum Website implements Download {
YOUTUBE, VIMEO;
public final void run() {
}
}
Now make it a requirement for each Website value to specify the Download to use:
enum Website implements Download {
YOUTUBE(new DownloadYoutube()),
VIMEO(new DownloadVimeo());
private final Download download;
Website(Download download) {
this.download = download;
}
@Override
public final void run() {
download.run();
}
}
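With that in place, a caller can treat each Website value directly as a Download:

    // Run a specific downloader...
    Website.YOUTUBE.run();

    // ...or all of them, since every Website is a Download.
    for (Download download : Website.values()) {
        download.run();
    }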

Related

Force a user of my library to implement an interface or extend an abstract class

I'm developing an Android library (.aar) and I was wondering if it is possible to, as the title suggests, force a user to implement an interface or extend an abstract class of my library.
I already know that I could just go with a class like this in my library:
public class MyLibrary
{
public interface VariablesInterface
{
void createVariables();
}
private static VariablesInterface vi = null;
public void setVariablesInterface(VariablesInterface v)
{
vi = v;
}
private static void SomeWork()
{
if (vi == null)
{
throw new RuntimeException("You noob.");
}
else
{
// do work
}
}
}
The library will work "alone" at some point, and when it comes to SomeWork(), if the interface isn't implemented it will crash; but this can only be seen at runtime.
Is there a way to have this behaviour when compiling the user's application?
The goal is to prevent the user from forgetting that he has to implement this, without having to write it in the documentation and hoping the user will read it.
Thanks for reading!
EDIT
I think that this question needs some enhancement and background.
The purpose of the library is to provide classes that create variables which manage preferences, e.g.:
public class VarPreferenceBoolean extends VarPreference
{
private boolean defaultValue;
public VarPreferenceBoolean(String key, boolean defaultValue)
{
super(key, true);
this.defaultValue = defaultValue;
}
public void setValue(Context context, boolean value)
{
SharedPreferences.Editor e = context.getSharedPreferences(PropertiesManager.preferenceFileName, Context.MODE_PRIVATE).edit();
e.putBoolean(key, value);
e.commit();
}
public boolean getValue(Context context)
{
readPropFile(context);
SharedPreferences sp = context.getSharedPreferences(PropertiesManager.preferenceFileName, Context.MODE_PRIVATE);
return sp.getBoolean(key, defaultValue);
}
}
The same goes for int, string and so on.
In the superclass, I add each VarPreference to a List so the library knows about all the available variables.
Note the readPropFile inside the getter.
Then the user uses the library in his project like this:
public class Constants
{
public static final VarPreferenceInt FILETYPE;
public static final VarPreferenceInt DATAMODE;
public static final VarPreferenceString URL_ONLINE;
public static final VarPreferenceBoolean UPDATING;
public static final VarPreferenceLong LAST_UPDATE;
static
{
FILETYPE = new VarPreferenceInt("FileType", MyFile.FileType.LOCAL.getValue());
DATAMODE = new VarPreferenceInt("DataMode", DataProvider.DataMode.OFFLINE.getValue());
URL_ONLINE = new VarPreferenceString("UrlOnline", "http://pouetpouet.fr");
UPDATING = new VarPreferenceBoolean("Updating", false);
LAST_UPDATE = new VarPreferenceLong("LastUpdate", 0L);
}
}
Now, when the user calls an accessor, readPropFile will first check whether a .properties file exists and update the preferences accordingly if it finds matches between the list of VarPreferences and the properties in the file. Then it will delete the file, and the accessor will return the value.
This is what exists today.
Now we want another application (let's say Pilot) to be able to get the VarPreferences of the user's application (let's say Client). Both use the library.
Pilot sends an Intent asking for the VarPreference list of Client, putting the package name of Client in an extra.
The library receives the intent, verifies the package name, and if it's Client it sends back the list.
The problem is, if Client hasn't started, no VarPreferences exist and the list is empty.
I need to force the user to create his VarPreferences in a method that my library knows about, so that I can call it whenever I want and create the user's VarPreferences when necessary.
Hope this is clearer!
EDIT
I rethought all of this with a colleague and it just hit us that this whole stack is biased.
I didn't explain it well, and even though I said it, I didn't take enough account of this: everything needs to be done from the library.
So even if I give an interface to the library, the application would have to run and perform this assignment first in order to let the library work alone.
We are heading towards introspection now.
(This is the goal, it may not be possible...)
There will be an abstract class inside the library, with an abstract method where the user will place all of the VarPreference creations. The user will have to extend this class and call the method in order to create his VarPreferences.
In the library, a method will search by introspection a child of the abstract class, create an instance of this child and call the method that will create the VarPreferences.
I would leave the abstract classes and interfaces in the main library and load the rest of your code via classloader from another. JDBC works like this.
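To illustrate the classloader idea (the method below and the way the class name is obtained are assumptions, not from the question), the JDBC-style lookup inside the library boils down to something like this:

    // Inside the library: locate the user's implementation by a well-known or configured
    // class name (e.g. read from the manifest) and instantiate it reflectively.
    public static VariablesInterface loadUserImplementation(String className) {
        try {
            Class<? extends VariablesInterface> clazz =
                    Class.forName(className).asSubclass(VariablesInterface.class);
            return clazz.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("No usable VariablesInterface implementation: " + className, e);
        }
    }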
Is there a way to have this behaviour when compiling the user's application?
I see no way to force a compilation failure. However, if you force them to supply a VariablesInterface in the constructor then it will fail immediately. Make the VariablesInterface field final and only initialize it in the constructor:
public class MyLibrary {
private final VariablesInterface vi;
public MyLibrary(VariablesInterface vi) {
if (vi == null) {
throw new IllegalArgumentException("vi can't be null");
}
this.vi = vi;
}
...
If you can't change the constructor, then you can also add to any public SomeWork-style methods some sort of configuration check to make sure the vi wiring has been done properly, but this requires careful programming to make sure all public methods are covered.
public void somePublicMethod() {
checkWiring();
...
}
private void checkWiring() {
if (vi == null) {
throw new IllegalStateException("vi needs to be specified");
}
}

What is the appropriate way to plan for a Java API with new features over time?

I'm working with a team on a new Java API for one of our internal projects. We probably won't be able to take the time to stop and hash out all the details of the Java interfaces and get them 100% perfect at the beginning.
We have some core features that have to be there up front, and others that are likely to be added later over time but aren't important now, and taking the time to design those features now is a luxury we don't have, especially since we don't have enough information yet to get all the design details right.
The Java approach to APIs is that once you publish an interface, it's effectively immutable and you should never change it.
Is there a way to plan for API evolution over time? I've read this question and I suppose we could do this:
// first release
interface IDoSomething
{
public void hop();
public void skip();
public void jump();
}
// later
interface IDoSomething2 extends IDoSomething
{
public void waxFloor(Floor floor);
public void topDessert(Dessert dessert);
}
// later still
interface IDoSomething3 extends IDoSomething2
{
public void slice(Sliceable object);
public void dice(Diceable object);
}
and then upgrade our classes from supporting IDoSomething to IDoSomething2 and then IDoSomething3, but this seems to have a code smell issue.
Then I guess there's the Guava way of marking interfaces with @Beta so applications can use these at their own risk, prior to being frozen, but I don't know if that's right either.
If you want flexible code, generics can help.
For example, instead of:
interface FloorWaxer
{
public void waxFloor(Floor floor);
}
You can have:
interface Waxer<T>
{
void wax(T t);
}
class FloorWaxer implements Waxer<Floor>
{
public void wax(Floor floor) { /* wax the floor */ }
}
Also, Java 8 brought default methods in interfaces, which allow you to add methods to already existing interfaces. With this in mind, you should make your interfaces as generic as possible; instead of:
interface Washer<T>
{
void wash(T what);
}
and then to later add
interface Washer<T>
{
void wash(T what);
void wash(T what, WashSubstance washSubstance);
}
and later add
interface Washer<T>
{
void wash(T what);
void wash(T what, WashSubstance washSubstance);
void wash(T what, WashSubstance washSubstance, Detergent detergent);
}
you can add from the beginning
@FunctionalInterface
interface Washer<T>
{
void wash(T what, WashSubstance washSubstance, Detergent detergent);
default void wash(T what, WashSubstance washSubstance)
{
wash(what, washSubstance, Detergent.DEFAULT_DETERGENT);
}
default void wash(T what, Detergent detergent)
{
wash(what, WashSubstance.DEFAULT_WASH_SUBSTANCE, detergent);
}
default void wash(T what)
{
wash(what, WashSubstance.DEFAULT_WASH_SUBSTANCE, Detergent.DEFAULT_DETERGENT);
}
}
Also, try to make your interfaces functional (only one abstract method) so you can benefit from lambda syntax.
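For instance, with the Washer interface above, a caller can supply an implementation as a lambda (Shirt is just an assumed example type):

    Washer<Shirt> shirtWasher =
            (what, washSubstance, detergent) -> System.out.println("Washing " + what);

    // The default methods fill in the missing arguments.
    shirtWasher.wash(new Shirt());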
You could take the approach that tapestry-5 has taken which it dubs "Adaptive API" (more info here).
Instead of locked-down interfaces, Tapestry uses annotations and POJOs. I'm not entirely sure of your circumstances, but this may or may not be a good fit. Note that Tapestry uses ASM (via Plastic) under the hood, so there is no runtime reflection to achieve this.
Eg:
public class SomePojo {
#Slice
public void slice(Sliceable object) {
...
}
#Dice
public void dice(Diceable object) {
...
}
}
public class SomeOtherPojo {
#Slice
public void slice(Sliceable object) {
...
}
#Hop
public void hop(Hoppable object) {
...
}
}
You could use a new package name for the new version of the API; this would allow the old and new API to live side by side, and API users can convert their components to the new API one at a time. You can provide some adaptors to help them with the heavy lifting on boundaries where objects get passed between classes using the new and old API.
The other option is quite harsh but could work for an internal project: just change what you need and make users adapt.
If you are just adding methods, providing a default implementation (in an abstract class) of the new methods can make the process smoother. Of course this is not always applicable.
Signalling the change by bumping the major version number and providing detailed documentation about how to upgrade the code base to the new version of the API is a good idea in both cases.
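As a sketch of the adaptor idea (the oldapi/newapi package names and the class name are made up for illustration), an implementation written against the old API can be wrapped so it satisfies the interface in the new package:

    // Presents the new API while delegating to an implementation of the old one.
    public class OldToNewDoSomethingAdapter implements newapi.IDoSomething {

        private final oldapi.IDoSomething delegate;

        public OldToNewDoSomethingAdapter(oldapi.IDoSomething delegate) {
            this.delegate = delegate;
        }

        @Override public void hop()  { delegate.hop(); }
        @Override public void skip() { delegate.skip(); }
        @Override public void jump() { delegate.jump(); }
    }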
I would suggest taking a look at the structural patterns. I think the Decorator pattern (also known as the Wrapper pattern) can fill your needs. See the example in the linked Wikipedia article.
Here's the way I approach this situation.
First, I'd use abstract classes so that you can plug in default implementations later. With the advent of inner and nested classes in JDK 1.1, interfaces add little; almost all use cases can be comfortably converted to use pure abstract classes (often as nested classes).
First release
abstract class DoSomething {
public abstract void hop();
public abstract void skip();
public abstract void jump();
}
Second release
abstract class DoSomething {
public abstract void hop();
public abstract void skip();
public abstract void jump();
abstract static class VersionTwo {
public abstract void waxFloor(Floor floor);
public abstract void topDessert(Dessert dessert);
}
public VersionTwo getVersionTwo() {
// make it easy for callers to determine whether new methods are supported
// they can do if (doSomething.getVersionTwo() == null)
return null;
// OR throw new UnsupportedOperationException(), depending on specifics
// OR return a default implementation, depending on specifics
}
// if you like the interface you proposed in the question, you can do this:
public final void waxFloor(Floor floor) {
getVersionTwo().waxFloor(floor);
}
public final void topDessert(Dessert dessert) {
getVersionTwo().topDessert(dessert);
}
}
Third release would be similar to second, so I'll omit it for brevity.
If you haven't designed the final API, don't use the name you want for it!
Call it something like V1RC1, V1RC2, .. and when it is done, you have V1.
People will see in their code that they are still using an RC version and can switch to the real thing when it is ready.
Rostistlav is basically saying the same, but he calls them all real API versions, so it would be V1, V2, V3, .... I think that's up to your taste.
You could also try an event-driven approach and add new event types as your API changes without affecting backwards compatibility.
eg:
// Java enums can't be generic, so the event types are modelled as typed constants on a final class.
public final class EventType<T> {
public static final EventType<Sliceable> SLICE = new EventType<>(Sliceable.class);
public static final EventType<Diceable> DICE = new EventType<>(Diceable.class);
public static final EventType<Hoppable> HOP = new EventType<>(Hoppable.class);
private final Class<T> contextType;
private EventType(Class<T> contextType) {
this.contextType = contextType;
}
public Class<T> getContextType() {
return this.contextType;
}
}
public interface EventHandler<T> {
void handleEvent(T context);
}
public interface EventHub {
<T> void subscribe(EventType<T> eventType, EventHandler<T> handler);
<T> void publish(EventType<T> eventType, T context);
}
public static void main(String[] args) {
EventHub eventHub = new EventHubImpl(); // TODO: Implement
eventHub.subscribe(EventType.SLICE, new EventHandler<Sliceable>() { ... });
eventHub.subscribe(EventType.DICE, new EventHandler<Diceable>() { ... });
eventHub.subscribe(EventType.HOP, new EventHandler<Hoppable>() { ... });
Hoppable hoppable = new HoppableImpl("foo", "bar", "baz");
eventHub.publish(EventType.HOP, hoppable); // fires the EventHandler<Hoppable>
}
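A minimal in-memory EventHub implementation (just a sketch, assuming Java 8+; the original answer leaves this as a TODO) could look like this:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class EventHubImpl implements EventHub {

        private final Map<EventType<?>, List<EventHandler<?>>> handlers = new HashMap<>();

        @Override
        public <T> void subscribe(EventType<T> eventType, EventHandler<T> handler) {
            handlers.computeIfAbsent(eventType, key -> new ArrayList<>()).add(handler);
        }

        @Override
        @SuppressWarnings("unchecked") // safe: subscribe() only pairs an EventType<T> with an EventHandler<T>
        public <T> void publish(EventType<T> eventType, T context) {
            for (EventHandler<?> handler : handlers.getOrDefault(eventType, Collections.emptyList())) {
                ((EventHandler<T>) handler).handleEvent(context);
            }
        }
    }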

Refactoring predecessor code

I'd like to ask for help and some suggestions on how to refactor source code which I received.
Here is pseudocode of my method:
public void generalMethod(String type) {
InputParameters params = new InputParameters();
if (type.equals("someKey1"){
decodeSomeKey1(params);
} else if (type.equals("someKey2"){
decodeSomeKey2(params);
} else if (type.equals("someKey3"){
decodeSomeKey3(params);
} else if (type.equals("someKey4"){
etc...
}
}
}
All methods have the same input parameters. In a first step, I created a new interface and, for each method, a separate class which implements that interface.
interface ISomeInterfaceDecoder {
void decode(InputParameters params);
}
class DecodeSomeKey1 implements ISomeInterfaceDecoder {
@Override
public void decode(InputParameters params) {
// some implementation
}
}
class DecodeSomeKey2 implements ISomeInterfaceDecoder {
@Override
public void decode(InputParameters params) {
// some implementation
}
}
Then I created factory class as follows:
class Factory {
ISomeInterfaceDecoder getDecoder(String type) {
if (type.equals("someKey1")) {
return new DecodeSomeKey1();
} else if (type.equals("someKey2")) {
return new DecodeSomeKey2();
} else if (type.equals("someKey3")) {
return new DecodeSomeKey3();
} else if (type.equals("someKey4")) {
etc...
}
throw new IllegalArgumentException("Unknown type: " + type);
}
}
After these changes the code looks like this:
class SomeClass {
Factory factory = new Factory();
public void generalMethod(String type) {
InputParameters params = new InputParameters();
ISomeInterfaceDecoder decoder = factory.getDecoder(type);
decoder.decode(params);
}
}
The code of this method looks better, but...
This method is called very, very often, and each time a new instance of the given class is created. This can cause performance problems, so I think this is not a good approach to the problem.
Can you give me some suggestions on how I should refactor this code?
Thanks in advance for your help.
Instead of having a key as a String, make it an enum. Then in the enum you can implement the decode() method like this:
public enum MyKeyEnum {
VALUE1 {
public void decode(InputParameters ip) {
// do specific decoding for VALUE1
}
},
VALUE2 {
public void decode(InputParameters ip) {
// do specific decoding for VALUE2
}
}
...
;
public abstract void decode(InputParameters ip);
}
Now in the calling code you can do something like this:
public void generalMethod(MyKeyEnum type) {
InputParameters params = new InputParameters();
type.decode(params);
}
The advantage is that all the decode methods are in one enum, so you don't need a specific class for each of the decoders. Also, when a new value is added to the enum, you cannot forget to implement the decode method (or it will not compile).
Can you give me some suggestion how I should to refactor this code?
I see no mention of automated regression testing, and that would be my first step, to put in a test suite (via, say, JUnit or TestNG) before going further.
After that, I'd perhaps introduce a Map of String keys to Decoder objects.
But put the test framework in first. Otherwise you'll never really know if you've introduced bugs or different modes of operation.
Introduce caching/singletons in your factory, so that each algorithm is only created once. Also, make your factory a singleton.
Create a static Map<String, ISomeInterfaceDecoder> where you map the identifier to the algorithm executing the call, which means no factory class and no repeated algorithm instantiation. This works only if you have stateless algorithms.
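A minimal sketch of that map-based dispatch, reusing the decoder classes from the question (this assumes the decoders are stateless and can be shared):

    import java.util.HashMap;
    import java.util.Map;

    class SomeClass {

        // Each decoder is created exactly once and reused for every call.
        private static final Map<String, ISomeInterfaceDecoder> DECODERS = new HashMap<>();
        static {
            DECODERS.put("someKey1", new DecodeSomeKey1());
            DECODERS.put("someKey2", new DecodeSomeKey2());
            // ... one entry per supported key ...
        }

        public void generalMethod(String type) {
            InputParameters params = new InputParameters();
            ISomeInterfaceDecoder decoder = DECODERS.get(type);
            if (decoder == null) {
                throw new IllegalArgumentException("Unknown type: " + type);
            }
            decoder.decode(params);
        }
    }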

Calling different services from domain classes of same base class

The question is mostly a design question (somewhat related to DDD). Sorry about the contrived example:
Assume you have (domain) classes representing different types of fruit: apple, cherry and so on. Now suppose you have to implement some behavior for pressing out the juice. A caller should be able to invoke squeezing without knowing which specific fruit he's got.
Where should I put this behavior?
Surely, one could define a fruit interface / base class function
Fruit#squeeze()
and let all subclasses implement their own behavior.
Now a caller could simply do something like this:
Fruit f = new Cherry();
f.squeeze();
But what should be done if squeezing isn't that simple and involves more complex behavior, like calling different external services, a different one for each fruit, like
AppleJuicerService#squeeze(Apple a)
and
CherryJuicerService#squeeze(Cherry c)
? It feels wrong to call services from a domain class.
I've read about the double dispatch pattern which seems not to fit here, as every subclass needs a different service.
My question would be: What can be done here to get a "clean" design?
EDIT:
Thanks for all your answers so far. I'll try to clarify the problem a bit. I'll try to give another, hopefully less contrived example for the problem I'm trying to state here:
Consider a Message base class which allows showing its content as a String.
interface Message {
String showContent();
}
Now suppose we have different types of messages like an EMailMessage:
class EMailMessage implements Message {
//some specific parameters for email
private EmailAddress recipientEmail;
public String showContent() {
//here the content would be converted to string
return "the content of an EMail"
}
}
Another type would be an SMSMessage:
class SMSMessage implements Message {
//some specific parameters for SMS
private TelNumber recipientTelephoneNumber;
public String showContent() {
//here the content would be converted to string
return "the content of a SMS"
}
}
Furthermore, suppose Messages are modeled as entities and can therefore be persisted in a database. Also, though quite technical, assume that some dependency injection framework like Spring is used to inject dependencies.
In analogy to the fruit example, consider we have to implement a send() behaviour which sends the Message to the recipient. Furthermore, assume that sending an EMail involves different logic than an SMS. Now, the question: Where should one put the logic of sending a Message?
Usually I'd opt to create a service for sending an SMS for example which would encapsulate e.g. the API of an SMS service provider. Furthermore, I'd create another service to encapsulate sending an EMail.
interface SendMessageService<T extends Message> {
void send(T message);
}
class SendEmailService implements SendMessageService<EMailMessage> {
public void send(EMailMessage message) {
//send the EMail
}
}
class SendSMSService implements SendMessageService<SMSMessage> {
public void send(SMSMessage message) {
//send the SMS
}
}
The drawback of this approach is that you cannot send a Message without determining its concrete subclass, i.e. something like the following is not directly possible
List<Message> messages = //Messages of different types
SendMessageService service = //???
for (Message m : messages) {
service.send(m);
}
Surely one could create a factory for creating Services according to the specific type of message. But that somewhat means cloning the inheritance hierarchy of Message. Is there some better way to achieve the desired result? Or am I missing something? Or would it be better to somehow inject the service into the entity?
You can delegate the work to a SqueezeBehavior interface and let each implementation define how to squeeze a Fruit or a specific Fruit. This is a raw idea (meaning it can be improved, but it is good as a first step):
interface SqueezeBehavior<T> {
void squeeze(T squeezeMe);
}
interface FruitSqueezeBehavior<T extends Fruit> extends SqueezeBehavior<T> {
}
class FruitSqueezer implements FruitSqueezeBehavior<Fruit> {
public void squeeze(Fruit fruit) {
System.out.println("squizing any fruit");
}
}
class AppleSqueezer implements FruitSqueezeBehavior<Apple> {
public void squeeze(Apple apple) {
System.out.println("squizing apple");
}
}
class CherrySqueezer implements FruitSqueezeBehavior<Cherry> {
public void squeeze(Cherry cherry) {
System.out.println("squizing cherry");
}
}
class FruitService {
public void foo(Fruit fruit) {
FruitSqueezeBehavior fruitSqueezer = ...
fruitSqueezer.squeeze(fruit);
}
}
Have a base class Fruit which defines the standard behaviour. When you need a more complex implementation, you can override the appropriate method.
class Fruit {
public void Squeeze(){
// Standard squeeze behaviour
}
}
class Apple extends Fruit {
@Override
public void Squeeze(){
// Complex squeeze behaviour
}
}
class Cherry extends Fruit {
// Nothing special, cherries are easy to squeeze
}
If you have to define specific implementations for specific types, you will always have to define the behaviour somewhere. If this is too much for one method then you can call a more detailed class to do it for you.
You could work with a factory and do something like this
class FruitManipulator {
void Squeeze(Fruit f){
// Switch over fruit, create new service depending on the type
}
}
interface JuiceService<T extends Fruit> {
void Squeeze(T f);
}
class AppleJuiceService implements JuiceService<Apple> {
public void Squeeze(Apple apple){
// Do your thing
}
}
And use it like this:
FruitManipulator service = new FruitManipulator();
service.Squeeze(new Apple());
You might want to find a better example though: the Squeeze() analogy isn't easy to work with. Perhaps expand on what a squeeze actually means?
You may consider domain events. This helps you decouple domain models from external services (which are usually stateless beans that need to be injected).
interface Fruit {
void squeeze();
}
class Apple implements Fruit {
@Override
public void squeeze(){
// domain rules validations
DomainEvents.raise(new AppleSqueezedEvent(this));
}
}
class Cherry implements Fruit {
@Override
public void squeeze(){
// domain rules validations
DomainEvents.raise(new CherrySqueezedEvent(this));
}
}
class Banana implements Fruit {
@Override
public void squeeze(){
// domain rules validations
// hmm...No one cares banana...
}
}
class DomainEvents {
private static List<DomainEventHandler> handlers = new ArrayList<DomainEventHandler>();
public static void register(DomainEventHandler handler) {
handlers.add(handler);
}
public static void raise(DomainEvent event) {
for (DomainEventHandler handler : handlers) {
if (handler.subscribe(event.getClass())) {
handler.handle(event);
}
}
}
}
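For completeness, the handler contract this relies on would look roughly like this (a sketch inferred from the code above, not spelled out in the original answer):

    interface DomainEvent {
    }

    interface DomainEventHandler {
        // True if this handler wants to receive events of the given type.
        boolean subscribe(Class<? extends DomainEvent> eventType);

        void handle(DomainEvent event);
    }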
Now when you test apple, you could register some handler mock/stub:
#Test
public void tellsAppleIsSqueezed() throws Throwable {
DomainEventHandler stub = new FruitSqueezedEventHandlerStub(Apple.class);
DomainEvents.register(stub );
Apple apple = new Apple();
apple.squeeze();
//assert state change of apple if any before you publishing the event
assertThat(stub.getSqueezed(), sameInstance(apple));
}
You can test the real handler in their own unit test cases.
But I think this solution adds extra complexity.

Separating Service Logic from Data

I have been looking over a couple of classes I have in an Android project, and I realized that I have been mixing logic with data. Having realized how bad this can be for the readability and testability of my project, I decided to do some refactoring in order to move all service logic into separate service modules. However, since I have been relying on Java's polymorphism, I got lost and need some guidance.
Suppose I have this "to-be-changed" layout for a super data class, and two sub-classes:
public class DataItem {
/* some variables */
public void saveToDB(/* Some Arguments */) {
/* do some stuff */
}
public void render() {
/* render the class */
}
}
public class ChildDataItemA extends DataItem {
@Override
public void saveToDB(/* Some Arguments */) {
super.saveToDB();
/* more specific logic to ChildDataItemA */
}
@Override
public void render() {
/* render logic for ChildDataItemA */
}
}
public class ChildDataItemB extends DataItem {
@Override
public void saveToDB(/* Some Arguments */) {
super.saveToDB();
/* more specific logic to ChildDataItemB */
}
@Override
public void render() {
/* render logic for ChildDataItemB */
}
}
Now, I thought about moving the saveToDB() and render() methods to a service class. However, sometimes I need to be able to call these methods on an instance whose compile-time type is DataItem, without knowing its runtime type. For instance, I might want to make the following call:
List<DataItem> dataList;
for (DataItem item: dataList) {
item.saveToDB();
item.render();
}
Additionally, I thought of doing the following:
public class ChildDataItemB extends DataItem {
@Override
public void saveToDB(/* Some Arguments */) {
super.saveToDB();
/* more specific logic to ChildDataItemB */
Service.saveToDBB();
}
@Override
public void render() {
/* render logic for ChildDataItemB */
Service.renderB();
}
}
Where I still keep 'dummy' methods in each subclass that would call an appropriate service method. However, I do not think that this really achieves the separation I want since data classes will still know about services (bad!).
Any ideas on how to solve this?
Edit: Note that render() and saveToDB() are just generic examples of what these methods can be, so the problem is not really about choosing an ORM or SQL related techniques.
Visitor pattern to the rescue. Create a visitor interface and have each service implement this interface:
public interface DataItemVisitor {
// one method for each subtype you want to handle
void process(ChildDataItemA item);
void process(ChildDataItemB item);
}
public class PersistenceService implements DataItemVisitor { ... }
public class RenderService implements DataItemVisitor { ... }
Then have each DataItem implement an accept method:
public abstract class DataItem {
public abstract void accept(DataItemVisitor visitor);
}
public class ChildDataItemA extends DataItem {
@Override
public void accept(DataItemVisitor visitor) {
visitor.process(this);
}
}
public class ChildDataItemB extends DataItem {
@Override
public void accept(DataItemVisitor visitor) {
visitor.process(this);
}
}
Note that all accept implementations look the same but this refers to the correct type in each subclass. Now you can add new services without having to change the DataItem classes.
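Usage then mirrors the loop from the question, with each service passed in as a visitor (fetchItems() is just an assumed source of the items):

    List<DataItem> dataList = fetchItems();
    DataItemVisitor persistence = new PersistenceService();
    DataItemVisitor renderer = new RenderService();

    for (DataItem item : dataList) {
        item.accept(persistence);
        item.accept(renderer);
    }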
So you want to do:
List<DataItem> dataList;
for (DataItem item: dataList) {
service.saveToDB(item);
service.render(item);
}
For this you need to set up a system for your service to know more details about your DataItem subclass.
ORMs and serializers usually solve this via a metadata system, e.g. by finding an XML file whose name matches the subclass and which contains the properties to save or serialize.
ChildDataItemA.xml
<metaData>
<column name="..." property="..."/>
</metaData>
You could get the same result via reflection and annotations.
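For example (a sketch; the @Column annotation and the service below are assumptions, not an existing API), fields of the subclasses could carry the metadata as annotations, and a service could read them reflectively:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Field;

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Column {
        String name();
    }

    class ReflectivePersistenceService {
        public void saveToDB(DataItem item) throws IllegalAccessException {
            // Walk the concrete subclass's fields and pick up the annotated ones.
            for (Field field : item.getClass().getDeclaredFields()) {
                Column column = field.getAnnotation(Column.class);
                if (column != null) {
                    field.setAccessible(true);
                    Object value = field.get(item);
                    System.out.println(column.name() + " = " + value); // stand-in for the real persistence call
                }
            }
        }
    }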
In your case, an application of the Bridge pattern could also work:
class DataItem {
public void describeTo(MetaData metaData){
...
}
}
class Service {
public void saveToDB(DataItem item) {
MetaData metaData = new MetaData();
item.describeTo(metaData);
...
}
}
Your metadata could be decoupled from saving or rendering, so you can use the same metadata for both.
I would clean the "data" classes of render and saveToDB methods.
Instead, I would create a hierarchy of wrappers for DataItem (it does not have to mimic exactly the DataItem hierarchy). These wrappers will be the ones implementing those methods.
Additionally, I suggest that (if you can), you move to some ORM (Object-Relational Mapping) like Hibernate or JPA to get rid of the saveToDB method.
First of all, the DataItem class should be clean, with only getters and setters and no logic at all, just like a POJO. Moreover, your DataItem should maybe be abstract.
Now, for the logic: like others suggested, I would use some ORM framework for the saveToDB part, but you said that doesn't help you because it's an Android project and you have other methods like this as well.
So what I would do is create an interface, IDataItemDAO, with the following logic:
public interface IDataItemDAO<T extends DataItem > {
public void saveToDB(T data, /* Some Arguments */);
... other methods that you need ...
}
I would create an abstract DAO for DataItem and put in it all the code shared by all DataItems:
public abstract class DataItemDAO<T extends DataItem> implements IDataItemDAO<T> {
@Override
public void saveToDB(T data, /* Some Arguments */) {
...
}
}
Then I would create a DAO for each DataItem class that you have:
public class ChildDataItemADAO extends DataItemDAO<ChildDataItemA> {
@Override
public void saveToDB(ChildDataItemA data, /* Some Arguments */) {
super.saveToDB(data, ...);
//other specific saving
}
}
The other part is how to use the correct DAO for the correct instance. For this I would create a class that gives me the correct DAO for a given instance; it is a very simple method of if-else statements (or you can do it dynamically with a map from class to DAO, as sketched after the usage code below):
public DataItemDAO getDao(DataItem item) {
if (item instanceof ChildDataItemA) {
//cache and reuse the instance, of course
return new ChildDataItemADAO();
}
//... the same for the other DataItem types ...
throw new IllegalArgumentException("No DAO for " + item.getClass());
}
so you should use it like this:
List<DataItem> dataList;
for (DataItem item: dataList) {
factory.getDao(item).saveToDB(item);
}
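The dynamic variant mentioned above (a sketch, assuming the DAOs are stateless and shareable; ChildDataItemBDAO is assumed to exist analogously) replaces the if-else chain with a map keyed by class:

    import java.util.HashMap;
    import java.util.Map;

    public class DaoFactory {

        private final Map<Class<? extends DataItem>, DataItemDAO<?>> daos = new HashMap<>();

        public DaoFactory() {
            // One shared DAO per concrete type; nothing is instantiated per call.
            daos.put(ChildDataItemA.class, new ChildDataItemADAO());
            daos.put(ChildDataItemB.class, new ChildDataItemBDAO());
        }

        @SuppressWarnings("unchecked") // safe: each entry maps a class to the DAO of that same class
        public <T extends DataItem> DataItemDAO<T> getDao(T item) {
            DataItemDAO<T> dao = (DataItemDAO<T>) daos.get(item.getClass());
            if (dao == null) {
                throw new IllegalArgumentException("No DAO registered for " + item.getClass());
            }
            return dao;
        }
    }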
If you want to separate logic from data, you may try the following approach:
Create your data classes DataItem, ChildDataItemA, ChildDataItemB without the methods operating on the data.
Create an interface for some operations on your data classes, something like:
public interface OperationGroup1OnDataItem {
void saveToDB(DataItem dataItem /*plus other params*/);
void render(DataItem dataItem /*plus other params*/);
......
}
Create a factory for implementing an OperationGroup provider
public class OperationFactoryProvider {
public static OperationGroup1OnDataItem getOperationGroup1For(Class<?> clazz) {
....
}
}
Use it in your code:
List<DataItem> dataList;
for (DataItem item: dataList) {
OperationGroup1OnDataItem provider = OperationFactoryProvider.getOperationGroup1For(item.getClass());
provider.saveToDB(item);
provider.render(item);
}
You can choose to implement the factory with a simple static map where you put the class (or the class's full name) as the key and an object implementing the interface as the value; something like:
Map<String,OperationGroup1OnDataItem> factoryMap= new HashMap<String,OperationGroup1OnDataItem>();
factoryMap.put(DataItem.class.getName(),new SomeClassThatImplementsOperationGroup1OnDataItemForDataItem());
factoryMap.put(ChildDataItemA.class.getName(),new SomeClassThatImplementsOperationGroup1OnDataItemForChildDataItemA());
The implementation of the getOperationGroup1For is:
return factoryMap.get(clazz.getName());
This is one example of separating logic from data. If you want to separate logic from data, the logic methods must be extracted from your data classes; otherwise there is no separation. So I think every solution must start with removing the logic methods.
