Question
If one defines a public class Something <T> { ... }, then Java will complain about using a raw type if you do this: Something noparam = new Something();
Is it possible to define an interface or class that can be used with or without a type parameter?
Context
The frontend of a Java app interfaces asynchronously with our backend using callbacks:
public interface ResultCallback<T> {
    public void onResult(T result);
    public void onError(String message);
}
Here's an example of the backend I'm talking about, with some basic CRUD operations using the callback above:
public interface Backend {
    // create a new Comment for a specified blog Post
    public void createComment(Post post, ResultCallback<Comment> callback);

    // retrieve the Comment with the specified UUID
    public void getComment(UUID id, ResultCallback<Comment> callback);

    // delete the Comment with the specified UUID
    public void deleteComment(UUID id, ResultCallback callback);
}
Notice that the delete operation's callback.onResult(T result) will not have a Comment parameter. It might make sense to parametrize that result with a Comment and just "return" the deleted Comment, even if it's just to satisfy the type parameter constraint of ResultCallback. I don't like this idea because the Comment is gone from the backend and no changes to it will persist.
Just for a usage example, the idea is that ResultCallbacks are defined in the frontend of the app and passed to the async backend. For example, to display a comment:
public class CommentRenderer implements ResultCallback<Comment> {
    @Override
    public void onResult(Comment comment) {
        // display the comment, commenter, date, etc. on screen
    }

    @Override
    public void onError(String message) {
        // display an error message
    }
}
No. The closest thing that comes to mind would be to use ResultCallback<Void>, but that would still require you to pass onResult a null argument, which is kind of ugly.
In this case I would recommend either using a separate interface for the delete case or letting ResultCallback have more than one method:
interface ResultCallback<T> {
    void onResult(T t);
    void onDelete(); // Called after delete.
    void onError(String message);
}
If you find it frustrating to override onDelete even though you're rarely interested in "listening" for this type of event, you can give it an empty default implementation:
...
default void onDelete() {
// do nothing by default
}
...
An alternative for pre-Java 8 is to use an Adapter, which would look as follows:
abstract class ResultAdapter<T> implements ResultCallback<T> {
    @Override
    public void onResult(T t) {
    }

    @Override
    public void onDelete() {
    }

    @Override
    public void onError(String msg) {
    }
}
You can always use ResultCallback<Void> and use null in that case.
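To make the ResultCallback<Void> option concrete, here is a minimal sketch. The interface matches the question (with the String-parameter onError used later in the thread); the DeleteLogger class is a hypothetical caller-side callback, not from the original post.

```java
// Sketch of the ResultCallback<Void> approach for result-less operations.
interface ResultCallback<T> {
    void onResult(T result);
    void onError(String message);
}

// Hypothetical frontend callback for a delete operation.
class DeleteLogger implements ResultCallback<Void> {
    boolean completed = false;

    @Override
    public void onResult(Void result) { // the backend passes null here
        completed = true;
    }

    @Override
    public void onError(String message) {
        System.err.println("delete failed: " + message);
    }
}
```

The null argument at the call site (`callback.onResult(null)`) is exactly the wart the answer above is pointing at.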
How about a bounded generic type?
public class Foo<T extends Comment> { ... }
public class FooDefault extends Foo<Baz> { ... }
Related
For example, I have an app that can download videos. Since the download tasks are similar, I created a base class for downloading.
public abstract class Download {
public abstract void run();
}
For each concrete website videos can be downloaded from, I create a child class of the base class:
public class DownloadYouTube extends Download {
    @Override
    public void run() {
    }
}

public class DownloadVimeo extends Download {
    @Override
    public void run() {
    }
}
To determine which site the user wants to download from, I created an enum, and I switch on it to create the right object, then call the common method run().
public enum WEBSITE {
YOUTUBE,
VIMEO
}
public void startDownload(WEBSITE website) {
    Download download;
    switch (website) {
        case YOUTUBE:
            download = new DownloadYouTube();
            break;
        case VIMEO:
            download = new DownloadVimeo();
            break;
        default:
            throw new IllegalArgumentException("Unknown website: " + website);
    }
    download.run();
}
Later, other people may want to add new websites. With this design that is not easy: they have to edit three places. They have to alter the enum, add a new case, and write the class itself.
It would be far better if they just had to write the class.
Is there any common code design or other advice for handling such a situation better than this?
As a possible solution, you can add an abstract factory method to your enum which creates the necessary Download object.
So WEBSITE becomes not just a list of websites you support, but also encapsulates behaviour for each of them:
public enum WEBSITE {
    YOUTUBE {
        @Override
        public Download createDownload() {
            return new DownloadYouTube();
        }
    },
    VIMEO {
        @Override
        public Download createDownload() {
            return new DownloadVimeo();
        }
    };

    public abstract Download createDownload();
}

public void startDownload(WEBSITE website) {
    website.createDownload().run();
}
With such an approach it will be impossible to add a new WEBSITE without defining how it should be handled.
Create a map! A map is a data structure that lets you look up a value by a key, so an instance of each of your download classes can be accessed by providing a string (your website variable can become the key).
import java.util.HashMap;
import java.util.Map;

Map<String, Download> downloaders = new HashMap<>();
downloaders.put("Youtube", new DownloadYouTube());
downloaders.put("Vimeo", new DownloadVimeo());

// Iterate over all downloaders, using the keySet method.
for (String key : downloaders.keySet()) {
    Download d = downloaders.get(key);
    d.run();
}
NOTE: If you intend to use multiple instances of the same Download class, this solution will not work as posted here.
What you are trying to decide upon has been done before. There is a design pattern called Template Method.
The key idea behind a template method is that the skeleton of an algorithm is enforced, but the details fall upon subclasses. You have a task interface:
public interface DownloadTask {
    public List<Parameter> getParameters();
    public void setParameters(Map<Parameter, Object> values);
    public Future<File> performDownload();
}
with two concrete implementations
public class VimeoDownload implements DownloadTask {
...
}
public class YoutubeDownload implements DownloadTask {
...
}
or if you really want an enum, then
public enum DefaultDownload implements DownloadTask {
YOUTUBE,
VIMEO;
}
but I don't think an enum will buy you as much as you might think. To implement the interface in an enum, you must either share the methods, like so:
public enum DefaultDownload implements DownloadTask {
YOUTUBE,
VIMEO;
public void someMethod(...) {
}
}
or declare them individually
public enum DefaultDownload implements DownloadTask {
YOUTUBE() {
public void someMethod(...) {
}
},
VIMEO() {
public void someMethod(...) {
}
};
}
And both scenarios put you at risk of breaking all downloads to add one new download.
Then you have the actual template method that ties it all together
public class Downloader {
    /* asynchronous API */
    public Future<File> doDownload(DownloadTask task) {
        Map<Parameter, Object> paramValues = new HashMap<>();
        for (Parameter param : task.getParameters()) {
            Object value = getValue(param);
            paramValues.put(param, value);
        }
        task.setParameters(paramValues);
        return task.performDownload();
    }

    /* synchronous API */
    public File doDownloadAndWait(DownloadTask task) throws Exception {
        Future<File> future = doDownload(task);
        return future.get();
    }
}
The actual steps I provided in DownloadTask are not the ones you will likely need. Change them to suit your needs.
Some people don't use an interface for the DownloadTask, that's fine. You can use an abstract class, like so
public abstract class DownloadTask {
    public final Future<File> doDownload() {
        this.getParameters();
        this.setParameters(...);
        return this.performDownload();
    }

    public final File doDownloadAndWait() throws Exception {
        Future<File> future = this.doDownload();
        return future.get();
    }

    protected abstract List<Parameter> getParameters();
    protected abstract void setParameters(Map<Parameter, Object> values);
    protected abstract Future<File> performDownload();
}
This is actually a better Object Oriented design, but one must take care. Keep the inheritance tree shallow (one or two parents before hitting a standard Java library class) for future code maintenance. And don't be tempted to make the template methods (doDownload and doDownloadAndWait) non-final. After all, that's the key to this pattern, the template of operations is fixed. Without that there's hardly a pattern to follow and stuff can become a jumbled mess.
You are headed in the right direction...
Implementing an interface instead of extending a class will make your life easier. Also, it's kind of annoying that you have to update an enum AND modify a case statement AND write a class to add a new handler.
Editing 3 places is annoying; 2 is good; 1 is awesome.
We could get yours down to 2 pretty easily:
public enum WEBSITE {
    YOUTUBE(new DownloadYouTube()),
    VIMEO(new DownloadVimeo());

    public final Download download;

    WEBSITE(Download dl) {
        download = dl;
    }
}

public void startDownload(WEBSITE website) {
    website.download.run();
}
There, now you only edit two places: the enum definition and the new class. The big problem with enums is that you almost always have to edit 2 places (you have to update the enum to point at whatever you add). In this case, the enum isn't helping you at all.
My first rule of enums is that if you don't individually address each value of the enum in a different location of your code, you shouldn't be using an enum.
You can get it down to 1 edit location, but with a new class it's pretty hard: the difficulty is that Java doesn't have built-in "discovery", so you can't just ask it for all the classes that implement Download.
One way might be to use Spring and annotations; Spring can scan classes. Another is runtime annotation scanning. The worst is probably scanning the directory containing your Download classes and trying to instantiate each one.
As is, your solution isn't bad. 2 locations is often a pretty good balance.
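One JDK facility worth mentioning alongside the discovery discussion above is java.util.ServiceLoader: implementations are listed in a META-INF/services/<interface-name> text file at build time, so adding a downloader means one new class plus one line in that file, with no enum or switch to edit. A minimal sketch, assuming the Download interface from the question and no provider files on the classpath:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

interface Download {
    void run();
}

class DownloadRegistry {
    // Collects every Download implementation named in a
    // META-INF/services resource on the classpath.
    static List<Download> discover() {
        List<Download> found = new ArrayList<>();
        for (Download d : ServiceLoader.load(Download.class)) {
            found.add(d);
        }
        return found;
    }
}
```

With no provider files present, discover() simply returns an empty list; each jar that ships a downloader contributes its own entries.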
It would be way better if they just had to write the class.
You can use class literals as opposed to enums:
startDownload(DownloadYouTube.class);
Simply have startDownload accept a Class<? extends Download> and instantiate it:
public void startDownload(Class<? extends Download> type) {
    try {
        Download download = type.getDeclaredConstructor().newInstance();
        download.run();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Now all you have to do is create a new class that extends/implements Download
class DownloadYouTube extends/implements Download {
}
Download should be an interface if it only consists of public abstract methods:
interface Download {
void run();
}
class DownloadYouTube implements Download {
//...
}
The startDownload method above would stay the same regardless.
WEBSITE should actually be Website. Type identifiers should start with an uppercase letter and use camel case:
enum Website { }
Although an enum is not needed for my solution above.
You should check to see if you really need a new instance every call. It seems you could just pass a URL to the Download#run if you adjusted it.
If a new instance isn't needed every call:
Website exists for simple access to the different downloaders.
That means YOUTUBE should give access to DownloadYoutube.
You could turn Download into an interface:
interface Download {
void run();
}
DownloadYoutube and DownloadVimeo could all implement this:
class DownloadYoutube implements Download {
public void run() {
//...
}
}
class DownloadVimeo implements Download {
public void run() {
//...
}
}
Website can also implement this:
enum Website implements Download {
    ; // constants added in the next step

    @Override
    public final void run() {
    }
}
Now make it a requirement for each Website value to specify the Download to use:
enum Website implements Download {
    YOUTUBE(new DownloadYoutube()),
    VIMEO(new DownloadVimeo());

    private final Download download;

    Website(Download download) {
        this.download = download;
    }

    @Override
    public final void run() {
        download.run();
    }
}
I have an interface and its 2 implementations, say:
public interface ObjectProcessor {
    public void process(List<String> objectNames);
}

public class CarImpl implements ObjectProcessor {
    @Override
    public void process(List<String> carNames) {
        // car logic
    }
}

public class VanImpl implements ObjectProcessor {
    @Override
    public void process(List<String> vanNames) {
        // van logic
    }
}
Now the caller who uses this interface looks like :
public void caller(VehicleType vehicleType, List<String> vehicleNames) {
    ObjectProcessor processor = null;
    if (vehicleType == VehicleType.CAR) {
        processor = new CarImpl();
        processor.process(vehicleNames);
    }
}
VehicleType is an enum.
This works fine. But is there any way I can call an interface dynamically without adding if statements? In the future, if I support another vehicle, I need to add an if statement along with a new implementation of the interface. How can I avoid this?
Override an abstract factory method in the enum, like this:
public enum VehicleType {
    Car {
        @Override
        public ObjectProcessor createImpl() {
            return new CarImpl();
        }
    },
    Van {
        @Override
        public ObjectProcessor createImpl() {
            return new VanImpl();
        }
    };

    public abstract ObjectProcessor createImpl();
}

public void caller(VehicleType vehicleType, List<String> vehicleNames) {
    ObjectProcessor processor = vehicleType.createImpl();
    processor.process(vehicleNames);
}
VehicleType combines enumeration with a factory.
Or you can write all the logic in the enum, like this:
public enum VehicleType {
    Car {
        @Override
        public ObjectProcessor createImpl() {
            return new ObjectProcessor() {
                @Override
                public void process(List<String> objectNames) {
                    // car logic
                }
            };
        }
    },
    Van {
        @Override
        public ObjectProcessor createImpl() {
            return new ObjectProcessor() {
                @Override
                public void process(List<String> objectNames) {
                    // van logic
                }
            };
        }
    };

    public abstract ObjectProcessor createImpl();
}
In this case you don't need implementation classes (CarImpl, VanImpl, ...) any more.
Use the Factory pattern. Here are some benefits of using it, from http://javarevisited.blogspot.com/2011/12/factory-design-pattern-java-example.html#ixzz3ueUdV947:
1) The factory method design pattern decouples the calling class from the target class, which results in less coupled, more cohesive code.
2) The factory pattern in Java enables subclasses to provide an extended version of an object, because creating an object inside the factory is more flexible than creating it directly in the client. Since the client works at the interface level, you can enhance the implementation at any time and return it from the factory.
3) Another benefit of the factory pattern is that it encourages consistency in code, since every object is created through the factory rather than through different constructors at different call sites.
4) Code written using the factory pattern is also easier to debug and troubleshoot, because you have a centralized method for object creation and every client gets its objects from the same place.
What you're basically implementing is a Factory pattern, as proposed in the other answers. But in the end you will have to write an 'if' or 'switch' statement to select the correct implementation (or strategy) for your enum value. And, as you mentioned yourself, you'd have to extend this selection whenever you add or remove an enum value. You can circumvent this by using a map, like so:
public class ProcessorSelector {
private final Map<VehicleType, ObjectProcessor> processors;
public ProcessorSelector(Map<VehicleType, ObjectProcessor> processors) {
this.processors = processors;
}
public void process(VehicleType type, List<String> input) {
processors.get(type).process(input);
}
}
You can then configure your ProcessorSelector by passing a map with all the processor implementations mapped to the correct enum value (notice I used Guava's ImmutableMap to conveniently construct the map):
new ProcessorSelector(ImmutableMap.of(
        VehicleType.CAR, new CarImpl(),
        VehicleType.VAN, new VanImpl()));
You'll never have to change your ProcessorSelector again; only the construction/configuration of the class changes. In fact, you could say we just implemented the strategy pattern here. These selector classes are very common, and if you find yourself implementing them often you could even use a more generic implementation; I recently described this in a blog post: https://hansnuttin.wordpress.com/2015/12/03/functionselector/
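Since the keys here are enum constants, java.util.EnumMap is a natural fit for the backing map (array-backed lookup, no hashing). A self-contained sketch reusing the names from this answer; the register method and the null check are my additions, not part of the original:

```java
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

enum VehicleType { CAR, VAN }

interface ObjectProcessor {
    void process(List<String> names);
}

class ProcessorSelector {
    // EnumMap stores entries in an array indexed by the enum's ordinal.
    private final Map<VehicleType, ObjectProcessor> processors =
            new EnumMap<>(VehicleType.class);

    void register(VehicleType type, ObjectProcessor processor) {
        processors.put(type, processor);
    }

    void process(VehicleType type, List<String> input) {
        ObjectProcessor p = processors.get(type);
        if (p == null) {
            throw new IllegalArgumentException("No processor registered for " + type);
        }
        p.process(input);
    }
}
```

Because ObjectProcessor has a single abstract method, registrations can be lambdas or method references rather than named Impl classes.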
I'm working with a team on a new Java API for one of our internal projects. We probably won't be able to take the time to stop and hash out all the details of the Java interfaces and get them 100% perfect at the beginning.
We have some core features that have to be there up front, and others that are likely to be added later over time but aren't important now, and taking the time to design those features now is a luxury we don't have. Especially since we don't have enough information yet to get all the design details right.
The Java approach to APIs is that once you publish an interface, it's effectively immutable and you should never change it.
Is there a way to plan for API evolution over time? I've read this question and I suppose we could do this:
// first release
interface IDoSomething
{
public void hop();
public void skip();
public void jump();
}
// later
interface IDoSomething2 extends IDoSomething
{
public void waxFloor(Floor floor);
public void topDessert(Dessert dessert);
}
// later still
interface IDoSomething3 extends IDoSomething2
{
public void slice(Sliceable object);
public void dice(Diceable object);
}
and then upgrade our classes from supporting IDoSomething to IDoSomething2 and then IDoSomething3, but this seems to have a code smell issue.
Then I guess there's the Guava way of marking interfaces with #Beta so applications can use these at risk, prior to being frozen, but I don't know if that's right either.
If you want flexible code, generics can help.
For example, instead of:
interface FloorWaxer
{
public void waxFloor(Floor floor);
}
You can have:
interface Waxer<T>
{
void wax(T t);
}
class FloorWaxer implements Waxer<Floor>
{
    public void wax(Floor floor)
    {
        // wax the floor
    }
}
Also, Java 8 brought default methods in interfaces, which allow you to add methods to already existing interfaces. With this in mind, you should make your interfaces as flexible as possible from the start; instead of:
interface Washer<T>
{
void wash(T what);
}
and then later add
interface Washer<T>
{
void wash(T what);
void wash(T what, WashSubstance washSubstance);
}
and later add
interface Washer<T>
{
void wash(T what);
void wash(T what, WashSubstance washSubstance);
void wash(T what, WashSubstance washSubstance, Detergent detergent);
}
you can add from the beginning
@FunctionalInterface
interface Washer<T>
{
    void wash(T what, WashSubstance washSubstance, Detergent detergent);

    default void wash(T what, WashSubstance washSubstance)
    {
        wash(what, washSubstance, Detergent.DEFAULT_DETERGENT);
    }

    default void wash(T what, Detergent detergent)
    {
        wash(what, WashSubstance.DEFAULT_WASH_SUBSTANCE, detergent);
    }

    default void wash(T what)
    {
        wash(what, WashSubstance.DEFAULT_WASH_SUBSTANCE, Detergent.DEFAULT_DETERGENT);
    }
}
Also, try to make your interfaces functional (only one abstract method) so you can benefit from lambdas sugaring.
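To illustrate the lambda benefit: with a single abstract method, callers can supply an implementation inline instead of writing a named class. A runnable sketch with a trimmed-down Washer and a stub Floor type (the clean flag and WasherDemo are mine, for demonstration only):

```java
interface Washer<T> {
    void wash(T what);
}

// Stub type standing in for the example's Floor.
class Floor {
    boolean clean = false;
}

class WasherDemo {
    static Floor washOne() {
        // A lambda is enough to implement the functional interface.
        Washer<Floor> floorWasher = floor -> floor.clean = true;
        Floor f = new Floor();
        floorWasher.wash(f);
        return f;
    }
}
```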
You could take the approach that Tapestry 5 has taken, which it dubs "Adaptive API" (more info here).
Instead of locked-down interfaces, Tapestry uses annotations and POJOs. I'm not entirely sure of your circumstances, but this may or may not be a good fit. Note that Tapestry uses ASM (via Plastic) under the hood, so there is no runtime reflection needed to achieve this.
Eg:
public class SomePojo {
    @Slice
    public void slice(Sliceable object) {
        ...
    }

    @Dice
    public void dice(Diceable object) {
        ...
    }
}

public class SomeOtherPojo {
    @Slice
    public void slice(Sliceable object) {
        ...
    }

    @Hop
    public void hop(Hoppable object) {
        ...
    }
}
You could use a new package name for the new version of API - this would allow old and new API live side-by-side and API users can convert their components to the new API one at a time. You can provide some adaptors to help them with heavy-lifting on boundaries where objects get passed across boundaries between classes using new and old API.
The other option is quite harsh but could work for internal project - just change what you need and make users to adapt.
If you are just adding, providing default implementation (in an abstract class) of the new methods can make the process smoother. Of course this is not always applicable.
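The "default implementation in an abstract class" idea above can be sketched as follows, borrowing IDoSomething from the question; DoSomethingBase and HopOnly are hypothetical names of mine:

```java
interface IDoSomething {
    void hop();
    void skip();
    void jump();
}

// New interface methods get no-op defaults here, so existing
// subclasses keep compiling when the API grows.
abstract class DoSomethingBase implements IDoSomething {
    public void hop() { }
    public void skip() { }
    public void jump() { }
}

// A client that only cares about one method.
class HopOnly extends DoSomethingBase {
    int hops = 0;

    @Override
    public void hop() {
        hops++;
    }
    // skip() and jump() are inherited as no-ops
}
```

When a fourth method is added to the interface, only DoSomethingBase needs a new no-op; HopOnly is untouched.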
Signalling the change by bumping the major version number, and providing detailed documentation about how to upgrade the code base to the new version of the API, is a good idea in both cases.
I would suggest taking a look at these Structural Patterns. I think the Decorator pattern (also known as Wrapper) can fill your needs. See the example in the linked Wikipedia article.
Here's the way I approach this situation.
First, I'd use abstract classes so that you can plug in default implementations later. With the advent of inner and nested classes in JDK 1.1, interfaces add little; almost all use cases can be comfortably converted to use pure abstract classes (often as nested classes).
First release
abstract class DoSomething {
    public abstract void hop();
    public abstract void skip();
    public abstract void jump();
}
Second release
abstract class DoSomething {
    public abstract void hop();
    public abstract void skip();
    public abstract void jump();

    abstract static class VersionTwo {
        public abstract void waxFloor(Floor floor);
        public abstract void topDessert(Dessert dessert);
    }

    public VersionTwo getVersionTwo() {
        // make it easy for callers to determine whether new methods are supported
        // they can do if (doSomething.getVersionTwo() == null)
        return null;
        // OR throw new UnsupportedOperationException(), depending on specifics
        // OR return a default implementation, depending on specifics
    }

    // if you like the interface you proposed in the question, you can do this:
    public final void waxFloor(Floor floor) {
        getVersionTwo().waxFloor(floor);
    }

    public final void topDessert(Dessert dessert) {
        getVersionTwo().topDessert(dessert);
    }
}
Third release would be similar to second, so I'll omit it for brevity.
If you haven't designed the final API, don't use the name you want for it!
Call it something like V1RC1, V1RC2, ..., and when it is done, you have V1.
People will see in their code that they are still using an RC version and can remove it to get the real thing when it is ready.
Rostistlav is basically saying the same thing, but he calls them all real API versions, so it would be V1, V2, V3, .... I think that's up to your taste.
You could also try an event-driven approach and add new event types as your API changes without affecting backward compatibility.
eg:
// Java enums cannot be generic, so use the typesafe-enum idiom instead:
public final class EventType<T> {
    public static final EventType<Sliceable> SLICE = new EventType<>(Sliceable.class);
    public static final EventType<Diceable> DICE = new EventType<>(Diceable.class);
    public static final EventType<Hoppable> HOP = new EventType<>(Hoppable.class);

    private final Class<T> contextType;

    private EventType(Class<T> contextType) {
        this.contextType = contextType;
    }

    public Class<T> getContextType() {
        return this.contextType;
    }
}
public interface EventHandler<T> {
void handleEvent(T context);
}
public interface EventHub {
<T> void subscribe(EventType<T> eventType, EventHandler<T> handler);
<T> void publish(EventType<T> eventType, T context);
}
public static void main(String[] args) {
    EventHub eventHub = new EventHubImpl(); // TODO: Implement
    eventHub.subscribe(EventType.SLICE, new EventHandler<Sliceable>() { ... });
    eventHub.subscribe(EventType.DICE, new EventHandler<Diceable>() { ... });
    eventHub.subscribe(EventType.HOP, new EventHandler<Hoppable>() { ... });

    Hoppable hoppable = new HoppableImpl("foo", "bar", "baz");
    eventHub.publish(EventType.HOP, hoppable); // fires the Hoppable handler
}
I am looking for a good way to have different implementations of the same method which is defined in an interface but with different parameter types. Would this be possible?
In order to clarify this, suppose I have an interface Database and two implementing classes, Database1 and Database2. Database has a method createNode(...) and another one, modifyNode(...). The problem is that for Database1 the return type of createNode should be a long (the identifier). For Database2, however, it would be an object specific to the technology (in this case OrientDB, but this doesn't matter too much; it is simply something that extends Object, of course). And both createNode(...) return types should be used as one of modifyNode(...)'s parameters.
What I was thinking to do is:
public interface Database {
    public Object createNode(...);
    public void modifyNode(Object id, ...);
    ...
}
public class Database1 implements Database {
    @Override
    public Object createNode(...) {
        ...
        long result = // obtain id of created node
        return Long.valueOf(result);
    }

    @Override
    public void modifyNode(Object id, ...) {
        ...
        // use id as ((Long) id).longValue();
    }
}
public class Database2 implements Database {
    @Override
    public Object createNode(...) {
        ...
        SomeObject result = // obtain id of created node
        return result;
    }

    @Override
    public void modifyNode(Object id, ...) {
        ...
        // use id as (SomeObject) id
    }
}
I wanted to know if there is a better way to do this, especially to avoid Long -> long and long -> Long conversions. I saw many similar questions here on StackOverflow, but none of them were what I was looking for. Thank you very much in advance.
Here's an example using generics.
Database
public interface Database<T> {
public T createNode(...);
public void modifyNode(T id, ...);
...
}
Database1
class Database1 implements Database<Long> {
    @Override
    public Long createNode(...) {
        ...
        long result = // obtain id of created node
        return result;
    }

    @Override
    public void modifyNode(Long id, ...) {
        ...
        // use id
    }
}
Database2
public class Database2 implements Database<SomeObject> {
    @Override
    public SomeObject createNode(...) {
        ...
        SomeObject result = // obtain id of created node
        return result;
    }

    @Override
    public void modifyNode(SomeObject id, ...) {
        ...
        // use id as SomeObject directly
    }
}
Btw, don't worry about autoboxing. You are using JDK >= 5, since there are @Override annotations.
I think you want Generic Methods.
Generic methods are methods that introduce their own type parameters.
This is similar to declaring a generic type, but the type parameter's
scope is limited to the method where it is declared. Static and
non-static generic methods are allowed, as well as generic class
constructors.
The syntax for a generic method includes a type parameter, inside
angle brackets, and appears before the method's return type. For
static generic methods, the type parameter section must appear before
the method's return type.
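A short illustration of the quoted rule: the type parameter sits between the modifiers and the return type, and is visible only inside the method. The Lists class and firstOrDefault method are hypothetical examples of mine:

```java
import java.util.List;

class Lists {
    // <T> is declared on the method itself, scoped to this method only.
    static <T> T firstOrDefault(List<T> list, T fallback) {
        return list.isEmpty() ? fallback : list.get(0);
    }
}
```

The compiler infers T per call site, so the same method works for `List<String>`, `List<Integer>`, and so on.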
I have been looking over a couple of classes I have in an Android project, and I realized that I have been mixing logic with data. Having realized how bad this can be for the readability and testability of my project, I decided to do some refactoring in order to abstract away all service logic into separate service modules. However, since I have been relying on Java's polymorphism, I got lost and need some guidance.
Suppose I have this "to-be-changed" layout for a super data class, and two sub-classes:
public class DataItem {
    /* some variables */

    public void saveToDB(/* Some Arguments */) {
        /* do some stuff */
    }

    public void render() {
        /* render the class */
    }
}

public class ChildDataItemA extends DataItem {
    @Override
    public void saveToDB(/* Some Arguments */) {
        super.saveToDB();
        /* more specific logic to ChildDataItemA */
    }

    @Override
    public void render() {
        /* render logic for ChildDataItemA */
    }
}

public class ChildDataItemB extends DataItem {
    @Override
    public void saveToDB(/* Some Arguments */) {
        super.saveToDB();
        /* more specific logic to ChildDataItemB */
    }

    @Override
    public void render() {
        /* render logic for ChildDataItemB */
    }
}
Now, I thought about moving the saveToDB() and render() methods to a service class. However, sometimes I need to call these methods on an instance whose compile-time type is DataItem, without knowing its runtime type. For instance, I might want to make the following call:
List<DataItem> dataList;
for (DataItem item: dataList) {
item.saveToDB();
item.render();
}
Additionally, I thought of doing the following:
public class ChildDataItemB extends DataItem {
    @Override
    public void saveToDB(/* Some Arguments */) {
        super.saveToDB();
        /* more specific logic to ChildDataItemB */
        Service.saveToDBB();
    }

    @Override
    public void render() {
        /* render logic for ChildDataItemB */
        Service.renderB();
    }
}
Where I still keep 'dummy' methods in each subclass that would call an appropriate service method. However, I do not think that this really achieves the separation I want since data classes will still know about services (bad!).
Any ideas on how to solve this?
Edit: Note that render() and saveToDB() are just generic examples of what these methods can be, so the problem is not really about choosing an ORM or SQL related techniques.
Visitor pattern to the rescue. Create a visitor interface and have each service implement this interface:
public interface DataItemVisitor {
// one method for each subtype you want to handle
void process(ChildDataItemA item);
void process(ChildDataItemB item);
}
public class PersistenceService implements DataItemVisitor { ... }
public class RenderService implements DataItemVisitor { ... }
Then have each DataItem implement an accept method:
public abstract class DataItem {
    public abstract void accept(DataItemVisitor visitor);
}

public class ChildDataItemA extends DataItem {
    @Override
    public void accept(DataItemVisitor visitor) {
        visitor.process(this);
    }
}

public class ChildDataItemB extends DataItem {
    @Override
    public void accept(DataItemVisitor visitor) {
        visitor.process(this);
    }
}
Note that all accept implementations look the same but this refers to the correct type in each subclass. Now you can add new services without having to change the DataItem classes.
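Putting the pieces together, here is a minimal runnable sketch of the double dispatch described above; the RenderService body (recording which subtype it visited) is a toy stand-in for real rendering logic:

```java
interface DataItemVisitor {
    void process(ChildDataItemA item);
    void process(ChildDataItemB item);
}

abstract class DataItem {
    public abstract void accept(DataItemVisitor visitor);
}

class ChildDataItemA extends DataItem {
    @Override
    public void accept(DataItemVisitor visitor) {
        visitor.process(this); // static type of this is ChildDataItemA
    }
}

class ChildDataItemB extends DataItem {
    @Override
    public void accept(DataItemVisitor visitor) {
        visitor.process(this);
    }
}

// Toy service: records which subtypes it visited, in order.
class RenderService implements DataItemVisitor {
    final StringBuilder rendered = new StringBuilder();

    public void process(ChildDataItemA item) { rendered.append("A"); }

    public void process(ChildDataItemB item) { rendered.append("B"); }
}
```

Iterating a List<DataItem> and calling item.accept(service) dispatches to the right overload without any instanceof checks.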
So you want to do:
List<DataItem> dataList;
for (DataItem item: dataList) {
service.saveToDB(item);
service.render(item);
}
For this you need to set up a system for your service to know more details about your DataItem subclass.
ORM's and serializers usually solve this via a metadata system, e.g. by finding an xml file with name matching the subclass, containing the properties to save or serialize.
ChildDataItemA.xml
<metaData>
<column name="..." property="..."/>
</metaData>
You could get the same result via reflection and annotations.
In your case, an application of the Bridge pattern could also work:
class DataItem {
    public void describeTo(MetaData metaData) {
        ...
    }
}

class Service {
    public void saveToDB(DataItem item) {
        MetaData metaData = new MetaData();
        item.describeTo(metaData);
        ...
    }
}
Your metadata could be decoupled from saving or rendering, so you can use the same metadata for both.
I would clean the "data" classes of render and saveToDB methods.
Instead, I would create a hierarchy of wrappers for DataItem (it does not have to mimic exactly the DataItem hierarchy). These wrappers will be the ones implementing those methods.
Additionally, I suggest that (if you can), you move to some ORM (Object-Relational Mapping) like Hibernate or JPA to get rid of the saveToDB method.
First of all, the DataItem class should be clean, with only getters and setters and no logic at all, just like a POJO; moreover, your DataItem should probably be abstract.
Now, for the logic: like others suggested, I would use some ORM framework for the saveToDB part, but you said that doesn't help you because it's an Android project and you have other methods like this as well.
So what I would do is create an interface, IDataItemDAO, with the following logic:
public interface IDataItemDAO<T extends DataItem> {
    public void saveToDB(T data /*, Some Arguments */);
    ... other methods that you need ...
}
I would create an abstract DAO for DataItem and put in it all the code shared by all DataItems:
public abstract class DataItemDAO<T extends DataItem> implements IDataItemDAO<T> {
    @Override
    public void saveToDB(T data /*, Some Arguments */) {
        ...
    }
}
Then I would create a DAO for each DataItem class that you have:
public class ChildDataItemADAO extends DataItemDAO<ChildDataItemA> {
    @Override
    public void saveToDB(ChildDataItemA data /*, Some Arguments */) {
        super.saveToDB(data /*, ... */);
        // other specific saving
    }
}
The other part is how to use the correct DAO for the correct instance. For this I would create a class that brings me the correct DAO for a given instance; it is a very simple method of if-else statements (or you can do it dynamically with a map from class to DAO):
public DataItemDAO getDao(DataItem item) {
    if (item instanceof ChildDataItemA) {
        // cache the instance, of course
        return new ChildDataItemADAO();
    }
    ...
}
so you should use it like this:
List<DataItem> dataList;
for (DataItem item: dataList) {
factory.getDao(item).saveToDB(item);
}
If you want to separate logic from data, you may try the following approach:
Create your data classes DataItem, ChildDataItemA, ChildDataItemB without the methods operating on the data.
Create an interface for some operations on your data classes, something like:
public interface OperationGroup1OnDataItem {
    public void saveToDB(DataItem dataItem /* plus other params */);
    public void render(DataItem dataItem /* plus other params */);
    ......
}
Create a factory implementing an OperationGroup provider:
public class OperationFactoryProvider {
    public static OperationGroup1OnDataItem getOperationGroup1For(Class<?> clazz) {
        ....
    }
}
Use it in your code:
List<DataItem> dataList;
for (DataItem item : dataList) {
    OperationGroup1OnDataItem provider = OperationFactoryProvider.getOperationGroup1For(item.getClass());
    provider.saveToDB(item);
    provider.render(item);
}
You can choose to implement the factory with a simple static map, where you put the class (or the class full name) as the key and an object implementing the interface as the value; something like:
Map<String,OperationGroup1OnDataItem> factoryMap= new HashMap<String,OperationGroup1OnDataItem>();
factoryMap.put(DataItem.class.getName(),new SomeClassThatImplementsOperationGroup1OnDataItemForDataItem());
factoryMap.put(ChildDataItemA.class.getName(),new SomeClassThatImplementsOperationGroup1OnDataItemForChildDataItemA());
The implementation of getOperationGroup1For is then:
return factoryMap.get(item.getClass().getName());
This is one example of separating logic from data. If you want to separate logic from data, the logic methods must be extracted from your data classes; otherwise there is no separation. So I think every solution must start with removing the logic methods.