What is the difference between the interfaces:
org.apache.commons.collections4.Transformer
java.util.function.Function
Aren't they doing a similar action?
T --> doing stuff --> R
Assume I've got a User object, and I want one of these interfaces to return the login name as a String from that object. I could use both?
@Override
public String apply(User user) {
    return user.getLoginname();
}
or
@Override
public String transform(User user) {
    return user.getLoginname();
}
These two interfaces serve an equivalent function -- they take an object of some type as input, and return an object of some, possibly different, type. Transformer has a somewhat narrower documented scope, in the sense that "compute a transformation of" is, to me, a particular case of "compute a function of", but that's a weak distinction.
The most important difference between these interfaces is how instances can be used with other objects. They are not interchangeable. Thus, if you want to use a TransformedList then it has to be defined in terms of a Transformer, not a Function. If you want to obtain a flatMap from a Stream then you need a Function, not a Transformer.
Because they are basically just different names for the same idea, however, it is trivial to write an adapter to enable you to use one type in a context that requires the other.
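For instance, since both are single-method interfaces, a method reference adapts one to the other in one line. A minimal sketch (the Adapters holder class is just illustrative):

import java.util.function.Function;
import org.apache.commons.collections4.Transformer;

public final class Adapters {

    // Use a Function where a Commons Collections API wants a Transformer.
    public static <T, R> Transformer<T, R> toTransformer(Function<T, R> f) {
        return f::apply;
    }

    // Use a Transformer where a JDK API (e.g. Stream.map) wants a Function.
    public static <T, R> Function<T, R> toFunction(Transformer<T, R> t) {
        return t::transform;
    }
}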
What is the difference?
The name.
Aren't they doing a similar action?
Yes.
I could use both?
Sure, but why rely on Commons Collections if you can use the built-in type?
Method overloading supports polymorphism because it is one way that Java implements the one-interface, multiple-methods paradigm.
To understand how, consider the following. In languages that do not support method overloading, each method must be given a unique name. However, frequently I will want to implement essentially the same method for different types of data. Consider the absolute value function. In languages that do not support overloading, there are usually three or more versions of this function, each with a slightly different name. For instance, in C, the function abs() returns the absolute value of an integer, labs() returns the absolute value of a long integer, and fabs() returns the absolute value of a floating-point value. Since C does not support overloading, each function has to have its own name, even though all three functions do essentially the same thing. This makes the situation conceptually more complex than it actually is. Although the underlying concept of each function is the same, I have three names to remember. This situation doesn't occur in Java, because each absolute value method can use the same name. Indeed, Java's standard class library includes an absolute value method, called abs(). This method is overloaded by Java's Math class to handle all numeric types, and Java determines which version of abs() to call based upon the type of the argument.
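As a quick illustration, here is a hypothetical AbsDemo class showing how the compiler selects an overload by argument type (Java's real Math.abs works the same way):

public class AbsDemo {

    static int abs(int x) { return x < 0 ? -x : x; }
    static long abs(long x) { return x < 0 ? -x : x; }
    static double abs(double x) { return x < 0 ? -x : x; }

    public static void main(String[] args) {
        System.out.println(abs(-5));   // calls abs(int)
        System.out.println(abs(-5L));  // calls abs(long)
        System.out.println(abs(-5.0)); // calls abs(double)
    }
}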
There is no rule stating that overloaded methods must relate to one another. However, from a stylistic point of view, method overloading implies a relationship. Thus, while I can use the same name to overload unrelated methods, I think I should not. For example, I could have used the name sqr to create methods that return the square of an integer and the square root of a floating-point value. But these two operations are fundamentally different. Applying method overloading in this manner defeats its original purpose.
So in practice, should I only overload closely related operations? And any other reason to use overloaded methods besides this?
As far as I can see, method overloading is typically used for supplying sensible default arguments to a method, in order to simplify the API. It has advantages when you or other users may not need all the flexibility your program/library offers.
Another valid use is for the primitive data types in Java, as you pointed out yourself.
void doThis() {
    doThis(true);
}

void doThis(boolean firstArg) {
    doThis(firstArg, 1);
}

void doThis(int secondArg) {
    doThis(true, secondArg);
}

// actual logic using several parameters
void doThis(boolean firstArg, int secondArg) {
    if (firstArg) {
        System.out.println(secondArg + 1);
    } else {
        System.out.println(secondArg - 1);
    }
}
Of course this is a nonsense example, but the pattern becomes more apparent when, for example, your method requires a PrintStream. You can supply System.out as the default in an overloaded method, but your logic should live in a method where any PrintStream can be supplied. Using the same method name for radically different uses or outputs is a big no-no; it would most likely infuriate anyone working with your code (including Future You: http://xkcd.com/1421/ ).
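For example, the PrintStream case might look like this (Reporter and report are hypothetical names; the overload only supplies the default):

import java.io.PrintStream;

public class Reporter {

    // Convenience overload: defaults to System.out.
    public void report(String message) {
        report(message, System.out);
    }

    // The real logic: accepts any PrintStream (file, socket, test buffer, ...).
    public void report(String message, PrintStream out) {
        out.println("[report] " + message);
    }
}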
I'd like to use a uniqueness-enforcing (set-like) Java collection that accepts, at initialization, a strategy for determining whether member objects are "equal".
The reason I need this is that the equals method of the class I'm adding to this collection is already implemented to satisfy other (more appropriate) functionality. In this specific case, the criterion for uniqueness needs to check only one variable of the class, as opposed to the several variables checked in the equals method. I would prefer to avoid decorating the objects, as I am gathering them from disparate libraries and it would be costly to loop through them for decoration (and it might muddy my code).
I realize this would not be a Set, as it would break Java's contract for Set, but I feel this problem must have been encountered before. I figured Guava or Apache Collections would provide something, but no luck, it seems. Does anybody know of a library that provides this kind of functionality? Should I be entertaining a different solution altogether?
Can you use a custom Comparator with a TreeSet or TreeMap? Or use a Map where the key captures your criteria? A HashSet is just a wrapper around a HashMap, so using a map instead shouldn't be much more expensive.
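For example, a TreeSet decides uniqueness solely via its Comparator, ignoring equals() entirely. A minimal sketch with a stand-in User class:

import java.util.Comparator;
import java.util.Set;
import java.util.TreeSet;

public class LoginSetDemo {

    // Minimal stand-in for the real domain class.
    static class User {
        private final String loginname;
        User(String loginname) { this.loginname = loginname; }
        String getLoginname() { return loginname; }
    }

    public static void main(String[] args) {
        // Uniqueness is decided by login name only, regardless of User.equals().
        Set<User> byLogin = new TreeSet<>(Comparator.comparing(User::getLoginname));
        byLogin.add(new User("alice"));
        byLogin.add(new User("alice")); // duplicate login name, not added
        System.out.println(byLogin.size()); // prints 1
    }
}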
That is not really practical. Consider for instance two instances of a class C which you consider equivalent.
Now you do:
set.add(c1);
set.remove(c2);
Should the set be empty after that? What about .retainAll(), .removeAll()?
Your best bet here is to create your own class which wraps over class C, delegates whatever needs to be delegated, and have this wrapper class implement .hashCode() and .equals() (and possibly Comparable of itself too). With such a class, you can just go on and use classical sets and maps.
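A minimal sketch of that wrapper (C and its key field are stand-ins for the real class):

import java.util.Objects;

// Hypothetical domain class whose equals() already serves other purposes.
class C {
    String key; // the one field that matters for this particular collection
}

// Wrapper that redefines equality in terms of the key field only.
final class CByKey {
    final C c;

    CByKey(C c) { this.c = c; }

    @Override
    public boolean equals(Object o) {
        return o instanceof CByKey && Objects.equals(c.key, ((CByKey) o).c.key);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(c.key);
    }
}

A HashSet<CByKey> then behaves exactly like the desired "set of C with custom equality".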
Guava has an Equivalence, which lets you define whether two objects are equivalent.
It also has Equivalence.Wrapper which wraps arbitrary objects and delegates equals() and hashCode() to the implementations in the equivalence, rather than their own.
So you could do something like this:
import java.util.HashSet;
import java.util.Set;

import com.google.common.base.Equivalence;

public class MySet<T> implements Set<T> {

    private final Equivalence<T> equivalence;
    private final Set<Equivalence.Wrapper<T>> delegate = new HashSet<Equivalence.Wrapper<T>>();

    public MySet(Equivalence<T> equivalence) {
        this.equivalence = equivalence;
    }

    @Override
    public boolean add(T t) {
        return delegate.add(equivalence.wrap(t));
    }

    // other Set methods delegate similarly, wrapping and unwrapping as needed
}
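For example, an Equivalence derived from a single property (reusing the hypothetical User class from the first question; this assumes a Guava version where a method reference satisfies com.google.common.base.Function):

// Two users are equivalent when their login names match.
Equivalence<User> byLogin = Equivalence.equals().onResultOf(User::getLoginname);
Set<User> users = new MySet<>(byLogin);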
I have an object conversion class that converts my domain-level objects to DTOs.
I effectively have the following structure:
class RuleGroupDTO {
    List<RuleDTO> ruleDTOs;
    // other members
}

EvaluationRuleDTO and AssignmentRuleDTO both extend RuleDTO.
My API for conversion is as follows:
public RuleGroupDTO convert(RuleGroup ruleGroup);
So when I pass my domain RuleGroup into convert, the class looks at a code associated with the RuleGroup and then constructs either EvaluationRuleDTOs or AssignmentRuleDTOs, encapsulated within the RuleGroupDTO.
When I retrieve my RuleGroupDTO, I know that it will contain a List<RuleDTO> whose elements are either all EvaluationRuleDTOs or all AssignmentRuleDTOs.
However, to get the correct class version I need to loop through the List<RuleDTO> and cast each element to either EvaluationRuleDTO or AssignmentRuleDTO.
This seems messy, and I am thinking that I can leverage some generic concepts to avoid performing this loop + cast.
What would be a possible approach to changing my API or object structure to take advantage of generics here?
If you have a fixed set of types and you want them kept separate, then return a POJO with the various types separated into different collections, e.g.:
public class DTOResult {
    public List<EvaluationRuleDTO> evalDTOs;
    public List<AssignmentRuleDTO> assignDTOs;
}
Use List<RuleDTO> to avoid casting, and for god's sake don't write a loop. Make RuleDTO implement a visitor pattern that allows any consumer to handle either kind of subclass in a type-safe manner.
Java sucks at variant types.
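A minimal sketch of that visitor approach (RuleDTOVisitor and the method names are invented; the DTO classes come from the question):

interface RuleDTOVisitor {
    void visit(EvaluationRuleDTO rule);
    void visit(AssignmentRuleDTO rule);
}

abstract class RuleDTO {
    abstract void accept(RuleDTOVisitor visitor);
}

class EvaluationRuleDTO extends RuleDTO {
    @Override
    void accept(RuleDTOVisitor visitor) { visitor.visit(this); } // overload resolution picks visit(EvaluationRuleDTO)
}

class AssignmentRuleDTO extends RuleDTO {
    @Override
    void accept(RuleDTOVisitor visitor) { visitor.visit(this); }
}

A consumer then implements RuleDTOVisitor once and never casts; each subclass routes itself to the right visit method.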
I have a situation where I have a lot of model classes (~1000) which implement any number of 5 interfaces. So I have classes which implement one, and others which implement four or five.
This means I can have any permutation of those five interfaces. In the classical model, I would have to implement 2^5 - 5 - 1 = 26 "meta interfaces" which "join" two or more of the interfaces in a bundle. Often, this is not a problem because IB usually extends IA, etc., but in my case the five interfaces are orthogonal/independent.
In my framework code, I have methods which need instances that have any number of these interfaces implemented. So let's assume that we have the class X and the interfaces IA, IB, IC, ID and IE. X implements IA, ID and IE.
The situation gets worse because some of these interfaces have formal type parameters.
I now have three options:
I could define an interface IADE (or rather IPersistable_MasterSlaveCapable_XmlIdentifierProvider; underscores just for your reading pleasure)
I could define a generic type as <T extends IPersistable & IMasterSlaveCapable & IXmlIdentifierProvider> which would give me a handy way to mix & match interfaces as I need them.
I could use code like this: IA a = ...; ID d = (ID) a; IE e = (IE) a; and then use the local variable with the correct type to call methods, even though all three work on the same instance. Or use a cast in every second method call.
The first solution means that I get a lot of empty interfaces with very unreadable names.
The second uses a kind of "ad-hoc" typing. And Oracle's javac sometimes stumbles over them while Eclipse gets it right.
The last solution uses casts. Nuff said.
Questions:
Is there a better solution for mixing any number of interfaces?
Are there any reasons to avoid the temporary types which solution #2 offers me (except for shortcomings in Oracle's javac)?
Note: I'm aware that writing code which doesn't compile with Oracle's javac is a risk. We know that we can handle this risk.
[Edit] There seems to be some confusion about what I'm trying to achieve here. My model instances can have any of these traits:
They can be "master slave capable" (think cloning)
They can have an XML identifier
They might support tree operations (parent/child)
They might support revisions
etc. (yes, the model is even more complex than that)
Now I have support code which operates on trees. An extension of trees is trees with revisions; but I also have revisions without trees.
When I'm in the code to add a child in the revision tree manager, I know that each instance must implement ITree and IRevisionable, but there is no common interface for both because these are completely independent concerns.
But in the implementation, I need to call methods on the nodes of the tree:
public void addChild( T parent, T child ) {
    T newRev = parent.createNewRevision();
    newRev.addChild( child );
    // ... possibly more method calls to other interfaces ...
}
If createNewRevision is in the interface IRevisionable and addChild is in the interface ITree, what are my options to define T?
Note: Assume that I have several other interfaces which work in a similar way: There are many places where they are independent but some code needs to see a mix of them. IRevisionableTree is not a solution but another problem.
I could cast the type for each call, but that seems clumsy. Creating all permutations of interfaces would be tedious, and there seems to be no reasonable pattern for compressing the huge interface names. Generics offer a nice way out:
public <T extends IRevisionable & ITree> void addChild( T parent, T child ) { ... }
This doesn't always work with Oracle's javac but it seems compact and useful. Any other options/comments?
Loosely coupled capabilities might be interesting. An example here.
It is an entirely different approach: decoupling things instead of typing them.
Basically, the interfaces are hidden, each implemented by a delegating field.
IA ia = x.lookupCapability(IA.class);
if (ia != null) {
    ia.a();
}
It fits here: with many interfaces, the wish to decouple rises, and you can more easily combine cases of interdependent interfaces (if (ia != null && ib != null) ...).
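A sketch of how such a lookup might be backed (ModelObject and registerCapability are invented names):

import java.util.HashMap;
import java.util.Map;

public class ModelObject {

    private final Map<Class<?>, Object> capabilities = new HashMap<Class<?>, Object>();

    // Subclasses register the interfaces they actually support.
    protected <T> void registerCapability(Class<T> type, T impl) {
        capabilities.put(type, impl);
    }

    // Returns null when the capability is not supported.
    public <T> T lookupCapability(Class<T> type) {
        return type.cast(capabilities.get(type));
    }
}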
If you have a method (semicode)
void doSomething(IA & ID & IE thing);
then my main concern is: couldn't doSomething be better tailored? Might it be better to split up the functionality? Or are the interfaces themselves badly tailored?
I have stumbled over similar things several times, and each time it proved better to take a big step backward and rethink the complete partitioning of the logic - not only because of the issues you mentioned, but also because of other concerns.
Since you formulated your question very abstractly (i.e. without a sensible example), I cannot tell you whether that's advisable in your case as well.
I would avoid all "artificial" interfaces/types that attempt to represent combinations. It's just bad design... what happens if you add 5 more interfaces? The number of combinations explodes.
It seems you want to know if some instance implements some interface(s). Reasonable options are:
use instanceof - there is no shame
use reflection to discover the interfaces via object.getClass().getInterfaces() - you may be able to write some general code to process stuff
use reflection to discover the methods via object.getClass().getMethods() and just invoke those that match a known list of methods of your interfaces (this approach means you don't have to care what it implements - sounds simple and therefore sounds like a good idea)
You've given us no context as to exactly why you want to know, so it's hard to say what the "best" approach is.
Edited
OK. Since your extra info was added, it's starting to make sense. The best approach here is to use a callback: instead of passing in a parent object, pass in an interface that accepts a "child".
It's a simplistic version of the visitor pattern. Your calling code knows what it is calling with and how it can handle a child, but the code that navigates around and/or decides to add a child doesn't have context of the caller.
Your code would look something like this (caveat: May not compile; I just typed it in):
public interface Parent<T> {
    void accept(T child);
}

// Central code - I assume the parent is passed in somewhere earlier
public <T> void process(Parent<T> parent) {
    // some logic that decides to add a child
    T child = ...;
    addChild(parent, child);
}

public <T> void addChild(Parent<T> parent, T child) {
    parent.accept(child);
}

// Calling code
final IRevisionable revisionable = ...;
someServer.process(new Parent<IRevisionable>() {
    public void accept(IRevisionable child) {
        IRevisionable newRev = revisionable.createNewRevision();
        newRev.addChild(child); // assumes the new revision also supports tree operations
    }
});
You may have to juggle things around, but I hope you understand what I'm trying to say.
Actually, solution #1 is a good solution, but you should find better names.
What would you actually call a class that implements the IPersistable_MasterSlaveCapable_XmlIdentifierProvider interface? If you follow good naming conventions, it should have a meaningful name derived from a model entity; you can then give the interface the same name prefixed with I.
I don't find it a disadvantage to have many interfaces, because that way you can write mock implementations for testing purposes.
My situation is the opposite: I know that at a certain point in the code, foo must implement IA, ID and IE (otherwise, it couldn't have gotten that far). Now I need to call methods from all three interfaces. What type should foo get?
Are you able to bypass the problem entirely by passing (for example) three objects? So instead of:
doSomethingWithFoo(WhatGoesHere foo);
you do:
doSomethingWithFoo(IA fooAsIA, ID fooAsID, IE fooAsIE);
Or, you could create a proxy that implements all interfaces, but allows you to disable certain interfaces (i.e. calling the 'wrong' interface causes an UnsupportedOperationException).
One final wild idea - it might be possible to create Dynamic Proxies for the appropriate interfaces, that delegate to your actual object.
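A sketch of that dynamic-proxy idea (Proxies and proxyFor are invented names; IA, ID and IE are the interfaces from the question):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public final class Proxies {

    // Builds a proxy exposing IA, ID and IE that forwards every call to target.
    // Assumes target's class actually has the matching methods at runtime.
    public static Object proxyFor(final Object target) {
        InvocationHandler handler = (proxy, method, args) -> method.invoke(target, args);
        return Proxy.newProxyInstance(
                target.getClass().getClassLoader(),
                new Class<?>[] { IA.class, ID.class, IE.class },
                handler);
    }
}

The result can then be cast to any of the three interfaces.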
My current IVR app uses a wrapper class with several methods to call a web service and then parse its results. Each class has a single "invoke" method which calls the web service, and then calls subsequent submethods to break up the parsing into logical chunks.
Whenever a new input argument is needed in one or more of the submethods, the previous developer would add it as an argument to invoke, and then as an argument to the submethods.
Is this the proper way to do this, or would it be better to set a field on the class, and then reference that whenever necessary?
Instead of:
invoke(oldField1, oldField2, newField1)
submethod1(results, oldField1, oldField2, newField1)
submethod2(results, oldField1, oldField2, newField1)
Should it be:
invoke(oldField1, oldField2, newField1) {
    this.oldField1 = oldField1;
    this.oldField2 = oldField2;
    this.newField1 = newField1;
}
submethod1(results)
submethod2(results)
Or even:
new(oldField1, oldField2, newField1) {
    this.oldField1 = oldField1;
    this.oldField2 = oldField2;
    this.newField1 = newField1;
}
invoke()
submethod1(results)
submethod2(results)
Thanks!
The first solution allows making the object stateless, and allows using a unique instance for all the invocations, even in parallel.
The third one allows making the object stateful but immutable. It could then be used for several invocations sharing the same set of fields, even in parallel.
Both of these solutions are acceptable. The less state an object has, the easier it is to use, particularly in a multi-threaded environment.
The less mutable an object is, the easier it is to use.
The second one makes it a stateful, mutable object which can't be used by several threads (without synchronization). It also looks less clean than the other two to me.
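As a sketch of the third option, with all names and types invented for illustration:

public final class WebServiceInvoker {

    // Immutable: all fields are final and set exactly once.
    private final String oldField1;
    private final String oldField2;
    private final String newField1;

    public WebServiceInvoker(String oldField1, String oldField2, String newField1) {
        this.oldField1 = oldField1;
        this.oldField2 = oldField2;
        this.newField1 = newField1;
    }

    public String invoke() {
        String results = callWebService();
        submethod1(results);
        submethod2(results);
        return results;
    }

    private String callWebService() { return oldField1 + oldField2 + newField1; } // placeholder
    private void submethod1(String results) { /* parse one logical chunk */ }
    private void submethod2(String results) { /* parse another logical chunk */ }
}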
My general rule is to avoid statefulness in a service-oriented class whenever possible. Although Java doesn't really support functional programming per se, the simplest and most scalable implementation is your first approach, which uses no member variables.
If your goal is to avoid frequent changes to method signatures, you could try to use a more generic field encapsulation:
public class Invoker {
    public static void invoke(ResultContainer result, List<String> parameters) {
        submethod1(result, parameters);
        submethod2(result, parameters);
    }
}
I would also recommend that you take a look at the Decorator design pattern for more ideas.
It depends on whether your argument is data or identifies a mode/switch.
I suggest one argument for the data structure and another argument containing an enum of the different operations.
Then, based on your enum type or mode of operation, you can choose a strategy for which class to execute.
To rein in this growing argument list, you could provide an interface and force the implementation to adhere to it.