Design a Java interface: a method with variable numbers of parameters

I just ran into a problem designing an interface whose methods may have variable numbers of input arguments.
public interface FoobarSerialization<T> {
Foobar serialize(T obj);
}
The problem is that the classes implementing this interface require different numbers of input arguments.
public class FoobarA implements FoobarSerialization<FoobarA> {
    @Override
    public Foobar serialize(FoobarA obj, int bar) {
        //...
    }
}
public class FoobarB implements FoobarSerialization<FoobarB> {
    @Override
    public Foobar serialize(FoobarB obj, Date date, String str) {
        //...
    }
}
Is there a good design or an idiomatic way to solve this problem? I know the method in the interface can be declared as:
Foobar serialize(T... obj);
But I'm not sure whether it is good practice to design an interface like this.
Any thoughts?
Update: My intention in using an interface came from the collection of classes that need to be serialized and deserialized for different purposes. They serve as components under the same domain, but their serialization methods are quite different, especially considering their dependencies on objects and services that share no common features or classes.
I guess the right question to ask here is: in terms of design, what's the best approach when there exists a set of classes which share the same behaviors (serialize, deserialize, doSomething, etc.) but have different input arguments?

Composition pattern to the rescue.
In your particular case I would create an interface which accepts just one parameter:
public interface Serializer<T> {
Foobar serialize(T object);
}
Now, if you need to serialize several fields, you just create an object which has all fields you need to serialize:
class FoobarBundle {
    String stringField;
    int intField;
    byte[] arrayField;
    /* ... */
}
And write a bunch of serializers: FoobarBundleSerializer, StringSerializer, IntegerSerializer, ByteArraySerializer. In the end, combine all the serializers in FoobarBundleSerializer like this:
class FoobarBundleSerializer implements Serializer<FoobarBundle> {
    StringSerializer stringSerializer;
    IntegerSerializer integerSerializer;
    ByteArraySerializer byteArraySerializer;

    /* constructor here */

    @Override
    public Foobar serialize(FoobarBundle bundle) {
        Foobar foobarString = stringSerializer.serialize(bundle.stringField);
        Foobar foobarInteger = integerSerializer.serialize(bundle.intField);
        Foobar foobarByteArray = byteArraySerializer.serialize(bundle.arrayField);
        return combineFoobarSomehow(foobarString, foobarInteger, foobarByteArray);
    }
}
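For illustration, a usage sketch might look like this (this assumes Foobar exists and that FoobarBundleSerializer has a constructor taking the three field serializers, which the /* constructor here */ placeholder above only hints at):

// Sketch only: constructor shape and no-arg field serializers are assumptions.
FoobarBundle bundle = new FoobarBundle();
bundle.stringField = "hello";
bundle.intField = 42;
bundle.arrayField = new byte[] {1, 2, 3};

Serializer<FoobarBundle> serializer = new FoobarBundleSerializer(
        new StringSerializer(), new IntegerSerializer(), new ByteArraySerializer());
Foobar result = serializer.serialize(bundle);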

Your mileage may vary, but confusing uses of methods with the same name (e.g. the same number but different types of arguments) should usually be avoided. Method overloading can help here, but it is considered less than desirable. If the list of parameters is manageable, you should name the methods differently to avoid ambiguity. See Item 26 in Effective Java, 2nd Edition.
Vararg methods are all right, but in Java the best practice is to specify at least one concrete argument followed by a variable number of arguments of the same type. This is perhaps not applicable in your case, since there is no vararg syntax that covers a method like public Foobar serialize(FoobarB obj, Date date, String str);. It might be acceptable to use a signature like (Object... objects), but this practice is not considered generally applicable.
Contrast this with a method like printf which can and should be able to output a variable number of arguments of any type (including primitives) to an output stream.
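As a hypothetical illustration of that guideline:

// One required argument followed by any number of further arguments of the same type.
Foobar serialize(T first, T... rest);

// Contrast with printf, whose job really is "anything, any number of times":
// PrintStream.printf(String format, Object... args)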


How do I avoid breaking the Liskov substitution principle with a class that implements multiple interfaces?

Given the following class:
class Example implements Interface1, Interface2 {
...
}
When I instantiate the class using Interface1:
Interface1 example = new Example();
...then I can call only the Interface1 methods, and not the Interface2 methods, unless I cast:
((Interface2) example).someInterface2Method();
Of course, to make this runtime safe, I should also wrap this with an instanceof check:
if (example instanceof Interface2) {
((Interface2) example).someInterface2Method();
}
I'm aware that I could have a wrapper interface that extends both interfaces, but then I could end up with multiple interfaces to cater for all the possible permutations of interfaces that can be implemented by the same class. The Interfaces in question do not naturally extend one another so inheritance also seems wrong.
Does the instanceof/cast approach break LSP as I am interrogating the runtime instance to determine its implementations?
Whichever implementation I use seems to have some side-effect either in bad design or usage.
I'm aware that I could have a wrapper interface that extends both
interfaces, but then I could end up with multiple interfaces to cater
for all the possible permutations of interfaces that can be
implemented by the same class
I suspect that if you're finding that lots of your classes implement different combinations of interfaces then either: your concrete classes are doing too much; or (less likely) your interfaces are too small and too specialised, to the point of being useless individually.
If you have good reason for some code to require something that is both an Interface1 and an Interface2 then absolutely go ahead and make a combined version that extends both. If you struggle to think of an appropriate name for this (no, not FooAndBar) then that's an indicator that your design is wrong.
Absolutely do not rely on casting anything. It should only be used as a last resort and usually only for very specific problems (e.g. serialization).
My favourite and most-used design pattern is the decorator pattern. As such most of my classes will only ever implement one interface (except for more generic interfaces such as Comparable). I would say that if your classes are frequently/always implementing more than one interface then that's a code smell.
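As a minimal sketch of the decorator idea (all names here are invented for illustration, not taken from the question):

// Hypothetical example: a decorator adds behaviour while implementing the same interface.
interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    public void send(String message) { /* send an email */ }
}

class LoggingNotifier implements Notifier {
    private final Notifier delegate;
    LoggingNotifier(Notifier delegate) { this.delegate = delegate; }
    public void send(String message) {
        System.out.println("sending: " + message); // added behaviour
        delegate.send(message);                    // then delegate
    }
}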
If you're instantiating the object and using it within the same scope then you should just be writing
Example example = new Example();
Just so it's clear (I'm not sure if this is what you were suggesting), under no circumstances should you ever be writing anything like this:
Interface1 example = new Example();
if (example instanceof Interface2) {
((Interface2) example).someInterface2Method();
}
Your class can implement multiple interfaces fine, and it is not breaking any OOP principles. On the contrary, it is following the interface segregation principle.
It is confusing why you would have a situation where something of type Interface1 is expected to provide someInterface2Method(). That is where your design is wrong.
Think about it in a slightly different way: imagine you have another method, void method1(Interface1 interface1). It can't expect interface1 to also be an instance of Interface2. If that were the case, the type of the argument should have been different. The example you have shown is precisely this: having a variable of type Interface1 but expecting it to also be of type Interface2.
If you want to be able to call both methods, you should have the type of your variable example set to Example. That way you avoid the instanceof and type casting altogether.
If your two interfaces Interface1 and Interface2 are not that loosely coupled, and you will often need to call methods from both, maybe separating the interfaces wasn't such a good idea, or maybe you want to have another interface which extends both.
In general (although not always), instanceof checks and type casts often indicate some OO design flaw. Sometimes the design would fit for the rest of the program, but you would have a small case where it is simpler to type cast rather than refactor everything. But if possible you should always strive to avoid it at first, as part of your design.
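To make the earlier point concrete, the cast-free version of the question's snippet is simply the following (someInterface1Method here is a hypothetical stand-in for any Interface1 method):

Example example = new Example();
example.someInterface1Method(); // hypothetical Interface1 method
example.someInterface2Method(); // no instanceof check, no cast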
You have two different options (I bet there are a lot more).
The first is to create your own interface which extends the other two:
interface Interface3 extends Interface1, Interface2 {}
And then use that throughout your code:
public void doSomething(Interface3 interface3){
...
}
The other way (and in my opinion the better one) is to use generics per method:
public <T extends Interface1 & Interface2> void doSomething(T t){
...
}
The latter option is in fact less restricted than the former, because the generic type T gets dynamically inferred and thus leads to less coupling (a class doesn't have to implement a specific grouping interface, like the first example).
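For example (a sketch, reusing the Example class from the question):

class Example implements Interface1, Interface2 { /* ... */ }

// T is inferred as Example, which satisfies both bounds, so this compiles:
doSomething(new Example());

// A class implementing only Interface1 would be rejected at compile time.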
The core issue
Slightly tweaking your example so I can address the core issue:
public void DoTheThing(Interface1 example)
{
    if (example instanceof Interface2)
    {
        ((Interface2) example).someInterface2Method();
    }
}
So you defined the method DoTheThing(Interface1 example). This is basically saying "to do the thing, I need an Interface1 object".
But then, in your method body, it appears that you actually need an Interface2 object. Then why didn't you ask for one in your method parameters? Quite obviously, you should've been asking for an Interface2.
What you're doing here is assuming that whatever Interface1 object you get will also be an Interface2 object. This is not something you can rely on. You might have some classes which implement both interfaces, but you might as well have some classes which only implement one and not the other.
There is no inherent requirement whereby Interface1 and Interface2 need to both be implemented on the same object. You can't know (nor rely on the assumption) that this is the case.
Unless you define the inherent requirement and apply it.
interface InterfaceBoth extends Interface1, Interface2 {}
public void DoTheThing(InterfaceBoth example)
{
example.someInterface2Method();
}
In this case, you've required InterfaceBoth to extend both Interface1 and Interface2. So whenever you ask for an InterfaceBoth object, you can be sure to get an object which implements both Interface1 and Interface2, and thus you can use methods from either interface without needing to cast or check the type.
You (and the compiler) know that this method will always be available, and there's no chance of this not working.
Note: You could've used Example instead of creating the InterfaceBoth interface, but then you would only be able to use objects of type Example and not any other class which would implement both interfaces. I assume you're interested in handling any class which implements both interfaces, not just Example.
Deconstructing the issue further.
Look at this code:
ICarrot myObject = new Superman();
If you assume this code compiles, what can you tell me about the Superman class? That it clearly implements the ICarrot interface. That is all you can tell me. You have no idea whether Superman implements the IShovel interface or not.
So if I try to do this:
myObject.SomeMethodThatIsFromSupermanButNotFromICarrot();
or this:
myObject.SomeMethodThatIsFromIShovelButNotFromICarrot();
Should you be surprised if I told you this code compiles? You should be, because it doesn't compile.
You may say "but I know that it's a Superman object which has this method!". But then you'd be forgetting that you only told the compiler it was an ICarrot variable, not a Superman variable.
You may say "but I know that it's a Superman object which implements the IShovel interface!". But then you'd be forgetting that you only told the compiler it was an ICarrot variable, not a Superman or IShovel variable.
Knowing this, let's look back at your code.
Interface1 example = new Example();
All you've said is that you have an Interface1 variable.
if (example instanceof Interface2) {
((Interface2) example).someInterface2Method();
}
It makes no sense for you to assume that this Interface1 object also happens to implement a second, unrelated interface. Even if this code works on a technical level, it is a sign of bad design: the developer is expecting some inherent correlation between two interfaces without actually having created that correlation.
You may say "but I know I'm putting an Example object in, the compiler should know that too!" but you'd be missing the point that if this were a method parameter, you would have no way of knowing what the callers of your method are sending.
public void DoTheThing(Interface1 example)
{
    if (example instanceof Interface2)
    {
        ((Interface2) example).someInterface2Method();
    }
}
When other callers call this method, the compiler is only going to stop them if the passed object does not implement Interface1. The compiler is not going to stop someone from passing an object of a class which implements Interface1 but does not implement Interface2.
Your example does not break LSP, but it seems to break SRP. If you encounter such a case, where you need to cast an object to its second interface, the method that contains such code can be considered to be doing too much.
Implementing 2 (or more) interfaces in a class is fine. Deciding which interface to use as the declared type depends entirely on the context of the code that will use it.
Casting is fine, especially when changing context.
class Payment implements Expirable, Limited {
    /* ... */
}

class PaymentProcessor {
    ExpirationChecker expirationChecker;
    LimitChecker limitChecker;

    // Using Payment here because I'm working with payments.
    public void process(Payment payment) {
        boolean expired = expirationChecker.check(payment);
        boolean pastLimit = limitChecker.check(payment);
        if (!expired && !pastLimit) {
            acceptPayment(payment);
        }
    }
}

class ExpirationChecker {
    // This is the `Expirable` world, so I'm using Expirable here
    public boolean check(Expirable expirable) {
        // code
    }
}

class LimitChecker {
    // This class is about checking limits, that's why I'm using `Limited` here
    public boolean check(Limited limited) {
        // code
    }
}
Usually, many client-specific interfaces are fine, and somewhat part of the Interface Segregation Principle (the "I" in SOLID). Some more specific points, on a technical level, have already been mentioned in other answers.
Particularly that you can go too far with this segregation, by having a class like
class Person implements FirstNameProvider, LastNameProvider, AgeProvider ... {
    @Override String getFirstName() {...}
    @Override String getLastName() {...}
    @Override int getAge() {...}
    ...
}
Or, conversely, that you have an implementing class that is too powerful, as in
class Application implements DatabaseReader, DataProcessor, UserInteraction, Visualizer {
...
}
I think that the main point in the Interface Segregation Principle is that the interfaces should be client-specific. They should basically "summarize" the functions that are required by a certain client, for a certain task.
To put it that way: The issue is to strike the right balance between the extremes that I sketched above. When I'm trying to figure out interfaces and their relationships (mutually, and in terms of the classes that implement them), I always try to take a step back and ask myself, in an intentionally naïve way: Who is going to receive what, and what is he going to do with it?
Regarding your example: When all your clients always need the functionality of Interface1 and Interface2 at the same time, then you should consider either defining an
interface Combined extends Interface1, Interface2 { }
or not have different interfaces in the first place. On the other hand, when the functionalities are completely distinct and unrelated and never used together, then you should wonder why the single class is implementing them at the same time.
At this point, one could refer to another principle, namely Composition over inheritance. Although it is not classically related to implementing multiple interfaces, composition can also be favorable in this case. For example, you could change your class to not implement the interfaces directly, but only provide instances that implement them:
class Example {
Interface1 getInterface1() { ... }
Interface2 getInterface2() { ... }
}
It looks a bit odd in this Example (sic!), but depending on the complexity of the implementation of Interface1 and Interface2, it can really make sense to keep them separated.
Edited in response to the comment:
The intention here is not to pass the concrete class Example to methods that need both interfaces. A case where this could make sense is rather when a class combines the functionalities of both interfaces, but does not do so by directly implementing them at the same time. It's hard to make up an example that does not look too contrived, but something like this might bring the idea across:
interface DatabaseReader { String read(); }
interface DatabaseWriter { void write(String s); }
class Database {
    DatabaseConnection connection = create();
    DatabaseReader reader = createReader(connection);
    DatabaseWriter writer = createWriter(connection);
    DatabaseReader getReader() { return reader; }
    DatabaseWriter getWriter() { return writer; }
}
The client will still rely on the interfaces. Methods like
void create(DatabaseWriter writer) { ... }
void read (DatabaseReader reader) { ... }
void update(DatabaseReader reader, DatabaseWriter writer) { ... }
could then be called with
create(database.getWriter());
read (database.getReader());
update(database.getReader(), database.getWriter());
respectively.
With the help of various posts and comments on this page, a solution has been produced, which I feel is correct for my scenario.
The following shows the iterative changes to the solution to meet SOLID principles.
Requirement
To produce the response for a web service, key + object pairs are added to a response object. There are lots of different key + object pairs that need to be added, each of which may have unique processing required to transform the data from the source to the format required in the response.
From this it is clear that whilst the different key / value pairs may have different processing requirements to transform the source data to the target response object, they all have a common goal of adding an object to the response object.
Therefore, the following interface was produced in solution iteration 1:
Solution Iteration 1
public interface ResponseObjectProvider<T, S> {
    void addObject(T targetObject, S sourceObject, String targetKey);
}
Any developer that needs to add an object to the response can now do so using an existing implementation that matches their requirement, or add a new implementation given a new scenario.
This is great, as we have a common interface which acts as a contract for this common practice of adding response objects.
However, one scenario requires that the target object should be taken from the source object given a particular key, "identifier".
There are options here; the first is to add an implementation of the existing interface as follows:
public class GetIdentifierResponseObjectProvider<T extends Map, S extends Map> implements ResponseObjectProvider<T, S> {
    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get("identifier"));
    }
}
This works; however, this scenario could be required for other source object keys ("startDate", "endDate", etc.), so this implementation should be made more generic to allow for reuse.
Additionally, other implementations may require more context information to perform the addObject operation, so a new generic type should be added to cater for this.
Solution Iteration 2
public interface ResponseObjectProvider<T, S, U> {
    void addObject(T targetObject, S sourceObject, String targetKey);
    void setParams(U params);
    U getParams();
}
This interface caters for both usage scenarios: the implementations that require additional params to perform the addObject operation and the implementations that do not.
However, considering the latter usage scenario, the implementations that do not require additional parameters will break the SOLID Interface Segregation Principle, as these implementations will override the getParams and setParams methods without actually implementing them, e.g.:
public class GetObjectBySourceKeyResponseObjectProvider<T extends Map, S extends Map, U extends String> implements ResponseObjectProvider<T, S, U> {
    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get(getParams()));
    }
    public void setParams(U params) {
        // unimplemented method
    }
    public U getParams() {
        return null; // unimplemented method
    }
}
Solution Iteration 3
To fix the Interface Segregation issue, the getParams and setParams interface methods were moved into a new Interface:
public interface ParametersProvider<T> {
void setParams(T params);
T getParams();
}
The implementations that require parameters can now implement the ParametersProvider interface:
public class GetObjectBySourceKeyResponseObjectProvider<T extends Map, S extends Map, U extends String> implements ResponseObjectProvider<T, S>, ParametersProvider<U> {
    private U params;

    public void setParams(U params) {
        this.params = params;
    }
    public U getParams() {
        return this.params;
    }
    public void addObject(final T targetObject, final S sourceObject, final String targetKey) {
        targetObject.put(targetKey, sourceObject.get(params));
    }
}
This solves the Interface Segregation issue but causes two more issues... If the calling client wants to program to an interface, i.e.:
ResponseObjectProvider responseObjectProvider = new GetObjectBySourceKeyResponseObjectProvider<>();
Then the addObject method will be available to the instance, but NOT the getParams and setParams methods of the ParametersProvider interface... To call these a cast is required, and to be safe an instanceof check should also be performed:
if(responseObjectProvider instanceof ParametersProvider) {
((ParametersProvider)responseObjectProvider).setParams("identifier");
}
Not only is this undesirable it also breaks the Liskov Substitution Principle - "if S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program"
i.e. if we replaced an implementation of ResponseObjectProvider that also implements ParametersProvider with an implementation that does not implement ParametersProvider, then this could alter some of the desirable properties of the program... Additionally, the client needs to be aware of which implementation is in use to call the correct methods.
An additional problem is the usage for calling clients. If the calling client wanted to use an instance that implements both interfaces to perform addObject multiple times, the setParams method would need to be called before addObject... This could cause avoidable bugs if care is not taken when calling.
Solution Iteration 4 - Final Solution
The interfaces produced from Solution Iteration 3 solve all of the currently known usage requirements, with some flexibility provided by generics for implementation using different types. However, this solution breaks the Liskov Substitution Principle and has a non-obvious usage of setParams for the calling client.
The solution is to have two separate interfaces, ParameterisedResponseObjectProvider and ResponseObjectProvider.
This allows the client to program to an interface, and to select the appropriate interface depending on whether the objects being added to the response require additional parameters or not.
The new interface was first implemented as an extension of ResponseObjectProvider:
public interface ParameterisedResponseObjectProvider<T,S,U> extends ResponseObjectProvider<T, S> {
void setParams(U params);
U getParams();
}
However, this still had the usage issue, where the calling client would first need to call setParams before calling addObject and also make the code less readable.
So the final solution has two separate interfaces defined as follows:
public interface ResponseObjectProvider<T, S> {
void addObject(T targetObject, S sourceObject, String targetKey);
}
public interface ParameterisedResponseObjectProvider<T,S,U> {
void addObject(T targetObject, S sourceObject, String targetKey, U params);
}
This solution solves the breaches of Interface Segregation and Liskov Substitution principles and also improves the usage for calling clients and improves the readability of the code.
It does mean that the client needs to be aware of the different interfaces, but since the contracts are different this seems to be a justified decision especially when considering all the issues that the solution has avoided.
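A sketch of the resulting client code might look like this (it assumes the example providers above are reworked to implement the final interfaces; the keys and values are made up for illustration):

// Hypothetical usage of the two final interfaces.
Map<String, Object> source = new HashMap<>();
source.put("identifier", 123);
source.put("startDate", "2020-01-01");
Map<String, Object> target = new HashMap<>();

ResponseObjectProvider<Map<String, Object>, Map<String, Object>> plain =
        new GetIdentifierResponseObjectProvider<>();
plain.addObject(target, source, "id");

ParameterisedResponseObjectProvider<Map<String, Object>, Map<String, Object>, String> keyed =
        new GetObjectBySourceKeyResponseObjectProvider<>();
keyed.addObject(target, source, "start", "startDate"); // params passed per call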
The problem you describe often comes about through over-zealous application of the Interface Segregation Principle, encouraged by languages' inability to specify that members of one interface should, by default, be chained to static methods which could implement sensible behaviors.
Consider, for example, a basic sequence/enumeration interface and the following behaviors:
Produce an enumerator which can read out the objects if no other iterator has yet been created.
Produce an enumerator which can read out the objects even if another iterator has already been created and used.
Report how many items are in the sequence
Report the value of the Nth item in the sequence
Copy a range of items from the object into an array of that type.
Yield a reference to an immutable object that can accommodate the above operations efficiently with contents that are guaranteed never to change.
I would suggest that such abilities should be part of the basic sequence/enumeration interface, along with a method/property to indicate which of the above operations are meaningfully supported. Some kinds of single-shot on-demand enumerators (e.g. an infinite truly-random sequence generator) might not be able to support any of those functions, but segregating such functions into separate interfaces will make it much harder to produce efficient wrappers for many kinds of operations.
One could produce a wrapper class that would accommodate all of the above operations, though not necessarily efficiently, on any finite sequence which supports the first ability. If, however, the class is being used to wrap an object that already supports some of those abilities (e.g. access the Nth item), having the wrapper use the underlying behaviors could be much more efficient than having it do everything via the second function above (e.g. creating a new enumerator, and using that to iteratively read and ignore items from the sequence until the desired one is reached).
Having all objects that produce any kind of sequence support an interface that includes all of the above, along with an indication of what abilities are supported, would be cleaner than trying to have different interfaces for different subsets of abilities, and requiring that wrapper classes make explicit provision for any combinations they want to expose to their clients.
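A minimal Java sketch of such a capability-reporting sequence interface might look like this (every name here is invented for illustration):

import java.util.EnumSet;
import java.util.Iterator;

// Hypothetical sketch: one interface for all sequence operations,
// plus a report of which operations are meaningfully supported.
interface Sequence<T> {
    enum Capability { RESTARTABLE_ITERATION, KNOWN_COUNT, RANDOM_ACCESS, SNAPSHOT }

    EnumSet<Capability> capabilities();

    Iterator<T> iterator(); // the one ability every sequence has

    default int count() { throw new UnsupportedOperationException(); }
    default T itemAt(int n) { throw new UnsupportedOperationException(); }
    default Sequence<T> snapshot() { throw new UnsupportedOperationException(); }
}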

How do I make a function accept any type (not necessarily an object)?

I have a function
boolean isValid(/* what goes in here? */) {
//do stuff
}
In the function signature, what do I have to enter in the parameter list in order for the method to accept a single parameter that may be of any type (primitive or object)?
I have searched around on this site and google, but I could only find situations where the return type is unknown.
If you have the function accept Object, Java will autobox any primitive argument into its object wrapper (e.g. Integer for int).
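For example (a minimal sketch):

static boolean isValid(Object value) {
    return value != null; // placeholder check
}

// All of these compile; primitives are autoboxed to their wrappers:
isValid(42);      // int -> Integer
isValid(3.14);    // double -> Double
isValid("text");  // already an Object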
Reading the question and the comments, it seems like your perception of how compiled languages work may be a bit off.
If you make your function accept only one type (e.g. String), then it will fail to compile if the caller does not pass an object that is an instance of that type. "Not compiling" means they will not even be able to run the program without fixing the error. The compiler enforces this type safety for you, so you don't have to worry about this.
After reading through the comments, it seems like you have a business need to validate any object people pass to you, and possibly, you will have to support more types as time goes on.
The simplest solution is, as jtoomey said, to make a method like public boolean isValid(Object val). However, think about how many if statements you would have to write, and how hard it would be to modify the code when new type validations are needed.
Personally, I would probably do something a bit more complicated than providing a single method: I would use a factory to create a validator based on the class type, like:
public interface Validator<T> {
    boolean isValid(T val);
}

public class ValidatorFactory {
    public static ValidatorFactory create(String configFile) {
        /* read config and create a new instance */
    }
    public <T> Validator<T> createValidator(Class<T> clazz) {
        /* based on the config, create a validator for the given type */
    }
}
public class Application {
    public static ValidatorFactory vFactory = ValidatorFactory.create("validators.config"); // config file name is a placeholder
    public static void main(String[] args) {
        Object val = Arguments.getVal(); // assume this exists
        Validator<Object> validator = (Validator<Object>) vFactory.createValidator(val.getClass()); // unchecked: looked up by runtime class
        if (validator == null) {
            System.out.println("not valid");
            return;
        }
        System.out.println(validator.isValid(val));
    }
}
Note that personally I feel this is a terrible solution, because you are throwing away the type safety of Java. But if you really must, having a configurable factory would be better than just a method that takes in any type. Having the Validator interface allows you to know the type when you are writing validation code.
I think you are a bit confused.
I will try to clarify some things:
You probably know that the return type of a method is the value that will be passed back to the part of the code that called the method (also known as the client). Every method in Java can return only one type (I assume you are familiar with basic polymorphism and know what an IS-A relationship is). Here is the first clarification: you can return only one type, but one or many objects (there are data structure types).
Arguments are the values that the caller/client of the method can pass into it for processing. A method can have zero or many parameters, which means you can pass as many objects as you want.
Parameters are essentially the same as arguments; it is just a difference in terminology. If you want to be accurate with the terms, parameters are what you declare when you define the method, and arguments are what you pass when you call it.
For both the return type and the parameters, the two kinds of types you can use are objects and primitive types.
If you use the type Object, this will allow you to accept or return any object (Object is the superclass of all classes). But primitive types are not objects, so on its own a type of Object in a signature would not seem to allow you to pass a number; however, there is one little trick...
In Java there are special types of objects known as primitive wrappers (Integer, Double, ...). These objects are object representations of primitives; sometimes they are used because they have built-in functions that help programmers manipulate the data (that is not the main point, keep reading...). Every wrapper that represents a numerical primitive type extends a class called Number, and because of a feature of Java known as autoboxing, primitives are converted into wrappers automatically.
Anyway, I don't know if this is the trick you were looking for, but in any case I just want to advise you that there is no reason at all to do what you are trying; it sounds really strange, and I don't think such a thing is really needed in real-life programming.
Have a look at this code:
public static void main( String[] args )
{
    App app = new App();
    app.method(5);
}

public void method(Number number) {
    System.out.print(number);
}
Another alternative:
Another way I can think of to make a parameter universal is the use of generics. So, just to end this answer by proving my point, here is a method that will allow you to pass anything you want, no matter whether it is a primitive or an object:
public class App
{
    public static void main( String[] args )
    {
        App app = new App();
        app.method(5);
        app.someMethod(9);
        app.someMethod("Whatever");
        app.someMethod(true);
    }

    public void method(Number number) {
        System.out.println(number);
    }

    public <T> void someMethod(T t) {
        System.out.println(t);
    }
}
I hope you find this useful, but again, I doubt you will ever need to do something like this in real life.

Java - Interface Methods

Just playing around with interfaces and I have a question about something which I can't really understand.
The following code doesn't compile, which is the behaviour I expect, as the interface requires the method to work for any Object and the implementing method has changed the signature to allow only String objects.
interface I {
    public void doSomething(Object x);
}

class MyType implements I {
    public void doSomething(String x) {
        System.out.println(x);
    }
}
However, using the following block of code, I was shocked to see that it did work. I thought it would not work as we are expecting to return an object and the implemented method will only return a string object. Why does this work and what is the difference between the two principles here of passed parameters and return types?
interface I {
    public Object doSomething(String x);
}

class MyType implements I {
    public String doSomething(String x) {
        System.out.println(x);
        return x;
    }
}
public Object doSomething(String x);
has to return something. Anything, in fact, so long as it is some type of object. So if you implement
public String doSomething(String x) {stuff}
that's fine, because it does in fact return an Object. The fact that the object it will return will always be a String is no big deal.
The reason the first example doesn't work, is because accepting only strings is more limiting than accepting any object. But returning only strings is fine.
For an analogy, let's say you got a contract to paint a building, and you're gonna hire some employees to help you out. The contract requires that you hire any painter that applies, regardless of how tall they are, but doesn't specify what color paint to use. If you only hired painters over 6 ft tall (that's the input, accepting only String instead of all Object), you'd be violating the contract. But choosing to paint with only blue paint (returning only strings) is just fine, because the contract didn't specify color, only that you must paint the building.
It works because a String is an Object.
This has to do with covariant return types, introduced in Java SE 5.0.
You can see more details in http://docs.oracle.com/javase/tutorial/java/javaOO/returnvalue.html
From the Java Language Specification:
Return types may vary among methods that override each other if the return types are reference types. The notion of return-type-substitutability supports covariant returns, that is, the specialization of the return type to a subtype.
So in other words, it works as you did it, but it would not work if the return type in the interface were String and the one in the implementing class were Object.
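In code, the direction that does not compile would be (sketch):

interface I {
    String doSomething(String x);
}

class MyType implements I {
    // Compile error: Object is not a subtype of String,
    // so this return type cannot satisfy the interface.
    public Object doSomething(String x) {
        return x;
    }
}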
The principle behind this behaviour is called covariant return type. In this particular case, the overriding method may "narrow" the originally declared return type.
This means that since String subclasses Object, Object may be substituted by String in the return position.
The reason the first example doesn't work and the second does is that method prototypes are defined by the name and the parameters only, not the return type. In the first example the parameter types differ, so the compiler treats them as two different methods.
In the second example, the implementing method does not broaden the return type but instead specializes it (String is a specialization of Object), so it works.
Likewise, an overriding method may widen the visibility of the method it overrides, but may not reduce it.
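For example (sketch):

class Base {
    protected void run() { }
}

class Derived extends Base {
    @Override
    public void run() { } // widening protected to public is allowed
    // Narrowing it (e.g. to private) would be a compile error.
}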
Furthermore, Java has generics, which are useful in this context.
Example:
interface I<T>
{
    public void doSomething(T x);
}

class MyType implements I<String>
{
    public void doSomething(String x)
    {
        System.out.println(x);
    }
}
A method signature does not take the return type into account. (Though it is an error to declare two methods with the same signature but different return types.) So:
void doSomething(Object)
void doSomething(String)
are simply two different methods; neither overrides nor implements the other.
The String class inherits from Object, which is why only the second piece of code works.

Why can't parameter types be loosened in overridden methods?

This code is invalid:
interface Foo
{
    public void foo(final String string);
}

public class Bar implements Foo
{
    // Error: does not override.
    @Override public void foo(final Object object)
    {
    }
}
Because every String is obviously an Object, I would expect this code to be perfectly fine: any code that depends on giving foo() a String would still function when foo() actually takes an Object.
It appears, though, that method signatures have to be identical to those of the methods they're overriding. Why?
What if
interface Foo
{
void foo(String string);
void foo(Object string);
}
Then which method is overridden by Bar?
"Loosening", as you put it, should not impact someone expecting to use your interface-defined method in a particular way, as any methods you call on that parameter should in theory be callable on the specified object. But Eugene's point stands, because there would probably be a compiler headache in determining which method you actually intended to override if you only vaguely stick to the interface's specification. Also, why would this be desirable if you are just moving up the inheritance hierarchy? Surely you will be able to do all you want with the thing further down the hierarchy as with Object, for example. Possibly casting inside your method would solve your problem. And if it were possible to do what you want, you would probably start treading on the polymorphism paradigm.
I think this is a classical contravariance issue. Your interface requires a string to be passed as parameter, you want an implementation that accepts an object (because, after all, strings are also objects).
The problem is that if you allowed that, then you could no longer guarantee that the parameter being required by the interface is a string or any one of its ancestors. You might just as well pass any object to your implementation and you would be breaking the contract of the interface, putting in danger the type safety and type coherence of your design.
You do have options, though:
public class Bar implements Foo
{
    @Override public void foo(final String object)
    {
    }

    public void foo(final Object object)
    {
        foo((String) object);
    }
}
By this, you would be ensuring that object is actually a string, making possible to the type system to check that you are actually complying with the interface contract established in the method signature.
Is there a particular scenario in which you would consider your contravariance example to be requirement?
It's just the constructs of the Java programming language. The structure of Java programs will grow on you. So for now just try and adjust.

Java: Bounded heterogeneous collections when items are not related

If I have a heterogeneous collection for which I know exactly the types I'm going to place in it, is there a way to enforce this?
For example, take this scenario: say I have a map that has a String key and a value which can be one of three unrelated types. I know that I will only put ClassA, ClassB, or java.lang.String.
For example, here is the code:
public class HetroCollection
{
    public Map<String, Object> values = new HashMap<>();
}

public class ClassA
{
}

public class ClassB
{
}

public static void main(String[] args)
{
    HetroCollection collection = new HetroCollection();
    collection.values.put("first", new ClassA());
    collection.values.put("second", new ClassB());
    collection.values.put("third", "someString");

    // BAD: want to stop random adds
    collection.values.put("fourth", new SomeRandomClass());
}
The options I have thought of are:
Have the classes implement a common interface and use generics on the Map (the problem with this is that if it also involves library classes, either JDK or third party, then changing the class is not an option).
Hide the Map and provide put methods which are parameterized, like:
put(String key, ClassA value);
put(String key, ClassB value);
put(String key, String value);
get(String key);
Rethink the design and not use a heterogeneous collection (not sure how I would represent this any other way).
Looking for the best practice answer for this.
I think that the "best practice" solutions are either your first and third options, provided that circumstances allow it.
Another option that you haven't considered is something like this:
public class MyMap extends HashMap<String, Object> {
    ...
    // constructors
    ...
    @Override
    public Object put(String key, Object value) {
        if (value instanceof ClassA || value instanceof ClassB || value instanceof String) {
            return super.put(key, value);
        } else {
            throw new IllegalArgumentException("Verbotten!");
        }
    }
    ...
}
You could combine this with your second option so that there is a statically typed option, and possibly even label the put(String, Object) method as deprecated to discourage its use.
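A sketch of that combination (assuming ClassA and ClassB from the question) could look like this:

import java.util.HashMap;

public class MyMap extends HashMap<String, Object> {

    // Statically typed entry points.
    public void putClassA(String key, ClassA value) { super.put(key, value); }
    public void putClassB(String key, ClassB value) { super.put(key, value); }
    public void putString(String key, String value) { super.put(key, value); }

    /** @deprecated prefer the typed put methods above */
    @Deprecated
    @Override
    public Object put(String key, Object value) {
        if (value instanceof ClassA || value instanceof ClassB || value instanceof String) {
            return super.put(key, value);
        }
        throw new IllegalArgumentException("Verbotten!");
    }
}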
And finally, there is the option of just ignoring the problem, and relying on the application programmer to not put random stuff into the map. Depending on the circumstances, this might even be the best solution.
Well, you've already proven your first thought to not be an option. The second thought would be the best option, if you really need this functionality. Otherwise the best option is to rethink your approach. But, it's easier to help if we knew a little context.
There is a fourth option:
If you want to stick instances of exactly these types into a collection, chances are they have something in common. If you cannot introduce a common supertype to express that commonality, you can still introduce a parallel class hierarchy with such a common superclass, and declare your map to hold items of that type.
// You can find a better name ;-)
abstract class Foo {
    public abstract void foo();
    public void bar() {
        // something generic
    }
    public abstract void visit(FooVisitor visitor);
}

// Visitor interface for switching on the type of wrapped value
interface FooVisitor {
    void visit(ClassAFoo foo);
    void visit(ClassBFoo foo);
    void visit(StringFoo foo);
}

class ClassAFoo extends Foo {
    final ClassA delegate;
    // Constructor and implementations for foo() and visit()
}

class ClassBFoo extends Foo {
    final ClassB delegate;
    // Constructor and implementations for foo() and visit()
}

class StringFoo extends Foo {
    final String delegate;
    // Constructor and implementations for foo() and visit()
}
Advantages:
statically type safe
you can add methods to the common type or implement the visitor pattern to switch on the type of wrapped value
the compiler can check that you have handled all types when working with the map (in contrast to using a series of if-statements to switch on the type)
Disadvantages:
boilerplate code, complexity
There's a great facility in Java for this called classes. Given your example, you might write one like this:
public class Foo {
    private ClassA first;
    private ClassB second;
    private String someString;
    ...
    public void setFirst(ClassA first) {
        this.first = first;
    }
    public ClassA getFirst() {
        return first;
    }
    ...
}
Seriously, given what you've said this sounds like exactly what you want. If you only want to allow specific keys, with values that may only be of specific types (that depend on the key itself)... that's a class. If there's some really strong reason that you need to use String map keys here (and this seems unlikely to me), please explain.
Edit:
When I answered this I was under the impression for some reason that you needed to enforce only specific keys mapping to specific types of values. Looking at it again, it seems like that may not be the case. If that isn't the case, I think your best option is rethinking the design (giving an example of why you need to do this might be helpful). If you do that and don't come up with anything, I think #2 is the best option. It enforces your restrictions on the types of values the map can have in a somewhat typesafe way.
In theory, type safety with mixed objects in a list can be achieved using HList in Functional Java. See the blog post and examples. Also relevant is this article from IBM developerWorks. I wrote "in theory" because in practice the type declaration can only cope with a limited number of elements, and it grows rapidly.
