I have a Java method that takes 3 parameters, and I'd like it to also have a 4th "optional" parameter. I know that Java doesn't support optional parameters directly, so I coded in a 4th parameter and when I don't want to pass it I pass null. (And then the method checks for null before using it.) I know this is kind of clunky... but the other way is to overload the method which will result in quite a bit of duplication.
Which is the better way to implement optional method parameters in Java: using a nullable parameter, or overloading? And why?
Write a separate 3-parameter method that forwards to the 4-parameter version. Don't kludge it.
With so many parameters, you might want to consider a builder or similar.
Use something like this:
public class ParametersDemo {
    public ParametersDemo(Object mandatoryParam1, Object mandatoryParam2, Object mandatoryParam3) {
        this(mandatoryParam1, mandatoryParam2, mandatoryParam3, null);
    }

    public ParametersDemo(Object mandatoryParam1, Object mandatoryParam2, Object mandatoryParam3, Object optionalParameter) {
        // create your object here, using four parameters
    }
}
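The builder suggested above can be sketched like this (class and member names are illustrative, not from the original code): mandatory values go through the builder's constructor, the optional one through a fluent setter.

```java
public class BuiltParameters {
    private final Object mandatory1;
    private final Object mandatory2;
    private final Object mandatory3;
    private final Object optional; // stays null when never set

    private BuiltParameters(Builder b) {
        this.mandatory1 = b.mandatory1;
        this.mandatory2 = b.mandatory2;
        this.mandatory3 = b.mandatory3;
        this.optional = b.optional;
    }

    public Object getOptional() {
        return optional;
    }

    public static class Builder {
        private final Object mandatory1;
        private final Object mandatory2;
        private final Object mandatory3;
        private Object optional;

        public Builder(Object m1, Object m2, Object m3) {
            this.mandatory1 = m1;
            this.mandatory2 = m2;
            this.mandatory3 = m3;
        }

        // Call this only when the optional value is present.
        public Builder optional(Object value) {
            this.optional = value;
            return this;
        }

        public BuiltParameters build() {
            return new BuiltParameters(this);
        }
    }
}
```

Callers then write `new BuiltParameters.Builder(a, b, c).optional(d).build()`, or simply leave out the `optional(...)` call; this scales better than overloads once the parameter count grows.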
I have a function with annotations like that in my code:
@When("^trying to login or register with email address \"([^\"]*)\"$")
fun whenLoginOrRegister(email: String? = null) {
    email?.let { user.email = it }
    loginViewModel.whenLoggingIn()
}
What Kotlin does here is create two or more overloads of the function, depending on the number of optional parameters. How can I force Kotlin to create just one method instead of several? The optimum would be a single method that simply accepts null.
Background: I use Cucumber, and it searches for these annotated functions by means of reflection. But two functions with the same annotation lead to an exception, as no ambiguities are allowed. If I can't stop Kotlin from creating multiple methods, maybe there is a different workaround that can help in this situation?
For such a function, if you don't use the @JvmOverloads annotation, Kotlin creates exactly two methods, regardless of the number of optional parameters. One method has the regular signature, and another one additionally accepts a bit mask of the parameters that have been passed. There is no way to avoid creating multiple methods.
What I would do in this case is simply create two separate functions, "when trying to login or register without email" and "when trying to login or register with email address <email>".
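Sketched in Java (the Cucumber @When annotations are shown as comments so the snippet stands alone; names are illustrative), the two separate steps can share the parameterized implementation:

```java
public class LoginSteps {
    private String email; // null means "no email given"

    // @When("^trying to login or register$")
    public void whenLoginOrRegisterWithoutEmail() {
        whenLoginOrRegister(null); // delegate to the parameterized step
    }

    // @When("^trying to login or register with email address \"([^\"]*)\"$")
    public void whenLoginOrRegister(String email) {
        if (email != null) {
            this.email = email;
        }
        // loginViewModel.whenLoggingIn(); // depends on application code
    }

    public String getEmail() {
        return email;
    }
}
```

Each step then maps to exactly one method on the JVM, so reflection-based lookup sees no ambiguity.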
What is the difference between the interfaces:
org.apache.commons.collections4.Transformer
java.util.function.Function
Aren't they doing a similar action?
T --> doing stuff --> R
Assume I've got a User object, and I want to let one of these interfaces return me the loginname as a String from that object. I could use both?
@Override
public String apply(User user) {
    return user.getLoginname();
}
or
@Override
public String transform(User user) {
    return user.getLoginname();
}
These two interfaces are functionally equivalent: they take an object of some type as input and return an object of some, possibly different, type. Transformer has a somewhat narrower documented scope, in the sense that "compute a transformation of" is, to me, a particular case of "compute a function of", but that's a weak distinction.
The most important difference between these interfaces is how instances can be used with other objects. They are not interchangeable. Thus, if you want to use a TransformedList then it has to be defined in terms of a Transformer, not a Function. If you want to obtain a flatMap from a Stream then you need a Function, not a Transformer.
Because they are basically just different names for the same idea, however, it is trivial to write an adapter to enable you to use one type in a context that requires the other.
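Such an adapter might look like the following sketch (the Transformer interface is reproduced here in minimal form so the snippet is self-contained; the real one lives in org.apache.commons.collections4):

```java
import java.util.function.Function;

public class Adapters {
    // Minimal stand-in for org.apache.commons.collections4.Transformer.
    public interface Transformer<T, R> {
        R transform(T input);
    }

    // Adapt a java.util.function.Function to a Transformer.
    public static <T, R> Transformer<T, R> toTransformer(Function<T, R> f) {
        return f::apply;
    }

    // Adapt a Transformer to a java.util.function.Function.
    public static <T, R> Function<T, R> toFunction(Transformer<T, R> t) {
        return t::transform;
    }
}
```

Because both are functional interfaces with a single abstract method of the same shape, each adapter is just a method reference.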
What is the difference?
The name.
Aren't they doing a similar action?
Yes.
I could use both?
Sure, but why rely on Commons Collections if you can use the built-in type?
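With the built-in type, the User example above reduces to a one-line method reference (the User class here is a minimal stand-in for the asker's object):

```java
import java.util.function.Function;

public class UserDemo {
    public static class User {
        private final String loginname;

        public User(String loginname) {
            this.loginname = loginname;
        }

        public String getLoginname() {
            return loginname;
        }
    }

    // No anonymous class needed: a method reference is the whole implementation.
    public static final Function<User, String> TO_LOGIN = User::getLoginname;
}
```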
This is a follow-up thread on How to get rid of instanceof in this Builder implementation
There are still some problems with this design. Every time a new parameter is introduced, one must create a new ConcreteParameter class.
That's not a problem. But one must also add an append(ConcreteParameter) method to the CommandBuilder, and I don't much like that dependency.
To summarize
Commands can be configured with parameters. Not every command can receive the same parameters, so some have to be ignored when applied to a command (in this implementation this is achieved by throwing an UnsupportedOperationException).
Parameters that can be applied to certain classes are used differently in those classes (e.g. FTPCommand and HTTPCommand might use IpParameter in different ways).
In the future new Commands and Parameters might be introduced
Update
The implementation as it stands works. But isn't it overkill that, with about 30 parameters, I need a separate method for every one of them?
What is a cleaner and more flexible way/pattern to achieve this?
What is a parameter for you, and what is a parameter type? If you really have different kinds of objects as parameters, with different operations you may perform on them, then you cannot avoid having different classes to handle them. If your parameters only differ in how the commands interpret them, but otherwise are mostly String and Integer or whatever, then having extra classes for each possible meaning is surely overkill. And if your parameter are some form of key-value pair, then I'd represent them as such: a single class (or perhaps one for each reasonable value type) to contain the name and the value of the parameter.
If you can use the above to reduce the number of parameter types, you might want to consider reflection for the actual command building. You could have an annotation @Parameter which you use to decorate setter methods of your command classes. E.g. @Parameter void setIP(String) would mean that the command accepts a String parameter and will interpret it as an IP address. If you use key-value parameters, you can either derive the key from the method name, or add a value to the annotation, or both. Using such a framework, you could have a single command builder which would take care of feeding parameters to the appropriate setters.
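A minimal sketch of that idea, assuming a hypothetical @Parameter annotation whose value is the parameter key (all names here are illustrative):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.Map;

public class ReflectiveBuilder {
    // Hypothetical annotation marking a setter as a parameter sink.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Parameter {
        String value(); // the parameter key this setter accepts
    }

    // One generic builder: feed each known key to the matching setter.
    // Keys the command does not declare are silently ignored.
    public static void configure(Object command, Map<String, Object> params) {
        for (Method m : command.getClass().getMethods()) {
            Parameter p = m.getAnnotation(Parameter.class);
            if (p != null && params.containsKey(p.value())) {
                try {
                    m.invoke(command, params.get(p.value()));
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
    }

    // Example command that only cares about the "ip" parameter.
    public static class FtpCommand {
        private String ip;

        @Parameter("ip")
        public void setIp(String ip) {
            this.ip = ip;
        }

        public String getIp() {
            return ip;
        }
    }
}
```

Ignoring unknown keys (instead of throwing UnsupportedOperationException) is a design choice; the loop could just as easily collect the leftover keys and fail.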
Even though there is an accepted answer, I feel you need to be aware of another option.
I would use a Map as a context object, and pass the context to the execute method of your command. The command will simply pull the parameters it needs out of the Map by String.
public interface Command {
    void execute(Map<String, Object> context);
}

class OneCommandImpl implements Command {
    public void execute(Map<String, Object> context) {
        context.get("p1");
        context.get("p2");
    }
}
The advantages of this approach are that it's simple, and there is no need for reflection. You can build any command you want, that requires any number of arguments, using this one interface. The primary disadvantage is the type of value in the Map is not specific.
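A short usage sketch (the command and keys are illustrative); the cast inside the command is exactly the "non-specific value type" disadvantage mentioned above:

```java
import java.util.HashMap;
import java.util.Map;

public class ContextDemo {
    public interface Command {
        void execute(Map<String, Object> context);
    }

    public static class GreetCommand implements Command {
        private String greeting;

        @Override
        public void execute(Map<String, Object> context) {
            // The Map cannot express per-key value types, so a cast is needed.
            String name = (String) context.get("name");
            greeting = "Hello, " + name;
        }

        public String getGreeting() {
            return greeting;
        }
    }

    public static String run() {
        Map<String, Object> context = new HashMap<>();
        context.put("name", "Ada");
        context.put("retries", 3); // present but ignored by this command
        GreetCommand cmd = new GreetCommand();
        cmd.execute(context);
        return cmd.getGreeting();
    }
}
```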
I am writing a Java method with the following signature.
void Logger(Method method, Object[] args);
If a method (e.g. ABC()) calls this method Logger, it should retrieve the Method object that encapsulates data about itself (ABC()) and pass it as an argument.
How can a method retrieve the Method object that is storing all the information about that method?
A simple way is that I use
Method[] methods = ExampleClass.class.getMethods();
and search the whole array for the Method with the correct name. (Which is quite inefficient). Also, if two or more methods have the same names, then I will have to retrieve their parameter types too (to distinguish them) and have different code for each method. This would be inefficient as well as painful.
Is there a better way?
Thanks for the help.
Don't do it. Rather obtain the method name from the stack.
public void log(Object object) {
    String methodName = Thread.currentThread().getStackTrace()[2].getMethodName();
    // ...
}
This is however pretty expensive, and that's why most self-respecting logging frameworks offer an option to turn it on or off (which I would recommend using instead of rolling your own).
Even better, don't implement this method at all. Use logback, or some other modern logging framework.
If you are writing a wrapper over existing logging frameworks, then they already provide a way to print the method name in the log message, if that's what you are trying to implement.
You can read in the log4j documentation, for example, that this extraction (as the other answer suggests) is done from the stack trace and is expensive: http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html
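For example, log4j 1.2's PatternLayout exposes the caller's method name through the %M conversion character (and the line number through %L), both of which that documentation warns are slow to generate (appender name below is illustrative):

```properties
# %M = caller's method name, %L = caller's line number
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %-5p %c.%M:%L - %m%n
```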
This question already has answers here:
Possible duplicate: Java - why no return type based method overloading?
(Closed 12 years ago.)
"The compiler does not consider return type when differentiating methods, so you cannot declare two methods with the same signature even if they have a different return type." (Java Tutorial)
Why is this?
Because you are not required to assign the result when you execute a method. How would the compiler then know which of the overloaded methods you'd like to call? There would be ambiguity.
Because you can't tell from just the method invocation what the return type is supposed to be. The compiler needs to be able to tell, using only information at the call site, which method to call. Return values may be discarded, so you can't in general know that there is a return value or what its type is. It gets even more confusing once you start thinking about type coercions (short -> int) or casts.
Basically, when the compiler sees a method call, it knows all of the arguments need to be there for the call to be valid, so it can use those arguments to find the right method. But return values are not known at the time of the call, and even the type of the return value may not be discoverable.
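To make that concrete, here is a sketch showing that resolution uses only the argument types, and that a result can be discarded entirely (method names are illustrative):

```java
public class Overloads {
    public static String f(int x) {
        return "int";
    }

    public static String f(double x) {
        return "double";
    }

    public static void main(String[] args) {
        System.out.println(f(1));   // resolved by argument type: "int"
        System.out.println(f(1.0)); // resolved by argument type: "double"
        // The result may be discarded entirely; if a hypothetical pair
        // f(int) -> int and f(int) -> double were both allowed, the
        // compiler could not resolve this call at all:
        f(1);
    }
}
```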
Because a calling method would need to pass the return type to the method being called.
So you have
public Integer doStuff(String thing) { ... }
and
public Double doStuff(String thing) { ... }
The class calling doStuff would need to tell the callee to use the doStuff that takes a String (which the call already expresses) and returns a Double (which it does not express).
The reason WHY Java does this? To help prevent horrible code like the above, I assume :) Overloading is easy to make a mess with, and I am not sure I see the benefit in the case above.
Any given call site could successfully use a number of different return types, due to polymorphism and auto-(un)boxing at the very least, so this would either need rules to deal with ambiguous cases, or only work in simple circumstances.
I just wanted to point out that, while not possible in Java, Ada supports overloading based purely on return types. Yet the best explanation Google turns up for how that works is a Stack Overflow post:
Function overloading by return type?