When KISS and DRY collide - java

I'm an obsessive follower of the DRY and KISS principles, but last week I had a case where the two seemed to contradict each other:
For an application I was working on, I had to implement a loop four times which does the following:
iterate over the elements of a list of type A
convert each element of type A to type B and insert it into a list of type B
Here's an example:
for (A a : listOfA) {
    listOfB.add(BFactory.convertFromAToB(a));
}
Within the code, I have to do this about 4 times, converting one type (e.g. D, E, etc.) into another. I may not be able to change the types I'm about to convert, as they are 3rd-party types which we have to use in our app.
So we have:
for (A a : listOfA) {
    listOfB.add(BFactory.convertFromAToB(a));
}
for (C c : listOfC) {
    listOfD.add(DFactory.convertFromCToD(c));
}
...
So, to not violate DRY, I came up with a generic solution:
private interface Function<S, T> {
    T apply(S s);
}

public <S, T> void convertAndCopy(List<S> src, List<T> dst, Function<S, T> f) {
    for (S s : src) {
        dst.add(f.apply(s));
    }
}
A call looks something like this:
convertAndCopy(listOfA, listOfB, new Function<A, B>() {
    public B apply(A a) {
        return BFactory.convertFromAToB(a);
    }
});
Now, while this is better in terms of DRY, I think it violates KISS, as this solution is much harder to understand than the duplicated for loops.
So, is this DRY vs. KISS? Which one to favor in this context?
EDIT
Just to be clear, the class I'm talking about is an Adapter, which delegates calls to a legacy system to our own implementation, converting the legacy types into our own types along the way. I have no means of changing the legacy types, nor may I change our types (which are XML-Schema-generated).

Either is fine.
With the loops, you are not really repeating yourself, because the only parts that are repetitive are "syntactic clutter" (and not too much of that in your case). You are not repeating/duplicating "application logic" code.
If you like the "Function" style, maybe make use of the Guava library (which has the Function interface and many helper methods that work with them on collections). That is DRY (because you don't repeat yourself, and re-use code that already exists), and still KISS (because those are well understood patterns).
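For example, a sketch using Guava's Lists.transform (listOfA and listOfB are the lists from the question):
import com.google.common.base.Function;
import com.google.common.collect.Lists;

// Lists.transform returns a lazy view of listOfA; copying it into
// listOfB materializes the converted elements.
listOfB.addAll(Lists.transform(listOfA, new Function<A, B>() {
    @Override
    public B apply(A a) {
        return BFactory.convertFromAToB(a);
    }
}));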

If you only have to do this 4 times in your whole application, and the conversion is really as trivial as your examples, I would choose writing 4 for loops any time over the generic solution.
Readability suffers a lot from using that generic solution and you don't actually gain anything from it.

General principles like DRY and KISS never work all of the time.
IMO, the answer is to forget the dogma (at least for this problem), and think about what gives you the best / most readable solution.
If the duplicated x 4 code is easier to understand and it is not a maintenance burden (i.e. you don't need to change it a lot), then it is the right solution.
(And Thilo's answer is right too ... IMO)

I think it is not that KISS and DRY contradict each other. I would rather say that Java does not allow you to express simplicity while not repeating yourself.
First of all, if you introduce properly named methods to convert from List<A> to List<B> and so on, instead of repeating the loop all the time, it would be DRY while still remaining KISS.
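For example, a minimal sketch (the method name is mine):
private List<B> convertAllAToB(List<A> listOfA) {
    // One well-named method per conversion: the loop is written once,
    // and the call site reads as simply as the inline version.
    List<B> listOfB = new ArrayList<B>();
    for (A a : listOfA) {
        listOfB.add(BFactory.convertFromAToB(a));
    }
    return listOfB;
}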
But what I would advise is to look at alternative languages that allow you to take full advantage of DRY while still promoting KISS, e.g. in Scala:
val listOfB = listOfA map convertAtoB
val listOfC = listOfB map convertBtoC
val listOfD = listOfC map convertCtoD
Where convertAtoB is a function taking an item of type A and returning B:
def convertAtoB(a: A): B = //...
Or you can even chain these map calls.

You could move the conversion function into CFactory:
convertAndCopy(listOfA, listOfB, CFactory.getConverterFromAToB());
The code is quite readable/simple this way and you promote code reuse (maybe you will need to use the converter object later in another context).
Implementation:
public <S, T> void convertAndCopy(List<S> src, List<T> dst, Function<S, T> f) {
    dst.addAll(Collections2.transform(src, f));
}
(using Guava's Collections2).
I'm not even sure that you need DRY here; you could directly use:
listOfB.addAll(Collections2.transform(listOfA,CFactory.getConverterFromAToB()));


Java 8 way to work with an enum

I'm wondering what the best way is in Java 8 to work with all the values of an enum, specifically when you need to get all the values and do something with each of them. For example, suppose that we have the following enum:
public enum Letter {
    A, B, C, D;
}
I could of course do the following:
for (Letter l : Letter.values()) {
    foo(l);
}
But, I could also add the following method to the enum definition:
public static Stream<Letter> stream() {
    return Arrays.stream(Letter.values());
}
And then replace the for from above with:
Letter.stream().forEach(l -> foo(l));
Is this approach OK or does it have some fault in design or performance? Moreover, why don't enums have a stream() method?
I'd go for EnumSet. Because forEach() is also defined on Iterable, you can avoid creating the stream altogether:
EnumSet.allOf(Letter.class).forEach(x -> foo(x));
Or with a method reference:
EnumSet.allOf(Letter.class).forEach(this::foo);
Still, the old-school for-loop feels a bit simpler:
for (Letter x : Letter.values()) {
    foo(x);
}
Three questions: three-part answer:
Is it okay from a design point of view?
Absolutely. Nothing wrong with it. If you need to do lots of iterating over your enum, the stream API is the clean way to go, and hiding the boilerplate behind a little method is fine. Although I'd consider OldCumudgeon's version even better.
Is it okay from a performance point of view?
It most likely doesn’t matter. Most of the time, enums are not that big. Therefore, whatever overhead there is for one method or the other probably doesn’t matter in 99.9% of the cases.
Of course, there are the 0.1% where it does. In that case: measure properly, with your real-world data and consumers.
If I had to bet, I'd expect the for-each loop to be faster, since it maps more directly to the memory model, but don't guess when talking performance, and don't tune before there is an actual need for tuning. Write your code in a way that is correct first and easy to read second, and only then worry about performance.
Why aren’t Enums properly integrated into the Stream API?
If you compare Java's Stream API to the equivalent in many other languages, it appears seriously limited. There are various pieces that are missing (reusable Streams and Optionals as Streams, for example). On the other hand, implementing the Stream API was certainly a huge change for the API. It was postponed multiple times for a reason. So I guess Oracle wanted to limit the changes to the most important use cases.
Enums aren't used that much anyway. Sure, every project has a couple of them, but they're nothing compared to the number of Lists and other Collections. Even when you have an Enum, in many cases you won't ever iterate over it. Lists and Sets, on the other hand, are probably iterated over almost every time. I assume that these were the reasons why Enums didn't get their own adapter to the Stream world. We'll see whether more of this gets added in future versions. And until then you can always use Arrays.stream.
My guess is that enums are limited in size (i.e. the size is not limited by the language but by usage) and thus they don't need a native stream API. Streams are very good when you have to manipulate, transform, and recollect the elements of a stream; these are not common use cases for an Enum (usually you iterate over enum values, but rarely do you need to transform, map, and collect them).
If you only need to perform an action on each element, perhaps you should expose only a forEach method:
public static void forEach(Consumer<Letter> action) {
    Arrays.stream(Letter.values()).forEach(action);
}
Example of usage:
Letter.forEach(e -> System.out.println(e));
I think the shortest code to get a Stream of enum constants is Stream.of(Letter.values()). It's not as nice as Letter.values().stream() but that's an issue with arrays, not specifically enums.
Moreover, why don't enums have a stream() method?
You are right that the nicest possible call would be Letter.stream(). Unfortunately a class cannot have two methods with the same signature, so it would not be possible to implicitly add a static method stream() to every enum (in the same way that every enum has an implicitly added static method values()) as this would break every existing enum that already has a static or instance method without parameters called stream().
Is this approach OK?
I think so. The drawback is that stream is a static method, so there is no way to avoid code duplication; it would have to be added to every enum separately.
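One way to dodge that per-enum duplication is a generic helper that takes the enum class (a sketch of mine, not from the question):
import java.util.EnumSet;
import java.util.stream.Stream;

// Works for any enum type, so no per-enum stream() method is needed.
static <E extends Enum<E>> Stream<E> enumStream(Class<E> type) {
    return EnumSet.allOf(type).stream();
}

// usage: enumStream(Letter.class).forEach(l -> foo(l));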

Interface versus implementing Queue

Queue is an interface that implements many different classes. I am confused about why my textbook gives this sample code using Queue. I have also found this in an example on the Princeton website. Is this a customary way to provide code so it can be edited later to the programmer's preferred type of queue?
This is code taken from an algorithm for a Binary Search Symbol Table.
public Iterable<Key> keys(Key lo, Key hi) {
    Queue<Key> q = new Queue<Key>();
    for (int i = rank(lo); i < rank(hi); i++) {
        q.enqueue(keys[i]);
    }
    if (contains(hi)) {
        q.enqueue(keys[rank(hi)]);
    }
    return q;
}
So first, it's the other way around: many classes implement Queue; it's not that "Queue is an interface that implements many different classes."
Second, yes, the code convention is that you use the interface wherever possible to make code more flexible in case a different implementation is desired. This is especially true of method signatures, but it is also good practice for variable and field declarations.
Third, looks like a typo. Line should be Queue<Key> q = new LinkedList<Key>();
See answer here: Why are variables declared with their interface name in Java?
Also, What does it mean to “program to an interface”?
I do this often for other interfaces where the specific implementation doesn't provide anything over the interface, like List.
List<String> strings = new ArrayList<>();
In that case I retain the most flexibility with the least amount of hassle.
Granted, you won't change implementations very often, but there are some specific ones where, for example, inserting takes O(n) time and lookup takes O(log n) time or vice versa, and in those cases you have to do some research into how you're using the interface.
Do you change the contents of the collection a lot, but read it sparingly? Or is it the other way around and do you insert just once and read often?
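Programming to the interface keeps that decision cheap to revisit; a sketch (the access-pattern comments are illustrative):
// Only the constructor call changes if the access pattern changes;
// every caller that sees only the List interface stays untouched.
List<String> strings = new ArrayList<>();      // fast random access
// List<String> strings = new LinkedList<>();  // cheap insert/remove at the ends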
Your question is very broad. This technique is called programming to an interface, and it's the result of years of accumulated good coding practices. It just makes things easier in so many ways; it revolutionised the way we write software today. If you're an eager learner and want to know more, check this article as a starter.

Statically Typing a Scripting Language in Java

I'm building a scripting language in Java for a game, and I'm currently working on the parser. The language is to be utilized by players/modders/myself to create custom spells and effects. However, I'm having difficulty imagining how to smoothly implement static typing in the current system (a painful necessity driven by performance needs). I don't care so much if compilation is fast, but actual execution needs to be as fast as I can get it (within reason, at least. I'm hoping to get this done pretty soon.)
So the parser has next() and peek() methods to iterate through the stream of tokens. It's currently built of a hierarchy of methods that call each other in a fashion that preserves type precedence (the "bottom-most" method returning a constant, variable, etc.). Each method returns an IResolve that has a generic type <T> it "resolves" to. For example, here's a method that handles "or" expressions, with "and" being more tightly coupled:
protected final IResolve checkGrammar_Or() throws ParseException
{
    IResolve left = checkGrammar_And();
    if (left == null)
        return null;
    if (peek().type != TokenType.IDENTIFIER || !"or".equals((String)peek().value))
        return left;
    next();
    IResolve right = checkGrammar_Or();
    if (right == null)
        throwExpressionException();
    return new BinaryOperation(left, right, new LogicOr());
}
The problem is when I need to implement a function that depends on the type. As you probably noticed, the generic type isn't being specified by the parser, and is part of the design problem. In this function, I was hoping to do something like the following (though this wouldn't work due to generic types' erasure...)
protected final IResolve checkGrammar_Comparison() throws ParseException
{
    IResolve left = checkGrammar_Term();
    if (left == null)
        return null;
    IBinaryOperationType op;
    switch (peek().type)
    {
        default:
            return left;
        case LOGIC_LT:
            // This ain't gonna work because of erasure
            if (left instanceof IResolve<Double>)
                op = new LogicLessThanDouble();
            break;
        // And the same for these
        case LOGIC_LT_OR_EQUAL:
        case LOGIC_GT:
        case LOGIC_GT_OR_EQUAL:
    }
    next();
    IResolve right = checkGrammar_Comparison();
    if (right == null)
        throwExpressionException();
    return new BinaryOperation(left, right, op);
}
The problem spot, where I'm wishing I could make the connection, is in the switch statement. I'm already certain I'll need to make IResolve non-generic and give it a "getType()" method that returns an int or something, especially if I want to support user-defined classes in the future.
The question is:
What's the best way to achieve static typing given my current structure and the desire for mixed inheritance (user-defined classes and interfaces, like Java and C#)? If there is no good way, how can I alter or even rebuild my structure to achieve it?
Note: I don't claim to have any idea what I've gotten myself into, constructive criticism is more than welcome. If I need to clarify anything, let me know!
Another note: I know you're thinking "Why static typing?", and normally I'd agree with you. However, the game world is composed of voxels (it's a Minecraft mod, to be precise) and working with them needs to be fast. Imagine a script that's an O(n^2) algorithm iterating over 100 blocks twenty times a second, for 30+ players on a cheap server that's already barely squeaking by... or a single, massive explosion affecting thousands of blocks, inevitably causing a horrendous lag spike. Hence, backend type checking or any form of duck-typing ain't gonna cut it (though I'm desperately aching for it atm.) The low-level benefits are a necessity in this particular case, painful though it is.
You can get the best of both worlds by adding a method Class<T> getType() to IResolve; its implementers should simply return the appropriate Class object. (If the implementers themselves are generic, you need to get a reference to that object in the constructor or something.)
You can then do left.getType().equals(Double.class), etc.
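A minimal sketch of that suggestion (the resolve() method and the implementing class are assumptions for illustration):
interface IResolve<T> {
    T resolve();
    Class<T> getType();
}

// Example implementer: a numeric constant node in the parse tree.
class DoubleConstant implements IResolve<Double> {
    private final double value;
    DoubleConstant(double value) { this.value = value; }
    public Double resolve() { return value; }
    public Class<Double> getType() { return Double.class; }
}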
This is entirely separate from the question of whether you should build your own parser with static typing, which is very much worth asking.
The solution I went with, as some suggested in the comments, was to separate parsing and typing into separate phases, along with using an enum to represent type, as I originally felt I should.
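Roughly like this (a sketch; the enum constants and resolve() method are placeholders):
// Non-generic IResolve with an enum type tag, which leaves room
// for user-defined types later.
enum ValueType { DOUBLE, BOOLEAN, STRING, USER_DEFINED }

interface IResolve {
    Object resolve();
    ValueType getType();
}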
While I appreciate Taymon's answer, I can't use it if I hope to support user defined classes in the future.
If someone has a better solution, I'd be more than happy to accept it!

Why does Scala implement for as a closure?

Recent events on the blogosphere have indicated that a possible performance problem with Scala is its use of closures to implement for.
What are the reasons for this design decision, as opposed to a C or Java-style "primitive for" - that is one which will be turned into a simple loop?
(I'm making a distinction between Java's for and its "foreach" construct here, as the latter involves an implicit Iterator).
More detail, following up from Peter. This bit of Scala:
object ScratchFor {
  def main(args: Array[String]): Unit = {
    for (s <- args) {
      println(s)
    }
  }
}
creates 3 classes: ScratchFor$$anonfun$main$1.class, ScratchFor$.class, and ScratchFor.class.
ScratchFor::main just forwards to the companion object, ScratchFor$.MODULE$::main, which spins up a ScratchFor$$anonfun$main$1 (which is an implementation of AbstractFunction1).
It's in the apply() method of this anonymous inner impl of AbstractFunction1 that the actual code lives, which is effectively the loop body.
I don't see HotSpot being able to rewrite this into a simple loop. Happy to be proved wrong on this, though.
Traditional for loops are clumsy, verbose and error-prone. I think it is proof enough of this that "for-each" loops were added to Java, C# and C++, but if you want more details you may check item 46 of Effective Java.
Now, for-each loops are still much faster than Scala for-comprehension, but they are also much less powerful (and more clumsy) because they cannot return values. If you want to transform or filter a collection (or do both to a group of collections), you'll still have to handle all the mechanical details of constructing the result collection in addition to computing the values. Not to mention it inevitably uses some mutable state.
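In Java terms, the mechanics look like this (a sketch; a Person class with getFirst()/getLast() is hypothetical):
// The result list and the add() call are pure mechanics; only the
// concatenation is application logic, and 'names' is mutable state.
List<String> names = new ArrayList<String>();
for (Person p : people) {
    names.add(p.getLast() + ", " + p.getFirst());
}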
Finally, even though for-each loops are adequate enough for collections, they are not suited to other monadic classes (of which collections are a subset).
So Scala has a general method which takes care of all of the above. Yes, it is slower, but the goal is to have the compiler effectively optimise it well enough so that this doesn't become a hindrance (and, of course, JIT could help here as well).
That has not been accomplished to date, but -optimise has closed a lot of the gap between common for-each loops and for-comprehensions in the latest versions of Scala. If performance is essential, you can always use while or tail recursion.
Now, it would be possible for Scala to have common for loops or for-each loops as special cases specifically targeted at performance issues (since for-comprehensions can do everything they do). However, that violates two principles that guide Scala's design:
Reduce complexity. Yes, contrary to what some say, that is a design goal, and special cases that serve no purpose other than optimising performance -- even though a workable solution exists for performance cases -- would needlessly increase the complexity of the language.
Scalability. This is in the sense that the user can scale the language to any size of problem by writing libraries. The point here is that having the compiler optimise one particular class, such as Range, would make it impossible for the user to create a replacement class that would perform just as well.
The for comprehension in Scala is a powerful general-purpose looping and pattern-matching construct. Look at what it can do:
case class Person(first: String, last: String) {}
val people = List(Person("Isaac","Newton"), Person("Michael","Jordan"))
val lastfirst = for (Person(f,l) <- people) yield l+", "+f
for (n <- lastfirst) println(n)
The second case looks pretty straightforward--take each item in a collection and print it. But the first takes apart a list containing a custom data structure and transforms it into a different collection type!
The first for there highlights only a small portion of the capability of the construct; it is both extremely powerful and extremely general. In order to maintain this power, the for must be able to turn into something very general, which means closures. Then the question is: do you also introduce special cases that operate on known collections in simple ways with improved performance? The answer thus far has been mostly no, instead preferring solutions that optimize the general closure-taking methods that for turns into.
Whether this is useful for you in particular depends on whether you are using the general capabilities a lot (in which case you will be glad) or not (in which case you may wish progress was faster).
Still, try -optimise. It often usefully speeds up simple for-comprehensions these days.
The for-comprehension is much more than a simple loop.
If you need an imperative loop, use while. If you want to write performant code in Scala, you need to know this. Just like you have to know about language implementation when you want to write fast code in every other language.
So, since the for-comprehension is not a simple loop, I hope you understand that it's not compiled down to a simple loop.
I would assume using a closure is a general solution. A more optimal solution in some cases would be to "inline" the closure as a loop and eliminate the need to create an object. Perhaps the Scala designers feel the JIT should do this, rather having the compiler do this.
Let's say in Java this is the same as writing
public static void main(String... args) {
    for_loop(args, new Function<String>() {
        public void apply(String s) {
            System.out.println(s);
        }
    });
}

interface Function<T> {
    void apply(T s);
}

// Note: the array comes first because varargs would have to be the
// last parameter; an array parameter keeps the original call order.
public static <T> void for_loop(T[] ts, Function<T> tFunc) {
    for (T t : ts) tFunc.apply(t);
}
This is fairly easy to inline (if you're a human). What is surprising is that Scala doesn't have an intrinsic to perform the optimisation to eliminate the need for a new object. Certainly the JIT could do it in theory, but in practice, it might be a while before it handles this specific case.
I'm surprised that no one has mentioned one of the pitfalls you can get into if for does not create a closure.
In Python for example:
ls = [None] * 3
for i in [0, 1, 2]:
    ls[i] = lambda: i
print(ls[0]())
print(ls[1]())
print(ls[2]())
This prints 2 2 2, because i has a longer lifetime than the for loop. I run into this trap all the time in Python and R.
So even in the very simplest of cases, it is important for for in Scala to be implemented using an anonymous function, because it creates an environment to store variables.

Best practice for passing many arguments to method?

Occasionally, we have to write methods that receive many, many arguments, for example:
public void doSomething(Object objA, Object objectB, Date date1, Date date2, String str1, String str2)
{
}
When I encounter this kind of problem, I often encapsulate the arguments into a map:
Map<Object, Object> params = new HashMap<Object, Object>();
params.put("objA", objA);
......
public void doSomething(Map<Object, Object> params)
{
    // extracting params
    Object objA = (Object) params.get("objA");
    ......
}
This is not good practice; encapsulating params into a map is a waste of efficiency.
The good thing is the clean signature, and it's easy to add other params with the fewest modifications.
What's the best practice for this kind of problem?
In Effective Java, Chapter 7 (Methods), Item 40 (Design method signatures carefully), Bloch writes:
There are three techniques for shortening overly long parameter lists:
break the method into multiple methods, each of which requires only a subset of the parameters
create helper classes to hold group of parameters (typically static member classes)
adapt the Builder pattern from object construction to method invocation.
For more details, I encourage you to buy the book, it's really worth it.
Using a map with magical String keys is a bad idea. You lose any compile time checking, and it's really unclear what the required parameters are. You'd need to write very complete documentation to make up for it. Will you remember in a few weeks what those Strings are without looking at the code? What if you made a typo? Use the wrong type? You won't find out until you run the code.
Instead use a model. Make a class which will be a container for all those parameters. That way you keep the type safety of Java. You can also pass that object around to other methods, put it in collections, etc.
Of course if the set of parameters isn't used elsewhere or passed around, a dedicated model may be overkill. There's a balance to be struck, so use common sense.
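For the method in the question, such a model could look like this (a sketch; the class name is mine):
// A typed container replacing the Map<Object, Object>: the compiler
// now checks every field, and typos fail at compile time.
public class DoSomethingParams {
    private final Object objA;
    private final Object objB;
    private final Date date1;
    private final Date date2;
    private final String str1;
    private final String str2;

    public DoSomethingParams(Object objA, Object objB, Date date1,
                             Date date2, String str1, String str2) {
        this.objA = objA;
        this.objB = objB;
        this.date1 = date1;
        this.date2 = date2;
        this.str1 = str1;
        this.str2 = str2;
    }
    // getters omitted
}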
If you have many optional parameters you can create a fluent API: replace the single method with a chain of methods:
exportWithParams().datesBetween(date1, date2)
    .format("xml")
    .columns("id", "name", "phone")
    .table("angry_robots")
    .invoke();
Using static import you can create inner fluent APIs:
... .datesBetween(from(date1).to(date2)) ...
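The statically imported from(...).to(...) helper could be as small as this (a sketch; all names hypothetical):
public final class DateRange {
    private final Date start;
    private Date end;

    private DateRange(Date start) { this.start = start; }

    // import static DateRange.from; then: datesBetween(from(date1).to(date2))
    public static DateRange from(Date start) { return new DateRange(start); }
    public DateRange to(Date end) { this.end = end; return this; }
}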
It's called "Introduce Parameter Object". If you find yourself passing same parameter list on several places, just create a class which holds them all.
XXXParameter param = new XXXParameter(objA, objB, date1, date2, str1, str2);
// ...
doSomething(param);
Even if you don't find yourself passing the same parameter list very often, this easy refactoring will still improve your code's readability, which is always good. If you look at your code 3 months later, it will be easier to comprehend when you need to fix a bug or add a feature.
It's a general philosophy of course, and since you haven't provided any details, I cannot give you more detailed advice either. :-)
First, I'd try to refactor the method. If it's using that many parameters, it may be too long anyway. Breaking it down would both improve the code and potentially reduce the number of parameters to each method. You might also be able to refactor the entire operation into its own class.
Second, I'd look for other instances where I'm using the same parameter list (or a superset of it). Multiple instances likely signal that these properties belong together. In that case, create a class to hold the parameters and use it.
Lastly, I'd evaluate whether the number of parameters makes it worth creating a map object to improve code readability. I think this is a personal call: there is pain each way with this solution, and where the trade-off point lies may differ. For six parameters I probably wouldn't do it. For 10 I probably would (if none of the other approaches worked first).
This is often a problem when constructing objects.
In that case, use the Builder pattern: it works well if you have a big list of parameters and don't always need all of them.
You can also adapt it to method invocation.
It also increases readability a lot.
public class BigObject
{
    // public getters
    // private setters
    public static class Builder
    {
        private A f1;
        private B f2;
        private C f3;
        private D f4;
        private E f5;
        public Builder setField1(A f1) { this.f1 = f1; return this; }
        public Builder setField2(B f2) { this.f2 = f2; return this; }
        public Builder setField3(C f3) { this.f3 = f3; return this; }
        public Builder setField4(D f4) { this.f4 = f4; return this; }
        public Builder setField5(E f5) { this.f5 = f5; return this; }
        public BigObject build()
        {
            BigObject result = new BigObject();
            result.setField1(f1);
            result.setField2(f2);
            result.setField3(f3);
            result.setField4(f4);
            result.setField5(f5);
            return result;
        }
    }
}
// Usage:
BigObject boo = new BigObject.Builder()
    .setField1(/* whatever */)
    .setField2(/* whatever */)
    .setField3(/* whatever */)
    .setField4(/* whatever */)
    .setField5(/* whatever */)
    .build();
You can also put verification logic into the Builder's set...() and build() methods.
There is a pattern called Parameter Object.
The idea is to use one object in place of all the parameters. Now even if you need to add parameters later, you just need to add them to the object; the method's interface remains the same.
You could create a class to hold that data. It needs to be meaningful enough, though, but it's much better than using a map (OMG).
Code Complete* suggests a couple of things:
"Limit the number of a routine's parameters to about seven. Seven is a magic number for people's comprehension" (p 108).
"Put parameters in input-modify-output order ... If several routines use similar parameters, put the similar parameters in a consistent order" (p 105).
Put status or error variables last.
As tvanfosson mentioned, pass only the parts of structured variables (objects) that the routine needs. That said, if you're using most of the structured variable in the function, then just pass the whole structure, but be aware that this promotes coupling to some degree.
* First Edition, I know I should update. Also, it's likely that some of this advice may have changed since the second edition was written when OOP was beginning to become more popular.
Using a Map is a simple way to clean up the call signature, but then you have another problem: you need to look inside the method's body to see what the method expects in that Map, what the key names are, or what types the values have.
A cleaner way would be to group all parameters into an object bean, but that still does not fix the problem entirely.
What you have here is a design issue. With more than 7 parameters to a method, you will start to have problems remembering what they represent and what order they come in. From there you will get lots of bugs just from calling the method with parameters in the wrong order.
You need a better design of the app, not a best practice for sending lots of parameters.
Good practice would be to refactor. What about these objects means that they should be passed in to this method? Should they be encapsulated into a single object?
Create a bean class, set all the parameters (setter methods), and pass this bean object to the method.
Look at your code, and see why all those parameters are passed in. Sometimes it is possible to refactor the method itself.
Using a map leaves your method vulnerable. What if somebody using your method misspells a parameter name, or passes a string where your method expects a UDT?
Define a Transfer Object. It'll provide you with type-checking at the very least; it may even be possible for you to perform some validation at the point of use instead of within your method.
I would say stick with the way you did it before.
The number of parameters in your example is not a lot, but the alternatives are much more horrible.
Map - There's the efficiency thing that you mentioned, but the bigger problems here are:
Callers don't know what to send you without referring to something else... Do you have javadocs which state exactly what keys and values are used? If you do (which is great), then having lots of parameters isn't a problem either.
It becomes very difficult to accept different argument types. You can either restrict input parameters to a single type, or use Map<String, Object> and cast all the values. Both options are horrible most of the time.
Wrapper objects - this just moves the problem since you need to fill the wrapper object in the first place - instead of directly to your method, it will be to the constructor of the parameter object.
Whether moving the problem is appropriate or not depends on the reuse of said object. For instance:
Would not use it: It would only be used once on the first call, so a lot of additional code to deal with 1 line...?
{
    AnObject h = obj.callMyMethod(a, b, c, d, e, f, g);
    SomeObject i = obj2.callAnotherMethod(a, b, c, h);
    FinalResult j = obj3.callAFinalMethod(c, e, f, h, i);
}
May use it: Here, it can do a bit more. First, it can factor out the parameters for 3 method calls. It can also perform 2 other lines by itself... so it becomes a state variable in a sense...
{
    AnObject h = obj.callMyMethod(a, b, c, d, e, f, g);
    e = h.resultOfSomeTransformation();
    SomeObject i = obj2.callAnotherMethod(a, b, c, d, e, f, g);
    f = i.somethingElse();
    FinalResult j = obj3.callAFinalMethod(a, b, c, d, e, f, g, h, i);
}
Builder pattern - this is an anti-pattern in my view. The most desirable error-handling mechanism is to detect errors earlier, not later; but with the builder pattern, calls with missing mandatory parameters (ones the programmer did not think to include) are moved from compile time to run time. Of course, if the programmer intentionally put null or such in the slot, that'll be runtime anyway, but catching some errors earlier is still a much bigger advantage than catering to programmers who refuse to look at the parameter names of the method they are calling.
I find it only appropriate when dealing with a large number of optional parameters, and even then the benefit is marginal at best. I am very much against the builder "pattern".
The other thing people forget to consider is the role of the IDE in all this.
When methods have parameters, IDEs generate most of the code for you, and you have the red lines reminding you what you need to supply/set. When using option 3... you lose this completely. It's now up to the programmer to get it right, and there are no cues during coding and compile time... the programmer must test it to find out.
Furthermore, options 2 and 3, if adopted widely and unnecessarily, have long-term negative implications for maintenance due to the large amount of duplicate code they generate. The more code there is, the more there is to maintain, and the more time and money is spent maintaining it.
This is often an indication that your class holds more than one responsibility (i.e., your class does TOO much). See The Single Responsibility Principle for further details.
If you are passing too many parameters, try to refactor the method. Maybe it is doing a lot of things it is not supposed to do. If that is not the case, try substituting the parameters with a single class. This way you can encapsulate everything in a single class instance and pass that instance around instead of the parameters.
... and Bob's your uncle: No-hassle fancy-pants APIs for object creation!
https://projectlombok.org/features/Builder
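With Lombok's @Builder, the hand-written builder from the earlier answer shrinks to roughly this (a sketch; the field types follow that example):
import lombok.Builder;

// Lombok generates the builder() method, the per-field setters,
// and build() at compile time.
@Builder
public class BigObject {
    private A f1;
    private B f2;
    private C f3;
    private D f4;
    private E f5;
}

// usage: BigObject boo = BigObject.builder().f1(a).f5(e).build();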
