Java 8 way to work with an enum [closed] - java

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 4 years ago.
I'm wondering what the best way is in Java 8 to work with all the values of an enum. Specifically, when you need to get all the values and add them somewhere. For example, suppose we have the following enum:
public enum Letter {
    A, B, C, D;
}
I could of course do the following:
for (Letter l : Letter.values()) {
    foo(l);
}
But, I could also add the following method to the enum definition:
public static Stream<Letter> stream() {
    return Arrays.stream(Letter.values());
}
And then replace the for from above with:
Letter.stream().forEach(l -> foo(l));
Is this approach OK or does it have some fault in design or performance? Moreover, why don't enums have a stream() method?

I'd go for EnumSet. Because forEach() is also defined on Iterable, you can avoid creating the stream altogether:
EnumSet.allOf(Letter.class).forEach(x -> foo(x));
Or with a method reference:
EnumSet.allOf(Letter.class).forEach(this::foo);
Still, the oldschool for-loop feels a bit simpler:
for (Letter x : Letter.values()) {
foo(x);
}

Three questions: three-part answer:
Is it okay from a design point of view?
Absolutely. Nothing wrong with it. If you need to do lots of iterating over your enum, the Stream API is the clean way to go, and hiding the boilerplate behind a little method is fine. Although I'd consider OldCumudgeon's version even better.
Is it okay from a performance point of view?
It most likely doesn’t matter. Most of the time, enums are not that big. Therefore, whatever overhead there is for one method or the other probably doesn’t matter in 99.9% of the cases.
Of course, there are the 0.1% where it does. In that case: measure properly, with your real-world data and consumers.
If I had to bet, I'd expect the for-each loop to be faster, since it maps more directly to the memory model, but don't guess when talking about performance, and don't tune before there is an actual need for tuning. Write your code in a way that is correct first, easy to read second, and only then worry about performance.
Why aren’t Enums properly integrated into the Stream API?
If you compare Java's Stream API to the equivalent in many other languages, it appears seriously limited. There are various pieces that are missing (reusable Streams and Optionals as Streams, for example). On the other hand, implementing the Stream API was certainly a huge change for the API. It was postponed multiple times for a reason. So I guess Oracle wanted to limit the changes to the most important use cases.
Enums aren't used that much anyway. Sure, every project has a couple of them, but they're nothing compared to the number of Lists and other Collections. Even when you have an Enum, in many cases you won't ever iterate over it. Lists and Sets, on the other hand, are probably iterated over almost every time. I assume that these were the reasons why Enums didn't get their own adapter to the Stream world.
We'll see whether more of this gets added in future versions. Until then, you can always use Arrays.stream.

My guess is that enums are limited in size (i.e. the size is not limited by the language but by usage) and thus they don't need a native stream API. Streams are very good when you have to manipulate, transform, and re-collect the elements of a stream; these are not common use cases for an enum (usually you iterate over enum values, but rarely do you need to transform, map, and collect them).
If you only need to perform an action on each element, perhaps you should expose only a forEach method:
public static void forEach(Consumer<Letter> action) {
    Arrays.stream(Letter.values()).forEach(action);
}
// example of usage
Letter.forEach(e -> System.out.println(e));

I think the shortest code to get a Stream of enum constants is Stream.of(Letter.values()). It's not as nice as Letter.values().stream() but that's an issue with arrays, not specifically enums.
Moreover, why don't enums have a stream() method?
You are right that the nicest possible call would be Letter.stream(). Unfortunately a class cannot have two methods with the same signature, so it would not be possible to implicitly add a static method stream() to every enum (in the same way that every enum has an implicitly added static method values()) as this would break every existing enum that already has a static or instance method without parameters called stream().
Is this approach OK?
I think so. The drawback is that stream is a static method, so there is no way to avoid code duplication; it would have to be added to every enum separately.
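To make the comparison concrete, here is a minimal, self-contained sketch (the class and method names are mine, not from the question) showing that Arrays.stream(values()) and Stream.of(values()) produce the same elements:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LetterStreams {
    public enum Letter { A, B, C, D }

    // Stream the constants via Arrays.stream, as in the question's stream() method.
    public static List<String> viaArraysStream() {
        return Arrays.stream(Letter.values()).map(Enum::name).collect(Collectors.toList());
    }

    // Stream the constants via Stream.of, as suggested in this answer.
    public static List<String> viaStreamOf() {
        return Stream.of(Letter.values()).map(Enum::name).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(viaArraysStream()); // [A, B, C, D]
        System.out.println(viaStreamOf());     // [A, B, C, D]
    }
}
```

Both forms clone the backing array once (inside values()) and then stream over it, so there is no meaningful difference between them beyond readability.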


Using java Streams with only one item as a computation context - good practice or not? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I constantly ask myself whether creating a Stream of one item is good practice or an abuse of its features. We're talking about code like this:
Stream.of(object)
    .map(this::doSomething)
    .map(this::doSomething)
    .findAny();
Like Optionals, Streams make it possible to write more declarative code in Java, without the need to bind instructions to functions and compose them:
Function<X, Y> doSomething = /* some computation */;
Function<X, Y> doSomethingElse = /* other computation */;
doSomething.andThen(doSomethingElse).apply(object); // or using compose
The classic style of binding intermediary variables would produce code like this:
var x = doSomething(object);
var y = doSomethingElse(x);
While still completely valid and readable, using the stream construct we gain the freedom of composing several small functions sequentially - something that leads to better code, in my opinion (it's easy to just plug in new Function calls).
I could hack this as:
identity(Value.class)
.andThen(this::doSomething)
.andThen(something::execute)
.andThen(this::doSomethingElse)
.apply(value);
But i need a generic method like:
private <T> UnaryOperator<T> identity(Class<T> t) {
    return UnaryOperator.identity();
}
I can, of course, think of ways to reuse this code, but I feel there should be a better, native way of starting this function chain.
Is there something native to Java, or a better way to start this function-composition chain, without binding a Function to a name?
Imho, this feels a bit overkill.
In the background a lot of objects get instantiated to accomplish such a simple task.
Stream is here to manage streams of elements.
Also
.findAny();
.findFirst();
aren't really appropriate for the situation, mostly because they return an Optional<T> and you'll have to deal with it even when you know the result is always present.
As for Optional.of: why Optional? There is nothing optional here; you know the value is there. It's misleading. Use Optional once in the wrong place and you'll have it all over the place.
There is nothing wrong in having multiple consecutive method calls that create an object B starting from an object A.
The key points are immutability and intermediate state. Just use immutable definitions and you'll be okay.
Remember, Java (still) isn't a functional-first language.
Anyway, you might be interested in RxJava or FunctionalJava.
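For what it's worth, the generic identity(Class<T>) helper from the question can be avoided natively: Function.identity() accepts an explicit type witness, which is enough to start a composition chain. A minimal sketch with stand-in names (ChainDemo, doSomething, and doSomethingElse are mine, not from the question):

```java
import java.util.function.Function;

public class ChainDemo {
    // Stand-ins for the question's computations.
    static String doSomething(String s) { return s + "-a"; }
    static String doSomethingElse(String s) { return s + "-b"; }

    public static String process(String value) {
        // Function.<String>identity() supplies the type witness that the
        // question's identity(Class<T>) helper was emulating.
        return Function.<String>identity()
                .andThen(ChainDemo::doSomething)
                .andThen(ChainDemo::doSomethingElse)
                .apply(value);
    }

    public static void main(String[] args) {
        System.out.println(process("x")); // x-a-b
    }
}
```

This gives the same "plug in another step" ergonomics without any Stream or Optional machinery.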

Reducing complexity of large switch statements [duplicate]

This question already has answers here:
Eliminating `switch` statements [closed]
(23 answers)
Closed 5 years ago.
In the codebase I'm currently working on, it's common to have to take a string passed in from further up the chain and use it as a key to find a different String. The current standard idiom is to use switch statements; however, for larger switch statements (think ~20-30 cases), SonarQube flags this as a code smell that should be refactored to reduce cyclomatic complexity. My current solution is to use a static HashMap, like so:
private static final HashMap<String, String> sortMap;
static {
    sortMap = new HashMap<>();
    sortMap.put("foo1", "bar1");
    sortMap.put("foo2", "bar2");
    sortMap.put("foo3", "bar3");
    // etc.
}
protected String mapSortKey(String key) {
    return sortMap.get(key);
}
However this doesn't seem to actually be any cleaner, and if anything seems more confusing for maintainers. Is there a better way to solve this? Or should sonarqube be ignored in this situation? I am aware of using polymorphism, i.e. Ways to eliminate switch in code, however that seems like it is overkill for this problem, as the switch statements are being used as makeshift data structures rather than as rudimentary polymorphism. Other similar questions I've found about reducing switch case cyclomatic complexity aren't really applicable in this instance.
If, by your example, this is just the case of choosing a mapped value from a key, a table or properties file would be a more appropriate way to handle this.
If you're talking about logic within the different switch statements, you might find that a rules engine would suit better.
You hit upon the major requirement: maintainability. If we code in too much logic or too much data, we have made brittle code. Choose a design pattern suited to the type of switched information and export the functionality into a maintainable place for whoever must make changes later... because with a long list like this, chances are high that changes will be occurring with some frequency.
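As a sketch of the properties-file approach (the class name and keys are illustrative; a real application would load the file from the classpath rather than from an inline string):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class SortKeyMapper {
    private static final Properties SORT_MAP = new Properties();
    static {
        // Stand-in for loading "sortkeys.properties" from the classpath.
        try {
            SORT_MAP.load(new StringReader("foo1=bar1\nfoo2=bar2\nfoo3=bar3\n"));
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Returns null for unknown keys, like the HashMap version in the question.
    public static String mapSortKey(String key) {
        return SORT_MAP.getProperty(key);
    }

    public static void main(String[] args) {
        System.out.println(mapSortKey("foo2"));
    }
}
```

The payoff is that maintainers edit a flat key=value file instead of recompiling a switch or a static initializer.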

Inefficiency of defensive copy in Java [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I'm a longtime C/C++ programmer who's learning Java. I've read about the problem of breaking encapsulation by having an accessor method that returns a reference to a private field. The standard Java solution seems to be defensive copying - calling the copy constructor or clone() to create a copy of the field and returning a reference to the copy. I don't understand why no one seems concerned about the inefficiency of making a defensive copy. In C++, the accessor would just return a pointer to const, protecting the private member without copying. Why doesn't Java have a reference to const?
Why doesn't Java have a reference to const?
Questions can only be properly answered by the language designer, but I think that the problem was that they couldn't figure out how to make it work as part of the language design. My recollection (from some "Java design rationale" document that I came across one time) was that Gosling et al originally wanted to support const ...
In fact, though both C and C++ both support const as a way of expressing a mutability constraint, they both also have loopholes that allow some code to "break" the constraint. (See the Wikipedia article on const-correctness.) It could be that it was the difficulty of coming up with a design for Java that didn't have (or need) such loopholes that caused Gosling et al to abandon that idea.
The flip-side is that the need for defensive copying in Java is not as great as you might imagine, and that the cost of doing it is not as great as you might imagine. And when the cost of the defensive copy is significant, there are other options in Java ... like creating "unmodifiable" wrapper objects, or interfaces that only support "read" operations.
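As a minimal illustration of the unmodifiable-wrapper option (the class and field names are mine): Collections.unmodifiableList returns a read-only view without copying any elements, and mutation attempts through the view fail at runtime:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class UnmodifiableDemo {
    private final List<String> names = new ArrayList<>(Arrays.asList("alice", "bob"));

    // Returns a read-only view of the private list; no elements are copied,
    // unlike a defensive copy.
    public List<String> getNames() {
        return Collections.unmodifiableList(names);
    }

    public static void main(String[] args) {
        UnmodifiableDemo d = new UnmodifiableDemo();
        try {
            d.getNames().add("eve");
        } catch (UnsupportedOperationException e) {
            System.out.println("rejected"); // the wrapper blocks mutation
        }
    }
}
```

Note the trade-off versus C++ const: the restriction is enforced at runtime, not at compile time.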
NOTE: I don't know that you're going to find a direct answer to the question of why there is no const in Java. (My guess is that, due to the dynamic nature of Java, a const keyword did not provide as many compiler optimizations and thus was deemed not worth putting in - but that's just my opinion. [In C++ you have all of the final concrete types available to you at compile time; in Java, you don't.])
As for what is generally done instead, you have to make a decision based on the data types and what your fields semantically mean.
Some common types (String, Date) are immutable and are designed to be passed around and returned from getters with as little overhead as possible while not allowing modification.
As #DavidWallace points out, there are methods which create a shallow, unmodifiable copy of a Map and allow you to return that to caller - to guarantee they won't mess with it. This is a practical solution, however it does not enforce it at compile time.
If we're talking about maps: A java.util.Map is, per the interface contract, mutable. To achieve something close to const, you can easily create a different but simple interface with only a lookup method:
public interface Lookup<K, V> {
    V get(K key);
}
And return an instance of that. Thus guaranteeing nobody will modify it... at compile time... because no such method exists.
And implementing a MapLookup that implements the above by wrapping a map without making a copy is like 5 lines of code.
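For illustration, here is one possible shape of that wrapper (the nesting and names are mine; the interface is the one proposed above):

```java
import java.util.HashMap;
import java.util.Map;

public class LookupDemo {
    // Read-only interface: no mutating method exists, so callers cannot modify.
    public interface Lookup<K, V> {
        V get(K key);
    }

    // Wraps a Map without copying it; exposes only the lookup operation.
    public static final class MapLookup<K, V> implements Lookup<K, V> {
        private final Map<K, V> map;
        public MapLookup(Map<K, V> map) { this.map = map; }
        @Override public V get(K key) { return map.get(key); }
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("one", 1);
        Lookup<String, Integer> lookup = new MapLookup<>(m);
        System.out.println(lookup.get("one")); // 1
    }
}
```

No defensive copy is made; the wrapper simply narrows the interface, which is exactly the "read-only at compile time" effect the answer describes.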
If you want to make really sure that nobody modifies what you send back, don't send back something that is mutable. (And there's no reason it has to be done via an inefficient deep copy, as explained above.)

When KISS and DRY collide [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I'm an obsessive follower of the DRY and KISS principles, but last week I had a case where the two seemed to contradict each other:
For an application I was working on, I had to implement a loop four times which does the following:
iterate over the elements of a list of type A
convert the element of type A to type B and insert them into a list of type B
Here's an example:
for (A a : listOfA) {
    listOfB.add(BFactory.convertFromAToB(a));
}
Within the code, I have to do this about 4 times, converting one type into another (e.g. C to D, D to E, etc.). I may not be able to change the types I'm converting, as they are third-party types which we have to use in our app.
So we have:
for (A a : listOfA) {
    listOfB.add(BFactory.convertFromAToB(a));
}
for (C c : listOfC) {
    listOfD.add(DFactory.convertFromCToD(c));
}
...
So, to not violate DRY, I came up with a generic solution:
private interface Function<S, T> {
    T apply(S s);
}
public <S, T> void convertAndCopy(List<S> src, List<T> dst, Function<S, T> f) {
    for (S s : src) {
        dst.add(f.apply(s));
    }
}
A call looks something like this:
convertAndCopy(listOfA, listOfB, new Function<A, B>() {
    public B apply(A a) {
        return BFactory.convertFromAToB(a);
    }
});
Now, while this is better in terms of DRY, I think it violates KISS, as this solution is much harder to understand than the duplicated for loops.
So, is this DRY vs. KISS? Which one to favor in this context?
EDIT
Just to be clear, the class I'm talking about is an Adapter, which delegates call to a legacy system to our own implementation, converting the legacy into our own types along the way. I have no means of changing the legacy types, nor may I change our types (which are XML-Schema-generated).
Either is fine.
With the loops, you are not really repeating yourself, because the only parts that are repetitive are "syntactic clutter" (and not too much of that in your case). You are not repeating/duplicating "application logic" code.
If you like the "Function" style, maybe make use of the Guava library (which has the Function interface and many helper methods that work with them on collections). That is DRY (because you don't repeat yourself, and re-use code that already exists), and still KISS (because those are well understood patterns).
If you only have to do this 4 times in your whole application, and the conversion is really as trivial as your examples, I would choose writing 4 for loops any time over the generic solution.
Readability suffers a lot from using that generic solution and you don't actually gain anything from it.
General principles like DRY and KISS never work all of the time.
IMO, the answer is to forget the dogma (at least for this problem), and think about what gives you the best / most readable solution.
If the duplicated x 4 code is easier to understand and it is not a maintenance burden (i.e. you don't need to change it a lot), then it is the right solution.
(And Thilo's answer is right too ... IMO)
I think it is not that KISS and DRY contradict each other; rather, Java does not let you express simplicity while not repeating yourself.
First of all, if you introduced properly named methods to convert from List<A> to List<B> and so on, instead of repeating the loop all the time, it would be DRY while still remaining KISS.
But what I would advise is to look at alternative languages that let you take full advantage of DRY while still promoting KISS, e.g. in Scala:
val listOfB = listOfA map convertAtoB
val listOfC = listOfB map convertBtoC
val listOfD = listOfC map convertCtoD
Where convertAtoB is a function taking an item of type A and returning B:
def convertAtoB(a: A): B = //...
Or you can even chain these map calls.
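For completeness, since Java 8 the same one-liner style is available in Java itself via Stream.map. This sketch uses stand-in types (Integer to String) and an illustrative class name in place of the question's A, B, and factories:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ConvertDemo {
    // Stand-in for BFactory.convertFromAToB in the question.
    static String convertAtoB(Integer a) {
        return "b" + a;
    }

    // The whole convert-and-copy loop collapses to one map/collect pipeline.
    public static List<String> convertAll(List<Integer> listOfA) {
        return listOfA.stream()
                .map(ConvertDemo::convertAtoB)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(convertAll(Arrays.asList(1, 2, 3))); // [b1, b2, b3]
    }
}
```

With method references for the converters, this stays both DRY (no hand-written loop per type pair) and KISS (a well-known idiom).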
You could move the conversion function into CFactory:
convertAndCopy(listOfA, listOfB, CFactory.getConverterFromAToB());
The code is quite readable/simple this way and you promote code reuse (maybe you will need to use the converter object later in another context).
Implementation:
public <S, T> void convertAndCopy(List<S> src, List<T> dst, Function<S, T> f) {
    dst.addAll(Collections2.transform(src, f));
}
(using Guava's Collections2.transform).
I'm not even sure that you should DRY here, you could use directly:
listOfB.addAll(Collections2.transform(listOfA,CFactory.getConverterFromAToB()));

API Design for Idiot-Proof Iteration Without Generics

When you're designing the API for a code library, you want it to be easy to use well, and hard to use badly. Ideally you want it to be idiot proof.
You might also want to make it compatible with older systems that can't handle generics, like .Net 1.1 and Java 1.4. But you don't want it to be a pain to use from newer code.
I'm wondering about the best way to make things easily iterable in a type-safe way... Remembering that you can't use generics so Java's Iterable<T> is out, as is .Net's IEnumerable<T>.
You want people to be able to use the enhanced for loop in Java (for Item i : items), and the foreach / For Each loop in .Net, and you don't want them to have to do any casting. Basically you want your API to be now-friendly as well as backwards compatible.
The best type-safe option that I can think of is arrays. They're fully backwards compatible and they're easy to iterate in a typesafe way. But arrays aren't ideal because you can't make them immutable. So, when you have an immutable object containing an array that you want people to be able to iterate over, to maintain immutability you have to provide a defensive copy each and every time they access it.
In Java, doing (MyObject[]) myInternalArray.clone(); is super-fast. I'm sure that the equivalent in .Net is super-fast too. If you have like:
class Schedule {
    private Appointment[] internalArray;
    public Appointment[] appointments() {
        return (Appointment[]) internalArray.clone();
    }
}
people can do like:
for (Appointment a : schedule.appointments()) {
    a.doSomething();
}
and it will be simple, clear, type-safe, and fast.
But they could do something like:
for (int i = 0; i < schedule.appointments().length; i++) {
    Appointment a = schedule.appointments()[i];
}
And then it would be horribly inefficient because the entire array of appointments would get cloned twice for every iteration (once for the length test, and once to get the object at the index). Not such a problem if the array is small, but pretty horrible if the array has thousands of items in it. Yuk.
Would anyone actually do that? I'm not sure... I guess that's largely my question here.
You could call the method toAppointmentArray() instead of appointments(), and that would probably make it less likely that anyone would use it the wrong way. But it would also make it harder for people to find when they just want to iterate over the appointments.
You would, of course, document appointments() clearly, to say that it returns a defensive copy. But a lot of people won't read that particular bit of documentation.
Although I'd welcome suggestions, it seems to me that there's no perfect way to make it simple, clear, type-safe, and idiot proof. Have I failed if a minority of people are unwittingly cloning arrays thousands of times, or is that an acceptable price to pay for simple, type-safe iteration for the majority?
NB I happen to be designing this library for both Java and .Net, which is why I've tried to make this question applicable to both. And I tagged it language-agnostic because it's an issue that could arise for other languages too. The code samples are in Java, but C# would be similar (albeit with the option of making the Appointments accessor a property).
UPDATE: I did a few quick performance tests to see how much difference this made in Java. I tested:
cloning the array once, and iterating over it using the enhanced for loop
iterating over an ArrayList using the enhanced for loop
iterating over an unmodifiable ArrayList (from Collections.unmodifiableList) using the enhanced for loop
iterating over the array the bad way (cloning it repeatedly in the length check and when getting each indexed item).
For 10 objects, the relative speeds (doing multiple repeats and taking the median) were like:
1,000
1,300
1,300
5,000
For 100 objects:
1,300
4,900
6,300
85,500
For 1000 objects:
6,400
51,700
56,200
7,000,300
For 10000 objects:
68,000
445,000
651,000
655,180,000
Rough figures for sure, but enough to convince me of two things:
Cloning, then iterating is definitely not a performance issue. In fact, it's consistently faster than using a List. (This is why Java's enum.values() method returns a defensive copy of an array instead of an immutable list.)
If you repeatedly call the method, repeatedly cloning the array unnecessarily, performance becomes more and more of an issue the larger the array in question. It's pretty horrible. No surprises there.
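A caller-side mitigation for the bad pattern, for what it's worth, is to hoist the defensive copy into a local variable so the clone happens exactly once per loop rather than once per access. A small sketch with stand-in types (int[] instead of Appointment[]; all names are mine):

```java
public class HoistDemo {
    static class Schedule {
        private final int[] internal = {1, 2, 3};

        // Defensive copy on every call, like appointments() in the question.
        public int[] appointments() {
            return internal.clone();
        }
    }

    public static int sumHoisted(Schedule s) {
        int[] appts = s.appointments(); // clone exactly once, then reuse
        int sum = 0;
        for (int i = 0; i < appts.length; i++) {
            sum += appts[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumHoisted(new Schedule())); // 6
    }
}
```

The API can't force callers to do this, which is why it needs to be spelled out in the accessor's documentation.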
clone() is fast, but not what I would describe as super fast.
If you don't trust people to write loops efficiently, I would not let them write a loop at all (which also avoids the need for clone()):
interface AppointmentHandler {
    public void onAppointment(Appointment appointment);
}
class Schedule {
    private Appointment[] internalArray;
    public void forEachAppointment(AppointmentHandler ah) {
        for (Appointment a : internalArray)
            ah.onAppointment(a);
    }
}
Since you can't really have it both ways, I would suggest that you create a pre-generics and a generics version of your API. Ideally, the underlying implementation can be mostly the same, but the fact is, if you want it to be easy to use for anyone on Java 1.5 or later, they will expect Generics, Iterable, and all the newer language features.
I think the usage of arrays should be non-existent. It does not make for an easy to use API in either case.
NOTE: I have never used C#, but I would expect the same holds true.
As far as failing a minority of the users, those that would call the same method to get the same object on each iteration of the loop would be asking for inefficiency regardless of API design. I think as long as that's well documented, it's not too much to ask that the users obey some semblance of common sense.
