The question is general and not about the pros and cons of each style as such. The question is: should I prefer a Stream over a for loop whenever possible, because it is declarative and more readable?
I was arguing with my colleague about the pros and cons of using streams versus for loops. I agree that we should prefer streams 90% of the time, but I believe there are cases when it is better to use a for loop instead of a stream.
For example, I needed to perform several operations on a collection of elements, and those operations could throw a checked exception. If an exception occurred for any element, I wanted to abort the execution entirely, so I used a for loop and wrapped it in a try/catch block. My colleague was not satisfied because the result took twice as many lines as a stream version would. I rewrote it by creating my own custom functional interfaces that throw checked exceptions, plus static methods to convert them into ones throwing unchecked exceptions (examples here), and it ended up looking like this:
try {
    Map<String, String> someResult = elements.stream()
            .filter(throwingPredicateWrapper(element -> client.hasValue(element)))
            .collect(Collectors.toMap(
                    Function.identity(),
                    throwingFunctionWrapper(element -> client.getValue(element))));
    return someResult;
} catch (Exception e) {
    LOGGER.error("Error while processing", e);
}
He was happy because it took half as many lines of code.
This is a simple example and it does not look too bad, but I believe the old loop is the simpler and faster way to deal with this case.
Should we tend to use streams everywhere it is possible?
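For reference, a wrapper like the `throwingFunctionWrapper` used above can be sketched as follows. This is an assumed shape following the names in the question, not the actual linked implementation:

```java
import java.util.function.Function;

public class ThrowingWrappers {

    // A Function variant whose apply() is allowed to throw a checked exception.
    @FunctionalInterface
    interface ThrowingFunction<T, R> {
        R apply(T t) throws Exception;
    }

    // Adapts a ThrowingFunction into a plain Function by rethrowing any
    // checked exception as an unchecked RuntimeException, so the lambda
    // can be used inside a stream pipeline.
    static <T, R> Function<T, R> throwingFunctionWrapper(ThrowingFunction<T, R> f) {
        return t -> {
            try {
                return f.apply(t);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        };
    }

    public static void main(String[] args) {
        Function<String, Integer> parse = throwingFunctionWrapper(Integer::valueOf);
        System.out.println(parse.apply("42")); // prints 42
        try {
            parse.apply("not a number");
        } catch (RuntimeException e) {
            // The original NumberFormatException is preserved as the cause.
            System.out.println("wrapped: " + e.getCause().getClass().getSimpleName());
        }
    }
}
```

A `throwingPredicateWrapper` would be built the same way around a `Predicate`-shaped interface.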
Joshua Bloch, author of "Effective Java", has a good talk which touches on when to use streams. Start watching around 30:30 for his section on "Use streams judiciously".
Although this is largely opinion-based, he argues that you do not want to immediately begin turning all of your procedural loops into streams; you want a balanced approach. He provides at least one example method where doing so creates code that is more difficult to understand. He also argues that in many cases there is no right answer as to whether to write it procedurally or in a more functional manner; it depends on the context (and, I would argue, what the team has decided to do corporately might play a role). He has the examples on GitHub, and all the examples below are from his GitHub repository.
Here is the example he provides of his iterative anagram method,
// Prints all large anagram groups in a dictionary iteratively (Page 204)
public class IterativeAnagrams {
    public static void main(String[] args) throws IOException {
        File dictionary = new File(args[0]);
        int minGroupSize = Integer.parseInt(args[1]);

        Map<String, Set<String>> groups = new HashMap<>();
        try (Scanner s = new Scanner(dictionary)) {
            while (s.hasNext()) {
                String word = s.next();
                groups.computeIfAbsent(alphabetize(word),
                        (unused) -> new TreeSet<>()).add(word);
            }
        }

        for (Set<String> group : groups.values())
            if (group.size() >= minGroupSize)
                System.out.println(group.size() + ": " + group);
    }

    private static String alphabetize(String s) {
        char[] a = s.toCharArray();
        Arrays.sort(a);
        return new String(a);
    }
}
And here it is using Streams,
// Overuse of streams - don't do this! (page 205)
public class StreamAnagrams {
    public static void main(String[] args) throws IOException {
        Path dictionary = Paths.get(args[0]);
        int minGroupSize = Integer.parseInt(args[1]);

        try (Stream<String> words = Files.lines(dictionary)) {
            words.collect(
                    groupingBy(word -> word.chars().sorted()
                            .collect(StringBuilder::new,
                                    (sb, c) -> sb.append((char) c),
                                    StringBuilder::append).toString()))
                    .values().stream()
                    .filter(group -> group.size() >= minGroupSize)
                    .map(group -> group.size() + ": " + group)
                    .forEach(System.out::println);
        }
    }
}
He argues for a balanced, third approach that uses both,
// Tasteful use of streams enhances clarity and conciseness (Page 205)
public class HybridAnagrams {
    public static void main(String[] args) throws IOException {
        Path dictionary = Paths.get(args[0]);
        int minGroupSize = Integer.parseInt(args[1]);

        try (Stream<String> words = Files.lines(dictionary)) {
            words.collect(groupingBy(word -> alphabetize(word)))
                    .values().stream()
                    .filter(group -> group.size() >= minGroupSize)
                    .forEach(g -> System.out.println(g.size() + ": " + g));
        }
    }

    private static String alphabetize(String s) {
        char[] a = s.toCharArray();
        Arrays.sort(a);
        return new String(a);
    }
}
Related
Okay so essentially, I have some code that uses the contains() method to detect the presence of specific characters in two strings. For extra context, this question is a good resource as to what kind of problem I'm having (and the third solution is also something I've looked into for this). Regardless, here is some of my code:
// code up here basically just concatenates different
// characters to Strings: stringX and stringY
if (stringX.contains("!\"#")) {
} else if (stringX.contains("$%&")) {
} else if (stringX.contains("\'()")) {
} else if (stringX.contains("!$\'")) {
} else if (stringX.contains("\"%(")) {
// literally 70+ more else-if statements
}
if (stringY.contains("!\"#")) {
} else if (stringY.contains("$%&")) {
} else if (stringY.contains("\'()")) {
} else if (stringY.contains("!$\'")) {
} else if (stringY.contains("\"%(")) {
// literally 70+ more else-if statements, all of which are
// exactly the same as those working with stringX
}
I'm still pretty new to Java programming, so I'm not sure how I should go about this. Maybe it is a non-issue? Also, if I can remedy this without using RegEx, that would be preferable; I am not very knowledgeable in it at this point in time. But if the only rational solution would be to utilize it, I will obviously do so.
Edit: The code within all of these else-if statements will not be very different from each other at all; basically just a System.out.println() with some information about what characters stringX/stringY contains.
Writing the same code more than once should immediately set off alarm bells in your head to move that code into a function so it can be reused.
As for simplifying the expression, the best approach is probably storing the patterns you're looking for as an array and iterating over the array with your condition.
private static final String[] patterns = new String[] {"!\"#", "$%&", "\'()", "!$\'", "\"%(", ...};

private static void findPatterns(String input) {
    for (String pattern : patterns) {
        if (input.contains(pattern)) {
            System.out.println("Found pattern: " + pattern);
        }
    }
}

// Elsewhere...
findPatterns(stringX);
findPatterns(stringY);
This pattern is especially common in functional and functional-style languages. Java 8 streams are a good example, so you could equivalently do
List<String> patterns = Arrays.asList("!\"#", "$%&", "\'()", "!$\'", "\"%(", ...);
patterns.stream()
        .filter(pattern -> stringX.contains(pattern))
        .forEach(pattern -> System.out.println("Found pattern: " + pattern));
You can simply make a list of your cases, then use a Java 8 stream filter:
List<String> patterns = Arrays.asList("!\"#", "$%&", ...);
Optional<String> matched = patterns.stream().filter(p -> stringX.contains(p)).findFirst();
if (matched.isPresent()) {
    System.out.println(matched.get());
}
A Java stream could make your performance slightly worse, but not by much.
I'm playing around with Java Streams and I wonder if there is any way to turn a code block like this:
if (givenString.equals("productA")) {
    return new productA();
} else if (givenString.equals("productB")) {
    return new productB();
} .....
into a Java Stream like this:
Stream.of(givenString)
        .filter(e -> e.equals("productA"))
        .map(e -> new productA())
I came across this solution, which works, but I'm not convinced...
Stream.of(givenString)
        .map(e -> e.equals("productA") ? new productA() : new productB())
        .findAny()
        .get()
You don't want to do that inline in a stream. Instead, write a helper method that does just that:
private static Product createByString(String name) {
    // I assume Product is a common superclass
    // TODO: implement
}
Now the question is: How should this method be implemented?
Use a big switch statement.
private static Product createByString(String name) {
    switch (name) {
        case "productA": return new productA();
        case "productB": return new productB();
        // ... maybe more?
        default: throw new IllegalArgumentException("name " + name + " is not a valid Product");
    }
}
Pro: a switch on a string is compiled into a jump table, so you won't pay for n string comparisons.
Con: You can't extend it at runtime, and you have to keep this method in sync.
Use a HashMap<String,Supplier<Product>>.
private static final Map<String, Supplier<Product>> productConstructors = new HashMap<>();

static {
    productConstructors.put("productA", productA::new);
    productConstructors.put("productB", productB::new);
}

private static Product createByString(String name) {
    Supplier<Product> constructor = productConstructors.get(name);
    if (constructor == null) {
        // Handle this?
        throw new IllegalArgumentException("name " + name + " is not a valid Product");
    }
    return constructor.get();
}
Pro: with some easy modifications you can add new products to this implementation, or even replace them.
Con: has some moderate overhead, and you still need to maintain the mapping between "productA" and its type.
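The runtime extensibility mentioned in the pro above can be sketched as a small registration hook. The names here are hypothetical and `Product` is the assumed common supertype:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class ProductRegistry {

    // Common supertype assumed by the answer.
    interface Product {}

    // ConcurrentHashMap so products can be registered or replaced at runtime.
    private static final Map<String, Supplier<Product>> productConstructors = new ConcurrentHashMap<>();

    // The "easy modification": expose registration instead of a fixed static block.
    public static void register(String name, Supplier<Product> constructor) {
        productConstructors.put(name, constructor);
    }

    public static Product createByString(String name) {
        Supplier<Product> constructor = productConstructors.get(name);
        if (constructor == null) {
            throw new IllegalArgumentException("name " + name + " is not a valid Product");
        }
        return constructor.get();
    }

    public static void main(String[] args) {
        class ProductA implements Product {}
        register("productA", ProductA::new);
        System.out.println(createByString("productA") instanceof ProductA); // prints true
    }
}
```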
Use reflection.
The good old hammer where every problem looks like a nail.
private static Product createByString(String name) {
    try {
        return Class.forName("your.pkgname." + name).asSubclass(Product.class).getConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
        throw new RuntimeException(e);
    }
}
Pro: You don't need to do the binding.
Con: It's slow.
In your first example below:
if (givenString.equals("productA")) {
    return new productA();
} else if (givenString.equals("productB")) {
    return new productB();
}
You are returning an instance of some object specified via a string. It seems to me that if you know the string, you can just create the object right away without using an intervening method call to do so.
Another possibility is that the class name was provided via some user input. In this case you might want to consider reflection to accomplish this so you can reference the methods and fields of the newly created class.
In either case, I doubt streams are a reasonable approach for this sort of requirement.
I have been trying to get into functional programming with Java for a few weeks now. I have created the two methods below, validateFileFunctionally and validateFileRegularly, which perform the same validations. The first works in a functional style using predicates (we could also bring in Suppliers and Consumers here), while the second works in the traditional Java way.
In 2018, which way should I go?
And should I try to use functional programming everywhere in my code, as is done in validateFileFunctionally, or only with Streams?
public class Main {

    private final String INVALID_FILE_NAME_LENGTH = "INVALID FILE NAME LENGTH";
    private final String INVALID_FILE_EXTENSION = "INVALID FILE EXTENSION";
    private final String INVALID_FILE_SIZE = "INVALID FILE SIZE";

    public static void main(String... args) {
        File file = new File("text.pdf");
        Main main = new Main();
        main.validateFileFunctionally(file);
        main.validateFileRegularly(file);
    }

    private void validateFileFunctionally(File file) {
        BiPredicate<File, Integer> validateFileName = (f, maxLength) -> f.getName().length() < maxLength;
        BiPredicate<File, String> validateExtension = (f, type) -> f.getName().endsWith(type);
        BiPredicate<File, Integer> validateSize = (f, maxSize) -> f.length() <= maxSize;

        BiConsumer<Boolean, String> throwExceptionIfInvalid = (isValid, errorMessage) -> {
            if (!isValid) {
                throw new InvalidFileException(errorMessage);
            }
        };

        throwExceptionIfInvalid.accept(validateFileName.test(file, 20), INVALID_FILE_NAME_LENGTH);
        throwExceptionIfInvalid.accept(validateExtension.test(file, ".pdf") || validateExtension.test(file, ".csv"), INVALID_FILE_EXTENSION);
        throwExceptionIfInvalid.accept(validateSize.test(file, 20), INVALID_FILE_SIZE);
    }

    private void validateFileRegularly(File file) {
        if (file.getName().length() > 20) {
            throw new InvalidFileException("INVALID FILE NAME LENGTH");
        } else if (!file.getName().endsWith(".pdf") && !file.getName().endsWith(".csv")) {
            throw new InvalidFileException("INVALID FILE EXTENSION");
        } else if (file.length() > 20) {
            throw new InvalidFileException("INVALID FILE SIZE");
        }
    }

    class InvalidFileException extends RuntimeException {
        public InvalidFileException(String message) {
            super(message);
        }
    }
}
Dah, this is a pet peeve of mine I'm afraid. Don't try to cram in functional stuff everywhere just because it's the latest new / cool thing - that just makes your code hard to read and unconventional. The Java 8 functional libraries are just another tool you have available that allow you to write cleaner, more concise code in a number of cases. You certainly shouldn't aim to use them exclusively.
Take your case as an example - the chained if statements still might not be the best way of achieving the above, but I can look at that and know near enough exactly what's going on in a few seconds.
Meanwhile, the functional example is just - rather odd. It's longer, less obvious as to what's going on, and offers no real advantage. I can't see a single case for using it as written in this example.
You should be applying Functional Programming wherever it makes sense, and stay away from bold statements like:
"I should try to use FP everywhere in my code"
"I should code only with Streams"
However, keep in mind that this example is not functional at all - validateFileFunctionally is just an enterprise-grade version of validateFileRegularly
Simply put, you took an imperative piece of code and rewrote it by wrapping it into FP infrastructure which is not what FP is about.
FP is about removing runtime uncertainty by building code from small and predictable building blocks/values, and not by putting lambda expressions wherever possible.
In your example, one could achieve this by abandoning exception handling and representing validation result as a value:
private Result validateFileRegularly(File file) {
    if (file.getName().length() > 20) {
        return Result.failed("INVALID FILE NAME LENGTH");
    } else if (!file.getName().endsWith(".pdf") && !file.getName().endsWith(".csv")) {
        return Result.failed("INVALID FILE EXTENSION");
    } else if (file.length() > 20) {
        return Result.failed("INVALID FILE SIZE");
    }
    return Result.ok();
}
Naturally, one could use a more sophisticated syntax for that, or a more sophisticated applicative-based validation API, but essentially that's what it's all about.
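The Result type used above is not defined in the answer; a minimal value-style sketch of an assumed shape could look like this:

```java
// A simple immutable value representing a validation outcome:
// either ok, or failed with an error message.
public final class Result {

    private final boolean ok;
    private final String error;

    private Result(boolean ok, String error) {
        this.ok = ok;
        this.error = error;
    }

    public static Result ok() {
        return new Result(true, null);
    }

    public static Result failed(String error) {
        return new Result(false, error);
    }

    public boolean isOk() {
        return ok;
    }

    // Null when the result is ok.
    public String error() {
        return error;
    }

    public static void main(String[] args) {
        Result r = Result.failed("INVALID FILE SIZE");
        System.out.println(r.isOk() + " / " + r.error()); // prints false / INVALID FILE SIZE
    }
}
```

A richer version might accumulate multiple error messages instead of stopping at the first, which is where applicative-style validation comes in.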
This question already has answers here:
How can I throw CHECKED exceptions from inside Java 8 lambdas/streams?
(18 answers)
I have a method with nested for loops as follows:
public MinSpecSetFamily getMinDomSpecSets() throws InterruptedException {
    MinSpecSetFamily result = new MinSpecSetFamily();
    ResourceType minRT = this.getFirstEssentialResourceType();
    if (minRT == null || minRT.noSpecies()) {
        System.out.println("There is something wrong with the "
                + "minimal indicator, such as adjacent to no species. ");
    }
    for (Species spec : minRT.specList) {
        ArrayList<SpecTreeNode> leafList = this.getMinimalConstSpecTreeRootedAt(spec).getLeaves();
        for (SpecTreeNode leaf : leafList) {
            result.addSpecSet(new SpecSet(leaf.getAncestors()));
        }
    }
    return result;
}
This works fine, but the application is performance critical so I modified the method to use parallelStream() as follows:
public MinSpecSetFamily getMinDomSpecSets() throws InterruptedException {
    ResourceType minRT = this.getFirstEssentialResourceType();
    if (minRT == null || minRT.noSpecies()) {
        System.out.println("There is something wrong with the "
                + "minimal indicator, such as adjacent to no species. ");
    }
    MinSpecSetFamily result = minRT.specList.parallelStream()
            .flatMap(spec -> getMinimalConstSpecTreeRootedAt(spec).getLeaves().parallelStream())
            .map(leaf -> new SpecSet(leaf.getAncestors()))
            .collect(MinSpecSetFamily::new, MinSpecSetFamily::addSpecSet, MinSpecSetFamily::addMSSF);
    return result;
}
This worked fine until I wanted to introduce an InterruptedException in the 'getLeaves()' method. Now the parallelStream version will not compile as it says I have an unreported InterruptedException which must be caught or declared to be thrown. I think this is because the parallelStream runs on multiple threads. No combination of try/catch blocks suggested by my IDE resolves the issue.
The second solution posted in Interrupt parallel Stream execution
suggests that I may be able to resolve the issue using ForkJoinPool but I have been unable to figure out how to modify my method to use this approach.
If you want to stick to your current design, you just need to catch the exception:
.flatMap(spec -> {
    try {
        return getMinimalConstSpecTreeRootedAt(spec).getLeaves().parallelStream();
    } catch (InterruptedException e) {
        // Return something else to indicate interruption;
        // an empty stream keeps the pipeline compiling and running.
        return Stream.empty();
    }
}).map(...)
Note that a parallel stream of parallel streams is possibly unnecessary and parallelising the top level stream only may be sufficient performance-wise.
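The ForkJoinPool approach alluded to above can be sketched roughly as follows. The idea is to run the parallel stream's terminal operation from a task submitted to a dedicated pool, so the stream executes on that pool's threads rather than the common pool, and the pool can be shut down to cancel in-flight work. The input and computation here are placeholders:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ForkJoinPool;

public class CustomPoolDemo {
    public static void main(String[] args) throws Exception {
        List<Integer> input = Arrays.asList(1, 2, 3, 4, 5);

        // A dedicated pool, separate from ForkJoinPool.commonPool().
        ForkJoinPool pool = new ForkJoinPool(4);
        try {
            // The parallel stream runs on the pool's worker threads because
            // the terminal operation is invoked from a task submitted to it.
            int sumOfSquares = pool.submit(
                    () -> input.parallelStream().mapToInt(i -> i * i).sum()
            ).get();
            System.out.println(sumOfSquares); // prints 55
        } finally {
            // shutdownNow() instead would interrupt the in-flight stream tasks.
            pool.shutdown();
        }
    }
}
```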
Java 8 provides a bunch of functional interfaces that we can implement using lambda expressions, which allow functions to be treated as first-class citizens (passed as arguments, returned from a method, etc.).
Example:
Stream.of("Hello", "World").forEach(str->System.out.println(str));
Why is it so important for functions to be first-class citizens? Any example to demonstrate this power?
The idea is to be able to pass behavior as a parameter. This is useful, for example, in implementing the Strategy pattern.
Streams API is a perfect example of how passing behavior as a parameter is useful:
people.stream()
        .map(Person::name)
        .map(name -> new GraveStone(name, Rock.GRANITE))
        .collect(Collectors.toSet())
Also it allows programmers to think in terms of functional programming instead of object-oriented programming, which is convenient for a lot of tasks, but is quite a broad thing to cover in an answer.
I think the second part of the question has been addressed well. But I want to try to answer the first question.
By definition there is more that a first-class citizen function can do. A first-class citizen function can:
be named by variables
be passed as arguments
be returned as the result of another function
participate as a member data type in a data structure (e.g., an array or list)
These are the privileges of being "first-class."
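Each of these privileges can be demonstrated with Java 8 lambdas; a small illustrative sketch (all names are made up for the example):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class FirstClassDemo {

    // 3. A function returned as the result of another function.
    static Function<Integer, Integer> adder(int n) {
        return x -> x + n;
    }

    // 2. A function passed as an argument.
    static int applyTwice(Function<Integer, Integer> f, int x) {
        return f.apply(f.apply(x));
    }

    public static void main(String[] args) {
        // 1. Named by a variable.
        Function<Integer, Integer> doubler = x -> x * 2;

        System.out.println(applyTwice(doubler, 3)); // prints 12

        Function<Integer, Integer> addFive = adder(5);
        System.out.println(addFive.apply(10)); // prints 15

        // 4. A member of a data structure.
        List<Function<Integer, Integer>> fns = Arrays.asList(doubler, addFive);
        System.out.println(fns.get(1).apply(1)); // prints 6
    }
}
```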
It's a matter of expressiveness. You don't have to, but in many practical cases it will make your code more readable and concise. For instance, take your code:
public class Foo {
    public static void main(String[] args) {
        Stream.of("Hello", "World").forEach(str -> System.out.println(str));
    }
}
And compare it to the most concise Java 7 implementation I could come out with:
interface Procedure<T> {
    void call(T arg);
}

class Util {
    static <T> void forEach(Procedure<T> proc, T... elements) {
        for (T el : elements) {
            proc.call(el);
        }
    }
}

public class Foo {
    public static void main(String[] args) {
        Util.forEach(
                new Procedure<String>() {
                    public void call(String str) { System.out.println(str); }
                },
                "Hello", "World"
        );
    }
}
The result is the same, though the Java 7 version takes quite a few more lines :) Also note that to support Procedure instances with different numbers of arguments, you would have needed an interface for each arity, or (more practically) to pass all the arguments as a single Parameters object. Closures would have been made in a similar way, by adding some fields to the Procedure implementation. That's a lot of boilerplate.
In fact, things like first-class "functors" and (non-mutable) closures have been around for a long time using anonymous classes, but they required a significant implementation effort. Lambdas just make things easier to read and write (at least, in most cases).
Here's a short program that shows (arguably) the primary differentiating factor.
public static void main(String[] args) {
    List<Integer> input = Arrays.asList(10, 12, 13, 15, 17, 19, 20, 30);

    List<Integer> list = pickEvensViaLists(input);
    for (int i = 0; i < 2; ++i)
        System.out.println(list.get(i));

    System.out.println("--------------------------------------------");

    pickEvensViaStreams(input).limit(2).forEach((x) -> System.out.println(x));
}
private static List<Integer> pickEvensViaLists(List<Integer> input) {
    List<Integer> list = new ArrayList<Integer>(input);
    for (Iterator<Integer> iter = list.iterator(); iter.hasNext(); ) {
        int curr = iter.next();
        System.out.println("processing list element " + curr);
        if (curr % 2 != 0)
            iter.remove();
    }
    return list;
}
private static Stream<Integer> pickEvensViaStreams(List<Integer> input) {
    Stream<Integer> inputStream = input.stream();
    Stream<Integer> filtered = inputStream.filter((curr) -> {
        System.out.println("processing stream element " + curr);
        return curr % 2 == 0;
    });
    return filtered;
}
This program takes an input list and prints the first two even numbers from it. It does so twice: the first time using lists with hand-written loops, the second time using streams with lambda expressions.
There are some differences in terms of the amount of code one has to write in either approach but this is not (in my mind) the main point. The difference is in how things are evaluated:
In the list-based approach, the code of pickEvensViaLists() iterates over the entire list. It removes all odd values from the list and only then returns to main(). The list returned to main() will therefore contain four values: 10, 12, 20, 30, and main() will print just the first two.
In the stream-based approach, the code of pickEvensViaStreams() does not actually iterate over anything. It returns a stream whose elements can be computed off of the input stream, but it has not yet computed any of them. Only when main() starts iterating (via forEach()) are the elements of the returned stream computed, one by one. As main() only cares about the first two elements, only two elements of the returned stream are actually computed. In other words: with streams you get lazy evaluation: streams are iterated only as much as needed.
To see that let's examine the output of this program:
--------------------------------------------
list-based filtering:
processing list element 10
processing list element 12
processing list element 13
processing list element 15
processing list element 17
processing list element 19
processing list element 20
processing list element 30
10
12
--------------------------------------------
stream-based filtering:
processing stream element 10
10
processing stream element 12
12
With lists, the entire input was iterated over (hence the eight "processing list element" messages). With streams, only two elements were actually extracted from the input, resulting in only two "processing stream element" messages.