I have a method that adds maps to a cache and I was wondering what I could do more to simplify this loop with Java 8.
What I have done so far:
Standard looping we all know:
for (int i = 0; i < catalogNames.size(); i++) {
    List<GenericCatalog> list = DummyData.getCatalog(catalogNames.get(i));
    Map<String, GenericCatalog> map = new LinkedHashMap<>();
    for (GenericCatalog item : list) {
        map.put(item.name.get(), item);
    }
    catalogCache.put(catalogNames.get(i), map);
}
Second iteration using forEach:
catalogNames.forEach(e -> {
    Map<String, GenericCatalog> map = new LinkedHashMap<>();
    DummyData.getCatalog(e).forEach(d -> {
        map.put(d.name.get(), d);
    });
    catalogCache.put(e, map);
});
And a third iteration that removes the unnecessary braces:
catalogNames.forEach(objName -> {
    Map<String, GenericCatalog> map = new LinkedHashMap<>();
    DummyData.getCatalog(objName).forEach(obj -> map.put(obj.name.get(), obj));
    catalogCache.put(objName, map);
});
My question now is what can be further done to simplify this?
I do understand that it's not really necessary to do anything else with this method at this point, but I was curious about the possibilities.
There is a small issue with solutions 2 and 3: they might cause side effects.
Side-effects in behavioral parameters to stream operations are, in
general, discouraged, as they can often lead to unwitting violations
of the statelessness requirement, as well as other thread-safety
hazards.
As an example of how to transform a stream pipeline that
inappropriately uses side-effects to one that does not, the following
code searches a stream of strings for those matching a given regular
expression, and puts the matches in a list.
ArrayList<String> results = new ArrayList<>();
stream.filter(s -> pattern.matcher(s).matches())
.forEach(s -> results.add(s)); // Unnecessary use of side-effects!
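The side-effect-free version of the same example simply collects the matches instead of adding them from inside forEach, roughly:
List<String> results = stream.filter(s -> pattern.matcher(s).matches())
                             .collect(Collectors.toList()); // No side-effects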
So instead of using forEach to populate the map, it is better to use Collectors.toMap(...). I am not 100% sure about your data structure, but I hope it is close enough.
There is a List and a corresponding Map:
List<Integer> ints = Arrays.asList(1,2,3);
Map<Integer,List<Double>> catalog = new HashMap<>();
catalog.put(1,Arrays.asList(1.1,2.2,3.3,4.4));
catalog.put(2,Arrays.asList(1.1,2.2,3.3));
catalog.put(3,Arrays.asList(1.1,2.2));
Now we would like to get a new Map where each key is an element from the original List and each value is another Map itself. The nested Map's key is the transformed element from the catalog List, and its value is the List element itself. Crazy description and even crazier code below:
Map<Integer, Map<Integer, Double>> result = ints.stream().collect(
    Collectors.toMap(
        el -> el,
        el -> catalog.get(el).stream()
                     .collect(Collectors.toMap(
                         c -> c.intValue(),
                         c -> c
                     ))
    )
);
System.out.println(result);
// {1={1=1.1, 2=2.2, 3=3.3, 4=4.4}, 2={1=1.1, 2=2.2, 3=3.3}, 3={1=1.1, 2=2.2}}
I hope this helps.
How about utilizing Collectors from the stream API? Specifically, Collectors#toMap
Map<String, Map<String, GenericCatalog>> cache = catalogNames.stream()
    .collect(Collectors.toMap(Function.identity(),
        name -> DummyData.getCatalog(name).stream()
            .collect(Collectors.toMap(t -> t.name.get(), Function.identity(),
                // these two arguments are only needed if a HashMap can't be used
                (o, t) -> o, // merge function; adjust if duplicate keys must be handled differently
                LinkedHashMap::new))));
This avoids mutating an existing collection and provides you with your own individual copy of a map (which you can use to update a cache, or whatever you desire).
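For example, if the goal is still to fill the existing cache from the question, the freshly built map can be copied over in one call (assuming catalogCache is a mutable Map<String, Map<String, GenericCatalog>>, as in the question):
catalogCache.putAll(cache);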
Also, I would disagree with arbitrarily putting closing braces at the end of a line of code - most style guides would also be against this, as it somewhat disturbs the flow of the code for most readers.
Related
I'm fairly new to Java and trying to learn how to use streams to write simpler code. I was wondering if I can code like this:
Map<String, SomeConfig> temp = new HashMap<>();
resultStorage.forEach((key, value) -> key.getUsers().forEach(user -> {
    if (!temp.containsKey(user.getMeta())) {
        SomeConfig emailConfiguration = key
            .withCheck1(masterAccountId)
            .withCheck2(getClientTimezone())
            .withCheck3(user.getMeta());
        temp.put(user.getMeta(), emailConfiguration);
    }
    temp.get(user.getMeta()).getStreams().add(value);
}));
return new ArrayList<>(temp.values());
resultStorage declaration:
private Map<SomeConfig, byte[]> resultStorage = new ConcurrentHashMap<>();
getStreams is a getter on SomeConfig that returns a List<byte[]> as here:
private List<byte[]> attachmentStreams = new ArrayList<>();
public List<byte[]> getAttachmentStreams() {
return attachmentStreams;
}
My first attempt was something similar to this:
resultStorage.entrySet().stream()
.forEach(entry -> entry.getKey().getUsers().forEach(user -> {
}));
Are we able to use a forEach within one of the stream's terminal operations, forEach? How would a stream benefit this case, as I saw documentation saying that it can significantly improve readability and performance of older pre-Java 8 code?
Edit:
resultStorage holds a ConcurrentHashMap. It will contain Map<SomeConfig, byte[]> for email and attachments. Using another HashMap temp that is initially empty, we analyze resultStorage, see if temp contains a specific email key, and then put or add based on the existence of a user's email.
The terminal operation of entrySet().stream().forEach(…) is entirely unrelated to the getUsers().forEach(…) call within the Consumer. So there’s no problem of “multiple terminal operations” here.
However, replacing the Map operation forEach((key, value) -> … with an entrySet().stream().forEach(entry -> …) rarely adds a benefit. So far, you have not only made the code longer, you have introduced the necessity to deal with a Map.Entry instead of just using key and value.
But you can simplify your operation by using a single computeIfAbsent instead of containsKey, put, and get:
resultStorage.forEach((key, value) -> key.getUsers().forEach(user ->
    temp.computeIfAbsent(user.getMeta(), meta ->
            key.withCheck1(masterAccountId).withCheck2(getClientTimezone()).withCheck3(meta))
        .getStreams().add(value)));
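If computeIfAbsent is unfamiliar, here is a tiny standalone sketch (with made-up types, not the SomeConfig classes from the question) showing its create-or-reuse behaviour:
Map<String, List<Integer>> index = new HashMap<>();
index.computeIfAbsent("a", k -> new ArrayList<>()).add(1); // key absent: the list is created first, then 1 is added
index.computeIfAbsent("a", k -> new ArrayList<>()).add(2); // key present: the existing list is reused
System.out.println(index); // {a=[1, 2]}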
Notes after the code.
Map<String, SomeConfig> temp = resultStorage.keySet()
    .stream()
    .flatMap(key -> key.getUsers()
        .stream()
        .map(user -> new AbstractMap.SimpleEntry<>(user, key)))
    .collect(Collectors.toMap(e -> e.getKey().getMeta(),
        e -> e.getValue()
              .withCheck1(masterAccountId)
              .withCheck2(getClientTimezone())
              .withCheck3(e.getKey().getMeta())));
resultStorage.keySet()
This returns Set<SomeConfig>.
stream()
This returns a stream where every element in the stream is an instance of SomeConfig.
.flatMap(key -> key.getUsers()
    .stream()
    .map(user -> new AbstractMap.SimpleEntry<>(user, key)))
Method flatMap() must return a Stream. The above code returns a Stream where every element is an instance of AbstractMap.SimpleEntry. The "entry" key is the user and the entry value is the key from resultStorage.
Finally I create a Map<String, SomeConfig> via [static] method toMap of class Collectors.
The first argument to method toMap is the key mapper, i.e. a method that extracts the [map] key from the AbstractMap.SimpleEntry. In your case this is the value returned by method getMeta() of the user – which is the key from AbstractMap.SimpleEntry, i.e. e.getKey() returns a user object.
The second argument to toMap is the value mapper. e.getValue() returns a SomeConfig object and the rest is your code, i.e. the withChecks.
There is no way I can test the above code because not only did you not post a minimal, reproducible example, you also did not post any sample data. Hence the above may be way off what you actually require.
Also note that the above code simply creates your Map<String, SomeConfig> temp. I could not understand the code in your question that processes that Map so I did not try to implement that part at all.
I am trying to append two lists according to their size, with the bigger list in front.
I have a few lists like this.
List<Pair<Double, String>> masterList = new ArrayList<>();
and this is the working Java code that I tried first - with a simple if-else:
if (listOne.size() >= listTwo.size()) {
    masterList.addAll(listOne);
    masterList.addAll(listTwo);
} else {
    masterList.addAll(listTwo);
    masterList.addAll(listOne);
}
masterList.addAll(otherList); // and at the end all other lists can be added without any condition
I am fairly new to Java, so I was studying it and came across Comparators and lambdas. So, I tried to use that for my code, something like this:
List<Pair<Double, String>> masterList = Stream.concat(listOne.stream(), listTwo.stream())
.filter(Comparator.comparingInt(List::size))
.collect(Collectors.toList())
But I am not able to achieve proper results.
Can someone point out my mistake, I am still trying to learn.
The for-loop is very nice and a Stream isn't necessary, but to answer the question, you may:
not use concat, as it would already join the lists and you would lose the notion of two separate lists
not use filter, but rather sorted
then flatMap to go from Stream<List<Pair<>>> to Stream<Pair<>>
List<Pair<Double, String>> masterList = Stream.of(listOne, listTwo)
.sorted(Comparator.comparing(List::size, Comparator.reverseOrder()))
.flatMap(List::stream)
.collect(Collectors.toList());
masterList.addAll(otherList);
It may be possible to use Stream.concat to join the contents of otherList and thus to get rid of masterList.addAll
Also, here is an example using the Comparator.reversed() method:
List<Pair<Double, String>> masterList = Stream.concat(
Stream.of(listOne, listTwo) // Stream<List<Pair>>
.sorted(Comparator.<List>comparingInt(List::size).reversed())
.flatMap(List::stream), // Stream<Pair>
otherList.stream() // Stream<Pair>
)
.collect(Collectors.toList());
However, a ternary operator should also do fine to detect the longer list to place at the beginning:
List<Pair<Double, String>> masterList2 = Stream.concat(
(listOne.size() >= listTwo.size()
? Stream.of(listOne, listTwo)
: Stream.of(listTwo, listOne)
)
.flatMap(List::stream),
otherList.stream()
)
.collect(Collectors.toList());
Input:
List<String> elements= new ArrayList<>();
elements.add("Oranges");
elements.add("Figs");
elements.add("Mangoes");
elements.add("Apple");
List<String> listofComments = new ArrayList<>();
listofComments.add("Apples are better than Oranges");
listofComments.add("I love Mangoes and Oranges");
listofComments.add("I don't know like Figs. Mangoes are my favorites");
listofComments.add("I love Mangoes and Apples");
Output: [Mangoes, Apples, Oranges, Figs] -> The output must be in descending order of the number of occurrences of the elements. If elements appear an equal number of times, they must be arranged alphabetically.
I am new to Java 8 and came across this problem. I tried solving it partially; I couldn't sort it. Can anyone help me with better code?
My piece of code:
Function<String, Map<String, Long>> function = f -> {
Long count = listofComments.stream()
.filter(e -> e.toLowerCase().contains(f.toLowerCase())).count();
Map<String, Long> map = new HashMap<>(); //creates map for every element. Is it right?
map.put(f, count);
return map;
};
elements.stream().sorted().map(function).forEach(e-> System.out.print(e));
Output: {Apple=2}{Figs=1}{Mangoes=3}{Oranges=2}
In real-life scenarios you would have to consider that applying an arbitrary number of match operations to an arbitrary number of comments can become quite expensive when the numbers grow, so it's worth doing some preparation:
Map<String,Predicate<String>> filters = elements.stream()
.sorted(String.CASE_INSENSITIVE_ORDER)
.map(s -> Pattern.compile(s, Pattern.LITERAL|Pattern.CASE_INSENSITIVE))
.collect(Collectors.toMap(Pattern::pattern, Pattern::asPredicate,
(a,b) -> { throw new AssertionError("duplicates"); }, LinkedHashMap::new));
The prepared Predicate is quite valuable even when not doing actual regex matching. The combination of the LITERAL and CASE_INSENSITIVE flags enables searches with the intended semantics without the need to convert entire strings to lower case (which, by the way, is not sufficient for all possible scenarios). For this kind of matching, the preparation internally includes building the necessary data structure for the Boyer–Moore algorithm, for more efficient search.
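As a small standalone illustration (an addition, not part of the original answer), this is what a single LITERAL|CASE_INSENSITIVE predicate does on its own:
Predicate<String> findsMangoes =
    Pattern.compile("Mangoes", Pattern.LITERAL | Pattern.CASE_INSENSITIVE).asPredicate();
System.out.println(findsMangoes.test("I love MANGOES and Apples")); // true, despite the different case
System.out.println(findsMangoes.test("I prefer Figs"));             // false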
This map can be reused.
For your specific task, one way to use it would be
filters.entrySet().stream()
.map(e -> Map.entry(e.getKey(), listofComments.stream().filter(e.getValue()).count()))
.sorted(Map.Entry.comparingByValue(Comparator.reverseOrder()))
.forEachOrdered(e -> System.out.printf("%-7s%3d%n", e.getKey(), e.getValue()));
which will print for your example data:
Mangoes 3
Apple 2
Oranges 2
Figs 1
Note that the filters map is already sorted alphabetically and the sorted step of the second stream operation is stable for streams with a defined encounter order, so it only needs to sort by occurrences; entries with equal counts will keep their relative order, which is the alphabetical order from the source map.
Map.entry(…) requires Java 9 or newer. For Java 8, you’d have to use something like
new AbstractMap.SimpleEntry(…) instead.
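Spelled out, the pipeline above could look like this on Java 8 (a sketch reusing the filters map and listofComments from above, not from the original answer):
filters.entrySet().stream()
    .<Map.Entry<String, Long>>map(e -> new AbstractMap.SimpleEntry<>(
        e.getKey(), listofComments.stream().filter(e.getValue()).count()))
    .sorted(Map.Entry.comparingByValue(Comparator.reverseOrder()))
    .forEachOrdered(e -> System.out.printf("%-7s%3d%n", e.getKey(), e.getValue()));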
You can still modify your function to store Map.Entry instead of a complete Map
Function<String, Map.Entry<String, Long>> function = f -> Map.entry(f, listOfComments.stream()
.filter(e -> e.toLowerCase().contains(f.toLowerCase())).count());
and then sort these entries before performing a terminal operation (forEach in your case, to print):
elements.stream()
.map(function)
.sorted(Comparator.comparing(Map.Entry<String, Long>::getValue)
.reversed().thenComparing(Map.Entry::getKey))
.forEach(System.out::println);
This will then give you as output the following:
Mangoes=3
Apple=2
Oranges=2
Figs=1
The first thing is to declare an additional class. It'll hold the element and its count:
class ElementWithCount {
    private final String element;
    private final long count;

    ElementWithCount(String element, long count) {
        this.element = element;
        this.count = count;
    }

    String element() {
        return element;
    }

    long count() {
        return count;
    }
}
To compute the count, let's declare an additional function:
static long getElementCount(List<String> listOfComments, String element) {
return listOfComments.stream()
.filter(comment -> comment.contains(element))
.count();
}
So now, to find the result, we need to transform the stream of elements into a stream of ElementWithCount objects, sort that stream by count, then transform it back into a stream of elements and collect it into the result list.
To make this task easier, let's define the comparator as a separate variable:
Comparator<ElementWithCount> comparator = Comparator
.comparing(ElementWithCount::count).reversed()
.thenComparing(ElementWithCount::element);
and now, as all parts are ready, the final computation is easy:
List<String> result = elements.stream()
.map(element -> new ElementWithCount(element, getElementCount(listOfComments, element)))
.sorted(comparator)
.map(ElementWithCount::element)
.collect(Collectors.toList());
You can use Map.Entry instead of a separate class and inline getElementCount, so it becomes a "one-liner" solution:
List<String> result = elements.stream()
.map(element ->
new AbstractMap.SimpleImmutableEntry<>(element,
listOfComments.stream()
.filter(comment -> comment.contains(element))
.count()))
.sorted(Map.Entry.<String, Long>comparingByValue().reversed().thenComparing(Map.Entry.comparingByKey()))
.map(Map.Entry::getKey)
.collect(Collectors.toList());
But it's much harder to understand in this form, so I recommend splitting it into logical parts.
I have two maps that use the same object as keys. I want to merge these two maps by key. When a key exists in both maps, I want the resulting map to run a formula. When a key exists in only a single map, I want the value to be 0.
Map<MyKey, Integer> map1;
Map<MyKey, Integer> map2;
Map<MyKey, Double> result =
Stream.concat(map1.entrySet().stream(), map2.entrySet().stream())
.collect(Collectors.toMap(
Map.Entry::getKey, Map.Entry::getValue,
(val1, val2) -> (val1 / (double)val2) * 12D));
This will use the formula if the key exists in both maps, but I need an easy way to set the values for keys that only existed in one of the two maps to 0D.
I can do this by doing set math and trying to calculate the inner-join of the two keySets, and then subtracting the inner-join result from the full outer join of them... but this is a lot of work that feels unnecessary.
Is there a better approach to this, or something I can easily do using the Streaming API?
Here is a simple way: only stream the keys, looking up the values as needed and leaving the original maps unchanged.
Map<String, Double> result =
Stream.concat(map1.keySet().stream(), map2.keySet().stream())
.distinct()
.collect(Collectors.toMap(k -> k, k -> map1.containsKey(k) && map2.containsKey(k)
? map1.get(k) * 12d / map2.get(k) : 0d));
Test
Map<String, Integer> map1 = new HashMap<>();
Map<String, Integer> map2 = new HashMap<>();
map1.put("A", 1);
map1.put("B", 2);
map2.put("A", 3);
map2.put("C", 4);
// code above here
result.entrySet().forEach(System.out::println);
Output
A=4.0
B=0.0
C=0.0
For this solution to work, your initial maps should be Map<MyKey, Double>. I'll try to find another solution that will work if the values are initially Integer.
You don't even need streams for this! You should simply be able to use Map#replaceAll to modify one of the Maps:
map1.replaceAll((k, v) -> map2.containsKey(k) ? 12D * v / map2.get(k) : 0D);
Now, you just need to add every key to map1 that is in map2, but not map1:
map2.forEach((k, v) -> map1.putIfAbsent(k, 0D));
If you don't want to modify either of the Maps, then you should create a deep copy of map1 first.
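A sketch of that copy-first variant (assuming, as noted above, that the maps hold Double values; since the values are immutable boxed numbers, a plain HashMap copy is enough here):
Map<MyKey, Double> result = new HashMap<>(map1); // copy, so map1 itself stays untouched
result.replaceAll((k, v) -> map2.containsKey(k) ? 12D * v / map2.get(k) : 0D);
map2.forEach((k, v) -> result.putIfAbsent(k, 0D));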
Stream.concat is not the right approach here, as you are throwing the elements of the two maps together, creating the need to separate them afterward.
You can simplify this by directly doing the intended task: process the intersection of the keys by applying your function, and process the other keys differently. E.g. when you stream over one map instead of the concatenation of two maps, you only have to check for the presence of the key in the other map to either apply the function or use zero. Then, the keys only present in the second map need to be put with zero in a second step:
Map<MyKey, Double> result = map1.entrySet().stream()
.collect(Collectors.collectingAndThen(
Collectors.toMap(Map.Entry::getKey, e -> {
Integer val2 = map2.get(e.getKey());
return val2==null? 0.0: e.getValue()*12.0/val2;
}),
m -> {
Map<MyKey, Double> rMap = m.getClass()==HashMap.class? m: new HashMap<>(m);
map2.keySet().forEach(key -> rMap.putIfAbsent(key, 0.0));
return rMap;
}));
This clearly suffers from the fact that Streams don’t offer convenience methods for processing map entries. Also, we have to deal with the unspecified map type for the second processing step. If we provided a map supplier, we also had to provide a merge function, making the code even more verbose.
The simpler solution is to use the Collection API rather than the Stream API:
Map<MyKey, Double> result = new HashMap<>(Math.max(map1.size(),map2.size()));
map2.forEach((key, value) -> result.put(key, map1.getOrDefault(key, 0)*12D/value));
map1.keySet().forEach(key -> result.putIfAbsent(key, 0.0));
This is clearly less verbose and potentially more efficient as it omits some of the Stream solution’s processing steps and provides the right initial capacity to the map. It utilizes the fact that the formula evaluates to the desired zero result if we use zero as default for the first map’s value for absent keys. If you want to use a different formula which doesn’t have this property or want to avoid the calculation for absent mappings, you’d have to use
Map<MyKey, Double> result = new HashMap<>(Math.max(map1.size(),map2.size()));
map2.forEach((key, value2) -> {
Integer value1 = map1.get(key);
result.put(key, value1 != null? value1*12D/value2: 0.0);
});
map1.keySet().forEach(key -> result.putIfAbsent(key, 0.0));
How can I convert the condition below to the Java 8 streams way?
List<String> name = Arrays.asList("A", "B", "C");
String id;
if(name.contains("A")){
id = "123";
}else if(name.contains("B")){
id = "234";
}else if(name.contains("C")){
id = "345";
}
I am in the process of learning streams and was wondering how I can convert this one. I tried forEach, map, and filter, but I wasn't getting anywhere.
Yet another (but compact) solution:
Arrays.asList("B", "C", "A", "D").stream()
.map(s -> s.equals("A") ? new SimpleEntry<>(1, "123")
: s.equals("B") ? new SimpleEntry<>(2, "234")
: s.equals("C") ? new SimpleEntry<>(3, "345")
: null)
.filter(x -> x != null)
.reduce((a, b) -> a.getKey() < b.getKey() ? a : b)
.map(Entry::getValue)
.ifPresent(System.out::println);
I cannot see why you have to convert it to a stream. This doesn't seem like a Stream API case to me.
But if you want to easily add new items and make the code more readable, I suggest you use a map instead.
private static final ImmutableMap<String, String> nameToId = new ImmutableMap.Builder<String, String>()
.put("A", "123")
.put("B", "234")
.put("C", "345")
.build();
Now you can add new items without changing much code and just call nameToId.get(name) to fetch the id by name.
You can add more flexibility here using streams:
Stream.of("A", "B", "C").map(nameToId::get).collect(Collectors.toList());
Inspired by Serghey Bishyr's answer to use a map, I also used a map (but an ordered one), and I will rather go through the keys of the map instead of the list to find the appropriate id. That might of course not be the best solution, but you can play with Streams that way ;-)
Map<String, String> nameToId = new LinkedHashMap<>();
// the following order reflects the order of your conditions! (if your first condition checked for "B", you would move "B" to the first position)
nameToId.put("A", "123");
nameToId.put("B", "234");
nameToId.put("C", "345");
List<String> name = Arrays.asList("A", "B", "C");
String id = nameToId.keySet()
.stream()
.filter(name::contains)
.findFirst()
.map(nameToId::get)
.orElse(null);
You gain nothing really... don't try to put too much into the filtering predicates or mapping functions, because then your Stream solution might not be that readable anymore.
The problem you describe is to get a single value (id) from application of a function to two input sets: the input values and the mappings.
id = f(list,mappings)
So basically your question is to find an f that is based on streams (in other words, solutions that return a list don't solve your problem).
First of all, the original if-else-if-else construct mixes three concerns:
input validation (only considering the value set "A","B","C")
mapping an input value to an output value ("A" -> "123", "B" -> "234", "C" -> "345")
defining an implicit prioritization of input values according to their natural order (not sure if that is intentional or coincidental), "A" before "B" before "C"
When you want to apply this to a stream of input values, you have to make all of them explicit:
a Filter function that ignores all input values without a mapping
a Mapper function that maps the input to the id
a Reduce function (BinaryOperator) that performs the prioritization logic implied by the if-else-if-else construct
Mapping Function
The mapper is a discrete function mapping each input value to an Optional of the output value:
Function<String, Optional<String>> idMapper = s -> {
    if ("A".equals(s)) {
        return Optional.of("123");
    } else if ("B".equals(s)) {
        return Optional.of("234");
    } else if ("C".equals(s)) {
        return Optional.of("345");
    }
    return Optional.empty();
};
For more mappings an immutable map should be used:
Map<String,String> mapping = Collections.unmodifiableMap(new HashMap<String,String>(){{
put("A", "123");
put("B", "234");
put("C", "345");
}}); //the instance initializer is just one way to initialize the map :)
Function<String,Optional<String>> idMapper = s -> Optional.ofNullable(mapping.get(s));
Filter Function
As we only allow input values for which we have a mapping, we could use the keyset of the mapping map:
Predicate<String> filter = s -> mapping.containsKey(s);
Reduce Function
To find the top-priority element of the stream using natural order, use this BinaryOperator:
BinaryOperator<String> prioritizer = (a, b) -> a.compareTo(b) < 0 ? a : b;
If there is another logic to prioritize, you have to adapt the implementation accordingly.
This operator is used in a .reduce() call. If you prioritize based on natural order, you could use .min(Comparator.naturalOrder()) on the stream instead.
Stream Pipeline
Now you first have to reduce the stream to a single value using the prioritizer; the result is an Optional, which you flatMap by applying the idMapper function (flatMap, so as not to end up with Optional<Optional<String>>):
Optional<String> id = Arrays.asList("C", "B", "A")
.stream()
.filter(filter) //concern: input validation
.reduce(prioritizer) //concern: prioritization
.flatMap(idMapper); //concern: id-mapping
Final Result
To wrap it up, for your particular problem, the most concise version (without defining functions first) using a stream and input validation would be:
//define the mapping in an immutable map (that's just one way to do it)
final Map<String,String> mapping = Collections.unmodifiableMap(
new HashMap<String,String>(){{
put("A", "123");
put("B", "234");
put("C", "345");
}});
Optional<String> result = Arrays.asList("C", "D", "A", "B")
.stream()
.filter(mapping::containsKey)
.min(Comparator.naturalOrder())
.flatMap(s -> Optional.ofNullable(mapping.get(s)));
which is the sought-for f:
BiFunction<List<String>,Map<String,String>,Optional<String>> f =
(list,map) -> list.stream()
.filter(map::containsKey)
.min(Comparator.naturalOrder())
.flatMap(s -> Optional.ofNullable(map.get(s)));
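Applying it to the example input then looks like this (using the mapping map defined above):
Optional<String> id = f.apply(Arrays.asList("C", "D", "A", "B"), mapping);
System.out.println(id.orElse("no match")); // prints 123, because "A" is the smallest matching input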
There is certainly some appeal to this approach, but the elegance-through-simplicity of the if-else approach cannot be denied either ;)
But for the sake of completeness, let's look at complexity. Assuming the number of mappings and the number of input values is rather large (otherwise it wouldn't really matter).
Solutions based on iterating over the mappings and searching the input list using contains (as in your if-else construct):
Best case: O(1) (first branch in the if-else construct, first item in the list)
Worst case: O(n^2) (last branch in the if-else construct, last item in the list)
For the streaming solution with reduce, you have to iterate completely through the input list (O(n)), while the map lookup is O(1):
Best case: O(n)
Worst case: O(n)
Thanks to Hamlezz for the reduce idea and to Holger for pointing out that applying the mapper function directly to the stream does not yield the same result (as the first match wins, not the first entry in the if-else construct) and for the min(Comparator.naturalOrder()) option.