I am new to functional programming, and I am trying to get better.
Currently, I am experimenting with some code that takes on the following basic form:
private static int myMethod(List<Integer> input) {
    Map<Integer, Long> freq = input
            .stream()
            .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    return (int) freq
            .keySet()
            .stream()
            .filter(key -> freq.containsKey(freq.get(key)))
            .count();
}
First, a HashMap is used to get the frequency of each number in the list. Next, we count the keys whose frequency values also exist as keys in the map.
What I don't like is how the two streams have to exist separately from one another: a HashMap is built from one stream only to be instantly and exclusively consumed by another.
Is there a way to combine this into one stream? I was thinking something like this:
private static int myMethod(List<Integer> input) {
    return (int) input
            .stream()
            .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
            .keySet()
            .stream()
            .filter(key -> freq.containsKey(freq.get(key)))
            .count();
}
but the problem here is that there is no freq map to reference, since the map is created inside the pipeline, so the filter cannot do what it needs to do.
In summary, I don't like that this collects into a HashMap only to convert it straight back into a key set. Is there a way to "streamline" (pun intended) this operation so that it:
Does not go back and forth between a stream and a HashMap
Can reference itself without needing to declare a separate map before the pipeline.
Thank you!
Your keySet is effectively nothing but a HashSet formed from your input, so you can use that as temporary storage:
Set<Integer> freq = new HashSet<>(input);
and then filter and count based on the frequency values in a single stream pipeline:
return (int) input
        .stream()
        .collect(Collectors.groupingBy(Function.identity(),
                Collectors.counting()))
        .values()   // just using the frequencies that were evaluated
        .stream()
        .filter(count -> freq.contains(count.intValue()))
        .count();
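As a quick sanity check (hypothetical input, not from the question), the combined pipeline behaves like this:
List<Integer> input = List.of(1, 1, 2, 3, 3, 3);
Set<Integer> freq = new HashSet<>(input);   // {1, 2, 3}

// frequencies are 1 -> 2, 2 -> 1, 3 -> 3; all three counts (2, 1, 3) exist in the set
long result = input.stream()
        .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
        .values()
        .stream()
        .filter(count -> freq.contains(count.intValue()))
        .count();

System.out.println(result);   // prints 3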
I am trying to come up with an efficient way to select a discrete but arbitrary range of key-value pairs from a HashMap. This is easy in Python, but seems difficult in Java. I was hoping to avoid using Iterators, since they seem slow for this application (correct me if I'm wrong).
For example, I'd like to be able to make the following call:
ArrayList<Pair<K, V>> values = pairsFromRange(hashMap, 0, 5);
With a HashMap, you can't do this in any way that performs meaningfully better than an Iterator.
If you use a TreeMap, however, this becomes easy: use subMap(0, 5) or the like.
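For example, a minimal sketch of that TreeMap approach (assuming Integer keys, and assuming the range 0 to 5 is meant as a key range rather than a positional one; hashMap is the map from the question):
NavigableMap<Integer, String> sorted = new TreeMap<>(hashMap);       // copy the HashMap into a sorted map
SortedMap<Integer, String> range = sorted.subMap(0, 5);              // keys in [0, 5)
List<Map.Entry<Integer, String>> pairs = new ArrayList<>(range.entrySet());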
This looks straightforward with lambdas (it implies iteration, of course). skip(n) and limit(n) make it possible to address any slice of the map's entries.
Map<String, String> m = new HashMap<>();
m.put("k1","v");
m.put("k2","v");
m.put("k3","v");
m.put("k4","v");
m.put("k5","v");
Map<String,String> slice = m.entrySet().stream()
.limit(3)
.collect(Collectors.toMap(x -> x.getKey(), x -> x.getValue()));
System.out.println(slice);
slice ==> {k1=v, k2=v, k3=v}
slice = m.entrySet().stream()
.skip(2)
.limit(3)
.collect(Collectors.toMap(x -> x.getKey(), x -> x.getValue()));
System.out.println(slice);
slice ==> {k3=v, k4=v, k5=v}
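One caveat that is not part of the original snippet: Collectors.toMap collects into a HashMap, so the iteration order of the slice is not guaranteed, and the entry stream of the source HashMap has no defined order either. If you want the slice to keep the stream's encounter order, you can pass a LinkedHashMap supplier:
Map<String, String> orderedSlice = m.entrySet().stream()
        .skip(2)
        .limit(3)
        .collect(Collectors.toMap(
                Map.Entry::getKey,
                Map.Entry::getValue,
                (a, b) -> a,              // merge function; map keys are unique anyway
                LinkedHashMap::new));     // preserves encounter order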
Starting with a map like:
Map<Integer, String> mapList = new HashMap<>();
mapList.put(2,"b");
mapList.put(4,"d");
mapList.put(3,"c");
mapList.put(5,"e");
mapList.put(1,"a");
mapList.put(6,"f");
I can sort the map using Streams like:
mapList.entrySet()
.stream()
.sorted(Map.Entry.<Integer, String>comparingByKey())
.forEach(System.out::println);
But I need to get a list (and a String) of the corresponding sorted values (that would be: a b c d e f), matching the keys 1 2 3 4 5 6.
I cannot find a way to do that within that stream pipeline.
Thanks
As #MA says in his comment, I need a mapping, and that is not explained in this question: How to convert a Map to List in Java?
So thank you very much #MA.
Sometimes people are too quick to close questions!
You can use a mapping collector:
var sortedValues = mapList.entrySet()
        .stream()
        .sorted(Map.Entry.comparingByKey())
        .collect(Collectors.mapping(Map.Entry::getValue, Collectors.toList()));
You could also use some of the different collection classes instead of streams:
List<String> list = new ArrayList<>(new TreeMap<>(mapList).values());
The downside is that if you do all of that in a single line it can get quite messy, quite fast. Additionally, you're throwing away the intermediate TreeMap used just for the sorting.
If you want to sort on the keys and collect only the values, you need a mapping step to preserve only the values after the sorting. Afterwards you can just collect them or use a forEach loop.
mapList.entrySet()
.stream()
.sorted(Map.Entry.comparingByKey())
.map(Map.Entry::getValue)
.collect(Collectors.toList());
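Since the question also asks for a String of the sorted values, a joining collector covers that part as well (a small sketch, assuming space-separated output is what's wanted):
String joined = mapList.entrySet()
        .stream()
        .sorted(Map.Entry.comparingByKey())
        .map(Map.Entry::getValue)
        .collect(Collectors.joining(" "));   // "a b c d e f"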
I have extracted values from a Map into a List but got a List<Optional<TXN_PMTxnHistory_rb>>, and I want to convert it into List<TXN_PMTxnHistory_rb>.
My code:
List<Optional<TXN_PMTxnHistory_rb>> listHistory_rb6 =
listHistory_rb5.values()
.stream()
.collect(Collectors.toList());
I'd like to obtain a List<TXN_PMTxnHistory_rb>.
Filter out all the empty values and use map to obtain the non-empty values:
List<TXN_PMTxnHistory_rb> listHistory_rb6 =
listHistory_rb5.values()
.stream()
.filter(Optional::isPresent)
.map(Optional::get)
.collect(Collectors.toList());
It's possible to do this using a method called flatMap on the stream of Optionals, which will remove any empty Optionals (note that Optional::stream requires Java 9 or later).
List<TXN_PMTxnHistory_rb> listHistory_rb6 =
listHistory_rb5.values()
.stream()
.flatMap(Optional::stream)
.collect(Collectors.toList());
flatMap essentially performs two things: "mapping" and "flattening". In the mapping phase it calls whatever method you've passed in and expects a new Stream back; in this case each Optional in your original List becomes a Stream containing either one or zero values.
The flattening phase then creates a new Stream containing the results of all the mapped Streams. Thus, if you had two Optional items in your List, one empty and one present, the resulting Stream would contain zero elements from the first mapped Stream and one value from the second.
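As a small illustration of those two phases (hypothetical values rather than the TXN_PMTxnHistory_rb type from the question; Optional::stream needs Java 9 or later):
List<Optional<String>> opts = List.of(Optional.empty(), Optional.of("x"));

List<String> flattened = opts.stream()
        .flatMap(Optional::stream)           // empty -> 0-element stream, present -> 1-element stream
        .collect(Collectors.toList());

System.out.println(flattened);               // prints [x]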
Another option is to get all values and then filter out nulls:
List<TXN_PMTxnHistory_rb> listHistory_rb6 =
listHistory_rb5.values().stream()
.map(opt -> opt.orElse(null))
.filter(Objects::nonNull)
.collect(Collectors.toList());
Let's assume I have a very long list of strings. I want to count the number of occurrences of each string. I don't know in advance how many distinct strings there are or what they are (meaning: I have no dictionary of all possible strings).
My first idea was to create a Map<String, Integer> and increase the counter every time I find the key again.
But this feels a bit clumsy. Is there a better way to count all occurrences of those strings?
Since Java 8, the easiest way is to use streams:
Map<String, Long> counts =
list.stream().collect(
Collectors.groupingBy(
Function.identity(), Collectors.counting()));
Prior to Java 8, your currently outlined approach works just fine. (And the Java 8+ way is doing basically the same thing too, just with a more concise syntax).
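For reference, a sketch of that classic pre-Java-8 loop (assuming the strings live in a List<String> named list):
Map<String, Integer> counts = new HashMap<>();
for (String s : list) {
    Integer current = counts.get(s);
    counts.put(s, current == null ? 1 : current + 1);
}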
You can do it without streams too:
Map<String, Long> map = new HashMap<>();
list.forEach(x -> map.merge(x, 1L, Long::sum));
If you really want a dedicated data structure, you can always look at Guava's Multiset.
Usage will be similar to this:
List<String> words = Arrays.asList("a b c a a".split(" "));
Multiset<String> wordCounts = words.stream()
        .collect(Collectors.toCollection(HashMultiset::create));
wordCounts.count("a"); // returns 3
wordCounts.count("b"); // returns 1
wordCounts.count("z"); // returns 0, no need to handle null!
I want to collect the first n elements from a stream without iterating through the entire thing. Is there a standard method that does this? Something like
MyList.stream()
.filter(x -> predicate(x))
.findFirstN(100)
would return a collection of up to 100 elements from the stream? My alternative is to evaluate the entire stream and then sample from the result, but that doesn't take advantage of the lazy evaluation inherent to streams.
limit(100) does exactly this. It is a short-circuiting intermediate operation, so upstream elements are only pulled until 100 matching elements have been produced:
MyList.stream()
    .filter(x -> predicate(x))
    .limit(100)
    .collect(Collectors.toList());