I have a Map<String, String> which indicates links from A to B. I want to chain all possible routes. For example:
[A , B]
[B , C]
[C , D]
[E , F]
[F , G]
[H , I]
will output
[A , B , C , D]
[E , F , G]
[H , I]
I found a similar question here (but it doesn't fully meet my requirement): https://stackoverflow.com/a/10176274/298430
And here is my solution :
public static <T> Set<List<T>> chainLinks(Map<T, T> map) {
    Set<List<T>> resultSet = new HashSet<>();
    map.forEach((from, to) -> {
        if (!map.containsValue(from)) {
            List<T> list = new ArrayList<>();
            list.add(from);
            list.addAll(inner(to, map));
            resultSet.add(list);
        }
    });
    return resultSet;
}
private static <T> List<T> inner(T from, Map<T, T> map) {
    if (map.containsKey(from)) {
        List<T> list = new ArrayList<>();
        list.add(from);
        list.addAll(inner(map.get(from), map));
        return list;
    } else {
        List<T> end = new ArrayList<>();
        end.add(from);
        return end;
    }
}
and the test case :
@Test
public void testChainLinks() {
    Map<String, String> map = new HashMap<String, String>() {{
        put("A", "B");
        put("B", "C");
        put("C", "D");
        put("E", "F");
        put("F", "G");
        put("H", "I");
    }};
    Utils.chainLinks(map).forEach(list -> {
        logger.info("list = {}", list.stream().collect(Collectors.joining(" -> ")));
    });
}
It does work correctly:
list = H -> I
list = E -> F -> G
list = A -> B -> C -> D
But I don't like my solution, because I feel it could be solved in a more functional style; it smells like a job for stream.fold(). I tried, in vain, to convert my code to a purely functional style, meaning no intermediate object creation.
Is it possible? Any hints are appreciated!
Non-recursive solution:
Set<List<String>> result = map.keySet().stream()
        .filter(k -> !map.containsValue(k))
        .map(e -> new ArrayList<String>() {{
            String x = e;
            add(x);
            while (map.containsKey(x))
                add(x = map.get(x));
        }})
        .collect(Collectors.toSet());
EDIT: included filter from David Pérez Cabrera's comment to remove intermediate lists.
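The double-brace idiom above works, but it creates an anonymous ArrayList subclass per chain. As a sketch of the same traversal without it (the helper name `follow` is mine, not from the answer):

```java
import java.util.*;
import java.util.stream.*;

public class ChainDemo {
    // Follow the links from a start key until no further mapping exists.
    static List<String> follow(String start, Map<String, String> map) {
        List<String> chain = new ArrayList<>();
        for (String x = start; x != null; x = map.get(x)) {
            chain.add(x);
        }
        return chain;
    }

    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("A", "B");
        map.put("B", "C");
        map.put("C", "D");
        map.put("E", "F");
        map.put("F", "G");
        map.put("H", "I");

        Set<List<String>> result = map.keySet().stream()
                .filter(k -> !map.containsValue(k)) // keep only chain heads
                .map(k -> follow(k, map))
                .collect(Collectors.toSet());

        System.out.println(result.contains(Arrays.asList("A", "B", "C", "D")));
    }
}
```

It uses the same head-filtering trick as the stream version, just with a plain loop for the walk.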
Well, you can easily use recursion:
private static Set<List<String>> chainLinks(Map<String, String> map) {
    return map.keySet().stream()
            .filter(k -> !map.containsValue(k))
            .map(key -> calc(key, map, new LinkedList<>()))
            .collect(Collectors.toSet());
}

private static List<String> calc(String key, Map<String, String> map, List<String> list) {
    list.add(key);
    if (map.containsKey(key))
        return calc(map.get(key), map, list);
    else
        return list;
}
There's an alternative solution using a custom collector with close-to-linear complexity. It's really faster than the previously proposed solutions, though it looks somewhat uglier.
public static <T> Collector<Entry<T, T>, ?, List<List<T>>> chaining() {
    BiConsumer<Map<T, ArrayDeque<T>>, Entry<T, T>> accumulator = (m, entry) -> {
        ArrayDeque<T> k = m.remove(entry.getKey());
        ArrayDeque<T> v = m.remove(entry.getValue());
        if (k == null && v == null) {
            // new pair does not connect to existing chains:
            // create a new chain with two elements
            k = new ArrayDeque<>();
            k.addLast(entry.getKey());
            k.addLast(entry.getValue());
            m.put(entry.getKey(), k);
            m.put(entry.getValue(), k);
        } else if (k == null) {
            // new pair prepends an existing chain
            v.addFirst(entry.getKey());
            m.put(entry.getKey(), v);
        } else if (v == null) {
            // new pair appends an existing chain
            k.addLast(entry.getValue());
            m.put(entry.getValue(), k);
        } else {
            // new pair connects two existing chains together;
            // reuse the first chain and update the tail marker
            // (btw, if k == v here, then we found a cycle)
            k.addAll(v);
            m.put(k.getLast(), k);
        }
    };
    BinaryOperator<Map<T, ArrayDeque<T>>> combiner = (m1, m2) -> {
        throw new UnsupportedOperationException();
    };
    // our map contains every chain twice: mapped to its head and to its tail,
    // so in the finisher we have to keep only half of them
    // (for example, the ones keyed by the head).
    // The map step can be simplified to Entry::getValue if you are fine with
    // a List<Collection<T>> result.
    Function<Map<T, ArrayDeque<T>>, List<List<T>>> finisher = m -> m
            .entrySet().stream()
            .filter(e -> e.getValue().getFirst().equals(e.getKey()))
            .map(e -> new ArrayList<>(e.getValue()))
            .collect(Collectors.toList());
    return Collector.of(HashMap::new, accumulator, combiner, finisher);
}
Usage:
List<List<String>> res = map.entrySet().stream().collect(chaining());
(I did not implement the combiner step, thus it cannot be used for parallel streams, but it would not be very hard to add.) The idea is simple: we track the partial chains found so far in a map where the keys point to chain starts and ends, and the values are ArrayDeque objects containing the chains found so far. Every new entry either updates an existing deque (appending or prepending to it) or merges two deques together.
According to my tests, this version works 1000x faster than @saka1029's solution for a 50000-element input array with 100 chains.
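For reference, the same deque-merging idea written as a plain loop (names are mine, not from the answer) may make the accumulator logic easier to follow:

```java
import java.util.*;

public class DequeChainDemo {
    // Each partial chain is stored twice: keyed by its head and by its tail.
    public static List<List<String>> chain(Map<String, String> links) {
        Map<String, ArrayDeque<String>> chains = new HashMap<>();
        for (Map.Entry<String, String> e : links.entrySet()) {
            ArrayDeque<String> k = chains.remove(e.getKey());
            ArrayDeque<String> v = chains.remove(e.getValue());
            if (k == null && v == null) {        // brand-new two-element chain
                ArrayDeque<String> d = new ArrayDeque<>();
                d.addLast(e.getKey());
                d.addLast(e.getValue());
                chains.put(e.getKey(), d);
                chains.put(e.getValue(), d);
            } else if (k == null) {              // prepend to existing chain
                v.addFirst(e.getKey());
                chains.put(e.getKey(), v);
            } else if (v == null) {              // append to existing chain
                k.addLast(e.getValue());
                chains.put(e.getValue(), k);
            } else {                             // merge two chains
                k.addAll(v);
                chains.put(k.getLast(), k);      // overwrites v's stale tail key
            }
        }
        List<List<String>> result = new ArrayList<>();
        for (Map.Entry<String, ArrayDeque<String>> e : chains.entrySet()) {
            if (e.getValue().getFirst().equals(e.getKey())) { // keep heads only
                result.add(new ArrayList<>(e.getValue()));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> map = new LinkedHashMap<>();
        map.put("A", "B"); map.put("B", "C"); map.put("C", "D");
        map.put("E", "F"); map.put("F", "G"); map.put("H", "I");
        List<List<String>> res = chain(map);
        res.sort(Comparator.comparing(l -> l.get(0)));
        System.out.println(res);
    }
}
```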
I have the following class:
public static class GenerateMetaAlert implements WindowFunction<Tuple2<String, Boolean>, Tuple2<String, Boolean>, Tuple, TimeWindow> {
    @Override
    public void apply(Tuple key, TimeWindow timeWindow, Iterable<Tuple2<String, Boolean>> iterable, Collector<Tuple2<String, Boolean>> collector) throws Exception {
        //code
    }
}
What I'm trying to do is check, for each element of the collection, whether there is any other element with the opposite value in a field.
An example:
Iterable: [<val1,val2>,<val3,val4>,<val5,val6>,...,<valx,valy>]
              ||          ||          ||              ||
            elem1       elem2       elem3           elemn
What I would like to test:
foreach(element)
if elem(i).f0 = elem(i+1).f0 then ...
if elem(i).f0 = elem(i+2).f0 then ...
<...>
if elem(i+1).f0 = elem(i+2).f0 then ...
<...>
if elem(n-1).f0 = elem(n).f0 then ...
I think this would be possible using something like this:
Tuple2<String, Boolean> tupla = iterable.iterator().next();
iterable.iterator().forEachRemaining(e -> {
    if ((e.f0 == tupla.f0) && (e.f1 != tupla.f1)) collector.collect(e);
});
But as I'm new to Java, I don't know how to do this in an optimal way.
This is a part of a Java program which uses Apache Flink:
.keyBy(0, 1)
.timeWindow(Time.seconds(60))
.apply(new GenerateMetaAlert())
Testing:
Using the following code:
public static class GenerateMetaAlert implements WindowFunction<Tuple2<String, Boolean>, Tuple2<String, Boolean>, Tuple, TimeWindow> {
    @Override
    public void apply(Tuple key, TimeWindow timeWindow, Iterable<Tuple2<String, Boolean>> iterable, Collector<Tuple2<String, Boolean>> collector) throws Exception {
        System.out.println("key: " + key);
        StreamSupport.stream(iterable.spliterator(), false)
                .collect(Collectors.groupingBy(t -> t.f0)) // yields a Map<String, List<Tuple2<String, Boolean>>>
                .values() // yields a Collection<List<Tuple2<String, Boolean>>>
                .stream()
                .forEach(l -> {
                    System.out.println("l.size: " + l.size());
                    // l is the list of tuples for some common f0
                    while (l.size() > 1) {
                        Tuple2<String, Boolean> t0 = l.get(0);
                        System.out.println("t0: " + t0);
                        l = l.subList(1, l.size());
                        l.stream()
                                .filter(t -> t.f1 != t0.f1)
                                .forEach(t -> System.out.println("t: " + t));
                    }
                });
    }
}
The result is:
key: (868789022645948,true)
key: (868789022645948,false)
l.size: 2
l.size: 2
t0: (868789022645948,true)
t0: (868789022645948,false)
Conclusion of this test: it is as if the condition .filter(t -> t.f1 != t0.f1) is never met.
If I change .filter(t -> t.f1 != t0.f1) to .filter(t -> t.f1 != true) (or false), the filter works.
I also tried the following:
final Boolean[] aux = new Boolean[1];
<...>
Tuple2<String, Boolean> t0 = l.get(0);
aux[0] = t0.f1;
<...>
.filter(t -> !t.f1.equals(aux[0]))
But even with that, I don't get any output (I only get output when I use t.f1.equals(aux[0])).
An Iterable allows you to obtain as many Iterators over its elements as you like, but each of them iterates over all the elements, and only once. Thus, your idea for using forEachRemaining() will not work as you hope. Because you're generating a new Iterator to invoke that method on, it will start at the beginning instead of after the element most recently provided by the other iterator.
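A minimal illustration of this point, outside Flink (variable names are mine): each call to iterator() starts over from the beginning, so the skip-the-first pattern only works with a single shared iterator.

```java
import java.util.*;

public class IteratorDemo {
    public static void main(String[] args) {
        Iterable<String> iterable = Arrays.asList("a", "b", "c");

        // Two separate iterators: the second one starts from the beginning again.
        String first = iterable.iterator().next();
        List<String> rest = new ArrayList<>();
        iterable.iterator().forEachRemaining(rest::add);
        System.out.println(rest); // "a" shows up again

        // One shared iterator: next() advances it, forEachRemaining continues after it.
        Iterator<String> it = iterable.iterator();
        String head = it.next();
        List<String> tail = new ArrayList<>();
        it.forEachRemaining(tail::add);
        System.out.println(tail);
    }
}
```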
What you can do instead is create a Stream by use of the Iterable's Spliterator, and use a grouping-by Collector to group the iterable's tuples by their first value. You can then process the tuple lists as you like.
For example, although I have some doubts as to whether it's what you actually want, this implements the logic described in the question:
StreamSupport.stream(iterable.spliterator(), false)
        .collect(Collectors.groupingBy(t -> t.f0)) // yields a Map<String, List<Tuple2<String, Boolean>>>
        .values() // yields a Collection<List<Tuple2<String, Boolean>>>
        .stream()
        .forEach(l -> {
            // l is the list of tuples for some common f0
            while (l.size() > 1) {
                Tuple2<String, Boolean> t0 = l.get(0);
                l = l.subList(1, l.size());
                l.stream()
                        // use equals(): f1 is a boxed Boolean, so != compares references
                        .filter(t -> !t.f1.equals(t0.f1))
                        .forEach(t -> collector.collect(t));
            }
        });
Note well that that can collect the same tuple multiple times, as follows from your pseudocode. If you wanted something different, such as collecting only tuples representing a flip of f1 value for a given f0, once each, then you would want a different implementation of the lambda in the outer forEach() operation.
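As a sketch of that alternative, in plain Java with a stand-in `Tuple2` class instead of Flink's (so the types here are illustrative only): group by f0 and emit each tuple once, but only when both f1 values occur within its group.

```java
import java.util.*;
import java.util.stream.*;

public class FlipDemo {
    // Stand-in for org.apache.flink.api.java.tuple.Tuple2
    static class Tuple2<A, B> {
        final A f0; final B f1;
        Tuple2(A f0, B f1) { this.f0 = f0; this.f1 = f1; }
        public String toString() { return "(" + f0 + "," + f1 + ")"; }
    }

    public static void main(String[] args) {
        List<Tuple2<String, Boolean>> input = Arrays.asList(
                new Tuple2<>("A", true), new Tuple2<>("A", false),
                new Tuple2<>("B", true), new Tuple2<>("B", true));

        // Keep only the groups whose f1 values actually flip, then emit each tuple once.
        List<Tuple2<String, Boolean>> flips = input.stream()
                .collect(Collectors.groupingBy(t -> t.f0))
                .values().stream()
                .filter(group -> group.stream().map(t -> t.f1).distinct().count() > 1)
                .flatMap(List::stream)
                .collect(Collectors.toList());

        System.out.println(flips);
    }
}
```

Group "B" never flips, so its tuples are dropped entirely; group "A" is emitted once per tuple.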
I made a collector that can reduce a stream to a map whose keys are the items that can be bought by certain customers and whose values are the names of those customers. My implementation works properly on a sequential stream,
but when I try to use a parallel one it doesn't work at all; the resulting sets always contain one customer name.
List<Customer> customerList = this.mall.getCustomerList();

Supplier<Object> supplier = ConcurrentHashMap<String, Set<String>>::new;

BiConsumer<Object, Customer> accumulator = (o, customer) ->
    customer.getWantToBuy().stream().map(Item::getName).forEach(
        item -> ((ConcurrentHashMap<String, Set<String>>) o)
            .merge(item, new HashSet<String>(Collections.singleton(customer.getName())),
                (s, s2) -> {
                    HashSet<String> res = new HashSet<>(s);
                    res.addAll(s2);
                    return res;
                }));

BinaryOperator<Object> combiner = (o, o2) -> {
    ConcurrentHashMap<String, Set<String>> res = new ConcurrentHashMap<>((ConcurrentHashMap<String, Set<String>>) o);
    res.putAll((ConcurrentHashMap<String, Set<String>>) o2);
    return res;
};

Function<Object, Map<String, Set<String>>> finisher = o -> new HashMap<>((ConcurrentHashMap<String, Set<String>>) o);

Collector<Customer, ?, Map<String, Set<String>>> toItemAsKey =
    new CollectorImpl<>(supplier, accumulator, combiner, finisher, EnumSet.of(
        Collector.Characteristics.CONCURRENT,
        Collector.Characteristics.IDENTITY_FINISH));

Map<String, Set<String>> itemMap = customerList.stream().parallel().collect(toItemAsKey);
There is certainly a problem in my accumulator implementation or in another function, but I cannot figure it out. Could anyone suggest what I should do?
Your combiner is not correctly implemented.
You overwrite all entries that have the same key; what you want is to add the values to existing keys.
BinaryOperator<ConcurrentHashMap<String, Set<String>>> combiner = (o, o2) -> {
    ConcurrentHashMap<String, Set<String>> res = new ConcurrentHashMap<>(o);
    o2.forEach((key, set) -> set.forEach(string ->
        res.computeIfAbsent(key, k -> new HashSet<>()).add(string)));
    return res;
};
I'm trying Java 8; I want to iterate over 2 collections and call a parameter function for each pair of values.
In the abstract, I want to apply a foo(tuple, i) function on each iteration:
[ v1, v2, v3, v4, v5, v6 ] (first collection)
[ w1, w2, w3, w4, w5, w6 ] (second collection)
---------------------------
foo(<v1,w1>, 0)
foo(<v2,w2>, 1)
...
foo(<v6,w6>, 5)
Now, what I've got so far (Java and pseudocode):
// What should the type of f be?
private <S, U> void iterateSimultaneously(Collection<S> c1, Collection<U> c2, Function f) {
    int i = 0;
    Iterator<S> it1 = c1.iterator();
    Iterator<U> it2 = c2.iterator();
    while (it1.hasNext() && it2.hasNext()) {
        Tuple<S, U> tuple = new Tuple<>(it1.next(), it2.next());
        // call somehow f(tuple, i)
        i++;
    }
}
// pseudocode — is this possible in Java?
iterateSimultaneously(c1, c2, (e1, e2, i) -> {
    // play with those items and the i value
});
Is something like this what you're looking for?
private <S, U> void iterateSimultaneously(Collection<S> c1, Collection<U> c2, BiConsumer<Tuple<S, U>, Integer> f) {
    int i = 0;
    Iterator<S> it1 = c1.iterator();
    Iterator<U> it2 = c2.iterator();
    while (it1.hasNext() && it2.hasNext()) {
        Tuple<S, U> tuple = new Tuple<>(it1.next(), it2.next());
        f.accept(tuple, i);
        i++;
    }
}

iterateSimultaneously(c1, c2, (t, i) -> {
    // stuff
});
What type is the function f supposed to return? If nothing, change it to a consumer instead. If you want it to accept a tuple, you must declare that, as I have done here. Is this what you're looking for?
You are probably looking for a BiConsumer:
private <S, U> void iterateSimultaneously(Collection<S> c1, Collection<U> c2,
                                          BiConsumer<Tuple<S, U>, Integer> f) {
    f.accept(tuple, i);
}
and call it with:
iterateSimultaneously(c1, c2, (tuple, i) -> doSomethingWith(tuple, i));
The signature of doSomethingWith would look like:
private <S, U> void doSomethingWith(Tuple<S, U> tuple, int i) {
}
You can find a detailed implementation of what you are looking for, using the Java 8 Stream API, here (the zip() method):
https://github.com/JosePaumard/streams-utils/blob/master/src/main/java/org/paumard/streams/StreamsUtils.java#L398
Take a look at Guava's utilities for streams, particularly Streams.zip and Streams.mapWithIndex. You might use them both to achieve what you want:
Collection<Double> numbers = Arrays.asList(1.1, 2.2, 3.3, 4.4, 5.5);
Collection<String> letters = Arrays.asList("a", "b", "c", "d", "e");
Stream<Tuple<Double, String>> zipped = Streams.zip(
numbers.stream(),
letters.stream(),
Tuple::new);
Stream<String> withIndex = Streams.mapWithIndex(
zipped,
(tuple, index) -> index + ": " + tuple.u + "/" + tuple.v);
withIndex.forEach(System.out::println);
This produces the following output:
0: 1.1/a
1: 2.2/b
2: 3.3/c
3: 4.4/d
4: 5.5/e
This works by first zipping streams for c1 and c2 collections into one zipped stream of tuples and then mapping this zipped stream with a function that receives both each tuple and its corresponding index.
Note that Streams.mapWithIndex must receive a BiFunction, which means that it must return a value. If you want to consume both the tuples and the indices instead, I'm afraid you will need to create a new tuple containing the original tuple and the index:
Stream<Tuple<Tuple<Double, String>, Long>> withIndex = Streams.mapWithIndex(
zipped,
Tuple::new);
withIndex.forEach(tuple -> someMethod(tuple.u, tuple.v));
Where someMethod has the following signature:
<U, V> void someMethod(Tuple<U, V> tuple, long index)
Note 1: this example assumes the following Tuple class is used:
public class Tuple<U, V> {
    private final U u;
    private final V v;

    Tuple(U u, V v) {
        this.u = u;
        this.v = v;
    }

    // TODO: getters and setters, hashCode and equals
}
Note 2: while you can achieve the same with iterators, the main advantage of these utilities is that they also work efficiently on parallel streams.
Note 3: this functionality is available since Guava 21.0.
I have a multimap-like structure, Map<String, Set<String>>, as input. I want to group the entries of this map if any two of the entries' value sets have a common element. The output should be of the form Map<Set<String>, Set<String>>, where each key is a group of keys from the input map.
eg. given this input:
A -> [1,2]
B -> [3,4]
C -> [5,6]
D -> [1,5]
Output:
[A,C,D] -> [1,2,5,6]
[B] -> [3,4]
Here A & D have 1 as a common element, and C & D have 5 as a common element, so A, C, and D are merged into one key.
There are lots of ways you can solve this. One that I like (assuming you are using Java 8) is to implement it as a collector for a stream of Map.Entry. Here's a possible implementation:
public class MapCollector {
    private final Map<Set<String>, Set<Integer>> result = new HashMap<>();

    public void accept(Map.Entry<String, Set<Integer>> entry) {
        Set<String> key = new HashSet<>(Arrays.asList(entry.getKey()));
        Set<Integer> value = new HashSet<>(entry.getValue());
        Set<Set<String>> overlapKeys = result.entrySet().stream()
                .filter(e -> e.getValue().stream().anyMatch(value::contains))
                .map(Map.Entry::getKey)
                .collect(Collectors.toSet());
        overlapKeys.stream().forEach(key::addAll);
        overlapKeys.stream().map(result::get).forEach(value::addAll);
        result.keySet().removeAll(overlapKeys);
        result.put(key, value);
    }

    public MapCollector combine(MapCollector other) {
        // iterate over entries: accept takes a Map.Entry, not a key/value pair
        other.result.entrySet().forEach(this::accept);
        return this;
    }

    public static Collector<Map.Entry<String, Set<Integer>>, MapCollector, Map<Set<String>, Set<Integer>>> collector() {
        return Collector.of(MapCollector::new, MapCollector::accept, MapCollector::combine, c -> c.result);
    }
}
This can be used as follows:
Map<Set<String>,Set<Integer>> result = input.entrySet().stream()
.collect(MapCollector.collector());
Most of the work is done in the accept method. It finds all overlapping sets and moves them to the new map entry. It supports parallel streams which could be useful if your map is massive.
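Applied to the example input, the merge logic in accept can also be rendered as a plain loop, which may make it easier to trace (this is an illustrative restatement, not the collector itself):

```java
import java.util.*;

public class GroupMergeDemo {
    public static void main(String[] args) {
        Map<String, Set<Integer>> input = new LinkedHashMap<>();
        input.put("A", new HashSet<>(Arrays.asList(1, 2)));
        input.put("B", new HashSet<>(Arrays.asList(3, 4)));
        input.put("C", new HashSet<>(Arrays.asList(5, 6)));
        input.put("D", new HashSet<>(Arrays.asList(1, 5)));

        Map<Set<String>, Set<Integer>> result = new HashMap<>();
        for (Map.Entry<String, Set<Integer>> entry : input.entrySet()) {
            Set<String> key = new HashSet<>(Collections.singleton(entry.getKey()));
            Set<Integer> value = new HashSet<>(entry.getValue());
            // Find existing groups whose values overlap the new entry's values.
            Set<Set<String>> overlapKeys = new HashSet<>();
            for (Map.Entry<Set<String>, Set<Integer>> e : result.entrySet()) {
                if (e.getValue().stream().anyMatch(value::contains)) {
                    overlapKeys.add(e.getKey());
                }
            }
            // Merge all overlapping groups into the new one.
            for (Set<String> k : overlapKeys) {
                key.addAll(k);
                value.addAll(result.get(k));
            }
            result.keySet().removeAll(overlapKeys);
            result.put(key, value);
        }

        System.out.println(result.size()); // 2 groups: {A,C,D} and {B}
        System.out.println(result.get(new HashSet<>(Arrays.asList("A", "C", "D"))));
    }
}
```

When D arrives, its values {1,5} overlap both {A}'s and {C}'s groups, so all three are merged into one entry.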
I have the following Java 6 and Java 8 code:
List<ObjectType1> lst1 = // a list of ObjectType1 objects
List<ObjectType2> lst2 = // a list of ObjectType2 objects, the same size as lst1
List<ObjectType3> lst3 = new ArrayList<ObjectType3>(lst1.size());

for (int i = 0; i < lst1.size(); i++) {
    lst3.add(new ObjectType3(lst1.get(i).getAVal(), lst2.get(i).getAnotherVal()));
}
Is there any way in Java 8 to handle the previous for loop more concisely using lambdas?
A Stream is tied to a given Iterable/Collection, so you can't really "iterate" two collections in parallel.
One workaround is to create a stream of indices, but that does not necessarily improve on the for loop. The stream version could look like:
List<ObjectType3> lst3 = IntStream.range(0, lst1.size())
.mapToObj(i -> new ObjectType3(lst1.get(i).getAVal(), lst2.get(i).getAnotherVal()))
.collect(toList());
You could create a method that transforms two collections into a new collection, like this:
public <T, U, R> Collection<R> singleCollectionOf(final Collection<T> collectionA, final Collection<U> collectionB, final Supplier<Collection<R>> supplier, final BiFunction<T, U, R> mapper) {
if (Objects.requireNonNull(collectionA).size() != Objects.requireNonNull(collectionB).size()) {
throw new IllegalArgumentException();
}
Objects.requireNonNull(supplier);
Objects.requireNonNull(mapper);
Iterator<T> iteratorA = collectionA.iterator();
Iterator<U> iteratorB = collectionB.iterator();
Collection<R> returnCollection = supplier.get();
while (iteratorA.hasNext() && iteratorB.hasNext()) {
returnCollection.add(mapper.apply(iteratorA.next(), iteratorB.next()));
}
return returnCollection;
}
The important part here is that it will map the obtained iteratorA.next() and iteratorB.next() into a new object.
It is called like this:
List<Integer> list1 = IntStream.range(0, 10).boxed().collect(Collectors.toList());
List<Integer> list2 = IntStream.range(0, 10).map(n -> n * n + 1).boxed().collect(Collectors.toList());
singleCollectionOf(list1, list2, ArrayList::new, Pair::new).stream().forEach(System.out::println);
In your example it would be:
List<ObjectType3> lst3 = singleCollectionOf(lst1, lst2, ArrayList::new, ObjectType3::new);
Where, for example, Pair::new is shorthand for the lambda (t, u) -> new Pair(t, u).
I haven't found a way to transform one stream into another in place; however, I accomplished a similar feat using a Map. :)
Map<Integer, String> result = new HashMap<>();
for (int index = 100; index > 0; index--) {
    result.put(index, String.valueOf(index));
}

result.keySet().stream()
        .filter(key -> key % 3 == 0)
        .sorted()
        .forEach(key -> result.put(key, "Fizz"));

result.keySet().stream()
        .filter(key -> key % 5 == 0)
        .sorted()
        .forEach(key -> result.put(key, "Buzz"));

result.keySet().stream()
        .filter(key -> key % 3 == 0 && key % 5 == 0)
        .sorted()
        .forEach(key -> result.put(key, "FizzBuzz"));

result.keySet().stream().forEach(key -> System.out.println(result.get(key)));
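For comparison, the same FizzBuzz output can come from a single stream pipeline, with no intermediate map to mutate (a sketch of the alternative, not a general stream-updating mechanism):

```java
import java.util.*;
import java.util.stream.*;

public class FizzBuzzStream {
    public static void main(String[] args) {
        // Decide each value in one pass; check the %15 case first.
        List<String> out = IntStream.rangeClosed(1, 100)
                .mapToObj(i -> i % 15 == 0 ? "FizzBuzz"
                             : i % 3 == 0 ? "Fizz"
                             : i % 5 == 0 ? "Buzz"
                             : String.valueOf(i))
                .collect(Collectors.toList());
        out.forEach(System.out::println);
    }
}
```

This also sidesteps the three separate map passes, and the output order is guaranteed by the range rather than by HashMap key ordering.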