From the following list I need only 'wow' and 'quit'.
List<String> list = new ArrayList<>();
list.add("test");
list.add("test");
list.add("wow");
list.add("quit");
list.add("tree");
list.add("tree");
You can check the frequency of each element in the Collection and rule out the elements whose frequency is higher than 1.
List<String> list = new ArrayList<String>();
list.add("test");
list.add("test");
list.add("wow");
list.add("quit");
list.add("tree");
list.add("tree");
for (String s : list) {
    if (Collections.frequency(list, s) == 1) {
        System.out.println(s);
    }
}
Output:
wow
quit
This snippet should leave you with a set (output) which contains only non-duplicated elements of your list.
HashSet<String> temp = new HashSet<String>();
HashSet<String> output = new HashSet<String>();
for (String element : list)
{
    if (temp.contains(element)) output.remove(element);
    else
    {
        temp.add(element);
        output.add(element);
    }
}
Operates in O(n) expected time: one batch of constant-time hash-set operations (lookups, inserts, removes) for each of the n elements in the list. (With TreeSets instead of HashSets it would be O(n*log(n)).)
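For reference, the same idea as a minimal runnable sketch (class and variable names are mine), using the sample list from the question:
import java.util.*;

public class UniqueOnly {
    public static void main(String[] args) {
        List<String> list = Arrays.asList("test", "test", "wow", "quit", "tree", "tree");
        Set<String> temp = new HashSet<>();
        Set<String> output = new HashSet<>();
        for (String element : list) {
            if (temp.contains(element)) {
                output.remove(element); // second or later occurrence: drop it
            } else {
                temp.add(element);      // remember we have seen it once
                output.add(element);    // tentatively keep it
            }
        }
        System.out.println(output);     // [wow, quit] (HashSet order not guaranteed)
    }
}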
You can use a HashMap implementation to count occurrences and select only the ones that occur once.
e.g.
void check(List<String> list)
{
Map<String,Integer> checker = new HashMap<String,Integer>();
List<String> result = new ArrayList<String>();
for(String value: list)
{
Integer count = checker.get(value);
if (count==null)
{
count = 0;
}
checker.put(value, ++count);
}
// now select only values with count == 1
for(String value: checker.keySet())
{
if (checker.get(value) == 1)
{
result.add(value);
}
}
System.out.println(result);
}
And a third way:
List<String> result = new ArrayList<>();
for (String o : list) {
    if (list.indexOf(o) == list.lastIndexOf(o))
        result.add(o);
}
Note that indexOf and lastIndexOf each scan the list, so this approach is O(n^2) overall.
Here is a Java 8 way without streams:
Map<String, Long> counts = new HashMap<>();
list.forEach(word -> counts.merge(word, 1L, Long::sum));
counts.values().removeIf(count -> count > 1);
This first iterates the list and stores the frequency of each word in the counts map. For this I'm using the Map.merge method, which either associates the provided value (1L in this case) with the given key (word here) or uses the provided merge function (Long::sum) to combine an existing value with the given one.
Then, words with a frequency greater than 1 are removed from the map via the Collection.removeIf method.
The whole process has O(n) time complexity.
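If you then want the unique words as a list rather than a map, one small extra step (my addition, not part of the original answer) copies the surviving keys out:
// after removeIf, the keys remaining in counts are exactly the words that occurred once
List<String> unique = new ArrayList<>(counts.keySet());
System.out.println(unique); // [wow, quit] (HashMap order not guaranteed)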
Java 8+
list.stream() // Stream
.filter(i -> Collections.frequency(list, i) == 1) // Stream
.collect(Collectors.toList()) // List
.forEach(System.out::println); // void
It prints every element from that list that appears exactly once.
Details:
lambda expressions
Stream interface
Collections class
@ROMANIA_Engineer's solution should work just fine, but it does hide an O(n^2) complexity, since Collections.frequency is itself an O(n) operation.
A more efficient solution that can still be squeezed into a single statement is to count how many times each item occurs and then filter just the items that appear once:
list.stream()
.collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
.entrySet()
.stream()
.filter(e -> e.getValue() == 1L)
.map(Map.Entry::getKey)
.forEach(System.out::println);
List<String> aList = Arrays.asList("test", "test",
        "wow", "wow", "wow");
Set<String> hashSet = new HashSet<>(aList);
Now you can print the hashSet: all the duplicate values have been collapsed into a single copy. (Note that this keeps one copy of each value rather than discarding duplicated values entirely, which is what the question asks for.)
Easily clean a list using a lambda:
list.removeIf(element -> Collections.frequency(list, element) > 1);
If you're open to using a third-party library, the following can be used with Eclipse Collections:
List<String> list = Arrays.asList("test", "test", "wow", "quit", "tree", "tree");
Set<String> set = Bags.mutable.withAll(list).selectUnique();
System.out.println(set);
Outputs:
[wow, quit]
You can also construct a Bag directly instead of creating a List as follows:
MutableBag<String> bag =
Bags.mutable.with("test", "test", "wow", "quit", "tree", "tree");
MutableSet<String> set = bag.selectUnique();
Note: I am a committer for Eclipse Collections
We have a list:
List<String> strList = Arrays.asList("10.0 string1", "10.3 string2", "10.0 string3", "10.4 string4","10.3 string5");
each entry is a string made of 2 strings separated by a space.
The objective is to find all the entries with the maximum number of occurrences (i.e. 10.0 and 10.3, with 2 occurrences each).
The following code works. The question is: could these 3 statements be reduced to 1, or at least 2?
var map2 = strList.stream()
        .map(m -> m.split(" ", 2)[0])
        .collect(Collectors.groupingBy(Function.identity(), LinkedHashMap::new, Collectors.counting()));
var max3 = map2.entrySet().stream()
        .max(Map.Entry.comparingByValue())
        .get()
        .getValue();
var listOfMax2 = map2.entrySet().stream()
        .filter(entry -> entry.getValue().equals(max3))
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
System.out.println(listOfMax2);
The code you have is pretty straightforward if you change the names of your variables to something meaningful. You could write a custom collector, but I doubt it is worth the effort or would make your code much more readable. If you insist on chaining your stream, the easiest solution I can think of is to first build the frequency map, then invert it so the values (frequencies) become keys and the keys become values, collecting into a TreeMap (which is sorted by key), and finally take the last entry:
List<String> strList = Arrays.asList("10.0 string1", "10.3 string2", "10.0 string3", "10.4 string4", "10.3 string5");
var mostFrequentEntries =
strList.stream()
.map(s -> s.substring(0, s.indexOf(' ')))
.collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
.entrySet()
.stream()
.collect(Collectors.groupingBy(Map.Entry::getValue, TreeMap::new, Collectors.mapping(Map.Entry::getKey, Collectors.toList())))
.lastEntry().getValue();
System.out.println(mostFrequentEntries);
The simplest way I know is to start with a frequency count for the targeted value and return the maximum count together with the map in a data structure for subsequent processing.
Here is some data (added to yours for the demo):
List<String> strList = Arrays.asList("10.0 string1",
"10.0 string2", "10.3 string3", "10.0 string4",
"10.3 string5", "10.4 string6", "10.3 string7",
"10.4 string8", "10.5 string9", "10.6 string10");
First, stream the list and create a map based on frequency. This is done by using toMap and incrementing the count for duplicate keys.
Then stream the entries of that map looking for the maximum count, and return the count and the map in a SimpleEntry data structure.
Entry<Integer, Map<String, Integer>> result = strList.stream()
        .collect(Collectors.collectingAndThen(
                Collectors.toMap(str -> str.split("\\s+", 2)[0], s -> 1, Integer::sum),
                m -> new SimpleEntry<>(
                        m.isEmpty() ? 0 : Collections.max(m.values()), m)));
Now, using the returned map and the maximum count, print all the keys that have the same count.
int max = result.getKey();
result.getValue().forEach((k,v)-> {
if (v == max) {
System.out.println(k);
}
});
prints
10.4
10.3
10.0
Thanks to Holger for making some suggestions regarding Collections.max and the two argument version of String.split().
I am trying to do the following:
1) get the elements from a collection that satisfy a condition
2) sort them based on length
3) return only the elements with the max length
So for example
List<String> list = new ArrayList<>();
list.add("xone");
list.add("two");
list.add("xthree");
list.add("xseven");
Using streams I can create:
list.stream()
    .filter(e -> e.startsWith("x"))
    .sorted(Comparator.comparingInt(String::length))
    .collect(...)
However, this just sorts the result. Is there any pretty way to return only the elements with the maximum found length? In this case it would be "xthree" and "xseven".
Thanks for help!
Using streams:
List<String> longest =
list.stream().filter( e -> e.startsWith("x"))
.collect(groupingBy(String::length))
.entrySet()
.stream()
.max(comparingInt(e -> e.getKey()))
.get()
.getValue();
But personally I would say it's better to do it without streams: even though the code is longer, I find it easier to follow, and it avoids processing all the strings that are shorter than the longest already found:
List<String> longest = new ArrayList<>();
int max = 0;
for (String s : list) {
if (!s.startsWith("x")) continue;
// Ignore the string if it is shorter.
if (s.length() < max) continue;
if (s.length() > max) {
// We found a longer string. Discard the current entries.
longest.clear();
max = s.length();
}
// Add the string to the list of longest strings.
longest.add(s);
}
If you want to use streams, I would split this into two parts for clarity and to reduce memory usage:
final int maxLen = list.stream()
.max(Comparator.comparingInt(String::length))
.get()
.length();
List<String> maxSized = list.stream()
.filter(item -> item.length() == maxLen)
.collect(Collectors.toList());
You could collect and group by length, and take the maximum length collection. That would use more memory, but iterate fewer times. It depends on what performance characteristics you want.
maxSized = list.stream()
.collect(Collectors.groupingBy(String::length))
.entrySet()
.stream()
.max(Comparator.comparingInt(e -> e.getKey()))
.get()
.getValue();
Without the stream API you could do this:
int maxLen = 0;
for (String s : list) {
maxLen = Math.max(maxLen, s.length());
}
List<String> maxSized = new ArrayList<>();
for (String s : list) {
if (s.length() == maxLen) {
maxSized.add(s);
}
}
for (String s: maxSized) {
System.out.println(s);
}
Prints:
xthree
xseven
A slightly more compact way of writing those groupingBy operations could be as follows:
TreeMap<Integer, List<String>> map = new TreeMap<>(list.stream()
.collect(Collectors.groupingBy(String::length)));
List<String> maxLengthStrings = map.lastEntry().getValue();
Let's say I have one list with elements like:
List<String> endings= Arrays.asList("AAA", "BBB", "CCC", "DDD");
And I have another large list of strings from which I would want to select all elements ending with any of the strings from the above list.
List<String> fullList= Arrays.asList("111.AAA", "222.AAA", "111.BBB", "222.BBB", "111.CCC", "222.CCC", "111.DDD", "222.DDD");
Ideally I would want a way to partition the second list so that it contains four groups, each group containing only those elements ending with one of the strings from first list. So in the above case the results would be 4 groups of 2 elements each.
I found this example but I am still missing the part where I can filter by all endings which are contained in a different list.
Map<Boolean, List<String>> grouped = fullList.stream().collect(Collectors.partitioningBy((String e) -> !e.endsWith("AAA")));
UPDATE: MC Emperor's answer does work, but it crashes on lists containing millions of strings, so it doesn't work that well in practice.
Update
This one is similar to the approach from the original answer, but now fullList is no longer traversed many times. Instead, it is traversed once, and for each element, the list of endings is searched for a match. This is mapped to an Entry(ending, fullListItem), and then grouped by the list item. While grouping, the value elements are unwrapped to a List.
Map<String, List<String>> obj = fullList.stream()
.map(item -> endings.stream()
.filter(item::endsWith)
.findAny()
.map(ending -> new AbstractMap.SimpleEntry<>(ending, item))
.orElse(null))
.filter(Objects::nonNull)
.collect(groupingBy(Map.Entry::getKey, mapping(Map.Entry::getValue, toList())));
Original answer
You could use this:
Map<String, List<String>> obj = endings.stream()
.map(ending -> new AbstractMap.SimpleEntry<>(ending, fullList.stream()
.filter(str -> str.endsWith(ending))
.collect(Collectors.toList())))
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
It takes all endings and traverses the fullList for elements ending with the value.
Note that with this approach, for each element it traverses the full list. This is rather inefficient, and I think you are better off using another way to map the elements. For instance, if you know something about the structure of the elements in fullList, then you can group it immediately.
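For example, if every element is known to have the shape prefix + "." + suffix, a sketch of grouping directly on the trailing part (this assumes well-formed input; the variable name is mine):
Map<String, List<String>> grouped = fullList.stream()
        .collect(Collectors.groupingBy(s -> s.substring(s.lastIndexOf('.') + 1)));
// {AAA=[111.AAA, 222.AAA], BBB=[111.BBB, 222.BBB], ...}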
To partition a stream means putting each element into one of two groups. Since you have more suffixes, you want grouping instead, i.e. use groupingBy instead of partitioningBy.
If you want to support an arbitrary endings list, you might prefer something better than a linear search.
One approach is using a sorted collection, using a suffix-based comparator.
The comparator can be implemented like
Comparator<String> backwards = (s1, s2) -> {
for(int p1 = s1.length(), p2 = s2.length(); p1 > 0 && p2 > 0;) {
int c = Integer.compare(s1.charAt(--p1), s2.charAt(--p2));
if(c != 0) return c;
}
return Integer.compare(s1.length(), s2.length());
};
The logic is similar to the natural order of string, with the only difference that it runs from the end to the beginning. In other words, it’s equivalent to Comparator.comparing(s -> new StringBuilder(s).reverse().toString()), but more efficient.
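A quick throwaway check of that equivalence claim (not part of the solution; the sample values are mine):
List<String> a = new ArrayList<>(Arrays.asList("111.AAA", "2.BBB", "AAA", "x.AAA"));
List<String> b = new ArrayList<>(a);
a.sort(backwards);
b.sort(Comparator.comparing(s -> new StringBuilder(s).reverse().toString()));
System.out.println(a.equals(b)); // true: both comparators impose the same order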
Then, given an input like
List<String> endings= Arrays.asList("AAA", "BBB", "CCC", "DDD");
List<String> fullList= Arrays.asList("111.AAA", "222.AAA",
"111.BBB", "222.BBB", "111.CCC", "222.CCC", "111.DDD", "222.DDD");
you can perform the task as
// prepare collection with faster lookup
TreeSet<String> suffixes = new TreeSet<>(backwards);
suffixes.addAll(endings);
// use it for grouping
Map<String, List<String>> map = fullList.stream()
.collect(Collectors.groupingBy(suffixes::floor));
But if you are only interested in the count of each group, you should count right while grouping, avoiding to store lists of elements:
Map<String, Long> map = fullList.stream()
.collect(Collectors.groupingBy(suffixes::floor, Collectors.counting()));
If the list can contain strings which match no suffix of the list, you have to replace suffixes::floor with s -> { String g = suffixes.floor(s); return g!=null && s.endsWith(g)? g: "_None"; } or a similar function.
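Putting that together, a sketch of the grouping that tolerates unmatched strings (the "_None" key is arbitrary):
Map<String, List<String>> map = fullList.stream()
        .collect(Collectors.groupingBy(s -> {
            String g = suffixes.floor(s);
            return g != null && s.endsWith(g) ? g : "_None"; // bucket for non-matching strings
        }));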
Use groupingBy.
Map<String, List<String>> grouped = fullList
.stream()
.collect(Collectors.groupingBy(s -> s.split("\\.")[1]));
s.split("\\.")[1] will take the yyy part of xxx.yyy.
EDIT: if you want to empty the values for which the ending is not in the list, you can filter them out:
grouped.keySet().forEach(key->{
if(!endings.contains(key)){
grouped.put(key, Collections.emptyList());
}
});
If your fullList has some elements with suffixes that are not present in your endings, you could try something like:
List<String> endings= Arrays.asList("AAA", "BBB", "CCC", "DDD");
List<String> fullList= Arrays.asList("111.AAA", "222.AAA", "111.BBB", "222.BBB", "111.CCC", "222.CCC", "111.DDD", "222.DDD", "111.EEE");
Function<String,String> suffix = s -> endings.stream()
.filter(e -> s.endsWith(e))
.findFirst().orElse("UnknownSuffix");
Map<String,List<String>> grouped = fullList.stream()
.collect(Collectors.groupingBy(suffix));
System.out.println(grouped);
If you create a helper method getSuffix() that accepts a String and returns its suffix (for example getSuffix("111.AAA") will return "AAA"), you can filter the Strings having suffix contained in the other list and then group them:
Map<String,List<String>> grouped =
fullList.stream()
.filter(s -> endings.contains(getSuffix(s)))
.collect(Collectors.groupingBy(s -> getSuffix(s)));
For example, if the suffix always begins at index 4, you can have:
public static String getSuffix(String s) {
return s.substring(4);
}
and the above Stream pipeline will return the Map:
{AAA=[111.AAA, 222.AAA], CCC=[111.CCC, 222.CCC], BBB=[111.BBB, 222.BBB], DDD=[111.DDD, 222.DDD]}
P.S. note that the filter step would be more efficient if you change the endings List to a HashSet.
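For example, a tiny tweak along those lines (not required for correctness):
Set<String> endingsSet = new HashSet<>(endings); // O(1) contains instead of O(n)
// ...then filter with: .filter(s -> endingsSet.contains(getSuffix(s)))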
One can use groupingBy on the substrings together with a filter to ensure that the final Map contains just the relevant values. This could be done as:
Map<String, List<String>> grouped = fullList.stream()
.collect(Collectors.groupingBy(a -> getSuffix(a)))
.entrySet().stream()
.filter(e -> endings.contains(e.getKey()))
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
private static String getSuffix(String a) {
    return a.split("\\.")[1];   // note: split takes a regex, so the dot must be escaped
}
You can use groupingBy with a filter on the endings list, as:
fullList.stream()
        .collect(groupingBy(str -> endings.stream()
                .filter(ele -> str.endsWith(ele))
                .findFirst()
                .orElse("unknown"))) // orElse avoids a NoSuchElementException for strings with no matching ending
I have an ArrayList with the following strings:
List<String> e = new ArrayList<String>();
e.add("123");
e.add("122");
e.add("125");
e.add("123");
I want to check the list for duplicates and remove them from the list. In this case my list will only have two values, which in this example would be 122 and 125; the two 123s will go away.
What would be the best way to do this? I was thinking of using a Set, but that will only remove one of the duplicates.
In Java 8 you can do:
e.removeIf(s -> Collections.frequency(e, s) > 1);
If you're not on Java 8, you can create a HashMap<String, Integer>. If the String already appears in the map, increment its value by one; otherwise, add it to the map.
For example:
put("123", 1);
Now let's assume that you have "123" again; you should get the count for that key and add one to it:
put("123", get("123") + 1);
Now you can easily iterate over the map and create a new array list with the keys whose values are < 2.
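A sketch of that last step, assuming the map described above is called counts:
List<String> result = new ArrayList<>();
for (Map.Entry<String, Integer> entry : counts.entrySet()) {
    if (entry.getValue() < 2) {      // keep only strings that occurred once
        result.add(entry.getKey());
    }
}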
References:
ArrayList#removeIf
Collections#frequency
HashMap
You can also use filter in Java 8:
e.stream().filter(s -> Collections.frequency(e, s) == 1).collect(Collectors.toList())
You could use a HashMap<String, Integer>.
You iterate over the list and, if the hash map does not contain the string, you add it together with a value of 1.
If, on the other hand you already have the string, you simply increment the counter. Thus, the map for your string would look like this:
{"123", 2}
{"122", 1}
{"125", 1}
You would then create a new list with every key whose value is 1.
Here is a non-Java 8 solution using a map to count occurrences:
Map<String, Integer> map = new HashMap<String, Integer>();
for (String s : list) {
    if (map.get(s) == null) {
        map.put(s, 1);
    }
    else {
        map.put(s, map.get(s) + 1);
    }
}

List<String> newList = new ArrayList<String>();
// Remove from list if there are multiples of them.
for (Map.Entry<String, Integer> entry : map.entrySet())
{
    if (entry.getValue() > 1) {
        newList.add(entry.getKey());
    }
}
list.removeAll(newList);
Solution using an ArrayList:
public static void main(String args[]) throws Exception {
List<String> e = new ArrayList<String>();
List<String> duplicate = new ArrayList<String>();
e.add("123");
e.add("122");
e.add("125");
e.add("123");
for(String str : e){
if(e.indexOf(str) != e.lastIndexOf(str)){
duplicate.add(str);
}
}
for(String str : duplicate){
e.remove(str);
}
for(String str : e){
System.out.println(str);
}
}
The simplest solutions using streams have O(n^2) time complexity. If you try them on a List with millions of entries, you'll be waiting a very, very long time. An O(n) solution is:
list = list.stream()
.collect(Collectors.groupingBy(Function.identity(), LinkedHashMap::new, Collectors.counting()))
.entrySet()
.stream()
.filter(e -> e.getValue() == 1)
.map(Map.Entry::getKey)
.collect(Collectors.toList());
Here, I used a LinkedHashMap to maintain the order. Note that static imports can simplify the collect part.
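With static imports (Function.identity and the Collectors members), the same pipeline reads a bit more compactly:
import static java.util.function.Function.identity;
import static java.util.stream.Collectors.*;

list = list.stream()
        .collect(groupingBy(identity(), LinkedHashMap::new, counting()))
        .entrySet()
        .stream()
        .filter(e -> e.getValue() == 1)
        .map(Map.Entry::getKey)
        .collect(toList());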
This is so complicated that I think using for loops is the best option for this problem.
Map<String, Integer> map = new LinkedHashMap<>();
for (String s : list)
map.merge(s, 1, Integer::sum);
list = new ArrayList<>();
for (Map.Entry<String, Integer> e : map.entrySet())
if (e.getValue() == 1)
list.add(e.getKey());
List<String> e = new ArrayList<String>();
e.add("123");
e.add("122");
e.add("125");
e.add("123");
e.add("125");
e.add("124");

List<String> result = new ArrayList<String>();
for (String current : e) {
    if (!result.contains(current)) {
        result.add(current);
    } else {
        result.remove(current);
    }
}
e.clear();
e.addAll(result);
Note that a value occurring an odd number of times (three, say) survives with a single copy, and result.contains makes this O(n^2).
I'm a fan of the Google Guava API. Using the Collections2 utility and a generic Predicate implementation it's possible to create a utility method to cover multiple data types.
This assumes that the Objects in question have a meaningful .equals
implementation
@Test
public void testTrimDupList() {
Collection<String> dups = Lists.newArrayList("123", "122", "125", "123");
dups = removeAll("123", dups);
Assert.assertFalse(dups.contains("123"));
Collection<Integer> dups2 = Lists.newArrayList(123, 122, 125,123);
dups2 = removeAll(123, dups2);
Assert.assertFalse(dups2.contains(123));
}
private <T> Collection<T> removeAll(final T element, Collection<T> collection) {
return Collections2.filter(collection, new Predicate<T>(){
@Override
public boolean apply(T arg0) {
return !element.equals(arg0);
}});
}
Thinking about this a bit more
Most of the other examples on this page are using the java.util.List API as the base Collection. I'm not sure if that is done with intent, but if the returned element has to be a List, another intermediary method can be used as specified below. Polymorphism ftw!
@Test
public void testTrimDupListAsCollection() {
Collection<String> dups = Lists.newArrayList("123", "122", "125", "123");
//List used here only to get access to the .contains method for validating behavior.
dups = Lists.newArrayList(removeAll("123", dups));
Assert.assertFalse(dups.contains("123"));
Collection<Integer> dups2 = Lists.newArrayList(123, 122, 125,123);
//List used here only to get access to the .contains method for validating behavior.
dups2 = Lists.newArrayList(removeAll(123, dups2));
Assert.assertFalse(dups2.contains(123));
}
@Test
public void testTrimDupListAsList() {
List<String> dups = Lists.newArrayList("123", "122", "125", "123");
dups = removeAll("123", dups);
Assert.assertFalse(dups.contains("123"));
List<Integer> dups2 = Lists.newArrayList(123, 122, 125,123);
dups2 = removeAll(123, dups2);
Assert.assertFalse(dups2.contains(123));
}
private <T> List<T> removeAll(final T element, List<T> collection) {
return Lists.newArrayList(removeAll(element, (Collection<T>) collection));
}
private <T> Collection<T> removeAll(final T element, Collection<T> collection) {
return Collections2.filter(collection, new Predicate<T>(){
@Override
public boolean apply(T arg0) {
return !element.equals(arg0);
}});
}
Something like this (using two Sets):
Set<Object> set = new HashSet<>();
Set<Object> blackList = new HashSet<>();

public void add(Object object) {
    if (blackList.contains(object)) {
        return;
    }
    boolean added = set.add(object);
    if (!added) {
        set.remove(object);
        blackList.add(object);
    }
}
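Driving that over the question's list would look like this (a sketch, assuming the two sets and the add method above live in the same class):
for (String s : Arrays.asList("123", "122", "125", "123")) {
    add(s);
}
System.out.println(set); // [122, 125] (HashSet order not guaranteed)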
If you are going for a set, you can achieve it with two sets, maintaining the duplicate values in the second one as follows:
List<String> duplicateList = new ArrayList<String>();
duplicateList.add("123");
duplicateList.add("122");
duplicateList.add("125");
duplicateList.add("123");
duplicateList.add("127");
duplicateList.add("127");
System.out.println(duplicateList);
Set<String> seen = new TreeSet<String>();
Set<String> duplicateValues = new TreeSet<String>();
for (String s : duplicateList) {
    if (!seen.add(s)) {   // add returns false if the value was already seen
        duplicateValues.add(s);
    }
}
duplicateList.removeAll(duplicateValues);
System.out.println(duplicateList);
System.out.println(duplicateValues);
Output:
Original list: [123, 122, 125, 123, 127, 127]
After removing duplicates: [122, 125]
Values which were duplicates: [123, 127]
Note: This solution might not be optimized. You might find a better solution than this.
With the Guava library, using a multiset and streams:
e = HashMultiset.create(e).entrySet().stream()
        .filter(me -> me.getCount() == 1)
        .map(me -> me.getElement())
        .collect(toList());
This is pretty, and reasonably fast for large lists (O(n) with a rather large constant factor). But it does not preserve order (LinkedHashMultiset can be used if that is desired) and it creates a new list instance.
It is also easy to generalise, to instead remove all triplicates for example.
In general the multiset data structure is really useful to keep in one's toolbox.
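For instance, the generalisation mentioned above (keeping everything that occurs fewer than three times) is a small change; the flatMap keeps both copies of a surviving pair (my sketch, same static imports as above):
e = HashMultiset.create(e).entrySet().stream()
        .filter(me -> me.getCount() < 3)
        .flatMap(me -> Collections.nCopies(me.getCount(), me.getElement()).stream())
        .collect(toList());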
I want to iterate over two lists and get a new filtered list with the values not present in the second list. Can anyone help?
I have two lists: one is a list of strings, and the other is a list of MyClass objects.
List<String> list1;
List<MyClass> list2;
class MyClass {
    String str;

    MyClass(String val) {
        this.str = val;
    }
    ...
}
I want a filtered list of strings based on this: check the second list for elements (e.g. "abc") and keep the values from list1 that are not present in list2.
List<String> list1 = Arrays.asList("abc", "xyz", "lmn");
List<MyClass> list2 = new ArrayList<MyClass>();
MyClass obj = new MyClass("abc");
list2.add(obj);
obj = new MyClass("xyz");
list2.add(obj);
Now I want the new filtered list, which will have the value "lmn", i.e. the values present in list1 but not in list2.
// produce the filter set by streaming the items from list 2
// assume list2 has elements of type MyClass where getStr gets the
// string that might appear in list1
Set<String> list2Strings = list2.stream()
        .map(MyClass::getStr)
        .collect(Collectors.toSet());

// stream list 1 and use the set to filter it, keeping only the
// strings that do NOT occur in list 2
List<String> unavailable = list1.stream()
        .filter(e -> !list2Strings.contains(e))
        .collect(Collectors.toList());
This can be achieved with the below (comparing against the str values of list2, since list2 holds MyClass objects rather than strings):
List<String> unavailable = list1.stream()
        .filter(e -> list2.stream().noneMatch(m -> m.getStr().equals(e)))
        .collect(Collectors.toList());
Doing it with streams is easy and readable:
Predicate<String> notIn2 = s -> list2.stream().noneMatch(mc -> s.equals(mc.str));
List<String> list3 = list1.stream().filter(notIn2).collect(Collectors.toList());
Set<String> strs = list2.stream().map(x -> x.getStr()).collect(Collectors.toSet());
list1 = list1.stream()
        .filter(str1 -> !strs.contains(str1))
        .collect(Collectors.toList());
This may be more efficient, since the lookup set is built once rather than rebuilt inside the filter for every element.
If you stream the first list and use a filter based on contains within the second...
list1.stream()
.filter(item -> !list2.contains(item))
The next question is what code you'll add to the end of this streaming operation to further process the results... over to you.
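For instance, one possible completion, assuming for the moment that both are plain List<String> (with the question's MyClass list you would first map it to its str values):
List<String> result = list1.stream()
        .filter(item -> !list2.contains(item))
        .collect(Collectors.toList());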
Also, list.contains is quite slow, so you would be better with sets.
But then if you're using sets, you might find some easier operations to handle this, like removeAll
Set<String> list1 = ...;
Set<String> list2 = ...;
Set<String> target = new HashSet<>();
target.addAll(list1);
target.removeAll(list2);
Given we don't know how you're going to use this, it's not really possible to advise which approach to take.
See below; I would welcome anyone's feedback on this code.
Not common between the two lists (this assumes both lists hold the same element type):
List<String> l3 =list1.stream().filter(x -> !list2.contains(x)).collect(Collectors.toList());
Common between the two lists:
List<String> l3 =list1.stream().filter(x -> list2.contains(x)).collect(Collectors.toList());
If you have a class with an id and you want to filter by id:
Line 1: map all the ids into a set.
Line 2: filter out what does not exist in that set.
Set<String> mapId = entityResponse.getEntities().stream().map(Entity::getId).collect(Collectors.toSet());
List<String> entityNotExist = entityValues.stream().filter(n -> !mapId.contains(n.getId())).map(DTOEntity::getId).collect(Collectors.toList());
List<String> unavailable = list1.stream()
        .filter(e -> (list2.stream()
                .filter(d -> d.getStr().equals(e))
                .count()) < 1)
        .collect(Collectors.toList());
For this, if I change it to
List<String> unavailable = list1.stream()
        .filter(e -> (list2.stream()
                .filter(d -> d.getStr().equals(e))
                .count()) > 0)
        .collect(Collectors.toList());
will it give me the elements of list1 that match list2, right?
@DSchmdit's answer worked for me. I would like to add to it. My requirement was to filter a file based on some configurations stored in a table.
The file is first retrieved and collected as a list of DTOs. I receive the configurations from the db and store them as another list. This is how I made the filtering work with streams:
List<FileModel> modelList = Files
        .lines(Paths.get("src/main/resources/rootFiles/XXXXX.txt")).parallel()
        .map(line -> {
            FileModel fileModel = new FileModel();
            line = line.trim();
            if (line != null && !line.isEmpty()) {
                System.out.println("line" + line);
                fileModel.setPlanId(Long.parseLong(line.substring(0, 5)));
                fileModel.setDivisionList(line.substring(15, 30));
                fileModel.setRegionList(line.substring(31, 50));
                Map<String, String> newMap = new HashedMap<>();
                newMap.put("other", line.substring(51, 80));
                fileModel.setOtherDetailsMap(newMap);
            }
            return fileModel;
        }).collect(Collectors.toList());
for (FileModel model : modelList) {
System.out.println("model:" + model);
}
DbConfigModelList respList = populate();
System.out.println("after populate");
List<DbConfig> respModelList = respList.getFeedbackResponseList();
Predicate<FileModel> somePre = s -> respModelList.stream().anyMatch(respitem -> {
System.out.println("sinde respitem:"+respitem.getPrimaryConfig().getPlanId());
System.out.println("s.getPlanid()"+s.getPlanId());
System.out.println("s.getPlanId() == respitem.getPrimaryConfig().getPlanId():"+
(s.getPlanId().compareTo(respitem.getPrimaryConfig().getPlanId())));
return s.getPlanId().compareTo(respitem.getPrimaryConfig().getPlanId()) == 0
&& (s.getSsnId() != null);
});
final List<FileModel> finalList = modelList.stream().parallel().filter(somePre).collect(Collectors.toList());
finalList.stream().forEach(item -> {
System.out.println("filtered item is:"+item);
});
The details are in the implementation of the filter predicates. This proves much more performant than iterating with loops and filtering out.