Intersection of two lists of objects in Java 8. Can someone tell me what I am doing wrong?
List<Student> originalStudent = new ArrayList<>();
List<Student> newStudent = new ArrayList<>();
List<Student> intersectListStudent = new LinkedList<>();
originalStudent.add(new Student("William", "Tyndale", 1));
originalStudent.add(new Student("Jonathan", "Edwards", 2));
originalStudent.add(new Student("Martin", "Luther", 3));
newStudent.add(new Student("Jonathan", "Edwards", 2));
newStudent.add(new Student("James", "Tyndale", 4));
newStudent.add(new Student("Roger", "Moore", 5));
originalStudent.forEach(n ->
        newStudent.stream()
                .filter(db -> !n.getName().equals(db.getName()) &&
                        !n.getLastName().equals(db.getLastName()))
                .forEach(student -> intersectListStudent.add(student)));
You are violating the side-effects principle of java-stream, which in a nutshell says that a stream pipeline shouldn't modify another collection while performing its actions. I haven't tested your code; however, this is not the way you should treat streams.
How to do it better?
Simply use List::contains in the filter's predicate to keep only the elements that appear in both lists.
List<Student> students = originalStudent.stream()
.filter(newStudent::contains)
.collect(Collectors.toList());
This solution (specifically the method List::contains) relies on the equality comparison implemented by Object::equals. Hence, you need to override that very method in the class Student.
Edit: Please be aware that an automatically generated Object::equals override would include the id in the equality computation. Make sure the equality is based on the name and surname only (thanks to @nullpointer).
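For illustration, a minimal sketch of such an override, based on name and surname only, might look like the following. The field names name and lastName are assumptions inferred from the getters used above, and java.util.Objects is used for null-safe comparison:
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Student)) return false;
    Student other = (Student) o;
    return Objects.equals(name, other.name)
            && Objects.equals(lastName, other.lastName);
}
@Override
public int hashCode() {
    return Objects.hash(name, lastName); // must stay consistent with equals
}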
Without Object::equals overridden?
You have to perform the comparison in the filter using another stream and the method Stream::anyMatch, which returns true if any element matches the predicate.
List<Student> students = originalStudent.stream()
.filter(os -> newStudent.stream() // filter
.anyMatch(ns -> // compare both
os.getName().equals(ns.getName()) && // name
os.getLastName().equals(ns.getLastName()))) // last name
.collect(Collectors.toList());
What you can do is construct a SortedSet<Student> from the two concatenated lists originalStudent and newStudent. The sorted set uses a Comparator.comparing(Student::getName).thenComparing(Student::getLastName) as its comparator.
Stream.concat(originalStudent.stream(), newStudent.stream())
.collect(Collectors.toCollection(() -> new TreeSet<>(
Comparator.comparing(Student::getName)
.thenComparing(Student::getLastName))
))
I'm trying to understand a very basic concept and I'm not sure why it doesn't work.
I have two lists of type SID, and I want to combine them into one Set with distinct values (as it should be by the definition of a Set). However, when I print the set values I get duplicates.
List<SID> list1 = (.....)
List<SID> list2 = (.....)
Set<SID> combined = new HashSet<>();
combined.addAll(list1);
combined.addAll(list2);
I also tried with distinct()
Set<SID> combinedSet = Stream.concat(list1.stream(), list2.stream()).distinct().collect(Collectors.toSet());
List<SID> combinedList = Stream.concat(list1.stream(), list2.stream()).distinct().collect(Collectors.toList());
Any idea why?
The SID class overrides the hashCode and equals methods... is there any other way to achieve it?
I am new to Java 8. I have a list of custom objects of type A, where A is like below:
class A {
int id;
String name;
}
I would like to determine if all the objects in that list have the same name. I can do it by iterating over the list and comparing the previous and current name values. In that context, I found "How to count number of custom objects in list which have same value for one of its attribute". But is there any better way to do the same in Java 8 using streams?
You can map from A to String, apply the distinct intermediate operation, use limit(2) so the stream can short-circuit where possible, and then check whether the count is less than or equal to 1: if it is, all objects have the same name; if not, they do not.
boolean result = myList.stream()
.map(A::getName)
.distinct()
.limit(2)
.count() <= 1;
With the example shown above, we leverage the limit(2) operation so that we stop as soon as we find two distinct object names.
One way is to get the name of the first element of the list and call allMatch to check every element against it.
String firstName = yourListOfAs.get(0).name;
boolean allSameName = yourListOfAs.stream().allMatch(x -> x.name.equals(firstName));
Another way is to calculate the count of distinct names:
boolean result = myList.stream().map(A::getName).distinct().count() == 1;
Of course, you need to add a getter for the 'name' field.
One more option is to use partitioning. Partitioning is a special kind of grouping in which the resulting map contains at most two different groups: one for true and one for false. With this you can get the number of matching and non-matching elements.
String firstName = yourListOfAs.get(0).name;
Map<Boolean, List<A>> partitioned = yourListOfAs.stream()
        .collect(partitioningBy(a -> firstName.equals(a.name)));
Java 9: using takeWhile. takeWhile takes values until the predicate returns false; this is similar to a break statement in a while loop.
String firstName = yourListOfAs.get(0).name;
List<A> filtered = yourListOfAs.stream()
        .takeWhile(a -> firstName.equals(a.name))
        .collect(Collectors.toList());
if (filtered.size() == yourListOfAs.size()) {
    // all objects have the same name
}
Or use groupingBy then check entrySet size.
boolean b = list.stream()
.collect(Collectors.groupingBy(A::getName,
Collectors.toList())).entrySet().size() == 1;
I have a class with the following definition:
public class MyClass {
int val;
type t;
}
Where type is an enum with values A,B,C,D,....
I have a list of objects of MyClass, and I want to keep only the first element of each type occurring in the list.
for example :-
Given list:
{{1,A},{2,A},{4,B},{5,B},{3,C}}
Output:
{{1,A},{4,B},{3,C}}
Is there a way to use filter() of a stream of the list to solve this problem?
I'm not sure if there's a way to do this with a single Stream pipeline, but you can do it with two.
The first pipeline groups the objects by their type property (producing a Map<type, List<MyClass>>), and the second takes the first object of each List produced by the first pipeline and collects them into the output List:
List<MyClass> filtered = mycl.stream()
        .collect(Collectors.groupingBy(c -> c.t))
        .values()
        .stream()
        .map(l -> l.get(0))
        .collect(Collectors.toList());
Here is a solution which is not as elegant as I hoped for, but it works:
Set<MyType> typeSet = new HashSet<>();
List<MyClass> result = list.stream()
        .filter(c -> typeSet.add(c.getType()))
        .collect(Collectors.toList());
I'm not sure if there is any direct way of doing it, but you can achieve it with the following steps (see the sketch after this list):
1) Use the stream's findFirst method with a filter on the type.
2) Repeat the above step for each type.
3) Merge all the results into one list.
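A rough sketch of those steps, assuming the enum is the type declared in the question and its field t is accessible as shown in the class definition (this is an illustration, not the original answer's code):
List<MyClass> firstOfEachType = new ArrayList<>();
for (type current : type.values()) {             // "type" is the enum from the question
    list.stream()
            .filter(c -> c.t == current)         // keep only elements of this type
            .findFirst()                         // first occurrence, if any
            .ifPresent(firstOfEachType::add);    // merge into one list
}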
One good way to achieve this is to override equals() and hashCode() in your MyClass class, checking equality on the basis of 'type'. Then put your List in a Set and it will remove all duplicates. :)
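For instance, assuming equals() and hashCode() are overridden to compare only the type field (that override itself is not shown here), a LinkedHashSet keeps the first element of each type in encounter order:
// Assumes MyClass.equals()/hashCode() compare only the type field.
Set<MyClass> firstPerType = new LinkedHashSet<>(list);   // later duplicates are dropped
List<MyClass> result = new ArrayList<>(firstPerType);    // back to a List if needed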
I have a POJO class SearchResults, that contains 4 Strings (title, number, date, status) and then all the getter and setter methods for it.
In another class I populate an ArrayList<SearchResults> results. Is there a way I can go through that list and erase any elements that have a duplicate number?
I've tried populating a new ArrayList by first passing results into a LinkedHashSet but that didn't work.
ArrayList<SearchResults> noDup;
noDup = new ArrayList<SearchResults>(new LinkedHashSet<SearchResults>(results));
I've also tried doing a .remove(indexOf()) but that didn't work either.
if(noDup.contains(new SearchResults("-1","","",""))){noDup.remove(noDup.indexOf(new SearchResults("-1","","","")));}
Any suggestions?
Edit:
The equals() method in SearchResults (wonr refers to the number)
@Override
public boolean equals(Object object){
if(object == null){
return false;
}
if(getClass() != object.getClass()){
return false;
}
SearchResults result = (SearchResults) object;
if((this.wonr == null) ? (result.wonr == null): this.wonr.equals(result.wonr)){
return false;
}
return true;
}
The suggestions for implementing hashCode and equals are possible options, but does this single number value truly define what it means for these objects to be equivalent in the general case? If not, defining equals and hashCode that way seems to be a hack.
Without altering the definition of equivalence, if in just this case you want to eliminate values with the same number value, there are other approaches you can try. You didn't give us the API for your SearchResult class, so I'll assume there's an accessible field named number.
One quick way is to use a TreeSet which defines its idea of equivalence based on an underlying comparison operation. Write a custom Comparator that only looks at the number field and you're good to go:
Java 8
List<SearchResult> allResultsWithDuplicates = // ... populated list
Comparator<SearchResult> comparator =
(left, right) -> Integer.compare(left.number, right.number);
Set<SearchResult> uniqueNumbers = new TreeSet<>(comparator);
uniqueNumbers.addAll(allResultsWithDuplicates);
As JB Nizet mentioned, if your SearchResult class has a getNumber accessor method you can use a function reference and eliminate the lambda expression defining Comparator:
Comparator<SearchResult> comparator = Comparator.comparing(SearchResult::getNumber);
Java 5-7
In earlier versions of Java you must implement the Comparator interface yourself. It then plugs into the code given above in exactly the same way. This example assumes there is an int getNumber() accessor method on your SearchResult class:
Comparator<SearchResult> comparator =
new Comparator<SearchResult>() {
@Override
public int compare(SearchResult sr1, SearchResult sr2) {
// Optional support for null arguments is left as
// an exercise for the reader.
return Integer.compare(sr1.getNumber(), sr2.getNumber());
}
};
Another way you can do it with Java 8 is this:
1) Create an (initially empty) set to track the numbers seen so far,
2) Stream over your list and filter with Set::add, which returns false for numbers already in the set:
Set<Integer> numbers = new HashSet<>();
List<SearchResult> noDups = listWithDups.stream()
.filter(sr -> numbers.add(sr.getNumber()))
.collect(Collectors.toList());
If you implemented equals() and hashCode() so that they only look at the number property, you could build a Set<SearchResult> instead of an ArrayList<SearchResult>, and you would implicitly get no duplicates (this is one of the properties of sets: they don't contain duplicates). You can still iterate over the entries in the set, so you should have all the functionality you need.
Stream the list, use the filter method to drop entries whose number has already been seen, and collect to another list (see the sketch below).
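For example, one common way to write that filter is a small "distinct by key" helper predicate. This sketch assumes a getNumber() accessor on SearchResults (an assumption, since the accessors were not shown) and uses java.util.function plus ConcurrentHashMap.newKeySet():
// Hypothetical helper: a stateful predicate that admits only the first element seen per key.
static <T> Predicate<T> distinctByKey(Function<T, ?> keyExtractor) {
    Set<Object> seen = ConcurrentHashMap.newKeySet();
    return t -> seen.add(keyExtractor.apply(t));
}
// Usage: keep only the first SearchResults for each number.
List<SearchResults> noDups = results.stream()
        .filter(distinctByKey(SearchResults::getNumber))
        .collect(Collectors.toList());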
Imagine that I have a list of certain objects:
List<Student>
And I need to generate another list including the ids of Students in the above list:
List<Integer>
Avoiding an explicit loop, is it possible to achieve this by using Apache Collections or Guava?
Which methods should be useful for my case?
Java 8 way of doing it:
List<Integer> idList = students.stream().map(Student::getId).collect(Collectors.toList());
With Guava you can use a Function like this:
private enum StudentToId implements Function<Student, Integer> {
INSTANCE;
@Override
public Integer apply(Student input) {
return input.getId();
}
}
and you can use this function to convert the List of students to ids like this:
Lists.transform(studentList, StudentToId.INSTANCE);
Surely it will loop in order to extract all the ids, but remember that Guava methods return a view, and the Function will only be applied when you iterate over the List<Integer>.
If you don't iterate, the loop is never executed.
Note: remember this is a view, and if you want to iterate multiple times it is better to copy the content into some other List<Integer>, like:
ImmutableList.copyOf(Iterables.transform(students, StudentToId.INSTANCE));
Thanks to Premraj for the alternative cool option, upvoted.
I have used Apache CollectionUtils and BeanUtils, and I am satisfied with the performance of the following code:
List<Long> idList = (List<Long>) CollectionUtils.collect(objectList,
new BeanToPropertyValueTransformer("id"));
It is worth mentioning that I will compare the performance of the Guava approach (provided by Premraj) and the CollectionUtils approach I used above, and pick the faster one.
Java 8 lambda expression solution:
List<Integer> iDList = students.stream().map((student) -> student.getId()).collect(Collectors.toList());
If someone gets here after a few years:
List<String> stringProperty = (List<String>) CollectionUtils.collect(listOfBeans, TransformerUtils.invokerTransformer("getProperty"));
You can use Eclipse Collections for this purpose
Student first = new Student(1);
Student second = new Student(2);
Student third = new Student(3);
MutableList<Student> list = Lists.mutable.of(first, second, third);
List<Integer> result = list.collect(Student::getId);
System.out.println(result); // [1, 2, 3]
The accepted answer can be written in an even shorter form in JDK 16, which includes a toList() method directly on Stream instances.
Java 16 solution
List<Integer> idList = students.stream().map(Student::getId).toList();
It is mathematically impossible to do this without a loop. In order to create a mapping F from a discrete set of values to another discrete set of values, F must operate on each element in the originating set. (A loop is required to do this, basically.)
That being said:
Why do you need a new list? You could be approaching whatever problem you are solving in the wrong way.
If you have a list of Student, then you are only a step or two away, when iterating through this list, from iterating over the I.D. numbers of the students.
for(Student s : list)
{
int current_id = s.getID();
// Do something with current_id
}
If you have a different sort of problem, then comment/update the question and we'll try to help you.