I'm quite new to programming and have a tricky question.
I have an object with multiple fields:
public class SampleObject {
private String number;
private String valueOne;
private String valueTwo;
private String valueThree;
// getters, setters, all-args constructor
}
Every object always has a non-null number attribute as well as exactly one non-null value field. So, for example, if valueOne is not null, the other two value fields, valueTwo and valueThree, are null.
So here's my problem:
The SampleObject is referenced in AnotherClass, which looks like this:
public class AnotherClass {
private UUID id;
private List<SampleObject> sampleObjects;
// getters, setters, all-args constructor
}
I am receiving one AnotherClass object containing multiple SampleObject instances in a list.
What I want to do is merge all SampleObjects that have the same number into one object and provide a map where the number is the key and the merged object is the value. For example:
Sample1(number:"1", valueOne="1", valueTwo=null, valueThree=null)
Sample2(number:"1", valueOne=null, valueTwo="2", valueThree=null)
Sample3(number:"1", valueOne=null, valueTwo=null, valueThree="3")
Sample4(number:"2", valueOne="5", valueTwo=null, valueThree=null)
Desired state:
Sample1Merged(number:"1", valueOne="1", valueTwo="2", valueThree="3")
Sample4(number:"2", valueOne="5", valueTwo=null, valueThree=null)
What I have already done is the following:
final Map<String, SampleObject> mapOfMergedSamples = new LinkedHashMap<>();
anotherClass.getSampleObjects().stream()
.sorted(Comparator.comparing(SampleObject::getNumber))
.forEach(s -> mapOfMergedSamples.put(s.getNumber(), new SampleObject(Stream.of(s.getValueOne(), s.getValueTwo())
.filter(Objects::nonNull)
.collect(Collectors.joining()), s.getValueThree()))
);
return mapOfMergedSamples;
The problem with my current attempt is that entries keep getting overwritten because they have the same key in the map (the number of the SampleObject). Does someone know how I can achieve my desired state?
Based on your usage of Collectors.joining() I assume that you want to concatenate all non-null values without any delimiter (anyway, it can easily be changed).
In order to combine SampleObject instances having the same number property, you can group them into an intermediate Map where the number would serve as the Key and a custom accumulation type (having properties valueOne, valueTwo, valueThree) would be the Value (note: if you don't want to define a new type, you can put the accumulation right into SampleObject, but I'll go with a separate class because this approach is more flexible).
Here's how it might look (for convenience, I've implemented the Consumer interface):
public class SampleObjectAccumulator implements Consumer<SampleObject> {
private StringBuilder valueOne = new StringBuilder();
private StringBuilder valueTwo = new StringBuilder();
private StringBuilder valueThree = new StringBuilder();
@Override
public void accept(SampleObject sampleObject) {
if (sampleObject.getValueOne() != null) valueOne.append(sampleObject.getValueOne());
if (sampleObject.getValueTwo() != null) valueTwo.append(sampleObject.getValueTwo());
if (sampleObject.getValueThree() != null) valueThree.append(sampleObject.getValueThree());
}
public SampleObjectAccumulator merge(SampleObjectAccumulator other) {
valueOne.append(other.valueOne);
valueTwo.append(other.valueTwo);
valueThree.append(other.valueThree);
return this;
}
public SampleObject toSampleObject(String number) {
return new SampleObject(
number,
valueOne.toString(),
valueTwo.toString(),
valueThree.toString()
);
}
// getters
}
To create an intermediate Map we can use the Collector groupingBy(), and as its downstream Collector, in order to leverage the custom accumulation type, we can provide a custom collector, which can be instantiated using the factory method Collector.of().
Then we need to create a stream over the entries of the intermediate map in order to transform the Value.
Note that sorting is applied only in the second stream.
AnotherClass anotherClass = // initializing the AnotherClass instance
final Map<String, SampleObject> mapOfMergedSamples = anotherClass.getSampleObjects().stream()
.collect(Collectors.groupingBy(
SampleObject::getNumber,
Collector.of(
SampleObjectAccumulator::new,
SampleObjectAccumulator::accept,
SampleObjectAccumulator::merge
)
))
.entrySet().stream()
.sorted(Map.Entry.comparingByKey())
.collect(Collectors.toMap(
Map.Entry::getKey,
e -> e.getValue().toSampleObject(e.getKey()),
(left, right) -> { throw new AssertionError("All keys are expected to be unique"); },
LinkedHashMap::new
));
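For reference, here's a small sketch of how the merged result could look with the sample data from the question (the all-args constructor call and the toString() format below are assumptions, not part of the original code):
AnotherClass anotherClass = new AnotherClass(UUID.randomUUID(), List.of(
new SampleObject("1", "1", null, null),
new SampleObject("1", null, "2", null),
new SampleObject("1", null, null, "3"),
new SampleObject("2", "5", null, null)
));
// Running the collector chain above on this instance should produce a map like:
// "1" -> SampleObject(number="1", valueOne="1", valueTwo="2", valueThree="3")
// "2" -> SampleObject(number="2", valueOne="5", valueTwo="", valueThree="")
// Note: with the StringBuilder-based accumulator, value fields that never receive
// a value come out as empty strings rather than null.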
I'm new to programming and am now facing a tricky problem:
I have an object which contains a list; the object looks like the following:
public class SampleClass {
private UUID id;
private List<ValueObject> values;
// getters, constructor, etc
}
The ValueObject contains some Strings like:
public class ValueObject {
private String firstName;
private String lastName;
private String address;
// getters, constructor, etc
}
I have a SampleClass instance which contains a list of multiple ValueObjects. Some of the ValueObjects have the same firstName and lastName.
What I want to achieve is to filter out all ValueObjects within a SampleClass object that have the same firstName and lastName, keeping only the last (according to the encounter order) ValueObject of each group of duplicates in the list.
I've tried the following:
SampleClass listWithDuplicates = // intializing SampleClass instance
listWithDuplicates.getValues().stream()
.collect(Collectors.groupingBy(
ValueObject::getLastName,
Collectors.toList()
));
This groups by lastName, but how do I then match the firstNames? The lastName can be equal while the firstName is different, in which case I still want to keep the element in my object, as it is not a duplicate. And then how do I remove the duplicates?
Thank you for your help
Update:
The order of the list should not be affected by removing the duplicates. Also, listWithDuplicates holds a SampleClass object.
You can solve this problem by using the four-args version of the Collector toMap(), which expects the following arguments:
keyMapper - a function which generates a Key out of a stream element;
valueMapper - a function producing a Value from a stream element;
mergeFunction - a function responsible for resolving Values mapped to the same Key;
mapFactory - allows specifying the required type of Map.
In case you can't change the implementation of equals/hashCode in ValueObject, you can introduce an auxiliary type that would serve as the Key.
public record FirstNameLastName(String firstName, String lastName) {
public FirstNameLastName(ValueObject value) {
this(value.getFirstName(), value.getLastName());
}
}
Note: if you're OK with overriding equals/hashCode of ValueObject based on its firstName and lastName, then you don't need the auxiliary type shown above. In the code below you can use Function.identity() as both keyMapper and valueMapper of toMap() (a sketch of that variant follows the code below).
And the stream can be implemented like this:
SampleClass listWithDuplicates = // initializing your domain object
List<ValueObject> uniqueValues = listWithDuplicates.getValues().stream()
.collect(Collectors.toMap(
FirstNameLastName::new, // keyMapper - creating Keys
Function.identity(), // valueMapper - generating Values
(left, right) -> right, // mergeFunction - resolving duplicates
LinkedHashMap::new // mapFactory - LinkedHashMap is needed to preserve the encounter order of the elements
))
.values().stream()
.toList();
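For completeness, here is a sketch of the variant mentioned in the note above, assuming equals/hashCode of ValueObject have been overridden to compare only firstName and lastName:
List<ValueObject> uniqueValues = listWithDuplicates.getValues().stream()
.collect(Collectors.toMap(
Function.identity(), // keyMapper - the element itself acts as the Key
Function.identity(), // valueMapper
(left, right) -> right, // mergeFunction - keep the last duplicate
LinkedHashMap::new // mapFactory - preserve the encounter order
))
.values().stream()
.toList();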
You can override the equals and hashCode methods in your ValueObject class (not tested):
public class ValueObject {
private String firstName;
private String lastName;
private String address;
// Constructor...
// Getters and setters...
@Override
public boolean equals(Object obj) {
return obj == this ||(obj instanceof ValueObject
&& ((ValueObject) obj).firstName.equals(this.firstName)
&& ((ValueObject) obj).lastName.equals(this.lastName)
);
}
@Override
public int hashCode() {
return (firstName + lastName).hashCode();
}
}
Then all you need is to use Stream#distinct to remove the duplicates
listWithDuplicates.getValues().stream().distinct()
.collect(Collectors.groupingBy(ValueObject::getLastName, Collectors.toList()));
You can read this answer for a little more information.
I cannot put it in a stream, but if you do create an equals method in the ValueObject based on firstName and lastName, you can filter based on the object. Put this in the ValueObject:
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
ValueObject that = (ValueObject) o;
return Objects.equals(firstName, that.firstName) && Objects.equals(lastName, that.lastName);
}
then it should work fine with a loop like this:
List<ValueObject> listWithoutDuplicates = new ArrayList<>();
for (var vo : listWithDuplicates.getValues()) {
if(!listWithoutDuplicates.contains(vo)){
listWithoutDuplicates.add(vo);
}
}
But a stream would be nicer; you can work that out once you've implemented the equals method.
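For instance, here is a sketch assuming the equals above (and, ideally, a matching hashCode, which Stream#distinct relies on internally) is in place; like the loop above, this keeps the first element of each group of duplicates:
List<ValueObject> listWithoutDuplicates = listWithDuplicates.getValues().stream()
.distinct()
.collect(Collectors.toList());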
I like the already provided answers and think they answer your question, but maybe for learning purposes another approach could be to use a Collector that compares the firstName and lastName fields of each ValueObject and then retains only the last instance of each group of duplicates.
List<ValueObject> filteredList = listWithDuplicates.getValues().stream()
.collect(Collectors.collectingAndThen(
Collectors.toMap(
vo -> vo.getLastName() + vo.getFirstName(),
vo -> vo,
(vo1, vo2) -> vo2
),
map -> new ArrayList<>(map.values())
));
So you group the ValueObject instances by lastName and firstName using a Collector that creates a Map whose keys are the concatenated lastName and firstName. The collectingAndThen collector then transforms the Map into a List of ValueObjects.
I have a class Student:
public class Student {
private String name;
private int age;
private String city;
private double salary;
private double incentive;
// getters, all-args constructor, etc.
}
And I have a list of Student instances called students.
I want to create a new list which will contain the Students grouped by their name, age and city. The salary and incentive of the students having these attributes identical should be summed up.
Example:
Input:
Student("Raj",10,"Pune",10000,100)
Student("Raj",10,"Pune",20000,200)
Student("Raj",20,"Pune",10000,100)
Student("Ram",30,"Pune",10000,100)
Student("Ram",30,"Pune",30000,300)
Student("Seema",10,"Pune",10000,100)
Output:
Student("Raj",10,"Pune",30000,300)
Student("Raj",20,"Pune",10000,100)
Student("Ram",30,"Pune",40000,400)
Student("Seema",10,"Pune",10000,100)
My attempt:
List<Student> students = // initializing the list
List<Student> res = new ArrayList<>(students.stream()
.collect(Collectors.toMap(
ec -> new AbstractMap.SimpleEntry<>(ec.getName(),ec.getAge(),ec.getCity()),
Function.identity(),
(a, b) -> new Student(
a.getName(), a.getAge(), a.getCity(), a.getSalary().add(b.getSalary()),a.getIncentive().add(b.getIncentive())
)
))
.values());
Which produces a compilation error:
Compile error: Cannot resolve constructor 'SimpleEntry(String, int, String)' and Cannot resolve method 'add(double)'
I've also tried some other options, but without success. How can I achieve that?
To obtain the total salary and incentive for students having the same name, age and city, you can group the data using a Map. So you were thinking in the right direction.
In order to achieve that, you would need some object that would hold the name, age and city.
You can't place this data into a Map.Entry because it can only hold two references. A quick and dirty option would be to pack these properties into a List using List.of() (Arrays.asList() for Java 8), or nest a map entry into another map entry (which would look very ugly). Although it's doable, I would not recommend doing so if you care about maintainability of code. Therefore, I'll not use this approach (but if you wish - just change one expression in the Collector).
A cleaner way would be to introduce a class, or a Java 16 record to represent these properties.
Option with a record would be very concise because all the boilerplate code would be auto-generated by the compiler:
public record NameAgeCity(String name, int age, String city) {
public static NameAgeCity from(Student s) {
return new NameAgeCity(s.getName(), s.getAge(), s.getCity());
}
}
For JDK versions earlier than 16 you can use the following class:
public static class NameAgeCity {
private String name;
private int age;
private String city;
// getters, all-args constructor
public static NameAgeCity from(Student s) {
return new NameAgeCity(s.getName(), s.getAge(), s.getCity());
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
NameAgeCity that = (NameAgeCity) o;
return age == that.age && Objects.equals(name, that.name) && Objects.equals(city, that.city);
}
@Override
public int hashCode() {
return Objects.hash(name, age, city);
}
}
To make the solution compliant with Java 8, I would use this class.
In order to group the data from the stream into a Map, where instances of the class above would be used as a Key, we can use a flavor of Collector groupingBy(), which expects a keyMapper function and a downstream Collector responsible for generating the Values of the Map.
Since we need to perform plain arithmetic on primitive values, it would be more performant to make use of a Collector which performs a mutable reduction, i.e. mutates the properties of its underlying mutable container while consuming elements from the stream. To create such a Collector we need a type that would serve as a mutable container.
I'll introduce a class AggregatedValues which would serve as the accumulation type. It makes sense to do so if the objects in the source list represent different people (in that case the result would not represent a particular person, and using Student, or whatever it's named in the real code, for that purpose would be confusing), or if Student is immutable and you want to keep it like that.
public class AggregatedValues implements Consumer<Student> {
private String name;
private int age;
private String city;
private double salary;
private double incentive;
// getters, all-args constructor
@Override
public void accept(Student s) {
if (name == null) name = s.getName();
if (age == 0) age = s.getAge();
if (city == null) city = s.getCity();
salary += s.getSalary();
incentive += s.getIncentive();
}
public AggregatedValues merge(AggregatedValues other) {
salary += other.salary;
incentive += other.incentive;
return this;
}
}
And that's how we can make use of it:
List<Student> students = new ArrayList<>();
Collections.addAll(students, // List.of() for Java 9+
new Student("Raj", 10, "Pune", 10000, 100),
new Student("Raj", 10, "Pune", 20000, 200),
new Student("Raj", 20, "Pune", 10000, 100),
new Student("Ram", 30, "Pune", 10000, 100),
new Student("Ram", 30, "Pune", 30000, 300),
new Student("Seema", 10, "Pune", 10000, 100)
);
List<AggregatedValues> res = students.stream()
.collect(Collectors.groupingBy(
NameAgeCity::from, // keyMapper
Collector.of( // custom collector
AggregatedValues::new, // supplier
AggregatedValues::accept, // accumulator
AggregatedValues::merge // combiner
)
))
.values().stream()
.collect(Collectors.toList()); // or toList() for Java 16+
If in your real code it does make sense to have Student as the type of the resulting list, we can make one small change by introducing a method toStudent() in AggregatedValues.
public class AggregatedValues implements Consumer<Student> {
// the rest code
public Student toStudent() {
return new Student(name, age, city, salary, incentive);
}
}
And this method can be used as a finisher Function of the Collector:
List<Student> res = students.stream()
.collect(Collectors.groupingBy(
NameAgeCity::from, // keyMapper
Collector.of( // custom collector
AggregatedValues::new, // supplier
AggregatedValues::accept, // accumulator
AggregatedValues::merge, // combiner
AggregatedValues::toStudent // finisherFunction
)
))
.values().stream()
.collect(Collectors.toList()); // or toList() for Java 16+
res.forEach(System.out::println);
Output:
Student{name='Raj', age=20, city='Pune', salary=10000.0, incentive=100.0}
Student{name='Raj', age=10, city='Pune', salary=30000.0, incentive=300.0}
Student{name='Ram', age=30, city='Pune', salary=40000.0, incentive=400.0}
Student{name='Seema', age=10, city='Pune', salary=10000.0, incentive=100.0}
I am reading data from an Excel file using Apache POI and transforming it into a list of objects. Now I want to extract any duplicates, based on certain rules, into another list of that object and also get the non-duplicate list.
Conditions to check for a duplicate:
name
email
phone number
gst number
Any of these properties can result in a duplicate, which means OR, not AND.
Party Class
public class Party {
private String name;
private Long number;
private String email;
private String address;
private BigDecimal openingBalance;
private LocalDateTime openingDate;
private String gstNumber;
// Getter Setter Skipped
}
Let's say this is the list returned by the Excel-parsing logic so far:
var firstParty = new Party();
firstParty.setName("Valid Party");
firstParty.setAddress("Valid");
firstParty.setEmail("Valid");
firstParty.setGstNumber("Valid");
firstParty.setNumber(1234567890L);
firstParty.setOpeningBalance(BigDecimal.ZERO);
firstParty.setOpeningDate(DateUtil.getDDMMDateFromString("01/01/2020"));
var secondParty = new Party();
secondParty.setName("Valid Party");
secondParty.setAddress("Valid Address");
secondParty.setEmail("Valid Email");
secondParty.setGstNumber("Valid GST");
secondParty.setNumber(7593612247L);
secondParty.setOpeningBalance(BigDecimal.ZERO);
secondParty.setOpeningDate(DateUtil.getDDMMDateFromString("01/01/2020"));
var thirdParty = new Party();
thirdParty.setName("Valid Party 1");
thirdParty.setAddress("address");
thirdParty.setEmail("email");
thirdParty.setGstNumber("gst");
thirdParty.setNumber(7593612888L);
thirdParty.setOpeningBalance(BigDecimal.ZERO);
thirdParty.setOpeningDate(DateUtil.getDDMMDateFromString("01/01/2020"));
var validParties = List.of(firstParty, secondParty, thirdParty);
What I have attempted so far :-
var partyNameOccurrenceMap = validParties.parallelStream()
.map(Party::getName)
.collect(Collectors.groupingBy(Function.identity(), HashMap::new, Collectors.counting()));
var partyNameOccurrenceMapCopy = SerializationUtils.clone(partyNameOccurrenceMap);
var duplicateParties = validParties.stream()
.filter(party-> {
var occurrence = partyNameOccurrenceMap.get(party.getName());
if (occurrence > 1) {
partyNameOccurrenceMap.put(party.getName(), occurrence - 1);
return true;
}
return false;
})
.toList();
var nonDuplicateParties = validParties.stream()
.filter(party -> {
var occurrence = partyNameOccurrenceMapCopy.get(party.getName());
if (occurrence > 1) {
partyNameOccurrenceMapCopy.put(party.getName(), occurrence - 1);
return false;
}
return true;
})
.toList();
The above code only checks for party name, but we also need to check for email, phone number and GST number.
The code written above works just fine, but readability, conciseness and performance might be an issue, as the data set is fairly large, e.g. 10k rows in the Excel file.
Never ignore the equals/hashCode contract
name, email, number, gstNumber
Any of these properties can result in a duplicate, which means OR
Your definition of a duplicate implies that it is enough for any one of these properties to match, while the others might not.
It means that it's impossible to provide an implementation of equals/hashCode that matches the given definition and doesn't violate the hashCode contract.
If two objects are equal according to the equals method, then calling the hashCode method on each of the two objects must produce the same integer result.
I.e. if you implement equals in such a way that matching any (not all) of these properties: name, email, number, gstNumber is enough to consider two objects equal, then there's no way to implement hashCode correctly.
And as a consequence, you can't use an object with a broken equals/hashCode implementation with a hash-based collection, because equal objects might end up in different buckets (since they can produce different hashes). I.e. a HashMap would not be able to recognize the duplicated keys, hence groupingBy() with Function.identity() as the classifier function would not work properly.
Therefore, to address this problem, you need to implement equals() based on all 4 properties: name, email, number, gstNumber (i.e. all these values have to be equal), and similarly all these values must contribute to the hash code.
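A sketch of what such an implementation might look like for Party (field names taken from the question; Objects is java.util.Objects):
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Party party = (Party) o;
return Objects.equals(name, party.name)
&& Objects.equals(email, party.email)
&& Objects.equals(number, party.number)
&& Objects.equals(gstNumber, party.gstNumber);
}
@Override
public int hashCode() {
return Objects.hash(name, email, number, gstNumber);
}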
How to determine Duplicates
There's no easy way to determine duplicates by multiple criteria. The solution you've provided is not viable, since we can't rely on equals/hashCode alone.
The only way is to generate a HashMap separately for each and every attribute (i.e. in this case we need 4 maps). But can we avoid repeating the same steps for each map and hard-coding the logic?
Yes, we can.
We can create a custom generic accumulation type (it would be suitable for any class - no hard-coded logic) that would encapsulate all the logic of determining duplicates and maintain an arbitrary number of maps under the hood. After consuming all the elements from the given collection, this custom object would be aware of all the duplicates in it.
That's how it can be implemented.
Here is a custom accumulation type that would be used as the container of a custom Collector. Its constructor expects varargs of functions; each function corresponds to a property that should be taken into account while checking whether an object is a duplicate.
public static class DuplicateChecker<T> implements Consumer<T> {
private List<DuplicateHandler<T>> handles;
private Set<T> duplicates;
@SafeVarargs
public DuplicateChecker(Function<T, ?>... keyExtractors) {
this.handles = Arrays.stream(keyExtractors)
.map(DuplicateHandler::new)
.toList();
}
@Override
public void accept(T t) {
handles.forEach(h -> h.accept(t));
}
public DuplicateChecker<T> merge(DuplicateChecker<T> other) {
for (int i = 0; i < handles.size(); i++) {
handles.get(i).merge(other.handles.get(i)); // merge the handlers pairwise - both checkers are built from the same keyExtractors
}
return this;
}
public DuplicateChecker<T> finish() {
duplicates = handles.stream()
.flatMap(handler -> handler.getDuplicates().stream())
.flatMap(Set::stream)
.collect(Collectors.toSet());
return this;
}
public boolean isDuplicate(T t) {
return duplicates.contains(t);
}
}
A helper class representing a single criterion (like name, email, etc.), which encapsulates a HashMap. keyExtractor is used to obtain a key from an object of type T.
public static class DuplicateHandler<T> implements Consumer<T> {
private Map<Object, Set<T>> itemByKey = new HashMap<>();
private Function<T, ?> keyExtractor;
public DuplicateHandler(Function<T, ?> keyExtractor) {
this.keyExtractor = keyExtractor;
}
@Override
public void accept(T t) {
itemByKey.computeIfAbsent(keyExtractor.apply(t), k -> new HashSet<>()).add(t);
}
public void merge(DuplicateHandler<T> other) {
other.itemByKey.forEach((k, v) ->
itemByKey.merge(k,v,(oldV, newV) -> { oldV.addAll(newV); return oldV; }));
}
public Collection<Set<T>> getDuplicates() {
Collection<Set<T>> duplicates = itemByKey.values();
duplicates.removeIf(set -> set.size() == 1); // the object is proved to be unique by this particular property
return duplicates;
}
}
And this is the method, responsible for partitioning the given collection, that would be used from the client code. The given collection is partitioned into two parts: one mapped to the key true - duplicates, another mapped to the key false - unique objects.
@SafeVarargs
public static <T> Map<Boolean, List<T>> partitionByProperties(Collection<T> parties,
Function<T, ?>... keyExtractors) {
DuplicateChecker<T> duplicateChecker = parties.stream()
.collect(Collector.of(
() -> new DuplicateChecker<>(keyExtractors),
DuplicateChecker::accept,
DuplicateChecker::merge,
DuplicateChecker::finish
));
return parties.stream()
.collect(Collectors.partitioningBy(duplicateChecker::isDuplicate));
}
And that's how you can apply it to your particular case.
main()
public static void main(String[] args) {
List<Party> parties = // initializing the list of parties
Map<Boolean, List<Party>> isDuplicate = partitionByProperties(parties,
Party::getName, Party::getNumber,
Party::getEmail, Party::getGstNumber);
}
I would create a map for each property where
the key is the property value we want to check for duplicates
the value is a Set containing the indexes of all elements in the list with the same key.
Then we can
filter the values in the map with more than 1 index (i.e. duplicate indexes),
union all the duplicate indexes,
determine whether an element is a duplicate or unique by using the duplicate indexes.
The time complexity is roughly O(n).
public class UniquePerEachProperty {
private static void separate(List<Party> partyList) {
Map<String, Set<Integer>> nameToIndexesMap = new HashMap<>();
Map<String, Set<Integer>> emailToIndexesMap = new HashMap<>();
Map<Long, Set<Integer>> numberToIndexesMap = new HashMap<>();
Map<String, Set<Integer>> gstNumberToIndexesMap = new HashMap<>();
for (int i = 0; i < partyList.size(); i++) {
Party party = partyList.get(i);
nameToIndexesMap.putIfAbsent(party.getName(), new HashSet<>());
nameToIndexesMap.get(party.getName()).add(i);
emailToIndexesMap.putIfAbsent(party.getEmail(), new HashSet<>());
emailToIndexesMap.get(party.getEmail()).add(i);
numberToIndexesMap.putIfAbsent(party.getNumber(), new HashSet<>());
numberToIndexesMap.get(party.getNumber()).add(i);
gstNumberToIndexesMap.putIfAbsent(party.getGstNumber(), new HashSet<>());
gstNumberToIndexesMap.get(party.getGstNumber()).add(i);
}
Set<Integer> duplicatedIndexes = Stream.of(
nameToIndexesMap.values(),
emailToIndexesMap.values(),
numberToIndexesMap.values(),
gstNumberToIndexesMap.values()
).flatMap(Collection::stream).filter(indexes -> indexes.size() > 1)
.flatMap(Set::stream).collect(Collectors.toSet());
List<Party> duplicatedList = new ArrayList<>();
List<Party> uniqueList = new ArrayList<>();
for (int i = 0; i < partyList.size(); i++) {
Party party = partyList.get(i);
if (duplicatedIndexes.contains(i)) {
duplicatedList.add(party);
} else {
uniqueList.add(party);
}
}
System.out.println("duplicated:" + duplicatedList);
System.out.println("unique:" + uniqueList);
}
public static void main(String[] args) {
separate(List.of(
// name duplicate
new Party("name1", 1L, "email1", "gstNumber1"),
new Party("name1", 2L, "email2", "gstNumber2"),
// number duplicate
new Party("name3", 3L, "email3", "gstNumber3"),
new Party("name4", 3L, "email4", "gstNumber4"),
// email duplicate
new Party("name5", 5L, "email5", "gstNumber5"),
new Party("name6", 6L, "email5", "gstNumber6"),
// gstNumber duplicate
new Party("name7", 7L, "email7", "gstNumber7"),
new Party("name8", 8L, "email8", "gstNumber7"),
// unique
new Party("name9", 9L, "email9", "gstNumber9")
));
}
}
Assume Party has the below constructor and toString() (for testing):
public class Party {
public Party(String name, Long number, String email, String gstNumber) {
this.name = name;
this.number = number;
this.email = email;
this.address = "";
this.openingBalance = BigDecimal.ZERO;
this.openingDate = LocalDateTime.MIN;
this.gstNumber = gstNumber;
}
@Override
public String toString() {
return "Party{" +
"name='" + name + '\'' +
", number=" + number +
", email='" + email + '\'' +
", gstNumber='" + gstNumber + '\'' +
'}';
}
...
}
The code below works fine for me now, but it is not future-proof because of the number of if-else statements and instanceof checks. I would like to extend the Transport list with more types like bicycles, motorbikes, etc., but every time I add a new type I need to add more if-else statements and more instanceof checks. Does anyone have a better idea or a better solution?
private static Transport filterObjects(List<Transport> listOfTransport, int refNr) {
List<Transport> cars = listOfTransport.stream()
.filter(transport -> transport instanceof Cars)
.collect(Collectors.toList());
List<Transport> airPlanes = listOfTransport.stream()
.filter(transport -> transport instanceof Airplanes)
.collect(Collectors.toList());
if (!cars.isEmpty()){
return cars.get(refNr);
} else if (!airPlanes.isEmpty()) {
return airPlanes.get(refNr);
} else {
return null;
}
}
Pass in the subtype you want. Maybe this would work:
private static Transport filterObjects(List<Transport> listOfTransport, Class<? extends Transport> clazz, int refNr) {
List<Transport> transports = listOfTransport.stream().filter(clazz::isInstance).collect(Collectors.toList());
return !transports.isEmpty() ? transports.get(refNr) : null;
}
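Usage could then look like this (assuming Cars is the subtype you're interested in and refNr 0 means the first match):
Transport firstCar = filterObjects(listOfTransport, Cars.class, 0);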
Just as you currently prioritize cars over planes, as your transport types grow you also need some kind of priority on which to return preferentially. You can solve this with an enum. You only need to expand your enum accordingly as soon as you add a new transport type. The enum could look something like:
enum Priority{
Cars(1),
Airplanes(2);
private int value;
Priority (int value) {
this.value = value;
}
public int getValue() {
return value;
}
}
Then you can refactor your method by grouping the elements of your list by their simple class names and adding them to a sorted map using the priority you define in your enum. You can then use the first entry of the map to determine the return value. Example:
private static Transport filterObjects(List<Transport> listOfTransport, int refNr) {
Comparator<String> comp = Comparator.comparingInt(e -> Priority.valueOf(e).getValue());
List<Transport> result =
listOfTransport.stream()
.collect(Collectors.groupingBy(
e -> e.getClass().getSimpleName(),
() -> new TreeMap<>(comp),
Collectors.toList()))
.firstEntry().getValue();
return (result != null && 0 <= refNr && refNr < result.size()) ?
result.get(refNr) : null;
}
First group the list elements into a map based on subtype, then create a list of the subtypes of Transport. Iterate over this list and check whether a corresponding entry exists in the map:
private static final List<Class> subTypes = List.of(Cars.class, Airplanes.class);
private static Transport filterObjects(List<Transport> listOfTransport, int refNr) {
Map<Class, List<Transport>> map = listOfTransport.stream()
.collect(Collectors.groupingBy(t -> t.getClass()));
Optional<List<Transport>> op = subTypes.stream()
.filter(map::containsKey)
.map(map::get)
.findFirst();
if(op.isPresent()) {
return op.get().get(refNr); // This could cause IndexOutOfBoundsException
}else{
return null;
}
}
Well, you could do the following.
First, define your order:
static final List<Class<? extends Transport>> ORDER = List.of(
Cars.class,
Airplanes.class
);
Then, you could write the following method:
private static Transport filterObjects(List<Transport> listOfTransport, int refNr) {
Map<Class<? extends Transport>, Transport> map = listOfTransport.stream()
.collect(Collectors.groupingBy(Transport::getClass, Collectors.collectingAndThen(Collectors.toList(), list -> list.get(refNr))));
return ORDER.stream()
.filter(map::containsKey)
.map(map::get)
.findFirst()
.orElse(null);
}
What this does is map each distinct Class to the refNr-th element that is an instance of the respective class.
Then it walks over ORDER and checks if an element has been found within the original listOfTransport. The key won't exist in the map if listOfTransport does not contain any element of the particular class.
Note that if any element of a particular class exists in the map, the number of elements of that class is assumed to be at least refNr + 1, otherwise an IndexOutOfBoundsException is thrown. In other words, each transport type must occur either 0 or at least refNr + 1 times within listOfTransport.
Also note that getClass() does not necessarily yield the same result as instanceof. However, I have assumed here that each respective transport does not have further subclasses.
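To illustrate the last point with a hypothetical subclass (SportsCar is made up purely for this example):
class SportsCar extends Cars { } // hypothetical, assumes Cars can be extended
Transport t = new SportsCar();
System.out.println(t instanceof Cars); // true
System.out.println(t.getClass() == Cars.class); // false - groupingBy on getClass() would file it under SportsCar.class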
I have a list of objects. The object looks like this:
public class Slots {
String slotType;
Visits visit;
}
public class Visits {
private long visitCode;
private String agendaCode;
private String scheduledTime;
private String resourceType;
private String resourceDescription;
private String visitTypeCode;
...
}
I need to find the elements that have the same agendaCode, visitTypeCode and scheduledTime and for the life of me I can't get it done.
I tried this:
Set<String> agendas = slotsResponse.getContent().stream()
.map(Slots::getVisit)
.map(Visits::getAgendaCode)
.collect(Collectors.toUnmodifiableSet());
Set<String> visitTypeCode = slotsResponse.getContent().stream()
.map(Slots::getVisit)
.map(Visits::getVisitTypeCode)
.collect(Collectors.toUnmodifiableSet());
Set<String> scheduledTime = slotsResponse.getContent().stream()
.map(Slots::getVisit)
.map(Visits::getScheduledTime)
.collect(Collectors.toUnmodifiableSet());
List<Slots> collect = slotsResponse.getContent().stream()
.filter(c -> agendas.contains(c.getVisit().getAgendaCode()))
.filter(c -> visitTypeCode.contains(c.getVisit().getVisitTypeCode()))
.filter(c -> scheduledTime.contains(c.getVisit().getScheduledTime()))
.collect(Collectors.toList());
But it's not doing what I thought it would. Ideally I would have a list of lists, where each sublist is a list of Slots objects that share the same agendaCode, visitTypeCode and scheduledTime. I struggle with functional programming so any help or pointers would be great!
This is Java 11 and I'm also using vavr.
Since you mentioned you're using vavr, here is the vavr way to solve this question.
Suppose you have your io.vavr.collection.List (or Array, Vector, Stream or a similar vavr collection) of visits:
List<Visits> visits = ...;
final Map<Tuple3<String, String, String>, List<Visits>> grouped =
visits.groupBy(visit ->
Tuple.of(
visit.getAgendaCode(),
visit.getVisitTypeCode(),
visit.getScheduledTime()
)
);
Or with a java.util.List of visits:
List<Visits> visits = ...;
Map<Tuple3<String, String, String>, List<Visits>> grouped = visits.stream().collect(
Collectors.groupingBy(
visit ->
Tuple.of(
visit.getAgendaCode(),
visit.getVisitTypeCode(),
visit.getScheduledTime()
)
)
);
The easiest way is to define a new class with the necessary fields (agendaCode, visitTypeCode and scheduledTime). Don't forget about equals/hashCode.
public class Visits {
private long visitCode;
private String resourceType;
private String resourceDescription;
private Code code;
...
}
class Code {
private String agendaCode;
private String scheduledTime;
private String visitTypeCode;
...
@Override
public boolean equals(Object o) {...}
@Override
public int hashCode() {...}
}
Then you can use groupingBy like:
Map<Code, List<Slots>> map = slotsResponse.getContent().stream()
.collect(Collectors.groupingBy(s -> s.getVisit().getCode()));
Also, you can just implement the equals method inside Visits based only on agendaCode, visitTypeCode and scheduledTime. In this case group with groupingBy on s.getVisit().
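A sketch of that alternative, assuming equals/hashCode of Visits are overridden to compare only agendaCode, visitTypeCode and scheduledTime:
Map<Visits, List<Slots>> mapByVisit = slotsResponse.getContent().stream()
.collect(Collectors.groupingBy(Slots::getVisit));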
I love Ruslan's idea of using Collectors::groupingBy. Nevertheless, I don't like creating a new class or defining a new equals method. Both of them lock you into a single Collectors::groupingBy version. What if you want to group by other fields in other methods?
Here is a piece of code that should let you overcome this problem:
slotsResponse.getContent()
.stream()
.collect(Collectors.groupingBy(s -> Arrays.asList(s.getVisit().getAgendaCode(), s.getVisit().getVisitTypeCode(), s.getVisit().getScheduledTime())))
.values();
My idea was to create a new container for every needed field (agendaCode, visitTypeCode, scheduledTime) and compare slots on these newly created containers. I would have liked to do so with a simple Object array, but that doesn't work: arrays have to be compared with Arrays.equals, which is not the comparison method used by Collectors::groupingBy.
Please note that you should store the key function somewhere, or wrap it in a method, to define which fields you want to group by.
The fields you want to group by are all strings. You can define a function which concatenates those field values and use that as the key for your groups. Example:
Function<Slots, String> myFunc = s -> s.getVisit().getAgendaCode() + s.getVisit().getVisitTypeCode() + s.getVisit().getScheduledTime();
// or with a separator: s.getVisit().getAgendaCode() + "-" + s.getVisit().getVisitTypeCode() + "-" + s.getVisit().getScheduledTime();
And then group as below:
Map<String,List<Slots>> result = slotsResponse.getContent().stream()
.collect(Collectors.groupingBy(myFunc));