There is a List of objects like:
ID  Employee  IN_COUNT  OUT_COUNT  Date
1   ABC       5         7          2020-06-11
2   ABC       12        5          2020-06-12
3   ABC       9         6          2020-06-13
This is employee data for three dates, which I get from a query into a List object.
Now I want the total IN_COUNT and OUT_COUNT across the three dates. This could be achieved by first iterating the stream over only IN_COUNT and calling sum(), and then summing only the OUT_COUNT data in a second iteration. But I don't want to iterate the list twice.
How is this possible in functional programming using streams, or any other option?
What you are trying to do is called a 'fold' operation in functional programming. Java streams call this 'reduce', and 'sum', 'count', etc. are just specialized reduces/folds. You just have to provide a binary accumulation function. I'm assuming Java Bean style getters and setters and an all-args constructor. We just ignore the other fields of the object in our accumulation:
List<MyObj> data = fetchData();
Date d = new Date();
MyObj res = data.stream()
        .reduce((a, b) -> {
            return new MyObj(0, a.getEmployee(),
                    a.getInCount() + b.getInCount(),   // Accumulate IN_COUNT
                    a.getOutCount() + b.getOutCount(), // Accumulate OUT_COUNT
                    d);
        })
        .orElseThrow();
This is simplified and assumes that you only have one employee in the list, but you can use standard stream operations to partition and group your stream (groupingBy), as sketched below.
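For instance, a minimal sketch of grouping per employee, assuming the same MyObj getters and all-args constructor as above, could look like this:
Map<String, MyObj> perEmployee = data.stream()
        .collect(Collectors.groupingBy(
                MyObj::getEmployee,                        // group key: the employee
                Collectors.collectingAndThen(
                        Collectors.reducing((a, b) -> new MyObj(0, a.getEmployee(),
                                a.getInCount() + b.getInCount(),
                                a.getOutCount() + b.getOutCount(),
                                d)),
                        Optional::orElseThrow)));          // each group is non-empty, so unwrapping is safe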
If you don't want to or can't create a MyObj, you can use a different type as accumulator. I'll use Map.entry, because Java lacks a Pair/Tuple type:
Map.Entry<Integer, Integer> res = l.stream().reduce(
Map.entry(0, 0), // Identity
(sum, x) -> Map.entry(sum.getKey() + x.getInCount(), sum.getValue() + x.getOutCount()), // accumulate
(s1, s2) -> Map.entry(s1.getKey() + s2.getKey(), s1.getValue() + s2.getValue()) // combine
);
What's happening here? We now have a reduce function of the form (Pair accum, MyObj next) -> Pair. The 'identity' is our start value, the accumulator function adds the next MyObj to the current result, and the last function is only used to combine intermediate results (e.g., if done in parallel).
Too complicated? We can split the steps of extracting interesting properties and accumulating them:
Map.Entry<Integer, Integer> res = l.stream()
.map(x -> Map.entry(x.getInCount(), x.getOutCount()))
.reduce((x, y) -> Map.entry(x.getKey() + y.getKey(), x.getValue() + y.getValue()))
.orElseGet(() -> Map.entry(0, 0));
You can use reduce to do this:
public class Counts {
    private int inCount;
    private int outCount;
    // constructor, getters, setters
}

public static void main(String[] args) {
    List<Counts> list = new ArrayList<>();
    list.add(new Counts(5, 7));
    list.add(new Counts(12, 5));
    list.add(new Counts(9, 6));

    Counts total = list.stream().reduce(
            // the starting point, like sum = 0
            // you need this if you don't want to modify the objects from the list
            new Counts(0, 0),
            (sum, e) -> {
                sum.setInCount(sum.getInCount() + e.getInCount());
                sum.setOutCount(sum.getOutCount() + e.getOutCount());
                return sum;
            }
    );
    System.out.println(total.getInCount() + " - " + total.getOutCount());
}
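Note that this accumulator mutates the identity object, which is fine for a sequential stream but not safe for a parallel one. A side-effect-free variant, assuming the same Counts constructor and getters, would be:
Counts total = list.stream().reduce(
        new Counts(0, 0),
        (sum, e) -> new Counts(sum.getInCount() + e.getInCount(),
                               sum.getOutCount() + e.getOutCount()));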
Related
My list consists of elements with fields Type (String), Amount (Double) and Quantity (Integer), and it looks like this:
Type: Type A, Amount : 55.0, Quantity : 0
Type: Type A, Amount : 55.0, Quantity : 5
Type: Type A, Amount : 44.35, Quantity : 6
Type: Type A, Amount : 55.0, Quantity : 0
Type: Type B, Amount : 7.0, Quantity : 1
Type: Type B, Amount : 7.0, Quantity : 1
Type: Type C, Amount : 1613.57, Quantity : 0
Type: Type C, Amount : 1613.57, Quantity : 1
So I am trying to loop over my list to find duplicates, and add the Amount if it is a duplicate. The outcome would be like this:
Type: Type A, Amount : 209.35, Quantity : 11
Type: Type B, Amount : 14.0, Quantity : 2
Type: Type C, Amount : 3227.14, Quantity : 1
What I have tried is creating another List, adding elements to the new List, then comparing them, but it didn't work:
List<Type> newList = new ArrayList();
for (int k = 0; k < typeList.size(); k++) {
    Type type = new Type();
    Double totalAmount = Double.parseDouble("0");
    type.setTypeName(typeList.get(k).getTypeName());
    type.setAmount(chargeTypeList.get(k).getAmount());
    newList.add(k, type);
    if (typeList.get(k).getChargeTypeName().equalsIgnoreCase(newList.get(k).getiTypeName())) {
        totalAmount += typeList.get(k).getAmount();
    }
}
I don't want to hardcode the value to check for duplicate Type
You should probably be putting these values into a Map, which guarantees there is only one element for each key. Using a map is very common for representing amounts of some thing where we store the thing as the key and keep track of how many of those things we have in the value.
You can use compute to then add elements to the map.
What you currently have:
record Data(String type, Double amount, Integer quantity) {}
What may represent your data better:
record Datav2(Double amount, Integer quantity) {}
Storing Datav2 in a map and adding an element.
var map = new HashMap<>(Map.of("A", new Datav2(2.0, 3)));
// add an element to the map, equivalent to Data("A", 3.0, 3)
map.compute("A", (k, v) -> {
    if (v == null) {
        v = new Datav2(0.0, 0);
    }
    return new Datav2(v.amount() + 3.0, v.quantity() + 3);
});
If you need to start with a list for whatever reason you can use the Stream API to turn the list into a map. Specifically toMap.
var list = List.of(new Data("A", 2.0, 3),
new Data("A", 3.0, 3),
new Data("C", 2.0, 1),
new Data("B", 10.0, 3),
new Data("B", 2.0, 5)
);
var collected = list
        .stream()
        .collect(Collectors.toMap(
                // what will the key be
                Data::type,
                // what will the value be
                data -> new Datav2(data.amount(), data.quantity()),
                // how do we combine two values if they have the same key
                (d1, d2) -> new Datav2(d1.amount() + d2.amount(), d1.quantity() + d2.quantity())
        ));
System.out.println(collected);
{A=Datav2[amount=5.0, quantity=6], B=Datav2[amount=12.0, quantity=8], C=Datav2[amount=2.0, quantity=1]}
Another approach would be to sort the list by type, then iterate it and add each item to a running sum item. When the type changes, add your sum item to a result list and keep going. A rough sketch of that idea follows.
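This is only a sketch; it assumes a Type class with an all-args constructor plus getters and setters for typeName, amount and quantity, which may not match your actual class:
typeList.sort(Comparator.comparing(Type::getTypeName));     // group equal types next to each other

List<Type> result = new ArrayList<>();
Type sum = null;
for (Type t : typeList) {
    if (sum == null || !sum.getTypeName().equals(t.getTypeName())) {
        sum = new Type(t.getTypeName(), 0.0, 0);            // type changed: start a new sum item
        result.add(sum);
    }
    sum.setAmount(sum.getAmount() + t.getAmount());
    sum.setQuantity(sum.getQuantity() + t.getQuantity());
}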
Another way of achieving this is by use of collect and HashMap's merge operation:
List<TypeClass> ls = List.of(new TypeClass("A", 12.3, 2), new TypeClass("A", 3.4, 4),
        new TypeClass("B", 12.4, 6), new TypeClass("B", 12.8, 8));

System.out.println(
        ls.stream().collect(HashMap<String, TypeClass>::new,
                (x, y) -> x.merge(y.getTypeName(), y, (o, p) -> {
                    return new TypeClass(y.getTypeName(), o.getAmount() + p.getAmount(),
                            o.getQuantity() + p.getQuantity());
                }),
                (a, b) -> a.putAll(b)));
This will print the following output:
{A=TypeClass [typeName=A, amount=15.700000000000001, quantity=6],
B=TypeClass [typeName=B, amount=25.200000000000003, quantity=14]}
Here, we are accumulating into a HashMap which is merged based on the key, i.e. your string value. The merge function is a simple addition of the amount and quantity of your TypeClass.
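The same merge logic also works without a stream, by looping over ls (the list above) and calling merge on a plain HashMap:
Map<String, TypeClass> byName = new HashMap<>();
for (TypeClass t : ls) {
    byName.merge(t.getTypeName(), t, (o, p) -> new TypeClass(o.getTypeName(),
            o.getAmount() + p.getAmount(),
            o.getQuantity() + p.getQuantity()));
}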
You can use built-in collector groupingBy() to group the objects having the same type in conjunction with a custom collector created via Collector.of() as downstream of grouping.
Assuming that your custom object looks like this (for the purpose of conciseness, I've used a Java 16 record):
public record MyType(String type, double amount, int quantity) {}
Note:
Don't use wrapper types without a good reason; use primitives instead. That avoids unnecessary boxing/unboxing and eliminates the possibility of getting a NullPointerException while performing arithmetic operations or comparing numeric values.
If the number of values that the type attribute might have is limited, then it would be better to use an enum instead of a String, because it's more reliable (it would guard you from making a typo) and offers some extra possibilities, since enums have extensive language support.
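For example, a hypothetical enum for the three types seen in the question might look like this (the code below sticks with String to match the original data):
// Hypothetical alternative to a String type field; not used in the code below.
enum ItemType { TYPE_A, TYPE_B, TYPE_C }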
That's how the accumulation logic can be implemented:
List<MyType> typeList = new ArrayList<>();

List<MyType> newList = typeList.stream()
        .collect(Collectors.groupingBy(
                MyType::type,
                Collector.of(
                        MyAccumulator::new,
                        MyAccumulator::accept,
                        MyAccumulator::merge
                )
        ))
        .entrySet().stream()
        .map(entry -> new MyType(entry.getKey(), entry.getValue().getAmount(), entry.getValue().getQuantity()))
        .toList();
And that's how the custom accumulation type internally used by the collector might look like:
public static class MyAccumulator implements Consumer<MyType> {
    private double amount;
    private int quantity;

    @Override
    public void accept(MyType myType) {
        add(myType.amount(), myType.quantity());
    }

    public MyAccumulator merge(MyAccumulator other) {
        add(other.amount, other.quantity);
        return this;
    }

    private void add(double amount, int quantity) {
        this.amount += amount;
        this.quantity += quantity;
    }

    // getters
}
We have two lists in Java, as shown below. I need to get all elements from these two lists, and if there are elements with the same id and the same date, we need to sum the cost of those elements across the lists.
List<A> listA = new ArrayList<>();
List<A> listB = new ArrayList<>();
List<A> results = new ArrayList<>();
listA:

Id  Date        Cost
1   2022-01-01  11.65
2   2022-02-01  12.65
2   2022-03-01  13.65
3   2022-05-01  19.5
listB:

Id  Date        Cost
1   2022-04-01  1.65
1   2022-05-01  134.65
2   2022-02-01  12.65
2   2022-09-01  7.8
3   2022-06-01  3.65
The results list should be:

Id  Date        Cost
1   2022-01-01  11.65
1   2022-04-01  1.65
1   2022-05-01  134.65
2   2022-02-01  25.3*
2   2022-03-01  13.65
2   2022-09-01  7.8
3   2022-05-01  19.5
3   2022-06-01  3.65

* (listA.cost + listB.cost; this is based on the date and id condition)
What I have tried till now is this
Stream.concat(
        listA.stream().map(d -> new Result(d.getId(), d.getDate(), d.getCost())),
        listB.stream().map(b -> new Result(b.getId(), b.getDate(), b.getCost()))
)
.collect(Collectors.toList());
I am able to get all the data, but after this step, if there is the same date and the same id, I need to sum up the cost from listA with the cost from listB.
It can be achieved by grouping the data from these lists into an intermediate map. The key of this map should be an object capable of incorporating both the date and the id. There are several quick and dirty approaches, like concatenating them as strings, but the correct way is to define a record (or a class). The values of the intermediate map would represent the sum of the costs that are mapped to the same combination of date and id.
Then we can create a stream over the entries of the auxiliary map, turn each entry into an object A and collect into a list.
That's how it can be implemented.
Define a record, IdDate, to use as a key:
public record IdDate(long id, LocalDate date) {}
List<A> listA =
Arrays.asList(new A(1, LocalDate.parse("2022-01-01"), 11.65), // for Java 9+ use - List.of()
new A(2, LocalDate.parse("2022-02-01"), 12.65),
new A(2, LocalDate.parse("2022-03-01"), 13.65),
new A(3, LocalDate.parse("2022-05-01"), 19.5));
List<A> listB =
Arrays.asList(new A(1, LocalDate.parse("2022-04-01"), 1.65), // for Java 9+ use - List.of()
new A(1, LocalDate.parse("2022-05-01"), 134.65),
new A(2, LocalDate.parse("2022-02-01"), 12.65),
new A(2, LocalDate.parse("2022-09-01"), 7.8),
new A(3, LocalDate.parse("2022-06-01"), 3.65));
List<A> results =
Stream
.concat(listA.stream(), listB.stream())
.collect(
Collectors.groupingBy( // creating an intermediate map `Map<IdDate, Double>`
a -> new IdDate(a.getId(), a.getDate()), // classifier function - generating a key
Collectors.summingDouble(A::getCost) // downstream collector - combining values mapped to the same key
)
)
.entrySet()
.stream()
.map(entry -> new A(entry.getKey().id(), // transforming an entry into an object `A`
entry.getKey().date(),
entry.getValue()))
.toList(); // In earlier Java: .collect(Collectors.toList());
results.forEach(System.out::println);
Output:
A{id=2, date=2022-09-01, cost=7.8}
A{id=2, date=2022-03-01, cost=13.65}
A{id=2, date=2022-02-01, cost=25.3}
A{id=3, date=2022-06-01, cost=3.65}
A{id=3, date=2022-05-01, cost=19.5}
A{id=1, date=2022-05-01, cost=134.65}
A{id=1, date=2022-04-01, cost=1.65}
A{id=1, date=2022-01-01, cost=11.65}
This method will work:
Our A class:
class A implements Comparable<A> {
    private int id;
    private String date;
    private double cost;

    public A(int i, String d, double c) {
        id = i;
        date = d;
        cost = c;
    }

    public int getID() {
        return id;
    }

    public String getDate() {
        return date;
    }

    public double getCost() {
        return cost;
    }

    public void setCost(double c) {
        cost = c;
    }

    @Override
    public int compareTo(A compareID) {
        int comp = compareID.getID();
        return this.id - comp;
    }

    public String toString() {
        return id + " " + date + " " + cost;
    }
}
Main class:
import java.util.*;

class Main {
    public static void main(String[] args) {
        List<A> listA = new ArrayList<>();
        List<A> listB = new ArrayList<>();
        List<A> results = new ArrayList<>();

        listA.add(new A(1, "2022-01-01", 11.65));
        listA.add(new A(2, "2022-02-01", 12.65));
        listA.add(new A(2, "2022-03-01", 13.65));
        listA.add(new A(3, "2022-05-01", 19.5));

        listB.add(new A(1, "2022-04-01", 1.65));
        listB.add(new A(1, "2022-05-01", 134.65));
        listB.add(new A(2, "2022-02-01", 12.65));
        listB.add(new A(2, "2022-09-01", 7.8));
        listB.add(new A(3, "2022-06-01", 3.65));

        results = listA;
        for (int i = 0; i < results.size(); i++) {
            for (int j = 0; j < listB.size(); j++) {
                if (results.get(i).getDate().equals(listB.get(j).getDate())
                        && results.get(i).getID() == listB.get(j).getID()) {
                    results.get(i).setCost(results.get(i).getCost() + listB.get(j).getCost());
                    listB.remove(j);
                    j -= 1;
                }
            }
        }
        results.addAll(listB);
        Collections.sort(results);
        System.out.println(results);
    }
}
Output:
[1 2022-01-01 11.65, 1 2022-04-01 1.65, 1 2022-05-01 134.65, 2 2022-02-01 25.3, 2 2022-03-01 13.65, 2 2022-09-01 7.8, 3 2022-05-01 19.5, 3 2022-06-01 3.65]
In our A class, we define our private instance variables id, date, and cost, and set up a constructor, accessors, and a mutator for the class. We also implement Comparable so we can sort by ID.
In our main class, we set results equal to listA. Next, we use a nested for loop to iterate through both results and listB. If the ith element in results and the jth element in listB have the same id and date, we add the cost from listB to the element in results and delete that element from listB. Finally, any elements left over in listB are added to results, and the list is sorted.
This is not the most efficient approach as it involves a nested for loop and sorting, but this will work nonetheless.
I hope this answered your question! Please let me know if you have any further questions or clarifications :)
Well, you can use groupingBy in combination with reduce to get what you want.
First, we're creating a groupingBy key (in your case, that's a class with id and date). We may as well create a record.
record GroupByKey(int id, String date) {
    public static GroupByKey fromResult(Result result) {
        return new GroupByKey(result.id(), result.date());
    }
}
Note that I added a method named fromResult, which is able to convert a Result instance to its corresponding groupingBy key.
Then the following will do:
Map<GroupByKey, Optional<Result>> result = Stream.of(a, b)
.flatMap(List::stream)
.collect(groupingBy(
GroupByKey::fromResult,
reducing((l, r) -> new Result(l.id(), l.date(), l.cost() + r.cost()))
));
(using static imports from java.util.stream.Collectors)
What happens here, is the following.
flatMap(List::stream) makes sure that the elements of both lists become part of a single stream.
Then groupingBy creates a Map with the GroupByKey as key and all corresponding Result instances grouped under it as value. Now the only thing that needs to be done is to take all elements within each group and create a single Result instance with all the costs added to each other, which is what the reducing collector does.
Note that this yields a Map<GroupByKey, Optional<Result>>. The values of the map are Optional<Result>s rather than Results, because if some entry value were an empty list, the reduction operation would yield no results, hence the Optional. However, since we're using groupingBy, it is simply not possible for any of the keys to have an empty List as associated value, because then the key should not have been created in the first place. So we can just execute Optional::orElseThrow to unwrap the Optional. This can be done with the collectingAndThen method:
.collect(groupingBy(
GroupByKey::fromResult,
collectingAndThen(
reducing((l, r) -> new Result(l.id(), l.date(), l.cost() + r.cost())),
Optional::orElseThrow
)
));
The result is a Map<GroupByKey, Result>, but we only need the values, not the keys. A simple values() call on the map returns a Collection<Result>.
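If a List is needed rather than a Collection, copying the values is a one-liner. This assumes the refined collect above was assigned to a Map<GroupByKey, Result> variable named grouped (a hypothetical name):
List<Result> results = new ArrayList<>(grouped.values());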
I have to create a method that gives the 10 Taxpayers that spent the most in the entire system.
There are a lot of classes already created, and code that would have to go in between, but what I need is something like:
public TreeSet<Taxpayer> getTenTaxpayers(){
TreeSet<Taxpayer> taxp = new TreeSet<Taxpayer>();
...
for(Taxpayer t: this.taxpayers.values()){ //going through the Map<String, Taxpayer>
for(Invoice i: this.invoices.values()){ //going through the Map<String, Invoice>
if(taxp.size()<=10){
if(t.getTIN().equals(i.getTIN())){ //if the TIN on the taxpayer is the same as in the Invoice
...
}
}
}
}
return taxp;
}
To sum it up, I have to go through a Map<String, Taxpayer> which has, for example, 100 Taxpayers, then go through a Map<String, Invoice> for each respective invoice, and return a new Collection holding the 10 Taxpayers that spent the most in the entire system, based on one attribute of the Invoice class. My problem is how to get those 10 and how to keep them sorted. My first idea was to use a TreeSet with a Comparator, but the problem is that the TreeSet would hold Taxpayer objects while what we need to compare is an attribute of the Invoice class.
Is this a classic Top K problem? Maybe you can use java.util.PriorityQueue to build a min-heap and keep the top 10 Taxpayers.
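A minimal sketch of that idea, assuming the getTIN()/getAmount() accessors used elsewhere in this question and the this.taxpayers/this.invoices maps from your skeleton:
// total spent per TIN
Map<String, Double> totals = new HashMap<>();
for (Invoice i : this.invoices.values()) {
    totals.merge(i.getTIN(), i.getAmount(), Double::sum);
}

// min-heap ordered by total spent; evict the smallest once we exceed 10
PriorityQueue<Taxpayer> minHeap = new PriorityQueue<>(
        Comparator.comparingDouble((Taxpayer t) -> totals.getOrDefault(t.getTIN(), 0.0)));
for (Taxpayer t : this.taxpayers.values()) {
    minHeap.offer(t);
    if (minHeap.size() > 10) {
        minHeap.poll();   // drop the current smallest spender
    }
}
// minHeap now holds the ten biggest spenders (the smallest of them at the head)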
This can be broken down into 3 steps:
Extract distinct TaxPayers
Extract Invoices for each payer and then sum amount
Sort by the paid amount and limit to the first 10
If you are using java-8 you can do something like:
final Map<TaxPayer, Double> toTenMap = payersMap.values() // get values from map
.stream() // create java.util.Stream
.distinct() // do not process duplicates (TaxPayer must provide a standard-compliant equals method)
.map(taxPayer -> {
final double totalAmount = invoicesMap
.values() // get values from the invoices map
.stream() // create Stream
.filter(invoice -> invoice.getTIN().equals(taxPayer.getTIN())) // get only those for the current TaxPayer
.mapToDouble(Invoice::getAmount) // get amount
.sum(); // sum amount
return new AbstractMap.SimpleEntry<>(taxPayer, totalAmount); // create Map.Entry
})
.sorted( ( entry1, entry2 ) -> { // sort by total amount, descending, so the biggest payers come first
    if (entry1.getValue() > entry2.getValue()) return -1;
    if (entry1.getValue() < entry2.getValue()) return 1;
    return 0;
})
.limit(10) // get only top ten payers
.collect(Collectors.toMap( // save to map
AbstractMap.SimpleEntry::getKey,
AbstractMap.SimpleEntry::getValue
));
Surely there is a more elegant solution. Also, I haven't tested it because I don't have much time now.
Select sum(paidAmount), count(paidAmount), classificationName
From tableA
Group by classificationName;
How can I do this in Java 8 using streams and collectors?
Java8:
lineItemList.stream()
.collect(Collectors.groupingBy(Bucket::getBucketName,
Collectors.reducing(BigDecimal.ZERO,
Bucket::getPaidAmount,
BigDecimal::add)))
This gives me the sum and the grouping. But how can I also get the count for each group name?
The expectation is:
100, 2, classname1
50, 1, classname2
150, 3, classname3
Using an extended version of the Statistics class of this answer,
class Statistics {
    int count;
    BigDecimal sum;

    Statistics(Bucket bucket) {
        count = 1;
        sum = bucket.getPaidAmount();
    }

    Statistics() {
        count = 0;
        sum = BigDecimal.ZERO;
    }

    void add(Bucket b) {
        count++;
        sum = sum.add(b.getPaidAmount());
    }

    Statistics merge(Statistics another) {
        count += another.count;
        sum = sum.add(another.sum);
        return this;
    }
}
you can use it in a Stream operation like
Map<String, Statistics> map = lineItemList.stream()
.collect(Collectors.groupingBy(Bucket::getBucketName,
Collector.of(Statistics::new, Statistics::add, Statistics::merge)));
this may have a small performance advantage, as it only creates one Statistics instance per group for a sequential evaluation. It even supports parallel evaluation, but you’d need a very large list with sufficiently large groups to get a benefit from parallel evaluation.
For a sequential evaluation, the operation is equivalent to
lineItemList.forEach(b ->
map.computeIfAbsent(b.getBucketName(), x -> new Statistics()).add(b));
whereas merging partial results after a parallel evaluation works closer to the example already given in the linked answer, i.e.
secondMap.forEach((key, value) -> firstMap.merge(key, value, Statistics::merge));
As you're using BigDecimal for the amounts (which is the correct approach, IMO), you can't make use of Collectors.summarizingDouble, which summarizes count, sum, average, min and max in one pass.
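For comparison, if the amounts were plain doubles (or precision loss were acceptable), that one-pass summary would look roughly like this:
Map<String, DoubleSummaryStatistics> stats = lineItemList.stream()
        .collect(Collectors.groupingBy(Bucket::getBucketName,
                Collectors.summarizingDouble(b -> b.getPaidAmount().doubleValue())));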
Alexis C. has already shown in his answer one way to do it with streams. Another way would be to write your own collector, as shown in Holger's answer.
Here I'll show another way. First let's create a container class with a helper method. Then, instead of using streams, I'll use common Map operations.
class Statistics {
    int count;
    BigDecimal sum;

    Statistics(Bucket bucket) {
        count = 1;
        sum = bucket.getPaidAmount();
    }

    Statistics merge(Statistics another) {
        count += another.count;
        sum = sum.add(another.sum);
        return this;
    }
}
Now, you can make the grouping as follows:
Map<String, Statistics> result = new HashMap<>();
lineItemList.forEach(b ->
result.merge(b.getBucketName(), new Statistics(b), Statistics::merge));
This works by using the Map.merge method, whose docs say:
If the specified key is not already associated with a value or is associated with null, associates it with the given non-null value. Otherwise, replaces the associated value with the results of the given remapping function
You could reduce pairs where the keys would hold the sum and the values would hold the count:
Map<String, SimpleEntry<BigDecimal, Long>> map =
lineItemList.stream()
.collect(groupingBy(Bucket::getBucketName,
reducing(new SimpleEntry<>(BigDecimal.ZERO, 0L),
b -> new SimpleEntry<>(b.getPaidAmount(), 1L),
(v1, v2) -> new SimpleEntry<>(v1.getKey().add(v2.getKey()), v1.getValue() + v2.getValue()))));
although Collectors.toMap looks cleaner:
Map<String, SimpleEntry<BigDecimal, Long>> map =
lineItemList.stream()
.collect(toMap(Bucket::getBucketName,
b -> new SimpleEntry<>(b.getPaidAmount(), 1L),
(v1, v2) -> new SimpleEntry<>(v1.getKey().add(v2.getKey()), v1.getValue() + v2.getValue())));
List<Person> roster = new ArrayList<>();
Integer totalAgeReduce = roster
.stream()
.map(Person::getAge)
.reduce(
0,
(a, b) -> a + b);
Can anyone help me understand the above code snippet? My understanding is that the stream method will first iterate through the entire roster List, and while it is iterating it will create a new List of the mapped objects with every person's age in it. Then it will finally call reduce after the mapping is done (reduce is only called at the end, after mapping, correct?). And reduce starts at 0; in the first iteration of reduce on the newly mapped list, a = 0 and b is equal to the first element in the List that was created by the mapping function. Then it will continue, add all the elements from the mapped list, and return an Integer with the sum of all the ages.
Each item in the stream is sent through all the steps, one at a time. Here's some test code to help you see what's happening:
List<String> test = Arrays.asList("A","B");
System.out.println("END: " + test.stream()
.map(s -> {System.out.println("1 " + s); return s; })
.map(s -> {System.out.println("2 " + s); return s; })
.reduce("", (acc, s) -> {System.out.println("3 " + s); return acc + s; })
);
Output
1 A
2 A
3 A
1 B
2 B
3 B
END: AB
TL;DR
It sums all the ages from the Person's within the List.
stream() : Creates a stream from the Collection (List)
map() : Will make a mapping from the received object to another object (here from Person to Integer (getAge returns an Integer))
reduce(0, (a, b) -> a + b) : reduce is a reduction; it reduces all the objects received into one (here the action is to add them all together, one big addition). It takes the identity (the first value to begin with) as its first argument, and the following lambda expression (a BinaryOperator<Integer>, i.e. a BiFunction<Integer, Integer, Integer>) provides the logic to apply for the reduction.
Example
List<Person> persons = Arrays.asList(new Person("John", 20),
new Person("Mike", 40),
new Person("Wayne", 30));
Integer totalAgeReduce = persons.stream()
.map(Person::getAge)
.reduce(0,(a, b) -> a + b);
System.out.println(totalAgeReduce); // 90
The thing is that
(a, b) -> a + b
is an accumulator; if you look at it like a recursive function, it passes the running sum along for every element in the stream, which, as Andreas pointed out, is not a list but a pipeline.
Just to point out: the lambda expression is simply an argument being passed, which is in fact a function.
If you were to use loops, it would look like this:
List<Integer> ages = new ArrayList<>();
for (Person p : roster) {
ages.add(p.getAge());
}
int sum = 0;
for (Integer age : ages) {
sum += age;
}
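Since the stream pushes each element through the whole pipeline one at a time, the loop equivalent can also be collapsed into a single pass:
int sum = 0;
for (Person p : roster) {
    sum += p.getAge();   // map and reduce performed together for each element
}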