Sum BigDecimals inside Stream - Java

I'm using a Java 8 Stream to iterate over two collections, and after applying a filter I want to add one of the BigDecimal values inside my stream to an external BigDecimal variable "restrictionsNumber".
Here is my code:
final BigDecimal restrictionsNumber = cmd.amount.getNumberOfUnits();
order.products()
.stream()
.flatMap(product -> product.getRestrictions()
.stream()
.filter(restriction -> restriction.equals(newProductRestriction))
.map(restriction -> restrictionsNumber.add(product.getAmount()
.getNumberOfUnits())));
The last map is where I'm trying to sum the two BigDecimals.
I know I'm doing something wrong.
Can anyone give me advice on how to do it with Stream?
I'm trying to refactor this old-fashioned code:
BigDecimal restrictionsNumber = cmd.amount.getNumberOfUnits(); // not final here, since the loop reassigns it
for (Product product : order.products()) {
for (String oldProductRestriction : product.getRestrictions()) {
if (oldProductRestriction.equals(newProductRestriction)) {
restrictionsNumber = restrictionsNumber.add(product.getAmount()
.getNumberOfUnits());
}
}
}
Regards.

This may be what you need (but it keeps adding the same amount several times for each product, in line with your original code, which seems weird):
BigDecimal sum = order.products()
.stream()
.flatMap(product -> product.getRestrictions()
.stream()
.filter(restriction -> restriction.equals(newProductRestriction))
.map(restriction -> product.getAmount().getNumberOfUnits()))
.reduce(BigDecimal.ZERO, BigDecimal::add);
BigDecimal result = restrictionsNumber.add(sum);

It sounds like you want to use the "reduce" operation.
Reduce is used for operations like summing over a whole stream or finding the maximum.
(If you want your addition to happen for a single stream element, then your question was unclear to me; please add detail.)
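For illustration, a minimal sketch of that, reusing the accessors from the question's code (note that, unlike the original loop, this adds each product's amount once per product rather than once per matching restriction occurrence):

BigDecimal sum = order.products()
        .stream()
        .filter(product -> product.getRestrictions().contains(newProductRestriction))
        .map(product -> product.getAmount().getNumberOfUnits())
        .reduce(BigDecimal.ZERO, BigDecimal::add); // fold all unit counts into one BigDecimal
BigDecimal result = restrictionsNumber.add(sum);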

Related

How to convert Long to BigDecimal while also using a Stream

I'm struggling to understand how I can make the following code work.
The field count_human_dna of my stat class is of type BigDecimal. With the type set to Long this works, but I need to change it to BigDecimal. Can somebody tell me how I could make this work for a BigDecimal field?
stat.setCount_human_dna(dnaSamples.stream()
.filter(x -> x.getType().equals("Human"))
.collect(Collectors.counting()));
This code counts all the dnaSamples whose type is Human.
Use the BigDecimal#valueOf method for the conversion from long to BigDecimal.
stat.setCount_human_dna(
        BigDecimal.valueOf(dnaSamples.stream()
                .filter(x -> x.getType().equals("Human"))
                .collect(Collectors.counting())));
See the JavaDocs for more detail.
The simplest and most efficient way to do this is to use the terminal operation count(), which returns the number of elements in the stream as a long, and then convert that into a BigDecimal:
stat.setCount_human_dna(getDNACount(dnaSamples));

public static BigDecimal getDNACount(Collection<Sample> dnaSamples) {
    long humanSamples = dnaSamples.stream()
            .filter(x -> x.getType().equals("Human"))
            .count();
    return BigDecimal.valueOf(humanSamples);
}
Alternatively, you can produce a result of type BigDecimal directly from the stream using reduce(), a terminal operation:
stat.setCount_human_dna(getDNACount(dnaSamples));

public static BigDecimal getDNACount(Collection<Sample> dnaSamples) {
    return dnaSamples.stream()
            .filter(x -> x.getType().equals("Human"))
            .reduce(BigDecimal.ZERO,
                    (total, next) -> total.add(BigDecimal.ONE),
                    BigDecimal::add);
}
Sidenote: I'm not an expert in questions of DNA analysis, but the result of this reduction will always be a whole number, so you might consider using BigInteger instead of BigDecimal.

Given an infinite sequence, break it into intervals, and return a new infinite sequence with the average of each interval

I have to calculate the average of an infinite sequence using the Stream API.
Input:
Stream<Double> s = a, b, c, d ...
int interval = 3
Expected result:
Stream<Double> result = avg(a,b,c), avg(d,e,f), ...
The result can also be an Iterator, or any other type, as long as it maintains the structure of an infinite list.
Of course what I wrote is pseudocode and doesn't run.
There is a @Beta API termed mapWithIndex within Guava that could help here, with a certain assumption:
static Stream<Double> stepAverage(Stream<Double> stream, int step) {
    return Streams.mapWithIndex(stream, (from, index) -> Map.entry(index, from))
            .collect(Collectors.groupingBy(e -> e.getKey() / step, TreeMap::new,
                    Collectors.averagingDouble(Map.Entry::getValue)))
            .values().stream();
}
The assumption it brings in is detailed clearly in the documentation (emphasis mine):
The resulting stream is efficiently splittable if and only if stream
was efficiently splittable and its underlying spliterator reported
Spliterator.SUBSIZED. This is generally the case if the underlying
stream comes from a data structure supporting efficient indexed random
access, typically an array or list.
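For illustration, a hypothetical usage on a finite prefix (the collect step inside stepAverage is a terminal operation, so this particular sketch only terminates for finite streams; it assumes Guava on the classpath and Java 9+ for Map.entry):

Stream<Double> nine = Stream.iterate(1.0, d -> d + 1).limit(9); // 1.0, 2.0, ..., 9.0
stepAverage(nine, 3).forEach(System.out::println);              // prints 2.0, 5.0, 8.0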
This should work fine using vanilla Java (Stream#mapMulti requires Java 16+).
I'm using Stream#mapMulti and a List external to the stream to aggregate the doubles.
As you can see, I also used DoubleSummaryStatistics to compute the average.
I could have used traditional looping, summing and dividing, but I found this way more explicit.
Update:
I changed the collection used from a Set to a List, as a Set could cause unexpected behaviour.
int step = 3;
List<Double> list = new ArrayList<>();
Stream<Double> averagesStream =
        infiniteStream.mapMulti((Double aDouble, Consumer<Double> doubleConsumer) -> {
            list.add(aDouble);
            if (list.size() == step) {
                DoubleSummaryStatistics doubleSummaryStatistics = new DoubleSummaryStatistics();
                list.forEach(doubleSummaryStatistics::accept);
                list.clear();
                doubleConsumer.accept(doubleSummaryStatistics.getAverage());
            }
        });
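A hypothetical usage, pulling the first two window averages lazily from an infinite stream:

Stream<Double> infiniteStream = Stream.iterate(1.0, d -> d + 1); // 1.0, 2.0, 3.0, ...
// ... build averagesStream as above, then:
averagesStream.limit(2).forEach(System.out::println); // prints 2.0, then 5.0, and terminates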

Accumulating values of objects carrying the same timestamp

I am currently stuck on this:
I have datapoints that carry a value and a timestamp as a Long (epoch seconds):
public class MyDataPoint {
    private Float value;
    private Long timestamp;
    // constructor, getters and setters here
}
I have lists that are bound to the different sources these datapoints come from.
public class MySource {
    private Integer sourceId;
    private List<MyDataPoint> dataPointList;
    // constructor, getters and setters here
}
Now I want to accumulate these datapoints into a new list: all datapoints with the same timestamp should be combined into a single new datapoint whose value is the sum of their values.
So if, for instance, I have 3 datapoints with the same timestamp, I want to create one datapoint with that timestamp and the sum of the three values.
However, these datapoints have not all started or ended recording at the same time, and for some timestamps maybe only one datapoint exists.
For now I have stuffed all of the datapoints into one list, thinking I could use streams to achieve my goal, but I can't figure it out. Maybe this is the wrong way anyway, because I can't see how to use filters or maps to do this.
I have thought about using Optionals, since for a given timestamp maybe only one datapoint exists, but there is no obvious answer for me.
Anyone able to help me out?
I am guessing that you are trying to group the values in the list, then convert it to a new list using a stream. What I suggest is using Collectors.groupingBy and Collectors.summingDouble to convert your List to a Map<Long, Double> first, which holds your timestamp as the key and a Double as the sum of all values that share that timestamp. After this you can convert the map back to a new list.
Not tested yet, but converting your List to a Map<Long, Double> should be something like:
dataPointList.stream()
        .collect(Collectors.groupingBy(d -> d.timestamp,
                Collectors.summingDouble(d -> d.value))); // you could use method references for better readability
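Converting the map back to a new list could then look like this (a sketch, assuming a MyDataPoint(timestamp, value) constructor, as the answer below also does):

Map<Long, Double> sums = dataPointList.stream()
        .collect(Collectors.groupingBy(d -> d.timestamp,
                Collectors.summingDouble(d -> d.value)));
List<MyDataPoint> accumulated = sums.entrySet().stream()
        .map(e -> new MyDataPoint(e.getKey(), e.getValue())) // assumed (timestamp, value) constructor
        .collect(Collectors.toList());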
The following assumes your DataPoint is immutable (you cannot use the same instance to accumulate into), so it uses an intermediate Map.
Collection<DataPoint> summary = sources.stream()
        .flatMap(source -> source.dataPointList.stream()) // smush sources into a single stream of points
        .collect(groupingBy(p -> p.timestamp, summingDouble(p -> (double) p.value))) // collect points into a Map<Long, Double>
        .entrySet().stream() // new stream over the entries of the Map
        .map(e -> new MyDataPoint(e.getKey(), e.getValue()))
        .collect(toList());
Another solution avoids the potentially large intermediate Map by collecting directly into a DataPoint.
public static DataPoint combine(DataPoint left, DataPoint right) {
    // take the right operand's timestamp so the identity's placeholder timestamp never survives;
    // return a new instance if immutable, or increase left if not
    return new DataPoint(right.timestamp, left.value + right.value);
}
Collection<DataPoint> summary = sources.stream()
        .flatMap(source -> source.dataPointList.stream()) // smush into a single stream of points
        .collect(groupingBy(p -> p.timestamp, reducing(DataPoint.ZERO, DataPoint::combine))) // collect all values into a Map<Long, DataPoint>
        .values();
This can be upgraded to parallelStream() if DataPoint is thread-safe, etc.
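DataPoint.ZERO is not shown above; it is assumed to be a zero-valued identity element for the reduction, for example:

// Hypothetical identity for the reduction: value 0 and a placeholder timestamp
// (which never survives, since combine(...) takes the right operand's timestamp).
public static final DataPoint ZERO = new DataPoint(0L, 0f);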
I think the "big picture" solution it's quite easy even if I can predict some multithread issues to complicate all.
In pure Java, you need simply a Map:
Map<Long,List<MyDataPoint>> dataPoints = new HashMap<>();
just use Timestamp as KEY.
For the sake of OOP, let's create a class like DataPointCollector:
public class DataPointCollector {
private Map<Long,List<MyDataPoint>> dataPoints = new HashMap<>();
}
To add an element, create a method in DataPointCollector like:
public void addDataPoint(MyDataPoint dp) {
    // create the bucket for this timestamp on first use
    // (dataPoints.computeIfAbsent(dp.getTimestamp(), t -> new ArrayList<>()) would do the same)
    if (dataPoints.get(dp.getTimestamp()) == null) {
        dataPoints.put(dp.getTimestamp(), new ArrayList<MyDataPoint>());
    }
    dataPoints.get(dp.getTimestamp()).add(dp);
}
This solves most of your theoretical problems.
To get the sum, just iterate over the list and sum the values.
If you need a realtime sum, just wrap the List in another object that has totalValue and List<MyDataPoint> as fields, and update totalValue on each invocation of addDataPoint(...), as sketched below.
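A minimal sketch of such a wrapper (hypothetical names):

public class TimestampBucket {
    private float totalValue; // running sum, kept current on every add
    private final List<MyDataPoint> dataPoints = new ArrayList<>();

    public void add(MyDataPoint dp) {
        dataPoints.add(dp);
        totalValue += dp.getValue(); // realtime update instead of re-iterating the list
    }

    public float getTotalValue() { return totalValue; }
    public List<MyDataPoint> getDataPoints() { return dataPoints; }
}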
About streams: it depends on the use case. If at a certain point you have all the DataPoints you need, you can of course use streams to do these things; however, streams are often expensive for common cases, and I think it's better to focus on a simple solution first and then make it cool with streams only if needed.

Java 8 nested streams - convert chained for loops

I'm currently playing around with Java 8 features.
I have the following piece of code, and have tried multiple ways to use streams, but without success.
for (CheckBox checkBox : checkBoxList) {
    for (String buttonFunction : buttonFunctionsList) {
        if (checkBox.getId().equals(buttonFunction)) {
            associatedCheckBoxList.add(checkBox);
        }
    }
}
I tried the following, but I am not sure whether it is correct:
checkBoxList.forEach(checkBox -> {
    buttonFunctionsList.forEach(buttonFunction -> {
        if (checkBox.getId().equals(buttonFunction))
            associatedCheckBoxList.add(checkBox);
    });
});
Thanks!
Eran's answer is probably fine; but since buttonFunctionList is (presumably) a List, it may contain duplicate elements, meaning that the original code would add a checkbox to the associated list multiple times.
So here is an alternative approach that preserves that behaviour: you add the checkbox to the list as many times as its id occurs in the other list.
As such, you can write the inner loop as:
int n = Collections.frequency(buttonFunctionList, checkBox.getId());
associatedCheckBoxList.addAll(Collections.nCopies(n, checkBox));
Thus, you can write this as:
List<CheckBox> associatedCheckBoxList =
        checkBoxList.stream()
                .flatMap(cb -> nCopies(frequency(buttonFunctionList, cb.getId()), cb).stream())
                .collect(toList());
(Using static imports for brevity)
If either checkBoxList or buttonFunctionList is large, you might want to consider computing the frequencies once:
Map<String, Long> frequencies = buttonFunctionList.stream()
        .collect(groupingBy(k -> k, counting()));
Then you can just use this in the lambda as the n parameter of nCopies:
frequencies.getOrDefault(cb.getId(), 0L).intValue()
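Putting it together with the snippet above (a sketch, using the same static imports):

List<CheckBox> associatedCheckBoxList =
        checkBoxList.stream()
                .flatMap(cb -> nCopies(frequencies.getOrDefault(cb.getId(), 0L).intValue(), cb).stream())
                .collect(toList());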
You should prefer collect over forEach when your goal is to produce some output Collection:
List<CheckBox> associatedCheckBoxList =
checkBoxList.stream()
.filter(cb -> buttonFunctionsList.stream().anyMatch(bf -> cb.getId().equals(bf)))
.collect(Collectors.toList());

What is the best way to aggregate Streams into one DISTINCT stream with Java 8

Suppose I have multiple Java 8 streams, where each stream can potentially be converted into a Set<AppStory>. Now I want, with the best performance, to aggregate all the streams into one stream that is DISTINCT by ID and sorted by the property "lastUpdate".
There are several ways to do this, but I want the fastest one. For example:
Set<AppStory> appStr1 = StreamSupport.stream(splititerato1, true)
        .map(storyId1 -> vertexToStory1(storyId1))
        .collect(toSet());
Set<AppStory> appStr2 = StreamSupport.stream(splititerato2, true)
        .map(storyId2 -> vertexToStory2(storyId2))
        .collect(toSet());
Set<AppStory> appStr3 = StreamSupport.stream(splititerato3, true)
        .map(storyId3 -> vertexToStory3(storyId3))
        .collect(toSet());

Set<AppStory> set = new HashSet<>();
set.addAll(appStr1);
set.addAll(appStr2);
set.addAll(appStr3);
// ... and then sort by "lastUpdate"
// POJO:
public class AppStory implements Comparable<AppStory> {

    private String storyId;
    // ... many other attributes ...

    public String getStoryId() {
        return storyId;
    }

    @Override
    public int compareTo(AppStory o) {
        return this.getStoryId().compareTo(o.getStoryId());
    }
}
... but that is the old way.
How can I create ONE stream, DISTINCT by ID and sorted, with the BEST PERFORMANCE?
Something like:
Set<AppStory> finalSet = distinctStream.sort((v1, v2) -> Integer.compare('not my issue')).collect(toSet());
Any ideas?
BR,
Vitaly
I think the parallel overhead is much greater than the actual work, as you stated in the comments, so let your streams do their job sequentially.
FYI: You should prefer Stream::concat over Stream::flatMap here, because slicing operations like Stream::limit can be bypassed by Stream::flatMap.
Stream::sorted collects every element of the stream into a list, sorts the list, and then pushes the elements down the pipeline in the desired order, where they are collected again. This can be avoided by collecting the elements into a List and sorting afterwards. A List is a far better choice than a Set here because the order matters (I know there is a LinkedHashSet, but you can't sort it).
This is, in my opinion, the cleanest and maybe the fastest solution, though we cannot prove it without measuring:
Stream<AppStory> appStr1 = StreamSupport.stream(splititerato1, false)
        .map(this::vertexToStory1);
Stream<AppStory> appStr2 = StreamSupport.stream(splititerato2, false)
        .map(this::vertexToStory2);
Stream<AppStory> appStr3 = StreamSupport.stream(splititerato3, false)
        .map(this::vertexToStory3);

// distinct() relies on equals/hashCode, so AppStory should override them to compare by ID
List<AppStory> stories = Stream.concat(Stream.concat(appStr1, appStr2), appStr3)
        .distinct()
        .collect(Collectors.toList());
// assuming AppStory::getLastUpdateTime is of type `long`
stories.sort(Comparator.comparingLong(AppStory::getLastUpdateTime));
I can't guarantee that this would be faster than what you have (I guess so, but you'd have to measure to be sure), but assuming you have 3 streams, you can simply do this:
List<AppStory> distinctSortedAppStories =
Stream.of(stream1, stream2, stream3)
.flatMap(Function.identity())
.map(this::vertexToStory)
.distinct()
.sorted(Comparator.comparing(AppStory::getLastUpdate))
.collect(Collectors.toList());
