Java 8 - Update two properties in same Stream code

I'm wondering if there is a way I can update an object twice inside a Stream lambda. I need to update two properties of a class: the value and the recordsCount properties.
Object:
public class HistoricDataModelParsed {
private Date startDate;
private Date endDate;
private Double value;
private int recordsCount;
}
I tried doing something like this:
val existingRecord = response.stream()
.filter(dateTime ->fromDate.equals(dateTime.getStartDate()))
.findAny()
.orElse(null);
response.stream()
.filter(dateTime ->fromDate.equals(dateTime.getStartDate()))
.findAny()
.orElse(existingRecord)
.setValue(valueAdded)
.setRecordsCount(amount);
But I got this error: "Cannot invoke setRecordsCount(int) on the primitive type void"
So I ended up streaming the list twice to update each of the two fields I needed:
response.stream()
.filter(dateTime ->fromDate.equals(dateTime.getStartDate()))
.findAny()
.orElse(existingRecord)
.setValue(valueAdded);
response.stream()
.filter(dateTime ->fromDate.equals(dateTime.getStartDate()))
.findAny()
.orElse(existingRecord)
.setRecordsCount(amount);
Is there a way I can achieve what I need without streaming the list twice?

The return type of setValue is void, not HistoricDataModelParsed, so you cannot chain a call to setRecordsCount (which is defined in the HistoricDataModelParsed class) on its result.
You could add a method to HistoricDataModelParsed which takes two parameters, for value and recordsCount:
public void setValueAndCount(Double value, int count) {
this.value = value;
this.recordsCount = count;
}
Then call this method after orElse:
response.stream()
.filter(dateTime ->fromDate.equals(dateTime.getStartDate()))
.findAny()
.orElse(existingRecord)
.setValueAndCount(valueAdded, amount);
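Another option, a sketch assuming you are free to change the setters themselves, is to make them fluent by returning this, so both calls chain in a single pipeline:
public HistoricDataModelParsed setValue(Double value) {
    this.value = value;
    return this;   // returning this makes the setter chainable
}

public HistoricDataModelParsed setRecordsCount(int recordsCount) {
    this.recordsCount = recordsCount;
    return this;
}
With these in place, the original .setValue(valueAdded).setRecordsCount(amount) chain compiles as written.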

The state of an object should not change within a stream; it can lead to inconsistent results. But you can create new instances of the objects and pass the new values via the constructor. Here is a simple record that demonstrates the approach. Records are basically immutable classes that have no setters, and the accessors carry the names of the components. A regular class would also work in this example.
record Temp(int getA, int getB) {
    @Override
    public String toString() {
        return "[" + getA + ", " + getB + "]";
    }
}
Some data
List<Temp> list = List.of(new Temp(10, 20), new Temp(50, 200),
new Temp(100, 200));
And the transformation: a new instance of Temp is created with the new value, together with the old ones, to completely populate the constructor. Otherwise, the existing object is passed along.
List<Temp> result = list.stream().map(
t -> t.getA() == 50 ? new Temp(2000, t.getB()) : t)
.toList();
System.out.println(result);
Prints
[[10, 20], [2000, 200], [100, 200]]
To answer the void error you got: a stream expects values to keep flowing through the pipeline, so if a method is void it doesn't return anything, and you have to return the element yourself. Here is an example:
stream.map(t->{voidReturnMethod(t); return t;}).toList();
The return ensures the pipeline continues.

Simply store the result of orElse and then call your methods on it.
HistoricDataModelParsed record =
        response.stream()
                .filter(dateTime -> fromDate.equals(dateTime.getStartDate()))
                .findAny()
                .orElse(existingRecord);
record.setValue(valueAdded);
record.setRecordsCount(amount);
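A single-pass variant is also possible with ifPresent; a sketch, assuming it is acceptable to do nothing when no match exists (note that existingRecord was produced by the very same filter, so the orElse fallback adds little):
response.stream()
        .filter(dateTime -> fromDate.equals(dateTime.getStartDate()))
        .findAny()
        .ifPresent(r -> {
            // update both fields of the matching record in one pass
            r.setValue(valueAdded);
            r.setRecordsCount(amount);
        });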

Related

Java aggregate same objects into one

I'm quite new to programming and have a tricky question.
I got an object which has multiple parameters:
public class SampleObject {
private String number;
private String valueOne;
private String valueTwo;
private String valueThree;
// getters, setters, all-args constructor
}
Every object always has a non-null number attribute as well as exactly one non-null value field. So for example, if valueOne is not null, the other two value fields valueTwo and valueThree will be null.
So here's my problem:
The SampleObject is referenced in AnotherClass which looks so:
public class AnotherClass {
private UUID id;
private List<SampleObject> sampleObjects;
// getters, setters, all-args constructor
}
I am receiving one object of AnotherClass containing multiple instances of SampleObject in a list.
What I want to do is merge all SampleObjects that have the same number into one object and provide a map where the number is the key and the merged object is the value. For example:
Sample1(number:"1", valueOne="1", valueTwo=null, valueThree=null)
Sample2(number:"1", valueOne=null, valueTwo="2", valueThree=null)
Sample3(number:"1", valueOne=null, valueTwo=null, valueThree="3")
Sample4(number:"2", valueOne="5", valueTwo=null, valueThree=null)
Desired state:
Sample1Merged(number:"1", valueOne="1", valueTwo="2", valueThree="3")
Sample4(number:"2", valueOne="5", valueTwo=null, valueThree=null)
What I have already done is the following:
final Map<String, SampleObject> mapOfMergedSamples = new LinkedHashMap<>();
anotherClass.getSampleObjects().stream()
.sorted(Comparator.comparing(SampleObject::getNumber))
.forEach(s -> mapOfMergedSamples.put(s.getNumber(), new SampleObject(Stream.of(s.getValueOne(), s.getValueTwo())
.filter(Objects::nonNull)
.collect(Collectors.joining()), s.getValueThree()))
);
return mapOfMergedSamples;
The problem with my current attempt is that entries get overwritten because they share the same key in the map (the number in the SampleObject). Does someone know how I can achieve my desired state?
Based on your usage of Collectors.joining() I assume that you want to concatenate all non-null values without any delimiter (anyway, it can easily be changed).
In order to combine SampleObject instances having the same number property, you can group them into an intermediate Map where the number would serve as Key and a custom accumulation type (having properties valueOne, valueTwo, valueThree) would be a Value (note: if you don't want to define a new type, you can put the accumulation right into the SampleObject, but I'll go with a separate class because this approach is more flexible).
Here's how it might look (for convenience, I've implemented the Consumer interface):
public class SampleObjectAccumulator implements Consumer<SampleObject> {
    private StringBuilder valueOne = new StringBuilder();
    private StringBuilder valueTwo = new StringBuilder();
    private StringBuilder valueThree = new StringBuilder();

    @Override
    public void accept(SampleObject sampleObject) {
        if (sampleObject.getValueOne() != null) valueOne.append(sampleObject.getValueOne());
        if (sampleObject.getValueTwo() != null) valueTwo.append(sampleObject.getValueTwo());
        if (sampleObject.getValueThree() != null) valueThree.append(sampleObject.getValueThree());
    }

    public SampleObjectAccumulator merge(SampleObjectAccumulator other) {
        valueOne.append(other.valueOne);
        valueTwo.append(other.valueTwo);
        valueThree.append(other.valueThree);
        return this;
    }

    public SampleObject toSampleObject(String number) {
        return new SampleObject(
                number,
                valueOne.toString(),
                valueTwo.toString(),
                valueThree.toString()
        );
    }

    // getters
}
To create the intermediate Map we can use the collector groupingBy(), and as its downstream collector, in order to leverage the custom accumulation type, we can provide a custom collector, which can be instantiated using the factory method Collector.of().
Then we need to create a stream over the entries of the intermediate map in order to transform the Value.
Note that sorting is applied only in the second stream.
AnotherClass anotherClass = // initializing the AnotherClass instance
final Map<String, SampleObject> mapOfMergedSamples = anotherClass.getSampleObjects().stream()
.collect(Collectors.groupingBy(
SampleObject::getNumber,
Collector.of(
SampleObjectAccumulator::new,
SampleObjectAccumulator::accept,
SampleObjectAccumulator::merge
)
))
.entrySet().stream()
.sorted(Map.Entry.comparingByKey())
.collect(Collectors.toMap(
Map.Entry::getKey,
e -> e.getValue().toSampleObject(e.getKey()),
(left, right) -> { throw new AssertionError("All keys are expected to be unique"); },
LinkedHashMap::new
));
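As an aside, since the question states that each SampleObject has exactly one non-null value field, concatenation may not even be needed. A simpler sketch (not the approach above, and assuming the all-args constructor takes number, valueOne, valueTwo, valueThree in that order) merges with "first non-null wins" via Collectors.toMap:
final Map<String, SampleObject> mapOfMergedSamples = anotherClass.getSampleObjects().stream()
        .sorted(Comparator.comparing(SampleObject::getNumber))
        .collect(Collectors.toMap(
                SampleObject::getNumber,
                s -> s,
                // merge two objects with the same number, keeping the first non-null value per field
                (a, b) -> new SampleObject(
                        a.getNumber(),
                        a.getValueOne() != null ? a.getValueOne() : b.getValueOne(),
                        a.getValueTwo() != null ? a.getValueTwo() : b.getValueTwo(),
                        a.getValueThree() != null ? a.getValueThree() : b.getValueThree()),
                LinkedHashMap::new));
Because the stream is sorted before collecting and the map is a LinkedHashMap, the entries keep the order of the numbers.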

Optional.map - how does it work exactly?

I'm trying to get into Optional issues in Java 8. I've written an extremely simple program, consisting of one class and main() method.
I expect the output to be [aaa, DDD, ccc]. However, I'm getting [aaa, bbb, ccc]. But if I change the s = new TestClass("DDD") line to the commented one, I get what I want.
So how does map() work? Can it map an object only by editing it? Why doesn't it work properly if I create a new instance and return it?
class:
public class TestClass {
String str;
public TestClass(String str) {
this.str = str;
}
@Override
public String toString() {
return str;
}
}
main() method:
public static void main(String[] args) {
List<TestClass> list = new ArrayList<TestClass>();
list.add( new TestClass("aaa") );
list.add( new TestClass("bbb") );
list.add( new TestClass("ccc") );
list.stream()
.filter( s -> s.str.equals("ccc") || s.str.equals("bbb") )
.findFirst()
.map( s -> {
// s.str = "DDD"; this works just fine
s = new TestClass("DDD");
return s;
} );
System.out.println(list);
}
You are assigning a new object to the parameter inside the method.
The new reference is valid only within the method's scope; as soon as the method returns, the caller's reference still points to the original instance.
In Java this is the expected behavior, because references are passed by value.
E.g:
String s = "foo";
changeString(s);
print(s); // prints "foo"
where
void changeString(String s) {
s = "bar";
}
Nothing forbids you from changing your object's properties, though, as you do with s.str = "DDD" (unless your object is immutable, of course).
In your specific case, you are not doing anything with the result of the map lambda, therefore the object you return from it is simply discarded.
Actually, map is unnecessary in your case: even when you just do s.str = "DDD", that could be done within a forEach.
But, since you are working on only one result, which may not even exist (it's an Optional), you should use
...findFirst().ifPresent(s -> s.str = "DDD" );
You should use map only when you need to transform an object into a different type for further processing.
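For example, a minimal sketch (illustrative only) where map genuinely transforms the found TestClass into a different type, here just the length of its str field:
Optional<Integer> length = list.stream()
        .filter(s -> s.str.equals("ccc") || s.str.equals("bbb"))
        .findFirst()
        .map(s -> s.str.length());      // TestClass -> Integer
length.ifPresent(System.out::println);  // prints 3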
Please change your code to:
Optional<Object> result = list.stream()
.filter( s -> s.str.equals("ccc") || s.str.equals("bbb") )
.findFirst()
.map( s -> {
// s.str = "DDD"; this works just fine
s = new TestClass("DDD");
return s;
} );
System.out.println(result);
and run it again - it should give you a hint.
Please notice that you don't use result in your version at all, and modifying the source list in such cases is not the best habit (possible multithreading issues).
With s.str = "DDD" you are mutating the TestClass instance in the list, setting its str field to the value DDD.
With s = new TestClass("DDD"), you are not touching the instances in the list. The variable s is a local variable to that lambda block. Assigning a new object reference to it will not change the str field of the object it was pointing to earlier.
Usually, with a map, you have to collect it or do something with the result. But here you are not doing anything with the mapped result.
The map method is used to map each element to its corresponding result. If you want to change one element of your list, you need to collect the results into a new List using the collect() method.
List<TestClass> list = new ArrayList<TestClass>();
list.add( new TestClass("aaa") );
list.add( new TestClass("bbb") );
list.add( new TestClass("ccc") );
List<TestClass> result = list.stream()
.map( s -> {
if (s.toString().equals("bbb")) {
s = new TestClass("DDD");
}
return s;
}).collect(Collectors.toList());
for(TestClass t : result){
System.out.println(t);
}
The result of this is:
aaa
DDD
ccc
Tidying up the stream:
List<TestClass> result = list.stream()
.map( s -> s.toString().equals("bbb") ? new TestClass("DDD") : s)
.collect(Collectors.toList());

How to parallelize a loop in Java

In the following code, a local method is called on every element of a HashSet. If it returns a special value we halt the loop. Otherwise we add every return value to a new HashSet.
HashSet<Object> myHashSet=…;
HashSet<Object> mySecondHashSet=…;
for (Object s : myHashSet) {
Object value = my_method(s);
if(value==specialValue)
return value;
else
mySecondHashSet.add(value);
}
I’d like to parallelize this process. None of the objects in the HashSet have any objects in common (it’s a tree-like structure), so I know they can run without any synchronization issues. How do I modify the code so that each call of my_method(s) starts a new thread, and so that if one of the threads evaluates to the special value, all the threads halt and the special value is returned?
With Java 8 in mind, this could be relatively simple, although it won't preserve your initial code's semantics.
In case all you need is to return the special value once you hit it:
if (myHashSet.parallelStream()
.map(x -> method(x))
.anyMatch(x -> x == specialValue)) {
return specialValue;
}
If you need to keep the transformed values until you meet the special value, you already got an answer from @Elliot in the comments, though it should be mentioned that the semantics are not the same as in your original code, since no order will be preserved.
While it is yet to be verified, I would expect the following to be optimized and to stop once it hits the wanted special value:
if (myHashSet.parallelStream()
.anyMatch(x -> method(x) == specialValue)) {
return specialValue;
}
I would do that in two passes:
find if any of the transformed set elements matches the special value;
transform them to a Set.
Starting a new thread for each transformation is way too heavy, and will bring your machine to its knees (unless you have very few elements, in which case parallelizing is probably not worth the effort).
To avoid transforming the values twice with my_method, you can do the transformation lazily and memoize the result:
private class Memoized {
    private Object value;
    private Object transformed;
    private Function<Object, Object> transform;

    public Memoized(Object value, Function<Object, Object> transform) {
        this.value = value;
        this.transform = transform;
    }

    public Object getTransformed() {
        if (transformed == null) {
            transformed = transform.apply(value);
        }
        return transformed;
    }
}
And then you can use the following code:
Set<Memoized> memoized =
        myHashSet.stream() // no need to go parallel here
                .map(o -> new Memoized(o, this::my_method))
                .collect(Collectors.toSet());
Optional<Memoized> matching = memoized.parallelStream()
        .filter(m -> m.getTransformed().equals(specialValue))
        .findAny();
if (matching.isPresent()) {
    return matching.get().getTransformed();
}
Set<Object> allTransformed =
        memoized.parallelStream()
                .map(m -> m.getTransformed())
                .collect(Collectors.toSet());

Fill Map<String,Map<String,Integer>> with Stream

I have a LinkedList with data (author, date, LinkedList<Changes(lines, path)>).
Now I want to create from it, using a stream, a Map<Filepath, Map<Author, changes>>.
public Map<String, Map<String, Integer>> authorFragmentation(List<Commit> commits) {
return commits.stream()
.map(Commit::getChangesList)
.flatMap(changes -> changes.stream())
.collect(Collectors.toMap(
Changes::getPath,
Collectors.toMap(
Commit::getAuthorName,
(changes) -> 1,
(oldValue, newValue) -> oldValue + 1)));
}
I tried it like this, but it doesn't work.
How can I create this map of maps with a stream and count the changes at the same time?
Jeremy Grand is completely correct in his comment: in your collector it has long been forgotten that you started out from a stream of Commit objects, so you cannot use Commit::getAuthorName there. The challenge is how to carry the author name to a place where you also have the path. One solution is to put both into a newly created string array (since both are strings).
public Map<String, Map<String, Long>> authorFragmentation(List<Commit> commits) {
return commits.stream()
.flatMap(c -> c.getChangesList()
.stream()
.map((Changes ch) -> new String[] { c.getAuthorName(), ch.getPath() }))
.collect(Collectors.groupingBy(sa -> sa[1],
Collectors.groupingBy(sa -> sa[0], Collectors.counting())));
}
Collectors.counting() insists on counting into a Long, not Integer, so I have modified your return type. I’m sure a conversion to Integer would be possible if necessary, but I would first consider whether I could live with Long.
It’s not the most beautiful stream code, and I will wait to see if other suggestions come up.
The code compiles, but since I have neither your classes nor your data, I have not tried running it. If there are any issues, please report back.
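If the Map<String, Map<String, Integer>> return type from the question really is required, a possible sketch (assuming the counts fit into an int) is to wrap the counting collector with Collectors.collectingAndThen and Math.toIntExact:
public Map<String, Map<String, Integer>> authorFragmentation(List<Commit> commits) {
    return commits.stream()
            .flatMap(c -> c.getChangesList()
                    .stream()
                    .map((Changes ch) -> new String[] { c.getAuthorName(), ch.getPath() }))
            .collect(Collectors.groupingBy(sa -> sa[1],
                    Collectors.groupingBy(sa -> sa[0],
                            // count into a Long, then narrow it to an Integer
                            Collectors.collectingAndThen(Collectors.counting(), Math::toIntExact))));
}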
Your mistake is that the map/flatMap calls "throw away" the Commit. You do not know which Commit a Change belongs to when trying to collect. In order to keep that information, I'd recommend creating a small helper class (you could also use a simple Pair):
public class OneChange
{
    private Commit commit;
    private Changes change;

    public OneChange(Commit commit, Changes change)
    {
        this.commit = commit;
        this.change = change;
    }

    public String getAuthorName() { return commit.getAuthorName(); }
    public String getPath()       { return change.getPath(); }
    public Integer getLines()     { return change.getLines(); }
}
You can then flatMap to that, group it by path and author, and then sum up the lines changed:
commits.stream()
       .flatMap(commit -> commit.getChangesList().stream().map(change -> new OneChange(commit, change)))
       .collect(Collectors.groupingBy(OneChange::getPath,
                Collectors.groupingBy(OneChange::getAuthorName,
                        Collectors.summingInt(OneChange::getLines))));
In case you do not want to sum up the lines, but just count the Changes, replace Collectors.summingInt(OneChange::getLines) by Collectors.counting().
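For reference, a sketch of that counting variant; note that counting() yields Long, so the inner map's value type changes accordingly (the variable name is illustrative):
Map<String, Map<String, Long>> countsPerPathAndAuthor = commits.stream()
        .flatMap(commit -> commit.getChangesList().stream().map(change -> new OneChange(commit, change)))
        .collect(Collectors.groupingBy(OneChange::getPath,
                Collectors.groupingBy(OneChange::getAuthorName,
                        Collectors.counting())));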

Getting filtered records from streams using lambdas in java

I have an entity Employee
class Employee{
private String name;
private String addr;
private String sal;
}
Now I have a list of these employees. I want to filter out the objects whose name is null and set addr = 'A' on the remaining ones. I was able to achieve that like below:
List<Employee> list2= list.stream()
.filter(l -> l.getName() != null)
.peek(l -> l.setAddr("A"))
.collect(Collectors.toList());
Now list2 has all the employees whose name is not null, with addr set to "A" for each of them.
What I also want is the employees that were filtered out (name == null), so that I can save them in the DB. One way I achieved this is like below:
List<Employee> list2= list.stream()
.filter(l -> filter(l))
.peek(l -> l.setAddr("A"))
.collect(Collectors.toList());
private static boolean filter(Employee l) {
    boolean j = l.getName() != null;
    if (!j) {
        // save in db
    }
    return j;
}
1) Is this the right way?
2) Can we do this directly in the lambda expression instead of writing a separate method?
Generally, you should not use side effects in behavioral parameters. See the sections “Stateless behaviors” and “Side-effects” of the package documentation. Also, it’s not recommended to use peek for non-debugging purposes; see “In Java streams is peek really only for debugging?”
There’s not much advantage in trying to squeeze all these different operations into a single Stream pipeline. Consider the clean alternative:
Map<Boolean,List<Employee>> m = list.stream()
.collect(Collectors.partitioningBy(l -> l.getName() != null));
m.get(false).forEach(l -> {
// save in db
});
List<Employee> list2 = m.get(true);
list2.forEach(l -> l.setAddr("A"));
Regarding your second question, a lambda expression allows almost everything a method does. The differences are in the declaration, i.e. you can’t declare additional type parameters nor annotate the return type. Still, you should avoid writing too much code into a lambda expression, as, of course, you can’t create test cases directly calling that code. But that’s a matter of programming style, not a technical limitation.
If you are okay with using peek to implement your logic (though it is not recommended except for learning), you can do the following:
List<Employee> list2= list.stream()
.peek(l -> { // add this peek to do persistence
if(l.getName()==null){
persistInDB(l);
}
}).filter(l -> l.getName() != null)
.peek(l -> l.setAddr("A"))
.collect(Collectors.toList());
You can also do something like this:
List<Employee> list2 = list.stream()
.filter(l->{
boolean condition = l.getName()!=null;
if(condition){
l.setAddr("A");
} else {
persistInDB(l);
}
return condition;
})
.collect(Collectors.toList());
Hope this helps!
