I want to use Java 8 lambdas and streams to reduce the amount of code in the following method, which produces an Optional. Is that possible?
My code:
protected Optional<String> getMediaName(Participant participant) {
    for (ParticipantDevice device : participant.getDevices()) {
        if (device.getMedia() != null && StringUtils.isNotEmpty(device.getMedia().getMediaType())) {
            String mediaType = device.getMedia().getMediaType().toUpperCase();
            Map<String, String> mediaToNameMap = config.getMediaMap();
            if (mediaToNameMap.containsKey(mediaType)) {
                return Optional.of(mediaToNameMap.get(mediaType));
            }
        }
    }
    return Optional.empty();
}
Yes. Assuming the following data model (I used records here):
record Media(String getMediaType) {}
record ParticipantDevice(Media getMedia) {}
record Participant(List<ParticipantDevice> getDevices) {}
It is pretty self-explanatory. Unless you have an empty string as a key, you don't need, IMO, to check for emptiness in your search. The main difference here is that once the map entry is found, Optional.map is used to return the value instead of the key.
I also checked this against your loop version, and it behaves the same.
public static Optional<String> getMediaName(Participant participant) {
    Map<String, String> mediaToNameMap = config.getMediaMap();
    return participant.getDevices().stream()
            .map(ParticipantDevice::getMedia)
            .filter(Objects::nonNull)
            .map(Media::getMediaType)
            .filter(Objects::nonNull) // guard against a null media type before toUpperCase()
            .map(String::toUpperCase)
            .filter(mediaToNameMap::containsKey)
            .findFirst()
            .map(mediaToNameMap::get);
}
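For example, with a hypothetical media map of {VIDEO=Video Call}, the method resolves the configured name regardless of the original casing:
Participant participant = new Participant(List.of(new ParticipantDevice(new Media("video"))));
getMediaName(participant); // Optional[Video Call], since "VIDEO" is a key in the map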
Firstly, since your Map of media types returned by config.getMediaMap() doesn't depend on a particular device, it makes sense to obtain it before processing the collection of devices. I.e., regardless of the approach (imperative or declarative), do it outside the loop, or before creating the stream, to avoid generating the same Map multiple times.
To implement this method with streams, you need the filter() operation, which expects a Predicate, to apply the conditional logic, and map() to perform a transformation of stream elements.
To get the first element that matches the conditions, apply findFirst(), which produces an optional result, as the terminal operation.
protected Optional<String> getMediaName(Participant participant) {
    Map<String, String> mediaToNameMap = config.getMediaMap();
    return participant.getDevices().stream()
            .filter(device -> device.getMedia() != null
                    && StringUtils.isNotEmpty(device.getMedia().getMediaType()))
            .map(device -> device.getMedia().getMediaType().toUpperCase())
            .filter(mediaToNameMap::containsKey)
            .map(mediaToNameMap::get)
            .findFirst();
}
I have a stream of data as shown below and I wish to collect the data based on a condition.
Stream of data:
452857;0;L100;csO;20220411;20220411;EUR;000101435;+; ;F;1;EUR;000100000;+;
452857;0;L120;csO;20220411;20220411;EUR;000101435;+; ;F;1;EUR;000100000;+;
452857;0;L121;csO;20220411;20220411;EUR;000101435;+; ;F;1;EUR;000100000;+;
452857;0;L126;csO;20220411;20220411;EUR;000101435;+; ;F;1;EUR;000100000;+;
452857;0;L100;csO;20220411;20220411;EUR;000101435;+; ;F;1;EUR;000100000;+;
452857;0;L122;csO;20220411;20220411;EUR;000101435;+; ;F;1;EUR;000100000;+;
I wish to collect the data based on index 2 (L100, L121, ...) and store it in different lists for L120, L121, L122, etc., using Java 8 streams. Any suggestions?
Note: the splittedLine array below is my stream of data.
I have tried the following, but I think there's a shorter way:
List<String> L100_ENTITY_NAMES = Arrays.asList("L100", "L120", "L121", "L122", "L126");
List<List<String>> list = L100_ENTITY_NAMES.stream()
        .map(entity -> Arrays.stream(splittedLine)
                .filter(line -> {
                    String[] values = line.split(String.valueOf(DELIMITER));
                    if (values.length > 2) { // need at least three fields to read values[2]
                        return entity.equals(values[2]);
                    } else {
                        return false;
                    }
                })
                .collect(Collectors.toList()))
        .collect(Collectors.toList());
I'd rather change the order and also collect the data into a Map<String, List<String>> where the key would be the entity name.
Assuming splittedLine is the array of lines, I'd probably do something like this:
Set<String> L100_ENTITY_NAMES = Set.of("L100", ...);
String delimiter = String.valueOf(DELIMITER);

Map<String, List<String>> result = Arrays.stream(splittedLine)
        .map(line -> {
            String[] values = line.split(delimiter);
            if (values.length < 3) {
                return null;
            }
            return new AbstractMap.SimpleEntry<>(values[2], line);
        })
        .filter(Objects::nonNull)
        .filter(entry -> L100_ENTITY_NAMES.contains(entry.getKey()))
        .collect(Collectors.groupingBy(Map.Entry::getKey,
                Collectors.mapping(Map.Entry::getValue, Collectors.toList())));
Note that this isn't necessarily shorter but has a couple of other advantages:
It's not O(n*m) but rather O(n), since contains on a hash-based Set is a constant-time lookup, so it should be faster for non-trivial stream sizes
You get an entity name for each list rather than having to rely on the indices in both lists
It's easier to understand because you use distinct steps:
split and map the line
filter null values, i.e. lines that aren't valid in the first place
filter lines that don't have any of the L100 entity names
collect the filtered lines by entity name so you can easily access the sublists (see the sketch right after this list)
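For instance, assuming the result map built above, each sublist can then be looked up directly by its entity name (getOrDefault here just avoids a null for entities that never occur):
// Lines whose third field was "L100" (empty list if none were found)
List<String> l100Lines = result.getOrDefault("L100", Collections.emptyList());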
I would convert the semicolon-delimited lines to objects as soon as possible, instead of keeping them around as a serialized bunch of data.
First, I would create a record modelling our data:
public record LBasedEntity(long id, int zero, String lcode, …) { }
Then, create a method to parse the line. This could also be handled by an external parsing library, since this looks like CSV with a semicolon as the delimiter.
private static LBasedEntity parse(String line) {
    String[] parts = line.split(";");
    if (parts.length < 3) {
        return null;
    }
    long id = Long.parseLong(parts[0]);
    int zero = Integer.parseInt(parts[1]);
    String lcode = parts[2];
    …
    return new LBasedEntity(id, zero, lcode, …);
}
Then the mapping is trivial:
Map<String, List<LBasedEntity>> result = Arrays.stream(lines)
        .map(line -> parse(line))
        .filter(Objects::nonNull)
        .filter(lBasedEntity -> L100_ENTITY_NAMES.contains(lBasedEntity.lcode()))
        .collect(Collectors.groupingBy(LBasedEntity::lcode));
map(line -> parse(line)) parses the line into an LBasedEntity object (or whatever you call it);
filter(Objects::nonNull) filters out all null values produced by the parse method;
the next filter keeps all entities whose lcode property is contained in the L100_ENTITY_NAMES collection (I would turn this into a Set, to speed things up; see the sketch below);
then a Map is built with key-value pairs of L100_ENTITY_NAME → List<LBasedEntity>.
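As a minimal sketch of that Set suggestion, reusing the entity names from the question:
// A hash-based Set makes contains() effectively constant-time
Set<String> L100_ENTITY_NAMES = Set.of("L100", "L120", "L121", "L122", "L126");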
You're effectively asking for what languages like Scala provide on collections: groupBy. In Scala you could write:
splitLines.groupBy(_(2)) // Map[String, List[String]]
Of course, you want this in Java, and in my opinion, not using streams here makes sense due to Java's lack of a fold or groupBy function.
HashMap<String, ArrayList<String>> map = new HashMap<>();
for (String[] line : splitLines) {
    if (line.length < 3) continue; // need at least three fields to read line[2]
    ArrayList<String> xs = map.getOrDefault(line[2], new ArrayList<>());
    xs.addAll(Arrays.asList(line));
    map.put(line[2], xs);
}
As you can see, it's very easy to understand, and actually shorter than the stream-based solution.
I'm leveraging two key methods on a HashMap.
The first is getOrDefault: if the value associated with our key doesn't exist, we can provide a default; in our case, an empty ArrayList.
The second is put, which effectively acts like a putOrReplace, because it lets us overwrite the previous value associated with the key.
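As a side note, the same getOrDefault/put pair can be collapsed into a single call with computeIfAbsent, also available on Map since Java 8:
// Creates the list on first use, then appends the line's fields to it
map.computeIfAbsent(line[2], k -> new ArrayList<>()).addAll(Arrays.asList(line));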
I hope that was helpful. :)
You're asking for a shorter way to achieve the same result; actually, your code is good. I guess the only part that makes it look lengthy is the if/else check in the stream:
if (values.length > 2) {
    return entity.equals(values[2]);
} else {
    return false;
}
I would suggest introducing two tiny private methods to improve readability, like this:
List<List<String>> list = L100_ENTITY_NAMES.stream()
        .map(entity -> getLinesByEntity(splittedLine, entity))
        .collect(Collectors.toList());

private List<String> getLinesByEntity(String[] splittedLine, String entity) {
    return Arrays.stream(splittedLine)
            .filter(line -> isLineMatched(entity, line))
            .collect(Collectors.toList());
}

private boolean isLineMatched(String entity, String line) {
    // DELIMITER is a char in the question, hence String.valueOf
    String[] values = line.split(String.valueOf(DELIMITER));
    return values.length > 2 && entity.equals(values[2]);
}
I have a function:
String fun(List<Function<String, String>> pro, String x) {
    for (var p : pro) {
        x = p.apply(x);
    }
    return x;
}
How can I convert this function to functional style instead of imperative style?
Assuming what you want is to apply each function to your string, passing along the result of each function to the next, you can do this with reduce.
String fun(List<Function<String, String>> functions, String x) {
    return functions.stream()
            .reduce(s -> s, Function::andThen)
            .apply(x);
}
Using reduce with andThen creates a combined function that chains your list of functions together. We then apply the combined function to x.
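For example, a quick usage sketch (the two functions here are hypothetical, just to show the left-to-right application order):
List<Function<String, String>> steps = Arrays.asList(s -> s + "a", s -> s + "b");
String result = fun(steps, "x"); // "xab": "x" -> "xa" -> "xab"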
Alternatively, @Naman in the comments suggests the formulation:
functions.stream()
        .reduce(Function::andThen)
        .orElse(Function.identity())
        .apply(x)
which I believe performs one fewer andThen operation (when the list of functions is nonempty), but is functionally the same as the first version.
(Function.identity() is another way to write s -> s.)
I believe you are already aware of the possible compilation errors: you can't just define a List<Function<>> without a common type understanding across that list of functions. Maybe you can get some inspiration from the code snippets below.
String fun(List<Function<String, String>> listOfFunctions, String commonInputStr) {
    for (Function<String, String> function : listOfFunctions) {
        String tempValStr = function.apply(commonInputStr);
        if (tempValStr != null) {
            return tempValStr;
        }
    }
    return null;
}
Or, if you want to find the first non-null result value, like below:
Optional<String> fun(List<Function<String, String>> listOfFunctions, String commonInputStr) {
    return listOfFunctions.stream()
            .map(function -> function.apply(commonInputStr))
            .filter(Objects::nonNull) // skip null results, matching the imperative version above
            .findFirst();
}
I have the following block; processRule() removes entries from the differences list.
public List<Difference> process(List<Rule> rules, List<Difference> differences) {
    for (Rule rule : rules) {
        differences = processRule(rule, differences);
    }
    return differences;
}
How can this be done with the Stream API? I can't just use flatMap, because I need each new call to processRule() to receive the already-reduced differences as an argument.
Maybe something like this, using stream reduce.
Note: not tested, posting from my mobile.
return rules.stream()
        .reduce(differences,
                (diffs, rule) -> processRule(rule, diffs),
                (left, right) -> right);
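Note that this form of reduce takes three arguments: the identity (the initial differences list), an accumulator that feeds each rule the previously reduced list, and a combiner. The combiner is only invoked for parallel streams, so a placeholder like (left, right) -> right works here, but this reduction should stay sequential, since each step depends on the previous result.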
In the following code, a local method is called on every element of a HashSet. If it returns a special value, we halt the loop; otherwise we add every return value to a new HashSet.
HashSet<Object> myHashSet = …;
HashSet<Object> mySecondHashSet = …;
for (Object s : myHashSet) {
    Object value = my_method(s);
    if (value == specialValue)
        return value;
    else
        mySecondHashSet.add(value);
}
I'd like to parallelize this process. None of the objects in the HashSet have any objects in common (it's a tree-like structure), so I know they can run without any synchronization issues. How do I modify the code so that each call of my_method(s) starts a new thread, and so that if one of the threads evaluates to the special value, all the threads halt without returning and the special value is returned?
With Java 8 in mind, this could be relatively simple, although it won't preserve your initial code's semantics.
In case all you need is to return the special value once you hit it:
if (myHashSet.parallelStream()
        .map(x -> method(x))
        .anyMatch(x -> x == specialValue)) {
    return specialValue;
}
If you need to keep the transformed values until you meet the special value, you already got an answer from @Elliot in the comments, though it's worth mentioning that the semantics are not the same as your original code, since no ordering will be preserved.
While I have yet to check it, I would expect the following to be optimized and to stop once it hits the wanted special value:
if (myHashSet.parallelStream()
        .anyMatch(x -> method(x) == specialValue)) {
    return specialValue;
}
I would do that in two passes:
find if any of the transformed set elements matches the special value;
transform them to a Set.
Starting a new thread for each transformation is way too heavy and will bring your machine to its knees (unless you have very few elements, in which case parallelizing is probably not worth the effort).
To avoid transforming the values twice with my_method, you can do the transformation lazily and memoize the result:
private class Memoized {
    private Object value;
    private Object transformed;
    private Function<Object, Object> transform;

    public Memoized(Object value, Function<Object, Object> transform) {
        this.value = value;
        this.transform = transform; // was missing; without it getTransformed() throws an NPE
    }

    public Object getTransformed() {
        // Lazy, idempotent computation: a rare duplicate computation under
        // parallel access is harmless as long as transform is pure
        if (transformed == null) {
            transformed = transform.apply(value);
        }
        return transformed;
    }
}
And then you can use the following code:
Set<Memoized> memoized = myHashSet.stream() // no need to go parallel here
        .map(o -> new Memoized(o, this::my_method))
        .collect(Collectors.toSet());

Optional<Memoized> matching = memoized.parallelStream()
        .filter(m -> m.getTransformed().equals(specialValue))
        .findAny();

if (matching.isPresent()) {
    return matching.get().getTransformed();
}

Set<Object> allTransformed = memoized.parallelStream()
        .map(m -> m.getTransformed())
        .collect(Collectors.toSet());
I have a java.util.Map inside an rx.Observable and I want to filter the map (remove an element based on a given key).
My current code is a mix of imperative and functional; I want to accomplish this goal without the call to isItemInDataThenRemove.
public static Observable<Map<String, Object>> filter(Map<String, Object> data, String removeKey) {
    return Observable.from(data).filter(entry -> isItemInDataThenRemove(entry, removeKey));
}

private static boolean isItemInDataThenRemove(Map<String, Object> data, String removeKey) {
    for (Map.Entry<String, Object> entry : data.entrySet()) {
        if (entry.getKey().equalsIgnoreCase(removeKey)) {
            System.out.printf("Found element %s, removing.", removeKey);
            data.remove(removeKey);
            return true;
        }
    }
    return false;
}
The code you have proposed has a general problem in that it modifies the underlying collection while operating on it. This conflicts with the non-interference requirement for streams, and in practice it often means that you will get a ConcurrentModificationException when a stream pipeline removes objects from the underlying container.
In any case (as I learned yesterday), there is a default method on the Collection interface, removeIf, that does pretty much exactly what you want:
private static boolean isItemInDataThenRemove(Map<String, Object> data, String removeKey) {
    return data.entrySet().removeIf(entry -> entry.getKey().equalsIgnoreCase(removeKey));
}
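For example, a quick check of its behavior (hypothetical map contents, just for illustration):
Map<String, Object> data = new HashMap<>(Map.of("foo", 1, "BAR", 2));
isItemInDataThenRemove(data, "bar"); // true: "BAR" matched case-insensitively and was removed
isItemInDataThenRemove(data, "baz"); // false: nothing matched, the map is unchanged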
WORKING CODE:
private static boolean isItemInDataThenRemove(Map<String, Object> data, String removeKey) {
    // Collect the matching keys first, then remove them afterwards, so the map
    // is never modified while the stream is still iterating over it
    List<String> matchedKeys = data.keySet().stream()
            .filter(key -> key.equalsIgnoreCase(removeKey))
            .collect(Collectors.toList());
    matchedKeys.forEach(data::remove);
    return !matchedKeys.isEmpty();
}