Use Java 8 Optional in existing Java 7 code

I have an assignment in which I need to convert the following pre-Java 8 code to Java 8 code. Below is just one method that is giving me a hard time:
public static List<VehicleMake> loadMatching(Region region, String nameStartsWith, VehicleLoader loader) {
if ((nameStartsWith == null) || (region == null) || (loader == null)) {
throw new IllegalArgumentException("The VehicleLoader and both region and nameStartsWith are required when loading VehicleMake matches");
}
List<VehicleMake> regionMakes = loader.getVehicleMakesByRegion(region.name());
if (regionMakes == null) {
return null;
}
List<VehicleMake> matches = new ArrayList<>(regionMakes.size());
for (VehicleMake make : regionMakes) {
if ((make.getName() == null) || !make.getName().startsWith(nameStartsWith)) {
continue;
}
matches.add(make);
}
return matches;
}
I want to remove the null checks by using Optional<T> without modifying previously created classes and interfaces.
I tried to begin by changing the method return type and doing the following, but the compiler throws this error:
Bad return type in method reference since the VehicleMake class doesn't have optional instance fields.
Following is my code attempt:
public static Optional<List<VehicleMake>> loadMatchingJava8(Region region, String nameStartsWith, VehicleLoader loader) {
Optional<List<VehicleMake>> regionMakes = Optional.ofNullable(loader).ifPresent(loader.getVehicleMakesByRegion(Optional.ofNullable(region).ifPresent(region.name())));
/*
TODO rest of the conversion
*/
}
EDIT: Removed the flatMap and corrected code by not passing argument to method reference. But now it is not letting me pass region.name() to getVehicleMakesByRegion()
EDIT: Pass in consumer to ifPresent():
Optional<List<VehicleMake>> regionMakes = Optional.ofNullable(loader).ifPresent(()-> loader.getVehicleMakesByRegion(Optional.ofNullable(region).ifPresent(()->region.name()));

You may replace your initial null checks with
Optional.ofNullable(nameStartsWith)
.flatMap(x -> Optional.ofNullable(region))
.flatMap(x -> Optional.ofNullable(loader))
.orElseThrow(() -> new IllegalArgumentException(
"The VehicleLoader and both region and nameStartsWith"
+ " are required when loading VehicleMake matches"));
but it's an abuse of that API. Even worse, it wastes resources for the questionable goal of providing a rather meaningless exception in the error case.
Compare with
Objects.requireNonNull(region, "region is null");
Objects.requireNonNull(nameStartsWith, "nameStartsWith is null");
Objects.requireNonNull(loader, "loader is null");
which is concise and will throw an exception with a precise message in the error case. It will be a NullPointerException rather than an IllegalArgumentException, but even that’s a change that will lead to a more precise description of the actual problem.
Regarding the rest of the method, I strongly advise never letting collections be null in the first place. Then you don't have to test the result of getVehicleMakesByRegion for null and won't return null yourself.
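To illustrate that advice: if the loader is guaranteed (or wrapped) to return an empty list instead of null, the whole method collapses to a plain stream pipeline with no null checks on the collection. The class below is a minimal, reduced stand-in for the question's types, just so the sketch compiles on its own:

```java
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class NeverNullDemo {
    // Minimal stand-in for the question's VehicleMake (assumed shape).
    record VehicleMake(String name) {
        String getName() { return name; }
    }

    // With a never-null collection, no Optional wrapping is needed at all.
    static List<VehicleMake> loadMatching(List<VehicleMake> regionMakes, String nameStartsWith) {
        Objects.requireNonNull(nameStartsWith, "nameStartsWith is null");
        return regionMakes.stream()
                .filter(m -> m.getName() != null && m.getName().startsWith(nameStartsWith))
                .collect(Collectors.toList());
    }
}
```

For example, `loadMatching(List.of(new VehicleMake("Ford"), new VehicleMake("Audi")), "F")` keeps only the Ford entry, and an empty input list simply yields an empty result.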
However, if you have to stay with the original logic, you may achieve it using
return Optional.ofNullable(loader.getVehicleMakesByRegion(region.name()))
.map(regionMakes -> regionMakes.stream()
.filter(make -> Optional.ofNullable(make.getName())
.filter(name->name.startsWith(nameStartsWith))
.isPresent())
.collect(Collectors.toList()))
.orElse(null);
The initial code, which is intended to reject null references, should not get mixed with the actual operation which is intended to handle null references.
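Putting the two pieces together, here is a sketch of the whole method under that separation: reject null arguments up front, handle null data afterwards. The type names are assumed from the question, with minimal stand-ins so the snippet compiles on its own:

```java
import java.util.List;
import java.util.Objects;
import java.util.Optional;
import java.util.stream.Collectors;

public class LoadMatchingDemo {
    record VehicleMake(String name) { String getName() { return name; } }
    enum Region { EUROPE }
    interface VehicleLoader { List<VehicleMake> getVehicleMakesByRegion(String regionName); }

    static List<VehicleMake> loadMatching(Region region, String nameStartsWith, VehicleLoader loader) {
        // Argument rejection: fail fast with a precise message.
        Objects.requireNonNull(region, "region is null");
        Objects.requireNonNull(nameStartsWith, "nameStartsWith is null");
        Objects.requireNonNull(loader, "loader is null");
        // Data handling: tolerate a null collection from the loader.
        return Optional.ofNullable(loader.getVehicleMakesByRegion(region.name()))
                .map(makes -> makes.stream()
                        .filter(m -> m.getName() != null && m.getName().startsWith(nameStartsWith))
                        .collect(Collectors.toList()))
                .orElse(null); // keeps the original "null collection in, null out" contract
    }
}
```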

I have updated your code with Optional:
public static List<VehicleMake> loadMatchingJava8(Region region, String nameStartsWith, VehicleLoader loader) {
Optional<List<VehicleMake>> regionMakes = Optional.ofNullable(region)
.flatMap(r -> Optional.ofNullable(loader).map(l -> l.getVehicleMakesByRegion(r.name())));
return Optional.ofNullable(nameStartsWith)
.map(s -> regionMakes
.map(Collection::stream)
.orElse(Stream.empty())
.filter(make -> make.getName() != null && make.getName().startsWith(s))
.collect(Collectors.toList()))
.orElse(Collections.emptyList());
}

If you really want to convert the flow control to Optional, code consistent with yours would look like this (the signature is broken across lines for formatting):
public static Optional<List<VehicleMake>> loadMatchingJava8(Region region,
String nameStartsWith,
VehicleLoader loader) {
if ((nameStartsWith == null) || (region == null) || (loader == null)) {
throw new IllegalArgumentException("The VehicleLoader and both region and " +
"nameStartsWith are required when loading VehicleMake matches");
}
return Optional.ofNullable(loader.getVehicleMakesByRegion(region.name()))
.map(makers -> makers.stream()
.filter((it) -> it.getName() != null
&& it.getName().startsWith(nameStartsWith))
.collect(Collectors.toList()));
}
NOTE: you can see more about why not to abuse Optional in this question.

I can't say this is very elegant, but it should satisfy your requirement. There are no explicit null checks, but it'll throw the exception if any input parameters are null, and it filters out vehicles with invalid names from the resulting list.
public static List<VehicleMake> loadMatching(Region region, String nameStartsWith, VehicleLoader loader) {
return Optional.ofNullable(nameStartsWith)
.flatMap(startWith -> Optional.ofNullable(loader)
.flatMap(vl -> Optional.ofNullable(region)
.map(Region::name)
.map(vl::getVehicleMakesByRegion))
.map(makes -> makes.stream()
.filter(make -> Optional.ofNullable(make.getName())
.filter(name -> name.startsWith(startWith))
.isPresent())
.collect(Collectors.toList())))
.orElseThrow(() -> new IllegalArgumentException("The VehicleLoader and both region and nameStartsWith are required when loading VehicleMake matches"));
}

Related

Extract generic Function for two different classes in java

I have this switch statement that has the exact same function code repeated twice and I would like to DRY it up:
case "form" -> handleProvider.withHandle(handle -> handle.attach(FormDao.class).findFormById(id))
.thenApply(form -> { // Form.class
if (form == null) throw exceptionIfNotFound;
return form;
})
.thenApply(obj -> obj.exportedDocument);
case "note" -> handleProvider.withHandle(handle -> handle.attach(NoteDao.class).findNoteById(id))
.thenApply(note -> { // Note.class
if (note == null) throw exceptionIfNotFound;
return note;
})
If I let IntelliJ extract the common bits, I get
final Function<Form,Form> formFormFunction = form -> {
if (form == null) throw exceptionIfNotFound;
return form;
};
which obviously just works for one code path: the Form objects, but not the Note objects. The two objects do not actually implement the same interface here, but on the other hand, I do not make use of any specific interface in the code. I just want to say I have a method that takes a T and outputs that T unchanged, and that T could be anything.
Make this into a method rather than a variable. This way you can make it generic.
private static <T> Function<T, T> getNullCheckFunction() {
return t -> {
if (t == null) throw exceptionIfNotFound;
return t;
};
}
Then you can do:
case "form" -> handleProvider.withHandle(handle -> handle.attach(FormDao.class).findFormById(id))
.thenApply(getNullCheckFunction()) // here!
.thenApply(obj -> obj.exportedDocument);
case "note" -> handleProvider.withHandle(handle -> handle.attach(NoteDao.class).findNoteById(id))
.thenApply(getNullCheckFunction()) // here!
Note that what you are doing in the function returned by getNullCheckFunction is very similar to Objects.requireNonNull. If you are fine with throwing NullPointerException instead of your own exception, you can just do:
.thenApply(Objects::requireNonNull)
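Note that, as shown, getNullCheckFunction() presumably refers to an exceptionIfNotFound variable that is local to the call site, so it would not compile as a standalone static method. A self-contained variant (hypothetical names) could take the exception as a supplier instead:

```java
import java.util.function.Function;
import java.util.function.Supplier;

public class NullCheckFn {
    // Generic identity-with-null-check: the exception to throw is passed in,
    // since the original exceptionIfNotFound lives at the call site.
    static <T, X extends RuntimeException> Function<T, T> nullCheck(Supplier<X> exceptionSupplier) {
        return t -> {
            if (t == null) throw exceptionSupplier.get();
            return t;
        };
    }
}
```

Usage would then be `.thenApply(nullCheck(() -> exceptionIfNotFound))` on both code paths.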

Other ways to check for not null in Java

I have a lot of this kind of code in my project:
if (entityRepository.saveEntity(new RemoteEntityBuilder()
.appId(appId)
.nameSpace(nameSpace)
.entityType(entityType)
.entityId(entityId)
.blobs(Lists.list(new RemoteBlobBuilder()
.blobName(blobName)
.blobStream(new SimpleRemoteInputStream(inputStream))
.build()))
.build()) != null) {
// Meaning entity was saved
} else {
// Meaning entity was not saved
}
The saveEntity method returns either null (if the operation failed) or the object/entity that was saved if the operation was successful. My question is: is there a better way to represent this code than the != null check, for instance:
if(entityRepository.saveEntity(...)) {
}
Or something else.
UPDATE:
The saveEntity method is this
@Override
public RemoteEntity saveEntity(RemoteEntity entity)
throws NotBoundException, RemoteException {
RemoteEntities remoteEntities = saveEntities(new RemoteEntity[] {entity});
return remoteEntities != null ? remoteEntities.entities().stream().findFirst().get() : null;
}
Here's how it looks now thanks to YCF_L:
entityRepository.saveEntity(new RemoteEntityBuilder()
.appId(appId)
.nameSpace(nameSpace)
.entityType(entityType)
.entityId(entityId)
.blobs(Lists.list(new RemoteBlobBuilder()
.blobName(blobName)
.blobStream(new SimpleRemoteInputStream(inputStream))
.build()))
.build()).ifPresentOrElse(remoteEntity -> {
pubSubService.updated(remoteEntity.appId(), remoteEntity.nameSpace(),
remoteEntity.entityType(), remoteEntity.entityId());
setStatus(Status.SUCCESS_CREATED);
}, () -> {
setStatus(Status.CLIENT_ERROR_BAD_REQUEST);
});
I would use Optional in your case:
public Optional<RemoteEntity> saveEntity(RemoteEntity entity) throws NotBoundException, RemoteException {
RemoteEntities remoteEntities = saveEntities(new RemoteEntity[]{entity});
return Optional.ofNullable(remoteEntities)
        .flatMap(re -> re.entities().stream().findFirst());
}
and then :
if(entityRepository.saveEntity(...).isPresent()) {
...
}
In fact you have many choices with Optional; you can also use ifPresent:
entityRepository.saveEntity(...)
.ifPresent(r -> ..)
Or throw an exception:
entityRepository.saveEntity(...)
.orElseThrow(() -> ..)
What is "better" may be a matter of opinion.
Given your example, the way to achieve that would be to create another method that calls saveEntity() and returns true or false. (I do wonder why saveEntity() doesn't throw an exception if its operation fails; that would be more normal in my experience.)
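A sketch of that wrapper idea, with minimal, hypothetical stand-ins for the question's types:

```java
public class SaveWrapperDemo {
    record RemoteEntity(String id) {}
    interface EntityRepository { RemoteEntity saveEntity(RemoteEntity e); }

    // Wraps the null-on-failure contract so call sites read as a plain condition:
    //   if (trySave(repo, entity)) { ... } else { ... }
    static boolean trySave(EntityRepository repo, RemoteEntity entity) {
        return repo.saveEntity(entity) != null;
    }
}
```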
If you simply don't like that the comparison is hard to spot, you might reverse the order:
if (null != entityRepository.saveEntity(...))
I would probably move the call outside of the if entirely, as I find side effects in conditionals potentially confusing.
RemoteEntity myEntity = entityRepository.saveEntity(...);
if (myEntity != null) ...

Setting a value if not null within a java stream

How can I handle null checks in the below code using Java 8, when counterParty can be null?
I want to set counterParty only if it has a value, and not set it if it is empty.
public static Iterable<? extends Trade> buildTrade(final List<Trade> trades) {
return () -> trades.stream()
.map(trade -> Trade.newBuilder()
.setType(trade.type())
.setUnit(trade.unit())
.setCounterParty(trade.counterParty())
.build())
.iterator();
}
You can use the following code:
trades.stream()
.map(trade -> {
TradeBuilder tb = Trade.newBuilder()
.setType(trade.type())
.setUnit(trade.unit());
Optional.ofNullable(trade.counterParty())
.ifPresent(tb::setCounterParty);
return tb.build();
})
.iterator();
Or without Optional:
trades.stream()
.map(trade -> {
TradeBuilder tb = Trade.newBuilder()
.setType(trade.type())
.setUnit(trade.unit());
if(trade.counterParty() != null) tb.setCounterParty(trade.counterParty());
return tb.build();
})
.iterator();
The stream aspect of this has no relevance to the question; let's strip it out:
trade -> Trade.newBuilder()
.setType(trade.type())
.setUnit(trade.unit())
.setCounterParty(trade.counterParty())
.build()
You're asking to not set counterParty if it is null.
A really easy way to do this would be to modify the builder class's setCounterParty() to do nothing but return this when the parameter is null:
TradeBuilder setCounterParty(CounterParty cp) {
if(cp != null) {
this.counterParty = cp;
}
return this;
}
You do need to ensure that this behaviour is consistent with other callers' needs.
If your builder is being dynamically generated by some framework (Lombok etc), you might not have code in which you can easily make this change -- but most such frameworks have mechanisms that allow you to take control of that kind of thing.
If you can't modify the builder, you can break up the calls to it and surround one call with an if:
trade -> {
TradeBuilder b = Trade.newBuilder()
.setType(trade.type())
.setUnit(trade.unit());
if(trade.counterParty() != null) {
b.setCounterParty(trade.counterParty());
}
return b.build();
}

Kafka Stream Chained LeftJoin - Processing previous old message again after the new one

I have a stream that is a composite of other streams
final KTable<Long, CompositeInfo> compositeInfoTable = compositeImcTable
.leftJoin(
compositeFundTable,
(CompositeImc cimc, CompositeFund cf) -> {
CompositeInfo newCandidate = new CompositeInfo();
if (cimc != null) {
newCandidate.imcName = cimc.imcName;
newCandidate.imcID = cimc.imcID;
if (cf != null) {
newCandidate.investments = cf.investments;
}
}
return newCandidate;
})
.leftJoin(
compositeGeographyTable,
(CompositeInfo cinfo, CompositeGeography cg) -> {
if (cg != null) {
cinfo.regions = cg.regions;
}
return cinfo;
})
.leftJoin(
compositeSectorTable,
(CompositeInfo cinfo, CompositeSector cs) -> {
if (cs != null) {
cinfo.sectors = cs.sectors;
}
return cinfo;
})
.leftJoin(
compositeClusterTable,
(CompositeInfo cinfo, CustomCluster cc) -> {
if (cc != null && cc.clusters != null) {
cinfo.clusters = cc.clusters;
}
return cinfo;
})
.leftJoin(
compositeAlphaClusterTable,
(CompositeInfo cinfo, CompositeAlphaCluster cac) -> {
if (cac != null) {
cinfo.alphaClusters = cac.alphaClusters;
}
return cinfo;
},
Materialized.<Long, CompositeInfo, KeyValueStore<Bytes, byte[]>>as(this.storeName)
.withKeySerde(Serdes.Long())
.withValueSerde(compositeInfoSerde));
My issue relates to the left join between CompositeInfo and CustomCluster. CustomCluster looks like the following
KTable<Long, CustomCluster> compositeClusterTable = builder
.stream(
SUB_TOPIC_COMPOSITE_CLUSTER,
Consumed.with(Serdes.Long(), compositeClusterSerde))
.filter((k, v) -> v.clusters != null)
.groupByKey(Serialized.with(Serdes.Long(), compositeClusterSerde))
.reduce((aggValue, newValue) -> newValue);
A message in a custom cluster looks like
CustomCluster [clusterId=null, clusterName=null, compositeId=280, operation=null, clusters=[Cluster [clusterId=6041, clusterName=MyName]]]
So I assign the HashSet clusters in this object to the clusters in the CompositeInfo object, joined on the compositeId.
What I am witnessing is that a CustomCluster message comes in for a given compositeId and is processed correctly, but then the old message containing the previous cluster (I am still investigating this) is processed again.
Upon digging through it, I found the problem happens in Kafka's internal KTableKTableRightJoin:
public void process(final K key, final Change<V1> change) {
// we do join iff keys are equal, thus, if key is null we cannot join and just ignore the record
if (key == null) {
return;
}
final R newValue;
R oldValue = null;
final V2 value2 = valueGetter.get(key);
if (value2 == null) {
return;
}
newValue = joiner.apply(change.newValue, value2);
if (sendOldValues) {
oldValue = joiner.apply(change.oldValue, value2);
}
context().forward(key, new Change<>(newValue, oldValue));
}
When the joiner returns the first time, newValue is updated correctly. But the code then enters the sendOldValues block, and as soon as the joiner returns again, newValue is updated a second time, this time with the old cluster value.
So here are my questions:
1. Why is newValue getting updated when the joiner is called the second time with oldValue?
2. Is there a way to turn sendOldValues off?
3. Could my chained left joins have anything to do with it? I know previous versions of Kafka had a bug with chaining, but I am now on 1.0.
UPDATE:
Another thing I found: if I move the join up the chain of joins and remove the others, sendOldValues remains false. So if I have something like the following:
final KTable<Long, CompositeInfo> compositeInfoTable = compositeImcTable
.leftJoin(
compositeFundTable,
(CompositeImc cimc, CompositeFund cf) -> {
CompositeInfo newCandidate = new CompositeInfo();
if (cimc != null) {
newCandidate.imcName = cimc.imcName;
newCandidate.imcID = cimc.imcID;
if (cf != null) {
newCandidate.investments = cf.investments;
}
}
return newCandidate;
})
.leftJoin(
compositeClusterTable,
(CompositeInfo cinfo, CustomCluster cc) -> {
if (cc != null && cc.clusters != null) {
cinfo.clusters = cc.clusters;
}
return cinfo;
},
Materialized.<Long, CompositeInfo, KeyValueStore<Bytes, byte[]>>as(this.storeName)
.withKeySerde(Serdes.Long())
.withValueSerde(compositeInfoSerde));
This gives me the correct result, but I think that if I put any more chained joins after this, they might display the same erroneous behavior.
I am not certain of anything at this point, but I think my problem lies in the chained left joins and the behavior of calculating oldValue. Has anyone else run into this issue?
UPDATE
After much digging I realized that sendOldValues is internal to Kafka and not the cause of the issue I am experiencing. My issue is that newValue changes when the ValueJoiner for oldValue returns, and I don't know if it's due to reference semantics when assigning Java objects.
This is what an incoming object looks like
CustomCluster [clusterId=null, clusterName=null, compositeId=280, operation=null, clusters=[Cluster [clusterId=6041, clusterName=Sunil 2]]]
clusters is a HashSet<Cluster> clusters = new HashSet<Cluster>();
It is then joined to an object
CompositeInfo [compositeName=BUCKET_NM-280, compositeID=280, imcID=19651, regions=null, sectors=null, clusters=[]]
the clusters field here is of the same type, but in the CompositeInfo class.
When I join, I assign the clusters of the CustomCluster object to the CompositeInfo object:
(CompositeInfo cinfo, CustomCluster cc) -> {
if (cc != null && cc.clusters != null) {
cinfo.clusters = cc.clusters;
}
return cinfo;
}
After stumbling on the same issue myself, I would like to provide a detailed answer as well as a simplified example that helps illustrate the problem.
#Bean
public Function<KTable<String, String>,
Function<KTable<String, String>, Consumer<KTable<String, String>>>> processEvents() {
return firstnames ->
lastnames ->
titles -> firstnames
.mapValues(firstname -> new Salutation().withFirstname(firstname))
.join(lastnames, (salutation, lastname) -> salutation.withLastname(lastname))
.leftJoin(titles, (salutation, title) -> salutation.withTitle(title))
.toStream()
.foreach((key, salutation) -> log.info("{}: {}", key, salutation));
}
The example (which uses Spring Cloud Stream with the Kafka Streams binder) shows a common pattern where topic contents are merged into an accumulator object. In our case, a salutation (e.g. "Dear Ms. Smith") is accumulated/aggregated into a Salutation object by joining topics representing the firstname, lastname and an (optional) title.
It is important to note that in this example, the Salutation instance is a mutable object that is constructed step by step. When running such a piece of code, you will see that when changing a person's last name, the merge will always be "running behind". This means that if you publish a lastname event because Ms. Smith has just got married and is now called "Johnson", then Kafka Streams will again emit a Salutation representing "Ms. Smith", despite the fact that she changed her last name. It is only when you publish yet another event for the same person on the lastnames topic (e.g. "Miller") that "Dear Ms. Johnson" will be logged.
The reason for this behavior is found in a piece of code located in KTableKTableInnerJoin.java:
if (change.newValue != null) {
newValue = joiner.apply(change.newValue, valueRight);
}
if (sendOldValues && change.oldValue != null) {
oldValue = joiner.apply(change.oldValue, valueRight);
}
context().forward(key, new Change<>(newValue, oldValue), To.all().withTimestamp(resultTimestamp));
joiner is a ValueJoiner, which in our case can e.g. be (salutation, lastname) -> salutation.withLastname(lastname) as shown above. The problem with this piece of code is that if you use an accumulation pattern with a mutable accumulator object (in our case an instance of Salutation), which is (by design) reused for all the joins, then oldValue and newValue will be the same object. Moreover, since oldValue is computed afterwards, it will contain the old last name, which explains why the join output is running behind.
Therefore, it is critical that the object returned by the ValueJoiner is each time a fresh object which does not contain references to other mutable objects, which might be shared (and therefore mutated). The safest approach is therefore to have the ValueJoiner return an immutable object.
I would not consider this a bug of the library, since it has to compare the old and new state somehow, and taking a snapshot of a mutable object would require a deep copy. However, it would probably be worthwhile to have it mentioned in the documentation. Also, issuing a warning when oldValue == newValue would at least make people aware of the problem. I will check whether such improvements could be incorporated.
It was indeed a pass-by-reference issue. When joining, I need to initialize and return a new object rather than assigning values to the old object.
Answer based upon fizi's comments on the question.
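The aliasing can be demonstrated without Kafka at all. Below is a minimal plain-Java sketch (class names borrowed from the question, fields reduced): a joiner that mutates and returns its input hands back the same instance for both the new-value and old-value calls, while a joiner that builds a fresh object cannot be affected by the later call.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.BiFunction;

public class JoinerCopyDemo {
    static class CompositeInfo {
        Set<String> clusters = new HashSet<>();
    }
    static class CustomCluster {
        Set<String> clusters;
    }

    // Mutating joiner (as in the question): returns its input, so the result
    // of the newValue call and the result of the oldValue call alias each other.
    static final BiFunction<CompositeInfo, CustomCluster, CompositeInfo> MUTATING =
            (cinfo, cc) -> {
                if (cc != null && cc.clusters != null) cinfo.clusters = cc.clusters;
                return cinfo;
            };

    // Copying joiner (the fix): every call builds a fresh CompositeInfo,
    // so the later oldValue call cannot overwrite the newValue result.
    static final BiFunction<CompositeInfo, CustomCluster, CompositeInfo> COPYING =
            (cinfo, cc) -> {
                CompositeInfo fresh = new CompositeInfo();
                fresh.clusters = (cc != null && cc.clusters != null)
                        ? new HashSet<>(cc.clusters)
                        : new HashSet<>(cinfo.clusters);
                return fresh;
            };
}
```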

try..catch VS long if() [duplicate]

This question already has answers here:
Null check chain vs catching NullPointerException
(19 answers)
Closed 6 years ago.
I have a complex model structure in my project.
Sometimes I have to get a deep placed value from it. It looks like following:
something.getSomethongElse().getSecondSomething().getThirdSomething().getFourthSomething();
The problem is that each of those methods could return null, and I will get a NullPointerException in case one does.
What I want to know is should I write long if like
if (something != null && something.getSomethongElse() != null && something.getSomethongElse().getSecondSomething() != null && something.getSomethongElse().getSecondSomething().getThirdSomething() != null && something.getSomethongElse().getSecondSomething().getThirdSomething().getFourthSomething() != null) {
//process getFourthSomething result.
}
Or it is OK just to use try..catch like following:
SomethingFourth fourth = null;
try {
fourth = something.getSomethongElse().getSecondSomething().getThirdSomething().getFourthSomething();
} catch (NullPointerException e) { }
if(fourth != null) {
///work with fourth
}
I know that an NPE is a thing to be avoided, but isn't it overkill to avoid it this way in my case?
If you can refactor the code, make each method return an Optional. It will then be possible to avoid both the null checks and the try...catch.
Optional<Result> result = something.getSomethingElse()
.flatMap(e -> e.getSecondSomething())
.flatMap(x -> x.getThirdSomething())
.flatMap(e -> e.getFourthSomething());
// at the end to check if result is present
result.ifPresent(..some_logic_here..); // or result.orElse(...);
so getSomethingElse() returns Optional<SomethingElse>, getThirdSomething() returns Optional<ThirdSomething>, and so on. We have to use flatMap(Function<? super T,Optional<U>> mapper) here because if the mapper's result is already an Optional, flatMap does not wrap it in an additional Optional. In other words, with map, as in map(e -> e.getSecondSomething()), the result type would be Optional<Optional<SecondSomething>> and we would have to make an unnecessary get() call: map(...).get().map(...).
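A tiny demonstration of the difference, with the Optional-returning getter reduced to a hypothetical String-to-length lookup:

```java
import java.util.Optional;

public class FlatMapDemo {
    // map wraps an already-Optional result in another Optional...
    static Optional<Optional<Integer>> viaMap(Optional<String> base) {
        return base.map(s -> Optional.of(s.length()));
    }

    // ...while flatMap keeps the chain flat, which is what a sequence of
    // Optional-returning getters needs.
    static Optional<Integer> viaFlatMap(Optional<String> base) {
        return base.flatMap(s -> Optional.of(s.length()));
    }
}
```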
I hope this helps.
UPDATED
You can do the same thing using method references.
Optional<Result> result = something.getSomethongElse()
.flatMap(SomethongElse::getSecondSomething)
.flatMap(SecondSomething::getThirdSomething)
.flatMap(ThirdSomething::getFourthSomething);
