How can I port a Java inner function from here, shown in full below, to Scala?
JavaPairRDD<Envelope, HashSet<Point>> castedResult = joinListResultAfterAggregation.mapValues(new Function<HashSet<Geometry>,HashSet<Point>>()
{
@Override
public HashSet<Point> call(HashSet<Geometry> spatialObjects) throws Exception {
HashSet<Point> castedSpatialObjects = new HashSet<Point>();
Iterator spatialObjectIterator = spatialObjects.iterator();
while(spatialObjectIterator.hasNext())
{
castedSpatialObjects.add((Point)spatialObjectIterator.next());
}
return castedSpatialObjects;
}
});
return castedResult;
My approach, as outlined below, would not compile due to a NotInferredU type-inference error:
val castedResult = joinListResultAfterAggregation.mapValues(new Function[java.util.HashSet[Geometry], java.util.HashSet[Point]]() {
def call(spatialObjects: java.util.HashSet[Geometry]): java.util.HashSet[Point] = {
val castedSpatialObjects = new java.util.HashSet[Point]
val spatialObjectIterator = spatialObjects.iterator
while (spatialObjectIterator.hasNext) castedSpatialObjects.add(spatialObjectIterator.next.asInstanceOf[Point])
castedSpatialObjects
}
})
When asking a question about compilation errors, please provide the exact error, especially when your code doesn't stand on its own.
The inner function itself is fine; my guess would be that, due to changes above, joinListResultAfterAggregation isn't a JavaPairRDD anymore, but a plain RDD[(Envelope, Something)] (where Something could be java.util.HashSet, scala.collection.Set or some subtype), so its mapValues takes a Scala function, not an org.apache.spark.api.java.function.Function. Scala functions are written as lambdas: spatialObjects: Something => ... (the body will depend on what Something actually is, and the argument type can be omitted in some circumstances).
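As a minimal sketch of that lambda style (assuming, purely for illustration, that joinListResultAfterAggregation is an RDD[(Envelope, java.util.HashSet[Geometry])]):
val castedResult = joinListResultAfterAggregation.mapValues { spatialObjects =>
  // spatialObjects is assumed to be a java.util.HashSet[Geometry]; cast each element to Point
  val castedSpatialObjects = new java.util.HashSet[Point]()
  val it = spatialObjects.iterator()
  while (it.hasNext) {
    castedSpatialObjects.add(it.next().asInstanceOf[Point])
  }
  castedSpatialObjects
}
If Something turns out to be a Scala collection instead, the same lambda can simply use map with asInstanceOf, as in the snippet below.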
How about this ?
val castedResult = joinListResultAfterAggregation.mapValues(spatialObjects => {
  spatialObjects.map(obj => obj.asInstanceOf[Point])
})
I can't fully understand how to convert the code
futuresList.stream().map(CompletableFuture::join).collect(Collectors.toList())
from Java to Kotlin.
I have a list of CompletableFuture objects and I want to combine them with CompletableFuture.allOf.
P.S. val futuresList : MutableList<CompletableFuture<String>>
Your original code is almost a working one, you just need to specify the type parameter:
futuresList.stream().map(CompletableFuture<String>::join).collect(Collectors.toList())
Honestly, I'm not sure why this is required and why Kotlin does not use type inference in such a case. Alternatively, we can do it like this:
futuresList.stream().map { it.join() }.collect(Collectors.toList())
I believe this approach is more common in Kotlin.
Also, I'm not sure why you use stream here. It seems the same as mapping the list directly:
futuresList.map { it.join() }
Here's the Kotlin code, which shows how to achieve what you want:
fun main() {
val newList = listOf(1, 2).map(Adder.Companion::addOne)
newList.forEach { println(it) }
// Will print 2, 3
}
class Adder {
companion object {
fun addOne(x: Int): Int {
return x + 1
}
}
}
Here's a working example.
I have a Try<Option<Foo>>. I want to flatMap Foo into a Bar, using an operation that can fail. It's not a failure if my Option<Foo> is an Option.none() (and the Try was a success); in this case there's nothing to do.
So I have code like this, which does work:
Try<Option<Bar>> myFlatMappingFunc(Option<Foo> fooOpt) {
return fooOpt.map(foo -> mappingFunc(foo).map(Option::of) /* ew */)
.getOrElse(Try.success(Option.none())); // double ew
}
Try<Bar> mappingFunc(Foo foo) throws IOException {
// do some mapping schtuff
// Note that I can never return null, and a failure here is a legitimate problem.
// FWIW it's Jackson's readValue(String, Class<?>)
}
I then call it like:
fooOptionTry.flatMap(this::myFlatMappingFunc);
This does work, but it looks really ugly.
Is there a better way to flip the Try and Option around?
Note 1: I actively do not want to call Option.get() and catch that within the Try as it's not semantically correct. I suppose I could recover the NoSuchElementException but that seems even worse, code-wise.
Note 2 (to explain the title): Naively, the obvious thing to do is:
Option<Try<Bar>> myFlatMappingFunc(Option<Foo> fooOpt) {
return fooOpt.map(foo -> mappingFunc(foo));
}
except this has the wrong signature and doesn't let me map with the previous operation that could have failed and also returned a successful lack of value.
When you are working with monads, each monad type combines only with monads of the same type. This is usually a problem because the code can become very unreadable.
In the Scala world there are some solutions, like the OptionT or EitherT transformers, but building this kind of abstraction in Java can be difficult.
The simple solution is to use only one monad type.
For this case, I can think of two alternatives:
transform fooOpt to Try<Foo> using .toTry()
transform both to Either using .toEither()
Functional programmers are usually more comfortable with Either because exceptions can behave in unexpected ways, whereas Either usually does not; both work when you just want to know why and where something failed.
Your example using Either will look like this:
Either<String, Bar> myFlatMappingFunc(Option<Foo> fooOpt) {
Either<String, Foo> fooE = fooOpt.toEither("Foo not found.");
return fooE.flatMap(foo -> mappingFunc(foo));
}
// Look mom! No "throws IOException" or any unexpected thing!
Either<String, Bar> mappingFunc(Foo foo) {
return Try.of(() -> /*do something dangerous with Foo and return Bar*/)
.toEither().mapLeft(Throwable::getLocalizedMessage);
}
I believe this is simply a sequence function (https://static.javadoc.io/io.vavr/vavr/0.9.2/io/vavr/control/Try.html#sequence-java.lang.Iterable-) that you are looking for:
Try.sequence(optionalTry)
You can combine the Try.sequence and headOption functions to create a new transform function that looks a little better, in my opinion; you can also use generic types to get a more reusable function :)
private static <T> Try<Option<T>> transform(Option<Try<T>> optT) {
return Try.sequence(optT.toArray()).map(Traversable::headOption);
}
If I understand correctly, you want to:
keep the first failure if it happens
swap the second one, when mapping to JSON, for an empty option.
Isn't it simpler if you decompose your function this way:
public void keepOriginalFailureAndSwapSecondOneToEmpty() {
Try<Option<Foo>> tryOptFoo = null;
Try<Option<Bar>> tryOptBar = tryOptFoo
.flatMap(optFoo ->
tryOptionBar(optFoo)
);
}
private Try<Option<Bar>> tryOptionBar(Option<Foo> optFoo) {
return Try.of(() -> optFoo
.map(foo -> toBar(foo)))
.orElse(success(none())
);
}
Bar toBar(Foo foo) throws RuntimeException {
return null;
}
static class Bar {
}
static class Foo {
}
The solutions of throughnothing and durron597 helped me there. This is my Groovy test case:
def "checkSomeTry"() {
given:
def ex = new RuntimeException("failure")
Option<Try<String>> test1 = Option.none()
Option<Try<String>> test2 = Option.some(Try.success("success"))
Option<Try<String>> test3 = Option.some(Try.failure(ex))
when:
def actual1 = Try.sequence(test1).map({ t -> t.toOption() })
def actual2 = Try.sequence(test2).map({ t -> t.toOption() })
def actual3 = Try.sequence(test3).map({ t -> t.toOption() })
then:
actual1 == Try.success(Option.none())
actual2 == Try.success(Option.some("success"))
actual3 == Try.failure(ex)
}
I'm new to Scala and I'm trying to get used to the language. I was looking for an equivalent of the following Java synchronization technique.
private final Map<String, Future<Boolean>> requestMap = new HashMap<>();
public void updateMap(String key) {
synchronized(requestMap) {
// update contents of requestMap
}
}
I think the syntax below is the Scala equivalent of the Java above.
private val requestMap = new mutable.LinkedHashMap[String, Future[Boolean]]
def updateMap(key: String): Unit = {
requestMap.synchronized {
// update contents of requestMap
}
}
What I'm trying to achieve here is ensuring that only one thread can manipulate the requestMap object at any given time in the updateMap method. I want to know whether the two examples above are equivalent, and where I can find this Scala usage of synchronized documented.
You are right, these are equivalent:
//Java
synchronized(foo) { statements ... }
//Scala
foo.synchronized { statements ... }
In Scala, synchronized is a library construct (although a synthetic one); that is, it is a method on AnyRef.
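For a concrete illustration, here is a minimal sketch applying this to the question's requestMap (the RequestTracker wrapper class and the Future.successful placeholder are only for the example):
import scala.collection.mutable
import scala.concurrent.Future

class RequestTracker {
  private val requestMap = new mutable.LinkedHashMap[String, Future[Boolean]]

  def updateMap(key: String): Unit = {
    // synchronized is a method inherited from AnyRef, so it can be called on the map itself;
    // only one thread at a time can execute this block while holding requestMap's monitor
    requestMap.synchronized {
      requestMap.put(key, Future.successful(true)) // placeholder update
    }
  }
}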
Referring to the same discussion, can we have something like this, and will it make the code inside AnyRef.synchronized thread-safe?
object abc {
  ...
  AnyRef.synchronized {
    val ff = new FunClass()
    ff.displayTs()
  }
}

class FunClass() {
  def displayTs(): Unit = {
    println(timestamp)
  }
}
So, I have this code (using the JavaPoet library):
if (myBeautifulBoolean) <--------------------------
theClass = TypeSpec.classBuilder(classe.getName())
.addModifiers(javax.lang.model.element.Modifier.valueOf(classe.getProte().toString().toUpperCase()), Modifier.FINAL) <-------------------
.superclass(father==null?ClassName.OBJECT:father)
.addMethods(methods)
.addFields(fields)
.build();
else
theClass = TypeSpec.classBuilder(classe.getName())
.addModifiers(javax.lang.model.element.Modifier.valueOf(classe.getProte().toString().toUpperCase())) <------------------
.superclass(father==null?ClassName.OBJECT:father)
.addMethods(methods)
.addFields(fields)
.build();
and I want it to become something like:
theClass = TypeSpec.classBuilder(classe.getName())
.addModifiers(javax.lang.model.element.Modifier.valueOf(classe.getProte().toString().toUpperCase()), myBeautifulBoolean?Modifier.FINAL:null) <----------
.superclass(father==null?ClassName.OBJECT:father)
.addMethods(methods)
.addFields(fields)
.build();
Where is the problem?
If I write myBeautifulBoolean?Modifier.FINAL:null, I get an exception because the parameters of addModifiers() cannot be null, and there is nothing like Modifier.NOTFINAL.
So, is there a way to tell the code "Hey, if the boolean is true, add an argument, if not, don't"?
addModifiers takes an array, so you could do addModifiers(test ? new Modifier[] { mod, Modifier.FINAL } : new Modifier[] { mod }). You could make this prettier with a helper method:
public static <T> T[] arr(T... array) { return array; }
// later
.addModifiers(test ? arr(mod, FINAL) : arr(mod))
I have a Java code calling Scala method.
Java side code:
List<String> contexts = Arrays.asList(initialContext);
ContextMessage c = ContextMessage.load(contexts);
Scala side code:
def load(contexts: List[String]) = ...
  contexts foreach { context => ... }
In this case, I get a "scala.collection.immutable.List<String> cannot be applied ..." error message.
I also need to make the type of contexts as general as possible (i.e., Seq) as the load method iterates over the given collection object to process something.
def load(contexts: Seq[String]) = ...
How can I solve these two issues?
I would just use JavaConversions and keep my Scala code idiomatic.
// Scala code
object ContextMessage {
def load(contexts: Seq[String]) = ???
}
// in your Java code
ContextMessage c = ContextMessage.load(JavaConversions.asScalaBuffer(Arrays.asList(initialContext)));
In the case of Scala calling this method, an implicit conversion between java.util.ArrayList and Seq can solve this issue easily.
import scala.collection.mutable.ListBuffer
object ContextMessage extends App {
implicit def typeConversion(input: java.util.ArrayList[String]) = {
val res : ListBuffer[String] = new ListBuffer[String]()
for (i <- 0 to input.size - 1) {
// println(input.get(i))
res += input.get(i)
}
res
}
def load(contexts: Seq[String]) = {
contexts foreach { c =>
println(c)
}
}
val x = new java.util.ArrayList[String]()
x.add("A")
x.add("B")
load(x)
}
ContextMessage.main(args)
The result shows:
A
B