Casting to JsonObject in Kotlin

I have the following Java code:
var jwks = ((List<Object>) keys.getList()).stream()
.map(o -> new JsonObject((Map<String, Object>) o))
.collect(Collectors.toList());
and would like to translate it safely into Kotlin.
When I copy the code into IntelliJ, it translates it for me as follows:
val jwks = (keys.list as List<Any?>).stream()
.map { o: Any? -> JsonObject(o as Map<String?, Any?>?) }
.collect(Collectors.toList())
Can I do it better, or should I leave it as is?
Update
Maybe I have to provide more context. What I am trying to do is implement JWT authorization for Vert.x with Keycloak in Kotlin, following the tutorial https://vertx.io/blog/jwt-authorization-for-vert-x-with-keycloak/.
I am trying to rewrite the method
private Future<Startup> setupJwtAuth(Startup startup) {
var jwtConfig = startup.config.getJsonObject("jwt");
var issuer = jwtConfig.getString("issuer");
var issuerUri = URI.create(issuer);
// derive JWKS uri from Keycloak issuer URI
var jwksUri = URI.create(jwtConfig.getString("jwksUri", String.format("%s://%s:%d%s",
issuerUri.getScheme(), issuerUri.getHost(), issuerUri.getPort(), issuerUri.getPath() + "/protocol/openid-connect/certs")));
var promise = Promise.<JWTAuth>promise();
// fetch JWKS from `/certs` endpoint
webClient.get(jwksUri.getPort(), jwksUri.getHost(), jwksUri.getPath())
.as(BodyCodec.jsonObject())
.send(ar -> {
if (!ar.succeeded()) {
startup.bootstrap.fail(String.format("Could not fetch JWKS from URI: %s", jwksUri));
return;
}
var response = ar.result();
var jwksResponse = response.body();
var keys = jwksResponse.getJsonArray("keys");
// Configure JWT validation options
var jwtOptions = new JWTOptions();
jwtOptions.setIssuer(issuer);
// extract JWKS from keys array
var jwks = ((List<Object>) keys.getList()).stream()
.map(o -> new JsonObject((Map<String, Object>) o))
.collect(Collectors.toList());
// configure JWTAuth
var jwtAuthOptions = new JWTAuthOptions();
jwtAuthOptions.setJwks(jwks);
jwtAuthOptions.setJWTOptions(jwtOptions);
jwtAuthOptions.setPermissionsClaimKey(jwtConfig.getString("permissionClaimsKey", "realm_access/roles"));
JWTAuth jwtAuth = JWTAuth.create(vertx, jwtAuthOptions);
promise.complete(jwtAuth);
});
return promise.future().compose(auth -> {
jwtAuth = auth;
return Future.succeededFuture(startup);
});
}
into Kotlin.

You can use Kotlin's star projection, which handles unknown generics in a type-safe way:
(keys.list as List<Map<*, *>>).map { JsonObject(it) }
Since there is only a single mapping step, there is no need for the Stream/Sequence API.
However, if you want lazy evaluation (each element goes through the whole chain of operations before the next element is processed, rather than each operation being applied to all elements before the next one starts):
(keys.list as List<Map<*, *>>)
.asSequence()
.map { JsonObject(it) }
.map { /* Maybe some other mapping */ }
.filter { /* Maybe some filter */ }
.take(5) // Or some other intermediate operation
.toList() // Finally start the operations and collect
Edit: I forgot that the keys are String, so you can cast to List<Map<String, *>> instead :)
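Putting that corrected cast back into the tutorial method, a full Kotlin version of setupJwtAuth might look roughly like the sketch below. It is a direct, untested translation of the Java above: it assumes the same webClient, vertx, jwtAuth and Startup members from the tutorial, keeps explicit setter calls (the fluent Vert.x setters don't convert to Kotlin property assignment), and needs backticks around `as` because as is a Kotlin keyword; imports are omitted as in the rest of the post.
private fun setupJwtAuth(startup: Startup): Future<Startup> {
    val jwtConfig = startup.config.getJsonObject("jwt")
    val issuer = jwtConfig.getString("issuer")
    val issuerUri = URI.create(issuer)
    // derive JWKS uri from Keycloak issuer URI
    val jwksUri = URI.create(jwtConfig.getString("jwksUri",
        "${issuerUri.scheme}://${issuerUri.host}:${issuerUri.port}${issuerUri.path}/protocol/openid-connect/certs"))
    val promise = Promise.promise<JWTAuth>()
    // fetch JWKS from `/certs` endpoint
    webClient.get(jwksUri.port, jwksUri.host, jwksUri.path)
        .`as`(BodyCodec.jsonObject())
        .send { ar ->
            if (!ar.succeeded()) {
                startup.bootstrap.fail("Could not fetch JWKS from URI: $jwksUri")
                return@send
            }
            val jwksResponse = ar.result().body()
            val keys = jwksResponse.getJsonArray("keys")
            // Configure JWT validation options
            val jwtOptions = JWTOptions()
            jwtOptions.setIssuer(issuer)
            // extract JWKS from keys array (unchecked cast, as in the Java original)
            @Suppress("UNCHECKED_CAST")
            val jwks = (keys.list as List<Map<String, Any>>).map { JsonObject(it) }
            // configure JWTAuth
            val jwtAuthOptions = JWTAuthOptions()
            jwtAuthOptions.setJwks(jwks)
            jwtAuthOptions.setJWTOptions(jwtOptions)
            jwtAuthOptions.setPermissionsClaimKey(jwtConfig.getString("permissionClaimsKey", "realm_access/roles"))
            promise.complete(JWTAuth.create(vertx, jwtAuthOptions))
        }
    return promise.future().compose { auth ->
        jwtAuth = auth
        Future.succeededFuture(startup)
    }
}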

Related

How to get a Cosmos response as a JSON array in Java using the azure-cosmos SDK

I can run a query in the Azure Cosmos DB explorer and see the response as a JSON array. I want to do the same using Java with the azure-cosmos SDK.
Below is my function
public JSONArray getCosmosResponseFromSyncClient(String databaseName, String containerName, String sqlQuery) {
try {
cosmosClient = new CosmosClientBuilder().endpoint(cosmosURI).key(cosmosPrimaryKey).buildClient();
CosmosDatabase database = cosmosClient.getDatabase(databaseName);
CosmosContainer container = database.getContainer(containerName);
int preferredPageSize = 10;
CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions();
queryOptions.setQueryMetricsEnabled(true);
CosmosPagedIterable<JSONArray> responsePagedIterable = container.queryItems(sqlQuery, queryOptions, JSONArray.class);
return cosmosQueryResponseObjectAsAJSONArray;
} finally {
cosmosClient.close();
}
}
Assuming that org.json.JSONArray is used for JSONArray, you can use the async API in the Cosmos DB v4 SDK. The cosmosAsyncClient really should be built outside your method and re-used by all threads calling the method. See the sample here for creating the async client properly and consuming it from multiple methods.
cosmosAsyncClient = new CosmosClientBuilder().endpoint(cosmosURI).key(cosmosPrimaryKey).buildAsyncClient();
CosmosAsyncDatabase database = cosmosAsyncClient.getDatabase(databaseName);
CosmosAsyncContainer container = database.getContainer(containerName);
Your method should look like this:
public JSONArray getCosmosResponseFromAsyncClient(String sqlQuery) {
int preferredPageSize = 10;
CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions();
queryOptions.setQueryMetricsEnabled(true);
CosmosPagedFlux<JsonNode> pagedFlux = container.queryItems(sqlQuery, queryOptions,
JsonNode.class);
List<JsonNode> cosmosQueryResponseObjectAsAJSONArray = pagedFlux.byPage(preferredPageSize)
.flatMap(pagedFluxResponse -> {
return Flux.just(pagedFluxResponse
.getResults()
.stream()
.collect(Collectors.toList()));
}).onErrorResume((exception) -> {
logger.error(
"Exception. e: {}",
exception.getLocalizedMessage(),
exception);
return Mono.empty();
}).blockLast();
return new JSONArray(cosmosQueryResponseObjectAsAJSONArray.toString());
}

Apache Kafka - Implementing a KTable

I am new to the Kafka Streams API and I am trying to create a KTable. I have an input topic, s-order-topic, which receives JSON messages in the format shown below.
{ "current_ts": "2019-12-24 13:16:40.316952",
"primary_keys": ["ID"],
"before": null,
"tokens": {"txid":"3.17.2493",
"csn":"64913009"},
"op_type":"I",
"after": { "CODE":"AAAA41",
"STATUS":"COMPLETED",
"ID":24},
"op_ts":"2019-12-24 13:16:40.316941",
"table":"S_ORDER"}
I read messages from this topic and I want to create a KTable whose key is the "ID" field inside "after" and whose value is all the fields inside "after" (except for "ID").
I have successfully created a KTable only when I use the default aggregate functions, i.e. count, but I have difficulty creating my own aggregate function. Below is the part of the code where I try to create the KTable.
KTable<Long, String> s_table = builder.stream("s-order-topic", Consumed.with(Serdes.Long(),Serdes.String()))
.mapValues(value -> {
String time;
JSONObject json = new JSONObject(value);
if (json.getString("op_type").equals("I")) {
time = "after";
}else {
time = "before";
}
JSONObject json2 = new JSONObject(json.getJSONObject(time).toString());
return json2.toString();
})
.groupBy((key, value) -> {
JSONObject json = new JSONObject(value);
return json.getLong("ID");
}, Grouped.with(Serdes.Long(), Serdes.String()))
.aggregate( ... );
How can I implement this KTable?
Am I approaching the problem correctly?
(mapValues -> keep only the "before"/"after" field. groupBy -> Make the ID the key of the message. Aggregate -> ? )
I figured out a solution for my case. I implemented the KTable as shown below:
KTable<String, String> s_table = builder.stream("s-order-topic", Consumed.with(Serdes.String(),Serdes.String()))
.mapValues(value -> {
String time;
JSONObject json = new JSONObject(value);
if (json.getString("op_type").equals("I")) {
time = "after";
}else {
time = "before";
}
JSONObject json2 = new JSONObject(json.getJSONObject(time).toString());
return json2.toString();
})
.groupBy((key, value) -> {
JSONObject json = new JSONObject(value);
return String.valueOf(json.getLong("ID"));
}, Grouped.with(Serdes.String(), Serdes.String()))
.reduce((prev,newval)->newval);
The aggregate function is not suitable for this case; instead, I used the reduce function.
The output from the console consumer is shown below:
15 {"CODE":"AAAA17","STATUS":"PENDING","ID":15}
18 {"CODE":"AAAA50","STATUS":"SUBMITTED","ID":18}
4 {"CODE":"AAAA80","STATUS":"SUBMITTED","ID":4}
19 {"CODE":"AAAA83","STATUS":"SUBMITTED","ID":19}
18 {"CODE":"AAAA33","STATUS":"COMPLETED","ID":18}
5 {"CODE":"AAAA38","STATUS":"PENDING","ID":5}
10 {"CODE":"AAAA1","STATUS":"COMPLETED","ID":10}
3 {"CODE":"AAAA68","STATUS":"NOT COMPLETED","ID":3}
9 {"CODE":"AAAA89","STATUS":"PENDING","ID":9}

How to satisfy nested generic requirement of Class<T> in kotlin

I am trying to call a method whose signature includes a parameter of type Class<T>.
Below is the sample code in Kotlin:
val response: ResponseEntity<ResponseObject<*>> = testRestTemplate.postForEntity("/users", user, ResponseObject::class.java)
What I am trying to achieve is to get rid of the <*> in ResponseObject and have it be
val response: ResponseEntity<ResponseObject<User>> = ???
but I am not sure what the correct syntax is to satisfy the Class<T> requirement.
I tried
ResponseObject<User::class.java>::class.java
but that is not valid syntax. Any pointers?
The real problem is that if I use *, I don't know how to correctly get the User instance out of it.
OK, I managed to solve my problem using a type check with when:
@Test
fun testCreateUser() {
val user = User(id = null)
val response = testRestTemplate.postForEntity("/users", user, ResponseObject::class.java)
val responseObject = response.body
when (val returnedUser = responseObject.model) {
is User -> {
assertNotNull(returnedUser.id)
assertEquals(UserStatus.active, returnedUser.status)
}
}
}
If you can change the signature of the method then you may try something similar to the following:
class ResponseEntity<T : Any>(val body: T)
class ResponseObject<T : Any>(val model: T)
data class User(val id: Long, val status: String)
fun <M : Any, K : ResponseObject<M>> postForEntity(path: String, model: M): ResponseEntity<K> {
return TODO()
}
val response: ResponseEntity<ResponseObject<User>> = postForEntity("/users", User(1, "good"))
You could use
@Suppress("UNCHECKED_CAST")
val response = testRestTemplate.postForEntity("/users", user, ResponseObject::class.java as Class<ResponseObject<User>>)
Or a helper function if you need it more than once for different parameters
inline fun <reified T> classOf() = T::class.java
val response = testRestTemplate.postForEntity("/users", user, classOf<ResponseObject<User>>())
(in both cases the type ResponseEntity<ResponseObject<User>> should be inferred)
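If you need this for several endpoints, another option (just a sketch, assuming Spring's TestRestTemplate.postForEntity(String, Any?, Class<T>) overload and the ResponseObject/User classes from the question; postForEntityOf is a hypothetical name) is to push the reified trick into a small extension so the unchecked part lives in one place:
// reified extension: the concrete Class<T> is taken from the call site's expected type
inline fun <reified T : Any> TestRestTemplate.postForEntityOf(path: String, body: Any?): ResponseEntity<T> =
    postForEntity(path, body, T::class.java)

// usage: T is inferred from the declared type of the result
val response: ResponseEntity<ResponseObject<User>> = testRestTemplate.postForEntityOf("/users", user)
At runtime this still passes the erased ResponseObject class, exactly like the explicit cast, so deserialization behaves the same; it just avoids repeating the cast at every call site.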

Pass array as value of annotation param in JavaPoet

Using JavaPoet, I'm trying to annotate a class with an annotation which has an array as a parameter value, i.e.
@MyCustom(param = { Bar.class, Another.class })
class Foo {
}
I use AnnotationSpec.builder and its addMember() method:
List<TypeMirror> moduleTypes = new ArrayList<>(map.keySet());
AnnotationSpec annotationSpec = AnnotationSpec.builder(MyCustom.class)
.addMember("param", "{ $T[] } ", moduleTypes.toArray() )
.build();
builder.addAnnotation(annotationSpec);
CodeBlock has a joining collector, and you can use it to stream this, doing something like the following (if, for instance, this was an enum). You can do it for any type; just the map step would change.
AnnotationSpec.builder(MyCustom.class)
.addMember(
"param",
"$L",
moduleTypes.stream()
.map(type -> CodeBlock.of("$T.$L", MyCustom.class, type))
.collect(CodeBlock.joining(",", "{", "}")))
.build()
Maybe not the optimal solution, but passing an array to an annotation in JavaPoet can be done in the following way:
List<TypeMirror> moduleTypes = new ArrayList<>(map.keySet());
CodeBlock.Builder codeBuilder = CodeBlock.builder();
boolean arrayStart = true;
codeBuilder.add("{ ");
for (TypeMirror modType : moduleTypes) {
if (!arrayStart)
codeBuilder.add(" , ");
arrayStart = false;
codeBuilder.add("$T.class", modType);
}
codeBuilder.add(" }");
AnnotationSpec annotationSpec = AnnotationSpec.builder(MyCustom.class)
.addMember("param", codeBuilder.build() )
.build();
builder.addAnnotation(annotationSpec);

Spark - How to use SparkContext within classes?

I am building an application in Spark, and would like to use the SparkContext and/or SQLContext within methods in my classes, mostly to pull/generate data sets from files or SQL queries.
For example, I would like to create a T2P object which contains methods that gather data (and in this case need access to the SparkContext):
class T2P (mid: Int, sc: SparkContext, sqlContext: SQLContext) extends Serializable {
def getImps(): DataFrame = {
val imps = sc.textFile("file.txt").map(line => line.split("\t")).map(d => Data(d(0).toInt, d(1), d(2), d(3))).toDF()
return imps
}
def getX(): DataFrame = {
val x = sqlContext.sql("SELECT a,b,c FROM table")
return x
}
}
//creating the T2P object
class App {
val conf = new SparkConf().setAppName("T2P App").setMaster("local[2]")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
val t2p = new T2P(0, sc, sqlContext);
}
Passing the SparkContext as an argument to the T2P class doesn't work, since the SparkContext is not serializable (I get a 'task not serializable' error when creating T2P objects). What is the best way to use the SparkContext/SQLContext inside my classes? Or perhaps this is the wrong way to design a data-pull process in Spark?
UPDATE
Realized from the comments on this post that the SparkContext was not the problem; rather, I was using a method within a 'map' function, causing Spark to try to serialize the entire class. This would cause the error, since SparkContext is not serializable.
def startMetricTo(userData: ((Int, String), List[(Int, String)]), startMetric: String) : T2PUser = {
//do something
}
def buildUserRollup() = {
this.userRollup = this.userSorted.map(line=>startMetricTo(line, this.startMetric))
}
This results in a 'task not serializable' exception.
I fixed this problem (with the help of the commenters and other StackOverflow users) by creating a separate MetricCalc object to store my startMetricTo() method. Then I changed the buildUserRollup() method to use this new startMetricTo(). This allows the entire MetricCalc object to be serialized without issue.
//newly created object
object MetricCalc {
def startMetricTo(userData: ((Int, String), List[(Int, String)]), startMetric: String) : T2PUser = {
//do something
}
}
//using function in T2P
def buildUserRollup(startMetric: String) = {
this.userRollup = this.userSorted.map(line=>MetricCalc.startMetricTo(line, startMetric))
}
I tried several options; this is what eventually worked for me:
object SomeName extends App {
val conf = new SparkConf()...
val sc = new SparkContext(conf)
implicit val sqlC = SQLContext.getOrCreate(sc)
getDF1(sqlC)
def getDF1(sqlCo: SQLContext): Unit = {
val query1 = SomeQuery here
val df1 = sqlCo.read.format("jdbc").options(Map("url" -> dbUrl,"dbtable" -> query1)).load.cache()
//iterate through df1 and retrieve the 2nd DataFrame based on some values in the Row of the first DataFrame
df1.foreach(x => {
getDF2(x.getString(0), x.getDecimal(1).toString, x.getDecimal(3).doubleValue) (sqlCo)
})
}
def getDF2(a: String, b: String, c: Double)(implicit sqlCont: SQLContext) : Unit = {
val query2 = Somequery
val sqlcc = SQLContext.getOrCreate(sc)
//val sqlcc = sqlCont //Did not work for me. Also, omitting (implicit sqlCont: SQLContext) altogether did not work
val df2 = sqlcc.read.format("jdbc").options(Map("url" -> dbURL, "dbtable" -> query2)).load().cache()
.
.
.
}
}
Note: In the above code, if I omitted the (implicit sqlCont: SQLContext) parameter from the getDF2 method signature, it would not work. I tried several other options for passing the sqlContext from one method to the other; it always gave me a NullPointerException or a 'Task not serializable' exception. The good thing is that it eventually worked this way, and I could retrieve values from a row of the first DataFrame and use them when loading the second DataFrame.
