Jackson - Variable key - Java

I'm facing a little problem. I think some workaround can be found, but I'm searching for the proper way to do it.
I use Selenium with Grid, and I configure all my nodes with JSON files. Some browsers (Chrome, IE) need specific drivers.
These drivers are defined by a key of the form webdriver.browser.driver, with browser = chrome or ie. So we've got, for example:
{"browserName": "chrome",
"maxInstances": 5,
"platform": "WINDOWS",
"webdriver.chrome.driver": "C:/Program Files (x86)/Google/Chrome/Application/chrome.exe" }
{"browserName": "internet explorer",
"maxInstances": 1,
"platform": "WINDOWS",
"webdriver.ie.driver": "C:/Program Files (x86)/Internet Explorer/iexplore.exe" }
I want to get the value and put it in the private String driverPath field of my POJO.
Is there a way to get the value of the key dynamically, like with a regex?

Conceptually speaking, in both of your objects the attributes "webdriver.chrome.driver" and "webdriver.ie.driver" represent the same entity, the driver, so the attribute should just be called driver.
Usually a POJO <-> JSON conversion is intended to be one to one, so one Java field per JSON field.
If you cannot change the JSON, or you feel that you need it to stay the way it is right now, then at least in Jackson and GSON you can register a custom deserializer and perform the parsing for that value manually.
You can see examples here: http://www.baeldung.com/jackson-deserialization
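As a lighter alternative to a full custom deserializer, Jackson also offers @JsonAnySetter, which catches every JSON key that has no matching Java field. A minimal sketch, assuming a hypothetical NodeConfig POJO for the node entries above:

import com.fasterxml.jackson.annotation.JsonAnySetter;

public class NodeConfig {
    public String browserName;
    public int maxInstances;
    public String platform;
    private String driverPath;

    // Jackson calls this for every JSON key without a matching field, so
    // "webdriver.chrome.driver" and "webdriver.ie.driver" both land here
    @JsonAnySetter
    public void setDynamicProperty(String key, Object value) {
        if (key.matches("webdriver\\..+\\.driver")) {
            this.driverPath = (String) value;
        }
    }

    public String getDriverPath() {
        return driverPath;
    }
}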

Parse it as a List of Maps, for example using Jackson:
List<Map<String, Object>> list = new ObjectMapper().readValue(json, new TypeReference<List<Map<String, Object>>>() {});
Here's some working code:
import java.util.List;
import java.util.Map;
import java.util.Objects;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

String json = "[{\"browserName\": \"chrome\", \"maxInstances\": 5,\n \"platform\": \"WINDOWS\",\n" +
        " \"webdriver.chrome.driver\": \"C:/Program Files (x86)/Google/Chrome/Application/chrome.exe\" }," +
        "{\"browserName\": \"internet explorer\", \"maxInstances\": 1," +
        " \"platform\": \"WINDOWS\", \"webdriver.ie.driver\": \"C:/Program Files (x86)/Internet Explorer/iexplore.exe\" }]";
List<Map<String, Object>> list = new ObjectMapper().readValue(json, new TypeReference<List<Map<String, Object>>>() {});
String browser = "chrome";
String driver = list.stream()
        .map(m -> (String) m.get("webdriver." + browser + ".driver"))
        .filter(Objects::nonNull)
        .findFirst()
        .orElse(null); // C:/Program Files (x86)/Google/Chrome/Application/chrome.exe
driver will be null if the key is not found.

Related

SparkSql and REGEX

In my case I use a Dataset (DataFrame) in Java Spark SQL.
This dataset results from a JSON file. The JSON file is made of key-value pairs. When I launch a query to see a value, I write, for example:
SELECT key1.name FROM table
Example JSON file:
{
  "key1":
    { "name": ".....", .... },
  "key2":
    { "name": "....", .... }
}
My question is: when I want to access all the keys, I believe I should use a regex, like
SELECT key*.name FROM table
but I don't know the regex! Please help.
I am afraid no such syntax is available in (Spark) SQL.
You may want to construct your query programmatically, though.
Something like:
String sql = Stream.of(ds.schema().fieldNames())
        .filter(name -> name.startsWith("key"))
        .collect(Collectors.joining(", ", "select ", " from table"));
System.out.println(sql);
or even
Dataset<Row> ds = spark.table("table");
Dataset<Row> result = ds.select(Stream.of(ds.schema().fieldNames())
        .filter(name -> name.startsWith("key"))
        .map(ds::col)
        .toArray(Column[]::new));
result.show();
HTH!

How to create and publish Index using Java Client Programmatically

Is it possible to programmatically create and publish secondary indexes using Couchbase's Java Client 2.2.2? I want to be able to create and publish my custom secondary indexes; I'm running Couchbase 4.1. I know this is possible with Couchbase Views, but I can't find the same for indexes.
couchbase-java-client 2.3.1 is needed in order to programmatically create primary or secondary indexes. Some of the usable methods can be found on the BucketManager, the same class that is used to upsert views. Additionally, the static method createIndex can be used; it supports both DSL and String syntax.
There are a few options to create your secondary indexes.
Option #1:
Statement query = createIndex(name).on(bucket.name(), x(fieldName));
N1qlQueryResult result = bucket.query(N1qlQuery.simple(query));
Option #2:
String query = "CREATE INDEX `" + indexName + "` ON `" + bucket.name() + "` (" + fieldName + ")";
N1qlQueryResult result = bucket.query(N1qlQuery.simple(query));
Option #3 (actually multiple options here, since the method createN1qlIndex is overloaded):
bucket.bucketManager().createN1qlIndex(indexName, fields, where, true, false);
Primary index:
// Create a N1QL Primary Index (ignore if it exists)
bucket.bucketManager().createN1qlPrimaryIndex(true /* ignore if exists */, false /* defer flag */);
Secondary Index:
// Create a N1QL Index (ignore if it exists)
bucket.bucketManager().createN1qlIndex(
        "my_idx_1",
        true,  // ignoreIfExists
        false, // defer
        Expression.path("field1.id"),
        Expression.path("field2.id"));
or
// Create a N1QL Index (ignore if it exists)
bucket.bucketManager().createN1qlIndex(
        "my_idx_2",
        true,  // ignoreIfExists
        false, // defer
        "field1.id",
        "field2.id");
The first secondary index (my_idx_1) is helpful if your document is something like this:
{
  "field1" : {
    "id" : "value"
  },
  "field2" : {
    "id" : "value"
  }
}
The second secondary index (my_idx_2) is helpful if your document is something like this:
{
  "field1.id" : "value",
  "field2.id" : "value"
}
You should be able to do this with any 2.x client, once you have a Bucket:
bucket.query(N1qlQuery.simple(queryString))
where queryString is something like
String queryString = "CREATE PRIMARY INDEX ON " + bucketName + " USING GSI;";
As of java-client 3.x there is a QueryIndexManager (obtained via cluster.queryIndexes()) which provides an indexing API with the specific methods below to create indexes:
createIndex(String bucketName, String indexName, Collection<String> fields)
createIndex(String bucketName, String indexName, Collection<String> fields, CreateQueryIndexOptions options)
createPrimaryIndex(String bucketName)
createPrimaryIndex(String bucketName, CreatePrimaryQueryIndexOptions options)
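A minimal usage sketch against that 3.x API (the connection details and the bucket name "bucketName" are placeholders):

import java.util.Arrays;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.manager.query.CreatePrimaryQueryIndexOptions;
import com.couchbase.client.java.manager.query.QueryIndexManager;

public class IndexSetup {
    public static void main(String[] args) {
        Cluster cluster = Cluster.connect("127.0.0.1", "username", "password");
        QueryIndexManager indexManager = cluster.queryIndexes();

        // Primary index, ignoring the call if one already exists
        indexManager.createPrimaryIndex("bucketName",
                CreatePrimaryQueryIndexOptions.createPrimaryQueryIndexOptions()
                        .ignoreIfExists(true));

        // Secondary index on two fields
        indexManager.createIndex("bucketName", "my_idx_1",
                Arrays.asList("field1.id", "field2.id"));

        cluster.disconnect();
    }
}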

Conditionals in String - Java Properties File

I have a Java property like the one below:
sample.properties
query={name} in {address:-}
Currently I am using StrSubstitutor to replace the values:
valuesMap.put("name", "Har");
valuesMap.put("address", "Park Street");
String queryString = properties.getProperty("query");
StrSubstitutor sub = new StrSubstitutor(valuesMap);
String resolvedString = sub.replace(queryString);
resolvedString = Har in Park Street
What I need is that if the "address" isn't available, the resolved string should be:
resolvedString = Har instead of resolvedString = Har in
Is it possible to achieve this using StrSubstitutor, or by any other means, like a template engine?
I do not want any Java code dependency, as the query pattern can change.
This could be a way not to lose generality:
query1={name} in {address}
query2={name}
...
String queryString;
String address = valuesMap.get("address");
if (address == null || address.isEmpty()) {
    queryString = properties.getProperty("query2");
} else {
    queryString = properties.getProperty("query1");
}
StrSubstitutor sub = new StrSubstitutor(valuesMap);
String resolvedString = sub.replace(queryString);
This way, even if the queries have to change in the future, you'll be able to manage the difference between the two cases.
EDIT: if in the future you want to add new properties, such as in your comment:
{name} as {alias} in {address}
I think a way to proceed could be to put even "as" and "in" into the query:
query = {name} {as} {alias} {in} {address}
and then populate the database or the valuesMap (I don't know where the data comes from in your application) in the right way (so {as}="" and {alias}="" when there is no alias, and {in}="" when there is no address). Would that be feasible for you?
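For what it's worth, a minimal sketch of that idea with StrSubstitutor (the map contents are illustrative, and the prefix and suffix are set to "{" and "}" since StrSubstitutor defaults to "${...}"):

import java.util.HashMap;
import java.util.Map;
import org.apache.commons.lang3.text.StrSubstitutor;

public class QueryTemplate {
    public static void main(String[] args) {
        // query = {name} {as} {alias} {in} {address}
        String template = "{name} {as} {alias} {in} {address}";

        Map<String, String> valuesMap = new HashMap<>();
        valuesMap.put("name", "Har");
        // No alias and no address: blank out the connecting words too
        valuesMap.put("as", "");
        valuesMap.put("alias", "");
        valuesMap.put("in", "");
        valuesMap.put("address", "");

        StrSubstitutor sub = new StrSubstitutor(valuesMap, "{", "}");
        // Collapse the whitespace left behind by the empty placeholders
        String resolved = sub.replace(template).trim().replaceAll("\\s+", " ");
        System.out.println(resolved); // Har
    }
}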

Jackson unmarshalling problems

I am trying to deserialize a JSON String using Jackson 2 with RestAssured (a Java tool for integration tests).
I have a problem. The String I am trying to deserialize is:
{"Medium":{"uuid":"2","estimatedWaitTime":0,"status":"OPEN_AVAILABLE","name":"Chat","type":"CHAT"}}
There is the object type "Medium" at the beginning of the String. This causes Jackson to fail during deserialization:
com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "Medium"
I've set FAIL_ON_UNKNOWN_PROPERTIES to false, and then I get no exception during deserialization. However, all of my properties are null in Java.
Response getAvailability -> {"Medium":{"uuid":"2","estimatedWaitTime":0,"status":"OPEN_AVAILABLE","name":"Chat","type":"CHAT"}}
### MEDIUM name -> null
### MEDIUM uuid -> null
### MEDIUM wait time -> null
### MEDIUM wait time -> null
### MEDIUM status -> null
Can anyone help me? (Note: I can't change my input JSON string.)
{
  "Medium": {
    "uuid": "2",
    "estimatedWaitTime": 0,
    "status": "OPEN_AVAILABLE",
    "name": "Chat",
    "type": "CHAT"
  }
}
As you can see, uuid and the other params are part of the Medium object, so the classes into which it can be deserialized are:
class Medium {
    String name;
    // specify the other params too
}
class BaseObject {
    Medium Medium;
}
and then use something like new ObjectMapper().readValue(json, BaseObject.class).
The above is pseudocode.
You need to put the annotation
@JsonRootName("Medium")
on your bean class and configure the object mapper with
mapper.enable(DeserializationFeature.UNWRAP_ROOT_VALUE).
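A minimal sketch of that approach, assuming a Medium bean with public fields matching the JSON:

import com.fasterxml.jackson.annotation.JsonRootName;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

@JsonRootName("Medium")
public class Medium {
    public String uuid;
    public int estimatedWaitTime;
    public String status;
    public String name;
    public String type;

    public static void main(String[] args) throws Exception {
        String json = "{\"Medium\":{\"uuid\":\"2\",\"estimatedWaitTime\":0,"
                + "\"status\":\"OPEN_AVAILABLE\",\"name\":\"Chat\",\"type\":\"CHAT\"}}";
        ObjectMapper mapper = new ObjectMapper();
        mapper.enable(DeserializationFeature.UNWRAP_ROOT_VALUE); // strips the "Medium" wrapper
        Medium medium = mapper.readValue(json, Medium.class);
        System.out.println(medium.name); // Chat
    }
}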
You need a way to remove the object name that is part of the input JSON. Since you cannot change the input string, use this code to turn the input into a tree and get the value of the "Medium" node:
ObjectMapper m = new ObjectMapper();
JsonNode root = m.readTree("{\"Medium\":{\"uuid\":\"2\",\"estimatedWaitTime\":0,\"status\":\"OPEN_AVAILABLE\",\"name\":\"Chat\",\"type\":\"CHAT\"}}");
JsonNode obj = root.get("Medium");
Medium medium = m.treeToValue(obj, Medium.class);

Update collection in MongoDb via Apache Spark using Mongo-Hadoop connector

I would like to update a specific collection in MongoDb via Spark in Java.
I am using the MongoDB Connector for Hadoop to retrieve and save information from Apache Spark to MongoDb in Java.
After following Sampo Niskanen's excellent post regarding retrieving and saving collections to MongoDb via Spark, I got stuck with updating collections.
MongoOutputFormat.java includes a constructor taking String[] updateKeys, which I am guessing refers to a possible list of keys to compare on existing collections in order to perform an update. However, when using Spark's saveAsNewAPIHadoopFile() method with the parameter MongoOutputFormat.class, I am wondering how to use that update constructor.
save.saveAsNewAPIHadoopFile("file:///bogus", Object.class, Object.class, MongoOutputFormat.class, config);
Prior to this, MongoUpdateWritable.java was being used to perform collection updates. From examples I've seen on Hadoop, this is normally set on mongo.job.output.value, maybe like this in Spark:
save.saveAsNewAPIHadoopFile("file:///bogus", Object.class, MongoUpdateWritable.class, MongoOutputFormat.class, config);
However, I'm still wondering how to specify the update keys in MongoUpdateWritable.java.
Admittedly, as a hacky way, I've set the "_id" of the object as my document's KeyValue so that when a save is performed, the collection will overwrite the documents having the same KeyValue as _id.
JavaPairRDD<BSONObject, ?> analyticsResult; // JavaPairRDD of (mongoObject, result)
JavaPairRDD<Object, BSONObject> save = analyticsResult.mapToPair(s -> {
    BSONObject o = (BSONObject) s._1;
    // for all keys, set _id to key:value_
    String id = "";
    for (String key : o.keySet()) {
        id += key + ":" + (String) o.get(key) + "_";
    }
    o.put("_id", id);
    o.put("result", s._2);
    return new Tuple2<>(null, o);
});
save.saveAsNewAPIHadoopFile("file:///bogus", Object.class, Object.class, MongoOutputFormat.class, config);
I would like to perform the mongodb collection update via Spark using MongoOutputFormat or MongoUpdateWritable or Configuration, ideally using the saveAsNewAPIHadoopFile() method. Is it possible? If not, is there any other way that does not involve specifically setting the _id to the key values I want to update on?
I tried several combinations of config.set("mongo.job.output.value", "....") and several combinations of
.saveAsNewAPIHadoopFile(
"file:///bogus",
classOf[Any],
classOf[Any],
classOf[com.mongodb.hadoop.MongoOutputFormat[Any, Any]],
mongo_config
)
and none of them worked.
I made it work by using the MongoUpdateWritable class as the output of my map method:
items.map(row => {
  val mongo_id = new ObjectId(row("id").toString)
  val query = new BasicBSONObject()
  query.append("_id", mongo_id)
  val update = new BasicBSONObject()
  update.append("$set", new BasicBSONObject().append("field_name", row("new_value")))
  val muw = new MongoUpdateWritable(query, update, false, true)
  (null, muw)
})
.saveAsNewAPIHadoopFile(
  "file:///bogus",
  classOf[Any],
  classOf[Any],
  classOf[com.mongodb.hadoop.MongoOutputFormat[Any, Any]],
  mongo_config
)
The raw query executed in mongo is then something like this:
2014-11-09T13:32:11.609-0800 [conn438] update db.users query: { _id: ObjectId('5436edd3e4b051de6a505af9') } update: { $set: { value: 10 } } nMatched:1 nModified:0 keyUpdates:0 numYields:0 locks(micros) w:24 3ms
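For reference, here is a rough Java translation of the Scala snippet above, since the question was asked about Java. This is a sketch: it assumes the same mongo-hadoop version, and the input shape (a Map per row carrying an "id" and a "new_value") as well as the field name "field_name" are illustrative.

import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.bson.BasicBSONObject;
import org.bson.types.ObjectId;
import com.mongodb.hadoop.MongoOutputFormat;
import com.mongodb.hadoop.io.MongoUpdateWritable;
import scala.Tuple2;

public class MongoUpdateJob {

    public static void save(JavaRDD<Map<String, Object>> items, Configuration mongoConfig) {
        JavaPairRDD<Object, MongoUpdateWritable> updates = items.mapToPair(row -> {
            // Match the document to update by its _id
            BasicBSONObject query = new BasicBSONObject();
            query.append("_id", new ObjectId(row.get("id").toString()));

            // $set the new value on the matched document
            BasicBSONObject update = new BasicBSONObject();
            update.append("$set",
                    new BasicBSONObject().append("field_name", row.get("new_value")));

            // upsert = false, multiUpdate = true, as in the Scala version
            return new Tuple2<>(null, new MongoUpdateWritable(query, update, false, true));
        });

        updates.saveAsNewAPIHadoopFile("file:///bogus",
                Object.class, MongoUpdateWritable.class, MongoOutputFormat.class, mongoConfig);
    }
}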
