Reading Configuration in Java, JSON an Option?

I'm setting up a configuration file and system to process it. The current file I have is all JSON and is read using the GSON library. What I'd like to do is change part of the configuration file so I can chain rules to add more advanced logic.
For example, the configuration part would look similar to (assume it's JSON, but clearly it's not proper syntax):
Parts: {
    {
        {
            Type: Keyword,
            Keywords: "a,b,c",
            UniqueOnly: true,
            Threshold: 2,
        }
        AND
        {
            Type: RegEx,
            Pattern: "^(abc|123)$",
            UniqueOnly: true,
            Threshold: 2,
        }
    }
    OR
    {
        Type: Keyword,
        Keywords: "abc",
        Threshold: 1
    }
}
The inner parts will translate back to a class appropriately, but between the rules I'd like to have an option for "AND" as well as "OR", with nesting. Is this something that can be achieved easily with GSON or another JSON parser, or is it more along the lines of needing to write an entirely custom reader from scratch?

I would either use valid JSON in this situation or not use JSON at all. I have tried to do what you want and placed it on gist. The resulting JSON, which can be both serialized and deserialized, looks like the following:
{
    "combOperation": "OR",
    "elements": [
        {
            "combOperation": "AND",
            "elements": [
                {
                    "regex": "^[a-zA-Z]*",
                    "uniqueOnly": false,
                    "threshold": 1
                },
                {
                    "keyWord": "BarFoo",
                    "uniqueOnly": false,
                    "threshold": 5
                }
            ]
        },
        {
            "keyWord": "FooBar",
            "uniqueOnly": true,
            "threshold": 2
        }
    ]
}
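Since the gist itself isn't reproduced here, below is a minimal sketch of the kind of model that maps to this JSON, with hypothetical class names (Rule, CombinedRule, KeywordRule, RegexRule). The only GSON-specific work is a JsonDeserializer registered for the Rule interface that dispatches on the fields present; nesting then comes for free, because the elements field of CombinedRule re-enters the same deserializer for each child:

import com.google.gson.*;
import java.lang.reflect.Type;
import java.util.List;

interface Rule {}

class KeywordRule implements Rule {
    String keyWord;
    boolean uniqueOnly;
    int threshold;
}

class RegexRule implements Rule {
    String regex;
    boolean uniqueOnly;
    int threshold;
}

class CombinedRule implements Rule {
    String combOperation; // "AND" or "OR"
    List<Rule> elements;  // may contain further CombinedRules
}

// Rule is an interface, so GSON needs a hint about which concrete
// class to instantiate; here we dispatch on the fields present.
class RuleDeserializer implements JsonDeserializer<Rule> {
    @Override
    public Rule deserialize(JsonElement json, Type typeOfT,
                            JsonDeserializationContext ctx) throws JsonParseException {
        JsonObject obj = json.getAsJsonObject();
        if (obj.has("combOperation")) {
            return ctx.deserialize(json, CombinedRule.class);
        }
        if (obj.has("regex")) {
            return ctx.deserialize(json, RegexRule.class);
        }
        return ctx.deserialize(json, KeywordRule.class);
    }
}

Usage:
Gson gson = new GsonBuilder()
        .registerTypeAdapter(Rule.class, new RuleDeserializer())
        .create();
Rule root = gson.fromJson(jsonString, Rule.class);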

Related

Is there a way to extract all values by some key from a list of maps using SpEL?

Let's assume there's a map corresponding to the following structure:
{
    "lists": [
        {
            "list": [
                {
                    "letter": "a"
                },
                {
                    "letter": "b"
                }
            ]
        },
        {
            "list": [
                {
                    "letter": "c"
                },
                {
                    "letter": "d"
                }
            ]
        }
    ]
}
There's an easy way to get all lists using SpEL ("#root['lists']"), all letters of the first list ("#root['lists'][0]['list']"), or the first letter of the first list ("#root['lists'][0]['list'][0]"). There is also a projection mechanism that allows constructions like "#root['lists'].![#this['list']]" to convert each item of lists into the result of the projection expression.
However, given all these possibilities, I still failed to come up with an expression that extracts all letters from both lists. For the example above, I'd like to get:
[
    {
        "letter": "a"
    },
    {
        "letter": "b"
    },
    {
        "letter": "c"
    },
    {
        "letter": "d"
    }
]
I tried to use the projection mechanism to achieve my goal, but it didn't really help. The problem I see is that every time SpEL detects a list, it applies the projection expression to each element of that list, so the structure of nested lists can't be changed this way.
The problem is easily solved using JsonPath; however, I assume I may have gotten something wrong and overlooked a way to achieve the same result using SpEL. I would be happy to hear any ideas.
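For reference, here is a minimal sketch of the JsonPath route mentioned above, using the Jayway json-path library; a single wildcarded path flattens both nested lists in one go (the class name ExtractLetters is just for illustration):

import com.jayway.jsonpath.JsonPath;
import java.util.List;
import java.util.Map;

public class ExtractLetters {
    public static void main(String[] args) {
        String json = "{\"lists\":[{\"list\":[{\"letter\":\"a\"},{\"letter\":\"b\"}]},"
                + "{\"list\":[{\"letter\":\"c\"},{\"letter\":\"d\"}]}]}";
        // Each [*] wildcard is flattened into the single result list
        List<Map<String, String>> letters = JsonPath.read(json, "$.lists[*].list[*]");
        System.out.println(letters); // [{letter=a}, {letter=b}, {letter=c}, {letter=d}]
    }
}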

How to compare 2 json by ignoring some attributes and child element orders

I'm looking for a Java library that gives fine-grained control over JSON comparison (just like XMLUnit).
For example, I have the two JSON documents below:
Control:
{
    "timestamp": 1234567,
    "items": [
        {
            "id": 111,
            "title": "Test Item 111"
        },
        {
            "id": 222,
            "title": "Test Item 222"
        }
    ]
}
Test:
{
    "timestamp": 7654321,
    "items": [
        {
            "id": 222,
            "title": "Test Item 222"
        },
        {
            "id": 111,
            "title": "Test Item 111"
        },
        {
            "id": 333,
            "title": "Test Item 333"
        }
    ]
}
I'd like to apply the following semantics when comparing them:
Ignore 'timestamp'
When comparing 'items', match elements head-to-head by 'id', ignoring order
Any suggestions?
As pointed out in your tags, you can use Jackson to convert both JSON documents to Java objects.
You can then override the equals method of the class with your desired comparison condition.
After that, just call equals on the two objects.
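A minimal sketch of that approach, assuming hypothetical Item and ComparisonDoc classes. The equals on ComparisonDoc ignores timestamp, ignores array order, and tolerates extra items by checking only that every control item appears in the test document:

import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.HashSet;
import java.util.List;
import java.util.Objects;

class Item {
    public long id;
    public String title;

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Item)) return false;
        Item other = (Item) o;
        return id == other.id && Objects.equals(title, other.title);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, title);
    }
}

class ComparisonDoc {
    public long timestamp; // deliberately not compared
    public List<Item> items;

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ComparisonDoc)) return false;
        ComparisonDoc other = (ComparisonDoc) o;
        // Set containment ignores ordering and extra items on the other side
        return new HashSet<>(other.items).containsAll(items);
    }
}

Usage:
ObjectMapper mapper = new ObjectMapper();
ComparisonDoc control = mapper.readValue(controlJson, ComparisonDoc.class);
ComparisonDoc test = mapper.readValue(testJson, ComparisonDoc.class);
assertTrue(control.equals(test));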
JsonUnit seems to help:
https://github.com/lukas-krecan/JsonUnit#options
for my case:
import static net.javacrumbs.jsonunit.core.ConfigurationWhen.path;
import static net.javacrumbs.jsonunit.core.ConfigurationWhen.then;
import static net.javacrumbs.jsonunit.core.ConfigurationWhen.thenIgnore;
import net.javacrumbs.jsonunit.core.Configuration;
import net.javacrumbs.jsonunit.core.Option;
import net.javacrumbs.jsonunit.core.internal.Diff;

Configuration cfg = Configuration.empty()
        .when(path("items"), then(Option.IGNORING_ARRAY_ORDER, Option.IGNORING_EXTRA_ARRAY_ITEMS))
        .when(path("timestamp"), thenIgnore());
Diff diff = Diff.create(control, test, "", "", cfg);
assertTrue(diff.similar());

ShEx Validation - reason and appInfo are null in Result Shape Map

I am learning ShEx and using the shexjava API (http://shexjava.lille.inria.fr/) for my project. I have a schema, a data graph, and a fixed shape map. When I validate using refine or recursive validation, I get a ResultShapeMap, but reason and appInfo are null for the NONCONFORMANT status. I do not understand why these two fields are null.
This is the validation code:
ValidationAlgorithm vl = new RefineValidation(schema, dataGraph);
ResultShapeMap result = vl.validate(shapeMap);
Shape is,
{
    "@context": "http://www.w3.org/ns/shex.jsonld",
    "type": "Schema",
    "shapes": [
        {
            "id": "http://example.com/ns#HouseShape",
            "type": "Shape",
            "expression": {
                "type": "EachOf",
                "expressions": [
                    {
                        "type": "TripleConstraint",
                        "predicate": "http://example.com/ns#number",
                        "valueExpr": {
                            "type": "NodeConstraint",
                            "datatype": "http://www.w3.org/2001/XMLSchema#String"
                        }
                    },
                    {
                        "type": "TripleConstraint",
                        "predicate": "http://example.com/ns#size",
                        "valueExpr": {
                            "type": "NodeConstraint",
                            "datatype": "http://www.w3.org/2001/XMLSchema#decimal"
                        }
                    }
                ]
            }
        }
    ]
}
Data is,
ex:House1 a ex:House ;
ex:number "11A" ;
ex:size 23 .
My result is:
ResultShapeMap [
    associations = [
        ShapeAssociation [
            nodeSelector=<example.com/ns#House>,
            shapeSelector=<example.com/ns#HouseShape>,
            status=NONCONFORMANT,
            reason=null,
            appInfo=null
        ]
    ]
]
I want to output the reason for nonconformance, but it gives me null instead.
Could someone please help me?
The shexjava implementation currently does not support indicating a reason for failure.
This is because when a node does not satisfy a shape there may be several reasons.
If you want to learn ShEx, I would advise you to use ShapeDesigner
https://gitlab.inria.fr/jdusart/shexjapp/
which provides a graphical interface in which you can explore validation results.
In this particular case, it indicates that the validation fails because 23 is not a decimal (it's actually an integer).
I do not know whether this is a bug, i.e. whether integers should also be considered decimals in RDF.
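For what it's worth, the data itself can be made conformant by typing the literal explicitly. In Turtle, a bare 23 is an xsd:integer, whereas 23.0 (or an explicitly typed literal) is an xsd:decimal:

ex:House1 a ex:House ;
    ex:number "11A" ;
    ex:size 23.0 .
# or, equivalently: ex:size "23"^^xsd:decimal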

Elasticsearch aggregation histogram by date no longer works with script

I have an ES query, which returns some 26 results.
The query has aggregation histogram element which looks like this:
"aggregations" : {
"by_date" : {
"date_histogram" : {
"field" : "startDate",
"interval" : "month"
}
}
}
The aggregation element of search result looks like this:
"aggregations": {
"date_histogram": {
"buckets":[
{"key_as_string":"2016-01-01T00:00:00.000Z", "key":1451606400000, "doc_count":18},
{"key_as_string":"2016-02-01T00:00:00.000Z", "key":1454284800000, "doc_count":8}
]
}
}
So far so good. But what I want is to do some scripting against search results to remove elements not matching certain criteria. So I added this to the query:
"aggregations" : {
"by_date" : {
"date_histogram" : {
"field" : "startDate",
"interval" : "month",
"script" : {
"inline" : "if (condition) {return 1} else {return 0}"
}
}
}
Unfortunately, this yields a single result bucket and the aggregation is lost:
"date_histogram": {
"buckets": [
{"key_as_string": "1970-01-01T00:00:00.000Z", "key": 0, "doc_count": 26 }
]
}
What I have tried:
reducing the script inline element to just return 1; this still results in a broken aggregation
returning the value of the date field itself; this results in a ClassCastException, since the result should be a number
checking ES config settings: I have enabled everything for script.engine.groovy.{file|indexed|inline}.{aggs|mapping|search|update|plugin}, as well as script.inline, script.indexed, and script.aggs
checking the 2.0 breaking aggregation changes, but none seem relevant
I know I can run separate queries with that filter in the query itself (rather than in the aggregation part), which would let me aggregate without a script. The point is that I have a dozen different aggregations which take the same set of search results and apply different types of filtering (and aggregation). Running the same query multiple times is counterproductive and not acceptable.
As far as I know, this used to work in version 1.4.4 but no longer works in version 2.2.0.
Is this a bug? Or could the same logic be reimplemented differently, e.g. via a Bucket Script Aggregation or some other construct?
Have you tried the new aggregation framework, with an inline ternary in a Groovy-style script? I previously ran into the same kind of issue, and that's how I solved it.
Your aggregation query would look like this:
"aggs": {
"2": {
"date_histogram": {
"field": "startDate",
"interval": "month",
},
"aggs": {
"1": {
"sum": {
"script": "((condition) ? 1 : 0)",
"lang": "expression"
}
}
}
}
}
Note that you can also define your script as a .groovy file in the scripts folder of the Elasticsearch installation.
Hope that helps.
Regards.

Parse a structured flattened json object to list of object

I currently have JSON like:
{
    "data": {
        "gatewayId": "asd",
        "records": [
            {
                "ms": 123,
                "points": [
                    {
                        "sensorId": "asdasd",
                        "sensorType": "asdasd",
                        "batt": 12,
                        "kw": 2
                    },
                    {
                        "sensorId": "123",
                        "sensorType": "as123dasd",
                        "batt": 12,
                        "kw": 2
                    }
                ]
            },
            {
                "ms": 123123,
                "points": [
                    {
                        "sensorId": "asdasd",
                        "sensorType": "asdasd",
                        "batt": 12,
                        "kw": 2
                    },
                    {
                        "sensorId": "123",
                        "sensorType": "as123dasd",
                        "batt": 12,
                        "kw": 2
                    }
                ]
            }
        ]
    },
    "gatewayType": "Asdasd"
}
My goal is to denormalize the object down to the lowest level in Java, where the POJO is:
class SimpleData {
    private String gatewayId;
    private String gatewayType;
    private Long ms;
    private String sensorType;
    private Double batt;
    private Long kw;
}
For now, I have flattened the JSON to a list of strings, as below:
root.gatewayType="Asdasd"
root.data.gatewayId="asd"
root.data.records[0].ms=123
root.data.records[0].points[0].sensorId="asdasd"
root.data.records[0].points[0].sensorType="asdasd"
root.data.records[0].points[0].batt=12
root.data.records[0].points[0].kw=2
root.data.records[0].points[1].sensorId="123"
root.data.records[0].points[1].sensorType="as123dasd"
root.data.records[0].points[1].batt=12
root.data.records[0].points[1].kw=2
root.data.records[1].ms=123123
root.data.records[1].points[0].sensorId="asdasd"
root.data.records[1].points[0].sensorType="asdasd"
root.data.records[1].points[0].batt=12
root.data.records[1].points[0].kw=2
root.data.records[1].points[1].sensorId="123"
root.data.records[1].points[1].sensorType="as123dasd"
root.data.records[1].points[1].batt=12
root.data.records[1].points[1].kw=2
Is there any logic or library that can parse the above list of strings into a list of SimpleData objects?
Sorry, my question may not have been clear. I have found a simpler way to approach the problem, but I still need a library to denormalize the JSON.
For example, this JSON:
{
    "a": "1",
    "b": ["2", "3"]
}
will become
[
    {
        "a": "1",
        "b": "2"
    },
    {
        "a": "1",
        "b": "3"
    }
]
I believe the GSON library is what you are looking for. It provides simple methods to convert JSON arrays to plain Java objects and vice versa. Very handy, and developed by Google.
I recommend fastjson, which is really fast and quite easy to use. In your case, you only need to define POJOs matching the structure of your JSON data and parse the JSON into objects. It's created by Alibaba.
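For illustration, here is a minimal sketch of the flattening done directly on GSON's tree model, skipping the intermediate path strings. The setter names on SimpleData are assumptions, since the class above only shows its fields:

import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import java.util.ArrayList;
import java.util.List;

public class Denormalizer {

    // Emits one SimpleData row per point, copying the enclosing
    // gateway and record fields into every row.
    static List<SimpleData> flatten(String json) {
        JsonObject root = JsonParser.parseString(json).getAsJsonObject();
        JsonObject data = root.getAsJsonObject("data");
        String gatewayType = root.get("gatewayType").getAsString();
        String gatewayId = data.get("gatewayId").getAsString();

        List<SimpleData> rows = new ArrayList<>();
        for (JsonElement recordEl : data.getAsJsonArray("records")) {
            JsonObject record = recordEl.getAsJsonObject();
            long ms = record.get("ms").getAsLong();
            for (JsonElement pointEl : record.getAsJsonArray("points")) {
                JsonObject point = pointEl.getAsJsonObject();
                SimpleData row = new SimpleData();
                row.setGatewayId(gatewayId); // setters assumed to exist
                row.setGatewayType(gatewayType);
                row.setMs(ms);
                row.setSensorType(point.get("sensorType").getAsString());
                row.setBatt(point.get("batt").getAsDouble());
                row.setKw(point.get("kw").getAsLong());
                rows.add(row);
            }
        }
        return rows;
    }
}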
