I'm trying to deserialize a JSON file with Jackson and I want to use different names for objects. I know how to set the @JsonProperty annotation, but this doesn't work for class names. An example:
public class _my_class {
    @JsonProperty("my_variable")
    private String myVariable;
}
I want the class to be named MyClass. I also tried @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "MyClass"), but that doesn't work either. Is there a solution to this?
EDIT
This is my simplified JSON file:
{
"CVE_data_type": "CVE",
"CVE_data_format": "MITRE",
"CVE_data_version": "4.0",
"CVE_data_numberOfCVEs": "1",
"CVE_data_timestamp": "2018-10-26T07:00Z",
"CVE_Items": [
{
"cve": {
"data_type": "CVE",
"data_format": "MITRE",
"data_version": "4.0",
"CVE_data_meta": {
"ID": "CVE-2018-0001",
"ASSIGNER": "my@mail.com"
},
"affects": {
"vendor": {
"vendor_data": [
{
"vendor_name": "myVendorName",
"product": {
"product_data": [
{
"product_name": "myProductName",
"version": {
"version_data": [
{
"version_value": "myVersionValue",
"version_affected": "myVersionAffected"
}
]
}
}
]
}
}
]
}
},
"problemtype": {
"problemtype_data": [
{
"description": [
{
"lang": "en",
"value": "myProblemtypeDescription"
}
]
}
]
},
"references": {
"reference_data": [
{
"url": "http://www.myReferenceDataUrl.com/",
"name": "myReferenceDataName",
"refsource": "myReferenceDataRefsource",
"tags": [
"myReferenceDataTagOne",
"myReferenceDataTagTwo"
]
}
]
},
"description": {
"description_data": [
{
"lang": "en",
"value": "myDescription"
}
]
}
},
"configurations": {
"CVE_data_version": "4.0",
"nodes": [
{
"operator": "OR",
"cpe": [
{
"vulnerable": true,
"cpe22Uri": "cpe:/o:this:is:a:cpe",
"cpe23Uri": "cpe:2.3:o:this:is:a:cpe:*:*:*:*:*:*",
"versionStartIncluding": "myVersionStartIncluding",
"versionStartExcluding": "myVersionStartExcluding",
"versionEndIncluding": "myVersionEndIncluding",
"versionEndExcluding": "myVersionEndExcluding"
},
{
"vulnerable": true,
"cpe22Uri": "cpe:/o:this:is:another:cpe",
"cpe23Uri": "cpe:2.3:o:this:is:another:cpe:*:*:*:*:*:*"
}
]
}
]
},
"impact": {
"baseMetricV3": {
"cvssV3": {
"version": "3.0",
"vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
"attackVector": "NETWORK",
"attackComplexity": "LOW",
"privilegesRequired": "NONE",
"userInteraction": "NONE",
"scope": "UNCHANGED",
"confidentialityImpact": "HIGH",
"integrityImpact": "HIGH",
"availabilityImpact": "HIGH",
"baseScore": 9.8,
"baseSeverity": "CRITICAL"
},
"exploitabilityScore": 3.9,
"impactScore": 5.9
},
"baseMetricV2": {
"cvssV2": {
"version": "2.0",
"vectorString": "(AV:N/AC:L/Au:N/C:P/I:P/A:P)",
"accessVector": "NETWORK",
"accessComplexity": "LOW",
"authentication": "NONE",
"confidentialityImpact": "PARTIAL",
"integrityImpact": "PARTIAL",
"availabilityImpact": "PARTIAL",
"baseScore": 7.5
},
"severity": "HIGH",
"exploitabilityScore": 10.0,
"impactScore": 6.4,
"obtainAllPrivilege": false,
"obtainUserPrivilege": false,
"obtainOtherPrivilege": false,
"userInteractionRequired": false
}
},
"publishedDate": "2018-01-10T22:29Z",
"lastModifiedDate": "2018-02-23T02:29Z"
}
]
}
Now I want the corresponding class for the CVE meta data to look like this:
public class CVEDataMeta /* currently it's CVE_Data_Meta */ {
    private String id;
    private String assigner;
    // getters and setters
}
EDIT 2
This is how I read the JSON file:
public CVE_Data deserialize(InputStream jsonStream) {
    CVE_Data cveData = null;
    ObjectMapper mapper = new ObjectMapper();
    try {
        cveData = mapper.readValue(jsonStream, CVE_Data.class);
    } catch (...) {
        ...
    }
    return cveData;
}
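For what it's worth, the class name itself never appears in the JSON, so Jackson does not need any annotation to rename a class: only the JSON property names have to be mapped, on the field (or getter) that holds the nested object. A minimal sketch of that idea (the wrapper class and field names here are hypothetical, not taken from your real model):

import com.fasterxml.jackson.annotation.JsonProperty;

public class Cve {

    // The JSON key "CVE_data_meta" is mapped onto a field whose type can be named anything
    @JsonProperty("CVE_data_meta")
    private CVEDataMeta cveDataMeta;

    public CVEDataMeta getCveDataMeta() { return cveDataMeta; }
    public void setCveDataMeta(CVEDataMeta cveDataMeta) { this.cveDataMeta = cveDataMeta; }

    public static class CVEDataMeta {

        @JsonProperty("ID")
        private String id;

        @JsonProperty("ASSIGNER")
        private String assigner;

        // getters and setters
    }
}

The root class is referenced only in mapper.readValue(jsonStream, CVE_Data.class), so it can also be renamed freely by changing that call.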
Related
I have a collection named 'airport' and I have an Atlas autocomplete index; you can see the JSON config below.
{
"mappings": {
"dynamic": false,
"fields": {
"name": [
{
"type": "string"
},
{
"foldDiacritics": false,
"maxGrams": 7,
"minGrams": 2,
"type": "autocomplete"
}
]
}
}
}
and this is my Document record
{
"_id": {
"$oid": "63de588c7154cc3ee5cbabb2"
},
"name": "Antalya Airport",
"code": "AYT",
"country": "TR",
"createdDate": {
"$date": {
"$numberLong": "1675516044323"
}
},
"updatedDate": {
"$date": {
"$numberLong": "1675516044323"
}
},
"updatedBy": "VISITOR",
"createdBy": "VISITOR"
}
And this is my MongoDB query:
public List<Document> autoCompleteAirports(AutoCompleteRequest autoCompleteRequest) {
    return database.getCollection(AIRPORT).aggregate(
            Arrays.asList(new Document("$search",
                    new Document("index", "airportAutoCompleteIndex")
                            .append("text",
                                    new Document("query", autoCompleteRequest.getKeyword())
                                            .append("path", "name")
                            )))
    ).into(new ArrayList<>());
}
So, when I type "antalya" or "Antalya", this works. But when I type "Antaly" or "antal", there is no result.
Any solution?
I tried changing the min and max grams settings on the index.
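One thing worth checking (a sketch of a likely cause, not a verified fix): the aggregation above uses the text operator, but partial inputs such as "Antaly" or "antal" are generally matched only by the autocomplete operator, which is what uses the minGrams/maxGrams edge grams defined in the mapping on "name". Something along these lines, reusing the names from the question:

public List<Document> autoCompleteAirports(AutoCompleteRequest autoCompleteRequest) {
    // Same $search stage, but with the "autocomplete" operator instead of "text",
    // so partial keywords can hit the grams built by the autocomplete mapping on "name".
    return database.getCollection(AIRPORT).aggregate(
            Arrays.asList(new Document("$search",
                    new Document("index", "airportAutoCompleteIndex")
                            .append("autocomplete",
                                    new Document("query", autoCompleteRequest.getKeyword())
                                            .append("path", "name")
                            )))
    ).into(new ArrayList<>());
}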
I want to create a search for books with Elasticsearch and Spring Data.
I index my books by ISBN/EAN without hyphens and save them in my database. I then index this data with Elasticsearch.
Indexed data: 1113333444444
If I search for an ISBN/EAN with hyphens: 111-3333-444444
there is no result. If I search without hyphens, my book is found as expected.
My settings are like this:
{
"analysis": {
"filter": {
"clean_special": {
"type": "pattern_replace",
"pattern": "[^a-zA-Z0-9]",
"replacement": ""
}
},
"analyzer": {
"isbn_search_analyzer": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"clean_special"
]
}
}
}
}
I index my fields like this:
@Field(type = FieldType.Keyword, searchAnalyzer = "isbn_search_analyzer")
private String isbn;
@Field(type = FieldType.Keyword, searchAnalyzer = "isbn_search_analyzer")
private String ean;
If I test my analyzer:
GET indexname/_analyze
{
"analyzer" : "isbn_search_analyzer",
"text" : "111-3333-444444"
}
I get following result:
{
"tokens" : [
{
"token" : "1113333444444",
"start_offset" : 0,
"end_offset" : 15,
"type" : "word",
"position" : 0
}
]
}
If I search like this:
GET indexname/_search
{
"query": {
"query_string": {
"fields": [ "isbn", "ean" ],
"query": "111-3333-444444"
}
}
}
I don't get any result. Does anyone have an idea?
As mentioned by @P.J.Meisch, you have done everything correctly but missed defining your field data type as text. When you define the fields as keyword, your custom analyzer isbn_search_analyzer is ignored, even though you are explicitly telling Elasticsearch to use it.
Here is a working example on your sample data with the fields defined as text.
Index mapping
{
"settings": {
"analysis": {
"filter": {
"clean_special": {
"type": "pattern_replace",
"pattern": "[^a-zA-Z0-9]",
"replacement": ""
}
},
"analyzer": {
"isbn_search_analyzer": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"clean_special"
]
}
}
}
},
"mappings": {
"properties": {
"isbn": {
"type": "text",
"analyzer": "isbn_search_analyzer"
},
"ean": {
"type": "text",
"analyzer": "isbn_search_analyzer"
}
}
}
}
Index Sample records
{
"isbn" : "111-3333-444444"
}
{
"isbn" : "111-3333-2222"
}
Search query
{
"query": {
"query_string": {
"fields": [
"isbn",
"ean"
],
"query": "111-3333-444444"
}
}
}
And search response
"hits": [
{
"_index": "65780647",
"_type": "_doc",
"_id": "1",
"_score": 0.6931471,
"_source": {
"isbn": "111-3333-444444"
}
}
]
Elasticsearch does not analyze fields of type keyword. You need to set the type to text.
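Applied to the Spring Data annotations from the question, that change would presumably look like the sketch below (assuming the isbn_search_analyzer from the settings above is defined on the index; an existing index usually has to be recreated or reindexed for a changed field type to take effect):

// Text fields are analyzed, so the custom analyzer strips the hyphens at both
// index time and search time, and "111-3333-444444" matches "1113333444444".
@Field(type = FieldType.Text, analyzer = "isbn_search_analyzer", searchAnalyzer = "isbn_search_analyzer")
private String isbn;

@Field(type = FieldType.Text, analyzer = "isbn_search_analyzer", searchAnalyzer = "isbn_search_analyzer")
private String ean;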
Currently, Jackson is rejecting the whole JSON when there is a blank property value.
I want to use com.fasterxml.jackson.* to parse JSON.
As you can see in the input JSON below, the name attribute is blank in some of the elements.
Such elements should be ignored by Jackson while iterating through the JSON objects,
so there should be 2 elements formed as part of the output.
I am using the code below, but no luck:
def readJsonString[T](content: String)(implicit m: Manifest[T]): T = {
  val objectMapper = new ObjectMapper(new JsonFactory().enable(Feature.ALLOW_COMMENTS)) with ScalaObjectMapper
  objectMapper.registerModule(DefaultScalaModule)
  objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
  objectMapper.configure(DeserializationFeature.WRAP_EXCEPTIONS, false)
  objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
  objectMapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false)
  objectMapper.readValue(content)
}
// Existing JSON that I want to use as input, where some attributes have a blank value
[
{
"name": "Invalid",
"ruleType": "validation_1",
"inputs": [ { "Name": "", "country": ["USA"] } ]
},
{
"name": "",
"ruleType": "validation_2",
"inputs": [ { "Name": "Test", "place": ["USA"] } ]
},
{
"name": "Valid",
"ruleType": "validation_1",
"inputs": []
},
{
"name": "Valid",
"ruleType": "validation_2",
"inputs": [ { "Name": "Test", "place": ["USA"] } ]
},
{
"name": "Valid",
"ruleType": "validation_1",
"inputs": [ { "Name": "Test", "place": ["France"] } ]
}
]
// New JSON that will be created from the above, containing only the elements with a proper name attribute value
[
{
"name": "Valid",
"ruleType": "validation_1",
"inputs": [ { "Name": "Test", "country": ["USA"] } ]
},
{
"name": "Valid",
"ruleType": "validation_1",
"inputs": [ { "Name": "Test", "place": ["France"] } ]
}
]
Here is one solution that would work using Jackson:
Case classes to map your JSON to:
case class NamePlace(Name: String, country: Seq[String])
case class NameRuleTypeInputs(name: String, ruleType: String, inputs: Seq[NamePlace])
A custom serializer that applies the field validations before serializing:
class NameRuleTypeInputsSerializer(defaultSerializer: JsonSerializer[Object]) extends JsonSerializer[NameRuleTypeInputs] {

  override def serialize(value: NameRuleTypeInputs, gen: JsonGenerator, serializers: SerializerProvider): Unit = {
    if (isValid(value)) {
      defaultSerializer.serialize(value, gen, serializers)
    }
  }

  private def isValid(value: NameRuleTypeInputs) = {
    !Option(value.name).getOrElse("").isEmpty &&
      Option(value.inputs).getOrElse(Seq.empty).nonEmpty &&
      !value.inputs.exists(i => Option(i.Name).getOrElse("").isEmpty)
  }
}
Your objectMapper, updated to register the custom serializer:
val objectMapper = new ObjectMapper(new JsonFactory().enable(Feature.ALLOW_COMMENTS)) with ScalaObjectMapper
objectMapper.registerModule(DefaultScalaModule)
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
objectMapper.configure(DeserializationFeature.WRAP_EXCEPTIONS, false)
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
objectMapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false)
objectMapper.registerModule(new SimpleModule() {
  override def setupModule(context: Module.SetupContext): Unit = {
    super.setupModule(context)
    context.addBeanSerializerModifier(new BeanSerializerModifier() {
      override def modifySerializer(config: SerializationConfig, beanDesc: BeanDescription, serializer: JsonSerializer[_]): JsonSerializer[_] = {
        if (classOf[NameRuleTypeInputs] isAssignableFrom beanDesc.getBeanClass) {
          new NameRuleTypeInputsSerializer(serializer.asInstanceOf[JsonSerializer[Object]])
        } else {
          serializer
        }
      }
    })
  }
})
Methods to read/write JSON:
def readJsonString[T](content: String)(implicit m: Manifest[T]): T = {
  objectMapper.readValue(content)
}

def writeJsonString(nameRuleTypeInputsList: Seq[NameRuleTypeInputs]): String = {
  // Using pretty printer for readability
  objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(nameRuleTypeInputsList)
}
A small test:
val testJson =
"""
[
{
"name": "Invalid",
"ruleType": "validation_1",
"inputs": [ { "Name": "", "country": ["USA"] } ]
},
{
"name": "",
"ruleType": "validation_2",
"inputs": [ { "Name": "Test", "country": ["USA"] } ]
},
{
"name": "Valid",
"ruleType": "validation_1",
"inputs": []
},
{
"name": "Valid",
"ruleType": "validation_2",
"inputs": [ { "Name": "Test", "country": ["USA"] } ]
},
{
"name": "Valid",
"ruleType": "validation_1",
"inputs": [ { "Name": "Test", "country": ["France"] } ]
}
]
""".stripMargin
val namedRuleTypeInputs: Seq[NameRuleTypeInputs] = readJsonString[Seq[NameRuleTypeInputs]](testJson)
println(writeJsonString(namedRuleTypeInputs))
Output:
[ {
"name" : "Valid",
"ruleType" : "validation_2",
"inputs" : [ {
"Name" : "Test",
"country" : [ "USA" ]
} ]
}, {
"name" : "Valid",
"ruleType" : "validation_1",
"inputs" : [ {
"Name" : "Test",
"country" : [ "France" ]
} ]
} ]
Useful Reference: https://www.baeldung.com/jackson-serialize-field-custom-criteria
I have a JSON file like this:
{
"Resources": {
"HelloWorldFunction": {
"Type": "AWS::Serverless::Function",
"Properties": {
"Handler": "index.handler",
"Runtime": "nodejs8.10",
"Events": {
"HelloWorldApi": {
"Type": "Api",
"Properties": {
"Path": "/",
"Method": "GET"
}
}
},
"Policies": [
{
"SNSPublishMessagePolicy": {
"TopicName": {
"Fn::GetAtt": [
"HelloWorldTopic",
"TopicName"
]
}
}
}
],
"Environment": {
"Variables": {
"SNS_TOPIC_ARN": {
"Ref": "HelloWorldTopic"
}
}
},
"CodeUri": "nothing"
}
},
"HelloWorldTopic": {
"Type": "AWS::SNS::Topic",
"Properties": {
"Subscription": [
{
"Endpoint": "nothing",
"Protocol": "email"
}
]
}
}
}
}
I am using the Jackson YAMLFactory to parse a YAML file that is equivalent to this JSON. How can I parse this in such a way that all the content inside "Resources" is stored in a single String? (I want to keep it as separate YAML/JSON for further analysis.)
ObjectMapper mapper = new ObjectMapper();
// toString() serializes the subtree back to JSON; asText() would return "" for an object node
String resources = mapper.readTree(new FileReader(path_to_your_json_file)).at("/Resources").toString();
Or something like this.
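Spelled out a little more, and going through the YAMLFactory the question already uses (the file name below is just a placeholder): readTree returns a JsonNode, at("/Resources") selects the subtree by JSON Pointer, and serializing that node gives the single String.

ObjectMapper yamlMapper = new ObjectMapper(new YAMLFactory());

// Parse the whole template and pick out the "Resources" subtree.
JsonNode root = yamlMapper.readTree(new File("template.yaml"));
JsonNode resources = root.at("/Resources");

// Keep it as YAML ...
String resourcesAsYaml = yamlMapper.writeValueAsString(resources);
// ... or write it out as JSON for further analysis.
String resourcesAsJson = new ObjectMapper().writeValueAsString(resources);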
I want a conditional transformation where I need to add a property to the output if the value of a specific field in the input matches my condition. Below are my input and the required output.
Input
{
"attr": [
{
"name": "first",
"validations": [
{
"type": "Required",
"value": true
}
]
},
{
"name": "last",
"validations": [
{
"type": "lenght",
"value": "10"
}
]
},
{
"name": "email",
"validations": [
{
"type": "min",
"value": 10
}
]
}
]
}
Output
{
"out": [
{
"name": "first",
"required": "yes"
},
{
"name": "last"
},
{
"name": "email"
}
]
}
So I am able to get as far as the condition, but inside the condition, & and @ resolve relative to the input rather than to the output. Can anybody help me out with the transformation? Below is the spec I have written so far.
[
{
"operation": "shift",
"spec": {
"attr": {
"*": {
"name": "out.&1.name",
"validations": {
"*": {
"type": {
"Required": {
"@(2,value)": "out.&1.req"
}
}
}
}
}
}
}
}
]
This spec does the transform.
[
{
"operation": "shift",
"spec": {
"attr": {
"*": {
"name": "out[&1].name",
"validations": {
"*": {
"type": {
"Required": {
"#yes": "out[&5].required"
}
}
}
}
}
}
}
}
]
However, I think you meant to grab the "value" : true that is a sibling of the "Required" : true, rather than have the output be "yes".
If so, swap in this bit:
"Required": {
"@(2,value)": "out[&5].required"
}
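Putting the two together, the full spec with that swap would look like this (same output paths as above; the property for "first" then comes out as true rather than the literal "yes"):

[
  {
    "operation": "shift",
    "spec": {
      "attr": {
        "*": {
          "name": "out[&1].name",
          "validations": {
            "*": {
              "type": {
                "Required": {
                  "@(2,value)": "out[&5].required"
                }
              }
            }
          }
        }
      }
    }
  }
]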