Let's say I have a search response like
"hits": [
{
"_index": "customer",
"_type": "doc",
"_id": "5",
"_score": 1,
"_source": {
"name": "TEST1",
"surname": "Soanems1"
}
},
I want to be able to modify the data before the user gets the response. Let's say the "name" field has a value of "TEST1" and I want the data at rest to stay "TEST1" but I want the user to see a different value, for example, "TEST12345". Is there a way to do it?
Is there a way to create a custom plugin that does this transformation for me? (There are some non-trivial transformations that I want to apply to the response.)
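For illustration only, here is a minimal Jackson sketch of doing that rewrite in an intermediary service layer rather than inside Elasticsearch itself (not a plugin; the field name and the hits layout are taken from the snippet above, and rawResponse is assumed to hold the raw JSON string):
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

ObjectMapper mapper = new ObjectMapper();
JsonNode response = mapper.readTree(rawResponse);
// "hits" as shown in the snippet above; in a full search response the array sits under hits.hits.
for (JsonNode hit : response.path("hits")) {
    JsonNode source = hit.path("_source");
    if (source.isObject() && source.has("name")) {
        // The stored document is untouched; only the copy being returned changes.
        ((ObjectNode) source).put("name", source.get("name").asText() + "2345"); // TEST1 -> TEST12345
    }
}
String transformed = mapper.writeValueAsString(response);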
I am creating a Java service that gets a JSON object from an HTTP GET request.
This is an example of the JSON object I am getting:
{
"data": [{
"itemNumber": "547325645",
"manufacturer": "LV3",
"name": "Levis 501",
"minimumQuantity": "1.0",
"maximumQuantity": "10.0",
"prices": [{
"currency": "EUR",
"amount": "80.0"
}]
}, {
"itemNumber": "224145625",
"manufacturer": "LV3",
"name": "Levis 502",
"minimumQuantity": "1.0",
"maximumQuantity": "10.0",
"prices": [{
"currency": "EUR",
"amount": "90.0"
}]
}],
"pagination": {
"offset": 0,
"limit": 2,
"total": 1925
}
}
Right now I am using Jackson to map my HttpResponse into a JSON object:
// Read the response body into a single string, then parse it with Jackson.
String productListResponse = new BufferedReader(
        new InputStreamReader(getResponse.getEntity().getContent(), StandardCharsets.UTF_8))
        .lines()
        .collect(Collectors.joining("\n"));
ObjectMapper mapper = new ObjectMapper();
JsonNode productListJson = mapper.readTree(productListResponse);
The problem I run into now is that I can't parse this JSON properly. For example, I want to omit many values, like the quantities or the currencies. I also want to map the manufacturer value to another text, in this case "Levis". What fails at the moment is the data list: the mapper doesn't handle it, and I get an empty object back that doesn't contain the JSON, even though I have already verified that I do receive a response. The next problem is that the prices are also in a list, and I don't know how to handle that either.
Simply put, I just want a semicolon-separated CSV file into which I pull individual values out of the JSON. I have already done a lot of research and found useful information, but it does not apply to my case.
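For what it's worth, here is a rough sketch of the kind of extraction I have in mind, walking the data array of productListJson from above with Jackson (the chosen columns and the LV3 -> Levis mapping are just illustrative assumptions):
import com.fasterxml.jackson.databind.JsonNode;

StringBuilder csv = new StringBuilder("itemNumber;manufacturer;name;amount\n");
for (JsonNode item : productListJson.path("data")) {
    // Rewrite the manufacturer code to a display name; quantities and currency are skipped.
    String manufacturer = "LV3".equals(item.path("manufacturer").asText())
            ? "Levis" : item.path("manufacturer").asText();
    // prices is an array; take the amount of its first entry.
    String amount = item.path("prices").path(0).path("amount").asText();
    csv.append(item.path("itemNumber").asText()).append(';')
       .append(manufacturer).append(';')
       .append(item.path("name").asText()).append(';')
       .append(amount).append('\n');
}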
These are some of the sources I have already used:
https://www.baeldung.com/java-converting-json-to-csv
Converting JSON to XLS/CSV in Java
https://docs.aspose.com/cells/java/convert-json-to-csv/
I am working with the Jayway JsonPath library to obtain the correct 'id' from the JSON below, where the phoneNumbers type is 'iPhone'.
In general, I would like to know how to select something from the root element of a block when a specific condition is met in its sub-objects.
I tried the expressions below, which select the block associated with the iPhone type and a list of ids respectively, but I am not able to get the root-level id of the JSON object whose phone type is iPhone. Can someone please guide me? For this example I need to get the id 1.
To get the list of ids: $[*].id
To get the JSON object corresponding to the iPhone type: $[*].phoneNumbers[?(@.type=='iPhone')]
[
{
"id": "1",
"phoneNumbers": [
{
"type": "iPhone",
"number": "0123-4567-8888"
},
{
"type": "home",
"number": "0123-4567-8910"
}
]
},
{
"id": "2",
"phoneNumbers": [
{
"type": "x",
"number": "0123-4567-8888"
},
{
"type": "y",
"number": "0123-4567-8910"
}
]
}
]
I think you want your expression to look deeper.
First, find the objects that have an iPhone in the phone numbers list. Then just select the IDs.
Try $[?(@.phoneNumbers[*].type=="iPhone")].id.
Edit
It looks like the Java JsonPath library (I think you're using this) supports a number of functions. It doesn't list a contains(), but you might try the anyof operator:
$[?(@.phoneNumbers[*].type anyof ["iPhone"])].id
Note that this is definitely implementation-specific and will likely not work with any other library.
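If it helps, a minimal sketch of evaluating that expression from Java with Jayway JsonPath (assuming the array shown in the question is available as a String named json):
import com.jayway.jsonpath.JsonPath;
import java.util.List;

// Selects the ids of the root objects whose phoneNumbers contain an iPhone entry.
List<String> ids = JsonPath.read(json, "$[?(@.phoneNumbers[*].type anyof ['iPhone'])].id");
System.out.println(ids); // expected: ["1"]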
We are getting data from Elasticsearch using the Java API, and we could not exclude the default Elasticsearch metadata fields (i.e. "_index", "_type", "_id"):
"hits": [
{
"_index": "IndexTest",
"_type": "typeTest",
"_id": "AVkerYWpSOWHD5ykzT4i",
"_score": 1,
"_source": {
"id": "1",
"name": "Toto", ...
}
}
]
We are trying with filter_path in the Kibana console:
GET IndexTest/typeTest/_search?filter_path=hits.hits._source&_source=id,name
It works well, but now we want to do it using the Java API. Any ideas?
Thank you
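A minimal sketch of one way this might look with the (pre-7.x) TransportClient-style Java API that matches the typed index above; the client setup is assumed, and method names can differ between versions:
import java.util.Map;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.search.SearchHit;

// Ask for only the wanted _source fields, then read just the _source of each hit;
// the _index/_type/_id metadata is simply never used on the Java side.
SearchResponse response = client.prepareSearch("IndexTest")
        .setTypes("typeTest")
        .setFetchSource(new String[]{"id", "name"}, null)
        .get();

for (SearchHit hit : response.getHits().getHits()) {
    Map<String, Object> source = hit.getSourceAsMap();
    System.out.println(source.get("id") + " - " + source.get("name"));
}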
Considering the JSON below, what would be the best way to store it in SQLite?
I am already parsing this with Gson, but I'm wondering what would be a pain-free way to store this in SQLite and be able to retrieve it with no parsing issues.
I am already storing the desc and deposit objects as HashMaps. My issue is the lease object. What would be an elegant way to store the leasee array?
Should I just create another Leasee object? And then serialize the ArrayList into a Blob for storage into the database?
{
"name": "1",
"desc": {
"country": "1",
"city": "1",
"postal": "1",
"street": "1",
"substreet": "1",
"year": 1,
"sqm": 1
},
"owner": [
"1"
],
"manager": [
"1"
],
"lease": {
"leasee": [
{
"userId": "1",
"start": {
"$date": 1420070400000
},
"end": {
"$date": 1420070400000
}
}
],
"expire": {
"$date": 1420070400000
},
"percentIncrease": 1,
"dueDate": 1
},
"deposit": {
"bank": "China Construction Bank",
"description": "Personal Bank Account USA"
}
}
Storing everything in a BLOB ignores the benefit that a DB provides.
You have much of a relational database structure already described (however loosely) in the JSON:
Properties table with location and description info.
Persons table with names and contacts.
Roles table relating Properties and Persons (residents, managers, owners, service providers).
Leases table with terms related to Properties and Persons.
Payments table with payment info related to Leases.
You can manually write the primary keys into your JSON, taking care to match those relationships between tables, then insert the resulting rows by processing that modified JSON. See the SQLite documentation on using INSERT with AUTOINCREMENT.
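A rough sketch of that table layout, written here against Android's SQLiteDatabase purely for illustration (the table and column names are assumptions derived from the JSON above, and db is an already-opened database):
// One table per entity, with integer primary keys; lease and role rows point
// back at properties and persons instead of serializing lists into a BLOB.
db.execSQL("CREATE TABLE property (id INTEGER PRIMARY KEY AUTOINCREMENT, "
        + "name TEXT, country TEXT, city TEXT, postal TEXT, street TEXT, substreet TEXT, "
        + "year INTEGER, sqm INTEGER)");
db.execSQL("CREATE TABLE person (id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT)");
db.execSQL("CREATE TABLE role (property_id INTEGER REFERENCES property(id), "
        + "person_id INTEGER REFERENCES person(id), role TEXT)"); // owner, manager, leasee
db.execSQL("CREATE TABLE lease (id INTEGER PRIMARY KEY AUTOINCREMENT, "
        + "property_id INTEGER REFERENCES property(id), expire INTEGER, "
        + "percent_increase INTEGER, due_date INTEGER)");
db.execSQL("CREATE TABLE lease_period (lease_id INTEGER REFERENCES lease(id), "
        + "person_id INTEGER REFERENCES person(id), start_ms INTEGER, end_ms INTEGER)");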
My situation is this:
I have an index that has been generated by an entirely different app, and I want to transplant it into Solr and expose it from there.
I can do that easily, but the problem now is that I can't query on any of the fields; I just get an empty {} back.
I modified the schema to include the fields from the transplanted documents, and it doesn't seem to have made a difference.
When I do a q=*:* query I can see all the documents, so I know the data is there. This has to be easy; I am just missing it.
Would anyone like to give me an education?
Document example:
"docs": [
{
"code": "A000",
"description": "Cholera due to Vibrio cholerae 01, biovar cholerae",
"version": "10",
"concepts": [
"cholera",
"cholera-due to biovar cholera",
"cholera-due to vibrio cholerae 01",
"infectious diseases",
"intestinal infectious diseases"
],
"tags": [
"biovar",
"cholera",
"cholerae",
"diseases",
"infectious",
"intestinal",
"vibrio"
],
"category": "A00",
"etiology": "0"
}
]
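For illustration, the kind of field-level query being attempted might look like this with SolrJ (purely a sketch; the core name and URL are placeholders):
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

// Query a single field from the transplanted documents; with the current schema
// this comes back empty even though q=*:* shows every document.
HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
QueryResponse rsp = solr.query(new SolrQuery("code:A000"));
System.out.println(rsp.getResults().getNumFound());
solr.close();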