Remove default fields from search response in Elasticsearch using Java API

We are getting data from Elasticsearch using the Java API, and we cannot exclude Elasticsearch's default metadata fields (i.e., "_index", "_type", "_id"):
"hits": [
{
"_index": "IndexTest",
"_type": "typeTest",
"_id": "AVkerYWpSOWHD5ykzT4i",
"_score": 1,
"_source": {
"id": "1",
"name": "Toto", ...
}
]
We tried with filter_path in the Kibana console:
GET IndexTest/typeTest/_search?filter_path=hits.hits._source&_source=id,name
It works well, but now we want to do the same using the Java API. Any ideas?
Thank you
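filter_path is a feature of the REST layer, so the Java transport client does not expose it directly. One way to get the same result is to request only the wanted _source fields and then read just the _source of each hit, ignoring the metadata. A minimal sketch, assuming an ES 2.x/5.x transport Client (index, type and field names taken from the question):

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.search.SearchHit;

SearchResponse response = client.prepareSearch("IndexTest")
        .setTypes("typeTest")
        .setFetchSource(new String[]{"id", "name"}, null) // include only id and name
        .get();

for (SearchHit hit : response.getHits().getHits()) {
    // getSourceAsString() returns only the _source JSON,
    // without _index, _type, _id or _score
    String sourceJson = hit.getSourceAsString();
}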

Related

Java converting nested JSON to CSV

I am creating a Java service that gets a JSON object from an HTTP GET request.
This is an example of the JSON object I am getting:
{
  "data": [{
    "itemNumber": "547325645",
    "manufacturer": "LV3",
    "name": "Levis 501",
    "minimumQuantity": "1.0",
    "maximumQuantity": "10.0",
    "prices": [{
      "currency": "EUR",
      "amount": "80.0"
    }]
  }, {
    "itemNumber": "224145625",
    "manufacturer": "LV3",
    "name": "Levis 502",
    "minimumQuantity": "1.0",
    "maximumQuantity": "10.0",
    "prices": [{
      "currency": "EUR",
      "amount": "90.0"
    }]
  }],
  "pagination": {
    "offset": 0,
    "limit": 2,
    "total": 1925
  }
}
Right now I am using Jackson to map my HttpResponse into a JSON Object:
String productListResponse = new BufferedReader(
        new InputStreamReader(getResponse.getEntity().getContent(), StandardCharsets.UTF_8))
    .lines()
    .collect(Collectors.joining("\n"));
JsonNode productListJson = mapper.readTree(productListResponse);
The problem I run into now is that I can't parse this JSON properly. For example, I want to omit many values, like the quantities or the currencies. I also want to convert the manufacturer value to another text, in this case "Levis". What fails for now is the data list: it does not get along with the mapper, and I get back an empty object that does not contain the JSON, although I have already verified that I do get a response. The next problem is that the prices are also in a list, and I don't know how to handle that either.
Simply put, I just want a semicolon-separated CSV file where I pull individual values out of the JSON. Of course, I have already done a lot of research and found reliable information, but it does not apply to my case.
These are some of the sources I have already used:
https://www.baeldung.com/java-converting-json-to-csv
Converting JSON to XLS/CSV in Java
https://docs.aspose.com/cells/java/convert-json-to-csv/
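For the flat, semicolon-separated CSV described above, Jackson alone is enough: iterate the "data" array with JsonNode.path() and pick out only the wanted values. A minimal sketch, assuming the JSON shape shown in the question; the "LV3" → "Levis" mapping and the column choice are illustrative:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonToCsv {
    public static void main(String[] args) throws Exception {
        String json = "{ ... }"; // the response body from the GET request
        JsonNode root = new ObjectMapper().readTree(json);

        StringBuilder csv = new StringBuilder("itemNumber;manufacturer;name;amount\n");
        for (JsonNode item : root.path("data")) {
            String manufacturer = item.path("manufacturer").asText();
            if ("LV3".equals(manufacturer)) {
                manufacturer = "Levis"; // illustrative code-to-name mapping
            }
            // prices is an array; take the amount of its first entry
            String amount = item.path("prices").path(0).path("amount").asText();
            csv.append(item.path("itemNumber").asText()).append(';')
               .append(manufacturer).append(';')
               .append(item.path("name").asText()).append(';')
               .append(amount).append('\n');
        }
        System.out.println(csv);
    }
}

Note that path() never returns null (a "missing node" comes back instead), so the chained lookups are safe even when a field is absent.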

Elasticsearch 5.6: is there a way to modify the ES search response?

Let's say I have a search response like
"hits": [
{
"_index": "customer",
"_type": "doc",
"_id": "5",
"_score": 1,
"_source": {
"name": "TEST1",
"surname": "Soanems1"
}
},
I want to be able to modify the data before the user gets the response. Let's say the "name" field has a value of "TEST1"; I want the data at rest to stay "TEST1", but I want the user to see a different value, for example "TEST12345". Is there a way to do it?
Is there a way to create a custom plugin that does this transformation for me? (There are some non-trivial transformations I want to apply to the response.)
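For simple per-field rewrites there is a query-time option that avoids writing a plugin: a script field computes a derived value at search time, while the stored document stays untouched. A minimal sketch with the ES 5.6 transport client, assuming Painless scripting and that "name" is a keyword field; the output field name and the appended suffix are illustrative:

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.script.Script;

SearchResponse response = client.prepareSearch("customer")
        .addScriptField("displayName", // hypothetical output field name
                new Script("doc['name'].value + '2345'"))
        .get();
// each hit now carries a "displayName" script field ("TEST12345"),
// while the indexed document still holds "TEST1"

For transformations a script cannot express, a custom plugin that registers an ActionFilter can intercept and rewrite search responses, but that is considerably more involved.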

Good way to store many nested JSON objects and arrays as data structure in Android SQLite?

Considering the JSON below, what would be the best way to store it in SQLite?
I am already parsing this with Gson, but I am wondering what would be a pain-free way to store it in SQLite and be able to retrieve it without parsing issues.
I am already storing the desc and deposit objects as HashMaps. My issue is the lease object. What would be an elegant way to store the leasee array?
Should I just create another Leasee object and then serialize the ArrayList into a BLOB for storage in the database?
{
  "name": "1",
  "desc": {
    "country": "1",
    "city": "1",
    "postal": "1",
    "street": "1",
    "substreet": "1",
    "year": 1,
    "sqm": 1
  },
  "owner": ["1"],
  "manager": ["1"],
  "lease": {
    "leasee": [
      {
        "userId": "1",
        "start": { "$date": 1420070400000 },
        "end": { "$date": 1420070400000 }
      }
    ],
    "expire": { "$date": 1420070400000 },
    "percentIncrease": 1,
    "dueDate": 1
  },
  "deposit": {
    "bank": "China Construction Bank",
    "description": "Personal Bank Account USA"
  }
}
Storing everything in a BLOB ignores the benefit that a DB provides.
You have much of a relational database structure already described (however loosely) in the JSON:
Properties table with location and description info.
Persons table with names and contacts.
Roles table relating Properties and Persons (residents, managers, owners, service providers).
Leases table with terms related to Properties and Persons.
Payments table with payment info related to Leases.
You can manually write the primary keys into your JSON, taking care to match the relationships between tables, then insert the resulting rows by processing that modified JSON. See the SQLite documentation on using INSERT with auto-increment keys.
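A minimal sketch of such a schema using Android's SQLiteOpenHelper; all table and column names are illustrative, the date columns hold the epoch-millisecond values from the "$date" fields, and the Payments table is omitted for brevity:

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class PropertyDbHelper extends SQLiteOpenHelper {

    public PropertyDbHelper(Context context) {
        super(context, "property.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE property (" +
                "_id INTEGER PRIMARY KEY AUTOINCREMENT, " +
                "name TEXT, country TEXT, city TEXT, postal TEXT, " +
                "street TEXT, substreet TEXT, year INTEGER, sqm INTEGER)");
        db.execSQL("CREATE TABLE person (" +
                "_id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT)");
        // roles relate persons to properties (owner, manager, leasee)
        db.execSQL("CREATE TABLE role (" +
                "property_id INTEGER REFERENCES property(_id), " +
                "person_id INTEGER REFERENCES person(_id), " +
                "role TEXT)");
        // lease terms; dates stored as epoch millis
        db.execSQL("CREATE TABLE lease (" +
                "_id INTEGER PRIMARY KEY AUTOINCREMENT, " +
                "property_id INTEGER REFERENCES property(_id), " +
                "person_id INTEGER REFERENCES person(_id), " +
                "start_date INTEGER, end_date INTEGER, expire_date INTEGER, " +
                "percent_increase INTEGER, due_date INTEGER)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // illustrative only: a real app would migrate existing data
    }
}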

What is a good way to define user roles in MongoDB

Coming from a relational DB background, I find it easy to break apart users and their roles into normalized tables. But what is the customary way to do this in a MongoDB database?
This is the scenario I have:
({ "roles" :
[
{"role": "user"},
{"role": "manager"},
{"role": "admin"}
]
"privileges" :
[
{"privilege": "READ"},
{"privilege": "READ/WRITE"},
{"privilege": "ALL"}
]
"users" :
[
{"user": "Sammy"},
{"user": "Tom"},
{"user": "Fred"},
{"user": "Zack"}
]
"userPermissions" :
[
{"admin": "Sammy"},
{"manager": "Tom"},
{"user": "Fred"},
{"user": "Zack"}
]
})
Question : Is this an appropriate way to model user roles in Mongo?
If your roles are plain strings or a primitive tuple, you can store them as an array inside each user's document. If a role is a complex entity, you can store them as an array of doc-refs.
UPDATE:
This is a document from my userAccount collection, generated by the Spring Security Core Grails plugin:
{
  "_id": "541fdfdebaacef69047415a8",
  "authorities": [
    { "authority": "ROLE_USER" },
    { "authority": "ROLE_ADMIN" }
  ],
  "password": "lakdjalksdj87a68sd76as87d6a87sd6",
  "username": "someusername",
  "version": NumberLong(1)
}
Spring Security and its descendants are the standard security implementations in Java, so...
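The same embedded-array approach with the plain MongoDB Java driver might look like this; a minimal sketch, with the connection string, database and collection names chosen for illustration:

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.util.Arrays;

public class UserRolesExample {
    public static void main(String[] args) {
        MongoCollection<Document> users = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("app")
                .getCollection("users");

        // roles embedded as a plain array of strings inside the user document
        users.insertOne(new Document("username", "Sammy")
                .append("roles", Arrays.asList("ROLE_USER", "ROLE_ADMIN")));

        // matching a value inside the array finds all admins
        Document admin = users.find(new Document("roles", "ROLE_ADMIN")).first();
        System.out.println(admin);
    }
}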
MongoDB's own user roles are defined as JSON documents. Try the following links:
https://docs.mongodb.com/manual/reference/built-in-roles/
https://scalegrid.io/blog/creating-role-based-access-control-in-mongodb/

How to transplant an index into a Solr core

My situation is this:
I have an index that was generated by an entirely different app, and I want to transplant it and expose it from Solr.
I can easily do that, but the problem now is that I can't query on any of the fields; I just get an empty {} back.
I modified the schema to include the fields from the transplanted documents, and it doesn't seem to have made a difference.
When I do a q=*:* I can see all the documents, so I know the information is there. This has to be easy; I am just missing it.
Would anyone like to give me an education?
Document example:
"docs": [
{
"code": "A000",
"description": "Cholera due to Vibrio cholerae 01, biovar cholerae",
"version": "10",
"concepts": [
"cholera",
"cholera-due to biovar cholera",
"cholera-due to vibrio cholerae 01",
"infectious diseases",
"intestinal infectious diseases"
],
"tags": [
"biovar",
"cholera",
"cholerae",
"diseases",
"infectious",
"intestinal",
"vibrio"
],
"category": "A00",
"etiology": "0"
},
]
