How to update a nested entity in MongoDB using the Java API

I need to update the JSON below, for example by adding new entries inside case1/case2 or by updating the existing ones. How can I use Update and $set to perform this? Also, what if my REST service is a PATCH?
{
"MyBson": [{
"id": "someId",
"name": "name123",
"someEntity": [
"A",
"B"
],
"someEntityParams": [{
"case1": [{
"id": 133,
"name": "name"
},
{
"id": 124,
"name": "name1"
}
]
},
{
"case2": [{
"id": 135,
"name": "name2",
"onemoreField": "field"
},
{
"id": 136,
"name": "name3",
"onemoreField": "field"
},
{
"id": 137,
"name": "name4",
"onemoreField": "field"
}
]
}
]
}]
}

The following code snippet shows how to manipulate the given JSON string with the Jackson library to add a new JSON node and to update an existing node matching specific criteria.
You will have to adapt the logic to add/update nodes according to the incoming request JSON.
Code snippet
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

ObjectMapper mapper = new ObjectMapper();
ObjectNode root = (ObjectNode) mapper.readTree(jsonStr);
//create new node to be added
ObjectNode newNode = mapper.createObjectNode();
newNode.put("id", 125);
newNode.put("name", "name2");
ArrayNode arrayNode = (ArrayNode) root.get("MyBson");
arrayNode.forEach(e -> {
e.get("someEntityParams").forEach(e1 -> {
//add new node of case1
if (e1.has("case1")) {
((ArrayNode) e1.get("case1")).add(newNode);
}
//update existing node of case2 whose id is 136
if (e1.has("case2")) {
e1.get("case2").forEach(e2 -> {
if (e2.get("id").asInt() == 136) {
((ObjectNode) e2).put("onemoreField", "onemoreFieldNew");
}
});
}
});
});
System.out.println(root.toString());
Console output
{"MyBson":[{"id":"someId","name":"name123","someEntity":["A","B"],"someEntityParams":[{"case1":[{"id":133,"name":"name"},{"id":124,"name":"name1"},{"id":125,"name":"name2"}]},{"case2":[{"id":135,"name":"name2","onemoreField":"field"},{"id":136,"name":"name3","onemoreField":"onemoreFieldNew"},{"id":137,"name":"name4","onemoreField":"field"}]}]}]}

Related

MongoDB Java - Create a new ObjectId for each element of an existing array

I have an existing collection, containing several documents.
[{
"_id": "...1",
"prop1": "...",
"prop2": "...",
"someArray": [
{
"value": "sub element 1.1"
},
{
"value": "sub element 1.2"
},
{
"value": "sub element 1.3"
}
]
}, {
"_id": "...2",
"prop1": "...",
"prop2": "...",
"someArray": [
{
"value": "sub element 2.1"
},
{
"value": "sub element 2.2"
}
]
}, // many others here...
]
For each root document, I would like to add an _id property of type ObjectId on each sub-element of someArray. So, after I run my command, the content of the collection is the following:
[{
"_id": "...1",
"prop1": "...",
"prop2": "...",
"someArray": [
{
"_id": ObjectId("..."),
"value": "sub element 1.1"
},
{
"_id": ObjectId("..."),
"value": "sub element 1.2"
},
{
"_id": ObjectId("..."),
"value": "sub element 1.3"
}
]
}, {
"_id": "...2",
"prop1": "...",
"prop2": "...",
"someArray": [
{
"_id": ObjectId("..."),
"value": "sub element 2.1"
},
{
"_id": ObjectId("..."),
"value": "sub element 2.2"
}
]
}, // ...
]
Each ObjectId being, of course, unique.
The closest I got was with this:
db.getCollection('myCollection').updateMany({}, { "$set" : { "someArray.$[]._id" : ObjectId() } });
But every sub-element of the entire collection ends up with the same ObjectId value...
Ideally, I need to get this working using the Java driver for MongoDB. The closest version I have is the following (which has the exact same problem: all the ObjectIds created share the same value).
database
.getCollection("myCollection")
.updateMany(
Filters.ne("someArray", Collections.emptyList()), // do not update empty arrays
new Document("$set", new Document("someArray.$[el]._id", "ObjectId()")), // set the new ObjectId...
new UpdateOptions().arrayFilters(
Arrays.asList(Filters.exists("el._id", false)) // ... only when the _id property doesn't already exist
)
);
With MongoDB v4.4+, you can use $function to run JavaScript that assigns the _id to each element of the array.
db.collection.aggregate([
{
"$addFields": {
"someArray": {
$function: {
body: function(arr) {
return arr.map(function(elem) {
elem['_id'] = new ObjectId();
return elem;
})
},
args: [
"$someArray"
],
lang: "js"
}
}
}
}
])
Here is the Mongo playground for your reference. (It is slightly different from the code above, as the playground requires the JS code to be in double quotes.)
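Since the question asks for the Java driver, here is a rough sketch of the same $function stage built with Document objects (collection name assumed); note that aggregate by itself only returns documents, so the result would still need to be written back, for example with a $merge stage or a follow-up update:
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.util.Arrays;

MongoCollection<Document> collection = database.getCollection("myCollection"); // assumed name
Document assignIds = new Document("$function", new Document()
        .append("body", "function(arr) { return arr.map(function(elem) { elem._id = new ObjectId(); return elem; }) }")
        .append("args", Arrays.asList("$someArray"))
        .append("lang", "js"));
collection.aggregate(Arrays.asList(
        new Document("$addFields", new Document("someArray", assignIds))
)).forEach(doc -> System.out.println(doc.toJson()));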
For older versions of MongoDB, you will need to use JavaScript to loop over the documents and update them one by one.
db.getCollection("...").find({}).forEach(function(doc) {
doc.someArray = doc.someArray.map(function(elem) {
elem['_id'] = new ObjectId();
return elem;
})
db.getCollection("...").save(doc);
})
Here is what I managed to write in the end:
MongoCollection<Document> collection = database.getCollection("myCollection");
collection
.find(Filters.ne("someArray", Collections.emptyList()), MyItem.class)
.forEach(item -> {
item.getSomeArray().forEach(element -> {
if( element.getId() == null ){
collection.updateOne(
Filters.and(
Filters.eq("_id", item.getId()),
Filters.eq("someArray.value", element.getValue())
),
Updates.set("someArray.$._id", new ObjectId())
);
}
});
});
The value property of the sub-elements had to be unique (and luckily it was), and I had to perform separate updateOne operations in order to obtain a different ObjectId for each element.
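If the collection is large, those per-element updates could be batched into a single round trip with bulkWrite; a sketch under the same assumptions as the code above:
import com.mongodb.client.model.UpdateOneModel;
import com.mongodb.client.model.WriteModel;
import java.util.ArrayList;
import java.util.List;

List<WriteModel<Document>> ops = new ArrayList<>();
collection
        .find(Filters.ne("someArray", Collections.emptyList()), MyItem.class)
        .forEach(item -> {
            item.getSomeArray().forEach(element -> {
                if (element.getId() == null) {
                    ops.add(new UpdateOneModel<>(
                            Filters.and(
                                    Filters.eq("_id", item.getId()),
                                    Filters.eq("someArray.value", element.getValue())),
                            Updates.set("someArray.$._id", new ObjectId())));
                }
            });
        });
if (!ops.isEmpty()) {
    collection.bulkWrite(ops);   // one round trip; each model still gets its own ObjectId
}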

Dynamic JSON to CSV in Java: looking for generic code that works with any JSON, irrespective of the number of nodes in the file

Hi, I need to convert all JSON elements into CSV. The JSON is dynamic: the number of fields and their names change from file to file.
I have tried different methods, but in most cases I had to hard-code field names in the scripts to pull the data into CSV.
JSON file
[
{
"system": "Application",
"id": "12345",
"version": 1,
"event": "NEW",
"keywords": {
"ProductType": "ALL",
"Business": "USA",
},
"product": {
"type": "INS",
"startDate": 20190102,
"endDate": 20190104,
"cash": 100000.00,
"sub": {
"type": "Life",
"productId": 987,
"maturityDate": 20260421,
},
"paymentCalendar": [
"Monthly"
],
"duration": "20Y",
"Amount": 1000.00,
"cashFlows": [
{
"startDate": 20190102,
"endDate": 20190104,
"paymentDate": 20190104,
}
],
"principalFlows": [
{
"startDate": 20190102,
"endDate": 20190104,
"paymentDate": 20190102,
"currency": "USA",
"amount": 400.0
},
{
"startDate": 20190104,
"endDate": 20190104,
"paymentDate": 20190104,
"currency": "USA",
"amount": 600.0
}
]
},
"EventDate": 20190108,
"maturityDate": 20190104
}
]
The above fields are not constant; all fields will keep changing.
The expected output is below.
Using Jackson's ObjectMapper and Apache Commons CSV you can implement the functionality you require by reading the JSON and then visiting all the nodes.
If the node is a collection, visit all of its children with the field name or array index appended to the prefix; note that arrays and objects need to be handled separately.
If the node is not a collection, add it to the CSV output.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVPrinter;
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;

public void jsonToCsv(String json, Appendable appendable) throws IOException {
JsonNode root = new ObjectMapper().reader().readTree(json);
CSVPrinter printer = CSVFormat.DEFAULT.print(appendable);
appendNode(root.get(0), "", printer);
}
private void appendNode(JsonNode node, String prefix, CSVPrinter printer) throws IOException {
if (node.isArray()) {
for (int i = 0; i < node.size(); ++i) {
appendNode(node.get(i), String.format("%s/%d", prefix, i), printer);
}
} else if (node.isContainerNode()) {
Iterator<Map.Entry<String, JsonNode>> fields = node.fields();
while (fields.hasNext()) {
Map.Entry<String, JsonNode> field = fields.next();
appendNode(field.getValue(), String.format("%s/%s", prefix, field.getKey()), printer);
}
} else {
printer.printRecord(prefix.substring(1), node.asText());
}
}
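A possible usage (the file name is hypothetical): System.out can be passed directly, since PrintStream implements Appendable, and for the sample JSON above this prints one prefix,value row per leaf:
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

String json = new String(Files.readAllBytes(Paths.get("input.json")), StandardCharsets.UTF_8);
jsonToCsv(json, System.out);
// sample rows:
// system,Application
// keywords/ProductType,ALL
// product/principalFlows/0/amount,400.0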

JSON to Java POJO with dynamic key-value pairs

I have to create a POJO class for the following JSON. The problem is that the key p_d contains entries with dynamic names like s_t, n_t, n_p, etc. The real JSON is big and I am only facing a problem with that part, so I have shared a partial JSON.
I am using Jackson for parsing.
{
"flag": true,
"flag2": false,
"r_no": [
{
"room_type": 250067,
"no_of_rooms": 1,
"no_of_children": 1,
"no_of_adults": 2,
"description": "Executive Room, 1 King Bed, Non Smoking",
"children_ages": [
8
]
},
{
"room_type": 250067,
"no_of_rooms": 1,
"no_of_children": 0,
"no_of_adults": 2,
"description": "Executive Room, 1 King Bed, Non Smoking"
}
],
"r_code": "abc",
"r_key": "123",
"p_d": {
"s_t": [
{
"name": "xyz",
"cur": "INR"
},
{
"name": "xyz1",
"cur": "INR"
}
],
"n_t": [
{
"name": "xyz2",
"cur": "INR"
}
],
"n_p": [
{
"name": "xyz5",
"cur": "INR"
}
]
},
"cur": "INR"
}
For dynamic keys, use a Map<String, Object>:
ObjectMapper mapper = new ObjectMapper();
Map<String, Object> parsed = mapper.readValue(json,
new TypeReference<Map<String, Object>>() {});
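If you prefer a typed model over raw maps, here is a minimal sketch (class and field names are assumptions; only the parts relevant to p_d are modelled) in which Jackson binds the dynamic keys s_t, n_t, n_p into a map of lists:
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;
import java.util.Map;

@JsonIgnoreProperties(ignoreUnknown = true)   // skip the parts of the big JSON not modelled here
public class Response {
    public boolean flag;
    public String r_code;
    public String r_key;
    public String cur;
    public Map<String, List<Entry>> p_d;      // dynamic keys (s_t, n_t, n_p, ...) land here

    public static class Entry {
        public String name;
        public String cur;
    }
}

// usage: "json" is the request string from the question
Response r = new ObjectMapper().readValue(json, Response.class);
r.p_d.forEach((key, entries) -> System.out.println(key + " -> " + entries.size()));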

Jackson: traverse an encapsulated JSON tree

I have a schema like this (simplified):
{
"range": {
"offset": 0,
"limit": 1,
"total": 2
},
"items": [
{
"id": 11,
"name": "foo",
"children": [
{
"id": 112,
"name": "bar",
"children": [
{
"id": 113,
"name": "foobar",
"type": "file"
}
],
"type": "folder"
},
{
"id": 212,
"name": "foofoo",
"type": "file"
}
],
"type": "room"
},
{
"id": 21,
"name": "barbar",
"type": "room"
}
]
}
I need to read only specific values like "id" from the first room (item). For this I need to iterate through all items on every level (n items at the root, n items per children array) whose type is folder or file.
For now I have this code:
POJO
public static class Item {
public int id;
}
Jackson Tree Iteration
ObjectMapper mapper = new ObjectMapper();
com.fasterxml.jackson.databind.JsonNode root = mapper.readTree(JSON);
root = root.get("items").get(0);
TypeReference<List<Item>> typeRef = new TypeReference<List<Item>>(){};
List<Item> list = mapper.readValue(root.traverse(), typeRef);
for (Item f : list) {
System.out.println(f.id);
}
How can I get all ids of all children in all items with a specific type?
How can I avoid the "Unrecognized field" exception without defining the whole schema?
Thank you very much for your help!
Try using Java 8 streams; they let you do this in a few lines:
ObjectMapper mapper = new ObjectMapper();
// pass your JSON string in as "s"
Map obj = mapper.readValue(s, Map.class);
List<Object> items = (List<Object>) obj.get("items");
Object[] ids = items.stream()
        .filter(item -> "room".equals(((Map) item).get("type")))
        .map(item -> ((Map) item).get("id"))
        .toArray();
Use the readTree(...) method to parse the JSON without needing to define the entire schema and find Nodes called "id".
You can then use findValues("id") to get the List of values back.
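A minimal sketch of that approach; the extra filtering by type is an assumption about what you need:
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;

ObjectMapper mapper = new ObjectMapper();
JsonNode root = mapper.readTree(JSON);

// every "id" anywhere in the tree
List<JsonNode> allIds = root.findValues("id");
allIds.forEach(n -> System.out.println(n.asInt()));

// only ids of nodes with a specific type, e.g. "file"
root.findParents("type").stream()
        .filter(n -> "file".equals(n.path("type").asText()))
        .forEach(n -> System.out.println(n.path("id").asInt()));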

Exclude properties from Json data without deserializing it

I have Json data like this:
{
"_id": "123",
"transaction": {
"className": "ExpenseReport",
"id": "789",
"createdBy": {
"firstName": "Donald",
"lastName": "Morgan",
"address": {
"street1": "1362 Woodlawn Lane",
"street2": "Suite #100805",
"place": {
"city": "Darien",
"state": "CA",
"country": "USA",
"number": "OBJ-4823478",
"createdBy": "Brett Wright"
},
"zip": 88884,
"number": "OBJ-5740231",
"createdBy": "Brett Wright"
},
"number": "OBJ-3561551",
"createdBy": "Brett Wright"
},
"score": 12,
"reasonCodes": [
"these",
"are",
"strings"
]
}
}
I want a subset of this data after excluding some properties, say something like this:
{
"_id": "123",
"transactionType": "EXPENSE_REPORT",
"transaction": {
"className": "ExpenseReport",
"id": "789",
"createdBy": {
"firstName": "Donald",
"lastName": "Morgan",
"address": {
"street1": "1362 Woodlawn Lane",
"street2": "Suite #100805",
"place": {
"city": "Darien",
"state": "CA",
"country": "USA"
},
"createdBy": "Brett Wright"
},
"createdBy": "Brett Wright"
},
"score": 12
}
}
Now one way would be to deserialize the original JSON data into a POJO, use Jackson views to annotate the required properties, and then serialize the same POJO again to get the JSON data without those properties.
But I want to achieve this WITHOUT DESERIALIZING the JSON data, say by just parsing it and removing the key-value pairs if they are not found in a given collection. Is anyone aware of a library that does that?
Jackson allows you to do only the parsing step using ObjectMapper.readTree()
JsonNode root = om.readTree(input);
The resulting JsonNodes are mutable, so something like this does the job:
ObjectNode place = (ObjectNode)(root.findPath("transaction")
.findPath("createdBy")
.findPath("address")
.findPath("place")
);
place.remove("number");
This is, unfortunately, not too nice, but it can easily be wrapped into a generic method that takes a property path:
void deleteProperty(JsonNode root, List<String> propPath)
{
JsonNode node = root;
for (String propName: propPath.subList(0, propPath.size() - 1)) {
node = node.findPath(propName);
}
// completely ignore missing properties
if ((! node.isMissingNode()) && (! node.isEmpty())) {
if (node instanceof ObjectNode) {
final ObjectNode parent = (ObjectNode)node;
parent.remove(propPath.get(propPath.size() - 1));
}
}
}
It is then possible to write out the modified node tree using writeTree().
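For example, one possible set of calls to remove the properties that are absent from the desired output above (paths derived from that JSON, java.util.Arrays assumed imported):
deleteProperty(root, Arrays.asList("transaction", "createdBy", "address", "place", "number"));
deleteProperty(root, Arrays.asList("transaction", "createdBy", "address", "place", "createdBy"));
deleteProperty(root, Arrays.asList("transaction", "createdBy", "address", "zip"));
deleteProperty(root, Arrays.asList("transaction", "createdBy", "address", "number"));
deleteProperty(root, Arrays.asList("transaction", "createdBy", "number"));
deleteProperty(root, Arrays.asList("transaction", "reasonCodes"));
String result = om.writeValueAsString(root);   // or stream it out with om.writeTree(generator, root)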
There is also the property filter API. Unfortunately, while it is easy to filter out individual properties with it, it is non-trivial to use it for property paths: in your case, a simple filter could only remove all createdBy properties, not just one of them.
