I have a model which looks like this:
{
  "projectName": "MyFirstProject",
  "projectId": "1234",
  "testCaseList": [
    {
      "testCaseName": "TestCase1",
      "steps": [
        {
          "Action": "Click on this",
          "Result": "pass"
        },
        {
          "Action": "Click on that",
          "Result": "pass"
        }
      ]
    },
    {
      "testCaseName": "TestCase2",
      "steps": [
        {
          "Action": "Click on him",
          "Result": "pass"
        },
        {
          "Action": "Click on her",
          "Result": "pass"
        }
      ]
    }
  ]
}
However, as this is a nested object, I am having difficulties updating it using the method:
default PanacheUpdate update(String update, Object... params)
I am using the repository pattern, and below is my code snippet:
List<TestCase> newTestCaseList = ...;
update("testCaseList", newTestCaseList).where("projectId=?1",projectId);
which throws the following error:
org.bson.json.JsonParseException: JSON reader was expecting ':' but found ','.
at org.bson.json.JsonReader.readBsonType(JsonReader.java:149)
at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:82)
at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41)
at org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:101)
at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84)
at org.bson.BsonDocument.parse(BsonDocument.java:63)
at io.quarkus.mongodb.panache.runtime.MongoOperations.executeUpdate(MongoOperations.java:634)
at io.quarkus.mongodb.panache.runtime.MongoOperations.update(MongoOperations.java:629)
My Current Approach
What currently works for me is to use default void update(Entity entity) instead when updating nested objects.
This however presents a few considerations:
Extra code is required to fetch the entire document, parse it, and update the required fields.
Since update(Entity entity) works on a document level, it will also update unchanged parts of the document, which isn't ideal.
The error you encountered simply reflects a current limitation of Panache for MongoDB and the PanacheQL updates it offers.
The issue can be worked around with the native MongoDB Java API, which is accessible through PanacheMongoEntityBase#mongoCollection():
mongoCollection().updateOne(
        eq("projectId", projectId),
        new Document("$set", new Document("testCaseList", newTestCaseList))
);
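For completeness, here is a minimal sketch of how that workaround could sit inside a repository, assuming the repository pattern from the question (the Project and TestCase types and the ProjectRepository name are taken from the question or assumed):
import static com.mongodb.client.model.Filters.eq;

import java.util.List;

import javax.enterprise.context.ApplicationScoped;

import org.bson.Document;

import io.quarkus.mongodb.panache.PanacheMongoRepository;

@ApplicationScoped
public class ProjectRepository implements PanacheMongoRepository<Project> {

    // Replaces only the embedded testCaseList; the rest of the document is left untouched.
    // Assumes TestCase is a mapped POJO that the collection's codec registry can encode.
    public void updateTestCases(String projectId, List<TestCase> newTestCaseList) {
        mongoCollection().updateOne(
                eq("projectId", projectId),
                new Document("$set", new Document("testCaseList", newTestCaseList)));
    }
}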
Related
I have a graph which should have the following json structure
{
  "models": [
    {
      "name": "Model1",
      "id": 19,
      "diagram": [
        {
          "name": "Diagram1",
          "id": "34"
        },
        {
          "name": "Diagram2",
          "id": "36"
        }
      ]
    },
    {
      "name": "Model2",
      "id": 14,
      "diagram": [
        {
          "name": "Diagram2",
          "id": "32"
        }
      ]
    }
  ]
}
Here is my sample graph:
g.addV('org').property('name', 'Org_seppmed').as('org1').
  addV('org').property('name', 'Org_siemens').as('org2').
  addV('user').property('name', 'U_Sebastian').as('user1').
  addV('user').property('name', 'U_Jaroslaw').as('user2').
  addV('user').property('name', 'U_Jürgen').as('user3').
  addV('user').property('name', 'U_Carsten').as('user4').
  addV('user').property('name', 'U_DevUser').as('user5').
  addE('administratedBy').from('org1').to('user1').
  addE('hasEmployee').from('org1').to('user2').
  addE('hasEmployee').from('org1').to('user3').
  addE('hasEmployee').from('org2').to('user4').
  addE('hasEmployee').from('org1').to('user5').
  addE('hasGuest').from('org1').to('user4').
  addV('project').property('name', 'P_Bremsen').as('projectmbt1').
  addV('project').property('name', 'P_Blinker').as('projectmbt2').
  addE('owns').from('user1').to('projectmbt1').
  addE('owns').from('user2').to('projectmbt2').
  addE('writes').from('user2').to('projectmbt1').
  addE('writes').from('user3').to('projectmbt1').
  addE('reads').from('user4').to('projectmbt1').
  addE('writes').from('user1').to('projectmbt2').
  addE('writes').from('user3').to('projectmbt2').
  addV('model').property('name', 'M_Bremse').as('modelmbt1').
  addV('model').property('name', 'M_Blinker').as('modelmbt2').
  addV('model').property('name', 'M_Bremse2').as('modelmbt3').
  addE('has').from('projectmbt1').to('modelmbt1').
  addE('has').from('projectmbt1').to('modelmbt3').
  addE('has').from('projectmbt2').to('modelmbt2').
  addV('diagram').property('name', 'D_D1').as('diagramd1').
  addV('diagram').property('name', 'D_D2').as('diagramd2').
  addV('diagram').property('name', 'D_D3').as('diagramd3').
  addV('diagram').property('name', 'D_D1').as('diagramd4').
  addV('testsuite').property('name', 'T_TestD1').as('testsuited1').
  addE('has').from('modelmbt1').to('diagramd1').
  addE('has').from('modelmbt1').to('diagramd2').
  addE('has').from('modelmbt2').to('diagramd3').
  addE('has').from('modelmbt3').to('diagramd4').
  addE('has').from('modelmbt1').to('testsuited1').
  addV('initial').as('initMBT1D1').
  addV('precondition').as('preMBT1D1').
  addV('teststep').property('name', 'N_Blinking left').as('node1MBT1D1').
  addV('teststep').property('name', 'N_Blinking right').as('node2MBT1D1').
  addV('teststep').property('name', 'N_Hazard flashers').as('node3MBT1D1').
  addV('teststep').property('name', 'N_Emergency flashers').as('node4MBT1D1').
  addV('teststep').property('name', 'N_Blinking off').as('node5MBT1D1').
  addV('node').as('connodeMBT1D1').
  addV('postcondition').as('postMBT1D1').
  addV('final').as('finalMBT1D1').
  addE('has').from('diagramd1').to('initMBT1D1').
  addE('has').from('diagramd1').to('preMBT1D1').
  addE('has').from('diagramd1').to('node1MBT1D1').
  addE('has').from('diagramd1').to('node2MBT1D1').
  addE('has').from('diagramd1').to('node3MBT1D1').
  addE('has').from('diagramd1').to('node4MBT1D1').
  addE('has').from('diagramd1').to('node5MBT1D1').
  addE('has').from('diagramd1').to('connodeMBT1D1').
  addE('has').from('diagramd1').to('postMBT1D1').
  addE('has').from('diagramd1').to('finalMBT1D1').
  addE('pointsTo').from('initMBT1D1').to('preMBT1D1').
  addE('pointsTo').from('preMBT1D1').to('node1MBT1D1').
  addE('pointsTo').from('preMBT1D1').to('node2MBT1D1').
  addE('pointsTo').from('preMBT1D1').to('node3MBT1D1').
  addE('pointsTo').from('preMBT1D1').to('node4MBT1D1').
  addE('pointsTo').from('preMBT1D1').to('node5MBT1D1').
  addE('pointsTo').from('node1MBT1D1').to('postMBT1D1').
  addE('pointsTo').from('postMBT1D1').to('node1MBT1D1').
  addE('pointsTo').from('node2MBT1D1').to('postMBT1D1').
  addE('pointsTo').from('node3MBT1D1').to('postMBT1D1').
  addE('pointsTo').from('node4MBT1D1').to('postMBT1D1').
  addE('pointsTo').from('node5MBT1D1').to('postMBT1D1').
  addE('pointsTo').from('postMBT1D1').to('connodeMBT1D1').
  addE('linksTo').from('connodeMBT1D1').to('diagramd2').
  addE('pointsTo').from('connodeMBT1D1').to('finalMBT1D1').
  addV('initial').as('initMBT1D2').
  addV('teststep').property('name', 'N_Trigger left').as('node1MBT1D2').
  addV('teststep').property('name', 'N_Trigger right').as('node2MBT1D2').
  addV('verificationpoint').property('name', 'N_Triggercheck').as('node3MBT1D2').
  addV('final').as('finalMBT1D2').
  addE('has').from('diagramd2').to('initMBT1D2').
  addE('has').from('diagramd2').to('node1MBT1D2').
  addE('has').from('diagramd2').to('node2MBT1D2').
  addE('has').from('diagramd2').to('node3MBT1D2').
  addE('has').from('diagramd2').to('finalMBT1D2').
  addE('pointsTo').from('initMBT1D2').to('node1MBT1D2').
  addE('pointsTo').from('initMBT1D2').to('node2MBT1D2').
  addE('pointsTo').from('node1MBT1D2').to('node3MBT1D2').
  addE('pointsTo').from('node2MBT1D2').to('node3MBT1D2').
  addE('pointsTo').from('node3MBT1D2').to('finalMBT1D2')
Now I am trying to find a query that returns the structure above:
g.V().hasLabel("model").out().hasLabel("diagram").path().by(elementMap())
The result is like:
==>path[{id=12384, label=model, name=Model2}, {id=28904, label=diagram, name=Diagram2}]
==>path[{id=37056, label=model, name=Model1}, {id=16448, label=diagram, name=Diagram1}]
==>path[{id=37056, label=model, name=Model1}, {id=24808, label=diagram, name=Diagram1}]
My expectation was to get a Map<Model, List<Diagram>>, but instead I get a completely new entry for each diagram of a model, rather than all related diagrams being grouped under their model.
How can I solve my problem?
If you want to generate a map, you will need to group using the model as the key, something like what is shown below. Obviously, you will not really want to use the label but something that identifies the model, such as a name property, if you have one.
g.V().hasLabel("model").
group().
by('name').
by(out().hasLabel("diagram").path().by(elementMap()).fold())
EDITED after discussion in comments:
If you do not want the whole path in the result but just the unique diagram nodes and their properties, the query can be amended as shown below. This version also uses all of the properties of the model vertex as the group (map) key.
g.V().hasLabel("model").
group().
by(elementMap()).
by(out().hasLabel("diagram").elementMap().fold())
With the first operation, matching on id [1602271], it creates a new collection and saves one document (shown below).
{
  "_id": "1602271",
  "date": "2019-02-11T06:25:13.425Z",
  "currentStatus": "scheduled",
  "statusHistory": [
    {
      "status": "onboarded",
      "date": "2018-11-02T10:07:11.167Z"
    },
    {
      "status": "preference_ready",
      "date": "2018-11-02T10:08:56.359Z"
    },
    {
      "status": "scheduled",
      "date": "2018-11-02T10:26:38.721Z"
    }
  ]
}
With the second operation, id [1602131], it does not create a new document; instead it overwrites the older one (the JSON above).
{
  "_id": "1602131",
  "date": "2019-01-22T07:08:58.253Z",
  "currentStatus": "scheduled",
  "statusHistory": [
    {
      "status": "onboarded",
      "date": "2018-11-02T06:07:28.765Z"
    },
    {
      "status": "preference_ready",
      "date": "2018-11-02T06:11:30.777Z"
    },
    {
      "status": "scheduled",
      "date": "2018-11-29T05:48:57.871Z"
    }
  ]
}
Please refer to the code below:
public static final String STATUS_COLLECTION_NAME = "TeacherStatus";
public static final String ARCHIVE_STATUS_COLLECTION_NAME = "ArchiveTeacherStatus";

Aggregation aggregation = Aggregation.newAggregation(
        match(where("_id").is(teacherId)),
        out(ARCHIVE_STATUS_COLLECTION_NAME));
mongoOperations.aggregate(aggregation, STATUS_COLLECTION_NAME, TeacherStatus.class);
This works as intended. From https://docs.mongodb.com/manual/reference/operator/aggregation/out/:
If the collection specified by the $out operation already exists, then upon completion of the aggregation, the $out stage atomically replaces the existing collection with the new results collection.
What you want becomes possible in MongoDB 4.2, which introduces the $merge aggregation stage: unlike $out, it can insert or replace individual documents in the target collection instead of replacing the whole collection.
Having re-read your code: why are you using an aggregation pipeline with $out to copy a single document? That's hunting sparrows with a cannon.
You can do it more reliably through the app. Read the document, then save it into the other collection.
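A minimal sketch of that approach with Spring Data MongoDB's MongoOperations, as used in your snippet (collection names taken from your constants):
// Read the single document by _id from the source collection ...
TeacherStatus status = mongoOperations.findById(teacherId, TeacherStatus.class, STATUS_COLLECTION_NAME);
if (status != null) {
    // ... and write just that one document into the archive collection;
    // other documents already in the archive are left untouched.
    mongoOperations.save(status, ARCHIVE_STATUS_COLLECTION_NAME);
}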
Is it possible to set the value of a context variable from Java? I have modified the node JSON and added an action; in that action's result property I want to set data from my local database. So I am getting the action in the Java code and trying to set the value of the action object's result property. Below is the code I am trying, but it is not working. Can someone suggest a better approach for this?
if (response.getActions() != null) {
    System.out.println("actions : " + response.getActions());
    for (int i = 0; i < response.getActions().size(); i++) {
        System.out.println(" i : " + response.getActions().get(i).getName());
        if (response.getActions().get(i).getName().equals("Apply_For_Loan")) {
            response.getActions().get(i).setResultVariable("123");
        }
    }
}
For setting the value of result_variable into the context, below is my code.
if (response.getActions().get(i).getName().equals("Apply_For_Loan")) {
    System.out.println("in action");
    Assistant service = new Assistant("2018-12-12");
    service.setUsernameAndPassword(userId, password);
    service.setEndPoint(endPoint);
    String result = response.getActions().get(i).getResultVariable();
    Context context = response.getContext();
    context.put(result, "123");
    MessageOptions msg = new MessageOptions.Builder(watsonId)
            .input(new InputData.Builder("Apply For Loan").build())
            .context(context)
            .build();
    response = service.message(msg).execute();
    System.out.println("msg : " + response);
}
Below is the response I am getting after re-executing the Assistant call.
{
  "output": {
    "generic": [
      {
        "response_type": "text",
        "text": "Hello RSR, you loan application of 5420 is created. Please note the Loan Number for future use "
      }
    ],
    "text": [
      "Hello RSR, you loan application of 5420 is created. Please note the Loan Number for future use "
    ],
    "nodes_visited": [
      "node_1_1544613102320",
      "node_1_1544613102320"
    ],
    "log_messages": []
  },
  "input": {
    "text": "Apply For Loan"
  },
  "intents": [
    {
      "intent": "ApplyForLoan",
      "confidence": 1.0
    }
  ],
  "entities": [],
  "context": {
    "number": 9.971070056E9,
    "$Loan_Number": "123",
    "system": {
      "initialized": true,
      "dialog_stack": [
        {
          "dialog_node": "node_1_1544613102320"
        }
      ],
      "dialog_turn_counter": 3.0,
      "dialog_request_counter": 3.0,
      "_node_output_map": {
        "node_1_1544613102320": {
          "0": [
            0.0
          ]
        }
      },
      "branch_exited": true,
      "branch_exited_reason": "completed"
    },
    "Mail_Id": "Email_Id",
    "conversation_id": "b59c7a02-2cc6-4149-ae29-602796ab22e1",
    "person": "RSR",
    "rupees": 5420.0
  },
  "actions": [
    {
      "name": "Apply_For_Loan",
      "type": "client",
      "parameters": {
        "pername": "RSR",
        "loanamount": 5420.0
      },
      "result_variable": "$Loan_Number"
    }
  ]
}
In the above response, $Loan_Number is the result variable that I have updated from the Java code, and the same result_variable is used in the output text to return the $Loan_Number. But in the output text it still comes back blank, and in actions it also still comes back blank. Why is that?
The simple answer to your question is no, you can't set anything in the JSON Model using setResultVariable. It should work like this:
1) Define an action on your dialog node
The result_variable defines the <result_variable_name>, i.e. where the result of that action should be stored in the conversation context. If you execute some server-side action, you read the result from the specified context location. In this example the location is context.result_of_action; the context prefix may be omitted. More details are in the Watson Assistant documentation on dialog actions.
2) Process the action in the Java client
DialogNodeAction action = response.getActions().get(0);
final String result_variable = action.getResultVariable();
The result_variable is like a key. You obtain the value from your DB and add it to the context:
Context context = response.getContext();
context.put(result_variable, "value from DB");
MessageOptions options = new MessageOptions.Builder("Your ID")
        .input(new InputData.Builder("Your response").build())
        .context(context) // send the updated context back with the next message
        .build();
response = service.message(options).execute();
Finally, the Watson Assistant dialog (or some other app) can get the result from the client:
"context": {
"result_of_action": "value from DB",
The objects you read using the Java API are parsed from JSON using Gson and need to be serialized back to JSON when constructing a new API call.
Edit after question update
In your example:
"actions": [
{
"name": "Apply_For_Loan",
"type": "client",
"parameters": {
"pername": "RSR",
"loanamount": 5420.0
},
"result_variable": "$Loan_Number"
}
the key result_variable tells other modules in your app where to find the result of your action. So $Loan_Number should not be the value but the key to the value in the session context (a kind of method contract).
Let's say you set the value at context.my_result; your Watson Assistant should then be able to access the "123" from the DB in the next dialog node under "context.my_result".
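For illustration, a dialog node response could then reference that context variable directly. A hypothetical node output (my_result is only an example name, not something from your workspace):
{
  "output": {
    "generic": [
      {
        "response_type": "text",
        "text": "Your loan number is $my_result."
      }
    ]
  }
}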
I am having a problem while querying Elasticsearch. Below is my query:
GET _search
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "name": "SomeName"
          }
        },
        {
          "match": {
            "type": "SomeType"
          }
        },
        {
          "match": {
            "productId": "ff134be8-10fc-4461-b620-79s51199c7qb"
          }
        },
        {
          "range": {
            "request_date": {
              "from": "2018-08-22T12:16:37,392",
              "to": "2018-08-28T12:17:41,137",
              "format": "YYYY-MM-dd'T'HH:mm:ss,SSS"
            }
          }
        }
      ]
    }
  }
}
I am using three match queries and a range query in the bool query. My intention is to get docs with these exact matches and within this date range. Here, if I change the name or type value, I don't get results. But for productId, if I put just ff134be8, I still get results. Does anyone know why that is? The exact match works on name and type but not on productId.
You need to map your productId field as keyword to avoid tokenization. With the standard tokenizer, "ff134be8-10fc-4461-b620-79s51199c7qb" is split into the tokens ["ff134be8", "10fc", "4461", "b620", "79s51199c7qb"].
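You can see this for yourself with the _analyze API; a quick check of how the standard analyzer splits the value:
POST _analyze
{
  "analyzer": "standard",
  "text": "ff134be8-10fc-4461-b620-79s51199c7qb"
}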
You have different options:
1/ Use a term query to match without analyzing the content of the field:
...
{
  "term": {
    "productId": "ff134be8-10fc-4461-b620-79s51199c7qb"
  }
},
...
2/ If you are on Elasticsearch 6.x, you can change your request to:
...
{
  "match": {
    "productId.keyword": "ff134be8-10fc-4461-b620-79s51199c7qb"
  }
},
...
This works because Elasticsearch creates a keyword subfield of type keyword for every string field.
The best option is, of course, the first one: always use a term query when you are trying to match exact content.
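If you control the index mapping, you can also declare productId explicitly as a keyword field. A sketch (my_index is a placeholder; an existing field's type cannot be changed in place, so this usually means reindexing, and on 6.x the properties sit under the mapping type):
PUT my_index
{
  "mappings": {
    "properties": {
      "productId": {
        "type": "keyword"
      }
    }
  }
}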
I have a query in valid JSON format which works well in Kibana or Sense when I use a GET request. I am also able to build this query using XContentBuilder, but I need to send the query to Elasticsearch in its JSON form as it is. Is it possible to store the query in a JSON file and query Elasticsearch using this JSON file?
My query -
{
  "min_score": 5,
  "sort": [
    {
      "_geo_distance": {
        "location": [40.715, -73.988],
        "order": "asc",
        "unit": "km",
        "mode": "min",
        "distance_type": "arc"
      }
    }
  ],
  "query": {
    "bool": {
      "must": {
        "query_string": {
          "query": "hospital",
          "analyzer": "english"
        }
      },
      "filter": {
        "geo_distance": {
          "distance": "50000km",
          "location": {
            "lat": 40.715,
            "lon": -73.988
          }
        }
      }
    }
  }
}
What I want is to store this query in a JSON file and use this JSON file to send a search request directly, without using a query builder.
You can use a search template and store it in the cluster state; see the official documentation about search templates, especially the section on pre-registered (stored) templates.
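For illustration, a stored template could be registered and invoked roughly like this (the template id hospital_search and the search_term parameter are placeholders; the geo_distance filter and sort from your query can be templated in the same way):
POST _scripts/hospital_search
{
  "script": {
    "lang": "mustache",
    "source": {
      "query": {
        "query_string": {
          "query": "{{search_term}}",
          "analyzer": "english"
        }
      }
    }
  }
}

GET _search/template
{
  "id": "hospital_search",
  "params": {
    "search_term": "hospital"
  }
}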