Java Akka HTTP POST method unable to unmarshall nested request JSON - java

I am trying to send nested JSON in a POST request body to an Akka HTTP route, but I am getting the exception:
Cannot unmarshall JSON as Campaign
Can someone please help me resolve the issue? Below is the code:
concat(
    post(() -> pathPrefix(PathMatchers.segment(ACCOUNTS_SEGMENT).slash(PathMatchers.segment()), (accountId) ->
        path(PathMatchers.segment(JOBS_SEGMENT).slash(PathMatchers.uuidSegment()), (jobId) -> {
            return entity(Jackson.unmarshaller(Campaign.class), campaign -> {
                System.out.println("###############" + campaign.getName());
                CompletionStage<Done> futureSaved = executeCampaignProcess(jobId.toString(), accountId);
                return onSuccess(futureSaved, done ->
                    complete(StatusCodes.ACCEPTED, ACCEPTED_EXECUTE_CAMPAIGN_REQUEST)
                );
            });
        })
    ))
)
The nested JSON in the request body is as follows:
{
  "id" : "2d2cee47-40c9-4ebe-80bb-f8a38e6379f9",
  "name" : "DDDAmol",
  "description" : "",
  "type" : "FINITE",
  "senderDisplayName" : "",
  "senderAddress" : "",
  "dialingOrder" : [ "PRIORITY", "RETRY", "REGULAR" ],
  "finishType" : "FINISH_AFTER",
  "finishTime" : null,
  "finishAfter" : 0,
  "checkTimeBasedFinishCriteria" : false,
  "createdOn" : [ 2023, 1, 31, 8, 12, 40, 32309000 ],
  "updatedOn" : [ 2023, 2, 14, 14, 11, 4, 758967300 ],
  "lastExecutedOn" : [ 2023, 2, 18, 13, 51, 28, 821000000 ],
  "contactList" : "CD_06_SMS_QUEUED_DELAYED",
  "rule" : null,
  "strategy" : {
    "id" : "a41d8895-7a67-4cce-a39b-5a6ee5e8b4a9",
    "name" : "Simple",
    "type" : "SMS",
    "description" : "",
    "smsText" : "Hello",
    "smsPace" : 40,
    "smsPaceTimeUnit" : "SECOND",
    "createdOn" : [ 2023, 1, 31, 8, 12, 15, 209992700 ],
    "updatedOn" : [ 2023, 1, 31, 8, 12, 15, 209992700 ]
  }
}
I am referring to the link below, but it is not working:
https://doc.akka.io/docs/akka-http/current/common/json-support.html
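One possible cause worth checking (a sketch under an assumption, not a confirmed fix): the createdOn / updatedOn / lastExecutedOn values are serialized as arrays, which is how jackson-datatype-jsr310 writes java.time timestamps, and Jackson can only read them back if the JavaTimeModule from that dependency is registered. Jackson.unmarshaller(Campaign.class) uses a default ObjectMapper without that module; the Java DSL also accepts a custom mapper:
import akka.http.javadsl.marshallers.jackson.Jackson;
import akka.http.javadsl.model.HttpEntity;
import akka.http.javadsl.unmarshalling.Unmarshaller;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;

public class CampaignUnmarshalling {
    // Mapper that can bind java.time fields (assuming Campaign declares
    // createdOn/updatedOn/lastExecutedOn as java.time types) and that
    // ignores JSON properties the class does not declare.
    private static final ObjectMapper MAPPER = new ObjectMapper()
            .registerModule(new JavaTimeModule())
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    // Use this in the route instead of Jackson.unmarshaller(Campaign.class):
    // entity(CampaignUnmarshalling.unmarshaller(), campaign -> ...)
    public static Unmarshaller<HttpEntity, Campaign> unmarshaller() {
        return Jackson.unmarshaller(MAPPER, Campaign.class);
    }
}
If unmarshalling still fails, removing fields from the payload one by one until it succeeds will point at the property Jackson cannot bind.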

Related

ElasticSearch - Searching partial text in String

What is the best way to use Elasticsearch to search for exact partial text in a string?
In SQL the method would be:
LIKE '%PARTIAL TEXT%'
LIKE '%ARTIAL TEX%'
In Elasticsearch, the current method being used is:
{
  "query": {
    "match_phrase_prefix": {
      "name": "PARTIAL TEXT"
    }
  }
}
However, it breaks whenever you remove the first and last characters of the string, as shown below (no results found):
{
  "query": {
    "match_phrase_prefix": {
      "name": "ARTIAL TEX"
    }
  }
}
There will no doubt be numerous suggestions for solving this, such as using an ngram analyzer, but I believe the simplest is to use fuzziness.
{
  "query": {
    "match": {
      "name": {
        "query": "artial tex",
        "operator": "and",
        "fuzziness": 1
      }
    }
  }
}
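For reference, a sketch of the same fuzzy query issued from Java with the Elasticsearch high-level REST client (the index name "myindex" and the client setup are assumptions, not from the question):
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.index.query.Operator;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class FuzzyPartialSearch {
    // Builds and runs the fuzzy match query shown above.
    public static SearchResponse search(RestHighLevelClient client) throws Exception {
        SearchSourceBuilder source = new SearchSourceBuilder().query(
                QueryBuilders.matchQuery("name", "artial tex")
                        .operator(Operator.AND)
                        .fuzziness(Fuzziness.ONE));
        return client.search(new SearchRequest("myindex").source(source), RequestOptions.DEFAULT);
    }
}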
There are multiple ways to do partial search and each comes with its own tradeoffs.
1. Wildcard
For wildcards, perform the search on the "keyword" field instead of "text".
{
  "query": {
    "wildcard": {
      "name.keyword": "*artial tex*"
    }
  }
}
Wildcards have poor performance; there are better alternatives.
2. Match/Match_phrase/Match_phrase_prefix
If you are searching for whole tokens like "PARTIAL TEXT", you can simply use a match query: all documents which contain the tokens "PARTIAL" and "TEXT" will be returned.
If the order of the tokens matters, use match_phrase.
If you want to search for partial tokens, use match_phrase_prefix. The prefix match is only done on the last token in the search input, e.g. "partial tex".
This is not suitable for your use case, since you want to search anywhere.
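Since the surrounding questions are Java-based, here is a minimal sketch of the three variants with the QueryBuilders helper (the field name "name" is taken from the question):
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class MatchVariants {
    // Tokens may appear anywhere, in any order.
    static final QueryBuilder MATCH = QueryBuilders.matchQuery("name", "PARTIAL TEXT");
    // Tokens must appear in order, as a phrase.
    static final QueryBuilder PHRASE = QueryBuilders.matchPhraseQuery("name", "PARTIAL TEXT");
    // Phrase match with a prefix match on the last token only.
    static final QueryBuilder PHRASE_PREFIX = QueryBuilders.matchPhrasePrefixQuery("name", "partial tex");
}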
3. N grams
The ngram tokenizer first breaks text down into words whenever it encounters one of a list of specified characters, then it emits N-grams of each word of the specified length.
N-grams are like a sliding window that moves across the word: a continuous sequence of characters of the specified length. They are useful for querying languages that don't use spaces or that have long compound words, like German.
Index settings:
{
  "settings": {
    "max_ngram_diff" : "5",
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "ngram",
          "min_gram": 5,
          "max_gram": 7
        }
      }
    }
  }
}
POST index29/_analyze
{
  "analyzer": "my_analyzer",
  "text": "Partial text"
}
Tokens Generated:
"tokens" : [
{
"token" : "Parti",
"start_offset" : 0,
"end_offset" : 5,
"type" : "word",
"position" : 0
},
{
"token" : "Partia",
"start_offset" : 0,
"end_offset" : 6,
"type" : "word",
"position" : 1
},
{
"token" : "Partial",
"start_offset" : 0,
"end_offset" : 7,
"type" : "word",
"position" : 2
},
{
"token" : "artia",
"start_offset" : 1,
"end_offset" : 6,
"type" : "word",
"position" : 3
},
{
"token" : "artial",
"start_offset" : 1,
"end_offset" : 7,
"type" : "word",
"position" : 4
},
{
"token" : "artial ",
"start_offset" : 1,
"end_offset" : 8,
"type" : "word",
"position" : 5
},
{
"token" : "rtial",
"start_offset" : 2,
"end_offset" : 7,
"type" : "word",
"position" : 6
},
{
"token" : "rtial ",
"start_offset" : 2,
"end_offset" : 8,
"type" : "word",
"position" : 7
},
{
"token" : "rtial t",
"start_offset" : 2,
"end_offset" : 9,
"type" : "word",
"position" : 8
},
{
"token" : "tial ",
"start_offset" : 3,
"end_offset" : 8,
"type" : "word",
"position" : 9
},
{
"token" : "tial t",
"start_offset" : 3,
"end_offset" : 9,
"type" : "word",
"position" : 10
},
{
"token" : "tial te",
"start_offset" : 3,
"end_offset" : 10,
"type" : "word",
"position" : 11
},
{
"token" : "ial t",
"start_offset" : 4,
"end_offset" : 9,
"type" : "word",
"position" : 12
},
{
"token" : "ial te",
"start_offset" : 4,
"end_offset" : 10,
"type" : "word",
"position" : 13
},
{
"token" : "ial tex",
"start_offset" : 4,
"end_offset" : 11,
"type" : "word",
"position" : 14
},
{
"token" : "al te",
"start_offset" : 5,
"end_offset" : 10,
"type" : "word",
"position" : 15
},
{
"token" : "al tex",
"start_offset" : 5,
"end_offset" : 11,
"type" : "word",
"position" : 16
},
{
"token" : "al text",
"start_offset" : 5,
"end_offset" : 12,
"type" : "word",
"position" : 17
},
{
"token" : "l tex",
"start_offset" : 6,
"end_offset" : 11,
"type" : "word",
"position" : 18
},
{
"token" : "l text",
"start_offset" : 6,
"end_offset" : 12,
"type" : "word",
"position" : 19
},
{
"token" : " text",
"start_offset" : 7,
"end_offset" : 12,
"type" : "word",
"position" : 20
}
]
You can search on any of the tokens generated. You can also set "token_chars" : [ "letter", "digit" ] in the tokenizer to generate tokens that exclude spaces.
Your choice among the options above will depend on your data size and performance requirements. Wildcard is more flexible, but matching is done at run time, so performance is slow; if the data size is small, this would be the ideal solution. With ngrams, tokens are generated at index time; this takes more memory, but search is faster, so for large data sizes this should be the ideal solution.

How to change JSON request to payload form

I have a JSON request like this:
{
  "record_type": "item",
  "data": [
    {
      "item_id_wms": 75985,
      "item_type": "Inventory",
      "item_code": "ITEM8080808",
      "display_name": "ITEM E888",
      "category_item": "",
      "upc_code" : 12345678,
      "base_unit": "BOX",
      "height": 12,
      "width": 13,
      "depth": 12,
      "uom_volume": 10
    }
  ]
}
But when I use request.setPayload(request.toString());, I get this error when hitting my other service:
Response Body : {"error" : {"code" : "SYNTAX_ERROR", "message" : "org.mozilla.javascript.EcmaError: SyntaxError: Unexpected token: c (INVOCATION_WRAPPER$sys#24)."}}
But if I manually use this code, I can hit my endpoint:
OAuthRequest requestSendToNS = new OAuthRequest(Verb.POST, baseUrl);
requestSendToNS.setPayload(("{\"record_type\":\"item\", \"data\": [ { \"item_id_wms\": 99920, \"item_type\": \"kit\", \"item_code\": \"ITEM808080\", \"display_name\": \"ITEM E888\", \"category_item\": \"kit\", \"upc_code\" : 12345678, \"base_unit\": \"PCS\", \"height\": 12, \"width\": 13, \"depth\": 12, \"uom_volume\": 10, \"component\" : [ { \"item_id_component\":\"ITMIDAS0000000007\", \"quantity_component\": 18, \"unit_id_component\":\"PCS\" }, { \"item_id_component\":\"ITMIDAS0000000001\", \"quantity_component\": 15, \"unit_id_component\":\"PCS\" } ] }, { \"item_id_wms\": 99922, \"item_type\": \"kit\", \"item_code\": \"ITEM808080\", \"display_name\": \"ITEM E888\", \"category_item\": \"kit\", \"upc_code\" : 12345678, \"base_unit\": \"PCS\", \"height\": 12, \"width\": 13, \"depth\": 12, \"uom_volume\": 10, \"component\" : [ { \"item_id_component\":\"ITMIDAS0000000007\", \"quantity_component\": 18, \"unit_id_component\":\"PCS\" }, { \"item_id_component\":\"ITMIDAS0000000001\", \"quantity_component\": 15, \"unit_id_component\":\"PCS\" } ] }, { \"item_id_wms\": 99924, \"item_type\": \"kit\", \"item_code\": \"ITEM808080\", \"display_name\": \"ITEM E888\", \"category_item\": \"kit\", \"upc_code\" : 12345678, \"base_unit\": \"PCS\", \"height\": 12, \"width\": 13, \"depth\": 12, \"uom_volume\": 10, \"component\" : [ { \"item_id_component\":\"ITMIDAS0000000007\", \"quantity_component\": 18, \"unit_id_component\":\"PCS\" }, { \"item_id_component\":\"ITMIDAS0000000001\", \"quantity_component\": 15, \"unit_id_component\":\"PCS\" } ] } ] }"));
How can I convert my JSON request into a payload string like that?
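The error suggests that toString() on the request object does not produce the JSON text itself. Rather than hand-writing the escaped string, one option (a sketch, assuming a Map-based body; use your own POJO if you have one) is to serialize a plain Java structure with Jackson and pass the result to setPayload:
import java.util.List;
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;

public class PayloadExample {
    public static String buildPayload() throws Exception {
        // Abbreviated item; add the remaining fields the same way.
        Map<String, Object> item = Map.of(
                "item_id_wms", 75985,
                "item_type", "Inventory",
                "item_code", "ITEM8080808",
                "display_name", "ITEM E888",
                "base_unit", "BOX");
        Map<String, Object> body = Map.of("record_type", "item", "data", List.of(item));
        // Produces valid JSON text, unlike request.toString():
        return new ObjectMapper().writeValueAsString(body);
    }
}
Then: requestSendToNS.setPayload(PayloadExample.buildPayload());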

Search match text in Elasticsearch SpringBoot by using percentage

I'm new to Elasticsearch with Spring Boot. I don't know how to match text in Elasticsearch from Spring Boot by percentage. For example, I have the text "Hello world". Can I set a percentage of 50% or 70% that must match my text? I already tried the minimumShouldMatch property, but it doesn't seem to work for my case.
Anyone help me please, thanks!
You could use a bool should query: split your search phrase into terms and set minimum_should_match according to your percentage.
Example query
{
  "query": {
    "bool": {
      "should": [
        { "term": { "my_field": "hello" } },
        { "term": { "my_field": "world" } },
        { "term": { "my_field": "i'm" } },
        { "term": { "my_field": "alive" } }
      ],
      "minimum_should_match": 2
    }
  }
}
This will find "hello world", "hello alive", etc.
To split a text into terms, you should use the _analyze endpoint of your index.
Analyze and split terms:
POST myindex/_analyze
{
  "field": "my_field",
  "text": "hello world i'm alive"
}
This gives you a result like the one below to populate your query, and keeps the query terms consistent with the field's analyzer (for example, if you use a custom analyzer):
{
  "tokens" : [
    { "token" : "hello", "start_offset" : 0,  "end_offset" : 5,  "type" : "<ALPHANUM>", "position" : 0 },
    { "token" : "world", "start_offset" : 6,  "end_offset" : 11, "type" : "<ALPHANUM>", "position" : 1 },
    { "token" : "i'm",   "start_offset" : 12, "end_offset" : 15, "type" : "<ALPHANUM>", "position" : 2 },
    { "token" : "alive", "start_offset" : 16, "end_offset" : 21, "type" : "<ALPHANUM>", "position" : 3 }
  ]
}
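As a side note, minimum_should_match also accepts percentage values directly (e.g. "50%"), which maps naturally onto the percentage requirement. A minimal sketch with the Java QueryBuilders API; the field name and search text are taken from the example above:
import org.elasticsearch.index.query.MatchQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class PercentageMatch {
    // Require at least 50% of the analyzed terms of the text to match.
    static final MatchQueryBuilder QUERY = QueryBuilders
            .matchQuery("my_field", "hello world i'm alive")
            .minimumShouldMatch("50%");
}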

RegEx for extracting text from a file in NiFi

I have a JSON response like the one below, and I only want to extract the following text from the file using the ExtractText processor in NiFi. But it says the pattern is not a valid Java expression.
JSON Response
"17" : {
"columnId" : 17,
"columnName" : "id",
"value" : "1234:;5678"
}
"17" : {
"columnId" : 17,
"columnName" : "id",
"value" : "1234:;5678"
},
"19" : {
"columnId" : 19,
"columnName" : "HelloWorld",
"value" : "Test 1:;34130"
},
"21" : {
"columnId" : 21,
"columnName" : "Testing",
"value" : "Test"
}
"17" : {
"columnId" : 17,
"columnName" : "id",
"value" : "1299:;6775"
},
"19" : {
"columnId" : 19,
"columnName" : "HelloWorld",
"value" : "Test 2.:;34147"
},
"21" : {
"columnId" : 21,
"columnName" : "Testing",
"value" : "Test"
}
"17" : {
"columnId" : 17,
"columnName" : "id",
"value" : "1299:;6775"
},
"19" : {
"columnId" : 19,
"columnName" : "HelloWorld",
"value" : "Test.:;34147"
},
"21" : {
"columnId" : 21,
"columnName" : "globalregions",
"value" : "Test"
}
"
I have tried the expression:
"17" : {(.*?)\}
but it's not working.
The expected result should be:
"17" : {
"columnId" : 17,
"columnName" : "id",
"value" : "1234:;5678"
}
"17" : {
"columnId" : 17,
"columnName" : "id",
"value" : "1299:;6775"
}
Normally you should have unique keys in a JSON object, and your JSON has several "17" keys in the same object...
However, the following regexp should work for your JSON:
"17"\s*:\s*\{[^}]*\}
You can try it here: https://regex101.com/r/8RiPHu/1/
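The same pattern compiles fine in plain Java, which is what NiFi's ExtractText validates against. A small self-contained check (the sample string is abbreviated from the question):
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ExtractSeventeen {
    public static void main(String[] args) {
        String input = "\"17\" : {\n  \"columnId\" : 17,\n  \"value\" : \"1234:;5678\"\n},\n"
                + "\"19\" : {\n  \"columnId\" : 19\n},\n"
                + "\"17\" : {\n  \"columnId\" : 17,\n  \"value\" : \"1299:;6775\"\n}";
        // Note the escaped braces; the unescaped '{' in the original attempt
        // is likely what made it an invalid Java regular expression.
        Matcher m = Pattern.compile("\"17\"\\s*:\\s*\\{[^}]*\\}").matcher(input);
        while (m.find()) {
            System.out.println(m.group());
            System.out.println("---");
        }
    }
}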

How to return just the matched elements from a mongoDB array

I've been looking into this question for a week and I can't understand why it still doesn't work...
I have this object into my MongoDB database:
{
  produc: [
    {
      cod_prod: "0001",
      description: "Ordenador",
      price: 400,
      current_stock: 3,
      min_stock: 1,
      cod_zone: "08850"
    },
    {
      cod_prod: "0002",
      description: "Secador",
      price: 30,
      current_stock: 10,
      min_stock: 2,
      cod_zone: "08870"
    },
    {
      cod_prod: "0003",
      description: "Portatil",
      price: 500,
      current_stock: 8,
      min_stock: 4,
      cod_zone: "08860"
    },
    {
      cod_prod: "0004",
      description: "Disco Duro",
      price: 100,
      current_stock: 20,
      min_stock: 5,
      cod_zone: "08850"
    },
    {
      cod_prod: "0005",
      description: "Monitor",
      price: 150,
      current_stock: 0,
      min_stock: 2,
      cod_zone: "08850"
    }
  ]
}
I would like to query for array elements with specific cod_zone ("08850") for example.
I found the $elemMatch projection, which supposedly returns just the array elements that match the query, but I don't know why I'm getting the whole object.
This is the query I'm using:
db['Collection_Name'].find(
  {
    produc: {
      $elemMatch: {
        cod_zone: "08850"
      }
    }
  }
);
And this is the result I expect:
{
  produc: [
    {
      cod_prod: "0001",
      denominacion: "Ordenador",
      precio: 400,
      stock_actual: 3,
      stock_minimo: 1,
      cod_zona: "08850"
    },
    {
      cod_prod: "0004",
      denominacion: "Disco Duro",
      precio: 100,
      stock_actual: 20,
      stock_minimo: 5,
      cod_zona: "08850"
    },
    {
      cod_prod: "0005",
      denominacion: "Monitor",
      precio: 150,
      stock_actual: 0,
      stock_minimo: 2,
      cod_zona: "08850"
    }
  ]
}
I'm making a Java program using the MongoDB Java connector, so I really need the query for the Java connector, but I think I will be able to derive it if I know the mongo query.
Thank you so much!
This is possible through the aggregation framework. The pipeline passes all documents in the collection through the following operations:
$unwind - outputs a document for each element of the produc array field by deconstructing it.
$match - filters only the documents that match the cod_zone criteria.
$group - groups the input documents by a specified identifier expression and applies the accumulator expression $push to each group.
$project - then reconstructs each document in the stream:
db.collection.aggregate([
  {
    "$unwind": "$produc"
  },
  {
    "$match": {
      "produc.cod_zone": "08850"
    }
  },
  {
    "$group": {
      "_id": null,
      "produc": {
        "$push": {
          "cod_prod": "$produc.cod_prod",
          "description": "$produc.description",
          "price": "$produc.price",
          "current_stock": "$produc.current_stock",
          "min_stock": "$produc.min_stock",
          "cod_zone": "$produc.cod_zone"
        }
      }
    }
  },
  {
    "$project": {
      "_id": 0,
      "produc": 1
    }
  }
])
will produce:
{
  "result" : [
    {
      "produc" : [
        {
          "cod_prod" : "0001",
          "description" : "Ordenador",
          "price" : 400,
          "current_stock" : 3,
          "min_stock" : 1,
          "cod_zone" : "08850"
        },
        {
          "cod_prod" : "0004",
          "description" : "Disco Duro",
          "price" : 100,
          "current_stock" : 20,
          "min_stock" : 5,
          "cod_zone" : "08850"
        },
        {
          "cod_prod" : "0005",
          "description" : "Monitor",
          "price" : 150,
          "current_stock" : 0,
          "min_stock" : 2,
          "cod_zone" : "08850"
        }
      ]
    }
  ],
  "ok" : 1
}
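Since the asker needs this through the MongoDB Java connector, here is a minimal sketch of the same pipeline using the modern Java driver's (4.x) aggregation builders. The connection string and database name are assumptions; note that pushing "$produc" pushes each whole matched element rather than rebuilding it field by field, which yields the same result here:
import static com.mongodb.client.model.Accumulators.push;
import static com.mongodb.client.model.Aggregates.group;
import static com.mongodb.client.model.Aggregates.match;
import static com.mongodb.client.model.Aggregates.project;
import static com.mongodb.client.model.Aggregates.unwind;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Projections.excludeId;
import static com.mongodb.client.model.Projections.fields;
import static com.mongodb.client.model.Projections.include;

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import java.util.Arrays;
import org.bson.Document;

public class MatchedArrayElements {
    public static void main(String[] args) {
        MongoCollection<Document> coll = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("test")                 // assumed database name
                .getCollection("Collection_Name");   // collection name from the question

        coll.aggregate(Arrays.asList(
                unwind("$produc"),
                match(eq("produc.cod_zone", "08850")),
                group(null, push("produc", "$produc")),
                project(fields(excludeId(), include("produc")))
        )).forEach(doc -> System.out.println(doc.toJson()));
    }
}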
