I have a MongoDB aggregation that works in the mongo shell but returns an error when I attempt to run it from within a Java application.
The aggregation query in question is as follows:
db.assets.aggregate([
{ $lookup: {
from: "vulnerabilities",
let: { id: "$_id", jdkId: "$jdkId" },
pipeline: [
{ $match:
{ $expr:
{ $and: [
{ $eq: [ "$resolved", false ] },
{ $or: [
{ $eq: [ "$assetId", "$$id" ] },
{ $eq: [ "$assetId", "$$jdkId" ] }
] }
] }
}
}
],
as: "unresolvedVulnerabilities"
} },
{ $addFields: {
maxCvssBaseScore: {
$max: {
$max: "$unresolvedVulnerabilities.cves.cvssBaseScore"
}
}
} },
{ $skip: 0 },
{ $limit: 100 }
] ).pretty()
When I attempt to run this query in a Java application, I get the following error:
{
"operationTime": {
"$timestamp": {
"t": 1645001907,
"i": 1
}
},
"ok": 0.0,
"errmsg": "Invalid $addFields :: caused by :: Unrecognized expression '$max '",
"code": 168,
"codeName": "InvalidPipelineOperator",
"$clusterTime": {
"clusterTime": {
"$timestamp": {
"t": 1645001907,
"i": 1
}
},
"signature": {
"hash": {
"$binary": "AAAAAAAAAAAAAAAAAAAAAAAAAAA=",
"$type": "00"
},
"keyId": {
"$numberLong": "0"
}
}
}
}
The code for generating the addFields stage of the aggregation pipeline is as follows:
Bson addFieldsStage = new Document().append("$addFields",
new Document().append(PROPERTY_MAX_CVSS_BASE_SCORE,
new Document().append("$max",
new Document().append("$max ", "$" + PROPERTY_UNRESOLVED_VULNERABILITIES + "." + PROPERTY_CVES + "." + PROPERTY_CVSS_BASE_SCORE)
)
)
);
An example of the underlying data is as follows:
{
"id": "eb843e46-901a-3d0e-8d6e-1315c78cf5f7",
"name": "test-server1#test.a.b.c",
"type": "liberty",
"productName": "WebSphere Application Server Liberty Network Deployment",
"version": "20.0.0.9",
"features": [
"el-3.0",
"jsp-2.3",
"servlet-3.1",
"ssl-1.0",
"transportSecurity-1.0",
"usageMetering-1.0"
],
"apars": [],
"hostName": "test.a.b.c",
"serverName": "test-server1",
"installDirectory": "/opt/ibm/wlp/",
"profileDirectory": "/opt/ibm/wlp/usr/",
"operatingSystem": "Linux",
"operatingSystemVersion": "3.10.0-1160.53.1.el7.x86_64",
"jdkId": "579a03c6-76d0-3813-ad61-51428041987b",
"unresolvedVulnerabilities": [
{
"id": "156bb40b-826d-31e8-ab61-e0d3ee8b6446",
"name": "6520468 : IBM J9 VM#test.a.b.c",
"description": "There are multiple vulnerabilities in the IBM® SDK, Java™ Technology Edition that is shipped with IBM WebSphere Application Server. These might affect some configurations of IBM WebSphere Application Server Traditional, IBM WebSphere Application Server Liberty and IBM WebSphere Application Server Hypervisor Edition. These products have addressed the applicable CVEs. If you run your own Java code using the IBM Java Runtime delivered with this product, you should evaluate your code to determine whether the complete list of vulnerabilities is applicable to your code. For a complete list of vulnerabilities, refer to the link for \"IBM Java SDK Security Bulletin\" located in the References section for more information. HP fixes are on a delayed schedule.",
"assetId": "579a03c6-76d0-3813-ad61-51428041987b",
"securityBulletinId": "7e4684ae-8252-354e-be6d-4268af2d272e",
"resolved": false,
"cves": [
{
"id": "CVE-2021-35578",
"description": "An unspecified vulnerability in Java SE related to the JSSE component could allow an unauthenticated attacker to cause a denial of service resulting in a low availability impact using unknown attack vectors.",
"cvssBaseScore": 5.3
},
{
"id": "CVE-2021-35564",
"description": "An unspecified vulnerability in Java SE related to the Keytool component could allow an unauthenticated attacker to cause no confidentiality impact, low integrity impact, and no availability impact.",
"cvssBaseScore": 5.3
}
],
"remediations": [
{
"startVersion": "8.0.0.0",
"endVersion": "8.0.6.36",
"fixPack": "8.0.7.0"
}
],
"created": "2022-01-11T10:58:43Z",
"createdBy": "server-registration-processor",
"updated": "2022-01-11T10:58:43Z",
"updatedBy": "server-registration-processor",
"secondsExposed": 0
},
{
"id": "0f6006e6-a8ae-3cb6-bb7e-ba3afbf93996",
"name": "6489683 : test-server1#test.a.b.c",
"description": "There are multiple vulnerabilities in the Apache Commons Compress library that is used by WebSphere Application Server Liberty. This has been addressed.",
"assetId": "eb843e46-901a-3d0e-8d6e-1315c78cf5f7",
"securityBulletinId": "12de7238-ff4e-3252-a05f-19d51a3f8bf0",
"resolved": false,
"cves": [
{
"id": "CVE-2021-36090",
"description": "Apache Commons Compress is vulnerable to a denial of service, caused by an out-of-memory error when large amounts of memory are allocated. By reading a specially-crafted ZIP archive, a remote attacker could exploit this vulnerability to cause a denial of service condition against services that use Compress' zip package.",
"cvssBaseScore": 7.5
},
{
"id": "CVE-2021-35517",
"description": "Apache Commons Compress is vulnerable to a denial of service, caused by an out of memory error when allocating large amounts of memory. By persuading a victim to open a specially-crafted TAR archive, a remote attacker could exploit this vulnerability to cause a denial of service condition against services that use Compress' tar package.",
"cvssBaseScore": 5.5
}
],
"remediations": [
{
"startVersion": "20.0.0.1",
"endVersion": "20.0.0.12",
"operator": "OR",
"iFixes": [
"PH39418"
],
"fixPack": "21.0.0.10"
}
],
"created": "2022-01-11T10:58:43Z",
"createdBy": "vulnerability-manager",
"updated": "2022-01-11T10:58:43Z",
"updatedBy": "vulnerability-manager",
"secondsExposed": 0
}
],
"groups": [
"NO_GROUP"
],
"created": "2022-01-11T10:58:43Z",
"createdBy": "server-registration-processor",
"updated": "2022-01-25T15:49:16Z",
"updatedBy": "vulnerability-manager"
}
Is anyone able to tell me what I am doing wrong?
It turns out that it was a fat-finger error: one of the $max strings included an additional space at the end.
The Java code for the working stage is:
Bson addFieldsStage = new Document().append("$addFields",
new Document().append(PROPERTY_MAX_CVSS_BASE_SCORE,
new Document().append("$max",
new Document().append("$max", "$" + PROPERTY_UNRESOLVED_VULNERABILITIES + "." + PROPERTY_CVES + "." + PROPERTY_CVSS_BASE_SCORE)
)
)
);
Many thanks to prasad_ for spotting this.
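As a side note, the outer stage can also be built with the driver's Aggregates.addFields helper instead of a hand-assembled "$addFields" document, which removes one opportunity for this kind of typo. A minimal sketch, assuming the same field-name constants as above:

import org.bson.Document;
import org.bson.conversions.Bson;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Field;

// Builds { $addFields: { maxCvssBaseScore: { $max: { $max: "$unresolvedVulnerabilities.cves.cvssBaseScore" } } } }
// without spelling the "$addFields" stage name as a raw string.
Bson addFieldsStage = Aggregates.addFields(new Field<>(
        PROPERTY_MAX_CVSS_BASE_SCORE,
        new Document("$max",
                new Document("$max", "$" + PROPERTY_UNRESOLVED_VULNERABILITIES
                        + "." + PROPERTY_CVES + "." + PROPERTY_CVSS_BASE_SCORE))));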
I am trying to run an openCypher query using CypherGremlinClient.
This is how I initialise the CypherGremlinClient:
org.apache.commons.configuration.Configuration configuration = new BaseConfiguration();
configuration.setProperty("port", neptuneProp.getPort());
configuration.setProperty("hosts", neptuneProp.getOpencypherhost());
configuration.setProperty("connectionPool.enableSsl", "true");
Cluster cluster = Cluster.open(configuration);
Client gremlinClient = cluster.connect();
return CypherGremlinClient.translating(gremlinClient);
My query is
MATCH p=(n)-[r]->(d) WHERE ID(n) = '123' RETURN n,r
This returns results with all the attributes, including ~id and ~label, which I need for nodes; for edges I am looking at ~id, _inV and _outV. However, this query runs slowly, and most of the time I get a memory error from the Neptune server.
On the other hand, I am using the method below, which executes the query faster but does not return the attributes I am looking for.
Config neptunePropOpencypherConfig = Config.builder()
.withConnectionTimeout(30, TimeUnit.SECONDS)
.withMaxConnectionPoolSize(1000)
.withDriverMetrics()
.withLeakedSessionsLogging()
.withEncryption()
.withTrustStrategy(Config.TrustStrategy.trustSystemCertificates())
.build();
GraphDatabase.driver("bolt://" neptune.getHost() + ":" + neptuneProp.getPort(), neptunePropOpencypherConfig)
Please help me run my query faster with CypherGremlinClient, or help me get all the other attributes using:
Transaction readTx = readDiver.session().beginTransaction();
Result results = readTx.run("MATCH p=(n)-[r]->(d) WHERE ID(n) = '" + wgiId + "' RETURN n,r");
Or, if you know of one, please suggest another approach.
I am looking for this kind of response:
{
"results": [
{
"n": {
"~id": "123",
"~entityType": "node",
"~labels": [
"ontology"
],
"~properties": {
"lastrevid": 0,
"P98": "1151332690",
"labels": "{ \"de\" : [ \"Ivan Shedoff\" ] }",
"aliases": "{ \"de\" : [ \"Ivan Shedoff\" ] }",
"description": "{ }",
"P26": "4.61517E+19"
}
},
"r": {
"~id": "123$11d27a77-1227-422c-9134-6e96d1cb7c79",
"~entityType": "relationship",
"~start": "123",
"~end": "Q3",
"~type": "claim",
"~properties": {
"claimCode": "P5"
}
}
}
]
}
Neptune now supports openCypher as a query language; see here. Support is currently in lab mode but will soon be GA.
Using the openCypher endpoint would be much easier than trying to do a translation. If you use the HTTPS endpoint, you will get a result that is almost exactly what you specified above; an example is below:
{
"results": [
{
"a": {
"~id": "22",
"~entityType": "node",
"~labels": [
"airport"
],
"~properties": {
"desc": "Seattle-Tacoma",
"lon": -122.30899810791,
"runways": 3,
"type": "airport",
"country": "US",
"region": "US-WA",
"lat": 47.4490013122559,
"elev": 432,
"city": "Seattle",
"icao": "KSEA",
"code": "SEA",
"longest": 11901
}
}
}
]
}
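If you call the HTTPS endpoint directly, any HTTP client will do. A minimal sketch using Java 11's java.net.http, with a placeholder cluster endpoint and port (Neptune accepts the openCypher statement as a form parameter named query):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class NeptuneOpenCypherExample {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and port -- substitute your own cluster values.
        String endpoint = "https://your-neptune-endpoint:8182/openCypher";
        String cypher = "MATCH p=(n)-[r]->(d) WHERE ID(n) = '123' RETURN n,r";
        String form = "query=" + URLEncoder.encode(cypher, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The body is JSON in the "results" shape shown above, including ~id,
        // ~entityType, ~labels/~type and ~properties for nodes and relationships.
        System.out.println(response.body());
    }
}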
Consider the JSON snippet below:
{
"header": {
"systemId": "1"
},
"body": {
"approvalType": "S",
"requester": "CRM",
"approver": "V",
"additionalInfoList": [
{
"additionalInfoItem": {
"value": [
{
"secret": [
{
"question": "1"
}
]
},
{
"secret": [
{
"question": "2"
}
]
},
{
"secret": [
{
"question": "3"
}
]
}
]
}
},
{
"additionalInfoItem": {
"name": "key2",
"value": [
{
"secret": [
{
"question": "00"
}
]
},
{
"secret": [
{
"question": "002"
}
]
},
{
"secret": [
{
"question": "003"
}
]
}
]
}
}
]
}
}
For this JSON path:
$.body.additionalInfoList[*].additionalInfoItem.value[*].secret[*].question
the API gives
[
"1",
"2",
"3",
"00",
"002",
"003"
]
I am using the REQUIRE_PROPERTIES configuration option, which configures JsonPath to require that the properties defined in the path exist when an indefinite path is evaluated.
If one of the question values in the above JSON is not sent in the request, an exception like the following is thrown: No results for path: $['body']['additionalInfoList'][1]['additionalInfoItem']['value'][0]['secret'][0]['question']
I have a requirement to collect all the other values for the question tag even when a com.jayway.jsonpath.PathNotFoundException is thrown. How can I achieve this?
On the other hand, if I use the SUPPRESS_EXCEPTIONS option, how can I know whether a path is missing?
I do not have a great answer here; however, to get it done I would suggest processing the response twice:
once with SUPPRESS_EXCEPTIONS (or simply with no option, which may work as well),
and then with REQUIRE_PROPERTIES to detect the errors.
This should allow you to handle the scenario as described.
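A minimal sketch of that two-pass approach, assuming the Jayway json-path library and a json String holding the request body:

import com.jayway.jsonpath.Configuration;
import com.jayway.jsonpath.JsonPath;
import com.jayway.jsonpath.Option;
import com.jayway.jsonpath.PathNotFoundException;
import java.util.List;

String path = "$.body.additionalInfoList[*].additionalInfoItem.value[*].secret[*].question";

// Pass 1: lenient read collects every question value that is actually present.
Configuration lenient = Configuration.builder().options(Option.SUPPRESS_EXCEPTIONS).build();
List<String> questions = JsonPath.using(lenient).parse(json).read(path);

// Pass 2: strict read is only used to find out whether anything was missing.
Configuration strict = Configuration.builder().options(Option.REQUIRE_PROPERTIES).build();
boolean hasMissingQuestion = false;
try {
    JsonPath.using(strict).parse(json).read(path);
} catch (PathNotFoundException e) {
    hasMissingQuestion = true; // e.getMessage() names the first missing path
}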
I have indexed my PDF file in Elasticsearch using the ingest-attachment processor plugin, and now I am searching the file based on the contents of the PDF.
For example, my PDF contains content like this:
Hello I m Karthikeyan. My mail id Karthikeyan#gmail.com, My mob no 4573894833.
When I search using the Java API, the behaviour is as follows.
If I search for Karthikeyan#gmail.com, I am able to get the file.
But if I search for #gm, I am not able to get the file. I expect to get the file, because the file contains my search keyword #gm.
How can I do this?
I am using a tokenizer with min_gram and max_gram set to 3 each.
Below is the Java API query I have used, but it does not give me the results I expect:
QueryStringQueryBuilder attachmentQB = new QueryStringQueryBuilder("#gm");
My mapping details are as follows:
PUT attach_local
{
"settings": {
"analysis": {
"analyzer": {
"custom_analyzer": {
"type": "custom",
"tokenizer": "my_tokenizer",
"char_filter": [
"html_strip"
],
"filter": [
"lowercase",
"asciifolding"
]
}
},
"tokenizer": {
"my_tokenizer": {
"type": "ngram",
"min_gram": 3,
"max_gram": 3,
"token_chars": [
"letter",
"digit"
]
}
}
}
},
"mappings": {
"doc": {
"properties": {
"attachment": {
"properties": {
"content": {
"type": "text",
"analyzer": "custom_analyzer"
},
"content_length": {
"type": "long"
},
"content_type": {
"type": "text"
},
"language": {
"type": "text"
}
}
},
"resume": {
"type": "text"
}
}
}
}
}
You can see how ES tokenizes your search text using
POST /attach_local/_analyze
{
"analyzer": "custom_analyzer",
"text": "#gm"
}
That will tell you whether the # character is dropped. If it is, that would explain the behaviour: your inverted index contains only trigrams, and once the # is stripped you are effectively searching for the bigram "gm".
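Since the rest of the stack is already in Java, the same check can be run programmatically. A minimal sketch, assuming the low-level Elasticsearch REST client and a node on localhost:9200:

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class AnalyzeCheck {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("POST", "/attach_local/_analyze");
            request.setJsonEntity("{ \"analyzer\": \"custom_analyzer\", \"text\": \"#gm\" }");
            Response response = client.performRequest(request);
            // Prints the tokens produced for "#gm"; if the list is empty, the '#'
            // was stripped and the remaining "gm" is shorter than min_gram, so no
            // trigram in the index can match it.
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}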
I cannot create an invoice in QuickBooks Online using Java SDK v2.9.1.
Company location: India
Currency: INR
The request fails due to a validation error that is not returned very clearly in the API fault response.
I believe the payload is complete, as the same payload works in the US sandbox environment.
Any insight into this?
Thanks in advance :)
Request:
{
"ApplyTaxAfterDiscount": false,
"AutoDocNumber": false,
"CurrencyRef": {
"value": "INR"
},
"DepartmentRef": {
"value": "1"
},
"DocNumber": "00006",
"DueDate": "2017-06-13",
"Line": [
{
"Amount": 110.0,
"Description": "00006",
"DetailType": "SalesItemLineDetail",
"SalesItemLineDetail": {
"ItemRef": {
"value": "25"
},
"Qty": 1,
"TaxCodeRef": {
"value": "3"
},
"UnitPrice": 110.0
}
}
],
"ShipAddr": {
"City": "HD",
"CountryCode": "IND",
"Line1": "55-DP-1",
"Line2": "",
"PostalCode": "600660"
},
"ShipDate": "2017-06-13",
"TotalAmt": 110.0,
"TxnDate": "2017-06-13"
}
Response:
{
"Fault": {
"Error": [
{
"Detail": "Business Validation Error: Unexpected Internal Error. (-30003)",
"Message": "A business validation error has occurred while processing your request",
"code": "6000",
"element": ""
}
],
"type": "ValidationFault"
},
"time": "2017-06-13T05:52:29.153-07:00"
}
While sending this request, please check your Java code for the correct date formats.
I got the same error code when trying to post to the QBO Bill API with a txnDate; the date format had been changed in my code, and I got the above error. After converting the date to the correct format, "yyyy-MM-dd", the QBO bill payment API worked fine for me.
So first check in your code, or print the JSON request data, to verify the date format.
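A minimal sketch of the difference, assuming java.text.SimpleDateFormat (the pattern letters, not any particular SDK call, are the point here):

import java.text.SimpleDateFormat;
import java.util.Date;

Date txnDate = new Date();

// Lowercase "dd" is day-of-month and yields e.g. "2017-06-13", the format QBO expects.
String correct = new SimpleDateFormat("yyyy-MM-dd").format(txnDate);

// Uppercase "DD" is day-of-year and yields e.g. "2017-06-164", which is not a valid
// TxnDate/DueDate and can surface as a vague business validation error.
String wrong = new SimpleDateFormat("yyyy-MM-DD").format(txnDate);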
I have this Bing Maps JSON file and I want to retrieve formattedAddress from inside it:
{
"statusCode": 200,
"statusDescription": "OK",
"copyright": "Copyright © 2013 Microsoft and its suppliers. All rights reserved. This API cannot be accessed and the content and any results may not be used, reproduced or transmitted in any manner without express written permission from Microsoft Corporation.",
"authenticationResultCode": "ValidCredentials",
"resourceSets": [
{
"resources": [
{
"__type": "Location:http://schemas.microsoft.com/search/local/ws/rest/v1",
"point": {
"type": "Point",
"coordinates": [
63.8185213804245,
12.105498909950256
]
},
"matchCodes": [
"Good"
],
"address": {
"addressLine": "55 Stuff",
"locality": "Stuff",
"++formattedAddress++": "55 Stuff, 51512 Stuff",
"postalCode": "25521",
"adminDistrict2": "Stuff-Stuff",
"countryRegion": "UK",
"adminDistrict": "NL"
},
"bbox": [
84.81465866285382,
12.097347537264563,
50.822384097995176,
7.11365028263595
],
"name": "55 Stuff, 51122 Stuff",
"confidence": "Medium",
"entityType": "Address",
"geocodePoints": [
{
"calculationMethod": "Interpolation",
"type": "Point",
"usageTypes": [
"Display",
"Route"
],
"coordinates": [
50.8185213804245,
7.105498909950256
]
}
]
}
],
"estimatedTotal": 1
}
],
"traceId": "8a13f73cab93472db1253e4c1621c651|BL2M002306|02.00.83.1900|BL2MSNVM001274, BL2MSNVM003152",
"brandLogoUri": "http://dev.virtualearth.net/Branding/logo_powered_by.png"
}
What I have tried so far is like this:
final JSONArray jsonMainArr = locationData.getJSONArray("resourceSets").getJSONObject(0).getJSONArray("resources");
final JSONObject childJSONObject = jsonMainArr.getJSONObject(0);
return childJSONObject.getString("formattedAddress");
childJSONObject is still two or three levels above formattedAddress, and the query is becoming highly inefficient.
Get the formattedAddress value from the current JSON string like this:
final JSONObject childJSONObject = jsonMainArr.getJSONObject(0)
.getJSONObject("address");
return childJSONObject.getString("formattedAddress");
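Putting the question's navigation and this fix together, a minimal self-contained sketch (org.json classes assumed, with locationData holding the parsed response):

import org.json.JSONObject;

// Walk resourceSets[0].resources[0].address.formattedAddress in one pass.
JSONObject resource = locationData
        .getJSONArray("resourceSets").getJSONObject(0)
        .getJSONArray("resources").getJSONObject(0);
String formattedAddress = resource
        .getJSONObject("address")
        .getString("formattedAddress"); // "55 Stuff, 51512 Stuff"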
There are many online sites where you can paste your complex JSON and view it in a readable way, e.g. http://json.parser.online.fr/.