Business Validation Error: Unexpected Internal Error. (-30003) - QuickBooks Java SDK - java

I cannot create an invoice in QuickBooks Online using the Java SDK v2.9.1.
company location: India
currency: INR
The request fails with a validation error that is not clearly explained in the API fault response.
I believe the payload is complete, since the same request works against the US sandbox environment.
Any insight into this?
Thanks in advance :)
Request:
{
"ApplyTaxAfterDiscount": false,
"AutoDocNumber": false,
"CurrencyRef": {
"value": "INR"
},
"DepartmentRef": {
"value": "1"
},
"DocNumber": "00006",
"DueDate": "2017-06-13",
"Line": [
{
"Amount": 110.0,
"Description": "00006",
"DetailType": "SalesItemLineDetail",
"SalesItemLineDetail": {
"ItemRef": {
"value": "25"
},
"Qty": 1,
"TaxCodeRef": {
"value": "3"
},
"UnitPrice": 110.0
}
}
],
"ShipAddr": {
"City": "HD",
"CountryCode": "IND",
"Line1": "55-DP-1",
"Line2": "",
"PostalCode": "600660"
},
"ShipDate": "2017-06-13",
"TotalAmt": 110.0,
"TxnDate": "2017-06-13"
}
Response:
{
"Fault": {
"Error": [
{
"Detail": "Business Validation Error: Unexpected Internal Error. (-30003)",
"Message": "A business validation error has occurred while processing your request",
"code": "6000",
"element": ""
}
],
"type": "ValidationFault"
},
"time": "2017-06-13T05:52:29.153-07:00"
}

While sending this request, check in your Java code that the dates are in the correct format.
I got the same error code when posting to the QBO Bill API with TxnDate: my code was unintentionally changing the date format, which produced the error above. After converting the dates to the format "yyyy-MM-dd" (note the lowercase "dd"; uppercase "DD" is day-of-year in Java), the QBO Bill API worked fine for me.
So first check your code, or print the JSON request data, to verify the date format.
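For reference, a minimal sketch of producing the date strings in Java before setting them on the request; the value mirrors the TxnDate from the request above, and the formatter pattern is the important part:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class QboDateFormat {
    public static void main(String[] args) {
        // QBO expects dates as yyyy-MM-dd (lowercase dd = day-of-month)
        DateTimeFormatter qboFormat = DateTimeFormatter.ofPattern("yyyy-MM-dd");
        System.out.println(LocalDate.of(2017, 6, 13).format(qboFormat)); // 2017-06-13

        // Common mistake: uppercase DD is day-of-year, which silently produces a bad date
        DateTimeFormatter wrong = DateTimeFormatter.ofPattern("yyyy-MM-DD");
        System.out.println(LocalDate.of(2017, 6, 13).format(wrong)); // 2017-06-164
    }
}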

Related

MongoDb aggregation query using $addFields and $max in Java

I have a MongoDB aggregation that works in the mongo shell but returns an error when I attempt to run it from within a Java application.
The aggregation query in question is as follows:
db.assets.aggregate([
{ $lookup: {
from: "vulnerabilities",
let: { id: "$_id", jdkId: "$jdkId" },
pipeline: [
{ $match:
{ $expr:
{ $and: [
{ $eq: [ "$resolved", false ] },
{ $or: [
{ $eq: [ "$assetId", "$$id" ] },
{ $eq: [ "$assetId", "$$jdkId" ] }
] }
] }
}
}
],
as: "unresolvedVulnerabilities"
} },
{ $addFields: {
maxCvssBaseScore: {
$max: {
$max: "$unresolvedVulnerabilities.cves.cvssBaseScore"
}
}
} },
{ $skip: 0 },
{ $limit: 100 }
] ).pretty()
When I attempt to run this query in a Java application, I get the following error:
{
"operationTime": {
"$timestamp": {
"t": 1645001907,
"i": 1
}
},
"ok": 0.0,
"errmsg": "Invalid $addFields :: caused by :: Unrecognized expression '$max '",
"code": 168,
"codeName": "InvalidPipelineOperator",
"$clusterTime": {
"clusterTime": {
"$timestamp": {
"t": 1645001907,
"i": 1
}
},
"signature": {
"hash": {
"$binary": "AAAAAAAAAAAAAAAAAAAAAAAAAAA=",
"$type": "00"
},
"keyId": {
"$numberLong": "0"
}
}
}
}
The code for generating the addFields stage of the aggregation pipeline is as follows:
Bson addFieldsStage = new Document().append("$addFields",
new Document().append(PROPERTY_MAX_CVSS_BASE_SCORE,
new Document().append("$max",
new Document().append("$max ", "$" + PROPERTY_UNRESOLVED_VULNERABILITIES + "." + PROPERTY_CVES + "." + PROPERTY_CVSS_BASE_SCORE)
)
)
);
An example of the underlying data is as follows:
{
"id": "eb843e46-901a-3d0e-8d6e-1315c78cf5f7",
"name": "test-server1#test.a.b.c",
"type": "liberty",
"productName": "WebSphere Application Server Liberty Network Deployment",
"version": "20.0.0.9",
"features": [
"el-3.0",
"jsp-2.3",
"servlet-3.1",
"ssl-1.0",
"transportSecurity-1.0",
"usageMetering-1.0"
],
"apars": [],
"hostName": "test.a.b.c",
"serverName": "test-server1",
"installDirectory": "/opt/ibm/wlp/",
"profileDirectory": "/opt/ibm/wlp/usr/",
"operatingSystem": "Linux",
"operatingSystemVersion": "3.10.0-1160.53.1.el7.x86_64",
"jdkId": "579a03c6-76d0-3813-ad61-51428041987b",
"unresolvedVulnerabilities": [
{
"id": "156bb40b-826d-31e8-ab61-e0d3ee8b6446",
"name": "6520468 : IBM J9 VM#test.a.b.c",
"description": "There are multiple vulnerabilities in the IBM® SDK, Java™ Technology Edition that is shipped with IBM WebSphere Application Server. These might affect some configurations of IBM WebSphere Application Server Traditional, IBM WebSphere Application Server Liberty and IBM WebSphere Application Server Hypervisor Edition. These products have addressed the applicable CVEs. If you run your own Java code using the IBM Java Runtime delivered with this product, you should evaluate your code to determine whether the complete list of vulnerabilities is applicable to your code. For a complete list of vulnerabilities, refer to the link for \"IBM Java SDK Security Bulletin\" located in the References section for more information. HP fixes are on a delayed schedule.",
"assetId": "579a03c6-76d0-3813-ad61-51428041987b",
"securityBulletinId": "7e4684ae-8252-354e-be6d-4268af2d272e",
"resolved": false,
"cves": [
{
"id": "CVE-2021-35578",
"description": "An unspecified vulnerability in Java SE related to the JSSE component could allow an unauthenticated attacker to cause a denial of service resulting in a low availability impact using unknown attack vectors.",
"cvssBaseScore": 5.3
},
{
"id": "CVE-2021-35564",
"description": "An unspecified vulnerability in Java SE related to the Keytool component could allow an unauthenticated attacker to cause no confidentiality impact, low integrity impact, and no availability impact.",
"cvssBaseScore": 5.3
}
],
"remediations": [
{
"startVersion": "8.0.0.0",
"endVersion": "8.0.6.36",
"fixPack": "8.0.7.0"
}
],
"created": "2022-01-11T10:58:43Z",
"createdBy": "server-registration-processor",
"updated": "2022-01-11T10:58:43Z",
"updatedBy": "server-registration-processor",
"secondsExposed": 0
},
{
"id": "0f6006e6-a8ae-3cb6-bb7e-ba3afbf93996",
"name": "6489683 : test-server1#test.a.b.c",
"description": "There are multiple vulnerabilities in the Apache Commons Compress library that is used by WebSphere Application Server Liberty. This has been addressed.",
"assetId": "eb843e46-901a-3d0e-8d6e-1315c78cf5f7",
"securityBulletinId": "12de7238-ff4e-3252-a05f-19d51a3f8bf0",
"resolved": false,
"cves": [
{
"id": "CVE-2021-36090",
"description": "Apache Commons Compress is vulnerable to a denial of service, caused by an out-of-memory error when large amounts of memory are allocated. By reading a specially-crafted ZIP archive, a remote attacker could exploit this vulnerability to cause a denial of service condition against services that use Compress' zip package.",
"cvssBaseScore": 7.5
},
{
"id": "CVE-2021-35517",
"description": "Apache Commons Compress is vulnerable to a denial of service, caused by an out of memory error when allocating large amounts of memory. By persuading a victim to open a specially-crafted TAR archive, a remote attacker could exploit this vulnerability to cause a denial of service condition against services that use Compress' tar package.",
"cvssBaseScore": 5.5
}
],
"remediations": [
{
"startVersion": "20.0.0.1",
"endVersion": "20.0.0.12",
"operator": "OR",
"iFixes": [
"PH39418"
],
"fixPack": "21.0.0.10"
}
],
"created": "2022-01-11T10:58:43Z",
"createdBy": "vulnerability-manager",
"updated": "2022-01-11T10:58:43Z",
"updatedBy": "vulnerability-manager",
"secondsExposed": 0
}
],
"groups": [
"NO_GROUP"
],
"created": "2022-01-11T10:58:43Z",
"createdBy": "server-registration-processor",
"updated": "2022-01-25T15:49:16Z",
"updatedBy": "vulnerability-manager"
}
Is anyone able to tell me what I am doing wrong?
It turns out that it was a fat finger error... one of the $max strings included an additional space at the end.
The Java code for the working stage is:
Bson addFieldsStage = new Document().append("$addFields",
new Document().append(PROPERTY_MAX_CVSS_BASE_SCORE,
new Document().append("$max",
new Document().append("$max", "$" + PROPERTY_UNRESOLVED_VULNERABILITIES + "." + PROPERTY_CVES + "." + PROPERTY_CVSS_BASE_SCORE)
)
)
);
Many thanks to prasad_ for spotting this.
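For anyone hitting this in the Java driver, a minimal sketch of running the corrected stage end to end (assuming a local mongod and a database named "test"; the $lookup stage from the question is omitted for brevity, and the nested $max mirrors the shell pipeline above):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import org.bson.conversions.Bson;
import java.util.Arrays;

public class MaxCvssQuery {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> assets = client.getDatabase("test").getCollection("assets");

            // Note: no trailing space after either "$max"
            Bson addFieldsStage = new Document("$addFields",
                new Document("maxCvssBaseScore",
                    new Document("$max",
                        new Document("$max", "$unresolvedVulnerabilities.cves.cvssBaseScore"))));

            for (Document doc : assets.aggregate(Arrays.asList(
                    addFieldsStage,
                    new Document("$skip", 0),
                    new Document("$limit", 100)))) {
                System.out.println(doc.toJson());
            }
        }
    }
}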

How to index a file into my cloud organization using a Java HTTP PUT request?

I used the Postman tool to upload an HTML file into my organization, using a PUT call:
https://api.cloud.coveo.com/push/v1/organizations/*************/sources/**************************/documents?documentId=https://*******/page/page-id/SamplePage.html
Authorization header: Bearer ************************
Content-Type header: application/json
In the body (raw):
{
"author": "John",
"date": "2020-03-18T17:56:41.666Z",
"documenttype": "HTML",
"filename": "SamplePage.html",
"language": [
"English"
],
"permanentid": "123456789",
"sourcetype": "Push",
"title": "Sample Page test",
"fileExtension": ".html",
"data": " sample.html ",
"permissions": [
{
"allowAnonymous": false,
"allowedPermissions": [
{
"identity": "AlphaTeam",
"identityType": "Group"
}
],
"deniedPermissions": [
{
"identity": "bob#example.com",
"identityType": "User"
}
]
}
]
}
Now, how can I upload the same file using a Java PUT request, similar to the Postman procedure?
Could anyone provide a solution to this?
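A minimal sketch using Java 11's built-in java.net.http.HttpClient; ORG_ID, SOURCE_ID, the document URL, the body file name, and API_KEY are placeholders standing in for the masked values in the question:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class CoveoPushPut {
    public static void main(String[] args) throws Exception {
        String url = "https://api.cloud.coveo.com/push/v1/organizations/ORG_ID/sources/SOURCE_ID"
            + "/documents?documentId=https://example.com/page/page-id/SamplePage.html";

        // Same JSON body that was pasted into Postman, read from a local file here
        String body = Files.readString(Path.of("SamplePageBody.json"));

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(url))
            .header("Authorization", "Bearer API_KEY")
            .header("Content-Type", "application/json")
            .PUT(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}

Note that the documentId query parameter should be URL-encoded if it contains characters beyond those shown here.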

I want to fetch the max and min past temperature for the US by ZIP code using the NCDC (National Climatic Data Center) Climate Data API

I am using this API for fetching data:
https://www.ncdc.noaa.gov/cdo-web/api/v2/data?datasetid=GHCND&datatypeid=TMAX,TMIN&locationid=ZIP:28801&startdate=2010-05-01&enddate=2010-05-02
It returns the following response:
{
"metadata": {
"resultset": {
"offset": 1,
"count": 4,
"limit": 25
}
},
"results": [
{
"date": "2010-05-01T00:00:00",
"datatype": "TMAX",
"station": "GHCND:USW00013872",
"attributes": ",,0,2400",
"value": 267
},
{
"date": "2010-05-01T00:00:00",
"datatype": "TMIN",
"station": "GHCND:USW00013872",
"attributes": ",,0,2400",
"value": 139
},
{
"date": "2010-05-02T00:00:00",
"datatype": "TMAX",
"station": "GHCND:USW00013872",
"attributes": ",,0,2400",
"value": 267
},
{
"date": "2010-05-02T00:00:00",
"datatype": "TMIN",
"station": "GHCND:USW00013872",
"attributes": ",,0,2400",
"value": 206
}
]
}
I cannot find any documentation regarding the attributes in the response. Is there any other way I can get this information?
I believe this link has the information you are looking for. Note that this information is provided by NCDC, so you may need to contact them directly. Most likely all the necessary information is available on their website; you just need to look carefully.
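As a side note, a minimal sketch of calling the same endpoint from Java with java.net.http.HttpClient (Java 11+); the CDO v2 API expects the access token in a "token" header. As far as I can tell from the GHCND documentation, TMAX/TMIN values are reported in tenths of degrees Celsius (so 267 above is 26.7 °C), and the attributes string is a comma-separated list of the measurement, quality, and source flags plus the observation time:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NcdcMaxMinTemps {
    public static void main(String[] args) throws Exception {
        String url = "https://www.ncdc.noaa.gov/cdo-web/api/v2/data"
            + "?datasetid=GHCND&datatypeid=TMAX,TMIN&locationid=ZIP:28801"
            + "&startdate=2010-05-01&enddate=2010-05-02";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(url))
            .header("token", "YOUR_CDO_TOKEN") // free token, requested from NCDC
            .GET()
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // parse "results" with any JSON library
    }
}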

tExtractJSONField From tFileInputJSON - Talend Open Studio

I am very new to Talend Open Studio for DI. I am trying to read data from the JSON file below:
{
"data": [
{
"id": "X999_Y999",
"from": {
"name": "Tom Brady", "id": "X12"
},
"message": "Looking forward to 2010!",
"actions": [
{
"name": "Comment",
"link": "http://www.facebook.com/X999/posts/Y999"
},
{
"name": "Like",
"link": "http://www.facebook.com/X999/posts/Y999"
}
],
"type": "status",
"created_time": "2010-08-02T21:27:44+0000",
"updated_time": "2010-08-02T21:27:44+0000"
},
{
"id": "X998_Y998",
"from": {
"name": "Peyton Manning", "id": "X18"
},
"message": "Where's my contract?",
"actions": [
{
"name": "Comment",
"link": "http://www.facebook.com/X998/posts/Y998"
},
{
"name": "Like",
"link": "http://www.facebook.com/X998/posts/Y998"
}
],
"type": "status",
"created_time": "2010-08-02T21:27:44+0000",
"updated_time": "2010-08-02T21:27:44+0000"
}
]
}
I want to load three attributes into my table (id, actions_name, and actions_link). So, in the first step (tFileInputJSON) I set up a loop JSON query as shown below (screenshot omitted):
Here, I am able to extract the rows as I need. But then I used a tExtractJSONField to extract the individual fields under "actions" for each "id" using XPath expressions (screenshot omitted):
I have tried several other ways to extract the fields but could not get this to work, and I have not been able to find a relevant post on Stack Overflow or the Talend forums. Could somebody please help?
Arrange the job, the tFileInputJSON settings, and the tExtractJSONFields settings as shown in the answer's screenshots (omitted here), and you will get the expected output.
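For reference, a sketch of the kind of configuration those screenshots typically show for this shape of JSON; this is an assumption about a common setup, not a reproduction of the missing images. With tFileInputJSON in Read By XPath mode, the nested actions can be read in one component:

Loop XPath query: "/data/actions"
Mapping: id            -> XPath query "../id"
Mapping: actions_name  -> XPath query "name"
Mapping: actions_link  -> XPath query "link"

The relative "../id" query climbs from each actions element back to its parent record, so each output row carries the id alongside the action's name and link.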

How to validate JSON against hyper-schema with json-schema-validator

I cannot figure out how to properly set up a hyper-schema to work with json-schema-validator. I am using the Java version of json-schema-validator, version 2.2.5.
My schema is:
{
"$schema": "http://json-schema.org/draftv4/hyper-schema#",
"title": "User object",
"description": "A user representation",
"type": "object",
"properties": {
"email": {
"description": "The user's email address",
"format":"email",
"maxLength": 255
},
"picture": {
"description": "The user's picture",
"type": "string",
"media": {
"binaryEncoding": "base64",
"type": "image/png"
}
}
}
}
My json object is:
{"email":"k#w.de",
"picture":null}
Now, when I load the schema into JsonSchemaFactory and start validating, I get the following warning:
warning: the following keywords are unknown and will be ignored: [media]
level: "warning"
schema: {"loadingURI":"#","pointer":"/properties/picture"}
domain: "syntax"
ignored: ["media"]
Is there anything else to configure for using the hyper-schema besides the $schema field?
This is because your $schema is wrong!
It should be http://json-schema.org/draft-04/hyper-schema#. See section 6 of the core specification for the list of well-known URIs.
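For completeness, a minimal sketch of validating with json-schema-validator 2.2.x once the $schema URI is corrected (the schema and instance are inlined and abbreviated here for brevity):

import com.fasterxml.jackson.databind.JsonNode;
import com.github.fge.jackson.JsonLoader;
import com.github.fge.jsonschema.core.report.ProcessingReport;
import com.github.fge.jsonschema.main.JsonSchema;
import com.github.fge.jsonschema.main.JsonSchemaFactory;

public class HyperSchemaValidation {
    public static void main(String[] args) throws Exception {
        // Corrected $schema: draft-04, not draftv4
        JsonNode schemaNode = JsonLoader.fromString(
            "{ \"$schema\": \"http://json-schema.org/draft-04/hyper-schema#\","
            + "  \"type\": \"object\","
            + "  \"properties\": { \"picture\": { \"type\": \"string\" } } }");
        JsonNode instance = JsonLoader.fromString(
            "{ \"email\": \"k@w.de\", \"picture\": null }");

        JsonSchema schema = JsonSchemaFactory.byDefault().getJsonSchema(schemaNode);
        ProcessingReport report = schema.validate(instance);
        // picture is null but declared as string, so the report lists an error
        System.out.println(report.isSuccess() ? "valid" : report);
    }
}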
