Defining Empty String in Avro Schema - java

I am currently having an issue with the Avro JsonDecoder. Avro is used in version 1.8.2. The .avsc file is defined like this:
{
    "type": "record",
    "namespace": "my.namespace",
    "name": "recordName",
    "fields": [
        {
            "name": "Code",
            "type": "string"
        },
        {
            "name": "CodeNumber",
            "type": "string",
            "default": ""
        }
    ]
}
When I now run my test cases I get an org.apache.avro.AvroTypeException: Expected string. Got END_OBJECT. The class throwing the error is JsonDecoder.
To me it looks like the default value handling on my side might not be correct when using just "" as the default value. The error occurs only if the field is missing entirely, but that, in my understanding, is exactly the case in which the default value should be used. If I set the value in the JSON as "CodeNumber": "", the decoder does not have any issues.
Any hints or ideas?

Found this:
Turns out the issue is that the default values are just ignored by the java implementations. I've added a workaround which will catch the exception and then look for a default value. Will be in release 1.9.0
Source: https://github.com/sksamuel/avro4s/issues/106
If it is possible, try upgrading your Avro decoder to version 1.9.0.
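For reference, a minimal sketch of the decoding path in question (the file name recordName.avsc and the sample JSON are assumptions for illustration): on 1.8.2 the read below fails with the reported AvroTypeException when CodeNumber is absent, while per the issue above 1.9.0 should fall back to the declared default.

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.JsonDecoder;

import java.io.File;

public class AvroDefaultCheck {
    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(new File("recordName.avsc"));

        // JSON payload without the CodeNumber field. On Avro 1.8.2 this throws
        // AvroTypeException; per the issue above, 1.9.0 applies the default "".
        String json = "{\"Code\": \"ABC\"}";

        JsonDecoder decoder = DecoderFactory.get().jsonDecoder(schema, json);
        GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(schema);
        GenericRecord record = reader.read(null, decoder);
        System.out.println(record);
    }
}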

Related

How to deploy java 17 lambda function through cloud formation

I'm planning to deploy a Java 17 function to AWS Lambda. But as per the documentation, AWS doesn't provide a base image for Java 17:
https://docs.aws.amazon.com/lambda/latest/dg/lambda-java.html
So my problem is what value I should use in the CloudFormation template's Runtime field:
"AMIIDLookup": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Handler": "index.handler",
"Role": {
"Fn::GetAtt": [
"LambdaExecutionRole",
"Arn"
]
},
"Code": {
"S3Bucket": "lambda-functions",
"S3Key": "amilookup.zip"
},
"Runtime": "Java11", # what is the alternative for this
"Timeout": 25,
"TracingConfig": {
"Mode": "Active"
}
}
}
There is no official Java 17 runtime for Lambda yet; you would have to create a custom runtime on your own.
Robert is right: either create a custom runtime or use a container image to spin up your AWS Lambda function: https://cloud.netapp.com/blog/aws-cvo-blg-aws-lambda-images-how-to-use-container-images-to-deploy-lambda

How to get the size of an array in a JSON response? REST Assured API, Java

I found a lot of topics with this question on this portal, but I still was not able to achieve what I need. I keep getting the following exception:
java.lang.IllegalArgumentException: The parameter "roles" was used but not defined. Define parameters using the JsonPath.params(...) function.
Part of my code:
import static io.restassured.RestAssured.given;
import io.restassured.RestAssured;
import io.restassured.path.json.JsonPath;
import io.restassured.response.Response;

String baseURItest = RestAssured.baseURI = "http://testapi.test.com/testapps";
Response response = given().when().get("/getAllRoles?token=token");
int countRoles = response.body().path("$..roles.size()");
System.out.println(countRoles);
console output:
> java.lang.IllegalArgumentException: The parameter "roles" was used but
> not defined. Define parameters using the JsonPath.params(...) function
and the JSON body of the response:
{
    "Message": "",
    "Status": 200,
    "Data": {
        "errors": [],
        "roles": [
            {
                "ROLEKEY": "1",
                "ROLEID": 1,
                "ROLENAME": "name1",
                "ROLEDESCRIPTION": "1"
            },
            {
                "ROLEKEY": "2",
                "ROLEID": 2,
                "ROLENAME": "name2",
                "ROLEDESCRIPTION": "12"
            },
            {
                "ROLEKEY": "3",
                "ROLEID": 3,
                "ROLENAME": "name3",
                "ROLEDESCRIPTION": "x"
            }
        ]
    }
}
I also tried:
JsonPath jsonPathValidator = response.jsonPath();
System.out.println(jsonPathValidator.get("$..ROLEKEY").toString());
I have tried a lot of different approaches that I found on Google, but each time I get the same error. Can someone please explain what I am missing here, or what I should do? Thank you in advance for any help!
Problem: you are misunderstanding JsonPath in Rest-Assured.
$..ROLEKEY is Jayway JsonPath syntax.
JsonPath in Rest-Assured uses Groovy GPath internally.
Note from the Rest-Assured wiki:
Note that the "json path" syntax uses Groovy's GPath notation and is not to be confused with Jayway's JsonPath syntax.
Solution: you can choose one of the two ways below.
1. Use Jayway JsonPath by adding this to pom.xml (or build.gradle):
<dependency>
    <groupId>com.jayway.jsonpath</groupId>
    <artifactId>json-path</artifactId>
    <version>2.6.0</version>
</dependency>
Response res = given().when().get("/getAllRoles?token=token");
// com.jayway.jsonpath.JsonPath, not io.restassured.path.json.JsonPath
List<String> roleKeys = JsonPath.read(res.asString(), "$..ROLEKEY");
2. Use the JsonPath that ships with Rest-Assured:
List<String> roles = response.jsonPath().getList("Data.roles.ROLEKEY");
System.out.println(roles);
//[1,2,3]
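Since the question asks for the size of the array, the length of that list gives it directly; a short sketch using the same response object:

// Size of the roles array via Rest-Assured's GPath-based JsonPath.
int countRoles = response.jsonPath().getList("Data.roles").size();
System.out.println(countRoles); // 3 for the sample response above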

Defining optional record type in Avro schema

I am having difficulty defining the Avro schema for the following XML, where SubElement1 is optional:
XML File
<?xml version="1.0" encoding="UTF-8"?>
<Element1 attribute1="attr_value1" attribute2="attr_value2">
    <SubElement1 attribute1="attr_value"/>
</Element1>
AVRO Schema
{
    "namespace": "com.kafka.avro",
    "type": "record",
    "name": "Element1",
    "fields": [
        {"name": "attribute1", "type": "string"},
        {"name": "attribute2", "type": "string"},
        {
            "name": "SubElement1",
            "type": ["null", {
                "type": "record",
                "name": "SubElement1",
                "fields": [
                    {"name": "attribute1", "type": "string", "default": ""}
                ]
            }],
            "default": null
        }
    ]
}
This works fine when SubElement1 is not present:
"SubElement1": null
When SubElement1 is present, this results in the following output, where the namespace is added:
"SubElement1": {
"com.kafka.avro.SubElement1": {
"attribute1": "attr_value1"
}
}
I would like to completely omit the namespace part, as below:
"SubElement1": {
"attribute1": "attr_value1"
}
Is this possible?
It looks like your output is the JSON encoding. Unfortunately, the specification states that the encoding should look like that: https://avro.apache.org/docs/current/spec.html#json_encoding
There is an outstanding issue on the Avro issue tracker about this feature request: https://issues.apache.org/jira/browse/AVRO-1582. In that issue, I think someone provided some code that would do what you want, but as of right now there is no way to do this with the standard library.
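If post-processing the JSON output is acceptable, one possible workaround (a rough sketch of my own, not the code attached to AVRO-1582; it assumes Jackson is available and that union branch names are the only single object keys containing a dot) is to strip the wrapper objects after encoding:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

import java.util.Iterator;
import java.util.Map;

public class UnwrapAvroUnions {

    // Recursively replace {"some.namespace.RecordName": {...}} union wrappers
    // produced by Avro's JSON encoding with the inner object. Only single-key
    // objects whose key contains a dot are treated as wrappers.
    static JsonNode unwrap(JsonNode node) {
        if (node.isObject()) {
            Iterator<Map.Entry<String, JsonNode>> fields = ((ObjectNode) node).fields();
            while (fields.hasNext()) {
                Map.Entry<String, JsonNode> field = fields.next();
                JsonNode value = field.getValue();
                if (value.isObject() && value.size() == 1
                        && value.fieldNames().next().contains(".")) {
                    value = value.elements().next();
                }
                field.setValue(unwrap(value));
            }
        } else if (node.isArray()) {
            ArrayNode array = (ArrayNode) node;
            for (int i = 0; i < array.size(); i++) {
                array.set(i, unwrap(array.get(i)));
            }
        }
        return node;
    }

    public static void main(String[] args) throws Exception {
        String avroJson = "{\"attribute1\":\"attr_value1\",\"attribute2\":\"attr_value2\","
                + "\"SubElement1\":{\"com.kafka.avro.SubElement1\":{\"attribute1\":\"attr_value1\"}}}";
        ObjectMapper mapper = new ObjectMapper();
        JsonNode cleaned = unwrap(mapper.readTree(avroJson));
        System.out.println(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(cleaned));
    }
}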

Properties file with the same key but different values in Java

I have a properties file like this:
host=192.168.1.1
port=8060
host=192.168.1.2
port=8070
host=192.168.1.3
port=8080
host=192.168.1.4
port=8090
Now I want to build the unique URLs so I can pass them to another application.
Example
HostOne : https://192.168.1.1:8060
HostTwo : https://192.168.1.2:8070
HostThree : https://192.168.1.3:8080
HostFour : https://192.168.1.4:8090
How can I get this using Java or any other library? Please help.
Thanks.
EDITED
What if I use this type of data instead?
host=192.168.1.1,8060
host=192.168.1.1,8060
host=192.168.1.1,8060
host=192.168.1.1,8060
Is there any way to get this now?
Basically that properties file is broken. A properties file is a sequence of key/value pairs which is built into a map, so it requires the keys to be unique. I suspect that if you load this into a Properties object at the moment, you'll get just the last host/port pair.
Options:
Make this a real properties file by giving the keys unique names (see the sketch after this list), e.g.
host.1=192.168.1.1
port.1=8060
host.2=192.168.1.2
port.2=8070
...
Use a different file format (e.g. JSON)
Write your own custom parser which understands your current file format, but don't call it a "properties file" as that has a specific meaning to Java developers
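For the first option, a minimal sketch of building the URLs from the numbered keys (the file name hosts.properties is an assumption):

import java.io.FileInputStream;
import java.util.Properties;

public class HostUrls {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("hosts.properties")) {
            props.load(in);
        }
        // Read host.N/port.N pairs until a number is missing.
        for (int i = 1; props.containsKey("host." + i); i++) {
            System.out.println("https://" + props.getProperty("host." + i)
                    + ":" + props.getProperty("port." + i));
        }
    }
}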
Personally I'd probably go with JSON. For example, your file could be represented as:
[
{ "host": "192.168.1.1", "port": 8060 },
{ "host": "192.168.1.2", "port": 8070 },
{ "host": "192.168.1.3", "port": 8080 },
{ "host": "192.168.1.4", "port": 8090 }
]
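And a corresponding sketch for the JSON route, assuming Jackson is on the classpath and the array above is saved as hosts.json:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.File;

public class HostListReader {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Each array element has a "host" string and a "port" number.
        for (JsonNode entry : mapper.readTree(new File("hosts.json"))) {
            System.out.println("https://" + entry.get("host").asText()
                    + ":" + entry.get("port").asInt());
        }
    }
}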

How to index an array of elements in Elasticsearch?

I'm using Elasticsearch 1.4.3 and I'm trying to create an automated "filler" for the database.
The idea is to use this website http://beta.json-generator.com/BhxCdZ6 to generate a random set of data and push it into an index of Elasticsearch.
For interfacing with Elasticsearch, I am using the Elasticsearch Java API mixed with the Elasticsearch web API.
I managed to push one user at a time by simply copy-pasting the information, excluding the [ and ] characters, and creating a shell script that calls:
curl -XPOST 'http://localhost:9200/myindex/users/' -d '{
"name": {
"first": "Dickerson",
"last": "Wood"
}, etc...
If I instead copy a full block composed of 3 people and try to push the data with the same script:
curl -XPOST 'http://localhost:9200/geocon/users/' -d '[
{
"name": {
"first": "Dickerson",
"last": "Wood"
}, etc ...
]
}'
The error returned is:
org.elasticsearch.index.mapper.MapperParsingException: Malformed content, must start with an object
How would you solve this problem? Thank you!
You are missing the closing brace wrapping the item:
[
{
"name": {
"first": "Dickerson",
"last": "Wood"
}, etc.
]
You can validate your JSON e.g. via http://jsonlint.com/.
Also, try taking a look at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-bulk.html
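If you want to keep the generator's array output as-is, one option (a rough sketch, assuming Jackson is on the classpath and the generated array is saved as users.json; the single-document index API only accepts one object per request) is to split the array in Java and index each element separately. For larger data sets the Bulk API linked above is the better fit.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.File;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class UserLoader {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // The generator output is a JSON array; index each element on its own.
        JsonNode users = mapper.readTree(new File("users.json"));
        for (JsonNode user : users) {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:9200/myindex/users/").openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(mapper.writeValueAsBytes(user));
            }
            System.out.println("Indexed user, HTTP " + conn.getResponseCode());
        }
    }
}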
