Property file with same key but different values in Java

I have a property file like this.
host=192.168.1.1
port=8060
host=192.168.1.2
port=8070
host=192.168.1.3
port=8080
host=192.168.1.4
port=8090
Now I want to build a unique URL from each host/port pair so I can pass it to another application.
Example
HostOne : https://192.168.1.1:8060
HostTwo : https://192.168.1.2:8070
HostThree : https://192.168.1.3:8080
HostFour : https://192.168.1.4:8090
How can I get these using Java or any other library? Please help.
Thanks.
EDITED
What if I have this type of data instead?
host=192.168.1.1,8060
host=192.168.1.1,8060
host=192.168.1.1,8060
host=192.168.1.1,8060
Now, is there any way to parse this?

Basically, that property file is broken. A properties file is a sequence of key/value pairs which is built into a map, so it requires the keys to be unique. I suspect that if you load this into a Properties object at the moment, you'll get just the last host/port pair.
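A quick sketch of that overwrite behaviour:

import java.io.StringReader;
import java.util.Properties;

public class LastValueWins {
    public static void main(String[] args) throws Exception {
        String text = "host=192.168.1.1\nport=8060\nhost=192.168.1.2\nport=8070\n";
        Properties props = new Properties();
        props.load(new StringReader(text));
        // Duplicate keys: each occurrence overwrites the previous one during load.
        System.out.println(props.getProperty("host")); // prints 192.168.1.2
        System.out.println(props.getProperty("port")); // prints 8070
    }
}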
Options:
Make this a real properties file by giving it unique keys (see the sketch after this list), e.g.
host.1=192.168.1.1
port.1=8060
host.2=192.168.1.2
port.2=8070
...
Use a different file format (e.g. JSON)
Write your own custom parser which understands your current file format, but don't call it a "properties file" as that has a specific meaning to Java developers
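For the first option, a minimal sketch of reading the indexed keys back (hosts.properties is an assumed file name):

import java.io.FileReader;
import java.util.Properties;

public class IndexedHosts {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        try (FileReader reader = new FileReader("hosts.properties")) {
            props.load(reader);
        }
        // Walk host.1, host.2, ... until a key is missing.
        for (int i = 1; props.getProperty("host." + i) != null; i++) {
            String url = "https://" + props.getProperty("host." + i)
                    + ":" + props.getProperty("port." + i);
            System.out.println("Host" + i + " : " + url);
        }
    }
}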
Personally I'd probably go with JSON. For example, your file could be represented as:
[
{ "host": "192.168.1.1", "port": 8060 },
{ "host": "192.168.1.2", "port": 8070 },
{ "host": "192.168.1.3", "port": 8080 },
{ "host": "192.168.1.4", "port": 8090 }
]
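Reading that back is then straightforward, for example with Jackson (an assumption; any JSON library would do):

import java.io.File;
import java.util.List;
import java.util.Map;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonHosts {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // hosts.json is an assumed file name containing the array shown above.
        List<Map<String, Object>> entries = mapper.readValue(
                new File("hosts.json"),
                new TypeReference<List<Map<String, Object>>>() {});
        for (Map<String, Object> entry : entries) {
            System.out.println("https://" + entry.get("host") + ":" + entry.get("port"));
        }
    }
}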

Related

Way to resolve both string and list as a Java list in Typesafe config

I have a HOCON config file, something like
foo = [
  {
    id: 1
    f: abc
  },
  {
    id: 2
    f: [xyz, pqr]
  }
]
At the backend, I want the field f as a Java list. So wherever the field f is a string, I should be able to convert it to a List. config.resolve() doesn't seem to work here, and I need a custom wrapper on top of it which I'm unable to come up with. Is there any way this could be achieved?
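One approach with the Typesafe Config API is to branch on the value type of f; a sketch, assuming the list structure above is loaded from a hypothetical app.conf:

import java.io.File;
import java.util.Collections;
import java.util.List;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import com.typesafe.config.ConfigValueType;

public class StringOrList {
    // Returns f as a list whether it was written as a single string or a list.
    static List<String> readF(Config item) {
        if (item.getValue("f").valueType() == ConfigValueType.LIST) {
            return item.getStringList("f");
        }
        return Collections.singletonList(item.getString("f"));
    }

    public static void main(String[] args) {
        Config config = ConfigFactory.parseFile(new File("app.conf")).resolve();
        for (Config item : config.getConfigList("foo")) {
            System.out.println(item.getInt("id") + " -> " + readF(item));
        }
    }
}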

Defining Empty String in Avro Schema

I am currently having an issue with the Avro JsonDecoder. Avro is used in version 1.8.2. The .avsc file is defined like:
{
"type": "record",
"namespace": "my.namespace",
"name": "recordName",
"fields": [
{
"name": "Code",
"type": "string"
},
{
"name": "CodeNumber",
"type": "string",
"default": ""
}
]
}
When I now run my test cases I get an org.apache.avro.AvroTypeException: Expected string. Got END_OBJECT. The class throwing the error is JsonDecoder.
To me it looks like the default value handling on my side might not be correct when using just "" as the default value. The error occurs only if the field is not available at all, but in my understanding that is exactly the case when the default value should be used. If I set the value in the JSON as "CodeNumber": "", the decoder does not have any issues.
Any hints or ideas?
Found this:
Turns out the issue is that the default values are just ignored by the java implementations. I've added a workaround which will catch the exception and then look for a default value. Will be in release 1.9.0
Source: https://github.com/sksamuel/avro4s/issues/106
If possible, try upgrading your Avro dependency to version 1.9.0.
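For reference, a minimal sketch of the JSON decoding path that triggers the exception (the schema file name here is an assumption):

import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.JsonDecoder;

public class DecodeExample {
    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(new File("recordName.avsc"));
        // "CodeNumber" is missing here: 1.8.2 throws AvroTypeException,
        // while 1.9.0 falls back to the declared default value.
        String json = "{\"Code\": \"ABC\"}";
        JsonDecoder decoder = DecoderFactory.get().jsonDecoder(schema, json);
        GenericRecord record = new GenericDatumReader<GenericRecord>(schema).read(null, decoder);
        System.out.println(record);
    }
}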

How to create proper kafka-connect plugin without connector?

I am trying to create a plugin with a "transform" for my data for Kafka Connect and use it with different sink connectors.
When I install the plugin, Kafka Connect doesn't see my classes.
I used the kafka-connect-maven-plugin to create my bundle zip.
Installation with confluent-hub (from a local file) succeeded.
All files are unzipped, and my worker properties file has the updated plugin.path.
I run my Connect in distributed mode and try to create a new connector with the transformer from my package.
My plugin structure looks like:
- mwojdowski-my-connect-plugin-0.0.1-SNAPSHOT
|- manifest.json
|- lib
||- my-connect-plugin-0.0.1-SNAPSHOT.jar
and my manifest.json file:
{
"name" : "my-connect-plugin",
"version" : "0.0.1-SNAPSHOT",
"title" : "my-connect-plugin",
"description" : "A set of transformations for Kafka Connect",
"owner" : {
"username" : "mwojdowski",
"name" : "Marcin Wojdowski<mwojdowski#gmail.com>"
},
"tags" : [ "transform", "field", "topic" ],
"features" : {
"supported_encodings" : [ "any" ],
"single_message_transforms" : true,
"confluent_control_center_integration" : true,
"kafka_connect_api" : true
},
"documentation_url" : "",
"docker_image" : { },
"license" : [ {
"name" : "Confluent Software License",
"url" : "https://www.confluent.io/software-evaluation-license"
} ],
"component_types" : [ "transform" ],
"release_date" : "2019-08-29"
}
Next, I try to create a new connector:
curl -XPOST -H 'Content-type:application/json' 'localhost:8083/connectors' -d '{
"name" : "custom-file-sink-with-validation",
"config" : {
"connector.class" : "FileStreamSink",
"tasks.max" : "1",
"topics" : "test_topic",
"file" : "/tmp/my-plugin-test.txt",
"key.ignore" : "true",
"schema.ignore" : "true",
"drop.invalid.message": "false",
"behavior.on.malformed.documents": "warn",
"key.converter":"org.apache.kafka.connect.storage.StringConverter",
"value.converter":"org.apache.kafka.connect.storage.StringConverter",
"transforms" : "Validation",
"transforms.Validation.type" : "org.kafka.connect.my.connector.ValidateId"
}
}'
After restarting Kafka Connect, when I try to create the new connector, an exception is thrown:
{
"error_code": 400,
"message": "Connector configuration is invalid and contains the following 2 error(s):\nInvalid value org.kafka.connect.my.connector.ValidateId for configuration transforms.Validation.type: Class org.kafka.connect.my.connector.ValidateId could not be found.\nInvalid value null for configuration transforms.Validation.type: Not a Transformation\nYou can also find the above list of errors at the endpoint `/{connectorType}/config/validate`"
}
I also tried to install the plugin manually, following the doc:
https://docs.confluent.io/current/connect/managing/install.html
But it looks like Connect doesn't load my jars.
When I copy my jar to share/java/kafka it works, but this is not a solution.
I suspect my plugin is skipped because it does not contain any connectors.
In this case, should I add my jars to the classpath manually? (The opposite of https://docs.confluent.io/current/connect/userguide.html#installing-plugins)
Or should I explicitly point to my plugin in the connector configuration?
Regards,
M.
Sorry, the problem was really trivial.
During refactoring, one of the packages got an "s" at the end, and I missed updating it in the config.
"transforms.Validation.type" : "org.kafka.connect.my.connectors.ValidateId"
instead of
"transforms.Validation.type" : "org.kafka.connect.my.connector.ValidateId"
I refactored it a moment before switching from standalone to distributed mode.
Once more, sorry for worrying you, and thank you for your support.
Regards,
Marcin
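For anyone building something similar, a bare-bones skeleton of a custom Kafka Connect transformation under the corrected class name might look like this (a sketch only; the actual validation logic is not shown in the question):

package org.kafka.connect.my.connectors;

import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.transforms.Transformation;

public class ValidateId<R extends ConnectRecord<R>> implements Transformation<R> {

    @Override
    public void configure(Map<String, ?> configs) {
        // Read any transform-specific settings here.
    }

    @Override
    public R apply(R record) {
        // Validation logic would go here; return the record unchanged,
        // a modified copy, or null to drop it from the pipeline.
        return record;
    }

    @Override
    public ConfigDef config() {
        // Declare the transform's config options (none in this sketch).
        return new ConfigDef();
    }

    @Override
    public void close() {
    }
}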

Swagger 1.5 not generating valid JSON description

I am trying to make Swagger document my API, which is composed of jersey-spring 2.22.2 with Spring 4.3 and Jackson 2.22.2.
The swagger package I'm using is:
<dependency>
<groupId>io.swagger</groupId>
<artifactId>swagger-jersey2-jaxrs</artifactId>
<scope>compile</scope>
<version>1.5.12</version>
</dependency>
One of the endpoint declarations:
@POST
@ApiOperation(
        value = "creates folder hierarchy type client|lead",
        notes = "creates folder hierarchy type client|lead"
)
@ApiResponses(value = {
        @ApiResponse(code = 200, message = "creation successfull")
})
@Path("create_type")
@Consumes(MediaType.MULTIPART_FORM_DATA)
public Response createHierarchy(
        @ApiParam(value = "hierarchy type", required = true) @NotNull @FormDataParam("type") EHierarchyType hierarchyType,
        @ApiParam(value = "parametric part of the hierarchy", required = true) @NotNull @FormDataParam("params") Map<String, Folder2> folderMap
) throws ItemExistsException, AccessDeniedException, PathNotFoundException, WebserviceException, RepositoryException, DatabaseException, ExtensionException, AutomationException, UnknowException, IOException, UserQuotaExceededException, LockException, VersionException {
    StopWatch stopWatch = new StopWatch();
    folderCtrl.createHierarchy(folderMap, hierarchyType);
    logger.info("create hierarchy took: " + stopWatch.getElapsedTime());
    return Response.ok().build();
}
and this is how the generated JSON looks for this endpoint:
"/folder/create_type" : {
"post" : {
"tags" : [ "folder" ],
"summary" : "creates folder hierarchy type client|lead",
"description" : "creates folder hierarchy type client|lead",
"operationId" : "createHierarchy",
"consumes" : [ "multipart/form-data" ],
"parameters" : [ {
"name" : "type",
"in" : "formData",
"description" : "hierarchy type",
"required" : true,
"type" : "string",
"enum" : [ "CLIENT", "LEAD" ]
}, {
"name" : "params",
"in" : "formData",
"description" : "parametric part of the hierarchy",
"required" : true,
"type" : "object"
} ],
"responses" : {
"200" : {
"description" : "creation successfull"
}
}
}
}
When I try to parse this output in Swagger Editor it returns an error, and I think the reason might be that for the "params" parameter it generated the type object instead of a schema. My point here is to find out why. Is it some bug in Swagger, or is it me that missed something?
Also, on another endpoint I have, there is a @FormDataParam that is a POJO model object annotated with @ApiModel. This is translated by Swagger as type 'ref', but it doesn't give the user any clue what this object is or which fields it should contain. In Swagger-UI I see just 'undefined' as the param type, which is not very informative. What do I need to do in order to see the object's structure and to supply its JSON definition as an example to try in the UI?
Thanks
This answer contains examples of how the final Swagger spec should look, but I don't know how to express that using Swagger @annotations. Hope this gives you some ideas anyway.
In Swagger 2.0, there is no straightforward way to have file + object in the request body: form parameters can be primitive values, arrays and files but not objects, and body parameters support objects but not files (although you can try representing files as type: string, more on that below).
The next version, OpenAPI Specification 3.0 (which is RC at the time of writing), will support request bodies containing files + objects; check this example. I assume the @annotations will be updated to support that too.
For now you have a couple of options.
1) One possible way is to pass the file contents as a binary string as part of the body parameter. Your API spec would look like:
paths:
  /something:
    post:
      consumes:
        - application/json
      parameters:
        - in: body
          name: body
          required: true
          schema:
            $ref: '#/definitions/FileWithMetadata'
      ...
definitions:
  FileWithMetadata:
    type: object
    required: [file_data]
    properties:
      file_data:
        type: string
        format: binary  # or format: byte
      metadata:
        type: object
        additionalProperties:
          type: string
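A client would then send the file contents base64-encoded in file_data. A rough Jackson sketch of building such a body (the file name and metadata values are only examples):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;
import java.util.LinkedHashMap;
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;

public class FileWithMetadataBody {
    public static void main(String[] args) throws Exception {
        byte[] fileBytes = Files.readAllBytes(Paths.get("foo.zip"));
        Map<String, Object> body = new LinkedHashMap<>();
        // format: byte in the spec above corresponds to base64-encoded content.
        body.put("file_data", Base64.getEncoder().encodeToString(fileBytes));
        Map<String, String> metadata = new LinkedHashMap<>();
        metadata.put("description", "A ZIP file");
        metadata.put("author", "Helen");
        body.put("metadata", metadata);
        System.out.println(new ObjectMapper().writeValueAsString(body));
    }
}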
2) Another possible way is to send the metadata names and values as separate arrays, so you would have 3 form parameters: the file, an array of key names, and an array of key values. This is analogous to:
curl -F "file=#foo.zip" -F "metadata_keys=description,author" -F "metadata_values=A ZIP file,Helen" https://api.example.com
Your API spec would look like this:
paths:
  /something:
    post:
      consumes:
        - multipart/form-data
      parameters:
        - in: formData
          name: file
          type: file
          required: true
        - in: formData
          name: metadata_keys
          type: array
          items:
            type: string
        - in: formData
          name: metadata_values
          type: array
          items:
            type: string

How to index an array of elements in Elasticsearch?

I'm using Elasticsearch 1.4.3 and I'm trying to create an automated "filler" for the database.
The idea is to use this website http://beta.json-generator.com/BhxCdZ6 to generate a random set of data and push it into an Elasticsearch index.
For interfacing with Elasticsearch, I am using the Elasticsearch Java API mixed with the Elasticsearch web API.
I managed to push one user at a time simply by copy-pasting the information, excluding the [ and ] characters, and creating a shell script that calls
curl -XPOST 'http://localhost:9200/myindex/users/' -d '{
"name": {
"first": "Dickerson",
"last": "Wood"
}, etc...
If I try to copy a full block composed of 3 people and push the data with the same script:
curl -XPOST 'http://localhost:9200/geocon/users/' -d '[
{
"name": {
"first": "Dickerson",
"last": "Wood"
}, etc ...
]
}'
The error returned is:
org.elasticsearch.index.mapper.MapperParsingException: Malformed content, must start with an object
How would you solve this problem? Thank you!
You are missing the closing brace wrapping the item:
[
{
"name": {
"first": "Dickerson",
"last": "Wood"
}, etc.
]
You can validate your JSON e.g. via http://jsonlint.com/.
Also, try taking a look at the bulk API: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-bulk.html
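Since you are already on the Java API, a rough sketch of a bulk call with the 1.x-era transport client (assuming Elasticsearch is reachable on localhost at the default transport port 9300):

import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class BulkUsers {
    public static void main(String[] args) {
        // 1.x transport client; 9300 is the default transport port.
        Client client = new TransportClient()
                .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
        BulkRequestBuilder bulk = client.prepareBulk();
        // One index request per generated user; each source must be a single JSON object.
        bulk.add(client.prepareIndex("geocon", "users")
                .setSource("{\"name\":{\"first\":\"Dickerson\",\"last\":\"Wood\"}}"));
        BulkResponse response = bulk.execute().actionGet();
        System.out.println("failures: " + response.hasFailures());
        client.close();
    }
}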
