Missing double quotes for a required field using SnakeYAML - java

I am trying to read a YAML template, replace certain fields in it dynamically, and create a new YAML file. The resulting YAML file should match the template in all respects, including the double quotes. But the double quotes are missing from the required fields when I use SnakeYAML.
Can anyone please suggest how to resolve this issue?
Example:
My yaml template is as shown below:
version: snapshot-01
kind: sample
metadata:
  name: abc
groups:
  id: "1000B"
  category: category1
I am reading the above template and replacing the required fields dynamically as shown below.
Yaml yaml = new Yaml();
InputStream inputStream = this.getClass().getClassLoader().getResourceAsStream(yamlTemplateLocation);
Map<String, Object> yamlMap = yaml.load(inputStream);
Now I am replacing the required fields as shown below
yamlMap.put("version","v-1.0");
Map<String, Object> metadata = (Map<String, Object>) yamlMap.get("metadata");
metadata.put("name", "XYZ");
Map<String, Object> groups = (Map<String, Object>) yamlMap.get("groups");
groups.put("id","5000Z");
groups.put("category","newCategory");
DumperOptions options = new DumperOptions();
options.setDefaultFlowStyle(DumperOptions.FlowStyle.BLOCK);
options.setPrettyFlow(true);
yaml = new Yaml(options); // reusing the variable; re-declaring yaml would not compile
String output = yaml.dump(yamlMap);
System.out.println(output);
I am expecting the output shown below.
Expected output:
version: v-1.0
kind: sample
metadata:
  name: XYZ
groups:
  id: "5000Z"
  category: newCategory
But I am actually getting output as below
version: v-1.0
kind: sample
metadata:
  name: XYZ
groups:
  id: 5000Z
  category: newCategory
My problem here is that the double quotes around the "id" value are missing in the new YAML file.
When I use options.setDefaultScalarStyle(ScalarStyle.DOUBLE_QUOTED), all fields come out double-quoted, which is not what I want. I need double quotes for the id field only.
Can anyone please advise how to resolve this issue?
Thanks

If your input is a template, it might be better to use a templating engine. As a simple example, MessageFormat would allow you to write id: "{0}" and then interpolate the actual value into it, keeping the double quotes. You could use more sophisticated templating depending on your use-case.
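For illustration, a minimal sketch of that MessageFormat idea (the template string here is invented for the example):

import java.text.MessageFormat;

// The quotes live in the template text, so they survive interpolation.
String template = "groups:\n  id: \"{0}\"\n  category: {1}\n";
String rendered = MessageFormat.format(template, "5000Z", "newCategory");
System.out.println(rendered); // id: "5000Z" stays double-quoted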
That being said, let's look at how to do it with SnakeYAML:
If you want to control how a single item is rendered as a scalar, you have to define a class like this:
class QuotedString {
    public String value;

    public QuotedString(String value) {
        this.value = value;
    }
}
And then create a custom representer for it:
import org.yaml.snakeyaml.DumperOptions;
import org.yaml.snakeyaml.nodes.Node;
import org.yaml.snakeyaml.nodes.Tag;
import org.yaml.snakeyaml.representer.Represent;
import org.yaml.snakeyaml.representer.Representer;

class MyRepresenter extends Representer {
    public MyRepresenter() {
        this.representers.put(QuotedString.class, new RepresentQuotedString());
    }

    private class RepresentQuotedString implements Represent {
        public Node representData(Object data) {
            QuotedString str = (QuotedString) data;
            return representScalar(
                    Tag.STR, str.value, DumperOptions.ScalarStyle.DOUBLE_QUOTED);
        }
    }
}
Modify your code to use the new class:
groups.put("id", new QuotedString("5000Z"));
And finally, instruct SnakeYAML to use your representer:
Yaml yaml = new Yaml(new MyRepresenter(), options);
This should do it.
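Putting the pieces together, the dump from the question could then look like this (a sketch reusing the question's variables; SnakeYAML 1.x constructors assumed):

groups.put("id", new QuotedString("5000Z"));

DumperOptions options = new DumperOptions();
options.setDefaultFlowStyle(DumperOptions.FlowStyle.BLOCK);
options.setPrettyFlow(true);
Yaml yaml = new Yaml(new MyRepresenter(), options);
System.out.println(yaml.dump(yamlMap)); // only id comes out double-quoted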

Related

Unable to validate query parameters against predefined yaml file using json schema validator

I need to validate a query parameter's schema against a predefined YAML file schema, so I am using the JSON schema validator. However, validation is failing.
I am following the steps below:
Populate each parameter and its corresponding schema:
final List<Parameter> parameters = openAPI.getPaths().get(requestPath).getGet().getParameters()
        .stream().filter(parameter -> Objects.nonNull(parameter.getIn()) && parameter.getIn().equalsIgnoreCase("query"))
        .collect(Collectors.toList());
final Map<Parameter, JsonNode> parameterAndSchema = parameters.stream().collect(Collectors.toMap(Function.identity(), parameter -> {
    JsonNode parameterSchema;
    try {
        final Schema schema = parameter.getSchema();
        parameterSchema = mapper.readTree(objectWriter.writeValueAsString(schema));
        return parameterSchema;
    } catch (JsonProcessingException e) {
        throw new RuntimeException(e);
    }
}));
Create queryParameterSchema to validate the query parameters against the corresponding schema prepared in step 1 (query parameters hard-coded for testing):
final Map<String, String> queryParameterMap = Map.of("test-parameter", "testValue1");
JsonNode queryParameterSchema = new ObjectMapper()
        .convertValue(queryParameterMap, JsonNode.class);
Convert the step 1 schema (prepared from the YAML) into a JsonSchema as below:
JsonSchemaFactory schemaFactory = JsonSchemaFactory.getInstance(SpecVersion.VersionFlag.V7);
SchemaValidatorsConfig config = new SchemaValidatorsConfig();
config.setTypeLoose(true);
config.setFailFast(false);
JsonSchema jsonSchema = schemaFactory.getSchema(schema, config);
processingReport = jsonSchema.validate(queryParameterSchema, queryParameterSchema, at);
Sample yaml file:
paths:
  /test-instances:
    get:
      tags:
        - Instances (Store)
      summary: Test summary
      operationId: SearchInstances
      parameters:
        - name: test-parameter
          in: query
          description: Names of the services offered
          required: false
          style: form
          explode: false
          schema:
            type: string
However, when I try to validate queryParameterSchema against this JsonSchema, the TypeValidator is invoked and always returns false: the queryParameterSchema populated at step 2 always comes in as an object node (schema type OBJECT), while the validator's schema type is string (because it is defined as string in the YAML).
I think I may have to create queryParameterSchema at step 2 differently, but I am not sure how.
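For what it's worth, a sketch of how the type mismatch could be avoided, assuming the networknt json-schema-validator used above: validate the raw parameter value as a string node rather than wrapping the whole map in an object node, so the node type matches the schema's type: string.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.TextNode;
import com.networknt.schema.ValidationMessage;
import java.util.Set;

// Validate each parameter's value, not the enclosing map.
JsonNode value = TextNode.valueOf(queryParameterMap.get("test-parameter"));
Set<ValidationMessage> messages = jsonSchema.validate(value);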

How to Read Array of Object from YAML file in Spring

We are working on a Spring-based application which needs to access and read certain credentials stored in a YAML file with the format below:
#yaml file content
...
secrets: username1:pw1, username2:pw2
...
We load the YAML file using the @PropertySource annotation like this:
@PropertySources({
    @PropertySource(value = "file:/someSecretPath/secrets.yaml", ignoreResourceNotFound = true) // secret yaml file
})
I have tried using the following code to parse it into an array of map objects, since it is a list containing two object entries:
@Repository
public class getSecretClass {
    @Value("${secrets}")
    private Map<String, String>[] credentials;
}
but it fails with this error:
... 'java.lang.String' cannot be converted into 'java.util.Map[]' ...
On the other hand, I tested reading from a simple string array like this one:
#yaml file content
...
secrets: text1, text2
...
and the code below works with no issue:
@Repository
public class getSecretClass {
    @Value("${secrets}")
    private String[] credentials;
    ...
    private void someReadSecretMethod() {
        System.out.println(credentials[0]);
    }
}
It prints "text1" as expected. But we do need to store the credentials as pairs in our use case; plain text would not work for us.
Any suggestion or advice would be much appreciated!
Why not do something easier on the eyes, and easier to program, if a little verbose:
secrets:
  - username: username1
    password: pwd1
  - username: username2
    password: pwd2
Have a class for your secret:
public class Secret {
    private String username;
    private String password;

    public String toString() { return username + ":" + password; }
}
And then inject it as:
@Value("${secrets}")
List<Secret> secrets;
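For completeness, a structured list like this is often bound with Spring Boot's @ConfigurationProperties instead of @Value; a minimal sketch, assuming Spring Boot is available (the holder class name is illustrative, and Secret needs getters/setters for binding):

import java.util.List;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties // binds the top-level "secrets" list from the YAML
public class SecretsHolder {

    private List<Secret> secrets;

    public List<Secret> getSecrets() { return secrets; }

    public void setSecrets(List<Secret> secrets) { this.secrets = secrets; }
}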
If you must use a map: I think an array of maps is probably not what you want. Maybe you need one map which contains the two keys, username1 and username2. The following might show how to obtain such a map.
The yml file content is like this:
secrets: "{username1: 'pw1', username2: 'pw2'}"
The Java code field is like this:
@Value("#{${secrets}}")
public Map<String, String> secrets;

How to write List<Number> with OpenCSV?

I have the following field inside a StacItem object:
@JsonProperty
private List<Number> bbox = null;
I made a basic implementation with OpenCSV to write this object to a CSV, and it mostly works with this code (I'm showing just the relevant part):
final StatefulBeanToCsv<Object> beanToCSV = new StatefulBeanToCsvBuilder<>(writer)
        .withSeparator(';')
        .build();
for (StacItem item : items) {
    beanToCSV.write(item);
}
HttpHeaders httpHeaders = new HttpHeaders();
httpHeaders.set(HttpHeaders.CONTENT_TYPE, "text/csv");
httpHeaders.set(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=exportItems.csv");
logger.info("END WRITING");
return new ResponseEntity<>(new FileSystemResource(file), HttpStatus.OK);
Here's the twist! In the logs of my microservice, I see the full structure of that StacItem, and it should have this bbox field:
bbox=[8.24275148213394, 45.5050129344147, 7.62767704092889, 45.0691351737573]
While my implementation returns just this:
"8.24637830863774"
So when I open my CSV I find the column "bbox" with just one value, but I need the others too. Can you please tell me why it stops at the first one, or how to get the other 3?
UPDATE:
I found that this does the trick! But then... it exports just this single field for every StacItem, so I lose every other field in my object.
@CsvBindAndSplitByName(elementType = Number.class, writeDelimiter = ",")
@JsonProperty("bbox")
private List<Number> bbox = null;
Thanks
Try using CsvBindByName on every field you want to map (specifying the column attribute of the annotation is not mandatory).
You can even use CsvBindByPosition if you prefer.
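For instance, a trimmed-down sketch of how the bean could be annotated (every field name except bbox is invented for the example):

import java.util.List;
import com.opencsv.bean.CsvBindAndSplitByName;
import com.opencsv.bean.CsvBindByName;

public class StacItem {

    // plain fields each get their own binding so they are not dropped
    @CsvBindByName(column = "id")
    private String id;

    // the collection field keeps the split binding from the UPDATE above
    @CsvBindAndSplitByName(elementType = Number.class, writeDelimiter = ",")
    private List<Number> bbox;

    // getters and setters omitted for brevity
}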
Did you try changing
beanToCSV.write(item); -> beanToCSV.writeNext(item);
or
for (StacItem item : items) {
    beanToCSV.write(item);
}
// to
beanToCSV.write(items);

Converting Typesafe Config type to java.util.Properties

The title speaks for itself: I have a Config object (from https://github.com/typesafehub/config) and I want to pass it to a constructor which only supports java.util.Properties as an argument.
Is there an easy way to convert a Config to a Properties object?
Here is a way to convert a Typesafe Config object into a Java Properties object. I have only tested it in a simple case for creating Kafka properties.
Given this configuration in application.conf
kafka-topics {
  my-topic {
    zookeeper.connect = "localhost:2181",
    group.id = "testgroup",
    zookeeper.session.timeout.ms = "500",
    zookeeper.sync.time.ms = "250",
    auto.commit.interval.ms = "1000"
  }
}
You can create the corresponding Properties object like that:
import com.typesafe.config.{Config, ConfigFactory}
import java.util.Properties
import kafka.consumer.ConsumerConfig

object Application extends App {

  def propsFromConfig(config: Config): Properties = {
    import scala.collection.JavaConversions._

    val props = new Properties()

    val map: Map[String, Object] = config.entrySet().map({ entry =>
      entry.getKey -> entry.getValue.unwrapped()
    })(collection.breakOut)

    props.putAll(map)
    props
  }

  val config = ConfigFactory.load()

  val consumerConfig = {
    val topicConfig = config.getConfig("kafka-topics.my-topic")
    val props = propsFromConfig(topicConfig)
    new ConsumerConfig(props)
  }

  // ...
}
The function propsFromConfig is what you are mainly interested in. The key points are the use of entrySet to get a flattened list of properties, and the unwrapped call on each entry value, which gives an Object whose type depends on the configuration value.
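Since the question asks for Java, a rough Java equivalent of the same idea might look like this (a sketch, assuming the com.typesafe:config library is on the classpath):

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import com.typesafe.config.ConfigValue;
import java.util.Map;
import java.util.Properties;

public class ConfigToProperties {

    static Properties propsFromConfig(Config config) {
        Properties props = new Properties();
        // entrySet() flattens nested objects into dotted paths;
        // unwrapped() turns each ConfigValue into a plain Java object.
        for (Map.Entry<String, ConfigValue> entry : config.entrySet()) {
            props.put(entry.getKey(), entry.getValue().unwrapped().toString());
        }
        return props;
    }

    public static void main(String[] args) {
        Config topicConfig = ConfigFactory.load().getConfig("kafka-topics.my-topic");
        propsFromConfig(topicConfig).forEach((k, v) -> System.out.println(k + " = " + v));
    }
}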
You can try my Scala wrapper https://github.com/andr83/scalaconfig. With it, converting a config object to Java Properties is simple:
val properties = config.as[Properties]
As typesafe config/HOCON supports a much richer structure than java.util.Properties, it is hard to get a safe conversion.
Put another way: since properties can only express a subset of HOCON, the conversion is not well defined and may lose information.
So if your configuration is rather flat and does not contain UTF-8, you could transform the HOCON to JSON and then extract the values.
A better solution would be to implement a config class, populate it with values from the HOCON, and pass it to the class you want to configure.
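A hedged illustration of that last suggestion, reading values straight off the Config (the KafkaSettings class is made up for the example; the paths come from the application.conf above):

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// Hypothetical settings holder handed to the class being configured.
public class KafkaSettings {
    final String zookeeperConnect;
    final String groupId;

    KafkaSettings(String zookeeperConnect, String groupId) {
        this.zookeeperConnect = zookeeperConnect;
        this.groupId = groupId;
    }

    // Populate the holder from the HOCON shown earlier.
    static KafkaSettings fromConfig(Config topic) {
        return new KafkaSettings(
                topic.getString("zookeeper.connect"),
                topic.getString("group.id"));
    }

    public static void main(String[] args) {
        Config topic = ConfigFactory.load().getConfig("kafka-topics.my-topic");
        KafkaSettings settings = KafkaSettings.fromConfig(topic);
        System.out.println(settings.groupId);
    }
}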
It is not possible directly through typesafe config. Even rendering the entire HOCON file into JSON does not produce truly valid JSON.
For example:
"play" : {
    "filters" : {
        "disabled" : ${?play.filters.disabled}[
            "play.filters.hosts.AllowedHostsFilter"
        ],
        "disabled" : ${?play.filters.disabled}[
            "play.filters.csrf.CSRFFilter"
        ]
    }
}
That format comes directly from Config.render.
As you can see, disabled is represented twice, with HOCON-style syntax.
I have also had problems with the hocon -> json -> hocon round trip.
Example HOCON:
http {
    port = "9000"
    port = ${?HTTP_PORT}
}
typesafe config would render this as:
{
    "http": {
        "port": "9000,${?HTTP_PORT}"
    }
}
However, if you try to parse that as HOCON, it throws a syntax error: the comma cannot be there.
The correct HOCON would be 9000${?HTTP_PORT}, with no comma between the values. I believe this is true for all array concatenation and substitution.

remove square brackets from domino access services

I want to access Domino data via the Domino Access Services (DAS) REST provider in Java, e.g.:
String url = "http://malin1/fakenames.nsf/api/data/collections/name/groups";
ObjectMapper mapper = new ObjectMapper();
JsonFactory factory = new JsonFactory();
JsonParser parser = factory.createParser(new URL(url));
JsonNode rootNode = mapper.readTree(parser);
However, I notice DAS wraps the JSON in square brackets:
[
  {
    "@entryid": "1-D68BB54DEA77AC8085256B700078923E",
    "@unid": "D68BB54DEA77AC8085256B700078923E",
    "@noteid": "1182",
    "@position": "1",
    "@read": true,
    "@siblings": 3,
    "@form": "Group",
    "name": "LocalDomainAdmins",
    "description": "This group should contain all Domino administrators in your domain. Most system databases and templates give people in this group Manager access."
  },
  {
    "@entryid": "3-9E6EABBF405A1A9985256B020060E64E",
    "@unid": "9E6EABBF405A1A9985256B020060E64E",
    "@noteid": "F46",
    "@position": "3",
    "@read": true,
    "@siblings": 3,
    "@form": "Group",
    "name": "OtherDomainServers",
    "description": "You should add all Domino servers in other domains with which you commonly replicate to this group."
  }
]
How can I easily get rid of these brackets?
As already mentioned, you should leave them intact. You can parse the JSON array, for example with Jackson.
Find an example snippet below:
import java.io.IOException;
import org.codehaus.jackson.JsonNode;
import org.codehaus.jackson.map.ObjectMapper;
...
String response = ... // your posted string
ObjectMapper mapper = new ObjectMapper();
try {
    JsonNode taskIdsjsonNode = mapper.readTree(response);
    for (JsonNode next : taskIdsjsonNode) {
        System.out.printf("%s: %s%n", "@entryid", next.get("@entryid"));
        System.out.printf("%s: %s%n", "name", next.get("name"));
    }
} catch (IOException e) {
    // your exception handling goes here
}
output:
@entryid: "1-D68BB54DEA77AC8085256B700078923E"
name: "LocalDomainAdmins"
@entryid: "3-9E6EABBF405A1A9985256B020060E64E"
name: "OtherDomainServers"
The brackets are not nasty; they are correct notation. To access the contents, just use [0] in your client-side script, or in Java use whichever JSON parser you like.
Perhaps the explanation here can help:
https://quintessens.wordpress.com/2015/05/08/processing-json-data-from-domino-access-services-with-jackson/
Basically you establish a call to DAS via the Jersey client and then parse the JSON via the Jackson library into a map in Java.
During the parsing process you can define which values you want to parse and how to transform them.
Take a look at the Person class...
