Currently I'm using the Jackson JSON Processor to write preference data and the like to files, mainly because I want advanced users to be able to modify/back up this data. Jackson is awesome for this because it's incredibly easy to use and apparently performs decently (see here). The only problem I seem to be having is that when I run myObjectMapper.writeValue(myFile, myJsonObjectNode), it writes all of the data in the ObjectNode to a single line. I would like to format the JSON in a more user-friendly way.
For example, if I pass a simple JSON tree to it, it will write the following:
{"testArray":[1,2,3,{"testObject":true}], "anotherObject":{"A":"b","C":"d"}, "string1":"i'm a string", "int1": 5092348315}
I would want it to show up in the file as:
{
    "testArray": [
        1,
        2,
        3,
        {
            "testObject": true
        }
    ],
    "anotherObject": {
        "A": "b",
        "C": "d"
    },
    "string1": "i'm a string",
    "int1": 5092348315
}
Is anyone aware of a way I could do this with Jackson, or do I have to get the JSON String from Jackson and use another third-party lib to format it?
Thanks in advance!
Try creating an ObjectWriter like this:
ObjectWriter writer = mapper.defaultPrettyPrintingWriter();
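A minimal usage sketch, assuming the Jackson 1.x API (where defaultPrettyPrintingWriter() lives; Jackson 2.x renamed it to writerWithDefaultPrettyPrinter()) and the mapper, myFile, and myJsonObjectNode variables from the question:
ObjectWriter writer = mapper.defaultPrettyPrintingWriter();
// writes the node indented across multiple lines instead of on one line
writer.writeValue(myFile, myJsonObjectNode);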
You need to configure the mapper beforehand as follows (this is the Jackson 1.x API; see below for the Jackson 2.x equivalent):
ObjectMapper mapper = new ObjectMapper();
mapper.configure(SerializationConfig.Feature.INDENT_OUTPUT, true);
mapper.writeValue(myFile, myJsonObjectNode);
As per the comments above, this worked very well for me:
Object json = mapper.readValue(content, Object.class);
mapper.writerWithDefaultPrettyPrinter().writeValueAsString(json);
where content is your JSON string response.
Jackson version: 2.12
To enable standard indentation in Jackson 2.0.2 and above use the following:
ObjectMapper myObjectMapper = new ObjectMapper();
myObjectMapper.enable(SerializationFeature.INDENT_OUTPUT);
myObjectMapper.writeValue(myFile, myJsonObjectNode);
source: https://github.com/FasterXML/jackson-databind
I was wondering if there is a way to read a YAML file in Java without having to create a lot of POJOs, but still be able to read the elements of the YAML cleanly; that is, without messing with LinkedHashMaps.
Is there a library or something that can do this?
Thanks in advance
Regards
You can use the Jackson library: ObjectMapper (the most important class of that library) has a readTree method which returns a JsonNode that you can read and traverse. Usage is pretty simple:
String yamlString =
"---\n" +
"name: Bob\n" +
"age: 35";
ObjectMapper mapper = new ObjectMapper(new YAMLFactory()); // YAMLFactory comes from the jackson-dataformat-yaml module
JsonNode root = mapper.readTree(yamlString);
String name = root.get("name").asText();
int age = root.get("age").asInt();
Make sure you check some tutorials, and don't get confused if you find a lot of material about JSON: Jackson was originally a JSON-parsing library, and the other formats were added later.
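Along the same lines, a hedged sketch for reading straight from a file and walking a nested structure; the config.yaml name and its server/port layout are hypothetical, and YAMLFactory again requires the jackson-dataformat-yaml module:
import java.io.File;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;

ObjectMapper mapper = new ObjectMapper(new YAMLFactory());
JsonNode root = mapper.readTree(new File("config.yaml"));
// path() returns a "missing node" instead of null, so chained lookups are NPE-safe
int port = root.path("server").path("port").asInt();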
I have a file like this:
[{
    "messageType": "TYPE_1",
    "someData": "Data"
},
{
    "messageType": "TYPE_2",
    "dataVersion": 2
}]
As you can see, the file contains different types of JSON objects. I also have an ObjectMapper which is able to parse both types. I have to read the JSON objects one by one (because this file can be pretty huge) and get the right object (Type1Obj or Type2Obj) for each of them.
My question is how I can achieve reading the JSON objects one by one from the file with Jackson.
You could read the array as a generic Jackson JSON object similar to
ObjectMapper objectMapper = new ObjectMapper();
JsonNode rootNode = objectMapper.readTree(jsonData);
then traverse all the children of the array using
rootNode#elements()
and parse every one of the JsonNode children into the respective type using a check of messageType similar to
if ("TYPE_1".equals(childNode.get("messageType")) {
Type1Class type1 = objectMapper.treeToValue(childNode, Type1Class.class);
} else // ...
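Since the file can be pretty huge, here is a hedged streaming sketch that reads one array element at a time instead of building the whole tree up front; the messages.json file name and the handleType1/handleType2 methods are hypothetical:
import java.io.File;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.JsonNode;

// assumes an enclosing method that declares throws IOException
try (JsonParser parser = objectMapper.getFactory().createParser(new File("messages.json"))) {
    if (parser.nextToken() == JsonToken.START_ARRAY) {
        while (parser.nextToken() == JsonToken.START_OBJECT) {
            // reads exactly one array element into a tree; the rest stays on disk
            JsonNode childNode = objectMapper.readTree(parser);
            if ("TYPE_1".equals(childNode.get("messageType").asText())) {
                handleType1(objectMapper.treeToValue(childNode, Type1Class.class));
            } else {
                handleType2(objectMapper.treeToValue(childNode, Type2Class.class));
            }
        }
    }
}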
I want to save the entities in our program to .json files to get a better connection between the backend and our Angular frontend. For this I wrote some tests, and during their execution the structure is written to the files.
The structure is written out by
ObjectMapper objectMapper = new ObjectMapper();
try {
    ObjectWriter writer = objectMapper.writer(new DefaultPrettyPrinter());
    String result = objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(new OurObject());
    writer.writeValue(new File("path"), result);
} catch (IOException e) {
    e.printStackTrace();
}
What I got:
"{\r\n \"firstProp\": something,\r\n \"secondProp\": anything,\r\n...
But I want the file to contain the classical JSON structure, to make it more readable:
{
    "firstProp": something,
    "secondProp": anything,
    ...
What can I do to write it in the desired JSON structure?
Thanks for any help
Matthias
You're double-encoding the JSON string.
Remove writeValueAsString and directly use writer.writeValue(file, object), as in the sketch below.
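A minimal corrected sketch, assuming the OurObject class and file path from the question:
ObjectMapper objectMapper = new ObjectMapper();
ObjectWriter writer = objectMapper.writer(new DefaultPrettyPrinter());
// pass the object itself: the writer serializes it exactly once, pretty-printed
writer.writeValue(new File("path"), new OurObject());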
But if you're emitting this from a Java backend, it's typically best practice to serve it via an HTTP request, not as a file, to any front-end.
When calling org.bson.Document.toJson() for a document containing an Int64 (Java long) value, it is encoded using Mongo's "$numberLong" encoding. For example this code:
Document parse = Document.parse("{}");
parse.put("version", 1L);
String json = parse.toJson();
Produces this JSON:
{ "version" : { "$numberLong" : "1" } }
In order to make it more interoperable, I'd like to encode it as a String instead, i.e. like this:
{ "version" : "1" }
I've read the wiki on codecs and extended JSON, as well as most of the Javadoc, but I cannot for the life of me figure out how to register a custom JSON serializer for longs. Even the default org.bson.codecs.LongCodec doesn't seem to be the one that introduces the $numberLong syntax.
Obviously a simple setting (e.g. an equivalent of Gson's setLongSerializationPolicy(LongSerializationPolicy.STRING)) would be ideal, but any way to make this work would be appreciated.
To get strict JSON you can use com.mongodb.util.JSON and its serialize() method:
JSON.serialize(parse)
Or you can use a JSON library, for example Gson:
Gson gson = new Gson();
String gsonString = gson.toJson(parse);
Or use json.simple:
JSONObject.toJSONString(parse)
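A minimal sketch of that call, relying on the fact that org.bson.Document implements Map<String, Object>, which is what json-simple's static toJSONString(Map) expects; parse is the Document from the question:
import org.json.simple.JSONObject;

// Document implements Map, so json-simple serializes it as a plain JSON object
String json = JSONObject.toJSONString(parse);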
I have been reading a lot about Apache Avro these days, and I am more inclined towards using it instead of JSON. Currently, we serialize the JSON document using Jackson and then write that serialized JSON document into Cassandra for each row key/user id. Then we have a REST service that reads the whole JSON document using the row key, deserializes it, and uses it further.
We write into Cassandra like this:
user-id column-name serialize-json-document-value
Below is an example which shows the JSON document that we are writing into Cassandra. This JSON document is for a particular row key/user id.
{
    "lv": [ {
        "v": {
            "site-id": 0,
            "categories": {
                "321": {
                    "price_score": "0.2",
                    "confidence_score": "0.5"
                },
                "123": {
                    "price_score": "0.4",
                    "confidence_score": "0.2"
                }
            },
            "price-score": 0.5,
            "confidence-score": 0.2
        }
    } ],
    "lmd": 1379214255197
}
Now we are thinking of using Apache Avro so that we can compact this JSON document by serializing it with Apache Avro and then storing it in Cassandra. I have a couple of questions on this:
Is it possible to serialize the above JSON document using Apache Avro in the first place, and then write it into Cassandra? If yes, how can I do that? Can anyone provide a simple example?
We also need to deserialize it while reading back from Cassandra in our REST service. Is this possible as well?
Below is my simple code, which serializes the JSON document and prints it to the console.
public static void main(String[] args) {
    final long lmd = System.currentTimeMillis();

    Map<String, Object> props = new HashMap<String, Object>();
    props.put("site-id", 0);
    props.put("price-score", 0.5);
    props.put("confidence-score", 0.2);

    Map<String, Category> categories = new HashMap<String, Category>();
    categories.put("123", new Category("0.4", "0.2"));
    categories.put("321", new Category("0.2", "0.5"));
    props.put("categories", categories);

    AttributeValue av = new AttributeValue();
    av.setProperties(props);

    Attribute attr = new Attribute();
    attr.instantiateNewListValue();
    attr.getListValue().add(av);
    attr.setLastModifiedDate(lmd);

    // serialize it
    try {
        String jsonStr = JsonMapperFactory.get().writeValueAsString(attr);
        // then write into Cassandra
        System.out.println(jsonStr);
    } catch (JsonGenerationException e) {
        e.printStackTrace();
    } catch (JsonMappingException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
The serialized JSON document will look something like this:
{"lv":[{"v":{"site-id":0,"categories":{"321":{"price_score":"0.2","confidence_score":"0.5"},"123":{"price_score":"0.4","confidence_score":"0.2"}},"price-score":0.5,"confidence-score":0.2}}],"lmd":1379214255197}
The AttributeValue and Attribute classes use Jackson annotations.
One more important note: the properties inside the above JSON document change depending on the column name. We have different properties for different column names; some column names have two properties, some have five. So the above JSON document has the correct properties and values according to our metadata.
I hope the question is clear enough. Can anyone provide a simple example of how I can achieve this using Apache Avro? I am just starting with Apache Avro, so I am having a lot of problems.
Since you already use Jackson, you could try the Jackson Avro dataformat module to support Avro-encoded data.
Avro requires a schema, so you MUST design one before using it, and usage differs a lot from free-form JSON.
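A hedged sketch of that route, assuming the jackson-dataformat-avro module plus the Attribute class and attr variable from the question's code; note that schema generation only works when field types are concrete (Avro cannot derive a schema for a Map<String, Object>):
import com.fasterxml.jackson.dataformat.avro.AvroMapper;
import com.fasterxml.jackson.dataformat.avro.AvroSchema;

// AvroMapper extends ObjectMapper, so the Jackson annotations on Attribute still apply
AvroMapper avroMapper = new AvroMapper();
AvroSchema schema = avroMapper.schemaFor(Attribute.class);
// compact Avro binary: this is what you would store in Cassandra
byte[] avroBytes = avroMapper.writer(schema).writeValueAsBytes(attr);
// and read it back in the REST service
Attribute restored = avroMapper.readerFor(Attribute.class).with(schema).readValue(avroBytes);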
But instead of Avro, you might want to consider Smile: a one-to-one binary serialization of JSON, designed for use cases where you may want to go back and forth between JSON and binary data, for example to use JSON for debugging, or when serving JavaScript clients.
Jackson has a Smile backend (see https://github.com/FasterXML/jackson-dataformat-smile), and it is literally a one-line change to use Smile instead of (or in addition to) JSON.
Many projects use it (for example, Elasticsearch); it is a mature and stable format, and tooling support via Jackson is extensive for different datatypes.
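A minimal sketch of that one-line change, reusing the attr object and Attribute class from the question's code and assuming the jackson-dataformat-smile module is on the classpath:
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.smile.SmileFactory;

// same ObjectMapper API, but backed by a SmileFactory: the output is binary Smile
ObjectMapper smileMapper = new ObjectMapper(new SmileFactory());
byte[] smileBytes = smileMapper.writeValueAsBytes(attr);  // compact binary instead of JSON text
Attribute restored = smileMapper.readValue(smileBytes, Attribute.class);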