Json NullPointerException when calling readValue - java

I decided to write my own read and write object methods by implementing the Json.Serializable interface, because I was unhappy with how Json does its automated object writing (it omits arrays). My write method works properly, but for some reason I get a NullPointerException when I try to read the values back, as if I were looking up a value by the wrong name, which I'm certain I'm not doing; the write and read names are identical. Below are my read and write methods and the Json output (the error occurs at the first readValue() call).
@Override
public void write(Json json)
{
    json.writeObjectStart(this.getName());
    json.writeValue("Level", level);
    json.writeValue("Health", health);
    json.writeValue("Allegiance", alle);
    json.writeValue("Stats", stats);
    json.writeValue("Has Moved", hasMoved);
    json.writeValue("Position", new Point((int)this.getX(), (int)this.getY()));
    json.writeObjectEnd();
}
@Override
public void read(Json json, JsonValue jsonData)
{
    level = json.readValue("Level", Integer.class, jsonData);
    health = json.readValue("Health", Integer.class, jsonData);
    alle = json.readValue("Allegiance", Allegiance.class, jsonData);
    stats = json.readValue("Stats", int[].class, jsonData);
    hasMoved = json.readValue("Has Moved", Boolean.class, jsonData);
    Point p = json.readValue("Position", Point.class, jsonData);
    this.setPosition(p.x, p.y);
}
/////////////////////////////////////////////////////////////////////
player: {
    party: {}
},
state: state1,
map: {
    foes: {
        units: [
            {
                class: com.strongjoshuagames.reverseblade.game.units.UnitWolf,
                Wolf: {
                    Level: 5,
                    Health: 2,
                    Allegiance: FOE,
                    Stats: [ 2, 3, 3, 4, 3, 4, 3, 5 ],
                    "Has Moved": false,
                    Position: {
                        x: 320,
                        y: 320
                    }
                }
            }
        ]
    }
}
Note that I've read objects from the same file this is being saved in before, so the file shouldn't be an issue.

I'm not 100% sure how the JSON library works, but I believe that since you call json.writeObjectStart(this.getName()); in your write function, you have to 'reverse' this in your read function like everything else you wrote. To do this, you need to get the JsonValue's first child and read its Level, Health, etc. from there. I'm not sure about the API so I can't give exact code, but it'd be something like this:
level = json.readValue("Level", Integer.class, jsonData.child());
Think of it like this: I make a box and put a dictionary in it. I can't just look up words in the box; I have to take the dictionary out first. Likewise, you need to get the object you wrote before you can look up its fields.
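Applied to your read method, the fix would look roughly like this (again, only a sketch; I haven't checked whether the accessor is jsonData.child() or jsonData.get(name)):
@Override
public void read(Json json, JsonValue jsonData)
{
    // Unwrap the named object written by writeObjectStart(this.getName())
    JsonValue unit = jsonData.child();  // or: jsonData.get(this.getName())
    level = json.readValue("Level", Integer.class, unit);
    health = json.readValue("Health", Integer.class, unit);
    alle = json.readValue("Allegiance", Allegiance.class, unit);
    stats = json.readValue("Stats", int[].class, unit);
    hasMoved = json.readValue("Has Moved", Boolean.class, unit);
    Point p = json.readValue("Position", Point.class, unit);
    this.setPosition(p.x, p.y);
}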

Related

Show object schema in Swagger while maintaining a data/meta structure

Designing a REST API, I'd like to implement the following structure, with a clear distinction between the returned object(s) and any relevant metadata:
{
    "data": [
        {
            "id": 1,
            "clockingTime": "2022-08-10T15:32+02:00[Europe/Paris]",
            "comment": 101,
            "crudType": 1
        }
    ],
    "meta": {
        "creationTime": "2022-08-11T17:10:40.045+0200",
        "correlationId": "93b058ad-5383-4054-b284-342d3041f79f"
    }
}
This is achieved by having a top-level object, RestResponseObject, containing data and meta, with the data object being extended by all objects-to-return:
@GetMapping(value = "/test")
public ResponseEntity<RestResponseObject> test() {
    RestResponseObject restResponseObject = new RestResponseObject();
    restResponseObject.setData(new FooDTO());
    restResponseObject.setMeta(new RestMetaObject());
    return new ResponseEntity<>(restResponseObject, HttpStatus.OK);
}
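For reference, the wrapper itself is little more than this (a sketch; the data field's type is assumed from the setters used in the handler, and the getters/setters are the obvious boilerplate):
public class RestResponseObject {
    private Object data;          // the actual payload, e.g. a FooDTO
    private RestMetaObject meta;  // creationTime, correlationId, ...

    public Object getData() { return data; }
    public void setData(Object data) { this.data = data; }
    public RestMetaObject getMeta() { return meta; }
    public void setMeta(RestMetaObject meta) { this.meta = meta; }
}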
The issue is that this obfuscates the actual object being returned, and Swagger will not show the schema with the relevant variables and/or types.
Is this the wrong way to implement a data/meta object structure, or is there no other way to show the actual objects being returned?

How to handle java object heap VM? [duplicate]

I'm trying to parse some huge JSON file (like http://eu.battle.net/auction-data/258993a3c6b974ef3e6f22ea6f822720/auctions.json) using the gson library (http://code.google.com/p/google-gson/) in Java.
I would like to know what the best approach is to parse this kind of big file (about 80k lines), and whether you know of a good API that can help me process it.
Some ideas:
read line by line and get rid of the JSON format: but that's nonsense.
reduce the JSON file by splitting it into many smaller files: but I did not find any good Java API for this.
use this file directly as a NoSQL database: keep the file and use it as my database.
I would really appreciate advice/help/messages :-)
Thanks.
You don't need to switch to Jackson. Gson 2.1 introduced a new TypeAdapter interface that permits mixed tree and streaming serialization and deserialization.
The API is efficient and flexible. See Gson's Streaming doc for an example of combining tree and binding modes. This is strictly better than mixed streaming and tree modes; with binding you don't waste memory building an intermediate representation of your values.
Like Jackson, Gson has APIs to recursively skip an unwanted value; Gson calls this skipValue().
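Here is a minimal sketch of that mixed streaming/binding mode (the top-level "auctions" field and the Auction fields are assumptions about the linked file, not verified):
import com.google.gson.Gson;
import com.google.gson.stream.JsonReader;
import java.io.FileReader;
import java.io.IOException;

public class GsonStreamingSketch {
    // Hypothetical record type; adjust the fields to the actual file.
    static class Auction {
        long auc;
        long bid;
    }

    public static void main(String[] args) throws IOException {
        Gson gson = new Gson();
        try (JsonReader reader = new JsonReader(new FileReader(args[0]))) {
            reader.beginObject();
            while (reader.hasNext()) {
                if (reader.nextName().equals("auctions")) {
                    reader.beginArray();
                    while (reader.hasNext()) {
                        // Bind one array element at a time to a POJO;
                        // the file is never held in memory as a whole.
                        Auction a = gson.fromJson(reader, Auction.class);
                        System.out.println(a.auc + " -> " + a.bid);
                    }
                    reader.endArray();
                } else {
                    reader.skipValue(); // recursively skips the unwanted value
                }
            }
            reader.endObject();
        }
    }
}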
I suggest having a look at the Jackson API; it makes it very easy to combine the streaming and tree-model parsing options: you can move through the file as a whole in a streaming way, and then read individual objects into a tree structure.
As an example, let's take the following input:
{
    "records": [
        {"field1": "aaaaa", "bbbb": "ccccc"},
        {"field2": "aaa", "bbb": "ccc"}
    ],
    "special message": "hello, world!"
}
Just imagine the fields being sparse or the records having a more complex structure.
The following snippet illustrates how this file can be read using a combination of stream and tree-model parsing. Each individual record is read in a tree structure, but the file is never read in its entirety into memory, making it possible to process JSON files gigabytes in size while using minimal memory.
import org.codehaus.jackson.map.*;
import org.codehaus.jackson.*;
import java.io.File;

public class ParseJsonSample {
    public static void main(String[] args) throws Exception {
        JsonFactory f = new MappingJsonFactory();
        JsonParser jp = f.createJsonParser(new File(args[0]));
        JsonToken current;
        current = jp.nextToken();
        if (current != JsonToken.START_OBJECT) {
            System.out.println("Error: root should be object: quitting.");
            return;
        }
        while (jp.nextToken() != JsonToken.END_OBJECT) {
            String fieldName = jp.getCurrentName();
            // move from field name to field value
            current = jp.nextToken();
            if (fieldName.equals("records")) {
                if (current == JsonToken.START_ARRAY) {
                    // For each of the records in the array
                    while (jp.nextToken() != JsonToken.END_ARRAY) {
                        // read the record into a tree model,
                        // this moves the parsing position to the end of it
                        JsonNode node = jp.readValueAsTree();
                        // And now we have random access to everything in the object;
                        // path() returns a missing node instead of null when a field is absent
                        System.out.println("field1: " + node.path("field1").getValueAsText());
                        System.out.println("field2: " + node.path("field2").getValueAsText());
                    }
                } else {
                    System.out.println("Error: records should be an array: skipping.");
                    jp.skipChildren();
                }
            } else {
                System.out.println("Unprocessed property: " + fieldName);
                jp.skipChildren();
            }
        }
    }
}
As you can guess, each nextToken() call gives the next parsing event: start object, start field, start array, start object, ..., end object, ..., end array, ...
The jp.readValueAsTree() call reads whatever is at the current parsing position, a JSON object or array, into Jackson's generic JSON tree model. Once you have this, you can access the data randomly, regardless of the order in which things appear in the file (in the example, field1 and field2 are not always in the same order). Jackson supports mapping onto your own Java objects too. The jp.skipChildren() is convenient: it lets you skip over a complete object tree or array without having to walk through all the events contained in it.
The Declarative Stream Mapping (DSM) library allows you to define mappings between your JSON or XML data and your POJO, so you don't need to write a custom parser. It has powerful scripting (JavaScript, Groovy, JEXL) support. You can filter and transform data while you are reading, and you can call functions for partial data operations as the data is read. DSM reads data as a stream, so it uses very little memory.
For example,
{
    "company": {
        ....
        "staff": [
            {
                "firstname": "yong",
                "lastname": "mook kim",
                "nickname": "mkyong",
                "salary": "100000"
            },
            {
                "firstname": "low",
                "lastname": "yin fong",
                "nickname": "fong fong",
                "salary": "200000"
            }
        ]
    }
}
Imagine the above snippet is part of huge and complex JSON data, and we only want to get the staff whose salary is higher than 10000.
First of all, we must define the mapping as follows. As you can see, it is just a YAML file that contains the mapping between POJO fields and the fields of the JSON data.
result:
  type: object  # result is a map or an object
  path: /.+staff  # path is a regex; it matches /company/staff
  function: processStuff  # call the processStuff function when the /company/staff tag is closed
  filter: self.data.salary>10000  # any expression is valid in JavaScript, Groovy or JEXL
  fields:
    name:
      path: firstname
    sureName:
      path: lastname
    userName:
      path: nickname
    salary: long
Create a FunctionExecutor to process the staff entries:
FunctionExecutor processStuff = new FunctionExecutor() {
    @Override
    public void execute(Params params) {
        // directly serialize into a Stuff class:
        // Stuff stuff = params.getCurrentNode().toObject(Stuff.class);
        Map<String, Object> stuff = (Map<String, Object>) params.getCurrentNode().toObject();
        System.out.println(stuff);
        // process stuff; save to db, call a service, etc.
    }
};
Use DSM to process the JSON:
DSMBuilder builder = new DSMBuilder(new File("path/to/mapping.yaml")).setType(DSMBuilder.TYPE.JSON);
// register the processStuff function
builder.registerFunction("processStuff", processStuff);
DSM dsm = builder.create();
Object object = dsm.toObject(jsonContent);
Output (only staff with a salary higher than 10000 are included):
{firstName=low, lastName=yin fong, nickName=fong fong, salary=200000}

Get Matrix response from Tensorflow serving model in Java

I'm currently building a model in Python and getting the result from a separate Java client.
I need to know how I can get a float[][] or List<List<Float>> (something similar to that) from a TensorProto which has more than one dimension.
In Python, it is very easy to do this job:
from tensorflow.python.framework import tensor_util
.
.
.
print tensor_util.MakeNdarray(tensorProto)
===== UPDATE =====
Java's tensorProto.getFloatValList() does not work if the proto was created by Python's tensor_util.make_tensor_proto(vector), either.
All the cases above can be solved by @Ash's answer.
As Allen mentioned in a comment, this is probably a good feature request.
But in the interim, a workaround would be to construct and run a graph that parses the encoded protobuf and returns a Tensor. It won't be particularly efficient, but you could do something like this:
import org.tensorflow.*;
import java.util.Arrays;

public final class ProtoToTensor {
    public static Tensor<Float> tensorFromSerializedProto(byte[] serialized) {
        // One may want to cache the Graph and Session as member variables to avoid paying the
        // cost of graph and session construction on each call.
        try (Graph g = buildGraphToParseProto();
            Session sess = new Session(g);
            Tensor<String> input = Tensors.create(serialized)) {
            return sess.runner()
                .feed("input", input)
                .fetch("output")
                .run()
                .get(0)
                .expect(Float.class);
        }
    }

    private static Graph buildGraphToParseProto() {
        Graph g = new Graph();
        // The graph construction process in Java is currently (as of TensorFlow 1.4) very verbose.
        // Once https://github.com/tensorflow/tensorflow/issues/7149 is resolved, this should become
        // *much* more convenient and succinct.
        Output<String> in =
            g.opBuilder("Placeholder", "input")
                .setAttr("dtype", DataType.STRING)
                .setAttr("shape", Shape.scalar())
                .build()
                .output(0);
        g.opBuilder("ParseTensor", "output").setAttr("out_type", DataType.FLOAT).addInput(in).build();
        return g;
    }

    public static void main(String[] args) {
        // Let's say you got a byte[] representation of the proto somehow.
        // In this case, I got it from Python from the following program
        // that serializes the 1x1 matrix:
        /*
        import tensorflow as tf
        list(bytearray(tf.make_tensor_proto([[1.]]).SerializeToString()))
        */
        byte[] bytes = {8, 1, 18, 8, 18, 2, 8, 1, 18, 2, 8, 1, 42, 4, 0, 0, (byte) 128, 63};
        try (Tensor<Float> t = tensorFromSerializedProto(bytes)) {
            // You can now get a float[][] array using t.copyTo().
            // t.shape() gives shape information.
            System.out.println("Tensor: " + t);
            float[][] f = t.copyTo(new float[1][1]);
            System.out.println("float[][]: " + Arrays.deepToString(f));
        }
    }
}
As you can see, this is using some pretty low-level APIs to construct the graph and session. It would be reasonable to have a feature request that replaces all of this with a single line:
Tensor<Float> t = Tensor.createFromProto(serialized);

Deserialising set of enums to use as flags

I want to be able to set enum/bit flags from Json. I have managed to serialize my object with a HashSet containing my enum values. By default LibGDX serializes this, but it adds a class field with the type of the set to the json so it knows what to do. I want my json to be clean and decoupled from Java, so I wrote this class:
public class CriteriaSerializer implements Json.Serializer<HashSet> {
    @Override
    public void write(Json json, HashSet object, Class knownType) {
        json.writeArrayStart();
        for (Object o : object)
        {
            if (o instanceof Modifier.Criteria) {
                json.writeValue(o, Modifier.Criteria.class);
            }
        }
        json.writeArrayEnd();
    }

    @Override
    public HashSet read(Json json, JsonValue jsonData, Class type) {
        System.out.println("Running!?");
        HashSet<Modifier.Criteria> criteriaSet = new HashSet<Modifier.Criteria>();
        for (JsonValue entry = jsonData.child; entry != null; entry = entry.next)
        {
            criteriaSet.add(Modifier.Criteria.valueOf("ADD")); //Modifier.Criteria.valueOf(entry.asString()));
        }
        return criteriaSet;
    }
}
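For completeness, the serializer is registered on the Json instance in the usual LibGDX way, along these lines (a sketch; this wiring is not shown in the snippets above):
Json json = new Json();
json.setSerializer(HashSet.class, new CriteriaSerializer());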
The write method results in the following output:
modifier: {
    amount: 1 //Other field
    criteriaSet: [
        RED
        BLUE
    ]
}
All I need is to get those values as strings so I can do something along the lines of myCriteriaSet.put(Criteria.valueOf(output)). The thing is, the program crashes before the read method even runs. I guess this is because it finds an ArrayList in the json data while the corresponding field in the object is a HashSet. This is the error: java.lang.IllegalArgumentException: Can not set java.util.Set field com.buckriderstudio.towercrawler.Creature.Modifier.criteriaSet to java.util.ArrayList
Both writing to and reading from json are important to me, so I need them to work with each other. In the end I'm just looking for a clean solution to (de)serialize an EnumSet, or bit combinations, in a readable manner. I feel I'm close, but there might be a better technique than the one I'm trying.
What I like about the LibGDX Json implementation is that fields are not mandatory and can have default values. This cleans up the json data considerably, since I have a lot of fields that are only set optionally. Therefore this library has my preference over, say, Jackson, though I have not played around with Jackson all that much.
EDIT
This is an edit especially for Andreas. As far as I know (but I might be wrong) this has nothing to do with the actual problem. Andreas is explaining to me that the Json syntax is wrong; the fact is that the parser does not even reach my read method, and that the json library that ships with LibGDX does not write 100% correct Json. Does it need to? Perhaps to bear the name Json? Does it need to in order to work? I don't think so.
Here is my test. All I do is create this Creature object and parse it with one line of code; none of my own parsing code is involved.
Creature c = new Creature("MadMenyo");
Json json = new Json();
System.out.println(json.prettyPrint(c));
//Output
{
    name: MadMenyo
    modifier: {
        amount: 1
        criteriaSet: {
            class: java.util.HashSet
            items: [
                VS_NATURE
                MULTIPLY
            ]
        }
    }
    stats: {
        ENDURANCE: {
            abbreviation: END
            displayName: Endurance
            baseValue: 8
            finalValue: 8
        }
        MAGIC: {
            abbreviation: MP
            displayName: Your mana
            baseValue: 20
            finalValue: 20
        }
        STRENGTH: {
            baseValue: 6
            finalValue: 6
        }
        HEALTH: {
            abbreviation: HP
            displayName: Your life
            baseValue: 100
            finalValue: 100
        }
    }
}
//Looks like no valid Json to me. But the following line parses that text correctly into a Creature object.
Creature jsonCreature = json.fromJson(Creature.class, json.prettyPrint(c));
Before we drift off even further: the reason I do not want to use this is that it outputs the class tag class: java.util.HashSet, and I'm pretty sure that is unnecessary.
EDIT
After adding the following lines of code I managed to output correct Json. Yet the code still breaks before it gets to my custom read method. The question remains how to either fix that, or serialize an EnumSet (or another Set holding enums) in a different way, as long as it is readable Json and can be used as flags.
JsonWriter jw = new JsonWriter(new StringWriter());
json.setOutputType(JsonWriter.OutputType.json);
json.setWriter(jw);
//Now outputs proper Json
{
    "name": "MadMenyo",
    "modifier": {
        "amount": 1,
        "criteriaSet": [
            "VS_NATURE",
            "MULTIPLY"
        ]
    },
    "stats": {
        "ENDURANCE": {
            "abbreviation": "END",
            "displayName": "Endurance",
            "baseValue": 8,
            "finalValue": 8
        },
        "MAGIC": {
            "abbreviation": "MP",
            "displayName": "Your mana",
            "baseValue": 20,
            "finalValue": 20
        },
        "STRENGTH": {
            "baseValue": 6,
            "finalValue": 6
        },
        "HEALTH": {
            "abbreviation": "HP",
            "displayName": "Your life",
            "baseValue": 100,
            "finalValue": 100
        }
    }
}
Although this isn't exactly an answer to my question, since the code still crashes before it reaches the read method, I have found a suitable workaround using the Jackson library. I figured out how to disregard default values by putting the following annotation on the class to be serialized: @JsonInclude(JsonInclude.Include.NON_DEFAULT). This gets me exactly the json output I was attempting with the built-in json serializer. The only downside is speed: Jackson is about 20 times slower on a single object, but looping that a thousand times makes it "only" about 5 times slower.
For anyone who does not know, this is how you integrate Jackson with LibGDX:
In build.gradle, add a dependency to the core project.
compile group: 'com.fasterxml.jackson.core', name: 'jackson-databind', version: '2.0.1'
Starting parsing takes just a couple more lines.
ObjectMapper mapper = new ObjectMapper();
//Add pretty print indentation
mapper.enable(SerializationFeature.INDENT_OUTPUT);
//When serializing we have to wrap it in a try/catch block
try {
    mapper.writeValue(Gdx.files.local("creature.json").file(), creature);
} catch (IOException e) {
    e.printStackTrace();
}
//To map it back to an object we do the same
Creature jsonCreature = null;
try {
    jsonCreature = mapper.readValue(Gdx.files.local("creature.json").readString(), Creature.class);
} catch (IOException e) {
    e.printStackTrace();
}
//Jackson also has control over what you want to serialize
mapper.setVisibility(PropertyAccessor.FIELD, JsonAutoDetect.Visibility.ANY);
//Or with an annotation in front of the class
@JsonIgnoreProperties({"nameOfProperty", "anotherProperty"})
This all at least gives me the same power as the built-in json serializer, and it serializes EnumSet right off the bat.
If someone knows how to deserialize the write method in my initial question, I will be glad to accept that as the answer.

Migrate Python Dict to Java and JSON

I was a decent programmer in Python. Now I am forced to write a Wowza module for my chat application: an application that logs in with a Facebook account, where the status of each user is saved on a Wowza server (which uses Java for app development), connected via a Flash client and RTMP. The online-status data structure looks like this in Python.
Please tell me how to represent it in Java; I am not so familiar with variable 'types' in Java :(
x = {
    10001: {
        'status': 0,
        'friends': {}
    },
    10002: {
        'status': 1,
        'friends': {
            10001: 0,
            10003: 1
        }
    },
    10003: {
        'status': 1,
        'friends': {
            10001: 0,
            10003: 1
        }
    }
}
10001, 10002, etc. are Facebook user ids, and 0/1 is their online/offline status.
If 10001 connects, the data structure is modified slightly: it changes the status of 10001 to 1, adds all his friends' ids (retrieved from Facebook), and updates their status too.
x = {
    10001: {
        'status': 1,
        'friends': {
            10002: 1,
            10003: 1
        }
    },
    10002: {
        'status': 1,
        'friends': {
            10001: 1,
            10003: 1
        }
    },
    10003: {
        'status': 1,
        'friends': {
            10001: 1,
            10003: 1
        }
    }
}
And if user 10001 disconnects, it reverts to the earlier state.
Is there any way I can store it as a JSON object? Or is there any simple way to store and retrieve the data?
I assume that by store and retrieve, you mean cache it in memory, so:
(1) Create JavaBean classes to encapsulate the data; a Java HashMap is very similar to a Python dictionary. Why don't you try to write the classes in Java as if they were Python, update your question with the result, and then people can help you with details such as Java generics, which have no real Python equivalent. (A starting sketch follows after this list.)
(2) Use one of the Object<-->JSON mapping frameworks that are out there to serialize instances to/from JSON. Gson and Jackson are popular.
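To get you started, the nested dict could be sketched in Java roughly like this (class and field names are my own invention):
import java.util.HashMap;
import java.util.Map;

public class PresenceSketch {
    // One value of the outer dict: a status plus a friends map.
    static class UserStatus {
        int status;  // 0 = offline, 1 = online
        Map<Integer, Integer> friends = new HashMap<Integer, Integer>();  // friend id -> status
    }

    public static void main(String[] args) {
        Map<Integer, UserStatus> x = new HashMap<Integer, UserStatus>();

        UserStatus u = new UserStatus();
        u.status = 1;
        u.friends.put(10001, 0);
        u.friends.put(10003, 1);
        x.put(10002, u);  // mirrors the 10002 entry in the question

        System.out.println(x.get(10002).friends);  // e.g. {10001=0, 10003=1}
    }
}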
It depends on what you want to do...
If you can use a json library such as Google Gson, it's perfect for managing JSON from Java.
Then, if you want to code it yourself and you only manage integers and strings, it's not very difficult: a JSON structure is just an array or a map of key/value pairs, where the key is a String and the value is either a simple value or a complex one, hence a hashmap or an array.
Anyway, it's generally easier to use Gson directly ;)
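For example, with plain nested maps the structure from the question round-trips through Gson in a few lines (a sketch; note that with Object as the value type, Gson reads numbers back as Double):
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import java.lang.reflect.Type;
import java.util.HashMap;
import java.util.Map;

public class GsonRoundTrip {
    public static void main(String[] args) {
        Map<Integer, Map<String, Object>> x = new HashMap<Integer, Map<String, Object>>();

        Map<Integer, Integer> friends = new HashMap<Integer, Integer>();
        friends.put(10001, 0);
        friends.put(10003, 1);

        Map<String, Object> entry = new HashMap<String, Object>();
        entry.put("status", 1);
        entry.put("friends", friends);
        x.put(10002, entry);

        Gson gson = new Gson();
        String jsonText = gson.toJson(x);
        System.out.println(jsonText);
        // e.g. {"10002":{"friends":{"10001":0,"10003":1},"status":1}}

        Type type = new TypeToken<Map<Integer, Map<String, Object>>>() {}.getType();
        Map<Integer, Map<String, Object>> back = gson.fromJson(jsonText, type);
        System.out.println(back.get(10002).get("status"));  // 1.0 (numbers come back as Double)
    }
}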
All I needed was the http://www.json.org/javadoc/org/json/JSONObject.html library.
I could use it with little struggle, apart from having to add a 'type' to every object and not being able to create the tree in one step as in Python.
Thanks mandubian and jtoberon :))
import net.sf.json.JSONException;
import net.sf.json.JSONObject;

public class JSONExample {
    JSONObject json;
    JSONObject objJSON;
    JSONObject objObjJSON;

    public void addtoJSON() {
        json = new JSONObject();
        objJSON = new JSONObject();
        objObjJSON = new JSONObject();
        //adding the innermost tree (friends)
        objObjJSON.put(10001, 0);
        objObjJSON.put(10002, 1);
        //adding the secondary tree
        objJSON.put("status", 1);
        objJSON.put("friends", objObjJSON);
        //adding the root tree
        json.put(10003, objJSON);
        System.out.println("JSON is " + json);
    }
}
