My JSON is:
{
    "objects": {
        "apple": [
            {"x": 3, "y": 5},
            {"x": 6, "y": 9}
        ],
        "car": [
            {"x": 7, "y": 9},
            {"x": 5, "y": 8}
        ]
    }
}
import com.badlogic.gdx.utils.Json;

import java.util.ArrayList;
import java.util.HashMap;

public class ClassA {
    public Integer x, y;
}

public class ClassB {
    public HashMap<String, ArrayList<ClassA>> objects;
}

public static void main(String[] args) {
    Json json = new Json();
    ClassB classB = json.fromJson(ClassB.class, "{\n" +
            "  \"objects\":{\n" +
            "    \"apple\":[\n" +
            "      {\"x\":3, \"y\":5},\n" +
            "      {\"x\":6, \"y\":9}\n" +
            "    ],\n" +
            "    \"car\":[\n" +
            "      {\"x\":7, \"y\":9},\n" +
            "      {\"x\":5, \"y\":8}\n" +
            "    ]\n" +
            "  }\n" +
            "}");
    System.out.println(json.toJson(classB));
}
I use libGDX; "json.fromJson" works fine, but when I call "json.toJson(classB)" an exception is thrown:
Exception in thread "main" java.lang.StackOverflowError
at com.badlogic.gdx.utils.JsonWriter$OutputType.quoteValue(JsonWriter.java:187)
at com.badlogic.gdx.utils.JsonWriter.value(JsonWriter.java:88)
at com.badlogic.gdx.utils.Json.writeValue(Json.java:574)
at com.badlogic.gdx.utils.Json.writeFields(Json.java:290)
at com.badlogic.gdx.utils.Json.writeValue(Json.java:580)
at com.badlogic.gdx.utils.Json.writeFields(Json.java:290)
at com.badlogic.gdx.utils.Json.writeValue(Json.java:580)
...
When I change the "ArrayList" in ClassB's HashMap to "String" (etc.), the code works fine.
So why is an exception thrown when I use "ArrayList", and how can I parse my JSON into a ClassB instance?
Yes, the StackOverflowError itself is a bug.
The libGDX Json class expects class type information in certain places if it can't deduce the type by itself. In this regard it is much less tolerant than other, much larger libraries such as Gson.
see: https://github.com/libgdx/libgdx/wiki/Reading-%26-writing-JSON#writing-object-graphs
The easiest fix in your example is to write your ClassA instances as:
" {\"class\": \"net.your.package.ClassA\", \"x\":3, \"y\":5},\n"
The class name can be shortened, see Json.setElementType().
In general, as a best practice for prototyping, I recommend the opposite of your approach: create the structure in code, then write it to JSON to see how libGDX "perceives" your data structure, then read it back from JSON to verify the output.
Side note: in most cases it's recommended to use the libGDX containers, ObjectMap<> and Array<> in your example. Also, ClassA's members can probably be int instead of Integer to avoid auto-boxing.
As requested in the comments.
Post the libGDX bug as a new issue on the libGDX issue tracker.
Add this to your build.gradle dependencies (to the core project if you use the default structure):
compile group: 'com.google.code.gson', name: 'gson', version: '2.8.0'
Initialize a Gson object:
Gson gson = new Gson();
And use the toJson and fromJson functions:
String json = gson.toJson(classB);
and
ClassB classB = gson.fromJson(json,ClassB.class);
The documentation suggests testing an API client based on WSClient against a mock web service, that is, creating a play.server.Server which will respond to real HTTP requests.
I would prefer to create WSResponse objects directly from files, complete with status line, header lines and body, without real TCP connections. That would require less dependencies and run faster. Also there may be other cases when this is useful.
But I can't find a simple way to do this. It seems all implementations wrapped by WSResponse are tied to reading from the network.
Should I just create my own subclass of WSResponse for this, or maybe I'm wrong and it already exists?
The API for Play seems intentionally obtuse. You have to use their "Cacheable" classes, which are the only ones that seem directly instantiable from objects you'd have lying around.
This should get you started:
import play.api.libs.ws.ahc.AhcWSResponse;
import play.api.libs.ws.ahc.cache.CacheableHttpResponseBodyPart;
import play.api.libs.ws.ahc.cache.CacheableHttpResponseHeaders;
import play.api.libs.ws.ahc.cache.CacheableHttpResponseStatus;
import play.shaded.ahc.io.netty.handler.codec.http.DefaultHttpHeaders;
import play.shaded.ahc.org.asynchttpclient.Response;
import play.shaded.ahc.org.asynchttpclient.uri.Uri;
AhcWSResponse response = new AhcWSResponse(new Response.ResponseBuilder()
        .accumulate(new CacheableHttpResponseStatus(Uri.create("uri"), 200, "status text", "protocols!"))
        .accumulate(new CacheableHttpResponseHeaders(false, new DefaultHttpHeaders().add("My-Header", "value")))
        .accumulate(new CacheableHttpResponseBodyPart("my body".getBytes(), true))
        .build());
The mystery boolean values aren't documented. My guess is that the boolean for BodyPart indicates whether it is the last part of the body, and the one for Headers indicates whether the headers are in the trailer of a message.
I used another way, mocking WSResponse with Mockito:
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.mockito.Mockito;
import play.libs.ws.WSResponse;

import java.io.IOException;
...
final WSResponse wsResponseMock = Mockito.mock(WSResponse.class);
Mockito.doReturn(200).when(wsResponseMock).getStatus();
final String jsonStr = "{\n"
+ " \"response\": {\n"
+ " \"route\": [\n"
+ " { \"summary\" :\n"
+ " {\n"
+ " \"distance\": 23\n"
+ " }\n"
+ " }\n"
+ " ]\n"
+ " }\n"
+ "}";
ObjectMapper mapper = new ObjectMapper();
JsonNode jsonNode = null;
try {
    jsonNode = mapper.readTree(jsonStr);
} catch (IOException e) {
    e.printStackTrace();
}
Mockito.doReturn(jsonNode).when(wsResponseMock).asJson();
If you are using Play Framework 2.8.x and Scala, the code below can help you generate a dummy WSResponse:
import org.scalatest.{FlatSpec, Matchers}
import play.api.libs.ws.ahc.AhcWSResponse
import play.api.libs.ws.ahc.cache.{CacheableHttpResponseBodyPart, CacheableHttpResponseStatus}
import play.shaded.ahc.io.netty.handler.codec.http.DefaultHttpHeaders
import play.shaded.ahc.org.asynchttpclient.Response
import play.shaded.ahc.org.asynchttpclient.uri.Uri

class OutputWriterSpec extends FlatSpec with Matchers {
  val respBuilder = new Response.ResponseBuilder()
  respBuilder.accumulate(new CacheableHttpResponseStatus(Uri.create("http://localhost:9000/api/service"), 202, "status text", "json"))
  respBuilder.accumulate(new DefaultHttpHeaders().add("Content-Type", "application/json"))
  respBuilder.accumulate(new CacheableHttpResponseBodyPart("{\n\"id\":\"job-1\",\n\"lines\": [\n\"62812ce276aa9819a2e272f94124d5a1\",\n\"13ea8b769685089ba2bed4a665a61fde\"\n]\n}".getBytes(), true))
  val resp = new AhcWSResponse(respBuilder.build())

  val outputWriter = OutputWriter
  val expected = ("\"job-1\"", List("\"62812ce276aa9819a2e272f94124d5a1\"", "\"13ea8b769685089ba2bed4a665a61fde\""), "_SUCCESS")

  "Output Writer" should "handle response from api call" in {
    val actual = outputWriter.handleResponse(resp, "job-1")
    println("the actual : " + actual)
    actual shouldEqual expected
  }
}
Can Apache Avro handle parameterized types during serialization?
I see this exception thrown by the Avro framework when I try to serialize an instance that uses generics:
org.apache.avro.AvroTypeException: Unknown type: T
at org.apache.avro.specific.SpecificData.createSchema(SpecificData.java:255)
at org.apache.avro.reflect.ReflectData.createSchema(ReflectData.java:514)
at org.apache.avro.reflect.ReflectData.createFieldSchema(ReflectData.java:593)
at org.apache.avro.reflect.ReflectData$AllowNull.createFieldSchema(ReflectData.java:75)
at org.apache.avro.reflect.ReflectData.createSchema(ReflectData.java:472)
at org.apache.avro.specific.SpecificData.getSchema(SpecificData.java:189)
The class I am trying to serialize looks like this:
public class Property<T> {
    private T propertyValue;
}
I am trying to generate the schema on the fly based on the incoming POJO instance. My serialization code looks like this:
ByteArrayOutputStream os = new ByteArrayOutputStream();
ReflectData reflectData = ReflectData.AllowNull.get();
Schema schema = reflectData.getSchema(propertyValue.getClass());
DatumWriter<T> writer = new ReflectDatumWriter<T>(schema);
Encoder encoder = EncoderFactory.get().jsonEncoder(schema, os);
writer.write(propertyValue, encoder);
The line in my code that triggers the exception:
Schema schema = reflectData.getSchema(propertyValue.getClass());
The same code works fine for classes that don't have parameterized types.
Avro, as of version 1.7.7, cannot generate a schema for a parameterized type due to issue AVRO-1571. A workaround is to explicitly specify the schema for the parameterized type so Avro does not try to generate it:
private static final String PROPERTY_STRING_SCHEMA =
        "{ " +
        "\"type\": \"record\", " +
        "\"name\": \"PropertyString\", " +
        "\"fields\": [" +
        "{ \"name\": \"propertyValue\", \"type\": \"string\" }" +
        "] " +
        "}";

@AvroSchema(PROPERTY_STRING_SCHEMA)
private Property<String> password;
Class MyClass has a method getMyClassId, and I want to invoke it with something like this:
Method method = clazz.getMethod("get" + clazz.getName() + "Id");
method.invoke(myObject);
But clazz.getName() returns the fully qualified name, including the package. I could do some string manipulation, but wondered if there was a better way?
Try using clazz.getSimpleName().
Try:
Method method = clazz.getMethod("get" + clazz.getSimpleName() + "Id");
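A runnable sketch of the difference, where MyClass and its id value are hypothetical stand-ins for the question's class:

```java
// Sketch: invoking a getXxxId-style getter via reflection. getSimpleName()
// drops the package (and enclosing-class) prefix that getName() includes.
import java.lang.reflect.Method;

public class Main {
    public static class MyClass {
        public int getMyClassId() {
            return 42;
        }
    }

    public static void main(String[] args) throws Exception {
        MyClass myObject = new MyClass();
        Class<?> clazz = myObject.getClass();

        // clazz.getName() here is "Main$MyClass"; clazz.getSimpleName() is "MyClass"
        Method method = clazz.getMethod("get" + clazz.getSimpleName() + "Id");
        Object id = method.invoke(myObject);
        System.out.println(id); // prints 42
    }
}
```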
I am working on a RESTful web service that will return a list of RSS feeds that someone has added to a feed list which I have previously implemented.
If I return a TEXT_PLAIN reply, it displays just fine in the browser, but when I attempt to return an APPLICATION_XML reply, I get the following error:
XML Parsing Error: junk after document element
Location: http://localhost:8080/Assignment1/api/feedlist
Line Number 1, Column 135:SMH Top Headlineshttp://feeds.smh.com.au/rssheadlines/top.xmlUTS Library Newshttp://www.lib.uts.edu.au/news/feed/all
Here is the code. I cannot figure out why it is not returning a well-formed XML page (I have also tried formatting the XML reply with newlines and indentation, and of course this did not work):
package au.com.rest;

import java.io.FileNotFoundException;
import java.io.IOException;

import javax.ws.rs.*;
import javax.ws.rs.core.*;

import au.edu.uts.it.wsd.*;

@Path("/feedlist")
public class RESTFeedService {
    String feedFile = "/tmp/feeds.txt";
    String textReply = "";
    String xmlReply = "<?xml version=\"1.0\"?><feeds>";
    FeedList feedList = new FeedListImpl();

    @GET
    @Produces(MediaType.APPLICATION_XML)
    public String showXmlFeeds() throws FileNotFoundException, IOException {
        feedList.load(feedFile);
        for (Feed f : feedList.list()) {
            xmlReply += "<feed><name>" + f.getName() + "</name>";
            xmlReply += "<uri>" + f.getURI() + "</uri></feed></feeds>";
        }
        return xmlReply;
    }
}
EDIT: I've spotted the immediate problem now. You're closing the feeds element on every input element:
for (Feed f : feedList.list()) {
    xmlReply += "<feed><name>" + f.getName() + "</name>";
    xmlReply += "<uri>" + f.getURI() + "</uri></feed></feeds>";
}
The minimal change would be:
for (Feed f : feedList.list()) {
    xmlReply += "<feed><name>" + f.getName() + "</name>";
    xmlReply += "<uri>" + f.getURI() + "</uri></feed>";
}
xmlReply += "</feeds>";
... but you should still apply the rest of the advice below.
First step - you need to diagnose the problem further. Look at the source in the browser to see exactly what it's complaining about. Can you see the problem in the XML yourself? What does it look like?
Without knowing the REST framework you're using, this looks like it could be a problem with a single instance servicing multiple requests. For some reason you've got an instance variable which you're mutating in your method. Why would you want to do that? If a new instance of your class is created for each request, it shouldn't be a problem, but I don't know if that's the case.
As a first change, try moving this line:
String xmlReply = "<?xml version=\"1.0\"?><feeds>";
into the method as a local variable.
After that though:
Keep all your fields private
Avoid using string concatenation in a loop like this
More importantly, don't build up XML by hand - use an XML API to do it. (The built-in Java APIs aren't nice, but there are plenty of alternatives.)
Consider which of these fields (if any) is really state of the object rather than something which should be a local variable. What state does your object logically have at all?
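The "use an XML API" advice can be sketched with the JDK's built-in DOM classes; FeedStub below is a hypothetical stand-in for the question's Feed type, with the same name/uri fields:

```java
// Sketch: building the <feeds> document with the JDK's DOM API instead of
// string concatenation. The root element is closed exactly once, and text
// content is escaped automatically.
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class Main {
    // Hypothetical stand-in for the question's Feed type
    record FeedStub(String name, String uri) {}

    static String buildFeedsXml(FeedStub[] feeds) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("feeds");
        doc.appendChild(root);

        for (FeedStub f : feeds) {
            Element feed = doc.createElement("feed");
            Element name = doc.createElement("name");
            name.setTextContent(f.name());
            Element uri = doc.createElement("uri");
            uri.setTextContent(f.uri());
            feed.appendChild(name);
            feed.appendChild(uri);
            root.appendChild(feed);
        }

        // Serialize the DOM tree to a string
        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        FeedStub[] feeds = {
            new FeedStub("SMH Top Headlines", "http://feeds.smh.com.au/rssheadlines/top.xml"),
            new FeedStub("UTS Library News", "http://www.lib.uts.edu.au/news/feed/all")
        };
        System.out.println(buildFeedsXml(feeds));
    }
}
```

Because the serializer owns the structure, the "junk after document element" class of bug cannot occur, and there is no shared mutable field between requests.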
I'm working with Avro 1.7.0, using its Java generic representation API, and I have a problem with our current case of schema evolution. The scenario we're dealing with is making a primitive-type field optional by changing the field to be a union of null and that primitive type.
I'm going to use a simple example. Basically, our schemas are:
Initial: A record with one field of type int
Second version: Same record, same field name but the type is now a union of null and int
According to the schema resolution chapter of Avro's spec, the resolution for such a case should be:
if reader's is a union, but writer's is not
The first schema in the reader's union that matches the writer's schema is recursively resolved against it. If none match, an error is signalled.
My interpretation is that we should be able to resolve data serialized with the initial schema properly, as int is part of the union in the reader's schema.
However, when running a test that reads back a record serialized with version 1 using version 2, I get:
org.apache.avro.AvroTypeException: Attempt to process a int when a union was expected.
Here's a test that shows exactly this:
@Test
public void testReadingUnionFromValueWrittenAsPrimitive() throws Exception {
    Schema writerSchema = new Schema.Parser().parse("{\n" +
            "  \"type\":\"record\",\n" +
            "  \"name\":\"NeighborComparisons\",\n" +
            "  \"fields\": [\n" +
            "    {\"name\": \"test\",\n" +
            "     \"type\": \"int\" }]}");
    Schema readersSchema = new Schema.Parser().parse("{\n" +
            "  \"type\":\"record\",\n" +
            "  \"name\":\"NeighborComparisons\",\n" +
            "  \"fields\": [ {\n" +
            "    \"name\": \"test\",\n" +
            "    \"type\": [\"null\", \"int\"],\n" +
            "    \"default\": null } ] }");

    // Writing a record using the initial schema, with the
    // test field defined as an int
    GenericData.Record record = new GenericData.Record(writerSchema);
    record.put("test", Integer.valueOf(10));

    ByteArrayOutputStream output = new ByteArrayOutputStream();
    JsonEncoder jsonEncoder = EncoderFactory.get().jsonEncoder(writerSchema, output);
    GenericDatumWriter<GenericData.Record> writer =
            new GenericDatumWriter<GenericData.Record>(writerSchema);
    writer.write(record, jsonEncoder);
    jsonEncoder.flush();
    output.flush();
    System.out.println(output.toString());

    // We try reading it back using the second schema version,
    // where the test field is defined as a union of null and int
    JsonDecoder jsonDecoder = DecoderFactory.get().jsonDecoder(readersSchema, output.toString());
    GenericDatumReader<GenericData.Record> reader =
            new GenericDatumReader<GenericData.Record>(writerSchema, readersSchema);
    GenericData.Record read = reader.read(null, jsonDecoder);

    // We should be able to assert that the value is 10, but it
    // fails on reading the record before getting here
    assertEquals(10, read.get("test"));
}
I would like to know whether my expectations are correct (this should resolve successfully, right?) or where I'm not using Avro properly to handle such a scenario.
The expectation that migrating a primitive schema to a union of null and that primitive should work is correct.
The problem with the code above is in how the decoder is created: the decoder needs the writer's schema rather than the reader's schema.
Rather than doing this:
JsonDecoder jsonDecoder = DecoderFactory.get().jsonDecoder(readersSchema, output.toString());
It should be like this:
JsonDecoder jsonDecoder = DecoderFactory.get().jsonDecoder(writerSchema, output.toString());
Credit goes to Doug Cutting for the answer on Avro's user mailing list:
http://mail-archives.apache.org/mod_mbox/avro-user/201208.mbox/browser