Alternatives to JSON-object binding in Android application - java

From my Android application I need to use a RESTful web service that returns a list of objects in JSON format.
This list can be very long (about 1000-2000 objects).
What I need to do is search for and retrieve just some of the objects inside the JSON.
Due to the limited memory of a mobile device, I was thinking that using object binding (for example with the Gson library) could be dangerous.
What are the alternatives for solving this problem?

If you are using Gson, use Gson streaming.
I've added the sample from the link and put my comments inside it:
public List<Message> readJsonStream(InputStream in) throws IOException {
    Gson gson = new Gson();
    JsonReader reader = new JsonReader(new InputStreamReader(in, "UTF-8"));
    List<Message> messages = new ArrayList<Message>();
    reader.beginArray();
    while (reader.hasNext()) {
        // Deserialize one element of the array at a time.
        Message message = gson.fromJson(reader, Message.class);
        // TODO : write an if statement
        if (someCase) {
            messages.add(message);
            // If you want to use less memory, don't add the objects to a list.
            // Write them to disk (e.g. SQLite, SharedPreferences or a file) and
            // retrieve them when you need them.
        }
    }
    reader.endArray();
    reader.close();
    return messages;
}

For example:
1) Read the list as a stream, handle the individual JSON entities on the fly, and keep only those that are of interest to you.
2) Read the data into one or more String objects, then find the JSON entities and handle them one by one instead of all at once. Ways to analyse the String for JSON structures include regular expressions or manual indexOf combined with substring-style analysis (see the sketch after this list).
1) is more efficient but requires a bit more work, as you have to handle the stream as you go, whereas 2) is probably simpler but requires you to hold quite large Strings in memory as a temporary measure.
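A minimal sketch of option 2), assuming the payload is an array of flat objects (no nested braces) and that the Message class and the "type" filter condition are placeholders invented for the example:

public List<Message> extractMessages(String json, Gson gson) {
    List<Message> matches = new ArrayList<Message>();
    int start = json.indexOf('{');
    while (start != -1) {
        int end = json.indexOf('}', start);              // assumes no nested objects
        if (end == -1) {
            break;
        }
        String candidate = json.substring(start, end + 1);
        if (candidate.contains("\"type\":\"wanted\"")) { // hypothetical filter condition
            matches.add(gson.fromJson(candidate, Message.class));
        }
        start = json.indexOf('{', end + 1);
    }
    return matches;
}

A real implementation would also have to cope with nested objects and with strings that contain braces, which is why the streaming approach in 1) is usually the safer choice.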

Related

Best approach to update large JSON files in Java

The files I have contain large JSON arrays in which I would like to update some values under certain sub-objects. Since the files are large, I was looking into a streaming API to keep the memory footprint low.
What I'd like to achieve is to stream data in, parse a specific sub-object, update its values, stringify that sub-object again and efficiently save the updated values back to disk.
I'm not sure how to do that using libraries like Jackson or Gson, because their streaming APIs offer a way to do efficient reading (JsonParser or JsonReader) and a way to do efficient writing (JsonGenerator or JsonWriter), but nothing that lets me do both at the same time:
ObjectMapper mapper = new ObjectMapper();
try (JsonParser jsonParser = mapper.getFactory().createParser(new File("bigFile.json"))) {
    // parsing logic...
    while (jsonParser.nextToken() != JsonToken.END_ARRAY) {
        MyClass obj = mapper.readValue(jsonParser, MyClass.class);
        obj.field1 = "Some new value";
        obj.field2 += 1;
        // how to efficiently write `obj` back on disk?
    }
}
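One possible approach, offered here only as a sketch rather than an answer from the thread, is to pair the JsonParser with a JsonGenerator that streams the modified objects to a second file while reading. This assumes the input is a single top-level array; the output file name bigFile.updated.json is made up for the example:

ObjectMapper mapper = new ObjectMapper();
JsonFactory factory = mapper.getFactory();
try (JsonParser parser = factory.createParser(new File("bigFile.json"));
     JsonGenerator generator = factory.createGenerator(new File("bigFile.updated.json"), JsonEncoding.UTF8)) {
    if (parser.nextToken() != JsonToken.START_ARRAY) {
        throw new IOException("Expected a top-level JSON array");
    }
    generator.writeStartArray();
    while (parser.nextToken() != JsonToken.END_ARRAY) {
        // Bind one element, modify it, and stream it straight back out.
        MyClass obj = mapper.readValue(parser, MyClass.class);
        obj.field1 = "Some new value";
        obj.field2 += 1;
        mapper.writeValue(generator, obj);
    }
    generator.writeEndArray();
}

Once the stream completes, the original file can be replaced by the updated one.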

Android - Accessing JSON children from a URL

I'm in the process of converting my website to an Android app, and one of the pages is currently populated via JSON on my website. The way it works is that the URL generates different JSON data with the same structure based on the passed ID. I already have the logic for passing the ID to the URL. Now I want to read the data in Java code and parse the JSON children and their values.
I have a URL that leads to the JSON file in textual form, but I'm not sure how to go about reading the data from it and accessing the child nodes based on the JSON key.
So I guess what I'm asking is what is the usual approach for this procedure? I see a lot of different examples, but none of which are applicable to my problem.
Anyone have any suggestions as to how I should approach this?
JSONObject jsonObject = new JSONObject(yourJsonString);
Now you have your JSON object...
If your JSON starts with an array, use this instead:
JSONArray jsonArray = new JSONArray(yourJsonArrayString);
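A short sketch of how the child nodes can then be read by key with org.json; the "user", "name", "email" and "id" keys are made up for the example and would need to match your actual JSON structure:

JSONObject jsonObject = new JSONObject(yourJsonString);

// Access nested children by key (keys here are hypothetical).
JSONObject user = jsonObject.getJSONObject("user");
String name = user.getString("name");
String email = user.optString("email", "");        // optString avoids an exception if the key is missing

// If the payload is an array of such objects:
JSONArray items = new JSONArray(yourJsonArrayString);
for (int i = 0; i < items.length(); i++) {
    JSONObject item = items.getJSONObject(i);
    String id = item.getString("id");
}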
You can use existing libraries to parse JSON; Gson and Moshi are two options.
The way you go about parsing the JSON is as follows.
First you need to make POJOs with the same structure as the JSON.
Then you can parse the JSON into Java objects via the fromJson() method; this creates new objects and fills them with the data from the JSON.
Gson example for clarification:
Gson gson = new Gson();
Response response = gson.fromJson(jsonLine, Response.class);
where jsonLine is your JSON string and Response.class is the POJO into which you want the JSON to be loaded.
Now you have the JSON values as Java objects in response.
If you're using Retrofit and OkHttp to perform the network calls, I suggest you use Moshi, as it's also from Square and is claimed to work faster and better than Gson (if you want to know why, leave a comment).
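As a minimal sketch of what such a POJO might look like, assuming a hypothetical payload of the form {"status":"ok","message":"hello"}:

// Hypothetical POJO mirroring the JSON structure; field names must match the JSON keys
// (or be mapped explicitly, e.g. with Gson's @SerializedName).
public class Response {
    String status;
    String message;
}

Gson gson = new Gson();
Response response = gson.fromJson("{\"status\":\"ok\",\"message\":\"hello\"}", Response.class);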
I think what you're trying to do is this.
In the onPostExecute method, do the following:
@Override
protected void onPostExecute(String result) {
    String status = "";
    String message = "";
    String tag = "";
    String mail = "";
    try {
        JSONObject jsonResult = new JSONObject(result);
        status = jsonResult.optString("status");
        message = jsonResult.optString("message");
        tag = jsonResult.optString("tag");
        mail = jsonResult.optString("mail");
    } catch (JSONException e) {
        e.printStackTrace();
    }
}
Of course your JSON contains different keys; just replace them with yours.

Parse multiple JSON objects in one file

I have multiple JSON objects stored in one file, separated by a newline character (but one object can span multiple lines) - it's output from the MongoDB shell.
What is the easiest way to parse them (get them in an array or collection) using Gson and Java?
Another possibility is to use Jackson and its ObjectReader.readValues() methods:
public <T> Iterator<T> readStream(final InputStream _in) throws IOException {
    ObjectMapper mapper = new ObjectMapper();
    // configure object mappings
    ...
    // and then
    return mapper.reader(MapObject.class).readValues(_in);
}
This works pretty well even on fairly big (a few gigabytes) JSON data files.
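Since the question asks about Gson specifically, here is a minimal sketch of the same idea with Gson's streaming JsonReader (JsonReader and JsonToken come from com.google.gson.stream); MyObject is a placeholder class, and lenient mode is what allows several top-level values in one stream:

public List<MyObject> readAll(Reader source) throws IOException {
    Gson gson = new Gson();
    List<MyObject> result = new ArrayList<MyObject>();
    JsonReader reader = new JsonReader(source);
    reader.setLenient(true);                          // allow multiple top-level values
    while (reader.peek() != JsonToken.END_DOCUMENT) {
        MyObject obj = gson.fromJson(reader, MyObject.class);
        result.add(obj);
    }
    reader.close();
    return result;
}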

Saving variable state in between sessions?

So I'm in the process of developing a Java IRC bot as a bit of a side project for a friend of mine, and while development is going well, I'm a little unsure as how to save the current state of certain variables in between sessions. It doesn't have a GUI, so I didn't think that it would be too complex, but my searching efforts have been futile thus far.
Thanks.
It will depend on the sort of variables you want to keep, but all approaches will require you to write some sort of data to a file.
If you only need to keep a handful of variables, you could consider implementing a .config file that could be a simple delimited text file.
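A minimal sketch of that idea using java.util.Properties; the file name and the keys (channel, greeting) as well as the currentChannel and greetingMessage variables are made up for the example:

// Save a handful of variables to a simple key=value file.
Properties props = new Properties();
props.setProperty("channel", currentChannel);        // hypothetical variables to persist
props.setProperty("greeting", greetingMessage);
try (FileOutputStream out = new FileOutputStream("bot.config")) {
    props.store(out, "IRC bot state");
}

// Load them again on the next start.
Properties loaded = new Properties();
try (FileInputStream in = new FileInputStream("bot.config")) {
    loaded.load(in);
}
String channel = loaded.getProperty("channel", "#default");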
If it's an entire object that you want to keep track of, say, a player in an IRC game, one option is to serialize the object to JSON and save it to a text file for reading later. You can use Gson for this.
Example for a 'player' object:
public String savePlayer(String playerName) {
    Gson gsonPretty = new GsonBuilder().setPrettyPrinting().create();
    String playerFile = System.getProperty("user.dir") + "\\players\\" + playerName;
    String jsonplayers = gsonPretty.toJson(players.get(playerName));
    try {
        FileWriter writer = new FileWriter(playerFile + ".json");
        writer.write(jsonplayers);
        writer.close();
        return "Player file saved successfully!";
    } catch (Exception e) {
        e.printStackTrace();
    }
    return "Something went wrong";
}
You can then create a load method that either has the file name hard-coded or takes a String input to determine which file to load, and use something like:
playerFromJson = gson.fromJson(jsonString, Player.class);
to use that object in the code
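A sketch of what such a load method could look like, under the same assumptions as the save method above (the players map, the Player class and the file layout come from that example; error handling is minimal):

public Player loadPlayer(String playerName) {
    Gson gson = new Gson();
    String playerFile = System.getProperty("user.dir") + "\\players\\" + playerName + ".json";
    try (Reader reader = new FileReader(playerFile)) {
        Player playerFromJson = gson.fromJson(reader, Player.class);
        players.put(playerName, playerFromJson);   // put the restored object back into the map
        return playerFromJson;
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}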

Fastest way to import millions of JSON documents to MongoDB

I have more than 10 million JSON documents of the form :
["key": "val2", "key1" : "val", "{\"key\":\"val", \"key2\":\"val2"}"]
in one file.
Importing using the Java driver API took around 3 hours, using the following function (importing one BSON document at a time):
public static void importJSONFileToDBUsingJavaDriver(String pathToFile, DB db, String collectionName) {
    // open the file
    FileInputStream fstream = null;
    try {
        fstream = new FileInputStream(pathToFile);
    } catch (FileNotFoundException e) {
        e.printStackTrace();
        System.out.println("file does not exist, exiting");
        return;
    }
    BufferedReader br = new BufferedReader(new InputStreamReader(fstream));
    // read it line by line
    String strLine;
    DBCollection newColl = db.getCollection(collectionName);
    try {
        while ((strLine = br.readLine()) != null) {
            // convert each line to BSON
            DBObject bson = (DBObject) JSON.parse(strLine);
            // insert the BSON into the database
            try {
                newColl.insert(bson);
            } catch (MongoException e) {
                // duplicate key
                e.printStackTrace();
            }
        }
        br.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Is there a faster way? Maybe MongoDB settings influence the insertion speed? For example, adding an "_id" key that will function as the index, so that MongoDB does not have to create an artificial key (and thus an index) for each document, or disabling index creation entirely during insertion.
Thanks.
I'm sorry, but you're all picking at minor performance issues instead of the core one. Separating the logic of reading the file from the inserting is a small gain. Loading the file in binary mode (via MMAP) is a small gain. Using Mongo's bulk inserts is a big gain, but still no dice.
The whole performance bottleneck is the DBObject bson = (DBObject) JSON.parse(strLine) line. In other words, the problem with the Java driver is that it needs a conversion from JSON to BSON, and this code seems to be awfully slow or badly implemented. A full JSON round trip (encode + decode) via JSON-simple, or especially via JSON-smart, is 100 times faster than the JSON.parse() command.
I know Stack Overflow is telling me right above this box that I should be answering the question, which I'm not, but rest assured that I'm still looking for an answer to this problem. I can't believe all the talk about Mongo's performance when this simple example code fails so miserably.
I've imported a multi-line JSON file with ~250M records. I just used mongoimport < data.txt and it took 10 hours. Compared to your 10M records in 3 hours, I think this is considerably faster.
Also, from my experience, writing your own multi-threaded parser speeds things up drastically. The procedure is simple (a rough sketch follows the reminder below):
Open the file as BINARY (not TEXT!)
Set markers (offsets) evenly across the file. The number of markers depends on the number of threads you want.
Search for '\n' near the markers and calibrate the markers so they are aligned to lines.
Parse each chunk with a thread.
A reminder:
when you want performance, don't use a stream reader or any built-in line-based read methods. They are slow. Just use a binary buffer, search for '\n' to identify a line, and (most preferably) do in-place parsing in the buffer without creating a String. Otherwise the garbage collector won't be very happy with this.
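A rough, hedged sketch of that chunking scheme; the thread count, the memory layout (each chunk is read fully into a byte buffer and assumed to fit in memory) and the parseAndInsert() worker are assumptions invented for the example:

public static void importInParallel(final String path, int threadCount) throws Exception {
    long fileSize = new File(path).length();
    long chunkSize = fileSize / threadCount;
    ExecutorService pool = Executors.newFixedThreadPool(threadCount);

    // Steps 2 + 3: place evenly spaced markers and align each one to the byte after the next '\n'.
    long[] starts = new long[threadCount + 1];
    starts[0] = 0;
    starts[threadCount] = fileSize;
    try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
        for (int i = 1; i < threadCount; i++) {
            long pos = i * chunkSize;
            raf.seek(pos);
            while (pos < fileSize && raf.read() != '\n') {
                pos++;
            }
            starts[i] = Math.min(pos + 1, fileSize);
        }
    }

    // Step 4: parse each chunk with its own thread.
    for (int i = 0; i < threadCount; i++) {
        final long from = starts[i];
        final long to = starts[i + 1];
        pool.submit(() -> {
            if (to <= from) {
                return null;                                      // degenerate chunk (e.g. one very long line)
            }
            try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
                byte[] chunk = new byte[(int) (to - from)];
                raf.seek(from);
                raf.readFully(chunk);
                int lineStart = 0;
                for (int j = 0; j < chunk.length; j++) {
                    if (chunk[j] == '\n') {
                        parseAndInsert(chunk, lineStart, j - lineStart);  // hypothetical per-line worker
                        lineStart = j + 1;
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
            return null;
        });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.DAYS);
}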
You could also parse the entire file at once and then insert the whole set of JSON documents into Mongo, avoiding multiple loops. You need to separate the logic as follows:
1) Parse the file and retrieve the JSON objects.
2) Once the parsing is over, save the JSON objects into the Mongo collection.
I've got a slightly faster way (I'm also inserting millions at the moment): insert lists of documents instead of single documents with
insert(List<DBObject> list)
http://api.mongodb.org/java/current/com/mongodb/DBCollection.html#insert(java.util.List)
That said, it's not that much faster. I'm about to experiment with setting WriteConcerns other than ACKNOWLEDGED (mainly UNACKNOWLEDGED) to see if I can speed it up further. See http://docs.mongodb.org/manual/core/write-concern/ for info.
Another way to improve performance is to create the indexes after the bulk insert. However, this is rarely an option, except for one-off jobs.
Apologies if this is slightly woolly-sounding; I'm still testing things myself. Good question.
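A minimal sketch of that batching idea against the legacy DBCollection API used in the question; db, collectionName and br are the variables from the question's method, and the batch size of 1000 is an arbitrary choice for the example:

DBCollection coll = db.getCollection(collectionName);
List<DBObject> batch = new ArrayList<DBObject>(1000);
String line;
while ((line = br.readLine()) != null) {
    batch.add((DBObject) JSON.parse(line));
    if (batch.size() == 1000) {
        coll.insert(batch);      // one round trip for the whole batch
        batch.clear();
    }
}
if (!batch.isEmpty()) {
    coll.insert(batch);          // flush the remainder
}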
You can also remove all the indexes (except for the PK index, of course) and rebuild them after the import.
Use bulk insert/upsert operations. Since Mongo 2.6 you can do bulk updates/upserts. The example below does a bulk upsert using the C# driver.
MongoCollection<foo> collection = database.GetCollection<foo>(collectionName);
var bulk = collection.InitializeUnorderedBulkOperation();
foreach (FooDoc fooDoc in fooDocsList)
{
    var update = new UpdateDocument { { fooDoc.ToBsonDocument() } };
    bulk.Find(Query.EQ("_id", fooDoc.Id)).Upsert().UpdateOne(update);
}
BulkWriteResult bwr = bulk.Execute();
You can use a bulk insertion.
You can read the documentation on the MongoDB website, and you can also check this Java example on Stack Overflow.
