What is Struts2 serialization? - java

I am trying to implement struts2-jquery-grid, but I am stuck on a serialization issue. I can't figure out what serialization means in Struts2 with type="json". I have checked the Struts website documentation, but it is not clear to me. Can anyone explain Struts serialization in simple words?

Whatever object is output will be serialized into JSON format and returned to the client (mostly a web browser in this case).
For example, if a class like this were to be returned as JSON:
class Person {
    private int age;
    private String name;
    // getters and setters omitted
}
its corresponding JSON string would be (values are mocked up and assumed):
{"person1": {"age": 2, "name": "Chin Boon"}}

Serialization is the process of converting a data structure or object state into a format that can be stored (for example, in a file or memory buffer, or transmitted across a network connection) and "resurrected" later in the same or another computer environment.
So the JSON plugin converts your whole object graph, starting from the action class, and sends the data to the UI, where the JSON can be used for display; the same process works in reverse for deserialization.
The concept of serialization is not specific to Struts2; it's a generic concept used widely in real-life applications.
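As a minimal sketch (the action name and constructor are illustrative, not from the question), a Struts2 action exposed through the JSON plugin might look like this; with a struts.xml result of type "json", the plugin serializes every bean property reachable through the action's getters:
import com.opensymphony.xwork2.ActionSupport;

// Hypothetical action; assumes a <result type="json"/> mapping in struts.xml.
public class PersonAction extends ActionSupport {
    private Person person;

    @Override
    public String execute() {
        person = new Person(2, "Chin Boon"); // assumes Person has such a constructor
        return SUCCESS;
    }

    // The JSON plugin walks the getters, so the response becomes
    // {"person": {"age": 2, "name": "Chin Boon"}}
    public Person getPerson() {
        return person;
    }
}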

Related

Conflicting types of serialization/deserialization and annotations

I've just started to learn about serialization/deserialization and I'm a bit confused about which type is used where and when... let me explain:
I have an object containing many fields some of which are full of "useless" info.
Now, when I log the contents of such object for debugging purposes I would like the output to be in a nice json format.
So in the toString() method of this object I do the following:
import com.fasterxml.jackson.databind.ObjectMapper;
...
@Override
public String toString() {
    ObjectMapper objectMapper = new ObjectMapper();
    String s = "";
    try {
        s = objectMapper.writeValueAsString(this);
    } catch (Exception e) {
        // ignored: toString() falls back to an empty string
    }
    return s;
}
but this also logs all the useless fields.
So I've looked around and found the @JsonIgnore annotation from com.fasterxml.jackson.annotation.JsonIgnore, which I can put on top of the useless fields so as not to log them.
But from what I've understood, serialization is a process of transforming a Java object into a byte stream so that it can be written to a file, saved in a session, or sent across the internet. So my noob question is: is it possible that using the @JsonIgnore annotation on certain fields will result in those fields not being saved into the session (I use a Hazelcast map), not being sent in the HTTP responses I send, or not being written to a file if I ever decide to do that?
If the answer to the previous question is NO, then is that because those types of actions (saving in a session, writing to a file, sending an HTTP response) use different types of serialization than objectMapper.writeValueAsString(this), so they don't conflict?
In your case, you're using Jackson's ObjectMapper to convert your object to a string representation (in JSON format). The @JsonIgnore annotation is part of Jackson's annotations and will prevent fields annotated with it from being included in the JSON representation of your object.
However, this only affects the string representation created by the ObjectMapper, not other forms of serialization/deserialization. If you want to persist the object in a specific way, you may need to use a different form of serialization (such as binary serialization) or create a custom representation that excludes the fields you don't want to save.
So to answer your questions:
No, using @JsonIgnore will not affect the object saved in a session or sent as an HTTP response.
Yes, that's correct. Different forms of serialization/deserialization may handle fields differently, even if they are part of the same object.
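To make the boundary concrete, here is a minimal sketch (class and field names are made up for illustration) showing that @JsonIgnore only affects Jackson's output, while Java's built-in serialization has its own mechanism, the transient keyword:
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class User implements Serializable {
    public String name = "alice";
    @JsonIgnore
    public String debugInfo = "useless"; // hidden from Jackson only
}

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        User u = new User();

        // Jackson output: {"name":"alice"}; debugInfo is omitted
        System.out.println(new ObjectMapper().writeValueAsString(u));

        // Java's built-in serialization knows nothing about @JsonIgnore:
        // debugInfo is still written unless the field is declared transient.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(u);
        }
        System.out.println("ObjectOutputStream wrote " + bos.size() + " bytes (debugInfo included)");
    }
}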

Android Firestore limitations to custom object models

I am migrating my app to use Firebase Firestore, and one of my models is very complex (it contains lists of other custom objects). Looking at the documentation on how to commit a model object as a document, it looks like you simply create your model object with a public constructor and getters and setters.
For example from the add data guide:
public class City {
    private String name;
    private String state;
    private String country;
    private boolean capital;
    private long population;
    private List<String> regions;

    public City() {}

    public City(String name, String state, String country,
                boolean capital, long population, List<String> regions) {
        // field assignments omitted
    }

    // getters and setters omitted
}
Firestore automatically translates this to and from a document without any additional steps. You pass an instance to a DocumentReference.set(city) call, and retrieve it with a call to DocumentSnapshot.toObject(City.class).
How exactly does it serialize this to a document? Through reflection? It doesn't discuss any limitations. Basically, I'm left wondering if this will work on more complex models, and how complex. Will it work for a class with an ArrayList of custom objects?
Firestore automatically translates this to and from a document without any additional steps. How exactly does it serialize this to a document? Through reflection?
You're guessing right: through reflection. As @Doug Stevenson also mentioned in his comment, that's very common for systems like Firebase, to convert JSON data to POJOs (Plain Old Java Objects). Please also note that the setters are not required. If there is no setter for a JSON property, the Firebase client will set the value directly onto the field. A constructor with arguments is also not required. While both are idiomatic, there are good cases to have classes without them. Please also take a look at some information regarding the existence of the no-argument constructor.
It doesn't discuss any limitations.
Yes, it does. The official documentation explains that documents have limits, so there are limits on how much data you can put into a single document. According to the official documentation regarding usage and limits:
Maximum size for a document: 1 MiB (1,048,576 bytes)
As you can see, you are limited to 1 MiB of data in a single document. For text you can store quite a lot, but as your array gets bigger (with custom objects), be careful about this limitation.
Please also note that if you are storing a large amount of data in arrays, and those arrays need to be updated by lots of users, there is another limitation to take care of: you are limited to 1 write per second on every document. So if you have a situation in which a lot of users are all trying to write/update data to the same document at once, you might start to see some of these writes fail. So be careful about this limitation too.
Will it work for a class with an ArrayList of custom objects?
It will work with any type of class, as long as its fields are supported data types.
Basically, I'm left wondering if this will work on more complex models, and how complex.
It will work with any kind of complex model, as long as you are using the correct data types for your objects and your documents stay within that 1 MiB limitation.
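As an illustration, here is a minimal sketch of a nested model (class and field names are hypothetical, not from the question); Firestore maps a List of such POJOs on City to an array of maps inside the document, reading and writing it by reflection:
// Hypothetical nested type: a City could hold a List<Region>, and Firestore
// maps it to an array of maps, provided every field bottoms out in a supported type.
public class Region {
    private String name;
    private long population;

    public Region() {} // no-argument constructor needed for toObject()

    public String getName() { return name; }
    public long getPopulation() { return population; }
}

// Usage sketch:
// db.collection("cities").document("SF").set(city);  // reflective write
// City c = snapshot.toObject(City.class);            // reflective read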

Spring/Jackson Mapping Inner JSON Objects

I have a RESTful web service that provides JSON that I am consuming. I am using Spring 3.2 and Spring's MappingJacksonHttpMessageConverter. My JSON looks like this:
{
    "Daives": {
        "Daive": {},
        "Daive": {},
        "Daive": {},
        "Daive": {}
    }
}
Now, everything I have read seems to indicate that this JSON should be refactored into an array of JSON Daives. However, this is valid JSON, so I want to make sure I am thinking correctly before going back to the service provider to ask for changes. In the format above, I would have to know ahead of time how many Daives there are going to be so that my DTO accounted for them. The handy-dandy Jackson mapper isn't going to work with this kind of JSON setup. If the JSON were altered to provide an array of JSON Daives, I could use a List to map them dynamically with Spring/Jackson.
Am I correct? Thanks :)
According to this thread, the JSON spec itself does not forbid multiple fields with the same name (in your case, multiple fields named "Daive" in the object "Daives").
However, most parsers will either return an error or ignore any value but the last one. As you said, putting these values into an array seems much more sensible; and indeed, you'll be able to map this array to a List with Jackson.
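A minimal sketch of the refactored payload and a matching DTO (names are illustrative; "Daive" stands in for the existing DTO): with the Daives in a JSON array, Jackson binds them to a List without knowing the count in advance.
import java.util.List;

// Assumed refactored payload: {"daives": [{}, {}, {}]}
public class DaivesResponse {
    private List<Daive> daives; // Jackson sizes the list to match the array

    public List<Daive> getDaives() { return daives; }
    public void setDaives(List<Daive> daives) { this.daives = daives; }
}

// Usage sketch (org.codehaus.jackson ObjectMapper in Spring 3.2's Jackson 1.x setup):
// DaivesResponse r = new ObjectMapper().readValue(json, DaivesResponse.class);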

Partial deserialization of a huge binary file - Java

This is my first question on Stack Overflow. Please let me know if the question is not clear or needs any more details.
I have a class which has three attributes like this:
class SampleClass {
    long[] field1;
    float[] field2;
    float[] field3;
}
A huge SampleClass object is built (with about a billion entries for each array). This object is serialized on one host, and the serialized file is uploaded to another machine. Now I want to deserialize only a portion of the file, so that I get a smaller SampleClass object with only about 10 indices filled in each field rather than the complete object, because this machine does not have enough capacity to load such a huge object into memory. Is this possible?
The object is serialized using Java's writeObject method by a different utility, so I have no control over it. Thanks in advance.
Forget using the Java serialization API - it's only designed to deserialize everything. If you have no control over how the serialized file is generated, then you should consider parsing the serialized file yourself and extracting the necessary parts - it's not really that hard.
The Java serialization format is well-documented (see e.g. official docs, informative article), and tools exist to parse the format (e.g. Serialysis, jdeserialize) though it isn't particularly hard to write your own tool based on the format spec.
Once you can parse the serialized data, you can simply extract what you need and skip over what you don't need.
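As a tiny illustration of how approachable the format is (the file name is a placeholder), the stream opens with a documented header; a custom reader can verify it and then walk the records, skipping the array bodies it does not need:
import java.io.DataInputStream;
import java.io.FileInputStream;

public class StreamHeaderCheck {
    public static void main(String[] args) throws Exception {
        try (DataInputStream in = new DataInputStream(new FileInputStream("huge.ser"))) {
            short magic = in.readShort();   // STREAM_MAGIC, expected 0xACED
            short version = in.readShort(); // STREAM_VERSION, expected 5
            System.out.printf("magic=0x%04X version=%d%n", magic & 0xFFFF, version);
        }
    }
}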
Your best bet is to serialize only the portion you need, given that you cannot control/override serialization itself. On the machine that serialized the entire file and is able to deserialize it (see the sketch after these steps):
1) load the entire file into an object
2) create a new, blank SampleClass object
3) copy the elements from the required region of each array into the blank SampleClass object
4) serialize this smaller version
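A minimal sketch of those steps, assuming SampleClass implements Serializable and using placeholder file names; the copy bounds match the ~10 indices from the question:
import java.io.*;
import java.util.Arrays;

public class TrimSerialized {
    public static void main(String[] args) throws Exception {
        // Runs on the machine that can hold the full object in memory.
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("huge.ser"));
             ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("small.ser"))) {
            SampleClass full = (SampleClass) in.readObject();       // step 1: load everything

            SampleClass small = new SampleClass();                  // step 2: blank object
            small.field1 = Arrays.copyOfRange(full.field1, 0, 10);  // step 3: slice each array
            small.field2 = Arrays.copyOfRange(full.field2, 0, 10);
            small.field3 = Arrays.copyOfRange(full.field3, 0, 10);

            out.writeObject(small);                                 // step 4: re-serialize
        }
    }
}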
If it helps any, fields can be made transient so they will not be serialized.
Still, it looks to me like this object belongs in a database:
It does not fit in virtual memory.
Only a portion of it is required at a given time.
So you could keep it on disk and use queries to fetch the required portions.

Key-Value on top of Appengine

Although App Engine is already schema-less, you still need to define the entities that will be stored in the Datastore through the DataNucleus persistence layer. So I am thinking of a way to get around this: a layer that stores key-value pairs at runtime, instead of compile-time entities.
The way this is done with Redis is by creating a key like this:
private static final String USER_ID_FORMAT = "user:id:%s";
private static final String USER_NAME_FORMAT = "user:name:%s";
From the docs, the Redis types are: String, Linked-list, Set, Sorted set. I am not sure if there are more.
As far as the GAE Datastore is concerned, a String "Key" and a "Value" have to be the entity that will be stored.
Like:
public class KeyValue {
    private String key;
    private Value value; // value can be a String, Linked-list, Set, Sorted set, etc.
    // Code omitted
}
The justification for this scheme is rooted in the RESTful access to the Datastore (provided by datanucleus-api-rest).
Using this REST API, to persist an object or entity:
POST http://datanucleus.appspot.com/dn/guestbook.Greeting
{
    "author": null,
    "class": "guestbook.Greeting",
    "content": "test insert",
    "date": 1239213923232
}
The problem with this approach is that in order to persist an Entity, the actual class needs to be defined at compile time; with a key-value store mechanism, by contrast, the call could be simplified to:
POST http://datanucleus.appspot.com/dn/org.myframework.KeyValue
{
    "class": "org.myframework.KeyValue",
    "key": "user:id:johnsmith;followers",
    "value": "the_list"
}
Passing a single string as "value" is fairly easy; I can use a JSON array for a list, set, or sorted list. The real question is how to persist the different types of data passed into the interface. Should there be multiple KeyValue entities, each representing a basic type it supports: KeyValueString? KeyValueList? etc.
Looks like you're using a JSON based REST API, so why not just store Value as a JSON string?
You do not need to use the Datanucleus layer, or any of the other fine ORM layers (like Twig or Objectify). Those are optional, and are all based on the low-level API. If I interpret what you are saying properly, perhaps it already has the functionality that you want. See: https://developers.google.com/appengine/docs/java/datastore/entities
DataNucleus is a specific framework that runs on top of GAE. You can, however, access the database at a lower, less structured, more key/value-like level: the low-level API. That's the lowest level you can access directly.
BTW, the low-level "GAE datastore" internally runs on 6 global Google Megastore tables, which in turn are hosted on the Google Bigtable database system.
Saving JSON as a String works fine. But you will need ways to retrieve your objects other than by ID. That is, you need a way to index your data to support any kind of useful query on it.
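Putting the last two answers together, a minimal sketch using the low-level API with the value stored as a JSON string (the kind and property names are illustrative):
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Text;

public class KeyValueStore {
    // Stores any JSON value (object, array, or scalar) under a Redis-style key.
    public static void save(String key, String jsonValue) {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        Entity kv = new Entity("KeyValue", key);      // key name doubles as the lookup key
        kv.setProperty("value", new Text(jsonValue)); // Text avoids the short-string limit
        ds.put(kv);
    }
}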
