As we know, serialization in Java (to quote from a blog) is used to convert the state of an object into a byte stream, which can be persisted to disk/file or sent over the network to any other running Java Virtual Machine.
REST API case:
Now considering the second case, sending over a network to another running JVM, take the example of a REST API, i.e. "host:port/path/resource".
Usually I use Spring's @RequestMapping to return the resource model POJO class as a ResponseEntity. I do not implement the Serializable interface in the model class and everything works fine; I get the response of the API as JSON.
ModelX.java
public class ModelX {
    private int x = 2;
    private String xs = "stringx";
    // getters and setters
}
Controller method:
@RequestMapping(value = "/test", method = RequestMethod.POST)
public ResponseEntity<ModelX> getTestModel(@RequestBody ModelX mox) {
    ModelX mx = new ModelX();
    mx.setX(mox.getX());
    mx.setXs(mox.getXs());
    return new ResponseEntity<ModelX>(mx, HttpStatus.OK);
}
Is it because the Spring framework makes it Serializable under the hood with those REST API annotations? If not, how are we able to send it over a network without making it Serializable?
Persistence case:
Just for more thought: even in the case of persisting objects in a database, we use @Entity from JPA. I tested whether an instance of an @Entity-annotated class IS-A Serializable, and it gives false.
@Entity
class Car {
    int id;
    String name;
    // getters and setters
}
Test method:
Car c = new Car();
System.out.println(c instanceof Serializable);
Output: false
So even when we save this object's state to a database, does the ORM also do some kind of serialization under the hood?
Serialization is a general notion:
In computer science, in the context of data storage, serialization (or serialisation) is the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer) or transmitted (for example, across a network connection link) and reconstructed later (possibly in a different computer environment).
Serializable is used for a single specific implementation of serialization, which happens to be built into the JVM and is called "Java serialization". But conversion to JSON, or to whatever protocol the database uses for communication, is also serialization; it just has nothing to do with Serializable.
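To illustrate the distinction, here is a minimal sketch (reusing the ModelX class from the question, and Jackson's ObjectMapper, which is what Spring uses for JSON by default): Jackson serializes the POJO to JSON without any Serializable, while Java serialization refuses it.
import java.io.ByteArrayOutputStream;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        ModelX model = new ModelX();

        // JSON serialization: works without Serializable
        String json = new ObjectMapper().writeValueAsString(model);
        System.out.println(json); // {"x":2,"xs":"stringx"}

        // Java serialization: fails because ModelX does not implement Serializable
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(model);
        } catch (NotSerializableException e) {
            System.out.println("Java serialization needs Serializable: " + e);
        }
    }
}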
Are there any specific scenarios or applications (that you may be aware of) where sending a byte stream is more beneficial?
For one, binary formats are smaller than JSON and can be read/written faster. And they can still be cross-platform. But for this you generally would use a different binary format than Java serialization: e.g. Protocol Buffers or Thrift (or many others).
So if the app that sends the data goes down, the data will still be available to the other JVM (which won't happen in the case of an API).
This works perfectly well with JSON as well (or PB or Thrift as above).
Related
I have the updateProvider(ProviderUpdateDto providerUpdt) method in my Spring controller, but I do not see the need to send the whole payload of the provider entity if, for example, the client can only update the name or some other attribute; that is, it is not necessary to send the whole entity when only one field needs to be updated. This produces excessive bandwidth consumption when it is not necessary.
What is a better practice to send only the fields that are going to be updated and be able to build a DTO dynamically? And how would I do this if I'm using Spring Boot to build my API?
You can use the Jackson library; it provides the annotation @JsonInclude(Include.NON_NULL), and with this only properties with non-null values will be passed to your client.
Check the link http://www.baeldung.com/jackson-ignore-null-fields for an example.
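As a rough sketch (the fields of ProviderUpdateDto here are assumed for illustration, not taken from the question), the annotation is applied at class level so that any field left null is simply omitted from the serialized JSON:
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonInclude.Include;

@JsonInclude(Include.NON_NULL)
public class ProviderUpdateDto {
    private String name;    // omitted from the JSON when null
    private String address; // omitted from the JSON when null
    // getters and setters
}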
There are many techniques to improve bandwidth usage:
not pretty-printing the JSON
enabling HTTP GZIP compression
However, it is more important to ensure your API is logically sound: omitting some fields may break the business rules, and a too fine-grained API design will also increase the interface complexity.
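For instance, if the API is built with Spring Boot, response compression can be enabled with a few entries in application.properties (a sketch; exact defaults vary by Boot version):
server.compression.enabled=true
server.compression.mime-types=application/json,application/xml,text/plain
server.compression.min-response-size=1024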
Another option would be to have a DTO object for field changes which would work for every entity you have, e.g.:
class EntityUpdateDTO {
    // the class of the object you are updating, or just use a custom identifier
    private Class<? extends DTO> entityClass;
    // the id of that object
    private Long entityId;
    // the fields you are updating
    private String[] updateFields;
    // the values of those fields...
    private Object[] updateValues;
}
Example of a JSON object:
{
    "entityClass": "MyEntityDTO",
    "entityId": 324123,
    "updateFields": [
        "property1",
        "property2"
    ],
    "updateValues": [
        "blabla",
        25
    ]
}
Might bring some issues if any of your updateValues are complex objects themselves though...
Your API would become updateProvider(EntityUpdateDTO update);.
Of course you should leave out the entityClass field if you have an update API for each DTO, as you'd already know which entity class you are working on...
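As a rough sketch of how such a DTO might be applied on the server side (the EntityUpdater class is hypothetical, and it assumes the DTO exposes getters; Spring's ReflectionUtils does the field assignment):
import java.lang.reflect.Field;
import org.springframework.util.ReflectionUtils;

public class EntityUpdater {

    // applies the partial update described by the DTO to an already-loaded entity
    public void apply(Object entity, EntityUpdateDTO update) {
        String[] fields = update.getUpdateFields();
        Object[] values = update.getUpdateValues();
        for (int i = 0; i < fields.length; i++) {
            Field field = ReflectionUtils.findField(entity.getClass(), fields[i]);
            if (field == null) {
                throw new IllegalArgumentException("Unknown field: " + fields[i]);
            }
            ReflectionUtils.makeAccessible(field);
            ReflectionUtils.setField(field, entity, values[i]);
        }
    }
}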
Still, unless you are working with huge objects I wouldn't worry about bandwidth.
I'm currently trying to acquire skills in REST, and specifically in "good" REST, hypermedia, and all the good practices that come with it.
In order to do so, I was asked to develop a prototype REST server containing data of my choice and implementing everything I'll have to use in a real project coming after that.
So I made a server using Spring Boot and Jackson for JSON handling.
My data architecture is close to this: I have a collection of LaunchVehicle objects (I like space =D) like Ariane V, Falcon 9, etc. I can retrieve the JSON object flawlessly:
{ "name":"Ariane V","country":"Europe","firstFlight":null,"GTO_Payload":1.0,"LEO_Payload":2.3,"weight":136.0 }
The thing is, I'd like to add a "space agency" field, an object containing some Strings and Floats, inside my LaunchVehicle. However, when the client retrieves a LaunchVehicle, I don't want it to retrieve the full SpaceAgency object, just the name for example. From there, it would be able to follow the link to the space agency via a hypermedia link included in the response it received.
How can I do this? Right now I'm only able to send the client my full LaunchVehicle object with the SpaceAgency object and all its fields. Is there any annotation doing what I want? Thanks ;)
public class LaunchVehicle {
    private String name;
    private String country;
    private Date firstFlight;
    private Map<String, Float> characteristics;
    private SpaceAgency spaceAgency;

    @JsonCreator
    constructor...

    @JsonProperty(required = false)
    getters and setters...
}
Thanks a lot, don't hesitate if I'm not precise or understandable enough.
Try the @JsonIgnoreProperties annotation at the class level. That should give you the feature that you want.
Otherwise, you could always use some kind of DTO object to create your response model, and there just have the fields that are going to be used at the API layer.
I would rather prefer to use an appropriate DTO/ApiModel for your API layer than have a full domain object with JSON annotations in it.
If your SpaceAgency class only defines the properties that you need to deserialize, Jackson will only deserialize those and simply ignore the unmapped properties.
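A rough sketch of that DTO approach (class and accessor names are assumed for illustration): the API layer exposes only the agency name and leaves the rest of the SpaceAgency out.
public class LaunchVehicleDTO {
    private String name;
    private String country;
    private String spaceAgencyName; // just the name, not the whole SpaceAgency

    // built from the domain object at the API layer
    public LaunchVehicleDTO(LaunchVehicle vehicle) {
        this.name = vehicle.getName();
        this.country = vehicle.getCountry();
        this.spaceAgencyName = vehicle.getSpaceAgency().getName();
    }

    // getters and setters
}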
Try JAX-RS!
It's the standard REST API in Java.
Oracle docs
A very good tutorial by Mkyong
You can use the Gson API for this
// parse the raw JSON string and pull out just the agency name
JsonParser parser = new JsonParser();
JsonObject obj = parser.parse(spaceAgency).getAsJsonObject();
String agencyName = obj.get("agencyName").getAsString();
I think you should reference the space agency as a hyperlink.
So the JSON will look like:
{ "name":"Ariane V",
"country":"Europe",
< other fields omitted >
"_links": {
"agency": { "href": "agencies/ESA" },
< other links omitted >
}
}
To achieve this you need to specify the link in your data transfer object. Don't make this a reference to an actual object of that type -- to do so would mean populating that object, even when the client doesn't ask for it.
How you achieve this depends on what technology you're using. In Jersey it's
public class LaunchVehicle {
    ...
    @InjectLink(resource = AgencyResource.class)
    URI agencyLink;
    ...
}
https://jersey.java.net/documentation/latest/declarative-linking.html
Linking like this is what "real" REST is all about. However note that plenty of real-world solutions claim to be doing REST without actually using hyperlinks. A more hacky solution would be to have a String agencyId field in your JSON, which could be put into a URL template to get agency details.
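Since the question uses Spring Boot, a roughly equivalent sketch with Spring HATEOAS is shown below (assuming the spring-boot-starter-hateoas dependency and a hypothetical AgencyController with a getAgency method; the spaceAgency field itself would still need to be kept out of the JSON, e.g. via a DTO or @JsonIgnore):
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

import org.springframework.hateoas.EntityModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LaunchVehicleController {

    @GetMapping("/vehicles/{name}")
    public EntityModel<LaunchVehicle> getVehicle(@PathVariable String name) {
        LaunchVehicle vehicle = findVehicle(name); // hypothetical lookup
        // add an "agency" hypermedia link pointing at the agency resource
        return EntityModel.of(vehicle,
                linkTo(methodOn(AgencyController.class)
                        .getAgency(vehicle.getSpaceAgency().getName()))
                        .withRel("agency"));
    }

    private LaunchVehicle findVehicle(String name) {
        // placeholder for the actual repository/service call
        throw new UnsupportedOperationException("lookup not shown");
    }
}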
So I have an app that needs to store certain configuration info, and so I am planning on storing the configs as simple JSON documents in Mongo:
appConfig: {
fizz: true,
buzz: 34
}
This might map to a Java POJO/entity like:
public class AppConfig {
private boolean fizz;
private int buzz;
}
etc. Ordinarily, with relational databases, I use Hibernate/JPA for O/R mapping from table data to/from Java entities. I believe the closest JSON/Mongo companion to table/Hibernate is a Morphia/GSON combo: use Morphia to drive connectivity from my Java app to Mongo, and then use GSON to O/J map the JSON to/from Java POJOs/entities.
The problem here is that, over time, my appConfig document structure will change. It may be something simple like:
appConfig: {
fizz: true,
buzz: 34,
foo: "Hello!"
}
Which would then require the POJO/entity to become:
public class AppConfig {
private boolean fizz;
private int buzz;
private String foo;
}
But the problem is that I may have tens of thousands of JSON documents already stored in Mongo that don't have foo properties in them. In this specific case, the obvious solution is to set a default on the property like:
public class AppConfig {
private boolean fizz;
private int buzz;
private String foo = "Hello!";
}
However in reality, eventually the AppConfig document/schema/structure might change so much that it in no way, shape or form resembles its original design. But the kicker is: I need to be backwards-compatible and, preferably, be capable of updating/transforming documents to match the new schema/structure where appropriate.
My question: how is this "versioned document" problem typically solved?
I usually solve this problem by adding a version field to each document in the collection.
You might have several documents in the AppConfig collection:
{
_id: 1,
fizz: true,
buzz: 34
}
{
_id: 2,
version: 1,
fizz: false,
buzz: 36,
foo: "Hello!"
}
{
_id: 3,
version: 1,
fizz: true,
buzz: 42,
foo: "Goodbye"
}
In the above example, there are two documents at version one and one older document at version zero (in this pattern, I generally interpret a missing or null version field as version zero, because I only add the field once I start versioning documents in production).
The two principles of this pattern:
Documents are always saved at the newest version when they are actually modified.
When a document is read, if it's not at the newest version, it gets transparently upgraded to the newest version.
You do this by checking the version field, and performing a migration when the version isn't new enough:
BasicDBObject update(BasicDBObject document) {
    // a missing version field is treated as version 0
    if (document.getInt("version", 0) < 1) {
        document.put("foo", "Hello!"); // add default value for foo
        document.put("version", 1);
    }
    return document;
}
This migration can fairly easily add fields with default values, rename fields, and remove fields. Since it's located in application code, you can do more complicated calculations as necessary.
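For example, a later migration step that renames buzz to buzzCount (a purely hypothetical rename, just to show the shape) could be chained onto the same method:
// continuing the update(...) sketch above: version 1 -> version 2 renames a field
if (document.getInt("version", 0) < 2) {
    Object oldValue = document.removeField("buzz"); // removeField returns the removed value
    document.put("buzzCount", oldValue);
    document.put("version", 2);
}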
Once the document has been migrated, you can run it through whatever ODM solution you like to convert it into Java objects. This solution no longer has to worry about versioning, since the documents it deals with are all current!
With Morphia this could be done using the @PreLoad annotation.
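A minimal sketch of what that could look like, assuming the older Morphia API (org.mongodb.morphia package names) where a @PreLoad method can receive the raw DBObject before it is mapped:
import com.mongodb.DBObject;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.PreLoad;

@Entity("appConfig")
public class AppConfig {
    private boolean fizz;
    private int buzz;
    private String foo;
    private int version;

    @PreLoad
    void migrate(DBObject document) {
        // transparently upgrade old documents as they are loaded
        Object v = document.get("version");
        int current = (v instanceof Number) ? ((Number) v).intValue() : 0;
        if (current < 1) {
            document.put("foo", "Hello!");
            document.put("version", 1);
        }
    }
}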
Two caveats:
Sometimes you may want to save the upgraded document back to the database immediately. The most common reasons for this are when the migration is expensive, the migration is non-deterministic or integrates with another database, or you're in a hurry to upgrade an old version.
Adding or renaming fields that are used as criteria in queries is a bit trickier. In practice, you may need to perform more than one query, and unify the results.
In my opinion, this pattern highlights one of the great advantages of MongoDB: since the documents are versioned in the application, you can seamlessly migrate data representations in the application without any offline "migration phase" like you would need with a SQL database.
The JSON deserializer solves this in a very simple way for you (using Java).
Just allow your POJO/entity to grow with new fields. When you deserialize your JSON from Mongo to your entity, all missing fields will simply be null.
mongoDocument v1         : Entity of v3
{
    fizz: "abc",    -->    fizz = "abc";
    buzz: 123       -->    buzz = 123;
                    -->    newObj = null;
                    -->    obj_v3 = null;
}
You can even use this the other way around if you'd like your legacy servers to work with newer database objects:
mongoDocument v3         : Entity of v1
{
    fizz: "abc",    -->    fizz = "abc";
    buzz: 123,      -->    buzz = 123;
    newObj: "zzz",  -->    (not mapped)
    obj_v3: "b"     -->    (not mapped)
}
Depending on whether they have the fields or not, they will be populated by the deserializer.
Keep in mind that booleans are not best suited for this since they can default to false (depending on which deserializer you use).
So unless you are actively going to work with versioning of your objects, why bother with the overhead when you can build a legacy-safe server implementation that, with just a few null checks, can handle any of the older objects.
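A minimal sketch of both directions with Jackson (assuming Jackson is the deserializer in use and that AppConfig has the usual default constructor plus getters/setters):
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class LenientMappingDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper()
                // lets a legacy entity class read newer documents by skipping unknown fields
                .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

        // old document, new entity: the missing "foo" simply stays null
        AppConfig fromOld = mapper.readValue("{\"fizz\":true,\"buzz\":34}", AppConfig.class);
        System.out.println(fromOld.getFoo()); // null

        // newer document, older entity definition: unknown fields (here "extra") are ignored
        AppConfig fromNew = mapper.readValue(
                "{\"fizz\":true,\"buzz\":34,\"foo\":\"Hello!\",\"extra\":1}", AppConfig.class);
        System.out.println(fromNew.getFoo()); // Hello!
    }
}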
I hope this proposal might help you with your set-up
I guess the thread below will help you, although it is not about versioning documents in the DB, and it has been done using spring-data-mongodb:
How to add a final field to an existing spring-data-mongodb document collection?
So you can assign values to the POJO based on the existence of the property in the document, using a Converter implementation.
You have a couple of options with Morphia, at least. You could use a versioned class name and then rely on Morphia's use of the className property to fetch the correct class version; your application would then just have to migrate that old object to the new class definition. Another option is to use @PreLoad and massage the DBObject coming out of Mongo into the new shape before Morphia maps the DBObject to your class. Using a version field on the class, you can determine which migration to run when the data is loaded. From that point, it would just look like the new form to Morphia and would map seamlessly. Once you save that configuration object back to Mongo, it would be in the new form and the next load wouldn't need to run the migration.
I have multiple web services running on Tomcat servers with a Java backend. When one of the services queries something from one of the other services, it returns the payload as a JSON string. Now I need to parse this and get the info that I need. I use the JSON library provided by json.org.
What I wanted to ask is: will it be faster (with respect to processing) if I have a template class (a class with just attributes and their getters/setters) as a library class in both services, pass the payload as an object of that class, and then accept it by casting to that object?
WebResource localWebResource = localClient.resource(url);
ClientResponse localClientResponse = (ClientResponse) localWebResource
.accept(new String[] { "application/json" }).get(ClientResponse.class);
//Scenario 1 - accept it as String
String jsonString = (String) localClientResponse.getEntity(String.class);
MyObject myObj = parseJson(jsonString);
//Scenario 2 - accept it as object of 'MyObject'
MyObject myObj = (MyObject) localClientResponse.getEntity(MyObject.class);
Using scenario 2, will it be any faster compared to scenario 1? When converting to 'MyObject' in scenario 2, will the framework take the same amount of time as the manual parsing in scenario 1?
Note that the payload is transferred over the network. Will either of these approaches have any effect on the network transfer time?
In theory, both methods would take the same amount of time: they both have to transform the JSON string into an object graph. However, my experience is that getting the object from the framework is much faster than trying to do it myself (using Jersey 1.17, no Jackson, JAXB/JSON conversion).
In my DB I have a number of entity classes, and I run standard CRUD operations on them via Hibernate. It's not a problem to create a generic DAO class to handle all the main operations on these classes. For example, in the DAO I have methods that look like this:
<T> List<T> loadAll(Class<T> clazz)
Now I want to expose these methods to a web-service client via a Spring 3 web service.
The only way I see is to implement web methods for all entities, i.e. write a class that looks like...
class BookResponse { List<BookEntity> books; }
... and return this from a corresponding web method "BookResponse getAllBooks()". This will ruin my attempts to make the code simpler by using a DAO with generics.
Are there any other ways?
How can I do this without implementing web methods for ALL my entities?
If a generic web service is not possible, maybe there are some other ways to solve this task simply?
UPDATE:
At the moment I am trying to implement a response class which should look like
public class ServiceResponse<T> {
    @XmlElementWrapper(name = "data")
    @XmlElements({
        @XmlElement(name = "a", type = EntityA.class),
        @XmlElement(name = "b", type = EntityB.class)
    })
    private List<T> data = new ArrayList<T>();
    // getters, setters
}
So I want to be able to insert a list of any entities mapped with annotations into this response. This produces no errors, but the response given to me by the web service is empty.
I think you'll need a new POJO "GenericEntity" which can hold the information of any domain entity class instance.
It would hold a type string and an arbitrary/generic list of named attributes.
It can then be used to represent any of your real domain entities
e.g.
type = Book
attributes = (title=Order of the Phoenix, author=J K Rowling)
e.g.
type = Car
attributes = (make=Renault, model=Clio)
These examples show String attributes so you'll have to sort out if this is good enough or if you need strong typing - it's possible but harder.
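A minimal sketch of such a GenericEntity with plain String attributes (field names are illustrative) could look like this:
import java.util.LinkedHashMap;
import java.util.Map;

public class GenericEntity {
    // e.g. "Book" or "Car"
    private String type;
    // attribute name -> attribute value, kept as plain strings
    private Map<String, String> attributes = new LinkedHashMap<String, String>();

    public String getType() { return type; }
    public void setType(String type) { this.type = type; }

    public Map<String, String> getAttributes() { return attributes; }
    public void setAttributes(Map<String, String> attributes) { this.attributes = attributes; }
}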
You can then expose your "GenericEntity" via web services, allowing clients to make calls in and specify which domain entity they wish to search for, and even allow them to specify search criteria too.
Adds and deletes could be done in a similar way.
HTH,
David