I'm new to Apache Avro (a serialization framework). I know what serialization is, but why are there separate frameworks like Avro, Thrift, and Protocol Buffers?
Why can't we use Java's serialization APIs instead of these separate frameworks? Are there any flaws in Java's serialization APIs?
What is the meaning of the following phrase, in Avro or in any other serialization framework?
"does not require running a code-generation program when a schema changes"
Please help me understand all of this!
Why can't we use Java's serialization APIs instead of these separate frameworks? Are there any flaws in Java's serialization APIs?
I would assume you can use Java Serialization unless you know otherwise.
The main reasons not to use it are:
you know there is a performance problem.
you need to exchange data across languages. Java Serialization is only for Java.
does not require running a code-generation program when a schema changes
I am guessing this means it can read serialized data written with an older or newer model without having to re-generate and compile code, i.e. it is tolerant of changes in the model.
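To make that concrete, here is a minimal sketch using Avro's generic API (the record and field names are made up): a datum written with an "old" schema is read back with a "new" schema that has an extra defaulted field, with no generated classes involved.

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.*;

    public class SchemaEvolutionSketch {
        public static void main(String[] args) throws Exception {
            // Writer schema: the "old" shape of the record
            Schema writerSchema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"User\",\"fields\":[" +
                "{\"name\":\"name\",\"type\":\"string\"}]}");
            // Reader schema: a "new" shape with an extra field and a default
            Schema readerSchema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"User\",\"fields\":[" +
                "{\"name\":\"name\",\"type\":\"string\"}," +
                "{\"name\":\"age\",\"type\":\"int\",\"default\":-1}]}");

            // Serialize with the old schema
            GenericRecord user = new GenericData.Record(writerSchema);
            user.put("name", "Ada");
            java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            new GenericDatumWriter<GenericRecord>(writerSchema).write(user, encoder);
            encoder.flush();

            // Deserialize with the new schema: the missing "age" field is
            // filled in from its default, with no code-generation step
            BinaryDecoder decoder = DecoderFactory.get()
                .binaryDecoder(out.toByteArray(), null);
            GenericRecord read = new GenericDatumReader<GenericRecord>(
                writerSchema, readerSchema).read(null, decoder);
            System.out.println(read); // {"name": "Ada", "age": -1}
        }
    }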
BTW: As the data models I work with are usually a) very simple and b) require maximum performance, I write my own serialization without using a framework (or write my own framework). This is fine provided your model is very simple and won't change often.
In short, unless you know you can't, try Java Serialization first.
A comparison I did on different Serialization Methods
1.
The problem with Java serialization is that it's not agnostic of your code, meaning that it is tightly coupled to the structure of your classes. Other serialization frameworks give you some flexibility/control that is useful for getting around this kind of situation. Even though the standard Java mechanism lets you control serialization through the writeObject and readObject methods (a minimal sketch follows these points), it is a problem that other frameworks have addressed in a more elegant way.
Second, you cannot exchange the output of your Java serialization with other languages or platforms.
Last, but not least: Java serialization does not produce the most compact result possible, which might lead to performance degradation if you do things like transfer data over a network. Other protocols (like Oracle's POF or Protocol Buffers) are more optimized to produce smaller output.
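For reference, this is the standard hook mentioned in the first point; a minimal sketch with a made-up Point class that controls its own wire format:

    import java.io.*;

    public class Point implements Serializable {
        private static final long serialVersionUID = 1L;
        private transient double x, y; // excluded from default serialization

        public Point(double x, double y) { this.x = x; this.y = y; }

        private void writeObject(ObjectOutputStream out) throws IOException {
            out.defaultWriteObject();
            // Choose the wire form explicitly instead of relying on field layout
            out.writeDouble(x);
            out.writeDouble(y);
        }

        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            x = in.readDouble();
            y = in.readDouble();
        }
    }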
2.
Regarding your second question, I guess it means that you don't need to run a precompile job that generates code whenever the structure of your serialized classes changes. I personally hate frameworks that force some kind of compile-time code generation. I hate the hassle of even having to look at generated code, but that is just me and my OCD.
Two principal things Avro does well: Hadoop's MapReduce and communication protocol structures.

I use it for MapReduce, where I put numerous data instances in a single file, all conforming to a particular schema; each record is stored very efficiently, and markers delineate each individual record. Hadoop also uses it to communicate data between the Map and Reduce tasks. Much better than storing field names alongside the data. These files are easy to split into multiple parts for processing in a distributed computing environment. Since the schema is embedded in the file, a reader doesn't have to know what the data looks like.

Avro is not tied to any language, and there are several language APIs for reading Avro data. If you want to write out a single complex object, then Java's serialization or Avro will work. If you want more power and efficiency, and are using millions of individual instances, then Avro is a good alternative. I am sure you can do this with the Java API, but why work that hard?
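As a sketch of what writing such a container file looks like (the schema, field, and file name are made up), using Avro's data file API, which embeds the schema in the file header:

    import java.io.File;
    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;

    public class AvroFileSketch {
        public static void main(String[] args) throws Exception {
            Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"LogLine\",\"fields\":[" +
                "{\"name\":\"msg\",\"type\":\"string\"}]}");
            DataFileWriter<GenericRecord> writer = new DataFileWriter<GenericRecord>(
                new GenericDatumWriter<GenericRecord>(schema));
            writer.create(schema, new File("records.avro")); // schema goes in the header
            for (int i = 0; i < 3; i++) {
                GenericRecord r = new GenericData.Record(schema);
                r.put("msg", "record " + i);
                writer.append(r); // many records, one compact, splittable file
            }
            writer.close();
        }
    }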
There are mechanisms to evolve schemas through the schema resolution rules. There are also tools that will turn your Java objects into schemas for you.
The best place to start is here: http://avro.apache.org/docs/current/spec.html It may take a couple of reads to get the gist. Read it again after trying to use some of the tools that come with the Avro package. Avro takes a while to get the hang of. JSON is only used as a schema specification language; it isn't used to store the data. You can generate schemas using the API or using a JSON file. Lots of flexibility, and enough rope to easily get into trouble with -- well worth it, though.
Related
I need to query XML out of a MarkLogic server and marshal it into Java objects. What is a good way to go about this? Specifically:
Does using MarkLogic have any impact on the XML technology stack? (i.e. is there something about MarkLogic that leads to a different approach to searching for, reading and writing XML snippets?)
Should I process the XML myself using one of the XML APIs or is there a simpler way?
Is it worth using JAXB for this?
Someone asked a good question of why I am using Java. I am using Java/Java EE because I am strongest in that language. This is a one man project and I don't want to get stuck anywhere. The project is to develop web service APIs and data processing and transformation (CSV to XML) functionality. Java/Java EE can do this well and do it elegantly.
Note: I'm the EclipseLink JAXB (MOXy) lead, and a member of the JAXB 2 (JSR-222) expert group.
Does using MarkLogic have any impact on the XML technology stack? (i.e. is there something about MarkLogic that leads to a different approach to searching for, reading and writing XML snippets?)
Potentially. Some object-to-XML libraries support a larger variety of documents than other ones do. MOXy leverages XPath based mappings that allows it to handle a wider variety of documents. Below are some examples:
http://blog.bdoughan.com/2010/09/xpath-based-mapping-geocode-example.html
http://blog.bdoughan.com/2011/03/map-to-element-based-on-attribute-value.html
Should I process the XML myself using one of the XML APIs or is there a simpler way?
Using a framework is generally easier. Java SE offers many standard libraries for processing XML: JAXB (javax.xml.bind), XPath (javax.xml.xpath), DOM, SAX, and StAX. Since these are standards, there are also other implementations (i.e. MOXy and Apache JaxMe implement JAXB).
http://blog.bdoughan.com/2011/05/specifying-eclipselink-moxy-as-your.html
Is it worth using JAXB for this?
Yes.
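To give a feel for how little code that takes, here is a minimal JAXB sketch (the Customer class and the XML snippet are made up):

    import java.io.StringReader;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.JAXBException;
    import javax.xml.bind.annotation.*;

    @XmlRootElement(name = "customer")
    @XmlAccessorType(XmlAccessType.FIELD)
    class Customer {
        String name;
    }

    public class JaxbSketch {
        public static void main(String[] args) throws JAXBException {
            JAXBContext ctx = JAXBContext.newInstance(Customer.class);
            // Unmarshal an XML snippet (e.g. an XQuery result) into an object
            Customer c = (Customer) ctx.createUnmarshaller().unmarshal(
                new StringReader("<customer><name>Jane</name></customer>"));
            System.out.println(c.name); // Jane
        }
    }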
There are a number of XML-to-Java object marshalling libraries. I think you might want to look for an answer to this question by searching for generic Java XML marshalling/unmarshalling questions like this one:
Java Binding Vs Manually Defining Classes
Your use case is still not perfectly clear, although the title edit helps. If you're looking for Java connectivity, you might also want to look at http://developer.marklogic.com/code/mljam which allows you to execute Java code from within MarkLogic XQuery.
XQSync uses XStream for this. As I understand it, JAXB is more powerful - but also more complex.
Having used JAXB to unmarshal XML served from XQuery for 5 years now, I have to say that I have found it to be exceptionally useful and time-saving. As for complexity, it is easy to learn and use for probably 90% of what you would be using it for. I've used it for both simple and complex schemas and found it to be very performant and time-saving.

Executing Java code from within MarkLogic is usually a non-starter, because it runs in a separate VM on the MarkLogic server, so it really can't leverage any session state or libraries from, say, a Java EE web application.

With JAXB, it is very easy to take a result stream and convert it to Java objects. I really can't say enough good things about it. It has made my development efforts infinitely easier and allows you to leverage Java for those things that it does best (rich integration across various technologies and platforms, advanced business logic, fast memory management for heavy processing jobs, etc.) while still using XQuery for what it does best (i.e. searching and transforming content).
Does using MarkLogic have any impact on the XML technology stack?
No. By the time it comes out of MarkLogic, it's just XML that could have come from anywhere.
I need to query XML and marshal it into Java objects.
Why?
If you have a good reason for using Java, then we need to know what that reason is before we can tell you which Java technology is appropriate.
If you don't have a good reason for using Java, then you are better off using a high-level XML processing language such as XSLT or XQuery.
As for JAXB, it is appropriate when your schema is reasonably simple and stable. If the schema is complex (e.g. the schema for articles in an academic journal), then JAXB can be hopelessly unwieldy because of the number of classes that are generated. One problem with using it to process XQuery output is that it's very likely the XQuery output will not conform to any known schema, and the structure of the XQuery results will be different for each query that gets written.
I need a framework to transfer POJOs between two (or more in a client/server model) Java programs over TCP/IP. I need it to be as simple as possible but it must support several clients per server, and easy implementation of encryption is a plus.
So far I have looked at Java RMI, JRemoting, AltRMI and NinjaRMI. Right now JRemoting looks like the best choice, as it is simple and doesn't require the strange and seemingly unnecessary extends and implements that most of the others do. No active development seems to be going on for any of them, except a little on Java RMI. I don't know if that is because they are stable and don't need more development, or because these kinds of frameworks are just not "cool" any more.
The POJOs are just bags of properties. I need the server to hold a static list of objects, and the clients must be able to (1) Read the list, (2) Add an object to the list, and (3) Remove an object from the list.
Any suggestions?
You could probably use any serialization technology. For example, you could use JSON and add encryption and compression later in order to cut down the amount of traffic you are sending. JSON has the advantage of being language agnostic, so you don't restrict the implementation of either side of the connection.
Many JSON libraries are available; see json.org.
Do you need to do remote method calls, or are your POJOs just bags of properties? If the latter, it would probably be easiest to just use plain Java serialization.
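A minimal sketch of that suggestion, assuming the POJOs implement Serializable (the port number and list contents are made up); the server sends each client a snapshot of its list over plain Java serialization:

    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class PojoServerSketch {
        public static void main(String[] args) throws IOException {
            List<String> shared = Collections.synchronizedList(new ArrayList<String>());
            shared.add("example entry"); // stand-in for your property-bag POJOs
            try (ServerSocket server = new ServerSocket(9000)) {
                while (true) {
                    Socket client = server.accept();
                    new Thread(() -> {
                        try (Socket s = client;
                             ObjectOutputStream out =
                                 new ObjectOutputStream(s.getOutputStream())) {
                            // Send a snapshot; the client reads it back with
                            // ObjectInputStream.readObject()
                            out.writeObject(new ArrayList<String>(shared));
                        } catch (IOException ignored) { }
                    }).start();
                }
            }
        }
    }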
Look here: http://code.google.com/p/google-gson/
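A minimal Gson sketch (the Item class is made up) showing the JSON round trip; either end of the connection could parse the same text in any language:

    import com.google.gson.Gson;

    public class GsonSketch {
        static class Item { String name; int qty; }

        public static void main(String[] args) {
            Gson gson = new Gson();
            Item item = new Item();
            item.name = "widget";
            item.qty = 3;
            String json = gson.toJson(item);          // {"name":"widget","qty":3}
            Item back = gson.fromJson(json, Item.class);
            System.out.println(json + " -> " + back.name);
        }
    }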
You can have a look at Protocol Buffers. I think Google uses Protocol Buffers internally.
Suppose I have a remote JAX-RS JSON API from a server running Tomcat. I want to access this API from a C/C++ client. Are there any tools available to make life easier for the C/C++ client, e.g. code generators? Or does anybody have a suggestion for an alternative?
I have never heard of such a tool. More to the point, I suspect that such a tool (a C / C++ generator for JSON) is impractical.
There are a number of reasons why. Some of the most significant ones are:
A key problem is that JSON doesn't have schemas. This means that an API generator would have to resort to looking at example messages and trying to infer what fields to expect and what their types are. This can be difficult, and even theoretically impossible in some cases.
In languages like Java and C#, there are straightforward "right ways" to generate object APIs, e.g. the JavaBeans conventions. In C++, and especially C, the conventions are not there, and there are complicating issues like container protocols and memory management to deal with.
Languages like Java and C# are runtime type-safe, and provide various language-level mechanisms that allow you to use dynamic programming to deal with JSON's schema-less nature. For example, in Java you have reflection, proxy classes, dynamic code generation and dynamic code loading, all of which can help in dealing with JSON. In C and C++, these mechanisms are generally unavailable.
In short, if you are using C or C++, JSON libraries are as good as it will get.
FOLLOWUP
As a comment points out, this may be feasible to implement in the context of a specific JAX-RS-based server implementation. You'd need to get hold of the internal metadata, apply the JSON mapping to it, and generate C / C++ APIs from that. The problems are:
The generator implementation would be platform specific.
The C / C++ based client would not be able to cope with changes to the effective schema without regeneration of the APIs and corresponding client code changes. (By contrast, a JSON library-based solution can in theory be coded to deal with unexpected new attributes, etc)
You still have the container / memory management problem to deal with.
What you need is your choice of library for sending and receiving HTTP requests, and a JSON parser. Nothing is going to generate code to make it easier for you, because the idea of such an API is that it spits out JSON. The point of JSON is to go across language and transport barriers in a consistent way. A bit like XML, but simpler.
This question might interest you: what's the best JSON parser? JSON Spirit looks like a particularly good one.
Now, as you're using REST, all you need is to communicate with the right URLs. Done.
The final thing you want to decide is what library to use for network communication. Boost would be many people's recommendation, I'm sure.
I'm looking for a super simple JSON or YAML library (not particularly bothered which one) written in Java, and can be used in both GWT on the client, and in its original Java form on the server.
What I'm trying to do is this: I have my models, which are shared between the client and the server, and these are the primary source of data interchange. I want to design the web service in between to be as simple as possible, and decided to take the RESTful approach.
My problem is that I know our application will grow substantially in the future, and writing all the getters, setters, serialization, factories, etc. by hand fills me with absolute dread. So in order to avoid it, I decided to implement annotations to keep track of attributes on the models.
The reason I can't just serialize everything directly, using GWT's own one, or one which works through reflection, is because we need a certain amount of logic going on in the serialization process. I.e. whether references to other models get serialized during the serialization of the original model, or whether an ID is just passed, and general simple things like that. I've then written an annotation processor to preprocess my shared models and generate an implementing class with all the getters, setters, serialization, lazy-loading, etc.
To make a long story short, I need some type of simple YAML or JSON library which allows me to encode and decode manually, so I can generate this code through my annotation processor. I have had a look around the interwebs, but every single one I ran into used some reflection which, while all fine and dandy, makes it pretty much useless for GWT. And in the case of GWT's own JSON library, it uses JSNI for speed purposes, making it useless server side.
One solution I did think about involved writing two sets of serialization methods on the models, one for the client and one for the server, but I'd rather not do that.
Also, I'm pretty new to GWT, and even though I have done a lot of Java, it was back in the 1.2 days, so it's a bit rusty. So if you think I'm going about this problem completely the wrong way, I'm open to suggestions.
Have you looked into itemscript? Some excerpts from the description on the webpage:
A cross-platform GWT & standard Java JSON library, with convenient classes, parsers, and utilities.
A RESTful connector API for retrieval of data (JSON, text & small binary files) over a variety of protocols.
The same JSON API can be used in both standard Java and in GWT Java.
I'm currently working on a project which needs to persist any kind of object (of which implementation we don't have any control) so these objects could be recovered afterwards.
We can't implement an ORM because we can't restrict the users of our library at development time.
Our first alternative was to serialize it with the Java default serialization but we had a lot of trouble recovering the objects when the users started to pass different versions of the same object (attributes changed types, names, ...).
We have tried the XMLEncoder class (which transforms an object into XML), but we have found that there is a lack of functionality (it doesn't support enums, for example).
Finally, we also tried JAXB, but this forces our users to annotate their classes.
Any good alternative?
It's 2011, and in a commercial grade REST web services project we use the following serializers to offer clients a variety of media types:
XStream (for XML but not for JSON)
Jackson (for JSON)
Kryo (a fast, compact binary serialization format)
Smile (a binary format that comes with Jackson 1.6 and later).
Java Object Serialization.
We experimented with other serializers recently:
SimpleXML seems solid, runs at 2x the speed of XStream, but requires a bit too much configuration for our situation.
YamlBeans had a couple of bugs.
SnakeYAML had a minor bug relating to dates.
Jackson JSON, Kryo, and Jackson Smile were all significantly faster than good old Java Object Serialization, by about 3x to 4.5x. XStream is on the slow side. But these are all solid choices at this point. We'll keep monitoring the other three.
http://x-stream.github.io/ is nice, please take a look at it! Very convenient
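A minimal sketch of the XStream round trip (the Point class is made up; the allowTypes call is needed on recent XStream versions for its security allow-list):

    import com.thoughtworks.xstream.XStream;

    public class XStreamSketch {
        static class Point { int x = 1, y = 2; }

        public static void main(String[] args) {
            XStream xstream = new XStream();
            xstream.alias("point", Point.class);
            xstream.allowTypes(new Class[]{Point.class}); // security allow-list
            String xml = xstream.toXML(new Point()); // <point><x>1</x><y>2</y></point>
            Point p = (Point) xstream.fromXML(xml);
            System.out.println(xml + " -> " + p.x);
        }
    }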
of which implementation we don't have any control
The solution is don't do this. If you don't have control of a type's implementation you shouldn't be serialising it. End of story. Java serialisation provides serialVersionUID specifically for managing serialisation incompatibilities between different versions of a type. If you don't control the implementation you cannot be sure that IDs are being changed correctly when a developer changes a class.
Take a simple example of a 'Point'. It can be represented by either a cartesian or a polar coordinate system. It would be cost prohibitive for you to build a system that could cope dynamically with these sorts of corrections - it really has to be the developer of the class who designs the serialisation.
In short it's your design that's wrong - not the technology.
The easiest thing for you to do is still to use serialization, IMO, but put more thought into the serialized form of the classes (which you really ought to do anyway). For instance:
Explicitly define the serialVersionUID.
Define your own serialized form where appropriate.
The serialized form is part of the class's API, and careful thought should be put into its design.
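As one illustration of defining your own serialized form, here is a minimal sketch of the serialization proxy idiom that Effective Java describes (the Period class and its fields are made up); the internals can change freely as long as the proxy stays stable:

    import java.io.InvalidObjectException;
    import java.io.ObjectInputStream;
    import java.io.Serializable;

    public final class Period implements Serializable {
        private static final long serialVersionUID = 1L;
        private final long start, end; // internal representation, free to change

        public Period(long start, long end) { this.start = start; this.end = end; }

        // The proxy *is* the serialized form of this class
        private static class Proxy implements Serializable {
            private static final long serialVersionUID = 1L;
            private final long start, end;
            Proxy(Period p) { this.start = p.start; this.end = p.end; }
            private Object readResolve() { return new Period(start, end); }
        }

        private Object writeReplace() { return new Proxy(this); }

        private void readObject(ObjectInputStream in) throws InvalidObjectException {
            throw new InvalidObjectException("Deserialize via the proxy");
        }
    }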
I won't go into a lot of details, since pretty much everything I have said comes from Effective Java. I'll instead, refer you to it, specifically the chapters about Serialization. It warns you about all the problems you're running into, and provides proper solutions to the problem:
http://www.amazon.com/Effective-Java-2nd-Joshua-Bloch/dp/0321356683
With that said, if you're still considering a non-serialization approach, here are a couple:
XML marshalling
As many have pointed out, this is an option, but I think you'll still run into the same problems with backward compatibility. However, with XML marshalling, you'll hopefully catch these right away, since some frameworks may do some checks for you during initialization.
Conversion to/from YAML
This is an idea I have been toying with, as I really like the YAML format (at least as a custom toString() format). But really, the only difference for you is that you'd be marshalling to YAML instead of XML. The only benefit is that YAML is slightly more human-readable than XML. The same restrictions apply.
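For instance, a minimal sketch with SnakeYAML, one of the libraries tried elsewhere in this thread (the map contents are made up):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import org.yaml.snakeyaml.Yaml;

    public class YamlSketch {
        public static void main(String[] args) {
            Yaml yaml = new Yaml();
            Map<String, Object> data = new LinkedHashMap<String, Object>();
            data.put("name", "widget");
            data.put("qty", 3);
            String text = yaml.dump(data); // "name: widget\nqty: 3\n"
            Object back = yaml.load(text); // parsed back into a Map
            System.out.println(text + " -> " + back);
        }
    }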
Google came up with a binary protocol, Protocol Buffers (http://code.google.com/apis/protocolbuffers/). It is faster and has a smaller payload compared to XML, which others have suggested as an alternative.
One of the advantages of Protocol Buffers is that it can exchange info with C, C++, Python and Java.
Try serializing to JSON, with Gson for example.
Also a very fast JDK serialization drop-in replacement:
http://ruedigermoeller.github.io/fast-serialization/
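A minimal sketch of the drop-in usage, based on the API documented on that project page (the Msg class is made up):

    import java.io.Serializable;
    import org.nustaq.serialization.FSTConfiguration;

    public class FstSketch {
        static class Msg implements Serializable { String text = "hi"; }

        static final FSTConfiguration conf =
            FSTConfiguration.createDefaultConfiguration();

        public static void main(String[] args) {
            byte[] bytes = conf.asByteArray(new Msg()); // replaces ObjectOutputStream
            Msg back = (Msg) conf.asObject(bytes);      // replaces ObjectInputStream
            System.out.println(back.text);
        }
    }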
If serialization speed is important to you then there is a comprehensive benchmark of JVM serializers here:
https://github.com/eishay/jvm-serializers/wiki
Personally, I use Fame a lot, since it features interoperability with Smalltalk (both VW and Squeak) and Python. (Disclaimer, I am the main contributor of the Fame project.)
Possibly Castor?
Betwixt is a good library for serializing objects - but it's not going to be an automatic kind of thing. If the number of objects you have to serialize is relatively fixed, this may be a good option for you, but if your 'customer' is going to be throwing new classes at you all the time, it may be more effort than it's worth (Definitely easier than XMLEncoder for all the special cases, though).
Another approach is to require your customer to provide the appropriate .betwixt files for any objects they throw at you (that effectively offloads the responsibility to them).
Long and short - serialization is hard - there is no completely brain dead approach to it. Java serialization is as close to a brain dead solution as I've ever seen, but as you've found, incorrect use of the version uid value can break it. Java serialization also requires use of the marker 'Serializable' interface, so if you can't control your source, you are kind of out of luck on that one.
If the requirement is truly as arduous as you describe, you may have to resort to some sort of BCE (byte code engineering) on the objects / aspects / whatever. This is getting way outside the realm of a small development project, and into the realm of Hibernate, Casper or an ORM....
SBE is an established library for fast, ByteBuffer-based serialization, and is capable of versioning. However, it is a bit hard to use, as you need to write lengthy wrapper classes over it.
In light of its shortcomings, I have recently made a Java-only serialization library inspired by SBE and the FIX protocol (a common financial-market protocol for exchanging trade/quote messages) that tries to keep the advantages of both while overcoming their weaknesses. You can take a look at https://github.com/iceberglet/anymsg
Another idea: use a cache. Caches provide much better control, scalability and robustness to the application. You still need to serialize, but the management becomes much easier within a caching service framework. The cache can be persisted in memory, disk, database or an array - or all of those options - with one being the overflow, standby, or fail-over for the other. Commons JCS and Ehcache are two Java implementations; the latter is an enterprise solution free up to 32 GB of storage (disclaimer: I don't work for Ehcache ;-)).