I was just introduced to the concept of serialisation in Java, and while I 'get' the fundamentals, I can't help but feel like it's a bit of overkill. My reasoning is this: if I have pointers to the objects and I know how many bytes they take up in memory, why can't I just write those bytes to some file, along with some extra bytes to indicate the type? Couldn't I then read these bytes back and restore my original object?
The amount of detail my book goes into on serialisation is a good indication that I'm not really understanding its importance, and that there is probably something more subtle going on than just writing out all the bytes exactly as they are. Any help is greatly appreciated! (I have some background in C++, if that helps.)
Why can't I just write those bytes to some file, along with some extra bytes to indicate the type? Couldn't I then read these bytes back and restore my original object?
How could anyone ever read them back in? Say I'm writing code that's supposed to read in your file. Please tell me what the third byte means so that I can decode it properly.
What if the internal representation of the object contains pointers to other objects that might be in different memory locations the next time the program runs? For example, it is quite common to manage identical strings by having internal references to the same internal string object. How will writing that reference to a file be sensible given that the internal string object may not exist in the next run?
To write data to a file, you need to write it out in some specific format that actually contains all the information needed to read it back in. Whatever happens to work internally for this program at this moment just won't do, as there's no guarantee that another program at another time can make sense of it.
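To make "some specific format" concrete, here is a minimal sketch of a hand-rolled format (the class and method names are made up for illustration): writer and reader agree, by documented convention, on exactly what every byte means.

import java.io.*;

public class ManualFormat {
    // Agreed format: bytes 0-3 are x (big-endian int), bytes 4-7 are y.
    static byte[] write(int x, int y) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(x);  // bytes 0-3
        out.writeInt(y);  // bytes 4-7
        return bos.toByteArray();
    }

    static int[] read(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        return new int[] { in.readInt(), in.readInt() };  // same order, same widths
    }

    public static void main(String[] args) throws IOException {
        int[] p = read(write(3, 4));
        System.out.println(p[0] + "," + p[1]);  // 3,4
    }
}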
What you suggest works provided:
the order and types of the fields don't change (note that this is not fixed at compile time);
the byte order doesn't change;
you don't have any references, e.g. no String, enum, List or Map;
the name and package of the type don't change.
At Chronicle, we use a form of serialization which works this way because it's much faster, but it's very limiting: you have to be very aware of those limitations and have a problem that suits them. We also have a form of serialization which has none of these constraints, but it is slower.
The purpose of Java Serialization is to support arbitrary object graphs even if data is exchanged between systems which might arrange the data differently.
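For instance, here is a small sketch (the Node class is made up for illustration) of the kind of thing the built-in mechanism handles for you: an object graph with a shared reference, which survives the round trip intact instead of being written as a meaningless memory address.

import java.io.*;

class Node implements Serializable {
    String name;
    Node next;  // a reference, not a raw memory address
    Node(String name) { this.name = name; }
}

public class GraphDemo {
    public static void main(String[] args) throws Exception {
        Node shared = new Node("shared");
        Node a = new Node("a"); a.next = shared;
        Node b = new Node("b"); b.next = shared;

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new Node[] { a, b });  // walks the whole graph for us
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            Node[] copy = (Node[]) in.readObject();
            // The sharing survives: both restored nodes point at ONE object.
            System.out.println(copy[0].next == copy[1].next);  // true
        }
    }
}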
Serialization is the process of converting an object stored in memory into a stream of bytes to be transferred over a network, stored in a DB, etc.
But isn't the object already stored in memory as bits and bytes? Why do we need another process to convert the object stored as bytes into another byte representation? Can't we just transmit the object directly over the network?
I think I may be missing something in the way the objects are stored in memory, or the way the object fields are accessed.
Can someone please help me in clearing up this confusion?
Different systems don't store things in memory in the same way. The obvious example is endianness.
Serialization defines a way by which systems using different in-memory representations can communicate.
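A small sketch of that point, using ByteBuffer to show the two common byte orders side by side:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class EndianDemo {
    public static void main(String[] args) {
        int value = 1;  // 0x00000001

        byte[] big = ByteBuffer.allocate(4)
                .order(ByteOrder.BIG_ENDIAN).putInt(value).array();
        byte[] little = ByteBuffer.allocate(4)
                .order(ByteOrder.LITTLE_ENDIAN).putInt(value).array();

        System.out.println(Arrays.toString(big));     // [0, 0, 0, 1]
        System.out.println(Arrays.toString(little));  // [1, 0, 0, 0]
        // Blindly copying raw memory between two such machines would
        // silently turn 1 into 16777216; a serialization format pins
        // down one byte order so both sides agree.
    }
}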
Another important fact is that the requirements on in-memory and serialized data may be different: when in-memory, fast read (and maybe write) access is desirable; when serialized, small size is desirable. It is easier to create two different formats to fit these two use cases than it is to create one format which is good for both.
An example which springs to mind is LinkedHashMap: this basically stores two versions of the mapping when in memory (one to capture insertion order; one as a traditional hash map). However, you don't need both of these representations to reconstruct the same map from a serialized form: you only need the insertion order of key/value pairs. As such, the serialized form does not store the same data as the in-memory form.
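A quick round-trip sketch of that idea: only the insertion order needs to travel, and the hash-table half of the in-memory structure is rebuilt on deserialization.

import java.io.*;
import java.util.*;

public class LinkedHashMapDemo {
    public static void main(String[] args) throws Exception {
        Map<String, Integer> map = new LinkedHashMap<>();
        map.put("third", 3); map.put("first", 1); map.put("second", 2);

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(map);  // writes the entries in insertion order
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            Map<?, ?> copy = (Map<?, ?>) in.readObject();  // hash table rebuilt here
            System.out.println(copy);  // {third=3, first=1, second=2} - order kept
        }
    }
}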
Serialization turns the pre-existing bytes from the memory into a universal form.
This is done because different systems lay out memory in different ways, so we cannot assume that an object dumped straight from memory on one machine can be loaded back properly on another, different machine.
Maybe you can find more information on this page of the Oracle docs.
An explanation of object serialization from the book Thinking in Java:
When you create an object, it exists for as long as you need it, but under no circumstances does it exist when the program terminates. While this makes sense at first, there are situations in which it would be incredibly useful if an object could exist and hold its information even while the program wasn’t running. Then, the next time you started the program, the object would be there and it would have the same information it had the previous time the program was running. Of course, you can get a similar effect by writing the information to a file or to a database, but in the spirit of making everything an object, it would be quite convenient to declare an object to be "persistent," and have all the details taken care of for you.
Java’s object serialization allows you to take any object that implements the Serializable interface and turn it into a sequence of bytes that can later be fully restored to regenerate the original object. This is even true across a network, which means that the serialization mechanism automatically compensates for differences in operating systems. That is, you can create an object on a Windows machine, serialize it, and send it across the network to a Unix machine, where it will be correctly reconstructed. You don’t have to worry about the data representations on the different machines, the byte ordering, or any other details.
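A minimal sketch of that "persistent object" idea (the Settings class and the state.ser file name are made up for illustration): the object is saved as the program ends and is there again on the next run.

import java.io.*;

class Settings implements Serializable {
    private static final long serialVersionUID = 1L;
    int launchCount;
}

public class PersistDemo {
    public static void main(String[] args) throws Exception {
        File file = new File("state.ser");
        Settings s;
        if (file.exists()) {
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
                s = (Settings) in.readObject();  // the object "survived" the last run
            }
        } else {
            s = new Settings();  // first run ever: start fresh
        }
        s.launchCount++;
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(s);  // persist for the next run
        }
        System.out.println("Launched " + s.launchCount + " time(s).");
    }
}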
Hope this helps you.
Let's go with that frame of mind: we take the object as is and send it as a byte array over the network, and another socket/HTTP handler receives that byte array.
Now, two things come to mind:
How many bytes should we send?
What are these bytes? What class do they represent?
You will have to provide this data as well, so for this action alone we need two extra steps.
Now, in C# and Java, as opposed to C++, objects are scattered throughout the heap, and each object holds references to the objects it contains, so we have another requirement:
Recursively "catch" all the inner objects and pack them into the byte array.
Now we have a packed byte array representing some object hierarchy, and we need to tell the other side how to unpack this byte array back into the object plus the objects it holds, so:
Send information on how to unpack that byte array back into an object hierarchy.
Some things an object has cannot be sent over the net, such as functions. So we have yet another step:
Strip away things that cannot be serialized, like functions.
This process goes on and on; for every new solution you will find new problems. Serialization is the process of taking that byte array you are talking about and turning it into something that can be handled in other environments, like the network or files.
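Java's built-in mechanism bakes all of these steps in. As a small sketch (class names made up), the transient keyword is exactly the "strip away what can't travel" step:

import java.io.*;

class Session implements Serializable {
    String user;              // plain data: serialized
    transient Thread worker;  // can't meaningfully cross the wire: skipped

    Session(String user) { this.user = user; }
}

public class TransientDemo {
    public static void main(String[] args) throws Exception {
        Session s = new Session("alice");
        s.worker = new Thread();

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(s);  // also prefixes class info and walks references
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            Session copy = (Session) in.readObject();
            System.out.println(copy.user);    // alice
            System.out.println(copy.worker);  // null - stripped in transit
        }
    }
}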
I opened this issue on the GitHub project prevayler-clj:
https://github.com/klauswuestefeld/prevayler-clj/issues/1
because 1M short vectors like [:a1 1], forming the state of the prevayler, result in a 1GB file when serialized one by one with Java's writeObject.
Is that possible? About 1kB for each PersistentVector? Further investigation showed that the same number of vectors can be serialized into an 80MB file. So, what's going wrong in Prevayler's serialization? Am I doing something wrong in these tests? Please refer to the GitHub issue for excerpts of my test code.
Prevayler apparently starts a fresh ObjectOutputStream for each serialized element, preventing any reuse of class data between them. Your test code, on the other hand, is written the "natural" way, allowing reuse. What forces Prevayler to restart every time is not clear to me, but I would hesitate to call it a "feature", given the negative impact it has; "workaround" is the more likely designation.
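A rough way to see the effect yourself (numbers will vary by JVM and class): serialize the same number of objects once through a single ObjectOutputStream and once through a fresh stream per object, and compare the totals. With one stream, class descriptors are written once and back-referenced afterwards; with a fresh stream each time, they are repeated in full.

import java.io.*;
import java.util.ArrayList;

public class StreamReuseDemo {
    static final int N = 10_000;

    public static void main(String[] args) throws IOException {
        // One stream for everything: descriptors written once, then referenced.
        ByteArrayOutputStream shared = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(shared)) {
            for (int i = 0; i < N; i++) {
                out.writeObject(new ArrayList<>());
            }
        }

        // A fresh stream per element: descriptors repeated N times.
        long perElement = 0;
        for (int i = 0; i < N; i++) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
                out.writeObject(new ArrayList<>());
            }
            perElement += bos.size();
        }

        System.out.println("one stream:       " + shared.size() + " bytes");
        System.out.println("stream per value: " + perElement + " bytes");
    }
}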
There's nothing wrong with Prevayler per se. It's just that Java's writeObject method is not exactly tuned to writing Clojure data; it's intended to store the internal structure of any serializable Java object. Since Clojure vectors are reasonably complex Java objects under the hood, I'm not very surprised that a small vector may write out as roughly a kB of data.
I'd guess that pretty much any Clojure-specific serialization method would result in smaller files. From experience, standard clojure.core/pr + clojure.core/read gives a good balance between file size and speed, and handles data structures of nearly any size.
See these pages for some insight in the internals of clojure vectors:
http://hypirion.com/musings/understanding-persistent-vector-pt-1
http://hypirion.com/musings/understanding-persistent-vector-pt-2
I am using Hibernate to store and retrieve data from a MySQL database. I was using a byte array but came across the SerialBlob class. I can use the class successfully, but I can't seem to find any difference between using a SerialBlob and a byte array. Does anyone know the basic differences, or situations in which you would use a SerialBlob in lieu of a byte[]?
You are right that the SerialBlob is just a thin abstraction around a byte[], but:
Are you working in a team?
Do you sometimes make mistakes?
Are you lazy with writing comments?
Do you sometimes forget what your code from a year ago actually does?
If you answered any of the above questions with a yes, you should probably use SerialBlob.
It's basically the same as with any other abstraction around a simple data structure (think ByteBuffer, for example) or another class. You want to use it over a byte[] because:
It's more descriptive. A byte[] could be some sort of cache, it could be a circular buffer, it could be some sort of integrity checking mechanism gone wrong. But if you use SerialBlob, it's obvious that this is just a blob of binary data from the database / to be stored in the database.
Instead of manual array handling, you use methods on the class, which is, again, easier to read if you don't know the code. Even trivial array manipulation must be comprehended by the reader of your code. A method with a good name is self-descriptive.
This is helpful for your teammates and also for you when you'll read this code in a year.
It's more error-proof. Every time you write any new code, there's a good chance it contains a bug. It may not be visible at first, but it is probably in there. The SerialBlob code has been exercised by thousands of people around the world, and it's safe to say you won't run into any bugs in it.
Even if you're sure you got your byte array handling right, because it's so straightforward, what if somebody else finds your code in half a year and starts "optimizing" things? What if they reuse an old blob, or mess with your magic array padding? Every single off-by-one error in index manipulation will corrupt your data, and that might not be detected right away (you are writing unit tests, aren't you?).
It restricts you to only a handful of possible interactions. This might actually look like a demerit, but it's not! It ensures you won't be using your blob as a local temporary variable after you're done with it. It ensures you won't try to make a String out of it or anything silly. It makes sure you'll only use it as a blob. Again, clarity and safety.
It's already written and always looks the same. You don't have to write a new implementation for every project, or read ten different implementations in ten different projects. If you'll ever see a SerialBlob in anyone's project, the usage will be clear to you. Everyone uses the same one.
TL;DR: A few years ago (or maybe still in C), using a byte[] would be OK. In Java (and OOP in general), try to use a specific class designed for the job instead of a primitive (low-level) structure, as it more clearly describes your intent, produces fewer errors and reduces the length of your code in the long run. A short usage sketch follows.
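A minimal usage sketch (values made up; note that java.sql.Blob positions are 1-based):

import javax.sql.rowset.serial.SerialBlob;

public class BlobDemo {
    public static void main(String[] args) throws Exception {
        byte[] raw = {1, 2, 3, 4, 5};

        SerialBlob blob = new SerialBlob(raw);  // wraps a copy of the bytes
        System.out.println(blob.length());      // 5

        // Positions are 1-based, per the java.sql.Blob contract.
        byte[] middle = blob.getBytes(2, 3);     // {2, 3, 4}
        System.out.println(middle.length);       // 3

        // Typical Hibernate/JDBC-style use would be along the lines of
        // preparedStatement.setBlob(1, blob) - left as a comment here,
        // since it needs a live database connection.
    }
}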
I'm trying to design a lightweight way to store persistent data in Java. I've already got a very efficient way to serialize POJOs to DataOutputStreams (and back), but I'm trying to think of a good way to ensure that changes to the data in the POJOs get serialized when necessary.
This is for a client-side app where I'm trying to keep the size of the eventual distributable as low as possible, so I'm reluctant to use anything that would pull in heavyweight dependencies. Right now my distributable is almost 10MB, and I don't want it to get much bigger.
I've considered DB4O, but it's too heavy; I need something light. Really, it's probably more a design pattern I need than a library.
Any ideas?
The 'lightest weight' persistence option will almost surely be simply marking some classes Serializable and reading/writing from some fixed location. Are you trying to accomplish something more complex than this? If so, it's time to bundle hsqldb and use an ORM.
If your users are tech savvy, or you're just worried about initial payload, there are libraries which can pull dependencies at runtime, such as Grape.
If you already have a compact byte-level output format (which I assume you have, if you can persist efficiently to a DataOutputStream), then an efficient and general technique is to run-length-encode the difference between the previous byte-array output and the new byte-array output; a sketch follows the list below.
Points to note:
If the object has not changed, the difference between the byte arrays will be all zeros and hence will compress down to almost nothing.
The first time you serialize the object, treat the previous output as all zeros, so that you communicate a complete set of data.
You probably want to be a bit clever when the object has variable-sized substructures, so that a single insertion doesn't shift, and hence invalidate, everything after it.
You can also try zipping the difference rather than run-length-encoding it; that might be more efficient in cases where you have a large object graph with a lot of changes.
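Here is the promised sketch, with a deliberately simple, made-up encoding: the diff is the XOR of the two snapshots, and runs of zero bytes (unchanged regions) collapse to a two-byte marker. It assumes equal-length snapshots and run lengths that fit in one byte; a real format would handle both properly.

import java.io.ByteArrayOutputStream;

public class DiffRle {
    static byte[] encode(byte[] previous, byte[] current) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int i = 0;
        while (i < current.length) {
            int diff = (previous[i] ^ current[i]) & 0xFF;
            if (diff == 0) {
                int run = 0;  // count a run of unchanged bytes
                while (i < current.length && (previous[i] ^ current[i]) == 0 && run < 255) {
                    run++; i++;
                }
                out.write(0);    // marker: a run of unchanged bytes follows
                out.write(run);  // run length
            } else {
                out.write(diff); // a changed byte: store its XOR value (never 0)
                i++;
            }
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] old = new byte[1000];  // "previous serialized output"
        byte[] now = old.clone();
        now[500] = 42;                // one field changed

        byte[] delta = encode(old, now);
        System.out.println(delta.length + " bytes instead of " + now.length);
    }
}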
Learning Java, so be gentle please. Ideally I need to create an array of bytes that will point to a portion of a bigger array:
byte[] big = new byte[1000];
// C-style code starts
load(file,big);
byte[100] sub = big + 200;
// C-style code ends
I know this is not possible in Java, and two work-arounds come to mind:
Either copying a portion of big into sub by iterating through big.
Or writing my own class that takes a reference to big plus an offset and a size, and implements the "subarray" through accessor methods, using big as the actual underlying data structure.
The task I am trying to solve is to load a file into memory and then gain read-only access to the records stored within the file through a class. Speed is paramount, so ideally I'd like to avoid copying or accessor methods. And since I'm learning Java, I'd like to stick with it.
Any other alternatives I've got? Please do ask questions if I didn't explain the task well enough.
Creating an array as a "view" of another array is not possible in Java. But you could use java.nio.ByteBuffer, which is basically the class you suggest in work-around #2. For instance:
ByteBuffer subBuf = ByteBuffer.wrap(big, 200, 100).slice().asReadOnlyBuffer();
No copying is involved (some object creation, though). As a standard library class, I'd also assume that ByteBuffer is more likely to receive special treatment with respect to JIT optimizations by the JVM than a custom class would.
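A quick check of the slice's behaviour, if you want to convince yourself: index 0 of the view lines up with index 200 of the backing array, changes to the array are visible through the view, and writes through the view are rejected.

import java.nio.ByteBuffer;

public class SliceDemo {
    public static void main(String[] args) {
        byte[] big = new byte[1000];
        big[200] = 42;

        ByteBuffer subBuf = ByteBuffer.wrap(big, 200, 100).slice().asReadOnlyBuffer();

        System.out.println(subBuf.get(0));      // 42 - same storage as big[200]
        System.out.println(subBuf.capacity());  // 100

        big[201] = 7;                           // changes are visible through the view
        System.out.println(subBuf.get(1));      // 7

        try {
            subBuf.put(0, (byte) 1);            // read-only: throws
        } catch (java.nio.ReadOnlyBufferException e) {
            System.out.println("read-only, as requested");
        }
    }
}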
If you want to read a file fast and with low-level access, check out the java.nio stuff. Here's an example from the Java Almanac.
You can use a mapped byte buffer to navigate within the file content.
Take a look at the source for java.lang.String (it will be in src.zip or src.jar). You will see that it has an array of chars together with an offset and a count. So, yes, the solution is to use a class for this.
Here is a link to the source online.
The variables of interest are:
value
offset
count
substring is probably a good method to look at as a starting point.
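If you do want to roll work-around #2 yourself, a sketch mirroring that value/offset/count pattern might look like this (the ByteSlice name is made up):

public class ByteSlice {
    private final byte[] value;  // the shared backing array
    private final int offset;    // where this slice starts
    private final int count;     // how many bytes it covers

    public ByteSlice(byte[] value, int offset, int count) {
        if (offset < 0 || count < 0 || offset + count > value.length) {
            throw new IndexOutOfBoundsException();
        }
        this.value = value;
        this.offset = offset;
        this.count = count;
    }

    public byte get(int index) {
        if (index < 0 || index >= count) {
            throw new IndexOutOfBoundsException();
        }
        return value[offset + index];
    }

    public int length() {
        return count;
    }

    // The analogue of String.substring: a narrower view, still no copying.
    public ByteSlice slice(int from, int len) {
        return new ByteSlice(value, offset + from, len);
    }
}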
If you want to read directly from the file, make use of the java.nio.channels.FileChannel class, specifically the map() method: that will let you use memory-mapped I/O, which will be very fast and use less memory than copying into arrays.
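A minimal sketch of that approach; the records.dat file name and the fixed 16-byte record size are assumptions for illustration.

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedRead {
    static final int RECORD_SIZE = 16;  // assumed fixed-size records

    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("records.dat", "r");
             FileChannel channel = raf.getChannel()) {

            // The OS pages the file in on demand; nothing is copied up front.
            MappedByteBuffer map = channel.map(
                    FileChannel.MapMode.READ_ONLY, 0, channel.size());

            // Random access to record i without reading the whole file.
            long records = channel.size() / RECORD_SIZE;
            long sum = 0;
            for (long i = 0; i < records; i++) {
                sum += map.getInt((int) (i * RECORD_SIZE));  // first field of record i
            }
            System.out.println("sum of first fields: " + sum);
        }
    }
}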