Java: reading serialized objects from a file into an ArrayList

I'm attempting to read a file that contains serialized objects of type Contact into an ArrayList called contactsCollection. The issue I'm having is that the Contact objects never get added to the ArrayList.
try
{
    ObjectInputStream in = new ObjectInputStream(new FileInputStream("contactList.dat"));
    Contact temp;
    while (in.available() != 0)
    {
        temp = (Contact) in.readObject();
        contactsCollection.add(temp);
    }
    in.close();
}
catch (IOException | ClassNotFoundException e)
{
    e.printStackTrace();
}

This is a known behaviour of ObjectInputStream.available: it always returns 0; see http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4954570. Instead, you can read objects from the file until EOFException is thrown, then catch it and break.
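A minimal sketch of that approach, assuming the same contactList.dat file and Contact type as in the question:
List<Contact> contactsCollection = new ArrayList<>();
try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("contactList.dat"))) {
    while (true) {
        try {
            contactsCollection.add((Contact) in.readObject());
        } catch (EOFException eof) {
            break; // end of stream: no more objects to read
        }
    }
} catch (IOException | ClassNotFoundException e) {
    e.printStackTrace();
}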

Actually, your entire approach is wrong: you should serialize the List, not each object.
All List implementations are Serializable. Just create the list, add your objects and serialize the list; the objects in it will be serialized too (if they implement Serializable, which obviously yours do).
Then, to deserialize, simply read in the object and voilà: you have a list with all your objects already added.
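A rough sketch of that approach (assuming Contact implements Serializable and reusing the file name from the question):
// Writing: serialize the whole list in one call
List<Contact> contacts = new ArrayList<>();
// ... add Contact objects ...
try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("contactList.dat"))) {
    out.writeObject(contacts);
}

// Reading: one readObject call returns the fully populated list
try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("contactList.dat"))) {
    // unchecked cast: the stream is assumed to contain the list written above
    List<Contact> loaded = (List<Contact>) in.readObject();
}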

ArrayLists are Serializable provided their contents are. If the code that stores the Contacts to the stream has them in an ArrayList, just read the list in all at once.
If not, you probably want to have the code storing the Contacts store the length first:
try (FileInputStream fis = new FileInputStream("contactList.dat");
     ObjectInputStream in = new ObjectInputStream(fis)) {
    int size = in.readInt();
    for (int i = 0; i < size; ++i) {
        contacts.add((Contact) in.readObject());
    }
} catch (IOException | ClassNotFoundException e) {
    // Handle exception
}
Mixing available and readObject is unwise: available would tell you how many bytes can be read without the stream blocking, except that Evegniy's comment applies, and in any case those bytes may not represent a complete object.
If you can't get the code writing to the stream to put the size in first, you'll simply have to loop through and depend on the fact that an EOFException is an IOException.

Related

How to read DataInputStream until the end without needing to catch an EOFException?

Suppose we have some binary data byte[] data that only contains Integers. If I wanted to read this data utilizing a DataInputStream, the only approach I can come up with is the following:
DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
try {
    while (true) {
        int i = in.readInt();
    }
} catch (EOFException e) {
    // we're done!
} catch (IOException e) {
    throw new RuntimeException(e);
}
What bugs me about this is that reaching the end of the stream is expected, and it would be exceptional only if no exception were thrown, which IMO defeats the purpose of exceptions in the first place.
When using Java NIO's IntBuffer, there's no such problem.
IntBuffer in = ByteBuffer.wrap(data).asIntBuffer();
while (in.hasRemaining()) {
    int i = in.get();
}
Coming from C# and being in the process of learning Java, I refuse to believe that this is the intended way of doing this.
Moreover, I just came across Java NIO, which seems to be "quite new". Using IntBuffer here instead would just be my way of putting off the question. Regardless, I want to know how this is properly done in Java.
You can't. readInt() can return any integer value, so an out-of-band mechanism is required to signal the end of the stream; hence the exception. That's how the API was designed, and there is nothing you can do about it.
Since you are coming from .NET, Java's DataInputStream is roughly equivalent to BinaryReader of .NET.
Just like its .NET equivalent, the DataInputStream class and its main interface, DataInput, have no provision for determining whether a primitive of any given type is available for retrieval at the current position of the stream.
You can gain valuable insight into how the designers of the API expect you to use it by looking at the designers' own usage of the API.
For example, look at the ObjectInputStream.java source, which is used for object deserialization. The code that reads arrays of various types calls the type-specific readXYZ methods of DataInput in a loop. In order to figure out where the primitives end, the code retrieves the number of items (line 1642):
private Object readArray(boolean unshared) throws IOException {
    if (bin.readByte() != TC_ARRAY) {
        throw new InternalError();
    }
    ObjectStreamClass desc = readClassDesc(false);
    int len = bin.readInt();
    ...
    if (ccl == Integer.TYPE) {
        bin.readInts((int[]) array, 0, len);
        ...
    }
    ...
}
Above, bin is a BlockDataInputStream, which is another implementation of DataInput interface. Note how len, the number of items in the array stored by array serialization counterpart, is passed to readInts, which calls readInt in a loop len times (line 2918).
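If you control the writing side, you can follow the same convention in your own code. A small sketch (illustrative names: values is assumed to be an int[] and ints.dat a scratch file): write the element count first, then read back exactly that many values, so no EOFException handling is needed:
// Writing: store the count, then the values themselves
try (DataOutputStream out = new DataOutputStream(new FileOutputStream("ints.dat"))) {
    out.writeInt(values.length);
    for (int v : values) {
        out.writeInt(v);
    }
}

// Reading: the leading count says exactly how many ints to expect
try (DataInputStream in = new DataInputStream(new FileInputStream("ints.dat"))) {
    int count = in.readInt();
    int[] read = new int[count];
    for (int i = 0; i < count; i++) {
        read[i] = in.readInt();
    }
}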

How to refer to part of an array?

Given a byte[] object, when we want to operate on such an object we often need pieces of it. In my particular example I get a byte[] from the wire where the first 4 bytes describe the length of the message, then another 4 bytes give the type of the message (an integer that maps to a concrete protobuf class), and the remaining bytes are the actual content of the message... like this
length|type|content
In order to parse this message I have to pass the content part to a specific class which knows how to parse an instance from it... the problem is that often no methods are provided that let you specify from where to where the parser shall read the array...
So what we end up doing is copying the remaining chunks of that array, which is not efficient...
As far as I know, in Java it is not possible to create another byte[] reference that actually refers to part of some original, bigger byte[] array with just two indexes (this was the approach with String that led to memory leaks)...
I wonder how we solve situations like this? I suppose giving up on protobuf just because it does not provide some parseFrom(byte[], int, int) does not make sense... protobuf is just an example; anything could lack that API...
So does this force us to write inefficient code, or is there something that can be done (apart from adding that method)?
Normally you would tackle this kind of thing with streams.
A stream is an abstraction for reading just what you need to process the current block of data. So you can read the correct number of bytes into a byte array and pass it to your parse function.
You ask, 'So does this force us to write inefficient code, or is there something that can be done?'
Usually you get your data in the form of a stream, and then using the technique demonstrated below will be more performant because you skip making one copy. (Two copies instead of three; once by the OS and once by you. You skip making a copy of the total byte array before you start parsing.) If you actually start out with a byte[] but it is constructed by yourself, then you may want to change to constructing an object such as { int length, int type, byte[] contentBytes } instead and pass contentBytes to your parse function.
If you really, really have to start out with a byte[], then the technique below is just a more convenient way to parse it; it would not be more performant.
So suppose you got a buffer of bytes from somewhere and you want to read the contents of that buffer. First you convert it to a stream:
private static List<Content> read(byte[] buffer) {
    try {
        ByteArrayInputStream bytesStream = new ByteArrayInputStream(buffer);
        return read(bytesStream);
    } catch (IOException e) {
        e.printStackTrace();
        return Collections.emptyList(); // nothing parsed, but the method must still return a value
    }
}
The above function wraps the byte array with a stream and passes it to the function that does the actual reading.
If you can start out from a stream then obviously you can skip the above step and just pass that stream into the below function directly:
private static List<Content> read(InputStream bytesStream) throws IOException {
    List<Content> results = new ArrayList<Content>();
    try {
        // read the content...
        Content content1 = readContent(bytesStream);
        results.add(content1);
        // I don't know if there's more than one content block but assuming
        // that there is, you can just continue reading the stream...
        //
        // If it's a fixed number of content blocks then just read them one
        // after the other... Otherwise make this a loop
        Content content2 = readContent(bytesStream);
        results.add(content2);
    } finally {
        bytesStream.close();
    }
    return results;
}
Since your byte-array contains content you will want to read Content blocks from the stream. Since you have a length and a type field, I am assuming that you have different kinds of content blocks. The next function reads the length and type and passes the processing of the content bytes on to the proper class depending on the read type:
private static Content readContent(InputStream stream) throws IOException {
    final int CONTENT_TYPE_A = 10;
    final int CONTENT_TYPE_B = 11;
    // Wrap the InputStream in a DataInputStream because the latter has
    // convenience functions to convert bytes to integers, etc.
    // Note that DataInputStream handles the stream in a big-endian way,
    // so check that your bytes are in the same byte order. If not, you'll
    // have to find another stream reader that can convert to ints from
    // little-endian byte order.
    DataInputStream data = new DataInputStream(stream);
    int length = data.readInt();
    int type = data.readInt();
    // I'm assuming that the above length field was the number of bytes for the
    // content. So, read length number of bytes into a buffer and pass that
    // to your `parseFrom(byte[])` function.
    byte[] contentBytes = new byte[length];
    int readCount = data.read(contentBytes, 0, contentBytes.length);
    if (readCount < contentBytes.length)
        throw new IOException("Unexpected end of stream");
    switch (type) {
        case CONTENT_TYPE_A:
            return ContentTypeA.parseFrom(contentBytes);
        case CONTENT_TYPE_B:
            return ContentTypeB.parseFrom(contentBytes);
        default:
            throw new UnsupportedOperationException();
    }
}
I have made up the below Content classes. I don't know what protobuf is but it can apparently convert from a byte array to an actual object with its parseFrom(byte[]) function, so take this as pseudocode:
class Content {
    // common functionality
}

class ContentTypeA extends Content {
    public static ContentTypeA parseFrom(byte[] contentBytes) {
        return null; // do the actual parsing of a type A content
    }
}

class ContentTypeB extends Content {
    public static ContentTypeB parseFrom(byte[] contentBytes) {
        return null; // do the actual parsing of a type B content
    }
}
In Java, an array is not just a section of memory; it is an object that has some additional fields (at least length). So you cannot create a reference to part of an array; you should either:
Use array-copy functions, or
Implement and use an algorithm that operates on only part of the byte array.
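As one way to follow the second option without copying, here is a minimal sketch using the standard ByteArrayInputStream(byte[], int, int) constructor, which reads from a slice of the existing array; the 8-byte length|type header layout is assumed from the question:
// Stream over just the content slice of a length|type|content message,
// without copying the underlying array (header: two 4-byte big-endian ints).
static InputStream contentOf(byte[] message) throws IOException {
    DataInputStream header = new DataInputStream(new ByteArrayInputStream(message, 0, 8));
    int length = header.readInt();
    int type = header.readInt(); // not used here, but this is where it lives
    // ByteArrayInputStream keeps a reference to 'message'; nothing is copied.
    return new ByteArrayInputStream(message, 8, length);
}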
The concern seems that there is no way to create a view over an array (e.g., an array equivalent of List#subList()). A workaround might be making your parsing methods take in the reference to the entire array and two indices (or an index and a length) to specify the sub-array the method should work on.
This would not prevent the methods from reading or modifying sections of the array they should not touch. Perhaps a ByteArrayView class could be made to add a little bit of safety if this is a concern:
public class ByteArrayView {
    private final byte[] array;
    private final int start;
    private final int length;

    public ByteArrayView(byte[] array, int start, int length) {
        this.array = array;
        this.start = start;
        this.length = length;
    }

    public byte get(int index) {
        if (index < 0 || index >= length) {
            throw new ArrayOutOfBoundsExceptionOrSomeOtherRelevantException();
        }
        return array[start + index];
    }
}
But if, on the other hand, performance is a concern, then a method call to get() for fetching each byte is probably undesirable.
The code is for illustration; it's not tested or anything.
EDIT
On a second reading of my own answer, I realized that I should point this out: having a ByteArrayView will copy each byte you read from the original array -- just byte by byte rather than as a chunk. It would be inadequate for the OP's concerns.

How to work around readObject, similar to readLine

I am reading a file which contains many serialized objects.
I want to deserialize them back, and realized that we cannot use readObject like readLine, i.e.
while (ois.readObject() != null) {
}
would throw an exception. We also don't have a hasNext and next sort of mechanism in place, to my knowledge.
How is this problem of reading objects solved in the real world?
Catch EOFException, and close and break when you get it.
readObject() only returns null if you wrote a null, and that doesn't have to imply the end of the stream.
Assuming you're trying to load Person objects, you could try something like:
ArrayList<Person> persons = new ArrayList<>();
while (true) {
    try {
        persons.add((Person) ois.readObject());
    } catch (EOFException e) {
        break;
    } catch (ClassNotFoundException | IOException e) {
        throw new RuntimeException(e);
    }
}
Or, instead of serializing the individual objects, you could add your objects to an array or ArrayList and serialize the list object. Then you can easily deserialize the list object and you won't have to deal with EOFException. See the example in John Purcell's serialization tutorial.

Serializable Errors with Java Object

Edit: here's how I solved it using the comments
So after trying different ways of serializing and looking through my code, I finally found out that each object drawn in the renderer contains FloatBuffers. I created a capsule class thanks to Ted Hopp. Then I tried returning the float representation of the FloatBuffers using .array(), which you can't do; my guess is because these are running on threads. So, using a suggestion from Learn OpenGL ES to use get, I instead did
public float[] getVertexBuffer()
{
    float[] local = new float[vertexBuffer.capacity()];
    vertexBuffer.get(local);
    return local;
}
Which does work and returns the float[].
Then I store them all in a capsule object for each mGrid object I created:
Encapsulate capsule = new Encapsulate(values);
for (int i = 0; i < values[0]; i++)
{
    for (int j = 0; j < values[1]; j++)
    {
        capsule.storeVertex(i, j, mRenderer.mGrid[i * values[1] + j].getVertexBuffer());
        capsule.storeColors(i, j, mRenderer.mGrid[i * values[1] + j].getmColors());
        capsule.storePillar(i, j, mRenderer.mGrid[i * values[1] + j].getPillarPositions());
    }
}
Which I can then ultimately save because it's serializable. Thank you all
PROBLEM DESCRIPTION
So I'm trying to save a GLSurfaceView object, whose class is declared as
class GLWorld extends GLSurfaceView implements Serializable
Now, I'm fairly sure I'm doing the saving correctly.
public void saveSimulation()
{
    String fileName = "Test Save";
    try {
        FileOutputStream fos = openFileOutput(fileName, Context.MODE_PRIVATE);
        ObjectOutputStream oos = new ObjectOutputStream(fos);
        oos.writeObject(mGLView);
        Log.d("Save", "Successfully Written");
        oos.close();
        fos.close();
    } catch (FileNotFoundException e) {
        Log.d("Save", "File not found exception");
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (IOException e) {
        Log.d("Save", "IO exception");
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    finish();
}
But I'm getting an error I have no clue how to fix. I've spent hours looking around but haven't found anything like it.
09-16 17:36:50.639: W/System.err(2996): java.io.NotSerializableException: java.nio.FloatToByteBufferAdapter
Along with many more System.err lines below that, which I believe stem from this one error.
My GLWorld creates a renderer object which has different objects containing FloatBuffers that store vertex and color data. I can't figure out what to do to get past this error, or why those FloatBuffers are throwing an error. Everything runs smoothly except actually trying to save this GLWorld object, and it's driving me insane.
Just declaring that a class implements Serializable is not enough to successfully serialize objects of that class. The default implementation requires that every field of the class be serializable. In your case, there's a field of type FloatToByteBufferAdapter that isn't serializable (there may be more).
You can define your own serialization mechanism to serialize only what you need. The details can be found in the Serializable docs. Be aware that by subclassing GLSurfaceView, it is unlikely you will be able to successfully deserialize this class, even if you write the correct support methods. For one thing, GLSurfaceView does not have a default (no-arg) constructor, which is a requirement of Java's serialization mechanism. Also, many objects simply cannot be serialized (e.g., streams).
I suggest that you encapsulate the data you want to serialize in a helper class and limit the serialization/deserialization to those data.
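For example, a bare-bones sketch of such a helper class (the class and field names here are illustrative, not taken from the question's code):
// Plain-data holder: every field is serializable, unlike the view/renderer objects.
class WorldData implements Serializable {
    private static final long serialVersionUID = 1L;

    float[] vertices; // pulled out of the FloatBuffer with get(float[]), as in the edit above
    float[] colors;

    WorldData(float[] vertices, float[] colors) {
        this.vertices = vertices;
        this.colors = colors;
    }
}
On load, you would deserialize this holder and rebuild the GLSurfaceView and its FloatBuffers from the arrays.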
Gotta assume that something within the mGLView inheritance contains a FloatToByteBufferAdapter, which isn't serializable.

Cannot add to custom built linked list that has been deserialised

I have made a singly linked list with node objects which contain a data variable and a next-node variable. A Customer object is stored in the data variable; the Customer class has a surname variable and a forename variable, as follows:
public class Customer
{
    String surname;
    String forename;
    // get and set methods for each
}
The Linked list and node class will sort in ascending order by surname (A-Z) so that George Clooney precedes Jay Gatsby. The methods used to insert an item in the linked list recursively are fine and I have run numerous tests before even trying to save the file using ObjectOutputStream and FileOutputStream or load the file using ObjectInputStream and FileInputStream.
As a result of all the research I have done, I have concluded that it is better to save the whole linked list as an object using the serializable interface on my custom linked list (not a standard Java list) to serialize it and save it to the file.
Now, here's my problem: I add 1 Node object with a random surname and it works fine: each time the item is added the list is deserialized from the file and loaded into a new SinglyLinkedList object. The new node is added and placed at the right position in the list and then this new list is passed back to the file with ObjectOutputStream and FileOutputStream.
I add one item and it is fine; however, if I add another, it says that it is fine (no errors - I have all the appropriate try and catch statements), but it doesn't actually update the list. The print method reinforces this, as it also accesses the list in the same fashion and only prints the first item (even if more are added). When I ask for the length of the list, it simply gives me the first item (e.g. returns 1).
If I were to close it and re-run it, I would once again receive this same 1 Node and nothing more. Then, if I were to add another, it would not work once again.
Here are some useful bits of info:
The customer, node and singly linked list classes have the necessary Serializable imports and implements Serializable declarations
There are 4 separate classes: the customer class, which contains the data; the Node, which contains the customer object and the next node; the SinglyLinkedList, which contains all the methods for editing a list; and the Main class, which contains all the Scanner and serialization stream objects (e.g. ObjectOutputStream) and takes commands and data from the keyboard (e.g. "ADD", "PRINT")
There are separate methods for saving to and loading from the file, used to keep less code on screen at once; this may be the problem. I am a novice with this, but here is the code.
Code for adding a node in the main method (not the recursive routine):
CustomerFile custDat = new CustomerFile(fName, sName);
Node custNode = new Node(custDat, null);
SinglyLinkedList a = loadListFromFile();
if (a == null)
{
    System.out.println("Creating new list");
    SinglyLinkedList newList = new SinglyLinkedList();
    newList.addRecord(custNode, null);
    a = newList;
}
else
{
    a.addRecord(a.getHead(), custNode);
}
saveListToFile(a);
System.out.println("File added successfully");
and the loadListFromFile method (the save list method is similar, but with output instead) has a set fileName:
private static SinglyLinkedList loadListFromFile()
{
    SinglyLinkedList lst = null;
    try
    {
        ObjectInputStream is = new ObjectInputStream(new FileInputStream(fileName));
        lst = (SinglyLinkedList) is.readObject();
        is.close();
    }
    catch (FileNotFoundException e)
    {
        e.printStackTrace();
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    catch (ClassNotFoundException e)
    {
        e.printStackTrace();
    }
    return lst;
}
Fortunately, I have solved the problem! Thank you for your help, but the problem was down to a failure in my logic. I was running a loop that took values and added them to nodes; those nodes were being added to a list which was being declared new inside the loop, i.e. the list could not reach the saveListToFile method and hence the nodes would not be added to the list being saved in the file.
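In other words, the fix amounts to keeping one list outside the loop, roughly like this (a sketch only; moreInput and readNode() are stand-ins for the actual keyboard-handling code):
// Broken shape: a fresh list is created on every pass, so nothing accumulates
// and saveListToFile never sees the added nodes.
while (moreInput) {
    SinglyLinkedList list = new SinglyLinkedList();
    list.addRecord(list.getHead(), readNode());
}

// Fixed shape: one list, loaded once, updated in the loop, saved afterwards.
SinglyLinkedList list = loadListFromFile();
while (moreInput) {
    list.addRecord(list.getHead(), readNode());
}
saveListToFile(list);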
