Java object persistence question

I have an object class called BOM.
It has a method, getChildItem() which returns an Item object.
Let's say I do this:
BOM model = new BOM();
Item child = model.getChildItem();
ArrayList a = new ArrayList();
a.add(child);
model.close();
What happens? Does it:
1. Not actually close the model, because the child is held in an ArrayList?
2. Still close the model, as the child object, once created, is independent of the model object?
3. Close the model and set the child object to null? (I'm pretty sure this doesn't happen; it would wreak havoc and seems counterintuitive to the Java garbage collection methodology.)

It's impossible to say what your close() method does. Possibly you're thinking of something like a database ResultSet or an InputStream, where the values are unavailable after you've closed it? That wouldn't be the case unless you've explicitly built your objects that way; such behaviour is not part of the core language.
From the context I think you mean "what happens when the parent object goes out of scope?" (i.e. becomes eligible for garbage collection)
What happens is this:
BOM model = new BOM();
Item child = model.getChildItem();
// you now have a handle to the child object. Presumably, so does model, but we don't care about that.
ArrayList a = new ArrayList();
a.add(child);
//a now has a handle to child.
model.close();
// child is not eligible for garbage collection because a still has a handle to it.

The child you hold and the child inside model are the same underlying object. close() will be called on model, so any access through child that relies on the model being open will now fail.
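To make that concrete, here is a hedged sketch; the Item methods below are made up for illustration, and whether they fail depends entirely on how BOM and Item are written:
BOM model = new BOM();
Item child = model.getChildItem();
model.close();
String name = child.getName();  // fine if Item just holds its own plain fields
child.reloadFromParent();       // may fail if Item lazily reads through the now-closed BOM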
Hope that helps.

Related

Should a repository always return the same reference in memory when querying for the same ID?

In many blogs or articles one reads the following statement about the repository
You should think of a repository as a collection of domain objects in memory
Now I am asking myself what should happen when I query the repository for the same id twice.
Entity a = theRepo.GetById(1);
Entity b = theRepo.GetById(1);
assertTrue( a == b ); // Do they share the same reference ?
assertTrue( a.equals( b ) ); // This should always be true
Should the repository always return the same reference in memory ?
Should the repository return a new instance of the entity but with equal state?
I don't think you can assume that a == b.
Consider the situation where you got instance a and started to modify it, not yet saving it back to your database. If another thread requests the same entity and puts it in variable b, it should get a new one reflecting the data in the database, not a dirty one that another thread is modifying and hasn't yet saved (and possibly never will).
On the other hand, assuming that a or b has not been subsequently modified after it has been retrieved from the same repository, it should be safe to assume that a.equals(b), also assuming that the equals() method has been implemented correctly for the entity.
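For reference, a minimal sketch of an id-based equals()/hashCode() for such an entity; the Entity class is taken from the question, but the field and accessor are illustrative:

public class Entity {
    private final long id;

    public Entity(long id) { this.id = id; }

    public long getId() { return id; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Entity)) return false;
        return this.id == ((Entity) o).id;  // identity is defined by the id alone
    }

    @Override
    public int hashCode() {
        return Long.hashCode(id);           // must stay consistent with equals()
    }
}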
In my opinion, your problem boils down to the lifespan of the repository. Repositories are transient (ideally) and also, sometimes, they live inside another class called "Unit of Work" which is transient as well.
I don't think this is a DDD issue, but more of an Infrastructure issue.
Given an entity type, a repository is a collection of instances of the given entity type.
A repository doesn't create instances of the entity. It is just a collection (in the sense of a "set") of instances that you created before. Then you add those instances to the repository (set of instances). And you can retrieve those instances.
A set doesn't duplicate elements. Given an id (e.g. id=1), the set will have just one instance with id=1, and that instance is the one you retrieve when you call "theRepo.GetById(1)".
So:
Entity a = theRepo.GetById(1);
Entity b = theRepo.GetById(1);
Should the repository always return the same reference in memory ?
See UPDATE 3.
Should the repository return a new instance of the entity but with equal state?
No. The repository should return the instance that you added before. The repository shouldn't create new instances.
Anyway, in order to check whether two instances are the same, you shouldn't compare the references; you should compare their ids.
You are mixing concepts. A repository is just a collection (set) of instances. Instances are created by factories (or by constructor methods of the entity).
See IDDD book by Vaughn Vernon ("Collection-Oriented Repositories" section in Chapter 12).
Hope it helps.
UPDATE:
When I say "...Repository is a set of instances..." I mean it mimics a set. My fault for not expressing it well. Regarding updating an instance in the repository: such an operation doesn't exist, since when you retrieve an instance and modify it, the changes are made on the instance in the repository; you don't have to re-save the instance. The persistence mechanism implementing the repository must have the capabilities to ensure this behaviour. See Chapter 12 of the Implementing DDD book by Vaughn Vernon.
UPDATE 2:
I want to clarify that what I say here is my understanding after reading Vaughn Vernon book IDDD, and also another book (Domain Driven Design in PHP, by Carlos Buenosvinos). I'm not trying to be misleading at all.
UPDATE 3:
I asked Vaughn Vernon the following question:
Regarding collection-oriented repository, I have a question:
If I do Foo aFoo=fooRepository.getById(1); Foo anotherFoo=fooRepository.getById(1);
then is it guaranteed that both references are the same (aFoo==anotherFoo)?
And he answered the following:
That depends on the backing persistence mechanism, such as Hibernate/JPA. It seems it should if you are using the same session in both reads and both reads have the same transactional scope, but check with your ORM.
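To illustrate the collection-style behaviour described above, here is a minimal in-memory sketch; it assumes Entity exposes a getId(), and it is only an illustration, since an ORM-backed repository behaves this way only within a single session/persistence context, as the quoted answer notes:

import java.util.HashMap;
import java.util.Map;

class InMemoryEntityRepository {
    private final Map<Long, Entity> entities = new HashMap<>();

    void add(Entity entity) {
        entities.put(entity.getId(), entity);
    }

    Entity getById(long id) {
        return entities.get(id);  // returns the very instance that was added, so a == b holds
    }
}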

HashMap declared transient doesn't work after object is deserialized

Is there anything wrong with declaring a collection transient? transient Map<String, Car> cars = new HashMap<>() is declared in the Owner instance that is serialized, but the Car class is not serializable.
When the program runs for the first time, the Owner instance creates a Car and inserts it into the cars collection. However, when running the program a second time, Owner is deserialized and it correctly creates a Car instance, but adding it with cars.put(key, object) causes a NullPointerException. Also, only when running after deserialization does cars.containsKey(regNumIn) throw an exception instead of returning true or false. It seems that on the second run, after Owner is recreated, no new HashMap is created.
Does it have anything to do with hashCode() and equals()? I haven't declared those, and if they are automatically generated by the NetBeans IDE, the program doesn't work at all.
Your problem has nothing at all to do with collections. transient tells Java that you do not want to store the field's value, so when you reload the stored object, transient fields are set to null (or 0, false, or whatever the respective default value is). Therefore, in your example code of cars.put(key, object), you are essentially attempting to do null.put(key, object).
containsKey of course fails for the same reason - you are attempting to call it on something that is null.
If you don't want to serialize your collection, you will have to do something like cars = new HashMap<>() after deserializing it.
That means the problem is also unrelated to equals and hashCode; however, the fact that your program 'breaks' when you have NetBeans generate them suggests that you may have other issues. Good information about equals and hashCode can be found in this related SO question:
What issues should be considered when overriding equals and hashCode in Java?
Java does not call the default constructor when deserializing the object. Therefore, your code
transient Map<String, Car> cars = new HashMap<>();
will not be executed.
To accomplish this, you can override the readObject method of your class:
public class ... implements Serializable {
    ...
    private transient Map<String, Car> cars = new HashMap<>();
    ...
    private void readObject(ObjectInputStream stream)
            throws IOException, ClassNotFoundException {
        stream.defaultReadObject();
        // Important! Recreate the transient field cars as an empty HashMap
        this.cars = new HashMap<>();
    }
    ...
}
I removed the transient keyword and implemented the Serializable interface in the Car class, and it works. I think the problem was that the collection, being transient, was not saved after the first run, and on the second run, when the Owner object was deserialized, the no-argument constructor was not called, so the new cars collection in that constructor was never created. So on the second run the program attempted to add a Car object to a nonexistent collection.

Is it possible to find the "referring/parent object(s)" when serializing?

We are using an OODBMS, which allows both Java "entities" and serialized objects too. The DB supports true graphs (no "tree" restriction) and serialized objects can safely reference entities as well. The DB works (almost) transparently, and we can do whatever we want, and it-just-works.
Now, I've discovered that objects that had been marked as "logically deleted" (using a simple boolean flag, rather than built-in DB functionality, since the DB doesn't have such a concept) are loaded/saved within a particular object graph.
I want to know which object(s) references those "zombie" objects. Trying to use reflection to iterate over the graph has not worked so far. Instead of the DB, I can simply use Java serialization to export the object graph, and this also causes the "zombie" objects to be serialized.
My question is: can I somehow extract information about the object(s) holding a reference to a "zombie" object during the serialization process (the "parent" objects)? There can be more than one, but as long as I have one, I can work iteratively until I've killed off all those invalid references.
Most OODBMSs allow you to run queries which return object references that satisfy certain constraints. So you could write something like this:
return all objects
where deleted == true
and Foo.bar == this
where Foo is the type of the object which references the deleted objects and bar is the field/property that contains the reference.
The exact syntax depends on your OODBMS.
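If the query route isn't available, a manual scan of the in-memory graph can yield the same information. Below is a rough sketch that records, for every reachable object, the objects referencing it; the class and method names are illustrative, filtering the result for your "zombie" flag is left to you, and setAccessible may be refused for JDK-internal classes on newer Java versions:

import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.*;

public class ReferrerScan {
    private final Map<Object, List<Object>> referrers = new IdentityHashMap<>();
    private final Set<Object> visited = Collections.newSetFromMap(new IdentityHashMap<>());

    public Map<Object, List<Object>> scan(Object root) throws IllegalAccessException {
        walk(null, root);
        return referrers;  // afterwards, look up each zombie to see who points at it
    }

    private void walk(Object parent, Object node) throws IllegalAccessException {
        if (node == null) return;
        if (parent != null) {
            referrers.computeIfAbsent(node, k -> new ArrayList<>()).add(parent);
        }
        if (!visited.add(node)) return;  // this node's fields were already walked
        if (node instanceof String || node instanceof Number
                || node instanceof Boolean || node instanceof Character
                || node instanceof Enum) return;  // leaf values: nothing to follow
        Class<?> type = node.getClass();
        if (type.isArray()) {
            if (type.getComponentType().isPrimitive()) return;
            for (int i = 0; i < Array.getLength(node); i++) {
                walk(node, Array.get(node, i));
            }
            return;
        }
        for (Class<?> c = type; c != null; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                if (f.getType().isPrimitive() || Modifier.isStatic(f.getModifiers())) continue;
                f.setAccessible(true);
                walk(node, f.get(node));
            }
        }
    }
}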

Tricky object serialization

What is the safest way to serialize any kind of Java Object such that when:
DBObject obj = getFromDB;
Object id = obj.get(ID_KEY);
String s1 = safeSerialize(id);
The obj.get(ID_KEY) method returns an object that serves as an "id"; it could be an ObjectId, String, Long, Integer, or any other kind of Object.
Then do the same thing:
DBObject obj = getFromDB;
Object id = obj.get(ID_KEY);
String s2 = safeSerialize(id);
I need to make sure that s1 is still equal to s2. I mean, for example, the obj.get() method might return a new instance of, say, new Integer(100) for a given ID_KEY, and the "serialized" version should still be the same.
You can't do this because java.lang.Object is not serializable.
Classes are marked with java.io.Serializable to indicate that the programmer has allowed for their binary representations to be reanimated.
Even if you were to require that the object that you were storing was limited to a given set of objects which were serializable, you'd be subject to the usual fragility of serialization.
If you limited the set of objects to a set for which you then provided custom serialization (as opposed to using the default serialization), you could then make it work.
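To make the "limited set plus custom serialization" idea concrete, here is an illustrative sketch; safeSerialize() and the type prefixes are made up for this example, not part of any API:

static String safeSerialize(Object id) {
    if (id == null)            return "null:";
    if (id instanceof Integer) return "int:" + id;
    if (id instanceof Long)    return "long:" + id;
    if (id instanceof String)  return "str:" + id;
    // a MongoDB ObjectId, for instance, has a stable hex form via toHexString()
    throw new IllegalArgumentException("Unsupported id type: " + id.getClass());
}

Because the result depends only on the type and value, two equal ids (even distinct instances such as two new Integer(100) objects) produce the same string.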
I don't think Java serialization gives you these guarantees:
It certainly doesn't if any of the classes involved could change.
It certainly doesn't if you serialize / deserialize on different JVM version/release/vendor platforms.
It possibly doesn't for any class that has custom writeObject / readObject methods ... and that includes some of the basic types in java.util, etcetera.
JSON is as bad, if not worse. The order of the attributes of JSON objects is explicitly undefined, so you have no guarantees that the attributes will appear in the serialization in the same order each time.
Binding-based serializers/deserializers for POJOs as XML could work (if they don't use attributes), but you need to beware of how a binding handles the serialization of inherently unordered collections such as HashSets and HashMaps. The chances are that the order of the set/map members in the serialization won't be predictable.
My advice would be to think of another way to solve your actual problem ... whatever it is.

Revert to old object state in Java

I'm just learning Java, but I keep running into the same problem over and over again:
How do I revert to an old state of some object efficiently?
public class Example {
    MyObject myLargeObject;

    public void someMethod() {
        MyObject myLargeMyObjectRecovery = myLargeObject;
        /**
         * Update and change myLargeObject
         */
        if (someCondition) {
            // revert to previous state of myLargeObject
            myLargeObject = myLargeMyObjectRecovery;
        }
    }
}
The above is how I would like the code to work, but it obviously doesn't, since myLargeObject and myLargeMyObjectRecovery are references to the same object.
One solution is to create a copy constructor. This is fine for small objects but if I have a large object (in my project the object is a large 2D array meaning I would have to iterate over all of the entries), this way feels wrong.
This must be a very common problem in Java; how do others get around it?
Either deep copy, as you noted, or possibly serialization. You could store a serialized string, and then reconstruct the object from it later.
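A minimal sketch of such a serialization-based snapshot, assuming MyObject and everything it references implement java.io.Serializable (using a byte[] rather than a String, since binary serialization doesn't round-trip safely through String):

import java.io.*;

class Snapshots {
    static byte[] snapshot(Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);  // serializes the whole reachable graph
        }
        return bytes.toByteArray();
    }

    static Object restore(byte[] snapshot) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(snapshot))) {
            return in.readObject();  // rebuilds an independent deep copy; cast back to MyObject
        }
    }
}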
The best solution depends on whether you have external references to your MyObject instance or not.
If you use the myLargeObject from the Example class only, you can:
Serialize your object at the savepoint, and deserialize at the restore point (the serialized byte[] must be transient)
Create a new instance with a Copy Constructor (doing deep copying) at the savepoint, and replace the reference at the restore point.
If you have access to the MyObject instance from outside, then it becomes a bit more interesting: you must introduce synchronization.
All of your methods on MyObject must be synchronized (to avoid inconsistent read)
You should have a synchronized void saveState() method which saves your state (either by serialization, or by copy constructor) (the latter is better)
You should have a synchronized void restoreState(), where you internally restore your state (for copying fields you can use a common code fragment with the copy constructor)
In all cases it is recommended to close the transaction (a kind of commit()) at some point; when you get there, you can delete the saved state.
Also, it is very important that if you have an underlying data structure, you traverse the whole structure when copying it. Otherwise you may experience problems with shared object references.
Be careful with JPA Entities or any externally-managed objects, it is unlikely that any of these methods will work with them.
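As a rough illustration of the saveState()/restoreState() approach described above, here is a sketch using a copy constructor for the "large 2D array" case from the question; all names are illustrative:

public class MyObject {
    private double[][] grid;                  // the large 2D array worth snapshotting
    private transient MyObject savedState;    // the snapshot; never persisted

    public MyObject(int rows, int cols) {
        this.grid = new double[rows][cols];
    }

    public MyObject(MyObject other) {         // copy constructor: deep-copies the array
        this.grid = new double[other.grid.length][];
        for (int i = 0; i < other.grid.length; i++) {
            this.grid[i] = other.grid[i].clone();
        }
    }

    public synchronized void saveState() {
        savedState = new MyObject(this);
    }

    public synchronized void restoreState() {
        if (savedState != null) {
            this.grid = savedState.grid;      // swap back to the saved copy
            savedState = null;                // "commit": drop the savepoint
        }
    }
}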
If you assign one object reference to another, the JVM still refers to the same object in memory. Therefore, changes made through one reference will be visible through the other reference as well.
MyObject myLargeMyObjectRecovery = myLargeObject;
It is the same in your code: myLargeMyObjectRecovery and myLargeObject refer to the same object in memory.
If you want an exact copy of an object you can use the Object.clone() method. This method copies the object and returns a reference to a new object whose fields have the same values as the copied object.
Since the clone method is protected you cannot access it directly from outside the class. You can implement the Prototype pattern, depending on your requirements.
http://www.avajava.com/tutorials/lessons/prototype-pattern.html
public class MyObject implements Cloneable {
    // fields, getters, setters and other methods.

    public MyObject doClone() {
        try {
            // Note: this is a shallow copy; override clone() for deep copying, depending on your requirements.
            return (MyObject) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);  // cannot happen: we implement Cloneable
        }
    }
}
and call it in your code
MyObject myLargeMyObjectRecovery = myLargeObject.doClone();
The Memento Pattern addresses the design issue of the revert to previous state problem. This approach is useful when you need to capture one or multiple states of the object and be able to revert to them. Think of it as an N-step undo operation, just like in a text editor.
In the pattern you have a stateful object, the Originator, which is responsible for saving and restoring snapshots of its state. The state itself is saved in a wrapper class called Memento; mementos are stored in, and accessed via, the Caretaker.
In this approach the easiest idea is to deep-copy your objects. However, this may be inefficient in terms of performance and space, since you store whole objects and not just the change-sets.
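A minimal sketch of the pattern with an undo history; all names here are illustrative:

import java.util.ArrayDeque;
import java.util.Deque;

class Originator {
    private String state = "";

    void set(String newState)  { this.state = newState; }
    Memento save()             { return new Memento(state); }
    void restore(Memento m)    { this.state = m.state; }

    static final class Memento {               // opaque, immutable snapshot wrapper
        private final String state;
        private Memento(String state) { this.state = state; }
    }
}

class Caretaker {                              // keeps the history, e.g. for an N-step undo
    private final Deque<Originator.Memento> history = new ArrayDeque<>();

    void backup(Originator originator) { history.push(originator.save()); }
    void undo(Originator originator)   { if (!history.isEmpty()) originator.restore(history.pop()); }
}

For the large-2D-array case from the question, save() would deep-copy the array into the Memento instead of a String.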
Some object persistence libraries provide implementations of transactions and snapshots. Take a look at Prevayler, which is an object persistence library for Java and an implementation of the prevalent system pattern. The library captures the changes to your objects in the form of transactions and stores them in memory. If you need persistent storage of your POJOs, you can save snapshots of your objects to disk periodically and revert to them if needed.
You can find more on serializing POJOs in this SO question: Is there a object-change-tracking/versioning Java API out there?
