I'm just learning Java, but I keep running into the same problem over and over again.
How do I efficiently revert to an old state of some object?
public class Example {
    MyObject myLargeObject;

    public void someMethod() {
        MyObject myLargeObjectRecovery = myLargeObject;
        /*
         * Update and change myLargeObject
         */
        if (someCondition) {
            // revert to the previous state of myLargeObject
            myLargeObject = myLargeObjectRecovery;
        }
    }
}
The above is how I would like the code to work, but it obviously doesn't, since myLargeObject and myLargeObjectRecovery are references to the same object.
One solution is to create a copy constructor. This is fine for small objects, but if I have a large object (in my project it is a large 2D array, meaning I would have to iterate over all of the entries), this approach feels wrong.
This must be a very common problem in Java, how do others get around it?
Either deep copy, as you noted, or possibly serialization: you could store the serialized form and reconstruct the object from it later.
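For example, a minimal sketch of a serialization-based snapshot and restore, assuming MyObject implements Serializable (someCondition stands in for your real check, as in the question):

import java.io.*;

public class Example {
    MyObject myLargeObject;
    boolean someCondition;

    public void someMethod() throws IOException, ClassNotFoundException {
        // Snapshot: serialize the current state into a byte array.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(myLargeObject);
        }
        byte[] snapshot = bos.toByteArray();

        // ... update and change myLargeObject ...

        if (someCondition) {
            // Restore: deserialize the snapshot into a fresh, independent copy.
            try (ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(snapshot))) {
                myLargeObject = (MyObject) ois.readObject();
            }
        }
    }
}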
The best solution depends on whether you have external references to your MyObject instance or not.
If you use the myLargeObject from the Example class only, you can either:
Serialize your object at the save point and deserialize it at the restore point (if MyObject itself gets serialized, the field holding the serialized byte[] must be transient), or
Create a new instance with a copy constructor (doing a deep copy) at the save point, and replace the reference at the restore point (sketched below).
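A minimal sketch of the copy-constructor variant, assuming the large state is a 2D int array as in the question (field names are illustrative):

public class MyObject {
    private final int[][] grid;

    public MyObject(int rows, int cols) {
        this.grid = new int[rows][cols];
    }

    // Copy constructor: deep-copies the 2D array, so the new instance
    // is fully independent of the original.
    public MyObject(MyObject other) {
        this.grid = new int[other.grid.length][];
        for (int i = 0; i < other.grid.length; i++) {
            this.grid[i] = other.grid[i].clone(); // clone() copies each row
        }
    }
}

At the save point you would write MyObject recovery = new MyObject(myLargeObject); and at the restore point myLargeObject = recovery;.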
If you have access to the MyObject instance from outside, then it becomes a bit more interesting: you must introduce synchronization.
All of your methods on MyObject must be synchronized (to avoid inconsistent reads).
You should have a synchronized void saveState() method which saves your state (either by serialization or by a copy constructor; the latter is better).
You should have a synchronized void restoreState() method where you internally restore your state (for copying fields you can share a common code fragment with the copy constructor); see the sketch below.
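A minimal sketch of the save/restore/commit methods, again assuming the state is a single 2D array (the field names are illustrative):

public class MyObject {
    private int[][] grid = new int[100][100];
    private int[][] savedGrid; // snapshot; null when no transaction is open

    private static int[][] deepCopy(int[][] src) {
        int[][] copy = new int[src.length][];
        for (int i = 0; i < src.length; i++) {
            copy[i] = src[i].clone();
        }
        return copy;
    }

    public synchronized void saveState() {
        savedGrid = deepCopy(grid);
    }

    public synchronized void restoreState() {
        if (savedGrid != null) {
            grid = deepCopy(savedGrid);
        }
    }

    // Closing the transaction (the commit() mentioned below):
    // drop the snapshot so it can be garbage collected.
    public synchronized void commit() {
        savedGrid = null;
    }
}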
In all cases it is recommended to close the transaction (a kind of commit()) at some point; once you get there, you can delete the saved state.
Also, it is very important that if you have an underlying data structure, you traverse and copy the whole structure; otherwise you may experience problems with shared object references.
Be careful with JPA entities or any other externally-managed objects; it is unlikely that any of these methods will work with them.
If you assign an object to another variable, the JVM makes both references point to the same object in memory. Therefore, changes made through one reference will be visible through the other.
MyObject myLargeObjectRecovery = myLargeObject;
It is the same in your code: myLargeObjectRecovery and myLargeObject refer to the same object in memory.
If you want an exact copy of an object you can use the Object.clone() method. It copies the object and returns a reference to a new object whose fields have the same values as the copied object.
Since clone() is protected, you cannot access it directly from outside the class; you can implement the Prototype pattern instead, depending on your requirements.
http://www.avajava.com/tutorials/lessons/prototype-pattern.html
public class MyObject implements Cloneable {
    // fields, getters, setters and other methods.

    public MyObject doClone() {
        try {
            // You may need to override clone() for a deep copy,
            // depending on your requirements.
            return (MyObject) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }
}
and call it in your code:
MyObject myLargeObjectRecovery = myLargeObject.doClone();
The Memento pattern addresses the design issue of reverting to a previous state. This approach is useful when you need to capture one or more states of an object and be able to revert to them; think of it as an N-step undo operation, just like in a text editor.
In the pattern you have a stateful object, the Originator, which is responsible for saving and restoring snapshots of its state. The state itself is saved in a wrapper class called a Memento, and the mementos are stored in and accessed via the Caretaker.
In this approach the easiest idea is to deep-copy your objects. However, this may be inefficient in terms of performance and space, since you store whole objects and not just the change-sets.
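A minimal sketch of the pattern, using the question's 2D-array state (the class names follow the GoF terminology; this is an illustration, not a library API):

import java.util.ArrayDeque;
import java.util.Deque;

// Originator: owns the state and knows how to snapshot/restore it.
class Board {
    private int[][] grid = new int[100][100];

    Memento save() {
        int[][] copy = new int[grid.length][];
        for (int i = 0; i < grid.length; i++) {
            copy[i] = grid[i].clone();
        }
        return new Memento(copy);
    }

    void restore(Memento m) {
        this.grid = m.state;
    }

    // Memento: an opaque snapshot; only Board looks inside it.
    static class Memento {
        private final int[][] state;
        private Memento(int[][] state) { this.state = state; }
    }
}

// Caretaker: stores mementos without inspecting them (N-step undo).
class History {
    private final Deque<Board.Memento> stack = new ArrayDeque<>();

    void backup(Board board) { stack.push(board.save()); }

    void undo(Board board) {
        if (!stack.isEmpty()) {
            board.restore(stack.pop());
        }
    }
}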
Some object-persistence libraries provide implementations of transactions and snapshots. Take a look at Prevayler, an object-persistence library for Java and an implementation of the prevalent system pattern. The library captures changes to your objects in the form of transactions and stores them in memory. If you need persistent storage of your POJOs, you can save snapshots of your objects to disk periodically and revert to them if needed.
You can find more on serializing POJOs in this SO question: Is there a object-change-tracking/versioning Java API out there?
I don't understand how to use RepositoryItem in ATG, or how to build customized logic on top of it.
Do I need to create a usual JavaBean over the RepositoryItem, or should I use it as is?
I will try to explain:
Logic on repositoryItem:
RepositoryItem store = getRepository().getItem(..);
String address = store.getPropertyValue(..);
Logic on JavaBean:
class StoreBean {
    String address;

    StoreBean(RepositoryItem store) {
        address = store.getPropertyValue(..);
    }
}
Then I can use StoreBean however I want, to get at its fields (and lazy-load them, for example).
What will be best practices in ATG?
It is a matter of preference.
What you do not get with RepositoryItem objects is strong type checking. You must either make assumptions about the type of RepositoryItem you are working with, or do manual checks in your code (see the example below). Additionally, since the RepositoryItem properties are stored as metadata, you have to know 1) the actual names of the properties from the XML repository descriptor, and 2) the types, which requires casting (example: String firstName = (String) item.getPropertyValue("firstName");). Here is an example of a validation to ensure the RepositoryItem object is of type "sku":
RepositoryItemDescriptor skuItemDescriptor = getCatalogTools().getCatalog().getItemDescriptor(getCatalogTools().getBaseSKUItemType());
RepositoryItemDescriptor itemDescriptor = item.getItemDescriptor(); // descriptor of the item being validated
if (!RepositoryUtils.isTypeOfItemDesc(itemDescriptor, skuItemDescriptor)) {
    throw new IllegalArgumentException("RepositoryItem must be of type " + getCatalogTools().getBaseSKUItemType());
}
If you take the approach of not using "JavaBeans", you increase the risk of runtime errors in your application. My suggestion is to keep a healthy balance between using RepositoryItem objects and wrapper objects. For critical items you plan to use across a large portion of your code base, I suggest a wrapper object.
I suggest that if you create wrapper objects, then for consistency you follow the same design pattern that Oracle Commerce uses. For example, the "order" item is wrapped by OrderImpl, which implements the ChangedProperties interface.
public class OrderImpl
extends CommerceIdentifierImpl
implements Order, ChangedProperties
http://docs.oracle.com/cd/E52191_03/Platform.11-1/apidoc/atg/commerce/order/OrderImpl.html
ATG's out-of-the-box repository implementations do not use JavaBeans for the most part. One big disadvantage of using JavaBeans and lazily loading them into memory is that you lose many repository caching features and increase your memory footprint. For instance, you will not be able to monitor your cache statistics or invalidate the cache periodically. You will also have the overhead of instantiation when a query returns a huge RepositoryItem result set.
Instead you can also use DynamicBean, which lets you refer to repository properties similarly to JavaBeans, for instance Profile.city.
If you only want to wrap them so that developers don't accidentally parse them incorrectly, you can write a util class per repository for the various read/write operations and centralize your type safety.
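A minimal sketch of such a util class, assuming a store repository with an "address" property as in the question (class and property names are illustrative):

import atg.repository.RepositoryItem;

// Centralizes property names and casts for the store repository,
// so callers get compile-time type safety.
public final class StoreItemUtils {

    private static final String ADDRESS = "address";

    private StoreItemUtils() {}

    public static String getAddress(RepositoryItem store) {
        return (String) store.getPropertyValue(ADDRESS);
    }
}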
We are using an OODBMS, which allows both Java "entities" and serialized objects. The DB supports true graphs (no "tree" restriction), and serialized objects can safely reference entities as well. The DB works (almost) transparently; we can do whatever we want, and it just works.
Now, I've discovered that objects that had been marked as "logically deleted" (using a simple boolean flag, rather than built-in DB functionality, since the DB doesn't have such a concept) are still loaded/saved within a particular object graph.
I want to know which object(s) reference those "zombie" objects. Trying to use reflection to iterate over the graph has not worked so far. Instead of the DB, I can simply use Java serialization to export the object graph, and this also causes the "zombie" objects to be serialized.
My question is: can I somehow extract, during the serialization process, information about the object(s) holding a reference to a "zombie" object (the "parent" objects)? There can be more than one, but as long as I find one, I can work iteratively until I have killed off all the invalid references.
Most OODBMSs allow you to run queries which return object references that satisfy certain constraints. So you could write something like this:
return all objects
where deleted == true
and Foo.bar == this
where Foo is the type of the object which references the deleted objects and bar is the field/property that contains the reference.
The exact syntax depends on your OODBMS.
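If your OODBMS cannot express such a query, an alternative (not part of the query approach above, and making obvious assumptions about your classes) is to walk the graph reflectively and record, for each object, which object first referenced it:

import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.Set;

public class ParentFinder {
    // Maps each visited object to the object that first referenced it.
    private final Map<Object, Object> parents = new IdentityHashMap<>();
    private final Set<Object> visited =
            Collections.newSetFromMap(new IdentityHashMap<>());

    public void walk(Object root) throws IllegalAccessException {
        Deque<Object> stack = new ArrayDeque<>();
        stack.push(root);
        visited.add(root);
        while (!stack.isEmpty()) {
            Object current = stack.pop();
            if (current.getClass().isArray()) {
                if (!current.getClass().getComponentType().isPrimitive()) {
                    for (int i = 0; i < Array.getLength(current); i++) {
                        register(Array.get(current, i), current, stack);
                    }
                }
                continue;
            }
            for (Class<?> c = current.getClass(); c != null; c = c.getSuperclass()) {
                for (Field f : c.getDeclaredFields()) {
                    if (f.getType().isPrimitive() || Modifier.isStatic(f.getModifiers())) {
                        continue;
                    }
                    f.setAccessible(true); // may require --add-opens on recent JDKs
                    register(f.get(current), current, stack);
                }
            }
        }
    }

    private void register(Object child, Object parent, Deque<Object> stack) {
        if (child == null || !visited.add(child)) {
            return; // already seen: keep the first recorded parent
        }
        parents.put(child, parent);
        stack.push(child);
    }

    /** The object that first referenced the given object, or null for the root. */
    public Object parentOf(Object zombie) {
        return parents.get(zombie);
    }
}

Calling walk(root) and then parentOf(zombie) for each zombie gives you one referencing parent per pass, which matches the iterative clean-up you describe.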
What is the safest way to serialize any kind of Java Object such that when:
DBObject obj = getFromDB();
Object id = obj.get(ID_KEY);
String s1 = safeSerialize(id);
The obj.get(ID_KEY) method returns an object that serves as an "id"; it could be an ObjectId, String, Long, Integer, or any kind of Object.
Then do the same thing:
DBObject obj = getFromDB();
Object id = obj.get(ID_KEY);
String s2 = safeSerialize(id);
I need to make sure that s1 still equals s2. For example, obj.get() might return a new instance, say new Integer(100), for a given ID_KEY on each call, yet the serialized versions must still be identical.
You can't do this because java.lang.Object is not serializable.
Classes are marked with java.io.Serializable to indicate that the programmer has allowed for binary representations to be reanimated.
Even if you were to require that the stored object belonged to a given set of serializable classes, you'd be subject to the usual fragility of serialization.
If you limited the set of objects to a set for which you then provided custom serialization (as opposed to using the default serialization), you could make it work.
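A minimal sketch of such a custom, canonical serialization for a closed set of id types (the type-prefix format is an invented convention, and the ObjectId import assumes your ids come from MongoDB, as the question suggests):

import org.bson.types.ObjectId; // assumption: Mongo-style ids

public final class SafeSerializer {

    private SafeSerializer() {}

    // Produces a stable, canonical string for a known set of id types.
    // Two equal ids always yield the same string, regardless of whether
    // they are the same instance.
    public static String safeSerialize(Object id) {
        if (id instanceof String)   return "String:"   + id;
        if (id instanceof Long)     return "Long:"     + id;
        if (id instanceof Integer)  return "Integer:"  + id;
        if (id instanceof ObjectId) return "ObjectId:" + ((ObjectId) id).toHexString();
        throw new IllegalArgumentException(
                "Unsupported id type: " + id.getClass().getName());
    }
}

With this, new Integer(100) and a second new Integer(100) both map to "Integer:100", which is exactly the s1-equals-s2 property the question asks for.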
I don't think Java serialization gives you these guarantees:
It certainly doesn't if any of the classes involved could change.
It certainly doesn't if you serialize / deserialize on different JVM versions, releases, or vendor platforms.
It possibly doesn't for any class that has custom writeObject / readObject methods, and that includes some of the basic types in java.util, etcetera.
JSON is as bad, if not worse. The order of the attributes of JSON objects is explicitly undefined, so you have no guarantee that the attributes will appear in the serialization in the same order each time.
Binding-based serializers/deserializers for POJOs as XML could work (if they don't use attributes), but you need to beware of how a binding handles the serialization of inherently unordered collections such as HashSets and HashMaps. The chances are that the order of the set/map members in the serialization won't be predictable.
My advice would be to think of another way to solve your actual problem ... whatever it is.
Currently I have a class set up to be processed as an AutoBean:
public interface Asset extends Hit {
    String getGuid();
    String getHitType();
    Map<String, Serializable> getMetadata();
}
I tried using Object instead of Serializable:
Map<String,Object> getMetadata()
but this seems to blow up when trying to access data (because it's not 'reified').
The Metadata map may contain other maps, strings, ints, etc. How do I retrieve data from an inner map of that metadata object?
Currently, if I call asset.getMetadata().get("title"), it returns a SerializableAutoBean, and calling toString() or String.valueOf(obj) on that object returns the in-memory object information and not the actual string value.
Can an AutoBean object be this dynamic, or do you specifically have to define every field?
AutoBeans aren't "dynamic" in the Java generics or RTTI sense.
In GWT, all types have to be known at compile time for anything which is auto-generated (which includes AutoBeans). This places restrictions on your designs which don't allow you to take full advantage of Java's language features (specifically, generics and other RTTI features). So, AutoBeans are not dynamic in the RTTI or Java generic sense. However, AutoBeans are simply a low-level way of wrapping your data, and you still have access to the data by using Splittables!
As stated in the previous comments, you can use Splittables for the parts of your JSON object whose type is not known at serialization/decode time. Sure, it would be nice to have everything happen at once, but nothing is stopping you from performing some post-processing on your data objects to get them into your desired state; see the sketch below.
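A sketch of Splittable-based access, assuming the metadata is declared as a Splittable rather than a Map (the property names, including the nested "dimensions" map, are hypothetical):

import com.google.web.bindery.autobean.shared.Splittable;

public interface Asset extends Hit {
    String getGuid();
    String getHitType();
    Splittable getMetadata(); // untyped JSON subtree instead of Map<String, Serializable>
}

// Elsewhere, after decoding the AutoBean:
public class MetadataReader {
    public static void readMetadata(Asset asset) {
        Splittable metadata = asset.getMetadata();
        String title = metadata.get("title").asString(); // the actual string value
        // An inner map is just a nested Splittable:
        Splittable dimensions = metadata.get("dimensions");
        double width = dimensions.get("width").asNumber();
    }
}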
A really good way for someone to "grok" what is going on with AutoBeans (and anything else which is auto-generated) is to look at the resulting generated code. The default location for Maven is: ${project.build.directory}/.generated.
If you look in there after you've compiled, you should find the code which the GWT compiler produces for your AutoBeans.
I have a class called BOM.
It has a method, getChildItem(), which returns an Item object.
Let's say I do this:
BOM model = new BOM();
Item child = model.getChildItem();
ArrayList a = new ArrayList();
a.add(child);
model.close();
What happens? Does it:
Not actually close the model, because the child is in an array?
Still close the model, as the child object, once created, is independent of the model object?
Close the model and set the child object to null? (I'm pretty sure this doesn't happen; it would wreak havoc and seems counterintuitive to the Java garbage-collection methodology.)
It's impossible to say what your close() method does. Possibly you're thinking of something like a database ResultSet or an InputStream, where the values are unavailable after you've closed them? That wouldn't be the case unless you've explicitly built your objects that way; such semantics are not part of the core language.
From the context I think you mean "what happens when the parent object goes out of scope?" (i.e. becomes eligible for garbage collection)
What happens is this:
BOM model = new BOM();
Item child = model.getChildItem();
// you now have a handle to the child object. Presumably, so does model, but we don't care about that.
ArrayList a = new ArrayList();
a.add(child);
//a now has a handle to child.
model.close();
// child is not eligible for garbage collection because a still has a handle to it.
Both child and the child item inside model refer to the same underlying object. close() will be called on model, so any access to child that relies on the model being open will now fail.
Hope that helps.