What is the safest way to serialize any kind of Java Object such that when:
DBObject obj = getFromDB();
Object id = obj.get(ID_KEY);
String s1 = safeSerialize(id);
The obj.get(ID_KEY) method returns an object that serves as an "id"; it could be an ObjectId, a String, a Long, an Integer, or any other kind of Object.
Then I do the same thing again:
DBObject obj = getFromDB();
Object id = obj.get(ID_KEY);
String s2 = safeSerialize(id);
I need to make sure that s1 is still equal to s2. For example, the obj.get() method might return a new instance, say new Integer(100), for a given ID_KEY each time, yet the "serialized" version must stay the same.
You can't do this in general, because java.lang.Object is not serializable.
Classes are marked with java.io.Serializable to indicate that the programmer has allowed for their binary representations to be reanimated.
Even if you were to require that the object that you were storing was limited to a given set of objects which were serializable, you'd be subject to the usual fragility of serialization.
If you limited the set of objects to a set for which you then provided custom serialization (as opposed to using the default serialization), you could then make it work.
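For illustration, a minimal sketch of that idea, assuming the id is limited to the types listed in the question (the type prefixes here are made up; ObjectId is org.bson.types.ObjectId):

import org.bson.types.ObjectId;

// Canonical, type-prefixed string form for a closed set of id types.
// Equal values always produce equal strings, regardless of instance identity.
static String safeSerialize(Object id) {
    if (id instanceof String)   return "s:" + id;
    if (id instanceof Integer)  return "i:" + id;
    if (id instanceof Long)     return "l:" + id;
    if (id instanceof ObjectId) return "o:" + ((ObjectId) id).toHexString();
    throw new IllegalArgumentException("Unsupported id type: " + id.getClass());
}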
I don't think Java serialization gives you these guarantees:
It certainly doesn't if any of the classes involved could change.
It certainly doesn't if you serialize / deserialize on different JVM version/release/vendor platforms.
It possibly doesn't for any class that has custom writeObject / readObject methods ... and that includes some of the basic types in java.util, etcetera.
JSON is as bad, if not worse. The order of the attributes of JSON objects is explicitly undefined, so you have no guarantees that the attributes will appear in the serialization in the same order each time.
Binding-based serializers/deserializers for POJOs as XML could work (if they don't use attributes), but you need to beware of how a binding handles the serialization of inherently unordered collections such as HashSets and HashMaps. The chances are that the order of the set/map members in the serialization won't be predictable.
My advice would be to think of another way to solve your actual problem ... whatever it is.
I don't understand how to use RepositoryItem in ATG, and how I should construct customized logic on it.
Do I need to create a usual JavaBean over the RepositoryItem, or should I use it as is?
I will try to explain:
Logic on repositoryItem:
RepositoryItem store = getRepository().getItem(..);
String address = (String) store.getPropertyValue(..);
Logic on JavaBean:
class StoreBean {
    String address;

    StoreBean(RepositoryItem store) {
        address = (String) store.getPropertyValue(..);
    }
}
Then I can use StoreBean however I want, e.g. to read its fields (lazy-loading them, for example).
What will be best practices in ATG?
It is a matter of preference.
What you do not get with RepositoryItem objects is strong type checking. You must either make assumptions about the type of RepositoryItem you are working with, or do manual checks in your code (see the example below). Additionally, since RepositoryItem properties are stored as metadata, you have to know 1) the actual names of the properties from the XML repository descriptor, and 2) their types, which requires type casting (for example: String firstName = (String) item.getPropertyValue("firstName");). Here is an example of a validation to ensure the RepositoryItem object is of type "sku":
// Descriptor of the item being validated (getItemDescriptor() may throw RepositoryException).
RepositoryItemDescriptor itemDescriptor = item.getItemDescriptor();
RepositoryItemDescriptor skuItemDescriptor = getCatalogTools().getCatalog().getItemDescriptor(getCatalogTools().getBaseSKUItemType());
if (!RepositoryUtils.isTypeOfItemDesc(itemDescriptor, skuItemDescriptor)) {
    throw new IllegalArgumentException("RepositoryItem must be of type " + getCatalogTools().getBaseSKUItemType());
}
If you take the approach of not using "JavaBeans", then you are increasing the risk of runtime errors in your application. My suggestion is to keep a healthy balance between using RepositoryItem objects and wrapper objects. For critical items you plan to use in a large portion of your code base, I suggest using a wrapper object.
If you create wrapper objects, I suggest that, for consistency, you follow the same design pattern that Oracle Commerce uses. For example, the "order" item is wrapped by OrderImpl, which implements the ChangedProperties interface.
public class OrderImpl
extends CommerceIdentifierImpl
implements Order, ChangedProperties
http://docs.oracle.com/cd/E52191_03/Platform.11-1/apidoc/atg/commerce/order/OrderImpl.html
ATG out-of-the-box repository implementations do not use JavaBeans for the most part. One big disadvantage of using JavaBeans and lazy-loading them into memory is that you lose many repository caching features and increase your memory footprint. For instance, you will not be able to monitor your cache statistics or invalidate the cache periodically. You will also incur instantiation overhead when a query returns a huge RepositoryItem result set.
Instead, you can also use DynamicBean, which lets you refer to repository properties similarly to JavaBeans, for instance Profile.city.
If you only want to wrap them so that developers don't accidentally parse them incorrectly, you can write a util class per repository for the various types of read/write operations and centralize your type safety.
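For example, such a util class might look like this (a sketch; the "address" property name is made up):

// Centralizes property names and casts for the "store" repository item type.
public final class StoreItemUtil {
    private StoreItemUtil() {}

    public static String getAddress(RepositoryItem store) {
        // one place to fix if the descriptor or the property type ever changes
        return (String) store.getPropertyValue("address");
    }
}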
I have an object containing cyclic references. According to the XStream JSON documentation, cyclic references are NOT supported, and one should therefore use the NO_REFERENCES XStream mode when marshalling an object to JSON:
What limitations has XStream's JSON support?
JSON represents a very simple data model for easy data transfer. In particular, it has no equivalent for XML attributes. Those are written with a leading "#" character, but this is not always possible without violating the syntax (e.g. for array types). Those may be silently dropped (which therefore makes it difficult to implement deserialization). References are another issue in the serialized object graph, since JSON has no possibility to express such a construct. You should therefore always set the NO_REFERENCES mode of XStream. Additionally, you cannot use implicit collections, since the properties in a JSON object must have unique names.
But I tried setting the mode to ID_REFERENCES, and it appears as though the object is marshalled with references, and the object can be unmarshalled properly. Is the XStream documentation simply outdated, or have I inadvertently created the object graph in such a way that I haven't hit any of the limitations?
Sorry, but I can't post my exact graph as an example as it contains application/domain-specific code and it might take some time to construct a 'clean' alternative.
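Here is roughly how I set XStream up, though (simplified; JettisonMappedXmlDriver is one of XStream's JSON drivers):

XStream xstream = new XStream(new JettisonMappedXmlDriver());
xstream.setMode(XStream.ID_REFERENCES);
String json = xstream.toXML(objectWithCycles);   // marshals, emitting id/reference pairs
Object roundTripped = xstream.fromXML(json);     // unmarshals with the cycles restored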
I'm just learning Java, but I keep running into the same problem over and over again:
How do I revert to an old state of some object efficiently?
public class Example {
    MyObject myLargeObject;

    public void someMethod() {
        MyObject myLargeObjectRecovery = myLargeObject;
        /*
         * Update and change myLargeObject
         */
        if (someCondition) {
            // revert to the previous state of myLargeObject
            myLargeObject = myLargeObjectRecovery;
        }
    }
}
The above is how I would like the code to work but it obviously doesn't since myLargeObject and myLargeObjectRecovery are references to the same object.
One solution is to create a copy constructor. This is fine for small objects but if I have a large object (in my project the object is a large 2D array meaning I would have to iterate over all of the entries), this way feels wrong.
This must be a very common problem in Java, how do others get around it?
Either deep copy, as you noted, or possibly serialization. You could store a serialized string, and then reconstruct the object from it later.
The best solution depends on whether you have external references to your MyObject instance or not.
If you use myLargeObject from the Example class only, you can either:
Serialize your object at the savepoint, and deserialize it at the restore point (the serialized byte[] must be transient).
Create a new instance with a copy constructor (doing deep copying) at the savepoint, and replace the reference at the restore point.
If you have access to the MyObject instance from outside, then it becomes a bit more interesting: you must introduce synchronization.
All of your methods on MyObject must be synchronized (to avoid inconsistent reads).
You should have a synchronized void saveState() method which saves your state (either by serialization or by a copy constructor; the latter is better).
You should have a synchronized void restoreState() method, where you internally restore your state (for copying fields you can use a common code fragment shared with the copy constructor); a sketch of the serialization variant follows below.
In all cases it is recommended to close the transaction (a kind of commit()) at some point; once you get there, you can delete the saved state.
Also, it is very important that if you have an underlying data structure, you traverse the whole structure when saving or copying. Otherwise you may experience problems with shared object references.
Be careful with JPA Entities or any externally-managed objects, it is unlikely that any of these methods will work with them.
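A minimal sketch of the serialization-based saveState()/restoreState() variant described above (assuming MyObject implements Serializable; the grid field stands in for the large 2D array):

import java.io.*;

public class MyObject implements Serializable {
    private int[][] grid;                 // stand-in for the large state
    private transient byte[] savedState;  // transient: the snapshot itself must not be serialized

    public synchronized void saveState() {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(this);    // deep-copies everything reachable and serializable
            }
            savedState = bos.toByteArray();
        } catch (IOException e) {
            throw new IllegalStateException("snapshot failed", e);
        }
    }

    public synchronized void restoreState() {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(savedState))) {
            MyObject snapshot = (MyObject) ois.readObject();
            this.grid = snapshot.grid;    // copy fields back; share this code with a copy constructor
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("restore failed", e);
        }
    }
}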
If you assign one object reference to another, the JVM makes both refer to the same object in memory. Therefore, changes made through one reference are visible through the other.
MyObject myLargeObjectRecovery = myLargeObject;
It is the same in your code: myLargeObjectRecovery and myLargeObject refer to the same object in memory.
If you want an exact copy of an object, you can use the Object.clone() method. This method copies the object and returns a reference to a new object whose fields have the same values as the copied one (note that clone() makes a shallow copy by default).
Since the clone method is protected, you cannot access it directly from outside; you can implement a Prototype pattern depending on your requirements.
http://www.avajava.com/tutorials/lessons/prototype-pattern.html
public class MyObject implements Cloneable {
    // fields, getters, setters and other methods

    public MyObject doClone() {
        try {
            // clone() is shallow by default; override it if your fields need deep copying
            return (MyObject) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }
}
and call it in your code
MyObject myLargeObjectRecovery = myLargeObject.doClone();
The Memento Pattern addresses the design issue of reverting to a previous state. This approach is useful when you need to capture one or multiple states of an object and be able to revert to them. Think of it as an N-step undo operation, just like in a text editor.
In the pattern you have a stateful object, the Originator, which is responsible for saving and restoring snapshots of its state. The state itself is saved in a wrapper class called Memento, and mementos are stored in and accessed via the CareTaker.
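A minimal sketch with the roles named as above (the state is reduced to a single String for brevity):

import java.util.ArrayDeque;
import java.util.Deque;

class Memento {
    private final String state;
    Memento(String state) { this.state = state; }
    String getState() { return state; }
}

class Originator {
    private String state;
    void setState(String state) { this.state = state; }
    Memento saveToMemento() { return new Memento(state); }
    void restoreFromMemento(Memento memento) { this.state = memento.getState(); }
}

class CareTaker {
    private final Deque<Memento> history = new ArrayDeque<Memento>();
    void push(Memento memento) { history.push(memento); }
    Memento pop() { return history.pop(); }   // each pop is one undo step
}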
In this approach the easiest idea is to deep-copy your objects. However, this may be inefficient in terms of performance and space, since you store whole objects rather than only the change-sets.
Some object persistence libraries provide implementations of transactions and snapshots. Take a look at Prevayler, which is an object persistence library for java and an implementation of the prevalent system pattern. The library captures the changes to your objects in form of transactions and stores them in-memory. If you need a persistent storage of your POJOs, you can save snapshots of your objects on disk periodically and revert to them if needed.
You can find more on serializing POJOs in this SO question: Is there a object-change-tracking/versioning Java API out there?
Currently I have a class set up to be processed as an AutoBean:
public interface Asset extends Hit {
String getGuid();
String getHitType();
Map<String,Serializable> getMetadata();
}
I tried using Object instead of Serializable:
Map<String,Object> getMetadata()
but this seems to blow up when trying to access data (because it's not 'reified').
The Metadata map may contain other maps, strings, ints, etc. How do I retrieve data from an inner map of that metadata object?
Currently, if I call asset.getMetadata().get("title"), this returns a SerializableAutoBean, and performing toString() or String.valueOf(obj) on that object returns the in-memory object information and not the actual string value.
Can an AutoBean object be this dynamic, or do you specifically have to define every field?
AutoBeans aren't "dynamic" in the Java generics or RTTI sense.
In GWT, all types have to be known at compile time for anything which is auto-generated (which includes AutoBeans). This places restrictions on your designs which don't allow you to take full advantage of Java's language features (specifically, generics and other RTTI features). So, AutoBeans are not dynamic in the RTTI or Java generic sense. However, AutoBeans are simply a low-level way of wrapping your data, and you still have access to the data by using Splittables!
As stated in the previous comments, you can use Splittables for the parts of your JSON object whose type is not known at serialization/decode time. Sure, it would be nice to have everything happen at once, but nothing is stopping you from performing some post-processing on your data objects to get them into your desired state.
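For instance, a hedged sketch of that post-processing (the keys "title", "dimensions" and "width" are made up; Splittable is com.google.web.bindery.autobean.shared.Splittable):

public interface Asset extends Hit {
    String getGuid();
    String getHitType();
    Splittable getMetadata();   // instead of Map<String, Serializable>
}

// Walking the untyped part of the JSON by hand:
Splittable metadata = asset.getMetadata();
String title = metadata.get("title").asString();      // string leaf value
Splittable dimensions = metadata.get("dimensions");   // nested object
double width = dimensions.get("width").asNumber();    // numeric leaf value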
A really good way for someone to "Grok" what is going on with AutoBeans (and anything else which is autogenerated) is to look at the resulting generated code. The default location for maven is: ${project.build.directory}/.generated.
If you look in there after you've compiled, you should find the code which the GWT compiler produces for your AutoBeans.
I'm looking for clever ways to build dynamic Java classes, that is, classes where you can add/remove fields at runtime. Usage scenario: I have an editor where users should be able to add fields to the model at runtime, or maybe even create the whole model at runtime.
Some design goals:
Type safe without casts if possible for custom code that works on the dynamic fields (that code would come from plugins which extend the model in unforeseen ways).
Good performance (can you beat HashMap? Maybe use an array and assign indexes to the fields during setup?)
Field "reuse" (i.e. if you use the same type of field in several places, it should be possible to define it once and then reuse it).
Calculated fields which depend on the value of other fields
Signals should be sent when fields change value (not necessarily via the Beans API)
"Automatic" parent child relations (when you add a child to a parent, then the parent pointer in the child should be set for "free").
Easy to understand
Easy to use
Note that this is a "think outside the circle" question. I'll post an example below to get you in the mood :-)
Type safe without casts if possible for custom code that works on the dynamic fields (that code would come from plugins which extend the model in unforeseen ways)
AFAIK, this is not possible. You can only get type-safety without type casts if you use static typing. Static typing means method signatures (in classes or interfaces) that are known at compile time.
The best you can do is have an interface with a bunch of methods like String getStringValue(String field), int getIntValue(String field) and so on. And of course you can only do that for a predetermined set of types. Any field whose type is not in that set will require a typecast.
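For example, such an interface might look like this (a sketch; the fallback getValue() still requires a cast at the call site):

public interface DynamicFields {
    String getStringValue(String field);
    void setStringValue(String field, String value);
    int getIntValue(String field);
    void setIntValue(String field, int value);
    Object getValue(String field);   // for types outside the predetermined set
}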
The obvious answer is to use a HashMap (or a LinkedHashMap if you care for the order of fields). Then, you can add dynamic fields via a get(String name) and a set(String name, Object value) method.
This code can be implemented in a common base class. Since there are only a few methods, it's also simple to use delegation if you need to extend something else.
To avoid the casting issue, you can use a type-safe object map:
TypedMap map = new TypedMap();
String expected = "Hallo";
map.set( KEY1, expected );
String value = map.get( KEY1 ); // Look Ma, no cast!
assertEquals( expected, value );
List<String> list = new ArrayList<String> ();
map.set( KEY2, list );
List<String> valueList = map.get( KEY2 ); // Even with generics
assertEquals( list, valueList );
The trick here is the key which contains the type information:
TypedMapKey<String> KEY1 = new TypedMapKey<String>( "key1" );
TypedMapKey<List<String>> KEY2 = new TypedMapKey<List<String>>( "key2" );
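A minimal sketch of those two classes (not part of the original snippet):

import java.util.HashMap;
import java.util.Map;

class TypedMapKey<T> {
    private final String name;
    TypedMapKey( String name ) { this.name = name; }
    String name() { return name; }
}

class TypedMap {
    private final Map<TypedMapKey<?>, Object> values = new HashMap<TypedMapKey<?>, Object>();

    public <T> void set( TypedMapKey<T> key, T value ) {
        values.put( key, value );
    }

    @SuppressWarnings( "unchecked" )
    public <T> T get( TypedMapKey<T> key ) {
        return (T) values.get( key );   // safe: set() ties the value's type to the key
    }
}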
The performance will be OK.
Field reuse works by using the same value type, or by extending the key class of the type-safe object map with additional functionality.
Calculated fields could be implemented with a second map that stores Future instances which do the calculation.
Since all the manipulation happens in just two (or at least a few) methods, sending signals is simple and can be done any way you like.
To implement automatic parent/child handling, install a signal listener on the "set parent" signal of the child and then add the child to the new parent (and remove it from the old one if necessary).
Since no framework is used and no tricks are necessary, the resulting code should be pretty clean and easy to understand. Not using String as keys has the additional benefit that people won't litter the code with string literals.
So basically you're trying to create a new kind of object model with more dynamic properties, a bit like a dynamic language?
Might be worth looking at the source code for Rhino (i.e. Javascript implemented in Java), which faces a similar challenge of implementing a dynamic type system in Java.
Off the top of my head, I suspect you will find that internal HashMaps ultimately work best for your purposes.
I wrote a little game (Tyrant - GPL source available) using a similar sort of dynamic object model featuring HashMaps, it worked great and performance was not an issue. I used a few tricks in the get and set methods to allow dynamic property modifiers, I'm sure you could do the same kind of thing to implement your signals and parent/child relations etc.
[EDIT] See the source of BaseObject for how it is implemented.
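The core idea looks roughly like this (a sketch, not Tyrant's actual code):

import java.util.HashMap;
import java.util.Map;

public class DynamicObject {
    private final Map<String, Object> properties = new HashMap<String, Object>();
    private DynamicObject parent;   // inherited defaults: one of the get/set "tricks"

    public Object get(String key) {
        Object value = properties.get(key);
        return (value == null && parent != null) ? parent.get(key) : value;
    }

    public void set(String key, Object value) {
        properties.put(key, value);   // a modifier hook or change signal could be invoked here
    }
}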
You can use bytecode manipulation libraries for this. A shortcoming of this approach is that you need to create your own classloader to load the changed classes dynamically.
I do almost the same; it's a pure Java solution:
Users generate their own models, which are stored as JAXB schemas.
Each schema is compiled into Java classes on the fly and stored in user JARs.
All classes are forced to extend one "root" class, where you can put any extra functionality you want.
Appropriate classloaders are implemented with "model change" listeners.
Speaking of performance (which is important in my case), you can hardly beat this solution. Reusability is the same as that of an XML document.
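A hedged sketch of the compile-and-load step using the standard javax.tools API (paths and class names are made up; the JAXB schema-to-source generation is omitted):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

static Class<?> compileAndLoad() throws Exception {
    // Compile the generated source on the fly (requires a JDK, not just a JRE).
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    compiler.run(null, null, null, "/path/to/generated/Model.java");

    // Load the result with a fresh classloader, so a "model change" can swap it in later.
    URLClassLoader loader = new URLClassLoader(new URL[] { new File("/path/to/generated/").toURI().toURL() });
    return loader.loadClass("Model");
}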