We are using an OODBMS that stores both Java "entities" and serialized objects. The DB supports true graphs (no "tree" restriction), and serialized objects can safely reference entities as well. The DB works (almost) transparently: we can do whatever we want, and it just works.
Now I've discovered that objects that had been marked as "logically deleted" (using a simple boolean flag rather than built-in DB functionality, since the DB has no such concept) are still loaded/saved within a particular object graph.
I want to know which object(s) reference those "zombie" objects. Trying to use reflection to iterate over the graph has not worked so far. Instead of going through the DB, I can simply use Java serialization to export the object graph, and this also causes the "zombie" objects to be serialized.
My question is: can I somehow, during the serialization process, extract information about the object(s) holding a reference to a "zombie" object (the "parent" objects)? There can be more than one, but as long as I find one, I can work iteratively until I've killed off all those invalid references.
Most OODBMSs let you run queries that return object references satisfying certain constraints. So you could write something like this:
return all objects
where deleted == true
and Foo.bar == this
where Foo is the type of the object which references the deleted objects and bar is the field/property that contains the reference.
The exact syntax depends on your OODBMS.
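If the OODBMS query route isn't available, a reflection-based graph walk can also do the job: record, for each visited object, which object you came from. Below is a minimal sketch (all class and method names are hypothetical), assuming the objects expose their references through ordinary instance fields:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

public class ZombieFinder {

    // Walks the graph from root and returns, for each object the predicate
    // flags as a "zombie", one object holding a direct reference to it.
    public static Map<Object, Object> findParents(Object root, Predicate<Object> isZombie) {
        Map<Object, Object> parents = new IdentityHashMap<>();
        Set<Object> visited = Collections.newSetFromMap(new IdentityHashMap<>());
        Deque<Object> stack = new ArrayDeque<>();
        stack.push(root);
        visited.add(root);
        while (!stack.isEmpty()) {
            Object current = stack.pop();
            if (current.getClass().isArray()) {
                // Arrays have no fields; walk their elements instead.
                if (!current.getClass().getComponentType().isPrimitive()) {
                    for (Object child : (Object[]) current) {
                        visit(current, child, isZombie, parents, visited, stack);
                    }
                }
                continue;
            }
            for (Class<?> c = current.getClass(); c != null; c = c.getSuperclass()) {
                for (Field f : c.getDeclaredFields()) {
                    if (f.getType().isPrimitive() || Modifier.isStatic(f.getModifiers())) continue;
                    try {
                        f.setAccessible(true);
                        visit(current, f.get(current), isZombie, parents, visited, stack);
                    } catch (RuntimeException | IllegalAccessException e) {
                        // skip fields the module system keeps inaccessible
                    }
                }
            }
        }
        return parents;
    }

    private static void visit(Object parent, Object child, Predicate<Object> isZombie,
                              Map<Object, Object> parents, Set<Object> visited,
                              Deque<Object> stack) {
        if (child == null || !visited.add(child)) return;
        if (isZombie.test(child)) parents.put(child, parent);
        stack.push(child);
    }
}
```

The zombie predicate would check your logical-delete flag; running this from the graph root yields one parent per zombie, which is enough to clear the invalid references iteratively.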
Related
What is the safest way to serialize any kind of Java Object such that, when I do:
DBObject obj = getFromDB();
Object id = obj.get(ID_KEY);
String s1 = safeSerialize(id);
The obj.get(ID_KEY) method returns an object that serves as an "id"; it could be an ObjectId, String, Long, Integer, or any kind of Object.
Then do the same thing:
DBObject obj = getFromDB();
Object id = obj.get(ID_KEY);
String s2 = safeSerialize(id);
I need to make sure that s1 is still equal to s2. I mean that, for example, the obj.get() method might return a new instance, say new Integer(100), for a given ID_KEY, and the "serialized" version still needs to be the same.
You can't do this because java.lang.Object is not serializable.
Classes are marked with java.io.Serializable to indicate that the programmer has allowed for their binary representations to be reanimated.
Even if you were to require that the object that you were storing was limited to a given set of objects which were serializable, you'd be subject to the usual fragility of serialization.
If you limited the set of objects to a set for which you then provided custom serialization (as opposed to using the default serialization), you could then make it work.
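For illustration, such a custom scheme for a closed set of ID types might look like the sketch below (safeSerialize here is a hypothetical helper, not a library method). Because the string is built by hand, equal values always yield equal output, independent of JVM or library version:

```java
public class IdSerializer {

    // Canonical, hand-built string form for a closed set of ID types.
    // The type prefix keeps e.g. Integer 100 and Long 100 distinct.
    public static String safeSerialize(Object id) {
        if (id instanceof Integer) return "int:" + id;
        if (id instanceof Long)    return "long:" + id;
        if (id instanceof String)  return "str:" + id;
        throw new IllegalArgumentException("unsupported id type: " + id.getClass());
    }
}
```

Any type outside the supported set fails fast instead of producing an unstable representation.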
I don't think Java serialization gives you these guarantees:
It certainly doesn't if any of the classes involved could change.
It certainly doesn't if you serialize / deserialize on different JVM version/release/vendor platforms.
It possibly doesn't for any class that has custom writeObject / readObject methods ... and that includes some of the basic types in java.util, etcetera.
JSON is as bad, if not worse. The order of the attributes of JSON objects is explicitly undefined, so you have no guarantees that the attributes will appear in the serialization in the same order each time.
Binding based serial/deserializers for POJOs as XML could work (if they don't use attributes), but you need to beware of how a binding handles the serialization of inherently unordered collections such as HashSets and HashMaps. The chances are that the order of the set/map members in the serialization won't be predictable.
My advice would be to think of another way to solve your actual problem ... whatever it is.
I'm just learning Java, but I keep running into the same problem over and over again:
How do I revert to an old state of some object efficiently?
public class Example {
    MyObject myLargeObject;

    public void someMethod() {
        MyObject myLargeMyObjectRecovery = myLargeObject;
        /*
         * Update and change myLargeObject
         */
        if (someCondition) {
            // revert to previous state of myLargeObject
            myLargeObject = myLargeMyObjectRecovery;
        }
    }
}
The above is how I would like the code to work, but it obviously doesn't, since myLargeObject and myLargeMyObjectRecovery are references to the same object.
One solution is to create a copy constructor. This is fine for small objects, but if I have a large object (in my project it is a large 2D array, meaning I would have to iterate over all of the entries), this approach feels wrong.
This must be a very common problem in Java, how do others get around it?
Either deep copy, as you noted, or possibly serialization. You could store a serialized string, and then reconstruct the object from it later.
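A sketch of the serialization round-trip idea (class and method names are illustrative); it works for any Serializable object graph, including large 2D arrays:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class DeepCopy {

    // Deep-copies any Serializable object by round-tripping it through an
    // in-memory byte stream. Slower than a hand-written copy constructor,
    // but works for arbitrary object graphs with no per-class code.
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T copy(T obj) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(obj);
            }
            try (ObjectInputStream ois =
                     new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                return (T) ois.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("deep copy failed", e);
        }
    }
}
```

This trades CPU time and garbage for not having to maintain a copy constructor per class; for very large objects a hand-written deep copy may still be faster.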
The best solution depends on whether you have external references to your MyObject instance or not.
If you use myLargeObject only from the Example class, you can either:
Serialize your object at the savepoint, and deserialize at the restore point (the serialized byte[] must be transient)
Create a new instance with a Copy Constructor (doing deep copying) at the savepoint, and replace the reference at the restore point.
If you have access to the MyObject instance from outside, then it becomes a bit more interesting, you must introduce synchronization.
All of your methods on MyObject must be synchronized (to avoid inconsistent read)
You should have a synchronized void saveState() method which saves your state (either by serialization, or by copy constructor) (the latter is better)
You should have a synchronized void restoreState(), where you internally restore your state (for copying fields you can use a common code fragment with the copy constructor)
In all cases it is recommended to close the transaction (a kind of commit()) at some point; when you get there, you can delete the saved state.
Also, very importantly, if the object has an underlying data structure, the copy must traverse the whole structure (a deep copy). Otherwise you may run into problems with shared object references.
Be careful with JPA Entities or any externally-managed objects, it is unlikely that any of these methods will work with them.
If you assign one object reference to another, the JVM makes both refer to the same object in memory. Therefore, changes made through one reference are visible through the other.
MyObject myLargeMyObjectRecovery = myLargeObject;
It is the same in your code: myLargeMyObjectRecovery and myLargeObject refer to the same object in memory.
If you want an exact copy of an object, you can use the Object.clone() method. It copies the object and returns a reference to a new object whose fields have the same values as those of the copied object.
Since clone() is protected, you cannot access it directly; you can implement the Prototype pattern, depending on your requirements.
http://www.avajava.com/tutorials/lessons/prototype-pattern.html
public class MyObject implements Cloneable {
    // fields, getters, setters and other methods.

    public MyObject doClone() {
        try {
            // Override clone() if you need a deep copy of mutable fields.
            return (MyObject) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }
}
and call it in your code
MyObject myLargeMyObjectRecovery = myLargeObject.doClone();
The Memento Pattern addresses the design issue of the revert to previous state problem. This approach is useful when you need to capture one or multiple states of the object and be able to revert to them. Think of it as an N-step undo operation, just like in a text editor.
In the pattern you have a stateful object, the Originator, which is responsible for saving and restoring snapshots of its state. The state itself is saved in a wrapper class called Memento, which is stored in and accessed via the CareTaker.
In this approach the easiest idea is to deep-copy your objects. However, this may be inefficient in terms of performance and space, since you store whole objects rather than just the change-sets.
Some object persistence libraries provide implementations of transactions and snapshots. Take a look at Prevayler, which is an object persistence library for java and an implementation of the prevalent system pattern. The library captures the changes to your objects in form of transactions and stores them in-memory. If you need a persistent storage of your POJOs, you can save snapshots of your objects on disk periodically and revert to them if needed.
You can find more on serializing POJOs in this SO question: Is there a object-change-tracking/versioning Java API out there?
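To make the pattern concrete, here is a minimal Memento sketch; Editor, History, and the String memento are illustrative stand-ins for your Originator, CareTaker, and state:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Originator: owns the state and knows how to snapshot and restore it.
class Editor {
    private String text = "";

    void type(String s) { text += s; }
    String getText()    { return text; }

    String save() { return text; }                 // the Memento (immutable here)
    void restore(String memento) { text = memento; }
}

// CareTaker: stores Mementos and drives the N-step undo.
class History {
    private final Deque<String> undoStack = new ArrayDeque<>();

    void backup(Editor e) { undoStack.push(e.save()); }
    void undo(Editor e)   { if (!undoStack.isEmpty()) e.restore(undoStack.pop()); }
}
```

With a mutable state object, save() would hand out a deep copy instead of the object itself, which is exactly where the deep-copy cost mentioned above comes in.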
I am developing a java application which needs a special component for dynamic attributes. The arguments are serialized (using JSON) and stored in a database and then deserialized at runtime. All attributes are displayed in a JTable with 3 columns (attribute name, attribute type and attribute value) and stored in a hashmap.
I have currently two problems to solve:
The hashmap can also store objects, and the objects can be set to null; once set to null, I don't know which class they belong to. How could I store objects even when they are null and still know which class they belong to? Do I need to wrap each object in a class that holds the class of the stored object?
The objects are deserialized from JSON at runtime. The problem with this is that there are many different types of objects, and I don't actually know all the object types that will be stored in the hashmap. So I am looking for a way to dynamically deserialize objects. Is there such a way? Would I have to store the class of the object in the serialized JSON string?
Thanks!
Take a look at the Null Object Pattern. You can use an extra class to represent a null instance of your type that still carries information about itself.
There is also something called a class token, which is the use of Class objects as keys for heterogeneous containers. Take a look at Effective Java by Joshua Bloch, Item 29. I'm not sure how well this approach works for you, since you may have many instances of the same type, but I leave it as a reference.
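A rough sketch combining both ideas, storing the attribute's declared Class alongside its (possibly null) value, so the class is known even when the value is absent; all names here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// A typesafe heterogeneous container: the attribute's declared type is
// recorded separately from its value, so typeOf() works even for nulls.
class AttributeMap {
    private final Map<String, Class<?>> types = new HashMap<>();
    private final Map<String, Object> values = new HashMap<>();

    <T> void put(String name, Class<T> type, T value) {
        types.put(name, type);
        values.put(name, value);   // value may be null; the type survives
    }

    Class<?> typeOf(String name) {
        return types.get(name);
    }

    <T> T get(String name, Class<T> type) {
        return type.cast(values.get(name));   // runtime-checked cast
    }
}
```

The recorded Class could also be written into the JSON (e.g. as a type-name field) to drive dynamic deserialization.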
First of all, can you explain why you use JSON serialization for your attributes?
In my opinion this method has several disadvantages: it can cause problems with database search and indexing, make database viewing painful, and add unnecessary code to your application. Whether these problems matter depends on how you want to use your attributes.
My solution for situation like these is simple table containing columns like:
id - int
attribute_name - varchar
And then add columns for each supported data type:
string_value - varchar
integer_value - int
date_value - date
... and any other types you want.
This design allows for excellent performance using simple, typesafe ORM mapping without any serialization or other boilerplate. It can store values of any type: you just set the correct column for the attribute type, leaving all the others null. You can represent a null value by leaving all data columns null. Indexing and searching also become a piece of cake.
Currently I have a class setup to be processed as an autobean:
public interface Asset extends Hit {
String getGuid();
String getHitType();
Map<String,Serializable> getMetadata();
}
I tried using Object instead of Serializable:
Map<String,Object> getMetadata()
but this seems to blow up when trying to access data (because it's not 'reified').
The Metadata map may contain other maps, strings, ints, etc. How do I retrieve data from an inner map of that metadata object?
Currently, if I call asset.getMetadata().get("title"), this returns a SerializableAutoBean, and performing toString() or String.valueOf(obj) on that object returns the in-memory object information and not the actual string value.
Can an AutoBean object be this dynamic, or do you specifically have to define every field?
AutoBeans aren't "dynamic" in the Java generics or RTTI sense.
In GWT, all types have to be known at compile time for anything which is auto-generated (which includes AutoBeans). This places restrictions on your designs which don't allow you to take full advantage of Java's language features (specifically, generics and other RTTI features). So, AutoBeans are not dynamic in the RTTI or Java generic sense. However, AutoBeans are simply a low-level way of wrapping your data, and you still have access to the data by using Splittables!
As stated in the previous comments, you can use Splittables for the parts of your JSON object whose type is not known at serialization/decode time. Sure, it would be nice to have everything happen at once, but nothing is stopping you from performing some post-processing on your data objects to get them into your desired state.
A really good way for someone to "Grok" what is going on with AutoBeans (and anything else which is autogenerated) is to look at the resulting generated code. The default location for maven is: ${project.build.directory}/.generated.
If you look in there after you've compiled, you should find the code which the GWT compiler produces for your AutoBeans.
I have a question about the object ID of an object across JVMs. That is, suppose I have persisted an object created on JVM1, and now I want to use the same object on JVM2.
How do I do that?
Will the object ID of the object be the same on both JVMs?
If yes, what happens if JVM2 already has an object with the same object ID as the one that was persisted?
Thanks.
The object won't exist on JVM2 until you deserialize it. There's no concept of a "universal object ID" in Java - if you need an ID for your objects, you'll have to add it yourself. You could add a UUID field to your object; you'd then want to maintain some sort of cache to allow you to spot duplicates.
Are you really sure you need all of this? It may be worth taking another look at the bigger picture and redesigning.
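A minimal sketch of the UUID-plus-cache idea (all names are illustrative): the entity carries its own application-level identity, which survives serialization, and a registry spots duplicates after deserialization.

```java
import java.io.Serializable;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// An entity with its own stable identity, independent of any JVM reference.
class Entity implements Serializable {
    private static final long serialVersionUID = 1L;
    final UUID id = UUID.randomUUID();
}

// A registry that deduplicates entities by their UUID after deserialization.
class EntityRegistry {
    private final Map<UUID, Entity> cache = new ConcurrentHashMap<>();

    // Returns the instance already known under this id,
    // or registers the incoming one if the id is new.
    Entity intern(Entity incoming) {
        Entity existing = cache.putIfAbsent(incoming.id, incoming);
        return existing != null ? existing : incoming;
    }
}
```

After deserializing on JVM2, passing every entity through intern() guarantees at most one live instance per UUID.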
Check out serialization here or alternatively you could use RMI - check out this link
I'm not really sure what you mean by object ID. If you mean the reference that gets printed out when you print an object with no toString method, that is not an object ID; it is an identity hash (often derived from the memory address), which will be different on each JVM and on different invocations of the same program.
You could add a UUID to your object to create a unique id.
UUID javadoc
UUID uuid = UUID.randomUUID();