I have a serializable object MyObject that contains an integer field foo. I set foo to 10 and save the object to a file using writeObject().
I then add an integer field bar to MyObject, set bar to 15, and load the old serialized file using readObject().
The old serialized file doesn't contain bar, so bar ends up with the value 0. I want bar to keep the value 15 when the old file doesn't contain it.
Should I override readObject(), or how else could I prevent readObject() from assigning "default values" to fields that aren't in the stream?
I want to do this because in the constructor I'm setting my own default values and would like to use my own default values to control versioning.
Serialization doesn't set default values; it defers to Java's default field initialization.
To sum up your question: you want serialization to merge what's in the serialized stream with the values already in memory. That's not possible with Java serialization, since it controls which objects get created. What you can do is read in your serialized object and then manually write the code to merge the fields you want merged. I'd steer clear of Java serialization if I were you, but let's say you want to continue using it:
public class MyObject {
    public void merge(MyObject that) {
        // Given some other instance, merge this with that.
        // Write your code here; you decide the rules for which values win.
    }
}
ObjectInputStream stream = new ObjectInputStream(new FileInputStream(file));
MyObject that = (MyObject) stream.readObject();
someObject.merge(that);
Voilà: you control which fields are merged from that into someObject. If you want a library to do this merge for you, check out http://flexjson.sourceforge.net. It uses JSON serialization and works from Beans rather than POJOs. There is a way to take an already-populated object and overwrite its values from a JSON stream, though with some limitations. The other benefit is that you can read the stream back even after your object structure has changed, something Java serialization can technically do, but it's very, very hard.
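As one possible merge rule for the foo/bar example from the question, the sketch below keeps the in-memory bar when the stream's bar is 0 (which is what a field missing from an old file deserializes to). This is only one policy; in real code "bar == 0" can't distinguish "absent from the old file" from "legitimately 0", so treat it as an illustration:

```java
import java.io.Serializable;

public class MergeDemo {
    // Hypothetical version of MyObject from the question: foo existed in the
    // old serialized form, bar is the newly added field.
    static class MyObject implements Serializable {
        private static final long serialVersionUID = 1L;
        int foo = 10;
        int bar = 15;

        // Merge rule: always take foo from the deserialized object, but keep
        // this object's bar when the stream's bar looks unset (0 is Java's
        // default for an int field missing from the stream).
        void merge(MyObject that) {
            this.foo = that.foo;
            if (that.bar != 0) {
                this.bar = that.bar;
            }
        }
    }

    public static void main(String[] args) {
        MyObject inMemory = new MyObject();   // constructor defaults: foo=10, bar=15
        MyObject fromStream = new MyObject(); // stands in for stream.readObject()
        fromStream.foo = 42;
        fromStream.bar = 0;                   // as if bar was absent from the old file
        inMemory.merge(fromStream);
        System.out.println(inMemory.foo + " " + inMemory.bar); // 42 15
    }
}
```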
Would adding the following method to your MyObject work for you?
private void readObject(ObjectInputStream ois) throws IOException, ClassNotFoundException {
    bar = 15; // default used when the serialized file doesn't contain bar
    ois.defaultReadObject();
}
Use the keyword transient to exclude fields from serialization/deserialization.
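For completeness, here is a minimal round-trip showing what transient actually does: the field is skipped on write, and on read it comes back as Java's default for its type (0 for an int), not as the initializer's value, because deserialization bypasses constructors and field initializers of the serializable class. The class and field names are illustrative:

```java
import java.io.*;

public class TransientDemo {
    // Hypothetical class for illustration: foo is serialized, bar is not.
    static class MyObject implements Serializable {
        private static final long serialVersionUID = 1L;
        int foo = 10;
        transient int bar = 15; // excluded from the serialized form
    }

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        MyObject copy = (MyObject) deserialize(serialize(new MyObject()));
        // foo survives the round trip; bar reverts to 0 because
        // deserialization does not run field initializers.
        System.out.println(copy.foo + " " + copy.bar); // 10 0
    }
}
```

So on its own, transient won't preserve 15; you would still combine it with a readObject() that restores your own default.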
Related
What I'm doing
I'm using Dependency Injection to decouple my classes.
How I'm trying to do it
The class I am making constructs ObjectImplementation (the interface) objects to hold data, and it acts as a sort of container. I'm doing this to parse data and cross-reference two data sets. My problem is that I currently have object construction tied to the data being formatted a certain way. I am using the Factory pattern and a properties file, "config.properties".
What I want to be able to do
I want the factory to take in an array of fields (or some similar type) and construct instances of the reflected object type without depending on how the data is formatted. In this case they are Salesrep instances, but other times I want to construct Salesrep instances, or instances of other class types, with different fields filled and different ones null, without the data having to contain the names of the fields.
The end goal
The point is so that I can construct different objects with the same container code. If I want to contain the objects differently I'll simply make a new implementation of the parent interface of this container class.
What I'm thinking is the problem
I've figured out, through previous versions of this question and my own research, that a Fieldmap is a good idea. Yet there is no way to actually set those fields without having something in the data to match against the Fieldmap.
Extra Clarification
I really want to know if I can find a way to achieve my goal without adding field names to the data.
//creates new properties object and loads in the file configuration
Properties prop = new Properties();
prop.load(SalesRepbyId.class.getResourceAsStream("config.properties"));
//reflects in the class we wish to use
Class<? extends ObjectImplementation> Classtouse = Class.forName(prop.getProperty("ObjectImplementation")).asSubclass(ObjectImplementation.class);
//initializes the data and some hashmaps to store the data or the methods of the reflected class
ArrayList<String[]> Salesrep_contactlist = FileParser.ReadFile();
Map<String, ObjectImplementation> SalesrepByIdMap = new HashMap<>();
Map<String, Method> MethodMap = new HashMap<>();
//adds in the data (fields) by constructing objects of the reflected type using the ObjectImplementation interface
for (String[] fieldarray : Salesrep_contactlist) {
ObjectImplementation object_to_add = null;
try {
//utilizes the factory pattern to return an instance of the reflected class
object_to_add = Factory.getObjectImpl(prop.getProperty("ObjectImplementation"),fieldarray);
/**
uses a method hashmap to map the name of the method to the Method object.
I did it this way because dynamic variable declarations are not possible and
I wanted to decouple Method declarations from the specific class that has
them. If I just hardcoded which methods I get from the implementing class,
that would introduce extra dependencies I don't want.
**/
for (Method method:Classtouse.getMethods()) {
MethodMap.put(method.getName(),method);
}
//same as above but for fields this time
for (Field field:Classtouse.getFields()) {
FieldMap.put(field.getName(),field);
}
//fieldarray is a String[] with the format [Fieldname1:fieldinput1,Fieldname2:fieldinput2],
//so I want to take each element, separate out the field name, and then use that string to access the actual Field object of the same name in FieldMap
String fieldname = fieldarray[0].split(":")[0];
String fieldinput = fieldarray[0].split(":")[1];
Field name_example = FieldMap.get(fieldname);
name_example.set(object_to_add, fieldinput);
//This requires the data to have the field name in it rather than just the field input (or data). It also confines the input to strings, because I don't think I can use a generic type to set this field, even though potentially I would want to.
There is no way for me to dynamically set Field types without something to go off of in the data or elsewhere. In order to avoid something hard-coded like Salesrep rep = new Salesrep(arg1, arg2, arg3, ...), I needed to use the FieldMap and be able to match the incoming data to the fields I wanted to set, since I didn't want to do it by order, e.g.:
List<String> list = List.of("bob", "800-352-4324", "foo#example.com");
for (int i = 0; i < list.size(); i++) {
    Field field = FieldMap.get(/* need a name here automatically rather than hardcoded */);
    field.set(object_to_add, list.get(i));
}
The above didn't have any reference to the actual name of the Field that I use in my class. I didn't want that, and then it dawned on me that the first line of my data (which is in CSV format) effectively lists the field names, e.g.:
(in the CSV File) foo.txt:
1: name,phonenumber,email
2: "bob","800-352-4324","foo#example.com"
3: "steve","800-444-4444","annoyingcommercials#example.com"
4: ...
Using this knowledge, my solution is to use the first line of my data to specify the field names and their order, so that when I take in lines as arrays of strings I can use the first-line array as a reference for how to set the fields. I will know that the first element in the array should be the name, the second should be the number, etc. This way I only have to change the first line if I want to change how many fields the data-holding class actually has.
pseudocode:
ObjectImpl. Classtouse = reflect in the class to use here from the properties file
List(String[]) fieldarray = the raw data taken in and converted to a list of string arrays
String[] firstline = fieldarray.getfirstline()
List(String[]) restoflines = fieldarray.getallotherlines()
for i = 0; i < firstline.size(); i++ {
    Fieldmap.put(name of the field from firstline[i], create a new Field object here with that name);
    Field fieldtoset = Fieldmap.get(that name again)
    fieldtoset.set(an instance of Classtouse here, restoflines[i] which represents the data in the 'Name' column)
}
For some silly reason I had it in my head that there was a way to do this without any change to the data, as if the Factory which created the object could take in arbitrary/generic arguments and somehow just know where each field went. I realized that was silly, because I needed to tell the code how to actually set the fields, but in a way that wasn't hard-coded into the class. This solution puts the dependency on the data, so now it's not hard-coded into the class. I should have seen this sooner.
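The pseudocode above can be sketched in real Java roughly like this. It is only a sketch: Salesrep stands in for the reflected ObjectImplementation type, and the helper names are invented for the example:

```java
import java.lang.reflect.Field;
import java.util.*;

public class HeaderDrivenFactory {
    // Hypothetical data class; in the question this would be the reflected
    // ObjectImplementation subtype (e.g. Salesrep). Fields are public so
    // Class.getField can see them.
    public static class Salesrep {
        public String name;
        public String phonenumber;
        public String email;
    }

    // Build one instance per data row, using the CSV header row to decide
    // which field each column maps to.
    public static <T> List<T> build(Class<T> classToUse, String[] header, List<String[]> rows)
            throws ReflectiveOperationException {
        Map<String, Field> fieldMap = new HashMap<>();
        for (String columnName : header) {
            fieldMap.put(columnName, classToUse.getField(columnName));
        }
        List<T> result = new ArrayList<>();
        for (String[] row : rows) {
            T instance = classToUse.getDeclaredConstructor().newInstance();
            for (int i = 0; i < header.length; i++) {
                fieldMap.get(header[i]).set(instance, row[i]);
            }
            result.add(instance);
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        String[] header = {"name", "phonenumber", "email"};
        List<String[]> rows = new ArrayList<>();
        rows.add(new String[]{"bob", "800-352-4324", "foo#example.com"});
        List<Salesrep> reps = build(Salesrep.class, header, rows);
        System.out.println(reps.get(0).name); // bob
    }
}
```

Changing which fields the data-holding class gets then only requires changing the header line, which matches the end goal above.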
Can I store the text from different text fields in an ArrayList and save it into the Customer property in my Booking class? Right now the code reads every text field and saves each one into a different property.
private String flighttime;
private String flightlocation;
private String flightfee;
private boolean car;
private boolean insurance;
private Customer customer;
private void savebookingButtonActionPerformed(java.awt.event.ActionEvent evt) {
Booking customerbooking = new Booking();
Customer cust = new Customer(); // must be instantiated before calling its setters; leaving it null would throw a NullPointerException
try {
if (custnameTF.getText().equals("")) {
throw new EmptyField("Please Insert Customer");
} else {
FileOutputStream fos = new FileOutputStream("Bookings/" + custidTF.getText() + ".txt");
ObjectOutputStream oos = new ObjectOutputStream(fos);
cust.setPersonName((custnameTF.getText()));
cust.setPersonSurname((custsurnameTF.getText()));
cust.setPersonID((custidTF.getText()));
cust.setConsultantname(consnameTF.getText());
cust.setConsultantsurname((conssurnameTF.getText()));
cust.setConsulid(considTF.getText());
customerbooking.setFlightlocation(locationCB.getSelectedItem().toString());
customerbooking.setFlighttime(timeCB.getSelectedItem().toString());
customerbooking.setFlightfee(feeCB.getSelectedItem().toString());
customerbooking.setCar(carRB.isSelected());
customerbooking.setInsurance(insuranceRB.isSelected());
customerbooking.setCustomer(cust); // assumes Booking exposes a setter for its customer field, so the customer is serialized with the booking
oos.writeObject(customerbooking);
oos.close();
fos.close();
custnameTF.setText("");
custsurnameTF.setText("");
custidTF.setText("");
considTF.setText("");
consnameTF.setText("");
conssurnameTF.setText("");
locationCB.setSelectedItem("");
timeCB.setSelectedItem("");
feeCB.setSelectedItem("");
The short answer is "no".
Dealing with property values in a list with an implicit order for mapping to your data object is brittle.
However, it is sometimes very useful to manage properties of objects in aggregate, using something like property change support or beans. An ArrayList is a poor choice of data structure for this, though, compared with at least a Map of property name to value, or a more thorough data-binding solution with typed properties to represent keys, pairs to map keys to values, and data structures to store current values and mutations.
There must be overhead somewhere that matches your property names to the accessors and mutators of your data object classes. This may be done with a databinding library, with a custom property-to-field mapping, or perhaps by storing your state in a Map inside the data objects. There are complex ways to do this while maintaining type safety, but many solutions treat properties as String and values as Object, throwing away type information and type checks, so that would be one hurdle to implementing it well.
Another related part of your question ought to be change management: only writing the things that changed. For example, if one of your fields changes, you don't need to overwrite all the other fields of your data object, only the ones that differ. Whether you move to a property-based update model or stick with explicit accessors and mutators, you ought to investigate this too. With the property-based approach, where you deal with properties in aggregate or delegate to handlers, the code in your UI should evaporate, passed off to libraries that handle type safety, change management, firing subsequent change events, and other related concerns.
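A minimal sketch of the map-backed property idea with change tracking might look like this. The names are illustrative, not from any particular library, and note the trade-off mentioned above: values are stored as Object, so type checks are lost:

```java
import java.util.*;

// Values are kept in a Map keyed by property name, and mutations are tracked
// so only changed properties need to be written out.
public class PropertyBag {
    private final Map<String, Object> values = new HashMap<>();
    private final Set<String> dirty = new HashSet<>();

    public void set(String name, Object value) {
        Object old = values.put(name, value);
        if (!Objects.equals(old, value)) {
            dirty.add(name); // remember that this property actually changed
        }
    }

    public Object get(String name) {
        return values.get(name);
    }

    // Return only the properties that changed since the last flush,
    // i.e. the ones worth persisting.
    public Map<String, Object> flushChanges() {
        Map<String, Object> changed = new HashMap<>();
        for (String name : dirty) {
            changed.put(name, values.get(name));
        }
        dirty.clear();
        return changed;
    }

    public static void main(String[] args) {
        PropertyBag booking = new PropertyBag();
        booking.set("flightlocation", "Paris");
        booking.set("flightfee", "100");
        System.out.println(booking.flushChanges().size()); // 2
        booking.set("flightfee", "100"); // same value: not dirty again
        System.out.println(booking.flushChanges().size()); // 0
    }
}
```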
I have a deserializer for a specific class which needs some ordering while reading fields.
Let's say I have two fields in my class (field1 and field2) and in order to read field2, it first needs field1.
For example for the following json data it works because when the deserializer parses field2, field1 is already set:
{"field1": 3, "field2": 4}
However if we reverse the fields:
{"field2": 4, "field1": 3}
I need to skip field2 via jp.skipChildren because field1 is not set. When field1 is parsed, Jackson should re-read and parse field2.
One option is to parse field2 instead of skipping and hold it in a variable so that when field1 is set, it can use the variable that holds data in field2. However; based on the value of field1, I may not need to parse field2 so I'm looking for a better solution since performance is critical in this part of the code.
I'm using Mapper.readValue(byte[], MyClass.class) method and it seems Jackson uses ReaderBasedJsonParser for parsing. Even though it's possible to get token position, I couldn't find a way to set token position.
Finally I found a way to do it. It's actually a workaround but it passes the tests that I wrote.
When you pass a byte array to mapper.readValue, it uses ReaderBasedJsonParser, which iterates through the array and parses the JSON tree.
public static class SaveableReaderBasedJsonParser extends ReaderBasedJsonParser {
    private int savedInputPtr = -1;

    public SaveableReaderBasedJsonParser(IOContext ctxt, int features, Reader r, ObjectCodec codec, CharsToNameCanonicalizer st, char[] inputBuffer, int start, int end, boolean bufferRecyclable) {
        super(ctxt, features, r, codec, st, inputBuffer, start, end, bufferRecyclable);
    }

    public void save() {
        savedInputPtr = _inputPtr;
    }

    public boolean isSaved() {
        return savedInputPtr > -1;
    }

    public void load() {
        _currToken = JsonToken.START_OBJECT;
        _inputPtr = savedInputPtr;
        _parsingContext = _parsingContext.createChildObjectContext(0, 0);
    }
}
When you use this JsonParser, the instance passed to your deserializer's EventDeserializer.deserialize(JsonParser, DeserializationContext) will be a SaveableReaderBasedJsonParser, so you can safely cast it.
When you want to save the position, call jp.save(); when you need to go back, you can just call jp.load().
As I said, it's a workaround, but when you need this kind of feature and don't want to parse the tree twice for performance reasons, you may give it a try.
A custom deserializer uses the streaming API. There is no way to go back, reparse, etc.
Did you register the custom deserializer for the field type, or for the class that contains the fields that need this special treatment?
If you register the deserializer for the class that contains those fields, you can just use the streaming API, read in all the fields of an instance, store them temporarily (e.g. in a HashMap), and then assign the values.
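The buffering idea is independent of Jackson and can be sketched in plain Java: collect the fields in whatever order the stream delivers them, then assign them in the order the logic requires. In a real deserializer you would fill the map inside deserialize(JsonParser, DeserializationContext) using the streaming API; the field names and the "field2 depends on field1" rule below are taken from the question, while the concrete gating condition is invented for illustration:

```java
import java.util.*;

public class OrderedAssignDemo {
    // Hypothetical target: field2's handling depends on field1's value.
    static int field1;
    static int field2;

    // Collect all fields first (in whatever order the JSON delivers them),
    // then assign in the order the logic requires.
    static void assignInOrder(Map<String, Integer> parsedFields) {
        field1 = parsedFields.getOrDefault("field1", 0);
        // field2 is only meaningful once field1 is known; skip it otherwise
        // (the field1 > 0 condition is just an example rule).
        field2 = (field1 > 0) ? parsedFields.getOrDefault("field2", 0) : 0;
    }

    public static void main(String[] args) {
        // Simulates the "reversed" JSON {"field2": 4, "field1": 3}.
        Map<String, Integer> parsed = new LinkedHashMap<>();
        parsed.put("field2", 4);
        parsed.put("field1", 3);
        assignInOrder(parsed);
        System.out.println(field1 + " " + field2); // 3 4
    }
}
```

The cost is holding the parsed values temporarily, which is what the question hoped to avoid; it trades that small buffer for not re-reading the stream.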
BTW: your question smells like the XY problem. Maybe you should post another question about the underlying reason you need to solve this problem, and see whether there is a better approach for it.
My Java application makes use of complex object graphs that are Jackson-annotated and serialized to JSON in their entirety for client-side use. Recently I had to change one of the objects in the domain model so that, instead of having two children of type X, it contains a Set<X>. This changed object is referenced by several types of objects in the model.
The problem now is that I have a large quantity of test data in JSON form for running my unit tests, and I need to convert it to this new object model. My first thought for updating the JSON files was to use the old-version object model to deserialize the JSON data, create new objects using the new-version object model, hydrate the new objects from the old ones, and then finally serialize the new objects back to JSON. I realized, though, that programmatically creating matching object graphs and then hydrating them could be just as tedious as fixing the JSON by hand, since the object graphs are relatively deep and it's not a simple clone.
I'm wondering how I can avoid fixing these JSON files entirely by hand. I'm open to any suggestions, even non-Java-based JSON transformation or parsing tools.
One possibility, if the objects in question are structurally closely-enough related, is to just read using one data-binding configuration and write using another.
For example, with Jackson you could implement custom set and get methods, so that setters exist for the child types but the getter exists only for the combined value. Something like:
```
public class POJO {
    private X a, b;

    public void setA(X value) { a = value; }
    public void setB(X value) { b = value; }

    public X[] getValues() {
        return new X[] { a, b };
    }
}
```
would, just as an example, read a structure where POJO has two Object-valued properties, "a" and "b", but write a structure that has one property, "values", containing a JSON array of the two Objects.
This is just an example of the basic idea: reading in (deserialization) and writing out (serialization) need not be symmetric or identical.
I have the following java model class in App Engine:
public class Xyz ... {
    @Persistent
    private Set<Long> uvw;
}
When saving an object Xyz with an empty set uvw in Java, I get a "null" field (as listed in the appengine datastore viewer).
When I try to load the same object in Python (through remote_api), as defined by the following python model class:
class Xyz(db.Model):
uvw = db.ListProperty(int)
I get a "BadValueError: Property uvw is required".
When saving another object of the same class in Python with an empty uvw list, the Datastore viewer prints a "missing" field.
Apparently the storage handling of empty lists differs between Java and Python, leading to "incompatible" objects.
Thus my question: Is there a way to, either:
force Java to store an empty list as a "missing" field,
force Python to gracefully accept a "null" list as an empty list when loading the object?
Or any other suggestion on how to handle empty list field in both languages.
Thanks for your answers!
It should work if you assign a default value to your Python property:
uvw = db.ListProperty(int, default=[])
I use the low-level Java API, so perhaps what I am doing is different, but before I save a collection-type data structure to the datastore, I convert it into something the datastore naturally handles, mainly Strings and byte arrays.
It sounds like Java App Engine is interpreting the empty set as a null value, and Python is not reading that null value correctly. You might try saving an empty set as the String value "empty set", and then have Python check whether the datastore holds that string value. If it does, it can allocate a new empty set; if not, it can read the property as a set.
The Java Set behavior is because Java's Collections are reference types, which default to being null.
To actually create an empty Set, declare it like this:
@Persistent
private Set<Long> uvw = new HashSet<Long>();
or using some other implementation of Set on the right-hand side. HashSet is the most commonly used Set type, though. Other interesting Set implementations are the two thread-safe sets, CopyOnWriteArraySet and ConcurrentSkipListSet; the insertion-ordered LinkedHashSet; and the sorted TreeSet.
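As a quick sanity check of the difference (the class name is illustrative): with an initializer, a fresh instance starts with an empty set rather than null, so persisting it stores an empty collection instead of a null property.

```java
import java.util.HashSet;
import java.util.Set;

public class EmptySetDemo {
    // Stand-in for the persistent class above: the field is initialized,
    // so it is never null on a newly constructed instance.
    public static class Xyz {
        public Set<Long> uvw = new HashSet<>();
    }

    public static void main(String[] args) {
        Xyz x = new Xyz();
        System.out.println(x.uvw != null && x.uvw.isEmpty()); // true
    }
}
```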
It may work for you:
uvw = db.ListProperty(int, default=[])
It's the most common way to sort it out.