(Disclaimer: Extreme oversimplification. The actual scenario is considerably more complex.)
Say I have two systems, Producer and Consumer. Their code is completely independent, aside from a single shared interface:
public interface Thing {
String getName();
String getDescription();
int getPrice();
}
The idea is that Producer creates a bunch of data and sends it as JSON over HTTP. Producer has a number of implementations of Thing, each carrying additional metadata required in the data generation process.
As it's undesirable for Producer to have any knowledge of Jackson/serialization aside from a thin layer at the very top, serialization annotations should be kept out of the Thing implementations. Because the number of implementations is very likely to grow, maintaining mix-ins for all of them quickly becomes unsustainable. It was believed to be sufficient to apply annotations to the Thing interface itself.
The first simple approach was a @JsonSerialize annotation on the interface. At first that seemed to work, but it led to a problem: some of the implementations of Thing are enums, and Jackson simply serializes those by name instead of by the fields defined in the interface.
Some googling revealed the following annotation:
@JsonFormat(shape = JsonFormat.Shape.OBJECT)
While it did indeed solve the problem by serializing the fields instead of the name, it did it too well: it also serialized the implementation-specific public fields not defined in the Thing interface, resulting not only in an information leak but also in failed deserialization in Consumer, because the data contained unknown entries.
As further googling didn't yield any results, the only solution I can think of is marking all those fields as ignorable, which is extremely undesirable for the reasons mentioned above.
Is there any way, simply by altering the interface itself and its annotations, to enforce that exactly those fields, no more, no less, are serialized, for classes and enums alike?
I had this issue when I was working with Jackson: deserialization fails because Jackson is unable to find the polymorphic reference type.
You should annotate your interface with @JsonTypeInfo.
Something like:
@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "class")
There isn't much code in your question, hence this brief answer.
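For illustration, a minimal sketch of this suggestion applied to the Thing interface from the question (the property name "class" is just the one proposed above):
import com.fasterxml.jackson.annotation.JsonTypeInfo;

// Embeds the concrete class name in the JSON so Jackson can resolve
// the polymorphic type again during deserialization
@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "class")
public interface Thing {
    String getName();
    String getDescription();
    int getPrice();
}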
Usually you should be able to force the use of a certain type with:
@JsonSerialize(as = Thing.class)
and similarly with @JsonDeserialize.
Does this not work with enums?
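For reference, a minimal sketch of this suggestion on the interface from the question (untested against the asker's enum case):
import com.fasterxml.jackson.databind.annotation.JsonSerialize;

// Forces implementations to be serialized using only the properties of Thing
@JsonSerialize(as = Thing.class)
public interface Thing {
    String getName();
    String getDescription();
    int getPrice();
}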
I want to know if the code below violates the open/closed principle.
Animal is the parent class of Dog; Animal carries Jackson annotations that help ObjectMapper (de)serialize the classes. Anyone who extends Animal has to edit only the annotations on Animal to make sure (de)serialization works as intended, leaving the subclass itself untouched.
@JsonTypeInfo(
        use = JsonTypeInfo.Id.NAME,
        include = JsonTypeInfo.As.PROPERTY,
        property = "type")
@JsonSubTypes({
        // all subclasses
        @Type(value = Dog.class, name = "dog")
})
public abstract class Animal {
// fields, constructors, getters and setters
}
public class Dog extends Animal {
}
Indeed it does. The idea of the open/closed principle is to make objects extensible without having to modify them internally. Since any new child of Animal would require modifying Animal to work properly, it breaks the principle.
Theoretical point of view
The open/closed principle, like SOLID as a whole, is a utopia: we should continually push our code in that direction, but we will probably never fully get there, because it isn't possible. The articles below show how even classical getters and annotation constructs can be debatable.
Printers Instead of Getters
Java Annotations Are a Big Mistake
Practical point of view
Like every practical programmer, I like to use good tools to solve problems instead of implementing something new myself. When I am asked to serialize a given model to JSON, I check whether the tool is:
Open-source
Fast
Under active development
Easy to use
When it comes to Jackson and its annotations, I think we can find a middle ground between theory and practice, thanks to the mix-in feature. You can separate the model from the way it is serialized to JSON. Of course, when you add a new class that extends the base class, you need to change the mix-in interface with the annotations, but that is the price we pay.
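For readers unfamiliar with the feature, here is a minimal sketch of a Jackson mix-in for the Animal example above (the factory class is a made-up name; addMixIn is the Jackson 2.x registration method):
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.databind.ObjectMapper;

// The annotations live on the mix-in, leaving the Animal model class clean
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "type")
@JsonSubTypes({@JsonSubTypes.Type(value = Dog.class, name = "dog")})
abstract class AnimalMixIn {
}

class MapperFactory {
    static ObjectMapper newMapper() {
        ObjectMapper mapper = new ObjectMapper();
        // Tell the mapper to treat AnimalMixIn's annotations as Animal's
        mapper.addMixIn(Animal.class, AnimalMixIn.class);
        return mapper;
    }
}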
Edit, or why I forgot to answer the question
Sorry, I forgot to answer whether the above example violates the open/closed principle. First, the definition from the Wikipedia article:
A class is closed, since it may be compiled, stored in a library,
baselined, and used by client classes. But it is also open, since any
new class may use it as parent, adding new features. When a descendant
class is defined, there is no need to change the original or to
disturb its clients.
The above example violates the "When a descendant class is defined, there is no need to change the original" part. Even if we use a mix-in, some other part of the app still has to change. Moreover, if your solution uses annotations, then in 99.99% of cases you violate this part, because the functionality hidden behind them has to be configured somewhere.
Open/closed means a class should be open for extension, but closed for modification.
In other words... if you want to change the behavior of a class you should extend it in some way, but you should not modify it.
You can extend a class by:
creating a subclass. This is usually done using e.g. the template method pattern.
defining an interface that class A uses, so that its behavior can be extended by passing it another instance of that interface, e.g. a strategy pattern. A good real-life example is TreeSet(Comparator<? super E> comparator), because its sorting behavior can be changed without modifying TreeSet itself, as the sketch below shows.
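A short illustration of the second point (a minimal sketch):
import java.util.Comparator;
import java.util.Set;
import java.util.TreeSet;

class StrategyExample {
    public static void main(String[] args) {
        // TreeSet's sorting behavior is extended via the Comparator strategy,
        // without modifying TreeSet itself
        Set<String> byLength = new TreeSet<>(Comparator.comparingInt(String::length));
        byLength.add("bb");
        byLength.add("a");
        byLength.add("ccc");
        System.out.println(byLength); // [a, bb, ccc]
    }
}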
From my point of view, the @JsonSubTypes annotation is not part of the behavior of the Animal class. It changes the behavior of another class, the ObjectMapper, so it is not really a violation. "Not really" because even though you don't change the behavior, you still have to touch the Animal class and recompile it.
It is a really weird design for the annotation. Why did the Jackson developers not allow you to put the annotation on the subclass, the way JPA does for hierarchy mapping? See DiscriminatorValue.
It is a strange design that a supertype references subtypes.
Abstract types should not depend on concrete ones. In my opinion that is a principle that should always be applied.
TL;DR
Can I use Java serialization/deserialization (the Serializable interface, the ObjectOutputStream and ObjectInputStream classes, and possibly readObject and writeObject methods in the classes implementing Serializable) as a valid implementation of the Prototype pattern, or not?
Note
This question is not about whether using a copy constructor is better than serialization/deserialization.
I'm aware of the Prototype Pattern concept (from Wikipedia, emphasis mine):
The prototype pattern is a creational design pattern in software development. It is used when the type of objects to create is determined by a prototypical instance, which is cloned to produce new objects. This pattern is used to:
avoid subclasses of an object creator in the client application, like the abstract factory pattern does.
avoid the inherent cost of creating a new object in the standard way (e.g., using the 'new' keyword) when it is prohibitively expensive for a given application.
And in this Q/A, Examples of GoF Design Patterns in Java's core libraries, BalusC explains that the Prototype pattern in Java is implemented by Object#clone only if the class implements the Cloneable interface (a marker interface, similar to Serializable for serializing/deserializing objects). The problems with this approach are noted in blog posts and related Q/As like these:
Copy Constructor versus Cloning
Java: recommended solution for deep cloning/copying an instance
So, another alternative is using a copy constructor to clone your objects (the DIY way), but this fails to implement the Prototype pattern because of the text I emphasized above:
avoid the inherent cost of creating a new object in the standard way (e.g., using the 'new' keyword)
AFAIK, the only way to create an object without invoking its constructor is by deserialization, as noted in the example in the accepted answer of this question: How are constructors called during serialization and deserialization?
So, I'm just asking whether using object deserialization through ObjectOutputStream (knowing what you're doing, marking the necessary fields as transient, and understanding all the implications of this process), or a similar approach, would be a proper implementation of the Prototype pattern.
Note: I don't think unmarshalling XML documents is a right implementation of this pattern, because it invokes the class constructor. This probably happens when unmarshalling JSON content as well.
People would advise using the object's constructor, and I wouldn't mind that option when working with simple objects. This question is more oriented toward deep copying complex objects, where I may have five levels of objects to clone. For example:
// "fields" is an abbreviation for primitive-type and String fields;
// each class can declare between 1 and 20 (or more) of them,
// and all of them will be filled during application execution
class CustomerType {
//fields...
}
class Customer {
CustomerType customerType;
//fields
}
class Product {
//fields
}
class Order {
List<Product> productList;
Customer customer;
//fields
}
class InvoiceStatus {
//fields
}
class Invoice {
List<Order> orderList;
InvoiceStatus invoiceStatus;
//fields
}
//class to communicate invoice data for external systems
class InvoiceOutboundMessage {
List<Invoice> invoice;
//fields
}
Let's say I want/need to copy an instance of InvoiceOutboundMessage. I don't think a copy constructor would apply in this case; IMO, having a lot of copy constructors doesn't seem like a good design here.
Using Java object serialization directly is not quite the Prototype pattern, but serialization can be used to implement the pattern.
The Prototype pattern puts the responsibility of copying on the object to be copied. If you use serialization directly, the client needs to provide the deserialization and serialization code. If you own, or plan to write, all of the classes that are to be copied, it is easy to move the responsibility to those classes:
define a Prototype interface which extends Serializable and adds an instance method copy
define a concrete class PrototypeUtility with a static method copy that implements the serialization and deserialization in one place
define an abstract class AbstractPrototype that implements Prototype. Make its copy method delegate to PrototypeUtility.copy.
A class which needs to be a Prototype can either implement Prototype itself and use PrototypeUtility to do the work, or can just extend AbstractPrototype. By doing so it also advertises that it is safely Serializable.
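A rough sketch of that arrangement, using the names from the outline above (error handling is illustrative only; each top-level type would live in its own file):
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

interface Prototype extends Serializable {
    Prototype copy();
}

final class PrototypeUtility {
    private PrototypeUtility() {
    }

    // Round-trips the object through an in-memory stream to produce a deep copy
    static <T extends Serializable> T copy(T original) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(original);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                @SuppressWarnings("unchecked")
                T clone = (T) in.readObject();
                return clone;
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("Deep copy via serialization failed", e);
        }
    }
}

abstract class AbstractPrototype implements Prototype {
    @Override
    public Prototype copy() {
        return PrototypeUtility.copy(this); // delegates as described above
    }
}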
If you don't own the classes whose instances are to be copied, you can't follow the Prototype pattern exactly, because you can't move the responsibility for copying to those classes. However, if those classes implement Serializable, you can still get the job done by using serialization directly.
Regarding copy constructors, those are a fine way to copy Java objects whose classes you know, but they don't meet the requirement that the Prototype pattern does that the client should not need to know the class of the object instance that it is copying. A client which doesn't know an instance's class but wants to use its copy constructor would have to use reflection to find a constructor whose only argument has the same class as the class it belongs to. That's ugly, and the client couldn't be sure that the constructor it found was a copy constructor. Implementing an interface addresses those issues cleanly.
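To make that concrete, a hypothetical client that blindly hunts for a copy constructor might look like this (sketch only):
// Looks up a constructor whose single parameter type is the object's own
// class, and hopes it is actually a copy constructor
static <T> T reflectiveCopy(T original) throws ReflectiveOperationException {
    @SuppressWarnings("unchecked")
    Class<T> clazz = (Class<T>) original.getClass();
    return clazz.getConstructor(clazz).newInstance(original);
}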
Wikipedia's comment that the Prototype pattern avoids the cost of creating a new object seems misguided to me. (I see nothing about that in the Gang of Four description.) Wikipedia's example of an object that is expensive to create is an object which lists the occurrences of a word in a text, which of course are expensive to find. But it would be foolish to design your program so that the only way to get an instance of WordOccurrences was to actually analyze a text, especially if you then needed to copy that instance for some reason. Just give it a constructor with parameters that describe the entire state of the instance and assigns them to its fields, or a copy constructor.
So unless you're working with a third-party library that hides its reasonable constructors, forget about that performance canard. The important points of Prototype are that
it allows the client to copy an object instance without knowing its class, and
it accomplishes that goal without creating a hierarchy of factories, as meeting the same goal with the AbstractFactory pattern would.
I'm puzzled by this part of your requirements:
Note: I don't think unmarshalling XML documents is a right
implementation of this pattern, because it invokes the class constructor.
This probably happens when unmarshalling JSON content as well.
I understand that you might not want to implement a copy constructor, but you will always have a regular constructor. If this constructor is invoked by a library, then what does it matter? Furthermore, object creation in Java is cheap. I've used Jackson for marshalling/unmarshalling Java objects with great success. It is performant and has a number of awesome features that might be very helpful in your case. You could implement a deep copier as follows:
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;

public class MyCloner {
    private ObjectMapper cloner; // with getter and setter

    @SuppressWarnings("unchecked")
    public <T> T clone(T toClone) throws IOException {
        // Write the object out as a JSON string, then read it back as a new instance
        String stringCopy = cloner.writeValueAsString(toClone);
        T deepClone = (T) cloner.readValue(stringCopy, toClone.getClass());
        return deepClone;
    }
}
Note that Jackson works automatically with beans (getter + setter pairs, no-arg constructor). For classes that break that pattern, it needs additional configuration. One nice thing about this configuration is that it won't require you to edit your existing classes, so you can clone using JSON without any other part of your code knowing that JSON is being used.
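For example, one such configuration might open up field visibility so that getter-less classes still (de)serialize (a sketch, not the only option):
import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;

ObjectMapper mapper = new ObjectMapper();
// Let Jackson read and write all fields directly, bypassing getters/setters
mapper.setVisibility(PropertyAccessor.FIELD, JsonAutoDetect.Visibility.ANY);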
Another reason I like this approach over Java serialization is that it is easier for a human to debug (just look at the string to see what the data is). Additionally, there are tons of tools out there for working with JSON:
Online JSON formatters
Viewing JSON as an HTML-based webpage
whereas the tooling for Java serialization isn't great.
One drawback to this approach is that, by default, duplicate references in the original object become distinct objects in the copy. Here is an example:
public class CloneTest {
    static class MyObject { }

    static class MyObjectContainer {
        MyObject refA;
        MyObject refB;
        // Getters and setters omitted
    }

    public static void runTest() throws Exception {
        MyCloner cloner = new MyCloner();
        cloner.setCloner(new ObjectMapper());
        MyObjectContainer container = new MyObjectContainer();
        MyObject duplicateReference = new MyObject();
        container.setRefA(duplicateReference);
        container.setRefB(duplicateReference);
        MyObjectContainer cloned = cloner.clone(container);
        System.out.println(cloned.getRefA() == cloned.getRefB());       // Will print false
        System.out.println(container.getRefA() == container.getRefB()); // Will print true
    }
}
Given that there are several approaches to this problem, each with its own pros and cons, I would claim there isn't a 'proper' way to implement the Prototype pattern in Java. The right approach depends heavily on the environment you find yourself coding in. If you have constructors which do heavy computation (and you can't circumvent them), then I suppose you don't have much option but to use deserialization. Otherwise, I would prefer the JSON/XML approach. If external libraries weren't allowed and I could modify my beans, then I'd use Dave's approach.
Your question is really interesting, Luiggi (I voted for it because the idea is great); it's a pity you don't say what you are really concerned about. So I'll try to answer what I know and let you choose what you find arguable:
Advantages:
In terms of memory use, serialization gives very good memory consumption, since it serializes your objects in a binary format (rather than in text like JSON, or worse, XML). You may have to choose a strategy for keeping your "pattern" objects in memory as long as you need them, persisting them with a "least used, first persisted" or "first used, first persisted" strategy.
Coding it is pretty straightforward. There are some rules to respect, but if you don't have many complex structures, it remains maintainable.
No need for external libraries; this is quite an advantage in institutions with strict security/legal rules (validations for each library to be used in a program).
Provided you don't need to maintain your objects across versions of the program or of the JVM, you can profit from each JVM update: speed is a real concern for Java programs, and it's closely related to I/O operations (JMX, memory reads/writes, NIO, etc.). So there is a good chance that new versions will have optimized I/O, memory usage, and serialization algorithms, and you will find yourself writing and reading faster with no code change.
Disadvantages:
You lose all your prototypes if you change any object in the tree; serialization only works with the same object definition.
You need to deserialize an object to see what is inside it, as opposed to a prototype that is 'self-documenting' when taken from a Spring/Guice configuration file. The binary objects saved to disk are pretty opaque.
If you're planning to build a reusable library, you're imposing a fairly strict pattern on your library's users (implementing Serializable on each object, or marking non-serializable fields as transient). In addition, these constraints cannot be checked by the compiler; you have to run the program to see if something is wrong (which might not be visible immediately if an object in the tree is null during the tests). Naturally, I'm comparing it to other prototyping technologies (Guice, for example, had compile-time checking as its main feature; Spring added it later too).
I think that's all that comes to my mind for now; I'll add a comment if any new aspect comes up :)
Naturally, I don't know how fast writing an object as bytes is compared to invoking a constructor; the answer would come from mass write/read tests.
But the question is worth thinking about.
There are cases where creating a new object using a copy constructor differs from creating a new object "in the standard way". One example is explained in the Wikipedia link in your question: to create a new WordOccurrences using the constructor WordOccurrences(text, word), we need to perform heavyweight computation. If we use the copy constructor WordOccurrences(wordOccurrences) instead, we immediately get the result of that computation (Wikipedia uses the clone method, but the principle is the same).
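A hypothetical shape for that class, with the analysis method invented for illustration:
class WordOccurrences {
    private final int[] positions;

    // The standard way: pays for the heavyweight text analysis
    WordOccurrences(String text, String word) {
        this.positions = analyze(text, word);
    }

    // The copy constructor: reuses the already-computed result
    WordOccurrences(WordOccurrences other) {
        this.positions = other.positions.clone();
    }

    private static int[] analyze(String text, String word) {
        // ...expensive scan of the text...
        return new int[0];
    }
}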
Is it possible to serialize an object with no fields in Jackson using only annotations? When I attempt to serialize such an object with no annotations I get:
Exception in thread "main" com.fasterxml.jackson.databind.JsonMappingException: No serializer found for class [redacted].SubjectObjectFeatureExtractor and no properties discovered to create BeanSerializer
I have examined the list of Jackson annotations without seeing a way to annotate the class as having no serializable data. I tried putting @JsonCreator on the empty constructor (not expecting it to work, since it's a deserialization annotation), and I got the same error. There are no accessors or fields to put @JsonProperty on. Any ideas?
Update: The reason for this is that I have a list of objects representing transformations that can be applied to a certain type of data. Some of these transformations are defined by parameters which need to be serialized, but some are parameter-less (the data-less objects in question). I'd like to be able to serialize and deserialize a sequence of these transformations. Also, I'm using DefaultTyping.NON_FINAL so that the class name will be serialized.
Update: An example class would be
class ExtractSomeFeature implements FeatureExtractor<SomeOtherType> {
public void extractFeature(SomeOtherType obj, WeightedFeatureList output) {
// do stuff
}
}
I don't particularly care what the JSON for this looks like, as long as I can deserialize a List<FeatureExtractor> properly. My impression is that, using default typing, the expected JSON would be something like:
['com.mycompany.foo.ExtractSomeFeature', {}]
Other sub-classes of FeatureExtractor would have real parameters, so they would presumably look something like:
['com.mycompany.foo.SomeParameterizedFeature', {some actual JSON stuff in here}]
I think I could use @JsonValue on some toJSONString() method to return {}, but if possible I'd like to hide such hackery from end-users who will be creating FeatureExtractor sub-classes.
You have to configure your object mapper to support this case.
ObjectMapper objectMapper = ...
objectMapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false);
The documentation of this feature can be found here : Fail on empty beans
Feature that determines what happens when no accessors are found for a
type (and there are no annotations to indicate it is meant to be
serialized). If enabled (default), an exception is thrown to indicate
these as non-serializable types; if disabled, they are serialized as
empty Objects, i.e. without any properties.
The answer suggesting disabling SerializationFeature.FAIL_ON_EMPTY_BEANS applies globally, so you might not wish to use it.
The answer suggesting adding any serialization annotation showed the correct way to fix it (correct as in: what the Javadoc of SerializationFeature.FAIL_ON_EMPTY_BEANS suggests), but only with a hackish or unrelated annotation.
By merely adding…
@JsonSerialize
… to my class (not even parentheses after it, let alone arguments!), I was able to produce the same effect (as, again, indicated by the Javadoc of SerializationFeature.FAIL_ON_EMPTY_BEANS).
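Applied to the asker's example class, that is simply (a sketch):
import com.fasterxml.jackson.databind.annotation.JsonSerialize;

@JsonSerialize // bare annotation; no arguments needed
class ExtractSomeFeature implements FeatureExtractor<SomeOtherType> {
    public void extractFeature(SomeOtherType obj, WeightedFeatureList output) {
        // do stuff
    }
}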
Adding the following annotation onto the class seems to solve the problem:
@JsonAutoDetect(fieldVisibility = JsonAutoDetect.Visibility.NONE)
Adding an unrelated annotation like
@JsonRootName("fred")
also seems to fix it. This matches the claim in the JIRA ticket that adding any Jackson annotation to the class will prevent the exception. However, it appears that adding annotations within the class does not.
Not sure I get your question, but perhaps you want JsonInclude.Include.NON_DEFAULT, JsonInclude.Include.NON_NULL, or JsonInclude.Include.NON_EMPTY.
I'm using Jersey 1.x here, and I have a @POST method that requires sending over a deeply nested, complex object. I'm not sure of all my options, but it seems like a lot of them are described in this documentation:
In general the Java type of the method parameter may:
Be a primitive type;
Have a constructor that accepts a single String argument;
Have a static method named valueOf or fromString that accepts a single String argument (see, for example, Integer.valueOf(String) and
java.util.UUID.fromString(String)); or
Be List<T>, Set<T> or SortedSet<T>, where T satisfies 2 or 3 above. The resulting collection is read-only.
Ideally, I wish that I could define a method like this:
@POST
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
@Path("complexObject")
public void complexObject(@FormParam("complexObject") ComplexObject complexObject) throws Exception {
But I guess I can only do that if my object satisfies the requirements above (which in my case, it does not). To me it seems that I have a choice.
Option 1: Implement fromString
Implement item #3 above.
Option 2: Pass in the complexObject in pieces
Break up the complexObject into pieces so the parameters become this:
@POST
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
@Path("complexObject")
public void complexObject(@FormParam("piece1") LessComplexPiece lessComplexPiece1,
        @FormParam("piece2") LessComplexPiece lessComplexPiece2,
        @FormParam("piece3") LessComplexPiece lessComplexPiece3) throws Exception {
This may not be enough if LessComplexPiece does not satisfy the requirements above. I'm wondering what the best option is here. What do people usually do in this situation? Here are the pros and cons I can think of:
Cons of implementing fromString
Have to maintain a custom deserializer. Every time the class is modified, the deserializer may break; there's more risk of bugs in general.
It will probably be impossible to generate documentation that describes the pieces of the complex object. I'll have to write that by hand.
For each piece of the complex object, I'll have to write my own casting and validation logic.
I'm not sure what the POST data would look like, but this may make it very difficult for someone to call the API from a web page form. If the resource accepted primitives, it would be easy, e.g. complexObject=serializedString vs. firstName=John and lastName=Smith.
You may not be able to modify the class for various reasons (thankfully, this is not a limitation for me)
Pros of implementing fromString
This could avoid a method with a ton of parameters. This will make the API less intimidating to use.
This argument is at the level of abstraction I want to work at in the body of my method:
I won't have to combine the pieces together by hand (well technically I will, it'll just have to be in the deserializer method)
The deserializer can be a library that automates the process (XStream, Genson, etc.) and saves me a lot of time. This can mitigate the bug risk.
You may run into "namespace" clashes if you flatten the object to send over the pieces. For example, imagine sending over an Employee. If he has a Boss, you now have to provide an EmployeeFirstName and a BossFirstName. If you were just deserializing an object, you could nest the data appropriately and not have to include context in your parameter names.
So which option should I choose? Is there a 3rd option I'm not aware of?
I know that this question is old, but in case anybody else has this problem, there is a new, better solution since JAX-RS 2.0: @BeanParam. From the documentation:
The annotation that may be used to inject custom JAX-RS "parameter aggregator" value object into a resource class field, property or resource method parameter.
The JAX-RS runtime will instantiate the object and inject all its fields and properties annotated with either one of the @XxxParam annotations (@PathParam, @FormParam, ...) or the @Context annotation. For the POJO classes, the same instantiation and injection rules apply as in the case of instantiation and injection of request-scoped root resource classes.
If you are looking for an extended explanation of how this works, have a look at this article I found:
http://java.dzone.com/articles/new-jax-rs-20-%E2%80%93-beanparam
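Applied to the question's example, a sketch might look like this (the aggregator class name is made up):
// JAX-RS injects each @FormParam into the corresponding field of the aggregator
public class ComplexObjectParam {
    @FormParam("piece1")
    private LessComplexPiece lessComplexPiece1;
    @FormParam("piece2")
    private LessComplexPiece lessComplexPiece2;
    @FormParam("piece3")
    private LessComplexPiece lessComplexPiece3;
    // getters and setters omitted
}

@POST
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
@Path("complexObject")
public void complexObject(@BeanParam ComplexObjectParam param) throws Exception {
    // work with param's pieces here
}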
For complex object models, you may want to consider using JSON or XML binding instead of a URL-encoded string to pass your objects to your resource call, so that you can rely on the JAXB framework.
The Jersey client library is compatible with JAXB and can handle all the marshalling transparently for you if you annotate your classes with @XmlRootElement.
For documentation, XSDs are a good starting point if you choose the XML binding.
Other REST documentation tools like enunciate can take the automatic generation to the next level.
What about a special handler which transforms the object to e.g. JSON, or Kryo if you prefer performance? You've got a couple of options.
Look also at persistence ignorance.
In Java, I would like to use hierarchies of immutable POJOs to express my domain model.
e.g.
final ServiceId id = new ServiceId(ServiceType.Foo, "my-foo-service")
final ServiceConfig cfg = new ServiceConfig("localhost", 8080, "abc", JvmConfig.DEFAULT)
final ServiceInfo info = new ServiceInfo(id, cfg)
All of these POJOs have public final fields with no getters or setters. (If you are a fan of getters, please pretend that the fields are private with getters.)
I would also like to serialize these objects using the MessagePack library in order to pass them around over the network, store them to ZooKeeper nodes, etc.
The problem is that MessagePack only supports serialization of public, non-final fields, so I cannot serialize the business objects as-is. Also, MessagePack does not support enums, so I have to convert enum values to int or String for serialization. (Yes it does, if you add an annotation to your enums. See my comment below.)
To deal with this, I have a corresponding hand-written hierarchy of "message" objects, with conversions between each business object and its corresponding message object. Obviously this is not ideal, because it causes a large amount of duplicated code, and human error could result in missing fields, etc.
Are there any better solutions to this problem?
Code generation at compile time?
Some way to generate the appropriate serializable classes at runtime?
Give up on MessagePack?
Give up on immutability and enums in my business objects?
Is there some kind of generic wrapper library that can wrap a mutable object (the message object) into an immutable one (the business object)?
MessagePack also supports serialization of Java Beans (using the @MessagePackBeans annotation), so if I can automatically convert an immutable object to/from a Java Bean, that may get me closer to a solution.
Coincidentally, I recently created a project that does pretty much exactly what you are describing. The use of immutable data models provides huge benefits, but many serialization technologies seem to approach immutability as an afterthought. I wanted something that would fix this.
My project, Grains, uses code generation to create an immutable implementation of a domain model. The implementation is generic enough that it can be adapted to different serialization frameworks. MessagePack, Jackson, Kryo, and standard Java serialization are supported so far.
Just write a set of interfaces that describe your domain model. For example:
public interface ServiceId {
enum ServiceType {Foo, Bar}
String getName();
ServiceType getType();
}
public interface ServiceConfig {
enum JvmConfig {DEFAULT, SPECIAL}
String getHost();
int getPort();
String getUser();
JvmConfig getType();
}
public interface ServiceInfo {
ServiceId getId();
ServiceConfig getConfig();
}
The Grains Maven plugin then generates immutable implementations of these interfaces at compile time. (The source it generates is designed to be read by humans.) You then create instances of your objects. This example shows two construction patterns:
ServiceIdGrain id = ServiceIdFactory.defaultValue()
.withType(ServiceType.Foo)
.withName("my-foo-service");
ServiceConfigBuilder cfg = ServiceConfigFactory.newBuilder()
.setHost("localhost")
.setPort(8080)
.setUser("abc")
.setType(JvmConfig.DEFAULT);
ServiceInfoGrain info = ServiceInfoFactory.defaultValue()
.withId(id)
.withConfig(cfg.build());
Not as simple as your public final fields, I know, but inheritance and composition are not possible without getters and setters. And these objects are easily read and written with MessagePack:
MessagePack msgpack = MessagePackTools.newGrainsMessagePack();
byte[] data = msgpack.write(info);
ServiceInfoGrain unpacked = msgpack.read(data, ServiceInfoGrain.class);
If the Grains framework doesn't work for you, feel free to inspect its MessagePack templates.
You can write a generic TemplateBuilder that uses reflection to set the final fields of your hand-written domain model. The trick is to create a custom TemplateRegistry that allows registration of your custom builder.
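The core trick, reflectively writing a final field, can be sketched like this (the MessagePack plumbing around it is omitted; on modern JVMs, module access rules may also apply):
import java.lang.reflect.Field;

class FinalFieldWriter {
    // Sets a (possibly final) instance field by name via reflection
    static void setFinalField(Object target, String fieldName, Object value)
            throws ReflectiveOperationException {
        Field field = target.getClass().getDeclaredField(fieldName);
        field.setAccessible(true); // lifts access checks for this instance field
        field.set(target, value);
    }
}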
It sounds like you have merged, rather than separated, the read and write concerns of your application. You should probably consider CQRS at this point.
In my experience, immutable domain objects are almost always attached to an audit story (requirement), or it's lookup data (enums).
Your domain should probably be, mostly, mutable, but you still don't need getters and setters. Instead you should have verbs on your objects which result in a modified domain model, and which raise events when something interesting happens in the domain (interesting to the business -- business == someone paying for your time). It's probably the events that you're interested in passing over the wire, not the domain objects. Maybe it's even the commands (these are similar to events, but the source is an agent external to the bounded context in which your domain lives -- events are internal to the model's bounded context).
You can have a service to persist the events (and another one to persist commands), which is also your audit-log (fulfilling your audit stories).
You can have an event handler that pushes your events onto your bus. These events should contain either simple information or entity ID's. The services that respond to these events should perform their duties using the information provided, or they should query for the information they need using the given ID's.
You really shouldn't be exposing the internal state of your domain model. You're breaking encapsulation by doing that, and that's not really a desirable thing to do. If I were you I'd take a look at the Axon Framework. It's likely to get you further than MessagePack alone.