How to properly map between persistence layer and domain object - Java

Let's say I have a domain java class representing a person:
class Person {
    private final String id; // government id
    private String name;
    private String status;

    private Person(String id, String name) {
        this.id = id;
        this.name = name;
        this.status = "NEW";
    }

    static Person createNew(String id, String name) {
        return new Person(id, name);
    }

    void validate() {
        // logic
        this.status = "VALID";
    }

    public static final class Builder {
        private String id;
        private String name;
        private String status;

        private Builder() {
        }

        public static Builder aPerson() {
            return new Builder();
        }

        public Builder id(String id) {
            this.id = id;
            return this;
        }

        public Builder name(String name) {
            this.name = name;
            return this;
        }

        public Builder status(String status) {
            this.status = status;
            return this;
        }

        public Person build() {
            Person person = new Person(id, name);
            person.status = this.status;
            return person;
        }
    }
}
I store this domain object in a database, in a regular class with the same fields plus getters and setters. Currently, when I want to store the object (the data lives in Mongo), I create a new PersonDocument, copy the values across with getters and setters, and save it. It gets more complicated when I want to fetch it from the DB. I would like my domain object to expose only what is necessary for the business logic; currently that is only creation and validation. Simply:
Person p = Person.createNew("1234", "John");
p.validate();
repository.save(p);
The other direction is where it gets complicated: currently there is a builder which allows creating the object in any state. We trust that data stored in the DB is in a proper state, so the object can be created that way, but the downside is that this public API lets anyone do anything.
The initial idea was to use the MapStruct Java mapping library, but it uses setters to create objects, and exposing setters in the domain class (as far as I can tell) should be avoided.
Any suggestions on how to do this properly?

Your problem likely comes from two conflicting requirements:
You want to expose only business methods.
You want to expose data too, since you want to be able to implement serialization/deserialization external to the object.
One of those has to give. To be honest, most people faced with this problem ignore the first one and just introduce setters/getters. The alternative is of course to ignore the second one and move the serialization/deserialization into the object itself.
For example, you can introduce a method Document toDocument() on the object that produces the Mongo-compatible JSON document, and also a Person fromDocument(Document) to deserialize.
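A minimal sketch of what that could look like with the MongoDB Java driver's org.bson.Document, reusing the Person class from the question (the field names in the document are illustrative):

import org.bson.Document;

class Person {
    private final String id;
    private String name;
    private String status;

    private Person(String id, String name) {
        this.id = id;
        this.name = name;
        this.status = "NEW";
    }

    // Serialization lives inside the object, so no getters are needed.
    Document toDocument() {
        return new Document("_id", id)
                .append("name", name)
                .append("status", status);
    }

    // Deserialization is a named factory; builders and setters stay hidden.
    static Person fromDocument(Document doc) {
        Person p = new Person(doc.getString("_id"), doc.getString("name"));
        p.status = doc.getString("status");
        return p;
    }
}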
Most people don't like this sort of solution, because it "couples" the technology to the object. Is that a good or bad thing? It depends on your use-case. Which one do you want to optimize for: changing business logic or changing technologies? If you're not planning to change technologies very often and don't plan on using the same class in a completely different application, there's no reason to separate the technology.

Robert Bräutigam's sentence is good:
Two conflicting requirements
But there is another sentence, by Alan Kay, that is better:
“I’m sorry that I long ago coined the term “objects” for this topic
because it gets many people to focus on the lesser idea. The big idea
is messaging.” ~ Alan Kay
So, instead of dealing with the conflict, let's just change the approach to avoid it. The best way I have found is to take a functional approach and avoid unnecessary state and mutation in classes by expressing domain changes as events.
Instead of mapping classes (aggregates, value objects and/or entities) to persistence, I do this:
Build an aggregate with the data needed (value objects and entities) to apply the aggregate's rules and invariants for a given action. This data comes from persistence. The aggregate exposes no getters or setters; just actions.
Call the aggregate's action with command data as a parameter. This will call inner entities' actions in case the overall rules need it. This allows responsibility segregation and decoupling, as the Aggregate Root does not have to know how its inner entities are implemented (Tell, don't ask).
Actions (on aggregate roots and inner entities) do not modify their inner state; instead they return events expressing the domain change. The aggregate's main action coordinates and checks the events returned by its inner entities to apply rules and invariants (the aggregate has the "big picture") and builds the final Domain Event that is the output of the main action call.
Your persistence layer has an apply method for every Domain Event it has to handle (Persistence.Apply(event)). This way your persistence knows exactly what happened and, as long as the event carries all the data needed to persist the change, can apply it (even with behaviour, if needed!).
Publish your Domain Event. Let the rest of your system know that something has just happened.
Check this post (it's worth checking the whole DDD series on that blog) to see a similar implementation.
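A minimal sketch of the idea in Java; the names (DomainEvent, PersonValidated, PersonPersistence) are hypothetical, not from any particular library:

// A domain change is expressed as an immutable event.
interface DomainEvent {}

final class PersonValidated implements DomainEvent {
    final String personId;
    PersonValidated(String personId) { this.personId = personId; }
}

class Person {
    private final String id;

    Person(String id) { this.id = id; }

    // The action does not mutate state; it returns the event describing the change.
    PersonValidated validate() {
        // ...check aggregate rules and invariants here...
        return new PersonValidated(id);
    }
}

// The persistence layer has an apply method per event it handles.
interface PersonPersistence {
    void apply(PersonValidated event); // e.g. set status = "VALID" where id = event.personId
}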

I do it this way:
The Person domain entity has state (in the sense of the fields that define the entity, not your "status" field) and behaviour (methods).
What is stored in the DB is just the state. So I create a "PersonStatus" interface in the domain (with getter methods for the fields that we need to persist), so that PersonRepository deals with the state.
The Person entity implements PersonStatus (or, instead of this, you can add a static method that returns the state).
In the infrastructure I have a PersonDB class implementing PersonStatus too, which is the persistence model.
So:
DOMAIN MODEL:
// ENTITY
public class Person implements PersonStatus {

    // Fields that define the state
    private String id;
    private String name;
    ...

    // Constructors and behaviour
    ...
    ...

    // Methods implementing PersonStatus
    @Override
    public String id() {
        return this.id;
    }

    @Override
    public String name() {
        return this.name;
    }

    ...
}
// STATUS OF ENTITY
public interface PersonStatus {
    public String id();
    public String name();
    ...
}

// REPOSITORY
public interface PersonRepository {
    public void add ( PersonStatus personStatus );
    public PersonStatus personOfId ( String anId );
}
INFRASTRUCTURE:
public class PersonDB implements PersonStatus {

    private String id;
    private String name;
    ...

    public PersonDB ( String anId, String aName, ... ) {
        this.id = anId;
        this.name = aName;
        ...
    }

    @Override
    public String id() {
        return this.id;
    }

    @Override
    public String name() {
        return this.name;
    }

    ...
}
// AN IN-MEMORY REPOSITORY IMPLEMENTATION
public class InmemoryPersonRepository implements PersonRepository {

    private Map<String,PersonDB> inmemoryDb;

    public InmemoryPersonRepository() {
        this.inmemoryDb = new HashMap<String,PersonDB>();
    }

    @Override
    public void add ( PersonStatus personStatus ) {
        PersonDB personDB = new PersonDB ( personStatus.id(), personStatus.name(), ... );
        this.inmemoryDb.put ( personDB.id(), personDB );
    }

    @Override
    public PersonStatus personOfId ( String anId ) {
        return this.inmemoryDb.get ( anId );
    }
}
APPLICATION LAYER:
...
Person person = new Person ( "1", "John Doe", ... );
personRepository.add ( person );
...
PersonStatus personStatus = personRepository.personOfId ( "1" );
Person person = new Person ( personStatus.id(), personStatus.name(), ... );
...

It basically boils down to two things, depending on how much extra work you are willing to put into the necessary infrastructure and how constraining your ORM/persistence is.
Use the CQRS+ES pattern
The most obvious choice, used in bigger and more complex domains, is the CQRS (Command/Query Responsibility Segregation) plus "Event Sourcing" pattern. This means that each mutating action generates an event that is persisted.
When your aggregate is loaded, all the events will be loaded from the database and applied in chronological order. Once applied, your aggregate will have its current state.
CQRS just means that you separate read and write operations. Write operations happen in the aggregate by creating events (by applying commands), which are stored/read via Event Sourcing.
The "Query" side consists of queries on projected data, which uses the events to build a current state of the object that is used for querying and reading only. Aggregates are still read by reapplying all the events from the event sourcing storage.
Pros
You have a history of all changes that were done on the aggregate. This can be seen as added value for the business and for auditing.
If your projected database is corrupted or in an invalid state, you can restore it by replaying all the events and generating the projection anew.
It's easy to revert to a previous state in time (i.e. by applying compensating events that do the opposite of what a previous event did).
It's easy to fix a bug (i.e. in calculating the state of the aggregate) and then replay all the events to get the new, corrected value.
Assume you have a BankingAccount aggregate that calculates the balance, and you used regular rounding instead of "round to even". Here you can fix the calculation, then reapply all the events, and you get the new, correct account balance.
Cons
Aggregates with thousands of events can take some time to materialize (the Snapshot/Memento pattern can be used here to load a snapshot and apply only the events after that snapshot)
Initially more time to implement the necessary infrastructure
You can't query event-sourced aggregates without a read store; this requires a projection and a message queue to publish the event sourcing events, so they can be processed and applied to a projection (SQL or document table) which can be used for queries
Map directly to Domain Entities
Some ORM and document database providers allow you to map directly to backing fields, i.e. via reflection.
In the MongoDB C# driver it can be done via something like in the linked answer.
The same applies to the EF Core ORM. I'm sure there's something similar in the Java world too.
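In the Java world, for example, Spring Data MongoDB maps documents onto private fields by reflection and can rehydrate objects through a single (even private) constructor, so no setters or persistence annotations are needed. A hedged sketch, assuming default conventions (constructor resolution and conventions vary by Spring Data version):

// No persistence annotations: Spring Data MongoDB matches document fields
// to these private fields by name, and rehydrates instances through the
// single constructor, even though it is private.
class Person {
    private final String id;    // used as the document id by convention
    private String name;
    private String status;

    private Person(String id, String name, String status) {
        this.id = id;
        this.name = name;
        this.status = status;
    }

    static Person createNew(String id, String name) {
        return new Person(id, name, "NEW");
    }

    void validate() {
        this.status = "VALID";
    }
}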
This may limit your choice of persistence library and technology, since it requires one which supports such APIs via fluent or code configuration. You can't use attributes/annotations for this, because these are usually database specific and would leak persistence knowledge into your domain.
It also MAY limit your ability to use the strongly typed querying API (LINQ in C#, Streams in Java), because that generally requires getters and setters, so you may have to use magic strings (with the names of the fields or properties in the storage) in the persistence layer.
It may be acceptable for smaller/less complex domains. But CQRS+ES should be preferred if possible and within budget/timeline, since it's the most flexible and works with all persistence storages and frameworks (even with key-value stores).
Pros
Not necessary to leverage more complex infrastructure (CQRS, ES, Pub/Sub messaging/queues)
No leaking of persistence knowledge into your models and no need to break encapsulation
Cons
No history of changes
No way to restore a previous state
May require magic strings when querying in the persistence layer (depends on framework/orm)
Can require a lot of fluent/code configuration in the persistence layer, to map it to the backing field
May break, when you rename the backing field

Related

Persistent Model to Domain Model mapping without exposing domains object attributes

I know this is a common question, but I haven't found another that solves my doubts.
Usually, if the project is small, I put the persistence annotations in the same object that represents the domain object. This allows loading the entity from the database while keeping all the setters private, ensuring any instance is always in a valid state. Something like:
@Entity
class SomeEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String attribute1;
    private String attribute2;
    private String attribute3;
    // ... other attributes

    protected SomeEntity() {}

    /* Public getters */
    public Long getId() { ... }
    public String getAttribute1() { ... }
    public String getAttribute2() { ... }

    /* Expose some behaviour */
    public void updateAttributes(String attribute1, String attribute2) {
        /* do some validations before updating */
    }
}
My problem appears if I want to have a different persistence model. Then I would have something like:
/* SomeEntity without persistence info */
class SomeEntity {

    private Long id;
    private String attribute1;
    private String attribute2;
    private String attribute3;
    // ... other attributes

    protected SomeEntity() {}

    /* Public getters */
    public Long getId() { ... }
    public String getAttribute1() { ... }
    public String getAttribute2() { ... }

    /* Expose some behaviour */
    public void updateAttributes(String attribute1, String attribute2) {
        /* do some validations before updating */
    }
}
and DAO:
@Entity
class SomeEntityDAO {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String attribute1;
    private String attribute2;
    private String attribute3;

    public SomeEntityDAO() {}

    /* All getters and setters */
}
My question is, how can I map SomeEntityDAO to SomeEntity without exposing SomeEntity's attributes?
If I create a constructor like public SomeEntity(String attribute1, String attribute2, ...) {}, then anyone can create an invalid instance of SomeEntity. The same occurs if I make all setters public in SomeEntity.
I also don't think building the object using updateAttributes() is a valid solution, since this will execute some validations I don't want to execute at this point (we trust the data that's persisted in the database).
I'm thinking of making all the setters protected, so the DAO can extend the entity and have access to the setters... but I'm not sure if this is a good option.
What is the best or most common approach to solve this problem?
I've had the same kind of problem, and looking around I've found no solution. Believe me, if one exists it is well hidden somewhere. Nothing suggests what to do when you have to deal with an old project where ORM entities are everywhere and there's a big gap between the domain and the ORM model.
Given this, I've concluded that if you really want to keep your domain entities pure (so no get and set - the latter I would NEVER accept!) you have to make some trade-offs. There's no way to share the internals without giving the entities some extra knowledge. Beware, this doesn't mean that you have to make the domain entities aware of the ORM layer, nor that you have to use getters. What I've concluded is that the domain entities should have ways to expose themselves as a different model.
So, in conclusion, what I would do in your situation is build up a Visitor pattern. The domain entity EntityA would implement an EntityAVisitable interface to accept an EntityAVisitor, or something like this.
interface EntityAVisitable {
    void accepts(EntityAVisitor<?> visitor);
}
The builder implements the interface required by the Visitor, EntityAVisitor.
interface EntityAVisitor<T> {
    void setCombinedValue1_2(String attribute1_attribute2_combinedInEntity);
    T build();
}
The build() method of the EntityAVisitor interface uses a generic type T. In this way the domain entity is agnostic about the return type of the concrete implementation of the EntityAVisitor.
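A sketch of how the pieces could fit together; the combined value and SomeEntityDAO target are illustrative (SomeEntityDAO being the persistence model from the question):

// The domain entity exposes its internals only to a visitor it controls.
class EntityA implements EntityAVisitable {
    private String attribute1;
    private String attribute2;

    @Override
    public void accepts(EntityAVisitor<?> visitor) {
        // The entity decides exactly what is exposed, and how.
        visitor.setCombinedValue1_2(attribute1 + "|" + attribute2);
    }
}

// A concrete visitor that builds the persistence model.
class EntityADaoVisitor implements EntityAVisitor<SomeEntityDAO> {
    private String combined;

    @Override
    public void setCombinedValue1_2(String combined) {
        this.combined = combined;
    }

    @Override
    public SomeEntityDAO build() {
        SomeEntityDAO dao = new SomeEntityDAO();
        // ...split 'combined' and populate the DAO's fields...
        return dao;
    }
}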
Is it perfect? No.
The perfect solution would be to get rid of the ORM (actually, I would say that I hate ORMs, because the way they are used is most of the time wrong - but that is my personal opinion).
Is it nice? No.
A nicer solution is not possible due to language restrictions (I suppose you use Java).
Does it do a good job of encapsulating the real content of your domain entity? Yes.
Moreover, this way you can decide exactly what is exposed and how. So, in my opinion, it is a good trade-off between keeping the entity pure and having to work with an ORM under the hood.
A domain entity should be self-validating, meaning it should only validate itself based on its internal values. If an update requires validation that depends on external dependencies, then I would create an updater class that is responsible for the update. From the updater class, you can use the specification pattern (as an injectable dependency) to implement the validation.
Use domain entities when modifying, and DTOs for read-only projections. There are performance and simplification gains when you use straight DTOs for read-only operations. This is used in CQRS patterns.
class SomeEntity {

    private Long id;
    private String attribute1;
    private String attribute2;
    private String attribute3;
    // ... other attributes

    public SomeEntity() {}

    /* Public getters/setters */
    public Long getId() { ... }
    public String getAttribute1() { ... }
    public String getAttribute2() { ... }
    public void setId(Long id) { ... }
    public void setAttribute1(String attribute1) { ... }
    public void setAttribute2(String attribute2) { ... }
}

// classes/interfaces named for clarity
class EntityUpdater implements IEntityUpdater {

    public EntityUpdater (ISpecification spec) {
    }

    public void updateEntity(SomeEntity entity) {
        // assert/execute validation
    }
}
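A minimal sketch of how the specification could be wired in, fleshing out the updater above (ISpecification and the example rule are illustrative names, not from a particular library):

// A specification encapsulates one validation rule as an object.
interface ISpecification<T> {
    boolean isSatisfiedBy(T candidate);
}

// Example rule; injected into the updater, so validation that needs
// external dependencies stays out of the entity itself.
class AttributesAreConsistent implements ISpecification<SomeEntity> {
    @Override
    public boolean isSatisfiedBy(SomeEntity entity) {
        return entity.getAttribute1() != null
                && !entity.getAttribute1().equals(entity.getAttribute2());
    }
}

class EntityUpdater {
    private final ISpecification<SomeEntity> spec;

    public EntityUpdater(ISpecification<SomeEntity> spec) {
        this.spec = spec;
    }

    public void updateEntity(SomeEntity entity) {
        if (!spec.isSatisfiedBy(entity)) {
            throw new IllegalStateException("Entity fails validation");
        }
        // ...perform the update...
    }
}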
Some ORMs allow setting entity values through field access (as opposed to setter methods).
JPA uses the @Access annotation. See What is the purpose of AccessType.FIELD, AccessType.PROPERTY and @Access
I created an ORM, sormula, that can use field access. See @Row fieldAccess and test case org.sormula.tests.fieldaccess.
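With JPA field access, for example, the provider populates the private fields directly via reflection, so no setters are needed; a minimal sketch (javax.persistence here, jakarta.persistence in newer stacks):

import javax.persistence.Access;
import javax.persistence.AccessType;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
@Access(AccessType.FIELD) // the provider reads/writes the fields directly
public class PersonEntity {

    @Id
    private String id;
    private String name;
    private String status;

    protected PersonEntity() {} // required by JPA, not part of the public API

    public static PersonEntity createNew(String id, String name) {
        PersonEntity p = new PersonEntity();
        p.id = id;
        p.name = name;
        p.status = "NEW";
        return p;
    }

    // expose only business methods; no setters
}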

jOOQ: Allowed-Character constraints?

I am considering moving from Hibernate to jOOQ, but I can't find, for example, how to have pattern constraints on a String like this in Hibernate:
@NotEmpty(message = "Firstname cannot be empty")
@Pattern(regexp = "^[a-zA-Z0-9_]*$", message = "First Name can only contain characters.")
private String firstname;
How would I do that in jOOQ?
The "jOOQ way"
The "jOOQ way" to do such validation would be to create either:
A CHECK constraint in the database.
A trigger in the database.
A domain in the database.
After all, if you want to ensure data integrity, the database is where such constraints and integrity checks belong (possibly in addition to functionally equivalent client-side validation). Imagine a batch job, a Perl script, or even a JDBC statement that bypasses JSR-303 validation. You'll find yourself with corrupt data in no time.
If you do want to implement client-side validation, you can still use JSR-303 on your DTOs, which interact with your UI, for instance. But you will have to perform validation before passing the data to jOOQ for storage (as artbristol explained).
Using a Converter
You could, however, use your own custom type by declaring a Converter on individual columns and by registering such Converter with the source code generator.
Essentially, a Converter is:
public interface Converter<T, U> extends Serializable {
    U from(T databaseObject);
    T to(U userObject);
    Class<T> fromType();
    Class<U> toType();
}
In your case, you could implement your annotations as such:
public class NotEmptyAlphaNumericValidator implements Converter<String, String> {

    // Validation
    public String to(String userObject) {
        assertNotEmpty(userObject);
        assertMatches(userObject, "^[a-zA-Z0-9_]*$");
        return userObject;
    }

    // Boilerplate
    public String from(String databaseObject) { return databaseObject; }
    public Class<String> fromType() { return String.class; }
    public Class<String> toType() { return String.class; }
}
Note that this is more of a workaround, as Converter hasn't been designed for this use-case, even if it can perfectly implement it.
Using formal client-side validation
There's also a pending feature request #4543 to add more support for client-side validation. As of jOOQ 3.7, this is not yet implemented.
I recommend you don't try to use jOOQ in a 'hibernate/JPA' way. Leave the jOOQ generated classes as they are and map to your own domain classes manually, which you are free to annotate however you like. You can then call a JSR validator before you attempt to persist them.
For example, jOOQ might generate the following class
public class BookRecord extends UpdatableRecordImpl<BookRecord> {

    private String firstname;

    public void setId(Integer value) { /* ... */ }
    public Integer getId() { /* ... */ }
}
You can create your own domain object
public class Book {

    @NotEmpty(message = "Firstname cannot be empty")
    @Pattern(regexp = "^[a-zA-Z0-9_]*$", message = "First Name can only contain characters.")
    private String firstname;

    public void setId(Integer value) { /* ... */ }
    public Integer getId() { /* ... */ }
}
and map by hand once you've retrieved a BookRecord, in your DAO layer
Book book = new Book();
book.setId(bookRecord.getId());
book.setFirstname(bookRecord.getFirstname());
This seems quite tedious (and ORM tries to spare you this tedium), but in my opinion it actually scales quite well to complicated domain objects, and it's always easy to figure out the flow of data in your application.
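A minimal sketch of calling the JSR-303 validator by hand before persisting, using the standard Bean Validation API (javax.validation here; jakarta.validation in newer stacks):

import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.validation.Validation;
import javax.validation.Validator;

// typically created once and reused
Validator validator = Validation.buildDefaultValidatorFactory().getValidator();

Set<ConstraintViolation<Book>> violations = validator.validate(book);
if (!violations.isEmpty()) {
    throw new ConstraintViolationException(violations);
}
// only now map the Book back onto a BookRecord and store it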

GAE Endpoints (Java) with objectify - how to model partial data (for client)?

I have the following entities:
@Entity
public class Person {
    @Id public Long id;
    public String name;
    public Ref<Picture> picture;
    public String email;
    public byte age;
    public short birthday; // day of year
    public String school;
    public String very_long_life_story;
    ... some extra fields ...
}

@Entity
public class Place {
    @Id public Long id;
    public String name;
    public String comment;
    public long createdDateMS;
    public long visitors;
    @Load public List<Ref<Person>> owners;
}
A few notes:
(A) The maximum size of owners in the Place entity is 4 (~)
(B) The Person class is presumably very big, and when querying a place I would like to show only a subset of the person data. This optimization is aimed both at server-client and server-database communications. Since Objectify (GAE, actually) only loads/saves entire entities, I would like to do the following:
@Entity
public class PersonShort {
    @Id public Long id;
    public Ref<Picture> picture;
    public String name;
}
and inside Place, I would like to have (instead of owners):
@Load public List<PersonShort> owners;
(C) The problem with this approach is that now I have duplication inside the datastore.
Although this isn't such a bad thing, the real problem is that when a Person saves a new picture or changes their name, I will not only have to update the Person class,
but also search for every Place that has a PersonShort with the same id, and update that.
(D) So the question is: is there any solution? Or am I simply forced to choose between the options?
(1) Loading multiple Person instances, which are big, when all I need is some really small information about them.
(2) Data duplication with many writes.
If so, which one is better (currently, I believe it's 1)?
EDIT
What about loading the entire class (1), but sending only part of it?
@Entity
public class Person {
    @Id public Long id;
    public String name;
    public Ref<Picture> picture;
    public String email;
    public byte age;
    public short birthday; // day of year
    public String school;
    public String very_long_life_story;
    ... some extra fields ...
}

public class PersonShort {
    public long id;
    public String name;

    // constructor used by loadOwners() below
    public PersonShort(Person person) {
        this.id = person.id;
        this.name = person.name;
    }
}

@Entity
public class Place {
    @Id public Long id;
    public String name;
    public String comment;
    public long createdDateMS;
    public long visitors;

    // Ignore saving to datastore
    @Ignore
    public List<PersonShort> owners;

    // Do not serialize when sending to client
    @ApiResourceProperty(ignored = AnnotationBoolean.TRUE)
    @Load public List<Ref<Person>> ownersRef;

    @OnLoad private void loadOwners() {
        owners = new ArrayList<PersonShort>();
        for (Ref<Person> r : ownersRef) {
            owners.add(new PersonShort(r.get()));
        }
    }
}
It sounds like you are optimizing prematurely. Do you know you have a performance issue?
Unless you're talking about hundreds of K, don't worry about the size of your Person object in advance. There is no practical value in hiding a few extra fields unless the size is severe - and in that case, you should extract the big fields into some sort of meaningful entity (PersonPicture or whatnot).
No definite answer, but some suggestions to look at:
Lifecycle callbacks (see the sketch after this list).
When you put your Person entity, you can have an @OnSave handler to automatically store your new PersonShort entity. This has the advantage of being transparent to the caller, but obviously you are still dealing with 2 entity writes instead of 1.
You may also find you end up fetching two entities too; initially you may fetch the PersonShort and then later need some of the detail in the corresponding Person. Remember Objectify's caching can reduce your trips to the Datastore: it's arguably better to have one bigger, cached entity than two separate entities (meaning two RPCs).
Store your core properties (the ones in PersonShort) as separate properties in your Person class and then have the extended properties as a single JSON string which you can deserialize with Gson.
This has the advantage that you are not duplicating properties, but the disadvantage is that anything you want to be able to search on cannot be in the JSON blob.
Projection Queries. You can tell Datastore to return only certain properties from your entities. The problem with this method is that you can only return indexed properties, so you will probably find you need too many indexes for this to be viable.
Also, use @Load annotations with care. For example, in your Place class, think about whether you really need all those owners' Person details when you fetch the owners. Perhaps you only need one of them? I.e., instead of getting a Place and 4 Persons every time you fetch a Place, maybe you are better off just loading the required Person(s) when you need them? (It will depend on your application.)
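A sketch of the @OnSave idea from the first suggestion, assuming Objectify's lifecycle annotations (the synchronization logic and field names are illustrative):

import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import com.googlecode.objectify.annotation.OnSave;
import static com.googlecode.objectify.ObjectifyService.ofy;

@Entity
public class Person {
    @Id public Long id;
    public String name;
    // ... big fields ...

    // Runs just before this entity is written; keeps the denormalized
    // PersonShort in sync transparently to the caller (2 writes instead of 1).
    @OnSave
    void syncShortCopy() {
        PersonShort s = new PersonShort();
        s.id = id;
        s.name = name;
        ofy().save().entity(s);
    }
}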
It is good practice to return a different entity to your client than the one you get from your database. So you could create a ShortPerson or something that is only used as a return object in your REST endpoints. It would accept a Person in its constructor and fill in the properties you want to return to the client from that more complete object.
The advantage of this approach is actually less about optimization and more that your server models will change over time, but that change can be completely independent of your API. Also, you choose which data is publicly accessible, which is what you are trying to do.
As for the optimization between db and server, I wouldn't worry about it until it is an issue.

How to configure Achilles to work with entity methods properly?

I use the Achilles library for working with a Cassandra database. The problem is that when I create an entity method that affects fields, Achilles does not "see" these changes. See the example below.
import info.archinnov.achilles.persistence.PersistenceManager;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.UUID;

@Service
public class AhilesTest {

    private static final UUID ID = UUID.fromString("083099f6-e423-498d-b810-d6c564228724");

    // This is the Achilles persistence manager
    @Autowired
    private PersistenceManager persistenceManager;

    public void test () {
        // user creation and persistence
        User toInsert = new User();
        toInsert.setId(ID);
        toInsert.setName("name");
        toInsert.setVersion(0L);
        persistenceManager.insert(toInsert);

        // find user
        User user = persistenceManager.find(User.class, ID);
        user.changeName("newName");
        persistenceManager.update(user);

        User updatedUser = persistenceManager.find(User.class, ID);
        // here the old "name" value is returned
        updatedUser.getName();
    }

    public class User {
        private UUID id;
        private String name;
        private long version;

        public void changeName (String newName) {
            this.name = newName;
            this.version++;
        }

        // getters and setters are omitted
    }
}
user.changeName("newName"); does not affect the entity, and the "old" values are persisted. In my opinion (I have seen the debug call stack) this happens because the actual User entity is wrapped in an Achilles proxy which reacts to getter/setter calls. Indeed, when I replace changeName with direct getter/setter invocations - user.setName("newName"); user.setVersion(user.getVersion()+1); - the update starts to work.
So why does this happen, and is there a way to configure Achilles to react to non-getter/setter method calls?
You have to use the setter methods explicitly.
According to the documentation, it intercepts the setter methods only.
"As a consequence of this design, internal calls inside an entity cannot be intercepted
and will escape dirty check mechanism. It is thus recommended to change state of the
entities using setters"
It is probably a design choice in Achilles, and I suggest you raise it as an issue on the issues page, so it may receive some attention from the author.
Before doing any actions with the user you should get a user proxy from info.archinnov.achilles.persistence.PersistenceManager, and only after that use setters/getters for modifications to the 'user' entity.
User user = persistenceManager.getProxy(User.class, UUID.fromString(id));

How to store created/lastUpdate fields in AppEngine DataStore using Java JDO 3?

Abstract
I have a working application on App Engine using Java and JDO 3.
I found these arguments (auto_now and auto_now_add), which correspond exactly to what I want to implement in Java. So essentially the question is: how do I convert App Engine's Python DateTimeProperty behaviour to Java JDO?
Constraints
Converting my application to Python is not an option.
Adding two Date properties and manually populating these values whenever a create/update happens is not an option.
I'm looking for a solution which corresponds to what JDO/Appengine/Database authors had in mind for this scenario when they created the APIs.
It would be preferable to have a generic option: say I have 4 entities in classes: C1, C2, C3, C4 and the solution is to add a base class C0, which all 4 entities would extend, so the 4 entities don't even know they're being "audited".
[update] I tried (using a simple entity)
@PersistenceCapable public class MyEntity {
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY, primaryKey = "true")
    private Long id;
    @Persistent private String name;
    ...
1. @Persistent public Date getLastUpdate() { return new Date(); }
As suggested by the answer below, but it seems to always update the value, even when I just load the object from the datastore or modify an unrelated field (e.g. String name).
You can easily enough have a property (setter/getter) on a java class and have the property persistable (rather than the field). Within that getter you can code whatever you want to control what value goes into the datastore.
Without the following hack, I can't read the value stored in the datastore [nor with the hack :( ]:
@Persistent public Date getLastUpdate() { return new Date(); }
private Date prevUpdate;
public void setLastUpdate(Date lastUpdate) { this.prevUpdate = lastUpdate; }
public Date getPrevUpdate() { return prevUpdate; }
Is there any way to differentiate if a persistence operation is in progress or my code is calling the getter?
2. @Persistent(customValueStrategy = "auto_now_add") private Date lastUpdate;
I modeled auto_now_add after org.datanucleus.store.valuegenerator.TimestampGenerator, replacing Timestamp with java.util.Date.
But it was only populated once, at the first makePersistent call, regardless of how many times I modified other fields and called makePersistent. Also note that it doesn't seem to behave as the documentation says (or my English is rusty):
Please note that by defining a value-strategy for a field then it will, by default, always generate a value for that field on persist. If the field can store nulls and you only want it to generate the value at persist when it is null (i.e you haven't assigned a value yourself) then you can add the extension "strategy-when-notnull" as false
3. preStore using PersistenceManager.addInstanceLifecycleListener
Works as expected, but I couldn't make it work across multiple entities using a base class.
pm.addInstanceLifecycleListener(new StoreLifecycleListener() {
    @Override public void preStore(InstanceLifecycleEvent event) {
        MyEntity entity = (MyEntity)event.getPersistentInstance();
        entity.setLastUpdate(new Date());
    }
    @Override public void postStore(InstanceLifecycleEvent event) {}
}, MyEntity.class);
4. implements StoreCallback and public void jdoPreStore() { this.setLastUpdate(new Date()); }
Works as expected, but I couldn't make it work across multiple entities using a base class.
To satisfy my 4th constraint (using solutions 3 or 4), whatever I do I can't make the following structure work:
public abstract class Dateable implements StoreCallback {
    @Persistent private Date created;
    @Persistent private Date lastUpdate;

    public Dateable() { created = new Date(); }
    public void jdoPreStore() { this.setLastUpdate(new Date()); }
    // ... normal get/set properties for the above two
}

@PersistenceCapable public class MyEntity extends Dateable {
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY, primaryKey = "true") private Long id;
    @Persistent private String name;
The problems when the enhancer runs:
public abstract class Dateable:
DataNucleus.MetaData Registering class "[...].Dateable" as not having MetaData.
public abstract class Dateable, with the above log, but running the code anyway:
The creation date changes whenever I create or read the data from the datastore.
@PersistenceCapable public abstract class Dateable:
DataNucleus.MetaData Class "[...].MyEntity" has been specified with 1 primary key fields, but this class is using datastore identity and should be application identity.
JDO simply provides persistence of Java classes (and their fields/properties), so I don't see what the design of JDO has to do with it.
You can easily enough have a property (setter/getter) on a Java class and have the property persistable (rather than the field). Within that getter you can code whatever you want to control what value goes into the datastore. Either that, or you use a preStore listener to be able to set things just before persistence, so the desired value goes into the datastore.
