I've run into something really weird in Spring Data JDBC (using Spring Boot 2.1 with the necessary starters) aggregate handling. Let me explain the case (I'm using Lombok; the issue might be related, though)...
This is an excerpt from my entity:
import java.util.Set;

import lombok.Data;
import org.springframework.data.annotation.Id;

@Data
public class Person {

    @Id
    private Long id;
    ...
    private Set<Address> address;
}
This is an associated Spring Data repository:
public interface PersonsRepository extends CrudRepository<Person, Long> {
}
And this is a test, which fails:
@Autowired
private PersonsRepository personDao;
...
Person person = personDao.findById(1L).get();
Assert.assertTrue(person.getAddress().isEmpty());
person.getAddress().add(myAddress); // builder made, whatever
person = personDao.save(person);
Assert.assertEquals(1, person.getAddress().size()); // count is... 2!
The fact is that while debugging I found out that the address collection (which is a Set) contains TWO references to the same instance of the attached address.
I don't see how two references end up in there, and most importantly how a SET (actually a LinkedHashSet, for the record) could hold the same instance TWICE!
person Person (id=218)
address LinkedHashSet<E> (id=228)
[0] Address (id=206)
[1] Address (id=206)
Does anybody have a clue about this situation? Thanks.
A (Linked)HashSet can (as a side effect) store the same instance twice when this instance has been mutated in the meantime (quote from Set):
Note: Great care must be exercised if mutable objects are used as set elements. The behavior of a set is not specified if the value of an object is changed in a manner that affects equals comparisons while the object is an element in the set.
So here's what probably happens:
You create a new instance of Address but its ID is not set (id=null).
You add it to the Set, and its hash code is calculated as some value A.
You call PersonsRepository.save which most likely persists the Address and sets on it some non-null ID.
The PersonsRepository.save probably also calls HashSet.add to ensure that the address is in the set. But since the ID changed, the hash code is now calculated as some value B.
The hash codes A and B map to different buckets in the HashSet, and so the Address.equals method does not even get called during HashSet.add. As a result, you end up with the same instance in two different buckets (see the sketch below).
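A minimal standalone sketch of this mechanism (plain Java, no Spring or Lombok involved; the Address stub below is hypothetical):

import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class MutableSetDemo {

    // Hypothetical stub standing in for the entity: id participates in equals/hashCode,
    // just like a Lombok @Data class would generate.
    static class Address {
        Long id;

        @Override
        public boolean equals(Object o) {
            return o instanceof Address && Objects.equals(id, ((Address) o).id);
        }

        @Override
        public int hashCode() {
            return Objects.hashCode(id);
        }
    }

    public static void main(String[] args) {
        Set<Address> addresses = new HashSet<>();
        Address a = new Address();            // id == null, hash code A
        addresses.add(a);

        a.id = 206L;                          // the "save" assigns an ID, hash code becomes B
        addresses.add(a);                     // lands in a different bucket, equals never consulted

        System.out.println(addresses.size()); // prints 2: the same instance, twice
    }
}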
Finally, I think your entities should rather have equals/hashCode semantics based on the ID only. To achieve it using Lombok, you'd use @EqualsAndHashCode as follows:
@Data
@EqualsAndHashCode(of = "id")
public class Person {

    @Id
    private Long id;
    ...
}

@Data
@EqualsAndHashCode(of = "id")
public class Address {

    @Id
    private Long id;
    ...
}
Still, this will not solve the problem you have because it's the ID that changes, so the hash codes will still differ.
One way of handling this would be persisting the Address before adding it to the Set.
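A rough sketch of that idea, assuming the Address gets its own repository (the AddressRepository and addressDao below are hypothetical and not part of the original question; note that in Spring Data JDBC this effectively makes Address its own aggregate root, which may or may not fit your model):

// Hypothetical repository for the Address aggregate.
public interface AddressRepository extends CrudRepository<Address, Long> {
}

// Save the Address first so its ID (and therefore its hash code) is already
// stable before it enters the Set.
Address saved = addressDao.save(myAddress);
person.getAddress().add(saved);
person = personDao.save(person);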
Tomasz Linkowski's explanation is pretty much spot on. But I'd argue for a different resolution of the problem.
What happens internally is the following: the Person entity gets saved. This might or might not create a new Person instance, if Person is immutable.
Then the Address gets saved and thereby gets a new id, which changes its hash code. Then the Address gets added back to the Person, since again it might be a new Address instance.
But it is the same instance, now with a changed hash code, which results in the single set containing the same Address twice.
What you need to do to fix this is:
Define equals and hashCode so that both are stable when saving the instance
i.e. the hashCode must not change when the instance gets saved, or by anything else done in your application.
There are multiple possible approaches.
base equals and hashCode on a subset of the fields excluding the Id. Make sure that you don't edit these fields after adding the Address to the Set. You essentially have to treat it like an immutable class even if it isn't. From a DDD perspective this treats the entity as a value class.
base equals and hashCode on the Id and set the Id in the constructor. From a domain perspective this treats the class as a proper entity which is identified by its ID.
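A minimal sketch of the second option, assuming the application assigns IDs itself (UUIDs here are just an illustrative choice) instead of letting the database generate them:

import java.util.Objects;
import java.util.UUID;

import org.springframework.data.annotation.Id;

public class Address {

    @Id
    private final UUID id;

    public Address() {
        // The ID is fixed at construction time, so equals/hashCode never change afterwards.
        this.id = UUID.randomUUID();
    }

    public UUID getId() {
        return id;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Address && id.equals(((Address) o).id);
    }

    @Override
    public int hashCode() {
        return id.hashCode();
    }
}

Note that with client-assigned IDs, Spring Data JDBC needs some other way to tell new entities from existing ones (for example a @Version field or implementing Persistable); that detail is omitted from the sketch.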
I have a table "class" which is linked to the tables "student" and "teacher".
A "class" is linked to multiple students and teachers via a foreign key relationship.
When I use Hibernate associations and fetch a large number of entities (tried with 5000), I see that it takes 4 times more memory than if I just use foreign key placeholders.
Is there something wrong with Hibernate associations?
Can I use a memory profiler to figure out what's using too much memory?
This is how the schema is:
class(id,className)
student(id,studentName,class_id)
teacher(id,teacherName,class_id)
class_id is the foreign key.
Case #1 - Hibernate Associations
1) In the Class entity, students and teachers are mapped as:
@Entity
@Table(name="class")
public class Class {

    private Integer id;
    private String className;
    private Set<Student> students = new HashSet<Student>();
    private Set<Teacher> teachers = new HashSet<Teacher>();

    @OneToMany(fetch = FetchType.EAGER, mappedBy = "classRef")
    @Cascade({ CascadeType.ALL })
    @Fetch(FetchMode.SELECT)
    @BatchSize(size = 500)
    public Set<Student> getStudents() {
        return students;
    }
2) In Student and Teacher, the class is mapped as:
@Entity
@Table(name="student")
public class Student {

    private Integer id;
    private String studentName;
    private Class classRef;

    @ManyToOne
    @JoinColumn(name = "class_id")
    public Class getClassRef() {
        return classRef;
    }
Query used:
sessionFactory.openSession().createQuery("from Class where id<5000");
This, however, was taking a huge amount of memory.
Case #2 - Remove associations and fetch separately
1) No mapping in the Class entity
@Entity
@Table(name="class")
public class Class {

    private Integer id;
    private String className;
2) Only a placeholder for the foreign key in Student and Teacher
@Entity
@Table(name="student")
public class Student {

    private Integer id;
    private String studentName;
    private Integer class_id;
Queries used:
sessionFactory.openSession().createQuery("from Class where id<5000");
sessionFactory.openSession().createQuery("from Student where class_id = :classId");
sessionFactory.openSession().createQuery("from Teacher where class_id = :classId");
Note - only the important parts of the code are shown. I am measuring the memory usage of the fetched entities via the JAMM library.
I also tried marking the query as readOnly in case #1 as below, which does not improve memory usage very much; just a very little. So that's not the solution.
Query query = sessionFactory.openSession()
        .createQuery("from Class where id<5000");
query.setReadOnly(true);
List<Class> classList = query.list();
sessionFactory.getCurrentSession().close();
Below are the heap dump snapshots sorted by size. It looks like the entities maintained by Hibernate are creating the problem.
Snapshot of Heapdump for hibernate associations program
Snapshot of heapdump for fetching using separate entities
You are doing an EAGER fetch with the annotation below. This will fetch all the students even without you accessing getStudents(). Make it lazy and they will be fetched only when needed.
From
@OneToMany(fetch = FetchType.EAGER, mappedBy = "classRef")
To
@OneToMany(fetch = FetchType.LAZY, mappedBy = "classRef")
When Hibernate loads a Class entity containing OneToMany relationships, it replaces the collections with its own custom version of them. In the case of a Set, it uses a PersistentSet. As can be seen on grepcode, this PersistentSet object contains quite a bit of stuff, much of it inherited from AbstractPersistentCollection, to help Hibernate manage and track things, particularly dirty checking.
Among other things, the PersistentSet contains a reference to the session, a boolean to track whether it's initialized, a list of queued operations, a reference to the Class object that owns it, a string describing its role (not sure what exactly that's for, just going by the variable name here), the string uuid of the session factory, and more. The biggest memory hog among the lot is probably the snapshot of the unmodified state of the set, which I would expect to approximately double memory consumption by itself.
There's nothing wrong here, Hibernate is just doing more than you realized, and in more complex ways. It shouldn't be a problem unless you are severely short on memory.
Note, incidentally, that when you save a new Class object that Hibernate previously was unaware of, Hibernate will replace the simple HashSet objects you created with new PersistentSet objects, storing the original HashSet wrapped inside the PersistentSet in its set field. All Set operations will be forwarded to the wrapped HashSet, while also triggering PersistentSet dirty tracking and queuing logic, etc. With that in mind, you should not keep and use any external references to the Set from before saving, and should instead fetch a new reference to Hibernate's PersistentSet instance and use that if you need to make any changes (to the set, not to the students or teachers within it) after the initial save.
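To illustrate that last point, a hedged sketch (the variable names, the setStudents method, and the surrounding session handling are assumptions) of discarding the pre-save reference and going through the entity instead:

Set<Student> students = new HashSet<Student>();
students.add(someStudent);

Class clazz = new Class();
clazz.setStudents(students);          // plain HashSet at this point

session.save(clazz);                  // Hibernate wraps the collection in a PersistentSet

// Don't keep mutating the old 'students' reference; go through the entity,
// so Hibernate's dirty tracking sees the change.
clazz.getStudents().add(anotherStudent);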
Regarding the huge memory consumption you are noticing, one potential reason is that the Hibernate Session has to maintain the state of each entity it has loaded in the form of an EntityEntry object, i.e. one extra EntityEntry object for each loaded entity. This is needed for Hibernate's automatic dirty checking mechanism during the flush stage, to compare the current state of the entity with its original state (the one stored as an EntityEntry).
Note that this EntityEntry is different from the object we get to access in our application code when we call session.load/get/createQuery/createCriteria. It is internal to Hibernate and stored in the first-level cache.
Quoting from the Javadocs for EntityEntry:
We need an entry to tell us all about the current state of an object with respect to its persistent state.
Implementation Warning: Hibernate needs to instantiate a high amount of instances of this class, therefore we need to take care of its impact on memory consumption.
One option, assuming the intent is only to read and iterate through the data and not perform any changes to those entities, is to use StatelessSession instead of Session.
The advantage, as quoted from the Javadocs for StatelessSession:
A stateless session does not implement a first-level cache nor
interact with any second-level cache, nor does it implement
transactional write-behind or automatic dirty checking
With no automatic dirty checking there is no need for Hibernate to create an EntityEntry for each loaded entity, as it did in the earlier case with Session. This should reduce the pressure on memory utilization.
That said, it does have its own set of limitations, as mentioned in the StatelessSession Javadoc documentation.
One limitation worth highlighting is that it does not lazily load collections. If we are using a StatelessSession and want to load the associated collections, we should either join-fetch them using HQL or eagerly fetch them using Criteria.
Another one is related to the second-level cache: a stateless session does not interact with any second-level cache, if present.
So, given that it doesn't have the overhead of a first-level cache, you may want to try a StatelessSession and see if it fits your requirements and helps reduce memory consumption as well. A rough sketch follows.
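For example (a sketch only, assuming an existing SessionFactory; the join fetch is there because a StatelessSession cannot lazily load collections):

StatelessSession statelessSession = sessionFactory.openStatelessSession();
try {
    List<?> classes = statelessSession
            .createQuery("select distinct c from Class c join fetch c.students where c.id < 5000")
            .list();
    for (Object o : classes) {
        Class clazz = (Class) o;
        // read-only processing here; no EntityEntry is kept per loaded entity
    }
} finally {
    statelessSession.close();
}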
Yes, you can use a memory profiler, like VisualVM or YourKit, to see what takes so much memory. One way is to get a heap dump and then load it into one of these tools.
However, you also need to make sure that you compare apples to apples. Your queries in case #2,
sessionFactory.openSession().createQuery("from Student where class_id = :classId");
sessionFactory.openSession().createQuery("from Teacher where class_id = :classId");
select students and teachers for only one class, while in case #1 you select way more. You need to use <= :classId instead.
In addition, it is a little strange that you need one student and one teacher record per one class. A teacher can teach more than one class and a student can be in more than one class. I do not know what exact problem you're solving but if indeed a student can participate in many classes and a teacher can teach more than one class, you will probably need to design your tables differently.
Try @Fetch(FetchMode.JOIN). This generates only one query instead of multiple select queries. Also review the generated queries. I prefer using Criteria over HQL (just a thought).
For profiling, use free tools like VisualVM or JConsole. YourKit is good for advanced profiling, but it is not free. I believe there is a trial version of it.
You can take a heap dump of your application and analyze it with any memory analyzer tool to check for memory leaks.
BTW, I am not exactly sure about the memory usage for the current scenario.
It's likely the reason is the bidirectional link from Student to Class and Class to Students. When you fetch Class A (id 4500), the Class object must be hydrated; in turn this must go and pull all the Student objects (and teachers, presumably) associated with this class. When this happens, each Student object must be hydrated, which causes the fetch of every class the Student is a part of. So although you only wanted Class A, you end up with:
Fetch Class A (id 4900)
Returns Class A with reference to 3 students, Student A, B, C.
Student A has ref to Class A, B (id 5500)
Class B needs hydrating
Class B has reference to Students C,D
Student C needs hydrating
Student C only has reference to Class A and B
Student C hydration complete.
Student D needs hydrating
Student D only has reference to Class B
Student B hydration complete
Class B hydration complete
Student B needs hydrating (from original class load class A)
etc... With eager fetching, this continues until all links are hydrated. The point being that it's possible you end up with Classes in memory that you didn't actually want, or whose id is not less than 5000.
This could get worse fast.
Also, you should make sure you are overriding the hashCode and equals methods. Otherwise you may be getting redundant objects, both in memory and in your sets.
One way to improve this is to either change to LAZY loading, as others have mentioned, or break the bidirectional links. If you know you will only ever access students per class, then don't have the link from Student back to Class. For the student/class example it makes sense to have the bidirectional link, but maybe it can be avoided.
As you say you want "all" the collections, lazy loading won't help.
Do you need every field of every entity? If not, use a projection to get just the bits you want; see when to use Hibernate Projections. A sketch follows below.
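For instance, a rough sketch with the Criteria API (assuming the Class entity from the question and an open Session; imports from org.hibernate.criterion are implied):

List<Object[]> rows = session.createCriteria(Class.class)
        .add(Restrictions.lt("id", 5000))
        .setProjection(Projections.projectionList()
                .add(Projections.property("id"))
                .add(Projections.property("className")))
        .list();

for (Object[] row : rows) {
    Integer id = (Integer) row[0];
    String className = (String) row[1];
    // work with the scalar values instead of fully hydrated entities
}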
Alternatively, consider having minimalist TeacherLite and StudentLite entities that the full-fat versions extend.
I am trying to query the App Engine Datastore through Objectify in Java.
I have stored some dummy data locally, but I can't manage to get the results ordered by key.
These are the classes:
Parent Class:
@Entity
public class Parent
{
    @Getter
    @Setter
    @Id
    long id;

    @Getter
    @Setter
    String type;

    public Parent() {
    }
}
Main Class:
@Entity
@Cache
@Index
public class MainObject
{
    @Getter
    @Setter
    @Id
    long id;

    @Getter
    @Setter
    @Unindex
    String url;

    @Getter
    @Setter
    Date date;

    @Parent
    @Getter
    @Setter
    Key<Parent> type;

    public MainObject() {
    }
}
The thing is that I want to get this query:
Key<Parent> parent = Key.create(Parent.class, 1);
MainObject lastUrl = OfyService.ofy().load().type(MainObject.class).ancestor(parent).order("-key").first().now();
This returns null.
List<MainObject> list = OfyService.ofy().load().type(MainObject.class).ancestor(parent).order("-key").list();
This returns an empty list.
But if I remove the order query, I get all entities.
list = OfyService.ofy().load().type(MainObject.class).ancestor(parent).list();
Any ideas?
I have checked the Objectify web page, but I couldn't find much.
Thanks in advance.
The magic Google field that means key is __key__ (two underscores on each side). This is built in to GAE, so you want order("-__key__").
Objectify could provide an orderKey(boolean) method on query to make this slightly more convenient. If you add it to the issue tracker, I'll implement it.
As of Objectify 5.0.1, you can use orderKey(boolean descending) instead of order("__key__") when sorting by key. See Javadoc at http://static.javadoc.io/com.googlecode.objectify/objectify/5.1.14/com/googlecode/objectify/cmd/SimpleQuery.html#orderKey-boolean-
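For example, a minimal sketch based on the query from the question (OfyService and parent are taken from the question's code):

// Descending order by key, equivalent to order("-__key__")
MainObject lastUrl = OfyService.ofy()
        .load()
        .type(MainObject.class)
        .ancestor(parent)
        .orderKey(true)   // true = descending
        .first()
        .now();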
What you are trying to do is fundamentally wrong. Your desire is to have your query return results ordered by key, the very same thing that uniquely identifies your entity within the Datastore. I cannot understand why you would want to do this, since the key is derived from the Kind, the Id, and optionally the parent if your class has one; as such, I can't see how ordering by key could ever be useful, but I am sure you have your reasons for wanting this. Perhaps you could expand on your question by explaining fully what you're trying to achieve.
Now I will attempt to answer your questions on why your queries aren't returning your desired results and suggest some solutions:
Your first query:
MainObject lastUrl = OfyService.ofy().load().type(MainObject.class).ancestor(parent).order("-key").first().now();
The reason this query returns null is that the key property you are passing to the order method is not a field of your MainObject entity. It does not exist, so the query will always return null when Objectify tries to apply the sort order.
The same applies to your second query. It returns an empty list because there are no entities of type MainObject with a key field. The only difference from the first query is that you are specifically requesting a list of entities rather than calling first().
The third query
list = OfyService.ofy().load().type(MainObject.class).ancestor(parent).list();
This query works, of course, because you are querying for all entities of type MainObject that have the specified parent entity as an ancestor. Since such entities exist, the query returns the expected results.
As you can see, the assumption that an entity's "key" somehow intrinsically exists as a property of your entity is incorrect. In order to sort by key you would need to add, say, a key property to your MainObject entity to hold the value of the entity's generated key, which would not make much sense and is definitely not recommended.
Caveat: there may be a way of getting hold of the key, since we know it exists, but I am not aware of one. Perhaps some Datastore expert can shed light on this.
I suggest you sort using the indexed properties on your class which make sense within the domain of your application. For example, sort by id, since it isn't auto-generated and is likely to have some meaning; ditto for the date property, as they both likely carry some domain value, as opposed to the key. Hope this helps!
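For instance, a sketch of sorting by the indexed date property instead (same query shape as in the question):

// Most recent MainObject under the parent, ordered by the indexed 'date' field
MainObject latest = OfyService.ofy()
        .load()
        .type(MainObject.class)
        .ancestor(parent)
        .order("-date")
        .first()
        .now();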
If you do not have time, please have a look at the example below.
I have two types of users, temporary users and permanent users.
Temporary users use the system as guests; they just provide their name and use it, but the system needs to track them.
Permanent users are those that are registered and permanent.
Once a user creates a permanent record for himself, I need to copy all the information that was tracked while he was a guest into his permanent record.
The classes are as follows:
@Entity
public class PermUser {

    @Id
    @GeneratedValue
    private long id;

    @OneToMany
    private List<Favorites> favorites;
    ....
}

@Entity
public class Favorites {

    @Id
    @GeneratedValue
    private long id;

    @OneToMany(cascade = CascadeType.ALL)
    @LazyCollection(LazyCollectionOption.FALSE)
    private List<FavoriteItems> items;
    ...
}

@Entity
public class FavoriteItems {

    @Id
    @GeneratedValue
    private long id;

    private int quantity;

    @ManyToOne
    private Ball ball;
    ..
}

@Entity
public class TempUser extends PermUser {

    private String date;
    ....
}
The problem is:
If I clone the TempUser object, I am copying the id parameters as well, so when saving the PermUser object it shows a message like "Duplicate entry '10' for key ...". I cannot remove the TempUser first and then save the PermUser, because if saving the PermUser fails I will lose the data. Copying each ball of the FavoriteItems separately, without the item's id, would not be an efficient way either.
Example (question in one sentence: as shown below, a user may have more than one TempUser record and just one PermUser record, therefore I need to add the information of all the TempUser records to that single PermUser record.)
Type of record | Name | Favorites                                   | Date
1) TempUser    | Jack | 2 items                                     | 1/1/2013
2) TempUser    | Jack | 3 items                                     | 1/4/2013
-----------------------------------------------------------------------------
PermUser       | Jack | 5 items (2 + 3 items from his temp records) |
*Please note, I need to find a solution, and I do not mind trying a different approach rather than cloning the object.
The reason I have two different classes is that TempUser has a few additional attributes. I may also need to add the favorites of several TempUsers to the favorites list of one PermUser, and, as mentioned above, a user may have many different, unrelated temp records.
Forgive me if I'm missing something, but I don't think that TempUser and PermUser should be different classes. TempUser extends PermUser, which is an "is-a" relationship, and clearly temporary users are not a type of permanent user. Your question doesn't give enough information to justify making them different -- perhaps they're the same class, and the difference can be expressed as a few new attributes? E.g.:
@Entity
public class User {

    @OneToMany(cascade = CascadeType.ALL)
    private List<Favorites> favorites;

    private boolean isTemporary;
    ....
}
The "transition" from temporary to permanent can be handled by some controller, making sure that isTemporary = false and that the other properties of a permanent user are appropriately set. This would completely side-step the cloning issue and would be much easier on your database.
I just had the same problem. I've been digging through many interesting articles and questions on boards like SO until I had enough inspiration.
At first I also wanted to have subclasses for different types of users. It turns out that the idea itself is a design flaw:
Don't use inheritance for defining roles!
More information here: Subtle design: inheritance vs roles
Think of a user as a big container which just holds other entities like credentials, preferences, contacts, items, user information, etc.
With this in mind you can easily change certain abilities/behaviour of certain users.
Of course you can define a role many users can play. Users playing the same role will have the same features.
If you have many entities/objects depending on each other, you should think of a building mechanism/pattern that sets up a certain user role in a well-defined way.
Some thoughts: A proper way for JPA entities instantiation
If you had a builder/factory for users, your other problem wouldn't be that complex anymore.
Example (really basic, do not expect too much!)
public void changeUserRoleToPermanent(User currentUser) {
    UserBuilder builder = new UserBuilder();
    builder.setRole(Role.PERMANENT); // builder internally does all the plumbing
    // copy the stuff you want to keep
    builder.setId(currentUser.getId());
    builder.setPreferences(currentUser.getPreferences());
    // ...
    User newRoleUser = builder.build();
    newRoleUser = entityManager.merge(newRoleUser);
    entityManager.detach(currentUser);
    // delete old stuff
    entityManager.remove(currentUser.getAccountInfo()); // changed to a different implementation...
}
I admit, it is some work, but you will have many possibilities once you have the infrastructure ready! You can then "invent" new stuff really fast!
I hope I could spread some ideas. I'm sorry for my miserable English.
I agree with the prior comments that, if possible, you should reevaluate these entities. But if that is not possible, I suggest that you return a general User from the database and then cast that user as either PermUser or TempUser (both being extensions of User), based on the presence of certain criteria.
For part 2 of your problem:
You are using CascadeType.ALL for the favorites relation. This includes CascadeType.REMOVE, which means a remove operation on the user will cascade to that entity. So specify an array of CascadeType values that doesn't include CascadeType.REMOVE.
See http://webarch.kuzeko.com/2011/11/hibernate-understanding-cascade-types/.
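A small sketch of what that mapping could look like (which cascade values to keep is a judgment call for your model):

// Cascade everything except REMOVE, so removing a user does not delete the favorites
@OneToMany(cascade = { CascadeType.PERSIST, CascadeType.MERGE, CascadeType.REFRESH })
private List<Favorites> favorites;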
What I am going to suggest might not be that OO, but I hope it will be effective. I am happy to keep PermUser and TempUser separate, not extending one from the other and not binding them into an is-a relationship. So I will have two separate tables in the database, one for TempUser and one for PermUser, thereby treating them as two separate entities. Many will find this redundant, but read on; we all know sometimes redundancy is good. So now:
1) I don't know when a TempUser would want to become a PermUser, so I will always have all TempUsers in a separate table.
2) What would I do if a user always wants to stay a TempUser? I still have a separate TempUser table to refer to.
3) I am assuming that when a TempUser wants to become a PermUser, you read his TempUser name to get his records as a TempUser.
So now your job is easy. When a TempUser wants to become a PermUser, all you do is copy the TempUser objects, populate your required attributes, and create a new PermUser object with them. After that you can keep your TempUser record if you want to, or delete it. :)
Also, if you keep the records, you would have a history of how many of your TempUsers actually become permanent, and you would also know how long, on average, it takes a TempUser to become permanent.
I think you should do a manual deep clone. Not exactly a clone, since you have to merge data from several TempUsers into a single PermUser. You can use reflection, and optionally annotations, to automate the copying of information.
To automatically copy fields from an existing object to a new one you can follow this example. It is not a deep clone, but it may help you as a starting point.
Class 'c' is used as a reference. src and dest must be instances of 'c' or instances of subclasses of 'c'. The method will copy the attributes defined in 'c' and in the superclasses of 'c'.
// Assumes java.lang.reflect.Field, java.lang.reflect.Modifier and the
// GeneratedValue / Generated annotations are imported.
public static <E> E copyObject(E dest, E src, Class<?> c) throws IllegalArgumentException, IllegalAccessException {
    // TODO: You may want to create a new instance of 'dest' here instead of receiving one as a parameter
    if (!c.isAssignableFrom(src.getClass())) {
        throw new IllegalArgumentException("Incompatible classes: " + src.getClass() + " - " + c);
    }
    if (!c.isAssignableFrom(dest.getClass())) {
        throw new IllegalArgumentException("Incompatible classes: " + dest.getClass() + " - " + c);
    }
    while (c != null && c != Object.class) {
        for (Field aField : c.getDeclaredFields()) {
            // We skip static and final fields
            int modifiers = aField.getModifiers();
            if (Modifier.isStatic(modifiers) || Modifier.isFinal(modifiers)) {
                continue;
            }
            // We skip the fields annotated with @Generated and @GeneratedValue
            if (aField.getAnnotation(GeneratedValue.class) == null &&
                aField.getAnnotation(Generated.class) == null) {
                aField.setAccessible(true);
                Object value = aField.get(src);
                if (aField.getType().isPrimitive() ||
                    String.class == aField.getType() ||
                    Number.class.isAssignableFrom(aField.getType()) ||
                    Boolean.class == aField.getType() ||
                    Enum.class.isAssignableFrom(aField.getType())) {
                    try {
                        // TODO: You may want to recursively copy 'value' too
                        aField.set(dest, value);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }
        c = c.getSuperclass();
    }
    return dest;
}
Like some have already suggested, I would tackle this problem using inheritance plus either shallow copies (to share references) or deep cloning with libraries that let me exclude/manipulate the auto-generated ids (when you want to duplicate items).
Since you don't want to bend your database model too much, start with a mapped superclass holding the common attributes. This will not be reflected in your database at all. If you can, I would go with single table inheritance, which maps closely to your model (but may require some adjustments on the database layer).
@MappedSuperclass
public abstract class User {

    @Id
    @GeneratedValue
    private long id;

    // Common properties and relationships...
Then have both PermUser and TempUser inherit from User, so that they will have a lot of common state:
@Entity
@Table(name="USER")
public class PermUser extends User {
    // Specific properties
}
Now there are several possible approaches. If your classes don't have a lot of state, you can, for instance, make a constructor that builds a PermUser from the data of a List of TempUsers.
Mock code:
@Entity
@Table(name="PERMANENT_USER")
public class PermUser extends User {

    public PermUser() {} // default constructor

    public PermUser(List<TempUser> userData) {
        final Set<Favorites> f = new LinkedHashSet<>();
        // don't set the id
        for (TempUser u : userData) {
            this.name = u.getName();
            // Shallow copy that guarantees uniqueness and insertion order;
            // Favorites must override equals and hashCode
            f.addAll(u.getFavorites());
        }
        this.favorites = new ArrayList<>(f);
        // Logic to conciliate dates
    }
}
When you persist the PermUser it will generate a new id, cascaded unidirectional relationships should work fine.
On the other hand, if your classes have a lot of attributes and relationships, and there are a lot of situations in which you really need to duplicate objects, then you could use a bean mapping library such as Dozer (but be warned, cloning objects is a code smell).
Mapper mapper = new DozerBeanMapper();
mapper.map(tempUser.getFavorites(), user.getFavorites());
With Dozer you can configure mappings through annotations, an API, or XML to do things such as excluding fields, type casting, etc.
Mock mapping:
<mapping>
  <class-a>my.object.package.TempUser</class-a>
  <class-b>my.object.package.PermUser</class-b>
  <!-- common fields with the same name will be copied by convention -->
  <!-- exclude ids and fields exclusive to temp -->
  <field-exclude>
    <a>fieldToExclude</a>
    <b>fieldToExclude</b>
  </field-exclude>
</mapping>
You can, for example, exclude ids, or maybe copy permUser.id to all of the cloned bidirectional relationships back to User (if there is one), etc.
Also, notice that cloning collections is a cumulative operation by default.
From Dozer documentation:
If you are mapping to a Class which has already been initialized, Dozer will either 'add' or 'update' objects to your List. If your List or Set already has objects in it dozer checks the mapped List, Set, or Array and calls the contains() method to determine if it needs to 'add' or 'update'.
I've used Dozer in several projects. For example, in one project there was a JAXB layer that needed to be mapped to a JPA model layer. They were close enough, but unfortunately I couldn't bend either one. Dozer worked quite well, was easy to learn, and spared me from writing 70% of the boring code. I can deeply recommend this library out of personal experience.
From a pure OO perspective it does not really make sense for an instance to morph from one type into another, Hibernate or not. It sounds like you might want to reconsider the object model independently of its database representation. FourWD seems more like a property of a car than a specialization, for example, and the same reasoning applies to a temporary user.
A good way to model this is to create something like a UserData class such that TempUser has-a UserData and PermUser has-a UserData. You could also make TempUser has-a PermUser, though that's going to be less clear. If your application needs to use them interchangeably (something you'd get with the inheritance you were using), then both classes can implement an interface that returns the UserData (or in the second option, getPermUser, where PermUser returns itself).
If you really want to use inheritance, the easiest might be to map it using "table per class hierarchy" and then use straight JDBC to update the discriminator column directly.
I have an entity which is an AccessCard and I want to have a boolean field "active". So for example multiple people may have had the AccessCard, but it is only active on the current holder.
The current holder will have the active field = true and all the previous holders will have the active field = false.
Obviously I could just handle this on the interface side, and not allow activating a card before deactivating it, but I would also like this constraint in the database.
Any ideas how I should annotate the "active" column?
Which RDBMS are you using? In Oracle you could do something like in the accepted answer HERE.
I don't think there is a JPA annotation that allows creating constraints based on a field value. There is @UniqueConstraint, which allows creating constraints on a list of fields, but it still does not take the field value into consideration (because you want to have only one record with active=true per AccessCard, but many with active=false).
What you could do is create a private getter for the access card assignments and a public transient method for returning the active card assignment. When called for the first time, it would iterate over all assignments, pick the active one and assign it to a property, so that you don't have to iterate through all assignments every time getActiveAssignment is called.
In the setter for the active assignment you could set active to false for all assignments and then set it to true for the expected one. A rough sketch of that idea is below.
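A minimal sketch of that suggestion (the CardAssignment entity, its fields, and the usual javax.persistence imports are assumptions, not from the question):

@Entity
public class AccessCard {

    @Id
    @GeneratedValue
    private long id;

    // Every holder this card ever had; CardAssignment is a hypothetical entity
    // carrying the 'active' flag and a reference back to the card.
    @OneToMany(mappedBy = "card", cascade = CascadeType.ALL)
    private List<CardAssignment> assignments = new ArrayList<>();

    @Transient
    private CardAssignment activeAssignment; // cached on first lookup

    public CardAssignment getActiveAssignment() {
        if (activeAssignment == null) {
            for (CardAssignment a : assignments) {
                if (a.isActive()) {
                    activeAssignment = a;
                    break;
                }
            }
        }
        return activeAssignment;
    }

    public void setActiveAssignment(CardAssignment newActive) {
        for (CardAssignment a : assignments) {
            a.setActive(a.equals(newActive)); // only one assignment stays active
        }
        activeAssignment = newActive;
    }
}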
Instead of a boolean, why not add a @ManyToOne reference (possibly another one, since you seem to track everyone who has held a card) from the AccessCard entity to your Person entity?
Like this:
public class AccessCard {

    @ManyToOne(cascade = { CascadeType.PERSIST, CascadeType.MERGE })
    @JoinColumn(referencedColumnName = "person_id")
    private Person cardHolder;

    public Person getCardHolder() {
        return cardHolder;
    }
}

public class Person {

    @OneToMany(mappedBy = "cardHolder", cascade = { CascadeType.PERSIST, CascadeType.MERGE })
    private Set<AccessCard> posessingCards;
}
Now you can check if a specific card is in use by checking for null on the return of getCardHolder.
Or you could just add another method the does the check and returns a boolean instead.
This would ensure that transferring the card to a new person removes it from the current one, and a check to see whether it's active would just be a null check instead.
Activation/deactivation would effectively be handled by assigning the card to a person; two birds with one stone: you know if it's active AND who has it.
In my Java application I use equal objects multiple times in different places. That means the equals method returns true when comparing these objects. Now I want to update one object and apply the changes to all objects that are equal. Do you know if there is a pattern for that?
My concrete use case is:
I am using JSF, JPA and CDI. A user is on a web page that allows him to edit the detached entity EntityA. The page is session-scoped. EntityA has two references to an EntityB (also detached). These objects can be the same: not the same reference, but they may be equal.
@Entity
public class EntityA {

    @OneToOne
    private EntityB entity1;

    @OneToOne
    private EntityB entity2;
}
The JSF view lets the user select entity1 and entity2 from a selection list. It also shows some details of these EntityBs, and the user is allowed to edit entity1 and entity2 separately. Everything works fine, except when the user has chosen the same (equal) EntityB for entity1 and entity2: then only the references to these objects are updated. Of course entity1 and entity2 are two different JPA entities and are not the same reference. But I want to distribute the changes to all detached instances of EntityB.
I have this situation hundreds of times in my application, so I don't want to keep track of which objects have to be updated in which situations. I need some solution that does it for me automatically. One idea was to keep all objects I use in this session in a special list, and every time a request is submitted and processed, iterate over this map and change all equal objects. But this sounds very dirty. Maybe there is a built-in JPA function to make all equal objects the same reference; I don't know if this is possible. Do you have a solution for this? Thanks.
I'm going to abstract your problem out a bit here: if a change to one object requires changing a number of other objects, consider putting the field that you're changing in a separate object, and have all those objects reference it.
For example, if you have:
class MyClass {
String info;
int id;
}
and two instances of MyClass with the same id should both be updated when the info field changes, then use this:
class MyClass {
    MyInfoClass info;
    int id;
}

class MyInfoClass {
    String value;
}
and give all instances of MyClass that are equal the same instance of MyInfoClass. Changing myClass.info.value will effectively change all instances of MyClass, because they all hold the same instance of MyInfoClass.
Sorry if I've got the syntax slightly wrong, I jump between languages a lot.
I used this technique in a game I wrote recently, where a switch activates a door: both the switch and the door have a Circuit object that holds a boolean powered field. The door's isOpen() method simply returns circuit.powered, and when the switch is activated I just call switch.circuit.powered = true, and the door is automatically considered 'open'. Previously, I had it searching the game's map for all doors with the same circuit id and changing the powered field on each.
This is classic form handling logic:
If the user clicks the save button, manipulate the data in the database.
Reload the data every time you create the web page.
You should not cache the data in the web session.
If you need caching, activate it in the persistence layer (e.g. the Hibernate cache).
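For example, a hedged sketch of enabling entity caching in the persistence layer with Hibernate/JPA annotations (the cache provider configuration is assumed to be set up separately; EntityB is taken from the question):

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Cached in the second-level cache instead of being held in the web session.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class EntityB {

    @Id
    private Long id;

    // ... other fields
}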