Hibernate update before insert in one to many - java

I am getting a constraint violation exception because of the order of operations performed by Hibernate. I have the following entities defined:
@Entity
public class A {

    @Id
    private Integer id;

    @OneToMany(mappedBy = "a", fetch = FetchType.LAZY, cascade = CascadeType.ALL, orphanRemoval = true)
    private List<B> bList;

    public void setBList(List<B> bList) {
        if (CollectionUtils.isNotEmpty(this.bList)) {
            this.bList.clear();
        }
        if (CollectionUtils.isNotEmpty(bList)) {
            this.bList.addAll(bList);
        }
    }
}
@Entity
@Table(uniqueConstraints = {@UniqueConstraint(columnNames = {"name", "a_id", "isDeleted"})})
public class B {

    @Id
    private Integer id;

    private String name;

    @ManyToOne(fetch = FetchType.LAZY, optional = false)
    @JoinColumn(name = "a_id")
    private A a;

    private boolean isDeleted;
}
When I set the new list of Bs (containing one item updated as deleted and a new item having the same values in the columns covered by the constraint) on entity A and save entity A, I get a constraint violation.
Hibernate performs the insert of the new item before updating the old item as deleted, leading to a constraint violation even though the data is correct from the application's point of view.
Am I doing something wrong here, or is there any configuration or fix for this?

Answer edited on 2021/05/07 after a comment from the OP pointed out it was missing the point.
There are two things you should change to make this work:
You should not rely on Hibernate to guess the right order of operations for you; it uses heuristics that may not match your intent. In your case, call EntityManager.flush() after the soft-delete of the old B and before persisting the new one.
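To see why the flush matters, here is a plain-Java sketch (no Hibernate; all names are invented for illustration) that models the unique key as a set of (name, a_id, isDeleted) tuples. Issuing the UPDATE before the INSERT, which is exactly what the explicit flush forces, never collides:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FlushOrderSketch {
    public static void main(String[] args) {
        // The unique key (name, a_id, isDeleted) modeled as a set of tuples.
        Set<List<Object>> uniqueKey = new HashSet<>();
        uniqueKey.add(List.of("b1", 1, false)); // existing, not-yet-deleted row

        // Wrong order: inserting the replacement row before the soft-delete
        // UPDATE collides with the existing row.
        boolean insertFirst = uniqueKey.add(List.of("b1", 1, false));
        System.out.println("insert before update succeeds: " + insertFirst); // false

        // Right order (what flush() after the soft-delete forces):
        uniqueKey.remove(List.of("b1", 1, false));
        uniqueKey.add(List.of("b1", 1, true));  // UPDATE: old row marked deleted
        boolean insertAfter = uniqueKey.add(List.of("b1", 1, false)); // INSERT now fits
        System.out.println("insert after update succeeds: " + insertAfter); // true
    }
}
```

The real fix is a single EntityManager.flush() between the two operations; the sketch only illustrates why the ordering is the whole problem.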
Your unique constraint will cause problems anyway when you soft-delete your second B, which is identical with respect to the unique columns. More on that below.
In general, enforcing this kind of constraint in the DB is a bad idea. If you try to update or insert an entity that violates it, you'll get an obscure PersistenceException, and it will be hard to tell your users the exact cause. So you will have to check those constraints programmatically before insert/update anyway. Hence, you'd better remove them and ensure uniqueness in your application code, unless they are vital to data integrity. The same goes for non-nullable columns and other constraints that are pure business logic.
One last piece of advice from experience: for a soft-delete column, use a timestamp rather than a boolean. It is the same effort to update and read your records, but it gives you valuable information about when a record was deleted.
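The timestamp-as-flag idea can be sketched with a plain POJO (a hypothetical Record class, not the OP's entity): a nullable timestamp doubles as the deletion flag and records when the deletion happened.

```java
import java.time.Instant;

public class SoftDeleteSketch {
    // Hypothetical stand-in for an entity with a soft-delete column.
    static class Record {
        private Instant deletedAt; // null means "not deleted"

        boolean isDeleted() { return deletedAt != null; }

        void softDelete() { deletedAt = Instant.now(); }

        Instant deletedAt() { return deletedAt; }
    }

    public static void main(String[] args) {
        Record r = new Record();
        System.out.println(r.isDeleted()); // false
        r.softDelete();
        System.out.println(r.isDeleted()); // true, and deletedAt() says when
    }
}
```

In the entity you would map deletedAt as a nullable timestamp column; `deletedAt IS NULL` replaces `isDeleted = false` in queries.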

Related

Spring/JPA: Entity referenced by a view as a #ManyToOne association

Currently, my database is organized in a way that I have the following relationships (in a simplified manner):
@Entity
class A {
    /* ... class A columns ... */
    @Id @NotNull
    private Long id;
}

@Entity
@Immutable
@Table(name = "b_view")
class B {
    /* ... same columns as class A, but no setters ... */
    @Id @NotNull
    private Long id;
}
The B entity is actually defined by a VIEW, which is written in this manner (assuming Postgres):
CREATE VIEW b_view AS
SELECT a.* FROM a WHERE EXISTS
(SELECT 1 FROM filter_table ft WHERE a.id = ft.b_id);
The idea here is that B references all elements of A that are present in filter_table. filter_table is another view that isn't really important; it's the result of joining the A table with another, unrelated table through a non-trivial comparison of substrings. These views exist so that I don't need to duplicate and control which elements of A also show up in B.
All of this works completely fine. JpaRepository is working great for B (obviously without saving the data, as B is immutable) and it's all good.
However, at one point we have an entity that has a relationship with B objects:
@Entity
class SortOfRelatedEntity {
    /* ... other columns of SortOfRelatedEntity ... */
    @ManyToOne(fetch = FetchType.EAGER, targetEntity = Fornecedor.class)
    @JoinColumn(name = "b_id", foreignKey = @ForeignKey(foreignKeyDefinition = "references a(id)"))
    private B b;
}
For obvious reasons, I can't make this foreign key reference "b", since B is a view. However, I do want the query for searching this attribute to be defined by the b_view table, and having the foreign key defined on the underlying table (as written above) would also be nice in order to guarantee DB integrity.
However, when applying the above snippet, my sort-of-related-entity table doesn't create a foreign key as I would have expected. For the record, I'm using Hibernate 5.2.16 at the moment.
What am I doing wrong? Is this even possible? Is there something else I should do that I'm not aware of?
Oh FFS
I realized my mistake now. This:
@JoinColumn(name = "b_id", foreignKey = @ForeignKey(foreignKeyDefinition = "references a(id)"))
Should have been this:
@JoinColumn(name = "b_id", foreignKey = @ForeignKey(foreignKeyDefinition = "foreign key(b_id) references a(id)"))
Notice that the foreignKeyDefinition must include foreign key(), not just the references part.
Hopefully this helps someone in the future.

Hibernate many to many fetching associated objects

@Entity
@Table(name = "MATCHES")
public class Match implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "MATCH_ID")
    private Long id;

    @ManyToMany(mappedBy = "matches", cascade = CascadeType.ALL)
    private Set<Team> teams = new HashSet<Team>();
}

@Entity
@Table(name = "Teams")
public class Team implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "TEAM_ID")
    private long id;

    @ManyToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
    @JoinTable(name = "TEAM_MATCH",
        joinColumns = {@JoinColumn(name = "TEAM_ID")},
        inverseJoinColumns = {@JoinColumn(name = "MATCH_ID")})
    private Set<Match> matches = new HashSet<Match>();
}
I got those classes; now I want to get all the matches and, let's say, print the names of both teams.
public List getAllMatches() {
    Session session = HibernateUtil.getSession();
    Transaction t = session.beginTransaction();
    Criteria criteria = session.createCriteria(Match.class, "match");
    criteria.createAlias("match.teams", "mt", JoinType.LEFT_OUTER_JOIN);
    List result = criteria.list();
    t.commit();
    session.close();
    return result;
}
But when I invoke that method, result has size 2 when I have only 1 match in my table. Both of those matches in result have 2 teams, which is correct. I have no idea why this happens. What I want is one Match object with two Team objects in its 'teams' set, but I get two of those Match objects. They are fine, but there are two of them. I'm completely new to this and have no idea how to fix these criteria. I tried deleting 'FetchType.LAZY' from @ManyToMany in Team but it doesn't work. Team also has properties like Players/Trainer etc. which are in their own tables, but I don't want to dig that deep yet, baby steps. I wonder, though, whether doing such queries is a good idea; should I just return Matches and then, if I want the Teams, fetch them in another session?
Edit: I added criteria.setResultTransformer(DistinctRootEntityResultTransformer.INSTANCE); and it works. Is that how I was supposed to fix it, or is this for something completely different and I just got lucky?
I think the duplication is a result of your createAlias call, which besides having this side effect is redundant in the first place.
By calling createAlias with those arguments, you are telling Hibernate to not just return all matches, but to first cross index the MATCHES table with the TEAM_MATCH table and return a result for each matching pair of rows. You get one result for a row in the matches table paired with the many-to-many mapping to the first team, and another result for the same row in the matches table paired with the many-to-many mapping to the second team.
I'm guessing your intent with that line was to tell Hibernate to fetch the association. This is not necessary; Hibernate will fetch associated objects on its own automatically when needed.
Simply delete the criteria.createAlias call, and you should get the result you expected - with one caveat. Because the association is using lazy fetching, Hibernate won't load it until you access it, and if that comes after the session is closed you will get a LazyInitializationException. In general I would suggest you prefer solving this by having the session opened and closed at a higher level of abstraction - getting all matches is presumably part of some larger task, and in most cases you should really use one session for the duration of the entire task unless there are substantial delays (such as waiting for user input) involved. Changing that would likely require significant redesign of your code, however; the quick solution is to simply loop over the result list and call Hibernate.initialize() on the teams collection in each Match. Or you could just change the fetch type to eager, if the performance cost of always loading the association whether or not you need it is acceptable.
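As for the DistinctRootEntityResultTransformer from your edit: it does nothing more exotic than collapsing duplicate root rows while preserving their order. A plain-collections sketch (no Hibernate; strings stand in for Match objects):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class DistinctRootSketch {
    public static void main(String[] args) {
        // The left join returns one row per (match, team) pair, so a match
        // with two teams shows up twice in the raw result list.
        List<String> rawRows = List.of("match1", "match1");

        // Distinct-root post-processing: drop duplicates, keep first-seen order.
        List<String> distinctRoots = new ArrayList<>(new LinkedHashSet<>(rawRows));
        System.out.println(distinctRoots); // [match1]
    }
}
```

So yes, the transformer "fixes" the symptom, but removing the redundant createAlias call addresses the cause.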

How to use existing records in many-to-many relation and to avoid unique constraint violation (hibernate)

There are two classes, Person and Vehicle, in a many-to-many relationship. If a new Person is created or an existing Person record is updated (e.g. Vehicle records added), it would be desirable to reuse an existing Vehicle record if one exists.
The question is how to achieve this. A query prior to the insert or update is not an option because there are many threads that can update or insert.
At the moment the application checks for the unique constraint exception, and when it is caught, the new Vehicle object is replaced by an existing one queried from the DB by the "registration" column. This solution works, but it seems clumsy, as a separate session has to be created for each Vehicle record.
Is there any way to achieve the desired behaviour with Hibernate annotations? Or a completely different solution? Thanks.
@Entity
@Table(name = "PERSON", uniqueConstraints = {@UniqueConstraint(columnNames = "name", name = "NAME_KEY")})
public class Person implements Serializable {

    private static final long serialVersionUID = 3507716047052335731L;

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "PersonIdSeq")
    @SequenceGenerator(name = "PersonIdSeq", sequenceName = "PERSON_ID_SEQ")
    private Long id;

    @Index(name = "PERSON_NAME_IDX")
    private String name;

    @ManyToMany(targetEntity = Vehicle.class, cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    @JoinTable(name = "PERSON_VEHICLE_LNK",
        joinColumns = @JoinColumn(name = "PERSON_ID"),
        inverseJoinColumns = @JoinColumn(name = "VEHICLE_ID"),
        uniqueConstraints = {@UniqueConstraint(columnNames = {"PERSON_ID", "VEHICLE_ID"}, name = "person_vehicle_lnk_key")})
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "PersonVehicleLnkIdSeq")
    @SequenceGenerator(name = "PersonVehicleLnkIdSeq", sequenceName = "PERSON_VEHICLE_LNK_ID_SEQ")
    @CollectionId(columns = @Column(name = "ID"), type = @Type(type = "long"), generator = "PersonVehicleLnkIdSeq")
    private List<Vehicle> vehicle = new ArrayList<>();
    ...

@Entity
@Table(name = "VEHICLE", uniqueConstraints = {@UniqueConstraint(columnNames = "registration", name = "REGISTRATION_KEY")})
public class Vehicle implements Serializable {

    private static final long serialVersionUID = -5592281235230216382L;

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "VehicleIdSeq")
    @SequenceGenerator(name = "VehicleIdSeq", sequenceName = "VEHICLE_ID_SEQ")
    private Long id;

    @Index(name = "REGISTRATION_IDX")
    private String registration;
    ...
A query prior the insert or update is not an option because there are many threads which can update or insert.
But this is how to do it.
If it is a performance problem (I don't think it is), then consider using a second-level cache. The first-level cache can't handle this because it is bound to a session, and you need at least one session per thread.
And then you need a version column in both Person and Vehicle.
In your application you already have the following problem:
1. User A loads a Person record.
2. User B loads the same Person record.
3. User A modifies the telephone number and saves the Person record.
4. User B modifies the address and also saves the Person record.
Result: the modification (the telephone number change) made by User A is overwritten; the record has the old telephone number and nobody gets informed about this problem.
A version column avoids this problem. With a version column, in step 4 Hibernate detects that the record was modified in the meantime and throws an exception. This exception must be caught, and User B must be told to reload the record and redo his address change. This means a little extra work for User B (not much, because this case rarely happens), but no information gets lost and the database contains the correct information.
You have to do the same when no record is found on the first read but a constraint violation is caught on insert. You already catch this error, but you don't inform the user, which you probably should do.
There is no easy solution at the Hibernate level for this, because the application logic has to handle this case (for example by informing the user).
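What the version column buys you can be sketched in plain Java (a hypothetical VersionedRow class, no Hibernate): an update is applied only if the caller still holds the current version; otherwise it must reload and retry.

```java
public class OptimisticLockSketch {
    // Hypothetical in-memory stand-in for a row with a version column.
    static class VersionedRow {
        private long version = 0;
        private String phone = "111";

        /** Apply the update only if the caller's version is still current. */
        synchronized boolean updatePhone(long expectedVersion, String newPhone) {
            if (expectedVersion != version) {
                return false; // stale: caller must reload and redo the change
            }
            phone = newPhone;
            version++;
            return true;
        }

        synchronized long version() { return version; }
        synchronized String phone() { return phone; }
    }

    public static void main(String[] args) {
        VersionedRow row = new VersionedRow();
        long seenByA = row.version(); // user A loads the record
        long seenByB = row.version(); // user B loads the same record

        System.out.println(row.updatePhone(seenByA, "222")); // true: A's save wins
        System.out.println(row.updatePhone(seenByB, "333")); // false: B is stale
        System.out.println(row.phone());                     // 222, nothing lost
    }
}
```

In Hibernate you get this behavior by adding a @Version field; the failed update surfaces as an optimistic-locking exception rather than a boolean.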

How can read-only collections be mapped in JPA / Hibernate that don't cause DB updates

Is it possible to create relations in Hibernate/JPA that are fetched when the containing entity is fetched, but will never result in any DB updates when the containing entity is saved? I'll try to make the requirement clear with an example.
I have a simple entity B
@Entity
public class B {

    private int bId;

    @Id
    public int getBId() {
        return bId;
    }

    public void setBId(int aId) {
        bId = aId;
    }
}
And another entity A, which contains a uni-directional many-to-many mapping to this class.
@Entity
public class A {

    private int aId;
    private List<B> bs;

    @Id
    public int getAId() {
        return aId;
    }

    public void setAId(int aId) {
        this.aId = aId;
    }

    @ManyToMany
    @JoinTable(name = "A_B",
        joinColumns = {@JoinColumn(name = "AID")},
        inverseJoinColumns = {@JoinColumn(name = "BID")}
    )
    public List<B> getBs() {
        return bs;
    }

    public void setBs(List<B> aBs) {
        bs = aBs;
    }
}
When entity A is fetched from the DB and merged afterwards as follows:
A a = em.find(A.class, 1);
a.getBs().size();
em.merge(a);
the merge results in the following SQL statements:
Hibernate:
delete
from
A_B
where
AID=?
Hibernate:
insert
into
A_B
(AID, BID)
values
(?, ?)
Hibernate:
insert
into
A_B
(AID, BID)
values
(?, ?)
I have to avoid these deletes and inserts. For my application I can ensure that the mapping table will never be updated through Hibernate. However, it is still required to update the containing entity.
So my question is: Is it possible to map such "read-only" collections and to avoid db changes?
Best regards
Thomas
Update:
These are the tables and the data I'm using:
CREATE TABLE A (
AID INTEGER NOT NULL
)
DATA CAPTURE NONE ;
CREATE TABLE B (
BID INTEGER NOT NULL
)
DATA CAPTURE NONE ;
CREATE TABLE A_B (
AID INTEGER NOT NULL,
BID INTEGER NOT NULL
)
DATA CAPTURE NONE ;
INSERT INTO A (AID) VALUES (1);
INSERT INTO B (BID) VALUES (1);
INSERT INTO B (BID) VALUES (2);
INSERT INTO A_B (AID, BID) VALUES (1, 1);
INSERT INTO A_B (AID, BID) VALUES (1, 2);
In addition the collection also needs to be initialized before the merge is performed:
a.getBs().size();
Note: I've added the line from above to the original post, too.
As written in a comment, I couldn't initially reproduce the behavior. Without altering the collection of Bs, Hibernate was just updating A, leaving the join table untouched. However, by altering the collection of Bs (e.g. adding a B), I could get the DELETE then INSERT behavior. I don't know if this illustrates your scenario, but here is my explanation...
When using a Collection or List without an @IndexColumn (or a @CollectionId), you get bag semantics with all their drawbacks: when you remove an element or alter the collection, Hibernate first deletes all elements and then re-inserts the remaining ones (it has no way to maintain the order).
So, to avoid this behavior, use either:
Set semantics (i.e. use a Set if you don't need a List, which is true in 95% of cases).
Bag semantics with a primary key (i.e. use a List with a @CollectionId) - I didn't test this.
True List semantics (i.e. use a List with @org.hibernate.annotations.IndexColumn, or its JPA 2.0 equivalent @OrderColumn if you are using JPA 2.0).
Option 1 is the obvious choice if you don't need a List. If you do, I only tested Option 3 (it feels more natural), which you would implement like this (it requires an extra column and a unique constraint on (B_ID, BS_ORDER) in the join table):
@Entity
public class A {

    private int aId;
    private List<B> bs;

    @Id
    public int getAId() { return aId; }
    public void setAId(int aId) { this.aId = aId; }

    @ManyToMany
    @JoinTable(name = "A_B",
        joinColumns = {@JoinColumn(name = "AID")},
        inverseJoinColumns = {@JoinColumn(name = "BID")}
    )
    @org.hibernate.annotations.IndexColumn(name = "BS_ORDER")
    public List<B> getBs() { return bs; }
    public void setBs(List<B> aBs) { bs = aBs; }
}
And Hibernate will update the BS_ORDER column as required upon update/removal of Bs.
References
Hibernate Annotations 3.4 Reference Guide
2.2.5.3. Collections
2.4.6.2. Extra collection types
Hibernate Annotations 3.5 Reference Guide
2.4.6. Collection related annotations
I've modified my test code and switched to a Set as a replacement for the List. With this change the delete and insert statements are no longer generated, as long as the collection remains unmodified. Unfortunately there are situations in my project where the collection is modified. That is why I was asking for read-only semantics, where Hibernate loads the mapped data from the DB but does not save any changes that might have been made to the collection.
Yes, I know that this is what you were asking for, but:
I thought your concern was the DELETE then INSERT, and the "read only" part of your question looked like an ugly workaround for the real problem.
The "extra" requirement, i.e. not saving the state of a persistent collection that has been modified (which is unusual; when you have a persistent collection, you usually want to save it if you modify its state), wasn't clear, at least not to me.
Anyway... Regarding the second point, Hibernate's @Immutable annotation won't fit here (it disallows all modifications, throwing an exception if any occur). But maybe you could work on a transient copy of the collection instead of modifying the persistent one?
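The "transient copy" suggestion can look like this (plain collections; the strings and the persistentBs variable stand in for a.getBs()): mutate only the copy, so the persistent collection is never dirtied and nothing gets flushed for it.

```java
import java.util.ArrayList;
import java.util.List;

public class TransientCopySketch {
    public static void main(String[] args) {
        // Stand-in for the persistent collection a.getBs().
        List<String> persistentBs = new ArrayList<>(List.of("b1", "b2"));

        // Work on a detached copy instead of the managed collection.
        List<String> workingCopy = new ArrayList<>(persistentBs);
        workingCopy.add("b3"); // only the copy changes

        System.out.println(persistentBs.size()); // 2 - nothing to flush
        System.out.println(workingCopy.size());  // 3
    }
}
```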

Hibernate - #ElementCollection - Strange delete/insert behavior

@Entity
public class Person {

    @ElementCollection
    @CollectionTable(name = "PERSON_LOCATIONS", joinColumns = @JoinColumn(name = "PERSON_ID"))
    private List<Location> locations;

    [...]
}

@Embeddable
public class Location {
    [...]
}
Given the following class structure, when I try to add a new location to the list of Person's Locations, it always results in the following SQL queries:
DELETE FROM PERSON_LOCATIONS WHERE PERSON_ID = :idOfPerson
And
A lotsa' inserts into the PERSON_LOCATIONS table
Hibernate (3.5.x / JPA 2) deletes all associated records for the given Person and re-inserts all previous records, plus the new one.
I had the idea that an equals/hashCode method on Location would solve the problem, but it didn't change anything.
Any hints are appreciated!
The problem is explained on the ElementCollection page of the JPA wikibook:

Primary keys in CollectionTable

The JPA 2.0 specification does not provide a way to define the Id in the Embeddable. However, to delete or update an element of the ElementCollection mapping, some unique key is normally required. Otherwise, on every update the JPA provider would need to delete everything from the CollectionTable for the Entity, and then insert the values back. So, the JPA provider will most likely assume that the combination of all of the fields in the Embeddable are unique, in combination with the foreign key (JoinColumn(s)). This however could be inefficient, or just not feasible if the Embeddable is big, or complex.
And this (the delete-everything-then-re-insert behavior) is exactly what happens here: Hibernate doesn't generate a primary key for the collection table, has no way to detect which element of the collection changed, and therefore deletes the old content from the table and re-inserts the new content.
However, if you define an #OrderColumn (to specify a column used to maintain the persistent order of a list - which would make sense since you're using a List), Hibernate will create a primary key (made of the order column and the join column) and will be able to update the collection table without deleting the whole content.
Something like this (if you want to use the default column name):
@Entity
public class Person {
    ...

    @ElementCollection
    @CollectionTable(name = "PERSON_LOCATIONS", joinColumns = @JoinColumn(name = "PERSON_ID"))
    @OrderColumn
    private List<Location> locations;

    ...
}
References
JPA 2.0 Specification
Section 11.1.12 "ElementCollection Annotation"
Section 11.1.39 "OrderColumn Annotation"
JPA Wikibook
Java Persistence/ElementCollection
In addition to Pascal's answer, you also have to set at least one column as NOT NULL:
@Embeddable
public class Location {

    @Column(name = "path", nullable = false)
    private String path;

    @Column(name = "parent", nullable = false)
    private String parent;

    public Location() {
    }

    public Location(String path, String parent) {
        this.path = path;
        this.parent = parent;
    }

    public String getPath() {
        return path;
    }

    public String getParent() {
        return parent;
    }
}
This requirement is documented in AbstractPersistentCollection:

Workaround for situations like HHH-7072. If the collection element is a component that consists entirely of nullable properties, we currently have to forcefully recreate the entire collection. See the use of hasNotNullableColumns in the AbstractCollectionPersister constructor for more info. In order to delete row-by-row, that would require SQL like "WHERE ( COL = ? OR ( COL is null AND ? is null ) )", rather than the current "WHERE COL = ?" (fails for null for most DBs). Note that the param would have to be bound twice. Until we eventually add "parameter bind points" concepts to the AST in ORM 5+, handling this type of condition is either extremely difficult or impossible. Forcing recreation isn't ideal, but not really any other option in ORM 4.
We discovered that the entities we were defining as our ElementCollection types did not have equals or hashCode methods defined and had nullable fields. We provided those (via Lombok, for what it's worth) on the element type, and that allowed Hibernate (v5.2.14) to identify whether or not the collection was dirty.
Additionally, this error manifested for us because we were inside a service method marked with @Transactional(readOnly = true). Since Hibernate would attempt to clear the related element collection and insert it all over again, the transaction would fail when being flushed, and things were breaking with this very hard-to-trace message:
HHH000346: Error during managed flush [Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1]
Here is an example of our entity model that had the error
@Entity
public class Entity1 {
    @ElementCollection @Default private Set<Entity2> relatedEntity2s = Sets.newHashSet();
}

public class Entity2 {
    private UUID someUUID;
}

Changing it to this:

@Entity
public class Entity1 {
    @ElementCollection @Default private Set<Entity2> relatedEntity2s = Sets.newHashSet();
}

@EqualsAndHashCode
public class Entity2 {
    @Column(nullable = false)
    private UUID someUUID;
}
Fixed our issue. Good luck.
I had the same issue but wanted to map a list of enums: List<EnumType>.
I got it working like this:
@ElementCollection
@CollectionTable(
    name = "enum_table",
    joinColumns = @JoinColumn(name = "some_id")
)
@OrderColumn
@Enumerated(EnumType.STRING)
private List<EnumType> enumTypeList = new ArrayList<>();

public void setEnumList(List<EnumType> newEnumList) {
    this.enumTypeList.clear();
    this.enumTypeList.addAll(newEnumList);
}
The issue in my case was that the List object was always replaced via the default setter, so Hibernate treated it as a completely "new" object even though the enums did not change.
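The key point of that setter is that it mutates the collection instance Hibernate handed out rather than replacing the field's reference. In plain Java (strings stand in for the enum values):

```java
import java.util.ArrayList;
import java.util.List;

public class InPlaceSetterSketch {
    public static void main(String[] args) {
        List<String> managed = new ArrayList<>(List.of("A", "B"));
        List<String> hibernatesHandle = managed; // the instance Hibernate tracks

        // Mutate-in-place "setter": clear and refill the same instance.
        managed.clear();
        managed.addAll(List.of("A", "B", "C"));

        // The tracked instance and the field still agree - no "new" collection.
        System.out.println(hibernatesHandle == managed); // true
        System.out.println(hibernatesHandle);            // [A, B, C]
    }
}
```

Replacing the reference (managed = new ArrayList<>(...)) would instead leave Hibernate holding the old instance, which is what triggers the delete-and-reinsert.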
