I want to rewrite the delete operation that EclipseLink sends for the association table of a many-to-many association, using only Java code.
Let me explain the goal.
I have 3 tables, Person, Unit and an associative one, PerInUnit, so a person can be in multiple units and a unit can contain many people. But I have dependencies on the PerInUnit table (whether the person was present or not on a specific date is stored in another table, Participations), so I can't (and don't want to) delete a record. Instead, I do soft deletes, so I can keep the records for statistics.
I have already read about the Customizer and AdditionalCriteria and set them on the PerInUnit class. It works perfectly: when I call em.remove(myPerInUnit); the SQL query sent to the database is UPDATE PER_IN_UNIT SET STATUS='delete' WHERE ID = #ID; and the specified row gets "delete" as its status. Also, when I read all records, I don't get the ones with status "delete". But in those cases I am using the PerInUnit class explicitly.
Here is the code:
@Entity
@Table(name = "PER_IN_UNIT")
@AdditionalCriteria("this.status is null")
@Customizer(PIUCustomizer.class)
public class PerInUnit implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "GEN_SEQ_PIU")
    @SequenceGenerator(name = "GEN_SEQ_PIU", sequenceName = "SEQ_PIU", initialValue = 1, allocationSize = 1)
    @Column(name = "ID")
    private Long id;

    @ManyToOne(cascade = javax.persistence.CascadeType.PERSIST)
    @JoinColumn(name = "PER_ID")
    private Person person;

    @ManyToOne(cascade = javax.persistence.CascadeType.PERSIST)
    @JoinColumn(name = "UNI_ID")
    private Unit unit;

    @Column(name = "STATUS")
    private String status;

    // Constructor, getters, setters
}
And the code for the PIUCustomizer:
public class PIUCustomizer implements DescriptorCustomizer {
    @Override
    public void customize(ClassDescriptor descriptor) {
        descriptor.getQueryManager().setDeleteSQLString("UPDATE PER_IN_UNIT SET STATUS = 'delete' WHERE ID = #ID");
    }
}
Here comes the problem: since I use EclipseLink with a bidirectional relationship, I want to write an instruction like myUnit.getPeople().remove(currentPerson); (remove the current person from the unit "myUnit"). But EclipseLink sends the following statement (during commit!):
DELETE FROM PER_IN_UNIT WHERE ((UNI_ID = ?) AND (PER_ID = ?))
instead of the
Update PER_IN_UNIT SET STATUS='delete' WHERE ((UNI_ID = ?) AND (PER_ID = ?))
that I expected, and it raises (obviously, because of the dependencies (FKs)) the following exception:
Query: DataModifyQuery(sql="DELETE FROM PER_IN_UNIT WHERE ((UNI_ID = ?) AND (PER_ID = ?))")
at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:157)
at test.Crud.update(Crud.java:116)
at test.Test.runTest(Test.java:96)
at test.Test.main(Test.java:106)
Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLIntegrityConstraintViolationException: ORA-02292: integrity constraint (PEOPLE.FK_PAR_PIU) violated - child record found
Another problem of the same kind: when I do something like System.out.println(myUnit.getPeople()) I get all the people in the unit "myUnit", including those with status 'delete'.
Is it possible to change some code/instructions/Customizer/etc. in EclipseLink to change the delete call issued for the PerInUnit table, or do I have to write my own queries and use them instead of relying on the power of the ORM?
Thanks for your answers, and please forgive my poor English!
Fab
You should not be getting a delete when you call myUnit.getPeople().remove(currentPerson) unless you mapped Unit to Person with a ManyToMany using the PER_IN_UNIT table. Since you have an entity for the PER_IN_UNIT table, this would be wrong, as it really should be a Unit -> PerInUnit OneToMany mapping and then a PerInUnit -> Person ManyToOne mapping. The myUnit.getPeople().remove(currentPerson) call would then simply be getting the PerInUnit instance and marking its status as deleted, or dereferencing it and letting JPA call remove, thereby using your soft-delete SQL query.
A ManyToMany mapping over the PER_IN_UNIT table is completely independent of your PerInUnit entity mapping, and knows nothing about the entities that may be cached or the soft deletes required to remove them. If you don't want to map the PER_IN_UNIT table as an entity, see http://www.eclipse.org/forums/index.php/t/243467/ which shows how to configure a ManyToMany mapping for soft deletes.
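To illustrate, here is a minimal sketch of that remapping; the memberships field, its accessors and the removePerson() helper are illustrative assumptions, not part of the original code:

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;

@Entity
@Table(name = "UNIT")
public class Unit implements Serializable {

    @Id
    @Column(name = "ID")
    private Long id;

    // Unit -> PerInUnit is a plain OneToMany over the join entity rather than a
    // ManyToMany over PER_IN_UNIT, so removals go through the PerInUnit descriptor.
    @OneToMany(mappedBy = "unit")
    private List<PerInUnit> memberships = new ArrayList<>();

    // Soft-remove a person from this unit by flagging the join row, mirroring the
    // customized UPDATE ... SET STATUS = 'delete'. Alternatively, dereference the
    // PerInUnit and call em.remove(piu) so EclipseLink issues the custom delete SQL.
    public void removePerson(Person person) {
        for (PerInUnit piu : memberships) {
            if (piu.getPerson().equals(person)) {
                piu.setStatus("delete");
            }
        }
    }

    public List<PerInUnit> getMemberships() {
        return memberships;
    }
}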
I have a case where I'm persisting a large jsonb field into a Postgres table, but do not want to read it when I fetch the entity; if I do fetch it, my service goes OOM. A better design might be to separate this into a one-to-one table, but I can't do that at this time.
To plead that this is not a duplicate question, here's some of my research:
I'm not able to mark the column LAZY since I have a simple column, not a join
JPA/Hibernate write only field with no read
I tried the empty setter in this suggestion, which makes sense - but it still appears to read the column and I OOM: https://www.zizka.ch/pages/programming/java/hibernate/hibernate-write-only.html
I also tried omitting the setter altogether in my @Data class: Omitting one Setter/Getter in Lombok
So I cannot see the field, but I can't seem to keep it from being read into memory in the background. It seems like there must be some simple setting in JPA or Hibernate to exclude a column from being read. Before I go and try to build a complex repository hierarchy just to see if it works, I thought I would ask here in case I get lucky.
Thanks in advance!
Lazy loading attributes
Hibernate can load attributes lazily, but you need to enable bytecode enhancement:
First you need to set the property hibernate.enhancer.enableLazyInitialization to true
Then you can annotate the field with @Basic( fetch = FetchType.LAZY ).
Here's the example from the documentation:
@Entity
public class Customer {

    @Id
    private Integer id;

    private String name;

    @Basic( fetch = FetchType.LAZY )
    private UUID accountsPayableXrefId;

    @Lob
    @Basic( fetch = FetchType.LAZY )
    @LazyGroup( "lobs" )
    private Blob image;

    // Getters and setters are omitted for brevity
}
You can also enable this feature via the Hibernate ORM gradle plugin
Named native queries
You could also decide not to map it and save/read it with a named native query. This seems like a good trade-off for a single attribute; it will just require an additional query to save the JSON.
Example:
@Entity
@Table(name = "MyEntity_table")
@NamedNativeQuery(
    name = "write_json",
    query = "update MyEntity_table set json_column = :json where id = :id")
@NamedNativeQuery(
    name = "read_json",
    query = "select json_column from MyEntity_table where id = :id")
class MyEntity {
    ....
}
Long id = ...
String jsonString = ...
session.createNamedQuery( "write_json" )
.setParameter( "id", id )
.setParameter( "json", jsonString )
.executeUpdate();
jsonString = (String)session.createNamedQuery( "read_json" )
.setParameter( "id", id )
.getSingleResult();
In this case, schema generation is not going to create the column, so you will need to add it manually (not a big deal, considering that there are better tools to update the schema in production).
MappedSuperclass
You can also have two entities extending the same superclass (this way you don't have to copy the attributes). They have to map the same table:
@MappedSuperclass
class MyEntity {
    @Id
    Long id;

    String name;
    ...
}

@Entity
@Table(name = "MyEntity_table")
class MyEntityWriter extends MyEntity {
    String json;
}

@Entity
@Table(name = "MyEntity_table")
class MyEntityReader extends MyEntity {
    // No field is necessary here
}
Now you can use MyEntityWriter for saving all the values and MyEntityReader for loading only the values you need.
I think you will have some problems with schema generation if you try to create the tables because only one of the two will be created:
If MyEntityWriter is the first table created, then no problem
If MyEntityWriter is the second table created, the query will fail because the table already exists, and the additional column won't be created.
I haven't tested this solution, though; there might be something I haven't thought about.
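For completeness, a hypothetical usage sketch of the writer/reader split described above (assuming getters/setters exist, a Hibernate Session is at hand, and largeJsonString holds the JSON payload):

// Write everything, including the large json column, through the writer entity.
MyEntityWriter writer = new MyEntityWriter();
writer.setId(1L);
writer.setName("example");
writer.setJson(largeJsonString);
session.persist(writer);

// Read the lightweight view back through the reader entity: the json column is
// not mapped there, so it is never selected and never loaded into memory.
MyEntityReader reader = session.get(MyEntityReader.class, 1L);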
I have two domain objects like so:
@Entity
public class Employee {

    @Id
    @Column(nullable = false, name = "id")
    protected Integer id;

    // Note: org_id is just an integer column in the database
    @JoinColumn(nullable = true, name = "org_id")
    @ManyToOne(targetEntity = Org.class)
    private Org org;
}
...and:
@Entity
public class Org {

    @Id
    @Column(nullable = false, name = "id")
    protected Integer id;
}
I've come to the situation in my logic where I need to make some drastic changes to what's actually saved in the database, i.e. some Orgs are getting deleted and the Employees who were in them are getting re-allocated.
The issue I have is that my program logic currently does the following:
1. Delete any Employees that need to be deleted via org.springframework.data.repository.CrudRepository.delete(Iterable<? extends T> itrbl)
2. Delete any Orgs that need to be deleted via org.springframework.data.repository.CrudRepository.delete(Iterable<? extends T> itrbl)
3. Create new/update existing Orgs via org.springframework.data.repository.CrudRepository.save(Iterable<S> itrbl)
4. Create new/update existing Employees via org.springframework.data.repository.CrudRepository.save(Iterable<S> itrbl)
The issue comes about at step 2. I get an exception like this:
org.springframework.dao.InvalidDataAccessApiUsageException:
org.hibernate.TransientPropertyValueException: object references an
unsaved transient instance - save the transient instance before
flushing : com.sample.domain.Employee.org -> com.sample.domain.Org;
nested exception is java.lang.IllegalStateException:
org.hibernate.TransientPropertyValueException: object references an
unsaved transient instance - save the transient instance before
flushing : com.sample.domain.Employee.org
-> com.sample.domain.Org
If an Org ends up with no employees I don't want to delete the Org. Likewise, if an Employee of an Org gets deleted I don't want the Org to be deleted either.
I essentially just want something that's the same as how I've got the foreign key setup in PostgreSQL on the employees table:
CONSTRAINT fk_employees_org_id FOREIGN KEY (org_id)
REFERENCES public.orgs (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE SET NULL
I've looked at the cascade options, and I'm not sure they're applicable, seeing as it's not a straight parent/child relationship (the Employee that defines the @ManyToOne relationship isn't really the parent - it's the child) and it's not bidirectional (there's no need for an Org to have a list of all of its Employees).
You don't want cascade, since you've said yourself you don't want related objects to be deleted (and that's all that cascade does).
If an Org needs to be deleted yet still has a FK pointing to it, then just null out the link to the Org in the Employee(s) ... PRIOR to delete of the Org. You can do this via a JPQL query to retrieve all Employee objects linked to a particular Org, and then null their relation field. Alternatively a Bulk Update could do it in one go (but be careful about in-memory objects since they would need refresh() calling on them to pick up this nulling of the FK).
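As a sketch of the bulk-update variant in Spring Data JPA (the repository interface and method name below are illustrative, not taken from the question), something like this can be called before deleting the Orgs in step 2:

import java.util.Collection;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface EmployeeRepository extends JpaRepository<Employee, Integer> {

    // Null the org reference of every Employee still pointing at one of the Orgs
    // about to be deleted. A bulk update bypasses the persistence context, so any
    // Employee instances already loaded in memory need refresh() afterwards.
    // (Must run inside a transaction.)
    @Modifying
    @Query("update Employee e set e.org = null where e.org in :orgs")
    int detachFromOrgs(@Param("orgs") Collection<Org> orgs);
}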
I am trying to understand the one-to-many mapping in Hibernate with a small example. I have a Product with a set of Parts. Here are my entity classes:
Part.java
@Entity
public class Part {
    @Id
    @GeneratedValue
    int id;

    String partName;

    // Setters & Getters
}
Product.java
@Entity
public class Product {

    private String serialNumber;
    private Set<Part> parts = new HashSet<Part>();

    @Id
    public String getSerialNumber() {
        return serialNumber;
    }

    @OneToMany
    @JoinColumn(name = "PRODUCT_ID")
    public Set<Part> getParts() {
        return parts;
    }

    // Setter methods
}
Then I tried to save some parts and products in my database and observed the following queries generated by Hibernate:
Hibernate: insert into Product (serialNumber) values (?)
Hibernate: insert into Part (partName, id) values (?, ?)
Hibernate: update Part set PRODUCT_ID=? where id=?
Here, to add a record to the Part table, Hibernate generates 2 DML operations: an insert and an update. If a single insert is sufficient to add a record to the table, why does Hibernate use both an insert and an update in this case? Please explain.
I know this is crazy old but I had the same problem and Google brought me here, so after fixing it I figured I should post an answer.
Hibernate will switch the insert/update approach to straight inserts if you make the join column not nullable and not updatable, which I assume is already true of your schema anyway:
@JoinColumn(name = "PRODUCT_ID", nullable = false, updatable = false)
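Applied to the question's Product mapping, that would look like:

@OneToMany
@JoinColumn(name = "PRODUCT_ID", nullable = false, updatable = false)
public Set<Part> getParts() {
    return parts;
}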
If Part is mapped as a composite-element list, only two queries are issued. Please check and report back.
If it is not a composite element, Hibernate inserts each entity with a separate query and then tries to create the relationship between them.
In the former case, Hibernate inserts the row together with the relationship key.
Hibernate: insert into Product (serialNumber) values (?)
Hibernate: insert into Part (partName, id) values (?, ?)
In these two queries Hibernate is simply inserting a record into the database.
At that stage Hibernate is not creating any relationship between the two entities.
Hibernate: update Part set PRODUCT_ID=? where id=?
Now, after inserting the entity rows, Hibernate establishes the relationship between the two by using the third query above.
The association is unidirectional, so Product is the owning side (because it's the only side).
Make the association bidirectional and make Part the association owner. That way you will avoid the redundant updates, because the foreign key values will be specified as part of the insert statements for Part.
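A sketch of that bidirectional mapping (the product field name and its accessors are assumptions):

@Entity
public class Part {
    @Id
    @GeneratedValue
    int id;

    String partName;

    // Owning side: Part holds the foreign key, so PRODUCT_ID is written as part
    // of the insert into Part and no separate update statement is needed.
    @ManyToOne
    @JoinColumn(name = "PRODUCT_ID")
    private Product product;

    // Setters & Getters
}

@Entity
public class Product {
    @Id
    private String serialNumber;

    // Inverse side: mappedBy refers to the owning field in Part. Remember to set
    // both sides in code, e.g. part.setProduct(product) when adding a part.
    @OneToMany(mappedBy = "product")
    private Set<Part> parts = new HashSet<Part>();
}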
I have the following associated classes with a one-to-one mapping.
@Entity
public class EmployeeEntity
{
    @Id
    private String id;

    private String name;

    @OneToOne(mappedBy = "employeeEntity", fetch = FetchType.EAGER, cascade = CascadeType.ALL)
    @Fetch(FetchMode.SELECT)
    @JoinColumn(name = "empid")
    private AddressEntity addressEntity;
    ...
    // getters & setters
}
@Entity
public class AddressEntity
{
    @Id
    @Column(unique=true, nullable=false)
    @GeneratedValue(generator="gen")
    @GenericGenerator(name="gen", strategy="foreign", parameters=@Parameter(name="property", value="employeeEntity"))
    private String empId;

    @OneToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
    @PrimaryKeyJoinColumn
    private EmployeeEntity employeeEntity;
    ...
    // getters & setters
}
I am using Postgres and have tables (employeeentity, addressentity) with the following foreign key constraint on the addressentity table:
Foreign-key constraints:
"fkakhilesh" FOREIGN KEY (empid) REFERENCES employeeentity(id) ON DELETE CASCADE
I have the following requirements with different REST calls:
1. POST REST call - should create an employee with an address.
2. POST REST call - should create an employee without an address.
3. GET REST call - should retrieve an employee. The address should also come back if it exists.
4. PUT REST call - should update an employee and address (if the address exists).
5. PUT REST call - should update an employee and address (when an address is passed and it already exists in the addressentity table for that empid).
6. PUT REST call - should update an employee and create the address (when an address is passed and it does not exist in the addressentity table for that empid).
I am able to perform operations 1 to 5 without any issues.
The main problem is with number 6, and the following questions come to my mind:
1. When I do getSession().update(object), I get Hibernate's StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1. Is this not possible with "update" if the address does not exist? Can't I create a new address during the update?
2. Do I need to change my ServiceImpl call to getSession().merge(object)? Can this case only be handled by calling "merge"? How does it impact performance?
3. If I do merge, I get Hibernate's IdentifierGenerationException: attempted to assign id from null one-to-one property. Am I missing something here?
4. Can this be solved by changing the Hibernate mapping, or is it something related to cascade?
5. What is the importance of @GeneratedValue(generator="gen") here? Why is @Parameter used in @GenericGenerator?
I am new to Hibernate and trying to get into the depths of Hibernate mapping.
Also, I would be delighted if you could advise me on the design and what the best way to handle this would be.
I found the fix for this. This one-to-one mapping is somewhat tricky and not as simple as I initially thought.
I have used a bidirectional one-to-one mapping, so it is important to call the setters of both EmployeeEntity and AddressEntity to set each other during the update. For example:
employeeEntity.setAddressEntity(addressEntity) and addressEntity.setEmployeeEntity(employeeEntity) have to be called explicitly.
Setting only employeeEntity.setAddressEntity(addressEntity) will not work.
Always use an integer id, and use getSession().saveOrUpdate(entity); for save or update.
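A minimal sketch of that fix in the update flow (variable names and the cast are illustrative; empId stands for the employee's id from the PUT request):

// Load the managed employee, build the new address, then wire BOTH sides of the
// bidirectional one-to-one before saving, so the "foreign" generator can derive
// the address id from the employee.
EmployeeEntity employee = (EmployeeEntity) getSession().get(EmployeeEntity.class, empId);
AddressEntity address = new AddressEntity();
// ... populate address fields from the incoming payload ...
employee.setAddressEntity(address);
address.setEmployeeEntity(employee);

// CascadeType.ALL on EmployeeEntity.addressEntity propagates the save of the address.
getSession().saveOrUpdate(employee);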
In the one-to-one mapping you should specify constrained=true on the child. It makes the child id the same as the parent id.
Use these lines for the child id. I don't know the Java annotation syntax.
<generator class="foreign">
<param name="property">employeeEntity</param>
</generator>
Also remove the fetch type and CascadeType.ALL from the child. I think the default fetch mode is select, which is fine. Cascade is usually used on the parent side, which is responsible for the parent-child relation.
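For reference, a sketch of the annotation equivalent of that XML (it mirrors the question's AddressEntity, with the fetch type and cascade removed as suggested; @GenericGenerator and @Parameter come from org.hibernate.annotations):

@Id
@GeneratedValue(generator = "gen")
@GenericGenerator(name = "gen", strategy = "foreign",
        parameters = @Parameter(name = "property", value = "employeeEntity"))
private String empId;

// Child side of the one-to-one; like constrained="true" in the XML mapping,
// the child's id is taken from the referenced parent.
@OneToOne
@PrimaryKeyJoinColumn
private EmployeeEntity employeeEntity;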
Why would the following query fail due to a foreign key constraint? There is no other way for me to delete the associated data that I am aware of.
Query query=em.createQuery("DELETE FROM Person");
query.executeUpdate();
em.getTransaction().commit();
I believe the offending relationship causing the problem is the activationKey field:
2029 [main] ERROR org.hibernate.util.JDBCExceptionReporter - integrity
constraint violation: foreign key no action; FKCEC6E942485388AB
table: ACTIVATION_KEY
This is what I have now:
@Entity
@Table(name="person")
public class Person implements Comparable<Person> {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name="id")
    private long id = 0;

    @ElementCollection
    @Column(name = "activation_key")
    @CollectionTable(name = "activation_key")
    private Set<String> activationKey = new HashSet<String>();
}
Why would the following query fail due to a foreign key constraint?
It looks like your bulk delete query is not deleting the entries from the collection table, hence the FK constraint violation.
While the JPA spec explicitly states that a bulk delete is not cascaded to related entities:
4.10 Bulk Update and Delete Operations
...
A delete operation only applies to
entities of the specified class and
its subclasses. It does not cascade to
related entities.
That's not exactly your case (an @ElementCollection is not a related entity), and I think that what you want to do should be supported.
You're probably facing one of the limitations of Hibernate's bulk delete; see for example:
HHH-3337 - Hibernate disregards #JoinTable when generating bulk UPDATE/DELETE for a self-joined entity
HHH-1917 - Bulk Delete on the owning side of a ManyToMany relation needs to delete corresponding rows from the JoinTable
I suggest raising an issue.
Workaround: use native queries to delete the collection table and then the entity table.
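A sketch of that workaround with the table names from the mapping above (if only a subset of persons were being deleted, the first statement would need a matching WHERE on the collection table's foreign-key column):

em.getTransaction().begin();
// Child rows first: the collection table holds the FK to person.
em.createNativeQuery("DELETE FROM activation_key").executeUpdate();
em.createNativeQuery("DELETE FROM person").executeUpdate();
em.getTransaction().commit();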