Integrity constraint violation when deleting object in ManyToMany relationship - java

I have a many-to-many relationship between CATEGORY and PRODUCT in a very basic e-commerce Java app.
Category has a @ManyToMany relation with Product, so there is a join table CATEGORY_PRODUCT with two columns, CATEGORY_ID and PRODUCTS_ID.
I want to delete all relations for a certain product in that table. Am I doing it right?
public void deleteProduct(long id) {
    Session session = HibernateUtil.getCurrentSession();
    session.beginTransaction();
    Product product = session.find(entityClass, id);
    String sql = "DELETE FROM PUBLIC.CATEGORY_PRODUCT WHERE PRODUCTS_ID = " + id;
    SQLQuery query = session.createSQLQuery(sql);
    query.setResultTransformer(Criteria.ALIAS_TO_ENTITY_MAP);
    session.delete(product);
    session.getTransaction().commit();
}
The plan is to delete the product, but I get "integrity constraint violation" errors because of the relationship.

Before you call session.delete(product), you need to call query.executeUpdate().
You don't need the query.setResultTransformer() call; get rid of that line. executeUpdate() returns an int that says how many records were deleted.
You asked this several months ago and I'm just now seeing it. I assume you aren't still waiting on an answer, but maybe this can help the next person.
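Putting that advice together, the method could look like the sketch below (untested; it also switches to a named parameter instead of string concatenation, which avoids SQL injection, and assumes Product.class can stand in for the entityClass field):

```java
public void deleteProduct(long id) {
    Session session = HibernateUtil.getCurrentSession();
    session.beginTransaction();
    Product product = session.find(Product.class, id);

    // Remove the join-table rows first so the FK constraint is satisfied
    int removed = session
        .createSQLQuery("DELETE FROM PUBLIC.CATEGORY_PRODUCT WHERE PRODUCTS_ID = :id")
        .setParameter("id", id)
        .executeUpdate();

    session.delete(product);
    session.getTransaction().commit();
}
```

The key point is the ordering: the join-table rows must be gone before the PRODUCT row is deleted, and both must happen in the same transaction.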

Related

Spring data - insert data depending on previous insert

I need to save data into two tables (an entity and an association table).
I simply save my entity with the save() method from my entity repository.
Then, for performance, I need to insert rows into an association table in native SQL. The rows have a reference to the entity I saved before.
Here is the issue: I get an integrity constraint exception concerning a foreign key. The entity saved first isn't known in this second query.
Here is my code:
The repo:
public interface DistributionRepository extends JpaRepository<Distribution, Long>, QueryDslPredicateExecutor<Distribution> {

    @Modifying
    @Query(value = "INSERT INTO DISTRIBUTION_PERIMETER(DISTRIBUTION_ID, SERVICE_ID) SELECT :distId, p.id FROM PERIMETER p "
            + "WHERE p.id in (:serviceIds) AND p.discriminator = 'SRV' ", nativeQuery = true)
    void insertDistributionPerimeter(@Param(value = "distId") Long distributionId, @Param(value = "serviceIds") Set<Long> servicesIds);
}
The service :
@Service
public class DistributionServiceImpl implements IDistributionService {

    @Inject
    private DistributionRepository distributionRepository;

    @Override
    @Transactional
    public DistributionResource distribute(final DistributionResource distribution) {
        // 1. Entity creation and saving
        Distribution created = new Distribution();
        final Date distributionDate = new Date();
        created.setStatus(EnumDistributionStatus.distributing);
        created.setDistributionDate(distributionDate);
        created.setDistributor(agentRepository.findOne(distribution.getDistributor().getMatricule()));
        created.setDocument(documentRepository.findOne(distribution.getDocument().getTechId()));
        created.setEntity(entityRepository.findOne(distribution.getEntity().getTechId()));
        created = distributionRepository.save(created);

        // 2. Association table
        final Set<Long> serviceIds = new HashSet<Long>();
        for (final ServiceResource sr : distribution.getServices()) {
            serviceIds.add(sr.getTechId());
        }

        // EXCEPTION HERE
        distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);
    }
}
The two queries seem to be in different transactions even though I set the @Transactional annotation. I also tried executing my second query with entityManager.createNativeQuery() and got the same result...
Invoke entityManager.flush() before you execute your native queries, or use saveAndFlush instead.
In your specific case I would recommend using
created = distributionRepository.saveAndFlush(created);
Important: your "native" queries must use the same transaction! (Otherwise you would need a different transaction isolation level.)
You also wrote:
I don't really understand why the flush action is not done by default
Flushing is handled by Hibernate (it can be configured; the default is "auto"). This means Hibernate will flush the data at some point in time, but always before you commit the transaction or execute another SQL statement via Hibernate. So normally this is no problem; in your case, however, you bypass Hibernate with your native query, so Hibernate does not know about that statement and therefore does not flush its pending data first.
See also this answer of mine: https://stackoverflow.com/a/17889017/280244 about this topic
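Applied to the service above, the fix is a one-line change; saveAndFlush is part of Spring Data JPA's JpaRepository, so no extra plumbing is needed:

```java
// Flush the pending INSERT so the native query's foreign-key lookup
// can see the new DISTRIBUTION row within the same transaction
created = distributionRepository.saveAndFlush(created);
distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);
```
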

Update record if somecolumn != "somevalue", using Hibernate saveOrUpdate()

I am trying to write a Criteria query in Hibernate. The desired behavior: if column empField1's value is not 'REGULARIZE', then update the record; otherwise, do not.
I have tried the following:
Session session = factory1.openSession();
Criteria criteria=session.createCriteria(EmployeePunch.class);
criteria.add(Restrictions.ne("empField1","REGULARIZE"));
EmployeePunch empPunch = (EmployeePunch)criteria.uniqueResult();
empPunch.setId(empPuncId);
empPunch.setSigninTime(inTime);
empPunch.setSigninDate(dateOfUpdate);
empPunch.setSignoutTime(outTime);
empPunch.setPresent(presentStatus);
empPunch.setLastUpdateBy(empcode);
empPunch.setLastUpdateDate(time);
empPunch.setEmpField1(remark);
session.saveOrUpdate(empPunch);
tx.commit();
but it gives me error
Exception : query did not return a unique result: 527
I think you forgot to restrict by id; without the id, Hibernate can match multiple records whose empField1 is not "REGULARIZE".
You should add the id as well, like below:
Criteria criteria = session.createCriteria(EmployeePunch.class);
criteria.add(Restrictions.ne("empField1", "REGULARIZE"))
        .add(Restrictions.eq("empPuncId", empPuncId));
Now it will return a single matching record, which can then be updated.
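One caveat worth adding (not in the original answer): uniqueResult() returns null when no row matches, i.e. when the record's empField1 already is 'REGULARIZE', so the result should be guarded before calling the setters:

```java
EmployeePunch empPunch = (EmployeePunch) criteria.uniqueResult();
if (empPunch != null) {
    // Only reached for a record whose empField1 is not 'REGULARIZE'
    empPunch.setSigninTime(inTime);
    // ... remaining setters as in the question ...
    session.saveOrUpdate(empPunch);
}
```
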
That means that, with that criteria, there are multiple matching records in your database.
To find out how many, try
List<EmployeePunch> emps = (List<EmployeePunch>) criteria.list();
emps will give you the list of EmployeePunch records that meet the criteria; iterate over it to see how many rows are in the database.
Why not use HQL in this way?
Query query = session.createQuery(
        "update EmployeePunch set signinTime = :signinTime, signinDate = :signinDate "
        + "where empField1 <> 'REGULARIZE'")
    .setParameter("signinTime", signinTime)
    .setParameter("signinDate", signinDate);
int updateRecordCount = query.executeUpdate();
Of course, you have to set values for the other properties (except for the id if it is your @Id field); updateRecordCount gives you the count of updated records.

JPA merge after delete query fails

Please analyze the following two pieces of code and tell me why the first one fails with a primary key violation when committing, while the second one doesn't.
Code which fails at commit:
try {
    Query q = em.createQuery("DELETE FROM Puntaje");
    q.executeUpdate();
    //em.getTransaction().commit();
    //em.getTransaction().begin();
    Iterator it = l.iterator();
    while (it.hasNext()) {
        DataPuntaje dp = (DataPuntaje) it.next();
        Cliente c = new Cliente(dp.getCliente());
        Puntaje p = new Puntaje(dp.getPuntaje(), c);
        c.agregarPuntaje(p);
        em.merge(c);
    }
    System.out.println("test1");
    em.getTransaction().commit();
    System.out.println("test2");
}
Code which works fine:
try {
    Query q = em.createQuery("DELETE FROM Puntaje");
    q.executeUpdate();
    em.getTransaction().commit();
    em.getTransaction().begin();
    Iterator it = l.iterator();
    while (it.hasNext()) {
        DataPuntaje dp = (DataPuntaje) it.next();
        Cliente c = new Cliente(dp.getCliente());
        Puntaje p = new Puntaje(dp.getPuntaje(), c);
        c.agregarPuntaje(p);
        em.merge(c);
    }
    System.out.println("test1");
    em.getTransaction().commit();
    System.out.println("test2");
}
The only difference is that the first one does not commit the delete query, but instead commits everything together at the end.
Cliente and Puntaje are in a 1:N bidirectional relation with cascade = ALL.
All the inserted instances of Cliente have the same ID, but merge should be smart enough to update instead of insert after the first one is persisted; that seems to fail in the first example and I can't find any explanation.
I'm using an embedded H2 database.
I would also like to add that the first code works FINE if there is already an inserted Cliente row; it fails when the table is actually empty, so the delete actually does nothing.
This is the error I'm getting:
Internal Exception: org.h2.jdbc.JdbcSQLException: Unique index or primary key violation: "PRIMARY_KEY_5 ON PUBLIC.CLIENTE(NICK)"; SQL statement:
INSERT INTO CLIENTE (NICK) VALUES (?) [23505-169]
Error Code: 23505
Call: INSERT INTO CLIENTE (NICK) VALUES (?)
bind => [cbaldes]
Query: InsertObjectQuery(Clases.Cliente@21cd5b08)
javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.0.2.v20100323-r6872): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.h2.jdbc.JdbcSQLException: Unique index or primary key violation: "PRIMARY_KEY_5 ON PUBLIC.CLIENTE(NICK)"; SQL statement:
INSERT INTO CLIENTE (NICK) VALUES (?) [23505-169]
Error Code: 23505
Call: INSERT INTO CLIENTE (NICK) VALUES (?)
bind => [cbaldes]
Query: InsertObjectQuery(Clases.Cliente@21cd5b08)
These are the tables:
@Entity
public class Puntaje implements Comparable, Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private int total;

    @ManyToOne(cascade = CascadeType.ALL, optional = false)
    @JoinColumn(name = "NICK")
    private Cliente cliente;
}

@Entity
public class Cliente implements Serializable {

    @Id
    private String nick;

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "cliente")
    private List<Puntaje> puntajes;
}
When you perform operations on objects, the operations are recorded in the cache ONLY. JPA prepares internal lists of all objects to be inserted, updated, and deleted, which are flushed together when flush or commit is called.
Now take your first example. You deleted all Puntaje rows, which adds every Puntaje to the deletion list. When you then call merge, it is indeed smart enough to figure out that each object should be inserted rather than updated, and adds it to the insert list. When you call commit, it tries to run the inserts from the insert list first and, as you can expect, that fails because the old objects have not been deleted yet.
The only difference in your second example is that, by committing first, you force the objects to be deleted before the insertion, so it does not fail.
I am sure it would not fail even if you used flush in place of that first commit.
Hope this helps you understand the reasoning behind the failure.

Fetching multiple bags efficiently

I'm developing a multilingual application. For this reason many objects have, in their name and description fields, collections of something I call LocalizedString instead of plain strings. Every LocalizedString is basically a pair of a locale and a string localized to that locale.
Let's take as an example an entity, say a Book object.
public class Book {

    @OneToMany
    private List<LocalizedString> names;

    @OneToMany
    private List<LocalizedString> description;

    //and so on...
}
When a user asks for a list of books, the app queries for all the books, fetches the name and description of every book in the locale the user has selected, and displays them.
This works, but it is a major performance issue. At the moment Hibernate makes one query to fetch all the books, and after that it asks for the localized strings of every single object separately, resulting in the "n+1 selects" problem. Fetching a list of 50 entities produces about 6000 rows of SQL commands in my server log.
I tried making the collections eager, but that led me to the "cannot simultaneously fetch multiple bags" issue.
Then I tried setting the fetch strategy on the collections to subselect, hoping that it would do one query for all books followed by one query fetching all LocalizedStrings for all of them. Subselect did not work the way I had hoped; it behaved exactly the same as my first case.
I'm starting to run out of ideas on how to optimize this.
So, in short: what fetch-strategy alternatives are there when you fetch a collection and every element of that collection has one or more collections of its own that must be fetched at the same time?
You said
I tried setting the fetch strategy on the collections to subselect, hoping that it would do one query for all books
You can, but you need to access some property to trigger the subselect:
@Entity
public class Book {

    private List<LocalizedString> nameList = new ArrayList<LocalizedString>();

    @OneToMany(cascade = javax.persistence.CascadeType.ALL)
    @org.hibernate.annotations.Fetch(org.hibernate.annotations.FetchMode.SUBSELECT)
    public List<LocalizedString> getNameList() {
        return this.nameList;
    }

    private List<LocalizedString> descriptionList = new ArrayList<LocalizedString>();

    @OneToMany(cascade = javax.persistence.CascadeType.ALL)
    @org.hibernate.annotations.Fetch(org.hibernate.annotations.FetchMode.SUBSELECT)
    public List<LocalizedString> getDescriptionList() {
        return this.descriptionList;
    }
}
Do as follows
public class BookRepository implements Repository {
public List<Book> getAll(BookFetchingStrategy fetchingStrategy) {
switch(fetchingStrategy) {
case BOOK_WITH_NAMES_AND_DESCRIPTIONS:
List<Book> bookList = session.createQuery("from Book").list();
// Notice empty statement in order to start each subselect
for (Book book : bookList) {
for (Name address: book.getNameList());
for (Description description: book.getDescriptionList());
}
return bookList;
}
}
public static enum BookFetchingStrategy {
BOOK_WITH_NAMES_AND_DESCRIPTIONS;
}
}
I have done the following to populate the database:
SessionFactory sessionFactory = configuration.buildSessionFactory();
Session session = sessionFactory.openSession();
session.beginTransaction();

// Ten books
for (int i = 0; i < 10; i++) {
    Book book = new Book();
    book.setName(RandomStringUtils.random(13, true, false));
    // For each book, ten names and ten descriptions
    for (int j = 0; j < 10; j++) {
        Name name = new Name();
        name.setSomething(RandomStringUtils.random(13, true, false));
        Description description = new Description();
        description.setSomething(RandomStringUtils.random(13, true, false));
        book.getNameList().add(name);
        book.getDescriptionList().add(description);
    }
    session.save(book);
}

session.getTransaction().commit();
session.close();
And to retrieve:
session = sessionFactory.openSession();
session.beginTransaction();

List<Book> bookList = session.createQuery("from Book").list();
for (Book book : bookList) {
    for (Name name : book.getNameList());
    for (Description description : book.getDescriptionList());
}

session.getTransaction().commit();
session.close();
I see
Hibernate:
select
book0_.id as id0_,
book0_.name as name0_
from
BOOK book0_
Hibernate: returns 100 rows (as expected)
select
namelist0_.BOOK_ID as BOOK3_1_,
namelist0_.id as id1_,
namelist0_.id as id1_0_,
namelist0_.something as something1_0_
from
NAME namelist0_
where
namelist0_.BOOK_ID in (
select
book0_.id
from
BOOK book0_
)
Hibernate: returns 100 rows (as expected)
select
descriptio0_.BOOK_ID as BOOK3_1_,
descriptio0_.id as id1_,
descriptio0_.id as id2_0_,
descriptio0_.something as something2_0_
from
DESCRIPTION descriptio0_
where
descriptio0_.BOOK_ID in (
select
book0_.id
from
BOOK book0_
)
Three select statements; no "n + 1" select problem. Be aware that I am using the property access strategy instead of field access. Keep this in mind.
You can also set a batch-size on your bags: when one uninitialized collection is initialized, Hibernate will initialize some of the other collections with a single query.
More in the Hibernate docs.
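For comparison, the batch-size approach is just an annotation on each bag (a sketch using Hibernate's @BatchSize; the entity layout is assumed from the question):

```java
@Entity
public class Book {

    @Id
    @GeneratedValue
    private Long id;

    // When one book's names are initialized, Hibernate also initializes
    // the name collections of up to 24 other loaded books in one query
    @OneToMany
    @org.hibernate.annotations.BatchSize(size = 25)
    private List<LocalizedString> names;

    @OneToMany
    @org.hibernate.annotations.BatchSize(size = 25)
    private List<LocalizedString> descriptions;
}
```

This does not reduce the fetching to a fixed three statements like the subselect approach, but it turns n+1 queries into roughly n/25 + 1.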

Hibernate: same generated value in two properties

I want the first to be generated:
@Id
@Column(name = "PRODUCT_ID", unique = true, nullable = false, precision = 12, scale = 0)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "PROD_GEN")
@BusinessKey
public Long getAId() {
    return this.aId;
}
I want bId to initially be exactly the same as aId. One approach is to insert the entity, then get the aId generated by the DB (a second query), and then update the entity, setting bId equal to aId (a third query). Is there a way to give bId the same generated value as aId?
Note that afterwards, I want to be able to update bId from my GUI.
If the solution is plain JPA, even better.
Choose your poison:
Option #1
You could annotate bId with org.hibernate.annotations.Generated and use a database trigger on insert (I'm assuming nextval has already been assigned to AID, so we assign currval to BID):
CREATE OR REPLACE TRIGGER "MY_TRIGGER"
before insert on "MYENTITY"
for each row
begin
    select "MYENTITY_SEQ".currval into :NEW.BID from dual;
end;
I'm not a big fan of triggers and things that happen behind the scene but this seems to be the easiest option (not the best one for portability though).
Option #2
Create a new entity, persist it, flush the entity manager to get the id assigned, set bId from aId, and merge the entity.
em.getTransaction().begin();
MyEntity e = new MyEntity();
...
em.persist(e);
em.flush();
e.setBId(e.getAId());
em.merge(e);
...
em.getTransaction().commit();
Ugly, but it works.
Option #3
Use callback annotations to set the bId in-memory (until it gets written to the database):
@PostPersist
@PostLoad
public void initializeBId() {
    if (this.bId == null) {
        this.bId = aId;
    }
}
This should work if you don't need the id to be written on insert (but in that case, see Option #4).
Option #4
You could actually add some logic in the getter of bId instead of using callbacks:
public Long getBId() {
    if (this.bId == null) {
        return this.aId;
    }
    return this.bId;
}
Again, this will work if you don't need the id to be persisted in the database on insert.
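Stripped of persistence concerns, Option #4 is just a null-coalescing getter; a hypothetical MyEntity sketch (plain Java, no JPA involved) shows the behavior:

```java
public class MyEntity {
    private Long aId;
    private Long bId;

    public void setAId(Long aId) { this.aId = aId; }
    public void setBId(Long bId) { this.bId = bId; }

    // Falls back to aId until bId has been set explicitly
    public Long getBId() {
        return (bId == null) ? aId : bId;
    }

    public static void main(String[] args) {
        MyEntity e = new MyEntity();
        e.setAId(42L);
        System.out.println(e.getBId()); // prints 42, the fallback value
        e.setBId(7L);                   // later updated, e.g. from the GUI
        System.out.println(e.getBId()); // prints 7
    }
}
```

The caveat from the answer still applies: until bId is actually written, the database column stays null; the fallback exists only in memory.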
If you use JPA, after inserting the new entity the id should already be set to the generated value, I thought (maybe it depends on which JPA provider you use), so no second query is needed. Then set bId to aId's value in your DAO?
