Please analyze the following two pieces of code and tell me why the first one fails with a primary key violation at commit, while the second one doesn't.
Code which fails at commit:
try {
    Query q = em.createQuery("DELETE FROM Puntaje");
    q.executeUpdate();
    //em.getTransaction().commit();
    //em.getTransaction().begin();
    Iterator it = l.iterator();
    while (it.hasNext()) {
        DataPuntaje dp = (DataPuntaje) it.next();
        Cliente c = new Cliente(dp.getCliente());
        Puntaje p = new Puntaje(dp.getPuntaje(), c);
        c.agregarPuntaje(p);
        em.merge(c);
    }
    System.out.println("test1");
    em.getTransaction().commit();
    System.out.println("test2");
}
Code which works fine:
try {
    Query q = em.createQuery("DELETE FROM Puntaje");
    q.executeUpdate();
    em.getTransaction().commit();
    em.getTransaction().begin();
    Iterator it = l.iterator();
    while (it.hasNext()) {
        DataPuntaje dp = (DataPuntaje) it.next();
        Cliente c = new Cliente(dp.getCliente());
        Puntaje p = new Puntaje(dp.getPuntaje(), c);
        c.agregarPuntaje(p);
        em.merge(c);
    }
    System.out.println("test1");
    em.getTransaction().commit();
    System.out.println("test2");
}
The only difference is that the first one does not commit the delete query separately; instead it commits everything together at the end.
Cliente and Puntaje have a 1:N bidirectional relation with cascade = ALL.
All the inserted Cliente instances have the same ID, but merge should be smart enough to update instead of insert after the first one is persisted. That seems to fail in the first example, and I can't find any explanation.
I'm using an embedded H2 database.
I would also like to add that the first code works FINE if there is an already-inserted Cliente value; it fails only when the table is actually empty, so the delete is effectively doing nothing.
This is the error I'm getting:
javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.0.2.v20100323-r6872): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.h2.jdbc.JdbcSQLException: Unique index or primary key violation: "PRIMARY_KEY_5 ON PUBLIC.CLIENTE(NICK)"; SQL statement:
INSERT INTO CLIENTE (NICK) VALUES (?) [23505-169]
Error Code: 23505
Call: INSERT INTO CLIENTE (NICK) VALUES (?)
bind => [cbaldes]
Query: InsertObjectQuery(Clases.Cliente#21cd5b08)
These are the entity classes:
@Entity
public class Puntaje implements Comparable, Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private int total;

    @ManyToOne(cascade = CascadeType.ALL, optional = false)
    @JoinColumn(name = "NICK")
    private Cliente cliente;
}

@Entity
public class Cliente implements Serializable {
    @Id
    private String nick;

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "cliente")
    private List<Puntaje> puntajes;
}
When you perform operations on the objects, all the operations are recorded in the cache ONLY. JPA prepares an internal list of all objects to be inserted, updated, and deleted, and that list is flushed as a whole when flush or commit is called.
Now take your first example. You deleted all Puntaje, which put every Puntaje on the delete list. When you then call merge, it is indeed smart enough: it figures out that the object needs to be inserted rather than updated, and puts it on the insert list. When you call commit, it tries to run the inserts from the insert list first and, as you can expect, this fails because the old rows have not been deleted yet.
The only difference in your second example is that, by committing first, you force the old objects to be deleted before the insertion, and hence it does not fail.
I am sure it will not fail even if you use flush in place of commit.
Hope this helps you understand the reasoning behind the failure.
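To illustrate that last point, here is a minimal sketch of the failing snippet with a flush() in place of the commented-out commit()/begin() pair. All names (em, l, DataPuntaje, Cliente, Puntaje) come from the question, and this assumes the transaction was begun beforehand:

Query q = em.createQuery("DELETE FROM Puntaje");
q.executeUpdate();
em.flush(); // push the pending delete through before queuing the merges

Iterator it = l.iterator();
while (it.hasNext()) {
    DataPuntaje dp = (DataPuntaje) it.next();
    Cliente c = new Cliente(dp.getCliente());
    Puntaje p = new Puntaje(dp.getPuntaje(), c);
    c.agregarPuntaje(p);
    em.merge(c);
}
em.getTransaction().commit();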
Related
I have a many-to-many relationship between CATEGORY and PRODUCT in a very basic e-commerce Java app.
Category has a @ManyToMany relation with Product, so there is a join table CATEGORY_PRODUCT with two columns, CATEGORY_ID and PRODUCTS_ID.
I want to delete all relations for a certain product in that table. Am I doing it right?
public void deleteProduct(long id) {
    Session session = HibernateUtil.getCurrentSession();
    session.beginTransaction();
    Product product = session.find(entityClass, id);
    String sql = "DELETE FROM PUBLIC.CATEGORY_PRODUCT WHERE PRODUCTS_ID = " + id;
    SQLQuery query = session.createSQLQuery(sql);
    query.setResultTransformer(Criteria.ALIAS_TO_ENTITY_MAP);
    session.delete(product);
    session.getTransaction().commit();
}
The plan is to delete the product, but I get "integrity constraint violation" errors because of the relationship.
Before you call session.delete(product), you need to call query.executeUpdate().
You don't need the query.setResultTransformer() call; you should get rid of that line. The result of executeUpdate() is an int that says how many records were deleted.
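Put together, a sketch of the corrected method could look like the following; the parameter binding instead of string concatenation is my own addition (it avoids SQL injection), not part of the original code:

public void deleteProduct(long id) {
    Session session = HibernateUtil.getCurrentSession();
    session.beginTransaction();
    Product product = session.find(entityClass, id);
    // Remove the join-table rows first so the FK constraint is satisfied.
    int deleted = session
            .createSQLQuery("DELETE FROM PUBLIC.CATEGORY_PRODUCT WHERE PRODUCTS_ID = :id")
            .setParameter("id", id)
            .executeUpdate(); // number of relation rows removed
    session.delete(product);
    session.getTransaction().commit();
}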
You asked this several months ago and I'm just now seeing it. I assume you aren't still waiting on an answer, but maybe this can help the next person.
I am having trouble using Hibernate with MS SQL Server 2012. No matter what I do, when I try to insert a value into a certain table using Hibernate, I get generated id = 0.
Here is the model:
@Entity
@Table(name = "tbl_ClientInfo")
public class ClientInfo {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "auto_Client_ID", unique = true, nullable = false)
    private int auto_Client_ID;
    ...
Here is the write:
public boolean addNewClient(Client client) {
    // there is a class that wraps SessionFactory as a singleton
    Session session = getSessionFactory().openSession();
    Transaction tx = null;
    Integer clientFamId;  // client family info id
    Integer clientInfoId; // actual client id
    try {
        // create fam info first with some data - need its id for ClientInfo
        tx = session.beginTransaction();
        ClientFam clientFam = new ClientFam();
        clientFamId = (Integer) session.save(clientFam);
        clientFamId = (Integer) session.getIdentifier(clientFam); // this returns the right id
        session.flush();

        ClientInfo clientInfo = new ClientInfo();
        clientInfo.setABunchOfFields(withStuff); // multiple methods
        session.save(clientInfo);
        clientInfoId = (Integer) session.getIdentifier(clientInfo); // this is always 0
        session.flush();
        tx.commit();
    } catch (HibernateException e) {
        if (tx != null) tx.rollback();
        e.printStackTrace();
        return false;
    } finally {
        session.close();
    }
    return true;
}
In the database the PK auto_Client_ID is clustered and set to IDENTITY(1,1). Both the ClientInfo and ClientFam records are created in the db, but Hibernate returns 0. I also tried catching the value returned from save(), but it's also 0.
I don't want to commit between the separate inserts: the transaction should only commit when all the inserts are fine (there are more after this, but I can't get to them because of this id issue yet).
The model for ClientFam is almost the same: its id field is @GeneratedValue(strategy=GenerationType.IDENTITY) as well.
I also tried specifying this for ClientInfo:
@GeneratedValue(generator = "increment", strategy = GenerationType.IDENTITY)
@GenericGenerator(name = "increment", strategy = "increment")
The first time I ran it, it returned the correct value. However, the second time I ran it I got an error:
Cannot insert explicit value for identity column in table 'Report' when IDENTITY_INSERT is set to OFF
And that was the end of trying that. Everywhere I looked, the recommendation is to use GenerationType.IDENTITY for an auto-incremented field in the db, and that's supposed to return the right values. What might I be doing wrong?
I also tried getting the id from the ClientInfo object itself after the write (I thought it should get written into it), but it was also 0. That makes me think something is wrong with my ClientInfo model and/or the annotations in it.
I found the problem with my situation, and it has nothing to do with Hibernate. There is an INSTEAD OF INSERT trigger on the table that wasn't returning the id, and hence was messing up what save() returns.
This is just an educated guess, but you might want to remove the unique=true clause from the @Column definition. Hibernate may be handling the column as a unique constraint as opposed to a primary key.
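If you want to try that suggestion, the change is only in the annotation; a sketch (the rest of the mapping stays exactly as in the question):

@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
// unique=true removed: the primary key already implies uniqueness
@Column(name = "auto_Client_ID", nullable = false)
private int auto_Client_ID;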
I have an entity whose id is autogenerated by an Oracle trigger and sequence.
@Entity
@Table(name = "REPORT", schema = "WEBPORTAL")
public class Report {

    private Integer id;
    ....

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "report_sequence")
    @SequenceGenerator(name = "report_sequence", sequenceName = "report_id_seq")
    @Column(name = "REPORT_ID", unique = true, nullable = false)
    public Integer getId() {
        return id;
    }
    ....
}
Service
@Service("reportService")
public class ReportServiceImpl implements ReportService {
    ....
    @Transactional(readOnly = false)
    public void saveOrUpdate(Report report) {
        reportDAO.saveOrUpdate(report);
    }
}
DAO
@Repository
public class ReportDAOImpl implements ReportDAO {
    ....
    @Override
    public Report save(Report report) {
        try {
            Session session = sessionFactory.getCurrentSession();
            session.save(report);
        } catch (Exception e) {
            logger.error("error", e);
        }
        return report;
    }
}
When I call the service's saveOrUpdate and then try to read the id of the entity, I get a different value than the one persisted in the database. The autogenerated values in the database are all OK. Any suggestions?
reportService.saveOrUpdate(report);
System.out.println(report.getId());
prints: 4150
but the id saved in the database is: 84
NOTE: My reason for getting the id is that I wanted to save children with cascade, but the foreign key on the children in the database was different (it held the values of the id that I get with getId()).
Also, the id generated in the database is incremented by 2, e.g. 80, 82, 84.
UPDATE:
Oracle trigger for sequence generation
CREATE OR REPLACE TRIGGER REPORT_ID_TRIG
BEFORE INSERT ON WEBPORTAL.REPORT
FOR EACH ROW
BEGIN
SELECT report_id_seq.NEXTVAL
INTO :new.report_id
FROM dual;
END;
ANSWER: The trigger should check whether the id is null:
CREATE OR REPLACE TRIGGER REPORT_ID_TRIG
BEFORE INSERT ON WEBPORTAL.REPORT
FOR EACH ROW
WHEN (new.report_id is null)
BEGIN
SELECT report_id_seq.NEXTVAL
INTO :new.report_id
FROM dual;
END;
DESCRIPTION:
@GeneratedValue is not just a sequence generator; it is a bit of a HiLo algorithm. When it first requests an id from the database it multiplies it by 50 (this can differ), and the next 50 new entities are given ids consecutively before the next request goes to the database. This cuts down the number of database round trips.
The numbers I got in Java were the right numbers that should have been saved on the report.
Without the null check on the id, Hibernate first requested an id from the database, and sequence.nextval was called. Then, when Hibernate persisted the row (completing the transaction), the trigger called sequence.nextval a second time and set that value in the database. So ReportDetails held the true id value of the report, while the Report row's id was the one set by the trigger in the database.
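As a side note, if you want the ids Hibernate assigns to match the sequence values in the database one-to-one, a common approach (my suggestion, not part of the original answer) is to set allocationSize = 1 on the generator, which disables the block allocation at the cost of one sequence call per insert:

@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "report_sequence")
// allocationSize = 1: call the sequence for every insert instead of
// pre-allocating a block of ids
@SequenceGenerator(name = "report_sequence", sequenceName = "report_id_seq", allocationSize = 1)
@Column(name = "REPORT_ID", unique = true, nullable = false)
public Integer getId() {
    return id;
}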
The problem is that two separate mechanisms are in place to generate the key:
one at the Hibernate level, which calls a sequence and uses the value to populate the Id column, sending it to the database as the insert key,
and another at the database level that Hibernate does not know about: the column is populated via a trigger.
Hibernate thinks that the insert was made with the value from the sequence, but in the database something else occurred. The simplest solution would probably be to remove the trigger mechanism and let Hibernate populate the key based on the sequence only.
Another solution:
Check that your trigger definition is in this format; the WHEN (new.id is null) clause is important.
CREATE OR REPLACE TRIGGER TRIGGER_NAME
BEFORE INSERT ON TABLE_NAME
FOR EACH ROW
WHEN (new.id is null)
BEGIN
SELECT SEQUENCE_NAME.NEXTVAL
INTO :new.id
FROM dual;
END;
I need to save data into two tables (an entity table and an association table).
I simply save my entity with the save() method from my entity repository.
Then, for performance, I need to insert the rows into the association table in native SQL. The rows reference the entity I saved before.
Here is the issue: I get an integrity constraint exception concerning a foreign key. The entity saved first isn't known in this second query.
Here is my code:
The repo:
public interface DistributionRepository extends JpaRepository<Distribution, Long>, QueryDslPredicateExecutor<Distribution> {

    @Modifying
    @Query(value = "INSERT INTO DISTRIBUTION_PERIMETER(DISTRIBUTION_ID, SERVICE_ID) SELECT :distId, p.id FROM PERIMETER p "
            + "WHERE p.id in (:serviceIds) AND p.discriminator = 'SRV' ", nativeQuery = true)
    void insertDistributionPerimeter(@Param(value = "distId") Long distributionId,
                                     @Param(value = "serviceIds") Set<Long> servicesIds);
}
The service:
@Service
public class DistributionServiceImpl implements IDistributionService {

    @Inject
    private DistributionRepository distributionRepository;

    @Override
    @Transactional
    public DistributionResource distribute(final DistributionResource distribution) {
        // 1. Entity creation and saving
        Distribution created = new Distribution();
        final Date distributionDate = new Date();
        created.setStatus(EnumDistributionStatus.distributing);
        created.setDistributionDate(distributionDate);
        created.setDistributor(agentRepository.findOne(distribution.getDistributor().getMatricule()));
        created.setDocument(documentRepository.findOne(distribution.getDocument().getTechId()));
        created.setEntity(entityRepository.findOne(distribution.getEntity().getTechId()));
        created = distributionRepository.save(created);

        // 2. Association table
        final Set<Long> serviceIds = new HashSet<Long>();
        for (final ServiceResource sr : distribution.getServices()) {
            serviceIds.add(sr.getTechId());
        }
        // EXCEPTION HERE
        distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);
    }
}
The two queries seem to be in different transactions even though I set the @Transactional annotation. I also tried to execute my second query with entityManager.createNativeQuery() and got the same result...
Invoke entityManager.flush() before you execute your native queries, or use saveAndFlush instead.
In your specific case I would recommend using
created = distributionRepository.saveAndFlush(created);
Important: your "native" queries must use the same transaction (or you need a new transaction)!
You also wrote:
I don't really understand why the flush action is not done by default
Flushing is handled by Hibernate (it can be configured; the default is "auto"). This means that Hibernate will flush the data at some point in time, but always before you commit the transaction or execute another SQL statement VIA HIBERNATE. So normally this is no problem, but in your case you bypass Hibernate with your native query, so Hibernate will not know about that statement and therefore will not flush its data first.
See also this answer of mine on the topic: https://stackoverflow.com/a/17889017/280244
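For completeness, a sketch of the flush-based alternative (assuming the service gets an EntityManager injected; the repository method comes from the question):

@PersistenceContext
private EntityManager entityManager;

...

created = distributionRepository.save(created);
// Force the pending INSERT to the database so the native query below can
// see the new row and its generated id within the same transaction.
entityManager.flush();
distributionRepository.insertDistributionPerimeter(created.getId(), serviceIds);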
I have been trying for days to persist data obtained from a JTable (filled from an imported Excel sheet) without success, and trying to find the exception is sickening.
Here is part of the code and the error stack trace:
@Action
public void persist() {
    emf = Persistence.createEntityManagerFactory("MauranaSurveyPU");
    em = emf.createEntityManager();
    em.getTransaction().begin();
    // loops through the table to retrieve objects and persist them
    int count = jTable1.getRowCount();
    for (int i = 0; i < count; i++) {
        Mauranagroup mn = new Mauranagroup();
        String obj1 = (String) GetData(jTable1, i, 0);
        String obj2 = (String) GetData(jTable1, i, 1);
        String obj3 = (String) GetData(jTable1, i, 2);
        // set entity
        mn.setRespondentId(Integer.parseInt(obj1));
        mn.setMale(obj2);
        mn.setFemale(obj3);
        em.persist(mn);
    } // end for
    em.getTransaction().commit();
} // end method persist

// get object from jtable
private Object GetData(JTable jTable1, int x, int y) {
    return jTable1.getModel().getValueAt(x, y);
}
The problem with this code is that it actually persists, but after the transaction commits I get this stack trace:
Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'SQL130204062549290' defined on 'MAURANAGROUP'.
Error Code: 20000
Call: INSERT INTO MAURANAGROUP (RESPONDENT_ID, AMOUNTTOBESPENT, AREYOUFAMILIARNO
bind => [211 parameters bound]
Query: InsertObjectQuery(entity.Mauranagroup[ respondentId=5 ])
When I delete the records and persist again, I get another line:
bind => [211 parameters bound]
Query: InsertObjectQuery(entity.Mauranagroup[ respondentId=2 ])
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:324)
What I don't understand is that it actually persists and I can see my saved data. Is it the loop, or is the loop not terminating properly? I'm actually sick because of this project.
There is a DB constraint on your table that some column must be unique, and you are inserting a duplicate value. I guess it is RESPONDENT_ID. If you are persisting all the records into the table for the first time, I wouldn't provide the id from the table data; instead I would use a sequence to generate the id for you and return it to the UI (see the sketch below).
If you are trying to modify existing data stored in the table, I would look the records up in the DB by their id, update the fields, and persist them again.
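A sketch of that first suggestion; the mapping below is an assumption on my part (the entity name comes from the question, and the error code looks like Derby, which supports identity columns):

@Entity
public class Mauranagroup {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id; // generated key, never read from the sheet

    private int respondentId; // keep the spreadsheet value as plain data
    ...
}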
By the way, it is good practice to name your methods with a lowercase letter at the beginning.