JDO / Duplicate entry exception - java

I get a MySQLIntegrityConstraintViolationException when saving an object to my database. I know what this error means, but I cannot work around it.
Error: Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '12345' for key 'PRIMARY'
Basically, I want to save course objects to a database. Each course object may have several studypath objects, which can in turn be part of several course objects.
PersistenceManager pm = pmf.getPersistenceManager();
Transaction tx = pm.currentTransaction();
try {
    tx.begin();
    Query query = pm.newQuery(Studypath.class, "studypathID == paramStudypathID");
    query.declareParameters("Integer paramStudypathID");
    query.setUnique(true);
    Studypath dbStudypath = (Studypath) query.execute(12345);

    Studypath detachedStudypath = null;
    if (dbStudypath != null) {
        detachedStudypath = (Studypath) pm.detachCopy(dbStudypath);
    } else {
        Studypath newStudypath = new Studypath();
        // ...
        pm.makePersistent(newStudypath);
        detachedStudypath = (Studypath) pm.detachCopy(newStudypath);
    }
    tx.commit();

    // now I want to add this detached studypath to my newly created course
    Course c = new Course();
    c.addStudypath(detachedStudypath);

    tx.begin();
    pm.makePersistent(c); // <== error
    tx.commit();
} catch (Exception e) {
    // ... handle exceptions
} finally {
    if (tx.isActive()) {
        // Error occurred, so roll back the transaction
        tx.rollback();
    }
    pm.close();
}
Course.java
@PersistenceCapable
public class Course {
    // ...
    @Persistent
    private Set<Studypath> studypaths;
}
Studypath.java
@PersistenceCapable
public class Studypath {
    // ...
    @Persistent
    @PrimaryKey
    private Integer studypathID;
}
Is there any obvious mistake I'm missing? Thanks in advance!
Update (log):
DEBUG [DataNucleus.Datastore.Native] - SELECT 'Courses.Studypath' AS NUCLEUS_TYPE, ... FROM `STUDYPATH` `A0` WHERE `A0`.`STUDYPATHID` = <12345> // this one already exists
DEBUG [DataNucleus.Datastore.Retrieve] - Execution Time = 0 ms
DEBUG [DataNucleus.Datastore.Retrieve] - Retrieving PreparedStatement for connection "jdbc:mysql://127.0.0.1/database, UserName=user, MySQL-AB JDBC Driver"
DEBUG [DataNucleus.Datastore.Native] - SELECT 'Courses.Course' AS NUCLEUS_TYPE, ... FROM `COURSE` `A0` WHERE `A0`.`COURSEID` = <1111> // there is no such course, thus it gets created
DEBUG [DataNucleus.Datastore.Retrieve] - Execution Time = 1 ms
DEBUG [DataNucleus.Datastore.Retrieve] - Retrieving PreparedStatement for connection "jdbc:mysql://127.0.0.1/database, UserName=user, MySQL-AB JDBC Driver"
DEBUG [DataNucleus.Datastore.Native] - INSERT INTO `COURSE` (...,`COURSEID`) VALUES (...,<1111>)
DEBUG [DataNucleus.Datastore.Persist] - Execution Time = 1 ms (number of rows = 1)
DEBUG [DataNucleus.Datastore.Retrieve] - Closing PreparedStatement org.datanucleus.store.rdbms.ParamLoggingPreparedStatement#3baac1b5
DEBUG [DataNucleus.Datastore.Persist] - The requested statement "INSERT INTO `STUDYPATH` (...) VALUES (...)" has been made batchable
DEBUG [DataNucleus.Datastore.Persist] - Batch has been added to statement "INSERT INTO `STUDYPATH` (...) VALUES (...)" for processing (batch size = 1)
DEBUG [DataNucleus.Datastore.Persist] - Adding statement "INSERT INTO `STUDYPATH` (...) VALUES (...)" to the current batch (new batch size = 2)
DEBUG [DataNucleus.Datastore.Persist] - Batch has been added to statement "INSERT INTO `STUDYPATH` (...) VALUES (...)" for processing (batch size = 2)
DEBUG [DataNucleus.Datastore.Native] - BATCH [INSERT INTO `STUDYPATH` (...,`STUDYPATHID`) VALUES (...,<12345>); INSERT INTO `STUDYPATH` (...,`STUDYPATHID`) VALUES (<54321>)]
ERROR [DataNucleus.Datastore] - Exception thrown

I'm not sure it's kosher to associate a detached JDO instance with a transient one. There's no easy way for the ORM to know that the related object is an existing record.
If it's really in the same code path, I'd associate the persistent instance instead:
c.addStudypath(dbStudypath);
Otherwise I would call makePersistent(detachedStudypath) before associating it (assuming your class is declared detachable).
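A minimal sketch of that single-transaction variant, reusing the pm, tx and query objects from the question (the studypath setup is elided just as it was there):

tx.begin();

// look up the studypath in the same transaction that persists the course
Studypath studypath = (Studypath) query.execute(12345);
if (studypath == null) {
    studypath = new Studypath();
    // ...
    studypath = pm.makePersistent(studypath);
}

// associate the still-managed instance instead of a detached copy,
// so the ORM knows the STUDYPATH row already exists
Course c = new Course();
c.addStudypath(studypath);
pm.makePersistent(c);

tx.commit();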

You can easily check the state of an object by calling JDOHelper.getObjectState(obj). I strongly suspect that your object is in the TRANSIENT state, not the DETACHED state, most likely because you haven't declared your class as detachable.
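For reference, a sketch under the assumption that the classes from the question are used as-is (imports of javax.jdo.JDOHelper and javax.jdo.ObjectState assumed). First the check:

// DETACHED_CLEAN means the copy can be re-attached; TRANSIENT means JDO will
// treat it as a brand-new object and issue a second INSERT, as seen in the log.
ObjectState state = JDOHelper.getObjectState(detachedStudypath);
System.out.println("Studypath state: " + state);

And the declaration that makes detachCopy() return a genuinely detached copy:

import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;

@PersistenceCapable(detachable = "true")
public class Studypath {
    @Persistent
    @PrimaryKey
    private Integer studypathID;
    // ...
}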

Related

hibernate query result not updating after change

I am having an issue with Hibernate: the query result is not updating.
I have a simple query which checks the customer table to see if the 'enabled' column is true. The query works fine, but when I change the column value from 'true' to 'false' and run the same query, it still gives me 'true' as a result.
If I close the application, recompile, and run the query again, it THEN shows false. But again, if I change it back to true, it still shows the 'false' result. What am I doing wrong?
public void isEnabled(String customer) {
    Session session = sessionFactory.openSession();
    Transaction tx = null;
    try {
        tx = session.beginTransaction();
        Query query = session.createQuery("FROM Customer WHERE enabled=true");
        List qr = query.list();
        boolean check = qr.iterator().hasNext();
        boolean enabled;
        sw = new ServerWriter(writer);
        if (check) {
            for (Iterator itr = qr.iterator(); itr.hasNext();) {
                Customer c = (Customer) itr.next();
                enabled = c.getEnabled();
                sw.sendMessage("Customer is enabled");
            }
        } else {
            sw.sendMessage("Customer is not enabled");
        }
    } catch (HibernateException e) {
        if (tx != null) { tx.rollback(); }
        e.printStackTrace();
    } finally {
        session.close();
    }
}
First, you forgot to commit the transaction:
session.getTransaction().commit();
The reason you get the same value when querying the second time is the Hibernate cache. You always have a first-level cache, and if you configured it, you can have a second-level cache too.
You can refresh an object in the first-level cache before executing the query with:
session.refresh(object)
If you happen to have a second-level cache, you can skip it with this hint:
query.setHint("org.hibernate.cacheMode", CacheMode.IGNORE);
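A minimal rework of the question's method along those lines, sketched with the same field names (sessionFactory, sw, writer) assumed to exist:

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
try {
    List customers = session.createQuery("FROM Customer WHERE enabled = true").list();
    sw = new ServerWriter(writer);
    sw.sendMessage(customers.isEmpty() ? "Customer is not enabled" : "Customer is enabled");
    tx.commit();                 // finish the unit of work instead of leaving it open
} catch (HibernateException e) {
    if (tx != null) {
        tx.rollback();
    }
    e.printStackTrace();
} finally {
    session.close();             // closing the session also discards its first-level cache
}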

Hibernate batch update: need to know the failed statement

I have code that looks like this:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
int i = 0;
try {
    for (Customer customer : customers) {
        i++;
        session.update(customer);
        if (i % 200 == 0) { // 200, same as the JDBC batch size
            // flush a batch of updates and release memory:
            session.flush();
            session.clear();
        }
    }
} catch (Exception e) {
    // TODO want to know the customer id here!
}
tx.commit();
session.close();
Say, at some point session.flush() raises a DataException because one of the fields did not fit into the database column size for one of that batch of 200 customers. Nothing wrong with that; the data can be corrupted, which is OK in this case. BUT I really need to know the id of the customer that failed. The database returns a meaningless error message that does not state what the parameters of the statement were. The caught exception also does not say which customer failed, only the SQL statement text, which looks like 'update Customer set name=?'.
Can I somehow determine it using the Hibernate session? Does it store the information about the last entity it tried to save anywhere?
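The thread has no answer, but one common workaround, sketched here purely as a suggestion (the chunk list and Customer.getId() are assumptions, not from the original code): when a flush of a 200-row chunk fails, discard that session's work and replay the same chunk one entity per transaction, so the offending id surfaces in the per-entity catch block.

void retryChunkIndividually(SessionFactory sessionFactory, List<Customer> chunk) {
    for (Customer customer : chunk) {
        Session retrySession = sessionFactory.openSession();
        Transaction retryTx = retrySession.beginTransaction();
        try {
            retrySession.update(customer);
            retryTx.commit();
        } catch (RuntimeException e) {
            retryTx.rollback();
            // now we know exactly which customer could not be written
            System.err.println("Update failed for customer id=" + customer.getId()
                    + ": " + e.getMessage());
        } finally {
            retrySession.close();
        }
    }
}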

Using scroller in Hibernate throwing exception

I am using org.hibernate.ScrollableResults in streaming mode.
I made sure that I won't have the n+1 problem and put all the joins in the same SQL statement.
For some reason I get an exception in the while loop, and before that I can see an additional select (besides the first select, which is expected when working in streaming mode).
Any idea what I am missing?
My scroller:
protected void scroll(ScrollableHandler<T> handler, String namedQuery, Object... values) {
    T previousEntity = null;
    Session s = null;
    ScrollableResults results = null;
    try {
        s = (Session) em.getDelegate();
        org.hibernate.Query query = s.getNamedQuery(namedQuery);
        for (int i = 0; i < values.length; i++) {
            query.setParameter(i, values[i]);
        }
        results = query.setFetchSize(fetchSize).scroll(ScrollMode.FORWARD_ONLY);
        while (results.next()) { // <== here I get the exception
            T entity = (T) results.get(0);
            if (null != entity && (!entity.equals(previousEntity))) {
                handler.handle(entity);
                previousEntity = entity;
            }
            s.clear();
        }
    } finally {
        if (results != null) {
            results.close();
        }
    }
}
Logs:
11:54:24,182 WARN SqlExceptionHelper:143 - SQL Error: 0, SQLState: null
11:54:24,182 ERROR SqlExceptionHelper:144 - Streaming result set com.mysql.jdbc.RowDataDynamic#729d8721 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
11:54:24,183 WARN CollectionLoadContext:347 - HHH000160: On CollectionLoadContext#cleanup, localLoadingCollectionKeys contained [3] entries
Exception in thread "Thread-11" java.lang.RuntimeException: Could not export
Thanks,
ray.
The problem was that I used two queries. As we know, we shouldn't open a parallel query while a streaming result set is active on the connection. Solution: avoid the second query.
ray.
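For illustration only (the entity and query names below are invented, not from the post): when the stray select comes from a lazy collection being touched while the streaming result set is still open, folding it into the named query with a fetch join keeps everything in the single streamed statement.

import javax.persistence.ElementCollection;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import java.util.Set;

// Hypothetical entity and query: the "left join fetch" pulls the collection into
// the one streamed select, so iterating the scroll results never touches a lazy
// association and never issues a second query on the same (still-streaming)
// MySQL connection.
@Entity
@NamedQuery(
        name = "Order.streamWithItems",
        query = "select distinct o from Order o left join fetch o.items where o.status = ?1")
public class Order {

    @Id
    private Long id;

    private String status;

    @ElementCollection
    private Set<String> items;
}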

Delay during commit (using JPA/JTA)

I would like to ask you for help with the following problem. I have this method:
String sql = "INSERT INTO table ...."
Query query = em.createNativeQuery(sql);
query.executeUpdate();
sql = "SELECT max(id) FROM ......";
query = em.createNativeQuery(sql);
Integer importId = ((BigDecimal) query.getSingleResult()).intValue();
for (EndurDealItem item : deal.getItems()) {
String sql2 = "INSERT INTO another_table";
em.createNativeQuery(sql2).executeUpdate();
}
And after executing it, the data are not committed (it takes 10 or 15 minutes until the data are committed). Is there any way to commit the data explicitly or trigger a commit? And what causes the transaction to remain uncommitted for such a long time?
The reason we use native queries is that we are exporting the data over a shared interface and are not using it afterwards.
I would like to mention that the transaction is container-managed (by Geronimo). The EntityManager is injected like this:
@PersistenceContext(unitName = "XXXX", type = PersistenceContextType.TRANSACTION)
private EntityManager em;
Commit the transaction explicitly:
EntityManager em = /* get an entity manager */;
em.getTransaction().begin();
// make some changes
em.getTransaction().commit();
This should work. The execution time of all the operations between .begin() and .commit() of course also depends on the loop you're running, the number of rows you're inserting, the location of the database (network speed matters), and so on.
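One caveat worth hedging: with a container-managed JTA persistence context like the one in the question, em.getTransaction() is not available. An alternative, sketched here as an assumption (bean and method names invented), is to switch the bean to bean-managed transactions and drive the commit through the JTA UserTransaction:

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.UserTransaction;

// Hedged sketch: bean-managed transactions make the commit happen exactly when
// we call it, instead of whenever the container ends its own transaction.
@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class ExportBean {

    @PersistenceContext(unitName = "XXXX")
    private EntityManager em;

    @Resource
    private UserTransaction utx;

    public void export() throws Exception {
        utx.begin();
        em.joinTransaction();   // make sure the persistence context joins this transaction
        em.createNativeQuery("INSERT INTO table ....").executeUpdate();
        // ... the remaining native inserts ...
        utx.commit();           // the data is flushed and committed here
    }
}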

Hibernate is not deleting my objects. Why?

I have just set up a test that checks that I am able to insert entries into my database using Hibernate. The thing that drives me crazy is that Hibernate does not actually delete the entries, although it reports that they are gone!
The test below runs successfully, but when I check my DB afterwards, the entries that were inserted are still there! I even try to check it using assert (yes, I have -ea as a VM parameter). Does anyone have a clue why the entries are not deleted?
public class HibernateExportStatisticDaoIntegrationTest {

    HibernateExportStatisticDao dao;
    Transaction transaction;

    @Before
    public void setUp() {
        assert numberOfStatisticRowsInDB() == 0;
        dao = new HibernateExportStatisticDao(HibernateUtil.getSessionFactory());
    }

    @After
    public void deleteAllEntries() {
        assert numberOfStatisticRowsInDB() != 0;
        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        for (PersistableStatisticItem item : allStatisticItemsInDB()) {
            session.delete(item);
        }
        session.flush();
        assert numberOfStatisticRowsInDB() == 0;
    }

    @Test
    public void exportAllSavesEntriesToDatabase() {
        int expectedNumberOfStatistics = 20;
        dao.exportAll(StatisticItemFactory.createTestStatistics(expectedNumberOfStatistics));
        assertEquals(expectedNumberOfStatistics, numberOfStatisticRowsInDB());
    }

    private int numberOfStatisticRowsInDB() {
        return allStatisticItemsInDB().size();
    }

    @SuppressWarnings("unchecked")
    private List<PersistableStatisticItem> allStatisticItemsInDB() {
        Session session = HibernateUtil.getSessionFactory().getCurrentSession();
        transaction = session.beginTransaction();
        Query q = session.createQuery("FROM PersistableStatisticItem item");
        return q.list();
    }
}
The console is filled with
Hibernate: delete from UPTIME_STATISTICS where logDate=? and serviceId=?
but nothing has been deleted when I check it.
I guess it's related to inconsistent use of transactions (note that beginTransaction() in allStatisticItemsInDB() is called several times without corresponding commits).
Try to manage transactions in a proper way, for example like this:
Session session = HibernateUtil.getSessionFactory().getCurrentSession();
Transaction tx = session.beginTransaction();
List<PersistableStatisticItem> items =
        session.createQuery("FROM PersistableStatisticItem item").list();
for (PersistableStatisticItem item : items) {
    session.delete(item);
}
session.flush();
assert session.createQuery("FROM PersistableStatisticItem item").list().size() == 0;
tx.commit();
See also:
13.2. Database transaction demarcation
I had the same problem, although I was not using a transaction at all. I was using a named query like this:
Query query = session.getNamedQuery(EmployeeNQ.DELETE_EMPLOYEES);
int rows = query.executeUpdate();
session.close();
It was returning 2 rows, but the database still had all the records. Then I wrapped the above code in a transaction:
Transaction transaction = session.beginTransaction();
Query query = session.getNamedQuery(EmployeeNQ.DELETE_EMPLOYEES);
int rows = query.executeUpdate();
transaction.commit();
session.close();
Then it started working fine. I was using SQL Server, but I think that with H2 the above code (without a transaction) would also work fine.
One more observation: to insert and fetch records, using a transaction is not mandatory, but for deleting records we have to use a transaction. (Only tested on SQL Server.)
Can you post your DB schema and HBM or Fluent maps? One thing that got me a while back was that I had a ReadOnly() in my Fluent map. It never threw an error, and I too saw the "delete from blah where blahblah=..." in the logs.
