I have a CustomerEntity mapped to the customers table in the database. It has one-to-many relations to orders and addresses. I retrieve a customer with orders using the following code:
DetachedCriteria criteria = DetachedCriteria.forClass(CustomerEntity.class);
criteria.add(Restrictions.eq("id", patientId));
criteria.createCriteria("orders", CriteriaSpecification.LEFT_JOIN).add(Restrictions.and(
        Restrictions.isNull("deletedDate"),
        Restrictions.or(Restrictions.isNull("old"), Restrictions.eq("old", BoolType.FALSE))));
CustomerEntity customer = queryDao.searchSingle(criteria);
QueryDAO:
public <T> T searchSingle(DetachedCriteria criteria) {
    return (T) criteria.getExecutableCriteria(getCurrentSession()).uniqueResult();
}
But when I try to invoke customer.getAddresses(), the following exception is thrown:
org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: , no session or session was closed
This happens because, by default, Hibernate does not eagerly load one-to-many collections.
How can I retrieve the customer's addresses as well, without modifying the Customer entity?
This exception occurs when you try to access a lazily loaded collection or field at a moment when the session is no longer available. My guess would be that the backend of your application loads the data and closes the session, then passes the data on, for example to the frontend, where there is no session anymore.
To resolve this, there are several possible ways to go:
Eager loading. A reliable, but possibly slow, solution (see the sketch after this list);
Changing the way session management is done. Consider the session-per-request strategy, which is suitable for a typical web application. The question Create Hibernate-Session per Request contains information that may be useful to you.
For information about other session management strategies, try reading this wiki page.
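For the eager-loading option, the Criteria API lets you override the mapping's fetch strategy for a single query, so the entity itself stays untouched. A minimal sketch against the question's own code (setFetchMode and org.hibernate.FetchMode are standard Criteria API; everything else is the asker's setup):
DetachedCriteria criteria = DetachedCriteria.forClass(CustomerEntity.class);
criteria.add(Restrictions.eq("id", patientId));
// Fetch the addresses collection in the same query, for this query only.
criteria.setFetchMode("addresses", FetchMode.JOIN);
criteria.createCriteria("orders", CriteriaSpecification.LEFT_JOIN).add(Restrictions.and(
        Restrictions.isNull("deletedDate"),
        Restrictions.or(Restrictions.isNull("old"), Restrictions.eq("old", BoolType.FALSE))));
CustomerEntity customer = queryDao.searchSingle(criteria);
// customer.getAddresses() is now initialized even after the session closes.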
There are a few things that can assist you:
The use of Hibernate.initialize(customer.getAddresses()) while the session is still open (see the sketch after this list);
You do not show where customer.getAddresses() is used ... so another option is to use Hibernate's Open Session in View pattern (along with the other session management strategies mentioned in the previous answer).
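For the Hibernate.initialize option, a minimal sketch (assuming it runs while the session that loaded the customer is still open):
CustomerEntity customer = queryDao.searchSingle(criteria);
// Force the lazy collection to load while the session is still open;
// afterwards getAddresses() is safe to call even on the detached entity.
Hibernate.initialize(customer.getAddresses());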
I am trying to solve an issue from work: a method that is called several times in production and breaks. Here is the interface method:
@Cacheable("CategoryDao.findAllLocale")
Set<Category> findAllLocale(String locale);
And here is the implementation:
public Set<Category> findAllLocale(final String locale) {
    final Set<Category> localeCategories = this.findAllLocaleRootCategories(locale);
    for (final Category rootCategory : localeCategories) {
        final Set<Category> localeChildCategories = this.findAllLocaleByParent(locale, rootCategory.getCatId());
        rootCategory.setCategories(localeChildCategories);
    }
    return localeCategories;
}
It is a simple DAO method, but the problem is that it returns a lot of data, and on the production server it throws this exception:
01-01-15 10:09:47:984 - {ERROR} categories.GetAllCategoriesAction - User:5007660771072025 - Unexpected exception executing the action
org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.company.app.data.Category.categories, no session or session was closed
at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:380)
I think the exception has something to do with @Cacheable, because the app runs fine for a few hours and then crashes and logs that fragment. I want to put this method under heavy test load, so I will know whether it is the cache or something else. Any suggestions?
PS: @Cacheable is from the Ehcache framework.
PS2: Sorry for my English.
The issue comes from the fact that you let Hibernate/JPA entities escape, making them live longer than the session they were attached to.
So the issue surfaces at a later point, when you use one of them (from the cache, though it may not even be directly related) and try to access a lazily loaded collection. That access fails because you are already outside the bounds of your Hibernate/JPA session.
As a rule of thumb, you should not cache Hibernate/JPA entities.
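As an illustration of that rule of thumb, a minimal sketch (the CategoryView DTO is hypothetical, not from the question; the Category accessor names follow the question's code): map the entities to plain objects while the session is still open, and let @Cacheable cache those instead.
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Hypothetical DTO: plain data, no Hibernate proxies, safe to cache in Ehcache.
public class CategoryView implements Serializable {
    private final Integer catId;
    private final List<CategoryView> children;

    public CategoryView(Integer catId, List<CategoryView> children) {
        this.catId = catId;
        this.children = children;
    }

    // Must be called while the session is open, so lazy collections can still load.
    public static CategoryView fromEntity(Category category) {
        List<CategoryView> children = new ArrayList<CategoryView>();
        for (Category child : category.getCategories()) {
            children.add(fromEntity(child));
        }
        return new CategoryView(category.getCatId(), children);
    }
}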
Let's suppose we have this situation:
We have Spring Data configured in the standard way: there is a Repository object, an Entity object, and everything works well.
Now, for some complex reasons, I have to use the EntityManager (or JdbcTemplate, or whatever sits at a lower level than Spring Data) directly to update the table associated with my Entity, using a native SQL query. So I'm not using the Entity object, but simply doing a manual database update on the table I use for the entity (more precisely, the table from which I get the values; see the next rows).
The reason is that I had to bind my Spring Data Entity to a MySQL view that makes a UNION of multiple tables, not directly to the table I need to update.
What happens is:
In a functional test, I call the "manual" update method (on the table from which the MySQL view is created) as previously described (through the EntityManager), and if I then do a simple Repository.findOne(objectId), I get the old object (not the updated one). I have to call EntityManager.refresh(object) to get the updated object.
Why?
Is there a way to "synchronize" objects out of the box (or force some refresh) in Spring Data? Or am I asking for a miracle?
I'm not being ironic, but maybe I'm not expert enough; maybe (or probably) it is my own ignorance. If so, please explain why and (if you want) share some advanced knowledge about this amazing framework.
If I make a simple Repository.findOne(objectId) I get the old object (not the updated one). I have to call EntityManager.refresh(object) to get the updated object.
Why?
The first-level cache is active for the duration of a session. Any entity previously retrieved in the context of a session will be served from the first-level cache unless there is a reason to go back to the database.
Is there a reason to go back to the database after your SQL update? Well, as the book Pro JPA 2 notes (p. 199) regarding bulk update statements (either via JPQL or SQL):
The first issue for developers to consider when using these [bulk update] statements is that the persistence context is not updated to reflect the results of the operation. Bulk operations are issued as SQL against the database, bypassing the in-memory structures of the persistence context.
which is what you are seeing. That is why you need to call refresh to force the entity to be reloaded from the database, as the persistence context is not aware of any potential modifications.
The book also notes the following about using Native SQL statements (rather than JPQL bulk update):
CAUTION: Native SQL update and delete operations should not be executed on tables mapped by an entity. The JPQL operations tell the provider what cached entity state must be invalidated in order to remain consistent with the database. Native SQL operations bypass such checks and can quickly lead to situations where the in-memory cache is out of date with respect to the database.
Essentially then, if you have a second-level cache configured, updating any entity currently in the cache via a native SQL statement is likely to leave stale data in the cache.
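To make that behaviour concrete, here is a minimal sketch of the sequence (MyEntity, my_table and the status column are illustrative, not from the question):
// Load the entity: it now lives in the persistence context (1st-level cache).
MyEntity entity = em.find(MyEntity.class, id);

// A native update bypasses the persistence context entirely.
em.createNativeQuery("UPDATE my_table SET status = 'DONE' WHERE id = ?")
        .setParameter(1, id)
        .executeUpdate();

// Still served from the persistence context: the stale, cached instance.
MyEntity same = em.find(MyEntity.class, id); // same == entity, unchanged

// Force a reload from the database.
em.refresh(entity); // entity now reflects status = 'DONE'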
In Spring Boot JpaRepository:
If our modifying query changes entities contained in the persistence context, then this context becomes outdated.
In order to fetch the latest records from the database, use @Modifying(clearAutomatically = true).
The @Modifying annotation has a clearAutomatically attribute, which defines whether the underlying persistence context should be cleared after executing the modifying query.
Example:
@Modifying(clearAutomatically = true)
@Query("UPDATE NetworkEntity n SET n.network_status = :network_status WHERE n.network_id = :network_id")
int expireNetwork(@Param("network_id") Integer network_id, @Param("network_status") String network_status);
Based on the way you described your usage, fetching from the repository should retrieve the updated object, without the need to refresh it, as long as the method that used the EntityManager to merge is annotated with @Transactional.
Here's a sample test:
@DirtiesContext(classMode = ClassMode.AFTER_CLASS)
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ApplicationConfig.class)
@EnableJpaRepositories(basePackages = "com.foo")
public class SampleSegmentTest {

    @Resource
    SampleJpaRepository segmentJpaRepository;

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    @Test
    public void test() {
        Segment segment = new Segment();
        ReflectionTestUtils.setField(segment, "value", "foo");
        ReflectionTestUtils.setField(segment, "description", "bar");
        segmentJpaRepository.save(segment);

        assertNotNull(segment.getId());
        assertEquals("foo", segment.getValue());
        assertEquals("bar", segment.getDescription());

        ReflectionTestUtils.setField(segment, "value", "foo2");
        entityManager.merge(segment);

        Segment updatedSegment = segmentJpaRepository.findOne(segment.getId());
        assertEquals("foo2", updatedSegment.getValue());
    }
}
My current project is done in JavaFX. I use properties to bind view fields to beans bidirectionally (with BeanPathAdapter from JFXtras).
I chose to use JPA with ObjectDB as the model.
This is the first time I have used JPA in a standalone project, and here I'm facing the problem of managed entities.
Currently, I bind managed entities to view fields, and when the value of a view field changes, the entity is updated... and so is the database.
I'm trying to find a way to manually persist/merge an entity, so I can ask the user whether he wants to save or not.
Here's the code I use to get the list:
EntityManagerFactory emf = Persistence.createEntityManagerFactory("$objectdb/data/db.odb");
EntityManager em = emf.createEntityManager();
List<XXX> entities = em.createQuery("SELECT x FROM XXX x").getResultList();
So when I do
entity.setName("test");
the entity is updated in the database.
What I'm looking for is a way to stop the entity from updating automatically.
I tried (just after the getResultList):
em.clear();
or
em.detach(entity);
but it loses the related instances, even with CascadeType.DETACH.
I also tried
em.setFlushMode(FlushModeType.COMMIT);
but it still updates automatically...
I also tried to clone the object, but when I want to merge it, I get an exception:
Attempt to reuse an existing primary key value
I thought of an alternative solution: use a variable as a 'buffer' and fill the managed bean from the buffer when the user saves. But then BeanPathAdapter loses its point; it would be the same as filling the view fields manually and filling the bean fields manually after saving.
Could you help me find a solution?
Thanks,
Smoky
EDIT:
I'll answer my own question :p
After 3 hours of research, I found a solution.
The 'cloning' solution was the 'best' of those I listed, but I don't think it's truly the best one.
The cause of the exception was the code I used to persist/merge my entity: I was persisting a non-managed entity with an already existing id, while thinking I was merging it...
I wrote a generic method so I won't make that mistake again:
public <T extends IEntity> T persist(T object) {
    em.getTransaction().begin();
    if (object.getId() == null) {
        // New entity: persist it, then reload its generated state.
        em.persist(object);
        em.flush();
        em.getTransaction().commit();
        em.refresh(object);
    } else {
        // Existing entity: merge returns the managed copy.
        object = em.merge(object);
        em.getTransaction().commit();
    }
    return object;
}
So the solution: when I have to bind the entity to the view, I use entity.clone(), so I can use the entity as non-managed and merge it when I want.
But if you have a cleaner solution, I'm interested :)
Thanks again
In addition to the solution above, standard solutions are:
Use detached objects in the model and then merge them into the EntityManager (see the sketch after this list).
Use managed objects in the model, keeping the EntityManager open (with no detach/merge).
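A minimal sketch of the first option, assuming a JPA 2.0 EntityManager with a resource-local transaction (XXX is the asker's placeholder entity; initialize any lazy relations you need before detaching):
List<XXX> entities = em.createQuery("SELECT x FROM XXX x", XXX.class).getResultList();
for (XXX entity : entities) {
    em.detach(entity); // edits made through the bound view fields no longer hit the database
}

// ... the user edits the detached entities via the view ...

// Only when the user confirms the save:
em.getTransaction().begin();
for (XXX entity : entities) {
    em.merge(entity); // re-attach and write the accumulated changes
}
em.getTransaction().commit();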
I have a PostgreSQL 8.4 database with some tables and views, which are essentially joins on some of the tables. I used NetBeans 7.2 (as described here) to create REST-based services derived from those views and tables and deployed those to a GlassFish 3.1.2.2 server.
There is another process which asynchronously updates the contents of some of the tables used to build the views. I can query the views and tables directly and see that these changes have occurred correctly. However, when pulled from the REST-based services, the values are not the same as those in the database. I am assuming this is because JPA has cached local copies of the database contents on the GlassFish server and needs to refresh the associated entities.
I have tried adding a couple of methods to the AbstractFacade class NetBeans generates:
public abstract class AbstractFacade<T> {
    private Class<T> entityClass;
    private String entityName;
    private static boolean _refresh = true;

    public static void refresh() { _refresh = true; }

    public AbstractFacade(Class<T> entityClass) {
        this.entityClass = entityClass;
        this.entityName = entityClass.getSimpleName();
    }

    private void doRefresh() {
        if (_refresh) {
            EntityManager em = getEntityManager();
            em.flush();
            for (EntityType<?> entity : em.getMetamodel().getEntities()) {
                if (entity.getName().contains(entityName)) {
                    try {
                        em.refresh(entity);
                        // log success
                    } catch (IllegalArgumentException e) {
                        // log failure ... typically complains entity is not managed
                    }
                }
            }
            _refresh = false;
        }
    }
    ...
}
I then call doRefresh() from each of the find methods NetBeans generates. What normally happens is that the IllegalArgumentException is thrown, stating something like Can not refresh not managed object: EntityTypeImpl#28524907:MyView [ javaType: class org.my.rest.MyView descriptor: RelationalDescriptor(org.my.rest.MyView --> [DatabaseTable(my_view)]), mappings: 12].
So I'm looking for suggestions on how to correctly refresh the entities associated with the views so they are up to date.
UPDATE: It turns out my understanding of the underlying problem was not correct. It is somewhat related to another question I posted earlier; namely, the view had no single field which could be used as a unique identifier. NetBeans required that I select an ID field, so I just chose one part of what should have been a multi-part key. This exhibited the behavior that all records with a particular ID field appeared identical, even though the database had records with the same ID field whose other fields differed. JPA didn't look any further than what I told it was the unique identifier and simply pulled the first record it found.
I resolved this by adding a unique identifier field (I never was able to get the multi-part key to work properly).
I recommend adding an @Startup @Singleton class that establishes a JDBC connection to the PostgreSQL database and uses LISTEN and NOTIFY to handle cache invalidation.
Update: Here's another interesting approach, using pgq and a collection of workers for invalidation.
Invalidation signalling
Add a trigger on the table that's being updated that sends a NOTIFY whenever an entity is updated. On PostgreSQL 9.0 and above this NOTIFY can contain a payload, usually a row ID, so you don't have to invalidate your entire cache, just the entity that has changed. On older versions where a payload isn't supported you can either add the invalidated entries to a timestamped log table that your helper class queries when it gets a NOTIFY, or just invalidate the whole cache.
Your helper class now LISTENs on the NOTIFY events the trigger sends. When it gets a NOTIFY event, it can invalidate individual cache entries (see below), or flush the entire cache. You can listen for notifications from the database with PgJDBC's listen/notify support. You will need to unwrap any connection pooler managed java.sql.Connection to get to the underlying PostgreSQL implementation so you can cast it to org.postgresql.PGConnection and call getNotifications() on it.
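A minimal sketch of that listener side with plain PgJDBC (the channel name, connection details and eviction helper are illustrative; getNotifications() is the PgJDBC call mentioned above, and the dummy SELECT 1 is the documented way to make the driver pick up pending notifications):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class CacheInvalidationListener implements Runnable {
    public void run() {
        try {
            // Dedicated, unpooled connection; a pooled one would need unwrapping first.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "secret");
            Statement st = conn.createStatement();
            st.execute("LISTEN entity_changed"); // the channel your trigger NOTIFYs on
            st.close();
            PGConnection pgConn = (PGConnection) conn;
            while (!Thread.currentThread().isInterrupted()) {
                // Poke the backend so pending notifications are delivered.
                Statement poll = conn.createStatement();
                poll.executeQuery("SELECT 1").close();
                poll.close();
                PGNotification[] notifications = pgConn.getNotifications();
                if (notifications != null) {
                    for (PGNotification n : notifications) {
                        // On 9.0+ the payload can carry the changed row's ID.
                        evict(n.getParameter());
                    }
                }
                Thread.sleep(500); // getNotifications() does not block
            }
        } catch (Exception e) {
            // log, and decide whether to reconnect
        }
    }

    private void evict(String rowId) {
        // e.g. emf.getCache().evict(MyView.class, Long.valueOf(rowId)); see below
    }
}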
As an alternative to LISTEN and NOTIFY, you could poll a change-log table on a timer, with a trigger on the problem table that appends changed row IDs and change timestamps to the change-log table. This approach is portable, apart from needing a different trigger for each DB type, but it's inefficient and less timely: it requires frequent polling, and still has a time delay that the listen/notify approach does not. In PostgreSQL you can use an UNLOGGED table to reduce the costs of this approach a little.
Cache levels
EclipseLink/JPA has a couple of levels of caching.
The 1st level cache is at the EntityManager level. If an entity is attached to an EntityManager by persist(...), merge(...), find(...), etc, then the EntityManager is required to return the same instance of that entity when it is accessed again within the same session, whether or not your application still has references to it. This attached instance won't be up-to-date if your database contents have since changed.
The 2nd level cache, which is optional, is at the EntityManagerFactory level and is a more traditional cache. It isn't clear whether you have the 2nd level cache enabled. Check your EclipseLink logs and your persistence.xml. You can get access to the 2nd level cache with EntityManagerFactory.getCache(); see Cache.
@thedayofcondor showed how to flush the 2nd level cache with:
em.getEntityManagerFactory().getCache().evictAll();
but you can also evict individual objects with the evict(java.lang.Class cls, java.lang.Object primaryKey) call:
em.getEntityManagerFactory().getCache().evict(theClass, thePrimaryKey);
which you can use from your @Startup @Singleton NOTIFY listener to invalidate only those entries that have changed.
The 1st level cache isn't so easy, because it's part of your application logic. You'll want to learn about how the EntityManager, attached and detached entities, etc work. One option is to always use detached entities for the table in question, where you use a new EntityManager whenever you fetch the entity. This question:
Invalidating JPA EntityManager session
has a useful discussion of handling invalidation of the entity manager's cache. However, it's unlikely that an EntityManager cache is your problem, because a RESTful web service is usually implemented using short EntityManager sessions. This is only likely to be an issue if you're using extended persistence contexts, or if you're creating and managing your own EntityManager sessions rather than using container-managed persistence.
You can disable caching entirely (see: http://wiki.eclipse.org/EclipseLink/FAQ/How_to_disable_the_shared_cache%3F ), but be prepared for a fairly large performance loss.
Otherwise, you can clear the cache programmatically with:
em.getEntityManagerFactory().getCache().evictAll();
You can map it to a servlet so you can call it externally. This is better if your database is modified externally only very seldom and you just want to be sure JPA will pick up the new version (see the sketch below).
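For example, a minimal sketch of such a servlet (the URL pattern is illustrative; in a real deployment you would restrict access to it):
import java.io.IOException;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/admin/evict-cache") // hypothetical URL
public class EvictCacheServlet extends HttpServlet {

    @PersistenceUnit
    private EntityManagerFactory emf;

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        emf.getCache().evictAll(); // drop everything in the shared (2nd-level) cache
        resp.getWriter().println("JPA shared cache evicted");
    }
}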
Just a thought, but how do you receive your EntityManager/Session/whatever?
If you queried the entity in one session, it will be detached in the next one and you will have to merge it back into the persistence context to get it managed again.
Trying to work with detached entities may result in those not-managed exceptions; you should re-query the entity, or you could try merge (or similar methods).
JPA doesn't do any caching by default; you have to configure it explicitly. I believe this is a side effect of the architectural style you have chosen: REST. I think caching is happening at the web servers, proxy servers, etc. I suggest you read this and debug further.
I want to duplicate a collection of entities in my database.
I retrieve the collection with:
CategoryHistory chNew = new CategoryHistory();
CategoryHistory chLast = (CategoryHistory) em.createQuery("SELECT ch from CategoryHistory ch WHERE ch.date = MAX(date)").getSingleResult();
List<Category> categories = chLast.getCategories();
chNew.addCategories(categories); // should be a copy of the categories: OneToMany
Now I want to duplicate the list of categories and persist it with the EntityManager.
I'm using JPA/Hibernate.
UPDATE
After learning how to detach my entities, I need to know what to detach.
Current code:
CategoryHistory chLast = (CategoryHistory) em.createQuery("SELECT ch from CategoryHistory ch WHERE ch.date=(SELECT MAX(date) from CategoryHistory)").getSingleResult();
Set<Category> categories = chLast.getCategories();

// detach
org.hibernate.Session session = ((org.hibernate.ejb.EntityManagerImpl) em.getDelegate()).getSession();
session.evict(chLast); // does this also detach its child entities?

// set the relations
chNew.setCategories(categories);
for (Category category : categories) {
    category.setCategoryHistory(chNew);
}

// set the creation date to now
chNew.setDate(Calendar.getInstance().getTime());

// persist
em.persist(chNew);
This throws a failed to lazily initialize a collection of role: entities.CategoryHistory.categories, no session or session was closed exception.
I think it wants to lazily load the categories again, since I have detached them. What should I do now?
You need to detach your instances from the session. There are three ways to do this:
Close the session (probably not possible in your case).
Serialize the object and deserialize it again.
Clone the object and clear/null the primary key/id field.
Then you must change the business key (so the new instances will return false when calling equals() with an unmodified instance). This is the important step: Without it, Hibernate will reattach the instances to the existing ones in the DB or you'll get other, strange errors.
After that, you can save the new copies just like any other instance.
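For example, a minimal sketch of option 3 combined with the business-key step, using the entities from the question (the catId and name accessors are illustrative):
// A fresh instance is unknown to the session, so Hibernate treats it as transient.
Category copy = new Category();
copy.setCatId(null);                          // cleared primary key: forces an INSERT
copy.setName(original.getName() + " (copy)"); // changed business key: equals() now differs
// ... copy the remaining fields and child collections as needed ...

em.persist(copy); // saved as a brand-new row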
Aaron Diguila's answer is the way to go here, i.e. you need to detach your instances, set the business key to null and then persist them.
Sadly, there is no way to disconnect one object from the entity manager with JPA 1.x (JPA 2.0 will have EntityManager.detach(Object) and fix this). So, either wait for JPA 2.0 (not an option, I guess) or use Hibernate's underlying Session.
To do so, you can cast the delegate of an EntityManager to a Hibernate Session.
Session session = (Session) em.getDelegate();
Of course, this only works if you use Hibernate as a Java Persistence provider, because the delegate is the Session API.
Then, to detach your object:
session.evict(object);
UPDATE: According to Be careful while using EntityManager.getDelegate(), with GlassFish one should actually use (and likely in your case too):
org.hibernate.Session session = ((org.hibernate.ejb.EntityManagerImpl) em.getDelegate()).getSession();
But this would not work in JBoss, which suggests using the previously mentioned code:
org.hibernate.Session session = (Session) em.getDelegate();
While I understand that using getDelegate() makes JPA code non-portable, I must admit that I was not expecting the result of this method call to be implementation specific.
UPDATE 2: To answer the updated part of the question: I'm not sure that you eagerly loaded the categories. This is not the best way to do it, but what happens if you call categories.get(0) (or iterate the set) before eviction? Also, I may be missing that part, but where do you nullify the key of the categories?
Ok, since I'm using GlassFish v3 and JPA 2.0 is final, I used EntityManager.detach().
Strangely, ejb3-persistence.jar was included in my lib, so I threw it out and used the javax.persistence from the GlassFish jar. The detach method is there, but my Hibernate version has no implementation of it yet.
Clone or copy the properties of each object. You can use Apache BeanUtils.copyProperties(copy, orig).
In OpenJPA, manually remove monitoring of the entity using Apache BeanUtils:
BeanUtils.setProperty(copy, "pcVersionInit", false);
Set the primary key to default/null.
Persist each copy.
Also see: http://www.java-tutorial.ch/java-persistence-api/how-to-persist-duplicate-of-an-entity-with-openjpa
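Putting those steps together, a minimal sketch (assumes Apache Commons BeanUtils on the classpath; the Category accessors and the open EntityManager em are illustrative, and pcVersionInit only applies to OpenJPA-enhanced classes):
import org.apache.commons.beanutils.BeanUtils;

public Category duplicate(Category original) throws Exception {
    // 1. Shallow-copy the persistent object's properties into a fresh instance.
    Category copy = new Category();
    BeanUtils.copyProperties(copy, original);

    // 2. OpenJPA only: clear the enhancer flag so the copy is not seen as managed.
    BeanUtils.setProperty(copy, "pcVersionInit", false);

    // 3. Reset the primary key so the provider assigns a new one.
    copy.setCatId(null);

    // 4. Persist the copy as a new row.
    em.persist(copy);
    return copy;
}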