org.hibernate.StaleStateException
It is thrown intermittently (not consistently) in a unit test case.
The normal flow is:
Create
Update
Delete
The StaleStateException is thrown while executing the delete operation.
The entity has child entities in a one-to-many relationship:
<set name="child" cascade="delete, all,delete-orphan" inverse="true"
lazy="true">
<cache usage="nonstrict-read-write"/>
<key column="COLUMN"/>
<one-to-many class="CHILDClass"/>
</set>
The cache strategy is "nonstrict-read-write".
I'm not sure whether the issue is caused by the second-level cache, as the exception is not consistent.
How can I fix this, and what extra details should I check?
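For reference, here is a minimal sketch of the flow described above with explicit flushes and an optional second-level cache eviction before the delete; the entity class, property accessors, and session handling are assumptions, not the actual code:
// Hypothetical entity and accessors; the idea is to flush between steps and to
// rule out the "nonstrict-read-write" collection cache before the delete.
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

ParentEntity parent = new ParentEntity();          // create
session.save(parent);
session.flush();

parent.setSomeProperty("updated");                 // update
session.flush();
session.clear();                                   // detach so the next load is fresh

// Evict the cached "child" collection region to check whether the stale state
// comes from the second-level cache rather than the database row itself.
sessionFactory.getCache().evictCollectionRegion(ParentEntity.class.getName() + ".child");

ParentEntity reloaded = (ParentEntity) session.get(ParentEntity.class, parent.getId());
session.delete(reloaded);                          // delete
tx.commit();
session.close();
If the exception stops once the collection region is evicted (or caching is disabled for the test), the second-level cache is the likely culprit; otherwise look for a concurrent update or a missing flush between the update and the delete.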
I am migrating a legacy Hibernate project from version 4.3 (with Java 11) to 5.6 (with Java 16). The HBM files below map an object graph of Jurisdiction -> Unit -> UnitAux. Units are lazy-loaded, and UnitAux is one-to-one with Unit. Under version 4.3, initializing the Units took about 100ms. Under version 5.6, it now takes 600-800ms.
These are the abbreviated HBM files for the 3 entities:
Jurisdiction.hbm.xml
<hibernate-mapping>
<class name="com.edc.c2c.core.model.impl.Jurisdiction" table="Jurisdiction" schema="domain" dynamic-update="true">
<set name="units"
inverse="true" cascade="all" lazy="true" fetch="select"
optimistic-lock="false" batch-size="1000" where="recordStatus = 'A'">
<key>
<column name="jurisdictionId"/>
</key>
<one-to-many class="com.edc.c2c.core.model.impl.Unit"/>
</set>
</class>
</hibernate-mapping>
Unit.hbm.xml
<hibernate-mapping>
<class name="com.edc.c2c.core.model.impl.Unit" table="Unit" schema="domain" dynamic-update="false">
<composite-id>
<key-property name="id" column="id" type="long"/>
<key-property name="owningJurisdictionId" column="jurisdictionId" type="long"/>
</composite-id>
<one-to-one name="unitAux" class="com.edc.c2c.core.model.impl.UnitAux" cascade="all" fetch="join" property-ref="unit"/>
</class>
</hibernate-mapping>
UnitAux.hbm.xml
<hibernate-mapping>
<class name="com.edc.c2c.core.model.impl.UnitAux" table="UnitAux" schema="domain" dynamic-update="true">
<composite-id>
<key-property name="id" column="id" type="long"/>
<key-property name="jurisdictionId" column="jurisdictionId" type="long"/>
</composite-id>
<many-to-one name="unit" class="com.edc.c2c.core.model.impl.Unit" unique="true" not-null="true"
cascade="all" insert="false" update="false">
<column name="id"/>
<column name="jurisdictionId"/>
</many-to-one>
</class>
</hibernate-mapping>
If I comment out the one-to-one in Unit.hbm.xml, the unit(s) Set loads fast, as expected.
In the UnitAux.hbm.xml, I replaced the many-to-one with a bag containing a one-to-many, something like this:
<bag name="unitGroup" inverse="true" cascade="all" lazy="true" fetch="select">
<key>
<column name="id"/>
<column name="jurisdictionId"/>
</key>
<one-to-many class="com.edc.c2c.core.model.impl.Unit"/>
</bag>
With this, the UnitAux class had a List property called unitGroup. With the bag, the unit(s) load time dropped to 300ms.
I'm at a loss as to how to get the hibernate 5.6 to perform at the same load times as 4.3.
Any ideas or suggestions would be greatly appreciated.
Update: I forgot to mention, both versions effectively produce the same SQL. Something about how the objects themselves are initialized must be causing the slowdown.
Update 2: The session statistics between 4.3 and 5.6 were very similar; not enough to explain the performance difference. My investigation has shown that the delays appear to be centered around initializing the entities. Specifically, the call to
Loader.initializeEntitiesAndCollections( final List hydratedObjects, final Object resultSetId, final SharedSessionContractImplementor session, final boolean readOnly, List<AfterLoadAction> afterLoadActions)
The time spent here is where the latency lies. Every property in each entity is tested for bytecode enhancement. In my test, I'm loading 600+ Units, along with the 600+ UnitAux entities. Is there an alternate loader that doesn't do this?
Update 3: Changing the association for Unit -> UnitAux to unidirectional reduced the latency by roughly half. Now it's only 3x slower.
Update 4: This is very strange. After experimenting with a variety of things, I made the following discovery: if I enable logging at the INFO (or ERROR) level for Hibernate (see the config below), everything runs fast, at the expected timing:
<logger name="org.hibernate" additivity="false">
<level value="info"/>
<appender-ref ref="STDOUT"/>
</logger>
If no logger is declared for Hibernate (meaning nothing is specifically configured for it), it runs slow. Is this something peculiar to JBoss Logging? I'm using jboss-logging-3.4.2.Final.jar. Does it run slower if nothing is explicitly declared in log4j.xml? It's like the classic problem of debug statements that never get printed, yet Java still has to build all of the string values, which leads to extreme latency.
Update 5: I just did a spot check of the source code for Hibernate Core 5.6.0.Final. 141 classes use log.trace, and 249 classes use log.debug. Most of the log.trace calls don't pre-check whether TRACE is enabled. The log.debug calls are checked more frequently, but there are still a ton that don't pre-check whether DEBUG is enabled.
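For illustration (this is the general pattern being described, not Hibernate's actual code), the guard prevents the message, and any expensive toString() calls behind it, from being built when the level is disabled:
import org.jboss.logging.Logger;

public class GuardedLoggingExample {
    private static final Logger log = Logger.getLogger(GuardedLoggingExample.class);

    void process(Object entity) {
        // Unguarded: the concatenation (and entity.toString()) runs even when TRACE is off.
        log.trace("Processing entity: " + entity);

        // Guarded: the message is only built when TRACE is actually enabled.
        if (log.isTraceEnabled()) {
            log.trace("Processing entity: " + entity);
        }
    }
}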
In the end, I found my own answer to this problem.
First, there was no problem at all with Hibernate 5.x and one-to-one bidirectional associations. The problem had everything to do with debug/trace logging. With Hibernate 5.x and bytecode enhancement, a lot of new logging was added. Not only is logging done for each row-wise entity, but also for each column/property (testing for column lazy loading?).
Second, my entity classes have special formatting in their toString() methods. Invocations of these toString() methods can take a lot of time to execute, and that cost multiplies over hundreds of entities.
Third, there must be some bug between jboss-logging and log4j 1.2.x. Internally, jboss-logging's doLogf() checks whether the logging level is enabled; however, it still reaches the String formatter. A snapshot from VisualVM with the application running in Tomcat showed that 700ms was added for debug formatting, even though it would never be printed to the log file.
If the Hibernate code had debug/trace level checks before the logging calls, this would not happen.
In the end, with Tomcat, I removed all log4j and slf4j-log4j jar files. That solved the problem.
Consider three tables. The interest rate can come from either a Master Document or a Manual Override. If it comes from the Master Document, master_doc_id carries the reference; if it comes from a Manual Override, manual_override_id carries the reference.
INTEREST_RATE
id,
rate,
master_doc_id,
manual_override_id
MASTER_DOC
id,
<Master doc related fields>
MANUAL_OVERRIDE
id,
<Manual Override related fields>
The interest rate comes from either the Master Doc or the Manual Override: exactly one of the two columns holds an id, and the other is null. Both will never have values, and both will never be null.
Both MasterDoc and ManualOverride have a one-to-many mapping to InterestRate:
MasterDoc
<set access="field" name="interestRates" inverse="false" cascade="all-delete-orphan">
<cache usage="read-write"/>
<key column="MASTER_DOC_ID"/>
<one-to-many class="com.foo.foos.InterestRate"/>
</set>
ManualOverride
<set access="field" name="interestRates" inverse="false" cascade="all-delete-orphan">
<cache usage="read-write"/>
<key column="MANUAL_OVERRIDE_ID"/>
<one-to-many class="com.foo.foos.InterestRate"/>
</set>
This works fine.
But I need to enforce the constraint that exactly one of them has a value and the other is null.
I tried:
The obvious not-null on the one-to-many mapping. This fails because both columns can't have values at the same time.
A constraint on the table (in the database). But Hibernate creates an insert followed by an update on the table, so even though the final result would be valid, the constraint fails after the initial insert.
For example, while uploading the initial file:
insert into master_doc......
insert into interest_rate.... (with null values in master_doc_id and manual_override_id)
update interest_rate (set master_doc_id = id of master_doc)
The constraint fails on the second statement.
Is there any way I can get this to work? I need this constraint in place to prevent manual updates to the table.
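One common way around the insert-then-update pattern is to let the child own the foreign key, i.e. map a many-to-one from InterestRate back to MasterDoc/ManualOverride and mark the sets inverse="true". A rough sketch of what the save would then look like; the property names (setMasterDoc, getInterestRates) are assumptions about your classes:
// With inverse="true" on the parent's set and the FK owned by the child side,
// Hibernate writes master_doc_id in the initial INSERT rather than in a
// follow-up UPDATE, so a CHECK constraint such as
//   (master_doc_id IS NULL) <> (manual_override_id IS NULL)
// can be enforced without deferring it.
MasterDoc masterDoc = new MasterDoc();
InterestRate rate = new InterestRate();
rate.setMasterDoc(masterDoc);              // child side owns the FK column
masterDoc.getInterestRates().add(rate);    // keep both sides of the association in sync
session.save(masterDoc);                   // cascades to InterestRate with the FK already set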
I have a many-to-many association defined like:
Parent.hbm.xml:
<set name="children" table="child_parent_map" lazy="true">
<cache usage="nonstrict-read-write" />
<key column="parent_id" />
<many-to-many class="Child">
<column name="child_id" not-null="true" index="child_parent_by_child"/>
</many-to-many>
</set>
Child.hbm.xml:
<set name="parents" table="child_parent_map" lazy="true">
<cache usage="nonstrict-read-write" />
<key column="child_id" />
<many-to-many column="parent_id" class="Parent" lazy="false"/>
</set>
I am quite sure I am initializing Parent.children by walking the collection. Something like:
for(Child child : parent.getChildren()) {
Hibernate.initialize(child.getAnotherProperty());
}
The parent has six children. However, in one session the parent appears to have only five, and in another (2 seconds later, with nothing changed in the DB or another session) all six. I actually discovered this after detaching these entities from the session with a custom cloner.
I thought that lazy collections are either completely initialized (i.e. all elements are), or not. Is it possible that somehow only a part of the collection was initialized? Can it be an issue with caching?
EDIT: This session handles a fairly large data set (a few thousands of entities). Is it possible that this is because some already-loaded entities got evicted from the session?
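One way to narrow this down is to check the collection's initialization state explicitly at the point where it is accessed; a small sketch (accessors assumed):
// Hibernate proxies the whole collection: before first access it is uninitialized,
// and after Hibernate.initialize() (or iteration) it should contain every row the
// query returned. Logging the state and size here helps tell a partially built
// detached copy apart from a genuinely short collection.
Set<Child> children = parent.getChildren();
boolean wasInitialized = Hibernate.isInitialized(children);
Hibernate.initialize(children);
System.out.println("initialized before access: " + wasInitialized
        + ", size after initialize: " + children.size());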
Start by checking your hashCode() and equals() methods; an incorrect implementation of these methods often causes this kind of behavior.
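For example, a common identifier-based sketch (the Child class and getId() accessor are assumptions); a broken equals()/hashCode() on entities kept in a Set can make elements appear to go missing or be duplicated:
// Equality based on the persistent identifier; transient (unsaved) instances
// fall back to reference equality because their id is still null.
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Child)) return false;
    Child other = (Child) o;
    return getId() != null && getId().equals(other.getId());
}

@Override
public int hashCode() {
    // A constant hash keeps the contract stable before and after the id is assigned.
    return Child.class.hashCode();
}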
I'm doing a search on one of my tables (legacy database) and getting horrible response times. The query is built with Hibernate's Criteria API, e.g.:
Criteria crit = getSessionFactory().getCurrentSession().createCriteria(P1.class);
crit.add(Restrictions.sqlRestriction("{alias}.compno like ?", "%" + s + "%", new StringType()));
crit.setMaxResults(25);
crit.setFirstResult(0);
crit.addOrder(Order.asc("compno"));
crit.list();
As you can see, I'm already doing paging here to improve the performance. This criteria query takes ~6 seconds on average.
The native query, which looks like this:
select * from SCHEM.P1 where compno like '%100%' order by compno fetch first 25 rows only
takes only 10 ms, which is a huge difference IMO. Why does the criteria query run so slowly? Do I need to switch back to the native SQL query?
Good point in the comments:
Yes, there are some relationships which I didn't have in scope:
<set name="pI" table="P12" lazy="false">
<key column="awcompno" update="false" />
<one-to-many class="org.gee.hibernate.P12" not-found="ignore"/>
</set>
<one-to-one name="info" class="org.gee.hibernate.P13" />
<set name="ma" table="P03" lazy="true" schema="SCHEMP" mutable="false" >
<key column="macountry" property-ref="land" update="false" />
<one-to-many class="org.gee.hibernate.P03" not-found="ignore" />
</set>
<set name="users" table="P15" lazy="true">
<key column="apcompno" update="false" />
<one-to-many class="org.gee.hibernate.P15" not-found="ignore"/>
</set>
My tip is:
<set name="pI" table="P12" lazy="false">
<key column="awcompno" update="false" />
<one-to-many class="org.gee.hibernate.P12" not-found="ignore"/>
</set>
This collection is not lazy. That may be your bottleneck.
Do you need all of that information? You can read individual fields of your entity with Hibernate if you only want the IDs.
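For instance, if only the identifiers are needed, a projection keeps Hibernate from hydrating the full entity graph and its non-lazy associations; a sketch based on the criteria shown in the question:
Criteria crit = getSessionFactory().getCurrentSession().createCriteria(P1.class);
crit.add(Restrictions.sqlRestriction("{alias}.compno like ?", "%" + s + "%", new StringType()));
crit.setProjection(Projections.id());   // select only the identifier column
crit.setMaxResults(25);
crit.setFirstResult(0);
crit.addOrder(Order.asc("compno"));
List ids = crit.list();                 // ids only: no entities, collections, or one-to-ones are loaded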
IBM pureQuery has some really nice facilities for accelerating Hibernate applications that work with DB2. The other benefit: it makes it much easier to debug, as it allows you to correlate your SQL and your Java code.
Take a look at this article http://www.ibm.com/developerworks/data/library/techarticle/dm-1008hibernateibatispurequery1/index.html
I'd say have a look at the DB logs to check exactly which SQL statements get executed. Hibernate may be running more than just the single query you would expect, as it may load eager collections, etc.
So enable Hibernate query logging, or better yet, check the DB logs to see what gets executed.
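If you go the Hibernate-logging route, show_sql can be switched on through configuration; a sketch of the programmatic form (the same properties can also be set in hibernate.cfg.xml):
// hibernate.show_sql prints every statement Hibernate issues;
// hibernate.format_sql makes the output readable.
Configuration configuration = new Configuration().configure();
configuration.setProperty("hibernate.show_sql", "true");
configuration.setProperty("hibernate.format_sql", "true");
SessionFactory sessionFactory = configuration.buildSessionFactory();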
I have a User and a set of Authorities in a one-to-many relationship:
User.hbm.xml:
<set name="authorities" table="authorities" cascade="all-delete-orphan">
<key column="user_id" />
<one-to-many class="com.ebisent.domain.Authority" />
</set>
When I delete a user, I want to delete the authorities as well, but what is happening is that the child table's foreign key (authorities.user_id) is set to null instead. Then I get the following error and the User delete is rolled back:
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
The authorities.user_id update to null is not rolled back, however.
How can I delete Authorities when I delete the parent User?
EDIT:
I got this working by explicitly deleting the authorities, calling refresh() on the user, then deleting the user, but I would like to know the "right" way to do this.
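Roughly what that workaround looks like as code, for anyone hitting the same thing (a sketch; getAuthorities() and the session handling are assumptions):
// Delete the children explicitly, re-read the user so its collection reflects
// the database, then delete the user itself.
for (Authority authority : user.getAuthorities()) {
    session.delete(authority);
}
session.refresh(user);
session.delete(user);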
This is weird; a cascade of all-delete-orphan should cascade the delete operation from the parent to the children, so the following should suffice to get the children deleted:
Parent p = (Parent) session.load(Parent.class, pid);
session.delete(p);
session.flush();
Do you get a different result when using all,delete-orphan or, even more simply, delete (you shouldn't)? Is the association bidirectional? If yes, could you show the other side and the corresponding mapping?
The association is only from parent to child, and I get the same results with all, delete-orphan, and delete. BUT... I didn't have session.flush(), and adding it appears to solve the problem.
The explicit flush might help, but it shouldn't be necessary. I think that defining the foreign key as non-nullable would help to get the correct behavior:
<set name="authorities" table="authorities" cascade="all-delete-orphan">
<key column="user_id" not-null="true"/>
<one-to-many class="com.ebisent.domain.Authority" />
</set>
Not tested though.