Service: We have a service that receives a Request and simply saves it in the database. The hbm file for it is as follows.
<hibernate-mapping package="com.project.dao">
    <class name="Request" table="requests">
        <id name="requestId" type="long" column="req_id">
            <generator class="native" />
        </id>
        <discriminator column="req_type" type="string" force="true"/>
        <property name="status" type="string" column="status"/>
        <property name="processId" type="string" column="process_id"/>
        <subclass name="RequestType1" discriminator-value="Type1">
            ..
        </subclass>
        <subclass name="RequestType2" discriminator-value="Type2">
            ..
        </subclass>
    </class>
</hibernate-mapping>
Code which obtains session and saves Request is as follows.
try {
    Session session = HibernateUtil.getSessionFactory().getCurrentSession();
    Transaction tx = session.beginTransaction();
    requestDAO.save(request);
    tx.commit();
} catch (Exception e) {
    log.error(e);
}
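For what it's worth, the "null id ... don't flush the Session" assertion seen on the service side is typically raised when a failed transaction is neither rolled back nor its session discarded. A hedged sketch of the same save path with an explicit rollback (reusing the names from the snippet above; not the original code):

```
Session session = HibernateUtil.getSessionFactory().getCurrentSession();
Transaction tx = session.beginTransaction();
try {
    requestDAO.save(request);
    tx.commit();
} catch (Exception e) {
    // roll back so the session is not left holding a failed insert;
    // flushing that session again is what raises the "null id" assertion
    tx.rollback();
    log.error(e);
}
```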
Client: There are two hosts on the client side that read these received and unprocessed requests. Each client (Host1, Host2) does the following.
Update the process_id of unprocessed requests to its own hostname:
update requests set process_id = '<hostName>' where status = 'Received' and process_id is null order by req_id asc limit 100
Retrieve the requests updated above and process them:
select * from requests where process_id = '<hostName>' and status = 'Received'
Now the problem is that these clients work fine for some time, but then they start throwing the following exception.
org.hibernate.exception.GenericJDBCException: could not execute native bulk manipulation query
    at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:126)
Caused by: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
The clients start working normally again if we restart them.
On the service side, we also sometimes see the following exception.
org.hibernate.AssertionFailure: null id don't flush the Session after an exception occurs
We have been trying to resolve this issue but have not been able to figure out the root cause. Any insight into the probable cause would be helpful.
Thanks
Related
I am using MyBatis 3.4.6 along with org.xerial:sqlite-jdbc 3.28.0. Below is my configuration to use an in-memory database with shared-cache mode enabled.
db.driver=org.sqlite.JDBC
db.url=jdbc:sqlite:file::memory:?cache=shared
The db.url is correct according to this test class
And I managed to set up the correct transaction isolation level with the MyBatis configuration below, though there is a typo in the property read_uncommitted according to this issue, which was reported by me as well.
<environment id="${db.env}">
    <transactionManager type="jdbc"/>
    <dataSource type="POOLED">
        <property name="driver" value="${db.driver}" />
        <property name="url" value="${db.url}"/>
        <property name="username" value="${db.username}" />
        <property name="password" value="${db.password}" />
        <property name="defaultTransactionIsolationLevel" value="1" />
        <property name="driver.synchronous" value="OFF" />
        <property name="driver.transaction_mode" value="IMMEDIATE"/>
        <property name="driver.foreign_keys" value="ON"/>
    </dataSource>
</environment>
This line of configuration
<property name="defaultTransactionIsolationLevel" value="1" />
does the trick to set the correct value of PRAGMA read_uncommitted
I am pretty sure of this, since I debugged the underlying code that initializes the connection and checked that the value had been set correctly.
However, with the above settings, my program still intermittently encounters SQLITE_LOCKED_SHAREDCACHE while reading, which I think shouldn't happen according to the description in the SQLite shared-cache documentation. I want to know the reason and how to resolve it, even though the probability of this error occurring is low.
Any ideas would be appreciated!!
The debug configuration is below.
===CONFINGURATION==============================================
jdbcDriver org.sqlite.JDBC
jdbcUrl jdbc:sqlite:file::memory:?cache=shared
jdbcUsername
jdbcPassword ************
poolMaxActiveConnections 10
poolMaxIdleConnections 5
poolMaxCheckoutTime 20000
poolTimeToWait 20000
poolPingEnabled false
poolPingQuery NO PING QUERY SET
poolPingConnectionsNotUsedFor 0
---STATUS-----------------------------------------------------
activeConnections 5
idleConnections 5
requestCount 27
averageRequestTime 7941
averageCheckoutTime 4437
claimedOverdue 0
averageOverdueCheckoutTime 0
hadToWait 0
averageWaitTime 0
badConnectionCount 0
===============================================================
The exception is below
org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
### The error may exist in mapper/MsgRecordDO-sqlmap-mappering.xml
### The error may involve com.super.mock.platform.agent.dal.daointerface.MsgRecordDAO.getRecord
### The error occurred while executing a query
### Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
I finally resolved this issue myself and share the workaround below in case someone else encounters a similar issue in the future.
First of all, we were able to get the complete call stack of the exception.
Going through the source code indicated by the call stack, we found the following.
SQLite has auto-commit enabled by default, which contradicts MyBatis, which disables auto-commit by default since we're using SqlSessionManager.
MyBatis overrides the auto-commit property during connection initialization via the method setDesiredAutoCommit, which finally invokes SQLiteConnection#setAutoCommit.
SQLiteConnection#setAutoCommit issues a begin immediate against the database, which is actually exclusive, since we configured our transaction mode to be IMMEDIATE:
<property name="driver.transaction_mode" value="IMMEDIATE"/>
So an apparent solution is to change the transaction mode to DEFERRED. We also considered making the auto-commit setting the same between MyBatis and SQLite, but did not adopt that approach: there is no way to set auto-commit on SQLiteConnection during the initialization stage, so there will always be a switch (from true to false or vice versa), and that switch can cause the above error if the transaction mode is not set properly.
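Concretely, that is a one-line change against the data source configuration above (everything else stays as it was):

```xml
<dataSource type="POOLED">
    <property name="driver" value="${db.driver}" />
    <property name="url" value="${db.url}"/>
    <!-- DEFERRED postpones taking the write lock until it is actually needed,
         so setAutoCommit(false) no longer issues an exclusive BEGIN IMMEDIATE -->
    <property name="driver.transaction_mode" value="DEFERRED"/>
</dataSource>
```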
I use Java, Apache Cayenne and PostgreSQL.
My app works fine on the desktop, but I get an error when I run it on Heroku:
org.postgresql.util.PSQLException: Bad value for type timestamp/date/time:
{1}
There are also warnings:
INFO org.apache.cayenne.log.JdbcEventLogger - --- transaction started.
WARN org.apache.cayenne.access.types.SerializableTypeFactory - Haven't found suitable ExtendedType for class 'java.time.LocalDate'. Most likely you need to define a custom ExtendedType.
WARN org.apache.cayenne.access.types.SerializableTypeFactory - SerializableType will be used for type conversion.
INFO org.apache.cayenne.log.JdbcEventLogger - --- transaction started.
INFO org.apache.cayenne.log.JdbcEventLogger - SELECT t0.DATE, t0.ROOM, t0.TIME, t0.TYPE, t0.PROFESSOR_ID, t0.SUBJECT_ID, t0.LESSON_ID FROM Lesson t0 JOIN Subject t1 ON (t0.SUBJECT_ID = t1.SUBJECT_ID) WHERE (t0.DATE = ?) AND (t1.USER_ID = ?) [bind: 1->DATE:2017-12-08, 2->USER_ID:81627965]
Here is my xml:
<db-entity name="Lesson">
<db-attribute name="DATE" type="DATE"/>
<db-attribute name="LESSON_ID" type="INTEGER" isPrimaryKey="true" isMandatory="true"/>
<db-attribute name="PROFESSOR_ID" type="INTEGER"/>
<db-attribute name="ROOM" type="VARCHAR" length="50"/>
<db-attribute name="SUBJECT_ID" type="INTEGER"/>
<db-attribute name="TIME" type="TIME"/>
<db-attribute name="TYPE" type="INTEGER"/>
</db-entity>
<obj-entity name="Lesson" className="com.intetics.organizerbot.entities.Lesson" dbEntityName="Lesson">
<obj-attribute name="date" type="java.time.LocalDate" db-attribute-path="DATE"/>
<obj-attribute name="room" type="java.lang.String" db-attribute-path="ROOM"/>
<obj-attribute name="time" type="java.time.LocalTime" db-attribute-path="TIME"/>
<obj-attribute name="type" type="int" db-attribute-path="TYPE"/>
</obj-entity>
I work with the same Heroku Postgres database from desktop and Heroku.
It seems there is some problem connected with the LocalDate class, but I have no idea why everything works fine on my computer while there are problems on Heroku.
I also tried to deploy a jar that worked fine locally, and it still doesn't work.
Do you have any idea why this happens and how I can fix it?
A similar question was asked about a production server (Bad value for type timestamp on production server),
but it doesn't seem I can apply its answers on Heroku.
In order to use the Java 8 java.time.* classes with Cayenne, you need to make sure you include the cayenne-java8 module in your project; see these docs for details. Without it, Cayenne simply doesn't know how to handle those classes.
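If the project is built with Maven, that would look something like this; the version shown is illustrative and should match the cayenne-server version already in use:

```xml
<dependency>
    <groupId>org.apache.cayenne</groupId>
    <artifactId>cayenne-java8</artifactId>
    <!-- illustrative version; align with your cayenne-server artifact -->
    <version>4.0.2</version>
</dependency>
```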
I have this method:
void mymethod(long id) {
    Person p = DAO.findPerson(id);
    Car car = new Car();
    car.setPerson(p);
    p.getCars().add(car);
    DAO.saveOrUpdate(car);
    DAO.saveOrUpdate(p);
    DAO.delete(p.getCars().get(0)); // a person has many cars
}
Mapping :
Person.hbm.xml
<!-- one-to-many : [1,1]-> [0,n] -->
<set name="car" table="cars" lazy="true" inverse="true">
<key column="id_doc" />
<one-to-many class="Car" />
</set>
<many-to-one name="officialCar"
class="Car"
column="officialcar_id" lazy="false"/>
Cars.hbm.xml
<many-to-one name="person" class="Person"
column="id_person" not-null="true" lazy="false"/>
This method works well with a single thread, but with multiple threads it gives me an error:
02/08/2014 - 5:19:11 p.m. - [pool-1-thread-35] - WARN - org.hibernate.util.JDBCExceptionReporter - SQL Error: 60, SQLState: 61000
02/08/2014 - 5:19:11 p.m. - [pool-1-thread-35] - ERROR - org.hibernate.util.JDBCExceptionReporter - ORA-00060: deadlock detected while waiting for resource
02/08/2014 - 5:19:11 p.m. - [pool-1-thread-35] - WARN - org.hibernate.util.JDBCExceptionReporter - SQL Error: 60, SQLState: 61000
02/08/2014 - 5:19:11 p.m. - [pool-1-thread-35] - ERROR - org.hibernate.util.JDBCExceptionReporter - ORA-00060: deadlock detected while waiting for resource
02/08/2014 - 5:19:11 p.m. - [pool-1-thread-35] - ERROR - org.hibernate.event.def.AbstractFlushingEventListener - Could not synchronize database state with session
org.hibernate.exception.LockAcquisitionException: Could not execute JDBC batch update
AOP Transaction :
<tx:advice id="txAdviceNomService" transaction-manager="txManager">
<tx:attributes>
<tx:method name="*" propagation="REQUIRED" rollback-for="java.lang.Exception" />
<tx:method name="getAll*" read-only="true" propagation="SUPPORTS" />
<tx:method name="find*" read-only="true" propagation="SUPPORTS" />
</tx:attributes>
</tx:advice>
NB: When I add Thread.sleep(5000) after the update, it works. But this solution is not clean.
According to your mapping, the sequence of operations should look like this:
Person p = DAO.findPerson(id);
Car car = new Car();
car.setPerson(p);
DAO.saveOrUpdate(car);
p.getCars().add(car);
Car firstCar = p.getCars().get(0);
firstCar.setPerson(null);
p.getCars().remove(firstCar);
if (p.officialCar.equals(firstCar)) {
    p.officialCar.person = null; // clear the back-reference before dropping it
    p.officialCar = null;
}
DAO.delete(firstCar);
An update or a delete means acquiring an exclusive lock, even at the READ_COMMITTED isolation level.
If another transaction wants to update the same row that the currently running transaction has already locked, you won't get a deadlock, but a lock acquisition timeout exception.
Since you got a deadlock, it means you are acquiring locks on multiple tables and the lock acquisitions are not properly ordered.
So, make sure that the service layer methods set the transaction boundaries, not the DAO methods. I see you declared the get and find methods to use SUPPORTS, meaning they will use a transaction only if one is already started. I think you should use REQUIRED for those as well, and simply mark them as read-only="true".
So make sure the transaction aspect applies the transaction boundary on "mymethod" and not on the DAO methods.
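Applied to the advice from the question, that could look roughly like this (a sketch only; the advice id and transaction manager name are taken from the question):

```xml
<tx:advice id="txAdviceNomService" transaction-manager="txManager">
    <tx:attributes>
        <!-- read-only finders still run inside a transaction,
             but the read-only flag lets the provider optimize them -->
        <tx:method name="getAll*" read-only="true" propagation="REQUIRED" />
        <tx:method name="find*" read-only="true" propagation="REQUIRED" />
        <tx:method name="*" propagation="REQUIRED" rollback-for="java.lang.Exception" />
    </tx:attributes>
</tx:advice>
```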
I have Cars -> (1-n) Places.
And I have a foreign key in the place table (id_car). This foreign key didn't have an index.
When I added an index to this foreign key, my problem was resolved.
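As a sketch (the table and column names follow the description above; adjust them to your actual schema):

```sql
-- Un-indexed foreign keys are a classic cause of ORA-00060 on Oracle:
-- without the index, modifying parent rows takes broader locks on the
-- child table, which makes concurrent transactions collide.
CREATE INDEX idx_place_id_car ON place (id_car);
```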
Refer to This Answer
When I'm running my Hibernate project in Java Swing, it works at first, but after I wait for some time I receive an error like org.hibernate.TransactionException: rollback failed. Tell me a solution for this.
Here is my error
Aug 16, 2013 10:52:21 AM org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions
WARN: SQL Error: 0, SQLState: 08S01
Aug 16, 2013 10:52:21 AM org.hibernate.engine.jdbc.spi.SqlExceptionHelper logExceptions
ERROR: Communications link failure
The last packet successfully received from the server was 89,371 milliseconds ago.
The last packet sent successfully to the server was 1 milliseconds ago.
Exception in thread "AWT-EventQueue-0" org.hibernate.TransactionException: rollback failed
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.rollback(AbstractTransactionImpl.java:215)
at com.softroniics.queenpharma.services.PurchaseOrderService.showAllPurchase(PurchaseOrderService.java:131)
Here is my hibernate cfg file
<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
<session-factory>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="connection.url">jdbc:mysql://queenpharma.db.11583306.hostedresource.com/queenpharma</property>
<property name="connection.username">queenpharma</property>
<property name="connection.password">Queenpharma#1</property>
<property name="connection.pool_size">1</property>
<property name="hbm2ddl.auto">update</property>
<property name="show_sql">true</property>
<property name="connection.autocommit">false</property>
<property name="hibernate.c3p0.max_size">1</property>
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.timeout">5000</property>
<property name="hibernate.c3p0.max_statements">1000</property>
<property name="hibernate.c3p0.idle_test_period">300</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="current_session_context_class">thread</property>
<property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
<mapping class="com.softroniics.queenpharma.model.LoginModel" />
<mapping class="com.softroniics.queenpharma.model.PurchaseCompanyModel" />
---- and so on------
Here is some of my code:
session = sessionFactory.openSession();
StockModel stockModel = null;
try {
    tx = session.beginTransaction();
    Iterator<StockModel> iterator = session
            .createQuery("FROM StockModel where productid = :id")
            .setParameter("id", id)
            .list().iterator();
    if (iterator.hasNext()) {
        stockModel = iterator.next();
    }
} catch (HibernateException e) {
    if (tx != null)
        tx.rollback();
    e.printStackTrace();
} finally {
    session.close();
}
The error code you get (SQLState: 08S01) suggests that the host name you use for the database is incorrect, according to Mapping MySQL Error Numbers to JDBC SQLState Codes.
So first, please make sure that the database host queenpharma.db.11583306.hostedresource.com is spelled correctly.
If the error persists, please modify your exception handler to catch the exception caused by the rollback statement, as below, so you are able to understand what caused the rollback in the first place.
Note: you should do this only while troubleshooting this issue. You do not want to swallow any exceptions in a production environment.
} catch (HibernateException e) {
if (tx != null) {
try {
tx.rollback();
} catch(Exception re) {
System.err.println("Error when trying to rollback transaction:"); // use logging framework here
re.printStackTrace();
}
}
System.err.println("Original error when executing query:"); // use logging framework here
e.printStackTrace();
}
It seems like an issue with the MySQL connection timing out; there is presumably a default timeout on the MySQL side. This article might help you: Hibernate Broken pipe.
UPDATE
From the Hibernate documentation:
Hibernate's own connection pooling algorithm is, however, quite
rudimentary. It is intended to help you get started and is not
intended for use in a production system, or even for performance
testing. You should use a third party pool for best performance and
stability. Just replace the hibernate.connection.pool_size property
with connection pool specific settings. This will turn off Hibernate's
internal pool. For example, you might like to use c3p0.
So you do not need to specify the Hibernate connection pool size property when you are using c3p0 connection pooling.
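Applied to the cfg file above, that means dropping the built-in pool setting and keeping the c3p0 ones (a sketch of just the affected lines):

```xml
<!-- remove: the built-in pool is bypassed once c3p0 is configured -->
<!-- <property name="connection.pool_size">1</property> -->
<property name="hibernate.c3p0.min_size">0</property>
<property name="hibernate.c3p0.max_size">1</property>
<property name="hibernate.c3p0.timeout">5000</property>
```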
Remember to commit and close the session. It may be that you do not commit, and Hibernate tries a rollback after the connection has already timed out.
It would be helpful to see how you access the DB; please post some code.
I am using Spring JdbcTemplate to perform SQL operations on an Apache Commons DBCP data source (org.apache.commons.dbcp.BasicDataSource), and when the service has been up and running too long, I end up getting this exception:
org.springframework.dao.RecoverableDataAccessException: StatementCallback; SQL [SELECT * FROM vendor ORDER BY name]; The last packet successfully received from the server was 64,206,061 milliseconds ago. The last packet sent successfully to the server was 64,206,062 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 64,206,061 milliseconds ago. The last packet sent successfully to the server was 64,206,062 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at org.springframework.jdbc.support.SQLExceptionSubclassTranslator.doTranslate(SQLExceptionSubclassTranslator.java:98)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:406)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:455)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:463)
at com.cable.comcast.neto.nse.taac.dao.VendorDao.getAllVendor(VendorDao.java:25)
at com.cable.comcast.neto.nse.taac.controller.RemoteVendorAccessController.requestAccess(RemoteVendorAccessController.java:78)
I have tried adding 'autoReconnect=true' to the connection string, but this problem still occurs. Is there another data source that can manage the reconnecting for me?
BasicDataSource can manage keeping the connections alive for you. You need to set the following properties :
minEvictableIdleTimeMillis = 120000 // Two minutes
testOnBorrow = true
timeBetweenEvictionRunsMillis = 120000 // Two minutes
minIdle = (some acceptable number of idle connections for your server)
These will configure the data source to continually test your connections, and to expire and remove them if they become stale. There are a number of other properties on BasicDataSource that you may want to look into as well to tweak your connection pooling performance. I've run into some strange problems in the past where I was having issues with my database access, and it all came down to how the connection pool was configured.
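As a sketch in Spring XML, assuming the same placeholder property names used elsewhere in this thread (note that for DBCP, testOnBorrow needs a validationQuery to do anything useful):

```xml
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${db.driverClassName}"/>
    <property name="url" value="${db.url}"/>
    <property name="username" value="${db.username}"/>
    <property name="password" value="${db.password}"/>
    <!-- validate each connection on checkout -->
    <property name="testOnBorrow" value="true"/>
    <property name="validationQuery" value="SELECT 1"/>
    <!-- evict connections idle for two minutes, checking every two minutes -->
    <property name="timeBetweenEvictionRunsMillis" value="120000"/>
    <property name="minEvictableIdleTimeMillis" value="120000"/>
    <property name="minIdle" value="2"/>
</bean>
```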
You can try C3P0:
http://sourceforge.net/projects/c3p0/
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
<property name="user" value="${db.username}"/>
<property name="password" value="${db.password}"/>
<property name="driverClass" value="${db.driverClassName}"/>
<property name="jdbcUrl" value="${db.url}"/>
<property name="initialPoolSize" value="0"/>
<property name="maxPoolSize" value="1"/>
<property name="minPoolSize" value="1"/>
<property name="acquireIncrement" value="1"/>
<property name="acquireRetryAttempts" value="0"/>
<property name="idleConnectionTestPeriod" value="600"/> <!--in seconds-->
</bean>