OpenJPA lock does not work - Java

In a project with OpenJPA 2.1 and an Oracle database I use the mixed lock mode, which defaults to optimistic locking but can use pessimistic locking as well. But it does not work as I expected.
So I wrote a small test for OpenJPA locking.
persistence.xml config:
<properties>
<property name="openjpa.Optimistic" value="false" />
<property name="openjpa.LockManager" value="mixed" />
<!-- <property name="openjpa.LockManager" value="mixed(VersionCheckOnReadLock=true,VersionUpdateOnWriteLock=true)" /> -->
<property name="openjpa.RuntimeUnenhancedClasses" value="unsupported" />
<property name="openjpa.slice.ConnectionDriverName" value="oracle.jdbc.OracleDriver" />
<property name="openjpa.slice.ConnectionURL" value="jdbc:oracle:thin:******" />
<property name="openjpa.slice.ConnectionUserName" value="user" />
<property name="openjpa.slice.ConnectionPassword" value="userpassword" />
<property name="openjpa.jdbc.SynchronizeMappings" value="validate" />
<property name="openjpa.jdbc.SchemaFactory" value="native(ForeignKeys=true)" />
<property name="openjpa.Log"
value="DefaultLevel=INFO, Runtime=INFO, Tool=INFO, SQL=TRACE" />
</properties>
My simple test code:
Configuration config = entityManager.find(Configuration.class, 10025984L);
entityManager.lock(config, LockModeType.PESSIMISTIC_WRITE);
In the SQL trace I see that after the lock method is called OpenJPA does not execute any additional query against the database. There is only a single SQL query:
TRACE [Thread-7] openjpa.jdbc.SQL - <t 874198043, conn 1773300146> executing prepstmnt 1809148885 SELECT t0.VERSION FROM Configuration t0 WHERE t0.ID = ? [params=?]
What am I doing wrong, and how can I obtain a lock on a single entity instance?

I found the solution. The reason was in the JPA settings. I just removed the property:
<property name="openjpa.Optimistic" value="false" />
And everything started to work fine.
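For completeness, here is a minimal sketch (not from the original post; it reuses the entity class and id from the test above and assumes a resource-local EntityManagerFactory) of requesting the pessimistic lock at find() time inside an active transaction, which should make the provider issue a SELECT ... FOR UPDATE:
EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();
try {
    // Requesting the lock mode at find() time; the provider should acquire
    // a database-level row lock (SELECT ... FOR UPDATE) here.
    Configuration config = em.find(Configuration.class, 10025984L,
            LockModeType.PESSIMISTIC_WRITE);
    // ... work with the locked row ...
    em.getTransaction().commit();
} finally {
    if (em.getTransaction().isActive()) {
        em.getTransaction().rollback();
    }
    em.close();
}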

Related

Issue setting FlushMode.COMMIT in Quarkus Hibernate ORM

Issue I have using Quarkus+Hibernate:
For performance purposes we need to set the FlushMode to COMMIT in our Hibernate Session.
We realized that this property is not available among the application.properties parameters, see: https://quarkus.io/guides/hibernate-orm#hibernate-configuration-properties
So we took the path of setting up the configuration with a persistence.xml file: https://quarkus.io/guides/hibernate-orm#persistence-xml
...
<persistence-unit name="SomethingPU" transaction-type="JTA">
<description>Something Entities</description>
<properties>
<!-- Connection specific -->
<property name="hibernate.dialect" value="org.hibernate.dialect.SQLServer2012Dialect" />
<!-- cache properties -->
<property name="hibernate.cache.use_second_level_cache" value="true"/>
<property name="hibernate.cache.use_minimal_puts" value="true"/>
<property name="hibernate.cache.use_query_cache" value="true"/>
<!-- multitenancy -->
<property name="hibernate.multiTenancy" value="DATABASE" />
<!-- scan for annotated classes -->
<property name="hibernate.archive.autodetection" value="class"/>
<!-- performance tunning -->
<property name="org.hibernate.flushMode" value="COMMIT" />
<property name="hibernate.jdbc.batch_size" value="100" />
<property name="hibernate.jdbc.fetch_size" value="400" />
<property name="hibernate.order_updates" value="true" />
<property name="hibernate.order_inserts" value="true" />
<property name="hibernate.max_fetch_depth" value="1" />
</properties>
</persistence-unit>
When we start our service everything is fine and the properties are loaded into the Session, except for the "org.hibernate.flushMode" parameter.
Debugging the code we see this behaviour:
when the service starts, Quarkus executes the Recorder: io.quarkus.hibernate.orm.runtime.HibernateOrmRecorder
this class initializes the org.hibernate.Session using the class: io.quarkus.hibernate.orm.runtime.TransactionSessions
TransactionSessions maintains a Map of io.quarkus.hibernate.orm.runtime.session.TransactionScopedSession
so, when TransactionScopedSession acquires the Session, it executes:
TransactionScopedSession.acquireSession
line 88:
Session newSession = jtaSessionOpener.openSession();
which ends calling JTASessionOpener.createOptions method:
return sessionFactory.withOptions()
.autoClose(true) // .owner() is deprecated as well, so it looks like we need to rely on deprecated code...
.connectionHandlingMode(
PhysicalConnectionHandlingMode.DELAYED_ACQUISITION_AND_RELEASE_BEFORE_TRANSACTION_COMPLETION)
.flushMode(FlushMode.ALWAYS);
Here JTASessionOpener sets flushMode to ALWAYS,
calling the method SessionFactoryImpl.flushMode.
When org.hibernate.internal.SessionImpl is created
org.hibernate.internal.SessionImpl.SessionImpl(SessionFactoryImpl, SessionCreationOptions)
line 266:
if ( getHibernateFlushMode() == null ) {
final FlushMode initialMode;
if ( this.properties == null ) {
initialMode = fastSessionServices.initialSessionFlushMode;
}
...
The method getHibernateFlushMode() is returning FlushMode.ALWAYS
And because of this, the "org.hibernate.flushMode" parameter from persistence.xml is never applied.
We fixed this issue by setting the flushMode directly on the javax.persistence.EntityManager instance, before using it in any query or DB operation.
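A minimal sketch of that workaround, assuming a CDI-injected EntityManager (the injection point and method name are illustrative, not the original code):
@Inject
EntityManager entityManager;

void prepareEntityManager() {
    // Defer flushing until commit instead of the ALWAYS mode set by JTASessionOpener.
    entityManager.setFlushMode(FlushModeType.COMMIT);
    // Alternatively, go through the underlying Hibernate Session:
    // entityManager.unwrap(org.hibernate.Session.class)
    //              .setHibernateFlushMode(org.hibernate.FlushMode.COMMIT);
}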
But my questions are:
Is there a way to bypass JTASessionOpener or use different logic? Can this be considered a bug or an issue to be solved?
Is there a better way (than our solution) to fix this issue?
Is there a plan to add the property "org.hibernate.flushMode" to the Quarkus Hibernate ORM extension, so we can set it in application.properties?
--
Hope the description is clear.

C3p0 datasource with PostgreSQL

I'm experiencing a weird behaviour with our Java web application, configured to access a PostgreSQL database through C3p0 ComboPooledDataSource.
First of all, the PostgreSQL server installation has default parameters; we didn't need to change any setting in it.
We run our queries using Spring JdbcTemplate, version 3.2.12.RELEASE. As with PostgreSQL, it is configured with default parameters.
Here is our context configuration with C3p0 and Spring:
<bean id="resoilDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
destroy-method="close">
<!-- access configuration -->
<property name="driverClass" value="org.postgresql.Driver" />
<property name="jdbcUrl" value="jdbc:postgresql://localhost:5432/*****" />
<property name="user" value="******" />
<property name="password" value="******" />
<!-- pool sizing -->
<property name="initialPoolSize" value="1" />
<property name="minPoolSize" value="1" />
<property name="maxPoolSize" value="6" />
<property name="acquireIncrement" value="3" />
<property name="maxStatements" value="150" />
<!-- refreshing connections -->
<property name="maxIdleTime" value="180" /> <!-- 3min -->
<property name="maxIdleTimeExcessConnections" value="120" /> <!-- 3min -->
<!-- timeouts e testing -->
<property name="idleConnectionTestPeriod" value="120" /> <!-- 60 -->
<property name="testConnectionOnCheckout" value="true" />
<property name="testConnectionOnCheckin" value="false" />
<property name="preferredTestQuery" value="SELECT 1" />
</bean>
<bean id="template" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="resoilDataSource"></property>
</bean>
Let me describe the problem we are encountering:
once we reach "maxPoolSize" idle connections open on the Postgres side, these connections never expire; they remain in the "idle" state and there is no way C3p0 will reclaim any of them for pooling.
I would expect that once one of these open connections exceeds its idle time, C3p0 would be able to reuse it thanks to the "maxIdleTimeExcessConnections" parameter.
Unfortunately, this never happens.
I also tried to substitute C3p0 ComboPooledDataSource with Apache DBCP BasicDataSource, but nothing changed.
Before using a PostgreSQL database for our application, our customers asked us to install the application against other popular databases instead of PostgreSQL (SQL Server and Oracle in particular), and we never experienced this behaviour.
Any ideas about what's going on are truly appreciated; thanks in advance.
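As a diagnostic sketch (not part of the original setup; fetching the bean from the Spring context and the variable names are assumptions), c3p0's own counters can be logged and compared with what pg_stat_activity shows, to see whether the pool still considers those idle connections its own:
// Call from a method that declares SQLException, since the c3p0 getters throw it.
ComboPooledDataSource ds =
        applicationContext.getBean("resoilDataSource", ComboPooledDataSource.class);
System.out.println("total=" + ds.getNumConnectionsDefaultUser()
        + ", busy=" + ds.getNumBusyConnectionsDefaultUser()
        + ", idle=" + ds.getNumIdleConnectionsDefaultUser());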

Configure max pool size with HikariCP and Hibernate JPA

I'm trying to use Hibernate JPA and HikariCP for the connection pool.
But I have an issue that I don't understand, so either my config is bad ... or something else is going on.
This is the config I have in my persistence.xml file:
<properties>
<!-- SQL -->
<property name="hibernate.dialect" value="org.hibernate.spatial.dialect.mysql.MySQLSpatialDialect" />
<property name="hibernate.show_sql" value="false" />
<property name="hibernate.format_sql" value="false" />
<!-- HikariCP -->
<property name="hibernate.connection.provider_class" value="com.zaxxer.hikari.hibernate.HikariConnectionProvider"/>
<property name="hibernate.hikari.driverClassName" value="com.mysql.jdbc.Driver" />
<property name="hibernate.hikari.minimumIdle" value="5"/>
<property name="hibernate.hikari.maximumPoolSize" value="30"/>
<property name="hibernate.hikari.maxLifetime" value="150000"/>
<property name="hibernate.hikari.dataSource.user" value="user" />
<property name="hibernate.hikari.dataSource.password" value="password" />
<property name="hibernate.hikari.jdbcUrl"
value="jdbc:mysql://server:3306" />
</properties>
Still, I'm seeing 100+ connections on the database. I thought that using maximumPoolSize would have limited my number of connections. Is my configuration OK? Based on my research it seems OK to me, but before trying to debug elsewhere I want to make sure it is.
Thanks
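For what it's worth, HikariCP's maximumPoolSize can be checked in isolation with a standalone pool, outside the Hibernate wiring (a sketch with placeholder URL and credentials):
HikariConfig cfg = new HikariConfig();
cfg.setJdbcUrl("jdbc:mysql://server:3306/mydb"); // placeholder
cfg.setUsername("user");
cfg.setPassword("password");
cfg.setMaximumPoolSize(30); // hard upper bound on connections from this pool
// getConnection() throws SQLException, so declare it on the enclosing method.
try (HikariDataSource ds = new HikariDataSource(cfg);
     Connection c = ds.getConnection()) {
    // If the database still shows far more than 30 connections, they are coming
    // from something other than this pool (another deployment, another provider).
}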
You must be missing the hibernate-hikari module jar on your classpath. This module is necessary to integrate Hibernate with HikariCP.
Here is the official documentation

Method org.postgresql.jdbc3.Jdbc3PreparedStatement.setQueryTimeout(int) is not yet implemented.;

I am new to Spring Batch. With a Spring Batch job I am inserting data into a Postgres DB and I am getting this error. How do I fix it?
Method org.postgresql.jdbc3.Jdbc3PreparedStatement.setQueryTimeout(int) is not yet implemented.; nested exception is java.sql.SQLException: Method org.postgresql.jdbc3.Jdbc3PreparedStatement.setQueryTimeout(int) is not yet implemented.'
This is my datasource code.
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
destroy-method="close">
<!-- DB connection properties -->
<property name="driverClass" value="${db.driver:oracle.jdbc.OracleDriver}" />
<property name="jdbcUrl" value="${db.url}" />
<property name="user" value="${db.user:}" />
<property name="password" value="${db.password:}" />
<!-- Pool sizing properties -->
<property name="initialPoolSize" value="${db.pool.initialSize:5}" />
<property name="maxPoolSize" value="${db.pool.maxSize:25}" />
<property name="minPoolSize" value="${db.pool.minSize:0}" />
<property name="maxStatements" value="${db.pool.maxStatements:10}" />
<!-- Connection testing and acquisition properties -->
<property name="maxIdleTime" value="${db.con.maxIdleTime:300}" />
<property name="idleConnectionTestPeriod" value="${db.con.testPeriod:30}" />
<property name="preferredTestQuery" value="${db.con.testQuery:select 1 from dual}" />
<property name="acquireIncrement" value="${db.con.acquireIncrement:5}" />
<property name="acquireRetryAttempts" value="${db.con.retryAttempts:0}" />
<property name="acquireRetryDelay" value="${db.con.retryDelay:3000}" />
<!-- JMX name -->
<property name="dataSourceName" value="Datasource" />
<!-- Debugging options -->
<property name="unreturnedConnectionTimeout" value="${db.con.unreturnedTimeout:0}" />
<property name="debugUnreturnedConnectionStackTraces" value="${db.con.debugUnreturned:false}" />
</bean>
The data source looks OK. @duffymo, the Oracle driver is the default, but it would be overridden by the value of the 'db.driver' property if 'db.driver' is specified.
The setQueryTimeout error is thrown by some versions of the PostgreSQL driver because they have not, in fact, implemented setQueryTimeout, so they don't want users thinking the call actually has any effect.
What version of the PostgreSQL driver are you using? Can you share some details of the Spring Batch job? I'm not sure how to prevent Spring from setting the timeout on a PreparedStatement. At a guess, you could set db.con.unreturnedTimeout to 0; I'm thinking that value may be passed to setQueryTimeout, but I'm not sure.
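For illustration, the failure can be reproduced directly against the driver (a sketch with placeholder connection details; on the legacy org.postgresql.jdbc3 classes the call throws, while newer pgjdbc versions do implement it):
// The enclosing method would need to declare SQLException.
try (Connection con = DriverManager.getConnection(
             "jdbc:postgresql://localhost:5432/mydb", "user", "password"); // placeholders
     PreparedStatement ps = con.prepareStatement("select 1")) {
    ps.setQueryTimeout(30); // this is the call that fails with the quoted SQLException
}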

How to fine-tune a JPA/JAX-RS application

I've developed a JAX-RS JSON API in Wildfly/RESTeasy with a JPA/Hibernate backend and I have serious database access problems.
For example, the application suddenly stops responding and the logs show a bunch of:
ERROR [org.hibernate.engine.jdbc.spi.SqlExceptionHelper] (EJB default - 4) IJ031012: Unable to obtain lock in 60 seconds: org.jboss.jca.adapters.jdbc.local.LocalManagedConnection
INFO [org.jboss.jca.core.connectionmanager.listener.TxConnectionListener] IJ000302: Unregistered handle that was not registered: org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7
WARN [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] IJ000609: Attempt to return connection twice: org.jboss.jca.core.connectionmanager.listener.TxConnectionListener
WARN [org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory] (default task-9) IJ030020: Detected queued threads during cleanup
ERROR [org.hibernate.engine.jdbc.spi.SqlExceptionHelper] (ForkJoinPool.commonPool-worker-0) IJ031041: Connection handle has been closed and is unusable
ERROR [org.hibernate.engine.jdbc.spi.SqlExceptionHelper] (ForkJoinPool.commonPool-worker-1) IJ031050: The result set is closed
I think this is due to concurrent access by multiple users (say 15 or 20 users simultaneously), since with one or two users this does not happen.
I'm on Hibernate 5.1, WildFly 10, and SQL Server 2014. It is a vanilla installation, with no tweaks or custom configurations. How can I fine-tune the infrastructure to avoid these issues?
The problem was related to the REST services being @Stateless and therefore opening transactions on each request. The solution was to inject @Stateless DAOs into the REST services: no more locking issues.
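A minimal sketch of that arrangement (class, method, and entity names are invented for illustration; the resource itself is a plain JAX-RS class while the DAO carries the @Stateless transaction boundary):
@Path("/items")
public class ItemResource {

    @EJB // or @Inject, depending on the setup
    private ItemDao itemDao; // the @Stateless DAO owns the transaction

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<Item> list() {
        return itemDao.findAll();
    }
}

// In a separate source file:
@Stateless
public class ItemDao {

    @PersistenceContext
    private EntityManager em;

    public List<Item> findAll() {
        return em.createQuery("select i from Item i", Item.class).getResultList();
    }
}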
Use a HikariCP connection pool.
Based on your traffic, configure the number of connections in the connection pool as well as in the database.
In the connection pool there is an option to keep a certain number of connections available at all times. Also set the connection release mode so connections are released only after the transaction commits.
Below is a sample of it:
<persistence-unit name="sample" transaction-type="RESOURCE_LOCAL">
<provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
<exclude-unlisted-classes>false</exclude-unlisted-classes>
<properties>
<!-- provider -->
<property name="hibernate.connection.provider_class" value="com.xo.web.persistence.XOHikariCPConnectionProvider"/>
<!-- Hibernate properties -->
<property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5Dialect"/>
<property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>
<property name="hibernate.archive.autodetection" value="hbm" />
<property name="hibernate.hbm2ddl.auto" value="update"/>
<property name="show_sql" value="false"/>
<property name="hibernate.connection.release_mode" value="after_transaction"/>
<property name="hibernate.connection.autocommit" value="false"/>
<property name="hibernate.connection.isolation" value="2"/>
<property name="hibernate.ejb.interceptor" value="com.xo.web.persistence.intercept.XoEntityInterceptor"/>
<property name="hibernate.jdbc.batch_size" value="100"/>
<property name="hibernate.order_inserts" value="true"/>
<property name="hibernate.order_updates" value="true"/>
<!-- Hikari settings -->
<property name="maximumPoolSize" value="80" />
<property name="autoCommit" value="false" />
<property name="minimumPoolSize" value="20" />
<property name="idleTimeout" value="60000" />
<property name="maxLifetime" value="600000" />
<property name="connectionInitSql" value="select 1" />
<property name="connectionTimeout" value="1000" />
<property name="validationTimeout" value="1000" />
<property name="cachePrepStmts" value="true" />
<property name="prepStmtCacheSize" value="250" />
<property name="prepStmtCacheSqlLimit" value="2048" />
</properties>
</persistence-unit>
