I'm using the following settings in persistence.xml (EclipseLink 2.6.4) in my web application:
<properties>
<property name="eclipselink.jdbc.cache-statements" value="true" />
<property name="eclipselink.cache.query-results" value="true" />
<property name="eclipselink.ddl-generation.index-foreign-keys" value="true" />
<property name="eclipselink.logging.level" value="OFF" />
<property name="eclipselink.persistence-context.close-on-commit" value="true" />
<property name="eclipselink.persistence-context.flush-mode" value="commit" />
<property name="eclipselink.persistence-context.persist-on-commit" value="false" />
</properties>
I'm using the @Cacheable(false) annotation to prevent some entities from being cached.
The @Cache annotation no longer works in version 2.6.4.
My question is: is there a way to clear the cache globally, say every 3 hours?
Thanks
The first-level cache is enabled by default and cannot be disabled, i.e. no setting in your persistence.xml file will disable it.
You can only clear out all the entity manager's managed objects by calling
entityManager.clear()
This makes subsequent queries go to the database (the first time); the objects are then stored in the cache again.
In your case you would need to keep your EntityManager instances in a registry somewhere and loop through it every 3 hours, calling clear() on each one.
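A minimal sketch of that idea (the registry class, its names, and the single-threaded scheduler are my assumptions, not part of any EclipseLink API):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.persistence.EntityManager;

public class EntityManagerRegistry {

    private final Set<EntityManager> managers = ConcurrentHashMap.newKeySet();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public EntityManagerRegistry() {
        // Clear every registered EntityManager every 3 hours
        scheduler.scheduleAtFixedRate(this::clearAll, 3, 3, TimeUnit.HOURS);
    }

    public void register(EntityManager em) {
        managers.add(em);
    }

    public void unregister(EntityManager em) {
        managers.remove(em);
    }

    private void clearAll() {
        for (EntityManager em : managers) {
            if (em.isOpen()) {
                em.clear(); // detaches all managed entities; next reads hit the database
            }
        }
    }
}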
You can force each query to go to the database directly by calling
query.setHint("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH);
An issue I have using Quarkus + Hibernate:
For performance reasons we need to set the FlushMode to COMMIT on our Hibernate Session.
We realized that this configuration is not available among the application.properties parameters, see: https://quarkus.io/guides/hibernate-orm#hibernate-configuration-properties
So we took the path of setting up the configuration with a persistence.xml file: https://quarkus.io/guides/hibernate-orm#persistence-xml
...
<persistence-unit name="SomethingPU" transaction-type="JTA">
<description>Something Entities</description>
<properties>
<!-- Connection specific -->
<property name="hibernate.dialect" value="org.hibernate.dialect.SQLServer2012Dialect" />
<!-- cache properties -->
<property name="hibernate.cache.use_second_level_cache" value="true"/>
<property name="hibernate.cache.use_minimal_puts" value="true"/>
<property name="hibernate.cache.use_query_cache" value="true"/>
<!-- multitenancy -->
<property name="hibernate.multiTenancy" value="DATABASE" />
<!-- scan for annotated classes -->
<property name="hibernate.archive.autodetection" value="class"/>
<!-- performance tuning -->
<property name="org.hibernate.flushMode" value="COMMIT" />
<property name="hibernate.jdbc.batch_size" value="100" />
<property name="hibernate.jdbc.fetch_size" value="400" />
<property name="hibernate.order_updates" value="true" />
<property name="hibernate.order_inserts" value="true" />
<property name="hibernate.max_fetch_depth" value="1" />
</properties>
</persistence-unit>
When we start our service everything is fine and the properties are loaded into the Session, except for the "org.hibernate.flushMode" parameter.
Debugging the code we see this behaviour:
when the service starts, Quarkus executes the recorder io.quarkus.hibernate.orm.runtime.HibernateOrmRecorder
this class initializes the org.hibernate.Session using io.quarkus.hibernate.orm.runtime.TransactionSessions
TransactionSessions maintains a Map of io.quarkus.hibernate.orm.runtime.session.TransactionScopedSession
so, when TransactionScopedSession acquires the Session, it executes:
TransactionScopedSession.acquireSession
line 88:
Session newSession = jtaSessionOpener.openSession();
which ends up calling the JTASessionOpener.createOptions method:
return sessionFactory.withOptions()
.autoClose(true) // .owner() is deprecated as well, so it looks like we need to rely on deprecated code...
.connectionHandlingMode(
PhysicalConnectionHandlingMode.DELAYED_ACQUISITION_AND_RELEASE_BEFORE_TRANSACTION_COMPLETION)
.flushMode(FlushMode.ALWAYS);
Here JTASessionOpener sets the flush mode to ALWAYS by calling the method SessionFactoryImpl.flushMode.
When org.hibernate.internal.SessionImpl is created
org.hibernate.internal.SessionImpl.SessionImpl(SessionFactoryImpl, SessionCreationOptions)
line 266:
if ( getHibernateFlushMode() == null ) {
final FlushMode initialMode;
if ( this.properties == null ) {
initialMode = fastSessionServices.initialSessionFlushMode;
}
...
The method getHibernateFlushMode() returns FlushMode.ALWAYS.
Because of this, the "org.hibernate.flushMode" parameter from persistence.xml is never applied.
We worked around this by setting the flush mode directly on the javax.persistence.EntityManager instance, before using it for any query or DB operation.
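For reference, a minimal sketch of that workaround (the injected EntityManager and the repository class below are illustrative, not the original code):

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.persistence.FlushModeType;

@ApplicationScoped
public class SomethingRepository {

    @Inject
    EntityManager em;

    public Something findById(Long id) {
        // Override the ALWAYS flush mode applied by JTASessionOpener before any query/DB operation
        em.setFlushMode(FlushModeType.COMMIT);
        return em.find(Something.class, id);
    }
}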
But my questions are:
Is there a way to bypass JTASessionOpener or use different logic? Can this be considered a bug or an issue to be fixed?
Is there a better way (than our workaround) to fix this issue?
Is there a plan to add the "org.hibernate.flushMode" property to the Quarkus Hibernate ORM extension, so we can set it in application.properties?
--
Hope the description is clear.
I have an issue with my Spring MVC JDBC call. If I make the call soon after starting the server, the JDBC connection is made within a second and the data is retrieved. Similarly, if the other DAOs are called in quick succession with one another, the connections are made quickly. But if I try to call a DAO after a gap of even a few minutes, the JDBC connection takes forever to be established. It gets stuck on
"DataSourceUtils:110 - Fetching JDBC Connection from DataSource"
I have never had the patience to really check how long it takes to retrieve the connection but I've waited for 10 minutes and there was no sign of the connection being made.
Next, I try to at least restart the server, but JDBC even blocks the server from stopping! The console is stuck on this line:
"DisposableBeanAdapter:327 - Invoking destroy method 'close' on bean with name 'dataSource'"
Eventually I restart Eclipse and it works alright until there is a time gap again.
This is my bean definition for the data source:
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="url" />
<property name="username" value="abc" />
<property name="password" value="abc" />
<property name="validationQuery" value="SELECT 1" />
<property name="testWhileIdle" value="true" />
<property name="maxActive" value="100" />
<property name="minIdle" value="10" />
<property name="initialSize" value="10" />
<property name="maxIdle" value="20" />
<property name="maxWait" value="1000" />
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource" />
</bean>
<bean id="getDataDao" class="com.project.dao.GetDataDao">
<constructor-arg index="0" ref="jdbcTemplate" />
<constructor-arg index="1" value="STORED_PROC_NAME"></constructor-arg>
</bean>
In my DAO file, I extend Spring's StoredProcedure class and this is the constructor:
public GetDataDao(JdbcTemplate jdbcTemplate, String spName) {
super(jdbcTemplate, spName);
declareParameter(new SqlParameter("p_input", Types.VARCHAR));
declareParameter(new SqlOutParameter("o_result", Types.VARCHAR));
compile();
}
In another function, this is how I call the SP:
spOutput = super.execute(spInput);
where spOutput and spInput are HashMaps.
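For completeness, a sketch of that call site (the input value is made up; only the parameter names p_input/o_result come from the DAO above):

import java.util.HashMap;
import java.util.Map;

Map<String, Object> spInput = new HashMap<>();
spInput.put("p_input", "someValue");

// StoredProcedure.execute(Map) returns the declared out parameters keyed by name
Map<String, Object> spOutput = getDataDao.execute(spInput);
String result = (String) spOutput.get("o_result");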
Am I doing something wrong in my configuration? TIA.
I had the exact same issue. I found that it was consistent with one particular query, checked that query, and found the problem was in the query itself: running it separately was also slow. The query converted a column to lowercase, and that column was not indexed; it was something like lower(trim(column_name)). After removing the lower and trim it worked perfectly.
The additional code helps, but I do not see anything in it that would cause the issue you are seeing. The most likely explanation is that connections are being pulled out of the pool but not returned, and the pool eventually becomes starved. The DBCP pool then blocks your shutdown later because those connections are still open, and probably hung.
To verify, you might try setting maxActive and similar settings to something much lower, perhaps even "1", and then verify that you get the same issue immediately.
Have you verified that your stored procedure is returning? i.e. you actually get spOutput for every call, and the stored procedure itself is not hanging, consistently or randomly?
If so, my only other suggestion is to post more code, especially the call stack leading into GetDataDao, including whatever method in the DAO makes the sp.execute call. An assumption is that you are not using transactions; if you are, then showing where you start/commit the transaction in code would also be very important.
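As a rough way to confirm pool starvation (assuming you can get a reference to the BasicDataSource bean; the logging below is illustrative only), you could print the pool counters around each DAO call and watch whether the active count keeps climbing:

import org.apache.commons.dbcp.BasicDataSource;

// Hypothetical diagnostic helper: if "active" grows and never drops back,
// connections are being borrowed but not returned to the pool.
public void logPoolState(BasicDataSource ds, String when) {
    System.out.println(when + " - active: " + ds.getNumActive()
            + ", idle: " + ds.getNumIdle());
}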
As far as I have tried, I had to manually change the CRON_TRIGGERS table in the DB. Dirty...
Is there any way to make it work more like this?
There are 2 apps running, both have the schedule defined in their .properties file as "every minute", and the job runs accordingly.
I stop one instance and reconfigure it (change the .properties file) so the schedule is "every hour".
I start the instance. Now I would like that instance to check that such a job is already defined in the DB and to update the schedule there. That is not happening now using the configuration from http://www.objectpartners.com/2013/07/09/configuring-quartz-2-with-spring-in-clustered-mode/
Or what is the typical solution?
So I guess that when you say .properties file, you actually mean the Spring bean XML file(s).
It does not make sense to statically configure identical jobs with different schedules. If, for whatever reason, one instance restarts, it will automatically apply its own schedule. If statically configured, your job triggers should be the same on all instances.
If you properly set <property name="overwriteExistingJobs" value="true"/> in your SchedulerFactoryBean, it should automatically update the schedule of the job.
You should never modify the database manually. Always update the scheduler through its API.
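For example, rescheduling an existing trigger through the Quartz API instead of editing the tables (a hedged sketch; the trigger name, group, and cron expression are made up):

import org.quartz.CronScheduleBuilder;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerKey;

// Hypothetical identifiers; adapt to your own job/trigger names.
public void updateSchedule(Scheduler scheduler) throws SchedulerException {
    TriggerKey key = TriggerKey.triggerKey("yourJobTrigger", "DEFAULT");
    Trigger oldTrigger = scheduler.getTrigger(key);

    Trigger newTrigger = oldTrigger.getTriggerBuilder()
            .withSchedule(CronScheduleBuilder.cronSchedule("0 0 * * * ?")) // every hour
            .build();

    // Replaces the trigger in the JDBC job store; clustered nodes pick up the change.
    scheduler.rescheduleJob(key, newTrigger);
}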
Try something like this:
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="jobDetails">
<list>
<ref bean="yourJobDetail" />
</list>
</property>
<property name="triggers">
<list>
<ref bean="yourJobTrigger" />
</list>
</property>
<property name="configLocation" value="file:${HOME}/yourProperties.properties" />
<!-- Commented out, because it doesn't work with autocommit = false on the Spring data source -->
<!-- <property name="dataSource" ref="mainDataSource"/> -->
<property name="transactionManager" ref="mainTransactionManager" />
<property name="autoStartup" value="true" />
<property name="applicationContextSchedulerContextKey" value="applicationContext" />
<property name="jobFactory">
<bean class="FactoryForJobWithInjectionOfSpringBbean" />
</property>
<!-- Will update database cron triggers to what is in this jobs file on each deploy. Replaces all previous trigger and job data that
was in the database. YMMV -->
<!-- doesn't work properly with a cluster -->
<!-- <property name="overwriteExistingJobs" value="true" /> -->
</bean>
Unfortunately I think that:
<property name="overwriteExistingJobs" value="true" />
doesn't work correctly in cluster mode.
I am working on warmup requests to minimize my request latency:
https://developers.google.com/appengine/docs/java/config/appconfig#Warmup_Requests
During that initialization I perform:
PersistenceManager pm = PMF.get().getPersistenceManager();
.. but from the logs I see that it doesn't parse all the .jdo files where the class metadata is stored.
They are parsed only the first time I call a method such as "getObjectById" (for example).
Is it possible to force DataNucleus to fully read all the metadata so it is completely ready when the first getObjectById hits the PersistenceManager?
Thank you,
Michele
==============================================================================
UPDATE:
I tried with this persistence.xml file:
<persistence-unit name="my-transaction">
<mapping-file><path-to-first-jdo-file></mapping-file>
<mapping-file><path-to-second-jdo-file></mapping-file>
<mapping-file><path-to-third-jdo-file></mapping-file>
<properties>
<property name="datanucleus.NontransactionalRead" value="true"/>
<property name="datanucleus.NontransactionalWrite" value="true"/>
<property name="datanucleus.ConnectionURL" value="appengine"/>
</properties>
</persistence-unit>
that is associated with jdoconfig.xml:
<persistence-manager-factory name="my-transaction">
<property name="javax.jdo.PersistenceManagerFactoryClass" value="org.datanucleus.store.appengine.jdo.DatastoreJDOPersistenceManagerFactory" />
<property name="javax.jdo.option.ConnectionURL" value="appengine" />
<property name="javax.jdo.option.NontransactionalRead" value="true" />
<property name="javax.jdo.option.NontransactionalWrite" value="true" />
<property name="javax.jdo.option.RetainValues" value="true" />
<property name="datanucleus.appengine.autoCreateDatastoreTxns" value="true" />
<property name="datanucleus.appengine.allowMultipleRelationsOfSameType" value="true" />
<property name="datanucleus.appengine.datastoreReadConsistency" value="STRONG" />
<property name="datanucleus.appengine.ignorableMetaDataBehavior" value="ERROR" />
<property name="javax.jdo.option.Multithreaded" value="true"/>
<property name="javax.jdo.option.Optimistic" value="false" />
</persistence-manager-factory>
.. but I continue to see the previous behaviour in the logs. During the loading request:
org.datanucleus.store.types.TypeManager addJavaType: Adding support for Java type <class>
.. and during the first request that really needs a class mapping (getObjectById for example):
org.datanucleus.metadata.xml.MetaDataParser parseMetaDataStream: Parsing MetaData file <class>.jdo
So the first request that retrieves the object takes longer than the following ones because it needs to parse the XML file.
What's wrong?
I am using DataNucleus 1.1.5.
Thank you
Specify a "persistence.xml" defining all classes/mapping files. This is then read/loaded at startup.
Also ancient version of DataNucleus aren't supported, so use the newer version of the GAE JPA plugin with DataNucleus v3.x
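If upgrading is not immediately possible, one possible workaround (a hedged sketch; the entity classes listed are placeholders) is to touch each persistence-capable class during the warmup request so its metadata is parsed before the first real request:

import javax.jdo.PersistenceManager;

// Hypothetical warmup helper: requesting an Extent for each class forces its .jdo
// metadata to be parsed, so the first getObjectById no longer pays that cost.
public void warmUpMetadata() {
    PersistenceManager pm = PMF.get().getPersistenceManager();
    try {
        Class<?>[] persistentClasses = { FirstEntity.class, SecondEntity.class, ThirdEntity.class };
        for (Class<?> clazz : persistentClasses) {
            pm.getExtent(clazz, false);
        }
    } finally {
        pm.close();
    }
}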
How can I set OpenJPA to flush before a query? When I change some values in the database I want to propagate those changes into the application.
I tried these settings in persistence.xml:
<property name="openjpa.FlushBeforeQueries" value="true" />
<property name="openjpa.IgnoreChanges" value="false"/> false/true - same behavior to my case
<property name="openjpa.DataCache" value="false"/>
<property name="openjpa.RemoteCommitProvider" value="sjvm"/>
<property name="openjpa.ConnectionRetainMode" value="always"/>
<property name="openjpa.QueryCache" value="false"/>
Any idea?
Thanks
Calling refresh() on an object inside a transaction does the trick :)
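A minimal sketch of that (the entity type and the id lookup are illustrative):

import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;

EntityTransaction tx = em.getTransaction();
tx.begin();
MyEntity entity = em.find(MyEntity.class, someId);
em.refresh(entity); // re-reads the row from the database, overwriting the in-memory state
tx.commit();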