Memcached Provider for NHibernate not working

I've been at this for days. I have configured my web/app.config to use the second-level cache with a Memcached server and the provider from NHContrib. I don't get any exceptions, yet in testing I can see that it does not use the cache for the queries I have set cacheable = true.
If I switch the provider to NHibernate.Cache.HashtableCacheProvider and test, it works as expected.
Here are the relevant config sections I am using:
<configuration>
  <configSections>
    <section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler,NHibernate" />
    <section name="memcache" type="NHibernate.Caches.MemCache.MemCacheSectionHandler,NHibernate.Caches.MemCache" />
  </configSections>
  <memcache>
    <memcached host="192.168.215.60" port="11211" />
  </memcache>
  <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
    <session-factory>
      <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
      <property name="dialect">MT.Core.Persistence.Dialect, MT.Core</property>
      <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
      <property name="connection.connection_string">Server=192.168.1.1;Initial Catalog=Test;User ID=TestUser;Password=fakepassword;</property>
      <property name="show_sql">true</property>
      <property name="proxyfactory.factory_class">NHibernate.ByteCode.LinFu.ProxyFactoryFactory,NHibernate.ByteCode.LinFu</property>
      <property name="cache.provider_class">NHibernate.Caches.MemCache.MemCacheProvider,NHibernate.Caches.MemCache</property>
      <!--<property name="cache.provider_class">NHibernate.Cache.HashtableCacheProvider</property>-->
      <property name="cache.use_second_level_cache">true</property>
      <property name="cache.use_query_cache">true</property>
    </session-factory>
  </hibernate-configuration>
</configuration>

The problem ended up being due to a connectivity problem. I used log4net to log any errors to the console and to the application log, and only then did I finally see the errors about connecting to the memcached server. Once the code was promoted to a server in the same location, the errors were gone. I should have learned to use log4net ages ago.
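For anyone wanting to do the same, a minimal log4net setup along these lines will surface the cache provider's connection errors on the console. This is only a sketch; the appender name, pattern and log level are illustrative and not taken from the original post, and log4net still has to be initialised, for example with log4net.Config.XmlConfigurator.Configure():

<configSections>
  <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
</configSections>
<log4net>
  <!-- write everything at WARN and above to the console -->
  <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="WARN" />
    <appender-ref ref="ConsoleAppender" />
  </root>
</log4net>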

For MemCache the property is 'default_expiration', not 'expiration'. I am not sure about SysCache, but I have used this property for MemCache and it works for me.
Initially I also faced the same error that CountCet mentioned: the attribute 'expiration' is not recognized by the MemCache provider. Later I checked the code and found that it uses the property 'default_expiration', and that its default value is 300 seconds.

I think the expiration property should be set for the MemCache provider at the session-factory level, and not in the provider's own configuration section as with the others (SysCache):
<property name="expiration">300</property>

Related

Getting IgniteCheckedException: Default Ignite instance has already been started exception when enabling Persistence on single Node

I am deploying an application where I need to maintain some data in an Ignite cache. I used an in-memory Ignite cache. Here is the Ignite configuration I have used:
<property name="cacheConfiguration">
<list>
<bean
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="IGNITE_DATA" />
<property name="cacheMode" value="PARTITIONED" />
<property name="atomicityMode" value="ATOMIC" />
<property name="writeSync"
value="PRIMARY_SYNC" />
<property name="backups"
value="${IGNITE_CACHE_BACKUPS}" />
</bean>
</list>
</property>
Now, when I deployed multiple instances of my application and stored data in the Ignite cache, the data was shared among all the application instances.
Even if an instance goes down and comes back up after some time, it gets the latest data via Ignite cache sync.
But the issue occurs when all the application instances go down: when they come back up, the data is gone, since it was not persisted. For persistence I used the dataStorageConfiguration property and enabled persistence. Here is the change I added to the Ignite configuration:
<property name="dataStorageConfiguration">
<bean
class="org.apache.ignite.configuration.DataStorageConfiguration">
<!-- Enabling Apache Ignite Persistent Store. -->
<property name="defaultDataRegionConfiguration">
<bean
class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true" />
</bean>
</property>
<!-- Changing Write Ahead Log Mode. -->
<property name="storagePath" value="${IGNITE_BC_STORE_PATH}"/>
<property name="walMode" value="LOG_ONLY" />
</bean>
</property>
Now, when I deploy my application and try to start Ignite from Java code as shown below:
log.info("Initializing IGNITE...");
ignite = Ignition.start(getClass().getResource(CONF_FILE));
I get an exception every time stating that the default instance has already been started. I tried several things, but they didn't work. Even if I remove the CacheConfiguration from the Ignite configuration and keep only the dataStorageConfiguration, I still get the same error. The error is:
Caused by: class org.apache.ignite.IgniteCheckedException: Default Ignite instance has already been started.
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1141)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1076)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:962)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:881)
at org.apache.ignite.Ignition.start(Ignition.java:373)
Normally this error comes up when we try to run multiple Ignite nodes in the same JVM, but here I am running a single node per JVM and I am still getting the error.
Please do correct me if I am wrong.
Any help here will be appreciated.
Most probably, you have more than one IgniteConfiguration bean in your config file. If one configuration bean extends another one, then make sure that the parent is abstract.
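For example, a parent/child pair of configuration beans only yields a single concrete IgniteConfiguration when the parent is marked abstract. This is just a sketch with placeholder bean ids and properties, not the poster's actual file:

<!-- template only: abstract="true" means Spring never instantiates this bean -->
<bean id="igniteCfgBase" class="org.apache.ignite.configuration.IgniteConfiguration" abstract="true">
  <property name="peerClassLoadingEnabled" value="true"/>
</bean>

<!-- the single concrete configuration that Ignition.start() will pick up -->
<bean id="igniteCfg" parent="igniteCfgBase">
  <property name="igniteInstanceName" value="myGrid"/>
</bean>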
I have resolved the issue. It seems the issue was not with the Ignite configuration but with the Spring Framework configuration.
I was creating the bean for the Ignite class with lazy-init="true". I switched that to eager initialization and that resolved my issue.
I'm not sure exactly how it solved this, but it worked, at least in my case.
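In Spring XML that change looks roughly like this (a sketch; the bean id and class name are placeholders, since the original bean definition isn't shown in the question):

<!-- before: with lazy-init the bean (and the Ignition.start call inside it) is only created on first use -->
<!-- <bean id="igniteStarter" class="com.example.IgniteStarter" lazy-init="true"/> -->

<!-- after: eager initialization, the bean is created once when the Spring context starts -->
<bean id="igniteStarter" class="com.example.IgniteStarter" lazy-init="false"/>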

Hibernate xml file issue

I have set up a Tapestry 5 project and all went fine until I added Hibernate. I have created a hibernate.xml file:
<hibernate-configuration>
  <session-factory>
    <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
    <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
    <property name="connection.url">jdbc:mysql://localhost/project</property>
    <property name="connection.username">root</property>
    <property name="connection.password">password12</property>
    <property name="connection.pool_size">5</property>
    <!-- Print SQL to stdout. -->
    <property name="show_sql">true</property>
    <property name="format_sql">true</property>
    <property name="use_sql_comments">true</property>
    <property name="generate_statistics">true</property>
    <property name="hibernate.archive.autodetection">class, hbm</property>
    <property name="hibernate.transaction.flush_before_completion">true</property>
    <!-- Mapping files TODO: Classify those mappings in exact order and define the relations between them in entities some time later on. -->
    <mapping class="rs.project.com.entities.Fruit"/>
    <mapping class="rs.project.com.entities.Article"/>
  </session-factory>
</hibernate-configuration>
and it's OK as far as the implementation is concerned. However, when I deploy the app it picks up some other configuration, which I can see in my trace log: judging by the mappings it prints, it uses some other XML file belonging to a completely different project I worked on a while ago. The thing is, I can't see what's causing this behavior, and I am really frustrated. I am using Tomcat (Apache Catalina) and MySQL with Hibernate. Also, I did some research and found out that a persistence.xml file is referenced in my project.properties, which is kind of strange:
persistence.xml.dir=${conf.dir}
The driver for connecting my app to MySQL is jdbc.mysql.driver. So my goal is to pin down, hopefully with your help, what causes this behavior, and to solve it.
Thanks in advance for your answers.
If your Tomcat log is referring to a different project, maybe your context declaration is not right?
Check your contexts directory (for me it's $Tomcat_home\conf\Catalina\localhost) or server.xml (if that's what you're using). Make sure that the context file in the contexts directory is pointing to the right directory/project. This has happened to me before when a previous project had the same context name as my current one.
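A context descriptor in that directory looks roughly like the following (a sketch; the file name and docBase are placeholders). The docBase is what must point at the current project, not the old one, and the file name itself becomes the context path:

<!-- $Tomcat_home\conf\Catalina\localhost\myapp.xml -->
<Context docBase="/path/to/current/project/webapp" reloadable="true"/>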

loading properties file in spring

One of our team members has implemented loading properties this way (see the pseudo-code below) and advises that this approach is right, because the client application using it is free to keep the properties in any file, contrary to the widely used PropertyPlaceholderConfigurer.
application-context.xml
<bean class="com.mypackage.Myclass">
<property name="xml" value="classpath:"{com.myapp.myproperty1}"> </property>
</bean>
config.properties
com.myapp.myproperty1=data.xml
Edit: I should have added that it is data.properties and not data.xml. We want to load a property file (this property file is given in config.properties as a "property"):
com.myapp.myproperty1=data.properties
Java class:
import org.springframework.core.io.Resource;

public class Myclass {
    private Resource xmlField;

    // Spring injects the "xml" property via this setter (getter omitted)
    public void setXml(Resource xmlField) {
        this.xmlField = xmlField;
    }
}
Is it right to use Spring's core.io.Resource here?
Another reason is that the client application wants to load environment-specific configuration. I suggested using the property configurer and Maven profiles to generate environment-specific builds.
Can you please advise which approach suits which case? And if it differs between scenarios, please help me point them out.
Thanks.
You can put the properties in any file and still use PropertyPlaceholderConfigurer. Here's an example that satisfies both your coworker's concerns and your desire for environment-specific settings:
<bean id="propertyPlaceholderConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<list>
<!-- default settings -->
<value>classpath:MyCompany.properties</value>
<!-- environment-specific settings -->
<value>classpath:MyCompany.${mycompany.env:dev}.properties</value>
<!-- keep your coworker happy -->
<value>classpath:${mycoworker}</value>
<!-- allows emergency reconfiguration via the local file system -->
<value>file:///${user.home}/MyCompany.properties</value>
</list>
</property>
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
<property name="ignoreResourceNotFound" value="true" />
<!-- should be validated separately, in case users of the library load additional properties -->
<property name="ignoreUnresolvablePlaceholders" value="false"/>
</bean>
If you pass in no -D arguments, then you'll pick up the following properties files, where properties in the later files overwrite previously determined values:
1. MyCompany.properties off the classpath
2. MyCompany.dev.properties off the classpath
3. $HOME/MyCompany.properties, if it exists
To swap in a production config for #2, just pass -Dmycompany.env=prod to java. Similarly, your coworker can pass -Dmycoworker=/some/path/config.properties if he/she wants.
I'm not sure why a PropertyPlaceholderConfigurer wouldn't have been the correct choice.
I've almost always handled environment-specific configs via a customized PPC that can either (a) take a -D parameter on startup, and/or (b) use the machine name, to decide which property file to load.
For me, this is more convenient than bundling the information in via Maven, since I can more easily test arbitrary configurations from whatever machine I'm on (using a -D property).
+1 for Dave's suggestion. You should be using PropertyPlaceholderConfigurer for loading/reading properties. Here is an example I just pulled out of a previous project, in case you wonder how to use it. This example is set up for loading multiple properties files, but the concept is the same. Good luck.
<bean id="projectProperties" class="org.springframework.beans.factory.config.PropertiesFactoryBean">
<property name="locations">
<list>
<value>classpath:config.properties</value>
</list>
</property>
</bean>
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="properties" ref="projectProperties" />
</bean>
<bean id="uniqueAssetIdRetriever" class="com.mypackage.Myclass">
<property name="xml" value="${com.myapp.myproperty1}" />
</bean>

Hibernate fails to open Connection with Oracle 11g?

I made a basic JUnit test to set up this Oracle database on my computer with Hibernate. The database works and everything, but trying to hook it up to Hibernate is proving to be a challenge. My config file is shown below.
The JUnit test is fairly straightforward and I'm sure it should work, but I'm getting this JUnit failure:
org.hibernate.exception.JDBCConnectionException: Cannot open connection
Any ideas what's wrong with it?
Connection properties in Hibernate config file:
<session-factory>
  <property name="hibernate.connection.driver_class">oracle.jdbc.OracleDriver</property>
  <property name="hibernate.connection.url">jdbc:Oracle:thin:#127.0.0.1:8080/slyvronline</property>
  <property name="hibernate.connection.username">YouNoGetMyLoginInfo</property>
  <property name="hibernate.connection.password">YouNoGetMyLoginInfo</property>
  <property name="dialect">org.hibernate.dialect.OracleDialect</property>
  <!-- Other -->
  <property name="show_sql">true</property>
  <property name="hibernate.hbm2ddl.auto">validate</property>
  <!-- Mapping files -->
  <mapping class="com.slyvr.pojo.Person"/>
</session-factory>
It's unlikely (but possible) that your DB is listening on port 8080. Oracle defaults to port 1521. Start there.
(Since it's a connection issue, the relevant portions of the Hibernate config are useful; I've edited the question to include them.)
There are possibly two issues in your connection string:
the first is the port, as Dave Newton pointed out; the second is that after the port you should add the SID after ':', not '/'.
So try this as a solution:
jdbc:oracle:thin:@127.0.0.1:1521:slyvronline
When you are connecting to Oracle there is no need to mention the schema name, so the connection URL looks like this:
jdbc:oracle:thin:@<hostname>:<port>:<sid>
For example:
jdbc:oracle:thin:@localhost:1521:xe

Different persistence.xml properties for test run / JPA Fixtures

I am developing a Java EE/Spring web app. I use JPA 2.0 (Hibernate). For integration tests I need to use a different database. Those tests require Jetty to run the application, but I managed to override web.xml for such runs, and there I can modify my Spring context files, so that part is OK.
But each time I need a clean database (and to load some data into it).
As my database name and address are configured in the Spring context, I just switched them as I described above. But how can I change some of my persistence.xml properties for these tests only, so that the database is dropped and recreated?
I tried putting another persistence.xml in /src/test/resources/META-INF (and checked that test-classes come first on the classpath), but it is not loaded and only the 'master' version is used (from /src/main/resources/META-INF). Any help?
With Spring you usually define your data source as a Spring bean. The database URL and credentials are usually included from an external file, for example application.properties.
If you put a new application.properties in src/test/resources it will work. See also here.
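The shape of that setup, as a sketch (the data source class and the jdbc.* property keys are illustrative, not from the answer); the copy of application.properties in src/test/resources then simply shadows the one in src/main/resources during test runs:

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <property name="location" value="classpath:application.properties"/>
</bean>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="${jdbc.driver}"/>
  <property name="url" value="${jdbc.url}"/>
  <property name="username" value="${jdbc.username}"/>
  <property name="password" value="${jdbc.password}"/>
</bean>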
You can define an org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager:
<bean id="pum" class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
<property name="persistenceXmlLocations">
<list>
<value>/path/to/my/test-persistence.xml</value>
</list>
</property>
<property name="dataSources">
<map>
<entry key="dataSource" value-ref="dataSource"/>
</map>
</property>
<!-- if no datasource is specified, use this one -->
<property name="defaultDataSource" ref="dataSource"/>
</bean>
Then, link it to your entityManagerFactory:
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
...
...
<property name="persistenceUnitManager" ref="pum"/>
</bean>
I used this to make my own persistence.xml linked to an HSQL in-memory DB, preloaded with DBUnit (using hibernate.hbm2ddl.auto=create-drop).
It works perfectly.
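For illustration, such a test-persistence.xml might look roughly like this (a sketch; the unit name, dialect and in-memory URL are placeholder choices, not taken from the answer, and the connection can equally come from the dataSources map shown above):

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="testPU" transaction-type="RESOURCE_LOCAL">
    <properties>
      <property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/>
      <property name="hibernate.connection.url" value="jdbc:hsqldb:mem:testdb"/>
      <!-- drop and recreate the schema for every test run -->
      <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
    </properties>
  </persistence-unit>
</persistence>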
