How to define persistence on Local Pivotal GemFire Region?

I have local Region and I want to persist the Region data to disk. How to define it using DataPolicy?

I'm not entirely sure what you're asking here since the question is not very detailed... either way, the DataPolicy enumeration has the PERSISTENT_REPLICATE and PERSISTENT_PARTITION values, which you can use when configuring a Region to make it persistent.
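For example, a minimal sketch using the plain GemFire API (assuming an existing peer Cache referenced as cache; the Region name "Example" is illustrative):

Cache cache = new CacheFactory().create();

RegionFactory<String, Object> regionFactory = cache.createRegionFactory();

// Persist entries to disk (the default disk store is used unless one is configured)
regionFactory.setDataPolicy(DataPolicy.PERSISTENT_REPLICATE);

// Keep the Region local to this member (no distribution)
regionFactory.setScope(Scope.LOCAL);

Region<String, Object> example = regionFactory.create("Example");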
Hope this helps.

If your Region is truly "local", meaning the Region's data is only stored locally, either in the GemFire data node (server) or in your application (whether it is a cache client or a peer member of the cluster), then you can specify the data policy using the correct Region shortcut.
For example, if your application/arrangement is using a ClientCache (i.e. your application is a cache client), then you can still define LOCAL-only, persistent Regions using ClientRegionShortcut.LOCAL_PERSISTENT.
If your application/arrangement is using a peer Cache, (i.e. your application is actually participating as a data node/server, peer member in the cluster), then you can still define LOCAL-only, persistent Regions using RegionShortcut.LOCAL_PERSISTENT.
By way of GemFire API (peer Cache):
Cache peerCache = new CacheFactory(..)
    .set(..)
    .create();

RegionFactory localPersistentServerRegion =
    peerCache.createRegionFactory(RegionShortcut.LOCAL_PERSISTENT);
By way of GemFire API (ClientCache):
ClientCache clientCache = new ClientCacheFactory(..)
    .set(..)
    .create();

ClientRegionFactory localPersistentClientRegion =
    clientCache.createClientRegionFactory(ClientRegionShortcut.LOCAL_PERSISTENT);
By using SDG XML (peer Cache):
<gfe:local-region id="ExampleLocalPersistentServerRegion" persistent="true"/>
By using SDG XML (ClientCache):
<gfe:client-region id="ExampleLocalPersistentClientRegion"
                   shortcut="LOCAL_PERSISTENT"/>
By using SDG JavaConfig (peer Cache):
@Configuration
@PeerCacheApplication
class GemFireConfiguration {

    @Bean
    LocalRegionFactoryBean exampleLocalPersistentServerRegion(
            GemFireCache peerCache) {

        LocalRegionFactoryBean exampleRegion = new LocalRegionFactoryBean();

        exampleRegion.setCache(peerCache);
        exampleRegion.setClose(false);
        exampleRegion.setPersistent(true);

        return exampleRegion;
    }
}
By using SDG JavaConfig (ClientCache):
@Configuration
@ClientCacheApplication
class GemFireConfiguration {

    @Bean
    ClientRegionFactoryBean exampleLocalPersistentClientRegion(
            GemFireCache clientCache) {

        ClientRegionFactoryBean exampleRegion = new ClientRegionFactoryBean();

        exampleRegion.setCache(clientCache);
        exampleRegion.setClose(false);
        exampleRegion.setPersistent(true);

        return exampleRegion;
    }
}
There are also other ways of accomplishing the same using the pure Annotation-based configuration model in SDG, but this should get you going.
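For instance, a rough sketch of the Annotation-based approach might look like the following (this assumes a newer SDG version where the Annotation-based configuration model, e.g. @EnableEntityDefinedRegions, is available; the package name is illustrative and would contain your @Region-annotated entity classes):

// Sketch only; "example.app.model" is an illustrative package of @Region-annotated entities.
@ClientCacheApplication
@EnableEntityDefinedRegions(basePackages = "example.app.model",
    clientRegionShortcut = ClientRegionShortcut.LOCAL_PERSISTENT)
class GemFireConfiguration {
}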
Cheers!
-John

Related

Is it possible to make the application ignore caching in case the cache server fails?

I have a spring boot application with the following properties:
spring.cache.type: redis
spring.redis.host: <hostname>
spring.redis.port: <hostport>
Right now if the remote host fails the application also fails with a connection error.
As in this case my cache is not core to my application, but it is used only for performance, I'd like for spring to simply bypass it and go to the database retrieving its data.
I saw that this could be attained by defining a custom errorHandler method, but in order to do so I have to implement the CachingConfigurer bean... but this also forces me to override every method (for example cache manager, cache resolver, etc.).
@Configuration
public class CacheConfiguration implements CachingConfigurer {

    @Override
    public CacheManager cacheManager() {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public CacheResolver cacheResolver() {
        // TODO Auto-generated method stub
        return null;
    }

    ...

    @Override
    public CacheErrorHandler errorHandler() {
        // the only method I need, maybe
        return null;
    }
}
I would like to avoid that...I simply need a way to tell spring "the cache crashed but it's ok: just pretend you have no cache at all"
@Phate - Absolutely! I just answered a related question (possibly) using Apache Geode or Pivotal GemFire as the caching provider in a Spring Boot application with Spring's Cache Abstraction.
In that posting, rather than disabling the cache completely, I switched GemFire/Geode to run in a local-only mode (a possible configuration with GemFire/Geode). However, the same techniques can be applied to disable caching entirely if that is what is desired.
In essence, you need a pre-processing step, before Spring Boot and Spring in general start to evaluate the configuration of your application.
In my example, I implemented a custom Spring Condition that checked the availability of the cluster (i.e. servers). I then applied the Condition to my @Configuration class.
In the case of Spring Boot, Spring Boot applies auto-configuration for Redis (as a store and as a caching provider) when it sees Redis and Spring Data Redis on the classpath of your application. So, essentially, Redis is only enabled as a caching provider when the "conditions" are true, primarily that a RedisConnectionFactory bean was declared by your application configuration; that is your responsibility.
So, what would this look like?
Like my Apache Geode & Pivotal GemFire custom Spring Condition, you could implement a similar Condition for Redis, such as:
static class RedisAvailableCondition implements Condition {

    @Override
    public boolean matches(ConditionContext conditionContext,
            AnnotatedTypeMetadata annotatedTypeMetadata) {

        // Check the availability of the Redis server, such as by opening a Socket
        // connection to the server node.
        // NOTE: There might be other, more reliable/robust means of checking
        // the availability of a Redis server in the Redis community.

        Socket redisServer = null;

        try {
            Environment environment = conditionContext.getEnvironment();

            String host = environment.getProperty("spring.redis.host");
            Integer port = environment.getProperty("spring.redis.port", Integer.class);

            SocketAddress redisServerAddress = new InetSocketAddress(host, port);

            redisServer = new Socket();
            redisServer.connect(redisServerAddress);

            return true;
        }
        catch (Throwable ignore) {
            System.setProperty("spring.cache.type", "none");
            return false;
        }
        finally {
            // TODO: You need to implement this method yourself.
            safeCloseSocket(redisServer);
        }
    }
}
Additionally, I also set the spring.cache.type to NONE, to ensure that caching is rendered as a no-op in the case that Redis is not available. NONE is explained in more detail here.
Of course, you could also use a fallback caching option, using some other caching provider (like a simple ConcurrentHashMap, but I leave that as an exercise for you). Onward...
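For example, an untested sketch of such a fallback, assuming a hypothetical RedisNotAvailableCondition that is simply the inverse of the Condition shown above (all names here are illustrative):

import org.springframework.cache.CacheManager;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;

@Configuration
@Conditional(RedisNotAvailableCondition.class) // hypothetical inverse of RedisAvailableCondition
class FallbackCacheConfiguration {

    @Bean
    CacheManager cacheManager() {
        // Simple in-memory caching backed by ConcurrentHashMaps
        return new ConcurrentMapCacheManager();
    }
}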
Then, in your Spring Boot application configuration class, where you have defined your RedisConnectionFactory bean (as expected by Spring Boot's auto-configuration), you add this custom Condition using Spring's @Conditional annotation, like so:
@Configuration
@Conditional(RedisAvailableCondition.class)
class MyRedisConfiguration {

    @Bean
    RedisConnectionFactory redisConnectionFactory() {
        // Construct and return a new RedisConnectionFactory
    }
}
This should effectively handle the case when Redis is not available.
DISCLAIMER: I did not test this myself, but it is based on my Apache Geode/Pivotal GemFire example, which does work. So, perhaps, with some tweaks this will address your needs. It should also serve to give you some ideas.
Hope this helps!
Cheers!
We can use any circuit breaker implementation to fall back to the database in case of a cache failure. The advantage of the circuit breaker pattern is that once your cache is back up, requests are automatically routed back to it, so the switch happens seamlessly.
You can also configure how many times to retry before falling back to the database and how frequently to check whether your cache is back online (see the sketch after the links below).
Spring Cloud provides out-of-the-box support for the Hystrix and Resilience4j circuit breaker implementations, and they are easy to integrate with Spring Boot applications.
https://spring.io/projects/spring-cloud-circuitbreaker
https://resilience4j.readme.io/docs/circuitbreaker
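As a rough sketch of the idea, using the plain Resilience4j API rather than the Spring Cloud integration (the reader interfaces and class name are illustrative, not part of any library):

import io.github.resilience4j.circuitbreaker.CircuitBreaker;

class CacheOrDatabaseLookup {

    // Opens after repeated cache failures and periodically probes whether the cache is back up
    private final CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("cacheLookups");

    private final CacheReader cacheReader;       // hypothetical cache-backed reader
    private final DatabaseReader databaseReader; // hypothetical database-backed reader

    CacheOrDatabaseLookup(CacheReader cacheReader, DatabaseReader databaseReader) {
        this.cacheReader = cacheReader;
        this.databaseReader = databaseReader;
    }

    Object find(String key) {
        try {
            // Route the cache call through the circuit breaker
            return circuitBreaker.executeSupplier(() -> cacheReader.read(key));
        }
        catch (Exception cacheUnavailable) {
            // Fall back to the database while the circuit is open or the cache call fails
            return databaseReader.read(key);
        }
    }

    interface CacheReader { Object read(String key); }

    interface DatabaseReader { Object read(String key); }
}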
I have to implement the CachingConfigurer bean... but this also forces me to override every method (for example cache manager, cache resolver, etc.)
Instead of this, you can simply extend CachingConfigurerSupport and only override the errorHandler() method, returning a custom CacheErrorHandler whose method implementations are no-ops. See https://stackoverflow.com/a/68072419/1527469
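A minimal sketch of that approach, with no-op implementations for all four CacheErrorHandler callbacks (so a failed cache operation simply falls through to the underlying method/database call):

import org.springframework.cache.Cache;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.interceptor.CacheErrorHandler;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CacheConfiguration extends CachingConfigurerSupport {

    @Override
    public CacheErrorHandler errorHandler() {
        // Swallow cache errors so the application behaves as if there were no cache
        return new CacheErrorHandler() {

            @Override
            public void handleCacheGetError(RuntimeException exception, Cache cache, Object key) { }

            @Override
            public void handleCachePutError(RuntimeException exception, Cache cache, Object key, Object value) { }

            @Override
            public void handleCacheEvictError(RuntimeException exception, Cache cache, Object key) { }

            @Override
            public void handleCacheClearError(RuntimeException exception, Cache cache) { }
        };
    }
}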

Spring Data GemFire: CustomExpiry Examples

I am using Pivotal GemFire 9.1.1 and Spring Data GemFire 2.0.7.RELEASE.
I have a token that will be stored in a GemFire Region with a String Key and a Map<String,String> Value. The expiration of the token (i.e. entry in the GemFire Region) should be dynamic dependent on a few business scenarios.
I could find Pivotal GemFire documentation for CustomExpiry whereas I could not find any proper example/documentation on Spring Data GemFire (<gfe:custom-entry-ttl>).
Please share if there is a resource which instructs how to enable custom data expiration in Spring Data GemFire.
There are actually 3 different ways a developer can configure a custom expiration policy for a Region using Pivotal GemFire's o.a.g.cache.CustomExpiry interface in SDG.
Given an application-specific implementation of o.a.g.cache.CustomExpiry...
package example.app.gemfire.cache;

import org.apache.geode.cache.CustomExpiry;
import org.apache.geode.cache.ExpirationAttributes;
import org.apache.geode.cache.Region;
import ...;

class MyCustomExpiry implements CustomExpiry<String, Object> {

    @Override
    public ExpirationAttributes getExpiry(Region.Entry<String, Object> entry) {
        ...
    }
}
First, the XML approach.
<bean id="customTimeToLiveExpiration"
      class="example.app.gemfire.cache.MyCustomExpiry"/>

<gfe:partitioned-region id="Example" persistent="false">
    <gfe:custom-entry-ttl ref="customTimeToLiveExpiration"/>
    <gfe:custom-entry-tti>
        <bean class="example.app.gemfire.cache.MyCustomExpiry"/>
    </gfe:custom-entry-tti>
</gfe:partitioned-region>
As you can see in the example above, you can define "custom" Expiration policies using either a bean reference, as in the nested Time-To-Live (TTL) Expiration policy declaration, or an anonymous bean definition, as in the nested Idle Timeout (TTI) Expiration policy of the "Example" PARTITION Region bean definition.
Refer to the SDG XML schema for precise definitions.
Second, you can achieve the same thing using Java config...
@Configuration
class GemFireConfiguration {

    @Bean
    MyCustomExpiry customTimeToLiveExpiration() {
        return new MyCustomExpiry();
    }

    @Bean("Example")
    PartitionedRegionFactoryBean<String, Object> exampleRegion(
            GemFireCache gemfireCache) {

        PartitionedRegionFactoryBean<String, Object> exampleRegion =
            new PartitionedRegionFactoryBean<>();

        exampleRegion.setCache(gemfireCache);
        exampleRegion.setClose(false);
        exampleRegion.setPersistent(false);
        exampleRegion.setCustomEntryTimeToLive(customTimeToLiveExpiration());
        exampleRegion.setCustomEntryIdleTimeout(new MyCustomExpiry());

        return exampleRegion;
    }
}
Finally, you configure both TTL and TTI Expiration Policies using SDG Annotation-based Expiration configuration, as defined here. There is a test class along with the configuration in the SDG test suite demonstrating this capability.
Additional information about Annotation-based Expiration configuration in SDG can be found here.
Hope this helps!
-John

Enable schemaCreationSupport in spring-boot-starter-data-solr

I use spring-boot-starter-data-solr and would like to make use of the schema creation support of Spring Data Solr, as stated in the documentation:
Automatic schema population will inspect your domain types whenever the applications context is refreshed and populate new fields to your index based on the properties configuration. This requires solr to run in Schemaless Mode.
However, I am not able to achieve this. As far as I can see, the Spring Boot starter does not enable the schemaCreationSupport flag on the #EnableSolrRepositories annotation. So what I tried is the following:
@SpringBootApplication
@EnableSolrRepositories(schemaCreationSupport = true)
public class MyApplication {

    @Bean
    public SolrOperations solrTemplate(SolrClient solr) {
        return new SolrTemplate(solr);
    }
}
But looking in Wireshark I cannot see any calls to the Solr Schema API when saving new entities through the repository.
Is this intended to work, or what am I missing? I am using Solr 6.2.0 with Spring Boot 1.4.1.
I've run into the same problem. After some debugging, I found the root cause of why the schema creation (or update) is not happening at all:
By using the @EnableSolrRepositories annotation, a Spring extension adds a factory bean to the context that creates the SolrTemplate used by the repositories. This template initialises a SolrPersistentEntitySchemaCreator, which should do the creation/update.
public void afterPropertiesSet() {

    if (this.mappingContext == null) {
        this.mappingContext = new SimpleSolrMappingContext(
            new SolrPersistentEntitySchemaCreator(this.solrClientFactory)
                .enable(this.schemaCreationFeatures));
    }

    // ...
}
The problem is that the flag schemaCreationFeatures (which enables the creator) is set after the factory calls afterPropertiesSet(), so it is impossible for the creator to do its work.
I'll create an issue in the spring-data-solr issue tracker. I don't see any workaround right now, other than maintaining a custom fork/build of spring-data or extending a bunch of Spring classes and trying to get the flag set earlier (but I doubt this can be done).

Automatic cache invalidation in Spring

I have a class that performs some read operations from a service XXX. These read operations will eventually perform DB reads and I want to optimize on those calls by caching the results of each method in the class for a specified custom key per method.
class A {

    public Output1 func1(Arguments1 ...) {
        ...
    }

    public Output2 func2(Arguments2 ...) {
        ...
    }

    public Output3 func3(Arguments3 ...) {
        ...
    }

    public Output4 func4(Arguments4 ...) {
        ...
    }
}
I am thinking of using Spring caching (the @Cacheable annotation) for caching the results of each of these methods.
However, I want cache invalidation to happen automatically by some mechanism (TTL etc.). Is that possible in Spring caching? I understand that we have a @CacheEvict annotation, but I want that eviction to happen automatically.
Any help would be appreciated.
According to the Spring documentation (section 36.8):
How can I set the TTL/TTI/Eviction policy/XXX feature?
Directly through your cache provider. The cache abstraction is... well, an abstraction, not a cache implementation. The solution you are using might support various data policies and different topologies which other solutions do not (take for example the JDK ConcurrentHashMap) - exposing that in the cache abstraction would be useless simply because there would be no backing support. Such functionality should be controlled directly through the backing cache, when configuring it or through its native API.
This means that Spring does not directly expose an API to set the Time To Live, but instead relies on the caching provider implementation to set it. You therefore need to either set the Time To Live through the exposed CacheManager, if the caching provider allows dynamic setup of these attributes, or alternatively configure the cache region that Spring uses with the @Cacheable annotation yourself.
To find the name of the cache region that @Cacheable uses, you can browse the available cache regions in your application with a JMX console.
If you are using EHCache, for example, once you know the cache region you can provide XML configuration like this:
<cache name="myCache"
maxEntriesLocalDisk="10000" eternal="false" timeToIdleSeconds="3600"
timeToLiveSeconds="0" memoryStoreEvictionPolicy="LFU">
</cache>
Again, all of this configuration is caching-provider specific; Spring does not expose an API for controlling it.
REMARK: The default cache provider that Spring configures, if no cache provider is defined, is a ConcurrentHashMap. It does not support Time To Live. To get this functionality you have to switch to a different cache provider (for example EHCache).
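For example, an untested sketch of wiring EHCache 2.x as the provider via Java config, assuming spring-context-support and EHCache are on the classpath and an ehcache.xml containing a <cache> definition like the one above (the cache name must match the one used by @Cacheable):

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.cache.ehcache.EhCacheManagerFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;

@Configuration
@EnableCaching
public class EhCacheConfiguration {

    @Bean
    public EhCacheManagerFactoryBean ehCacheManagerFactory() {
        EhCacheManagerFactoryBean factory = new EhCacheManagerFactoryBean();
        // ehcache.xml holds the <cache> element with the TTL/TTI settings shown above
        factory.setConfigLocation(new ClassPathResource("ehcache.xml"));
        factory.setShared(true);
        return factory;
    }

    @Bean
    public CacheManager cacheManager(net.sf.ehcache.CacheManager ehCacheManager) {
        // Adapts the native EHCache manager to Spring's Cache abstraction
        return new EhCacheCacheManager(ehCacheManager);
    }
}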

Spring dynamically choosing between data sources (alternative to ThreadLocal)

I've read about AbstractRoutingDataSource and the standard ways to bind a datasource dynamically in this article:
public class CustomerRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return CustomerContextHolder.getCustomerType();
    }
}
It uses a ThreadLocal context holder to "set" the DataSource:
public class CustomerContextHolder {

    private static final ThreadLocal<CustomerType> contextHolder =
        new ThreadLocal<CustomerType>();

    public static void setCustomerType(CustomerType customerType) {
        Assert.notNull(customerType, "customerType cannot be null");
        contextHolder.set(customerType);
    }

    public static CustomerType getCustomerType() {
        return (CustomerType) contextHolder.get();
    }

    // ...
}
I have a quite complex system where threads are not necessarily in my control, say:
Scheduled EJB reads a job list from the database
For each Job it fires a Spring (or Java EE) batch job.
Each job has its own origin and destination databases (read from a central database).
Multiple jobs will run in parallel
Jobs may be multithreaded.
ItemReader will use the origin data source that was set for that specific job (origin data source must be bound to some repositories)
ItemWriter will use the destination data source that was set for that specific job (destination data source must also be bound to some repositories).
So I'm somewhat anxious about ThreadLocal; specifically, I'm not sure whether the same thread will be used to handle multiple jobs. If that happens, origin and destination databases may get mixed up.
How can I "store" and bind a data source dynamically in a safe way when dealing with multiple threads?
I could not find a way to set up Spring to play nicely with my setup and inject the desired DataSource, so I decided to handle it manually.
Detailed solution:
I changed my repositories to be prototypes, so that a new instance is constructed every time one is wired:
@Repository
@Scope(BeanDefinition.SCOPE_PROTOTYPE)
I've introduced new setDataSource and setSchema methods in top level interfaces / implementations that are supposed to work with multiple instances / schemas.
Since I'm using spring-data-jdbc-repository, my setDataSource method simply wraps the DataSource in a new JdbcTemplate and propagates the change (a consolidated sketch of such a repository follows below):
setJdbcOperations(new JdbcTemplate(dataSource));
My implementation is obtaining the DataSources directly from the application server:
final Context context = new InitialContext();
final DataSource dataSource = (DataSource) context.lookup("jdbc/" + dsName);
Finally, for multiple schemas under the same database instance, I'm logging in with a special user (with the correct permissions) and using an Oracle command to switch to the desired schema:
getJdbcOperations().execute("ALTER SESSION SET CURRENT_SCHEMA = " + schema);
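Putting the pieces together, a minimal sketch of such a prototype-scoped repository (class and method names are illustrative, not taken from any library):

import javax.sql.DataSource;

import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.context.annotation.Scope;
import org.springframework.jdbc.core.JdbcOperations;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

@Repository
@Scope(BeanDefinition.SCOPE_PROTOTYPE)
public class JobAccountRepository {

    private JdbcOperations jdbcOperations;

    // Called once per job with that job's origin or destination DataSource
    public void setDataSource(DataSource dataSource) {
        this.jdbcOperations = new JdbcTemplate(dataSource);
    }

    // Oracle-specific: switch the session to the schema this job should use
    public void setSchema(String schema) {
        this.jdbcOperations.execute("ALTER SESSION SET CURRENT_SCHEMA = " + schema);
    }

    protected JdbcOperations getJdbcOperations() {
        return this.jdbcOperations;
    }
}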
While this goes against the dependency inversion principle, it works and handles my concurrency requirements very well.
