We are migrating our Redis stack over to Redis Cluster.
In portions of our application, this has meant that we have had to replace the Jedis object with the JedisCluster object.
In our Spring client, we use the JedisConnectionFactory to persist sessions to Redis. However, this class does not appear to support JedisCluster.
Any thoughts on how one would go about wiring up a Spring application to a Redis Cluster?
I noticed that this factory implements RedisConnectionFactory which requires an instance of RedisConnection to be returned. However, this assumes that only one connection to a Redis server would be required, which is not the case in RedisCluster (it takes a set of redis servers and creates connections for all of them). As a result, I am not sure what interfaces one would need to implement in order to bring Spring into our new stack.
Any help would be greatly appreciated. Thanks!
Spring Data Redis 1.7 will support Redis Cluster using the Jedis and lettuce drivers. The release date ETA is the first of April 2016.
Code samples for Spring Data Redis Cluster are already online: https://github.com/spring-projects/spring-data-examples/tree/master/redis/cluster
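As a rough sketch of what the cluster wiring might look like with the 1.7 API (the node addresses are placeholders; list enough of your actual cluster nodes for the driver to discover the full topology):

@Configuration
public class RedisClusterConfig {

    // Node addresses are placeholders for your actual cluster nodes.
    @Bean
    public RedisConnectionFactory connectionFactory() {
        RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration(
                Arrays.asList("127.0.0.1:7379", "127.0.0.1:7380"));
        return new JedisConnectionFactory(clusterConfig);
    }
}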
Alternatively, you can operate through Sentinel:

/**
 * jedis
 */
@Bean
public RedisConnectionFactory jedisConnectionFactory() {
    RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
            .master("mymaster")
            .sentinel("127.0.0.1", 26379)
            .sentinel("127.0.0.1", 26380);
    return new JedisConnectionFactory(sentinelConfig);
}
http://docs.spring.io/spring-data/redis/docs/current/reference/html/#redis:sentinel
I have a question about Spring Boot, the Quartz scheduler, and HikariCP. I am relatively new to this domain and trying to understand how they relate and work together.
I have gone through many questions related to either Spring Boot with HikariCP or the Quartz scheduler using HikariCP, but none of them answers my questions.
I have an application with the configuration below:
#Database properties
spring.datasource.url = jdbc:mysql://localhost:3306/demo?user=root&password=root&useSSL=false&serverTimezone=UTC
spring.datasource.username = root
spring.datasource.password = root
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect
#Hikari
spring.datasource.hikari.minimumIdle=5
spring.datasource.hikari.maximumPoolSize=20
#quartz settings
spring.quartz.properties.org.quartz.jobStore.dataSource = quartzDataSource
spring.quartz.properties.org.quartz.dataSource.quartzDataSource.driver = com.mysql.cj.jdbc.Driver
spring.quartz.properties.org.quartz.dataSource.quartzDataSource.provider=hikaricp
spring.quartz.properties.org.quartz.dataSource.quartzDataSource.URL = jdbc:mysql://localhost:3306/demo?user=root&password=root&useSSL=false&serverTimezone=UTC
spring.quartz.properties.org.quartz.dataSource.quartzDataSource.user = root
spring.quartz.properties.org.quartz.dataSource.quartzDataSource.password = root
spring.quartz.job-store-type = jdbc
spring.quartz.properties.org.quartz.threadPool.threadCount=20
By default, Spring Boot 2 uses HikariCP. I have set the pool size to 20.
In the Quartz scheduler too, I have set it to use HikariCP.
Now my questions are
Are Spring Boot and Quartz using the same connection pool, or is Quartz creating a new pool?
If Quartz is creating a new pool, is there any way to configure both so that they use the same connection pool created by Spring Boot?
What would be the optimal connection pool size for 1k, 10k, or 50k users?
Thanks in advance.
Sorry, I don't have enough time to come back with a complete answer, but maybe this will help:
Since you're giving the connection details in two different places, it's safe to assume you're creating two data sources with separate pools.
Found this: https://www.candidjava.com/tutorial/quartz-reuse-existing-data-source-connection-pool/
The number of users can't be directly correlated to connection pool size. Look instead at the number of concurrent requests you want to support: for 100 requests/sec, each taking 100 ms, you need 10 connections (100/sec × 0.1 s). This is a very simplified way of calculating, but it's a starting point; after that, monitoring and adjusting should help you.
Reusing Spring's datasource in Quartz is possible, and has been since Spring Framework 4.x.
By default, Quartz creates a new connection pool based on the provided data source properties.
Even if you instruct Quartz to use a connection pooling provider (it supports c3p0 and HikariCP out of the box), it will still create a new connection pool using that provider. It all comes down to the implementation details of Quartz's JobStoreCMT class, which is usually the JobStore implementation used in Spring applications by default. JobStoreCMT always creates its own pool.
Reusing Spring's datasource in Quartz is, however, trivial using Spring's SchedulerFactoryBean. It accepts a Spring-managed datasource through setDataSource, as shown in the following snippet:
@Configuration
public class SchedulerConfig {

    @Autowired
    private DataSource dataSource;

    @Bean
    public SchedulerFactoryBean schedulerFactoryBean() {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();
        factory.setDataSource(dataSource);
        // ... set other properties
        return factory;
    }
}
Internally, when a datasource is provided to SchedulerFactoryBean, Spring Framework instructs Quartz to use LocalDataSourceJobStore (a Spring-provided job store that extends Quartz's JobStoreCMT) to manage jobs. LocalDataSourceJobStore has a custom Quartz connection provider that reuses the provided datasource instead of creating a new pool.
In Spring Boot 2 this is even simpler, since Boot does all of the wiring to use the application's default data source. One only needs to configure Quartz to use a JDBC store type:
spring.quartz.job-store-type=jdbc
Configuring a Quartz data source again in the properties file may interfere with this auto-wiring behavior and result in Quartz creating its own managed datasource with a new connection pool.
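As a rough sketch, the trimmed-down properties might look like this (only entries from the question are kept; with no org.quartz.dataSource.* or org.quartz.jobStore.dataSource entries, Quartz falls back to the Spring-managed datasource):

#quartz settings - Quartz reuses the Spring Boot datasource when no org.quartz.dataSource.* entries are present
spring.quartz.job-store-type = jdbc
spring.quartz.properties.org.quartz.threadPool.threadCount = 20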
I have a spring boot application with the following properties:
spring.cache.type: redis
spring.redis.host: <hostname>
spring.redis.port: <hostport>
Right now, if the remote host fails, the application also fails with a connection error.
Since in this case the cache is not core to my application but only used for performance, I'd like Spring to simply bypass it and go to the database to retrieve its data.
I saw that this could be attained by defining a custom errorHandler method, but in order to do so I have to implement the CachingConfigurer interface... and this also forces me to override every method (for example the cache manager, cache resolver, etc.):
@Configuration
public class CacheConfiguration implements CachingConfigurer {

    @Override
    public CacheManager cacheManager() {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public CacheResolver cacheResolver() {
        // TODO Auto-generated method stub
        return null;
    }

    ...

    @Override
    public CacheErrorHandler errorHandler() {
        // the only method I need, maybe
        return null;
    }
}
I would like to avoid that... I simply need a way to tell Spring "the cache crashed, but it's OK: just pretend you have no cache at all".
@Phate - Absolutely! I just answered a related question (possibly) using Apache Geode or Pivotal GemFire as the caching provider in a Spring Boot application with Spring's Cache Abstraction.
In that posting, rather than disabling the cache completely, I switched GemFire/Geode to run in a local-only mode (a possible configuration with GemFire/Geode). However, the same techniques can be applied to disable caching entirely if that is what is desired.
In essence, you need a pre-processing step, before Spring Boot and Spring in general start to evaluate the configuration of your application.
In my example, I implemented a custom Spring Condition that checked the availability of the cluster (i.e. the servers). I then applied the Condition to my @Configuration class.
In the case of Spring Boot, Spring Boot applies auto-configuration for Redis (as a store and as a caching provider) when it sees Redis and Spring Data Redis on your application's classpath. So, essentially, Redis is only enabled as a caching provider when the "conditions" are true, primarily that a RedisConnectionFactory bean was declared by your application configuration, which is your responsibility.
So, what would this look like?
Like my Apache Geode & Pivotal GemFire custom Spring Condition, you could implement a similar Condition for Redis, such as:
static class RedisAvailableCondition implements Condition {

    @Override
    public boolean matches(ConditionContext conditionContext,
            AnnotatedTypeMetadata annotatedTypeMetadata) {

        // Check the availability of the Redis server, such as by opening a Socket
        // connection to the server node.
        // NOTE: There might be other, more reliable/robust means of checking
        // the availability of a Redis server in the Redis community.

        Socket redisServer = null;

        try {
            Environment environment = conditionContext.getEnvironment();
            String host = environment.getProperty("spring.redis.host");
            Integer port = environment.getProperty("spring.redis.port", Integer.class);

            SocketAddress redisServerAddress = new InetSocketAddress(host, port);

            redisServer = new Socket();
            redisServer.connect(redisServerAddress);

            return true;
        }
        catch (Throwable ignore) {
            System.setProperty("spring.cache.type", "none");
            return false;
        }
        finally {
            // TODO: You need to implement this method yourself.
            safeCloseSocket(redisServer);
        }
    }
}
Additionally, I also set spring.cache.type to NONE, to ensure that caching is rendered a no-op when Redis is not available. NONE is explained in more detail in the Spring Boot reference documentation.
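For reference, the equivalent entry in application.properties would be:

spring.cache.type=none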
Of course, you could also use a fallback caching option with some other caching provider (like a simple ConcurrentHashMap; I leave that as an exercise for you). Onward...
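As a rough sketch of that fallback idea: redisIsAvailable() below is a hypothetical helper (e.g. the same Socket check as in the Condition above), ConcurrentMapCacheManager is Spring's simple in-memory cache manager, and RedisCacheManager.create(..) is the Spring Data Redis 2.x factory method; verify against your version:

@Bean
public CacheManager cacheManager(RedisConnectionFactory connectionFactory) {
    // redisIsAvailable() is a placeholder for your own availability check.
    if (redisIsAvailable()) {
        return RedisCacheManager.create(connectionFactory);
    }
    // Fall back to a simple in-JVM cache when Redis is down.
    return new ConcurrentMapCacheManager();
}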
Then, in your Spring Boot application configuration class where you have defined your RedisConnectionFactory bean (as expected by Spring Boot's auto-configuration), you add this custom Condition using Spring's @Conditional annotation, like so:
@Configuration
@Conditional(RedisAvailableCondition.class)
class MyRedisConfiguration {

    @Bean
    RedisConnectionFactory redisConnectionFactory() {
        // Construct and return a new RedisConnectionFactory
    }
}
This should effectively handle the case when Redis is not available.
DISCLAIMER: I did not test this myself, but it is based on my Apache Geode/Pivotal GemFire example, which does work. So, perhaps with some tweaks, this will address your needs. It should also give you some ideas.
Hope this helps!
Cheers!
We can use any circuit breaker implementation to fall back to the database in case of a cache failure. The advantage of the circuit breaker pattern is that once your cache is back up, requests are automatically routed back to it, so the switch happens seamlessly.
You can also configure how many times to retry before falling back to the database, and how frequently to check whether your cache is back online.
Spring Cloud provides out-of-the-box support for the Hystrix and Resilience4j circuit breaker implementations, and they are easy to integrate with Spring Boot applications:
https://spring.io/projects/spring-cloud-circuitbreaker
https://resilience4j.readme.io/docs/circuitbreaker
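For illustration, a bare-bones sketch with the Resilience4j core API (fetchFromCache, fetchFromDatabase, Product, and id are placeholders; the Vavr Try type ships as a Resilience4j dependency):

// Wrap the cache lookup in a circuit breaker and fall back to the database.
CircuitBreaker breaker = CircuitBreaker.ofDefaults("redisCache");

Supplier<Product> cacheCall =
        CircuitBreaker.decorateSupplier(breaker, () -> fetchFromCache(id));

Product product = Try.ofSupplier(cacheCall)
        .recover(throwable -> fetchFromDatabase(id))  // database fallback
        .get();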
> I have to implement the CachingConfigurer bean...but this also forces me to override every method (for example cache manager, cache resolver, etc.)
Instead of this, you can simply extend CachingConfigurerSupport and only override the errorHandler() method, returning a custom CacheErrorHandler whose method implementations are no-ops. See https://stackoverflow.com/a/68072419/1527469
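A minimal sketch of that approach (the empty handler methods swallow cache exceptions, so a cache failure behaves like a cache miss and the call falls through to the underlying method):

@Configuration
public class CacheConfiguration extends CachingConfigurerSupport {

    @Override
    public CacheErrorHandler errorHandler() {
        return new CacheErrorHandler() {
            @Override
            public void handleCacheGetError(RuntimeException e, Cache cache, Object key) {
                // ignore: a failed GET behaves like a cache miss
            }

            @Override
            public void handleCachePutError(RuntimeException e, Cache cache, Object key, Object value) {
                // ignore: the value simply isn't cached
            }

            @Override
            public void handleCacheEvictError(RuntimeException e, Cache cache, Object key) {
                // ignore
            }

            @Override
            public void handleCacheClearError(RuntimeException e, Cache cache) {
                // ignore
            }
        };
    }
}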
I have a Spring Boot Kafka application. My brokers are recycled every few days: the old brokers are deprovisioned and new brokers are provisioned.
I have a scheduler which checks for brokers every few hours. I would like to make sure that as soon as we have new brokers,
all the Spring Kafka related beans are reloaded. Very similar to KafkaAutoConfiguration, except I want a trigger on a broker value change and to load the auto-configuration programmatically.
How do I invoke the auto-configuration programmatically whenever the old brokers are replaced with new ones?
Your requirements sound like Config Server in Spring Cloud (https://cloud.spring.io/spring-cloud-static/Greenwich.SR2/multi/multi__spring_cloud_config_2.html#_spring_cloud_config_2) with its @RefreshScope feature (https://cloud.spring.io/spring-cloud-static/Greenwich.SR2/multi/multi__spring_cloud_context_application_context_services.html#refresh-scope).
So, you need to specify your own beans and mark them with that annotation:
@Bean
@RefreshScope
public ConsumerFactory<?, ?> kafkaConsumerFactory() {
    return new DefaultKafkaConsumerFactory<>(this.properties.buildConsumerProperties());
}

@Bean
@RefreshScope
public ProducerFactory<?, ?> kafkaProducerFactory() {
    DefaultKafkaProducerFactory<?, ?> factory = new DefaultKafkaProducerFactory<>(
            this.properties.buildProducerProperties());
    String transactionIdPrefix = this.properties.getProducer().getTransactionIdPrefix();
    if (transactionIdPrefix != null) {
        factory.setTransactionIdPrefix(transactionIdPrefix);
    }
    return factory;
}
These two beans rely on the configuration properties for the connection to the Apache Kafka broker, and that is fully enough to make them refreshable. Whenever a refresh event happens, these beans are re-initialized with fresh configuration properties.
I think the ConsumerFactory consumers (MessageListenerContainer and KafkaListenerEndpointRegistry) have to be restarted on that event as well. The point is that MessageListenerContainer starts a long-living process and therefore caches a KafkaConsumer instance for polling purposes.
The ProducerFactory consumers don't need to be restarted: even though a KafkaProducer is cached in the DefaultKafkaProducerFactory, it is going to be re-initialized during the @RefreshScope phase.
UPDATE
> I don't use Config Server. I get the new hosts from the Consul catalog service.
Right, I didn't say that you use a Config Server; it just looks similar to me. So, at a high level, I would take a look at a Config Client implementation for your Consul catalog solution.
Nevertheless, you can still emit a RefreshEvent, which will trigger all your @RefreshScope'd beans to be reloaded. For that purpose you need to implement ApplicationEventPublisherAware and emit that event whenever you get an update from Consul. Remember: the Kafka listener containers must be restarted. For that purpose you can listen for the RefreshScopeRefreshedEvent, since you are really interested in the restart only after all the @RefreshScope beans have been refreshed.
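A rough sketch of that wiring (onBrokersChanged() is a placeholder for wherever your Consul watch reports a change; RefreshEvent and RefreshScopeRefreshedEvent come from Spring Cloud Context):

@Component
public class BrokerChangeHandler implements ApplicationEventPublisherAware {

    private ApplicationEventPublisher publisher;

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    // Call this from your Consul watch/scheduler when the broker list changes.
    public void onBrokersChanged() {
        publisher.publishEvent(new RefreshEvent(this, null, "Kafka brokers changed"));
    }

    // Restart the listener containers once all @RefreshScope beans are refreshed.
    @EventListener
    public void onRefresh(RefreshScopeRefreshedEvent event) {
        registry.stop();
        registry.start();
    }
}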
More about refresh scope: https://gist.github.com/dsyer/a43fe5f74427b371519af68c5c4904c7
I have a local Region and I want to persist the Region data to disk. How do I define that using DataPolicy?
I'm not exactly sure what you're asking here (the question is not detailed enough)... either way, the DataPolicy enumeration has the PERSISTENT_REPLICATE and PERSISTENT_PARTITION values, which you can use when configuring a region to make it persistent.
Hope this helps.
If your Region is truly "local", in that the data for the Region is only stored locally to the GemFire data node (server), or perhaps to your application if it is either a client or a peer member of the cluster, then you can specify the data policy using the appropriate Region shortcut.
For example, if your application/arrangement is using a ClientCache (i.e. your application is a cache client), then you can still define LOCAL-only, persistent Regions using ClientRegionShortcut.LOCAL_PERSISTENT.
If your application/arrangement is using a peer Cache, (i.e. your application is actually participating as a data node/server, peer member in the cluster), then you can still define LOCAL-only, persistent Regions using RegionShortcut.LOCAL_PERSISTENT.
By way of GemFire API (peer Cache):
Cache peerCache = new CacheFactory(..)
.set(..)
.create();
RegionFactory localPersistentServerRegion =
peerCache.createRegionFactory(RegionShortcut.LOCAL_PERSISTENT);
By way of GemFire API (ClientCache):
ClientCache clientCache = new ClientCacheFactory(..)
.set(..)
.create();
ClientRegionFactory localPersistentClientRegion =
clientCache.createClientRegionFactory(ClientRegionShortcut.LOCAL_PERSISTENT);
By using SDG XML (peer Cache):
<gfe:local-region id="ExampleLocalPersistentServerRegion" persistent="true"/>
By using SDG XML (ClientCache):
<gfe:client-region id="ExampleLocalPersistentClientRegion"
shortcut="LOCAL_PERSISTENT"/>
By using SDG JavaConfig (peer Cache):
@Configuration
@PeerCacheApplication
class GemFireConfiguration {

    @Bean
    LocalRegionFactoryBean exampleLocalPersistentServerRegion(GemFireCache peerCache) {
        LocalRegionFactoryBean exampleRegion = new LocalRegionFactoryBean();
        exampleRegion.setCache(peerCache);
        exampleRegion.setClose(false);
        exampleRegion.setPersistent(true);
        return exampleRegion;
    }
}
By using SDG JavaConfig (ClientCache):
@Configuration
@ClientCacheApplication
class GemFireConfiguration {

    @Bean
    ClientRegionFactoryBean exampleLocalPersistentClientRegion(GemFireCache clientCache) {
        ClientRegionFactoryBean exampleRegion = new ClientRegionFactoryBean();
        exampleRegion.setCache(clientCache);
        exampleRegion.setClose(false);
        exampleRegion.setPersistent(true);
        return exampleRegion;
    }
}
There are also other ways of accomplishing the same using the pure Annotation-based configuration model in SDG, but this should get you going.
Cheers!
-John
I am using Spring & Hibernate. My application has 3 modules, and each module has a specific database, so the application deals with 3 databases. On server startup, if any one of the databases is down, the server does not start. My requirement is that even if one of the databases is down, the server should still start: since the other modules' databases are up, users can work with the other two modules. Please suggest how I can achieve this.
I am using Spring 3.x and Hibernate 3.x. I am also using c3p0 connection pooling.
The app server is Tomcat.
Thanks!
I would use the @Configuration annotation to make an object whose job it is to construct the beans and deal with the DB-down scenario. When constructing the beans, test whether the DB connections are up; if not, return a dummy version of your bean. This dummy gets injected into the relevant objects, and its job is really just to throw an "unavailable" exception when called. If your app can deal with these unavailable exceptions for certain functions, show them to the user, and continue to function where the other datasources are used, you should be fine.
@Configuration
public class DataAccessConfiguration {

    @Bean
    public DataSource dataSource() {
        try {
            // create data source to your database
            ....
            return realDataSource;
        } catch (Exception e) {
            // create dummy data source
            ....
            return dummyDataSource;
        }
    }
}
This was originally a comment:
Have you tried it? You wouldn't know whether a database is down until you connect to it, so unless c3p0 prevalidates all its connections, you wouldn't know that a particular database is down until you try to use it. By that time your application will have already started.
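If you do want connection problems surfaced at checkout time rather than on first use, c3p0 has validation settings; a rough sketch (the URL is a placeholder, and the exact semantics are worth checking against the c3p0 docs):

ComboPooledDataSource ds = new ComboPooledDataSource();
ds.setJdbcUrl("jdbc:mysql://localhost:3306/module1"); // placeholder URL
ds.setTestConnectionOnCheckout(true);  // validate each connection as it is handed out
ds.setPreferredTestQuery("SELECT 1");  // cheap validation query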