Redis Cluster Configuration - java

I am using Spring RedisTemplate (Jedis 2.0.0) with redis-server (2.9.50). It's working perfectly for a single instance, but I want to create a master/slave cluster environment with two different instances in which replication and failover happen automatically (via configuration).
Please answer the following queries:
What's the proper way to create a master/slave Redis cluster? (Right now I only have redis-server installed, with no config changes.)
How do I connect Jedis to a Redis cluster?
What should I use to replicate data between Redis cluster nodes?

I think you need to upgrade your version of Jedis to get the cluster support. From the README, the usage looks straightforward:
import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

Set<HostAndPort> jedisClusterNodes = new HashSet<HostAndPort>();
// Jedis Cluster will attempt to discover cluster nodes automatically
jedisClusterNodes.add(new HostAndPort("127.0.0.1", 7379));
JedisCluster jc = new JedisCluster(jedisClusterNodes);
jc.set("foo", "bar");
String value = jc.get("foo");
In terms of setup, there are a lot of considerations; you should consult the Redis cluster tutorial for the basics. The section Creating a Redis Cluster using the create-cluster script will get you up and running pretty quickly, and you can make tweaks and changes from there.
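Since you're on Spring's RedisTemplate, wiring it to a cluster could look like the sketch below (an assumption on my part: spring-data-redis 1.7+, which added RedisClusterConfiguration; the hosts/ports are placeholders):

import java.util.Arrays;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;

// Seed nodes; the remaining cluster topology is discovered automatically
RedisClusterConfiguration clusterConfig =
        new RedisClusterConfiguration(Arrays.asList("127.0.0.1:7379", "127.0.0.1:7380"));
JedisConnectionFactory factory = new JedisConnectionFactory(clusterConfig);
factory.afterPropertiesSet();

StringRedisTemplate template = new StringRedisTemplate(factory);
template.opsForValue().set("foo", "bar");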

Related

Need reference document or code for jdbc Kafka connect configuration setup for distributed mode as docker container

I need to design and configure a Kafka JDBC Connect project where both the source and the sink are Postgres databases, and I am using Apache Kafka 2.8.
I have prepared a POC for standalone mode, but I need to design it for distributed mode, and the data volume would be several million records.
Can you share any reference for setting up distributed mode, as well as parameter tuning and best practices?
I have gone through several documents but haven't found a precise document covering only Apache Kafka with the JDBC connector.
Also, please let me know how I can dockerize this solution.
Thanks,
Suvendu
reference to setup for distributed mode
This is in the Kafka documentation. Run connect-distributed.sh along with its config file.
parameters tuning and best practices?
The config has reasonable defaults, but you're welcome to inspect the file for any changes. The only other thing would be heap settings; 2G is the default Xmx, and it can be set with the KAFKA_HEAP_OPTS env var.
Running in distributed mode starts an HTTP server, and you POST JSON to it with the same key/value pairs as the standalone JDBC worker properties file.
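As a rough sketch of that POST (assuming Java 11+ for java.net.http, a Connect worker listening on localhost:8083, and the Confluent JDBC connector on the plugin path; all names and connection details below are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterJdbcSource {
    public static void main(String[] args) throws Exception {
        // Same keys as the standalone worker properties file, wrapped in "config"
        String json = "{"
                + "\"name\": \"postgres-source\","
                + "\"config\": {"
                + "\"connector.class\": \"io.confluent.connect.jdbc.JdbcSourceConnector\","
                + "\"connection.url\": \"jdbc:postgresql://localhost:5432/mydb\","
                + "\"connection.user\": \"postgres\","
                + "\"connection.password\": \"secret\","
                + "\"mode\": \"incrementing\","
                + "\"incrementing.column.name\": \"id\","
                + "\"topic.prefix\": \"pg-\""
                + "}}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}

A successful registration returns 201 Created, and GET /connectors lists the connectors that are running.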
precise document only for apache Kafka with jdbc connector
There's the official configuration page and a handful of blogs (by Confluent) about it.
how can I make this solution dockerized?
The Confluent Docker images would be best for this, though you may have to confluent-hub install the JDBC connector into an image of your own.
I'd recommend Debezium as the source, though.

Elastic Beanstalk, Java Spring Boot and RDS Multi AZ Deployment

We are about to deploy a Spring Boot 2.3 Application on Elastic Beanstalk running Java 8 (Not Corretto 8).
We are thinking of using Multi-AZ for the RDS, and I am reading the documentation for that:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
and there is a section, Setting the JVM TTL for DNS name lookups, which states that we should be aware of the DNS cache in case of failover. It says the following:
The default TTL can vary according to the version of your JVM and whether a security manager is installed. Many JVMs provide a default TTL less than 60 seconds. If you're using such a JVM and not using a security manager, you can ignore the rest of this topic. For more information on security managers in Oracle, see The security manager in the Oracle documentation.
What is the default value for Java 8 in Elastic Beanstalk? I can't seem to find it.
Also, from my understanding, if the TTL value is large and the database fails, the application won't fail over to the instance in the other AZ because the cached DNS entry won't change. Is that correct?
Also, if the default value is too big, what is the Spring Boot way of setting that property without using XML files?
Thanks a lot in advance.
You can tune this in the JVM with code like:
java.security.Security.setProperty("networkaddress.cache.ttl" , "1");
java.security.Security.setProperty("networkaddress.cache.negative.ttl" , "1");
These values are the number of seconds to cache successful and failed DNS lookups, respectively.
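To the "Spring Boot way" question: this is a JVM-wide java.security property rather than a Spring property, so one common approach (a sketch, not the only option) is to set it in your main class before the application context starts, so it takes effect before the first DNS lookup:

import java.security.Security;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        // Must run before any DNS lookup happens; AWS suggests a TTL of 60 seconds or less
        Security.setProperty("networkaddress.cache.ttl", "60");
        SpringApplication.run(Application.class, args);
    }
}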
However, you may also want to consider RDS Proxy, as it can speed up failover. There should be no code changes, only configuration changes. Note that RDS Proxy is billed as a separate service, so check its pricing.

Hazelcast UI. Is there any Hazelcast UI endpoint which shows the metrics of the server

I have recently embedded the Hazelcast distributed cache into my application and the performance is quite good. Out of interest, I would like to see the data that is stored in the Hazelcast server and any statistics of the server. Is there a UI endpoint to see the metrics of the server?
You have 2 options.
(1) You can use the JMX beans that expose metrics and then use a GUI like VisualVM to view them (a short example of enabling them follows after these links)...
http://docs.hazelcast.org/docs/latest-development/manual/html/Management/Monitoring_with_JMX.html
(2) For up to 2 members in a cluster, you can use the Hazelcast Management Center, which provides visual metrics/graphs etc...
http://docs.hazelcast.org/docs/management-center/3.8.4/manual/html/Deploying_and_Starting.html
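For option (1), enabling the JMX beans is a one-liner on the config (a minimal sketch, assuming Hazelcast 3.x, where hazelcast.jmx is the relevant property):

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

Config config = new Config();
// Expose map/queue/etc. statistics as JMX MBeans
config.setProperty("hazelcast.jmx", "true");
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);

Then attach VisualVM or JConsole to the JVM and browse the com.hazelcast MBean domain.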
You can deploy the Hazelcast Management Center web application into your Tomcat or any web server that is accessible from your Hazelcast application server.
https://download.hazelcast.com/management-center/management-center-3.8.4.zip
Then provide the configuration below in your code.
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

// Create a Hazelcast configuration with Management Center enabled.
Config config = new Config("instanceOne");
config.getManagementCenterConfig().setEnabled(true);
// Pass the URL where you deployed the Management Center app.
config.getManagementCenterConfig().setUrl("http://hostname:8080/mancenter");
// Create a Hazelcast instance with the config object.
HazelcastInstance instanceOne = Hazelcast.newHazelcastInstance(config);
The above answers are very informative, but it's very confusing for someone like me who is new to JMX as well as Hazelcast. Were you able to figure this out and view your Hazelcast cache? I have a Spring Boot application that uses Hazelcast, and after following the link from the answer above, I installed and started the Hazelcast Management Center. But I can't really figure out how to connect my application to this Management Center. Any step-by-step help?

How to automate Kafka Testing

We have developed a system using Kafka to queue data and later consume that data to place orders for users.
We have tested certain things manually, but now our aim is to automate the process.
Is there any client available to test it? I found ways to unit test it using the Kafka client itself, but my aim is to test the system as a whole.
EDIT: our purpose is just API testing, i.e., just the back-end, not the UI.
You can start Kafka programmatically in your integration test. Kafka uses ZooKeeper, so first look at the ZooKeeper TestingServer: an instance of this class creates and starts a ZooKeeper server on the given port.
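For example (a minimal sketch; TestingServer ships in the curator-test artifact):

import org.apache.curator.test.TestingServer;

// Creates and starts an in-process ZooKeeper server on port 2181
TestingServer zkServer = new TestingServer(2181);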
Next look at KafkaServerStartable.scala; you have to provide a configuration that points to your in-memory ZooKeeper server and invoke its startup() method. Here is some code:
import java.util.Properties;
import kafka.server.KafkaConfig;
import kafka.server.KafkaServerStartable;

public class KafkaTest {
    public KafkaTest() {
        Properties properties = createProperties();
        KafkaConfig kafkaConfig = new KafkaConfig(properties);
        KafkaServerStartable kafka = new KafkaServerStartable(kafkaConfig);
        kafka.startup();
    }
    // Minimal broker settings; zookeeper.connect must point at the in-memory Zk
    private Properties createProperties() {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", "127.0.0.1:2181");
        return properties;
    }
}
Hope these help :)
You can go for integration testing or end-to-end testing by bringing up Kafka in a Docker container. If you use Apache kafka-clients:2.1.0, you don't need to deal with ZooKeeper at the API level while producing or consuming records.
Dockerizing Kafka and testing against it helps cover the scenarios for a single-node as well as a multi-node Kafka cluster. This way you don't have to test against a mock/in-memory Kafka first and the real Kafka later. This can be done using Testcontainers.
If you have many test scenarios to cover, you can go for declarative Kafka testing, docker-compose style, by which you can eliminate the Kafka client API coding.
Check out some handy examples here for validating produce and consume.
The Testcontainers project also supports docker-compose.
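For instance, a minimal Testcontainers sketch (the image tag is a placeholder; pick one matching your broker version):

import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

public class KafkaContainerExample {
    public static void main(String[] args) {
        try (KafkaContainer kafka =
                new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"))) {
            kafka.start();
            // Wire this address into your producer/consumer "bootstrap.servers" config
            System.out.println(kafka.getBootstrapServers());
        }
    }
}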
As I understood, you want to implement end-to-end tests starting from messages. Some colleagues and I recently did some research into libraries, tools and frameworks for testing event-driven systems that use Kafka.
We found Zerocode, which is an automated API testing tool using a declarative language like JSON or YAML. It supports REST, SOAP and, what we were interested in, messaging. It sends and consumes messages from topics and makes assertions at the end; it's easy to learn and use. Here is the link for more details: Zerocode. It seems like a good option, although we are just starting to use it.
You will need to have the Kafka brokers and their dependencies running for this solution to work, but nothing that a docker-compose file and/or some scripts can't handle to bring up a test environment.
Another way is to implement your own project with the Kafka libraries and use them to send and receive messages in the tests.
Unfortunately, we couldn't find more options available out there. Kafka has a proposal to create a test kit, but it's not in progress yet.
Unfortunately, the approach described by Pavel does not work for Kafka 2.8+ anymore. However, I could make our end-to-end tests with Kafka 3.2 work using the approach taken by KarelDB:
// brokerId, zkConnect, noInterBrokerSecurityProtocol, noFile and
// EMPTY_SASL_PROPERTIES are set up as in KarelDB's test harness.
Properties props = TestUtils.createBrokerConfig(
        brokerId,
        zkConnect,
        false,
        false,
        TestUtils.RandomPort(),
        noInterBrokerSecurityProtocol,
        noFile,
        EMPTY_SASL_PROPERTIES,
        true,
        false,
        TestUtils.RandomPort(),
        false,
        TestUtils.RandomPort(),
        false,
        TestUtils.RandomPort(),
        Option.<String>empty(),
        1,
        false,
        1,
        (short) 1
);
KafkaConfig config = KafkaConfig.fromProps(props);
KafkaServer server = TestUtils.createServer(config, Time.SYSTEM);
// `createServer` will also start your Kafka server.
// To shutdown:
server.shutdown();

Integration tests with Oracle Coherence

We have a set of integration tests that use Oracle Coherence. All of them use the same config, and the problem is that when you run them in parallel, their Coherence nodes join into one cluster, and one test can affect the others. Is there a simple way to prevent this joining?
Thanks!
We use LittleGrid in our tests rather than starting Coherence natively. You can programmatically set up the grid and set the configuration.
For creating different clusters on a single machine for testing, you can use different tangosol-override config files. Just keep a tangosol-override file in the classpath of each cluster, give the clusters different names, and specify different multicast addresses (not mandatory, I guess). If you are using Coherence 12c, you can also create different managed clusters in a single WebLogic Server domain.
When you start a Coherence node, it reads the tangosol-override file and issues multicast messages to the address mentioned in the file. When it doesn't find any other node or cluster with the same cluster name, it starts its own cluster and identifies itself as the master node.
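If you prefer not to maintain separate override files, the same isolation can be achieved via system properties (a sketch, assuming Coherence 3.x-style tangosol.* property names):

import java.util.UUID;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.Cluster;

// Give each test JVM its own cluster identity before Coherence starts
System.setProperty("tangosol.coherence.cluster", "test-" + UUID.randomUUID());
System.setProperty("tangosol.coherence.ttl", "0"); // keep multicast on this host
Cluster cluster = CacheFactory.ensureCluster();

A unique cluster name per test run means parallel JVMs never recognize each other as members of the same cluster.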
