Azure Redis SSL Cluster + Lettuce Java (EDIT: lettuce version < 4.2)

I need to use Azure Redis Cluster, with password, with SSL, with pipelining support.
I was using Jedis until now, but it lacks support for the cluster + SSL + password + pipelining combination.
I tried lettuce (https://github.com/mp911de/lettuce/releases/tag/4.1.2.Final) and currently hit a connection issue I am not able to solve on my own.
Connecting to an Azure Redis Cluster (2 * P4) works without SSL but not with.
Also I can connect to a single node with SSL but without cluster support.
The problem is that when combining cluster + SSL, the AUTH call times out (the command is sent over the wire, but the call still times out).
The working cluster-without-SSL code looks like this:
RedisURI redisURI = RedisURI.Builder.redis(host, 6379)
        .withPassword(password)
        .build();
RedisClusterClient client = RedisClusterClient.create(redisURI);
RedisAdvancedClusterCommands<String, String> connection = client.connect().sync();
connection.set("a", "1");
System.out.println(connection.get("a"));
Output is 1
Enabling SSL:
RedisURI redisURI = RedisURI.Builder.redis(host, 6380)
        .withPassword(password)
        .withSsl(true)
        .build();
RedisClusterClient client = RedisClusterClient.create(redisURI);
RedisAdvancedClusterCommands<String, String> connection = client.connect().sync();
connection.set("a", "1");
System.out.println(connection.get("a"));
It hangs for one minute, and the log4j output looks like this:
2016-05-26 14:25:17,110 | TRACE | lettuce-nioEventLoop-3-1 | CommandEncoder | [/{CLIENT} -> {HOST}/{IP}:6380] Sent: *2
$4
AUTH
$44
{PASSWORD}
2016-05-26 14:26:17,134 | WARN | main | ClusterTopologyRefresh | Cannot connect to RedisURI [host='***', port=6380]
com.lambdaworks.redis.RedisCommandTimeoutException: Command timed out
at com.lambdaworks.redis.LettuceFutures.await(LettuceFutures.java:95)
at com.lambdaworks.redis.LettuceFutures.awaitOrCancel(LettuceFutures.java:74)
at com.lambdaworks.redis.AbstractRedisAsyncCommands.auth(AbstractRedisAsyncCommands.java:64)
at com.lambdaworks.redis.cluster.RedisClusterClient.connectToNode(RedisClusterClient.java:342)
at com.lambdaworks.redis.cluster.RedisClusterClient.connectToNode(RedisClusterClient.java:301)
at com.lambdaworks.redis.cluster.ClusterTopologyRefresh.getConnections(ClusterTopologyRefresh.java:240)
at com.lambdaworks.redis.cluster.ClusterTopologyRefresh.loadViews(ClusterTopologyRefresh.java:132)
at com.lambdaworks.redis.cluster.RedisClusterClient.loadPartitions(RedisClusterClient.java:468)
at com.lambdaworks.redis.cluster.RedisClusterClient.initializePartitions(RedisClusterClient.java:445)
at com.lambdaworks.redis.cluster.RedisClusterClient.connectClusterImpl(RedisClusterClient.java:359)
at com.lambdaworks.redis.cluster.RedisClusterClient.connect(RedisClusterClient.java:244)
at com.lambdaworks.redis.cluster.RedisClusterClient.connect(RedisClusterClient.java:231)
at com.ubikod.ermin.reach.tools.Test.main(Test.java:20)
Exception in thread "main" com.lambdaworks.redis.RedisException: Cannot retrieve initial cluster partitions from initial URIs [RedisURI [host='***', port=6380]]
at com.lambdaworks.redis.cluster.RedisClusterClient.loadPartitions(RedisClusterClient.java:471)
at com.lambdaworks.redis.cluster.RedisClusterClient.initializePartitions(RedisClusterClient.java:445)
at com.lambdaworks.redis.cluster.RedisClusterClient.connectClusterImpl(RedisClusterClient.java:359)
at com.lambdaworks.redis.cluster.RedisClusterClient.connect(RedisClusterClient.java:244)
at com.lambdaworks.redis.cluster.RedisClusterClient.connect(RedisClusterClient.java:231)
at com.ubikod.ermin.reach.tools.Test.main(Test.java:20)
Keeping SSL and disabling cluster works:
RedisURI redisURI = RedisURI.Builder.redis(host, 6380)
        .withPassword(password)
        .withSsl(true)
        .build();
RedisClient client = RedisClient.create(redisURI);
RedisCommands<String, String> connection = client.connect().sync();
connection.set("a", "1");
System.out.println(connection.get("a"));
So it's not just an SSL issue; it's an SSL + cluster combo issue.
I tried using withStartTls, disabling peer verification, and raising the timeout, in every combination of those, without luck.
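For completeness, those attempts looked roughly like this (a sketch; I varied the flags and the timeout value across runs):
import java.util.concurrent.TimeUnit;

RedisURI redisURI = RedisURI.Builder.redis(host, 6380)
        .withPassword(password)
        .withSsl(true)
        .withStartTls(true)                  // also tried false
        .withVerifyPeer(false)               // peer verification disabled
        .withTimeout(120, TimeUnit.SECONDS)  // raised well above the default
        .build();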
Any idea if it's a library bug or an Azure Redis bug?

I inspected the lettuce wiki and found that the issue is caused by neither a library bug nor an Azure Redis bug: this version of lettuce simply does not support Redis Cluster with SSL. See the quote below, from the subsection Connecting to Redis using String RedisURI of the wiki page.
lettuce supports SSL only on regular Redis connections. Master resolution using Redis Sentinel or Redis Cluster are not supported since both strategies provide Redis addresses to the native port. Redis Sentinel and Redis Cluster cannot provide the SSL ports.
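The title edit hints at the resolution: lettuce 4.2 added SSL (and authentication) support for Redis Cluster connections. Assuming an upgrade to 4.2.0.Final or later of the biz.paluch.redis:lettuce artifact is an option, the failing snippet from the question should then work unchanged, roughly:
RedisURI redisURI = RedisURI.Builder.redis(host, 6380)
        .withPassword(password)
        .withSsl(true)
        .build();
RedisClusterClient client = RedisClusterClient.create(redisURI);
RedisAdvancedClusterCommands<String, String> commands = client.connect().sync();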

Related

VertX EventBus not receiving messages in AWS context

I have a Java service running on 3 different ec2 instances. They form a cluster using Hazelcast. Here's part of my cluster.xml configuration:
<join>
  <multicast enabled="false"></multicast>
  <tcp-ip enabled="false"></tcp-ip>
  <aws enabled="${AWS_ENABLED}">
    <iam-role>DEFAULT</iam-role>
    <region>us-east-1</region>
    <security-group-name>sec-group-name</security-group-name>
    <hz-port>6100-6110</hz-port>
  </aws>
</join>
Here's the log message showing that discovery is successful:
[3.12.2] (This is the hazelcast version)
Members {size:3, ver:31} [
    Member [10.0.3.117]:6100 - f5a9d579-ae9c-4c3d-8126-0e8d3a1ecdb9
    Member [10.0.1.32]:6100 - 5799f451-f122-4886-92de-e351704e6980
    Member [10.0.1.193]:6100 - 626de40a-197a-446e-a44f-ac456a52d118 this
]
vertxInstance.sharedData() is working fine, meaning we can cache data between the instances.
However, the issue is when publishing messages to the instances using the vertx eventbus:
this.vertx.eventBus().publish(EventBusService.TOPIC, memberId);
and having this listener:
eventBus.consumer(TOPIC, event -> {
    logger.warn("Captured message: {}", event.body());
});
This configuration works locally (the consumer gets the messages), but once deployed to AWS it doesn't work.
I have tried setting the host explicitly just for testing, but this does not work either:
VertxOptions options = new VertxOptions();
options.setHAEnabled(true);
options.getEventBusOptions().setClustered(true);
options.getEventBusOptions().setHost("10.0.1.0");
What am I doing wrong and what are my options to debug this issue further?
EventBus communication does not use the cluster manager, but rather direct TCP connections.
Quote from this conversation: https://groups.google.com/g/vertx/c/fCiJpQh66fk
The solution was to explicitly set the public host and port options for the eventbus:
vertxOptions.getEventBusOptions().setClusterPublicHost(privateIpAddress);
vertxOptions.getEventBusOptions().setClusterPublicPort(5702);
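Put together, a minimal sketch of the clustered startup (assuming privateIpAddress has been resolved beforehand, e.g. from the EC2 instance metadata, and a Vert.x 3.x API):
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

VertxOptions options = new VertxOptions();
options.setHAEnabled(true);
options.getEventBusOptions().setClustered(true);
options.getEventBusOptions()
        .setClusterPublicHost(privateIpAddress) // the address other members can actually reach
        .setClusterPublicPort(5702);

Vertx.clusteredVertx(options, ar -> {
    if (ar.succeeded()) {
        Vertx vertx = ar.result();
        // deploy verticles and register eventbus consumers here
    } else {
        ar.cause().printStackTrace();
    }
});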

Topic created on all Kafka ports

server.properties setup:
listeners=PLAINTEXT://:29092, SSL://:29093
The SSL-related setup is done as well, so we can connect on 29092 for plaintext and on 29093 with SSL.
Here I am trying to produce data to port 29093, as below:
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, System.getProperty("kafkaPort", "localhost:29093"));
// SSL-related setup is also done in props
Producer<Long, String> producer = new KafkaProducer<>(props, new LongSerializer(), new KafkaSerializer());
final ProducerRecord<Long, String> record = new ProducerRecord<Long, String>(
        System.getProperty("kafkaTopic", "dqerror"), content);
RecordMetadata metadata = producer.send(record).get();
After publishing, the dqerror topic got created on both, and the data appears to get published on both as well.
Actually, what I am trying to find out is: is it possible to restrict data to a specific port?
Data is not published in "both" ports. There is only one Kafka cluster that is listening on two ports. There is one set of disks that the data is written into on your one broker.
Also, from what I can tell, there is only one topic used in your code.
If you want to restrict TCP traffic on any port, that would be a firewall rule from the OS, rather than any Kafka settings or Java code.
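If you want to convince yourself that both listeners front the same log, produce through one port and consume through the other (a sketch, assuming the listeners from the question and a recent kafka-clients on the classpath):
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
// read through the PLAINTEXT listener, even though the record was produced via SSL on 29093
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "dqerror-check");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

try (KafkaConsumer<Long, String> consumer =
        new KafkaConsumer<>(props, new LongDeserializer(), new StringDeserializer())) {
    consumer.subscribe(Collections.singletonList("dqerror"));
    // the same record appears here, because both ports serve the same topic on the same broker
    consumer.poll(Duration.ofSeconds(5)).forEach(r -> System.out.println(r.value()));
}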

server failover with Quarkus Reactive MySQL Clients / io.vertx.mysqlclient

Does io.vertx.mysqlclient support server failover as it can be set up with MySQL Connector/J?
My application is based on quarkus using io.vertx.mutiny.mysqlclient.MySQLPool which in turn is based on io.vertx.mysqlclient. If there is support for server failover in that stack, how can it be set up? I did not find any hints in the documentation and code.
No, it doesn't support failover.
You could create two clients and then use Mutiny failover methods to get the same effect:
MySQLPool client1 = ...
MySQLPool client2 = ...

private Uni<List<Data>> query(MySQLPool client) {
    // use the client param to send queries to the database
}

Uni<List<Data>> results = query(client1)
        .onFailure().recoverWithUni(() -> query(client2));
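And a sketch of how the two pools might be created (hypothetical hosts and credentials; the options classes come from the Vert.x MySQL client):
import io.vertx.mutiny.mysqlclient.MySQLPool;
import io.vertx.mysqlclient.MySQLConnectOptions;
import io.vertx.sqlclient.PoolOptions;

MySQLConnectOptions primary = new MySQLConnectOptions()
        .setHost("db-primary.example.com").setPort(3306)
        .setDatabase("mydb").setUser("app").setPassword("secret");
MySQLConnectOptions fallback = new MySQLConnectOptions()
        .setHost("db-fallback.example.com").setPort(3306)
        .setDatabase("mydb").setUser("app").setPassword("secret");

MySQLPool client1 = MySQLPool.pool(primary, new PoolOptions().setMaxSize(5));
MySQLPool client2 = MySQLPool.pool(fallback, new PoolOptions().setMaxSize(5));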

JedisCluster : redis.clients.jedis.exceptions.JedisNoReachableClusterNodeException: No reachable node in cluster

I was trying to connect to a JedisCluster (ElastiCache Redis) from Java, but I was getting a JedisConnectionException with "No reachable node in cluster".
Here is my code to connect to the JedisCluster:
public static void main(String[] args) throws IOException {
    final GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    poolConfig.setMaxWaitMillis(2000);
    poolConfig.setMaxTotal(300);

    Set<HostAndPort> jedisClusterNode = new HashSet<HostAndPort>();
    jedisClusterNode.add(new HostAndPort("mycachecluster.eaogs8.0001.usw2.cache.amazonaws.com", 6379));
    jedisClusterNode.add(new HostAndPort("mycachecluster.eaogs8.0002.usw2.cache.amazonaws.com", 6379));

    JedisCluster jedisCluster = new JedisCluster(jedisClusterNode, poolConfig);
    System.out.println("Cluster Size...." + jedisCluster.getClusterNodes().size());
    try {
        jedisCluster.set("foo", "bar");
        jedisCluster.get("foo");
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        jedisCluster.close();
    }
}
The exception I got after running this:
redis.clients.jedis.exceptions.JedisNoReachableClusterNodeException: No reachable node in cluster
at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnection(JedisSlotBasedConnectionHandler.java:57)
at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnectionFromSlot(JedisSlotBasedConnectionHandler.java:74)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:116)
at redis.clients.jedis.JedisClusterCommand.run(JedisClusterCommand.java:31)
at redis.clients.jedis.JedisCluster.set(JedisCluster.java:103)
I have checked
telnet mycachecluster.eaogs8.0001.usw2.cache.amazonaws.com 6379
as mentioned in the AWS docs, and got the reply Connected.
What is the issue here, and why am I not able to connect to the JedisCluster using Java?
Note: I am using Jedis version 2.9.0.
Update:
In AWS, encryption in-transit and encryption at-rest are activated. So this:
Jedis jedis = null;
try {
    jedis = new Jedis(URI.create("rediss://mycachecluster.eaogs8.0001.usw2.cache.amazonaws.com:6379"));
    System.out.println(jedis.ping());
    System.out.println("XXXXX: " + jedis.get("c"));
} catch (Exception exception) {
    exception.printStackTrace();
} finally {
    jedis.close();
}
works fine, but not the JedisCluster.
From URI.create("rediss://..."), it's evident that you are using the Redis SSL scheme (rediss) to create a successful connection with Jedis. But JedisCluster doesn't have SSL support yet.
There is a pending feature request regarding this.
JedisCluster had some problems connecting to a Redis cluster server with SSL enabled. Even with the latest revision (as of July 2020) we were getting the exception JedisNoReachableClusterNodeException. There are very few articles on the configuration required for the various server setups.
We needed the library in two languages, Java and Python. For Python I used redis-py-cluster. For Java we initially tried Jedis and then JedisCluster, but neither was helpful.
So another library I found is Lettuce. For a Redis cluster server with SSL support, the configuration is pretty straightforward, and it supports a builder pattern to construct the connection object with optional parameters. Here is a sample that creates and connects to the Redis cluster server:
RedisURI redisURI = RedisURI.Builder.redis("<<Redis Server primary endpoint>>", 6379)
        .withSsl(true)
        .withVerifyPeer(false)
        .build();
RedisClusterClient redisClient = RedisClusterClient.create(redisURI);
StatefulRedisClusterConnection<String, String> conn = redisClient.connect();
List<KeyValue<String, String>> res_1 = conn.sync().mget(keys...);
conn.close();
But note that if the Redis server is a single-node instance, then the Jedis library is also fine to use.

ActiveMQ Java Broker, Python Client

For legacy reasons, I have a Java ActiveMQ implementation of the broker/publisher over the vanilla TCP transport protocol. I wish to connect a Python client to it; however, all the "stomp"-based documentation doesn't seem to cover this case (I am not connecting over the STOMP protocol), and when I try the basic examples I get this error on the Java broker side:
[ActiveMQ Transport: tcp:///127.0.0.1:62860#5001] WARN org.apache.activemq.broker.TransportConnection.Transport - Transport Connection to: tcp://127.0.0.1:62860 failed: java.io.IOException: Unknown data type: 80
The Broker code is very vanilla in Java:
String localVMurl = "vm://localhost";
String remoterURL = "tcp://localhost:5001";
BrokerService broker = new BrokerService();
broker.addConnector(localVMurl);
broker.addConnector(remoterURL);
broker.setAdvisorySupport(true);
broker.start();
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(localVMurl+"?create=false");
Connection connection = connectionFactory.createConnection();
and the Python client just fails. I can't seem to find anything online about using plain "tcp://localhost:" connections from Python. Am I doing something wrong here?
import stomp

class MyListener(stomp.ConnectionListener):
    def on_error(self, headers, message):
        print('received an error "%s"' % message)

    def on_message(self, headers, message):
        print('received a message "%s"' % message)

conn = stomp.Connection(host_and_ports=[('localhost', 5001)])
conn.start()
conn.connect('admin', 'password', wait=True)
and I get the error:
IndexError: list index out of range
Without seeing the broker configuration it is a bit tricky to answer, but from the error I'd guess you are trying to connect a STOMP client to the OpenWire transport, which won't work; you need a STOMP TransportConnector configured on the broker and to point the STOMP client there.
See the ActiveMQ STOMP documentation.
To add STOMP support for an embedded broker you'd do something along the lines of:
brokerService.addConnector("stomp://0.0.0.0:61613");
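Applied to the broker snippet from the question, that might look roughly like this (a sketch; 61613 is the conventional STOMP port, and this assumes the STOMP transport, e.g. the activemq-stomp module, is on the classpath):
import org.apache.activemq.broker.BrokerService;

BrokerService broker = new BrokerService();
broker.addConnector("vm://localhost");        // in-VM, for the embedded Java side
broker.addConnector("tcp://localhost:5001");  // OpenWire, for the existing Java clients
broker.addConnector("stomp://0.0.0.0:61613"); // STOMP, for the Python client
broker.setAdvisorySupport(true);
broker.start();
The Python client would then connect to ('localhost', 61613) instead of 5001.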
