I am trying to create new Cassandra endpoints with different read request timeouts, so that the endpoint with the larger timeout can serve requests that return large amounts of data. I found Scala code that uses the com.datastax.cassandra driver, and a cassandra-default.yaml with the read_request_timeout parameter. How can I set read_request_timeout in the Cluster builder, or elsewhere in code?
Cluster
  .builder
  .addContactPoints(cassandraHost.split(","): _*)
  .withPort(cassandraPort)
  .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
  .withLoadBalancingPolicy(
    new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build())).build
# How long the coordinator should wait for read operations to complete
read_request_timeout_in_ms: 5000
Set it at the query level:
session.execute(
    new SimpleStatement("CQL HERE").setReadTimeoutMillis(65000));
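Since setReadTimeoutMillis is defined on Statement in driver 3.x, the same per-request override works for prepared/bound statements too (a sketch; the query and id are just placeholders):
PreparedStatement prepared = session.prepare("SELECT * FROM my_ks.my_table WHERE id = ?");
Statement bound = prepared.bind(id)
    .setReadTimeoutMillis(65000); // per-request driver-side timeout, overrides the cluster default
session.execute(bound);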
If you want to set it while building the Cluster, use SocketOptions:
Cluster cluster = Cluster.builder()
    .addContactPoint("127.0.0.1")
    .withSocketOptions(
        new SocketOptions()
            .setConnectTimeoutMillis(2000))
    .build();
Socket Options
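The example above sets the connect timeout; for the read timeout being asked about, SocketOptions also has setReadTimeoutMillis (a sketch along the same lines, the contact point is a placeholder):
Cluster cluster = Cluster.builder()
    .addContactPoint("127.0.0.1")
    .withSocketOptions(
        new SocketOptions()
            // driver-side read timeout applied to every request on this Cluster;
            // keep it above the server's read_request_timeout_in_ms
            .setReadTimeoutMillis(65000))
    .build();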
server.properties setup:
listeners=PLAINTEXT://:29092, SSL://:29093
The SSL-related setup is done as well, so we can connect on 29092 for plaintext and on 29093 with SSL.
Here I am trying to produce data to port 29093, as below:
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, System.getProperty("kafkaPort", "localhost:29093"));
//SSL related setup too done in props
Producer<Long, String> producer = new KafkaProducer<>(props, new LongSerializer(), new KafkaSerializer());
final ProducerRecord<Long, String> record = new ProducerRecord<Long, String>(System.getProperty("kafkaTopic", "dqerror"),
content);
RecordMetadata metadata = producer.send(record).get();
After publishing, the dqerror topic gets created in both, and the data also gets published in both.
The data is published into two topics.
Is it actually possible to restrict it so data only goes to a specific port?
Data is not published to "both" ports. There is only one Kafka cluster that is listening on two ports, and one set of disks on your one broker that the data is written to.
Also, from what I can tell, there is only one topic used in your code.
If you want to restrict TCP traffic on any port, that would be a firewall rule from the OS, rather than any Kafka settings or Java code.
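As a quick check (a sketch, not part of the original answer; the localhost address is an assumption and exception handling is omitted), describing the cluster through either port returns the same cluster id, because both listeners front the same broker:
Properties adminProps = new Properties();
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092"); // or "localhost:29093" plus the SSL settings
try (Admin admin = Admin.create(adminProps)) {
    // the same cluster id comes back through either listener:
    // one broker, two ports
    System.out.println(admin.describeCluster().clusterId().get());
}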
I have a Java Play application that leverages Akka Streams to read from Kafka and output to a WebSocket. My application creates a Kafka source using the following:
Consumer.plainSource(getKafkaConsumerSettings(groupId), Subscriptions.topics(kafkaTopic))
    .map(consumerRecord -> consumerRecord.value())
where getKafkaConsumerSettings returns the following:
ConsumerSettings.create(this.actorSystem, new StringDeserializer(), new JsonHelper.deserializer())
    .withBootstrapServers(this.bootstrapServers)
    .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest")
    .withGroupId(groupId);
I then create a Flow from the source and return it to the WebSocket. When I run the application I can publish messages to Kafka and correctly receive all messages sent after the WebSocket was created.
The issue I am facing is that when I run this in a test case, my source never emits any messages. I can get the test cases to work if I change the auto offset reset config to "earliest", but this is not a good solution, since I want to test the case where new subscribers only see messages sent after they subscribed.
ActorSystem actorSystem = ActorSystem.create();
KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:6.2.1"));

@SneakyThrows
@Test
void testProduce4Valid2Invalid() {
    kafka.start();
    toTest = new KafkaService(
            topic,
            kafka.getBootstrapServers(),
            actorSystem,
            Materializer.createMaterializer(actorSystem),
            "latest"
    );
    KafkaProducer<String, JsonNode> producer = new KafkaProducer<>(
            ImmutableMap.of(
                    ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers(),
                    ProducerConfig.CLIENT_ID_CONFIG, UUID.randomUUID().toString()
            ),
            new StringSerializer(),
            new JsonSerdes<>(JsonNode.class).serializer()
    );
    var nodes = toTest.getKafkaStream(groupId)
            .take(1)
            .toMat(Sink.seq(), Consumer::createDrainingControl)
            .run(actorSystem)
            .streamCompletion()
            .toCompletableFuture();
    producer.send(new ProducerRecord<>(topic, "key", event1));
    assertThat(nodes).asList().containsExactlyInAnyOrder(event1);
}
I am clearly sending the messages after the source has been materialized, so I would expect it to receive them. The other relevant piece of code runs the source in the flow I send to the WebSocket; there I use
.runWith(BroadcastHub.of(JsonNode.class), this.materializer);
so that all connections share the same source.
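Spelled out, that part looks roughly like this (a sketch; kafkaSource and materializer stand in for fields of my class):
Source<JsonNode, NotUsed> shared =
    kafkaSource.runWith(BroadcastHub.of(JsonNode.class), materializer);
// each websocket connection attaches to `shared`, so only one Kafka consumer
// is created regardless of how many clients connect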
How can I set up the tests to work with auto offset reset set to "latest"?
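One way this is sometimes handled (a sketch, not from the original post; it assumes the Kafka Admin client is on the test classpath and reuses kafka, groupId, topic, event1 and producer from the test above) is to wait for the consumer group to become stable before producing, since with auto.offset.reset=latest the record is only seen if it is written after the source has been assigned its partitions:
// inside the @SneakyThrows test, before the producer.send(...) call
try (Admin admin = Admin.create(
        Map.<String, Object>of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers()))) {
    ConsumerGroupDescription group;
    do {
        Thread.sleep(100);
        group = admin.describeConsumerGroups(List.of(groupId))
                .describedGroups()
                .get(groupId)
                .get();
    } while (group.state() != ConsumerGroupState.STABLE || group.members().isEmpty());
}
// the group now has a partition assignment, so a record produced here is visible to "latest"
producer.send(new ProducerRecord<>(topic, "key", event1)).get();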
I am trying to get Lettuce to connect to the newly promoted master (the former slave) after the old master fails, but all writes stop. The writes only resume after the failed host reconnects, now as a slave, and from then on Lettuce keeps writing to the new master (the former slave).
I tried setting periodic topology refreshes, as well as adaptive refreshes on all events, but it didn't help. Is there another setting I have to use?
This is how I configured the client:
final List<RedisURI> redisURIs = buildRedisURIs(redisServerSettings.getNodes());
final RedisClusterClient client = RedisClusterClient.create(clientResources, redisURIs);
final ClusterTopologyRefreshOptions refreshOptions =
ClusterTopologyRefreshOptions.builder()
.enableAllAdaptiveRefreshTriggers()
.adaptiveRefreshTriggersTimeout(Duration.ofMinutes(2))
.refreshTriggersReconnectAttempts(2)
.enablePeriodicRefresh(Duration.ofMinutes(10))
.build();
client.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(refreshOptions).build());
I solved the problem.
Because Lettuce normally doesn't time the command out, it waited forever for the response from the server. Setting a timeout caused some transactions to fail, but after those failed transactions, the reads and writes continued.
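For reference, the timeout can be added to the same options shown above (a sketch, assuming a Lettuce version that provides TimeoutOptions, i.e. 5.1+; five seconds is just an example value):
client.setOptions(ClusterClientOptions.builder()
        .topologyRefreshOptions(refreshOptions)
        // fail commands after 5 seconds instead of waiting forever on a dead node
        .timeoutOptions(TimeoutOptions.enabled(Duration.ofSeconds(5)))
        .build());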
Cassandra is set up in 3 data centers (dc1, dc2 & dc3) forming one cluster.
A Java application is running in dc1.
The dc1 application has its Cassandra contact points pointed at dc1 (only the IPs of the Cassandra nodes in dc1 are given to the application).
When the dc1 Cassandra nodes are turned off, the application throws an exception like:
All host(s) tried for query failed (no host was tried)
More Info:
cassandra-driver-core-3.0.8.jar
netty-3.10.5.Final.jar
netty-buffer-4.0.37.Final.jar
netty-codec-4.0.37.Final.jar
netty-common-4.0.37.Final.jar
netty-handler-4.0.37.Final.jar
netty-transport-4.0.37.Final.jar
Keyspace : NetworkTopologyStrategy
Replication : dc1:2, dc2:2, dc3:2
Cassandra Version : 3.11.4
Here are some things I have found out with connections and Cassandra (and BTW, I believe Cassandra has one of the best HA configurations of any database I've worked with over the past 25 years).
1) Ensure you have all of the components specified in your connection configuration. Here is an example of some of the connection components, but there are others as well (maybe you've already done this):
cluster = Cluster.builder()
.addContactPoints(nodes.split(","))
.withCredentials(username, password)
.withPoolingOptions(poolingOptions)
.withLoadBalancingPolicy(
new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder()
.withLocalDc("MYLOCALDC")
.withUsedHostsPerRemoteDc(1)
.allowRemoteDCsForLocalConsistencyLevel()
.build()
)
).build();
2) Unless the entire DC you're "working in" is down, you can still receive errors. The driver doesn't fail over to alternate DCs unless every node in the local DC is down. If only some nodes are down and your client can't satisfy its consistency level (CL), you will get errors. When I tested this a while back, I was hoping that if the client CL couldn't be achieved in the local DC (even with some local nodes still up) but an alternate DC could satisfy it, the driver would automatically fail over, but that was not the case (as of my last test).
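For example, the client CL I'm referring to is set on the statement (or as a cluster-wide default); the keyspace/table and id here are just placeholders:
Statement select = new SimpleStatement("SELECT * FROM my_ks.my_table WHERE id = ?", id)
        .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
// LOCAL_QUORUM only counts replicas in the local DC, so losing too many
// dc1 replicas fails this query even while dc2/dc3 are healthy
session.execute(select);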
Maybe that helps?
-Jim
Here I have a MongoDB cluster set up with two config servers, two shards (each with 3 nodes), and one mongos server. For example:
Config servers
IP1 configsvr1
IP2 configsvr2
Shard 1
IP3 shardsvr1 (Primary)
IP4 shardsvr2 (Secondary)
IP5 shardsvr3 (Secondary)
Shard 2
IP6 shardsvr4 (Primary)
IP7 shardsvr5 (Secondary)
IP8 shardsvr6 (Secondary)
IP9 mongos
Now, is it possible to direct all read operations to a particular node of each shard? Let's say:
All read operations should be performed in shard 1 on node shardsvr3, and in shard 2 on shardsvr6.
Please share your thoughts!
Thanks in advance.
After exploring, I found that it is possible to perform all read operations on a particular secondary node.
Here are the steps to do that:
Add a tag to the secondary node:
conf = rs.conf()
conf.members[0].tags = {"use": "production" }
rs.reconfig(conf)
Set the read preference to secondary:
db.getMongo().setReadPref('secondary')
Now query by passing the tag in the query.
Using spring-data-mongodb:
MongoClientOptions mongoClientOptions = MongoClientOptions.builder()
.connectTimeout(connectionTimeoutInterval)
.socketTimeout(socketTimeoutInterval)
.serverSelectionTimeout(serverSelectionTimeoutInterval)
.readPreference(ReadPreference.secondary(new TagSet(createTagList())))
.build();
return new MongoClient(new ServerAddress(host, port),
Collections.singletonList(MongoCredential.createCredential(dbUserName, databaseName, dbPassword.toCharArray())),
mongoClientOptions);
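The resulting MongoClient can then back the usual spring-data-mongodb template (a sketch, assuming a spring-data version that still accepts the legacy MongoClient; createMongoClient stands for the method above and MyDocument is a placeholder entity):
MongoClient mongoClient = createMongoClient();
MongoTemplate mongoTemplate = new MongoTemplate(mongoClient, databaseName);
// reads issued through this template should be served by secondaries carrying the {"use": "production"} tag
List<MyDocument> docs = mongoTemplate.findAll(MyDocument.class);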