Send message to a specific partition of a topic - Java

Is there any way a producer could send messages to a specific partition of a topic in the broker?
As of now, I am able to send to a topic having 2 partitions, but I don't have control over which partition a message lands in.
@Component
@EnableBinding(Source.class)
public class RsvpsKafkaProducer {

    private static final int SENDING_MESSAGE_TIMEOUT_MS = 10000;

    private final Source source;

    public RsvpsKafkaProducer(Source source) {
        this.source = source;
    }

    public void sendRsvpMessage(WebSocketMessage<?> message) {
        System.out.println("sendRsvpMessage");
        source.output()
              .send(MessageBuilder.withPayload(message.getPayload()).build(),
                    SENDING_MESSAGE_TIMEOUT_MS);
    }
}
application.properties
spring.cloud.stream.kafka.binder.zkNodes=localhost:2181
spring.cloud.stream.kafka.binder.brokers=localhost:9093
spring.cloud.stream.bindings.output.destination=meetupTopic
spring.cloud.stream.bindings.output.producer.partitionCount=2
spring.cloud.stream.bindings.output.content-type=text/plain
spring.cloud.stream.bindings.output.producer.headerMode=raw
Is there any way I could achieve this using Spring Cloud Stream? I want some messages to go to partition P1 and some to partition P2 within meetupTopic.

MessageBuilder.withPayload(message.getPayload())
    .setHeader(KafkaHeaders.PARTITION_ID, 23)
    .build()

I haven't tried this but from the docs, it looks like it might work.
spring.cloud.stream.bindings.output.producer.partitionSelectorExpression=headers['partitionKey']
And then you add the header partitionKey when you send the message.
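For example, a hedged sketch of the sending side under that approach, inside the question's RsvpsKafkaProducer; the partition parameter is an assumption for illustration. (Caveat: in some Spring Cloud Stream versions the message-rooted expression belongs in partitionKeyExpression rather than partitionSelectorExpression, since the selector expression is evaluated against the extracted key; check the docs for your version.)

// Sketch, assuming partitionCount=2 on the output binding; a header value of
// 0 or 1 then selects partition P1 or P2 of meetupTopic.
public void sendRsvpMessage(WebSocketMessage<?> message, int partition) {
    source.output()
          .send(MessageBuilder.withPayload(message.getPayload())
                              .setHeader("partitionKey", partition)
                              .build(),
                SENDING_MESSAGE_TIMEOUT_MS);
}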

Related

How to create a test for DeadLetter Kafka

In my little microservice, I created a Kafka producer that sends messages with errors (messages with invalid JSON) to the DeadLetter topic, in this way:
@Component
public class KafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendDeadLetter(String message) {
        kafkaTemplate.send("DeadLetter", message);
    }
}
I would like to create a JUnit test for the completeness of the project, but I have no idea how to simulate a possible JSON error in order to write the test. I thank everyone for any possible help and advice.
To create a JUnit test consistent with your code, you should recreate the case where it is passed malformed or invalid JSON. In your case, I would configure a MockConsumer from which to read any message that the logic of your code routes to the dead letter topic.
To have a usable test structure, I recommend something like this:
private final List<String> messages = new ArrayList<>(); // collects received payloads

@KafkaListener(topics = "yourTopic")
public void listen(String message) {
    messages.add(message);
}
For testing, a basic structure could be:
@Test
public void testDeadLetter() throws InterruptedException {
    // Set up a MockConsumer
    MockConsumer<String, String> yourMockConsumer =
            new MockConsumer<String, String>(OffsetResetStrategy.EARLIEST);
    yourMockConsumer.subscribe(Collections.singletonList("yourTopic"));
    // Send a message on the embedded Kafka broker
    String error = "ERRORE";
    kafkaTemplate.send("yourTopic", error);
    // Reading the message may take a second
    Thread.sleep(1000);
    // Add an assertion checking that the received message equals the error above
}
I hope it will be useful to you!
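As a side note, a MockConsumer that merely subscribes will receive nothing from a broker; it has to be fed by hand. A minimal sketch of driving it directly, with no broker involved (topic name, partition, and payload are placeholders):

// MockConsumer needs an explicit assignment and beginning offsets before polling.
MockConsumer<String, String> consumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);
TopicPartition tp = new TopicPartition("yourTopic", 0);
consumer.assign(Collections.singletonList(tp));
consumer.updateBeginningOffsets(Collections.singletonMap(tp, 0L));

// Hand the consumer a deliberately malformed JSON payload, then poll it back.
consumer.addRecord(new ConsumerRecord<>("yourTopic", 0, 0L, "key", "{not-valid-json"));
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
assertEquals("{not-valid-json", records.iterator().next().value());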
You can create a Kafka topic using Testcontainers and write your tests on top of that.
Here is an example of how to use Testcontainers: https://github.com/0001vrn/testcontainers-example
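A minimal sketch of that approach, assuming the testcontainers-kafka, kafka-clients, and JUnit 5 dependencies are on the test classpath (topic name, image tag, and payload are placeholders):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import static org.junit.jupiter.api.Assertions.assertEquals;

@Testcontainers
class DeadLetterTopicIT {

    // Starts a throwaway Kafka broker in Docker for the duration of the test class.
    @Container
    static KafkaContainer kafka =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"));

    @Test
    void invalidJsonEndsUpInDeadLetter() {
        // Produce a deliberately malformed JSON payload to the DeadLetter topic.
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("DeadLetter", "{not-valid-json"));
        }

        // Consume it back and assert the payload arrived unchanged.
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "dlq-test");
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList("DeadLetter"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            assertEquals("{not-valid-json", records.iterator().next().value());
        }
    }
}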

What are the major issues while using a WebSocket to get a stream of data?

I'm doing this for the first time, where I am going to read a stream of data using a WebSocket.
Here is my code snippet
RsvpApplication
@SpringBootApplication
public class RsvpApplication {

    private static final String MEETUP_RSVPS_ENDPOINT = "ws://stream.myapi.com/2/rsvps";

    public static void main(String[] args) {
        SpringApplication.run(RsvpApplication.class, args);
    }

    @Bean
    public ApplicationRunner initializeConnection(RsvpsWebSocketHandler rsvpsWebSocketHandler) {
        return args -> {
            System.out.println("initializeConnection");
            WebSocketClient rsvpsSocketClient = new StandardWebSocketClient();
            rsvpsSocketClient.doHandshake(rsvpsWebSocketHandler, MEETUP_RSVPS_ENDPOINT);
        };
    }
}
RsvpsWebSocketHandler
@Component
class RsvpsWebSocketHandler extends AbstractWebSocketHandler {

    private static final Logger logger =
            Logger.getLogger(RsvpsWebSocketHandler.class.getName());

    private final RsvpsKafkaProducer rsvpsKafkaProducer;

    public RsvpsWebSocketHandler(RsvpsKafkaProducer rsvpsKafkaProducer) {
        this.rsvpsKafkaProducer = rsvpsKafkaProducer;
    }

    @Override
    public void handleMessage(WebSocketSession session, WebSocketMessage<?> message) {
        logger.log(Level.INFO, "New RSVP:\n {0}", message.getPayload());
        System.out.println("handleMessage");
        rsvpsKafkaProducer.sendRsvpMessage(message);
    }
}
RsvpsKafkaProducer
@Component
@EnableBinding(Source.class)
public class RsvpsKafkaProducer {

    private static final int SENDING_MESSAGE_TIMEOUT_MS = 10000;

    private final Source source;

    public RsvpsKafkaProducer(Source source) {
        this.source = source;
    }

    public void sendRsvpMessage(WebSocketMessage<?> message) {
        System.out.println("sendRsvpMessage");
        source.output()
              .send(MessageBuilder.withPayload(message.getPayload()).build(),
                    SENDING_MESSAGE_TIMEOUT_MS);
    }
}
As far as I know and have read about WebSockets, they need a one-time connection, and the stream of data then flows continuously until either party (client or server) stops.
I'm building this for the first time, so I'm trying to cover the major scenarios that can come up while dealing with 10,000+ messages per minute. There are two Kafka brokers in total, with enough space.
What can be done if the connection is lost, so that once reconnected, the client resumes consuming messages from the WebSocket where it left off at the last failure and keeps pushing messages into the Kafka brokers? (A reconnect sketch follows below.)
What can be done to put the WebSocket on hold, so that it stops pushing messages into the broker once a threshold of unprocessed messages (in the broker) is reached?
What can be done, when the broker reaches its threshold, to run a separate process that checks the available space in the broker and signals when to resume pushing messages into it?
Please share any other issues that need to be considered while setting this up.
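On the reconnection question, a minimal sketch, assuming a fixed-delay reconnect is acceptable. Note that a plain ws:// feed has no offset or replay concept, so messages published while disconnected are lost unless the source API supports resuming; the endpoint and delay below are placeholders:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.springframework.web.socket.CloseStatus;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.client.WebSocketClient;
import org.springframework.web.socket.client.standard.StandardWebSocketClient;
import org.springframework.web.socket.handler.AbstractWebSocketHandler;

class ReconnectingHandler extends AbstractWebSocketHandler {

    private static final String ENDPOINT = "ws://stream.myapi.com/2/rsvps";

    private final WebSocketClient client = new StandardWebSocketClient();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    void connect() {
        client.doHandshake(this, ENDPOINT);
    }

    @Override
    public void afterConnectionClosed(WebSocketSession session, CloseStatus status) {
        // The stream cannot be resumed at the point of failure, so reconnect
        // and continue from "now" (real code should add backoff and a retry cap).
        scheduler.schedule(this::connect, 5, TimeUnit.SECONDS);
    }
}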

Handling dead letter queue with delay

I want to do the following: when a message fails and lands in my dead letter queue, I want to wait 5 minutes and republish the same message to my queue.
Today, using Spring Cloud Stream and RabbitMQ, I wrote the following code, based on this documentation:
@Component
public class HandlerDlq {

    private static final Logger LOGGER = LoggerFactory.getLogger(HandlerDlq.class);
    private static final String X_RETRIES_HEADER = "x-retries";
    private static final String X_DELAY_HEADER = "x-delay";
    private static final int NUMBER_OF_RETRIES = 3;
    private static final int DELAY_MS = 300000;

    private RabbitTemplate rabbitTemplate;

    @Autowired
    public HandlerDlq(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    @RabbitListener(queues = MessageInputProcessor.DLQ)
    public void rePublish(Message failedMessage) {
        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = 0;
        }
        if (retriesHeader > NUMBER_OF_RETRIES) {
            LOGGER.warn("Message {} added to failed messages queue", failedMessage);
            this.rabbitTemplate.send(MessageInputProcessor.FAILED, failedMessage);
            throw new ImmediateAcknowledgeAmqpException("Message failed after " + NUMBER_OF_RETRIES + " attempts");
        }
        retriesHeader++;
        headers.put(X_RETRIES_HEADER, retriesHeader);
        headers.put(X_DELAY_HEADER, DELAY_MS * retriesHeader);
        LOGGER.warn("Retrying message, {} attempts", retriesHeader);
        this.rabbitTemplate.send(MessageInputProcessor.DELAY_EXCHANGE, MessageInputProcessor.INPUT_DESTINATION, failedMessage);
    }

    @Bean
    public DirectExchange delayExchange() {
        DirectExchange exchange = new DirectExchange(MessageInputProcessor.DELAY_EXCHANGE);
        exchange.setDelayed(true);
        return exchange;
    }

    @Bean
    public Binding bindOriginalToDelay() {
        return BindingBuilder.bind(new Queue(MessageInputProcessor.INPUT_DESTINATION))
                .to(delayExchange()).with(MessageInputProcessor.INPUT_DESTINATION);
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(MessageInputProcessor.FAILED);
    }
}
My MessageInputProcessor interface:
public interface MessageInputProcessor {

    String INPUT = "myInput";
    String INPUT_DESTINATION = "myInput.group";
    String DLQ = INPUT_DESTINATION + ".dlq"; // from the application.properties file
    String FAILED = INPUT + "-failed";
    String DELAY_EXCHANGE = INPUT_DESTINATION + "-DlqReRouter";

    @Input
    SubscribableChannel storageManagerInput();

    @Input(MessageInputProcessor.FAILED)
    SubscribableChannel storageManagerFailed();
}
And my properties file:
#dlx/dlq setup - retry dead letter 5 minutes later (300000ms later)
spring.cloud.stream.rabbit.bindings.myInput.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.myInput.consumer.republish-to-dlq=true
spring.cloud.stream.rabbit.bindings.myInput.consumer.dlq-ttl=3000
spring.cloud.stream.rabbit.bindings.myInput.consumer.delayedExchange=true
#input
spring.cloud.stream.bindings.myInput.destination=myInput
spring.cloud.stream.bindings.myInput.group=group
With this code, I can read from the dead letter queue and capture the header, but I can't put the message back on my queue (the LOGGER.warn("Retrying message, {} attempts", retriesHeader); line only runs once, even if I set a very long delay).
My guess is that the bindOriginalToDelay method is binding the exchange to a new queue rather than to mine. However, I couldn't find a way to bind my queue there instead of creating a new one. But I'm not even sure this is the error.
I've also tried sending to MessageInputProcessor.INPUT instead of MessageInputProcessor.INPUT_DESTINATION, but it didn't work as expected.
Also, unfortunately, I can't update the Spring framework due to dependencies in the project...
Could you help me put the failed message back on my queue after some time? I really didn't want to put a Thread.sleep there...
With that configuration, myInput.group is bound to the delayed (topic) exchange myInput with routing key #. It will also be bound to your explicit delayed exchange, with key myInput.group; you should see the same (single) queue bound to both exchanges. The myInput.group.dlq is bound to the DLX with key myInput.group. Everything looks correct to me.
You should probably remove spring.cloud.stream.rabbit.bindings.myInput.consumer.delayedExchange=true because you don't need the main exchange to be delayed.
You should set a longer TTL and examine the message in the DLQ to see if something stands out.
EDIT
I just copied your code with a 5 second delay and it worked fine for me (after turning off the delay on the main exchange). I got:
Retrying message, 4 attempts
and
added to failed messages queue
Perhaps you thought it was not working because you have a delay on the main exchange too?
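For reference, a sketch of the consumer properties after the suggested change (an assumption based on the answer above, with everything else as in the question):

#dlx/dlq setup - retry dead letter later via the explicit delayExchange() bean
spring.cloud.stream.rabbit.bindings.myInput.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.myInput.consumer.republish-to-dlq=true
spring.cloud.stream.rabbit.bindings.myInput.consumer.dlq-ttl=3000
# delayedExchange=true removed: only the explicit delay exchange needs to be delayed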

Spring Cloud stream: Kafka Sink gets alternate message

I am trying to build a simple cloud stream application with the Kafka binder. Let me describe the setup:
1. I have a producer producing to topic topic_1.
2. There's a stream binder, binding topic_1, after some processing, into topic_2.
@StreamListener(MyBinder.INPUT)
@SendTo(MyBinder.OUTPUT_2)
public String handleIncomingMsgs(String s) {
    logger.info(s); // prints all the messages
    return s;
}
When the producer produces messages, the StreamListener handleIncomingMsgs gets all the messages.
After receiving them, it should forward the messages to some other channel.
@Service
@EnableBinding(MyBinder.class)
public class LogMsg {

    @StreamListener(MyBinder.OUTPUT_2)
    public void handle(String board) {
        logger.info("Received payload: " + board); // prints every alternate message
    }
}
Here is my binder
public interface ViewsStreams {

    String INPUT = "input";
    String OUTPUT_1 = "output_1";
    String OUTPUT_2 = "output_2";

    @Autowired
    @Input(INPUT)
    SubscribableChannel job_board_views();

    @Autowired
    @Output(OUTPUT_1)
    MessageChannel outboundJobBoards();

    @Autowired
    @Output(OUTPUT_2)
    MessageChannel outboundUsers();
}
I am new to these technologies and unable to figure out what is going wrong here. Can someone please help?
Your guess is correct; you have two consumers on the OUTPUT_2 channel - the listener and the binding which sends out the message.
They each get alternate messages.
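One hedged way out (a sketch of one possible fix, not the library's prescribed pattern): don't put a @StreamListener on the OUTPUT_2 channel at all. Instead, bind a separate input channel to the same topic_2 destination and listen there, so the binder's outbound subscription is the only consumer left on OUTPUT_2. The INPUT_2 name and the properties below are assumptions for illustration:

public interface MyBinder {

    String INPUT = "input";
    String OUTPUT_2 = "output_2";
    String INPUT_2 = "input_2"; // new inbound binding that reads back from topic_2

    @Input(INPUT)
    SubscribableChannel input();

    @Output(OUTPUT_2)
    MessageChannel output2();

    @Input(INPUT_2)
    SubscribableChannel input2();
}

With the two bindings pointed at the same destination in application.properties:

spring.cloud.stream.bindings.output_2.destination=topic_2
spring.cloud.stream.bindings.input_2.destination=topic_2

The handler in LogMsg then becomes @StreamListener(MyBinder.INPUT_2) and receives every message instead of every other one.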

How to get a Kafka Topic Lag in Java

I want to see the lag position of a Kafka topic in Java. Someone here says that the code below will work.
AdminClient client = AdminClient.createSimplePlaintext("localhost:9092");
Map<TopicPartition, Object> offsets =
        JavaConversions.asJavaMap(client.listGroupOffsets("groupID"));
Long offset = (Long) offsets.get(new TopicPartition("topic", 0));
But when I tried to import kafka.admin.AdminClient, that listGroupOffsets method was not there. Please help me with this.
You can use https://github.com/yahoo/kafka-manager and its HTTP REST APIs to get consumer group lag and other details.
The listGroupOffsets method was introduced to AdminClient.scala starting with 0.10.2; see KAFKA-3853 for details. So you should use Kafka 0.10.2.0 or later.
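On newer clients there is also a pure-Java route: org.apache.kafka.clients.admin.AdminClient has listConsumerGroupOffsets since 2.0 and listOffsets since 2.5, so lag can be computed as end offset minus committed offset. A minimal sketch, assuming a broker on localhost:9092 and group groupID (both placeholders):

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class LagChecker {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Committed offsets for every partition the group has consumed.
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("groupID")
                         .partitionsToOffsetAndMetadata().get();

            // Latest (end) offsets for the same partitions.
            Map<TopicPartition, OffsetSpec> request = new HashMap<>();
            committed.keySet().forEach(tp -> request.put(tp, OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> ends =
                    admin.listOffsets(request).all().get();

            // Lag per partition = end offset - committed offset.
            committed.forEach((tp, meta) -> System.out.println(
                    tp + " lag = " + (ends.get(tp).offset() - meta.offset())));
        }
    }
}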
I am using the Spring framework. Using the code below, you can get the metrics via Java. The code works.
@Component
public class Receiver {

    private static final Logger LOGGER = LoggerFactory.getLogger(Receiver.class);

    @Autowired
    private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    public void testlag() {
        for (MessageListenerContainer messageListenerContainer :
                kafkaListenerEndpointRegistry.getListenerContainers()) {
            Map<String, Map<MetricName, ? extends Metric>> metrics = messageListenerContainer.metrics();
            metrics.forEach((clientid, metricMap) -> {
                System.out.println("------------------------For client id : " + clientid);
                metricMap.forEach((metricName, metricValue) -> {
                    // if (metricName.name().contains("lag"))
                    System.out.println("------------Metric name: " + metricName.name()
                            + "-----------Metric value: " + metricValue.metricValue());
                });
            });
        }
    }
}
