I am currently working on a Kafka module where I am using the spring-kafka abstraction for Kafka communication. I have integrated the producer and consumer against a real implementation; however, I am not sure how to test (specifically integration test) the business logic surrounding the consumer with @KafkaListener. I tried to follow the spring-kafka documentation and various blogs on the topic, but none of them answer my intended question.
Spring Boot test class
// imports omitted for brevity
@RunWith(SpringRunner.class)
@SpringBootTest(classes = PaymentAccountUpdaterApplication.class,
webEnvironment = SpringBootTest.WebEnvironment.NONE)
public class CardUpdaterMessagingIntegrationTest {
private final static String cardUpdateTopic = "TP.PRF.CARDEVENTS";
@Autowired
private ObjectMapper objectMapper;
@ClassRule
public static KafkaEmbedded kafkaEmbedded =
new KafkaEmbedded(1, false, cardUpdateTopic);
@Test
public void sampleTest() throws Exception {
Map<String, Object> consumerConfig =
KafkaTestUtils.consumerProps("test", "false", kafkaEmbedded);
consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
ConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerConfig);
ContainerProperties containerProperties = new ContainerProperties(cardUpdateTopic);
// the listener is installed via setupMessageListener(...) below
KafkaMessageListenerContainer<String, String>
container = new KafkaMessageListenerContainer<>(cf, containerProperties);
BlockingQueue<ConsumerRecord<String, String>> records = new LinkedBlockingQueue<>();
container.setupMessageListener((MessageListener<String, String>) data -> {
System.out.println("Added to Queue: "+ data);
records.add(data);
});
container.setBeanName("templateTests");
container.start();
ContainerTestUtils.waitForAssignment(container, kafkaEmbedded.getPartitionsPerTopic());
Map<String, Object> producerConfig = KafkaTestUtils.senderProps(kafkaEmbedded.getBrokersAsString());
producerConfig.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
producerConfig.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
ProducerFactory<String, Object> pf =
new DefaultKafkaProducerFactory<>(producerConfig);
KafkaTemplate<String, Object> kafkaTemplate = new KafkaTemplate<>(pf);
String payload = objectMapper.writeValueAsString(accountWrapper());
kafkaTemplate.send(cardUpdateTopic, 0, payload);
ConsumerRecord<String, String> received = records.poll(10, TimeUnit.SECONDS);
assertThat(received).has(partition(0));
}
@After
public void after() {
kafkaEmbedded.after();
}
private AccountWrapper accountWrapper() {
return AccountWrapper.builder()
.eventSource("PROFILE")
.eventName("INITIAL_LOAD_CARD")
.eventTime(LocalDateTime.now().toString())
.eventID("8730c547-02bd-45c0-857b-d90f859e886c")
.details(AccountDetail.builder()
.customerId("idArZ_K2IgE86DcPhv-uZw")
.vaultId("912A60928AD04F69F3877D5B422327EE")
.expiryDate("122019")
.build())
.build();
}
}
Listener Class
@Service
public class ConsumerMessageListener {
private static final Logger LOGGER = LoggerFactory.getLogger(ConsumerMessageListener.class);
private ConsumerMessageProcessorService consumerMessageProcessorService;
public ConsumerMessageListener(ConsumerMessageProcessorService consumerMessageProcessorService) {
this.consumerMessageProcessorService = consumerMessageProcessorService;
}
#KafkaListener(id = "cardUpdateEventListener",
topics = "${kafka.consumer.cardupdates.topic}",
containerFactory = "kafkaJsonListenerContainerFactory")
public void processIncomingMessage(Payload<AccountWrapper,Object> payloadContainer,
Acknowledgment acknowledgment,
#Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
#Header(KafkaHeaders.RECEIVED_PARTITION_ID) String partitionId,
#Header(KafkaHeaders.OFFSET) String offset) {
try {
// business logic to process the message
consumerMessageProcessorService.processIncomingMessage(payloadContainer);
} catch (Exception e) {
LOGGER.error("Unhandled exception in card event message consumer. Discarding offset commit." +
"message:: {}, details:: {}", e.getMessage(), messageMetadataInfo);
throw e;
}
acknowledgment.acknowledge();
}
}
My question is: in the test class I am asserting the partition, payload, etc. polled from the BlockingQueue; however, how can I verify that the business logic in the class annotated with @KafkaListener is executed properly and routes messages to different topics based on error handling and other business scenarios? In some examples I saw a CountDownLatch used for the assertion, which I don't want to put in my business logic in production-grade code. The message processor is also async, so I am not sure how to assert its execution.
Any help appreciated.
"...is getting executed properly and routing the messages to different topic based on error handling and other business scenarios."
An integration test can consume from that "different" topic to assert that the listener processed it as expected.
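For example, a minimal sketch along the same lines as the test above (the error-topic name is an assumption for illustration, and it would also need to be added to the KafkaEmbedded constructor):
// second test consumer, subscribed to the topic the listener routes failures to
ContainerProperties errorTopicProps = new ContainerProperties("TP.PRF.CARDEVENTS.ERROR");
KafkaMessageListenerContainer<String, String> errorContainer =
        new KafkaMessageListenerContainer<>(cf, errorTopicProps);
BlockingQueue<ConsumerRecord<String, String>> errorRecords = new LinkedBlockingQueue<>();
errorContainer.setupMessageListener((MessageListener<String, String>) errorRecords::add);
errorContainer.start();
ContainerTestUtils.waitForAssignment(errorContainer, kafkaEmbedded.getPartitionsPerTopic());
// publish a message the listener is expected to route to the error topic,
// then assert that it actually arrived there
kafkaTemplate.send(cardUpdateTopic, 0, payload);
ConsumerRecord<String, String> routed = errorRecords.poll(10, TimeUnit.SECONDS);
assertThat(routed).isNotNull();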
You could also add a BeanPostProcessor to your test case and wrap the ConsumerMessageListener bean in a proxy to verify the input arguments are as expected.
EDIT
Here is an example of wrapping the listener in a proxy...
@SpringBootApplication
public class So53678801Application {
public static void main(String[] args) {
SpringApplication.run(So53678801Application.class, args);
}
@Bean
public MessageConverter converter() {
return new StringJsonMessageConverter();
}
public static class Foo {
private String bar;
public Foo() {
super();
}
public Foo(String bar) {
this.bar = bar;
}
public String getBar() {
return this.bar;
}
public void setBar(String bar) {
this.bar = bar;
}
@Override
public String toString() {
return "Foo [bar=" + this.bar + "]";
}
}
}
@Component
class Listener {
#KafkaListener(id = "so53678801", topics = "so53678801")
public void processIncomingMessage(Foo payload,
Acknowledgment acknowledgment,
@Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
@Header(KafkaHeaders.RECEIVED_PARTITION_ID) String partitionId,
@Header(KafkaHeaders.OFFSET) String offset) {
System.out.println(payload);
// ...
acknowledgment.acknowledge();
}
}
and
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.listener.ack-mode=manual
and
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { So53678801Application.class,
So53678801ApplicationTests.TestConfig.class })
public class So53678801ApplicationTests {
@ClassRule
public static EmbeddedKafkaRule embededKafka = new EmbeddedKafkaRule(1, false, "so53678801");
@BeforeClass
public static void setup() {
System.setProperty("spring.kafka.bootstrap-servers",
embededKafka.getEmbeddedKafka().getBrokersAsString());
}
@Autowired
private KafkaTemplate<String, String> template;
@Autowired
private ListenerWrapper wrapper;
@Test
public void test() throws Exception {
this.template.send("so53678801", "{\"bar\":\"baz\"}");
assertThat(this.wrapper.latch.await(10, TimeUnit.SECONDS)).isTrue();
assertThat(this.wrapper.argsReceived[0]).isInstanceOf(Foo.class);
assertThat(((Foo) this.wrapper.argsReceived[0]).getBar()).isEqualTo("baz");
assertThat(this.wrapper.ackCalled).isTrue();
}
@Configuration
public static class TestConfig {
@Bean
public static ListenerWrapper bpp() { // BPPs have to be static
return new ListenerWrapper();
}
}
public static class ListenerWrapper implements BeanPostProcessor, Ordered {
private final CountDownLatch latch = new CountDownLatch(1);
private Object[] argsReceived;
private boolean ackCalled;
@Override
public int getOrder() {
return Ordered.HIGHEST_PRECEDENCE;
}
@Override
public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
if (bean instanceof Listener) {
ProxyFactory pf = new ProxyFactory(bean);
pf.setProxyTargetClass(true); // unless the listener is on an interface
pf.addAdvice(interceptor());
return pf.getProxy();
}
return bean;
}
private MethodInterceptor interceptor() {
return invocation -> {
if (invocation.getMethod().getName().equals("processIncomingMessage")) {
Object[] args = invocation.getArguments();
this.argsReceived = Arrays.copyOf(args, args.length);
Acknowledgment ack = (Acknowledgment) args[1];
args[1] = (Acknowledgment) () -> {
this.ackCalled = true;
ack.acknowledge();
};
try {
return invocation.proceed();
}
finally {
this.latch.countDown();
}
}
else {
return invocation.proceed();
}
};
}
}
}
Related
I'm really struggling to write a test that checks whether my Kafka consumer is correctly invoked when messages are sent to its designated topic.
My consumer:
@Service
@Slf4j
@AllArgsConstructor(onConstructor = @__(@Autowired))
public class ProcessingConsumer {
private AppService appService;
@KafkaListener(
topics = "${topic}",
containerFactory = "processingConsumerContainerFactory")
public void listen(ConsumerRecord<Key, Value> message, Acknowledgment ack) {
try {
appService.processMessage(message);
ack.acknowledge();
} catch (Throwable t) {
log.error("error while processing message!", t);
}
}
}
My consumer config:
@EnableKafka
@Configuration
public class ProcessingConsumerConfig {
@Value("${spring.kafka.schema-registry-url}")
private String schemaRegistryUrl;
private KafkaProperties props;
public ProcessingConsumerConfig(KafkaProperties kafkaProperties) {
this.props = kafkaProperties;
}
public Map<String, Object> deserializerConfigs() {
Map<String, Object> props = new HashMap<>();
props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
props.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, schemaRegistryUrl);
return props;
}
private KafkaAvroDeserializer getKafkaAvroDeserializer(Boolean isKey) {
KafkaAvroDeserializer kafkaAvroDeserializer = new KafkaAvroDeserializer();
kafkaAvroDeserializer.configure(deserializerConfigs(), isKey);
return kafkaAvroDeserializer;
}
private DefaultKafkaConsumerFactory consumerFactory() {
return new DefaultKafkaConsumerFactory<>(
props.buildConsumerProperties(),
getKafkaAvroDeserializer(true),
getKafkaAvroDeserializer(false));
}
#Bean(name = "processingConsumerContainerFactory")
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Key, Value>>
kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<Key, Value>
factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.getContainerProperties().setAckOnError(false);
factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
factory.setErrorHandler(new SeekToCurrentErrorHandler());
return factory;
}
}
Finally, my (wannabe) test:
@DirtiesContext
public class ProcessingConsumerTest extends BaseIntegrationTest {
@Autowired private ProcessingProducerFixture processingProducer;
@Autowired private ProcessingConsumer processingConsumer;
@org.springframework.beans.factory.annotation.Value("${topic}")
String topic;
@Test
public void consumer_shouldConsumeMessages_whenMessagesAreSent() throws Exception{
Thread.sleep(1000);
ProducerRecord<Key, Value> message = new ProducerRecord<>(topic, new Key("b"), new Value("a", "b", "c", "d"));
processingProducer.send(message);
}
}
And that's about it for all I have so far.
I've tried checking whether this approach actually reaches the consumer, both with the debugger and with simple print statements, but execution never seems to get there. And even if it were being invoked correctly by my test, I have no idea how to actually assert anything about it in the test.
Inject a mock AppService into the listener and verify its processMessage() was called.
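For example, a minimal sketch of that approach (assumes Mockito via spring-boot-starter-test, and that the test context is wired to the broker the listener consumes from):
@RunWith(SpringRunner.class)
@SpringBootTest
public class ProcessingConsumerVerificationTest {

    @MockBean // replaces the real AppService bean in the application context
    private AppService appService;

    @Autowired
    private ProcessingProducerFixture processingProducer;

    @org.springframework.beans.factory.annotation.Value("${topic}")
    private String topic;

    @Test
    public void consumer_shouldDelegateToAppService_whenMessageIsSent() throws Exception {
        processingProducer.send(new ProducerRecord<>(topic, new Key("b"), new Value("a", "b", "c", "d")));
        // waits up to 10s for the listener thread to invoke the service
        Mockito.verify(appService, Mockito.timeout(10_000)).processMessage(ArgumentMatchers.any());
    }
}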
I have two Kafka clusters whose IPs I am fetching dynamically from a database. I am using @KafkaListener for creating listeners. Now I want to create multiple Kafka listeners at runtime depending on the bootstrap-server attribute (comma-separated values), each one listening to one cluster. Can you please suggest how I can achieve this?
Spring-boot: 2.1.3.RELEASE
Kafka-2.0.1
Java-8
Your requirements are not clear, but assuming you want the same listener configuration to listen to multiple clusters, here is one solution: make the listener bean a prototype and mutate the container factory for each instance...
@SpringBootApplication
@EnableConfigurationProperties(ClusterProperties.class)
public class So55311070Application {
public static void main(String[] args) {
SpringApplication.run(So55311070Application.class, args);
}
private final Map<String, MyListener> listeners = new HashMap<>();
@Bean
public ApplicationRunner runner(ClusterProperties props, ConsumerFactory<Object, Object> cf,
ConcurrentKafkaListenerContainerFactory<Object, Object> containerFactory,
ApplicationContext context, KafkaListenerEndpointRegistry registry) {
return args -> {
AtomicInteger instance = new AtomicInteger();
Arrays.stream(props.getClusters()).forEach(cluster -> {
Map<String, Object> consumerProps = new HashMap<>(cf.getConfigurationProperties());
consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, cluster);
String groupId = "group" + instance.getAndIncrement();
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
containerFactory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerProps));
this.listeners.put(groupId, context.getBean("listener", MyListener.class));
});
registry.getListenerContainers().forEach(c -> System.out.println(c.getGroupId())); // 2.2.5 snapshot only
};
}
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public MyListener listener() {
return new MyListener();
}
}
class MyListener {
#KafkaListener(topics = "so55311070")
public void listen(String in) {
System.out.println(in);
}
}
#ConfigurationProperties(prefix = "kafka")
public class ClusterProperties {
private String[] clusters;
public String[] getClusters() {
return this.clusters;
}
public void setClusters(String[] clusters) {
this.clusters = clusters;
}
}
kafka.clusters=localhost:9092,localhost:9093
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false
Result
group0
group1
...
2019-03-23 11:43:25.993  INFO 74869 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [so55311070-0]
2019-03-23 11:43:25.994  INFO 74869 --- [ntainer#1-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [so55311070-0]
EDIT
Add code to retry starting failed containers.
It turns out we don't need a local map of listeners, the registry has a map of all containers, including the ones that failed to start.
@SpringBootApplication
@EnableConfigurationProperties(ClusterProperties.class)
public class So55311070Application {
public static void main(String[] args) {
SpringApplication.run(So55311070Application.class, args);
}
private boolean atLeastOneFailure;
private ScheduledFuture<?> restartTask;
@Bean
public ApplicationRunner runner(ClusterProperties props, ConsumerFactory<Object, Object> cf,
ConcurrentKafkaListenerContainerFactory<Object, Object> containerFactory,
ApplicationContext context, KafkaListenerEndpointRegistry registry, TaskScheduler scheduler) {
return args -> {
AtomicInteger instance = new AtomicInteger();
Arrays.stream(props.getClusters()).forEach(cluster -> {
Map<String, Object> consumerProps = new HashMap<>(cf.getConfigurationProperties());
consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, cluster);
String groupId = "group" + instance.getAndIncrement();
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
attemptStart(containerFactory, context, consumerProps, groupId);
});
registry.getListenerContainers().forEach(c -> System.out.println(c.getGroupId())); // 2.2.5 snapshot only
if (this.atLeastOneFailure) {
Runnable rescheduleTask = () -> {
registry.getListenerContainers().forEach(c -> {
this.atLeastOneFailure = false;
if (!c.isRunning()) {
System.out.println("Attempting restart of " + c.getGroupId());
try {
c.start();
}
catch (Exception e) {
System.out.println("Failed to start " + e.getMessage());
this.atLeastOneFailure = true;
}
}
});
if (!this.atLeastOneFailure) {
this.restartTask.cancel(false);
}
};
this.restartTask = scheduler.scheduleAtFixedRate(rescheduleTask,
Instant.now().plusSeconds(60),
Duration.ofSeconds(60));
}
};
}
private void attemptStart(ConcurrentKafkaListenerContainerFactory<Object, Object> containerFactory,
ApplicationContext context, Map<String, Object> consumerProps, String groupId) {
containerFactory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerProps));
try {
context.getBean("listener", MyListener.class);
}
catch (BeanCreationException e) {
this.atLeastOneFailure = true;
}
}
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public MyListener listener() {
return new MyListener();
}
@Bean
public TaskScheduler scheduler() {
return new ThreadPoolTaskScheduler();
}
}
class MyListener {
#KafkaListener(topics = "so55311070")
public void listen(String in) {
System.out.println(in);
}
}
The following class is included in several consumer applications:
@Component
@Configuration
public class HealthListener {
public static final String HEALTH_CHECK_QUEUE_NAME = "healthCheckQueue";
public static final String HEALTH_CHECK_FANOUT_EXCHANGE_NAME = "health-check-fanout";
@Bean
public Binding healthListenerBinding(
@Qualifier("healthCheckQueue") Queue queue,
@Qualifier("instanceFanoutExchange") FanoutExchange exchange) {
return BindingBuilder.bind(queue).to(exchange);
}
@Bean
public FanoutExchange instanceFanoutExchange() {
return new FanoutExchange(HEALTH_CHECK_FANOUT_EXCHANGE_NAME, true, false);
}
@Bean
public Queue healthCheckQueue() {
return new Queue(HEALTH_CHECK_QUEUE_NAME);
}
@RabbitListener(queues = HEALTH_CHECK_QUEUE_NAME)
public String healthCheck() {
return "some result";
}
}
I'm trying to send a message to fanout exchange, and receive all replies, to know which consumers are running.
I can send a message and get the first reply like this:
@Autowired
RabbitTemplate template;
// ...
String firstReply = template.convertSendAndReceiveAsType("health-check-fanout", "", "", ParameterizedTypeReference.forType(String.class));
However, I need to get all replies to this message, not just the first one. I need to set up a reply listener, but I'm not sure how.
The (convertS|s)endAndReceive.*() methods are not designed to handle multiple replies; they are strictly one request/one reply methods.
You would need to use a (convertAndS|s)end() method to send the request, and implement your own reply mechanism, perhaps using a listener container for the replies, together with some component to aggregate the replies.
You could use something like a Spring Integration Aggregator for that, but you would need some mechanism (ReleaseStrategy) that would know when all expected replies are received.
Or you can simply receive the discrete replies and handle them individually.
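For the aggregator option, a sketch might look like this (assuming spring-integration-amqp is on the classpath; the "healthCheckId" correlation header, the "replies" queue, and the expected-reply count are illustrative assumptions):
// the sender stamps each request with a correlation header and replyTo = "replies";
// this flow collects replies sharing that header and releases them as one group
@Bean
public IntegrationFlow healthRepliesFlow(ConnectionFactory connectionFactory) {
    return IntegrationFlows
            .from(Amqp.inboundAdapter(connectionFactory, "replies"))
            .aggregate(a -> a
                    .correlationExpression("headers['healthCheckId']")
                    .releaseStrategy(group -> group.size() == EXPECTED_CONSUMERS) // hypothetical constant
                    .groupTimeout(5_000)                // give up waiting after 5 seconds
                    .sendPartialResultOnExpiry(true))   // emit whatever replies arrived
            .handle(message -> System.out.println("replies: " + message.getPayload()))
            .get();
}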
EDIT
@SpringBootApplication
public class So54207780Application {
public static void main(String[] args) {
SpringApplication.run(So54207780Application.class, args);
}
@Bean
public ApplicationRunner runner(RabbitTemplate template) {
return args -> template.convertAndSend("fanout", "", "foo", m -> {
m.getMessageProperties().setReplyTo("replies");
return m;
});
}
#RabbitListener(queues = "queue1")
public String listen1(String in) {
return in.toUpperCase();
}
#RabbitListener(queues = "queue2")
public String listen2(String in) {
return in + in;
}
#RabbitListener(queues = "replies")
public void replyHandler(String reply) {
System.out.println(reply);
}
@Bean
public FanoutExchange fanout() {
return new FanoutExchange("fanout");
}
@Bean
public Queue queue1() {
return new Queue("queue1");
}
@Bean
public Binding binding1() {
return BindingBuilder.bind(queue1()).to(fanout());
}
@Bean
public Queue queue2() {
return new Queue("queue2");
}
@Bean
public Binding binding2() {
return BindingBuilder.bind(queue2()).to(fanout());
}
@Bean
public Queue replies() {
return new Queue("replies");
}
}
and
FOO
foofoo
I'm working on a Spring Batch job. I have a partitioning step (over a list of objects) followed by a slave step with a reader and a writer.
I want to execute the processStep in parallel, so I want a specific reader-writer instance pair for each partition.
At the moment, the created partitions use the same reader-writer instances, so those operations run serially: read and write the first partition, then do the same for the next one once the first is complete.
The spring boot configuration class:
@Configuration
@Import({ DataSourceConfiguration.class })
public class BatchConfiguration {
private final static int COMMIT_INTERVAL = 1;
@Autowired
private JobBuilderFactory jobBuilderFactory;
@Autowired
private StepBuilderFactory stepBuilderFactory;
@Autowired
@Qualifier(value = "mySqlDataSource")
private DataSource mySqlDataSource;
public static int GRID_SIZE = 3;
public static List<Pojo> myList;
@Bean
public Job myJob() throws UnexpectedInputException, ParseException, NonTransientResourceException, Exception {
return jobBuilderFactory.get("myJob")
.incrementer(new RunIdIncrementer())
.start(partitioningStep())
.build();
}
#Bean(name="partitionner")
public MyPartitionner partitioner() {
return new MyPartitionner();
}
@Bean
public SimpleAsyncTaskExecutor taskExecutor() {
SimpleAsyncTaskExecutor taskExecutor = new SimpleAsyncTaskExecutor();
taskExecutor.setConcurrencyLimit(GRID_SIZE);
return taskExecutor;
}
@Bean
public Step partitioningStep() throws NonTransientResourceException, Exception {
return stepBuilderFactory.get("partitioningStep")
.partitioner("processStep", partitioner())
.step(processStep())
.taskExecutor(taskExecutor())
.build();
}
@Bean
public Step processStep() throws UnexpectedInputException, ParseException, NonTransientResourceException, Exception {
return stepBuilderFactory.get("processStep")
.<List<Pojo>, List<Pojo>> chunk(COMMIT_INTERVAL)
.reader(processReader())
.writer(processWriter())
.taskExecutor(taskExecutor())
.build();
}
@Bean
public ProcessReader processReader() throws UnexpectedInputException, ParseException, NonTransientResourceException, Exception {
return new ProcessReader();
}
@Bean
public ProcessWriter processWriter() {
return new ProcessWriter();
}
}
The partitioner class
public class MyPartitionner implements Partitioner{
@Autowired
private IService service;
@Override
public Map<String, ExecutionContext> partition(int gridSize) {
// a list of 300 objects, partitioned as below
...
Map<String, ExecutionContext> partitionData = new HashMap<String, ExecutionContext>();
ExecutionContext executionContext0 = new ExecutionContext();
executionContext0.putString("from", Integer.toString(0));
executionContext0.putString("to", Integer.toString(100));
partitionData.put("Partition0", executionContext0);
ExecutionContext executionContext1 = new ExecutionContext();
executionContext1.putString("from", Integer.toString(101));
executionContext1.putString("to", Integer.toString(200));
partitionData.put("Partition1", executionContext1);
ExecutionContext executionContext2 = new ExecutionContext();
executionContext2.putString("from", Integer.toString(201));
executionContext2.putString("to", Integer.toString(299));
partitionData.put("Partition2", executionContext2);
return partitionData;
}
}
The Reader class
public class ProcessReader implements ItemReader<List<Pojo>>, ChunkListener {
@Autowired
private IService service;
private StepExecution stepExecution;
private static List<String> processedIntervals = new ArrayList<String>();
@Override
public List<Pojo> read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException {
System.out.println("Instance reference: "+this.toString());
if(stepExecution.getExecutionContext().containsKey("from") && stepExecution.getExecutionContext().containsKey("to")){
Integer from = Integer.valueOf(stepExecution.getExecutionContext().get("from").toString());
Integer to = Integer.valueOf(stepExecution.getExecutionContext().get("to").toString());
if(from != null && to != null && !processedIntervals.contains(from + "" + to) && to < BatchConfiguration.myList.size()){
processedIntervals.add(String.valueOf(from + "" + to));
return BatchConfiguration.myList.subList(from, to);
}
}
return null;
}
@Override
public void beforeChunk(ChunkContext context) {
this.stepExecution = context.getStepContext().getStepExecution();
}
@Override
public void afterChunk(ChunkContext context) { }
@Override
public void afterChunkError(ChunkContext context) { }
}
The writer class
public class ProcessWriter implements ItemWriter<List<Pojo>>{
private final static Logger LOGGER = LoggerFactory.getLogger(ProcessWriter.class);
@Autowired
private IService service;
@Override
public void write(List<? extends List<Pojo>> pojos) throws Exception {
if(!pojos.isEmpty()){
for(Pojo item : pojos.get(0)){
try {
service.remove(item.getId());
} catch (Exception e) {
LOGGER.error("Error occured while removing the item [" + item.getId() + "]", e);
}
}
}
}
}
Can you please tell me what is wrong with my code?
Resolved by adding @StepScope to my reader and writer bean declarations:
@Configuration
@Import({ DataSourceConfiguration.class })
public class BatchConfiguration {
...
@Bean
@StepScope
public ProcessReader processReader() throws UnexpectedInputException, ParseException, NonTransientResourceException, Exception {
return new ProcessReader();
}
@Bean
@StepScope
public ProcessWriter processWriter() {
return new ProcessWriter();
}
...
}
This way, each partition gets its own instance of the chunk components (reader and writer).
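As a side note, a step-scoped bean can also receive the partition bounds directly through late binding, instead of pulling them out of the StepExecution inside the reader (a sketch, assuming the same "from"/"to" context keys and a hypothetical constructor taking the bounds):
@Bean
@StepScope
public ProcessReader processReader(
        @Value("#{stepExecutionContext['from']}") Integer from, // injected per partition
        @Value("#{stepExecutionContext['to']}") Integer to) {
    return new ProcessReader(from, to); // hypothetical constructor
}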
How do I acknowledge messages manually, without using auto acknowledgement?
Is there a way to do this with the @RabbitListener and @EnableRabbit style of configuration?
Most of the documentation tells us to use SimpleMessageListenerContainer along with ChannelAwareMessageListener.
However, using that we lose the flexibility provided by the annotations.
I have configured my service as below :
@Service
public class EventReceiver {
@Autowired
private MessageSender messageSender;
@RabbitListener(queues = "${eventqueue}")
public void receiveMessage(Order order) throws Exception {
// code for processing order
}
}
My RabbitConfiguration is as below
@EnableRabbit
public class RabbitApplication implements RabbitListenerConfigurer {
public static void main(String[] args) {
SpringApplication.run(RabbitApplication.class, args);
}
@Bean
public MappingJackson2MessageConverter jackson2Converter() {
MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
return converter;
}
@Bean
public SimpleRabbitListenerContainerFactory myRabbitListenerContainerFactory() {
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(rabbitConnectionFactory());
factory.setMaxConcurrentConsumers(5);
factory.setMessageConverter((MessageConverter) jackson2Converter());
factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
return factory;
}
@Bean
public ConnectionFactory rabbitConnectionFactory() {
CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
connectionFactory.setHost("localhost");
return connectionFactory;
}
@Override
public void configureRabbitListeners(RabbitListenerEndpointRegistrar registrar) {
registrar.setContainerFactory(myRabbitListenerContainerFactory());
}
@Autowired
private EventReceiver receiver;
}
Any help will be appreciated on how to adapt manual channel acknowledgement along with the above style of configuration.
If we implement the ChannelAwareMessageListener then the onMessage signature will change.
Can we implement ChannelAwareMessageListener on a service ?
Add the Channel to the @RabbitListener method...
#RabbitListener(queues = "${eventqueue}")
public void receiveMessage(Order order, Channel channel,
#Header(AmqpHeaders.DELIVERY_TAG) long tag) throws Exception {
...
}
and use the tag in the basicAck, basicReject.
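For example (a sketch; the processing body and the requeue choice are illustrative):
@RabbitListener(queues = "${eventqueue}")
public void receiveMessage(Order order, Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) long tag) throws Exception {
    try {
        // process the order
        channel.basicAck(tag, false);     // ack this single delivery on success
    } catch (Exception e) {
        channel.basicReject(tag, false);  // discard; pass true to requeue instead
    }
}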
EDIT
@SpringBootApplication
@EnableRabbit
public class So38728668Application {
public static void main(String[] args) throws Exception {
ConfigurableApplicationContext context = SpringApplication.run(So38728668Application.class, args);
context.getBean(RabbitTemplate.class).convertAndSend("", "so38728668", "foo");
context.getBean(Listener.class).latch.await(60, TimeUnit.SECONDS);
context.close();
}
@Bean
public Queue so38728668() {
return new Queue("so38728668");
}
@Bean
public Listener listener() {
return new Listener();
}
public static class Listener {
private final CountDownLatch latch = new CountDownLatch(1);
#RabbitListener(queues = "so38728668")
public void receive(String payload, Channel channel, #Header(AmqpHeaders.DELIVERY_TAG) long tag)
throws IOException {
System.out.println(payload);
channel.basicAck(tag, false);
latch.countDown();
}
}
}
application.properties:
spring.rabbitmq.listener.acknowledge-mode=manual
Just in case you need to use onMessage() from the ChannelAwareMessageListener interface, you can do it this way.
@Component
public class MyMessageListener implements ChannelAwareMessageListener {
@Override
public void onMessage(Message message, Channel channel) {
log.info("Message received.");
// do something with the message
channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
}
}
And for the Rabbit configuration:
@Configuration
public class RabbitConfig {
public static final String topicExchangeName = "exchange1";
public static final String queueName = "queue1";
public static final String routingKey = "queue1.route.#";
@Bean
public ConnectionFactory connectionFactory() {
CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
connectionFactory.setUsername("xxxx");
connectionFactory.setPassword("xxxxxxxxxx");
connectionFactory.setPort(5672);
connectionFactory.setVirtualHost("vHost1");
return connectionFactory;
}
@Bean
public RabbitTemplate rabbitTemplate() {
return new RabbitTemplate(connectionFactory());
}
@Bean
Queue queue() {
return new Queue(queueName, true);
}
@Bean
TopicExchange exchange() {
return new TopicExchange(topicExchangeName);
}
@Bean
Binding binding(Queue queue, TopicExchange exchange) {
return BindingBuilder.bind(queue).to(exchange).with(routingKey);
}
@Bean
public SimpleMessageListenerContainer listenerContainer(MyMessageListener myRabbitMessageListener) {
SimpleMessageListenerContainer listenerContainer = new SimpleMessageListenerContainer();
listenerContainer.setConnectionFactory(connectionFactory());
listenerContainer.setQueueNames(queueName);
listenerContainer.setMessageListener(myRabbitMessageListener);
listenerContainer.setAcknowledgeMode(AcknowledgeMode.MANUAL);
listenerContainer.setConcurrency("4");
listenerContainer.setPrefetchCount(20);
return listenerContainer;
}
}
Thanks for Gary's help, I finally solved the issue. I am documenting this for the benefit of others.
This should be covered as part of the standard documentation in the Spring AMQP reference documentation page.
Service class is as below.
@Service
public class Consumer {
@RabbitListener(queues = "${eventqueue}")
public void receiveMessage(Order order, Channel channel) throws Exception {
// the above method name can be anything, but it must have Channel as the second parameter
channel.basicConsume(eventQueue, false, channel.getDefaultConsumer());
// get the delivery tag
long deliveryTag = channel.basicGet(eventQueue, false).getEnvelope().getDeliveryTag();
try {
// code for processing order
} catch (Exception e) {
// handle exception
channel.basicReject(deliveryTag, true);
}
// if all logic is successful
channel.basicAck(deliveryTag, false);
}
}
The configuration has also been modified as below:
public class RabbitApplication implements RabbitListenerConfigurer {
private static final Logger log = LoggerFactory.getLogger(RabbitApplication.class);
public static void main(String[] args) {
SpringApplication.run(RabbitApplication.class, args);
}
@Bean
public MappingJackson2MessageConverter jackson2Converter() {
MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
return converter;
}
@Bean
public DefaultMessageHandlerMethodFactory myHandlerMethodFactory() {
DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
factory.setMessageConverter(jackson2Converter());
return factory;
}
@Autowired
private Consumer consumer;
@Override
public void configureRabbitListeners(RabbitListenerEndpointRegistrar registrar) {
registrar.setMessageHandlerMethodFactory(myHandlerMethodFactory());
}
...
}
Note: there is no need to configure the Rabbit connection factory or container factory, etc., since the annotations implicitly take care of all this.