I'm using Spring Cloud Stream with a Kafka broker for microservice inter-communication. As part of this, StreamBridge is used to send the messages, which works fine.
But during consumption, the message need not be consumed immediately; rather, it should be consumed only once a condition is satisfied.
From the documentation, I understand that I need to use Polled Consumers for this (do correct me if I'm mistaken).
This is what I've tried based on my understanding of the documentation.
application.properties
spring.cloud.stream.pollable-source=consumeResponse
spring.cloud.stream.function.definition=consumeResponse
# stream bridge
spring.cloud.stream.bindings.outputchannel1.destination=REQUEST_TOPIC
spring.cloud.stream.bindings.outputchannel1.binder=kafka1
# polled consumer
spring.cloud.stream.bindings.consumeResponse-in-0.binder=kafka1
spring.cloud.stream.bindings.consumeResponse-in-0.destination=REQUEST_TOPIC
spring.cloud.stream.bindings.consumeResponse-in-0.group=consumer_cloud_stream1
spring.cloud.stream.binders.kafka1.type=kafka
MainApplication.java
@Bean
public CommandLineRunner commandLineRunner(ApplicationContext ctx) {
    return args -> {
        // produce messages
        for (int i = 0; i < 5; i++) {
            streamBridge.send("outputchannel1", "msg" + i);
            System.out.println("Request :: " + "msg" + i);
        }
    };
}

@Bean
public Consumer<String> consumeResponse() {
    return (response) -> {
        // consume message
        System.out.println("Response :: " + response);
    };
}

@Bean
public ApplicationRunner poller(PollableMessageSource destIn, MessageChannel destOut) {
    return args -> {
        while (someCondition) { // some condition that checks whether or not to consume the message
            try {
                // condition satisfied, so forward the message to the consumer
                if (!destIn.poll(m -> {
                    String newPayload = (String) m.getPayload();
                    destOut.send(new GenericMessage<>(newPayload));
                })) {
                    Thread.sleep(1000);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
}
But this throws the following exception:
Parameter 1 of method poller in com.MainApplication required a single bean, but 2 were found:
- nullChannel: defined in null
- errorChannel: defined in null
I'd appreciate it if someone could help me out here or point me towards a working example of this.
Spring Boot version: 2.6.4
Spring Cloud version: 2021.0.1
Why don't you just inject the StreamBridge into your runner instead of the message channel?
By default, stream bridge output channels are created on-demand (first send).
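For illustration, a minimal sketch of that suggestion, reusing the outputchannel1 binding and the someCondition flag from the question (both carried over from the original code, not verified here):

@Bean
public ApplicationRunner poller(PollableMessageSource destIn, StreamBridge streamBridge) {
    return args -> {
        while (someCondition) {
            try {
                if (!destIn.poll(m -> {
                    String newPayload = (String) m.getPayload();
                    // send via StreamBridge instead of injecting a MessageChannel,
                    // which was ambiguous (nullChannel vs. errorChannel)
                    streamBridge.send("outputchannel1", newPayload);
                })) {
                    Thread.sleep(1000);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
}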
After upgrading to Spring Boot v2.7.1 we are seeing lots of queued tasks; we never saw queued tasks increasing like this in the previous version we were using, v2.2.2.
Our team has tried to investigate things in v2.7.1 but couldn't find anything in this version.
Can anyone please review the code and let us know what we are missing or have written wrong that is causing the issue? We are using Spring Integration to pull emails from a client server, and for that we have added a TaskExecutor to have concurrent polling.
Versions that we use:
Spring Boot = 2.7.1
Spring Integration = 5.5.14
Earlier we were using:
Spring Boot = 2.2.2 release
Spring Integration = 5.2.3 release
I've attached the code below.
Configuration class for Imap Integration
@Configuration
@EnableIntegration
public class ImapIntegrationConfig {

    private final ApplicationContext applicationContext;

    @Autowired
    public ImapIntegrationConfig(ApplicationContext applicationContext) {
        this.applicationContext = applicationContext;
    }

    @Bean("mailTaskExecutor")
    public ThreadPoolTaskExecutor mailTaskExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setMaxPoolSize(1000);
        taskExecutor.setCorePoolSize(100);
        taskExecutor.setTaskDecorator(new SecurityAwareTaskDecorator(applicationContext));
        taskExecutor.setWaitForTasksToCompleteOnShutdown(true);
        taskExecutor.setAwaitTerminationSeconds(Integer.MAX_VALUE);
        return taskExecutor;
    }

    @Bean("imapMailChannel")
    public ExecutorChannelSpec imapMailChannel() {
        return MessageChannels.executor(mailTaskExecutor());
    }

    @Bean
    public HeaderMapper<MimeMessage> mailHeaderMapper() {
        return new DefaultMailHeaderMapper();
    }
}
ImapListener Class to register the flow
public void registerImapFlow(ImapSetting imapSetting) {
    ImapMailReceiver mailReceiver = createImapMailReceiver(imapSetting);

    // create the flow for an email process
    //@formatter:off
    StandardIntegrationFlow flow = IntegrationFlows
            .from(Mail.imapInboundAdapter(mailReceiver),
                    consumer -> consumer.autoStartup(true)
                            .poller(Pollers.fixedDelay(Duration.ofSeconds(5), Duration.ofMinutes(2))
                                    .taskExecutor(taskExecutor)
                                    .errorHandler(t -> logger.error("Error while polling emails for address " + imapSetting.getUsername(), t))
                                    .maxMessagesPerPoll(10)))
            .enrichHeaders(Map.of(CONCERN_CODE, imapSetting.getConcernCode(), IMAP_CONFIG_ID, imapSetting.getImapSettingId()))
            .channel(imapMailChannel).get();
    //@formatter:on

    // give the bean a unique name to avoid clashes with multiple imap settings
    String flowId = concernIdentifier.getConcernIdentifier() + "-" + imapSetting.getImapSettingId();
    IntegrationFlowContext.IntegrationFlowRegistration existingFlow = integrationFlowContext.getRegistrationById(flowId);
    if (existingFlow != null) {
        // destroy the previous beans
        existingFlow.destroy();
    }
    // register the new flow
    integrationFlowContext.registration(flow).id(flowId).useFlowIdAsPrefix().register();
}
Process message method
@ServiceActivator(inputChannel = "imapMailChannel")
public void processMessage(Message<?> message) throws InvalidMessageException {
    String concern = (String) message.getHeaders().get(CONCERN_CODE);
    if (isEmpty(concern)) {
        logger.error("Received null concern!");
    }
    Long imapConfigId = (Long) message.getHeaders().get(IMAP_CONFIG_ID);
    String logMessage = null;
    String messageId = null;
    try {
        Object payload = message.getPayload();
        if (payload instanceof MimeMultipart) {
            //.......................//
        }
        else if (payload instanceof String) {
            //......................//
        }
    }
    catch (Exception e) {
        logger.error("Error while processing " + logMessage, e);
        if (concern != null) {
            metricUtil.emailFailed(concern);
        }
        throw new MaxxtonException("CCM-MessageID: Exception in processMessage() method", e, MessageErrorCode.UNABLE_TO_PROCESS_EMAIL);
    }
    metricUtil.emailProcessed(concern);
}
ImapMailReceiver method
private ImapMailReceiver createImapMailReceiver(ImapSetting imapSettings) {
    String url = String.format(imapSettings.getImapUrl(), URLEncoder.encode(imapSettings.getUsername(), UTF_8), URLEncoder.encode(imapSettings.getPassword(), UTF_8));
    ImapMailReceiver receiver = new ImapMailReceiver(url);
    receiver.setSimpleContent(true);
    Properties mailProperties = new Properties();
    mailProperties.put("mail.debug", "false");
    mailProperties.put("mail.imap.connectionpoolsize", "5");
    mailProperties.put("mail.imap.fetchsize", 4194304);
    mailProperties.put("mail.imap.connectiontimeout", 15000);
    mailProperties.put("mail.imap.timeout", 30000);
    mailProperties.put("mail.imaps.connectionpoolsize", "5");
    mailProperties.put("mail.imaps.fetchsize", 4194304);
    mailProperties.put("mail.imaps.connectiontimeout", 15000);
    mailProperties.put("mail.imaps.timeout", 30000);
    receiver.setJavaMailProperties(mailProperties);
    receiver.setSearchTermStrategy(this::notSeenTerm);
    receiver.setAutoCloseFolder(false);
    receiver.setShouldDeleteMessages(false);
    receiver.setShouldMarkMessagesAsRead(true);
    receiver.setHeaderMapper(mailHeaderMapper);
    receiver.setEmbeddedPartsAsBytes(false);
    return receiver;
}
Added a screenshot taken from Grafana of active and queued tasks after we upgraded to Spring Boot v2.7.1 and Spring Integration v5.5.14.
At a glance it all looks OK, unless you really don't close that folder manually elsewhere, since you use receiver.setAutoCloseFolder(false);
There is no reason for that .taskExecutor(taskExecutor), since you use MessageChannels.executor(mailTaskExecutor()) immediately after producing a message from the Mail.imapInboundAdapter().
I remember that on Gitter I suggested you check how it works with spring.task.scheduling.pool.size=10 placed in the application.properties. This is the only obvious difference between the mentioned versions: https://docs.spring.io/spring-boot/docs/current/reference/html/messaging.html#messaging.spring-integration.
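For reference, that suggested check is a one-line addition to application.properties; the value 10 matches the pool size that a standalone Spring Integration task scheduler uses by default (per the linked docs, Boot now provides its own scheduler instead):

# restore the pre-Boot-managed Spring Integration scheduler pool size
spring.task.scheduling.pool.size=10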
Your screenshot doesn't prove that the problem is exactly with Spring Integration. Perhaps tasks are queued somehow by the tool which exports metrics to Grafana. I believe you have upgraded more than just Spring Integration in your project...
I am consuming batches in Kafka, where retry is not supported in the Spring Cloud Stream Kafka binder with batch mode. There is an option given that you can configure a SeekToCurrentBatchErrorHandler (using a ListenerContainerCustomizer) to achieve similar functionality to retry in the binder.
I tried the same with SeekToCurrentBatchErrorHandler, but it's retrying more than the number of times set, which is 3.
How can I do that?
I would like to retry the whole batch.
How can I send the whole batch to a DLQ topic? For a record listener I used to match deliveryAttempt (retry) to 3 and then send to the DLQ topic, with the check in the listener.
I have checked this link, which is exactly my issue, but an example would be a great help. Can I achieve that with the spring-cloud-stream-kafka-binder library? Please explain with an example, I am new to this.
Currently I have the below code.
@Configuration
public class ConsumerConfig {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
        return (container, dest, group) -> {
            container.getContainerProperties().setAckOnError(false);
            SeekToCurrentBatchErrorHandler seekToCurrentBatchErrorHandler = new SeekToCurrentBatchErrorHandler();
            seekToCurrentBatchErrorHandler.setBackOff(new FixedBackOff(0L, 2L));
            container.setBatchErrorHandler(seekToCurrentBatchErrorHandler);
            //container.setBatchErrorHandler(new BatchLoggingErrorHandler());
        };
    }
}
Listener:
@StreamListener(ActivityChannel.INPUT_CHANNEL)
public void handleActivity(List<Message<Event>> messages,
        @Header(name = KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment,
        @Header(name = "deliveryAttempt", defaultValue = "1") int deliveryAttempt) {
    try {
        log.info("Received activity message with message length {}", messages.size());
        nodeConfigActivityBatchProcessor.processNodeConfigActivity(messages);
        acknowledgment.acknowledge();
        log.debug("Processed activity message {} successfully!!", messages.size());
    } catch (MessagePublishException e) {
        if (deliveryAttempt == 3) {
            log.error(
                    String.format("Exception occurred, sending the message=%s to DLQ due to: ", "message"),
                    e);
            publisher.publishToDlq(EventType.UPDATE_FAILED, "message", e.getMessage());
        } else {
            throw e;
        }
    }
}
After seeing @Gary's response, I added the ListenerContainerCustomizer @Bean with RetryingBatchErrorHandler, but I am not able to import the class. Attaching screenshots.
(screenshot: not able to import RetryingBatchErrorHandler)
(screenshot: my spring cloud dependencies)
Use a RetryingBatchErrorHandler to send the whole batch to the DLT
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retrying-batch-eh
Use a RecoveringBatchErrorHandler where you can throw a BatchListenerFailedException to tell it which record in the batch failed.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#recovering-batch-eh
In both cases provide a DeadLetterPublishingRecoverer to the error handler; disable DLTs in the binder.
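The EDIT below shows the RetryingBatchErrorHandler approach end-to-end. For the RecoveringBatchErrorHandler alternative, a minimal sketch of a listener that pinpoints the failing record might look like this (process() is a hypothetical per-record handler, not from the question):

@Bean
public Consumer<List<String>> input() {
    return list -> {
        for (int i = 0; i < list.size(); i++) {
            try {
                process(list.get(i)); // hypothetical per-record processing
            }
            catch (Exception e) {
                // tells the RecoveringBatchErrorHandler which record failed,
                // so records before index i are treated as processed and the
                // failed record is handled according to the handler's BackOff
                throw new BatchListenerFailedException("record " + i + " failed", e, i);
            }
        }
    };
}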
EDIT
Here's an example; it uses the newer functional style rather than the deprecated @StreamListener, but the same concepts apply (you should consider moving to the functional style).
@SpringBootApplication
public class So69175145Application {

    public static void main(String[] args) {
        SpringApplication.run(So69175145Application.class, args);
    }

    @Bean
    ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(
            KafkaTemplate<byte[], byte[]> template) {
        return (container, dest, group) -> {
            container.setBatchErrorHandler(new RetryingBatchErrorHandler(new FixedBackOff(5000L, 2L),
                    new DeadLetterPublishingRecoverer(template,
                            (rec, ex) -> new TopicPartition("errors." + dest + "." + group, rec.partition()))));
        };
    }

    /*
     * DLT topic won't be auto-provisioned since enableDlq is false
     */
    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("errors.so69175145.grp").partitions(1).replicas(1).build();
    }

    /*
     * Functional equivalent of @StreamListener
     */
    @Bean
    public Consumer<List<String>> input() {
        return list -> {
            System.out.println(list);
            throw new RuntimeException("test");
        };
    }

    /*
     * Not needed here - just to show we sent them to the DLT
     */
    @KafkaListener(id = "so69175145", topics = "errors.so69175145.grp")
    public void listen(String in) {
        System.out.println("From DLT: " + in);
    }
}
spring.cloud.stream.bindings.input-in-0.destination=so69175145
spring.cloud.stream.bindings.input-in-0.group=grp
spring.cloud.stream.bindings.input-in-0.content-type=text/plain
spring.cloud.stream.bindings.input-in-0.consumer.batch-mode=true
# for DLT listener
spring.kafka.consumer.auto-offset-reset=earliest
[foo]
2021-09-14 09:55:32.838 ERROR ...
...
[foo]
2021-09-14 09:55:37.873 ERROR ...
...
[foo]
2021-09-14 09:55:42.886 ERROR ...
...
From DLT: foo
I'm new to Spring Integration, and I'm trying to get the message from a temporary channel.
Reading the documentation, there is a temporary channel used by Spring.
I guess it is named NullChannel.
I need my gateway to return the value from the temporary channel.
http controller -> gateway -> direct channel -> activator 1 -> queue channel -> activator 2
So my activator 2 will put the new value into the temporary channel, and the gateway will retrieve the value from the temporary channel.
@MessageEndpoint
public class Activator2 {

    @Autowired
    private NullChannel nullChannel;

    @ServiceActivator(inputChannel = "asyncChannel")
    public void plus(Integer message) {
        try {
            message++;
            Thread.sleep(2000);
            nullChannel.send(MessageBuilder.withPayload(message).build());
            log.info("Activator 2: " + message);
        } catch (InterruptedException e) {
            log.error("I don't want to sleep");
        }
    }
}
It's not working, and I'm not sure if everything is well connected.
NullChannel is like /dev/null in Unix; it just discards the value.
@ServiceActivator(inputChannel = "asyncChannel")
public Integer plus(Integer message) {
    return message;
}
will automatically do what you want.
If there is no outputChannel, the framework sends the result to the replyChannel header.
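For a concrete picture of that mechanism, here is a minimal sketch of the gateway side under the question's topology; the interface and channel name are assumptions for illustration:

@MessagingGateway
public interface PlusGateway {

    // the gateway puts a replyChannel header on the request message;
    // the value returned by the last activator comes back through it
    @Gateway(requestChannel = "directChannel")
    Integer plus(Integer value);
}

Calling plusGateway.plus(1) then blocks until Activator2's return value arrives via the replyChannel header; no explicit reply channel needs to be declared.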
I have a Spring Cloud Stream application handling a dead letter queue.
Here I handle the record forwarded to the DLQ topic like below:
@SpringBootApplication
@EnableBinding(Processor.class)
public class EventDrivenApplication {

    public static void main(String[] args) {
        SpringApplication.run(EventDrivenApplication.class, args);
    }

    @StreamListener(target = Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public Message<?> reRoute(Message<?> failed) {
        Integer retries = failed.getHeaders().get("x-retries", Integer.class);
        if (retries == null) {
            log.info("First retry for {}", failed);
            return MessageBuilder.fromMessage(failed)
                    .setHeader(X_RETRIES_HEADER, new Integer(1))
                    .setHeader(BinderHeaders.PARTITION_OVERRIDE, failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
                    .build();
        } else if (retries.intValue() < 3) {
            log.info("Another retry for {}", failed);
            return MessageBuilder.fromMessage(failed)
                    .setHeader(X_RETRIES_HEADER, new Integer(retries.intValue() + 1))
                    .setHeader(BinderHeaders.PARTITION_OVERRIDE, failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
                    .build();
        }
        return null;
    }
}
When I run this application, I get the below error:
SpringApplication : Application run failed
java.lang.IllegalArgumentException: Cannot set a condition for methods that return a value
Why can't we write conditional statements in a method that has a return value?
I would really appreciate it if someone helped me solve this.
Conditions are not supported on request/reply methods.
When using conditions, multiple listeners can get the same message, and they cannot both return a reply.
I suppose the check could have been more lenient, but it's not.
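As an illustration only (not part of the original answer), one way to work around the restriction is to drop the condition and filter inside the replying method instead; returning null produces no reply, which the question's own code already relies on. The header name checked here is hypothetical:

@StreamListener(target = Processor.INPUT) // no condition attribute
@SendTo(Processor.OUTPUT)
public Message<?> reRoute(Message<?> failed) {
    // hypothetical header check replacing the unsupported condition
    if (!"retry".equals(failed.getHeaders().get("x-handling"))) {
        return null; // filtered out: no reply is sent
    }
    // ... existing retry logic ...
    return MessageBuilder.fromMessage(failed).build();
}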
We have a Spring Integration DSL pipeline connected to a GCP Pubsub and things "work": The data is received and processed as defined in the pipeline, using a collection of Function implementations and .handle().
The problem we have (and why I used "work" in quotes) is that, in some handlers, when some of the data isn't found in the companion database, we raise IllegalStateException, which forces the data to be reprocessed (along the way, another service may complete the companion database, and the function will then work). This exception is never shown anywhere.
We tried to capture the content of errorHandler, but we really can't find the proper way of doing it programmatically (no XML).
Our Functions have something like this:
Record record = recordRepository.findById(incomingData).orElseThrow(() -> new IllegalStateException("Missing information: " + incomingData));
This IllegalStateException is the one that is not appearing anywhere in the logs.
Also, maybe it's worth mentioning that we have our channels defined as
@Bean
public DirectChannel cardInputChannel() {
    return new DirectChannel();
}

@Bean
public PubSubInboundChannelAdapter cardChannelAdapter(
        @Qualifier("cardInputChannel") MessageChannel inputChannel,
        PubSubTemplate pubSubTemplate) {
    PubSubInboundChannelAdapter adapter = new PubSubInboundChannelAdapter(pubSubTemplate, SUBSCRIPTION_NAME);
    adapter.setOutputChannel(inputChannel);
    adapter.setAckMode(AckMode.AUTO);
    adapter.setPayloadType(CardDto.class);
    return adapter;
}
I am not familiar with the adapter, but I just looked at the code and it looks like they just nack the message and don't log anything.
You can add an Advice to the handler's endpoint to capture and log the exception:
.handle(..., e -> e.advice(exceptionLoggingAdvice))
@Bean
public MethodInterceptor exceptionLoggingAdvice() {
    return invocation -> {
        try {
            return invocation.proceed();
        }
        catch (Exception thrown) {
            // log it
            throw thrown;
        }
    };
}
EDIT
@SpringBootApplication
public class So57224614Application {

    public static void main(String[] args) {
        SpringApplication.run(So57224614Application.class, args);
    }

    @Bean
    public IntegrationFlow flow(MethodInterceptor myAdvice) {
        return IntegrationFlows.from(() -> "foo", endpoint -> endpoint.poller(Pollers.fixedDelay(5000)))
                .handle("crasher", "crash", endpoint -> endpoint.advice(myAdvice))
                .get();
    }

    @Bean
    public MethodInterceptor myAdvice() {
        return invocation -> {
            try {
                return invocation.proceed();
            }
            catch (Exception e) {
                System.out.println("Failed with " + e.getMessage());
                throw e;
            }
        };
    }
}

@Component
class Crasher {

    public void crash(Message<?> msg) {
        throw new RuntimeException("test");
    }
}
and the output:
Failed with nested exception is java.lang.RuntimeException: test