Backoff policy for Spring Cloud Stream consumer with GCP PubSub - java

In a Spring Boot application I'm using spring-cloud-stream for PubSub (spring-cloud-gcp-pubsub-stream-binder) to subscribe to a topic.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-pubsub-stream-binder</artifactId>
</dependency>
I use the @EnableBinding and @StreamListener annotations to set up the subscriber:
@EnableBinding(Sink.class)
class Subscriber {

    @StreamListener(INPUT)
    public void handleMessage(Message<String> message) {
        ...
    }
}
While handling the message it is possible that something goes wrong. In that case I throw an exception to make sure the message does not get acknowledged and is retried later.
According to the Spring Cloud Stream documentation I should be able to use the properties
spring.cloud.stream.default.consumer.defaultRetryable=true
spring.cloud.stream.default.consumer.backOffInitialInterval=1000
spring.cloud.stream.default.consumer.backOffMultiplier=2.0
spring.cloud.stream.default.consumer.backOffMaxInterval=300000
spring.cloud.stream.default.consumer.maxAttempts=9999
or for a specific channel (input in this case)
spring.cloud.stream.bindings.input.consumer.defaultRetryable=true
spring.cloud.stream.bindings.input.consumer.backOffInitialInterval=1000
spring.cloud.stream.bindings.input.consumer.backOffMultiplier=2.0
spring.cloud.stream.bindings.input.consumer.backOffMaxInterval=300000
spring.cloud.stream.bindings.input.consumer.maxAttempts=9999
But those properties do not seem to be used in my application: the message gets retried every 100 ms regardless of the values used in the above properties.
Can anyone help me with setting the correct retry and/or backoff settings so that messages get retried accordingly?
A fully working minimal example to illustrate my issue can be found on GitHub and looks like this:
Producer:
@Component
public class Main {

    private static final Logger LOG = getLogger(Main.class);

    private boolean firstExecution = true;

    @Autowired
    private SuccessSwitch consumerSuccessSwitch;

    @Autowired
    private PubSubTemplate pubSubTemplate;

    @Scheduled(fixedDelay = 10000)
    public void doSomethingAfterStartup() {
        if (firstExecution) {
            firstExecution = false;
            consumerSuccessSwitch.letFail();
            pubSubTemplate.publish("topic", "payload");
            LOG.info("Message published");
        } else {
            consumerSuccessSwitch.letSucceed();
        }
    }
}
Consumer:
@EnableBinding(Sink.class)
class Subscriber {

    private static final Logger LOG = getLogger(Subscriber.class);

    @Autowired
    private SuccessSwitch successSwitch;

    private int retryCounter = 0;

    @StreamListener(INPUT)
    public void handleMessage(Message<String> message) {
        LOG.info("Received: {} for the {} time", message.getPayload(), ++retryCounter);
        if (!successSwitch.succeeded()) {
            throw new RuntimeException();
        }
        LOG.info("Received: {} times", retryCounter);
    }
}
Toggle ack/nack in consumer:
@Component
public class SuccessSwitch {

    private boolean success = false;

    public void letSucceed() {
        this.success = true;
    }

    public void letFail() {
        this.success = false;
    }

    public boolean succeeded() {
        return success;
    }
}

Looking at PubSubChannelProvisioner in the gcp-pubsub binding: when creating a subscription, the binding does not configure a retry policy. So unless the retry is somehow handled within spring-cloud-stream itself rather than by the underlying native Pub/Sub mechanisms, you are out of luck.
What I am considering doing is creating the subscription myself using PubSubAdmin; spring-cloud-stream will then see the existing subscription with the correct retry policy and use it.
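For illustration, a minimal sketch of that approach, assuming spring-cloud-gcp's PubSubAdmin and the Pub/Sub v1 protobuf model; the subscription name is made up, and this would run as a one-off provisioning step before the binder starts:
import com.google.protobuf.Duration;
import com.google.pubsub.v1.RetryPolicy;
import com.google.pubsub.v1.Subscription;

// PubSubAdmin comes from spring-cloud-gcp; its package differs across versions.
public void provisionSubscription(PubSubAdmin pubSubAdmin) {
    // Only create the subscription if it does not exist yet; the binder will then reuse it.
    if (pubSubAdmin.getSubscription("my-subscription") == null) {
        pubSubAdmin.createSubscription(Subscription.newBuilder()
                .setName("my-subscription")
                .setTopic("topic")
                .setRetryPolicy(RetryPolicy.newBuilder()
                        // Exponential backoff bounds Pub/Sub applies on nack/redelivery.
                        .setMinimumBackoff(Duration.newBuilder().setSeconds(1))
                        .setMaximumBackoff(Duration.newBuilder().setSeconds(300))));
    }
}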

Related

Apache Camel aggregation completion not working

I've configured a route to extract some data from exchanges and aggregate them; here is a simple summary:
@Component
@RequiredArgsConstructor
public class FingerprintHistoryRouteBuilder extends RouteBuilder {

    private final FingerprintHistoryService fingerprintHistoryService;

    @Override
    public void configure() throws Exception {
        from("seda:httpFingerprint")
            .aggregate((AggregationStrategy) (oldExchange, newExchange) -> {
                final FingerprintHistory newFingerprint = extract(newExchange);
                if (oldExchange == null) {
                    List<FingerprintHistory> fingerprintHistories = new ArrayList<>();
                    fingerprintHistories.add(newFingerprint);
                    newExchange.getMessage().setBody(fingerprintHistories);
                    return newExchange;
                }
                final Message oldMessage = oldExchange.getMessage();
                final List<FingerprintHistory> fingerprintHistories = (List<FingerprintHistory>) oldMessage.getBody(List.class);
                fingerprintHistories.add(newFingerprint);
                return oldExchange;
            })
            .constant(true) // correlation expression: aggregate all exchanges together
            .completionSize(aggregateCount)
            .completionInterval(aggregateDuration.toMillis())
            .to("direct:processFingerprint")
            .end();

        from("direct:processFingerprint")
            .process(exchange -> {
                List<FingerprintHistory> fingerprintHistories = exchange.getMessage().getBody(List.class);
                fingerprintHistoryService.saveAll(fingerprintHistories);
            });
    }
}
The problem is that aggregation completion never works. For example, this is a sample of my test:
@SpringBootTest
class FingerprintHistoryRouteBuilderTest {

    @Autowired
    ProducerTemplate producerTemplate;

    @Autowired
    FingerprintHistoryRouteBuilder fingerprintHistoryRouteBuilder;

    @Autowired
    CamelContext camelContext;

    @MockBean
    FingerprintHistoryService historyService;

    @Test
    void api_whenAggregate() {
        UserSearchActivity activity = ActivityFactory.buildSampleSearchActivity("127.0.0.1", "salam", "finger");
        Exchange exchange = buildExchange();
        exchange.getMessage().setBody(activity);
        ReflectionTestUtils.setField(fingerprintHistoryRouteBuilder, "aggregateCount", 1);
        ReflectionTestUtils.setField(fingerprintHistoryRouteBuilder, "aggregateDuration", Duration.ofNanos(1));
        producerTemplate.send(FingerprintHistoryRouteBuilder.FINGERPRINT_HISTORY_ENDPOINT, exchange);
        Mockito.verify(historyService).saveAll(Mockito.any());
    }

    Exchange buildExchange() {
        DefaultExchange defaultExchange = new DefaultExchange(camelContext);
        defaultExchange.setMessage(new DefaultMessage(camelContext));
        return defaultExchange;
    }
}
with the following result:
Wanted but not invoked:
fingerprintHistoryService bean.saveAll();
I built this simplified example, and the test passes, so your usage of aggregate is probably correct.
Have you considered that your Mockito.verify() call may be happening before the exchange finishes routing? You could test this by removing the verify call and adding a .log() statement to the FINGERPRINT_PROCESS_AGGREGATION route. If you see the log output during execution, you know the exchange is being routed as you expect. If that is the case, your verify() call needs to wait for the exchange to finish routing. I don't use Mockito much, but it looks like you can do this:
Mockito.verify(historyService, timeout(10000)).saveAll(Mockito.any());
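Alternatively, if you'd rather synchronize on Camel itself than on the mock, Camel's NotifyBuilder can wait until the exchange has been fully routed; a rough sketch:
import java.util.concurrent.TimeUnit;
import org.apache.camel.builder.NotifyBuilder;

// Arrange: signal when one exchange has been completely routed.
NotifyBuilder notify = new NotifyBuilder(camelContext)
        .whenDone(1)
        .create();

producerTemplate.send(FingerprintHistoryRouteBuilder.FINGERPRINT_HISTORY_ENDPOINT, exchange);

// Blocks for up to 10 seconds until the exchange is done, then verify the mock.
assertTrue(notify.matches(10, TimeUnit.SECONDS));
Mockito.verify(historyService).saveAll(Mockito.any());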

Java Spring Test Class Can't access variable from #Autowired component

So, I have this @Component class for listening to a topic from Kafka:
@Component
@Data
@Slf4j
public class KafkaConsumer {

    public List<String> saveReserveStock = new ArrayList<>();

    @KafkaListener(topics = "topic")
    public void listenReserveStock(ConsumerRecord<?, ?> consumerRecord) {
        System.out.println("==================================================================");
        System.out.println("consuming records at: " + DateTime.now().toLocalDateTime());
        System.out.println("consuming topic: " + consumerRecord.topic());
        saveReserveStock.add(consumerRecord.value().toString());
        saveReserveStock.add("dummy data");
        saveReserveStock.forEach(System.out::println);
        System.out.println("consumed at: " + DateTime.now().toLocalDateTime());
        System.out.println("==================================================================");
        System.out.println("end at: " + DateTime.now().toLocalDateTime());
    }

    public void emptyConsumer() {
        saveReserveStock = new ArrayList<>();
    }
}
And this is the embedded Kafka configuration:
@Slf4j
@EnableKafka
public abstract class EmbeddedKafkaIntegrationTest {

    @Autowired
    protected static EmbeddedKafkaBroker embeddedKafkaBroker = new EmbeddedKafkaBroker(1, false);

    @Autowired
    protected KafkaConsumer kafkaConsumer;

    @Autowired
    private ReactorKafkaProducer reactorKafkaProducer;

    protected abstract void setUp();

    private static boolean started;

    @BeforeClass
    public static void createBroker() {
        log.info("start test class");
        Map<String, String> propertiesMap = new HashMap<>();
        propertiesMap.put("listeners", "PLAINTEXT://localhost:9092");
        embeddedKafkaBroker.brokerProperties(propertiesMap);
        if (!started) {
            try {
                embeddedKafkaBroker.afterPropertiesSet();
                log.info("before class - kafka connected to: " + embeddedKafkaBroker.getBrokersAsString());
            } catch (Exception e) {
                log.error("Embedded broker failed to start", e);
            }
            started = true;
        }
    }

    @Before
    public void doSetUp() {
        log.info("before - kafka connected to: " + embeddedKafkaBroker.getBrokersAsString());
        kafkaConsumer.emptyConsumer();
        this.setUp();
    }

    @After
    public void tearDown() {
        kafkaConsumer.emptyConsumer();
        embeddedKafkaBroker.getZookeeper().getLogDir().deleteOnExit();
    }

    @AfterClass
    public static void destroy() {
        log.info("end test class");
    }
}
Then, in my test class, I use @Autowired for that KafkaConsumer class, and I have this to read the messages that the listener has already consumed:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = {ImsStockApplication.class},
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Slf4j
public class IntegrationTest extends EmbeddedKafkaIntegrationTest {

    @Value("${local.server.port}")
    private int port;

    @Autowired
    private KafkaConsumer kafkaConsumer;

    @Autowired
    private ReactorKafkaProducer reactorKafkaProducer;

    @Before
    public void setUp() {
        RestAssured.port = port;
    }

    @Test
    public void success_SubDetail() {
        reactorKafkaProducer.send("topic", event).block();
        Awaitility.await().atMost(10, TimeUnit.SECONDS).untilAsserted(() -> {
            log.info("AWAITILITY AT: " + DateTime.now().toLocalDateTime());
            Assert.assertTrue(kafkaConsumer.getFailDecreaseGoodsReceipt().size() > 0);
            Assert.assertTrue(kafkaConsumer.getSaveReserveStock().size() > 0);
            Assert.assertTrue(kafkaConsumer.getSaveBindStock().size() > 0);
        });
    }
}
But the test sometimes fails (the list is empty).
It's as if the list variable were empty, while it should not be.
Below is the log where the listener receives the message and stores it in the list:
==================================================================
consuming records at: 2022-07-10T14:16:46.748
consuming topic: topic
{"id":9721,"eventId":"eventId","organizationCode":"ORG","createdDate":1657437282742,"lastModifiedDate":1657437282742,"routingId":"routingId"}
dummy data
consumed at: 2022-07-10T14:16:46.748
==================================================================
end at: 2022-07-10T14:16:46.748
And in my test class, when I try to access the variable, it is empty. It keeps waiting for the list to be filled:
AWAITILITY AT: 2022-07-10T14:16:46.829
AWAITILITY AT: 2022-07-10T14:16:46.945
AWAITILITY AT: 2022-07-10T14:16:47.056
AWAITILITY AT: 2022-07-10T14:16:47.164
AWAITILITY AT: 2022-07-10T14:16:47.273
AWAITILITY AT: 2022-07-10T14:16:47.384
AWAITILITY AT: 2022-07-10T14:16:47.490
AWAITILITY AT: 2022-07-10T14:16:47.598
If we look at the timestamps, the list shouldn't be empty, right? So why does my test fail?
Where did I go wrong?
Thanks
IMHO the question lacks information, so this is not a real answer but rather a series of possible answers that may help find the solution.
There are many possible reasons for the test to fail, and it's not necessarily because of your test code.
One possible reason is that when you connect to Kafka you start listening for the "latest" messages (offset = latest). In that case you won't be able to consume the messages that are already in the topic.
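If that turns out to be the cause, a hedged sketch of the usual fix via spring-kafka's auto-configuration (assuming the test's consumer factory is built from these properties):
spring:
  kafka:
    consumer:
      # Start from the earliest offset so messages published before the
      # listener gets its partition assignment are still consumed.
      auto-offset-reset: earliest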
While this really can be the answer, maybe you should post the code that actually sends the message to the topic, since that is the crux of the question.
Another possible reason is the number of partitions. If the listener is configured to use the same consumer group as other listeners that might exist in the running application, maybe it doesn't get the partition that eventually receives the message.
It's also possible that the reason is in the code itself, but again, you don't show all the configuration, at least not the configuration of the test.
An example of such a possible issue is that the @KafkaListener is not handled properly: Spring makes a bean from the component (after all, it can be autowired from within the test) but doesn't plug in the whole Kafka infrastructure under the hood.

How to use blocking queue in Spring Boot?

I am trying to use BlockingQueue inside Spring Boot. My design is like this: a user submits a request via a controller, and the controller in turn puts some objects onto a blocking queue. After that, the consumer should be able to take the objects and process them further.
I have used @Async, a thread pool, and an @EventListener. However, with my code below I found the consumer class is not consuming objects. Could you please help point out how to improve it?
Queue Configuration
@Bean
public BlockingQueue<MyObject> myQueue() {
    return new PriorityBlockingQueue<>();
}

@Bean
public Executor getAsyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(3);
    executor.setMaxPoolSize(3);
    executor.setQueueCapacity(10);
    executor.setThreadNamePrefix("Test-");
    executor.initialize();
    return executor;
}
Rest Controller
@Autowired
BlockingQueue<MyObject> myQueue;

@RequestMapping(path = "/api/produce")
public void produce() {
    /* Do something */
    MyObject myObject = new MyObject();
    myQueue.put(myObject);
}
Consumer Class
@Autowired
private BlockingQueue<MyObject> myQueue;

@EventListener
public void onApplicationEvent(ContextRefreshedEvent event) {
    consume();
}

@Async
public void consume() {
    while (true) {
        try {
            MyObject myObject = myQueue.take();
        } catch (Exception e) {
        }
    }
}
Your idea is to use a queue to store messages, and to have the consumer listen to Spring events and consume them.
I don't see your code actually publishing an event anywhere; it just stores objects in the queue.
If you want to use Spring events, the producer could look like this:
@Autowired
private ApplicationEventPublisher applicationEventPublisher;

public void doStuffAndPublishAnEvent(final String message) {
    System.out.println("Publishing custom event.");
    CustomSpringEvent customSpringEvent = new CustomSpringEvent(this, message);
    applicationEventPublisher.publishEvent(customSpringEvent);
}
Check this doc.
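For completeness, a minimal sketch of the consuming side; CustomSpringEvent is assumed to be a simple ApplicationEvent subclass with a getMessage() accessor:
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class CustomSpringEventListener {

    // Spring invokes this for every CustomSpringEvent that is published.
    @EventListener
    public void handle(CustomSpringEvent event) {
        System.out.println("Received custom event: " + event.getMessage());
    }
}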
If you still want to use BlockingQueue, your consumer should be a running thread, continuously waiting for tasks in the queue, like:
public class NumbersConsumer implements Runnable {

    private BlockingQueue<Integer> queue;
    private final int poisonPill;

    public NumbersConsumer(BlockingQueue<Integer> queue, int poisonPill) {
        this.queue = queue;
        this.poisonPill = poisonPill;
    }

    public void run() {
        try {
            while (true) {
                Integer number = queue.take(); // always waiting
                if (number.equals(poisonPill)) {
                    return;
                }
                System.out.println(Thread.currentThread().getName() + " result: " + number);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
You could check this code example.
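To wire such a consumer into the application above, the runnable has to be submitted to an executor at startup; a sketch, assuming a BlockingQueue<Integer> bean analogous to the myQueue bean and the executor bean from the configuration section:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executor;
import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConsumerStartupConfig {

    // Submit the long-running consumer loop once the context is ready;
    // -1 serves as the poison pill that eventually stops the loop.
    @Bean
    ApplicationRunner startConsumer(BlockingQueue<Integer> queue, Executor executor) {
        return args -> executor.execute(new NumbersConsumer(queue, -1));
    }
}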
@Async doesn't actually start a new thread if the target method is called from within the same object instance; this could be the problem in your case.
Also note that you need to put @EnableAsync on a config class to enable the @Async annotation.
See the Spring documentation: https://docs.spring.io/spring-framework/docs/current/reference/html/integration.html#scheduling-annotation-support
The default advice mode for processing @Async annotations is proxy which allows for interception of calls through the proxy only. Local calls within the same class cannot get intercepted that way. For a more advanced mode of interception, consider switching to aspectj mode in combination with compile-time or load-time weaving.
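A sketch of the usual workaround: put the loop in its own bean and trigger it from another bean, so the @Async call crosses the proxy boundary (MyObject and the myQueue bean are the question's own types):
import java.util.concurrent.BlockingQueue;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.EventListener;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.stereotype.Component;

@Configuration
@EnableAsync
class AsyncConfig {
}

@Component
class QueueConsumer {

    private final BlockingQueue<MyObject> myQueue;

    QueueConsumer(BlockingQueue<MyObject> myQueue) {
        this.myQueue = myQueue;
    }

    // Runs on the async executor because it is invoked through the proxy.
    @Async
    public void consume() {
        try {
            while (true) {
                MyObject myObject = myQueue.take();
                // process myObject here
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

@Component
class ConsumerStarter {

    private final QueueConsumer queueConsumer;

    ConsumerStarter(QueueConsumer queueConsumer) {
        this.queueConsumer = queueConsumer;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void onReady() {
        queueConsumer.consume(); // cross-bean call, so @Async is honored
    }
}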
In the end I came up with this solution.
Rest Controller
@Autowired
BlockingQueue<MyObject> myQueue;

@RequestMapping(path = "/api/produce")
public void produce() {
    /* Do something */
    MyObject myObject = new MyObject();
    myQueue.put(myObject);
    Consumer.consume();
}
It is a little bit weird because you first put the object on the queue yourself and then consume it yourself. Any suggestions for improvement are highly appreciated.

feign error handling by using feign decoder

I have created service 'A', which needs to be called by service 'B' using a Feign client.
If service 'A' fails due to some validation, it sends back an error response containing the details below:
(1) HTTP status code
(2) error message
(3) a custom error map containing the custom error codes and their error messages,
for example, <"Emp-1001", "invalid employee Id">
From service 'B' we are using a Feign ErrorDecoder to handle Feign exceptions, but it only provides the HTTP status code, not the custom error code.
In my case, the HTTP status code is the same for different scenarios, but the custom error map values differ.
We have to handle the exception in service 'B' based on the combination of both (HTTP status code + custom error map).
Kindly provide some suggestions on this.
You can enable the circuit breaker and configure your application to apply different fallback methods depending on the error returned; follow these steps:
1.- Enable the circuit breaker itself
@SpringBootApplication
@EnableFeignClients("com.perritotutorials.feign.client")
@EnableCircuitBreaker
public class FeignDemoClientApplication {
2.- Create your fallback bean
@Slf4j
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class PetAdoptionClientFallbackBean implements PetAdoptionClient {

    @Setter
    private Throwable cause;

    @Override
    public void savePet(@RequestBody Map<String, ?> pet) {
        log.error("You are on fallback interface!!! - ERROR: {}", cause);
    }
}
Some things you must keep in mind for fallback implementations:
Must be marked as @Component; they are unique across the application.
Fallback bean should have a Prototype scope because we want a new one to be created for each exception.
Use constructor injection for testing purposes.
3.- Your ErrorDecoder, to implement fallback strategies depending on the HTTP error returned:
public class MyErrorDecoder implements ErrorDecoder {

    private final ErrorDecoder defaultErrorDecoder = new Default();

    @Override
    public Exception decode(String methodKey, Response response) {
        if (response.status() >= 400 && response.status() <= 499) {
            return new MyCustomBadRequestException();
        }
        if (response.status() >= 500) {
            // Note: RetryableException's constructor takes message/retry details
            // in current Feign versions; arguments elided here for brevity.
            return new RetryableException();
        }
        return defaultErrorDecoder.decode(methodKey, response);
    }
}
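Since the question also needs the custom error map, the decoder can deserialize the response body before deciding which exception to return; a hedged sketch, assuming Jackson is on the classpath and a hypothetical ApiError DTO shaped like service 'A''s error response:
import java.io.IOException;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;
import feign.Response;
import feign.codec.ErrorDecoder;

public class BodyAwareErrorDecoder implements ErrorDecoder {

    private final ObjectMapper mapper = new ObjectMapper();
    private final ErrorDecoder defaultErrorDecoder = new Default();

    // Hypothetical DTO mirroring service 'A''s error payload (Java 16+ record).
    record ApiError(String message, Map<String, String> errors) {}

    @Override
    public Exception decode(String methodKey, Response response) {
        try (Response.Body body = response.body()) {
            if (body != null) {
                ApiError error = mapper.readValue(body.asInputStream(), ApiError.class);
                // Branch on the HTTP status *and* the custom error map.
                if (response.status() == 400 && error.errors().containsKey("Emp-1001")) {
                    return new MyCustomBadRequestException();
                }
            }
        } catch (IOException e) {
            // Unreadable body: fall through to the default decoder.
        }
        return defaultErrorDecoder.decode(methodKey, response);
    }
}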
4.- In your configuration class, add the Retryer and the ErrorDecoder into the Spring context:
@Bean
public MyErrorDecoder myErrorDecoder() {
    return new MyErrorDecoder();
}

@Bean
public Retryer retryer() {
    return new Retryer.Default();
}
You can also add customization to the Retryer:
class CustomRetryer implements Retryer {

    private final int maxAttempts;
    private final long backoff;
    int attempt;

    public CustomRetryer() {
        this(2000, 5); // 5 times, every 2 seconds
    }

    public CustomRetryer(long backoff, int maxAttempts) {
        this.backoff = backoff;
        this.maxAttempts = maxAttempts;
        this.attempt = 1;
    }

    public void continueOrPropagate(RetryableException e) {
        if (attempt++ >= maxAttempts) {
            throw e;
        }
        try {
            Thread.sleep(backoff);
        } catch (InterruptedException ignored) {
            Thread.currentThread().interrupt();
        }
    }

    @Override
    public Retryer clone() {
        return new CustomRetryer(backoff, maxAttempts);
    }
}
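To have Feign actually use it, register it in place of the default Retryer from step 4 (a sketch):
@Bean
public Retryer retryer() {
    // Retry up to 5 times, waiting 2 seconds between attempts.
    return new CustomRetryer(2000, 5);
}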
If you want a functional example of how to implement Feign in your application, read this article.

Using Cloud Stream Test does not work

I tried some stuff with spring-cloud-stream. Everything works, and now I've tried to write some test cases. Unfortunately, they are not working. I reduced everything to the following (everything is in the same Boot app):
The Sender:
@EnableBinding(Sender.Emitter.class)
public class Sender {

    public interface Emitter {
        String CHANNEL = "emitter";

        @Output(CHANNEL)
        MessageChannel events();
    }

    private Emitter emitter;

    public Sender(Emitter emitter) {
        this.emitter = emitter;
    }

    public void sendMessage(String message) {
        emitter.events().send(MessageBuilder.withPayload(message).build());
    }
}
The Receiver:
@EnableBinding(Receiver.Subscriber.class)
public class Receiver {

    public interface Subscriber {
        String CHANNEL = "subscriber";

        @Input(CHANNEL)
        SubscribableChannel events();
    }

    private String lastMessage;

    public String getLastMessage() {
        return lastMessage;
    }

    @StreamListener(Subscriber.CHANNEL)
    public void event(String message) {
        this.lastMessage = message;
    }
}
My config:
spring:
  cloud:
    stream:
      default-binder: rabbit
      bindings:
        emitter:
          destination: testtock
          content-type: application/json
        subscriber:
          destination: testtock
The Test:
@RunWith(SpringRunner.class)
@SpringBootTest
public class BasicTest {

    @Autowired
    private Receiver receiver;

    @Autowired
    private Sender sender;

    @Test
    public void test() throws InterruptedException {
        String message = UUID.randomUUID().toString();
        sender.sendMessage(message);
        //Thread.sleep(1000);
        assertEquals(message, receiver.getLastMessage());
    }
}
I want to use spring-cloud-stream-test-support for testing so I don't need an AMQP message broker. Outside of testing I use RabbitMQ, and there everything works.
Maybe spring-cloud-stream-test-support does not really route messages? Or what is the problem here?
Maybe the spring-cloud-stream-test-support does not really route messages?
Correct; the test binder is just a harness, it doesn't route between bindings; it's unusual to have a producer and a consumer binding for the same destination in the same app.
When you send a message in a test, you have to query the binder to ensure it was sent as expected. You use a MessageCollector to do that. See the documentation, and you can also look at the tests for some of the out-of-the-box apps.
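A hedged sketch of how that looks with the test binder's MessageCollector, reusing the Sender/Emitter interfaces above (note the captured payload may be the serialized form, depending on the content-type):
import static org.junit.Assert.assertEquals;

import java.util.UUID;
import java.util.concurrent.BlockingQueue;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.test.binder.MessageCollector;
import org.springframework.messaging.Message;

// Added to the test class above:
@Autowired
private MessageCollector messageCollector;

@Autowired
private Sender.Emitter emitter;

@Test
public void messageIsSent() {
    String message = UUID.randomUUID().toString();
    sender.sendMessage(message);

    // The test binder captures everything sent to the output channel.
    BlockingQueue<Message<?>> sent = messageCollector.forChannel(emitter.events());
    assertEquals(message, sent.poll().getPayload());
}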
spring-cloud-stream-test-support provides the ability to test an individual Spring Cloud Stream application and uses the TestSupportBinder. Hence, it is not meant for end-to-end integration testing like the one you are attempting above.
For more information on using spring-cloud-stream-test-support and the TestSupportBinder, you can refer to the doc here.
