I have this @Component class that listens to a Kafka topic:
@Component
@Data
@Slf4j
public class KafkaConsumer {

    public List<String> saveReserveStock = new ArrayList<>();

    @KafkaListener(topics = "topic")
    public void listenReserveStock(ConsumerRecord<?, ?> consumerRecord) {
        System.out.println("==================================================================");
        System.out.println("consuming records at: " + DateTime.now().toLocalDateTime());
        System.out.println("consuming topic: " + consumerRecord.topic());
        saveReserveStock.add(consumerRecord.value().toString());
        saveReserveStock.add("dummy data");
        saveReserveStock.forEach(System.out::println);
        System.out.println("consumed at: " + DateTime.now().toLocalDateTime());
        System.out.println("==================================================================");
        System.out.println("end at: " + DateTime.now().toLocalDateTime());
    }

    public void emptyConsumer() {
        saveReserveStock = new ArrayList<>();
    }
}
And this is the embedded Kafka configuration:
@Slf4j
@EnableKafka
public abstract class EmbeddedKafkaIntegrationTest {

    @Autowired
    protected static EmbeddedKafkaBroker embeddedKafkaBroker = new EmbeddedKafkaBroker(1, false);

    @Autowired
    protected KafkaConsumer kafkaConsumer;

    @Autowired
    private ReactorKafkaProducer reactorKafkaProducer;

    protected abstract void setUp();

    private static boolean started;

    @BeforeClass
    public static void createBroker() {
        log.info("start test class");
        Map<String, String> propertiesMap = new HashMap<>();
        propertiesMap.put("listeners", "PLAINTEXT://localhost:9092");
        embeddedKafkaBroker.brokerProperties(propertiesMap);
        if (!started) {
            try {
                embeddedKafkaBroker.afterPropertiesSet();
                log.info("before class - kafka connected to: " + embeddedKafkaBroker.getBrokersAsString());
            } catch (Exception e) {
                log.error("Embedded broker failed to start", e);
            }
            started = true;
        }
    }

    @Before
    public void doSetUp() {
        log.info("before - kafka connected to: " + embeddedKafkaBroker.getBrokersAsString());
        kafkaConsumer.emptyConsumer();
        this.setUp();
    }

    @After
    public void tearDown() {
        kafkaConsumer.emptyConsumer();
        embeddedKafkaBroker.getZookeeper().getLogDir().deleteOnExit();
    }

    @AfterClass
    public static void destroy() {
        log.info("end test class");
    }
}
Then in my test class I autowire that KafkaConsumer class, and I use the following to read the messages the listener has already consumed:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = {ImsStockApplication.class},
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Slf4j
public class IntegrationTest extends EmbeddedKafkaIntegrationTest {

    @Value("${local.server.port}")
    private int port;

    @Autowired
    private KafkaConsumer kafkaConsumer;

    @Autowired
    private ReactorKafkaProducer reactorKafkaProducer;

    @Before
    public void setUp() {
        RestAssured.port = port;
    }

    @Test
    public void success_SubDetail() {
        reactorKafkaProducer.send("topic", event).block();
        Awaitility.await().atMost(10, TimeUnit.SECONDS).untilAsserted(() -> {
            log.info("AWAITILITY AT: " + DateTime.now().toLocalDateTime());
            Assert.assertTrue(kafkaConsumer.getFailDecreaseGoodsReceipt().size() > 0);
            Assert.assertTrue(kafkaConsumer.getSaveReserveStock().size() > 0);
            Assert.assertTrue(kafkaConsumer.getSaveBindStock().size() > 0);
        });
    }
}
But the test sometimes fails (the list is empty). It's as if the list variable is empty when it should not be. Below is the log where the listener receives the message and stores it in the list:
==================================================================
consuming records at: 2022-07-10T14:16:46.748
consuming topic: topic
{"id":9721,"eventId":"eventId","organizationCode":"ORG","createdDate":1657437282742,"lastModifiedDate":1657437282742,"routingId":"routingId"}
dummy data
consumed at: 2022-07-10T14:16:46.748
==================================================================
end at: 2022-07-10T14:16:46.748
And in my test class, when I try to access the variable, it is empty; Awaitility keeps waiting for the list to be filled:
AWAITILITY AT: 2022-07-10T14:16:46.829
AWAITILITY AT: 2022-07-10T14:16:46.945
AWAITILITY AT: 2022-07-10T14:16:47.056
AWAITILITY AT: 2022-07-10T14:16:47.164
AWAITILITY AT: 2022-07-10T14:16:47.273
AWAITILITY AT: 2022-07-10T14:16:47.384
AWAITILITY AT: 2022-07-10T14:16:47.490
AWAITILITY AT: 2022-07-10T14:16:47.598
If we look at the timestamps, the list shouldn't be empty, right? So why does my test fail? Where did I go wrong?
Thanks
IMHO the question lacks information... so instead of a single definitive answer, here is a series of possible causes that may help you find the solution.
There are many possible reasons for the test to fail, and not necessarily in your test code.
One possible reason is that when you connect to Kafka you start listening from the "latest" offset (offset = latest). In that case you won't be able to consume the messages that are already in the topic.
While this alone could be the answer, it would help if you posted the code that actually sends the message to the topic; that is the real question here.
Another possible reason is the number of partitions. If the listener is configured to use the same consumer group as other listeners that might exist in the running application, it may not be assigned the partition that eventually receives the message.
It's also possible that the reason is in the code itself, but again, you don't show all the configuration, at least not the configuration of the test.
An example of such an issue is the @KafkaListener not being handled properly: Spring creates a bean from the component (after all, it can be autowired from within the test) but doesn't plug in the whole Kafka infrastructure under the hood.
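If the offset reset turns out to be the culprit, here is a minimal sketch of forcing the test consumer to read from the beginning. This assumes the listener's consumer is built from Spring Boot's standard spring.kafka.* properties; if you build the consumer factory yourself, set ConsumerConfig.AUTO_OFFSET_RESET_CONFIG there instead.

// Sketch: only effective if the consumer is auto-configured by Spring Boot.
@TestPropertySource(properties = {
        "spring.kafka.consumer.auto-offset-reset=earliest"
})
public class IntegrationTest extends EmbeddedKafkaIntegrationTest {
    // ...
}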
I've configured a route to extract some data from exchanges and aggregate them; here is a simple summary:
@Component
@RequiredArgsConstructor
public class FingerprintHistoryRouteBuilder extends RouteBuilder {

    private final FingerprintHistoryService fingerprintHistoryService;

    @Override
    public void configure() throws Exception {
        from("seda:httpFingerprint")
                .aggregate((AggregationStrategy) (oldExchange, newExchange) -> {
                    final FingerprintHistory newFingerprint = extract(newExchange);
                    if (oldExchange == null) {
                        List<FingerprintHistory> fingerprintHistories = new ArrayList<>();
                        fingerprintHistories.add(newFingerprint);
                        newExchange.getMessage().setBody(fingerprintHistories);
                        return newExchange;
                    }
                    final Message oldMessage = oldExchange.getMessage();
                    final List<FingerprintHistory> fingerprintHistories = (List<FingerprintHistory>) oldMessage.getBody(List.class);
                    fingerprintHistories.add(newFingerprint);
                    return oldExchange;
                })
                .constant(true)
                .completionSize(aggregateCount)
                .completionInterval(aggregateDuration.toMillis())
                .to("direct:processFingerprint")
                .end();

        from("direct:processFingerprint")
                .process(exchange -> {
                    List<FingerprintHistory> fingerprintHistories = exchange.getMessage().getBody(List.class);
                    fingerprintHistoryService.saveAll(fingerprintHistories);
                });
    }
}
The problem is that aggregation completion never triggers. For example, here is a sample of my test:
@SpringBootTest
class FingerprintHistoryRouteBuilderTest {

    @Autowired
    ProducerTemplate producerTemplate;

    @Autowired
    FingerprintHistoryRouteBuilder fingerprintHistoryRouteBuilder;

    @Autowired
    CamelContext camelContext;

    @MockBean
    FingerprintHistoryService historyService;

    @Test
    void api_whenAggregate() {
        UserSearchActivity activity = ActivityFactory.buildSampleSearchActivity("127.0.0.1", "salam", "finger");
        Exchange exchange = buildExchange();
        exchange.getMessage().setBody(activity);
        ReflectionTestUtils.setField(fingerprintHistoryRouteBuilder, "aggregateCount", 1);
        ReflectionTestUtils.setField(fingerprintHistoryRouteBuilder, "aggregateDuration", Duration.ofNanos(1));
        producerTemplate.send(FingerprintHistoryRouteBuilder.FINGERPRINT_HISTORY_ENDPOINT, exchange);
        Mockito.verify(historyService).saveAll(Mockito.any());
    }

    Exchange buildExchange() {
        DefaultExchange defaultExchange = new DefaultExchange(camelContext);
        defaultExchange.setMessage(new DefaultMessage(camelContext));
        return defaultExchange;
    }
}
with the following result:
Wanted but not invoked: fingerprintHistoryService bean.saveAll(
);
I built this simplified example, and the test passes, so your usage of aggregate is probably correct.
Have you considered that your Mockito.verify() call is happening before the exchange finishes routing? You could test this by removing the verify call and adding a .log() statement to the FINGERPRINT_PROCESS_AGGREGATION route. If you see the log output during execution, you know the exchange is being routed as you expect. If that is the case, your verify() call needs to wait for the exchange to finish routing. I don't use Mockito much, but it looks like you can do this:
Mockito.verify(historyService, timeout(10000)).saveAll(Mockito.any());
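Alternatively, Camel's own test support has a NotifyBuilder that blocks until routing completes, which avoids picking a verify timeout by feel. A minimal sketch, assuming JUnit 5 assertions and the endpoint constant from your test:

// Sketch: wait until one exchange has been fully routed, then verify the mock.
// NotifyBuilder comes from org.apache.camel.builder.
NotifyBuilder notify = new NotifyBuilder(camelContext)
        .whenDone(1)   // one exchange completed routing
        .create();

producerTemplate.send(FingerprintHistoryRouteBuilder.FINGERPRINT_HISTORY_ENDPOINT, exchange);

// Block for up to 10 seconds before asserting.
Assertions.assertTrue(notify.matches(10, TimeUnit.SECONDS));
Mockito.verify(historyService).saveAll(Mockito.any());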
In a Spring Boot application I'm using spring-cloud-stream for PubSub (spring-cloud-gcp-pubsub-stream-binder) to subscribe to a topic.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-pubsub-stream-binder</artifactId>
</dependency>
I use the @EnableBinding and @StreamListener annotations to set up the subscriber:
@EnableBinding(Sink.class)
class Subscriber {

    @StreamListener(INPUT)
    public void handleMessage(Message<String> message) {
        ...
    }
}
During the handling of the message it is possible that something goes wrong. In that case I throw an exception to make sure the message does not get acknowledged and is retried at a later time.
According to the Spring Cloud Stream documentation I should be able to use the properties
spring.cloud.stream.default.consumer.defaultRetryable=true
spring.cloud.stream.default.consumer.backOffInitialInterval=1000
spring.cloud.stream.default.consumer.backOffMultiplier=2.0
spring.cloud.stream.default.consumer.backOffMaxInterval=300000
spring.cloud.stream.default.consumer.maxAttempts=9999
or for a specific channel (input in this case)
spring.cloud.stream.bindings.input.consumer.defaultRetryable=true
spring.cloud.stream.bindings.input.consumer.backOffInitialInterval=1000
spring.cloud.stream.bindings.input.consumer.backOffMultiplier=2.0
spring.cloud.stream.bindings.input.consumer.backOffMaxInterval=300000
spring.cloud.stream.bindings.input.consumer.maxAttempts=9999
But those properties do not seem to be used in my application. The message gets retried every 100ms regardless of the values set in the above properties.
Can anyone help me with setting the correct retry and/or backoff settings so that messages get retried accordingly?
A fully working minimal example to illustrate my issue can be found on GitHub and looks like this:
Producer:
@Component
public class Main {

    private static final Logger LOG = getLogger(Main.class);

    private boolean firstExecution = true;

    @Autowired
    private SuccessSwitch consumerSuccessSwitch;

    @Autowired
    private PubSubTemplate pubSubTemplate;

    @Scheduled(fixedDelay = 10000)
    public void doSomethingAfterStartup() {
        if (firstExecution) {
            firstExecution = false;
            consumerSuccessSwitch.letFail();
            pubSubTemplate.publish("topic", "payload");
            LOG.info("Message published");
        } else {
            consumerSuccessSwitch.letSucceed();
        }
    }
}
Consumer:
@EnableBinding(Sink.class)
class Subscriber {

    private static final Logger LOG = getLogger(Subscriber.class);

    @Autowired
    private SuccessSwitch successSwitch;

    private int retryCounter = 0;

    @StreamListener(INPUT)
    public void handleMessage(Message<String> message) {
        LOG.info("Received: {} for the {} time", message.getPayload(), ++retryCounter);
        if (!successSwitch.succeeded()) {
            throw new RuntimeException();
        }
        LOG.info("Received: {} times", retryCounter);
    }
}
Toggle ack/nack in consumer:
@Component
public class SuccessSwitch {

    private boolean success = false;

    public void letSucceed() {
        this.success = true;
    }

    public void letFail() {
        this.success = false;
    }

    public boolean succeeded() {
        return success;
    }
}
Looking at PubSubChannelProvisioner in the gcp-pubsub binder: when creating a subscription, the binder does not configure the retry policy. So unless the retry is somehow handled within spring-cloud-stream rather than by the underlying native Pub/Sub mechanisms, you are out of luck.
What I am considering doing is creating the subscription myself using PubSubAdmin; spring-cloud-stream will then see the existing subscription with the correct retry policy and use it.
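A rough sketch of that provisioning step, assuming your spring-cloud-gcp version exposes PubSubAdmin.createSubscription(Subscription.Builder) (older versions only take name/topic strings, in which case you would fall back to the raw SubscriptionAdminClient). The subscription name follows the binder's <destination>.<group> convention and is illustrative:

// Sketch: create the subscription with an explicit retry policy before the
// binder starts, so the binder reuses it instead of creating its own.
// Subscription and RetryPolicy are from com.google.pubsub.v1;
// Duration is from com.google.protobuf.
@Bean
public ApplicationRunner provisionSubscription(PubSubAdmin pubSubAdmin) {
    return args -> {
        if (pubSubAdmin.getSubscription("topic.my-group") == null) {
            pubSubAdmin.createSubscription(Subscription.newBuilder()
                    .setName("topic.my-group")
                    .setTopic("topic")
                    .setRetryPolicy(RetryPolicy.newBuilder()
                            .setMinimumBackoff(Duration.newBuilder().setSeconds(1))
                            .setMaximumBackoff(Duration.newBuilder().setSeconds(300))));
        }
    };
}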
I am trying to write a unit test for a Kafka listener that I am developing using Spring Boot 2.x. Being a unit test, I don't want to start up a full Kafka server and an instance of Zookeeper. So, I decided to use Spring Embedded Kafka.
The definition of my listener is very basic.
@Component
public class Listener {

    private final CountDownLatch latch;

    @Autowired
    public Listener(CountDownLatch latch) {
        this.latch = latch;
    }

    @KafkaListener(topics = "sample-topic")
    public void listen(String message) {
        latch.countDown();
    }
}
Also the test, which verifies that the latch counts down to zero after a message is received, is very easy.
@RunWith(SpringRunner.class)
@SpringBootTest
@DirtiesContext
@EmbeddedKafka(topics = { "sample-topic" })
@TestPropertySource(properties = { "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}" })
public class ListenerTest {

    @Autowired
    private KafkaEmbedded embeddedKafka;

    @Autowired
    private CountDownLatch latch;

    private KafkaTemplate<Integer, String> producer;

    @Before
    public void setUp() {
        this.producer = buildKafkaTemplate();
        this.producer.setDefaultTopic("sample-topic");
    }

    private KafkaTemplate<Integer, String> buildKafkaTemplate() {
        Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
        ProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
        return new KafkaTemplate<>(pf);
    }

    @Test
    public void listenerShouldConsumeMessages() throws InterruptedException {
        // Given
        producer.sendDefault(1, "Hello world");
        // Then
        assertThat(latch.await(10L, TimeUnit.SECONDS)).isTrue();
    }
}
Unfortunately, the test fails and I cannot understand why. Is it possible to use an instance of KafkaEmbedded to test a method marked with the annotation @KafkaListener?
All the code is shared in my GitHub repository kafka-listener.
Thanks to all.
You are probably sending the message before the consumer has been assigned the topic/partition. Set property...
spring:
  kafka:
    consumer:
      auto-offset-reset: earliest
...it defaults to latest.
This is like using --from-beginning with the console consumer.
EDIT
Oh; you're not using boot's properties.
Add
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
EDIT2
BTW, you should probably also do a get(10L, TimeUnit.SECONDS) on the result of the template.send() (a Future<>) to assert that the send was successful.
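Applied to the test above, that looks like this (the extra exceptions from the Future need to be declared or caught):

// Block on the send so a failed publish fails the test with a clear error,
// instead of silently waiting out the latch.
producer.sendDefault(1, "Hello world").get(10L, TimeUnit.SECONDS);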
EDIT3
To override the offset reset just for the test, you can do the same as what you did for the broker addresses:
@Value("${spring.kafka.consumer.auto-offset-reset:latest}")
private String reset;
...
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, this.reset);
and
@TestPropertySource(properties = { "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}",
        "spring.kafka.consumer.auto-offset-reset=earliest"})
However, bear in mind that this property only applies the first time a group consumes. To always start at the end each time the app starts, you have to seek to the end during startup.
Also, I would recommend setting enable.auto.commit to false so that the container takes care of committing the offsets rather than just relying on the consumer client doing it on a time schedule.
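A minimal sketch of that seek-to-end approach, assuming a Spring Kafka version where the other ConsumerSeekAware methods have default implementations (2.3+); the topic name is taken from the question:

// Sketch: on every assignment, move to the end of each partition so the
// listener only sees records produced after startup.
// TopicPartition is from org.apache.kafka.common.
@Component
public class Listener implements ConsumerSeekAware {

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
                                     ConsumerSeekCallback callback) {
        assignments.forEach((tp, offset) -> callback.seekToEnd(tp.topic(), tp.partition()));
    }

    @KafkaListener(topics = "sample-topic")
    public void listen(String message) {
        // ...
    }
}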
Maybe someone will find this useful. I had a similar problem.
Locally the tests were passing (some checks were performed within Awaitility.waitAtMost), but in the Jenkins pipeline they were failing.
The solution was, as already mentioned in the top-voted answer, setting auto-offset-reset=earliest.
While the tests are running, you can check whether the configuration was applied by looking at the test logs: Spring prints the configuration of both the producer and the consumer.
I tried some things with spring-cloud-stream. Everything works, and now I've tried to write some test cases. Unfortunately they are not working. I reduced everything to the following (everything is in the same Boot app):
The Sender:
@EnableBinding(Sender.Emitter.class)
public class Sender {

    public interface Emitter {
        String CHANNEL = "emitter";

        @Output(CHANNEL)
        MessageChannel events();
    }

    private Emitter emitter;

    public Sender(Emitter emitter) {
        this.emitter = emitter;
    }

    public void sendMessage(String message) {
        emitter.events().send(MessageBuilder.withPayload(message).build());
    }
}
The Receiver:
@EnableBinding(Receiver.Subscriber.class)
public class Receiver {

    public interface Subscriber {
        String CHANNEL = "subscriber";

        @Input(CHANNEL)
        SubscribableChannel events();
    }

    private String lastMessage;

    public String getLastMessage() {
        return lastMessage;
    }

    @StreamListener(Subscriber.CHANNEL)
    public void event(String message) {
        this.lastMessage = message;
    }
}
My config:
spring:
  cloud:
    stream:
      default-binder: rabbit
      bindings:
        emitter:
          destination: testtock
          content-type: application/json
        subscriber:
          destination: testtock
The Test:
@RunWith(SpringRunner.class)
@SpringBootTest
public class BasicTest {

    @Autowired
    private Receiver receiver;

    @Autowired
    private Sender sender;

    @Test
    public void test() throws InterruptedException {
        String message = UUID.randomUUID().toString();
        sender.sendMessage(message);
        //Thread.sleep(1000);
        assertEquals(message, receiver.getLastMessage());
    }
}
I want to use spring-cloud-stream-test-support for testing so that I don't need an AMQP message broker. Outside of testing I use RabbitMQ, and there everything works.
Maybe the spring-cloud-stream-test-support does not really route messages? Or what is the problem here?
Maybe the spring-cloud-stream-test-support does not really route messages?
Correct; the test binder is just a harness, it doesn't route between bindings; it's unusual to have a producer and a consumer binding for the same destination in the same app.
When you send a message in a test, you have to query the binder to ensure it was sent as expected. You use a MessageCollector to do that. See the documentation, and you can also look at the tests for some of the out-of-the-box apps.
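A minimal sketch for the Sender above, assuming the test binder is on the classpath. Note the payload may arrive converted (e.g. as JSON bytes) because of the application/json content-type, so the assertion may need adjusting:

// Sketch: pull the sent message off the emitter's channel and assert on it.
// MessageCollector comes from org.springframework.cloud.stream.test.binder.
@Autowired
private MessageCollector messageCollector;

@Autowired
private Sender.Emitter emitter;

@Test
public void test() {
    String message = UUID.randomUUID().toString();
    sender.sendMessage(message);

    BlockingQueue<Message<?>> queue = messageCollector.forChannel(emitter.events());
    Message<?> received = queue.poll();
    assertEquals(message, received.getPayload().toString());
}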
The spring-cloud-stream-test-support module provides the ability to test an individual Spring Cloud Stream application and uses the TestSupportBinder. Hence, it is not meant for end-to-end integration testing like the one you are attempting above.
For more information on using spring-cloud-stream-test-support and the TestSupportBinder, you can refer to the docs here.
I have a Singleton class in Java with a timer using the @Schedule annotation. I wish to change a property of the Schedule at runtime. Below is the code:
@Startup
@Singleton
public class Listener {

    public void setProperty() {
        Method[] methods = this.getClass().getDeclaredMethods();
        Method method = methods[0];
        Annotation[] annotations = method.getDeclaredAnnotations();
        Annotation annotation = annotations[0];
        if (annotation instanceof Schedule) {
            Schedule schedule = (Schedule) annotation;
            System.out.println(schedule.second());
        }
    }

    @PostConstruct
    public void runAtStartUp() {
        setProperty();
    }

    @Schedule(second = "3")
    public void run() {
        // do something
    }
}
I wish to change the value of Schedule's second at runtime, based on information from a property file (the property file contains the configuration information). Is this actually possible? I tried @Schedule(second = SOME_VARIABLE) with private static String SOME_VARIABLE = readFromConfigFile(), but this does not work: the annotation expects a constant, meaning a final field, and I don't want to make it final.
I also saw this post: Modifying annotation attribute value at runtime in java
It shows this is not possible to do.
Any ideas?
EDIT:
@Startup
@Singleton
public class Listener {

    @javax.annotation.Resource // the issue is this
    private javax.ejb.TimerService timerService;

    private static String SOME_VARIABLE = null;

    @PostConstruct
    public void runAtStartUp() {
        SOME_VARIABLE = readFromFile();
        timerService.createTimer(new Date(), TimeUnit.SECONDS.toMillis(Long.parseLong(SOME_VARIABLE)), null);
    }

    @Timeout
    public void check(Timer timer) {
        // some code runs every SOME_VARIABLE seconds
    }
}
The issue is the injection using @Resource. How can this be fixed? The exception is shown below:
No EJBContainer provider available
The following providers:
org.glassfish.ejb.embedded.EJBContainerProviderImpl
Returned null from createEJBContainer call

javax.ejb.EJBException
org.glassfish.ejb.embedded.EJBContainerProviderImpl
    at javax.ejb.embeddable.EJBContainer.reportError(EJBContainer.java:186)
    at javax.ejb.embeddable.EJBContainer.createEJBContainer(EJBContainer.java:121)
    at javax.ejb.embeddable.EJBContainer.createEJBContainer(EJBContainer.java:78)
@BeforeClass
public void setUpClass() throws Exception {
    EJBContainer container = EJBContainer.createEJBContainer();
}
This occurs during unit testing using the embeddable EJB container. Some of the Apache Maven code is located in this post: Java EJB JNDI Beans Lookup Failed
I think the solution you are looking for was discussed here.
TomasZ is right: you should use programmatic timers with TimerService for situations where you want to change the schedule dynamically at runtime.
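For a calendar-style schedule like @Schedule(second = "3"), the programmatic equivalent is createCalendarTimer with a ScheduleExpression. A sketch; readFromConfigFile() is the hypothetical property-reading helper from the question:

@Startup
@Singleton
public class Listener {

    @Resource
    private TimerService timerService;

    @PostConstruct
    public void runAtStartUp() {
        // Build the schedule from configuration instead of a compile-time
        // annotation constant; readFromConfigFile() is assumed to return e.g. "3".
        ScheduleExpression expression = new ScheduleExpression()
                .second(readFromConfigFile())
                .minute("*")
                .hour("*");
        timerService.createCalendarTimer(expression);
    }

    @Timeout
    public void run(Timer timer) {
        // do something
    }
}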
Maybe you could use the TimerService. I have written some code, but on my WildFly 8 it seems to run multiple times even though it is a Singleton.
Documentation: http://docs.oracle.com/javaee/6/tutorial/doc/bnboy.html
Hope this helps:
@javax.ejb.Singleton
@javax.ejb.Startup
public class VariableEjbTimer {

    @javax.annotation.Resource
    javax.ejb.TimerService timerService;

    @javax.annotation.PostConstruct
    public void runAtStartUp() {
        createTimer(2000L);
    }

    private void createTimer(long millis) {
        //timerService.createSingleActionTimer(millis, new javax.ejb.TimerConfig());
        timerService.createTimer(millis, millis, null);
    }

    @javax.ejb.Timeout
    public void run(javax.ejb.Timer timer) {
        long timeout = readFromConfigFile();
        System.out.println("Timeout in " + timeout);
        createTimer(timeout);
    }

    private long readFromConfigFile() {
        return new java.util.Random().nextInt(5) * 1000L;
    }
}