Where can I close a Lettuce Redis connection in my Spring Boot application?

I have initialized the Spring Boot app with a Lettuce (io.lettuce.core.api) configuration like this:
@Configuration
class RedisConfiguration {

    @Value("${spring.redis.host}")
    private String redisHostname;

    @Value("${spring.redis.port}")
    private int redisPort;

    private StatefulRedisConnection<String, String> redisConnection;
    private static RedisClient redisClient;

    @Bean
    public RedisCommands connectionFactory() {
        RedisURI redisURI = RedisURI.create(redisHostname, redisPort);
        redisClient = RedisClient.create(redisURI);
        redisConnection = redisClient.connect();
        RedisCommands<String, String> syncCommands = redisConnection.sync();
        return syncCommands;
    }
}
I want to call redisClient.shutdown(); when the application shuts down or exits. What is the right place to terminate the Redis connection?

You have two options:
Using @PreDestroy (note the method must return void):
@PreDestroy
public void cleanUp() {
    redisConnection.close();
    redisClient.shutdown();
}
Via @Bean methods
Make sure to expose RedisClient and StatefulRedisConnection as beans. Command interfaces (RedisCommands) do not expose a close() method.
@Configuration
class RedisConfiguration {

    @Value("${spring.redis.host}")
    private String redisHostname;

    @Value("${spring.redis.port}")
    private int redisPort;

    @Bean(destroyMethod = "shutdown")
    public RedisClient redisClient() {
        RedisURI redisURI = RedisURI.create(redisHostname, redisPort);
        return RedisClient.create(redisURI);
    }

    @Bean(destroyMethod = "close")
    public StatefulRedisConnection<String, String> redisConnection(RedisClient client) {
        return client.connect();
    }

    @Bean
    public RedisCommands redisCommands(StatefulRedisConnection<String, String> connection) {
        return connection.sync();
    }
}
The first approach is shorter, while the @Bean approach lets you interact with the intermediate objects elsewhere in your application.
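For completeness, a minimal end-to-end sketch of the @PreDestroy variant (assuming the Lettuce 5+ API and javax.annotation.PreDestroy) could look like this:
import javax.annotation.PreDestroy;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

@Configuration
class RedisConfiguration {

    @Value("${spring.redis.host}")
    private String redisHostname;

    @Value("${spring.redis.port}")
    private int redisPort;

    private RedisClient redisClient;
    private StatefulRedisConnection<String, String> redisConnection;

    @Bean
    public RedisCommands<String, String> redisCommands() {
        // keep references in fields so they can be closed on shutdown
        redisClient = RedisClient.create(RedisURI.create(redisHostname, redisPort));
        redisConnection = redisClient.connect();
        return redisConnection.sync();
    }

    @PreDestroy
    public void shutdown() {
        // close the connection first, then release client resources
        redisConnection.close();
        redisClient.shutdown();
    }
}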

Related

graphql-spring-boot-starter Application with only websockets

I am building a graphql application with spring-boot-starter-webflux 2.5.6 and com.graphql-java-kickstart:graphql-spring-boot-starter:12.0.0.
At this point the application is running fine since com.graphql-java-kickstart is easy to start with.
With http-Requests I can call Queries and run Mutations and I am even able to create and get updates via Subscriptions over websockets.
But for my application Queries and Mutations also have to run via websocket.
It seems that in com.graphql-java-kickstart:graphql-spring-boot-starter you can only configure a subscription endpoint as websocket.
Adding an additional websocket via 'extends Endpoint' and '@ServerEndpoint' did nothing at all.
I also tried to add my own HandlerMapping:
@PostConstruct
public void init()
{
    Map<String, Object> map = new HashMap<String, Object>(
            ((SimpleUrlHandlerMapping) webSocketHandlerMapping).getUrlMap());
    map.put("/mysocket", myWebSocketHandler);
    //map.put("/graphql", myWebSocketHandler);
    ((SimpleUrlHandlerMapping) webSocketHandlerMapping).setUrlMap(map);
    ((SimpleUrlHandlerMapping) webSocketHandlerMapping).initApplicationContext();
}
This seems to work for the /mysocket topic, but how do I enable it for /graphql? It seems there is already a handler listening there:
WARN 12168 --- [ctor-http-nio-2] notprivacysafe.graphql.GraphQL : Query failed to parse : ''
And how to connect the websocket with my GraphQLMutationResolvers?
My entry point to a solution for this problem was to create a RestController and connect the ServerWebExchange to a WebSocketHandler in the WebSocketService like this:
@RestController
@RequestMapping("/")
public class WebSocketController
{
    private static final Logger logger = LoggerFactory.getLogger(WebSocketController.class);
    private final GraphQLObjectMapper objectMapper;
    private final GraphQLInvoker graphQLInvoker;
    private final GraphQLSpringInvocationInputFactory invocationInputFactory;
    private final WebSocketService service;

    @Autowired
    public WebSocketController(GraphQLObjectMapper objectMapper, GraphQLInvoker graphQLInvoker,
            GraphQLSpringInvocationInputFactory invocationInputFactory, WebSocketService service)
    {
        this.objectMapper = objectMapper;
        this.graphQLInvoker = graphQLInvoker;
        this.invocationInputFactory = invocationInputFactory;
        this.service = service;
    }

    @GetMapping("${graphql.websocket.path:graphql-ws}")
    public Mono<Void> getMono(ServerWebExchange exchange)
    {
        logger.debug("New connection via GET");
        return service.handleRequest(exchange,
                new GraphQLWebsocketMessageConsumer(exchange, objectMapper, graphQLInvoker, invocationInputFactory));
    }

    @PostMapping("${graphql.websocket.path:graphql-ws}")
    public Mono<Void> postMono(ServerWebExchange exchange)
    {
        ...
    }
}
In this prototype state the WebSocketHandler is also implementing the Consumer which is called to handle each WebSocketMessage:
public class GraphQLWebsocketMessageConsumer implements Consumer<String>, WebSocketHandler
{
    private static final Logger logger = LoggerFactory.getLogger(GraphQLWebsocketMessageConsumer.class);
    private final ServerWebExchange swe;
    private final GraphQLObjectMapper objectMapper;
    private final GraphQLInvoker graphQLInvoker;
    private final GraphQLSpringInvocationInputFactory invocationInputFactory;
    private final Sinks.Many<String> publisher;

    public GraphQLWebsocketMessageConsumer(ServerWebExchange swe, GraphQLObjectMapper objectMapper,
            GraphQLInvoker graphQLInvoker, GraphQLSpringInvocationInputFactory invocationInputFactory)
    {
        ...
        publisher = Sinks.many().multicast().directBestEffort();
    }

    @Override
    public Mono<Void> handle(WebSocketSession webSocketSession)
    {
        Mono<Void> input = webSocketSession.receive().map(WebSocketMessage::getPayloadAsText).doOnNext(this).then();
        Mono<Void> sender = webSocketSession.send(publisher.asFlux().map(webSocketSession::textMessage));
        return Mono.zip(input, sender).then();
    }

    @Override
    public void accept(String body)
    {
        try
        {
            String query = extractQuery(body);
            if (query == null)
            {
                return;
            }
            GraphQLRequest request = objectMapper.readGraphQLRequest(query);
            GraphQLSingleInvocationInput invocationInput = invocationInputFactory.create(request, swe);
            Mono<ExecutionResult> executionResult = Mono.fromCompletionStage(graphQLInvoker.executeAsync(invocationInput));
            Mono<String> jsonResult = executionResult.map(objectMapper::serializeResultAsJson);
            jsonResult.subscribe(publisher::tryEmitNext);
        } catch (Exception e)
        {
            ...
        }
    }

    @SuppressWarnings("unchecked")
    private String extractQuery(final String query) throws Exception
    {
        Map<String, Object> map = (Map<String, Object>) objectMapper.getJacksonMapper().readValue(query, Map.class);
        ...
        return queryPart;
    }

    @Override
    public List<String> getSubProtocols()
    {
        logger.debug("getSubProtocols called");
        return Collections.singletonList("graphql-ws");
    }
}
This solution does not yet touch security aspects like authentication or session handling.
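The extractQuery(...) body is elided above. A hypothetical sketch, assuming graphql-ws style frames where only "start" messages carry a GraphQL request in their "payload" field, might look like:
@SuppressWarnings("unchecked")
private String extractQuery(final String body) throws Exception
{
    Map<String, Object> map = (Map<String, Object>) objectMapper.getJacksonMapper().readValue(body, Map.class);
    if (!"start".equals(map.get("type")))
    {
        return null; // ignore connection_init, stop and other control frames
    }
    Object payload = map.get("payload");
    // re-serialize the payload so readGraphQLRequest() can parse it
    return payload == null ? null : objectMapper.getJacksonMapper().writeValueAsString(payload);
}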

Use JMSListener with RabbitMQ

My application currently uses IBM MQ and has its queue config set up and working fine with JMS, e.g.
@EnableJms
@Configuration
public class IBMQueueConfig {
    @Bean("defaultContainer")
    public JmsListenerContainerFactory containerFactory(final ConnectionFactory connectionFactory,
            final ErrorHandler errorHandler) {
        final DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setErrorHandler(errorHandler);
        return factory;
    }
}
I can receive and process messages as follows:
@Service
public class ProcessMessageReceive {
    @JmsListener(destination = "${queue}", concurrency = "${threads}", containerFactory = "defaultContainer")
    public Message processMessage(@Payload final String message) {
        //do stuff
    }
}
I need to use RabbitMQ for testing and require additional configuration. I have the following class:
@Configuration
@ConfigurationProperties(prefix = "spring.rabbitmq")
@EnableRabbit
public class RabbitMQConfiguration {
    private String host;
    private int port;
    private String username;
    private String password;
    private String virtualHost;
    // referenced below; declarations elided in the original code
    private String exchange;
    private String fixInboundQueue;
    private String routingKey;

    @Bean
    public DirectExchange exchange() {
        return new DirectExchange(exchange);
    }

    @Bean("defaultContainer")
    public JmsListenerContainerFactory containerFactory(@Qualifier("rabbit-connection-factory") final ConnectionFactory connectionFactory) {
        final DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory); // ERROR: expects a javax.jms.ConnectionFactory
        return factory;
    }

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(@Qualifier("rabbit-connection-factory") final ConnectionFactory connectionFactory,
            @Value("spring.rabbitmq.listener.simple.concurrency") final int concurrency,
            @Value("spring.rabbitmq.listener.simple.max-concurrency") final int maxConcurrency) {
        final SimpleRabbitListenerContainerFactory containerFactory = new SimpleRabbitListenerContainerFactory();
        containerFactory.setConnectionFactory(connectionFactory);
        containerFactory.setConcurrentConsumers(concurrency);
        containerFactory.setMaxConcurrentConsumers(maxConcurrency);
        containerFactory.setDefaultRequeueRejected(false);
        return containerFactory;
    }

    @Bean(name = "rabbit-connection-factory")
    public ConnectionFactory connectionFactory() {
        final CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setHost(host);
        connectionFactory.setPort(port);
        connectionFactory.setUsername(username);
        connectionFactory.setPassword(password);
        connectionFactory.setVirtualHost(virtualHost);
        return connectionFactory;
    }

    @Bean
    public Queue inboundQueue() {
        return new Queue(fixInboundQueue, true);
    }

    @Bean
    public Binding inboundQueueBinding() {
        return BindingBuilder.bind(inboundQueue())
                .to(exchange())
                .with(routingKey);
    }
}
I get an error on the line factory.setConnectionFactory(connectionFactory); as it expects a javax.jms.ConnectionFactory, but the RabbitMQ one is provided.
Is there a way I can wire in the RabbitMQ ConnectionFactory? I know it is possible if I use RMQConnectionFactory, but I am looking to see if I can achieve it with the Spring Rabbit dependency.
The objective is to avoid writing another processMessage() specifically for the Rabbit MQ and re-use what I already have.
Alternatively, can I use both annotations? In that case I would use a Spring profile to enable the one I need depending on prod or test.
@RabbitListener(queues = "${app.rabbitmq.queue}")
@JmsListener(destination = "${queue}", concurrency = "${threads}", containerFactory = "defaultContainer")
public Message processMessage(@Payload final String message) {
    //do stuff
}
You have to use @RabbitListener instead of @JmsListener if you want to talk to RabbitMQ over AMQP.
You can add both annotations if you want to use JMS in production and RabbitMQ in tests.
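A sketch of that dual-annotation idea; the assumption here is that each messaging configuration is gated behind a Spring profile, so only one listener container infrastructure is bootstrapped per environment (an annotation is ignored when its @EnableJms/@EnableRabbit infrastructure is absent):
@Profile("prod")
@EnableJms
@Configuration
public class IBMQueueConfig { /* JMS container factory as above */ }

@Profile("test")
@EnableRabbit
@Configuration
public class RabbitMQConfiguration { /* Rabbit container factory as above */ }

@Service
public class ProcessMessageReceive {
    // Only the listener infrastructure of the active profile picks this up.
    @JmsListener(destination = "${queue}", concurrency = "${threads}", containerFactory = "defaultContainer")
    @RabbitListener(queues = "${app.rabbitmq.queue}")
    public void processMessage(@Payload final String message) {
        //do stuff
    }
}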

How to test if method with #KafkaListener is being called

I'm really struggling to write a test to check if my Kafka consumer is being correctly called when messages are sent to its designated topic.
My consumer:
@Service
@Slf4j
@AllArgsConstructor(onConstructor = @__(@Autowired))
public class ProcessingConsumer {
    private AppService appService;

    @KafkaListener(
            topics = "${topic}",
            containerFactory = "processingConsumerContainerFactory")
    public void listen(ConsumerRecord<Key, Value> message, Acknowledgment ack) {
        try {
            appService.processMessage(message);
            ack.acknowledge();
        } catch (Throwable t) {
            log.error("error while processing message!", t);
        }
    }
}
My consumer config:
@EnableKafka
@Configuration
public class ProcessingCosumerConfig {
    @Value("${spring.kafka.schema-registry-url}")
    private String schemaRegistryUrl;

    private KafkaProperties props;

    public ProcessingCosumerConfig(KafkaProperties kafkaProperties) {
        this.props = kafkaProperties;
    }

    public Map<String, Object> deserializerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
        props.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, schemaRegistryUrl);
        return props;
    }

    private KafkaAvroDeserializer getKafkaAvroDeserializer(Boolean isKey) {
        KafkaAvroDeserializer kafkaAvroDeserializer = new KafkaAvroDeserializer();
        kafkaAvroDeserializer.configure(deserializerConfigs(), isKey);
        return kafkaAvroDeserializer;
    }

    private DefaultKafkaConsumerFactory consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(
                props.buildConsumerProperties(),
                getKafkaAvroDeserializer(true),
                getKafkaAvroDeserializer(false));
    }

    @Bean(name = "processingConsumerContainerFactory")
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Key, Value>>
            kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Key, Value> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setAckOnError(false);
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        factory.setErrorHandler(new SeekToCurrentErrorHandler());
        return factory;
    }
}
Finally, my (wannabe) test:
@DirtiesContext
public class ProcessingConsumerTest extends BaseIntegrationTest {
    @Autowired private ProcessingProducerFixture processingProducer;
    @Autowired private ProcessingConsumer processingConsumer;

    @org.springframework.beans.factory.annotation.Value("${topic}")
    String topic;

    @Test
    public void consumer_shouldConsumeMessages_whenMessagesAreSent() throws Exception {
        Thread.sleep(1000);
        ProducerRecord<Key, Value> message = new ProducerRecord<>(topic, new Key("b"), new Value("a", "b", "c", "d"));
        processingProducer.send(message);
    }
}
And that's about it for all I have so far.
I've tried checking manually, with the debugger and even simple prints, whether this approach reaches the consumer, but execution simply doesn't seem to get there. Also, even if it were called correctly by my tests, I have no idea how to actually assert that in the test.
Inject a mock AppService into the listener and verify its processMessage() was called.
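A sketch of that approach, assuming spring-boot-test's @MockBean and Mockito's timeout-based verify (names like ProcessingProducerFixture are taken from the question):
@DirtiesContext
public class ProcessingConsumerTest extends BaseIntegrationTest {
    @Autowired private ProcessingProducerFixture processingProducer;
    @MockBean private AppService appService; // replaces the real bean wired into ProcessingConsumer

    @org.springframework.beans.factory.annotation.Value("${topic}")
    String topic;

    @Test
    public void consumer_shouldConsumeMessages_whenMessagesAreSent() {
        ProducerRecord<Key, Value> message = new ProducerRecord<>(topic, new Key("b"), new Value("a", "b", "c", "d"));
        processingProducer.send(message);
        // waits up to 5s for the listener thread to deliver the record
        verify(appService, timeout(5000)).processMessage(any(ConsumerRecord.class));
    }
}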

How to write a unit test for @KafkaListener?

Trying to figure out if I can write a unit test for @KafkaListener using spring-kafka and spring-kafka-test.
My Listener class.
public class MyKafkaListener {
    @Autowired
    private MyMessageProcessor myMessageProcessor;

    @KafkaListener(topics = "${kafka.topic.01}", groupId = "SF.CLIENT", clientIdPrefix = "SF.01", containerFactory = "myMessageListenerContainerFactory")
    public void myMessageListener(MyMessage message) {
        myMessageProcessor.process(message);
        log.info("MyMessage processed");
    }
}
My test class:
@RunWith(SpringRunner.class)
@DirtiesContext
@EmbeddedKafka(partitions = 1, topics = {"I1.Topic.json.001"})
@ContextConfiguration(classes = {TestKafkaConfig.class})
public class MyMessageConsumersTest {
    @Autowired
    private MyMessageProcessor myMessageProcessor;

    @Value("${kafka.topic.01}")
    private String TOPIC_01;

    @Autowired
    private KafkaTemplate<String, MyMessage> messageProducer;

    @Test
    public void testSalesforceMessageListner() {
        MyMessageConsumers myMessageConsumers = new MyMessageConsumers(mockService);
        messageProducer.send(TOPIC_01, "MessageID", new MyMessage());
        verify(myMessageProcessor, times(1)).process(any(MyMessage.class));
    }
}
My test config class:
@Configuration
@EnableKafka
public class TestKafkaConfig {
    @Bean
    public MyMessageProcessor myMessageProcessor() {
        return mock(MyMessageProcessor.class);
    }

    @Bean
    public KafkaEmbedded kafkaEmbedded() {
        return new KafkaEmbedded(1, true, 1, "I1.Topic.json.001");
    }

    //Consumer
    @Bean
    public ConsumerFactory<String, MyMessage> myMessageConsumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaEmbedded().getBrokersAsString());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(MyMessage.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, MyMessage> myMessageListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, MyMessage> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(myMessageConsumerFactory());
        return factory;
    }

    //Producer
    @Bean
    public ProducerFactory<String, MyMessage> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaEmbedded().getBrokersAsString());
        props.put(ProducerConfig.RETRIES_CONFIG, 0);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaMessageSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, MyMessage> messageProducer() {
        return new KafkaTemplate<>(producerFactory());
    }
}
Is there any simple way to make this work?
Or should I test @KafkaListener in some other way? In a unit test, how do I ensure @KafkaListener is invoked when a new message arrives in Kafka?
how do I ensure @KafkaListener is invoked when a new message arrives in Kafka
Well, it is essentially the Framework's responsibility to test such functionality. In your case you should concentrate on the business logic and unit test exactly your custom code, not the code compiled into the Framework. In addition, there is no good point in testing a @KafkaListener method which just logs incoming messages: it is definitely going to be too hard to find a hook for test-case verification.
On the other hand I really believe the business logic in your @KafkaListener method is more complicated than you show. So it might really be better to verify the custom code (e.g. a DB insert, some other service call etc.) called from that method rather than try to find a hook exactly for myMessageListener().
What you do with mock(MyMessageProcessor.class) is really a good way of verifying business logic. The only thing wrong in your code is the duplication of the embedded Kafka: you use the annotation and you also declare a @Bean in the config. You should think about removing one of them, although it isn't clear where your production code is, which is really free of the embedded Kafka. Otherwise, if everything is in the test scope, I don't see any problems with your consumer and producer factory configuration. You definitely have a minimal possible config for the @KafkaListener and KafkaTemplate. All you need is to remove one of the embedded Kafka declarations so you do not start the broker twice.
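For example (a sketch; spring-kafka-test exposes the embedded broker address via the spring.embedded.kafka.brokers property when the annotation is used), you could keep only the annotation and drop the kafkaEmbedded() bean:
@RunWith(SpringRunner.class)
@DirtiesContext
@EmbeddedKafka(partitions = 1, topics = {"I1.Topic.json.001"})
@ContextConfiguration(classes = {TestKafkaConfig.class})
public class MyMessageConsumersTest {
    // In TestKafkaConfig, replace kafkaEmbedded().getBrokersAsString() with
    // this property so only one broker is started:
    @Value("${spring.embedded.kafka.brokers}")
    private String brokerAddresses;
    // ... rest of the test unchanged
}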
You can wrap the listener in your test case.
Given
@SpringBootApplication
public class So52783066Application {
    public static void main(String[] args) {
        SpringApplication.run(So52783066Application.class, args);
    }

    @KafkaListener(id = "so52783066", topics = "so52783066")
    public void listen(String in) {
        System.out.println(in);
    }
}
then
@RunWith(SpringRunner.class)
@SpringBootTest
public class So52783066ApplicationTests {
    @ClassRule
    public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, "so52783066");

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Autowired
    private KafkaTemplate<String, String> template;

    @Before
    public void setup() {
        System.setProperty("spring.kafka.bootstrap-servers", embeddedKafka.getBrokersAsString());
    }

    @Test
    public void test() throws Exception {
        ConcurrentMessageListenerContainer<?, ?> container = (ConcurrentMessageListenerContainer<?, ?>) registry
                .getListenerContainer("so52783066");
        container.stop();
        @SuppressWarnings("unchecked")
        AcknowledgingConsumerAwareMessageListener<String, String> messageListener = (AcknowledgingConsumerAwareMessageListener<String, String>) container
                .getContainerProperties().getMessageListener();
        CountDownLatch latch = new CountDownLatch(1);
        container.getContainerProperties()
                .setMessageListener(new AcknowledgingConsumerAwareMessageListener<String, String>() {

                    @Override
                    public void onMessage(ConsumerRecord<String, String> data, Acknowledgment acknowledgment,
                            Consumer<?, ?> consumer) {
                        messageListener.onMessage(data, acknowledgment, consumer);
                        latch.countDown();
                    }

                });
        container.start();
        template.send("so52783066", "foo");
        assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
    }
}
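The idea is to stop the container, replace the real message listener with a delegating wrapper that counts down a latch after invoking it, and then restart the container; the latch tells the test that the @KafkaListener method was actually called for the sent record.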
Here is my working solution for the Consumer, based on your code. Thank you :-)
The Configuration is the following:
@TestConfiguration
@EnableKafka
@Profile("kafka_test")
public class KafkaTestConfig {
    private static Logger log = LoggerFactory.getLogger(KafkaTestConfig.class);

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    @Primary
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group-id");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15000);
        log.info("Consumer TEST config = {}", props);
        return props;
    }

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        log.info("Producer TEST config = {}", props);
        return props;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
                new JsonDeserializer<String>());
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(producerConfigs());
        return pf;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> kafkaConsumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setAckOnError(false);
        factory.setConcurrency(2);
        return factory;
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        KafkaTemplate<String, String> kafkaTemplate = new KafkaTemplate<>(producerFactory());
        return kafkaTemplate;
    }

    @Bean
    public KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry() {
        KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry = new KafkaListenerEndpointRegistry();
        return kafkaListenerEndpointRegistry;
    }
}
Place all the beans you need to include in the test in a different class:
@TestConfiguration
@Profile("kafka_test")
@EnableKafka
public class KafkaBeansConfig {
    @Bean
    public MyProducer myProducer() {
        return new MyProducer();
    }
    // more beans
}
I created a BaseKafkaConsumerTest class to reuse it:
@ExtendWith(SpringExtension.class)
@TestPropertySource(properties = { "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}" })
@TestInstance(Lifecycle.PER_CLASS)
@DirtiesContext
@ContextConfiguration(classes = KafkaTestConfig.class)
@ActiveProfiles("kafka_test")
public class BaseKafkaConsumerTest {
    @Autowired
    protected EmbeddedKafkaBroker embeddedKafka;

    @Value("${spring.embedded.kafka.brokers}")
    private String brokerAddresses;

    @Autowired
    protected KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    @Autowired
    protected KafkaTemplate<String, String> senderTemplate;

    public void setUp() {
        embeddedKafka.brokerProperty("controlled.shutdown.enable", true);
        for (MessageListenerContainer messageListenerContainer : kafkaListenerEndpointRegistry
                .getListenerContainers()) {
            System.err.println(messageListenerContainer.getContainerProperties().toString());
            ContainerTestUtils.waitForAssignment(messageListenerContainer, embeddedKafka.getPartitionsPerTopic());
        }
    }

    @AfterAll
    public void tearDown() {
        for (MessageListenerContainer messageListenerContainer : kafkaListenerEndpointRegistry
                .getListenerContainers()) {
            messageListenerContainer.stop();
        }
        embeddedKafka.getKafkaServers().forEach(b -> b.shutdown());
        embeddedKafka.getKafkaServers().forEach(b -> b.awaitShutdown());
    }
}
Extend the base class to test your consumer:
@EmbeddedKafka(topics = MyConsumer.TOPIC_NAME)
@Import(KafkaBeansConfig.class)
public class MYKafkaConsumerTest extends BaseKafkaConsumerTest {
    private static Logger log = LoggerFactory.getLogger(MYKafkaConsumerTest.class);

    @Autowired
    private MyConsumer myConsumer;

    // mocks with @MockBean

    @Configuration
    @ComponentScan({ "com.myfirm.kafka" })
    static class KafkaLocalTestConfig {
    }

    @BeforeAll
    public void setUp() {
        super.setUp();
    }

    @Test
    public void testMessageIsReceived() throws Exception {
        //mocks
        String jsonPayload = "{\"id\":\"12345\",\"cookieDomain\":\"helloworld\"}";
        ListenableFuture<SendResult<String, String>> future =
                senderTemplate.send(MyConsumer.TOPIC_NAME, jsonPayload);
        Thread.sleep(10000);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

            @Override
            public void onSuccess(SendResult<String, String> result) {
                log.info("successfully sent message='{}' with offset={}", jsonPayload,
                        result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                log.error("unable to send message='{}'", jsonPayload, ex);
            }
        });
        Mockito.verify(myService, Mockito.times(1))
                .update(Mockito.any(MyDetails.class));
    }
}
As I read in other posts, don't test the business logic this way, just that the calls are made.
If you want to write integration tests using EmbeddedKafka, then you can do something like this.
Assume we have some KafkaListener which accepts a RequestDto as a Payload.
In your test class you should create a TestConfiguration in order to create producer beans and to autowire KafkaTemplate into your test. Also notice that instead of autowiring the consumer, we inject a consumer @SpyBean.
In the someTest method we create a latch and set up the consumer listener method so that when it is called, the latch is opened and assertions take place only after the listener has received the Payload.
Also notice the any() ?: RequestDto() line. You should use the elvis operator with any() only if you are using Mockito's any() with non-null Kotlin method arguments, because any() returns null first.
@EnableKafka
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@EmbeddedKafka(partitions = 10, brokerProperties = ["listeners=PLAINTEXT://localhost:9092", "port=9092"])
class KafkaIgniteApplicationTests {

    @SpyBean
    private lateinit var consumer: Consumer

    @TestConfiguration
    class Config {

        @Value("\${spring.kafka.consumer.bootstrap-servers}")
        private lateinit var servers: String

        fun producerConfig(): Map<String, Any> {
            val props = mutableMapOf<String, Any>()
            props[ProducerConfig.BOOTSTRAP_SERVERS_CONFIG] = servers
            props[ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG] = StringSerializer::class.java
            props[ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG] = StringSerializer::class.java
            return props
        }

        @Bean
        fun producerFactory(): ProducerFactory<String, String> {
            return DefaultKafkaProducerFactory(producerConfig())
        }

        @Bean
        fun kafkaTemplate(producerFactory: ProducerFactory<String, String>): KafkaTemplate<String, String> {
            return KafkaTemplate(producerFactory)
        }
    }

    @Autowired
    private lateinit var kafkaTemplate: KafkaTemplate<String, String>

    @Test
    fun someTest() {
        val lock = CountDownLatch(1)
        `when`(consumer.receive(any() ?: RequestDto())).thenAnswer {
            it.callRealMethod()
            lock.countDown()
        }
        val request = "{\"value\":\"1\"}"
        kafkaTemplate.send(TOPIC, request)
        lock.await(1000, TimeUnit.MILLISECONDS)
        verify(consumer).receive(RequestDto().apply { value = BigDecimal.ONE })
    }
}
In unit test, how do I ensure @KafkaListener is invoked when a new message arrives in Kafka.
Instead of the Awaitility or CountDownLatch approach, an easier way is to make the actual @KafkaListener bean a Mockito spy using @SpyBean. A spy basically allows you to record all interactions made on an actual bean instance so that you can verify them later. Together with Mockito's timeout verification feature, you can ensure the verification is retried over and over until a certain timeout after the producer sends the message.
Something like:
@SpringBootTest(properties = {"spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}"})
@EmbeddedKafka(topics = {"fooTopic"})
public class MyMessageConsumersTest {

    @SpyBean
    private MyKafkaListener myKafkaListener;

    @Captor
    private ArgumentCaptor<MyMessage> myMessageCaptor;

    @Test
    public void test() {
        //create KafkaTemplate to send some message to the topic...
        verify(myKafkaListener, timeout(5000)).myMessageListener(myMessageCaptor.capture());
        //assert the KafkaListener is configured correctly such that it is invoked with the expected parameter
        assertThat(myMessageCaptor.getValue()).isEqualTo(xxxxx);
    }
}
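To fill in the producer part of that test, a sketch (assuming Spring Boot auto-configures a KafkaTemplate against the embedded broker via the bootstrap-servers property above, and that a suitable value serializer for MyMessage is configured):
@Autowired
private KafkaTemplate<String, MyMessage> kafkaTemplate;

@Test
public void test() {
    kafkaTemplate.send("fooTopic", new MyMessage());
    // retried by Mockito until the listener is invoked or 5s elapse
    verify(myKafkaListener, timeout(5000)).myMessageListener(myMessageCaptor.capture());
    assertThat(myMessageCaptor.getValue()).isNotNull();
}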

@RabbitListener method testing in a Spring Boot app

Code:
RabbitMQListener:
@Component
public class ServerThroughRabbitMQ implements ServerThroughAMQPBroker {
    private static final AtomicLong ID_COUNTER = new AtomicLong();
    private final long instanceId = ID_COUNTER.incrementAndGet();

    @Autowired
    public ServerThroughRabbitMQ(UserService userService, LoginService loginService....) {
        ....
    }

    @Override
    @RabbitListener(queues = "#{registerQueue.name}")
    public String registerUserAndLogin(String json) {
        .....
    }
}
ServerConfig:
@Configuration
public class ServerConfig {
    @Value("${amqp.broker.exchange-name}")
    private String exchangeName;

    @Value("${amqp.broker.host}")
    private String ampqBrokerHost;

    @Value("${amqp.broker.quidco.queue.postfix}")
    private String quidcoQueuePostfix;

    @Value("${amqp.broker.quidco.queue.durability:true}")
    private boolean quidcoQueueDurability;

    @Value("${amqp.broker.quidco.queue.autodelete:false}")
    private boolean quidcoQueueAutodelete;

    private String registerAndLoginQuequName;
    private String loginAndCheckBonusQuequName; // declaration elided in the original

    @PostConstruct
    public void init() {
        registerAndLoginQuequName = REGISTER_AND_LOGIN_ROUTING_KEY + quidcoQueuePostfix;
    }

    public String getRegisterAndLoginQueueName() {
        return registerAndLoginQuequName;
    }

    public String getLoginAndCheckBonusQueueName() {
        return loginAndCheckBonusQuequName;
    }

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory(ampqBrokerHost);
        return connectionFactory;
    }

    @Bean
    public AmqpAdmin amqpAdmin() {
        return new RabbitAdmin(connectionFactory());
    }

    @Bean
    public TopicExchange topic() {
        return new TopicExchange(exchangeName);
    }

    @Bean(name = "registerQueue")
    public Queue registerQueue() {
        return new Queue(registerAndLoginQuequName, quidcoQueueDurability, false, quidcoQueueAutodelete);
    }

    @Bean
    public Binding bindingRegisterAndLogin() {
        return BindingBuilder.bind(registerQueue()).to(topic()).with(REGISTER_AND_LOGIN_ROUTING_KEY);
    }
}
TestConfig:
@EnableRabbit
@TestPropertySource("classpath:test.properties")
public class ServerThroughAMQPBrokerRabbitMQIntegrationTestConfig {
    private final ExecutorService executorService = Executors.newCachedThreadPool();
    private LoginService loginServiceMock = mock(LoginService.class);
    private UserService userServiceMock = mock(UserService.class);

    @Bean
    public ExecutorService executor() {
        return executorService;
    }

    @Bean
    public LoginService getLoginServiceMock() {
        return loginServiceMock;
    }

    @Bean
    public UserService getUserService() {
        return userServiceMock;
    }

    @Bean
    @Autowired
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMaxConcurrentConsumers(5);
        return factory;
    }

    @Bean
    @Autowired
    public RabbitTemplate getRabbitTemplate(ConnectionFactory connectionFactory) {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        return rabbitTemplate;
    }

    @Bean
    public ServerThroughRabbitMQ getServerThroughRabbitMQ() {
        return new ServerThroughRabbitMQ(userServiceMock, loginServiceMock,...);
    }
}
Integration tests:
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = {ServerConfig.class, ServerThroughAMQPBrokerRabbitMQIntegrationTestConfig.class})
@Category({IntegrationTest.class})
@TestPropertySource("classpath:test.properties")
public class ServerThroughAMQPBrokerRabbitMQIntegrationTest {
    final private ObjectMapper jackson = new ObjectMapper();

    @Autowired
    private ExecutorService executor;

    @Autowired
    private ServerThroughRabbitMQ serverThroughRabbitMQ;

    @Autowired
    private RabbitTemplate template;

    @Autowired
    private TopicExchange exchange;

    @Autowired
    UserService userService;

    @Autowired
    LoginService loginService;

    @Autowired
    private AmqpAdmin amqpAdmin;

    @Autowired
    private ServerConfig serverConfig;

    final String username = "username";
    final String email = "email@email.com";
    final Integer tcVersion = 1;
    final int quidcoUserId = 1;
    final String jwt = ProcessLauncherForJwtPhpBuilderUnitWithCxtTest.EXPECTED_JWT;

    @Before
    public void cleanAfterOthersForMyself() {
        cleanTestQueues();
    }

    @After
    public void cleanAfterMyselfForOthers() {
        cleanTestQueues();
    }

    private void cleanTestQueues() {
        amqpAdmin.purgeQueue(serverConfig.getRegisterAndLoginQueueName(), false);
    }

    @Test
    @Category({SlowTest.class, IntegrationTest.class})
    public void testRegistrationAndLogin() throws TimeoutException {
        final Waiter waiter = new Waiter();
        when(userService.register(anyString(), anyString(), anyString())).thenReturn(...);
        when(loginService....()).thenReturn(...);
        executor.submit(() -> {
            final RegistrationRequest request = new RegistrationRequest(username, email, tcVersion);
            final String response;
            try {
                //#todo: converter to convert RegistrationRequest inside next method to json
                response = (String) template.convertSendAndReceive(exchange.getName(), REGISTER_AND_LOGIN_ROUTING_KEY.toString(), jackson.writeValueAsString(request));
                waiter.assertThat(response, not(isEmptyString()));
                final RegistrationResponse registrationResponse = jackson.readValue(response, RegistrationResponse.class);
                waiter.assertThat(...);
                waiter.assertThat(...);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
            waiter.resume();
        });
        waiter.await(5, TimeUnit.SECONDS);
    }
}
When I run that test separately, everything works fine, but when I run it with other tests the mocked ServerThroughRabbitMQ isn't being used; some Spring caching forces the old rabbit listener to be used.
I tried to debug it and I can see that the correct bean is being autowired into the test, but for some reason the old listener is used (old bean field instanceId=1, new mocked bean instanceId=3) and the test fails. I'm not sure how that is possible; if the old bean still existed, I would expect an autowire exception.
I tried to use @DirtiesContext BEFORE_CLASS, but faced another problem (see here).
RabbitMQ and integration testing can be hard, since RabbitMQ keeps some kind of state:
- messages from previous tests in queues
- listeners from previous tests still listening on queues
There are several approaches:
- Purge all queues before you start the test (that might be what you mean by cleanTestQueues())
- Delete all queues (or use temporary queues) and recreate them before each test
- Use the RabbitMQ Admin REST API to kill listeners or connections of previous tests
- Delete the vhost and recreate the infrastructure for each test (which is the most brutal way)
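A sketch of the delete-and-recreate option, using Spring AMQP's AmqpAdmin and the queue name from the question's ServerConfig (an illustration, not the only way):
@Before
public void recreateQueues() {
    // drop whatever state previous tests left behind
    amqpAdmin.deleteQueue(serverConfig.getRegisterAndLoginQueueName());
    // durable, non-exclusive, non-autodelete, matching the original declaration
    amqpAdmin.declareQueue(new Queue(serverConfig.getRegisterAndLoginQueueName(), true, false, false));
}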
