How to clear topics in tests with Spring Kafka - java

I'm writing a unit test with Spring Kafka 2.4 to prove that my Spring Boot setup is correct. I'm validating that SeekToCurrentBatchErrorHandler works as expected, which requires sending an incorrect message that should be retried. Unfortunately this incorrect message breaks other tests, because it will be retried forever.
Because of the above I'd like to ensure that each test is correctly isolated. I need to either:
Delete and recreate the Kafka topic with AdminClient
Seek to the end of the existing Kafka topic and commit new offsets
I was trying option 2 with the Consumer.seekToEnd() method, but Spring Kafka hides the created consumers behind a few layers of internal framework classes. I'm also not 100% sure whether this method can be called from the test thread, which is different from the listener thread.
What is the recommended way to clear topics in tests with Spring Kafka?

Best practice is to use unique topic names in each test to provide complete isolation; you could also stop the container(s), create a new Consumer with the same group.id and perform the seeks there.
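For illustration, a minimal sketch of the second suggestion, assuming the test autowires the KafkaListenerEndpointRegistry; the topic name, group id and bootstrap servers are placeholders for the example:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;

void skipToEndOfTopic(KafkaListenerEndpointRegistry registry) {
    // stop the listener containers so their consumers leave the group
    registry.getListenerContainers().forEach(MessageListenerContainer::stop);

    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-listener-group"); // same group.id as the listener
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
        List<TopicPartition> partitions = consumer.partitionsFor("my-topic").stream()
                .map(pi -> new TopicPartition(pi.topic(), pi.partition()))
                .collect(Collectors.toList());
        consumer.assign(partitions); // manual assignment, no rebalance needed
        Map<TopicPartition, OffsetAndMetadata> endOffsets = new HashMap<>();
        consumer.endOffsets(partitions)
                .forEach((tp, end) -> endOffsets.put(tp, new OffsetAndMetadata(end)));
        consumer.commitSync(endOffsets); // commit positions past the poison record(s)
    }

    // restart the containers for the next test
    registry.getListenerContainers().forEach(MessageListenerContainer::start);
}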

Related

Spring Rabbitmq - how to configure the consumer without using @RabbitListener

I'm writing some core features for developers who will use my library.
One of the features is the ability to switch message consumption between two different sources via a configuration flag, while the handling of these messages stays the same regardless of the source - for example, switching message consumption from Kafka to RabbitMQ, with the same business logic executed for each incoming message.
**I'm trying to figure out how to configure the consumer without using @RabbitListener - is that possible?**
RabbitAdmin - responsible for connecting to the source.
RabbitTemplate - responsible for publishing messages.
The only clue I found is to use Spring's SimpleMessageListenerContainer, but the issue with it is that there seems to be no way to set multiple onMessage handlers.
Also I saw the option to use MessageListenerAdapter in this answer
The main issue with these answers is that I'm going to deal with a number of queues and bindings, and these look like solutions for a single consumer in the whole application - am I wrong here?
You need a listener container for each listener.
You can use Boot's auto-configured listener container factory to create each container and add a listener to it.
If the same listener can consume from multiple queues, you can configure the container to listen to those queues.
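For illustration, a rough sketch of the one-container-per-listener idea; here the containers are built directly against Boot's auto-configured ConnectionFactory, and the queue names and bean names are made up for the example (the same containers could equally be created via the auto-configured SimpleRabbitListenerContainerFactory, as the answer suggests):

import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConsumerConfig {

    // one container (and therefore one listener) per queue
    @Bean
    public SimpleMessageListenerContainer ordersContainer(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("orders.queue");
        container.setMessageListener(message -> handle("orders", message.getBody()));
        return container;
    }

    @Bean
    public SimpleMessageListenerContainer paymentsContainer(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("payments.queue");
        container.setMessageListener(message -> handle("payments", message.getBody()));
        return container;
    }

    private void handle(String source, byte[] body) {
        // shared business logic, independent of the source
    }
}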

Spring, Junit, and Qpid - AMQSession.Dispatch won't stop so the test will not end

I'm running a few integration tests with Qpid (the Qpid setup is based on the answer to this previous question). The application is a Spring Integration flow that receives an incoming message and processes it. The test receives the messages, processes them, and then checks assertions on the outcome of the process.
I managed to start the Java broker in the @Before method of the test, and the test runs correctly, but when the test finishes the broker shuts down in @After, and then (on the client side I think, but I'm not sure), instead of wrapping up, new instances of org.apache.qpid.client.AMQSession.Dispatcher are created and they end up in an endless loop instead of closing.
[2016-05-20 17:29:28,457][DEBUG][org.springframework.jms.listener.DefaultMessageListenerContainer#0-2][org.apache.qpid.client.AMQSession.Dispatcher] - Dispatcher-2-Conn-2 created
[2016-05-20 17:29:28,457][DEBUG][Dispatcher-2-Conn-2][org.apache.qpid.client.AMQSession.Dispatcher] - Dispatcher-2-Conn-2 started
Then Dispatcher-2-Conn-2 will run forever in a loop without ending. Spring is using an org.apache.qpid.client.PooledConnectionFactory as an argument for a Spring JmsTemplate used for sending the messages.
So my questions are:
Does anybody have experience with the Qpid Java broker + JUnit (beyond the link below), and can you highlight anything glaringly obvious that I'm missing from the setup/teardown?
I think the rebel threads are on the client side rather than the broker. How would one get hold of all the sessions/connections/etc. created by Spring to properly terminate them?
Would killing the VM stop a Jenkins build that has this test? :D
(The tests used to run correctly with ActiveMQ, and with far less configuration and setup, but we've been told to move away from ActiveMQ so here we are).
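One hedged sketch for question 2, assuming the test has access to the Spring ApplicationContext: stop and shut down the JMS listener containers (which own the Qpid sessions and their dispatcher threads) before the broker goes away in @After.

import org.junit.After;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Autowired
private ApplicationContext context;

@After
public void tearDown() {
    // stop every listener container before the broker shuts down,
    // so no new dispatcher threads are spun up against a dead broker
    context.getBeansOfType(DefaultMessageListenerContainer.class).values()
           .forEach(container -> {
               container.stop();
               container.shutdown();
           });
    // ...then stop the embedded Qpid broker as before
}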

how to make persistent JMS messages with java spring boot application?

I am trying to make a queue with ActiveMQ and Spring Boot using this link and it looks fine. What I am unable to do is make this queue persistent after the application goes down. I think that the SimpleJmsListenerContainerFactory should be durable to achieve that, but when I set factory.setSubscriptionDurable(true) and factory.setClientId("someid") I am unable to receive messages any more. I would be grateful for any suggestions.
I guess you are embedding the broker in your application. While this is ok for integration tests and proof of concepts, you should consider having a broker somewhere in your infrastructure and connect to it. If you choose that, refer to the ActiveMQ documentation and you should be fine.
If you insist on embedding it, you need to provide a brokerUrl that enables message persistence.
Having said that, it looks like you are mixing up durable subscriptions and message persistence. The latter can be achieved by having a broker that actually stores the content of the queue somewhere, so that if the broker is stopped and restarted it can restore the content of its queue. The former is about being able to receive a message even if the listener was not active during some period of time.
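To make the distinction concrete, a small sketch assuming the auto-configured JmsTemplate and a made-up queue name: for a plain queue no clientId or durable subscription is needed; it is enough that the broker persists messages and that they are sent as persistent.

// persistent delivery for a plain queue; no clientId / durable subscription involved
jmsTemplate.setExplicitQosEnabled(true);
jmsTemplate.setDeliveryPersistent(true);
jmsTemplate.convertAndSend("orders.queue", "hello");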
You can enable persistence of messages using ActiveMQConnectionFactory.
As mentioned in the Spring Boot link you provided, this ActiveMQConnectionFactory gets created automatically by Spring Boot, so you can instead declare this bean manually in your application configuration and set various properties on it.
ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=true");
Here is the link http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html
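For completeness, a minimal sketch of declaring that factory as a bean (assuming you keep the embedded broker):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JmsConfig {

    @Bean
    public ActiveMQConnectionFactory connectionFactory() {
        // broker.persistent=true makes the embedded broker store messages on disk,
        // so they survive an application restart
        return new ActiveMQConnectionFactory("vm://localhost?broker.persistent=true");
    }
}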

Is Apache Kafka able to handle transactions?

We plan to use Kafka as a central component in our data warehouse, given that the producer is able to handle transactions (in short: rollbacks and commits).
When googling Kafka + transactions I find a lot of theoretical thoughts about how Kafka could handle transactions, but at the moment I do not see any function in the Java API that supports commits and rollbacks for the producer.
Has anybody had some experience with transactions and Kafka and can give me a hint?
I think what you are looking for is basically called transactional messaging in Kafka, where producers are capable of creating a session (aka a transactional session), sending messages within that session, and then choosing to either commit or abort the transaction.
[Source]: Please read the wiki for details
Actually, as of version 0.11.0.0 transactions are supported. See Guarantee unique global transaction for Kafka Producers
No; Kafka does not support transactions.
You can get certainty that a message has been produced to a partition, but once produced you are not able to roll back that message.
Since version 0.11.0 Apache Kafka supports transactions: https://cwiki.apache.org/confluence/display/KAFKA/Transactional+Messaging+in+Kafka
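For reference, a minimal sketch of the transactional producer API introduced in 0.11; the topic name, transactional id and bootstrap servers are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "dwh-loader-1");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.initTransactions();
try {
    producer.beginTransaction();
    producer.send(new ProducerRecord<>("orders", "key", "value"));
    producer.commitTransaction();      // records become visible to read_committed consumers
} catch (ProducerFencedException e) {
    producer.close();                  // fatal: another producer took over this transactional id
} catch (KafkaException e) {
    producer.abortTransaction();       // "rollback": read_committed consumers never see the records
}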

How to automate Kafka Testing

We have developed a system using Kafka to queue data and later consume that data to place orders for users.
We have tested certain things manually, but now our aim is to automate the process.
Is there any client available to test it? I found ways to unit test it using the Kafka client itself, but my aim is to test the system as a whole.
EDIT: our purpose is just API testing, i.e., just the back end, not the UI
You can start Kafka programmatically in your integration test. Kafka uses ZooKeeper, so first look at the ZooKeeper TestingServer - an instance of this class creates and starts the ZK server on the given port.
Next look at KafkaServerStartable.scala; you have to provide a configuration that points to your in-memory ZK server and invoke its startup() method. Here is some code:
import kafka.server.KafkaConfig;
import kafka.server.KafkaServerStartable;
import java.util.Properties;

public class KafkaTest {

    public KafkaTest() {
        // createProperties() must at least point zookeeper.connect at the TestingServer
        Properties properties = createProperties();
        KafkaConfig kafkaConfig = new KafkaConfig(properties);
        KafkaServerStartable kafka = new KafkaServerStartable(kafkaConfig);
        kafka.startup();
    }
}
Hope this helps :)
You can go for integration-testing or end-to-end testing by bringing up Kafka in a docker container. If you use Apache kafka-clients:2.1.0, then you don't need to deal with ZooKeeper at the API level while producing or consuming the records.
Dockerizing Kafka and testing against it helps to cover the scenarios for a single-node as well as a multi-node Kafka cluster. This way you don't have to test against a mock/in-memory Kafka first and the real Kafka later. This can be done using TestContainers.
If you have too many test scenarios to cover, you can go for declarative Kafka testing in a docker-compose style, which eliminates the Kafka client API coding.
Check out some handy examples here for validating produce and consume.
TestContainers project also supports docker-compose.
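For example, a short TestContainers sketch (the image tag is an assumption):

import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"));
kafka.start();
// point your producers/consumers at the container
String bootstrapServers = kafka.getBootstrapServers();
// ... run the end-to-end scenario ...
kafka.stop();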
As I understand it, you want to implement end-to-end tests starting from messages. Some colleagues and I recently did research on libraries, tools and frameworks for testing event-driven systems that use Kafka.
We found Zerocode, an automated API testing tool that uses a declarative language such as JSON or YAML. It supports REST, SOAP and, what we were interested in, messaging. It sends messages to and consumes messages from topics and makes assertions at the end; it is easy to learn and use. Here is the link for more details: Zerocode. It seems like a good option, although we are only starting to use it.
You will need to have Kafka brokers and their dependencies running for this solution to work, but nothing a docker-compose file and/or some scripts can't solve to bring up an environment for the tests.
Another way is to implement your own project with Kafka libraries and use the libraries to send and receive messages in the tests.
Unfortunately we couldn't find more options out there. Kafka has a proposal to create a test kit, but it's not in progress yet.
Unfortunately, the approach described by Pavel does not work for Kafka 2.8+ anymore. However, I could make our end-to-end tests with Kafka 3.2 work using the approach taken by KarelDB:
Properties props = TestUtils.createBrokerConfig(
        brokerId,
        zkConnect,
        false,
        false,
        TestUtils.RandomPort(),
        noInterBrokerSecurityProtocol,
        noFile,
        EMPTY_SASL_PROPERTIES,
        true,
        false,
        TestUtils.RandomPort(),
        false,
        TestUtils.RandomPort(),
        false,
        TestUtils.RandomPort(),
        Option.<String>empty(),
        1,
        false,
        1,
        (short) 1
);
KafkaConfig config = KafkaConfig.fromProps(props);
KafkaServer server = TestUtils.createServer(config, Time.SYSTEM);
// `createServer` will also start your Kafka server.
// To shutdown:
server.shutdown();
