I tried some things with spring-cloud-stream. Everything works, and now I have tried to write some test cases. Unfortunately, they are not working. I reduced everything to the following (everything is in the same Boot app):
The Sender:
@EnableBinding(Sender.Emitter.class)
public class Sender {

    public interface Emitter {
        String CHANNEL = "emitter";

        @Output(CHANNEL)
        MessageChannel events();
    }

    private final Emitter emitter;

    public Sender(Emitter emitter) {
        this.emitter = emitter;
    }

    public void sendMessage(String message) {
        emitter.events().send(MessageBuilder.withPayload(message).build());
    }
}
The Receiver:
@EnableBinding(Receiver.Subscriber.class)
public class Receiver {

    public interface Subscriber {
        String CHANNEL = "subscriber";

        @Input(CHANNEL)
        SubscribableChannel events();
    }

    private String lastMessage;

    public String getLastMessage() {
        return lastMessage;
    }

    @StreamListener(Subscriber.CHANNEL)
    public void event(String message) {
        this.lastMessage = message;
    }
}
My config:
spring:
  cloud:
    stream:
      default-binder: rabbit
      bindings:
        emitter:
          destination: testtock
          content-type: application/json
        subscriber:
          destination: testtock
The Test:
@RunWith(SpringRunner.class)
@SpringBootTest
public class BasicTest {

    @Autowired
    private Receiver receiver;

    @Autowired
    private Sender sender;

    @Test
    public void test() throws InterruptedException {
        String message = UUID.randomUUID().toString();
        sender.sendMessage(message);
        //Thread.sleep(1000);
        assertEquals(message, receiver.getLastMessage());
    }
}
I want to use spring-cloud-stream-test-support for testing so that I don't need an AMQP message broker. Outside of testing I use RabbitMQ, and there everything works.
Maybe spring-cloud-stream-test-support does not really route messages? Or what is the problem here?
Maybe the spring-cloud-stream-test-support does not really route messages?

Correct; the test binder is just a harness; it doesn't route between bindings. It is also unusual to have a producer binding and a consumer binding for the same destination in the same app.
When you send a message in a test, you have to query the binder to verify the message was sent as expected. You use a MessageCollector to do that. See the documentation, and you can also look at the tests for some of the out-of-the-box apps.
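For illustration, here is a minimal sketch of what such a test could look like using the test binder's MessageCollector (org.springframework.cloud.stream.test.binder.MessageCollector); the test name and channel wiring are assumptions based on the code above:

@RunWith(SpringRunner.class)
@SpringBootTest
public class SenderTest {

    @Autowired
    private Sender sender;

    @Autowired
    private Sender.Emitter emitter;

    @Autowired
    private MessageCollector collector;

    @Test
    public void capturesSentMessage() {
        String message = UUID.randomUUID().toString();
        sender.sendMessage(message);

        // the test binder does not route messages; it captures what was
        // sent on each output channel in a queue you can inspect
        BlockingQueue<Message<?>> queue = collector.forChannel(emitter.events());
        Message<?> sent = queue.poll();
        assertEquals(message, sent.getPayload());
    }
}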
The spring-cloud-stream-test-support module provides the ability to test an individual Spring Cloud Stream application and uses the TestSupportBinder. Hence, it is not meant for end-to-end integration testing like the one you are attempting above.
For more information on using spring-cloud-stream-test-support and the TestSupportBinder, you can refer to the documentation here
I've configured a route to extract some data from exchanges and aggregate them; here is a simple summary:
@Component
@RequiredArgsConstructor
public class FingerprintHistoryRouteBuilder extends RouteBuilder {

    private final FingerprintHistoryService fingerprintHistoryService;

    // declared here for completeness; in the original these are
    // presumably injected configuration values
    private int aggregateCount;
    private Duration aggregateDuration;

    @Override
    public void configure() throws Exception {
        from("seda:httpFingerprint")
            .aggregate((AggregationStrategy) (oldExchange, newExchange) -> {
                final FingerprintHistory newFingerprint = extract(newExchange);
                if (oldExchange == null) {
                    List<FingerprintHistory> fingerprintHistories = new ArrayList<>();
                    fingerprintHistories.add(newFingerprint);
                    newExchange.getMessage().setBody(fingerprintHistories);
                    return newExchange;
                }
                final Message oldMessage = oldExchange.getMessage();
                final List<FingerprintHistory> fingerprintHistories =
                        (List<FingerprintHistory>) oldMessage.getBody(List.class);
                fingerprintHistories.add(newFingerprint);
                return oldExchange;
            })
            .constant(true) // correlation expression: aggregate everything together
            .completionSize(aggregateCount)
            .completionInterval(aggregateDuration.toMillis())
            .to("direct:processFingerprint")
            .end();

        from("direct:processFingerprint")
            .process(exchange -> {
                List<FingerprintHistory> fingerprintHistories = exchange.getMessage().getBody(List.class);
                fingerprintHistoryService.saveAll(fingerprintHistories);
            });
    }
}
The problem is that aggregation completion never triggers. For example, this is a sample of my test:
@SpringBootTest
class FingerprintHistoryRouteBuilderTest {

    @Autowired
    ProducerTemplate producerTemplate;

    @Autowired
    FingerprintHistoryRouteBuilder fingerprintHistoryRouteBuilder;

    @Autowired
    CamelContext camelContext;

    @MockBean
    FingerprintHistoryService historyService;

    @Test
    void api_whenAggregate() {
        UserSearchActivity activity = ActivityFactory.buildSampleSearchActivity("127.0.0.1", "salam", "finger");
        Exchange exchange = buildExchange();
        exchange.getMessage().setBody(activity);
        ReflectionTestUtils.setField(fingerprintHistoryRouteBuilder, "aggregateCount", 1);
        ReflectionTestUtils.setField(fingerprintHistoryRouteBuilder, "aggregateDuration", Duration.ofNanos(1));
        producerTemplate.send(FingerprintHistoryRouteBuilder.FINGERPRINT_HISTORY_ENDPOINT, exchange);
        Mockito.verify(historyService).saveAll(Mockito.any());
    }

    Exchange buildExchange() {
        DefaultExchange defaultExchange = new DefaultExchange(camelContext);
        defaultExchange.setMessage(new DefaultMessage(camelContext));
        return defaultExchange;
    }
}
with the following result:
Wanted but not invoked: fingerprintHistoryService bean.saveAll(
);
I built this simplified example, and the test passes, so it looks like your usage of aggregate is probably correct.
Have you considered that your Mockito.verify() call happens before the exchange finishes routing? You could test this by removing the verify call and adding a .log() statement to the FINGERPRINT_PROCESS_AGGREGATION route. If you see the log output during execution, you know the exchange is being routed as you expect. If that is the case, then your verify() call needs to be able to wait for the exchange to finish routing. I don't use Mockito much, but it looks like you can do this:
Mockito.verify(historyService, timeout(10000)).saveAll(Mockito.any());
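Alternatively, here is a hedged sketch using Camel's NotifyBuilder to block until the aggregated exchange has been fully routed before verifying; the endpoint URIs come from the route above, the rest is illustrative:

// wait until one exchange coming out of the aggregator has been fully routed
NotifyBuilder notify = new NotifyBuilder(camelContext)
        .from("direct:processFingerprint")
        .whenDone(1)
        .create();

producerTemplate.send("seda:httpFingerprint", exchange);

// blocks for up to 10 seconds until the aggregator has completed
// and the batch has been processed
assertTrue(notify.matches(10, TimeUnit.SECONDS));
Mockito.verify(historyService).saveAll(Mockito.any());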
I'm trying to receive a message through a gRPC service, send it to a Kafka Emitter, and return some value back.
@Singleton
@GrpcService
public class MessageService implements protobuf.MessageService {

    @Inject
    @Channel("hello-out")
    Emitter<Record<String, GeneratedMessageV3>> emitter;

    @Override
    public Uni<EnvelopeReply> processMessage(Envelope request) {
        return Uni.createFrom().completionStage(
            emitter.send(Record.of(request.getKey(), request))
        ).replaceWith(EnvelopeReply.newBuilder().build());
    }
}
During the build, I'm getting the following error:
Error injecting org.eclipse.microprofile.reactive.messaging.Emitter<io.smallrye.reactive.messaging.kafka.Record<java.lang.String, com.google.protobuf.GeneratedMessageV3>> com.test.MessageService.emitter
...
Caused by: javax.enterprise.inject.spi.DefinitionException: SRMSG00019: Unable to connect an emitter with the channel `hello-out`
It works properly with a REST resource.
Without going deeply into the topic, here's my solution:
You can't inject a Kafka Emitter directly into a gRPC service; it'll throw an exception.
GrpcService <- Emitter<Record...>
A possible reason (I'm sure the Quarkus team will reply below with the correct explanation :)) is that all GrpcServices are of @Singleton type and can't have lazily initialized properties; they need to have something directly injected, while the Emitter is generated at a later stage.
By adding a wrapper class you solve all of these headaches, so:
GrpcService <- KafkaService <- Emitter<Record...>
@ApplicationScoped
public class KafkaService {

    @Inject
    @Channel("hello-out")
    Emitter<Record<String, GeneratedMessageV3>> emitter;

    // Implement this part properly; added just for example
    public Emitter<Record<String, GeneratedMessageV3>> getEmitter() {
        return emitter;
    }
}
...
@Singleton
@GrpcService
public class MessageService implements protobuf.MessageService {

    @Inject
    KafkaService kafkaService;

    @Override
    public Uni<EnvelopeReply> processMessage(Envelope request) {
        // use metadata if needed
        Map<String, String> metadataMap = request.getMetadataMap();
        return Uni.createFrom().completionStage(
            kafkaService.getEmitter().send(Record.of(request.getKey(), request))
        ).replaceWith(EnvelopeReply.newBuilder().build());
    }
}
So, I have this @Component class for listening to a topic from Kafka:
@Component
@Data
@Slf4j
public class KafkaConsumer {

    public List<String> saveReserveStock = new ArrayList<>();

    @KafkaListener(topics = "topic")
    public void listenReserveStock(ConsumerRecord<?, ?> consumerRecord) {
        System.out.println("==================================================================");
        System.out.println("consuming records at: " + DateTime.now().toLocalDateTime());
        System.out.println("consuming topic: " + consumerRecord.topic());
        saveReserveStock.add(consumerRecord.value().toString());
        saveReserveStock.add("dummy data");
        saveReserveStock.forEach(System.out::println);
        System.out.println("consumed at: " + DateTime.now().toLocalDateTime());
        System.out.println("==================================================================");
        System.out.println("end at: " + DateTime.now().toLocalDateTime());
    }

    public void emptyConsumer() {
        saveReserveStock = new ArrayList<>();
    }
}
and this is the embedded Kafka configuration:
@Slf4j
@EnableKafka
public abstract class EmbeddedKafkaIntegrationTest {

    @Autowired
    protected static EmbeddedKafkaBroker embeddedKafkaBroker = new EmbeddedKafkaBroker(1, false);

    @Autowired
    protected KafkaConsumer kafkaConsumer;

    @Autowired
    private ReactorKafkaProducer reactorKafkaProducer;

    protected abstract void setUp();

    private static boolean started;

    @BeforeClass
    public static void createBroker() {
        log.info("start test class");
        Map<String, String> propertiesMap = new HashMap<>();
        propertiesMap.put("listeners", "PLAINTEXT://localhost:9092");
        embeddedKafkaBroker.brokerProperties(propertiesMap);
        if (!started) {
            try {
                embeddedKafkaBroker.afterPropertiesSet();
                log.info("before class - kafka connected to: " + embeddedKafkaBroker.getBrokersAsString());
            } catch (Exception e) {
                log.error("Embedded broker failed to start", e);
            }
            started = true;
        }
    }

    @Before
    public void doSetUp() {
        log.info("before - kafka connected to: " + embeddedKafkaBroker.getBrokersAsString());
        kafkaConsumer.emptyConsumer();
        this.setUp();
    }

    @After
    public void tearDown() {
        kafkaConsumer.emptyConsumer();
        embeddedKafkaBroker.getZookeeper().getLogDir().deleteOnExit();
    }

    @AfterClass
    public static void destroy() {
        log.info("end test class");
    }
}
Then in my test class I use @Autowired for that KafkaConsumer class, and I have the following to get the messages the listener has already consumed:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = {ImsStockApplication.class},
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Slf4j
public class IntegrationTest extends EmbeddedKafkaIntegrationTest {

    @Value("${local.server.port}")
    private int port;

    @Autowired
    private KafkaConsumer kafkaConsumer;

    @Autowired
    private ReactorKafkaProducer reactorKafkaProducer;

    @Before
    public void setUp() {
        RestAssured.port = port;
    }

    @Test
    public void success_SubDetail() {
        reactorKafkaProducer.send("topic", event).block();
        Awaitility.await().atMost(10, TimeUnit.SECONDS).untilAsserted(() -> {
            log.info("AWAITILITY AT: " + DateTime.now().toLocalDateTime());
            Assert.assertTrue(kafkaConsumer.getFailDecreaseGoodsReceipt().size() > 0);
            Assert.assertTrue(kafkaConsumer.getSaveReserveStock().size() > 0);
            Assert.assertTrue(kafkaConsumer.getSaveBindStock().size() > 0);
        });
    }
}
But the result is sometimes a failure (the list is empty)... It's as if the list variable is empty when it should not be. Below is the log where the listener receives the message and stores it in the list:
==================================================================
consuming records at: 2022-07-10T14:16:46.748
consuming topic: topic
{"id":9721,"eventId":"eventId","organizationCode":"ORG","createdDate":1657437282742,"lastModifiedDate":1657437282742,"routingId":"routingId"}
dummy data
consumed at: 2022-07-10T14:16:46.748
==================================================================
end at: 2022-07-10T14:16:46.748
And in my test class, when I try to access the variable, it is empty; the test keeps waiting for the list to be filled:
AWAITILITY AT: 2022-07-10T14:16:46.829
AWAITILITY AT: 2022-07-10T14:16:46.945
AWAITILITY AT: 2022-07-10T14:16:47.056
AWAITILITY AT: 2022-07-10T14:16:47.164
AWAITILITY AT: 2022-07-10T14:16:47.273
AWAITILITY AT: 2022-07-10T14:16:47.384
AWAITILITY AT: 2022-07-10T14:16:47.490
AWAITILITY AT: 2022-07-10T14:16:47.598
If we look at the timestamps, the list shouldn't be empty, right? So why does my test fail? Where did I go wrong?
Thanks
IMHO the question lacks information... so I am providing not a real answer but rather a series of possible answers that may help you find the solution.
There are many possible reasons for the test to fail, and it's not necessarily because of your test code.
One possible reason is that when you connect to Kafka you start to listen for the "latest" messages (offset = latest). In this case you won't be able to consume messages that are already in the topic.
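If the application is auto-configured by Spring Boot, a minimal sketch of forcing the test consumer to start from the beginning of the topic would be the following (property names from Spring Boot's Kafka support; the group id is illustrative):

spring:
  kafka:
    consumer:
      auto-offset-reset: earliest
      group-id: test-consumer-group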
While this could really be the answer to the question, maybe you can post the code that actually sends the message to the topic; that is the real question here.
Another possible reason is the number of partitions. If the listener is configured to use the same consumer group as other listeners that might exist in the running application, maybe it doesn't get the partition that eventually receives the message.
It's also possible that the reason is in the code itself, but again, you don't show all the configuration, or at least the configuration of the test.
An example of such a possible issue is that the @KafkaListener is not handled properly: Spring makes a bean from the component (after all, it can be autowired from within the test) but doesn't plug in the whole Kafka infrastructure under the hood.
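As a point of comparison, here is a hedged sketch of the usual spring-kafka-test wiring, where the @EmbeddedKafka annotation starts the broker and publishes its address as a Spring property so the auto-configured listener container actually connects to it (class and topic names are taken from the question; the rest is illustrative):

@RunWith(SpringRunner.class)
@SpringBootTest(classes = ImsStockApplication.class)
@EmbeddedKafka(partitions = 1, topics = "topic",
        bootstrapServersProperty = "spring.kafka.bootstrap-servers")
public class EmbeddedKafkaWiringTest {

    @Autowired
    private KafkaConsumer kafkaConsumer;

    // produce to "topic" and await the list exactly as in the original test
}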
In a Spring Boot application I'm using spring-cloud-stream for PubSub (spring-cloud-gcp-pubsub-stream-binder) to subscribe to a topic.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-pubsub-stream-binder</artifactId>
</dependency>
I use the @EnableBinding and @StreamListener annotations to set up the subscriber:
@EnableBinding(Sink.class)
class Subscriber {

    @StreamListener(INPUT)
    public void handleMessage(Message<String> message) {
        ...
    }
}
During the handling of the message it is possible that something goes wrong. In that case I throw an exception to make sure the message does not get acknowledged and is retried at a later time.
According to the Spring Cloud Stream documentation I should be able to use the properties
spring.cloud.stream.default.consumer.defaultRetryable=true
spring.cloud.stream.default.consumer.backOffInitialInterval=1000
spring.cloud.stream.default.consumer.backOffMultiplier=2.0
spring.cloud.stream.default.consumer.backOffMaxInterval=300000
spring.cloud.stream.default.consumer.maxAttempts=9999
or for a specific channel (input in this case)
spring.cloud.stream.bindings.input.consumer.defaultRetryable=true
spring.cloud.stream.bindings.input.consumer.backOffInitialInterval=1000
spring.cloud.stream.bindings.input.consumer.backOffMultiplier=2.0
spring.cloud.stream.bindings.input.consumer.backOffMaxInterval=300000
spring.cloud.stream.bindings.input.consumer.maxAttempts=9999
But those properties do not seem to be used in my application: the message gets retried every 100 ms regardless of the values used in the above properties.
Can anyone help me with setting the correct retry and/or backoff settings so that messages get retried accordingly?
A fully working minimal example to illustrate my issue can be found on GitHub and looks like this:
Producer:
@Component
public class Main {

    private static final Logger LOG = getLogger(Main.class);

    private boolean firstExecution = true;

    @Autowired
    private SuccessSwitch consumerSuccessSwitch;

    @Autowired
    private PubSubTemplate pubSubTemplate;

    @Scheduled(fixedDelay = 10000)
    public void doSomethingAfterStartup() {
        if (firstExecution) {
            firstExecution = false;
            consumerSuccessSwitch.letFail();
            pubSubTemplate.publish("topic", "payload");
            LOG.info("Message published");
        } else {
            consumerSuccessSwitch.letSucceed();
        }
    }
}
Consumer:
@EnableBinding(Sink.class)
class Subscriber {

    private static final Logger LOG = getLogger(Subscriber.class);

    @Autowired
    private SuccessSwitch successSwitch;

    private int retryCounter = 0;

    @StreamListener(INPUT)
    public void handleMessage(Message<String> message) {
        LOG.info("Received: {} for the {} time", message.getPayload(), ++retryCounter);
        if (!successSwitch.succeeded()) {
            throw new RuntimeException();
        }
        LOG.info("Received: {} times", retryCounter);
    }
}
Toggle ack/nack in consumer:
@Component
public class SuccessSwitch {

    private boolean success = false;

    public void letSucceed() {
        this.success = true;
    }

    public void letFail() {
        this.success = false;
    }

    public boolean succeeded() {
        return success;
    }
}
Looking at PubSubChannelProvisioner in the gcp-pubsub binder: when creating a subscription, the binder does not configure a retry policy. So unless the retry is somehow handled within spring-cloud-stream instead of by the underlying native Pub/Sub mechanisms, you are out of luck.
What I am considering doing is creating the subscription myself using PubSubAdmin; spring-cloud-stream will then see the existing subscription with the correct retry policy and use it.
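For example, here is a hedged sketch of pre-creating the subscription with PubSubAdmin; this assumes a spring-cloud-gcp version where createSubscription(Subscription.Builder) is available, and the topic and subscription names are illustrative:

@Bean
public ApplicationRunner createSubscriptionWithRetryPolicy(PubSubAdmin pubSubAdmin) {
    return args -> {
        // only create the subscription if it does not exist yet; the binder
        // will then reuse it instead of provisioning its own
        if (pubSubAdmin.getSubscription("my-subscription") == null) {
            pubSubAdmin.createSubscription(
                    Subscription.newBuilder()
                            .setName("my-subscription")
                            .setTopic("my-topic")
                            .setRetryPolicy(RetryPolicy.newBuilder()
                                    .setMinimumBackoff(Duration.newBuilder().setSeconds(1))
                                    .setMaximumBackoff(Duration.newBuilder().setSeconds(300))));
        }
    };
}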
I am working on a Spring MVC application and, thanks to users on SO, we already have working CometD chat functionality. Another feature we have in the application is notifications, and we would like to integrate real-time notifications as soon as they happen, kind of like what Facebook has.
Basically the idea is, whenever a new notification is created, it will be saved in the database, and its information from the backend has to be passed to the notifications for logged in users on unique channel for each user.
I would like to know if this approach will work, as it will take me some doing to route notifications to the chat class. Please note that I don't have an interface for the ChatServiceImpl class either. Is that okay? Enough talking, here's the code:
ChatServiceImpl:
@Named
@Singleton
@Service
public class ChatServiceImpl {

    @Inject
    private BayeuxServer bayeux;

    @Session
    private ServerSession serverSession;

    public void sendNotification(Notification notification, int id) {
        // And then I send the notification here as below, by extracting
        // information from the notification object (into `output`).
        ServerChannel serverChannel = bayeux.createChannelIfAbsent("/person/notification/" + id).getReference();
        serverChannel.setPersistent(true);
        serverChannel.publish(serverSession, output);
    }
}
The above class has no interface, so I was planning to use the method as follows:
@Service
@Transactional
public class GroupCanvasServiceImpl implements GroupCanvasService {

    private ChatServiceImpl chatService;

    public void someMethod() {
        chatService.sendNotification(notification, id);
    }
}
BayeuxInitializer:
@Component
public class BayeuxInitializer implements DestructionAwareBeanPostProcessor, ServletContextAware
{
    private BayeuxServer bayeuxServer;
    private ServerAnnotationProcessor processor;

    @Inject
    private void setBayeuxServer(BayeuxServer bayeuxServer)
    {
        this.bayeuxServer = bayeuxServer;
    }

    @PostConstruct
    private void init()
    {
        this.processor = new ServerAnnotationProcessor(bayeuxServer);
    }

    @PreDestroy
    private void destroy()
    {
        System.out.println("Bayeux in PreDestroy");
    }

    public Object postProcessBeforeInitialization(Object bean, String name) throws BeansException
    {
        processor.processDependencies(bean);
        processor.processConfigurations(bean);
        processor.processCallbacks(bean);
        return bean;
    }

    public Object postProcessAfterInitialization(Object bean, String name) throws BeansException
    {
        return bean;
    }

    public void postProcessBeforeDestruction(Object bean, String name) throws BeansException
    {
        processor.deprocessCallbacks(bean);
    }

    @Bean(initMethod = "start", destroyMethod = "stop")
    public BayeuxServer bayeuxServer()
    {
        return new BayeuxServerImpl();
    }

    public void setServletContext(ServletContext servletContext)
    {
        servletContext.setAttribute(BayeuxServer.ATTRIBUTE, bayeuxServer);
    }
}
Kindly let me know if this approach is okay. Thanks a lot.
The @Listener annotation is meant for methods that handle messages received from remote clients.
If you only need to send server-to-client messages, you don't strictly need to annotate any method with @Listener: it is enough that you retrieve the ServerChannel you want to publish to, and use it to publish the message.
In your particular case, it seems that you don't really need to broadcast a message on a channel for multiple subscribers, but you only need to send a message to a particular client, identified by the id parameter.
If that's the case, then it's probably better to just use peer-to-peer messaging in this way:
public void sendNotification(Notification notification, int id)
{
    ServerSession remoteClient = retrieveSessionFromId(id);
    remoteClient.deliver(serverSession, "/person/notification", notification);
}
This solution has the advantage of creating far fewer channels (you don't need a channel per id).
Even better, you can replace the /person/notification channel (which is a broadcast channel) with a service channel such as /service/notification.
In this way, it is clear that the channel used to convey notifications is for peer-to-peer communication (because service channels cannot be used to broadcast messages).
The retrieveSessionFromId() method relies on a mapping from user ids to sessions that you have to build upon user login; see for example the documentation about CometD authentication.
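For completeness, here is a hedged sketch of such a mapping using the CometD 3.x API; the class and method names are illustrative, not from the original post:

@Service
public class SessionRegistry {

    private final ConcurrentMap<Integer, ServerSession> sessions = new ConcurrentHashMap<>();

    // call this when a user logs in / handshakes, e.g. from a BayeuxServer
    // SessionListener or your authentication SecurityPolicy
    public void register(int userId, ServerSession session) {
        sessions.put(userId, session);
        // clean up when the client disconnects or times out
        session.addListener((ServerSession.RemoveListener) (s, timeout) -> sessions.remove(userId, s));
    }

    public ServerSession retrieveSessionFromId(int userId) {
        return sessions.get(userId);
    }
}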