How to make a real flux request? - java

I am starting to learn WebFlux from Spring Boot. I learned that for an endpoint of a RestController you can define a Flux request body, where I expect a real flux stream: the parts of the whole request arrive one after another and can also be processed one after another. However, after building a small example with a client and a server, I could not get this to work as expected.
So here is the snippet of the server:
@PostMapping("/digest")
public Flux<String> digest(@RequestBody Flux<String> text) {
    continuousMD5.reset();
    return text.log("server.request.").map(piece -> continuousMD5.update(piece)).log("server.response.");
}
Note: each piece of the text is sent to a continuousMD5 object, which accumulates the pieces and calculates and returns the intermediate MD5 hash value after each accumulation. The stream is logged before and after the MD5 calculation.
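The ContinuousMD5 class itself is not shown in the post; for reference, here is a minimal sketch of what such a helper could look like (the Base64 output format is inferred from the hash values in the logs below):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class ContinuousMD5 {

    private MessageDigest digest;

    public ContinuousMD5() {
        reset();
    }

    public void reset() {
        try {
            digest = MessageDigest.getInstance("MD5");
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available
        }
    }

    // Accumulates the piece and returns the intermediate digest without
    // destroying the running state (hence the clone).
    public String update(String piece) {
        try {
            digest.update(piece.getBytes(StandardCharsets.UTF_8));
            MessageDigest snapshot = (MessageDigest) digest.clone();
            return Base64.getEncoder().encodeToString(snapshot.digest());
        } catch (CloneNotSupportedException e) {
            throw new IllegalStateException(e);
        }
    }
}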
And here is the snippet of the client:
@PostConstruct
private void init() {
    webClient = webClientBuilder.baseUrl(reactiveServerUrl).build();
}

@PostMapping(value = "/send", consumes = MediaType.TEXT_PLAIN_VALUE)
public Flux<String> send(@RequestBody Flux<String> text) {
    return webClient.post()
            .uri("/digest")
            .accept(MediaType.TEXT_PLAIN)
            .body(text.log("client.request."), String.class)
            .retrieve().bodyToFlux(String.class).log("client.response.");
}
Note: the client accepts a flux stream of some text, logs the stream, and sends it to the server (as a flux stream).
Surprisingly, I managed to send a REST request and have the client receive a flux stream with the following command line:
for i in $(seq 1 100); do echo "The message $i"; done | http POST :8080/send Content-Type:text/plain
and I could see the following in the log of the client:
2019-05-09 17:02:08.604 INFO 3462 --- [ctor-http-nio-2] client.response.Flux.MonoFlatMapMany.2 : onSubscribe(MonoFlatMapMany.FlatMapManyMain)
2019-05-09 17:02:08.606 INFO 3462 --- [ctor-http-nio-2] client.response.Flux.MonoFlatMapMany.2 : request(1)
2019-05-09 17:02:08.649 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : onSubscribe(FluxSwitchIfEmpty.SwitchIfEmptySubscriber)
2019-05-09 17:02:08.650 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : request(32)
2019-05-09 17:02:08.674 INFO 3462 --- [ctor-http-nio-2] client.request.Flux.SwitchIfEmpty.1 : onNext(The message 1)
2019-05-09 17:02:08.676 INFO 3462 --- [ctor-http-nio-2] client.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.676 INFO 3462 --- [ctor-http-nio-2] client.request.Flux.SwitchIfEmpty.1 : onNext(The message 2)
...
2019-05-09 17:02:08.710 INFO 3462 --- [ctor-http-nio-2] client.request.Flux.SwitchIfEmpty.1 : onNext(The message 100)
2019-05-09 17:02:08.710 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.710 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.710 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.711 INFO 3462 --- [ctor-http-nio-2] client.request.Flux.SwitchIfEmpty.1 : onComplete()
2019-05-09 17:02:08.711 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.711 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.860 INFO 3462 --- [ctor-http-nio-6] client.response.Flux.MonoFlatMapMany.2 : onNext(CSubeSX3yIVP2CD6FRlojg==)
2019-05-09 17:02:08.862 INFO 3462 --- [ctor-http-nio-6] client.response.Flux.MonoFlatMapMany.2 : onComplete()
^C2019-05-09 17:02:47.393 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : cancel()
Each piece of the text was recognized as an element of a flux stream and was requested separately.
But in the server log:
2019-05-09 17:02:08.811 INFO 3475 --- [ctor-http-nio-2] server.request.Flux.SwitchIfEmpty.1 : onSubscribe(FluxSwitchIfEmpty.SwitchIfEmptySubscriber)
2019-05-09 17:02:08.813 INFO 3475 --- [ctor-http-nio-2] server.response.Flux.Map.2 : onSubscribe(FluxMap.MapSubscriber)
2019-05-09 17:02:08.814 INFO 3475 --- [ctor-http-nio-2] server.response.Flux.Map.2 : request(1)
2019-05-09 17:02:08.814 INFO 3475 --- [ctor-http-nio-2] server.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.838 INFO 3475 --- [ctor-http-nio-2] server.request.Flux.SwitchIfEmpty.1 : onNext(The message 1The message 2The message 3 ... The message 99The message 100)
2019-05-09 17:02:08.840 INFO 3475 --- [ctor-http-nio-2] server.response.Flux.Map.2 : onNext(CSubeSX3yIVP2CD6FRlojg==)
2019-05-09 17:02:08.852 INFO 3475 --- [ctor-http-nio-2] server.response.Flux.Map.2 : request(32)
2019-05-09 17:02:08.852 INFO 3475 --- [ctor-http-nio-2] server.request.Flux.SwitchIfEmpty.1 : request(32)
2019-05-09 17:02:08.852 INFO 3475 --- [ctor-http-nio-2] server.request.Flux.SwitchIfEmpty.1 : onComplete()
2019-05-09 17:02:08.852 INFO 3475 --- [ctor-http-nio-2] server.response.Flux.Map.2 : onComplete()
2019-05-09 17:02:47.394 INFO 3475 --- [ctor-http-nio-2] server.response.Flux.Map.2 : cancel()
2019-05-09 17:02:47.394 INFO 3475 --- [ctor-http-nio-2] server.request.Flux.SwitchIfEmpty.1 : cancel()
I saw that all the pieces of the text arrived at the server at once and were therefore processed as one big element of the flux stream (one can also verify that only one MD5 hash was calculated instead of 100).
What I would expect is that the server also receives the pieces of text from the client as separate elements of a flux stream; otherwise, for the server it is not really reactive but just a normal blocking request.
Could anyone please help me understand how to make a real reactive flux request using WebFlux? Thanks!
Update
I used a similar command line to make a REST request directly against the server and could see that the server received the pieces of the text ("The message x") as a flux stream. So I guess the server is fine; the problem may now be in the client: how can I use the WebClient to make a real flux REST request?

If you want to achieve the streaming effect, you can:
Use a different content type that supports streaming, such as application/stream+json (a sketch follows below). Check out the following SO thread about it:
Spring WebFlux Flux behavior with non streaming application/json
Change the underlying protocol to one that better fits the streaming model, for instance WebSockets: https://howtodoinjava.com/spring-webflux/reactive-websockets/
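For the first option, a sketch (not from the original answer) of advertising a streaming media type on the server side; MediaType.APPLICATION_STREAM_JSON_VALUE is Spring's constant for application/stream+json (newer Spring versions prefer application/x-ndjson):

@PostMapping(value = "/digest", produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Flux<String> digest(@RequestBody Flux<String> text) {
    // same body as before; only the produced content type changes,
    // so the response is flushed element by element
    continuousMD5.reset();
    return text.log("server.request.")
               .map(piece -> continuousMD5.update(piece))
               .log("server.response.");
}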

After trying things out and reading more documentation, I finally figured out how to make my example work:
For the client, I need to make sure that the request body sent to the server is also separated by line feeds:
@PostMapping(value = "/send", consumes = MediaType.TEXT_PLAIN_VALUE)
public Flux<String> send(@RequestBody Flux<String> text) {
    return webClient.post()
            .uri("/digest")
            .accept(MediaType.TEXT_PLAIN)
            .body(
                text
                    .onBackpressureBuffer()
                    .log("client.request.")
                    .map(piece -> piece + "\n"),
                String.class)
            .retrieve().bodyToFlux(String.class)
            .onBackpressureBuffer()
            .log("client.response.");
}
This achieves the same effect as making the REST request via the command line, since for i in $(seq 1 100); do echo "The message $i"; done outputs "The message x" as separate lines.
Similarly, for the server, the response body also needs to be separated by line feeds so that the client can decode the body into a flux:
@PostMapping("/digest")
public Flux<String> digest(@RequestBody Flux<String> text) {
    continuousMD5.reset();
    return text
            .log("server.request.")
            .map(piece -> continuousMD5.update(piece))
            .map(piece -> piece + "\n")
            .log("server.response.");
}
I also added onBackpressureBuffer() to the client before sending and after receiving so that there is no overflow exception when sending a large number of messages.
However, even though the above code "works", it is not doing real streaming: as I can see in the logs, the server started to receive the request body only after the client had sent the whole request body, and the client started to receive the response body only after the server had sent the whole response body. Perhaps, as Ilya Zinkovich mentioned, using the WebSocket protocol may achieve a real streaming effect, but I have not tried it out yet.
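For completeness, an untested sketch of what the WebSocket variant could look like with Spring WebFlux's reactive WebSocket API (the handler name is made up; this is not part of the original post):

import org.springframework.web.reactive.socket.WebSocketHandler;
import org.springframework.web.reactive.socket.WebSocketSession;
import reactor.core.publisher.Mono;

public class DigestWebSocketHandler implements WebSocketHandler {

    private final ContinuousMD5 continuousMD5 = new ContinuousMD5();

    @Override
    public Mono<Void> handle(WebSocketSession session) {
        // digest each inbound text frame and emit the intermediate hash as its own frame
        return session.send(session.receive()
                .map(message -> continuousMD5.update(message.getPayloadAsText()))
                .map(session::textMessage));
    }
}

The handler would still need to be mapped to a URL, e.g. via a SimpleUrlHandlerMapping plus a WebSocketHandlerAdapter bean.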

Related

Spring Cloud Stream send/consume message to different partitions with KafkaHeaders.Message_KEY

I am trying to implement a prototype for a messaging system using Spring Cloud Stream. I selected Apache Kafka as the binder. I created a topic with 2 partitions for scalability. Then I tried to send different messages to different partitions using the following REST API method.
I set 2 different message keys for the 2 partitions.
@PostMapping("/publish")
public void publish(@RequestParam String message) {
    log.debug("REST request the message : {} to send to Kafka topic ", message);
    Message<String> message1 = MessageBuilder.withPayload("Hello from a")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "node1")
            .build();
    Message<String> message2 = MessageBuilder.withPayload("Hello from b")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "node1")
            .build();
    Message<String> message3 = MessageBuilder.withPayload("Hello from c")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "node1")
            .build();
    Message<String> message4 = MessageBuilder.withPayload("Hello from d")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "node2")
            .build();
    Message<String> message5 = MessageBuilder.withPayload("Hello from e")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "node2")
            .build();
    Message<String> message6 = MessageBuilder.withPayload("Hello from f")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "node2")
            .build();
    // 'output' is the StreamBridge used to publish to the binding
    // (its declaration is not shown in this snippet)
    output.send("simulatePf-out-0", message1);
    output.send("simulatePf-out-0", message2);
    output.send("simulatePf-out-0", message3);
    output.send("simulatePf-out-0", message4);
    output.send("simulatePf-out-0", message5);
    output.send("simulatePf-out-0", message6);
}
This is my application.yml for the producer application:
cloud:
  stream:
    kafka:
      binder:
        replicationFactor: 2
        auto-create-topics: true
        brokers: localhost:9092,localhost:9093,localhost:9094
        auto-add-partitions: true
      bindings:
        simulatePf-out-0:
          producer:
            configuration:
              key.serializer: org.apache.kafka.common.serialization.StringSerializer
              value.serializer: org.springframework.kafka.support.serializer.JsonSerializer
    bindings:
      simulatePf-out-0:
        producer:
          useNativeEncoding: true
          partition-count: 3
        destination: pf-topic
        content-type: text/plain
        group: dsa-back-end
To test parallelism, I created a consumer application that reads messages from pf-topic. This is the configuration of the consumer application:
cloud:
  stream:
    kafka:
      binder:
        replicationFactor: 2
        auto-create-topics: true
        brokers: localhost:9092, localhost:9093, localhost:9094
        min-partition-count: 2
      bindings:
        simulatePf-in-0:
          consumer:
            configuration:
              key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
              value.deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
    bindings:
      simulatePf-in-0:
        destination: pf-topic
        content-type: text/plain
        group: powerflowservice
        consumer:
          use-native-decoding: true
I created a function in the consumer application to consume messages:
@Bean
public Consumer<Message<?>> simulatePf() {
    return message -> {
        log.info("header " + message.getHeaders());
        log.info("received " + message.getPayload());
    };
}
Now it is time for testing. To test parallelism, I ran 2 instances of the Spring Boot consumer application. I was expecting one consumer to consume messages from one partition and the other consumer to consume messages from the other partition. So I expected that message a, message b, and message c would be consumed by consumer one, and message d, message e, and message f would be consumed by the other consumer, because I set different message keys to target different partitions. But all messages are consumed by only one application:
2022-06-30 20:34:48.895 INFO 11860 --- [container-0-C-1] c.s.powerflow.config.AsyncConfiguration : header {deliveryAttempt=1, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=node1, kafka_receivedTopic=pf-topic, skip-input-type-conversion=true, kafka_offset=270, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#1eaf51df, source-type=streamBridge, id=a77d12f2-f184-0f2f-6a76-147803dd43f3, kafka_receivedPartitionId=0, kafka_receivedTimestamp=1656610488838, kafka_groupId=powerflowservice, timestamp=1656610488890}
2022-06-30 20:34:48.901 INFO 11860 --- [container-0-C-1] c.s.powerflow.config.AsyncConfiguration : received Hello from a
2022-06-30 20:34:48.929 INFO 11860 --- [container-0-C-1] c.s.powerflow.config.AsyncConfiguration : header {deliveryAttempt=1, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=node1, kafka_receivedTopic=pf-topic, skip-input-type-conversion=true, kafka_offset=271, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#1eaf51df, source-type=streamBridge, id=2e89f9b7-b6e7-482f-3c46-f73b2ad0705c, kafka_receivedPartitionId=0, kafka_receivedTimestamp=1656610488840, kafka_groupId=powerflowservice, timestamp=1656610488929}
2022-06-30 20:34:48.932 INFO 11860 --- [container-0-C-1] c.s.powerflow.config.AsyncConfiguration : received Hello from b
2022-06-30 20:34:48.933 INFO 11860 --- [container-0-C-1] c.s.powerflow.config.AsyncConfiguration : header {deliveryAttempt=1, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=node1, kafka_receivedTopic=pf-topic, skip-input-type-conversion=true, kafka_offset=272, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#1eaf51df, source-type=streamBridge, id=15640532-b57f-b58e-62e7-c2bc9375fdf0, kafka_receivedPartitionId=0, kafka_receivedTimestamp=1656610488841, kafka_groupId=powerflowservice, timestamp=1656610488933}
2022-06-30 20:34:48.934 INFO 11860 --- [container-0-C-1] c.s.powerflow.config.AsyncConfiguration : received Hello from c
2022-06-30 20:34:48.935 INFO 11860 --- [container-0-C-1] c.s.powerflow.config.AsyncConfiguration : header {deliveryAttempt=1, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=node2, kafka_receivedTopic=pf-topic, skip-input-type-conversion=true, kafka_offset=273, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#1eaf51df, source-type=streamBridge, id=590f0fb7-042f-e134-d214-ead570e42fe3, kafka_receivedPartitionId=0, kafka_receivedTimestamp=1656610488842, kafka_groupId=powerflowservice, timestamp=1656610488934}
2022-06-30 20:34:48.938 INFO 11860 --- [container-0-C-1] c.s.powerflow.config.AsyncConfiguration : received Hello from d
2022-06-30 20:34:48.940 INFO 11860 --- [container-0-C-1] c.s.powerflow.config.AsyncConfiguration : header {deliveryAttempt=1, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=node2, kafka_receivedTopic=pf-topic, skip-input-type-conversion=true, kafka_offset=274, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#1eaf51df, source-type=streamBridge, id=9a67e68b-95d4-a02e-cc14-ac30c684b639, kafka_receivedPartitionId=0, kafka_receivedTimestamp=1656610488842, kafka_groupId=powerflowservice, timestamp=1656610488940}
2022-06-30 20:34:48.941 INFO 11860 --- [container-0-C-1] c.s.powerflow.config.AsyncConfiguration : received Hello from e
2022-06-30 20:34:48.943 INFO 11860 --- [container-0-C-1] c.s.powerflow.config.AsyncConfiguration : header {deliveryAttempt=1, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=node2, kafka_receivedTopic=pf-topic, skip-input-type-conversion=true, kafka_offset=275, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#1eaf51df, source-type=streamBridge, id=333269af-bbd5-12b0-09de-8bd7959ebf08, kafka_receivedPartitionId=0, kafka_receivedTimestamp=1656610488843, kafka_groupId=powerflowservice, timestamp=1656610488943}
2022-06-30 20:34:48.943 INFO 11860 --- [container-0-C-1] c.s.powerflow.config.AsyncConfiguration : received Hello from f
Could you help me with what I am missing?
You are only setting the message key as a header when sending. You can add the KafkaHeaders.PARTITION header on the message to force a specific partition.
If you don't want to add a hard-coded partition through the header, you can set a partition key SpEL expression or a partition key extractor bean in your application. Both of these mechanisms are Spring Cloud Stream specific. If you provide either of these, you still need to tell Spring Cloud Stream how you want to select the partition. For that, you can use a partition selector SpEL expression or a partition selector strategy. If you don't provide them, a default selector strategy is used: the hashCode of the message key modulo the number of topic partitions.
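For example, a sketch of forcing the partition via the header (the constant is KafkaHeaders.PARTITION in recent spring-kafka versions; older versions call it KafkaHeaders.PARTITION_ID):

Message<String> message1 = MessageBuilder.withPayload("Hello from a")
        .setHeader(KafkaHeaders.MESSAGE_KEY, "node1")
        .setHeader(KafkaHeaders.PARTITION, 0) // hard-coded target partition
        .build();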
I think you asked another related question yesterday and I linked this blog in my answer. In the last sections of that blog, all these details are explained.
Quoting from the blog:
If you don’t provide a partition key expression or partition key extractor bean, then Spring Cloud Stream will completely stay out of the business of making any partition decision for you. In that case, if the topic has more than one partition, Kafka’s default partitioning mechanisms will be triggered. By default, Kafka uses a DefaultPartitioner, which, if the message has a key (see above), uses the hash of this key for computing the partition.
I think you are seeing Kafka's default behavior in your application.
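For reference, a sketch of what Kafka's default keyed partitioning boils down to (murmur2 hash of the serialized key, modulo the partition count); with only two distinct keys it is quite possible that both map to the same partition:

import org.apache.kafka.common.utils.Utils;

static int partitionForKey(byte[] keyBytes, int numPartitions) {
    // same computation as Kafka's DefaultPartitioner for keyed records
    return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
}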

Retry queue binding to RabbitMQ exchange

Using Spring Boot with RabbitMQ, I'm trying to create an exchange that can have any number of queues, one for each of the microservices, so that each of them will get the same message.
The Producer microservice has a fanout exchange defined.
Each Consumer microservice creates a queue and attempts to bind it to the Producer's exchange.
When the Producer is started first, the exchange is created, and the starting Consumer microservices bind to it. However, when the Consumer microservices are started first, they will not bind, as there is nothing to bind to yet, giving this log:
2020-01-13 22:24:49.640 INFO [,,,] 88649 --- [ main] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2020-01-13 22:24:49.685 INFO [,,,] 88649 --- [ main] o.s.a.r.c.CachingConnectionFactory : Created new connection: rabbitConnectionFactory#7746ae18:0/SimpleConnection#428ea503 [delegate=amqp://guest#127.0.0.1:5672/, localPort= 62282]
2020-01-13 22:24:49.726 ERROR [,,,] 88649 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'abc-exchange' in vhost '/', class-id=50, method-id=20)
2020-01-13 22:24:50.748 ERROR [,,,] 88649 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'abc-exchange' in vhost '/', class-id=50, method-id=20)
2020-01-13 22:24:52.754 ERROR [,,,] 88649 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'abc-exchange' in vhost '/', class-id=50, method-id=20)
2020-01-13 22:24:56.763 ERROR [,,,] 88649 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'abc-exchange' in vhost '/', class-id=50, method-id=20)
2020-01-13 22:25:01.794 ERROR [,,,] 88649 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'abc-exchange' in vhost '/', class-id=50, method-id=20)
2020-01-13 22:25:01.807 INFO [,,,] 88649 --- [ main] o.s.a.r.l.SimpleMessageListenerContainer : Broker not available; cannot force queue declarations during start: java.io.IOException
2020-01-13 22:25:01.859 DEBUG [,,,] 88649 --- [ main] .b.c.i.c.AppConfig$CustomHttpTraceFilter : Filter 'httpTraceFilter' configured for use
How can I configure the Consumer microservices (or the Producer) to keep trying to bind their queues to the Producer's exchange even if they were started before the exchange existed?
Another approach would be for the Producer to create the queues dynamically based on information from the starting Consumer microservices, which would then listen on the given queues. However, the issue would still be there: if a queue is not created fast enough, or a Consumer is created before the Producer, the listener will throw an exception.
Can't you specify the binding beans in your consumer app?
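A hedged sketch of what that could look like with plain Spring AMQP (bean, exchange, and queue names are placeholders): if the consumer declares the exchange, queue, and binding as beans, RabbitAdmin declares all three on the first successful connection, so startup order no longer matters.

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.FanoutExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConsumerBindingConfig {

    @Bean
    FanoutExchange abcExchange() {
        // durable, not auto-delete; declaring it here makes the consumer
        // independent of whether the producer created it first
        return new FanoutExchange("abc-exchange", true, false);
    }

    @Bean
    Queue consumerQueue() {
        return new Queue("my-consumer-queue", true);
    }

    @Bean
    Binding consumerBinding(Queue consumerQueue, FanoutExchange abcExchange) {
        return BindingBuilder.bind(consumerQueue).to(abcExchange);
    }
}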

Spring cloud stream / Kafka exceptions

I have problems with a service which uses Spring Cloud Stream and Kafka. The service had been working fine, but yesterday it started reporting a series of exceptions on startup:
Checking for rethrow: count=2
2018-09-11 10:43:34.904 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.s.retry.support.RetryTemplate : Retry: count=2
2018-09-11 10:43:34.904 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.s.integration.channel.DirectChannel : preSend on channel 'payment-reply', message: GenericMessage [payload=byte[1478], headers={errorChannel=e61450f9-fa47-446f-95ae-5021868cadfa:602, deliveryAttempt=3, X-B3-ParentSpanId=a9fe9b1c87b14698, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedTopic=paymentResponse, spanTraceId=966a10371583367f, spanId=7aa71302bc18bb4c, spanParentSpanId=a9fe9b1c87b14698, replyChannel=e61450f9-fa47-446f-95ae-5021868cadfa:601, nativeHeaders={spanTraceId=[966a10371583367f], spanId=[7aa71302bc18bb4c], spanParentSpanId=[a9fe9b1c87b14698], spanSampled=[0]}, kafka_offset=2299, X-B3-SpanId=7aa71302bc18bb4c, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#2fb81502, X-B3-Sampled=0, X-B3-TraceId=966a10371583367f, spanSampled=0, kafka_receivedPartitionId=0, contentType=application/json, kafka_receivedTimestamp=1536592853999}]
2018-09-11 10:43:34.904 DEBUG [payment-gateway,966a10371583367f,c94b21ccaaed668b,false] 1 --- [container-0-C-1] o.s.c.s.i.m.TracingChannelInterceptor : Created a new span in pre sendNoopSpan{context=966a10371583367f/c94b21ccaaed668b}
2018-09-11 10:43:34.905 DEBUG [payment-gateway,966a10371583367f,4476713d70434d52,false] 1 --- [container-0-C-1] o.s.c.s.i.m.TracingChannelInterceptor : Created a new span in before handleNoopSpan{context=966a10371583367f/e1d1a2a6b9ad093e}
2018-09-11 10:43:34.905 DEBUG [payment-gateway,966a10371583367f,4476713d70434d52,false] 1 --- [container-0-C-1] o.s.c.s.i.m.TracingChannelInterceptor : Will finish the current span after message handled NoopSpan{context=966a10371583367f/4476713d70434d52}
2018-09-11 10:43:34.905 DEBUG [payment-gateway,966a10371583367f,c94b21ccaaed668b,false] 1 --- [container-0-C-1] o.s.c.s.i.m.TracingChannelInterceptor : Will finish the current span after completion NoopSpan{context=966a10371583367f/c94b21ccaaed668b}
2018-09-11 10:43:34.905 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2018-09-11 10:43:35.001 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] essageListenerContainer$ListenerConsumer : Commit list: {}
2018-09-11 10:43:35.002 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-2, groupId=payment-gateway] Fetch READ_UNCOMMITTED at offset 0 for partition refundResponse-0 returned fetch data (error=NONE, highWaterMark=0, lastStableOffset = -1, logStartOffset = 0, abortedTransactions = null, recordsSizeInBytes=0)
2018-09-11 10:43:35.002 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-2, groupId=payment-gateway] Added READ_UNCOMMITTED fetch request for partition refundResponse-0 at offset 0 to node 10.244.0.194:9092 (id: 2 rack: null)
2018-09-11 10:43:35.002 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-2, groupId=payment-gateway] Sending READ_UNCOMMITTED fetch for partitions [refundResponse-0] to broker 10.244.0.194:9092 (id: 2 rack: null)
2018-09-11 10:43:35.003 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.s.retry.support.RetryTemplate : Checking for rethrow: count=3
2018-09-11 10:43:35.003 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.s.retry.support.RetryTemplate : Retry failed last attempt: count=3
2018-09-11 10:43:35.004 DEBUG [payment-gateway,,,] 1 --- [container-0-C-1] o.s.i.h.a.ErrorMessageSendingRecoverer : Sending ErrorMessage: failedMessage: GenericMessage [payload=byte[1478], headers={errorChannel=e61450f9-fa47-446f-95ae-5021868cadfa:602, deliveryAttempt=3, X-B3-ParentSpanId=7aa71302bc18bb4c, kafka_timestampType=CREATE_TIME, kafka_receivedTopic=paymentResponse, spanTraceId=966a10371583367f, spanId=c94b21ccaaed668b, spanParentSpanId=7aa71302bc18bb4c, replyChannel=e61450f9-fa47-446f-95ae-5021868cadfa:601, nativeHeaders={spanTraceId=[966a10371583367f], spanId=[c94b21ccaaed668b], spanParentSpanId=[7aa71302bc18bb4c], spanSampled=[0]}, kafka_offset=2299, X-B3-SpanId=c94b21ccaaed668b, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#2fb81502, X-B3-Sampled=0, X-B3-TraceId=966a10371583367f, id=83994228-ba45-2303-1f7e-2eaf8f49c400, spanSampled=0, kafka_receivedPartitionId=0, contentType=application/json, kafka_receivedTimestamp=1536592853999, timestamp=1536662614904}]
2018-09-11 08:44:19.837 ERROR [payment-gateway,bd9888a7d590ebf7,535db983ae0aedab,false] 1 --- [container-0-C-1] o.s.integration.handler.LoggingHandler :
org.springframework.messaging.MessageDeliveryException:
Dispatcher has no subscribers for channel 'application-1.payment-reply'.; nested exception is org.springframework.integration.MessageDispatchingException:
Dispatcher has no subscribers, failedMessage=GenericMessage [payload=byte[1197], headers={errorChannel=e61450f9-fa47-446f-95ae-5021868cadfa:426, deliveryAttempt=3, X-B3-ParentSpanId=760139e0bc5d9ac0, kafka_timestampType=CREATE_TIME, kafka_receivedTopic=paymentResponse, spanTraceId=bd9888a7d590ebf7, spanId=5c6ac2c521faf6e7, spanParentSpanId=760139e0bc5d9ac0, replyChannel=e61450f9-fa47-446f-95ae-5021868cadfa:425, nativeHeaders={spanTraceId=[bd9888a7d590ebf7], spanId=[535db983ae0aedab], spanParentSpanId=[5c6ac2c521faf6e7], spanSampled=[0], X-B3-TraceId=[bd9888a7d590ebf7], X-B3-SpanId=[535db983ae0aedab], X-B3-ParentSpanId=[5c6ac2c521faf6e7], X-B3-Sampled=[0]}, kafka_offset=2258, X-B3-SpanId=5c6ac2c521faf6e7, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#59715a4a, X-B3-Sampled=0, X-B3-TraceId=bd9888a7d590ebf7, id=88531659-3fb0-a59f-bb69-54c9ba82d608, spanSampled=0, kafka_receivedPartitionId=0, contentType=application/json, kafka_receivedTimestamp=1536592840192, timestamp=1536655459828}], failedMessage=GenericMessage [payload=byte[1197], headers={errorChannel=e61450f9-fa47-446f-95ae-5021868cadfa:426, deliveryAttempt=3, X-B3-ParentSpanId=760139e0bc5d9ac0, kafka_timestampType=CREATE_TIME, kafka_receivedTopic=paymentResponse, spanTraceId=bd9888a7d590ebf7, spanId=5c6ac2c521faf6e7, spanParentSpanId=760139e0bc5d9ac0, replyChannel=e61450f9-fa47-446f-95ae-5021868cadfa:425, nativeHeaders={spanTraceId=[bd9888a7d590ebf7], spanId=[535db983ae0aedab], spanParentSpanId=[5c6ac2c521faf6e7], spanSampled=[0], X-B3-TraceId=[bd9888a7d590ebf7], X-B3-SpanId=[535db983ae0aedab], X-B3-ParentSpanId=[5c6ac2c521faf6e7], X-B3-Sampled=[0]}, kafka_offset=2258, X-B3-SpanId=5c6ac2c521faf6e7, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#59715a4a, X-B3-Sampled=0, X-B3-TraceId=bd9888a7d590ebf7, id=88531659-3fb0-a59f-bb69-54c9ba82d608, spanSampled=0, kafka_receivedPartitionId=0, contentType=application/json, kafka_receivedTimestamp=1536592840192, timestamp=1536655459828}]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
After some time we then see exceptions like this:
Caused by: org.springframework.messaging.core.DestinationResolutionException: failed to look up MessageChannel with name '946859a6-bc27-466d-91ba-3da93af50ac9:1' in the BeanFactory.; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named '946859a6-bc27-466d-91ba-3da93af50ac9:1' available
The connection to Kafka is configured with the property spring.kafka.bootstrap-servers=kafka.kafka:9092,
and the topics are configured with Spring Cloud Stream properties: spring.cloud.stream.bindings.[topic-name].destination=blah.
The interaction with Kafka goes via Spring Integration with code like this:
@MessagingGateway
public interface StreamGateway {
    @Gateway(requestChannel = KafkaConfig.ENRICH_PAYMENT, replyChannel = ChannelNames.PAYMENT_REPLY, replyTimeout = 10000)
    String processPayment(String payload);
}

// Different class:
private final StreamGateway gateway;
...
gateway.processPayment(message)
This is running on an Azure Kubernetes deployment, and Kafka is in a separate pod from the Spring Boot service.
Thanks in advance.
Update:
The problem reoccurred and some further investigation has highlighted a couple of things:
Because we're using the Spring Integration @MessagingGateway and @Gateway to create a synchronous interaction with Kafka, there is no normal topic @StreamListener or subscriber.
The problem occurs when there is a lag on the topic, i.e. there are messages in the topic beyond the topic offset.
The lack of a normal @StreamListener means the lagging messages have no means of being processed. Only when a connection is made by the MessagingGateway is it possible for messages to be read from the topic.
One means of getting rid of the problem is to read all 'lag' messages so that the lag is 0. The service will then start normally; however, if I manually post messages to the topic (outside the MessagingGateway interaction), the error reoccurs.
A second partial solution (which I don't fully understand yet) is to add a @DependsOn annotation to the MessagingGateway, indicating that it requires a bean separately created with an @Input SubscribableChannel object. This means the SubscribableChannel must be created before the MessagingGateway, therefore creating a subscriber; however, there is still no @StreamListener, so exceptions are still thrown as lag messages are pulled from the topic, with nowhere to go 🤨
While I am not sure about the details of your application, what is clear is that a message gets delivered to the application-1.payment-reply channel which, as the error states, has no subscriber. Basically it means there is no listener on that channel (such as a @StreamListener or @ServiceActivator etc.).
It is a very common Spring Integration misconfiguration, but without looking at your app it is hard to say where it is.
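For illustration only (the channel name is taken from the error message; whether such a subscriber fits your flow depends on the rest of your configuration), a minimal subscriber could look like:

@ServiceActivator(inputChannel = "payment-reply")
public void handlePaymentReply(String payload) {
    // consume replies arriving on the channel so the dispatcher has a subscriber
}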
Looking at the debug log, I noticed that the service was connecting to the other topics correctly but was having problems with the payment-reply topic. I tried deleting this topic and restarting the service. This fixed the problem.

Eclipse milo: Session closed when trying to read data

I have an external OPC UA server from which I would like to read data. I use username and password authentication, so my client is initialized as follows:
public class MyClient {

    // ...
    private final OpcUaClient client;

    public MyClient() throws Exception {
        EndpointDescription[] endpoints =
                UaTcpStackClient.getEndpoints(OPCConstants.OPC_SERVER_URI).get();
        // using the first endpoint
        EndpointDescription endpoint = endpoints[0];

        // client configuration
        OpcUaClientConfig config = OpcUaClientConfig.builder()
                .setApplicationName(LocalizedText.english("Example Client"))
                .setApplicationUri(String.format("some:example-client:%s",
                        UUID.randomUUID()))
                .setIdentityProvider(new UsernameProvider(USERNAME, PWD))
                .setEndpoint(endpoint)
                .build();

        // create the client from the config
        client = new OpcUaClient(config);
    }
}
The client's request is the following:
public CompletableFuture<DataValue> getData(NodeId nodeId) {
    LOGGER.debug("Sending request");
    return client.readValue(60000000.0, TimestampsToReturn.Server, nodeId);
}
I call this request from the main method after initializing the client and connecting it to the server:
MyClient client = new MyClient();
NodeId requestedData = new NodeId(DATA_ID, DATA_KEY);
LOGGER.info("Sending synchronous TestStackRequest NodeId={}", requestedData);
client.connect();
DataValue response = client.getData(requestedData).get();
LOGGER.info("Received response value={}", response.getValue());
client.disconnect();
However, this code doesn't work (the session is closed when trying to read information from the server). I get the following output:
2018-04-12 17:43:27,765 DEBUG --- [ua-netty-event-loop-0] Recycler : -Dio.netty.recycler.maxCapacity.default: 262144
2018-04-12 17:43:27,777 DEBUG --- [ua-netty-event-loop-0] UaTcpClientAcknowledgeHandler : Sent Hello message on channel=[id: 0xfd9519e3, L:/172.20.100.54:55805 - R:/172.20.100.135:4840].
2018-04-12 17:43:27,786 DEBUG --- [ua-netty-event-loop-0] UaTcpClientAcknowledgeHandler : Received Acknowledge message on channel=[id: 0xfd9519e3, L:/172.20.100.54:55805 - R:/172.20.100.135:4840].
2018-04-12 17:43:27,793 DEBUG --- [ua-netty-event-loop-0] UaTcpClientMessageHandler : OpenSecureChannel timeout scheduled for +5s
2018-04-12 17:43:27,946 DEBUG --- [ua-shared-pool-0] UaTcpClientMessageHandler : Sent OpenSecureChannelRequest (Issue, id=0, currentToken=-1, previousToken=-1).
2018-04-12 17:43:27,951 DEBUG --- [ua-netty-event-loop-0] UaTcpClientMessageHandler : OpenSecureChannel timeout canceled
2018-04-12 17:43:27,961 DEBUG --- [ua-shared-pool-0] UaTcpClientMessageHandler : Received OpenSecureChannelResponse.
2018-04-12 17:43:27,967 DEBUG --- [ua-shared-pool-0] UaTcpClientMessageHandler : SecureChannel id=1698234671, currentTokenId=1, previousTokenId=-1, lifetime=3600000ms, createdAt=DateTime{utcTime=131680285857690000, javaDate=Thu Apr 12 19:43:05 CEST 2018}
2018-04-12 17:43:27,968 DEBUG --- [ua-netty-event-loop-0] UaTcpClientMessageHandler : 0 message(s) queued before handshake completed; sending now.
2018-04-12 17:43:27,968 DEBUG --- [ua-shared-pool-1] ClientChannelManager : Channel bootstrap succeeded: localAddress=/172.20.100.54:55805, remoteAddress=/172.20.100.135:4840
2018-04-12 17:43:27,996 DEBUG --- [ua-shared-pool-0] ClientChannelManager : disconnect(), currentState=Connected
2018-04-12 17:43:27,997 DEBUG --- [ua-shared-pool-1] ClientChannelManager : Sending CloseSecureChannelRequest...
2018-04-12 17:43:28,000 DEBUG --- [ua-netty-event-loop-0] ClientChannelManager : channelInactive(), disconnect complete
2018-04-12 17:43:28,001 DEBUG --- [ua-netty-event-loop-0] ClientChannelManager : disconnect complete, state set to Idle
2018-04-12 17:43:28,011 INFO --- [main] OpcUaClient : Eclipse Milo OPC UA Stack version: 0.2.1
2018-04-12 17:43:28,011 INFO --- [main] OpcUaClient : Eclipse Milo OPC UA Client SDK version: 0.2.1
2018-04-12 17:43:28,056 DEBUG --- [main] OpcUaClient : Added ServiceFaultListener: org.eclipse.milo.opcua.sdk.client.session.SessionFsm$FaultListener#46d59067
2018-04-12 17:43:28,066 DEBUG --- [main] OpcUaClient : Added SessionActivityListener: org.eclipse.milo.opcua.sdk.client.subscriptions.OpcUaSubscriptionManager$1#78452606
2018-04-12 17:43:28,189 INFO --- [main] CommunicationMain : Sending synchronous TestStackRequest NodeId=NodeId{ns=6, id=::opcua:opcData.outGoing.basic.cycleStep}
2018-04-12 17:43:28,189 DEBUG --- [main] ClientChannelManager : connect(), currentState=NotConnected
2018-04-12 17:43:28,190 DEBUG --- [main] ClientChannelManager : connect() while NotConnected
java.lang.Exception
at org.eclipse.milo.opcua.stack.client.ClientChannelManager.connect(ClientChannelManager.java:67)
at org.eclipse.milo.opcua.stack.client.UaTcpStackClient.connect(UaTcpStackClient.java:127)
at org.eclipse.milo.opcua.sdk.client.OpcUaClient.connect(OpcUaClient.java:313)
at com.mycompany.opcua.participants.MyClient.connect(MyClient.java:147)
at com.mycompany.opcua.participants.CommunicationMain.testClient(CommunicationMain.java:69)
at com.mycompany.opcua.participants.CommunicationMain.main(CommunicationMain.java:51)
2018-04-12 17:43:28,190 DEBUG --- [main] MyClient : Sending request
2018-04-12 17:43:28,197 DEBUG --- [ua-netty-event-loop-1] UaTcpClientAcknowledgeHandler : Sent Hello message on channel=[id: 0xd9b3f832, L:/172.20.100.54:55806 - R:/172.20.100.135:4840].
2018-04-12 17:43:28,204 DEBUG --- [ua-netty-event-loop-1] UaTcpClientAcknowledgeHandler : Received Acknowledge message on channel=[id: 0xd9b3f832, L:/172.20.100.54:55806 - R:/172.20.100.135:4840].
2018-04-12 17:43:28,205 DEBUG --- [ua-netty-event-loop-1] UaTcpClientMessageHandler : OpenSecureChannel timeout scheduled for +5s
2018-04-12 17:43:28,205 DEBUG --- [ua-shared-pool-0] UaTcpClientMessageHandler : Sent OpenSecureChannelRequest (Issue, id=0, currentToken=-1, previousToken=-1).
2018-04-12 17:43:28,208 DEBUG --- [ua-netty-event-loop-1] UaTcpClientMessageHandler : OpenSecureChannel timeout canceled
2018-04-12 17:43:28,208 DEBUG --- [ua-shared-pool-0] UaTcpClientMessageHandler : Received OpenSecureChannelResponse.
2018-04-12 17:43:28,209 DEBUG --- [ua-shared-pool-0] UaTcpClientMessageHandler : SecureChannel id=1698234672, currentTokenId=1, previousTokenId=-1, lifetime=3600000ms, createdAt=DateTime{utcTime=131680285860260000, javaDate=Thu Apr 12 19:43:06 CEST 2018}
2018-04-12 17:43:28,209 DEBUG --- [ua-netty-event-loop-1] UaTcpClientMessageHandler : 0 message(s) queued before handshake completed; sending now.
2018-04-12 17:43:28,209 DEBUG --- [ua-shared-pool-1] ClientChannelManager : Channel bootstrap succeeded: localAddress=/172.20.100.54:55806, remoteAddress=/172.20.100.135:4840
2018-04-12 17:43:28,210 DEBUG --- [ua-shared-pool-0] SessionFsm : S(Inactive) x E(CreateSessionEvent) = S'(Creating)
Exception in thread "main" java.util.concurrent.ExecutionException: UaException: status=Bad_SessionClosed, message=The session was closed by the client.
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
2018-04-12 17:43:28,212 DEBUG --- [ua-shared-pool-1] SessionFsm : Sending CreateSessionRequest...
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at com.mycompany.opcua.participants.CommunicationMain.testClient(CommunicationMain.java:70)
at com.mycompany.opcua.participants.CommunicationMain.main(CommunicationMain.java:51)
Caused by: UaException: status=Bad_SessionClosed, message=The session was closed by the client.
at org.eclipse.milo.opcua.stack.core.util.FutureUtils.failedUaFuture(FutureUtils.java:100)
at org.eclipse.milo.opcua.stack.core.util.FutureUtils.failedUaFuture(FutureUtils.java:88)
at org.eclipse.milo.opcua.sdk.client.session.states.Inactive.<init>(Inactive.java:28)
at org.eclipse.milo.opcua.sdk.client.session.SessionFsm.<init>(SessionFsm.java:69)
at org.eclipse.milo.opcua.sdk.client.OpcUaClient.<init>(OpcUaClient.java:159)
2018-04-12 17:43:28,212 INFO --- [NonceUtilSecureRandom] NonceUtil : SecureRandom seeded in 0ms.
at com.mycompany.opcua.participants.MyClient.<init>(MyClient.java:112)
at com.mycompany.opcua.participants.CommunicationMain.testClient(CommunicationMain.java:60)
... 1 more
I use Eclipse Milo 0.2.1 as the OPC UA library.
Could you please tell me what can cause this issue and how to fix it? Could it be a race condition related to this?
I can connect to the same server using another client (UaExpert).
Thank you in advance.
All of the calls you're making (connect(), disconnect(), and readValue()) are asynchronous, so what's likely happening here is that you're not connected yet when you attempt the read.
Make sure for these examples that you block for the result before moving on to the next step (you're not doing this on connect()).
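For example (a sketch; assuming your MyClient wrappers return the underlying CompletableFutures, as Milo's own connect() and disconnect() methods do in 0.2.x):

client.connect().get();      // block until the session is established
DataValue response = client.getData(requestedData).get();
LOGGER.info("Received response value={}", response.getValue());
client.disconnect().get();   // block until the disconnect completes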

Flux subscribed through SSE raises a cancel() event

I have a Spring Boot 2.0.0.M7 + Spring WebFlux application in which I am using Thymeleaf Reactive.
I noticed on my microservices that when I call an endpoint returning a flux of data in SSE mode (text/event-stream), a cancel() occurs on this flux even though it has been processed correctly.
For example, here's a simple controller endpoint:
@GetMapping(value = "/posts")
public Flux<String> getCommunityPosts() {
    return Flux.just("A", "B", "C").log("POSTS");
}
And here's the subscribed flux logs I get when I request it in SSE mode:
2018-02-13 17:04:09.841 INFO 4281 --- [nio-9090-exec-4] POSTS : | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
2018-02-13 17:04:09.841 INFO 4281 --- [nio-9090-exec-4] POSTS : | request(1)
2018-02-13 17:04:09.842 INFO 4281 --- [nio-9090-exec-4] POSTS : | onNext(A)
2018-02-13 17:04:09.847 INFO 4281 --- [nio-9090-exec-4] POSTS : | request(1)
2018-02-13 17:04:09.847 INFO 4281 --- [nio-9090-exec-4] POSTS : | onNext(B)
2018-02-13 17:04:09.848 INFO 4281 --- [nio-9090-exec-4] POSTS : | request(1)
2018-02-13 17:04:09.848 INFO 4281 --- [nio-9090-exec-4] POSTS : | onNext(C)
2018-02-13 17:04:09.849 INFO 4281 --- [nio-9090-exec-4] POSTS : | request(1)
2018-02-13 17:04:09.849 INFO 4281 --- [nio-9090-exec-4] POSTS : | onComplete()
2018-02-13 17:04:09.852 INFO 4281 --- [nio-9090-exec-4] POSTS : | cancel()
We can notice the cancel event after the onComplete. I don't have this behaviour when I call the same endpoint through a classic GET request. I suspect this cancel event makes the client-side EventSource (JavaScript) fire an onerror event.
Is it a known/wanted behaviour specific to SSE?
QUESTION UPDATE
I actually use SSE on some of my streams because I sometimes need my event sources to get JSON data instead of HTML already processed by Thymeleaf. Should I do it in another way?
I based my implementation on the last method of this example: https://github.com/danielfernandez/reactive-matchday/blob/master/src/main/java/com/github/danielfernandez/matchday/web/controller/MatchController.java
However, I may have missed providing some information in my previous post. I use Tomcat (8.5.23 with M7), not the Netty server. I forced the use of Tomcat by including the following Maven dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
Using your code on a sample project, this seems to cause the issue.
When I run the code on a Netty server, I get the same results as you:
2018-02-14 12:30:48.713 INFO 3060 --- [ctor-http-nio-2] reactor.Flux.ConcatMap.1 : onSubscribe(FluxConcatMap.ConcatMapImmediate)
2018-02-14 12:30:48.714 INFO 3060 --- [ctor-http-nio-2] reactor.Flux.ConcatMap.1 : request(1)
2018-02-14 12:30:49.717 INFO 3060 --- [ parallel-2] reactor.Flux.ConcatMap.1 : onNext(a)
2018-02-14 12:30:49.739 INFO 3060 --- [ctor-http-nio-2] reactor.Flux.ConcatMap.1 : request(31)
2018-02-14 12:30:50.731 INFO 3060 --- [ parallel-3] reactor.Flux.ConcatMap.1 : onNext(b)
2018-02-14 12:30:51.733 INFO 3060 --- [ parallel-4] reactor.Flux.ConcatMap.1 : onNext(c)
2018-02-14 12:30:51.735 INFO 3060 --- [ parallel-4] reactor.Flux.ConcatMap.1 : onComplete()
When I run the same code on the Tomcat server, I have the cancel issue:
2018-02-14 12:33:18.294 INFO 3088 --- [nio-8080-exec-3] reactor.Flux.ConcatMap.2 : onSubscribe(FluxConcatMap.ConcatMapImmediate)
2018-02-14 12:33:18.295 INFO 3088 --- [nio-8080-exec-3] reactor.Flux.ConcatMap.2 : request(1)
2018-02-14 12:33:19.295 INFO 3088 --- [ parallel-4] reactor.Flux.ConcatMap.2 : onNext(a)
2018-02-14 12:33:19.297 INFO 3088 --- [ parallel-4] reactor.Flux.ConcatMap.2 : request(1)
2018-02-14 12:33:20.302 INFO 3088 --- [ parallel-5] reactor.Flux.ConcatMap.2 : onNext(b)
2018-02-14 12:33:20.302 INFO 3088 --- [ parallel-5] reactor.Flux.ConcatMap.2 : request(1)
2018-02-14 12:33:21.306 INFO 3088 --- [ parallel-6] reactor.Flux.ConcatMap.2 : onNext(c)
2018-02-14 12:33:21.306 INFO 3088 --- [ parallel-6] reactor.Flux.ConcatMap.2 : request(1)
2018-02-14 12:33:21.307 INFO 3088 --- [ parallel-6] reactor.Flux.ConcatMap.2 : onComplete()
2018-02-14 12:33:21.307 INFO 3088 --- [nio-8080-exec-4] reactor.Flux.ConcatMap.2 : cancel()
Could it be a Tomcat issue or am I doing something wrong?
First, I don't think you should use SSE for finite streams.
When I create a Controller method like:
@GetMapping(path = "/test", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
@ResponseBody
public Flux<String> test() {
    return Flux.just("a", "b", "c").delayElements(Duration.ofSeconds(1)).log();
}
and request it from a browser (Chrome or Firefox) with:
<script type="text/javascript">
    var testEventSource = new EventSource("/test");
    testEventSource.onmessage = function (e) {
        console.log(e);
    };
</script>
I get the following logs on the server:
| onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber)
| request(1)
| onNext(a)
| request(31)
| onNext(b)
| onNext(c)
| onComplete()
| onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber)
| request(1)
| onNext(a)
| request(31)
| onNext(b)
| onNext(c)
| onComplete()
As soon as the Flux is completed, the connection is closed by the server and the browser reconnects automatically. This will replay the same sequence over and over again.
The only way I get a cancel() event on the server is when I close the browser tab during the stream.
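If you want to observe that cancellation on the server side yourself, a small sketch using Reactor's doOnCancel hook could log it:

@GetMapping(path = "/test", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
@ResponseBody
public Flux<String> test() {
    return Flux.just("a", "b", "c")
            .delayElements(Duration.ofSeconds(1))
            .doOnCancel(() -> System.out.println("client went away, stream cancelled"))
            .log();
}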
