I have a Spring Boot 2.0.0.M7 + Spring WebFlux application in which I am using Thymeleaf Reactive.
I noticed that on my microservices, when I call an endpoint returning a Flux of data in SSE mode (text/event-stream), a cancel() occurs on the Flux even though it was processed correctly.
For example, here's a simple controller endpoint:
@GetMapping(value = "/posts")
public Flux<String> getCommunityPosts() {
    return Flux.just("A", "B", "C").log("POSTS");
}
And here are the logs of the subscribed Flux when I request it in SSE mode:
2018-02-13 17:04:09.841 INFO 4281 --- [nio-9090-exec-4] POSTS : | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
2018-02-13 17:04:09.841 INFO 4281 --- [nio-9090-exec-4] POSTS : | request(1)
2018-02-13 17:04:09.842 INFO 4281 --- [nio-9090-exec-4] POSTS : | onNext(A)
2018-02-13 17:04:09.847 INFO 4281 --- [nio-9090-exec-4] POSTS : | request(1)
2018-02-13 17:04:09.847 INFO 4281 --- [nio-9090-exec-4] POSTS : | onNext(B)
2018-02-13 17:04:09.848 INFO 4281 --- [nio-9090-exec-4] POSTS : | request(1)
2018-02-13 17:04:09.848 INFO 4281 --- [nio-9090-exec-4] POSTS : | onNext(C)
2018-02-13 17:04:09.849 INFO 4281 --- [nio-9090-exec-4] POSTS : | request(1)
2018-02-13 17:04:09.849 INFO 4281 --- [nio-9090-exec-4] POSTS : | onComplete()
2018-02-13 17:04:09.852 INFO 4281 --- [nio-9090-exec-4] POSTS : | cancel()
Notice the cancel event after the onComplete. I don't get this behaviour when I call the same endpoint through a classic GET request. I suspect this cancel event causes the client-side EventSource (JavaScript) to fire an onerror event.
Is this a known/intended behaviour specific to SSE?
QUESTION UPDATE
I actually use SSE on some of my streams because I sometimes need my event sources to receive JSON data instead of HTML already processed by Thymeleaf. Should I be doing this another way?
I based my implementation on the last method of this example: https://github.com/danielfernandez/reactive-matchday/blob/master/src/main/java/com/github/danielfernandez/matchday/web/controller/MatchController.java
However, I may have missed providing some information in my previous post. I use a Tomcat server (8.5.23 with M7), not a Netty server. I forced the use of Tomcat by including the following Maven dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
Running your code in a sample project, this indeed seems to be what causes the issue.
When I run the code on a Netty server, I get the same results as you:
2018-02-14 12:30:48.713 INFO 3060 --- [ctor-http-nio-2] reactor.Flux.ConcatMap.1 : onSubscribe(FluxConcatMap.ConcatMapImmediate)
2018-02-14 12:30:48.714 INFO 3060 --- [ctor-http-nio-2] reactor.Flux.ConcatMap.1 : request(1)
2018-02-14 12:30:49.717 INFO 3060 --- [ parallel-2] reactor.Flux.ConcatMap.1 : onNext(a)
2018-02-14 12:30:49.739 INFO 3060 --- [ctor-http-nio-2] reactor.Flux.ConcatMap.1 : request(31)
2018-02-14 12:30:50.731 INFO 3060 --- [ parallel-3] reactor.Flux.ConcatMap.1 : onNext(b)
2018-02-14 12:30:51.733 INFO 3060 --- [ parallel-4] reactor.Flux.ConcatMap.1 : onNext(c)
2018-02-14 12:30:51.735 INFO 3060 --- [ parallel-4] reactor.Flux.ConcatMap.1 : onComplete()
When I run the same code on the Tomcat server, I have the cancel issue:
2018-02-14 12:33:18.294 INFO 3088 --- [nio-8080-exec-3] reactor.Flux.ConcatMap.2 : onSubscribe(FluxConcatMap.ConcatMapImmediate)
2018-02-14 12:33:18.295 INFO 3088 --- [nio-8080-exec-3] reactor.Flux.ConcatMap.2 : request(1)
2018-02-14 12:33:19.295 INFO 3088 --- [ parallel-4] reactor.Flux.ConcatMap.2 : onNext(a)
2018-02-14 12:33:19.297 INFO 3088 --- [ parallel-4] reactor.Flux.ConcatMap.2 : request(1)
2018-02-14 12:33:20.302 INFO 3088 --- [ parallel-5] reactor.Flux.ConcatMap.2 : onNext(b)
2018-02-14 12:33:20.302 INFO 3088 --- [ parallel-5] reactor.Flux.ConcatMap.2 : request(1)
2018-02-14 12:33:21.306 INFO 3088 --- [ parallel-6] reactor.Flux.ConcatMap.2 : onNext(c)
2018-02-14 12:33:21.306 INFO 3088 --- [ parallel-6] reactor.Flux.ConcatMap.2 : request(1)
2018-02-14 12:33:21.307 INFO 3088 --- [ parallel-6] reactor.Flux.ConcatMap.2 : onComplete()
2018-02-14 12:33:21.307 INFO 3088 --- [nio-8080-exec-4] reactor.Flux.ConcatMap.2 : cancel()
Could it be a Tomcat issue or am I doing something wrong?
First, I don't think you should use SSE for finite streams.
When I create a Controller method like:
@GetMapping(path = "/test", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
@ResponseBody
public Flux<String> test() {
    return Flux.just("a", "b", "c").delayElements(Duration.ofSeconds(1)).log();
}
and request it from a browser (Chrome or Firefox) with:
<script type="text/javascript">
    var testEventSource = new EventSource("/test");
    testEventSource.onmessage = function (e) {
        console.log(e);
    };
</script>
I get the following logs on the server:
| onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber)
| request(1)
| onNext(a)
| request(31)
| onNext(b)
| onNext(c)
| onComplete()
| onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber)
| request(1)
| onNext(a)
| request(31)
| onNext(b)
| onNext(c)
| onComplete()
As soon as the Flux completes, the server closes the connection and the browser reconnects automatically, replaying the same sequence over and over again.
The only way I get a cancel() event on the server is when I close the browser tab during the stream.
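For contrast, here is a minimal sketch of a case where SSE fits naturally: an unbounded stream that never completes, so the connection stays open and the browser has no reason to reconnect (my own example, not the question's code):
@GetMapping(path = "/ticks", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
@ResponseBody
public Flux<String> ticks() {
    // Flux.interval never calls onComplete(), so the SSE connection stays open
    // until the client disconnects (which is when you would see cancel()).
    return Flux.interval(Duration.ofSeconds(1))
            .map(i -> "tick " + i);
}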
Related
I have a controller with an endpoint that provides a Flux, as shown below.
When the app is deployed to Kubernetes, the doOnCancel and doOnTerminate callbacks are not invoked.
Locally, however, it works like a charm (for instance, when the browser tab is closed).
@Slf4j
@RestController
public class TestController {
    ...
    @GetMapping(value = "/test", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> testStream() {
        log.info("Requested test streaming");
        return mySink.asFlux()
                .startWith("INIT TEST")
                .doOnCancel(() -> log.info("On cancel"))
                .doOnTerminate(() -> log.info("On terminate"));
    }
    ...
}
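mySink itself is elided in the snippet; a plausible definition (an assumption on my part, not taken from the question) would be:
// Hypothetical sink feeding the SSE stream; a multicast sink lets several subscribers attach.
private final Sinks.Many<String> mySink = Sinks.many().multicast().onBackpressureBuffer();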
2022-08-06 18:25:42.115 INFO 3685 --- [ main] com.wuase.sinkdemo.SinkDemoApplication : Starting SinkDemoApplication using Java 1.8.0_252 on aniello-pc with PID 3685 (/home/pc/eclipse-workspace/sink-demo/target/classes started by pc in /home/pc/eclipse-workspace/sink-demo)
2022-08-06 18:25:42.124 INFO 3685 --- [ main] com.wuase.sinkdemo.SinkDemoApplication : No active profile set, falling back to 1 default profile: "default"
2022-08-06 18:25:44.985 INFO 3685 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port 8080
2022-08-06 18:25:45.018 INFO 3685 --- [ main] com.wuase.sinkdemo.SinkDemoApplication : Started SinkDemoApplication in 3.737 seconds (JVM running for 5.36)
2022-08-06 18:26:09.706 INFO 3685 --- [or-http-epoll-3] com.wuase.sinkdemo.TestController : Requested test streaming
2022-08-06 18:26:14.799 INFO 3685 --- [or-http-epoll-3] com.wuase.sinkdemo.TestController : On cancel
Has anyone encountered the same problem?
Any ideas about it?
I have started to work with the Hazelcast cache and I want to expose metrics for it, but I don't know how to do it.
My Java config:
@Configuration
public class HazelcastConfiguration {

    @Bean
    public Config config() {
        return new Config()
                .setInstanceName("hazelcast-instace")
                .addMapConfig(
                        new MapConfig()
                                .setName("testing")
                                .setMaxSizeConfig(new MaxSizeConfig(10, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
                                .setEvictionPolicy(EvictionPolicy.LRU)
                                .setTimeToLiveSeconds(1000)
                                .setStatisticsEnabled(true)
                );
    }
}
During application startup, I see only these logs:
2019-11-30 19:56:01.579 INFO 13444 --- [ main] com.hazelcast.instance.AddressPicker : [LOCAL] [dev] [3.12.4] Prefer IPv4 stack is true, prefer IPv6 addresses is false
2019-11-30 19:56:01.671 INFO 13444 --- [ main] com.hazelcast.instance.AddressPicker : [LOCAL] [dev] [3.12.4] Picked [192.168.43.2]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
2019-11-30 19:56:01.694 INFO 13444 --- [ main] com.hazelcast.system : [192.168.43.2]:5701 [dev] [3.12.4] Hazelcast 3.12.4 (20191030 - eab1290) starting at [192.168.43.2]:5701
2019-11-30 19:56:01.695 INFO 13444 --- [ main] com.hazelcast.system : [192.168.43.2]:5701 [dev] [3.12.4] Copyright (c) 2008-2019, Hazelcast, Inc. All Rights Reserved.
2019-11-30 19:56:02.037 INFO 13444 --- [ main] c.h.s.i.o.impl.BackpressureRegulator : [192.168.43.2]:5701 [dev] [3.12.4] Backpressure is disabled
2019-11-30 19:56:02.761 INFO 13444 --- [ main] com.hazelcast.instance.Node : [192.168.43.2]:5701 [dev] [3.12.4] Creating MulticastJoiner
2019-11-30 19:56:02.998 INFO 13444 --- [ main] c.h.s.i.o.impl.OperationExecutorImpl : [192.168.43.2]:5701 [dev] [3.12.4] Starting 4 partition threads and 3 generic threads (1 dedicated for priority tasks)
2019-11-30 19:56:02.999 INFO 13444 --- [ main] c.h.internal.diagnostics.Diagnostics : [192.168.43.2]:5701 [dev] [3.12.4] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2019-11-30 19:56:03.007 INFO 13444 --- [ main] com.hazelcast.core.LifecycleService : [192.168.43.2]:5701 [dev] [3.12.4] [192.168.43.2]:5701 is STARTING
2019-11-30 19:56:05.085 INFO 13444 --- [ main] c.h.internal.cluster.ClusterService : [192.168.43.2]:5701 [dev] [3.12.4]
Members {size:1, ver:1} [
Member [192.168.43.2]:5701 - 6ed511ff-b20b-4875-9b39-2dc734d4a9aa this
]
2019-11-30 19:56:05.142 INFO 13444 --- [ main] com.hazelcast.core.LifecycleService : [192.168.43.2]:5701 [dev] [3.12.4] [192.168.43.2]:5701 is STARTED
2019-11-30 19:56:05.295 INFO 13444 --- [e.HealthMonitor] c.h.internal.diagnostics.HealthMonitor : [192.168.43.2]:5701 [dev] [3.12.4] processors=4, physical.memory.total=23,9G, physical.memory.free=11,8G, swap.space.total=27,2G, swap.space.free=10,3G, heap.memory.used=306,9M, heap.memory.free=357,1M, heap.memory.total=664,0M, heap.memory.max=5,3G, heap.memory.used/total=46,23%, heap.memory.used/max=5,63%, minor.gc.count=0, minor.gc.time=0ms, major.gc.count=0, major.gc.time=0ms, load.process=100,00%, load.system=100,00%, load.systemAverage=n/a thread.count=37, thread.peakCount=37, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=1, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0,00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
After autowiring the CacheManager and calling getCacheNames() on it, I see an empty list, and I don't know why.
And when the first entry is added to the cache, the log shows:
2019-11-30 20:11:12.760 INFO 16464 --- [nio-8080-exec-6] c.h.i.p.impl.PartitionStateManager : [192.168.43.2]:5701 [dev] [3.12.4] Initializing cluster partition table arrangement...
In my metrics config file I have this, and nothing is displayed in the metrics:
@Autowired
private CacheMetricsRegistrar cacheMetricsRegistrar;
Does anyone have an idea why it doesn't work?
The cache is initialized when you insert the first entry into it. That is why you don't see any cache in the CacheManager at the beginning. After you insert the first value, you should see a HazelcastCache in cacheManager.caches.
The same goes for CacheMetricsRegistrar: I just tried it and see a HazelcastCacheMeterBinderProvider in binderProviders.
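A minimal sketch of that lazy behaviour (assuming Spring's HazelcastCacheManager is what backs the autowired CacheManager):
@Autowired
private CacheManager cacheManager;

public void demo() {
    // Empty at startup: Hazelcast creates the underlying IMap lazily.
    System.out.println(cacheManager.getCacheNames()); // []
    // The first access creates the map...
    cacheManager.getCache("testing").put("key", "value");
    // ...and from then on the cache is visible.
    System.out.println(cacheManager.getCacheNames()); // [testing]
}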
For Spring Boot 2.x, create your registration component:
@Component
@AllArgsConstructor
public class CacheMetricsRegistrator {

    private final CacheMetricsRegistrar cacheMetricsRegistrar;
    private final CacheManager cacheManager;
    private final Config cacheConfig;

    @PostConstruct
    public void register() {
        this.cacheConfig.getMapConfigs().keySet().forEach(
                cacheName -> this.cacheMetricsRegistrar.bindCacheToRegistry(
                        this.cacheManager.getCache(cacheName))
        );
    }
}
and for your Hazelcast cache configuration:
@EnableCaching
@Configuration
public class HazelcastConfig {

    private MapConfig mapPortfolioCache() {
        return new MapConfig()
                .setName("my-entity-cache")
                .setEvictionConfig(new EvictionConfig().setMaxSizePolicy(MaxSizePolicy.FREE_HEAP_SIZE).setSize(200))
                .setTimeToLiveSeconds(60 * 15);
    }

    @Bean
    public Config hazelCastConfig() {
        Config config = new Config()
                .setInstanceName("my-application-hazelcast")
                .setNetworkConfig(new NetworkConfig().setJoin(new JoinConfig().setMulticastConfig(new MulticastConfig().setEnabled(false))))
                .addMapConfig(mapPortfolioCache());
        SubZero.useAsGlobalSerializer(config);
        return config;
    }
}
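With the registration component in place, the Hazelcast-backed caches should then show up in Micrometer (for example as cache.gets, cache.puts and cache.size under /actuator/metrics) once the underlying maps have been created by their first access.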
I am starting to learn WebFlux with Spring Boot. I learned that a RestController endpoint can take a Flux as its request body, and I expected a real streamed request: the parts of the whole request arrive one after another and can be processed one after another. However, after building a small example with a client and a server, I could not get this to work as expected.
So here is the snippet of the server:
#PostMapping("/digest")
public Flux<String> digest(#RequestBody Flux<String> text) {
continuousMD5.reset();
return text.log("server.request.").map(piece -> continuousMD5.update(piece)).log("server.response.");
}
Note: each piece of the text is fed to a continuousMD5 object, which accumulates the pieces and calculates and returns the intermediate MD5 hash after each accumulation. The stream is logged before and after the MD5 calculation.
And here is the snippet of the client:
@PostConstruct
private void init() {
    webClient = webClientBuilder.baseUrl(reactiveServerUrl).build();
}

@PostMapping(value = "/send", consumes = MediaType.TEXT_PLAIN_VALUE)
public Flux<String> send(@RequestBody Flux<String> text) {
    return webClient.post()
            .uri("/digest")
            .accept(MediaType.TEXT_PLAIN)
            .body(text.log("client.request."), String.class)
            .retrieve().bodyToFlux(String.class).log("client.response.");
}
Note: the client accepts a flux stream of text, logs it, and sends it to the server (as a flux stream).
Surprisingly, I managed to send a REST request and have the client receive a flux stream with the following command line:
for i in $(seq 1 100); do echo "The message $i"; done | http POST :8080/send Content-Type:text/plain
and I could see in the client log:
2019-05-09 17:02:08.604 INFO 3462 --- [ctor-http-nio-2] client.response.Flux.MonoFlatMapMany.2 : onSubscribe(MonoFlatMapMany.FlatMapManyMain)
2019-05-09 17:02:08.606 INFO 3462 --- [ctor-http-nio-2] client.response.Flux.MonoFlatMapMany.2 : request(1)
2019-05-09 17:02:08.649 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : onSubscribe(FluxSwitchIfEmpty.SwitchIfEmptySubscriber)
2019-05-09 17:02:08.650 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : request(32)
2019-05-09 17:02:08.674 INFO 3462 --- [ctor-http-nio-2] client.request.Flux.SwitchIfEmpty.1 : onNext(The message 1)
2019-05-09 17:02:08.676 INFO 3462 --- [ctor-http-nio-2] client.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.676 INFO 3462 --- [ctor-http-nio-2] client.request.Flux.SwitchIfEmpty.1 : onNext(The message 2)
...
2019-05-09 17:02:08.710 INFO 3462 --- [ctor-http-nio-2] client.request.Flux.SwitchIfEmpty.1 : onNext(The message 100)
2019-05-09 17:02:08.710 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.710 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.710 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.711 INFO 3462 --- [ctor-http-nio-2] client.request.Flux.SwitchIfEmpty.1 : onComplete()
2019-05-09 17:02:08.711 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.711 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.860 INFO 3462 --- [ctor-http-nio-6] client.response.Flux.MonoFlatMapMany.2 : onNext(CSubeSX3yIVP2CD6FRlojg==)
2019-05-09 17:02:08.862 INFO 3462 --- [ctor-http-nio-6] client.response.Flux.MonoFlatMapMany.2 : onComplete()
^C2019-05-09 17:02:47.393 INFO 3462 --- [ctor-http-nio-6] client.request.Flux.SwitchIfEmpty.1 : cancel()
Each piece of the text was recognized as an element of the flux stream and was requested separately.
But in the server log:
2019-05-09 17:02:08.811 INFO 3475 --- [ctor-http-nio-2] server.request.Flux.SwitchIfEmpty.1 : onSubscribe(FluxSwitchIfEmpty.SwitchIfEmptySubscriber)
2019-05-09 17:02:08.813 INFO 3475 --- [ctor-http-nio-2] server.response.Flux.Map.2 : onSubscribe(FluxMap.MapSubscriber)
2019-05-09 17:02:08.814 INFO 3475 --- [ctor-http-nio-2] server.response.Flux.Map.2 : request(1)
2019-05-09 17:02:08.814 INFO 3475 --- [ctor-http-nio-2] server.request.Flux.SwitchIfEmpty.1 : request(1)
2019-05-09 17:02:08.838 INFO 3475 --- [ctor-http-nio-2] server.request.Flux.SwitchIfEmpty.1 : onNext(The message 1The message 2The message 3The message 4The message 5The message 6The message 7The message 8The message 9The message 10The message 11The message 12The message 13The message 14The message 15The message 16The message 17The message 18The message 19The message 20The message 21The message 22The message 23The message 24The message 25The message 26The message 27The message 28The message 29The message 30The message 31The message 32The message 33The message 34The message 35The message 36The message 37The message 38The message 39The message 40The message 41The message 42The message 43The message 44The message 45The message 46The message 47The message 48The message 49The message 50The message 51The message 52The message 53The message 54The message 55The message 56The message 57The message 58The message 59The message 60The message 61The message 62The message 63The message 64The message 65The message 66The message 67The message 68The message 69The message 70The message 71The message 72The message 73The message 74The message 75The message 76The message 77The message 78The message 79The message 80The message 81The message 82The message 83The message 84The message 85The message 86The message 87The message 88The message 89The message 90The message 91The message 92The message 93The message 94The message 95The message 96The message 97The message 98The message 99The message 100)
2019-05-09 17:02:08.840 INFO 3475 --- [ctor-http-nio-2] server.response.Flux.Map.2 : onNext(CSubeSX3yIVP2CD6FRlojg==)
2019-05-09 17:02:08.852 INFO 3475 --- [ctor-http-nio-2] server.response.Flux.Map.2 : request(32)
2019-05-09 17:02:08.852 INFO 3475 --- [ctor-http-nio-2] server.request.Flux.SwitchIfEmpty.1 : request(32)
2019-05-09 17:02:08.852 INFO 3475 --- [ctor-http-nio-2] server.request.Flux.SwitchIfEmpty.1 : onComplete()
2019-05-09 17:02:08.852 INFO 3475 --- [ctor-http-nio-2] server.response.Flux.Map.2 : onComplete()
2019-05-09 17:02:47.394 INFO 3475 --- [ctor-http-nio-2] server.response.Flux.Map.2 : cancel()
2019-05-09 17:02:47.394 INFO 3475 --- [ctor-http-nio-2] server.request.Flux.SwitchIfEmpty.1 : cancel()
I saw that all the pieces of the text arrived at the server at once and were therefore processed as one big element of the flux stream (one can also verify that only one MD5 hash was calculated instead of 100).
What I would expect is that the server also receives the pieces of text from the client as separate elements of the flux stream; otherwise, for the server it is not really reactive but just a normal blocking request.
Could anyone please help me understand how to make a truly streaming flux request using WebFlux? Thanks!
Update
I used a similar command line to make a REST request directly against the server and could see that the server received the pieces of the text ("The message x") as a flux stream. So I guess the server is fine; the problem may be the client: how can I use the WebClient to make a real flux request?
If you want to achieve the streaming effect, you can:
Use a different content type that supports streaming, application/stream+json (a sketch follows below). Check out the following SO thread about it:
Spring WebFlux Flux behavior with non streaming application/json
Change the underlying protocol to one that better fits the streaming model, for instance WebSockets: https://howtodoinjava.com/spring-webflux/reactive-websockets/
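As a rough sketch of the first option (my assumption, not the answer's code; APPLICATION_STREAM_JSON_VALUE exists in Spring 5.0.x and was later superseded by application/x-ndjson), the server endpoint from the question could declare a streaming content type so that each element is flushed separately:
@PostMapping(value = "/digest", produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Flux<String> digest(@RequestBody Flux<String> text) {
    continuousMD5.reset();
    // Each emitted element is serialized and flushed as its own chunk,
    // instead of being aggregated into a single response body.
    return text.map(piece -> continuousMD5.update(piece));
}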
After trying things out and reading more documentation, I finally figured out how to make my example work:
For the client, I need to make sure that the request body sent to the server is also separated by line feeds:
@PostMapping(value = "/send", consumes = MediaType.TEXT_PLAIN_VALUE)
public Flux<String> send(@RequestBody Flux<String> text) {
    return webClient.post()
            .uri("/digest")
            .accept(MediaType.TEXT_PLAIN)
            .body(
                    text
                            .onBackpressureBuffer()
                            .log("client.request.")
                            .map(piece -> piece + "\n"),
                    String.class)
            .retrieve().bodyToFlux(String.class)
            .onBackpressureBuffer()
            .log("client.response.");
}
This achieves the same effect as making the REST request via the command line, since for i in $(seq 1 100); do echo "The message $i"; done outputs each "The message x" on its own line.
Similarly, for the server, the response body also needs to be separated by line feeds so that the client can decode the body into a flux:
@PostMapping("/digest")
public Flux<String> digest(@RequestBody Flux<String> text) {
    continuousMD5.reset();
    return text
            .log("server.request.")
            .map(piece -> continuousMD5.update(piece))
            .map(piece -> piece + "\n")
            .log("server.response.");
}
I also added onBackpressureBuffer() to the client before sending and after receiving, so that there is no overflow exception when sending a large number of messages.
However, even though the above code "works", it is not doing real streaming: as I can see in the logs, the server started to receive the request body only after the client had sent the whole request body, and the client started to receive the response body only after the server had sent the whole response body. Perhaps, as Ilya Zinkovich mentioned, using the WebSocket protocol may achieve a real streaming effect, but I have not tried it out yet.
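For reference, a minimal sketch of what such a WebSocket handler could look like in WebFlux (an assumption of mine, untested, and not part of the solution above):
import org.springframework.web.reactive.socket.WebSocketHandler;
import org.springframework.web.reactive.socket.WebSocketSession;
import reactor.core.publisher.Mono;

public class DigestWebSocketHandler implements WebSocketHandler {
    @Override
    public Mono<Void> handle(WebSocketSession session) {
        // Each incoming frame is handled as soon as it arrives, so the
        // processing is truly element by element over a single connection.
        return session.send(
                session.receive()
                        .map(msg -> msg.getPayloadAsText())
                        .map(text -> session.textMessage("digest: " + text)));
    }
}
The handler would still need to be registered via a SimpleUrlHandlerMapping and a WebSocketHandlerAdapter, which is omitted here.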
I am trying to use Kafka with Camel and have set up the following route:
public class WorkflowEventConsumerRoute extends RouteBuilder {

    private static final String KAFKA_ENDPOINT =
            "kafka:payments-bus?brokers=localhost:9092";
    ...

    @Override
    public void configure() {
        from(KAFKA_ENDPOINT)
                .routeId(format(KAFKA_CONSUMER))
                .to("mock:end");
    }
}
When I start my Spring Boot application I can see the route get started, but immediately afterwards it shuts down without any reason given in the logs:
2018-12-21 12:06:45.012 INFO 12184 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 2.0.1
2018-12-21 12:06:45.013 INFO 12184 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : fa14705e51bd2ce5
2018-12-21 12:06:45.014 INFO 12184 --- [ main] o.a.camel.spring.SpringCamelContext : Route: kafka-consumer started and consuming from: kafka://payments-bus?brokers=localhost%3A9092
2018-12-21 12:06:45.015 INFO 12184 --- [r[payments-bus]] o.a.camel.component.kafka.KafkaConsumer : Subscribing payments-bus-Thread 0 to topic payments-bus
2018-12-21 12:06:45.015 INFO 12184 --- [ main] o.a.camel.spring.SpringCamelContext : Total 1 routes, of which 1 are started
2018-12-21 12:06:45.015 INFO 12184 --- [ main] o.a.camel.spring.SpringCamelContext : Apache Camel 2.23.0 (CamelContext: camel-1) started in 0.234 seconds
2018-12-21 12:06:45.019 INFO 12184 --- [ main] a.c.n.t.p.workflow.WorkflowApplication : Started WorkflowApplication in 3.815 seconds (JVM running for 4.513)
2018-12-21 12:06:45.024 INFO 12184 --- [ Thread-10] o.a.camel.spring.SpringCamelContext : Apache Camel 2.23.0 (CamelContext: camel-1) is shutting down
On the other hand, if I create a unit test pointing at the same Kafka endpoint, I am able to read the Kafka topic content using the org.apache.camel.ConsumerTemplate instance provided by CamelTestSupport.
Ultimately, if I replace the Kafka endpoint in my route with an ActiveMQ one, the route starts OK and the application stays up.
Obviously I am missing something but I cannot figure out what.
Thank you in advance for your help.
Does your Spring Boot app have a web starter? If not, you should turn on the Camel main run controller to keep the Boot application running.
In application.properties, add:
camel.springboot.main-run-controller = true
I have an external OPC UA server from which I would like to read data. I use username and password authentication, so my client is initialized as follows:
public class MyClient {

    // ...

    public MyClient() throws Exception {
        EndpointDescription[] endpoints =
                UaTcpStackClient.getEndpoints(OPCConstants.OPC_SERVER_URI).get();
        // using the first endpoint
        EndpointDescription endpoint = endpoints[0];

        // client configuration
        OpcUaClientConfig config = OpcUaClientConfig.builder()
                .setApplicationName(LocalizedText.english("Example Client"))
                .setApplicationUri(String.format("some:example-client:%s",
                        UUID.randomUUID()))
                .setIdentityProvider(new UsernameProvider(USERNAME, PWD))
                .setEndpoint(endpoint)
                .build();
    }
}
The client's request is the following:
public CompletableFuture<DataValue> getData(NodeId nodeId) {
    LOGGER.debug("Sending request");
    return client.readValue(60000000.0, TimestampsToReturn.Server, nodeId);
}
I call this request from the main method after initializing the client and connecting it to the server:
MyClient client = new MyClient();
NodeId requestedData = new NodeId(DATA_ID, DATA_KEY);
LOGGER.info("Sending synchronous TestStackRequest NodeId={}",
requestedData);
client.connect();
DataValue response = client.getData(requestedData).get();
LOGGER.info("Received response value={}", response.getValue());
client.disconnect();
However, this code doesn't work (the session is closed when trying to read information from the server). I get the following output:
2018-04-12 17:43:27,765 DEBUG --- [ua-netty-event-loop-0] Recycler : -Dio.netty.recycler.maxCapacity.default: 262144
2018-04-12 17:43:27,777 DEBUG --- [ua-netty-event-loop-0] UaTcpClientAcknowledgeHandler : Sent Hello message on channel=[id: 0xfd9519e3, L:/172.20.100.54:55805 - R:/172.20.100.135:4840].
2018-04-12 17:43:27,786 DEBUG --- [ua-netty-event-loop-0] UaTcpClientAcknowledgeHandler : Received Acknowledge message on channel=[id: 0xfd9519e3, L:/172.20.100.54:55805 - R:/172.20.100.135:4840].
2018-04-12 17:43:27,793 DEBUG --- [ua-netty-event-loop-0] UaTcpClientMessageHandler : OpenSecureChannel timeout scheduled for +5s
2018-04-12 17:43:27,946 DEBUG --- [ua-shared-pool-0] UaTcpClientMessageHandler : Sent OpenSecureChannelRequest (Issue, id=0, currentToken=-1, previousToken=-1).
2018-04-12 17:43:27,951 DEBUG --- [ua-netty-event-loop-0] UaTcpClientMessageHandler : OpenSecureChannel timeout canceled
2018-04-12 17:43:27,961 DEBUG --- [ua-shared-pool-0] UaTcpClientMessageHandler : Received OpenSecureChannelResponse.
2018-04-12 17:43:27,967 DEBUG --- [ua-shared-pool-0] UaTcpClientMessageHandler : SecureChannel id=1698234671, currentTokenId=1, previousTokenId=-1, lifetime=3600000ms, createdAt=DateTime{utcTime=131680285857690000, javaDate=Thu Apr 12 19:43:05 CEST 2018}
2018-04-12 17:43:27,968 DEBUG --- [ua-netty-event-loop-0] UaTcpClientMessageHandler : 0 message(s) queued before handshake completed; sending now.
2018-04-12 17:43:27,968 DEBUG --- [ua-shared-pool-1] ClientChannelManager : Channel bootstrap succeeded: localAddress=/172.20.100.54:55805, remoteAddress=/172.20.100.135:4840
2018-04-12 17:43:27,996 DEBUG --- [ua-shared-pool-0] ClientChannelManager : disconnect(), currentState=Connected
2018-04-12 17:43:27,997 DEBUG --- [ua-shared-pool-1] ClientChannelManager : Sending CloseSecureChannelRequest...
2018-04-12 17:43:28,000 DEBUG --- [ua-netty-event-loop-0] ClientChannelManager : channelInactive(), disconnect complete
2018-04-12 17:43:28,001 DEBUG --- [ua-netty-event-loop-0] ClientChannelManager : disconnect complete, state set to Idle
2018-04-12 17:43:28,011 INFO --- [main] OpcUaClient : Eclipse Milo OPC UA Stack version: 0.2.1
2018-04-12 17:43:28,011 INFO --- [main] OpcUaClient : Eclipse Milo OPC UA Client SDK version: 0.2.1
2018-04-12 17:43:28,056 DEBUG --- [main] OpcUaClient : Added ServiceFaultListener: org.eclipse.milo.opcua.sdk.client.session.SessionFsm$FaultListener#46d59067
2018-04-12 17:43:28,066 DEBUG --- [main] OpcUaClient : Added SessionActivityListener: org.eclipse.milo.opcua.sdk.client.subscriptions.OpcUaSubscriptionManager$1#78452606
2018-04-12 17:43:28,189 INFO --- [main] CommunicationMain : Sending synchronous TestStackRequest NodeId=NodeId{ns=6, id=::opcua:opcData.outGoing.basic.cycleStep}
2018-04-12 17:43:28,189 DEBUG --- [main] ClientChannelManager : connect(), currentState=NotConnected
2018-04-12 17:43:28,190 DEBUG --- [main] ClientChannelManager : connect() while NotConnected
java.lang.Exception
at org.eclipse.milo.opcua.stack.client.ClientChannelManager.connect(ClientChannelManager.java:67)
at org.eclipse.milo.opcua.stack.client.UaTcpStackClient.connect(UaTcpStackClient.java:127)
at org.eclipse.milo.opcua.sdk.client.OpcUaClient.connect(OpcUaClient.java:313)
at com.mycompany.opcua.participants.MyClient.connect(MyClient.java:147)
at com.mycompany.opcua.participants.CommunicationMain.testClient(CommunicationMain.java:69)
at com.mycompany.opcua.participants.CommunicationMain.main(CommunicationMain.java:51)
2018-04-12 17:43:28,190 DEBUG --- [main] MyClient : Sending request
2018-04-12 17:43:28,197 DEBUG --- [ua-netty-event-loop-1] UaTcpClientAcknowledgeHandler : Sent Hello message on channel=[id: 0xd9b3f832, L:/172.20.100.54:55806 - R:/172.20.100.135:4840].
2018-04-12 17:43:28,204 DEBUG --- [ua-netty-event-loop-1] UaTcpClientAcknowledgeHandler : Received Acknowledge message on channel=[id: 0xd9b3f832, L:/172.20.100.54:55806 - R:/172.20.100.135:4840].
2018-04-12 17:43:28,205 DEBUG --- [ua-netty-event-loop-1] UaTcpClientMessageHandler : OpenSecureChannel timeout scheduled for +5s
2018-04-12 17:43:28,205 DEBUG --- [ua-shared-pool-0] UaTcpClientMessageHandler : Sent OpenSecureChannelRequest (Issue, id=0, currentToken=-1, previousToken=-1).
2018-04-12 17:43:28,208 DEBUG --- [ua-netty-event-loop-1] UaTcpClientMessageHandler : OpenSecureChannel timeout canceled
2018-04-12 17:43:28,208 DEBUG --- [ua-shared-pool-0] UaTcpClientMessageHandler : Received OpenSecureChannelResponse.
2018-04-12 17:43:28,209 DEBUG --- [ua-shared-pool-0] UaTcpClientMessageHandler : SecureChannel id=1698234672, currentTokenId=1, previousTokenId=-1, lifetime=3600000ms, createdAt=DateTime{utcTime=131680285860260000, javaDate=Thu Apr 12 19:43:06 CEST 2018}
2018-04-12 17:43:28,209 DEBUG --- [ua-netty-event-loop-1] UaTcpClientMessageHandler : 0 message(s) queued before handshake completed; sending now.
2018-04-12 17:43:28,209 DEBUG --- [ua-shared-pool-1] ClientChannelManager : Channel bootstrap succeeded: localAddress=/172.20.100.54:55806, remoteAddress=/172.20.100.135:4840
2018-04-12 17:43:28,210 DEBUG --- [ua-shared-pool-0] SessionFsm : S(Inactive) x E(CreateSessionEvent) = S'(Creating)
Exception in thread "main" java.util.concurrent.ExecutionException: UaException: status=Bad_SessionClosed, message=The session was closed by the client.
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
2018-04-12 17:43:28,212 DEBUG --- [ua-shared-pool-1] SessionFsm : Sending CreateSessionRequest...
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at com.mycompany.opcua.participants.CommunicationMain.testClient(CommunicationMain.java:70)
at com.mycompany.opcua.participants.CommunicationMain.main(CommunicationMain.java:51)
Caused by: UaException: status=Bad_SessionClosed, message=The session was closed by the client.
at org.eclipse.milo.opcua.stack.core.util.FutureUtils.failedUaFuture(FutureUtils.java:100)
at org.eclipse.milo.opcua.stack.core.util.FutureUtils.failedUaFuture(FutureUtils.java:88)
at org.eclipse.milo.opcua.sdk.client.session.states.Inactive.<init>(Inactive.java:28)
at org.eclipse.milo.opcua.sdk.client.session.SessionFsm.<init>(SessionFsm.java:69)
at org.eclipse.milo.opcua.sdk.client.OpcUaClient.<init>(OpcUaClient.java:159)
2018-04-12 17:43:28,212 INFO --- [NonceUtilSecureRandom] NonceUtil : SecureRandom seeded in 0ms.
at com.mycompany.opcua.participants.MyClient.<init>(MyClient.java:112)
at com.mycompany.opcua.participants.CommunicationMain.testClient(CommunicationMain.java:60)
... 1 more
I use Eclipse Milo 0.2.1 as the OPC UA library.
Could you please tell me what can cause this issue and how to fix it? Could it be a race condition related to this?
I can connect to the same server using other client (UaExpert).
Thank you in advance.
All of the calls you're making (connect(), disconnect(), and readValues()) are asynchronous, so what's likely happening here is that you're not connected yet when you attempt the read.
Make sure for these examples you block for the result before moving on to the next step (you're not doing this on connect()).
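A minimal sketch of that fix, reusing the names from the question and assuming connect() and disconnect() expose Milo's CompletableFuture results:
MyClient client = new MyClient();
NodeId requestedData = new NodeId(DATA_ID, DATA_KEY);

// Block until the session is actually established before issuing the read.
client.connect().get();
DataValue response = client.getData(requestedData).get();
LOGGER.info("Received response value={}", response.getValue());

// Likewise wait for the disconnect to complete before the JVM exits.
client.disconnect().get();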