Reactor Netty websocket channel closed prematurely - java

I have a long-running websocket client implemented in Java with Spring Reactor and Netty (spring-boot-starter-parent 2.5.3), targeting the Binance websocket API.
According to the specs, the websocket channel is kept open for 24 hours.
The websocket is unexpectedly and prematurely closed after around 3 minutes:
16:50:48.418 [main] DEBUG reactor.util.Loggers - Using Slf4j logging framework
16:50:48.434 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
16:50:48.436 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
16:50:48.437 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 14
16:50:48.438 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
16:50:48.438 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
16:50:48.438 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
16:50:48.439 [main] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: unavailable: Reflective setAccessible(true) disabled
16:50:48.439 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
16:50:48.440 [main] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable: class io.netty.util.internal.PlatformDependent0$6 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module #1efbd816
16:50:48.440 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): unavailable
16:50:48.440 [main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available
16:50:48.448 [main] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 8388608000 bytes (maybe)
16:50:48.448 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
16:50:48.448 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
16:50:48.449 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: -1 bytes
16:50:48.450 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: -1
16:50:48.450 [main] DEBUG io.netty.util.internal.CleanerJava9 - java.nio.ByteBuffer.cleaner(): available
16:50:48.450 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
16:50:48.460 [main] DEBUG reactor.netty.tcp.TcpResources - [http] resources will use the default LoopResources: DefaultLoopResources {prefix=reactor-http, daemon=true, selectCount=8, workerCount=8}
16:50:48.460 [main] DEBUG reactor.netty.tcp.TcpResources - [http] resources will use the default ConnectionProvider: reactor.netty.resources.DefaultPooledConnectionProvider#192b07fd
16:50:48.485 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
16:50:48.486 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
16:50:48.581 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
16:50:48.581 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
16:50:48.582 [main] DEBUG io.netty.util.NetUtilInitializations - Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
16:50:48.583 [main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 128
16:50:48.590 [main] DEBUG org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient - Connecting to wss://stream.binance.com:9443/ws
16:50:48.601 [main] DEBUG io.netty.handler.ssl.OpenSsl - netty-tcnative not in the classpath; OpenSslEngine will be unavailable.
16:50:48.712 [main] DEBUG io.netty.handler.ssl.JdkSslContext - Default protocols (JDK): [TLSv1.3, TLSv1.2, TLSv1.1, TLSv1]
16:50:48.712 [main] DEBUG io.netty.handler.ssl.JdkSslContext - Default cipher suites (JDK): [TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384]
16:50:48.720 [main] DEBUG reactor.netty.resources.DefaultLoopIOUring - Default io_uring support : false
16:50:48.724 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.workdir: /tmp (io.netty.tmpdir)
16:50:48.725 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.deleteLibAfterLoading: true
16:50:48.725 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.tryPatchShadedId: true
16:50:48.730 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - Successfully loaded the library /tmp/libnetty_transport_native_epoll_x86_6410359104745093945181.so
16:50:48.731 [main] DEBUG reactor.netty.resources.DefaultLoopEpoll - Default Epoll support : true
16:50:48.734 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 16
16:50:48.742 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
16:50:48.743 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
16:50:48.749 [main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
16:50:48.768 [main] DEBUG reactor.netty.resources.PooledConnectionProvider - Creating a new [http] client pool [PoolFactory{evictionInterval=PT0S, leasingStrategy=fifo, maxConnections=500, maxIdleTime=-1, maxLifeTime=-1, metricsEnabled=false, pendingAcquireMaxCount=1000, pendingAcquireTimeout=45000}] for [stream.binance.com/<unresolved>:9443]
16:50:48.798 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 27223 (auto-detected)
16:50:48.799 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 28:16:ad:ff:fe:2b:7c:b7 (auto-detected)
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 16
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 16
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimIntervalMillis: 0
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: true
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
16:50:48.813 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
16:50:48.813 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
16:50:48.814 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
16:50:48.828 [reactor-http-epoll-2] DEBUG reactor.netty.resources.PooledConnectionProvider - [id:d962b126] Created a new pooled channel, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
16:50:48.845 [reactor-http-epoll-2] DEBUG reactor.netty.tcp.SslProvider - [id:d962b126] SSL enabled using engine sun.security.ssl.SSLEngineImpl#55608030 and SNI stream.binance.com/<unresolved>:9443
16:50:48.852 [reactor-http-epoll-2] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
16:50:48.853 [reactor-http-epoll-2] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
16:50:48.853 [reactor-http-epoll-2] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#3ba51dc6
16:50:48.854 [reactor-http-epoll-2] DEBUG reactor.netty.transport.TransportConfig - [id:d962b126] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
16:50:48.866 [reactor-http-epoll-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#1fb356c5
16:50:48.867 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xdd7103d7] WRITE: UDP, [11524: /127.0.0.53:53], DefaultDnsQuestion(stream.binance.com. IN A)
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.delayedQueue.ratio: 8
16:50:48.878 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xdd7103d7] WRITE: UDP, [33872: /127.0.0.53:53], DefaultDnsQuestion(stream.binance.com. IN AAAA)
16:50:48.904 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0xdd7103d7] RECEIVED: UDP [11524: /127.0.0.53:53], DatagramDnsResponse(from: /127.0.0.53:53, 11524, QUERY(0), NoError(0), RD RA)
DefaultDnsQuestion(stream.binance.com. IN A)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(OPT flags:0 udp:65494 0B)
16:50:48.907 [reactor-http-epoll-2] DEBUG reactor.netty.transport.TransportConnector - [id:d962b126] Connecting to [stream.binance.com/52.199.12.133:9443].
16:50:48.907 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0xdd7103d7] RECEIVED: UDP [33872: /127.0.0.53:53], DatagramDnsResponse(from: /127.0.0.53:53, 33872, QUERY(0), NoError(0), RD RA)
DefaultDnsQuestion(stream.binance.com. IN AAAA)
DefaultDnsRawRecord(OPT flags:0 udp:65494 0B)
16:50:49.162 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Registering pool release on close event for channel
16:50:49.163 [reactor-http-epoll-2] DEBUG reactor.netty.resources.PooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Channel connected, now: 1 active connections, 0 inactive connections and 0 pending acquire requests.
16:50:49.807 [reactor-http-epoll-2] DEBUG io.netty.handler.ssl.SslHandler - [id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
16:50:49.808 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}, [connected])
16:50:49.826 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(GET{uri=/, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}}, [configured])
16:50:49.826 [reactor-http-epoll-2] DEBUG reactor.netty.http.client.HttpClientConnect - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Handler is being applied: {uri=wss://stream.binance.com:9443/ws, method=GET}
16:50:49.830 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(GET{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}}, [request_prepared])
16:50:49.839 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Added encoder [reactor.left.httpAggregator] at the beginning of the user pipeline, full pipeline: [reactor.left.sslHandler, reactor.left.httpCodec, reactor.left.httpAggregator, reactor.right.reactiveBridge, DefaultChannelPipeline$TailContext#0]
16:50:49.839 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Non Removed handler: reactor.left.httpMetricsHandler, context: null, pipeline: DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.left.httpAggregator = io.netty.handler.codec.http.HttpObjectAggregator), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
16:50:49.840 [reactor-http-epoll-2] DEBUG reactor.netty.http.client.HttpClientOperations - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Attempting to perform websocket handshake with wss://stream.binance.com:9443/ws
16:50:49.846 [reactor-http-epoll-2] DEBUG io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13 - WebSocket version 13 client handshake key: 7FNVb427OHllyiM2Clg//g==, expected response: iTvQFIKtv7xyyXvmEAooh8NZPVw=
16:50:50.122 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(ws{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}}, [response_received])
16:50:50.135 [reactor-http-epoll-2] DEBUG org.springframework.web.reactive.socket.adapter.ReactorNettyWebSocketSession - [36eb4d6b] Session id "36eb4d6b" for wss://stream.binance.com:9443/ws
16:50:50.135 [reactor-http-epoll-2] DEBUG org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient - Started session '36eb4d6b' for wss://stream.binance.com:9443/ws
16:50:50.147 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Added decoder [reactor.left.wsFrameAggregator] at the end of the user pipeline, full pipeline: [reactor.left.sslHandler, reactor.left.httpCodec, ws-decoder, ws-encoder, reactor.left.wsFrameAggregator, reactor.right.reactiveBridge, DefaultChannelPipeline$TailContext#0]
16:50:50.149 [reactor-http-epoll-2] DEBUG reactor.netty.channel.FluxReceive - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] FluxReceive{pending=0, cancelled=false, inboundDone=false, inboundError=null}: subscribing inbound receiver
16:50:50.150 [reactor-http-epoll-2] INFO TRACE - onSubscribe(FluxMap.MapSubscriber)
16:50:50.150 [reactor-http-epoll-2] INFO TRACE - request(256)
16:50:50.411 [reactor-http-epoll-2] INFO TRACE - onNext(evt)
16:50:50.413 [reactor-http-epoll-2] INFO TRACE - request(1)
...
16:52:16.652 [reactor-http-epoll-2] INFO TRACE - onNext(evt)
16:52:16.652 [reactor-http-epoll-2] INFO TRACE - request(1)
16:52:17.168 [reactor-http-epoll-2] DEBUG reactor.netty.resources.PooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] Channel closed, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
16:52:17.169 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] Non Removed handler: reactor.left.httpAggregator, context: null, pipeline: DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (ws-decoder = io.netty.handler.codec.http.websocketx.WebSocket13FrameDecoder), (ws-encoder = io.netty.handler.codec.http.websocketx.WebSocket13FrameEncoder), (reactor.left.wsFrameAggregator = io.netty.handler.codec.http.websocketx.WebSocketFrameAggregator), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
A completed
A terminated
16:52:17.172 [reactor-http-epoll-2] INFO TRACE - onComplete()
B completed
B terminated
C success
C terminated
16:52:17.177 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] onStateChange(ws{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443]}}, [response_completed])
16:52:17.177 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] onStateChange(ws{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443]}}, [disconnecting])
I tried to reproduce the issue with another technology (a JavaScript client), and everything ran fine.
It seems the channel is being closed, so I tried to tune the ChannelOptions at the TcpClient level... still no luck!
TcpClient wsTcp = TcpClient.create()
        // each option(...) call returns a new immutable TcpClient,
        // so the calls must be chained (or the result reassigned)
        .option(ChannelOption.AUTO_CLOSE, Boolean.FALSE)
        .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, Integer.MAX_VALUE)
        .option(ChannelOption.AUTO_READ, Boolean.TRUE)
        .option(ChannelOption.SO_KEEPALIVE, Boolean.TRUE)
        .option(ChannelOption.SO_TIMEOUT, Integer.MAX_VALUE);
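Note that a TcpClient tuned this way still has to reach the websocket client, or the options never take effect. A minimal sketch, assuming the options are applied on Reactor Netty's HttpClient instead and passed through the ReactorNettyWebSocketClient constructor that accepts a pre-configured HttpClient (available in Spring Framework 5.3):

import io.netty.channel.ChannelOption;
import org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient;
import reactor.netty.http.client.HttpClient;

public class TunedWsClient {

    static ReactorNettyWebSocketClient create() {
        // configure keep-alive on the HTTP client that performs the ws handshake
        HttpClient httpClient = HttpClient.create()
                .option(ChannelOption.SO_KEEPALIVE, Boolean.TRUE)
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, Integer.MAX_VALUE);
        // hand the tuned client to the websocket client so the options are used
        return new ReactorNettyWebSocketClient(httpClient);
    }
}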
Here is a Java sample that reproduces the issue:
package test;

import java.net.URI;
import java.util.concurrent.CountDownLatch;

import org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient;

import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class WsTest {

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        ReactorNettyWebSocketClient wsclient = new ReactorNettyWebSocketClient();
        wsclient.setMaxFramePayloadLength(Integer.MAX_VALUE);

        EmitterProcessor<String> output = EmitterProcessor.create();
        Mono<Void> execMono = wsclient.execute(URI.create("wss://stream.binance.com:9443/ws"),
                session -> session.send(Flux.just(session.textMessage(
                                "{\"method\": \"SUBSCRIBE\",\"params\":[\"!ticker@arr\"],\"id\": 1}")))
                        .thenMany(session
                                .receive()
                                .doOnCancel(() -> System.out.println("A cancelled"))
                                .doOnComplete(() -> System.out.println("A completed"))
                                .doOnTerminate(() -> System.out.println("A terminated"))
                                .map(x -> "evt")
                                .log("TRACE")
                                .subscribeWith(output)
                                .then())
                        .then());

        output.doOnCancel(() -> System.out.println("B cancelled"))
                .doOnComplete(() -> System.out.println("B completed"))
                .doOnTerminate(() -> System.out.println("B terminated"))
                .doOnSubscribe(s -> execMono
                        .doOnCancel(() -> System.out.println("C cancelled"))
                        .doOnSuccess(x -> System.out.println("C success"))
                        .doOnTerminate(() -> System.out.println("C terminated"))
                        .subscribe())
                .subscribe();

        latch.await();
    }
}
I don't understand why I get completed/terminated events from the ReactorNettyWebSocketClient WebSocketHandler.
Thank you for your help,

I finally managed to find the root cause.
The underlying error was websocket close code 1006 with the message "Unexpected Status of SSLEngineResult after an unwrap() operation".
As documented in RFC 6455 (https://datatracker.ietf.org/doc/html/rfc6455#section-7.4.1), close code 1006 means the connection was closed abnormally:
1006 is a reserved value and MUST NOT be set as a status code in a
Close control frame by an endpoint. It is designated for use in
applications expecting a status code to indicate that the
connection was closed abnormally, e.g., without sending or
receiving a Close control frame.
At that point I switched from a Wi-Fi connection to a LAN connection, and the issue vanished immediately.
My Wi-Fi router was not able to handle the large payload correctly.
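Since even a healthy 24-hour stream can drop for reasons outside the application (as happened here), a common mitigation is to resubscribe whenever the session ends. A minimal sketch, not from the original post: repeat() handles normal completion, retryWhen() handles errors, and the backoff values are illustrative only:

import java.net.URI;
import java.time.Duration;

import org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient;
import reactor.util.retry.Retry;

public class ReconnectingWs {

    public static void main(String[] args) {
        ReactorNettyWebSocketClient wsclient = new ReactorNettyWebSocketClient();
        wsclient.execute(URI.create("wss://stream.binance.com:9443/ws"),
                        session -> session.receive()
                                .doOnNext(msg -> System.out.println("evt"))
                                .then())
                // resubscribe when the server closes the stream normally...
                .repeat()
                // ...and back off then retry when it terminates with an error
                .retryWhen(Retry.backoff(Long.MAX_VALUE, Duration.ofSeconds(1)))
                .blockLast();
    }
}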

Related

Apache Ignite: Slow Node join and failure

We have an Ignite setup with 3 servers, with persistence and therefore baselining enabled. From time to time the servers take a long time to rebuild the cluster after all nodes are restarted. Ignite runs embedded in the application.
20.11.2020 08:18:17.678 WARN [main] org.apache.ignite.internal.util.typedef.G:290 - Ignite work directory is not provided, automatically resolved to: D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite\work
20.11.2020 08:18:17.709 WARN [main] org.apache.ignite.internal.util.typedef.G:295 - Consistent ID is not set, it is recommended to set consistent ID for production clusters (use IgniteConfiguration.setConsistentId property)
20.11.2020 08:18:18.053 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Config URL: n/a
20.11.2020 08:18:18.084 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - IgniteConfiguration [igniteInstanceName=null, pubPoolSize=8, svcPoolSize=8, callbackPoolSize=8, stripedPoolSize=8, sysPoolSize=8, mgmtPoolSize=4, igfsPoolSize=4, dataStreamerPoolSize=8, utilityCachePoolSize=8, utilityCacheKeepAliveTime=60000, p2pPoolSize=2, qryPoolSize=8, sqlQryHistSize=1000, dfltQryTimeout=0, igniteHome=D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite, igniteWorkDir=D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite\work, mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer#78c03f1f, nodeId=0e60d50b-ee2e-46ed-8d76-5cb51791011b, marsh=BinaryMarshaller [], marshLocJobs=false, daemon=false, p2pEnabled=true, netTimeout=5000, netCompressionLevel=1, sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=10000, metricsUpdateFreq=2000, metricsExpTime=9223372036854775807, discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0, marsh=null, reconCnt=10, reconDelay=2000, maxAckTimeout=600000, soLinger=5, forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null, skipAddrsRandomization=false], segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=10000, commSpi=TcpCommunicationSpi [connectGate=null, connPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$FirstConnectionPolicy#522ba524, chConnPlc=null, enableForcibleNodeKill=false, enableTroubleshootingLog=false, locAddr=null, locHost=null, locPort=47100, locPortRange=100, shmemPort=-1, directBuf=true, directSndBuf=false, idleConnTimeout=600000, connTimeout=5000, maxConnTimeout=600000, reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=0, slowClientQueueLimit=0, nioSrvr=null, shmemSrv=null, usePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=32, unackedMsgsBufSize=0, sockWriteTimeout=2000, boundTcpPort=-1, boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null, ctxInitLatch=java.util.concurrent.CountDownLatch#29c5ee1d[Count = 1], stopping=false, metricsLsnr=null], evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi#15cea7b0, colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [], indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi#1e6cc850, addrRslvr=null, encryptionSpi=org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi#7e7f0f0a, clientMode=false, rebalanceThreadPoolSize=4, rebalanceTimeout=10000, rebalanceBatchesPrefetchCnt=3, rebalanceThrottle=0, rebalanceBatchSize=524288, txCfg=TransactionConfiguration [txSerEnabled=false, dfltIsolation=REPEATABLE_READ, dfltConcurrency=PESSIMISTIC, dfltTxTimeout=0, txTimeoutOnPartitionMapExchange=0, deadlockTimeout=10000, pessimisticTxLogSize=0, pessimisticTxLogLinger=10000, tmLookupClsName=null, txManagerFactory=null, useJtaSync=false], cacheSanityCheckEnabled=true, discoStartupDelay=60000, deployMode=SHARED, p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100, timeSrvPortRange=100, failureDetectionTimeout=10000, sysWorkerBlockedTimeout=null, clientFailureDetectionTimeout=30000, metricsLogFreq=0, hadoopCfg=null, connectorCfg=ConnectorConfiguration [jettyPath=null, host=null, port=11211, noDelay=true, directBuf=false, sndBufSize=32768, rcvBufSize=32768, idleQryCurTimeout=600000, idleQryCurCheckFreq=60000, sndQueueLimit=0, selectorCnt=4, idleTimeout=7000, sslEnabled=false, sslClientAuth=false, sslCtxFactory=null, sslFactory=null, portRange=100, threadPoolSize=8, 
msgInterceptor=null], odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration [seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null, grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=null, binaryCfg=null, memCfg=null, pstCfg=null, dsCfg=DataStorageConfiguration [sysRegionInitSize=10485760, sysRegionMaxSize=52428800, pageSize=0, concLvl=0, dfltDataRegConf=DataRegionConfiguration [name=default, maxSize=858886144, initSize=10485760, swapPath=null, pageEvictionMode=DISABLED, evictionThreshold=0.9, emptyPagesPoolSize=100, metricsEnabled=true, metricsSubIntervalCount=5, metricsRateTimeInterval=60000, persistenceEnabled=true, checkpointPageBufSize=0, lazyMemoryAllocation=true], dataRegions=DataRegionConfiguration[] [DataRegionConfiguration [name=persistent, maxSize=52428800, initSize=10485760, swapPath=null, pageEvictionMode=DISABLED, evictionThreshold=0.9, emptyPagesPoolSize=100, metricsEnabled=true, metricsSubIntervalCount=5, metricsRateTimeInterval=60000, persistenceEnabled=true, checkpointPageBufSize=0, lazyMemoryAllocation=true]], storagePath=null, checkpointFreq=180000, lockWaitTime=10000, checkpointThreads=4, checkpointWriteOrder=SEQUENTIAL, walHistSize=20, maxWalArchiveSize=1073741824, walSegments=4, walSegmentSize=10485760, walPath=db/wal, walArchivePath=db/wal/archive, metricsEnabled=false, walMode=LOG_ONLY, walTlbSize=131072, walBuffSize=0, walFlushFreq=2000, walFsyncDelay=1000, walRecordIterBuffSize=67108864, alwaysWriteFullPages=false, fileIOFactory=org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIOFactory#59429fac, metricsSubIntervalCnt=5, metricsRateTimeInterval=60000, walAutoArchiveAfterInactivity=-1, writeThrottlingEnabled=false, walCompactionEnabled=false, walCompactionLevel=1, checkpointReadLockTimeout=null, walPageCompression=DISABLED, walPageCompressionLevel=null], activeOnStart=true, autoActivation=true, longQryWarnTimeout=3000, sqlConnCfg=null, cliConnCfg=ClientConnectorConfiguration [host=null, port=10800, portRange=100, sockSndBufSize=0, sockRcvBufSize=0, tcpNoDelay=true, maxOpenCursorsPerConn=128, threadPoolSize=8, idleTimeout=0, handshakeTimeout=10000, jdbcEnabled=true, odbcEnabled=true, thinCliEnabled=true, sslEnabled=false, useIgniteSslCtxFactory=true, sslClientAuth=false, sslCtxFactory=null, thinCliCfg=ThinClientConfiguration [maxActiveTxPerConn=100]], mvccVacuumThreadCnt=2, mvccVacuumFreq=5000, authEnabled=false, failureHnd=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]], commFailureRslvr=null]
20.11.2020 08:18:18.084 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Daemon mode: off
...
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Remote Management [restart: off, REST: on, JMX (remote: on, port: 8071, auth: off, ssl: off)]
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Logger: JavaLogger [quiet=true, config=null]
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - IGNITE_HOME=D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - VM arguments: [-Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.port=8071, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Djava.rmi.server.hostname=127.0.0.1, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=log/dump.hprof, -XX:+UseG1GC, -XX:+UseStringDeduplication, --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED, --add-exports=java.base/sun.nio.ch=ALL-UNNAMED, --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED, --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED, --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED, --illegal-access=permit, -Xmx500m]
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - System cache's DataRegion size is configured to 10 MB. Use DataStorageConfiguration.systemRegionInitialSize property to change the setting.
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Configured caches [in 'sysMemPlc' dataRegion: ['ignite-sys-cache']]
20.11.2020 08:18:18.100 WARN [main] org.apache.ignite.internal.IgniteKernal:295 - Peer class loading is enabled (disable it in production for performance and deployment consistency reasons)
20.11.2020 08:18:18.100 WARN [main] org.apache.ignite.internal.IgniteKernal:295 - Please set system property '-Djava.net.preferIPv4Stack=true' to avoid possible problems in mixed environments.
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - 3-rd party licenses can be found at: D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite\libs\licenses
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Local node user attribute [BUILD_VERSION=2.1.4]
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Local node user attribute [NODE_NAME=EESRV-LBXC03]
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Local node user attribute [BUILD_NUMBER=848]
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Local node user attribute [NODE_TYPE=LABBOX]
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Local node user attribute [VERSION=0]
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Local node user attribute [BUILD_TIME=1604577743000]
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Local node user attribute [APPLICATION_NAME=Labbox]
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Local node user attribute [BUILD_GIT_HASH=ff2f1f3]
20.11.2020 08:18:18.100 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Local node user attribute [KEY=_OL2;f~.C3n}yo6p<Zx=BE4I2P:lDL"f]
20.11.2020 08:18:18.163 WARN [pub-#19] org.apache.ignite.internal.GridDiagnostic:295 - This operating system has been tested less rigorously: Windows Server 2012 R2 6.3 amd64. Our team will appreciate the feedback if you experience any problems running ignite in this environment.
20.11.2020 08:18:18.163 WARN [pub-#22] org.apache.ignite.internal.GridDiagnostic:295 - Initial heap size is 64MB (should be no less than 512MB, use -Xms512m -Xmx512m).
20.11.2020 08:18:18.334 INFO [main] o.a.i.i.p.plugin.IgnitePluginProcessor:285 - Configured plugins:
20.11.2020 08:18:18.334 INFO [main] o.a.i.i.p.plugin.IgnitePluginProcessor:285 - ^-- Authentication 1.0.0
20.11.2020 08:18:18.334 INFO [main] o.a.i.i.p.plugin.IgnitePluginProcessor:285 - ^-- null
20.11.2020 08:18:18.334 INFO [main] o.a.i.i.p.plugin.IgnitePluginProcessor:285 -
20.11.2020 08:18:18.334 INFO [main] o.a.i.i.processors.failure.FailureProcessor:285 - Configured failure handler: [hnd=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]]
20.11.2020 08:18:18.600 INFO [main] o.a.i.s.communication.tcp.TcpCommunicationSpi:285 - Successfully bound communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
20.11.2020 08:18:18.678 WARN [main] o.a.i.s.communication.tcp.TcpCommunicationSpi:295 - Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
20.11.2020 08:18:18.694 WARN [main] o.a.i.spi.checkpoint.noop.NoopCheckpointSpi:295 - Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
20.11.2020 08:18:18.741 WARN [main] o.a.i.i.m.collision.GridCollisionManager:295 - Collision resolution is disabled (all jobs will be activated upon arrival).
20.11.2020 08:18:18.741 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Security status [authentication=off, tls/ssl=off]
20.11.2020 08:18:18.866 INFO [main] o.a.ignite.spi.discovery.tcp.TcpDiscoverySpi:285 - Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0, locNodeId=0e60d50b-ee2e-46ed-8d76-5cb51791011b]
20.11.2020 08:18:18.866 INFO [main] o.a.i.i.p.c.p.filename.PdsFoldersResolver:285 - Successfully locked persistence storage folder [D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite\work\db\node00-1dbddb2c-ef76-4811-b7d3-46da82061bc5]
20.11.2020 08:18:18.866 INFO [main] o.a.i.i.p.c.p.filename.PdsFoldersResolver:285 - Consistent ID used for local node is [1dbddb2c-ef76-4811-b7d3-46da82061bc5] according to persistence data storage folders
20.11.2020 08:18:18.866 INFO [main] o.a.i.i.p.c.b.CacheObjectBinaryProcessorImpl:285 - Resolved directory for serialized binary metadata: D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite\work\binary_meta\node00-1dbddb2c-ef76-4811-b7d3-46da82061bc5
20.11.2020 08:18:19.631 INFO [main] o.a.i.i.p.c.p.file.FilePageStoreManager:285 - Resolved page store work directory: D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite\work\db\node00-1dbddb2c-ef76-4811-b7d3-46da82061bc5
20.11.2020 08:18:19.694 INFO [main] o.a.i.i.p.c.p.w.f.FileHandleManagerImpl:285 - Initialized write-ahead log manager [mode=LOG_ONLY]
20.11.2020 08:18:19.772 WARN [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:295 - DataRegionConfiguration.maxWalArchiveSize instead DataRegionConfiguration.walHistorySize would be used for removing old archive wal files
20.11.2020 08:18:19.803 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Configured data regions initialized successfully [total=5]
20.11.2020 08:18:19.834 INFO [main] o.a.i.i.p.c.d.d.t.PartitionsEvictManager:285 - Evict partition permits=2
20.11.2020 08:18:19.850 INFO [main] o.a.i.i.p.odbc.ClientListenerProcessor:285 - Client connector processor has started on TCP port 10800
20.11.2020 08:18:20.006 INFO [main] o.a.i.i.p.r.protocols.tcp.GridTcpRestProtocol:285 - Command protocol successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
20.11.2020 08:18:20.115 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Non-loopback local IPs: 192.168.92.177, fe80:0:0:0:6859:37c8:f543:8087%eth4
20.11.2020 08:18:20.115 INFO [main] org.apache.ignite.internal.IgniteKernal:285 - Enabled local MACs: 00000000000000E0, 005056BD5072
20.11.2020 08:18:20.131 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Read checkpoint status [startMarker=D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite\work\db\node00-1dbddb2c-ef76-4811-b7d3-46da82061bc5\cp\1605855371041-8b5aaf2a-7867-47b0-879c-85791363041f-START.bin, endMarker=D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite\work\db\node00-1dbddb2c-ef76-4811-b7d3-46da82061bc5\cp\1605855371041-8b5aaf2a-7867-47b0-879c-85791363041f-END.bin]
20.11.2020 08:18:20.147 INFO [main] o.a.i.i.p.c.p.pagemem.PageMemoryImpl:285 - Started page memory [memoryAllocated=50,0 MiB, pages=12404, tableSize=988,2 KiB, checkpointBuffer=50,0 MiB]
20.11.2020 08:18:20.147 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Checking memory state [lastValidPos=FileWALPointer [idx=512, fileOff=3672982, len=99269], lastMarked=FileWALPointer [idx=512, fileOff=3672982, len=99269], lastCheckpointId=8b5aaf2a-7867-47b0-879c-85791363041f]
20.11.2020 08:18:20.225 WARN [main] o.a.i.i.p.c.p.wal.FileWriteAheadLogManager:290 - WAL segment tail reached. [idx=512, isWorkDir=true, serVer=org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV2Serializer#5853495b, actualFilePtr=FileWALPointer [idx=512, fileOff=3772251, len=0]]
20.11.2020 08:18:20.256 WARN [main] o.a.i.i.p.c.p.wal.FileWriteAheadLogManager:290 - WAL segment tail reached. [idx=512, isWorkDir=true, serVer=org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV2Serializer#21f459fc, actualFilePtr=FileWALPointer [idx=512, fileOff=3772251, len=0]]
20.11.2020 08:18:20.256 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Found last checkpoint marker [cpId=8b5aaf2a-7867-47b0-879c-85791363041f, pos=FileWALPointer [idx=512, fileOff=3672982, len=99269]]
20.11.2020 08:18:20.350 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Applying lost cache updates since last checkpoint record [lastMarked=FileWALPointer [idx=512, fileOff=3672982, len=99269], lastCheckpointId=8b5aaf2a-7867-47b0-879c-85791363041f]
20.11.2020 08:18:20.365 WARN [main] o.a.i.i.p.c.p.wal.FileWriteAheadLogManager:290 - WAL segment tail reached. [idx=512, isWorkDir=true, serVer=org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV2Serializer#6c15e8c7, actualFilePtr=FileWALPointer [idx=512, fileOff=3772251, len=0]]
20.11.2020 08:18:20.381 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Finished applying WAL changes [updatesApplied=0, time=31 ms]
20.11.2020 08:18:20.381 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Restoring partition state for local groups.
20.11.2020 08:18:20.381 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Finished restoring partition state for local groups [groupsProcessed=0, partitionsProcessed=0, time=0ms]
20.11.2020 08:18:20.412 INFO [main] o.a.i.i.p.cluster.GridClusterStateProcessor:285 - Restoring history for BaselineTopology[id=12]
20.11.2020 08:18:20.522 INFO [main] o.a.i.i.c.DistributedBaselineConfiguration:285 - Baseline parameter 'baselineAutoAdjustEnabled' was changed from 'null' to 'true'
20.11.2020 08:18:20.522 INFO [main] o.a.i.i.c.DistributedBaselineConfiguration:285 - Baseline parameter 'baselineAutoAdjustTimeout' was changed from 'null' to '300000'
20.11.2020 08:18:20.522 INFO [main] o.a.i.i.p.c.p.file.FilePageStoreManager:285 - Cleanup cache stores [total=1, left=0, cleanFiles=false]
20.11.2020 08:18:20.522 INFO [main] o.a.i.i.p.c.p.pagemem.PageMemoryImpl:285 - Started page memory [memoryAllocated=50,0 MiB, pages=12404, tableSize=988,2 KiB, checkpointBuffer=50,0 MiB]
20.11.2020 08:18:20.537 INFO [main] o.a.i.i.p.c.p.pagemem.PageMemoryImpl:285 - Started page memory [memoryAllocated=50,0 MiB, pages=12404, tableSize=988,2 KiB, checkpointBuffer=50,0 MiB]
20.11.2020 08:18:20.537 INFO [main] o.a.i.i.p.c.p.pagemem.PageMemoryImpl:285 - Started page memory [memoryAllocated=50,0 MiB, pages=12404, tableSize=988,2 KiB, checkpointBuffer=50,0 MiB]
20.11.2020 08:18:20.537 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Configured data regions started successfully [total=5]
20.11.2020 08:18:20.537 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Starting binary memory restore for: [166757441, -1947899996, -8785046, -2100569601, 1793235927, -499392514, 30677022, 129211407, 1139332309, 1725334265]
20.11.2020 08:18:21.334 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Read checkpoint status [startMarker=D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite\work\db\node00-1dbddb2c-ef76-4811-b7d3-46da82061bc5\cp\1605855371041-8b5aaf2a-7867-47b0-879c-85791363041f-START.bin, endMarker=D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite\work\db\node00-1dbddb2c-ef76-4811-b7d3-46da82061bc5\cp\1605855371041-8b5aaf2a-7867-47b0-879c-85791363041f-END.bin]
20.11.2020 08:18:21.334 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Checking memory state [lastValidPos=FileWALPointer [idx=512, fileOff=3672982, len=99269], lastMarked=FileWALPointer [idx=512, fileOff=3672982, len=99269], lastCheckpointId=8b5aaf2a-7867-47b0-879c-85791363041f]
20.11.2020 08:18:21.365 WARN [main] o.a.i.i.p.c.p.wal.FileWriteAheadLogManager:290 - WAL segment tail reached. [idx=512, isWorkDir=true, serVer=org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV2Serializer#317e9c3c, actualFilePtr=FileWALPointer [idx=512, fileOff=3772251, len=0]]
20.11.2020 08:18:21.397 WARN [main] o.a.i.i.p.c.p.wal.FileWriteAheadLogManager:290 - WAL segment tail reached. [idx=512, isWorkDir=true, serVer=org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV2Serializer#31a3f4de, actualFilePtr=FileWALPointer [idx=512, fileOff=3772251, len=0]]
20.11.2020 08:18:21.397 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Found last checkpoint marker [cpId=8b5aaf2a-7867-47b0-879c-85791363041f, pos=FileWALPointer [idx=512, fileOff=3672982, len=99269]]
20.11.2020 08:18:21.412 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Binary memory state restored at node startup [restoredPtr=FileWALPointer [idx=512, fileOff=3772251, len=0]]
20.11.2020 08:18:21.428 INFO [main] o.a.i.i.p.c.p.pagemem.PageMemoryImpl:285 - Started page memory [memoryAllocated=50,0 MiB, pages=12404, tableSize=988,2 KiB, checkpointBuffer=50,0 MiB]
20.11.2020 08:18:21.568 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Started cache in recovery mode [name=license, id=166757441, dataRegionName=persistent, mode=REPLICATED, atomicity=ATOMIC, backups=2147483647, mvcc=false]
20.11.2020 08:18:21.584 INFO [main] o.a.i.i.p.c.p.pagemem.PageMemoryImpl:285 - Started page memory [memoryAllocated=819,1 MiB, pages=203256, tableSize=15,8 MiB, checkpointBuffer=256,0 MiB]
20.11.2020 08:18:21.584 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Started cache in recovery mode [name=commservices, id=-8785046, dataRegionName=default, mode=REPLICATED, atomicity=ATOMIC, backups=2147483647, mvcc=false]
20.11.2020 08:18:21.615 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Started cache in recovery mode [name=ignite-sys-cache, id=-2100569601, dataRegionName=sysMemPlc, mode=REPLICATED, atomicity=TRANSACTIONAL, backups=2147483647, mvcc=false]
20.11.2020 08:18:21.615 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Started cache in recovery mode [name=machinespecifications, id=1793235927, dataRegionName=persistent, mode=REPLICATED, atomicity=ATOMIC, backups=2147483647, mvcc=false]
20.11.2020 08:18:21.615 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Started cache in recovery mode [name=nxisPorts, id=-499392514, dataRegionName=persistent, mode=REPLICATED, atomicity=ATOMIC, backups=2147483647, mvcc=false]
20.11.2020 08:18:21.631 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Started cache in recovery mode [name=datastructures_ATOMIC_PARTITIONED_1#labqueue, id=1205724040, group=labqueue, dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1, mvcc=false]
20.11.2020 08:18:21.631 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Started cache in recovery mode [name=ignite-sys-atomic-cache#labqueue, id=-327698687, group=labqueue, dataRegionName=default, mode=PARTITIONED, atomicity=TRANSACTIONAL, backups=1, mvcc=false]
20.11.2020 08:18:21.631 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Started cache in recovery mode [name=machinemaxbatchno, id=30677022, dataRegionName=persistent, mode=PARTITIONED, atomicity=ATOMIC, backups=1, mvcc=false]
20.11.2020 08:18:21.646 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Started cache in recovery mode [name=machineconfiguration, id=129211407, dataRegionName=persistent, mode=REPLICATED, atomicity=ATOMIC, backups=2147483647, mvcc=false]
20.11.2020 08:18:21.646 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Started cache in recovery mode [name=specimentracer, id=1139332309, dataRegionName=persistent, mode=PARTITIONED, atomicity=ATOMIC, backups=1, mvcc=false]
20.11.2020 08:18:21.646 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Started cache in recovery mode [name=machinestatus, id=1725334265, dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1, mvcc=false]
20.11.2020 08:18:21.646 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Binary recovery performed in 1109 ms.
20.11.2020 08:18:21.646 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Read checkpoint status [startMarker=D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite\work\db\node00-1dbddb2c-ef76-4811-b7d3-46da82061bc5\cp\1605855371041-8b5aaf2a-7867-47b0-879c-85791363041f-START.bin, endMarker=D:\IntegrationSolutions\Services\LabDeviceHUB\Labbox\.\..\userdata\labbox\ignite\work\db\node00-1dbddb2c-ef76-4811-b7d3-46da82061bc5\cp\1605855371041-8b5aaf2a-7867-47b0-879c-85791363041f-END.bin]
20.11.2020 08:18:21.662 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Applying lost cache updates since last checkpoint record [lastMarked=FileWALPointer [idx=512, fileOff=3672982, len=99269], lastCheckpointId=8b5aaf2a-7867-47b0-879c-85791363041f]
20.11.2020 08:18:21.693 INFO [main] o.a.i.i.p.c.p.GridCacheDatabaseSharedManager:285 - Finished applying WAL changes [updatesApplied=0, time=31 ms]
20.11.2020 08:18:21.693 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Restoring partition state for local groups.
20.11.2020 08:18:21.943 INFO [main] o.a.i.i.processors.cache.GridCacheProcessor:285 - Finished restoring partition state for local groups [groupsProcessed=10, partitionsProcessed=5220, time=235ms]
20.11.2020 08:18:22.021 INFO [main] o.a.ignite.spi.discovery.tcp.TcpDiscoverySpi:285 - Connection check threshold is calculated: 10000
20.11.2020 08:19:19.373 INFO [tcp-disco-srvr-[:47500]-#3] o.a.ignite.spi.discovery.tcp.TcpDiscoverySpi:285 - TCP discovery accepted incoming connection [rmtAddr=/192.168.92.175, rmtPort=56962]
20.11.2020 08:19:19.389 INFO [tcp-disco-srvr-[:47500]-#3] o.a.ignite.spi.discovery.tcp.TcpDiscoverySpi:285 - TCP discovery spawning a new thread for connection [rmtAddr=/192.168.92.175, rmtPort=56962]
20.11.2020 08:19:19.389 INFO [tcp-disco-sock-reader-[]-#4] o.a.ignite.spi.discovery.tcp.TcpDiscoverySpi:285 - Started serving remote node connection [rmtAddr=/192.168.92.175:56962, rmtPort=56962]
20.11.2020 08:19:19.389 INFO [tcp-disco-sock-reader-[9f44068b 192.168.92.175:56962 client]-#4] o.a.ignite.spi.discovery.tcp.TcpDiscoverySpi:285 - Initialized connection with remote client node [nodeId=9f44068b-b8ca-4d8b-bb32-efd2e2a1940c, rmtAddr=/192.168.92.175:56962]
20.11.2020 08:19:19.498 INFO [tcp-disco-sock-reader-[9f44068b 192.168.92.175:56962 client]-#4] o.a.ignite.spi.discovery.tcp.TcpDiscoverySpi:285 - Finished serving remote node connection [rmtAddr=/192.168.92.175:56962, rmtPort=56962
20.11.2020 08:20:21.287 INFO [tcp-disco-srvr-[:47500]-#3] o.a.ignite.spi.discovery.tcp.TcpDiscoverySpi:285 - TCP discovery accepted incoming connection [rmtAddr=/192.168.92.176, rmtPort=55941]
20.11.2020 08:20:21.287 INFO [tcp-disco-srvr-[:47500]-#3] o.a.ignite.spi.discovery.tcp.TcpDiscoverySpi:285 - TCP discovery spawning a new thread for connection [rmtAddr=/192.168.92.176, rmtPort=55941]
20.11.2020 08:20:21.287 INFO [tcp-disco-sock-reader-[]-#5] o.a.ignite.spi.discovery.tcp.TcpDiscoverySpi:285 - Started serving remote node connection [rmtAddr=/192.168.92.176:55941, rmtPort=55941]
20.11.2020 08:20:21.287 INFO [tcp-disco-sock-reader-[6a50abff 192.168.92.176:55941]-#5] o.a.ignite.spi.discovery.tcp.TcpDiscoverySpi:285 - Initialized connection with remote server node [nodeId=6a50abff-8cfd-4b3a-b894-54fa9d405d36, rmtAddr=/192.168.92.176:55941]
20.11.2020 08:20:21.287 INFO [tcp-disco-sock-reader-[6a50abff 192.168.92.176:55941]-#5] o.a.ignite.spi.discovery.tcp.TcpDiscoverySpi:285 - Finished serving remote node connection [rmtAddr=/192.168.92.176:55941, rmtPort=55941
20.11.2020 08:20:26.239 INFO [tcp-disco-srvr-[:47500]-#3] o.a.ignite.spi.discovery.tcp.TcpDiscoverySpi:285 - TCP discovery accepted incoming connection [rmtAddr=/192.168.92.175, rmtPort=56996]
... it continues like that until the join or the failure.
The logs look the same on all servers. In this case, servers 1 and 2 form a cluster after 7 minutes; server 3 fails after 9 minutes due to an incompatible baseline topology. After resetting the failed server, it can rejoin the cluster. This behavior only happens sometimes; most of the time the servers rebuild the cluster without problems.
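The startup log above warns that no consistent ID is set and recommends the IgniteConfiguration.setConsistentId property for production clusters. With persistence and baseline topology enabled, a minimal sketch of pinning it explicitly in the embedded configuration (the ID string is hypothetical):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class EmbeddedIgnite {

    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // pin a stable per-node ID so a restarted node maps onto the same
        // baseline-topology entry instead of deriving one at startup
        cfg.setConsistentId("labbox-node-1"); // hypothetical node ID
        Ignite ignite = Ignition.start(cfg);
    }
}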

Connect to an Azure IoT Hub from inside a Kubernetes cluster via AMQP over websockets

We are trying to communicate with an Azure IoT Hub via AMQP over websockets from a Java Docker container inside an Azure Kubernetes cluster. Sadly, it seems the container can't establish a connection, while locally, or even on another virtual machine (with only Docker installed), the container runs successfully.
The network policy rules should allow all protocols and ports necessary to communicate with the Event Hub endpoint of the IoT Hub.
Does anybody know which switch we have to pull to allow the container to communicate with the IoT Hub from inside the cluster?
The only logs we have are these:
13:10:26.688 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
13:10:26.851 [main] INFO com.azure.messaging.eventhubs.EventHubClientBuilder - connectionId[REDACTED]: Emitting a single connection.
13:10:26.901 [main] DEBUG com.azure.core.amqp.implementation.ReactorConnection - connectionId[REDACTED]: Connection state: UNINITIALIZED
13:10:26.903 [main] INFO com.azure.messaging.eventhubs.implementation.EventHubConnectionProcessor - namespace[REDACTED] entityPath[REDACTED]: Setting next AMQP channel.
13:10:26.903 [main] INFO com.azure.messaging.eventhubs.implementation.EventHubConnectionProcessor - namespace[REDACTED] entityPath[REDACTED]: Next AMQP channel received, updating 0 current subscribers
13:10:26.920 [main] INFO com.azure.core.amqp.implementation.ReactorConnection - connectionId[REDACTED]: Creating and starting connection to REDACTED:443
13:10:26.940 [main] INFO com.azure.core.amqp.implementation.ReactorExecutor - connectionId[REDACTED], message[Starting reactor.]
13:10:26.955 [single-1] INFO com.azure.core.amqp.implementation.handler.ConnectionHandler - onConnectionInit hostname[REDACTED], connectionId[REDACTED]
13:10:26.956 [single-1] INFO com.azure.core.amqp.implementation.handler.ReactorHandler - connectionId[REDACTED] reactor.onReactorInit
13:10:26.956 [single-1] INFO com.azure.core.amqp.implementation.handler.ConnectionHandler - onConnectionLocalOpen hostname[REDACTED:443], connectionId[REDACTED], errorCondition[null], errorDescription[null]
13:10:26.975 [main] DEBUG com.azure.core.amqp.implementation.ReactorSession - Connection state: UNINITIALIZED
13:10:26.991 [main] INFO com.azure.core.amqp.implementation.ReactorConnection - Emitting new response channel. connectionId: REDACTED. entityPath: $management. linkName: mgmt.
13:10:26.991 [main] INFO class com.azure.core.amqp.implementation.RequestResponseChannel<mgmt-session> - namespace[REDACTED] entityPath[$management]: Setting next AMQP channel.
13:10:26.991 [main] INFO class com.azure.core.amqp.implementation.RequestResponseChannel<mgmt-session> - namespace[REDACTED] entityPath[$management]: Next AMQP channel received, updating 0 current subscribers
13:10:26.993 [main] INFO com.azure.messaging.eventhubs.implementation.ManagementChannel - Management endpoint state: UNINITIALIZED
13:10:27.032 [main] INFO com.azure.messaging.eventhubs.implementation.EventHubConnectionProcessor - Upstream connection publisher was completed. Terminating processor.
13:10:27.033 [main] INFO com.azure.messaging.eventhubs.implementation.EventHubConnectionProcessor - namespace[REDACTED] entityPath[REDACTED]: AMQP channel processor completed. Notifying 0 subscribers.
13:10:27.040 [main] INFO com.azure.messaging.eventhubs.implementation.EventHubReactorAmqpConnection - connectionId[REDACTED]: Disposing of connection.
13:10:27.040 [main] INFO class com.azure.core.amqp.implementation.RequestResponseChannel<mgmt-session> - Upstream connection publisher was completed. Terminating processor.
13:10:27.040 [main] INFO class com.azure.core.amqp.implementation.RequestResponseChannel<mgmt-session> - namespace[REDACTED] entityPath[$management]: AMQP channel processor completed. Notifying 0 subscribers.
13:10:27.041 [main] INFO com.azure.core.amqp.implementation.ReactorConnection - connectionId[REDACTED]: Disposing of ReactorConnection.
13:10:27.041 [main] INFO com.azure.messaging.eventhubs.implementation.EventHubConnectionProcessor - namespace[REDACTED] entityPath[REDACTED]: Channel is disposed.
13:10:27.041 [main] INFO com.azure.core.amqp.implementation.ReactorConnection - connectionId[REDACTED]: Removing session 'mgmt-session'
13:10:27.041 [main] INFO com.azure.core.amqp.implementation.ReactorSession - sessionId[mgmt-session]: Disposing of session.
13:10:27.043 [main] INFO com.azure.core.amqp.implementation.AmqpExceptionHandler - Shutdown received: ReactorExecutor.close() was called., isTransient[false], initiatedByClient[true]
13:10:27.089 [single-1] DEBUG com.azure.core.amqp.implementation.handler.SessionHandler - onSessionLocalOpen connectionId[REDACTED], entityName[mgmt-session], condition[Error{condition=null, description='null', info=null}]
13:10:27.090 [single-1] INFO com.azure.core.amqp.implementation.handler.SendLinkHandler - onLinkLocalClose connectionId[REDACTED], linkName[mgmt:sender], errorCondition[null], errorDescription[null]
13:10:27.090 [single-1] INFO com.azure.core.amqp.implementation.handler.ReceiveLinkHandler - onLinkLocalClose connectionId[REDACTED], linkName[mgmt:receiver], errorCondition[null], errorDescription[null]
13:10:27.090 [single-1] DEBUG com.azure.core.amqp.implementation.handler.SessionHandler - onSessionLocalClose connectionId[mgmt-session], entityName[REDACTED], condition[Error{condition=null, description='null', info=null}]
13:10:27.090 [single-1] INFO com.azure.core.amqp.implementation.handler.ConnectionHandler - onConnectionLocalClose hostname[REDACTED:443], connectionId[REDACTED], errorCondition[null], errorDescription[null]
13:10:27.090 [single-1] INFO com.azure.core.amqp.implementation.handler.ConnectionHandler - onConnectionBound hostname[REDACTED], connectionId[REDACTED]
13:10:27.098 [single-1] DEBUG com.azure.core.amqp.implementation.handler.WebSocketsConnectionHandler - connectionId[REDACTED] Adding web sockets transport layer for hostname[REDACTED:443]
13:10:27.125 [single-1] DEBUG com.azure.core.amqp.implementation.handler.DispatchHandler - Running task for event: %s
13:10:27.126 [single-1] INFO com.azure.core.amqp.implementation.ReactorExecutor - connectionId[REDACTED], message[Processing all pending tasks and closing old reactor.]
13:10:27.126 [single-1] DEBUG com.azure.core.amqp.implementation.handler.SendLinkHandler - onLinkLocalOpen connectionId[REDACTED], linkName[mgmt:sender], localTarget[Target{address='$management', durable=NONE, expiryPolicy=SESSION_END, timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null}]
13:10:27.126 [single-1] INFO com.azure.core.amqp.implementation.handler.ReceiveLinkHandler - onLinkLocalOpen connectionId[REDACTED], linkName[mgmt:receiver], localSource[Source{address='$management', durable=NONE, expiryPolicy=SESSION_END, timeout=0, dynamic=false, dynamicNodeProperties=null, distributionMode=null, filter=null, defaultOutcome=null, outcomes=null, capabilities=null}]
13:10:27.127 [single-1] INFO com.azure.core.amqp.implementation.ReactorExecutor - connectionId[REDACTED], message[Stopping the reactor because thread was interrupted or the reactor has no more events to process.]
The failure was not in the networking at all.
My mistake was assuming that if I can run the container manually via docker run -it * it should also work in a Kubernetes cluster. With the -it argument the container stays open because a pseudo-TTY is attached, but in a Kubernetes cluster this of course does not happen, so we had to adjust the loop logic of the Java application, and afterwards it worked.
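For reference, a minimal sketch of the kind of change that was needed (class and method names here are illustrative, not from the actual application): instead of relying on an attached TTY to keep the process alive, block the main thread explicitly.
import java.util.concurrent.CountDownLatch;

public class Application {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch shutdownLatch = new CountDownLatch(1);
        // release the latch on SIGTERM so the pod can terminate cleanly
        Runtime.getRuntime().addShutdownHook(new Thread(shutdownLatch::countDown));

        startClient(); // hypothetical: kicks off the asynchronous processing

        // block the main thread explicitly instead of relying on a pseudo-TTY
        shutdownLatch.await();
    }

    private static void startClient() {
        // application-specific startup logic goes here
    }
}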
Thanks to all.

Riak java client, execute() never returns

I've set up a Riak server on Ubuntu.
http://192.168.0.102:8098/ping returns "OK".
I'm trying to connect to it remotely using the Riak Java client (2.1.1) with the following code. client.execute() never returns. I'm attaching the log as well.
import java.net.UnknownHostException;
import java.util.concurrent.ExecutionException;

import com.basho.riak.client.api.RiakClient;
import com.basho.riak.client.api.commands.kv.FetchValue;
import com.basho.riak.client.api.commands.kv.StoreValue;
import com.basho.riak.client.core.query.Location;
import com.basho.riak.client.core.query.Namespace;

public class Testing {
    public static void main(String[] args) throws ExecutionException,
            InterruptedException, UnknownHostException {
        RiakClient client = RiakClient.newClient(8098, "192.168.0.102");

        // put some stuff
        Namespace ns = new Namespace("TestBucket");
        Location location = new Location(ns, "TestKey");
        String myData = "TestValue";
        StoreValue store = new StoreValue.Builder(myData)
                .withLocation(location).build();
        StoreValue.Response rv = client.execute(store); // << NEVER GETS PAST THIS
        System.out.println("write done");

        // get some stuff
        FetchValue fv = new FetchValue.Builder(location).build();
        FetchValue.Response response = client.execute(fv);
        String obj = response.getValue(String.class);
        System.out.println(obj);
        System.out.println("fetch done");
    }
}
The console log is:
17:19:40.841 [main] DEBUG i.n.u.i.l.InternalLoggerFactory - Using SLF4J as the default logging framework
17:19:40.865 [main] DEBUG i.n.c.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 16
17:19:40.891 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
17:19:40.892 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
17:19:40.893 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
17:19:40.894 [main] DEBUG i.n.util.internal.PlatformDependent0 - direct buffer constructor: available
17:19:40.894 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
17:19:40.894 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): available
17:19:40.896 [main] DEBUG io.netty.util.internal.Cleaner0 - java.nio.ByteBuffer.cleaner(): available
17:19:40.896 [main] DEBUG i.n.util.internal.PlatformDependent - Platform: Windows
17:19:40.897 [main] DEBUG i.n.util.internal.PlatformDependent - Java version: 8
17:19:40.897 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noUnsafe: false
17:19:40.897 [main] DEBUG i.n.util.internal.PlatformDependent - sun.misc.Unsafe: available
17:19:40.898 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noJavassist: false
17:19:40.899 [main] DEBUG i.n.util.internal.PlatformDependent - Javassist: unavailable
17:19:40.899 [main] DEBUG i.n.util.internal.PlatformDependent - You don't have Javassist in your class path or you don't have enough permission to load dynamically generated classes. Please check the configuration for better performance.
17:19:40.899 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.tmpdir: C:\Users\Rakesh\AppData\Local\Temp (java.io.tmpdir)
17:19:40.900 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.bitMode: 32 (sun.arch.data.model)
17:19:40.900 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
17:19:40.900 [main] DEBUG i.n.util.internal.PlatformDependent - io.netty.maxDirectMemory: 259522560 bytes
17:19:40.921 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
17:19:40.921 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
17:19:40.922 [main] DEBUG i.n.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
17:19:41.039 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 2924 (auto-detected)
17:19:41.041 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
17:19:41.041 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
17:19:41.162 [main] DEBUG io.netty.util.NetUtil - Loopback interface: lo (Software Loopback Interface 1, 127.0.0.1)
17:19:41.163 [main] DEBUG io.netty.util.NetUtil - \proc\sys\net\core\somaxconn: 200 (non-existent)
17:19:41.321 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: e4:b3:18:ff:fe:6c:52:eb (auto-detected)
17:19:41.321 [main] DEBUG i.n.util.internal.ThreadLocalRandom - -Dio.netty.initialSeedUniquifier: 0xb620b93d4006e503
17:19:41.333 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
17:19:41.333 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.maxRecords: 4
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 2
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 2
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.tinyCacheSize: 512
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
17:19:41.364 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
17:19:41.365 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 65536
17:19:41.365 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
17:19:41.406 [main] INFO com.basho.riak.client.core.RiakNode - RiakNode started; 192.168.0.102:8098
17:19:41.407 [main] INFO c.basho.riak.client.core.RiakCluster - RiakCluster is starting.
17:19:41.408 [main] INFO c.b.r.c.core.util.DefaultCharset - No desired charset found in system properties, the default charset 'windows-1252' will be used
17:19:41.408 [main] INFO c.b.r.c.core.util.DefaultCharset - Initializing client charset to: windows-1252
17:19:41.443 [main] DEBUG com.basho.riak.client.core.RiakNode - Attempting to acquire channel permit
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 32768
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
17:19:41.447 [main] DEBUG com.basho.riak.client.core.RiakNode - Operation 28144878 being executed on RiakNode 192.168.0.102:8098
17:19:41.461 [nioEventLoopGroup-2-10] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.bytebuf.checkAccessible: true
17:19:41.463 [nioEventLoopGroup-2-10] DEBUG i.n.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#1536e36
Call stack of suspended thread
Thread [main] (Suspended)
Unsafe.park(boolean, long) line: not available [native method]
LockSupport.park(Object) line: not available
CountDownLatch$Sync(AbstractQueuedSynchronizer).parkAndCheckInterrupt() line: not available
CountDownLatch$Sync(AbstractQueuedSynchronizer).doAcquireSharedInterruptibly(int) line: not available
CountDownLatch$Sync(AbstractQueuedSynchronizer).acquireSharedInterruptibly(int) line: not available
CountDownLatch.await() line: not available
StoreOperation(FutureOperation<T,U,S>).await() line: 387
GenericRiakCommand$1(CoreFutureAdapter<T2,S2,T,S>).await() line: 90
StoreValue(RiakCommand<T,S>).execute(RiakCluster) line: 92
RiakClient.execute(RiakCommand<T,S>) line: 355
Testing.main(String[]) line: 29
A simple code addition after the following line of your code should fix things for you:
StoreValue.Response rv = client.execute(store);
add:
client.shutdown();
to release that connection and continue execution.
Note that you will need to create a new client for your next request against the database, since you closed this one, or use .executeAsync() in place of .execute().
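For illustration, a minimal sketch of the flow this answer suggests (the fetch gets its own client because the first one is shut down; host and port are taken from the question):
StoreValue.Response rv = client.execute(store);
// release the connection and continue execution
client.shutdown();

// the old client is closed, so create a new one for the fetch
RiakClient fetchClient = RiakClient.newClient(8098, "192.168.0.102");
FetchValue fv = new FetchValue.Builder(location).build();
FetchValue.Response response = fetchClient.execute(fv);
System.out.println(response.getValue(String.class));
fetchClient.shutdown();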
It appears you are expecting the Riak Java client to connect using the HTTP API. The Riak Java client connects only via Protocol Buffers (port 8087 by default); pointing it at the HTTP address and port (8098) will hang.
You have to use this; it works fine (note the Protocol Buffers port 8087 rather than the HTTP port 8098):
public static void main(String[] args) throws ExecutionException,
        InterruptedException, UnknownHostException {
    RiakClient client = RiakClient.newClient(8087, "192.168.0.65");

    // put some stuff
    Namespace ns = new Namespace("TestBucket");
    Location location = new Location(ns, "TestKey");
    String myData = "TestValue";
    StoreValue store = new StoreValue.Builder(myData)
            .withLocation(location).build();
    client.execute(store); // completes now that the Protocol Buffers port is used
    System.out.println("write done");

    // get some stuff
    FetchValue fv = new FetchValue.Builder(location).build();
    FetchValue.Response response = client.execute(fv);
    String obj = response.getValue(String.class);
    System.out.println(obj);
    System.out.println("fetch done");
}
Hope this works for you too!

OpenNMS v18 AMQP Message Sending Issue

I cannot get OpenNMS to send messages to an AMQP endpoint.
I have never had this working, and this is the first time I have used OpenNMS and AMQP, so it could be down to my inexperience.
I have configured RabbitMQ 3.5.7 and tested it as per this question.
It works fine when using an external QPID 0.32 client, and also works fine when using either Python or Perl.
"Works fine" here means that communications are established and the message payload is transmitted into an exchange and then delivered into a backend queue.
The message can then be viewed within the RabbitMQ admin GUI.
In OpenNMS I have been following the instructions here.
I started with the EventForwarder and then tried the AlarmNorthbounder; both yield a NullPointerException.
I set the properties in the Karaf console using these statements:
opennms> config:edit org.opennms.features.amqp.alarmnorthbounder
opennms> propset connectionUrl amqp://simon:simon#/test?brokerlist=\'localhost:5672\'
opennms> propset destination "amqp:onms3/Simon;{'create':'always','node':{'type':'topic'} }"
opennms> propset processorName default-alarm-northbounder-processor
opennms> config:update
opennms> config:list '(service.pid=org.opennms.features.amqp.alarmnorthbounder)'
----------------------------------------------------------------
Pid: org.opennms.features.amqp.alarmnorthbounder
BundleLocation: mvn:org.opennms.features.amqp/org.opennms.features.amqp.alarm-northbounder/18.0.0
Properties:
connectionUrl = amqp://simon:simon#/test?brokerlist='localhost:5672'
destination = amqp:onms3/Simon;{'create':'always','node':{'type':'topic'} }
felix.fileinstall.filename = file:/usr/share/opennms/etc/org.opennms.features.amqp.alarmnorthbounder.cfg
processorName = default-alarm-northbounder-processor
service.pid = org.opennms.features.amqp.alarmnorthbounder
I get the following error message in the logs
Message History
---------------------------------------------------------------------------------------------------------------------------------------
RouteId ProcessorId Processor Elapsed (ms)
[forwardAlarm ] [forwardAlarm ] [seda://forwardAlarm ] [ 8]
[forwardAlarm ] [convertBodyTo3 ] [convertBodyTo[org.opennms.netmgt.alarmd.api.NorthboundAlarm] ] [ 0]
[forwardAlarm ] [log3 ] [log ] [ 1]
[forwardAlarm ] [bean3 ] [bean[ref:dynamicallyTrackedProcessor] ] [ 0]
[forwardAlarm ] [to3 ] [amqp:onms3/Simon;{'create':'always','node':{'type':'topic'} } ] [ 7]
Exchange
---------------------------------------------------------------------------------------------------------------------------------------
Exchange[
Id ID-ubuntu-1604-35241-1465752556843-2-60
ExchangePattern InOnly
Headers {breadcrumbId=ID-ubuntu-1604-35241-1465752556843-2-58, CamelRedelivered=false, CamelRedeliveryCounter=0}
BodyType String
Body NorthboundAlarm[id=3, uei='uei.opennms.org/generic/traps/EnterpriseDefault', nodeId=1]
]
Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
java.lang.NullPointerException
at org.apache.qpid.client.BasicMessageProducer_0_8.declareDestination(BasicMessageProducer_0_8.java:63)[212:org.apache.servicemix.bundles.qpid:0.28.0.1]
at org.apache.qpid.client.BasicMessageProducer.<init>(BasicMessageProducer.java:136)[212:org.apache.servicemix.bundles.qpid:0.28.0.1]
at org.apache.qpid.client.BasicMessageProducer_0_8.<init>(BasicMessageProducer_0_8.java:55)[212:org.apache.servicemix.bundles.qpid:0.28.0.1]
at org.apache.qpid.client.AMQSession_0_8.createMessageProducer(AMQSession_0_8.java:559)[212:org.apache.servicemix.bundles.qpid:0.28.0.1]
at org.apache.qpid.client.AMQSession_0_8.createMessageProducer(AMQSession_0_8.java:62)[212:org.apache.servicemix.bundles.qpid:0.28.0.1]
My expectation is that it should be able to deliver the message to the queue using the same libraries as I am using externally.
With DEBUG enabled for org.apache.qpid I get:
2016-06-12 19:00:38,725 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [IoReceiver - localhost/127.0.0.1:5672] org.apache.qpid.client.state.AMQStateManager: Notififying State change to 1 : [org.apache.qpid.client.state.StateWaiter#5d7e5d44]
2016-06-12 19:00:38,725 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [IoReceiver - localhost/127.0.0.1:5672] org.apache.qpid.framing.FieldTable: FieldTable::writeToBuffer: Writing encoded length of 254...
2016-06-12 19:00:38,725 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [IoReceiver - localhost/127.0.0.1:5672] org.apache.qpid.framing.FieldTable: {instance=[LONG_STRING: ubuntu-16041465757105204], product=[LONG_STRING: qpid], version=[LONG_STRING: 0.28], platform=[LONG_STRING: Java(TM) SE Runtime Environment, 1.8.0_45-b14, Oracle Corporation, amd64, Linux, 4.4.0-22-generic, unknown], qpid.client_process=[LONG_STRING: Qpid Java Client], qpid.client_pid=[INT: 1318]}
2016-06-12 19:00:38,725 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [IoReceiver - localhost/127.0.0.1:5672] org.apache.qpid.client.protocol.AMQProtocolHandler: (1404676419)Method frame received: [ConnectionTuneBodyImpl: channelMax=0, frameMax=131072, heartbeat=60]
2016-06-12 19:00:38,726 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [IoReceiver - localhost/127.0.0.1:5672] org.apache.qpid.client.handler.ConnectionTuneMethodHandler: ConnectionTune frame received
2016-06-12 19:00:38,726 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [IoReceiver - localhost/127.0.0.1:5672] org.apache.qpid.client.state.AMQStateManager: State changing to AMQState: id = 3 name: CONNECTION_NOT_OPENED from old state AMQState: id = 2 name: CONNECTION_NOT_TUNED
2016-06-12 19:00:38,726 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [IoReceiver - localhost/127.0.0.1:5672] org.apache.qpid.client.state.AMQStateManager: Notififying State change to 1 : [org.apache.qpid.client.state.StateWaiter#5d7e5d44]
2016-06-12 19:00:38,726 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [IoReceiver - localhost/127.0.0.1:5672] org.apache.qpid.client.protocol.AMQProtocolHandler: (1404676419)Method frame received: [ConnectionOpenOkBodyImpl: knownHosts=null]
2016-06-12 19:00:38,726 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [IoReceiver - localhost/127.0.0.1:5672] org.apache.qpid.client.state.AMQStateManager: State changing to AMQState: id = 4 name: CONNECTION_OPEN from old state AMQState: id = 3 name: CONNECTION_NOT_OPENED
2016-06-12 19:00:38,726 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [IoReceiver - localhost/127.0.0.1:5672] org.apache.qpid.client.state.AMQStateManager: Notififying State change to 1 : [org.apache.qpid.client.state.StateWaiter#5d7e5d44]
2016-06-12 19:00:38,726 INFO org.apache.servicemix.bundles.qpid:0.28.0.1(212) [Camel (amqpAlarmNorthbounderCamelContext) thread #4 - seda://forwardAlarm] org.apache.qpid.client.AMQConnection: Connection 44 now connected from /127.0.0.1:45388 to localhost/127.0.0.1:5672
2016-06-12 19:00:38,727 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [Camel (amqpAlarmNorthbounderCamelContext) thread #4 - seda://forwardAlarm] org.apache.qpid.client.AMQConnection: Are we connected:true
2016-06-12 19:00:38,727 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [Camel (amqpAlarmNorthbounderCamelContext) thread #4 - seda://forwardAlarm] org.apache.qpid.client.AMQConnection: Connected with ProtocolHandler Version:0-91
2016-06-12 19:00:38,727 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [Camel (amqpAlarmNorthbounderCamelContext) thread #4 - seda://forwardAlarm] org.apache.qpid.client.AMQConnectionDelegate_8_0: Write channel open frame for channel id 1
2016-06-12 19:00:38,727 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [Camel (amqpAlarmNorthbounderCamelContext) thread #4 - seda://forwardAlarm] org.apache.qpid.client.AMQSession: Created session:org.apache.qpid.client.AMQSession_0_8#179483f0
2016-06-12 19:00:38,728 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [IoReceiver - localhost/127.0.0.1:5672] org.apache.qpid.client.protocol.AMQProtocolHandler: (1404676419)Method frame received: [ChannelOpenOkBodyImpl: channelId=null]
2016-06-12 19:00:38,728 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [IoReceiver - localhost/127.0.0.1:5672] org.apache.qpid.client.protocol.AMQProtocolHandler: (1404676419)Method frame received: [BasicQosOkBodyImpl: ]
2016-06-12 19:00:38,731 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [Camel (amqpAlarmNorthbounderCamelContext) thread #4 - seda://forwardAlarm] org.apache.qpid.client.AMQDestination: Based on onms3/Simon;{'create':'always','node':{'type':'topic'} } the selected destination syntax is ADDR
2016-06-12 19:00:38,731 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [Camel (amqpAlarmNorthbounderCamelContext) thread #4 - seda://forwardAlarm] org.apache.qpid.client.AMQSession: Closing session: org.apache.qpid.client.AMQSession_0_8#179483f0
2016-06-12 19:00:38,731 DEBUG org.apache.servicemix.bundles.qpid:0.28.0.1(212) [Camel (amqpAlarmNorthbounderCamelContext) thread #4 - seda://forwardAlarm] org.apache.qpid.client.protocol.AMQProtocolSession: closeSession called on protocol session for session 1
It closes the session before writing any information.
When I do essentially the same thing from the external QPID client, executed as follows:
#!/bin/bash
rm log.out
java -Dqpid.amqp.version=0-91 -Dlog4j.debug -Dlog4j.configuration=file:./log4j.properties -cp "client/example/target/classes/:client/example/target/dependency/*:slf4j-1.7.21/slf4j-log4j12-1.7.21.jar:apache-log4j-1.2.17/log4j-1.2.17.jar" \
org.apache.qpid.example.ListSender
I get this:
142 [main] DEBUG org.apache.qpid.client.AMQConnection - Are we connected:true
142 [main] DEBUG org.apache.qpid.client.AMQConnection - Connected with ProtocolHandler Version:0-91
146 [main] DEBUG org.apache.qpid.client.AMQConnectionDelegate_8_0 - Write channel open frame for channel id 1
162 [main] DEBUG org.apache.qpid.client.AMQSession - Created session:org.apache.qpid.client.AMQSession_0_8#6bdf28bb
164 [IoReceiver - localhost/127.0.0.1:5672] DEBUG org.apache.qpid.client.protocol.AMQProtocolHandler - (333274164)Method frame received: [ChannelOpenOkBody]
165 [IoReceiver - localhost/127.0.0.1:5672] DEBUG org.apache.qpid.client.protocol.AMQProtocolHandler - (333274164)Method frame received: [BasicQosOkBodyImpl: ]
173 [main] DEBUG org.apache.qpid.client.AMQDestination - Based on onms3/Simon;{create: always, node:{type: topic } } the selected destination syntax is ADDR
177 [main] DEBUG org.apache.qpid.framing.FieldTable - FieldTable::writeToBuffer: Writing encoded length of 0...
178 [IoReceiver - localhost/127.0.0.1:5672] DEBUG org.apache.qpid.client.protocol.AMQProtocolHandler - (333274164)Method frame received: [ExchangeDeclareOkBodyImpl: ]
179 [IoReceiver - localhost/127.0.0.1:5672] DEBUG org.apache.qpid.client.protocol.AMQProtocolHandler - (333274164)Method frame received: [ExchangeDeclareOkBodyImpl: ]
179 [IoReceiver - localhost/127.0.0.1:5672] DEBUG org.apache.qpid.client.protocol.AMQProtocolHandler - (333274164)Method frame received: [ExchangeDeclareOkBodyImpl: ]
180 [main] DEBUG org.apache.qpid.client.BasicMessageProducer_0_8 - MessageProducer org.apache.qpid.client.BasicMessageProducer_0_8#1936f0f5 using publish mode : ASYNC_PUBLISH_ALL
190 [main] DEBUG org.apache.qpid.client.BasicMessageProducer_0_8 - Sending content body frames to 'onms3'/'Simon'; {
'create': 'always',
'node': {
'type': 'topic'
}
}
190 [main] DEBUG org.apache.qpid.client.BasicMessageProducer_0_8 - Sending content header frame to 'onms3'/'Simon'; {
'create': 'always',
'node': {
'type': 'topic'
}
}
190 [main] DEBUG org.apache.qpid.framing.FieldTable - FieldTable::writeToBuffer: Writing encoded length of 90...
191 [main] DEBUG org.apache.qpid.framing.FieldTable - {Id=[INT: 987654321], name=[LONG_STRING: WidgetSimon], price=[DOUBLE: 0.99], qpid.subject=[LONG_STRING: Simon], JMS_QPID_DESTTYPE=[INT: 2]}
192 [main] DEBUG org.apache.qpid.client.AMQSession - Closing session: org.apache.qpid.client.AMQSession_0_8#6bdf28bb
192 [main] DEBUG org.apache.qpid.client.protocol.AMQProtocolSession - closeSession called on protocol session for session 1
194 [IoReceiver - localhost/127.0.0.1:5672] DEBUG org.apache.qpid.client.protocol.AMQProtocolHandler - (333274164)Method frame received: [ChannelCloseOkBody]
194 [IoReceiver - localhost/127.0.0.1:5672] INFO org.apache.qpid.client.handler.ChannelCloseOkMethodHandler - Received channel-close-ok for channel-id 1
195 [IoReceiver - localhost/127.0.0.1:5672] DEBUG org.apache.qpid.client.protocol.AMQProtocolHandler - (333274164)Method frame received: [ConnectionCloseOkBody]
196 [main] DEBUG org.apache.qpid.client.protocol.AMQProtocolHandler - Session closed called by client
ListSender.java is as follows:
package org.apache.qpid.example;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.qpid.client.AMQAnyDestination;
import org.apache.qpid.client.AMQConnection;
import org.apache.qpid.framing.AMQShortString;
import org.apache.qpid.jms.ListMessage;

public class ListSender {
    public static void main(String[] args) throws Exception
    {
        Connection connection =
            new AMQConnection("amqp://simon:simon#localhost/test?brokerlist='tcp://localhost:5672'");
        AMQShortString a1 = new AMQShortString("");
        AMQShortString a2 = new AMQShortString("");
        AMQShortString[] bindvars = new AMQShortString[]{a1, a2};
        boolean is_durable = true;
        /*
        Destination queue = new AMQAnyDestination( new AMQShortString("onms2"),
                new AMQShortString("direct"),
                new AMQShortString("Simon"),
                true,
                true,
                new AMQShortString(""),
                false,
                bindvars);
        */
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        //Destination queue = new AMQAnyDestination("onms3/Simon");
        Destination queue = new AMQAnyDestination("onms3/Simon;{create: always, node:{type: topic } }");
        //Destination queue = new AMQAnyDestination("onms3/Simon;%7Bcreate%3A%20always%2C%20node%3A%7Btype%3A%20topic%20%7D%20%7D");
        //Destination queue = new AMQAnyDestination("amqp:OpenNMSExchange/Taylor; {create: always}");
        //Destination queue = new AMQAnyDestination("OpenNMSExchange; {create: always}");
        MessageProducer producer = session.createProducer(queue);
        ListMessage m = ((org.apache.qpid.jms.Session) session).createListMessage();
        m.setIntProperty("Id", 987654321);
        m.setStringProperty("name", "WidgetSimon");
        m.setDoubleProperty("price", 0.99);
        List<String> colors = new ArrayList<String>();
        colors.add("red");
        colors.add("green");
        colors.add("white");
        m.add(colors);
        Map<String, Double> dimensions = new HashMap<String, Double>();
        dimensions.put("length", 10.2);
        dimensions.put("width", 5.1);
        dimensions.put("depth", 2.0);
        m.add(dimensions);
        List<List<Integer>> parts = new ArrayList<List<Integer>>();
        parts.add(Arrays.asList(new Integer[] {1, 2, 5}));
        parts.add(Arrays.asList(new Integer[] {8, 2, 5}));
        m.add(parts);
        Map<String, Object> specs = new HashMap<String, Object>();
        specs.put("colours", colors);
        specs.put("dimensions", dimensions);
        specs.put("parts", parts);
        m.add(specs);
        producer.send((Message) m);
        System.out.println("Sent: " + m);
        connection.close();
    }
}
My assumption was that this was some sort of connectivity issue to the AMQP server. Having faced a number of issues whilst troubleshooting this, whenever I hit a problem with the external jar it was clear from the logs what the issue was; that is not the case here.
Is this an issue with OpenNMS?
Does anyone have this working successfully?
Any ideas?
Cheers
Simon
Following guidance from the OpenNMS list, I decided to install the QPID Broker v6.0.3 in place of RabbitMQ, and messages are now flowing with no problems.

Cassandra Java Driver Cold to Hot in 500ms?

The first (cold) use of a Cluster and Session against a local data source (Cassandra) takes 640 ms. Any additional connect takes 80 to 100 ms, so the overhead of the first connect is about 500+ ms. Is that normal, and is there anything I can do to get this figure down somehow? I use a T410 (i5, 2.5 GHz).
[Update]
23:27:11.453 [main] DEBUG c.d.driver.core.SystemProperties - com.datastax.driver.NEW_NODE_DELAY_SECONDS is undefined, using default value 1
23:27:11.460 [main] DEBUG c.d.driver.core.SystemProperties - com.datastax.driver.NON_BLOCKING_EXECUTOR_SIZE is undefined, using default value 4
23:27:11.463 [main] DEBUG c.d.driver.core.SystemProperties - com.datastax.driver.NOTIF_LOCK_TIMEOUT_SECONDS is undefined, using default value 60
23:27:11.607 [main] DEBUG com.datastax.driver.core.Cluster - Starting new cluster with contact points [localhost/127.0.0.1:9042]
23:27:11.905 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=false] Transport initialized and ready
23:27:11.906 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
23:27:11.969 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing schema
23:27:12.016 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
23:27:12.051 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Successfully connected to localhost/127.0.0.1:9042
23:27:12.052 [main] INFO c.d.d.c.p.DCAwareRoundRobinPolicy - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
23:27:12.053 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host localhost/127.0.0.1:9042 added
23:27:12.076 [Cassandra Java Driver worker-0] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-2, inFlight=0, closed=false] Transport initialized and ready
23:27:12.077 [Cassandra Java Driver worker-0] DEBUG com.datastax.driver.core.Session - Added connection pool for localhost/127.0.0.1:9042
23:27:12.097 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-2, inFlight=0, closed=true] closing connection
23:27:12.103 [main] DEBUG com.datastax.driver.core.Cluster - Shutting down
23:27:12.105 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=true] closing connection
23:27:12.123 [main] DEBUG com.datastax.driver.core.Cluster - Starting new cluster with contact points [/127.0.0.1:9042]
23:27:12.132 [main] DEBUG com.datastax.driver.core.Connection - Connection[/127.0.0.1:9042-1, inFlight=0, closed=false] Transport initialized and ready
23:27:12.132 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
23:27:12.138 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing schema
23:27:12.168 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
23:27:12.192 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Successfully connected to /127.0.0.1:9042
23:27:12.192 [main] INFO c.d.d.c.p.DCAwareRoundRobinPolicy - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
23:27:12.192 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host /127.0.0.1:9042 added
23:27:12.201 [Cassandra Java Driver worker-0] DEBUG com.datastax.driver.core.Connection - Connection[/127.0.0.1:9042-2, inFlight=0, closed=false] Transport initialized and ready
23:27:12.202 [Cassandra Java Driver worker-0] DEBUG com.datastax.driver.core.Session - Added connection pool for /127.0.0.1:9042
As one can see, the first connection attempt takes up to 600 ms or more, depending on how one reads the figures.
My guess is this has to do with connection initialization. In all currently released versions of the Java driver, connections are initialized one after another, synchronously. Fortunately, individual host pools are initialized in parallel, but the connections within a pool are not. If you are using 2.0.9, which has a default of 8 core connections, that could explain why you are seeing slow initialization times. Also, if you are using password authentication, that will slow things down quite a bit as well (from ~0-10 ms per connection to ~60-120 ms).
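If the core connection count is indeed the culprit, one option worth trying is to lower the core connections per host so fewer of them are initialized synchronously on first connect. A sketch against the 2.0.x API; the contact point and pool size are illustrative:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;
import com.datastax.driver.core.Session;

public class FastColdStart {
    public static void main(String[] args) {
        // fewer core connections per host means fewer synchronous
        // connection initializations during the cold first connect
        PoolingOptions pooling = new PoolingOptions()
                .setCoreConnectionsPerHost(HostDistance.LOCAL, 1);

        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPoolingOptions(pooling)
                .build();

        Session session = cluster.connect(); // the cold connect measured above

        session.close();
        cluster.close();
    }
}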
In Java driver 2.0.10, which will be released soon, all connections are initialized in parallel, which greatly improves Session initialization. For more information see JAVA-701.
