Spring WebClient downloading PDF gives an HTTP error

While using Spring's WebClient to retrieve a PDF file from a REST API, I get an error.
Here's the code with the WebClient:
return WebClient.create().get()
        .uri(builder.build().toUri())
        .accept(MediaType.APPLICATION_PDF)
        .exchange()
        .flatMap(response -> response.bodyToMono(byte[].class))
        .block();
And I'm getting this error from the REST API that serves the file:
03-02-2021 14:32:01.180 [http-nio-8080-exec-6] DEBUG o.s.w.s.m.m.a.HttpEntityMethodProcessor.writeWithMessageConverters - Found 'Content-Type:application/pdf' in response
03-02-2021 14:32:01.181 [http-nio-8080-exec-6] DEBUG o.s.w.s.m.m.a.HttpEntityMethodProcessor.traceDebug - Writing [InputStream resource [resource loaded through InputStream]]
03-02-2021 14:32:01.185 [http-nio-8080-exec-6] DEBUG o.s.o.j.s.OpenEntityManagerInViewInterceptor.afterCompletion - Closing JPA EntityManager in OpenEntityManagerInViewInterceptor
03-02-2021 14:32:01.186 [http-nio-8080-exec-6] DEBUG o.s.web.servlet.DispatcherServlet.logResult - Completed 200 OK
03-02-2021 14:32:01.187 [http-nio-8080-exec-6] DEBUG o.a.t.util.net.SocketWrapperBase.log - Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper#d1d3000:org.apache.tomcat.util.net.NioChannel#4b5f2cea:java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:57797]], Read from buffer: [0]
03-02-2021 14:32:01.187 [http-nio-8080-exec-6] DEBUG o.a.coyote.http11.Http11Processor.log - Error parsing HTTP request header
java.io.EOFException: null
at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.fillReadBuffer(NioEndpoint.java:1230)
at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.read(NioEndpoint.java:1140)
at org.apache.coyote.http11.Http11InputBuffer.fill(Http11InputBuffer.java:780)
at org.apache.coyote.http11.Http11InputBuffer.parseRequestLine(Http11InputBuffer.java:356)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:260)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1589)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:834)
03-02-2021 14:32:01.188 [http-nio-8080-exec-6] DEBUG o.a.coyote.http11.Http11Processor.log - Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper#d1d3000:org.apache.tomcat.util.net.NioChannel#4b5f2cea:java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:57797]], Status in: [OPEN_READ], State out: [CLOSED]
03-02-2021 14:32:01.188 [http-nio-8080-exec-6] DEBUG o.a.coyote.http11.Http11NioProtocol.log - Pushed Processor [org.apache.coyote.http11.Http11Processor#1c06182a]
03-02-2021 14:32:01.188 [http-nio-8080-exec-6] DEBUG o.a.tomcat.util.threads.LimitLatch.log - Counting down[http-nio-8080-exec-6] latch=2
03-02-2021 14:32:01.188 [http-nio-8080-exec-6] DEBUG o.apache.tomcat.util.net.NioEndpoint.log - Calling [org.apache.tomcat.util.net.NioEndpoint#6df434e4].closeSocket([org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper#d1d3000:org.apache.tomcat.util.net.NioChannel#4b5f2cea:java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:57797]])
03-02-2021 14:32:08.051 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool.logPoolState - HikariPool-1 - Pool stats (total=10, active=0, idle=10, waiting=0)
03-02-2021 14:32:08.051 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool.fillPool - HikariPool-1 - Fill pool skipped, pool is at sufficient level.
03-02-2021 14:32:31.180 [Catalina-utility-2] DEBUG o.a.catalina.session.ManagerBase.log - Start expire sessions StandardManager at 1612359151180 sessioncount 0
03-02-2021 14:32:31.180 [Catalina-utility-2] DEBUG o.a.catalina.session.ManagerBase.log - End expire sessions StandardManager processingTime 0 expired sessions: 0
03-02-2021 14:32:34.196 [http-nio-8080-exec-1] DEBUG o.a.coyote.http11.Http11NioProtocol.log - Processing socket [org.apache.tomcat.util.net.NioChannel#6a7679d5:java.nio.channels.SocketChannel[connected local=/0:0:0:0:0:0:0:1:8080 remote=/0:0:0:0:0:0:0:1:57771]] with status [ERROR]
03-02-2021 14:32:34.196 [http-nio-8080-exec-1] DEBUG o.a.coyote.http11.Http11NioProtocol.log - Found processor [null] for socket [org.apache.tomcat.util.net.NioChannel#6a7679d5:java.nio.channels.SocketChannel[connected local=/0:0:0:0:0:0:0:1:8080 remote=/0:0:0:0:0:0:0:1:57771]]
03-02-2021 14:32:34.196 [http-nio-8080-exec-1] DEBUG o.a.tomcat.util.threads.LimitLatch.log - Counting down[http-nio-8080-exec-1] latch=1
03-02-2021 14:32:34.196 [http-nio-8080-exec-1] DEBUG o.apache.tomcat.util.net.NioEndpoint.log - Calling [org.apache.tomcat.util.net.NioEndpoint#6df434e4].closeSocket([org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper#1bf3167e:org.apache.tomcat.util.net.NioChannel#6a7679d5:java.nio.channels.SocketChannel[connected local=/0:0:0:0:0:0:0:1:8080 remote=/0:0:0:0:0:0:0:1:57771]])
Any idea where the problem is?
Thanks
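For reference, the same download can be written with retrieve() instead of exchange(); exchange() makes the caller responsible for consuming or releasing the response body and was later deprecated for that reason. A minimal sketch, assuming the same builder as above:

// Hedged sketch: same request via retrieve(), which consumes the body for you.
return WebClient.create().get()
        .uri(builder.build().toUri())      // builder assumed from the surrounding code
        .accept(MediaType.APPLICATION_PDF)
        .retrieve()
        .bodyToMono(byte[].class)          // buffers the whole PDF in memory
        .block();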

Related

Reactor Netty websocket channel closed prematurely

I have a long-running websocket client implemented in Java Spring Reactor with Netty (spring-boot-starter-parent 2.5.3), targeting the Binance ws API.
According to the specs, the websocket channel is kept open for 24 hours.
The websocket is unexpectedly and prematurely closed after around 3 minutes:
16:50:48.418 [main] DEBUG reactor.util.Loggers - Using Slf4j logging framework
16:50:48.434 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
16:50:48.436 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
16:50:48.437 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 14
16:50:48.438 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
16:50:48.438 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
16:50:48.438 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
16:50:48.439 [main] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: unavailable: Reflective setAccessible(true) disabled
16:50:48.439 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
16:50:48.440 [main] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable: class io.netty.util.internal.PlatformDependent0$6 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module #1efbd816
16:50:48.440 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): unavailable
16:50:48.440 [main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available
16:50:48.448 [main] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 8388608000 bytes (maybe)
16:50:48.448 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
16:50:48.448 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
16:50:48.449 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: -1 bytes
16:50:48.450 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: -1
16:50:48.450 [main] DEBUG io.netty.util.internal.CleanerJava9 - java.nio.ByteBuffer.cleaner(): available
16:50:48.450 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
16:50:48.460 [main] DEBUG reactor.netty.tcp.TcpResources - [http] resources will use the default LoopResources: DefaultLoopResources {prefix=reactor-http, daemon=true, selectCount=8, workerCount=8}
16:50:48.460 [main] DEBUG reactor.netty.tcp.TcpResources - [http] resources will use the default ConnectionProvider: reactor.netty.resources.DefaultPooledConnectionProvider#192b07fd
16:50:48.485 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
16:50:48.486 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
16:50:48.581 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
16:50:48.581 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
16:50:48.582 [main] DEBUG io.netty.util.NetUtilInitializations - Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
16:50:48.583 [main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 128
16:50:48.590 [main] DEBUG org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient - Connecting to wss://stream.binance.com:9443/ws
16:50:48.601 [main] DEBUG io.netty.handler.ssl.OpenSsl - netty-tcnative not in the classpath; OpenSslEngine will be unavailable.
16:50:48.712 [main] DEBUG io.netty.handler.ssl.JdkSslContext - Default protocols (JDK): [TLSv1.3, TLSv1.2, TLSv1.1, TLSv1]
16:50:48.712 [main] DEBUG io.netty.handler.ssl.JdkSslContext - Default cipher suites (JDK): [TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384]
16:50:48.720 [main] DEBUG reactor.netty.resources.DefaultLoopIOUring - Default io_uring support : false
16:50:48.724 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.workdir: /tmp (io.netty.tmpdir)
16:50:48.725 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.deleteLibAfterLoading: true
16:50:48.725 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.tryPatchShadedId: true
16:50:48.730 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - Successfully loaded the library /tmp/libnetty_transport_native_epoll_x86_6410359104745093945181.so
16:50:48.731 [main] DEBUG reactor.netty.resources.DefaultLoopEpoll - Default Epoll support : true
16:50:48.734 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 16
16:50:48.742 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
16:50:48.743 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
16:50:48.749 [main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
16:50:48.768 [main] DEBUG reactor.netty.resources.PooledConnectionProvider - Creating a new [http] client pool [PoolFactory{evictionInterval=PT0S, leasingStrategy=fifo, maxConnections=500, maxIdleTime=-1, maxLifeTime=-1, metricsEnabled=false, pendingAcquireMaxCount=1000, pendingAcquireTimeout=45000}] for [stream.binance.com/<unresolved>:9443]
16:50:48.798 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 27223 (auto-detected)
16:50:48.799 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 28:16:ad:ff:fe:2b:7c:b7 (auto-detected)
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 16
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 16
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimIntervalMillis: 0
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: true
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
16:50:48.813 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
16:50:48.813 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
16:50:48.814 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
16:50:48.828 [reactor-http-epoll-2] DEBUG reactor.netty.resources.PooledConnectionProvider - [id:d962b126] Created a new pooled channel, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
16:50:48.845 [reactor-http-epoll-2] DEBUG reactor.netty.tcp.SslProvider - [id:d962b126] SSL enabled using engine sun.security.ssl.SSLEngineImpl#55608030 and SNI stream.binance.com/<unresolved>:9443
16:50:48.852 [reactor-http-epoll-2] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
16:50:48.853 [reactor-http-epoll-2] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
16:50:48.853 [reactor-http-epoll-2] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#3ba51dc6
16:50:48.854 [reactor-http-epoll-2] DEBUG reactor.netty.transport.TransportConfig - [id:d962b126] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
16:50:48.866 [reactor-http-epoll-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#1fb356c5
16:50:48.867 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xdd7103d7] WRITE: UDP, [11524: /127.0.0.53:53], DefaultDnsQuestion(stream.binance.com. IN A)
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.delayedQueue.ratio: 8
16:50:48.878 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xdd7103d7] WRITE: UDP, [33872: /127.0.0.53:53], DefaultDnsQuestion(stream.binance.com. IN AAAA)
16:50:48.904 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0xdd7103d7] RECEIVED: UDP [11524: /127.0.0.53:53], DatagramDnsResponse(from: /127.0.0.53:53, 11524, QUERY(0), NoError(0), RD RA)
DefaultDnsQuestion(stream.binance.com. IN A)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(OPT flags:0 udp:65494 0B)
16:50:48.907 [reactor-http-epoll-2] DEBUG reactor.netty.transport.TransportConnector - [id:d962b126] Connecting to [stream.binance.com/52.199.12.133:9443].
16:50:48.907 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0xdd7103d7] RECEIVED: UDP [33872: /127.0.0.53:53], DatagramDnsResponse(from: /127.0.0.53:53, 33872, QUERY(0), NoError(0), RD RA)
DefaultDnsQuestion(stream.binance.com. IN AAAA)
DefaultDnsRawRecord(OPT flags:0 udp:65494 0B)
16:50:49.162 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Registering pool release on close event for channel
16:50:49.163 [reactor-http-epoll-2] DEBUG reactor.netty.resources.PooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Channel connected, now: 1 active connections, 0 inactive connections and 0 pending acquire requests.
16:50:49.807 [reactor-http-epoll-2] DEBUG io.netty.handler.ssl.SslHandler - [id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
16:50:49.808 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}, [connected])
16:50:49.826 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(GET{uri=/, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}}, [configured])
16:50:49.826 [reactor-http-epoll-2] DEBUG reactor.netty.http.client.HttpClientConnect - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Handler is being applied: {uri=wss://stream.binance.com:9443/ws, method=GET}
16:50:49.830 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(GET{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}}, [request_prepared])
16:50:49.839 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Added encoder [reactor.left.httpAggregator] at the beginning of the user pipeline, full pipeline: [reactor.left.sslHandler, reactor.left.httpCodec, reactor.left.httpAggregator, reactor.right.reactiveBridge, DefaultChannelPipeline$TailContext#0]
16:50:49.839 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Non Removed handler: reactor.left.httpMetricsHandler, context: null, pipeline: DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.left.httpAggregator = io.netty.handler.codec.http.HttpObjectAggregator), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
16:50:49.840 [reactor-http-epoll-2] DEBUG reactor.netty.http.client.HttpClientOperations - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Attempting to perform websocket handshake with wss://stream.binance.com:9443/ws
16:50:49.846 [reactor-http-epoll-2] DEBUG io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13 - WebSocket version 13 client handshake key: 7FNVb427OHllyiM2Clg//g==, expected response: iTvQFIKtv7xyyXvmEAooh8NZPVw=
16:50:50.122 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(ws{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}}, [response_received])
16:50:50.135 [reactor-http-epoll-2] DEBUG org.springframework.web.reactive.socket.adapter.ReactorNettyWebSocketSession - [36eb4d6b] Session id "36eb4d6b" for wss://stream.binance.com:9443/ws
16:50:50.135 [reactor-http-epoll-2] DEBUG org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient - Started session '36eb4d6b' for wss://stream.binance.com:9443/ws
16:50:50.147 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Added decoder [reactor.left.wsFrameAggregator] at the end of the user pipeline, full pipeline: [reactor.left.sslHandler, reactor.left.httpCodec, ws-decoder, ws-encoder, reactor.left.wsFrameAggregator, reactor.right.reactiveBridge, DefaultChannelPipeline$TailContext#0]
16:50:50.149 [reactor-http-epoll-2] DEBUG reactor.netty.channel.FluxReceive - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] FluxReceive{pending=0, cancelled=false, inboundDone=false, inboundError=null}: subscribing inbound receiver
16:50:50.150 [reactor-http-epoll-2] INFO TRACE - onSubscribe(FluxMap.MapSubscriber)
16:50:50.150 [reactor-http-epoll-2] INFO TRACE - request(256)
16:50:50.411 [reactor-http-epoll-2] INFO TRACE - onNext(evt)
16:50:50.413 [reactor-http-epoll-2] INFO TRACE - request(1)
...
16:52:16.652 [reactor-http-epoll-2] INFO TRACE - onNext(evt)
16:52:16.652 [reactor-http-epoll-2] INFO TRACE - request(1)
16:52:17.168 [reactor-http-epoll-2] DEBUG reactor.netty.resources.PooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] Channel closed, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
16:52:17.169 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] Non Removed handler: reactor.left.httpAggregator, context: null, pipeline: DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (ws-decoder = io.netty.handler.codec.http.websocketx.WebSocket13FrameDecoder), (ws-encoder = io.netty.handler.codec.http.websocketx.WebSocket13FrameEncoder), (reactor.left.wsFrameAggregator = io.netty.handler.codec.http.websocketx.WebSocketFrameAggregator), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
A completed
A terminated
16:52:17.172 [reactor-http-epoll-2] INFO TRACE - onComplete()
B completed
B terminated
C success
C terminated
16:52:17.177 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] onStateChange(ws{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443]}}, [response_completed])
16:52:17.177 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] onStateChange(ws{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443]}}, [disconnecting])
I tried to reproduce the issue using another technology (javascript), and there everything runs fine.
It seems that the channel is closed, so I tried to tune the ChannelOptions at the TcpClient level... still no luck!
// Note: TcpClient is immutable; each option(...) call returns a new instance,
// so the calls must be chained (or reassigned) to take effect.
TcpClient wsTcp = TcpClient.create()
        .option(ChannelOption.AUTO_CLOSE, Boolean.FALSE)
        .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, Integer.MAX_VALUE)
        .option(ChannelOption.AUTO_READ, Boolean.TRUE)
        .option(ChannelOption.SO_KEEPALIVE, Boolean.TRUE)
        .option(ChannelOption.SO_TIMEOUT, Integer.MAX_VALUE);
I have provided a Java sample to reproduce the issue:
package test;

import java.net.URI;
import java.util.concurrent.CountDownLatch;

import org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient;

import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class WsTest {

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        ReactorNettyWebSocketClient wsclient = new ReactorNettyWebSocketClient();
        wsclient.setMaxFramePayloadLength(Integer.MAX_VALUE);

        EmitterProcessor<String> output = EmitterProcessor.create();
        Mono<Void> execMono = wsclient.execute(URI.create("wss://stream.binance.com:9443/ws"),
                session -> session.send(Flux.just(session.textMessage("{\"method\": \"SUBSCRIBE\",\"params\":[\"!ticker@arr\"],\"id\": 1}")))
                        .thenMany(session
                                .receive()
                                .doOnCancel(() -> System.out.println("A cancelled"))
                                .doOnComplete(() -> System.out.println("A completed"))
                                .doOnTerminate(() -> System.out.println("A terminated"))
                                .map(x -> "evt")
                                .log("TRACE")
                                .subscribeWith(output).then())
                        .then());

        output.doOnCancel(() -> System.out.println("B cancelled"))
                .doOnComplete(() -> System.out.println("B completed"))
                .doOnTerminate(() -> System.out.println("B terminated"))
                .doOnSubscribe(s -> execMono
                        .doOnCancel(() -> System.out.println("C cancelled"))
                        .doOnSuccess(x -> System.out.println("C success"))
                        .doOnTerminate(() -> System.out.println("C terminated"))
                        .subscribe())
                .subscribe();

        latch.await();
    }
}
I don't understand why I get completed/terminated events from the ReactorNettyWebSocketClient WebSocketHandler.
Thank you for your help.
I finally managed to find the root cause.
The underlying error was a websocket close with status 1006: "Unexpected Status of SSLEngineResult after an unwrap() operation".
Status 1006 means the connection was closed abnormally by the client, as documented in RFC 6455: https://datatracker.ietf.org/doc/html/rfc6455#section-7.4.1
1006 is a reserved value and MUST NOT be set as a status code in a
Close control frame by an endpoint. It is designated for use in
applications expecting a status code to indicate that the
connection was closed abnormally, e.g., without sending or
receiving a Close control frame.
At that point I switched from a Wi-Fi connection to a LAN connection, and the issue vanished immediately.
My Wi-Fi router was not able to handle the huge payload correctly.
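For anyone debugging a similar drop, the close code can be logged directly rather than inferred from DEBUG output. A small sketch, assuming Spring WebFlux 5.3+ where WebSocketSession exposes closeStatus():

// Hedged sketch: print the close status (e.g. 1006) when the channel goes down.
Mono<Void> execMono = wsclient.execute(URI.create("wss://stream.binance.com:9443/ws"),
        session -> {
            session.closeStatus()
                    .doOnNext(status -> System.out.println("Closed with status: " + status))
                    .subscribe();
            return session.receive().then();
        });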

Apache Storm 1.1.0 not running and giving Unable to read additional data from client sessionid

I am running a simple Hello World kind of application on Apache Storm 1.1.0. The application has a random-integer spout and a bolt which prints the tuple output, but somehow I am not able to get it working on my Windows system.
I am new to Apache Storm and following a tutorial. I have looked for answers on Stack Overflow, but I was not able to find any solved question regarding this.
Following is my runTopology code:
public static void runTopology() {
    //String filePath = "./src/main/resources/operations.txt";
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("randomNumberSpout", new RandomIntSpout());
    builder.setBolt("printingBolt", new PrintingBolt()).shuffleGrouping("randomNumberSpout");

    Config config = new Config();
    config.setDebug(true);

    LocalCluster cluster = new LocalCluster();
    try {
        cluster.submitTopology("Test", config, builder.createTopology());
    } finally {
        cluster.shutdown();
    }
}
Bolt code:
public class PrintingBolt extends BaseBasicBolt {

    private static final long serialVersionUID = 1L;

    public void execute(Tuple tuple, BasicOutputCollector basicOutputCollector) {
        System.out.println("Printing Tupple!!!!");
        System.out.println(tuple);
        System.out.println("Tupple processed " + tuple.getInteger(1));
        basicOutputCollector.emit(new Values(tuple.getInteger(1)));
    }

    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declare(new Fields("TestOutput"));
    }
}
Spout code:
public class RandomIntSpout extends BaseRichSpout {

    private static final long serialVersionUID = 1L;

    private Random random;
    private SpoutOutputCollector outputCollector;

    /*@Override
    public void open(Map<String,Object> map, TopologyContext topologyContext,
                     SpoutOutputCollector spoutOutputCollector) {
        random = new Random();
        outputCollector = spoutOutputCollector;
    }*/

    public void nextTuple() {
        Utils.sleep(1000);
        outputCollector.emit(new Values(random.nextInt(), System.currentTimeMillis()));
    }

    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declare(new Fields("randomInt", "timestamp"));
    }

    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        random = new Random();
        outputCollector = collector;
    }
}
I can provide the rest of the code as well, but I don't think that will be required; if it is, please mention it in the comments and I will provide it.
I get the following error whenever I try to run the application:
10620 [main] INFO o.a.s.s.o.a.z.ZooKeeper - Initiating client connection, connectString=localhost:2000/storm sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState#31b0f02
10625 [main-SendThread(0:0:0:0:0:0:0:1:2000)] INFO o.a.s.s.o.a.z.ClientCnxn - Opening socket connection to server 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2000. Will not attempt to authenticate using SASL (unknown error)
10627 [main-SendThread(0:0:0:0:0:0:0:1:2000)] INFO o.a.s.s.o.a.z.ClientCnxn - Socket connection established to 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2000, initiating session
10627 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxnFactory - Accepted socket connection from /0:0:0:0:0:0:0:1:56905
10628 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.ZooKeeperServer - Client attempting to establish new session at /0:0:0:0:0:0:0:1:56905
10631 [main-SendThread(0:0:0:0:0:0:0:1:2000)] INFO o.a.s.s.o.a.z.ClientCnxn - Session establishment complete on server 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:2000, sessionid = 0x16a8e5abd97000d, negotiated timeout = 20000
10631 [SyncThread:0] INFO o.a.s.s.o.a.z.s.ZooKeeperServer - Established session 0x16a8e5abd97000d with negotiated timeout 20000 for client /0:0:0:0:0:0:0:1:56905
10632 [main-EventThread] INFO o.a.s.s.o.a.c.f.s.ConnectionStateManager - State change: CONNECTED
10635 [ProcessThread(sid:0 cport:-1):] INFO o.a.s.s.o.a.z.s.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x16a8e5abd97000d type:create cxid:0x2 zxid:0x26 txntype:-1 reqpath:n/a Error Path:/storm/blobstoremaxkeysequencenumber Error:KeeperErrorCode = NoNode for /storm/blobstoremaxkeysequencenumber
10655 [Curator-Framework-0] INFO o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl - backgroundOperationsLoop exiting
10657 [ProcessThread(sid:0 cport:-1):] INFO o.a.s.s.o.a.z.s.PrepRequestProcessor - Processed session termination for sessionid: 0x16a8e5abd97000d
10659 [main-EventThread] INFO o.a.s.s.o.a.z.ClientCnxn - EventThread shut down
10659 [main] INFO o.a.s.s.o.a.z.ZooKeeper - Session: 0x16a8e5abd97000d closed
10661 [main] INFO o.a.s.cluster - setup-path/blobstore/Test-1-1557166474-stormconf.ser/IBMT450PC053RLV.Corp.CVS.com:6627-1
10660 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN o.a.s.s.o.a.z.s.NIOServerCnxn - caught end of stream exception
org.apache.storm.shade.org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x16a8e5abd97000d, likely client has closed socket
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) [storm-core-1.1.0.jar:1.1.0]
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [storm-core-1.1.0.jar:1.1.0]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
10671 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxn - Closed socket connection for client /0:0:0:0:0:0:0:1:56905 which had sessionid 0x16a8e5abd97000d
10746 [main] INFO o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl - Starting
10747 [main] INFO o.a.s.s.o.a.z.ZooKeeper - Initiating client connection, connectString=localhost:2000/storm sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState#73893ec1
10755 [main-SendThread(127.0.0.1:2000)] INFO o.a.s.s.o.a.z.ClientCnxn - Opening socket connection to server 127.0.0.1/127.0.0.1:2000. Will not attempt to authenticate using SASL (unknown error)
10756 [main-SendThread(127.0.0.1:2000)] INFO o.a.s.s.o.a.z.ClientCnxn - Socket connection established to 127.0.0.1/127.0.0.1:2000, initiating session
10758 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:56908
10759 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.ZooKeeperServer - Client attempting to establish new session at /127.0.0.1:56908
10766 [main-SendThread(127.0.0.1:2000)] INFO o.a.s.s.o.a.z.ClientCnxn - Session establishment complete on server 127.0.0.1/127.0.0.1:2000, sessionid = 0x16a8e5abd97000e, negotiated timeout = 20000
10766 [SyncThread:0] INFO o.a.s.s.o.a.z.s.ZooKeeperServer - Established session 0x16a8e5abd97000e with negotiated timeout 20000 for client /127.0.0.1:56908
10767 [main-EventThread] INFO o.a.s.s.o.a.c.f.s.ConnectionStateManager - State change: CONNECTED
10778 [Curator-Framework-0] INFO o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl - backgroundOperationsLoop exiting
10781 [ProcessThread(sid:0 cport:-1):] INFO o.a.s.s.o.a.z.s.PrepRequestProcessor - Processed session termination for sessionid: 0x16a8e5abd97000e
10785 [main] INFO o.a.s.s.o.a.z.ZooKeeper - Session: 0x16a8e5abd97000e closed
10785 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxn - Closed socket connection for client /127.0.0.1:56908 which had sessionid 0x16a8e5abd97000e
10785 [main] INFO o.a.s.cluster - setup-path/blobstore/Test-1-1557166474-stormcode.ser/IBMT450PC053RLV.Corp.CVS.com:6627-1
10786 [main-EventThread] INFO o.a.s.s.o.a.z.ClientCnxn - EventThread shut down
10821 [main] INFO o.a.s.d.nimbus - desired replication count 1 achieved, current-replication-count for conf key = 1, current-replication-count for code key = 1, current-replication-count for jar key = 1
11042 [main] INFO o.a.s.d.nimbus - Activating Test: Test-1-1557166474
11058 [main] INFO o.a.s.d.nimbus - Shutting down master
11064 [Curator-Framework-0] INFO o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl - backgroundOperationsLoop exiting
11066 [ProcessThread(sid:0 cport:-1):] INFO o.a.s.s.o.a.z.s.PrepRequestProcessor - Processed session termination for sessionid: 0x16a8e5abd970003
11068 [main-EventThread] INFO o.a.s.s.o.a.z.ClientCnxn - EventThread shut down
11069 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN o.a.s.s.o.a.z.s.NIOServerCnxn - caught end of stream exception
org.apache.storm.shade.org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x16a8e5abd970003, likely client has closed socket
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) [storm-core-1.1.0.jar:1.1.0]
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [storm-core-1.1.0.jar:1.1.0]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
11068 [main] INFO o.a.s.s.o.a.z.ZooKeeper - Session: 0x16a8e5abd970003 closed
11069 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxn - Closed socket connection for client /127.0.0.1:56875 which had sessionid 0x16a8e5abd970003
11069 [Curator-Framework-0] INFO o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl - backgroundOperationsLoop exiting
11072 [ProcessThread(sid:0 cport:-1):] INFO o.a.s.s.o.a.z.s.PrepRequestProcessor - Processed session termination for sessionid: 0x16a8e5abd970004
11074 [main-EventThread] INFO o.a.s.s.o.a.z.ClientCnxn - EventThread shut down
11075 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxn - Closed socket connection for client /0:0:0:0:0:0:0:1:56878 which had sessionid 0x16a8e5abd970004
11074 [main] INFO o.a.s.s.o.a.z.ZooKeeper - Session: 0x16a8e5abd970004 closed
11077 [Curator-Framework-0] INFO o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl - backgroundOperationsLoop exiting
11079 [ProcessThread(sid:0 cport:-1):] INFO o.a.s.s.o.a.z.s.PrepRequestProcessor - Processed session termination for sessionid: 0x16a8e5abd970000
11081 [main] INFO o.a.s.s.o.a.z.ZooKeeper - Session: 0x16a8e5abd970000 closed
11081 [main] INFO o.a.s.zookeeper - closing zookeeper connection of leader elector.
11082 [main-EventThread] INFO o.a.s.s.o.a.z.ClientCnxn - EventThread shut down
11082 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxn - Closed socket connection for client /127.0.0.1:56866 which had sessionid 0x16a8e5abd970000
11082 [Curator-Framework-0] INFO o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl - backgroundOperationsLoop exiting
11084 [ProcessThread(sid:0 cport:-1):] INFO o.a.s.s.o.a.z.s.PrepRequestProcessor - Processed session termination for sessionid: 0x16a8e5abd970001
11086 [main] INFO o.a.s.s.o.a.z.ZooKeeper - Session: 0x16a8e5abd970001 closed
11087 [main-EventThread] INFO o.a.s.s.o.a.z.ClientCnxn - EventThread shut down
11087 [main] INFO o.a.s.d.nimbus - Shut down master
11087 [Curator-Framework-0] INFO o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl - backgroundOperationsLoop exiting
11089 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN o.a.s.s.o.a.z.s.NIOServerCnxn - caught end of stream exception
org.apache.storm.shade.org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x16a8e5abd970001, likely client has closed socket
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) [storm-core-1.1.0.jar:1.1.0]
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [storm-core-1.1.0.jar:1.1.0]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
11089 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxn - Closed socket connection for client /127.0.0.1:56869 which had sessionid 0x16a8e5abd970001
11090 [ProcessThread(sid:0 cport:-1):] INFO o.a.s.s.o.a.z.s.PrepRequestProcessor - Processed session termination for sessionid: 0x16a8e5abd970006
11092 [main] INFO o.a.s.s.o.a.z.ZooKeeper - Session: 0x16a8e5abd970006 closed
11093 [Curator-Framework-0] INFO o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl - backgroundOperationsLoop exiting
11093 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN o.a.s.s.o.a.z.s.NIOServerCnxn - caught end of stream exception
org.apache.storm.shade.org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x16a8e5abd970006, likely client has closed socket
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) [storm-core-1.1.0.jar:1.1.0]
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [storm-core-1.1.0.jar:1.1.0]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
11094 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxn - Closed socket connection for client /127.0.0.1:56884 which had sessionid 0x16a8e5abd970006
11095 [main-EventThread] INFO o.a.s.s.o.a.z.ClientCnxn - EventThread shut down
11095 [ProcessThread(sid:0 cport:-1):] INFO o.a.s.s.o.a.z.s.PrepRequestProcessor - Processed session termination for sessionid: 0x16a8e5abd970008
11098 [main] INFO o.a.s.s.o.a.z.ZooKeeper - Session: 0x16a8e5abd970008 closed
11098 [main-EventThread] INFO o.a.s.s.o.a.z.ClientCnxn - EventThread shut down
11099 [main] INFO o.a.s.d.s.ReadClusterState - Setting Thread[SLOT_1024,5,main] assignment to null
11099 [main] INFO o.a.s.d.s.ReadClusterState - Setting Thread[SLOT_1025,5,main] assignment to null
11099 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN o.a.s.s.o.a.z.s.NIOServerCnxn - caught end of stream exception
org.apache.storm.shade.org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x16a8e5abd970008, likely client has closed socket
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) [storm-core-1.1.0.jar:1.1.0]
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [storm-core-1.1.0.jar:1.1.0]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
11099 [main] INFO o.a.s.d.s.ReadClusterState - Setting Thread[SLOT_1026,5,main] assignment to null
11099 [main] INFO o.a.s.d.s.ReadClusterState - Waiting for Thread[SLOT_1024,5,main] to be EMPTY, currently EMPTY
11099 [main] INFO o.a.s.d.s.ReadClusterState - Waiting for Thread[SLOT_1025,5,main] to be EMPTY, currently EMPTY
11099 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxn - Closed socket connection for client /0:0:0:0:0:0:0:1:56890 which had sessionid 0x16a8e5abd970008
11099 [main] INFO o.a.s.d.s.ReadClusterState - Waiting for Thread[SLOT_1026,5,main] to be EMPTY, currently EMPTY
11099 [main] INFO o.a.s.d.s.Supervisor - Shutting down supervisor 009e412c-0d39-400c-8302-08296524c703
11100 [Thread-10] INFO o.a.s.e.EventManagerImp - Event manager interrupted
11102 [Curator-Framework-0] INFO o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl - backgroundOperationsLoop exiting
11103 [ProcessThread(sid:0 cport:-1):] INFO o.a.s.s.o.a.z.s.PrepRequestProcessor - Processed session termination for sessionid: 0x16a8e5abd97000a
11105 [main] INFO o.a.s.s.o.a.z.ZooKeeper - Session: 0x16a8e5abd97000a closed
11105 [main] INFO o.a.s.d.s.ReadClusterState - Setting Thread[SLOT_1027,5,main] assignment to null
11105 [main-EventThread] INFO o.a.s.s.o.a.z.ClientCnxn - EventThread shut down
11105 [main] INFO o.a.s.d.s.ReadClusterState - Setting Thread[SLOT_1028,5,main] assignment to null
11105 [main] INFO o.a.s.d.s.ReadClusterState - Setting Thread[SLOT_1029,5,main] assignment to null
11106 [main] INFO o.a.s.d.s.ReadClusterState - Waiting for Thread[SLOT_1027,5,main] to be EMPTY, currently EMPTY
11106 [main] INFO o.a.s.d.s.ReadClusterState - Waiting for Thread[SLOT_1028,5,main] to be EMPTY, currently EMPTY
11106 [main] INFO o.a.s.d.s.ReadClusterState - Waiting for Thread[SLOT_1029,5,main] to be EMPTY, currently EMPTY
11106 [main] INFO o.a.s.d.s.Supervisor - Shutting down supervisor 5daf8496-451f-43ca-b176-b16055d6183c
11106 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN o.a.s.s.o.a.z.s.NIOServerCnxn - caught end of stream exception
org.apache.storm.shade.org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x16a8e5abd97000a, likely client has closed socket
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) [storm-core-1.1.0.jar:1.1.0]
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [storm-core-1.1.0.jar:1.1.0]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
11106 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxn - Closed socket connection for client /0:0:0:0:0:0:0:1:56896 which had sessionid 0x16a8e5abd97000a
11106 [Thread-14] INFO o.a.s.e.EventManagerImp - Event manager interrupted
11108 [Curator-Framework-0] INFO o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl - backgroundOperationsLoop exiting
11109 [ProcessThread(sid:0 cport:-1):] INFO o.a.s.s.o.a.z.s.PrepRequestProcessor - Processed session termination for sessionid: 0x16a8e5abd97000c
11112 [main] INFO o.a.s.s.o.a.z.ZooKeeper - Session: 0x16a8e5abd97000c closed
11112 [main-EventThread] INFO o.a.s.s.o.a.z.ClientCnxn - EventThread shut down
11112 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] WARN o.a.s.s.o.a.z.s.NIOServerCnxn - caught end of stream exception
org.apache.storm.shade.org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x16a8e5abd97000c, likely client has closed socket
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) [storm-core-1.1.0.jar:1.1.0]
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [storm-core-1.1.0.jar:1.1.0]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
11113 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxn - Closed socket connection for client /127.0.0.1:56902 which had sessionid 0x16a8e5abd97000c
11114 [main] INFO o.a.s.testing - Shutting down in process zookeeper
11115 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2000] INFO o.a.s.s.o.a.z.s.NIOServerCnxnFactory - NIOServerCnxn factory exited run method
11116 [main] INFO o.a.s.s.o.a.z.s.ZooKeeperServer - shutting down
11116 [main] INFO o.a.s.s.o.a.z.s.SessionTrackerImpl - Shutting down
11116 [main] INFO o.a.s.s.o.a.z.s.PrepRequestProcessor - Shutting down
11117 [main] INFO o.a.s.s.o.a.z.s.SyncRequestProcessor - Shutting down
11117 [SyncThread:0] INFO o.a.s.s.o.a.z.s.SyncRequestProcessor - SyncRequestProcessor exited!
11117 [ProcessThread(sid:0 cport:-1):] INFO o.a.s.s.o.a.z.s.PrepRequestProcessor - PrepRequestProcessor exited loop!
11117 [main] INFO o.a.s.s.o.a.z.s.FinalRequestProcessor - shutdown of request processor complete
11118 [main] INFO o.a.s.testing - Done shutting down in process zookeeper
11118 [main] INFO o.a.s.testing - Deleting temporary path C:\Users\AKHAND~1\AppData\Local\Temp\ae4119b4-70b3-4d04-9aee-5bfae4c4775b
11203 [main] INFO o.a.s.testing - Deleting temporary path C:\Users\AKHAND~1\AppData\Local\Temp\a78b8c79-b9b3-438d-8df6-5d7bd74281fc
11215 [main] INFO o.a.s.testing - Unable to delete file: C:\Users\AKHAND~1\AppData\Local\Temp\a78b8c79-b9b3-438d-8df6-5d7bd74281fc\version-2\log.1
11215 [main] INFO o.a.s.testing - Deleting temporary path C:\Users\AKHAND~1\AppData\Local\Temp\0e4fbadc-ad33-4577-9784-4cc163a778fa
11255 [main] INFO o.a.s.testing - Deleting temporary path C:\Users\AKHAND~1\AppData\Local\Temp\456d6b1d-eb21-4b76-98f1-a2bb44b2aa5e
12197 [SessionTracker] INFO o.a.s.s.o.a.z.s.SessionTrackerImpl - SessionTrackerImpl exited loop!
I am not able to understand why the client socket is closed and why the session is closed. I am not able to get it working. Please help.
I think you might need to add a sleep here:
try {
    cluster.submitTopology("Test", config, builder.createTopology());
    // Sleep here
} finally {
    cluster.shutdown();
}
Currently you are submitting the topology, and immediately shutting down. Unless you sleep a bit, your topology doesn't get a chance to run.
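A concrete version of that fix might look like the following sketch (the 10-second figure is arbitrary; Utils.sleep is the same org.apache.storm.utils.Utils helper the spout already uses):

LocalCluster cluster = new LocalCluster();
try {
    cluster.submitTopology("Test", config, builder.createTopology());
    Utils.sleep(10000); // give the spout and bolt time to process some tuples
} finally {
    cluster.shutdown();
}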

How to disable DEBUG in log4j?

I need your help. I have a log4j.properties like this:
# Root logger option
log4j.rootLogger=stdout, file
# Redirect log messages to console
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
# Redirect log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${catalina.home}/logs/Admin.log
log4j.appender.file.MaxFileSize=5MB
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
and this is my Controller:
@SuppressWarnings("unused")
@RequestMapping(value="/addedc", method = RequestMethod.POST, consumes = "application/json", headers = "content-type=application/x-www-form-urlencoded")
public @ResponseBody Status_new addedc(@RequestBody installasimodel edc) {
    log.info("<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< START ADDEDC >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>");
    log.debug("qqqqqqqqqqqqqq");
    List<installasimodel> mapusr = null;
    try {
        insta.addistlsi(edc);
        log.info(new Status_new(1, "Sukses!"));
        return new Status_new(1, "Sukses!");
    } catch (Exception mapi) {
        log.info(new Status_new(0, mapi.getMessage()));
        log.info("<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< STOP ADDEDC >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>");
        return new Status_new(0, mapi.getMessage());
    }
}
I want to show only INFO in the .log file, but DEBUG also appears and fills up the file. Here is an example of the logs generated:
....
....
2016-02-05 15:14:58 DEBUG FilterSecurityInterceptor:185 - Public object - authentication not attempted
2016-02-05 15:14:58 DEBUG FilterChainProxy:323 - /ins-server-insta/ins-list-all-insta-installasi reached end of additional filter chain; proceeding with original chain
2016-02-05 15:14:58 DEBUG DispatcherServlet:838 - DispatcherServlet with name 'mvc-dispatcher' processing GET request for [/admin-teknikal/ins-server-insta/ins-list-all-insta-installasi]
2016-02-05 15:14:58 DEBUG RequestMappingHandlerMapping:246 - Looking up handler method for path /ins-server-insta/ins-list-all-insta-installasi
2016-02-05 15:14:58 DEBUG RequestMappingHandlerMapping:251 - Returning handler method [public java.util.List<com.bni.edc.model.installasimodel> com.bni.edc.controller.instaController.getInsta()]
2016-02-05 15:14:58 DEBUG DefaultListableBeanFactory:249 - Returning cached instance of singleton bean 'instaController'
2016-02-05 15:14:58 DEBUG DispatcherServlet:925 - Last-Modified value for [/admin-teknikal/ins-server-insta/ins-list-all-insta-installasi] is: -1
2016-02-05 15:14:58 INFO nanda:63 - <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< START ALL INSTALLASI LIST >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2016-02-05 15:14:58 DEBUG AbstractTransactionImpl:160 - begin
2016-02-05 15:14:58 DEBUG LogicalConnectionImpl:226 - Obtaining JDBC connection
2016-02-05 15:14:58 DEBUG DriverManagerDataSource:142 - Creating new JDBC DriverManager Connection to [jdbc:mysql://localhost:3306/bni]
2016-02-05 15:14:58 DEBUG LogicalConnectionImpl:232 - Obtained JDBC connection
2016-02-05 15:14:58 DEBUG JdbcTransaction:69 - initial autocommit status: true
2016-02-05 15:14:58 DEBUG JdbcTransaction:71 - disabling autocommit
2016-02-05 15:14:58 DEBUG SQL:109 - SELECT * FROM istlsi_edc_tkn_tebel WHERE sts!='1' ORDER BY id_istlsi_tkn DESC
2016-02-05 15:14:58 DEBUG Loader:951 - Result set row: 0
2016-02-05 15:14:58 DEBUG Loader:1485 - Result row: EntityKey[com.bni.edc.model.installasimodel#22344444]
2016-02-05 15:14:58 DEBUG Loader:951 - Result set row: 1
2016-02-05 15:14:58 DEBUG Loader:1485 - Result row: EntityKey[com.bni.edc.model.installasimodel#232323]
2016-02-05 15:14:58 DEBUG TwoPhaseLoad:160 - Resolving associations for [com.bni.edc.model.installasimodel#22344444]
2016-02-05 15:14:58 DEBUG TwoPhaseLoad:286 - Done materializing entity [com.bni.edc.model.installasimodel#22344444]
2016-02-05 15:14:58 DEBUG TwoPhaseLoad:160 - Resolving associations for [com.bni.edc.model.installasimodel#232323]
2016-02-05 15:14:58 DEBUG TwoPhaseLoad:286 - Done materializing entity [com.bni.edc.model.installasimodel#232323]
2016-02-05 15:14:58 DEBUG AbstractTransactionImpl:175 - committing
2016-02-05 15:14:58 DEBUG AbstractFlushingEventListener:149 - Processing flush-time cascades
2016-02-05 15:14:58 DEBUG AbstractFlushingEventListener:189 - Dirty checking collections
2016-02-05 15:14:58 DEBUG AbstractFlushingEventListener:123 - Flushed: 0 insertions, 0 updates, 0 deletions to 2 objects
2016-02-05 15:14:58 DEBUG AbstractFlushingEventListener:130 - Flushed: 0 (re)creations, 0 updates, 0 removals to 0 collections
2016-02-05 15:14:58 DEBUG EntityPrinter:114 - Listing entities:
2016-02-05 15:14:58 DEBUG EntityPrinter:121 - com.bni.edc.model.installasimodel{tngl_qrcode=null, tngl_sbm_istlsi=null, tngl_sbmit=2016-02-04, ttd_mrchn=null, kde_pos_sls=0, own=BN, mid=23232323, hp_penerima=null, id_istlsi_tkn=48, id_wlyh=1, tid=232323, id_spv=0, foto_istlsi=null, sc=1, ttd_istlsi=null, alamat_mrchn=asasa, jam=null, kde_pos=0, sn=null, ket_istlsi=sdsddsdsdsdsd, kde_pos_tkn=null, ntf_adm=0, ttd=null, ms=null, id_tkn=28, koor_lat=null, gprs_id=null, tngl_chck_adm=null, version=null, koor_long=null, sts=0, foto=null, phone=23232, nm_penerima=daa, sts_edc=0, id_usr_adm_sls=0, own_mrchn=null, nm_mrchn=dsds, id_usr_sls=0}
2016-02-05 15:14:58 DEBUG EntityPrinter:121 - com.bni.edc.model.installasimodel{tngl_qrcode=null, tngl_sbm_istlsi=null, tngl_sbmit=2016-02-04, ttd_mrchn=null, kde_pos_sls=0, own=BN, mid=20397878789, hp_penerima=null, id_istlsi_tkn=49, id_wlyh=3, tid=22344444, id_spv=0, foto_istlsi=null, sc=1, ttd_istlsi=null, alamat_mrchn=jl.soedirman kav.04, jam=null, kde_pos=0, sn=null, ket_istlsi=butuh cepat dan segera, kde_pos_tkn=null, ntf_adm=0, ttd=null, ms=null, id_tkn=27, koor_lat=null, gprs_id=null, tngl_chck_adm=null, version=null, koor_long=null, sts=0, foto=null, phone=09787879, nm_penerima=yuyun, sts_edc=0, id_usr_adm_sls=0, own_mrchn=null, nm_mrchn=laksana baru, id_usr_sls=0}
2016-02-05 15:14:58 DEBUG JdbcTransaction:113 - committed JDBC Connection
2016-02-05 15:14:58 DEBUG JdbcTransaction:126 - re-enabling autocommit
2016-02-05 15:14:58 DEBUG LogicalConnectionImpl:246 - Releasing JDBC connection
2016-02-05 15:14:58 DEBUG LogicalConnectionImpl:264 - Released JDBC connection
2016-02-05 15:14:58 INFO nanda:71 - <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< STOP ALL INSTALLASI LIST >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
....
How can I disable DEBUG?
Change DEBUG to INFO in the root logger, so the line reads:
log4j.rootLogger=INFO, stdout, file
I'm assuming that you are using Log4j v1.x...
In your configuration properties you're only configuring appenders (root logger output will be sent to stdout and file):
log4j.rootLogger=stdout, file
but you aren't specifying the logging level (the default level is DEBUG), so everything is logged to your appenders.
To set a specific logging level you need to configure it properly. In particular, if you need to log only from INFO level to FATAL level, you have to set this:
log4j.rootLogger=INFO, stdout, file
Take a look: https://logging.apache.org/log4j/1.2/manual.html
UPDATE
If you need to log Hibernate activity at INFO level, you also need to set this configuration (without re-listing the appenders, which would duplicate the output through additivity):
log4j.logger.org.hibernate=INFO
You are not setting the logging level in your log4j.properties.
Set your logger level to INFO like this:
# Root logger option
log4j.rootLogger=INFO, stdout, file
Change this log4j.rootLogger=stdout, file to log4j.rootLogger=INFO, stdout, file
Solution A: Initialize root logger with level INFO for stdout and file
log4j.rootLogger=INFO,stdout,file
Solution B: Set the log level for specified components
log4j.logger.com.endeca=INFO
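Putting these answers together, a combined log4j 1.x sketch (the org.hibernate package is just an example of a chatty component):

# Root logger at INFO for both appenders
log4j.rootLogger=INFO, stdout, file
# Optionally quiet a specific package down further
log4j.logger.org.hibernate=WARN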

AWS PutObject Connection reset

My AWS Java client is throwing:
javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLException: java.net.SocketException: Connection reset
at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1541) ~[na:1.8.0_60]
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1553) ~[na:1.8.0_60]
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71) ~[na:1.8.0_60]
at org.apache.http.impl.io.AbstractSessionOutputBuffer.flushBuffer(AbstractSessionOutputBuffer.java:159) ~[httpcore-4.3.3.jar:4.3.3]
My code is:
public void save(String name, byte[] file) {
    ObjectMetadata metaData = new ObjectMetadata();
    String streamMD5 = new String(Base64.encodeBase64(file));
    metaData.setContentMD5(streamMD5);
    metaData.setContentLength(file.length);

    InputStream stream = new ByteArrayInputStream(file);
    try {
        PutObjectRequest put = new PutObjectRequest(
                configuration.getBucketName(), name, stream, metaData);
        s3client.putObject(put);
    } finally {
        IOUtils.closeQuietly(stream);
    }
}
The s3client is a Spring bean and is not garbage-collected before the stream has finished uploading. I've tried without specifying the MD5 and/or the content length, but the same exception is still thrown.
Logging through the AWS library shows:
10:09:15.540 [http-nio-8080-exec-1] DEBUG o.a.h.c.protocol.RequestAddCookies - CookieSpec selected: best-match
10:09:15.540 [http-nio-8080-exec-1] DEBUG o.a.h.c.protocol.RequestAuthCache - Auth cache not set in the context
10:09:15.540 [http-nio-8080-exec-1] DEBUG o.a.h.c.p.RequestProxyAuthentication - Proxy auth state: UNCHALLENGED
10:09:15.540 [http-nio-8080-exec-1] DEBUG c.a.http.impl.client.SdkHttpClient - Attempt 1 to execute request
10:09:15.540 [http-nio-8080-exec-1] DEBUG o.a.h.i.conn.DefaultClientConnection - Sending request: PUT /document.pdf HTTP/1.1
10:09:15.540 [http-nio-8080-exec-1] DEBUG org.apache.http.wire - >> "PUT /document.pdf HTTP/1.1[\r][\n]"
10:09:15.540 [http-nio-8080-exec-1] DEBUG org.apache.http.wire - >> "Host: bucket.s3.amazonaws.com[\r][\n]"
10:09:15.540 [http-nio-8080-exec-1] DEBUG org.apache.http.wire - >> "Authorization: AWS 123445667788=[\r][\n]"
10:09:15.540 [http-nio-8080-exec-1] DEBUG org.apache.http.wire - >> "User-Agent: aws-sdk-java/1.10.21 Linux/3.13.0-65-generic Java_HotSpot(TM)_Server_VM/25.60-b23/1.8.0_60[\r][\n]"
10:09:15.540 [http-nio-8080-exec-1] DEBUG org.apache.http.wire - >> "Date: Tue, 06 Oct 2015 09:09:15 GMT[\r][\n]"
10:09:15.663 [http-nio-8080-exec-1] DEBUG com.amazonaws.internal.SdkSSLSocket - closing bucket.s3.amazonaws.com/12.34.56.78:443
10:09:15.665 [http-nio-8080-exec-1] DEBUG o.a.h.i.conn.DefaultClientConnection - I/O error closing connection
I've checked that the file size (3.2 MB) does not exceed the maximum file size for this bucket.
Get/List requests work fine, and I can copy files into the S3 bucket using the s3 client tools.
Does anyone know of anything else I should check?
Thanks.
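One detail worth double-checking (a hedged editor's note, not a confirmed fix): Content-MD5 must be the Base64 of the MD5 digest of the payload, not the Base64 of the payload bytes themselves, and S3 rejects uploads whose digest doesn't match, which can surface client-side as a reset. A sketch using the same commons-codec Base64 as the code above:

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import org.apache.commons.codec.binary.Base64;

// Base64 of the MD5 digest of the bytes, which is what S3 expects in Content-MD5.
static String contentMd5(byte[] file) {
    try {
        byte[] digest = MessageDigest.getInstance("MD5").digest(file);
        return Base64.encodeBase64String(digest);
    } catch (NoSuchAlgorithmException e) {
        throw new IllegalStateException("MD5 unavailable", e); // not expected on a standard JVM
    }
}
// usage: metaData.setContentMD5(contentMd5(file));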

Cassandra Java Driver Cold to Hot in 500ms?

The first (cold) use of a Cluster and Session against a local data source (Cassandra) takes 640 ms. Any additional connect takes 80 to 100 ms, so the overhead of the first connect is about 500+ ms. Is that normal, and is there anything I can do to get this figure down? I use a T410 (i5, 2.5 GHz).
[Update]
23:27:11.453 [main] DEBUG c.d.driver.core.SystemProperties - com.datastax.driver.NEW_NODE_DELAY_SECONDS is undefined, using default value 1
23:27:11.460 [main] DEBUG c.d.driver.core.SystemProperties - com.datastax.driver.NON_BLOCKING_EXECUTOR_SIZE is undefined, using default value 4
23:27:11.463 [main] DEBUG c.d.driver.core.SystemProperties - com.datastax.driver.NOTIF_LOCK_TIMEOUT_SECONDS is undefined, using default value 60
23:27:11.607 [main] DEBUG com.datastax.driver.core.Cluster - Starting new cluster with contact points [localhost/127.0.0.1:9042]
23:27:11.905 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=false] Transport initialized and ready
23:27:11.906 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
23:27:11.969 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing schema
23:27:12.016 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
23:27:12.051 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Successfully connected to localhost/127.0.0.1:9042
23:27:12.052 [main] INFO c.d.d.c.p.DCAwareRoundRobinPolicy - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
23:27:12.053 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host localhost/127.0.0.1:9042 added
23:27:12.076 [Cassandra Java Driver worker-0] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-2, inFlight=0, closed=false] Transport initialized and ready
23:27:12.077 [Cassandra Java Driver worker-0] DEBUG com.datastax.driver.core.Session - Added connection pool for localhost/127.0.0.1:9042
23:27:12.097 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-2, inFlight=0, closed=true] closing connection
23:27:12.103 [main] DEBUG com.datastax.driver.core.Cluster - Shutting down
23:27:12.105 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=true] closing connection
23:27:12.123 [main] DEBUG com.datastax.driver.core.Cluster - Starting new cluster with contact points [/127.0.0.1:9042]
23:27:12.132 [main] DEBUG com.datastax.driver.core.Connection - Connection[/127.0.0.1:9042-1, inFlight=0, closed=false] Transport initialized and ready
23:27:12.132 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
23:27:12.138 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing schema
23:27:12.168 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
23:27:12.192 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Successfully connected to /127.0.0.1:9042
23:27:12.192 [main] INFO c.d.d.c.p.DCAwareRoundRobinPolicy - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
23:27:12.192 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host /127.0.0.1:9042 added
23:27:12.201 [Cassandra Java Driver worker-0] DEBUG com.datastax.driver.core.Connection - Connection[/127.0.0.1:9042-2, inFlight=0, closed=false] Transport initialized and ready
23:27:12.202 [Cassandra Java Driver worker-0] DEBUG com.datastax.driver.core.Session - Added connection pool for /127.0.0.1:9042
As one can see, the first connection attempt takes 600 ms or more, depending on how one reads the figures.
My guess is this has to do with connection initialization. In all currently released versions of the Java driver, connections are initialized one after another, synchronously. Fortunately, individual host pools are initialized in parallel, but the connections within a pool are not. If you are using 2.0.9, which has a default number of core connections of 8, that could explain why you are seeing slow initialization times. Also, if you are using password authentication, that will slow things down quite a bit as well (from ~0-10 ms per connection to ~60-120 ms).
In Java driver 2.0.10, which will be released soon, all connections are initialized in parallel, which greatly improves Session initialization. For more information see JAVA-701.
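For measuring the effect, a minimal timing harness against the 2.0.x driver API (the 127.0.0.1 contact point and keyspace-less connect are assumptions for illustration):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class ConnectTiming {
    public static void main(String[] args) {
        long start = System.nanoTime();
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect(); // cold connect: control connection and pools are built here
        System.out.printf("cold connect took %d ms%n", (System.nanoTime() - start) / 1_000_000);
        session.close();
        cluster.close();
    }
}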
