XFire web-service client gets null response (Java)

I use XFire to develop a web-service client in my Spring 3.x project. Here is my code:
WebServiceClient wClient = new WebServiceClient();
WebServiceSoap soap = wClient.getWebServiceSoap(Commons.VD_INFO_WSDL);
com.webservice.object.GetVDInfo input = new com.webservice.object.GetVDInfo();
input.setPASSWORD(Commons.VD_INFO_PASSWORD);
com.webservice.object.GetVDInfoResponse response = soap.getVDInfo(input);
System.out.println(response==null ? "response=null" : "response not null,");
When I run this on the server, I can see in my Eclipse console that the WSDL is retrieved, but the response object is null.
Here is the final part of the log. Can anyone give me a hint or a suggestion?
If this isn't enough information, please let me know.
Thanks so much.
DEBUG: httpclient.wire.content - << " <wsdl:service name="ToServiceImplService">[\r][\n]"
DEBUG: httpclient.wire.content - << "[\n]"
DEBUG: httpclient.wire.content - << " <wsdl:port binding="impl:ToServiceImplSoapBinding" name="ToServiceImpl">[\r][\n]"
DEBUG: httpclient.wire.content - << "[\n]"
DEBUG: httpclient.wire.content - << " <wsdlsoap:address location="http://xx.xx.10.77/TO_SERVICE/services/ToServiceImpl"/>[\r][\n]"
DEBUG: httpclient.wire.content - << "[\n]"
DEBUG: httpclient.wire.content - << " </wsdl:port>[\r][\n]"
DEBUG: httpclient.wire.content - << "[\n]"
DEBUG: httpclient.wire.content - << " </wsdl:service>[\r][\n]"
DEBUG: httpclient.wire.content - << "[\n]"
DEBUG: httpclient.wire.content - << "</wsdl:definitions>[\r][\n]"
DEBUG: org.apache.commons.httpclient.HttpMethodBase - Resorting to protocol version default close connection policy
DEBUG: org.apache.commons.httpclient.HttpMethodBase - Should NOT close connection, using HTTP/1.1
DEBUG: org.apache.commons.httpclient.HttpConnection - Releasing connection back to connection manager.
DEBUG: org.apache.commons.httpclient.MultiThreadedHttpConnectionManager - Freeing connection, hostConfig=HostConfiguration[host=http://61.60.10.77]
DEBUG: org.apache.commons.httpclient.util.IdleConnectionHandler - Adding connection at: 1404713700948
DEBUG: org.apache.commons.httpclient.MultiThreadedHttpConnectionManager - Notifying no-one, there are no waiting threads
DEBUG: org.codehaus.xfire.handler.HandlerPipeline - Invoking phase pre-dispatch
DEBUG: org.codehaus.xfire.handler.HandlerPipeline - Invoking handler org.codehaus.xfire.client.CorrelatorHandler in phase pre-dispatch
DEBUG: org.codehaus.xfire.client.Client - Correlating context with ID 14047137002771-1400602077
DEBUG: org.codehaus.xfire.client.Client - Found correlated context with ID 14047137002771-1400602077
DEBUG: org.codehaus.xfire.handler.HandlerPipeline - adding handler org.codehaus.xfire.client.ClientReceiveHandler#37e80c87 to phase service
DEBUG: org.codehaus.xfire.handler.HandlerPipeline - Invoking phase dispatch
DEBUG: org.codehaus.xfire.handler.HandlerPipeline - Invoking handler org.codehaus.xfire.handler.LocateBindingHandler in phase dispatch
DEBUG: org.codehaus.xfire.handler.HandlerPipeline - Invoking handler org.codehaus.xfire.soap.handler.SoapBodyHandler in phase dispatch
DEBUG: org.codehaus.xfire.handler.HandlerPipeline - Invoking handler org.codehaus.xfire.soap.handler.SoapActionInHandler in phase dispatch
DEBUG: org.codehaus.xfire.handler.HandlerPipeline - Invoking phase policy
DEBUG: org.codehaus.xfire.handler.HandlerPipeline - Invoking phase user
DEBUG: org.codehaus.xfire.handler.HandlerPipeline - Invoking phase pre-invoke
DEBUG: org.codehaus.xfire.handler.HandlerPipeline - Invoking handler org.codehaus.xfire.soap.handler.ValidateHeadersHandler in phase pre-invoke
DEBUG: org.codehaus.xfire.handler.HandlerPipeline - Invoking phase service
DEBUG: org.codehaus.xfire.handler.HandlerPipeline - Invoking handler org.codehaus.xfire.client.ClientReceiveHandler in phase service
DEBUG: org.codehaus.xfire.client.XFireProxy - Result [null]
response=null

Related

Reactor Netty websocket channel closed prematurely

I have a long-running WebSocket client implemented in Java Spring Reactor with Netty (spring-boot-starter-parent 2.5.3) targeting the Binance WebSocket API.
According to the specs, the WebSocket channel is kept open for 24 hours.
The WebSocket is unexpectedly and prematurely closed after around 3 minutes:
16:50:48.418 [main] DEBUG reactor.util.Loggers - Using Slf4j logging framework
16:50:48.434 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
16:50:48.436 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
16:50:48.437 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 14
16:50:48.438 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
16:50:48.438 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
16:50:48.438 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
16:50:48.439 [main] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: unavailable: Reflective setAccessible(true) disabled
16:50:48.439 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
16:50:48.440 [main] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable: class io.netty.util.internal.PlatformDependent0$6 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module #1efbd816
16:50:48.440 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): unavailable
16:50:48.440 [main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available
16:50:48.448 [main] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 8388608000 bytes (maybe)
16:50:48.448 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
16:50:48.448 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
16:50:48.449 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: -1 bytes
16:50:48.450 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: -1
16:50:48.450 [main] DEBUG io.netty.util.internal.CleanerJava9 - java.nio.ByteBuffer.cleaner(): available
16:50:48.450 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
16:50:48.460 [main] DEBUG reactor.netty.tcp.TcpResources - [http] resources will use the default LoopResources: DefaultLoopResources {prefix=reactor-http, daemon=true, selectCount=8, workerCount=8}
16:50:48.460 [main] DEBUG reactor.netty.tcp.TcpResources - [http] resources will use the default ConnectionProvider: reactor.netty.resources.DefaultPooledConnectionProvider#192b07fd
16:50:48.485 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
16:50:48.486 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
16:50:48.581 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
16:50:48.581 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
16:50:48.582 [main] DEBUG io.netty.util.NetUtilInitializations - Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
16:50:48.583 [main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 128
16:50:48.590 [main] DEBUG org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient - Connecting to wss://stream.binance.com:9443/ws
16:50:48.601 [main] DEBUG io.netty.handler.ssl.OpenSsl - netty-tcnative not in the classpath; OpenSslEngine will be unavailable.
16:50:48.712 [main] DEBUG io.netty.handler.ssl.JdkSslContext - Default protocols (JDK): [TLSv1.3, TLSv1.2, TLSv1.1, TLSv1]
16:50:48.712 [main] DEBUG io.netty.handler.ssl.JdkSslContext - Default cipher suites (JDK): [TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384]
16:50:48.720 [main] DEBUG reactor.netty.resources.DefaultLoopIOUring - Default io_uring support : false
16:50:48.724 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.workdir: /tmp (io.netty.tmpdir)
16:50:48.725 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.deleteLibAfterLoading: true
16:50:48.725 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.tryPatchShadedId: true
16:50:48.730 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - Successfully loaded the library /tmp/libnetty_transport_native_epoll_x86_6410359104745093945181.so
16:50:48.731 [main] DEBUG reactor.netty.resources.DefaultLoopEpoll - Default Epoll support : true
16:50:48.734 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 16
16:50:48.742 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
16:50:48.743 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
16:50:48.749 [main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
16:50:48.768 [main] DEBUG reactor.netty.resources.PooledConnectionProvider - Creating a new [http] client pool [PoolFactory{evictionInterval=PT0S, leasingStrategy=fifo, maxConnections=500, maxIdleTime=-1, maxLifeTime=-1, metricsEnabled=false, pendingAcquireMaxCount=1000, pendingAcquireTimeout=45000}] for [stream.binance.com/<unresolved>:9443]
16:50:48.798 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 27223 (auto-detected)
16:50:48.799 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 28:16:ad:ff:fe:2b:7c:b7 (auto-detected)
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 16
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 16
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimIntervalMillis: 0
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: true
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
16:50:48.813 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
16:50:48.813 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
16:50:48.814 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
16:50:48.828 [reactor-http-epoll-2] DEBUG reactor.netty.resources.PooledConnectionProvider - [id:d962b126] Created a new pooled channel, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
16:50:48.845 [reactor-http-epoll-2] DEBUG reactor.netty.tcp.SslProvider - [id:d962b126] SSL enabled using engine sun.security.ssl.SSLEngineImpl#55608030 and SNI stream.binance.com/<unresolved>:9443
16:50:48.852 [reactor-http-epoll-2] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
16:50:48.853 [reactor-http-epoll-2] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
16:50:48.853 [reactor-http-epoll-2] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#3ba51dc6
16:50:48.854 [reactor-http-epoll-2] DEBUG reactor.netty.transport.TransportConfig - [id:d962b126] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
16:50:48.866 [reactor-http-epoll-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#1fb356c5
16:50:48.867 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xdd7103d7] WRITE: UDP, [11524: /127.0.0.53:53], DefaultDnsQuestion(stream.binance.com. IN A)
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.delayedQueue.ratio: 8
16:50:48.878 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xdd7103d7] WRITE: UDP, [33872: /127.0.0.53:53], DefaultDnsQuestion(stream.binance.com. IN AAAA)
16:50:48.904 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0xdd7103d7] RECEIVED: UDP [11524: /127.0.0.53:53], DatagramDnsResponse(from: /127.0.0.53:53, 11524, QUERY(0), NoError(0), RD RA)
DefaultDnsQuestion(stream.binance.com. IN A)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(OPT flags:0 udp:65494 0B)
16:50:48.907 [reactor-http-epoll-2] DEBUG reactor.netty.transport.TransportConnector - [id:d962b126] Connecting to [stream.binance.com/52.199.12.133:9443].
16:50:48.907 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0xdd7103d7] RECEIVED: UDP [33872: /127.0.0.53:53], DatagramDnsResponse(from: /127.0.0.53:53, 33872, QUERY(0), NoError(0), RD RA)
DefaultDnsQuestion(stream.binance.com. IN AAAA)
DefaultDnsRawRecord(OPT flags:0 udp:65494 0B)
16:50:49.162 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Registering pool release on close event for channel
16:50:49.163 [reactor-http-epoll-2] DEBUG reactor.netty.resources.PooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Channel connected, now: 1 active connections, 0 inactive connections and 0 pending acquire requests.
16:50:49.807 [reactor-http-epoll-2] DEBUG io.netty.handler.ssl.SslHandler - [id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
16:50:49.808 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}, [connected])
16:50:49.826 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(GET{uri=/, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}}, [configured])
16:50:49.826 [reactor-http-epoll-2] DEBUG reactor.netty.http.client.HttpClientConnect - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Handler is being applied: {uri=wss://stream.binance.com:9443/ws, method=GET}
16:50:49.830 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(GET{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}}, [request_prepared])
16:50:49.839 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Added encoder [reactor.left.httpAggregator] at the beginning of the user pipeline, full pipeline: [reactor.left.sslHandler, reactor.left.httpCodec, reactor.left.httpAggregator, reactor.right.reactiveBridge, DefaultChannelPipeline$TailContext#0]
16:50:49.839 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Non Removed handler: reactor.left.httpMetricsHandler, context: null, pipeline: DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.left.httpAggregator = io.netty.handler.codec.http.HttpObjectAggregator), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
16:50:49.840 [reactor-http-epoll-2] DEBUG reactor.netty.http.client.HttpClientOperations - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Attempting to perform websocket handshake with wss://stream.binance.com:9443/ws
16:50:49.846 [reactor-http-epoll-2] DEBUG io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13 - WebSocket version 13 client handshake key: 7FNVb427OHllyiM2Clg//g==, expected response: iTvQFIKtv7xyyXvmEAooh8NZPVw=
16:50:50.122 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(ws{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}}, [response_received])
16:50:50.135 [reactor-http-epoll-2] DEBUG org.springframework.web.reactive.socket.adapter.ReactorNettyWebSocketSession - [36eb4d6b] Session id "36eb4d6b" for wss://stream.binance.com:9443/ws
16:50:50.135 [reactor-http-epoll-2] DEBUG org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient - Started session '36eb4d6b' for wss://stream.binance.com:9443/ws
16:50:50.147 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Added decoder [reactor.left.wsFrameAggregator] at the end of the user pipeline, full pipeline: [reactor.left.sslHandler, reactor.left.httpCodec, ws-decoder, ws-encoder, reactor.left.wsFrameAggregator, reactor.right.reactiveBridge, DefaultChannelPipeline$TailContext#0]
16:50:50.149 [reactor-http-epoll-2] DEBUG reactor.netty.channel.FluxReceive - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] FluxReceive{pending=0, cancelled=false, inboundDone=false, inboundError=null}: subscribing inbound receiver
16:50:50.150 [reactor-http-epoll-2] INFO TRACE - onSubscribe(FluxMap.MapSubscriber)
16:50:50.150 [reactor-http-epoll-2] INFO TRACE - request(256)
16:50:50.411 [reactor-http-epoll-2] INFO TRACE - onNext(evt)
16:50:50.413 [reactor-http-epoll-2] INFO TRACE - request(1)
...
16:52:16.652 [reactor-http-epoll-2] INFO TRACE - onNext(evt)
16:52:16.652 [reactor-http-epoll-2] INFO TRACE - request(1)
16:52:17.168 [reactor-http-epoll-2] DEBUG reactor.netty.resources.PooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] Channel closed, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
16:52:17.169 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] Non Removed handler: reactor.left.httpAggregator, context: null, pipeline: DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (ws-decoder = io.netty.handler.codec.http.websocketx.WebSocket13FrameDecoder), (ws-encoder = io.netty.handler.codec.http.websocketx.WebSocket13FrameEncoder), (reactor.left.wsFrameAggregator = io.netty.handler.codec.http.websocketx.WebSocketFrameAggregator), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
A completed
A terminated
16:52:17.172 [reactor-http-epoll-2] INFO TRACE - onComplete()
B completed
B terminated
C success
C terminated
16:52:17.177 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] onStateChange(ws{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443]}}, [response_completed])
16:52:17.177 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] onStateChange(ws{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443]}}, [disconnecting])
I tried to reproduce the issue using another technology (JavaScript) and everything ran fine.
It seems the channel is being closed, so I tried to tune the ChannelOptions at the TcpClient level... still no luck!
TcpClient wsTcp = TcpClient.create();
wsTcp.option(ChannelOption.AUTO_CLOSE, Boolean.FALSE);
wsTcp.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, Integer.MAX_VALUE);
wsTcp.option(ChannelOption.AUTO_READ, Boolean.TRUE);
wsTcp.option(ChannelOption.SO_KEEPALIVE, Boolean.TRUE);
wsTcp.option(ChannelOption.SO_TIMEOUT, Integer.MAX_VALUE);
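A side note on the snippet above (a hedged sketch, not verified against my setup): Reactor Netty's TcpClient configuration is immutable, so each option(...) call returns a new instance and the calls above discard their results. For the options to take effect they would need to be chained and the returned instance kept and used, roughly:
import io.netty.channel.ChannelOption;
import reactor.netty.tcp.TcpClient;

// Hedged sketch: chain the option(...) calls and keep the returned instance,
// since TcpClient is immutable and every call returns a new client.
TcpClient wsTcp = TcpClient.create()
        .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, Integer.MAX_VALUE)
        .option(ChannelOption.SO_KEEPALIVE, Boolean.TRUE);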
Here is a Java sample to reproduce the issue:
package test;

import java.net.URI;
import java.util.concurrent.CountDownLatch;

import org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient;

import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class WsTest {

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);

        ReactorNettyWebSocketClient wsclient = new ReactorNettyWebSocketClient();
        wsclient.setMaxFramePayloadLength(Integer.MAX_VALUE);

        EmitterProcessor<String> output = EmitterProcessor.create();

        Mono<Void> execMono = wsclient.execute(URI.create("wss://stream.binance.com:9443/ws"),
                session -> session.send(Flux.just(session.textMessage("{\"method\": \"SUBSCRIBE\",\"params\":[\"!ticker@arr\"],\"id\": 1}")))
                        .thenMany(session
                                .receive()
                                .doOnCancel(() -> System.out.println("A cancelled"))
                                .doOnComplete(() -> System.out.println("A completed"))
                                .doOnTerminate(() -> System.out.println("A terminated"))
                                .map(x -> "evt")
                                .log("TRACE")
                                .subscribeWith(output).then())
                        .then());

        output.doOnCancel(() -> System.out.println("B cancelled"))
                .doOnComplete(() -> System.out.println("B completed"))
                .doOnTerminate(() -> System.out.println("B terminated"))
                .doOnSubscribe(s -> execMono
                        .doOnCancel(() -> System.out.println("C cancelled"))
                        .doOnSuccess(x -> System.out.println("C success"))
                        .doOnTerminate(() -> System.out.println("C terminated"))
                        .subscribe())
                .subscribe();

        latch.await();
    }
}
I don't understand why I get completed/terminated events from the ReactorNettyWebSocketClient WebSocketHandler.
Thank you for your help.
I finally managed to find the root cause.
The underlying error was a WebSocket close with code 1006: "Unexpected Status of SSLEngineResult after an unwrap() operation".
After some investigation, I found that code 1006 means the connection was closed abnormally by the client, as documented in RFC 6455 (https://datatracker.ietf.org/doc/html/rfc6455#section-7.4.1):
1006 is a reserved value and MUST NOT be set as a status code in a
Close control frame by an endpoint. It is designated for use in
applications expecting a status code to indicate that the
connection was closed abnormally, e.g., without sending or
receiving a Close control frame.
At that point I switched from a Wi-Fi connection to a LAN connection and the issue vanished immediately.
My Wi-Fi router was not able to handle the huge payload correctly.
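For completeness, a small hedged sketch (assuming Spring Framework 5.3+, where WebSocketSession exposes closeStatus(), and reusing the imports and the wsclient from the sample above) of how the close code could be surfaced instead of only seeing onComplete():
// Hedged sketch: log the close status (empty if the peer dropped the
// connection without sending a Close frame, as happens with code 1006).
Mono<Void> execWithStatus = wsclient.execute(URI.create("wss://stream.binance.com:9443/ws"),
        session -> session.receive()
                .doOnNext(msg -> System.out.println("frame of " + msg.getPayloadAsText().length() + " chars"))
                .then(session.closeStatus()
                        .doOnNext(status -> System.out.println("closed with status " + status))
                        .then()));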

Connect to an Azure IoT Hub from inside a Kubernetes cluster via AMQP over WebSockets

We are trying to communicate with an Azure IoT Hub via AMQP over WebSockets from a Java Docker container inside an Azure Kubernetes cluster. Sadly, it seems the container can't establish a connection, while locally, or even on another virtual machine (where only Docker is installed), the container runs successfully.
The network policy rules should allow all the protocols and ports necessary to communicate with the Event Hub endpoint of the IoT Hub.
Does anybody know which switch we have to flip to allow the container in the cluster to communicate with the IoT Hub?
The only logs we have are these:
13:10:26.688 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
13:10:26.851 [main] INFO com.azure.messaging.eventhubs.EventHubClientBuilder - connectionId[REDACTED]: Emitting a single connection.
13:10:26.901 [main] DEBUG com.azure.core.amqp.implementation.ReactorConnection - connectionId[REDACTED]: Connection state: UNINITIALIZED
13:10:26.903 [main] INFO com.azure.messaging.eventhubs.implementation.EventHubConnectionProcessor - namespace[REDACTED] entityPath[REDACTED]: Setting next AMQP channel.
13:10:26.903 [main] INFO com.azure.messaging.eventhubs.implementation.EventHubConnectionProcessor - namespace[REDACTED] entityPath[REDACTED]: Next AMQP channel received, updating 0 current subscribers
13:10:26.920 [main] INFO com.azure.core.amqp.implementation.ReactorConnection - connectionId[REDACTED]: Creating and starting connection to REDACTED:443
13:10:26.940 [main] INFO com.azure.core.amqp.implementation.ReactorExecutor - connectionId[REDACTED], message[Starting reactor.]
13:10:26.955 [single-1] INFO com.azure.core.amqp.implementation.handler.ConnectionHandler - onConnectionInit hostname[REDACTED], connectionId[REDACTED]
13:10:26.956 [single-1] INFO com.azure.core.amqp.implementation.handler.ReactorHandler - connectionId[REDACTED] reactor.onReactorInit
13:10:26.956 [single-1] INFO com.azure.core.amqp.implementation.handler.ConnectionHandler - onConnectionLocalOpen hostname[REDACTED:443], connectionId[REDACTED], errorCondition[null], errorDescription[null]
13:10:26.975 [main] DEBUG com.azure.core.amqp.implementation.ReactorSession - Connection state: UNINITIALIZED
13:10:26.991 [main] INFO com.azure.core.amqp.implementation.ReactorConnection - Emitting new response channel. connectionId: REDACTED. entityPath: $management. linkName: mgmt.
13:10:26.991 [main] INFO class com.azure.core.amqp.implementation.RequestResponseChannel<mgmt-session> - namespace[REDACTED] entityPath[$management]: Setting next AMQP channel.
13:10:26.991 [main] INFO class com.azure.core.amqp.implementation.RequestResponseChannel<mgmt-session> - namespace[REDACTED] entityPath[$management]: Next AMQP channel received, updating 0 current subscribers
13:10:26.993 [main] INFO com.azure.messaging.eventhubs.implementation.ManagementChannel - Management endpoint state: UNINITIALIZED
13:10:27.032 [main] INFO com.azure.messaging.eventhubs.implementation.EventHubConnectionProcessor - Upstream connection publisher was completed. Terminating processor.
13:10:27.033 [main] INFO com.azure.messaging.eventhubs.implementation.EventHubConnectionProcessor - namespace[REDACTED] entityPath[REDACTED]: AMQP channel processor completed. Notifying 0 subscribers.
13:10:27.040 [main] INFO com.azure.messaging.eventhubs.implementation.EventHubReactorAmqpConnection - connectionId[REDACTED]: Disposing of connection.
13:10:27.040 [main] INFO class com.azure.core.amqp.implementation.RequestResponseChannel<mgmt-session> - Upstream connection publisher was completed. Terminating processor.
13:10:27.040 [main] INFO class com.azure.core.amqp.implementation.RequestResponseChannel<mgmt-session> - namespace[REDACTED] entityPath[$management]: AMQP channel processor completed. Notifying 0 subscribers.
13:10:27.041 [main] INFO com.azure.core.amqp.implementation.ReactorConnection - connectionId[REDACTED]: Disposing of ReactorConnection.
13:10:27.041 [main] INFO com.azure.messaging.eventhubs.implementation.EventHubConnectionProcessor - namespace[REDACTED] entityPath[REDACTED]: Channel is disposed.
13:10:27.041 [main] INFO com.azure.core.amqp.implementation.ReactorConnection - connectionId[REDACTED]: Removing session 'mgmt-session'
13:10:27.041 [main] INFO com.azure.core.amqp.implementation.ReactorSession - sessionId[mgmt-session]: Disposing of session.
13:10:27.043 [main] INFO com.azure.core.amqp.implementation.AmqpExceptionHandler - Shutdown received: ReactorExecutor.close() was called., isTransient[false], initiatedByClient[true]
13:10:27.089 [single-1] DEBUG com.azure.core.amqp.implementation.handler.SessionHandler - onSessionLocalOpen connectionId[REDACTED], entityName[mgmt-session], condition[Error{condition=null, description='null', info=null}]
13:10:27.090 [single-1] INFO com.azure.core.amqp.implementation.handler.SendLinkHandler - onLinkLocalClose connectionId[REDACTED], linkName[mgmt:sender], errorCondition[null], errorDescription[null]
13:10:27.090 [single-1] INFO com.azure.core.amqp.implementation.handler.ReceiveLinkHandler - onLinkLocalClose connectionId[REDACTED], linkName[mgmt:receiver], errorCondition[null], errorDescription[null]
13:10:27.090 [single-1] DEBUG com.azure.core.amqp.implementation.handler.SessionHandler - onSessionLocalClose connectionId[mgmt-session], entityName[REDACTED], condition[Error{condition=null, description='null', info=null}]
13:10:27.090 [single-1] INFO com.azure.core.amqp.implementation.handler.ConnectionHandler - onConnectionLocalClose hostname[REDACTED:443], connectionId[REDACTED], errorCondition[null], errorDescription[null]
13:10:27.090 [single-1] INFO com.azure.core.amqp.implementation.handler.ConnectionHandler - onConnectionBound hostname[REDACTED], connectionId[REDACTED]
13:10:27.098 [single-1] DEBUG com.azure.core.amqp.implementation.handler.WebSocketsConnectionHandler - connectionId[REDACTED] Adding web sockets transport layer for hostname[REDACTED:443]
13:10:27.125 [single-1] DEBUG com.azure.core.amqp.implementation.handler.DispatchHandler - Running task for event: %s
13:10:27.126 [single-1] INFO com.azure.core.amqp.implementation.ReactorExecutor - connectionId[REDACTED], message[Processing all pending tasks and closing old reactor.]
13:10:27.126 [single-1] DEBUG com.azure.core.amqp.implementation.handler.SendLinkHandler - onLinkLocalOpen connectionId[REDACTED], linkName[mgmt:sender], localTarget[Target{address='$management', durable=NONE, expiryPolicy=SESSION_END, timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null}]
13:10:27.126 [single-1] INFO com.azure.core.amqp.implementation.handler.ReceiveLinkHandler - onLinkLocalOpen connectionId[REDACTED], linkName[mgmt:receiver], localSource[Source{address='$management', durable=NONE, expiryPolicy=SESSION_END, timeout=0, dynamic=false, dynamicNodeProperties=null, distributionMode=null, filter=null, defaultOutcome=null, outcomes=null, capabilities=null}]
13:10:27.127 [single-1] INFO com.azure.core.amqp.implementation.ReactorExecutor - connectionId[REDACTED], message[Stopping the reactor because thread was interrupted or the reactor has no more events to process.]
The failure was not networking at all.
My mistake was assuming that if I can run the container manually via docker run -it * it should also work in a Kubernetes cluster. With the -it argument the container stays open and a pseudo-TTY is attached, but in the Kubernetes cluster this of course does not happen, so we had to adjust the loop logic of the Java application, and it worked afterwards.
Thanks to all.
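As an illustration of the kind of adjustment meant here (a minimal sketch, not the original code): block the main thread explicitly instead of relying on an attached TTY.
import java.util.concurrent.CountDownLatch;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        // ... start the AMQP-over-WebSockets client here ...

        // Without -it there is no TTY keeping the process alive, so block the
        // main thread until the container is asked to shut down.
        CountDownLatch shutdown = new CountDownLatch(1);
        Runtime.getRuntime().addShutdownHook(new Thread(shutdown::countDown));
        shutdown.await();
    }
}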

Riak Java client: execute() never returns

I've set up a Riak server on Ubuntu.
http://192.168.0.102:8098/ping returns "OK".
I'm trying to connect to it remotely using the Riak Java client (2.1.1) with the following code. client.execute() never returns. I'm attaching the log as well.
public class Testing {

    public static void main(String[] args) throws ExecutionException,
            InterruptedException, UnknownHostException {
        RiakClient client = RiakClient.newClient(8098, "192.168.0.102");

        // put some stuff
        Namespace ns = new Namespace("TestBucket");
        Location location = new Location(ns, "TestKey");
        String myData = "TestValue";
        StoreValue store = new StoreValue.Builder(myData)
                .withLocation(location).build();
        Response rv = client.execute(store); // << NEVER GETS PAST THIS
        System.out.println("write done");

        // get some stuff
        FetchValue fv = new FetchValue.Builder(location).build();
        FetchValue.Response response = client.execute(fv);
        String obj = response.getValue(String.class);
        System.out.println(obj);
        System.out.println("fetch done");
    }
}
The log on the console is:
17:19:40.841 [main] DEBUG i.n.u.i.l.InternalLoggerFactory - Using SLF4J as the default logging framework
17:19:40.865 [main] DEBUG i.n.c.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 16
17:19:40.891 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
17:19:40.892 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
17:19:40.893 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
17:19:40.894 [main] DEBUG i.n.util.internal.PlatformDependent0 - direct buffer constructor: available
17:19:40.894 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
17:19:40.894 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): available
17:19:40.896 [main] DEBUG io.netty.util.internal.Cleaner0 - java.nio.ByteBuffer.cleaner(): available
17:19:40.896 [main] DEBUG i.n.util.internal.PlatformDependent - Platform: Windows
17:19:40.897 [main] DEBUG i.n.util.internal.PlatformDependent - Java version: 8
17:19:40.897 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noUnsafe: false
17:19:40.897 [main] DEBUG i.n.util.internal.PlatformDependent - sun.misc.Unsafe: available
17:19:40.898 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noJavassist: false
17:19:40.899 [main] DEBUG i.n.util.internal.PlatformDependent - Javassist: unavailable
17:19:40.899 [main] DEBUG i.n.util.internal.PlatformDependent - You don't have Javassist in your class path or you don't have enough permission to load dynamically generated classes. Please check the configuration for better performance.
17:19:40.899 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.tmpdir: C:\Users\Rakesh\AppData\Local\Temp (java.io.tmpdir)
17:19:40.900 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.bitMode: 32 (sun.arch.data.model)
17:19:40.900 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
17:19:40.900 [main] DEBUG i.n.util.internal.PlatformDependent - io.netty.maxDirectMemory: 259522560 bytes
17:19:40.921 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
17:19:40.921 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
17:19:40.922 [main] DEBUG i.n.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
17:19:41.039 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 2924 (auto-detected)
17:19:41.041 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
17:19:41.041 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
17:19:41.162 [main] DEBUG io.netty.util.NetUtil - Loopback interface: lo (Software Loopback Interface 1, 127.0.0.1)
17:19:41.163 [main] DEBUG io.netty.util.NetUtil - \proc\sys\net\core\somaxconn: 200 (non-existent)
17:19:41.321 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: e4:b3:18:ff:fe:6c:52:eb (auto-detected)
17:19:41.321 [main] DEBUG i.n.util.internal.ThreadLocalRandom - -Dio.netty.initialSeedUniquifier: 0xb620b93d4006e503
17:19:41.333 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
17:19:41.333 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.maxRecords: 4
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 2
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 2
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.tinyCacheSize: 512
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
17:19:41.364 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
17:19:41.365 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 65536
17:19:41.365 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
17:19:41.406 [main] INFO com.basho.riak.client.core.RiakNode - RiakNode started; 192.168.0.102:8098
17:19:41.407 [main] INFO c.basho.riak.client.core.RiakCluster - RiakCluster is starting.
17:19:41.408 [main] INFO c.b.r.c.core.util.DefaultCharset - No desired charset found in system properties, the default charset 'windows-1252' will be used
17:19:41.408 [main] INFO c.b.r.c.core.util.DefaultCharset - Initializing client charset to: windows-1252
17:19:41.443 [main] DEBUG com.basho.riak.client.core.RiakNode - Attempting to acquire channel permit
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 32768
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
17:19:41.447 [main] DEBUG com.basho.riak.client.core.RiakNode - Operation 28144878 being executed on RiakNode 192.168.0.102:8098
17:19:41.461 [nioEventLoopGroup-2-10] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.bytebuf.checkAccessible: true
17:19:41.463 [nioEventLoopGroup-2-10] DEBUG i.n.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#1536e36
Call stack of the suspended thread:
Thread [main] (Suspended)
Unsafe.park(boolean, long) line: not available [native method]
LockSupport.park(Object) line: not available
CountDownLatch$Sync(AbstractQueuedSynchronizer).parkAndCheckInterrupt() line: not available
CountDownLatch$Sync(AbstractQueuedSynchronizer).doAcquireSharedInterruptibly(int) line: not available
CountDownLatch$Sync(AbstractQueuedSynchronizer).acquireSharedInterruptibly(int) line: not available
CountDownLatch.await() line: not available
StoreOperation(FutureOperation<T,U,S>).await() line: 387
GenericRiakCommand$1(CoreFutureAdapter<T2,S2,T,S>).await() line: 90
StoreValue(RiakCommand<T,S>).execute(RiakCluster) line: 92
RiakClient.execute(RiakCommand<T,S>) line: 355
Testing.main(String[]) line: 29
A simple code addition after the following line of your code should fix things for you:
Response rv = client.execute(store);
add:
client.shutdown();
to release that connection and continue execution.
Note that you will need to create a new connection for your next request against the database, since you closed the client, or use .executeAsync() in place of .execute().
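A rough sketch of the executeAsync() variant (hedged; method names as I recall them from the 2.x client API):
// Hedged sketch: asynchronous execution keeps the client usable for further
// requests; block (or register a listener) only when the result is needed.
RiakFuture<StoreValue.Response, Location> future = client.executeAsync(store);
future.await();
if (future.isSuccess()) {
    System.out.println("write done");
}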
It appears you are expecting the Riak Java client to connect using the HTTP API. The Riak Java client only connects over Protocol Buffers; pointing it at the HTTP address and port will freeze.
You have to use this; it works fine:
public static void main(String[] args) throws ExecutionException,
        InterruptedException, UnknownHostException {
    RiakClient client = RiakClient.newClient(8087, "192.168.0.65");

    // put some stuff
    Namespace ns = new Namespace("TestBucket");
    Location location = new Location(ns, "TestKey");
    String myData = "TestValue";
    StoreValue store = new StoreValue.Builder(myData)
            .withLocation(location).build();
    client.execute(store);
    System.out.println("write done");

    // get some stuff
    FetchValue fv = new FetchValue.Builder(location).build();
    FetchValue.Response response = client.execute(fv);
    String obj = response.getValue(String.class);
    System.out.println(obj);
    System.out.println("fetch done");
}
Hope it works for you too!

AWS PutObject Connection reset

My AWS Java client is throwing:
javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLException: java.net.SocketException: Connection reset
at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1541) ~[na:1.8.0_60]
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1553) ~[na:1.8.0_60]
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71) ~[na:1.8.0_60]
at org.apache.http.impl.io.AbstractSessionOutputBuffer.flushBuffer(AbstractSessionOutputBuffer.java:159) ~[httpcore-4.3.3.jar:4.3.3]
My code is:
public void save(String name, byte[] file) {
    ObjectMetadata metaData = new ObjectMetadata();
    String streamMD5 = new String(Base64.encodeBase64(file));
    metaData.setContentMD5(streamMD5);
    metaData.setContentLength(file.length);
    InputStream stream = new ByteArrayInputStream(file);
    try {
        PutObjectRequest put = new PutObjectRequest(
                configuration.getBucketName(), name, stream, metaData);
        s3client.putObject(put);
    } finally {
        IOUtils.closeQuietly(stream);
    }
}
The s3client is a Spring bean and is not garbage collected before the stream has finished uploading. I've tried without specifying the MD5 and/or content length, but the same exception is still thrown.
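For reference, Content-MD5 is conventionally the Base64-encoded MD5 digest of the payload, not the Base64 encoding of the payload itself; a minimal sketch of that computation (an aside, since the exception occurs even without the header):
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// Content-MD5 = Base64( MD5(payload) ), per RFC 1864; not Base64 of the payload itself.
static String contentMd5(byte[] payload) throws NoSuchAlgorithmException {
    byte[] digest = MessageDigest.getInstance("MD5").digest(payload);
    return Base64.getEncoder().encodeToString(digest);
}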
Logging through the AWS library shows:
10:09:15.540 [http-nio-8080-exec-1] DEBUG o.a.h.c.protocol.RequestAddCookies - CookieSpec selected: best-match
10:09:15.540 [http-nio-8080-exec-1] DEBUG o.a.h.c.protocol.RequestAuthCache - Auth cache not set in the context
10:09:15.540 [http-nio-8080-exec-1] DEBUG o.a.h.c.p.RequestProxyAuthentication - Proxy auth state: UNCHALLENGED
10:09:15.540 [http-nio-8080-exec-1] DEBUG c.a.http.impl.client.SdkHttpClient - Attempt 1 to execute request
10:09:15.540 [http-nio-8080-exec-1] DEBUG o.a.h.i.conn.DefaultClientConnection - Sending request: PUT /documeent.pdf HTTP/1.1
10:09:15.540 [http-nio-8080-exec-1] DEBUG org.apache.http.wire - >> "PUT /document.pdf HTTP/1.1[\r][\n]"
10:09:15.540 [http-nio-8080-exec-1] DEBUG org.apache.http.wire - >> "Host: bucket.s3.amazonaws.com[\r][\n]"
10:09:15.540 [http-nio-8080-exec-1] DEBUG org.apache.http.wire - >> "Authorization: AWS 123445667788=[\r][\n]"
10:09:15.540 [http-nio-8080-exec-1] DEBUG org.apache.http.wire - >> "User-Agent: aws-sdk-java/1.10.21 Linux/3.13.0-65-generic Java_HotSpot(TM)_Server_VM/25.60-b23/1.8.0_60[\r][\n]"
10:09:15.540 [http-nio-8080-exec-1] DEBUG org.apache.http.wire - >> "Date: Tue, 06 Oct 2015 09:09:15 GMT[\r][\n]"
10:09:15.663 [http-nio-8080-exec-1] DEBUG com.amazonaws.internal.SdkSSLSocket - closing bucket.s3.amazonaws.com/12.34.56.78:443
10:09:15.665 [http-nio-8080-exec-1] DEBUG o.a.h.i.conn.DefaultClientConnection - I/O error closing connection
I've checked that the file size (3.2 MB) does not exceed the maximum file size for this bucket.
GET/LIST requests work fine, and I can copy files into the S3 bucket using the S3 client tools.
Does anyone know of anything else I should check?
Thanks.

Cassandra Java Driver Cold to Hot in 500ms?

The cold-to-hot (first-use) setup of a cluster and session to a local data source (Cassandra) takes 640 ms. Any additional connect takes 80 to 100 ms, so the overhead of the first connect is about 500+ ms. Is that normal, and is there anything I can do to get this figure down? I use a T410 (i5, 2.5 GHz).
[Update]
23:27:11.453 [main] DEBUG c.d.driver.core.SystemProperties - com.datastax.driver.NEW_NODE_DELAY_SECONDS is undefined, using default value 1
23:27:11.460 [main] DEBUG c.d.driver.core.SystemProperties - com.datastax.driver.NON_BLOCKING_EXECUTOR_SIZE is undefined, using default value 4
23:27:11.463 [main] DEBUG c.d.driver.core.SystemProperties - com.datastax.driver.NOTIF_LOCK_TIMEOUT_SECONDS is undefined, using default value 60
23:27:11.607 [main] DEBUG com.datastax.driver.core.Cluster - Starting new cluster with contact points [localhost/127.0.0.1:9042]
23:27:11.905 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=false] Transport initialized and ready
23:27:11.906 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
23:27:11.969 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing schema
23:27:12.016 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
23:27:12.051 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Successfully connected to localhost/127.0.0.1:9042
23:27:12.052 [main] INFO c.d.d.c.p.DCAwareRoundRobinPolicy - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
23:27:12.053 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host localhost/127.0.0.1:9042 added
23:27:12.076 [Cassandra Java Driver worker-0] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-2, inFlight=0, closed=false] Transport initialized and ready
23:27:12.077 [Cassandra Java Driver worker-0] DEBUG com.datastax.driver.core.Session - Added connection pool for localhost/127.0.0.1:9042
23:27:12.097 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-2, inFlight=0, closed=true] closing connection
23:27:12.103 [main] DEBUG com.datastax.driver.core.Cluster - Shutting down
23:27:12.105 [main] DEBUG com.datastax.driver.core.Connection - Connection[localhost/127.0.0.1:9042-1, inFlight=0, closed=true] closing connection
23:27:12.123 [main] DEBUG com.datastax.driver.core.Cluster - Starting new cluster with contact points [/127.0.0.1:9042]
23:27:12.132 [main] DEBUG com.datastax.driver.core.Connection - Connection[/127.0.0.1:9042-1, inFlight=0, closed=false] Transport initialized and ready
23:27:12.132 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
23:27:12.138 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing schema
23:27:12.168 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Refreshing node list and token map
23:27:12.192 [main] DEBUG c.d.driver.core.ControlConnection - [Control connection] Successfully connected to /127.0.0.1:9042
23:27:12.192 [main] INFO c.d.d.c.p.DCAwareRoundRobinPolicy - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
23:27:12.192 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host /127.0.0.1:9042 added
23:27:12.201 [Cassandra Java Driver worker-0] DEBUG com.datastax.driver.core.Connection - Connection[/127.0.0.1:9042-2, inFlight=0, closed=false] Transport initialized and ready
23:27:12.202 [Cassandra Java Driver worker-0] DEBUG com.datastax.driver.core.Session - Added connection pool for /127.0.0.1:9042
As one can see, the first connection attempt takes up to 600 ms or more, depending on how one reads the figures.
My guess is this has to do with connection initialization. In all currently released versions of the Java driver, connections are initialized one after another, synchronously. Fortunately, individual host pools are initialized in parallel, but the connections within them are not. If you are using 2.0.9, which has a default of 8 core connections, that could explain why you are seeing slow initialization times. Also, if you are using password authentication, that will slow things down quite a bit as well (from ~0-10 ms per connection to ~60-120 ms).
In Java driver 2.0.10, which will be released soon, all connections are initialized in parallel, which greatly improves Session initialization. For more information see JAVA-701.
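If upgrading is not immediately possible, one hedged workaround sketch for 2.0.x (class and method names as in the 2.0 driver API, to the best of my knowledge) is to lower the number of core connections opened per host:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;

// Hedged sketch: fewer core connections per host means fewer sequential
// connection initializations on the first use of the Session.
PoolingOptions pooling = new PoolingOptions()
        .setCoreConnectionsPerHost(HostDistance.LOCAL, 1);

Cluster cluster = Cluster.builder()
        .addContactPoint("127.0.0.1")
        .withPoolingOptions(pooling)
        .build();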
