How to disable DEBUG in log4j? - java

I need your help. I have a log4j.properties like this:
# Root logger option
log4j.rootLogger=stdout, file
# Redirect log messages to console
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
# Redirect log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${catalina.home}/logs/Admin.log
log4j.appender.file.MaxFileSize=5MB
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
And this is my controller:
@SuppressWarnings("unused")
@RequestMapping(value="/addedc", method = RequestMethod.POST, consumes = "application/json", headers = "content-type=application/x-www-form-urlencoded")
public @ResponseBody Status_new addedc(@RequestBody installasimodel edc){
    log.info("<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< START ADDEDC >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>");
    log.debug("qqqqqqqqqqqqqq");
    List<installasimodel> mapusr = null;
    try{
        insta.addistlsi(edc);
        log.info(new Status_new(1, "Sukses!"));
        return new Status_new(1, "Sukses!");
    }catch(Exception mapi){
        log.info(new Status_new(0, mapi.getMessage()));
        log.info("<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< STOP ADDEDC >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>");
        return new Status_new(0, mapi.getMessage());
    }
}
I want only INFO to appear in the .log file, so why does DEBUG appear as well and fill the whole page? Here is an example of the logs generated:
....
....
2016-02-05 15:14:58 DEBUG FilterSecurityInterceptor:185 - Public object - authentication not attempted
2016-02-05 15:14:58 DEBUG FilterChainProxy:323 - /ins-server-insta/ins-list-all-insta-installasi reached end of additional filter chain; proceeding with original chain
2016-02-05 15:14:58 DEBUG DispatcherServlet:838 - DispatcherServlet with name 'mvc-dispatcher' processing GET request for [/admin-teknikal/ins-server-insta/ins-list-all-insta-installasi]
2016-02-05 15:14:58 DEBUG RequestMappingHandlerMapping:246 - Looking up handler method for path /ins-server-insta/ins-list-all-insta-installasi
2016-02-05 15:14:58 DEBUG RequestMappingHandlerMapping:251 - Returning handler method [public java.util.List<com.bni.edc.model.installasimodel> com.bni.edc.controller.instaController.getInsta()]
2016-02-05 15:14:58 DEBUG DefaultListableBeanFactory:249 - Returning cached instance of singleton bean 'instaController'
2016-02-05 15:14:58 DEBUG DispatcherServlet:925 - Last-Modified value for [/admin-teknikal/ins-server-insta/ins-list-all-insta-installasi] is: -1
2016-02-05 15:14:58 INFO nanda:63 - <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< START ALL INSTALLASI LIST >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2016-02-05 15:14:58 DEBUG AbstractTransactionImpl:160 - begin
2016-02-05 15:14:58 DEBUG LogicalConnectionImpl:226 - Obtaining JDBC connection
2016-02-05 15:14:58 DEBUG DriverManagerDataSource:142 - Creating new JDBC DriverManager Connection to [jdbc:mysql://localhost:3306/bni]
2016-02-05 15:14:58 DEBUG LogicalConnectionImpl:232 - Obtained JDBC connection
2016-02-05 15:14:58 DEBUG JdbcTransaction:69 - initial autocommit status: true
2016-02-05 15:14:58 DEBUG JdbcTransaction:71 - disabling autocommit
2016-02-05 15:14:58 DEBUG SQL:109 - SELECT * FROM istlsi_edc_tkn_tebel WHERE sts!='1' ORDER BY id_istlsi_tkn DESC
2016-02-05 15:14:58 DEBUG Loader:951 - Result set row: 0
2016-02-05 15:14:58 DEBUG Loader:1485 - Result row: EntityKey[com.bni.edc.model.installasimodel#22344444]
2016-02-05 15:14:58 DEBUG Loader:951 - Result set row: 1
2016-02-05 15:14:58 DEBUG Loader:1485 - Result row: EntityKey[com.bni.edc.model.installasimodel#232323]
2016-02-05 15:14:58 DEBUG TwoPhaseLoad:160 - Resolving associations for [com.bni.edc.model.installasimodel#22344444]
2016-02-05 15:14:58 DEBUG TwoPhaseLoad:286 - Done materializing entity [com.bni.edc.model.installasimodel#22344444]
2016-02-05 15:14:58 DEBUG TwoPhaseLoad:160 - Resolving associations for [com.bni.edc.model.installasimodel#232323]
2016-02-05 15:14:58 DEBUG TwoPhaseLoad:286 - Done materializing entity [com.bni.edc.model.installasimodel#232323]
2016-02-05 15:14:58 DEBUG AbstractTransactionImpl:175 - committing
2016-02-05 15:14:58 DEBUG AbstractFlushingEventListener:149 - Processing flush-time cascades
2016-02-05 15:14:58 DEBUG AbstractFlushingEventListener:189 - Dirty checking collections
2016-02-05 15:14:58 DEBUG AbstractFlushingEventListener:123 - Flushed: 0 insertions, 0 updates, 0 deletions to 2 objects
2016-02-05 15:14:58 DEBUG AbstractFlushingEventListener:130 - Flushed: 0 (re)creations, 0 updates, 0 removals to 0 collections
2016-02-05 15:14:58 DEBUG EntityPrinter:114 - Listing entities:
2016-02-05 15:14:58 DEBUG EntityPrinter:121 - com.bni.edc.model.installasimodel{tngl_qrcode=null, tngl_sbm_istlsi=null, tngl_sbmit=2016-02-04, ttd_mrchn=null, kde_pos_sls=0, own=BN, mid=23232323, hp_penerima=null, id_istlsi_tkn=48, id_wlyh=1, tid=232323, id_spv=0, foto_istlsi=null, sc=1, ttd_istlsi=null, alamat_mrchn=asasa, jam=null, kde_pos=0, sn=null, ket_istlsi=sdsddsdsdsdsd, kde_pos_tkn=null, ntf_adm=0, ttd=null, ms=null, id_tkn=28, koor_lat=null, gprs_id=null, tngl_chck_adm=null, version=null, koor_long=null, sts=0, foto=null, phone=23232, nm_penerima=daa, sts_edc=0, id_usr_adm_sls=0, own_mrchn=null, nm_mrchn=dsds, id_usr_sls=0}
2016-02-05 15:14:58 DEBUG EntityPrinter:121 - com.bni.edc.model.installasimodel{tngl_qrcode=null, tngl_sbm_istlsi=null, tngl_sbmit=2016-02-04, ttd_mrchn=null, kde_pos_sls=0, own=BN, mid=20397878789, hp_penerima=null, id_istlsi_tkn=49, id_wlyh=3, tid=22344444, id_spv=0, foto_istlsi=null, sc=1, ttd_istlsi=null, alamat_mrchn=jl.soedirman kav.04, jam=null, kde_pos=0, sn=null, ket_istlsi=butuh cepat dan segera, kde_pos_tkn=null, ntf_adm=0, ttd=null, ms=null, id_tkn=27, koor_lat=null, gprs_id=null, tngl_chck_adm=null, version=null, koor_long=null, sts=0, foto=null, phone=09787879, nm_penerima=yuyun, sts_edc=0, id_usr_adm_sls=0, own_mrchn=null, nm_mrchn=laksana baru, id_usr_sls=0}
2016-02-05 15:14:58 DEBUG JdbcTransaction:113 - committed JDBC Connection
2016-02-05 15:14:58 DEBUG JdbcTransaction:126 - re-enabling autocommit
2016-02-05 15:14:58 DEBUG LogicalConnectionImpl:246 - Releasing JDBC connection
2016-02-05 15:14:58 DEBUG LogicalConnectionImpl:264 - Released JDBC connection
2016-02-05 15:14:58 INFO nanda:71 - <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< STOP ALL INSTALLASI LIST >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
....
How can I disable DEBUG?

Change DEBUG to INFO in the root logger line:
log4j.rootLogger=INFO, stdout, file

I'm assuming that you are using Log4j v1.x...
In your configuration properties you're only configuring appenders (root logger output will be sent to stdout and file):
log4j.rootLogger=stdout, file
but you aren't specifying logging level (default level is DEBUG), so everything is logged on your appenders.
To set a specific logging level you need to configure it properly. In particular, if you need to log only from INFO level to FATAL level, you have to set this:
log4j.rootLogger=INFO, stdout, file
Take a look: https://logging.apache.org/log4j/1.2/manual.html
UPDATE
If you need to log Hibernate activity (only from INFO level up) you also need to set this configuration:
log4j.logger.org.hibernate=INFO, stdout, file
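Keep in mind that stdout and file are already attached to the root logger, so attaching them to org.hibernate as well makes every Hibernate message appear twice. If that happens, standard Log4j 1.x additivity can switch off the propagation to the root logger:
log4j.logger.org.hibernate=INFO, stdout, file
log4j.additivity.org.hibernate=false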

You are not setting the logging level in your log4j.properties.
Set your logger level to INFO like this:
# Root logger option
log4j.rootLogger=INFO, stdout, file

Change this log4j.rootLogger=stdout, file to log4j.rootLogger=INFO, stdout, file

Solution A: Initialize root logger with level INFO for stdout and file
log4j.rootLogger=INFO,stdout,file
Solution B: Set the log level for specified components
log4j.logger.com.endeca=INFO
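If editing the properties file is not an option, the same thing can be done programmatically through the Log4j 1.x API (a minimal sketch; the package name used here is only an example):
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// Raise the root logger to INFO so DEBUG is suppressed everywhere
Logger.getRootLogger().setLevel(Level.INFO);
// Or raise the level of one noisy package only
Logger.getLogger("org.hibernate").setLevel(Level.INFO);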

Related

Reactor Netty websocket channel closed prematurely

I have a long-running websocket client implemented in Java Spring Reactor with Netty (spring-boot-starter-parent 2.5.3), targeting the Binance ws api.
According to the specs, the websocket channel is kept open for 24 hours.
The websocket is unexpectedly and prematurely closed after around 3 minutes:
16:50:48.418 [main] DEBUG reactor.util.Loggers - Using Slf4j logging framework
16:50:48.434 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
16:50:48.436 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
16:50:48.437 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 14
16:50:48.438 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
16:50:48.438 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
16:50:48.438 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
16:50:48.439 [main] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: unavailable: Reflective setAccessible(true) disabled
16:50:48.439 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
16:50:48.440 [main] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable: class io.netty.util.internal.PlatformDependent0$6 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module #1efbd816
16:50:48.440 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): unavailable
16:50:48.440 [main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available
16:50:48.448 [main] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 8388608000 bytes (maybe)
16:50:48.448 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
16:50:48.448 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
16:50:48.449 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: -1 bytes
16:50:48.450 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: -1
16:50:48.450 [main] DEBUG io.netty.util.internal.CleanerJava9 - java.nio.ByteBuffer.cleaner(): available
16:50:48.450 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
16:50:48.460 [main] DEBUG reactor.netty.tcp.TcpResources - [http] resources will use the default LoopResources: DefaultLoopResources {prefix=reactor-http, daemon=true, selectCount=8, workerCount=8}
16:50:48.460 [main] DEBUG reactor.netty.tcp.TcpResources - [http] resources will use the default ConnectionProvider: reactor.netty.resources.DefaultPooledConnectionProvider#192b07fd
16:50:48.485 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
16:50:48.486 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
16:50:48.581 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
16:50:48.581 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
16:50:48.582 [main] DEBUG io.netty.util.NetUtilInitializations - Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
16:50:48.583 [main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 128
16:50:48.590 [main] DEBUG org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient - Connecting to wss://stream.binance.com:9443/ws
16:50:48.601 [main] DEBUG io.netty.handler.ssl.OpenSsl - netty-tcnative not in the classpath; OpenSslEngine will be unavailable.
16:50:48.712 [main] DEBUG io.netty.handler.ssl.JdkSslContext - Default protocols (JDK): [TLSv1.3, TLSv1.2, TLSv1.1, TLSv1]
16:50:48.712 [main] DEBUG io.netty.handler.ssl.JdkSslContext - Default cipher suites (JDK): [TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384]
16:50:48.720 [main] DEBUG reactor.netty.resources.DefaultLoopIOUring - Default io_uring support : false
16:50:48.724 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.workdir: /tmp (io.netty.tmpdir)
16:50:48.725 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.deleteLibAfterLoading: true
16:50:48.725 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.tryPatchShadedId: true
16:50:48.730 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - Successfully loaded the library /tmp/libnetty_transport_native_epoll_x86_6410359104745093945181.so
16:50:48.731 [main] DEBUG reactor.netty.resources.DefaultLoopEpoll - Default Epoll support : true
16:50:48.734 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 16
16:50:48.742 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
16:50:48.743 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
16:50:48.749 [main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
16:50:48.768 [main] DEBUG reactor.netty.resources.PooledConnectionProvider - Creating a new [http] client pool [PoolFactory{evictionInterval=PT0S, leasingStrategy=fifo, maxConnections=500, maxIdleTime=-1, maxLifeTime=-1, metricsEnabled=false, pendingAcquireMaxCount=1000, pendingAcquireTimeout=45000}] for [stream.binance.com/<unresolved>:9443]
16:50:48.798 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 27223 (auto-detected)
16:50:48.799 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 28:16:ad:ff:fe:2b:7c:b7 (auto-detected)
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 16
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 16
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimIntervalMillis: 0
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: true
16:50:48.809 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
16:50:48.813 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
16:50:48.813 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
16:50:48.814 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
16:50:48.828 [reactor-http-epoll-2] DEBUG reactor.netty.resources.PooledConnectionProvider - [id:d962b126] Created a new pooled channel, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
16:50:48.845 [reactor-http-epoll-2] DEBUG reactor.netty.tcp.SslProvider - [id:d962b126] SSL enabled using engine sun.security.ssl.SSLEngineImpl#55608030 and SNI stream.binance.com/<unresolved>:9443
16:50:48.852 [reactor-http-epoll-2] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
16:50:48.853 [reactor-http-epoll-2] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
16:50:48.853 [reactor-http-epoll-2] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#3ba51dc6
16:50:48.854 [reactor-http-epoll-2] DEBUG reactor.netty.transport.TransportConfig - [id:d962b126] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
16:50:48.866 [reactor-http-epoll-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#1fb356c5
16:50:48.867 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xdd7103d7] WRITE: UDP, [11524: /127.0.0.53:53], DefaultDnsQuestion(stream.binance.com. IN A)
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
16:50:48.869 [reactor-http-epoll-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.delayedQueue.ratio: 8
16:50:48.878 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsQueryContext - [id: 0xdd7103d7] WRITE: UDP, [33872: /127.0.0.53:53], DefaultDnsQuestion(stream.binance.com. IN AAAA)
16:50:48.904 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0xdd7103d7] RECEIVED: UDP [11524: /127.0.0.53:53], DatagramDnsResponse(from: /127.0.0.53:53, 11524, QUERY(0), NoError(0), RD RA)
DefaultDnsQuestion(stream.binance.com. IN A)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(stream.binance.com. 12 IN A 4B)
DefaultDnsRawRecord(OPT flags:0 udp:65494 0B)
16:50:48.907 [reactor-http-epoll-2] DEBUG reactor.netty.transport.TransportConnector - [id:d962b126] Connecting to [stream.binance.com/52.199.12.133:9443].
16:50:48.907 [reactor-http-epoll-1] DEBUG io.netty.resolver.dns.DnsNameResolver - [id: 0xdd7103d7] RECEIVED: UDP [33872: /127.0.0.53:53], DatagramDnsResponse(from: /127.0.0.53:53, 33872, QUERY(0), NoError(0), RD RA)
DefaultDnsQuestion(stream.binance.com. IN AAAA)
DefaultDnsRawRecord(OPT flags:0 udp:65494 0B)
16:50:49.162 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Registering pool release on close event for channel
16:50:49.163 [reactor-http-epoll-2] DEBUG reactor.netty.resources.PooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Channel connected, now: 1 active connections, 0 inactive connections and 0 pending acquire requests.
16:50:49.807 [reactor-http-epoll-2] DEBUG io.netty.handler.ssl.SslHandler - [id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
16:50:49.808 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}, [connected])
16:50:49.826 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(GET{uri=/, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}}, [configured])
16:50:49.826 [reactor-http-epoll-2] DEBUG reactor.netty.http.client.HttpClientConnect - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Handler is being applied: {uri=wss://stream.binance.com:9443/ws, method=GET}
16:50:49.830 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(GET{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}}, [request_prepared])
16:50:49.839 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Added encoder [reactor.left.httpAggregator] at the beginning of the user pipeline, full pipeline: [reactor.left.sslHandler, reactor.left.httpCodec, reactor.left.httpAggregator, reactor.right.reactiveBridge, DefaultChannelPipeline$TailContext#0]
16:50:49.839 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Non Removed handler: reactor.left.httpMetricsHandler, context: null, pipeline: DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.left.httpAggregator = io.netty.handler.codec.http.HttpObjectAggregator), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
16:50:49.840 [reactor-http-epoll-2] DEBUG reactor.netty.http.client.HttpClientOperations - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Attempting to perform websocket handshake with wss://stream.binance.com:9443/ws
16:50:49.846 [reactor-http-epoll-2] DEBUG io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13 - WebSocket version 13 client handshake key: 7FNVb427OHllyiM2Clg//g==, expected response: iTvQFIKtv7xyyXvmEAooh8NZPVw=
16:50:50.122 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] onStateChange(ws{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443]}}, [response_received])
16:50:50.135 [reactor-http-epoll-2] DEBUG org.springframework.web.reactive.socket.adapter.ReactorNettyWebSocketSession - [36eb4d6b] Session id "36eb4d6b" for wss://stream.binance.com:9443/ws
16:50:50.135 [reactor-http-epoll-2] DEBUG org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient - Started session '36eb4d6b' for wss://stream.binance.com:9443/ws
16:50:50.147 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] Added decoder [reactor.left.wsFrameAggregator] at the end of the user pipeline, full pipeline: [reactor.left.sslHandler, reactor.left.httpCodec, ws-decoder, ws-encoder, reactor.left.wsFrameAggregator, reactor.right.reactiveBridge, DefaultChannelPipeline$TailContext#0]
16:50:50.149 [reactor-http-epoll-2] DEBUG reactor.netty.channel.FluxReceive - [id:d962b126-1, L:/192.168.1.5:44690 - R:stream.binance.com/52.199.12.133:9443] FluxReceive{pending=0, cancelled=false, inboundDone=false, inboundError=null}: subscribing inbound receiver
16:50:50.150 [reactor-http-epoll-2] INFO TRACE - onSubscribe(FluxMap.MapSubscriber)
16:50:50.150 [reactor-http-epoll-2] INFO TRACE - request(256)
16:50:50.411 [reactor-http-epoll-2] INFO TRACE - onNext(evt)
16:50:50.413 [reactor-http-epoll-2] INFO TRACE - request(1)
...
16:52:16.652 [reactor-http-epoll-2] INFO TRACE - onNext(evt)
16:52:16.652 [reactor-http-epoll-2] INFO TRACE - request(1)
16:52:17.168 [reactor-http-epoll-2] DEBUG reactor.netty.resources.PooledConnectionProvider - [id:d962b126-1, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] Channel closed, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
16:52:17.169 [reactor-http-epoll-2] DEBUG reactor.netty.ReactorNetty - [id:d962b126-1, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] Non Removed handler: reactor.left.httpAggregator, context: null, pipeline: DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (ws-decoder = io.netty.handler.codec.http.websocketx.WebSocket13FrameDecoder), (ws-encoder = io.netty.handler.codec.http.websocketx.WebSocket13FrameEncoder), (reactor.left.wsFrameAggregator = io.netty.handler.codec.http.websocketx.WebSocketFrameAggregator), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
A completed
A terminated
16:52:17.172 [reactor-http-epoll-2] INFO TRACE - onComplete()
B completed
B terminated
C success
C terminated
16:52:17.177 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] onStateChange(ws{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443]}}, [response_completed])
16:52:17.177 [reactor-http-epoll-2] DEBUG reactor.netty.resources.DefaultPooledConnectionProvider - [id:d962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443] onStateChange(ws{uri=/ws, connection=PooledConnection{channel=[id: 0xd962b126, L:/192.168.1.5:44690 ! R:stream.binance.com/52.199.12.133:9443]}}, [disconnecting])
I tried to reproduce the issue using another technology like javascript, but everything runs fine.
It seems that the channel is closed, so I tried to tune the ChannelOptions at the TcpClient level... still no luck!
TcpClient wsTcp = TcpClient.create();
wsTcp.option(ChannelOption.AUTO_CLOSE, Boolean.FALSE);
wsTcp.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, Integer.MAX_VALUE);
wsTcp.option(ChannelOption.AUTO_READ, Boolean.TRUE);
wsTcp.option(ChannelOption.SO_KEEPALIVE, Boolean.TRUE);
wsTcp.option(ChannelOption.SO_TIMEOUT, Integer.MAX_VALUE);
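Worth noting: TcpClient in Reactor Netty is immutable, so each option(...) call returns a new instance; when the return value is discarded as above, the options are never actually applied. A sketch of the chained form that does apply them:
TcpClient wsTcp = TcpClient.create()
        .option(ChannelOption.SO_KEEPALIVE, Boolean.TRUE)
        .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, Integer.MAX_VALUE);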
I provided a Java code sample to reproduce the issue:
package test;

import java.net.URI;
import java.util.concurrent.CountDownLatch;
import org.springframework.web.reactive.socket.client.ReactorNettyWebSocketClient;
import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class WsTest {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        ReactorNettyWebSocketClient wsclient = new ReactorNettyWebSocketClient();
        wsclient.setMaxFramePayloadLength(Integer.MAX_VALUE);
        EmitterProcessor<String> output = EmitterProcessor.create();
        Mono<Void> execMono = wsclient.execute(URI.create("wss://stream.binance.com:9443/ws"),
                session -> session.send(Flux.just(session.textMessage("{\"method\": \"SUBSCRIBE\",\"params\":[\"!ticker#arr\"],\"id\": 1}")))
                        .thenMany(session
                                .receive()
                                .doOnCancel(() -> System.out.println("A cancelled"))
                                .doOnComplete(() -> System.out.println("A completed"))
                                .doOnTerminate(() -> System.out.println("A terminated"))
                                .map(x -> "evt")
                                .log("TRACE")
                                .subscribeWith(output).then())
                        .then());

        output.doOnCancel(() -> System.out.println("B cancelled"))
                .doOnComplete(() -> System.out.println("B completed"))
                .doOnTerminate(() -> System.out.println("B terminated"))
                .doOnSubscribe(s -> execMono
                        .doOnCancel(() -> System.out.println("C cancelled"))
                        .doOnSuccess(x -> System.out.println("C success"))
                        .doOnTerminate(() -> System.out.println("C terminated"))
                        .subscribe())
                .subscribe();

        latch.await();
    }
}
I don't understand why I get completed/terminated events from the ReactorNettyWebSocketClient WebSocketHandler.
Thank you for your help.
I finally managed to find the root cause.
The underlying error was java websocket 1006 Unexpected Status of SSLEngineResult after an unwrap() operation.
After some investigation: the returned code 1006 means the connection was closed abnormally by the client, as documented in the RFC https://datatracker.ietf.org/doc/html/rfc6455#section-7.4.1
1006 is a reserved value and MUST NOT be set as a status code in a
Close control frame by an endpoint. It is designated for use in
applications expecting a status code to indicate that the
connection was closed abnormally, e.g., without sending or
receiving a Close control frame.
I then switched from a WiFi connection to a LAN connection, and the issue vanished immediately.
My WiFi router was not able to handle the huge payload correctly.

Spring WebClient downloading PDF gives an HTTP error

While using Spring's WebClient to retrieve a PDF file from a REST API, I get an error.
Here's the code with the WebClient:
return WebClient.create().get()
        .uri(builder.build().toUri())
        .accept(MediaType.APPLICATION_PDF)
        .exchange()
        .flatMap(response -> response.bodyToMono(byte[].class))
        .block();
And I'm getting this error from the REST API that serves the file:
03-02-2021 14:32:01.180 [http-nio-8080-exec-6] DEBUG o.s.w.s.m.m.a.HttpEntityMethodProcessor.writeWithMessageConverters - Found 'Content-Type:application/pdf' in response
03-02-2021 14:32:01.181 [http-nio-8080-exec-6] DEBUG o.s.w.s.m.m.a.HttpEntityMethodProcessor.traceDebug - Writing [InputStream resource [resource loaded through InputStream]]
03-02-2021 14:32:01.185 [http-nio-8080-exec-6] DEBUG o.s.o.j.s.OpenEntityManagerInViewInterceptor.afterCompletion - Closing JPA EntityManager in OpenEntityManagerInViewInterceptor
03-02-2021 14:32:01.186 [http-nio-8080-exec-6] DEBUG o.s.web.servlet.DispatcherServlet.logResult - Completed 200 OK
03-02-2021 14:32:01.187 [http-nio-8080-exec-6] DEBUG o.a.t.util.net.SocketWrapperBase.log - Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper#d1d3000:org.apache.tomcat.util.net.NioChannel#4b5f2cea:java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:57797]], Read from buffer: [0]
03-02-2021 14:32:01.187 [http-nio-8080-exec-6] DEBUG o.a.coyote.http11.Http11Processor.log - Error parsing HTTP request header
java.io.EOFException: null
at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.fillReadBuffer(NioEndpoint.java:1230)
at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.read(NioEndpoint.java:1140)
at org.apache.coyote.http11.Http11InputBuffer.fill(Http11InputBuffer.java:780)
at org.apache.coyote.http11.Http11InputBuffer.parseRequestLine(Http11InputBuffer.java:356)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:260)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1589)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:834)
03-02-2021 14:32:01.188 [http-nio-8080-exec-6] DEBUG o.a.coyote.http11.Http11Processor.log - Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper#d1d3000:org.apache.tomcat.util.net.NioChannel#4b5f2cea:java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:57797]], Status in: [OPEN_READ], State out: [CLOSED]
03-02-2021 14:32:01.188 [http-nio-8080-exec-6] DEBUG o.a.coyote.http11.Http11NioProtocol.log - Pushed Processor [org.apache.coyote.http11.Http11Processor#1c06182a]
03-02-2021 14:32:01.188 [http-nio-8080-exec-6] DEBUG o.a.tomcat.util.threads.LimitLatch.log - Counting down[http-nio-8080-exec-6] latch=2
03-02-2021 14:32:01.188 [http-nio-8080-exec-6] DEBUG o.apache.tomcat.util.net.NioEndpoint.log - Calling [org.apache.tomcat.util.net.NioEndpoint#6df434e4].closeSocket([org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper#d1d3000:org.apache.tomcat.util.net.NioChannel#4b5f2cea:java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:57797]])
03-02-2021 14:32:08.051 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool.logPoolState - HikariPool-1 - Pool stats (total=10, active=0, idle=10, waiting=0)
03-02-2021 14:32:08.051 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool.fillPool - HikariPool-1 - Fill pool skipped, pool is at sufficient level.
03-02-2021 14:32:31.180 [Catalina-utility-2] DEBUG o.a.catalina.session.ManagerBase.log - Start expire sessions StandardManager at 1612359151180 sessioncount 0
03-02-2021 14:32:31.180 [Catalina-utility-2] DEBUG o.a.catalina.session.ManagerBase.log - End expire sessions StandardManager processingTime 0 expired sessions: 0
03-02-2021 14:32:34.196 [http-nio-8080-exec-1] DEBUG o.a.coyote.http11.Http11NioProtocol.log - Processing socket [org.apache.tomcat.util.net.NioChannel#6a7679d5:java.nio.channels.SocketChannel[connected local=/0:0:0:0:0:0:0:1:8080 remote=/0:0:0:0:0:0:0:1:57771]] with status [ERROR]
03-02-2021 14:32:34.196 [http-nio-8080-exec-1] DEBUG o.a.coyote.http11.Http11NioProtocol.log - Found processor [null] for socket [org.apache.tomcat.util.net.NioChannel#6a7679d5:java.nio.channels.SocketChannel[connected local=/0:0:0:0:0:0:0:1:8080 remote=/0:0:0:0:0:0:0:1:57771]]
03-02-2021 14:32:34.196 [http-nio-8080-exec-1] DEBUG o.a.tomcat.util.threads.LimitLatch.log - Counting down[http-nio-8080-exec-1] latch=1
03-02-2021 14:32:34.196 [http-nio-8080-exec-1] DEBUG o.apache.tomcat.util.net.NioEndpoint.log - Calling [org.apache.tomcat.util.net.NioEndpoint#6df434e4].closeSocket([org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper#1bf3167e:org.apache.tomcat.util.net.NioChannel#6a7679d5:java.nio.channels.SocketChannel[connected local=/0:0:0:0:0:0:0:1:8080 remote=/0:0:0:0:0:0:0:1:57771]])
Any idea where the problem is?
Thanx
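One thing worth checking in the snippet above: exchange() obliges the caller to consume the response in all cases and has since been deprecated in Spring in favor of retrieve() / exchangeToMono(). A minimal sketch of the retrieve() form, assuming the same builder:
return WebClient.create().get()
        .uri(builder.build().toUri())
        .accept(MediaType.APPLICATION_PDF)
        .retrieve()                      // throws for 4xx/5xx status codes by default
        .bodyToMono(byte[].class)
        .block();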

Riak java client, execute() never returns

I've set up a Riak server on Ubuntu.
http://192.168.0.102:8098/ping returns "OK".
I'm trying to connect to it remotely using the Riak Java client (2.1.1) with the following code, but client.execute() never returns. I'm attaching the log as well.
public class Testing {
    public static void main(String[] args) throws ExecutionException,
            InterruptedException, UnknownHostException {
        RiakClient client = RiakClient.newClient(8098, "192.168.0.102");

        // put some stuff
        Namespace ns = new Namespace("TestBucket");
        Location location = new Location(ns, "TestKey");
        String myData = "TestValue";
        StoreValue store = new StoreValue.Builder(myData)
                .withLocation(location).build();
        Response rv = client.execute(store); // << NEVER GETS PAST THIS
        System.out.println("write done");

        // get some stuff
        FetchValue fv = new FetchValue.Builder(location).build();
        FetchValue.Response response = client.execute(fv);
        String obj = response.getValue(String.class);
        System.out.println(obj);
        System.out.println("fetch done");
    }
}
Log on the console is...
17:19:40.841 [main] DEBUG i.n.u.i.l.InternalLoggerFactory - Using SLF4J as the default logging framework
17:19:40.865 [main] DEBUG i.n.c.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 16
17:19:40.891 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
17:19:40.892 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
17:19:40.893 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
17:19:40.894 [main] DEBUG i.n.util.internal.PlatformDependent0 - direct buffer constructor: available
17:19:40.894 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
17:19:40.894 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): available
17:19:40.896 [main] DEBUG io.netty.util.internal.Cleaner0 - java.nio.ByteBuffer.cleaner(): available
17:19:40.896 [main] DEBUG i.n.util.internal.PlatformDependent - Platform: Windows
17:19:40.897 [main] DEBUG i.n.util.internal.PlatformDependent - Java version: 8
17:19:40.897 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noUnsafe: false
17:19:40.897 [main] DEBUG i.n.util.internal.PlatformDependent - sun.misc.Unsafe: available
17:19:40.898 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noJavassist: false
17:19:40.899 [main] DEBUG i.n.util.internal.PlatformDependent - Javassist: unavailable
17:19:40.899 [main] DEBUG i.n.util.internal.PlatformDependent - You don't have Javassist in your class path or you don't have enough permission to load dynamically generated classes. Please check the configuration for better performance.
17:19:40.899 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.tmpdir: C:\Users\Rakesh\AppData\Local\Temp (java.io.tmpdir)
17:19:40.900 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.bitMode: 32 (sun.arch.data.model)
17:19:40.900 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
17:19:40.900 [main] DEBUG i.n.util.internal.PlatformDependent - io.netty.maxDirectMemory: 259522560 bytes
17:19:40.921 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
17:19:40.921 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
17:19:40.922 [main] DEBUG i.n.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
17:19:41.039 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 2924 (auto-detected)
17:19:41.041 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
17:19:41.041 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
17:19:41.162 [main] DEBUG io.netty.util.NetUtil - Loopback interface: lo (Software Loopback Interface 1, 127.0.0.1)
17:19:41.163 [main] DEBUG io.netty.util.NetUtil - \proc\sys\net\core\somaxconn: 200 (non-existent)
17:19:41.321 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: e4:b3:18:ff:fe:6c:52:eb (auto-detected)
17:19:41.321 [main] DEBUG i.n.util.internal.ThreadLocalRandom - -Dio.netty.initialSeedUniquifier: 0xb620b93d4006e503
17:19:41.333 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
17:19:41.333 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.maxRecords: 4
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 2
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 2
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.tinyCacheSize: 512
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
17:19:41.355 [main] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
17:19:41.364 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
17:19:41.365 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 65536
17:19:41.365 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
17:19:41.406 [main] INFO com.basho.riak.client.core.RiakNode - RiakNode started; 192.168.0.102:8098
17:19:41.407 [main] INFO c.basho.riak.client.core.RiakCluster - RiakCluster is starting.
17:19:41.408 [main] INFO c.b.r.c.core.util.DefaultCharset - No desired charset found in system properties, the default charset 'windows-1252' will be used
17:19:41.408 [main] INFO c.b.r.c.core.util.DefaultCharset - Initializing client charset to: windows-1252
17:19:41.443 [main] DEBUG com.basho.riak.client.core.RiakNode - Attempting to acquire channel permit
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 32768
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
17:19:41.445 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
17:19:41.447 [main] DEBUG com.basho.riak.client.core.RiakNode - Operation 28144878 being executed on RiakNode 192.168.0.102:8098
17:19:41.461 [nioEventLoopGroup-2-10] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.bytebuf.checkAccessible: true
17:19:41.463 [nioEventLoopGroup-2-10] DEBUG i.n.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#1536e36
Call stack of suspended thread
Thread [main] (Suspended)
Unsafe.park(boolean, long) line: not available [native method]
LockSupport.park(Object) line: not available
CountDownLatch$Sync(AbstractQueuedSynchronizer).parkAndCheckInterrupt() line: not available
CountDownLatch$Sync(AbstractQueuedSynchronizer).doAcquireSharedInterruptibly(int) line: not available
CountDownLatch$Sync(AbstractQueuedSynchronizer).acquireSharedInterruptibly(int) line: not available
CountDownLatch.await() line: not available
StoreOperation(FutureOperation<T,U,S>).await() line: 387
GenericRiakCommand$1(CoreFutureAdapter<T2,S2,T,S>).await() line: 90
StoreValue(RiakCommand<T,S>).execute(RiakCluster) line: 92
RiakClient.execute(RiakCommand<T,S>) line: 355
Testing.main(String[]) line: 29
A simple code addition after the following line of your code should fix things for you:
Response rv = client.execute(store);
add:
client.shutdown();
to release that connection and continue execution.
Note that you will need to create a new connection for your next request against the database since you closed the client, or use .executeAsync() in place of .execute().
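Put together, the suggested change looks like this (a sketch of just the affected lines):
Response rv = client.execute(store);
System.out.println("write done");
client.shutdown(); // release the cluster's connections before moving on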
It appears you are expecting the Riak Java client to connect using the HTTP API. The Riak Java client only connects using Protocol Buffers; using the HTTP address and port will freeze. Use the Protocol Buffers port instead (8087 by default, as in the answer below).
You have to use this, it works fine...
public static void main(String[] args) throws ExecutionException,
        InterruptedException, UnknownHostException {
    RiakClient client = RiakClient.newClient(8087, "192.168.0.65");

    // put some stuff
    Namespace ns = new Namespace("TestBucket");
    Location location = new Location(ns, "TestKey");
    String myData = "TestValue";
    StoreValue store = new StoreValue.Builder(myData)
            .withLocation(location).build();
    client.execute(store); // completes now that the protocol buffer port is used
    System.out.println("write done");

    // get some stuff
    FetchValue fv = new FetchValue.Builder(location).build();
    FetchValue.Response response = client.execute(fv);
    String obj = response.getValue(String.class);
    System.out.println(obj);
    System.out.println("fetch done");
}
Hope it works for you too!

Data not getting persisted Hibernate + Derby Embedded

I am trying to persist data to an embedded Derby DB using Hibernate, but the data is not getting persisted at all.
Hibernate configuration:
<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD//EN"
        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <property name="hibernate.connection.url">
            jdbc:derby:wso2FlightRecorder
        </property>
        <property name="hibernate.connection.driver_class">
            org.apache.derby.jdbc.EmbeddedDriver
        </property>
        <property name="hibernate.dialect">
            org.hibernate.dialect.DerbyTenSevenDialect
        </property>
        <property name="connection.username"/>
        <property name="connection.password"/>
        <!-- DB schema will be updated if needed -->
        <property name="hibernate.hbm2ddl.auto">create</property>
        <mapping class="org.wso2.esbMonitor.network.PassThruHTTPBean">
        </mapping>
    </session-factory>
</hibernate-configuration>
This is the mapped entity class:
package org.wso2.esbMonitor.network;

import javax.persistence.*;
import java.util.Date;

@Entity
@Table(name = "HTTP_LOG")
public class PassThruHTTPBean {
    @Id
    @GeneratedValue
    @Column(name = "id")
    private int id;

    @Column(name = "activeThreadCount")
    private int activeThreadCount;

    @Column(name = "avgSizeRecieved")
    private double avgSizeRecieved;

    @Column(name = "avgSizeSent")
    private double avgSizeSent;

    @Column(name = "faultsRecieving")
    private long faultsRecieving;

    @Column(name = "faultSending")
    private long faultSending;

    @Column(name = "messagesRecieved")
    private long messagesRecieved;

    @Column(name = "messageSent")
    private long messageSent;

    @Column(name = "queueSize")
    private int queueSize;

    @Column(name = "time")
    private Date date;
}
This is the method used to commit to the DB:
public synchronized static void addNetworkTrafficDetailsToDB(){
    try {
        if (scheduledList.size() > 0){
            logger.info("Started persisting");
            Session session = HibernateSessionCreator.getSession();
            for(PassThruHTTPBean passThruHTTPBean : scheduledList){
                Transaction tx;
                tx = session.beginTransaction();
                session.save(passThruHTTPBean);
                tx.commit();
            }
            session.flush();
            session.close();
            scheduledList.clear();
        }
Stack trace
2016-06-01 09:31:08 DEBUG AbstractTransactionImpl:158 - begin
2016-06-01 09:31:08 DEBUG LogicalConnectionImpl:212 - Obtaining JDBC connection
2016-06-01 09:31:08 TRACE DriverManagerConnectionProviderImpl:175 - Total checked-out connections: 0
2016-06-01 09:31:08 TRACE DriverManagerConnectionProviderImpl:181 - Using pooled JDBC connection, pool size: 0
2016-06-01 09:31:08 DEBUG LogicalConnectionImpl:218 - Obtained JDBC connection
2016-06-01 09:31:08 DEBUG JdbcTransaction:69 - initial autocommit status: false
2016-06-01 09:31:08 TRACE AbstractServiceRegistryImpl:146 - Initializing service [role=org.hibernate.event.service.spi.EventListenerRegistry]
2016-06-01 09:31:08 TRACE DefaultSaveOrUpdateEventListener:177 - Saving transient instance
2016-06-01 09:31:08 TRACE AbstractSaveEventListener:167 - Saving [org.wso2.esbMonitor.network.PassThruHTTPBean#<null>]
2016-06-01 09:31:08 TRACE ActionQueue:177 - Adding an EntityIdentityInsertAction for [org.wso2.esbMonitor.network.PassThruHTTPBean] object
2016-06-01 09:31:08 TRACE ActionQueue:185 - Executing inserts before finding non-nullable transient entities for early insert: [EntityIdentityInsertAction[org.wso2.esbMonitor.network.PassThruHTTPBean#<null>]]
2016-06-01 09:31:08 TRACE ActionQueue:193 - Adding insert with no non-nullable, transient entities: [EntityIdentityInsertAction[org.wso2.esbMonitor.network.PassThruHTTPBean#<null>]]
2016-06-01 09:31:08 TRACE ActionQueue:211 - Executing insertions before resolved early-insert
2016-06-01 09:31:08 DEBUG ActionQueue:213 - Executing identity-insert immediately
2016-06-01 09:31:08 TRACE AbstractEntityPersister:2960 - Inserting entity: org.wso2.esbMonitor.network.PassThruHTTPBean (native id)
2016-06-01 09:31:08 DEBUG SQL:104 - insert into HTTP_LOG (id, activeThreadCount, avgSizeRecieved, avgSizeSent, time, faultSending, faultsRecieving, messageSent, messagesRecieved, queueSize) values (default, ?, ?, ?, ?, ?, ?, ?, ?, ?)
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:319 - Registering statement [c065801d-0155-0a1f-282f-000004235ae8]
2016-06-01 09:31:08 TRACE AbstractEntityPersister:2780 - Dehydrating entity: [org.wso2.esbMonitor.network.PassThruHTTPBean#<null>]
2016-06-01 09:31:08 TRACE BasicBinder:84 - binding parameter [1] as [INTEGER] - 0
2016-06-01 09:31:08 TRACE BasicBinder:84 - binding parameter [2] as [DOUBLE] - 0.0
2016-06-01 09:31:08 TRACE BasicBinder:84 - binding parameter [3] as [DOUBLE] - 0.0
2016-06-01 09:31:08 TRACE BasicBinder:72 - binding parameter [4] as [TIMESTAMP] - <null>
2016-06-01 09:31:08 TRACE BasicBinder:84 - binding parameter [5] as [BIGINT] - 0
2016-06-01 09:31:08 TRACE BasicBinder:84 - binding parameter [6] as [BIGINT] - 0
2016-06-01 09:31:08 TRACE BasicBinder:84 - binding parameter [7] as [BIGINT] - 0
2016-06-01 09:31:08 TRACE BasicBinder:84 - binding parameter [8] as [BIGINT] - 0
2016-06-01 09:31:08 TRACE BasicBinder:84 - binding parameter [9] as [INTEGER] - 0
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:358 - Releasing statement [c065801d-0155-0a1f-282f-000004235ae8]
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:472 - Closing prepared statement [c065801d-0155-0a1f-282f-000004235ae8]
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:249 - Starting after statement execution processing [ON_CLOSE]
2016-06-01 09:31:08 DEBUG SQL:104 - values identity_val_local()
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:319 - Registering statement [787c0020-0155-0a1f-282f-000004235ae8]
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:374 - Registering result set [org.apache.derby.impl.jdbc.EmbedResultSet42#1718dbaa]
2016-06-01 09:31:08 DEBUG IdentifierGeneratorHelper:93 - Natively generated identity: 1
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:401 - Releasing result set [org.apache.derby.impl.jdbc.EmbedResultSet42#1718dbaa]
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:515 - Closing result set [org.apache.derby.impl.jdbc.EmbedResultSet42#1718dbaa]
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:358 - Releasing statement [787c0020-0155-0a1f-282f-000004235ae8]
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:472 - Closing prepared statement [787c0020-0155-0a1f-282f-000004235ae8]
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:249 - Starting after statement execution processing [ON_CLOSE]
2016-06-01 09:31:08 TRACE UnresolvedEntityInsertActions:214 - No unresolved entity inserts that depended on [[org.wso2.esbMonitor.network.PassThruHTTPBean#1]]
2016-06-01 09:31:08 TRACE UnresolvedEntityInsertActions:121 - No entity insert actions have non-nullable, transient entity dependencies.
2016-06-01 09:31:08 DEBUG AbstractTransactionImpl:173 - committing
2016-06-01 09:31:08 TRACE SessionImpl:403 - Automatically flushing session
2016-06-01 09:31:08 TRACE AbstractFlushingEventListener:82 - Flushing session
2016-06-01 09:31:08 DEBUG AbstractFlushingEventListener:144 - Processing flush-time cascades
2016-06-01 09:31:08 DEBUG AbstractFlushingEventListener:185 - Dirty checking collections
2016-06-01 09:31:08 TRACE AbstractFlushingEventListener:200 - Flushing entities and processing referenced collections
2016-06-01 09:31:08 TRACE AbstractFlushingEventListener:242 - Processing unreferenced collections
2016-06-01 09:31:08 TRACE AbstractFlushingEventListener:254 - Scheduling collection removes/(re)creates/updates
2016-06-01 09:31:08 DEBUG AbstractFlushingEventListener:118 - Flushed: 0 insertions, 0 updates, 0 deletions to 1 objects
2016-06-01 09:31:08 DEBUG AbstractFlushingEventListener:125 - Flushed: 0 (re)creations, 0 updates, 0 removals to 0 collections
2016-06-01 09:31:08 DEBUG EntityPrinter:114 - Listing entities:
2016-06-01 09:31:08 DEBUG EntityPrinter:121 - org.wso2.esbMonitor.network.PassThruHTTPBean{date=null, faultsRecieving=0, queueSize=0, activeThreadCount=0, avgSizeRecieved=0.0, messagesRecieved=0, id=1, avgSizeSent=0.0, faultSending=0, messageSent=0}
2016-06-01 09:31:08 TRACE AbstractFlushingEventListener:327 - Executing flush
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:249 - Starting after statement execution processing [ON_CLOSE]
2016-06-01 09:31:08 TRACE AbstractFlushingEventListener:359 - Post flush
2016-06-01 09:31:08 TRACE SessionImpl:612 - before transaction completion
2016-06-01 09:31:08 DEBUG JdbcTransaction:113 - committed JDBC Connection
2016-06-01 09:31:08 TRACE TransactionCoordinatorImpl:136 - after transaction completion
2016-06-01 09:31:08 TRACE SessionImpl:624 - after transaction completion
2016-06-01 09:31:08 TRACE AbstractFlushingEventListener:82 - Flushing session
2016-06-01 09:31:08 DEBUG AbstractFlushingEventListener:144 - Processing flush-time cascades
2016-06-01 09:31:08 DEBUG AbstractFlushingEventListener:185 - Dirty checking collections
2016-06-01 09:31:08 TRACE AbstractFlushingEventListener:200 - Flushing entities and processing referenced collections
2016-06-01 09:31:08 TRACE AbstractFlushingEventListener:242 - Processing unreferenced collections
2016-06-01 09:31:08 TRACE AbstractFlushingEventListener:254 - Scheduling collection removes/(re)creates/updates
2016-06-01 09:31:08 DEBUG AbstractFlushingEventListener:118 - Flushed: 0 insertions, 0 updates, 0 deletions to 1 objects
2016-06-01 09:31:08 DEBUG AbstractFlushingEventListener:125 - Flushed: 0 (re)creations, 0 updates, 0 removals to 0 collections
2016-06-01 09:31:08 DEBUG EntityPrinter:114 - Listing entities:
2016-06-01 09:31:08 DEBUG EntityPrinter:121 - org.wso2.esbMonitor.network.PassThruHTTPBean{date=null, faultsRecieving=0, queueSize=0, activeThreadCount=0, avgSizeRecieved=0.0, messagesRecieved=0, id=1, avgSizeSent=0.0, faultSending=0, messageSent=0}
2016-06-01 09:31:08 TRACE AbstractFlushingEventListener:327 - Executing flush
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:249 - Starting after statement execution processing [ON_CLOSE]
2016-06-01 09:31:08 TRACE AbstractFlushingEventListener:359 - Post flush
2016-06-01 09:31:08 TRACE SessionImpl:342 - Closing session
2016-06-01 09:31:08 TRACE JdbcCoordinatorImpl:171 - Closing JDBC container [org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl#4e6454a]
2016-06-01 09:31:08 TRACE LogicalConnectionImpl:164 - Closing logical connection
2016-06-01 09:31:08 DEBUG LogicalConnectionImpl:232 - Releasing JDBC connection
2016-06-01 09:31:08 TRACE DriverManagerConnectionProviderImpl:233 - Returning connection to pool, pool size: 1
2016-06-01 09:31:08 DEBUG LogicalConnectionImpl:250 - Released JDBC connection
2016-06-01 09:31:08 TRACE LogicalConnectionImpl:176 - Logical connection closed
Session factory
public class HibernateSessionCreator {
    private static SessionFactory ourSessionFactory;
    private static ServiceRegistry serviceRegistry;

    public static void init() {
        try {
            Configuration configuration = new Configuration();
            configuration.configure();
            serviceRegistry = new ServiceRegistryBuilder().applySettings(configuration.getProperties()).buildServiceRegistry();
            ourSessionFactory = new AnnotationConfiguration()
                    .addAnnotatedClass(PassThruHTTPBean.class)
                    .configure()
                    .buildSessionFactory(serviceRegistry);
        } catch (Throwable ex) {
            throw new ExceptionInInitializerError(ex);
        }
    }

    public static Session getSession() throws HibernateException {
        return ourSessionFactory.openSession();
    }
}
Please help me solve the problem :) Thank you in advance
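One detail in the configuration above that matches this exact symptom: hibernate.hbm2ddl.auto=create drops and re-creates the schema every time the SessionFactory starts, so rows written during a previous run are gone the next time the application starts. If the data should survive restarts, update is the usual setting (a sketch of just that property):
<!-- keep existing tables and data, apply schema changes only -->
<property name="hibernate.hbm2ddl.auto">update</property>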

Hibernate batch size confusion

This program does tens of thousands of consecutive inserts, one after the other. I've never used Hibernate before, and I'm getting extremely slow performance (if I just connect and execute the SQL manually I am 10-12x quicker). My batch_size is set to 50, as per many Hibernate tutorials.
Here is a log from a single insert - perhaps you could help me understand exactly what is happening:
START INSERT
11:02:56.121 [main] DEBUG org.hibernate.impl.SessionImpl - opened session at timestamp: 13106053761
11:02:56.121 [main] DEBUG o.h.transaction.JDBCTransaction - begin
11:02:56.121 [main] DEBUG org.hibernate.jdbc.ConnectionManager - opening JDBC connection
11:02:56.121 [main] TRACE o.h.c.DriverManagerConnectionProvider - total checked-out connections: 0
11:02:56.121 [main] TRACE o.h.c.DriverManagerConnectionProvider - using pooled JDBC connection, pool size: 0
11:02:56.121 [main] DEBUG o.h.transaction.JDBCTransaction - current autocommit status: false
11:02:56.121 [main] TRACE org.hibernate.jdbc.JDBCContext - after transaction begin
11:02:56.121 [main] TRACE org.hibernate.impl.SessionImpl - setting flush mode to: MANUAL
11:02:56.121 [main] TRACE o.h.e.def.DefaultLoadEventListener - loading entity: [com.xyzcompany.foo.edoi.ejb.msw000.MSW000Rec#component[keyW000]{keyW000=F000 ADSUFC}]
11:02:56.121 [main] TRACE o.h.e.def.DefaultLoadEventListener - creating new proxy for entity
11:02:56.122 [main] TRACE o.h.e.d.DefaultSaveOrUpdateEventListener - saving transient instance
11:02:56.122 [main] DEBUG o.h.e.def.AbstractSaveEventListener - generated identifier: component[keyW000]{keyW000=F000 ADSUFC}, using strategy: org.hibernate.id.CompositeNestedGeneratedValueGenerator
11:02:56.122 [main] TRACE o.h.e.def.AbstractSaveEventListener - saving [com.xyzcompany.foo.edoi.ejb.msw000.MSW000Rec#component[keyW000]{keyW000=F000 ADSUFC}]
11:02:56.123 [main] TRACE o.h.e.d.AbstractFlushingEventListener - flushing session
11:02:56.123 [main] DEBUG o.h.e.d.AbstractFlushingEventListener - processing flush-time cascades
11:02:56.123 [main] DEBUG o.h.e.d.AbstractFlushingEventListener - dirty checking collections
11:02:56.123 [main] TRACE o.h.e.d.AbstractFlushingEventListener - Flushing entities and processing referenced collections
11:02:56.125 [main] TRACE o.h.e.d.AbstractFlushingEventListener - Processing unreferenced collections
11:02:56.125 [main] TRACE o.h.e.d.AbstractFlushingEventListener - Scheduling collection removes/(re)creates/updates
11:02:56.126 [main] DEBUG o.h.e.d.AbstractFlushingEventListener - Flushed: 1 insertions, 0 updates, 0 deletions to 62 objects
11:02:56.126 [main] DEBUG o.h.e.d.AbstractFlushingEventListener - Flushed: 0 (re)creations, 0 updates, 0 removals to 0 collections
11:02:56.132 [main] TRACE o.h.e.d.AbstractFlushingEventListener - executing flush
11:02:56.132 [main] TRACE org.hibernate.jdbc.ConnectionManager - registering flush begin
11:02:56.132 [main] TRACE o.h.p.entity.AbstractEntityPersister - Inserting entity: [com.xyzcompany.foo.edoi.ejb.msw000.MSW000Rec#component[keyW000]{keyW000=F000 ADSUFC}]
11:02:56.132 [main] DEBUG org.hibernate.jdbc.AbstractBatcher - about to open PreparedStatement (open PreparedStatements: 0, globally: 0)
11:02:56.132 [main] DEBUG org.hibernate.SQL - insert into MSW000 (W000_DATA_REC, W000_FILE_FLAGS, KEY_W000) values (?, ?, ?)
11:02:56.132 [main] TRACE org.hibernate.jdbc.AbstractBatcher - preparing statement
11:02:56.132 [main] TRACE o.h.p.entity.AbstractEntityPersister - Dehydrating entity: [com.xyzcompany.foo.edoi.ejb.msw000.MSW000Rec#component[keyW000]{keyW000=F000 ADSUFC}]
11:02:56.132 [main] TRACE org.hibernate.type.StringType - binding ' ADSUFCA ' to parameter: 1
11:02:56.132 [main] TRACE org.hibernate.type.StringType - binding ' ' to parameter: 2
11:02:56.132 [main] TRACE org.hibernate.type.StringType - binding 'F000 ADSUFC' to parameter: 3
11:02:56.132 [main] DEBUG org.hibernate.jdbc.AbstractBatcher - Executing batch size: 1
11:02:56.133 [main] DEBUG org.hibernate.jdbc.AbstractBatcher - about to close PreparedStatement (open PreparedStatements: 1, globally: 1)
11:02:56.133 [main] TRACE org.hibernate.jdbc.AbstractBatcher - closing statement
11:02:56.133 [main] TRACE org.hibernate.jdbc.ConnectionManager - registering flush end
11:02:56.133 [main] TRACE o.h.e.d.AbstractFlushingEventListener - post flush
11:02:56.133 [main] DEBUG o.h.transaction.JDBCTransaction - commit
11:02:56.133 [main] TRACE org.hibernate.impl.SessionImpl - automatically flushing session
11:02:56.133 [main] TRACE org.hibernate.jdbc.JDBCContext - before transaction completion
11:02:56.133 [main] TRACE org.hibernate.impl.SessionImpl - before transaction completion
11:02:56.133 [main] DEBUG o.h.transaction.JDBCTransaction - committed JDBC Connection
11:02:56.133 [main] TRACE org.hibernate.jdbc.JDBCContext - after transaction completion
11:02:56.133 [main] DEBUG org.hibernate.jdbc.ConnectionManager - transaction completed on session with on_close connection release mode; be sure to close the session to release JDBC resources!
11:02:56.133 [main] TRACE org.hibernate.impl.SessionImpl - after transaction completion
11:02:56.133 [main] TRACE org.hibernate.impl.SessionImpl - closing session
11:02:56.133 [main] TRACE org.hibernate.jdbc.ConnectionManager - performing cleanup
11:02:56.133 [main] DEBUG org.hibernate.jdbc.ConnectionManager - releasing JDBC connection [ (open PreparedStatements: 0, globally: 0) (open ResultSets: 0, globally: 0)]
11:02:56.133 [main] TRACE o.h.c.DriverManagerConnectionProvider - returning connection to pool, pool size: 1
11:02:56.133 [main] TRACE org.hibernate.jdbc.JDBCContext - after transaction completion
11:02:56.133 [main] DEBUG org.hibernate.jdbc.ConnectionManager - transaction completed on session with on_close connection release mode; be sure to close the session to release JDBC resources!
11:02:56.134 [main] TRACE org.hibernate.impl.SessionImpl - after transaction completion
FINISH INSERT
When you call session.save(), Hibernate generates an INSERT statement. That INSERT is not sent immediately; it is queued and issued to the DB during flushing (i.e. session.flush()).
During flushing, if hibernate.jdbc.batch_size is set to some non-zero value, Hibernate uses the batching feature introduced in the JDBC 2.0 API to issue the queued insert statements in batches.
For example, suppose you save() 100 records and hibernate.jdbc.batch_size is set to 50. During flushing, instead of issuing the following SQL 100 times:
insert into TableA (id , fields) values (1, 'val1');
insert into TableA (id , fields) values (2, 'val2');
insert into TableA (id , fields) values (3, 'val3');
.........................
insert into TableA (id , fields) values (100, 'val100');
Hibernate groups them into batches of 50 and issues only 2 statements to the DB, conceptually like this:
insert into TableA (id , fields) values (1, 'val1') , (2, 'val2') ,(3, 'val3') ,(4, 'val4') ,......,(50, 'val50')
insert into TableA (id , fields) values (51, 'val51') , (52, 'val52') ,(53, 'val53') ,(54, 'val54'),...... ,(100, 'val100')
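Two caveats on that illustration. First, whether the driver literally rewrites a batch into one multi-row VALUES list is driver-specific: MySQL's Connector/J, for example, only does that when rewriteBatchedStatements=true is set on the JDBC URL; without it, Connector/J executes the batched statements one by one under the hood. Second, what Hibernate actually uses underneath is the standard JDBC batching API. A minimal plain-JDBC sketch of that mechanism (TableA and its columns are the made-up names from the example above):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

static void insertInBatches(Connection connection) throws SQLException {
    try (PreparedStatement ps = connection.prepareStatement(
            "insert into TableA (id, fields) values (?, ?)")) {
        for (int i = 1; i <= 100; i++) {
            ps.setInt(1, i);            // bind this row's parameters
            ps.setString(2, "val" + i);
            ps.addBatch();              // queue the bound row
            if (i % 50 == 0) {
                ps.executeBatch();      // ship 50 queued inserts at once
            }
        }
    }
}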
Please note that Hibernate transparently disables insert batching at the JDBC level if the primary key of the table being inserted uses identity generation (GenerationType.IDENTITY), because it must read the generated identifier back after each insert.
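For completeness, the batch size this answer keeps referring to is a plain Hibernate property; a minimal sketch in hibernate.properties form (the hibernate.cfg.xml <property> equivalent works the same way):
# issue inserts/updates in JDBC batches of 50
hibernate.jdbc.batch_size=50
# optional: order statements by entity so mixed saves don't break batches apart
hibernate.order_inserts=true
hibernate.order_updates=true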
From your log: you save() only one record and then flush(), so there is only one pending INSERT per flush. That's why Hibernate cannot batch anything for you: there is never more than one INSERT to process. You should save() a batch of records before calling flush(), instead of calling flush() after every save().
The usual best practice for batch inserting looks like this:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 888888; i++) {
    TableA record = new TableA();
    record.setXXXX();
    session.save(record);
    if (i % 50 == 0) { // 50, same as the JDBC batch size
        // flush a batch of inserts and release memory:
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();
You save and flush the records batch by batch. At the end of each batch you should clear the persistence context to release memory, because every persistent object is kept in the first-level cache (your JVM's memory) and tens of thousands of them will eventually exhaust it. You could also disable the second-level cache to avoid unnecessary overhead.
References:
Official Hibernate documentation: Chapter 14. Batch processing
Hibernate Batch Processing – Why you may not be using it. (Even if you think you are)
If you must use Hibernate for huge batch jobs, StatelessSession is the way to go. It strips things down to the most basic objects-to-SQL-statements mapping and eliminates all of the overhead of the ORM features you're not using when you're just cramming rows into the DB wholesale.
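A minimal sketch of that approach, reusing the hypothetical TableA and sessionFactory from the answer above (a StatelessSession has no persistence context, so none of the flush()/clear() bookkeeping is needed):
import org.hibernate.StatelessSession;
import org.hibernate.Transaction;

StatelessSession ss = sessionFactory.openStatelessSession();
Transaction tx = ss.beginTransaction();
for (int i = 0; i < 888888; i++) {
    TableA record = new TableA();
    record.setXXXX();   // placeholder setter, as in the example above
    ss.insert(record);  // issues the INSERT immediately; nothing is cached
}
tx.commit();
ss.close();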
It would also be much easier to make suggestions on your actual code than the log :)
11:02:56.133 [main] DEBUG o.h.transaction.JDBCTransaction - commit
This says the database is committing after every insert. Make sure you are not committing your transaction or closing your session inside the insert loop; do that once at the end instead.
