GlassFish server hangs after some time - Java

I am facing a new problem with the GlassFish server now :(
After the server starts successfully, it hangs after some time and stops processing requests.
The following are the exceptions in the server log:
[#|2011-05-08T17:44:34.027+0300|WARNING|sun-appserver2.1|javax.enterprise.system.stream.err|_ThreadID=17;_ThreadName=Timer-18;_RequestID=a47329c8-5efa-4e29-b90d-a5bda9c8f725;|
java.util.logging.ErrorManager: 5: Error in formatting Logrecord|#]
[#|2011-05-08T17:44:34.036+0300|WARNING|sun-appserver2.1|javax.enterprise.system.stream.err|_ThreadID=17;_ThreadName=Timer-18;_RequestID=a47329c8-5efa-4e29-b90d-a5bda9c8f725;|
java.security.ProviderException: implNextBytes() failed
at sun.security.pkcs11.P11SecureRandom.implNextBytes(P11SecureRandom.java:170)
at sun.security.pkcs11.P11SecureRandom.engineNextBytes(P11SecureRandom.java:117)
at java.security.SecureRandom.nextBytes(SecureRandom.java:413)
at java.util.UUID.randomUUID(UUID.java:161)
at com.sun.enterprise.admin.monitor.callflow.AgentImpl$ThreadLocalState.<init>(AgentImpl.java:162)
at com.sun.enterprise.admin.monitor.callflow.AgentImpl$1.initialValue(AgentImpl.java:256)
at com.sun.enterprise.admin.monitor.callflow.AgentImpl$1.initialValue(AgentImpl.java:255)
at java.lang.ThreadLocal$ThreadLocalMap.getAfterMiss(ThreadLocal.java:374)
at java.lang.ThreadLocal$ThreadLocalMap.get(ThreadLocal.java:347)
at java.lang.ThreadLocal$ThreadLocalMap.access$000(ThreadLocal.java:225)
at java.lang.ThreadLocal.get(ThreadLocal.java:127)
at com.sun.enterprise.admin.monitor.callflow.AgentImpl.getThreadLocalData(AgentImpl.java:1070)
at com.sun.enterprise.server.logging.UniformLogFormatter.uniformLogFormat(UniformLogFormatter.java:320)
at com.sun.enterprise.server.logging.UniformLogFormatter.format(UniformLogFormatter.java:151)
at java.util.logging.StreamHandler.publish(StreamHandler.java:179)
at com.sun.enterprise.server.logging.FileandSyslogHandler.publish(FileandSyslogHandler.java:512)
at java.util.logging.Logger.log(Logger.java:440)
at java.util.logging.Logger.doLog(Logger.java:462)
at java.util.logging.Logger.log(Logger.java:526)
at com.sun.enterprise.deployment.autodeploy.AutoDeployControllerImpl$AutoDeployTask.run(AutoDeployControllerImpl.java:391)
at java.util.TimerThread.mainLoop(Timer.java:512)
at java.util.TimerThread.run(Timer.java:462)
Caused by: sun.security.pkcs11.wrapper.PKCS11Exception: CKR_DEVICE_ERROR
at sun.security.pkcs11.wrapper.PKCS11.C_GenerateRandom(Native Method)
at sun.security.pkcs11.P11SecureRandom.implNextBytes(P11SecureRandom.java:167)
... 21 more
|#]
[#|2011-05-08T17:44:36.034+0300|SEVERE|sun-appserver2.1|javax.enterprise.system.tools.deployment|_ThreadID=17;_ThreadName=Timer-18;_RequestID=a47329c8-5efa-4e29-b90d-a5bda9c8f725;|"DPL8011: autodeployment failure while deploying the application : null"|#]
[#|2011-05-08T17:44:38.034+0300|SEVERE|sun-appserver2.1|javax.enterprise.system.tools.deployment|_ThreadID=17;_ThreadName=Timer-18;_RequestID=a47329c8-5efa-4e29-b90d-a5bda9c8f725;|"DPL8011: autodeployment failure while deploying the application : null"|#]
********
When it is in the hung state, I took a thread dump and found one deadlock, as below:
Found one Java-level deadlock:
=============================
"SelectorThread-4242":
waiting to lock monitor 0x0040ef60 (object 0x56cd6dd0, a org.apache.jasper.util.SystemLogHandler),
which is held by "Timer-2"
"Timer-2":
waiting to lock monitor 0x0040f5d8 (object 0x568bc440, a com.sun.enterprise.server.logging.SystemOutandErrHandler$LoggingPrintStream),
which is held by "SelectorThread-4242"
Java stack information for the threads listed above:
===================================================
"SelectorThread-4242":
at java.lang.Throwable.printStackTrace(Throwable.java:461)
- waiting to lock <0x56cd6dd0> (a org.apache.jasper.util.SystemLogHandler)
at java.lang.Throwable.printStackTrace(Throwable.java:452)
at java.util.logging.ErrorManager.error(ErrorManager.java:78)
- locked <0x5a4fe388> (a java.util.logging.ErrorManager)
at com.sun.enterprise.server.logging.UniformLogFormatter.uniformLogFormat(UniformLogFormatter.java:364)
at com.sun.enterprise.server.logging.UniformLogFormatter.format(UniformLogFormatter.java:151)
at java.util.logging.StreamHandler.publish(StreamHandler.java:179)
- locked <0x568825e0> (a com.sun.enterprise.server.logging.FileandSyslogHandler)
at com.sun.enterprise.server.logging.FileandSyslogHandler.publish(FileandSyslogHandler.java:512)
- locked <0x568825e0> (a com.sun.enterprise.server.logging.FileandSyslogHandler)
at java.util.logging.Logger.log(Logger.java:440)
at java.util.logging.Logger.doLog(Logger.java:462)
at java.util.logging.Logger.log(Logger.java:485)
at com.sun.enterprise.server.logging.SystemOutandErrHandler$LoggingByteArrayOutputStream.flush(SystemOutandErrHandler.java:368)
- locked <0x568b84f0> (a com.sun.enterprise.server.logging.SystemOutandErrHandler$LoggingByteArrayOutputStream)
at java.io.PrintStream.write(PrintStream.java:414)
- locked <0x568bc440> (a com.sun.enterprise.server.logging.SystemOutandErrHandler$LoggingPrintStream)
at com.sun.enterprise.server.logging.SystemOutandErrHandler$LoggingPrintStream.write(SystemOutandErrHandler.java:293)
at sun.nio.cs.StreamEncoder$CharsetSE.writeBytes(StreamEncoder.java:336)
at sun.nio.cs.StreamEncoder$CharsetSE.implFlushBuffer(StreamEncoder.java:404)
at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:115)
- locked <0x568bc470> (a java.io.OutputStreamWriter)
at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:169)
at java.io.PrintStream.write(PrintStream.java:459)
- locked <0x568bc440> (a com.sun.enterprise.server.logging.SystemOutandErrHandler$LoggingPrintStream)
at java.io.PrintStream.print(PrintStream.java:602)
at com.sun.enterprise.server.logging.SystemOutandErrHandler$LoggingPrintStream.print(SystemOutandErrHandler.java:205)
at java.io.PrintStream.println(PrintStream.java:739)
- locked <0x568bc440> (a com.sun.enterprise.server.logging.SystemOutandErrHandler$LoggingPrintStream)
at com.sun.enterprise.server.logging.SystemOutandErrHandler$LoggingPrintStream.println(SystemOutandErrHandler.java:187)
at org.apache.jasper.util.SystemLogHandler.println(SystemLogHandler.java:254)
at org.apache.jasper.util.SystemLogHandler.println(SystemLogHandler.java:254)
at org.apache.jasper.util.SystemLogHandler.println(SystemLogHandler.java:254)
at org.apache.jasper.util.SystemLogHandler.println(SystemLogHandler.java:254)
at java.util.logging.ErrorManager.error(ErrorManager.java:76)
- locked <0x5a4fe3a8> (a java.util.logging.ErrorManager)
at com.sun.enterprise.server.logging.UniformLogFormatter.uniformLogFormat(UniformLogFormatter.java:364)
at com.sun.enterprise.server.logging.UniformLogFormatter.format(UniformLogFormatter.java:151)
at com.sun.enterprise.server.logging.AMXLoggingHook.publish(AMXLoggingHook.java:198)
at com.sun.enterprise.server.logging.FileandSyslogHandler.publish(FileandSyslogHandler.java:531)
- locked <0x568825e0> (a com.sun.enterprise.server.logging.FileandSyslogHandler)
at java.util.logging.Logger.log(Logger.java:440)
at java.util.logging.Logger.doLog(Logger.java:462)
at java.util.logging.Logger.log(Logger.java:551)
at com.sun.enterprise.web.connector.grizzly.SelectorThread.doSelect(SelectorThread.java:1445)
at com.sun.enterprise.web.connector.grizzly.SelectorThread.startListener(SelectorThread.java:1316)
- locked <0x56cdf878> (a [Ljava.lang.Object;)
at com.sun.enterprise.web.connector.grizzly.SelectorThread.startEndpoint(SelectorThread.java:1279)
at com.sun.enterprise.web.connector.grizzly.SelectorThread.run(SelectorThread.java:1255)
"Timer-2":
at java.io.PrintStream.println(PrintStream.java:738)
- waiting to lock <0x568bc440> (a com.sun.enterprise.server.logging.SystemOutandErrHandler$LoggingPrintStream)
at com.sun.enterprise.server.logging.SystemOutandErrHandler$LoggingPrintStream.println(SystemOutandErrHandler.java:178)
at org.apache.jasper.util.SystemLogHandler.println(SystemLogHandler.java:258)
at org.apache.jasper.util.SystemLogHandler.println(SystemLogHandler.java:258)
at org.apache.jasper.util.SystemLogHandler.println(SystemLogHandler.java:258)
at org.apache.jasper.util.SystemLogHandler.println(SystemLogHandler.java:258)
at java.lang.Throwable.printStackTrace(Throwable.java:462)
- locked <0x56cd6dd0> (a org.apache.jasper.util.SystemLogHandler)
at java.lang.Throwable.printStackTrace(Throwable.java:452)
at com.sun.jbi.management.system.AdminService.heartBeat(AdminService.java:975)
at com.sun.jbi.management.system.AdminService.handleNotification(AdminService.java:198)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor$ListenerWrapper.handleNotification(DefaultMBeanServerInterceptor.java:1652)
at javax.management.NotificationBroadcasterSupport.handleNotification(NotificationBroadcasterSupport.java:221)
at javax.management.NotificationBroadcasterSupport.sendNotification(NotificationBroadcasterSupport.java:184)
at javax.management.timer.Timer.sendNotification(Timer.java:1295)
- locked <0x57d2e590> (a javax.management.timer.TimerNotification)
at javax.management.timer.Timer.notifyAlarmClock(Timer.java:1264)
at javax.management.timer.TimerAlarmClock.run(Timer.java:1347)
at java.util.TimerThread.mainLoop(Timer.java:512)
at java.util.TimerThread.run(Timer.java:462)
Found 1 deadlock.
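The two stacks above show a classic lock-order inversion: SelectorThread-4242 holds the LoggingPrintStream monitor and waits for the SystemLogHandler monitor, while Timer-2 holds them in the opposite order. A minimal sketch of the same hazard, with plain Object locks standing in for the GlassFish classes (an illustration of the pattern, not the actual server code):

```java
// logHandler and printStream are stand-ins for the SystemLogHandler and
// LoggingPrintStream monitors from the thread dump above.
public class LockOrdering {
    static final Object logHandler = new Object();
    static final Object printStream = new Object();
    static int calls = 0;

    // What SelectorThread-4242 effectively does: print, then log.
    static void writeThenLog() {
        synchronized (printStream) {
            synchronized (logHandler) { calls++; }
        }
    }

    // What Timer-2 effectively does: the opposite order. If both methods run
    // concurrently, each thread can take its first monitor and then block
    // forever on the other one; that is exactly the deadlock reported here.
    static void logThenWrite() {
        synchronized (logHandler) {
            synchronized (printStream) { calls++; }
        }
    }

    // The generic fix: acquire the monitors in one global order everywhere.
    static void logThenWriteFixed() {
        synchronized (printStream) {      // same order as writeThenLog()
            synchronized (logHandler) { calls++; }
        }
    }
}
```

In the server itself the cycle runs through java.util.logging's ErrorManager printing to the wrapped System.err (visible in both stacks), so the fix above is only illustrative of the lock-ordering principle.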
Please suggest.
Thanks,
Ali

I got the same issue.
I resolved it: my .properties file contained a single quote ('), so when the Jersey logger tried to log it, it could not parse the JSON sent back to the server.
So look at what kind of String Jersey is trying to log, and be careful with special characters.
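One likely mechanism (an assumption; the answer does not say which parser failed): if the string goes through java.text.MessageFormat, as many logging and .properties pipelines do, a lone single quote acts as an escape character, swallowing the rest of the pattern and disabling {0}-style placeholders. A small demonstration with made-up pattern strings:

```java
import java.text.MessageFormat;

// MessageFormat treats ' as the start of a quoted (literal) section; an
// unmatched quote runs to the end of the pattern, so placeholders after it
// are never substituted. Doubling it ('') yields a literal quote.
public class QuoteDemo {
    static String broken() {
        // the ' swallows everything after it, including {0}
        return MessageFormat.format("can't load {0}", "app.json");
    }
    static String fixed() {
        return MessageFormat.format("can''t load {0}", "app.json");
    }
}
```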

Related

Kafka Consumer Heartbeat Thread Block on Java Nio

Software library versions in use: Spring 2.2.3.RELEASE, Spring Kafka 2.x, Kafka-Client 2.0.0
Java version: OpenJDK 1.8
Platform Kafka version: Apache Kafka 2.2.0 (https://docs.confluent.io/5.2.1/release-notes.html#apache-kafka-2-2-0-cp2)
Configuration: The topic has 6 partitions.
Data rate: The incoming data rate is 3-4 msgs/second.
Scenario:
With the above configuration, a test ran continuously for 3 days. After 2.5 days, I encountered a problem: messages from the topic were no longer being received by our consumer group.
Upon detailed investigation, I found that consumer threads were blocked; precisely, 18 threads were in the BLOCKED state. Also, the thread graph suggests that consumer threads were waiting on "java.nio.
PFA logs and screenshots.
a.) Logs
"kafka-coordinator-heartbeat-thread | release-registry-group" #128 daemon prio=5 os_prio=0 tid=0x00007f1954001800 nid=0x8a waiting for monitor entry [0x00007f197f7f6000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1005)
- waiting to lock <0x00000000c3382070> (a org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
"org.springframework.kafka.KafkaListenerEndpointContainer#1-2-C-1" #127 prio=5 os_prio=0 tid=0x00007f1b6dbc1800 nid=0x89 runnable [0x00007f197f8f6000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000c33822d8> (a sun.nio.ch.Util$3)
- locked <0x00000000c33822c8> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000c33822e8> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at org.apache.kafka.common.network.Selector.select(Selector.java:689)
at org.apache.kafka.common.network.Selector.poll(Selector.java:409)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:271)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:218)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:230)
- locked <0x00000000c3382070> (a org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:314)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:732)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:689)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
b.) Screenshots
I would like to understand the root cause and possible solutions to fix it. Thanks in advance.
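For reference, the dump shows the heartbeat thread waiting for the ConsumerCoordinator monitor while the polling thread holds it inside ensureCoordinatorReady, so the consumer's timing settings matter here. A minimal sketch of the relevant consumer properties (the keys are standard Kafka consumer configs; the values are illustrative, not a verified fix for this particular bug):

```java
import java.util.Properties;

public class ConsumerTuning {
    static Properties consumerProps() {
        Properties p = new Properties();
        p.setProperty("bootstrap.servers", "localhost:9092");   // placeholder
        p.setProperty("group.id", "release-registry-group");    // from the dump
        // heartbeat.interval.ms must stay well below session.timeout.ms so
        // the heartbeat thread keeps the session alive whenever it can run
        p.setProperty("session.timeout.ms", "10000");
        p.setProperty("heartbeat.interval.ms", "3000");
        // the listener must call poll() again within max.poll.interval.ms,
        // or the consumer is ejected from the group and stops receiving
        p.setProperty("max.poll.interval.ms", "300000");
        return p;
    }
}
```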

AsyncHttpClient creates too many AsyncHttpClient-timer threads

I am using AsyncHttpClient 2.3.0 and Default configuration.
I've noticed that AHC created two types of threads (from the thread dump):
1)
"AsyncHttpClient-timer-478-1" - Thread t#30390 java.lang.Thread.State: TIMED_WAITING
at java.lang.Thread.$$YJP$$sleep(Native Method)
at java.lang.Thread.sleep(Thread.java)
at io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:560)
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:459)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
2)
"AsyncHttpClient-3-4" - Thread t#20320 java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.$$YJP$$epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.epollWait(EPollArrayWrapper.java)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <16163575> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <49280039> (a java.util.Collections$UnmodifiableSet)
- locked <2decd496> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
I expected AsyncHttpClient to use only a few threads under the hood. But after a few days of running, AsyncHttpClient has created ~500 AsyncHttpClient-timer-xxx-x threads and a few AsyncHttpClient-x-x threads.
It is not called very intensively, probably also ~500 times over this period.
Only executeRequest is used (execute request and get on returned future) https://static.javadoc.io/org.asynchttpclient/async-http-client/2.3.0/org/asynchttpclient/AsyncHttpClient.html#executeRequest-org.asynchttpclient.Request-org.asynchttpclient.AsyncHandler-:
<T> ListenableFuture<T> executeRequest(Request request, AsyncHandler<T> handler);
I've seen a page about connection pool configuration (https://github.com/AsyncHttpClient/async-http-client/wiki/Connection-pooling) but nothing about thread pool configuration.
What is the difference between both types of thread and what can cause a large number of threads created? Is there any configuration I should apply?
AHC has two types of threads:
For I/O operations: in your dump, these are the AsyncHttpClient-x-x threads. AHC creates 2*core_number of those.
For timeouts: in your dump, this is the AsyncHttpClient-timer-1-1 thread. There should be only one.
Any different number means you're creating multiple clients.
Source: issue on GitHub https://github.com/AsyncHttpClient/async-http-client/issues/1658
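The practical consequence of that answer: create one AsyncHttpClient at startup (e.g. via Dsl.asyncHttpClient()) and share it, instead of building a client per request, since each client owns its own HashedWheelTimer thread. The leak shape can be sketched with a JDK-only stand-in, where ScheduledExecutorService plays the role of the per-client timer:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class TimerLeakDemo {
    // Anti-pattern: a new timer per "client". 500 clients means 500 timer
    // threads, which is the growth seen in the question.
    static ScheduledExecutorService perCallTimer() {
        return Executors.newSingleThreadScheduledExecutor();
    }

    // Fix: one shared timer (one shared client) for the whole application,
    // closed once at shutdown.
    static final ScheduledExecutorService SHARED =
            Executors.newSingleThreadScheduledExecutor();
}
```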

ActiveMQ 5.9 - BLOCKED threads on broker due to lock not released

I'm using ActiveMQ 5.9 with Camel 2.10.3, and under load (during a performance test) I'm experiencing some problems: it seems that the broker is stuck trying to close a connection, and I can't understand the reason.
The configuration of the JMS system is the following: there are two brokers (configured in failover mode) and many client nodes that act both as consumers and producers of some queues (take one for example: the 'customer_update' queue).
I'm using PooledConnectionFactory with the default configuration, the 'CACHE_CONSUMER' cache level, and at most 10 concurrent consumers per client node.
Broker configuration is the following: tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600
Here's the thread holding the lock on the broker and never releasing it:
"ActiveMQ Transport: tcp:///10.128.43.206:38694#61616" #5774 daemon prio=5 os_prio=0 tid=0x00007f2a4424e800 nid=0xaba4 waiting on condition [0x00007f29fe397000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000004fd008fb0> (a java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at org.apache.activemq.broker.TransportConnection.stop(TransportConnection.java:983)
at org.apache.activemq.broker.TransportConnection.processAddConnection(TransportConnection.java:699)
- locked <0x000000050401eed0> (a java.lang.Object)
at org.apache.activemq.broker.jmx.ManagedTransportConnection.processAddConnection(ManagedTransportConnection.java:79)
at org.apache.activemq.command.ConnectionInfo.visit(ConnectionInfo.java:139)
at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:292)
at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:149)
at org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
at org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:214)
at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:196)
at java.lang.Thread.run(Thread.java:748)
I have more than 500 other threads on the broker that look like this:
"ActiveMQ Transport: tcp:///10.128.43.206:52074#61616" #2962 daemon prio=5 os_prio=0 tid=0x00007f2a440c9000 nid=0xa01f waiting for monitor entry [0x00007f29fc768000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.activemq.broker.TransportConnection.processAddConnection(TransportConnection.java:696)
- waiting to lock <0x000000050401eed0> (a java.lang.Object)
at org.apache.activemq.broker.jmx.ManagedTransportConnection.processAddConnection(ManagedTransportConnection.java:79)
at org.apache.activemq.command.ConnectionInfo.visit(ConnectionInfo.java:139)
at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:292)
at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:149)
at org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
at org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:214)
at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:196)
at java.lang.Thread.run(Thread.java:748)
The first error I see on the broker is this one:
2018-05-16 16:36:59,336 [org.apache.activemq.broker.TransportConnection.Transport:856]
WARN - Transport Connection to: tcp://10.128.43.206:48747 failed: java.io.EOFException
On the client node referenced by the broker (10.128.43.206), I see the logs below; it seems that the node tries to reconnect, but right after, it's disconnected again, and this happens over and over.
2018-05-16 16:36:59,322 [org.apache.activemq.transport.failover.FailoverTransport:856] WARN - Transport (tcp://10.128.43.169:61616) failed, reason: java.io.IOException, attempting to automatically reconnect
2018-05-16 16:36:59,322 [org.apache.activemq.transport.failover.FailoverTransport:856] WARN - Transport (tcp://10.128.43.169:61616) failed, reason: java.io.IOException, attempting to automatically reconnect
2018-05-16 16:36:59,375 [org.apache.activemq.transport.failover.FailoverTransport:856] INFO - Successfully reconnected to tcp://10.128.43.169:61616
2018-05-16 16:36:59,375 [org.apache.activemq.transport.failover.FailoverTransport:856] INFO - Successfully reconnected to tcp://10.128.43.169:61616
2018-05-16 16:36:59,375 [org.apache.activemq.TransactionContext:856] INFO - commit failed for transaction TX:ID:52374-1526300283331-1:1:898
javax.jms.TransactionRolledBackException: Transaction completion in doubt due to failover. Forcing rollback of TX:ID:52374-1526300283331-1:1:898
at org.apache.activemq.state.ConnectionStateTracker.restoreTransactions(ConnectionStateTracker.java:231)
at org.apache.activemq.state.ConnectionStateTracker.restore(ConnectionStateTracker.java:169)
at org.apache.activemq.transport.failover.FailoverTransport.restoreTransport(FailoverTransport.java:827)
at org.apache.activemq.transport.failover.FailoverTransport.doReconnect(FailoverTransport.java:1005)
at org.apache.activemq.transport.failover.FailoverTransport$2.iterate(FailoverTransport.java:136)
at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129)
at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-05-16 16:37:00,091 [org.apache.activemq.transport.failover.FailoverTransport:856] INFO - Successfully reconnected to tcp://10.128.43.169:61616
2018-05-16 16:37:00,091 [org.apache.activemq.transport.failover.FailoverTransport:856] INFO - Successfully reconnected to tcp://10.128.43.169:61616
2018-05-16 16:37:00,112 [org.apache.activemq.transport.failover.FailoverTransport:856] WARN - Transport (tcp://10.128.43.169:61616) failed, reason: java.io.IOException, attempting to automatically reconnect
In the end, the broker reaches the maximum available connections (1000) and needs to be restarted.
Could it be that one client node acting both as a consumer and a producer, using the same connection pool, generates some kind of deadlock?
Do you have some suggestions?
Thanks
Giulio
Most probably I was affected by this issue:
https://issues.apache.org/jira/browse/AMQ-5090
Updating to ActiveMQ 5.10.0 and Camel 2.13.1 solved the issue (the system is much more stable, even during performance tests).
Thanks
Giulio

Out of memory error and 100% CPU usage in Java web application with jTDS

I am getting an OutOfMemoryError in my web application.
When I analysed the thread dump, I found one blocker thread; its stack trace is:
Finalizer
Stack Trace is:
java.lang.Thread.State: BLOCKED
at net.sourceforge.jtds.jdbc.JtdsConnection.releaseTds(JtdsConnection.java:2024)
- waiting to lock <662301fc> (a net.sourceforge.jtds.jdbc.JtdsConnection) owned by "default task-204" t#896
at net.sourceforge.jtds.jdbc.JtdsStatement.close(JtdsStatement.java:972)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.close(JtdsPreparedStatement.java:707)
at net.sourceforge.jtds.jdbc.JtdsStatement.finalize(JtdsStatement.java:219)
at java.lang.System$2.invokeFinalize(System.java:1270)
at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:98)
at java.lang.ref.Finalizer.access$100(Finalizer.java:34)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:210)
Locked ownable synchronizers:
- None
Detailed stack trace:
default task-204
Stack Trace is:
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.DataInputStream.readFully(DataInputStream.java:195)
at java.io.DataInputStream.readFully(DataInputStream.java:169)
at net.sourceforge.jtds.jdbc.SharedSocket.readPacket(SharedSocket.java:850)
at net.sourceforge.jtds.jdbc.SharedSocket.getNetPacket(SharedSocket.java:731)
- locked <5bc15901> (a java.util.concurrent.ConcurrentHashMap)
at net.sourceforge.jtds.jdbc.ResponseStream.getPacket(ResponseStream.java:477)
at net.sourceforge.jtds.jdbc.ResponseStream.read(ResponseStream.java:114)
at net.sourceforge.jtds.jdbc.ResponseStream.peek(ResponseStream.java:99)
at net.sourceforge.jtds.jdbc.TdsCore.wait(TdsCore.java:4127)
at net.sourceforge.jtds.jdbc.TdsCore.executeSQL(TdsCore.java:1086)
- locked <51eb98a0> (a net.sourceforge.jtds.jdbc.TdsCore)
at net.sourceforge.jtds.jdbc.JtdsCallableStatement.executeMSBatch(JtdsCallableStatement.java:165)
at net.sourceforge.jtds.jdbc.JtdsStatement.executeBatch(JtdsStatement.java:1051)
- locked <662301fc> (a net.sourceforge.jtds.jdbc.JtdsConnection)
at org.jboss.jca.adapters.jdbc.WrappedStatement.executeBatch(WrappedStatement.java:1077)
at com.XXX.MyDao.myMethod(MyDao.java:102)
On line 102 of MyDao.java I am executing callableStatement.executeBatch().
I am using jtds-1.3.1 to connect to the database.
I analysed the thread dump using http://fastthread.io
After restarting my application, it works fine.
You can read about the Finalizer thread here.
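For context, the blocked Finalizer thread above is closing a JtdsStatement that application code never closed; under load those finalizable statements pile up behind the connection monitor, which fits the OutOfMemoryError. The deterministic fix is to close statements yourself, e.g. try (CallableStatement cs = conn.prepareCall(sql)) { cs.executeBatch(); }. The effect of try-with-resources can be shown with a JDK-only stand-in (TrackedResource is hypothetical, playing the role of the statement):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CloseDemo {
    static final AtomicInteger openCount = new AtomicInteger();

    // Stand-in for a JDBC statement that must be closed deterministically.
    static class TrackedResource implements AutoCloseable {
        TrackedResource() { openCount.incrementAndGet(); }
        @Override public void close() { openCount.decrementAndGet(); }
    }

    static void useResource(boolean fail) {
        // close() runs on every path, success or exception, so nothing is
        // ever left over for the Finalizer thread to clean up
        try (TrackedResource r = new TrackedResource()) {
            if (fail) throw new RuntimeException("query failed");
        } catch (RuntimeException ignored) {
            // the resource is already closed by the time we get here
        }
    }
}
```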

High CPU load with the JSSE client poller on Tomcat 8.5

I'm running Tomcat 8.5.3 on Windows Server 2008 R2 with Java 1.8.0_92.
The process is consuming a lot of CPU (~ 50% from 4 CPUs).
JTop shows that the two most consuming threads are, by far, https-jsse-nio-443-ClientPoller-0 and https-jsse-nio-443-ClientPoller-1.
The threads are mainly looping through these four stack traces:
sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(Native Method)
sun.nio.ch.WindowsSelectorImpl$SubSelector.poll(WindowsSelectorImpl.java:296)
sun.nio.ch.WindowsSelectorImpl$SubSelector.access$400(WindowsSelectorImpl.java:278)
sun.nio.ch.WindowsSelectorImpl.doSelect(WindowsSelectorImpl.java:159)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked sun.nio.ch.Util$2#2a3e6629
- locked java.util.Collections$UnmodifiableSet#7cdb1cd3
- locked sun.nio.ch.WindowsSelectorImpl#13dc3a00
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:791)
java.lang.Thread.run(Thread.java:745)
.
sun.nio.ch.WindowsSelectorImpl$SubSelector.processFDSet(WindowsSelectorImpl.java:345)
sun.nio.ch.WindowsSelectorImpl$SubSelector.processSelectedKeys(WindowsSelectorImpl.java:315)
sun.nio.ch.WindowsSelectorImpl$SubSelector.access$2900(WindowsSelectorImpl.java:278)
sun.nio.ch.WindowsSelectorImpl.updateSelectedKeys(WindowsSelectorImpl.java:495)
sun.nio.ch.WindowsSelectorImpl.doSelect(WindowsSelectorImpl.java:172)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked sun.nio.ch.Util$2#6f4350d0
- locked java.util.Collections$UnmodifiableSet#36157c3f
- locked sun.nio.ch.WindowsSelectorImpl#120cc3aa
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:791)
java.lang.Thread.run(Thread.java:745)
.
sun.nio.ch.WindowsSelectorImpl.resetWakeupSocket0(Native Method)
sun.nio.ch.WindowsSelectorImpl.resetWakeupSocket(WindowsSelectorImpl.java:473)
- locked java.lang.Object#450e5040
sun.nio.ch.WindowsSelectorImpl.doSelect(WindowsSelectorImpl.java:174)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked sun.nio.ch.Util$2#6f4350d0
- locked java.util.Collections$UnmodifiableSet#36157c3f
- locked sun.nio.ch.WindowsSelectorImpl#120cc3aa
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:791)
java.lang.Thread.run(Thread.java:745)
.
java.lang.Object.notifyAll(Native Method)
sun.nio.ch.WindowsSelectorImpl$StartLock.startThreads(WindowsSelectorImpl.java:189)
- locked sun.nio.ch.WindowsSelectorImpl$StartLock#1c512d03
sun.nio.ch.WindowsSelectorImpl$StartLock.access$300(WindowsSelectorImpl.java:181)
sun.nio.ch.WindowsSelectorImpl.doSelect(WindowsSelectorImpl.java:153)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked sun.nio.ch.Util$2#2a3e6629
- locked java.util.Collections$UnmodifiableSet#7cdb1cd3
- locked sun.nio.ch.WindowsSelectorImpl#13dc3a00
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:791)
java.lang.Thread.run(Thread.java:745)
Any ideas?
I had a similar problem on Tomcat 8.5.4 / Linux where I saw very high cpu usage on the poller threads (eg, https-jsse-nio-443-ClientPoller-0 and 1). Upgrading to 8.5.5 seems to have resolved the issue.
It may have been this bug: https://bz.apache.org/bugzilla/show_bug.cgi?id=60030
