Deadlock while using ORMLite - java

I have a multithreaded Java server application which receives requests and runs queries/updates against a Postgres DB through OrmLite. Under load, several requests come in that are interested in the same DB row. Thread1 might select, change values and then update; at the same time, Thread2 tries something similar. This is currently neither synchronized nor done inside a transaction. Unsurprisingly, the update from Thread1 might not be seen by Thread2. That's OK (Thread2 may overwrite results from Thread1) and is not my problem.
However, after running this application for some time, I end up in a deadlock situation which results in all available DB connections being used up (and then a crash). It does not seem to be a standard deadlock (with a circular lock dependency); instead, most threads are waiting on a lock, and the thread holding that lock appears to be waiting on a socket read (which probably never completes, see below).
Using:
OrmLite 5.1
Java 1.8.0_251 HotSpot Client VM
Postgres JDBC 42.2.9
How should I go forward to fix this?
Below are relevant parts of the thread dump (analyzed by https://spotify.github.io/threaddump-analyzer )
The thread holding the main lock (0x00000000c0179e18) seems to be waiting on a socket:
"RaspService-2089": running, holding [0x00000000c0179e18, 0x00000000c2c1f6c0]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:140)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:109)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:67)
at org.postgresql.core.PGStream.receiveChar(PGStream.java:335)
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:505)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:141)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:211)
at org.postgresql.Driver.makeConnection(Driver.java:458)
at org.postgresql.Driver.connect(Driver.java:260)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at com.j256.ormlite.jdbc.JdbcConnectionSource.makeConnection(JdbcConnectionSource.java:266)
at com.j256.ormlite.jdbc.JdbcPooledConnectionSource.getReadWriteConnection(JdbcPooledConnectionSource.java:140)
at com.j256.ormlite.dao.BaseDaoImpl.update(BaseDaoImpl.java:408)
at vgs.vigi.servlet.OrmLite.update(OrmLite.java:361)
at vgs.vigi.servlet.CachedDao.update(CachedDao.java:287)
at vgs.vigi.ble.RaspClient.run(RaspClient.java:177)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
26 threads waiting to free connections are blocked on that lock, with stacks like:
"pool-4-thread-96": waiting to acquire [0x00000000c0179e18], holding [0x00000000c0b250a8]
at com.j256.ormlite.jdbc.JdbcPooledConnectionSource.releaseConnection(JdbcPooledConnectionSource.java:168)
at com.j256.ormlite.dao.BaseDaoImpl.create(BaseDaoImpl.java:331)
at vgs.vigi.servlet.OrmLite.create(OrmLite.java:181)
at vgs.vigi.servlet.CachedDao.create(CachedDao.java:126)
at vgs.vigi.logic.Notification.sendNotification(Notification.java:491)
at vgs.vigi.logic.Notification$1.run(Notification.java:640)
at vgs.lib.MyTimer$2.run(MyTimer.java:103)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
More threads are waiting to release connections:
"RaspService-828": waiting to acquire [0x00000000c0179e18], holding [0x00000000c1187f88]
at com.j256.ormlite.jdbc.JdbcPooledConnectionSource.releaseConnection(JdbcPooledConnectionSource.java:168)
at com.j256.ormlite.dao.BaseDaoImpl.update(BaseDaoImpl.java:412)
at vgs.vigi.servlet.OrmLite.update(OrmLite.java:361)
at vgs.vigi.servlet.CachedDao.update(CachedDao.java:287)
at vgs.vigi.ble.CmdRaspExcutor$8.exec(CmdRaspExcutor.java:318)
at vgs.vigi.ble.RaspClient.run(RaspClient.java:182)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
While many also try to acquire connections:
"RaspService-991": waiting to acquire [0x00000000c0179e18], holding [0x00000000c1624058]
at com.j256.ormlite.jdbc.JdbcPooledConnectionSource.getReadWriteConnection(JdbcPooledConnectionSource.java:125)
at com.j256.ormlite.dao.BaseDaoImpl.update(BaseDaoImpl.java:408)
at vgs.vigi.servlet.OrmLite.update(OrmLite.java:361)
at vgs.vigi.servlet.CachedDao.update(CachedDao.java:287)
at vgs.vigi.ble.RaspClient.run(RaspClient.java:177)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Some GC is happening ("inconsistent" here means the thread is "BLOCKED (on object monitor)" without waiting for anything):
"qtp1719311117-931": inconsistent?, holding [0x00000000eabfc510]
at java.lang.Runtime.gc(Native Method)
at java.lang.System.gc(System.java:993)
at vgs.vigi.servlet.OrmLite.clearCache(OrmLite.java:33)
at vgs.vigi.servlet.OrmLite.dao(OrmLite.java:215)
at vgs.vigi.servlet.OrmLite.getAll(OrmLite.java:300)
at vgs.vigi.servlet.CachedDao.getAll(CachedDao.java:227)
at vgs.lib.Ajax.sGetAll(Ajax.java:101)
...
And also GC in another thread (the call is explicitly coded in our code; not sure why, though):
"RaspService-1882": running, holding [0x00000000c05c84c8, 0x00000000c2bed7b0]
at java.lang.Runtime.gc(Native Method)
at java.lang.System.gc(System.java:993)
at vgs.vigi.servlet.OrmLite.clearCache(OrmLite.java:33)
at vgs.vigi.servlet.OrmLite.dao(OrmLite.java:215)
at vgs.vigi.servlet.OrmLite.update(OrmLite.java:360)
at vgs.vigi.servlet.CachedDao.update(CachedDao.java:285)
at vgs.vigi.ble.RaspClient.run(RaspClient.java:177)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Can I expect OrmLite to be safe with a multithreaded approach as described?
Are there best practices to avoid this issue (while still keeping the multithreaded nature of the server)?
Update
I have a thread dump from a second run, which looks a bit different.
Here the thread holding the lock that everyone is waiting for is marked inconsistent:
"RaspService-1405": inconsistent?, holding [0x00000000c01da9b8, 0x00000000c1cfed28]
With a raw stack of:
"RaspService-1405" #1469 prio=5 os_prio=0 tid=0x0000000021579800 nid=0xa2f4 waiting for monitor entry [0x000000002b36e000]
java.lang.Thread.State: BLOCKED (on object monitor)
at com.j256.ormlite.jdbc.JdbcPooledConnectionSource.getReadWriteConnection(JdbcPooledConnectionSource.java:125)
- locked <0x00000000c01da9b8> (a java.lang.Object)
at com.j256.ormlite.dao.BaseDaoImpl.update(BaseDaoImpl.java:408)
at vgs.vigi.servlet.OrmLite.update(OrmLite.java:361)
at vgs.vigi.servlet.CachedDao.update(CachedDao.java:287)
at vgs.vigi.ble.RaspClient.run(RaspClient.java:177)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- <0x00000000c1cfed28> (a java.util.concurrent.ThreadPoolExecutor$Worker)
There is also a RUNNING thread which reads from a connection. I'm not sure whether that one is blocked:
"RaspService-1410": running, holding [0x00000000ed1877c8, 0x00000000c1cfe208]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:140)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:109)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:67)
at org.postgresql.core.PGStream.receiveChar(PGStream.java:335)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2008)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:310)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:447)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:368)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:158)
at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:124)
at com.j256.ormlite.jdbc.JdbcDatabaseConnection.update(JdbcDatabaseConnection.java:294)
at com.j256.ormlite.jdbc.JdbcDatabaseConnection.update(JdbcDatabaseConnection.java:217)
at com.j256.ormlite.stmt.mapped.MappedUpdate.update(MappedUpdate.java:101)
at com.j256.ormlite.stmt.StatementExecutor.update(StatementExecutor.java:472)
at com.j256.ormlite.dao.BaseDaoImpl.update(BaseDaoImpl.java:410)
at vgs.vigi.servlet.OrmLite.update(OrmLite.java:361)
at vgs.vigi.servlet.CachedDao.update(CachedDao.java:287)
at vgs.vigi.ble.RaspClient.run(RaspClient.java:177)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Update 2:
Here's a graph of sessions as seen from pgAdmin

seems to be waiting on a socket:

It waits until a new connection is created, while also holding the lock inside the pool.
Do you have a session limit in Postgres? If you do, I suggest setting it slightly higher than the pool size in Java. Otherwise it is easy to get a deadlock when the pool size equals the session limit:
1. All connections are taken (the Java pool limit is reached, and so is the session limit).
2. The application tries to get a new connection: it takes the pool lock and is then blocked by PG.
3. The application tries to release a connection: it cannot take the pool lock, so it cannot return the connection to PG, and the session limit stays reached.
(A pool-sizing sketch follows below.)
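A minimal sketch of that sizing on the Java side, assuming ORMLite's JdbcPooledConnectionSource (the pool visible in the stack traces); the JDBC URL and all numbers are placeholders to tune. On the Postgres side, compare against SHOW max_connections;. Note that this pool only caps the idle connections it keeps around, so the server's session limit must stay above the number of threads that can hold a connection at once:

import com.j256.ormlite.jdbc.JdbcPooledConnectionSource;
import java.sql.SQLException;

public class PoolSetup {
    static JdbcPooledConnectionSource makePool() throws SQLException {
        // Placeholder JDBC URL; credentials omitted.
        JdbcPooledConnectionSource pool =
                new JdbcPooledConnectionSource("jdbc:postgresql://localhost:5432/mydb");
        pool.setMaxConnectionsFree(10);                // idle connections kept for reuse
        pool.setMaxConnectionAgeMillis(5 * 60 * 1000); // recycle connections after 5 minutes
        pool.setTestBeforeGet(true);                   // validate a connection before handing it out
        return pool;
    }
}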

Related

StandardPBEByteEncryptor lock

Why does StandardPBEByteEncryptor lock an object?
"pool-2-thread-115" - Thread t#169
java.lang.Thread.State: BLOCKED
at org.jasypt.encryption.pbe.StandardPBEByteEncryptor.decrypt(StandardPBEByteEncryptor.java:1035)
- waiting to lock <55a2b29b> (a javax.crypto.Cipher) owned by "pool-2-thread-114" t#168
at org.jasypt.encryption.pbe.StandardPBEStringEncryptor.decrypt(StandardPBEStringEncryptor.java:725)
at org.jasypt.util.text.BasicTextEncryptor.decrypt(BasicTextEncryptor.java:112)
at com.ahmetk.tims.statcalculator.kafka.consumer.XMPPTxnLogKafkaClient.process(XMPPTxnLogKafkaClient.java:129)
at com.ahmetk.tims.statcalculator.kafka.consumer.XMPPTxnLogKafkaClient.lambda$processMessage$0(XMPPTxnLogKafkaClient.java:107)
at com.ahmetk.tims.statcalculator.kafka.consumer.XMPPTxnLogKafkaClient$$Lambda$227/859670158.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
- locked <21b2529d> (a java.util.concurrent.ThreadPoolExecutor$Worker)
"pool-2-thread-114" - Thread t#168
java.lang.Thread.State: RUNNABLE
at sun.security.provider.DigestBase.engineDigest(DigestBase.java:158)
at java.security.MessageDigest$Delegate.engineDigest(MessageDigest.java:592)
at java.security.MessageDigest.digest(MessageDigest.java:365)
at com.sun.crypto.provider.PBES1Core.deriveCipherKey(PBES1Core.java:272)
at com.sun.crypto.provider.PBES1Core.init(PBES1Core.java:244)
at com.sun.crypto.provider.PBEWithMD5AndDESCipher.engineInit(PBEWithMD5AndDESCipher.java:221)
at javax.crypto.Cipher.init(Cipher.java:1394)
at javax.crypto.Cipher.init(Cipher.java:1327)
at org.jasypt.encryption.pbe.StandardPBEByteEncryptor.decrypt(StandardPBEByteEncryptor.java:1036)
- locked <55a2b29b> (a javax.crypto.Cipher)
at org.jasypt.encryption.pbe.StandardPBEStringEncryptor.decrypt(StandardPBEStringEncryptor.java:725)
at org.jasypt.util.text.BasicTextEncryptor.decrypt(BasicTextEncryptor.java:112)
at com.ahmetk.tims.statcalculator.kafka.consumer.XMPPTxnLogKafkaClient.process(XMPPTxnLogKafkaClient.java:129)
at com.ahmetk.tims.statcalculator.kafka.consumer.XMPPTxnLogKafkaClient.lambda$processMessage$0(XMPPTxnLogKafkaClient.java:107)
at com.ahmetk.tims.statcalculator.kafka.consumer.XMPPTxnLogKafkaClient$$Lambda$227/859670158.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Using Multi-Threaded Decryption
When we're operating on a multi-core machine, we want to handle decryption in parallel. To achieve good performance we can use a PooledPBEStringEncryptor and the setPoolSize() API to create a pool of digesters, each of which can be used by a different thread in parallel:
PooledPBEStringEncryptor encryptor = new PooledPBEStringEncryptor();
encryptor.setPoolSize(4);
encryptor.setPassword("some-random-data");
encryptor.setAlgorithm("PBEWithMD5AndTripleDES");
It's good practice to set the pool size equal to the number of cores of the machine. The code for encryption and decryption is the same as before.
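For instance, a brief usage sketch (the plaintext is a placeholder; encrypt() and decrypt() are the standard jasypt calls):
String encrypted = encryptor.encrypt("some-plaintext"); // placeholder input
String decrypted = encryptor.decrypt(encrypted);        // yields "some-plaintext" again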

AsyncHttpClient creates too many AsyncHttpClient-timer threads

I am using AsyncHttpClient 2.3.0 with the default configuration.
I've noticed that AHC creates two types of threads (from the thread dump):
1)
"AsyncHttpClient-timer-478-1" - Thread t#30390
java.lang.Thread.State: TIMED_WAITING
at java.lang.Thread.$$YJP$$sleep(Native Method)
at java.lang.Thread.sleep(Thread.java)
at io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:560)
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:459)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
2)
"AsyncHttpClient-3-4" - Thread t#20320
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.$$YJP$$epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.epollWait(EPollArrayWrapper.java)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <16163575> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <49280039> (a java.util.Collections$UnmodifiableSet)
- locked <2decd496> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
I expected AsyncHttpClient to use a few threads under the hood. But after a few days of running, AsyncHttpClient has created ~500 AsyncHttpClient-timer-xxx-x threads and only a few AsyncHttpClient-x-x.
It is not called very intensively, probably also ~500 times over this period.
Only executeRequest is used (execute a request and get on the returned future): https://static.javadoc.io/org.asynchttpclient/async-http-client/2.3.0/org/asynchttpclient/AsyncHttpClient.html#executeRequest-org.asynchttpclient.Request-org.asynchttpclient.AsyncHandler-
<T> ListenableFuture<T> executeRequest(Request request, AsyncHandler<T> handler);
I've seen a page about connection pool configuration (https://github.com/AsyncHttpClient/async-http-client/wiki/Connection-pooling) but nothing about thread pool configuration.
What is the difference between the two types of threads, and what can cause such a large number of them to be created? Is there any configuration I should apply?
AHC has two types of threads:
1. For I/O operations. In your dump these are the AsyncHttpClient-x-x threads; AHC creates 2 * core_count of those.
2. For timeouts. In your dump this is the AsyncHttpClient-timer-1-1 thread; there should be only one. Any different number means you're creating multiple clients.
Source: issue on GitHub https://github.com/AsyncHttpClient/async-http-client/issues/1658
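The usual remedy is to build one client at application startup and reuse it everywhere. A sketch, assuming AHC 2.x (Dsl.asyncHttpClient() is the standard factory; the holder class is hypothetical):

import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Dsl;

public class SharedHttpClient {
    // One client per JVM: every AsyncHttpClient instance owns its own
    // HashedWheelTimer thread, so creating a client per request leaks
    // one timer thread each time.
    private static final AsyncHttpClient CLIENT = Dsl.asyncHttpClient();

    public static AsyncHttpClient get() {
        return CLIENT;
    }
}

Remember to close the client on shutdown (it implements Closeable).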

Deadlock acquiring locks

I've got a thread dump for a deadlock and I can't see the cause. On first inspection it looks like some client code simply fails to acquire the lock on a ReentrantLock which is owned by MyClass:
"qtp1450652220-77" Id=77 WAITING on java.util.concurrent.locks.ReentrantLock$NonfairSync#1e319fef owned by "pool-2-thread-2" Id=1651
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.locks.ReentrantLock$NonfairSync#1e319fef
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
at com.mycode.MyClass.methodName(MyClass.java:1008)
However the owning thread's dump is:
"pool-2-thread-2" Id=1651 WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#56171f7a
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#56171f7a
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Number of locked synchronizers = 1
- java.util.concurrent.locks.ReentrantLock$NonfairSync#1e319fef
Sure enough, the lock on the ReentrantLock is listed at the bottom. But what surprises me is that none of my client code appears in the thread dump. There's no indication as to how that ReentrantLock was acquired in the first place, so how can I fix it?
The code in MyClass is:
public Collection<String> methodName() {
    interruptLock.lock();
    try {
        /* do stuff */
        return tagsToReturn;
    } finally {
        interruptLock.unlock();
    }
}
Line 1008 is the interruptLock.lock(); line.
You may need to capture the thread stack with jstack and the -l option:
https://docs.oracle.com/javase/8/docs/technotes/tools/unix/jstack.html
-l
Long listing. Prints additional information about locks such as a list of owned java.util.concurrent ownable synchronizers. See the
AbstractOwnableSynchronizer class description at
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/AbstractOwnableSynchronizer.html
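For example, with a placeholder PID:
jstack -l 12345 > threaddump.txt
The "Locked ownable synchronizers" sections in such a dump show which thread currently owns each ReentrantLock.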

Too many parking to wait threads

I am analyzing an application hang. In the thread dumps, about 90% of the worker threads are in this state:
"pool-3-thread-352" #13082 prio=5 os_prio=0 tid=0x00007ff6407fc800
nid=0x1e94 waiting on condition [0x00007ff5a53b4000]
java.lang.Thread.State: TIMED_WAITING (parking) at
sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000044af6bcd0> (a java.util.concurrent.SynchronousQueue$TransferStack) at
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
"pool-21-thread-214" #13081 prio=5 os_prio=0 tid=0x0000000002e6a800
nid=0x1e92 waiting on condition [0x00007ff5a54b5000]
java.lang.Thread.State: TIMED_WAITING (parking) at
sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000004ad95fba8> (a java.util.concurrent.SynchronousQueue$TransferStack) at
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
As per my understanding, these are basically request worker threads on a Tomcat server, waiting on a blocking queue until a request comes in. When a request arrives, one thread gets a permit and runs to execute it.
So if no tasks are available, these threads wait (park) on the queue. When a task becomes available, one worker thread gets a permit, becomes a running thread and executes the task.
But these threads can still cause issues if too many of them are created in the thread pool, since they eat up resources.
Zero deadlocks were found, but the app still hangs, with exceptions almost everywhere of this type:
javax.ws.rs.ProcessingException: RESTEASY004655: Unable to invoke request
at org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient4Engine.invoke(ApacheHttpClient4Engine.java:287)
at com.agfa.orbis.core.client.service.rest.ClientHttpEngineWrapper.invoke(ClientHttpEngineWrapper.java:59)
at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation.invoke(ClientInvocation.java:436)
at org.jboss.resteasy.client.jaxrs.internal.ClientInvocationBuilder.get(ClientInvocationBuilder.java:159)
at com.agfa.hap.crs.commons.client.rest.RestClient.getResponse(RestClient.java:238)
at com.agfa.hap.crs.commons.client.rest.RestClient.get(RestClient.java:70)
at com.agfa.hap.crs.alertsystem.client.orbis.ForwardedUserAlertsMonitor.getSharedAlertState(ForwardedUserAlertsMonitor.java:88)
at com.agfa.hap.crs.alertsystem.client.orbis.ForwardedUserAlertsMonitor.getCurrentAlertState(ForwardedUserAlertsMonitor.java:79)
at com.agfa.hap.crs.alertsystem.client.orbis.AbstractAlertMonitor.requestMonitorUpdate(AbstractAlertMonitor.java:275)
at com.agfa.hap.crs.alertsystem.client.orbis.AbstractAlertMonitor$10.execute(AbstractAlertMonitor.java:823)
at com.agfa.hap.crs.alertsystem.client.orbis.AbstractAlertMonitor$Task.call(AbstractAlertMonitor.java:952)
at com.agfa.hap.crs.alertsystem.client.orbis.AbstractAlertMonitor$Task.call(AbstractAlertMonitor.java:942)
at com.agfa.hap.crs.alertsystem.client.orbis.AbstractAlertMonitor$TaskWrapper.call(AbstractAlertMonitor.java:925)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:992)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:535)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:403)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient4Engine.invoke(ApacheHttpClient4Engine.java:283)
... 16 more
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
... 29 more
I am looking to link these exceptions to the threads' activity. Any idea why the connection is closed incorrectly?
These threads are waiting for something to happen. As you wrote:
these are basically request worker threads on a Tomcat server, waiting on a blocking queue until a request comes in
As far as I understand, this happens under low load, so a too-big thread pool will not be a problem. If you're really worried about it, you can configure a maxIdleTime for the thread pool; Tomcat will then kill old idle threads until the pool shrinks back to minSpareThreads (see the sketch after the links below).
This is the thread pool documentation for Tomcat 8.
This is the thread pool documentation for Tomcat 7.
This is the thread pool documentation for Tomcat 6.
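Outside of Tomcat configuration, the same idle-thread reclamation can be sketched with a plain ThreadPoolExecutor; all values are illustrative, and the SynchronousQueue matches what the dumps above show:

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedCachedPool {
    static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(
                10,                        // core size, roughly Tomcat's minSpareThreads
                200,                       // hard cap, roughly Tomcat's maxThreads
                60, TimeUnit.SECONDS,      // idle timeout, roughly Tomcat's maxIdleTime
                new SynchronousQueue<>()); // direct hand-off, as seen in the dumps above
    }
}

Threads above the core size are torn down after sitting idle for the keep-alive period.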

Locked object found on oracle.jdbc.driver.T4CConnection

I am using JMC to profile the application, and I did not see any lock/thread contention, as shown in the screenshot below.
I also ran the SQL below (every few seconds); it did not return any result either.
select
    (select username from v$session where sid = a.sid) blocker,
    a.sid,
    ' is blocking ',
    (select username from v$session where sid = b.sid) blockee,
    b.sid
from
    v$lock a,
    v$lock b
where
    a.block = 1
    and b.request > 0
    and a.id1 = b.id1
    and a.id2 = b.id2;
What could be the cause of a locked database connection? Could it be database record/table locks?
Below is the thread dump I extracted while my program seemed to be running forever.
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at oracle.net.ns.Packet.receive(Packet.java:283)
at oracle.net.ns.DataPacket.receive(DataPacket.java:103)
at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:230)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:175)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:100)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:85)
at oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:123)
at oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:79)
at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1122)
at oracle.jdbc.driver.T4CMAREngine.unmarshalSB1(T4CMAREngine.java:1099)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:288)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:863)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1153)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1275)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3576)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3620)
- locked <0x00000007af3423c0> (a oracle.jdbc.driver.T4CConnection)
You're confusing database locks with Java locks here. JMC only shows you the locks inside your Java program (synchronized blocks, waits, etc.); it knows nothing about what's going on inside your DB. Your SQL query only shows locks at the DB level (table locks, row locks, etc.) and knows nothing about the locks inside your Java program. Those are absolutely different areas and absolutely different locks.
What you have here is a dump of a thread that holds a lock on an object of type T4CConnection with the address 0x7af3423c0. It only means that this thread is executing code inside some synchronized(connection) block. That's all. The thread is not blocked by other threads (otherwise its state wouldn't be RUNNABLE; it would be WAITING or BLOCKED). It's running and reading something from a network socket (probably the response from the DB).
Such behaviour is absolutely normal. The DB driver synchronizes on the connection instance while it executes an SQL query, so that other threads cannot use the connection in parallel.
There's nothing to worry about in this screenshot or thread dump.
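Schematically, the pattern the dump reflects looks like this (a sketch of the idea, not the actual Oracle driver code):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class DriverLockSketch {
    // "locked <0x...> (a T4CConnection)" means the thread is inside a
    // block like this while it waits on the socket for the DB's reply.
    static ResultSet query(Connection connection, PreparedStatement stmt) throws SQLException {
        synchronized (connection) {     // one thread per physical connection at a time
            return stmt.executeQuery(); // RUNNABLE, reading the response from the socket
        }
    }
}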
