I have a Java web application deployed on a WildFly 10 server. The application works fine most of the time, but occasionally and unpredictably, calls to a Java servlet never reach the servlet.
Curious, I analyzed a thread dump of the WildFly server in VisualVM. Although I am not an expert at reading thread dumps, it looks like thread locks are occurring, because of which the task thread for that servlet call never executes; it keeps waiting forever.
Right now I don't know whether this is a problem on the application side. I suspect it is a problem with the servlet container configuration (which I left at its defaults), or possibly a WildFly bug, which I hope it isn't.
This is my login servlet code:
response.setContentType("application/json");
UserInfo user = null;
boolean authenticated = false;
String message = "";
String ipAddress = request.getHeader("X-FORWARDED-FOR");
if (ipAddress == null) {
    ipAddress = request.getRemoteAddr();
}
try {
    ApplicationHelper.clearSession(request);
    String body = request.getReader().lines().reduce("", (accumulator, actual) -> accumulator + actual);
    HashMap inputDataMap = new ObjectMapper().readValue(body, HashMap.class);
    String userName = (String) inputDataMap.get("username");
    String password = (String) inputDataMap.get("password");
    user = UserDataProvider.verifyEncryptedAccount(userName, password);
    if (user != null) {
        UserDataProvider.updateLoginStatus(user.getIdKey(), request.getSession().getId(), ipAddress, true);
        request.getSession(true).setAttribute("userInfo", user);
        authenticated = true;
        message = MPHLTHConstants.Success;
    } else {
        throw new InsufficientAccessException("Insufficient access");
    }
} catch (Exception ex) {
    authenticated = false;
    if (ex instanceof ApplicationException) {
        message = ex.getMessage();
    }
    ExceptionDataProvider.logException(ex, request, user);
} finally {
    try {
        Response objResponse = new Response(user, message, authenticated, 1);
        Map<String, String[]> jsonFilters = new HashMap<>();
        jsonFilters.put("ResponseFilter", new String[0]);
        jsonFilters.put("UserInfoFilter", new String[0]);
        JSONHelper.writeJSONResponse(objResponse, response, jsonFilters);
    } catch (Exception ex) {
        ExceptionDataProvider.logException(ex, request, user);
    }
}
These are the threads where I saw the locks; I saw several of them at different times, and they didn't change over time:
"default task-64" #206 prio=5 os_prio=0 tid=0x000000001c59b800 nid=0x5608 waiting for monitor entry [0x000000001f8bd000] java.lang.Thread.State: BLOCKED (on object monitor)
at java.io.PrintStream.println(PrintStream.java:805)
- waiting to lock <0x00000000e0058f58> (a java.io.PrintStream)
at org.jboss.stdio.StdioContext$DelegatingPrintStream.println(StdioContext.java:474
and this one:
> "default task-61" #203 prio=5 os_prio=0 tid=0x000000001c599000 nid=0x4934 runnable [0x000000001f5bd000]
java.lang.Thread.State: RUNNABLE
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
- locked <0x00000000e0aeb790> (a java.io.BufferedOutputStream)
at java.io.PrintStream.write(PrintStream.java:482)
- locked <0x00000000e0aeb770> (a java.io.PrintStream)
at org.jboss.logmanager.handlers.UncloseableOutputStream.write(UncloseableOutputStream.java:44)
at org.jboss.logmanager.handlers.UninterruptibleOutputStream.write(UninterruptibleOutputStream.java:84)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
- locked <0x00000000e0aeb738> (a java.io.OutputStreamWriter)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
at java.io.BufferedWriter.flush(BufferedWriter.java:254)
- locked <0x00000000e0aeb738> (a java.io.OutputStreamWriter)
at org.jboss.logmanager.handlers.WriterHandler.safeFlush(WriterHandler.java:170)
at org.jboss.logmanager.handlers.WriterHandler.flush(WriterHandler.java:139)
- locked <0x00000000e0aeb700> (a java.lang.Object)
at org.jboss.logmanager.ExtHandler.doPublish(ExtHandler.java:104)
at org.jboss.logmanager.handlers.WriterHandler.doPublish(WriterHandler.java:67)
- locked <0x00000000e0aeb700> (a java.lang.Object)
at org.jboss.logmanager.ExtHandler.publish(ExtHandler.java:76)
at org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:314)
at org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:322)
at org.jboss.logmanager.Logger.logRaw(Logger.java:850)
at org.jboss.logmanager.Logger.log(Logger.java:596)
at org.jboss.stdio.AbstractLoggingWriter.write(AbstractLoggingWriter.java:71)
- locked <0x00000000e0058fb8> (a java.lang.StringBuilder)
at org.jboss.stdio.WriterOutputStream.finish(WriterOutputStream.java:143)
at org.jboss.stdio.WriterOutputStream.flush(WriterOutputStream.java:164)
- locked <0x00000000e0059128> (a sun.nio.cs.SingleByte$Decoder)
at java.io.PrintStream.write(PrintStream.java:482)
- locked <0x00000000e0058f58> (a java.io.PrintStream)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104)
- locked <0x00000000e00579c0> (a java.io.OutputStreamWriter)
at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185)
at java.io.PrintStream.newLine(PrintStream.java:546)
- locked <0x00000000e0058f58> (a java.io.PrintStream)
at java.io.PrintStream.println(PrintStream.java:807)
- locked <0x00000000e0058f58> (a java.io.PrintStream)
at org.jboss.stdio.StdioContext$DelegatingPrintStream.println(StdioContext.java:474)
This issue is finally resolved. It turned out that the thread locks were occurring due to println() statements present in several places in the application. The WildFly 10 logging subsystem and the println() statements were both locking the stdout print stream and eventually ran into a deadlock.
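Since the fix was removing the println() calls, a standard logger is the natural replacement, as it avoids the System.out contention described above. A minimal sketch using java.util.logging (class and method names here are illustrative; any logging facade such as SLF4J would work as well):

import java.util.logging.Level;
import java.util.logging.Logger;

public class LoginLogging {
    // One logger per class; WildFly routes java.util.logging through its logging subsystem.
    private static final Logger LOG = Logger.getLogger(LoginLogging.class.getName());

    void logLogin(String userName) {
        // Instead of: System.out.println("User " + userName + " logged in");
        LOG.log(Level.INFO, "User {0} logged in", userName);
    }
}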
I am trying to work with Kafka Streams and I have created the following topology:
KStream<String, HistoryEvent> eventStream = builder.stream(applicationTopicName,
        Consumed.with(Serdes.String(), historyEventSerde));
eventStream.selectKey((key, value) -> new HistoryEventKey(key, value.getIdentifier()))
        .groupByKey()
        .reduce((e1, e2) -> e2, Materialized.as(streamByKeyStoreName));
I later start the streams like this:
private void startKafkaStreams(KafkaStreams streams) {
    CompletableFuture<KafkaStreams.State> stateFuture = new CompletableFuture<>();
    streams.setStateListener((newState, oldState) -> {
        if (stateFuture.isDone()) {
            return;
        }
        if (newState == KafkaStreams.State.RUNNING || newState == KafkaStreams.State.ERROR) {
            stateFuture.complete(newState);
        }
    });
    streams.start();
    try {
        KafkaStreams.State finalState = stateFuture.get();
        if (finalState != KafkaStreams.State.RUNNING) {
            // ...
        }
    } catch (InterruptedException ex) {
        // ...
    } catch (ExecutionException ex) {
        // ...
    }
}
My streams start without an error and eventually reach the RUNNING state, at which point the future is completed. Later I try to access the store that I created in my topology for the KTable:
public KafkaFlowHistory createFlowHistory(String flowId) {
    ReadOnlyKeyValueStore<HistoryEventKey, HistoryEvent> store =
            streams.store(streamByKeyStoreName, QueryableStoreTypes.keyValueStore());
    return new KafkaFlowHistory(flowId, store,
            event -> topicProducer.send(new ProducerRecord<>(applicationTopicName, flowId, event)));
}
I have verified that createFlowHistory is called after the initialization future completes in the RUNNING state; however, the store lookup consistently fails and KafkaStreams reports the following error:
Exception in thread "main" org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state store flow-event-stream-file-service-test-instance-by-key because the stream thread is PARTITIONS_ASSIGNED, not RUNNING
Apparently the state of the thread has changed. Do I need to take care of this manually when trying to query a store, and wait for Kafka's internal thread to get into the right state?
Older Versions (before 2.2.0)
On startup, Kafka Streams does the following state transitions:
CREATED -> RUNNING -> REBALANCING -> RUNNING
You need to wait for the second RUNNING state before you can query.
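On those older versions, a small retry loop around the store lookup is a common workaround. A minimal sketch (not part of the original answer; the 100 ms back-off is arbitrary):

// Retry until the store becomes queryable after the rebalance finishes.
private <K, V> ReadOnlyKeyValueStore<K, V> waitForStore(KafkaStreams streams, String storeName)
        throws InterruptedException {
    while (true) {
        try {
            return streams.store(storeName, QueryableStoreTypes.<K, V>keyValueStore());
        } catch (InvalidStateStoreException ex) {
            // Thrown while the stream thread is e.g. PARTITIONS_ASSIGNED; back off and retry.
            Thread.sleep(100);
        }
    }
}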
Newer versions (as of 2.2.0)
The state transition behavior on startup was changed (via https://issues.apache.org/jira/browse/KAFKA-7657) to:
CREATED -> REBALANCING -> RUNNING
Hence, you should not hit this issue any longer.
For my thesis, I need to upload data from files to a Cassandra cluster. With session.execute() it is too slow, so I decided to use session.executeAsync(), but it causes a BusyPoolException.
Here is my code in Java:
final PoolingOptions poolingOptions = new PoolingOptions();
poolingOptions.setMaxRequestsPerConnection(HostDistance.LOCAL, 32768)
        .setMaxRequestsPerConnection(HostDistance.REMOTE, 32768);
final Cluster cluster = Cluster.builder()
        .withPoolingOptions(poolingOptions)
        .addContactPoint("x.x.x.x")
        .withPort(9042)
        .build();
final Session session = cluster.connect();
System.out.println("session object---" + session.getState());
final String path = "&PathToFile%";
final File dir = new File(path);
session.execute("use products;");
for (final File file : dir.listFiles()) {
    // try-with-resources so each reader is closed even if an exception is thrown
    try (BufferedReader br = new BufferedReader(new FileReader(file))) {
        final String insert = br.readLine(); // first line of each file holds the INSERT prefix
        String str;
        while ((str = br.readLine()) != null) {
            final String query = insert + str.substring(0, str.length() - 1) + "IF NOT EXISTS ;";
            session.executeAsync(query);
        }
    }
}
session.close();
cluster.close();
}
Here are the exceptions I get when I execute the code:
Error querying /x.x.x.1:9042 : com.datastax.driver.core.exceptions.BusyPoolException: [/x.x.x.1] Pool is busy (no available connection and the queue has reached its max size 256)
Error querying /x.x.x.2:9042 : com.datastax.driver.core.exceptions.BusyPoolException: [/x.x.x.2] Pool is busy (no available connection and the queue has reached its max size 256)
Error querying /x.x.x.3:9042 : com.datastax.driver.core.exceptions.BusyPoolException: [/x.x.x.3] Pool is busy (no available connection and the queue has reached its max size 256)
Error querying /x.x.x.4:9042 : com.datastax.driver.core.exceptions.BusyPoolException: [/x.x.x.4] Pool is busy (no available connection and the queue has reached its max size 256)
Error querying /x.x.x.5:9042 : com.datastax.driver.core.exceptions.BusyPoolException: [/x.x.x.5] Pool is busy (no available connection and the queue has reached its max size 256)
A busy exception occurs when you put too many requests on one connection. You need to control how many requests are sent. The simplest way is to use a semaphore or something similar. I have a class that wraps the Session and allows controlling the number of in-flight requests: it behaves asynchronously until the limit is reached, then blocks until the number of in-flight requests goes back under the limit. You can use my code or implement something similar.
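A minimal sketch of that idea, assuming the DataStax Java driver 3.x and its Guava dependency (the class name and the 1024 limit are illustrative; keep the limit well below the pool's queue limit):

import java.util.concurrent.Semaphore;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.MoreExecutors;

// Caps the number of in-flight async queries; executeThrottled() blocks once the cap is hit.
class ThrottledSession {
    private final Session session;
    private final Semaphore permits = new Semaphore(1024);

    ThrottledSession(Session session) { this.session = session; }

    ResultSetFuture executeThrottled(String query) throws InterruptedException {
        permits.acquire(); // blocks while 1024 requests are already in flight
        ResultSetFuture future = session.executeAsync(query);
        Futures.addCallback(future, new FutureCallback<ResultSet>() {
            @Override public void onSuccess(ResultSet rs) { permits.release(); }
            @Override public void onFailure(Throwable t) { permits.release(); }
        }, MoreExecutors.directExecutor());
        return future;
    }
}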
Update: you're using lightweight transactions (LWT) (the IF NOT EXISTS clause), and this heavily affects the performance of your cluster, because every insert needs to be coordinated with the other nodes...
I'm trying to build a process that watches a list of directories (populated via JPA); when a new file is detected in a folder, a new thread is started to process that folder. At most one thread should run per folder, but multiple threads may run across different folders.
I've got that working somewhat with the code below, but here is the issue I've found. Say 1 out of 5 files has landed in a folder so far. A thread is made immediately once the first file is detected, and the ProcessDatasource thread then loops through the directory and creates one file object to process. In the meantime the other 4 files each trigger the WatchService, but they are blocked because a datasource thread is already running on that folder. Since the watcher already fired when those files landed, it won't fire again, which leaves those 4 files in limbo until another file lands in that folder.
To solve this I thought that if a file lands while a thread is already running, I could call a method within the thread to add the file to the list of files it's currently processing, but I'm struggling to do that when the threads are made dynamically in the loop below. Of course, this could just be an awful way of doing all this, so I'm open to any suggestions.
private boolean checkThreadRunning(String threadName) {
    Set<Thread> threadSet = Thread.getAllStackTraces().keySet();
    for (Thread t : threadSet) {
        if (t.getThreadGroup() == Thread.currentThread().getThreadGroup()
                && t.getName().equals(threadName)) {
            return true;
        }
    }
    return false;
}
public void run(String... args) throws IOException, InterruptedException { // take() below throws InterruptedException
    WatchService watchService = FileSystems.getDefault().newWatchService();
    List<DataSource> datasourceList = readDataSources(); // Load a list of DataSource objects into the datasourceList.
    Map<WatchKey, DataSource> keys = registerKeys(watchService, datasourceList);
    WatchKey key;
    while ((key = watchService.take()) != null) {
        DataSource dataSource = keys.get(key);
        for (WatchEvent<?> event : key.pollEvents()) {
            String dataSourceName = dataSource.getDatasourceName();
            String threadName = "datasourceThread-" + dataSourceName;
            // Check if there is already a thread running on this datasource (folder)
            if (checkThreadRunning(threadName)) {
                System.out.println("Found another file for datasource " + dataSourceName
                        + " but an instance is already running");
                // Need something here to pass this new file into the currently running thread to be processed...
            } else {
                // If not, start a thread which will work through processing the files within the folder.
                new Thread(new ProcessDatasource(threadName, dataSource)).start();
            }
        }
        key.reset();
    }
}
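One possible restructuring (a sketch, not from the original post) is to drop the thread-name scanning and keep one single-threaded executor per folder: a running folder's queue then absorbs late-arriving files instead of leaving them in limbo. processFile here is a hypothetical per-file handler:

import java.nio.file.Path;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

private final Map<String, ExecutorService> workers = new ConcurrentHashMap<>();

private void submitFile(DataSource dataSource, Path file) {
    // One single-threaded executor per datasource guarantees at most one
    // thread per folder; extra files simply queue up behind the current one.
    ExecutorService worker = workers.computeIfAbsent(
            dataSource.getDatasourceName(),
            name -> Executors.newSingleThreadExecutor(
                    r -> new Thread(r, "datasourceThread-" + name)));
    worker.submit(() -> processFile(dataSource, file)); // processFile is hypothetical
}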
I have an integration flow that scans for files to process. Since there might be multiple processors scanning the same directory, I added .nioLocker() to prevent processors in other JVMs from processing the same file.
Here's the flow configuration:
IntegrationFlows.from( // Scan files from input dir
        s -> s.file(new File(fileInputDir))
                .preventDuplicates(true)
                .nioLocker()
                .regexFilter("(.)*\\.[xX][mM][lL]|(.)+\\.[dD][nN][eE]"), // to match any case of .xml or .dne
        p -> p.poller(Pollers.fixedRate(filePollerInterval)
                .taskExecutor(new ScheduledThreadPoolExecutor(filePoolSize))
        )
Now, the problem is that even with only one processor running, when I call BufferedReader.readLine() I get an exception stating that the file is locked:
java.io.IOException: The process cannot access the file because another process has locked a portion of the file
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:255)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:161)
at java.io.BufferedReader.readLine(BufferedReader.java:324)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
I tried to release the lock by calling
private NioFileLocker fileLocker = new NioFileLocker();
fileLocker.unlock(file);
But that doesn't work! (I suspect it's because unlock() is called from a different thread than the one that acquired the lock, but I am not sure.)
What is the proper way to obtain the lock? Is there a better way to ensure that only one processor obtains access to a resource?
----------------------------EDIT---------------------------------
So I went a step further to make sure that the thread locking the file is the same thread that reads from its file channel. For this I used direct channels (before, the message was passed to the fileSplitter through a QueueChannel, which would execute send() on a different thread). Still I get the error:
2017-07-21 11:22:03.316 INFO 336488 --- [ main] c.f.e.m.i.MailerInboundApplication : Started MailerInboundApplication in 13.541 seconds (JVM running for 14.419)
2017-07-21 11:22:09.946 INFO 336488 --- [ask-scheduler-5] o.s.i.file.FileReadingMessageSource : Created message: [GenericMessage [payload=input\EMAIL92770.9352177.20170617.xml, headers={id=5dba6d62-b0a5-508e-48a9-cfddfa3b331f, timestamp=1500654129946}]]
2017-07-21 11:22:09.962 DEBUG 336488 --- [ask-scheduler-5] c.f.edd.mailer.inbound.core.FileRouter : fileRouter received message: GenericMessage [payload=input\EMAIL92770.9352177.20170617.xml, headers={CORRELATION_ID=92770.9352177.20170617, id=32a8846d-5425-0bee-657e-8767e1fb6105, timestamp=1500654129962}]
2017-07-21 11:22:09.962 DEBUG 336488 --- [ask-scheduler-5] c.f.e.mailer.inbound.core.FileSplitter : fileSplitter received message: GenericMessage [payload=input\EMAIL92770.9352177.20170617.xml, headers={CORRELATION_ID=92770.9352177.20170617, id=32a8846d-5425-0bee-657e-8767e1fb6105, timestamp=1500654129962}]
java.io.IOException: The process cannot access the file because another process has locked a portion of the file
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.FileDispatcherImpl.read(FileDispatcherImpl.java:61)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:159)
at com.fiserv.edd.mailer.inbound.core.FileSplitter.splitMessage(FileSplitter.java:93)
The code at FileSplitter.java:
@Override
protected Object splitMessage(Message<?> message) {
    String correlationId = (String) message.getHeaders().get("CORRELATION_ID"); // Save the correlation ID so we can use it to send the DNE/CLP file later
    File file = (File) message.getPayload();
    String inputFileName = file.getName();
    log.info(LogEvent.getBuilder().withMessageId(inputFileName)
            .withMessage("Processing file: " + inputFileName).build());
    long startTime = System.currentTimeMillis();
    Optional<InputHeader> inputHeader = Optional.empty(); // headerParser.parse(file);
    ParsingReport pr = new ParsingReport(inputFileName);
    try (RandomAccessFile lfs = new RandomAccessFile(file.getAbsolutePath(), "rw")) {
        FileChannel fc = lfs.getChannel();
        byte[] bytes = new byte[1024];
        fc.read(ByteBuffer.wrap(bytes));
        System.out.println(new String(bytes));
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return Collections.EMPTY_LIST;
}
When we use java.nio.channels.FileLock, we can get access to the file content only via the FileChannel or InputStream associated with that lock:
FileInputStream in = new FileInputStream(file);
try {
    java.nio.channels.FileLock lock = in.getChannel().lock();
    try {
        Reader reader = new InputStreamReader(in, charset);
        ...
    } finally {
        lock.release();
    }
} finally {
    in.close();
}
The NioFileLocker doesn't make it easy to get access to the FileLock.
So, you should use something like this in your code:
new DirectFieldAccessor(this.nioFileLocker).getPropertyValue("lockCache");
and cast the result to Map<File, FileLock> to be able to get the FileLock created for the file.
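Put together, the workaround might look like this (a sketch; lockCache is a private field of NioFileLocker, so this relies on framework internals and may change between versions):

@SuppressWarnings("unchecked")
Map<File, FileLock> lockCache = (Map<File, FileLock>)
        new DirectFieldAccessor(this.nioFileLocker).getPropertyValue("lockCache");

FileLock lock = lockCache.get(file);
if (lock != null) {
    lock.release(); // release before reading the locked region; throws IOException
}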
Meanwhile, please raise a JIRA on the matter. This NioFileLocker causes a lot of problems and should be revised somehow. Thanks.
While sending an ARRAY to a stored procedure, we are getting Java-level deadlocks. I am attaching the thread dump.
Found one Java-level deadlock:
=============================
"http-bio-8080-exec-11":
waiting to lock monitor 0x00000000406fb2d8 (object 0x00000000fea1b130, a oracle.jdbc.driver.T4CConnection),
which is held by "http-bio-8080-exec-4"
"http-bio-8080-exec-4":
waiting to lock monitor 0x00000000407d6038 (object 0x00000000fe78b680, a oracle.jdbc.driver.T4CConnection),
which is held by "http-bio-8080-exec-11"
Java stack information for the threads listed above:
===================================================
"http-bio-8080-exec-11":
at oracle.sql.TypeDescriptor.getName(TypeDescriptor.java:682)
- waiting to lock <0x00000000fea1b130> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.oracore.OracleTypeCOLLECTION.isInHierarchyOf(OracleTypeCOLLECTION.java:149)
at oracle.jdbc.driver.OraclePreparedStatement.processCompletedBindRow(OraclePreparedStatement.java:2063)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3579)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3685)
- locked <0x00000000fe78b680> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4714)
- locked <0x00000000fe78b680> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1376)
at org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1066)
at org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1014)
at org.springframework.jdbc.core.JdbcTemplate.call(JdbcTemplate.java:1064)
at org.springframework.jdbc.object.StoredProcedure.execute(StoredProcedure.java:144)
How can we avoid this kind of deadlock?
Code (the class extends org.springframework.jdbc.object.StoredProcedure):
Map result;
Map hashMap = new HashMap();
hashMap.put(SOME_IDS_PARAM, getJdbcTemplate().execute(new ConnectionCallback() {
    @Override
    public Object doInConnection(Connection con) throws SQLException, DataAccessException {
        Connection connection = new SimpleNativeJdbcExtractor().getNativeConnection(con);
        ArrayDescriptor descriptor = ArrayDescriptor.createDescriptor(schema + ".ARRAY_OF_NUMBER", connection);
        return new oracle.sql.ARRAY(descriptor, connection, someIds);
    }
}));
result = super.execute(hashMap);
I even tried this approach:
OracleConnection connection = null;
DataSource datasource = null;
Map result;
try {
    datasource = getJdbcTemplate().getDataSource();
    connection = (OracleConnection) DataSourceUtils.getConnection(datasource);
    synchronized (connection) {
        Map hashMap = new HashMap();
        hashMap.put(SOME_IDS_PARAM, getArrayOfNumberValue(someIds, schema, connection));
        result = super.execute(hashMap);
    }
} finally {
    if (null != connection) {
        DataSourceUtils.releaseConnection(connection, datasource);
    }
}
Array:
public ARRAY getArrayOfNumberValue(Integer[] array, String schema, OracleConnection connection)
        throws DataAccessResourceFailureException {
    String arrayOfNumberTypeName = schema + ARRAY_OF_NUMBER;
    ARRAY oracleArray = null;
    ArrayDescriptor descriptor = null;
    try {
        descriptor = (ArrayDescriptor) connection.getDescriptor(arrayOfNumberTypeName);
        if (null == descriptor) {
            descriptor = new ArrayDescriptor(arrayOfNumberTypeName, connection);
            connection.putDescriptor(arrayOfNumberTypeName, descriptor);
        }
        oracleArray = new ARRAY(descriptor, connection, array);
    } catch (SQLException ex) {
        throw new DataAccessResourceFailureException("SQLException encountered while attempting to retrieve Oracle ARRAY", ex);
    }
    return oracleArray;
}
I suspect that when I check out the connection via connection = (OracleConnection) DataSourceUtils.getConnection(datasource);, it gives me the logical connection, but underneath it makes use of a T4CConnection, releases it, and then looks for the same connection again.
java.lang.Thread.State: BLOCKED (on object monitor)
at oracle.sql.TypeDescriptor.getName(TypeDescriptor.java:682)
- waiting to lock <0x00000000c1356fc8> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.oracore.OracleTypeCOLLECTION.isInHierarchyOf(OracleTypeCOLLECTION.java:149)
at oracle.jdbc.driver.OraclePreparedStatement.processCompletedBindRow(OraclePreparedStatement.java:2063)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3579)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3685)
- locked <0x00000000c14b34f0> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4714)
- locked <0x00000000c14b34f0> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1376)
at org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1066)
at org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1014)
at org.springframework.jdbc.core.JdbcTemplate.call(JdbcTemplate.java:1064)
at org.springframework.jdbc.object.StoredProcedure.execute(StoredProcedure.java:144)
at com.intuit.platform.integration.sdx.da.procedures.subscription.serviceSubscription.LookupRealmSubscriptions.execute(LookupRealmSubscriptions.java:55)
- locked <0x00000000fbd00bc0> (a oracle.jdbc.driver.LogicalConnection)
at com.intuit.platform.integration.sdx.da.ServiceSubscriptionDAOImpl.getRealmServiceSubscriptions(ServiceSubscriptionDAOImpl.java:153)
at com.intuit.platform.integration.sdx.ws.beans.ServiceSubscriptionResourceBean.filterRealmIds(ServiceSubscriptionResourceBean.java:84)
The connection used for the ARRAY is not the same as the connection on which the stored procedure is being executed. You can see this because the T4CConnection that is waiting for a lock (line 3 of the stack trace) has a different ID from the one locked earlier.
Use the answer in How to get current Connection object in Spring JDBC to get your current Connection, and then downcast it to an Oracle connection using https://stackoverflow.com/a/7879073/1395668. You should then be able to create an ARRAY valid for your current connection, and you shouldn't get the deadlock.
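As a sketch of that approach, Spring's AbstractSqlTypeValue lets you build the ARRAY on the very connection the CallableStatement runs on. This assumes SOME_IDS_PARAM is declared on the StoredProcedure as an ARRAY parameter with its type name, and that your driver/pool supports JDBC 4 unwrap() (otherwise use a NativeJdbcExtractor as in the question); schema and someIds come from the question's code:

Map<String, Object> params = new HashMap<>();
params.put(SOME_IDS_PARAM, new AbstractSqlTypeValue() {
    @Override
    protected Object createTypeValue(Connection con, int sqlType, String typeName) throws SQLException {
        // "con" is the same connection the stored procedure executes on,
        // so the descriptor and the call can no longer deadlock across two connections.
        OracleConnection oracleCon = con.unwrap(OracleConnection.class);
        ArrayDescriptor descriptor =
                ArrayDescriptor.createDescriptor(schema + ".ARRAY_OF_NUMBER", oracleCon);
        return new ARRAY(descriptor, oracleCon, someIds);
    }
});
result = super.execute(params);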