Deadlocks while passing ARRAY to stored proc - java

While sending an ARRAY to the stored proc we are getting Java-level deadlocks. I am attaching the thread dump.
Found one Java-level deadlock:
=============================
"http-bio-8080-exec-11":
waiting to lock monitor 0x00000000406fb2d8 (object 0x00000000fea1b130, a oracle.jdbc.driver.T4CConnection),
which is held by "http-bio-8080-exec-4"
"http-bio-8080-exec-4":
waiting to lock monitor 0x00000000407d6038 (object 0x00000000fe78b680, a oracle.jdbc.driver.T4CConnection),
which is held by "http-bio-8080-exec-11"
Java stack information for the threads listed above:
===================================================
"http-bio-8080-exec-11":
at oracle.sql.TypeDescriptor.getName(TypeDescriptor.java:682)
- waiting to lock <0x00000000fea1b130> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.oracore.OracleTypeCOLLECTION.isInHierarchyOf(OracleTypeCOLLECTION.java:149)
at oracle.jdbc.driver.OraclePreparedStatement.processCompletedBindRow(OraclePreparedStatement.java:2063)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3579)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3685)
- locked <0x00000000fe78b680> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4714)
- locked <0x00000000fe78b680> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1376)
at org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1066)
at org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1014)
at org.springframework.jdbc.core.JdbcTemplate.call(JdbcTemplate.java:1064)
at org.springframework.jdbc.object.StoredProcedure.execute(StoredProcedure.java:144)
How can I avoid this kind of deadlock?
Code (inside a class extending org.springframework.jdbc.object.StoredProcedure):
Map result;
Map hashMap = new HashMap();
hashMap.put(SOME_IDS_PARAM, getJdbcTemplate().execute(new ConnectionCallback() {
    @Override
    public Object doInConnection(Connection con) throws SQLException, DataAccessException {
        // unwrap the pool's proxy to get the native Oracle connection
        Connection connection = new SimpleNativeJdbcExtractor().getNativeConnection(con);
        ArrayDescriptor descriptor = ArrayDescriptor.createDescriptor(schema + ".ARRAY_OF_NUMBER", connection);
        return new oracle.sql.ARRAY(descriptor, connection, someIds);
    }
}));
result = super.execute(hashMap);
I also tried this approach:
OracleConnection connection = null;
DataSource datasource = null;
Map result;
try {
    datasource = getJdbcTemplate().getDataSource();
    connection = (OracleConnection) DataSourceUtils.getConnection(datasource);
    synchronized (connection) {
        Map hashMap = new HashMap();
        hashMap.put(SOME_IDS_PARAM, getArrayOfNumberValue(someIds, schema, connection));
        result = super.execute(hashMap);
    }
} finally {
    if (null != connection) {
        DataSourceUtils.releaseConnection(connection, datasource);
    }
}
Array :
public ARRAY getArrayOfNumberValue(Integer[] array, String schema, OracleConnection connection) throws DataAccessResourceFailureException {
    String arrayOfNumberTypeName = schema + ARRAY_OF_NUMBER;
    ARRAY oracleArray = null;
    ArrayDescriptor descriptor = null;
    try {
        descriptor = (ArrayDescriptor) connection.getDescriptor(arrayOfNumberTypeName);
        if (null == descriptor) {
            descriptor = new ArrayDescriptor(arrayOfNumberTypeName, connection);
            connection.putDescriptor(arrayOfNumberTypeName, descriptor);
        }
        oracleArray = new ARRAY(descriptor, connection, array);
    } catch (SQLException ex) {
        throw new DataAccessResourceFailureException("SQLException encountered while attempting to retrieve Oracle ARRAY", ex);
    }
    return oracleArray;
}
I suspect that when I check out the connection via connection = (OracleConnection) DataSourceUtils.getConnection(datasource); it gives me a logical connection whose underlying physical connection is a T4CConnection, but that physical connection gets released, and the stored procedure execution then goes looking for the same connection again.
java.lang.Thread.State: BLOCKED (on object monitor)
at oracle.sql.TypeDescriptor.getName(TypeDescriptor.java:682)
- waiting to lock <0x00000000c1356fc8> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.oracore.OracleTypeCOLLECTION.isInHierarchyOf(OracleTypeCOLLECTION.java:149)
at oracle.jdbc.driver.OraclePreparedStatement.processCompletedBindRow(OraclePreparedStatement.java:2063)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3579)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3685)
- locked <0x00000000c14b34f0> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4714)
- locked <0x00000000c14b34f0> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1376)
at org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1066)
at org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1014)
at org.springframework.jdbc.core.JdbcTemplate.call(JdbcTemplate.java:1064)
at org.springframework.jdbc.object.StoredProcedure.execute(StoredProcedure.java:144)
at com.intuit.platform.integration.sdx.da.procedures.subscription.serviceSubscription.LookupRealmSubscriptions.execute(LookupRealmSubscriptions.java:55)
- locked <0x00000000fbd00bc0> (a oracle.jdbc.driver.LogicalConnection)
at com.intuit.platform.integration.sdx.da.ServiceSubscriptionDAOImpl.getRealmServiceSubscriptions(ServiceSubscriptionDAOImpl.java:153)
at com.intuit.platform.integration.sdx.ws.beans.ServiceSubscriptionResourceBean.filterRealmIds(ServiceSubscriptionResourceBean.java:84)

The connection the ARRAY was created on is not the same as the connection on which the stored procedure is being executed. You can see this because the T4CConnection that is waiting for a lock (line 3 of the stack trace) has a different ID from the one locked earlier.
Use the answer in How to get current Connection object in Spring JDBC to get your current Connection, and then downcast it to an Oracle connection using https://stackoverflow.com/a/7879073/1395668. You should then be able to create the ARRAY valid for your current connection, and you shouldn't get the deadlock.
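A minimal sketch of that idea, staying inside Spring (SOME_IDS_PARAM, schema and someIds are the names from the question; unwrap() and createOracleArray() assume a reasonably recent ojdbc driver and a pool that supports unwrapping): wrap the array creation in an AbstractSqlTypeValue, so Spring hands it the very connection the CallableStatement is bound to, and the ARRAY and the call then lock the same T4CConnection.
import java.sql.Connection;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

import oracle.jdbc.OracleConnection;
import org.springframework.jdbc.core.support.AbstractSqlTypeValue;

// inside the class extending StoredProcedure
Map hashMap = new HashMap();
hashMap.put(SOME_IDS_PARAM, new AbstractSqlTypeValue() {
    @Override
    protected Object createTypeValue(Connection con, int sqlType, String typeName) throws SQLException {
        // 'con' is the connection the CallableStatement will execute on
        OracleConnection oracleCon = con.unwrap(OracleConnection.class);
        return oracleCon.createOracleArray(schema + ".ARRAY_OF_NUMBER", someIds);
    }
});
Map result = super.execute(hashMap);
The parameter still has to be declared as an ARRAY, e.g. declareParameter(new SqlParameter(SOME_IDS_PARAM, Types.ARRAY, schema + ".ARRAY_OF_NUMBER")), if it isn't already.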

Related

Use of pooling for my web App in servlet?

I'm using Google App Engine (Java 8) and Servlet 3.1, and I would like to use HikariCP for pooling.
I'll write my logic in pseudo-code for better understanding.
At the moment, when a user hits the servlet, it creates a new connection to the database every time.
So my servlet looks a bit like this:
doGet() {
    DatabaseObject db = new DatabaseObject()
    Connection conn = db.getConnection()
    db.createTable(conn)
    db.readData(conn)
    ...
    conn.close()
}
Now, I've seen many pooling examples like this one, but first, I'm not sure that's what I'm trying to achieve, and I also don't really understand the whole process.
Any examples or explanations are welcome, as I've tried searching the net and couldn't find any for servlets. So maybe I'm thinking in the wrong direction.
That example looks like it stores the pool in the app (servlet) context.
I've done it differently. Usually I create a class, call it MyDb, and then add various methods to it to access data. Within it there is a getConnection() method.
Internally, MyDb has its own connection pool; getConnection() simply returns a connection from the pool. The pool is initialized when the first MyDb is created.
Something like this (this is for App Engine, so no port is specified):
import java.sql.Connection;
import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyDb
{
    private static final Logger logger = LoggerFactory.getLogger( MyDb.class ); // any logging facade will do
    private static DataSource pool = null;

    public MyDb( String dbhost, String dbdsn, String dbuid, String dbpwd )
    {
        try
        {
            if( MyDb.pool == null )
            {
                HikariConfig config = new HikariConfig();
                String dbconn = "jdbc:google:mysql://" + dbhost + "/" + dbdsn;
                String dbclassname = "com.mysql.jdbc.GoogleDriver";
                config.setJdbcUrl( dbconn );
                config.setDriverClassName( dbclassname ); // the driver class was declared but never set
                config.setUsername( dbuid );
                config.setPassword( dbpwd );
                MyDb.pool = new HikariDataSource( config );
            }
        }
        catch( Exception e )
        {
            logger.error( e.getMessage() );
        }
    }

    protected Connection getConnection() throws Exception
    {
        return pool.getConnection();
    }
}
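A minimal usage sketch in the same style (ProductsDb, the products table and the name column are made up for the example): data-access methods live next to getConnection(), and each one borrows a pooled connection with try-with-resources so it always goes back to the pool, even when the query fails.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class ProductsDb extends MyDb
{
    public ProductsDb( String dbhost, String dbdsn, String dbuid, String dbpwd )
    {
        super( dbhost, dbdsn, dbuid, dbpwd );
    }

    public List<String> listProductNames() throws Exception
    {
        List<String> names = new ArrayList<>();
        try( Connection conn = getConnection();            // borrowed from the HikariCP pool
             PreparedStatement ps = conn.prepareStatement( "SELECT name FROM products" );
             ResultSet rs = ps.executeQuery() )
        {
            while( rs.next() )
            {
                names.add( rs.getString( "name" ) );
            }
        }                                                  // connection returned to the pool here
        return names;
    }
}
Your doGet() then just constructs (or reuses) a ProductsDb and calls listProductNames(), instead of opening a raw connection on every request.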

Import data from File to Cassandra Cluster with 5 nodes causes BusyConnectionException

For my thesis, I need to upload data from a file to a Cassandra cluster. With session.execute() it is too slow, so I decided to use session.executeAsync(), but it causes a BusyConnectionException.
Here is my code in Java:
final PoolingOptions poolingOptions = new PoolingOptions();
poolingOptions.setMaxRequestsPerConnection(HostDistance.LOCAL, 32768)
              .setMaxRequestsPerConnection(HostDistance.REMOTE, 32768);
final Cluster cluster = Cluster.builder()
        .withPoolingOptions(poolingOptions)
        .addContactPoint("x.x.x.x")
        .withPort(9042)
        .build();
final Session session = cluster.connect();
System.out.println("session object---" + session.getState());
final String path = "&PathToFile%";
final File dir = new File(path);
session.execute("use products;");
for (final File file : dir.listFiles()) {
    try (BufferedReader br = new BufferedReader(new FileReader(file))) { // close the reader even on errors
        String str;
        final String insert = br.readLine();
        while ((str = br.readLine()) != null) {
            final String query = insert + str.substring(0, str.length() - 1) + "IF NOT EXISTS ;";
            session.executeAsync(query);
        }
    }
}
session.close();
cluster.close();
here are the exceptions that I had when I execute the Code:
Error querying /x.x.x.1:9042 : com.datastax.driver.core.exceptions.BusyPoolException: [/x.x.x.1] Pool is busy (no available connection and the queue has reached its max size 256)
Error querying /x.x.x.2:9042 : com.datastax.driver.core.exceptions.BusyPoolException: [/x.x.x.2] Pool is busy (no available connection and the queue has reached its max size 256)
Error querying /x.x.x.3:9042 : com.datastax.driver.core.exceptions.BusyPoolException: [/x.x.x.3] Pool is busy (no available connection and the queue has reached its max size 256)
Error querying /x.x.x.4:9042 : com.datastax.driver.core.exceptions.BusyPoolException: [/x.x.x.4] Pool is busy (no available connection and the queue has reached its max size 256)
Error querying /x.x.x.5:9042 : com.datastax.driver.core.exceptions.BusyPoolException: [/x.x.x.5] Pool is busy (no available connection and the queue has reached its max size 256)
The busy exception occurs when you put too many requests on one connection. You need to control how many requests are sent. The simplest way is to use a semaphore or something similar. I have a class that wraps the Session and allows controlling the number of in-flight requests, so it behaves asynchronously until you reach the limit, and then blocks until the number of in-flight requests goes back under the limit. You can use my code, or implement something similar.
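A sketch of the idea (the class name and the details are mine, not the actual implementation mentioned above), assuming the DataStax Java driver 3.x and its bundled Guava: a semaphore caps the number of in-flight requests, and each completed request releases a permit.
import java.util.concurrent.Semaphore;

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.MoreExecutors;

public class ThrottledSession {
    private final Session session;
    private final Semaphore permits;

    public ThrottledSession(Session session, int maxInFlight) {
        this.session = session;
        this.permits = new Semaphore(maxInFlight);
    }

    public ResultSetFuture executeAsync(String query) throws InterruptedException {
        permits.acquire(); // blocks once maxInFlight requests are already pending
        ResultSetFuture future = session.executeAsync(query);
        Futures.addCallback(future, new FutureCallback<ResultSet>() {
            @Override public void onSuccess(ResultSet rs) { permits.release(); }
            @Override public void onFailure(Throwable t)  { permits.release(); }
        }, MoreExecutors.directExecutor());
        return future;
    }
}
In the question's loop you would then call the wrapper's executeAsync(query) with a limit of, say, a few hundred to a few thousand, well below the 32768 requests per connection configured above.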
Update: you're using lightweight transactions (LWT) (the IF NOT EXISTS clause), and this heavily affects the performance of your cluster, because every such insert needs to be coordinated with the other nodes...
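For comparison, a hedged sketch of the same loop without the LWT clause (plain inserts in Cassandra are upserts, so this is only appropriate if silently overwriting an existing row is acceptable):
// inside the question's while loop, instead of appending "IF NOT EXISTS"
final String query = insert + str.substring(0, str.length() - 1) + ";";
session.executeAsync(query); // or the throttled wrapper's executeAsync(query) from above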

If I pass a JDBC Connection object's value to another Connection object, is only one connection open? (MySQL DB used) [duplicate]

This question already has answers here:
Difference between creating an instance variable and creating a new object in Java?
(6 answers)
Closed 6 years ago.
My method returns the value of a Connection object.
public class DatabaseConnection
{
    private Connection conn;

    public Connection establishConnection()
    {
        try
        {
            this.readLogin();              // prompt user to enter String values for user, pass, host
            this.createDatabaseIfNeeded(); // make chessleaguedb if not found
            conn = DriverManager.getConnection
                ("jdbc:mysql://" + host + ":3306/chessleaguedb", user, pass);
            System.out.println("Successfully connected to chessleaguedb");
        }
        catch (Exception e)
        {
            // logic
        }
        return conn; // want the logic to handle opening a connection in this class and method, then pass it to a Connection reference in my 'menu' class
    }
}
This method is called in another class and passes the return value to a new Connection object that is used from now on.
public class DBAMenu
{
    // new instance of the class that contains the aforementioned method establishConnection()
    DatabaseConnection startConnection = new DatabaseConnection();
    Connection conn = startConnection.establishConnection();
}
Is there only one database connection being opened here, or two, because I'm returning the value and passing it to a new Connection object?
I've tried using the NetBeans debugger, and the Connection value remains the same after this process, but admittedly I'm not 100% sure what the value column in the NetBeans debugger means.
(Not using Java EE, so I can't use pooling, and I can't use open-source software to handle pooling, as the work must be my own for a year-2 undergrad project.)
If I pass a JDBC Connection object's value to another Connection object, is only one connection open?
You don't have 'another Connection object'. You have another reference of type Connection. Both references refer to the same object.
You have used only one connection as you have invoked conn = startConnection.establishConnection(); only once.
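A tiny illustration of that point (names borrowed from the question):
DatabaseConnection startConnection = new DatabaseConnection();
Connection conn = startConnection.establishConnection(); // the only call that opens a connection
Connection sameConn = conn;                              // copies the reference, not the connection
System.out.println(conn == sameConn);                    // true: one object, one physical connection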
You also need to close the connection, otherwise you will create connection leaks.
You can use it like below:
try (Connection conn = startConnection.establishConnection()) {
    // actual logic to perform db operations
} catch (SQLException sqlexe) {
    // log exceptions
}
P.S.: It is not best practice to handle connections explicitly/manually like this; rather, try to implement connection pooling.

The Java servlet calls never reach the servlet

I have a Java web application which I have deployed on the WildFly 10 web server. The application works fine most of the time, but very unexpectedly, calls to the Java servlet never reach the servlet.
I became curious and analyzed the thread dump of the WildFly server in VisualVM. Although I am not an expert at analyzing thread dumps, I suspect that some thread locks are occurring, due to which the task thread for that servlet call never executes and keeps waiting forever.
Right now I don't know whether this is a problem on the application side. I suspect it is a problem with the servlet container configuration, which I have left at its defaults, or perhaps some WildFly bug, which I hope it isn't. Please reply.
This is my login servlet code:
response.setContentType("application/json");
UserInfo user = null;
boolean authenticated = false;
String message = "";
String ipAddress = request.getHeader("X-FORWARDED-FOR");
if (ipAddress == null) {
    ipAddress = request.getRemoteAddr();
}
try {
    ApplicationHelper.clearSession(request);
    String body = request.getReader().lines().reduce("", (accumulator, actual) -> accumulator + actual);
    HashMap inputDataMap = new ObjectMapper().readValue(body, HashMap.class);
    String userName = (String) inputDataMap.get("username");
    String password = (String) inputDataMap.get("password");
    user = UserDataProvider.verifyEncryptedAccount(userName, password);
    if (user != null) {
        UserDataProvider.updateLoginStatus(user.getIdKey(), request.getSession().getId(), ipAddress, true);
        request.getSession(true).setAttribute("userInfo", user);
        authenticated = true;
        message = MPHLTHConstants.Success;
    } else {
        throw new InsufficientAccessException("Insufficient access");
    }
} catch (Exception ex) {
    authenticated = false;
    if (ex instanceof ApplicationException) {
        message = ex.getMessage();
    }
    ExceptionDataProvider.logException(ex, request, user);
} finally {
    try {
        Response objResponse = new Response(user, message, authenticated, 1);
        Map<String, String[]> jsonFilters = new HashMap<>();
        jsonFilters.put("ResponseFilter", new String[0]);
        jsonFilters.put("UserInfoFilter", new String[0]);
        JSONHelper.writeJSONResponse(objResponse, response, jsonFilters);
    } catch (Exception ex) {
        ExceptionDataProvider.logException(ex, request, user);
    }
}
These are the threads where I saw the locks; I saw several of them at different times, and they didn't change over time:
"default task-64" #206 prio=5 os_prio=0 tid=0x000000001c59b800 nid=0x5608 waiting for monitor entry [0x000000001f8bd000] java.lang.Thread.State: BLOCKED (on object monitor)
at java.io.PrintStream.println(PrintStream.java:805)
- waiting to lock <0x00000000e0058f58> (a java.io.PrintStream)
at org.jboss.stdio.StdioContext$DelegatingPrintStream.println(StdioContext.java:474)
and this one:
> "default task-61" #203 prio=5 os_prio=0 tid=0x000000001c599000 nid=0x4934 runnable [0x000000001f5bd000]
java.lang.Thread.State: RUNNABLE
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
- locked <0x00000000e0aeb790> (a java.io.BufferedOutputStream)
at java.io.PrintStream.write(PrintStream.java:482)
- locked <0x00000000e0aeb770> (a java.io.PrintStream)
at org.jboss.logmanager.handlers.UncloseableOutputStream.write(UncloseableOutputStream.java:44)
at org.jboss.logmanager.handlers.UninterruptibleOutputStream.write(UninterruptibleOutputStream.java:84)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
- locked <0x00000000e0aeb738> (a java.io.OutputStreamWriter)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
at java.io.BufferedWriter.flush(BufferedWriter.java:254)
- locked <0x00000000e0aeb738> (a java.io.OutputStreamWriter)
at org.jboss.logmanager.handlers.WriterHandler.safeFlush(WriterHandler.java:170)
at org.jboss.logmanager.handlers.WriterHandler.flush(WriterHandler.java:139)
- locked <0x00000000e0aeb700> (a java.lang.Object)
at org.jboss.logmanager.ExtHandler.doPublish(ExtHandler.java:104)
at org.jboss.logmanager.handlers.WriterHandler.doPublish(WriterHandler.java:67)
- locked <0x00000000e0aeb700> (a java.lang.Object)
at org.jboss.logmanager.ExtHandler.publish(ExtHandler.java:76)
at org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:314)
at org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:322)
at org.jboss.logmanager.Logger.logRaw(Logger.java:850)
at org.jboss.logmanager.Logger.log(Logger.java:596)
at org.jboss.stdio.AbstractLoggingWriter.write(AbstractLoggingWriter.java:71)
- locked <0x00000000e0058fb8> (a java.lang.StringBuilder)
at org.jboss.stdio.WriterOutputStream.finish(WriterOutputStream.java:143)
at org.jboss.stdio.WriterOutputStream.flush(WriterOutputStream.java:164)
- locked <0x00000000e0059128> (a sun.nio.cs.SingleByte$Decoder)
at java.io.PrintStream.write(PrintStream.java:482)
- locked <0x00000000e0058f58> (a java.io.PrintStream)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104)
- locked <0x00000000e00579c0> (a java.io.OutputStreamWriter)
at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185)
at java.io.PrintStream.newLine(PrintStream.java:546)
- locked <0x00000000e0058f58> (a java.io.PrintStream)
at java.io.PrintStream.println(PrintStream.java:807)
- locked <0x00000000e0058f58> (a java.io.PrintStream)
at org.jboss.stdio.StdioContext$DelegatingPrintStream.println(StdioContext.java:474)
This issue is finally resolved. It was found that the thread locks were occurring due to println() statements present in several places in the application. The logging subsystem in WildFly 10 and the println() statements were both locking the standard output stream and were eventually getting into a deadlock.
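A hedged illustration of the kind of change that removes the contention (the class and method names are made up for the example): route diagnostics through a logger instead of System.out.println(), so the output goes through WildFly's logging subsystem instead of contending on the shared System.out monitor.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoginAudit {
    private static final Logger LOG = LoggerFactory.getLogger(LoginAudit.class);

    void logLoginAttempt(String userName, String ipAddress) {
        // parameterized logging; nothing is concatenated when DEBUG is disabled
        LOG.debug("Login attempt for {} from {}", userName, ipAddress);
    }
}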

Can I really read a LOB through a closed connection?

I use WildFly 8.2.0.Final. There is a connection pool (Oracle) on this server.
Look at the following code:
public ArrayList<HashMap<String, Object>> fetchSome(String query)
        throws OracleQueryProcessorException {
    ArrayList<HashMap<String, Object>> result = new ArrayList<HashMap<String, Object>>();
    try {
        Context initCtx = new InitialContext();
        DataSource ds = (DataSource) initCtx.lookup(driver);
        try (Connection con = ds.getConnection();
             PreparedStatement stmt = con.prepareStatement(query)) {
            try (ResultSet rs = stmt.executeQuery()) {
                ResultSetMetaData rsmd = rs.getMetaData();
                rs.next();
                HashMap<String, Object> row = new HashMap<String, Object>();
                String name = rsmd.getColumnName(1);
                Object value = rs.getObject(1);
                if (value instanceof Blob) {
                    Blob bl = (Blob) value;
                    if (bl.length() > 0)
                        value = bl.getBinaryStream();
                    else
                        value = null;
                }
                row.put(name, value);
                result.add(row);
            }
        } catch (SQLException e) {
            throw new OracleQueryProcessorException();
        }
    } catch (NamingException e) {
        throw new OracleQueryProcessorException();
    }
    return result;
}
And this is usage of this function:
InputStream is = (InputStream) fetchSome("SELECT BLOB_FIELD FROM TEST WHERE ID = 1").get(0).get("BLOB_FIELD");
if (is != null) {
    byte[] a = new byte[3];
    is.read(a);
}
Reading from this stream works!! How can it work? The connection is closed (because of the try-with-resources clause), and reading from this stream takes no connection from the pool (all of the pool's connections are available).
fetchSome() opens a Connection, sends the query, and then reads the data back into the resulting ArrayList. Then fetchSome closes the Connection and returns the ArrayList. The code you are curious about then reads from the ArrayList that was returned, not from the Connection that was, as you correctly noticed, closed.
By the time your method returns, all database communication has finished, and all the data has been copied into the returned list, from which it can then be read as often and as late as you want, without needing a Connection again.
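If you want to be on the safe side regardless of driver behaviour, a hedged variation of the fetchSome() loop (not what the poster did) is to materialize the BLOB into a byte[] while the connection is still open, instead of handing out bl.getBinaryStream():
if (value instanceof Blob) {
    Blob bl = (Blob) value;
    value = (bl.length() > 0)
            ? bl.getBytes(1, (int) bl.length()) // copy the content before the connection closes; fine for LOBs that fit in memory
            : null;
}
row.put(name, value);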
Does it really work for various BLOB sizes? Good thresholds to test are:
4000 bytes (the limit below which a BLOB may be inlined in the row rather than stored aside)
2000 bytes (the maximum size for RAW) - a BLOB can be cast to RAW somewhere
16 KB, 32 KB
some huge value bigger than the JVM heap size
AFAIK, at the OCI level (the C client library) LOBs can be "prefetched", i.e. a smaller portion of the BLOB can be sent to the client although the client has not requested it yet. This should reduce the number of round-trips between the database and the client.
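A hedged illustration of that prefetch knob on the JDBC side (the URL and credentials are placeholders): if I recall correctly, the thin driver exposes it as the connection property oracle.jdbc.defaultLobPrefetchSize, so roughly that many bytes of each LOB travel with the row itself.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

Properties props = new Properties();
props.setProperty("user", "scott");      // placeholder credentials
props.setProperty("password", "tiger");
props.setProperty("oracle.jdbc.defaultLobPrefetchSize", "32768"); // bytes of each LOB fetched along with the row
Connection con = DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/ORCL", props);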
Also, you should check the v$instance view to see whether the connection really was closed. Cooperation between JDBC and Oracle is tricky sometimes.
For example, temporary LOBs created via Connection.createBlob() are treated differently by the database than any other temporary LOBs. I think it is because the Oracle database cannot talk to the JVM GC and does not know when the Java instance really was disposed of, so these LOBs are kept in the database "forever".
