JDBC call causes UI to Hang - java

Can somebody please help to optimize the code below?
The problem statement is: I am trying to populate the struct array by looping through the List<Project>, which is causing a performance issue. Is there a way to do it without the loop?
The code below works as expected, but the UI hangs because of the loop. Can somebody please help optimise it?
public BigDecimal saveCSV(String dataSource, int rollNumber, String username, List<Project> projects) throws SQLException {
    Connection conn = getConnection(dataSource);
    Connection nativeConn = doGetNativeConnection(conn);
    nativeConn.setAutoCommit(false);
    CallableStatement cs = nativeConn.prepareCall(ProjectConstants.PROC);

    ArrayDescriptor des = ArrayDescriptor.createDescriptor("PROJECT_DETAILS_TYPE", nativeConn);
    Object[] data = projects.toArray();
    Array array_to_pass = new ARRAY(des, nativeConn, data);

    // build one STRUCT per project
    STRUCT[] structArrayOfProjects = new STRUCT[projects.size()];
    Object[] projObjectArray = null;
    for (int i = 0; i < projects.size(); ++i) {
        Project proj = projects.get(i);
        projObjectArray = new Object[]{proj.name, proj.activity};
        StructDescriptor desc = StructDescriptor.createDescriptor("PROJECT_DETAILS_TYPE", nativeConn);
        STRUCT structprojects = new STRUCT(desc, nativeConn, projObjectArray);
        structArrayOfProjects[i] = structprojects;
    }
    ArrayDescriptor projectTypeArrayDesc = ArrayDescriptor.createDescriptor("PROJECT_DETAILS_TAB_TYPE", nativeConn);
    ARRAY arrayOfProjects = new ARRAY(projectTypeArrayDesc, nativeConn, structArrayOfProjects);

    cs.setArray(1, array_to_pass);
    cs.setInt(2, rollNumber);
    cs.setString(3, username);
    cs.registerOutParameter(4, OracleTypes.ARRAY, "NUMBER_TAB_TYPE");
    cs.registerOutParameter(5, OracleTypes.ARRAY, "PROJECTS_ERROR_TAB_TYPE");
    cs.execute();
    nativeConn.commit();

    Array value = cs.getArray(4);
    BigDecimal[] projDetailsId = (BigDecimal[]) value.getArray();
    BigDecimal rmt_id = null;
    try {
        rmt_id = projDetailsId[0];
    } catch (Exception e) {
        e.printStackTrace();
    }
    return rmt_id;
}

Use a worker thread to perform DB tasks and the UI thread to update your GUI.
Doing I/O and CPU-intensive tasks on the UI thread is discouraged.
As you didn't specify what kind of user interface you are using,
I assume Swing; if so, read this guide on how to handle such tasks.
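For example, a minimal SwingWorker sketch that keeps the JDBC call off the Event Dispatch Thread (dao, statusLabel and the argument values are hypothetical names, not from your code):

new SwingWorker<BigDecimal, Void>() {
    @Override
    protected BigDecimal doInBackground() throws Exception {
        // runs on a background worker thread, so the UI stays responsive
        return dao.saveCSV(dataSource, rollNumber, username, projects);
    }

    @Override
    protected void done() {
        // runs back on the Event Dispatch Thread
        try {
            statusLabel.setText("Saved, id = " + get());
        } catch (Exception e) {
            statusLabel.setText("Save failed: " + e.getMessage());
        }
    }
}.execute();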
UPDATE
After the OP's comment that the environment where the code is running is Spring MVC, here is a suggestion.
The same logic applies to applications deployed in servlet containers.
When you have a long-running task in a request thread, you should use an ExecutorService to create an asynchronous task and return HTTP 202 immediately.
Then you need some polling mechanism to periodically request the completion status (or use a websocket if possible).
Here are some examples: here, here or here.
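A rough sketch of that pattern, assuming a Spring MVC @RestController and a hypothetical ProjectDao wrapping the saveCSV method above (class names, endpoints and argument values are illustrative, not from the original code):

import java.math.BigDecimal;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.*;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
public class ProjectCsvController {

    private final ExecutorService executor = Executors.newFixedThreadPool(4);
    private final Map<String, Future<BigDecimal>> jobs = new ConcurrentHashMap<>();
    private final ProjectDao projectDao; // hypothetical DAO exposing saveCSV

    public ProjectCsvController(ProjectDao projectDao) {
        this.projectDao = projectDao;
    }

    @PostMapping("/projects/csv")
    public ResponseEntity<String> submit(@RequestBody List<Project> projects) {
        String jobId = UUID.randomUUID().toString();
        // hand the long-running JDBC work to a worker thread
        jobs.put(jobId, executor.submit(
                () -> projectDao.saveCSV("myDataSource", 1, "user", projects)));
        // return 202 Accepted immediately; the client polls for the result
        return ResponseEntity.accepted().body(jobId);
    }

    @GetMapping("/projects/csv/{jobId}")
    public ResponseEntity<String> status(@PathVariable String jobId) throws Exception {
        Future<BigDecimal> job = jobs.get(jobId);
        if (job == null) {
            return ResponseEntity.notFound().build();
        }
        return job.isDone()
                ? ResponseEntity.ok("DONE, id = " + job.get())
                : ResponseEntity.ok("IN_PROGRESS");
    }
}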

Related

How many roundtrips are made to a MongoDB server when using transactions?

I wonder how many roundtrips are made to the server when using transactions in MongoDB. For example, if the Java driver is used like this:
ClientSession clientSession = client.startSession();
TransactionOptions txnOptions = TransactionOptions.builder()
        .readPreference(ReadPreference.primary())
        .readConcern(ReadConcern.LOCAL)
        .writeConcern(WriteConcern.MAJORITY)
        .build();
TransactionBody<String> txnBody = new TransactionBody<String>() {
    @Override
    public String execute() {
        MongoCollection<Document> coll1 = client.getDatabase("mydb1").getCollection("foo");
        MongoCollection<Document> coll2 = client.getDatabase("mydb2").getCollection("bar");
        coll1.insertOne(clientSession, new Document("abc", 1));
        coll2.insertOne(clientSession, new Document("xyz", 999));
        return "Inserted into collections in different databases";
    }
};
try {
    clientSession.withTransaction(txnBody, txnOptions);
} catch (RuntimeException e) {
    // some error handling
} finally {
    clientSession.close();
}
In this case two documents are stored in a transaction:
coll1.insertOne(clientSession, new Document("abc", 1));
coll2.insertOne(clientSession, new Document("xyz", 999));
Are the "insert operations" stacked up and sent to the server in one roundtrip or are two calls (or more?) actually made to the server?
Each insert is sent separately. You can use bulk writes to batch write operations together.
The commit at the end is also a separate operation.
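For reference, a minimal bulk-write sketch. Note that this batches writes to a single collection; the two inserts in the question target different collections in different databases, so they would still be sent as separate commands.

import java.util.Arrays;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.InsertOneModel;
import org.bson.Document;

MongoCollection<Document> coll1 = client.getDatabase("mydb1").getCollection("foo");
// both inserts travel to the server in a single bulkWrite command
coll1.bulkWrite(clientSession, Arrays.asList(
        new InsertOneModel<>(new Document("abc", 1)),
        new InsertOneModel<>(new Document("abc", 2))));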

JDBC Oracle stored procedure ref_cursor using vertx

I am trying to read an Oracle stored procedure returning a ref_cursor using Vert.x 3. The same procedure works if I edit it to return a CLOB and use JDBCType.CLOB, but for some reason I have to use a ref_cursor. Can someone please help me?
JDBCClient client = JDBCClient.createShared(vertx, new JsonObject()
        .put("url", "jdbc:oracle:thin:@localhost:8787:TEST")
        .put("driver_class", "oracle.jdbc.OracleDriver")
        .put("user", "user")
        .put("password", "****"));

client.getConnection(connection -> {
    if (connection.succeeded()) {
        SQLConnection con = connection.result();
        JsonObject params = new JsonObject()
                .put("query", "{ call ? := package.procedure(?) }")
                .put("paramsIn", new JsonArray().addNull().add(89))
                .put("paramsOut", new JsonArray().add(JDBCType.REF_CURSOR));
        con.callWithParams(params.getString("query"), params.getJsonArray("paramsIn"), params.getJsonArray("paramsOut"), query -> {
            if (query.succeeded()) {
                ResultSet rs = query.result();
                System.out.println(rs.toJson().toString());
            } else {
                System.out.println(req.body() + query.cause().toString());
            }
        });
    } else {
        System.out.println(connection.cause().toString());
    }
});
and I get the error :
{ call ? := package.procedure(?) } java.sql.SQLException: Type de colonne non valide: 2012
As of Vert.x 3.4.1, cursors are not supported. As a workaround, you could create your own javax.sql.DataSource and use it with Vertx.executeBlocking to invoke a JDBC java.sql.CallableStatement.
For the rest of your queries you can still use Vert.x APIs by creating a JDBCClient instance from your javax.sql.DataSource. This avoids maintaining two different connection pools.
Have you tried using oracle.jdbc.OracleTypes.CURSOR from com.oracle:ojdbc8:12.2.0.1 instead of JDBCType.REF_CURSOR?
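A rough sketch of the workaround above, combining both suggestions: plain JDBC inside Vertx.executeBlocking with OracleTypes.CURSOR for the out parameter. Here dataSource is assumed to be a javax.sql.DataSource you configure yourself, and the column mapping is only illustrative.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;
import io.vertx.core.json.JsonArray;
import io.vertx.core.json.JsonObject;

vertx.<JsonArray>executeBlocking(future -> {
    try (Connection conn = dataSource.getConnection();
         CallableStatement cs = conn.prepareCall("{ ? = call package.procedure(?) }")) {
        cs.registerOutParameter(1, oracle.jdbc.OracleTypes.CURSOR);
        cs.setInt(2, 89);
        cs.execute();
        JsonArray rows = new JsonArray();
        try (ResultSet rs = (ResultSet) cs.getObject(1)) {
            while (rs.next()) {
                // map whichever cursor columns you actually need
                rows.add(new JsonObject().put("col1", rs.getString(1)));
            }
        }
        future.complete(rows);
    } catch (SQLException e) {
        future.fail(e);
    }
}, res -> {
    if (res.succeeded()) {
        System.out.println(res.result().encode());
    } else {
        res.cause().printStackTrace();
    }
});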

JDBC-JobStoreCMT lock when scheduling

I am using WebLogic + Spring + Quartz.
Quartz is configured to use JobStoreCMT.
I noticed that JobStoreCMT acquires a DB lock on the Quartz tables when jobs are scheduled.
Below is the JobStoreCMT snippet
protected Object executeInLock(
        String lockName,
        TransactionCallback txCallback) throws JobPersistenceException {
    boolean transOwner = false;
    Connection conn = null;
    try {
        if (lockName != null) {
            // If we aren't using db locks, then delay getting DB connection
            // until after aquiring the lock since it isn't needed.
            if (getLockHandler().requiresConnection()) {
                conn = getConnection();
            }
            transOwner = getLockHandler().obtainLock(conn, lockName);
        }
        if (conn == null) {
            conn = getConnection();
        }
        return txCallback.execute(conn);
    } finally {
        try {
            releaseLock(conn, LOCK_TRIGGER_ACCESS, transOwner);
        } finally {
            cleanupConnection(conn);
        }
    }
}
After this method runs I can see the triggers and jobs I scheduled inserted into the Quartz tables in the DB.
My question is: why does Quartz need a DB-level lock at this phase?
I would understand needing the lock when the jobs start executing, finish, etc.
Thanks
I found some settings which solved my issue:
I set lockOnInsert to false, because it is true by default.
public void setLockOnInsert(boolean lockOnInsert)
Whether or not to obtain locks when inserting new jobs/triggers. Defaults to true, which is safest - some db's (such as MS SQLServer) seem to require this to avoid deadlocks under high load, while others seem to do fine without.
Setting this property to false will provide a significant performance increase during the addition of new jobs and triggers.
I also set org.quartz.jobStore.acquireTriggersWithinLock to false (the default), not to true as I had configured it initially.
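For reference, a sketch of how those two settings can be supplied as Quartz properties (with Spring the same keys can be passed via SchedulerFactoryBean's quartzProperties; data source, delegate and thread pool settings are omitted here):

import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

Properties props = new Properties();
props.put("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreCMT");
// do not take the DB lock when inserting new jobs/triggers
props.put("org.quartz.jobStore.lockOnInsert", "false");
// acquire triggers without holding the lock (the default)
props.put("org.quartz.jobStore.acquireTriggersWithinLock", "false");
// ...plus the usual data source, driver delegate and thread pool settings

Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();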

Neo4j ExecutionEngine does not return valid results

Trying to use a similar example from the sample code found here
My sample function is:
void query()
{
    String nodeResult = "";
    String rows = "";
    String resultString;
    String columnsString;
    System.out.println("In query");
    // START SNIPPET: execute
    ExecutionEngine engine = new ExecutionEngine( graphDb );
    ExecutionResult result;
    try ( Transaction ignored = graphDb.beginTx() )
    {
        result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
        // END SNIPPET: execute
        // START SNIPPET: items
        Iterator<Node> n_column = result.columnAs( "n" );
        for ( Node node : IteratorUtil.asIterable( n_column ) )
        {
            // note: we're grabbing the name property from the node,
            // not from the n.name in this case.
            nodeResult = node + ": " + node.getProperty( "Name" );
            System.out.println("In for loop");
            System.out.println(nodeResult);
        }
        // END SNIPPET: items
        // START SNIPPET: columns
        List<String> columns = result.columns();
        // END SNIPPET: columns
        // the result is now empty, get a new one
        result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
        // START SNIPPET: rows
        for ( Map<String, Object> row : result )
        {
            for ( Entry<String, Object> column : row.entrySet() )
            {
                rows += column.getKey() + ": " + column.getValue() + "; ";
                System.out.println("nested");
            }
            rows += "\n";
        }
        // END SNIPPET: rows
        resultString = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n.Name" ).dumpToString();
        columnsString = columns.toString();
        System.out.println(rows);
        System.out.println(resultString);
        System.out.println(columnsString);
        System.out.println("leaving");
    }
}
When I run this in the web console I get many results (as there are multiple nodes that have an attribute of Name containing the pattern 79), yet running this code returns no results. The debug print statements 'In for loop' and 'nested' never print either. This must mean no results are found in the Iterator, yet that doesn't make sense.
And yes, I already checked and made sure that the graphDb variable is the same as the path for the web console. I have other code earlier that uses the same variable to write to the database.
EDIT - More info
If I place the contents of query() in the same function that creates my data, I get the correct results. If I run the query by itself it returns nothing. It's almost as if the query works only in the instance where I add the data, and not if I come back to the database cold in a separate instance.
EDIT2 -
Here is a snippet of code that shows the bigger context of how it is being called and sharing the same DBHandle
package ContextEngine;

import ContextEngine.NeoHandle;
import java.util.LinkedList;

/*
 * Class to handle streaming data from any coded source
 */
public class Streamer {

    private NeoHandle myHandle;
    private String contextType;

    Streamer()
    {
    }

    public void openStream(String contextType)
    {
        myHandle = new NeoHandle();
        myHandle.createDb();
    }

    public void streamInput(String dataLine)
    {
        Context context = new Context();
        /*
         * get database instance
         * write to database
         * check for errors
         * report errors & success
         */
        System.out.println(dataLine);
        //apply rules to data (make ContextRules do this, send type and string of data)
        ContextRules contextRules = new ContextRules();
        context = contextRules.processContextRules("Calls", dataLine);
        //write data (using linked list from contextRules)
        NeoProcessor processor = new NeoProcessor(myHandle);
        processor.processContextData(context);
    }

    public void runQuery()
    {
        NeoProcessor processor = new NeoProcessor(myHandle);
        processor.query();
    }

    public void closeStream()
    {
        /*
         * close database instance
         */
        myHandle.shutDown();
    }
}
Now, if I call streamInput AND runQuery in the same instance (parent calls), the query returns results. If I only call runQuery and do not enter ANY data in that instance (yet the web console shows data for the same query), I get nothing. Why would I have to create the nodes and enter them into the database at runtime just to get valid query results? Shouldn't I ALWAYS get the same results with such a query?
You mention that you are using the Neo4j Browser, which comes with Neo4j. However, the example you posted is for Neo4j Embedded, which is the in-process version of Neo4j. Are you sure you are talking to the same database when you try your query in the Browser?
In order to talk to Neo4j Server from Java, I'd recommend looking at the Neo4j JDBC driver, which has good support for connecting to the Neo4j server from Java.
http://www.neo4j.org/develop/tools/jdbc
You can set up a simple connection by adding the Neo4j JDBC jar to your classpath, available here: https://github.com/neo4j-contrib/neo4j-jdbc/releases. Then just use Neo4j like any other JDBC driver:
Connection conn = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
ResultSet rs = conn.executeQuery("start n=node({id}) return id(n) as id", map("id", id));
while (rs.next()) {
    System.out.println(rs.getLong("id"));
}
Refer to the JDBC documentation for more advanced usage.
To answer your question on why the data is not durably stored, it may be one of many reasons. I would attempt to incrementally scale back the complexity of the code to try and locate the culprit. For instance, until you've found your problem, do these one at a time (a minimal sketch of the first two checks follows this list):
Instead of looping through the result, print it using System.out.println(result.dumpToString());
Instead of the regex query, try just MATCH (n) RETURN n, to return all data in the database
Make sure the data you are seeing in the browser is not "old" data inserted earlier on, but really is an insert from your latest run of the Java program. You can verify this by deleting the data via the browser before running the Java program using MATCH (n) OPTIONAL MATCH (n)-[r]->() DELETE n,r;
Make sure you are actually working against the same database directory. You can verify this by leaving the server running: if you can still start your Java program, then unless your Java program is using the Neo4j REST bindings, you are not using the same directory. Two Neo4j databases cannot run against the same database directory simultaneously.
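A minimal sketch of the first two checks combined, using the same graphDb and embedded ExecutionEngine as in the question:

// dump every node the embedded database can see, with no regex filter
try (Transaction tx = graphDb.beginTx()) {
    ExecutionEngine engine = new ExecutionEngine(graphDb);
    System.out.println(engine.execute("MATCH (n) RETURN n").dumpToString());
    tx.success();
}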

How to fix recommendations Failing on SetFetchSize() with Mahout

I have a functioning recommender that I wanted to make faster, so I decided to connect it directly to my database. However, every time I try to recommend things to people I get an error that setFetchSize() requires a value >= 0. Here is my code:
MySQLJDBCDataModel dataModel = null;
try {
    Class.forName("net.sourceforge.jtds.jdbc.Driver");
    net.sourceforge.jtds.jdbcx.JtdsDataSource ds = new net.sourceforge.jtds.jdbcx.JtdsDataSource();
    ds.setServerName("xxxxx");
    ds.setDatabaseName("xxxxx");
    ds.setUser("xxxxx");
    ds.setPassword("xxxxx");
    ds.setDomain("xxxxx");
    //net.sourceforge.jtds.jdbc.JtdsStatement.setFetchSize(10);
    dataModel = new MySQLJDBCDataModel(ds, "test_tbl", "user_id", "item_id", "preference", null);
} catch (Exception e) {
    System.out.println("can't connect");
}

ArrayList<String> itemList = new ArrayList<String>();
ItemSimilarity similarity = new FileItemSimilarity(new File("output/part-r-00000"));
ItemBasedRecommender recommender = new GenericItemBasedRecommender(dataModel, similarity);
//List<RecommendedItem> recommendedItems = recommender.recommend(userid, 10);
Recommender cachingRecommender = new CachingRecommender(recommender);
List<userRecData> allUserRecs = new ArrayList<userRecData>();
List<RecommendedItem> uRec = cachingRecommender.recommend(userobjectid, 10);
And I get the error:
java.sql.SQLException: The setFetchSize method requires a parameter value >= 0.
at net.sourceforge.jtds.jdbc.JtdsStatement.setFetchSize(JtdsStatement.java:998)
at org.apache.mahout.cf.taste.impl.model.jdbc.AbstractJDBCDataModel.getNumThings(AbstractJDBCDataModel.java:584)
at org.apache.mahout.cf.taste.impl.model.jdbc.AbstractJDBCDataModel.getNumUsers(AbstractJDBCDataModel.java:560)
at org.apache.mahout.cf.taste.impl.recommender.CachingRecommender.<init>(CachingRecommender.java:63)
at mia.recommender.RecommenderIntro.getRecommendations(RecommenderIntro.java:79)
at mia.recommender.RecommenderIntro.main(RecommenderIntro.java:43)
It fails on the CachingRecommender, or if I take that out, on recommender.recommend().
I thought Mahout automatically set the fetch size to 1000.
You're using MySQLJDBCDataModel, but your database is SQL Server. The MySQL implementation disables the fetch size with a negative value because its driver needs that. You need to customize AbstractJDBCDataModel to work with SQL Server, for example by not overriding getFetchSize().
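One way to read that suggestion, sketched below: keep MySQLJDBCDataModel's queries but override its MySQL-specific fetch size, which jTDS rejects. The class name and fetch size are illustrative, and the exact override hook may differ between Mahout versions, so check your AbstractJDBCDataModel source before relying on this.

import javax.sql.DataSource;
import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.impl.model.jdbc.MySQLJDBCDataModel;

public class SqlServerJDBCDataModel extends MySQLJDBCDataModel {

    public SqlServerJDBCDataModel(DataSource ds) throws TasteException {
        super(ds, "test_tbl", "user_id", "item_id", "preference", null);
    }

    @Override
    public int getFetchSize() {
        // a plain positive fetch size instead of MySQL's streaming sentinel value
        return 1000;
    }
}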
