neo4j: Limited queries sent in Java

For a website developed under Tomcat, I connect my program to a Neo4j database. The connection is done through JDBC.
My program currently runs locally, and the database is on a remote server.
When Tomcat starts, it first checks whether specific nodes are present, and creates them if they are not.
There are about 135 nodes.
Problem: after about ten queries, the program stops and stays in something like an infinite loop.
I assume I should close something, but what?
Here is my code:
private ResultSet sendCommand(String command) throws SQLException
{
    try (Statement statement = _neo4jConnection.createStatement())
    {
        return statement.executeQuery(command);
    }
}
and a function that calls this code (all functions follow the same structure):
public static Long createNode(NodeLabel labelName)
{
    try
    {
        ResultSet rs = getInstance().sendCommand("CREATE (n:" + labelName + ") RETURN id(n)");
        Long result = rs.next() ? rs.getLong("id(n)") : null;
        rs.close();
        return result;
    } catch (SQLException e) {
        e.printStackTrace();
        return null;
    }
}

In my latest experiment I reused the same statement multiple times; I'm not sure that is the best way, but it seems to work well, too.
https://github.com/peterneubauer/blogs/tree/master/csv_jdbc, code at https://github.com/peterneubauer/blogs/blob/master/csv_jdbc/src/test/java/org/neo4j/jdbctest/JDBCTest.java
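A minimal sketch of that statement-reuse idea, using only the standard java.sql API (the method name and the assumption that _neo4jConnection is an open Connection are mine, not taken from the linked example):

// Sketch: create one Statement up front and reuse it; consume and close each
// ResultSet before sending the next command.
private Long createNodeReusingStatement(Statement statement, NodeLabel labelName) throws SQLException
{
    try (ResultSet rs = statement.executeQuery("CREATE (n:" + labelName + ") RETURN id(n)"))
    {
        return rs.next() ? rs.getLong(1) : null;
    }
}

The caller would create the Statement once (statement = _neo4jConnection.createStatement()), pass it to each call, and close it only after all nodes have been created.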

The only solution I found was to regularly disconnect and reconnect to the database (after about 20 statements). This is awful, but it works.
In the end, however, I gave up on JDBC for Neo4j: too many issues, and catastrophic performance (about 1 s to fetch one simple value), which caused still more issues.
Anyway, thanks for your help.
Niko

Related

Temporary tablespace of CLOB not freed

I have the problem that my Java application exports a large number of CLOBs from a database, but it always runs out of temporary tablespace because the old CLOBs are not freed.
A simplified code example of how I do it:
public void getClobAndDoSomething(oracle.jdbc.OracleCallableStatement pLSQLCodeReturningClob) {
    try (OracleCallableStatement statement = pLSQLCodeReturningClob) {
        statement.registerOutParameter(1, Types.CLOB);
        statement.execute();
        oracle.sql.CLOB clob = statement.getCLOB(1);
        clob.open(CLOB.MODE_READONLY);
        Reader reader = clob.getCharacterStream();
        BufferedReader bufferedReader = new BufferedReader(reader);
        doSomethingWithClob(bufferedReader);
        bufferedReader.close();
        reader.close();
        clob.close();
        clob.freeTemporary();
    } catch (SQLException e) {
        if (e.getErrorCode() == 1652) {
            // Server ran out of temporary tablespace
        } else {
            handleException(e);
        }
    } catch (IOException e) {
        handleException(e);
    }
}
If this method is called in a loop, it will always end up running out of temporary tablespace at some point.
The only reliable way to free the space is by closing the connection and opening a new one (for example by using clob.getInternalConnection().close()), but this would slow down the application and make the current multi-threaded approach unusable.
Sadly, the Oracle documentation on ojdbc was not really helpful, and Google only found articles telling me to use the free() method of LOBs, which is not even implemented by Oracle's temporary CLOBs.
Additional Note:
This issue also occurs when using Oracle's APEXExport.class to export a big workspace.
Driver and System specifics:
OS: Windows 7 Professional x64
Java: 1.8.0_45 64-Bit
ojdbc: 6 (Are there more specific versions?)
Database: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
Test code, if you have an APEX application:
java.sql.Connection con = getConnection();
String gStmtGetAppClob = "begin ? := wwv_flow_utilities.export_application_to_clob(?, ?, ?, ?); end;";
int appId = 100;
while (true) {
    OracleCallableStatement exportApplicationToClob = (OracleCallableStatement) con.prepareCall(gStmtGetAppClob);
    exportApplicationToClob.setString(3, "Y"); // Public reports
    exportApplicationToClob.setString(4, "N"); // Saved reports
    exportApplicationToClob.setString(5, "N"); // Interactive report notifications
    exportApplicationToClob.setBigDecimal(2, new BigDecimal(appId));
    getClobAndDoSomething(exportApplicationToClob);
    try {
        Thread.sleep(50);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}
con.close();
Update:
After more testing I found out that the CLOBs do get freed at some point without closing the connection, so it seems like free() is actually a lazy free. But this can take more than a minute.
I can also convert the CLOB to Clob; I don't know what I was doing wrong earlier. The problem stays unchanged when using Clob.
In the PL/SQL world this would have been handled with a temporary CLOB that is reused inside the loop.
Assuming that you are using java.sql.Clob: it does not seem to have a createTemporary option, but oracle.sql.CLOB does. It also has a freeTemporary() method to clear temp space.
https://docs.oracle.com/cd/E18283_01/appdev.112/e13995/oracle/sql/CLOB.html
Your calling routine can create a temporary CLOB and pass it as a parameter (let's say p_clob) to this method. Assign the return value of the query to p_clob every time instead of creating a new CLOB (e.g. CLOB clob = statement.getCLOB(1)).
I'm short of time right now, but I will add detailed code later. If you can work with the above, then good.
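As a rough, untested sketch of that idea (copyFromResult, appIds and the loop shape are illustrative placeholders, not from the post; createTemporary, truncate and freeTemporary come from the Javadoc linked above):

// Sketch: allocate one temporary CLOB on the client connection and reuse it,
// freeing the temporary segment exactly once at the end.
oracle.sql.CLOB p_clob = oracle.sql.CLOB.createTemporary(con, false, oracle.sql.CLOB.DURATION_SESSION);
try {
    for (int appId : appIds) {                  // appIds is illustrative
        copyFromResult(p_clob, appId);          // hypothetical helper filling p_clob with the export
        doSomethingWithClob(new BufferedReader(p_clob.getCharacterStream()));
        p_clob.truncate(0);                     // empty it for the next iteration
    }
} finally {
    p_clob.freeTemporary();                     // release the temporary tablespace segment
}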

How do I edit MS Access database using Java (NetBeans)

I'm trying to edit an MS Access database using some Java code (running NetBeans 7.2.1). I set up the data source with the ODBC tool, linked it to my database ProjectDatabase, and named the data source DB; then I run the following code:
import java.sql.*;

public class NewMain {
    public static void main(String[] args) {
        try {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            Connection con = DriverManager.getConnection("jdbc:odbc:DB");
            Statement st = con.createStatement();
            String name = "roseindia";
            String address = "delhi";
            int i = st.executeUpdate("insert into user(name,address) values('" + name + "','" + address + "')");
            System.out.println("Row is added");
        }
        catch (Exception e) {
            System.out.println(e);
        }
    }
}
The code runs without an error and prints the "Row is added" message. The problem is that when I go back to view the database, the changes have not taken effect. I have tried this with code for deleting data as well, also to no effect. Has anybody had this problem and knows how to solve it?
I'm running Windows 7 64-bit, Microsoft Office 64-bit with all the 64-bit drivers and I have been unable to find any mention of this problem through web searches.
Thanks in advance for any help =)
First of all, you are not closing the connection, so that is one problem. Also, change your code to:
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
Connection con = DriverManager.getConnection("jdbc:odbc:DB");
Statement st = con.createStatement();
con.setAutoCommit(false); // Notice change here
String name = "roseindia";
String address = "delhi";
int i = st.executeUpdate("insert into user(name,address) values('" + name + "','" + address + "')");
con.commit(); // Notice change here
System.out.println("Row is added");
con.close(); // Notice change here
This will commit the changes to the Access database, so now you should be able to see the data in MS Access.
Read here to learn more about best practices for closing and releasing JDBC resources.
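As an illustration of those practices, here is a minimal sketch of the same insert using try-with-resources (Java 7+) and a PreparedStatement, so the statement and connection are closed even if the update throws; the explicit commit mirrors the answer above:

// Sketch: resources are released automatically when the try block exits
try (Connection con = DriverManager.getConnection("jdbc:odbc:DB");
     PreparedStatement ps = con.prepareStatement("insert into user(name,address) values(?,?)")) {
    con.setAutoCommit(false);
    ps.setString(1, "roseindia");
    ps.setString(2, "delhi");
    ps.executeUpdate();
    con.commit();                    // make the change visible in the Access file
    System.out.println("Row is added");
} catch (SQLException e) {
    e.printStackTrace();
}

Using a PreparedStatement with parameters also avoids building the SQL by string concatenation.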

Java storedProcedure stops with OutOfMemoryError

I'm working on a Java project, running on Tomcat 6, which connects to a MySQL database. All procedures run as they should, both when testing locally and when testing on our customer's server. There is one exception, however: a procedure that retrieves a whole lot of data to generate a report. The stored procedure takes about 13 minutes when executed directly in MySQL. When I run the application locally and connect to the online database, the procedure does work; the only time it doesn't work is when it is run on our client's server.
The client is pretty protective of his server, so we have limited control over it, but they do want us to solve the problem. When I check the log files, no errors are thrown from the function that executes the stored procedure. With some debug logging added to the code, I can see that execution does reach the execute call, but the debug statement right after the call is never logged, nor is the error in the catch block, yet the finally section is reached.
They claim there are no time-out errors in the MySQL logs.
If anyone has an idea of what might cause this problem, any help will be appreciated.
Update:
After some nagging of the server administrator, I finally got access to the Catalina logs, and in those logs I finally found an error that has some meaning:
Exception in thread "Thread-16" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2894)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:117)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:407)
at java.lang.StringBuffer.append(StringBuffer.java:241)
at be.playlane.mink.database.SelectExportDataProcedure.bufferField(SelectExportDataProcedure.java:68)
at be.playlane.mink.database.SelectExportDataProcedure.extractData(SelectExportDataProcedure.java:54)
at org.springframework.jdbc.core.JdbcTemplate.processResultSet(JdbcTemplate.java:1033)
at org.springframework.jdbc.core.JdbcTemplate.extractReturnedResultSets(JdbcTemplate.java:947)
at org.springframework.jdbc.core.JdbcTemplate$5.doInCallableStatement(JdbcTemplate.java:918)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:876)
at org.springframework.jdbc.core.JdbcTemplate.call(JdbcTemplate.java:908)
at org.springframework.jdbc.object.StoredProcedure.execute(StoredProcedure.java:113)
at be.playlane.mink.database.SelectExportDataProcedure.execute(SelectExportDataProcedure.java:29)
at be.playlane.mink.service.impl.DefaultExportService$ExportDataRunnable.run(DefaultExportService.java:82)
at java.lang.Thread.run(Thread.java:636)
It's weird, though, that this doesn't show up in the application logs, even though it is wrapped in a try/catch. Based on the error, the problem lies within these methods:
public Object extractData(ResultSet rs) throws SQLException, DataAccessException
{
    StringBuffer buffer = new StringBuffer();
    try
    {
        // get result set meta data
        ResultSetMetaData meta = rs.getMetaData();
        int count = meta.getColumnCount();
        // get the column names; column indices start from 1
        for (int i = 1; i < count + 1; ++i)
        {
            String name = meta.getColumnName(i);
            bufferField(name, i == count, buffer);
        }
        while (rs.next())
        {
            // get the column values; column indices start from 1
            for (int i = 1; i < count + 1; ++i)
            {
                String value = rs.getString(i);
                bufferField(value, i == count, buffer);
            }
        }
    }
    catch (Exception e)
    {
        logger.error("Failed to extractData SelectExportDataProcedue: ", e);
    }
    return buffer.toString();
}

private void bufferField(String field, boolean last, StringBuffer buffer)
{
    try
    {
        if (field != null)
        {
            field = field.replace('\r', ' ');
            field = field.replace('\n', ' ');
            buffer.append(field);
        }
        if (last)
        {
            buffer.append('\n');
        }
        else
        {
            buffer.append('\t');
        }
    }
    catch (Exception e)
    {
        logger.error("Failed to bufferField SelectExportDataProcedue: ", e);
    }
}
The goal of these functions is to export a certain result set to an Excel file (which happens at a higher level).
So if anyone has some tips on optimising this, they are very welcome.
Ok, your stack trace gives you the answer:
Exception in thread "Thread-16" java.lang.OutOfMemoryError: Java heap space
That's why you're not seeing the log statements: the application (the thread, to be specific) is crashing. Judging from your description, it sounds like you have a massive dataset that needs to be paged.
while (rs.next())
{
    // get the column values; column indices start from 1
    for (int i = 1; i < count + 1; ++i)
    {
        String value = rs.getString(i);
        bufferField(value, i == count, buffer);
    }
}
This is where your thread dies (probably). Basically, your StringBuffer runs out of memory. As for correcting it, there are a huge number of options. Throw more memory at the problem on the client side, either by configuring the JVM (here's a link):
How to set the maximum memory usage for JVM?
or, if you're already doing that, by putting more RAM into the machine.
From a programming perspective, it sounds like this is a hell of a report. You could offload some of the number crunching to MySQL rather than buffering on your end (if possible), or, if this is a giant report, I would consider streaming it to a file and then reading it back via a buffered stream to fill the report (see the sketch below).
It totally depends on what the report is. If it is tiny, I would aim at doing more work in SQL to minimize the result set. If it is a giant report, then buffering to a file is the other option.
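As a rough sketch of the stream-to-file option (not the original code; the method and file name are illustrative, and java.io.* imports are assumed), the row loop would write each field out as it is read instead of accumulating everything in a StringBuffer:

// Sketch: stream rows to a temporary file while reading, instead of holding them all in memory
private File extractDataToFile(ResultSet rs) throws SQLException, IOException
{
    File tmp = File.createTempFile("export", ".tsv");
    try (BufferedWriter out = new BufferedWriter(new FileWriter(tmp)))
    {
        ResultSetMetaData meta = rs.getMetaData();
        int count = meta.getColumnCount();
        while (rs.next())
        {
            for (int i = 1; i <= count; ++i)
            {
                String value = rs.getString(i);
                out.write(value == null ? "" : value.replace('\r', ' ').replace('\n', ' '));
                out.write(i == count ? '\n' : '\t');
            }
        }
    }
    return tmp; // the report layer can read this back with a buffered reader
}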
Another possibility that you might be missing is that the ResultSet (depending on the implementation) is probably buffered. That means that instead of reading it all into strings, maybe your report can take the ResultSet object directly and print from it. The downside to this, of course, is that a stray SQL exception will kill your report.
Best of luck; I'd try the memory options first. You might be running with something hilariously small like 128 MB, and then the fix will be simple (I've seen this happen a lot on remotely administered machines).

Is it safe to gather references to JDBC objects and close them in a loop?

So I'm trying to refactor some code which creates JDBC objects in a loop and doesn't close them out cleanly. My first thought is to create a LinkedList to store the statements, result sets, etc., and then close them in a loop inside a finally block. So, the approach is like:
Connection conn = null;
LinkedList<Statement> statements = new LinkedList<Statement>();
LinkedList<ResultSet> results = new LinkedList<ResultSet>();
try {
    conn = database.getConnection();
    for (String i : arr1) {
        for (String j : arr2) {
            Statement stmt = conn.createStatement();
            statements.add(stmt);
            ResultSet rs = stmt.executeQuery(...);
            results.add(rs);
            // ...work...
        }
    }
}
catch (SQLException ex) { ex.printStackTrace(); }
finally {
    // close all result sets first...
    for (ResultSet rs : results) {
        if (rs != null) try { rs.close(); } catch (SQLException ex) { ex.printStackTrace(); }
    }
    // ...then the statements...
    for (Statement stmt : statements) {
        if (stmt != null) try { stmt.close(); } catch (SQLException ex) { ex.printStackTrace(); }
    }
    // ...and finally the connection
    if (conn != null) try { conn.close(); } catch (SQLException ex) { ex.printStackTrace(); }
}
Is this a reasonable approach? Will this end up causing some kind of leak or problem? Thanks in advance, and please let me know if this belongs rather on codereview.se or somewhere else.
This is IMHO a bad idea for at least three reasons:
Resources aren't cleaned up immediately when they are no longer used. A ResultSet is an expensive resource, and I am not even sure whether you can have several open result sets on one connection (update: you can, see comments).
In this approach you are opening multiple resources at once, which might lead to excessive and unnecessary use of database resources and load peaks. This is especially dangerous if the number of iterations is high.
A special case of the previous point is memory: if either a Statement or a ResultSet holds a lot of memory, keeping unnecessary references to several such objects will cause excessive memory usage.
That being said, consider using existing, safe utility classes like JdbcTemplate (a brief sketch follows). I know it comes from the Spring framework, but you can use it outside of the container (just pass it an instance of DataSource) and never worry about closing JDBC resources again.
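A minimal sketch of that suggestion, assuming Spring's JdbcTemplate and an already configured DataSource (the query, table and column names are illustrative):

// Sketch: JdbcTemplate acquires and releases the Statement and ResultSet for you
JdbcTemplate jdbc = new JdbcTemplate(dataSource);      // dataSource: any javax.sql.DataSource
List<String> names = jdbc.query(
        "SELECT name FROM some_table WHERE category = ?",   // illustrative query
        new Object[] { category },
        new RowMapper<String>() {
            public String mapRow(ResultSet rs, int rowNum) throws SQLException {
                return rs.getString("name");
            }
        });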
Not necessarily a leak, but I could see issues.
My experience with Oracle JDBC (specifically) has taught me that the very best thing to do when handling JDBC resources is to close them in exactly the reverse order that you opened them. Every time. As soon as possible.
Collecting them for later cleanup and releasing them in a different order may cause an issue. I can't cite a specific example, but Oracle seems to be the one that bit me the hardest on this in the past. It is good that you release the ResultSet before the Statement, and the Statement before the Connection, but it may not be enough.
This is indeed bad, because it may force the database to hold on to resources you're no longer using. I've seen cases where failure to close Statement or ResultSet objects (can't remember which; possibly both) caused cursor leak errors in Oracle.
You should do all your work in the try and only close the connection in the finally. That is the standard pattern; a sketch follows.
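As a sketch of that standard pattern with try-with-resources (Java 7+), where the outer try plays the role of the finally for the connection; the query text is a placeholder because the original was elided:

// Sketch: close each statement and result set inside the loop body;
// the connection is closed when the outer try exits.
try (Connection conn = database.getConnection()) {
    for (String i : arr1) {
        for (String j : arr2) {
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {   // placeholder query
                // ...work...
            }
        }
    }
} catch (SQLException ex) {
    ex.printStackTrace();
}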

Inserting or updating multiple records in database in a multi-threaded way in java

I am updating multiple records in a database. Whenever the UI sends the list of records to be updated, I just have to update those records in the database. I am using Spring's JdbcTemplate for that.
Earlier Case
Earlier, whenever I got records from the UI, I just did (a sketch of this call is shown below):
jdbcTemplate.batchUpdate(query, List<Object[]> params)
Whenever there was an exception, I used to roll back the whole transaction.
(Update: is batchUpdate multi-threaded, or faster than separate updates in some way?)
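For reference, a minimal sketch of that earlier batchUpdate call, assuming Spring's JdbcTemplate (the table, columns and getters are illustrative placeholders):

// Sketch: one batched statement; on failure the surrounding transaction was rolled back
List<Object[]> params = new ArrayList<Object[]>();
for (Record record : recordList) {
    params.add(new Object[] { record.getName(), record.getId() });   // hypothetical getters
}
jdbcTemplate.batchUpdate("UPDATE records SET name = ? WHERE id = ?", params);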
Later Case
But later the requirement changed: whenever there is an exception, I should know which records failed to update, and I have to send those records back to the UI with the reason why they failed.
So I had to do something similar to this:
for (Record record : recordList)
{
    try {
        jdbcTemplate.update(sql, param);   // param is the Object[] for this record
    } catch (Exception ex) {
        record.setReason("Exception : " + ex.getMessage());
        continue;
    }
}
So am I doing this the right way, by using the loop?
If yes, can someone suggest how to make it multi-threaded?
Or is there anything wrong with this approach?
To be honest, I was hesitant to use a try/catch block inside the loop :(.
Please correct me; I really want to learn a better way, because I feel there must be one. Thanks.
Turn every update operation into a Callable, collect them, and send the collection to a java.util.concurrent.ThreadPoolExecutor; the pool is multi-threaded.
Make the Callable:
class UpdateTask implements Callable<Exception> {
    // constructor with jdbcTemplate, sql, param goes here.
    @Override
    public Exception call() throws Exception {
        try {
            jdbcTemplate.update(sql, param);   // param is the Object[] for this record
        } catch (Exception ex) {
            return ex;
        }
        return null;
    }
}
Invoke the tasks with:
<T> List<Future<T>> java.util.concurrent.ExecutorService.invokeAll(Collection<? extends Callable<T>> tasks) throws InterruptedException
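A brief usage sketch of that (assumes java.util.* and java.util.concurrent.* imports; the pool size, toParams helper and UpdateTask constructor are illustrative):

// Sketch: run the update tasks on a fixed-size pool and collect the per-record outcome
ExecutorService pool = Executors.newFixedThreadPool(4);
List<UpdateTask> tasks = new ArrayList<UpdateTask>();
for (Record record : recordList) {
    tasks.add(new UpdateTask(jdbcTemplate, sql, toParams(record)));  // hypothetical constructor/helper
}
try {
    List<Future<Exception>> outcomes = pool.invokeAll(tasks);
    for (int i = 0; i < outcomes.size(); i++) {
        Exception ex = outcomes.get(i).get();        // null means that record's update succeeded
        if (ex != null) {
            recordList.get(i).setReason("Exception : " + ex.getMessage());
        }
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();              // restore the interrupt flag
} catch (ExecutionException e) {
    // call() returns exceptions rather than throwing, so this should not normally happen
} finally {
    pool.shutdown();
}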
Your case looks like you should use validation in Java, filter out only the valid data, and send that to the database for updating (a sketch is at the end of this answer).
In the BO layer:
-> filter out the valid records.
-> invalid records should be sent back with some validation text.
In the DAO layer:
-> batch update your record list.
This will give you the best performance.
Never use database insert exceptions as a validation mechanism:
Exceptions are costly, as the stack trace has to be created.
Connecting to the database is another costly process, and getting a connection takes time.
A Java if/else check will run much faster for the same validation.
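A hedged sketch of that filtering idea (isValid, the getters and the SQL are illustrative placeholders, not from the post):

// Sketch: validate in Java first, send only the valid records to the batch update
List<Record> invalid = new ArrayList<Record>();
List<Object[]> params = new ArrayList<Object[]>();
for (Record record : recordList) {
    if (isValid(record)) {                       // hypothetical validation helper
        params.add(new Object[] { record.getName(), record.getId() });
    } else {
        record.setReason("Validation failed");   // these go back to the UI
        invalid.add(record);
    }
}
jdbcTemplate.batchUpdate("UPDATE records SET name = ? WHERE id = ?", params);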
