OutOfMemoryErrors on Production Servers - Java

Can the following code snippets leak memory?
A
BufferedWriter logoutput;
FileWriter fstream = null;
try {
    Calendar cal = Calendar.getInstance();
    SimpleDateFormat sdf = new SimpleDateFormat(DATE_FILE_FORMAT_NOW);
    fstream = new FileWriter("..\\GetInfoLogs\\" + sdf.format(cal.getTime()) + ".log", true);
    logoutput = new BufferedWriter(fstream);
    logoutput.write(statement);
    // Missing fstream.close();
    logoutput.close();
} catch (IOException e) {
    System.err.println("Unable to write to file");
}
B
String[] info = {"", ""};
try {
    conn.setAutoCommit(false);
    Statement stmt = conn.createStatement();
    ResultSet rset = stmt.executeQuery("select ....");
    boolean hasRows = rset.next();
    if (!hasRows) {
        stmt.close();
        return info;
    } else {
        info[0] = rset.getString(1);
        info[1] = rset.getString(2);
    }
    // MISSING rset.close();
    stmt.close();
} catch (SQLException e) {
    logTransaction(service, "error at getPFpercentage: " + e.getMessage() + " ");
}

I would recommend using YourKit Java Profiler, as it is a very intuitive and easy-to-use tool.
Start your application locally, connect the profiler to it, and run through some of your application's use cases.

No, they can't.
Objects are collected by the garbage collector when there are no longer any live references to them in a program. This is in general unrelated to whether they have been closed.
The only way closing an object (or calling any other method on it) could affect its eligibility for collection would be if some global structure held a reference to the object, and closing it had the side effect of removing it from that structure. I am not aware of any such structure in the JDK's IO libraries. Indeed, IO classes in the JDK are generally designed to close themselves when they get garbage collected, which would be pretty futile if their being open prevented them from being collected.
Database classes like Connection are a bit trickier, because they have implementations provided by the JDBC driver. It is possible that a poorly-written JDBC driver would prevent unclosed objects from being collected. It seems unlikely, though, as that would be a huge screwup, frankly.
You can use the JDK's jmap tool to get a heap dump of a running application (for example, jmap -dump:format=b,file=heap.bin <pid>). You can then analyse the dump to work out why your application is using so much memory. Be warned that the dump files are huge (bigger than the dumped heap), and analysing them is a real pain. A colleague of mine has had good results using the Eclipse Memory Analyzer plugin.
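Incidentally, on Java 7 and later you can sidestep the close bookkeeping entirely with try-with-resources. Here is a minimal sketch of snippet A (it assumes statement and DATE_FILE_FORMAT_NOW are defined as in the question); closing the BufferedWriter also closes the FileWriter it wraps:

Calendar cal = Calendar.getInstance();
SimpleDateFormat sdf = new SimpleDateFormat(DATE_FILE_FORMAT_NOW);
try (BufferedWriter logoutput = new BufferedWriter(
        new FileWriter("..\\GetInfoLogs\\" + sdf.format(cal.getTime()) + ".log", true))) {
    logoutput.write(statement); // writer (and the underlying FileWriter) closed automatically
} catch (IOException e) {
    System.err.println("Unable to write to file");
}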

Related

Temporary tablespace of CLOB not freed

My problem is that my Java application exports a large number of CLOBs from a database, but it always runs out of temporary tablespace because the old CLOBs are not freed.
A simplified code example of how I do it:
public void getClobAndDoSomething(oracle.jdbc.OracleCallableStatement pLSQLCodeReturningClob) {
    try (OracleCallableStatement statement = pLSQLCodeReturningClob) {
        statement.registerOutParameter(1, Types.CLOB);
        statement.execute();
        oracle.sql.CLOB clob = statement.getCLOB(1);
        clob.open(CLOB.MODE_READONLY);
        Reader reader = clob.getCharacterStream();
        BufferedReader bufferedReader = new BufferedReader(reader);
        doSomethingWithClob(bufferedReader);
        bufferedReader.close();
        reader.close();
        clob.close();
        clob.freeTemporary();
    } catch (SQLException e) {
        if (e.getErrorCode() == 1652) {
            // Server ran out of temporary tablespace
        } else {
            handleException(e);
        }
    } catch (IOException e) {
        handleException(e);
    }
}
If this method is called in a loop, it will always end up running out of temporary tablespace at some point.
The only reliable way to free the space is to close the connection and open a new one (for example via clob.getInternalConnection().close()), but this would slow down the application and make the current multi-threaded approach unusable.
Sadly, the Oracle documentation on ojdbc was not really helpful, and Google only found articles telling me to use the free() method of LOBs, which is not even implemented by Oracle's temporary CLOBs.
Additional Note:
This issue also occurs when using Oracle's APEXExport.class to export a big workspace.
Driver and System specifics:
OS: Windows 7 Professional x64
Java: 1.8.0_45 64-Bit
ojdbc: 6 (Are there more specific versions?)
Database: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
Test code if you have an APEX application:
java.sql.Connection con = getConnection();
String gStmtGetAppClob = "begin ? := wwv_flow_utilities.export_application_to_clob(?, ?, ?, ?); end;";
int appId = 100;
while (true) {
    OracleCallableStatement exportApplicationToClob = (OracleCallableStatement) con.prepareCall(gStmtGetAppClob);
    exportApplicationToClob.setString(3, "Y"); // Public reports
    exportApplicationToClob.setString(4, "N"); // Saved reports
    exportApplicationToClob.setString(5, "N"); // Interactive report notifications
    exportApplicationToClob.setBigDecimal(2, new BigDecimal(appId));
    getClobAndDoSomething(exportApplicationToClob);
    try {
        Thread.sleep(50);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}
con.close();
Update:
After more testing I found out that the CLOBs do get freed at some point without closing the connection, so it seems free() actually behaves like a lazy free. But this can take more than a minute.
I can also convert the CLOB to Clob; I don't know what I was doing wrong earlier. The problem stays unchanged when using Clob.
In the PL/SQL world this would be handled with one temporary CLOB that is reused inside the loop.
Assuming that you are using java.sql.Clob, it does not seem to have a createTemporary option, but oracle.sql.CLOB does. It also has a freeTemporary() method to clear temp space.
https://docs.oracle.com/cd/E18283_01/appdev.112/e13995/oracle/sql/CLOB.html
Your calling routine can create a temporary CLOB and pass it as a parameter (let's say p_clob) to this method. Assign the query's return value to p_clob every time instead of creating a new CLOB (e.g. CLOB clob = statement.getCLOB(1)).
I'm short on time right now, but will edit in detailed code later. If you can work with the above, then good.
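In the meantime, a rough sketch of the idea (assuming the oracle.sql.CLOB API from the ojdbc driver; moreWorkToDo() is a hypothetical placeholder for your loop condition):

// Create one session-duration temporary CLOB up front and free it
// explicitly when the loop is done, instead of letting each call
// allocate a fresh temporary CLOB in temp tablespace.
oracle.sql.CLOB tempClob =
        oracle.sql.CLOB.createTemporary(con, true, oracle.sql.CLOB.DURATION_SESSION);
try {
    while (moreWorkToDo()) { // hypothetical loop condition
        // ... copy the query result into tempClob and process it ...
    }
} finally {
    tempClob.freeTemporary(); // releases the temp tablespace segment
}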

Java exception out of memory

This is extremely strange. I am getting an out of memory exception when trying to create a file. However, there is more than enough space on the disk for the file to be created.
The application is running on Citrix (not sure if that is even relevant).
I have no idea where to even start to debug this since I can clearly see that there is space on the disk.
The file I'm trying to create is 4 KB and named history.db
Any ideas here?
This is the code I'm using to create the file:
try {
    String databaseFileLocation = "";
    String fileSeparator = System.getProperty("file.separator");
    String homeDir = System.getProperty("user.home");
    File myAppDir = new File(homeDir, ".imbox");
    if (System.getProperty("os.name").contains("Windows")) {
        databaseFileLocation = "jdbc:sqlite:" + myAppDir + fileSeparator + "history_" + agentID + ".db";
    } else if (System.getProperty("os.name").contains("Mac")) {
        databaseFileLocation = "jdbc:sqlite:history_" + agentID + ".db";
    }
    Class.forName("org.sqlite.JDBC");
    Connection conn = DriverManager.getConnection(databaseFileLocation);
    Statement stat = conn.createStatement();
    stat.executeUpdate("CREATE TABLE IF NOT EXISTS visitorInfo (channelID text UNIQUE, currentPage, userCountry, userCity, org);");
    stat.executeUpdate("CREATE TABLE IF NOT EXISTS chatHistory (channelID, sender, message, recipient, time);");
    stat.executeUpdate("ALTER TABLE visitorInfo ADD COLUMN visitorTag;");
} catch (Exception eef) {
    eef.printStackTrace();
    final ImageIcon icon = new javax.swing.ImageIcon(getClass().getResource("/resources/warning_icon.gif"));
    JOptionPane.showMessageDialog(null, "Failed to create database file.\n\nError description:\n" + eef.getMessage(), "Error when creating database file", JOptionPane.WARNING_MESSAGE, icon);
}
I can only say so much given such a small piece of code and no stack trace, but judging by how this code is written I'll make a few assumptions.
Resources (the Connection and Statement) are opened and never closed here; if that is happening here, I can imagine the same thing is happening elsewhere in the code, in which case there is likely other unsafe code around.
With so little information, the best I can say is that if the code has been running a while, you have a memory leak in there somewhere, probably involving resources. If the application has been started and restarted more than a few times, it is probably not shutting down cleanly, and old instances of the program are hanging around.
To properly figure this out, we would need more code and/or a stack trace. Running jps on the machine to list leftover Java processes might help as well; otherwise we don't have enough information to say more.
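As a side note, a minimal sketch of the same setup with try-with-resources (Java 7+, assuming the same org.sqlite.JDBC driver and that databaseFileLocation is built as in the question), so the connection and statement are always released:

Class.forName("org.sqlite.JDBC");
try (Connection conn = DriverManager.getConnection(databaseFileLocation);
     Statement stat = conn.createStatement()) {
    stat.executeUpdate("CREATE TABLE IF NOT EXISTS visitorInfo (channelID text UNIQUE, currentPage, userCountry, userCity, org);");
    stat.executeUpdate("CREATE TABLE IF NOT EXISTS chatHistory (channelID, sender, message, recipient, time);");
    // both stat and conn are closed automatically, even on exceptions
}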

Create a database / execute a bunch of mysql statements from Java

I have a library that needs to create a schema in MySQL from Java. Currently, I have a dump of the schema that I just pipe into the mysql command. This works okay, but it is not ideal because:
It's brittle: the mysql command needs to be on the path, which usually doesn't work on OS X or Windows without additional configuration.
It's also brittle because the schema is stored as statements, not descriptively.
Java can already access the MySQL database, so it seems silly to depend on an external program for this.
Does anyone know of a better way to do this? Perhaps...
I can read the statements in from the file and execute them directly from Java? Is there a way to do this that doesn't involve parsing semicolons and dividing up the statements manually?
I can store the schema in some other way, either as a config file or directly in Java, not as statements (in the style of Rails' db:schema or database.yml), and there is a library that will create the schema from this description?
Here is a snippet of the existing code, which works (when mysql is on the path):
if (db == null) throw new Exception("Need database name!");
String userStr = user == null ? "" : String.format("-u %s ", user);
String hostStr = host == null ? "" : String.format("-h %s ", host);
String pwStr = pw == null ? "" : String.format("-p%s ", pw);
String cmd = String.format("mysql %s %s %s %s", hostStr, userStr, pwStr, db);
System.out.println(cmd + " < schema.sql");
final Process pr = Runtime.getRuntime().exec(cmd);
new Thread() {
    public void run() {
        try (OutputStream stdin = pr.getOutputStream()) {
            Files.copy(f, stdin);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}.start();
new Thread() {
    public void run() {
        try (InputStream stdout = pr.getInputStream()) {
            ByteStreams.copy(stdout, System.out);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}.start();
int exitVal = pr.waitFor();
if (exitVal == 0)
    System.out.println("Create db succeeded!");
else
    System.out.println("Exited with error code " + exitVal);
The short answer (as far as I know) is no.
You will have to do some parsing of the file into separate statements.
I have faced the same situation, and you can find many questions on this topic here on SO.
Some of them show a parser; others point to tools from Apache that can convert the schema to an XML format and then read it back. A naive splitter is sketched below.
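For illustration, a minimal splitter sketch: it reads the dump, splits on semicolons at line ends, and runs each statement over an existing JDBC connection. It does NOT handle semicolons inside string literals, stored-procedure bodies, or DELIMITER directives, which is exactly why a real parser (or the mysql client) is safer:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.sql.*;

public class SchemaLoader {
    public static void load(Connection conn, Path schemaFile)
            throws IOException, SQLException {
        String sql = new String(Files.readAllBytes(schemaFile), StandardCharsets.UTF_8);
        try (Statement stmt = conn.createStatement()) {
            // naive split: a semicolon followed by a line break ends a statement
            for (String raw : sql.split(";\\s*\\n")) {
                String statement = raw.trim();
                if (!statement.isEmpty()) {
                    stmt.execute(statement);
                }
            }
        }
    }
}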
My main intention in writing this answer is to say that I chose to use the command line in the end:
Extra configuration: it may be additional work, but you can select the path by config or at runtime based on the system you are running on. You do the effort one time and you are done.
Depending on an external tool: it is not as bad as it seems, and you get some benefits too.
1. You don't need to write extra code or introduce additional libraries just for parsing the schema commands.
2. The tool is provided by the vendor; it is probably better debugged and tested than any other code that would do the parsing.
3. It is safer in the long run. Any additions or changes in the dump format that might break a hand-rolled parser will most probably be supported by the tool that comes with the database release, so you won't need to change your code.
4. The nature of the action you are using the tool for (creating a schema) does not suggest frequent usage, minimizing the risk of it becoming a performance bottleneck.
I hope you can find the best solution for your needs.
Check out Yank, and more specifically the code examples linked to on that page. It's a lightweight persistence layer built on top of DBUtils, and it hides all the nitty-gritty details of handling connections and result sets. You can easily load a config file as you mentioned, and you can store and load SQL statements from a properties file and/or hard-code them in your code.

Java storedProcedure stops with OutOfMemoryError

I'm working on a Java project, running on Tomcat 6, which connects to a MySQL database. All procedures run as they should, both when testing locally and when testing on our customer's server. There is one exception, however: a procedure which retrieves a whole lot of data to generate a report. The stored procedure takes around 13 minutes when executed from MySQL directly. When I run the application locally and connect to the online database, the procedure does work; the only time it doesn't work is when it is run on the client's server.
The client is pretty protective of his server, so we have limited control over it, but they do want us to solve the problem. When I check the log files, no errors are thrown from the function that executes the stored procedure. Adding some debug logging shows that execution does reach the execute call, but neither the debug statement right after the call nor the error log in the catch block is ever written, although the finally section is reached.
They claim there are no time-out errors in the MySQL logs.
If anyone has any idea on what might cause this problem, any help will be appreciated.
Update:
After some nagging at the server administrator, I've finally got access to the Catalina logs, and in those logs I've found an error that has some meaning:
Exception in thread "Thread-16" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2894)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:117)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:407)
at java.lang.StringBuffer.append(StringBuffer.java:241)
at be.playlane.mink.database.SelectExportDataProcedure.bufferField(SelectExportDataProcedure.java:68)
at be.playlane.mink.database.SelectExportDataProcedure.extractData(SelectExportDataProcedure.java:54)
at org.springframework.jdbc.core.JdbcTemplate.processResultSet(JdbcTemplate.java:1033)
at org.springframework.jdbc.core.JdbcTemplate.extractReturnedResultSets(JdbcTemplate.java:947)
at org.springframework.jdbc.core.JdbcTemplate$5.doInCallableStatement(JdbcTemplate.java:918)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:876)
at org.springframework.jdbc.core.JdbcTemplate.call(JdbcTemplate.java:908)
at org.springframework.jdbc.object.StoredProcedure.execute(StoredProcedure.java:113)
at be.playlane.mink.database.SelectExportDataProcedure.execute(SelectExportDataProcedure.java:29)
at be.playlane.mink.service.impl.DefaultExportService$ExportDataRunnable.run(DefaultExportService.java:82)
at java.lang.Thread.run(Thread.java:636)
Weird, though, that this doesn't log to the application logs, even though it is wrapped in a try-catch. Now, based upon the error, the problem lies within these methods:
public Object extractData(ResultSet rs) throws SQLException, DataAccessException {
    StringBuffer buffer = new StringBuffer();
    try {
        // get result set meta data
        ResultSetMetaData meta = rs.getMetaData();
        int count = meta.getColumnCount();
        // get the column names; column indices start from 1
        for (int i = 1; i < count + 1; ++i) {
            String name = meta.getColumnName(i);
            bufferField(name, i == count, buffer);
        }
        while (rs.next()) {
            // get the column values; column indices start from 1
            for (int i = 1; i < count + 1; ++i) {
                String value = rs.getString(i);
                bufferField(value, i == count, buffer);
            }
        }
    } catch (Exception e) {
        logger.error("Failed to extractData SelectExportDataProcedue: ", e);
    }
    return buffer.toString();
}
private void bufferField(String field, boolean last, StringBuffer buffer) {
    try {
        if (field != null) {
            field = field.replace('\r', ' ');
            field = field.replace('\n', ' ');
            buffer.append(field);
        }
        if (last) {
            buffer.append('\n');
        } else {
            buffer.append('\t');
        }
    } catch (Exception e) {
        logger.error("Failed to bufferField SelectExportDataProcedue: ", e);
    }
}
The goal of these functions is to export a certain result set to an Excel file (which happens at a higher level).
So if anyone has tips on optimising this, they are very welcome.
Ok, your stack trace gives you the answer:
Exception in thread "Thread-16" java.lang.OutOfMemoryError: Java heap space
That's why you're not logging: the thread is crashing, and an OutOfMemoryError is an Error, not an Exception, so your catch (Exception e) blocks never see it. Judging from your description, it sounds like you have a massive dataset that needs to be paged.
while (rs.next()) {
    // get the column values; column indices start from 1
    for (int i = 1; i < count + 1; ++i) {
        String value = rs.getString(i);
        bufferField(value, i == count, buffer);
    }
}
This is where your thread dies (probably): your StringBuffer runs out of memory. As for correcting it, there are plenty of options. You can throw more memory at the problem on the client side, either by configuring the JVM heap (for example with -Xmx1024m; see the link below):
How to set the maximum memory usage for JVM?
Or, if you're already doing that, by putting more RAM in the machine.
From a programming perspective, it sounds like this is a hell of a report. You could offload some of the number crunching to MySQL rather than buffering on your end (if possible), or, if this is a giant report, I would consider streaming it to a file and then reading it back via a buffered stream to fill the report.
It totally depends on what the report is. If it is tiny, I would aim at doing more work in SQL to minimize the result set. If it is a giant report, then buffering to disk is the other option; see the sketch below.
Another possibility you might be missing is that the ResultSet (depending on the implementation) is probably buffered, meaning that instead of reading it all into strings, your report could take the ResultSet object directly and print from it. The downside, of course, is that a stray SQLException will kill your report.
Best of luck; I'd try the memory options first. You might be running with something hilariously small like 128 MB, and it will be simple (I've seen this happen a lot on remotely administered machines).
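A minimal sketch of the streaming variant (the method name and the output File are placeholders; the tab/newline handling mirrors bufferField above), so memory use stays flat regardless of the row count:

public void extractDataToFile(ResultSet rs, File out) throws SQLException, IOException {
    try (BufferedWriter writer = new BufferedWriter(new FileWriter(out))) {
        ResultSetMetaData meta = rs.getMetaData();
        int count = meta.getColumnCount();
        // header row: column names separated by tabs
        for (int i = 1; i <= count; ++i) {
            writer.write(meta.getColumnName(i));
            writer.write(i == count ? '\n' : '\t');
        }
        // data rows: each row is written out immediately, never accumulated
        while (rs.next()) {
            for (int i = 1; i <= count; ++i) {
                String value = rs.getString(i);
                if (value != null) {
                    writer.write(value.replace('\r', ' ').replace('\n', ' '));
                }
                writer.write(i == count ? '\n' : '\t');
            }
        }
    }
}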

Is it safe to gather references to JDBC objects and close them in a loop?

So I'm trying to refactor some code which creates JDBC objects in a loop and didn't close them out cleanly. My first thought is to create a LinkedList to store the prepared statements, result sets, etc., and then close them in a loop inside a finally block. So, the approach is like:
Connection conn = null;
LinkedList<Statement> statements = new LinkedList<Statement>();
LinkedList<ResultSet> results = new LinkedList<ResultSet>();
try {
    conn = database.getConnection();
    for (String i : arr1) {
        for (String j : arr2) {
            Statement stmt = conn.createStatement();
            statements.add(stmt);
            ResultSet rs = stmt.executeQuery(...);
            results.add(rs);
            // ...work...
        }
    }
} catch (SQLException ex) {
    ex.printStackTrace();
} finally {
    // close all result sets first, then statements, then the connection
    for (ResultSet rs : results) {
        try { rs.close(); } catch (SQLException ex) { ex.printStackTrace(); }
    }
    for (Statement stmt : statements) {
        try { stmt.close(); } catch (SQLException ex) { ex.printStackTrace(); }
    }
    if (conn != null) try { conn.close(); } catch (SQLException ex) { ex.printStackTrace(); }
}
Is this a reasonable approach? Will this end up causing some kind of leak or problem? Thanks in advance, and please let me know if this belongs on codereview.se or somewhere else instead.
This is IMHO a bad idea for at least three reasons:
Resources aren't cleaned up immediately when they are no longer used. A ResultSet is an expensive resource, and I was not even sure whether you can have several open result sets on one connection (update: you can, see comments).
In this approach you are holding many resources open at once, which might lead to excessive and unnecessary use of database resources and to load peaks. This is especially dangerous if the number of iterations is high.
A special case of the previous point is memory: if either Statement or ResultSet holds a lot of memory, keeping unnecessary references to several such objects will cause excessive memory usage.
That being said, consider using an already-built and safe utility class like JdbcTemplate. I know it comes from the Spring framework, but you can use it outside of the container (just pass it an instance of DataSource) and never worry about closing JDBC resources again. A sketch follows below.
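For illustration, a minimal JdbcTemplate sketch (assuming spring-jdbc on the classpath, a configured DataSource, and org.springframework.jdbc.core imports; the query and mapping are placeholders):

JdbcTemplate template = new JdbcTemplate(dataSource);
List<String> names = template.query(
        "SELECT name FROM users",
        new RowMapper<String>() {
            public String mapRow(ResultSet rs, int rowNum) throws SQLException {
                return rs.getString(1);
            }
        });
// the Statement, ResultSet and Connection are acquired and closed internally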
Not necessarily a leak, but I could see issues.
My experience with Oracle JDBC (specifically) has taught me that the very best thing to do when handling JDBC resources is to close them in exactly the reverse order that you opened them. Every time. As soon as possible.
Collecting them for later cleanup, and releasing them in a different order, may cause an issue. I can't cite a specific example, but Oracle seems to be the one that bit me the hardest on this in the past. It is good that you release the ResultSet before the Statement, before the Connection, but it may not be enough.
This is indeed bad, because it may force the database to hold on to resources you're no longer using. I've seen cases where failure to close Statement or ResultSet objects (can't remember which; possibly both) caused cursor leak errors in Oracle.
You should do all your work in the try block, close each Statement and ResultSet as soon as you are done with it, and only close the Connection in the finally. That is the standard pattern; on Java 7+ you can let try-with-resources do it, as sketched below.
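A sketch of the loop with try-with-resources (Java 7+; buildQuery is a hypothetical helper standing in for whatever query you build from i and j):

try (Connection conn = database.getConnection()) {
    for (String i : arr1) {
        for (String j : arr2) {
            String sql = buildQuery(i, j); // hypothetical helper
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                // ...work...
            } // rs and stmt are closed here, before the next iteration
        }
    }
} catch (SQLException ex) {
    ex.printStackTrace();
}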
