"System resource exceeded" during connection to Access file through Java jdbc odbc - java

I've read all the "System resource exceeded" posts, but this is nothing like them.
I've spent the last 3 hours searching for a solution.
I don't have many connections/statements/result sets, and I always close all of them.
My code used to work, but now I get the "System resource exceeded" exception, not during queries, but WHEN I TRY TO CONNECT.
I haven't changed a thing in my code, yet now it only works about 1 time out of 10. I tried changing some things in it, but it made no difference.
My Access files are 15-50 MB.
My code is:
// imports needed: java.sql.Connection, java.sql.DriverManager, java.util.Properties

private String accessFilePath;
private Connection myConnection;

public boolean connectToAccess(String myAccessFilePath) {
    accessFilePath = myAccessFilePath;
    // Get connection to database
    try {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        // set properties for unicode
        Properties myProperties = new Properties();
        myProperties.put("charSet", "windows-1253");
        myConnection = DriverManager.getConnection(
                "jdbc:odbc:driver={Microsoft Access Driver (*.mdb)};DBQ=" + accessFilePath,
                myProperties); // I get the exception here
    } catch (Exception ex) {
        System.out.println("Failed to connect to " + accessFilePath + " database\n" + ex.getMessage());
        return false;
    }
    return true;
}
What is different now compared to other times? Do Access files keep previous connections open? What could be wrong here?

OK, I found the solution.
First I started a new Java project and copied the same lines of code into it.
In the new project I connected to my files successfully every time I tried.
So it struck me, and I looked at my VM settings.
In my original program I had ASSIGNED TOO MUCH MEMORY TO THE VIRTUAL MACHINE, so there was no memory left for even a single connection to the files.
My settings were --> VM Options: -Xmx1536m -Xms768m (a little excessive)
I changed them to --> VM Options: -Xmx512m -Xms256m
And it worked. Thank you for your comments.
I hope this helps other people, because I spent many hours finding it.
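For anyone who hits the same wall: the JDBC-ODBC bridge loads the native Access ODBC driver into the Java process itself, so in a 32-bit JVM a very large heap reservation can leave too little native address space for the driver to work with. A minimal sketch for checking what your VM options actually gave the process (output varies by JVM):

public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx; totalMemory() starts near -Xms and grows toward it.
        System.out.println("max heap (-Xmx): " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("total heap:      " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free of total:   " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}

Running it with both sets of VM options shows the difference the flags make.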

Related

How to create a backup of a Postgres database using Java

I want to take a backup of a Postgres database using Java. I am using the following code for this,
but it is not working and does not generate a dump.
String pgDump = "C:\\Program Files\\PostgreSQL\\9.2\\bin\\pg_dump";
String dumpFile = "D:\\test\\" + tenant.getTenantAsTemplate() + ".sql";
String sql = pgDump + " -h localhost -U postgres -P postgres " + tenant.getTenantAsTemplate() + " > " + dumpFile;
Process p = Runtime.getRuntime().exec(sql);
int time = p.waitFor();
System.out.println("time is " + time);
if (time == 0) {
    System.out.println("backup is created");
} else {
    System.out.println("fail to create backup");
}
Here I am getting time is 1.
This approach is also operating-system dependent, and it requires pg_dump. Is there any other way to generate a backup of the database without pg_dump?
Please reply soon.
No, there is no way to generate a database backup without pg_dump using the regular SQL connection. It's a bit of an FAQ, but the people who want the feature never step up to do the work to implement it in PostgreSQL.
I guess technically you could use a replication connection to do a physical base backup like pg_basebackup does, but that's not really what you want: it requires copying all databases on the machine and would be a lot of work.
You should use the String[] form of Runtime.exec as I mentioned in a related answer regarding pg_restore.
You must also check the process exit value to see whether it terminated successfully, and you must be careful to handle, not just swallow, any exceptions thrown.
Your code checks the exit value but discards the error output that would tell you why pg_dump failed. I think it's probably being given a malformed command that fails with a non-zero exit code, probably because you are not correctly quoting the path to pg_dump. To see what's wrong, print the final assembled command line; you'll see something like:
C:\Program Files\PostgreSQL\9.2\bin\pg_dump -h localhost ....
which Runtime.exec(String) will naively split on whitespace into:
c:\Program
Files\postgresql\9.2\bin\pg_dump
-h
localhost
... etc
See the problem?
Do not just quote the path to pg_dump to work around this. Use the String[] form of exec and you won't have to; plus it'll work correctly for other things, like accidental %environmentvars% in paths.
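For illustration, here is a minimal sketch of that approach using ProcessBuilder; the path, database name, and credentials are placeholders. Note also that the "> dumpFile" redirection in the original code does nothing under exec, because no shell is involved; ProcessBuilder.redirectOutput takes its place:

import java.io.File;
import java.io.IOException;

public class PgDumpBackup {
    public static void main(String[] args) throws IOException, InterruptedException {
        File dumpFile = new File("D:\\test\\mydb.sql");
        ProcessBuilder pb = new ProcessBuilder(
                "C:\\Program Files\\PostgreSQL\\9.2\\bin\\pg_dump", // spaces are fine: no shell splitting
                "-h", "localhost",
                "-U", "postgres",
                "mydb");
        pb.environment().put("PGPASSWORD", "postgres"); // pg_dump reads the password from the environment
        pb.redirectOutput(dumpFile); // stands in for the shell's "> dumpFile"
        pb.redirectError(ProcessBuilder.Redirect.INHERIT); // surface pg_dump's own error messages
        int exit = pb.start().waitFor();
        System.out.println(exit == 0 ? "backup is created" : "pg_dump failed with exit code " + exit);
    }
}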

Tomcat Fix Memory Leak?

I am using Tomcat 6.0.20 and have a number of web apps running on the server. Over time, approximately 3 days, the server needs restarting, otherwise it crashes and becomes unresponsive.
I have the following settings for the JVM:
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\logs
This provides me with an hprof file, which I have loaded using Java VisualVM. It identifies the following:
byte[] 37,206 Instances | Size 86,508,978
int[] 540,909 Instances | Size 55,130,332
char[] 357,847 Instances | Size 41,690,928
The list goes on, but how do I determine what is causing these issues?
I am using New Relic to monitor the JVM, and only one error seems to appear, but it's a recurring one: org.apache.catalina.connector.ClientAbortException. Is it possible that when a user session is aborted, any database connections or variables created are not being closed and are therefore left orphaned?
There is a function which is used quite heavily throughout each web app; I'm not sure if it has any bearing on the leak:
public static String replaceCharacters(String s)
{
    s = s.replaceAll(" ", " ");
    s = s.replaceAll(" ", "_");
    s = s.replaceAll("\351", "e"); // octal escape: 'é'
    s = s.replaceAll("/", "");
    s = s.replaceAll("--", "-");
    s = s.replaceAll("&", "and");
    s = s.replaceAll("&", "and");
    s = s.replaceAll("__", "_");
    s = s.replaceAll("\\(", "");
    s = s.replaceAll("\\)", "");
    s = s.replaceAll(",", "");
    s = s.replaceAll(":", "");
    s = s.replaceAll("\374", "u"); // octal escape: 'ü'
    s = s.replaceAll("-", "_");
    s = s.replaceAll("\\+", "and");
    s = s.replaceAll("\"", "");
    s = s.replaceAll("\\[", "");
    s = s.replaceAll("\\]", "");
    s = s.replaceAll("\\*", "");
    return s;
}
When a user connection is aborted, such as when the user's browser is closed or the user has left the site, are all variables, connections, etc. purged/released? Isn't GC supposed to handle that?
Below are my JVM settings:
-Dcatalina.base=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20
-Dcatalina.home=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20
-Djava.endorsed.dirs=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\endorsed
-Djava.io.tmpdir=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\temp
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.util.logging.config.file=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\conf\logging.properties
-Dfile.encoding=UTF-8
-Dsun.jnu.encoding=UTF-8
-javaagent:c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\newrelic\newrelic.jar
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\logs
-Dcom.sun.management.jmxremote.port=8086
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
vfprintf
-Xms1024m
-Xmx1536m
Am I missing anything? The server has 3 GB of RAM.
Any help would be much appreciated :-)
... but how do I determine what is causing these issues?
You need to use a dump analyser that allows you to see what is making these objects reachable. Pick an object and see what other object or objects refer to it ... and work backwards through the chains until you find either a "GC root" or some application-specific class that you recognise.
Here are a couple of references on analysing memory snapshots and memory profilers:
How do I analyze a .hprof file?
How to find memory leaks using visualvm
Solving OutOfMemoryError - Memory Profilers
Once you have identified that, you've gone most of the way to identifying the source of your storage leak.
That function has no direct bearing on the leak. It certainly won't cause it. (It could generate a lot of garbage String objects ... but that's a different issue.)
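As an aside, if that churn ever does matter: each replaceAll(String, String) call compiles its regex from scratch, so precompiling the patterns once removes that cost. A sketch of the idea, covering only a few of the rules for brevity:

import java.util.regex.Pattern;

public class TextCleaner {
    // Compiled once and reused on every call, unlike String.replaceAll.
    private static final Pattern[] PATTERNS = {
            Pattern.compile(" "),
            Pattern.compile("\351"), // octal escape: 'é'
            Pattern.compile("&"),
            Pattern.compile("\\+")
    };
    private static final String[] REPLACEMENTS = { "_", "e", "and", "and" };

    public static String replaceCharacters(String s) {
        for (int i = 0; i < PATTERNS.length; i++) {
            s = PATTERNS[i].matcher(s).replaceAll(REPLACEMENTS[i]);
        }
        return s;
    }
}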
I migrated all projects to Tomcat 7.0.42 and my errors have disappeared; our websites are far more stable and slightly faster, we are using less memory, and CPU usage is far better.
Start the server in a local dev environment and attach a profiler (YourKit, preferably). Take heap dumps periodically; you will see the growth in byte[] objects, and the tool lets you connect those byte[] instances to the application class leaking them, which will help you identify the defect in the code.

Accessing 64-bit Registry in 32-bit application

Please do not mark this question as a duplicate!
I'm searching for a solution in Java, not C#, and used the WinRegistry class.
I wrote a program that can read a registry key. Now the problem: the Java application is 32-bit and I want to read the reg keys on a 64-bit Windows 7 system. With this code, Windows redirects my 32-bit program to the 32-bit section of the 64-bit registry (compare the real path with the comment in the code: Wow6432Node!).
// only access to "SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Run"
value = WinRegistry.readString(WinRegistry.HKEY_LOCAL_MACHINE,
        "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run", "Citrix Login Service");
I removed the try-catch block so you can focus on the real problem more easily ;).
I have solved this now; thanks go to Petrucio, who posted this solution in 2012: read/write to Windows Registry using Java.
E.g., a read operation:
try {
    String value = WinRegistry.readString(WinRegistry.HKEY_LOCAL_MACHINE,
            "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run",
            "TestValue", WinRegistry.KEY_WOW64_64KEY);
    System.out.println(value);
} catch (Exception ex) {
    ex.printStackTrace();
}
I hope this is useful for someone.
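As a further example, and assuming the modified WinRegistry class from Petrucio's answer (with its KEY_WOW64_* constants), the same call can target either registry view explicitly; "TestValue" is a hypothetical value name:

try {
    String from64 = WinRegistry.readString(WinRegistry.HKEY_LOCAL_MACHINE,
            "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run",
            "TestValue", WinRegistry.KEY_WOW64_64KEY); // native 64-bit view
    String from32 = WinRegistry.readString(WinRegistry.HKEY_LOCAL_MACHINE,
            "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run",
            "TestValue", WinRegistry.KEY_WOW64_32KEY); // Wow6432Node view
    System.out.println("64-bit view: " + from64);
    System.out.println("32-bit view: " + from32);
} catch (Exception ex) {
    ex.printStackTrace();
}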

Catching and handling an exception thrown in Java with a message (Too many open files)

I've been experiencing a problem similar to (Too many open file handles) when I try to run a program on a grid computer. Increasing the operating system's limit on the total number of open files is not an option on this resource.
I tried to catch and handle the exception, but my catch block never seems to run. The exception reports itself as a FileNotFoundException. One of the places the exception is thrown is in the method shown below:
public static void saveImage(BufferedImage bi, String format, File aFile) {
    try {
        if (bi != null) {
            try {
                //System.out.println("ImageIO.write(BufferedImage,String,File)");
                System.err.println("Not really an error, just a statement to help with debugging");
                ImageIO.write(bi, format, aFile);
            } catch (FileNotFoundException e) {
                System.err.println("Trying to handle " + e.getLocalizedMessage());
                System.err.println("Wait for 2 seconds then trying again to saveImage.");
                //e.printStackTrace(System.err);
                // This can happen because of too many open files.
                // Try waiting for 2 seconds and then repeating...
                try {
                    synchronized (bi) {
                        bi.wait(2000L);
                    }
                } catch (InterruptedException ex) {
                    Logger.getLogger(Generic_Visualisation.class.getName()).log(Level.SEVERE, null, ex);
                }
                saveImage(bi, format, aFile);
            } finally {
                // There is nothing to go in here as ImageIO deals with the stream.
            }
        }
    } catch (IOException e) {
        Generic_Log.logger.log(
                Generic_Log.Generic_DefaultLogLevel, //Level.ALL,
                e.getMessage());
        String methodName = "saveImage(BufferedImage,String,File)";
        System.err.println(e.getMessage());
        System.err.println("Generic_Visualisation." + methodName);
        e.printStackTrace(System.err);
        System.exit(Generic_ErrorAndExceptionHandler.IOException);
    }
}
Here is a snippet from System.err reported one time when the problem occurs:
Not really an error, just a statement to help with debugging
java.io.FileNotFoundException: /data/scratch/lcg/neiss140/home_cream_292126297/CREAM292126297/genesis/GENESIS_DemographicModel/0_99/0/data/Demographics/0_9999/0_99/39/E02002367/E02002367_Population_Male_End_of_Year_Comparison_2002.PNG (Too many open files)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
at javax.imageio.stream.FileImageOutputStream.<init>(FileImageOutputStream.java:53)
at com.sun.imageio.spi.FileImageOutputStreamSpi.createOutputStreamInstance(FileImageOutputStreamSpi.java:37)
at javax.imageio.ImageIO.createImageOutputStream(ImageIO.java:393)
at javax.imageio.ImageIO.write(ImageIO.java:1514)
at uk.ac.leeds.ccg.andyt.generic.visualisation.Generic_Visualisation.saveImage(Generic_Visualisation.java:90)
at uk.ac.leeds.ccg.andyt.generic.visualisation.Generic_Visualisation$ImageSaver.run(Generic_Visualisation.java:210)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
I have some ideas for working around this issue, but does anyone know what is wrong?
(I tried to post a version of this question as an answer to this question, but this was deleted by a moderator.)
Firstly, the write method will actually throw an IIOException, not a FileNotFoundException, if it fails to open the output stream; see the source, line 1532. That explains why your recovery code never runs.
Second, your recovery strategy is a bit dubious. You have no guarantee that whatever is using all of those file handles will release them within 2 seconds. Indeed, in the worst case, they may never be released.
But the most important thing is that you are focussing on the wrong part of the problem. Rather than trying to come up with a recovery mechanism, you should focus on why the application has so many file descriptors open. This smells like a resource leak. I recommend that you run FindBugs over your codebase to see if it can identify the leaky code. Everywhere your code opens an external stream, it should have a matching close() call in a finally block to ensure that the stream is always closed; e.g.
OutputStream os = new FileOutputStream(...);
try {
    // do stuff
} finally {
    os.close();
}
or
// Java 7 form ...
try (OutputStream os = new FileOutputStream(...)) {
    // do stuff
}
The resource I am running this on allows only 1024 file handles, and changing that is another issue. The program is a simulation; it writes out a large number of output files at the same time as it reads in another lot of input data. That work is threaded using an ExecutorService. The program runs to completion on another computer that has a higher file-handle limit, but I want to get it to work on the resource where I am limited to fewer file handles.
So it seems like you are saying that you need to have that number of files open.
It strikes me that the root problem is in your application's architecture. It sounds like you simply have too many simulation tasks running at the same time. I suggest that you reduce the executor's thread pool size to a few less than the max number of open file descriptors.
The problem is that your current strategy could lead to a form of deadlock ... where existing tasks can't make progress until new tasks start running, but the new tasks can't start until existing tasks release file descriptors.
I'm thinking you need a different approach to handling the input and output. Either buffer the complete input and/or output files in memory (etcetera), or implement some kind of multiplexer so that all the active files don't need to be open at the same time.
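As a minimal sketch of the pool-size suggestion above (the pool size and task body are illustrative): cap the executor so that, with roughly one open file per task, the number of simultaneously open descriptors stays well under the 1024 limit.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BoundedImageSaving {
    public static void main(String[] args) throws InterruptedException {
        // At most 64 tasks run at once, so at most ~64 output files are open at a time.
        ExecutorService pool = Executors.newFixedThreadPool(64);
        for (int i = 0; i < 10000; i++) {
            final int n = i;
            pool.submit(new Runnable() {
                public void run() {
                    // per-task work, e.g. saveImage(bi, "PNG", new File("out" + n + ".png"));
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}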

FileNotFound (Access is denied) Exception on java.io

Why do I get this error when I run this program? It occurs after a random number of iterations, usually around the 8000th.
public static void main(String[] args)
{
    FileWriter writer = null;
    try
    {
        for (int i = 0; i < 10000; i++)
        {
            File file = new File("C:\\Users\\varun.achar\\Desktop\\TODO.txt");
            if (file.exists())
            {
                System.out.println("File exists");
            }
            writer = new FileWriter(file, true);
            writer.write(i);
            System.out.println(i);
            writer.close();
            if (!file.delete())
            {
                System.out.println("unable to delete");
            }
            //Thread.sleep(10);
            //writer = null;
            //System.gc();
        }
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    finally
    {
        if (writer != null)
        {
            try
            {
                writer.close();
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        }
    }
}
After the exception occurs, the file isn't present. That means the file is being deleted, but FileWriter tries to acquire the lock before the delete completes, even though this isn't a multi-threaded program. Is it that Windows isn't deleting the file fast enough, so the FileWriter doesn't get a lock? If so, does the file.delete() method return before Windows actually deletes the file?
How do I resolve it? I'm getting a similar issue while load testing my application.
EDIT 1: Stacktrace:
java.io.FileNotFoundException: C:\Users\varun.achar\Desktop\TODO.txt (Access is denied)
at java.io.FileOutputStream.openAppend(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:192)
at java.io.FileOutputStream.<init>(FileOutputStream.java:116)
at java.io.FileWriter.<init>(FileWriter.java:61)
EDIT 2: Added the file.exists() and file.delete() checks to the program. The new stacktrace:
7452
java.io.FileNotFoundException: C:\Users\varun.achar\Desktop\TODO.txt (Access is denied)
at java.io.FileOutputStream.openAppend(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:192)
at java.io.FileWriter.<init>(FileWriter.java:90)
at com.TestClass.main(TestClass.java:25)
EDIT 3: Thread dump
TestClass [Java Application]
com.TestClass at localhost:57843
Thread [main] (Suspended (exception FileNotFoundException))
FileOutputStream.<init>(File, boolean) line: 192
FileWriter.<init>(File, boolean) line: 90
TestClass.main(String[]) line: 24
C:\Users\varun.achar\Documents\Softwares\Java JDK\JDK 6.26\jdk\jre\bin\javaw.exe (09-Nov-2011 11:57:34 PM)
EDIT 4: The program runs successfully on a different machine with the same OS. Now how do I ensure that the app will run successfully on the machine it is deployed to?
On any OS you can have only a certain number of files/threads open at a time. You seem to be hitting your OS limit. Try setting file to null inside the loop.
If I understand your stack trace correctly, the exception is coming when trying to create a new FileWriter. It's impossible to know the reason without investigating a bit further.
Return values may tell you something. In particular, check what File.delete() returns.
Before trying to create new FileWriter, check what File.exists() returns.
If the previous delete() returns true and the exists() right after it also returns true, in a single-threaded program, then it's indeed something weird.
Edit: so it seems that deletion was successful and the file didn't exist after that. That's how it's supposed to work, of course, so it's weird that FileWriter throws the exception. One more thought: try checking File.getParentFile().canWrite(). That is, check whether your permission to write to the directory somehow disappears.
Edit 2:
I don't get the error on a different machine with the same OS. Now how do I make sure that this error won't come up in the app where it'll be deployed?
So far you have one machine that works incorrectly and one that works correctly. Maybe you could try it on even more machines. It's possible that the first machine is somehow broken and that causes errors. It's amazing how often digital computers and their programs (I mean the OS and Java, not necessarily your program) can be just a "little bit broken" so that they work almost perfectly almost all of the time, but fail randomly with some specific hardware & use case - usually under heavy load - similar to how incorrect multi-threaded programs can behave. It doesn't have to be your fault to be your problem :-)
Frankly, the only way to make sure that errors won't come up in machine X is to run the program on machine X. Unusual stuff such as creating and deleting the same file 8000 times in rapid succession is prone to errors, even though it "should" work. Computers, operating systems and APIs are not perfect. The more unusual stuff you do, the more often the imperfections will realize themselves, because unusual usage is generally less thoroughly tested than everyday operations.
I have had the same issue: a Java program (single-threaded) that opens, deletes, then re-opens the same file continuously.
On some Windows systems we get the same issue as reported here; on Linux, Solaris, and various other Windows systems it works fine.
Tracing the program with SysInternals Process Monitor (now MS), it's clear that the delete is done first at the OS level, and clear that the subsequent open fails with a PENDING DELETE status.
So there seems to be some slight delay at the OS/NTFS/disk level before the file is actually deleted, and that seems to be the cause of the random failure in our case.
As a workaround, I changed the .delete() call to instead just overwrite the file with new FileWriter(file), and that seems to be working.
The problem did not occur on all systems. One specific model would always fail, albeit after a variable number of loops: a Windows 7 / Dell Latitude E6420 with a WD Smartdrive. On my Windows 7 / Dell Precision M4600 (with a solid-state drive), or a T3400 with Linux, I have never had the issue.
Cheers - Mark
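For concreteness, a minimal sketch of that workaround (the path is illustrative; the key point is opening the FileWriter in truncating, non-append mode instead of deleting first):

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class OverwriteInsteadOfDelete {
    public static void main(String[] args) throws IOException {
        File file = new File("C:\\temp\\TODO.txt");
        for (int i = 0; i < 10000; i++) {
            // Non-append mode truncates the existing file, so there is no
            // delete/re-create race against a pending delete at the OS level.
            FileWriter writer = new FileWriter(file, false);
            writer.write(i);
            writer.close();
        }
    }
}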
It may be a long shot, but can you try working with a file that is NOT sitting directly on the Desktop? Instead of:
"C:\\Users\\varun.achar\\Desktop\\TODO.txt"
Try:
"C:\\Users\\varun.achar\\SomeOtherDirectory\\TODO.txt"
The OS may be killing you here with all the Desktop hooks...
EDIT based on the comments:
Are there any scheduled jobs running on the "bad" machine?
Instead of debugging the environment, do you have a sys admin to do that?
Does this work on a clean Windows install? [95% chance it will]
Since the root cause seems to be the environment, instead of solving a Windows configuration problem, would you be able to move forward with other tasks and leave this to someone who keeps the list of discrepancies between the systems?
Can you conditionally try to write to the file?
Use file.exists() and then write to it, so you can potentially avoid any other issues. It's hard to say from this exception alone.
http://download.oracle.com/javase/6/docs/api/java/io/File.html#exists()
Could you also post a thread dump at that point, to help debug this further.
Please flush the writer before writing again.
These are the scenarios you should handle before deleting a file: http://www.java2s.com/Code/Java/File-Input-Output/DeletefileusingJavaIOAPI.htm
At the very least, check the return value in your program.
Thanks folks for helping me out, but this is how it finally got resolved.
public static void main(String[] args)
{
    FileWriter writer = null;
    try
    {
        for (int i = 0; i < 10000; i++)
        {
            File file = new File("C:\\tenant-system-data\\abc.txt");
            if (!file.getParentFile().canWrite())
            {
                System.out.println("parent file error");
            }
            if (file.exists())
            {
                System.out.println("File exists");
            }
            int count = 0;
            // Retry createNewFile() a few times with a short pause, in case the
            // previous iteration's delete is still pending at the OS level.
            while (count++ < 5)
            {
                try
                {
                    file.createNewFile();
                    break;
                }
                catch (IOException e)
                {
                    try
                    {
                        Thread.sleep(100);
                    }
                    catch (InterruptedException e1)
                    {
                        e1.printStackTrace();
                    }
                }
            }
            writer = new FileWriter(file, true);
            writer.write(i);
            System.out.println(i);
            writer.close();
            if (!file.delete())
            {
                System.out.println("unable to delete");
            }
            //Thread.sleep(10);
            //writer = null;
            //System.gc();
        }
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    finally
    {
        if (writer != null)
        {
            try
            {
                writer.close();
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        }
    }
}
I just had the same problem (FileWriter & Access Denied).
My guess as to the reason: Windows 7 had put a lock on the file because a preview of the file content was being shown in an Explorer window (the file was selected in the window).
Solution: I de-selected the file in the Explorer window, and the IOException was gone.
You have delete permission in the directory but not create permission.
