I am getting this when I try to download a 150 MB file. I have JAVA_OPTS set as follows.
if [ -f /root/.rs850 -o -f /root/.rs851 -o -f /root/.rs950 -o -f /root/.rs951 ]; then
    JAVA_OPTS="$JAVA_OPTS -Xmx1024m -XX:MaxPermSize=128m"
else
    JAVA_OPTS="$JAVA_OPTS -Xmx512m -XX:MaxPermSize=128m"
fi
It would appear that something in your webapp is buffering the big file in memory prior to it being downloaded by the user's web browser. (You can easily consume more memory than you would think, depending on the nature of the file and how it is being read and buffered in memory.)
There are two ways to address this:
Modify the code in your webapp that is responsible for serving up the file so that it doesn't need to buffer the whole file in memory before sending it (see the sketch at the end of this answer).
Increase the max heap size; e.g. change the -Xmx512m option to (say) -Xmx1024m.
Unfortunately, the second solution means that your Tomcat instance will use more memory. It is also a band-aid: if you need to download an even larger file, you are liable to run into the same problem again.
Another possibility is that your webapp has a memory leak, and it was just a coincidence that the 150 MB download triggered the OOME.
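If you go with the first option, the key is to stream the file in fixed-size chunks rather than reading it fully into memory. A minimal sketch (the class name, file path and buffer size are illustrative, not taken from your webapp):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FileDownloadServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        File file = new File("/data/downloads/big-file.zip"); // illustrative path
        resp.setContentType("application/octet-stream");
        // String header works on any Servlet version (setContentLengthLong needs 3.1)
        resp.setHeader("Content-Length", String.valueOf(file.length()));

        byte[] buffer = new byte[8192]; // only ~8 KB held in memory at a time
        try (InputStream in = new FileInputStream(file);
             OutputStream out = resp.getOutputStream()) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}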
I have a problem reading PDF file content in Java using itextpdf.jar.
If I read a small (5-15 MB) PDF file, it works well and I can read its contents,
but when I read a large (200 MB) PDF file, it throws a runtime exception like the following:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2786)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
at com.itextpdf.text.pdf.RandomAccessFileOrArray.InputStreamToArray(RandomAccessFileOrArray.java:213)
at com.itextpdf.text.pdf.RandomAccessFileOrArray.<init>(RandomAccessFileOrArray.java:203)
at com.itextpdf.text.pdf.PdfReader.<init>(PdfReader.java:235)
at com.itextpdf.text.pdf.PdfReader.<init>(PdfReader.java:246)
at general.FileStreamClose.main(FileStreamClose.java:28)
Java Result: 1
Is there any solution for this? How can I increase the heap size in Tomcat?
You can tune your Java application runtime settings:
increase the maximum heap size to a higher value with -Xmx, say 500M;
tune -XX:MaxHeapFreeRatio and -XX:MinHeapFreeRatio to make sure the application does not become unresponsive when it is consuming a lot of memory and the heap shrinks.
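For example, assuming the program from the stack trace above is launched directly with java (the classpath and the ratio values here are only illustrative), the options could be passed like this:

java -Xmx512m -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -cp .:itextpdf.jar general.FileStreamClose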
To increase the heap size for Tomcat you'll have to set the environment variable JAVA_OPTS and have it contain the -Xmx option, for example -Xmx512m.
Here is a sample script showing how you can run Tomcat:
@echo off
set JAVA_HOME=C:\Program Files\Java\jdk1.6.0_33
set CATALINA_HOME=C:\Program Files\apache-tomcat-7.0
set JAVA_OPTS=-XX:MaxPermSize=128m -Xmx512m -server
call %CATALINA_HOME%\bin\catalina.bat run
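On Linux, a common equivalent (assuming the standard catalina.sh startup script, which picks up bin/setenv.sh if it exists; the JDK path is illustrative) is a small setenv.sh:

#!/bin/sh
# $CATALINA_HOME/bin/setenv.sh - sourced automatically by catalina.sh if present
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_33
export JAVA_OPTS="-XX:MaxPermSize=128m -Xmx512m -server"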
For additional info: as far as I know, if your machine is 32-bit, the heap size you can set with -Xmx and -Xms is limited to a little over 1 GB. If you need more than that, you need to install 64-bit Java (of course on a 64-bit machine and a 64-bit OS).
On Linux, when using -XX:+HeapDumpOnOutOfMemoryError, the hprof file produced is owned by the user under which the java process is running and has permissions of 600.
I understand that these permissions are best security-wise, but is it possible to override them?
You can start the JVM with
java -XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError="chmod g+r java_pid*.hprof" {mainclass} {args}
The command runs after the heap dump has been written; in this example it grants group read access to all heap dump files in the current directory.
The -XX:OnOutOfMemoryError parameter doesn't work for me when the command contains spaces on JRE 7 (1.7.0_72), but pointing it at a shell script (a path without spaces) does. Example:
-XX:OnOutOfMemoryError="/path/to/shell/script.sh"
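The script itself then only needs to contain the command you would otherwise have passed inline, for example:

#!/bin/sh
# /path/to/shell/script.sh - relax permissions on heap dumps in the working directory
chmod g+r java_pid*.hprof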
I generated an hprof file using jmap.
sudo ~/jdk/bin/jmap -F -dump:file=app.hprof 5003
Now I am getting an OOM / 'Java heap space' error while parsing the *.hprof file in Eclipse. I think I need to run it as a standalone application.
How do I run it? Any references?
I assume you've downloaded Eclipse MAT in the form of the standalone Eclipse RCP application. If not, do so now and extract the archive to a folder that suits you.
You're getting the OOME because MAT has too little memory available (the heap dump you're parsing is too big).
To make the heap bigger, edit your MemoryAnalyzer.ini file (it should be in your MAT directory), and add the following lines to it:
-vmargs
-Xmx2048M
The 2048M means 2 gigabytes of heap space will be available to the JVM. Perhaps 1 gigabyte will be enough for you.
Note!
If you are using MAT as an Eclipse plugin, you can probably do the same trick by editing eclipse.ini in your Eclipse directory.
I am using Tomcat 7.0.28. I have deployed a war file.
In this war file there is a server-like structure where we can upload files.
Now when I access that web page it works, but when I try to upload large files it shows a Java heap space error.
How can I solve it?
You are probably trying to put the whole file in memory. Your first shot should be to change the -Xmx parameter in the Tomcat JVM startup options to give it more memory. Aside from that, you'll have to read the file one chunk at a time and write it to the hard drive, so as to free the memory.
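As a rough sketch of the second approach, using the Servlet 3.0 multipart support available in Tomcat 7 (the class name, form field name and target path are illustrative, not taken from your webapp):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import javax.servlet.ServletException;
import javax.servlet.annotation.MultipartConfig;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.Part;

@MultipartConfig(fileSizeThreshold = 1024 * 1024) // parts above 1 MB are spooled to temp files on disk, not kept in heap
public class UploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        Part part = req.getPart("file"); // "file" is an illustrative form field name
        byte[] buffer = new byte[8192];  // copy in small chunks, never the whole upload
        try (InputStream in = part.getInputStream();
             OutputStream out = new FileOutputStream("/data/uploads/upload.bin")) { // illustrative target path
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}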
You can increase the heap size in Tomcat as follows.
Linux: open the catalina.sh file in the "bin" directory and apply the change to this line:
CATALINA_OPTS="$CATALINA_OPTS -server -Xms256m -Xmx1024m "
Windows: open the catalina.bat file in the "bin" directory and set:
set CATALINA_OPTS=-server -Xms256m -Xmx1024m
Restart Tomcat after the above change.
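For example, on Linux, assuming the standard scripts in the Tomcat bin directory are used to start and stop it:

cd $CATALINA_HOME/bin
./shutdown.sh
./startup.sh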
I would like to stop Cassandra from dumping hprof files, as I do not need them.
I also have very limited disk space (50GB out of 100 GB is used for data), and these files swallow up all the disk space before I can say "stop".
How should I go about it?
Is there a shell script that I could use to erase these files from time to time?
It happens because Cassandra starts with the -XX:+HeapDumpOnOutOfMemoryError Java option, which is good stuff if you want to analyze the dumps. Also, if you are getting lots of heap dumps, that indicates you should probably tune the memory available to Cassandra.
I haven't tried it, but to disable this option, comment out the following line in $CASSANDRA_HOME/conf/cassandra-env.sh:
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
Optionally, you may comment out this block as well, but I don't think it is really required. The block seems to be present in version 1.0+; I can't find it in 0.7.3.
# set jvm HeapDumpPath with CASSANDRA_HEAPDUMP_DIR
if [ "x$CASSANDRA_HEAPDUMP_DIR" != "x" ]; then
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=$CASSANDRA_HEAPDUMP_DIR/cassandra-`date +%s`-pid$$.hprof"
fi
Let me know if this worked.
Update
...I guess it is JVM throwing it out when Cassandra crashes / shuts down. Any way to prevent that one from happening?
If you want to disable JVM heap dumps altogether, see here: how to disable creating java heap dump after VM crashes?
I'll admit I haven't used Cassandra, but from what I can tell, it shouldn't be dumping any hprof files unless you enable that at compile time or the program hits an OutOfMemoryError. So try looking there.
In terms of a shell script, if the files are being dumped to a specific location, you can use this command to delete all *.hprof files:
find /my/location/ -name '*.hprof' -delete
This uses the -delete action of find, which deletes every file that matches the search. Look at the man page for find for more search options if you need to narrow it down further.
You can use cron to run a script at a given time, which would satisfy your "time to time" requirement. Most Linux distros have cron installed and it works off a crontab file; you can find out more by running man crontab.
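For instance, a crontab entry along these lines (the path is illustrative) would remove day-old dumps every night at 02:00:

0 2 * * * find /my/location/ -name '*.hprof' -mtime +1 -delete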
Even if you update cassandra-env.sh to point to a different heap dump path, it will still not work. The reason is that the startup script /etc/init.d/cassandra contains this line, which sets the default heap dump path:
start-stop-daemon -S -c cassandra -a /usr/sbin/cassandra -b -p "$PIDFILE" -- \
-p "$PIDFILE" -H "$heap_dump_f" -E "$error_log_f" >/dev/null || return 2
I'm not an upstart expert, but what I did was simply remove the parameter that creates the duplicate (see the sketch below). Another odd observation: when checking the Cassandra process via ps aux you'll notice that some parameters are passed twice. If you source cassandra-env.sh and print $JVM_OPTS, those variables look fine.
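With the -H parameter removed, that start-stop-daemon line would look roughly like this:

start-stop-daemon -S -c cassandra -a /usr/sbin/cassandra -b -p "$PIDFILE" -- \
    -p "$PIDFILE" -E "$error_log_f" >/dev/null || return 2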