I have a Java application that runs as a daemon process. Initially, I discovered that after about a week the application would throw an out-of-memory error. While researching, I discovered that I could manage the memory and avoid the problem by setting the -Xms and -Xmx flags at startup to specify how much memory the application would have. This resolved my memory problem.
However, I later discovered that the application was causing performance issues on some machines that had only 1GB of memory, which was exactly the amount I had allocated with -Xmx. So the Java application took all of the available memory.
Using jconsole, I was able to determine that the application ran at about 150MB, so I dropped the -Xmx value to 300MB, thinking that would be plenty. But then I discovered that the application was only using about 15MB. Why? Why does the application use 150MB when it's allocated 1GB, but only 15MB when it's allocated 300MB?
And how do I go about figuring out what I should be using?
Related
I have run into an issue with a Java application I wrote causing hardware performance issues. The problem, I'm fairly certain, is that a few of the machines I'm running the application on have only 1GB of memory. When I start my Java application, I'm setting the heap size with -Xms512m -Xmx1024m.
My first question: is my assumption correct that this will obviously cause performance problems, because I'm allocating all of the machine's memory to the Java heap?
This leads to another question. I'm running jconsole on the app and monitoring its memory usage. What I'm seeing is that the app consumes about 30MB at startup, climbs to about 150MB, and then the garbage collector runs and it drops back down to 30MB. What I'm also seeing, using top on the PID, is that the application starts by using about 6% of memory and then slowly climbs to about 20%. I do not understand this. Why would it only get up to 20% memory usage when I'm allocating 1GB to it? Shouldn't it go to 100%? Also, why is it using that much memory (20%) when it doesn't appear that the app ever uses more than 150MB?
I think it's pretty obvious I need to adjust my -Xms and -Xmx values, and that should resolve the issue, but I'm trying to understand better what exactly is happening.
Two possibilities for the memory use:
Your app just does not use that much memory
Or
Your app does not use that much memory fast enough (garbage never piles up far before it is collected).
What happens:
The garbage collector has several points where it will execute:
Just scheduled: it will clean up easy-to-remove objects.
Full collection: This runs when you hit the set memory limits.
If option 1, the generally much lower-impact quick collection, can keep your memory use under control, the JVM will not run a full collection unless its GC options are set to run one on a schedule.
With your application I would start by setting lower -Xmx/-Xms values, so that more guaranteed resources are left for the OS and some paging may be prevented.
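To see the gap between those two kinds of collection yourself, it helps to log what the JVM has committed versus what live objects actually occupy. A minimal sketch (the class name and polling interval are arbitrary):

public class HeapWatch {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long max = rt.maxMemory();               // the -Xmx ceiling
            long committed = rt.totalMemory();       // what the OS has actually handed over; roughly what top charges you for
            long used = committed - rt.freeMemory(); // live objects plus garbage not yet collected; what jconsole charts
            System.out.printf("used=%dMB committed=%dMB max=%dMB%n",
                    used >> 20, committed >> 20, max >> 20);
            Thread.sleep(5000);
        }
    }
}

With -Xmx1024m the collector can afford to let garbage accumulate to 150MB before it bothers to collect; with -Xmx300m it collects sooner and more often, so the "used" line hovers much lower. The low point right after a full collection is the closest thing to the application's real footprint, and that is the number to size -Xmx against, with some headroom.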
I have a Java application running on Tomcat with -Xmx set to 2GB.
I see that Tomcat's memory consumption slowly rises, exceeding the 2GB heap limit.
Going through this forum, I know that there is more than just the heap consuming the memory.
The problem is that memory keeps rising above 3GB and even 4GB until no more memory is available on the machine, and I need to restart Tomcat.
Looking at the GC log, I see that the heap does not exceed 2GB.
My question is: how can I find and analyze the memory being used?
Also, can it be code related?
It is obviously some kind of leak, but I don't know how to locate and fix it, or even identify the source (my code, Tomcat, etc.).
Thanks
Maayan
Since a memory leak is more likely to be in your code than in Tomcat, I'd start there.
Create a heap dump using jmap, and try to analyze it using a tool like the Eclipse Memory Analyzer (MAT).
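The usual route is jmap -dump:live,format=b,file=heap.hprof <pid> from the command line. If it is easier to trigger the dump from inside the app, the same dump is available through the HotSpot diagnostic MXBean; a sketch, with the output path chosen arbitrarily:

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void dump(String path) throws Exception {
        // Proxy to the diagnostic bean that Sun/Oracle JVMs register.
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // live=true dumps only reachable objects, which is what you want when hunting a leak.
        bean.dumpHeap(path, true);
    }

    public static void main(String[] args) throws Exception {
        dump("/tmp/heap.hprof"); // open this file in Eclipse MAT
    }
}

In MAT, the Leak Suspects report and the dominator tree will usually point you at whatever is retaining the objects, and from there at whether the culprit is your code or Tomcat's.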
We developed a highly CPU-intensive Java server application that seems to have a serious memory leak. As time passes, the application appears to eat up ever more memory (as seen in Windows Task Manager), but if I analyze it in a specialized Java profiler, the heap memory seems to stay the same. For example, in Task Manager I see the application taking over 8GB of memory and growing, but in the Java profiler I see that heap memory is at most 2GB. I have tried all possible combinations of JAVA_OPTS (-Xmx, -Xms, all types of GC) and nothing worked. Is the Java process not releasing memory back to the OS? Is there any way to force it to do so?
1)
I suggest you set -Xmx2100m and observe heap usage under load.
The JVM may take as much OS memory as it decides it needs to be performant, until it reaches the -Xmx limit. In modern JVMs the default -Xmx is calculated from the total memory available in the OS, so it can be a large value.
I think your app does not have a memory leak; your JVM simply allocates a lot of memory, because it can.
Observe your JVM through jvisualvm.
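You can also have the app report the JVM's own view of its memory, to compare against what Task Manager shows; a hedged sketch using the standard platform MXBeans:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemorySnapshot {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        // "committed" is what the process has actually claimed from the OS.
        // Task Manager sees committed memory (plus thread stacks, code cache,
        // and native allocations), not the "used" figure a profiler charts.
        System.out.printf("heap:     used=%dMB committed=%dMB max=%dMB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
        System.out.printf("non-heap: used=%dMB committed=%dMB%n",
                nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
    }
}

If the committed heap tracks the 8GB you see in Task Manager, the JVM really did claim it (and capping -Xmx will help); if it stays near 2GB, the growth is outside the heap entirely.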
2)
Second suggestion: do you use any JNI code? Does your app call any native library (i.e. a DLL under Windows)?
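Native allocations are the classic way a Java process grows while a heap profiler sees nothing: memory obtained by JNI code, or even by plain direct buffers, is charged to the process but never shows up on heap graphs. A contrived demonstration (run with a -XX:MaxDirectMemorySize large enough for the loop):

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class OffHeapGrowth {
    public static void main(String[] args) throws InterruptedException {
        List<ByteBuffer> pinned = new ArrayList<>();
        for (int i = 0; i < 64; i++) {
            // Each call takes 16MB of *native* memory; the heap holds only
            // the tiny ByteBuffer wrapper objects.
            pinned.add(ByteBuffer.allocateDirect(16 * 1024 * 1024));
        }
        System.out.println("~1GB of native memory held; heap usage is still near zero.");
        Thread.sleep(60000); // keep the process alive so you can watch it in Task Manager
    }
}

A leak in an actual JNI library behaves the same way, but it cannot be capped by any JVM flag; it has to be fixed in the library.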
I am running a Java application on a Linux cluster with SLURM as the resource manager. To run my application I have to specify for SLURM the amount of memory I will need. SLURM will run my application in a kind of VM with the specified amount of memory. To tell my Java application how much memory it can use, I use the "-Xmx##g" parameter, choosing it 1GB less than what I have requested from SLURM.
My problem is that I am exceeding the amount of memory I requested from SLURM, and it terminates my application. It seems that the JVM uses about 1GB of additional memory, probably for things like GC and so on.
Is there a possibility to restrict the size of the JVM, or at least to tame it?
Cheers,
Markus
The maximum heap setting limits only the heap. There are other memory regions which you have not limited, such as the following (a sketch for reading some of them from inside the JVM follows the list):
thread stacks
perm gen
shared libraries
native memory used by libraries
direct memory
memory mapped files.
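Several of these pools can be read from inside the JVM, which helps attribute the non-heap share before you pick your margin. A sketch using the standard platform MXBeans (Java 7+):

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class NonHeapReport {
    public static void main(String[] args) {
        // Perm gen (or metaspace), the code cache, eden, etc. each appear as a pool.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-30s used=%dMB%n",
                    pool.getName(), pool.getUsage().getUsed() >> 20);
        }
        // The "direct" and "mapped" buffer pools cover direct memory and memory-mapped files.
        for (BufferPoolMXBean buf : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%-30s used=%dMB%n", buf.getName(), buf.getMemoryUsed() >> 20);
        }
    }
}

Thread stacks are not covered by these beans; estimate them as thread count times the -Xss size (typically 512KB to 1MB per thread).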
If you want to limit the overall memory usage, you need to be clear about whether you are limiting virtual memory or resident memory. Monitoring tools often make the mistake of watching virtual memory, which shows a surprising lack of understanding of how applications work, or even of why you monitor an application in the first place.
You want to monitor resident memory usage, which means you need to know how much memory your application uses over time apart from the heap, and then work out how much heap you can have, plus some margin for error.
To tell my java application how much memory it can use I use the "-Xmx##g" parameter. I choose it 1GB less than I have requested from SLURM.
At a guess, I would start with half a gigabyte, -Xmx512m, and see what the peak resident memory is, increasing it if you find there is always a few hundred MB of headroom.
BTW, 1GB of memory doesn't cost that much these days (as little as $5). Your time could be worth much more than the resources you are trying to save.
Recently we started using New Relic to monitor our production webapp hosted on a Tomcat 7.0.6 server, but we have observed that the memory footprint of this Tomcat instance increases continuously, and within a week it eats up all the server memory (an AWS High-Memory Double Extra Large instance) and becomes unresponsive. The only way to get it back is to restart it.
We provide the -Xms and -Xmx arguments when starting Tomcat, but within a few hours the memory usage of the Tomcat process crosses the -Xmx value and keeps increasing until all the server memory is used up. Here is the process command:
/usr/java/jdk1.6.0_24//bin/java
-Djava.util.logging.config.file=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/conf/logging.properties
-Xms8192m
-Xmx8192m
-javaagent:/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/newrelic/newrelic.jar
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Duser.timezone=Asia/Calcutta
-Djava.endorsed.dirs=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/endorsed
-classpath /xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/bin/bootstrap.jar:/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/bin/tomcat-juli.jar
-Dcatalina.base=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6
-Dcatalina.home=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6
-Djava.io.tmpdir=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/temp org.apache.catalina.startup.Bootstrap start
Ideally I would expect this process not to use more than 8GB of memory, but within hours it goes above 10GB, and within a few days it goes above 20GB, and everything else on this server suffers because of it (I use 'top' to see memory usage). How is this possible?
There's an issue that affects any Sun/Oracle JVM and manifests as unbounded growth in non-heap (native) memory. A workaround is in place for New Relic Java agent versions 2.16+: add a shutdown delay to class transformation in your newrelic.yml file, in the common section.
class_transformer:
  shutdown_delay: 3600
From the changelog:
Work-around for Oracle JVM bug that in rare cases causes a native memory leak: In rare cases, the Oracle JVM can leak native OS memory (not heap space) when classes are intercepted by the agent. This setting turns off interception of classes that are loaded after the given number of seconds. The agent will continue to monitor classes loaded before this time.
I am sharing some more information on the incident reported above. The memory leak is not in the Java heap; the application never hits any OutOfMemoryError (8GB is the Java heap max limit we have set). However, the virtual and resident memory keep increasing until the machine runs out of RAM.
We have confirmed that this leak happens when the New Relic agent is used.
Version: New Relic Agent v2.1.2
Sorry for the trouble. We (New Relic) are investigating the problem, but our first suggestion is to try the latest 2.2.1 version of the Java agent, which made substantial changes to the way we instrument classes.
I will follow up here when we have more information.