How to restrict the Java VM overall memory consumption?

I am running a Java application on a Linux cluster with SLURM as the resource manager. To run my application I have to tell SLURM how much memory I will need. SLURM then runs my application in a kind of VM with the specified amount of memory. To tell my Java application how much memory it can use I pass the "-Xmx##g" parameter, which I set 1 GB lower than what I requested from SLURM.
My problem is that I am still exceeding the amount of memory I chose on SLURM, and it terminates my application. It seems that the JVM uses about 1 GB of memory on top of the heap, probably for things like GC bookkeeping.
Is there a way to restrict the total size of the JVM, or at least to tame it?
Cheers,
Markus

The maximum heap setting only limits the maximum heap. There are other memory regions which you have not limited, such as:
thread stacks
perm gen
shared libraries
native memory used by libraries
direct memory
memory mapped files.
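Several of these can be capped individually with standard HotSpot flags. As a rough sketch (the values are only examples and app.jar is a placeholder; -XX:MaxMetaspaceSize applies to Java 8+, older JVMs use -XX:MaxPermSize instead):
java -Xmx4g -Xss512k -XX:MaxMetaspaceSize=256m -XX:MaxDirectMemorySize=512m -jar app.jar
Even with all of these set, native memory used by libraries and memory-mapped files remain outside the JVM's direct control.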
If you want to limit the overall memory usage, you need to be clear about whether you are limiting virtual memory or resident memory. Monitoring tools often make the mistake of tracking virtual memory, which shows a surprising lack of understanding of how applications work, or even of why you monitor an application in the first place.
You want to monitor resident memory usage, which means you need to know how much memory your application uses over time apart from the heap, then work out how much heap you can have, plus some margin for error.
To tell my Java application how much memory it can use I pass the "-Xmx##g" parameter, which I set 1 GB lower than what I requested from SLURM.
At a guess, I would start with 1/2 GB (-Xmx512m), watch the peak resident memory, and increase the heap if you find there is always a few hundred MB of headroom.
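To find that peak, one rough approach (assuming a Linux host, with <pid> as a placeholder for your Java process ID) is to sample the resident set size periodically:
while sleep 60; do ps -o rss= -p <pid> >> rss.log; done
ps reports RSS in kilobytes; the maximum value in rss.log over a full run approximates the resident memory you need to request from SLURM, plus a margin.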
BTW 1 GB of memory doesn't cost that much these days (as little as $5). Your time could be worth much more than the resources you are trying to save.

Related

java process consumes more memory over time but no memory leak [duplicate]

My Java service is running on a 16 GB RAM host with -Xms and -Xmx set to 8 GB.
The host is running a few other processes.
I noticed that my service is consuming more memory over time.
I ran the command ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n on the host and recorded the memory usage of my Java service.
When the service started, it used about 8 GB of memory (as -Xms and -Xmx are set to 8 GB), but after a week it used about 9 GB+. It consumed about 100 MB more memory per day.
I took a heap dump, restarted my service, and took another heap dump. I compared those two dumps, but there was not much difference in the heap usage. The dumps show that the service used about 1.3 GB before the restart and about 1.1 GB after the restart.
From the process memory usage, my service is consuming more memory over time, but that's not reported in the heap dump. How do I identify the increase in memory usage in my service?
I set -Xms and -Xmx to 8 GB. The host has 16 GB RAM. Do I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
OK, so you have told the JVM that it can use up to 8GB for the heap, and you are observing total memory usage increasing from 1.1GB to 1.3GB. That's not actually an indication of a problem per se. Certainly, the JVM is not using anywhere near as much memory as you have said it can.
The second thing to note is that it is unclear how you are measuring memory usage. You should be aware that a JVM uses a lot of memory that is NOT Java heap memory. This includes:
The memory used by the java executable itself.
Memory used to hold native libraries.
Memory used to hold bytecodes and JIT compiled native code (in "metaspace")
Thread stacks
Off-heap memory allocations requested by (typically) native code.
Memory mapped files and shared memory segments.
Some of this usage is reported (if you use the right tools).
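For example, on Java 8 and later HotSpot JVMs, Native Memory Tracking can break down most of these categories. A sketch (app.jar and <pid> are placeholders):
java -XX:NativeMemoryTracking=summary -jar app.jar
jcmd <pid> VM.native_memory summary
The summary lists reserved and committed memory per category (heap, class, thread, code, GC, etc.).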
The third thing is that the actual memory used by the Java heap can vary a lot. The GC typically works by copying live objects from one "space" to another, so it needs a fair amount of free space to do this. Then, once it has finished a run, the GC looks at how much space is (now) free as a ratio of the space used. If that ratio is too small, it requests more memory from the OS. As a consequence, there can be substantial step increases in total memory usage even though the actual usage (in non-garbage objects) is only increasing gradually. This is quite likely for a JVM that has only started recently, due to various "warmup" effects.
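For what it's worth, the free-space ratios the GC aims for are tunable with standard HotSpot flags (the values shown are the usual defaults; they only matter when -Xms is lower than -Xmx):
java -XX:MinHeapFreeRatio=40 -XX:MaxHeapFreeRatio=70 -jar app.jar
Since you run with -Xms equal to -Xmx, the heap itself should not grow this way, which points the growth at the non-heap regions listed above.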
Finally, the evidence you have presented does not say (to me!) that there is no memory leak. I think you need to take the heap dumps further apart. I would suggest taking one dump 2 hours after startup, and the second one 2 or more hours later. That would give you enough "leakage" to show up in a comparison of dumps.
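As a sketch of the two dumps using the standard jmap tool (<pid> and the file names are placeholders; "live" triggers a full GC first, so only reachable objects are dumped):
jmap -dump:live,format=b,file=after-2h.hprof <pid>
jmap -dump:live,format=b,file=after-4h.hprof <pid>
Comparing the two .hprof files in a tool such as Eclipse MAT should then make a genuine leak visible as a class histogram that keeps growing.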
From the process memory usage, my service is consuming more memory over time, but that's not reported in the heap dump. How do I identify the increase in memory usage in my service?
I don't think you need to do that. I think that the increase from 1.1GB to 1.3GB in overall memory usage is a red herring.
Instead, I think you should focus on the memory leak that the other evidence is pointing to. See my suggestion above.
Do I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
Well ... a larger heap is going to have more pronounced performance degradation when the heap gets full. The flipside is that a larger heap means that it will take longer to fill up ... assuming that you have a memory leak ... which means it could take longer to diagnose the problem, or be sure that you have fixed it.
But the flipside of the flipside is that this might not be a memory leak at all. It could also be your application or a 3rd-party library caching things. A properly implemented cache could use a lot of memory, but if the heap gets too close to full, it should respond by breaking links [1] and evicting cached data. Hence, not a memory leak ... hypothetically.
[1] - Or if you use SoftReferences, the GC will break them for you.
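As a minimal sketch of such a cache (the class and method names are made up for illustration, not taken from any particular library):
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Values are held via SoftReference, so the GC may reclaim them under memory pressure.
class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

    V get(K key, Function<K, V> loader) {
        SoftReference<V> ref = map.get(key);
        V value = (ref != null) ? ref.get() : null; // null if the GC cleared it
        if (value == null) {
            value = loader.apply(key);              // recompute and re-cache
            map.put(key, new SoftReference<>(value));
        }
        return value;
    }
}
A cache like this can occupy a large share of the heap without ever causing an OutOfMemoryError, which would look exactly like the slow growth described in the question.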

java memory leak, visualvm showing wrong data

I have a Java application running; after a few hours it fills up the memory.
I've tried to detect the memory leak with VisualVM, but it shows wrong data (I have no idea how that can happen).
In the screenshot you can see the task manager showing a memory usage of 700 MB and VisualVM showing 225...
Does anyone know what's going on here?
Regards
Beware that your OS is only aware of the total amount of memory Java has reserved over time (and Java will not return that memory easily, AFAIK). However, Java may not be using all that memory at a given moment, so you can see differences between those two numbers.
For example, if you launch your program like this
java -Xmx512m -Xms256m ...
Then your JVM will take 256 MB as soon as it starts (and the OS will tell you so, more or less). However, if you open your memory monitoring tool (be it VisualVM, JConsole, etc.), it may show that you are using less than that (you just have not needed to use the whole of your reserved heap yet).
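The same gap can be seen from inside the JVM. A small sketch using the standard Runtime API:
public class HeapNumbers {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long reserved = rt.totalMemory();       // roughly what the OS sees as taken by the heap
        long used = reserved - rt.freeMemory(); // what tools like VisualVM report as used heap
        System.out.println("max (-Xmx): " + rt.maxMemory());
        System.out.println("reserved:   " + reserved);
        System.out.println("used:       " + used);
    }
}
Run with the flags above, "reserved" starts near 256 MB while "used" can be far smaller, which is the same kind of discrepancy as the 700 MB vs 225 figures in the screenshot (plus the JVM's own overhead).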
What Java gets, it doesn't return. Allocating memory takes quite a lot of effort, so Java doesn't usually return any of the memory the system granted it. So if your program ever used 760 MB of RAM, this is what it sticks with.
And then there are two other factors that play an important role. The heap size is only the amount of memory your program uses or can use. But between your program and the OS is the Java-VM which might take a good bit of memory as well. The task manager shows you the amount of memory that is used by your program plus the vm.
The other factor is memory fragmentation. Some data structures (e.g. arrays) have to sit in a consecutive chunk of memory: array[i+1] has to be in the memory slot after array[i]. This means that if you have, say, 10 MB of memory allocated and the middle 2 MB are used, and you want to create a 6 MB array, the Java VM has to allocate new memory so it can fit the array in one piece.
This causes the memory usage shown in the task manager to go up, but not the heap usage, since the heap size only counts memory that is actually used.
Memory spaces in Java are defined by 3 metrics: used, available, and max available (see the JMX values). The size you see is the available size, not the max available, which has probably already been allocated.
You should also look at non-heap memory (usually smaller than the heap, unless you have set it up differently).
To be more precise, you should post your JVM startup settings.
PS: looking at your memory profile, I can't see any leak; I can see the JVM shrinking the heap size because it is not used at all.
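For reference, those three metrics are exposed programmatically via the standard MemoryMXBean (used / committed / max in JMX terms); a minimal sketch:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryMetrics {
    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = bean.getHeapMemoryUsage();
        MemoryUsage nonHeap = bean.getNonHeapMemoryUsage();
        // used <= committed <= max; "committed" is what OS-level tools see
        System.out.printf("heap:     used=%d committed=%d max=%d%n",
                heap.getUsed(), heap.getCommitted(), heap.getMax());
        System.out.printf("non-heap: used=%d committed=%d max=%d%n",
                nonHeap.getUsed(), nonHeap.getCommitted(), nonHeap.getMax());
    }
}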

Weird behavior of Java -Xmx on large amounts of ram

You can control the maximum heap size in java using the -Xmx option.
We are experiencing some weird behavior on Windows with this switch. We run some very beefy servers (think 196 GB RAM). The Windows version is Windows Server 2008 R2.
Java version is 1.6.0_18, 64-Bit (obviously).
Anyway, we were having some weird bugs where processes were quitting with out-of-memory errors even though the process was using much less memory than specified by the -Xmx setting.
So we wrote a simple program that allocates a 1 GB byte array each time one presses the enter key, and initializes the byte array to random values (to prevent any memory compression etc.).
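The original code was not posted; a minimal reconstruction of the test program as described (the class name and messages are made up):
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Scanner;

public class AllocateOnEnter {
    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<>(); // keep references so the GC cannot reclaim them
        Random random = new Random();
        Scanner in = new Scanner(System.in);
        while (true) {
            System.out.println(blocks.size() + " GB allocated; press enter for one more");
            in.nextLine();
            byte[] block = new byte[1024 * 1024 * 1024]; // 1 GB
            random.nextBytes(block); // fill with random values so every page is actually touched
            blocks.add(block);
        }
    }
}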
Basically, what's happening is that if we run the program with -Xmx35000m (roughly 35 GB) we get an out-of-memory exception when we hit 25 GB of process space (using the Windows Task Manager to measure). We hit this after allocating 24 GB worth of 1 GB blocks, BTW, so that checks out.
Simply specifying a larger value for the -Xmx option makes the program work fine up to larger amounts of RAM.
So, what is going on? Is -Xmx just "off"? BTW: we need to specify -Xmx55000m to get a 35 GB process space...
Any ideas on what is going on?
Is there a bug in the Windows JVM?
Is it safe to simply set the -Xmx option bigger, even though there is a disconnect between the -Xmx option and what is going on process-wise?
Theory #1
When you request a 35Gb heap using -Xmx35000m, you are actually allowing the total space used for the heap to be 35Gb. But the total space consists of the Tenured Object space (for objects that survive multiple GC cycles), the Eden space for newly created objects, and other spaces into which objects are copied during garbage collection.
The issue is that some of the spaces are not and cannot be used for allocating new objects. So in effect, you "lose" a significant percent of your 35Gb to overheads.
There are various -XX options that can be used to tweak the sizes of the respective spaces, etc. You might try fiddling with them to see if they make a difference. Refer to this document for more information. (The commonly used GC tuning options are listed in section 8. The -XX:NewSize / -XX:MaxNewSize options look promising ...)
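As an illustration only (these are standard HotSpot generation-sizing flags, but the values are guesses, not a recommendation), pinning the young generation small leaves more of the 35Gb for the tenured space:
java -Xmx35000m -XX:NewSize=2g -XX:MaxNewSize=2g -XX:SurvivorRatio=8 YourBenchmark
YourBenchmark is a placeholder for the test program.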
Theory #2
This might be happening because you are allocating huge objects. IIRC, objects above a certain size can be allocated directly into the Tenured Object space. In your (highly artificial) benchmark, this might result in the JVM not putting stuff into the Eden space, and therefore being able to use less of the total heap space than is normal.
As an experiment, try changing your benchmark to allocate lots of small objects, and see if it manages to use more of the available space before OOME-ing.
Here are some other theories that I would discount:
"You are running into OS-imposed limits." I would discount this, since you said that you can get significantly greater memory utilization by increasing the -Xmx... setting.
"The Windows task manager is reporting bogus numbers." I would discount this because the numbers reported roughly match the 25Gb that you think your application had managed to allocate.
"You are losing space to other things; e.g. the permgen heap." AFAIK, the permgen heap size is controlled and accounted independently of the "normal" heaps. Other non-heap memory usage is either a constant (for the app) or dependent on the app doing specific things.
"You are suffering from heap fragmentation." All of the JVM garbage collectors are "copying collectors", and this family of collectors has the property that heap nodes are automatically compacted.
"JVM bug on Windows." Highly unlikely. There must be tens of thousands of 64bit Java on Windows installations that maximize the heap size. Someone else would have noticed ...
Finally, if you are NOT doing this because your application requires you to allocate memory in huge chunks, and hang onto it "for ever" ... there's a good chance that you are chasing shadows. A "normal" large-memory application doesn't do this kind of thing, and the JVM is tuned for normal applications ... not anomalous ones.
And if your application really does behave this way, the pragmatic solution is to just set the -Xmx... option larger, and only worry if you start running into OS-level issues.
To get a feeling for what exactly you are measuring you should use some different tools:
the Windows Task Manager (I only know Windows XP, but I heard rumours that the Task Manager has improved since then.)
procexp and vmmap from Sysinternals
jconsole from the JDK (you are using the Sun/Oracle HotSpot JVM, aren't you?)
Now you should answer the following questions:
What does jconsole say about the used heap size? How does that differ from procexp?
Does the value from procexp change if you fill the byte arrays with non-zero numbers instead of keeping them at 0?
Did you try turning on verbose GC output to find out why the last allocation fails? Is it because the OS fails to allocate heap beyond 25GB for the native JVM process, or is it because the GC is hitting some sort of limit on the maximum memory it can manage? I would recommend you also connect to the command-line process using jconsole to see what the status of the heap is just before the allocation failure. Tools like the Sysinternals Process Explorer might also give better details as to where the failure is occurring, if it is in the JVM process.
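For a Java 6 HotSpot JVM, verbose GC output can be enabled with standard flags (YourBenchmark is a placeholder):
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xmx35000m YourBenchmark
The log lines around the failing allocation should show which space (eden, tenured, perm) could not accommodate the request.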
Since the process is dying at 25GB and you have a generational collector, maybe the rest of the generations are consuming 10GB. I would recommend you install JDK 1.6_u24 and use jvisualvm with the VisualGC plugin to see what the GC is doing, especially factoring in the sizes of all the generations to see how the 35GB heap is being chopped up into different regions by the GC / VM memory manager.
See this link if you are not familiar with generational GC: http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#generation_sizing.total_heap
I assume this has to do with fragmenting the heap. The free memory is probably not available as a single contiguous free area and when you try to allocate a large block this fails because the requested memory cannot be allocated in a single piece.
The memory displayed by the Windows task manager is the total memory allocated to the process, which includes memory for code, stack, perm gen and heap.
The memory you measure using your click program is the amount of heap the JVM makes available to the programs it runs.
Naturally, the total memory Windows allocates to the JVM should be greater than what the JVM makes available to your program as heap memory.

Java memory usages

I cannot understand the Java memory usage. I have an application which is executed with maximum memory size set to 256M. Yet, at some point in time I can see that according to the task manager it takes up to 700MB!
Needless to say, all the rest of the applications are a bit unresponsive when this happens as they are probably swapped out.
It's JDK 1.6 on WinXP. Any ideas?
The memory configured is available to the application. It won't include:
the JVM size
the jars/libs loaded in
native libraries and related allocated memory
all of which will result in a much bigger image. Note that, due to how the OS and the JVM work, that 700 MB may be shared between multiple JVMs (shared binary images, shared libraries, etc.).
The amount you specify with -Xmx is only for the user-accessible heap - the space in which you create runtime objects dynamically.
The Java process will use a lot more space for its own needs, including the JVM itself, the program and other libraries, the constant pool, etc.
In addition, because of the way the garbage collection system works, there may be more memory allocated than what is currently in the heap - it just hasn't been reclaimed yet.
All that being said, setting your program to a maximal heap of 256MB is really lowballing it on a modern system. For heavy programs you can usually request at least 1GB of heap.
As you mentioned, one possible cause of slowness is that some of the memory allocated to Java gets swapped out to disk. In that case, the program would indeed start churning the disk, so don't go overboard if you have little physical memory available. On Linux, you can get page fault stats for a process; I am sure there's a similar way on Windows.
The -Xmx option only limits the Java heap size. In addition to the heap, Java will allocate memory for other things, including a stack for each thread (typically 512 kB to 1 MB by default, depending on the platform; set by -Xss), the PermGen space, etc.
So, depending on how many threads you launch, the number of classes your application loads, and some other factors, you may use a lot more memory than expected.
Also, as pointed out, the Windows task manager may take the virtual memory into account.
You mean the heap, right? As far as I know there are two things to take care of: the -Xms option, which sets the initial Java heap size, and the -Xmx option, which sets the maximum Java heap space. If the heap usage exceeds the -Xmx value, an OutOfMemoryError is thrown.
What about the virtual pages it's taking up? I think Windows shows you the full set of everything aggregated.

Which heap size do you prefer?

I know there is no "right" heap size, but which heap size do you use in your applications (application type, jdk, os)?
The JVM Options -Xms (initial/minimum) and -Xmx (maximum) allow for controlling the heap size. What settings make sense under which circumstances? When are the defaults appropriate?
You have to try your application and see how it performs. For example, I used to always run IDEA out of the box until I got this new job where I work on a huge monolithic project. IDEA was running very slowly and regularly throwing out-of-memory errors when compiling the full project.
The first thing I did was ramp up the heap to 1 gig. This got rid of the out-of-memory issues, but it was still slow. I also noticed IDEA was regularly freezing for 10 seconds or so, after which the used memory was cut in half only to ramp up again, and that triggered the garbage collection idea. I now use it with -Xms512m and -Xmx768m, but I also added -Xincgc to activate incremental garbage collection.
As a result, I've got my old IDEA back: it runs smoothly, doesn't freeze anymore and never uses more than 600m of heap.
For your application you have to use a similar approach: try to determine the typical memory usage and tune your heap for the application to run well in those conditions. But also let advanced users tune the setting, to address out-of-the-ordinary data loads.
It depends on the application type. A desktop application is much different than a web application. An application server is much different than a standalone application.
It also depends on the JVM that you are using. JDK5 and later 6 include enhancements that help understand how to tune your application.
Heap size is important, but its also important to know how it plays with the garbage collector.
JDK1.4 Garbage Collector Tuning
JDK5 Garbage Collector Tuning
JDK6 Garbage Collector Tuning
Actually, I always considered it very strange that Java limits the heap size. A native application can usually use as much heap as it wants, until it runs out of virtual address space. The only reason to limit the heap in Java seems to be the garbage collector, which has a certain kind of "laziness" and may not garbage collect objects unless there is a necessity to do so. That means if you choose the heap too big, your app constantly uses more memory than is really necessary.
However, Sun has improved the GC a lot over the years, and to emulate the behavior of a native C app, I would set the initial heap size to 32 MB (for small programs) or 64 MB (for bigger ones) and the maximum to something between 1-2 GB. If your app really needs over 1 GB of memory, it is most likely broken (unless you deal with data objects that large), but I see no reason why your app should be killed just because it goes over a certain heap size.
Of course, this is referring to normal PCs. If you create Java code for mobile phones or other limited devices, you should probably adapt the initial and maximum heap size to the limitations of that device.
Typically I try not to use heaps larger than 1GB.
It will cost you on major garbage collections.
Sometimes it is better to split your application across a few JVMs on the same machine rather than use large heap sizes.
A major collection with a large heap size can take >10 minutes (on applications with unoptimized GC settings).
This is entirely dependent on your application and any hardware limitations you may have. There is no one size fits all.
jmap can be used to have a look at what heap you are actually using and is a good starting point for right-sizing the heap.
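For example, on JDK 8 and earlier, this prints the configured and currently used sizes of each heap space (<pid> is a placeholder):
jmap -heap <pid>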
You need to spend quite some time in JConsole or VisualVM to get a clear picture of what the plateau memory usage is. Wait until everything is stable and you see the characteristic sawtooth curve of heap memory usage. The peaks should be at 70-80% of your heap, depending on what garbage collector you use.
Most garbage collectors trigger full GCs when heap usage reaches a certain percentage. This percentage is from 60% to 80% of max heap, depending on what strategy is involved.
1.3Gb for a heavy GUI application.
Unfortunately, on Linux the JVM seems to pre-request 1.3G of virtual memory in that situation, which looks bad even if it's not needed (and causes a lot of confused grumbling from users).
On my most memory intensive app:
-Xms250M -Xmx1500M -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
