Why does a Java application take so much memory? - java

I have a small Java application, basically a command-line utility, which sleeps 99% of the time and runs once a day to copy a file to a particular location. The app takes 10 MB of memory. Of course, that is not a big number, but it looks like the app takes more memory than it should. Could you suggest how to reduce the amount of memory this app uses?
I run it like this:
java -Xmx12m -jar c:\copyFiles\copyFiles.jar

AFAIK this heavily depends on the garbage collector in use and the JVM version, but in general the VM is not very eager to give memory back to the OS, for numerous reasons. There is a much broader discussion about the same topic here.
Since JDK 12, there is JEP 346:
http://openjdk.java.net/jeps/346
which instructs the VM (with certain flags) to give memory back to the OS after a "planned" garbage collection cycle takes place. This is controlled by some newly introduced flags; read the JEP to find out more.
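For example, on JDK 12+ with the G1 collector, something like the following should trigger periodic GC cycles that can hand memory back to the OS (the interval value, in milliseconds, is only illustrative):
java -Xmx12m -XX:+UseG1GC -XX:G1PeriodicGCInterval=60000 -jar c:\copyFiles\copyFiles.jar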

You are allowing your application to use roughly 12 MB because you pass the -Xmx12m parameter to the JVM. Java is a bit wasteful with memory. You can decrease the maximum amount of memory your application is allowed to use:
-Xmx10k - Can use up to 10 KB of memory.
-Xmx10m - Can use up to 10 MB of memory.
-Xmx10g - Can use up to 10 GB of memory.
Feel free to adjust this setting however you want, but be careful: if your application exceeds the set amount, an OutOfMemoryError is thrown.
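To verify the limit actually in effect, here is a minimal sketch using the standard Runtime API (the class name is made up):
public class MaxHeap {
    public static void main(String[] args) {
        // maxMemory() reports the maximum heap the JVM will attempt to use,
        // which roughly corresponds to the -Xmx value.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / 1024 / 1024) + " MB");
    }
}
Running java -Xmx12m MaxHeap should print a value close to 12 MB.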

Related

Is setting JVM Xmx option in Google App Engine Standard useful?

I have a simple Spring Boot application running on the Google App Engine Standard Java 11 environment on F2 instances. However, I occasionally get errors such as:
Exceeded soft memory limit of 512 MB with 592 MB after servicing 18320
requests total. Consider setting a larger instance class in app.yaml.
Instead of upgrading to a larger instance type, I'd prefer to limit memory usage, even if it degrades performance a bit.
Since an F2 instance has 512 MB of available memory, would it help to set the JVM's -Xmx option to a value of, say, 480 MB? Or would it make things worse by converting Google's "Exceeded soft memory limit" warning into a full-blown OutOfMemoryError?
Thanks
A Java process can consume more memory than what is specified via -Xmx; that setting applies only to the maximum heap size.
As such, if you want to limit total memory, you should specify significantly less than 512 MB.
There are other flags that limit portions of non-heap memory, such as -XX:MaxDirectMemorySize.
By default, 1/4 of the available RAM is assigned as Xmx.
You should check the actual settings your app is running with, for example via java -XX:+PrintFlagsFinal.
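For instance, on a Linux shell this prints the maximum heap size the JVM computed for your machine (the grep filter is just a convenience; exact flag names can vary by JVM version):
java -XX:+PrintFlagsFinal -version | grep -i maxheapsize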
I suggest reading this excellent post by a JVM expert explaining various types of JVM memory: Java using much more memory than heap size (or size correctly Docker memory limit)
You can limit the amount of memory the JVM uses, but there are no universal "recommended" arguments/settings. Settings that may be good for one application or use case can be terrible for another. As a general rule, no JVM settings at all is a good starting point.
When you face the Exceeded soft private memory limit error, you have two alternatives:
You can upgrade your instance to another one with more memory.
You can reduce the chunks of data you process in each request: process the XML file in smaller pieces and keep the smaller instance doing the work.
Here is a stackoverflow post which can help.

java memory leak, visualvm showing wrong data

I have a Java application running; after a few hours it fills up its memory.
I've tried to detect the memory leak with VisualVM, but it shows wrong data (I have no idea how that can happen).
In the screenshot you can see the task manager showing memory usage of 700 MB and VisualVM showing 225...
Does anyone know what's going on here?
Regards
Beware that your OS is only aware of the total amount of memory Java has reserved over time (and Java will not give that amount of memory back easily, AFAIK). However, Java may not be using all of that memory at a given moment, so you can see differences between those two numbers.
For example, if you launch your program like this
java -Xmx512m -Xms256m ...
Then your JVM will take 256 MB as soon as it starts (and the OS will tell you so, more or less). However, if you open your memory inspection tool (be it VisualVM, JConsole, etc.), it may show that you are using less than that (you simply have not needed the whole of your reserved heap yet).
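To see both numbers from inside the process, here is a minimal sketch using the standard MemoryMXBean (the class name is made up): "committed" is roughly what the OS attributes to the heap, while "used" is what VisualVM's heap graph shows.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapNumbers {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // used: bytes currently occupied by live (and not-yet-collected) objects
        // committed: bytes the JVM has reserved from the OS for the heap
        System.out.println("used:      " + heap.getUsed() / (1024 * 1024) + " MB");
        System.out.println("committed: " + heap.getCommitted() / (1024 * 1024) + " MB");
        System.out.println("max:       " + heap.getMax() / (1024 * 1024) + " MB");
    }
}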
What Java gets, it doesn't return. Allocating memory takes quite a lot of effort, so Java doesn't usually return any of the memory the system ever granted it. So if your program ever used 760 MB of RAM, that is what it sticks with.
And then there are two other factors that play an important role. The heap size is only the amount of memory your program uses or can use. But between your program and the OS sits the Java VM, which may take a good bit of memory as well. The task manager shows you the amount of memory used by your program plus the VM.
The other factor is memory fragmentation. Some data structures (e.g. arrays) have to occupy a consecutive chunk of memory: array[i+1] has to be in the memory slot after array[i]. This means that if you have, say, 10 MB allocated and the middle 2 MB are in use, and you want to create a 6 MB array, the Java VM has to allocate new memory so it can fit the array in one piece.
This causes the memory usage in the task manager to go up, but not the heap size, since the heap size only shows memory that is actually used.
Memory spaces in Java are described by three metrics: used, available, and max available (see the JMX values). The size you see is the available size, not the max available, which has probably already been allocated.
You should also look at non-heap memory (usually smaller than the heap, unless you have set it up differently).
To be more precise, you should post your JVM startup settings.
PS: Looking at your memory profile, I can't see any leak; I can see the JVM shrinking the heap size because it is barely used.

Java: Commandline parameter to release unused memory

In Bash, I use the command java -Xmx8192m -Xms512m -jar jarfile to start a Java process with an initial heap of 512 MB and a maximum heap of 8 GB.
I like how the heap space increases based on demand, but once the heap has grown, it isn't released even though the process no longer needs the memory. How can I release the memory that isn't being used by the process?
Example: the process starts and uses 600 MB of memory. The heap grows from 512 MB to a little over 600 MB. The process then drops down to 400 MB of RAM usage, but the heap allocation stays at 600 MB. How would I make the allocation stay near the actual RAM usage?
You cannot; the JVM is simply not designed to work that way. Note that unused memory pages will simply be paged out by the OS, and so won't consume any real memory.
Generally you would not want the JVM to return memory to the OS and later claim it back, as neither operation is cheap.
There are a couple of -XX parameters that may or may not work with your preferred garbage collector, namely:
-XX:MaxHeapFreeRatio=70 - Maximum percentage of heap free after GC to avoid shrinking.
-XX:MinHeapFreeRatio=40 - Minimum percentage of heap free after GC to avoid expansion.
Source
I believe you'd need a stop-the-world collector for them to be enforced.
Other JVMs may have their own parameters.
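As a sketch, combined with the question's command line (the ratio values below are illustrative, and whether they are honored depends on the collector):
java -Xms512m -Xmx8192m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar jarfile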
I'd normally not have replied, but the amount of negative/false info here isn't cool.
No, releasing unused memory is not a function most JVMs provide. I think the JVM on Android can probably do this, but I'm not sure.
But most of them - including all Java EE VMs - are simply not interested in this.
This is not as simple as it seems: from the OS's point of view, the VM is a process with a mapped memory region, its stack or data segment.
In most cases this region needs to be a contiguous interval. From the OS's point of view, memory allocation and release happen via a system call, which the process uses to ask the OS for a new segment limit.
What do you do if you have, for example, 2 GB of RAM for your JVM, of which only 500 MB are used, but those 500 MB are dispersed in ten-byte fragments across the whole 2 GB? Such a memory-release function would also need a defragmentation step, which would multiply the resource costs of GC runs.
As Java runs, and Java objects are constructed and destroyed by the garbage collector, the free and allocated memory areas become dispersed across the stack/data segment.
The situation is the same for native OS processes, not just Java: if you malloc() ten 1 MB blocks and then release the first nine, there is no way to give that memory back to the OS, although newer libraries and OS APIs have seen extensive development in this area. Of course, if you later allocate memory again, the allocation will be served from the just-freed regions.
My opinion is that even if this is a little costly and complex (and quite a large programming task), it is worth the price, and it doesn't reflect well on our collective programming culture that this hasn't been done for decades in everything, including the Java VMs.

Appropriate JVM/GC tuning for 4GB JVM with 3GB cache

I am looking for appropriate settings to configure the JVM for a web application. I have read about the old/young/perm generations, but I have trouble putting those parameters to best use for this configuration.
Out of the 4 GB, around 3 GB are used for a cache (an applicative cache using EhCache), so I'm looking for the best setup considering that. FYI, the cache is static during the lifetime of the application (loaded from disk, never expires), but heavily used.
I have already profiled my application and performed optimizations regarding the DB queries, the application's architecture, the cache size, etc... I am just looking for JVM configuration advice here. I have measured 99% throughput for the garbage collector, and 6-8 s pauses when the full GC runs (approximately once every half hour).
Here are the current JVM parameters:
-XX:+UseParallelGC -XX:+AggressiveHeap -Xms2048m -Xmx4096m
-XX:NewSize=64m -XX:PermSize=64m -XX:MaxPermSize=512m
-verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log
Those parameters may be completely off because they were written a long time ago... before the application became this big.
I am using Java 1.5 64 bits.
Do you see any possible improvements?
Edit: the machine has 4 cores.
-XX:+UseParallelOldGC should speed up the full GCs on a multi-core machine.
You could also profile with different NewRatio values. Your cached objects will live in the tenured generation, so profile it with -XX:NewRatio=7 and then again with some higher and lower values.
You may not be able to accurately replicate realistic use during profiling, so make sure you monitor GC while it is in real-life use; then you can make minor changes (e.g. to survivor space sizes) and see what effect they have.
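As a sketch, the question's flags with these suggestions folded in (the NewRatio value is just the starting point suggested above):
-XX:+UseParallelGC -XX:+UseParallelOldGC -XX:NewRatio=7
-Xms2048m -Xmx4096m -verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log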
The old advice was not to use AggressiveHeap together with Xms and Xmx; I am not sure whether that is still true.
Edit: Please let us know which OS/hardware platform you are deployed on.
Full collections every 30 minutes indicate the old generation is quite full. A high value for NewRatio will give it more space at the expense of the young gen. Can you give the JVM more than 4 GB, or are you limited to that?
It would also be useful to know what your goals / non-functional requirements are. Do you want to avoid these 6-7 second pauses at the risk of lower throughput, or are those pauses an acceptable compromise for the highest possible throughput?
If you want to minimise the pauses, try the CMS collector by removing both
-XX:+UseParallelGC -XX:+UseParallelOldGC
and adding
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
Profile that with various NewRatio values and see how you get on.
One downside of the CMS collector is that, unlike the parallel old and serial collectors, it doesn't compact the old generation. If the old generation gets too fragmented and a minor collection needs to promote a lot of objects to the old gen at once, a full serial collection may be invoked, which could mean a long pause. (I've seen this happen in prod, but with the IBM JVM, which went out of memory instead of invoking a compacting collection!)
This might not be a problem for you - it depends on the nature of the application - but you can insure against it by restarting nightly or weekly.
I would use Java 6 update 30 or 7 update 2, 64-bit, as they are much more efficient; e.g. they use 32-bit references by default.
I would also configure Ehcache to use direct memory or a memory-mapped file if possible. This should minimise the impact on GC.
Using these options, it's possible to almost eliminate your heap footprint. E.g. I have an app which uses up to 180 GB of memory-mapped files on a machine with 16 GB of memory, and the heap size is 6 MB. A full GC takes up to 11 ms when triggered manually, not that it ever GCs. ;)
If you want a simple example where I map an 8 TB file into memory and update it, see http://vanillajava.blogspot.com/2011/12/using-memory-mapped-file-for-huge.html
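Not the code from that post, but a minimal sketch of the underlying idea using the standard NIO API (file name and size are made up): the mapped region lives outside the Java heap, so it is largely invisible to the garbage collector.
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedCacheDemo {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("cache.dat", "rw");
             FileChannel channel = file.getChannel()) {
            // Map 64 MB of the file into memory; reads and writes go through
            // the OS page cache instead of heap-allocated objects.
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 64 * 1024 * 1024);
            buffer.putLong(0, System.currentTimeMillis()); // update in place
        }
    }
}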
I hope you just removed -server to keep the post short; otherwise you should enable it right away. Apart from the somewhat longer startup time (which really isn't an issue for a web application that should run for days), I don't see any reason to use anything but C2. That could give some nice performance improvements in general. Back to the topic:
Sadly, the best thing I can think of won't work with your ancient JVM. The G1 garbage collector was basically designed to reduce latency. Not only does it try to reduce pauses in general, it also offers tuning parameters to set pause goals and intervals. See this page.
There is an experimental backport to Java 6, though I doubt it's kept up to date. And I fear nobody is spending any time optimizing GC, or anything else, for Java 1.5 anymore.
PS: There are also IBM's JVM and obviously Azul Systems (OK, that wasn't a serious proposition ;)), but those are obviously out of the question... just wanted to mention them.

Which heap size do you prefer?

I know there is no "right" heap size, but which heap size do you use in your applications (application type, JDK, OS)?
The JVM Options -Xms (initial/minimum) and -Xmx (maximum) allow for controlling the heap size. What settings make sense under which circumstances? When are the defaults appropriate?
You have to try your application and see how it performs. For example, I used to always run IDEA out of the box until I got this new job where I work on a huge monolithic project. IDEA was running very slowly and regularly throwing out-of-memory errors when compiling the full project.
The first thing I did was ramp up the heap to 1 GB. That got rid of the out-of-memory issues, but it was still slow. I also noticed IDEA was regularly freezing for 10 seconds or so, after which the used memory was cut in half only to ramp up again, and that pointed to garbage collection. I now use it with -Xms512m and -Xmx768m, but I also added -Xincgc to activate incremental garbage collection.
As a result, I've got my old IDEA back: it runs smoothly, doesn't freeze anymore, and never uses more than 600 MB of heap.
For your application you have to use a similar approach: try to determine the typical memory usage and tune your heap so the application runs well in those conditions. But also let advanced users tune the setting, to address out-of-the-ordinary data loads.
It depends on the application type. A desktop application is much different than a web application. An application server is much different than a standalone application.
It also depends on the JVM that you are using. JDK 5 and later JDK 6 include enhancements that help you understand how to tune your application.
Heap size is important, but it's also important to know how it plays with the garbage collector.
JDK1.4 Garbage Collector Tuning
JDK5 Garbage Collector Tuning
JDK6 Garbage Collector Tuning
Actually, I always found it very strange that Java limits the heap size. A native application can usually use as much heap as it wants, until it runs out of virtual address space. The only reason to limit the heap in Java seems to be the garbage collector, which has a certain "laziness" and may not collect objects unless there is a necessity to do so. That means if you choose too big a heap, your app constantly uses more memory than is really necessary.
However, Sun has improved the GC a lot over the years, and to emulate the behavior of a native C app, I would set the initial heap size to 32 MB (for small programs) or 64 MB (for bigger ones) and the maximum to something between 1 and 2 GB. If your app really needs over 1 GB of memory, it is most likely broken (unless you deal with data objects that large), but I see no reason why your app should be killed just because it goes over a certain heap size.
Of course, this refers to normal PCs. If you write Java code for mobile phones or other limited devices, you should probably adapt the initial and maximum heap sizes to the limitations of that device.
Typically I try not to use heaps larger than 1 GB.
It will cost you on major garbage collections.
Sometimes it is better to split your application across a few JVMs on the same machine rather than using large heap sizes.
A major collection with a large heap size can take more than 10 minutes (on applications with an untuned GC).
This is entirely dependent on your application and any hardware limitations you may have. There is no one-size-fits-all.
jmap can be used to have a look at what heap you are actually using, and it is a good starting point for right-sizing the heap.
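For example, on older JDKs (the -heap subcommand was removed in later releases in favor of jhsdb) something like this prints the configured generation sizes and current usage of a running JVM, given its process ID:
jmap -heap <pid>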
You need to spend quite some time in JConsole or VisualVM to get a clear picture of what the plateau memory usage is. Wait until everything is stable and you see the characteristic sawtooth curve of heap memory usage. The peaks should be 70-80% of your heap, depending on which garbage collector you use.
Most garbage collectors trigger full GCs when heap usage reaches a certain percentage, typically between 60% and 80% of the max heap, depending on the strategy involved.
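With the CMS collector, for example, this threshold can be pinned explicitly rather than left to the JVM's heuristics (a sketch; the percentage is illustrative):
-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly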
1.3 GB for a heavy GUI application.
Unfortunately, on Linux the JVM seems to pre-request 1.3 GB of virtual memory in that situation, which looks bad even if it's not needed (and causes a lot of confused grumbling from users).
On my most memory-intensive app:
-Xms250M -Xmx1500M -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
