I have a small Java application, basically a command-line utility, that sleeps 99% of the time and runs once a day to copy a file to a particular location. The app takes 10 MB of memory. Of course, that is not a big number, but it looks like the app takes more memory than it needs. Could you suggest how to reduce the amount of memory for this app?
I run it like this:
java -Xmx12m -jar c:\copyFiles\copyFiles.jar
AFAIK this heavily depends on the garbage collector that is used and the JVM version, but in general the VM is not very eager to give memory back to the OS, for numerous reasons. A much broader discussion about the same topic is here.
Since JDK 12, there is this JEP:
http://openjdk.java.net/jeps/346
that will instruct the VM (with certain flags) to give memory back to the OS after a "planned" garbage collection cycle takes place. This is controlled by some newly introduced flags; read the JEP to find out more.
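For instance, per that JEP, G1 gained a periodic collection that can be enabled from the command line. A hedged sketch for the original example, assuming JDK 12+ with G1 (the interval value is an arbitrary illustration, not a recommendation):

java -XX:+UseG1GC -XX:G1PeriodicGCInterval=60000 -Xmx12m -jar c:\copyFiles\copyFiles.jar

G1PeriodicGCInterval is in milliseconds; after such a periodic cycle, G1 may return unused committed memory to the OS.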
You are allowing your application's heap to use roughly 12 MB because you pass the -Xmx12m parameter to the JVM (note that -Xmx limits only the heap; the JVM itself needs some memory on top of that). Java is a bit wasteful regarding memory. You can decrease the maximum amount of heap your application may use:
-Xmx10k - can use up to 10 KB of heap (in practice the JVM rejects values this small).
-Xmx10m - can use up to 10 MB of heap.
-Xmx10g - can use up to 10 GB of heap.
Feel free to adjust this setting however you want, but be careful: if your application exceeds the set amount, an OutOfMemoryError is thrown.
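To see the limit enforced, here is a minimal sketch (the class name and allocation size are mine, purely for illustration) that keeps allocating until the error is thrown; run it with, say, -Xmx10m:

import java.util.ArrayList;
import java.util.List;

public class OomDemo {
    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<>();
        while (true) {
            // Each iteration keeps one more 1 MB block reachable, so with
            // -Xmx10m this throws java.lang.OutOfMemoryError: Java heap space
            // after roughly ten iterations.
            blocks.add(new byte[1024 * 1024]);
        }
    }
}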
I have a 64-bit machine; theoretically the address space is 2^64 bytes, and it has 32 GB of physical RAM.
This is a server-class machine with 16 cores, and it is a production server.
Since there is no other process running that consumes massive amounts of memory, and the server JVM is the only app that's running, is there any reason not to set the JVM heap to a really large number?
I see it being set to less than 10 GB, and I cannot think of any explanation for why that would be.
As I mentioned earlier in the post:
I understand that the kernel, cache, and other processes would need to share RAM. But barring any other processes and OS-native stuff, there is nothing else going on.
This machine is a production machine, dedicated solely to this specific JVM.
Would there be any reason not to set it to something like 20 GB of the 32 GB of physical RAM?
From the comments below, it appears not, other than the need to fail fast.
Thanks for your inputs.
Because the operating system also needs RAM: for cache, buffers, daemons such as syslogd, page tables, kernel data structures, etc.
Also, the JVM designers have no idea what other applications you might want to start after the JVM starts up. So it is sensible for the JVM to not hog all the RAM by default.
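As a quick check of what the JVM picks when you don't set anything explicitly, a small sketch (on recent HotSpot builds the default maximum heap is typically around a quarter of physical RAM, but treat that as an assumption, not a guarantee):

public class DefaultHeap {
    public static void main(String[] args) {
        // Reports the maximum heap the running JVM is willing to use.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Default max heap: " + (maxBytes >> 20) + " MB");
    }
}

Run it without any -Xmx flag to see the default on your 32 GB box.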
This goes to all Java heap/GC experts out there.
Simple and straightforward question: is there a hard limit on the maximum figure one can set the Java heap size to?
One of the servers I'm working on hosts an in-memory real-time communication system, and the clients using the server have asked me to increase the amount of memory to 192 GB (it's not a typo: 192 gigabytes!). Technically this is possible because the machine is virtual.
Besides the one above, my other questions are:
how well is the JVM going to handle such a size?
is there any showstopper in setting it?
is there something I should be aware of, when dealing with such sizes?
Thanks in advance to anyone willing to help.
Regards.
The JVM spec for Java 7 does not indicate that there is a hard limit.
I'd suggest that it is therefore governed by your OS and the amount of memory available.
However, you should be aware that the JVM needs enough memory for other internal structures (as the spec describes), such as PermGen, the constant pool, etc. I'd also suggest that you profile before arbitrarily increasing the heap size. It's possible that your old-gen space is hogging memory. I'd use VisualGC (now in VisualVM) to watch the memory usage and YourKit to look for leaks (its generational capabilities are a real help).
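Since PermGen is sized separately from the heap on a Java 7-era HotSpot JVM, a hedged example launch line (the jar name and values are placeholders, not recommendations):

java -Xmx192g -XX:MaxPermSize=512m -jar server.jar

-Xmx caps the object heap, while -XX:MaxPermSize caps the permanent generation that holds class metadata and the constant pool.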
I've not answered your other questions but probably couldn't say more than your other respondents.
1) Have them weigh letting the virtual machine swap data between memory and disk against doing it in their own code. Sometimes letting the VM do it is just as good and takes no extra work. Sometimes their own approach is much faster, when they know something about the data's characteristics and their optimizations are targeted that way.
2) Consider the possible use of multiple JVMs, either cooperating or running in parallel, with the scope of operation divided up somehow. Sometimes even running four JVMs with the same application on the same VM, each using a quarter of the memory, can be better. (It depends on your application's characteristics.)
3) Consider a memory management framework like Terracotta's BigMemory. There are competitors in this space; for example, VMware has Elastic Memory. These use memory outside the JVM's heap to store things and avoid the GC issue, or they allocate very large heap objects and manage them independently of Java.
4) JVMs do OK(TM) if the memory gets taken up by live objects and they end up moving to the older generations for garbage collection. GC can be a problem; this is a black art. Be sure to kill a chicken and spread the feathers over the virtual machine before attempting to optimize GC. :) Oh, and there are some really interesting comments here: Java very large heap sizes
5) Sometimes you set the JVM size and other factors beyond your control keep it from growing that large. Be sure to do some testing to make sure it is really able to grow to the size you set (one way to do that is sketched at the end of this answer).
And number 5 is the real key item. Test. Test. And test some more. You are out on the edges of explored territory.
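One rough sketch of such a test, written purely for illustration (the class name, block size, and the flags mentioned afterwards are my assumptions, not from the original posts): allocate until the heap gives out and report how far you got.

import java.util.ArrayList;
import java.util.List;

public class HeapGrowthTest {
    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<>();
        try {
            while (true) {
                // Keep 64 MB blocks reachable so the heap must actually grow.
                blocks.add(new byte[64 * 1024 * 1024]);
            }
        } catch (OutOfMemoryError e) {
            int n = blocks.size();
            blocks.clear(); // release everything so printing below is safe
            System.out.println("Heap gave out after about " + (n * 64L) + " MB");
        }
    }
}

Alternatively, launching with -Xms equal to -Xmx plus -XX:+AlwaysPreTouch forces the JVM to commit and touch the whole heap at startup, so a misconfigured machine fails fast rather than during peak load.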
By default the JVM uses at most about 1.5 GB of RAM per Java application.
But my server has 8 GB, and the application still needs more RAM. How do I start a cluster of JVMs on a single server?
If I increase the memory of a single JVM, the garbage collector and other JVM daemon threads slow down.
What is the solution for this? Is a cluster of JVMs the right thing?
The application runs on a high-spec configuration; when requests start coming in, the JVM slows down and memory usage reaches 95% to 99%.
My server configuration: Linux
4-core multiprocessor
8 GB RAM
No issue with HDD space.
Any solution for this problem?
You might want to look into memory grids like:
Oracle Coherence: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html
GridGain: http://www.gridgain.com/
Terracotta: http://terracotta.org/
We use Coherence to run 3 JVMs on 1 machine; each process uses 1 GB of RAM.
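As a purely hypothetical illustration of that layout (node.jar and the --port argument are made-up names, and a real grid needs its own configuration), launching three 1 GB nodes on one Linux machine might look like:

java -Xmx1g -jar node.jar --port 9001 &
java -Xmx1g -jar node.jar --port 9002 &
java -Xmx1g -jar node.jar --port 9003 &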
There are a number of solutions.
Use a larger heap size (possibly with a 64-bit JVM).
Use less heap and more off-heap memory; off-heap memory can scale into the terabytes (see the sketch after this list).
Split the JVM into multiple processes. This is easier for some applications than for others; I tend to avoid it, as my applications can't be split easily.
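A minimal sketch of the off-heap idea using plain JDK direct buffers (the class name is mine; real off-heap stores like the grids mentioned above are far more sophisticated): direct ByteBuffers live outside the Java heap, so the GC never scans or copies their contents, and they are limited by -XX:MaxDirectMemorySize rather than -Xmx.

import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        // 256 MB allocated outside the Java heap.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(256 * 1024 * 1024);
        offHeap.putLong(0, 42L);                 // write at byte offset 0
        System.out.println(offHeap.getLong(0));  // reads back 42
    }
}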
I have heard several people claim that you cannot scale the JVM heap size up. I've heard claims of the practical limit being 4 gigabytes (an IBM consultant said that), 10 gigabytes, 32 gigabytes, and so on. I simply cannot believe any of those numbers and have been wondering about the issue for a while now.
So I have a three-part question I hope someone with experience can answer:
Given the following case how would you tune the heap and GC settings?
Would there be noticeable hiccups (pauses of the JVM, etc.) that end users would notice?
Should this really still work? I think it should.
The case:
64 bit platform
64 cores
64 gigabytes of memory
The application server is client-facing (i.e. a JBoss/Tomcat web application server); complete pauses of the JVM would probably be noticed by end users
Sun JVM, probably 1.5
To prove I am not asking you guys to do my homework this is what I came up with:
-XX:+UseConcMarkSweepGC -XX:+AggressiveOpts -XX:+UnlockDiagnosticVMOptions -XX:-EliminateZeroing -Xmn768m -Xmx55000m
CMS should reduce the number of pauses, although it comes with overhead. The other settings for CMS seem to default automatically based on the number of CPUs, so they seem sane to me. The rest I added are extras that might help or hurt performance generally, and they should probably be tested.
Definitely.
I think it's going to be difficult for anybody to give you anything more than general advice, without having further knowledge of your application.
What I would suggest is that you use VisualGC (or the VisualGC plugin for VisualVM) to actually look at what the garbage collection is doing when your app is running. Once you have a greater understanding of how the GC is working alongside your application, it'll be far easier to tune it.
#1. Given the following case how would you tune the heap and GC settings?
First, having 64 gigabytes of memory doesn't imply that you have to use them all for one JVM. Actually, it rather means you can run many of them. Then, it is impossible to answer your question without any access to your machine and application to measure and analyse things (knowing what your application is doing isn't enough). And no, I'm not asking to get access to your environment :)
#2. Would there be noticeable hickups (pauses of JVM etc) that would be noticed by the end users?
The goal of tuning is to find a good compromise between the frequency and duration of (major) GCs. With a ~55 GB heap, GC won't be frequent but will take noticeable time, for sure (the bigger the heap, the longer the major GC). Using a parallel or concurrent garbage collector will help on multiprocessor systems but won't entirely solve this issue. Why do you need ~55 GB (that's mega ultra huge for a webapp, IMO)? That's my question. I'd rather run many clustered JVMs to handle the load if required (at some point, the database will become the bottleneck anyway with a data-oriented application).
#3. Should this really still work? I think it should.
Hmm... not sure I get the question. What is "this"? Instantiating a JVM with a big heap? Yes, it should. Is it equivalent to running several JVMs? No, certainly not.
PS: 4 GB is the maximum theoretical heap limit for the 32-bit JVM running on a 64-bit operating system (see Why can't I get a larger heap with the 32-bit JVM?)
PPS: On 64-bit VMs, you have 64 bits of addressability to work with resulting in a maximum Java heap size limited only by the amount of physical memory and swap space your system provides. (see How large a heap can I create using a 64-bit VM?)
Obviously the heap size is not unlimited, and the larger the heap, the more time your JVM will eventually spend on GC. Though it is possible to set the heap size quite high on a 64-bit JVM, I still think it's not really practical. The advice here is that it's better to have several JVMs running with the same parameters, i.e. a cluster of JBoss/Tomcat nodes running on the same physical machine; you will get better throughput.
EDIT: Also, your GC behavior depends on the composition of your heap. If you have a lot of short-lived objects and each request to the server creates many of them, then your GC will collect a lot of garbage very often, and on a large heap this will result in longer pauses. If you have very many long-lived objects (e.g. you cache most of your data in memory) and the number of short-lived objects is not that big, then a bigger heap size is OK.
As Chris Rice already wrote, I wouldn't expect any obvious problems with the GC for heap sizes up to 32-64 GB, although there may of course be some aspect of your application logic that causes problems.
Not directly related to GC, but I would still recommend that you perform a realistic load test on your production system. I used to work on a project where we had a similar setup (a relatively large, clustered JBoss/Tomcat setup serving a public web application), and without exaggeration, JBoss does not behave very well under high load or with a high number of concurrent calls if you are using EJBs. JBoss spends a lot of time in synchronized blocks when accessing and managing the EJB instance pools, and if you opt for a cluster, it will even wait for intra-cluster network communication within these synchronized blocks. Be especially aware of poorly performing state replication if you are using SFSBs.
Only to add some more switches I would use by default: -Xms55g can help reduce the ramp-up time, because it frees the JVM from having to grow the heap in steps from a smaller initial size, and it also allows better initial internal sizing of memory areas.
Additionally, we have had good experiences with -XX:NewSize, which gives you a large young generation to get rid of short-term garbage: -XX:NewSize=1g. Most webapps create a lot of short-lived garbage that never survives request processing, so you can even make that bigger. With -Xms55g, the VM reserves a large chunk already; maybe downsizing can help.
-Xincgc helps to collect the young generation incrementally and returns the CPU to the user threads more often.
-XX:CMSInitiatingOccupancyFraction=70 - if you really fill all that memory, this starts the CMS garbage collection earlier.
-XX:+CMSIncrementalMode puts the CMS into incremental mode to return the CPU to the user threads more often.
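Putting those switches together, a hedged example launch line (yourapp.jar is a placeholder, and the exact flag set should be validated against your JVM version before production use):

java -Xms55g -Xmx55g -XX:NewSize=1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSIncrementalMode -jar yourapp.jar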
Attach to the process with jstat -gc -h 10 <pid> 1s and watch the GC working.
Will you really fill up the memory? I assume that 64 CPUs for request processing might even be able to work with less memory. What do you store in there?
Depending on your GC pause analysis, you may wish to enable incremental mode, whereby a long pause may be broken up over a period of time.
I have found that memory architecture plays a part at large memory sizes. Applications in general don't perform as well if they use more than one memory bank. The JVM appears to suffer as well, especially the GC, which has to sweep the whole memory.
If you have an application which doesn't fit into one memory bank, your application has to pull in memory which is not local to a processor and use memory local to another processor.
On Linux you can run numactl --hardware to see the layout of processors and memory banks.
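If it turns out your JVM spans banks, one hedged option (these are standard numactl flags, but the node numbers and jar name here are illustrative) is to pin both the CPUs and the memory of the process to a single NUMA node:

numactl --cpunodebind=0 --membind=0 java -Xmx16g -jar yourapp.jar

HotSpot also has a -XX:+UseNUMA flag that makes the allocator NUMA-aware, which may help when a single node is not enough.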