heap memory allocation and performance - java

An application is running on a high-end Linux server with 16 GB RAM and a 100 GB hard disk. We use the following command to run the program:
nohup java -server -Xmn512m -Xms1024m -Xmx1024m -Xss256k -verbose:gc
-Xloggc:/logs/gc_log.txt MyClass >> /output.txt
This was written by someone else; I'm trying to understand why -verbose:gc and -Xloggc: are used.
When I checked, this command does collect GC logs, but does that cause a performance issue for the application?
I also want to revisit my memory arguments: the application is using the full 16 GB of RAM while CPU utilization is under 5 percent, and we are analyzing the reasons for the memory utilization.
Can we increase the memory parameter values for better performance, or will the existing values suffice?

The answer to the first part of your question: -verbose:gc makes the JVM emit GC logging, and -Xloggc: directs that output to the file you provide. The overhead of GC logging is minimal; with CPU usage under 5%, there is no problem keeping verbose GC logs enabled.
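As an aside (not part of the original setup): if the application ever moves to Java 9 or later, -verbose:gc and -Xloggc: are superseded by unified logging, so the equivalent command would look roughly like:
nohup java -server -Xmn512m -Xms1024m -Xmx1024m -Xss256k -Xlog:gc:file=/logs/gc_log.txt MyClass >> /output.txt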
For the second part: with a 1024m heap it is not possible for the application's heap to be taking 16 GB of RAM. Check what else on the server (other processes, the OS file cache, or native memory outside the heap) is consuming it before changing any JVM parameters.
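To confirm the cap from inside the JVM, here is a minimal sketch (HeapCheck is an illustrative name, not part of the original application). Started with the same -Xms1024m -Xmx1024m flags, it should report a maximum heap of about 1024 MB no matter how much RAM the machine has:
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() is bounded by -Xmx; totalMemory() starts near -Xms
        System.out.println("max heap  = " + (rt.maxMemory() >> 20) + " MB");
        System.out.println("committed = " + (rt.totalMemory() >> 20) + " MB");
        System.out.println("free      = " + (rt.freeMemory() >> 20) + " MB");
    }
}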

Related

G1GC causes gradual memory growth and full GC brings it down

I am running my Java application on CentOS 6 with OpenJDK 1.8.0_232, using G1GC. Total heap usage grows gradually and eventually causes the application to crash. When I take a heap dump of live objects, the dump size is only 1.6 GB, yet my total used heap was 32 GB.
Command used for taking dump: jmap -dump:live,format=b,file=/tmp/dump.hprof
I read somewhere that the jmap dump command triggers a full GC and releases unreachable objects, which explains the smaller dump size. Indeed, after triggering the dump command my total heap usage comes down, and then starts growing gradually again.
My JVM args : -XX:-AllowUserSignalHandlers -Xmx49000m -DFCGI_PORT=6654 -XX:+UseG1GC -XX:+UseStringDeduplication -XX:InitiatingHeapOccupancyPercent=55 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/xyz -XX:+PerfDisableSharedMem -Djava.io.tmpdir=/var/XXX/temp
Is there a better way to efficiently do full GC with G1?
This is because what is being presented to you is the committed memory, which is different from the used memory.
In more recent versions of Java, the GC algorithms have been improved so that committed memory is released back to the operating system more promptly (see the JEP links below).
Use Java Mission Control to see your memory in more detail (committed memory vs. used memory), or query it from inside the process, as in the sketch after the links below.
My recommendation is to use a newer version of Java; if that is not possible, lower the -Xmx value (e.g. to 3 GB). You will notice that the JVM's committed memory always tends to approach the limit you define.
https://openjdk.java.net/jeps/346
https://openjdk.java.net/jeps/351
https://blog.idrsolutions.com/2019/09/improved-garbage-collection-in-java-13/
https://www.slideshare.net/jelastic/choosing-right-garbage-collector-to-increase-efficiency-of-java-memory-usage
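If Java Mission Control is not at hand, the same committed-vs-used distinction can be read from inside the process with the standard java.lang.management API. A minimal sketch (the class name is illustrative):
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class CommittedVsUsed {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // "committed" is what the OS has actually handed to the JVM;
        // "used" is what live objects currently occupy
        System.out.println("used      = " + (heap.getUsed() >> 20) + " MB");
        System.out.println("committed = " + (heap.getCommitted() >> 20) + " MB");
        System.out.println("max       = " + (heap.getMax() >> 20) + " MB");
    }
}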

Java native memory leak with G1 and huge memory

We currently have a problem with a Java native memory leak. The server is quite big (40 CPUs, 128 GB of memory). The Java heap size is 64 GB, and we run a very memory-intensive application that reads lots of data into strings on about 400 threads and discards it after a few minutes.
So the heap fills up very fast, but objects become obsolete and can be collected very fast, too. We therefore have to use G1 to avoid stop-the-world pauses lasting minutes.
Now, that seems to work fine: the heap is big enough to run the application for days, and nothing is leaking there. Nevertheless, the Java process keeps growing over time until all 128 GB are used and the application crashes with an allocation failure.
I've read a lot about native Java memory leaks, including the glibc issue with the maximum number of malloc arenas (we are on Debian wheezy with glibc 2.13, so no fix is possible here by setting MALLOC_ARENA_MAX=1 or 4 without a dist upgrade).
So we tried jemalloc, which gave us profiling graphs for inuse-space and inuse-objects (graphs not reproduced here).
I don't understand what the issue is here; does anyone have an idea?
If I set MALLOC_CONF="narenas:1" for jemalloc as an environment variable for the Tomcat process running our app, could the process still somehow end up using the glibc malloc version anyway?
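For reference, a typical way to run Tomcat under jemalloc looks roughly like this (a sketch; the library path is an assumption about the install). MALLOC_CONF is read only by jemalloc itself, so if the preload is missing, the process silently falls back to glibc malloc and the variable is ignored:
LD_PRELOAD=/usr/local/lib/libjemalloc.so MALLOC_CONF=narenas:1 \
    $CATALINA_HOME/bin/catalina.sh run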
This is our G1 setup; maybe there is some issue here?
-XX:+UseCompressedOops
-XX:+UseNUMA
-XX:NewSize=6000m
-XX:MaxNewSize=6000m
-XX:NewRatio=3
-XX:SurvivorRatio=1
-XX:InitiatingHeapOccupancyPercent=55
-XX:MaxGCPauseMillis=1000
-XX:PermSize=64m
-XX:MaxPermSize=128m
-XX:+PrintCommandLineFlags
-XX:+PrintFlagsFinal
-XX:+PrintGC
-XX:+PrintGCApplicationStoppedTime
-XX:+PrintGCDateStamps
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+PrintTenuringDistribution
-XX:-UseAdaptiveSizePolicy
-XX:+UseG1GC
-XX:MaxDirectMemorySize=2g
-Xms65536m
-Xmx65536m
Thanks for your help!
We never called System.gc() explicitly, and we have meanwhile stopped using G1, specifying nothing other than -Xms and -Xmx.
We therefore use nearly all of the 128 GB for the heap now. The Java process memory usage is high, but it has been constant for weeks. I'm sure this is a G1 issue, or at least a general GC issue. The only disadvantage of this "solution" is high GC pauses, but they decreased from up to 90 s to about 1-5 s when we increased the heap, which is acceptable for the benchmark we drive on our servers.
Before that, I played around with the -XX:ParallelGCThreads option, which had a significant influence on the speed of the leak when decreasing it from 28 (the default for 40 CPUs) down towards 1. The memory graphs looked somewhat like a hand fan when using different values on different instances...

JVM heap size issues

I have just started investigating JVM heap sizing and observed some strange behavior.
My system RAM size is 4 GB
OS is 64-bit Windows 7
Java version is 1.7
Here are my observations:
I wrote a sample main program which starts and immediately goes into a wait state.
When I run the program from Eclipse with -Xms1024m -Xmx1024m, I can start only 3 parallel processes; I assumed that is because my RAM is only 4 GB, so this looked like expected behavior.
I then reran the same program with -Xms512m -Xmx512m and was able to start 19 parallel processes from Eclipse, even though my RAM is 4 GB. How?
I used VisualVM to cross-check, and I can see 19 process IDs, each allocated 512m, even though my RAM size is 4 GB. But how?
I have googled this and gone through a lot of Oracle documentation about memory management and optimization, but those articles did not answer my question.
Thanks in advance.
Thanks,
Baji
I wonder why you couldn't start more than 3 processes with -Xmx1024M -Xms1024M. I tested it on my 4GB Linux system, and I could start at least 6 such processes; I didn't try to start more.
Anyway, this is because the program doesn't actually use that memory if you just start it with -Xmx1024M -Xms1024M. Those options only size the heap; if you never allocate that amount of data, the memory never actually gets used.
"top" shows the virtual size (VSZ) to be more than 1024M for such processes, but the process isn't actually using that memory. It has merely reserved address space it never touched.
The physical memory in your PC and the memory that is allocated need not match. When your RAM is full, the operating system moves data from RAM onto the hard disk. This is called swapping, or paging.
That's why Windows has a swap file on the disk and Unix typically uses a swap partition.

java.lang.OutOfMemoryError: Java heap space in every 2-3 Hours

In our application we have both Apache HTTP Server (for the front end only) and JBoss 4.2 (for the business/backend side). We are using Ubuntu 12 as the server OS. Our application repeatedly throws java.lang.OutOfMemoryError: Java heap space. (It throws OOMEs for an hour or so, then works normally for the next 2-3 hours, and the pattern repeats.) Our Java memory settings are:
-Xms512m -Xmx1024m
Our server has 6 GB of RAM physically. Please advise: do we need to increase the Java heap size? If yes, what would be an appropriate size given the 6 GB of physical RAM?
Are you sure you don't have memory leaks? Also, if you are using memory-hungry APIs such as POI for Office documents or iText for PDF, make sure your code keeps the memory footprint low. You can use a profiler to see what exactly is happening. If you still need to increase the heap, do it step by step until it reaches an appropriate value, for example:
-Xms512m -Xmx1024m
then
-Xms512m -Xmx2048m
and so on...
I would check whether you have a memory leak, e.g. whether objects are building up and never being freed.
You can do that with a profiler such as VisualVM; even jmap -histo:live might be enough.
If you don't have a memory leak and the memory usage is legitimate, I would try raising the maximum to the most memory you are willing to let the JVM use, e.g. perhaps 4 GB.
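For example (a sketch; <pid> and the dump path are placeholders), a live-object histogram plus a heap dump on the next OOME, using the same flags that appear elsewhere on this page:
jmap -histo:live <pid> | head -n 25
java -Xms512m -Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/app.hprof ...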

How to reduce Sun/Oracle JVM internal overhead?

This problem is specifically about the Sun Java JVM running on Linux x86-64. I'm trying to figure out why the Sun JVM takes so much of the system's physical memory even when I have set both heap and non-heap limits.
The program I'm running is Eclipse 3.7 with multiple plugins/features; the most used are PDT, EGit and Mylyn. I'm starting Eclipse with the following command-line switches:
-nosplash -vmargs -Xincgc -Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m -XX:MaxPermHeapExpansion=10m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseParNewGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing -XX:CMSIncrementalDutyCycleMin=0 -XX:CMSIncrementalDutyCycle=5 -XX:GCTimeRatio=49 -XX:MaxGCPauseMillis=50 -XX:GCPauseIntervalMillis=1000 -XX:+UseCMSCompactAtFullCollection -XX:+CMSClassUnloadingEnabled -XX:+DoEscapeAnalysis -XX:+UseCompressedOops -XX:+AggressiveOpts -Dorg.eclipse.swt.internal.gtk.disablePrinting
Especially worth noting are the switches:
-Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m
These switches should limit the JVM heap to a maximum of 200 MB and the non-heap areas to 150 MB ("CMS Permanent generation" and "Code Cache", as labeled by JConsole). Logically, the JVM should take a total of 350 MB plus whatever internal overhead it requires.
In reality, the JVM takes 544.6 MB for my current Eclipse process as computed by ps_mem.py (http://www.pixelbeat.org/scripts/ps_mem.py), which measures the real physical memory pages reserved by the Linux 2.6+ kernel. That is roughly 200 MB of internal Sun JVM overhead, about 35% of the total!
Any hints about how to decrease this overhead?
Here's some additional info:
$ ps auxw
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
me 23440 2.4 14.4 1394144 558440 ? Sl Oct12 210:41 /usr/bin/java ...
And according to JConsole, the process has used 160 MB of heap and 151 MB of non-heap.
I'm not saying that I cannot afford an extra 200 MB for running Eclipse, but if there's a way to reduce this waste, I'd rather use that 200 MB for kernel block device buffers or the file cache. In addition, I have had similar experiences with other Java programs; perhaps I could reduce the overhead for all of them with similar tweaks.
Update: after posting the question, I found an earlier SO post:
Why does the Sun JVM continue to consume ever more RSS memory even when the heap, etc sizes are stable?
It seems that I should use pmap to investigate the problem.
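For example (a sketch; <pid> is a placeholder), listing the mappings with the largest resident sizes:
pmap -x <pid> | sort -n -k3 | tail -n 20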
I think the reason for the high memory consumption of your Eclipse environment is the use of SWT. SWT is a native graphics library that lives outside the JVM heap, and to make matters worse, the implementation on Linux is not really optimized.
I don't think there's really much chance of reducing the memory consumption of your Eclipse environment as far as the memory outside the heap is concerned.
Eclipse is a memory and CPU hog. In addition to the Java class libraries, all the low-level GUI work is handled by native system calls, so you will have a substantial native JNI library attached to your process to execute the low-level X calls.
Eclipse offers millions of useful features and lots of helpers to speed up your day-to-day programming tasks, but lean and mean it is not. Any reduction in memory or resources will probably result in a noticeable slowdown. It really depends on how much you value your time vs. your computer's memory.
If you want lean and mean, gvim and make are unbeatable. If you want code completion, automatic builds, etc., you must expect to pay for it with extra resources.
If I run the following program
public class Main {
    public static void main(String... args) throws InterruptedException {
        for (int i = 0; i < 60; i++) {
            System.out.println("waiting " + i);
            Thread.sleep(1000);
        }
    }
}
then ps auxw prints:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
500 13165 0.0 0.0 596680 13572 pts/2 Sl+ 13:54 0:00 java -Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m -cp . Main
The amount of memory used is 13.5 MB. There are about 200 MB of shared libraries which count towards the VSZ. The rest can be accounted for by the maximum heap and maximum perm gen, plus some overhead for thread stacks etc.
The problem doesn't appear to be with the JVM but the application running in it. Using additional shared libraries, direct memory and memory mapped files can increase the amount of memory used.
Given you can buy 16 GB for around $100, do you know this is actually a problem?
