When is the JVM heap allocated by the OS? - java

One of our SAP systems (PI, ABAP + Java stack) was having performance issues. The entire 64GB configured for the machine was getting used up (and the 8 cores as well). Everyone suspected the Java part, but I think differently.
The Java server nodes were getting restarted with an Out Of Memory error. Looking at the hprof files, I found that they were only 1.2GB in size (averaged over 3 server nodes), when 3GB of heap (both -Xms and -Xmx) is configured for the server nodes. This observation led to the following doubts.
I have read that when -Xms and -Xmx are set to the same value, the JVM is allocated the entire heap when it starts. If that is the case, the server nodes would have 3GB of heap from the start. So why doesn't this reflect in the hprof file? And if the hprof contains only the memory allocated to objects at runtime, then the size clearly indicates that the heap was more than 50% free, so how did an OOM error occur?
I also know that Linux does something called memory over-commit, i.e. memory is not actually given when it is requested but only when it is actually used. Is this contributing to the out of memory error? As in: when the JVM starts, the OS tells it that it has been allocated 3GB of memory but actually defers this until the memory is really required, and by the time the JVM actually tries to back its objects with memory, some other application might have exhausted it. Is this possible?
Even if the Java nodes had a memory leak, wouldn't it be confined to the 3GB of heap? How could it use up the entire 64GB of physical memory?
One more thing I observed was that the swap space was only 50% used.
Any light on this?

An hprof file doesn't show the actual heap size, and its size depends on many things, such as whether compressed references are enabled, field layout (the size of an object isn't just the sum of the sizes of its fields; there are also headers and some gaps between fields), and so on.
About memory reserving: the JVM does reserve memory for the heap, but the OS doesn't allocate it until it's needed.
I would recommend using a memory profiler (I highly recommend the YourKit profiler) to connect to the running virtual machine and analyze its memory usage.
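If you don't have a profiler at hand, a quick first check is to ask the JVM itself how much heap it has committed versus how much is actually in use, via the standard java.lang.management API. A minimal sketch (the class name is invented for illustration):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapStats {
        public static void main(String[] args) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            // "committed" is what the JVM has claimed from the OS;
            // "used" is what objects (live and garbage) currently occupy.
            System.out.printf("init=%dM used=%dM committed=%dM max=%dM%n",
                    heap.getInit() >> 20, heap.getUsed() >> 20,
                    heap.getCommitted() >> 20, heap.getMax() >> 20);
        }
    }

With -Xms and -Xmx both set to 3GB, committed should sit near 3GB from the start while used can be far lower, which would be consistent with the small hprof files.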

java process consumes more memory over time but no memory leak

My Java service runs on a host with 16GB of RAM, with -Xms and -Xmx set to 8GB.
The host runs a few other processes.
I noticed that my service consumes more memory over time.
I ran the command ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n on the host and recorded the memory usage of my Java service.
When the service started, it used about 8GB of memory (as -Xms and -Xmx are set to 8GB), but after a week it was using 9GB+. It consumed roughly 100MB more memory per day.
I took a heap dump, restarted the service, and took another heap dump. Comparing the two dumps showed little difference in heap usage: the service used about 1.3GB before the restart and about 1.1GB after.
From the process memory usage, my service is consuming more memory over time, but that's not reported in the heap dump. How do I identify the source of the increase in memory usage in my service?
I set -Xms and -Xmx to 8GB and the host has 16GB of RAM. Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
OK, so you have told the JVM that it can use up to 8GB for the heap, and you are observing heap usage (per the dumps) increasing from 1.1GB to 1.3GB. That's not actually an indication of a problem per se. Certainly, the JVM is not using anywhere near as much memory as you have said it can.
The second thing to note is that it is unclear how you are measuring memory usage. You should be aware that a JVM uses a lot of memory that is NOT Java heap memory. This includes:
The memory used by the java executable itself.
Memory used to hold native libraries.
Memory used to hold bytecodes and class metadata (in "metaspace") and JIT-compiled native code (in the code cache)
Thread stacks
Off-heap memory allocations requested by (typically) native code.
Memory mapped files and shared memory segments.
Some of this usage is reported (if you use the right tools).
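For example, Native Memory Tracking (start the JVM with -XX:NativeMemoryTracking=summary and run jcmd <pid> VM.native_memory summary) breaks most of this down by category. From inside the process you can also query the non-heap pools and the NIO buffer pools via the standard MXBeans; a minimal sketch (class name invented for illustration):

    import java.lang.management.BufferPoolMXBean;
    import java.lang.management.ManagementFactory;

    public class NonHeapStats {
        public static void main(String[] args) {
            // Metaspace, code cache, etc. are reported as "non-heap".
            System.out.println("non-heap: "
                    + ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage());
            // Direct and mapped NIO buffers live outside the Java heap too.
            for (BufferPoolMXBean pool :
                    ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
                System.out.println(pool.getName() + ": used="
                        + (pool.getMemoryUsed() >> 20) + "M");
            }
        }
    }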
The third thing is that the actual memory used by the Java heap can vary a lot. The GC typically works by copying live objects from one "space" to another, so it needs a fair amount of free space to do this. Then, once it has finished a run, the GC looks at how much space is (now) free as a ratio of the space used. If that ratio is too small, it requests more memory from the OS. As a consequence, there can be substantial step increases in total memory usage even though the actual usage (in non-garbage objects) is only increasing gradually. This is quite likely for a JVM that has only started recently, due to various "warmup" effects.
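You can watch those step increases directly: Runtime.totalMemory() reports the currently committed heap, and with a small -Xms it jumps in chunks as the GC grows the heap. A toy sketch (run with something like -Xms64m -Xmx1g; names are illustrative):

    import java.util.ArrayList;
    import java.util.List;

    public class HeapGrowth {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            List<byte[]> retained = new ArrayList<>();
            long lastTotal = 0;
            for (int i = 0; i < 100; i++) {
                retained.add(new byte[8 * 1024 * 1024]); // retain 8 MB per step
                long total = rt.totalMemory();
                if (total != lastTotal) {
                    // totalMemory() rises in steps, not smoothly.
                    System.out.printf("step %3d: committed=%dM%n", i, total >> 20);
                    lastTotal = total;
                }
            }
        }
    }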
Finally, the evidence you have presented does not say (to me!) that there is no memory leak. I think you need to take the heap dumps further apart. I would suggest taking one dump 2 hours after startup, and the second one 2 or more hours later. That would give you enough "leakage" to show up in a comparison of dumps.
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I don't think you need to do that. I think that the increase from 1.1GB to 1.3GB in overall memory usage is a red herring.
Instead, I think you should focus on the memory leak that the other evidence is pointing to. See my suggestion above.
Do I set the min/max heap too high (50% of the total memory on the host)? would that cause any issues?
Well ... a larger heap is going to have more pronounced performance degradation when the heap gets full. The flipside is that a larger heap means that it will take longer to fill up ... assuming that you have a memory leak ... which means it could take longer to diagnose the problem, or be sure that you have fixed it.
But the flipside of the flipside is that this might not be a memory leak at all. It could also be your application or a 3rd-party library caching things. A properly implemented cache could use a lot of memory, but if the heap gets too close to full, it should respond by breaking links1 and evicting cached data. Hence, not a memory leak ... hypothetically.
1 - Or if you use SoftReferences, the GC will break them for you.
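For illustration, a memory-sensitive cache built on SoftReference might look like the sketch below (a hypothetical class, not from the question); the GC is permitted to clear softly reachable entries before it would otherwise throw an OutOfMemoryError:

    import java.lang.ref.SoftReference;
    import java.util.HashMap;
    import java.util.Map;

    public class SoftCache<K, V> {
        private final Map<K, SoftReference<V>> map = new HashMap<>();

        public void put(K key, V value) {
            map.put(key, new SoftReference<>(value));
        }

        public V get(K key) {
            SoftReference<V> ref = map.get(key);
            return ref == null ? null : ref.get(); // null once the GC has evicted it
        }
    }

A production cache would also prune map entries whose references have been cleared, otherwise the (small) wrapper objects accumulate.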

Do the -Xms and -Xmx flags reserve the machine's resources?

I know that the -Xms flag is there to let the JVM process use a specific amount of memory to initialize its heap. With regard to the performance of a Java application, it is also often recommended to set -Xms and -Xmx to the same value when starting the application, like -Xms2048M -Xmx2048M.
I'm curious whether the -Xms and -Xmx flags mean that the JVM process reserves the specified amount of memory, preventing other processes on the same machine from using it.
Is this right?
Xmx merely reserves virtual address space.
Xms actually allocates (commits) it but does not necessarily prefault it.
How operating systems respond to allocations varies.
Windows does allow you to reserve very large chunks of address space (Xmx) but will not allow overcommit (Xms). The limit is defined by swap + physical RAM. The exception is large pages (which need to be enabled with a group policy setting), where the limit is physical RAM.
Linux behavior is more complicated: it depends on vm.overcommit_memory and related sysctls and on various flags passed to the mmap syscall, which to some extent can be controlled by JVM configuration flags. The behavior can range from (a) Xms can exceed total RAM + swap, to (b) Xmx is capped by available physical RAM.
Short answer: Depends on the OS, though it's definitely a NO in all popular operating systems.
I'll take the example of Linux's memory allocation terminology here.
-Xms and -Xmx specify the minimum and maximum size of the JVM heap. These sizes reflect VIRTUAL MEMORY allocations, which may at any time be mapped to physical pages in RAM, the RESIDENT SIZE of the process.
When the JVM starts, it will allocate -Xms amount of virtual memory. This gets mapped to resident (physical) memory as you dynamically create more objects on the heap. That mapping does not require the JVM to request any new allocation from the OS, but it does increase your RAM utilization, because those virtual pages now actually have corresponding physical memory behind them. However, once your process tries to create more objects on the heap after consuming its entire -Xms allocation, it has to request more virtual memory from the OS, which again may or may not be mapped to physical memory later, depending on when you touch it. The limit for this is your -Xmx allocation.
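On Linux you can watch this mapping happen from inside the JVM by reading VmRSS from /proc/self/status. A rough sketch (run with -Xms1g -Xmx1g so heap growth stays out of the picture; exact behavior varies with GC and JVM version, since some collectors pre-touch heap pages):

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class RssDemo {
        public static void main(String[] args) throws Exception {
            printRss("at start");
            byte[] block = new byte[512 * 1024 * 1024]; // 512 MB inside the heap
            printRss("after allocation");
            for (int i = 0; i < block.length; i += 4096) {
                block[i] = 1; // write one byte per 4 KB page to fault it in
            }
            printRss("after touching every page");
        }

        static void printRss(String label) throws Exception {
            for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
                if (line.startsWith("VmRSS")) {
                    System.out.println(label + ": " + line.trim());
                }
            }
        }
    }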
Note that this is all possible because memory on Linux is virtualized and demand-paged. So even if a process allocates memory beforehand, what it gets is virtual memory: an addressable, contiguous, fictional allocation that may or may not be mapped to real physical pages, depending on demand. Read this answer for a short description of how memory management works in popular operating systems. Here is more detailed (slightly outdated but very useful) information on how Linux's memory management works.
Also note that these flags only affect heap sizes. The resident memory size that you see will be larger than the current JVM heap size. More specifically, the memory consumed by a JVM is equal to its HEAP SIZE plus DIRECT MEMORY, which covers things coming from thread stacks, native buffer allocations, etc.
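To see the heap/direct split concretely: a direct NIO buffer raises the process's resident size without moving the heap numbers. A small sketch (class name is illustrative):

    import java.nio.ByteBuffer;

    public class DirectAlloc {
        public static void main(String[] args) {
            // 256 MB allocated outside the Java heap via NIO.
            ByteBuffer buf = ByteBuffer.allocateDirect(256 * 1024 * 1024);
            System.out.println("heap committed: "
                    + (Runtime.getRuntime().totalMemory() >> 20) + "M");
            System.out.println("direct buffer: " + (buf.capacity() >> 20) + "M");
        }
    }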
Does the JVM process make a reservation for the specific amount of memory?
Yes, the JVM reserves the memory specified by Xms at startup and may reserve up to Xmx, but the reservation need not be in physical memory; it can also be in swap. The JVM's pages will be swapped in and out of memory as needed.
Why is it recommended to use the same value for Xms and Xmx?
Note: Setting Xms equal to Xmx is generally recommended for production systems where the machines are dedicated to a single application (or where there aren't many applications competing for system resources). It does not generalize to being a good idea everywhere.
Avoids Heap Resizing:
The JVM starts with the heap size specified by the Xms value. When that heap is exhausted due to allocation of objects by the application, the JVM starts growing the heap. Each time the JVM increases the heap size, it must ask the operating system for additional memory. This is a time-consuming operation and results in increased GC pause times and, in turn, increased response times for requests.
Application Behaviour In the Long Run:
Even though I cannot generalize, many applications eventually grow to the maximum heap value over the long run. This is another reason to start with the maximum memory instead of growing the heap over time and incurring the unnecessary overhead of heap resizing. It is like asking the application to take up, right at the start, the memory it will eventually take anyway.
Number of GCs:
Starting off with a small heap results in garbage collection happening more often. A bigger heap reduces the number of GCs because more memory is available for object allocation. However, it must be noted that bigger heaps increase GC pause times. This is an advantage only if your garbage collection has been tuned properly and the pause times don't increase significantly with heap size.
One more reason for doing this: servers generally come with large amounts of memory, so why not use the resources available?
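If you adopt the equal Xms/Xmx recommendation, you can sanity-check from inside the application that the flags took effect: with both set to the same value, the init and max figures of the heap usage should match. A sketch (class name invented); on Linux you can additionally pass -XX:+AlwaysPreTouch to have the JVM touch every heap page at startup, so the cost of faulting pages in is paid once, up front:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class VerifyHeapFlags {
        public static void main(String[] args) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            // With -Xms2048M -Xmx2048M, init and max should both print 2048M.
            System.out.println("init=" + (heap.getInit() >> 20) + "M, max="
                    + (heap.getMax() >> 20) + "M");
        }
    }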

Why is my Java heap dump size much smaller than used memory?

Problem
We are trying to find the culprit of a big memory leak in our web application. We have pretty limited experience with finding memory leaks, but we found out how to make a Java heap dump using jmap and analyze it in Eclipse MAT.
However, with our application using 56 of 60GB of memory, the heap dump is only 16GB in size, and it appears even smaller in Eclipse MAT.
Context
Our server runs Wildfly 8.2.0 on Ubuntu 14.04 for our Java application, whose process uses 95% of the available memory. When making the dump, the buffers/cache used space was at 56GB.
We used the following command to create the dump: sudo -u {application user} jmap -dump:file=/mnt/heapdump/dump_prd.bin {pid}
The heap dump file size is 16.4GB, and when analyzing it with Eclipse MAT, it says there are around 1GB of live objects and ~14.8GB of unreachable/shallow heap.
EDIT: Here is some more info about the problem we see happening. We monitor our memory usage, and we see it grow and grow until there is only ~300MB of free memory left. It then stays around that amount until the process crashes, unfortunately without an error in the application log.
This makes us assume it is a hard OOM error, because it only happens when memory is nearly depleted. We use the settings -Xms25000m -Xmx40000m for our JVM.
Question
Basically, we are wondering why the majority of our memory isn't captured in this dump. The top retained-size classes don't look too suspicious, so we are wondering if there is something heap-dump-related that we are doing wrong.
When dumping its heap, the JVM will first run a garbage collection cycle to free any unreachable objects.
How can I take a heap dump on Java 5 without garbage collecting first?
In my experience, in a true OutOfMemoryError where your application is simply demanding more heap space than is available, this GC is a fool's errand and the final heap dump will be the size of the maximum heap.
When the heap dump is much smaller, that means the system was not truly out of memory, but was perhaps under memory pressure. For example, there is the java.lang.OutOfMemoryError: GC overhead limit exceeded error, which means that the JVM may have been able to free enough memory to service a new allocation request, but it had to spend too much time collecting garbage.
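If you want a dump that keeps unreachable objects (so its size is closer to the heap the process is actually holding), you can skip the live-objects-only collection: with jmap, omit the live suboption; programmatically, the HotSpot diagnostic MXBean exposes the same choice. A sketch (the output path is illustrative):

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class FullDump {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean diag = ManagementFactory.getPlatformMXBean(
                    HotSpotDiagnosticMXBean.class);
            // live=false writes all objects, reachable or not,
            // without forcing a collection first.
            diag.dumpHeap("/tmp/full_heap.hprof", false);
        }
    }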
It's also possible that you don't have a memory problem. What makes you think you do? You didn't mention anything about heap usage or an OutOfMemoryError. You've only mentioned the JVM's memory footprint on the operating system.
In my experience, a heap dump much smaller than the real memory used can be due to a leak in JNI code.
Even if you don't use any native code directly, certain libraries use it to speed things up.
In our case, it was a Deflater and an Inflater that were not properly end()ed.
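The fix is to release such native resources explicitly: Deflater's off-heap zlib buffers are freed by end(), not promptly by the garbage collector. A sketch of the safe pattern (helper class and method names are illustrative):

    import java.io.ByteArrayOutputStream;
    import java.util.zip.Deflater;

    public class CompressSafely {
        public static byte[] compress(byte[] input) {
            Deflater deflater = new Deflater();
            try {
                deflater.setInput(input);
                deflater.finish();
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buffer = new byte[4096];
                while (!deflater.finished()) {
                    int n = deflater.deflate(buffer);
                    out.write(buffer, 0, n);
                }
                return out.toByteArray();
            } finally {
                deflater.end(); // releases the native (off-heap) zlib memory
            }
        }
    }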

How much max heap size will be supported by a Windows 8 machine with 16GB of RAM?

I have set the heap size to -Xmx512m -XX:MaxPermSize=14336m but I'm still getting an out of memory error.
Could you please help me understand how much heap memory we can set on a Windows 8 machine (RAM size 16GB)?
As discussed in this question here, the MaxPermSize argument is the
maximum size for the permanent generation heap, a heap that holds the byte code of classes and is kept separate from the object heap containing the actual instances
while the Xmx flag controls the memory where instances are created. Now, I don't understand your application, but I personally do not believe you need ~14GB of byte code cached in it. Try changing your max heap size (Xmx) rather than the perm size and see how that turns out, seeing as the most likely cause is that you are creating too many instances for the memory you have allocated.
As much as your physical RAM + virtual memory (the page file).
Do not forget that all of this memory is shared among your processes, so you should close all unnecessary applications when your memory-hungry processes are working.
To see your virtual memory information:
System -> Advanced System Settings -> Advanced Tab
See the following Microsoft help page for more information:
Change the size of virtual memory
You seem to misunderstand what the heap and the permgen space really do:
Perm space vs Heap space
You need to increase the heap size, e.g. -Xmx1g. MaxPermSize should be limited to maybe 300m; the value you used, 14G, is simply too high and IMHO of no practical use.
Watch out for your Java VM: a 32-bit JVM can only address around 1.2G in total (the sum of heap and perm gen), while 64-bit Java can address much larger heaps, e.g. 8g.
If you look at Sonar's documentation, http://docs.codehaus.org/display/SONAR/Analyzing+with+SonarQube+Runner, you will see they mention a 128m perm size.

Eclipse release heap back to system

I'm using Eclipse 3.6 with the latest Sun Java 6 on Linux (64-bit) with a large number of large projects. In some special circumstances (SVN updates, for example) Eclipse needs up to 1GB of heap. But most of the time it only needs 350MB. When I enable the heap status panel, I see this most of the time:
350M of 878M
I start Eclipse with these settings: -Xms128m -Xmx1024m
So most of the time lots of MB are simply wasted, used only rarely when memory usage peaks for a short time. I don't like that at all and I want Eclipse to release the memory back to the system, so I can use it for other programs.
When Eclipse needs more memory and there is not enough free RAM, Linux can swap out other running programs; I can live with that. I heard there is a -XX:MaxHeapFreeRatio option, but I never figured out what values I have to use to make it work. No value I tried ever made a difference.
So how can I tell Eclipse (Or Java) to release unused heap?
Found a solution. I switched Java to use the G1 garbage collector, and now the HeapFreeRatio parameters work as intended. So I use these options in eclipse.ini:
-XX:+UnlockExperimentalVMOptions
-XX:+UseG1GC
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=25
Now when Eclipse eats up more than 1GB of RAM for a complicated operation and drops back to 300MB after garbage collection, the memory is actually released back to the operating system.
You can go to Preferences -> General and check Show heap status. This activates a small view of your heap in the corner of the Eclipse window, like the "350M of 878M" display mentioned above.
If you click the trash bin icon next to it, Eclipse will try to run garbage collection and return the memory.
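What that button does is essentially request a collection. The equivalent in code, printing the same figures the status bar shows, might look like this sketch (class name invented; note that System.gc() is only a hint):

    public class HeapStatusDemo {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.gc(); // a hint only; the JVM may ignore it
            long totalMb = rt.totalMemory() >> 20;
            long usedMb = (rt.totalMemory() - rt.freeMemory()) >> 20;
            // Same shape as the status bar display, e.g. "350M of 878M"
            System.out.println(usedMb + "M of " + totalMb + "M");
        }
    }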
Java's heap is nothing more than a big data structure managed within the JVM process's heap space. The two heaps are logically separate entities even though they occupy the same memory.
The JVM is at the mercy of the host system's implementation of malloc(), which allocates memory from the system using brk(). On Linux systems (Solaris, too), memory allocated for the process heap is almost never returned, largely because it becomes fragmented and the heap must be contiguous. This means that memory allocated to the process will increase monotonically, and the only way to keep the size down is not to allocate it in the first place.
-Xms and -Xmx tell the JVM how to size the Java heap ahead of time, which causes it to allocate process memory. Java can garbage collect until the sun burns out, but that cleanup is internal to the JVM, and the process memory backing it doesn't get returned.
Elaboration from comment below:
The standard way for a program written in C (notably the JVM running Eclipse for you) to allocate memory is to call malloc(3), which uses the OS-provided mechanism for allocating memory to the process and then manages individual allocations within that space. The details of how malloc() and free() work are implementation-specific.
On most flavors of Unix, a process gets exactly one data segment, which is a contiguous region of memory that has pointers to the start and end. The process can adjust the size of this segment by calling brk(2) and increasing the end pointer to allocate more memory or decreasing it to return it to the system. Only the end can be adjusted. This means that if your implementation of malloc() enlarges the data segment, the corresponding implementation of free() can't shrink it unless it determines that there's space at the end that's not being used. In practice, a humongous chunk of memory you allocated with malloc() rarely winds up at the very end of the data segment when you free() it, which is why processes tend to grow monotonically.
