Does Direct Memory affect compressed pointers in Java?

I am aware that once Java heap size grows past 32GB, we lose the benefits of compressed pointers and may have less effective memory (compared to 32GB) until the total heap reaches ~48GB.
Does Direct Memory usage affect the determination to use compressed pointers or not? For example, will I still be able to use them with settings like -Xmx28G -XX:MaxDirectMemorySize=12G?

I am aware that once Java heap size grows past 32GB, we lose the benefits of compressed pointers and may have less effective memory (compared to 32GB) until the total heap reaches ~48GB.
You can increase the object alignment to 16 bytes (-XX:ObjectAlignmentInBytes=16), allowing you to use CompressedOops with heaps up to 64 GB.
Does Direct Memory usage affect the determination to use compressed pointers or not?
Direct memory is just native memory, like thread stacks, GUI components, shared libraries, etc. It is not part of the heap, nor is the metaspace.
For example, will I still be able to use them with settings like -Xmx28G -XX:MaxDirectMemorySize=12G?
You can have -XX:MaxDirectMemorySize=1024G if you like; it is not part of the heap, so it has no bearing on whether compressed oops are used.
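As a rough illustration (the class name and the 64 MB figure are made up for the example), direct memory is allocated off-heap via ByteBuffer.allocateDirect and is charged against -XX:MaxDirectMemorySize rather than -Xmx, so it cannot push the Java heap past the CompressedOops threshold:

import java.nio.ByteBuffer;

// Run with e.g. -Xmx28G -XX:MaxDirectMemorySize=12G (any values work for the demo).
// The direct allocation below counts against MaxDirectMemorySize, not -Xmx,
// so it has no effect on whether the heap fits under the CompressedOops limit.
public class DirectMemoryDemo {
    public static void main(String[] args) {
        long heapBefore = Runtime.getRuntime().totalMemory();
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB off-heap
        long heapAfter = Runtime.getRuntime().totalMemory();
        System.out.println("direct buffer capacity: " + direct.capacity() + " bytes");
        System.out.println("heap growth: " + (heapAfter - heapBefore) + " bytes (close to 0)");
    }
}

To double-check, running java -XX:+PrintFlagsFinal -version with your chosen -Xmx shows the value of UseCompressedOops the JVM actually settled on.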

Related

How does the heap manager in java or C++ keep track of all the memory locations used by the threads or processes?

I wanted to understand what data structures the heap managers in Java (or the OS/C runtime, in the case of C or C++) use to keep track of the memory locations used by threads and processes. One way is to use a map from objects to their memory addresses and a reverse map from starting address to the size of the object in memory.
But that won't be able to serve new memory requests in O(1) time. Is there a better data structure for this?
Note that unmanaged languages are going to be allocating/freeing memory through system calls, generally not managing it themselves. Still, regardless of the level of abstraction (from the OS to the runtime), something has to deal with this:
One method is called buddy block allocation, described well with an example on Wikipedia. It essentially keeps track of the usage of blocks of memory of varying sizes (typically powers of 2). This can be done with a number of arrays and clever indexing, or perhaps more intuitively with a binary tree, each node telling whether a certain block is free and all nodes on a level representing blocks of the same size.
This suffers from internal fragmentation; as things come and go, you might end up with your data scattered rather than efficiently consolidated, making it harder to fit large data in. This could be countered by a more complicated, dynamic system, but buddy blocks have the advantage of simplicity.
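For concreteness, here is a minimal buddy-allocator sketch (the class and method names are made up for the example; a real allocator would also record each block's size so free() would not need it passed in):

import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

// Minimal buddy allocator: one free list (of block offsets) per power-of-two size.
class BuddyAllocator {
    private final int minOrder;                       // smallest block = 1 << minOrder bytes
    private final int maxOrder;                       // whole region   = 1 << maxOrder bytes
    private final List<TreeSet<Integer>> freeLists;   // free block offsets, indexed by order

    BuddyAllocator(int minOrder, int maxOrder) {
        this.minOrder = minOrder;
        this.maxOrder = maxOrder;
        this.freeLists = new ArrayList<>();
        for (int i = 0; i <= maxOrder; i++) freeLists.add(new TreeSet<>());
        freeLists.get(maxOrder).add(0);               // start with one big free block at offset 0
    }

    // Returns the offset of a block big enough for `size` bytes, or -1 if out of memory.
    int allocate(int size) {
        int order = orderFor(size);
        int o = order;
        while (o <= maxOrder && freeLists.get(o).isEmpty()) o++;  // smallest free block that fits
        if (o > maxOrder) return -1;
        int offset = freeLists.get(o).pollFirst();
        while (o > order) {                           // split down to the requested size
            o--;
            freeLists.get(o).add(offset + (1 << o));  // upper half becomes a free buddy
        }
        return offset;
    }

    // Frees a block previously returned by allocate(size), merging buddies where possible.
    void free(int offset, int size) {
        int order = orderFor(size);
        while (order < maxOrder) {
            int buddy = offset ^ (1 << order);        // buddy of a 2^order block at `offset`
            if (!freeLists.get(order).remove(buddy)) break; // buddy still in use: stop merging
            offset = Math.min(offset, buddy);         // merged block starts at the lower offset
            order++;
        }
        freeLists.get(order).add(offset);
    }

    private int orderFor(int size) {
        int order = minOrder;
        while ((1 << order) < size) order++;          // round the request up to a power of two
        return order;
    }
}

The XOR trick works because a block of size 2^k at offset off always has its buddy at off ^ 2^k when the managed region starts at offset 0.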
The OS keeps track of the process's memory allocation in an overall view - 4KB pages or bigger "lumps" are stored in some form of list.
In the typical Windows implementation (Microsoft's C runtime library), at least in recent versions, all memory allocations are done through the HeapAlloc() call, so every single heap allocation goes through the OS. Whether the OS actually tracks every single allocation or just keeps a map of "what is free, what is used" is another matter. It is my understanding that the heap management code has no list of "current allocations", just a list of freed memory lumps.
In Linux/Unix, the C library will typically avoid calling the OS for every little allocation, and instead uses a large lump of memory, and splits that up into smaller pieces per allocation. Again, no tracking of allocated memory inside the heap management.
This is done at a process level. I'm not aware of an operating system that differentiates memory allocations on a per-thread level (other than TLS - thread local storage, but that is typically a very small region, outside of the typical heap code management).
So, in summary: the OS and/or C/C++ runtime doesn't actually keep a list of all the used allocations - it keeps a list of "freed" memory [and when another lump is freed, it will typically "join" the previous and next consecutive free lumps to reduce fragmentation]. When the allocator is first started, it's given a large lump, which is recorded as a single free block. When a request is made, the lump is split and the remainder goes back on the free list. When that lump is not sufficient, another big lump is carved out using the underlying OS allocations.
There is a small amount of metadata stored with each allocation, which contains things like "how much memory is allocated", and this metadata is used when freeing the memory. In the typical case, this data is stored immediately before the allocated memory. But there is no way to find the allocation metadata without knowing about the allocations in some other way.
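A toy sketch of that "metadata just before the block" idea, simulated over a ByteBuffer purely for illustration (no free list and no bounds checking, so it is not a usable allocator):

import java.nio.ByteBuffer;

// Toy illustration of the size header stored immediately before each allocated block.
class HeaderedHeap {
    private final ByteBuffer heap;
    private int bump = 0;                        // next unused offset

    HeaderedHeap(int capacity) {
        heap = ByteBuffer.allocate(capacity);
    }

    // Returns the offset of `size` usable bytes; a 4-byte size header sits right before it.
    int malloc(int size) {
        int headerAt = bump;
        heap.putInt(headerAt, size);             // metadata stored immediately before the block
        bump += 4 + size;
        return headerAt + 4;                     // the caller only ever sees the payload offset
    }

    // free() recovers the size from the hidden header, as a real allocator would.
    int free(int offset) {
        int size = heap.getInt(offset - 4);      // look 4 bytes before the payload
        // a real allocator would now put [offset - 4, offset + size) onto a free list
        return size;
    }
}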
There is no automatic garbage collection in C++. You need to call free/delete for malloc/new heap memory allocations. That's where tools like valgrind (to check for memory leaks) come in handy. There are also concepts like auto_ptr, which automatically frees heap memory, that you can refer to.

Why is the default size of PermGen so small?

What would be the purpose of limiting the size of the Permgen space on a Java JVM? Why not always set it equal to the max heap size? Why does Java default to such a small number of 64MB? Are they trying to force people to notice permgen issues in their code by doing this?
If my app uses 85MB of permgen, then it might be safe to set it to 96MB, but why set it so small if it's really just part of the main heap? Wouldn't it be efficient to allow the JVM to use as much PermGen as the heap allows?
The PermGen is set to disappear in JDK8.
What would be the purpose of limiting the size of the Permgen space on a Java JVM?
Not exhausting resources.
Why not always set it equal to the max heap size?
The PermGen is not part of the Java heap. Besides, even if it was, it wouldn't be of much help to the application to fill the heap with class metadata and constant Strings, since you'd then get "OutOfMemoryError: Java heap space" errors instead.
Conceptually to the programmer, you could argue that a "Permanent Generation" is largely pointless. If you need to load a class or other "permanent" data and there is memory space left, then in principle you may as well just load it somewhere and not care about calling the aggregate of these items a "generation" at all.
However, the rationale is probably more that:
there is potentially a benefit (e.g. from a processor cache point of view) from having all code/class metadata near together in memory space, and to guarantee this it is easier to allocate fixed sized area(s);
similarly, memory space where code/class metadata is stored potentially has certain "special" properties (notably, you don't want it to get paged out to disk if you can help it), and the system may not be able to set such properties on memory in a very granular way, so it is more practical to have all "special" objects together in one (or a small number of) contiguous blocks of memory;
having permanent objects all together helps avoid fragmenting the remaining memory space and again, the most practical way to do this is to allocate one contiguous block of memory of fixed size from the outset.
So as I see things, most of the time the reason for allocating a permanent "generation" is more for practical implementation reasons than because the programmer really cares terribly much.
On the other hand, the situation isn't usually terrible for the programmer either: the amount of permanent generation needed is usually predictable, so that you should be able to allocate the required amount with decent leeway. So if you find you are unexpectedly exceeding the allocation, this may well be a signal that "something serious is wrong".
N.B. It is probably the case that some of the issues that the PermGen originally was designed to solve are not such big issues on modern 64-bit processors with larger processor caches. If it is removed in future releases of Java, this is likely a sign that the JVM designers feel it has now "served its purpose".
PermGen is where class data and other static stuff (like string literals) are allocated.
You'd rather allocate memory to the Java heap for your application data (Xms and Xmx), where young (short-lived) and tenured objects go (the latter when the JVM realizes they need to stay around longer).
So the historic PermGen 64MB default may be arbitrary, but having you explicitly set it lets you know (and control) how much static data your application is causing the JVM to store.
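As a hedged illustration (the class name is invented, and this only applies to JVMs old enough to still have a PermGen: interned strings moved out of it in JDK 7 and class metadata in JDK 8), running something like the following with a small -XX:MaxPermSize shows how PermGen exhaustion surfaces separately from heap exhaustion:

import java.util.ArrayList;
import java.util.List;

// Run with e.g. -XX:MaxPermSize=16m on a pre-JDK-7 HotSpot VM; the unique interned
// strings pile up in the PermGen and eventually trigger
// "OutOfMemoryError: PermGen space" instead of an ordinary heap OOM.
public class PermGenFiller {
    public static void main(String[] args) {
        List<String> keep = new ArrayList<String>(); // hold references so nothing can be collected
        for (long i = 0; ; i++) {
            keep.add(String.valueOf(i).intern());    // each unique interned string consumes PermGen
        }
    }
}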

How to calculate heap fragmenation statistics using heap dumps

Does anyone know if there is any tool out there to calculate heap fragmentation using heap dumps?
I have a tool to visualize the heap (http://bobah.net/d4d/tools/cpp-heapmap), but it consumes a list of {op;address;size} triplets, not a raw heap dump. You can use it to visually estimate how bad the heap is; in some cases that's enough. Its malloc interceptor obviously would not fit a Java app, but the UI does not care where the numbers come from and will display them from any source.
But let's assume we are able to build a heap map from the dump (I'm sure someone will answer here how exactly). The main part of the problem is to calculate the fragmentation curve F(s), where s is the target allocation size and F(s) is the ratio of (total_free_space/s) to the number of blocks of size s which can actually be allocated given the particular heap layout.
Once F(s) is built, one can integrate it over the interval from 1 to S (the total heap size) to get a single number representing heap fragmentation or usage efficiency.
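A minimal sketch of that F(s) computation, assuming the free-block sizes have already been extracted from the dump (the class and method names are just for illustration):

import java.util.List;

// Assumes the sizes of all free blocks have already been pulled out of the dump
// (doing that is tool-specific and not shown here).
class FragmentationCurve {

    // F(s) = (total_free_space / s) divided by the number of s-sized blocks the
    // current layout can actually satisfy. A value of 1.0 means no fragmentation
    // for size s; larger values mean the free space is more fragmented relative to s.
    static double f(List<Long> freeBlockSizes, long s) {
        long totalFree = 0;
        long allocatable = 0;
        for (long block : freeBlockSizes) {
            totalFree += block;
            allocatable += block / s;            // how many s-sized requests this block can hold
        }
        if (allocatable == 0) return Double.POSITIVE_INFINITY; // nothing of size s fits at all
        return ((double) totalFree / s) / allocatable;
    }

    // Crude single number: sum F(s) for s = 1..S, the "integral" mentioned above.
    static double summary(List<Long> freeBlockSizes, long S) {
        double sum = 0;
        for (long s = 1; s <= S; s++) sum += f(freeBlockSizes, s);
        return sum;
    }
}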
The IBM Garbage Collection and Memory Analyzer is excellent for that sort of thing and is free.

How to calculate retained heap size and shallow heap size?

I am new to Java and have just gone through heap dump analysis using Eclipse's MAT, so I wanted to understand the points below:
a) What are shallow and retained heap size, and how are they calculated? It would be great if you could provide an example.
b) For performance, most people advised me to keep the minimum and maximum heap sizes the same. Is that OK, or does it vary from application to application?
Shallow heap is the memory consumed by one object. An object needs 32 or 64 bits (depending on the OS architecture) per reference, 4 bytes per Integer, 8 bytes per Long, etc. Depending on the heap dump format the size may be adjusted
Retained heap of X is the sum of shallow sizes of all objects in the retained set of X, i.e. memory kept alive by X.
Refer to this link.
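To make those definitions concrete, here is a tiny hypothetical object graph (all class names are invented for the example):

import java.util.ArrayList;
import java.util.List;

// Hypothetical classes, only to illustrate shallow vs. retained size.
class Customer { String name; }               // also referenced from a global registry elsewhere
class LineItem { long productId; int quantity; }

class Order {
    List<LineItem> items = new ArrayList<>(); // reachable *only* through this Order
    Customer customer;                        // reachable through other paths too
}
// Shallow size of an Order: its object header plus two reference fields, nothing else.
// Retained size of an Order: its shallow size
//   + the shallow sizes of the ArrayList and its backing array
//   + the shallow size of every LineItem (they become garbage if the Order does),
//   but NOT the Customer, because it remains reachable via the registry.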
JVM sizing and tuning is not an exact science, so taking a blanket decision to set min and max size the same is not necessarily the best config to choose.
I find this to be an excellent explanation of the sorts of decisions that you need to make. In addressing your question, it says :
Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the virtual machine. However, the virtual machine is then unable to compensate if you make a poor choice.
If you set the min and max the same, but have an inappropriately sized Eden space, you may still end up with poor garbage collection performance.
The only way to make decisions about sizing is to run your application under a variety of workloads and see how the GC performs.

Why is the maximum size of the Java heap fixed?

It is not possible to increase the maximum size of Java's heap after the VM has started. What are the technical reasons for this? Do the garbage collection algorithms depend on having a fixed amount of memory to work with? Or is it for security reasons, to prevent a Java application from DOS'ing other applications on the system by consuming all available memory?
In Sun's JVM, last I knew, the entire heap must be allocated in a contiguous address space. I imagine that for large heap values, it's pretty hard to add to your address space after startup while ensuring it stays contiguous. You probably need to get it at startup, or not at all. Thus, it is fixed.
Even if it isn't all used immediately, the address space for the entire heap is reserved at startup. If it cannot reserve a large enough contiguous block of address space for the value of -Xmx that you pass it, it will fail to start. This is why it's tough to allocate >1.4GB heaps on 32-bit Windows - because it's hard to find contiguous address space in that size or larger, since some DLLs like to load in certain places, fragmenting the address space. This isn't really an issue when you go 64-bit, since there is so much more address space.
This is almost certainly for performance reasons. I could not find a terrific link detailing this further, but here is a pretty good quote from Peter Kessler (full link - be sure to read the comments) that I found when searching. I believe he works on the JVM at Sun.
The reason we need a contiguous memory region for the heap is that we have a bunch of side data structures that are indexed by (scaled) offsets from the start of the heap. For example, we track object reference updates with a "card mark array" that has one byte for each 512 bytes of heap. When we store a reference in the heap we have to mark the corresponding byte in the card mark array. We right shift the destination address of the store and use that to index the card mark array. Fun addressing arithmetic games you can't do in Java that you get to (have to :-) play in C++.
This was in 2004 - I'm not sure what's changed since then, but I am pretty sure it still holds. If you use a tool like Process Explorer, you can see that the virtual size (add the virtual size and private size memory columns) of the Java application includes the total heap size (plus other required space, no doubt) from the point of startup, even though the memory 'used' by the process will be nowhere near that until the heap starts to fill up...
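A rough Java-flavoured sketch of the card-marking arithmetic described in that quote (HotSpot does this in C++ on raw addresses; the 512-byte card size matches the quote, everything else here is illustrative):

// Assumes a contiguous heap starting at heapBase, exactly the property the quote
// says the side data structures depend on.
class CardTable {
    static final int CARD_SHIFT = 9;             // 2^9 = 512 bytes of heap per card

    final long heapBase;                         // start address of the contiguous heap
    final byte[] cards;                          // one byte per 512-byte card

    CardTable(long heapBase, long heapSize) {
        this.heapBase = heapBase;
        this.cards = new byte[(int) (heapSize >>> CARD_SHIFT) + 1];
    }

    // Called (conceptually) on every reference store into the heap.
    void markCardFor(long storeAddress) {
        int index = (int) ((storeAddress - heapBase) >>> CARD_SHIFT); // scaled offset from the heap start
        cards[index] = 1;                        // dirty: the GC must rescan this 512-byte region
    }
}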
Historically there has been a reason for this limitation, which was to stop Applets in the browser from eating up all of the user's memory. The Microsoft VM, which never had such a limitation, actually allowed this, which could lead to a sort of denial-of-service attack against the user's computer. It was only a year ago that Sun introduced, in the 1.6.0 Update 10 VM, a way to let applets specify how much memory they want (limited to a certain fixed share of the physical memory) instead of always limiting them to 64MB even on computers that have 8GB or more available.
Now that the JVM has evolved, it should have been possible to get rid of this limitation when the VM is not running inside a browser, but Sun obviously never considered it a high-priority issue, even though numerous bug reports have been filed asking to finally allow the heap to grow.
I think the short, snarky, answer is because Sun hasn't found it worth the time and cost to develop.
The most compelling use case for such a feature is on the desktop, IMO, and Java has always been a disaster on the desktop when it comes to the mechanics of launching the JVM. I suspect that those who think the most about those issues tend to focus on the server side and view any other details best left to native wrappers. It is an unfortunate decision, but it should just be one of the decision points when deciding on the right platform for an application.
My gut feel is that it has to do with memory management with respect to the other applications running on the operating system.
If you set the maximum heap size to, for example, the amount of RAM on the box, you effectively let the VM decide how much memory it requires (up to this limit). The problem is that the VM could then cripple the machine it is running on, because it will take over all the memory on the box before it decides that it needs to garbage collect.
When you specify max heap size, what you're saying to the VM is, you are allowed to use this amount of memory before you need to start garbage collecting. You cannot have more because if you take more then the other applications running on the box will slow down and you will start swapping to the disk if you use more than this.
Also be aware that there are two values with respect to memory: the "current heap size" and the "max heap size". The current heap size is how much memory the heap is currently using; if the VM requires more, it can resize the heap, but never above the value of the maximum heap size.
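A quick way to see those two values from inside a running program (standard Runtime API, nothing version-specific):

public class HeapSizes {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long current = rt.totalMemory();   // heap currently committed by the VM (can still grow)
        long max     = rt.maxMemory();     // the -Xmx ceiling: the heap may grow up to this, never past it
        long free    = rt.freeMemory();    // unused portion of the current heap
        System.out.printf("current=%d MB, max=%d MB, used=%d MB%n",
                current >> 20, max >> 20, (current - free) >> 20);
    }
}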
From IBM's performance tuning tips (so may not be directly applicable to Sun's VMs)
The Java heap parameters influence the behavior of garbage collection. Increasing the heap size supports more object creation. Because a large heap takes longer to fill, the application runs longer before a garbage collection occurs. However, a larger heap also takes longer to compact and causes garbage collection to take longer.
The JVM has thresholds it uses to manage the JVM's storage. When the thresholds are reached, the garbage collector gets invoked to free up unused storage. Therefore, garbage collection can cause significant degradation of Java performance. Before changing the initial and maximum heap sizes, you should consider the following information:
In the majority of cases you should set the maximum JVM heap size to a value higher than the initial JVM heap size. This allows the JVM to operate efficiently during normal, steady-state periods within the confines of the initial heap, but also to operate effectively during periods of high transaction volume by expanding the heap up to the maximum JVM heap size. In some rare cases where absolute optimal performance is required you might want to specify the same value for both the initial and maximum heap size. This will eliminate some overhead that occurs when the JVM needs to expand or contract the size of the JVM heap. Make sure the region is large enough to hold the specified JVM heap.
Beware of making the Initial Heap Size too large. While a large heap size initially improves performance by delaying garbage collection, a large heap size ultimately affects response time when garbage collection eventually kicks in because the collection process takes more time.
So, I guess the reason that you can't change the value at runtime is because it may not help: either you have enough space in your heap or you don't. Once you run out, a GC cycle will be triggered. If that doesn't free up the space, you're stuffed anyway. You'd need to catch the OutOfMemoryError, increase the heap size, and then retry your calculation, hoping that this time you have enough memory.
In general the VM won't use the maximum heap size unless you need it, so if you think you might need to expand the memory at runtime, you could just specify a large maximum heap size.
I admit that's all a bit unsatisfying, and seems a bit lazy, since I can imagine a reasonable garbage collection strategy which would increase the heap size when GC fails to free enough space. Whether my imagination translates to a high performance GC implementation is another matter though ;)
