Any effect of MaxDirectMemorySize and sun.misc.unsafe - java

This question arises mostly from my misunderstanding of native memory in the JVM, and it is probably a naive one. Pointers to good, easy-to-understand documentation in that direction would be welcome.
I know that the sun.misc.Unsafe class is never recommended, as the word "unsafe" itself implies. I also understand that it will be deprecated.
My understanding is that MaxDirectMemorySize limits the amount of native memory that can be used by the likes of NIO direct byte buffers. So is this memory size limit also applied to memory regions that are created by the Unsafe class?
Another reason for this question: thread stack growth is native memory that is not under the control of the JVM. Are there other ways, from within Java code, to make such native memory grow that are not in the hands or control of the VM?
These are just some ponderings aimed at a better understanding of the JVM, that is all.

The maximum is enforced by maintaining a count of how much direct memory has been reserved and comparing it with the configured maximum before each allocation. You can find how this parameter is used in the JDK source.
Memory allocated through Unsafe bypasses that bookkeeping; unless you do this counting yourself as well, no maximum is being enforced.
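A minimal sketch of the difference (class name and sizes are mine, not from the original): run it with -XX:MaxDirectMemorySize=64m and the direct-buffer loop fails once the counted reservations reach the limit, whereas sun.misc.Unsafe.allocateMemory goes straight to native allocation and is never counted.

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Run with: java -XX:MaxDirectMemorySize=64m DirectLimitDemo
public class DirectLimitDemo {
    public static void main(String[] args) {
        List<ByteBuffer> keep = new ArrayList<>(); // hold references so nothing is freed
        try {
            while (true) {
                // Each direct buffer reservation is counted against the limit.
                keep.add(ByteBuffer.allocateDirect(1024 * 1024)); // 1 MB each
            }
        } catch (OutOfMemoryError e) {
            // HotSpot reports this as "Direct buffer memory" once ~64 MB is reserved.
            System.out.println("Limit hit after " + keep.size() + " MB: " + e);
        }
        // By contrast, sun.misc.Unsafe.allocateMemory(size) is NOT counted
        // against -XX:MaxDirectMemorySize; no limit is enforced for it.
    }
}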
the thread stack growth is a native memory that is not in the control of the JVM.
The JVM doesn't enforce this limit itself; that is done by the OS. The JVM just sets the size when the stack is created. Cf. -Xss.
It's important to realise the JVM is a C program. It doesn't do anything magical and under the covers does the same things a C program would do.
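To see that limit in action, here is a small probe (the class name is mine): run the same unbounded recursion with different -Xss values and compare how deep it gets before the OS-enforced stack limit is hit.

public class XssProbe {
    static long depth = 0;

    static void recurse() {
        depth++;
        recurse(); // no base case: recurse until the stack runs out
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // Try: java -Xss256k XssProbe   versus   java -Xss8m XssProbe
            System.out.println("StackOverflowError at depth " + depth);
        }
    }
}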

Related

Why does JVM need a maximum memory capacity? [duplicate]

In the spirit of question Java: Why does MaxPermSize exist?, I'd like to ask why the Oracle JVM uses a fixed upper limit for the size of its memory allocation pool.
The default is 1/4 of your physical RAM (with upper and lower limits); as a consequence, if you have a memory-hungry application you have to manually change the limit (parameter -Xmx), or your app will perform poorly, possibly even crash with an OutOfMemoryError.
Why does this fixed limit even exist? Why does the JVM not allocate memory as needed, like native programs do on most operating systems?
This would solve a whole class of common problems with Java software (just Google to see how many hints there are on the net on solving problems by setting -Xmx).
Edit:
Some answers point out that this will protect the rest of the system from a Java program with a run-away memory leak; without the limit this would bring the whole system down by exhausting all memory. This is true. However, it is equally true for any other program, and modern OSes already let you limit the maximum memory for a program (Linux ulimit, Windows "Job Objects"). So this does not really answer the question, which is "Why does the JVM do it differently from most other programs / runtime environments?".
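For reference, the configured ceiling is easy to observe from inside a program (a trivial sketch, class name mine):

public class MaxHeapProbe {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        // With no -Xmx given, HotSpot typically defaults to roughly 1/4 of physical RAM.
        System.out.printf("Max heap (-Xmx equivalent): %d MB%n", maxBytes / (1024 * 1024));
    }
}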
Why does this fixed limit even exist? Why does the JVM not allocate memory as needed, like native programs do on most operating systems?
The reason is NOT that the GC needs to know beforehand what the maximum heap size can be. The JVM is clearly capable of expanding its heap ... up to the maximum ... and I'm sure it would be a relatively small change to remove that maximum. (After all, other Java implementations do this.) And it would equally be possible to have a simple way to say "use as much memory as you like" to the JVM.
I'm sure that the real reason is to protect the host operating system against the effects of faulty Java applications using all available memory. Running with an unbounded heap is potentially dangerous.
Basically, many operating systems (e.g. Windows, Linux) suffer serious performance degradation if some application tries to use all available memory. On Linux for example, the system may thrash badly, resulting in everything on the system running incredibly slowly. In the worst case, the system won't be able to start new processes, and existing processes may start crashing when the operating system refuses their (legitimate) requests for more memory. Often, the only option is to reboot.
If the JVM ran with an unbounded heap by default, any time someone ran a Java program with a storage leak ... or that simply tried to use too much memory ... they would risk bringing down the entire operating system.
In summary, having a default heap bound is a good thing because:
it protects the health of your system,
it encourages developers / users to think about memory usage by "hungry" applications, and
it potentially allows GC optimizations. (As suggested by other answers: it is plausible, but I cannot confirm this.)
EDIT
In response to the comments:
It doesn't really matter why Sun's JVMs live within a bounded heap, where other applications don't. They do, and advantages of doing so are (IMO) clear. Perhaps a more interesting question is why other managed languages don't put a bound on their heaps by default.
The -Xmx and ulimit approaches are qualitatively different. In the former case, the JVM has full knowledge of the limits it is running under and gets a chance to manage its memory usage accordingly. In the latter case, the first thing a typical C application knows about it is when a malloc call fails. The typical response is to exit with an error code (if the program checks the malloc result), or die with a segmentation fault. OK, a C application could in theory keep track of how much memory it has used, and try to respond to an impending memory crisis. But it would be hard work.
The other thing that is different about Java and C/C++ applications is that the former tend to be both more complicated and longer running. In practice, this means that Java applications are more likely to suffer from slow leaks. In the C/C++ case, the fact that memory management is harder means that developers don't attempt to build single applications of that complexity. Rather, they are more likely to build (say) a complex service by having a listener process fork off child processes to do stuff ... and then exit. This naturally mitigates the effect of memory leaks in the child process.
The idea of a JVM responding "adaptively" to requests from the OS to give memory back is interesting. But there is a BIG problem. In order to give a segment of memory back, the JVM first has to clear out any reachable objects in the segment. Typically that means running the garbage collector. But running the garbage collector is the last thing you want to do if the system is in a memory crisis ... because it is pretty much guaranteed to generate a burst of virtual memory paging.
Hm, I'll try summarizing the answers so far.
There is no technical reason why the JVM needs to have a hard limit for its heap size. It could have been implemented without one, and in fact many other dynamic languages do not have this.
Therefore, giving the JVM a heap size limit was simply a design decision by the implementors. Second-guessing why this was done is a bit difficult, and there may not be a single reason. The most likely reason is that it helps protect a system from a Java program with a memory leak, which might otherwise exhaust all RAM and cause other apps to crash or the system to thrash.
Sun could have omitted the feature and simply told people to use the OS-native resource limiting mechanisms, but they probably wanted to always have a limit, so they implemented it themselves.
At any rate, the JVM needs to be aware of any such limit (to adapt its GC strategy), so using an OS-native mechanism would not have saved much programming effort.
Also, there is one reason why such a built-in limit is more important for the JVM than for a "normal" program without GC (such as a C/C++ program):
Unlike a program with manual memory management, a program using GC does not really have a well-defined memory requirement, even with fixed input data. It only has a minimum requirement, i.e. the sum of the sizes of all objects that are actually live (reachable) at a given point in time. However, in practice a program will need additional memory to hold dead, but not yet GCed objects, because the GC cannot collect every object right away, as that would cause too much GC overhead. So GC only kicks in from time to time, and therefore some "breathing room" is required on the heap, where dead objects can await the GC.
This means that the memory required for a program using GC is really a compromise between saving memory and having good throughput (by letting the GC run less often). So in some cases it may make sense to set the heap limit lower than what the JVM would use if it could, to save RAM at the expense of performance. To do this, there needs to be a way to set a heap limit.
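The "breathing room" described above can be watched directly (a minimal sketch, names mine): the gap between used memory and the -Xmx ceiling is exactly where dead-but-uncollected objects accumulate between GC cycles.

public class HeapBreathingRoom {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory(); // live + not-yet-collected objects
        // totalMemory() is what the JVM has committed so far; maxMemory() is the -Xmx ceiling.
        System.out.printf("used=%d MB, committed=%d MB, max=%d MB%n",
                used >> 20, rt.totalMemory() >> 20, rt.maxMemory() >> 20);
    }
}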
I think part of it has to do with the implementation of the Garbage Collector (GC). The GC is typically lazy, meaning it will only start really trying to reclaim memory internally when the heap is at its maximum size. If you didn't set an upper limit, the runtime would happily continue to inflate until it used every available bit of memory on your system.
That's because from the application's perspective, it's more performant to take more resources than exert effort to use the resources you already have to full utilization. This tends to make sense for a lot of (if not most) uses of Java, which is a server setting where the application is literally the only thing that matters on the server. It tends to be slightly less ideal when you're trying to implement a client in Java, which will run amongst dozens of other applications at the same time.
Remember that with native programs, the programmer typically requests resources but also explicitly cleans them up. That isn't typically true with environments that do automatic memory management.
It is due to the design of the JVM. Other JVMs (like the one from Microsoft and some IBM ones) can use all the memory available in the system if needed, without an arbitrary limit.
I believe it allows for GC-optimizations.
I think that the upper limit for memory is linked to the fact that the JVM is a VM.
Just as any physical machine has a given (fixed) amount of RAM, so the VM has one.
The maximum size makes the JVM easier to manage by the operating system and ensures some performance gains (less swapping).
Sun's JVM also runs on quite limited hardware architectures (embedded ARM systems), where the management of resources is crucial.
One answer that no-one above gave is that the JVM uses both heap and non-heap memory pools. Putting an upper limit on the heap defines not only how much memory is available for the heap memory pools, but it also defines how much memory is available for NON-HEAP usages. I suppose that the JVM could just allocate non-heap at the top of virtual memory and heap at the bottom of virtual memory and grow both toward each other.
Non-heap memory includes the DLLs or SOs that comprise the JVM and any native code being used as well as compiled Java code, thread stacks, native objects, PermGen (meta-data about compiled classes), among other uses. I've seen Java programs crash because so much memory was given to the heap that the application ran out of non-heap memory. This is where I learned that it can be important to reserve memory for non-heap usages by not setting the heap to be too large.
This makes a much bigger difference in a 32-bit world where an application often has only 2GB of virtual address space than it does in a 64-bit world, of course.
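The split between heap and non-heap pools described above is visible through the standard management API (a short sketch; pool names vary by JVM version, e.g. PermGen on older JVMs versus Metaspace on newer ones):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PoolList {
    public static void main(String[] args) {
        // Lists pools such as Eden/Survivor/Old (HEAP) and
        // PermGen/Metaspace and Code Cache (NON_HEAP).
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage(); // may be null for an invalid pool
            if (usage != null) {
                System.out.printf("%-30s %-8s used=%d KB%n",
                        pool.getName(), pool.getType(), usage.getUsed() >> 10);
            }
        }
    }
}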
Would it not make more sense to separate the upper bound that triggers GC from the maximum that can be allocated? Once the memory allocated hits the upper bound, GC can kick in and release some memory to the free pool.
It is sort of like how I clean the desk that I share with my co-worker. I have a large desk, and my threshold of how much junk I can tolerate on the table is much less than the size of my desk. I don't need to fill up every available inch before I garbage collect.
I could also return some of the desk space that I am using to my co-worker, who is sharing my desk... I understand JVMs don't return memory back to the system after they've allocated it to themselves, but it does not have to be that way, no?
It does allocate memory as needed, up to -Xmx ;)
One reason I can think of is that once the JVM allocates an amount of memory for its heap, it will never let it go. So if your heap has no upper bound, the JVM may just grab all the free memory on the system and then never let it go.
The upper bound also tells the JVM when it needs to do a full garbage collection. If your app is still under the upper bound, the JVM will postpone garbage collection and let the memory footprint of your application grow.
Native programs can die due to out-of-memory errors as well, since native applications also have a memory limit: the memory available on the system minus the memory already held by other applications.
The JVM also needs a contiguous block of system memory in order for garbage collection to be performed efficiently.
EDIT
Contiguous memory claim (links in the original).
The JVM will apparently let some memory go, but it is rare with the default configuration.

Why does Java use a static heap rather than allow an arbitrary amount of memory?

In Java the virtual machine pre-allocates a memory heap which cannot be expanded at runtime. The developer can increase the size of the heap with the -Xmx switch when the VM loads, but there is no way to increase the maximum size of the heap at runtime. Why is this?
Fragmentation is a massive problem in memory allocation, as is memory starvation. It's a lot simpler, and less error-prone, if you can allocate and reserve the memory you need, especially in a server environment. By pre-allocating memory, you also have a higher probability of having most of your memory contiguously allocated (not guaranteed, thank you #mttdbrd), which could be faster to access.
Going back to when Java first started out, installations with more than 1 GB of RAM were pretty much unheard of; instead, we had to work with machines that had as little as 256 MB of RAM, sometimes even less! Couple that with how slow RAM was, and it made much more sense to be able to read and write to hopefully contiguously allocated blocks. You are also not constantly hammering the OS to give you more RAM and then releasing it again, freeing up (back then) precious CPU cycles.
In that environment, it's very easy to run out of memory suddenly, so it made a lot of sense to be able to allocate what you MIGHT need and make sure you would have it when the time comes.
These days, I guess with RAM being so much more accessible, it makes a lot less sense, although, when I look at my servers and how memory is allocated, I love the fact that all my Java applications have nice, mostly contiguously allocated blocks of memory when compared to some of the other applications that are all over the place.
That is also why you can't up the heap at runtime, there's no way to guarantee that you will have a contiguous allocation any more.
There is no reason per the JVM specification why the heap size must be specified ahead of time except that it was the choice of the implementors. The specification states: "A Java Virtual Machine implementation may provide the programmer or the user control over the initial size of the heap, as well as, if the heap can be dynamically expanded or contracted, control over the maximum and minimum heap size."
The other answers here are just wrong: "The heap may be of a fixed size or may be expanded as required by the computation and may be contracted if a larger heap becomes unnecessary."
Source: The Java Virtual Machine Specification, Java SE 7 Edition. Section 2.5.3, "Heap." That's page 13 in the printed edition.
By the time your code starts running, the JVM has already been created and configured. Besides, this limitation guarantees that the program will not take all available system resources and break the normal functioning of other applications on the server, regardless of how bad its code is. ;)

hard limit on java heap size [duplicate]

This question already has answers here:
Maximum Java heap size of a 32-bit JVM on a 64-bit OS
(17 answers)
Closed 9 years ago.
This goes to all Java heap/GC experts out there.
Simple and straight question: is there a hard limit on the maximum figure one can set the Java heap size to?
One of the servers I'm working on is hosting an in-memory real-time communication system, and the clients using the server have asked me to increase the amount of memory to 192 GB (it's not a typo: 192 gigabytes!). Technically this is possible because the machine is virtual.
Besides the one above, my other questions are:
how well is the JVM going to handle such size?
is there any showstopper in setting it?
is there something I should be aware of, when dealing with such sizes?
Thanks in advance to anyone willing to help.
Regards.
The JVM spec for Java 7 does not indicate that there is a hard limit.
I'd suggest that it is therefore governed by your OS and the amount of memory available.
However, you will have to be aware that the JVM needs enough memory for other internal structures (as the spec describes), such as PermGen, constant pool, etc. I'd also suggest that you consider profiling before arbitrarily increasing the heap size. It's possible that your old-gen space is hogging memory. I'd use VisualGC (now in VisualVM) to watch the memory usage and YourKit to look for leaks (its generational capabilities are a real help).
I've not answered your other questions but probably couldn't say more than your other respondents.
1) Have them consider the implications of letting the virtual machine swap data in and out of memory to disk versus having their code do it. Sometimes it's just as good, and no extra work, to let the VM do it. Sometimes it is much faster when they know something about the data characteristics and their optimizations are targeted that way.
2) Consider the possible use of multiple JVMs either cooperating or running in parallel with the scope of operation divided up somehow. Sometimes even running 4 JVMs running the same application on the same VM with each using 1/4 of the memory can be better. (Depends on your application's characteristics.)
3) Consider a memory management framework like TerraCotta's Big Memory. There are competitors in this space. For example, VMWare has Elastic Memory. They use memory outside the JVM's heap to store stuff and avoid the GC issue. Or they allocate very large heap objects and manage it independently of Java.
4) JVMs do ok(TM) if the memory gets taken up by live objects and they end up moving to the older generations for garbage collection. GC can be a problem. This is a black art. Be sure to kill a chicken and spread the feathers over the virtual machine before attempting to optimize GC. :) Oh ... and there are some really interesting comments here: Java very large heap sizes
5) Sometimes you set the JVM size and other factors beyond your control will keep it from growing that large. Be sure to do some testing to make sure it is really able to grow to the size you set.
And number 5 is the real key item. Test. Test. And test some more. You are out on the edges of explored territory.

Update a java thread's stack size at runtime

Does anyone know if there is a way to dynamically (runtime) increase the stack size of the main Thread? Also, and I believe it is the same question, is it possible to increase / update the stack size of a Thread after its instantiation?
Thread's constructor allows the definition of its stack size, but I can't find any way to update it afterwards. Actually, I didn't find any management of the stack size in the JDK (which tends to indicate that it's not possible); everything is done in the VM.
According to the Java Virtual Machine Specification it is possible to set the stack size when the stack is created, but there is a note:
A Java virtual machine implementation may provide the programmer or the user control over the initial size of Java virtual machine stacks, as well as, in the case of dynamically expanding or contracting Java virtual machine stacks, control over the maximum and minimum sizes.
IMO that's not very clear. Does that mean that some VMs handle threads whose maximum stack size evolves within a given range? Can we do that with HotSpot (I didn't find any stack-size-related options besides -Xss)?
Thanks !
The stack grows dynamically as it is used, so you never need to do this.
What you can set is the maximum size it can reach, with -Xss. This is the virtual memory size reserved, and you can make it as large as you like on 64-bit JVMs. The actual memory used is based on the amount of memory you use. ;)
EDIT: The important distinction is that the maximum size is reserved as virtual memory (so is the heap btw). i.e. the address space is reserved, which is also why it cannot be extended. In 32-bit systems you have limited address space and this can still be a problem. But in 64-bit systems, you usually have up to 256 TB of virtual memory (a processor limitation) so virtual memory is cheap. The actual memory is allocated in pages (typically 4 KB) and they are only allocated when used. This is why the memory of a Java application appears to grow over time even though the maximum heap size is allocated on startup. The same thing happens with thread stacks. Only the pages actually touched are allocated.
There's not a way to do this in the standard JDK, and even the stackSize argument isn't set in stone:
The effect of the stackSize parameter, if any, is highly platform dependent. ... On some platforms, the value of the stackSize parameter may have no effect whatsoever. ... The virtual machine is free to treat the stackSize parameter as a suggestion.
(Emphasis in original.)
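For completeness, here is the one knob that does exist (a minimal sketch, names mine): the four-argument Thread constructor takes a stackSize hint at creation time, and there is no API to change it once the thread exists.

public class StackSizeDemo {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse(); // recurse until the thread's stack runs out
    }

    public static void main(String[] args) throws InterruptedException {
        long stackSize = 256 * 1024; // a hint only; some platforms ignore it entirely
        Thread t = new Thread(null, () -> {
            try {
                recurse();
            } catch (StackOverflowError e) {
                System.out.println("Overflowed at depth " + depth);
            }
        }, "small-stack", stackSize);
        t.start();
        t.join();
        // Once the thread exists, its stack size is fixed; it cannot be updated.
    }
}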

Why does a native library use 1.5 times more memory when used by java as when used by a C-Programm under linux?

I've written a library in C which consumes a lot of memory (millions of small blocks). I've written a C program which uses this library. And I've written a Java program which uses the same library. The Java program is a very thin layer around the library. Basically there is only one native method which is called, does all the work and returns hours later. There is no further communication between Java and the native library using the Java invocation interface. Nor are there Java objects which consume a noteworthy amount of memory.
So the C program and the Java program are very similar; the whole computation/memory allocation happens inside the native library. Still, when executed, the C program consumes 3 GB of memory, but the Java program consumes 4.3 GB! (VIRT amount reported by top.)
I checked the memory map of the Java process (using pmap). Only 40MB are used by libraries. So additional libraries loaded by Java are not the cause.
Does anyone have an explanation for this behavior?
EDIT: Thanks for the answers so far. To make it a little clearer: the Java code does nothing but invoke the native library ONCE! The Java heap is of standard size (perhaps 60 MB) and is not used (except for the one class containing the main method and the other class invoking the native library).
The native library method is a long-running one and does a lot of mallocs and frees. Fragmentation is one explanation I thought of myself too, but since there is no Java code active, the fragmentation behavior should be the same for the Java program and the C program. Since it is different, I also presume the malloc implementations used differ between the C program and the Java program.
Just guessing: you might be using a non-default malloc implementation when running inside the JVM that's tuned to the specific needs of the JVM and produces more overhead than the general-purpose malloc in your normal libc implementation.
Sorry guys. Wrong assumptions.
I was used to the 64 MB that Sun's Java implementations used as the default maximum heap size, but I used OpenJDK 1.6 for testing. OpenJDK uses a fraction of the physical memory if no maximum heap size is explicitly specified: in my case one fourth. I used a 4 GB machine; one fourth is thus 1 GB. There it is, the difference between C and Java.
Sadly, this behavior isn't documented anywhere. I found it by looking at the source code of OpenJDK (arguments.cpp):
// If the maximum heap size has not been set with -Xmx,
// then set it as fraction of the size of physical memory,
// respecting the maximum and minimum sizes of the heap.
Java needs contiguous memory for its heap, so it reserves the maximum heap size as virtual memory up front. However, this doesn't consume physical memory and might not even consume swap. I would check by how much your resident memory increases.
There are different factors to take into account, especially with a language like Java. Java runs on a virtual machine, and garbage collection is handled by the Java runtime. There is (I would imagine) considerable overhead in using the Java invocation interface to execute the native method within the native library: space has to be allocated on the stack, execution switches to native code, the native method runs, execution switches back to the Java virtual machine, and perhaps, somehow, the space on the stack is not freed up. That's what I would be inclined to think.
Hope this helps,
Best regards,
Tom.
It is hard to say, but I think at the heart of the problem is that there are two heaps in your application which need to be maintained -- the standard Java heap for Java object allocations (maintained by the JVM), and the C heap which is maintained by calls to malloc/free. It is hard to say what is going on exactly without seeing some code.
Here is a suggestion for combating it.
Make the C code stop using the standard malloc call, and use an alternate version of malloc that grabs memory by mmapping /dev/zero. You can either modify an implementation of malloc from a library or roll your own if you feel competent enough to do that.
I strongly suspect you will discover that your problem goes away after you do that.
