I see some strange behavior on the maximum heap size I get on Sun's JVM, compared to JRockit.
I'm running IDEA on 64-bit VMs on a 64-bit system (Ubuntu 11.04). The JVM versions I'm testing are Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode), which I got with apt-get install sun-java6-jdk, and Oracle JRockit(R) (build R28.1.3-11-141760-1.6.0_24-20110301-1432-linux-x86_64, compiled mode), which I downloaded from Oracle's site a couple of months ago.
If I pass the parameters -Xms1g -Xmx3g, IDEA will report a maximum heap size of 1820M on Sun's JVM, and 3072M (as expected) on JRockit.
If I pass -Xms2g -Xmx4g, IDEA will report 3640M on Sun's and 4096M on JRockit.
What is happening? What are these mysterious numbers, 1820M and 3640M = 2 * 1820M? Isn't it possible to run Sun's JVM with the exact heap size I want?
EDIT:
An answer has been deleted, so just to bring my comments back: please note that I'm talking about the MAX size, not the current size. I researched a lot before asking this question, so there's no need to explain the meaning of -Xms, -Xmx, or any of the other parameters that specify the sizes of memory regions (that can be found elsewhere).
EDIT2:
I wrote the following simple code to test this behavior:
class MemoryReport {
    public static void main(String[] args) throws Exception {
        while (true) {
            final Runtime r = Runtime.getRuntime();
            // All values are in bytes; divide twice by 1024 to get megabytes.
            System.out.println("r.freeMemory() = " + r.freeMemory() / 1024.0 / 1024);
            System.out.println("r.totalMemory() = " + r.totalMemory() / 1024.0 / 1024);
            System.out.println("r.maxMemory() = " + r.maxMemory() / 1024.0 / 1024);
            Thread.sleep(1000);
        }
    }
}
Then I ran it with -Xmx100m, -Xmx110m, -Xmx120m, etc., for many different values, both on Sun's JVM and on JRockit. Sun's would always report a bizarre value for maxMemory(), which grew in big steps (around 30M) between runs. JRockit reported the exact value every time.
The -Xms and -Xmx flags only indicate the minimum and maximum sizes of the allocated heap. The actual size of the allocated heap will be some value between the minimum and maximum, as the JVM can resize the heap, especially during object allocation or garbage collection events.
If you need the JVM to use an "exact" heap size, you can specify -Xms and -Xmx values that are equal, or close enough to each other that heap resizing does not occur. Of course, these values must correspond to a contiguous amount of free memory.
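For instance, pinning the heap at a fixed 3 GB (MyApp is just a placeholder class name here) would look like:
java -Xms3g -Xmx3g MyApp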
(The section above was based on a different assumption and can be ignored for practical purposes.)
Based on the code you used to calculate the heap size, note that Runtime.maxMemory() does not return a value corresponding to the -Xmx flag on the HotSpot JVM; the documentation states only that it returns the maximum amount of memory that the JVM will attempt to use.
Inferring from your posted code's behavior, heap resizing results in different values being reported across invocations of Runtime.maxMemory(). Needless to say, the JRockit JVM reports exactly the value passed in via the -Xmx flag.
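If you want to see the heap size HotSpot actually settled on (rather than what Runtime.maxMemory() reports), one option, assuming your HotSpot build is recent enough to support it, is the -XX:+PrintFlagsFinal diagnostic flag, which prints the resolved MaxHeapSize among other values:
java -XX:+PrintFlagsFinal -Xms1g -Xmx3g -version | grep MaxHeapSize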
Related
I have a JNA wrapper for a C DLL. It works fine, except when used on a Windows 32-bit system. Here is a simplified example:
int SetData(const wchar_t* data);
int SetId(const wchar_t* id, uint32_t flags);
I created JNA bindings as follows:
public static native int SetData(WString data);
public static native int SetId(WString id, int flags);
The first function SetData() works fine on both 32-bit as well as 64-bit Windows, but the second function crashes on Windows 7 32-bit.
I tried using NativeLong as suggested in other related posts, but it didn't help.
Here is the link to the repository:
https://github.com/cryptlex/lexactivator-java/blob/master/src/main/java/com/cryptlex/lexactivator/LexActivatorNative.java
Your mappings are correct: WString is the correct mapping for const wchar_t*, and int (always 32 bits in Java) is the correct mapping for uint32_t (always 32 bits), with a caveat about signedness that shouldn't matter when the value is used as a flags bitmask.
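For reference, a minimal self-contained version of such a direct mapping looks roughly like this (the library name "MyLib" and the surrounding class are placeholders for illustration, not taken from the linked repository):

import com.sun.jna.Native;
import com.sun.jna.WString;

public class MyLibNative {
    static {
        // Binds the native methods below to the functions exported by the DLL.
        Native.register("MyLib");
    }

    public static native int SetData(WString data);
    public static native int SetId(WString id, int flags);
}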
I'm not sure where you read that NativeLong would be appropriate here. It is primarily intended for *nix native code in which sizeof(long) differs based on the OS bitness. Practically, it doesn't matter on Windows, since LONG is always 32-bit, but it involves unnecessary object creation compared to a primitive.
The "Invalid Memory Access" error thrown by JNA is a "graceful" handling of native memory errors caught by Structured Exception Handling. All you really need to know is that either You are attempting to access native memory that you do not own or Your native memory allocation failed.
Debugging these errors always involves carefully tracking memory allocations. When is memory allocated? When is it released? Who (Java-side or native side) is responsible for this allocation? Your program is likely crashing at the point you attempt to copy data from the user-provided ID string to some native memory that your native DLL is accessing. So that points to two memory pointers you need to investigate.
Look at the memory where the ID string is being written. Find out when the memory for it is allocated. Ensure it is large enough for the string (it should be 2x the string length + 2 bytes for a null terminator) and properly zeroed (or has a null byte explicitly appended). Verify that all WinAPI calls use the correct Unicode (W vs. A) version.
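To illustrate the sizing rule, here is a sketch using JNA's Memory class (the helper name is my own): each UTF-16 code unit is 2 bytes, plus 2 bytes for the wide null terminator.

import com.sun.jna.Memory;

public class WideStrings {
    // Allocates a native buffer sized per the rule above:
    // 2 bytes per character plus a 2-byte null terminator.
    public static Memory toWideBuffer(String s) {
        Memory buf = new Memory((s.length() + 1) * 2L);
        buf.setWideString(0, s); // copies the string and appends the wide null
        return buf;
    }
}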
I tried adding LA_IN_MEMORY to the flags bitmask and got an error message "Either trial has not started or has been tampered! Trial days left: 30". This is apparently produced by the next line (IsLicenseGenuine()), meaning that the setProductId() call was successful.
Identifying what your native code does differently when the LA_IN_MEMORY flag is not set will probably be very helpful. It's possible the invalid memory access is associated with identifying the directory or file to be used.
There is a recent changelog entry for 3.14.9 involving this flag. Looking at that commit might give a hint to the problem.
There's another recent change in 3.15.0 involving auto-detection of a file on Windows which also may be suspicious given that LA_IN_MEMORY makes the problem go away.
When given an invalid key, the error message "43: The product id is incorrect." is returned, so the point in native code where unowned memory is being accessed is after this error check.
Trace what is happening with the ID string defined on the Java side. Given the constant definition of the string, the actual memory allocation is likely not a problem, but keep track of the native pointer to this string to be sure it's not inadvertently overwritten.
As you've noted in the comments, reducing the native memory allocation solves this issue, indicating you are hitting a limit. It turns out that the default thread stack size (-Xss) for 32-bit Java is 320 KB. From the Oracle docs:
On Windows, the default thread stack size is read from the binary (java.exe). As of Java SE 6, this value is 320k in the 32-bit VM and 1024k in the 64-bit VM.
You can reduce your stack size by running with the -Xss option. For example:
java -server -Xss64k
Note that on some versions of Windows, the OS may round up thread stack sizes using very coarse granularity. If the requested size is less than the default size by 1K or more, the stack size is rounded up to the default; otherwise, the stack size is rounded up to a multiple of 1 MB.
You could increase this limit to solve the problem or, as you've indicated in the comments, lower your native allocation. You might wish to be more conservative than 300 KB, as that leaves only a small amount for other uses of the stack. You might also start smaller, check the return value for ERR_MORE_DATA, and try again with a larger value. 300 KB seems a rather huge amount to devote to registry values.
Note also that a 32-bit Java process has a total memory size limit of either 2 GB or 4 GB, depending on the OS. If your Java heap allocation grows close to that limit, it reduces the native memory available to you. You can control how big the heap gets with the -Xmx switch, and you can ensure a sufficient thread stack allocation with the -Xss switch. Use these switches in combination to avoid hitting the process size limit.
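For example, a hypothetical launch (the jar name is a placeholder) that caps the heap at 1 GB while giving each thread a 1 MB stack might look like:
java -Xmx1g -Xss1m -jar yourapp.jar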
The maximum allowed array size for the int data type is 2147483647. But I am getting:
Runtime Error: Exception in thread "main" java.lang.OutOfMemoryError:
Java heap space at Test.main(Test.java:3)
Can someone please explain the memory representation for this size and why the JVM raises a runtime error for it?
Code:
class Test {
    public static void main(String[] args) {
        int[] x = new int[1000000000];
    }
}
You're hitting a limit of the particular JVM you're using, or your system memory.
You're trying to allocate an array which will take up ~4GB of contiguous memory. I'd expect a 64-bit JVM on a machine with enough memory to handle that without any problems, but different JVMs may have different limitations. For example, I'd expect any 32-bit JVM to struggle with that.
The good news is that allocating an array that large is almost never required in regular programming. If you do need it, you'll need to make sure you work in an environment that supports it.
Even on my machine, which can handle your example, increasing the size further produces one of two errors: either the one you got:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
or for 2147483646 or 2147483647:
Exception in thread "main" java.lang.OutOfMemoryError: Requested array size exceeds VM limit
(The first happens somewhere between 1064000000 and 1065000000 on my machine.)
By default, the int data type is a 32-bit signed two's complement integer. You declare an array of one billion of those.
You can read more about primitive datatypes at https://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
You could look into increasing the available memory (see Increase heap size in Java), or look into using the Collections API, which might suit your needs better than a primitive array.
You mentioned the biggest number that can be stored in an int, i.e. 2147483647.
But you are creating an array of a billion ints; these are two different things.
Basically, you are asking for 1000000000 * 4 bytes, because the size of one int is 4 bytes. That makes about 4 GB of memory!
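A quick sanity check of that arithmetic (plain Java, nothing assumed):

public class ArraySize {
    public static void main(String[] args) {
        long bytes = 1000000000L * 4; // one billion ints at 4 bytes each
        System.out.println(bytes + " bytes"); // 4000000000 bytes
        System.out.println(bytes / (1024.0 * 1024 * 1024) + " GiB"); // about 3.7 GiB
    }
}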
Usually, this error is thrown when the Java Virtual Machine cannot allocate an object because it is out of memory and no more memory can be made available by the garbage collector.
new int[1000000000] consumes a lot of Java heap space; when all of the available memory in the heap region is filled and garbage collection is not able to free any of it, java.lang.OutOfMemoryError: Java heap space is thrown.
4 GB is a lot of space.
I think this is an old question, but I will explain:
Your Java code runs on a Java Virtual Machine (JVM). This machine has some default limitations, which is why you get output like this:
int[] array: 4 bytes per element
0 l=1048576 s=4mb
1 l=2097152 s=8mb
java.lang.OutOfMemoryError: Java heap space l=4194304 s=16mb
How can you "fix" it on your environment?
Well easy just run your app adding a VM flag -Xmx12g.
Note:
Remember that, in practice, a JVM's array size is limited by its internal representation. In the GC code, the JVM passes the size of an array around in heap words as an int, then converts back from heap words to jint, which may overflow. So, to avoid crashes and unexpected behavior, the maximum array length is limited to (maximum size - header size), where the header size depends on which C/C++ compiler was used to build the JVM you are running (e.g. gcc for Linux, clang for macOS) and on runtime settings (e.g. UseCompressedClassPointers). For example, on my Linux environment the limits are:
Java HotSpot(TM) 64-Bit Server VM 1.6.0_45 limit Integer.MAX_VALUE
Java HotSpot(TM) 64-Bit Server VM 1.7.0_72 limit Integer.MAX_VALUE-1
Java HotSpot(TM) 64-Bit Server VM 1.8.0_40 limit Integer.MAX_VALUE-2
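If you want to find the exact limit on your own JVM, a small probe like the following works (my own sketch; it needs a heap large enough to actually hold the winning array, e.g. -Xmx12g, so that the only failures near Integer.MAX_VALUE come from the VM's array-length cap rather than heap exhaustion):

public class MaxArrayProbe {
    public static void main(String[] args) {
        for (int len = Integer.MAX_VALUE; len > 0; len--) {
            try {
                byte[] probe = new byte[len];
                System.out.println("max array length = " + len);
                return;
            } catch (OutOfMemoryError e) {
                // "Requested array size exceeds VM limit" means len is over the cap;
                // "Java heap space" would mean the heap itself is too small.
            }
        }
    }
}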
public class Test {
    public static void main(String[] args) {
        long heapsize = Runtime.getRuntime().totalMemory();
        System.out.println("heapsize is :: " + heapsize / (1024 * 1024));
    }
}
The output is:
heapsize is :: 245
Does this mean my runtime has only 245M of memory? My computer has 8 GB of memory. I doubt this output is correct, since running Eclipse alone consumes a lot more than 245M.
In Eclipse, I click Windows->Preferences->Java->Installed JREs, and set Default JVM arguments as follows:
-ea -Xms256m -Xmx4096M
Then I ran the test again. It still printed the same number, 245. How could this happen?
Edited: from the Java doc for Runtime.getRuntime().totalMemory():
Returns the total amount of memory in the Java virtual machine. The value returned by this method may vary over time, depending on the host environment.
Your program doesn't run in Eclipse's heap space. Eclipse spawns off a separate JVM for your program.
Runtime.totalMemory() does indeed return the current heap size.
The -Xms argument specifies the initial heap size. Java will expand the heap if it cannot free up enough memory through garbage collection, until it reaches the maximum set by -Xmx. If the heap is already at the maximum and garbage collection still cannot free enough memory, the JVM throws an OutOfMemoryError.
Java memory management is a complex topic, involving garbage collection, moving objects from nursery to tenured space, etc.
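To see this in action, here is a small sketch (the class name and numbers are my own choices): run it with, say, -Xms16m -Xmx256m and watch totalMemory() climb toward maxMemory() as the retained data forces the heap to expand.

import java.util.ArrayList;
import java.util.List;

public class HeapGrowth {
    public static void main(String[] args) {
        Runtime r = Runtime.getRuntime();
        List<byte[]> retained = new ArrayList<byte[]>();
        for (int i = 0; i < 100; i++) {
            retained.add(new byte[1024 * 1024]); // hold 1 MB so GC cannot reclaim it
            System.out.println("total = " + r.totalMemory() / (1024 * 1024)
                    + "M, max = " + r.maxMemory() / (1024 * 1024) + "M");
        }
    }
}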
I'm attempting to debug a problem with pl/java, a procedural language for PostgreSQL. I'm running this stack on a Linux server.
Essentially, each Postgres backend (connection process) must start its own JVM, and does so using the JNI. This is generally a major limitation of pl/java, but it has one particularly nasty manifestation.
If native memory runs out (I realise that this may not actually be due to malloc() returning NULL, but the effect is about the same), this failure is handled rather poorly: it results in an OutOfMemoryError due to "native memory exhaustion", a segfault of the Postgres backend originating from within libjvm.so, and a javacore file that says something like:
0SECTION TITLE subcomponent dump routine
NULL ===============================
1TISIGINFO Dump Event "systhrow" (00040000) Detail "java/lang/OutOfMemoryError" "Failed to create a thread: retVal -1073741830, errno 11" received
1TIDATETIME Date: 2012/09/13 at 16:36:01
1TIFILENAME Javacore filename: /var/lib/PostgreSQL/9.1/data/javacore.20120913.104611.24742.0002.txt
***SNIP***
Now, there are reasonably well-defined ways of ameliorating these types of problems with Java, described here:
http://www.ibm.com/developerworks/java/library/j-nativememory-linux/
I think that it would be particularly effective if I could set the maximum heap size to a value that is far lower than the default. Ordinarily, it is possible to do something along these lines:
The heap's size is controlled from the Java command line using the -Xmx and -Xms options (mx is the maximum size of the heap, ms is the initial size). Although the logical heap (the area of memory that is actively used) can grow and shrink according to the number of objects on the heap and the amount of time spent in GC, the amount of native memory used remains constant and is dictated by the -Xmx value: the maximum heap size. Most GC algorithms rely on the heap being allocated as a contiguous slab of memory, so it's impossible to allocate more native memory when the heap needs to expand. All heap memory must be reserved up front.
However, it is not apparent how I can follow these steps such that pl/java's JNI initialisation initialises a JVM with a smaller heap; I can't very well pass these command line arguments to Postgres. So, my question is, how can I set the maximum heap size or otherwise control these problems in this context specifically? This appears to be a general problem with pl/java, so I expect to be able to share whatever solution I eventually arrive at with the Postgres community.
Please note that I am not experienced with JVM internals, and am not generally familiar with Java.
Thanks
According to slide 19 of this presentation, postgresql.conf can have the parameter pljava.vmoptions, where you can pass arguments to the JVM.
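The exact numbers here are placeholders, but the entry in postgresql.conf would look something like:
pljava.vmoptions = '-Xms16m -Xmx64m'
Each backend's JVM would then reserve at most 64 MB for its heap, leaving correspondingly more native memory headroom per connection.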
My JVM heap max is configured at 8GB on the name node for one of my hadoop clusters. When I monitor that JVM using JMX, the reported maximum is constantly fluctuating, as shown in the attached image.
http://highlycaffeinated.com/assets/images/heapmax.png
I only see this behavior on one (the most active) of my hadoop clusters. On the other clusters the reported maximum stays fixed at the configured value. Any ideas why the reported maximum would change?
Update:
The java version is "1.6.0_20"
The heap max value is set in hadoop-env.sh with the following line:
export HADOOP_NAMENODE_OPTS="-Xmx8G -Dcom.sun.management.jmxremote.port=8004 $JMX_SHARED_PROPS"
ps shows:
hadoop 27605 1 99 Jul30 ? 11-07:23:13 /usr/lib/jvm/jre/bin/java -Xmx1000m -Xmx8G
Update 2:
Added the -Xms8G switch to the startup command line last night:
export HADOOP_NAMENODE_OPTS="-Xms8G -Xmx8G -Dcom.sun.management.jmxremote.port=8004 $JMX_SHARED_PROPS"
As shown in the image below, the max value still varies, although the pattern seems to have changed.
http://highlycaffeinated.com/assets/images/heapmax2.png
Update 3:
Here's a new graph that also shows Non-Heap max, which stays constant:
http://highlycaffeinated.com/assets/images/heapmax3.png
According to the MemoryMXBean documentation, memory usage is reported in two categories, "Heap" and "Non-Heap" memory. The description of the Non-Heap category says:
The Java virtual machine manages memory other than the heap (referred as non-heap memory).
The Java virtual machine has a method area that is shared among all threads. The method area belongs to non-heap memory. It stores per-class structures such as a runtime constant pool, field and method data, and the code for methods and constructors. It is created at the Java virtual machine start-up.
The method area is logically part of the heap but a Java virtual machine implementation may choose not to either garbage collect or compact it. Similar to the heap, the method area may be of a fixed size or may be expanded and shrunk. The memory for the method area does not need to be contiguous.
This description sounds a lot like the permanent generation (PermGen), which is indeed part of the heap and counts against the memory allocated using the -Xmx flag. I'm not sure why they decided to report this separately since it is part of the heap.
I suspect that the fluctuations you're seeing are a result of the JVM shrinking and growing the permanent generation, which would cause the reported max heap space available for non-PermGen uses to change accordingly. If you could get a sum of the Heap and Non-Heap maxes as reported by JMX and this sum stays constant at the 8G limit, that would verify this hypothesis.
One possibility is that the max size of the JVM's survivor space is fluctuating.
The JVM max size reported by JMX via the HeapMemoryUsage.max attribute is not the actual max size of the heap (i.e. the one set with -Xmx).
The reported value is the max heap size minus the max survivor space size.
To get the total max heap size, add the two jmx attributes:
java.lang:type=Memory/HeapMemoryUsage.max + java.lang:type=MemoryPool,name=Survivor Space/Usage.max
(tested on oracle jdk 1.7.0_45)
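To verify this, a small sketch along these lines sums the two attributes from inside the monitored JVM (note that survivor pool names vary by collector, e.g. "PS Survivor Space" vs. "Survivor Space", hence the substring match):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class HeapMaxSum {
    public static void main(String[] args) {
        long heapMax = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getMax();
        long survivorMax = 0;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Survivor")) {
                survivorMax = pool.getUsage().getMax();
            }
        }
        System.out.println("HeapMemoryUsage.max = " + heapMax / (1024 * 1024) + "M");
        System.out.println("Survivor Usage.max  = " + survivorMax / (1024 * 1024) + "M");
        System.out.println("sum                 = " + (heapMax + survivorMax) / (1024 * 1024) + "M");
    }
}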