Error while initializing array: OutOfMemoryError - Java

I have to allocate space to an array int input[] depending on the configuration parameters height and width.
int input[]=new int[height * width]; //this is line no 538
One of the configurations has the parameters height=8192 and width=8192, so the size of the array becomes 67108864. But when I do this I get an OutOfMemoryError.
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at Test.main(Test.java:538)
I have run this program in Eclipse as well as from Cygwin, but I am facing the same problem. I think this is an Error and not an Exception. How can I rectify this?

Since 8192 * 8192 * 4 bytes = 256 MB (integers are 4 bytes each), your matrix is using 256 MB of heap space by itself.
You can tell the JVM how much heap space should be available to your application. From running man java and looking through the nonstandard options:
-Xmxn
Specify the maximum size, in bytes, of the memory allocation pool. This value must be a multiple of 1024 and greater than 2 MB. Append the letter k or K to indicate kilobytes, or m or M to indicate megabytes. The default value is chosen at runtime based on system configuration. For more information, see HotSpot Ergonomics.
Examples:
-Xmx83886080
-Xmx81920k
-Xmx80m
On Solaris 7 and Solaris 8 SPARC platforms, the upper limit for
this value is approximately 4000m minus overhead amounts. On
Solaris 2.6 and x86 platforms, the upper limit is approximately
2000m minus overhead amounts. On Linux platforms, the upper limit
is approximately 2000m minus overhead amounts.
To use this option, you would start your application with a command like
java -Xmx1024m -jar foo.jar
In Eclipse, you can add command-line options as well. This page on eclipse.org describes how to add command-line arguments to a Java program. You should add -Xmx1024m (or some other sufficiently large heap specification) to the "VM arguments" section of the dialog shown on that site.
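To confirm that the larger heap actually took effect (whether launched from the command line or from Eclipse), you can print the maximum heap the JVM reports. A minimal sketch, separate from your own program:

public class HeapCheck {
    public static void main(String[] args) {
        // Runtime.maxMemory() returns roughly the value given with -Xmx.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");

        // The 8192 x 8192 int array from the question needs about 256 MB on its own.
        int[] input = new int[8192 * 8192];
        System.out.println("Allocated " + input.length + " ints successfully.");
    }
}

Run it with and without -Xmx1024m to see the difference in the reported maximum.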

You probably have too little heap space to hold an array of the size you are targeting. You can increase the size of your heap with command-line switches. For example, to set it to 256 MB, include this switch:
-Xmx256m
If you multiply height * width * 4 (an int takes 4 bytes of storage), you get a rough gauge of the amount of heap you will need, assuming the rest of the program does not need a significant amount. You will certainly need somewhat more heap than that quick calculation suggests; add perhaps 20% extra and try that out. A back-of-the-envelope sketch of this calculation follows.
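As a concrete illustration of that rule of thumb (the 20% headroom figure here is only an assumption for the sake of the example):

public class HeapEstimate {
    public static void main(String[] args) {
        long height = 8192, width = 8192;
        long arrayBytes = height * width * 4L;          // 268,435,456 bytes = 256 MB for the int[] alone
        long withHeadroom = (long) (arrayBytes * 1.2);  // about 307 MB once ~20% is added for the rest
        System.out.println("try -Xmx" + (withHeadroom / (1024 * 1024)) + "m or larger");
    }
}

This prints a suggestion of roughly 307 MB, so starting around -Xmx320m and adjusting from there is reasonable.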
To get a better number than a rule-of-thumb calculation, you can look into heap profilers. There are several open source options:
http://java-source.net/open-source/profilers
See http://javarevisited.blogspot.com/2011/05/java-heap-space-memory-size-jvm.html for a good discussion of the heap in Java.

The memory is not enough for your program; there may be a memory leak.
You can try the options below; if that does not solve it, try increasing the -Xmx value (the flags are case-sensitive):
java -Xmx1g -Xms512m

It depends on how much heap the JVM has. If you run it on the command line, try adding -Xmx512m. If you work in an IDE, add it to the "Run" properties.

An int is 32 bits (i.e. 4 bytes), so your array requires 8192 * 8192 * 4 bytes. This comes out to 256 MB.
Java called with default arguments has only 64 MB of heap space.
To get a larger heap, call Java using the -Xmx argument (Maximum memory size).
e.g. java -Xmx300M

Increase the memory for your Java process by adding this flag; it sets the "max" heap size. The default is probably quite small: 64 MB is a common maximum for many Java EE containers. You may need to play around to find the optimal size for the heap.
*Note: I'm not saying this is exactly the size you'll need; your particular case will dictate the size, and you may need to experiment.
-Xmx256M

Related

Clarification of the meaning of the new JVM memory parameters InitialRAMPercentage and MinRAMPercentage

Reference: https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8186315
I'm really struggling to find out what MinRAMPercentage does, especially compared to InitialRAMPercentage.
I assumed that InitialRAMPercentage sets the amount of heap at startup, and that MinRAMPercentage and MaxRAMPercentage set the bottom and top limits of the heap that the JVM is allowed to shrink or grow to.
Apparently that is not the case. When I start a JVM (with UseContainerSupport, having these new memory setting parameters) like so:
java -XX:+UseContainerSupport -XX:InitialRAMPercentage=40.0 -XX:MinRAMPercentage=20.0 -XX:MaxRAMPercentage=80.0 -XX:+PrintFlagsFinal -version | grep Heap
The initial heap and max heap get set, but there is no "minimum heap size" value that I can find; consequently, MinRAMPercentage never seems to get used.
Super confused, and apparently, I'm not the only one; the OpenJ9 dudes seem to also not fully parse the intent of these options, as I've gathered here and here. They seem to have opted to simply not implement MinRAMPercentage afaics.
So: What is the real intended usage and effect of setting MinRAMPercentage?
-XX:InitialRAMPercentage is used to calculate the initial heap size when InitialHeapSize / -Xms is not set.
It sounds counterintuitive, but both -XX:MaxRAMPercentage and -XX:MinRAMPercentage are used to calculate the maximum heap size when MaxHeapSize / -Xmx is not set:
For systems with small physical memory MaxHeapSize is estimated as
phys_mem * MinRAMPercentage / 100 (if this value is less than 96M)
Otherwise (non-small physical memory) MaxHeapSize is estimated as
MAX(phys_mem * MaxRAMPercentage / 100, 96M)
The exact formula is a bit more complicated as it also takes other factors into account.
Note: the algorithm for calculating the initial and maximum heap size depends on the particular JVM version. The preferred way to control the heap size is to set -Xmx and -Xms explicitly.
See also this question.
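A small sketch of that heuristic, treating the description above as a simplified model (the real JVM code also folds in other factors, as noted, so this is illustrative only):

public class RamPercentageModel {
    // Simplified model of how MaxHeapSize is estimated when -Xmx is not given.
    static long estimateMaxHeap(long physMemBytes, double minRamPct, double maxRamPct) {
        long smallHeapCutoff = 96L * 1024 * 1024;                      // the 96M threshold quoted above
        long smallEstimate = (long) (physMemBytes * minRamPct / 100);  // applies to machines with little memory
        if (smallEstimate < smallHeapCutoff) {
            return smallEstimate;
        }
        return Math.max((long) (physMemBytes * maxRamPct / 100), smallHeapCutoff);
    }

    public static void main(String[] args) {
        long oneGb = 1024L * 1024 * 1024;
        long smallContainer = 250L * 1024 * 1024;
        System.out.println(estimateMaxHeap(oneGb, 20.0, 80.0) / (1024 * 1024) + " MB");          // ~819 MB
        System.out.println(estimateMaxHeap(smallContainer, 20.0, 80.0) / (1024 * 1024) + " MB"); // ~50 MB
    }
}

With the percentages from the question, a 1 GB machine yields roughly 800 MB and a 250 MB machine roughly 50 MB, which matches the container example in the next answer.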
It also depends on your container memory. Suppose your container has 1 GB of memory: in that case -XX:MaxRAMPercentage=80 will be used to determine the max heap, so roughly 800 MB of heap memory will be used.
Now suppose your container has less than 250 MB of memory: then -XX:MinRAMPercentage=20.0 will be used, so roughly 50 MB of heap memory will be used.
Use this article to understand -XX:MinRAMPercentage and -XX:MaxRAMPercentage in more detail.

Is there an equivalent to Java -Xmx for python? [duplicate]

I sometimes write Python programs for which it is very difficult to determine beforehand how much memory they will use. As a result, I sometimes invoke a Python program that tries to allocate massive amounts of RAM, causing the kernel to swap heavily and degrade the performance of other running processes.
Because of this, I wish to restrict how much memory a Python heap can grow. When the limit is reached, the program can simply crash. What's the best way to do this?
If it matters, much code is written in Cython, so it should take into account memory allocated there. I am not married to a pure Python solution (it does not need to be portable), so anything that works on Linux is fine.
Check out resource.setrlimit(). It only works on Unix systems but it seems like it might be what you're looking for, as you can choose a maximum heap size for your process and your process's children with the resource.RLIMIT_DATA parameter.
EDIT: Adding an example:
import resource

rsrc = resource.RLIMIT_DATA
soft, hard = resource.getrlimit(rsrc)
print('Soft limit starts as:', soft)

resource.setrlimit(rsrc, (1024, hard))  # limit to one kilobyte

soft, hard = resource.getrlimit(rsrc)
print('Soft limit changed to:', soft)
I'm not sure what your use case is exactly, but it's possible you need to place a limit on the size of the stack instead, with resource.RLIMIT_STACK. Going past this limit will send a SIGSEGV signal to your process, and to handle it you will need to employ an alternate signal stack, as described in the setrlimit Linux man page. I'm not sure whether sigaltstack is exposed in Python, though, so that could prove difficult if you want to recover from going over this boundary.
Have a look at ulimit. It allows resource quotas to be set. May need appropriate kernel settings as well.
The following code allocates memory up to a specified maximum resident set size:
import resource

def set_memory_limit(memory_kilobytes):
    # ru_maxrss: peak memory usage (bytes on OS X, kilobytes on Linux)
    usage_kilobytes = lambda: resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    rlimit_increment = 1024 * 1024
    resource.setrlimit(resource.RLIMIT_DATA, (rlimit_increment, resource.RLIM_INFINITY))

    memory_hog = []

    while usage_kilobytes() < memory_kilobytes:
        try:
            for x in range(100):
                memory_hog.append('x' * 400)
        except MemoryError as err:
            rlimit = resource.getrlimit(resource.RLIMIT_DATA)[0] + rlimit_increment
            resource.setrlimit(resource.RLIMIT_DATA, (rlimit, resource.RLIM_INFINITY))

set_memory_limit(50 * 1024)  # 50 MB
Tested on a Linux machine.

Why does the JVM Heap Usage Max as reported by JMX change over time?

My JVM heap max is configured at 8GB on the name node for one of my hadoop clusters. When I monitor that JVM using JMX, the reported maximum is constantly fluctuating, as shown in the attached image.
http://highlycaffeinated.com/assets/images/heapmax.png
I only see this behavior on one (the most active) of my hadoop clusters. On the other clusters the reported maximum stays fixed at the configured value. Any ideas why the reported maximum would change?
Update:
The java version is "1.6.0_20"
The heap max value is set in hadoop-env.sh with the following line:
export HADOOP_NAMENODE_OPTS="-Xmx8G -Dcom.sun.management.jmxremote.port=8004 $JMX_SHARED_PROPS"
ps shows:
hadoop 27605 1 99 Jul30 ? 11-07:23:13 /usr/lib/jvm/jre/bin/java -Xmx1000m -Xmx8G
Update 2:
Added the -Xms8G switch to the startup command line last night:
export HADOOP_NAMENODE_OPTS="-Xms8G -Xmx8G -Dcom.sun.management.jmxremote.port=8004 $JMX_SHARED_PROPS"
As shown in the image below, the max value still varies, although the pattern seems to have changed.
http://highlycaffeinated.com/assets/images/heapmax2.png
Update 3:
Here's a new graph that also shows Non-Heap max, which stays constant:
http://highlycaffeinated.com/assets/images/heapmax3.png
According to the MemoryMXBean documentation, memory usage is reported in two categories, "Heap" and "Non-Heap" memory. The description of the Non-Heap category says:
The Java virtual machine manages memory other than the heap (referred as non-heap memory).
The Java virtual machine has a method area that is shared among all threads. The method area belongs to non-heap memory. It stores per-class structures such as a runtime constant pool, field and method data, and the code for methods and constructors. It is created at the Java virtual machine start-up.
The method area is logically part of the heap but a Java virtual machine implementation may choose not to either garbage collect or compact it. Similar to the heap, the method area may be of a fixed size or may be expanded and shrunk. The memory for the method area does not need to be contiguous.
This description sounds a lot like the permanent generation (PermGen), which is indeed part of the heap and counts against the memory allocated using the -Xmx flag. I'm not sure why they decided to report this separately since it is part of the heap.
I suspect that the fluctuations you're seeing are a result of the JVM shrinking and growing the permanent generation, which would cause the reported max heap space available for non-PermGen uses to change accordingly. If you could get a sum of the Heap and Non-Heap maxes as reported by JMX and this sum stays constant at the 8G limit, that would verify this hypothesis.
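A minimal sketch of that check, using the standard java.lang.management API from inside the same JVM (either getMax() value can be -1 if the limit is undefined):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class HeapPlusNonHeap {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        long heapMax = mem.getHeapMemoryUsage().getMax();       // the "Heap" max that JMX reports
        long nonHeapMax = mem.getNonHeapMemoryUsage().getMax(); // the "Non-Heap" max that JMX reports
        System.out.println("Heap max:     " + heapMax);
        System.out.println("Non-heap max: " + nonHeapMax);
        System.out.println("Sum:          " + (heapMax + nonHeapMax));
    }
}

Polling this periodically (or charting both values over time) would show whether the sum stays pinned at the configured limit.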
One possibility is that the JVM survivor space is fluctuating in max-size.
The JVM max size reported by JMX via the HeapMemoryUsage.max attribute is not the actual maximum size of the heap (i.e. the one set with -Xmx).
The reported value is the max heap size minus the max survivor space size.
To get the total max heap size, add the two jmx attributes:
java.lang:type=Memory/HeapMemoryUsage.max + java.lang:type=MemoryPool,name=Survivor Space/Usage.max
(tested on oracle jdk 1.7.0_45)
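To add those two attributes programmatically rather than through a JMX console, something along these lines should work. This is a sketch; the survivor pool's exact name depends on the collector in use (e.g. "PS Survivor Space" or "G1 Survivor Space"), so it matches on the substring:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class TrueHeapMax {
    public static void main(String[] args) {
        long heapMax = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getMax();
        long survivorMax = 0;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Survivor")) {  // pool name varies by garbage collector
                survivorMax = pool.getUsage().getMax();
            }
        }
        System.out.println("HeapMemoryUsage.max + Survivor Space Usage.max = " + (heapMax + survivorMax));
    }
}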

limit for max heap size in java heap setting

Is there a limit to how much the max heap size can be increased in Java? I am wondering whether a large heap size can be set as long as physical memory is available.
For example, if a server has 100 GB of RAM, can I set the max heap at 90 GB? I know that GC will halt the app, but I am just curious.
Thanks.
With a 32-bit JVM, the hard limit would be 4 GB, but the actual limit is lower, especially if you aren't running a 64-bit OS, because some space must be left for non-heap memory: the JVM's own (non-Java) address space, stacks for all threads, architecture/OS limitations, and the like. A 64-bit JVM has no such limitation, so you could set the limit to 90 GB, although I wouldn't recommend it for the reason you already pointed out.

Java thread stack size on 64-bit linux

My goal is to come up with a figure for the maximum number of threads that can run in parallel. Google pointed me to many links that give the simple math of dividing RAM by the stack size. On 64-bit Linux the thread stack size is 10 MB (ulimit -s = 10240 kB) and the RAM is 4 GB; leaving 1 GB for the OS, that math gives ~300 threads or so, but a small test application I wrote goes up to ~32297 threads and then gives an out-of-memory error.
I tried different values with -Xss, but these values hardly have any effect on the thread count; it remains around ~32297.
This gave me the impression that the stack size is variable, decided by the OS, and grows up to the maximum we define whenever needed, but everywhere I read they say the stack size is static.
What exactly am I missing here?
Try checking/changing linux maximum stack size using
ulimit -s
Also check the Linux threads limit:
cat /proc/sys/kernel/threads-max
I have also found a limit of about 32K threads in Java. If you have this many threads, it's usually a better idea to use a different approach; on my machine, 32K threads each doing while(true) Thread.sleep(1000) will consume 3 cores just on context switching. A small sketch of this kind of thread-count test follows the link below.
Java: What is the limit to the number of threads you can create?
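If you want to reproduce that kind of measurement, here is a rough sketch. It deliberately creates threads until the JVM refuses, so run it only somewhere that is acceptable; the counter is approximate because threads increment it after they start:

import java.util.concurrent.atomic.AtomicInteger;

public class ThreadLimitTest {
    public static void main(String[] args) {
        AtomicInteger count = new AtomicInteger();
        try {
            while (true) {
                new Thread(() -> {
                    count.incrementAndGet();
                    try {
                        Thread.sleep(Long.MAX_VALUE);   // keep the thread alive but idle
                    } catch (InterruptedException ignored) {
                    }
                }).start();
            }
        } catch (OutOfMemoryError e) {
            // Usually "unable to create new native thread" rather than heap exhaustion.
            System.out.println("Created roughly " + count.get() + " threads before: " + e);
        }
    }
}

On 64-bit Linux the number you get is typically bounded by OS limits such as the threads-max and pid_max values mentioned in these answers rather than by -Xss, which matches what the question observed.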
Linux limits the maximum number of threads per process indirectly!
number of threads = total virtual memory / (stack size * 1024 * 1024)
Thus, the number of threads per process can be increased by increasing the total virtual memory or by decreasing the stack size. But decreasing the stack size too much can lead to failures due to stack overflow, while the maximum virtual memory is equal to the swap memory. (A worked example of this formula follows the reference below.)
Check your machine:
Total virtual memory: ulimit -v (the default is unlimited, so you need to increase swap memory to increase this)
Total stack size: ulimit -s (the default is 8 MB)
Command to increase these values:
ulimit -s newvalue
ulimit -v newvalue
*Replace newvalue with the value you want to set as the limit.
References:
http://dustycodes.wordpress.com/2012/02/09/increasing-number-of-threads-per-process/
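As a worked example of the formula above (the numbers are illustrative assumptions, not measurements from a real machine):

public class ThreadEstimate {
    public static void main(String[] args) {
        long virtualMemoryBytes = 8L * 1024 * 1024 * 1024;  // assume ulimit -v reports about 8 GB
        long stackSizeMb = 8;                               // the default ulimit -s of 8 MB
        long maxThreads = virtualMemoryBytes / (stackSizeMb * 1024 * 1024);
        System.out.println(maxThreads + " threads");        // prints 1024
    }
}

So with an 8 MB stack and 8 GB of virtual address space you would expect on the order of a thousand threads; raising ulimit -v or lowering ulimit -s moves that number accordingly.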
What you've read is only valid on a 32-bit architecture, where the limit is the address space (2^32). You effectively have something like: Xmx + MaxPermSize + (Xss * number of threads) < max address space the OS allows for a user process. Depending on the OS and physical hardware, that is something like the 3 GB you mentioned. But this has nothing to do with RAM.
For a 64-bit architecture, your address space won't be the limitation (2^64). You should look at OS limitations, as mentioned above.
It is because of the pid_max kernel variable, which is 32768 by default but on 64-bit systems may be increased up to about 4 million.
The explanation is simple: 1 thread = 1 task that gets its own ID (process/thread ID), so when there are no more IDs, there are no more threads.
