My goal is to work out the maximum number of threads that can run in parallel. Google pointed me to many links that give simple math: divide the RAM by the stack size. On 64-bit Linux the thread stack size is 10 MB (ulimit -s = 10240 KB) and the RAM was 4 GB; leaving 1 GB for the OS, that math gives ~300 threads or so. But a small test application I wrote goes up to ~32297 threads before giving an out-of-memory error.
I tried different values with -Xss, but they hardly have any effect on the thread count; it stays at ~32297.
This gave me the impression that the stack size is variable, decided by the OS, and grows up to the maximum we define whenever needed. But everywhere I read, the stack size is said to be static.
What exactly am I missing here?
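For reference, a minimal sketch of such a test (an illustration, not the exact program mentioned above) just keeps starting threads that park in a long sleep and counts how many get created before thread creation fails:
public class ThreadLimitTest {
    public static void main(String[] args) {
        int count = 0;
        try {
            while (true) {
                Thread t = new Thread(() -> {
                    try {
                        Thread.sleep(Long.MAX_VALUE); // keep the thread alive but idle
                    } catch (InterruptedException ignored) {
                    }
                });
                t.setDaemon(true); // let the JVM exit even with live threads
                t.start();
                count++;
            }
        } catch (Throwable e) { // typically OutOfMemoryError: unable to create new native thread
            System.out.println("Created " + count + " threads before: " + e);
        }
    }
}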
Try checking/changing the Linux maximum stack size using
ulimit -s
Also check the Linux limit on the total number of threads:
cat /proc/sys/kernel/threads-max
I have also found a limit of about 32K threads in Java. If you have this many threads, it's usually a better idea to use a different approach. On my machine, 32K threads each doing while(true) Thread.sleep(1000) consume 3 cores just on context switching.
Java: What is the limit to the number of threads you can create?
Linux limits the maximum number of threads per process indirectly:
number of threads = total virtual memory / (stack size * 1024 * 1024)
Thus, the number of threads per process can be increased by increasing the total virtual memory or by decreasing the stack size. But decreasing the stack size too much can lead to failures due to stack overflow, while the maximum virtual memory is tied to the swap memory.
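For example (illustrative numbers, not measured on any particular machine): with 8 GB of virtual memory available and an 8 MB stack per thread, this estimate gives roughly 8192 MB / 8 MB ≈ 1024 threads.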
Check your machine:
Total virtual memory: ulimit -v (the default is unlimited, so you need to increase swap memory to increase this)
Total stack size: ulimit -s (default is 8 MB)
Commands to increase these values:
ulimit -s newvalue
ulimit -v newvalue
*Replace newvalue with the value you want to set as the limit.
References:
http://dustycodes.wordpress.com/2012/02/09/increasing-number-of-threads-per-process/
What you've read is only valid on 32-bit architectures, where the limit is the address space (2^32). There you effectively have: Xmx + MaxPermSize + (Xss * number of threads) < max address space the OS allows for a user process. Depending on the OS and the physical hardware, that is something like 3 GB, as you said. But this has nothing to do with RAM.
On a 64-bit architecture, the address space (2^64) won't be the limitation. You should look at OS limits, as mentioned above.
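For illustration (assumed values, ignoring other native overhead): on a 32-bit JVM with roughly 3 GB of usable address space, -Xmx1g, -XX:MaxPermSize=256m and -Xss512k, that inequality leaves about (3072 - 1024 - 256) MB / 0.5 MB ≈ 3,500 threads at most.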
It is because of the pid_max kernel parameter, which is 32768 by default but on 64-bit systems can be increased up to about 4 million (see /proc/sys/kernel/pid_max).
The explanation is simple: 1 thread = 1 light-weight process with its own PID (process ID), so once there are no more PIDs, there are no more threads.
Related
I'm facing a heap-space OutOfMemoryError during my Mapper-side cleanup method, where I'm reading the data from an InputStream and converting it into a byte array using IOUtils.toByteArray(inputStream);
I know I can resolve it by increasing the max heap space (Xmx), but I should already have enough heap space (1 GB). I found the below info while debugging (approximate values):
runtime.maxMemory()   ~ 1024 MB
runtime.totalMemory() ~ 700 MB
runtime.freeMemory()  ~ 200 MB
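For context, figures like these come from the standard Runtime API; a minimal sketch (the class name and formatting are illustrative, not from the original post):
public class HeapStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("maxMemory   = " + rt.maxMemory() / mb + " MB");   // ceiling set by -Xmx
        System.out.println("totalMemory = " + rt.totalMemory() / mb + " MB"); // currently committed heap
        System.out.println("freeMemory  = " + rt.freeMemory() / mb + " MB");  // unused part of the committed heap
    }
}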
My block size is 128 MB and I'm not adding any additional data to it in my RecordReader. The output size from my mapper won't be more than 128 MB.
I also checked the available bytes in the InputStream (.available()), which gave an approximate value of 128 MB.
I'm also a bit confused about the JVM's memory allocation. Let's say I set my heap space to Xms=128m and Xmx=1024m. My TaskTracker has 16 GB of RAM and already has 8 jobs (8 JVMs) running. Let's assume the TaskTracker can allocate only 8.5 GB of RAM for JVMs and uses the rest for its internal purposes. So we have 8.5 GB of RAM available and 8 running tasks that currently use only 6 GB. Can a new task be assigned to the same TaskTracker, given that the 8 tasks already running might eventually require 8 GB, in which case the new task would not be able to get the user-requested heap size (1 GB) if it needed it?
PS: I know that not all of the heap needs to be in RAM (paging). My main question is: will the user be able to get the maximum requested heap size in every scenario?
I sometimes write Python programs for which it is very difficult to determine, before execution, how much memory they will use. As a result, I sometimes invoke a Python program that tries to allocate massive amounts of RAM, causing the kernel to swap heavily and degrade the performance of other running processes.
Because of this, I want to restrict how much memory the Python heap can grow. When the limit is reached, the program can simply crash. What's the best way to do this?
If it matters, much of the code is written in Cython, so the solution should take into account memory allocated there. I am not married to a pure-Python solution (it does not need to be portable), so anything that works on Linux is fine.
Check out resource.setrlimit(). It only works on Unix systems but it seems like it might be what you're looking for, as you can choose a maximum heap size for your process and your process's children with the resource.RLIMIT_DATA parameter.
EDIT: Adding an example:
import resource

rsrc = resource.RLIMIT_DATA
soft, hard = resource.getrlimit(rsrc)
print('Soft limit starts as:', soft)

resource.setrlimit(rsrc, (1024, hard))  # limit the data segment to one kilobyte

soft, hard = resource.getrlimit(rsrc)
print('Soft limit changed to:', soft)
# With the soft limit this low, further heap allocations will fail (MemoryError in Python).
I'm not sure what your use case is exactly, but it's possible you need to place a limit on the size of the stack instead, with resource.RLIMIT_STACK. Going past that limit will send a SIGSEGV signal to your process, and to handle it you will need to employ an alternate signal stack, as described in the setrlimit Linux man page. I'm not sure whether sigaltstack is exposed in Python, though, so it could prove difficult to recover from going over this boundary.
Have a look at ulimit. It allows resource quotas to be set. This may need appropriate kernel settings as well.
The following code keeps allocating memory, raising RLIMIT_DATA in small increments, until the process's peak resident set size reaches the specified maximum:
import resource

def set_memory_limit(memory_kilobytes):
    # ru_maxrss: peak memory usage (bytes on OS X, kilobytes on Linux)
    usage_kilobytes = lambda: resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    rlimit_increment = 1024 * 1024
    resource.setrlimit(resource.RLIMIT_DATA, (rlimit_increment, resource.RLIM_INFINITY))

    memory_hog = []

    while usage_kilobytes() < memory_kilobytes:
        try:
            for x in range(100):
                memory_hog.append('x' * 400)
        except MemoryError:
            # Raise the data-segment limit by one increment and keep allocating.
            rlimit = resource.getrlimit(resource.RLIMIT_DATA)[0] + rlimit_increment
            resource.setrlimit(resource.RLIMIT_DATA, (rlimit, resource.RLIM_INFINITY))

set_memory_limit(50 * 1024)  # 50 MB
Tested on a Linux machine.
I want to decrease the memory footprint of a Java application in order to reduce swapping. I've been thinking about decreasing the stack size (the -Xss parameter) for this purpose, but I'm not sure how stack memory is allocated and whether the default 512 KB per thread (for a 32-bit OS) always sits in resident memory regardless of how much of it is actually used.
Will decreasing the stack size lead to a decrease in swapping?
Update: please don't suggest profiling the application - that has already been done.
How many threads are you running? Even with a huge number of threads and a generous stack size (say, 10k threads and a 256 KB stack each) that is only about 2.5 GB of address space for stacks.
You say you are running a 32-bit JVM, so I assume this is a relatively small system. You have a few options (a per-thread alternative to -Xss is also sketched after this list):
Switch to a 64-bit JVM. Then you have tons of address space and the stack size should be inconsequential.
Your machine is too small. If ~2.5 GB of stack is a problem for your 10k+ threads, you are running too "big" an application on too "small" a machine. Do less in software or buy more hardware.
Reduce your thread count
The problem is actually elsewhere and you are barking up the wrong tree
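One more option, as an illustrative sketch only (the original answers don't mention it): instead of lowering -Xss globally, Java lets you request a smaller stack for individual threads through the Thread constructor. The stackSize argument is just a hint that some JVMs and platforms ignore:
public class SmallStackThread {
    public static void main(String[] args) throws InterruptedException {
        // Request ~128 KB of stack for this one thread instead of changing -Xss for all threads.
        Thread worker = new Thread(null,
                () -> System.out.println("running with a reduced stack-size hint"),
                "small-stack-worker",
                128 * 1024); // hint only; the JVM may round it up or ignore it
        worker.start();
        worker.join();
    }
}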
Yes, it will, of course: the stack works by the LIFO rule (last in, first out), and less stack means less memory to swap.
How much memory are you using, and how much do you need to save?
Since the stack is only 512 KB per thread, you would need around 200 threads before you reach an amount that might be worth saving (100 MB).
Since stack memory is used very often, I would consider it a poor target for being swapped out - unless you are dealing with a memory-constrained environment?
I have to allocate space for an array int input[] depending on the configuration parameters height and width.
int input[] = new int[height * width]; // this is line no 538
One of the configurations has height=8192 and width=8192, so the size of the array becomes 67108864. But when I do this I get an OutOfMemoryError:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at Test.main(Test.java:538)
I have run this program in Eclipse as well as from Cygwin, and I'm facing the same problem in both. I understand this is an Error rather than an Exception. How can I rectify it?
Since 8192 * 8192 * 4 = 256 M (integers are 4 bytes each), your matrix is using 256 MB of heap space by itself.
You can tell the JVM how much heap space should be available to your application. From running man java and looking through the nonstandard options:
-Xmxn
Specify the maximum size, in bytes, of the memory allocation
pool. This value must be a multiple of 1024 greater than 2MB.
Append the letter k or K to indicate kilobytes, or m or M to
indicate megabytes. The default value is chosen at runtime
based on system configuration. For more information, see
HotSpot Ergonomics
Examples:
-Xmx83886080
-Xmx81920k
-Xmx80m
On Solaris 7 and Solaris 8 SPARC platforms, the upper limit for
this value is approximately 4000m minus overhead amounts. On
Solaris 2.6 and x86 platforms, the upper limit is approximately
2000m minus overhead amounts. On Linux platforms, the upper limit
is approximately 2000m minus overhead amounts.
To use this option, you would start your application with a command like
java -Xmx1024m -jar foo.jar
In Eclipse, you can add command-line options as well. This page on eclipse.org describes how to add command-line arguments to a Java program. You should add -Xmx1024m (or some other sufficiently large heap specification) to the "VM arguments" section of the dialog shown on that site.
You probably have too little heap space to hold an array of the size you are targeting. You can increase the size of your heap with command line switches. For example, to set it to 256MB, include this switch:
-Xmx256m
If you multiply height * width * 4 (4 is the storage in bytes for an int) you can get a rough gauge of the amount of heap you will need, assuming the rest of the program does not need a significant amount. You will certainly need some more heap than that quick calculation suggests. Add maybe 20% extra, and try that out.
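As a rough illustration of that back-of-the-envelope estimate (the numbers and the ~20% slack factor are just the assumptions carried over from above):
public class HeapEstimate {
    public static void main(String[] args) {
        long height = 8192, width = 8192;
        long arrayBytes = height * width * 4L;         // 4 bytes per int: 256 MB for the array itself
        long withHeadroom = (long) (arrayBytes * 1.2); // ~20% extra for the rest of the program
        System.out.println("Suggested flag: -Xmx" + (withHeadroom / (1024 * 1024)) + "m");
    }
}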
To get a better number than a rule-of-thumb calculation, you can look into heap profilers. There are several open source options:
http://java-source.net/open-source/profilers
See http://javarevisited.blogspot.com/2011/05/java-heap-space-memory-size-jvm.html for a good discussion of the heap in Java.
The memory is not enough for your program; there may also be a memory leak. You can try the settings below, and if that does not solve it, increase the -Xmx value further:
java -Xmx1g -Xms512m
Depends on how much heap the JVM has. If you run it on the command line try adding -Xmx512m. If you work in an IDE add it to the "Run" properties.
An int is 32 bits (i.e. 4 bytes). So your array requires 8192*8192*4 bytes. This comes out at 256MB.
Java called with default arguments has only 64MB of heap space.
To get a larger heap, call Java using the -Xmx argument (Maximum memory size).
e.g. java -Xmx300M
Increase the memory arguments for your Java process by adding this flag to grow the heap. You might need to experiment to find the optimal size. This sets the "max" heap size; the default is probably quite small, and 64 MB is a common max size for many Java EE containers.
*Note: I'm not saying this is exactly the size you'll need; your particular case will dictate the size, which you may need to experiment with.
-Xmx256M
Is there a limit to increasing the max heap size in Java? I am wondering whether a large heap size can be set as long as the physical memory is available.
For example, if a server has 100 GB of RAM, can I set the max heap at 90 GB? I know that GC will halt the app, but I am just curious.
Thanks.
With a 32-bit JVM, the hard limit would be 4 GB, but the actual limit is lower: at least if you aren't running a 64-bit OS, some space must be left for non-heap memory, such as the JVM's own (non-Java) allocations, the stacks of all threads, and architecture/OS limitations. A 64-bit JVM has no such limitation, so you could set the limit to 90 GB, although I wouldn't recommend it for the reason you already pointed out.