Server Configuration:
Physical RAM - 16 GB
Swap memory - 27 GB
OS - Solaris 10
Physical memory free - 598 MB
Swap memory used - ~26 GB
Java version - Java HotSpot(TM) Server VM, 1.6.0_17-b04
My task is to reduce the used swap memory.
Solutions I have thought of:
1. Stop all Java applications and wait until enough physical memory is freed, then run "swapoff -a" (I have yet to find the Solaris equivalent of this command), wait until the used swap goes down to zero, and then run "swapon -a" (sketched below).
2. Increase physical memory.
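For reference, this is roughly what option 1 looks like on Linux (the Solaris equivalents are covered in the answers below); a sketch only, assuming all swap areas can be taken offline at once:
swapoff -a    # pages everything back into RAM; fails if there is not enough free physical memory
swapon -a     # re-enables every swap area listed in /etc/fstab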
I need help on the following points:
What's the Solaris equivalent of swapoff and swapon?
Will option 1 work to clear the used swap?
Thanks a Million!!!
First, Java and swap don't mix. If your Java app is swapping, you're simply doomed. Few things murder a machine like a swapped Java process; GC and swap are just a nightmare.
So, given that, if the machine with the Java process is swapping, that machine is too small. Get more memory, or reduce the load on the machine (including the heap of the Java process if possible).
The fact that your machine has almost no free physical memory (~600 MB) and almost no free swap space (~1 GB) is another indicator that the machine is way overloaded.
All manner of things could be faulting your Java process when resources are exhausted.
Killing the Java process will "take it out of swap": since the process no longer exists, it can't be in swap. The same goes for all the other processes. The reported swap usage may not instantly go down, but if a process doesn't exist it can't swap (barring the use of persistent shared memory buffers that have the misfortune of being swapped out, and Java doesn't typically use those).
There isn't really a good way that I know of to tell the OS to lock a specific program in to physical RAM and prevent it from being paged out. And, frankly, you don't want to.
Whatever is taking all of your RAM, you need to seriously consider reducing its footprint(s), or moving the Java process off of this machine. You're simply running into hard limits, and there's no more blood in this stone.
It's not quite clear to me what you're asking - stopping applications which take memory should free memory (and potentially swap space). It's also not clear from your description that Java is taking all the memory on your box - there's usually no reason to run a JVM allocating more memory than the physical memory on the box. Check how you start the JVM and how much memory is allocated to it.
Here is how to manage swap on Solaris:
http://www.aspdeveloper.net/tiki-index.php?page=SolarisSwap
a bit late to the party, but for Solaris:
list details of the swap space with:
swap -l
which lists swap slices. eg:
swapfile                dev    swaplo  blocks   free
/dev/dsk/c0t0d0s1       136,1      16  1206736  1084736
/export/data/swapfile   -          16    40944    40944
swapoff equivalent:
swap -d <file or device>
eg:
swap -d /dev/dsk/c1t0d0s3
swapon equivalent:
swap -a <file or device>
eg:
swap -a /dev/dsk/c1t0d0s3
Note: you may have to create a device before you can use the -a switch by editing the /etc/vfstab file and adding information describing the swap slice. eg:
/dev/dsk/c1t0d0s3   -   -   swap   -   no   -
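If there is no spare disk slice, a swap file can be added instead; a minimal sketch, assuming a 1 GB file at the hypothetical path /export/data/swapfile (the same path shown in the listing above):
mkfile 1g /export/data/swapfile    # create the backing file (Solaris mkfile)
swap -a /export/data/swapfile      # add it as swap immediately
# to make it survive a reboot, add a matching line to /etc/vfstab:
# /export/data/swapfile   -   -   swap   -   no   -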
Related
My problem is this: I have an executable jar file on an Ubuntu Linux server which starts 41 threads. Now I want to start a second jar file which creates a similar number of threads, and it doesn't work. I get the error:
java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
Even when I just run java -version I get this error.
I looked at my resource usage and the machine is only using 10% of the cores and 2 of 8 GB RAM.
When I run ulimit -a I get 62987 processes per user.
And when I look in /proc/sys/kernel/pid_max I get 32768.
I don't know what I should do. Can someone help me?
There is not enough detail in your question to give a definite answer or solution.
The problem is almost certainly not an OS imposed limit on the number of threads. It is most likely memory related.
You say that 2GB out of 8GB of RAM is in use, but you don't say how you are getting that figure. There are many different ways of measuring memory usage and they mean different things.
When a JVM starts a new thread, it goes to the operating system and asks for a block of memory to hold the thread stack. The default thread stack size is platform specific, but it is typically 512 KB to 1 MB. This can be modified by a JVM command line option (-Xss), or by the application using a Thread constructor that has a stack size parameter. Note that the stack segment is NOT allocated in the Java heap.
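For illustration, the command-line form looks like this (a sketch; app.jar and the 256k value are placeholders, not taken from the question). The per-thread alternative is the Thread(ThreadGroup, Runnable, String, long stackSize) constructor.
java -Xss256k -jar app.jar    # every thread gets a 256 KB stack, so more threads fit in the same address space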
So here are some of the possible explanations.
One possibility is that you are running a 32-bit JVM. On a Linux platform, that will limit you to a 4GB address space, and architectural issues will reduce the actual usable space to less than that. If you hit this limit, the OS will refuse a JVM's request for a stack segment. (Check that you have a 64-bit Java installation and that you haven't given the -d32 command line option; a quick way to check is sketched after this list.)
A second possibility is that you don't have enough swap space. The OS will only allocate a memory segment if it has enough physical RAM and swap space (page file space) to accommodate the segment. If it figures out that there isn't enough space to hold all of the pages for all of the applications currently running, it will refuse a JVM's request for a stack segment.
A third possibility is that you have configured your JVM with a really large heap, and that is reserving all of the available virtual memory at the system level.
A fourth possibility is that you have accidentally configured a non-default stack size using an -Xss option.
A final possibility is that you are actually running more than the 41 threads that you think.
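To narrow down which of these applies, a few quick checks (a sketch; the PID 1234 is a placeholder for your running JVM):
java -version                    # should report a 64-Bit Server VM
free -m                          # how much physical RAM and swap is really available
ulimit -a                        # per-user limits on processes and virtual memory
grep Threads /proc/1234/status   # how many threads the first JVM actually has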
I have a VPS with 20 GB RAM running Ubuntu. I am trying to give Java a maximum heap of 10 GB using JAVA_TOOL_OPTIONS, but I can't. Please see the attached screenshots; they show available memory as 17 GB. Setting the heap to 7 GB works, but a heap error occurs whenever it is > 7 GB. I have already installed GlassFish and allocated 3 GB to its cluster, and that works fine. So why am I not able to allocate more than 7 GB when I have 17 GB of RAM free?
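For context, setting the heap this way typically looks like the following (a sketch, assuming a bash shell):
export JAVA_TOOL_OPTIONS="-Xmx10g"
java -version    # the JVM confirms with: Picked up JAVA_TOOL_OPTIONS: -Xmx10g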
(Attached screenshots: top output, ulimit settings, java -version, overcommit memory settings.)
My Hardware is Virtual Hosted. Below is the configuration
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Stepping: 5
CPU MHz: 2266.802
BogoMIPS: 4533.60
Virtualization: VT-x
If I had to guess, you don't have a contiguous block of RAM that's 7GB, which does seem weird, but without knowing more about your VM's allocation it's hard to say.
Here's what Oracle has to say on the matter (http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#gc_oom):
The VM prints "OutOfMemoryError" and exits. Increasing max heap size
doesn't help. What's going on?
The Java HotSpot VM cannot expand its heap size if memory is
completely allocated and no swap space is available. This can occur,
for example, when several applications are running simultaneously.
When this happens, the VM will exit after printing a message similar
to the following.
Exception java.lang.OutOfMemoryError: requested <size> bytes
For more information, see the evaluation section of bug 4697804.
I think you may be out of swap space. When I add up the memory in the "virt" column, it comes to 40+ Gb.
Why is it taking that much swap space? What needs to be done to fix this?
Well, according to top you are running:
Glassfish - 9.1G
MySQL daemon - 5.4G
Hudson - 8.9G
Nexus - 6G
Glassfish - 6.9G (2nd instance)
and sundry other stuff. The "virt" is their total virtual memory footprint, and some of that will be code segments which may be shared.
They mostly seem to have a small "res" (resident memory) at the moment, which is why there is so much free RAM. However, if a few of them sprang into life at the same time, the demand for RAM would skyrocket and the system might start to thrash.
My recommendation would be to move the Hudson and Nexus services to a separate VM. Or if that is not possible, increase the size of your swap space ... and hope that you don't thrash.
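If you do go the route of enlarging swap, a minimal sketch on Linux (the 8G size and the /swapfile path are only examples):
fallocate -l 8G /swapfile    # or: dd if=/dev/zero of=/swapfile bs=1M count=8192
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile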
This is true. But is this a normal behaviour?
Yes.
is this how memory allocation works?
Yes. This is indeed how virtual memory works.
I am confused with Resident memory, virtual memory and physical memory now.
Rather than explain it in detail, I suggest you start by reading the Wikipedia page on virtual memory.
The reason I wasn't able to allocate more than 5G is that privvmpages is set to 5G.
You can see that information on Linux (OpenVZ-based VPSes) with the command "cat /proc/user_beancounters".
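For example, the relevant line can be pulled out like this (a sketch; the values are, as far as I know, counted in 4 KB pages):
grep privvmpages /proc/user_beancounters    # columns: held, maxheld, barrier, limit, failcnt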
Also, on a VPS the hosting provider will usually not allow you to customize this value; you have to move to a larger virtual server or a dedicated server to increase the limit.
This was the root cause. However, Stephen and Robin's explanations on the Virtual Memory and RES memory were spot on. Thanks Guys
I am just testing some JNI calls with a simple stand-alone Java program on a 16-core CPU. I am using a 64-bit JVM and the OS is Linux AS5. But as soon as I start my test program with the 64-bit C++ libraries, I see that the 'SIZE' column shows 16G. The top command output is something like this:
PID PSID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
3505 31483 xxxxxxx 23 16 0 16G 215M sleep 0:02 0.00% java
I understand that my heap is OK and that JNI memory can increase the process size, but I am confused as to why it starts with 16G in the SIZE column, which I believe is the virtual memory size. Is it really taking that much memory? Should I be concerned about it?
Odd, but it could be that your JNI calls are either somehow rapidly leaking memory or allocating a number of massive chunks of memory. What command line are you using to start the JVM, and what is your JNI code doing?
The comment by Greg Hewgill basically says it all: You're using only 215 MB of real memory. On my machine a nothing-doing Java process without any VM arguments gets 8 GB. You've got four times as many cores, so starting with twice as much memory makes sense to me.
As this number uses nothing but the virtual address space, which is something like 2^48 B or maybe 2^64 B, it's actually free. You could use -mx1G in order to limit the memory to 1 GB, but note that it is a hard limit and you'll get an OOME when the process needs more.
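For example (a sketch; JniTest is a placeholder for your test class):
java -Xmx1g -cp . JniTest    # caps the Java heap at 1 GB; native/JNI allocations are not covered by this limit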
We have a 32-bit JVM running under 64-bit RHEL5 on a box which has plenty of memory (32 GB). For different reasons, this process requires a pretty large managed heap and permgen space; currently, it runs with the following VM arguments:
-Xmx2200M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled
I have started seeing JVM crashes recently because it - seemingly - ran out of native memory (it could not create native threads, or failed to allocate native memory, etc.). These crashes were not (directly) related to the state of the managed heap, as when those crashes happened the managed heap was ~50-70% full.
I know that the memory reserved for the managed process is close to 2.5 G which leaves not more than 0.5G for the JVM itself, BUT
- I don't understand why 0.5 isn't enough for the JVM, even if there is constant GCing going on
- the real question is this: when I connect to the process using jconsole, then it says that (currently)
Committed virtual memory:
3,211,180 kbytes
Which is more than 3G. I can imagine that for some reason the JVM thinks it has 3,211,180 kbytes (3.06G) of memory, but when it tries to go over 3G the memory allocation fails.
Any ideas on
a) why this happens
b) how it is possible to avoid this
Thanks.
Mate
There is a lot of overhead in a typical JVM process that is not counted in the JVM's own accounting, because it is essentially taken by the native parts of the process - e.g. the .so files mapped in to provide native code for system libraries are not counted in the base accounting. Your typical shared library is mapped in at the top GB of the address space, so if you try to allocate memory into that region you will be denied, because it would overrun the shared libraries' memory region. Memory allocation on most OSes is performed by a simple bar (the program break) that is raised when you ask for more memory; when raising the bar would conflict with other uses, the allocation simply fails. Most of the details that follow are about this.
You need to avoid needing so much memory in a 32-bit process; that is the fundamental challenge. It is trivial to get a 64-bit VM, which will allow you to make use of far more memory than would otherwise be accessible - it is simply the right tool in this situation.
If you are using a 32-bit process, there is a high probability that you are hitting the effective address space limit of the 32-bit process. For Windows, this is about 2GB by default (roughly 3GB with the /3GB boot option) - everything above it is reserved for the kernel and I/O space. You can move this boundary, but it has a tendency to break applications/drivers that are designed for the 32-bit OS.
For Linux, you end up with roughly 3GB of usable address space per process; the rest is used up by things like the kernel and mapped-in shared libraries. The limit is referred to as the 'address space limit', and I presume it can be tuned.
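One thing you can at least inspect is the administrative per-process limit (a sketch; note this shows the rlimit, not the architectural 32-bit ceiling):
ulimit -v    # maximum virtual address space per process, in KB, or "unlimited"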
How to avoid it? Well, for the most part, you can't; it is a hard limitation of the 32-bit address space and of having the kernel/IO share the address space with the process on a 32-bit OS.
With 64-bit OSes you have (most of) the full 64-bit address space to play with, which is vastly more than you need.
When you start a JVM it allocates (reserves) its maximum heap size immediately. How much of that memory is actually used doesn't really matter. Your application can address about 3 GB, of which about 2.3 GB you have allocated to the heap and perm gen. The rest is available for shared libraries (typically around 200 MB) and thread stacks.
Worrying about why you can't use the full 3 GB of address space isn't very useful when the solution is relatively trivial (use a 64-bit JVM). I am assuming you don't have any shared libraries which are only available in 32-bit; if you do have additional shared libraries, they can easily be using hundreds of MB.
I have a system which cannot provide more than 1.5 GB for a Java process. Thus I need an exact way to specify the Java process's settings, covering every kind of memory the JVM uses and any possible fork.
One specific java process and system to illustrate my problem:
My current environment is java 1.6.0_18 under Ubuntu Linux 9.10.
I start large java server process with following JVM Options:
"-Xms512m -Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=512m"
Now, "top" command reports that the process uses 1.6gb memory...
Questions:
1 - How is the maximal space used by the Java process calculated? Please provide an exact formula if possible.
(Something like: max heap + max perm + stack + JVM space = maximal space)
2 - What is the infamous fork behaviour under Linux in my case? Will a forked JVM occupy an extra 1.6 GB (resulting in a total of 3.2 GB of used memory)?
3 - Which options must be used to absolutely ensure that no more than 1.5 GB is used at any time?
thank you
#rancidfishbreath: "ulimit" will ensure that Java cannot take more than the specified amount of memory. My purpose is to ensure that Java never even tries to do that.
top reports 1.6GB because MaxPermSize is on top of the maximum heap size. In your case you set MaxPermSize to 512m and Xmx to 1024m, which amounts to 1536m. Just like in other languages, an absolutely precise number cannot be calculated unless you know precisely how many threads are started, how many file handles are used, etc. The stack size per thread depends on the OS and JDK version; in your case it is 1024k per thread (if it is a 64-bit machine). So if you have 10 threads you use 10240k extra, as the stacks are not allocated from the heap (Xmx). Most applications that behave nicely work perfectly when setting a lower stack size and MaxPermSize. Try setting the thread stack size to 128k and, if you get a StackOverflowError (i.e. if you do lots of deep recursion), increase it in small steps until the problem disappears.
So my answer is essentially that you cannot control down to the MB how much the Java process will use, but you can come fairly close by setting e.g. -Xmx1024m -XX:MaxPermSize=384m -XX:ThreadStackSize=128k -XX:+UseCompressedOops. Even if you have lots of threads you will still have plenty of headroom before you reach 1.5GB. UseCompressedOops tells the VM to use narrow pointers even when running on a 64-bit JVM, thus saving some memory.
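Put together, the suggested settings might look like this (a sketch; server.jar is a placeholder, and -Xss128k is used for the per-thread stack size):
java -Xmx1024m -XX:MaxPermSize=384m -Xss128k -XX:+UseCompressedOops -jar server.jar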
At a high level, the JVM address space is divided into three main parts:
Kernel space: ~1GB, though this depends on the platform; on Windows it is more than 1GB.
Java heap: the Java heap specified by the user with -Xmx, -XX:MaxPermSize, etc.
The rest of the virtual address space goes to the JVM's native usage: the malloc/calloc done by the JVM, and the native thread stacks (one per Java thread, plus the JVM's own native threads for GC, etc.).
So you have roughly (4GB - kernel space of 1-1.25GB) ~2.75GB to play with, and you can set your Java/native heap accordingly. But generally you should keep at least 500MB for the JVM's native heap, otherwise there is a chance that you get a native OOM. So you need to make a trade-off here based on your application's Java heap utilization.
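As a rough worked example under the assumptions above (32-bit process, ~1.25GB reserved for the kernel, ~0.5GB kept back for the JVM's native side):
4GB address space - 1.25GB kernel space - 0.5GB JVM native heap ≈ 2.25GB
so -Xmx plus -XX:MaxPermSize should stay at or below roughly 2.25GB; the exact headroom varies by platform.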