I've written a program that does the equivalent of "tail -f" on a number of log files on a machine, using the Tailer class from Apache Commons IO. Basically it runs in a thread, opens the file as a RandomAccessFile, checks its length, seeks to the end, and so on. It sends all collected log lines to a client.
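For reference, it follows the usual Commons IO pattern, roughly like the sketch below (the path and the listener body are simplified placeholders, not my actual code):

import java.io.File;
import org.apache.commons.io.input.Tailer;
import org.apache.commons.io.input.TailerListenerAdapter;

public class LogWatcher {
    public static void main(String[] args) {
        // one Tailer (and therefore one background thread) per watched file
        File logFile = new File("/var/log/app/example.log");  // illustrative path
        TailerListenerAdapter listener = new TailerListenerAdapter() {
            @Override
            public void handle(String line) {
                // in the real program this line is forwarded to the client
                System.out.println(line);
            }
        };
        // check every 1000 ms; 'true' means start tailing from the end of the file
        Tailer.create(logFile, listener, 1000, true);
    }
}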
The somewhat uncomfortable thing is that on Linux it can show an enormous amount of VIRT memory. Right now top says 16.1g VIRT (!!) and 203m RES.
I have read up a little on virtual memory and understand that it's often "nothing to worry about". But still, 16 GB? Is that really healthy?
When I look at the process with pmap, none of the log file names are shown, so I guess they are not memory mapped. And I read (man pmap) that "[ anon ]" in the "Mapping" column of the pmap output means "allocated memory". Now what does that mean? :)
However, pmap -x shows:
Address Kbytes RSS Dirty Mode Mapping
...
---------------- ------ ------ ------
total kB 16928328 208824 197096
...so I suppose it's not actually residing in RAM after all. But how does it work memory-wise when opening a file like this, seeking to the end of it, and so on?
Should I worry about all those GB of VIRT memory? It "watches" 84 different log files right now, and their total size on disk is 31414239 bytes (about 30 MB).
EDIT:
I just tested it on another, less "production-like", Linux machine and did not get the same numbers. VIRT got to ~2.5 GB at most there. I found that some of the default JVM settings were different (checked with "java -XX:+PrintFlagsFinal -version"):
Flag                 Small machine    Big machine
InitialHeapSize           62690688     2114573120
MaxHeapSize             1004535808    32038191104
ParallelGCThreads                2             13
So I guess it grabs more heap on the big machine, since the max limit is (way) higher? I also guess it's a good idea to always specify those values explicitly.
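For anyone else comparing machines, the relevant lines can be filtered out of that (rather long) flag listing, for example with:
java -XX:+PrintFlagsFinal -version | grep -E 'HeapSize|ParallelGCThreads'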
A couple of things:
Each Tailer instance will have its own thread, and each thread has a stack. By default (on a 64-bit JVM) thread stacks are 1 MB each, so you will be using 84 MB for stacks. You might consider reducing that using the -Xss option at launch time.
A large virt size is not necessarily bad. But if it translates into a demand on physical memory ... and you don't have that much ... then that really is bad.
Hmm I actually run it without any JVM args at all right now. Is that good or bad? :-)
I understand now. Yes, it is bad. The JVM's default heap sizing on a large 64-bit machine is far more than you really need.
Assuming that your application is only doing simple processing of the log lines, I recommend that you set the max heap size to a relatively small value (e.g. 64 MB). That way, if you do get a leak it won't impact the rest of your system by gobbling up lots of real memory.
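As a sketch (the jar name is just a placeholder, and the values are a starting point rather than a tuned recommendation), the launch command would look something like:
java -Xss256k -Xmx64m -jar logwatcher.jar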
Related
In our scenario, we launch 20+ Java processes to handle our business logic. We find that each process eats 500 MB+ of memory, so in total we consume several GB on the server, and the customer complains that their server becomes slow once we launch our processes.
I ran a trial: even the simplest "HelloWorld" program on HP-UX eats 500 MB of memory! If I set -Xmx for it, it seems it can't be cut down to less than 320 MB. We would actually like each process to consume only about 64 MB.
So, does anyone know how to limit a Java program's memory to 64-128 MB on HP-UX (Java 6)?
Looks relevant, I'd say. It discusses memory management and Java on HP-UX.
How are you measuring the amount of memory the Java processes are using? top and process lists don't always tell you what you might think. It might be worth running jvisualvm (part of the Java 6 JDK) to see what the memory breakdown is. If you are running a trivial hello-world program, then something is going wrong if it really is using that much memory.
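If the goal really is a 64-128 MB footprint, the knobs usually involved on a HotSpot-style JVM are the heap, the permanent generation and the thread stacks, set explicitly at launch; something along these lines (the numbers are only illustrative, and native/shared-library overhead still comes on top of them):
java -Xms64m -Xmx64m -XX:MaxPermSize=32m -Xss256k HelloWorld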
I am just testing some JNI calls with a simple stand-alone Java program on a 16-core CPU. I am using a 64-bit JVM, and the OS is Linux AS5. But as soon as I start my test program with the 64-bit C++ libraries, I see that the 'SIZE' column shows 16G. The top command output is something like this:
PID PSID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
3505 31483 xxxxxxx 23 16 0 16G 215M sleep 0:02 0.00% java
I understand that my heap is OK and that JNI memory can increase the process size, but I am confused as to why it starts with a SIZE of 16G, which I believe is the virtual memory size. Is it really taking that much memory? Should I be concerned about it?
Odd, but it could be that your JNI calls are either somehow rapidly leaking memory or allocating a number of massive chunks of memory. What command line are you using to start the JVM, and what is your JNI code doing?
The comment by Greg Hewgill basically says it all: you're using only 215 MB of real memory. On my machine a do-nothing Java process without any VM arguments gets 8 GB of virtual size. You've got four times as many cores, so starting with twice as much memory makes sense to me.
As this number uses nothing but virtual address space, which is something like 2^48 bytes (or 2^64 bytes in theory), it's essentially free. You could use -Xmx1G (or the short form -mx1G) to limit the heap to 1 GB, but note that it is a hard limit and you'll get an OOME when the process needs more.
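A quick way to see what ceiling the JVM actually ended up with, whatever flags were or were not passed, is to ask the runtime itself; a minimal sketch:

public class MaxHeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the most heap the JVM will attempt to use (roughly -Xmx)
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}

Run it with and without -Xmx1G and the reported number should change accordingly.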
Server Configuration:
Physical RAM - 16 GB
Swap Memory - 27 GB
OS - Solaris 10
Physical Memory Free - 598 MB
Swap Memory Used - ~26 GB
Java Version - Java HotSpot(TM) Server VM - 1.6.0_17-b04
My task is to reduce the used swap memory.
Solutions I have thought of:
Option 1: Stop all Java applications and wait until enough physical memory has been freed, then execute "swapoff -a" (I have yet to find the Solaris equivalent of this command), wait until the used swap goes down to zero, and then execute "swapon -a".
Option 2: Increase physical memory.
I need help on the following points:
What's the Solaris equivalent of swapoff and swapon?
Will option 1 work to clear the used swap?
Thanks a Million!!!
First, Java and swap don't mix. If your Java app is swapping, you're simply doomed. Few things murder a machine like a swapped Java process; GC plus swap is just a nightmare.
So, given that, if the machine with the Java process is swapping, that machine is too small. Get more memory, or reduce the load on the machine (including the heap of the Java process, if possible).
The fact that your machine has almost no free physical memory (~600 MB) and almost no free swap space (~1 GB) is another indicator that the machine is way overloaded.
All manner of things could be faulting your Java process when resources are exhausted.
Killing the Java process will "take it out of swap": once the process no longer exists, it can't be in swap. The same goes for all of the other processes. The "swap used" figure may not instantly go down, but if a process doesn't exist it can't swap (barring the use of persistent shared memory buffers that have the misfortune of being swapped out, and Java doesn't typically use those).
There isn't really a good way that I know of to tell the OS to lock a specific program in to physical RAM and prevent it from being paged out. And, frankly, you don't want to.
Whatever is taking all of your RAM, you need to seriously consider reducing its footprint(s), or moving the Java process off of this machine. You're simply running into hard limits, and there's no more blood in this stone.
It's not quite clear to me what you're asking - stopping applications that take memory should free memory (and potentially swap space). It's also not clear from your description that Java is taking all the memory on your box - there's usually no reason to run a JVM allocated more memory than the physical memory on the box. Check how you start the JVM and how much memory is allocated.
Here is how to manage swap on Solaris:
http://www.aspdeveloper.net/tiki-index.php?page=SolarisSwap
A bit late to the party, but for Solaris:
List the details of the swap space with:
swap -l
which lists swap slices. eg:
swapfile               dev    swaplo   blocks     free
/dev/dsk/c0t0d0s1      136,1       16  1206736  1084736
/export/data/swapfile      -       16    40944    40944
swapoff equivalent:
swap -d <file or device>
eg:
swap -d /dev/dsk/c1t0d0s3
swapon equivalent:
swap -a <file or device>
eg:
swap -a /dev/dsk/c1t0d0s3
Note: you may have to declare the device before you can use the -a switch, by editing the /etc/vfstab file and adding an entry describing the swap slice, e.g.:
/dev/dsk/c1t0d0s3   -   -   swap   -   no   -
I am using Rackspace as a hosting provider, on their Cloud Server hosting with the 256 MB plan.
I am using Geronimo 2.2 to run my java application.
The server starts up no problem and loads Geronimo quite fast. However, when I start to deploy my web application it takes forever, and once it is deployed, it takes forever to navigate through the pages.
I've been monitoring the server activity; the CPU is not very busy, but 60% of the memory is being used up. Could this be the problem?
If so, what are my options? Should I consider upgrading this cloud server to something with more RAM, or changing hosting providers to better suit my needs?
Edit:
I should note that, even if I don't deploy my application and just have Geronimo loaded, I sometimes get a connection timeout when I try to shut Geronimo down.
Also, the database is on the same server as the application (though I wouldn't say it's query-intensive).
Update:
After what #matiu suggested, I tried running free -m, and this is the output I got:
total used free shared buffers cached
Mem: 239 232 6 0 0 2
-/+ buffers/cache: 229 9
Swap: 509 403 106
This is a totally different result from running ps ux, which is how I got my earlier 60% figure.
I also did an iostat check: about 25% iowait time, and the device is constantly reading and writing.
Update:
I upgraded my hosting to 512 MB and now it is up to speed! Something I should note: I had forgotten about Java's permanent generation memory, which Geronimo also uses. So it turns out I did need more RAM, and more RAM did solve my problem. (As expected.) Yay.
I'm guessing you're running into 'swapping'.
As you'll know, Linux swaps some memory out to disk. This is great for memory that's not accessed very often.
When Java starts eating heaps and heaps of memory, Linux starts:
Swapping memory block A out to disk to make space to read in block B
Reading/writing block B
Swapping block B to disk to make space for some other block.
Since disk is thousands of times slower than RAM, as memory usage increases your machine grinds closer and closer to a halt.
With 256 MB Cloud Servers you get 512 MB of Swap space.
Checking:
You can check whether this is the case with free -m; this page shows how to read the output:
Next I'd check with 'iostat 5' to see what the disk I/O rate on the swap partition is. A write rate of 300 blocks a second or more means you're almost dead in the water. You want to keep the write rate of the swap partition below 50 blocks a second and the read rate below 500 blocks a second; if possible both should be zero most of the time. Remember, disk is thousands of times slower than RAM.
You can check if it's Java eating the ram by running top and hitting shift+m to order the processes by memory consumption.
If you want, you can disable the swap partition with swapoff -a, then open up the web console and hit the site a bit; you'll soon see error messages in the console like 'OOM Killed process xxx' (OOM stands for Out Of Memory). If you see those, that's Linux trying to satisfy memory requests by killing processes. Once that happens, it's best to hard reboot.
Fixing:
If it's Java using the RAM, this link might help.
I think the easy fix would be just to upgrade the size of the Cloud Server.
You may find that a different Java runtime works better.
If you run it in a 32-bit chroot it may use less RAM.
You should consider running a virtual dedicated Linux server, from something like Linode.
You'd have to worry about how to start a Java service, and about things like firewalls, but once you get it right you are in effect your own hosting provider, able to do anything an actual standalone Linux box can do.
As for memory, I wouldn't upgrade until you have evidence that you do not have enough. 60% being used up is less than 100% used up...
Java normally assumes that it can take whatever it is assigned. Meaning, if you give it a max of 200 MB, it thinks it's OK to hold on to 200 MB even though it's using much less.
There is a way to make Java use less memory: the -Xincgc incremental garbage collector. It actually ends up giving chunks of memory back to the system when it no longer needs them. This is a bit of a well-kept secret, really; you won't see many people point it out...
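As a sketch (the jar name is just a placeholder), enabling it is simply a matter of adding the flag at launch:
java -Xincgc -Xmx200m -jar yourapp.jar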
Based on my experience, memory and CPU load on VPSes are closely related: when the application server takes up all the available memory, CPU usage starts to skyrocket, eventually making the application inaccessible.
This is just a side effect though - you really need to investigate where your problem originates!
If the memory consumption is very high, there can be multiple causes:
It's normal - maybe you have reached a point where all the processes (application server, applications within it, background processes, daemons, the operating system, etc.) put together genuinely need that much memory. This is the least probable scenario.
Memory leak - this can happen due to a bug in the framework or some library (not likely) or in your own code (possible). It can be monitored and solved.
A huge number of requests - each request takes both CPU and memory to process. Have a look at the correlation between requests per second and memory consumption; this, too, can be monitored and resolved.
If you are interested in CPU usage:
Again, monitor the requests to your application. For a constant number of requests, nothing extraordinary should happen.
One component may be exhausting most of the resources (maybe your database is installed on the same server and uses all the CPU due to inefficient queries? The slow query log would help here).
As you can see, it's not a trivial task, but there is tool support that can help you out. I personally use JavaMelody and Probe.
I made a server-side application that uses 18 MB of non-heap memory and around 6 MB of heap, out of a 30 MB max. I set the heap limits with -Xms and -Xmx. The problem is that when I run the program on an Ubuntu server it takes around 170 MB instead of 18+30, or at most 100 MB. Does anyone know how to keep the VM down to 100 MB?
The JVM uses heap plus other memory, such as thread stacks and shared libraries. The shared libraries can be relatively large, but they don't use real memory unless they are actually touched. If you run multiple JVMs, the libraries are shared between them, but you cannot see that in the per-process figures.
In a modern PC, 1 GB of memory costs around $100, so squeezing out every last MB may not be as important as it used to be.
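If you want to see the split the JVM itself reports (which is what JConsole and similar tools show), you can ask it directly; a minimal sketch below. Note that this only covers heap and non-heap as tracked by the JVM - thread stacks and mapped shared libraries come on top of it in the OS process size:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryBreakdown {
    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = bean.getHeapMemoryUsage();
        MemoryUsage nonHeap = bean.getNonHeapMemoryUsage();
        // getUsed() returns bytes; convert to MB for readability
        System.out.println("Heap used:     " + heap.getUsed() / (1024 * 1024) + " MB");
        System.out.println("Non-heap used: " + nonHeap.getUsed() / (1024 * 1024) + " MB");
    }
}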
In response to your comment:
"I have made some tests with JConsole and VisualVM: Xmx 40 MB, Xms 25 MB. The problem is that I am restricted to 512 MB since it's a VPS and I can't pay for more right now. The other thing is that with 100 MB each I could have at least 3 processes running."
The problem is, you are going about it the wrong way. Don't try to get your JVM super small so you can run 3 JVMs; combine everything into one JVM. If you have 512 MB of memory, then make one JVM with 256 MB of heap and let it do everything. You can have tens or hundreds of threads in a single JVM. In most cases this will perform better and use less total memory than trying to run many small JVMs, roughly along the lines sketched below.
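A minimal sketch of that shape, assuming the individual workers can be expressed as Runnable tasks (the worker bodies here are placeholders):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CombinedServer {
    public static void main(String[] args) {
        // one JVM (started with e.g. -Xmx256m) hosting all the workers as threads
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 3; i++) {
            final int workerId = i;
            pool.submit(new Runnable() {
                public void run() {
                    // each former separate process becomes a task/thread here
                    System.out.println("Worker " + workerId + " running in the shared JVM");
                }
            });
        }
        pool.shutdown();
    }
}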