RSS memory growing on idle Java processes

I am running a couple of small servers written in Java. One listens on a standard socket; the rest communicate with each other using ActiveMQ. I noticed something strange: if I leave the system idle for a couple of hours, the RSS memory grows or shrinks by several to tens of megabytes. I used jconsole to see what was going on in the servers, but the memory usage and object creation stayed relatively flat. I tested this with both Oracle Java and OpenJDK. I tried the commonly recommended fix of setting MALLOC_ARENA_MAX=4, but that had no effect. Is there something else going on in the JVM that I am not aware of, and is there a way to stop it?
Setup:
CentOS 7.4.1708 (2 GB RAM, 4 processors)
Oracle Java 8u162
OpenJDK 1.8.0.212
glibc 2.17

Is there something else going on in the JVM
Yes: garbage collection, JIT compilation, class loading/unloading, logging, I/O, etc. More details here.
The RSS of a Java process can easily go up and down by hundreds of megabytes; "several megs" is typically not an issue at all. To find where the native memory is allocated from, turn on the Native Memory Tracking feature and/or use async-profiler as described in this answer.
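For example, a minimal sketch of using Native Memory Tracking (assuming the server can be restarted with the flag below, and that <pid> is its process id; the jar name is illustrative):

java -XX:NativeMemoryTracking=summary -jar myserver.jar
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff

The summary.diff output shows which JVM areas (Java heap, class metadata, threads, code cache, GC bookkeeping, etc.) have grown since the baseline was taken, which usually narrows down where the RSS movement comes from.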

Related

Is there a way to determine the memory usage per application in weblogic?

We are running weblogic and appear to have a memory leak - we eventually run out of heap space.
We have 5 apps (5 war deployments) on the server.
Can you think of a way to gather memory usage on a per application basis?
(Then we can concentrate our search by looking through the code in the appropriate app.)
I have run jmap to get a heap dump and loaded the results in jvisualvm but it's unclear where the bulk of objects have come from - for example Strings.
I was thinking that weblogic perhaps uses separate classloaders per application and so we may be able to figure something out via that route...
Try using Eclipse MAT; it gives hints about possible memory leaks, among other features.
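As a rough sketch of that workflow (assuming you can run jmap against the WebLogic JVM's process id; the dump file name is illustrative):

jmap -dump:live,format=b,file=weblogic-heap.hprof <pid>

Opening the resulting .hprof file in MAT and running its Leak Suspects report groups the retained memory by dominator, which often points at the classloader, and therefore the deployed application, holding the bulk of the objects.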

VisualVM in production?

I'm considering running VisualVM against a production JVM to see what's going on there; it has started to consume too much CPU for some reason.
It must not result in a JVM failure so I'm trying to estimate all the risks.
The only issue that I see on their site that could potentially bring JVM down is related to class sharing and -Xshare JVM option, but afaik class sharing is not enabled in server mode and/or on x64 systems.
So is it really safe to run VisualVM against a production JVM, if it's not - what are the risks that one should consider, and how much load (CPU/memory) does running VisualVM against a JVM (and profiling with it) put on it?
Thanks
AFAIK VisualVM can be used in production, but I would only use it on a server which is lightly loaded. What you could do is wait for the service to slow down and later, when it's not used as much, test it to see if some of the collections are surprisingly large. Or you could trigger a heap dump and analyze it offline.
And you can't get stats on method calls without significant overhead. Java 6 and 7 are better than Java 5, but it could still slow your application by 30%, even with a commercial profiler.
Actually, you can get some information without a lot of overhead by using stack dumps. There is even a script to help you do this at https://gist.github.com/851961
This type of profiling is the least intrusive that you can get.
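A minimal sketch of that approach (assuming <pid> is the production JVM and a handful of samples a few seconds apart is enough):

for i in 1 2 3 4 5; do jstack <pid> > stack_$i.txt; sleep 5; done

Stacks that keep showing up across most of the samples are where the time is likely going. This is essentially poor man's sampling profiling, and it only pauses the JVM for the duration of each dump.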

Java RAM increases although heap stays the same? [duplicate]

Possible Duplicate:
Limit jvm process memory on ubuntu
In my application I'm uploading documents to a server, which does some analysis on them.
Today I analyzed my application using jconsole.exe and heap dumps to find out whether I have memory issues or a memory leak. I thought I might be suffering from one, since the application's RAM usage grows considerably while it is running.
As I watched the heap / code cache / perm gen etc. with jconsole over several runs, I was surprised to see the following:
picture link: https://www7.pic-upload.de/13.06.12/murk9qrka8al.png
As you can see in the jconsole view on the right, the heap increases while I'm doing analysis-related work, but it also shrinks back to its normal size when the work is over. On the left you can see the "htop" output of the server the application is deployed on. And there it is: although the heap behaves normally and the garbage collector appears to be running correctly, the RAM is incredibly high at almost 3.2 GB.
This really confuses me. I wondered whether my Java VM stack could have something to do with it, but from what I found, the VM stack is a small amount of memory of only a few megabytes (or even only kilobytes).
My technical background:
The application is running on glassfish v.3.1.2
The database is running on MySQL
Hibernate is used as ORM framework
Java version is 1.7.0_04
It's implemented using VAADIN
MySQL database and glassfish are the only things running on this server
I'm constructing XML-DOM-style documents using JAXB during the analysis and save them in the database
Uploaded documents are either .txt or .pdf files
OS is linux
Solution?
Do you have any ideas why this happens and what I can do to fix it? I'm really surprised, since I thought the memory problems came from a memory leak that made the heap explode. But now the heap isn't the problem; it's the RAM that goes higher and higher while the heap stays at the same level, and I don't know how to resolve it.
Thanks for every thought you're sharing with me.
Edit: Maybe I should also point out that this behaviour currently makes it impossible for me to let other people use my application. When the RAM is full and the server no longer responds, I'm stuck.
Edit 2: Maybe I should also add that the RAM keeps increasing after every successful analysis run.
There are lots more things that use memory in a JVM implementation than the heap settings.
The heap setting via -Xmx only controls the Java heap; it doesn't control consumption of native memory by the JVM, which varies with the implementation.
From the article "Thanks for the Memory" (Understanding How the JVM Uses Native Memory on Windows and Linux):
Maintaining the heap and garbage collector uses native memory you can't control.

More native memory is required to maintain the state of the memory-management system maintaining the Java heap. Data structures must be allocated to track free storage and record progress when collecting garbage. The exact size and nature of these data structures varies with implementation, but many are proportional to the size of the heap.
and the JIT compiler uses native memory just like javac would:

Bytecode compilation uses native memory (in the same way that a static compiler such as gcc requires memory to run), but both the input (the bytecode) and the output (the executable code) from the JIT must also be stored in native memory. Java applications that contain many JIT-compiled methods use more native memory than smaller applications.
and then you have the classloader(s), which use native memory:

Java applications are composed of classes that define object structure and method logic. They also use classes from the Java runtime class libraries (such as java.lang.String) and may use third-party libraries. These classes need to be stored in memory for as long as they are being used. How classes are stored varies by implementation.
I won't even start quoting the section on Threads; I think you get the idea. The Java heap isn't the only thing that consumes memory in a JVM implementation, not everything goes in the JVM heap, and the heap takes up more native memory than what you specify, for management and bookkeeping.
Native Code
App servers often include native code that runs outside the JVM but still shows up to the OS as memory associated with the process that controls the app server.
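A hedged sketch of how you might see where that native memory sits and put rough caps on some of the non-heap areas (the values are purely illustrative and not tuned for GlassFish; on Java 7 the class area is PermGen, so -XX:MaxPermSize applies rather than the later -XX:MaxMetaspaceSize):

pmap -x <pid> | sort -n -k3 | tail -20
java -Xmx512m -XX:MaxPermSize=128m -Xss256k -XX:ReservedCodeCacheSize=64m -jar myapp.jar

The pmap line lists the largest resident mappings of the process; the java line shows the kind of flags that bound the heap, PermGen, per-thread stacks, and the JIT code cache.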

Why does System.gc() seem to have no effect on some JVMs

I have been developing a small Java utility that uses two frameworks: Encog and Jetty to provide neural network functionality for a website.
The code is 'finished' in that it does everything it needs to do, but I have some problems with memory usage. When running on my development machine, the memory usage fluctuates between about 4 MB and 13 MB while the application is doing things (training neural networks), and at most it uses about 18 MB. This is very good usage, and I think it is due to the fact that I call System.gc() fairly regularly. I do this because processing time doesn't matter to me, but memory usage does.
So it all works fine on my machine, but as soon as I put it online on our server (shared unix hosting with memory limits) it uses about 19MB to start with and rises to hundreds of MB of memory usage when doing things. These are the same things that I have been doing in testing. The only way, I believe, to reduce the memory usage, is to quit the application and restart it.
The only difference that I can tell is the Java Virtual Machine it is being run on. I do not know much about this, and I have tried to find out why it is acting this way, but a lot of the documentation assumes deep knowledge of Java and virtual machines. Could someone please help me with some reasons why this may be happening and perhaps some things to try to stop it?
I have looked at using GCJ to compile the application, but I don't know if this is something I should be putting a lot of time into and whether it will actually help.
Thanks for the help!
UPDATE: Developing on Mac OS 10.6.3, and the server is on a Unix OS, but I don't know which. (The server is from WebFaction.)
I think it is due to the fact that I call System.gc() fairly regularly
You should not do that, it's almost never useful.
A garbage collector works most efficiently when it has lots of memory to play with, so it will tend to use a large part of what it can get. I think all you need to do is to set the max heap size to something like 32MB with an -Xmx32m command line parameter - the default depends on whether the JVM believes it's running on a "server class" system, in which case it assumes that you want the application to use as much memory as it can in order to give better throughput.
BTW, if you're running on a 64-bit JVM on the server, it will legitimately need more memory (usually about 30%) than on a 32-bit JVM due to larger references.
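A hedged example of what that might look like (the jar name is illustrative, not taken from the question):

java -Xmx32m -jar neuralnet-server.jar

With the maximum heap capped at 32 MB, the collector is forced to run more often instead of letting the heap grow towards the much larger server-class default.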
Two points you might consider:
Calls to System.gc() can be disabled with a command-line parameter (-XX:+DisableExplicitGC); I think the behaviour also depends on the GC algorithm the VM uses. Normally, invoking the GC should be left to the JVM.
As long as there is enough memory available for the JVM, I don't see anything wrong with using that memory to improve application and GC performance. As Michael Borgwardt said, you can restrict the amount of memory the VM uses on the command line (see the sketch below).
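For example, a minimal sketch combining both points (the class name is hypothetical):

java -XX:+DisableExplicitGC -Xmx64m TrainingServer

With -XX:+DisableExplicitGC the explicit System.gc() calls become no-ops, and -Xmx keeps the heap from growing past a fixed ceiling on the shared host.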
Also, you may want to look at what mode the JVM is started in when you deploy it online. My guess is it's a server VM.
Take a look at the differences between the two right here on Stack Overflow. Also, see which garbage collector is actually running on the deployment, and see if you can tweak the GC behaviour or change the GC algorithm. See the -X options if it's a Sun JVM.
Basically the JVM takes the amount of memory it is allowed to as needed, in order to make the "new" operation as fast as possible (this is a science in itself).
So if you have a lot of objects being used, and then discarded, you will slowly and surely fill up the available memory. Then you can ask for garbage collection, but it is just a hint, and the JVM may choose not to listen.
So, you need another mechanism to keep memory usage down. The typical approach is to limit the amount of memory with -X options, but be careful since the JVM you use on your PC may be very different from the one you deploy on, and the memory needs may therefore be different.
Is there a deliberate requirement for low memory usage? If not, then just let it run and see how the JVM behaves. Use jvisualvm to attach and monitor.
Perhaps the server uses more memory because there is a higher load on your app and so more threads are in use? Jetty will use a number of threads to spread out the load if there are a lot of requests. It's worth a look at the thread count on the server versus on your test machine.
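A quick, hedged way to compare the two thread counts (assuming jstack is available on both machines and <pid> is the Jetty process):

jstack <pid> | grep -c 'java.lang.Thread.State'

Each Java thread contributes one 'java.lang.Thread.State' line to the dump, so the count approximates the number of live Java threads; each thread also carries its own stack outside the heap.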

Java using too much memory on Linux?

I was testing the amount of memory Java uses on Linux. When just starting up an application that does absolutely NOTHING, it already reports that 11 MB is in use. When doing the same on a Windows machine, about 6 MB is in use. These were measured with the top command and the Windows Task Manager. The VM I use on Linux is 1.6.0_11, and the HotSpot VM is Server 11.2. Starting the application with -client did not change anything.
Why does java take this much memory? How can I reduce this?
EDIT: I measure memory using the Windows Task Manager, and on Linux I open a terminal and run top.
Also, I am only interested in how to reduce this, or whether I even CAN reduce this. I'll decide for myself whether a couple of megs is a lot or not. It's just that the difference of 5 MB between Windows and Linux is strange, and I want to know if I can get the same on Linux too.
If you think 11 MB is "too much" memory... you'd better avoid using Java entirely. Seriously, the JVM needs to do quite a lot of stuff (bytecode verifier, GC, loading all the essential classes), and in an age where average desktop machines have 4 GB of RAM, keeping the base JVM overhead (and memory use in general) very low is simply not a design priority.
If you need your app to run on an embedded system (pretty much the only case where 11 MB might legitimately be considered "too much"), then there are special JVMs designed for such systems that use less RAM, but at the cost of lacking many of the features and/or performance of mainstream JVMs.
You can control the heap size; otherwise, default values will be used. Running java -X gives you an explanation of the meaning of these switches.
e.g.
JAVA_OPTS="-Xms6m -Xmx6m"
java ${JAVA_OPTS} MyClass
The question you might really be asking is, "Do Windows Task Manager and Linux top report memory in the same way?" I'm sure others can answer this better than I can, but I suspect you may not be making an apples-to-apples comparison.
Try using the jconsole application on each respective machine to do a more granular inspection. You'll find jconsole in the bin directory of your SDK.
There is also a very extensive discussion of java memory management at http://www.ibm.com/developerworks/linux/library/j-nativememory-linux/
The short answer is that how memory is allocated is a more complex question than a single figure at the top of a simplified system utility suggests.
Both top and Task Manager report how much memory has been allocated to a process, not how much the process is actually using, so I would say it's not an apples-to-apples comparison. Regardless, in the age of gigabytes of memory, what's a couple of megs here or there on startup?
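One way to see that difference on Linux (a sketch; vsz and rss are standard ps output fields):

ps -o pid,vsz,rss,comm -p <pid>

VSZ, the virtual size reserved by the process, is usually much larger than RSS, the resident set actually backed by physical memory, so which of the two a tool reports changes the picture considerably.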
Linux and Windows are radically different operating systems and use RAM very differently. Windows kind of allocates as you go, and Linux caches more at once, and prepares for the future, so that the next operations are smooth.
This explanation is not quite right, but it's close enough for you.
