How do I determine active memory usage in Java

How can I determine the memory usage of referenced objects only in Java, i.e. exclude "dead" objects from the memory usage measured while a Java application is running?
I want to display this information and trigger an alert if it reaches a certain threshold. I also want to use this to measure how much memory is taken up when a file is imported into my application.
My application consists of many processes that all run at the same time, one of which imports files into memory and then into a database. If I measure memory usage with Runtime.getRuntime().freeMemory() or a MemoryPoolMXBean while all the other processes are running, memory usage goes up and up because of those processes, and because the GC doesn't run "in real time" the reported usage includes dead objects as well as referenced ones. This gives me no clear indication of what is actually taking up memory at the time.
Is there a way to determine memory used by referenced objects only at any time?

You can look into JConsole and see if that suits your need.
There is also VisualVM.
They let you monitor the app, but I am not sure how you would do that from within your own application to trigger an alarm once memory is low (see the sketch below for one approach).
Also, you can use WeakReference and SoftReference if you want objects to be garbage-collected quicker.
I found a good article on how to query the size of a Java object. It is slightly long, so I cannot post any of it here; here is the link: http://www.javamex.com/tutorials/memory/instrumentation.shtml There is also a SO question on the same topic: determining java memory usage
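If you want the alert inside your own application, one approach, sketched here as an illustrative example rather than a definitive recipe: the standard java.lang.management API reports each pool's usage as measured right after a GC (getCollectionUsage()), which by construction counts only objects that survived the collection, i.e. referenced ones, and it can emit a JMX notification when that post-GC usage crosses a threshold. The 80% figure and the printed message below are arbitrary choices for illustration.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryNotificationInfo;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryType;
    import javax.management.NotificationEmitter;

    public class LiveHeapAlert {
        public static void main(String[] args) {
            // The memory MXBean emits JMX notifications on threshold crossings.
            NotificationEmitter emitter =
                    (NotificationEmitter) ManagementFactory.getMemoryMXBean();
            emitter.addNotificationListener((notification, handback) -> {
                if (MemoryNotificationInfo.MEMORY_COLLECTION_THRESHOLD_EXCEEDED
                        .equals(notification.getType())) {
                    System.err.println("Alert: live (post-GC) heap usage crossed the threshold");
                }
            }, null, null);

            // Arm a collection-usage threshold on each heap pool that supports one.
            // getCollectionUsage() is sampled right after a collection, so it counts
            // referenced objects only, not garbage still awaiting collection.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.HEAP
                        && pool.isCollectionUsageThresholdSupported()) {
                    long max = pool.getUsage().getMax();
                    if (max > 0) {
                        pool.setCollectionUsageThreshold((long) (max * 0.8)); // 80% chosen arbitrarily
                    }
                }
            }
        }
    }

Polling pool.getCollectionUsage().getUsed() on demand (where the pool supports it) gives the same live-objects-only figure for display.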

You can use the Eclipse Memory Analyzer (MAT).

Related

VisualVM: Ideal Heap Memory Usage Graph

I am monitoring my Java application (written with JDK 1.7) using VisualVM. The graph below shows heap memory usage for the duration that this application ran.
Looking at this graph, one sees that there are a lot of spikes in it. These spikes indicate the creation of objects by the application. Once the application is done with them, they are reclaimed by the GC (invoked implicitly in this case).
Also, here is a screenshot of the memory profiler while the application is still running.
To me the up and down nature of the graph indicates efficient usage of java objects. Is this inference right ?
What is the ideal nature of the heap usage graph that one should aim for ?
Are there any other ways that I can improve on the heap memory usage by my application ?
To me the up and down nature of the graph indicates efficient usage of java objects. Is this inference right ?
I would say it's the efficient use of the garbage collector. I would suggest that creating fewer objects would be more efficient.
What is the ideal nature of the heap usage graph that one should aim for ?
That depends on your application. I tend to aim for one which is almost completely flat.
Are there any other ways that I can improve on the heap memory usage by my application ?
Loads:
Create less garbage. Use your memory profiler to find out where garbage is being created.
Make the heap larger so it doesn't GC as often.
Move your retained data off-heap (you don't appear to have a lot).
In your case, the best option would be to reduce the amount of garbage you are producing.
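As a toy illustration of "create less garbage" (the names here are invented, not taken from the question): hoisting allocations out of a hot loop is often the cheapest win.

    import java.util.Arrays;
    import java.util.List;

    public class GarbageDemo {
        public static void main(String[] args) {
            List<String> names = Arrays.asList("a", "b", "c");

            // Wasteful: a new StringBuilder (plus an intermediate String) per iteration.
            for (String name : names) {
                System.out.print(new StringBuilder("name=").append(name).append('\n'));
            }

            // Better: one builder reused across iterations; setLength(0) resets it
            // without allocating a new backing array, so far less garbage per element.
            StringBuilder sb = new StringBuilder();
            for (String name : names) {
                sb.setLength(0);
                sb.append("name=").append(name).append('\n');
                System.out.print(sb);
            }
        }
    }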
As long as the heap size stays about the same over time, you are OK. Used heap should go up and down due to the stop-the-world nature of the GC in the Sun JVM. It looks like lots of short-lived objects are produced in your app; that may be inefficient, but sometimes you need to create them. It's the lifestyle of Java :D

Memory leak in Scala and processes

I have a system in Scala with a lot of simultaneous threads and system calls. This system has a problem: memory usage increases over time.
The image below shows the memory usage for one day. When it reaches the limit, the process shuts down, and I put a watchdog in place to recover it.
I periodically run the command
jcmd <pid> GC.run
This slows the memory growth, but the leak still happens.
I analysed it with jvisualvm, comparing two distinct moments in time, 40 minutes apart. The image below shows the comparison between these two moments. Notice that there is an increase in instances of some classes like ConcurrentHashMap$HashEntry, SNode, WeakReference, char[] and String, and many classes in the package scala.collection.concurrent.
What can be causing the memory leak?
Edit 1:
Investigating with JVisualVM, I noticed objects of the CNode and INode classes that are in a TrieMap, which is instantiated inside the sbt.TrapExit$App class. Here is the object hierarchy figure:
First, capture a heap dump when your application crashes due to an out-of-memory error. Add the following flags when starting the JVM:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dump
Next you need to analyze the heap dump to figure out the source of the memory leak. I recommend using Eclipse MAT. The Leak Suspects report should give you a sense of what objects are actually causing the leak.
Without seeing the implementation it's hard to say. The title of your post suggests that there is a memory leak in Scala, but did you check your implementation for problems with releasing objects?
Did you check the following:
Do you limit the number of actors at all?
Do you set timeouts for the system calls?
Do you allow the actors to be removed from the heap once they have performed their tasks?
Did you count how many actors can fit into your memory, or are you just creating "hundreds of actors" in the hope that the JVM will know "what to do"?
What I'm trying to say is that maybe you run out of memory because you simply create too many objects which are never released, either because they are still performing their tasks (no timeout) or because you have created too many of them. A sketch of both mitigations follows below.
Maybe you need to scale your application across many JVMs? How many JVMs do you use?
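For what it's worth, here is a rough sketch of the two points about bounding concurrency and timing out system calls, using only the standard java.util.concurrent API rather than an actor library; the pool size of 8 and the 30-second timeout are invented for illustration.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class BoundedCalls {
        // A fixed pool caps concurrency instead of spawning unbounded workers.
        private static final ExecutorService POOL = Executors.newFixedThreadPool(8);

        static String callWithTimeout(Callable<String> systemCall) throws Exception {
            Future<String> future = POOL.submit(systemCall);
            try {
                return future.get(30, TimeUnit.SECONDS); // don't wait forever
            } catch (TimeoutException e) {
                future.cancel(true); // frees the worker thread and drops its references
                throw e;
            }
        }
    }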

Finding Java memory leak when not all used heap is reachable from threads

I am looking into a potential memory leak (or at least memory waste) in a largish Java-based system. The JVM is running with a maximum heap size of 5 GB, and 2-3 GB of heap usage is the expected baseline for the application. (There can be peaks that are higher.)
In the overload scenario I am investigating, the heap gets filled up. Analyzing a heap dump with the Eclipse Memory Analyzer Tool (MAT) shows (no surprise) that the heap is entirely used up.
MAT shows 2 potential leak candidates, both roughly retaining 2.5GB: java.lang.Thread and a domain object from the system which is used extensively during transaction processing in the system. All these domain objects are however (no surprise) reachable from the Thread instances. Those threads are processing the transactions, after all. Thus, the 2.5 GB attributed to java.lang.Thread is almost entirely caused by those domain objects. No surprise here.
Listing the object tree of all java.lang.Thread instances and summing up the retained heap of all threads results in 2.5 GB of retained heap.
Where should I look for the other 2.5 GB that are needed to fill up the heap, if they are not reachable from an instance of java.lang.Thread?
- There is nothing in the finalizer queue
- There is not a significant amount of unreachable objects pending GC
I think another way to put this question is: "How do I find all objects that are not reachable from an instance of java.lang.Thread?" (Maybe an OQL query?) And the other question: "What kind of objects are there that are not reachable from an instance of java.lang.Thread, other than objects in the finalizer queue and unreferenced objects pending GC?"
I too faced memory-leak problems at our site.
Use the YourKit Java profiler, which provides lots of information; with it you can get a broader picture of where all the memory is being used.
You can find a great tutorial, Find Java Memory Leaks, for the above tool.
Your question,
"What kind of Objects are there that are not reachable from an instance of java.lang.Thread other then Objects in the Finalizer Queue and unreferenced objects pending GC?"
There are four kinds of objects:
Strongly reachable: objects that can be reached directly via references from live objects.
Weakly/softly reachable: objects that have a weak/soft reference associated with them.
Pending finalization: objects that are pending finalization, whose references can be reached through the finalizer queue.
Unreachable: objects that are unreachable from GC roots but not yet collected.
Besides these, the JVM also uses native memory, about which you can find information in IBM's "Heap and native memory use by the JVM" and "Thanks for the memory". And according to YourKit, the JVM memory structure also includes non-heap memory, which they define as follows:
Also, the JVM has memory other than the heap, referred to as non-heap memory. It is created at the JVM startup and stores per-class structures such as runtime constant pool, field and method data, and the code for methods and constructors, as well as interned Strings.
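As a small, self-contained illustration of strong versus weak reachability from the list above (note that System.gc() is only a hint to the JVM, so the final line is not guaranteed):

    import java.lang.ref.WeakReference;

    public class ReachabilityDemo {
        public static void main(String[] args) {
            Object strong = new Object();
            WeakReference<Object> weak = new WeakReference<>(strong);

            System.out.println(weak.get()); // non-null: still strongly reachable
            strong = null;                  // now only weakly reachable
            System.gc();                    // a hint only; collection is not guaranteed
            System.out.println(weak.get()); // typically null once a GC has run
        }
    }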
Since the extra memory is not showing in MAT, it's hard to know what to suggest. My apologies if some (or even most) of this is things you already know; I've just tried to pull together everything I could think of.
FindBugs
FindBugs is a static analysis tool that will scan your code looking for common anti-patterns and problems and giving you a nice report on them. It does pick up on a lot of causes of potential memory and resource leaks.
Manual dump
You could try using something like jmap or VisualVM to take a heap dump manually for analysis and see if you get different results from letting Eclipse do it:
http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jmap.html
http://java.dzone.com/articles/java-heap-dump-are-you-task
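For example (the pid is a placeholder), a dump restricted to live objects; the live option forces a full GC before dumping, so unreachable objects are excluded:

    jmap -dump:live,format=b,file=heap.hprof <pid>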
Analyzer Quirks
The memory analyzer FAQ:
http://wiki.eclipse.org/MemoryAnalyzer/FAQ
says:
Symptom: When monitoring the memory usage interactively, the used heap size is much bigger than what MAT reports.
During the index creation, the Memory Analyzer removes unreachable objects because the various garbage collector algorithms tend to leave some garbage behind (if the object is too small, moving it and re-assigning addresses is too expensive). This should, however, be no more than 3 to 4 percent. If you want to know what objects are removed, enable debug output as explained here: MemoryAnalyzer/FAQ#Enable_Debug_Output
Another reason could be that the heap dump was not written properly. Especially older VMs (1.4, 1.5) can have problems if the heap dump is written via jmap.
Enabling debug output will allow you to see what is going on there and confirm there is nothing odd in that area.
Some of these tips may be relevant
http://eclipsesource.com/blogs/2013/01/21/10-tips-for-using-the-eclipse-memory-analyzer/
Use JProfiler and break the heap object count down by class - find which class has lots of instances and start your hunt there.
You can also take a couple of snapshots a short time apart and compare the two heap dumps to see what objects were created during that time. This is particularly handy if you know that a certain action is causing the problem and you want to ignore all the background JVM object noise and just examine the delta.
I have used it with great success to find a memory leak. It isn't free, but it's worth the licence fee.
FYI: I have no affiliation with JProfiler.
Since the extra memory is not showing in MAT, it's hard to know what to suggest.
That isn't true: MAT does show unreachable objects. Just go to the Preferences and select the checkbox enabling this option. After restarting MAT you will see these objects with details. Of course, paths to GC roots will not be available.
Maybe you should look for memory leaks in the database connector code, or maybe in the ORM, because if you are using a raw connection library and you don't close cursors, you can potentially get a memory leak. My second thought is also related to the database connector: some of them (maybe not yours) use native code underneath, and that can be the source of this kind of leak. Given the heavy concurrent usage, that makes sense to me. You can check that if you want.
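As a defensive pattern against leaked cursors, here is a sketch with a hypothetical JDBC URL and query: try-with-resources (Java 7+) closes the ResultSet, statement and connection even when an exception is thrown.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class CursorSafeQuery {
        // The URL and table name are illustrative placeholders.
        static int countRows(String jdbcUrl) throws SQLException {
            try (Connection con = DriverManager.getConnection(jdbcUrl);
                 PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM t");
                 ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0; // resources close automatically here
            }
        }
    }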

UsageThreshold in JConsole

I am looking into how to use JConsole to detect memory leaks.
I see that in Memory Pool in my MBeans I can define a UsageThreshold for my Tenured Generation.
So if my application exceeds this threshold, the heap memory becomes red in the Memory tab.
Question: How does this help? I mean, how am I supposed to use this setting to analyze my memory? How am I supposed to figure out this value?
In my opinion, the UsageThreshold parameter is not the most helpful one for detecting memory leaks (but if someone knows some tricks with it, please do share). In my experience that parameter is more helpful for seeing visually whether my application is getting way too near its max heap size and is in danger of an OutOfMemoryError.
Still, regarding using JConsole to search for memory leaks, I don't think there's a silver bullet for the process, but what I usually do is the following:
If a memory leak exists, it means that the objects (the ones that are leaking) won't get collected; hence, your Tenured Generation won't fully recover after any number of GCs.
With the application running, I connect JConsole and try to spot a leak by observing the memory tab. If, after several computations in my application and after various GCs have occurred (including pressing the Perform GC button, which results in a full GC), the memory never goes back down to, or at least near, the value at which it started, there's a good chance that something is leaking. When the leak is big, you can even see a "staircase" pattern in the memory graph.
Keep in mind that if your application runs long computations which may consume memory, this analysis must be done carefully; you must understand when those processes have finished. For example, run just one of those computations and track the total evolution of memory before, during and afterwards.
Also, I suggest you try VisualVM instead, because it also allows you to create heap dumps, which you can use to understand which objects are still in memory and to explore the reference graph to understand why they are not being collected.
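If you want to capture such a dump from inside the application rather than from VisualVM, one option is the HotSpot-specific diagnostic MXBean (in com.sun.management, so not portable to other JVMs); a minimal sketch, with an arbitrary output file name:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class DumpHeap {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // true = dump only live (reachable) objects, forcing a GC first.
            diag.dumpHeap("heap.hprof", true);
        }
    }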
You can use jmap to see the histogram and/or to create heap dumps, and study your memory consumption with tools like Eclipse MAT or YourKit.
JConsole is used more for monitoring and running MBeans and less for analysis; in my experience JVisualVM is better for that, since you can also use it to sample your code and see which methods consume the most CPU.
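For instance (the pid again a placeholder), a class histogram restricted to live objects:

    jmap -histo:live <pid>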

Memory Leak in a Java based application

There is a memory leak in an application when a short-lived object holds a long-lived object.
My question is, how can we identify
1) which objects live longer and which shorter? Is there any tool which measures the life of an object?
2nd question:
I am constantly getting an Out of Memory error. I tried increasing the heap memory to 2 GB, but I am still getting it. Please suggest an open-source tool with which I can identify and fix the memory leak.
At present I am restarting the server every time as a temporary solution, but please suggest something I can fix permanently.
You can use the VisualVM tool included in the JDK:
http://download.oracle.com/javase/6/docs/technotes/tools/share/jvisualvm.html
Documentation available here:
https://visualvm.dev.java.net/docindex.html
There are 2 options:
It may just be that your application doesn't have enough heap allocated. Measure the size of your input and give the application a correspondingly sized heap;
There's a memory leak: take a profiler, examine your heap, and find objects which shouldn't be there or of which there are too many ('short-living objects', in your terms); identify which 'long-living' object holds them, and fix that. You should know your code well enough to tell which objects must be 'short-living' and which 'long-living'.
I've found the Heap Walker in NetBeans very useful.
As said, jvisualvm has good tools to analyze the heap live.
But you can also use jvisualvm or -XX:+HeapDumpOnOutOfMemoryError to write a heap dump to a file, then take the file to your desktop and open it in the Eclipse Memory Analyzer. Eclipse MAT is even better for analyzing the memory.
Out of memory occurs on a server because it literally uses up all the memory it's allowed to have. I'm not sure what application you're using to host the server, but for Apache you need to add the option -Xmx512m, where 512 is the maximum number of megabytes it's allowed to use.
If you leave the application running long enough, it's going to happen. This isn't because of memory leaks in Java but because of the server itself, which has a tendency to do so. You can't change this behavior, but you can at least increase the default memory of 256 MB. With the heavily loaded site I work on every day, 256 MB lasts about 30 minutes for me, unfortunately. I've found that 1024 MB is reasonable and rarely crashes due to out-of-memory errors.
It would strike me as very unusual for Java to be incapable of garbage collecting correctly unless the programmer had a hand in overriding typical functionality.
I think you can track memory leaks with jconsole (which ships with JDK 6, if I'm not mistaken).
A short-lived object holding a reference to a long-lived object will not cause problems (a good overview, including generational garbage collection).
2 GB is an awful lot of objects/references. If you're running out of heap space at 2 GB, you're likely holding onto massive amounts of data and/or keeping resources open after you're done with them. You should post, at the very least, a description of what your application does and how long it takes to die.
You can quickly get some sense of what's happening by watching the garbage collector (e.g. run with -verbose:gc, which will tell you when the garbage collector runs and how much it collects).
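On the JDK 6/7-era JVMs discussed here, that would look something like the following (the jar name is a placeholder; -XX:+PrintGCDetails adds per-collection sizes and timings):

    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar app.jar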
