Given a Java Object, how can I get a list of the Objects that refer to it?
There must be extension mechanisms in the GC for doing this kind of thing; I just can't seem to find them.
I'm not sure if exactly what you're after is simply accessible.
The JPDA (Java Platform Debugger Architecture) enables construction of debuggers, so is a good starting point if you want to delve into the internals. There's a blog on the JPDA that you may also find useful. Check out the Sun Developer Network JPDA page for links to documentation, FAQs, sample code and forums.
Two interfaces that may be good starting points are:
com.sun.jdi.ObjectReference: An object that currently exists in the target VM; its referringObjects method returns the objects that reference it
com.sun.jdi.VirtualMachine: A virtual machine targeted for debugging (a rough attach-and-query sketch using both interfaces follows below)
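The sketch below is only a rough illustration of how those interfaces fit together. It assumes the target JVM was started with the JDWP agent (e.g. -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000), that the JDI classes are available to the compiler (tools.jar on older JDKs, the jdk.jdi module on newer ones), and that com.example.SomeClass stands in for whatever class's instances you are interested in; instances() and referringObjects() exist since Java 6.

import com.sun.jdi.Bootstrap;
import com.sun.jdi.ObjectReference;
import com.sun.jdi.ReferenceType;
import com.sun.jdi.VirtualMachine;
import com.sun.jdi.connect.AttachingConnector;
import com.sun.jdi.connect.Connector;
import java.util.Map;

public class ReferrerDump {
    public static void main(String[] args) throws Exception {
        // Find the socket-attach connector and connect to the debuggee.
        AttachingConnector connector = null;
        for (AttachingConnector c : Bootstrap.virtualMachineManager().attachingConnectors()) {
            if ("com.sun.jdi.SocketAttach".equals(c.name())) {
                connector = c;
            }
        }
        Map<String, Connector.Argument> arguments = connector.defaultArguments();
        arguments.get("hostname").setValue("localhost");
        arguments.get("port").setValue("8000");
        VirtualMachine vm = connector.attach(arguments);

        // instances()/referringObjects() require instance info support (Java 6+ HotSpot).
        if (!vm.canGetInstanceInfo()) {
            throw new IllegalStateException("Target VM cannot report instance information");
        }
        for (ReferenceType type : vm.classesByName("com.example.SomeClass")) {
            for (ObjectReference obj : type.instances(10)) {           // at most 10 instances
                System.out.println(obj + " referred to by " + obj.referringObjects(10));
            }
        }
        vm.dispose();
    }
}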
If you're looking for a memory leak, I find analyzing heap dumps with Eclipse MAT to be very helpful. You can select an object and ask for paths to "GC roots", i.e. show me all chains of references that are keeping this object from being garbage collected.
I don't think there is such a mechanism, and there is no real reason the GC would need one.
It depends a little bit on how you want to use it, but if you need it to analyze your memory usage, taking a heap dump and opening it in MemoryAnalyzer or JHat will probably give you the information you need. Different ways of taking a heap dump can be found here.
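For example, one common way on a HotSpot JDK (assuming jmap is on the path; <pid> is the target process id) is:

jmap -dump:live,format=b,file=heap.hprof <pid>

Starting the JVM with -XX:+HeapDumpOnOutOfMemoryError will also make it write a dump automatically the moment an OutOfMemoryError is thrown.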
The GC does not support this, though the JPDA APIs do. But I'd be very cautious about doing this kind of thing in a Java application. It is likely to be prohibitively expensive in both time and memory.
Related
I am new to android programming. The memory consumption of my android app increases significantly over time. When analyzed through MAT, it shows objects piling up whose GC root is Native Stack. Those objects are referenced as global refs in native code, but are properly released over time; I have also added logs to make sure the count matches.
The documentation about the native stack is not very clear, as it just states:
In or out parameters in native code, such as user defined JNI code or JVM internal code. This is often the case as many methods have native parts and the objects handled as method parameters become GC roots. For example, parameters used for file/network I/O methods or reflection.
I am not quite sure what it means, where the problem is, or how I can fix it. Any hints are much appreciated. Thanks in advance.
This answer won't give you a definitive solution, not because I'm not willing, but because it's impossible (and even harder without seeing your code, let alone knowing it well). But from my experience I can tell you that these kinds of memory leaks don't occur just because of directly referenced objects: the objects you declare (and keep referencing from other classes/objects) in turn depend on many other classes, and so on, so you are probably seeing a leak caused by incorrect handling of one of your instances which in turn references other instances.
Debugging memory leaks is often very hard work, not just because, as I said above, the leak sometimes doesn't depend directly on what you've declared, but also because finding a solution might not be trivial. The best thing you can do is what you already seem to be doing: DDMS + HPROF. I don't know how much knowledge you have, but although it's not a universal method, this link helped me a lot when finding memory leaks in my own code.
Although it seems trivial, the best way to debug these kinds of things is to progressively remove portions of your code (above all, those which involve working with instances of other classes) and see how the HPROF report changes.
---- EDIT ----
This question on SO is a good example to illustrate the GC roots.
I am creating a Java program in which my class, say A, has some predefined behavior. But a user can override my class to change its behavior. So my script will check whether there is a subclass and, if so, call its behavior; but what if the user has written some blocking code or a memory leak in his code?
This may harm my process. Is there any way in Java to monitor the memory allocated by a particular method?
Please suggest.
but what if the user has written some blocking code or a memory leak in his code
First of all, I suggest you document your class well. Describe what the user is allowed to do and what not. Give use cases of what to do (if possible).
For the blocking-code part, if you have timing concerns, you could wrap the execution of the method in, say, a Future and let an ExecutorService execute the code. That way you will be able to cancel the execution if it takes too much time.
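A minimal sketch of that approach (it assumes the surrounding method declares throws Exception; userObject and doBehavior() are hypothetical placeholders for the subclass instance and the overridable method):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

ExecutorService executor = Executors.newSingleThreadExecutor();
Callable<Void> work = () -> {
    userObject.doBehavior();           // the possibly blocking, user-overridden method (hypothetical)
    return null;
};
Future<Void> task = executor.submit(work);
try {
    task.get(5, TimeUnit.SECONDS);     // give the user code at most 5 seconds
} catch (TimeoutException e) {
    task.cancel(true);                 // interrupts the worker; only helps if the code honours interruption
} finally {
    executor.shutdownNow();
}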
For the memory leak issue, well, I guess you are not really talking about memory leaks but about increased memory consumption caused by calling the overridden method. Memory leaks in Java are rare, after all.
You will not be able to detect the memory consumption of a single method; that's not how Java works. Memory is global. What will you do if, for example, an external library is loaded (JNI), or some library on the classpath is called that now uses more memory? You just cannot tell.
Other than monitoring the overall memory consumption, there is no other way (someone please tell me if I am wrong).
Oracle has quite a good document about solving memory leaks. It suggests that one should use NetBeans Profiler as a tool.
http://www.oracle.com/technetwork/java/javase/memleaks-137499.html
I believe you can use the same debugging API to check for misbehaving code while it is running, but that will come with a performance penalty and is probably akin to killing a fly with a sledgehammer. I personally would not let anything like that run in production. Instead I would rely on rigorous testing and peer review.
For external monitoring, you can use VisualVM or JConsole (part of the JDK); for internal monitoring you can use the Runtime class:
Runtime rt = Runtime.getRuntime();
long totalMem = rt.totalMemory();   // heap currently reserved by the JVM
long maxMem = rt.maxMemory();       // upper limit the heap may grow to (-Xmx)
long freeMem = rt.freeMemory();     // unused portion of the currently reserved heap
Via the Thread class, you can check the status of all threads. I have never used it directly, because application servers or batch-processing APIs do that job for me, so I don't need to reinvent the wheel. And I suggest using tools like VisualVM.
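As a rough sketch of what the Thread API gives you from inside the JVM (just a snapshot loop; in practice jstack or VisualVM is usually more convenient):

import java.util.Map;

// Print the name, state and stack depth of every live thread.
for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
    Thread thread = entry.getKey();
    System.out.println(thread.getName() + " state=" + thread.getState()
            + " frames=" + entry.getValue().length);
}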
EDIT: Watch also this thread: Why do threads share the heap space?
You cannot analyze the heap usage of a single thread. If you have problems with the execution of foreign code, you should separate it as well as you can from other threads and analyze the thread or heap dumps. This could be done, as mentioned, with VisualVM or JConsole, both of which ship with the Oracle (or Sun) JDK.
Depending on what sort of behavior the subclass can perform, we might think of options. For example, if it's a database-related operation, we can force them to clean up connections; if it's file based, we can force them to read the file through your class and check how big the file is; if it's an HTTP call or some other streaming functionality, we can look at enforcing constraints accordingly.
If you're just worried about heap utilization and memory leaks there, you might want to look at http://java.dzone.com/tips/getting-jvm-heap-size-used which explains how to read runtime memory programmatically. But then you'll have to do periodic checks, and you can never be sure whether a rise in memory usage is caused by the subclass behavior.
I just found this while I was trying to build an agent that records memory allocations:
In the post How to track any object creation in Java since freeMemory() only reports long-lived objects? it is specified that there is an open source project Java Allocation Instrumenter that you could use to register your own callback (it has examples too) and using that you are able to obtain what you need.
I started working on a similar project a few days ago, and while researching I found your question and the above post.
I personally needed this kind of code in some unit tests, to check whether too many objects are allocated inside critical methods, and I found that using the Runtime class was not appropriate, because the garbage collector may interfere and the test can record negative numbers for allocated memory.
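If you are on a HotSpot/OpenJDK JVM, one workaround I would consider is the com.sun.management extension of ThreadMXBean, which reports bytes allocated per thread; this is a non-standard API, so treat the sketch below as an assumption about your JVM, and runCodeUnderTest() is a hypothetical placeholder for the code being measured:

import java.lang.management.ManagementFactory;
import com.sun.management.ThreadMXBean;

// Per-thread allocation counter around the code under test (HotSpot-specific).
ThreadMXBean threadBean = (ThreadMXBean) ManagementFactory.getThreadMXBean();
long threadId = Thread.currentThread().getId();
long before = threadBean.getThreadAllocatedBytes(threadId);
runCodeUnderTest();
long allocated = threadBean.getThreadAllocatedBytes(threadId) - before;
System.out.println("allocated roughly " + allocated + " bytes");

The number is approximate, but unlike a freeMemory() delta it does not go negative when the garbage collector runs in the middle of the test.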
I'm using Tomcat and after stopping my web application there's still a reference to the classloader instance of my web application.
With the consequence that a notable amount of memory (mostly related to static data) will not be freed. Sooner or later this results in an OutOfMemoryError.
I took a heap dump and realized that the classloader is held by a JNI global reference, which prevents it from being garbage collected.
My application does not use JNI. I am also not using the Apache Tomcat Native Library. I am using a Sun/Oracle JDK.
I'd like to track down the cause/origin of this global reference.
(My guess is that the JVM internally references the classloader - but why/where?).
Question:
Which approaches/toolsets exist to achieve this?
UPDATE
It seems that bestsss is right and the JNI global reference was introduced by the JVM debug mode. This helped me out, but it does not answer the question, so I am still curious to get an answer, which might be helpful in the future.
Besides the obvious case: Threads, there is one more:
Are you using your application in debug mode?
The JVM does not hold references to any classloader besides the system one, but that doesn't concern you. The rest of the JNI references are either Threads or just objects held for debugging (provided you don't use JNI and lock the objects down yourself).
JNI references are just roots; edit your answer and post exactly which objects are held by those references.
The first thing I'd do is run with -Xcheck:jni on and see if it comes up with anything. I wouldn't expect it to; it doesn't sound like there's anything weird happening with JNI, just incorrect use being made of it. However, it's good to make sure.
If you're on a Sun JVM, I think you can use -XX:+TraceJNICalls to get an overwhelming listing of JNI calls as they happen. That should let you get an idea of what calls are being made, and from there work towards what is making them and why this is causing a problem.
JRockit mission control: http://download.oracle.com/docs/cd/E13150_01/jrockit_jvm/jrockit/tools/index.html
A nice GUI tool that should help you find it pretty quick.
You could try jstack.
Maybe one of the listed stacktraces will show you the origin of the global reference.
I am trying to reproduce a java.lang.OutOfMemoryError in JBoss 4, which one of our clients got, presumably by running their J2EE applications over days/weeks.
I am trying to find a way to make the webapp spit out a java.lang.OutOfMemoryError in a matter of minutes (instead of days/weeks).
One thing that comes to mind is to write a Selenium script and have it bombard the webapp.
One other thing that we can do is to reduce JVM heap size, but we would prefer not to do this, as we want to see the limit of our system.
Any suggestions?
ps: I don't have access to the source code, as we just provide a hosting service (of course I could decompile the class files...)
If you don't have access to the source code of the J2EE app in question, the options that come to mind are:
Reduce the amount of RAM available to the JVM. You've already identified this one and said you don't want to do it.
Create a J2EE app (it could probably just be a JSP) and configure it to run within the same JVM as the target app, and have that app allocate a ridiculous amount of memory. That will reduce the amount of memory available to the target app, hopefully such that it fails in the way you're trying to force (a sketch follows below).
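The sketch below is a deliberately wasteful servlet along those lines; the class name and the 10 MB-per-request figure are made up, and it would still need a servlet mapping (web.xml) and to be deployed next to the target app:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Every GET request pins another 10 MB in a static list, starving the apps
// that share the same JVM heap until an OutOfMemoryError is thrown.
public class MemoryHogServlet extends HttpServlet {
    private static final List<byte[]> HOG = new ArrayList<byte[]>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        HOG.add(new byte[10 * 1024 * 1024]);
        resp.getWriter().println("holding about " + (HOG.size() * 10) + " MB");
    }
}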
Try to use some profiling tools to investigate the memory leakage. It is also good to investigate memory dumps taken after the OOM happened, along with the logs. IMHO, reducing memory is not the right way to investigate, because you can run into issues not connected with the real production one.
Do both, but in a controlled fashion:
Reduce the available memory to the absolute minimum (using -Xms1M -Xmx2M, as an example, but I fear your app won't even load with such limitations)
Do controlled "nuclear irradiation" : do Selenium scripts or each of the known working urls before to attack the presumed guilty one.
Finally, unleash the power that shall not be raised : start VisualVM and any other monitoring software you can think of (DB execution is a usual suspect).
If you are using Sun Java 6, you may want to consider attaching to the application with jvisualvm in the JDK. This will allow you to do in-place profiling without needing to alter anything in your scenario, and may possibly immediately reveal the culprit.
If you don't have the source, decompile it, at least if you think the terms of usage allow this and you live in a free country. You can use:
Java Decompiler or JAD.
In addition to all the others I must say that even if you can reproduce an OutOfMemory error, and find out where it occurred, you probably haven't found out anything worth knowing.
The trouble is that an OOM occurs when an allocation can not take place. The real problem however is not that allocation, but the fact that other allocations, in other parts of the code, have not been de-allocated (de-referenced and garbage collected). The failed allocation here might have nothing to do with the source of the trouble (no pun intended).
This problem is larger in your case as it might take weeks before trouble starts, suggesting either a sparsely used application, or an abnormal code path, or a relatively HUGE amount of memory in relation to what would be necessary if the code was OK.
It might be a good idea to ask around why this amount of memory is configured for JBoss and not something different. If it's recommended by the supplier, then maybe they already know about the leak and require this to mitigate the effects of the bug.
For these kinds of errors it really pays to have some idea in which code path the problem occurs, so you can do targeted tests. And test with a profiler so you can see during run-time which objects (Lists, Maps and such) are growing without shrinking.
That would give you a chance to decompile the correct classes and see what's wrong with them. (Closing or cleaning in a try block and not a finally block perhaps).
In any case, good luck. I think I'd prefer to find a needle in a haystack. When you find the needle you at least know you have found it:)
The root of the problem is most likely a memory leak in the webapp that the client is running. In order to track it down, you need to run the app with a representative workload with memory profiling enabled. Take some snapshots, and then use the profiler to compare the snapshots to see where objects are leaking. While source-code would be ideal, you should be able to at least figure out where the leaking objects are being allocated. Then you need to track down the cause.
However, if your customer won't release binaries so that you can run an identical system to what he is running, you are kind of stuck, and you'll need to get the customer to do the profiling and leak detection himself.
BTW - there is not a lot of point causing the webapp to throw an OutOfMemoryError. It won't tell you why it is happening, and without understanding "why" you cannot do much about it.
EDIT
There is no point "measuring the limits" if the root cause of the memory leak is in the client's code. Assuming that you are providing a servlet hosting service, the best thing to do is to provide the client with instructions on how to debug memory leaks ... and step out of the way. And if they have a support contract that requires you to (in effect) debug their code, they ought to provide you with the source code to do your job.
What I'm looking for is what gets put onto the call stack when a function is called recursively, how the arguments are laid on top of each other (are local variables pushed in order? are parameters pushed in reverse order?), what bytes exist in an array besides those you asked for...
Searching on the internet, I've only found simple coding tutorials or information about the Java memory model, which seems to be about preventing concurrency problems and doesn't really explain memory management the way I need.
Perhaps a read on the Java Virtual Machine Specification will help you.
It explains, among other things, how Java works on Sun's VM and/or how it should work when others implement it (IBM, BEA, etc.).
I think it's definitely worth looking at the Java Language Specification to see if it answers some of your questions. I've also written a few pages about the memory usage of Java objects that may interest you, essentially based on Hotspot. Other sources include white papers published by Sun or other technical documentation produced by your favourite purveyor of JVMs (IBM are also quite reasonable about releasing technical details if you dig around a bit).
If you're feeling particularly "hard core", then you can also download the debug JDK, which allows you to get a dump of all code generated by the JIT compiler (turn on -XX:+PrintOptoAssembly).
You should also ask yourself:
do you really care, say, what order method parameters are written to the stack? so long as the answer is "the order in which the called method expects them", what difference does it make? (N.B. If the JIT compiler inlines the method, the answer in some cases could be "it does not write the parameters to the stack"...)
can you find the answers to your underlying problem empirically? (e.g. if what you really want to know is "to what recursion depth can I go with a method that takes these parameters and declares these local variables each time", why not just write a test program to find that out from within Java? See the sketch below.)
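For instance, a throwaway test for the recursion-depth question could look like this sketch; the exact depth will vary with the -Xss stack size, the parameter/local layout and JIT behaviour:

// Counts how many frames fit on the stack for this particular signature.
public class RecursionDepthTest {
    private static int depth = 0;

    private static void recurse(int a, int b, long c) {
        depth++;
        double local = a + b + c;        // one local per frame, just to occupy slots
        recurse(a, b, (long) local);
    }

    public static void main(String[] args) {
        try {
            recurse(1, 2, 3L);
        } catch (StackOverflowError e) {
            System.out.println("Stack overflowed after roughly " + depth + " frames");
        }
    }
}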
This white paper offers the best introduction I've seen:
http://www.oracle.com/technetwork/java/memorymanagement-whitepaper-150215.pdf
Once you understand that document, you can find more detailed resources, such as the GC tuning guides provided by Sun:
http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html
I would have replied quite the same as Oscar Reyes did if he hadn't been first. You will definitely find stuff like that in the VM spec.
However, why do you want to know it so exactly, as you can't access this storage in Java anyway?
Use javap and read the bytecode; your intuition will help you much faster and better.
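For example, given a throwaway class like this (Fib is just a made-up example):

// Fib.java
public class Fib {
    static int fib(int n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }
}

compiling it and disassembling it with

javac Fib.java
javap -c Fib

prints the bytecode of fib, so you can see exactly which load/store and invokestatic instructions make up each recursive call, which answers the "what gets pushed" question far more concretely than any description.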
Keep in mind when reading the JVM spec that although it defines much of how Java's memory model should work, it does leave certain areas (like garbage collection) open to interpretation, so it might also help to try and find documentation/specifications for the memory models of virtual machines other than Sun's.