There is a single core class used in our transaction engine. I ran a test with a high number of concurrent transactions, which resulted in a fatal StackOverflowError. I would like to know whether there is any way to measure how much stack memory is available, in order to avoid the error.
I am looking for a dynamic way of doing it, as setting a hard limit on the number of concurrent transactions is not ideal.
Give Java VisualVM a try. It's from Oracle, and included with the JDK. You can find it here:
${JDK}/bin/jvisualvm.exe
Almost anything you want to know about your Java application's performance can be observed through this.
Here's a quick tutorial if you need it, although it doesn't actually need much of an explanation.
You can set the stack size of a Java program by using the -Xss argument (or -XX:ThreadStackSize; see Java HotSpot VM Options).
But, once set, the Java stack size cannot be changed dynamically.
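What you can do, however, is request a different stack size for an individual thread when you create it, via the four-argument Thread constructor; the recursive probe below also gives a crude way to measure how deep a thread can actually go. A minimal sketch (the stackSize argument is only a hint that some JVMs and platforms ignore):

public class BigStackDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable deepWork = new Runnable() {
            public void run() {
                // Probe how deep we can recurse before the stack runs out.
                System.out.println("max depth: " + probe(0));
            }
            private int probe(int depth) {
                try {
                    return probe(depth + 1);
                } catch (StackOverflowError e) {
                    return depth; // stack exhausted; report the depth reached
                }
            }
        };
        // null thread group, 4 MB requested stack size for this one thread
        Thread t = new Thread(null, deepWork, "big-stack-worker", 4L * 1024 * 1024);
        t.start();
        t.join();
    }
}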
I am hoping to do a profiling analysis of my Java project. To get the results, I want to add a "hook" to the JVM so that every time a heap access occurs, the hook is called and does some tracing. I have been looking into JVMTI, but it does not seem to give me what I expect.
I have several questions:
Is it possible to add such a hook?
If possible, what are the correct tools/interfaces that I should use?
If there are no existing tools that do this, can I achieve it by modifying the JVM codebase?
Thanks.
I want to add a "hook" to the JVM so that every time a heap access occurs
You can't really do this in Java, as the hook itself would access the heap and call itself. Even if you worked around that, it would make the program impossibly slow.
What you can do is use the debugging interface to set a breakpoint after each instruction, inspect the instruction, and see whether it accessed the heap. This would be perhaps 10,000x slower than normal.
An alternative is to translate the bytecode using Instrumentation to trace each memory access. This might be only a few hundred times slower.
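As a rough illustration of the Instrumentation route, here is the skeleton of a java agent that registers a ClassFileTransformer; the actual bytecode rewriting (e.g. with ASM or Javassist) is omitted, and the class name HeapAccessAgent is made up:

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Launch with: java -javaagent:myagent.jar YourApp
// (requires a MANIFEST.MF entry "Premain-Class: HeapAccessAgent")
public class HeapAccessAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // Here you would rewrite GETFIELD/PUTFIELD/xALOAD/xASTORE
                // instructions (e.g. with ASM) to call your tracing code.
                // Returning null means "leave the class unchanged".
                return null;
            }
        });
    }
}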
To do what you propose efficiently, you could use https://software.intel.com/en-us/articles/intel-performance-counter-monitor which is used by tools such as perf on Linux. This requires in-depth knowledge of the processor you are using.
I am creating a Java program in which my class, say A, has some predefined behavior, but users can override my class to change that behavior. My code will check whether there is a subclass and, if so, call its behavior. But what if the user has written some blocking code, or has a memory leak in his code?
This may harm my process. Is there any way in Java to monitor the memory allocated by some method?
Please suggest.
but what if he has written some blocking code or memory leak in his code
First of all, I suggest you document your class well: describe what the user is allowed to do and what not, and give use cases of what to do (if possible).
For the blocking-code part, if you have timing issues, you could wrap the execution of the method in, say, a Future and let an ExecutorService execute the code. That way you will be able to cancel the execution if it takes too much time.
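A minimal sketch of that wrapping, assuming the user's overridden behavior can be exposed as a Callable (the class and method names here are made up):

import java.util.concurrent.*;

public class SafeInvoker {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    // Runs potentially misbehaving user code, giving up after the timeout.
    public <T> T runWithTimeout(Callable<T> userCode, long timeoutMillis) throws Exception {
        Future<T> future = pool.submit(userCode);
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupts the user code if it is blocking
            throw e;
        }
    }
}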
For the memory leak issue: I guess you are not really talking about memory leaks, but about increased memory consumption caused by calling the overridden method. Memory leaks in Java are rare, after all.
You will not be able to detect the memory consumption of a single method; that's not how Java works. Memory is global. What will you do if, for example, an external library is loaded (JNI), or some library on the classpath is called that now uses more memory? You just cannot tell.
Other than monitoring the overall memory consumption, there is no other way (someone please tell me if I am wrong).
Oracle has quite a good document about solving memory leaks. It suggests that one should use NetBeans Profiler as a tool.
http://www.oracle.com/technetwork/java/javase/memleaks-137499.html
I believe you can use the same debugging API to check for misbehaving code while it is running, but that comes with a performance penalty and is probably akin to killing a fly with a sledgehammer. I personally would not let anything like that run in production. Instead I would rely on rigorous testing and peer review.
For external monitoring you can use VisualVM or JConsole (part of the JDK); for internal monitoring you can use the Runtime class:
Runtime rt = Runtime.getRuntime();
long totalMem = rt.totalMemory(); // heap currently reserved by the JVM
long maxMem = rt.maxMemory();     // upper bound the heap may grow to (-Xmx)
long freeMem = rt.freeMemory();   // unused part of totalMemory()
Via the Thread class you can check the status of all threads. I have never used it directly, because application servers or batch-processing APIs do that job, so I don't need to reinvent the wheel. I also suggest using tools like VisualVM.
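For completeness, a minimal sketch of what such a check through the Thread API can look like:

import java.util.Map;

public class ThreadStatusDump {
    public static void main(String[] args) {
        // Snapshot of every live thread and its current stack trace
        Map<Thread, StackTraceElement[]> all = Thread.getAllStackTraces();
        for (Thread t : all.keySet()) {
            System.out.printf("%-30s state=%s daemon=%b%n",
                    t.getName(), t.getState(), t.isDaemon());
        }
    }
}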
EDIT: See also this thread: Why do threads share the heap space?
You cannot analyze the heap usage of a single thread. If you have problems with the execution of foreign code, you should separate it as well as you can from other threads and analyze the thread or heap dumps. This can be done, as mentioned, with VisualVM or JConsole, which was also added by Oracle (or Sun).
Depending on what sort of behavior the subclass can have, we might think of options. For example, if it's a database-related operation, we can force them to clean up connections; if it's file-based, we can force them to read the file through your class and check how big the file is; if it's an HTTP call or some other streaming functionality, we can look at enforcing constraints accordingly.
If you're just worried about heap usage and memory leaks, you might want to look at http://java.dzone.com/tips/getting-jvm-heap-size-used which explains how to get runtime memory programmatically. But then you'll have to do periodic checks, and you can never be sure whether the memory usage is caused by the subclass behavior.
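A sketch of such a periodic check, using the same Runtime calls shown earlier (the five-second interval and the threshold parameter are arbitrary choices):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeapWatcher {
    // Logs a warning whenever used heap exceeds the given threshold.
    public static ScheduledExecutorService start(final long warnThresholdBytes) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(new Runnable() {
            public void run() {
                Runtime rt = Runtime.getRuntime();
                long used = rt.totalMemory() - rt.freeMemory();
                if (used > warnThresholdBytes) {
                    System.err.println("Heap usage high: " + used + " bytes");
                }
            }
        }, 5, 5, TimeUnit.SECONDS);
        return ses; // caller should shutdown() this when done
    }
}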
I found this while I was trying to build an agent that records memory allocations:
The post How to track any object creation in Java since freeMemory() only reports long-lived objects? mentions the open-source project Java Allocation Instrumenter, which you can use to register your own callback (it has examples, too), and with which you can obtain what you need.
I started working on a similar project a few days ago, and while researching I found your question and the post above.
I personally needed this kind of code in some unit tests, to check whether one allocates too many objects inside critical methods, and found that using the Runtime class was not appropriate, because the garbage collector may interfere and the test then records negative numbers for allocated memory.
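For what it's worth, on HotSpot JVMs there is a non-standard alternative that sidesteps the GC problem: com.sun.management.ThreadMXBean can report the cumulative bytes a thread has allocated, and that counter only grows. A sketch, assuming a HotSpot JDK where this API is available:

import java.lang.management.ManagementFactory;

public class AllocationCheck {
    public static void main(String[] args) {
        // Non-standard HotSpot API; not available on every JVM.
        com.sun.management.ThreadMXBean tmx =
                (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();
        long tid = Thread.currentThread().getId();

        long before = tmx.getThreadAllocatedBytes(tid);
        byte[] payload = new byte[1024 * 1024]; // the code under test
        long after = tmx.getThreadAllocatedBytes(tid);

        // Unlike freeMemory(), this counter is monotonic, so a GC run
        // cannot make the measurement go negative.
        System.out.println("allocated ~" + (after - before) + " bytes");
        System.out.println(payload.length); // keep the allocation alive
    }
}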
I've recently learned about the -XX:+HeapDumpOnOutOfMemoryError VM argument and was told that it should be added as a matter of course to the HotSpot JVM as it is off by default. One of my co-workers made a comment that maybe we shouldn't because he heard that there's some pitfall to doing this but he can't remember what it was. I hate vague statements like that, but am trying to do my due diligence before making a final decision so am doing some investigation.
Most of the references to it I can find are more about how to use it (and where the dump files are located) and don't speak to any issues with using it. This SO question refers to a different argument, but the answers seem relevant to this one as well and imply that there are no issues: Why is this Hotspot JVM option not the default? -XX:+PrintConcurrentLocks
Does anyone know of any reason not to turn on -XX:+HeapDumpOnOutOfMemoryError?
The main downside is that it creates a large file each time a program gets this error (only the first time it happens for that JVM). If you have a heap of 2 GB, it could create a file that big each time, filling up disk space with heap dumps you don't need. Since it's only useful for debugging/development purposes, it is not useful for most end users.
With this particular flag I don't think there are any issues (I don't know about other flags). It is not even a diagnostic flag; it just records the GC/memory state when the JVM encounters an OutOfMemoryError (which happens only once, and even then while the JVM is stopping).
One thing you need to accept is that it may or may not behave as expected, because it is a -XX option, and:
Options that are specified with -XX are not stable and are subject to change without notice.
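That said, on HotSpot this particular flag is a manageable option, so one hedge is to leave it off at startup and switch it on in the running JVM through the (non-standard) HotSpotDiagnosticMXBean, as in this sketch:

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class EnableHeapDump {
    public static void main(String[] args) throws Exception {
        // Non-standard HotSpot API; not part of the Java SE spec.
        HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        diag.setVMOption("HeapDumpOnOutOfMemoryError", "true");
        diag.setVMOption("HeapDumpPath", "/var/dumps"); // where the .hprof goes
        System.out.println(diag.getVMOption("HeapDumpOnOutOfMemoryError").getValue());
    }
}

If the option is not manageable on your JVM, setVMOption will throw, so treat this strictly as a HotSpot-specific convenience.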
I am currently trying to determine the cause of high memory usage in a Java application running on an exotic platform where I know of no instrumented JVM.
I have the source to the application, and can make changes to the source for the purposes of testing.
How can I debug memory usage under these conditions?
If more info is needed, I'll be happy to provide it. I'm just a little lost trying to use such an old JVM without much tooling to speak of.
If I were in your shoes, I would approach it like this:
Find the functional areas you know need attention.
Make a backup copy of the code.
Start inserting print statements with start and end times.
See what takes a lot of time and narrow it down.
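The print statements can be as primitive as this (nothing here is newer than Java 1.1, so it should run on the old JVM; doWork is a placeholder for the real code):

public class Timed {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long startTime = System.currentTimeMillis();
        long startUsed = rt.totalMemory() - rt.freeMemory();
        System.out.println("doWork start: " + startTime + " used=" + startUsed);

        doWork(); // the functional area under suspicion

        long endTime = System.currentTimeMillis();
        long endUsed = rt.totalMemory() - rt.freeMemory();
        System.out.println("doWork end: took " + (endTime - startTime)
                + " ms, used delta=" + (endUsed - startUsed) + " bytes");
    }

    static void doWork() {
        // placeholder for the real code being measured
        for (int i = 0; i < 1000000; i++) {
            Math.sqrt(i);
        }
    }
}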
For Java 5 and later this can be done using Java agents. For earlier versions - including 1.1.8 - you must load native agents to do this. If you cannot instrument your code, you must do the work needed yourself.
One approach to get most of the way there is to use a Java 1.1-compatible version of log4j, which essentially lets you write out strings prepended with a timestamp. These can then be massaged afterwards to extract answers to whatever you want to know.
If you need memory profiling - and I'd recommend against this - you could start serializing objects out to disk, then measure disk size as a rough estimate of memory size.
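A crude sketch of that serialization trick: write the object to an in-memory buffer and use the byte count as a rough proxy for its size (this only works for Serializable objects, and it is an approximation at best):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializedSize {
    // Returns the serialized size in bytes as a rough memory estimate.
    public static int estimate(Serializable obj) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buffer);
        out.writeObject(obj);
        out.close();
        return buffer.size();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(estimate(new int[10000]) + " bytes");
    }
}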
If you really want to dig into where you're usually not supposed to be, try the sun.misc package, although I don't know how much of that was around in 1.1.x.
I am trying to reproduce a java.lang.OutOfMemoryError in JBoss 4, which one of our clients got, presumably from running the J2EE applications over days/weeks.
I am trying to find a way for the webapp to spit out a java.lang.OutOfMemoryError in a matter of minutes (instead of days/weeks).
One thing that comes to mind is to write a Selenium script and have the script bombard the webapp.
One other thing we could do is reduce the JVM heap size, but we would prefer not to, as we want to see the limit of our system.
Any suggestions?
PS: I don't have access to the source code, as we just provide a hosting service (of course, I could decompile the class files...).
If you don't have access to the source code of the J2EE app in question, the options that come to mind are:
Reduce the amount of RAM available to the JVM. You've already identified this one and said you don't want to do it.
Create a J2EE app (it could probably just be a JSP or servlet) and configure it to run within the same JVM as the target app, and have that app allocate a ridiculous amount of memory. That will reduce the memory available to the target app, hopefully enough that it fails in the way you're trying to force.
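A sketch of that second option as a plain servlet (the class name and the 10 MB step are made up; deploy it into the same JVM as the target app and hit it repeatedly):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Each GET request permanently claims another 10 MB of heap,
// shrinking what is left for the target application.
public class MemoryHogServlet extends HttpServlet {
    private static final List<byte[]> hoard = new ArrayList<byte[]>();

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        hoard.add(new byte[10 * 1024 * 1024]);
        resp.getWriter().println("Hoarding " + (hoard.size() * 10) + " MB");
    }
}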
Try using profiling tools to investigate the memory leak. It is also good to investigate memory dumps taken after the OOM happens, along with the logs. IMHO, reducing memory is not the right way to investigate, because you can run into issues not connected with the real production one.
Do both, but in a controlled fashion:
Reduce the available memory to the absolute minimum (using -Xms1M -Xmx2M, as an example, though I fear your app won't even load with such limitations).
Do controlled "nuclear irradiation": run Selenium scripts over each of the known working URLs before attacking the presumed guilty one.
Finally, unleash the power that shall not be raised: start VisualVM and any other monitoring software you can think of (DB execution is a usual suspect).
If you are using Sun Java 6, you may want to consider attaching to the application with jvisualvm in the JDK. This will allow you to do in-place profiling without needing to alter anything in your scenario, and may possibly immediately reveal the culprit.
If you don't have the source, you can decompile it, at least if you think the terms of use allow this and you live in a free country. You can use Java Decompiler or JAD.
In addition to all the others I must say that even if you can reproduce an OutOfMemory error, and find out where it occurred, you probably haven't found out anything worth knowing.
The trouble is that an OOM occurs when an allocation cannot take place. The real problem, however, is not that allocation, but the fact that other allocations, in other parts of the code, have not been de-allocated (de-referenced and garbage collected). The failed allocation here might have nothing to do with the source of the trouble (no pun intended).
This problem is larger in your case, as it might take weeks before trouble starts, suggesting either a sparsely used application, an abnormal code path, or a relatively HUGE amount of memory in relation to what would be necessary if the code were OK.
It might be a good idea to ask around why this amount of memory is configured for JBoss and not something different. If it's recommended by the supplier, then maybe they already know about the leak and require this to mitigate the effects of the bug.
For these kind of errors it really pays to have some idea in which code path the problem occurs so you can do targeted tests. And test with a profiler so you can see during run-time which objects (Lists, Maps and such) are growing without shrinking.
That would give you a chance to decompile the correct classes and see what's wrong with them (closing or cleaning up in a try block instead of a finally block, perhaps; see the sketch below).
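For reference, the leak pattern hinted at there, next to its fix (the resource and method names are hypothetical):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CleanupPattern {
    // Leaky: if process() throws, close() is never reached
    static void leaky(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        process(in);
        in.close();
    }

    // Correct: the finally block runs even when process() throws
    static void safe(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        try {
            process(in);
        } finally {
            in.close();
        }
    }

    static void process(InputStream in) throws IOException {
        while (in.read() != -1) { /* consume */ }
    }
}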
In any case, good luck. I think I'd prefer to find a needle in a haystack; when you find the needle, you at least know you have found it :)
The root of the problem is most likely a memory leak in the webapp that the client is running. In order to track it down, you need to run the app with a representative workload with memory profiling enabled. Take some snapshots, and then use the profiler to compare the snapshots to see where objects are leaking. While source-code would be ideal, you should be able to at least figure out where the leaking objects are being allocated. Then you need to track down the cause.
However, if your customer won't release binaries so that you can run an identical system to what he is running, you are kind of stuck, and you'll need to get the customer to do the profiling and leak detection himself.
BTW, there is not a lot of point in causing the webapp to throw an OutOfMemoryError. It won't tell you why it is happening, and without understanding "why" you cannot do much about it.
EDIT
There is no point in "measuring the limits" if the root cause of the memory leak is in the client's code. Assuming that you are providing a servlet hosting service, the best thing to do is to give the client instructions on how to debug memory leaks ... and step out of the way. And if they have a support contract that requires you to (in effect) debug their code, they ought to provide you with the source code to do your job.