I am creating a Java program in which my class, say A, has some predefined behavior. However, users can override my class to change that behavior. My code will check whether a subclass exists and, if so, call its behavior instead. But what if the user has written blocking code or a memory leak in their code?
This may harm my process. Is there any way in Java to monitor the memory allocated by a particular method?
Please suggest.
but what if the user has written blocking code or a memory leak in their code
First of all, I suggest you document your class well. Describe what the user is allowed to do and what is not allowed. Give use cases where possible.
For the blocking-code part: if you have timing issues, you could wrap the execution of the method in, say, a Future and let an ExecutorService execute the code. That way you will be able to cancel the execution if it takes too much time.
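A minimal sketch of that idea, assuming the overridable method on your class A is called doWork() (the names are illustrative only):

import java.util.concurrent.*;

public class SafeCaller {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Runs a.doWork() but gives up after the given timeout.
    public void callWithTimeout(final A a, long timeoutMillis) throws Exception {
        Future<?> future = executor.submit(new Runnable() {
            public void run() {
                a.doWork(); // possibly overridden by the user
            }
        });
        try {
            future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // Try to stop the runaway code; this only works if it reacts to interruption.
            future.cancel(true);
        }
    }
}

Note that cancel(true) merely interrupts the thread; truly hostile code that ignores interruption will keep running.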
For the memory-leak issue: I guess you are not really talking about memory leaks, but about increased memory consumption caused by calling the overridden method. Memory leaks in Java are rare, after all.
You will not be able to detect the memory consumption of a single method; that is not how Java works. Memory is global. What would you do if, for example, an external library were loaded (via JNI), or some library on the classpath were called that now uses more memory? You just cannot tell.
Other than monitoring the overall memory consumption, there is no way to do this (someone please correct me if I am wrong).
Oracle has quite a good document about solving memory leaks. It suggests using the NetBeans Profiler as a tool.
http://www.oracle.com/technetwork/java/javase/memleaks-137499.html
I believe you could use the same debugging API to check for misbehaving code while it is running, but that comes with a performance penalty and is probably akin to killing a fly with a sledgehammer. I personally would not let anything like that run in production. Instead, I would rely on rigorous testing and peer review.
For external monitoring you can use VisualVM or JConsole (both part of the JDK); for internal monitoring you can use the Runtime class:
Runtime rt = Runtime.getRuntime();
long totalMem = rt.totalMemory(); // heap currently reserved by the JVM
long maxMem = rt.maxMemory();     // upper limit the heap may grow to
long freeMem = rt.freeMemory();   // unused memory within totalMemory
Via the Thread class you can check the status of all threads. I have never used it directly, because application servers and batch-processing APIs do that job for me, so I don't need to reinvent the wheel. And I suggest using tools like VisualVM.
EDIT: See also this thread: Why do threads share the heap space?
You cannot analyze the heap usage of a single thread. If you have problems with the execution of foreign code, you should separate it as well as you can from other threads and analyze the thread or heap dumps. This can be done, as mentioned, with VisualVM or JConsole, both provided by Oracle (formerly Sun).
Depending on what sort of behavior the subclass can exhibit, there are options to consider. For example, if it is a database-related operation, you can force them to clean up connections; if it is file-based, you can force them to read the file through your class and check how big the file is; if it is an HTTP call or some other streaming functionality, you can look at enforcing constraints accordingly.
If you are just worried about heap-size utilization and memory leaks there, you might want to look at http://java.dzone.com/tips/getting-jvm-heap-size-used which explains how to get the runtime memory programmatically. But then you will have to do periodic checks, and you can never be sure whether a given memory usage is caused by the subclass's behavior.
I just found this while trying to build an agent that records memory allocations:
In the post How to track any object creation in Java since freeMemory() only reports long-lived objects? it is pointed out that there is an open-source project, Java Allocation Instrumenter, that you can use to register your own callback (it comes with examples, too); using it, you should be able to obtain what you need.
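Going by the project's examples, registering a callback looks roughly like this (the JVM must be started with the instrumenter's -javaagent jar; verify the exact signatures against the project itself):

import com.google.monitoring.runtime.instrumentation.AllocationRecorder;
import com.google.monitoring.runtime.instrumentation.Sampler;

public class AllocationLogger {
    public static void install() {
        AllocationRecorder.addSampler(new Sampler() {
            public void sampleAllocation(int count, String desc, Object newObj, long size) {
                // Called for every allocation the agent instruments; keep this cheap.
                System.out.println("Allocated " + desc + " (" + size + " bytes)");
            }
        });
    }
}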
I started working on a similar project a few days ago, and while researching I found your question and the post below.
I personally needed this kind of code in some unit tests, to check whether a method allocates too many objects inside critical sections, and I found that using the Runtime class was not appropriate, because the garbage collector may interfere and the test then records negative numbers for allocated memory.
Related
I am hoping to do a profiling analysis of my Java project. To get the results, I want to add a "hook" to the JVM so that every time a heap access occurs, the hook is called and does some tracing. I have been looking into JVMTI, but it does not seem to give me what I expect.
I have several questions:
Is it possible to add such a hook?
If so, what are the correct tools/interfaces to use?
If no existing tools do this, can I achieve it by modifying the JVM codebase?
Thanks.
I want to add a "hook" to the JVM so that every time a heap access occurs
You can't really do this in Java, as the hook itself would access the heap and call itself. Even if you worked around this, it would make the program impossibly slow.
What you can do is use the debugging interface to set a breakpoint after each instruction, inspect the instruction, and see whether it accessed the heap. This would be perhaps 10,000x slower than normal.
An alternative is to rewrite the bytecode using Instrumentation to trace each memory access. This might be only a few hundred times slower.
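The skeleton of such an agent might look like the sketch below; the actual bytecode rewriting (e.g. with ASM) is the hard part and is omitted here:

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class TracingAgent {
    // Invoked by the JVM when started with -javaagent:tracing-agent.jar
    // (the jar's manifest must declare Premain-Class: TracingAgent).
    public static void premain(String args, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // Rewrite the bytecode here to insert a tracing call before
                // each field/array access. Returning null leaves the class unchanged.
                return null;
            }
        });
    }
}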
To do what you propose efficiently, you could use https://software.intel.com/en-us/articles/intel-performance-counter-monitor which is used by tools such as perf on Linux. This requires in-depth knowledge of the processor you are using.
I have a small Java application running a set of computationally heavy tasks. For processing the tasks, I use an external library which does most of the computation via native methods and some C code. Unfortunately, after solving one task, the library suffers from heavy memory leaks and can therefore only solve one task per application execution.
The memory problem is known to the library's developers, but it has not been fixed yet and maybe never will be (it supposedly has something to do with the Java garbage collector not working properly with the native interface). Since there is no alternative to this particular library, I am looking for options to solve the tasks through sequential application executions.
Currently, I have a bash wrapper script which gets a list of tasks to execute; for each task, the script calls the application with just that single task.
Since tasks often need the results of previous tasks, this involves serializing and deserializing execution results to files. This does not seem like good practice to me, also because the user basically has no way to interact with the program's control flow.
Does anybody have an idea how I can do this sequential task execution inside one single Java application? I guess this would involve starting a new JVM for each task execution, hopefully transferring only the task result, and not the memory leaks, from the new JVM to my application.
Edit, providing further information:
Changing the root of the problem: unfortunately, the library is not open source, and I have access neither to the native methods nor to the Java interface API.
New processes / JVMs: are these the same thing in this context? I don't have much experience with the Java process API or starting new JVMs. My assumption is that this would involve starting a separate Java program with its own main method using ProcessBuilder.start()? (See the sketch after this list.)
Exchange of data: it is only a couple of kilobytes, so performance is not an issue. Still, a solution without files would be preferable, but if I understand correctly, memory-mapped files are also backed by local files. Sockets, on the other hand, do sound promising.
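A rough sketch of what I imagine (TaskRunner being a hypothetical class with its own main method):

import java.io.IOException;

public class TaskLauncher {
    // Launches a fresh JVM running TaskRunner for a single task.
    public static int runTask(String taskName) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "java", "-cp", System.getProperty("java.class.path"),
                "TaskRunner", taskName);
        pb.inheritIO();          // forward the child's output to this console
        Process p = pb.start();
        return p.waitFor();      // block until the task JVM has finished
    }
}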
Funnily enough, I've faced the same issue. By definition, you need to accept that nothing will be best practice or nice when you are stuck with a faulty library that you must use but cannot upgrade.
The solution we came up with was to isolate calls to the library in its own process. This process was a child of a master process. The master process contained the good code, the child the bad. We were then able to keep track of the number of invocations of the child process and tear it down once it reached a certain number. We knew that we could get away with X invocations before the child process was corrupted.
Because of the nature of our problem, bringing up a fresh process enabled us to have another X invocations before repeating.
Any state was returned to the master process on a successful invocation. Any state gathered during an unsuccessful invocation was discarded and we started again.
Again, none of the above is "nice" but it worked for us.
For what it's worth, if I did this again, I'd use Akka and remote actors which would make all the sub-process, remoting etc far simpler.
That depends. Do you have the source code of this external library, i.e. can you recompile it? The easiest approach is obviously to fix the leak at its root; this might, however, be impractical. If the library, as you say, is implemented via native methods and some C code, I do not think the problem has anything to do with the Java garbage collector not working properly. Native methods and C code do not normally store their data on the JVM's heap and are therefore not garbage-collected; it is the job of the library to clean up after itself.
If the leak is indeed in the bit of Java code that the library exposes, then there is a way. Memory leaks in Java occur by forgetting about references; consider the following example:
class Foo {
    private ExpensiveObject eo;

    Foo(ExpensiveObject eo) {
        this.eo = eo;
    }
}
The ExpensiveObject stays alive (at least) as long as the Foo instance referencing it. If you (or your library) do not isolate instance life-cycles well enough, you get into trouble. If you do not have a chance to refactor, you can, however, use reflection to clean up the biggest mess from another place in your code:
// requires java.lang.reflect.Field
void release(Foo foo) throws Exception {
    Field f = Foo.class.getDeclaredField("eo"); // may throw NoSuchFieldException
    f.setAccessible(true);  // bypass the private modifier
    f.set(foo, null);       // drop the reference so the ExpensiveObject can be collected
}
This should, however, be considered a last resort, as it is quite a hack.
Alternatively, a better approach is normally to fork another JVM instance to do the dirty work; it seems you are doing something similar already. By forking a JVM, you isolate the use of memory at the process level. Once the process dies, all its memory is released by the OS. The problem with this approach is normally platform compatibility, but as you already use a native library, this does not worsen your situation.
You say that you currently use files to communicate between these processes. Why do you need to store data in a file? Consider using sockets or memory-mapped files (NIO) instead, if performance is important for this matter.
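A minimal sketch of the socket variant, assuming the task result is Serializable and the worker JVM connects back on an agreed port (9000 here is arbitrary):

import java.io.ObjectInputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ResultReceiver {
    // Parent side: wait for the worker JVM to connect and send its result.
    public static Object receive(int port) throws Exception {
        try (ServerSocket server = new ServerSocket(port)) {
            Socket worker = server.accept();
            try (ObjectInputStream in = new ObjectInputStream(worker.getInputStream())) {
                return in.readObject(); // the task result, a few kilobytes
            }
        }
    }
}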
I need to restart a Java process if it produces memory issues like 'GC overhead limit exceeded' or 'Java heap space'.
Is there a standard way of doing this, e.g. using some tool or option?
If not, how can I put up a watchdog to do it?
I noticed that my process does not go down when these issues happen, and a restart brings it back on its feet again.
There are people here who will suggest better options, so this is just my $0.02. What I did a while ago on some app is keep a SoftReference to an object, and once in a while check whether that object had become null. SoftReferences are (usually, but not guaranteed to be) collected by the GC right before you get really close to an OutOfMemoryError, so that would somehow tell you that you are really close to failing.
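A rough sketch of that canary idea (the 1 MB size is arbitrary):

import java.lang.ref.SoftReference;

public class MemoryCanary {
    // The GC clears SoftReferences before throwing OutOfMemoryError, so a
    // cleared canary is a (non-guaranteed) hint that memory is running low.
    private SoftReference<byte[]> canary = new SoftReference<byte[]>(new byte[1024 * 1024]);

    public boolean isMemoryLow() {
        if (canary.get() == null) {
            canary = new SoftReference<byte[]>(new byte[1024 * 1024]); // re-arm
            return true;
        }
        return false;
    }
}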
Also, in this case you should be looking at the JVM option:
-XX:SoftRefLRUPolicyMSPerMB=someValue
where someValue is the number of milliseconds a soft reference will remain alive for every free MB of memory. The default is 1s/MB, so if an object is only softly reachable, it will last 1 second when only 1 MB of heap space is free.
It is probably not the best option, but maybe it is a useful hint.
Cheers, Eugene.
Runtime#freeMemory() will tell you how much memory is available within the VM; you could monitor that and raise an alarm when it reaches a threshold. Calling System.gc() at that point may free some more memory, but it isn't guaranteed and should be seen as a last resort.
You really need to combine this with understanding why you are running out of memory and trying to do something to fix it.
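For example, a periodic check could look like this sketch (0.9 is an arbitrary threshold):

public class MemoryWatchdog {
    // Returns true when the used heap exceeds the given fraction of the maximum.
    public static boolean aboveThreshold(double fraction) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return used > rt.maxMemory() * fraction;
    }
}

A watchdog thread could call aboveThreshold(0.9) every few seconds and raise the alarm (or initiate a restart) when it returns true.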
You could use the Tanuki Software Java Service Wrapper; it provides automatic, customizable responses when something happens in your application or JVM.
It has a filter feature, described in its documentation as follows:
Filters are a very powerful feature which makes it possible to add new behavior to existing applications without any coding. It works by monitoring the console output of a JVM for sequences of text. When they are found, any number of actions can then be taken.
An example is initiating a JVM restart whenever a specific error occurs. Some applications have known bugs where they stop working once they get into a certain state. This feature makes it possible to work around such problems immediately, until they can be resolved in the application.
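As an illustration only (verify the exact property names against the wrapper's documentation), a filter that restarts the JVM when an OutOfMemoryError appears in the console output would look roughly like this in wrapper.conf:

# Restart the JVM whenever an OutOfMemoryError shows up in the console output
wrapper.filter.trigger.1=java.lang.OutOfMemoryError
wrapper.filter.action.1=RESTART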
Assuming your Java application returns 0 upon graceful shutdown, a shell script like the one below can serve as a watchdog:
#!/bin/bash
...
while true; do
    # keep restarting until the JVM exits cleanly (exit code 0)
    java ... MyClass && break
done
I am trying to reproduce a java.lang.OutOfMemoryError in JBoss 4, which one of our clients got, presumably from running their J2EE applications over days/weeks.
I am trying to find a way to make the webapp spit out a java.lang.OutOfMemoryError in a matter of minutes (instead of days/weeks).
One thing that comes to mind is to write a Selenium script and have it bombard the webapp.
Another thing we could do is reduce the JVM heap size, but we would prefer not to, as we want to see the limits of our system.
Any suggestions?
PS: I don't have access to the source code, as we just provide a hosting service (of course, I could decompile the class files...)
If you don't have access to the source code of the J2EE app in question, the options that come to mind are:
Reduce the amount of RAM available to the JVM. You've already identified this one and said you don't want to do it.
Create a J2EE app (it could probably be just a JSP), configure it to run within the same JVM as the target app, and have it allocate a ridiculous amount of memory. That will reduce the memory available to the target app, hopefully so that it fails in the way you're trying to force.
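A trivial sketch of such a memory hog (names and sizes are arbitrary):

import java.util.ArrayList;
import java.util.List;

public class MemoryHog {
    // Holds on to the allocated chunks forever, starving the target app.
    private static final List<byte[]> hoard = new ArrayList<byte[]>();

    public static void consume(int megabytes) {
        for (int i = 0; i < megabytes; i++) {
            hoard.add(new byte[1024 * 1024]); // 1 MB chunks that are never released
        }
    }
}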
Try using profiling tools to investigate the memory leakage. It is also good to investigate the memory dumps taken after the OOM happens, together with the logs. IMHO, reducing the memory is not the right way to investigate, because you may run into issues not connected with the real production one.
Do both, but in a controlled fashion:
Reduce the available memory to the absolute minimum (using -Xms1M -Xmx2M, as an example, though I fear your app won't even load with such limits).
Perform controlled "nuclear irradiation": run Selenium scripts against each of the known-working URLs before attacking the presumed guilty one.
Finally, unleash the power that shall not be raised: start VisualVM and any other monitoring software you can think of (DB execution is a usual suspect).
If you are using Sun Java 6, you may want to consider attaching to the application with jvisualvm from the JDK. This will allow you to do in-place profiling without needing to alter anything in your scenario, and it may immediately reveal the culprit.
If you don't have the source, decompile it, at least if you think the terms of use allow this and you live in a free country. You can use Java Decompiler or JAD.
In addition to all the other answers, I must say that even if you can reproduce an OutOfMemoryError and find out where it occurred, you probably haven't found out anything worth knowing.
The trouble is that an OOM occurs when an allocation cannot take place. The real problem, however, is not that allocation, but the fact that other allocations, in other parts of the code, have not been de-allocated (de-referenced and garbage-collected). The failed allocation here might have nothing to do with the source of the trouble (no pun intended).
This problem is larger in your case, as it might take weeks before trouble starts, suggesting either a sparsely used application, an abnormal code path, or a relatively HUGE amount of memory in relation to what would be necessary if the code were OK.
It might be a good idea to ask around why this particular amount of memory is configured for JBoss and not something different. If it is recommended by the supplier, then maybe they already know about the leak and require this to mitigate the effects of the bug.
For this kind of error it really pays to have some idea in which code path the problem occurs, so you can do targeted tests. And test with a profiler, so you can see at run time which objects (Lists, Maps and such) are growing without shrinking.
That would give you a chance to decompile the correct classes and see what's wrong with them. (Closing or cleaning in a try block instead of a finally block, perhaps.)
In any case, good luck. I think I'd prefer to find a needle in a haystack; when you find the needle, you at least know you have found it. :)
The root of the problem is most likely a memory leak in the webapp that the client is running. In order to track it down, you need to run the app with a representative workload with memory profiling enabled. Take some snapshots, and then use the profiler to compare the snapshots to see where objects are leaking. While source-code would be ideal, you should be able to at least figure out where the leaking objects are being allocated. Then you need to track down the cause.
However, if your customer won't release binaries so that you can run a system identical to what they are running, you are kind of stuck, and you'll need to get the customer to do the profiling and leak detection themselves.
BTW, there is not a lot of point in causing the webapp to throw an OutOfMemoryError. It won't tell you why it is happening, and without understanding "why" you cannot do much about it.
EDIT
There is no point in "measuring the limits" if the root cause of the memory leak is in the client's code. Assuming that you are providing a servlet hosting service, the best thing to do is to provide the client with instructions on how to debug memory leaks... and step out of the way. And if they have a support contract that requires you to (in effect) debug their code, they ought to provide you with the source code to do your job.
I have an application running on WebSphere Application Server 6.0, and it crashes nearly every day because of an OutOfMemoryError. From the verbose GC output it is certain there are memory leaks (many of them).
Unfortunately, the application is provided by an external vendor, and getting things fixed is a slow and painful process. As part of that process, I need to gather the logs and heap dumps each time the OOM occurs.
Now I am looking for a way to automate this. The fundamental problem is how to detect the OOM condition. One way would be to create a shell script which periodically searches for new heap dumps. That approach seems kind of dirty to me. Another approach might be to leverage JMX somehow, but I have little or no experience in this area and no clear idea how to do it.
Or does WAS have some kind of trigger/hook for this? Thank you very much for any advice!
You can pass the following arguments to the JVM on startup, and a heap dump will be generated automatically on an OutOfMemoryError. The second argument lets you specify the path of the heap dump file. Using this, you could at least check for the existence of a specific file to see whether a heap dump has occurred.
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=<value>
I see two options if you want heap dumping automated but @Mark's solution (heap dump on OOM) isn't satisfactory.
You can use the MemoryMXBean to detect high memory pressure, and then programmatically create a heap dump if the usage (or usage delta) seems high; see the sketch after this list.
You can periodically get memory-usage info and generate heap dumps with a cron'd shell script using jmap (works both locally and remotely).
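A sketch of the MemoryMXBean route (90% is an arbitrary threshold; the heap-dump action itself is left out):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;

public class HeapWatcher {
    public static void install() {
        // Ask to be notified when any heap pool crosses 90% of its maximum.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            long max = pool.getUsage().getMax(); // -1 if undefined for this pool
            if (pool.getType() == MemoryType.HEAP
                    && pool.isUsageThresholdSupported() && max > 0) {
                pool.setUsageThreshold((long) (max * 0.9));
            }
        }
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ((NotificationEmitter) memory).addNotificationListener(new NotificationListener() {
            public void handleNotification(Notification n, Object handback) {
                if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(n.getType())) {
                    // High memory pressure: trigger a heap dump here, e.g. via
                    // the HotSpotDiagnosticMXBean's dumpHeap operation.
                }
            }
        }, null, null);
    }
}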
It would be nice if you could have a callback on OOM, but, uhm, that callback probably would just crash with an OOM error. :)
Have you looked at JConsole? It uses JMX to give you visibility of a variety of JVM metrics, including memory info. It would probably be worth monitoring your application with it to begin with, to get a feel for how and when the memory is consumed. You may find the memory is consumed uniformly over the day, or only when certain features are used.
Take a look at the detecting low memory section of the above link.
If you need to, you can then write a JMX client to watch the application automatically and trigger whatever actions are required. JConsole will indicate which JMX methods you need to poll.
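A minimal sketch of such a client, assuming the server exposes JMX remoting on port 9999 (host and port are placeholders):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteMemoryPoller {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        MBeanServerConnection connection = connector.getMBeanServerConnection();
        MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                connection, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
        System.out.println("Heap used: " + memory.getHeapMemoryUsage().getUsed() + " bytes");
        connector.close();
    }
}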
An alternative to waiting until the application has crashed may be to script a controlled restart, say every night, if you are optimistic that it can survive for twelve hours...
Maybe WebSphere can even do that for you!?
You could add a listener class (a session-scoped or application-scoped attribute listener) that is called each time a new object is added to session/application scope; see the sketch at the end of this answer.
In it, you can check the total memory used by the app (and log it) and also request a GC run (note that invoking it does not guarantee the GC will actually run).
(The above covers the logging part and GC based on usage growth.)
For scheduled GC:
In addition, you can keep a timer task class that runs every few hours and requests a GC.
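A sketch of such a listener for the session-scope case (registered in web.xml; the logging is illustrative):

import javax.servlet.http.HttpSessionAttributeListener;
import javax.servlet.http.HttpSessionBindingEvent;

public class MemoryLoggingListener implements HttpSessionAttributeListener {
    public void attributeAdded(HttpSessionBindingEvent event) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        System.out.println("Session attribute '" + event.getName()
                + "' added; approx. heap used: " + used + " bytes");
    }

    public void attributeRemoved(HttpSessionBindingEvent event) { }
    public void attributeReplaced(HttpSessionBindingEvent event) { }
}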
Our experience with ITCAM has been less than stellar from the monitoring perspective. We dumped it in favor of CA Wily Introscope.
Have you had a look at the jvisualvm tool in the latest Java 6 JDKs?
It is great for inspecting running code.
I'd dispute that you need the heap dumps at the moment the OOM occurs. Periodic gathering of the information over time should give a picture of what's going on.
As has been observed, various tools exist for analysing these problems. I have had success with ITCAM for WebSphere; as an IBMer I have ready access to it. We were very quickly able to identify the exact lines of code in our problem situation.
If there's any way you can get a tool of that nature then that's the way to go.
It should be possible to write a simple program that gets the process list from the kernel and scans it to see whether your WAS process is still running. On a Unix box you could probably whip something up in Perl in a few minutes (if you know Perl); I'm not sure how difficult it would be under Windows. Run it as a scheduled task every five minutes or so, and if the process doesn't show up, have it fork off another process that deals with the heap dump and restarts WAS.