I need to restart a Java process if it produces any memory issues like 'GC overhead limit exceeded' or 'Java heap space'.
Is there some standard way of doing this, like using some tool or options?
If not, how can I put up a watchdog for doing this?
I noticed that my process does not go down when these issues happen.
And a restart brings it back on its feet again.
There are people here who will suggest better options, so this is just my $0.02. What I did a while ago on some app is keep a SoftReference to an object, and once in a while check whether that object is null. SoftReferences are usually (though not guaranteed to be) collected by the GC right before you get really close to OutOfMemory, so that would tell you, roughly, that you are really close to failing.
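A minimal sketch of that canary idea (the class name, polling interval, and reaction are made up for illustration):

import java.lang.ref.SoftReference;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MemoryCanary {
    // The GC clears soft references before throwing OutOfMemoryError,
    // so a cleared canary is a hint that the heap is nearly exhausted.
    private static volatile SoftReference<byte[]> canary =
            new SoftReference<byte[]>(new byte[1024]);

    public static void main(String[] args) {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                if (canary.get() == null) {
                    System.err.println("Canary collected: heap pressure is high");
                    // alert, restart, or shed load here, then re-arm the canary
                    canary = new SoftReference<byte[]>(new byte[1024]);
                }
            }
        }, 10, 10, TimeUnit.SECONDS);
    }
}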
Also, in this case you should be looking at the JVM option:
-XX:SoftRefLRUPolicyMSPerMB=someValue
Where 'someValue' is the number of milliseconds a soft reference will be retained for every free MB of memory. The default is 1000 ms/MB, so if an object is only softly reachable it will survive for 1 second when only 1 MB of heap space is free.
It is probably not the best option, but just a hint, maybe?
Cheers, Eugene.
Runtime#freeMemory() will tell you how much memory is available within the VM - you could monitor that and raise an alarm when it reaches a threshold. Calling System.gc() at that point may free some more memory, but it isn't guaranteed and should be seen as a last resort.
You really need to combine this with understanding why you are running out of memory and trying to do something to fix it.
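A rough sketch of such a check (the 90% threshold and sleep interval are arbitrary; note that freeMemory() only reports free space within the currently allocated heap, so the calculation below also uses maxMemory() to estimate real headroom):

public class HeapWatcher {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            // memory actually in use right now
            long used = rt.totalMemory() - rt.freeMemory();
            // the most the heap is allowed to grow to (-Xmx)
            long max = rt.maxMemory();
            if ((double) used / max > 0.9) {
                System.err.println("Heap usage above 90% - raising alarm");
                // last resort: hint a GC, then re-check before acting
                System.gc();
            }
            Thread.sleep(10000);
        }
    }
}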
You could use the Tanuki Software Java Service Wrapper; it can take an automatic, customizable action when something happens in your application or JVM.
It has a filter feature, described in its documentation as follows:
Filters are a very powerful feature which makes it possible to add new behavior to existing applications without any coding. It works by monitoring the console output of a JVM for sequences of text. When they are found, any number of actions can then be taken.
An example is initiating a JVM restart whenever a specific error occurs. Some applications have known bugs where they stop working once they get into a certain state. This feature makes it possible to work around such problems immediately, until they can be resolved in the application.
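For instance, a filter in wrapper.conf that restarts the JVM when an OutOfMemoryError shows up in the console output would look something like this (property syntax as I recall it from the wrapper docs; verify against your wrapper version):

wrapper.filter.trigger.1=java.lang.OutOfMemoryError
wrapper.filter.action.1=RESTART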
Assuming your Java application exits with status 0 on a graceful shutdown, the shell script below can serve as a watchdog.
#!/bin/bash
...
# Relaunch the JVM until it exits cleanly: a crash or a nonzero
# exit status (e.g. after an OutOfMemoryError) restarts it.
while true; do
java ... MyClass && break
done
We have an application that spawns new JVMs and executes code on behalf of our users. Sometimes those run out of memory, and in that case they behave in very different ways. Sometimes they throw an OutOfMemoryError, sometimes they freeze. I can detect the latter with a very lightweight background thread that stops sending heartbeat signals when running low on memory. In that case, we kill the JVM, but we can never be absolutely sure what the real reason for failing to receive the heartbeat was. (It could just as well have been a network issue or a segmentation fault.)
What is the best way to reliably detect out of memory conditions in a JVM?
In theory, the -XX:OnOutOfMemoryError option looks promising, but it is effectively unusable due to this bug: https://bugs.openjdk.java.net/browse/JDK-8027434
Catching an OutOfMemoryError is actually not a good alternative for well-known reasons (e.g. you never know where it happens), though it does work in many cases.
The cases that remain are those where the JVM freezes and does not throw an OutOfMemoryError. I'm still sure the memory is the reason for this issue.
Are there any alternatives or workarounds? Garbage collection settings to make the JVM terminate itself rather than freezing?
EDIT: I'm in full control of both the forking and the forked JVM as well as the code being executed within those, both are running on Linux, and it's ok to use OS specific utilities if that helps.
The only real option is (unfortunately) to terminate the JVM as soon as possible, since you probably can't change all your code to catch the error and respond. If you don't trust OnOutOfMemoryError (I wonder why it should not use vfork, which is used by Java 8, and it works on Windows), you can at least trigger a heap dump and monitor externally for those files:
java .... -XX:+HeapDumpOnOutOfMemoryError "-XX:OnOutOfMemoryError=kill %p"
After experimenting with this for quite some time, this is the solution that worked for us:
1. In the spawned JVM, catch an OutOfMemoryError and exit immediately, signalling the out-of-memory condition with an exit code to the controller JVM.
2. In the spawned JVM, periodically check the amount of consumed memory of the current Runtime. When the amount of memory used is close to critical, create a flag file that signals the out-of-memory condition to the controller JVM. If we recover from this condition and exit normally, delete that file before we exit. (A sketch of steps (1) and (2) follows this list.)
3. After the controlling JVM joins the forked JVM, it checks the exit code generated in step (1) and the flag file generated in step (2). In addition, it checks whether the file hs_err_pidXXX.log exists and contains the line "Out of Memory Error". (This file is generated by the JVM in case it crashes.)
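A minimal sketch of what steps (1) and (2) could look like in the spawned JVM (the exit code, threshold, and flag-file path are made-up values; halt() is used rather than exit() so that no shutdown hooks run while the heap is exhausted):

import java.io.File;
import java.io.IOException;

public class SpawnedMain {
    // hypothetical values - pick ones that fit your setup
    private static final int OOM_EXIT_CODE = 42;
    private static final File FLAG_FILE = new File("oom.flag");

    public static void main(String[] args) {
        // step (2): background check of consumed memory
        Thread watcher = new Thread(new Runnable() {
            public void run() {
                Runtime rt = Runtime.getRuntime();
                while (true) {
                    long used = rt.totalMemory() - rt.freeMemory();
                    if ((double) used / rt.maxMemory() > 0.95) {
                        try {
                            FLAG_FILE.createNewFile(); // signal the controller
                        } catch (IOException ignored) {
                        }
                    }
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        });
        watcher.setDaemon(true);
        watcher.start();

        try {
            doWork(args);       // the actual user code (placeholder)
            FLAG_FILE.delete(); // normal exit: clear the flag
        } catch (OutOfMemoryError oom) {
            // step (1): exit immediately with a dedicated code
            Runtime.getRuntime().halt(OOM_EXIT_CODE);
        }
    }

    private static void doWork(String[] args) {
        // placeholder for the real workload
    }
}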
Only after implementing all of those checks were we able to handle all cases where the forked JVM ran out of memory. We believe that since then, we have not missed a case where this happened.
The java flag -XX:OnOutOfMemoryError was not used because of the fork problem, and -XX:+HeapDumpOnOutOfMemoryError was not used because a heap dump is more than we need.
The solution is certainly not the most elegant piece of code ever written, but it did the job for us.
In case you have control over both the application and its configuration, the best solution would be to find the underlying cause of the OutOfMemoryError and fix that, instead of trying to hide the symptoms by either catching the error or just restarting JVMs.
From what you describe, it definitely looks like the application running on the JVM either is leaking memory, is running with under-provisioned resources (memory, in your case), or occasionally processes transactions requiring abnormally large chunks of heap. The solutions for those cases differ:
In case of a memory leak, find the underlying cause and have engineers fix it. Tools for this include heap dump analyzers, profilers, and leak detectors.
In case of under-provisioned resources, monitor the application's memory consumption, for example via garbage collection logs, and adjust the sizes of the different memory pools based on what you see. (Flags for enabling GC logs are sketched after this list.)
In case of allocation surges during user transactions, trace down the code causing the surge and have engineers fix it - for instance by rejecting certain user inputs, or by loading and processing the data in smaller batches. Either thread dumps or heap dumps from the processes can guide you towards the solution.
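As an aside, on a HotSpot JVM of this era enabling the GC log is just a couple of flags, along these lines (the log path is an example):
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log ... MyClass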
I am creating a Java program in which my class, say A, has some predefined behavior, but a user can override my class to change that behavior. My script will check whether there is a subclass, and if so, call its behavior - but what if the user has written some blocking code or a memory leak in their code?
This may harm my process. Is there any way in Java to monitor the memory allocated by some method?
Please suggest.
but what if he has written some blocking code or memory leak in his code
First of all, I suggest you document your class well. Describe what the user is allowed to do and what not; give use cases where possible.
For the blocking-code part, if you have timing concerns you could wrap the execution of the method in a Future and let an ExecutorService execute the code. That way you will be able to cancel the execution if it takes too much time (a sketch follows).
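A minimal sketch of that timeout pattern (UserSubclass and the 5-second limit are placeholders):

import java.util.concurrent.*;

public class GuardedCall {
    // stands in for the user's subclass of A
    static class UserSubclass {
        String behavior() throws InterruptedException {
            Thread.sleep(10000); // simulate blocking user code
            return "done";
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> result = executor.submit(new Callable<String>() {
            public String call() throws Exception {
                return new UserSubclass().behavior();
            }
        });
        try {
            // wait at most 5 seconds for the user code to finish
            System.out.println(result.get(5, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            // interrupts the task thread (only helps if it checks interruption)
            result.cancel(true);
            System.err.println("User code timed out and was cancelled");
        } catch (ExecutionException e) {
            System.err.println("User code failed: " + e.getCause());
        } finally {
            executor.shutdownNow();
        }
    }
}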
For the memory-leak issue, I guess you are not really talking about leaks but about increased memory consumption caused by calling the overridden method. True memory leaks are rare in Java, after all.
You will not be able to measure the memory consumption of a single method; that is not how Java works. Memory is global. What would you do if, for example, an external library were loaded (via JNI), or some library on the classpath were called that now uses more memory? You just cannot tell which method that memory belongs to.
Other than monitoring the overall memory consumption, there is no way (someone please tell me if I am wrong).
Oracle has quite a good document about solving memory leaks. It suggests using the NetBeans Profiler as a tool.
http://www.oracle.com/technetwork/java/javase/memleaks-137499.html
I believe you can use the same debugging API to check misbehaving code while it is running, but that will come with a performance penalty and is probably akin to killing a fly with a sledgehammer. I personally would not let anything like that run in production. Instead I would rely on rigorous testing and peer review.
For external monitoring you can use VisualVM or JConsole (part of the JDK); for internal monitoring you can use the Runtime class:
Runtime rt = Runtime.getRuntime();
long totalMem = rt.totalMemory(); // heap currently allocated from the OS
long maxMem = rt.maxMemory();     // upper limit the heap can grow to (-Xmx)
long freeMem = rt.freeMemory();   // unused space within the allocated heap
Via the Thread class you can check the status of all threads. I have never used it directly, because application servers or batch-processing APIs do that job, so I don't need to reinvent the wheel. And I suggest using tools like VisualVM...
EDIT: See also this thread: Why do threads share the heap space?
You cannot analyze the heap usage of a single thread. If you have problems with the execution of foreign code, you should separate it as well as you can from the other threads and analyze the thread or heap dumps. This can be done, as mentioned, with VisualVM or JConsole, which were also shipped by Oracle (or Sun).
Depending on what sort of behavior the subclass can have, we might think of options. For example, if it is a database-related operation, we can force a connection clean-up; if it is file-based, we can force the file to be read through your class and check how big it is; if it is an HTTP call or some other streaming functionality, we can look at enforcing constraints accordingly.
If you are just worried about heap utilization and memory leaks, you might want to look at http://java.dzone.com/tips/getting-jvm-heap-size-used which explains how to get runtime memory programmatically. But then you will have to do periodic checks, and you can never be sure whether a rise in memory usage was caused by the subclass behavior.
I just found this while trying to build an agent that records memory allocations:
In the post How to track any object creation in Java since freeMemory() only reports long-lived objects? it is mentioned that there is an open-source project, Java Allocation Instrumenter, that you can use to register your own callback (it has examples, too), and with that you can obtain what you need.
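Going by the project's own examples, registering a sampler looks roughly like this (it requires launching the JVM with the instrumenter's -javaagent; treat the signature as an approximation and check the project docs):

import com.google.monitoring.runtime.instrumentation.AllocationRecorder;
import com.google.monitoring.runtime.instrumentation.Sampler;

public class AllocationLogger {
    public static void install() {
        // fires on every allocation the agent instruments
        AllocationRecorder.addSampler(new Sampler() {
            public void sampleAllocation(int count, String desc,
                                         Object newObj, long size) {
                // count is the array length, or -1 for non-arrays
                System.out.println("Allocated " + size + " bytes of " + desc);
            }
        });
    }
}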
I started working on a similar project a few days ago, and while researching I found your question and the post above.
I personally needed this kind of code in some unit tests, to check whether too many objects are allocated inside critical methods, and found that using the Runtime class was not appropriate, because the garbage collector may interfere and make the test record negative numbers for allocated memory.
I am running into trouble determining what is wrong with my software.
The situation is:
- The program is always running in the background and performs some actions every X minutes.
- Right now it is set to check a certain directory every minute and see if there are new files in it.
- If there are new files, they are processed and moved somewhere else.
- If not, it simply logs the event and goes idle again.
I assume that when new files appear, CPU usage can be somewhat high.
The problem is that, even if I don't put new files in the directory for many days, the CPU usage rises to ~90% every minute it checks for new entries, then after some seconds returns to <1% usage.
The same process under Windows seems fairly stable, always staying at low CPU usage.
If I monitor the CPU activity over a month, I can see that the average CPU usage of my Java process keeps growing (without new files being added to 'activate' the rest of the process), and I have to restart the process for it to return to lower CPU usage levels.
I really don't understand this behaviour, so I don't really know what may be affecting it.
If the log file is somewhat 'big', like 10-20 MB, would it require that much CPU to log a new entry every minute?
If there are many libraries loaded on the classpath for this process, will the CPU usage be increased even though many of these libraries won't be used most of the time?
Excuse me if I haven't been very clear in my question; I am somewhat new to this.
Thanks everyone in advance, regards.
--edit--
I note your advice; I will do some monitoring and will post some code/results to share with you, and we'll see what you can come up with!
I am really lost right now!
If your custom monitoring code is causing a problem, you could always use something standard like Apache Commons IO's FileAlterationMonitor. It is simple to implement (a sketch follows) and it might be faster than fixing your current code.
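A minimal sketch with the Commons IO monitor classes (the directory path is a placeholder; the poll interval matches the once-a-minute setup from the question):

import java.io.File;
import org.apache.commons.io.monitor.FileAlterationListenerAdaptor;
import org.apache.commons.io.monitor.FileAlterationMonitor;
import org.apache.commons.io.monitor.FileAlterationObserver;

public class DirectoryWatcher {
    public static void main(String[] args) throws Exception {
        FileAlterationObserver observer =
                new FileAlterationObserver(new File("/path/to/watched/dir"));
        observer.addListener(new FileAlterationListenerAdaptor() {
            @Override
            public void onFileCreate(File file) {
                // process/move the new file here
                System.out.println("New file: " + file);
            }
        });
        // poll once per minute
        FileAlterationMonitor monitor = new FileAlterationMonitor(60 * 1000, observer);
        monitor.start();
    }
}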
Are you talking about a simple console application or a Swing/AWT app?
Is the application run every minute via the OS's underlying task scheduler, or is it a simple server process?
If the process is run as a server, how do you launch the VM? (server VM or client VM - the -server switch on the command line)
You may also check your garbage collector; sometimes logging frameworks use up too many objects without releasing their references.
Regards,
M.
I have been writing a small Java application (my first!) that only does a few things at the moment. Currently, it runs the Main class, which launches a GUI class (a class I wrote that extends JFrame and contains only a JTextArea), a class that loads a local file of approximately 40 KB through a BufferedInputStream, and a class that loads an entry from a Java properties file.
Everything works wonderfully; however, I was watching the Windows Task Manager and noticed something that struck me as odd. When I launch the application, the RAM usage jumps to about 40 MB while it loads the local file and pulls a few values from it to display in the JTextArea, which seems normal to me because of the JVM, Java base classes, etc. At this point, however, when the application has finished loading the file, it merely sits idle, as I currently don't have it doing anything else. While it sits idle, as long as the window is active, the application's memory usage climbs by 10-20 KB every second. This strikes me as odd. If I click on another program to make this one the inactive window, the memory still rises, but at a much slower rate (about 10 KB every 3-5 seconds).
I have not tested how far it would go up, but this strikes me as very odd behavior. Is this normal Java behavior? I guess it is possible that my code is leaking memory, but I'm not sure how. I made sure to close the BufferedInputStream I am using, and I can't see what else would cause this.
I'm sorry if my explanation doesn't make sense, but I would appreciate any insight and/or pointers.
UPDATE:
Upon suggestion, I basically stripped my application down to the Main class, which simply calls the GUI class. The GUI class only extends JFrame and sets the window size, close operation, and visibility properties. With these changes, the memory still grows by 10-20 KB, but at a slower rate. This, in conjunction with other advice I have received, leads me to believe that this is just Java. I will continue to play with it and let you all know if I find anything else interesting.
Try monitoring the heap usage with jconsole instead of the Windows Task Manager:
1. Launch your app with the -Dcom.sun.management.jmxremote option, e.g.
java -Dcom.sun.management.jmxremote -jar myapp.jar
2. Launch jconsole from the command line, and connect to the local PID of the Java process you started in the last step.
3. Click over to Memory and watch heap memory (the default display).
If you watch for a while, you'll probably get a "sawtooth" pattern as the memory climbs over time, but then has sharp drop-offs when the garbage collector runs. You can try to "suggest" garbage collection by clicking the so-labelled button.
When you do this, does the memory usage drop down to the same minimum level, or is the overall minimum increasing over the course of several minutes? If the minimum usage increases, then you have a memory leak. If it always returns to the same minimum level, then you're fine.
Congrats on your first app! Now, a couple of things to think about. First, the Windows Task Manager is not a great resource for understanding how quickly your VM is growing. Instead, you should monitor your garbage collection stats in the console (use the -verbose:gc command-line param). Second, if you are concerned about potential leaks and the growth of the VM, there are a bunch of great profilers out there that are easy to use and can help you diagnose memory issues; check out these two posts for some profiler options.
Congratulations for your first Java app!
Java applications run in a virtual machine. The virtual machine has a maximum heap size (configurable with -Xmx; the default depends on the JVM version and the machine). As long as the application stays well below that limit, the garbage collector won't be in a hurry to hunt for "dead" memory blocks, so gradual growth is normal. Try lowering the limit, for example to 32 MB with -Xmx32m, and see whether the usage levels off.
Is this normal Java behavior?
No.
I guess it is possible that my code could be leaking memory
That is definitely the cause. Please post your source code, otherwise further diagnosis isn't possible.
I noticed you are using Swing; make sure you launch your JFrame on the event dispatch thread, using the invokeLater(Runnable) method (a sketch follows).
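A minimal sketch of launching a frame on the EDT (the frame's title and contents are placeholders):

import javax.swing.JFrame;
import javax.swing.JTextArea;
import javax.swing.SwingUtilities;

public class Main {
    public static void main(String[] args) {
        // Swing components must be created and shown on the
        // event dispatch thread, not on the main thread.
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame("My First App");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.add(new JTextArea());
                frame.setSize(400, 300);
                frame.setVisible(true);
            }
        });
    }
}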
If you are using any sort of collections, make sure you clear them once done.
Since you are doing some file IO, make sure you close all of the streams involved in the IO operations after you are done with them.
If you are using any event listeners, remember to explicitly remove event listeners when they are no longer necessary.
One thing you could try is experimenting. Take your application and remove the file IO and see what happens. Does the memory usage still climb as before? Now restore your application to normal and remove the text area - does the memory still climb as before? Etc. This will help you determine what the source is, and you can focus your efforts there. Most likely you will uncover what you are after by doing this.
Another useful diagnosis tool is to use System.gc() at particular points in time, usually after the heavy-lifting blocks of code. This will tell the JVM to perform a garbage collection at that point in the execution, rather than at another time determined by memory consumption. This will help you to take into account any periodic fluctuations in the memory usage of your application.
Failing that, you can always use a memory profiler. If you are using the NetBeans IDE, there's one built right into it. For Eclipse, there are several plugins that can perform profiling.
It is normal. Some background calculation might leave dead objects around, which the JVM isn't in a hurry to clean up. Eventually they will be garbage collected, when max memory is approached.
Leave your program running overnight, and your machine won't blow up.
I have an application running on WebSphere Application Server 6.0, and it crashes nearly every day because of an Out-Of-Memory error. From verbose GC output it is certain that there are memory leaks (many of them).
Unfortunately the application is provided by an external vendor, and getting things fixed is a slow and painful process. As part of that process I need to gather the logs and heap dumps each time the OOM occurs.
Now I'm looking for some way to automate this. The fundamental problem is how to detect the OOM condition. One way would be to create a shell script which periodically searches for new heap dumps. That approach seems kind of dirty to me. Another approach might be to leverage JMX somehow, but I have little to no experience in this area and no clear idea how to do it.
Or is there some kind of trigger/hook in WAS for this? Thank you very much for any advice!
You can pass the following arguments to the JVM on startup, and a heap dump will be automatically generated on an OutOfMemoryError. The second argument lets you specify the path for the heap dump file. By using this, you could at least check for the existence of a specific file to see whether a heap dump has occurred.
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=<value>
I see two options if you want heap dumping automated but @Mark's solution with a heap dump on OOM isn't satisfactory:
1. You can use the MemoryMXBean to detect high memory pressure, and then programmatically create a heap dump if the usage (or usage delta) seems high (see the sketch after this list).
2. You can periodically get memory usage info and generate heap dumps with a cron'd shell script using jmap (works both locally and remotely).
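A rough sketch of the first option. The 90% threshold and dump-file name are made up, and the dump itself uses the HotSpot-specific HotSpotDiagnosticMXBean, so this won't work on non-HotSpot VMs:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import com.sun.management.HotSpotDiagnosticMXBean;

public class DumpOnPressure {
    public static void main(String[] args) throws Exception {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            if ((double) heap.getUsed() / heap.getMax() > 0.9) {
                // true = dump only live objects (forces a GC first)
                diag.dumpHeap("pressure-" + System.currentTimeMillis() + ".hprof", true);
                break; // one dump is enough for this sketch
            }
            Thread.sleep(5000);
        }
    }
}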
It would be nice if you could have a callback on OOM, but, uhm, that callback probably would just crash with an OOM error. :)
Have you looked at JConsole? It uses JMX to give you visibility into a variety of JVM metrics, including memory info. It would probably be worth monitoring your application with it to begin with, to get a feel for how and when the memory is consumed. You may find the memory is consumed uniformly over the day, or only when certain features are used.
Take a look at the detecting low memory section of the above link.
If you need to, you can then write a JMX client to watch the application automatically and trigger whatever actions are required. JConsole will indicate which JMX methods you need to poll.
An alternative to waiting until the application has crashed may be to script a controlled restart, say every night, if you're optimistic that it can survive for twelve hours...
Maybe WebSphere can even do that for you!?
You could add a listener class (a session-scoped or application-scoped attribute listener) that is called each time a new object is added to session/application scope.
In it, you can check the total memory used by the app (and log it), and also request a GC run (note that invoking it does not guarantee that a GC will actually run).
(The above covers the logging part and GC based on usage growth.)
For scheduled GC:
In addition, you can keep a timer task class that runs every few hours and requests a GC (see the sketch below).
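A tiny sketch of that scheduled request (the interval is arbitrary; System.gc() is only a hint to the JVM, and most would advise against relying on it):

import java.util.Timer;
import java.util.TimerTask;

public class ScheduledGc {
    public static void start() {
        Timer timer = new Timer("gc-requester", true); // daemon thread
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                System.gc(); // merely a suggestion to the JVM
            }
        }, 0, 6L * 60 * 60 * 1000); // every 6 hours (arbitrary)
    }
}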
Our experience with ITCAM has been less than stellar from the monitoring perspective. We dumped it in favor of CA Wily Introscope.
Have you had a look at the jvisualvm tool in the latest Java 6 JDKs?
It is great for inspecting running code.
I'd dispute that you need the heap dumps at the moment the OOM occurs. Periodic gathering of the information over time should give a picture of what's going on.
As has been observed, various tools exist for analysing these problems. I have had success with ITCAM for WebSphere; as an IBMer I have ready access to that. We were very quickly able to identify the exact lines of code in our problem situation.
If there's any way you can get a tool of that nature, then that's the way to go.
It should be possible to write a simple program that gets the process list from the kernel and scans it to see whether your WAS process is still running. On a Unix box you could probably whip up something in Perl in a few minutes (if you know Perl); I'm not sure how difficult it would be under Windows. Run it as a scheduled task every five minutes or so, and if the process doesn't show up, have it fork off another process that deals with the heap dump and restarts WAS. A shell sketch follows.
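For example, a minimal cron-able shell sketch (the process pattern, dump directory, and restart command are placeholders; adjust them to your installation):

#!/bin/bash
# Hypothetical watchdog: run from cron every 5 minutes.
if ! pgrep -f 'WebSphere.*server1' > /dev/null; then
    # process is gone: archive any heap dumps and logs first
    mkdir -p /var/backups/was-oom
    mv /opt/WebSphere/profiles/default/*.phd /var/backups/was-oom/ 2>/dev/null
    # then restart the server
    /opt/WebSphere/AppServer/bin/startServer.sh server1
fi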