I have read many articles about why it is bad practice to call System.gc().
I understand that there is no guarantee at all that the JVM will react to this call.
And I know that System.gc() is often a pretty good indicator of fundamentally broken code.
But suppose I have a web backend server that needs to process many resources during its load stage, and after loading, memory is full of garbage.
And I know that my server will only ever run on Ubuntu with HotSpot JDK 1.8, and this JDK does react to System.gc().
Is it bad to call System.gc() only once, after loading and before I open the server to users?
Is there anyone who does the same thing?
There is no need to call it at all. It isn't guaranteed to do anything, it is guaranteed that a GC will be performed before an OutOfMemoryError can be thrown, and if it does do something it may waste CPU time.
It's fine to call System.gc(). Do use a memory analysis tool to check that it's helping in your case.
The spec can't guarantee any results, since the VM might have just done a GC.
Related
We have an application that spawns new JVMs and executes code on behalf of our users. Sometimes those run out of memory, and in that case they behave in very different ways. Sometimes they throw an OutOfMemoryError, sometimes they freeze. I can detect the latter with a very lightweight background thread that stops sending heartbeat signals when running low on memory. In that case, we kill the JVM, but we can never be absolutely sure what the real reason for failing to receive the heartbeat was. (It could just as well have been a network issue or a segmentation fault.)
What is the best way to reliably detect out of memory conditions in a JVM?
In theory, the -XX:OnOutOfMemoryError option looks promising, but it is effectively unusable due to this bug: https://bugs.openjdk.java.net/browse/JDK-8027434
Catching an OutOfMemoryError is actually not a good alternative for well-known reasons (e.g. you never know where it happens), though it does work in many cases.
The cases that remain are those where the JVM freezes and does not throw an OutOfMemoryError. I'm still sure the memory is the reason for this issue.
Are there any alternatives or workarounds? Garbage collection settings to make the JVM terminate itself rather than freezing?
EDIT: I'm in full control of both the forking and the forked JVM as well as the code being executed within those, both are running on Linux, and it's ok to use OS specific utilities if that helps.
The only real option is (unfortunately) to terminate the JVM as soon as possible.
You probably can't change all your code to catch the error and respond, and if you don't trust OnOutOfMemoryError (I wonder why it shouldn't use vfork, which is used by Java 8, and it does work on Windows), you can at least trigger a heap dump and monitor externally for those files:
java .... -XX:+HeapDumpOnOutOfMemoryError "-XX:OnOutOfMemoryError=kill %p"
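For the "monitor externally" part, a minimal sketch (my assumption, not part of the original answer) that uses java.nio's WatchService to watch a directory for the .hprof files produced by -XX:+HeapDumpOnOutOfMemoryError; the directory, the file-suffix check and the reaction are all illustrative:

import java.nio.file.*;

public class HeapDumpWatcher {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get(args.length > 0 ? args[0] : ".");   // directory the JVM dumps into
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
        while (true) {
            WatchKey key = watcher.take();                       // blocks until something is created
            for (WatchEvent<?> event : key.pollEvents()) {
                Path created = (Path) event.context();
                if (created.toString().endsWith(".hprof")) {
                    System.out.println("Heap dump detected: " + created);
                    // react here: alert, restart the worker JVM, archive the dump, ...
                }
            }
            key.reset();
        }
    }
}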
After experimenting with this for quite some time, this is the solution that worked for us:
In the spawned JVM, catch an OutOfMemoryError and exit immediately, signalling the out of memory condition with an exit code to the controller JVM.
In the spawned JVM, periodically check the amount of memory consumed by the current Runtime (a sketch follows this list). When the amount of memory used is close to critical, create a flag file that signals the out-of-memory condition to the controller JVM. If we recover from this condition and exit normally, delete that file before we exit.
After the controlling JVM joins the forked JVM, it checks the exit code generated in step (1) and the flag file generated in step (2). In addition to that, it checks whether the file hs_err_pidXXX.log exists and contains the line "Out of Memory Error". (This file is generated by java in case it crashes.)
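A minimal sketch of step (2) as it might run inside the spawned JVM; the flag file name, the 95% threshold and the one-second interval are assumptions for illustration:

import java.io.IOException;
import java.nio.file.*;
import java.util.concurrent.*;

class LowMemoryFlag {
    // Start a daemon thread that writes the flag file when the heap gets close to full.
    static void install(Path flagFile, double threshold) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "low-memory-watch");
            t.setDaemon(true);
            return t;
        });
        scheduler.scheduleAtFixedRate(() -> {
            Runtime rt = Runtime.getRuntime();
            long used = rt.totalMemory() - rt.freeMemory();
            if ((double) used / rt.maxMemory() > threshold) {
                try {
                    Files.createFile(flagFile);            // signals the controller JVM
                } catch (FileAlreadyExistsException ignored) {
                    // already flagged
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }, 1, 1, TimeUnit.SECONDS);
    }

    // Call this before a normal exit so the controller does not see a stale flag.
    static void clear(Path flagFile) throws IOException {
        Files.deleteIfExists(flagFile);
    }
}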
Only after implementing all of those checks were we able to handle all cases where the forked JVM ran out of memory. We believe that since then, we have not missed a case where this happened.
The java flag -XX:OnOutOfMemoryError was not used because of the fork problem, and -XX:+HeapDumpOnOutOfMemoryError was not used because a heap dump is more than we need.
The solution is certainly not the most elegant piece of code ever written, but did the job for us.
In case you have control over both the application and its configuration, the best solution is to find the underlying cause of the OutOfMemoryError and fix it, instead of trying to hide the symptoms either by catching the error or by just restarting JVMs.
From what you describe, it definitely looks as though the application running on the JVM is either leaking memory, simply running with under-provisioned resources (memory, in your case), or occasionally processing transactions that require abnormally large chunks of heap. The solutions for those cases are different:
In case of a memory leak, find the underlying cause and have engineers fix it. Tools for this include heap dump analyzers, profilers and leak detectors.
In case of under-provisioned resources, you need to monitor the application's memory consumption, for example via garbage collection logs (see the example after this list), and adjust the sizes of the different memory pools based on what you see.
In case of surge allocations during user transactions, you need to track down the code causing the surge and have engineers fix it, for example by disabling certain user inputs or by loading and processing the data in smaller batches. Either thread dumps or heap dumps from the processes can guide you towards the solution.
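For the garbage collection logs mentioned above, an invocation might look like the following, assuming a HotSpot JDK 8 (JDK 9 and later replace these flags with -Xlog:gc*; the heap sizes and jar name are placeholders):

java -Xms512m -Xmx2g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log -jar app.jar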
I am making some applications (servers and clients). I have reached a point where I need to call System.gc().
But I found here, in Why is it bad practice to call System.gc()?, that calling gc is not recommended.
If I use System.gc(), the program runs at ~80 MB of memory, but without gc the memory grows to ~600-700 MB, and I need to run it on Android phones.
Are there other methods to clear memory?
Thanks
EDIT: Regarding the comments: on Android, as I have tested (I ported it an hour ago), it runs well with System.gc(); I haven't tested without it.
EDIT 2: Here are two screenshots of the programs running on the desktop after 5 minutes:
With System.gc(): http://imgur.com/M9yMBei
Without System.gc(): http://i.imgur.com/4qb2Ylc.png
EDIT 3: WOW!! Just before I posted this, one application was using about 2 GB of RAM!
It's bad practice because normally the runtime knows more about the state of the system than you do. If RAM is available, why not use it? If RAM isn't available, the system will GC earlier anyway.
In any case, calling System.gc() is merely a hint, and I suspect a hint that will only become more likely to be ignored as time passes.
I am creating a Java program in which my class, say A, has some predefined behavior. But a user can override my class to change its behavior. So my code will check whether there is a subclass, and if there is I will call its behavior; but what if the user has written some blocking code or a memory leak in their code?
This may harm my process. Is there any way in Java to monitor the memory allocated by a particular method?
Please suggest.
but what if he has written some blocking code or memory leak in his code
First of all, I suggest you document your class well. Describe what the user is allowed to do and what is not allowed. Give use cases of what to do (if possible).
For the blocking code part, if you have timing concerns, you could wrap the execution of the method in, say, a Future and let an ExecutorService execute the code. That way you will be able to cancel the execution if it takes too much time.
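A minimal sketch of that idea; the Runnable parameter and the 5-second limit are assumptions:

import java.util.concurrent.*;

class TimeLimitedCall {
    static void runWithTimeout(Runnable userCode) throws InterruptedException, ExecutionException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit(userCode);
        try {
            future.get(5, TimeUnit.SECONDS);   // wait at most 5 seconds for the overridden method
        } catch (TimeoutException e) {
            future.cancel(true);               // interrupt the runaway code
        } finally {
            executor.shutdownNow();
        }
    }
}

Note that cancel(true) only interrupts the worker thread; user code that ignores interruption will keep running, so this is a mitigation rather than a hard guarantee.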
For the memory leak issue, well, I guess you are not really talking about memory leaks but about increased memory consumption caused by calling the overridden method. Memory leaks in Java are rare, after all.
You will not be able to measure the memory consumption of a single method; that's not how Java works. Memory is global. What will you do if, for example, a native library is loaded (JNI), or some library on the classpath is called that now uses more memory? You just cannot tell.
Other than monitoring the overall memory consumption, there is no way (someone please tell me if I am wrong).
Oracle has quite a good document about solving memory leaks. It suggests that one should use NetBeans Profiler as a tool.
http://www.oracle.com/technetwork/java/javase/memleaks-137499.html
I believe you can use the same debugging API for checking against misbehaving code while it is running, but that will come with a performance penalty and is probably akin to killing a fly with a sledgehammer. I personally would not let anything like that run in production. Instead I would rely on rigorous testing and peer review.
For external monitoring, you can use VisualVM or JConsole (part of the JDK); for internal monitoring, you can use the Runtime class:
Runtime rt = Runtime.getRuntime();
long totalMem = rt.totalMemory();   // heap currently allocated to the JVM
long maxMem = rt.maxMemory();       // upper limit the heap can grow to (-Xmx)
long freeMem = rt.freeMemory();     // unused space within the currently allocated heap
Via the Thread class, you can check the status of all threads. I have never used it directly, because application servers or batch-processing APIs are doing that job, so I don't need to reinvent the wheel. And I suggest using tools like VisualVM...
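If you do want to do it in code rather than with a tool, a small sketch using the Thread class (the output format is arbitrary):

import java.util.Map;

class ThreadStatusDump {
    static void dump() {
        // Snapshot of every live thread with its current state and stack depth.
        for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
            Thread t = entry.getKey();
            System.out.println(t.getName() + " [" + t.getState() + "], depth=" + entry.getValue().length);
        }
    }
}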
EDIT: Watch also this thread: Why do threads share the heap space?
You cannot analyze the heap usage of a single thread. If you have problems with the execution of foreign code, you should separate it as well as you can from other threads and analyze the thread or heap dumps. This can be done, as mentioned, with VisualVM or JConsole, which is also provided by Oracle (formerly Sun).
Depending on what sort of behavior the subclass can perform, we might think of options. For example, if it's a database-related operation, we can force them to clean up connections; if it's file based, we can force them to read the file through your class and check how big the file is; if it's an HTTP call or some other streaming functionality, we can look at enforcing constraints accordingly.
If you're just worried about heap utilization and memory leaks there, you might want to look at http://java.dzone.com/tips/getting-jvm-heap-size-used, which explains how to get runtime memory programmatically. But then you'll have to do periodic checks, and you can never be sure whether a given memory usage is caused by the subclass's behavior.
I just found this while I was trying to build an agent that records memory allocations:
The post How to track any object creation in Java since freeMemory() only reports long-lived objects? points out that there is an open-source project, Java Allocation Instrumenter, that you can use to register your own callback (it has examples too), and with it you are able to obtain what you need.
I started working on a similar project a few days ago, and while researching I found your question and the post below.
I personally needed this kind of code in some unit tests to check whether too many objects are allocated inside critical methods, and I found that using the Runtime class was not appropriate, because the garbage collector may interfere and the test then records negative numbers for allocated memory.
I need to restart a Java process if it produces any memory issues like 'GC overhead limit exceeded' or 'Java heap space'.
Is there some standard way of doing this, such as a tool or JVM options?
If not, how can I set up a watchdog to do this?
I noticed that my process does not go down when these issues happen, and a restart brings it back on its feet again.
There are people here who will suggest better options, so this is just my $0.02. What I did a while ago on some app is keep a SoftReference to an Object, and once in a while check whether that Object is null. SoftReferences are (usually, but not guaranteed to be) collected by the GC right before you get really close to OutOfMemory, so that would tell you, roughly, that you are very close to failing.
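A minimal sketch of that canary idea, assuming you poll it from some periodic task; the re-arming after a hit is my addition:

import java.lang.ref.SoftReference;

class MemoryCanary {
    private volatile SoftReference<Object> canary = new SoftReference<>(new Object());

    // Returns true when the GC has cleared the soft reference, i.e. memory is getting tight.
    boolean isMemoryLow() {
        if (canary.get() == null) {
            canary = new SoftReference<>(new Object());   // re-arm for the next check
            return true;
        }
        return false;
    }
}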
Also, in this case you should be looking at the JVM option:
-XX:SoftRefLRUPolicyMSPerMB=someValue
Where 'someValue' is the number of milliseconds a soft reference will survive for every free MB of memory. The default is 1 s/MB, so if an object is only softly reachable it will last 1 second when only 1 MB of heap space is free.
It is probably not the best option, but maybe it's a useful hint?
Cheers, Eugene.
Runtime#freeMemory() will tell you how much memory is free within the VM's currently allocated heap - you could monitor that and raise an alarm when it reaches a threshold. Calling System.gc() at that point may free some more memory, but it isn't guaranteed and should be seen as a last resort.
You really need to combine this with understanding why you are running out of memory and trying to do something to fix it.
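A minimal sketch of such a threshold check; note that freeMemory() is relative to the currently allocated heap, so it is usually combined with totalMemory() and maxMemory() (the 90% threshold is an arbitrary example):

class HeapWatch {
    static boolean aboveThreshold() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();   // memory actually in use
        return (double) used / rt.maxMemory() > 0.9;      // close to the hard heap limit
    }
}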
You could use the Tanuki Software Java Service Wrapper; it provides an automatic, customizable response when something happens in your application or JVM.
It has a filter feature, described as follows:
Filters are a very powerful feature which makes it possible to add new behavior to existing applications without any coding. It works by monitoring the console output of a JVM for sequences of text. When they are found, any number of actions can then be taken.
Examples are initiating a JVM restart whenever a specific error occurs. Some applications have known bugs where they stop working once getting into a certain state. This feature makes it possible to work around such problems immediately until they can be resolved in the application.
Assuming your Java application returns 0 upon graceful shutdown, a shell script like the one below can serve as a watchdog.
#!/bin/bash
...
while true; do
    java ... MyClass && break
done
Can the JVM recover from an OutOfMemoryError without a restart if it gets a chance to run the GC before more object allocation requests come in?
Do the various JVM implementations differ in this aspect?
My question is about the JVM recovering and not the user program trying to recover by catching the error. In other words, if an OOME is thrown in an application server (JBoss/WebSphere/...), do I have to restart it? Or can I let it run if further requests seem to work without a problem?
It may work, but it is generally a bad idea. There is no guarantee that your application will succeed in recovering, or that it will know if it has not succeeded. For example:
There really may not be enough memory to do the requested tasks, even after taking recovery steps like releasing a block of reserved memory. In this situation, your application may get stuck in a loop where it repeatedly appears to recover and then runs out of memory again.
The OOME may be thrown on any thread. If an application thread or library is not designed to cope with it, this might leave some long-lived data structure in an incomplete or inconsistent state.
If threads die as a result of the OOME, the application may need to restart them as part of the OOME recovery. At the very least, this makes the application more complicated.
Suppose that a thread synchronizes with other threads using notify/wait or some higher level mechanism. If that thread dies from an OOME, other threads may be left waiting for notifies (etc) that never come ... for example. Designing for this could make the application significantly more complicated.
In summary, designing, implementing and testing an application to recover from OOMEs can be difficult, especially if the application (or the framework in which it runs, or any of the libraries it uses) is multi-threaded. It is a better idea to treat OOME as a fatal error.
See also my answer to a related question:
EDIT - in response to this followup question:
In other words if an OOME is thrown in an application server (jboss/websphere/..) do I have to restart it?
No you don't have to restart. But it is probably wise to, especially if you don't have a good / automated way of checking that the service is running correctly.
The JVM will recover just fine. But the application server and the application itself may or may not recover, depending on how well they are designed to cope with this situation. (My experience is that some app servers are not designed to cope with this, and that designing and implementing a complicated application to recover from OOMEs is hard, and testing it properly is even harder.)
EDIT 2
In response to this comment:
"other threads may be left waiting for notifies (etc) that never come" Really? Wouldn't the killed thread unwind its stacks, releasing resources as it goes, including held locks?
Yes really! Consider this:
Thread #1 runs this:
synchronized (lock) {
    while (!someCondition) {
        lock.wait();
    }
}
// ...
Thread #2 runs this:
synchronized (lock) {
    // do something
    lock.notify();
}
If Thread #1 is waiting on the notify, and Thread #2 gets an OOME in the // do something section, then Thread #2 won't make the notify() call, and Thread #1 may get stuck forever waiting for a notification that won't ever occur. Sure, Thread #2 is guaranteed to release the mutex on the lock object ... but that is not sufficient!
If not, the code run by the thread is not exception safe, which is a more general problem.
"Exception safe" is not a term I've heard of (though I know what you mean). Java programs are not normally designed to be resilient to unexpected exceptions. Indeed, in a scenario like the above, it is likely to be somewhere between hard and impossible to make the application exception safe.
You'd need some mechanism whereby the failure of Thread #1 (due to the OOME) gets turned into an inter-thread communication failure notification to Thread #2. Erlang does this ... but not Java. The reason they can do this in Erlang is that Erlang processes communicate using strict CSP-like primitives; i.e. there is no sharing of data structures!
(Note that you could get the above problem for just about any unexpected exception ... not just Error exceptions. There are certain kinds of Java code where attempting to recover from an unexpected exception is likely to end badly.)
The JVM will run the GC when it is on the edge of an OutOfMemoryError. Only if the GC doesn't help at all will the JVM throw the OOME.
You can, however, catch it and if necessary take an alternative path. Any allocations made inside the try block will then be eligible for GC.
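A minimal sketch of what such an alternative path could look like; the method and the fallback are illustrative, not a recommendation to do this routinely:

class OversizedAllocation {
    // Returns null instead of killing the caller when one oversized allocation fails.
    static byte[] tryAllocate(int requestedSize) {
        try {
            return new byte[requestedSize];
        } catch (OutOfMemoryError e) {
            // The failed allocation was never reachable, so there is nothing to leak here;
            // the caller can reject the request, retry with a smaller buffer, stream from disk, ...
            return null;
        }
    }
}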
Since the OOME is "just" an Error which you could just catch, I would expect the different JVM implementations to behave the same. I can at least confirm from experience that the above is true for the Sun JVM.
See also:
Catching java.lang.OutOfMemoryError
Is it possible to catch out of memory exception in java?
I'd say it depends partly on what caused the OutOfMemoryError. If the JVM truly is running low on memory, it might be a good idea to restart it, and with more memory if possible (or a more efficient app). However, I've seen a fair amount of OOMEs that were caused by allocating 2GB arrays and such. In that case, if it's something like a J2EE web app, the effects of the error should be constrained to that particular app, and a JVM-wide restart wouldn't do any good.
Can it recover? Possibly. Any well-written JVM is only going to throw an OOME after it's tried everything it can to reclaim enough memory to do what you tell it to do. There's a very good chance that this means you can't recover. But...
It depends on a lot of things. For example, if the garbage collector isn't a copying collector, the "out of memory" condition may actually mean "no chunk big enough left to allocate". The very act of unwinding the stack may let objects be cleaned up in a later GC round, leaving open chunks big enough for your purposes. In that situation you may be able to recover. It's probably worth at least retrying once as a result. But...
You probably don't want to rely on this. If you're getting an OOME with any regularity, you'd better look over your server and find out what's going on and why. Maybe you have to clean up your code (you could be leaking or making too many temporary objects). Maybe you have to raise your memory ceiling when invoking the JVM. Treat the OOME, even if it's recoverable, as a sign that something bad has hit the fan somewhere in your code and act accordingly. Maybe your server doesn't have to come down NOWNOWNOWNOWNOW, but you will have to fix something before you get into deeper trouble.
You can increase your odds of recovering from this scenario, although it's not recommended that you try. What you do is pre-allocate some fixed amount of memory at startup that's dedicated to your recovery work; when you catch the OOME, null out that pre-allocated reference and you're more likely to have some memory to use in your recovery sequence.
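A minimal sketch of that pre-allocation ("memory parachute") idea; the 2 MB reserve size and the method names are assumptions:

class OomParachute {
    private static byte[] reserve = new byte[2 * 1024 * 1024];   // claimed at startup

    static void onOutOfMemory() {
        reserve = null;    // drop the reserve so recovery code has some room to work with
        // ... log the failure, notify a controller, shut down as cleanly as possible ...
    }
}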
I don't know about different JVM implementations.
Any sane JVM will throw an OutOfMemoryError only if there is nothing the garbage collector can do. However, if you catch the OutOfMemoryError early enough up the stack, it is quite likely that the cause itself has become unreachable and has already been garbage collected (unless the problem is not in the current thread).
Generally, for frameworks that run other code, like application servers, attempting to continue in the face of an OOME makes sense (as long as they can reasonably release the third-party code), but otherwise, in the general case, recovery should probably consist of bailing out and telling the user why, rather than trying to go on as if nothing happened.
To answer your newly updated question: There is no reason to think you need to shut down the server if all is working well. My experience with JBoss is that as long as the OOME didn't affect a deployment, things work fine. Sometimes JBoss runs out of permgen space if you do a lot of hot deployment. Then indeed the situation is hopeless and an immediate restart (which will have to be forced with a kill) is inevitable.
Of course each app server (and deployment scenario) will vary and it is really something learned from experience in each case.
You cannot fully recover a JVM that has had an OutOfMemoryError. But at least with the Oracle JVM you can add -XX:OnOutOfMemoryError="cmd args;cmd args" and take recovery actions, such as killing the JVM or sending the event somewhere.
Reference: https://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html