What does the JVM option -XX:OnOutOfMemoryError=jmap do?

I am trying to monitor a process on one production machine that crashed last time due to an OutOfMemoryError. The process is running with the -XX:OnOutOfMemoryError=jmap option. What does it mean? Does it mean that it would produce a heap dump on OutOfMemoryError? Or is the jmap command incomplete and should it have more to it?

-XX:OnOutOfMemoryError=<string> is used to specify a command or script to execute when an OutOfMemoryError is first thrown. So yes, the command as configured is incomplete: jmap invoked with no arguments only prints its usage message and will not produce a heap dump. Documentation is your friend.
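For comparison, a complete version of the option would need to pass jmap its dump arguments and the target process id. A minimal sketch, assuming a hypothetical dump path and application jar (%p is expanded by HotSpot to the JVM's own PID):
java -XX:OnOutOfMemoryError="jmap -dump:format=b,file=/tmp/heap.hprof %p" -jar app.jar
On recent JVMs, -XX:+HeapDumpOnOutOfMemoryError achieves the same result without spawning an external command.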

Related

How to debug the JVM stack in a Linux environment with Tomcat and Java apps that crash randomly?

I have a problem and my job depends on it.
There are some Java apps running in Tomcat under Linux that crash randomly (the apps are not mine and cannot be modified).
Every morning we find some app broken.
I want to see the Java stack just when the app crashed, to read the JVM's message (OutOfMemoryError, NullPointerException, etc.) and hopefully get a hint for fixing the problem.
I do not know anything about how to do this.
Searching the internet I found VisualVM and JConsole. Are they enough for what I want to do?
I want to see the messages of the JVM's Java stack just when it crashes.
I need help. Thank you very much.
It looks like you have a memory leak issue. Does the app work for a particular period of time after a restart?
You might want to see what is happening inside the Java heap; for that you can take a heap dump. Use the jcmd utility for this; you can find it within the JDK installed on your server.
jcmd <process id/main class> GC.heap_dump filename=filename
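For example, assuming the target JVM has PID 12345 (hypothetical) and you want the dump written to /tmp:
jcmd 12345 GC.heap_dump filename=/tmp/tomcat-heap.hprof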
NOTE: this triggers a full GC every time it runs.
To schedule this you need to set up a cron job.
Alternatively, if you specify the -XX:+HeapDumpOnOutOfMemoryError command-line option when running your application, the JVM will generate a heap dump when an OutOfMemoryError is thrown (written as a java_pid<pid>.hprof file in the working directory, not to the logs).
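A sketch of that option, combined with -XX:HeapDumpPath to control where the dump file is written (the directory and jar name are hypothetical):
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps -jar app.jar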
Hope this helps. :)

Hadoop JVM process hangs without any error message

I want to take a look at what the JVM process is doing (where it is stuck).
When I program in C++, I use GDB, which can be attached to a running process and show the call stacks of its threads.
How can I do the same thing for the JVM?
You may use the following command:
kill -3 [PID]
This sends SIGQUIT, which makes the JVM print the stack traces of all threads to the console (stdout) of your Java process. Another option is to use the jstack utility, which is bundled with the JDK; jstack does the same thing.
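For example, assuming the hung JVM has PID 12345 (hypothetical); the -l flag additionally prints information about held locks:
jstack -l 12345 > /tmp/threads.txt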
If that doesn't help, then profilers should. They can gather a lot more data than a single thread dump.

Global status in the JVM that records OutOfMemoryError?

We have to deal with a class library that does a
catch(Throwable e) {log.error("some message", e)}
but otherwise ignores the problem. Other than running an external command as described in https://stackoverflow.com/a/3878199/2954288, is there some internal global state in the JVM that can be queried to see if an OutOfMemoryError has happened since the startup?
My question is not a duplicate of: Is it a bad practice to catch Throwable?. I am not asking whether or not we should catch(Throwable). I am asking whether a certain way to deal with it exists.
Is there some internal global state in the JVM that can be queried to see if an OutOfMemoryError has happened since the startup?
Yes, there is a variable out_of_memory_reported. It is internal and is not supposed to be read from outside. Though you can do this with gdb, for example:
$ gdb -p PID
(gdb) p 'report_java_out_of_memory(char const*)::out_of_memory_reported'
$1 = 0
If you'd like a reliable way to intercept all OutOfMemoryErrors from within a Java application, whether they are caught or not, you can use the JVMTI Exception callback.
The example can be found here.
Catching OutOfMemoryError in all the possible places where it can occur is not really feasible, as it can occur in any Java code, and internally in the JVM.
When the JVM throws an OOME it is sometimes recoverable, but usually not, so catching it inside your code will generally achieve nothing, as the JVM is likely left unusable (it is in a state where it cannot allocate more memory, so it cannot execute the next steps of your program).
If you need to know whether your application produced an OOME, and a way to take action, you need to do this from an external point; the easiest way is to use the standard mechanism the JVM offers through its startup options.
Usually, for dealing with OOME you use the following JVM startup options:
-XX:+HeapDumpOnOutOfMemoryError tells the JVM to generate a heap dump on OOME so you can analyse it afterwards (for example with Eclipse MAT: http://www.eclipse.org/mat/)
-XX:OnOutOfMemoryError="<cmd args>;<cmd args>" tells the JVM to launch a command on the host in case of OOME. With this you can, for example, send an email and restart your server (see the sketch below).
More information on startup options can be found here: http://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html
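A minimal sketch combining these options; killing the JVM with kill -9 %p is a common pattern so an external supervisor can restart a clean instance (server.jar and the dump directory are hypothetical; %p is expanded by HotSpot to the JVM's PID):
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps -XX:OnOutOfMemoryError="kill -9 %p" -jar server.jar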

Find the memory allocated by the JVM

I am looking for an easy way to find out how much memory the JVM on my computer has allocated to a specific process. I have tried using VisualVM, but it cannot find the process.
I should mention that it is running as a Windows service, not a regular process.
Any suggestions?
Thanks in advance.
There is a command that comes with the JDK called jps which you can use to see all the running Java processes. Using jps with -v gives you the launch parameters of each process.
In those launch parameters you can see the memory settings (such as -Xms and -Xmx) of each process.
This command also works on Windows; just replace the terminal with a command prompt.
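The output might look roughly like this (the PID, service class and flags are hypothetical; the format is PID, main class, then the JVM arguments):
$ jps -v
3056 MyWindowsService -Xms64m -Xmx512m -XX:+UseG1GC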
On Android I use the following (if you want to know a programmatic way):
// get the total memory currently allocated to this JVM
long total = Runtime.getRuntime().totalMemory();
// get the free memory within that allocation
long free = Runtime.getRuntime().freeMemory();
// some simple arithmetic to see how much is in use
long used = total - free;
System.out.println("Used memory in bytes: " + used);
Works on the PC too (just tested)!
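If you also want the upper bound the JVM will grow to (roughly the -Xmx value), Runtime exposes that too; a small addition to the snippet above:
// the maximum amount of memory the JVM will attempt to use (~ -Xmx)
long max = Runtime.getRuntime().maxMemory();
System.out.println("Max memory in bytes: " + max);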
JVisualVM should work for you. There are specific cases where the process is not shown in VisualVM.
See this known issue:
Local Applications Cannot Be Monitored (Error Dialog On Startup)
Description: An error dialog saying that local applications cannot be monitored is shown immediately after VisualVM startup. Locally running Java applications are displayed as (pid ###).
Resolution: This can happen on Windows systems if the username contains capitalized letters. In this case, the username is UserName but the jvmstat directory created by the JDK is %TMP%\hsperfdata_username. To work around the problem, exit all Java applications, delete the %TMP%\hsperfdata_username directory and create a new %TMP%\hsperfdata_UserName directory.

Java - under which circumstances may a JVM abruptly crash?

I'm running a daemon Java process on my Ubuntu machine:
java -cp (...) &> err.log&
The process runs for a random period of time and then just disappears. Nothing in the logs, nothing in err.log, no JVM crash file created (hs_err_*.log), nothing. My two questions are:
1) Under which circumstances can a Java process abruptly finish?
2) Is there any way to know what happened to the process (knowing its PID)? Does UNIX keep information about finished processes somehow?
1) Under which circumstances can a Java process abruptly finish?
When it exits on its own, but I guess you have ruled that out, or when it is killed with SIGKILL. On Linux, that might be the OOM killer. Did you look at the system message logs?
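A quick way to check for OOM-killer activity (the exact message wording varies by kernel version):
dmesg | grep -i "out of memory"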
2) Is there any way to know what happened to the process (knowing its PID)?
Generally not, unless you configure some tracing tool in advance to capture that information.
Does UNIX keep information about finished processes somehow ?
No, but depending on the Unix variant you are using, that might be something simple to add.
In your example, you can just print the process exit status with echo $?.
If it is 137, that would mean the process was killed with signal 9 (137 = 128 + 9).
I would write a simple shell script that somehow alerts me when the JVM terminated. Perhaps send an email with the JVM's exit code.
#!/bin/sh
# Launch the JVM in the foreground and wait for it to exit...
java -cp ...
# capture the exit code before any other command overwrites $?
status=$?
# do something with $status: log it to a file or mail it to yourself
echo "JVM exited with status $status" >> jvm-exit.log
Perhaps the exit code will reveal something.
I would run it as a daemon with YAJSW, as it offers several ways to monitor memory etc., has restart options, and you can also enable logging on the wrapper process, so you have much more information when there is an issue.
