My Java application, which uses JNI, is crashing, and the hs_err_pid file reports the error as "EXCEPTION_ACCESS_VIOLATION". The OS is Windows Vista.
From what I know, my native code is illegally writing to some chunk of memory that does not belong to it.
In the past I have used Valgrind on Linux to detect such problems in pure native code.
But when the native code runs under Java, Valgrind simply fails to work.
What (if any) method would you suggest to identify the offending piece of code?
It is not possible for me to manually dig through the native code (a few million lines) to identify it.
I was finally able to resolve the issue. I thought I would post the procedure here in case someone else is in a similar situation.
Step 1:
Build the native code with proper debugging symbols. The compiler flags could be something like "-g -rdynamic -O0".
Step 2:
The following valgrind command should do the job.
valgrind --error-limit=no --trace-children=yes --smc-check=all --leak-check=full --track-origins=yes -v $JAVA -XX:UseSSE=0 -Djava.compiler=NONE $JAVA_ARGS
In the above command, $JAVA is the java executable and $JAVA_ARGS are the arguments to your Java program.
Once it starts successfully, execution will take orders of magnitude longer than usual. Valgrind will print thousands of errors, most of them coming from the JVM itself and safe to ignore. You can, however, pick out the ones that involve your JNI code by looking for your own libraries in the stack traces.
This general strategy should be applicable to most native memory related problems.
If you are running Java under Linux, you could use the -XX:OnError="gdb - %p" option to run gdb when the error occurs. See this example.
Under windows, you can use the -XX:+UseOSErrorReporting option to obtain a similar effect.
For debugging JNI code, the method posted in this article could be useful (it's about debugging JNI using NetBeans and Visual Studio). It's simple: start your Java program, then in Visual Studio pick Debug -> Attach to Process and choose the java.exe process running your program.
When you add breakpoints to your C++ code, Visual Studio will break on them. Voila :)
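For reference, the Java side of such a setup is usually just a class that loads the native library and declares the native methods. The names below (NativeBridge, "nativecode", process) are made up for illustration, not taken from any real project:

public class NativeBridge {
    static {
        // Loads nativecode.dll on Windows (libnativecode.so on Linux).
        System.loadLibrary("nativecode");
    }

    // Implemented in C/C++. Breakpoints set in the native sources in
    // Visual Studio will be hit when this call crosses the JNI boundary.
    public static native int process(byte[] buffer);

    public static void main(String[] args) throws Exception {
        System.in.read(); // gives you time to attach the native debugger first
        System.out.println("result = " + process(new byte[16]));
    }
}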
Related
I am writing an application that does packet sniffing using a Pcap library. For that to work, I need to give the java binary network sniffing capabilities to avoid having to run it as root:
sudo setcap cap_net_raw,cap_net_admin=eip /path/to/bin/java
When I run any Java program, I get a full thread dump to stdout every few seconds. Here is an example of such a full thread dump (the rest of the repo is irrelevant). The program otherwise seems to run successfully: apart from the dumps, I couldn't find any difference in the execution.
The code below is sufficient to reproduce the issue:
package main;

import java.io.IOException;

public class StdinTest {
    public static void main(String[] args) throws IOException {
        System.in.read();
    }
}
When I remove the capabilities from the java binary with the command below, things go back to normal and I no longer get the Full Thread Dumps to stdout.
sudo setcap -r /path/to/bin/java
I'm not sure it's a problem since the program seems to run fine, but it doesn't look normal.
I haven't found anyone who seems to have a similar issue, so I'm a bit lost...
Any help is appreciated, thanks in advance!
Details:
OS: reproduced on ArchLinux (kernel 5.12.9) & Ubuntu 20.04 (kernel 5.8.0)
JDK: reproduced on Adopt OpenJDK 11, 15 & 16, openjdk.java.net 15 & 16.
EDIT:
Not reproducible if the program is run from the command line:
java -classpath target/classes main.StdinTest
I only get the symptoms when starting it from IntelliJ IDEA Ultimate. I haven't tried with other IDEs.
I also posted my question on Reddit, and someone suggested that such full thread dumps are displayed when the process receives a signal 3 (SIGQUIT).
I was unable to detect that signal being sent to it using strace, but I did notice that running the code from the command line rather than inside my IDE made the issue go away.
I also tried sending a signal 3 to the program running from the command line, and it does display the same full thread dump.
It's a somewhat unsatisfactory ending: I'm not 100% sure what's happening, but I'm getting confident that this is a non-issue I can ignore.
If IntelliJ IDEA thinks your software is stuck, it will take thread dumps of it. System.in.read() makes it look "stuck" while blocking on a read from stdin.
You should be able to disable it using -Dperformance.watcher.unresponsive.interval.ms=0 in VM options.
The recommended way of changing the JVM options is via the Help | Edit Custom VM Options action.
Reference: IntelliJ support: Disable "Automatic thread dumps".
Does it help if you add the following in Help | Edit Custom Properties and restart the IDE?
debugger.attach.to.process.action=false
execution.dump.threads.using.attach=false
From: Intellij keeps dumping threads when running simplest application with openjdk 11
I would like to see the (bytecode) instruction stream the JVM is currently executing. After some googling, I found that the JVM debug build offers the -XX:+TraceBytecodes option (see here). However, the mentioned link to the HotSpot JVM debug build is dead and I could not find a debug build online :/
Is there another way to trace the JVM bytecode stream, or can someone point me in the right direction? I'm running 64-bit Ubuntu 16.04.
P.S.: I know it's going to be painfully slow to print out the complete instruction stream. However, I am curious.
The -XX:+TraceBytecodes option is exactly what you are looking for. It is available in debug builds of the HotSpot JVM. You can easily build the JVM yourself - just HotSpot, not even the whole JDK.
Clone the HotSpot repository from the OpenJDK 8 project
hg clone http://hg.openjdk.java.net/jdk8u/jdk8u/hotspot
Build 'fastdebug' JVM (assuming JDK is already installed at /usr/java/jdk1.8.0_102)
cd hotspot/make
make ALT_BOOTDIR=/usr/java/jdk1.8.0_102 ARCH_DATA_MODEL=64 fastdebug
You may add HOTSPOT_BUILD_JOBS=N to run compilation in N parallel processes.
Run it
export ALT_JAVA_HOME=/usr/java/jdk1.8.0_102
../build/linux/linux_amd64_compiler2/fastdebug/hotspot -XX:+TraceBytecodes MainClass
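For a first run it may help to trace something tiny; any class will do, so the one below is just a hypothetical placeholder for MainClass. Even this produces a long stream of interpreted bytecodes:

public class MainClass {
    public static void main(String[] args) {
        int sum = 0;
        for (int i = 0; i < 10; i++) {
            sum += i;   // the iload/iadd/istore sequence shows up in the trace
        }
        System.out.println(sum);
    }
}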
can someone point me in the right direction
Well I'll try. The only thing I've found is using jdb. You'll have to create your own "printer".
Because you're asking this out of curiosity rather than need, I don't think you'll go as far as creating an application that does what you want, but maybe you'll find other resources that do it (I didn't find any) or at least this will ease the job.
From what I understand, jdb (the Java debugger) is a CLI program that uses the JPDA (Java Platform Debugger Architecture) and JVM TI (JVM Tool Interface).
This is how jdb works:
You compile your code with the -g option (not mandatory), then start your main class with jdb (rather than java).
You can do step-by-step execution of your code using the step command in the console, but this might run multiple bytecode instructions (all the ones corresponding to the current source line).
You can use stepi, which executes only one bytecode instruction.
You must set a breakpoint to do step-by-step execution, and the cont command will continue to the next breakpoint (just like in an IDE).
The list command lets you see the source code around your breakpoint/current line (but not the bytecode).
You can also get the current position in the source file and in the bytecode (with wherei, I think).
The other tool is javap -c to get readable bytecode (but I think you already knew this).
Now with all these, I guess you see where I'm going. You create an application (a Java application, or some shell/DOS script) that uses jdb to do step-by-step bytecode execution, and you pick the matching line in your bytecode from javap -c to print it; a rough sketch of such a driver is below. Note that I don't know how you would handle multi-threaded environments. There are also bytecode visualization tools like ASM or dirtyJOE, but I don't think they offer a bytecode debugging option.
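As a very rough sketch of that driver idea (everything here is hypothetical: the class name sample.Main, the classpath, and the naive output handling, which ignores jdb's prompts and multi-threading entirely), a small Java program could pipe commands into jdb and print whatever it reports:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;

// Drives jdb through its stdin/stdout: set a breakpoint on main, run,
// then repeat 'stepi' + 'wherei' and dump whatever jdb prints.
public class JdbDriver {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("jdb", "-classpath", "target/classes", "sample.Main");
        pb.redirectErrorStream(true);
        Process jdb = pb.start();

        PrintWriter commands = new PrintWriter(jdb.getOutputStream(), true);
        BufferedReader output = new BufferedReader(new InputStreamReader(jdb.getInputStream()));

        commands.println("stop in sample.Main.main");
        commands.println("run");
        for (int i = 0; i < 100; i++) { // trace the first 100 bytecode steps
            commands.println("stepi");
            commands.println("wherei");
        }
        commands.println("quit");

        // Matching these positions against 'javap -c' output is the
        // interesting (and unfinished) part of the exercise.
        String line;
        while ((line = output.readLine()) != null) {
            System.out.println(line);
        }
        jdb.waitFor();
    }
}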
I believe JVM TI is what IDE debuggers use; it is probably more powerful and faster, but also more complex, than jdb.
Some links that might interest you:
How to debug bytecode
Basic debugging with jdb
Jdb commands list
Java debugger platform architecture
As for myself, I was also curious about how Java debugging (and other stuff) works, so this was kind of interesting.
I have a Java program that uses the Java DTrace API to build and execute a DTrace script on Solaris 10 (SPARC). The script uses the pid provider to place probes on the entry points of all the functions in a set of libraries, like this (in reality there are over 100 libraries, so the script is quite long):
pid1234:liba::entry,pid1234:libb::entry,pid1234:libc::entry {printf("%s", probemod);}
My code looks like:
_consumer = new LocalConsumer();
_consumer.open();
// set the zdefs option to allow scripts with no probes
_consumer.setOption(org.opensolaris.os.dtrace.Option.zdefs);
_consumer.compile(_dtraceScript); // where _dtraceScript is a string as described above
The call to compile fails with:
invalid probe specifier pid1234:liba::entry,pid1234:libb::entry...[truncated]
I have copied and pasted the script in its entirety and executed it at the command line using:
dtrace -n '...my script...'
... and it works fine, so I know there is nothing syntactically wrong with my script.
So, two problems/questions:
Why is the compilation failing? Since the script runs on the command line, but not via Java, am I hitting some limitation of the Java DTrace API, like script length? Or maybe the JVM is running out of memory because I'm trying to enable so many probes?
Since the exception message is truncated (because my script is so long), how can I see the DTrace error that usually appears at the end of the exception message?
Any suggestions appreciated!
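One way to narrow this down (a sketch that reuses only the compile() call already shown above, assuming it can be invoked once per clause) is to compile the script one clause at a time rather than as a single giant string; the clause DTrace rejects, and its untruncated error message, then become visible:

// Hypothetical diagnostic built around the same _consumer as above.
String[] libs = { "liba", "libb", "libc" }; // in reality the 100+ library names
for (String lib : libs) {
    String clause = "pid1234:" + lib + "::entry {printf(\"%s\", probemod);}";
    try {
        _consumer.compile(clause);
    } catch (Exception e) {
        System.err.println("failed on " + lib + ": " + e.getMessage());
    }
}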
I've tried following the advice found at https://wikis.oracle.com/display/HotSpotInternals/PrintAssembly and http://alexshabanov.com/2011/12/29/print-assembly-for-java/, but it wasn't of much help. I'm running a 64-bit JVM on Windows 7, and I've put the suggested hsdis-i386.dll file in every folder that contains a jvm.dll, just to be sure.
I seem to have several JVM installations (at least I have one in C:\Program Files (x86)\Java and another in C:\Program Files\Java), so I don't know whether this is making any difference. From what I've seen, running java -d32 yields an error, so I must be using only the 64-bit version.
When trying to run
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly -server -cp . HelloWorldApp
only my
Hello World!
message is shown, so nothing seems to be happening. Maybe the problem is that hsdis-i386.dll should have a different name?
Btw, I'd like to stay away from having to build any kind of source files myself.
HotSpot won't begin compiling and optimizing until it knows what is important, and when you run such a short program it doesn't have the opportunity to kick in. Give it something more substantial, for example a hot loop like the one below.
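Something like the hypothetical micro-benchmark below gives the JIT a hot method to compile; once HotSpot compiles mix(), -XX:+PrintAssembly has something to print (assuming the disassembler library is picked up at all; on a 64-bit JVM that is normally hsdis-amd64.dll rather than hsdis-i386.dll):

// HotLoop.java - run with:
// java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly HotLoop
public class HotLoop {
    static long mix(long x) {
        return x * 2654435761L + (x >>> 7);
    }

    public static void main(String[] args) {
        long acc = 0;
        for (int i = 0; i < 10_000_000; i++) {
            acc = mix(acc + i);
        }
        System.out.println(acc); // keep the result alive so the loop isn't optimized away
    }
}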
I launch the following command line (process) from a Windows VC++ 6 program using CreateProcess (or _spawnv()):
java -cp c:\dir\updates.jar;c:\dir\main.jar Main
and the updated classes in updates.jar (overriding some in main.jar) are not read or found. It is as if the updates.jar library cannot be found or read.
If I launch the same line from a shortcut, or from the command line proper, everything IS found and executes properly.
If I launch a JVM from the command line, keep it running, AND THEN launch the executable stub (above), then everything works OK also. (This makes it look like the issue is a file rights thing).
Any insight would be greatly appreciated!
--Edward
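One hypothetical way to test the "cannot be found or read" theory from the Java side (the path is the one from the command line above; the checker class itself is made up): launch a tiny program the same way Main is launched and see whether it can open the jar at all:

import java.io.IOException;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Hypothetical diagnostic: confirm this process can open updates.jar
// and list a few of its entries when started via CreateProcess().
public class JarCheck {
    public static void main(String[] args) throws IOException {
        try (JarFile jar = new JarFile("c:\\dir\\updates.jar")) {
            System.out.println("Opened updates.jar with " + jar.size() + " entries");
            Enumeration<JarEntry> entries = jar.entries();
            for (int i = 0; i < 5 && entries.hasMoreElements(); i++) {
                System.out.println("  " + entries.nextElement().getName());
            }
        }
    }
}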
Try using Microsoft's FileMon utility to figure out what's happening. Set the include filter to "updates" to focus in on the problem.
http://technet.microsoft.com/en-us/sysinternals/bb896642.aspx
Have you tried this on another machine? Another OS? Which JVM are you using? Have you tried different JVMs?
Can you provide us with a minimal example which demonstrates the problem?
Thanks jdigital!
I tried FileMon and it showed me what I was doing wrong. The executable calling CreateProcess() had an unclosed file handle to updates.jar from an attempt to copy the update JAR earlier. Bad code that works in the production environment, but not in the test environment.