I have the following setup:
Hadoop 1.2.1
Oracle Java 1.7
Suse Enterprise Server 10 32bit
If I execute the Pi-example in standalone mode with
bin/hadoop jar hadoop-examples-1.2.1.jar pi 10 10
then Java dies the hard way, telling me
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGFPE (0x8) at pc=0xb7efa20b, pid=9494, tid=3070639008
#
# JRE version: Java(TM) SE Runtime Environment (7.0_40-b43) (build 1.7.0_40-b43)
# Java VM: Java HotSpot(TM) Server VM (24.0-b56 mixed mode linux-x86 )
# Problematic frame:
# C [ld-linux.so.2+0x920b] do_lookup_x+0xab
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/hadoop-1.2.1-new/hs_err_pid9494.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
(The full trace is here)
On a distributed setup, I can start all components and they idle along fine. But when I submit a job, the JobTracker dies immediately with a java.io.EOFException; I assume this is due to the same error as above.
I have already tried the same Hadoop release on another computer, where everything works fine (although that machine runs 64-bit Arch Linux), and other JVMs (OpenJDK, 1.6, 1.7) don't help.
Any suggestions?
Hadoop probably includes a native library that was either compiled for a different platform (e.g. 64-bit instead of 32-bit) or expects a different environment. The stack trace also shows that JVM_LoadLibrary() is trying to load a native library.
Make sure you downloaded the correct version of Hadoop for your platform, or compile it yourself for your target platform.
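If it helps, here is a minimal sketch to confirm what the running JVM reports about its own architecture before digging into the bundled .so files (the class name ArchCheck is just for illustration, and sun.arch.data.model is a HotSpot-specific property):

public class ArchCheck
{
    public static void main(String[] args)
    {
        // OS and CPU architecture as seen by the JVM, e.g. "Linux" / "i386" / "amd64"
        System.out.println("os.name = " + System.getProperty("os.name"));
        System.out.println("os.arch = " + System.getProperty("os.arch"));
        // HotSpot-specific: "32" or "64"
        System.out.println("data model = " + System.getProperty("sun.arch.data.model"));
    }
}

If this prints a 32-bit JVM but the native libraries shipped with your Hadoop build are 64-bit (or the other way round), that mismatch would be a likely explanation for a crash inside ld-linux.so.2.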
This is the full error output:
19:09:34.464 [main] INFO org.nd4j.linalg.factory.Nd4jBackend - Loaded [CpuBackend] backend
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fff65aa2aa8, pid=2020, tid=3843
#
# JRE version: Java(TM) SE Runtime Environment (8.0_60-b27) (build 1.8.0_60-b27)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.60-b23 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# C [libc++abi.dylib+0x10aa8] __cxxabiv1::__si_class_type_info::has_unambiguous_public_base(__cxxabiv1::__dynamic_cast_info*, void*, int) const+0x4
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /Users/mac/Downloads/RL_DQN(19.9.27)/hs_err_pid2020.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
How can I solve this problem?
I found that it runs without error on this Windows machine: a ThinkPad X1 Carbon 2021 with Windows 10 Enterprise Edition.
It shows the error on this Mac:
macOS Catalina 10.15.7
MacBook Pro (13-inch, 2019, two Thunderbolt 3 ports)
CPU: 2.9 GHz dual-core Intel Core i5; memory: 8 GB 2133 MHz LPDDR3
Graphics: Intel Iris Graphics 550, 1536 MB
Serial number: FVFZ18GLL40Y
I replaced the DL4J version, but that didn't help.
I suspect there is some incompatibility between macOS and DL4J, but I don't know how to resolve it.
I solved this problem.
The method is described here: https://github.com/reactor/BlockHound/issues/37
I simply replaced my Java version. I now use jdk1.8.0_311; the original version was 1.8.0_60.
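For anyone checking the same thing, here is a tiny sketch to verify which JVM is actually executing your application (the class name VersionCheck is just illustrative):

public class VersionCheck
{
    public static void main(String[] args)
    {
        // The runtime actually executing this code, e.g. 1.8.0_311
        System.out.println("java.version    = " + System.getProperty("java.version"));
        System.out.println("java.vm.version = " + System.getProperty("java.vm.version"));
        System.out.println("java.home       = " + System.getProperty("java.home"));
    }
}

If java.home does not point at the JDK you think you installed, the upgrade never actually took effect for that application.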
Make sure you are using the latest version (beta6 is pretty old at this point). Native crashes should never happen.
Please file an issue at https://github.com/eclipse/deeplearning4j/issues if you still have issues after upgrading to 1.0.0-M2.
I've just built JCEF but I can't launch it. I have no idea what's wrong; here is the crash message:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000000112218648, pid=396, tid=1799
#
# JRE version: Java(TM) SE Runtime Environment (8.0-b93) (build 1.8.0-ea-b93)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.0-b34 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# C [ld-linux-x86-64.so.2+0x9cda]
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try " ulimit -c unlimited " before starting Java again
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
I did some research in my own issues, because I had seen an error like this before, and I found this:
Core dumped while running on Ubuntu 16.04 LTS
In that case, the crash was caused by duplicated native libraries (jogl etc.).
Duplicate native libraries: I don't know how you are launching your application, but you probably have the same natives in two different directories; see the sketch below for one way to spot that.
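As a rough sketch (the class name LibPathCheck and the library file name libjogl_desktop.so are only examples, adjust them to the natives you actually bundle), you can walk the directories on java.library.path and see whether the same library shows up more than once:

import java.io.File;

public class LibPathCheck
{
    public static void main(String[] args)
    {
        // Example library name; replace with the native library you suspect is duplicated.
        String libName = "libjogl_desktop.so";
        String[] dirs = System.getProperty("java.library.path").split(File.pathSeparator);
        for (String dir : dirs)
        {
            File candidate = new File(dir, libName);
            if (candidate.exists())
            {
                System.out.println("Found " + libName + " in " + dir);
            }
        }
    }
}

If this prints more than one directory, removing one of the copies (or taking that directory off java.library.path) is a reasonable first thing to try.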
Oh, I used ninja with an alternative build description to build the natives, and now it works.
Something is wrong with the official "Manual building" section of BranchesAndBuilding.
I have developed a Java app for serial communication on Windows. I want to run this app's JAR on Ubuntu 14.04.
The Java version on both Windows and Linux is 1.7, 64-bit.
Whenever I try to run this JAR on Ubuntu 14.04, I get the error below:
swapnilc@cms:~/Desktop/Janvi/testcode$ java -jar Test.jar
Experimental: JNI_OnLoad called.
Stable Library
=========================================
Native lib Version = RXTX-2.1-7
Java lib Version = RXTX-2.1-7
RXTX Warning: Removing stale lock file. /var/lock/LCK..ttyUSB0
Got PortList:: gnu.io.CommPortEnumerator@4fa666bf
Current PortID :: gnu.io.CommPortIdentifier@35a3ae73
Port in Use :: /dev/ttyUSB0
Started
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f30f5b99462, pid=13867, tid=139848257709824
#
# JRE version: OpenJDK Runtime Environment (7.0_131) (build 1.7.0_131-b00)
# Java VM: OpenJDK 64-Bit Server VM (24.131-b00 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 2.6.9
# Distribution: Ubuntu 14.04 LTS, package 7u131-2.6.9-0ubuntu0.14.04.2
# Problematic frame:
# C [librxtxSerial.so+0x6462] read_byte_array+0x52
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/swapnilc/Desktop/Janvi/testcode/hs_err_pid13867.log
#
# If you would like to submit a bug report, please include
# instructions on how to reproduce the bug and visit:
# http://icedtea.classpath.org/bugzilla
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Aborted (core dumped)
This runs fine on Windows.
I'm hitting some snags getting OrientDB installed on my Raspberry Pi and ran out of ideas from Googling. I tried adding -Dmemory.useUnsafe=false -Dstorage.compressionMethod=gzip to the end of the last line of the /bin/server.sh but that didn't seem to help.
The server starts and I can access it over HTTP but the JVM crashes when I try to create a database or connect to the default database. Any ideas?
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGILL (0x4) at pc=0xabf09054, pid=3075, tid=2878796912
#
# JRE version: Java(TM) SE Runtime Environment (8.0-b132) (build 1.8.0-b132)
# Java VM: Java HotSpot(TM) Client VM (25.0-b70 mixed mode linux-arm )
# Problematic frame:
# C [snappy-1.1.0.1-d0822a9b-fe72-4159-a4b2-d57af0058267-libsnappyjava.so+0x1054] _init+0x187
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
--------------- T H R E A D ---------------
Current thread (0x00657ae0): JavaThread "OrientDB HTTP Connection /192.168.8.111:2480<-/192.168.2.115:57404" daemon [_thread_in_native, id=3102, stack$
siginfo:si_signo=SIGILL: si_errno=0, si_code=1 (ILL_ILLOPC), si_addr=0xabf09054
Raspberry Pi and OrientDB ... could not resist trying it out :)
For your information:
Using Raspberry Pi model B (512MB)
Using Raspbian 7.6 (with latest patches as of 2014-11-28)
Using Oracle Java 8 SE for ARM update 6 (so not the version from the Raspbian repo!)
Using OrientDB 2.0-M3
Set MAXHEAP=-Xmx128m in server.sh
With this configuration I am able to create / query / modify a database via HTTP without problems. OrientDB complains about the shortage of memory, but that is all:
WARNING No enough physical memory available: 437MB (heap=123MB). Set
lower Heap and restart OrientDB. Now running with DISKCACHE=64MB
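As a small sanity check (just a sketch, not part of OrientDB; the class name HeapCheck is made up), you can run a trivial program with the same JVM flags to confirm that the -Xmx128m setting is really what the JVM ends up with:

public class HeapCheck
{
    public static void main(String[] args)
    {
        // Run with: java -Xmx128m HeapCheck -- should print a value close to 128 MB.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap = " + (maxBytes / (1024 * 1024)) + " MB");
    }
}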
I am having an issue where the JRE crashes whenever I check if the GtkLookAndFeel is supported. Surprisingly, this bug only appears to show up on Oracle JREs.
So far I have tested the behavior on three JREs:
(I am using the 64 bit version of all of these)
OpenJDK Runtime Environment (IcedTea 2.5.1) (7u65-2.5.1-4) -> Runs fine
Java(TM) SE Runtime Environment (build 1.7.0_67-b01) -> Crashes
Java(TM) SE Runtime Environment (build 1.8.0_20-b26) -> Crashes
Here is code to trigger this bug:
import javax.swing.LookAndFeel;

public class Test
{
    public static void main(String[] args)
    {
        // Instantiate the GTK look and feel directly and ask whether it is supported.
        LookAndFeel currLAF = new com.sun.java.swing.plaf.gtk.GTKLookAndFeel();
        currLAF.isSupportedLookAndFeel();
        System.out.println("I am exiting main");
    }
}
Here is the resulting output:
I am exiting main
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f91fe0fdbe0, pid=332, tid=140265730119424
#
# JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build 1.7.0_67-b01)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x00007f91fe0fdbe0
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/ethan/fail/hs_err_pid332.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
#
Note that the program only crashes after it exits main.
For reference, I am developing on a 64-bit Debian testing machine, and I have verified that other GTK+ applications work.
Should I report this to Oracle or am I doing something wrong?
I would definitely file a bug report with Oracle. I remember something similar happening to me many years ago. You've done your due diligence: you tested on multiple runtime environments and identified, at least at a high level, where the bug occurs. Tell them everything you've done as listed here and, to be safe, run the same code on a couple of different machines if you can. Java is designed to behave the same everywhere, but do it anyway so you can at least say in the bug report that you did.
Make sure you follow these guides here to collect the right information, crash dumps, system info, runtime info, etc: https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/bugreports.html
And grab the core dump if you can. The core dump will be extremely helpful (to an almost required degree) to the Oracle people for debugging what's actually happening. Here's a link to the Oracle page, but you may have to find the specific information for your machine:
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/bugreports004.html#CHDJJAJE
From the above link:
On the Linux operating system, unhandled signals such as segmentation violation, illegal instruction, and so forth, result in a core dump. By default, the core dump is created in the current working directory of the process and the name of the core dump file is core.pid, where pid is the process id of the crashed Java process.
The ulimit utility is used to get or set the limitations on the system resources available to the current shell and its descendants. Use the ulimit -c command to check or set the core file size limit. Make sure that the limit is set to unlimited; otherwise the core file could be truncated.
And this is the link to the Java Oracle Bug Reporting Site: https://bugreport.java.com/