OrientDB installation on Raspberry Pi issues - java

I'm hitting some snags getting OrientDB installed on my Raspberry Pi and have run out of ideas from Googling. I tried adding -Dmemory.useUnsafe=false -Dstorage.compressionMethod=gzip to the end of the last line of bin/server.sh, but that didn't seem to help.
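For reference, the launch line at the bottom of bin/server.sh is roughly the one sketched below (variable names and classpath are approximate and differ between OrientDB releases):
# sketch of the last line of bin/server.sh; -D system properties only take effect
# when they appear before the main class on the java command line
exec "$JAVA" -server $JAVA_OPTS \
    -Dmemory.useUnsafe=false \
    -Dstorage.compressionMethod=gzip \
    -cp "$ORIENTDB_HOME/lib/*" \
    com.orientechnologies.orient.server.OServerMain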
The server starts and I can access it over HTTP but the JVM crashes when I try to create a database or connect to the default database. Any ideas?
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGILL (0x4) at pc=0xabf09054, pid=3075, tid=2878796912
#
# JRE version: Java(TM) SE Runtime Environment (8.0-b132) (build 1.8.0-b132)
# Java VM: Java HotSpot(TM) Client VM (25.0-b70 mixed mode linux-arm )
# Problematic frame:
# C [snappy-1.1.0.1-d0822a9b-fe72-4159-a4b2-d57af0058267-libsnappyjava.so+0x1054] _init+0x187
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
--------------- T H R E A D ---------------
Current thread (0x00657ae0): JavaThread "OrientDB HTTP Connection /192.168.8.111:2480<-/192.168.2.115:57404" daemon [_thread_in_native, id=3102, stack$
siginfo:si_signo=SIGILL: si_errno=0, si_code=1 (ILL_ILLOPC), si_addr=0xabf09054

Raspberry Pi and OrientDB ... could not resist trying it out :)
For your information:
Using Raspberry Pi model B (512MB)
Using Raspbian 7.6 (with latest patches as of 2014-11-28)
Using Oracle Java 8 SE for ARM update 6 (so not the version from the Raspbian repo!)
Using OrientDB 2.0-M3
Set MAXHEAP=-Xmx128m in server.sh
With this configuration I am able to create / query / modify a database via HTTP without problems. OrientDB complains about the shortage of memory, but that is all:
WARNING No enough physical memory available: 437MB (heap=123MB). Set
lower Heap and restart OrientDB. Now running with DISKCACHE=64MB
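For reference, the only tweak needed was in bin/server.sh; a minimal sketch:
# bin/server.sh: cap the heap so it fits comfortably in the Pi's 512 MB of RAM
MAXHEAP=-Xmx128m
# OrientDB then sizes its disk cache from the physical memory that is left,
# which is where the "Now running with DISKCACHE=64MB" line above comes from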

How to solve the problem that macOS conflicts with DL4J?

This is the full error output:
19:09:34.464 [main] INFO org.nd4j.linalg.factory.Nd4jBackend - Loaded [CpuBackend] backend
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fff65aa2aa8, pid=2020, tid=3843
#
# JRE version: Java(TM) SE Runtime Environment (8.0_60-b27) (build 1.8.0_60-b27)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.60-b23 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# C [libc++abi.dylib+0x10aa8] __cxxabiv1::__si_class_type_info::has_unambiguous_public_base(__cxxabiv1::__dynamic_cast_info*, void*, int) const+0x4
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /Users/mac/Downloads/RL_DQN(19.9.27)/hs_err_pid2020.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
How can I solve this problem?
It runs without error on a Windows machine: a ThinkPad X1 Carbon 2021 with Windows 10 Enterprise Edition.
The error only appears on the Mac:
macOS Catalina version 10.15.7
MacBook Pro (13-inch, 2019, two Thunderbolt 3 ports)
CPU: 2.9 GHz dual-core Intel Core i5; memory: 8 GB 2133 MHz LPDDR3
Graphics: Intel Iris Graphics 550, 1536 MB
Serial number: FVFZ18GLL40Y
I replaced the DL4J version, but that didn't help.
I suspect macOS has an incompatibility with DL4J, but I don't know how to solve it.
I solved this problem.
This is the method: https://github.com/reactor/BlockHound/issues/37
I just replaced my Java version: I now use jdk1.8.0_311, whereas the original version was 1.8.0_60.
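A minimal sketch of how the newer JDK gets selected on macOS (the version string has to match whatever is actually installed):
/usr/libexec/java_home -V                           # list every installed JDK
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8.0_311)
"$JAVA_HOME/bin/java" -version                      # should now report 1.8.0_311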
Make sure you are using the latest version (beta6 is pretty old at this point). Native crashes should never happen.
Please file an issue at https://github.com/eclipse/deeplearning4j/issues if you still have issues after upgrading to 1.0.0-M2.
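A quick way to confirm which DL4J version a build actually resolves, assuming a Maven project (Gradle users can run ./gradlew dependencies instead):
# prints the resolved org.deeplearning4j artifacts and their versions
mvn dependency:tree -Dincludes=org.deeplearning4j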

Cassandra A fatal error has been detected by the Java Runtime Environment: (WINDOWS) [duplicate]

This question already has answers here:
Cassandra Windows 10 Access Violation
(4 answers)
Closed last year.
I have installed Cassandra correctly, but when I type cassandra in cmd I get this error.
Cassandra 3.11,
JDK 8 & Python 2.7.18
#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x0000000010014ed4, pid=12960, tid=0x0000000000000eac
#
# JRE version: Java(TM) SE Runtime Environment (8.0_321-b07) (build 1.8.0_321-b07)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.321-b07 mixed mode windows-amd64 compressed oops)
# Problematic frame:
# C [sigar-amd64-winnt.dll+0x14ed4]
#
# Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
#
# An error report file with more information is saved as:
# C:\Windows\System32\hs_err_pid12960.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
There are several known issues with running Cassandra on Windows so support is limited. In fact, Windows support was completely dropped in Cassandra 4.0 (CASSANDRA-16171).
The recommended workarounds are:
Deploy Cassandra in Docker (a sketch follows this list)
Deploy Cassandra in a VM using software like VirtualBox
Deploy Cassandra in a Kubernetes cluster with K8ssandra.io
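For example, the Docker route is essentially a one-liner; a sketch, with an illustrative image tag (any recent official tag behaves the same):
docker run --name cassandra -d -p 9042:9042 cassandra:4.1   # official image, CQL port exposed
docker exec -it cassandra cqlsh                              # connect once startup completes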
Otherwise if you just want to learn how to build apps on Cassandra, Astra DB has a free tier and you can launch a cluster in 5 clicks. Cheers!

Why does a JRE crash because of core dumping?

My JRE crashed while executing a shared-library function called from Java code via JNI. The output says the JRE crashed because it "Failed to write core dump. Core dumps have been disabled.".
After many days of googling without finding an explanation, my question is:
Where can I find info about JRE core dumping? I want to understand the problem and the solution.
I know the output recommends executing "ulimit -c unlimited", and this question proposes solutions (How to enable program to dump core on linux?)
Here I've pasted the JRE output:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f7e541cec55, pid=20390, tid=140180586714880
#
# JRE version: Java(TM) SE Runtime Environment (7.0_55-b13) (build 1.7.0_55-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libXXX.so+0x1fc55] NewMask_UnsetAll+0x15
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /XXX/hs_err_pid20390.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
#
I know this is an old question…
The message "Failed to write core dump. Core dumps have been disabled." does not mean this was the cause of the crash. The word "Failed" is indeed misleading here.
It only means that core dumps are disabled for the Java VM (which is usually the case), so no dump is produced when a crash occurs.
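If you do want a core dump the next time the native code crashes, a minimal sketch (the launch command is just a placeholder for however you start your JVM):
ulimit -c unlimited      # lift the per-process core-size limit for this shell
java -jar yourapp.jar    # placeholder launch; a subsequent crash can now write a core file
# on Linux, the dump location is governed by /proc/sys/kernel/core_pattern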

Java Runtime Environment SIGSEGV error on server startup

I'm using Java version "1.7.0_45" with Eclipse Kepler, and on server startup I'm getting the error log below.
Although I've found several posts [1, 2] about the same issue, I've tried everything from adding -Dorg.eclipse.swt.browser.DefaultType=mozilla and -XX:LoopUnrollLimit=1 to running ulimit -c unlimited, but nothing worked for me.
Is there any other workaround?
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000000000000000, pid=7084, tid=139749936641792
#
# JRE version: Java(TM) SE Runtime Environment (7.0_45-b18) (build 1.7.0_45-b18)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.45-b08 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x0000000000000000
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
#
SIGSEGV means that your program is producing a so-called segmentation fault: writing over array boundaries, accessing invalid memory addresses, and so on. So I think you should reinstall Eclipse first, and then reinstall both Java and Eclipse if that alone doesn't work.
I am not sure if this is the correct guess; I hope no one downvotes this...
I had a similar error when starting NetBeans on Ubuntu 13.04. I fixed it with
sudo apt-get install openjdk-7-jdk
Try it and see if it works.
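If the freshly installed OpenJDK is not picked up automatically, a small follow-up sketch for Debian/Ubuntu systems:
sudo update-alternatives --config java   # choose the OpenJDK 7 entry interactively
java -version                            # verify the default JVM has changed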

Hadoop dies on ld-linux.so

I have the following setup:
Hadoop 1.2.1
Oracle Java 1.7
Suse Enterprise Server 10 32bit
If I execute the Pi-example in standalone mode with
bin/hadoop jar hadoop-examples-1.2.1.jar pi 10 10
then Java dies the hard way, telling me
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGFPE (0x8) at pc=0xb7efa20b, pid=9494, tid=3070639008
#
# JRE version: Java(TM) SE Runtime Environment (7.0_40-b43) (build 1.7.0_40-b43)
# Java VM: Java HotSpot(TM) Server VM (24.0-b56 mixed mode linux-x86 )
# Problematic frame:
# C [ld-linux.so.2+0x920b] do_lookup_x+0xab
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/hadoop-1.2.1-new/hs_err_pid9494.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
(The full trace is here)
On a distributed setup, I can start all components and they idle fine. But when I submit a job, the jobtracker dies immediately with a java.io.EOFException; I assume this is due to the same error as above.
I have already tried the same Hadoop on another computer, and there everything is fine (although that one runs 64-bit Arch Linux), and other Javas (OpenJDK, 1.6, 1.7) don't help.
Any suggestions?
Hadoop probably includes a native library that was either compiled for a different platform (e.g. 64-bit instead of 32-bit) or expects a different environment. The stack trace also shows that JVM_LoadLibrary() is trying to load a native library.
Make sure you downloaded the correct version of Hadoop for your platform, or compile it yourself for your target platform.
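A quick way to check for such a mismatch, sketched with illustrative paths (adjust them to your install):
uname -m                                        # machine architecture (i686 here, x86_64 elsewhere)
java -version                                   # shows whether the JVM itself is 32- or 64-bit
file /opt/hadoop-1.2.1-new/lib/native/*/*.so    # each native library reports "32-bit" or "64-bit"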
