I have used the h2o package before, but now it fails to start.
I only have 64-bit Java installed.
java.exe is located in D:\java\New Folder\java8\bin, and I have added that directory to my PATH.
This is the error message:
> h2o.init()
H2O is not running yet, starting it now...
<simpleError in system2(command, "-version", stdout = TRUE, stderr = TRUE): '"D:\java\New Folder\java8;D:\java;\bin\java.exe"' not found>
Error in value[[3L]](cond) :
You have a 32-bit version of Java. H2O works best with 64-bit Java.
Please download the latest Java SE JDK 8 from the following URL:
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
In addition: Warning message:
In normalizePath(path.expand(path), winslash, mustWork) :
path[1]="D:\\java\\New Folder\\java8;D:\java;/bin/java.exe": The filename, directory name, or volume label syntax is incorrect
I checked the registry with regedit and there seems to be no problem there. How can this be solved?
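The path in the error ("D:\java\New Folder\java8;D:\java;\bin\java.exe") looks like two directories joined by a semicolon with \bin\java.exe appended, which suggests h2o is reading a Java location that contains more than one entry. A minimal sketch of one thing worth trying, assuming h2o picks the Java location up from JAVA_HOME and that the JDK really lives in D:\java\New Folder\java8, is to point JAVA_HOME at that single directory from within R before starting H2O:
# hypothetical workaround: make JAVA_HOME point at exactly one JDK directory
Sys.setenv(JAVA_HOME = "D:/java/New Folder/java8")
library(h2o)
h2o.init()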
I am trying to run some JUnit tests against Cassandra, but I get the following error:
[08/12/19 10:48:40:411](main)([]) INFO - c.h.c.c.e.EmbeddedCassandra - Starting embedded Cassandra server.
[08/12/19 10:48:41:497](main)([]) ERROR - o.a.c.u.NativeLibraryDarwin - Failed to link the C library against JNA. Native methods will be unavailable.
java.lang.UnsatisfiedLinkError: /private/var/folders/ty/wl4gxf352m328101m101hwh40000gn/T/jna--321969061/jna10641195286884112036.tmp: dlopen(/private/var/folders/ty/wl4gxf352m328101m101hwh40000gn/T/jna--321969061/jna10641195286884112036.tmp, 1): no suitable image found. Did find:
/private/var/folders/ty/wl4gxf352m328101m101hwh40000gn/T/jna--321969061/jna10641195286884112036.tmp: code signature in (/private/var/folders/ty/wl4gxf352m328101m101hwh40000gn/T/jna--321969061/jna10641195286884112036.tmp) not valid for use in process using Library Validation: mapped file has no cdhash, completely unsigned? Code has to be at least ad-hoc signed.
It was running fine until I switched to macOS Mojave and set everything up again. I think it may be an issue related to permissions, or to JNA?
IDE: IntelliJ
Java: AdoptOpenJDK 11.0.4
JNA: 4.2.2
Any kind of help will be highly appreciated!
This is the result of a bug in AdoptOpenJDK jdk-11.0.4+11 on macOS, persisting through 11.2.
It will be fixed in the jdk-11.0.4+11.3 release.
If you can't wait for the new release, you can work around the problem temporarily by downgrading to 11.0.3+7.
Which version of cassandra-unit are you using? Support for Java 11 (https://issues.apache.org/jira/browse/CASSANDRA-9608) isn't in until Cassandra 4, and I don't think embedded Cassandra is set up for that yet.
The issue
I am attempting to start an instance of NetLogo in R using the RNetLogo package, which has rJava as a dependency.
During installation of rJava 0.9-9 (the latest development snapshot from rforge), I get warnings like the following:
warning: [options] bootstrap class path not set in conjunction with -source 1.6
This appears to refer to Java version 6, even though I only have version 8 on my machine. However, the developer of rJava appears to say here that as long as the package installs and loads correctly, which it does, users should ignore the warnings. Furthermore, .jinit() appears to run correctly:
> .jinit()
[1] 0
and the correct version of Java is detected:
> .jcall("java/lang/System", "S", "getProperty", "java.runtime.version")
[1] "1.8.0_151-8u151-b12-0ubuntu0.16.04.2-b12"
So, having loaded RNetLogo, I attempt to start a NetLogo instance. Here is the minimal code I'm running:
library(RNetLogo)
nl.path <- "~/NetLogo 6.0.2/app"
NLStart(nl.path, gui = FALSE, nl.jarname='netlogo-6.0.2.jar')
This returns the following errors:
java.lang.NoClassDefFoundError: org/nlogo/workspace/Controllable
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
Caused by: java.lang.ClassNotFoundException
at RJavaClassLoader.findClass(RJavaClassLoader.java:383)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 2 more
I get the same issue using RStudio or running R from the terminal (including running as root).
My full sessionInfo():
R version 3.4.3 (2017-11-30)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 16.04.3 LTS
Matrix products: default
BLAS: /usr/lib/libblas/libblas.so.3.6.0
LAPACK: /usr/lib/lapack/liblapack.so.3.6.0
locale:
[1] LC_CTYPE=en_GB.UTF-8 LC_NUMERIC=C LC_TIME=en_GB.UTF-8 LC_COLLATE=en_GB.UTF-8
[5] LC_MONETARY=en_GB.UTF-8 LC_MESSAGES=en_GB.UTF-8 LC_PAPER=en_GB.UTF-8 LC_NAME=en_GB.UTF-8
[9] LC_ADDRESS=en_GB.UTF-8 LC_TELEPHONE=en_GB.UTF-8 LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=en_GB.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] RNetLogo_1.0-4 igraph_1.1.2 rJava_0.9-9
loaded via a namespace (and not attached):
[1] compiler_3.4.3 magrittr_1.5 tools_3.4.3 pkgconfig_2.0.1
Attempted fixes
Based on other users' issues that appear to be related, I also tried the following:
- Setting environment variables in /etc/profile.d/:
export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"
export PATH="$PATH:$HOME/bin:$JAVA_HOME/bin"
export LD_LIBRARY_PATH="/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server"
export CLASSPATH="$CLASSPATH:$HOME/R/x86_64-pc-linux-gnu-library/3.4/rJava/java"
- Running
sudo R CMD javareconf -e
- Adding a couple of lines proposed as macOS fixes to the start of my script:
Sys.setenv(NOAWT=1)
dyn.load('/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server/libjvm.so')
Thanks in advance.
I spent a lot of time staring at this question, assuming I had the same issue because I was getting the same error. So, for future readers: on Ubuntu 17.10, with rJava 0.9-9, NetLogo 6.0.3 and RNetLogo 1.0-4, the same error results when using the Oracle Java 9 SDK. Going back to Java 8 solves it.
For the OP: the ~/ in nl.path might be the problem. On my system it doesn't work, but specifying the full path (that is, /home/user_name/...) does.
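If you prefer to keep writing the path with a tilde, a minimal sketch (using base R's path.expand(), with the same arguments as in the question) is to expand it before handing it to NLStart:
library(RNetLogo)
# expand "~" to the full home-directory path before passing it on
nl.path <- path.expand("~/NetLogo 6.0.2/app")
NLStart(nl.path, gui = FALSE, nl.jarname = "netlogo-6.0.2.jar")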
I had the same problem with RNetLogo 1.0-4 for NetLogo 5.3.1 in combination with openjdk-8 on Ubuntu 16.04. I tried both openjdk-8 and Oracle Java 8 (using ppa:webupd8team/java) with no luck.
In the end it worked with rJava installed as the Ubuntu package (r-cran-rjava) and by going back to RNetLogo 1.0-0 (which I suppose did the trick), still using Oracle Java 8.
url = 'https://cran.r-project.org/src/contrib/Archive/RNetLogo/RNetLogo_1.0-0.tar.gz'
install.packages(url, repos=NULL, type="source")
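For reference, the Ubuntu-packaged rJava mentioned above can be installed from a shell (standard Ubuntu repositories) before installing the archived RNetLogo:
sudo apt-get install r-cran-rjava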
I am getting the error below while trying to publish the application EAR to the server.
Deployment from com.ibm.etools.ejbdeploy.EJBDeployer had errors:
RMIC Command returns RC = MyApplicationEJB. The problems which stopped RMIC are displayed, and have also been recorded in the .log file in error: An error has occurred in the compiler; please file a bug report (http://java.sun.com/cgi-bin/bugreport.cgi).
java.lang.ClassFormatError: JVMCFRE074 no Code attribute specified; class=javax/ejb/RemoveException, method=<init>()V, pc=0
at java.lang.ClassLoader.defineClass(ClassLoader.java:275)
at java.lang.ClassLoader.defineClass(ClassLoader.java:212)
at com.ibm.tools.rmic.iiop.DirectoryLoader.loadClass(DirectoryLoader.java:149)
at com.ibm.tools.rmic.iiop.CompoundType.loadClass(CompoundType.java:354)
at com.ibm.tools.rmic.iiop.Type.initClass(Type.java:1008)
at com.ibm.tools.rmic.iiop.Type.setRepositoryID(Type.java:1025)
at com.ibm.tools.rmic.iiop.CompoundType.initialize(CompoundType.java:762)
at com.ibm.tools.rmic.iiop.ValueType.initialize(ValueType.java:323)
at com.ibm.tools.rmic.iiop.ValueType.forValue(ValueType.java:131)
at com.ibm.tools.rmic.iiop.CompoundType.getMethodExceptions(CompoundType.java:1678)
at com.ibm.tools.rmic.iiop.CompoundType$Method.<init>(CompoundType.java:2457)
at com.ibm.tools.rmic.iiop.CompoundType.addAllMethods(CompoundType.java:1308)
at com.ibm.tools.rmic.iiop.RemoteType.isConformingRemoteInterface(RemoteType.java:222)
at com.ibm.tools.rmic.iiop.RemoteType.initialize(RemoteType.java:171)
at com.ibm.tools.rmic.iiop.RemoteType.forRemote(RemoteType.java:90)
at com.ibm.tools.rmic.iiop.CompoundType.makeType(CompoundType.java:852)
at com.ibm.tools.rmic.iiop.CompoundType$Method.<init>(CompoundType.java:2408)
at com.ibm.tools.rmic.iiop.CompoundType.addAllMethods(CompoundType.java:1308)
at com.ibm.tools.rmic.iiop.RemoteType.isConformingRemoteInterface(RemoteType.java:222)
at com.ibm.tools.rmic.iiop.RemoteType.initialize(RemoteType.java:171)
at com.ibm.tools.rmic.iiop.RemoteType.forRemote(RemoteType.java:90)
at com.ibm.tools.rmic.iiop.CompoundType.addRemoteInterfaces(CompoundType.java:1455)
at com.ibm.tools.rmic.iiop.ImplementationType.initialize(ImplementationType.java:166)
at com.ibm.tools.rmic.iiop.ImplementationType.forImplementation(ImplementationType.java:92)
at com.ibm.tools.rmic.iiop.CompoundType.makeType(CompoundType.java:892)
at com.ibm.tools.rmic.iiop.ClassType.initParents(ClassType.java:197)
at com.ibm.tools.rmic.iiop.ImplementationType.initialize(ImplementationType.java:156)
at com.ibm.tools.rmic.iiop.ImplementationType.forImplementation(ImplementationType.java:92)
at com.ibm.tools.rmic.iiop.StubGenerator.getTopType(StubGenerator.java:151)
at com.ibm.tools.rmic.iiop.Generator.generate(Generator.java:285)
at sun.rmi.rmic.Main.doCompile(Main.java:547)
at sun.rmi.rmic.Main.compile(Main.java:148)
at sun.rmi.rmic.Main.main(Main.java:786)
1 error
Can anybody please help me out with this?
I am using WebSphere 9.1 and JDK 1.6. Interestingly, my colleagues who have a lower version of WebSphere (8.0) do not get this error.
UPDATE: My WebSphere runtime environment is WebSphere Application Server v7.0.
In general, a ClassFormatError means that the class with the error was compiled at a Java level that is greater than the Java level being used at runtime.
In the case of WebSphere v9.0 (which only supports Java 8+), the javax/ejb/RemoveException class is compiled at the Java 7 level, so running WAS on Java 6 with a class compiled at the Java 7 or higher level will result in a ClassFormatError.
Update:
You have mentioned in the comments that you are using WAS v7.0 and not WAS v9. The overall explanation is the same regardless of which version of WAS you are using, namely: you can't run on a lower Java level than the one the classes were compiled with.
I recommend checking what Java version the javax/ejb/RemoveException class in your WAS install was compiled at, and comparing it to the Java level you are running on.
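One way to do that check, assuming the JDK's javap tool is on the PATH and with <path-to-ejb-api-jar> standing in for whichever jar in your WAS install contains the class, is to dump the class file details and look at the "major version" line (50 = Java 6, 51 = Java 7, 52 = Java 8):
javap -verbose -classpath <path-to-ejb-api-jar> javax.ejb.RemoveException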
I'm trying to build the following Hadoop version on a development computer running Windows 10 Home Edition:
hadoop-2.7.3-src
Here are the details about my local development environment:
-Windows 10 Home Edition
-Intel Core i5-6200U CPU @ 2.30GHz
-RAM 16 GB
-64-bit Operating System, x64-based processor
-Microsoft Visual Studio Community 2015 Version 14.0.25431.01 Update 3
-Also added the MSBuild location C:\Program Files (x86)\MSBuild\14.0\Bin\amd64 to the Windows system environment variable Path
-.NET Framework 4.6.01586
-cmake version 3.7.2
-CYGWIN_NT-10.0 LTPBCV82DUG 2.7.0(0.306/5/3) 2017-02-12 13:18 x86_64 Cygwin
-java version "1.8.0_121"
-Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
-Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
-Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T11:41:47-05:00)
-Google Protocol Buffers protoc --version libprotoc 2.5.0
Also, I've created a system environment variable called Platform and set it to x64.
I opened up the Developer Command Prompt for Visual Studio 2015 (VS2015) and ran:
c:\hadoop\hadoop-2.7.3-src> mvn package -Pdist,native-win -DskipTests -Dtar -X
Unfortunately, I'm getting the following error:
[C:\hadoop\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
ZlibDecompressor.c
c:\hadoop\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\native\src\org\apache\hadoop\io\compress\zlib\org_apache_hadoop_io_compress_zlib.h(36): fatal error C1083: Cannot open include file: 'zlib.h': No such file or directory [C:\hadoop\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
Done Building Project "C:\hadoop\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj" (default targets) -- FAILED.
Done Building Project "C:\hadoop\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\native\native.sln" (default targets) -- FAILED.
Build FAILED.
"C:\hadoop\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\native\native.sln" (default target) (1) ->
"C:\hadoop\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj" (default target) (2) ->
(ClCompile target) ->
c:\hadoop\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\native\src\org\apache\hadoop\io\compress\zlib\org_apache_hadoop_io_compress_zlib.h(36): fatal error C1083: Cannot open include file: 'zlib.h': No such file or directory [C:\hadoop\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
c:\hadoop\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\native\src\org\apache\hadoop\io\compress\zlib\org_apache_hadoop_io_compress_zlib.h(36): fatal error C1083: Cannot open include file: 'zlib.h': No such file or directory [C:\hadoop\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
0 Warning(s)
2 Error(s)
Time Elapsed 00:00:02.49
The aforementioned error has to do with the zlib library.
After researching online, I found a suggestion that the following Visual Studio solution file needs to be built successfully in Visual Studio:
....\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\native\native.sln
Using Visual Studio 2015 in Administrator mode, I opened up the native.sln file, and immediately saw an error:
(screenshot of the error omitted)
Could someone please tell me what steps I have to take to resolve said error?
So there were quite a few steps I had to take in order to resolve the problems.
Within the ....\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\winutils directory, I opened up the following solution in Visual Studio 2015:
winutils.sln
Within .....\hadoop-2.7.3-src\hadoop-common-project\hadoop-common\src\main\winutils\libwinutils.c, I commented out the following line of code, and made a modified copy of it as shown below:
//const WCHAR* wsceConfigRelativePath = WIDEN_STRING(STRINGIFY(WSCE_CONFIG_DIR)) L"\\" WIDEN_STRING(STRINGIFY(WSCE_CONFIG_FILE));
const WCHAR* wsceConfigRelativePath = WIDEN_STRING("../etc/hadoop") L"\\" WIDEN_STRING("wsce-site.xml");
Also, in the winutils solution's property window, I had to set the Platform value to x64 (screenshot omitted).
Next, I opened a DOS command prompt and checked the exact version of my Windows OS:
ver
Microsoft Windows [Version 10.0.14393]
Also, I opened up the property window of the libwinutils project and ensured that the properties marked in the snapshot (omitted here) had the proper values.
Also, I took the same steps for the properties of the winutils project.
(Sorry, Stack Overflow would not allow me to include another screenshot, but all you basically have to do is make sure the winutils project's properties are set properly.)
I downloaded the zlib 1.2.11 source code. Using the Developer Command Prompt for VS2015 (Visual Studio 2015), I generated the zlib build files with cmake:
c:\zlib\zlib-1.2.11>cmake -G "Visual Studio 14 2015" -A x64 c:\zlib\zlib-1.2.11\
-- The C compiler identification is MSVC 19.0.24215.1
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of off64_t
-- Check size of off64_t - failed
-- Looking for fseeko
-- Looking for fseeko - not found
-- Looking for unistd.h
-- Looking for unistd.h - not found
-- Configuring done
-- Generating done
-- Build files have been written to: C:/zlib/zlib-1.2.11
Finally, I ran the build with cmake:
c:\zlib\zlib-1.2.11>cmake --build .
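As a side note (standard cmake behaviour, not something from the original post): with the Visual Studio generator, cmake --build produces the Debug configuration by default, so if a Release build of zlib is wanted it has to be requested explicitly:
cmake --build . --config Release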
In Windows System Variables I have the following variable defined:
ZLIB_HOME is set to C:\zlib\zlib-1.2.11
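With ZLIB_HOME pointing at the zlib build, the Hadoop native build can then be retried from the Developer Command Prompt for VS2015 using the same Maven command as before:
mvn package -Pdist,native-win -DskipTests -Dtar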
I'm working on some code to access HBase and I am writing unit tests that create a MiniDFSCluster as part of the test setup.
(defn test-config [& options]
(let [testing-utility (HBaseTestingUtility.)]
(.startMiniCluster testing-utility 1)
(let [config (.getConfiguration testing-utility)]
(if (not= options nil)
(doseq [[key value] options]
(.set config key value)))
config)))
;; For those who don't read Clojure, lines 2 and 3 cause
;; the failure and are equivalent to the following Java
;;
;; HBaseTestingUtility testingUtility = new HBaseTestingUtility();
;; testingUtility.startMiniCluster(1); // blows up on Linux but not Mac OSX
This runs fine on Mac OSX with Java HotSpot:
$ java -version
java version "1.6.0_51"
Java(TM) SE Runtime Environment (build 1.6.0_51-b11-457-11M4509)
Java HotSpot(TM) 64-Bit Server VM (build 20.51-b01-457, mixed mode)
$ lein test
lein test hbase.config-test
lein test hbase.table-test
2013-07-12 17:44:13.488 java[27384:1203] Unable to load realm info from SCDynamicStore
Starting DataNode 0 with dfs.data.dir: /Users/dwilliams/Desktop/Repos/mobiusinversion/hbase/target/test-data/fe0199fd-0168-48d9-98ce-b4a5e62d3257/dfscluster_bbad1095-58d1-4571-ba12-4d4f1c24203f/dfs/data/data1,/Users/dwilliams/Desktop/Repos/mobiusinversion/hbase/target/test-data/fe0199fd-0168-48d9-98ce-b4a5e62d3257/dfscluster_bbad1095-58d1-4571-ba12-4d4f1c24203f/dfs/data/data2
Cluster is active
Ran 11 tests containing 14 assertions.
0 failures, 0 errors.
But when this is run in a Linux environment, the following error occurs:
ERROR in (create-table) (MiniDFSCluster.java:426)
Uncaught exception, not in assertion.
expected: nil
actual: java.lang.NullPointerException: null
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes (MiniDFSCluster.java:426)
org.apache.hadoop.hdfs.MiniDFSCluster.<init> (MiniDFSCluster.java:284)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster (HBaseTestingUtility.java:444)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster (HBaseTestingUtility.java:612)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster (HBaseTestingUtility.java:568)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster (HBaseTestingUtility.java:555)
I filed a travis-ci ticket, since this first manifested itself there and I thought it might be due to their environment.
https://github.com/travis-ci/travis-ci/issues/1240
However, after discussing with Travis support, I was able to reproduce the error on CentOS. I tried both the Sun JDK and OpenJDK on Linux, and both produced the same error. What's going on here? Is this a trivial configuration problem? Perhaps something is set in Mac OS X's environment that is not set in the Linux environment?
If you would like to run the tests, please clone the repo
https://github.com/mobiusinversion/hbase
and run lein test. Help is greatly appreciated!
Update:
Filed this HBASE Jira ticket
https://issues.apache.org/jira/browse/HBASE-8944
Short answer: set "umask 022" prior to running the tests.
Long answer: This is a common environmental issue when running MiniDFSCluster from the Hadoop 1.x releases, which HBaseTestingUtility uses internally. It has been effectively fixed in Hadoop 0.22+ (including 2.0+, but not 1.x at the moment).
The underlying problem is https://issues.apache.org/jira/browse/HDFS-2556.
When the MiniDFSCluster starts up, it creates the temporary storage directories to use for the datanode processes (configured as "dfs.data.dir"). These will be created with your currently set umask. When each datanode starts up, it checks that the directories configured in "dfs.data.dir" exist and that their permissions match the expected value (set as "dfs.datanode.data.dir.perm"). If the directories' permissions do not match the expected value ("755" by default), the datanode process exits.
By default in Hadoop 1.x this value is set to "755", so if you set your umask to "022", the data directories will wind up with the correct permissions. If, however, the permissions do not match the expected value, the datanode will abort and you will see errors like the following in the test log file:
WARN [main] datanode.DataNode(1577): Invalid directory in dfs.data.dir: Incorrect permission for /.../dfs/data/data2, expected: rwxr-xr-x, while actual: rwxrwxr-x
In later versions of Hadoop, the datanode will attempt to change the directory permissions to the expected value if they do not match. Only if this operation fails will the datanode abort. HDFS-2556 proposes backporting this change to the 1.x releases, but has not yet been fixed.
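In practice, following the short answer above, that means setting the umask in the shell before invoking the test run, for example:
umask 022
lein test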