I keep hitting a wall getting Chromium running with JCEF in Eclipse. I got to the point where the native functions are discovered, but I am still unable to complete initialization. I set the LD_PRELOAD variable. I run into the same problem whether I run the MainFrame.java class or my custom Scala code. Is there a way to resolve this?
System:
OS: Ubuntu 16.04
JCEF version 3
CEF version 3
Java: JDK 8
Structure and Configuration:
Everything is under the binary distribution structure. I imported the jars as a library, added the native library path to the jcef jar and imported it into my project.
I set up the run configuration with the environment variables:
DISPLAY=:0.0
LD_PRELOAD=/path/to/libcef.so
All of my libraries and *.pak files are in the same directory as libcef.so and in a subdirectory of it (the binary distribution layout), as are the Chrome sandbox and helpers.
Code and Error
The code fails after the following:
println("Generating Handlers")
CefApp.addAppHandler(Handlers.getHandlerAdapter)
private var settings = new CefSettings
settings.windowless_rendering_enabled = useOSR
println("Starting App")
private final val cefApp : CefApp = if(commandLineArgs != null && commandLineArgs.size > 0) CefApp.getInstance(ChromeCommandLineParser.parse(commandLineArgs)) else CefApp.getInstance(settings)
println("Creating Client")
private final val client : CefClient = cefApp.createClient()
The following output results:
Starting
Generating Handlers
Starting App
Creating Client
initialize on Thread[AWT-EventQueue-0,6,main] with library path /home/XXXXX/jcef/src/binary_distrib/linux64/bin/lib/linux64
[0413/135633:ERROR:icu_util.cc(157)] Invalid file descriptor to ICU data received.
[0413/135633:FATAL:content_main_runner.cc(700)] Check failed: base::i18n::InitializeICU().
#0 0x7ff8fa94a62e base::debug::StackTrace::StackTrace()
#1 0x7ff8fa95f88b logging::LogMessage::~LogMessage()
#2 0x7ff8fd7588d4 content::ContentMainRunnerImpl::Initialize()
#3 0x7ff8fa857962 CefContext::Initialize()
#4 0x7ff8fa85775b CefInitialize()
#5 0x7ff8fa80a9b8 cef_initialize
#6 0x7ff8d6946914 CefInitialize()
#7 0x7ff8d690200f Java_org_cef_CefApp_N_1Initialize
#8 0x7ff8de207994 <unknown>
All help is appreciated. Thanks
I had a lot of problems with this too, until I created the symlinks to "icudtl.dat", "natives_blob.bin" and "snapshot_blob.bin" under the $jdk/bin directory, instead of $jdk/jre/bin.
Now I don't get this error any more.
Using the example in https://bitbucket.org/chromiumembedded/java-cef/wiki/BranchesAndBuilding
I changed this...
$ sudo ln -s /path/to/java-cef/src/third_party/cef/linux64/Resources/icudtl.dat /usr/lib/jvm/java-8-oracle/jre/bin/icudtl.dat
$ sudo ln -s /path/to/java-cef/src/third_party/cef/linux64/Debug/natives_blob.bin /usr/lib/jvm/java-8-oracle/jre/bin/natives_blob.bin
$ sudo ln -s /path/to/java-cef/src/third_party/cef/linux64/Debug/snapshot_blob.bin /usr/lib/jvm/java-8-oracle/jre/bin/snapshot_blob.bin
To this...
$ sudo ln -s /path/to/java-cef/src/third_party/cef/linux64/Resources/icudtl.dat /usr/lib/jvm/java-8-oracle/bin/icudtl.dat
$ sudo ln -s /path/to/java-cef/src/third_party/cef/linux64/Debug/natives_blob.bin /usr/lib/jvm/java-8-oracle/bin/natives_blob.bin
$ sudo ln -s /path/to/java-cef/src/third_party/cef/linux64/Debug/snapshot_blob.bin /usr/lib/jvm/java-8-oracle/bin/snapshot_blob.bin
The solution that @dvlcube gave works, but it's not convenient. You can add some extra logic to detect the user's environment and, if it's Linux, copy the required files. Example:
GitHub - PandomiumLoadWorker [:53]
Instead of copying you can also create symlinks:
Java SE Tutorial: Links
If you don't want to specify the Linux-related environment variables before launching, you can also inject those variables (like LD_LIBRARY_PATH and LD_PRELOAD) at runtime:
GitHub - LinuxEnv
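For reference, here is a minimal sketch of that copy-on-Linux idea (the cefDistDir and targetBinDir paths are assumptions you would have to supply; on this setup the target would be the $jdk/bin directory mentioned above):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CefRuntimeSetup {

    // Files CEF expects to find next to the java executable on Linux.
    private static final String[] CEF_FILES = {"icudtl.dat", "natives_blob.bin", "snapshot_blob.bin"};

    /** Copies the CEF data files into the directory of the java binary when running on Linux. */
    public static void prepare(Path cefDistDir, Path targetBinDir) throws IOException {
        if (!System.getProperty("os.name").toLowerCase().contains("linux")) {
            return; // only needed on Linux
        }
        for (String name : CEF_FILES) {
            Path source = cefDistDir.resolve(name);
            Path target = targetBinDir.resolve(name);
            if (Files.exists(source) && !Files.exists(target)) {
                // Files.createSymbolicLink(target, source) works as well if you prefer symlinks.
                Files.copy(source, target, StandardCopyOption.COPY_ATTRIBUTES);
            }
        }
    }
}

PandomiumLoadWorker (linked above) does something similar, except it also extracts the native libraries and sets the environment up for you.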
Scope
I am trying to deploy a Quarkus-based application to a Raspberry Pi using some fancy technologies. My goal is to figure out an easy way to develop an application with the Quarkus framework and then deploy it as a native executable to a Raspberry Pi device with full GPIO pin access. Below are the requirements I set for myself and my environment settings, to give a better picture of the problem I faced.
Acceptance Criteria
Java 17
Build native executable using GraalVM
Execute native executable in a micro image on raspberry's docker
Target platform can vary
Be able to use the GPIO, SPI, I2C, etc. interfaces of the Raspberry Pi
Environment
Development PC: Ubuntu 22.04.1 LTS, platform x86_64 (linux/amd64)
Raspberry Pi Model 3 B+: DietPi, platform aarch64 (linux/arm64/v8)
Prerequisites
Java: diozero, a device I/O library
Docker: working with buildx
Quarkus: build a native executable
How I built ARM based Docker Images for Raspberry Pi using buildx CLI Plugin on Docker Desktop?
Building Multi-Architecture Docker Images With Buildx
Application
source code on github
As the project base I used the getting-started application from
https://github.com/quarkusio/quarkus-quickstarts
Adding the diozero library to pom.xml:
<dependency>
<groupId>com.diozero</groupId>
<artifactId>diozero-core</artifactId>
<version>1.3.3</version>
</dependency>
Creating a simple resource to test GPIO pins:
package org.acme.getting.started;

import com.diozero.devices.LED;

import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;

@Path("led")
public class LedResource {

    @Path("on")
    public String turnOn(final @QueryParam("gpio") Integer gpio) {
        try (final LED led = new LED(gpio)) {
            led.on();
        } catch (final Throwable e) {
            return e.getMessage();
        }
        return "turn on led on gpio " + gpio;
    }

    @Path("off")
    public String turnOff(final @QueryParam("gpio") Integer gpio) {
        try (final LED led = new LED(gpio)) {
            led.off();
        } catch (final Throwable e) {
            return e.getMessage();
        }
        return "turn off led on gpio " + gpio;
    }
}
Dockerfile
```
# Stage 1 : build with maven builder image with native capabilities
FROM quay.io/quarkus/ubi-quarkus-native-image:22.0.0-java17-arm64 AS build
COPY --chown=quarkus:quarkus mvnw /code/mvnw
COPY --chown=quarkus:quarkus .mvn /code/.mvn
COPY --chown=quarkus:quarkus pom.xml /code/
USER quarkus
WORKDIR /code
RUN ./mvnw -B org.apache.maven.plugins:maven-dependency-plugin:3.1.2:go-offline
COPY src /code/src
RUN ./mvnw package -Pnative
# Stage 2 : create the docker final image
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6-902
WORKDIR /work/
COPY --from=build /code/target/*-runner /work/application
# set up permissions for user 1001
RUN chmod 775 /work /work/application \
&& chown -R 1001 /work \
&& chmod -R "g+rwX" /work \
&& chown -R 1001:root /work
EXPOSE 8080
USER 1001
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
```
Building image with native executable
The Dockerfile is based on the Quarkus docs; I changed the image of the build container to quay.io/quarkus/ubi-quarkus-native-image:22.0.0-java17-arm64 and the executor container to registry.access.redhat.com/ubi8/ubi-minimal:8.6-902, both of which are linux/arm64* compliant.
Since I am developing and building on linux/amd64 and I want to target linux/arm64/v8, my executable must be created in a target-like environment. I can achieve that with the buildx feature, which enables cross-arch builds for Docker images.
Installing QEMU
sudo apt-get install -y qemu-user-static
sudo apt-get install -y binfmt-support
Initializing buildx for linux/arm64/v8 builds
sudo docker buildx create --platform linux/arm64/v8 --name arm64-v8
Use new driver
sudo docker buildx use arm64-v8
Bootstrap driver
sudo docker buildx inspect --bootstrap
Verify
sudo docker buildx inspect
Name: arm64-v8
Driver: docker-container
Nodes:
Name: arm64-v80
Endpoint: unix:///var/run/docker.sock
Status: running
Platforms: linux/arm64*, linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
Now it looks like we're ready to run the build. I ended up with the following command:
sudo docker buildx build --push --progress plain --platform linux/arm64/v8 -f Dockerfile -t nanobreaker/agus:arm64 .
--push - since I need to deploy the final image somewhere
--platform linux/arm64/v8 - Docker requires the target platform to be defined
-t nanobreaker/agus:arm64 - my target repository for the final image
It took ~16 minutes to complete the build and push the image.
The target platform is linux/arm64, as needed.
The image size is 59.75 MB, good enough already (with a micro image I could achieve ~10 MB).
After I connected to the Raspberry Pi, I downloaded the image and ran it:
docker run -p 8080:8080 nanobreaker/agus:arm64
Pretty nice. Let's try to execute an HTTP request to test the GPIO pins:
curl 192.168.0.20:8080/led/on?gpio=3
Okay, so I see that there are permission problems and the diozero library is not on java.library.path.
We can fix the permission problems by adding an additional parameter to the docker run command:
docker run --privileged -p 8080:8080 nanobreaker/agus:arm64
PROBLEM
From this point I do not know how to resolve the library load error in a native executable.
I've tried:
Pulling the native executable out of the final container and executing it on the Raspberry Pi host OS, with the same result; this makes me think the library was not included at GraalVM compile time?
Learning how the library gets loaded: https://github.com/mattjlewis/diozero/blob/main/diozero-core/src/main/java/com/diozero/util/LibraryLoader.java
UPDATE I
It looks like I have two options here:
Figure out a way to create a configuration for the diozero library so it is properly resolved by GraalVM during native image compilation.
Add the library to the native image and pass it to the native executable (see the sketch after this list).
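For the second option, here is a minimal, hedged sketch of the general idea (this is not diozero's actual loader; the resource path is an assumption for illustration): ship the .so on the classpath, extract it to a temporary file at startup and load it explicitly.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class NativeLibUnpacker {

    /**
     * Extracts a native library bundled on the classpath to a temp file and loads it.
     * In the real application diozero's own LibraryLoader is what resolves the library;
     * this only illustrates the mechanism.
     */
    public static void loadFromClasspath(String resourcePath) throws Exception {
        try (InputStream in = NativeLibUnpacker.class.getResourceAsStream("/" + resourcePath)) {
            if (in == null) {
                throw new IllegalStateException("Native library not found on classpath: " + resourcePath);
            }
            Path tmp = Files.createTempFile("native-", ".so");
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            System.load(tmp.toAbsolutePath().toString()); // System.load requires an absolute path
        }
    }
}

// hypothetical usage: NativeLibUnpacker.loadFromClasspath("lib/linux-aarch64/libdiozero-system-utils.so");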
UPDATE II
Further reading of the Quarkus docs landed me here: https://quarkus.io/guides/writing-native-applications-tips
By default, when building a native executable, GraalVM will not include any of the resources that are on the classpath into the native executable it creates. Resources that are meant to be part of the native executable need to be configured explicitly. Quarkus automatically includes the resources present in META-INF/resources (the web resources) but, outside this directory, you are on your own.
I reached out to @Matt Lewis (the creator of diozero) and he was kind enough to share the configs he used to compile into GraalVM. Thank you Matt!
Here’s the documentation on my initial tests: https://www.diozero.com/performance/graalvm.html
I stashed the GraalVM config here: https://github.com/mattjlewis/diozero/tree/main/src/main/graalvm/config
So, combining this knowledge, we can enrich pom.xml with additional settings to tell GraalVM how to process our library:
<quarkus.native.additional-build-args>
-H:ResourceConfigurationFiles=resource-config.json,
-H:ReflectionConfigurationFiles=reflection-config.json,
-H:JNIConfigurationFiles=jni-config.json,
-H:+TraceServiceLoaderFeature,
-H:+ReportExceptionStackTraces
</quarkus.native.additional-build-args>
I also added resource-config.json, reflection-config.json and jni-config.json to the resource folder of the project (src/main/resources).
First, I will try to create a native executable on my native OS: ./mvnw package -Dnative
Fatal error: org.graalvm.compiler.debug.GraalError: com.oracle.graal.pointsto.constraints.UnsupportedFeatureException: No instances of java.lang.ProcessHandleImpl are allowed in the image heap as this class should be initialized at image runtime. To see how this object got instantiated use --trace-object-instantiation=java.lang.ProcessHandleImpl.
Okay, so it failed, but let's trace object instantiation as recommended; maybe we can do something in the configs to get around this. I added --trace-object-instantiation=java.lang.ProcessHandleImpl to the additional build args.
Fatal error: org.graalvm.compiler.debug.GraalError: com.oracle.graal.pointsto.constraints.UnsupportedFeatureException: No instances of java.lang.ProcessHandleImpl are allowed in the image heap as this class should be initialized at image runtime. Object has been initialized by the java.lang.ProcessHandleImpl class initializer with a trace:
at java.lang.ProcessHandleImpl.<init>(ProcessHandleImpl.java:227)
at java.lang.ProcessHandleImpl.<clinit>(ProcessHandleImpl.java:77)
. To fix the issue mark java.lang.ProcessHandleImpl for build-time initialization with --initialize-at-build-time=java.lang.ProcessHandleImpl or use the the information from the trace to find the culprit and --initialize-at-run-time=<culprit> to prevent its instantiation.
Something new at least. Let's try to initialize it at build time first with --initialize-at-build-time=java.lang.ProcessHandleImpl
Error: Incompatible change of initialization policy for java.lang.ProcessHandleImpl: trying to change BUILD_TIME from command line with 'java.lang.ProcessHandleImpl' to RERUN for JDK native code support via JNI
com.oracle.svm.core.util.UserError$UserException: Incompatible change of initialization policy for java.lang.ProcessHandleImpl: trying to change BUILD_TIME from command line with 'java.lang.ProcessHandleImpl' to RERUN for JDK native code support via JNI
Okay, so we're not able to change the initialization kind, and it looks like this won't have any effect anyway.
I found out that with -H:+PrintClassInitialization we can generate a CSV file with class initialization info.
Here we have two lines for java.lang.ProcessHandleImpl:
java.lang.ProcessHandleImpl, RERUN, for JDK native code support via JNI
java.lang.ProcessHandleImpl$Info, RERUN, for JDK native code support via JNI
So it says the class is marked as RERUN, but isn't that exactly what we're looking for? Makes no sense to me right now.
UPDATE III
With the configs for GraalVM provided by @Matt I was able to compile a native image, but it fails anyway at runtime with java.lang.UnsatisfiedLinkError, which makes me think the library was not injected properly.
So it looks like we just need to build a proper configuration file. To do this, let's build our application without native for now, run it on the Raspberry Pi, trigger the code related to diozero, and collect the output configs.
./mvnw clean package -Dquarkus.package.type=uber-jar
Deploying to the Raspberry Pi; running with the GraalVM agent to generate the configs (https://www.graalvm.org/22.1/reference-manual/native-image/Agent/):
/$GRAALVM_HOME/bin/java -agentlib:native-image-agent=config-output-dir=config -jar ags-gateway-1.0.0-SNAPSHOT-runner.jar
Running simple requests to trigger the diozero code (I connected an LED to the Raspberry Pi on GPIO 4 and could actually see it turn off/on):
curl -X POST 192.168.0.20:8080/blink/off?gpio=4
curl -X POST 192.168.0.20:8080/blink/on?gpio=4
I've published the project with the output configs.
One thing I noticed is that with "pattern":"\\Qlib/linux-aarch64/libdiozero-system-utils.so\\E" the aarch64 library gets pulled while running on the Pi, which is correct, but when I build on my native OS I should specify the 'amd64' platform.
Let's try to build a native executable with the new configs:
./mvnw package -Dnative
Successfully compiled. Let's run and test:
./target/ags-gateway-1.0.0-SNAPSHOT-runner
curl -X POST localhost:8080/led/on?gpio=4
And here we have an error again:
ERROR [io.qua.ver.htt.run.QuarkusErrorHandler] (executor-thread-0) HTTP Request to /led/on?gpio=4 failed, error id: b0ef3f8a-6813-4ea8-886f-83f626eea3b5-1: java.lang.UnsatisfiedLinkError: com.diozero.internal.provider.builtin.gpio.NativeGpioDevice.openChip(Ljava/lang/String;)Lcom/diozero/internal/provider/builtin/gpio/GpioChip; [symbol: Java_com_diozero_internal_provider_builtin_gpio_NativeGpioDevice_openChip or Java_com_diozero_internal_provider_builtin_gpio_NativeGpioDevice_openChip__Ljava_lang_String_2]
So I finally managed to build a native image, but for some reason it didn't resolve the JNI bindings for the native library.
Any thoughts on how to properly inject diozero library into native executable?
UPDATE IV
With the help of @matthew-lewis we managed to build an aarch64 native executable on an amd64 OS. I updated the source project with the final configurations, but I must note that this is not a final solution: it doesn't cover all of the library code and, according to Matt's comments, this might not be the only way to configure the GraalVM build.
I've created a very simple Quarkus app that exposes a single REST API to list the available GPIOs. Note that it currently uses the mock provider that will be introduced in v1.3.4 so that I can test and run locally without deploying to a Raspberry Pi.
Running on a Pi would be as simple as removing the dependency to diozero-provider-mock in the pom.xml - you would also currently need to change the dependency to 1.3.3 until 1.3.4 is released.
Basically you need to add this to the application.properties file:
quarkus.native.additional-build-args=\
-H:ResourceConfigurationFiles=resource-config.json,\
-H:JNIConfigurationFiles=jni-config.json,\
-H:ReflectionConfigurationFiles=reflect-config.json
These files were generated by running com.diozero.sampleapps.LEDTest with the GraalVM Java executable (with a few minor tweaks), i.e.:
$GRAALVM_HOME/bin/java -agentlib:native-image-agent=config-output-dir=config \
-cp diozero-sampleapps-1.3.4.jar:diozero-core-1.3.4.jar:tinylog-api-2.4.1.jar:tinylog-impl-2.4.1.jar \
com.diozero.sampleapps.LEDTest 18
Note that a lot of this was based on my prior experiments with GraalVM as documented here and here.
The ProcessHandleImpl error appears to be related to the tinylog reflect config, which I have edited out.
Update 1
To make life easy for users of diozero, the library does a bit of static initialisation for things like detecting the local board. This causes issues with the logic that loads the most appropriate native library at most once (see LibraryLoader; you will notice it has a static Map of libraries that have been loaded, which prevents the library from being loaded at runtime). To get around this I recommend adding this build property:
--initialize-at-run-time=com.diozero.sbc\\,com.diozero.util
Next, I have been unable to resolve the java.lang.ProcessHandleImpl issue, which prevents reenabling the service loader (diozero uses service loader quite a bit to enable flexibility and extensibility). It would be nice to be able to add this flag:
quarkus.native.auto-service-loader-registration=true
Instead I have specified relevant classes in resource-config.json.
I'm having issues with Bouncycastle, which only arise when running the :lint task.
Generally it seems to be a Java 9 byte-code version 53.0 / ASM version conflict.
These are the dependencies:
// https://mvnrepository.com/artifact/org.bouncycastle
implementation "org.bouncycastle:bcprov-jdk15on:1.64"
implementation "org.bouncycastle:bcpkix-jdk15on:1.64"
These cause the :lint task to throw processing errors:
> Task :mobile:lint
Error processing bcpkix-jdk15on-1.64.jar:META-INF/versions/9/module-info.class: broken class file? (This feature requires ASM6)
Error processing bcprov-jdk15on-1.64.jar:META-INF/versions/9/module-info.class: broken class file? (This feature requires ASM6)
META-INF/versions/9/module-info.class: broken class file? (This feature requires ASM6)
The same goes for:
// https://mvnrepository.com/artifact/com.google.code.gson/gson
implementation "com.google.code.gson:gson:2.8.6"
Since upgrading from 1.4.1 to 1.4.2-native-mt, it's the same again:
implementation "org.jetbrains.kotlinx:kotlinx-coroutines-android:1.4.2-native-mt"
kotlin-stdlib-1.4.0.jar:META-INF\versions\9\module-info.class: broken class file? (Module requires ASM6)
As already mentioned, this was introduced in Java 9, which Android does not support. You could just use packagingOptions to remove those classes.
android {
packagingOptions {
exclude "**/module-info.class"
}
}
This should not affect actually executed code, and it should also remove the classes from lint checks, since lint works on bytecode.
Update: Please see my current answer, which nails the problem.
This answer is only being kept as an example for Gradle scripting.
When using old versions (likely built with Java 8), there are no such processing errors:
// https://mvnrepository.com/artifact/org.bouncycastle
implementation "org.bouncycastle:bcprov-jdk15on:1.60"
implementation "org.bouncycastle:bcpkix-jdk15on:1.60"
// https://mvnrepository.com/artifact/com.google.code.gson/gson
implementation "com.google.code.gson:gson:2.8.5"
The issue obviously was introduced with version 1.61 / 2.8.6 (likely built with Java 9).
It's annoying when Google brings one back to one's own answer, which is not really an answer. Instead of holding back the version or editing the JAR manually, I've written a DeleteModuleInfoTask and a shell script, which automate the deletion of module-info.class from any given Java dependency. Since commandLine only accepts a single command, one almost has to call a script. This should also serve as a good example of a custom Exec task.
For Linux: module_info.sh considers versions/9/module-info.class and module-info.class:
#!/usr/bin/env bash
GRADLE_CACHE_DIR=$HOME/.gradle/caches/modules-2/files-2.1
ZIP_PATHS=(META-INF/versions/9/module-info.class module-info.class)
if [[ $# -ne 3 ]]; then
echo "Illegal number of parameters"
exit 1
else
if [ -d "$GRADLE_CACHE_DIR" ]; then
DIRNAME=${GRADLE_CACHE_DIR}/$1/$2/$3
if [ -d "$DIRNAME" ]; then
cd ${DIRNAME} || exit 1
find . -name ${2}-${3}.jar | (
read ITEM;
for ZIP_PATH in "${ZIP_PATHS[@]}"; do
INFO=$(zipinfo ${ITEM} ${ZIP_PATH} 2>&1)
if [ "${INFO}" != "caution: filename not matched: ${ZIP_PATH}" ]; then
zip ${ITEM} -d ${ZIP_PATH} # > /dev/null 2>&1
fi
done
)
exit 0
fi
fi
fi
For Windows: module_info.bat depends on 7-Zip:
@echo off
REM delete module-info.class from JAR file - may interfere with the local IDE.
for /R %USERPROFILE%\.gradle\caches\modules-2\files-2.1\%1\%2\%3\ %%G in (%2-%3.jar) do (
if exist %%G (
7z d %%G META-INF\versions\9\module-info.class > NUL:
7z d %%G versions\9\module-info.class > NUL:
7z d %%G module-info.class > NUL:
)
)
Update: After some testing I came to the conclusion that it may be better to manually edit the file when developing on Windows, because Android Studio and Java will lock the JAR, which will subsequently prevent the edit and leave the temp file behind.
File tasks.gradle provides the DeleteModuleInfoTask:
import javax.inject.Inject
abstract class DeleteModuleInfoTask extends Exec {
@Inject
DeleteModuleInfoTask(String dependency) {
def os = org.gradle.internal.os.OperatingSystem.current()
def stdout = new ByteArrayOutputStream()
def stderr = new ByteArrayOutputStream()
ignoreExitValue true
standardOutput stdout
errorOutput stderr
workingDir "${getProject().getGradle().getGradleUserHomeDir()}${File.separator}caches${File.separator}modules-2${File.separator}files-2.1${File.separator}${dependency.replace(":", File.separator).toString()}"
String script = "${getProject().getRootDir().getAbsolutePath()}${File.separator}scripts${File.separator}"
def prefix = ""; def suffix = "sh"
if (os.isWindows()) {prefix = "cmd /c "; suffix = "bat"}
String[] item = dependency.split(":")
commandLine "${prefix}${script}module_info.${suffix} ${item[0]} ${item[1]} ${item[2]}".split(" ")
// doFirst {println "${commandLine}"}
doLast {
if (execResult.getExitValue() == 0) {
if (stdout.toString() != "") {
println "> Task :${project.name}:${name} ${stdout.toString()}"
}
} else {
println "> Task :${project.name}:${name} ${stderr.toString()}"
}
}
}
}
Example Usage:
// Bouncycastle
tasks.register("lintFixModuleInfoBcPkix", DeleteModuleInfoTask, "org.bouncycastle:bcpkix-jdk15on:1.64")
lint.dependsOn lintFixModuleInfoBcPkix
tasks.register("lintFixModuleInfoBcProv", DeleteModuleInfoTask, "org.bouncycastle:bcprov-jdk15on:1.64")
lint.dependsOn lintFixModuleInfoBcProv
// GSON
tasks.register("lintFixModuleInfoGson", DeleteModuleInfoTask, "com.google.code.gson:gson:2.8.6")
lint.dependsOn lintFixModuleInfoGson
// Kotlin Standard Library
tasks.register("lintFixModuleInfoKotlinStdLib", DeleteModuleInfoTask, "org.jetbrains.kotlin:kotlin-stdlib:1.4.32")
lint.dependsOn lintFixModuleInfoKotlinStdLib
Make sure to register these tasks only for a single module.
According to resmon ("Resource Monitor" > "Associated Handles"), studio64 and java may hold a lock on the JAR file, so 7-Zip may only be able to edit the archive after Android Studio and Java have been closed; at least it works nicely for CI on Linux.
The file module-info.class is part of the Java module system, which was introduced in Java 9. As per this issue on the Android IssueTracker, the bug has been fixed since Android Studio 3.4.
I have got the following error message:
Error processing C:\Users\mypc\.gradle\caches\modules-2\files-2.1\com.google.code.gson\gson\2.8.6\9180733b7df8542621dc12e21e87557e8c99b8cb\gson-2.8.6.jar:module-info.class: broken class file? (This feature requires ASM6)
This error occurs without using a development system like Android Studio. I use Gradle 6.1.1.
I worked around the error as follows:
Open the file gson-2.8.6.jar, which is named in the error message
Remove the file module-info.class, which is located in the root of the JAR
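If you prefer to script that manual edit, here is a small sketch using the JDK's zip file system (the JAR path is whatever the error message names; make sure Gradle and the IDE are closed so the file is not locked):

import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;

public class StripModuleInfo {
    public static void main(String[] args) throws Exception {
        // e.g. the gson-2.8.6.jar path printed in the lint error
        Path jar = Paths.get(args[0]);
        URI uri = URI.create("jar:" + jar.toUri());
        // Open the JAR as a zip file system and delete the module descriptors.
        try (FileSystem zipFs = FileSystems.newFileSystem(uri, Collections.<String, Object>emptyMap())) {
            Files.deleteIfExists(zipFs.getPath("module-info.class"));
            Files.deleteIfExists(zipFs.getPath("META-INF/versions/9/module-info.class"));
        }
    }
}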
There is a simpler workaround. Basically, the problem can be summarized as "running Gradle with Java 8 while handling files which were built with Java 9". My new approach is building with Java 11 (GitHub Actions also builds with Java 11, and Gradle 6.7.1 currently supports up to Java 15).
After installing Java 11 with sudo dnf install java-11-openjdk, alternatives --display java will list the JDK to use, for example: /usr/lib/jvm/java-11-openjdk-11.0.11.0.9-0.el8_3.x86_64.
On a side note, building with JDK 11 also fixes this warning:
Current JDK version 1.8.0_172-b11 has a bug (https://bugs.openjdk.java.net/browse/JDK-8007720) that prevents Room from being incremental. Consider using JDK 11+ or the embedded JDK shipped with Android Studio 3.5+.
The "embedded JDK shipped with Android Studio 3.5+" is still Java 8 ...
I have an application in which I want to use Java to start and stop Docker containers. It seems that the way to do this is using docker-machine create, which works fine when I test from the command line.
However, when running using Commons-Exec from Java I get the error:
(aa4567c1-058f-46ae-9e97-56fb8b45211c) Creating SSH key...
Error creating machine: Error in driver during machine creation: /usr/local/bin/VBoxManage modifyvm aa4567c1-058f-46ae-9e97-56fb8b45211c --firmware bios --bioslogofadein off --bioslogofadeout off --bioslogodisplaytime 0 --biosbootmenu disabled --ostype Linux26_64 --cpus 1 --memory 1024 --acpi on --ioapic on --rtcuseutc on --natdnshostresolver1 off --natdnsproxy1 on --cpuhotplug off --pae on --hpet on --hwvirtex on --nestedpaging on --largepages on --vtxvpid on --accelerate3d off --boot1 dvd failed:
VBoxManage: error: Could not find a registered machine with UUID {aa4567c1-058f-46ae-9e97-56fb8b45211c}
VBoxManage: error: Details: code VBOX_E_OBJECT_NOT_FOUND (0x80bb0001), component VirtualBoxWrap, interface IVirtualBox, callee nsISupports
VBoxManage: error: Context: "FindMachine(Bstr(a->argv[0]).raw(), machine.asOutParam())" at line 500 of file VBoxManageModifyVM.cpp
I have set my VBOX_USER_HOME variable in an initializationScript that I'm using to start the machine:
export WORKERID=$1
export VBOX_USER_HOME=/Users/me/Library/VirtualBox
# create the worker using docker-machine, load the env of the newly
# created machine, then run the container
docker-machine create $WORKERID && \
eval $(docker-machine env $WORKERID) && \
docker run -d myimage
And I'm executing this from Java via the Commons Exec CommandLine class:
CommandLine cmdline = new CommandLine("/bin/sh");
cmdline.addArgument(initializeWorkerScript.getAbsolutePath());
cmdline.addArgument("test");
Executor executor = new DefaultExecutor();
If there is another library that can interface with docker-machine from Java I'm happy to use that, or to change out Commons Exec if that's the issue (though I don't understand why). The basic requirement is that I have some way to get docker-machine to create a machine using Java and then later to be able to use docker-machine to stop that machine.
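For what it's worth, here is a hedged sketch of how the full invocation could look with Commons Exec, capturing stdout/stderr so the docker-machine/VBoxManage messages end up somewhere visible (the class and method names are illustrative):

import java.io.ByteArrayOutputStream;

import org.apache.commons.exec.CommandLine;
import org.apache.commons.exec.DefaultExecutor;
import org.apache.commons.exec.PumpStreamHandler;

public class MachineStarter {

    /** Runs the initialization script via /bin/sh and returns the exit value. */
    public static int createMachine(String scriptPath, String workerId) throws Exception {
        CommandLine cmdline = new CommandLine("/bin/sh");
        cmdline.addArgument(scriptPath);
        cmdline.addArgument(workerId);

        ByteArrayOutputStream stdout = new ByteArrayOutputStream();
        ByteArrayOutputStream stderr = new ByteArrayOutputStream();

        DefaultExecutor executor = new DefaultExecutor();
        executor.setStreamHandler(new PumpStreamHandler(stdout, stderr));

        int exitValue = executor.execute(cmdline); // throws ExecuteException on a non-zero exit value
        System.out.println(stdout);
        System.err.println(stderr);
        return exitValue;
    }
}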
As it turns out, the example that I posted should work. The issue I was having is that I was provisioning machines with a UUID name, and that name contained dash (-) characters, which apparently break VBoxManage. This might be some kind of path problem, but I'm just speculating. When I changed my UUID to use dots (.) instead of dashes, it loaded and started the machine just fine.
I'm happy to remove this post if the moderators want, but I will leave it up here in case people are looking for solutions to problems with docker-machine create naming issues.
So, I am trying to create a Spark session in Python 2.7 using the following:
#Initialize SparkSession and SparkContext
from pyspark.sql import SparkSession
from pyspark import SparkContext
#Create a Spark Session
SpSession = SparkSession \
.builder \
.master("local[2]") \
.appName("V2 Maestros") \
.config("spark.executor.memory", "1g") \
.config("spark.cores.max","2") \
.config("spark.sql.warehouse.dir", "file:///c:/temp/spark-warehouse")\
.getOrCreate()
#Get the Spark Context from Spark Session
SpContext = SpSession.sparkContext
I get the following error pointing to the python\lib\pyspark.zip\pyspark\java_gateway.py path:
Exception: Java gateway process exited before sending the driver its port number
Tried to look into the java_gateway.py file, with the following contents:
import atexit
import os
import sys
import select
import signal
import shlex
import socket
import platform
from subprocess import Popen, PIPE

if sys.version >= '3':
    xrange = range

from py4j.java_gateway import java_import, JavaGateway, GatewayClient
from py4j.java_collections import ListConverter

from pyspark.serializers import read_int


# patching ListConverter, or it will convert bytearray into Java ArrayList
def can_convert_list(self, obj):
    return isinstance(obj, (list, tuple, xrange))

ListConverter.can_convert = can_convert_list


def launch_gateway():
    if "PYSPARK_GATEWAY_PORT" in os.environ:
        gateway_port = int(os.environ["PYSPARK_GATEWAY_PORT"])
    else:
        SPARK_HOME = os.environ["SPARK_HOME"]
        # Launch the Py4j gateway using Spark's run command so that we pick up the
        # proper classpath and settings from spark-env.sh
        on_windows = platform.system() == "Windows"
        script = "./bin/spark-submit.cmd" if on_windows else "./bin/spark-submit"
        submit_args = os.environ.get("PYSPARK_SUBMIT_ARGS", "pyspark-shell")
        if os.environ.get("SPARK_TESTING"):
            submit_args = ' '.join([
                "--conf spark.ui.enabled=false",
                submit_args
            ])
        command = [os.path.join(SPARK_HOME, script)] + shlex.split(submit_args)

        # Start a socket that will be used by PythonGatewayServer to communicate its port to us
        callback_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        callback_socket.bind(('127.0.0.1', 0))
        callback_socket.listen(1)
        callback_host, callback_port = callback_socket.getsockname()
        env = dict(os.environ)
        env['_PYSPARK_DRIVER_CALLBACK_HOST'] = callback_host
        env['_PYSPARK_DRIVER_CALLBACK_PORT'] = str(callback_port)

        # Launch the Java gateway.
        # We open a pipe to stdin so that the Java gateway can die when the pipe is broken
        if not on_windows:
            # Don't send ctrl-c / SIGINT to the Java gateway:
            def preexec_func():
                signal.signal(signal.SIGINT, signal.SIG_IGN)
            proc = Popen(command, stdin=PIPE, preexec_fn=preexec_func, env=env)
        else:
            # preexec_fn not supported on Windows
            proc = Popen(command, stdin=PIPE, env=env)

        gateway_port = None
        # We use select() here in order to avoid blocking indefinitely if the subprocess dies
        # before connecting
        while gateway_port is None and proc.poll() is None:
            timeout = 1  # (seconds)
            readable, _, _ = select.select([callback_socket], [], [], timeout)
            if callback_socket in readable:
                gateway_connection = callback_socket.accept()[0]
                # Determine which ephemeral port the server started on:
                gateway_port = read_int(gateway_connection.makefile(mode="rb"))
                gateway_connection.close()
                callback_socket.close()
        if gateway_port is None:
            raise Exception("Java gateway process exited before sending the driver its port number")

        # In Windows, ensure the Java child processes do not linger after Python has exited.
        # In UNIX-based systems, the child process can kill itself on broken pipe (i.e. when
        # the parent process' stdin sends an EOF). In Windows, however, this is not possible
        # because java.lang.Process reads directly from the parent process' stdin, contending
        # with any opportunity to read an EOF from the parent. Note that this is only best
        # effort and will not take effect if the python process is violently terminated.
        if on_windows:
            # In Windows, the child process here is "spark-submit.cmd", not the JVM itself
            # (because the UNIX "exec" command is not available). This means we cannot simply
            # call proc.kill(), which kills only the "spark-submit.cmd" process but not the
            # JVMs. Instead, we use "taskkill" with the tree-kill option "/t" to terminate all
            # child processes in the tree (http://technet.microsoft.com/en-us/library/bb491009.aspx)
            def killChild():
                Popen(["cmd", "/c", "taskkill", "/f", "/t", "/pid", str(proc.pid)])
            atexit.register(killChild)

    # Connect to the gateway
    gateway = JavaGateway(GatewayClient(port=gateway_port), auto_convert=True)

    # Import the classes used by PySpark
    java_import(gateway.jvm, "org.apache.spark.SparkConf")
    java_import(gateway.jvm, "org.apache.spark.api.java.*")
    java_import(gateway.jvm, "org.apache.spark.api.python.*")
    java_import(gateway.jvm, "org.apache.spark.ml.python.*")
    java_import(gateway.jvm, "org.apache.spark.mllib.api.python.*")
    # TODO(davies): move into sql
    java_import(gateway.jvm, "org.apache.spark.sql.*")
    java_import(gateway.jvm, "org.apache.spark.sql.hive.*")
    java_import(gateway.jvm, "scala.Tuple2")

    return gateway
I am pretty new to Spark and Pyspark, hence unable to debug the issue here. I also tried to look at some other suggestions:
Spark + Python - Java gateway process exited before sending the driver its port number?
and
Pyspark: Exception: Java gateway process exited before sending the driver its port number
but unable to resolve this so far. Please help!
Here is how the spark environment looks like:
# This script loads spark-env.sh if it exists, and ensures it is only loaded once.
# spark-env.sh is loaded from SPARK_CONF_DIR if set, or within the current directory's
# conf/ subdirectory.
# Figure out where Spark is installed
if [ -z "${SPARK_HOME}" ]; then
export SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"
fi
if [ -z "$SPARK_ENV_LOADED" ]; then
export SPARK_ENV_LOADED=1
# Returns the parent of the directory this script lives in.
parent_dir="${SPARK_HOME}"
user_conf_dir="${SPARK_CONF_DIR:-"$parent_dir"/conf}"
if [ -f "${user_conf_dir}/spark-env.sh" ]; then
# Promote all variable declarations to environment (exported) variables
set -a
. "${user_conf_dir}/spark-env.sh"
set +a
fi
fi
# Setting SPARK_SCALA_VERSION if not already set.
if [ -z "$SPARK_SCALA_VERSION" ]; then
ASSEMBLY_DIR2="${SPARK_HOME}/assembly/target/scala-2.11"
ASSEMBLY_DIR1="${SPARK_HOME}/assembly/target/scala-2.10"
if [[ -d "$ASSEMBLY_DIR2" && -d "$ASSEMBLY_DIR1" ]]; then
echo -e "Presence of build for both scala versions(SCALA 2.10 and SCALA 2.11) detected." 1>&2
echo -e 'Either clean one of them or, export SPARK_SCALA_VERSION=2.11 in spark-env.sh.' 1>&2
exit 1
fi
if [ -d "$ASSEMBLY_DIR2" ]; then
export SPARK_SCALA_VERSION="2.11"
else
export SPARK_SCALA_VERSION="2.10"
fi
fi
Here is how my Spark environment is set up in Python:
import os
import sys
# NOTE: Please change the folder paths to your current setup.
#Windows
if sys.platform.startswith('win'):
    #Where you downloaded the resource bundle
    os.chdir("E:/Udemy - Spark/SparkPythonDoBigDataAnalytics-Resources")
    #Where you installed spark.
    os.environ['SPARK_HOME'] = 'E:/Udemy - Spark/Apache Spark/spark-2.0.0-bin-hadoop2.7'
#other platforms - linux/mac
else:
    os.chdir("/Users/kponnambalam/Dropbox/V2Maestros/Modules/Apache Spark/Python")
    os.environ['SPARK_HOME'] = '/users/kponnambalam/products/spark-2.0.0-bin-hadoop2.7'
os.curdir
# Create a variable for our root path
SPARK_HOME = os.environ['SPARK_HOME']
# Create a variable for our root path
SPARK_HOME = os.environ['SPARK_HOME']
#Add the following paths to the system path. Please check your installation
#to make sure that these zip files actually exist. The names might change
#as versions change.
sys.path.insert(0,os.path.join(SPARK_HOME,"python"))
sys.path.insert(0,os.path.join(SPARK_HOME,"python","lib"))
sys.path.insert(0,os.path.join(SPARK_HOME,"python","lib","pyspark.zip"))
sys.path.insert(0,os.path.join(SPARK_HOME,"python","lib","py4j-0.10.1-src.zip"))
#Initialize SparkSession and SparkContext
from pyspark.sql import SparkSession
from pyspark import SparkContext
After reading many posts I finally made Spark work on my Windows laptop. I use Anaconda Python, but I am sure this will work with the standard distribution too.
So, you need to make sure you can run Spark independently. My assumptions are that you have a valid Python path and Java installed. For Java I had "C:\ProgramData\Oracle\Java\javapath" defined in my Path, which redirects to my Java 8 bin folder.
Download pre-built Hadoop version of Spark from https://spark.apache.org/downloads.html and extract it, e.g. to C:\spark-2.2.0-bin-hadoop2.7
Create the environment variable SPARK_HOME, which you will need later for pyspark to pick up your local Spark installation.
Go to %SPARK_HOME%\bin and try to run pyspark, which is the Python Spark shell. If your environment is like mine, you will see an exception about the inability to find winutils and Hadoop. The second exception will be about missing Hive:
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
I then found and simply followed https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-tips-and-tricks-running-spark-windows.html
Specifically:
Download winutils and put it in c:\hadoop\bin. Create the HADOOP_HOME env variable and add %HADOOP_HOME%\bin to PATH.
Create directory for Hive, e.g. c:\tmp\hive and run winutils.exe chmod -R 777 C:\tmp\hive in cmd in admin mode.
Then go to %SPARK_HOME%\bin and make sure that when you run pyspark you see a nice Spark logo in ASCII:
Note that the sc Spark context variable needs to be defined already.
Well, my main purpose was to have pyspark with auto-completion in my IDE, and that's where SPARK_HOME (Step 2) comes into play. If everything is set up correctly, you should see the following lines working:
Hope that helps and you can enjoy running Spark code locally.
I had the same problem.
Luckily, I found the reason.
from pyspark.sql import SparkSession
# spark = SparkSession.builder.appName('Check Pyspark').master("local").getOrCreate()
spark = SparkSession.builder.appName('CheckPyspark').master("local").getOrCreate()
print spark.sparkContext.parallelize(range(6), 3).collect()
Notice the difference between the second line and the third line.
If the parameter after appName contains a space, like 'Check Pyspark', you will get the error (Exception: Java gateway process...).
The parameter after appName cannot contain blank spaces. Change 'Check Pyspark' to 'CheckPyspark'.
From my "guess" this is a problem with your java version. Maybe you have two different java version installed. Also it looks like you are using code that you copy and paste from somewhere for setting the SPARK_HOMEetc.. There are many simple examples how to set up Spark. Also it looks like that you are using Windows. I would suggest to take a *NIX environment to test things as this is much easier e.g. you could use brew to install Spark. Windows is not really made for this...
I had the exact same issue after playing around with my JAVA_HOME system environment variables on Windows 10 using Python 2.7: I tried to run the same configuration script for Pyspark (based on the V2-Maestros Udemy course) and got the same error message, "Java gateway process exited before sending the driver its port number".
After several attempts to fix the problem, the only solution that ended up working was to uninstall all versions of Java (I had three of them) from my machine and delete the JAVA_HOME system variable, as well as the JAVA_HOME-related entry in the PATH system variable. After that I performed a clean installation of Java JRE 1.8.0_141, reconfigured both the JAVA_HOME and PATH entries in the Windows system environment, restarted my machine, and finally got the script to work.
Hope this helps.
Spark does not work well with Java versions greater than 11; downgrade Java to version 8 or 11 and set JAVA_HOME accordingly.
I am using IntelliJ to run a sample java-jnetpcap application. I have a 64-bit JDK on the classpath and included the following dependency:
<dependency>
<groupId>jnetpcap</groupId>
<artifactId>jnetpcap</artifactId>
<version>1.4.r1425-1f</version>
</dependency>
I am running the sample class below:
import java.util.Date;

import org.jnetpcap.Pcap;
import org.jnetpcap.packet.PcapPacket;
import org.jnetpcap.packet.PcapPacketHandler;

public class PcapReaderDemo {

    private static final String filePath = "/src/main/resources/TAPcapture.pcap";

    public static void main(String[] arguments) {
        final StringBuilder errbuf = new StringBuilder();
        Pcap pcap = Pcap.openOffline(filePath, errbuf);
        if (pcap == null) {
            System.err.printf("Error while opening device for capture: "
                    + errbuf.toString());
            return;
        }
        PcapPacketHandler<String> jpacketHandler = new PcapPacketHandler<String>() {
            public void nextPacket(PcapPacket packet, String user) {
                System.out.printf("Received at %s caplen=%-4d len=%-4d %s\n",
                        new Date(packet.getCaptureHeader().timestampInMillis()),
                        packet.getCaptureHeader().caplen(), // Length actually captured
                        packet.getCaptureHeader().wirelen(), // Original length
                        user // User supplied object
                );
            }
        };
        System.out.println("Cleared");
    }
}
It is throwing the below exception:
PcapReaderDemo
Exception in thread "main" java.lang.UnsatisfiedLinkError: com.slytechs.library.NativeLibrary.dlopen(Ljava/lang/String;)J
at com.slytechs.library.NativeLibrary.dlopen(Native Method)
at com.slytechs.library.NativeLibrary.<init>(Unknown Source)
at com.slytechs.library.JNILibrary.<init>(Unknown Source)
at com.slytechs.library.JNILibrary.loadLibrary(Unknown Source)
at com.slytechs.library.JNILibrary.register(Unknown Source)
at com.slytechs.library.JNILibrary.register(Unknown Source)
at com.slytechs.library.JNILibrary.register(Unknown Source)
at org.jnetpcap.Pcap.<clinit>(Unknown Source)
at com.demo.myapexapp.PcapReaderDemo.main(PcapReaderDemo.java:20)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Please suggest where it is going wrong.
I solved the same issue this way:
using Ubuntu 16.04
installing jre-1.8.0_181 manually:
download specific java version (https://www.oracle.com/technetwork/java/javase/downloads/java-archive-javase8-2177648.html) jre-8u181-linux-x64.tar.gz
create java directory: mkdir /opt/jre
extract java: tar -zxf jre-8u181-linux-x64.tar.gz
Update used java version:
update-alternatives --install /usr/bin/java java /opt/jre/jre1.8.0_181/bin/java 100
Download, extract and copy the jnetpcap files to the lib directory:
wget -O jnetpcap-1.4.r1425 https://downloads.sourceforge.net/project/jnetpcap/jnetpcap/Latest/jnetpcap-1.4.r1425-1.linux64.x86_64.tgz
tar -xvf jnetpcap-1.4.r1425
cp jnetpcap-1.4.r1425/libjnetpcap.so /lib/
Run your program
I ran into this exception as well, and discovered that I had forgotten an installation step from RELEASE_NOTES.txt.
The library will fail to find the binaries unless they're placed in an OS default location, or Java is given some way to find them. For me, following the directions made this error go away.
It's hard to summarize it better than the source material, so I'll paste it here directly:
2) Setup native jnetpcap dynamically loadable library. This varies between
operating systems.
* On Win32 systems do only one of the following
- copy the jnetpcap.dll library file, found at root of jnetpcap's
installation directory to one of the window's system folders. This
could be \windows or \windows\system32 directory.
- add the jNetPcap's installation directory to system PATH variable. This
is the same variable used access executables and scripts.
- Tell Java VM at startup exactly where to find jnetpcap.dll by setting
a java system property 'java.library.path' such as:
c:\> java -Djava.library.path=%JNETPCAP_HOME%
- You can change working directory into the root of jnetpcap's
installation directory.
* On unix based systems, use one of the following
- add /usr/lib directory to LD_LIBRARY_PATH variable as java JRE does not
look in this directory by default
- Tell Java VM at startup exactly where to find jnetpcap.dll by setting
a java system property 'java.library.path' such as:
shell > java -Djava.library.path=$JNETPCAP_HOME
- You can change working directory into the root of jnetpcap's
installation directory.
* For further trouble shooting information, please see the following link:
(http://jnetpcap.wiki.sourceforge.net/Troubleshooting+native+library)
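Not from the release notes, but as a quick sanity check, here is a tiny sketch that prints java.library.path and tries to load the native library directly, so you can see whether the JVM can resolve libjnetpcap.so / jnetpcap.dll at all:

public class JnetpcapCheck {
    public static void main(String[] args) {
        // Directories the JVM searches for native libraries:
        System.out.println("java.library.path = " + System.getProperty("java.library.path"));
        try {
            // Resolves libjnetpcap.so on Linux or jnetpcap.dll on Windows.
            System.loadLibrary("jnetpcap");
            System.out.println("jnetpcap native library loaded successfully");
        } catch (UnsatisfiedLinkError e) {
            System.err.println("Could not load jnetpcap: " + e.getMessage());
        }
    }
}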
The solution that worked for me was to check what was missing for libjnetpcap with ldd:
$ ldd libjnetpcap.so
linux-vdso.so.1 => (0x00007ffe42706000)
libstdc++.so.6 (0x00007f12ef2ad000)
libpcap.so.0.9.4 => not found
libc.so.6 => /lib64/libc.so.6 (0x00007f12eeefe000)
libm.so.6 => /lib64/libm.so.6 (0x00007f12eec7a000)
/lib64/ld-linux-x86-64.so.2 (0x00007f12ef861000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f12eea63000)
My version of libjnetpcap.so was looking for libpcap.so.0.9.4.
So I just made a quick symlink and checked with ldd again:
$ ln -s /usr/lib64/libpcap.so /usr/lib64/libpcap.so.0.9.4
$ ldd libjnetpcap.so
linux-vdso.so.1 => (0x00007ffdaadee000)
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007eff8f974000)
libpcap.so.0.9.4 => /usr/lib64/libpcap.so.0.9.4 (0x00007eff8f734000)
libc.so.6 => /lib64/libc.so.6 (0x00007eff8f39f000)
libm.so.6 => /lib64/libm.so.6 (0x00007eff8f11b000)
/lib64/ld-linux-x86-64.so.2 (0x00007eff8ff42000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007eff8ef05000)
Then all was good for me.
In Ubuntu, I also needed this library:
sudo apt install libpcap-dev
I also copied the files as described by @NormanSp and @user977860.
1st: Place your libjnetpcap.so inside /lib64.
2nd: Make sure your Java version is 1.8.0_181 or below.