Add VM options to a GraalVM native image - java

Hi, I upgraded an old Java 8 project to Java 11 in order to use GraalVM for building a native image. The first problem was how to add external jar files to the project, which I solved with a Maven plugin. Then, after compiling and linking the project successfully, the application failed with a JavaFX configuration error: classes were loaded from an unnamed module. I had solved this problem in the IDE using the VM options --module-path javafx-sdk-19/lib --add-modules javafx.fxml,javafx.controls,javafx.graphics, but after adding them as runtimeArgs I still had no luck running the native image.
How can I make the native image use an external JavaFX SDK like in the IDE?
Maven plugin for adding external jars
<plugin>
  <groupId>com.googlecode.addjars-maven-plugin</groupId>
  <artifactId>addjars-maven-plugin</artifactId>
  <version>1.0.2</version>
  <executions>
    <execution>
      <goals>
        <goal>add-jars</goal>
      </goals>
      <configuration>
        <resources>
          <resource>
            <directory>${basedir}/libs</directory>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
Gluonfx plugin
<plugin>
  <groupId>com.gluonhq</groupId>
  <artifactId>gluonfx-maven-plugin</artifactId>
  <version>1.0.15</version>
  <configuration>
    <mainClass>com.proj.main.Main</mainClass>
    <reflectionList>
      <list>com.proj.main.PaxUtils</list>
      <list>com.proj.JsonUtil</list>
    </reflectionList>
    <nativeImageArgs>
      <arg>+EagerJVMCI</arg>
      <arg>-Dgraal.PrintConfiguration=info</arg>
    </nativeImageArgs>
  </configuration>
</plugin>
Error log
WARNING: Unsupported JavaFX configuration: classes were loaded from 'unnamed module #7e0babb1'
Exception in Application start method
Exception in thread "main" java.lang.RuntimeException: Exception in Application start method
at com.sun.javafx.application.LauncherImpl.launchApplication1(LauncherImpl.java:901)
at com.sun.javafx.application.LauncherImpl.lambda$launchApplication$2(LauncherImpl.java:196)
at java.lang.Thread.run(Thread.java:829)
at com.oracle.svm.core.thread.PlatformThreads.threadStartRoutine(PlatformThreads.java:704)
at com.oracle.svm.core.posix.thread.PosixPlatformThreads.pthreadStartRoutine(PosixPlatformThreads.java:202)
Caused by: java.lang.ExceptionInInitializerError
at org.jboss.resteasy.core.ConstructorInjectorImpl.construct(ConstructorInjectorImpl.java:164)
at org.jboss.resteasy.spi.ResteasyProviderFactory.createProviderInstance(ResteasyProviderFactory.java:2835)
at org.jboss.resteasy.spi.ResteasyProviderFactory.addMessageBodyReader(ResteasyProviderFactory.java:1068)
at org.jboss.resteasy.spi.ResteasyProviderFactory.registerProvider(ResteasyProviderFactory.java:1841)
at org.jboss.resteasy.spi.ResteasyProviderFactory.registerProvider(ResteasyProviderFactory.java:1769)
at org.jboss.resteasy.plugins.providers.RegisterBuiltin.registerProviders(RegisterBuiltin.java:148)
at org.jboss.resteasy.plugins.providers.RegisterBuiltin.register(RegisterBuiltin.java:54)
at org.jboss.resteasy.plugins.providers.RegisterBuiltin.register(RegisterBuiltin.java:40)
at org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder.getProviderFactory(ResteasyClientBuilder.java:456)
at org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder.buildOld(ResteasyClientBuilder.java:464)
at org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder.build(ResteasyClientBuilder.java:496)
at org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder.build(ResteasyClientBuilder.java:50)
at javax.ws.rs.client.ClientBuilder.newClient(ClientBuilder.java:114)
at com.xylo.client.xpos.service.XWebServiceClient.init(XWebServiceClient.java:41)
at com.xylo.client.xpos.service.XWebServiceClient.<init>(XWebServiceClient.java:31)
at com.xylo.client.xpos.POSSettings.fillSettings(POSSettings.java:181)
at com.xylo.client.xpos.main.Main.start(Main.java:45)
at com.sun.javafx.application.LauncherImpl.lambda$launchApplication1$9(LauncherImpl.java:847)
at com.sun.javafx.application.PlatformImpl.lambda$runAndWait$12(PlatformImpl.java:484)
at com.sun.javafx.application.PlatformImpl.lambda$runLater$10(PlatformImpl.java:457)
at java.security.AccessController.executePrivileged(AccessController.java:169)
at java.security.AccessController.doPrivileged(AccessController.java:91)
at com.sun.javafx.application.PlatformImpl.lambda$runLater$11(PlatformImpl.java:456)
at com.sun.glass.ui.InvokeLaterDispatcher$Future.run(InvokeLaterDispatcher.java:96)
at com.oracle.svm.jni.JNIJavaCallWrappers.jniInvoke_VA_LIST_Runnable_run_16403f8d32adb631126daa893e5e80085c5d6325(JNIJavaCallWrappers.java:0)
at com.sun.glass.ui.gtk.GtkApplication._runLoop(GtkApplication.java)
at com.sun.glass.ui.gtk.GtkApplication.lambda$runLoop$11(GtkApplication.java:316)
... 3 more
Caused by: java.lang.IllegalArgumentException: Invalid bundle interface org.jboss.resteasy.resteasy_jaxrs.i18n.Messages (implementation not found)
at org.jboss.logging.Messages.doGetBundle(Messages.java:92)
at org.jboss.logging.Messages.getBundle(Messages.java:59)
at org.jboss.logging.Messages.getBundle(Messages.java:46)
at org.jboss.resteasy.resteasy_jaxrs.i18n.Messages.<clinit>(Messages.java:31)
... 30 more
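For what it's worth, the Gluon samples do not point native builds at an external JavaFX SDK at all: JavaFX is declared as ordinary Maven dependencies and the gluonfx plugin resolves the JavaFX modules itself at build time, so the IDE's --module-path flags have no native-image equivalent here. A minimal sketch of the dependency block, assuming JavaFX 19 artifacts from Maven Central (align the version with the SDK used in the IDE):
<dependency>
  <groupId>org.openjfx</groupId>
  <artifactId>javafx-controls</artifactId>
  <!-- version 19 is an assumption; match the javafx-sdk-19 used in the IDE -->
  <version>19</version>
</dependency>
<dependency>
  <groupId>org.openjfx</groupId>
  <artifactId>javafx-fxml</artifactId>
  <version>19</version>
</dependency>
Separately, note that the root cause at the bottom of the log (Invalid bundle interface org.jboss.resteasy.resteasy_jaxrs.i18n.Messages) is a RESTEasy i18n bundle implementation that is looked up reflectively at run time; in a native image such classes likely need to be registered for reflection (or as resource bundles) at build time, independently of the JavaFX issue.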

Related

Running a Java jar with included config via Maven on a Flink YARN cluster

I am using Flink in a Maven/Java project and need to include my configs inside the created jar.
So I have added the following to my pom file. It includes all my yml configs (located in the src/main/resources folder) in the jar; I pass the name of the config to use as an argument when executing.
<resources>
  <resource>
    <directory>src/main/resources</directory>
    <includes>
      <include>**/*.yml</include>
    </includes>
  </resource>
</resources>
<plugins>
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
      <execution>
        <phase>package</phase>
        <goals>
          <goal>shade</goal>
        </goals>
        <configuration>
          <filters>
            <filter>
              <artifact>*:*</artifact>
              <excludes>
                <exclude>META-INF/*.SF</exclude>
                <exclude>META-INF/*.DSA</exclude>
                <exclude>META-INF/*.RSA</exclude>
              </excludes>
            </filter>
          </filters>
          <finalName>${project.artifactId}-${project.version}</finalName>
          <shadedArtifactAttached>true</shadedArtifactAttached>
          <transformers>
            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
              <mainClass>com.example.MyApplication</mainClass>
            </transformer>
          </transformers>
        </configuration>
      </execution>
    </executions>
  </plugin>
</plugins>
The following main-class code receives an arg that decides which config to pick from the resources, read (using SnakeYAML) and use.
public static void main(String[] args) throws Exception {
    final ParameterTool parameterTool = ParameterTool.fromArgs(args);
    ClassLoader classLoader = MyApplication.class.getClassLoader();
    Yaml yaml = new Yaml();
    String filename = parameterTool.getRequired("configFilename");
    InputStream in = classLoader.getSystemResourceAsStream(filename);
    MyConfigClass config = yaml.loadAs(in, MyConfigClass.class);
    ...
}
mvn clean install creates "my-shaded-jar.jar", which I execute using the command
java -jar /path/to/my-shaded-jar.jar --configFilename filename
It works on multiple systems when I share the jar with others.
However, I am facing an issue when I try to run the same jar on a YARN cluster on Hadoop, using the following command:
HADOOP_CLASSPATH=`hadoop classpath` HADOOP_CONF_DIR=/etc/hadoop/conf ./flink-1.6.2/bin/flink run -m yarn-cluster -yd -yn 5 -ys 30 -yjm 10240 -ytm 10240 -yst -ynm some-job-name -yqu queue-name ./my-shaded-jar.jar --configFilename filename
I am getting the following error:
------------------------------------------------------------
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:78)
at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:120)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:238)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:216)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1053)
at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1129)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1129)
Caused by: org.yaml.snakeyaml.error.YAMLException: java.io.IOException: Stream closed
at org.yaml.snakeyaml.reader.StreamReader.update(StreamReader.java:200)
at org.yaml.snakeyaml.reader.StreamReader.<init>(StreamReader.java:60)
at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:444)
at com.example.MyApplication.main(MyApplication.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
... 13 more
Caused by: java.io.IOException: Stream closed
at java.io.PushbackInputStream.ensureOpen(PushbackInputStream.java:74)
at java.io.PushbackInputStream.read(PushbackInputStream.java:166)
at org.yaml.snakeyaml.reader.UnicodeReader.init(UnicodeReader.java:90)
at org.yaml.snakeyaml.reader.UnicodeReader.read(UnicodeReader.java:122)
at java.io.Reader.read(Reader.java:140)
at org.yaml.snakeyaml.reader.StreamReader.update(StreamReader.java:184)
Why does my solution work on any normal Linux/Mac system, while the same jar with the same args fails when run via flink run on a YARN cluster?
Is there a difference between how we generally execute jars and how YARN does it?
Any help appreciated.
Replace classLoader.getSystemResourceAsStream(filename) with classLoader.getResourceAsStream(filename).
java.lang.ClassLoader#getSystemResourceAsStream locates the resource through the system class loader, which is typically used to start the application.
java.lang.ClassLoader#getResourceAsStream will first search the parent class loader. That failing, it will invoke findResource of the current class loader.
To avoid dependency conflicts, classes in Flink applications are divided into two domains [1], which also applies to the Flink client, e.g. CliFrontend.
The Java Classpath includes the classes of Apache Flink and its core dependencies.
The Dynamic User Code includes the classes (and resources) of user jars.
So, in order to find your config file, which is packaged in your jar, you should use the user-code class loader (you can find the details of userCodeClassLoader in org.apache.flink.client.program.PackagedProgram) instead of the system class loader.
[1] https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/debugging_classloading.html
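A minimal sketch of the suggested change, as a drop-in replacement for the two lines in the question's main method (names reused from the question):
// Ask the class's own loader: it delegates to its parent first and, failing that,
// falls back to the loader that actually holds the user jar under Flink.
InputStream in = MyApplication.class.getClassLoader().getResourceAsStream(filename);
if (in == null) {
    throw new IllegalArgumentException(filename + " not found on the classpath");
}
MyConfigClass config = new Yaml().loadAs(in, MyConfigClass.class);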

Spring Boot application cannot run using cmd

I try to run my compiled jar file using java -jar jarfile.jar, but it returns the following error.
Exception in thread "main" java.lang.ClassNotFoundException: MainApplication
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:93)
at java.lang.ClassLoader.loadClass(Unknown Source)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:46)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
at org.springframework.boot.loader.PropertiesLauncher.main(PropertiesLauncher.java:593)
Why is this happening? When I run it in Spring Tool Suite it runs perfectly. It happens only when I try to run my application from the Windows command prompt (CMD).
SOLVED: the answer below is correct.
This was the mistake I had made when configuring my pom.xml in the api module of my project.
<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <version>2.1.0.RELEASE</version>
      <configuration>
        <mainClass>com.mobios.MainApplication</mainClass>
        <layout>ZIP</layout>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
The following line caused the error:
<mainClass>MainApplication</mainClass>
This defines the main class of the application, but I had given only the class name. It must be the fully qualified name, including the package. I think a lot of people make this kind of simple mistake; as a Spring Boot beginner I think it is common. The line must look like the following:
<mainClass>com.mobios.MainApplication</mainClass>
Now it works fine when building and running the jar. Note that even without the package prefix the project still runs in Eclipse or whatever development tool you are using.
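For reference, with <layout>ZIP</layout> the repackaged jar's META-INF/MANIFEST.MF ends up containing roughly the two entries below; the launcher in Main-Class is exactly the PropertiesLauncher visible at the bottom of the stack trace, and Start-Class is where the fully qualified name from <mainClass> lands:
Main-Class: org.springframework.boot.loader.PropertiesLauncher
Start-Class: com.mobios.MainApplication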
Did you add this type of Spring Boot application class?
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class FApplication {
    public static void main(String[] args) {
        SpringApplication.run(FApplication.class, args);
    }
}
Maybe your application cannot find the Java manifest in your jar file.
Open a command prompt and go to where your pom.xml is,
run mvn clean install,
and once you get the message that the build was successful,
go to the target folder with cd target,
then run the command java -jar <file-name>.jar.
If you are still getting the error message at that point, some Spring config is wrong;
create a new Spring Boot application from https://start.spring.io/

How to use JBoss Tattletale to analyze duplicate jars/APIs used in the classpath

In my project more than 225 jar files are used, which is causing memory issues. While searching the net I came to know that JBoss Tattletale will analyze the application and report the duplicate classes and jars/APIs it uses on the classpath. So I referred to the following links:
1) how to use JBoss Tattletale tool
2) Uncover JBoss client jar list with Tattletale
3) JBoss official documentation
but I did not find how to execute and run the Tattletale jar file, and since my application is not based on Maven, I am not using Maven.
I downloaded the tattletale-1.2.0.Beta2.jar file along with the jboss-seam-2.3.0.CR1-dist file and used the following command:
java -Xmx512m -jar tattletale.jar /Java/workspaces/mycoolprojects/projectX output-projectx
but I am getting the following exception:
Exception in thread "main" java.lang.NoClassDefFoundError: javassist/NotFoundException
at org.jboss.tattletale.analyzers.Analyzer.getScanner(Analyzer.java:49)
at org.jboss.tattletale.Main.execute(Main.java:608)
at org.jboss.tattletale.Main.main(Main.java:1099)
Caused by: java.lang.ClassNotFoundException: javassist.NotFoundException
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 3 more
Moreover, I did not understand what the jboss-seam-2.3.0.CR1-dist file is for. I can see a lot of jar files and a lot of code in there, but I don't know how it helps with using Tattletale.
The official documentation also mentions jboss-tattletale.properties; how can I set/use that?
I was having the same problem, and this solution (downloading the latest javassist jar) worked for me too.
Interestingly, Tattletale itself suggests that the tattletale jar contains the javassist jar.
The steps below worked for me:
download jboss-javassist-javassist-rel_3_22_0_cr1-2-g6a9079a.zip from http://jboss-javassist.github.io/javassist/
extract it to a location
go to that location and copy javassist.jar
go to the location where your tattletale-1.2.0.Beta2.jar is
paste javassist.jar there
open a command prompt at this path
run the command java -jar tattletale-1.2.0.Beta2.jar path_to_application_archive output_path
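If dropping javassist.jar next to the Tattletale jar does not get picked up on your setup, an explicit classpath invocation should also work; the main class name below is taken from the stack trace in the question (use ; instead of : as the separator on Windows):
java -cp tattletale-1.2.0.Beta2.jar:javassist.jar org.jboss.tattletale.Main path_to_application_archive output_path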
I inherited an old Maven project configured to use this plugin and got the same javassist errors. The plugin dependencies may be adjusted as shown to make the errors stop.
<plugin>
  <groupId>org.jboss.tattletale</groupId>
  <artifactId>tattletale-maven</artifactId>
  <version>1.2.0.Beta2</version>
  <executions>
    <execution>
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- This is the location which will be scanned for generating tattletale reports -->
    <source>${project.build.directory}/${project.artifactId}/WEB-INF/lib</source>
    <!-- This is where the reports will be generated -->
    <destination>${project.build.directory}/site/tattletale</destination>
  </configuration>
  <dependencies>
    <dependency>
      <groupId>org.javassist</groupId>
      <artifactId>javassist</artifactId>
      <version>3.27.0-GA</version>
    </dependency>
  </dependencies>
</plugin>

Running an app jar file with spark-submit on a Google Dataproc cluster instance

I'm running a .jar file that contains all the dependencies I need packaged in it. One of these dependencies is com.google.common.util.concurrent.RateLimiter, and I have already checked that its class file is in this .jar.
Unfortunately, when I run spark-submit on the master node of my Google Dataproc cluster instance, I get this error:
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Stopwatch.createStarted()Lcom/google/common/base/Stopwatch;
at com.google.common.util.concurrent.RateLimiter$SleepingStopwatch$1.<init>(RateLimiter.java:417)
at com.google.common.util.concurrent.RateLimiter$SleepingStopwatch.createFromSystemTimer(RateLimiter.java:416)
at com.google.common.util.concurrent.RateLimiter.create(RateLimiter.java:130)
at LabeledAddressDatasetBuilder.publishLabeledAddressesFromBlockstem(LabeledAddressDatasetBuilder.java:60)
at LabeledAddressDatasetBuilder.main(LabeledAddressDatasetBuilder.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
It seems as though my dependencies are being overridden. I already decompiled the Stopwatch.class file from this .jar and checked that the method is there. It only happens when I run on that Google Dataproc instance.
I did a grep on the process executing the spark-submit and the -cp flag looked like this:
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /usr/lib/spark/conf/:/usr/lib/spark/lib/spark-assembly-1.5.0-hadoop2.7.1.jar:/usr/lib/spark/lib/datanucleus-api-jdo-3.2.6.jar:/usr/lib/spark/lib/datanucleus-rdbms-3.2.9.jar:/usr/lib/spark/lib/datanucleus-core-3.2.10.jar:/etc/hadoop/conf/:/etc/hadoop/conf/:/usr/lib/hadoop/lib/native/:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/*
Is there anything I can do to solve this problem?
Thank you.
As you've found, Dataproc includes Hadoop dependencies on the classpath when invoking Spark. This is done primarily so that using Hadoop input formats, file systems, etc., is fairly straightforward. The downside is that you end up with Hadoop's Guava, which is version 11.0.2 (see HADOOP-10101).
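A quick way to confirm which copy of Guava wins at run time is to print where the conflicting class was loaded from; a minimal sketch in plain Java:
// Prints the jar that actually supplies Guava's Stopwatch at run time; on Dataproc
// this typically points into the Hadoop/Spark lib directory, not the shaded app jar.
System.out.println(com.google.common.base.Stopwatch.class
        .getProtectionDomain().getCodeSource().getLocation());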
How to work around this depends on your build system. If you're using Maven, the maven-shade-plugin can be used to relocate your version of Guava under a new package name. An example of this can be seen in the GCS Hadoop Connector's packaging, but the crux of it is the following plugin declaration in the build section of your pom.xml:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>your.repackaged.deps.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
Similar relocations can be accomplished with the sbt-assembly plugin for sbt, jarjar for ant, and either jarjar or shadow for gradle.

Onejar and resource loading

I have a Maven project which I would like to package as an executable jar.
It's using quite a few dependencies, like Spring and so on.
It was suggested in a few posts to use OneJar to avoid a lot of headaches.
This is what I have currently in my pom.xml:
<plugin>
  <groupId>org.dstovall</groupId>
  <artifactId>onejar-maven-plugin</artifactId>
  <version>1.4.4</version>
  <executions>
    <execution>
      <configuration>
        <mainClass>com.cool.project.Application</mainClass>
        <onejarVersion>0.97</onejarVersion>
        <attachToBuild>true</attachToBuild>
        <classifier>coolproject</classifier>
      </configuration>
      <goals>
        <goal>one-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>
In my Spring configuration, one of the classes needs to pass the path of a resource (src/main/resources/coolfile.bin) to a method of an external library (jsch):
String resource = ConfigurationClass.class.getClassLoader().getResource("coolfile.bin").getFile();
jsch.addIdentity(resource);
When I run Application.java from the IDE (Eclipse), the entire application loads successfully.
However, when I run mvn clean install, the onejar jar is built under the target folder, but when I try to run it with java -jar coolproject.one-jar.jar, the following error is displayed:
...
Caused by: java.io.FileNotFoundException: file:/target/coolproject.one-jar.jar!/main/coolproject.jar!/coolfile.bin (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:97)
at com.jcraft.jsch.IdentityFile.newInstance(IdentityFile.java:83)
If I inspect coolproject.one-jar.jar, I can find the coolproject.jar under the main folder, and if I inspect that, I can see coolfile.bin in its root.
So in theory the resource should be found? What am I missing?
Turns out that FileInputStream would not find the path specified by resource: inside the One-JAR archive, getResource(...).getFile() yields a nested-jar path (note the !/ segments in the error above), which is not a plain file-system path.
Luckily, jsch provides another method where you can pass the byte array of the file rather than its location:
jsch.addIdentity("coolfile.bin", toByteArray(ConfigurationClass.class.getResourceAsStream("/coolfile.bin")), null, null);
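toByteArray is not part of jsch; a minimal helper for it, assuming Java 8 (on Java 9+, InputStream.readAllBytes() does the same job):
private static byte[] toByteArray(InputStream in) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[8192];
    int read;
    // Copy the stream chunk by chunk until end of stream.
    while ((read = in.read(buffer)) != -1) {
        out.write(buffer, 0, read);
    }
    return out.toByteArray();
}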
