I want to run Selenium tests from TeamCity using Maven on a Linux server without a display.
While running Selenium tests I'm getting the following error in TeamCity:
Failed to execute goal org.codehaus.mojo:selenium-maven-plugin:2.3:xvfb (xvfb) on project my-project:
It appears that the configured display is already in use: :1
I installed x11-fonts*, xvfb, and firefox, exported DISPLAY=localhost:1, and started xvfb.
In pom.xml I added the following plugin:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>selenium-maven-plugin</artifactId>
<version>2.3</version>
<executions>
<execution>
<id>xvfb</id>
<phase>pre-integration-test</phase>
<goals>
<goal>xvfb</goal>
</goals>
<configuration>
<display>:1</display>
</configuration>
</execution>
<execution>
<id>selenium</id>
<phase>pre-integration-test</phase>
<goals>
<goal>start-server</goal>
</goals>
<configuration>
<background>true</background>
</configuration>
</execution>
</executions>
</plugin>
Do you have any idea how to fix this problem?
UPD: xvfb is started with the command
Xvfb :1 -screen 0 1920x1200x24 > /dev/null 2>&1 &
UPD: Earlier I tried not running xvfb before the tests, but then I was getting:
Execution xvfb of goal org.codehaus.mojo:selenium-maven-plugin:2.3:xvfb failed: Execute failed: java.io.IOException: Cannot run program "xauth": java.io.IOException: error=2, No such file or directory
The error message tells you there is already an X server running on display number 1. From what you say:
I installed x11-fonts*, xvfb, and firefox, exported DISPLAY=localhost:1, and started xvfb ... I added the following plugin
it seems that you started a server, and then the plugin tried to start it once more (as it should). I'd try not starting xvfb beforehand (ensure it doesn't run).
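On most Linux systems, a quick way to check whether something is still holding display :1 is to look for the X server's lock file or a running Xvfb process:
ls /tmp/.X1-lock
pgrep -l Xvfb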
Or get rid of the display number in the plugin configuration altogether; the plugin will then try to find a free display number on its own. It won't use your xvfb instance, though.
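For illustration, here is the xvfb execution from your POM with the <display> configuration simply dropped (a sketch, same plugin and version as above):
<execution>
<id>xvfb</id>
<phase>pre-integration-test</phase>
<goals>
<goal>xvfb</goal>
</goals>
<!-- no <display> configured: the plugin picks a free display number itself -->
</execution>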
I removed the plugin declaration from pom.xml (as far as I could tell, it targets a previous version of Selenium), installed xauth (not sure that was necessary), and everything began to work.
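For reference, installing xauth is a one-liner, though the package name depends on the distribution (these are the two common cases):
sudo apt-get install xauth # Debian/Ubuntu
sudo yum install xorg-x11-xauth # RHEL/CentOS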
Related
I am facing this issue while building a Maven project:
[INFO] Cloning git@bitbucket.org:bookkeeper/bookkeeper-openapi3-hosted-ui-configuration.git:refs/tags/v5.0.0 into /Users/shubhamjain/bookkeeper/de/bookkeeper-ui-service/service/bookkeeper-api/bookkeeper-platform-hosted-ui-configuration-5.0.0
[ERROR] Failed to fetch bookkeeper-platform-hosted-ui-configuration API (5.0.0):
org.eclipse.jgit.api.errors.TransportException: git@bitbucket.org:bookkeeper/bookkeeper-openapi3-hosted-ui-configuration.git: Auth fail
at org.eclipse.jgit.api.FetchCommand.call (FetchCommand.java:222)
----
----
Caused by: org.eclipse.jgit.errors.TransportException: git@bitbucket.org:bookkeeper/bookkeeper-openapi3-hosted-ui-configuration.git: Auth fail
at org.eclipse.jgit.transport.JschConfigSessionFactory.getSession (JschConfigSessionFactory.java:162)
at org.eclipse.jgit.transport.SshTransport.getSession (SshTransport.java:107)
Initially I had issues accessing the mentioned repository, but then I got the required permissions.
In the pom.xml
-----
-----
<plugin>
<groupId>com.bookkeeper</groupId>
<artifactId>bookkeeper-api-maven-plugin</artifactId>
<executions>
<execution>
<id>generate-hosted-ui-configuration-openapi-hook</id>
<goals>
<goal>generate-client</goal>
</goals>
<configuration>
<apiName>bookkeeper-platform-hosted-ui-configuration</apiName>
<apiVersion>5.0.0</apiVersion>
<repositorySpecPath>bookkeeper-platform-hosted-ui-configuration/application-hosted-ui-configuration.yaml</repositorySpecPath>
<gitUrlTemplate>git@bitbucket.org:bookkeeper/bookkeeper-openapi3-hosted-ui-configuration.git</gitUrlTemplate>
<openapiConfigurationOverrides>
<isExperimental>false</isExperimental>
<apiName>hosted-ui-configuration</apiName>
</openapiConfigurationOverrides>
</configuration>
</execution>
</executions>
</plugin>
-----
-----
What could be missing? Thanks in advance.
The issue was that Maven couldn't find the ssh binary path in the environment. Setting the GIT_SSH environment variable worked:
export GIT_SSH=${SSH_BINARY_PATH}
In my case:
export GIT_SSH=/usr/bin/ssh
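You can locate the binary with which ssh, and verify that key-based authentication to Bitbucket actually works before re-running the build:
ssh -T git@bitbucket.org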
I have a multi-module Spring Boot project. Before the integration tests of my app run, I start another child module (a stub, which is also a Spring Boot app). You can see it is attached to the pre-integration-test phase, and it is finally working fine.
Parent Pom
|
|----myRealApp module(spring boot app)
|----stub module(This is also a spring-boot app)
My question is: is there a way to randomize and share this port (instead of fixing it to 8090), so that concurrent builds on the Jenkins server can run the tests without failing because the address is already in use?
I know I can generate random numbers/ports in a Spring properties file, but I couldn't find a way to pass the value to the POM.
application-test.properties of myRealApp:
stub.port=8090
stub.url=http://localhost:${stub.port}/stub/api/v1/domains/
Pom of myRealApp:
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>${spring.boot.mainclass}</mainClass>
</configuration>
<executions>
<execution>
<id>start-stub</id>
<configuration>
<arguments>
<argument>--server.port=8090</argument>
</arguments>
<mainClass>io.swagger.Stub</mainClass>
<classesDirectory>../my-stub/target/classes</classesDirectory>
</configuration>
<goals>
<goal>start</goal>
</goals>
<phase>pre-integration-test</phase>
</execution>
<execution>
<goals>
<goal>build-info</goal>
</goals>
</execution>
</executions>
</plugin>
You can do that via the Jenkins Port Allocator Plugin. Once you've assigned the port (let's say HTTP_PORT), you can pass it on the command line:
-Dstub.port=$HTTP_PORT
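For the Jenkins-assigned value to reach the stub, the hard-coded port in the POM above needs to reference the property instead; here is a sketch of just the changed fragment (assuming the property is named stub.port as in the question; Maven interpolates -D properties into the POM):
<arguments>
<argument>--server.port=${stub.port}</argument>
</arguments>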
I recommend not randomizing at all. My suggestion is to parametrize the server port in the POM and application-test.properties files, and set a value based on some Jenkins-provided variable: for example, BUILD_NUMBER, which is incremented on every build, so uniqueness is guaranteed.
However, there is a problem with this: you also need to keep the port number within valid bounds. TCP ports must be between 1024 and 65535, but BUILD_NUMBER is not bounded at all.
How to cope with this? I think a simple Ant task bound to the initialize phase could read the BUILD_NUMBER value, apply the simple formula 1024 + (BUILD_NUMBER % 64512), and set the result as the definitive port number property, which is the one you reference in the POM and application-test.properties files.
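A rough sketch of such a setup with the maven-antrun-plugin (untested; it assumes a JavaScript engine is available to Ant's <script> task, and the execution id and property name are illustrative):
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.8</version>
<executions>
<execution>
<id>compute-stub-port</id>
<phase>initialize</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<exportAntProperties>true</exportAntProperties>
<target>
<property environment="env"/>
<script language="javascript">
// keep the Jenkins build number inside the valid TCP port range
var build = parseInt(project.getProperty("env.BUILD_NUMBER"));
project.setProperty("stub.port", "" + (1024 + (build % 64512)));
</script>
</target>
</configuration>
</execution>
</executions>
</plugin>
Thanks to exportAntProperties, the stub.port property is then visible to the rest of the build, so it can be referenced from the POM and filtered into application-test.properties.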
I have a Spring-MVC app that uses AngularJS for the front-end and Java in the backend. The java code is in src/main/java and the UI code is in src/main/resources/static. I'm building a fat jar using Maven.
Running locally = everything works.
I can also run the jar from the command line and everything works.
When I deploy to Heroku, the app returns a 404 on / ... it seems like it can't find the UI code anywhere.
I have an identical app with a different (less fancy) AngularJS UI, and it deploys to Heroku without any issues. The only real difference is that its UI code lives directly in src/main/resources/static, while my custom app uses gulp, and gulp builds the UI code into src/main/resources/static/dist. My Maven POM moves that /dist to target/classes/static when I run the package job, and that's working fine. After mvn clean package I can run my app through IntelliJ or at the command line using java -jar target/blah.jar. But when I push it to Heroku I get an application error, and the Heroku log cites a 404 on path="/".
Note my starting point for these projects was the Stormpath examples for spring-boot-web-angular. The stock example deploys fine with the same Procfile, so the only difference is the /dist folder that my custom UI code uses - but Maven should be taking care of that.
My Procfile contains:
web: java $JAVA_OPTS -Dserver.port=$PORT -jar target/*.jar
Pom excerpt that copies the UI code to the right spot in target/:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-resources-plugin</artifactId>
<version>3.0.1</version>
<executions>
<execution>
<id>copy-resources</id>
<phase>validate</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${basedir}/target/classes/static</outputDirectory>
<resources>
<resource>
<directory>src/main/resources/static/dist</directory>
<filtering>false</filtering>
</resource>
</resources>
</configuration>
</execution>
</executions>
</plugin>
I've been googling for days and reached out to Heroku support, but they said they can't help.
I can't tell if the Maven piece isn't getting picked up when I git push heroku master (after packaging locally), or if I'm missing a config option or something in my Procfile.
Would very much appreciate a pointer in the right direction.
This post on javapapers.com shows how to run a JMH benchmark in Maven by typing mvn exec:exec. Running JMH within Maven is pretty handy, since you can easily run it from an Eclipse Run Configuration or even in a Maven phase.
However, there are two problems with this setup:
When you kill Maven, JMH will continue running in the background, as exec:exec starts it in a separate VM.
Usually, JMH will start yet another VM to run the benchmarks, so you will end up with at least 3 VMs running at the same time.
Fortunately, the Exec Maven Plugin comes with a second goal, exec:java, which executes a main class directly in the VM Maven runs. However, when I tried to configure Maven to run JMH using exec:java, the benchmark crashes because of missing classes:
# JMH 1.11.3 (released 40 days ago)
# VM version: Error: Could not find or load main class org.openjdk.jmh.runner.VersionMain
# VM invoker: C:\Program Files\Java\jdk1.7.0\jre\bin\java.exe
[...]
# Run progress: 0.00% complete, ETA 00:02:40
# Fork: 1 of 1
Error: Could not find or load main class org.openjdk.jmh.runner.ForkedMain
<forked VM failed with exit code 1>
Here is the relevant part of the pom.xml:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.4.0</version>
<configuration>
<mainClass>my.Benchmark</mainClass>
</configuration>
</plugin>
And here is how I run JMH from my.Benchmark:
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public static void main(String[] args) throws RunnerException {
    Options options = new OptionsBuilder().include(my.Benchmark.class.getSimpleName())
            .forks(1).build();
    new Runner(options).run();
}
I realize that JMH uses the java.class.path system property to determine the classpath for the forked VMs and that this property does not contain Maven's project dependencies. But what is the preferred way to deal with this?
While my previous answer requires modifying the benchmark program, here is a POM-only solution that sets the java.class.path system property to the runtime classpath with the help of the Dependency Plugin:
<plugin>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>build-classpath</id>
<goals>
<goal>build-classpath</goal>
</goals>
<configuration>
<includeScope>runtime</includeScope>
<outputProperty>depClasspath</outputProperty>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<configuration>
<mainClass>my.Benchmark</mainClass>
<systemProperties>
<systemProperty>
<key>java.class.path</key>
<value>${project.build.outputDirectory}${path.separator}${depClasspath}</value>
</systemProperty>
</systemProperties>
</configuration>
</plugin>
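Note that invoking exec:java on its own won't run build-classpath; invoking a lifecycle phase first will, since build-classpath binds to the generate-sources phase by default:
mvn compile exec:java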
One way to work around this problem is to extract the "effective" classpath from the class loader of the my.Benchmark class before calling JMH from my main method:
// Works only while the application class loader is a URLClassLoader (true up to Java 8)
URLClassLoader classLoader = (URLClassLoader) my.Benchmark.class.getClassLoader();
StringBuilder classpath = new StringBuilder();
for (URL url : classLoader.getURLs())
    classpath.append(url.getPath()).append(File.pathSeparator);
// JMH reads java.class.path when it assembles the command line for its forked VMs
System.setProperty("java.class.path", classpath.toString());
This seems to work, but it feels a lot like a hack that shouldn't be necessary...
I'm running a .jar file that contains all the dependencies I need packaged in it. One of these dependencies is com.google.common.util.concurrent.RateLimiter, and I already checked that its class file is in this .jar file.
Unfortunately, when I run spark-submit on the master node of my Google Dataproc cluster instance, I'm getting this error:
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Stopwatch.createStarted()Lcom/google/common/base/Stopwatch;
at com.google.common.util.concurrent.RateLimiter$SleepingStopwatch$1.<init>(RateLimiter.java:417)
at com.google.common.util.concurrent.RateLimiter$SleepingStopwatch.createFromSystemTimer(RateLimiter.java:416)
at com.google.common.util.concurrent.RateLimiter.create(RateLimiter.java:130)
at LabeledAddressDatasetBuilder.publishLabeledAddressesFromBlockstem(LabeledAddressDatasetBuilder.java:60)
at LabeledAddressDatasetBuilder.main(LabeledAddressDatasetBuilder.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
It seems something is overriding my dependencies. I already decompiled the Stopwatch.class file from this .jar and checked that the method is there. This only happens when I run on the Google Dataproc instance.
I grepped the process executing spark-submit and got the -cp flag, like this:
/usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /usr/lib/spark/conf/:/usr/lib/spark/lib/spark-assembly-1.5.0-hadoop2.7.1.jar:/usr/lib/spark/lib/datanucleus-api-jdo-3.2.6.jar:/usr/lib/spark/lib/datanucleus-rdbms-3.2.9.jar:/usr/lib/spark/lib/datanucleus-core-3.2.10.jar:/etc/hadoop/conf/:/etc/hadoop/conf/:/usr/lib/hadoop/lib/native/:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/*
Is there anything I can do to solve this problem?
Thank you.
As you've found, Dataproc includes Hadoop dependencies on the classpath when invoking Spark. This is done primarily so that using Hadoop input formats, file systems, etc. is fairly straightforward. The downside is that you end up with Hadoop's Guava version, which is 11.0.2 (see HADOOP-10101).
How to work around this depends on your build system. If using Maven, the maven-shade plugin can be used to relocate your version of guava under a new package name. An example of this can be seen in the GCS Hadoop Connector's packaging, but the crux of it is the following plugin declaration in your pom.xml build section:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.3</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<relocations>
<relocation>
<pattern>com.google.common</pattern>
<shadedPattern>your.repackaged.deps.com.google.common</shadedPattern>
</relocation>
</relocations>
</configuration>
</execution>
</executions>
</plugin>
Similar relocations can be accomplished with the sbt-assembly plugin for sbt, jarjar for ant, and either jarjar or shadow for gradle.
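Whichever tool you use, it's worth confirming that the relocation actually took effect by listing the shaded jar and checking that the Guava classes now sit under the new package (the jar name here is a placeholder):
jar tf your-app-fat.jar | grep your/repackaged/deps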