Does Jenkins swallow MojoFailureException? - java

I have configured the Maven Surefire Plugin with this parameter:
<configuration>
<forkedProcessTimeoutInSeconds>60</forkedProcessTimeoutInSeconds>
</configuration>
So when a test runs for more than 60 seconds, the Surefire plugin interrupts it.
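For reference, the relevant part of my POM looks roughly like this (the plugin version shown is only an example):
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <!-- version shown is only an example -->
      <version>2.12.4</version>
      <configuration>
        <!-- kill forked test JVMs that run longer than 60 seconds -->
        <forkedProcessTimeoutInSeconds>60</forkedProcessTimeoutInSeconds>
      </configuration>
    </plugin>
  </plugins>
</build>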
Everything works perfectly on my local machine when I use mvn test or mvn install, but when I build the project on Jenkins it just swallows the exception, writes [ERROR] There was a timeout or other error in the fork to the log, and continues the build. As a result I get a Finished: SUCCESS message.
Question: Has anyone run into this problem? Does anyone know of a solution?

One important difference between the default options of a local Maven build and a Jenkins Maven job is that locally the maven.test.failure.ignore option of the Maven Surefire Plugin is set to false (reasonably), so that test failures also fail the build.
From official documentation:
Set this to "true" to ignore a failure during testing. Its use is NOT RECOMMENDED, but quite convenient on occasion.
However, a Jenkins Maven job always runs with the same option set to true, making the Maven build successful even with failing tests and turning the status of the Jenkins job to UNSTABLE (rather than SUCCESSFUL or FAILED, which may indeed be a point of debate).
This behavior is also documented in an official Jenkins issue ticket.
Following the Jenkins terminology, when (Surefire or Failsafe) tests fail, the Jenkins build status is expected to be UNSTABLE:
<< A build is unstable if it was built successfully and one or more publishers report it unstable. For example if the JUnit publisher is configured and a test fails then the build will be marked unstable. >>
So, in a Maven Jenkins job, if a test fails:
Maven build is SUCCESSFUL
Jenkins build is UNSTABLE
Instead, in a freestyle Jenkins job executing Maven, if a test fails:
Maven build is FAILED
Jenkins build is FAILED
Possible solutions:
Change the build to a freestyle Jenkins job running Maven (which may be too much work, though), or
Add the -Dmaven.test.failure.ignore=false option to your build (however, you would no longer have UNSTABLE builds); see the sketch below.
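For reference, a minimal sketch of the POM-level equivalent (testFailureIgnore is the Surefire parameter behind maven.test.failure.ignore; the plugin version shown is only an example):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <!-- version shown is only an example -->
  <version>2.12.4</version>
  <configuration>
    <!-- equivalent of -Dmaven.test.failure.ignore=false: let test failures fail the Maven build -->
    <testFailureIgnore>false</testFailureIgnore>
  </configuration>
</plugin>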


Maven rebuild failure of tests, build does not complete

I have a large project that I am working on. I recently checked out our evolution branch, did a git pull and tried to deploy the app locally. It doesn't seem to recognize some libraries or jars in one Java class, so errors stop me from running it. Basically, the import statements in that class go unrecognized.
Turns out I forgot to rebuild maven. When I run mvn clean install from the command prompt, the build fails (even when I do mvn clean install -fn) because there are tests that fail. I don't often work with Maven, or the command line, but here is my full stack trace when I run mvn clean install -e:
I'm running my project in the IntelliJ environment.
When I ran mvn clean install -fn, 'talent-app' was successful, but talent-core still failed and I still got
[INFO] BUILD FAILURE
Please let me know if you have any input, I appreciate it!
I'm not sure I understand your question correctly.
Basically, regarding your first paragraph, you said you had library issues but that after a clean rebuild - of your project, I suppose not of Maven itself - everything is fine?
Regarding the rest of your post, your build is failing because of a failing test case. This is shown by the line:
talent-core ............................... FAILURE
and by the output:
Failed to execute goal [...]. There are test failures.
If you go into the target/surefire-reports folder, you will find some files containing the output and error traces of each test, including the one that failed.
By scrolling up in your terminal, you should also be able to see which test was failing for talent-core.
From there, in order of preference:
either look at the test reports mentioned in the output, attempt to figure out why the test is failing, and fix either the test or the code;
or skip the tests by adding -DskipTests to the command line. But you really shouldn't skip your tests.
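If you do need to skip them temporarily and would rather not type the flag every time, here is a minimal sketch of the POM-level equivalent (the plugin version shown is only an example):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <!-- version shown is only an example -->
  <version>2.12.4</version>
  <configuration>
    <!-- same effect as -DskipTests on the command line -->
    <skipTests>true</skipTests>
  </configuration>
</plugin>
As said above, though, this should only ever be a stopgap; remove it again once the failing test is fixed.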

How to fail Jenkins build if no tests were run?

We are running TestNG tests using Gradle on Jenkins.
Job configuration:
Build section -> Invoke Gradle Script -> Use Gradle Wrapper -> Tasks:
clean test -Dgroups=myTestNGTestGroupName
In the Jenkins Console Output I can see the logs from the execution of gradlew.bat with specific parameters (one of them is -Dgroups=myTestNGTestGroupName).
We have quite a lot of Jenkins jobs and automated Selenium tests, so on a daily basis we only check the failed jobs.
While refactoring tests, the TestNG group name may be changed or a typo may occur. If you change the test group name in the test repository and forget to update the Jenkins job, 0 tests are executed and the job still passes (the build is successful).
How can I tell Jenkins to mark the build as non-successful if no tests were executed?
TestNG generates a testng-results.xml file after every test run (even if 0 tests were executed), so we can analyze this file. The simplest solution I found is the Text-finder Plugin (which in my case was already installed on Jenkins).
I added Jenkins Text Finder in Post-build Actions, pointing it at the generated testng-results.xml and marking the build UNSTABLE when the expression matching total="0" is found.
Here is how it looks in the Jenkins Console Output log:
BUILD SUCCESSFUL
Total time: 42.105 secs
Build step 'Invoke Gradle script' changed build result to SUCCESS
Archiving artifacts
Checking <testng-results skipped="0" failed="0" total="0" passed="0">
c:\jenkins\workspace\my-job-name\build\reports\tests\testng-results.xml:
<testng-results skipped="0" failed="0" total="0" passed="0">
Build step 'Jenkins Text Finder' changed build result to UNSTABLE
...
Finished: UNSTABLE

Running cucumber-jvm in Jenkins

After some research into running Cucumber on Jenkins I have come to a dead end. I have read some posts here about running Cucumber, but most are about errors, not the process.
I can run it via the command line; the problem is that I don't know how to call this in Jenkins after building.
I have Jenkins running on an Ubuntu server. Everything for building a Maven project is set up, but how would one run the RunCukes file, or set up the POM file in a way that calls Cucumber to start running?
Wire up a Maven task to run Cucumber. As Cucumber generates stubs for JUnit, Maven's Surefire plugin will run the tests nicely.
Jenkins has full support for running Maven builds, so you won't have any issues there.
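As a rough sketch, the POM only needs the Cucumber JUnit dependencies and, if your runner class does not match Surefire's default naming pattern, an explicit include (the info.cukes coordinates and version below are the old cucumber-jvm ones and are only an example):
<dependencies>
  <dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-java</artifactId>
    <version>1.2.5</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-junit</artifactId>
    <version>1.2.5</version>
    <scope>test</scope>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <includes>
          <!-- pick up the runner; alternatively name it RunCukesTest so the default includes match -->
          <include>**/RunCukes.java</include>
        </includes>
      </configuration>
    </plugin>
  </plugins>
</build>
The RunCukes class itself is just an empty JUnit class annotated with @RunWith(Cucumber.class), so mvn test (and therefore the Jenkins Maven build) runs the scenarios like any other unit test.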

Why does sonar:sonar need mvn install first?

The official documentation at http://docs.sonarqube.org/display/SONAR/Analyzing+with+Maven says that the proper way of invoking Sonar is:
mvn clean install -DskipTests=true
mvn sonar:sonar
but it doesn't say why. How does Sonar work? Does it need compiled classes? Then why not just mvn clean compile? Or does it need a jar file? Then why not just mvn clean package? What exactly does the Sonar plugin do?
Explanation from a SonarSource team member:
In a multi-module build, an aggregator plugin can't resolve dependencies from the target folder. So you have two options:
mvn clean install && mvn sonar:sonar as two separate processes
mvn clean package sonar:sonar as a single reactor
I was surprised too, so I tweeted about it and received the following answer from the official Maven account:
If the plugin is not designed to use the target/classes folder as a substitute, then yes you would need to have installed to get the jar when running *in a different session*. Complain to the plugin author if they force you to use install without foo reason [ed - #connolly_s]
The SonarQube analyzer indeed needs compiled classes (e.g. for FindBugs rules and coverage). And since by default it executes the tests itself, the preliminary build can skip them.
You can run SonarQube as part of a single Maven command if you meet some requirements:
As Mithfindel mentions, some SonarQube plugins need to analyze .class files. And if you run unit tests outside of SonarQube, then of course the testing plugins must read output from the test phase.
Got integration tests? Then you need to run after the integration-test phase.
If you want to run SonarQube as a true quality gate then you absolutely must run it before the deploy phase.
One solution is to just attach SonarQube to run after the package phase. Then you can get a full build with a simple clean install or clean deploy. Most people do not do this because SonarQube is time-consuming, but the incremental mode added in 4.0 and greatly improved in the upcoming 4.2 solves this.
As far as the official documentation goes, it's a lot easier to say "build and then run sonar:sonar" than it is to say "open your POM, add a build element for the sonar-maven-plugin, attach it to verify, etc".
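For completeness, a sketch of that POM change could look like this (the Codehaus coordinates and version are only an example; newer SonarQube releases use different ones, so check the docs for your version):
<build>
  <plugins>
    <plugin>
      <!-- coordinates/version are only an example -->
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>sonar-maven-plugin</artifactId>
      <version>2.7.1</version>
      <executions>
        <execution>
          <!-- run the analysis as part of a plain clean install / clean deploy -->
          <phase>verify</phase>
          <goals>
            <goal>sonar</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>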
One caveat: SonarQube requires Java 6, so if you're building against JDK 1.5 (still common in large organizations), the analysis will have to happen in a separate Maven invocation with a newer JDK selected. We solved this issue with a custom Maven build wrapper.

Strategy for maven deploy via a Jenkins job

I have a Jenkins job that uses the Maven build goals 'clean package deploy' for the master git branch. However, because the Nexus repo does not allow redeploys, if the Jenkins job runs a second time without the version changing, it fails with the expected 400 Bad Request error:
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal
org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy)
on project common-library:
Failed to deploy artifacts: Could not transfer artifact
net.bacon.common:common-library:pom:1.2.13 from/to bacon-releases
(https://maven.bacon.com/nexus/content/repositories/releases):
Failed to transfer file:
https://maven.bacon.com/nexus/content/repositories/releases/net/bacon/common/common-library/1.2.13/common-library-1.2.13.pom.
Return code is: 400, ReasonPhrase:Bad Request.
Can anyone suggest a different strategy, whereby the deploy goal can run without making the Jenkins job fail?
What we do is automatic snapshot builds; the version is then incremented automatically.
For release builds, we use the Maven Release Plugin and enter the version manually. You can, however, let the release plugin do the work: it will remove the "-SNAPSHOT", build, deploy, and then, for the next development version, increment the last digit and append "-SNAPSHOT" again.
For the distribution management, you can have two repos, one for snapshots and one for releases, with different redeploy settings.
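A minimal distributionManagement sketch for that setup (the repository ids and URLs are made up for illustration):
<distributionManagement>
  <!-- releases repo: typically configured in Nexus to disallow redeploys -->
  <repository>
    <id>example-releases</id>
    <url>https://nexus.example.com/content/repositories/releases</url>
  </repository>
  <!-- snapshots repo: redeploys allowed, each build publishes a new timestamped snapshot -->
  <snapshotRepository>
    <id>example-snapshots</id>
    <url>https://nexus.example.com/content/repositories/snapshots</url>
  </snapshotRepository>
</distributionManagement>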
We apply a "double action" solution:
Increment version
Run mvn install
Run tests
If all passed, we run mvn deploy
This way we do not try to deploy before we know everything passed, and we have a unique version deployed every time.
I hope this helps.
You should make sure that each commit on master carries its own version number in the POM file. That way you won't have redeploys.
There is a good reason for rejecting the "redeploys": The content of a released version should never change.
If you can't avoid multiple commits with the same version number on master, consider changing the chained Jenkins job to "clean install" (which stores the artifacts only in the local repository) and creating a new job with "clean deploy" that is only started manually.
This is an issue for our group as well.
We want Maven to attempt a passive deploy: if the artifact already exists in Nexus, it should acceptably move on with SUCCESS (already deployed); if it does not exist in Nexus, it should upload and deploy with SUCCESS.
We want Jenkins to deploy after it builds and passes the coverage check, but how do you make it so that only un-deployed artifacts get deployed and already-deployed ones are ignored?
Our solution was a custom script.
You can use the release candidate concept. When you start the release, you add -RC1 to the version (1.1.0-RC1, for example).
With each subsequent redeploy you increment the RC number. When the release is finished and you want to create a new tag, you simply drop the RC suffix from the version before creating the tag.
