I think I'm getting memory leaks when using Jenkins to execute my unit tests. If I try to execute more than ~60 unit tests, most of them start failing with java.lang.OutOfMemoryError: PermGen space. Often, but not always, the stack trace seems to start in or near org.powermock.core.classloader.MockClassLoader, although it's not consistent. The Maven Surefire plugin configuration is pretty straightforward:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.18</version>
  <executions>
    <execution>
      <phase>test</phase>
      <configuration>
        <reuseForks>false</reuseForks>
        <argLine>-XX:PermSize=512m -XX:MaxPermSize=1024m</argLine>
      </configuration>
    </execution>
  </executions>
</plugin>
In Jenkins, MAVEN_OPTS is also set to -XX:MaxPermSize=1024m.
I saw some posts suggesting it might be related to using an older version of PowerMock, so I upgraded to 1.6.0, but I am still experiencing this error.
I can't reproduce the problem locally; it only seems to happen on the Jenkins server.
I'm not sure how to reliably resolve this: limiting the number of test cases that execute seems to work, but I have 150+ test cases, and running batches of 50 at a time on the server does not seem like a very good solution. I might be able to give it a bit more memory, but it seems like it already has enough, and Surefire doesn't need that much memory when it runs locally. There might be a way to adjust some of the other Surefire settings, but I'm not sure which ones I'd need to change, or how. Has anyone else ever seen this, or have a suggestion for how to resolve it?
This might be relevant: the development environment is IBM's RAD, and the workspace is launched with the option -Xgcpolicy:gencon, which as far as I can tell is specific to IBM's JVM implementation. Might this be the reason the unit tests run fine when I run Maven from RAD, but not from Jenkins? If so, what would be the equivalent option for the standard (Oracle) JVM that Jenkins is using?
The problem is solved. I never figured out where the memory leaks were, but I noticed that in the console, Maven would fork for Surefire but never included the arguments I passed via <argLine>. When I added the same arguments directly to the Maven command:
mvn test -DargLine="-XX:MaxPermSize=1024m -Xmx768m"
all the tests executed fine, with no OutOfMemory issues. So the <argLine> element was apparently not taking effect; in hindsight, the likely cause is that the <configuration> block was nested inside an <execution> rather than at the plugin level, so the default test run never picked it up.
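For reference, here is a minimal sketch of the same configuration moved up to the plugin level, where the default test execution should pick it up (untested in this setup; it just follows the standard Maven rule that plugin-level configuration applies to all executions):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.18</version>
  <configuration>
    <!-- plugin-level configuration applies to the default test execution -->
    <reuseForks>false</reuseForks>
    <argLine>-XX:PermSize=512m -XX:MaxPermSize=1024m</argLine>
  </configuration>
</plugin>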
This is a really weird one. I have a Kotlin web service that was originally written as a hybrid of Kotlin and Java, but I've recently migrated it to pure Kotlin (although many of its libraries are still in Java). The framework I'm using is sparkjava, and I'm using Maven to manage dependencies and packaging. In the past the service was built with manually included JAR dependencies through an IntelliJ build configuration; this was horribly messy and difficult to reproduce, so I moved all the dependencies into Maven and set up a proper process. This is where things get weird:
I included this plugin in my pom.xml to manage the creation of the fat JAR; it looks like this:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>3.1.1</version>
  <configuration>
    <archive>
      <manifest>
        <mainClass>unifessd.MainKt</mainClass>
      </manifest>
    </archive>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
  <executions>
    <execution>
      <id>make-assembly</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>
When I run this configuration, however, I get a JAR that won't execute. I didn't think this was a major problem, as running the "package" lifecycle in Maven does produce an executable JAR. The resultant JAR will happily run on my development machine (macOS Big Sur) and passes all my external testing scripts. However, when I deploy the very same JAR to my production environment, a FreeBSD server on AWS, it starts up correctly, but whenever I make a request I get the following error:
[qtp248514407-20] WARN org.eclipse.jetty.server.HttpChannel -
//<redacted.com>/moderation/users/administrators
java.lang.NoClassDefFoundError: Could not initialize class
de.mkammerer.argon2.jna.Argon2Library
at de.mkammerer.argon2.BaseArgon2.hashBytes(BaseArgon2.java:267)
at de.mkammerer.argon2.BaseArgon2.hashBytes(BaseArgon2.java:259)
at de.mkammerer.argon2.BaseArgon2.hash(BaseArgon2.java:66)
at de.mkammerer.argon2.BaseArgon2.hash(BaseArgon2.java:49)
at [...]
I've truncated the stack trace to keep things concise, but all it's doing before that is opening the appropriate DAO and hashing the password attempt. The offending class of course belongs to de.mkammerer.argon2, a dependency I use to hash passwords with the argon2 algorithm. This has me really stumped for the following reasons:
When this dependency was linked in manually using a JAR in IntelliJ, it worked absolutely fine in production.
Even though the class fails to load in production, it works fine locally despite the packages being identical.
macOS and FreeBSD aren't exactly a million miles apart in terms of how they're put together, so why are they behaving so differently?
A few other points in my efforts to debug this:
I've tried linking in my argon2 library in the old way, and it's still failing in the same fashion.
IntelliJ isn't recognising the main class of my Kotlin app any more if I try to create an artifact without Maven. This is really weird: I can set up a Kotlin build and run configuration just fine by specifying unifessd.MainKt as my main class, but when it comes to building an artifact it's simply not having it. The class doesn't appear in the artifact creation dialogue, and when I specify it as my Main-Class in MANIFEST.MF, IntelliJ tells me it's an invalid main class. What on Earth is going on here? It runs just fine when I tell Maven that's my main class and package it in a JAR, even in the faulty production environment.
Robert and dan1st were correct: the problem was that my argon2 library had a dependency on JNA and native code that was incompatible with FreeBSD. I tested the JAR on an Ubuntu server to confirm that this was the case, and the program ran correctly.
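One quick way to check for this kind of platform mismatch is to list the native binaries bundled inside the fat JAR and see which platforms they cover (a sketch; substitute the JAR name your assembly actually produces):
# "myapp" is a placeholder; use the JAR your assembly plugin produces
jar tf target/myapp-jar-with-dependencies.jar | grep -E '\.(so|dylib|dll)$'
If no FreeBSD-compatible .so shows up in the listing, JNA has nothing it can load on that server, which matches the NoClassDefFoundError seen here.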
My JUnit tests are failing when running them through Maven and the Surefire plugin (version information below). I see the error message:
Corrupted STDOUT by directly writing to native stream in forked JVM 4. See FAQ web page and the dump file C:\(...)\target\surefire-reports\2019-03-20T18-57-17_082-jvmRun4.dumpstream
The FAQ page points out some possible reasons but I don't see how I can use this information to start solving this problem:
Corrupted STDOUT by directly writing to native stream in forked JVM
If your tests use a native library which prints to STDOUT, this warning message appears because the library corrupted the channel used by the plugin to transmit test-status events back to the Maven process. It would be even worse if you overrode the Java stream with System.setOut, because the stream is also likely to be corrupted, but then Maven would never see the tests finish and the build might hang.
This warning message also appears if you use FileDescriptor.out or if the JVM prints a GC summary.
In that case the warning "Corrupted STDOUT by directly writing to native stream in forked JVM" is printed, and a dump file can be found in the Reports directory.
If debug level is enabled, messages about the corrupted stream appear in the console.
It refers to some native library printing to STDOUT directly, but how can I figure out which one? And even if I do, how do I deal with this issue if I need the library for my project?
It mentions a "debug level", but it is unclear whether this means Maven's debug level or the Surefire plugin's. I enabled Maven's debug output but I don't see the console messages mentioned by the FAQ. And Surefire's debug option seems to be about pausing tests and waiting for a debugger to attach to the process, not simply showing more information on the console.
The dump files also don't seem very helpful:
# Created on 2019-03-20T18:42:58.323
Corrupted STDOUT by directly writing to native stream in forked JVM 2. Stream 'FATAL ERROR in native method: processing of -javaagent failed'.
java.lang.IllegalArgumentException: Stream stdin corrupted. Expected comma after third character in command 'FATAL ERROR in native method: processing of -javaagent failed'.
at org.apache.maven.plugin.surefire.booterclient.output.ForkClient$OperationalData.<init>(ForkClient.java:511)
at org.apache.maven.plugin.surefire.booterclient.output.ForkClient.processLine(ForkClient.java:209)
at org.apache.maven.plugin.surefire.booterclient.output.ForkClient.consumeLine(ForkClient.java:176)
at org.apache.maven.plugin.surefire.booterclient.output.ThreadedStreamConsumer$Pumper.run(ThreadedStreamConsumer.java:88)
at java.base/java.lang.Thread.run(Thread.java:834)
So, how can I solve this problem?
Update: requested configuration information below.
I'm using OpenJDK 11 (Zulu distribution) on Windows 10, Maven 3.5.3, and Surefire 2.21.0 (full configuration below).
I'm running Maven from Eclipse using the "Run As..." context menu option on the pom.xml file, but I obtain the same results when running it from the console.
I had never heard of JaCoCo before the first comment on this question, but I see several error messages mentioning it:
[ERROR] ExecutionException The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was cmd.exe /X /C ""C:\Program Files\Zulu\zulu-11\bin\java" -javaagent:C:\\Users\\E26638\\.m2\\repository\\org\\jacoco\\org.jacoco.agent\\0.8.0\\org.jacoco.agent-0.8.0-runtime.jar=destfile=C:\\Users\\E26638\\git\\aic-expresso\\target\\jacoco.exec -Xms256m -Xmx1028m -jar C:\Users\E26638\AppData\Local\Temp\surefire10089630030045878403\surefirebooter8801585361488929382.jar C:\Users\E26638\AppData\Local\Temp\surefire10089630030045878403 2019-03-21T21-26-04_829-jvmRun12 surefire10858509118810158083tmp surefire_115439010304069944813tmp"
[ERROR] Error occurred in starting fork, check output in log
[ERROR] Process Exit Code: 1
This is the Surefire Maven plugin configuration:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.21.0</version>
  <configuration>
    <skipTests>${skipUnitTests}</skipTests>
    <testFailureIgnore>false</testFailureIgnore>
    <forkCount>1.5C</forkCount>
    <reuseForks>true</reuseForks>
    <parallel>methods</parallel>
    <threadCount>4</threadCount>
    <perCoreThreadCount>true</perCoreThreadCount>
    <reportFormat>plain</reportFormat>
    <trimStackTrace>false</trimStackTrace>
    <redirectTestOutputToFile>true</redirectTestOutputToFile>
  </configuration>
</plugin>
Ran into the same problem while migrating a project from Java 8 to Java 11; upgrading the JaCoCo plugin from 0.8.1 to 0.8.4 did the job.
Analysing the Maven dependencies to see where JaCoCo is pulled in from, and then fixing the version, should solve the issue.
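For example, something like this should show which dependency or plugin drags JaCoCo in (the includes filter syntax is from the maven-dependency-plugin):
# show every dependency path that pulls in org.jacoco artifacts
mvn dependency:tree -Dincludes=org.jacoco
Once you can see where the old version comes from, pin the newer version explicitly in your pom.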
I was running into this issue when running my JUnit tests with a custom Runner. If I made any output to System.out or System.err in my custom runner or in my test class, this exact warning would show up. In my case the problem was not caused by an older JaCoCo version; updating the Surefire plugin to version 2.22.2 or the more recent 3.0.0-M4 did not solve the issue.
According to the Jira issue SUREFIRE-1614, the problem will be fixed in the 3.0.0-M5 release of the maven-surefire-plugin (not yet released as of May 21st, 2020).
Update
The Maven Surefire plugin version 3.0.0-M5 has now been released. In your pom.xml you can do the following:
<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>3.0.0-M5</version>
  <configuration>
    <!-- Activate the use of TCP to transmit events to the plugin -->
    <forkNode implementation="org.apache.maven.plugin.surefire.extensions.SurefireForkNodeFactory"/>
  </configuration>
</plugin>
Original answer
If you cannot wait for the release of the 3.0.0-M5 plugin, you can use the "SNAPSHOT" version, which did fix the issue for me. You have to enable a specific setting so that the plugin uses TCP instead of standard output/error to receive the events raised by your tests. Configuration changes below:
In my pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  ...
  <!-- Add the repository to download the "SNAPSHOT" of maven-surefire-plugin -->
  <pluginRepositories>
    <pluginRepository>
      <id>apache.snapshots</id>
      <url>https://repository.apache.org/snapshots/</url>
    </pluginRepository>
  </pluginRepositories>
  <build>
    <pluginManagement>
      <plugins>
        ...
        <plugin>
          <artifactId>maven-surefire-plugin</artifactId>
          <!-- Use the SNAPSHOT version -->
          <version>3.0.0-SNAPSHOT</version>
          <configuration>
            <!-- Activate the use of TCP to transmit events to the plugin -->
            <forkNode implementation="org.apache.maven.plugin.surefire.extensions.SurefireForkNodeFactory"/>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
  </build>
</project>
For me, it was updating the Failsafe plugin from 2.22.0 to 2.22.2.
If you are unable to upgrade to the latest JaCoCo version, I was also able to fix this for my project by setting forkCount to 0 (note that this runs the tests inside the Maven JVM itself, so fork-only settings such as argLine no longer apply):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.22.0</version>
  <configuration>
    <forkCount>0</forkCount>
  </configuration>
</plugin>
We are using the Log4j backend and could also fix it by setting follow to true on the console appender (follow is documented as an attribute of Log4j 2's Console appender):
<Appenders>
  <Console name="Console" target="SYSTEM_OUT" follow="true">
    <PatternLayout pattern="${messagePattern}" />
  </Console>
</Appenders>
(We had the same error message but without JaCoCo being involved. I can confirm, though, that setting the forkNode in maven-surefire-plugin to TCP also worked.)
None of the listed answers helped in our case. The issue began after we upgraded from Java 8 to Java 11.
Note: updating the Surefire plugin was not possible in our case, since it broke some mechanisms the tests in our projects rely on; investigating that took too long, so we instead went looking for the root cause of the behaviour in JaCoCo.
After some debugging and JVM dumps we found the cause: we had JavaFX dependencies on the classpath which were loaded automatically by a resolver util. Loading these classes with JaCoCo enabled led to the JVM crash (without JaCoCo, encapsulated in a "coverage" profile in our case, everything ran fine). Excluding the JavaFX libraries from class loading (they were not needed in our case) fixed the issue; see the exclusion sketch below. Tests now run fine without JVM crashes.
The exact class that led to the JVM crash (or at least the last one loaded before it) was, in our case: com.sun.javafx.logging.jfr.JFRPulsePhaseEvent
Jar: javafx-base-12-win.jar
Hint: in many IDEs you can debug the Maven build with a specific profile and check what exactly is going on.
Using JaCoCo 0.8.6 and Surefire plugin 2.22.2.
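If the JavaFX artifacts were coming in transitively rather than through your own resolver code, a plain dependency exclusion might achieve the same effect. A sketch, assuming the usual org.openjfx coordinates and a hypothetical host dependency:
<dependency>
  <!-- com.example:some-library is a hypothetical dependency that drags JavaFX in -->
  <groupId>com.example</groupId>
  <artifactId>some-library</artifactId>
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <!-- keep the crash-triggering JavaFX classes off the test classpath -->
      <groupId>org.openjfx</groupId>
      <artifactId>javafx-base</artifactId>
    </exclusion>
  </exclusions>
</dependency>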
What solved it for me was upgrading the Maven Surefire plugin to 2.22.2.
For me it was upgrading org.testng to the latest version (7.3.0)
This issue happens to me too, at random.
I'm using
IntelliJ IDEA 2020 (Community Edition)
Surefire plugin (3.0.0-M5)
Maven 3.3.9
AdoptOpenJDK 11
And when it happens, Windows 10 usually shows a beautiful blue screen of death after several minutes.
Then, after a restart, everything goes back to normal.
In my case, I had moved my development to a new PC and didn't yet have all our company library dependencies in my Maven repo. So when Maven ran, one such library was missing, and I had to install it with mvn install:install-file ....
Therefore, it's important to read the latest Surefire logs, as they point to exactly these things.
No idea why the Surefire plugin doesn't just print the conflicting line to the console; the cause would then be obvious in less than a second.
We faced the same problem (OpenJDK 64-Bit Server VM Temurin-11.0.17+8 (11.0.17+8, mixed mode, sharing, tiered, compressed oops, g1 gc, linux-amd64)). We already had JaCoCo plugin version 0.8.5; a downgrade to 0.8.4 did not help. We dug deeper into the dump file, and it (i.e., the GitLab runner) spat out:
Corrupted STDOUT by directly writing to native stream in forked JVM 1. Stream '# /builds/mygroup/myproject/hs_err_pid114.log'.
In that file we discovered
Internal Error (sharedRuntime.cpp:1262), pid=114, tid=115
guarantee((retry_count++ < 100)) failed: Could not resolve to latest version of redefined method
... which led to this bug report: https://bugs.openjdk.org/browse/JDK-6776659
The issue should be fixed in Java 16; upgrading was not possible for us at this point. Luckily, the thread creator provided a workaround:
CUSTOMER SUBMITTED WORKAROUND :
-Xint or -server
As we were running a Maven Docker image in the GitLab runner, we had to set the forkCount to 0 in the Maven Surefire plugin configuration:
<configuration>
  <forkCount>0</forkCount>
</configuration>
That solved the VM crash. Unfortunately, our test coverage check (JaCoCo) then stopped working, because with forkCount set to 0 Surefire no longer creates the separate VM that carries, e.g., the test class instrumentation agent.
target/site/jacoco/jacoco.csv: No such file or directory
Finally, we had to use the argLine configuration of Surefire:
<configuration>
  <argLine>-Xint ${argLine}</argLine>
</configuration>
... and it worked.
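For context: ${argLine} here is the property that JaCoCo's prepare-agent goal populates with the -javaagent flag, so prepending -Xint keeps the coverage agent intact. A typical binding looks something like this (a sketch; your executions may differ):
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <!-- populates the argLine property with the -javaagent flag -->
        <goal>prepare-agent</goal>
      </goals>
    </execution>
  </executions>
</plugin>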
I was getting this error when running a Maven build in IntelliJ IDEA. I had a couple of projects open in separate windows and was seeing other strange errors in a different project.
Solved it by closing all the IntelliJ IDEA windows and re-opening the project. No dependency versions were changed.
The newer Surefire plugin versions are completely buggy and broken.
For me (tested all the way up to Java 12), the only solution was to stick with 2.20.
Don't use 2.20.1 either; that failed with an NPE, although maybe it is specific to particular tests. I don't have time to investigate that.
Background:
We have a rather large REST API written in Java that we're testing with a combination of unit and functional tests. There are many variations required when testing it, particularly at the functional level. While the unit tests live in-tree, the functional tests are in a separate code repository.
We are currently using JaCoCo for test coverage and TestNG for running our unit tests, though I believe answers to my question should be applicable to other tool combinations.
We have several different jobs in Jenkins that are triggered by a check-in to the primary project. These include jobs that run tools like Coverity as well as several different functional test jobs. These jobs are triggered by the initial commit, which is not considered "green" until all of the downstream jobs complete successfully.
The Problem:
How do we take coverage reports (like the JaCoCo binaries and the TestNG XML files) and combine them to show total code coverage over all of our tests? Specifically, we know how to combine them when they are present in the same job/directory, but these files are spread across multiple Jenkins jobs which may run at different times.
In my experience, the most commonly accepted way of handling this is to use the Promoted Builds Plugin to trigger all jobs, then pull their artifacts down to the triggering job when they complete. I don't feel like this scales very well, however, when you have more than one or two jobs that you're attempting to roll up. This is especially true when you may have more than one variation on the master project (old releases, etc.).
I understand it is possible to fingerprint files in Jenkins so that we know the same -.jar is used in Jobs A, B, and C. Does a plugin exist that can retrieve all files matching a pattern, based on the existence of a different fingerprinted file?
One alternative solution (which would probably be run from an Ant/Groovy script) is to push test data to a directory somewhere that is tied to a git commit hash, and retrieve all such data in a roll-up job based on the git commit hash of the base project.
Are there any simple ways to do this? Has anyone figured out any better ways to solve this problem?
Thanks,
Michael
Faced a similar issue; tweaked the JaCoCo Maven plugin config to merge the JaCoCo results. Basically, merged jacoco-unit.exec and jacoco-it.exec into one binary and published that merged result on Jenkins via a pipeline step.
pom.xml:
<plugin>
  <inherited>false</inherited>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>${jacoco.agent.version}</version>
  <executions>
    <execution>
      <id>merge-results</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>merge</goal>
      </goals>
      <configuration>
        <fileSets>
          <fileSet>
            <directory>${project.parent.build.directory}</directory>
            <includes>
              <include>jacoco-*.exec</include>
            </includes>
          </fileSet>
        </fileSets>
        <destFile>${project.parent.build.directory}/jacoco.exec</destFile>
      </configuration>
    </execution>
  </executions>
</plugin>
Jenkinsfile:
echo 'Publish Jacoco trend'
step($class: 'JacocoPublisher',
     execPattern: '**/jacoco.exec',
     classPattern: '**/classes',
     sourcePattern: '**/src/main/java')
However, you still have to fetch the JaCoCo binaries from the other Jenkins builds with another build step, or specify their locations explicitly.
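For that fetching step, one option is the Copy Artifact plugin's pipeline step, assuming the upstream job archives its .exec files as artifacts (a sketch; 'functional-tests' is a placeholder job name, and the plugin must be installed):
echo 'Fetch coverage binaries archived by the other job'
copyArtifacts(projectName: 'functional-tests',   // placeholder job name
     filter: '**/jacoco*.exec',
     selector: lastSuccessful())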
I am looking for the best way to measure code coverage for Cucumber tests (cucumber-jvm).
I found Cobertura, but I don't really know how to use and configure it to measure code coverage for acceptance tests, and I can't find anything useful on how to do that. (For the moment, I have just added the Maven plugin corresponding to Cobertura, but I don't know what configuration should go inside it.)
Do you have any ideas?
If you think I should use any other tool than Cobertura, please tell me :)
Thank you
Before you try to use Cobertura, make sure you understand what it does and whether it applies to your case. Cobertura in fact IS a tool that measures code coverage, BUT it is important to understand how it does that.
Cobertura (and jcoverage, on which it's based) calculates the percentage of code covered by tests, meaning that it actually checks which lines of code were touched. This is very different from the functional (or business-domain) test coverage described by BDD tools like the Cucumber you are using.
Saying that, to use Cobertura you have two options:
Single run
Just include it in your dependencies in pom.xml and run mvn cobertura:cobertura
Integrate into Maven lifecycle
Add the plugin to your pom.xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>cobertura-maven-plugin</artifactId>
  <version>2.6</version>
  <configuration>
    <formats>
      <format>html</format>
      <format>xml</format>
    </formats>
  </configuration>
</plugin>
and run mvn clean site-deploy to execute the plugin.
I have a Maven test project for my application.
The JUnit tests run fine, and the code coverage run works too.
But the report always shows 0% code coverage.
What should I do?
According to the official site, EclEmma is a code coverage plugin for Eclipse, based on the JaCoCo library.
As you want to use the same code coverage engine outside Eclipse, you should include the JaCoCo plugin in your project's Maven configuration (pom), as follows (this code was copied from the Agile Engineering blog):
<build>
  <plugins>
    <plugin>
      <groupId>org.jacoco</groupId>
      <artifactId>jacoco-maven-plugin</artifactId>
      <version>0.6.0.201210061924</version>
      <executions>
        <execution>
          <id>jacoco-initialize</id>
          <goals>
            <goal>prepare-agent</goal>
          </goals>
        </execution>
        <execution>
          <id>jacoco-site</id>
          <phase>test</phase>
          <goals>
            <goal>report</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
To run the tests, just type the following on the command line:
mvn clean test
With the configuration above, the HTML report should be generated under target/site/jacoco by default.
P.S.: you could also use other code coverage plugins like Cobertura or Emma.
Just in case you forgot to do these:
Are you annotating your tests with @Test?
Are you running the class as a JUnit test case or from the coverage button?
I'm not sure what the cause of the problem is, because it has always worked for me. Have you installed it from Eclipse itself? Try to uninstall it and reinstall from within Eclipse. Here's how to do it, just in case:
In Eclipse, Click "Help" > "Install new Software"
Click "Add", and type the following:
Name: EclEmma (or any name you want)
Path: http://update.eclemma.org/
Select EclEmma, and install
Now I realize that you just want to get a report using the tool inside Eclipse...
How does the code coverage look in the Eclipse dialog? Did you try right-clicking on that dialog to export the session (report), or File -> Export?
It's been a known issue for many years, and unfortunately there's no official solution for it yet; it has been reported in several places.
One not-so-pretty workaround might be to try using eCobertura instead (or downgrading EclEmma from 2.x to 1.x).
If you are using EclEmma, you need to add the JaCoCo dependency. If JaCoCo has been added and you are still facing this issue, refer to the EclEmma FAQ entry "Why does a class show as not covered although it has been executed?", which says:
First make sure execution data has been collected. For this select the Sessions link on the top right corner of the HTML report and check whether the class in question is listed. If it is listed but not linked the class at execution time is a different class file. Make sure you're using the exact same class file at runtime as for report generation. Note that some tools (e.g. EJB containers, mocking frameworks ) might modify your class files at runtime.
So Mockito / PowerMockito can cause this problem. In my case, I had added the class to @PrepareForTest(). The test case executed fine without errors, but JaCoCo didn't improve the code coverage in its report.
Finally, removing the class from the @PrepareForTest() annotation improved the code coverage. Check whether you have added it there, and remove it from the annotation if so.
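If the class genuinely has to stay in @PrepareForTest, the commonly suggested alternative is JaCoCo's offline instrumentation, which instruments the class files before PowerMock's classloader gets to them. A sketch using the documented instrument/restore goals (untested here; per the JaCoCo docs you also need the org.jacoco.agent runtime artifact on the test classpath and a jacoco-agent.destfile system property for Surefire):
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>default-instrument</id>
      <goals>
        <!-- instrument class files on disk instead of attaching the agent -->
        <goal>instrument</goal>
      </goals>
    </execution>
    <execution>
      <id>default-restore-instrumented-classes</id>
      <goals>
        <!-- put the original class files back after the tests have run -->
        <goal>restore-instrumented-classes</goal>
      </goals>
    </execution>
  </executions>
</plugin>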
I just came across this issue, and it was caused by an incorrectly configured classpath: the unit tests were executing against a compiled JAR (the actual source compiled outside of Eclipse) rather than my actual source code. After removing the JAR from my classpath, the unit tests correctly hit my package source.
I was able to resolve the issue by creating an instance of the class at the top of the test cases, i.e.:
public hotelOccupancy hotel = new hotelOccupancy();

@Test
public void testName() {
    // some test here
}
Once I did that, all my coverage began working and the issues were resolved.
I'm using EclEmma 2.3.2 and it's working perfectly in Eclipse.
I only needed to add these dependencies to my pom.xml:
<dependency>
  <groupId>org.jboss.arquillian.extension</groupId>
  <artifactId>arquillian-jacoco</artifactId>
  <version>1.0.0.Alpha6</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.jacoco</groupId>
  <artifactId>org.jacoco.core</artifactId>
  <version>0.7.1.201405082137</version>
  <scope>test</scope>
</dependency>
Then I build the project, update the Maven project configuration, and run the coverage plugin, and it works as expected.