Can I merge Emma coverage data from unit and integration test targets?

We have our TeamCity builds set up using a build chain, so that our unit tests and integration tests can run in parallel when triggered by a commit:
Build Chain - dependent on:
Unit tests
Integration tests
I am looking for a way that we can combine/merge the coverage data generated by the unit and integration tests in the build chain, so that we can get a better picture of how much actual code is covered by the two combined.
The plan then is to be able to monitor changes in coverage of committed code, and perhaps failing builds if percentages fall!

I have set up the 'build chain' target so that the coverage files (*.em, *.ec) from the unit and integration targets are available to it.
I created an Ant build file specifically for the build chain target (with help from the EMMA docs!):
<project name="coverage-merge" basedir="." default="all">

    <!-- directory that contains emma.jar and emma_ant.jar: -->
    <property name="emma.dir" value="${basedir}/lib"/>
    <property name="coverage.dir" location="${basedir}/coverage"/>
    <!-- source root used by the report for line-level annotation
         (adjust to your checkout layout): -->
    <property name="src.dir" location="${basedir}/src"/>

    <path id="emma.lib">
        <pathelement location="${emma.dir}/emma-teamcity-3.1.1.jar"/>
        <pathelement location="${emma.dir}/emma_ant-2.0.5312.jar"/>
    </path>

    <taskdef resource="emma_ant.properties" classpathref="emma.lib"/>

    <target name="all" depends="-report"/>

    <target name="-report">
        <emma>
            <report sourcepath="${src.dir}" sort="+block,+name,+method,+class"
                    metrics="method:70,block:80,line:80,class:100">
                <infileset dir="${coverage.dir}" includes="**/*.em, **/*.ec"/>
                <!-- for every type of report desired, configure a nested
                     element; various report parameters can be inherited
                     from the parent <report> and individually overridden
                     for each report type: -->
                <txt outfile="${coverage.dir}/coverage.txt" depth="package"
                     columns="class,method,block,line,name"/>
                <xml outfile="${coverage.dir}/coverage.xml" depth="package"/>
                <html outfile="${coverage.dir}/coverage.html" depth="method"
                      columns="name,class,method,block,line"/>
            </report>
        </emma>
    </target>

</project>
...which merges all the coverage files into a single report!
The metrics parameter of <report> sets the highlight thresholds for the HTML report, so that the percentages against packages and files that fall below a threshold are highlighted in red.
The XML output will let me use something like andariel to run an XPath query over the results, and then force the build to fail if the thresholds are not met!
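As a sketch of that last step, the JDK's built-in XPath support is enough to check a threshold and fail the build via a non-zero exit code. The XPath below assumes the overall totals appear as <coverage type="line, %" value="82% (410/500)"/> elements under /report/data/all in EMMA's coverage.xml - verify against your own generated file, as the layout may differ:

import java.nio.file.Files;
import java.nio.file.Paths;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

// Hypothetical threshold gate for EMMA's merged coverage.xml.
public class CoverageGate {
    public static void main(String[] args) throws Exception {
        String file = args.length > 0 ? args[0] : "coverage/coverage.xml";
        int threshold = args.length > 1 ? Integer.parseInt(args[1]) : 80;

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(Files.newInputStream(Paths.get(file)));
        XPath xpath = XPathFactory.newInstance().newXPath();

        // overall line coverage, e.g. "82% (410/500)" -- assumed layout
        String value = xpath.evaluate(
                "/report/data/all/coverage[starts-with(@type,'line')]/@value", doc);
        if (value.isEmpty()) {
            System.err.println("No line-coverage element found - check the XPath");
            System.exit(2);
        }
        int percent = Integer.parseInt(value.replaceAll("%.*", "").trim());

        if (percent < threshold) {
            System.err.println("Line coverage " + percent + "% is below " + threshold + "%");
            System.exit(1); // non-zero exit can be used to fail the build step
        }
        System.out.println("Line coverage " + percent + "% meets the threshold");
    }
}

Wired into the build with Ant's <java classname="CoverageGate" failonerror="true" .../>, the non-zero exit fails the target.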

Per TeamCity's EMMA documentation:
All coverage.* files are removed at the beginning of the build, so you have to ensure that a full recompilation of sources is performed in the build to have an up-to-date coverage.em file.
What I did to work around this is below:
Use -out emma.em in the TeamCity build step config, and make sure the merge option is set to true to preserve the instrumented data.
In the last step, when the coverage report is generated, use Ant's move task to rename it back to the default name:
<move file="$YOUR_PATH/emma.em" tofile="$YOUR_PATH/coverage.em"/>
The EMMA report will then pick up the default .em file to generate the report.
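Put together as a final Ant step, the workaround might look like the sketch below (the directory properties are placeholders standing in for $YOUR_PATH above):

<!-- final build step: rename the accumulated metadata back to the default
     name, then generate the merged report from it -->
<target name="coverage-report">
    <move file="${coverage.dir}/emma.em" tofile="${coverage.dir}/coverage.em"/>
    <emma>
        <report sourcepath="${src.dir}">
            <infileset dir="${coverage.dir}" includes="*.em, *.ec"/>
            <html outfile="${coverage.dir}/coverage.html"/>
        </report>
    </emma>
</target>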
Hope this helps whoever wants to have an accumulated EMMA coverage report.

Most of the code coverage tools I've encountered do not seem to have a way to combine test results from different or overlapping subsystems. As you have pointed out, this is a very useful ability.
Our SD Test Coverage tools do have this ability and are available for Java, C, C++, C#, PHP and COBOL. In fact, the SD test coverage tools can combine test coverage data from multiple languages into a single monolithic result, so that you can get an overview of test coverage for your multi-lingual applications. They can show the coverage on all the source languages involved, as well as provide summary reports.

Related

Pitest: How to set paths correctly in different modules

I have a huge project for which I am trying out mutation testing with Pitest. The project is OSGi-based, with all the modules separated. I have this structure:
|-1.myProgramm-parent
 |-pom.xml
 |-2.myProgramm.module1
  |-pom.xml
 |-2.myProgramm.module1.Test
  |-pom.xml
 |-3.myProgramm.module2
  |-pom.xml
 |-3.myProgramm.module2.Test
  |-pom.xml
... and so on.
Now I have put all the Pitest configuration I need (taken from the official site, pitest.org) into the pom.xml of my 1.myProgramm-parent. The targetClasses and targetTests that I need to use are in the pom.xml of 2.myProgramm.module1.Test.
Pitest finds all 7 test classes and sends them to its minion process. Gathering the test descriptions is also fine, and the log reports that the coverage generator minion exited OK.
Then: created 0 mutation test units.
And a build failure is shown: no mutations found.
I have already tried all the possible configuration parameters shown on pitest.org, like targetClasses, targetTests and additionalClasspathElements.
How can I say that the test classes are in the folder 2.myProgramm.module1.Test (where I am setting targetClasses and targetTests in the pom.xml), BUT the normal Java classes to be mutated are in this package: 2.myProgramm.module1?
How can I tell it to go out of the test folder and into the folder one level up?
I also gave it the plain path to the folder with the normal Java classes, but no reaction.
Do you have an idea?
PS: It is not my program; I didn't write it. I am just working on it, to test it. I have already got 11 other programs with Maven and Gradle to mutate, but this one is such a pain in the butt! ARG!
If you are working with multi-module projects, you will need to use the pitmp plugin (https://github.com/STAMP-project/pitmp-maven-plugin).
This is because PIT itself only mutates classes that are defined in the same module as the tests, whereas pitmp executes the tests against the classes of all modules. More details are provided in the link above.
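A sketch of wiring the plugin into the parent pom - the coordinates and the mvn goal are assumptions based on the project's README, so verify them (and pin a version) against the link above:

<!-- in 1.myProgramm-parent/pom.xml; run with: mvn eu.stamp-project:pitmp-maven-plugin:run -->
<build>
    <plugins>
        <plugin>
            <groupId>eu.stamp-project</groupId>
            <artifactId>pitmp-maven-plugin</artifactId>
            <!-- pin the version listed in the pitmp README -->
            <configuration>
                <targetClasses>
                    <param>myProgramm.module1.*</param>
                </targetClasses>
                <targetTests>
                    <param>myProgramm.module1.*</param>
                </targetTests>
            </configuration>
        </plugin>
    </plugins>
</build>

pitmp accepts the same configuration parameters as PIT itself, so the targetClasses/targetTests you already wrote should carry over unchanged.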

How to exclude classes from the coverage calculation in EclEmma without actually excluding them from the coverage itself

I am using EclEmma to test the coverage of my scenario tests and use case tests on my project.
I have a Base package which contains the most general classes and the use case tests. The coverage looks like this: (screenshot of the EclEmma coverage view omitted)
What I want is to exclude the use case tests (e.g. BugReportTest) from the coverage calculation, but I do want the tests inside them to be considered. I know how to exclude an entire class from the coverage, but if I do that, my coverage % drops because the actual tests that check which lines of my code are exercised are left out. These use case tests do need to stay in the Base package because of privacy reasons.
For technical reasons it might be necessary to exclude certain classes from code coverage analysis. The following options configure the coverage agent to exclude certain classes from analysis. Except for performance optimization or technical corner cases these options are normally not required.
Excludes: A list of class names that should be excluded from execution analysis. The list entries are separated by a colon (:) and may use wildcard characters (* and ?). (Default: empty)
Exclude classloaders: A list of class loader names that should be excluded from execution analysis. The list entries are separated by a colon (:) and may use wildcard characters (* and ?). This option might be required in case of special frameworks that conflict with JaCoCo code instrumentation, in particular class loaders that do not have access to the Java runtime classes. (Default: sun.reflect.DelegatingClassLoader)
Warning: Use these options with caution! Invalid entries might break the code coverage launcher. Also do not use these options to define the scope of your analysis. Excluded classes will still show as not covered.
Resource Link:
EclEmma Code Coverage Preferences
The following examples all specify the same set of inclusion/exclusion patterns:
<filter includes="com.foo.*" excludes="com.foo.test.*, com.foo.*Test*" />
<filter includes="com.foo.*" />
<filter excludes="com.foo.test.*, com.foo.*Test*" />
<filter value="+com.foo.*, -com.foo.test.*, -com.foo.*Test*" />
<filter excludes="com.foo.*Test*" file="myfilters.txt" />
where the myfilters.txt file contains these lines:
-com.foo.test.*
+com.foo.*
Resource Link:
Coverage filters
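For an Ant-driven EMMA run, the same filter syntax plugs into the instrumentation step. A sketch, with placeholder names (run.classpath, out.dir, coverage.dir) to adapt to your build:

<emma>
    <!-- instrument production classes only; *Test* classes never enter the
         metadata, so they are excluded from the coverage denominator -->
    <instr instrpathref="run.classpath"
           destdir="${out.dir}/instr"
           metadatafile="${coverage.dir}/metadata.emma"
           merge="true">
        <filter excludes="com.foo.*Test*"/>
    </instr>
</emma>

Because the test classes themselves are simply not instrumented, they still run and still mark the production code they touch as covered; they just stop counting toward the percentage.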
To ignore code coverage for unit tests in EclEmma:
Go to Preferences -> Java -> Code Coverage and set the "Only path entries matching" option to src/main/java - seems to work nicely.

Adding extra information in JUnit reports

I am using JUnit4 and want to put some extra information to be displayed in JUnit reports.
For this, I shall be dumping the extra information to the report xml and then modify the xslt to read that extra information to generate the HTML report.
Steps so far that are working are:
Copied all the code from XMLJUnitResultFormatter into MyFormatter.java and modified the endTest() method to add the extra information as an extra attribute on the testcase XML tag.
This is really bad :( but I could not simply override it, as endTest() uses private instance variables directly, without getters/setters.
My junit ant task:
<junit fork="yes" printsummary="withOutAndErr">
    <!--<formatter type="xml"/>-->
    <formatter classname="com.some.junit.MyFormatter" extension=".xml"/>
    <test name="com.some.source.MyTestClassTest" todir="${junit.output.dir}"/>
    <classpath refid="JUnitProject.classpath"/>
</junit>
Modified the XSLT to read the extra attribute of the testcase XML tag and display it in the report.
My modified Ant task for the report:
<target name="junitreport" depends="MyTestClassTest">
    <junitreport todir="${junit.output.dir}">
        <fileset dir="${junit.output.dir}">
            <include name="TEST-*.xml"/>
        </fileset>
        <report styledir="reportstyle" format="frames" todir="${junit.output.dir}"/>
    </junitreport>
</target>
I came across TestNG and the Surefire Maven plugin as possible solutions, but I can't use them in my project.
Is there any better way than this in JUnit4?
Maybe?
The interface org.apache.tools.ant.taskdefs.optional.junit.JUnitResultFormatter is what needs to be implemented for a custom output format. It can write to any output stream, which is all the extensibility that was built into the framework. You are right, there isn't a good way to extend the capabilities of XMLJUnitResultFormatter to customize the output. A copy-paste-modify certainly isn't ideal, but it is acceptable.
Another approach might be to have more than one formatter defined in your Ant task: one the regular XML formatter, the other your custom one for the additional information. The two files could then be combined and turned into HTML using XSL transforms.
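A minimal sketch of such a second formatter (the class name and the plain-text output are made up for illustration; the callbacks are the ones JUnitResultFormatter requires):

import java.io.OutputStream;
import java.io.PrintWriter;
import junit.framework.AssertionFailedError;
import junit.framework.Test;
import org.apache.tools.ant.taskdefs.optional.junit.JUnitResultFormatter;
import org.apache.tools.ant.taskdefs.optional.junit.JUnitTest;

// Registered alongside the regular XML formatter:
// <formatter classname="com.some.junit.ExtraInfoFormatter" extension=".extra"/>
public class ExtraInfoFormatter implements JUnitResultFormatter {
    private PrintWriter out;

    public void setOutput(OutputStream stream) {
        out = new PrintWriter(stream, true);
    }

    public void startTestSuite(JUnitTest suite) {
        out.println("suite: " + suite.getName());
    }

    public void endTestSuite(JUnitTest suite) {
        out.println("tests run: " + suite.runCount());
        out.flush();
    }

    public void startTest(Test test) { }

    public void endTest(Test test) {
        // emit whatever extra per-test information should be merged
        // into the HTML report later
        out.println("finished: " + test);
    }

    public void addFailure(Test test, AssertionFailedError t) {
        out.println("FAILED: " + test + " - " + t.getMessage());
    }

    public void addError(Test test, Throwable t) {
        out.println("ERROR: " + test + " - " + t);
    }

    public void setSystemOutput(String stdout) { }

    public void setSystemError(String stderr) { }
}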
I leave it to you to decide if this is a better method than the one you had devised.

Running junit ant-task without build.xml

I'm trying to work out how I can run an Ant task without actually needing a build.xml.
In particular I want to run a JUnit task with a formatter. In XML this looks like the following:
<junit printsummary="true" errorProperty="test.failed" failureProperty="test.failed">
    <classpath refid="run.class.path" />
    <!-- console log -->
    <formatter type="xml" classname="be.x.SFFormatter" />
    <test name="be.x.SF" outfile="result" todir="${build.output.dir}" />
</junit>
It works when running the Ant script, but I would like to get my app running as a runnable jar.
Running the tests from Java was easy:
JUnitCore junit = new JUnitCore();
testResult = junit.run(SeleniumFramework.class);
However, I struggle to work out how to actually get the formatter to work.
The formatter is of type org.apache.tools.ant.taskdefs.optional.junit.JUnitResultFormatter so I doubt I can just plug it in somewhere without running ant.
Has anyone done something similar before?
Thanks!
Ant doesn't do any magic. All it does is read the XML file, create the beans specified in it and then execute the methods as per the Task API (org.apache.tools.ant.Task).
So all you need to do is the same in your code. Don't forget to create a Project :-)
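Something along these lines - a sketch only, so check the setters against the Ant version on your classpath (you will need ant.jar, ant-junit.jar and junit there too):

import org.apache.tools.ant.Project;
import org.apache.tools.ant.taskdefs.optional.junit.FormatterElement;
import org.apache.tools.ant.taskdefs.optional.junit.JUnitTask;
import org.apache.tools.ant.taskdefs.optional.junit.JUnitTest;

public class RunJUnitTask {
    public static void main(String[] args) throws Exception {
        Project project = new Project();
        project.init();

        JUnitTask junit = new JUnitTask(); // the constructor declares throws Exception
        junit.setProject(project);

        // equivalent of <formatter classname="be.x.SFFormatter" extension=".xml"/>
        FormatterElement formatter = new FormatterElement();
        formatter.setClassname("be.x.SFFormatter");
        formatter.setExtension(".xml");
        junit.addFormatter(formatter);

        // equivalent of <test name="be.x.SF" outfile="result"/>
        JUnitTest test = new JUnitTest("be.x.SF");
        test.setOutfile("result");
        junit.addTest(test);

        junit.execute();
    }
}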
You may use Ant via Groovy to avoid the XML syntax.
See => Using Ant from Groovy for details.

How do you fail a build based on the result of a single Findbugs detector?

If you are using Findbugs for compiled code inspection, is it possible to fail a build based on the result of a single detector or category of detectors?
For example, I would like to ensure that I don't have any null pointer-related detections (prefix of "NP" in this list) of any priority. Likewise, we really don't want to have any 'wait not in loop' situations. That said, I don't necessarily want to fail a build based on internationalization detections, as those aren't immediately critical to our application.
The desired end-state would be a process that we could tune for a variety of development phases ranging from the IDE level (Eclipse and Netbeans) to the release level (builds are generated using CruiseControl).
NOTE: I am aware that Eclipse and Netbeans both have similar detection methods built-in but this is a FindBugs specific question.
From the FindBugs Using the Ant Task section:
includeFilter
Optional attribute. It specifies the filename of a filter specifying which bugs are reported. See Chapter 8, Filter Files.
From Chapter 8:
However, a filter could also be used to select bug instances to specifically report:
$ findbugs -textui -include myIncludeFilter.xml myApp.jar
and
Match certain tests from all classes by specifying their abbreviations:
<Match>
    <Bug code="DE,UrF,SIC" />
</Match>
So I would assume something along the lines of:
<Match>
    <Bug code="Wa,NP" />
</Match>
in your include filter, and
<findbugs includeFilter="path/to/includefilter.xml"...
would be what you're looking for.
The path to the include filter (or exclude filter) could be a property that gets set based on the value of another property, which could default to something like dev for regular builds, test for CI builds, and deploy for deployment builds, each specifying which warnings you do or don't want to see at that stage.
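A sketch of that wiring, assuming the taskdef from findbugs-ant.jar is already in place (the property names and filter file layout here are made up; the findbugs attributes follow the FindBugs Ant task docs):

<!-- build.phase defaults to "dev"; override with -Dbuild.phase=test or =deploy -->
<property name="build.phase" value="dev"/>
<property name="findbugs.filter" value="filters/includefilter-${build.phase}.xml"/>

<findbugs home="${findbugs.home}"
          output="xml"
          outputFile="findbugs-report.xml"
          includeFilter="${findbugs.filter}"
          warningsProperty="findbugs.warnings">
    <class location="${build.dir}/myApp.jar"/>
    <auxClasspath refid="compile.classpath"/>
    <sourcePath path="${src.dir}"/>
</findbugs>

<!-- fail the build if any of the included (e.g. NP/Wa) warnings were found -->
<fail if="findbugs.warnings" message="FindBugs reported warnings matching ${findbugs.filter}"/>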
Hope that helps.
