JaCoCo branch coverage try with resources - java

I have a method that I am trying to unit-test. I cannot post the actual code, but it looks like this:
public int getTotal() throws MyException {
    int total = 0;
    try (ExternalResource externalResource = ExternalService.getResource()) {
        try (OtherExternal otherResource = externalResource.getOtherResource()) {
            if (someCondition) {
                total = otherResource.getTotal();
            }
        }
    }
    return total;
}
JaCoCo is telling me that I am missing 4/8 branches on each of the try-with-resources blocks. I am testing both the case where someCondition is true and the case where it is false, and JaCoCo shows that block as completely covered.
I read this question, and I understand from the accepted answer that the issue is in how the bytecode is generated.
I would like to better understand how to identify the various branches that are generated, so that I can make a better judgement on whether to test them or not (are they unreachable, etc.).

Per the change history in version 0.8.2:
Branches and instructions generated by javac 11 for try-with-resources statement are filtered out
I've tested this out locally using OpenJDK Java 8, and my try-with-resources block now reports 100% branch coverage (even though the IOException is never thrown in my tests).
While it is good to test this behavior out, there are times when you can't easily reproduce such exceptions. For instance, in a method that just returns an open port:
public int getOpenPort() throws IOException {
    try (ServerSocket boundSocket = new ServerSocket(0)) {
        return boundSocket.getLocalPort();
    }
}
I know of no simple way to force this code to throw an IOException without adding a bunch of confusing and unnecessarily complicated code just to pass a branch coverage check. Luckily, the new (v0.8.2) JaCoCo library gives this method 100% coverage with a single test that just calls Assert.assertNotEquals(0, portChecker.getOpenPort());.
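For completeness, here is roughly what that single test looks like as a JUnit 4 test class. This is only a sketch: PortChecker is a hypothetical wrapper class holding the getOpenPort() method shown above.
import static org.junit.Assert.assertNotEquals;

import java.io.IOException;
import org.junit.Test;

public class PortCheckerTest {

    // PortChecker is a hypothetical class containing the getOpenPort() method shown above.
    private final PortChecker portChecker = new PortChecker();

    @Test
    public void getOpenPortReturnsNonZeroPort() throws IOException {
        // One happy-path call is enough for 100% branch coverage once JaCoCo 0.8.2+
        // filters the compiler-generated try-with-resources branches.
        assertNotEquals(0, portChecker.getOpenPort());
    }
}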

You have to test every exception and every condition. But JaCoCo sometimes fails to correctly identify what is covered and what is not.

Related

Gradle + TestNG: How to list which classes were touched by a unit test?

I'd like to list what classes are touched during the execution of each unit test. I want to discover which tests have an overly large scope and should use a smaller unit instead. Measuring the code coverage via IntelliJ or JaCoCo doesn't help me as I cannot drill down to a single test. Has anyone managed to do something similar? I found a similar question which was asked ten years ago, but never answered. I currently use Java 8 with Gradle and TestNG. I feel like I have to build some Gradle plugin or modify TestNG in some way, but I have no idea where to start.
Any help is greatly appreciated.
OK, I've discovered that OpenClover can do this. I've set up a dummy Maven project, and when I execute the tests via this command chain:
mvn clean clover:setup test clover:aggregate clover:clover
Then Clover generates a report in the "target\site\clover" subdirectory of my project. In that directory is an index.html which you can use to browse through the results. You need to navigate to a test and then to a specific test method to see the breakdown. It looks like this:
We can see that the method testCalculateBalance of BalanceCalculatorTest covers BalanceCalculator, PriorServiceBalanceProvider and TimeAccount, but not TimeAccountProvider, because it is mocked in the test. I'll paste the code snippets so you get the full picture. Bottom line: it works, but it is not very user friendly.
public class BalanceCalculatorTest {

    @Test
    public void testCalculateBalance() {
        TimeAccountProvider timeAccountProvider = Mockito.mock(TimeAccountProvider.class);
        Mockito.doReturn(Collections.singleton(new TimeAccount())).when(timeAccountProvider).getTimeAccounts();
        BalanceCalculator balanceCalculator = new BalanceCalculator(timeAccountProvider);
        Assert.assertEquals(balanceCalculator.calculateBalance(), 2);
    }
}
public class BalanceCalculator {

    private final TimeAccountProvider timeAccountProvider;

    public BalanceCalculator(TimeAccountProvider timeAccountProvider) {
        this.timeAccountProvider = timeAccountProvider;
    }

    public int calculateBalance() {
        Set<TimeAccount> timeAccounts = timeAccountProvider.getTimeAccounts();
        int sum = 0;
        for (TimeAccount timeAccount : timeAccounts) {
            Set<TimeAccountDetail> bookings = timeAccount.getTimeAccountDetails();
            sum += bookings.stream()
                    .mapToInt(TimeAccountDetail::getAmount)
                    .sum();
        }
        sum += new PriorServiceBalanceProvider().getPriorBalance();
        return sum;
    }
}

Mockito with JUnit Failing on Windows

I have just received a new project: a fresh repo clone of a Java Spring project.
When I build it with Gradle, all the dependencies are downloaded, but the build fails when the unit test task executes.
I think the problem resides in Mockito's argThat() method, which is not integrating well with JUnit. This is one of the places where the issue occurs:
Any time a unit test has this kind of logic, it fails with:
The console output is not for the above test but for a similar method with more complex logic.
The above tests still fail with the same issue.
This only happens on my machine and not on others, which run a Linux distribution (Fedora).
I think the problem is due to the dependency versions, but I have tested with different ones to no avail.
I can give you more information if needed.
Thank you.
EDIT: Code - not a screenshot
@Test
void shouldAbortEventExecutionWhenJobFails() throws JobParametersInvalidException, JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException {
    when(jobLauncher.run(eq(job1), argThat(jobParametersForPath(TEST_PATH_1)))).thenReturn(jobExecutionFailed);
    when(job1.getName()).thenReturn("job1");

    ExecutionState result = executor.execute(asList(event1, event2));

    assertThat(result).isEqualTo(ExecutionState.FAILED);
    verify(jobLauncher).run(eq(job1), argThat(jobParametersForPath(TEST_PATH_1)));
    verify(jobLauncher, never()).run(eq(job2), argThat(jobParametersForPath(TEST_PATH_1)));
    verify(jobLauncher).run(eq(job1), argThat(jobParametersForPath(TEST_PATH_2)));
    verify(jobLauncher).run(eq(job2), argThat(jobParametersForPath(TEST_PATH_2)));
    verifyNoMoreInteractions(jobLauncher);
}

private ArgumentMatcher<JobParameters> jobParametersForPath(String inputPath) {
    return jobParameters ->
            jobParameters.getParameters().get("inputFilePath").toString().equals(inputPath) &&
            jobParameters.getParameters().get("outputFilePath").toString().equals(TEST_OUTPUT_PATH + "/" + inputPath) &&
            jobParameters.getParameters().containsKey("timestamp");
}
I can't tell you the exact problem without inspecting your code or reproducing your issue, but I suspect it is related to file paths.
I can see that there is a variable called outputFilePath inside your assertion object. In Linux environments, file paths use forward slashes (/), but Windows uses backslashes (\).
[1] https://www.howtogeek.com/181774/why-windows-uses-backslashes-and-everything-else-uses-forward-slashes/
[2] https://stackoverflow.com/a/1589959/3728639
You need to debug your JUnit test and compare the actual assertion object with the expected one.
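If that turns out to be the cause, one option is to make the matcher itself separator-agnostic. Below is a sketch of a drop-in variant of the jobParametersForPath matcher from the question (it needs import java.nio.file.Paths); comparing Path objects means "/" and "\" parse to the same path on Windows, so the comparison works on both platforms.
private ArgumentMatcher<JobParameters> jobParametersForPath(String inputPath) {
    return jobParameters ->
            // Compare Path objects instead of raw strings, so the expected and actual
            // values agree no matter which separator the production code used.
            Paths.get(jobParameters.getParameters().get("inputFilePath").toString())
                    .equals(Paths.get(inputPath)) &&
            Paths.get(jobParameters.getParameters().get("outputFilePath").toString())
                    .equals(Paths.get(TEST_OUTPUT_PATH, inputPath)) &&
            jobParameters.getParameters().containsKey("timestamp");
}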

junit pass with warning [duplicate]

This question already has answers here:
Conditionally ignoring tests in JUnit 4
(5 answers)
Closed 5 years ago.
I have a JUnit integration test that runs on our build machine and calls an external server. We use a gating build, so code doesn't make it into the main branch unless 100% of the tests pass.
Occasionally, the external server goes down for a while and the test fails, but with a definitive exception that I can catch. I really don't want the test to fail the build and block code from getting in, but I also would prefer it not be marked as "passed". So I want to mark it as a warning, ignored, or an indeterminate result. This would be ideal:
@Test
public void someTest()
{
    try
    {
        // do whatever
    }
    catch (ServerDownException theException)
    {
        junit.markThisTestAsIgnored(); // <---- something like this
    }
}
Found it:
throw new AssumptionViolatedException("skipping test because xxx is down");
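A more idiomatic JUnit 4 route is to go through org.junit.Assume, which throws that exception for you. A minimal sketch follows; ServerDownException and callExternalServer() are placeholders standing in for the real code from the question.
import org.junit.Assume;
import org.junit.Test;

public class ExternalServerIT {

    @Test
    public void someTest() {
        try {
            callExternalServer(); // placeholder for the real call against the external server
        } catch (ServerDownException theException) {
            // Reported as skipped rather than passed or failed.
            Assume.assumeNoException(theException);
        }
    }
}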
One option is to set a category (a "stress" category, for example).
Check out this link:
https://github.com/junit-team/junit4/wiki/categories
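A rough sketch of the category approach is shown below; the RequiresExternalServer marker interface is just an illustrative name, not something from the question. The gating build can then exclude that category, e.g. via Surefire's excludedGroups or Gradle's excludeCategories.
import org.junit.Test;
import org.junit.experimental.categories.Category;

public class ExternalServerCategoryTest {

    // Marker interface used only for tagging; in practice it usually lives in its own file.
    public interface RequiresExternalServer {}

    @Category(RequiresExternalServer.class)
    @Test
    public void someTestThatCallsTheExternalServer() {
        // ... test body from the question
    }
}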
If you want, you can use the @Ignore annotation:
https://dzone.com/articles/allowing-junit-tests-pass-test
Regards

TestNG test passes on Linux, fails on Windows

For some reason the test below fails on Windows but passes on Linux. The test is designed to generate an exception in the code being tested; the exception is basically a file exception. The approach is to make the file unreadable in order to generate the exception. It looks like setReadable(false) has no effect on Windows.
@Test(dependsOnGroups = "expectedFlow", expectedExceptions = ParserException.class)
@Parameters("unreadableFile")
public void mineDataParserExceptionTest(String unreadableFile) throws ParserException {
    AbstractParser parser;
    File f = new File(unreadableFile);
    f.setReadable(false);
    parser = ParserFactory.getParser(ParserFactory.TYPES.SAR);
    parser.mine(fileHelper, xml);
}
You should check the return value to see if it succeeded; however, f.setReadable(false, false) is likely a better idea, since otherwise it only alters the read permission for the owner of the file.
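A rough sketch of the test with both suggestions applied, plus a runtime skip via TestNG's org.testng.SkipException when the platform ignores the permission change (as the question observed on Windows):
@Test(dependsOnGroups = "expectedFlow", expectedExceptions = ParserException.class)
@Parameters("unreadableFile")
public void mineDataParserExceptionTest(String unreadableFile) throws ParserException {
    File f = new File(unreadableFile);
    // Drop read permission for everyone (ownerOnly = false), not just the owner,
    // and check whether the change actually took effect.
    boolean madeUnreadable = f.setReadable(false, false);
    if (!madeUnreadable || f.canRead()) {
        // e.g. on Windows, where setReadable often has no effect
        throw new SkipException("Could not make " + unreadableFile + " unreadable on this platform");
    }
    AbstractParser parser = ParserFactory.getParser(ParserFactory.TYPES.SAR);
    parser.mine(fileHelper, xml);
}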

Is it possible for a JUnit test to tell if it's running in Eclipse (rather than Ant)?

I have a test that compares a large blob of expected XML with the actual XML received. If the XML is significantly different, the actual XML is written to disk for analysis and the test fails.
I would prefer to use assertEquals so that I can compare the XML more easily in Eclipse - but this could lead to very large JUnit and CruiseControl logs.
Is there a way I can change a JUnit test's behaviour depending on whether it's running through Eclipse or through Ant?
Here are two solutions.
Use system properties:
boolean isEclipse() {
    return System.getProperty("java.class.path").contains("eclipse");
}
Use the stack trace:
boolean isEclipse() {
    Throwable t = new Throwable();
    StackTraceElement[] trace = t.getStackTrace();
    return trace[trace.length - 1].getClassName().startsWith("org.eclipse");
}
Yes - you can test whether certain OSGi properties are set (for instance, System.getProperty("osgi.instance.area")). They will be empty if JUnit is started through Ant outside of Eclipse.
The "java.class.path" approach can be weak if you include some Eclipse jar in the path.
An alternative approach could be to test "sun.java.command" instead.
On my machine (openjdk-8):
sun.java.command org.eclipse.jdt.internal.junit.runner.RemoteTestRunner ...
A possible test:
boolean isEclipse() {
    return System.getProperty("sun.java.command")
            .startsWith("org.eclipse.jdt.internal.junit.runner.RemoteTestRunner");
}
Usually, the system properties are different in different environments. Try to look for a system property which is only set by Eclipse or Ant.
BTW: The output in Eclipse is the same; it's just that the Eclipse console renders the output in a more readable form.
Personally, I wouldn't worry about the size of the logs. Generally you don't need to keep them very long and disk space is cheap.
With Java 1.6+, it looks like the result of System.console() distinguishes between running from Eclipse and running from a real terminal:
boolean isRealTerminal()
{
    // Java 1.6+
    return System.console() != null;
}
