Jenkins Maven release fails due to log files from unit tests - java

I unfortunately may have two questions in one, or rather the solution may go two different ways. I have log4j loggers set up in a few classes that I unit test. When I run mvn clean install it obviously runs those tests, which in turn creates a log file (usually empty, as nothing exciting is being logged). This isn't necessarily a problem except that Jenkins doesn't seem to like it when I do Perform Maven Release: it complains about the workspace having local changes, citing the log file, before declaring failure.
I know it's the unit tests, because if I change them to integration tests or ignore them, everything works fine. But I'd like a solution, not a workaround.
Are there configurations in Jenkins that can allow me to remedy this?
Or is there a strategy for mocking or ignoring logging for Unit tests?
I don't necessarily want to ignore them, but it is interfering with creating a release.

I am not quite aware of what Perform Maven Release does, but I can suggest a couple of solutions:
Remove the log file from your source code repository (as the log file gets regenerated on every run, it shouldn't reside in your source code repository).
Add the path of the offending log file to the list of files ignored by version control (e.g. Git uses a file called .gitignore - https://git-scm.com/docs/gitignore)
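For example, assuming the test run drops its log file somewhere under the project (the exact path depends on your log4j appender configuration), the .gitignore entry could be as simple as:
*.log
Once the file is untracked and ignored, the release plugin should no longer see it as a local change.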
Hope this helps!

Amit has some good ideas, and I'll suggest a few more:
Use slf4j for logging in your project, and don't bind a logging implementation during the test phase.
Going one better, bind slf4j-test during the test phase, which logs to memory. Then you can also write tests against your logging to ensure it happens when you expect it to.
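A sketch of what that test-phase binding could look like in the POM; the coordinates and version here are illustrative, so check the slf4j-test project for its current release:
<dependency>
    <groupId>uk.org.lidalia</groupId>
    <artifactId>slf4j-test</artifactId>
    <version>1.2.0</version>
    <scope>test</scope>
</dependency>
The test scope keeps the in-memory binding off the runtime classpath, and since it logs to memory rather than disk, no log file appears in the workspace to upset the release.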

Related

Finding @Ignore'd tests that are now passing

For a decent sized open source project where developers come and go, someone may fix a bug without realizing that someone else a while back committed a disabled unit test (à la @Ignore). We'd like to find the passing tests that are disabled so we can enable them and update the bug tracker, CC list, and anything else downstream.
What is the best way to occasionally run all @Ignore'd tests and identify the ones that are now passing? We are using Java 1.6 with JUnit4, building our project with ant and transitioning to gradle. We use Jenkins for CI.
A few ideas:
Permanently replace all of our @Ignore annotations with a conditional ignore
http://www.codeaffine.com/2013/11/18/a-junit-rule-to-conditionally-ignore-tests/
Run a custom JUnit4 class runner that changes the behavior of @Ignore (a minimal sketch follows this list).
https://stackoverflow.com/a/42520871
Temporarily comment out all @Ignore annotations so that they run. However, we'd need a way to negate the failures.
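A minimal sketch of the custom-runner idea, assuming JUnit 4.12, whose BlockJUnit4ClassRunner exposes an isIgnored hook (the runner name is made up):
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

// Runs @Ignore'd test methods as if the annotation were absent.
public class RunIgnoredRunner extends BlockJUnit4ClassRunner {

    public RunIgnoredRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected boolean isIgnored(FrameworkMethod child) {
        // The default implementation returns true for @Ignore'd methods;
        // always returning false executes them anyway.
        return false;
    }
}
Annotating a test class with @RunWith(RunIgnoredRunner.class) then runs its @Ignore'd methods. Note this covers method-level @Ignore only, since class-level @Ignore is handled earlier by JUnit's runner builders, and the resulting failures would still need to be reported without breaking the build.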
Sorry, this is not a solution, but rather another alternative that has worked for me:
My key point was to not modify the existing (thousands of) unit tests. So no broad code changes, no new annotations, and certainly nothing temporary.
What I did was override the JUnit @Ignore detection and make it conditional, via classpath prepends: check in a separate control file whether that test/class is listed or disabled. This is based on package/FQCN/method name and regexp patterns. If covered, run it even though it still has @Ignore in the unchanged original JUnit test source.
Log the outcome, amend the control file. Rinse and repeat.

Maven-surefire-plugin tests fail in Jenkins build but run successfully locally?

I have a maven project with test execution by the maven-surefire-plugin. An odd phenomenon I've observed and been dealing with is that running locally
mvn clean install
which executes my tests, results in a successful build with 0 Failures and 0 Errors.
Now when I deploy this application to our remote repo that Jenkins attempts to build, I get all sorts of random EasyMock errors, typically of the sort:
java.lang.IllegalStateException: 3 matchers expected, 4 recorded. at org.easymock.internal.ExpectedInvocation.createMissingMatchers
This is a legacy application we inherited, and we are aware that many of these tests are flawed, if not plainly using EasyMock incorrectly, but the fact remains that the same tests give me a successful build locally and a failing one in Jenkins.
I know that the order of execution of these tests is not guaranteed, but I am wondering how I can introspect what is different in the Jenkins build pipeline vs. local to help identify the issue?
Is there anything I can do to force the tests to execute the way they do locally? At this point I have simply excluded many troublesome test classes, but it seems that no matter how many times I see a Jenkins failure and either fix the problem or exclude the test class, it only complains about some other test class it didn't mention before.
Any ideas how to approach a situation like this?
I have experienced quite a similar situation, and the cause of mine was obviously some concurrency problems in the tests' implementations.
And, after reading your comment:
What I actually did that fixed it (like magic, am I right?) is for the maven-surefire-plugin, I set the property reuseForks=false and forkCount=1C, which is just 1*(number of CPUs of the machine).
... I get more convinced that you have concurrency problems in your tests. Concurrency is not easy to diagnose, especially when your experiment runs OK on one CPU. But race conditions may arise when you run it on another system (which is usually faster or slower).
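For reference, the settings from that comment would look roughly like this in the POM (the plugin version is illustrative; forkCount and reuseForks require maven-surefire-plugin 2.14 or later):
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.2</version>
    <configuration>
        <!-- up to one concurrent fork per CPU core... -->
        <forkCount>1C</forkCount>
        <!-- ...and a fresh JVM for every test class -->
        <reuseForks>false</reuseForks>
    </configuration>
</plugin>
A fresh JVM per test class masks shared state between tests rather than fixing it, which is exactly why I suspect concurrency problems.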
I strongly recommend you review your tests one by one and ensure that each of them is logically isolated:
They should not rely on an expected previous state (files, database, etc). Instead, they should prepare the proper setup before each execution.
If they concurrently modify a common resource that might interfere with another test's execution (files, database, singletons, etc), every assert must be done with as much synchronization as needed, taking into account that the initial state is unknown:
Wrong test:
// Assumes the singleton starts empty and no other test touches it,
// so the size check breaks as soon as another test adds an element.
MySingleton.getInstance().put(myObject);
assertEquals(1, MySingleton.getInstance().size());
Right test:
// Holds the lock while mutating and asserting, and only checks what this
// test itself added, so the singleton's initial state doesn't matter.
synchronized (MySingleton.getInstance())
{
    MySingleton.getInstance().put(myObject);
    assertTrue(MySingleton.getInstance().contains(myObject));
}
A good starting point for the review is to take one of the failing tests and trace the execution backwards to find the root cause of the failure.
Explicitly setting the tests' execution order is not good practice, and I wouldn't recommend it even if I knew it were possible, because it would only hide the actual cause of the problem. Bear in mind that in a real production environment, the execution order is usually not guaranteed.
JUnit does not guarantee any particular test run order.
Are the versions of Java and Maven the same on the 2 machines? If yes, make sure you're using the most recent maven-surefire-plugin version. Also, make sure to use a Freestyle Jenkins job with a Maven build step instead of the Maven project type. Using the proper Jenkins build type can either fix build problems outright or give you a better error so you can diagnose the actual issue.
You can turn on Maven debug logging to see the order the tests are run in. Each test should set up (and perhaps tear down) its own test data to make sure the tests can run independently. Perhaps seeing the test order will give you some clues as to which classes depend on others inappropriately. And - if the app uses caching, ensure the cache is cleaned out between tests (or explicitly populated, depending on what the test needs to do). Also consider running the tests one package at a time to isolate the culprits - multiple surefire plugin executions might be useful.
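One way to run a single package at a time, as suggested above, is an includes filter in the surefire configuration (the package name and pattern here are hypothetical):
<configuration>
    <includes>
        <!-- run only the tests of one suspect package -->
        <include>com/mycompany/suspectpackage/**/*Test.java</include>
    </includes>
</configuration>
Repeating the run with different packages, or setting up multiple plugin executions with different filters, narrows down which combination of classes triggers the failures.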
Also check the app for classpath problems. This answer has some suggestions for cleaning the classpath.
And another possibility: Switching to a later version of JUnit might help - unless the app is using Spring 2.5.6.x. If the app is using Spring 2.5.6.x and cannot upgrade, the highest possible version of JUnit 4.x that may be used is 4.4. Later versions of JUnit are not compatible with Spring Test 2.5.6 and may lead to hard-to-diagnose test errors.

Can I know, in a maven plugin, if a build of a module failed?

I'm thinking of developing a maven plugin which will cause your maven build to output info messages and above if the build fails.
The context is that I'd like to configure Maven to log at warn level by default and silence all of our company's logs (this will be done by logback configuration), and I'd like to have a plugin which talks to another, in-memory logback appender to get the entire log to show the user in case the build fails, since at that point all the data is relevant.
My question is if and how I can get that "notification" that the build failed?
For those interested, my intention (which I still need to validate) is then to programmatically switch the consoleAppender back to info and write everything that was accumulated to it.
I was asked about my motivations, so here they are - there are two.
The first is that I think (still crunching data to see if I'm right) that our build logs are so verbose it's affecting our build times.
The second is that some of our tests cause exceptions to be thrown as part of their execution, which clutters the logs. I'd still like the entire log in case the build fails, so that developers have all the info they need to debug the failure.
First, I don't understand your intention: why not use a continuous integration solution, which records the whole output and can store it for a period of time? If you need to analyze a build, you can look into it there. Apart from that, I don't understand the need to do what you described, or what the advantage would be...
Furthermore, a Maven plugin will simply not work for your intention, because a Maven plugin is bound to the build lifecycle.
If you really need something outside the Maven lifecycle, you could take a look at the EventSpy, which could be used the way you described, but it's an extension which must be put into the lib/ext folder of your Maven installation. Best is to use the AbstractEventSpy as parent for your own implementation.
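A minimal sketch of such a spy; the class name and the replay stub are made up, but AbstractEventSpy and ExecutionEvent are part of Maven core:
import org.apache.maven.eventspy.AbstractEventSpy;
import org.apache.maven.execution.ExecutionEvent;

// Watches build events and reacts when a project build fails.
public class FailureLogReplaySpy extends AbstractEventSpy {

    @Override
    public void onEvent(Object event) throws Exception {
        if (event instanceof ExecutionEvent) {
            ExecutionEvent exec = (ExecutionEvent) event;
            if (exec.getType() == ExecutionEvent.Type.ProjectFailed) {
                replayBufferedLog();
            }
        }
    }

    private void replayBufferedLog() {
        // Hypothetical: drain the in-memory logback appender and
        // re-emit its contents to the console at info level.
    }
}
As said above, the compiled class has to be made visible to Maven itself (e.g. in lib/ext) rather than declared as an ordinary build plugin.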

Java code coverage without instrumentation

I'm trying to figure out which tool to use to get code-coverage information for projects running in a kind of stabilization environment.
The projects are deployed as a war and run on JBoss. I need server-side coverage while running manual/automated tests against a running server.
Let's assume I cannot change the projects' build, and therefore cannot add any kind of instrumentation to their jars as part of the build process. I also don't have access to the code.
I've done some reading on various tools, and they all present techniques involving instrumenting the jars at build time (BTW - doesn't that affect production, or are two kinds of outputs generated?)
One tool though, JaCoCo, mentions an "on-the-fly instrumentation" feature. Can someone explain what that means? Can it help with my limitations?
I've also heard about code coverage using runtime profiling techniques - can someone shed light on that?
Thanks,
Ben
AFAIK "on-the-fly-instrumentation" means that the coveragetool hooks into the Classloading-Mechanism by using a special ClassLoader and edits the Class-Bytecode when it's being loaded.
The result should be the same as in "offline-instrumentation" with the JARs.
Have also a look at EMMA, which supports both mechanisms. There's also a Plugin for Eclipse.
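In JaCoCo's case, the on-the-fly mode is a Java agent attached to the target JVM, so the deployed jars stay untouched; for a JBoss server this means adding something like the following to the startup options (paths are illustrative):
JAVA_OPTS="$JAVA_OPTS -javaagent:/path/to/jacocoagent.jar=destfile=/tmp/jacoco.exec"
The agent writes the collected execution data to the given destfile, from which a report can be generated offline - which fits the constraint of not touching the build.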
A possible solution to this problem without actual code instrumentation is to use a JVM C agent. It is possible to attach agents to the JVM, and in such an agent you can intercept every method call done in your Java code without changing the bytecode.
At every intercepted method call you then write out information about the call, which can be evaluated later for code-coverage purposes.
Here you'll find the official guide to JVMTI, which defines how JVM agents can be written.
You don't need to change the build or even have access to the code to instrument the classes. Just instrument the classes found in the delivered jar, re-jar them, and redeploy the application with the instrumented jars.
Cobertura even has an ant task that does that for you: it takes a war file, instruments the classes inside the jars inside the war, and rebuilds a new war file. See https://github.com/cobertura/cobertura/wiki/Ant-Task-Reference
To answer your question about instrumenting the jars on build: yes, of course, the instrumented classes are not used in production. They're only used for the tests.

Run JUnit automatically when building Eclipse project

I want to run my unit tests automatically when I save my Eclipse project. The project is built automatically whenever I save a file, so I think this should be possible in some way.
How do I do it? Is the only option really to get an ant script and change the project build to use the ant script with targets build and compile?
Update: I will try two different approaches now:
Running an additional builder for my project that executes the ant target test (I have an ant script anyway)
ct-eclipse, recommended by Thorbjørn
For sure it is unwise to run all tests, because we may have, for example, 20,000 tests whereas our change could affect only, let's say, 50 of them, among which are tests for the class we have changed and tests for classes that collaborate with our class.
There is a useful plugin called Infinitest (http://improvingworks.com/products/infinitest/) which runs only some tests (those related to the class we've just changed) right after we save changes. It also integrates quite nicely with the editor (using annotations) and the problem view, displaying failing tests as errors.
Right-click on your project > Properties > Builders > New, and there add your Ant builder.
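For the builder to have something to call, the Ant script needs a test target; a minimal sketch using Ant's junit task (the directory and classpath reference are placeholders for whatever your real build defines):
<target name="test" depends="compile">
    <junit haltonfailure="true" fork="true">
        <classpath refid="test.classpath"/>
        <formatter type="brief" usefile="false"/>
        <batchtest>
            <fileset dir="build/test-classes" includes="**/*Test.class"/>
        </batchtest>
    </junit>
</target>
Pointing the new builder at this target makes Eclipse run the tests after its own incremental build.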
But, in my opinion, it is unwise to run the unit tests on each save.
See if Eclipse has a plugin for Infinitest.
I'd also consider TestNG as an alternative to JUnit. It has a lot of features that might be helpful in partitioning your unit test classes into shorter and longer running groups.
I believe you are looking for http://ct-eclipse.tigris.org/
I've experimented with the concept earlier, and my personal conclusion was that for this to be useful you need a lot of tests, which take time. Personally I save very frequently, so this would happen very often, and I didn't find it to be an advantage. It might be different for you.
Instead we bit the bullet and set up a "build server" which watches our CVS repository and builds projects as they change. If the compilation fails or the tests fail we are notified quickly so we can remedy it.
As always, it's a matter of taste what works for you. This is what I've found.
I would recommend Infinitest for the described situation. Infinitest is nowadays a GPL v3 licensed product. Eclipse update site: http://infinitest.github.com
Then you must use Infinitest. Infinitest helps you do Continuous Testing.
Whenever you make a change, Infinitest runs tests for you.
It selects tests intelligently, and only runs the ones you need. It reports unit test failures like compiler errors, and provides additional information that helps you write better tests.
