Suppose I have a project with lots of TODOs: some unintentionally left there, some no longer relevant, some representing possible future features, etc.
I know that most IDEs can show/filter/sort them, but I'm looking for a way to enforce a stricter policy.
What I'm looking for is a Maven plugin that I can bind to the test phase which looks for TODOs of a specific format (for example //TODO-Ver ...) and, if any are found, generates a test failure (which would then be visible via Hudson; emails would be sent, alarms would go off, heads would roll, etc.).
This extra execution would be bound to the test phase under some profile that would only be activated close to the end of the dev cycle, or something along those lines.
My question is: has anyone done anything like this before?
Which code inspection tools can be tailored to look for TODOs by regex, and which Maven plugin can be used to run said inspection tools? Is it possible to do this from a unit test?
Any comments/ideas/suggestions would be welcome.
Checkstyle can do that (see the TodoComment check), and you could use the Maven Checkstyle Plugin and its checkstyle:check goal to check the code and fail the build in case of violations (usually during the verify phase).
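For illustration, here is a minimal sketch of what that might look like; the //TODO-Ver pattern comes from the question, and plugin versions are omitted (check the Checkstyle and Maven Checkstyle Plugin documentation for your versions). First the Checkstyle configuration:

<?xml version="1.0"?>
<!DOCTYPE module PUBLIC "-//Puppy Crawl//DTD Check Configuration 1.3//EN"
    "http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
<module name="Checker">
  <module name="TreeWalker">
    <!-- flag comments matching the regular expression below -->
    <module name="TodoComment">
      <property name="format" value="TODO-Ver"/>
      <property name="severity" value="error"/>
    </module>
  </module>
</module>

and the corresponding plugin execution in the POM (placed under the profile you activate near the end of the cycle):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <configuration>
    <configLocation>checkstyle.xml</configLocation>
  </configuration>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>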
The Checkstyle plugin has already been pointed out, so I'll introduce the Taglist Maven plugin, which looks for TODO and FIXME tags in the source code and can produce a report of the usages of all such tags. Of course, it is customizable, so you can put in your own tags to search for; regexes also seem to be supported.
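For example, a report configuration along these lines might work; this uses the tagListOptions format from the 2.4 line of the plugin (older versions used a simpler tags list), and the tag values are just examples:

<reporting>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>taglist-maven-plugin</artifactId>
      <configuration>
        <tagListOptions>
          <tagClasses>
            <tagClass>
              <displayName>Open work</displayName>
              <tags>
                <tag>
                  <!-- match by regular expression -->
                  <matchString>TODO-Ver.*</matchString>
                  <matchType>regEx</matchType>
                </tag>
                <tag>
                  <matchString>FIXME</matchString>
                  <matchType>exact</matchType>
                </tag>
              </tags>
            </tagClass>
          </tagClasses>
        </tagListOptions>
      </configuration>
    </plugin>
  </plugins>
</reporting>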
I'm thinking of developing a Maven plugin which will cause your Maven build to output INFO messages and above if the build fails.
The context is that I'd like to configure Maven to log at WARN level by default and to disable all of my company's logging (this will be done via Logback configuration). I'd then like a plugin which talks to a separate in-memory Logback appender and dumps the entire log to the user if the build fails, since at that point all the data is relevant.
My question is whether, and how, I can get that "notification" that the build has failed.
For those interested, my intention (which I still need to validate) is then to programmatically switch the console appender back to INFO and write everything that was accumulated to it.
I was asked about my motivation, so here are my two reasons.
The first is that I think (I'm still crunching data to see if I'm right) that our build logs are so verbose they are affecting our build times.
The second is that some of our tests deliberately cause exceptions to be thrown as part of their execution, which clutters the logs. I'd still like the entire log in case the build fails, so that developers have all the information they need to debug their failure.
First, I don't understand your intention: why not use a continuous integration solution, which records the whole output and can store it for a period of time? If you need to analyze something, you can take a look there. Apart from that, I don't understand your need to do what you described, or what the advantage would be...
Furthermore, a Maven plugin will simply not work for your purpose, because a Maven plugin is bound to the lifecycle.
If you really need something outside the Maven lifecycle, you could take a look at EventSpy, which could be used in the way you described, but it is an extension that must be put into the lib/ext folder of your Maven installation. It is best to use AbstractEventSpy as the parent for your own implementation.
We have a massive project with almost no unit tests at all. I would like to ensure from now on that developers do not commit new features (or bugs!) without minimal coverage from corresponding unit tests.
What are some ways to enforce this?
We use many tools, so perhaps I can use a plugin (JIRA, GreenHopper, FishEye, Sonar, Hudson). I was also thinking of a Subversion pre-commit hook, the Commit Acceptance Plugin for JIRA, or something equivalent.
Thoughts?
Sonar (a wonderful tool, by the way) together with the Build Breaker plugin can break your Hudson build when some metrics don't meet specified rules. You can set up a rule in Sonar that triggers an alert (and eventually causes the build to fail) when coverage is below a given threshold. The only drawback is that you probably want the coverage to grow, so you must remember to raise the alert level to the current value every day.
What you want to do is determine what is new code, and verify that the new code is covered by some test.
Determining code coverage in general can be accomplished with any of a variety of test coverage tools. Many test coverage tools can simply reinstrument your entire application and then you can run tests to determine coverage.
Our (Semantic Designs') line of test coverage tools can determine, from a changed-file list, just the individual files that need to be re-instrumented and, with careful test organization, just the tests that need to be re-executed. This will minimize the cost of re-running your tests, and you'll still end up with the same overall coverage data. (Actually, these tools detect which tests need to be re-run based on changes at the method level.)
Once you have test coverage data, what you want to know is whether the specifically new code is covered by some tests. You can do this sloppily with just test coverage data if you know which files changed, by insisting that the changed files have 100% coverage. That probably doesn't work in practice.
You could instead take advantage of SD's Smart Differencer tools to give a more precise answer. These tools compare two language files, and indicate where the changes are using the language syntax (e.g., expression, statement, declaration, method body, not just changed source lines) and conceptual editing operations (move, copy, delete, insert, rename-identifier-within-block). SmartDifferencer deltas tend to be both smaller and finer than what you would get from a plain diff tool.
It is easy to extract from the SmartDifferencer's output a list of changed lines. One could compute the intersection of that, per file, with the lines covered by the test coverage data. If the changed lines do not all fall within the set of covered lines, then "new" code hasn't been tested, and you can raise a flag, stop a check-in, or whatever, to signal that your check-in policy has been violated.
The TestCoverage and SmartDifferencer tools don't come out-of-the-box with this computation done for you, but it should be a pretty easy script to implement.
If you use Maven, the Cobertura plugin can be a good choice (and not as annoying for developers as an SVN hook):
http://mojo.codehaus.org/cobertura-maven-plugin/usage.html
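Roughly, the check configuration looks like this; the thresholds are example numbers, and the check goal (bound to the verify phase by default) fails the build when they are not met:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>cobertura-maven-plugin</artifactId>
  <configuration>
    <check>
      <haltOnFailure>true</haltOnFailure>
      <!-- example thresholds, in percent -->
      <totalLineRate>70</totalLineRate>
      <totalBranchRate>60</totalBranchRate>
    </check>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>clean</goal>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>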
I want to run my unit tests automatically when I save my Eclipse project. The project is built automatically whenever I save a file, so I think this should be possible in some way.
How do I do it? Is the only option really to write an Ant script and change the project build to use that script, with build and compile targets?
Update: I will now try two different approaches:
Running an additional builder for my project that executes the Ant target test (I have an Ant script anyway)
ct-eclipse, recommended by Thorbjørn
It is surely unwise to run all the tests, because we may have, for example, 20,000 tests, whereas our change could affect only, let's say, 50 of them, among which are tests for the class we have changed and tests for classes that collaborate with it.
There is a useful plugin called Infinitest (http://improvingworks.com/products/infinitest/) which runs only some tests (those related to the class we've changed) just after we save our changes. It also integrates quite nicely with the editor (using annotations) and the Problems view, displaying failing tests like errors.
Right-click on your project > Properties > Builders > New, and there add your Ant builder.
But, in my opinion, it is unwise to run the unit tests on each save.
See if Eclipse has a plugin for Infinitest.
I'd also consider TestNG as an alternative to JUnit. It has a lot of features that might be helpful in partitioning your unit test classes into shorter and longer running groups.
I believe you are looking for http://ct-eclipse.tigris.org/
I experimented with the concept earlier, and my personal conclusion was that for this to be useful you need a lot of tests, which take time. Personally I save very frequently, so the tests would run frequently, and I didn't find it to be an advantage. It might be different for you.
Instead we bit the bullet and set up a "build server" which watches our CVS repository and builds projects as they change. If the compilation fails or the tests fail we are notified quickly so we can remedy it.
It is as always a matter of taste what works for you. This is what I've found.
I would recommend Infinitest for the described situation. Infinitest is nowadays a GPL v3 licensed product. Eclipse update site: http://infinitest.github.com
Then you should use Infinitest. Infinitest helps you do continuous testing.
Whenever you make a change, Infinitest runs tests for you.
It selects tests intelligently, and only runs the ones you need. It reports unit test failures like compiler errors, and provides additional information that helps you write better tests.
I am using Hudson CI to manage a straight Java web project, using Ant to build.
I would like to mandate that the unit test coverage never be worse than in the previous build, thereby making sure any new code is always tested, or at least that the coverage is continually improving.
Is there a Hudson plugin that works this way?
Edit: I am currently using Emma, but would be willing to switch to another coverage tool.
Also, as a clarification, I've seen the thresholds in some Hudson plugins, but that's not exactly what I'm after. For example, what I'd like is that if coverage for build #12 was 46% overall, and someone checked in build #13 with 45% coverage, the build would break.
The reason I want to do this is that I have a codebase with low test coverage. We don't have time to go back and retroactively write unit tests, but I'd like to make sure that the coverage keeps getting better.
UPDATE: Dan pointed out an edge case with my plan that will definitely be a problem. I think I need to rethink whether this is even a good idea.
Yes. Which coverage tool are you using?
The Cobertura plugin for Hudson definitely supports this. On the project configuration screen you can specify thresholds.
Alternatively, you can make Ant fail the build (rather than Hudson), by using the cobertura-check task.
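Something along these lines, assuming the Cobertura Ant tasks have been defined from its tasks.properties (with a cobertura.classpath path already set up) and the instrumented tests have already run; the thresholds are example values:

<taskdef classpathref="cobertura.classpath" resource="tasks.properties"/>

<target name="coverage-check" depends="test">
  <!-- fails the build if overall coverage is below the given percentages -->
  <cobertura-check totallinerate="70" totalbranchrate="60"/>
</target>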
EDIT: I'm not sure you can do precisely what you are asking for. Even if you could, it could prove problematic. For example, assume you have an average coverage of 75% but for one class you have coverage of 80%. If you remove that 80% class and all of its tests, you reduce the overall coverage percentage even though none of the other code is any less tested than previously.
This is kind of a hack, but we use it for similar reasons with FindBugs and Checkstyle. You can set up an Ant task to do the following (this could be split out into multiple tasks, but I'm combining them for brevity):
Run tests with coverage
Parse the coverage results and get the coverage percentage
Read tmp/lastCoverage.txt from the last build (see step 5)
Compare the current coverage percentage with the percentage read from lastCoverage.txt
If percentage DIDN'T decrease, write the new percentage over the contents of tmp/lastCoverage.txt
If percentage DID decrease, keep the original file and echo "COVERAGE FAILURE" (with ant's echo task).
Note that steps 2 through 5 don't necessarily need to be done with native Ant tasks - you could use something like Ant's java task to run a Java program to do this for you.
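As an illustration only, steps 3 to 6 could be sketched in Ant roughly as below. It assumes a coverage.current property was already parsed out of the coverage report in step 2, and it uses the javascript script task for the numeric comparison (which needs a JS engine on the JVM; recent JDKs bundle one).

<target name="coverage-compare">
  <!-- step 3: read the percentage recorded by the previous build (defaults to 0 on the first run) -->
  <loadfile property="coverage.last" srcFile="tmp/lastCoverage.txt" failonerror="false"/>
  <property name="coverage.last" value="0"/>
  <!-- step 4: compare the current percentage with the previous one -->
  <script language="javascript"><![CDATA[
    var current  = parseFloat(project.getProperty("coverage.current"));
    var previous = parseFloat(project.getProperty("coverage.last"));
    if (current < previous) {
      project.setProperty("coverage.dropped", "true");
    }
  ]]></script>
</target>

<target name="coverage-record" depends="coverage-compare" unless="coverage.dropped">
  <!-- step 5: coverage didn't decrease, so record the new percentage for the next build -->
  <echo file="tmp/lastCoverage.txt" message="${coverage.current}"/>
</target>

<target name="coverage-gate" depends="coverage-compare,coverage-record" if="coverage.dropped">
  <!-- step 6: keep the old file and emit the marker the text-finder plugin looks for -->
  <echo message="COVERAGE FAILURE"/>
</target>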
Then, configure Hudson:
Under "Source code management", make sure "Use Update" is checked. This will allow your lastCoverage.txt file to be retained between builds. Note that this could be problematic if you really, really need things to be cleaned between builds.
Use the Hudson Text Finder plugin with a regular expression to search for "COVERAGE FAILURE" in the build output (make sure that "Also search console output" is checked for the plugin). The text finder plugin can mark the build unstable.
You can obviously replace things like the file name/path and console output to whatever fits within the context of your build.
As I mentioned above, this is rather hacky, but it's probably one of the few (only?) ways to get Hudson to compare things in the previous build to the current build.
Another approach would be to use the Sonar plugin for Hudson to track coverage trends over time and make it easier to assimilate and analyze the results. It will also show coverage in the context of other measures, such as Checkstyle and PMD.
Atlassian's Clover supports what you want. Have a look at the clover-check Ant task, specifically the historyDir attribute.
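Very roughly, the Ant side of that might look like the following; this is only a sketch based on the attributes named above, so check the Clover documentation for the exact task and attribute semantics in your version:

<!-- after a successful build, record a coverage snapshot -->
<clover-historypoint historyDir="clover/history"/>

<!-- fail the build if coverage has regressed relative to the recorded history -->
<clover-check historyDir="clover/history" haltOnFailure="true"/>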
In our infrastructure, we have lots of little Java projects built with Maven 2. Each project has its own pom.xml that ultimately inherits from our one company "master" parent POM.
We've recently started adding small profiles to our parent POM, disabled by default, each of which, when enabled, executes a single plugin in a conventional manner.
Examples:
The 'sources' profile executes the maven-source-plugin to create the jar of project sources (a sketch of this profile is shown after the list).
The 'clover' profile executes the maven-clover2-plugin to generate the Clover report. It also embeds our Clover license file so it need not be re-specified in child projects.
The 'fitnesse' profile executes the fitnesse-maven-plugin to run the fitnesse tests associated with the project. It contains the fitnesse server host and port and other information that need not be repeated.
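For illustration, the 'sources' profile in the parent POM might look roughly like this (version omitted):

<profile>
  <id>sources</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-source-plugin</artifactId>
        <executions>
          <execution>
            <id>attach-sources</id>
            <goals>
              <goal>jar</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>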
This is being used to specify builds in our CI server like:
mvn test -P clover
mvn deploy site-deploy -P fitnesse,sources
and so on.
So far, this seems to provide a convenient composition of optional features.
However, are there any dangers or pitfalls in continuing on with this approach (obvious or otherwise)? Could this type of functionality be better implemented or expressed in another way?
The problem with this solution is that you may be creating a "pick and choose" model, which is a bit un-Maven-esque. In the case of the profiles you're describing, you're sort of in between; if each profile produces a decent result by itself, you may be OK. The moment you start requiring specific combinations of profiles, I think you're heading for trouble.
Individual developers will typically run into consistency issues because they forget which set of profiles should be used for a given scenario. Your mileage may vary, but we had real problems with this. Half your developers will forget the "correct" combinations after only a short time and end up wasting hours on a regular basis because they run the wrong combinations at the wrong time.
The practical problem you'll have with this is that, AFAIK, there's no way to have a set of "meta" profiles that activate a set of sub-profiles. If there were a nice way to create an umbrella profile, this would be a really neat feature. Your "fitnesse" and "sources" profiles should really be private, activated by one or more meta-profiles. (You can activate a default set in settings.xml for each developer.)
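For that last point, each developer can switch on a default set in ~/.m2/settings.xml, for example:

<settings>
  <activeProfiles>
    <!-- profiles every build on this machine should activate by default -->
    <activeProfile>sources</activeProfile>
    <activeProfile>fitnesse</activeProfile>
  </activeProfiles>
</settings>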
There isn't a problem with having multiple profiles in Maven; in fact, I think they are an excellent way of allowing your build to enable and disable classes of functionality. I'd recommend naming them based on their function rather than the plugin, though, and consider grouping functionally related plugins in the same profile.
As a precedent for you to follow, the Maven super POM has a "release-profile" defined, which includes configurations for the source, javadoc, and deploy plugins.
You should consider following this approach, so your "fitnesse" profile would become "integration-test", and you could choose to define additional plugins in that profile if needed at a later date. Similarly the "clover" profile could be renamed "site", and you could define additional reports in that profile, e.g. configurations for the JDepend, JXR, PMD plugins.
You seem slightly suspicious of that approach, but you're not really sure why - after all, it is quite convenient. Anyway, that's how I feel about it: I don't really know why, but it seems somewhat odd.
Let's consider these two questions:
a) what are profiles meant for?
b) what alternative approaches should we compare your approach with?
Regarding a), I think profiles are meant for different build or execution environments. You may depend on locally installed software, where you would use a profile to define the path to the executable in the respective environments. Or you may have profiles for different runtime configurations, such as "development", "test", "production".
More about this can be found at http://maven.apache.org/guides/mini/guide-building-for-different-environments.html and http://maven.apache.org/guides/introduction/introduction-to-profiles.html.
As for b), these ideas come to mind:
Triggering the plug-ins with command-line properties, such as mvn -Dfitnesse=true deploy - like the well-known -DdownloadSources=true for the Eclipse plugin, or -Dmaven.test.skip=true for Surefire.
But that would require the plugin to have a flag to trigger its execution. Not all the plug-ins you need might have one.
Calling the goals explicitly. You can call several goals on one command line, like "mvn clean package war:exploded". When fitnesse is executed automatically (using the respective profile), it means its execution is bound to a lifecycle phase. That is, whenever that phase in the lifecycle is reached, the plugin is executed.
Rather than binding plugin executions to lifecycle phases, you should be able to include the plugin, but only execute it when it is called explicitly.
So your call would look like "mvn fitnesse:run source:jar deploy".
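To make such explicit calls convenient, the plugin can still be declared in the POM with its configuration but without any executions, so nothing is bound to the lifecycle and it only runs when its goal is named on the command line. A sketch (the host/port parameter names here are hypothetical; reuse whatever your existing 'fitnesse' profile already contains, including its real groupId):

<build>
  <plugins>
    <plugin>
      <!-- same coordinates as in your current 'fitnesse' profile -->
      <groupId>...</groupId>
      <artifactId>fitnesse-maven-plugin</artifactId>
      <configuration>
        <!-- hypothetical parameter names, for illustration only -->
        <fitnesseHost>fitnesse.example.com</fitnesseHost>
        <fitnessePort>8080</fitnessePort>
      </configuration>
      <!-- no <executions>: the plugin only runs when called explicitly -->
    </plugin>
  </plugins>
</build>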
The answer to question a) might explain the "oddness". It is just not what profiles are meant for.
Therefore, I think alternative 2 could actually be a better approach. Using profiles might become problematic when "real" profiles for different execution or build environments come into play. You would end up with a possibly confusing mixture of profiles, where profiles mean very different things (e.g. "test" would denote an environment while "fitnesse" would denote a goal).
If you just called the goals explicitly, I think that would be very clear and flexible. Remembering the plugin/goal names should not be more difficult than remembering the profile names.