Maven Plugin for Enforcing Java File Size

I'm looking for a plugin that will allow me, at build time, to enforce that my Java files don't exceed a certain size. For instance, if it's decided that 500 lines is too many for a class, then the build should fail if any class exceeds 500 lines.
As a rough analogy, I'm thinking of something like JaCoCo, where you can configure different thresholds, except that instead of analyzing test coverage it would analyze the actual number of lines in each class.
Does such a plugin exist?

Static code analysis (SCA) tools check things like that, but I'm not aware of a Maven plugin that fails the build when such a limit is exceeded. The tools I know of just create reports to inform you of such findings.
Even if such a plugin existed I wouldn't use it. Overly long classes are a matter for refactoring, not a reason to block a release or to treat the code as not working.
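That said, if you do want the build to fail on oversized source files anyway, one low-tech option that needs no extra plugin is a plain unit test that scans the source tree; Surefire then fails the build whenever the test fails. This is only a minimal sketch, assuming JUnit 4, Java 8+ and the standard src/main/java layout; the class name and the 500-line limit are illustrative, not from any existing plugin.

import org.junit.Test;

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import static org.junit.Assert.assertTrue;

public class FileSizeLimitTest {

    private static final int MAX_LINES = 500;

    @Test
    public void noSourceFileExceedsLimit() throws IOException {
        try (Stream<Path> sources = Files.walk(Paths.get("src/main/java"))) {
            List<String> offenders = sources
                    .filter(p -> p.toString().endsWith(".java"))
                    .filter(FileSizeLimitTest::exceedsLimit)
                    .map(Path::toString)
                    .collect(Collectors.toList());
            // A failing assertion fails the test, which in turn fails the Maven build.
            assertTrue("Files over " + MAX_LINES + " lines: " + offenders, offenders.isEmpty());
        }
    }

    private static boolean exceedsLimit(Path file) {
        try (Stream<String> lines = Files.lines(file)) {
            return lines.count() > MAX_LINES;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

Keep the test in the same module whose sources it checks, so the relative path resolves during the normal test phase.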

Related

Automatically delete unnecessary files in an Eclipse project

I forked a repository from GitHub that has a lot of packages and files, implementing all kinds of algorithms, simulations and utility classes.
However, in my research I don't need all of these files/packages for my own simulation to work.
I would like to keep my forked project as minimal as possible, so I would like to keep only the necessary packages/files that are needed to compile my simulation.
I'm talking specifically about the IDE Eclipse. If I decide to "backtrack" all imports starting from my simulation file, I would definitely get lost because the original project is big.
On the other hand, if I decide to "delete" a package and see if my simulation still compiles, I could spend all week trying that out, and if I delete a needed file I would have to re-add it to my project, which is troublesome.
Is there an automatic tool I can use to do this on Eclipse?
A simple option is to use a coverage checker to see what methods in what classes are used during execution, and delete the rest. And then revert anything that causes a compilation error.
This only works for code, not resources, though - and only if something like reflection isn’t used.

Extract a reference graph while compiling Java codebase?

Background:
I'm working with (for me) a reasonably large codebase (e.g. I've only got a few of the related projects checked out at the moment, and it's > 11,000 classes).
Build is ant, Tests are JUnit, CI is Jenkins.
Running all tests before check-in is not an option; it takes Jenkins hours. Even for some of the individual apps it can be 45 minutes.
There are some tests that don't reference the individual tested methods directly but via reflection, and in some cases don't even directly reference the class of the tested methods, as they interrogate an aggregator class and are aware of the patterns of pass-through methods in use here. As it's a big codebase with more than 10 developers, and I'm not in charge, this is something I cannot change for now.
What I want, is the ability to before check-in print out a list of all test classes that are two degrees away (Kevin-Bacon-wise) from any class in the git diff list. This way I can run them all and cut down on angry emails from Jenkins when something I missed eventually gets run and has an error.
The easiest way I can think of to achieve this is to code it myself with a Ruby script or similar, which allows me to account for some of the patterns we're using, but to do it I need to be able to query "which classes reference class X?"
I could parse .java or (easier) .class files to get this info, but I'd rather not :) Is there a way I can make Javac export it in a simple format as it compiles?
Is there a way I can make Javac export it in a simple format as it compiles?
AFAIK, no.
However, there are other ways to get a list of the dependencies:
How do I get a list of Java class dependencies for a main class?.
(Note however that you are unlikely to get a static tool to extract dependencies resulting from Class.forName(), etcetera. Also note that you cannot infer the complete set of dependencies from bytecode files because of the way that "compile time constants" are handled.)
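If you do end up going the .class-file route after all, a bytecode library takes most of the pain out of parsing. The following is a rough sketch using ASM; the library choice, the classes directory and the class name are my assumptions, not part of your build. It prints the outgoing references of each compiled class, and inverting those edges gives you the "which classes reference class X?" lookup:

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ClassDependencyDump {

    public static void main(String[] args) throws IOException {
        // Directory of compiled classes; adjust to wherever your ant build puts them.
        Path classesDir = Paths.get(args.length > 0 ? args[0] : "build/classes");

        List<Path> classFiles;
        try (Stream<Path> files = Files.walk(classesDir)) {
            classFiles = files.filter(p -> p.toString().endsWith(".class"))
                              .collect(Collectors.toList());
        }

        for (Path classFile : classFiles) {
            ClassReader reader = new ClassReader(Files.readAllBytes(classFile));
            Set<String> referenced = new TreeSet<>();

            reader.accept(new ClassVisitor(Opcodes.ASM9) {
                @Override
                public MethodVisitor visitMethod(int access, String name, String descriptor,
                                                 String signature, String[] exceptions) {
                    // Record every class whose methods, fields or types this method touches.
                    return new MethodVisitor(Opcodes.ASM9) {
                        @Override
                        public void visitMethodInsn(int opcode, String owner, String name,
                                                    String descriptor, boolean isInterface) {
                            referenced.add(owner.replace('/', '.'));
                        }

                        @Override
                        public void visitFieldInsn(int opcode, String owner, String name,
                                                   String descriptor) {
                            referenced.add(owner.replace('/', '.'));
                        }

                        @Override
                        public void visitTypeInsn(int opcode, String type) {
                            referenced.add(type.replace('/', '.'));
                        }
                    };
                }
            }, 0);

            // One line per class: "com.example.Foo -> com.example.Bar, com.example.Baz"
            System.out.println(reader.getClassName().replace('/', '.') + " -> "
                    + String.join(", ", referenced));
        }
    }
}

As noted above, this only sees references that survive into the bytecode, so Class.forName() lookups and inlined compile-time constants will still be missed.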
It strikes me that there are a few problems here:
It sounds to me like your build, and indeed your project structure, is monolithic. If you could restructure the codebase into large-scale modules that build separately (according to their dependencies) and are version-controlled separately, then you would only need to do a full build and run all unit tests when there is a change high up ... in a module that everything else depends on. (Can I suggest the "Maven" word? It really helps for a large codebase, and 11,000 classes is large.)
It sounds like you may be suffering from the "branches are hard" problem of classic VCS systems.
It sounds like you may need a beefier CI system. If you've got more cores and the build framework is right, you should be able to get faster CI builds. (And if you modularize so that you rebuild less ...)
I think it might be easier to address your slow build/test cycle that way rather than via extra (possibly bespoke) tooling to do dependency analysis.
But I recognize that it may not be up to you to make those decisions.
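For completeness: once you have an inverted reference map (class -> classes that reference it), the "two degrees away" query from the question is a short bounded graph walk. The map shape and the test naming convention below are assumptions for illustration:

import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class TwoDegreeTests {

    // Given an inverted reference map (class -> classes that reference it),
    // collect everything within two hops of the changed classes, then keep the tests.
    public static Set<String> testsNear(Map<String, Set<String>> referencedBy,
                                        Collection<String> changedClasses) {
        Set<String> within = new HashSet<>(changedClasses);
        for (int hop = 0; hop < 2; hop++) {
            Set<String> next = new HashSet<>();
            for (String cls : within) {
                next.addAll(referencedBy.getOrDefault(cls, Collections.emptySet()));
            }
            within.addAll(next);
        }
        Set<String> tests = new TreeSet<>();
        for (String cls : within) {
            if (cls.endsWith("Test")) {   // naming convention is an assumption
                tests.add(cls);
            }
        }
        return tests;
    }
}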

Measure Code Coverage only on New Code

We are looking for a creative way to measure code coverage on new code separately from existing code. We have a large legacy project and want to start getting 90+% coverage on any new functionality. We would like an easy way to view a report that filters out the older code, to make sure the new functionality is meeting our goal. We are obviously still looking at increasing overall coverage on the project, but we need a non-manual way to get feedback on new code activity. We have this working for static analysis, since we can look at the dates on the source files. Because Cobertura analyzes the class files, which all have new dates, this technique doesn't work there.
Any ideas?
Stack:
Java 1.5
JUnit
Cobertura
Hudson
We had a similar situation: we wanted new code tested but could not test all the old code at once. What we did is not exactly what you asked for, but it may give you an idea.
We have a file called linecoverage.standard, and a file called branchcoverage.standard that live on the build server (and local copies). They have a number inside with the current line and branch coverage limits. If the checked in code is below the standard, it fails the build. If it is at the standard it passes the build. If it is ABOVE the standard, a new standard is written equal to the current coverage.
This means our code coverage will never get worse, and should slowly go up. If new code is 90%, the coverage will keep creeping up. You could also set a goal like raise the standard by 1 each week until it gets to your final goal (90%). Having to add a few tests a week to old code is not a bad idea, if it is spread out over enough time.
Our current coverage is up to 75%ish... pretty good coming from a 0% rate under a year ago.
I did this for a large C++ project by using svn blame combined with the output of gcov. If you zip those two results together you have revision information and coverage information for each line. I actually loaded this all into a database to do queries (e.g. show me all the uncovered lines written by joe since r1234). If you only want an aggregate number you can just avoid counting 'old' uncovered lines in your total.
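For a Java/Cobertura stack the same per-line zip works too. The sketch below is purely illustrative: it assumes two invented input files, blame.txt with one ISO commit date per source line (e.g. extracted from your SCM's blame output) and hits.txt with one hit count per line (e.g. extracted from the coverage report), and counts the uncovered lines newer than a cutoff date:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.time.LocalDate;
import java.util.List;

public class NewUncoveredLines {

    public static void main(String[] args) throws IOException {
        LocalDate cutoff = LocalDate.parse("2013-01-01");   // only lines newer than this count
        List<String> dates = Files.readAllLines(Paths.get("blame.txt"));
        List<String> hits  = Files.readAllLines(Paths.get("hits.txt"));

        int newUncovered = 0;
        for (int i = 0; i < Math.min(dates.size(), hits.size()); i++) {
            boolean isNew = LocalDate.parse(dates.get(i).trim()).isAfter(cutoff);
            boolean uncovered = Integer.parseInt(hits.get(i).trim()) == 0;
            if (isNew && uncovered) {
                newUncovered++;
                System.out.println("uncovered new line " + (i + 1));
            }
        }
        System.out.println("Total uncovered new lines: " + newUncovered);
    }
}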
Have a look at EMMA (emma.sourceforge.net) and its associated Eclipse plugin (if you are using Eclipse).
I think this tool can meet your need by letting you select exactly what to measure for coverage.
IMO the best option is to split the codebase into "new" and "legacy" sections. Then either run test coverage analysis only on the "new" section, or ignore the results for the "old" section.
The two best ways to accomplish this are a) split the codebase into two source trees (two projects with a dependency between), or b) maintain two separate package hierarchies in a single project.
Two separate projects is probably preferable, but it might not be possible if there's a cyclical dependency between the legacy codebase and the new codebase (old code depends on new code and new code depends on old code). If you can manage it, a one-way dependency between old and new code will also make the combined codebase easier to understand.
Once you've got this done, either adjust cobertura so that it's only analyzing the bits you want, or at least just focus on the "new" part of the codebase. One additional tip is that in this scheme, it's best to move bits of code from the "legacy" section to the "new" section as you refactor/add tests to them (if code is frequently moving in the other direction, that's not so good :-).
We did it as below, using the sonar.exclusions property:
We use Sonar to display the code coverage reports (reported by Cobertura).
a) Identify the classes that you don't want coverage report on (Legacy classes)
Use your SCM cmd line client.
eg: p4 files //depot/... #2000/01/01,#2013/07/13
git log --until="5 days ago"
Direct this list into a file.
You will need to do some parsing based on the SCM tool you use, and your destination file should contain one file name per line.
e.g. if the destination file is excludeFile.list, it should look like this:
abc.java
xyz.java
...
b) Now, when you integrate with Sonar (from the Jenkins job), use the property below.
-Dsonar.exclusions=<filename>
Your final coverage report in Sonar will then contain only your new classes (added after 2013/07/13 in the above example).
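One wrinkle: depending on your Sonar version, sonar.exclusions may expect a comma-separated list of file patterns rather than the name of a file containing them, so you may need a small step that joins the lines of excludeFile.list. A hypothetical helper for that (the **/ pattern prefix and the file names are assumptions):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Collectors;

public class BuildSonarExclusions {

    public static void main(String[] args) throws IOException {
        String exclusions = Files.readAllLines(Paths.get("excludeFile.list")).stream()
                .filter(line -> !line.trim().isEmpty())
                .map(name -> "**/" + name.trim())   // match the file anywhere in the source tree
                .collect(Collectors.joining(","));
        // Pass this value to the analysis, e.g. as -Dsonar.exclusions=<output>
        System.out.println(exclusions);
    }
}

Its output can then be passed as the value of -Dsonar.exclusions from the Jenkins job.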
We call what you are trying to do Test Gap analysis. The idea is to test all (or at least most of) the changes you make to a large software system during development, because that's where the most bugs will be. There's empirical evidence to back up this intuition as well!
Teamscale is a tool that does what you are looking for, and it can handle Cobertura reports. The advantage is that you just measure coverage as you normally do and then upload the reports to Teamscale, which performs the Test Gap analysis to highlight new/changed but untested code on a method-by-method basis.
Full disclaimer: I work for CQSE, the company that makes Teamscale.
In my scenario, we need a measurement of new-code coverage in our day-to-day process. What we did was install SonarQube locally, where developers can check quality metrics such as coverage on the new code, which Sonar provides, and act on them right away.
For global metrics, we run SonarQube against only our production code and gather all of the quality metrics (such as new-code coverage) from there.

Integration of System Tests in the build process

I am continuing the development of a serialization layer generator. The user enters a description of types (currently in XSD or in WSDL), and the software produces code in a certain target language (currently Java and ANSI C89) which is able to represent the types described and which is also able to serialize (turn into a byte sequence) and deserialize these values.
Generating code is tricky (I mean, writing code is hard; writing code that writes code is writing code to do a hard thing, which is a whole new land of hardness :) ). Thus, in the project which preceded my master's thesis, we decided that we wanted some system tests in place.
These system tests know a type and a number of pairs of values and byte sequences. In order to execute a system test in a certain language, the type is run through the system, resulting in code as described above. This code is then linked with some handwritten host code, which is capable of reading these byte-sequence/value pairs and which provides functions to read values of the given type from a string. The resulting executable is then run, the byte/value pairs are fed into it, and it is checked whether all such bindings result in the output "Y". If this is the case, then these example values for the type serialize into the previously defined byte sequences, and we can conclude that the generated code compiles and runs correctly, and thus, overall, that the part of the system handling this type is correct. This is a very good thing.
However, right now I am a bit unhappy with the current implementation. Currently, I have written a custom JUnit runner which uses quite a lot of reflection sorcery in order to read these byte/value bindings from a class's attributes. Also, the overall stack to generate the code requires a lot of boilerplate code and boilerplate classes which do little more than contain two or three strings. Even worse, it is quite hard to get good integration with all the tools that build on JUnit's test descriptions and generate test failure reports. It is quite hard to actually debug what is happening if the helpful Maven JUnit test runner or the Eclipse test runner gobbles up whatever errors the compiler threw, just because the format of those errors is different from JUnit's own assertion errors.
Even worse, a single failed test in the generated code causes the Maven build to fail. This is very annoying. I do want the Maven build to fail if a test of a different unit fails, because (for example) if a certain depth-first preorder calculation fails for some reason, everything will go haywire. However, if I just want to show someone some generated code for a type I know works, then it is very annoying that I cannot quickly build my application because the type I am working on right now is not finished.
So, given this background, how can I get a nice automated system which checks these generation specifications? Possibilities I have considered:
A Junit integrated solution appears to be less than ideal, unless I can improve the integration of maven and junit and junit with my runner and everything else.
We used fitnesse earlier, but overall ditched it, because it caused more problems than it solved. The major issues we had were integration into maven and hudson.
A solution using texttest. I am not entirely convinced, because this mostly wants an executable, strings to put on stdin and strings to expect on stdout. Adding the whole "run application, link with host code and THEN run the generated executable" seems kinda complicated.
Writing my own solution. This will of course work and do what I want. However, this will be the most time-consuming option, as usual.
So... do you see another possible way to do this without my having to write something of my own?
You can run Maven with -Dmaven.test.skip=true. NetBeans has a way to set this automatically unless you explicitly hit one of the commands to test the project; I don't know about Eclipse.

Can my build stipulate that my code coverage never get worse?

I am using Hudson CI to manage a straight Java web project, using Ant to build.
I would like to mandate that the unit test coverage never be worse than the previous build, thereby making sure any new code is always tested, or at least the coverage is continually improving.
Is there a Hudson plugin that works this way?
Edit: I am currently using Emma, but would be willing to switch to another coverage app.
Also, as a clarification, I've seen the thresholds in some Hudson plugins, but that's not exactly what I'm after. For example what I'd like is that if coverage for Build #12 was 46% overall, and someone checked in Build #13 with 45% coverage, the build would break.
The reason I want to do this, is that I have a codebase with low test coverage. We don't have time to go back and retroactively write unit tests, but I'd like to make sure that the coverage keeps getting better.
UPDATE: Dan pointed out an edge case with my plan that will definitely be a problem. I think I need to rethink whether this is even a good idea.
Yes. Which coverage tool are you using?
The Cobertura plugin for Hudson definitely supports this. On the project configuration screen you can specify thresholds.
Alternatively, you can make Ant fail the build (rather than Hudson), by using the cobertura-check task.
EDIT: I'm not sure you can do precisely what you are asking for. Even if you could, it could prove problematic. For example, assume you have an average coverage of 75% but for one class you have coverage of 80%. If you remove that 80% class and all of its tests, you reduce the overall coverage percentage even though none of the other code is any less tested than previously.
This is kind of a hack, but we use it for similar reasons with Findbugs and Checkstyle. You can set up an Ant task to do the following (this can be split out into multiple tasks, but I'm combining them for brevity):
1. Run tests with coverage.
2. Parse the coverage results and get the coverage percentage.
3. Read tmp/lastCoverage.txt from the last build (see step 5a).
4. Compare the current coverage percentage with the percentage read from lastCoverage.txt.
5a. If the percentage DIDN'T decrease, write the new percentage over the contents of tmp/lastCoverage.txt.
5b. If the percentage DID decrease, keep the original file and echo "COVERAGE FAILURE" (with Ant's echo task).
Note that steps 2 through 5 don't necessarily need to be done with native Ant tasks - you could use Ant's java task to run a small Java program that does this for you.
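A rough sketch of such a helper, run from the Ant build; the file location and the idea of passing the freshly parsed coverage percentage as a program argument are assumptions for illustration:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CoverageGate {

    public static void main(String[] args) throws IOException {
        // Step 2 is assumed to have happened already: the current coverage
        // percentage is handed to us as the first program argument.
        double current = Double.parseDouble(args[0]);

        Path lastFile = Paths.get("tmp/lastCoverage.txt");
        double last = Files.exists(lastFile)
                ? Double.parseDouble(new String(Files.readAllBytes(lastFile)).trim())
                : 0.0;

        if (current >= last) {
            // Step 5a: coverage held or improved, so ratchet the stored value up.
            Files.createDirectories(lastFile.getParent());
            Files.write(lastFile, String.valueOf(current).getBytes());
        } else {
            // Step 5b: coverage dropped; keep the old value and print the marker
            // that the Hudson text-finder plugin is configured to look for.
            System.out.println("COVERAGE FAILURE: " + current + "% < " + last + "%");
        }
    }
}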
Then, configure Hudson:
Under "Source code management", make sure "Use Update" is checked. This will allow your lastCoverage.txt file to be retained between builds. Note that this could be problematic if you really, really need things to be cleaned between builds.
Use the Hudson Text Finder plugin with a regular expression to search for "COVERAGE FAILURE" in the build output (make sure that "Also search console output" is checked for the plugin). The text finder plugin can mark the build unstable.
You can obviously replace things like the file name/path and console output to whatever fits within the context of your build.
As I mentioned above, this is rather hacky, but it's probably one of the few (only?) ways to get Hudson to compare things in the previous build to the current build.
Another approach would be to use the Sonar plugin for Hudson to track the trend of coverage over time, and to make it easier to assimilate and analyze results. It will also show coverage in the context of other measures, such as Checkstyle and PMD.
Atlassian's Clover supports what you want. Have a look at the clover-check Ant task, specifically the historyDir attribute.
