Java code coverage missed instructions?

I'm wondering how code coverage is measured for Java. For one class, my unit test covers most of the functionality, but the coverage result is about 44%. We use a custom tool at our company, and I'm wondering how coverage is usually measured.
What would be the ideal code coverage percentage to have?

Depending on the tool, code coverage might be measured in lines touched by tests or in the number of different branches covered by tests. The metric alone isn't very interesting; look at which lines are not covered. If you're using Eclipse or IntelliJ, there is a code coverage view you can bring up to show you exactly which lines were missed.
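To see why those two measures can disagree, here's a made-up snippet where a single test touches every line (100% line coverage) but exercises only one of the two branches (50% branch coverage):

    public class Discount {
        public static int price(int base, boolean member) {
            int price = base;
            if (member) {
                price -= 10;   // executed by price(100, true)
            }
            return price;
        }
    }

    // assertEquals(90, Discount.price(100, true)) runs every line above,
    // yet the 'member == false' branch of the if is never taken.

A tool counting lines reports this as fully covered; a tool counting branches reports it as half covered, which by itself can explain a surprisingly low number.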
Emma and Cobertura are both decent code coverage tools, but will give different results on the same code+unit tests.
44% code coverage is pretty low.
If you're doing test driven development, then you should be able to achieve 90%+ code coverage, with only weird exceptions and small snippets of file/network access that you exclude from unit testing.
If you've got 44% code coverage, this may be OK. Your code might contain a lot of getters/setters which you're not using right now, but which are handy to have there. Similarly, I've noticed that it's not worth exercising all routes through a hashCode or equals method just to appease the coverage tool. If your code is very POJOey, with lots of uninteresting boilerplate that was auto-generated, then it's probably no big deal.
Tools like SonarQube are essential for visualising the combination of code coverage hot spots and static code analysis quality metrics.
Don't use quirky in-house tools, find something that's commonly supported.

Related

How to get code coverage on Java production code with Clojure unit tests?

I've got Java production code and unit tests written in Clojure. Is there a way of measuring the code coverage?
Is there an Eclipse plugin for this? I use EclEmma for my Java unit tests; is there a similar one for tests written in Clojure?
I heard about this project in a Clojure/West talk, though I have not used it myself. Perhaps it would be worth looking into for this:
https://github.com/dgrnbrg/guzheng
"guzheng is a library for doing branch coverage analysis of clojure code."
It seems to have a Leiningen plugin to make running it a bit smoother.

How to know what classes were touched by a JUnit test

What is an efficient way to programmatically know what classes were touched by a JUnit test?
Right now, I am instrumenting my entire codebase with JaCoCo to obtain code coverage information for every line of code, and then I can figure out which classes were used.
Is it possible to do this without having to instrument all the code at the line level?
You can probably do something at the classloader level (this is how some code coverage tools work - from memory, Emma does this, and is open source). Then you can just record which classes are loaded. You might be able to hack something together from one of the OSS coverage tools.
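As a sketch of that classloader-level idea (not Emma's actual implementation; the class name and the recorder.jar packaging are my own invention), a java.lang.instrument agent can record every class name as it is loaded, without modifying any bytecode:

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class ClassRecorderAgent {
        public static final Set<String> LOADED = ConcurrentHashMap.newKeySet();

        // Run the test JVM with -javaagent:recorder.jar, assuming the jar's
        // manifest names this class as Premain-Class.
        public static void premain(String args, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> classBeingRedefined,
                                        ProtectionDomain protectionDomain,
                                        byte[] classfileBuffer) {
                    if (className != null) {
                        LOADED.add(className.replace('/', '.')); // record only
                    }
                    return null; // null = leave the bytecode untouched
                }
            });
        }
    }

Dumping LOADED after each test run (e.g. from a JVM shutdown hook) gives you the touched classes without any per-line instrumentation.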
I use Cobertura, which gives lots of nice stats on coverage and can show coverage by highlighting your code.
There are plugins for Eclipse, Maven, Hudson, Jenkins... really easy to use, although I have to admit that I haven't tried out any other tools for code coverage.
Well, I'm not sure how you do it with JaCoCo, but you definitely need a code coverage tool in order to know what parts of your code have been covered :)

Clover - getting coverage without automated tests

I am currently exploring various code coverage tools for use in a project and have shortlisted Clover from among Clover, Emma, and Cobertura. (My org is ready to pay for Clover, and it's nice.)
But we do not have automated tests; all tests are manual, and we need results to be generated at runtime using instrumented code.
Clover's wiki's initial lines say:
"Code coverage is the percentage of code which is covered by automated tests."
Can Clover collect coverage from non-automated tests? That is, the requirement is that I instrument the code at compile time and get a coverage report when I actually run the application.
I've Googled a lot but could not find an appropriate answer.
Are there alternatives that achieve this if Clover does not support it?
The idea of coverage tools is to instrument the application code so that when it's run, statistics are collected and finally written into reports. Whether the application code is run by automated tests or by manual tests doesn't matter. It will work with manual tests, but will of course take much longer.
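For what it's worth, JaCoCo supports exactly this manual flow too: start the application with its agent attached (-javaagent:jacocoagent.jar) and dump the execution data whenever a manual test session ends. A rough sketch using JaCoCo's runtime API (CoverageDump is a made-up helper name):

    import java.io.FileOutputStream;
    import org.jacoco.agent.rt.IAgent;
    import org.jacoco.agent.rt.RT;

    public class CoverageDump {
        // Call this when the testers finish a session, e.g. from an admin hook.
        public static void dump(String path) throws Exception {
            IAgent agent = RT.getAgent();   // throws if the agent isn't attached
            byte[] exec = agent.getExecutionData(false); // false = keep counters
            try (FileOutputStream out = new FileOutputStream(path)) {
                out.write(exec);  // standard .exec file for JaCoCo reporting
            }
        }
    }

The resulting .exec file is the same format automated builds produce, so the normal HTML/XML report generation works unchanged.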
There are manual steps which the Clover website documents; we can probably achieve this using those, but I'm not sure of the optimal way to do it.

How to introduce tests in an underway Java project?

I'm using Eclipse in a Java environment. I have to introduce testing to an already underway project. How should I start? Unit tests on the classes, and what else? What are the best tools for the job (considering I'm using Eclipse)? TestNG, JUnit, JTiger?
How do I get others to adopt the tests?
Thanks in advance!
Eclipse has great support for JUnit. This looks like a great starting point. I would add a new source directory, test, and create a package structure mirroring your src folder. Then you can add your unit tests one by one. Good luck!
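As a concrete first step (Calculator is an invented stand-in for one of your own classes), a test in test/com/example/CalculatorTest.java mirroring src/com/example/Calculator.java can be as small as:

    package com.example;

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CalculatorTest {
        @Test
        public void addsTwoNumbers() {
            // One behavior per test keeps failures easy to read.
            assertEquals(5, new Calculator().add(2, 3));
        }
    }

Right-clicking the class in Eclipse and choosing Run As > JUnit Test runs it immediately.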
Unless your team is already used to writing tests, testing all old components is not really feasible (sometimes not even then), as writing testable software requires a specific mentality.
I think the best time to start writing tests is when you can define some new component that is critical enough to warrant the extra effort, but new enough that you aren't forced to test much of the already existing code base.
This way you can find a testing approach, identify the benefits, and learn the mentality without putting too much effort into something that might not work for your team.
About tooling: sadly, I cannot really compare the different tools, as I only have experience with JUnit.
JUnit is easy to start with, as the corresponding tooling is already included in Eclipse (wizards to create templates, running and evaluation options...), and plenty of documentation and examples are available on the net.
A lot of good questions.
If your project does not have tests at all and is already underway, you should introduce tests incrementally. When you have to develop a new feature, write tests for that feature, for the classes you are going to change, and for functionality that could be broken. The same goes for fixing bugs: first write a test that reproduces the bug, then fix it.
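For the bug-fixing case, a sketch of what such a test might look like (the Invoice class and the bug number are invented): write it first so it fails, then make it pass with the fix:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Named after the bug report so future readers know why it exists.
    public class Bug1234RegressionTest {
        @Test
        public void emptyInvoiceHasZeroTotal() {
            // Fails before the fix, passes after, and guards against regression.
            assertEquals(0.0, new Invoice().total(), 0.0001);
        }
    }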
I have used JUnit and TestNG. Both are almost the same. TestNG has groups, i.e. you can mark a test as belonging to a group like development, build, integration, etc. This is mostly relevant if you have a lot of tests and it takes significant time to run them all.
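Groups look like this in practice (the class and group names are just examples); you can then run a subset via the <groups> element in testng.xml or Surefire's -Dgroups=development:

    import org.testng.annotations.Test;

    public class PaymentTests {
        @Test(groups = {"development"})
        public void quickSanityCheck() {
            // fast check, run on every build
        }

        @Test(groups = {"integration"})
        public void talksToRealGateway() {
            // slow, run only in nightly/CI builds
        }
    }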
Do you already have an automatic build? If not, start with that. If you prefer to use Maven, it is relatively simple. When your build is ready, write a couple of unit tests (just to have something to fail...).
Then install Hudson/Jenkins and define your project there. People will see how cool it is that once you commit new code the build runs almost immediately and you see all the failed tests. Then try to show the strength of TDD to your boss and explain to him that he should encourage all team members to write tests.
If you have enough energy, introduce Sonar to your company. People will see how awful the code they are writing is and how poor the test coverage is. Then they will see how quickly the test coverage grows and will probably invest more in unit testing.
In short, good luck. You are on the right track.
JUnit and TestNG are both fine. TestNG has more capabilities and can be helpful with integration tests; JUnit is more focused on unit tests.
You may want to get acquainted with some kind of mocking library like Mockito. Code coverage tools like Cobertura and continuous integration tools like Jenkins are great too.
Using a DI framework like Spring or Guice is helpful for writing more easily-testable code. Whether you use a DI framework or not, the more loosely-coupled your code the easier it is to test. Since your project is already under way it is probably too late for this part, which will make your task harder.
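To make the loose-coupling point concrete (all names here are invented), constructor injection lets a test hand the service a Mockito mock instead of the real collaborator:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;

    public class UserServiceTest {
        interface UserRepository { String findName(int id); }

        static class UserService {
            private final UserRepository repo;
            UserService(UserRepository repo) { this.repo = repo; } // injected, not new'd
            String greet(int id) { return "Hello, " + repo.findName(id); }
        }

        @Test
        public void greetsUserByName() {
            UserRepository repo = mock(UserRepository.class);
            when(repo.findName(42)).thenReturn("Ada");
            assertEquals("Hello, Ada", new UserService(repo).greet(42));
            verify(repo).findName(42); // the collaborator was called as expected
        }
    }

The same shape is what Spring or Guice automates; the test itself never needs the framework.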
It can be very hard to get co-workers to cooperate with testing. You may want to introduce it selectively on pieces where it can make the most difference. Writing tests for already-finished functionality is usually painful and a waste of time. Tests should be small, have few dependencies, be understandable, and should run quickly. The more painful it is to write tests, run the tests, and fix broken tests, the more resistance you will get.

Are there code coverage tools that can tell me about just the code that I wrote during my last sprint?

I'm looking for a tool that can give me more meaningful metrics about code coverage for my team. For instance, two things I'd like to see:
How much code coverage did we have as a team for the code written during our last sprint?
How much code coverage did new code get broken down by developer?
Has anyone done anything like this before? What tools are available? Specifically, I'm working in Java and am interested in either free or commercial solutions.
Look into Sonar. It can be told about per-sprint releases.
You have two problems:
1) What's the coverage of the code that I presently have?
2) What's the coverage compared to what I used to have?
I think you need three tools:
a) something to compute coverage of your application this instant,
b) something that computes the difference between what you have now and what you had the last time you computed code coverage,
c) something to pick out the coverage data that changes.
You can get a) from a number of Java test coverage tools, but you want it in a form that you can process for step c). The SD Java Test Coverage Tool does provide that data in a usable form: the test coverage is reported in XML with line-number data, so you can easily build a tool to extract what you want.
b) Any diff tool will do to identify file differences, under the assumption that the files from the previous version and from this version line up one for one. Where they don't, you can simply assume the test coverage data is completely different :-}
However, you want the smallest diff you can get; see the SD Java Smart Differencer Tool for a tool that often computes much smaller deltas than conventional diff. The output of the Smart Differencer is machine readable.
c) I don't have an off-the-shelf answer for this, but a Perl script that combined the information from a) and b) would likely do the trick. What you want to report is: for each explicit delta identified by the diff tool, what the coverage data is; and for all code the diff tool says is unchanged, whether the new coverage is less than the old coverage.
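If Perl isn't your thing, the combining step can be sketched in Java as well (ChangedRange and the per-file line sets are placeholders for whatever your diff and coverage tools actually emit):

    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class DeltaCoverage {
        // One changed region from the diff tool: file plus inclusive line range.
        record ChangedRange(String file, int start, int end) {}

        // covered: file -> line numbers the coverage report marks as executed.
        static void report(List<ChangedRange> changes,
                           Map<String, Set<Integer>> covered) {
            for (ChangedRange c : changes) {
                Set<Integer> lines = covered.getOrDefault(c.file(), Set.of());
                int hit = 0, total = c.end() - c.start() + 1;
                for (int ln = c.start(); ln <= c.end(); ln++) {
                    if (lines.contains(ln)) hit++;
                }
                System.out.printf("%s %d-%d: %d/%d changed lines covered%n",
                        c.file(), c.start(), c.end(), hit, total);
            }
        }
    }

Anything the diff tool reports as unchanged can be checked the same way, by comparing the old and new covered-line sets per file.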
