What is an efficient way to programmatically know what classes were touched by a JUnit test?
Right now, I am instrumenting my entire codebase with JaCoCo to obtain coverage information for every line of code, and from that I can figure out which classes were used.
Is it possible to do this without having to instrument all the code at the line level?
You can probably do something at the classloader level (this is how some code coverage tools work - from memory, Emma does this, and is open source). Then you can just record which classes are loaded. You might be able to hack something together from one of the OSS coverage tools.
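For illustration, here is a minimal sketch of that idea using the standard java.lang.instrument API: a Java agent whose transformer records every class name it sees without touching the bytecode. The class name and output convention are my own; you would still need to package it with a Premain-Class manifest entry and start the JVM with -javaagent.

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class LoadedClassRecorder {

        private static final Set<String> loaded = ConcurrentHashMap.newKeySet();

        public static void premain(String args, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> classBeingRedefined,
                                        ProtectionDomain pd, byte[] classfileBuffer) {
                    if (className != null) {
                        loaded.add(className); // record the name only
                    }
                    return null; // null = leave the bytecode untouched
                }
            });
            // print everything that was loaded when the JVM exits
            Runtime.getRuntime().addShutdownHook(
                    new Thread(() -> loaded.forEach(System.out::println)));
        }
    }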
I use Cobertura, which gives lots of nice coverage statistics and can show coverage by highlighting your code.
There are plugins for Eclipse, Maven, Hudson, Jenkins... It's really easy to use, although I have to admit that I haven't tried any other code coverage tools.
Well, I'm not sure how you do it with JaCoCo, but you definitely need a code coverage tool in order to know what parts of your code have been covered :)
Related
I'm wondering how code coverage is measured for Java. For one class, my unit test covers most of the functionality, but the coverage result is about 44%. We use a custom tool at our company, and I'm wondering how coverage is usually measured.
What would be the ideal code coverage percentage to have?
Depending on the tool, code coverage might be measured in lines touched by tests or in the number of different branches covered by tests. The metric alone isn't very interesting - look at which lines are not covered. If you're using Eclipse or IntelliJ, there is a code coverage view you can bring up to show you exactly that.
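To see how line and branch coverage can disagree, consider this deliberately tiny example (hypothetical code):

    public class Classifier {
        // A single test calling classify(5) touches every line of this
        // method, so line coverage reports 100% - but only one arm of the
        // conditional actually runs, so branch coverage reports just 50%.
        public static String classify(int n) {
            return (n >= 0) ? "non-negative" : "negative";
        }
    }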
Emma and Cobertura are both decent code coverage tools, but will give different results on the same code+unit tests.
44% code coverage is pretty low.
If you're doing test driven development, then you should be able to achieve 90%+ code coverage, with only weird exceptions and small snippets of file/network access that you exclude from unit testing.
If you've got 44% code coverage, this may be ok. Your code might contain a lot of getters/setters which you're not using right now, but which are handy to have there. Similarly I've noticed that it's not worth exercising all routes through a hashCode or equals method just to appease the coverage tool. If your code is very POJOey with lots of uninteresting boilerplate that was auto-generated, then it's probably no big deal.
Tools like SonarQube are essential for visualising the combination of code coverage hot spots and static code analysis quality metrics.
Don't use quirky in-house tools, find something that's commonly supported.
I am currently working on a Java library - that is, a bunch of classes that are exclusively intended to be used in other projects. Naturally, it has no main() method.
Now, I want to test my progress. And by "test" I don't mean some professional standardized system; I mean I have a very simple function that I want to run to gather information, which will be modified as the project becomes more complete.
I was hoping I could drop an executable class into the Test Packages folder, and just click Run. Unfortunately, NetBeans complains that there are no main classes found.
So, how do I test a library project, without adding an executable class to my distributable source?
You should absolutely look into unit testing frameworks, such as JUnit. IDEs typically have support for running tests easily, and it looks like NetBeans does too. (I don't use NetBeans myself, but I'd have been shocked if it didn't support JUnit.) It's a lot simpler to do this than to have main methods everywhere. After all, a main method will only test one route through your code - with unit tests, you can have lots of tests, each testing one small piece of your code.
Even if you don't want to go into unit testing in a fully-fledged way (which I'd strongly urge you to, by the way), unit tests can be a very straightforward way of just running some code and experimenting with it. I sometimes use them when developing against a third-party library for the first time - leaving the unit tests behind to show and document my understanding of the library's behaviour. (Obviously the better the library and its documentation, the less need there is for this, but it's still useful...)
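As a concrete starting point, here is what a minimal JUnit 4 test might look like; StringUtils.reverse is a made-up stand-in for whatever your library actually exposes. NetBeans can run this class directly from the Test Packages folder, no main() required.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class StringUtilsTest {

        @Test
        public void reverseReturnsCharactersInOppositeOrder() {
            assertEquals("cba", StringUtils.reverse("abc"));
        }

        @Test
        public void reverseOfEmptyStringIsEmpty() {
            assertEquals("", StringUtils.reverse(""));
        }
    }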
I have used both JUnit and CPPUnit in NetBeans extensively and find that it is fairly easy to get test coverage for libraries with those tools. IntelliJ IDEA does a decent job with JUnit as well, so that is an option if you don't like the NetBeans interface. The xUnit frameworks have gotten me out of a jam many times, since they are very good at isolating errors quickly. As Jon said, they also help to capture the requirements/behavior of your system, so that is an added bonus.
Does anybody know a way to get a report of the Java code covered by each JUnit test in a project (or a file)?
I was interested in JaCoCo, but it doesn't break the coverage down test by test.
If someone has already researched this, I'd be interested to hear what approaches you found. Maybe someone has managed to do it with JaCoCo, and I'd like to know how, because I'm quite lost with this technology...
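One approach that may work: run the JVM with the JaCoCo agent attached and use its runtime API (org.jacoco.agent.rt) to dump and reset the execution data around each test. A sketch as a JUnit 4 rule; the file-naming convention is my own, and this assumes the agent jar is on the classpath and started via -javaagent:

    import java.io.FileOutputStream;
    import org.jacoco.agent.rt.IAgent;
    import org.jacoco.agent.rt.RT;
    import org.junit.rules.TestWatcher;
    import org.junit.runner.Description;

    public class PerTestCoverageRule extends TestWatcher {

        @Override
        protected void finished(Description description) {
            try {
                IAgent agent = RT.getAgent();
                // true = reset, so the next test starts from a clean slate
                byte[] execData = agent.getExecutionData(true);
                String fileName = description.getClassName() + "."
                        + description.getMethodName() + ".exec";
                try (FileOutputStream out = new FileOutputStream(fileName)) {
                    out.write(execData);
                }
            } catch (Exception e) {
                throw new RuntimeException("could not dump coverage data", e);
            }
        }
    }

Add it to a test class with @Rule public PerTestCoverageRule coverage = new PerTestCoverageRule(); and each test method leaves behind its own .exec file, which can then be turned into a per-test report with the usual JaCoCo reporting tools.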
I am trying to trace the execution flow in a large Java code base that was not written by me. I have searched for tools that make this possible (JSonde, JTrace, Java Call Tracert, JavacallTracer), but the problem is that they all expect to be pointed at a single java/jar/class file.
The code I am trying to understand is built with Ant, involves hundreds of jars, and is run using a shell script. I do not know how to use those tools with this code.
I really appreciate your help.
I know that this is an old question, but I have now found a solution, and I'm putting it here in case somebody else is searching for the same thing: http://findtheflow.io/#gettingstarted.
I think what you should consider is a code coverage tool. This will report which parts of your code are executed and which are not. There are several such tools to consider. JaCoCo is an emerging favourite and is associated with the EclEmma Eclipse plugin.
The thing to remember about code coverage is that it needs to be driven by something. Normally this is accomplished by running your code's tests (unit or integration).
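Since your application is launched from a shell script, enabling JaCoCo there is usually a one-line change: add the agent to the java invocation in the script, along these lines (all paths are placeholders to adjust for your setup):

    java -javaagent:/path/to/jacocoagent.jar=destfile=/tmp/jacoco.exec -jar yourapp.jar

Exercise the application, stop it, and the execution data accumulates in the destfile for report generation afterwards.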
Finally, once you're comfortable with how to enable code coverage, you could also consider uploading and archiving its results in Sonar.
I'm using Eclipse in a Java environment. I have to introduce testing to an already underway project. How do I start? Unit tests on the classes, and what else? What are the best tools for the job (considering I'm using Eclipse)? TestNG, JUnit, JTiger?
How do I get the other team members to adopt the tests?
Thanks in advance!
Eclipse has great support for JUnit, which makes it a good starting point. I would add a new source directory named test and create a package structure mirroring your src folder. Then you can add your unit tests one by one. Good luck!
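For example, with a hypothetical com.example.parser package, the mirrored layout would look like this:

    src/  com/example/parser/Parser.java
    test/ com/example/parser/ParserTest.java

Keeping the packages identical means the tests can exercise package-private members of the classes they cover.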
Unless your team is already used to writing tests, testing all the old components is not really feasible (and sometimes not even then), as writing testable software requires a specific mentality.
I think the best time to start writing tests is when you can define some new component that is critical enough to warrant the extra effort, yet new enough that you aren't forced to test much of the existing code base along with it.
This way you can find a testing approach, identify the benefits, and learn the mentality without putting too much effort into something that might not work for your team.
About tooling: I sadly cannot really compare the different tools, as I only have experience with JUnit.
JUnit is easy to start with, as the corresponding tooling is already included in Eclipse (wizards to create templates, running and evaluation options...), and plenty of documentation and examples are available on the net.
A lot of good questions.
If your project has no tests at all and is already under way, you should introduce tests incrementally. When you have to develop a new feature, write tests for that feature, for the classes you are going to change, and for any functionality that could break. The same goes for fixing bugs: first write a test that reproduces the bug, then fix it.
I have used both JUnit and TestNG. They are almost the same. TestNG has groups, i.e. you can mark a test as belonging to a group like development, build, integration, etc. This is mostly relevant if you have a lot of tests and running them all takes significant time.
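For example, group membership in TestNG is just an annotation attribute (the group names and class here are made up):

    import org.testng.annotations.Test;

    public class PaymentServiceTest {

        @Test(groups = "fast")
        public void validatesAmount() {
            // quick, in-memory check - cheap enough to run in every build
        }

        @Test(groups = {"integration", "slow"})
        public void chargesGateway() {
            // talks to an external system - only run when its group is selected
        }
    }

You then choose which groups to run via testng.xml or the command line, so the slow integration tests don't hold up every build.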
Do you already have an automated build? If not, start with that. If you prefer to use Maven, it is relatively simple. When your build is ready, write a couple of unit tests (just to have something that can fail...).
Then install Hudson/Jenkins and define your project there. People will see how cool it is that once you commit new code, the build runs almost immediately and any failed tests are visible. Try to show the strength of TDD to your boss and explain to him why he should get all team members writing tests.
If you have enough energy, introduce Sonar to your company. People will see how awful the code they are writing is and how poor the test coverage is. Then they will see how quickly the test coverage grows and will probably invest more in unit testing.
In short, good luck. You are on the right track.
JUnit and TestNG are both fine. TestNG has more capabilities and can be helpful with integration tests; JUnit is more focused on unit tests.
You may want to get acquainted with some kind of mocking library like Mockito. Code coverage tools like Cobertura and continuous integration tools like Jenkins are great too.
Using a DI framework like Spring or Guice is helpful for writing more easily testable code. Whether you use a DI framework or not, the more loosely coupled your code is, the easier it is to test. Since your project is already under way, it is probably too late for this part, which will make your task harder.
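To illustrate both points at once, here is a sketch (all types are made-up stand-ins for your own code) of a dependency injected through the constructor and stubbed with Mockito, so the service can be tested without any real infrastructure:

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;
    import org.junit.Test;

    public class UserServiceTest {

        // hypothetical stand-ins for real application code
        interface UserRepository { String findEmail(String user); }

        static class UserService {
            private final UserRepository repo;
            UserService(UserRepository repo) { this.repo = repo; }
            String emailFor(String user) { return repo.findEmail(user); }
        }

        @Test
        public void looksUpEmailThroughRepository() {
            // stub the dependency instead of hitting a real data store
            UserRepository repo = mock(UserRepository.class);
            when(repo.findEmail("alice")).thenReturn("alice@example.com");

            assertEquals("alice@example.com", new UserService(repo).emailFor("alice"));
            verify(repo).findEmail("alice");
        }
    }

Because UserService receives its repository rather than constructing one internally, the test can hand it a mock; tightly coupled code that calls a concrete constructor inside the method offers no such seam.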
It can be very hard to get co-workers to cooperate with testing. You may want to introduce it selectively on pieces where it can make the most difference. Writing tests for already-finished functionality is usually painful and a waste of time. Tests should be small, have few dependencies, be understandable, and should run quickly. The more painful it is to write tests, run the tests, and fix broken tests, the more resistance you will get.