We have a huge Java-based application that has been around for a few years. We also have a large set of black-box test cases that the QA team uses for regression testing.
There is an initiative in our project to improve the quality of the application, and as part of it we have to measure how much of the code is covered by these black-box test cases.
I know that we can get code coverage reports from tools like EMMA, CodeCover, and Cobertura; these tools work with white-box unit tests (i.e. JUnit test cases).
I want to know whether any of these tools can be used to generate similar code coverage reports when the black-box test cases are executed against the application.
I have done some Google searching on this and found that the application code can be "instrumented", after which it is possible to generate code coverage reports.
What I am now trying to do is:
1. Instrument the code in Eclipse using the "CodePro" Eclipse plugin.
2. Once the code is instrumented, generate a JAR file of the instrumented code and deploy it to the test environment (a Unix box).
Now the question is: am I going in the right direction?
And how and where will the code coverage reports be generated when black-box testing is done against the instrumented code on the server (not on a local machine)?
Take a look at JaCoCo:
http://www.eclemma.org/jacoco/trunk/doc/mission.html
It uses a Java agent and can instrument your code at runtime.
You can use JaCoCo for this: set the JVM under test to run with the agent's tcpserver output option, run the tests, and then connect to it using the tcpclient option to collect the execution data. If you want to collect coverage separately for n runs, you can also connect to it over JMX and call reset between runs.
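A minimal sketch of that setup (the paths, host name, and port are placeholders; jacocoagent.jar and jacococli.jar ship with recent JaCoCo distributions):

```sh
# Start the JVM under test with the JaCoCo agent in tcpserver mode
java -javaagent:/path/to/jacocoagent.jar=output=tcpserver,address=*,port=6300 \
     -jar your-application.jar

# After the black-box tests have run, dump the execution data over TCP
java -jar /path/to/jacococli.jar dump --address testhost --port 6300 \
     --destfile jacoco.exec

# Generate an HTML report from the dump (needs the original class files)
java -jar /path/to/jacococli.jar report jacoco.exec \
     --classfiles /path/to/classes --sourcefiles /path/to/src \
     --html coverage-report
```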
So this is my situation:
I am fairly new to GitLab CI. I don't host my own GitLab instance; rather, I push everything to GitLab itself. I am not using, and am not familiar with, any build tools like Maven. I usually write and run my programs from an IDE rather than the terminal.
This is my problem:
When I push my Java project, I want my pipeline to run the JUnit tests I wrote. While I've found various simple commands for running unit tests in languages other than Java, I haven't come across anything for JUnit. I've only found people using Maven, or running the tests locally and then pushing the test reports to GitLab. Is it even possible to easily run JUnit tests on the GitLab server with the pipeline, without a build tool like Maven? Do I have to run them locally? Do I have to learn to start them with a Java terminal command? I've been searching for days now.
The documentation is clear:
To enable the Unit test reports in merge requests, you need to add artifacts:reports:junit in .gitlab-ci.yml, and specify the path(s) of the generated test reports.
The reports must be .xml files, otherwise GitLab returns an Error 500.
You then have various examples in Ruby, Go, Java (Gradle or Maven), and other languages.
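For instance, a minimal Gradle-based job along the lines of the documented examples might look like this (the image tag and report path are illustrative and depend on your project layout):

```yaml
test:
  image: gradle:jdk11
  script:
    - gradle test
  artifacts:
    when: always
    reports:
      junit: build/test-results/test/**/TEST-*.xml
```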
But with GitLab 13.12 (May 2021), this gets better:
Failed test screenshots in test report
GitLab makes it easy for teams to set up end-to-end testing with automation tools like Selenium that capture screenshots of failed tests as artifacts.
This is great until you have to sort through a huge archive of screenshots looking for the specific one you need to debug a failing test.
Eventually, you may give up due to frustration and just re-run the test locally to try and figure out the source of the issue instead of wasting more time.
Now, you can link directly to the captured screenshot from the details screen in the Unit Test report on the pipeline page.
This lets you quickly review the captured screenshot alongside the stack trace to identify what failed as fast as possible.
See Documentation and Issue.
Our team is starting a JUnit 5 project with Karate tests.
Currently we are using this as a template for our Karate test runner: https://github.com/intuit/karate#junit-5-parallel-execution.
It allows us to pass in the "target/surefire-reports" path, and before the test finishes we call ReportBuilder.generateReports(). It is basically identical to this code: https://github.com/intuit/karate/blob/b50202b3c8a8916a7db0f3d5196d42086ab80a04/karate-junit4/src/test/java/com/intuit/karate/mock/MockServerTest.java.
This works well, but while I was looking at how to set up JUnit 5, I noticed this very slick fluent API: https://github.com/intuit/karate#junit-5.
It would be nice to use that syntax, but I can't get the Cucumber report generated the way I can with Runner.parallel. I made sure the maven-surefire-plugin was in build.gradle (although I could have messed that up), but it didn't seem to help.
I also tried calling ReportBuilder.generateReports() and the related logic from the parallel-execution example in the @AfterAll function, but couldn't get that working either. The errors suggested that the target/surefire-reports folder didn't exist.
Is the Cucumber report supported in the second example? If so, is there a trick to getting it set up?
Great question. The reason we de-couple the JUnit execution from the parallel runner is that JUnit is more useful in development mode, where you expect detailed pass/fail stats in the IDE, for example. That would be unnecessary overhead in "CI mode".
That said, we have put in some work to make the parallel runner a fluent interface, so great timing :) You can find an example on line 57 here.
May I request you to try the develop branch and see if you are missing anything? Building is easy; here are some instructions: https://github.com/intuit/karate/wiki/Developer-Guide
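For reference, here is a sketch of the fluent parallel-runner usage (based on the 0.9.x-era API; class and method names may differ slightly on the develop branch):

```java
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class ExamplesTest {

    @Test
    void testParallel() {
        // Select features from the classpath, skip @ignore-tagged ones,
        // and run them on 5 threads; the Cucumber/HTML reports are written
        // to the configured report directory.
        Results results = Runner.path("classpath:examples")
                                .tags("~@ignore")
                                .parallel(5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}
```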
I am very new to this arena.
I am trying to use JaCoCo to get the test coverage of my automation test cases, which are written in a completely different repo from the application they test.
I want to know, first of all, whether this is possible. And if it is, how do I achieve it?
There is a separate repo used by the developers for the application source code.
How is it possible to get the test coverage when the source code and the tests are in different repos?
The developers already get coverage for their unit tests.
How can testers get the coverage for their integration tests?
Are you using a CI/CD tool like Jenkins? If so, you can schedule different builds for different branches, provided you have admin access to the tool.
Edited after seeing John's request.
Usually, companies have a DevOps admin and other project stakeholders who monitor what is happening in each branch. There will be a branching strategy for each product team. You need to periodically merge contents from the developer branch into the test branch so that the JaCoCo test coverage reports don't look confusing to your dev team members. When, how, and what is merged is decided by the stakeholders, and it depends on a lot of factors, starting right from the software development process.
If you are following the Scrum methodology, at the end of each sprint the developers give a demo of the testable new features or enhancements, and the testing team creates test cases based on what is delivered. All this happens in the sprint review/retrospective/demo meetings.
If you need more information on Jenkins and configuring multiple jobs in it, you should look at the separate Stack Exchange site dedicated to DevOps. I believe this should be a good place to start for you.
One of the problems I face as a team lead is that people on the team (sometimes even including myself) often create JUnit tests without any testing functionality.
It's easily done, since the developers use their JUnit test as a harness to launch the part of the application they are coding, and then either deliberately or forgetfully check it in without any assertions or mock verifications.
Later it gets forgotten that the tests are incomplete, yet they pass and produce great code coverage. Running up the application and feeding data through it creates high coverage stats in Cobertura or JaCoCo, and yet nothing is tested except its ability to run without blowing up - and I've even seen that worked around with big try-catch blocks in the test.
Is there a reporting tool out there which will test the tests, so that I don't need to review the test code so often?
I was temporarily excited to find Jester, which tests the tests by changing the code under test (e.g. an if clause) and re-running them to see if the change breaks the tests.
However, this isn't something you could set up to run on a CI server: it requires command-line setup, can't run without showing its GUI, only prints results to the GUI, and takes ages to run.
PIT is the standard Java mutation tester. From their site:
Mutation testing is conceptually quite simple.
Faults (or mutations) are automatically seeded into your code, then your tests are run. If your tests fail then the mutation is killed, if your tests pass then the mutation lived.
...
Traditional test coverage (i.e. line, statement, branch, etc.) measures only which code is executed by your tests. It does not check that your tests are actually able to detect faults in the executed code. It is therefore only able to identify code that is definitely not tested.
The most extreme example of the problem are tests with no assertions. Fortunately these are uncommon in most code bases. Much more common is code that is only partially tested by its suite. A suite that only partially tests code can still execute all its branches (examples).
As it is actually able to detect whether each statement is meaningfully tested, mutation testing is the gold standard against which all other types of coverage are measured.
The quality of your tests can be gauged from the percentage of mutations killed.
It has a corresponding Maven plugin that makes it simple to integrate as part of a CI build. I believe the next version will also include proper integration with Maven site reports.
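For example, assuming the standard plugin coordinates, a one-off run from the command line looks like this (the HTML report lands under target/pit-reports by default):

```sh
mvn org.pitest:pitest-maven:mutationCoverage
```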
Additionally, the creator/maintainer is pretty active here on StackOverflow, and is good about responding to tagged questions.
As far as possible, write each test before implementing the feature or fixing the bug the test is supposed to deal with. The sequence for a feature or bug fix becomes:
1. Write a test.
2. Run it. At this point it will fail if it is a good test. If it does not fail, change, replace, or add to it.
3. When you have a failing test, implement the feature it is supposed to test. Now it should pass.
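A toy illustration of that cycle in JUnit 5 (all names are made up):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    // Steps 1-2: written before PriceCalculator.applyDiscount exists,
    // so at first this does not compile (or fails once stubbed out).
    @Test
    void appliesTenPercentDiscount() {
        assertEquals(90.0, PriceCalculator.applyDiscount(100.0, 0.10), 0.001);
    }
}

// Step 3: the minimal implementation that makes the test pass.
class PriceCalculator {
    static double applyDiscount(double price, double rate) {
        return price * (1 - rate);
    }
}
```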
You have various options:
You could probably use a static-analysis tool like Checkstyle to verify that each test has an assertion, or alternatively use a JUnit Rule to verify this, but both are easily tricked and work only on a superficial level.
Mutation testing, as Jester does it, is again a technical solution that would work, and it seems @Tom_G has a tool that might. But these tools are (in my experience) extremely slow, because they work by changing the code, running the tests, and analyzing the results, over and over again. So even tiny code bases take lots of time, and I wouldn't even think about using it in a real project.
Code Reviews: such bad tests are easily caught by code reviews, and they should be part of every development process anyway.
All this still only scratches the surface. The big question you should ponder is: why do developers feel tempted to create code just to start a certain part of the application? Why don't they write tests for what they want to implement, so there is almost no need to start parts of the application? Get some training in automated unit testing, and especially TDD/BDD, i.e. a process where you write the tests first.
In my experience it is very likely that you will hear things like "We can't test this because ...". You need to find the real reason why the developers can't or don't want to write these tests, which might or might not be the reasons they state. Then fix those reasons, and those abominations of tests will go away all on their own.
What you are looking for is indeed mutation testing.
Regarding tool support, you might also want to look at the Major mutation framework (mutation-testing.org), which is quite efficient and configurable. Major uses a compiler-integrated mutator and gives you great control over what should be mutated and tested. As far as I know, Major does not yet produce graphical reports, but rather data (CSV) files that you can process or visualize in any way you want.
Sounds like you need to consider a coverage tool like JaCoCo; its Gradle plugin provides a report on coverage. I also use the EclEmma Eclipse plugin to obtain the same results, but with a fairly nice integration in the IDE.
In my experience, JaCoCo has provided acceptable numbers even when there are no-op unit tests, as it seems able to accurately determine the tested code paths. No-op tests get low or 0% coverage scores, and the scores increase as the tests become more complete.
Update
To address the downvote: perhaps a more appropriate tool for this is PMD. It can be used in an IDE or a build system, and with proper configuration and rule development it could be used to find these incomplete unit tests. I have used it in the past to find methods missing certain security-related annotations.
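As a sketch: PMD ships a rule that flags JUnit tests without assertions (historically named JUnitTestsShouldIncludeAssert; the rule name and CLI syntax vary across PMD versions), which could be run against the test sources like this:

```sh
# PMD 7-style invocation; adjust the rule reference for your PMD version
pmd check -d src/test/java \
    -R category/java/bestpractices.xml/JUnitTestsShouldIncludeAssert \
    -f text
```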
I've got a Java application that reads settings from properties files and a database, reads input files from one directory, and creates output files in another directory. It also makes modifications to the database.
I need to move the testing of this software from manual to automatic. Currently the user copies some files into the input directory, executes the program, and inspects the files in the output directory. I'd like to automate this down to just running the tests and inspecting a test result file. The test platform would have expected result file(s) for each input file. The test results should be readable by people that are not programmers :)
I don't want to do this in a JUnit test in the build phase, because the tests have to be executed against both the development and test environments. Are there any tools/platforms that could help me with this, or should I build this kind of thing from scratch?
I'd recommend using the TestNG testing framework.
It is a functional testing framework that provides functionality similar to JUnit, but it has a number of features specific to functional testing, like test dependencies, groups, etc.
The test results should be readable by people that are not programmers :)
You can implement your own test listener and use it to build a custom test report.
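A minimal sketch of such a listener (assuming TestNG 7+, where ITestListener has default methods; the report format and output location are up to you):

```java
import org.testng.ITestContext;
import org.testng.ITestListener;
import org.testng.ITestResult;

// Collects results into a plain-text summary that non-programmers can read.
public class PlainTextReportListener implements ITestListener {

    private final StringBuilder report = new StringBuilder();

    @Override
    public void onTestSuccess(ITestResult result) {
        report.append("PASSED: ").append(result.getName()).append('\n');
    }

    @Override
    public void onTestFailure(ITestResult result) {
        report.append("FAILED: ").append(result.getName())
              .append(" - ").append(result.getThrowable()).append('\n');
    }

    @Override
    public void onFinish(ITestContext context) {
        // Write the summary wherever testers expect it, e.g. a file
        // next to the output directory; printed here for simplicity.
        System.out.println(report);
    }
}
```

The listener can then be registered in testng.xml via the <listeners> element, or with the @Listeners annotation on a test class.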