I have two separate Java Maven projects: one is my web app itself, and the other contains the Tellurium+Selenium automation tests for that web app. (I moved the tests to a separate project because their code doesn't really belong to the web app project, doesn't use any of the web app's Java classes, and I want to reuse parts of the tests for my other web apps.) As a result, the project where my tests reside knows nothing about my web app except the Tellurium/Selenium configuration files (host name, credentials, browser).
So the question: is there any way to measure code coverage of my web app's backend as it is invoked by the Tellurium/Selenium tests that live in the separate project?
Thanks in advance. Any help is highly appreciated.
EMMA or Cobertura can instrument your classes so that, after the test run, they produce a coverage report.
http://emma.sourceforge.net/reference/ch02s03.html
<instr>/instr is EMMA's offline class instrumentor. It adds bytecode
instrumentation to all classes found in an instrumentation path that
also pass through user-provided coverage filters. Additionally, it
produces the class metadata file necessary for associating runtime
coverage data with the original class definitions during coverage
report generation.
I am very new to this arena.
Using JaCoCo, I am trying to get test coverage for my automation test cases, which live in a completely different repo from the application under test.
I want to know whether this is possible in the first place, and if it is, how to achieve it.
The developers use a separate repo for the application source code.
How is it possible to get test coverage when the source code and the tests are in different repos?
The developers already get unit test coverage.
How can testers get coverage for their integration tests?
Are you using a CI/CD tool like Jenkins? If so, you can schedule separate builds for different branches, provided you have admin access to the tool.
Edited after seeing John's request.
Usually, companies have a DevOps admin and other project stakeholders who monitor what is happening in each branch, and there is a branching strategy for each product team. You need to periodically merge content from the developer branch into the test branch so that the JaCoCo coverage reports don't confuse your dev team members. When, how, and what is merged is decided by the stakeholders and depends on many factors, starting with the software development process itself.
If you are following the Scrum methodology, then at the end of each sprint the developers give a demo of testable new features or enhancements, and the testing team creates test cases based on what is delivered. All of this happens in the sprint review/retrospective/demo meetings.
If you need more information on Jenkins and on configuring multiple jobs in it, have a look at the separate Stack Exchange site dedicated to DevOps; I believe that should be a good place for you to start.
I have a Java Maven project packaged as a single jar file (a big fat jar, the whole project in one file). I want to do integration testing on it (the first option is black-box testing, but I will settle for white-box if that is an easier path).
What I am really interested in is the end-to-end result; however, some of the interfaces in the middle are APIs and sockets that are hardcoded inside the jar to communicate with a specific port or a specific website.
What I want to do is override the classes inside the jar file that are related to these interfaces and replace them with my own model implementations.
The whole new testing project will not go into production; it is purely for testing purposes.
I am using Maven.
Any ideas are more than welcome.
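One way this is sometimes done without modifying the jar itself is classpath shadowing: compile a replacement class with exactly the same fully qualified name as the hardcoded one and place it before the fat jar on the classpath, so the JVM loads your version first. A minimal sketch, assuming a hypothetical com.example.gateway.RemoteGateway class inside the jar (both the package and the method are made up for illustration):

    // Hypothetical stand-in for a class that lives inside the fat jar. The package
    // and class name must match the original exactly; both are assumptions here.
    package com.example.gateway;

    public class RemoteGateway {

        // Same public signature as the original method, but it answers locally
        // instead of talking to the hardcoded host/port.
        public String send(String request) {
            return "stubbed-response-for:" + request;
        }
    }

The tests are then started with something like java -cp test-stubs.jar:application-fat.jar com.example.Main (names made up), so the stub wins the classpath lookup. If that feels too brittle, repackaging the jar with the replacement classes, or pointing the hardcoded socket/HTTP endpoints at a local stub server such as WireMock, are other common approaches.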
My company is using FitPro + FitLibrary to test our applications. Our main test suites are .suite files, and they invoke FIT test pages, which bear the .fit extension (the content is HTML).
Our application fixtures are built on top of FitLibraryRunner.jar (a release issued on 01/28/2007), and we have a .bat/.sh script which launches our FIT test suites using com.luxoft.fitpro.runner.TestCaseRunner, which is part of the fitpro.jar library.
This setup is convenient for us because we provide the same application, customized, to many customers, and all you need to run the FIT tests is fitpro.jar on your classpath; you don't depend on anything else.
As FitPro seems to be no longer maintained, our alternative would be to switch to FitNesse.
Now, as far as I understand, FitNesse does not offer runners that allow executing FIT test suites outside of its wiki. Let me clarify that using a wiki is not really useful for us, because we package the same libraries at different stages of development and for many customers.
I would like to know whether any of you has ever succeeded in launching the FitNesse/SLIM engine outside of the wiki context. I am looking for a way to invoke a runner provided with FitNesse that reads a main test suite file (.suite) and produces an HTML- or XML-based report as output, just like we do with FitPro.
I am also told that I would not be able to reuse the .suite and .fit pages we have created with FitPro over the years.
Thanks in advance for any feedback!
J-C
If you don't want to use the FitNesse wiki, then FitNesse doesn't add any value for you. FitNesse renders HTML from its wiki pages and passes them to a test execution engine, like Slim, FitLibrary, fitSharp, etc. FitLibrary and fitSharp can also execute tests sourced from HTML files. I haven't used FitPro but it appears to feed FitLibrary with tests from its own .suite and .fit files. Your best bet may be to use FitLibrary's FolderRunner and organize your suites into folders of HTML files that FolderRunner can process.
I'm trying to figure out which tool to use to get code-coverage information for projects that are running in a kind of stabilization environment.
The projects are deployed as WARs and run on JBoss. I need server-side coverage while running manual/automated tests against the running server.
Let's assume I cannot change the projects' build, and therefore cannot add any kind of instrumentation to their jars as part of the build process. I also don't have access to the code.
I've done some reading on various tools, and they all present techniques involving instrumenting the jars at build time (by the way, doesn't that affect production, or are two kinds of output generated?).
One tool, though, JaCoCo, mentions an "on-the-fly instrumentation" feature. Can someone explain what that means? Can it help with my constraints?
I've also heard about code coverage using runtime profiling techniques; can someone help with that as well?
Thanks,
Ben
AFAIK "on-the-fly-instrumentation" means that the coveragetool hooks into the Classloading-Mechanism by using a special ClassLoader and edits the Class-Bytecode when it's being loaded.
The result should be the same as in "offline-instrumentation" with the JARs.
Have also a look at EMMA, which supports both mechanisms. There's also a Plugin for Eclipse.
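To make the idea concrete, here is a minimal sketch of the mechanism using the standard java.lang.instrument API (which is how JaCoCo's agent hooks class loading): a Java agent registers a transformer that sees every class as it is loaded and may return modified bytecode. The package filter is a placeholder, and the sketch only logs classes instead of inserting real coverage probes.

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;

    // Minimal sketch of "on-the-fly instrumentation": a Java agent registers a
    // ClassFileTransformer that is called for every class as it is loaded. A real
    // coverage tool would rewrite the bytecode here to insert coverage probes;
    // this sketch only logs application classes and leaves them unchanged.
    public class CoverageAgent {

        public static void premain(String agentArgs, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> classBeingRedefined,
                                        ProtectionDomain protectionDomain,
                                        byte[] classfileBuffer) {
                    // "com/mycompany/" is a placeholder filter for your own packages.
                    if (className != null && className.startsWith("com/mycompany/")) {
                        System.out.println("Would instrument: " + className);
                        // A coverage tool would return rewritten bytecode here.
                    }
                    return null; // null means "class not transformed"
                }
            });
        }
    }

Such an agent is packaged in a jar with a Premain-Class manifest entry and attached to the server's JVM with the -javaagent option. JaCoCo ships a ready-made agent that works this way, so in practice you add its -javaagent argument to the JBoss startup options instead of writing your own transformer.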
A possible solution to this problem without actual code instrumentation is to use a JVM C agent. It is possible to attach such agents to the JVM, and in an agent you can intercept every method call made in your Java code without changing the bytecode.
At every intercepted method call you then record information about the call, which can be evaluated later for code-coverage purposes.
Here you'll find the official guide to JVMTI, which defines how JVM agents can be written.
You don't need to change the build or even have access to the code to instrument the classes. Just instrument the classes found in the delivered jar, re-jar them, and redeploy the application with the instrumented jars.
Cobertura even has an Ant task that does this for you: it takes a war file, instruments the classes inside the jars inside the war, and rebuilds a new war file. See https://github.com/cobertura/cobertura/wiki/Ant-Task-Reference
To answer your question about instrumenting the jars on build: yes, of course, the instrumented classes are not used in production. They're only used for the tests.
I've got Java software that reads settings from properties files and a database, reads input files from one directory, and creates output files in another directory. It also makes modifications to the database.
I need to move the testing of this software from manual to automated. Currently the user copies some files to the input directory, executes the program, and inspects the files in the output directory. I'd like to automate this so that it comes down to just running the tests and inspecting a test result file. The test platform would have expected result file(s) for each input file. The test results should be readable by people who are not programmers :)
I don't want to do this in a JUnit test during the build phase, because the tests have to be executed against both development and test environments. Are there any tools/platforms that could help me with this, or should I build this kind of thing from scratch?
I'd recommend using the TestNG testing framework.
It is a functional testing framework that provides functionality similar to JUnit, but it has a number of features aimed at functional testing, such as test dependencies, groups, etc.
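For illustration, a hedged sketch of what such a functional test could look like for your file-in/file-out scenario; the directory layout and the runApplication() helper are assumptions made up for this example:

    import static org.testng.Assert.assertEquals;

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    // Sketch of a file-driven functional test: every file found in an input
    // directory is fed through the application, and the produced output is
    // compared with a stored expected file.
    public class FileBasedRegressionTest {

        @DataProvider(name = "inputFiles")
        public Object[][] inputFiles() throws Exception {
            try (Stream<Path> files = Files.list(Paths.get("testdata/input"))) {
                return files.map(p -> new Object[] { p }).toArray(Object[][]::new);
            }
        }

        @Test(dataProvider = "inputFiles", groups = "functional")
        public void outputMatchesExpected(Path input) throws Exception {
            Path actual = runApplication(input);
            Path expected = Paths.get("testdata/expected", input.getFileName().toString());
            assertEquals(Files.readString(actual), Files.readString(expected),
                    "Output differs for input file " + input.getFileName());
        }

        /** Placeholder for invoking the real program, e.g. via ProcessBuilder. */
        private Path runApplication(Path input) {
            // new ProcessBuilder("java", "-jar", "app.jar", input.toString()) ... (assumption)
            return Paths.get("target/output", input.getFileName().toString());
        }
    }

Because the test carries a group name, the functional group can be selected on its own and run against the development or test environment, outside the normal build.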
The test results should be readable by people who are not programmers :)
You can implement your own test listener and use it to build a custom test report.
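For example, a minimal listener sketch; the report file name and format are my assumptions, not TestNG defaults:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    import org.testng.ITestContext;
    import org.testng.ITestResult;
    import org.testng.TestListenerAdapter;

    // Custom TestNG listener that collects results into a plain-text summary
    // readable by non-programmers. Register it in testng.xml via <listeners>
    // or with the @Listeners annotation.
    public class PlainTextReportListener extends TestListenerAdapter {

        private final StringBuilder report = new StringBuilder();

        @Override
        public void onTestSuccess(ITestResult result) {
            report.append("PASSED: ").append(result.getName()).append('\n');
        }

        @Override
        public void onTestFailure(ITestResult result) {
            report.append("FAILED: ").append(result.getName())
                  .append(" - ").append(result.getThrowable()).append('\n');
        }

        @Override
        public void onFinish(ITestContext context) {
            try {
                Files.writeString(Paths.get("test-report.txt"), report.toString());
            } catch (IOException e) {
                throw new RuntimeException("Could not write test report", e);
            }
        }
    }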