I'm trying to run a Windows service with Cobertura. The only problem is that Cobertura only reports results when its shutdown hook is executed. I am unable to directly modify the code to force those results, so I was wondering whether it is possible to run a Java application as a Windows service and still gather Cobertura results. I instrument the code and add it to the classpath, but when reporting, I get nothing. A trace file shows that no Cobertura information is ever loaded or saved, which leads me to believe the shutdown hooks never get executed; otherwise I would get results.
Thanks for the assistance!
You might want to look at EMMA instead; it allows you to instrument in advance.
http://emma.sourceforge.net/
When using Cobertura, do you get a .ser file at all?
I assume that you can't tweak the code to force an export, as shown at the bottom of this FAQ?
http://cobertura.sourceforge.net/faq.html
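If you can get any hook into the application at all (a servlet, a timer, an admin command), the FAQ's workaround boils down to calling Cobertura's ProjectData.saveGlobalProjectData() yourself before the service is killed. A minimal sketch of that idea, assuming the instrumented classes and cobertura.jar are on the service's classpath (the reflection is only there so the call degrades gracefully in a non-instrumented build):

    // Force Cobertura to flush its coverage data without relying on the shutdown hook.
    // Uses reflection so the class still loads when Cobertura is not on the classpath.
    try {
        Class<?> projectData = Class.forName("net.sourceforge.cobertura.coveragedata.ProjectData");
        projectData.getMethod("saveGlobalProjectData").invoke(null);
    } catch (Throwable t) {
        // Not an instrumented build, or cobertura.jar is missing - nothing to flush.
        t.printStackTrace();
    }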
If I run unit tests for Quarkus from inside Eclipse, the Eclipse console view shows Quarkus' log output. However, when I run the same tests in Maven, the Quarkus output is completely swallowed and does not appear anywhere. If there is a test failure due to an exception in the application code, I get the test failure message, but I cannot see what actually went wrong inside the application. The Java log manager is configured to use JBoss logging via the Surefire system property.
Does anyone know where one can find the Quarkus log output or how it can be enabled?
You haven't posted any code or configuration files, so there is no way to know what may be causing your problem.
You may want to check a few things:
In the application configuration file, make sure quarkus.log.console.enable is set to true (if you can't find it, don't worry: it defaults to true).
Is quarkus.log.file.enable set to true? (By default it is false.)
While I can't tell you exactly what the problem is, since there is no code to review, I will point you to the official Quarkus logging guide. What may interest you is under Runtime configuration and the Logging configuration reference.
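For reference, these settings live in application.properties; the property names below come from the Quarkus logging configuration reference (the log file path is just an example):

    # Console logging is on by default; setting it explicitly does no harm.
    quarkus.log.console.enable=true
    # File logging is off by default; enabling it gives you a log file to inspect after the Maven run.
    quarkus.log.file.enable=true
    quarkus.log.file.path=target/quarkus.log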
So this is my situation:
I am fairly new to GitLab CI. I don't host my own GitLab instance but rather push everything to GitLab itself. I am not using, and am not familiar with, any build tools like Maven. I usually work in and run my programs from an IDE rather than the terminal.
This is my problem:
When I push my Java project, I want my pipeline to start the JUnit tests I wrote. While I've found various simple commands to run unit tests for languages other than Java, I didn't come across anything for JUnit. I've just found people using Maven, running the tests locally and then pushing the test reports to GitLab. Is it even possible to easily run JUnit tests on the GitLab server with the pipeline, without build tools like Maven? Do I have to run them locally? Do I have to learn to start them with a Java terminal command? I've been searching for days now.
The documentation is clear:
To enable the Unit test reports in merge requests, you need to add artifacts:reports:junit in .gitlab-ci.yml, and specify the path(s) of the generated test reports.
The reports must be .xml files, otherwise GitLab returns an Error 500.
You then have various examples in Ruby, Go, Java (Gradle or Maven), and other languages.
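For a Maven-based Java project, the documented job looks roughly like this (the image, command, and report path are the usual Maven defaults and may need adjusting for your project):

    java-tests:
      image: maven:3-openjdk-11
      script:
        - mvn verify
      artifacts:
        when: always
        reports:
          junit:
            - target/surefire-reports/TEST-*.xml

The runner executes the tests itself, so nothing has to be run locally; GitLab then picks up the JUnit XML for the merge request report.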
But with GitLab 13.12 (May 2021), this gets better:
Failed test screenshots in test report
GitLab makes it easy for teams to set up end-to-end testing with automation tools like Selenium that capture screenshots of failed tests as artifacts.
This is great until you have to sort through a huge archive of screenshots looking for the specific one you need to debug a failing test.
Eventually, you may give up due to frustration and just re-run the test locally to try and figure out the source of the issue instead of wasting more time.
Now, you can link directly to the captured screenshot from the details screen in the Unit Test report on the pipeline page.
This lets you quickly review the captured screenshot alongside the stack trace to identify what failed as fast as possible.
See Documentation and Issue.
I'm currently dealing with the usual flakiness of the latest version of Chrome and ChromeDriver with Selenium. I'm running my tests using Grid 2 and a couple of Windows 7 machines. I'll get the occasional inevitable dead browser tab being reported by ChromeDriver. Since these tests didn't really fail as far as the web app functionality, I'd like to mark them as skipped to keep reporting a bit more useful for my current purpose. I've tried getting them to re-run, but TestNG's support for this is experimental and currently broken.
Is there a way I can set these tests to a SKIPPED status before they're logged in my Gradle report? (I'm using Gradle for reporting instead of ReportNG since ReportNG doesn't work properly with parallel testing).
I'm thinking I need to add another listener and somehow pick up the reported stack trace, check for a particular string, and then set the test to SKIPPED. Is this the correct approach?
Any tips on how to accomplish this would be great. I'm not able to find a way to capture the stack trace with my listener yet, and most importantly, set the test to a SKIPPED state (once the trace is parsed). I am using Java to drive these tests.
Any ideas / help would be much appreciated!
Cheers,
Darwin
You may try implementing the ITestListener interface: in onTestFailure(), check the stack trace and, if it matches, mark the result as skipped (for example with ITestResult.setStatus(ITestResult.SKIP)).
Don't forget to register the implementing class as a listener, as described in '5.17 - TestNG Listeners' of the TestNG documentation.
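A minimal sketch of that listener (the marker string and class name are placeholders; with recent TestNG versions the other ITestListener methods have default implementations, so only onTestFailure() needs to be overridden):

    import org.testng.ITestListener;
    import org.testng.ITestResult;

    public class DeadBrowserTabListener implements ITestListener {

        // Placeholder: the fragment of the ChromeDriver error you want to treat as "not a real failure".
        private static final String MARKER = "chrome not reachable";

        @Override
        public void onTestFailure(ITestResult result) {
            Throwable cause = result.getThrowable();
            if (cause != null && String.valueOf(cause.getMessage()).contains(MARKER)) {
                // Downgrade the failure so it shows up as SKIPPED in the report.
                result.setStatus(ITestResult.SKIP);
            }
        }
    }

Register it with @Listeners(DeadBrowserTabListener.class) on the test classes or with a <listeners> entry in testng.xml.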
I'm trying to figure out which tool to use for getting code-coverage information for projects that are running in a kind of stabilization environment.
The projects are deployed as WARs and run on JBoss. I need server-side coverage while manual / automated tests interact with the running server.
Let's assume I cannot change the projects' build and therefore cannot add any kind of instrumentation to their jars as part of the build process. I also don't have access to the code.
I've done some reading on various tools, and they all present techniques involving instrumenting the jars at build time (BTW - doesn't that affect production, or are two kinds of outputs generated?).
One tool though, JaCoCo, mentions an "on-the-fly instrumentation" feature. Can someone explain what that means? Can this help with my limitations?
I've also heard of code coverage using runtime profiling techniques - can someone help on that issue?
Thanks,
Ben
AFAIK "on-the-fly-instrumentation" means that the coveragetool hooks into the Classloading-Mechanism by using a special ClassLoader and edits the Class-Bytecode when it's being loaded.
The result should be the same as in "offline-instrumentation" with the JARs.
Have also a look at EMMA, which supports both mechanisms. There's also a Plugin for Eclipse.
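If you go the JaCoCo route, its on-the-fly mode is just a Java agent added to the server's JVM options, so neither the build nor the WARs have to change. Roughly (paths are placeholders; destfile and append are documented agent options):

    JAVA_OPTS="$JAVA_OPTS -javaagent:/path/to/jacocoagent.jar=destfile=/path/to/jacoco.exec,append=true"

The coverage data is written when the JVM stops (it can also be dumped from a running JVM via the agent's TCP server output mode), and a report is generated from the .exec file afterwards.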
A possible solution to this problem without actual code instrumentation is to use a JVM C agent. It is possible to attach such agents to the JVM, and in the agent you can intercept every method call made in your Java code without changing the bytecode.
At every intercepted method call you then record information about the call, which can be evaluated later for code-coverage purposes.
Here you'll find the official guide to JVMTI, which defines how JVM agents can be written.
You don't need to change the build or even have access to the code to instrument the classes. Just instrument the classes found in the delivered jar, re-jar them and redeploy the application with the instrumented jars.
Cobertura even has an Ant task that does that for you: it takes a war file, instruments the classes inside the jars inside the war, and rebuilds a new war file. See https://github.com/cobertura/cobertura/wiki/Ant-Task-Reference
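A rough sketch of that Ant usage (directory and archive names are placeholders; the exact nested elements for filtering classes inside the WAR are in the linked reference):

    <taskdef classpathref="cobertura.classpath" resource="tasks.properties"/>

    <cobertura-instrument todir="instrumented" datafile="cobertura.ser">
        <!-- The instrumented copy of the archive is written to todir. -->
        <fileset dir="deploy">
            <include name="myapp.war"/>
        </fileset>
    </cobertura-instrument>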
To answer your question about instrumenting the jars on build: yes, of course, the instrumented classes are not used in production. They're only used for the tests.
Suppose that I have a Java program within an IDE (Eclipse in this case).
Suppose now that I execute the program and at some point terminate it or it ends naturally.
Is there a convenient way to determine which lines were executed at least once and which were not (e.g., exception handling or conditions that were never reached)?
A manual way to collect this information would be to constantly step through with the debugger and maintain a set of the lines we have passed at least once. However, is there a tool or profiler that already does that?
Edit: Just for clarification: I need to be able to access this information programmatically and not necessarily from a JUnit test.
EclEmma would be a good start: a code coverage tool would allow a coverage session to record the information you are looking for.
(source: eclemma.org)
What you're asking about is called "coverage". There are several tools that measure that, some of which integrate into Eclipse. I've used jcoverage and it works (I believe it has a free trial period, after which you'd have to buy it). I've not used it, but you might also try Coverlipse.
If I understand the question correctly you want more than the standard stacktrace data but you don't want to manually instrument your code with, say, log4j debug statements.
The only thing I can think of is to add some sort of bytecode tracing. Refer to Instrumenting Java bytecode. The article references Cobertura, which I haven't used but which sounds like what you need...
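If you do end up with Cobertura, the data file it writes (cobertura.ser by default) can also be read back from your own code, which covers the "programmatically" part of the question. A rough sketch, assuming cobertura.jar is on the classpath (class and method names are taken from Cobertura's coveragedata package; the file path is a placeholder):

    import java.io.File;

    import net.sourceforge.cobertura.coveragedata.CoverageDataFileHandler;
    import net.sourceforge.cobertura.coveragedata.ProjectData;

    public class CoverageSummary {
        public static void main(String[] args) {
            // Load the data file written by the instrumented run.
            ProjectData data = CoverageDataFileHandler.loadCoverageData(new File("cobertura.ser"));
            System.out.println("Lines covered at least once: "
                    + data.getNumberOfCoveredLines() + " of " + data.getNumberOfValidLines());
        }
    }

Per-class and per-line detail is reachable from the same ProjectData object, though for most purposes the HTML report Cobertura generates is easier to work with.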