We have an application which is deployed on JBoss 5.1, JDK 1.6.
We also have scripts written in PowerShell for testing. These scripts access the application using a web-service.
I would like to check the code coverage achieved by these scripts. Any ideas? Most of the tools I have seen measure JUnit test coverage, and I don't see how we can use them here.
AFAIK, all code coverage tools use the same concept (I'll omit the reporting and checking part):
First instrument the code (i.e. place markers).
Then run tests to execute the instrumented code (to activate markers and collect data).
For the second step, the common use case is indeed to run JUnit tests, but your tests don't have to be JUnit tests. Actually, they don't even have to be automated.
And the instrumented code doesn't have to be executed in the context of a unit test; it can be packaged in a WAR/EAR and deployed on a container (this just requires a bit more work).
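To make the "markers" idea concrete, here is a sketch of what instrumentation conceptually does (the CoverageRuntime helper and the OrderService class are made up purely for illustration; real tools inject equivalent markers directly into the bytecode, not the source):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical stand-in for a coverage tool's runtime: counts line hits in memory.
class CoverageRuntime {
    static final ConcurrentHashMap<String, AtomicLong> HITS =
            new ConcurrentHashMap<String, AtomicLong>();

    static void hit(String location) {
        AtomicLong counter = HITS.get(location);
        if (counter == null) {
            AtomicLong fresh = new AtomicLong();
            counter = HITS.putIfAbsent(location, fresh);
            if (counter == null) {
                counter = fresh;
            }
        }
        counter.incrementAndGet();
    }
}

// What a class conceptually looks like after instrumentation: every executable
// line is preceded by a marker that records the hit.
public class OrderService {
    public void placeOrder(String orderId) {
        CoverageRuntime.hit("OrderService.java:12"); // marker placed by the tool
        System.out.println("placing order " + orderId);
        CoverageRuntime.hit("OrderService.java:13"); // next line's marker
        System.out.println("order " + orderId + " persisted");
    }
}
```

Whatever runs the instrumented code (JUnit, PowerShell hitting a web service, or a human clicking around) activates those markers; the report step then compares the recorded hits against all markers placed.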
For Cobertura, this is what we can read in the Frequently Asked Questions:
Using Cobertura with a Web Application

I have automated tests that use HttpUnit/HtmlUnit/Empirix/Rational Robot, can I use Cobertura?

Yes! The process is a bit more involved, but the concept is the same. First instrument your compiled classes. Then create your war file. Then deploy the war file into your application server (Tomcat, JBoss, WebLogic, WebSphere, etc). Now run your tests.

As your classes are accessed, they will create a "cobertura.ser" file on the disk. You may need to dig around a bit to find it. Cobertura puts this file in what it considers to be the current working directory. Typically this is the directory that the application server was started from (for example, C:\Tomcat\bin). Note: This file is not written to the disk until the application server exits. See below for how to work around this.

Now that you know where the cobertura.ser file is, you should modify your deploy step so that it moves the original cobertura.ser to the appropriate directory in your application server, and then moves it back when finished testing. Then run cobertura-report.

[...]
For Emma, this is what the documentation says:
3.11. How do I use EMMA in {WebLogic, Websphere, Tomcat, JBoss, ...}?
First of all, there is little chance that you will be able to use the on-the-fly mode (emmarun) with a full-blown J2EE container. The reason lies in the fact that many J2EE features require specialized classloading that will happen outside of EMMA instrumenting classloader. The server might run fine, but you will likely get no coverage data.
Thus, the correct procedure is to instrument your classes prior to deployment (offline mode). Offline instrumentation always follows the same compile/instrument/package/deploy/get coverage/generate reports sequence. Follow these steps:
use EMMA's instr tool to instrument the desired classes. This can be done as a post-compilation step, before packaging. However, many users also find it convenient to let EMMA process their jars directly (either in-place, using overwrite mode, or by creating separate instrumented copies of everything, in fullcopy mode);
do your J2EE packaging as normal, but do not include emma.jar as a lib at this level, that is, within your .war, .ear, etc;
locate whichever JRE is used by the container and copy emma.jar into its /lib/ext directory. If that is impossible, add emma.jar to the server classpath (in a server-specific way);
deploy your instrumented classes, .jars, .wars, .ears, etc and exercise/test your J2EE application via your client-side testcases or interactively or whichever way you do it;
to get a coverage dump file, you have three options described in What options exist to control when EMMA dumps runtime coverage data?. It is highly recommended that you use coverage.get control command with the ctl tool available in v2.1.
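If the ctl tool is not available in your EMMA version, a commonly mentioned alternative is to trigger the dump from inside the deployed application itself, e.g. from a test-only endpoint that your PowerShell scripts hit after the test run. A minimal sketch, assuming EMMA's runtime class com.vladium.emma.rt.RT exposes dumpCoverageData as in EMMA 2.x (treat this as an assumption and verify the exact signature against your emma.jar before relying on it):

```java
import java.io.File;

// Hypothetical hook to force EMMA to flush coverage data without stopping JBoss.
// Call it (e.g. via a test-only endpoint) once the PowerShell scripts are done.
public class CoverageDumper {
    public static void dump() {
        // merge=true appends to an existing file; stopDataCollection=false keeps
        // collecting. Signature assumed from EMMA 2.x docs; check your version.
        com.vladium.emma.rt.RT.dumpCoverageData(new File("coverage.ec"), true, false);
    }
}
```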
For Clover, check the Working with Distributed Applications page.
I use the EMMA coverage tool integrated into the unit-testing phase of my project build; however, the tool's documentation says it is fairly simple to get code coverage in the situation you described.
I suggest JaCoCo, as it does not require source code modifications.
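In practice you would start JBoss with the JaCoCo agent on the JVM command line (something along the lines of -javaagent:jacocoagent.jar=destfile=jacoco.exec), run your PowerShell scripts, and then turn the resulting dump into a report. A hedged sketch of that last step using the JaCoCo core API (the class names are JaCoCo's published API in recent versions, but the file paths are assumptions; the standard Ant/Maven report tasks do the same thing for you):

```java
import java.io.File;
import java.io.IOException;

import org.jacoco.core.analysis.Analyzer;
import org.jacoco.core.analysis.CoverageBuilder;
import org.jacoco.core.analysis.IClassCoverage;
import org.jacoco.core.tools.ExecFileLoader;

// Reads a jacoco.exec dump and prints per-class line coverage.
public class JacocoSummary {
    public static void main(String[] args) throws IOException {
        ExecFileLoader loader = new ExecFileLoader();
        loader.load(new File("jacoco.exec"));           // dump written by the agent

        CoverageBuilder builder = new CoverageBuilder();
        Analyzer analyzer = new Analyzer(loader.getExecutionDataStore(), builder);
        analyzer.analyzeAll(new File("build/classes")); // your original class files

        for (IClassCoverage cc : builder.getClasses()) {
            System.out.printf("%s: %d of %d lines covered%n",
                    cc.getName(),
                    cc.getLineCounter().getCoveredCount(),
                    cc.getLineCounter().getTotalCount());
        }
    }
}
```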
Check out Measuring Code Coverage in (Tomcat) Java Applications with a GreyBox Harness
CodeCover is a great tool. For your case you should use its command-line interface, which you can incorporate into your existing PowerShell scripts.
Related
I'm exploring the use of Groovy as the default scripting language for my next project. Some basic requirements are:
load and run Groovy scripts, sending params in and getting results out; I know this can be done using GroovyShell or GroovyScriptEngine (a sketch follows this list).
be able to run in debug mode, stepping into the statements of the scripts (no need to manage breakpoints, but execute statements in the code and look at the values of the variables in the current scope).
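For reference, a minimal sketch of the GroovyShell route mentioned in the first point (the script contents and variable names are just illustrative):

```java
import groovy.lang.Binding;
import groovy.lang.GroovyShell;

// Runs a script with parameters passed in via a Binding and reads results back out.
public class ScriptRunner {
    public static void main(String[] args) {
        Binding binding = new Binding();
        binding.setVariable("input", 21);              // parameter in

        GroovyShell shell = new GroovyShell(binding);
        Object result = shell.evaluate("output = input * 2; return output");

        System.out.println(result);                        // 42: value of the last expression
        System.out.println(binding.getVariable("output")); // also 42, read from the binding
    }
}
```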
Yes, this is like a small IDE, but it should be integrated into the app, which manages running many scripts in parallel and sending the outputs of some scripts to the inputs of other scripts (it is a pipes-and-filters architecture: http://www.enterpriseintegrationpatterns.com/patterns/messaging/PipesAndFilters.html).
I'm not sure if Groovy alone provides debugging and stepping into Groovy code, or whether the Java Platform Debugger Architecture should be used, or if this should be done by embedding an IDE into this system (it might be possible to embed Eclipse components).
These are my main concerns/doubts about the possibilities of using Groovy. Any pointers are welcome.
The short answer is "no easy way to embed debugging functionality in an application that I am aware of". Either provide workaround instructions for using a companion IDE, or start deep research into any open-source projects that have embedded script-debugger functionality... and become great friends with the related contributing developers :-).
MORE INFORMATION:
Anything is possible when working with open source software. Yes, you can build a debugger that hooks script execution (assuming you can keep processes/threads separate) to step through code, create/halt at breakpoints, read variables currently in memory, etc.
Example: SmartBear ReadyAPI SoapUI NG PRO v4.6+(?) includes some basic debugging embedded in its "Groovy Script" Test Step editor pane. (Disclaimer: SmartBear might have used another language or other libraries to build their debugger features. I am also unsure if/when this feature will make it into the SoapUI open source project; debugging is clearly noted as a PRO (paid) feature.)
Caveats:
Implementing debugger functionality is a MAJOR feature. You need debugger development experience (and/or even more time) to build functionality unrelated to your actual application. Make sure you truly need this embedded debugging before proceeding.
You should start with an existing open-source Groovy debugger to embed in your application... if one exists. SmartBear SoapUI NG PRO would be a start... but the PRO features are not open source.
I want to know of ways or tools to instrument code that is to be deployed as a JAR (or similar), so that when the JAR is used I can record which parts of the underlying source code are used/accessed.
I have come across tools that do code coverage in Java. But once the application is deployed, how can the same be done at that point?
This would also give me a glimpse of how frequently a certain module or part of the code is being used.
You can try the free JaCoCo library. You can also check Dynatrace, which is quite successful but not free.
https://www.dynatrace.com/capabilities/business-transaction-monitoring/
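Tools in this space typically attach through the JVM's agent mechanisms; JaCoCo, for example, uses a standard java.lang.instrument agent. A minimal sketch of that mechanism (the agent class and jar name are hypothetical), which already answers the "which code is used/accessed" part at class granularity; real coverage tools go further and rewrite the bytecode so they can count lines and how often they run:

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Logs every class as it is first loaded, i.e. "which code is actually used".
// Package as a jar with "Premain-Class: UsageAgent" in its manifest and start
// the application with -javaagent:usage-agent.jar.
public class UsageAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                System.out.println("class loaded: " + className);
                return null; // null = leave the bytecode unchanged
            }
        });
    }
}
```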
My company is trying to determine the best strategy for implementing batch Java programs. We have a few hundred (and growing) separate Java programs. Most of them are individual Jasper reports, but some are bigger batch Java jobs. Currently, each Java project is packaged as an independent JAR file using Eclipse's export option. Those JARs are then deployed to our Linux server manually, where they are tested. If they pass testing, they are migrated up through QA and on to Production through a home-grown source code control system.
Is this the best strategy for doing batch Java? Ongoing maintenance can be a hassle, since searching JAR files is not easy and different developers are creating new Java projects (new reports) every week.
Importing existing projects from the JAR files into Eclipse is a tricky process as well. We would like these things to be easier. We have thought about packaging all the code into one big project and writing an interface to be able to execute the desired "package" (aka program), maybe using a web server.
What are other people/companies doing out there with their batch Java programs? Are there any best practices on this stuff? Any help/ideas/working models would be appreciated.
I would say that you should be able to create one web-based app for accessing Jasper reports, rather than a bunch of batch processes. Then, when you need to deploy a new report, just deploy a minor update that accesses a new compiled Jasper report file.
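To make that concrete: running a compiled report from code is small enough that a single web app can dispatch to any .jasper file by name. A hedged sketch using the standard JasperReports API (the paths, JDBC URL, and parameter names are made up; in a web app this would sit behind a servlet or controller):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.HashMap;
import java.util.Map;

import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;

// Fills a compiled .jasper file and writes a PDF. Deploying a new report then
// means dropping a new .jasper file, not building and shipping a new JAR.
public class ReportRunner {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:yourdb://host/db");
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("REPORT_TITLE", "Monthly totals");

        JasperPrint print = JasperFillManager.fillReport(
                "reports/monthly.jasper", params, conn);
        JasperExportManager.exportReportToPdfFile(print, "monthly.pdf");
    }
}
```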
That said, you should be checking your code, not your binaries, into a Subversion or Git repository. Dump the "home grown" source control repository. Life is too short to try to home grow stuff like that. Just use Git or Subversion; they're proven, simple, and functional. When you import a new project, just pull it down from Subversion; don't try to import the JAR file from your Eclipse IDE.
Put your JAR files into a Maven repository such as Nexus, and deploy to QA and Production from there. Create automated builds for every project (be that with Maven or something else). Don't depend upon an IDE to export your JAR files. IDEs change, and exporting from an IDE introduces more opportunity for human error. Also, different developers will prefer different IDEs. By standardizing on something like Maven, you're a bit more IDE agnostic.
My company has standardized Java batch execution using IBM WebSphere Extended Deployment.
Here is an article introducing techniques for programming and deploying Java batch: http://www.ibm.com/developerworks/websphere/techjournal/0801_vignola/0801_vignola.html
Introduction to batch programming using WebSphere Extended Deployment Compute Grid

Christopher Vignola, WebSphere Architect, IBM

Commonly thought of as a legacy "mainframe" technology, batch processing is showing itself to be a venerable workload style with growing demand in Java™ and distributed environments. This article introduces an exciting new capability for Java batch processing from IBM®, the leader in batch processing systems for the last 40 years. This content is part of the IBM WebSphere Developer Technical Journal.

WebSphere Extended Deployment Compute Grid provides a simple abstraction of a batch job step and its inputs and outputs. The programming model is concise and straightforward to use. The built-in checkpoint/rollback mechanism makes it easy to build robust, restartable Java batch applications.

The Batch Simulator utility provided with this article offers an alternative test environment that runs inside your Eclipse (or Rational Application Developer) development environment. Its xJCL generator can help jump start you to the next phase of testing in the Compute Grid unit test server.
But even if you are not interested in the product, the article is a must-read anyway.
I have been searching for some time for a code coverage tool that will work with my client/server application, but I have been unable to find a compatible tool.
My application stores images on a server, then displays them through a client which is launched via a Java Web Start/JNLP file.
Any recommendations would be appreciated. I have already tried EMMA and Clover, with no results. Open source or commercial solutions are acceptable. Thanks!
Instrument the classes with any of the code coverage tools you like (e.g. Cobertura, which writes a local file, cobertura.ser, that can then be used for report generation in a separate step).
Then, instead of running the signed, or unsigned (which wouldn't work anyway), Applet directly in the browser, use the AppletViewer environment. The viewer runs the Applet in a privileged environment, without the Java Plugin sandbox, and thus the code coverage tool can do its work and write the report file.
Many of the code coverage tools use bytecode weaving and only write their results from a shutdown hook, when the VM shuts down. That probably does not work in a browser, since that's a special VM. I'm not sure, but the Java Plugin may start a separate VM for Applets which is never shut down.
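For reference, the shutdown-hook pattern those tools rely on looks roughly like this (a conceptual sketch, not any tool's actual code), which is why a plugin VM that is never cleanly shut down produces no data file:

```java
// Sketch of why coverage data appears only at VM exit: tools typically register
// a JVM shutdown hook like this one. The flush logic is a hypothetical stand-in.
public class CoverageFlushOnExit {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                // A real tool would serialize its in-memory hit counters here,
                // e.g. to cobertura.ser; this stand-in just logs.
                System.out.println("flushing coverage counters to disk");
            }
        });
        System.out.println("application running; counters accumulate in memory");
    }
}
```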
We are currently using JDeveloper to build our production EARs. One problem with this is that if a developer doesn't add new files to a VCS, then that developer is the only one capable of making EARs, and can therefore use unversioned files as well.
What would be a good system that separates this, so that EAR files can be produced correctly without depending on the local developer's workspace? (This would also ensure that developers add their files to a VCS before being allowed to make a deployment/check-in.)
One problem with this is that if a developer doesn't add new files to a VCS, then that developer is the only one capable of making EARs,
If the developer doesn't use the VCS, this is not your only problem:
You cannot reproduce things in another environment; you're tied to the developer machine (but you're aware of that). What if he is sick?
Not versioning files means you don't have any history of modifications and that you don't know what you put into production ("Hmm, what is in this version? Wait, let's open the EAR to check that.").
And last but not least, in case of hardware failure (e.g. a hard drive crash), you can say goodbye to everything that is not in the VCS.
So, the first thing to fix is to ALWAYS version files, even if there is only one developer, as working alone doesn't save you from the problems mentioned above. The developer needs to be aware of these points to understand their importance.
To make sure this happens, you should actually not rely on the developer machine to build the EAR but rather rely on an "external" process that would be the reference. You want to avoid this syndrome:
(Image: the "Works on my Machine" certification starburst.)
To put such a process in place, you need to automate the build (i.e. your whole build can be run in one command) and to break the dependency with your IDE. In other words, do not use the IDE to build your EAR but rather use a tool like Maven or Ant (which are IDE agnostic). That would be the second thing to fix.
Once you have automated your build process, you can go one step further and run it continuously: this is called Continuous Integration (CI) and gives you frequent, ideally immediate, feedback about changes (to avoid big-bang integration problems). That would be the third thing to fix.
Given your actual toolset (which is far from ideal, there is not much community support for the tools you are using), my recommendation would be to use Ant (or Maven if you have some knowledge of it) for the build and Hudson for the continuous integration (because it's extremely easy to install and to use, and it has a Dimensions plugin).
Here's my recommendation:
Get a better IDE: IntelliJ, Eclipse, or NetBeans. Nobody uses JDeveloper.
Have developers check into a central version control system like Subversion or Git.
Set up a continuous integration facility using Ant and either Cruise Control or Hudson to automate your builds.
What we do is use CruiseControl. It does two things: it lets us do continuous integration builds, so that we have nightly builds as well as lightweight builds that run every time a change is checked in.
We also use it to more specifically address your issue. When we want to ship, we use CruiseControl to kick off a build that is tagged with the proper production build version. It will grab the code from our version control system (we use SVN) and build from that, so it is not dependent on developers' local environments.
One thing you might also want to consider is creating a production branch to build from. Production EARs for a particular release are then always built from that branch. This way you even have a bit more control over what goes into the build.
Instead of doing builds from developer workspaces, set up Maven, and have something like Hudson run your Maven build. The artifacts of this build (your EAR) get deployed.