I have a test WAR file that contains many tests. Each test is packaged in a Maven project with a lot of dependencies. We use Maven for dependency management, but it comes with a problem: when a test updates a common library, it can break other tests that depend on the older version of that library. How can I make each test run in a completely separate environment with its own set of library versions? I can't execute them in separate JVMs because these tests need to be executed very frequently, like every 30 seconds or so. Can OSGi help solve this problem?
Yes, OSGi can solve this problem, but it is not a step to be taken lightly. Use OSGi when you are ready to commit time and effort to isolating and managing dependencies, versioning them properly and, optionally, making your code and architecture more modular/reusable.
Bear in mind that adopting OSGi can be painful at first due to non-modular practices used by some legacy libraries.
For security reasons, we update third-party dependencies frequently.
We use Maven as our dependency management tool, but it is still hard work since we have 100+ projects to update.
1. How can we do this quickly and safely?
Sometimes we have to change our code to use the new dependency. Sometimes we don't have enough time to test, and this causes exceptions in the production environment, like NoSuchMethodError.
2. Is a one-version rule a good idea in Java? Has anyone done this before?
For example, our Project A depends on spring-webmvc 5.3.9 and Project B depends on spring-webmvc 5.2.0. We want both A and B to depend on spring-webmvc 5.3.9. In fact, we want all our projects to depend on the same version.
Thank you
Sometimes we have to change our code to use the new dependency. Sometimes we don't have enough time to test, and this causes exceptions in the production environment, like NoSuchMethodError.
This sounds like you have bad practices inside your organisation. You should put in place a solid policy for your testing and deployment (CI/CD) process.
A good practice is to implement a BOM or a parent POM project that manages all your common dependencies. It is very good for managing and centralizing your library versions.
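For example, a parent POM can pin shared versions in a dependencyManagement section; every child project then inherits those versions instead of declaring its own. This is only a sketch, and the coordinates are illustrative:

```xml
<!-- parent pom.xml: centralizes versions for all child projects -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>           <!-- hypothetical coordinates -->
  <artifactId>company-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>

  <dependencyManagement>
    <dependencies>
      <!-- any child that declares spring-webmvc (without a version)
           gets this version, so all projects stay aligned -->
      <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
        <version>5.3.9</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
</project>
```

Bumping the version in one place then updates all 100+ projects on their next build.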
Before any change goes to the production server, it should be tested.
You have to define a process for your tests: unit tests > integration tests > end-to-end tests.
Every new implementation should go through a pull request and review process.
Try to implement an Agile workflow in your team.
My answer is not a silver-bullet approach, but I hope it helps you quite a lot. This is clearly an organisational issue in your team or your enterprise. You have to implement a proper software development process.
I'm trying to figure out which tool to use to get code-coverage information for projects that run in a kind of stabilization environment.
The projects are deployed as a WAR and run on JBoss. I need server-side coverage while running manual/automated tests that interact with a running server.
Let's assume I cannot change the projects' builds and therefore cannot add any kind of instrumentation to their JARs as part of the build process. I also don't have access to the code.
I've done some reading on various tools, and they all present techniques involving instrumenting the JARs at build time (BTW, doesn't that affect production, or are two kinds of outputs generated?).
One tool though, JaCoCo, mentions an "on-the-fly instrumentation" feature. Can someone explain what that means? Can it help given my limitations?
I've also heard of code coverage using runtime profiling techniques; can someone help on that issue?
Thanks,
Ben
AFAIK, "on-the-fly instrumentation" means that the coverage tool hooks into the class-loading mechanism, by using a special ClassLoader or a Java agent, and edits the class bytecode as it is being loaded.
The result should be the same as with "offline instrumentation" of the JARs.
Also have a look at EMMA, which supports both mechanisms. There's also a plugin for Eclipse.
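The hook can be sketched with the standard java.lang.instrument API. This is only an illustration of the mechanism, not JaCoCo's actual implementation (a real coverage agent rewrites the bytecode to insert probes; this one merely observes class loading, and the class name used is hypothetical):

```java
import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;
import java.util.ArrayList;
import java.util.List;

// Sketch of on-the-fly instrumentation: the JVM calls transform() for
// every class as it is loaded, giving the agent a chance to modify the
// bytecode. Here we only record which classes pass through.
public class RecordingTransformer implements ClassFileTransformer {
    private final List<String> loadedClasses = new ArrayList<>();

    @Override
    public byte[] transform(ClassLoader loader, String className,
                            Class<?> classBeingRedefined,
                            ProtectionDomain protectionDomain,
                            byte[] classfileBuffer) {
        loadedClasses.add(className);
        return null; // null = leave the bytecode unchanged
    }

    public List<String> getLoadedClasses() {
        return loadedClasses;
    }

    // A real agent would register itself in a premain() method declared
    // via the Premain-Class manifest attribute, e.g.:
    //   public static void premain(String args, Instrumentation inst) {
    //       inst.addTransformer(new RecordingTransformer());
    //   }
    public static void main(String[] args) {
        RecordingTransformer t = new RecordingTransformer();
        // Simulate the JVM calling back on class load:
        t.transform(null, "com/example/Foo", null, null, new byte[0]);
        System.out.println(t.getLoadedClasses());
    }
}
```

Because the transformation happens at load time, the JARs on disk stay untouched, which is exactly what makes this approach fit the "cannot change the build" constraint.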
A possible solution to this problem without actual code instrumentation is to use a native JVM agent written in C. It is possible to attach such agents to the JVM, and in an agent you can intercept every method call made in your Java code without changing the bytecode.
At every intercepted method call you then record information about the call, which can be evaluated later for code-coverage purposes.
Here you'll find the official guide to JVMTI, which defines how JVM agents can be written.
You don't need to change the build or even have access to the code to instrument the classes. Just instrument the classes found in the delivered JAR, re-jar them and redeploy the application with the instrumented JARs.
Cobertura even has an Ant task that does this for you: it takes a WAR file, instruments the classes inside the JARs inside the WAR, and rebuilds a new WAR file. See https://github.com/cobertura/cobertura/wiki/Ant-Task-Reference
To answer your question about instrumenting the JARs at build time: yes, of course, the instrumented classes are not used in production. They're only used for the tests.
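A typical Ant invocation might look like the following sketch. The paths are illustrative and the attribute names should be checked against the Ant task reference linked above:

```xml
<!-- Ant build fragment: instrument the classes in a delivered WAR -->
<taskdef classpathref="cobertura.classpath" resource="tasks.properties"/>

<cobertura-instrument datafile="cobertura.ser" todir="instrumented">
  <!-- archives listed in the fileset are unpacked, instrumented,
       and repacked into the output directory -->
  <fileset dir="deliveries">
    <include name="myapp.war"/>
  </fileset>
</cobertura-instrument>
```

The resulting WAR under `instrumented/` is then deployed to JBoss in place of the original; coverage data accumulates in `cobertura.ser` as the manual/automated tests exercise the server.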
I know Pax Exam does a lot of stuff, and creating the container and copying all those JARs is not cheap, but are there any general tips to improve performance? I have tests that execute outside the container in a fraction of a second, while inside they take much longer. I am using Pax Exam primarily to verify that my manifests are accurate and that the bundle would be deployable without any missing dependencies. I have tried Knopflerfish, Equinox and Felix, and in general there is little difference: they are all relatively slow compared to a bare-bones containerless run.
As you realised, the underlying container does not make much of a difference.
If you want to have minimal bundles created on the fly, you can try out Pax Tinybundles: if this applies to your case, you can build a set of minimal bundles with only the content you actually need for testing. For example, you can package just your manifest. I haven't benchmarked it myself for this particular purpose, but it is worth a shot.
As a side note, please consider that Pax Exam 2.3 introduced support (see here) for @Before and @After, thus coming to your rescue for more flexible setup/teardown.
Using Native Container is faster than Pax Runner Container, saving the overhead of starting an external process.
Using EagerSingleStagedReactorFactory saves the overhead of restarting the framework for each test.
To avoid copying JARs, prefer mvn: URLs or mavenBundle() to generic URLs; bundles will then be taken from your local Maven repository once they have been downloaded.
A new feature in Pax Exam 2.3.0 is the reference: protocol, which allows you to provision bundles in place, without copying; this works even for exploded bundles (i.e. an unzipped directory structure).
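Putting these tips together, a Pax Exam 2.x test class might be configured roughly as follows. This is a sketch only: the bundle coordinates are made up, and the annotations/classes are assumptions based on the Pax Exam 2.x API, so verify them against your version:

```java
import static org.ops4j.pax.exam.CoreOptions.*;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.ops4j.pax.exam.Option;
import org.ops4j.pax.exam.junit.Configuration;
import org.ops4j.pax.exam.junit.ExamReactorStrategy;
import org.ops4j.pax.exam.junit.JUnit4TestRunner;
import org.ops4j.pax.exam.spi.reactors.EagerSingleStagedReactorFactory;

@RunWith(JUnit4TestRunner.class)
// reuse one framework instance across all tests in the class
@ExamReactorStrategy(EagerSingleStagedReactorFactory.class)
public class ManifestSanityTest {

    @Configuration
    public Option[] config() {
        return options(
            // provisioned from the local Maven repository, not copied
            mavenBundle("com.example", "my-bundle", "1.0.0")
        );
    }

    @Test
    public void bundleResolves() {
        // if provisioning succeeds, the manifest resolved with
        // no missing dependencies, which is all this test checks
    }
}
```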
For my previous employer I worked with Hibernate, and now that I'm in a small startup I would like to use it again. However, downloading both the Hibernate core and the Hibernate annotations distributions is rather painful, as it requires putting a lot of JAR files together. Because the JARs are split into categories such as "required" and "optional", I would assume that every developer ends up with different contents in his lib folder.
What is the common way to handle this problem? Basically I want a formal way to get all the JARs for Hibernate, so that (in theory) I would end up with exactly the same set if I needed it again for another project next month.
Edit: I know roughly what Maven does, but I was wondering if there was another way to manage this sort of thing.
As Aaron has already mentioned, Maven is an option.
If you want something a bit more flexible you could use Apache Ant with Ivy.
Ivy is a dependency-resolution tool which works in a similar way to Maven: you just define which libraries your project needs, and it will go off and download all the dependencies for you.
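For instance, an ivy.xml declaring Hibernate might look like the sketch below (the organisation/module names and the revision are illustrative); Ivy then resolves and downloads the transitive dependencies, so every developer ends up with the same lib folder:

```xml
<ivy-module version="2.0">
  <info organisation="com.example" module="my-app"/>
  <dependencies>
    <!-- Ivy pulls hibernate-core plus its transitive dependencies -->
    <dependency org="org.hibernate" name="hibernate-core" rev="3.3.2.GA"/>
  </dependencies>
</ivy-module>
```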
Maybe this is not much of an answer, but I really don't see any problem with Hibernate dependencies. Along with hibernate3.jar, you need to have:
6 required JARs, of which commons-collections, dom4j and slf4j are the ones more often used by other open-source projects
1 of either the javassist or CGLIB JARs
depending on caching and connection pooling, up to 2 more JAR files, which are pretty much Hibernate-specific
So, at the very worst, you will have a maximum of 10 JARs, Hibernate's own JAR included. And of those, only commons-collections, dom4j and slf4j are likely to be used by some other library in your project. That is hardly a zillion; it can be managed easily, and surely does not warrant using an "elephant" like Maven.
I use Maven 2 and have it manage my dependencies for me.
One word of caution when considering Maven or Ivy for managing dependencies: the quality of the repository directly affects your build experience. If the repo is unavailable, or the metadata for the artifacts (pom.xml or ivy.xml) is incorrect, you might not be able to build. Building your own local repository takes some work but is probably worth the effort. Ivy, for example, has an Ant task that will import artifacts from a Maven repository and publish them to your own Ivy repository. Once you have a local copy of the Maven repo, you can adjust the metadata to fit whatever scheme you see fit to use. Sometimes the latest and greatest release is not in the public repository, which can also be an issue.
I assume you use the Hibernate APIs explicitly? Is it an option to use a standard API, let's say JPA, and let a J2EE container manage the implementation for you?
Otherwise, go with Maven or Ivy, depending on your current build system of choice.
Currently, I am working on a new version control system as part of a final year project at University. The idea is to make it highly adaptable and pluggable.
We're using the OSGi framework (the Equinox implementation) to manage our plug-ins. My problem is that I can't find a simple, easy-to-use method for testing OSGi bundles.
Currently, I have to build the bundle using Maven and then execute a test harness. I'm looking for something like the JUnit test runner for Eclipse, as it will save me a bunch of time.
Is there a quick and easy way to test OSGi bundles?
EDIT: I don't need something to test Eclipse plug-ins or GUI components, just OSGi bundles.
EDIT2: Is there some framework that supports JUnit4?
More recently, you should have a look at Pax Exam:
http://team.ops4j.org/wiki/display/paxexam/Pax+Exam
This is the current effort at OPS4J related to testing.
Here are some tools not mentioned yet:
I'm using Tycho, a tool for building Eclipse plugins with Maven. If you create tests inside their own plug-ins, or plug-in fragments, Tycho can run each set of tests inside its own OSGi instance, with all its required dependencies. Intro and further info. This is working quite well for me.
jUnit4OSGI looks straightforward. You make subclasses of OSGiTestCase, and you get methods like getServiceReference(), etc.
Pluginbuilder, a headless build system for OSGi bundles / Eclipse plug-ins, has a test-running framework called Autotestsuite. It runs the tests in the context of the OSGi environment after the build step. But it doesn't seem to have been maintained for several years; I think many Eclipse projects are migrating from Pluginbuilder to Tycho.
Another option is to start an instance of an OSGi container within your unit test, which you run directly, as explained here.
Here's someone who's written a small bundle test collector, which searches for JUnit (3) tests and runs them.
Spring Dynamic Modules has excellent support for testing OSGi bundles.
There is a dedicated open source OSGi testing framework on OPS4J (ops4j.org) called Pax Drone.
You might want to have a look at Pax Drone (http://wiki.ops4j.org/confluence/x/KABo), which enables you to use all Felix versions as well as Equinox and Knopflerfish in your tests.
Cheers,
Toni
Eclipse has a launch configuration type for running JUnit tests in the context of an Eclipse (i.e. OSGi) application:
http://help.eclipse.org/stable/index.jsp?topic=/org.eclipse.pde.doc.user/guide/tools/launchers/junit_launcher.htm
If you need to test GUI components I've found SWTBot gets the job done.
Treaty is a contract (testing) framework that is pretty academic but has some nice ideas. There are published papers on it, and people are currently working on improving it.
The ProSyst Test Execution Environment is a useful test tool for OSGi bundles. It also supports JUnit tests as one of the possible test models.
For unit tests, use the EasyMock framework or create your own implementations of the required interfaces for testing.
I think we met the same issue and we made our own solution. There are different parts of the solution:
A JUnit4 runner that catches all OSGi services that have a special property defined. It runs these caught services with the JUnit4 engine. The JUnit annotations should be placed on the interfaces that the services implement.
A Maven plugin that starts an OSGi framework (a custom framework can be provided as a Maven dependency) and runs the unit tests inside the integration-test phase of the Maven lifecycle.
A deployer OSGi bundle. If this is dropped into your OSGi container, a simple always-on-top window is opened where you can drop your project folders (from Total Commander or from Eclipse). The deployer then redeploys that bundle.
With these tools you can do TDD and have the written tests always run inside the Maven integration-test phase as well. It is recommended to use Eclipse with m2e and the maven-bundle-plugin, as in this case target/classes/META-INF/MANIFEST.MF is regenerated as soon as you save a class in your source, so you can drag the project and drop it onto the deployer window. The OSGi bundles you develop do not need any special features (like being an Eclipse plug-in or something).
The whole solution is OpenSource. You can find a tutorial at http://cookbook.everit.org
During the last couple of years Tycho, a new Maven-based build system for OSGi, has become rather popular within the Eclipse Foundation. This framework also includes support for using Maven Surefire to test OSGi bundles in separate testbeds.
There are many ways to test OSGi components, I suppose. One way is to use Robot Framework. What I've done is write my tests with Robot Framework and have the remote libraries either installed in OSGi or talking to OSGi test components through sockets, so that Robot talks to these modules and runs tests through them.
So, basically, your OSGi modules should have interfaces that do something and produce some output. In my setup I had a test component that would make service calls to the actual OSGi component, and a listening service that would catch the events/service calls (made by the module under test); those results could then be queried by Robot. Basically, this way you can split a massive system into small components, have the system run in a production or production-like environment, and have it tested automatically at the component level, or have some of the real components tested in unison.
Along with the others mentioned, Mockito is very handy for mocking plug-in dependencies (references etc.). See https://www.baeldung.com/mockito-annotations
How about the bnd-testing-maven-plugin?
It allows running JUnit tests inside a running container like Felix or Equinox.
If you have used Bndtools for Eclipse, this is very similar, but with just Maven, without Eclipse and without a UI.
https://github.com/bndtools/bnd/tree/master/maven/bnd-testing-maven-plugin
Also look at the Effective OSGi archetypes for Maven. They will give you a good starting point to build your project or just add tests.
https://github.com/effectiveosgi