I know Pax Exam is doing a lot of work, and that creating the container and copying all those JARs is not cheap, but are there any general tips for improving performance? I have tests that execute outside the container in a fraction of a second, while inside the container they take much longer. I am using Pax Exam primarily to verify that my manifests are accurate and that the bundle would be deployable without any missing dependencies. I have tried Knopflerfish, Equinox and Felix, and in general there is little difference: they are all slow relative to a bare-bones containerless run.
As you realised, the underlying container does not make much of a difference.
If you want to have minimal bundles created on the fly, you can try out Pax Tinybundles: if this applies to your case, you can build a set of minimized bundles with only the content you actually need for testing. For example, you can just package your Manifest. I haven't benchmarked it myself for this particular purpose, but it is worth a shot.
As a side note, please consider that Pax Exam 2.3 introduced support (see here) for @Before and @After, which gives you more flexible setup/teardown for your tests.
Using the Native Container is faster than the Pax Runner Container, as it saves the overhead of starting an external process.
Using the EagerSingleStagedReactorFactory saves the overhead of restarting the framework for each test.
To avoid copying JARs, prefer mvn: URLs or mavenBundle() over generic URLs; bundles will then be taken from your local Maven repository once they have been downloaded.
A new feature in Pax Exam 2.3.0 is the reference: protocol, which allows you to provision bundles in place, without copying - this works even for exploded bundles (i.e. an unzipped directory structure).
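Putting those tips together, a Pax Exam test that only checks whether your bundle resolves might look roughly like the sketch below. This is a hedged sketch: the annotation and import package names vary between Pax Exam versions, and the Maven coordinates are made-up placeholders for your own bundle.

    import static org.ops4j.pax.exam.CoreOptions.*;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.ops4j.pax.exam.Option;
    import org.ops4j.pax.exam.junit.Configuration;
    import org.ops4j.pax.exam.junit.ExamReactorStrategy;
    import org.ops4j.pax.exam.junit.JUnit4TestRunner;
    import org.ops4j.pax.exam.spi.reactors.EagerSingleStagedReactorFactory;

    @RunWith(JUnit4TestRunner.class)
    // Reuse a single framework instance for every test method in this class.
    @ExamReactorStrategy(EagerSingleStagedReactorFactory.class)
    public class ManifestDeployabilityTest {

        @Configuration
        public Option[] config() {
            return options(
                // Resolved from the local Maven repository rather than copied around;
                // the coordinates below are made up - point them at your own bundle.
                mavenBundle("com.example", "my-bundle", "1.0.0"),
                junitBundles()
            );
        }

        @Test
        public void bundleResolves() {
            // Reaching this point means the bundle's manifest resolved inside
            // the container without missing dependencies.
        }
    }

With the native container on the test classpath (no Pax Runner process), this keeps the overhead down to a single framework start per test class.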
Good localtime community,
First, a disclaimer: I am a greenhorn when it comes to Maven and OSGi, though I have read through half a dozen threads on the subject here. We are migrating from our current RESTful architecture to Apache's Fuse ESB, and of course this requires us to sort out our dependencies and carefully craft our POMs for each bundle/container. Generally speaking, each container will contain three in-house dependency bundles:
The container-specific bundle, which requires the next two in-house bundles.
A processing bundle that all of our in-house containers will require (though they might require different versions), and which requires the bundle below.
A utility bundle that all of our containers will require (though they might require different versions).
Additionally, each container can require one or more (possibly over 100) third-party JARs, which are often reused between containers (though again there is no guarantee the versions will be the same).
We have (or will soon have) profiles for each container and for the ESB as a whole. What I would like to know is:
What is the best (cleanest) way to reuse third party jars between containers when we can?
I've read that we should never use if we can avoid it, but we are heading toward jarmageddon when we try to explicitly import the JARs/packages we need for our three bundles. I have read that *;resolution:=optional is not the way to go either (and for what it's worth, it doesn't seem to work for us anyway). Any thoughts?
I have read on some forums that bundling third-party JARs is the way to go (though that seems a little overboard), and I have read on others that that would kind of defeat the purpose of OSGi. Any thoughts there?
Our in-house bundles often require many of the same third party jars/versions. Is this simply a matter of building/installing our bundles in the correct order (utility, processing, and container specific) and exporting jars that can be reused by the next bundle(s)?
We are in a position where we can get our container working if we do some things we do not want to do (export everything, import *), but we'd like to handle this as cleanly as possible since we will have to repeat the process for many containers (with increasing dependencies) and since we will have to live with supporting/updating our implementation.
Thanks in advance for your guidance.
I have a test WAR file that contains many tests. Each test is packaged as a Maven project with a lot of dependencies. We use Maven for dependency management, but it comes with a problem: when a test updates a common library, it can break other tests that depend on the older version of the library. How can I make each test run in a completely separate environment with its own set of library versions? I can't execute them in separate JVMs because these tests need to be executed very frequently, every 30 seconds or so. Can OSGi help solve this problem?
Yes OSGi can solve this problem, but it is not a step to be taken lightly. Use OSGi when you are ready to commit time and effort to isolating and managing dependencies, versioning them properly and, optionally, making your code and architecture more modular/reusable.
Bear in mind that adopting OSGi can be painful at first due to non-modular practices used by some legacy libraries.
I'm writing a server application which makes use of external modules. I would like to make them to be upgradeable without requiring server restart. How do I do that? I've found OSGi but it looks very complicated and big for my task.
Simple *.jar files are OK, but once they are loaded I suppose I cannot unload them from the VM and replace them with another version on the fly.
What approach can you suggest?
It seems like OSGi is exactly what you're asking for. It can be complex, but there are ways to deal with that. Some of the complexity can be mitigated by using SpringDM or something similar to handle the boilerplate tasks of registering and consuming services in the runtime. Annotation-driven service registration and dependency injection really reduces the amount of code that needs to be written.
Another way to reduce complexity is to deploy the bulk of your application in a single bundle and only deploy the parts that need to be modular into their own bundles. This reduces your exposure to registering and using services from other bundles in the runtime as well as reducing the complexity of deployment. Code running within a bundle can use other code in the same bundle just as in a standard Java app - no need to interact with the OSGi runtime. The opposite of this approach is to break up your application into lots of discrete bundles that export well-defined services to other bundles in the system. While this is a very modular approach, it does come with extra complexity of managing all those bundles and more interaction with the OSGi runtime.
I would suggest taking a look at the book "OSGi in Action" to get a sense of the issues and to see some decent samples.
It would at least require you to define your own custom class loader... I don't see how this can be simpler than just using Felix, Equinox, Knopflerfish or any other open-source OSGi runtime to do the task.
Maybe SpringDM is simpler...
What you're going for is definitely possible. I believe you can unload classes from memory by loading them in a separate ClassLoader and then disposing of that ClassLoader. If you don't want to go all out and use OSGi, I'd recommend something like JBoss Microcontainer (http://www.jboss.org/jbossmc) or ClassWorlds (http://classworlds.codehaus.org/). It's not too terribly difficult to write something like this from scratch if your needs are specialized enough.
Hope this helps,
Nate
If you follow the ClassLoader route (it is not that difficult, really), I suggest packaging each module in its own JAR and using a different ClassLoader to read each JAR. That way, unloading a module is the same as "discarding" its ClassLoader.
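A minimal sketch of that idea, assuming a hypothetical module whose main class has a no-argument constructor (real code needs error handling, and must make sure no other references to the module's classes survive an unload):

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Path;

    public class ModuleHandle {

        private URLClassLoader loader;
        private Object moduleInstance;

        // Load one module JAR in its own class loader.
        public void load(Path jar, String mainClassName) throws Exception {
            loader = new URLClassLoader(new URL[] { jar.toUri().toURL() },
                                        getClass().getClassLoader());
            moduleInstance = loader.loadClass(mainClassName)
                                   .getDeclaredConstructor()
                                   .newInstance();
        }

        public Object instance() {
            return moduleInstance;
        }

        // "Unloading" = closing the loader and dropping every reference to it;
        // once nothing else points at its classes, the GC can reclaim them and
        // a newer JAR can be loaded with a fresh ClassLoader.
        public void unload() throws Exception {
            moduleInstance = null;
            loader.close();
            loader = null;
        }
    }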
OSGi is not so complicated - using Pax Runner with Maven works like a breeze.
Or implement your own ClassLoader and set it as the JVM's system class loader:
java -Djava.system.class.loader=com.test.YourClassLoader App
How do you determine which JARs are needed for a given feature of a framework? For example, which JARs out of all those available for Spring would be needed to support only dependency injection?
There are tools that create minimal JARs by figuring out which classes are actually used in an application by statically analyzing the code, then creating a new JAR containing only those classes. (I recall using Zelix KlassMaster to do this, but there are many alternatives.)
The problems with using these tools for a DI framework like Spring include:
The existing tools only trace static dependencies. If you dynamically load classes, you have to specifically tell the analyser about each one. DI frameworks in general, and Spring in particular, are replete with dynamic loading, including dynamic loading that is opaque to application code.
The existing tools work by creating a new output JAR, not by telling you which of the input JARs are not used. While repackaging the JARs is OK if you are creating a shrink-wrapped application from a closed-source codebase, it is undesirable in general, and potentially problematic with some open-source licenses. Certainly you don't want to do this with Spring.
In theory, someone could write a tool to help. In practice, the tool would need to (for example) know how to extract dynamic class dependencies from Spring configurations expressed in annotations, in XML, and in bean descriptors created at runtime from higher-order configuration (Spring Security does this, for example). That is a big ask. And even then you have the problem that a "small" change to the wirings made on the installation platform could fail due to a required JAR having been left out by the JAR-pruning process.
In my view, the more practical alternatives are:
If you use Maven / Ivy to manage your dependencies, look at the dependency graphs (for example with the Maven commands shown after this list), strip out dependencies that appear to be no longer needed ... and test, test, test.
Manually strip out JARs that appear to be unused ... and test, test, test.
Don't worry about it. A moderate level of unused JAR cruft might add a second or three to deployment and webapp startup times, but that generally doesn't matter. (But if it does ... see above.)
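For the Maven option above, the standard maven-dependency-plugin goals give you a first picture of the graph; treat the output only as a starting point, since static analysis cannot see classes that Spring loads dynamically:

    mvn dependency:tree       # print the full transitive dependency graph
    mvn dependency:analyze    # flag "unused declared" and "used undeclared" dependencies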
This is why some older Java projects end up with 600 JARs and a 200 MB WAR file for a 10,000-line application. Kind of a pain if you don't manage it carefully...
You should really ask the framework provider or read the documentation. Statically analyzing which JARs are required might not be enough in some cases (dynamic loading), and sometimes you might end up with too many JARs.
I once added some FTP helper code to a sort of "utility" library. It depended on an Apache FTP JAR. If you never used the FTP features in the library you would not need the FTP JAR, but static analysis of the code might say you need it. This is something you should document.
Currently, I am working on a new version control system as part of a final year project at University. The idea is to make it highly adaptable and pluggable.
We're using the OSGi framework (Equinox implementation) to manage our plug-ins. My problem is that I can't find a simple, easy-to-use method for testing OSGi bundles.
Currently, I have to build the bundle using Maven and then execute a test harness. I'm looking for something like the JUnit test runner for Eclipse, as it would save me a bunch of time.
Is there a quick and easy way to test OSGi bundles?
EDIT: I don't need something to test Eclipse plug ins or GUI components, just OSGi bundles.
EDIT2: Is there some framework that supports JUnit4?
More recently, you should have a look at Pax Exam:
http://team.ops4j.org/wiki/display/paxexam/Pax+Exam
This is the current effort at OPS4J related to testing.
Here are some tools not mentioned yet:
I'm using Tycho, which is a tool for using Maven to build Eclipse plugins. If you create tests inside their own plug-ins, or plug-in fragments, Tycho can run each set of tests inside its own OSGi instance, with all its required dependencies. Intro and further info. This is working quite well for me.
jUnit4OSGI looks straightforward. You make subclasses of OSGiTestCase, and you get methods like getServiceReference(), etc.
Pluginbuilder, a headless build system for OSGi bundles / Eclipse plug-ins, has a test-running framework called Autotestsuite. It runs the tests in the context of the OSGi environment, after the build step. But, it doesn't seem to have been maintained for several years. I think that many Eclipse projects are migrating from Pluginbuilder to Tycho.
Another option is to start an instance of an OSGi container within your unit test and run it directly, as explained here; a minimal sketch of this approach follows this list.
Here's someone who's written a small bundle test collector, which searches for JUnit (3) tests and runs them.
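For the embedded-container option mentioned above, here is a hedged sketch using only the standard OSGi launch API (org.osgi.framework.launch). It assumes a framework implementation such as Felix or Equinox is on the test classpath so ServiceLoader can find a FrameworkFactory, and the path to the bundle under test is a made-up placeholder:

    import static org.junit.Assert.assertEquals;

    import java.util.HashMap;
    import java.util.Map;
    import java.util.ServiceLoader;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.osgi.framework.Bundle;
    import org.osgi.framework.launch.Framework;
    import org.osgi.framework.launch.FrameworkFactory;

    public class EmbeddedFrameworkTest {

        private Framework framework;

        @Before
        public void startFramework() throws Exception {
            // Picks up whichever framework implementation is on the classpath.
            FrameworkFactory factory = ServiceLoader.load(FrameworkFactory.class).iterator().next();
            Map<String, String> config = new HashMap<>();
            config.put("org.osgi.framework.storage.clean", "onFirstInit");
            framework = factory.newFramework(config);
            framework.start();
        }

        @Test
        public void bundleResolvesAndStarts() throws Exception {
            // The path below is a placeholder - point it at your built bundle JAR.
            Bundle bundle = framework.getBundleContext()
                    .installBundle("file:target/my-bundle-1.0.0.jar");
            bundle.start(); // fails if the manifest has unresolved dependencies
            assertEquals(Bundle.ACTIVE, bundle.getState());
        }

        @After
        public void stopFramework() throws Exception {
            framework.stop();
            framework.waitForStop(10000);
        }
    }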
Spring Dynamic Modules has excellent support for testing OSGi bundles.
There is a dedicated open source OSGi testing framework on OPS4J (ops4j.org) called Pax Drone.
You might want to have a look at Pax Drone (http://wiki.ops4j.org/confluence/x/KABo), which enables you to use all Felix versions as well as Equinox and Knopflerfish in your tests.
Cheers,
Toni
Eclipse has a launch configuration type for running JUnit tests in the context of an Eclipse (i.e. OSGi) application:
http://help.eclipse.org/stable/index.jsp?topic=/org.eclipse.pde.doc.user/guide/tools/launchers/junit_launcher.htm
If you need to test GUI components I've found SWTBot gets the job done.
Treaty is a contract (testing) framework that is pretty academic but has some nice ideas. There are published papers on it, and people are currently working on improving it.
The ProSyst Test Execution Environment is a useful test tool for OSGi bundles. It also supports JUnit tests as one of the possible test models.
For unit tests, use the EasyMock framework or create your own implementations of the required interfaces for testing.
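As an illustration of that approach, the sketch below unit-tests a consumer of an OSGi service interface with EasyMock. GreetingService and GreetingConsumer are made-up names for illustration; only the EasyMock calls reflect the real API.

    import static org.easymock.EasyMock.*;
    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class GreetingConsumerTest {

        interface GreetingService {            // hypothetical OSGi service interface
            String greet(String name);
        }

        static class GreetingConsumer {        // hypothetical class under test
            private final GreetingService service;
            GreetingConsumer(GreetingService service) { this.service = service; }
            String welcome(String name) { return service.greet(name) + "!"; }
        }

        @Test
        public void delegatesToService() {
            // Record the expected interaction on the mocked service.
            GreetingService mock = createMock(GreetingService.class);
            expect(mock.greet("OSGi")).andReturn("Hello OSGi");
            replay(mock);

            assertEquals("Hello OSGi!", new GreetingConsumer(mock).welcome("OSGi"));
            verify(mock);
        }
    }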
I think we ran into the same issue and built our own solution. It has several parts:
A JUnit 4 runner that catches all OSGi services that have a special property defined. It runs these caught services with the JUnit 4 engine. JUnit annotations should be placed on the interfaces that the services implement.
A Maven plugin that starts an OSGi framework (a custom framework can be created as a Maven dependency) and runs the unit tests during the integration-test Maven lifecycle phase.
A deployer OSGi bundle. If it is dropped into your OSGi container, a simple always-on-top window opens where you can drop your project folders (from Total Commander or from Eclipse); the corresponding bundle is then redeployed.
With these tools you can do TDD and have the written tests run during the Maven integration-test phase as well. It is recommended to use Eclipse with m2e and the maven-bundle-plugin, as in that case target/classes/META-INF/MANIFEST.MF is regenerated as soon as you save a class in your source, so you can drag the project and drop it onto the deployer window. The OSGi bundles you develop do not need any special features (like being an Eclipse plug-in or similar).
The whole solution is OpenSource. You can find a tutorial at http://cookbook.everit.org
Over the last couple of years Tycho - a Maven-based build system for OSGi - has become rather popular within the Eclipse ecosystem. This framework also includes a way to use Maven Surefire to test OSGi bundles in separate testbeds...
There are many ways to test OSGi components, I suppose. One way is to use Robot Framework. What I've done is write my tests with Robot Framework and have the remote libraries either installed in OSGi or talking to OSGi test components through sockets, and Robot would talk to these modules and run tests through them.
So basically your OSGi modules should have interfaces that do something and produce some output. In my setup I had test components that would make service calls to the actual OSGi component, and a listening service that would catch the events/service calls made by the module under test; Robot could then query those results. Basically this way you can split a massive system into small components, have the system run in a production or production-like environment, and have it tested automatically at the component level, or have some of the real components tested in unison.
Along with the others mentioned, Mockito is very handy for mocking plugin dependencies (references etc.). See https://www.baeldung.com/mockito-annotations
How about bnd-testing-maven-plugin?
It allows running JUnit inside a running container like Felix or Equinox.
If you have used Bndtools for Eclipse, this is very similar, but with just Maven, without Eclipse and without a UI.
https://github.com/bndtools/bnd/tree/master/maven/bnd-testing-maven-plugin
Also look at the effectiveosgi archetype for Maven. It will give you a good starting point to build your project or just add tests.
https://github.com/effectiveosgi