I wrote a unit test for an activity that eventually puts a message into a queue. As soon as a message is put into that queue, a message-driven bean starts processing it. But I don't want to test MDBs in a unit test. How can I tell OpenEJB to ignore them?
I set up OpenEJB with several properties:
p.setProperty(Context.INITIAL_CONTEXT_FACTORY,
"org.apache.openejb.client.LocalInitialContextFactory");
p.setProperty("openejb.deployments.classpath.include", ".*");
p.setProperty("openejb.localcopy", "false");
// Messaging
p.put("MyJmsResourceAdapter",
"new://Resource?type=ActiveMQResourceAdapter");
// Do not start the ActiveMQ broker
p.put("MyJmsResourceAdapter.BrokerXmlConfig", "");
p.put("MyJmsConnectionFactory",
"new://Resource?type=javax.jms.ConnectionFactory");
p.put("MyJmsConnectionFactory.ResourceAdapter", "MyJmsResourceAdapter");
p.put("queue/MyQueue",
"new://Resource?type=javax.jms.Queue");
I know I must set openejb.deployments.classpath.exclude, but I can't figure out the right value:
p.setProperty("openejb.deployments.classpath.exclude", "org.example.mdb.*");
For example, my class is named org.example.mdb.MyMDB.
Just my 2 cents:
try ".*org/example/mdb.*" or ".*org.example.mdb.*"
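One way to sanity-check a candidate pattern before wiring it into openejb.deployments.classpath.exclude is to run it against a sample classpath entry with plain java.util.regex; the paths below are made up:

```java
import java.util.regex.Pattern;

// Quick check of candidate exclude patterns against made-up classpath entries.
class ExcludePatternCheck {
    // The exclude property is matched against classpath entries/URLs,
    // which is why a path-style pattern like ".*org/example/mdb.*" is suggested.
    static boolean excluded(String pattern, String classpathEntry) {
        return Pattern.matches(pattern, classpathEntry);
    }

    public static void main(String[] args) {
        String pattern = ".*org/example/mdb.*";
        // expect: true (the MDB package is excluded)
        System.out.println(excluded(pattern, "/app/classes/org/example/mdb/MyMDB.class"));
        // expect: false (other classes stay deployable)
        System.out.println(excluded(pattern, "/app/classes/org/example/web/MyServlet.class"));
    }
}
```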
from Loading Deployments from the Classpath:
Note by default these settings will
only affect which jars OpenEJB will
scan for annotated components when no
descriptor is found. If you would like
to use these settings to also filter
out jars that do contain descriptors,
set the
openejb.deployments.classpath.filter.descriptors
property to true. The default is false
We don't have that feature, but it could easily be added if you wanted to do a little hacking -- new contributions and contributors are always welcome.
This class will do exactly what you want... and a few things you probably don't want :) It strips out all MDBs and JMS resource references (the good part) and it strips out all entity beans and persistence unit references (the part you probably don't want). We wrote it due to some debugging issues we were having when either ActiveMQ or OpenJPA were loaded. If you cleaned it up we'd happily take it back and support it as a feature.
There is a similar feature which strips out all web services. It is installed in the ConfigurationFactory if a specific system property is set. It should be easy to plug in an "MDB & JMS" remover using a similar flag at basically the same place in ConfigurationFactory.
In fact since in OpenEJB all annotation and xml meta-data is merged into one object tree (which is also a JAXB tree), you could do pretty powerful transformations of the app prior to it being actually deployed. Say for example swap out specific beans for mock versions.
One of those things I think would make an excellent feature, but I haven't yet had the time to work on it. I.e. making some clean hook for people to mess with the tree just before we send it off for deployment. Anyone reading this is welcome to jump in and take a stab at it (yay open source!).
Related
The OSGi ConfigurationAdmin specification mentions that implementations of ManagedService and ManagedServiceFactory may signal an invalid incoming configuration by throwing a ConfigurationException. Yet, apart from this statement, the spec is silent on how the various actors should handle the situation, and most importantly, on what the state of the environment should be after such an exception.
For example, suppose that a ManagedServiceFactory currently has a service instance (let's say service.pid=example.12345) with a valid set of properties; that service instance is published by the factory into the service registry. Then, the factory is informed of a configuration update for that service instance; however, on verification, the update method determines that the incoming properties are invalid. Based on the spec, the factory should therefore throw a ConfigurationException.
But then, if nothing else is done, the environment remains in an unstable state: there is now a published service in the registry based on a configuration that no longer exists; consequently, whenever the ManagedServiceFactory service gets restarted (for example because of a bundle update or a whole framework restart), it will not be possible to reinstantiate that service, its former valid configuration having been lost. This breaks the persistence paradigm of the Configuration Admin subsystem, and poses severe issues regarding the stability of some OSGi environments.
Unfortunately, there is no easy way for the initial configurator bundle to detect that its configuration change caused a ConfigurationException, making it hardly possible in general to restore a stable configuration from that place. It seems to me that it would be more appropriate, in such a situation, for the ConfigurationAdmin to (persistently) restore the previously valid configuration, but there is unfortunately no mention of such behaviour in the spec, and I don't see any trace of such a mechanism in Felix's implementation.
Given these facts, it seems that the only way to maintain the stability of the environment would be for a ManagedServiceFactory implementation to first unregister and destroy existing service instances for which it has received invalid configuration properties, and only after that throw the mandated ConfigurationException. This would effectively result in the same environment state as if the framework were relaunched at that point. Similarly, a ManagedService implementation should handle an invalid configuration by first entirely restoring its default configuration, and then throwing a ConfigurationException.
So, how exactly should errors in ManagedService and ManagedServiceFactory configuration updates be handled? Is my understanding correct? From what I see out there in example/open source implementations of ManagedService and ManagedServiceFactory, this aspect seems to be totally ignored by most developers. Does the spec provide any clarification on the subject?
The general strategy is to log it as an error and pray it will be solved soon. The purpose of the ConfigurationException is to provide detailed information to the devops team so that the problem can be corrected quickly.
The strategies you describe are IMHO so hopelessly complex and open-ended that they tend to create more problems than they can ever solve. Someone made a mistake and created a wrong configuration; the only solution is to fix that configuration. I find that, in general, systems that try to handle these exceptional cases become very fragile. Once something is wrong, you're in an infinite space, and software is extremely bad at reasoning about things it doesn't know about.
So unless you have some very specific use cases I do not think it can, nor should it, have a general solution.
In general there are three strategies to handle this:
Rejecting the invalid configuration but keeping the previous state
Rejecting the invalid configuration and destroying the current state, as if there had been no configuration before
Rejecting invalid values and applying valid values as much as possible
What to choose, and whether you decide to throw an exception, print a warning to the log, send an e-mail, or bring up a popup, depends heavily on your system and use cases.
For example, if you have a UI and the user can change the config, you can simply save the old config, and if you detect an error you can ask the user to either correct or revert the configuration.
Even better, you can describe the configuration requirements with the MetaTypeService, so you can validate the config before applying it.
If you have a set of config files, you'd better make a backup beforehand so you can revert :)
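The first strategy above (reject the invalid configuration but keep the previous state) can be sketched in plain Java, without the real OSGi API; the ConfigUpdateException class and the "port" property are hypothetical stand-ins:

```java
import java.util.Hashtable;
import java.util.Map;

// Hypothetical stand-in for org.osgi.service.cm.ConfigurationException.
class ConfigUpdateException extends Exception {
    ConfigUpdateException(String property, String reason) {
        super(property + ": " + reason);
    }
}

// A managed component that validates incoming properties and, on failure,
// throws while leaving its last valid configuration untouched.
class ManagedComponent {
    private Map<String, Object> active = new Hashtable<>();

    void updated(Map<String, Object> incoming) throws ConfigUpdateException {
        Object port = incoming.get("port");
        if (!(port instanceof Integer) || (Integer) port <= 0) {
            // Reject: the previously active configuration stays in effect.
            throw new ConfigUpdateException("port", "must be a positive integer");
        }
        active = incoming; // only commit after full validation
    }

    Map<String, Object> activeConfig() { return active; }
}
```

The key point is that the commit happens only after every property has been validated, so a rejected update can never leave a half-applied state behind.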
Context
We use a javax.ws.rs.ext.ExceptionMapper<Exception> annotated as @javax.ws.rs.ext.Provider to handle all exceptions. Internally this ExceptionMapper distinguishes between different types of exceptions to determine what information to reveal to the client.
In the case of the javax.validation.ConstraintViolationException, we return additional information about which field was invalid and why.
Problem
We just switched from TomEE 1.7.2 JAX-RS to TomEE 7.0.0-SNAPSHOT webprofile.
With TomEE 1.7.2 JAX-RS we used the openejb.jaxrs.providers.auto=true system property, our ExceptionMapper was automatically found and used.
With TomEE 7.0.0-SNAPSHOT webprofile the property is no longer necessary to benefit from auto discovery.
However the org.apache.cxf.jaxrs.validation.ValidationExceptionMapper is also discovered and now acts as the preferred ExceptionMapper for the javax.validation.ConstraintViolationException. Our own ExceptionMapper does not run, and the client therefore gets no information about what went wrong during validation.
Our own ExceptionMapper<Exception> still handles all other exceptions.
What I already tried
"duplicate" the specialized ExceptionMapper
I placed my own javax.ws.rs.ext.ExceptionMapper<javax.validation.ConstraintViolationException> next to my resources, hoping that it takes precedence over the CXF one.
Still the org.apache.cxf.jaxrs.validation.ValidationExceptionMapper takes precedence.
Update: it turned out that this indeed does the trick. I don't know why my initial test didn't work.
Disable the ValidationExceptionMapper via system.properties
In the changelog of TomEE 7.0.0-SNAPSHOT I noticed
TOMEE-1336 Support classname.activated = true/false for auto discovered providers
Looking at the corresponding changeset I was hopeful that I could just disable the org.apache.cxf.jaxrs.validation.ValidationExceptionMapper by adding
org.apache.cxf.jaxrs.validation.ValidationExceptionMapper.activated=false
to our system.properties.
This remained without effect.
Questions
Is this CXF or TomEE behaviour?
How do we configure which ExceptionMapper takes precedence?
It's been some time now, but I think it is mandated by the spec; however, you can disable it by setting cxf.jaxrs.skip-provider-scanning=true.
It completely disables auto-discovered providers, including the scanned ones, but then you can control the ones you want in openejb-jar.xml - surely the best and safest solution IMHO, because otherwise you depend a lot on the libs and container setup you use.
There is no priority AFAIK, because the exception hierarchy is used.
edit: missed a part: you need to implement ExceptionMapper<ValidationException>, otherwise the CXF one has higher priority than your own (Exception is less specific)
edit 2: https://issues.apache.org/jira/browse/TOMEE-1656 for the activated support
Problem Statement
I want to be able to run junit tests on methods that connect to a database.
Current setup
Eclipse Java EE IDE – Java code is using no framework. The developers (me included) want more robust testing of current legacy code BEFORE attempting to move the code into a Spring framework so that we can prove along the way that the behavior is still correct.
JBoss 4.2 – Version limitation by vendor software (Adobe LiveCycle ES2); Our Java web application uses this setup of JBoss to run and makes use of the Adobe LiveCycle API.
We have been unable to successfully run the vendor configured JBoss within Eclipse – we have spent weeks attempting this, including contacting the company that provides our support for the configuration of JBoss for Adobe LiveCycle. Supposedly the problem is a memory limitation issue with settings in Eclipse, but changing the memory settings has thus far failed in a successful JBoss server start within Eclipse. For now, attempting to get JBoss to run inside of Eclipse is on hold.
The database connection is defined in a JNDI data source that JBoss loads on start up. Both our web application and Adobe LiveCycle need to create connections to this data source.
Code
I am glossing over error checking and class structure in this code snippet to focus on the heart of the matter. Hopefully that does not cause problems for others. Text in square brackets is not actual text.
Our code to create the connection is something like this:
Properties props = new Properties();
FileInputStream in = null;
in = new FileInputStream(System.getProperty("[Properties File Alias]"));
props.load(in);
String dsName = props.getProperty("[JNDI data source name here]");
InitialContext jndiCntx = new InitialContext();
DataSource ds = (DataSource) jndiCntx.lookup(dsName);
ds.getConnection();
I want to be able to test methods dependent upon this code without making any changes to it.
Reference to properties file alias in properties-service.xml file:
<!-- ==================================================================== -->
<!-- System Properties Service -->
<!-- ==================================================================== -->
<!-- Allows rich access to system properties.-->
<mbean code="org.jboss.varia.property.SystemPropertiesService"
name="jboss:type=Service,name=SystemProperties">
<attribute name="Properties">
[Folder Alias]=[filepath1]
[Properties File Alias]=[filepath2]
</attribute>
</mbean>
Snippet from properties file located at filepath2
[JNDI data source name]=java:/[JNDI data source name]
The JNDI xml file for this data source is set up like this:
<datasources>
<local-tx-datasource>
<jndi-name>[JNDI data source name here]</jndi-name>
<connection-url>jdbc:teradata://[url]/database=[database name]</connection-url>
<driver-class>com.teradata.jdbc.TeraDriver</driver-class>
<user-name>[user name]</user-name>
<password>[password]</password>
<!-- sql to call on an existing pooled connection when it is obtained from pool -->
<check-valid-connection-sql>SELECT 1+1</check-valid-connection-sql>
</local-tx-datasource>
</datasources>
My thoughts of where the solution may be
Is there something I can do in a @BeforeClass method in order to make the properties the above code is looking for available without JBoss? Maybe somehow using the setProperty method of the java.util.Properties class? I would also like to use the same JNDI xml file that JBoss reads from, if possible, in order to reduce duplicate configuration settings.
So far all of my research ends with the advice “Use Spring”, but I don’t think we’re ready to open that can of worms yet. I am not an expert in JBoss, but if more details of our JBoss setup are needed for a helpful answer, I will do my best to get them, though I will likely need some pointers on where to look.
Stackoverflow Research references:
Jndi lookup in junit using spring
Out of container JNDI data source
Other research references:
http://docs.oracle.com/javase/1.4.2/docs/api/java/util/Properties.html
http://docs.oracle.com/javase/jndi/tutorial/basics/prepare/initial.html
There's a very simple answer to your problem, but you're not going to like it: Don't.
By definition, a unit test should verify the functionality of a single unit (the size of which may vary, but it should be self-sufficient). Creating a setup where the test depends upon web services, databases, etc. is counter-productive: It slows down your tests, it introduces a gazillion possible things that could go wrong (failed network connections, changes to data sets, ...) during the test, which have nothing to do with the actual code you are working on, and most importantly: It makes testing much, much harder and more complicated.
Instead, you should be looking for ways to decouple the legacy code from any data sources, so that you can easily substitute mock objects or similar test doubles while you are testing.
You should create tests to verify the integrity of your entire stack, but those are called integration tests, and they operate at a higher level of abstraction. I personally like to defer writing those until the units themselves are in place, tested and working - at least until you have come to a point where you no longer expect changes to service calls and protocols on a daily basis.
In your case, the most obvious strategy would be to encapsulate all calls to the web service in one or more separate classes, extract an interface that the business objects can depend on, and use mocks implementing that same interface for unit testing.
For example, if you have a business object that calls an address database, you should copy the JNDI lookup code into a new service class called AddressServiceImpl. Its public methods should mimic all the method signatures of your JNDI datasource. Those, then, you extract to the AddressService interface.
You can then write a simple integration test to verify that the new class works: call all the methods once and see if you get proper results. The beauty of this is that you can supply a JNDI configuration that points to a test database (instead of the original one), which you can populate with test datasets to make sure you always get the expected results. You don't necessarily need a JBoss instance for this (though I have never had any problems with the Eclipse integration) - any other JNDI provider should work, as long as the data source itself behaves the same way. And to be clear: you test this once, then forget about it. At least until the actual service methods ever change.
Once you have verified that the service is functional, the next task is to go through all the dependent classes and replace the direct calls to the datasource with calls to the AddressService interface. And from that point on, you have a proper setup to implement unit tests on the actual business methods, without ever having to worry about things that should be tested elsewhere ;)
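As a minimal sketch of that extraction (all class and method names here are hypothetical, not from the actual codebase): the business object depends on a small interface, the JNDI lookup lives in the one production implementation, and the unit test substitutes a hand-rolled fake (or a Mockito mock):

```java
import java.util.Arrays;
import java.util.List;

// The interface business objects depend on.
interface AddressService {
    List<String> findAddresses(String customerId);
}

// The production implementation would wrap the JNDI DataSource lookup;
// it is never instantiated from a unit test.
// class AddressServiceImpl implements AddressService { ... }

// Business logic under test, now decoupled from JNDI.
class CustomerReport {
    private final AddressService addresses;

    CustomerReport(AddressService addresses) {
        this.addresses = addresses;
    }

    String summarize(String customerId) {
        List<String> found = addresses.findAddresses(customerId);
        return customerId + " has " + found.size() + " address(es)";
    }
}

// A fake the unit test controls completely - no database, no container.
class FakeAddressService implements AddressService {
    public List<String> findAddresses(String customerId) {
        return Arrays.asList("1 Main St", "2 Side Ave");
    }
}
```

The unit test constructs CustomerReport with the fake and asserts on the business logic alone.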
EDIT
I second the recommendation for Mockito. Really good!
I had a very similar situation with some legacy code in JBoss AS7, for which refactoring would have been way out of scope.
I gave up on trying to get the datasource out of JBoss, because it does not support remote access to datasources (which I confirmed by trying).
Ideally though, you don't want your unit tests to depend on a running JBoss instance, and you really don't want them to have to run inside of JBoss. That would be counter to the concept of self-contained unit tests (even though you'll still need the database to be running :) ).
Fortunately, the initial context used by your app doesn't have to come from a running JBoss instance. After looking at this article referred to by an answer to another question, I was able to create my own initial context and populate it with my own datasource object.
This works without creating dependencies in the code because the classes under test typically run inside the container, where they simply do something like this to get the container-provided context:
InitialContext ic = new InitialContext();
DataSource ds = (DataSource)ic.lookup(DATA_SOURCE_NAME);
They don't need to specify any environment to the constructor, because it has already been set up by the container.
In order for your unit tests to stand in for the container and provide a context, you create it, and bind a name:
InitialContext ic = new InitialContext();
// Construct DataSource
OracleConnectionPoolDataSource ds = new OracleConnectionPoolDataSource();
ds.setURL("url");
ds.setUser("username");
ds.setPassword("password");
ic.bind(DATA_SOURCE_NAME, ds);
This needs to happen in each test class's @BeforeClass method.
Now the classes being tested get my initial context when running in unit tests, and the container's when deployed.
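For completeness, here is one self-contained way to provide such a standalone initial context without JBoss or any third-party JNDI provider: a minimal InitialContextFactory backed by a map, built with a dynamic proxy. This is a sketch under the assumption that only bind/lookup/close are needed; the class name is made up:

```java
import java.lang.reflect.Proxy;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.naming.Context;
import javax.naming.NameNotFoundException;
import javax.naming.OperationNotSupportedException;
import javax.naming.spi.InitialContextFactory;

// Map-backed JNDI provider for tests; supports only bind/lookup/close.
public class InMemoryContextFactory implements InitialContextFactory {
    private static final Map<String, Object> BINDINGS = new ConcurrentHashMap<>();

    @Override
    public Context getInitialContext(Hashtable<?, ?> env) {
        return (Context) Proxy.newProxyInstance(
                Context.class.getClassLoader(),
                new Class<?>[] { Context.class },
                (proxy, method, args) -> {
                    switch (method.getName()) {
                        case "bind":
                            BINDINGS.put(String.valueOf(args[0]), args[1]);
                            return null;
                        case "lookup":
                            Object bound = BINDINGS.get(String.valueOf(args[0]));
                            if (bound == null) {
                                throw new NameNotFoundException(String.valueOf(args[0]));
                            }
                            return bound;
                        case "close":
                            return null;
                        default:
                            throw new OperationNotSupportedException(method.getName());
                    }
                });
    }
}
```

A @BeforeClass method would then set the java.naming.factory.initial system property (Context.INITIAL_CONTEXT_FACTORY) to this class's name before binding the test DataSource, so the production code's plain new InitialContext() picks it up unchanged.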
If you are using tools like Git and Maven, this can be done easily with them. Check in a unit-test-specific properties file alongside the development and QA ones. Then use Maven's profile facilities to specify a profile that copies your unit-test file over to where it should go, and do the same for dev and QA by running with different profiles active.
There is no magic to this; Spring introduces complexity more than anything. It definitely doesn't introduce simplicity like this.
You can run your tests with a fake InitialContext implementation, which returns whatever you need from calls to lookup(String).
A mocking/faking tool which allows such fake implementations is JMockit. The fake implementation would be written like the following:
import javax.naming.InitialContext;
import mockit.Mock;
import mockit.MockUp;

public class FakeInitialContext extends MockUp<InitialContext>
{
    @Mock
    public Object lookup(String name)
    {
        // Return whatever is needed based on "name".
        return null;
    }
}
To apply it to a JUnit/TestNG test run, add jmockit.jar to the runtime classpath (before junit.jar if this is the case) and set the "jmockit-mocks" system property to the name of the fake class: -Djmockit-mocks=com.whatever.FakeInitialContext.
Of course, you can also write true JUnit/TestNG unit tests where any dependency can be easily mocked, by using the "Expectations & Verifications" mocking API.
(PS: For full disclosure, I am the creator of the JMockit project.)
One of the beauties with Java EE 6 is the new dependency injection framework - CDI with the Weld reference implementation - which has prompted us to start migrating internally to JSR-330 in an implementation agnostic manner, with the explicit target of being able to have a core jar which is frozen, and then being able to add extra jars providing new modules replacing functionality in the core jar.
I am now in the process of making the above work with Weld, and to be frank there is simply too much magic going on behind the covers. Either it works or it doesn't, and it doesn't provide very much help by default on what happens so you can investigate what is wrong and fix it.
I would expect there to be switches that can easily enable things like:
What classpath entries are scanned and where? What was the result?
What beans are available for injection for which class?
What caused a given bean not to be considered for later? A given jar?
In other words, I need to see the decision process in much more detail. For some reason this is not as needed with Guice, perhaps because there is much less magic, and perhaps because the error messages are very good.
What do you do to debug your Weld applications, and how much does it help?
Short answer: there is no dedicated debug option for CDI (as no such thing is required by the spec), and no dedicated debug option for Weld.
Long Answer: There is a lot you can do on your own. Familiarise yourself with the extension mechanism of CDI, and you'll discover that you can easily (really!) write your own extension that debugs the information you need.
What classpath entries are scanned and
where? What was the result?
Listen to the ProcessAnnotatedType-Event
What beans are available for injection
for which class?
Query the BeanManager for that.
What caused a given bean not to be
considered for later? A given jar?
Listen to the AfterBeanDiscovery-Event and see what you've got in the BeanManager. Basically, the following scenarios make a ManagedBean ineligible for injection:
it's not a ManagedBean (e.g. there is no beans.xml in the jar)
it does not qualify as a managed bean (https://docs.jboss.org/weld/reference/1.1.0.Final/en-US/html/beanscdi.html#d0e794)
it has no bean type (an empty @Typed())
it is vetoed (Seam Solder) or suppressed by any other extension-mechanism
Weld uses Simple Logging Facade for Java (slf4j). If you are using Tomcat, I suggest you add slf4j-jdk14-x.x.x.jar to the application classpath and append the following lines to apache-tomcat-7.0.x/conf/logging.properties:
org.jboss.weld.Bootstrap.level = FINEST
org.jboss.weld.Version.level = FINEST
org.jboss.weld.Utilities.level = FINEST
org.jboss.weld.Bean.level = FINEST
org.jboss.weld.Servlet.level = FINEST
org.jboss.weld.Reflection.level = FINEST
org.jboss.weld.JSF.level = FINEST
org.jboss.weld.Event.level = FINEST
org.jboss.weld.Conversation.level = FINEST
org.jboss.weld.Context.level = FINEST
org.jboss.weld.El.level = FINEST
org.jboss.weld.ClassLoading.level = FINEST
This will generate lots of debug output in the console, so you'd better select something specific and comment out the other lines.
Other logging libraries (like log4j) can be configured using their respective config files and adding similar levels.
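For example, with log4j 1.x as the backing implementation, the equivalent levels would go into log4j.properties; the category names mirror the JUL ones above (a sketch, assuming slf4j is bound to log4j):

```
# Enable verbose Weld bootstrap/bean logging via log4j
log4j.logger.org.jboss.weld.Bootstrap=TRACE
log4j.logger.org.jboss.weld.Bean=TRACE
# Narrow or comment these out once you've found the relevant subsystem
log4j.logger.org.jboss.weld.ClassLoading=TRACE
```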
I can suggest a few options:
lower the logging threshold. I don't know which logging framework Weld uses, but you can find that out and configure, say, DEBUG or INFO
get the source code and put breakpoints in the BeanManager implementation (BeanManagerImpl perhaps). It is the main class in CDI and handles almost everything.
Try putting in a different implementation (if you're not tied to it by the application server) - for example OpenWebBeans. Its exception messages might be better.
Open the specification and read about the particular case. It is often the case that you have missed a given precondition - for example, an annotation has to have a specific @Target, otherwise it is not handled by CDI.
I can confirm that the exception messages of Weld are rather disappointing. I haven't used Guice, but in Spring they are very, very informative. With Weld I had to refer to the 4th point above (opened the spec) and verify all preconditions. This was my suspicion initially - that even though the spec looks very good, the implementations will not be as shiny (at first at least). But I guess one gets used to this.
I am trying to configure a custom layout class to Log4J as described in my previous post. The class uses java.util.regex.Matcher to identify potential credit card numbers in log messages. It works perfectly in unit tests, also in a minimal web app containing a single servlet. However when I try to deploy it with our app in JBoss, I get the following error:
--- MBEANS THAT ARE THE ROOT CAUSE OF THE PROBLEM ---
ObjectName: jboss.web.deployment:war=MyWebApp-2010_02-SNAPSHOT.war,id=476602902
State: FAILED
Reason: java.lang.LinkageError: java/util/regex/Matcher
I couldn't even find any info on this form of the error - typically LinkageError seems to show up with a "loader constrain violation" message, like in here.
Technical details: we use JBoss 4.2, Java 5, Log4J 1.2.12. We deploy our app in an .ear, which contains (among others) the above mentioned .war file, and the custom layout class in a separate jar file (let's call it Commons). We override the default settings in jboss-log4j.xml with our own log4j.properties located in a different folder, which is added to the classpath at startup, and is provided via Sapient's Carbon framework.
Update to #skaffman's answer:
The reason we have a separate log4j.properties file is the scheme propagated by Sapient Carbon. This basically decouples the configuration and data files from the application server environment, so that they are accessible via Carbon's lookup functionality and they can be stored in a directory external to the app server. We inherited this setup, and we hate it because it causes us lots of trouble with deployment, classpath issues etc. since it does not adhere to the JEE conventions. We aim to get rid of it in the long run, but it's gonna take time :-(
Even though the separate log4j.properties file is not best practice, it certainly works. It has been functioning in our app for years, and I could also make it work with a minimalist web app containing a single servlet (not using Sapient Carbon). If log4j.properties is put into the classpath, Log4J reads it properly when the web app is launched, and reconfigures logging accordingly.
Update#2: An interesting finding is that Matcher is not even used in MyWebApp, only in the Commons module (and another module, in a separate jar). In Commons, it has been used before, in a class called StringHelper, which is used indirectly by MyWebApp, via other modules.
I guess this rules out the possibility of two different Matcher class versions loaded by different classloaders. So my only remaining guess is that Matcher is loaded by two different classloaders when it is used from the jar and the war, and then attempted to pass from one to the other. This is explained by Frank Kieviet's excellent article. However, I believe that such a setup would cause a "loader constraint violation" rather than this form of the error.
Update#3: If I add this appender (example 3.8) to jboss-log4j.xml, the error disappears, and the server runs perfectly :-o This obviously has something to do with loading log4j.jar, because this setup requires the jar to be present in the server lib directory. It also works if I change the appender type to org.jboss.logging.appender.FileAppender and set the log level to WARN, which results in an empty ucl.log file. This may serve as a temporary workaround, but I am still eager to fully understand what's going on here.
What does this error message mean, and how can I fix it properly?
Epilogue
After a long wait, I finally got to eliminate Carbon from the logging process and migrate our logging config into server/conf/jboss-log4j.xml. This required that I publish our custom log filter class in a separate jar in the server/lib directory. After this, the class loading works again, without the workaround described in Update#3 above :-)
My first reaction is that in JBoss it's not possible to override the log4j configuration like that. JBoss doesn't allow log4j to locate its own configuration as it normally would; the location of conf/jboss-log4j.xml is specified in conf/jboss-service.xml.
To my knowledge, all log4j configuration in a given JBoss server must be centralised in to a single file, usually conf/jboss-log4j.xml.
Have you tried, as a test, moving the contents of your log4j.properties into the existing conf/jboss-log4j.xml file? If that works fine, then the problem is almost certainly caused by your attempt to override log4j. Having said that, I'd be surprised if jboss/log4j were that fragile, but perhaps in certain cases it rejects this.
You either have two classes with different signatures but the same path in your environment, or you compiled against another signature of j.u.r.Matcher. Since this is standard Java API, I think you should check your source and compilation targets and the JVM runtime version of your JBoss installation.
Edit:
After that is ruled out: I'm fairly sure the classloader (the server's) that manages the appenders and tries to load the appender that's using your custom layout can't see the custom layout class. So you have two options:
Deploy your custom layout JAR to the server's lib-directory along with log4j.
Deploy log4j along with your application and isolate the application with your own classloader (jboss-app.xml):
<jboss-app>
<loader-repository>
com.myapplication:loader=MyClassLoader
<loader-repository-config>java2ParentDelegation=false</loader-repository-config>
</loader-repository>
</jboss-app>
I hope the problem will go away then.