I have used JPA with Hibernate in a standalone application, but now I want to try it with an application server. I know GlassFish provides the EclipseLink implementation of JPA, but I have a few questions.
Do I need to specify EclipseLink as the provider for my persistence-unit in persistence.xml?
Does persistence.xml look the same as it would if the application were not deployed to a server? If not, how does it differ?
Do I need to download the EclipseLink implementation jars and build against them, or does the container handle this after my application is deployed?
How do I specify the JDBC driver in persistence.xml?
Does my application need to be deployed as a .ear?
You don't need to specify the persistence provider; by default, the one bundled with your application server will be used (provided it implements at least the Web Profile, of course; otherwise, servers such as Tomcat won't provide you with EclipseLink).
Yes, it will look the same (in both applications you are just using JPA in the same way).
For your code to compile, you only need persistence-api.jar on your classpath (if you use Maven, set the scope to "provided"). The server will then supply its own implementation jars at runtime.
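For example, with Maven the API dependency could look roughly like this (the exact coordinates depend on the JPA version you target; these are just one common option):

<!-- JPA API only; the server supplies the implementation at runtime -->
<dependency>
    <groupId>javax.persistence</groupId>
    <artifactId>persistence-api</artifactId>
    <version>1.0</version>
    <scope>provided</scope>
</dependency>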
You could use a persistence unit as described on this page ("typical configuration in a Java SE environment"), but I would rather suggest using a <jta-data-source> instead, referring to a data source provided by GlassFish.
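A minimal sketch of such a persistence unit, assuming a JDBC resource has already been created in GlassFish (the unit name and JNDI name below are placeholders):

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="myPU" transaction-type="JTA">
        <!-- JNDI name of a JDBC resource created in GlassFish,
             e.g. with asadmin create-jdbc-connection-pool / create-jdbc-resource -->
        <jta-data-source>jdbc/myDataSource</jta-data-source>
    </persistence-unit>
</persistence>

With this setup the JDBC driver is not referenced in persistence.xml at all; it is configured on the server's connection pool, which also answers the driver question above.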
As far as I can tell, it can also be a WAR file; I didn't have any problem deploying it that way (webapp as a Maven WAR module + beans in a JAR module).
I have a Spring Roo app that is deploying to Tomcat with no issues. I'm trying to deploy it to JBoss 6, but I'm finding it impossible to do so.
I've exhausted all resources from Google and I simply receive errors everywhere. Unfortunately, they do not seem specific enough to start narrowing them down to list here.
What information could I provide to help resolve this situation?
Essentially, I need to know what I need to change in a standard Spring Roo app using Hibernate and MySQL to make it work with JBoss 6.
EDIT:
This is the error that I am getting:
[ClassLoaderManager] Unexpected error during load of:org.apache.commons.collections.DoubleOrderedMap$1$1: java.lang.IllegalAccessError: class org.apache.commons.collections.DoubleOrderedMap$1$1 cannot access its superclass org.apache.commons.collections.DoubleOrderedMap$DoubleOrderedMapIterator
Impossible to tell, since you posted no errors.
I'm guessing that it's a problem with the configuration difference between JBOSS and Tomcat.
You set up JDBC data source connection pools differently on the two servers. Tomcat has context.xml in the server's /conf folder; JBoss uses XML datasource descriptors (*-ds.xml) in its server/default/deploy folder. Did you create those correctly?
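For example, a MySQL datasource for JBoss is typically a small *-ds.xml file dropped into server/default/deploy; a rough sketch (the JNDI name, URL and credentials below are placeholders):

<!-- mysql-ds.xml in server/default/deploy -->
<datasources>
    <local-tx-datasource>
        <jndi-name>MyAppDS</jndi-name>
        <connection-url>jdbc:mysql://localhost:3306/myapp</connection-url>
        <driver-class>com.mysql.jdbc.Driver</driver-class>
        <user-name>appuser</user-name>
        <password>secret</password>
    </local-tx-datasource>
</datasources>

JBoss binds this under its java: namespace (here java:/MyAppDS), which is not the same JNDI name that Tomcat's context.xml would have exposed.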
I assume that you're using JNDI names for injected data sources.
Your MySQL JDBC driver JAR goes in the Tomcat /lib folder and the JBoss server/default/lib folder, not in the WAR's WEB-INF/lib.
But you should be able to take a WAR with all the Spring Roo stuff, put it into an EAR with jboss-web.xml configuration, and start it up.
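If the application resolves the data source through java:comp/env, a jboss-web.xml inside the WAR is where the resource reference is mapped to the JBoss JNDI name; roughly (all names are illustrative):

<!-- WEB-INF/jboss-web.xml -->
<jboss-web>
    <context-root>/myapp</context-root>
    <resource-ref>
        <res-ref-name>jdbc/MyAppDS</res-ref-name>
        <jndi-name>java:/MyAppDS</jndi-name>
    </resource-ref>
</jboss-web>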
Chapter 126 of the OSGi Enterprise Release 5 specification mentions compatibility:
"Support the traditional JNDI programming model used by Java SE and Java EE clients."
and the use of OSGi-unaware code:
"Clients and JNDI Context providers that are unaware of OSGi use static methods to connect to the
JRE JNDI implementation. The InitialContext class provides access to a Context from a provider and
providers use the static NamingManager methods to do object conversion and find URL Contexts.
This traditional model is not aware of OSGi and can therefore only be used reliably if the consequences
of this lack of OSGi awareness are managed."
but it is not clear to me whether this text applies only to "legacy" code executed inside an OSGi bundle, or also to code outside the OSGi container, for example in a scenario where the OSGi container is embedded in an application.
In an embedding scenario, there may be application code both outside and inside the OSGi container that performs JNDI calls, and since they execute in the same JVM they will share the JNDI implementation.
Question: Should an OSGi JNDI implementation running in an embedded OSGi container allow OSGi-unaware code outside the container to perform its JNDI calls as usual, or is some porting to "OSGi awareness" required?
Trying this out myself with Apache Karaf 2.3.0 (which uses Apache Aries JNDI 1.0.0), it doesn't seem to work, as Apache Aries requires JNDI client calls to originate from an OSGi bundle.
Partial stacktrace:
javax.naming.NoInitialContextException: The calling code's BundleContext could not be determined.
at org.apache.aries.jndi.OSGiInitialContextFactoryBuilder.getInitialContext(OSGiInitialContextFactoryBuilder.java:46)
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684)
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:307)
at javax.naming.InitialContext.init(InitialContext.java:242)
at javax.naming.InitialContext.<init>(InitialContext.java:192)
Question: Is this correct behaviour, or is there a section of the specification I can refer to that is violated by this limitation?
I ran into the same issue when trying to deploy Apache Karaf on WebLogic.
We use Karaf through a servlet bridge: a WAR is deployed in WebLogic that bridges all HTTP requests to Karaf.
I am running the following applications on WebLogic:
app1 (uses JNDI)
app2
karaf-bridge (bridges requests to Karaf)
As soon as Karaf starts, the Aries JNDI implementation running inside Karaf sets the InitialContextFactoryBuilder inside javax.naming.NamingManager to its own implementation. NamingManager holds a static reference to the initial context factory builder, so whichever implementation sets this static reference, regardless of whether it is running in an OSGi environment, becomes the JVM-wide JNDI provider.
In my case, when app1 (non-OSGi) tries to create a new InitialContext, Aries JNDI tries to resolve it using the BundleContext and fails.
I fixed this with some very ugly hacks that involved extracting the javax.naming package from the JRE and installing it as a bundle in Karaf.
So the answer to your question: I think the issue really lies in the JRE and how JNDI lookups are managed, not in OSGi.
I'm not sure if I understand the problem correctly... JNDI is a Service Provider Interface, and it needs some underlying implementation to run with. All you need to do is provision it in the OSGi container.
I would recommend creating a single bundle with all the jars needed by JNDI and exporting all their packages. Then use DynamicImport-Package: * to use it. That worked in our case (an Eclipse RCP application with JBoss 5 JNDI used for EJB calls).
However, if you need JNDI both inside and outside of the container and you don't want to struggle with classloading, I would recommend adding all the jars to the application's classpath. That way it should be accessible throughout your application.
Apache Aries seems to have thought about this and provides an implementation of the JRE initial context factory builder (org.apache.aries.jndi.JREInitialContextFactoryBuilder) which appears to do the job. However, to use it I had to change the Aries code that registers the JVM-wide initial context factory builder. There may be another (and possibly better) way of achieving this, but this seemed to work.
Also, note that the problem does not stop at the InitialContextFactoryBuilder being set in NamingManager. The same issue arises for the ObjectFactoryBuilder (which is again set JVM-wide in NamingManager). Depending on the JNDI provider you are trying to connect to, you may need to change that part of the Aries JNDI code as well; e.g. for a Tibco EMS JNDI connection, I had to tweak the code of OSGiObjectFactoryBuilder from Aries to return a Tibco-specific ObjectFactory. This could easily be generalized using the Context.OBJECT_FACTORIES environment value.
I've raised a JIRA for the same - https://issues.apache.org/jira/browse/ARIES-1127
After several days of searching, trying and head-banging, I post this question to SO although it seems to be answered already.
Here is the scenario:
I have an EAR application containing (for the moment) one WAR and one EJB module. The EJB module uses JPA (persistence.xml) and some Stateless Session Beans are exposed via Web Services. The web services use Basic authentication with a jdbc realm. The web module uses form authentication with the same realm.
The requirement:
I need to be able to deploy this application either on different servers (dev/test/prod) or on the same server (or cluster) with different deployment descriptors. The deployment settings that need to be different in each application instance are:
The jta-data-source in persistence.xml
The realm-name in web.xml
The javax.faces.PROJECT_STAGE in web.xml
The webservice-endpoint\endpoint-address-uri and login-config\realm in glassfish-ejb-jar.xml
The context-root in application.xml (I could move it to web.xml if it made any difference, see below)
The realm in glassfish-application.xml
During my research, I managed the following:
I can override the javax.faces.PROJECT_STAGE using asadmin set-web-context-param
I can override all settings in glassfish-ejb-jar.xml using a deployment plan during asadmin deploy
The same applies for glassfish-application.xml
I can probably override the context-root during asadmin deploy (I don't know how this would work with more than one web module in the EAR)
So far so good. This leaves me with the following problems:
How can I easily modify the realm-name in web.xml?
How can I easily modify the jta-data-source in persistence.xml?
By easily I mean during deployment or using something similar to a deployment plan jar. Maintaining multiple copies of ejb.jar or war just with a modified .xml file is not an option.
Just to be clear, the need is to have different databases (either in different stages of development or for different customers) using the same application. The application uses one persistence-unit but it needs to point to different databases (hence the jta-data-source). The realm is a jdbc realm (on the same database) that also needs to be different per application instance.
Any help or pointer would be greatly appreciated.
Have you thought about preparing templates for the deployment descriptors and populating them with values from a property file during the build? If you are using Ant, you can use the expandproperties filter.
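A rough sketch of that approach, assuming Ant and templates with ${...} placeholders (the file and property names are made up for illustration):

<!-- build.xml fragment: expand placeholders such as ${jta.data.source}
     in templates/persistence.xml using values from dev.properties -->
<property file="dev.properties"/>
<copy file="templates/persistence.xml"
      tofile="build/ejb/META-INF/persistence.xml"
      overwrite="true">
    <filterchain>
        <expandproperties/>
    </filterchain>
</copy>

The same filter chain can be applied to web.xml and the glassfish-*.xml descriptors, so one build produces an archive per environment.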
You can do all those things with a deployment plan jar.
It looks like the content of the deployment plan jar is pushed into the archive/directory tree of the application BEFORE any of the heavy lifting associated with deployment happens.
See
http://java.net/projects/glassfish/sources/svn/content/trunk/main/appserver/deployment/javaee-core/src/main/java/org/glassfish/javaee/core/deployment/DolProvider.java
and
http://java.net/projects/glassfish/sources/svn/content/trunk/main/appserver/deployment/dol/src/main/java/com/sun/enterprise/deployment/archivist/Archivist.java
I have a 100% JPA2 compliant application which needs to be portable to many application servers. Being JPA compliant (theoretically) means we can switch JPA providers via configuration (e.g. without changing source code) -- (right???).
When running within a servlet container (e.g. Tomcat, Jetty), the application is configured to run with Hibernate. We chose Hibernate over TopLink and EclipseLink for its maturity and performance. So far this works.
However, when running within a Java EE application server, should we default to the JPA provider therein, or stick with Hibernate?
I know that within JBoss the provider is Hibernate, so it probably doesn't matter. However, I think the provider within WebLogic is EclipseLink. I have no idea what provider WebSphere or GlassFish uses, but I have seen detailed instructions on how to use Hibernate as the provider within those application servers.
I guess another way to ask the question is what would we be missing by using Hibernate in these application servers?
I have a 100% JPA2 compliant application which needs to be portable to many application servers. Being JPA compliant (...) means we can switch JPA providers via configuration (...)
Yes.
(...) However, when running within a Java EE application server, should we default to the JPA provider therein, or stick with Hibernate?
Well, if you deploy on a Java EE 6 server, this doesn't really matter. It's not clear who is going to run the application and you can maybe make recommendations but the runtime is actually "not your business" :) Also note that you may not benefit from support if you don't use the default provider (if this matters).
I know that within JBoss the provider is Hibernate, so it probably doesn't matter. However, I think the provider within WebLogic is EclipseLink. I have no idea what provider WebSphere or GlassFish uses, but I have seen detailed instructions on how to use Hibernate as the provider within those application servers.
First of all, keep in mind that JPA 2.0 is part of Java EE 6 and that GlassFish v3 is the only Java EE 6 container at this time. WebLogic and WebSphere are Java EE 5 servers; they may not support JPA 2.0.
Now, regarding the default providers:
GlassFish v3 uses EclipseLink 2.0 as default provider but can be configured to use Hibernate 3.5 (through an add-on).
In WebLogic 10.3.2, the default JPA provider is OpenJPA/Kodo, and EclipseLink 1.2 is available as a WLS module. In WLS 10.3.3 (not released yet), EclipseLink 2.0 will be available as a WLS module, the default still being OpenJPA/Kodo. But the container JPA API will still be JPA 1.0! It seems possible to package a JPA 2.0 provider inside your application; see this thread and this page. But this is not officially supported, and doing the same thing with Hibernate 3.5 might be another story.
In WebSphere 6 and 7, the default provider is OpenJPA. This link will give you some details about the way to change the default provider (and the consequences). But I can't tell you more.
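In any of these cases, if you do override the server default, the provider is pinned explicitly in persistence.xml; a minimal sketch (the unit name and data source are placeholders, and the provider class shown is Hibernate 3.5's JPA entry point):

<persistence-unit name="myPU" transaction-type="JTA">
    <!-- force Hibernate instead of the container's default provider -->
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>jdbc/myDataSource</jta-data-source>
</persistence-unit>

You then also have to make the Hibernate jars visible to the application (bundled in the archive or installed as a server module), which is exactly the part that differs between the servers above.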
I guess another way to ask the question is what would we be missing by using Hibernate in these application servers?
As I mentioned, this may not be supported by the vendor. Additionally, if you want to maximize portability and plan to deploy your application in a near future, going for JPA 2.0 is maybe not a wise choice (or too optimistic if you prefer).
I don't see what you will be missing, unless you're using implementation specific API in your JPA code. I.e. do not import org.hibernate anywhere in your JPA code, but just write it against the JPA API.
I am having trouble migrating from OC4J 10.1.2.3 to 10.1.3.1.4. The problem is with applications that have multiple EJBs (all 2.1, no EJB 3.0). JDeveloper will take the default ejb-jar.xml (the one required for JDeveloper to run the app on its stand-alone OC4J instance) and package it into each EJB JAR module NO MATTER what. As a result, the app server drills into each EJB JAR module when deploying and finds the same ejb-jar.xml file N times (where N = number of EJB modules). This results in duplicate EJB references and breaks any JNDI lookups such as "java:comp/env/ejb/EJBName". Thus deploying an app that has 3 EJBs (EJB1, EJB2 and EJB3) causes the app server to register 9 EJBs instead of 3. I need a best-practices approach, but between the way 10.1.3.4 and JDeveloper are behaving, the situation is rather dire...
Side note: the lookups will work if the web app's JNDI lookup code is refactored to just "ejb/EJBName". This is not desirable, though.
You should check the Oracle documentation to see which case applies to you. The Oracle® Containers for J2EE Enterprise JavaBeans Developer's Guide is a good start.
According to the Oracle® Containers for J2EE Services Guide, chapter 2 (Using JNDI), when you use the form "ejb/EJBName" you perform a "local" lookup. If you want to use the full form, you must check the "Enabling Global JNDI Lookups" section of the "Using JNDI" chapter.
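For reference, the java:comp/env/ejb/EJBName form resolves an EJB reference declared in the calling module's own descriptor, so each lookup depends on exactly one such entry being present; for an EJB 2.1 remote bean the web.xml declaration looks roughly like this (the interface names are illustrative):

<!-- web.xml: the reference behind java:comp/env/ejb/EJBName -->
<ejb-ref>
    <ejb-ref-name>ejb/EJBName</ejb-ref-name>
    <ejb-ref-type>Session</ejb-ref-type>
    <home>com.example.EJBNameHome</home>
    <remote>com.example.EJBName</remote>
</ejb-ref>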
The problem was multiple references in our deployment profiles. We were creating a deployment profile for EACH EJB. This meant that each EJB had its own ejb-jar.xml (a file containing a description of ALL EJBs in the project). Therefore, every time JDeveloper created an EJB, it placed a descriptor of all EJBs in each EJB it generated, resulting in N×N references, i.e. N×(N-1) extra references.
Now, the key point is that Oracle Application Server 10.1.2.3.0 and below did not care about these duplicate references. However, as we can see, 10.1.3.1.4 is a much different version, and this did break.
Our fix: have only 1 EJB deployment profile that contains all of the EJB classes and the POJOs that they use. Remember, before there was 1 EJB profile for each EJB... All this did was allow JDeveloper (which is crap IMHO) to correctly generate a valid EAR. A combination of JDeveloper's and Oracle Application Server's crap is what caused this.