I have several session beans that I have written unit tests for. I have set up Maven to include a persistence.xml in the src/main/resources/META-INF directory that refers to a local MySQL database for development purposes. I have another persistence.xml in the src/test/resources/META-INF directory that refers to the embedded Derby database __default. The tests are deployed to an embedded GlassFish 3.1 container.
When I run the tests however, I get the following error:
java.lang.RuntimeException: javax.naming.NamingException: Lookup failed for 'jdbc/mylog'
jdbc/mylog is the MySQL database that the persistence unit in the main directory refers to. It is obviously ignoring the persistence unit in the test directory but I have no clue as to why.
Maven is setting the classpath correctly as far as I can tell, with test-classes before classes, and a peek into the actual target/test-classes/META-INF directory reveals that it copied the correct (embedded Derby) persistence unit.
[DEBUG] Test Classpath :
[DEBUG] C:\Users\Laurens\Documents\Projects\Mylog\target\test-classes
[DEBUG] C:\Users\Laurens\Documents\Projects\Mylog\target\classes
[DEBUG] C:\Users\Laurens\.m2\repository\org\eclipse\persistence\eclipselink\2.2.0\eclipselink-2.2.0.jar
[DEBUG] C:\Users\Laurens\.m2\repository\org\eclipse\persistence\javax.persistence\2.0.3\javax.persistence-2.0.3.jar
[DEBUG] C:\Users\Laurens\.m2\repository\org\eclipse\persistence\org.eclipse.persistence.jpa.modelgen.processor\2.2.0\org.eclipse.persistence.jpa.modelgen.processor-2.2.0.jar
[DEBUG] C:\Users\Laurens\.m2\repository\org\glassfish\extras\glassfish-embedded-all\3.1\glassfish-embedded-all-3.1.jar
[DEBUG] C:\Users\Laurens\.m2\repository\javax\javaee-web-api\6.0\javaee-web-api-6.0.jar
[DEBUG] C:\Users\Laurens\.m2\repository\junit\junit\4.8.1\junit-4.8.1.jar
Any hint on how to have GlassFish use the proper persistence unit very much appreciated! Thanks!
When running tests using embedded Glassfish, the JPA provider does not use the classpath displayed on the command line before the maven-surefire-plugin goal (the one used to run the test phase) executes. Instead, embedded Glassfish deploys the artifacts available in the test scope as a ScatteredArchive. This scattered archive is typically created in the java.io.tmpdir directory, usually with a name like gfembed<a_random_number>tmp, unless the embedded Glassfish configuration specifies the location of a Glassfish installation root and a Glassfish domain.
When the embedded Glassfish domain is prepared with the deployed scattered archive, the files to be deployed are copied into an exploded directory that houses all the classes (including all dependencies) required by the application; this directory is typically GF_EMBED_DOMAIN_HOME/applications/<application_name>. The persistence.xml files from your src/main/resources/META-INF and src/test/resources/META-INF directories are both copied into the <application_name>/META-INF directory there. Needless to say, the one that gets copied last, i.e. the one that is not overwritten, is the one used by the JPA provider during the tests, and that always happens to be the file from src/main/resources/META-INF.
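For reference, this is roughly how such a scattered archive is assembled with the embedded GlassFish 3.1 API (a sketch for illustration, not the container's exact internal code):

import java.io.File;
import org.glassfish.embeddable.archive.ScatteredArchive;

public class ScatteredArchiveSketch {
    public static ScatteredArchive create() throws Exception {
        // Both classpath entries are merged into a single deployment, which is why
        // the two META-INF/persistence.xml files land in the same exploded directory.
        ScatteredArchive archive = new ScatteredArchive("mylog", ScatteredArchive.Type.JAR);
        archive.addClassPath(new File("target", "classes"));
        archive.addClassPath(new File("target", "test-classes"));
        return archive;
    }
}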
You can overcome this situation in two ways:
1. Using a custom Glassfish domain configuration file
You can specify a domain configuration file (domain.xml) that contains the datasource definition for jdbc/mylog. This is what I currently do, as it is very flexible and the domain configuration file can contain other configuration as well. The config file needs to be specified as part of the test setup in the following way:
Map<String, Object> props = new HashMap<String, Object>();
// Point the embedded container at the checked-in Glassfish installation root,
// whose domain1/config/domain.xml defines the jdbc/mylog datasource.
props.put("org.glassfish.ejb.embedded.glassfish.installation.root",
        "./glassfish-install/glassfish");
container = EJBContainer.createEJBContainer(props);
context = container.getContext();
// You can look up the datasource too, to confirm that the setup is successful.
datasource = (DataSource) context.lookup("jdbc/mylog");
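Assuming the container is held in a static field, closing it after the test run releases the embedded domain's resources (a minimal sketch):

@AfterClass
public static void tearDown() {
    if (container != null) {
        container.close(); // shuts down the embedded Glassfish instance
    }
}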
The aforementioned glassfish-install directory and its sub-directory glassfish are present in the Maven project root (and also checked into version control); the glassfish directory must contain a domain1/config directory structure, mirroring the directory structure of a Glassfish domain named domain1. The other related files (the JDBC resource adapter JARs and the like) can be obtained from a Glassfish installation directory, but typically these might also be placed in the correct location by the embedded Glassfish runtime, if it is configured correctly.
The contents of the Glassfish domain configuration file are no different from the default one used by embedded Glassfish, except for the datasource and connection pool configuration (the relevant entries added in my use case, where I perform integration tests, are posted below):
<domain log-root="${com.sun.aas.instanceRoot}/logs" application-root="${com.sun.aas.instanceRoot}/applications" version="10.0">
<system-applications/>
<applications/>
<resources>
<jdbc-resource pool-name="MyPool" jndi-name="jdbc/mylog"/>
...
<jdbc-connection-pool driver-classname="" datasource-classname="org.apache.derby.jdbc.ClientDataSource" res-type="javax.sql.DataSource" description="" name="MyPool" ping="true">
<property name="User" value="APP"></property>
<property name="RetrieveMessageText" value="true"></property>
<property name="CreateDatabase" value="true"></property>
<property name="ServerName" value="localhost"></property>
<property name="Ssl" value="off"></property>
<property name="SecurityMechanism" value="4"></property>
<property name="TraceFileAppend" value="false"></property>
<property name="TraceLevel" value="-1"></property>
<property name="PortNumber" value="1527"></property>
<property name="LoginTimeout" value="0"></property>
<property name="Password" value="APP"></property>
<property name="databaseName" value="MYDB"></property>
</jdbc-connection-pool>
...
</resources>
<servers>
<server name="server" config-ref="server-config">
<resource-ref ref="jdbc/__TimerPool"/>
<resource-ref ref="jdbc/__default"/>
<resource-ref ref="jdbc/mylog"/>
</server>
</servers>
...
</domain>
The default domain.xml file can be downloaded from the java.net site and modified, in the event you wish to keep the changes as minimal as possible instead of copying one from a Glassfish installation.
2. Copying over the persistence.xml files
One can add goals to the Maven POM file to back up and copy the persistence.xml file from src/test/resources/META-INF to src/main/resources/META-INF before the test phase, and to restore the original after the test phase is complete. I will not go into the details, as a similar solution is already discussed in a related StackOverflow question; a sketch of the copy step is shown below. I did not use this approach for integration tests, as I required changes beyond those that can be carried in persistence.xml, like the creation of a custom realm. I do use it for unit tests, however, because the JPA provider fetches the persistence.xml file from target/classes instead of target/test-classes, despite the latter appearing first in the classpath order. If you use Hibernate as your JPA provider, enabling TRACE logging for the org.hibernate.ejb logger (the Ejb3Configuration class is responsible for performing the lookup) will convince you that the file in test-classes is not picked up.
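A minimal sketch of the copy step using the maven-antrun-plugin (the phase, execution id and paths are my assumptions; the backup and restore executions would be bound to neighbouring phases in the same way):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>use-test-persistence-xml</id>
      <phase>process-test-classes</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <!-- overwrite the main persistence.xml with the test variant -->
          <copy file="${project.build.testOutputDirectory}/META-INF/persistence.xml"
                todir="${project.build.outputDirectory}/META-INF"
                overwrite="true"/>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>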
Note:
Most of this answer assumes Glassfish 3.1, but it may hold for upcoming versions as well.
By "embedded glassfish container", do you mean a maven plugin that runs glassfish for you? The classpath for a maven plugin is different and managed differently than the maven test classpath. You might need to be working with a different classpath.
This answer might sound silly, but I was looking for a way that lets me run those tests from Eclipse via Run As -> JUnit Test. This is how I did it:
@BeforeClass
public static void setUp() throws IOException {
    // Files.copy(File, File) here is Guava's com.google.common.io.Files;
    // with plain JDK 7+ you would use java.nio.file.Files and Paths instead.
    Files.copy(new File("target/test-classes/META-INF/persistence.xml"),
               new File("target/classes/META-INF/persistence.xml"));
    // ...
}
I'm just copying the test/persistence.xml to classes/persistence.xml. This works.
Related
I'm struggling with my first attempt at a small application based on Spring, Hibernate on HSQLDB and JSF, finally deployed with Tomcat. I face two problems now.
First of all I tried to run a console Java application based on a main(String[] args) method, getting the customerBo bean from spring-module.xml and inserting/deleting in the embedded HSQLDB with Hibernate. Works like a charm.
1. Two configuration locations
The next step was a JSF page printing out sample bean content. That works as well. However, I had a struggle with resources and configuration:
resources: here is all my configuration for the database.
src/main/resources
\____ config
\____ database
\____ database.properties
\____ spring/beans
\____ data-source.xml
\____ hibernate-session-factory.xml
\____ spring-module.xml ... for Java Console Application
webapp: here is everything related to JSF, including the WEB-INF folder for the web-page side of the application.
src/main/webapp
\___ WEB-INF
\____ applicationContext.xml ... for web application
\____ faces-config.xml
\____ web.xml
\___ default.xhtml
How do I make them communicate with each other easily? If the web application starts from applicationContext.xml in webapp, it needs to work with the database configuration saved in resources. So it forces me to prefix all imports etc. with classpath:, such as:
<import resource="classpath:/spring-module.xml"/>
... or ...
<property name="location">
<value>classpath:/config/database/database.properties</value>
</property>
The Java console application still works well with these prefixes in place. Would you suggest a better way? I import into applicationContext.xml all the database stuff stored in spring-module.xml:
<beans xmlns=....>
<import resource="classpath:/spring-module.xml"/>
<import resource="classpath:/bean.xml"/>
<bean id="customerBo" class="nch.spring.customer.bo.impl.CustomerBoImpl"></bean>
</beans>
This is the URL I use to connect to HSQLDB
jdbc.url=jdbc:hsqldb:database/customers
2. HSQLDB on Tomcat
I deploy on Tomcat externally, not in the IDE, and run it at localhost:8080.
After injecting all beans (which works well, because I tested it on the console), I received an error on Tomcat. Here is the shortened version with the first lines of each cause:
org.springframework.transaction.CannotCreateTransactionException: Could not open Hibernate Session for transaction; nested exception is org.hibernate.exception.JDBCConnectionException: Could not open connection
Caused by: org.hibernate.exception.JDBCConnectionException: Could not open connection
Caused by: java.sql.SQLException: No suitable driver found for jdbc:hsqldb:database/customers
My database location is relative to the project; it's included in the WAR:
project
\____ src/main/java
\____ src/main/resources
\____ src/main/webapp
\____ database
\____ customers
\____ customers.script
\____ customers.lck
\____ customers.properties
And my pom.xml for HSQLDB:
<!-- HSQLDB -->
<dependency>
<groupId>org.hsqldb</groupId>
<artifactId>hsqldb</artifactId>
<version>2.3.2</version>
</dependency>
Why can't I print out the database content with Spring and Hibernate on JSF deployed on Tomcat, when I can print it out on the console? Is there another way to embed a database? Does Tomcat support HSQLDB at all? I tried to work with MySQL first, but it was too cumbersome for me.
In short:
HSQLDB works well with Spring and Hibernate, as proven by the Java console application.
JSF pages work as well and can print out the content of any bean.
My application refuses to communicate with the database once deployed on Tomcat. Here is my full source code on GitHub.
The core of the issue is that you're using HSQLDB with a database file that is actually a resource of your project (i.e. inside its classpath). This means that the file will be embedded inside your WAR, and you will not be able to update it (as contents inside the WAR file are read-only).
Your connection string for HSQLDB is:
jdbc.url=jdbc:hsqldb:database/customers
When run inside your IDE, be aware that the IDE does not package your application in a JAR but as an exploded directory. So when you run it as a console application, it will access the database file and update it just fine. However, when it is packaged in a WAR, it won't find it.
If your intent is read-only access to the database, you could configure HSQLDB with a Resource Database URL of the form:
jdbc:hsqldb:res:/database/customers
res: stored in a Java resource, such as a Jar, and always read-only.
This will load a database from the resource located in /database/customers of the classpath of your application.
However, if you want to update it, then you have to use another way. A typical way is to use a Server Database URL, where the database is hosted on a server, possibly localhost for testing purposes.
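For instance, a minimal sketch (paths, port and class names assumed) that serves the existing file database over TCP, so that both the console application and the Tomcat-deployed webapp can read and write it:

import org.hsqldb.server.Server;

public class CustomersDbServer {
    public static void main(String[] args) {
        Server server = new Server();
        server.setDatabaseName(0, "customers");               // alias used by clients
        server.setDatabasePath(0, "file:database/customers"); // existing file database
        server.setPort(9001);                                 // HSQLDB default port
        server.start();
        // Clients then connect with: jdbc:hsqldb:hsql://localhost:9001/customers
    }
}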
I am developing a Java EE 7 application and have a requirement for the application to be deployed onto application servers running either GlassFish 4.0 or WildFly 8.1.0. The issue I've got is GlassFish and WildFly use slightly different formats for JNDI names but I can't see how to make my application compatible with both.
In GlassFish my persistence.xml file references the data source jdbc/myDataSource, but in WildFly the data source needs to be java:/jdbc/myDataSource.
The same is also true for classes that are annotated with @Resource. In GlassFish the annotation for a class using JavaMail would be @Resource(name = "mail/myMailSession"), but to deploy onto WildFly this would need to be @Resource(name = "java:mail/myMailSession").
I know that I could unpack the EAR and JAR files to manually edit files such as persistence.xml but I can't do that for classes that have been annotated with #Resource.
Is there a way I can allow my compiled application to be deployed onto GlassFish and WildFly without maintaining two different versions of the code? I'm assuming the answer probably lies with application-specific deployment descriptors, but I can't find any examples that cover these two scenarios.
Can anyone point me in the right direction please?
You can modify the WildFly JNDI names and strip the undesired prefixes from the respective JNDI names to find the least common denominator in both app servers. The following works for me with Glassfish and JBoss AS 7.1. Since I expect WildFly to be backwards-compatible with JBoss in this regard, I guess it'll work for WildFly as well.
Persistence
Inject as:
@PersistenceContext(unitName="TestPU")
private EntityManager entityManager;
or via ejb-jar.xml:
<persistence-context-ref>
<persistence-context-ref-name>entityManager</persistence-context-ref-name>
<persistence-unit-name>TestPU</persistence-unit-name>
<injection-target> ... </injection-target>
</persistence-context-ref>
The corresponding persistence.xml:
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
<persistence-unit name="TestPU" transaction-type="JTA">
<jta-data-source>datasources/TestDS</jta-data-source>
<class>org.jeeventstore.persistence.jpa.EventStoreEntry</class>
<properties>
<property name="hibernate.show_sql" value="false"/>
<property name="hibernate.format_sql" value="true"/>
<property name="hibernate.hbm2ddl.auto" value="create-drop"/>
<property name="hibernate.connection.charSet" value="UTF-8"/>
<property name="eclipselink.logging.level" value="FINE"/>
<property name="eclipselink.logging.level.sql" value="FINE"/>
<property name="eclipselink.logging.parameters" value="true"/>
<property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
</properties>
</persistence-unit>
</persistence>
(note the simple jta-data-source JNDI name)
Here's a glassfish-resources.xml file used to specify a Derby database on deployment; a similar setup can be used for MySQL or Postgres.
<resources>
<jdbc-resource pool-name="ArquillianEmbeddedDerbyPool"
jndi-name="datasources/TestDS"/>
<jdbc-connection-pool name="ArquillianEmbeddedDerbyPool"
res-type="javax.sql.DataSource"
datasource-classname="org.apache.derby.jdbc.EmbeddedDataSource"
is-isolation-level-guaranteed="false">
<property name="databaseName" value="target/databases/derby"/>
<property name="createDatabase" value="create"/>
</jdbc-connection-pool>
</resources>
And the settings from the JBoss standalone.xml:
<datasource jta="true" jndi-name="java:/datasources/TestDS" pool-name="TestDS" enabled="true" use-ccm="false">
<connection-url>jdbc:postgresql://localhost/test_db</connection-url>
...
</datasource>
Resources
I have not injected a JavaMail component on Glassfish, but similar to the datasource settings, it might be worth a try to strip the "java:" part from the @Resource annotation as well:
@Resource(name = "mail/myMailSession")
and then configure WildFly so that the mail resource is available at the "java:mail/myMailSession" JNDI location.
Injection via ejb-jar.xml
Another option is to manually inject the fields via a ejb-jar.xml file, and then use a build tool such as maven to copy either of ejb-jar-glassfish.xml or ejb-jar-wildfly.xml to the desired ejb-jar.xml at assembly time.
In one of our projects we use a mixed approach to avoid the burden of the XML configuration: we configure a small number of "provider" beans via ejb-jar.xml to inject, e.g., the persistence context into a PersistenceContextProvider, and then use CDI to inject the PersistenceContextProvider into the EJBs via @EJB, which are found without further configuration since they reside in the same EAR. A sketch of such a provider is shown below.
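A minimal sketch of such a provider (class and member names are my assumptions, not taken from the project described above):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;

// Receives its EntityManager through the <persistence-context-ref> entry in
// ejb-jar.xml (injection-target = entityManager), so no unit name appears in code.
@Stateless
public class PersistenceContextProvider {

    private EntityManager entityManager; // populated by the container from ejb-jar.xml

    public EntityManager entityManager() {
        return entityManager;
    }
}

Other beans in the same EAR can then obtain it with @EJB private PersistenceContextProvider provider;.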
I haven't hit the mail dilemma just yet, but I've run into the same problem you're having when it comes to data source definitions, and my solution has been not to set up the data sources using the server's console, but to make them deployable together with your archive using the @DataSourceDefinition annotation. It turns out WildFly won't complain about java:app/blabla.. if the data source is set up during deployment!
Here is a real world example for you that works on both GlassFish and WildFly:
https://github.com/martinanderssondotcom/java-ee-concepts/../ArquillianDS.java
Note that the data source JNDI name declared is:
java:app/env/ArquillianDS
And here is the related persistence.xml file (don't mind the name of the file in this repository; the repository represents a test project that builds archives during runtime, and the app will change the name of the file in the archive to persistence.xml):
https://github.com/MartinanderssonDotcom/java-ee-concepts/../persistence-update.xml
Also note that the persistence unit needs a data source located using this JNDI name:
java:app/env/ArquillianDS
This deployment works perfectly fine with both GlassFish and WildFly. I've noted that if we declare the data source during deployment, then we pay the price of not seeing the data source listed anywhere in the admin gui/console. For me, that is a small price to pay in order to have a truly portable application. As an added bonus, I don't have to write lengthy installation/setup instructions. For all my projects, the data source is an intrinsic part of the application and I don't mind having a class file in the archive that represents the data source.
The above data source uses a Java DB (or "Apache Derby" for old-school people). As some comments in the ArquillianDS.java file describe, GlassFish has problems using a simple URL connection string combined with Java DB, hence I resorted to specifying all attributes of the @DataSourceDefinition explicitly. Recently, in another project of mine (alas not a public one), I used the same construct of deployment-time data source definition but targeting MySQL. Here's that data source definition, and it works on both servers:
@DataSourceDefinition(
    name = "java:app/env/maLivechatDS",
    url = "jdbc:mysql://localhost:3306/malivechat_db?createDatabaseIfNotExist=true&user=root&password",
    className = "com.mysql.jdbc.jdbc2.optional.MysqlDataSource"
)
@ManagedBean
public class MySQLDataSource { }
Note that the driver is MysqlDataSource and not MysqlXADataSource. One point in my application uses a rather complex transaction scheme, and GlassFish ran into problems if I used the XA driver. However, the non-XA driver used by my application still works properly with JTA transactions, so for me it was just a cheap trick to keep the boat floating. You should probably use the XA driver; see the sketch below.
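For completeness, the XA variant would presumably just swap the class name (MysqlXADataSource is the XA counterpart in Connector/J 5.x; an untested sketch):

@DataSourceDefinition(
    name = "java:app/env/maLivechatDS",
    url = "jdbc:mysql://localhost:3306/malivechat_db?createDatabaseIfNotExist=true&user=root&password",
    className = "com.mysql.jdbc.jdbc2.optional.MysqlXADataSource" // XA driver class
)
@ManagedBean
public class MySQLXADataSource { }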
For JNDI portability with the portable @DataSourceDefinition annotation, I tested it on payara-5.192, wildfly-17.0.1, tomee-8-M3 and openLiberty-19.0.0.7:
@DataSourceDefinition(
    name = "java:app/env/jdbc/mysql_app_name",
    className = "com.mysql.cj.jdbc.MysqlConnectionPoolDataSource",
    url = "jdbc:mysql://localhost:3306/db_name?characterEncoding=utf-8&zeroDateTimeBehavior=CONVERT_TO_NULL&user=root&password=password",
    minPoolSize = 1,
    properties = {"characterEncoding=utf-8", "zeroDateTimeBehavior=CONVERT_TO_NULL"})
I used it with MySQL connector 8.
Refer to the reference. For WildFly I created a startup bean class for configuration and set the annotation on that startup class; a sketch is shown below.
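A hedged sketch of such a startup bean (the class name is my assumption); placing @DataSourceDefinition on an eagerly instantiated singleton makes WildFly register the data source at deployment:

import javax.annotation.sql.DataSourceDefinition;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
@DataSourceDefinition(
    name = "java:app/env/jdbc/mysql_app_name",
    className = "com.mysql.cj.jdbc.MysqlConnectionPoolDataSource",
    url = "jdbc:mysql://localhost:3306/db_name?characterEncoding=utf-8&zeroDateTimeBehavior=CONVERT_TO_NULL&user=root&password=password")
public class DataSourceConfiguration {
    // No code needed; the class exists only to carry the annotation.
}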
For Open Liberty, add to server.xml:
<application id="app_name" contextRoot="/app_name" name="app_name" location="../app_name.war" type="war">
<classloader commonLibraryRef="mysql"/>
</application>
<library id="mysql">
<file name="/path_to/mysql-connector-java-8.0.17.jar"/>
</library>
and put the WAR file in the usr/servers/defaultServer folder.
I am trying to use Arquillian to test my JPA repository classes. However, I only get a NullPointerException telling me that it doesn't find the persistence.xml. How do you configure it in a standard Maven project?
Have you looked at the official documentation here? The project structure suggests that it's built using Maven.
If you still run into issues do post the exception messages.
Good luck!
It looks like your ShrinkWrap deployment does not contain the persistence.xml in the right path. The persistence.xml file should be located in the META-INF directory of a JAR, or in the WEB-INF/classes/META-INF directory of a WAR. You could verify this in two ways:
Through the verbosity flag of the Archive.toString(...) method:
In your @Deployment method, you can print out the contents of the archive using the toString method, like:
@Deployment
public static Archive<?> createDeployment() {
    WebArchive war = ShrinkWrap.create(WebArchive.class).addClasses(Foo.class);
    System.out.println(war.toString(true));
    return war;
}
Configuring Arquillian to write the generated deployment to disk:
You can add the engine configuration element to your arquillian.xml with the deploymentExportPath property, like:
<engine>
<property name="deploymentExportPath">target/deployment</property>
</engine>
This would instruct Arquillian to write the deployments it generates into a subdirectory under the target directory generated by Maven.
There is also a bunch of examples in the showcase project on GitHub, including JPA testing (also using the Arquillian Persistence Extension).
Hopefully this will lead you to the right path :)
I am new to Spring and inherited a Spring project that had all the XML configuration in ProjectName/WebContent/WEB-INF/applicationContext.xml. I'm trying to break the configuration into different components so it is easier to substitute things like DataSources and Hibernate configuration when testing.
Here is my file structure:
ProjectName
->WebContent
->WEB-INF
->applicationContext.xml
->spring-datasource.xml
->spring-hibernate-properties.xml
->spring-persistence.xml
->test
->us.mn.k12... (Java pkgs with JUnit tests)
->spring-hsqldb-datasource.xml
->spring-test-bean-locations.xml
->spring-test-hibernate-properties.xml
->src
->us.mn.k12... (Java pkgs with production code)
In WEB-INF/applicationContext.xml, I import the following:
<import resource="spring-datasource.xml"/> <!-- Production datasource -->
<import resource="spring-hibernate-properties.xml"/> <!-- Production hibernate properties -->
<import resource="spring-persistence.xml"/> <!-- DAO's, hibernate .hbm.xml mapping files -->
The application works with the above configuration.
My JUnit tests run using DbUnit and an HSQLDB in-memory database. So my JUnit test references spring-test-bean-locations.xml, which has the following:
<import resource="spring-hsqldb-datasource.xml"/> <!-- HSQLDB datasource for test -->
<import resource="../WebContent/WEB-INF/spring-persistence.xml"/> <!-- Production DAO's, hibernate .hbm.xml mapping files -->
<import resource="spring-test-hibernate-properties.xml"/> <!-- Hibernate properties for test -->
In this way, I can specify test datasource and hibernate properties, but reuse the production mapping file for the DAO's, etc. However, I get an error running my JUnit test. Here is the relevant part of the exception:
Caused by: org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Failed to import bean definitions from relative location [../WebContent/WEB-INF/spring-persistence.xml]
Offending resource: class path resource [spring-test-bean-locations.xml]; nested exception is org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from class path resource [../WebContent/WEB-INF/spring-persistence.xml]; nested exception is java.io.FileNotFoundException: class path resource [../WebContent/WEB-INF/spring-persistence.xml] cannot be opened because it does not exist
Now if I move spring-persistence.xml into /test so that I don't have to use the relative path, and reference it with <import resource="spring-persistence.xml"/>, then the tests run fine. So I think the contents of my XML files are OK, but I'm not properly importing with a relative path.
Is there anything obvious I'm doing wrong with my import of the relative path? And maybe the bigger question is does this look like a reasonable strategy for breaking applicationContext.xml into components to make it easier for testing?
Thanks!
The problem is: anything inside WEB-INF is not available to the ClassLoader in a regular project setup (and spring uses the ClassLoader by default to access resources). There are some hacks to work around this (like referencing the contexts using the file: prefix), but those are mostly ugly.
A better practice I'd suggest is to move the context files out of WEB-INF and into a dedicated resource directory (src/main/resources if you have a maven setup). That way they will be available to both the webapp ClassLoader and local unit test ClassLoaders.
Read the resources chapter to further understand the mechanisms involved.
Use
<import resource="file:**/WebContent/WEB-INF/spring-persistence.xml" />
It works in Spring 3.2.1.RELEASE; I am not sure about older versions.
We have a project called web-app1, which has a dependency on another jar file called core-app.jar, provided by another team as a shared library. Yet there is a hibernate.cfg.xml inside this core-app.jar, with the following content:
<hibernate-configuration>
<session-factory>
<property name="dialect">${hibernate.dialect}</property>
<property name="query.substitutions"><![CDATA[false 'N', true 'Y']]></property>
<property name="show_sql">false</property>
<property name="format_sql">false</property>
<property name="use_sql_comments">false</property>
<property name="generate_statistics">true</property>
<property name="hibernate.connection.release_mode">after_transaction</property>
<!-- Search Configurations -->
<property name="hibernate.search.default.directory_provider">org.hibernate.search.store.FSDirectoryProvider</property>
<property name="hibernate.search.default.indexBase">${lucene.index.home}</property>
<property name="hibernate.search.default.batch.merge_factor">10</property>
<property name="hibernate.search.default.batch.max_buffered_docs">10</property>
</session-factory>
</hibernate-configuration>
As seen in the Search Configurations section, there is a variable ${lucene.index.home} that should be replaced by other projects on different OS platforms.
So the question: does Maven provide a way to filter a dependency jar file and its content? Any plugins? war:war, unzip, dependencies? I couldn't figure out a fast way to do it. It looks to me that, no matter which plugin is adopted, it basically needs to do four things:
1. Unpack the jar in the process-resources phase.
2. Substitute the ${var} with the value defined in a profile.
3. Pack it back into a jar.
4. Copy it from the packing/unpacking workspace back to the Maven process path?
Did anyone run into a similar requirement before?
Thanks
I would assume that those values are meant to be set at runtime, likely as VM arguments; an illustration follows. It doesn't make sense to provide a jar file that has to be modified before it can be used.
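Assuming Hibernate resolves the ${...} placeholders from system properties (the property values below are made up for illustration), the equivalent of passing VM arguments would be:

// Same effect as launching with
// -Dhibernate.dialect=org.hibernate.dialect.MySQL5Dialect -Dlucene.index.home=/var/lucene/indexes
System.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQL5Dialect");
System.setProperty("lucene.index.home", "/var/lucene/indexes");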
If you really really REALLY have to do filtering at build time for configuration purposes, those configuration files should be filtered, NOT your dependencies. Then said file should either be bundled into multiple artifacts (assuming, of course, you are targeting multiple environments) or be provided outside the built artifact as an externalized resource.