Apache Ivy: Pointing to a repository manager - java

I'm trying to configure Ivy to play nicely with Artifactory (a popular repository manager) and need confirmation that I'm heading down the right path here.
I assume that once I have Artifactory configured correctly on my repository server, I can just point Ivy at some location under ARTIFACTORY_HOME and pass it the same information as I would an ordinary URL resolver. This way, Artifactory gets the request details (what operation to perform, username & password, etc.) and acts on the actual repository's behalf. (Thus, if the credentials are wrong, or if the user doesn't have permission for the request, Artifactory sends back a 404 or similar.)
So to start with, if this is wrong, please correct me!
So here is what I'm thinking. In the Ant build for my project:
<ivy:settings url="http://my-repo-server.com/ivy/settings/ivy-settings.xml"/>
<target name="ivy-resolve">
    <ivy:configure host="http://my-repo-server.com/ivy"
                   realm="ivy"
                   username="developer"      <!-- "developer" is read-only user configured in Artifactory -->
                   passwd="38ur84u83j38y83u" <!-- encrypted password provided by Artifactory -->
                   override="false"/>
    <ivy:resolve file="ivy.xml" conf="compile"/>
</target>
And then, in the (server-side) ivy-settings.xml:
<resolvers>
    <url name="repo-server">
        <ivy pattern="http://my-repo-server.com/ivy/???"/>
        <artifact pattern="http://my-repo-server.com/ivy/???"/>
    </url>
</resolvers>
So when a developer's Ant build runs the ivy-resolve target, the Ivy client knows where to go to find the ivy-settings.xml file, and configures itself using the developer user and encrypted password that I set up in Artifactory.
The only problem is, I'm not sure what to specify for the ivy and artifact patterns in the settings file. I assume I have to point them at some location that is controlled by Artifactory, but I'm not sure what that is (in Artifactory my repo uses the default Ivy repo layout).
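For what it's worth, here is my best guess at the resolver so far. Everything below is guesswork on my part: the artifactory context path, the libs-releases repo key, and the [organisation]/[module]/[revision] patterns are placeholders that would need to match however the repository and its layout are actually configured in Artifactory:
<resolvers>
    <url name="repo-server">
        <!-- guess: repo key and patterns must match the repository layout configured in Artifactory -->
        <ivy pattern="http://my-repo-server.com/artifactory/libs-releases/[organisation]/[module]/[revision]/ivy-[revision].xml"/>
        <artifact pattern="http://my-repo-server.com/artifactory/libs-releases/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/>
    </url>
</resolvers>
If that's nowhere close, a corrected version would be hugely appreciated.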
Also, am I violating Artifactory "best practices" by placing the settings file on the repo server like I did? Should I place it inside some Artifactory-managed (hidden) directory? If so, where and why?
Ultimately, code samples help me learn fastest & easiest (I'm a graphical learner), so any programmatic nudges in the right direction are enormously appreciated!
Thanks in advance!

Related

No suitable Deployment Server is defined for the project or globally

I have tried to resolve the problem by following this question: Netbeans 11.2: No suitable Deployment Server is defined for the project or globally, but my nb-configuration.xml file looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<project-shared-configuration>
<!--
This file contains additional configuration written by modules in the NetBeans IDE.
The configuration is intended to be shared among all the users of project and
therefore it is assumed to be part of version control checkout.
Without this configuration present, some functionality in the IDE may be limited or fail altogether.
-->
<properties xmlns="http://www.netbeans.org/ns/maven-properties-data/1">
<!--
Properties that influence various parts of the IDE, especially code formatting and the like.
You can copy and paste the single properties, into the pom.xml file and the IDE will pick them up.
That way multiple projects can share the same settings (useful for formatting rules for example).
Any value defined here will override the pom.xml file value but is only applicable to the current project.
-->
<org-netbeans-modules-maven-j2ee.netbeans_2e_hint_2e_j2eeVersion>1.8-web</org-netbeans-modules-maven-j2ee.netbeans_2e_hint_2e_j2eeVersion>
<org-netbeans-modules-maven-j2ee.netbeans_2e_hint_2e_deploy_2e_server>gfv5ee8</org-netbeans-modules-maven-j2ee.netbeans_2e_hint_2e_deploy_2e_server>
<netbeans.hint.jdkPlatform>JDK 1.8</netbeans.hint.jdkPlatform>
<org-netbeans-modules-maven-jaxws.rest_2e_config_2e_type>ide</org-netbeans-modules-maven-jaxws.rest_2e_config_2e_type>
</properties>
</project-shared-configuration>
So I have netbeans_2e_hint_2e_j2eeVersion set to 1.8-web and
netbeans_2e_hint_2e_deploy_2e_server set to gfv5ee8, as in that answer. But I still get the "no suitable deployment server" error. How can I resolve this?
Try this:
Right-click the project -> Properties -> Run.
Change Server to the one you want to use.

What is artifacts.xml?

I've been doing static analysis on Java projects, which usually boils down to running javac2 #a_list_of_all_the_java_files_in_the_project, where javac2 is my modified compiler. Except, finding the right libraries to make everything compile is difficult.
I'm working with a project now (incidentally, eclipse SDK 3.7.1) that has a file artifacts.xml in the root folder. This file looks useful. My understanding so far is that it tells eclipse which libraries to use when opening the folder as an eclipse project. If so, I'd like to download these libraries locally and reference them in my custom compilation command.
Can someone explain the purpose of artifacts.xml, and optionally offer feedback on my approach? Ultimately all I want is to be able to compile the project on the command line using a nonstandard compiler.
First few lines of artifacts.xml:
<?xml version='1.0' encoding='UTF-8'?>
<?artifactRepository version='1.1.0'?>
<repository name='Bundle pool' type='org.eclipse.equinox.p2.artifact.repository.simpleRepository' version='1'>
    <properties size='2'>
        <property name='p2.system' value='true'/>
        <property name='p2.timestamp' value='1315600353875'/>
    </properties>
    <mappings size='3'>
        <rule filter='(&amp; (classifier=osgi.bundle))' output='${repoUrl}/plugins/${id}_${version}.jar'/>
        <rule filter='(&amp; (classifier=binary))' output='${repoUrl}/binary/${id}_${version}'/>
        <rule filter='(&amp; (classifier=org.eclipse.update.feature))' output='${repoUrl}/features/${id}_${version}.jar'/>
    </mappings>
    <artifacts size='405'>
        <artifact classifier='osgi.bundle' id='org.eclipse.ecf.provider.filetransfer.ssl' version='1.0.0.v20110531-2218'>
            <properties size='1'>
                <property name='download.size' value='8460'/>
            </properties>
        </artifact>
This is the file that the Eclipse 'p2' install system uses to describe a repository of installable artifacts. The file is sometimes compressed into an artifacts.jar file. The mapping rules tell p2 where each artifact lives relative to the repository root; for example, an osgi.bundle entry resolves to plugins/<id>_<version>.jar, so the JARs you are after normally sit in the plugins/ directory next to artifacts.xml.
Eclipse p2 is described here: http://wiki.eclipse.org/Equinox/p2
A maven artifact, in general, is a file that gets deployed to a maven repo.
artifacts.xml is the canonical way of listing everything that needs to be sent to said repository.
Check this previous post for more information:
What is a Maven artifact?

How to enable debug info in ant4eclipse build using buildJdtProject?

I have tried setting different properties and attributes (debug="true"), but it didn't work.
This is from our build.xml (just showing the parts relating to the build step):
<!-- Get the environment -->
<property environment="env" />

<!-- Target: all -->
<target name="all" depends="build, test, export">
</target>

<!-- Target: build -->
<target name="build">
    <ant4eclipse:executeProjectSet workspaceDirectory="${env.WORKSPACE}" teamprojectset="${env.WORKSPACE}\${env.JOB_NAME}\projectSet.psf">
        <ant4eclipse:forEachProject filter="(executeProjectSet.org.eclipse.jdt.core.javanature=*)">
            <buildJdtProject workspaceDirectory="${env.WORKSPACE}" projectName="${executeProjectSet.project.name}" targetLevel="1.6" />
        </ant4eclipse:forEachProject>
    </ant4eclipse:executeProjectSet>
</target>
Detailed description:
An internal project consists of a large number of classes and some applications, all written in Java. Everything runs just fine when started from within Eclipse.
After each commit to our SVN repository, the project is built using ant4eclipse on our Hudson installation and if tests pass, a zip is automatically created and copied to a file server to be used by simply unpacking and starting the supplied startup batch script.
Now last week a colleague informed me that the version from the file server doesn't work for him. I checked and am able to reproduce the problem - loading data from a database doesn't work. No exception is shown in the log/console and I have no idea what goes wrong. Everything works when started from within eclipse (same vmargs, same JVM etc.).
When trying to connect the debugger, it seems like no debug info is present ("line numbers missing" etc.). So I now need to find out how to convince ant4eclipse to include debug info.
In the meantime, I found out how to do this myself: I added a default compiler options file like this (attribute defaultCompilerOptionsFile):
<!-- Target: build -->
<target name="build">
    <ant4eclipse:executeProjectSet workspaceDirectory="${env.WORKSPACE}" teamprojectset="${env.WORKSPACE}\${env.JOB_NAME}\projectSet.psf">
        <ant4eclipse:forEachProject filter="(executeProjectSet.org.eclipse.jdt.core.javanature=*)">
            <buildJdtProject
                workspaceDirectory="${env.WORKSPACE}"
                projectName="${executeProjectSet.project.name}"
                targetLevel="1.6"
                defaultCompilerOptionsFile="compilerOptions.prefs"/>
        </ant4eclipse:forEachProject>
    </ant4eclipse:executeProjectSet>
</target>
The compiler options file is just a copy of .metadata\.plugins\org.eclipse.core.runtime\.settings\org.eclipse.jdt.core.prefs from inside the workspace. Make sure the desired options are set inside the file:
org.eclipse.jdt.core.compiler.debug.lineNumber=generate
org.eclipse.jdt.core.compiler.debug.localVariable=generate
org.eclipse.jdt.core.compiler.debug.sourceFile=generate
I haven't tested whether it works if you create a compiler options file that contains just the 3 lines above.

Embedded GlassFish ignores Maven test resources

I have several session beans that I have written unit tests for. I have set up Maven to include a persistence.xml in the src/main/resources/META-INF directory that refers to a local MySQL database for development purposes. I have another persistence.xml in the src/test/resources/META-INF directory that refers to the embedded Derby database __default. The tests are deployed to an embedded GlassFish 3.1 container.
When I run the tests however, I get the following error:
java.lang.RuntimeException: javax.naming.NamingException: Lookup failed for 'jdbc/mylog'
jdbc/mylog is the MySQL database that the persistence unit in the main directory refers to. It is obviously ignoring the persistence unit in the test directory but I have no clue as to why.
Maven is setting the classpath correctly as far as I can tell, with test-classes before classes and a peek in the actual target/test-classes/META-INF directory reveals that it copied the correct, embedded Derby, persistence unit.
[DEBUG] Test Classpath :
[DEBUG] C:\Users\Laurens\Documents\Projects\Mylog\target\test-classes
[DEBUG] C:\Users\Laurens\Documents\Projects\Mylog\target\classes
[DEBUG] C:\Users\Laurens\.m2\repository\org\eclipse\persistence\eclipselink\2.2.0\eclipselink-2.2.0.jar
[DEBUG] C:\Users\Laurens\.m2\repository\org\eclipse\persistence\javax.persistence\2.0.3\javax.persistence-2.0.3.jar
[DEBUG] C:\Users\Laurens\.m2\repository\org\eclipse\persistence\org.eclipse.persistence.jpa.modelgen.processor\2.2.0\org.eclipse.persistence.jpa.modelgen.processor-2.2.0.jar
[DEBUG] C:\Users\Laurens\.m2\repository\org\glassfish\extras\glassfish-embedded-all\3.1\glassfish-embedded-all-3.1.jar
[DEBUG] C:\Users\Laurens\.m2\repository\javax\javaee-web-api\6.0\javaee-web-api-6.0.jar
[DEBUG] C:\Users\Laurens\.m2\repository\junit\junit\4.8.1\junit-4.8.1.jar
Any hint on how to have GlassFish use the proper persistence unit very much appreciated! Thanks!
When running tests against embedded Glassfish, the JPA provider does not use the classpath displayed on the command line before the maven-surefire-plugin goal (the one that runs the test phase) executes. Embedded Glassfish deploys the artifacts available in test scope as a ScatteredArchive. This scattered archive is typically created in the java.io.tmpdir directory, usually with a name like gfembed<a_random_number>tmp, unless the embedded Glassfish configuration specifies the location of a Glassfish installation root and a Glassfish domain.
When the embedded Glassfish domain is prepared with the deployed scattered archive, the files to be deployed are typically copied into an exploded directory that houses all the classes (including all dependencies) required by the application. This directory typically lives under GF_EMBED_DOMAIN_HOME/applications/<application_name>. The persistence.xml files from your src/main/resources/META-INF and src/test/resources/META-INF directories are both copied into the <application-name>/META-INF directory there. Needless to say, the one that gets copied last, i.e. the one that doesn't get overwritten, is the one used by the JPA provider during the tests. This always happens to be the file from src/main/resources/META-INF.
You can overcome this situation in two ways:
1. Using a custom Glassfish domain configuration file
You can specify a domain configuration file (domain.xml) that contains the datasource definition for jdbc/mylog. This is what I do currently, as it is very flexible and the domain configuration file can contain other configuration as well. The config file needs to be specified as part of the test setup in the following way:
Map<String, Object> props = new HashMap<String, Object>();
props.put("org.glassfish.ejb.embedded.glassfish.installation.root", "./glassfish-install/glassfish");
container = EJBContainer.createEJBContainer(props);
context = container.getContext();
datasource = (DataSource) context.lookup("jdbc/mylog"); //You can lookup the datasource too, to confirm that your setup is successful.
The aforementioned glassfish-install directory and its glassfish sub-directory are present in the Maven project root (and also checked into version control); the glassfish directory must contain a domain1/config directory structure representing the Glassfish domain named domain1, so the custom domain.xml ends up at glassfish-install/glassfish/domain1/config/domain.xml. The other related files (the JDBC resource adapter JARs and the like) can be obtained from a Glassfish installation directory, but typically these might also be placed in the correct location by the embedded Glassfish runtime, if configured correctly.
The contents of the Glassfish domain configuration file are no different from the default one used by embedded Glassfish, except for the datasource and connection pool configuration (the relevant entries added in my use case, where I perform integration tests, are posted below):
<domain log-root="${com.sun.aas.instanceRoot}/logs" application-root="${com.sun.aas.instanceRoot}/applications" version="10.0">
    <system-applications/>
    <applications/>
    <resources>
        <jdbc-resource pool-name="MyPool" jndi-name="jdbc/mylog"/>
        ...
        <jdbc-connection-pool driver-classname="" datasource-classname="org.apache.derby.jdbc.ClientDataSource" res-type="javax.sql.DataSource" description="" name="MyPool" ping="true">
            <property name="User" value="APP"></property>
            <property name="RetrieveMessageText" value="true"></property>
            <property name="CreateDatabase" value="true"></property>
            <property name="ServerName" value="localhost"></property>
            <property name="Ssl" value="off"></property>
            <property name="SecurityMechanism" value="4"></property>
            <property name="TraceFileAppend" value="false"></property>
            <property name="TraceLevel" value="-1"></property>
            <property name="PortNumber" value="1527"></property>
            <property name="LoginTimeout" value="0"></property>
            <property name="Password" value="APP"></property>
            <property name="databaseName" value="MYDB"></property>
        </jdbc-connection-pool>
        ...
    </resources>
    <servers>
        <server name="server" config-ref="server-config">
            <resource-ref ref="jdbc/__TimerPool"/>
            <resource-ref ref="jdbc/__default"/>
            <resource-ref ref="jdbc/mylog"/>
        </server>
    </servers>
    ...
    ...
The default domain.xml file can be downloaded from the java.net site and modified, in case you wish to keep the changes as minimal as possible instead of copying one from a Glassfish installation.
2. Copying over the persistence.xml files
One can add goals to the Maven POM file to back up and copy the persistence.xml file from src/test/resources/META-INF to src/main/resources/META-INF before the test phase. After the test phase is complete, the original is restored. I will not go into the details of this, as a similar solution is already discussed in a related StackOverflow question. I did not use this approach for integration tests, as I required changes beyond the ones that can be carried in persistence.xml, like creation of a custom realm. I do use it for unit tests, however, because the JPA provider will fetch the persistence.xml file from target/classes instead of target/test-classes, despite the latter appearing first in the classpath order. If you use Hibernate as your JPA provider, enabling TRACE logging for the org.hibernate.ejb logger (as the Ejb3Configuration class is responsible for performing the lookup) will convince you that the file in test-classes is not picked up.
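For reference, a rough sketch of what such a copy step might look like. This is only an illustration, not the exact setup from the linked question: it uses maven-antrun-plugin to overwrite the persistence.xml under target/classes with the test one once the test classes are in place (the plugin version, phase and paths are assumptions), which avoids touching the sources and therefore needs no restore step:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>1.7</version>
    <executions>
        <execution>
            <id>use-test-persistence-xml</id>
            <!-- runs after test resources/classes are in place, before surefire -->
            <phase>process-test-classes</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <target>
                    <!-- overwrite the main persistence.xml with the test one -->
                    <copy file="${project.build.testOutputDirectory}/META-INF/persistence.xml"
                          todir="${project.build.outputDirectory}/META-INF"
                          overwrite="true"/>
                </target>
            </configuration>
        </execution>
    </executions>
</plugin>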
Note:
Most of the answer assumes Glassfish 3.1 but may hold good for upcoming versions as well.
By "embedded glassfish container", do you mean a maven plugin that runs glassfish for you? The classpath for a maven plugin is different and managed differently than the maven test classpath. You might need to be working with a different classpath.
This answer might sound silly, but I was looking for a way that lets me run those tests from Eclipse via Run As -> JUnit Test. This is how I did it:
@BeforeClass
public static void setUp() throws IOException {
    // Assumes Guava's com.google.common.io.Files.copy(File, File);
    // with the plain JDK you could use java.nio.file.Files.copy with Paths instead.
    Files.copy(new File("target/test-classes/META-INF/persistence.xml"),
               new File("target/classes/META-INF/persistence.xml"));
    // ...
}
I'm just copying the test/persistence.xml to classes/persistence.xml. This works.

Can someone explain the ivy.xml dependency's conf attribute?

I can't find any thorough explanation of the Ivy dependency tag's conf attribute:
<dependency org="hibernate" name="hibernate" rev="3.1.3" conf="runtime, standalone -> runtime(*)"/>
See that conf attribute? I can't find any explanation (that I can understand) of the right hand side of the -> symbol. PLEASE keep in mind that I don't know the first thing about Maven, so please explain this attribute with that in mind.
Yes, I've already looked at this: http://ant.apache.org/ivy/history/latest-milestone/ivyfile/dependency.html
Thanks,
Dan
First of all, Ivy is not Maven ;)
Maven2 is a software project management and comprehension tool, whereas Ivy is only a dependency management tool.
Ivy heavily relies on a unique concept called configuration.
In Ivy, a module configuration is a way to use or to see the module.
For instance, you can have a test and a runtime configuration in your module. But you can also have a MySQL and an Oracle configuration, or a Hibernate and a JDBC configuration.
In each configuration, you can declare:
what artifacts (jar, war, ...) are required;
your dependencies on other modules, and which configurations of those dependencies you need. This is called configuration mapping.
So the conf attribute does precisely that: it describes a configuration mapping for a dependency.
The mapped child element (the "right hand side of the -> symbol" in the inline form) represents the name of the dependency configuration being mapped to. The '*' wildcard can be used to designate all configurations of that module.
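As a minimal sketch (the module name and configuration names below are invented, reusing the conf string from the question), a module might declare its own configurations and then map them onto a dependency's configurations:
<ivy-module version="2.0">
    <info organisation="myorg" module="mymodule"/>
    <configurations>
        <conf name="runtime" description="what the application needs at runtime"/>
        <conf name="standalone" extends="runtime" description="runtime plus whatever is needed outside a container"/>
    </configurations>
    <dependencies>
        <!-- left of "->": this module's confs; right of "->": the dependency's confs.
             "runtime(*)" means: hibernate's runtime conf, falling back to all of its confs
             if it doesn't define one named runtime. -->
        <dependency org="hibernate" name="hibernate" rev="3.1.3"
                    conf="runtime, standalone -> runtime(*)"/>
    </dependencies>
</ivy-module>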
See more at "Simplest Explanation of Ivy Configuration" from Charlie Hubbard
The important part of that is that Ivy downloads dependencies and organizes them.
An ivy-module (i.e. an ivy.xml file) has two main parts:
What dependencies do you need?
How do you want them organized?
The first part is configured under the <dependencies> element.
The second is controlled by the <configurations> element.
When Ivy is downloading these dependencies it needs to know what scopes to use when pulling these transitive dependencies (are we pulling this for testing, runtime, compilation, etc?). We have to tell Ivy how to map our configurations to Maven scopes so it knows what to pull.
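For illustration (this mapping is a common convention, not something from the question): when a dependency comes from a Maven repository, Ivy derives configurations such as compile, runtime, test and master from the POM's scopes, and you map your own confs onto those:
<!-- illustration only: map local confs onto the scope-derived confs of a Maven module -->
<dependency org="org.hibernate" name="hibernate-core" rev="3.6.10.Final"
            conf="compile->compile(*),master(*); runtime->runtime(*); test->test(*)"/>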
Maven2, on its side, has something called scope.
You can declare a dependency as being part of the test scope, or the compile scope, for example.
Then, depending on this scope, you will get the dependency's artifact (only one artifact per module in Maven2) along with its own dependencies, depending on their scope. Scopes are predefined in Maven2 and you can't change that.
That means :
There are a lot of unnecessary dependencies downloaded for many libraries.
For example, Hibernate downloads a bunch of JBoss JARs and the Display Tag downloads all the various web framework JARs. I found myself excluding almost as many dependencies as I added.
The problem is that Hibernate can be used with several cache implementations, several connection pool implementations, and so on. This can't be managed with scopes, whereas Ivy configurations offer an elegant solution to this kind of problem.
For instance, in Ivy, assuming hibernate has an Ivy file that declares such configurations, you can declare a dependency like this:
<dependency org="hibernate" name="hibernate" rev="2.1.8" conf="default->proxool,oscache"/>
to get hibernate with its proxool and oscache implementations, or like this:
<dependency org="hibernate" name="hibernate" rev="2.1.8" conf="default->dbcp,swarmcache"/>
to get hibernate with dbcp and swarmcache.
By mapping your default master configuration to "proxool,oscache" or to "dbcp,swarmcache", you specify exactly what you need from the "hibernate" module.
You can find those "proxool, ..." arguments by looking at the Ivy configurations defined for each module associated with the library. For instance:
<ivy-module version="2.0">
<info organisation="ssn-src" module="pc"/>
<configurations defaultconfmapping="default->default">
<conf name="default" />
<conf name="provided" description="they are provided by the env." />
<conf name="compile" extends="default,provided" />
<conf name="war" extends="default"/>
</configurations>
<dependencies>
Example:
Let's suppose modA has two configurations, default and test. (As a practical matter, it's going to be highly unusual to want to leave out the conf attribute of the dependency element.)
The ivy.xml for modA might have a dependency:
<dependency org="theteam" name="modB" rev="1.0" conf="default->*" />
The left hand side starts from default only, rather than from both default and test; the '*' on the right hand side makes modA's default depend on all of modB's configurations (say conf1, conf2, and conf3).
Or you might want to say that modA's default only depends on modB's conf1:
<dependency org="theteam" name="modB" rev="1.0" conf="default->conf1" />
I've read these answers and quite frankly I don't find them very helpful. I think they could be improved so I wrote down how I use and understand configurations by showing a practical example:
http://wrongnotes.blogspot.com/2014/02/simplest-explanation-of-ivy.html
Unfortunately, you have to understand a little about Maven and its dependencies, because Ivy uses Maven repositories to download those jar files. Therefore, Ivy has to understand Maven, and it passes that on to you. But I think I kept it fairly simple without going into too much detail about Maven.
Thanks VonC!
That helped me a lot.
When it comes to the options (configurations), tieTYT, you can find them in the ivy-[revision number].xml file in your Ivy repository, under organization name --> module name.
Here is an example configurations element from the JUnit 4.6 revision downloaded from http://www.springsource.com/repository/app/:
<configurations>
    <conf name="compile" visibility="public" description="Compile dependencies"/>
    <conf name="optional" visibility="public" extends="compile" description="Optional dependencies"/>
    <conf name="provided" visibility="public" description="Provided dependencies"/>
    <conf name="runtime" visibility="public" extends="compile" description="Runtime dependencies"/>
</configurations>
In my project's ivy.xml file, I have a configuration compile-test. In the dependencies element I have the following dependency:
<dependency org="org.junit" name="com.springsource.org.junit"
rev="4.6.0" conf="compile-test->compile" />
As you can see, my compile-test configuration depends on the compile configuration in JUnit's ivy.xml file.
It helped me once to understand things this way:
An ivy configuration is simply a name for some subset of the module's artifacts.
Dependencies between modules are specified in terms of configuration names.
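A small sketch of that idea (module and artifact names are invented): each published artifact is tagged with the configuration(s) it belongs to, so a configuration really is just a named subset of the module's artifacts:
<ivy-module version="2.0">
    <info organisation="myorg" module="mylib"/>
    <configurations>
        <conf name="core"/>
        <conf name="full" extends="core"/>
    </configurations>
    <publications>
        <!-- pulling "core" gives you mylib-core.jar only;
             pulling "full" gives you both jars, since full extends core -->
        <artifact name="mylib-core" type="jar" conf="core"/>
        <artifact name="mylib-extras" type="jar" conf="full"/>
    </publications>
</ivy-module>
A consumer declaring conf="default->core" on this module would then retrieve only mylib-core.jar, while default->full retrieves both.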
