OpenTelemetry muzzle matcher warning; instrumentation skipped in OSGi container - java

For a project I'm doing, I'm trying to run OpenTelemetry (OTEL) in an OSGi container. Here is the situation:
I have a simple Maven/Java application which includes the @WithSpan annotations via the io.opentelemetry:opentelemetry-extension-annotations dependency, version 1.12.0 (this can of course be changed). The application is fairly simple for now; it just calls different methods that are annotated and do some logging.
#WithSpan("multiple")
private static int multiple(int number) {
return number * 2;
}
The Java agent is successfully attached to the container via the start-up script.
The collector pipeline is running successfully via a YAML file; this has been verified by running the jar of another project with the agent against it. This collects the results and displays them in Jaeger.
However, the main application was not able to run, because the dependency does not export its classes/packages to OSGi. Therefore I wrapped it as an OSGi bundle. I did this as follows:
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <configuration>
    <instructions>
      <Bundle-Vendor>opentelemetry (Repackaged)</Bundle-Vendor>
      <Embed-Dependency>
        groupId=io.opentelemetry;artifactId=opentelemetry-extension-annotations;type=jar;classifier=;inline=true
      </Embed-Dependency>
      <Embed-Transitive>false</Embed-Transitive>
      <Export-Package>*</Export-Package>
      <Import-Package>!*</Import-Package>
      <Private-Package>io.opentelemetry.*</Private-Package>
    </instructions>
  </configuration>
</plugin>
However, after installing the wrapped OTEL bundle in the container to which the agent is attached and running the test project, I get the following error:
[otel.javaagent 2022-XX-XX XX:XX:XX:XXX +0000] [shell remote=/127.0.0.1:52855] WARN muzzleMatcher - Instrumentation skipped, mismatched references were found: opentelemetry-annotations [class io.opentelemetry.javaagent.instrumentation.otelannotations.WithSpanInstrumentationModule] on org.eclipse.osgi.internal.loader.EquinoxClassLoader@c89a9f[com.example.test-project:1.0.0.SNAPSHOT(id=15)]
Consequently, no spans show up in Jaeger. Does anybody have any suggestions? I tried changing the versions, including transitive dependencies, et cetera, but nothing seems to work.

Related

Integrating Cloudinary with Adobe AEM

I am trying to integrate Adobe AEM 6.3 (running on Java 1.8) with the Cloudinary SDK. I have done the following but keep hitting an exception that I am not able to resolve. Has anyone integrated Cloudinary with AEM and run into similar issues?
Add the dependency in the pom.xml for compiling the code.
<dependency>
  <groupId>com.cloudinary</groupId>
  <artifactId>cloudinary-core</artifactId>
  <version>1.24.0</version>
</dependency>
<dependency>
  <groupId>com.cloudinary</groupId>
  <artifactId>cloudinary-http44</artifactId>
  <version>1.24.0</version>
</dependency>
Build an OSGi plugin to ensure AEM gets the right jar files. For this purpose, I followed the steps to create a third-party RESTful service example. To build the bundle, I had to explicitly download the following jar files: cloudinary-1.0.14.jar, cloudinary-core-1.21.0.jar, cloudinary-http44-1.21.0.jar, commons-codec-1.10.jar, commons-collections-3.2.2.jar, commons-lang3-3.1.jar, commons-logging-1.2.jar, httpclient-4.4.jar, httpmime-4.4.jar, jsp-api-2.0.jar
Despite creating a bundle that has httpclient, I get the following exception when trying to upload an image to Cloudinary. Here's the code and the exception.
Code snippet
import com.cloudinary.*;
import com.cloudinary.utils.ObjectUtils;
import java.io.File;
import java.io.IOException;
import java.util.Map;
..
Cloudinary cloudinary = new Cloudinary("<<credentials>>");
...
File toUpload = new File("/Users/akshayranganath/Downloads/background-2633962_1280.jpg");
try {
    Map uploadResult = cloudinary.uploader().upload(toUpload, ObjectUtils.emptyMap());
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
Exception
Caused by: java.lang.NoClassDefFoundError: javax/net/ssl/HostnameVerifier
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.defineClass(BundleWiringImpl.java:2370)
at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.findClass(BundleWiringImpl.java:2154)
at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1542)
at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:79)
at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:2018)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at org.apache.http.impl.conn.SchemeRegistryFactory.createDefault(SchemeRegistryFactory.java:52)
at org.apache.http.impl.client.AbstractHttpClient.createClientConnectionManager(AbstractHttpClient.java:321)
at org.apache.http.impl.client.AbstractHttpClient.getConnectionManager(AbstractHttpClient.java:484)
at org.apache.http.impl.client.AbstractHttpClient.createHttpContext(AbstractHttpClient.java:301)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:818)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at com.cloudinary.Uploader.callApi(Uploader.java:317)
at com.cloudinary.Uploader.upload(Uploader.java:57)
at com.aem.community.core.models.HelloWorldModel.init(HelloWorldModel.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.sling.models.impl.ModelAdapterFactory.invokePostConstruct(ModelAdapterFactory.java:792)
at org.apache.sling.models.impl.ModelAdapterFactory.createObject(ModelAdapterFactory.java:607)
... 211 common frames omitted
Caused by: java.lang.ClassNotFoundException: javax.net.ssl.HostnameVerifier not found by MyBundle [550]
at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1574)
at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:79)
at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:2018)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 236 common frames omitted
This is the first time I am working with AEM and I may not be following the right steps. Please let me know if anyone has been able to get past this issue.
Update
Based on Alexander's suggestion and a pointer from another source, I added the following code to the parent pom.xml file.
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <version>3.5.0</version>
  <configuration>
    <instructions>
      <Embed-Dependency>*;scope=compile|runtime</Embed-Dependency>
      <Embed-Directory>OSGI-INF/lib</Embed-Directory>
      <Embed-Transitive>true</Embed-Transitive>
    </instructions>
  </configuration>
</plugin>
After making this change, the Cloudinary libraries were added to the bundle. Here's the output from AEM (http://localhost:4502/system/console/bundles):
Embedded-Artifacts: OSGI-INF/lib/cloudinary-http44-1.21.0.jar; g="com.cloudinary"; a="cloudinary-http44"; v="1.21.0", OSGI-INF/lib/commons-lang3-3.1.jar; g="org.apache.commons"; a="commons-lang3"; v="3.1", OSGI-INF/lib/httpclient-4.4.jar; g="org.apache.httpcomponents"; a="httpclient"; v="4.4", OSGI-INF/lib/httpcore-4.4.jar; g="org.apache.httpcomponents"; a="httpcore"; v="4.4", OSGI-INF/lib/commons-logging-1.2.jar; g="commons-logging"; a="commons-logging"; v="1.2", OSGI-INF/lib/commons-codec-1.9.jar; g="commons-codec"; a="commons-codec"; v="1.9", OSGI-INF/lib/httpmime-4.4.jar; g="org.apache.httpcomponents"; a="httpmime"; v="4.4", OSGI-INF/lib/cloudinary-core-1.21.0.jar; g="com.cloudinary"; a="cloudinary-core"; v="1.21.0"
However, I now get an error with this message:
org.apache.avalon.framework.logger -- Cannot be resolved
org.apache.log -- Cannot be resolved
I am able to resolve the org.apache.avalon.framework.logger error by adding a dependency on the Avalon framework. But I am not able to get past the org.apache.log issue. It looks like a version conflict is causing the problem.
This new error starts when I include the Cloudinary http44 library. This library doesn't appear to directly reference logging (see here for dependencies). Due to this error, the application still fails to go from the Installed to the Active state.
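A sketch of the usual workaround (an assumption, not verified against this exact setup): commons-logging references org.apache.avalon.framework.logger and org.apache.log (Avalon LogKit) only optionally, so those imports can be excluded in the maven-bundle-plugin instructions, letting the bundle resolve without them:
<Import-Package>
  !org.apache.avalon.framework.logger.*,
  !org.apache.log.*,
  *
</Import-Package>
The real-world Import-Package example in the answer below follows the same pattern.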
The Cloudinary libs are available as Maven artifacts. Such JAR files can be put into your bundle as private libraries with the maven-bundle-plugin.
The following sample works for me (even with a Cloudinary test account):
...
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <executions>
    <execution>
      <!-- Create the bundle late in the compile phase instead of the package phase,
           so the generated OSGi meta-data is available during JUnit tests. -->
      <id>run-before-tests</id>
      <phase>process-classes</phase>
      <goals>
        <goal>bundle</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <instructions>
      <Bundle-Name>Test Bundle</Bundle-Name>
      <Embed-Dependency>*;groupId=com.cloudinary;scope=compile|runtime</Embed-Dependency>
      <Embed-Directory>OSGI-INF/lib</Embed-Directory> <!-- not needed, but nice -->
      <Embed-Transitive>true</Embed-Transitive>
    </instructions>
  </configuration>
</plugin>
...
<dependencies>
  <dependency>
    <groupId>com.cloudinary</groupId>
    <artifactId>cloudinary-core</artifactId>
    <version>1.24.0</version>
  </dependency>
  <dependency>
    <groupId>com.cloudinary</groupId>
    <artifactId>cloudinary-http44</artifactId>
    <version>1.24.0</version>
  </dependency>
...
In general, embedding an external library can range from simple to cumbersome to impossible. It depends on the dependencies of the imported artifacts.
Check the dependency tree manually! (e.g. https://mvnrepository.com/)
You have to fiddle with 3 instructions:
Embed-Dependency
These are the libraries that are put into your bundle. Be careful with the asterisk operator; otherwise you may include far too many dependencies (in the case of AEM, easily half of the internet). But do not include too few! Extract the built bundle.jar to see what is actually included (in the case of Cloudinary it was easy).
Import-Package
Often the libs have far too many dependencies, especially if they come from another ecosystem (like Spring or JEE containers) or have a lot of semi-optional dependencies. With this setting you can tell OSGi that a bundle can be activated even if certain dependencies are not available.
Here is a real-world example:
<Import-Package>
  !com.sun.msv.*,
  !org.apache.log4j.jmx.*,
  !sun.misc.*,
  !org.jboss.logging.*,
  !org.apache.zookeeper.*,
  *
</Import-Package>
Export-Package
Normally the library should be bundle-private. But sometimes imports behave differently, or the lib does something automatically. So you should always check in the system console what your bundle is exporting. If it is not right, you have to fiddle with this setting manually:
Here is an example:
<Export-Package>
  !*.internal,
  !*.internal.*,
  !*.impl,
  !*.impl.*,
  com.mycompany.myproject.mybundle.*
</Export-Package>
By default all packages (*) are exported, except those named impl or internal; their child packages are private as well (the !*.impl.* rule). If the default doesn't work, use this instruction to export only what you need.
Whatever you export goes into the global OSGi space. As the AEM and Sling bundles are neither perfect nor 100% bug-free, please make sure that:
the startup/shutdown order of out-of-the-box AEM bundles is not changed
a deployment, re-deployment or un-deployment of your code does not start/stop any out-of-the-box AEM bundles
If you don't ensure this, you might experience strange deployment issues that are very difficult to find and solve.
So it is best NOT to export anything that is imported by any AEM out-of-the-box bundle. Everything else is for experts only. And even they tend to overestimate themselves and underestimate the long-term costs of patching AEM manually.
PS: the _removeheaders instruction can remove all OSGi instructions that are not needed at runtime. But only do this if you want to provide a bundle to the public and make it totally shiny. I would leave them in, as they are some kind of documentation.
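For reference, a minimal sketch of that instruction (the header names listed are typical bnd/maven-bundle-plugin build-time instructions, assumed here rather than taken from this answer):
<_removeheaders>
  Embed-Dependency, Embed-Transitive, Embed-Directory, Private-Package
</_removeheaders>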

Not able to run a Vaadin application on remote server

I am building a Vaadin application with Java. Here is the folder structure.
- com
-- my
--- WebTool
---- ToolUI.java
---- View_1.java
---- View_2.java
The entry point to the application is ToolUI.java, which has the method init() that takes a VaadinRequest as a parameter. It is in this file that I add View_1 and View_2 as views of the application and add navigation among them. Everything runs great when I run the application via the Eclipse IDE.
Now I have a requirement to deploy this application on a remote server. So I created a WAR of the project and deployed it on the server under the name MyWebTool.war.
Now when I try running the war with the command
java -jar MyWebTool.war
it gives me the error: no main manifest attribute, in MyWebTool.war
I am not sure what to set as the main class, since the init method gets invoked and sets the app running. So I put a blank main method inside ToolUI.java and added this plugin to the pom.xml file.
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <version>2.6</version>
  <configuration>
    <archive>
      <manifest>
        <addClasspath>true</addClasspath>
        <mainClass>com.my.WebTool.ToolUI</mainClass>
      </manifest>
    </archive>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <!-- Exclude an unnecessary file generated by the GWT compiler. -->
    <packagingExcludes>WEB-INF/classes/VAADIN/widgetsets/WEB-INF/**</packagingExcludes>
  </configuration>
</plugin>
But now, when trying to run the application, it says: Could not find or load main class com.my.WebTool.ToolUI
Can somebody please shed light on this? I don't know if I am missing something simple here, but at this point I am stuck. Thanks a lot.
For running WAR-packaged applications, you need a servlet container.
The servlet container provides all the basic infrastructure needed to run Java-based web applications.
One of the most common ways to do this is to deploy the WAR file to a Tomcat installation.
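For local testing, one common option (a sketch with stock plugin coordinates, assuming a standard Vaadin WAR build; nothing here is taken from the question) is to serve the application straight from Maven with the jetty-maven-plugin and reserve Tomcat for the remote deployment:
<plugin>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-maven-plugin</artifactId>
  <version>9.4.44.v20210927</version>
</plugin>
With that in place, mvn jetty:run (or jetty:run-war for the packaged WAR) serves the application without any main class or manifest entry; on the remote server, dropping MyWebTool.war into Tomcat's webapps directory achieves the same result.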

How to aggregate maven subproject javadoc output without regenerating javadoc

I have a largish multimodule Maven build. I need to generate the javadoc for all of the modules and produce an "aggregated" javadoc result that I can deploy to a box for consumption by users.
I did have this working perfectly fine for quite a while, until I tried implementing a custom taglet with specific features and requirements, which makes this more complicated to produce.
All of the submodules inherit a parent pom that is not the aggregator pom. In that parent pom I define the maven-javadoc-plugin. This is what it looked like before I added the custom taglet:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <version>2.10.4</version>
  <configuration>
    <additionalparam>-Xdoclint:none</additionalparam>
    <bottom>Unified Service Layer - bottom</bottom>
    <doctitle>Unified Service Layer - title</doctitle>
    <footer>Unified Service Layer - footer</footer>
    <groups></groups>
    <header>Unified Service Layer - header</header>
    <level>public</level>
    <packagesheader>Unified Service Layer - packagesheader</packagesheader>
    <top>Unified Server Layer - top</top>
    <windowtitle>Unified Service Layer - windowtitle</windowtitle>
  </configuration>
  <executions>
    <execution>
      <id>module-javadoc-jar</id>
      <phase>package</phase>
      <goals>
        <goal>jar</goal>
      </goals>
      <configuration>
        <show>protected</show>
        <detectLinks>false</detectLinks>
      </configuration>
    </execution>
    <execution>
      <id>aggregated-documentation</id>
      <phase>package</phase>
      <inherited>false</inherited>
      <goals>
        <goal>aggregate-jar</goal>
      </goals>
      <configuration>
        <show>protected</show>
        <detectLinks>false</detectLinks>
      </configuration>
    </execution>
  </executions>
</plugin>
With this, I could build all of the modules, which will generate their own javadoc (which I now know is just a validation step, as aggregate-jar doesn't use this output). I have a separate step I call from Jenkins that runs "javadoc:aggregate-jar" in the root project, which produces the aggregated javadoc jar that I deploy.
Again, this has been working fine until now.
I implemented a custom javadoc taglet which requires access to the Class object associated with the source file it is contained within. I got this to work, at least in the individual module builds, by adding the following to the configuration above:
<taglets>
  <taglet>
    <tagletClass>com.att.det.taglet.ValidationConstraintsTaglet</tagletClass>
  </taglet>
  <taglet>
    <tagletClass>com.att.det.taglet.ValidationConstraintsCombinedTaglet</tagletClass>
  </taglet>
</taglets>
<tagletArtifacts>
  <tagletArtifact>
    <groupId>com.att.detsusl.taglets</groupId>
    <artifactId>validationJavadocTaglet</artifactId>
    <version>0.0.1-SNAPSHOT</version>
  </tagletArtifact>
</tagletArtifacts>
In order to have the taglet get access to the class file, I had to add a minimal plugin configuration to each subproject pom.xml, which looks like this:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <tagletArtifacts combine.children="append">
      <tagletArtifact>
        <groupId>com.att.detsusl</groupId>
        <artifactId>artifact-name</artifactId>
        <version>${current.pom.version}</version>
      </tagletArtifact>
    </tagletArtifacts>
  </configuration>
</plugin>
With these minimal changes, I could run the build in each module, generate the javadoc, and examine the generated output in each module, verifying that it all worked.
However, the problem is that when I run "javadoc:aggregate-jar" in the root project, all of that already-built output is ignored. It reruns the javadoc generation for all of the subprojects, also ignoring the appended tagletArtifacts list in each subproject pom.xml file. As a result, I get ClassNotFound errors when it tries to get the class file.
I could "fix" this by putting all of the subproject GAVs into the top-level "tagletArtifacts" list, but I definitely do not want to do that. I liked the ability to specify this in the subproject pom.xml (with combine.children="append") to make it work.
What I need is an overall javadoc package for all of the subprojects, with the taglet able to get access to the class file, without forcing the parent pom to know about all of its subprojects. How can I do this?
I'm facing the same problem with all aggregate goals. I checked the source code of maven-javadoc-plugin, and it turns out that the aggregate goals work by traversing the submodules and collecting source files and nothing more, thus completely ignoring any configuration specified in the submodules.
During execution every submodule is completely ignored:
source
if ( isAggregator() && !project.isExecutionRoot() ) {
    return;
}
And during collection of source files submodules are traversed: source
if ( isAggregator() && project.isExecutionRoot() ) {
    for ( MavenProject subProject : reactorProjects ) {
        if ( subProject != project ) {
            List<String> sourceRoots = getProjectSourceRoots( subProject );
So at the moment, there is no way to do this.
This is not easy to fix either, since the whole plugin works by composing a single call to the actual javadoc tool. If you wanted to respect the settings in the submodules as well, you would have to merge their configuration blocks. While this would work in your case with tagletArtifacts, it does not work for all the settings you can specify, e.g. any form of filter, and can therefore not be done in a generic way.
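Given that, the only workaround inside the plugin is the one the asker wanted to avoid: merging every submodule's taglet artifacts into the aggregate execution by hand. A sketch assembled from the question's own snippets (the artifactId here is a placeholder):
<execution>
  <id>aggregated-documentation</id>
  <phase>package</phase>
  <inherited>false</inherited>
  <goals>
    <goal>aggregate-jar</goal>
  </goals>
  <configuration>
    <show>protected</show>
    <detectLinks>false</detectLinks>
    <tagletArtifacts>
      <!-- one entry per submodule whose classes the taglets must load -->
      <tagletArtifact>
        <groupId>com.att.detsusl</groupId>
        <artifactId>submodule-a</artifactId>
        <version>${project.version}</version>
      </tagletArtifact>
    </tagletArtifacts>
  </configuration>
</execution>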

No such method: JRE picks the wrong class when an ambiguity occurs

In my application I am facing the below exception:
java.lang.NoSuchMethodError: com.sun.jersey.core.spi.factory.ContextResolverFactory.init(Lcom/sun/jersey/core/spi/component/ProviderServices;Lcom/sun/jersey/core/spi/factory/InjectableProviderFactory;)V
at com.sun.jersey.api.client.Client.<init>(Client.java:212)
at com.sun.jersey.api.client.Client.<init>(Client.java:150)
at com.sun.jersey.api.client.Client.create(Client.java:476)
at com.example.data.DataReader.getData(DataReader.java:25)
at com.example.data.TestServlet.doGet(TestServlet.java:41)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:620)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
I found the reason for this exception, but I don't know how to resolve it. The problem is that I have two jars, namely jersey-bundle-1.1.5.1 and jersey-core-1.17.1, on my classpath. ContextResolverFactory is present in both jars, in the same package. The init method is present in jersey-core-1.17.1 but not in jersey-bundle-1.1.5.1. In the Windows build environment it works fine; that is, the JRE picks the ContextResolverFactory of jersey-core-1.17.1 and executes the init method. In the Linux environment, however, the JRE picks the ContextResolverFactory of jersey-bundle-1.1.5.1, tries to invoke the init method, and throws the above exception. I can't remove either jar blindly, since both jars are needed for different business purposes.
How can I fix this in both the Linux and Windows environments?
Why does it work fine in the Windows environment but not in Linux?
I fully agree with the commenters. Per se, it is bad practice to have the same class (in the same package) on the classpath multiple times. This will almost always cause trouble. The best thing would be to check whether or not you can make your code work with Jersey 1.17.1 and use only the jersey-core-1.17.1 jar.
However, I also understand that there are situations where you do not have control over these dependencies, i.e. where 3rd-party libraries depend on specific versions of a certain library and you just have to work around these issues.
In these cases it is important to notice that the default Java classloaders respect the order of the elements in the classpath. I assume that the order of the CLASSPATH variable in your Linux installation is different from that in your Windows installation.
If you are using an IDE such as Eclipse during development, please check the build path setup there, and try setting the CLASSPATH variable on your production system in exactly the same order.
For your reference please also check these other questions on stackoverflow:
Controlling the order of how JARs are loaded in the classpath
Is the order of the value inside the CLASSPATH matter?
In the case of Tomcat, the order of the JAR files in WEB-INF/lib cannot be defined. The only thing you could do here would be to ship the JAR file that needs to be loaded first to some other directory in your production environment, such as the JRE/lib directory, the Tomcat/common directory or the Tomcat/shared directory, which all have priority over the WEB-INF/lib directory. See Control the classpath ordering of jars in WEB-INF/lib on Tomcat 5? for details on how this worked on older Tomcat versions.
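If the build itself is Maven-based, the duplicate can also be caught at build time. A sketch using the maven-enforcer-plugin with the banDuplicateClasses rule from extra-enforcer-rules (stock coordinates; versions are assumptions, adjust as needed):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>3.0.0</version>
  <dependencies>
    <dependency>
      <!-- provides the banDuplicateClasses rule -->
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>extra-enforcer-rules</artifactId>
      <version>1.5.1</version>
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <id>ban-duplicate-classes</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <banDuplicateClasses>
            <!-- list every duplicate instead of stopping at the first -->
            <findAllDuplicates>true</findAllDuplicates>
          </banDuplicateClasses>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
This fails the build whenever the same fully qualified class comes from two jars, which would have flagged the jersey-bundle/jersey-core overlap before it behaved differently on Windows and Linux.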
One of the guiding principles that I try to follow when I develop my own applications is that I want to make them "dummy-proof." I want to make it as easy as possible on the end user.
Therefore, I would change the build of the applications to include ContextResolverFactory.class in your final jar (from jersey-core-1.17.1.jar). That's the general approach. The specific tool you use to achieve this might vary.
I would use Maven and the maven-shade-plugin. This plugin can even do what's called a relocation, where you provide the original package in the pattern tag and the desired new package location in the shadedPattern tag:
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>1.6</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <relocations>
              <relocation>
                <pattern>com.sun.jersey.core.spi.factory</pattern>
                <shadedPattern>${project.groupId}.${project.artifactId}.com.sun.jersey.core.spi.factory</shadedPattern>
              </relocation>
            </relocations>
            <artifactSet>
              <includes>
                <include>com.sun.jersey:jersey-core</include>
              </includes>
            </artifactSet>
            <minimizeJar>true</minimizeJar>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
Even if you're not experienced with Maven, you could still make a small side project whose only purpose is to relocate the package. Then you would add this dependency to your project and use it to reliably access the init() method.
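For illustration, the side project's shaded artifact would then be consumed like any other dependency (hypothetical coordinates):
<dependency>
  <groupId>com.example.shaded</groupId>
  <artifactId>jersey-core-relocated</artifactId>
  <version>1.0.0</version>
</dependency>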
If you are experienced with Maven, then I highly recommend splitting your project up into what's called a Maven multi-module POM project. This would be the new build order:
The Interface module
The Implementation Layer
The Runtime module
Here the Implementation Layer typically consists of many different modules that all depend upon the Interface module, and the Runtime module chooses the correct implementation at runtime.
You might not see the value if you currently have only one implementation... But down the road it adds flexibility: you can add more implementations easily, because your code never directly references an implementation; it always uses the interface and doesn't care which implementation is used.
So, this would make it harder on you, the developer, but easier on the end-user. Whether they're on windows, linux, or mac, it just works!
After checking the source code, I noticed that all the logic of init() was moved to the constructor.
So another option is to simply use the new constructor and catch the exceptional case where it's not there; in that case, you would just use the default constructor followed by the init() method:
ContextResolverFactory factory = null;
try {
    // variant where the constructor does the initialization
    factory = new ContextResolverFactory(providerServices, ipf);
} catch (NoSuchMethodError ex) {
    // fall back to the variant with a default constructor plus init()
    factory = new ContextResolverFactory();
    factory.init(providerServices, ipf);
}
// ...
ContextResolver<MyType> cr = factory.resolve(type, mediaType);
if (cr == null) // handle null and not null...
Hopefully this helps. Good luck!

Can a Maven plugin see the "configuration" tag from an "execution" section automatically?

I'm analyzing a Maven plugin that I can configure inside the configuration section of the plugin:
<plugin>
  ...
  <executions>...</executions>
  <configuration>
    <!-- items placed here are visible to the MOJO -->
  </configuration>
</plugin>
The plugin completely ignores any configuration items for an execution, though:
<plugin>
  ...
  <executions>
    <execution>
      <id>execution1</id>
      <phase>test</phase>
      <goals><goal>test</goal></goals>
      <configuration>
        <!-- items placed here are ignored -->
      </configuration>
    </execution>
  </executions>
</plugin>
I run Maven with mvn test. I'm sure that the execution takes place, as Maven prints its id correctly, but the plugin is not configured: it prints warnings about incorrect settings, warnings that do not appear when the <configuration> section is moved outside of <executions>.
The question: is it the way the plugin is implemented, i.e. that it accepts only "top-level" configuration? I've studied its source code, and it seemed to me that it's Maven that invokes the setters on a MOJO class, and that it's transparent to the plugin which section the options came from.
The MOJO is annotated with:
* @component
* @goal test
* @phase test
* @execute phase="jasmine-process-test-resources"
The plugin in question is forking a custom lifecycle.
The forked custom lifecycle will have the execution with the specified id (execution1) removed (as it is a forked lifecycle)
Thus any of the plugin's goals that are performed by the forked lifecycle will be missing their configuration. The main mojo itself should be getting the configuration; what goes wrong is the forked lifecycle executions.
I am guessing which plugin it is; if my guess is right, this is the custom lifecycle, and the warnings you are seeing come from other mojos, with text like:
JavaScript source folder was expected but was not found. Set configuration property
`jsSrcDir` to the directory containing your JavaScript sources. Skipping
jasmine:resources processing.
With a situation like this you will need to either put the <configuration> section in the outer block or configure the executions for the lifecycle.
Configuring the executions for the lifecycle will require adding executions with ids of the magic format. I am not 100% certain, but in your case you would define an additional execution with an id of either default-resources or jasmine-lifecycle-resources in order to ensure that the configuration takes effect, as sketched below.
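For illustration, such an execution might look like the following sketch (the magic id is the guess from above, jsSrcDir is the property named in the warning, and the plugin coordinates are elided):
<plugin>
  ...
  <executions>
    <execution>
      <!-- id matching the forked lifecycle's execution -->
      <id>jasmine-lifecycle-resources</id>
      <configuration>
        <jsSrcDir>src/main/javascript</jsSrcDir>
      </configuration>
    </execution>
  </executions>
</plugin>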
The less verbose way is just to put the configuration in the outer section and be done with it.
I had this issue with the base maven-install-plugin:2.5.2 using the maven:3.6.3-jdk-8 docker image.
Thanks to the accepted answer for putting me on the right track.
I don't fully understand this note in the documentation (at the end of the section), but it seems that you can give the goal an execution id, forcing it to use your configuration:
Note: Configurations inside the <executions> element used to differ from those that are outside in that they could not be used from a direct command line invocation because they were only applied when the lifecycle phase they were bound to was invoked. So you had to move a configuration section outside of the executions section to apply it globally to all invocations of the plugin. Since Maven 3.3.1 this is not the case anymore as you can specify on the command line the execution id for direct plugin goal invocation. Hence if you want to run the above plugin and its specific execution1's configuration from the command-line, you can execute:
mvn myqyeryplugin:queryMojo@execution1
My final working docker command:
docker run -it --rm --name parser -v "$(shell pwd)":/usr/src/parser -w /usr/src/parser maven:3.6.3-jdk-8 mvn -X install:install-file@install-my-jar-file
Where install-my-jar-file is my execution's id: <execution><id>install-my-jar-file</id>...
