Azure Synapse provides a managed Spark pool to which Spark jobs can be submitted.
How do I submit a Spark job (as a JAR), along with its dependencies, to the pool using Java?
If multiple jobs are submitted (each with its own set of dependencies), are the dependencies shared across the jobs, or are the jobs agnostic of each other?
For (1):
Add the following dependencies:
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-analytics-synapse-spark</artifactId>
    <version>1.0.0-beta.4</version>
</dependency>
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-identity</artifactId>
</dependency>
With the sample code below:
import com.azure.analytics.synapse.spark.SparkBatchClient;
import com.azure.analytics.synapse.spark.SparkClientBuilder;
import com.azure.analytics.synapse.spark.models.SparkBatchJob;
import com.azure.analytics.synapse.spark.models.SparkBatchJobOptions;
import com.azure.identity.DefaultAzureCredentialBuilder;

import java.util.*;

public class SynapseService {

    private final SparkBatchClient batchClient;

    public SynapseService() {
        batchClient = new SparkClientBuilder()
                .endpoint("https://xxxx.dev.azuresynapse.net/")
                .sparkPoolName("TestPool")
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildSparkBatchClient();
    }

    public SparkBatchJob submitSparkJob(String name, String mainFile, String mainClass, List<String> arguments, List<String> jars) {
        SparkBatchJobOptions options = new SparkBatchJobOptions()
                .setName(name)
                .setFile(mainFile)
                .setClassName(mainClass)
                .setArguments(arguments)
                .setJars(jars)
                .setExecutorCount(3)
                .setExecutorCores(4)
                .setDriverCores(4)
                .setDriverMemory("6G")
                .setExecutorMemory("6G");
        return batchClient.createSparkBatchJob(options);
    }

    /**
     * All possible Livy states: https://learn.microsoft.com/en-us/rest/api/synapse/data-plane/spark-batch/get-spark-batch-jobs#livystates
     *
     * Some of the values: busy, dead, error, idle, killed, not_started, recovering, running, shutting_down, starting, success
     *
     * @param id       the batch job id returned on submission
     * @param detailed whether to return detailed information about the job
     * @return the Spark batch job
     */
    public SparkBatchJob getSparkJob(int id, boolean detailed) {
        return batchClient.getSparkBatchJob(id, detailed);
    }

    /**
     * Cancels the ongoing Synapse Spark job.
     *
     * @param jobId id of the Synapse job
     */
    public void cancelSparkJob(int jobId) {
        batchClient.cancelSparkBatchJob(jobId);
    }
}
And finally, submit the Spark job:
SynapseService synapse = new SynapseService();
synapse.submitSparkJob("TestJob",
        "abfss://builds@xxxx.dfs.core.windows.net/core/jars/main-module_2.12-1.0.jar",
        "com.xx.Main",
        Collections.emptyList(),
        Arrays.asList("abfss://builds@xxxx.dfs.core.windows.net/core/jars/*"));
Finally, you will need to grant the necessary role:
Open Synapse Analytics Studio
Manage -> Access Control
Provide the role Synapse Compute Operator to the caller
To answer question 2:
When jobs are submitted to Synapse as JARs, each submission is equivalent to a spark-submit. So all the jobs are agnostic of each other and do not share each other's dependencies.
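For illustration, a sketch of two independent submissions, each shipping its own dependency set (the abfss paths are hypothetical placeholders):
// Each batch job carries its own jars list; nothing is shared between JobA and JobB,
// even if both lists happen to contain the same artifact.
synapse.submitSparkJob("JobA",
        "abfss://builds@xxxx.dfs.core.windows.net/a/main-a.jar",
        "com.xx.MainA",
        Collections.emptyList(),
        Arrays.asList("abfss://builds@xxxx.dfs.core.windows.net/a/libs/*"));
synapse.submitSparkJob("JobB",
        "abfss://builds@xxxx.dfs.core.windows.net/b/main-b.jar",
        "com.xx.MainB",
        Collections.emptyList(),
        Arrays.asList("abfss://builds@xxxx.dfs.core.windows.net/b/libs/*"));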
I'm somewhat new to Kotlin/Java, but I have been using AWS Lambda for several years now (all Python and Node). I've been trying to "successfully" enable SnapStart on a Spring Boot Lambda written in Kotlin, running on the java11 Corretto runtime (the only runtime supported currently), but it doesn't seem to be working as I would have expected.
I have hooked into the CRaC lifecycle methods beforeCheckpoint and afterRestore. In beforeCheckpoint I initialize the Spring Boot application, and I can see it in the deployment logs (AWS creates log streams for the deployment phase with SnapStart Lambdas).
However, the concerning thing is that I'm also seeing the Spring Boot app get initialized in the function invocation logs. I would have expected that to happen only during the deployment/initialization phase, when the snapshot is being created. As a result I'm not really seeing any tremendous improvement in latency overall.
Any ideas why this is happening?
I ran into essentially the same issue (with Java instead of Kotlin), and the solution was to switch the runtime handler from
org.springframework.cloud.function.adapter.aws.SpringBootStreamHandler
to
org.springframework.cloud.function.adapter.aws.FunctionInvoker::handleRequest
It is probably worth mentioning that, as of 2023-02-20, SnapStart isn't engaged for the $LATEST version of an AWS Lambda function, i.e. make sure you are invoking a particular published version. Beyond that, the Best practices for working with Lambda SnapStart article says that the main performance killers are dynamically loaded classes and network connections that need to be re-established from time to time.
From the SnapStart integration issue raised for Spring Cloud Function on GitHub, I tend to think that switching to org.springframework.cloud.function.adapter.aws.FunctionInvoker probably helps somewhat, but doesn't address the performance challenges mentioned above. I'm not sure if I'm interpreting olegz's advice correctly, but what has worked best so far for my AWS Lambda function built with Spring Boot/Spring Cloud Function is a "warm-up" config. It hooks into the CRaC lifecycle via beforeCheckpoint() and issues dummy requests to S3 and DynamoDB before the VM snapshot is made. This way most dynamically loaded classes are pre-loaded, and network connections are pre-established, before any subsequent function invocation takes place.
package eu.mycompany.mysamplesystem.attachmentstore.configuration;

import com.amazonaws.services.lambda.runtime.events.S3Event;
import eu.mycompany.mysamplesystem.attachmentstore.handlers.MainEventHandler;
import lombok.extern.slf4j.Slf4j;
import org.crac.Core;
import org.crac.Resource;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.services.s3.model.NoSuchKeyException;

import java.util.ArrayList;
import java.util.List;

@Configuration
@Slf4j
public class WarmUpConfig implements Resource {

    private final MainEventHandler mainEventHandler;

    public WarmUpConfig(final MainEventHandler mainEventHandler) {
        Core.getGlobalContext().register(this);
        this.mainEventHandler = mainEventHandler;
    }

    @Override
    public void beforeCheckpoint(org.crac.Context<? extends Resource> context) {
        log.debug("Warm-up MainEventHandler by issuing dummy requests");
        dummyS3Invocation();
        dummyDynamoDbInvocation();
    }

    @Override
    public void afterRestore(org.crac.Context<? extends Resource> context) {
    }

    public void dummyS3Invocation() {
        S3Event s3Event = generateWarmUpEvent("ObjectCreated:Put");
        try {
            mainEventHandler.handleRequest(s3Event, null);
            throw new IllegalStateException("Warm-up event processing should have reached S3 and failed with S3Exception");
        } catch (NoSuchKeyException e) {
            log.debug("S3Exception is expected, since it is a warm-up");
        }
    }

    public void dummyDynamoDbInvocation() {
        S3Event s3Event = generateWarmUpEvent("ObjectRemoved:Delete");
        mainEventHandler.handleRequest(s3Event, null);
    }

    private S3Event generateWarmUpEvent(String eventName) {
        S3Event.S3BucketEntity s3BucketEntity = new S3Event.S3BucketEntity("hopefully_non_existing_bucket", null, null);
        S3Event.S3ObjectEntity s3ObjectEntity = new S3Event.S3ObjectEntity("hopefully/non/existing.key", 0L, null, null, null);
        S3Event.S3Entity s3Entity = new S3Event.S3Entity(null, s3BucketEntity, s3ObjectEntity, null);
        List<S3Event.S3EventNotificationRecord> records = new ArrayList<>();
        records.add(new S3Event.S3EventNotificationRecord(null, eventName, null, null, null, null, null, s3Entity, null));
        return new S3Event(records);
    }
}
P.S.: The MainEventHandler is basically the entry point to all the business logic exposed by the Function.
@SpringBootApplication
@RequiredArgsConstructor
public class Lambda {

    private final MainEventHandler mainEventHandler;

    public static void main(String... args) {
        SpringApplication.run(Lambda.class, args);
    }

    @Bean
    public Function<Message<S3Event>, String> defaultFunctionLambda() {
        return message -> {
            Context context = message.getHeaders().get("aws-context", Context.class);
            return mainEventHandler.handleRequest(message.getPayload(), context);
        };
    }
}
I want to read specific files from an SFTP server: only compressed files, and each of them only once.
I run into a problem when handling the message: the filter defined on the remote files does not seem to be applied, because the handle method still receives files that should have been excluded.
Dependencies:
SpringBoot: 2.2.1
spring-integration: 5.2.1
spring-integration-jdbc: 5.2.1
spring-integration-sftp: 5.2.1
public IntegrationFlow buildSftpInboundIntegrationFlow() {
    return IntegrationFlows
            .from(
                Sftp
                    .inboundStreamingAdapter(buildSftpRemoteFileTemplate())
                    .remoteDirectory(getRemoteDirectoryPath())
                    .filter(buildRemoteFileFilter())
                    .remoteFileSeparator(
                        Optional
                            .ofNullable(getRemoteFileSeparator())
                            .orElse(DEFAULT_REMOTE_PATH_SEPARATOR))
                    .maxFetchSize(
                        Optional.ofNullable(getMaxFetchSize()).orElse(DEFAULT_MAX_FETCH_SIZE)),
                sourcePollingChannelAdapterSpec -> sourcePollingChannelAdapterSpec
                    .id(getSftpInboundStreamingAdapterIdentifier())
                    .autoStartup(true)
                    .poller(buildPollerSpec()))
            .handle(handleMessage())
            .get();
}
/**
 * Allows to build a regex to filter files.
 *
 * @return a regex as a {@link String}.
 */
private String buildRegexFileFilter() {
    return String.format(".*\\.%s", getFileExtensionToFilter());
}

/**
 * Allows to build an instance of {@link SftpRemoteFileTemplate}.
 *
 * @return an instance of {@link SftpRemoteFileTemplate}.
 */
private SftpRemoteFileTemplate buildSftpRemoteFileTemplate() {
    final SftpRemoteFileTemplate sftpRemoteFileTemplate = new SftpRemoteFileTemplate(getSftpSessionFactory());
    sftpRemoteFileTemplate.setAutoCreateDirectory(true);
    return sftpRemoteFileTemplate;
}

/**
 * Allows to build the filters to apply to the remote files.
 *
 * @return an instance of {@link CompositeFileListFilter}.
 */
@SuppressWarnings("resource")
private CompositeFileListFilter<LsEntry> buildRemoteFileFilter() {
    return new ChainFileListFilter<LsEntry>() // NOSONAR
        .addFilters(
            new SftpRegexPatternFileListFilter(buildRegexFileFilter()),
            getSftpPersistentAcceptOnceFileListFilter());
}

/**
 * Allows to build the poller specifications.
 *
 * @return an instance of {@link PollerSpec}.
 */
private PollerSpec buildPollerSpec() {
    return Pollers
        .fixedDelay(
            Optional.ofNullable(getPollerDelayInSeconds()).orElse(DEFAULT_POLLER_DELAY_IN_SECONDS),
            TimeUnit.SECONDS)
        .transactional()
        .transactionSynchronizationFactory(getTransactionSynchronizationFactory());
}
...
Do you have any ideas to suggest?
Why do I receive files in the handle method that should have been excluded by the remote filter?
Is it a bug? How can I get only the filtered messages?
It's a bug in the spring-integration and spring-integration-sftp modules in version 5.2.1.
It works after upgrading these dependencies to version 5.2.2 (released December 6, 2019).
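For reference, a minimal sketch of the corresponding Maven bump (coordinates assumed from the dependency list in the question; 5.2.x artifacts carry the .RELEASE suffix), with spring-integration-jdbc aligned the same way:
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-core</artifactId>
    <version>5.2.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-sftp</artifactId>
    <version>5.2.2.RELEASE</version>
</dependency>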
I have a basic Maven Java app that I created, and it depends on JeroMQ, which is a pure-Java implementation of ZeroMQ. Since I also need to wrap this Java app as a Windows service, I chose to use Apache Commons Daemon, and specifically followed this excellent example: http://web.archive.org/web/20090228071059/http://blog.platinumsolutions.com/node/234 Here's what the Java code looks like:
package com.org.SubscriberACD;

import java.nio.charset.Charset;

import org.zeromq.ZContext;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;

/**
 * JeroMQ Subscriber for Apache Commons Daemon
 */
public class Subscriber {

    /**
     * Single static instance of the service class
     */
    private static Subscriber subscriber_service = new Subscriber();

    /**
     * Static method called by prunsrv to start/stop
     * the service. Pass the argument "start"
     * to start the service, and pass "stop" to
     * stop the service.
     */
    public static void windowsService(String args[]) {
        String cmd = "start";
        if (args.length > 0) {
            cmd = args[0];
        }

        if ("start".equals(cmd)) {
            subscriber_service.start();
        }
        else {
            subscriber_service.stop();
        }
    }

    /**
     * Flag to know if this service
     * instance has been stopped.
     */
    private boolean stopped = false;

    /**
     * Start this service instance
     */
    public void start() {
        stopped = false;

        System.out.println("My Service Started " + new java.util.Date());

        ZContext context = new ZContext();
        Socket subscriber = context.createSocket(ZMQ.SUB);
        subscriber.connect("tcp://localhost:5556");
        String subscription = "MySub";
        subscriber.subscribe(subscription.getBytes(Charset.forName("UTF-8")));

        while (!stopped) {
            System.out.println("My Service Executing " + new java.util.Date());
            String topic = subscriber.recvStr();
            if (topic == null)
                break;
            String data = subscriber.recvStr();
            assert (topic.equals(subscription));
            System.out.println(data);
            synchronized (this) {
                try {
                    this.wait(60000); // wait 1 minute
                }
                catch (InterruptedException ie) {}
            }
        }

        subscriber.close();
        context.close();
        context.destroy();

        System.out.println("My Service Finished " + new java.util.Date());
    }

    /**
     * Stop this service instance
     */
    public void stop() {
        stopped = true;
        synchronized (this) {
            this.notify();
        }
    }
}
Then I created the following folder structure just like the tutorial suggested:
E:\SubscriberACD
\bin
\subscriberACD.exe
\subscriberACDw.exe
\classes
\com\org\SubscriberACD\Subscriber.class
\logs
I then navigated to the bin directory and issued the following command to install the service:
subscriberACD.exe //IS//SubscriberACD --Install=E:\SubscriberACD\bin\subscriberACD.exe --Description="Subscriber using Apache Commons Daemon" --Jvm=c:\glassfish4\jdk7\jre\bin\server\jvm.dll --Classpath=E:\SubscriberACD\classes --StartMode=jvm --StartClass=com.org.SubscriberACD.Subscriber --StartMethod=windowsService --StartParams=start --StopMode=jvm --StopClass=com.org.SubscriberACD.Subscriber --StopMethod=windowsService --StopParams=stop --LogPath=E:\SubscriberACD\logs --StdOutput=auto --StdError=auto
The install works fine since I can see it in Windows Services. However, when I try to start it from there, I get an error saying "Windows cannot start the SubscriberACD on Local Computer".
I checked the error logs and see the following entry:
2016-04-14 14:38:40 Commons Daemon procrun stderr initialized
Exception in thread "main" java.lang.NoClassDefFoundError: org/zeromq/ZContext
at com.org.SubscriberACD.Subscriber.start(Subscriber.java:57)
at com.org.SubscriberACD.Subscriber.windowsService(Subscriber.java:33)
Caused by: java.lang.ClassNotFoundException: org.zeromq.ZContext
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
... 2 more
It's worth noting that JeroMQ is currently a jar under my Maven Dependencies; I configured it from my POM.xml file.
I think the problem might be that my service doesn't have access to the JeroMQ jar under my Maven Dependencies. My assumption is that the class files don't contain the dependencies. So what I tried was exporting my entire project as a jar and sticking that baby under E:\SubscriberACD\classes\
So my structure now looks like this:
E:\SubscriberACD
\bin
\subscriberACD.exe
\subscriberACDw.exe
\classes
\com\org\SubscriberACD\
\Subscriber.class
\Subscriber.jar
\logs
However, that didn't fix the issue. Can anyone shed some light on this?
Change your --Classpath argument to:
--Classpath=E:\SubscriberACD\classes\your-jar-filename.jar
You almost certainly have other jarfiles you'll need, so just append them to the end of the --Classpath using ; (semi-colon) delimiters...
--Classpath=E:\SubscriberACD\classes\your-jar-filename.jar;e:\other-dir\classes\some-other.jar;etc...
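Applied to the install command from the question, the registration would then look something like this (identical flags otherwise; the jar names are placeholders for whatever jars your build actually produces):
subscriberACD.exe //IS//SubscriberACD --Install=E:\SubscriberACD\bin\subscriberACD.exe --Description="Subscriber using Apache Commons Daemon" --Jvm=c:\glassfish4\jdk7\jre\bin\server\jvm.dll --Classpath=E:\SubscriberACD\classes\Subscriber.jar;E:\SubscriberACD\classes\jeromq-x.y.z.jar --StartMode=jvm --StartClass=com.org.SubscriberACD.Subscriber --StartMethod=windowsService --StartParams=start --StopMode=jvm --StopClass=com.org.SubscriberACD.Subscriber --StopMethod=windowsService --StopParams=stop --LogPath=E:\SubscriberACD\logs --StdOutput=auto --StdError=auto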
I am trying to spawn multiple threads against an EJB 2.1 bean to test the load on the connection pool. Is there any way of doing so, or an open-source tool I could look into?
Thanks to anyone with experience in this.
Wrap your calls to your EJBs in a Servlet. There are a variety of tools in various languages to simulate load on web apps.
I use The Grinder for load testing OpenEJB. It's pretty great.
An example of a grinder.py file. This is essentially the client:
from net.grinder.script.Grinder import grinder
from net.grinder.script import Test
from javax.naming import Context,InitialContext
from java.util import Properties
# A shorter alias for the grinder.logger.output() method.
log = grinder.logger.output
tests = {
    "ping" : Test(1, "ping"),
    "add" : Test(2, "add"),
    "sum" : Test(3, "sum"),
}
# Wrap the log() method with our Test and call the result logWrapper.
# Calls to logWrapper() will be recorded and forwarded on to the real
# log() method.
#logWrapper = test1.wrap(log)
# Initial context lookup for EJB home.
p = Properties()
p[Context.INITIAL_CONTEXT_FACTORY] = "org.apache.openejb.client.RemoteInitialContextFactory"
p[Context.PROVIDER_URL] = "ejbd://127.0.0.1:4201";
loadBean = InitialContext(p).lookup("LoadBeanRemote")
pingBean = tests["ping"].wrap(loadBean)
addBean = tests["add"].wrap(loadBean)
sumBean = tests["sum"].wrap(loadBean)
# A TestRunner instance is created for each thread. It can be used to
# store thread-specific data.
class TestRunner:
    # This method is called for every run.
    def __call__(self):
        pingBean.ping()
        addBean.add(3, 4)
        sumBean.sum([3, 4, 5, 6])
Note that the grinder.py file is written in Jython, so you can hook up any Java client jars.
Here's an example grinder.properties file:
grinder.script grinder.py
grinder.processes 2
grinder.threads 20
grinder.runs 0
grinder.jvm.classpath=/Users/dblevins/work/grinder/openejb-3.1.4/lib/openejb-client-3.1.4.jar:\
/Users/dblevins/work/grinder/openejb-3.1.4/lib/javaee-api-5.0-3.jar:\
/Users/dblevins/work/grinder/openejb-3.1.4/lib/ejb31-experimental-api-3.1.4.jar:\
/Users/dblevins/work/grinder/load-beans/target/load-beans-1.0.jar
grinder.logDirectory logs
grinder.numberOfOldLogs 0
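To run the agent against this setup, The Grinder is launched as a plain Java process pointed at the properties file; something like the following, with the grinder.jar path adjusted to your installation:
java -classpath /path/to/grinder-3/lib/grinder.jar net.grinder.Grinder grinder.properties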
Then the LoadBean looks like so (the app you want to load test):
package org.superbiz.load;
import javax.ejb.*;
import java.lang.reflect.Method;
#Local
#Remote
#Singleton
#Lock(LockType.READ)
public class LoadBean implements Load {
public void ping() {
// do nothing
}
public int add(int a, int b) {
return a + b;
}
public int sum(int... items) {
int i = 0;
for (int item : items) {
i += item;
}
return i;
}
}
I am trying to invoke the maven-dependency-plugin programmatically. I am using Maven 3. The problem is that when I invoke it through pluginManager.executeMojo(session, execution), I receive the following error message:
[ERROR] The parameters 'project', 'local', 'remoteRepos', 'reactorProjects' for goal org.apache.maven.plugins:maven-dependency-plugin:2.1:unpack are missing or invalid
org.apache.maven.plugin.PluginParameterException: The parameters 'project', 'local', 'remoteRepos', 'reactorProjects' for goal org.apache.maven.plugins:maven-dependency-plugin:2.1:unpack are missing or invalid
at org.apache.maven.plugin.internal.DefaultMavenPluginManager
.populatePluginFields(DefaultMavenPluginManager.java:518)
at org.apache.maven.plugin.internal.DefaultMavenPluginManager
.getConfiguredMojo(DefaultMavenPluginManager.java:471)
at org.apache.maven.plugin.DefaultBuildPluginManager
.executeMojo(DefaultBuildPluginManager.java:99)
at com.sap.ldi.qi.osgi.OSGiManifesrMfHandlerMojo
.invokeMavenDependencyPlugin(OSGiManifesrMfHandlerMojo.java:139)
at com.sap.ldi.qi.osgi.OSGiManifesrMfHandlerMojo
.execute(OSGiManifesrMfHandlerMojo.java:100)
at org.apache.maven.plugin.DefaultBuildPluginManager
.executeMojo(DefaultBuildPluginManager.java:110)
at org.apache.maven.lifecycle.internal.MojoExecutor
.execute(MojoExecutor.java:144)
at org.apache.maven.lifecycle.internal.MojoExecutor
.execute(MojoExecutor.java:87)
at org.apache.maven.lifecycle.internal.MojoExecutor
.execute(MojoExecutor.java:79)
-- many lines stripped from stack trace --
[INFO] ----------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ----------------------------------------------------------------------
[INFO] Total time: 17.938s
[INFO] Finished at: Mon Nov 22 10:27:42 EET 2010
[INFO] Final Memory: 12M/23M
[INFO] ----------------------------------------------------------------------
[ERROR] Failed to execute goal
com.sap.ldi.qi:osgi-manifest-handler-plugin:0.0.1-SNAPSHOT:handle
(osgi-manifest-handler plugin) on project com.sap.ldi.demo.calc
.cmd.tests: The parameters 'project', 'local', 'remoteRepos',
'reactorProjects' for goal
org.apache.maven.plugins:maven-dependency-plugin:2.1:unpack are missing
or invalid -> [Help 1]
-- stripped rest --
As far as I know, the only required parameter for the unpack goal of the maven-dependency-plugin is artifactItems. I set the plugin configuration using the PluginExecution.setConfiguration() method. It seems that this plugin configuration is not correctly set.
Do you have any idea why this exception is thrown?
Here is the configuration that I am using:
<configuration>
    <artifactItems>
        <artifactItem>
            <groupId>com.sap.ldi.demo.calc</groupId>
            <artifactId>com.sap.ldi.demo.calc.cmd</artifactId>
            <version>0.1.2-SNAPSHOT</version>
            <type>jar</type>
            <overWrite>true</overWrite>
            <outputDirectory>target/demo-calc-stuff</outputDirectory>
            <includes>**/*.*</includes>
        </artifactItem>
    </artifactItems>
</configuration>
Thanks
One correction from my side: the Maven version used is not Maven 3.0 but Maven 3.0-beta-1. I see that BuildPluginManager.loadPlugin() in version 3.0-beta-1 takes two arguments, while the same method in version 3.0 takes three.
I am wondering, has anyone tried to invoke a Maven plugin programmatically with Maven 3.0 or Maven 3.0-beta-1? I am still trying to invoke it with Maven 3.0-beta-1, but it still returns the same exception as pasted above.
Here is an updated version of Mojo Executor designed for Maven 3:
package com.googlecode.boostmavenproject;

import java.util.Collections;

import org.apache.maven.execution.MavenSession;
import org.apache.maven.model.Plugin;
import org.apache.maven.plugin.MojoExecution;
import org.apache.maven.plugin.MojoExecutionException;
import org.apache.maven.plugin.descriptor.PluginDescriptor;
import org.apache.maven.project.MavenProject;
import org.codehaus.plexus.util.xml.Xpp3Dom;
import org.apache.maven.plugin.BuildPluginManager;
import org.apache.maven.plugin.descriptor.MojoDescriptor;
import org.codehaus.plexus.configuration.PlexusConfiguration;
import org.codehaus.plexus.util.xml.Xpp3DomUtils;
import org.sonatype.aether.repository.RemoteRepository;

/**
 * Executes an arbitrary mojo using a fluent interface. This is meant to be executed within the
 * context of a Maven mojo.
 *
 * Here is an execution that invokes the dependency plugin:
 * <pre>
 * executeMojo(
 *     plugin(
 *         groupId("org.apache.maven.plugins"),
 *         artifactId("maven-dependency-plugin"),
 *         version("2.0")
 *     ),
 *     goal("copy-dependencies"),
 *     configuration(
 *         element(name("outputDirectory"), "${project.build.directory}/foo")
 *     ),
 *     executionEnvironment(
 *         project,
 *         session,
 *         pluginManager
 *     )
 * );
 * </pre>
 *
 * @see http://code.google.com/p/mojo-executor/
 */
public class MojoExecutor
{
    /**
     * Entry point for executing a mojo
     *
     * @param plugin The plugin to execute
     * @param goal The goal to execute
     * @param configuration The execution configuration
     * @param env The execution environment
     * @throws MojoExecutionException If there are any exceptions locating or executing the mojo
     */
    public static void executeMojo(Plugin plugin, String goal, Xpp3Dom configuration,
        ExecutionEnvironment env) throws MojoExecutionException
    {
        if (configuration == null)
            throw new NullPointerException("configuration may not be null");
        try
        {
            MavenSession session = env.getMavenSession();

            PluginDescriptor pluginDescriptor = env.getPluginManager().loadPlugin(plugin,
                Collections.<RemoteRepository>emptyList(), session.getRepositorySession());
            MojoDescriptor mojo = pluginDescriptor.getMojo(goal);
            if (mojo == null)
            {
                throw new MojoExecutionException("Could not find goal '" + goal + "' in plugin "
                    + plugin.getGroupId() + ":"
                    + plugin.getArtifactId() + ":"
                    + plugin.getVersion());
            }
            configuration = Xpp3DomUtils.mergeXpp3Dom(configuration,
                toXpp3Dom(mojo.getMojoConfiguration()));
            MojoExecution exec = new MojoExecution(mojo, configuration);
            env.getPluginManager().executeMojo(session, exec);
        }
        catch (Exception e)
        {
            throw new MojoExecutionException("Unable to execute mojo", e);
        }
    }

    /**
     * Converts PlexusConfiguration to a Xpp3Dom.
     *
     * @param config the PlexusConfiguration
     * @return the Xpp3Dom representation of the PlexusConfiguration
     */
    private static Xpp3Dom toXpp3Dom(PlexusConfiguration config)
    {
        Xpp3Dom result = new Xpp3Dom(config.getName());
        result.setValue(config.getValue(null));
        for (String name : config.getAttributeNames())
            result.setAttribute(name, config.getAttribute(name));
        for (PlexusConfiguration child : config.getChildren())
            result.addChild(toXpp3Dom(child));
        return result;
    }

    /**
     * Constructs the {@link ExecutionEnvironment} instance fluently
     *
     * @param mavenProject The current Maven project
     * @param mavenSession The current Maven session
     * @param pluginManager The Build plugin manager
     * @return The execution environment
     * @throws NullPointerException if mavenProject, mavenSession or pluginManager
     *         are null
     */
    public static ExecutionEnvironment executionEnvironment(MavenProject mavenProject,
        MavenSession mavenSession,
        BuildPluginManager pluginManager)
    {
        return new ExecutionEnvironment(mavenProject, mavenSession, pluginManager);
    }

    /**
     * Builds the configuration for the goal using Elements
     *
     * @param elements A list of elements for the configuration section
     * @return The elements transformed into the Maven-native XML format
     */
    public static Xpp3Dom configuration(Element... elements)
    {
        Xpp3Dom dom = new Xpp3Dom("configuration");
        for (Element e : elements)
            dom.addChild(e.toDom());
        return dom;
    }

    /**
     * Defines the plugin without its version
     *
     * @param groupId The group id
     * @param artifactId The artifact id
     * @return The plugin instance
     */
    public static Plugin plugin(String groupId, String artifactId)
    {
        return plugin(groupId, artifactId, null);
    }

    /**
     * Defines a plugin
     *
     * @param groupId The group id
     * @param artifactId The artifact id
     * @param version The plugin version
     * @return The plugin instance
     */
    public static Plugin plugin(String groupId, String artifactId, String version)
    {
        Plugin plugin = new Plugin();
        plugin.setArtifactId(artifactId);
        plugin.setGroupId(groupId);
        plugin.setVersion(version);
        return plugin;
    }

    /**
     * Wraps the group id string in a more readable format
     *
     * @param groupId The value
     * @return The value
     */
    public static String groupId(String groupId)
    {
        return groupId;
    }

    /**
     * Wraps the artifact id string in a more readable format
     *
     * @param artifactId The value
     * @return The value
     */
    public static String artifactId(String artifactId)
    {
        return artifactId;
    }

    /**
     * Wraps the version string in a more readable format
     *
     * @param version The value
     * @return The value
     */
    public static String version(String version)
    {
        return version;
    }

    /**
     * Wraps the goal string in a more readable format
     *
     * @param goal The value
     * @return The value
     */
    public static String goal(String goal)
    {
        return goal;
    }

    /**
     * Wraps the element name string in a more readable format
     *
     * @param name The value
     * @return The value
     */
    public static String name(String name)
    {
        return name;
    }

    /**
     * Constructs the element with a textual body
     *
     * @param name The element name
     * @param value The element text value
     * @return The element object
     */
    public static Element element(String name, String value)
    {
        return new Element(name, value);
    }

    /**
     * Constructs the element containing child elements
     *
     * @param name The element name
     * @param elements The child elements
     * @return The Element object
     */
    public static Element element(String name, Element... elements)
    {
        return new Element(name, elements);
    }

    /**
     * Element wrapper class for configuration elements
     */
    public static class Element
    {
        private final Element[] children;
        private final String name;
        private final String text;

        public Element(String name, Element... children)
        {
            this(name, null, children);
        }

        public Element(String name, String text, Element... children)
        {
            this.name = name;
            this.text = text;
            this.children = children;
        }

        public Xpp3Dom toDom()
        {
            Xpp3Dom dom = new Xpp3Dom(name);
            if (text != null)
            {
                dom.setValue(text);
            }
            for (Element e : children)
            {
                dom.addChild(e.toDom());
            }
            return dom;
        }
    }

    /**
     * Collects Maven execution information
     */
    public static class ExecutionEnvironment
    {
        private final MavenProject mavenProject;
        private final MavenSession mavenSession;
        private final BuildPluginManager pluginManager;

        public ExecutionEnvironment(MavenProject mavenProject, MavenSession mavenSession,
            BuildPluginManager pluginManager)
        {
            if (mavenProject == null)
                throw new NullPointerException("mavenProject may not be null");
            if (mavenSession == null)
                throw new NullPointerException("mavenSession may not be null");
            if (pluginManager == null)
                throw new NullPointerException("pluginManager may not be null");
            this.mavenProject = mavenProject;
            this.mavenSession = mavenSession;
            this.pluginManager = pluginManager;
        }

        public MavenProject getMavenProject()
        {
            return mavenProject;
        }

        public MavenSession getMavenSession()
        {
            return mavenSession;
        }

        public BuildPluginManager getPluginManager()
        {
            return pluginManager;
        }
    }
}
I will attempt to contribute my changes back into the official Mojo Executor plugin.
Folks, I think I get it.
The problem is not in the version of Maven that I am using; it is in the configuration I am using to invoke the maven-dependency-plugin. The unpack goal of the maven-dependency-plugin requires the following parameters: artifactItems, local, project, reactorProjects and remoteRepos. Here is the correct version of the configuration used to invoke the plugin:
<configuration>
    <artifactItems>
        <artifactItem>
            <groupId>com.sap.ldi.demo.calc</groupId>
            <artifactId>com.sap.ldi.demo.calc.cmd</artifactId>
            <version>0.1.3-SNAPSHOT</version>
            <type>jar</type>
            <overWrite>true</overWrite>
            <outputDirectory>target/demo-calc-stuff</outputDirectory>
            <includes>**/*.*</includes>
        </artifactItem>
    </artifactItems>
    <local>${localRepository}</local>
    <project>${project}</project>
    <reactorProjects>${reactorProjects}</reactorProjects>
    <remoteRepos>${project.remoteArtifactRepositories}</remoteRepos>
</configuration>
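Expressed with the MojoExecutor fluent API from the answer above, the same invocation could be sketched like this (project, session and pluginManager are assumed to be the injected fields of the calling mojo; the ${...} expressions are resolved by Maven's parameter expression evaluator at execution time):
// Sketch: invoking dependency:unpack via the MojoExecutor helpers shown above.
executeMojo(
    plugin(groupId("org.apache.maven.plugins"),
           artifactId("maven-dependency-plugin"),
           version("2.1")),
    goal("unpack"),
    configuration(
        element("artifactItems",
            element("artifactItem",
                element("groupId", "com.sap.ldi.demo.calc"),
                element("artifactId", "com.sap.ldi.demo.calc.cmd"),
                element("version", "0.1.3-SNAPSHOT"),
                element("type", "jar"),
                element("overWrite", "true"),
                element("outputDirectory", "target/demo-calc-stuff"),
                element("includes", "**/*.*"))),
        element("local", "${localRepository}"),
        element("project", "${project}"),
        element("reactorProjects", "${reactorProjects}"),
        element("remoteRepos", "${project.remoteArtifactRepositories}")),
    executionEnvironment(project, session, pluginManager));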
Maven plugins are not meant to be invoked programmatically. They rely on values that are injected by the underlying Plexus container. So either you will have to find out how to inject those values, or you will have to rely on the default mechanism.
One thing you can use is the Maven Invoker. With that, you can programmatically launch Maven lifecycles, but they will execute in a separate VM. So if you need to change the model dynamically beforehand, you will need to serialize the model out to a temporary pom.xml and use that with the Maven Invoker. This is heavy stuff, but I have done it successfully some two years ago.
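For illustration, a minimal sketch of that approach using the maven-invoker shared library (the pom path is a placeholder; it assumes M2_HOME points at a Maven installation):
import java.io.File;
import java.util.Collections;

import org.apache.maven.shared.invoker.DefaultInvocationRequest;
import org.apache.maven.shared.invoker.DefaultInvoker;
import org.apache.maven.shared.invoker.InvocationRequest;
import org.apache.maven.shared.invoker.InvocationResult;
import org.apache.maven.shared.invoker.Invoker;

public class InvokerExample {
    public static void main(String[] args) throws Exception {
        // Point the request at the (possibly temporary, serialized) pom.xml and the goal to run.
        InvocationRequest request = new DefaultInvocationRequest();
        request.setPomFile(new File("/tmp/generated-pom.xml")); // placeholder path
        request.setGoals(Collections.singletonList("dependency:unpack"));

        // The invoker forks a separate Maven process (the separate VM noted above).
        Invoker invoker = new DefaultInvoker();
        invoker.setMavenHome(new File(System.getenv("M2_HOME"))); // assumes M2_HOME is set
        InvocationResult result = invoker.execute(request);

        if (result.getExitCode() != 0) {
            throw new IllegalStateException("Maven invocation failed");
        }
    }
}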