I am trying to develop a tool that takes a directory of Maven artifacts and uploads them to Nexus 3. The tool works, but I have a performance issue.
My program launches a separate Maven process for each artifact that must be uploaded. I'm curious whether these could be batched somehow.
I am using the maven-invoker library for executing Maven commands.
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;
import javax.inject.Inject;

public class MavenUploader {

    private final MavenDeployer mavenDeployer;

    @Inject
    public MavenUploader(MavenDeployer mavenDeployer) {
        this.mavenDeployer = mavenDeployer;
    }

    @Override
    public void uploadToRepository(Path pathToUpload) {
        try (Stream<Path> files = Files.walk(pathToUpload)) {
            files.filter(Files::isRegularFile) // Files.walk also yields directories
                 .forEach(path -> mavenDeployer.deployArtifact(path, "deploy:deploy-file")); // substitute whichever deploy goal you run
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
This is the class that is responsible for uploading the artifacts:
import java.nio.file.Path;
import java.util.Collections;
import javax.inject.Inject;
import org.apache.maven.shared.invoker.*;

public class MavenDeployer {

    private final InvocationRequest invocationRequest;
    private final Invoker invoker;

    @Inject
    public MavenDeployer(InvocationRequest invocationRequest, Invoker invoker) {
        this.invocationRequest = invocationRequest;
        this.invoker = invoker;
    }

    public void deployArtifact(Path pathToPom, String commandToExecute) {
        invocationRequest.setPomFile(pathToPom.toFile()); // the path parameter was unused before
        invocationRequest.setGoals(Collections.singletonList(commandToExecute));
        try {
            InvocationResult invocationResult = invoker.execute(invocationRequest);
            if (invocationResult.getExitCode() != 0) {
                throw new IllegalStateException("Deployment failed for " + pathToPom);
            }
        } catch (MavenInvocationException e) { // execute() throws this checked exception
            throw new IllegalStateException(e);
        }
    }
}
Each time the deployArtifact method is called, a new process is spawned. Is there a way to batch all the uploads so that they use the same process?
Not maven-deploy, but you can use this if you'd like: https://github.com/DarthHater/nexus-repository-import-scripts
I think it accomplishes what you want to do.
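Regarding batching: each Invoker.execute() call forks a fresh mvn process, so with maven-invoker alone there is little to batch. If process startup is the bottleneck, one alternative in the same spirit as those scripts (which drive the Nexus REST interface with curl) is to PUT each file straight to the hosted repository over HTTP from your own process, with no Maven fork at all. A rough sketch using Java 11's HttpClient; the repository URL, credentials, and path layout are placeholders to adapt:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.util.Base64;

public class NexusHttpUploader {

    // Placeholders -- adapt to your Nexus instance and repository.
    private static final String REPO_URL = "http://localhost:8081/repository/maven-releases/";
    private static final String AUTH =
            Base64.getEncoder().encodeToString("admin:admin123".getBytes());

    private final HttpClient client = HttpClient.newHttpClient();

    /**
     * PUTs one file to REPO_URL + relativePath, e.g.
     * "com/example/app/1.0/app-1.0.jar" for the standard maven2 layout.
     */
    public void upload(Path file, String relativePath) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(REPO_URL + relativePath))
                .header("Authorization", "Basic " + AUTH)
                .PUT(HttpRequest.BodyPublishers.ofFile(file))
                .build();
        HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());
        if (response.statusCode() >= 300) {
            throw new IllegalStateException(
                    "Upload of " + file + " failed: HTTP " + response.statusCode());
        }
    }
}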
When I build the project with Maven it's OK, but when I deploy it with Tomcat, I get a NullPointerException.
The class where the problem may be is PropertiesManager.
Log line: PropertiesManager.getApplicationProperties(PropertiesManager.java:31)
public class PropertiesManager {
private static final String PROPERTY_FILE_NAME =
"resources/application.properties";
private static PropertiesManager Instance;
private Properties properties;
private PropertiesManager() {
}
public static PropertiesManager getInstance() {
if (Instance == null) {
Instance = new PropertiesManager();
}
return Instance;
}
public Properties getApplicationProperties() {
if (properties == null) {
properties = new Properties();
try (InputStream stream = Thread.currentThread()
.getContextClassLoader()
.getResourceAsStream(PROPERTY_FILE_NAME)) {
properties.load(stream);
} catch (IOException e) {
throw new ApplicationException("Failed to load property file", e);
}
}
return properties;
}
}
And log line: ApplicationLifecycleListener.contextInitialized(ApplicationLifecycleListener.java:14)
Class ApplicationLifecycleListener:
public class ApplicationLifecycleListener implements ServletContextListener {
@Override
public void contextInitialized(ServletContextEvent sce) {
Properties applicationProperties = PropertiesManager.getInstance().getApplicationProperties();
DBManager.getInstance().initialize(applicationProperties);
}
@Override
public void contextDestroyed(ServletContextEvent sce) {
DBManager.getInstance().stopDb();
}
}
What could the problem be?
Without the file and the exact line where you see the NullPointerException (none of the files you provided have the lines shown in the log), it is difficult to be sure. But here is one hint: although you put the resource files to be built with Maven in the '<project>/src/main/resources' folder, when the WAR file is built and packed, your application resource files are placed in the 'WEB-INF/classes' folder, which is part of the application's default classpath.
Therefore, to reference them correctly using the method Thread.currentThread().getContextClassLoader().getResourceAsStream(...), you should not add the 'resources/...' prefix to the file name, since this method already looks for files in the default application classpath. Remove the prefix and see if it works. Please refer to this answer for more detail.
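In other words, the constant should drop the prefix, and it is worth guarding against a null stream: Properties.load(null) is exactly the kind of call that surfaces as the NullPointerException in your log, because getResourceAsStream returns null rather than throwing when the resource is missing. A sketch against the class from the question (assuming ApplicationException also has a message-only constructor):

private static final String PROPERTY_FILE_NAME = "application.properties";

public Properties getApplicationProperties() {
    if (properties == null) {
        properties = new Properties();
        try (InputStream stream = Thread.currentThread()
                .getContextClassLoader()
                .getResourceAsStream(PROPERTY_FILE_NAME)) {
            if (stream == null) { // null means the resource was not found on the classpath
                throw new ApplicationException("Property file not found: " + PROPERTY_FILE_NAME);
            }
            properties.load(stream);
        } catch (IOException e) {
            throw new ApplicationException("Failed to load property file", e);
        }
    }
    return properties;
}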
I have joined the ranks of Vert.x lovers. However, the single-threaded model may not work for me, because on my server there might be 50 file download requests at a moment. As a workaround I have created this class:
public abstract class BackgroundExecutor<T> {

    public abstract T onRun() throws Exception;
    public abstract void onSuccess(T result);
    public abstract void onException();

    private static final int poolSize = Runtime.getRuntime().availableProcessors();
    private static final long maxExecuteTime = 120000; // note: Vert.x interprets this value in nanoseconds
    private static WorkerExecutor mExecutor;
    private static final String BG_THREAD_TAG = "BG_THREAD";
    protected RoutingContext ctx;

    private boolean isThreadInBackground() {
        return Thread.currentThread().getName() != null
                && Thread.currentThread().getName().equals(BG_THREAD_TAG);
    }

    // onSuccess will not be called if an exception is thrown
    public BackgroundExecutor(RoutingContext ctx) {
        this.ctx = ctx;
        if (mExecutor == null) {
            mExecutor = MyVertxServer.vertx.createSharedWorkerExecutor("my-worker-pool", poolSize, maxExecuteTime);
        }
        if (!isThreadInBackground()) {
            /* we hand the work off before waiting on res.succeeded, because it might take long and keep other threads waiting */
            mExecutor.executeBlocking(future -> {
                try {
                    Thread.currentThread().setName(BG_THREAD_TAG);
                    T result = onRun();
                    future.complete(result);
                } catch (Exception e) {
                    GUI.display(e);
                    e.printStackTrace();
                    onException();
                    future.fail(e);
                }
            /* false here means executions are unordered and may run in parallel on the same context */
            }, false, res -> {
                if (res.succeeded()) {
                    onSuccess((T) res.result());
                }
            });
        } else {
            // already on the background thread; run inline to avoid double dispatch
            GUI.display("AVOIDED DUPLICATE BACKGROUND THREADING");
            System.out.println("AVOIDED DUPLICATE BACKGROUND THREADING");
            try {
                T result = onRun();
                onSuccess(result);
            } catch (Exception e) {
                GUI.display(e);
                e.printStackTrace();
                onException();
            }
        }
    }
}
allowing handlers to extend it and use it like this:
public abstract class DefaultFileHandler implements MyHttpHandler {

    public abstract File getFile(String suffix);

    @Override
    public void Handle(RoutingContext ctx, VertxUtils utils, String suffix) {
        new BackgroundExecutor<Void>(ctx) {
            @Override
            public Void onRun() throws Exception {
                File file = getFile(URLDecoder.decode(suffix, "UTF-8"));
                if (file == null || !file.exists()) {
                    utils.sendResponseAndEnd(ctx.response(), 404);
                    return null;
                }
                utils.sendFile(ctx, file);
                return null;
            }

            @Override
            public void onSuccess(Void result) {}

            @Override
            public void onException() {
                utils.sendResponseAndEnd(ctx.response(), 404);
            }
        };
    }
}
And here is how I initialize my Vert.x server:
vertx.deployVerticle(MainDeployment.class.getCanonicalName(),res -> {
if (res.succeeded()) {
GUI.display("Deployed");
} else {
res.cause().printStackTrace();
}
});
server.requestHandler(router::accept).listen(port);
And here is my MainDeployment class:
public class MainDeployment extends AbstractVerticle{
@Override
public void start() throws Exception {
// Different ways of deploying verticles
// Deploy a verticle and don't wait for it to start
for(Entry<String, MyHttpHandler> entry : MyVertxServer.map.entrySet()){
MyVertxServer.router.route(entry.getKey()).handler(new Handler<RoutingContext>() {
@Override
public void handle(RoutingContext ctx) {
String[] handlerID = ctx.request().uri().split(ctx.currentRoute().getPath());
String suffix = handlerID.length > 1 ? handlerID[1] : null;
entry.getValue().Handle(ctx, new VertxUtils(), suffix);
}
});
}
}
}
This is working just fine when and where I need it, but I still wonder whether there is a better way to handle concurrency like this on Vert.x. If so, an example would be really appreciated. Thanks a lot.
I don't fully understand your problem and reasons for your solution. Why don't you implement one verticle to handle your http uploads and deploy it multiple times? I think that handling 50 concurrent uploads should be a piece of cake for vert.x.
When deploying a verticle using a verticle name, you can specify the number of verticle instances that you want to deploy:
DeploymentOptions options = new DeploymentOptions().setInstances(16);
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);
This is useful for scaling easily across multiple cores. For example you might have a web-server verticle to deploy and multiple cores on your machine, so you want to deploy multiple instances to utilise all the cores.
http://vertx.io/docs/vertx-core/java/#_specifying_number_of_verticle_instances
Vert.x is a well-designed model, so concurrency issues do not occur. In general, Vert.x does not recommend the multi-threaded model, because it is not easy to handle; if you select a multi-threaded model, you have to think about shared data.
Simply put, if you only want to split the event-loop area, first check the number of CPU cores, and then set up the instance count:
DeploymentOptions options = new DeploymentOptions().setInstances(4);
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);
But if you have a 4-core CPU, you shouldn't set up more than 4 instances; setting the number any higher won't improve performance.
Vert.x concurrency reference:
http://vertx.io/docs/vertx-core/java/
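For completeness: if the underlying concern is blocking work (such as serving large files) inside handlers, Vert.x 3 already provides executeBlocking for exactly this, so a hand-rolled wrapper like BackgroundExecutor is largely unnecessary. A minimal sketch, assuming Vert.x 3; the file location and handler wiring are illustrative:

import io.vertx.core.Vertx;
import io.vertx.ext.web.RoutingContext;
import java.io.File;
import java.io.FileNotFoundException;

public class FileDownloadHandler {

    private final Vertx vertx;

    public FileDownloadHandler(Vertx vertx) {
        this.vertx = vertx;
    }

    public void handle(RoutingContext ctx) {
        vertx.executeBlocking(future -> {
            // This part runs on a worker thread, not the event loop.
            File file = new File("/data/downloads" + ctx.request().path()); // illustrative location
            if (file.exists()) {
                future.complete(file);
            } else {
                future.fail(new FileNotFoundException(file.getPath()));
            }
        }, false, res -> { // ordered=false: blocking jobs may run concurrently
            // The result handler runs back on the event loop.
            if (res.succeeded()) {
                ctx.response().sendFile(((File) res.result()).getPath());
            } else {
                ctx.response().setStatusCode(404).end();
            }
        });
    }
}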
Is it possible to have my app update the config settings at runtime? I can easily expose the settings I want in my UI, but is there a way to allow the user to update settings and make them permanent, i.e. save them to the config.yaml file? The only way I can see is to update the file by hand and then restart the server, which seems a bit limiting.
Yes. It is possible to reload the service classes at runtime.
Dropwizard by itself does not have a way to reload the app, but Jersey does.
Jersey uses a container object internally to maintain the running application. Dropwizard uses Jersey's ServletContainer class to run the application.
How to reload the app without restarting it -
Get a handle to the container used internally by Jersey. You can do this by registering an AbstractContainerLifecycleListener in the Dropwizard Environment before starting the app, and implementing its onStartup method as below -
In your main method where you start the app -
In your main method where you start the app -
//getting the container instance
environment.jersey().register(new AbstractContainerLifecycleListener() {
@Override
public void onStartup(Container container) {
//initializing container - which will be used to reload the app
_container = container;
}
});
Add a method to your app to reload it. It will take in a list of strings, which are the names of the service classes you want to reload. This method will call the reload method of the container with a new custom DropwizardResourceConfig instance.
In your Application class
public static synchronized void reloadApp(List<String> reloadClasses) {
DropwizardResourceConfig dropwizardResourceConfig = new DropwizardResourceConfig();
for (String className : reloadClasses) {
try {
Class<?> serviceClass = Class.forName(className);
dropwizardResourceConfig.registerClasses(serviceClass);
System.out.printf(" + loaded class %s.\n", className);
} catch (ClassNotFoundException ex) {
System.out.printf(" ! class %s not found.\n", className);
}
}
_container.reload(dropwizardResourceConfig);
}
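A hypothetical call site, for example from an admin task or endpoint (the class name is illustrative):

reloadApp(Arrays.asList("com.example.resources.UserResource"));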
For more details see the example documentation of jersey - jersey example for reload
Consider going through the code and documentation of the following files in Dropwizard/Jersey for a better understanding -
Container.java
ContainerLifeCycleListener.java
ServletContainer.java
AbstractContainerLifeCycleListener.java
DropWizardResourceConfig.java
ResourceConfig.java
No.
The YAML file is parsed at startup and given to the application as a Configuration object once and for all. I believe you can change the file after that, but it wouldn't affect your application until you restart it.
Possible follow up question: Can one restart the service programmatically?
AFAIK, no. I've researched and read the code somewhat for that, but couldn't find a way to do it yet. If there is one, I'd love to hear it :).
I made a task that reloads the main YAML file (useful if something in the file changes). However, it does not reload the environment. After researching this, it turns out Dropwizard uses a lot of final variables, and it's quite hard to reload these on the fly without restarting the app.
class ReloadYAMLTask extends Task {

    private final String yamlFileName;

    ReloadYAMLTask(String yamlFileName) {
        super("reloadYaml");
        this.yamlFileName = yamlFileName;
    }

    @Override
    public void execute(ImmutableMultimap<String, String> parameters, PrintWriter output) throws Exception {
        if (yamlFileName != null) {
            ConfigurationFactoryFactory<ServiceConfiguration> configurationFactoryFactory =
                    new DefaultConfigurationFactoryFactory<>();
            ValidatorFactory validatorFactory = Validation.buildDefaultValidatorFactory();
            Validator validator = validatorFactory.getValidator();
            ObjectMapper objectMapper = Jackson.newObjectMapper();
            final ConfigurationFactory<ServiceConfiguration> configurationFactory =
                    configurationFactoryFactory.create(ServiceConfiguration.class, validator, objectMapper, "dw");
            configurationFactory.build(new File(yamlFileName));
        }
    }
}
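Like any Dropwizard task, it can then be triggered through the admin port (8081 by default):

$ curl -X POST 'http://localhost:8081/tasks/reloadYaml'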
You can change the configuration in the YAML and read it while your application is running. This will not however restart the server or change any server configurations. You will be able to read any changed custom configurations and use them. For example, you can change the logging level at runtime or reload other custom settings.
My solution -
Define a custom server command. You should use this command to start your application instead of the "server" command.
ArgsServerCommand.java
public class ArgsServerCommand<WC extends WebConfiguration> extends EnvironmentCommand<WC> {
private static final Logger LOGGER = LoggerFactory.getLogger(ArgsServerCommand.class);
private final Class<WC> configurationClass;
private Namespace _namespace;
public static String COMMAND_NAME = "args-server";
public ArgsServerCommand(Application<WC> application) {
super(application, "args-server", "Runs the Dropwizard application as an HTTP server specific to my settings");
this.configurationClass = application.getConfigurationClass();
}
/*
* Since we don't subclass ServerCommand, we need a concrete reference to the configuration
* class.
*/
@Override
protected Class<WC> getConfigurationClass() {
return configurationClass;
}
public Namespace getNamespace() {
return _namespace;
}
@Override
protected void run(Environment environment, Namespace namespace, WC configuration) throws Exception {
_namespace = namespace;
final Server server = configuration.getServerFactory().build(environment);
try {
server.addLifeCycleListener(new LifeCycleListener());
cleanupAsynchronously();
server.start();
} catch (Exception e) {
LOGGER.error("Unable to start server, shutting down", e);
server.stop();
cleanup();
throw e;
}
}
private class LifeCycleListener extends AbstractLifeCycle.AbstractLifeCycleListener {
@Override
public void lifeCycleStopped(LifeCycle event) {
cleanup();
}
}
}
Method to reload in your Application -
_ymlFilePath = null; //class variable
public static boolean reloadConfiguration() throws IOException, ConfigurationException {
boolean reloaded = false;
if (_ymlFilePath == null) {
List<Command> commands = _configurationBootstrap.getCommands();
for (Command command : commands) {
String commandName = command.getName();
if (commandName.equals(ArgsServerCommand.COMMAND_NAME)) {
Namespace namespace = ((ArgsServerCommand) command).getNamespace();
if (namespace != null) {
_ymlFilePath = namespace.getString("file");
}
}
}
}
ConfigurationFactoryFactory configurationFactoryFactory = _configurationBootstrap.getConfigurationFactoryFactory();
ValidatorFactory validatorFactory = _configurationBootstrap.getValidatorFactory();
Validator validator = validatorFactory.getValidator();
ObjectMapper objectMapper = _configurationBootstrap.getObjectMapper();
ConfigurationSourceProvider provider = _configurationBootstrap.getConfigurationSourceProvider();
final ConfigurationFactory<CustomWebConfiguration> configurationFactory = configurationFactoryFactory.create(CustomWebConfiguration.class, validator, objectMapper, "dw");
if (_ymlFilePath != null) {
// Refresh logging level.
CustomWebConfiguration webConfiguration = configurationFactory.build(provider, _ymlFilePath);
LoggingFactory loggingFactory = webConfiguration.getLoggingFactory();
loggingFactory.configure(_configurationBootstrap.getMetricRegistry(), _configurationBootstrap.getApplication().getName());
// Get my defined custom settings
CustomSettings customSettings = webConfiguration.getCustomSettings();
reloaded = true;
}
return reloaded;
}
Although this feature isn't supported out of the box by Dropwizard, you're able to accomplish it fairly easily with the tools they give you.
Before I get started, note that this isn't a complete solution to the question asked, as it doesn't persist the updated config values to the config.yml. However, that would be easy enough to implement yourself simply by writing to the config file from the application. If anyone would like to write that implementation, feel free to open a PR on the example project I've linked below.
Code
Start off with a minimal config:
config.yml
myConfigValue: "hello"
And its corresponding configuration class:
ExampleConfiguration.java
public class ExampleConfiguration extends Configuration {
private String myConfigValue;
public String getMyConfigValue() {
return myConfigValue;
}
public void setMyConfigValue(String value) {
myConfigValue = value;
}
}
Then create a task which updates the config:
UpdateConfigTask.java
public class UpdateConfigTask extends Task {
ExampleConfiguration config;
public UpdateConfigTask(ExampleConfiguration config) {
super("updateconfig");
this.config = config;
}
@Override
public void execute(Map<String, List<String>> parameters, PrintWriter output) {
config.setMyConfigValue("goodbye");
}
}
Also for demonstration purposes, create a resource which allows you to get the config value:
ConfigResource.java
#Path("/config")
public class ConfigResource {
private final ExampleConfiguration config;
public ConfigResource(ExampleConfiguration config) {
this.config = config;
}
@GET
public Response handleGet() {
return Response.ok().entity(config.getMyConfigValue()).build();
}
}
Finally wire everything up in your application:
ExampleApplication.java (excerpt)
environment.jersey().register(new ConfigResource(configuration));
environment.admin().addTask(new UpdateConfigTask(configuration));
Usage
Start up the application then run:
$ curl 'http://localhost:8080/config'
hello
$ curl -X POST 'http://localhost:8081/tasks/updateconfig'
$ curl 'http://localhost:8080/config'
goodbye
How it works
This works simply by passing the same reference to the constructors of ConfigResource.java and UpdateConfigTask.java. If you aren't familiar with the concept, see here:
Is Java "pass-by-reference" or "pass-by-value"?
The linked classes above are to a project I've created which demonstrates this as a complete solution. Here's a link to the project:
scottg489/dropwizard-runtime-config-example
Footnote: I haven't verified that this works with the built-in configuration. However, the Dropwizard Configuration class, which you need to extend for your own configuration, does have various "setters" for internal configuration, but it may not be safe to update those outside of run().
Disclaimer: The project I've linked here was created by me.
I want to run an action (with a rule) when a file enters a folder in my Alfresco repository. The file needs to be moved to a new folder. The new folder will be named after the metadata property "subject" of the uploaded file.
I am not able to figure out how to do this. Does anyone have any tips?
(A repository webscript is also an option.)
This is how I see it:
import java.util.List;

import org.alfresco.repo.action.ParameterDefinitionImpl;
import org.alfresco.repo.action.executer.ActionExecuterAbstractBase;
import org.alfresco.service.cmr.action.Action;
import org.alfresco.service.cmr.action.ParameterDefinition;
import org.alfresco.service.cmr.dictionary.DataTypeDefinition;
import org.alfresco.service.cmr.model.FileFolderService;
import org.alfresco.service.cmr.model.FileNotFoundException;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeService;

public class MoveExecuter extends ActionExecuterAbstractBase {

    public static final String DESTINATION_FOLDER = "destination-folder";
    private FileFolderService fileFolderService;
    private NodeService nodeService;

    @Override
    protected void addParameterDefinitions(List<ParameterDefinition> paramList) {
        paramList.add(new ParameterDefinitionImpl(DESTINATION_FOLDER,
                DataTypeDefinition.NODE_REF,
                true,
                // want: a display label taken from the "subject" metadata of the incoming file
                getParamDisplayLabel(DESTINATION_FOLDER)));
    }

    @Override
    public void executeImpl(Action ruleAction, NodeRef actionedUponNodeRef) {
        NodeRef destinationParent = (NodeRef) ruleAction.getParameterValue(DESTINATION_FOLDER);
        if (nodeService.exists(destinationParent)) {
            try {
                fileFolderService.move(actionedUponNodeRef, destinationParent, null);
            } catch (FileNotFoundException e) {
                // Do nothing
            }
        } else {
            // TODO: create the destination folder named after the "subject"
            // metadata field of the incoming file, then move the file into it.
        }
    }
}
For such a simple action I'd just use JavaScript instead of a Java action.
Install the JavaScript addon from Google Code or GitHub (newer version), and just write your JavaScript code according to the API; you can run it in the console at runtime to test your code.
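If you'd rather stay in Java, here is a rough sketch of the core move logic the question is after: read the subject, find or create the matching folder, move the node. The property QName below is a placeholder; point it at wherever the "subject" metadata actually lives in your content model:

import org.alfresco.model.ContentModel;
import org.alfresco.service.cmr.model.FileFolderService;
import org.alfresco.service.cmr.model.FileNotFoundException;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeService;
import org.alfresco.service.namespace.QName;

public class MoveBySubjectHelper {

    // Placeholder property QName -- replace with the one from your model.
    private static final QName PROP_SUBJECT =
            QName.createQName("http://www.mycompany.com/model/content/1.0", "subject");

    private NodeService nodeService;
    private FileFolderService fileFolderService;

    /** Moves 'file' into a child folder of 'rootFolder' named after the file's subject. */
    public void moveBySubject(NodeRef file, NodeRef rootFolder) throws FileNotFoundException {
        String subject = (String) nodeService.getProperty(file, PROP_SUBJECT);
        if (subject == null || subject.isEmpty()) {
            return; // no subject to name the folder after
        }
        // Reuse an existing child folder with that name, or create it.
        NodeRef destination = nodeService.getChildByName(rootFolder, ContentModel.ASSOC_CONTAINS, subject);
        if (destination == null) {
            destination = fileFolderService.create(rootFolder, subject, ContentModel.TYPE_FOLDER).getNodeRef();
        }
        fileFolderService.move(file, destination, null);
    }

    public void setNodeService(NodeService nodeService) { this.nodeService = nodeService; }
    public void setFileFolderService(FileFolderService fileFolderService) { this.fileFolderService = fileFolderService; }
}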
The following code works when I execute the Pig script locally while specifying a local GeoIPASNum.dat file. However, it does not work when run in MapReduce distributed mode. What am I missing?
Pig job
DEFINE AsnResolver AsnResolver('/hdfs/location/of/GeoIPASNum.dat');
loaded = LOAD 'log_file' Using PigStorage() AS (ip:chararray);
columned = FOREACH loaded GENERATE AsnResolver(ip);
STORE columned INTO 'output/' USING PigStorage();
AsnResolver.java
public class AsnResolver extends EvalFunc<String> {
String ipAsnFile = null;
@Override
public String exec(Tuple input) throws IOException {
try {
LookupService lus = new LookupService(ipAsnFile,
LookupService.GEOIP_MEMORY_CACHE);
return lus.getOrg((String) input.get(0));
} catch (IOException e) {
}
return null;
}
public AsnResolver(String file) {
ipAsnFile = file;
}
...
}
The problem is that you are using a string reference to an HDFS path and the LookupService constructor can't resolve the file. It probably works when you run it locally since the LookupService has no problem with a file in your local FS.
Override the getCacheFiles method:
@Override
public List<String> getCacheFiles() {
List<String> list = new ArrayList<String>(1);
list.add(ipAsnFile + "#GeoIPASNum.dat");
return list;
}
Then change your LookupService constructor to use the Distributed Cache reference to "GeoIPASNum.dat" :
LookupService lus = new LookupService("GeoIPASNum.dat", LookupService.GEOIP_MEMORY_CACHE);
Search for "Distributed Cache" in this page of the Pig docs: http://pig.apache.org/docs/r0.11.0/udf.html
The example it shows using the getCacheFiles() method should ensure that the file is accessible to all the nodes in the cluster.
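Putting both changes together, the UDF might end up looking roughly like this (a sketch; it also caches the LookupService across calls instead of re-opening the database for every tuple):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import com.maxmind.geoip.LookupService;

public class AsnResolver extends EvalFunc<String> {

    private final String ipAsnFile; // HDFS path passed in the DEFINE clause
    private LookupService lus;      // cached so the .dat file is opened once per task

    public AsnResolver(String file) {
        ipAsnFile = file;
    }

    @Override
    public List<String> getCacheFiles() {
        // Ships the HDFS file to every node; it appears locally as "GeoIPASNum.dat".
        List<String> list = new ArrayList<String>(1);
        list.add(ipAsnFile + "#GeoIPASNum.dat");
        return list;
    }

    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        if (lus == null) {
            // Use the symlinked name from the distributed cache, not the HDFS path.
            lus = new LookupService("GeoIPASNum.dat", LookupService.GEOIP_MEMORY_CACHE);
        }
        return lus.getOrg((String) input.get(0));
    }
}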