I have looked around and around, but I have not been able to find a good answer to this. I would like to create a system based on Quartz that allows people to schedule their own tasks. I will use a pseudo example.
Let's say my main method for my Quartz program is called quartz.java.
Then I have a file called sweep.java that implements the Quartz "job" interface.
So in my quartz.java, I schedule my sweep.java to run every hour. I run quartz.java, and it works fine. GREAT. However, now I want to add a dust.java to the Quartz scheduler, and since this is a production service, I don't want to have to stop my quartz.java file, add in my dust.java, and recompile and run quartz.java again. This downtime would be unacceptable.
Does anyone have any ideas on how I could accomplish this? It seems impossible, because how could you ever feed another Java file into the program without recompiling, linking, etc.?
I hope that this example is clear. Please let me know if I need to clarify any part of it.
Partial answer: it is possible to compile, and then instantiate, a class programmatically.
Here are links to example code:
how to compile from a String;
CompilerOutput;
CompilerOutputDirectory.
The extracted class is grabbed in the third source file (see method getGeneratedClass, which returns a Class<?> object).
HOWEVER: keep in mind that doing so is potentially dangerous. One problem, which can be quite serious if you are not careful, is that when you dynamically instantiate a class, its static initialization blocks are executed, and these can potentially wreak havoc on your application. So, in addition, you'll have to create an appropriate SecurityContext.
In the code above, I actually only ever get the Class<?> object and never instantiate it in any way, so no code is executed. But your usage scenario is quite different.
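To make the partial answer concrete, here is a minimal, hedged sketch of the compile-then-load step using the standard javax.tools API (the class name, directory handling, and error handling are simplified; the source string is assumed to declare a public class with no package). Note the initialize=false argument to Class.forName, which is exactly what keeps static initialization blocks from running at load time:

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class RuntimeCompiler {

    // Compiles a Java source string and returns the resulting Class<?>
    // without initializing it (so no static blocks are executed yet).
    public static Class<?> compileAndLoad(String className, String source) throws Exception {
        Path dir = Files.createTempDirectory("dynamic-classes");
        Path srcFile = dir.resolve(className + ".java");
        Files.write(srcFile, source.getBytes(StandardCharsets.UTF_8));

        // Requires a JDK at runtime; on a plain JRE this returns null.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        int result = compiler.run(null, null, null, "-d", dir.toString(), srcFile.toString());
        if (result != 0) {
            throw new IllegalStateException("Compilation failed for " + className);
        }

        // The loader is deliberately not closed: the class may still need it
        // to resolve references when it is linked and used later on.
        URLClassLoader loader = new URLClassLoader(new URL[] { dir.toUri().toURL() });
        return Class.forName(className, false, loader); // false = do not initialize
    }
}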
I have not tried any of these, but they are worth trying.
1) Consider using the Quartz Camel endpoint.
If my understanding is right, Apache Camel lets you create Camel routes on the fly.
You just need to deploy the camel-context.xml into a container, keeping in mind that the required classes must already be available on the container's classpath.
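If that fits, a hypothetical Java DSL route using the camel-quartz component might look like the following; the group/timer names and the sweepBean bean name are made up, and sweepBean is assumed to be registered in the Camel registry:

import org.apache.camel.builder.RouteBuilder;

public class SweepRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Quartz cron syntax (fire at minute 0 of every hour);
        // '+' stands in for spaces inside the endpoint URI.
        from("quartz://maintenance/sweepTimer?cron=0+0+*+*+*+?")
            .to("bean:sweepBean?method=execute");
    }
}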
2) Quartz lets you create a job declaratively, i.e. with an XML configuration of the job and trigger.
You can find more information here.
3) Now this one requires some effort ;-)
Create an interface with a method that you will execute as part of the job. Let's say it has a single method, executeThisAsPartOfJob():
public interface MyDynamicJob
{
    public void executeThisAsPartOfJob();
}
Create your implementations of this job interface.
public class EmailJob implements MyDynamicJob
{
    @Override
    public void executeThisAsPartOfJob()
    {
        System.out.println("Sending Email");
    }
}
Now in your main scheduler engine, use the Observer pattern to store/initiate the job dynamically.
Something like,
Map<String, MyDynamicJob> jobs = new HashMap<>();

// Call this method to add a job dynamically.
// If you add a job after the scheduler engine has started, find a way here to reiterate over this map without shutting down the scheduler :-).
public void addJob(String someJobName, MyDynamicJob job)
{
    jobs.put(someJobName, job);
}
public void initiateScheduler()
{
    // Iterate over the jobs map to get all registered jobs. Create a JobDetail
    // instance dynamically for each job entry, and add your custom job object to its JobDataMap.
    JobDetail jd1 = JobBuilder.newJob(GenericJob.class)
            .withIdentity("FirstJob", "First Group").build();
    JobDataMap jobDataMap = jd1.getJobDataMap();
    jobDataMap.put("dynamicjob", jobs.get("dynamicjob1"));
}
public class GenericJob implements Job {

    public void execute(JobExecutionContext arg0) throws JobExecutionException {
        System.out.println("Executing job");
        JobDataMap jdm = arg0.getJobDetail().getJobDataMap();
        MyDynamicJob mdj = (MyDynamicJob) jdm.get("dynamicjob");
        // Now execute your custom job method here.
        mdj.executeThisAsPartOfJob();
        System.out.println("Job Execution complete");
    }
}
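For completeness, here is a hedged sketch of how the pieces above could be wired to a live scheduler. Quartz allows JobDetail/Trigger pairs to be added while the scheduler is running, which is what avoids any restart; the identities and the hourly schedule below are illustrative, and storing object references in the JobDataMap assumes the in-memory RAMJobStore:

Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
scheduler.start();

// Build a JobDetail for a registered entry and stash the custom
// job object in its data map.
JobDetail jd1 = JobBuilder.newJob(GenericJob.class)
        .withIdentity("FirstJob", "First Group")
        .build();
jd1.getJobDataMap().put("dynamicjob", jobs.get("dynamicjob1"));

Trigger trigger = TriggerBuilder.newTrigger()
        .withIdentity("FirstTrigger", "First Group")
        .startNow()
        .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                .withIntervalInHours(1)
                .repeatForever())
        .build();

// Can be called at any time after start(); no restart required.
scheduler.scheduleJob(jd1, trigger);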
Related
I'm very new to Hazelcast, and it might very well be that I am missing something glaringly obvious, but here goes.
I have a Java Application that runs distributed, each containing its own Hazelcast Instance. I need Hazelcast to schedule a job that will run at a fixed rate, but never simultaneously on several instances. To achieve this I plan to use the IScheduledExecutorService and create a job that implements Runnable and NamedTask.
My problem is that the job needs to call methods on the application. My understanding is that the job is serialized and deserialized by Hazelcast, which means that I can't just create a Runnable and feed it the objects it needs through its constructor. So how do I "get back" to the application objects from the Hazelcast job?
For example, say I had a plain old Java Runnable that I would like to execute in a Hazelcast Executor like this:
public class DoStuffJob implements Runnable, NamedTask {

    private MyResource resource;

    public DoStuffJob(MyResource resource) {
        this.resource = resource;
    }

    @Override
    public String getName() {
        return "Do stuff";
    }

    @Override
    public void run() {
        resource.doAllTheStuff();
    }
}
How would I create a Runnable I can execute on Hazelcast, that can still access MyResource on the instance it executes on?
The only option I have found is to make the job HazelcastInstanceAware, and use the HazelcastInstance.getUserContext() to keep the object, but I am hoping it is somehow possible to "get back" to the executing application.
Thank you in advance.
You could have your Runnable task put the derived data into a distributed data-structure - probably an IMap. It would then be accessible from any of your JVMs. Would that handle your requirements?
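If the task needs to reach local, non-serializable objects rather than derived data, the user-context route mentioned in the question is the usual approach. A minimal sketch, assuming each member registers its MyResource under an agreed key at startup (the "myResource" key is made up):

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.scheduledexecutor.NamedTask;
import java.io.Serializable;

// The task carries no application state of its own; it looks MyResource
// up on whichever member it is deserialized and executed on.
public class DoStuffJob implements Runnable, NamedTask, HazelcastInstanceAware, Serializable {

    private transient HazelcastInstance hazelcast;

    @Override
    public void setHazelcastInstance(HazelcastInstance hazelcast) {
        // Injected by Hazelcast after deserialization, before run() is called.
        this.hazelcast = hazelcast;
    }

    @Override
    public String getName() {
        return "Do stuff";
    }

    @Override
    public void run() {
        // Assumes startup code on each member did:
        //   config.getUserContext().put("myResource", resource);
        MyResource resource = (MyResource) hazelcast.getUserContext().get("myResource");
        resource.doAllTheStuff();
    }
}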
The program I am working on has a distributed architecture, more precisely the Broker-Agent pattern. The broker will send messages to its corresponding agent in order to tell the agent to execute a task. Each message sent contains the target task information (the task name, configuration properties needed for the task to perform, etc.). In my code, each task on the agent side is implemented in a separate class, like:
public class Task1 {}
public class Task2 {}
public class Task3 {}
...
Messages are in JSON format like:
{
"taskName": "Task1", // put the class name here
"config": {
}
}
So what I need is to associate the message sent from the broker with the right task in the agent side.
I know one way is to put the target task class name in the message so that the agent is able to create an instance of that task class by the task name extracted from the message using reflections, like:
Class.forName(className).getConstructor(String.class).newInstance(arg);
I want to know what the best practice is to implement this association. The number of tasks is growing, and I think writing out strings is error-prone and not easy to maintain.
If you're that specific about classnames you could even think about serializing task objects and sending them directly. That's probably simpler than your reflection approach (though even tighter coupled).
But usually you don't want that kind of coupling between Broker and Agent. A broker needs to know which task types there are and how to describe the task in a way that everybody understands (like in JSON). It doesn't / shouldn't know how the Agent implements the task. Or even in which language the Agent is written. (That doesn't mean that it's a bad idea to define task names in a place that is common to both code bases)
So you're left with finding a good way to construct objects (or call methods) inside your agent based on some string. And the common solution for that is some form of factory pattern like: http://alvinalexander.com/java/java-factory-pattern-example - also helpful: a Map<String, Factory> like
interface Task {
void doSomething();
}
interface Factory {
Task makeTask(String taskDescription);
}
Map<String, Factory> taskMap = new HashMap<>();
void init() {
    taskMap.put("sayHello", new Factory() {
        @Override
        public Task makeTask(String taskDescription) {
            return new Task() {
                @Override
                public void doSomething() {
                    System.out.println("Hello " + taskDescription);
                }
            };
        }
    });
}
void onTask(String taskName, String taskDescription) {
    Factory factory = taskMap.get(taskName);
    if (factory == null) {
        System.out.println("Unknown task: " + taskName);
        return; // don't fall through and call makeTask on null
    }
    Task task = factory.makeTask(taskDescription);
    // execute task somewhere
    new Thread(task::doSomething).start();
}
http://ideone.com/We5FZk
And if you want it fancy, consider annotation-based reflection magic. It depends on how many task classes there are: the more there are, the more effort is worth putting into an automagic solution that hides the complexity from you.
For example, the above Map could be filled automatically by adding some classpath scanning for classes of the right type that carry some annotation holding the string(s), as sketched below. Or you could let some DI framework inject all the things that need to go into the map. DI in larger projects usually solves those kinds of issues really well: https://softwareengineering.stackexchange.com/questions/188030/how-to-use-dependency-injection-in-conjunction-with-the-factory-pattern
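A sketch of that automagic fill, assuming the org.reflections classpath-scanning library and Factory implementations with no-arg constructors (the annotation and package name are made up):

import org.reflections.Reflections;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.HashMap;
import java.util.Map;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface TaskName {
    String value(); // the name the broker uses in its messages
}

class TaskRegistry {

    // Scans the given package and maps each annotated factory class's
    // declared name to a freshly constructed instance.
    static Map<String, Factory> buildFactoryMap(String basePackage) throws Exception {
        Map<String, Factory> taskMap = new HashMap<>();
        Reflections reflections = new Reflections(basePackage);
        for (Class<?> c : reflections.getTypesAnnotatedWith(TaskName.class)) {
            TaskName name = c.getAnnotation(TaskName.class);
            taskMap.put(name.value(), (Factory) c.getDeclaredConstructor().newInstance());
        }
        return taskMap;
    }
}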
And besides writing your own distribution system, you can probably use existing ones. (And reuse rather than reinvent is a best practice.) Maybe http://www.typesafe.com/activator/template/akka-distributed-workers or the more general http://twitter.github.io/finagle/ work in your context. But there are way too many other open source distributed things that cover different aspects to name all the interesting ones.
I am using the PostContextCreate part of the life cycle in an e4 RCP application to create the back-end "business logic" part of my application. I then inject it into the context using an IEclipseContext. I now have a requirement to persist some business logic configuration options between executions of my application. I have some questions:
It looks like properties (e.g. accessible from MContext) would be really useful here, a straightforward Map<String,String> sounds ideal for my simple requirements, but how can I get them in PostContextCreate?
Will my properties persist if my application is being run with clearPersistedState set to true? (I'm guessing not).
If I turn clearPersistedState off then will it try and persist the other stuff that I injected into the context?
Or am I going about this all wrong? Any suggestions would be welcome. I may just give up and read/write my own properties file.
I think the Map returned by MApplicationElement.getPersistedState() is intended to be used for persistent data. This will be cleared by -clearPersistedState.
The PostContextCreate method of the life cycle is run quite early in the startup and not everything is available at this point. So you might have to wait for the app startup complete event (UIEvents.UILifeCycle.APP_STARTUP_COMPLETE) before accessing the persisted state data.
You can always use the traditional Platform.getStateLocation(bundle) to get a location in the workspace .metadata to store arbitrary data. This is not touched by clearPersistedState.
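A short sketch of that last option, reading and writing a plain properties file in the bundle's state location (the file name is arbitrary):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Properties;

import org.eclipse.core.runtime.Platform;
import org.osgi.framework.Bundle;
import org.osgi.framework.FrameworkUtil;

public class ConfigStore {

    private final File file;

    public ConfigStore() {
        Bundle bundle = FrameworkUtil.getBundle(getClass());
        // Lives under .metadata/.plugins/<bundle id>/ in the workspace,
        // so it is not touched by -clearPersistedState.
        file = Platform.getStateLocation(bundle).append("config.properties").toFile();
    }

    public Properties load() throws Exception {
        Properties props = new Properties();
        if (file.exists()) {
            try (FileInputStream in = new FileInputStream(file)) {
                props.load(in);
            }
        }
        return props;
    }

    public void save(Properties props) throws Exception {
        try (FileOutputStream out = new FileOutputStream(file)) {
            props.store(out, "business logic configuration");
        }
    }
}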
Update:
To subscribe to the app startup complete:
@PostContextCreate
public void postContextCreate(IEventBroker eventBroker)
{
    eventBroker.subscribe(UIEvents.UILifeCycle.APP_STARTUP_COMPLETE, new AppStartupCompleteEventHandler());
}

private static final class AppStartupCompleteEventHandler implements EventHandler
{
    @Override
    public void handleEvent(final Event event)
    {
        // ... your code here
    }
}
I'm working on a Spring application that downloads data from different APIs. For that purpose I need a class Fetcher that interacts with an API to fetch the needed data. One of the requirements of this class is that it has to have a method to start the fetching and a method to stop it. Also, it must download everything asynchronously, because users must be able to interact with a dashboard while data is being fetched.
What is the best way to accomplish this? I've been reading about task executors and the different Spring annotations for scheduling tasks and executing them asynchronously, but these solutions don't seem to solve my problem.
Asynchronous task execution is what you're after, and since Spring 3.0 you can achieve this with annotations too, directly on the method you want to run asynchronously.
There are two ways of implementing this, depending on whether you are interested in getting a result back from the async process:
#Async
public Future<ReturnPOJO> asyncTaskWithReturn(){
//..
return new AsyncResult<ReturnPOJO>(yourReturnPOJOInstance);
}
or not:
#Async
public void asyncTaskNoReturn() {
//..
}
In the former method, the result of your computation, conveyed by the yourReturnPOJOInstance object, is stored in an instance of org.springframework.scheduling.annotation.AsyncResult<V>, which in turn implements java.util.concurrent.Future<V>, which the caller can use to retrieve the result of the computation later on.
To activate the above functionality in Spring, you have to add the following to your XML config file:
<task:annotation-driven />
along with the needed task namespace.
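On the caller's side, usage would then look something like this (service being the Spring-managed bean declaring the @Async method; the call must come from outside the bean so it goes through the Spring proxy):

// Returns immediately; the work runs on a task executor thread.
Future<ReturnPOJO> future = service.asyncTaskWithReturn();

// ... do other work while the task runs ...

// Blocks only when the result is actually needed.
ReturnPOJO result = future.get();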
The simplest way to do this is to use the Thread class. You supply a Runnable object that performs the fetching functionality in the run() method and when the Thread is started, it invokes the run method in a separate thread of execution.
So something like this:
public class Fetcher implements Runnable{
public void run(){
//do fetching stuff
}
}
//in your code
Thread fetchThread = new Thread(new Fetcher());
fetchThread.start();
Now, if you want to be able to cancel, you can do that in a couple of ways. The easiest (albeit most violent and inadvisable) way to do it is to interrupt the thread:
fetchThread.interrupt();
The correct way to do it would be to implement logic in your Fetcher class that periodically checks a variable, such as a volatile boolean flag, to see whether it should stop doing whatever it's doing.
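For example, a minimal sketch of that cooperative cancellation idea (the loop body is a placeholder for whatever unit of fetching work applies):

public class Fetcher implements Runnable {

    // volatile so a stop() from another thread is immediately
    // visible to the fetching thread.
    private volatile boolean stopped = false;

    public void stop() {
        stopped = true;
    }

    @Override
    public void run() {
        while (!stopped) {
            // fetch the next chunk of data
        }
    }
}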
Edit To your question about getting Spring to run it automatically: if you want it to run periodically, you'll need to use a scheduling framework like Quartz. However, if you just want it to run once, you could use the @PostConstruct annotation. The method annotated with @PostConstruct will be executed after the bean is created. So you could do something like this:
@Service
public class Fetcher implements Runnable {

    public void run() {
        // do stuff
    }

    @PostConstruct
    public void goDoIt() {
        Thread trd = new Thread(this);
        trd.start();
    }
}
Edit 2 I actually didn't know about this, but check out the @Async discussion in the Spring documentation if you haven't already. Might also be what you want to do.
You might only need certain methods to run on a separate thread rather than the entire class. If so, the #Async annotation is so simple and easy to use.
Simply add it to any method you want to run asynchronously, you can also use it on methods with return types thanks to Java's Future library.
Check out this page: http://www.baeldung.com/spring-async
I've recently been asked to look into speeding up a mapreduce project.
I'm trying to view log4j log information which is being generated within the 'map' method of a class which implements: org.apache.hadoop.mapred.Mapper
Within this class there are the following methods:
@Override
public void configure( .. ) { .. }

public static void doCompileAndAdd( .. ) { .. }

public void map( .. ) { .. }
Logging information is available for the configure method and the doCompileAndAdd method (which is called from the configure method); however, no log information is being displayed for the 'map' method.
I've also tried simply using System.out.println( .. ) within the map method without success.
Is there anyone who might be able to help to shed some light on this issue?
Thanks,
Telax
Since the mapper classes actually run in tasks distributed across nodes in the cluster, the stdout from those tasks appears in the individual logs for each task. The simplest way to see those logs is to go to the job tracker page for the cluster, usually at http://namenode:50030/jobtracker.jsp. From there you can select the job and then select the map tasks you are interested in the logs for.