I'm very new to Hazelcast, and it might very well be that I am missing something glaringly obvious, but here goes.
I have a Java Application that runs distributed, each containing its own Hazelcast Instance. I need Hazelcast to schedule a job that will run at a fixed rate, but never simultaneously on several instances. To achieve this I plan to use the IScheduledExecutorService and create a job that implements Runnable and NamedTask.
My problem is that the job needs to call methods on the application. My understanding is that the job is serialized and deserialized by Hazelcast, which means I can't just create a Runnable and feed it the objects it needs through its constructor. So how do I "get back" to the application objects from the Hazelcast job?
For example, say I had a plain old java Runnable that i would like to execute in a Hazelcast Executor like this:
public class DoStuffJob implements Runnable, NamedTask {

    private MyResource resource;

    public DoStuffJob(MyResource resource) {
        this.resource = resource;
    }

    @Override
    public String getName() {
        return "Do stuff";
    }

    @Override
    public void run() {
        resource.doAllTheStuff();
    }
}
How would I create a Runnable I can execute on Hazelcast, that can still access MyResource on the instance it executes on?
The only option I have found is to make the job HazelcastInstanceAware and use HazelcastInstance.getUserContext() to hold the object, but I am hoping it is somehow possible to "get back" to the executing application. Here is a rough, untested sketch of what I mean with the user-context route (Hazelcast 3.x names; the "myResource" key is a placeholder):
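import java.io.Serializable;
import java.util.concurrent.TimeUnit;

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.scheduledexecutor.NamedTask;

public class DoStuffJob implements Runnable, NamedTask, HazelcastInstanceAware, Serializable {

    private transient MyResource resource; // not serialized; looked up on the executing member

    @Override
    public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
        // Called on the member that runs the task, after deserialization.
        this.resource = (MyResource) hazelcastInstance.getUserContext().get("myResource");
    }

    @Override
    public String getName() {
        return "Do stuff";
    }

    @Override
    public void run() {
        resource.doAllTheStuff();
    }
}

And the setup on each member:

// At startup, before creating the instance:
Config config = new Config();
config.getUserContext().put("myResource", myResource);
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

// Scheduling the same named task a second time throws DuplicateTaskException,
// so only one copy runs cluster-wide.
hz.getScheduledExecutorService("scheduler")
  .scheduleAtFixedRate(new DoStuffJob(), 0, 1, TimeUnit.HOURS);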
Thank you in advance.
You could have your Runnable task put the derived data into a distributed data-structure - probably an IMap. It would then be accessible from any of your JVMs. Would that handle your requirements?
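For instance (building on your job class; the map and key names here are made up), run() could publish derived data that any member can then read, assuming the job is HazelcastInstanceAware so it has the hazelcastInstance reference after deserialization:

@Override
public void run() {
    // Publish the result into a distributed map readable from any JVM
    IMap<String, Long> results = hazelcastInstance.getMap("do-stuff-results");
    results.put("lastRunEpochMillis", System.currentTimeMillis());
}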
Related
Can I attach a Java shutdown hook across JVMs?
I mean, can I attach a shutdown hook from my JVM to a WebLogic server running in a different JVM?
The shutdown hook part is in Runtime.
The across JVM part you'll have to implement yourself, because only you know how your JVMs can discover and identify themselves.
It could be as simple as creating a listening socket at JVM1 startup and sending JVM2's port number to it; JVM1 would then send a shutdown notification to JVM2 (on that port) from its shutdown hook.
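A bare-bones sketch of the notifying side (host, port, and message format are all made up here, and the port-registration handshake is left out):

import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;

public class CrossJvmShutdownNotifier {

    // Registers a hook that tells the other JVM, listening on a known
    // host/port, that this JVM is going down.
    public static void register(final String otherHost, final int otherPort) {
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                try (Socket socket = new Socket(otherHost, otherPort);
                     PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                    out.println("SHUTDOWN");
                } catch (IOException e) {
                    // the other JVM may already be gone; nothing more to do
                }
            }
        }));
    }
}

The listening JVM just needs a ServerSocket accept loop that reacts to the message.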
The short answer is: you can, but not out of the box, and there are some pitfalls, so please read the Pitfalls section at the end.
A shutdown hook must be a Thread object (see Runtime.addShutdownHook(Thread)) that the JVM can access, so it must be instantiated within that JVM.
The only way I see to do it is to implement a Runnable that is also Serializable, along with some kind of remote service (e.g. RMI) to which you can pass the SerializableRunnable. This service must then create a Thread, pass the SerializableRunnable to that Thread's constructor, and add it as a shutdown hook to the Runtime.
But there is also another problem in this case: the SerializableRunnable has no references to objects within the remote service's JVM, and you have to find a way for that SerializableRunnable to obtain them or to get them injected. So you have the choice between a service locator and a dependency injection mechanism. I will use the service locator pattern for the following examples.
I would suggest defining an interface like this:
public interface RemoteRunnable extends Runnable, Serializable {

    /**
     * Called after de-serialization from a remote invocation to give the
     * RemoteRunnable a chance to obtain service references of the JVM it has
     * been de-serialized in.
     */
    public void initialize(ServiceLocator sl);
}
The remote service method could then look like this:
public class RemoteShutdownHookService {

    public void addShutdownHook(RemoteRunnable rr) {
        // Since an instance of a RemoteShutdownHookService is an object of the
        // remote JVM, it can provide a mechanism that gives access to objects
        // in that JVM, either through a service locator...
        ServiceLocator sl = ...;
        rr.initialize(sl);
        // ...or through dependency injection, in which case the initialize
        // method of RemoteRunnable can be omitted.
        // A short Spring example:
        //
        // AutowireCapableBeanFactory beanFactory = .....;
        // beanFactory.autowireBean(rr);
        Runtime.getRuntime().addShutdownHook(new Thread(rr));
    }
}
and your RemoteRunnable might look like this:
public class SomeRemoteRunnable implements RemoteRunnable {

    private static final long serialVersionUID = 1L;

    private SomeServiceInterface someService;

    @Override
    public void run() {
        // call someService on shutdown
        someService.doSomething();
    }

    @Override
    public void initialize(ServiceLocator sl) {
        someService = sl.getService(SomeServiceInterface.class);
    }
}
Pitfalls
There is only one problem with this approach that is not obvious: the RemoteRunnable implementation class must be available on the remote service's classpath. Thus you cannot just create a new RemoteRunnable class and pass an instance of it to the remote service; you always have to add it to the remote JVM's classpath.
So this approach only makes sense if the RemoteRunnable implements an algorithm that can be configured by the state of the RemoteRunnable.
If you want to dynamically add arbitrary shutdown hook code to the remote JVM without modifying the remote JVM's classpath, you must use a dynamic language such as Groovy and pass the script to the remote service.
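A minimal sketch of that variant (it assumes the Groovy runtime is on the remote JVM's classpath; the service and method names are made up):

import groovy.lang.GroovyShell;

public class ScriptedShutdownHookService {

    // Receives the hook body as a Groovy script string, so no new class has
    // to be present on this JVM's classpath in advance.
    public void addShutdownHook(final String groovyScript) {
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                new GroovyShell().evaluate(groovyScript);
            }
        }));
    }
}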
I'm constructing an AsyncHttpClient like this:
public AsyncHttpClient getAsyncHttpClient() {
    AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder()
            .setProxyServer(makeProxyServer())
            .setRequestTimeoutInMs((int) Duration.create(ASYNC_HTTP_REQUEST_TIMEOUT_MIN, TimeUnit.MINUTES).toMillis())
            .build();
    return new AsyncHttpClient(new NettyAsyncHttpProvider(config), config);
}
This gets called once at startup, and then the return value is passed around and used in various places. makeProxyServer() is my own function that takes my proxy settings and returns a ProxyServer object. What I need to do is be able to change the proxy server settings and then recreate the AsyncHttpClient object. But I don't know how to shut the old one down cleanly. A bit of searching leads me to believe that close() isn't graceful. I'm worried about spinning up a whole new executor and set of threads every time the proxy settings change. This won't happen often, but my application is very long-running.
I know I can use RequestBuilder.setProxyServer() for each request, but I'd like to have it set in one spot so that all callers of my asyncHttpClient instance obey the system-wide proxy settings without each developer having to remember to do it.
What's the right way to re-configure or teardown and rebuild a Netty-based AsyncHttpClient?
The problem with using AsyncHttpClient.close() is that it shuts down the thread pool executor used by the provider, and then there is no way to re-use the client without re-building it, because per the documentation the executor instance cannot be reused once it is shut down. So there is no way around re-building the client if you go that way (unless you implement your own ExecutorService with different shutdown logic, but that is a long way to go, IMHO).
However, from looking into the implementation of NettyAsyncHttpProvider, I can see that it stores the reference to the given AsyncHttpClientConfig instance and calls its getProxyServerSelector() to get the proxy settings for every new NettyAsyncHttpProvider.execute(Request...) invocation (i.e. for every request executed by AsyncHttpClient).
So if we could make getProxyServerSelector() return a configurable instance of ProxyServerSelector, that would do the trick.
Unfortunately, AsyncHttpClientConfig is designed to be a read-only container, instantiated by AsyncHttpClientConfig.Builder.
To overcome this limitation, we would have to hack it using, say, a "wrap/delegate" approach:
Create a new class derived from AsyncHttpClientConfig. The class should wrap a given AsyncHttpClientConfig instance and delegate the AsyncHttpClientConfig getters to that instance.
To be able to return the proxy selector we want at any given point in time, we make this setting mutable in the wrapper class and expose a setter for it.
Example:
public class MyAsyncHttpClientConfig extends AsyncHttpClientConfig
{
    private final AsyncHttpClientConfig config;
    private ProxyServerSelector proxyServerSelector;

    public MyAsyncHttpClientConfig(AsyncHttpClientConfig config)
    {
        this.config = config;
    }

    @Override
    public int getMaxTotalConnections() { return config.getMaxTotalConnections(); }

    @Override
    public int getMaxConnectionPerHost() { return config.getMaxConnectionPerHost(); }

    // delegate the others but getProxyServerSelector()
    ...

    @Override
    public ProxyServerSelector getProxyServerSelector()
    {
        return proxyServerSelector == null
                ? config.getProxyServerSelector()
                : proxyServerSelector;
    }

    public void setProxyServerSelector(ProxyServerSelector proxyServerSelector)
    {
        this.proxyServerSelector = proxyServerSelector;
    }
}
Now, in your example, wrap your AsyncHttpClient config instance with our new wrapper and use it to configure the AsyncHttpClient:
Example:
MyAsyncHttpClientConfig myConfig = new MyAsyncHttpClientConfig(config);
return new AsyncHttpClient(new NettyAsyncHttpProvider(myConfig), myConfig);
Whenever you invoke myConfig.setProxyServerSelector(newSelector), new requests executed by the NettyAsyncHttpProvider instance in your client will use the new proxy server settings.
A few hints/warnings:
This approach relies on the internal implementation of NettyAsyncHttpProvider; therefore, make your own judgement about maintainability, your upgrade strategy for future library versions, and so on. You can always review the source code before upgrading to a new version. At this point, I personally think it is unlikely to change enough to invalidate this implementation.
You can get a ProxyServerSelector for a ProxyServer by using com.ning.http.util.ProxyUtils.createProxyServerSelector(proxyServer) - that's exactly what AsyncHttpClientConfig.Builder does (see the snippet after this list).
The given example has no synchronization logic for accessing proxyServerSelector; you may want to add some as your application logic needs.
Maybe it is a good idea to submit a feature request for AsyncHttpClient to be able to setup a "configuration factory" for the AsyncHttpProvider so all these complications would vanish :-)
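Putting the second hint to use, a runtime switch might look like this (host and port are placeholders):

ProxyServer newProxy = new ProxyServer("proxy.example.com", 8080);
myConfig.setProxyServerSelector(ProxyUtils.createProxyServerSelector(newProxy));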
You should be holding a RequestHandle instance for all your unfinished requests. When you want to shut down, you can loop through and call isFinished() on all of them until they are all done. Then you know you can safely close it and no pending requests will be killed.
Once it's closed, just build a new one. Don't try to reuse the existing one. If you have references to it around, change those to reference a Factory that will return the current one.
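If you go the factory route, a minimal sketch might look like this (the class name and the swap policy are assumptions, not part of the library):

import java.util.concurrent.atomic.AtomicReference;

import com.ning.http.client.AsyncHttpClient;

public class AsyncHttpClientHolder {

    private final AtomicReference<AsyncHttpClient> current =
            new AtomicReference<AsyncHttpClient>();

    // Callers always fetch the client through the holder instead of caching it.
    public AsyncHttpClient get() {
        return current.get();
    }

    // Swap in a freshly built client; close the old one only after your
    // RequestHandle bookkeeping says it has no requests in flight.
    public void replace(AsyncHttpClient fresh) {
        AsyncHttpClient old = current.getAndSet(fresh);
        if (old != null) {
            old.close();
        }
    }
}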
I'm working on a Spring application that downloads data from different APIs. For that purpose I need a Fetcher class that interacts with an API to fetch the needed data. One of the requirements for this class is that it has to have a method to start the fetching and a method to stop it. Also, it must download everything asynchronously, because users must be able to interact with a dashboard while data is being fetched.
What is the best way to accomplish this? I've been reading about task executors and the various Spring annotations for scheduling tasks and executing them asynchronously, but these solutions don't seem to solve my problem.
Asynchronous task execution is what you're after, and since Spring 3.0 you can achieve this with annotations, directly on the method you want to run asynchronously.
There are two ways of implementing this, depending on whether you are interested in getting a result back from the async process:
@Async
public Future<ReturnPOJO> asyncTaskWithReturn() {
    //..
    return new AsyncResult<ReturnPOJO>(yourReturnPOJOInstance);
}
or not:
@Async
public void asyncTaskNoReturn() {
    //..
}
In the former case, the result of your computation, conveyed by the yourReturnPOJOInstance object, is stored in an instance of org.springframework.scheduling.annotation.AsyncResult<V>, which in turn implements java.util.concurrent.Future<V>, so the caller can retrieve the result of the computation later on.
To activate the above functionality in Spring you have to add in your XML config file:
<task:annotation-driven />
along with the needed task namespace.
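On the calling side it then looks like this (assuming service is the Spring-managed bean that exposes the @Async method):

Future<ReturnPOJO> future = service.asyncTaskWithReturn();
// ... do other work while the task runs in the background ...
ReturnPOJO result = future.get(); // blocks until the async computation finishes

Note that future.get() throws checked exceptions (InterruptedException, ExecutionException) you will need to handle.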
The simplest way to do this is to use the Thread class. You supply a Runnable object that performs the fetching functionality in the run() method and when the Thread is started, it invokes the run method in a separate thread of execution.
So something like this:
public class Fetcher implements Runnable {
    public void run() {
        // do fetching stuff
    }
}

// in your code
Thread fetchThread = new Thread(new Fetcher());
fetchThread.start();
Now, if you want to be able to cancel, you can do that a couple of ways. The easiest (albeit most violent and least advisable) way to do it is to interrupt the thread:
fetchThread.interrupt();
The correct way to do it would be to implement logic in your Fetcher class that periodically checks a flag to see whether it should stop what it's doing.
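A minimal sketch of that cooperative check (the stop() method and the loop shape are assumptions):

public class Fetcher implements Runnable {

    // volatile so a change made by the stopping thread is visible here
    private volatile boolean stopped = false;

    public void stop() {
        stopped = true;
    }

    @Override
    public void run() {
        while (!stopped) {
            // fetch the next chunk of data
        }
        // clean up and finish gracefully
    }
}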
Edit To your question about getting Spring to run it automatically: if you want it to run periodically, you'll need a scheduling framework like Quartz. However, if you just want it to run once, you can use the @PostConstruct annotation. The method annotated with @PostConstruct is executed after the bean is created, so you could do something like this:
@Service
public class Fetcher implements Runnable {

    public void run() {
        // do stuff
    }

    @PostConstruct
    public void goDoIt() {
        Thread trd = new Thread(this);
        trd.start();
    }
}
Edit 2 I actually didn't know about this, but check out the @Async discussion in the Spring documentation if you haven't already. It might also be what you want to do.
You might only need certain methods to run on a separate thread rather than the entire class. If so, the #Async annotation is so simple and easy to use.
Simply add it to any method you want to run asynchronously; you can also use it on methods with return types thanks to Java's Future library.
Check out this page: http://www.baeldung.com/spring-async
I have looked around and around for this answer, but I have not been able to find a good answer. I would like to create a system based on Quartz that allows people to schedule their own tasks. I will use a pseudo example.
Let's say my main method for my Quartz program is called quartz.java.
Then I have a file called sweep.java that implements the Quartz Job interface.
So in my quartz.java, I schedule sweep.java to run every hour. I run quartz.java, and it works fine. GREAT. However, now I want to add a dust.java to the Quartz scheduler; since this is a production service, I don't want to have to stop quartz.java, add in dust.java, and recompile and run quartz.java again. This downtime would be unacceptable.
Does anyone have any ideas on how I could accomplish this? It seems impossible, because how could you ever feed another Java file into the program without recompiling, linking, etc.?
I hope that this example is clear. Please let me know if I need to clarify any part of it.
Partial answer: it is possible to compile, and then instantiate, a class programmatically.
Here are links to example code:
how to compile from a String;
CompilerOutput;
CompilerOutputDirectory.
The extracted class is grabbed in the third source file (see method getGeneratedClass, which returns a Class<?> object).
HOWEVER: keep in mind that doing this is potentially dangerous. One problem, which can be quite serious if you are not careful, is that when you dynamically instantiate a class, its static initialization blocks are executed, and these can potentially wreak havoc on your application. So, in addition, you'll have to create an appropriate SecurityContext.
In the code above, I actually only ever get the Class<?> object and never instantiate it in any way, so no code is executed. But your usage scenario is quite different.
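For a feel of the moving parts, here is a minimal sketch using the standard javax.tools API (the class names, output directory, and source string are all made up; note that ToolProvider.getSystemJavaCompiler() returns null on a plain JRE, so a JDK is required):

import java.io.File;
import java.net.URI;
import java.util.Arrays;

import javax.tools.JavaCompiler;
import javax.tools.SimpleJavaFileObject;
import javax.tools.ToolProvider;

public class CompileFromString {

    // Wraps source code held in a String so the compiler can read it.
    static class StringSource extends SimpleJavaFileObject {
        private final String code;

        StringSource(String className, String code) {
            super(URI.create("string:///" + className.replace('.', '/') + Kind.SOURCE.extension),
                    Kind.SOURCE);
            this.code = code;
        }

        @Override
        public CharSequence getCharContent(boolean ignoreEncodingErrors) {
            return code;
        }
    }

    public static void main(String[] args) {
        String source = "public class Hello { public String greet() { return \"hi\"; } }";
        File outDir = new File("dynamic-classes");
        outDir.mkdirs();
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        boolean ok = compiler.getTask(null, null, null,
                Arrays.asList("-d", outDir.getPath()), null,
                Arrays.asList(new StringSource("Hello", source))).call();
        System.out.println("compiled: " + ok);
        // A URLClassLoader pointed at outDir could now load Hello; its static
        // initializers run as soon as the class is initialized, which is
        // exactly the pitfall described above.
    }
}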
I have not tried any of these, but they are worth trying.
1) Consider using the Quartz Camel endpoint.
If my understanding is right, Apache Camel lets you create Camel routes on the fly.
It just needs the camel-context.xml deployed into a container, taking into consideration that the required classes must already be available on the container's classpath.
2) Quartz lets you create a job declaratively, i.e. with an XML configuration of the job and its trigger.
You can find more information here.
3) Now this one requires some effort ;-)
Create an interface with a method that you will execute as part of the job:
public interface MyDynamicJob
{
    public void executeThisAsPartOfJob();
}
Then create your job implementations:
public class EmailJob implements MyDynamicJob
{
    @Override
    public void executeThisAsPartOfJob()
    {
        System.out.println("Sending Email");
    }
}
Now in your main scheduler engine, use the Observer pattern to store/initiate the job dynamically.
Something like this:

Map<String, MyDynamicJob> jobs = new HashMap<String, MyDynamicJob>();

// Call this method to add a job dynamically.
// If you add a job after the scheduler engine has started, find a way to
// re-iterate over this map without shutting down the scheduler :-).
public void addJob(String someJobName, MyDynamicJob job)
{
    jobs.put(someJobName, job);
}
public void initiateScheduler()
{
    // Iterate over the jobs map to get all registered jobs and create a
    // JobDetail instance for each entry; put the custom job implementation
    // into the job data map.
    JobDetail jd1 = JobBuilder.newJob(GenericJob.class)
            .withIdentity("FirstJob", "First Group").build();
    JobDataMap jobDataMap = jd1.getJobDataMap();
    jobDataMap.put("dynamicjob", jobs.get("dynamicjob1"));
}
public class GenericJob implements Job {

    public void execute(JobExecutionContext arg0) throws JobExecutionException {
        System.out.println("Executing job");
        JobDataMap jdm = arg0.getJobDetail().getJobDataMap();
        MyDynamicJob mdj = (MyDynamicJob) jdm.get("dynamicjob");
        // Now execute your custom job method here.
        mdj.executeThisAsPartOfJob();
        System.out.println("Job Execution complete");
    }
}
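To actually run it, the JobDetail still has to be handed to a scheduler with a trigger; a minimal sketch (the identities and the hourly schedule are placeholders):

Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
scheduler.start();

Trigger trigger = TriggerBuilder.newTrigger()
        .withIdentity("FirstTrigger", "First Group")
        .withSchedule(SimpleScheduleBuilder.repeatHourlyForever())
        .build();

// jd1 is the JobDetail built in initiateScheduler() above
scheduler.scheduleJob(jd1, trigger);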
Because of all the problems we can run into when trying to use Hibernate in a multithreaded application (1st clue, 2nd clue, 3rd clue, etc.), I was thinking of another solution: implementing the logical part within a classic Controller, and simply calling it from my thread using URL.openConnection().
In other words, instead of doing something like this:
MyThread.java
public class MyThread implements Runnable {

    @Override
    public void run() {
        // do some great stuff with Hibernate
    }
}
Anywhere.java
new Thread(new MyThread()).start();
I would like to try something like that:
MyController.java
@Controller
public class MyController {

    @RequestMapping(value = "myUrl", method = RequestMethod.GET)
    public void myMethod() {
        // do some great stuff with Hibernate
    }
}
MyThread.java
public class MyThread implements Runnable {

    @Override
    public void run() {
        // simply call the above mapped URL
    }
}
Anywhere.java
new Thread(new MyThread()).start();
What do you think about it? Good or bad? I haven't tried it yet, but I think such a solution would prevent the common errors we run into when using Hibernate from multiple threads, because the server will execute the logical part as if someone were requesting the fake page.
PS: I know there are some solutions for using Hibernate in multithreaded applications, but each time I try one, another problem appears, and so on until the I'm-fed-up-with-it point of no return.
PS2: I'm aware that such a solution needs to be secured (e.g. with a UID as a token).
I don't really see what problem you're trying to solve here. Hibernate is almost always used in a multi-threaded environment. In webapps, for example, concurrent requests are handled by multiple concurrent threads, and each thread uses its own Hibernate session. And that doesn't cause any problem.
You will have problems if you share the same session among threads, or if you share a given entity among threads.
If you start your own thread, and this thread uses its own session and entities, I don't see why you would have any problem.
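In code, the safe pattern is simply session-per-thread; a minimal sketch (how the SessionFactory gets injected is up to your setup):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class MyThread implements Runnable {

    // The SessionFactory itself is thread-safe and meant to be shared.
    private final SessionFactory sessionFactory;

    public MyThread(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Override
    public void run() {
        // One session per thread; never share it or its attached entities.
        Session session = sessionFactory.openSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            // do some great stuff with Hibernate
            tx.commit();
        } catch (RuntimeException e) {
            if (tx != null) {
                tx.rollback();
            }
            throw e;
        } finally {
            session.close();
        }
    }
}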