Java shutdown hook across different JVM - java

Can I attach a Java shutdown hook across JVMs?
I mean, can I attach a shutdown hook from my JVM to a WebLogic server running in a different JVM?

The shutdown hook part is in Runtime.
The across JVM part you'll have to implement yourself, because only you know how your JVMs can discover and identify themselves.
It could be as simple as one JVM opening a listening socket at startup (and letting the other JVM know the port); the other JVM would then send a shutdown notification to that port from its shutdown hook.
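A minimal sketch of that idea with plain JDK sockets might look like this; the class name, the port 9099 and the "SHUTDOWN" message are made up for illustration:

// Runs in the JVM that wants to be told about the other JVM's shutdown.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class ShutdownListener {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9099)) { // agreed-upon port
            while (true) {
                try (Socket peer = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(peer.getInputStream()))) {
                    if ("SHUTDOWN".equals(in.readLine())) {
                        System.out.println("Peer JVM is shutting down");
                        // react here (release resources, trigger your own shutdown, ...)
                    }
                }
            }
        }
    }
}

// Somewhere in the other JVM's startup code: register a hook that notifies the listener.
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    try (java.net.Socket s = new java.net.Socket("localhost", 9099);
         java.io.PrintWriter out = new java.io.PrintWriter(s.getOutputStream(), true)) {
        out.println("SHUTDOWN");
    } catch (java.io.IOException e) {
        // the peer may already be gone; there is nothing more to do during shutdown
    }
}));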

The short answer is: you can, but not out of the box, and there are some pitfalls, so please read the Pitfalls section at the end.
A shutdown hook must be a Thread object (see Runtime.addShutdownHook(Thread)) that the JVM can access, so it must be instantiated within that JVM.
The only way I see to do it is to implement a Runnable that is also Serializable, plus some kind of remote service (e.g. RMI) to which you can pass that SerializableRunnable. This service must then create a Thread, pass the SerializableRunnable to the Thread's constructor, and add it as a shutdown hook to the Runtime.
But there is another problem in this case: the SerializableRunnable has no references to objects within the remote service's JVM, so you have to find a way for it to obtain them or have them injected. You therefore have the choice between a service locator and a dependency injection mechanism. I will use the service locator pattern in the following examples.
I would suggest defining an interface like this:
public interface RemoteRunnable extends Runnable, Serializable {

    /**
     * Called after de-serialization from a remote invocation to give the
     * RemoteRunnable a chance to obtain service references of the jvm it has
     * been de-serialized in.
     */
    public void initialize(ServiceLocator sl);
}
The remote service method could then look like this
public class RemoteShutdownHookService {

    public void addShutdownhook(RemoteRunnable rr) {
        // Since an instance of a RemoteShutdownHookService is an object of the remote
        // jvm, it can provide a mechanism that gives access to objects in that jvm.
        // Either through a service locator
        ServiceLocator sl = ...;
        rr.initialize(sl);
        // or a dependency injection.
        // In case of a dependency injection the initialize method of RemoteRunnable
        // can be omitted.
        // A short spring example:
        //
        // AutowireCapableBeanFactory beanFactory = .....;
        // beanFactory.autowireBean(rr);

        Runtime.getRuntime().addShutdownHook(new Thread(rr));
    }
}
and your RemoteRunnable might look like this:
public class SomeRemoteRunnable implements RemoteRunnable {

    private static final long serialVersionUID = 1L;

    private SomeServiceInterface someService;

    @Override
    public void run() {
        // call someService on shutdown
        someService.doSomething();
    }

    @Override
    public void initialize(ServiceLocator sl) {
        someService = sl.getService(SomeServiceInterface.class);
    }
}
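For completeness, a rough sketch of the RMI plumbing could look like the following. This is not part of the original answer; RemoteShutdownHookRegistry, RemoteShutdownHookRegistryImpl, the registry name "shutdownHooks" and the host "remotehost" are all hypothetical names:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// Remote interface for the shutdown-hook service (hypothetical name).
public interface RemoteShutdownHookRegistry extends Remote {
    void addShutdownHook(RemoteRunnable rr) throws RemoteException;
}

// Remote JVM startup (RemoteShutdownHookRegistryImpl is a hypothetical class that
// extends UnicastRemoteObject, implements the interface and delegates to the
// addShutdownhook() logic shown above):
Registry registry = LocateRegistry.createRegistry(1099);   // keep this reference alive
registry.rebind("shutdownHooks", new RemoteShutdownHookRegistryImpl());

// Your JVM: look up the service and register the hook.
RemoteShutdownHookRegistry remote = (RemoteShutdownHookRegistry)
        LocateRegistry.getRegistry("remotehost", 1099).lookup("shutdownHooks");
remote.addShutdownHook(new SomeRemoteRunnable());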
Pitfalls
There is only one problem with this approach that is not obvious: the RemoteRunnable implementation class must be available on the remote service's classpath. Thus you cannot just create a new RemoteRunnable class and pass an instance of it to the remote service; you always have to add it to the remote JVM's classpath.
So this approach only makes sense if the RemoteRunnable implements an algorithm that can be configured through the state of the RemoteRunnable.
If you want to dynamically add arbitrary shutdown hook code to the remote JVM without modifying the remote JVM's classpath, you must use a dynamic language and pass a script to the remote service, e.g. Groovy.
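To illustrate that last point, the remote service could accept the hook as a Groovy script string and evaluate it when the hook runs. This is only a sketch, assuming Groovy is available on the remote JVM's classpath; the class and method names are made up, and the ServiceLocator placeholder is reused from the example above:

import groovy.lang.Binding;
import groovy.lang.GroovyShell;

public class RemoteScriptShutdownHookService {

    public void addShutdownHookScript(final String groovyScript) {
        // obtain the remote JVM's ServiceLocator, as in the example above
        final ServiceLocator sl = ...;
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
                Binding binding = new Binding();
                // expose remote-side services to the script under a well-known variable name
                binding.setVariable("serviceLocator", sl);
                new GroovyShell(binding).evaluate(groovyScript);
            }
        }));
    }
}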

Related

Access local resources with hazelcast scheduled job

I'm very new to Hazelcast, and it might very well be that I am missing something glaringly obvious, but here goes.
I have a Java Application that runs distributed, each containing its own Hazelcast Instance. I need Hazelcast to schedule a job that will run at a fixed rate, but never simultaneously on several instances. To achieve this I plan to use the IScheduledExecutorService and create a job that implements Runnable and NamedTask.
My problem is that the job needs to call methods on the application. My understanding is that the job is serialized and deserialized by Hazelcast, which means that I can't just create a Runnable and feed it the objects it needs through its constructor. So how do I "get back" to the application objects from the Hazelcast job?
For example, say I had a plain old Java Runnable that I would like to execute in a Hazelcast executor like this:
public class DoStuffJob implements Runnable, NamedTask {

    private MyResource resource;

    public DoStuffJob(MyResource resource) {
        this.resource = resource;
    }

    @Override
    public String getName() {
        return "Do stuff";
    }

    @Override
    public void run() {
        resource.doAllTheStuff();
    }
}
How would I create a Runnable I can execute on Hazelcast, that can still access MyResource on the instance it executes on?
The only option I have found is to make the job HazelcastInstanceAware, and use the HazelcastInstance.getUserContext() to keep the object, but I am hoping it is somehow possible to "get back" to the executing application.
Thank you in advance.
You could have your Runnable task put the derived data into a distributed data-structure - probably an IMap. It would then be accessible from any of your JVMs. Would that handle your requirements?
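Alternatively, if the task really must call back into local application objects, a rough sketch of the HazelcastInstanceAware / user-context route mentioned in the question might look like this; the "myResource" key and the startup lines are illustrative assumptions, not taken from the question's code:

import java.io.Serializable;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.scheduledexecutor.NamedTask;

public class DoStuffJob implements Runnable, NamedTask, HazelcastInstanceAware, Serializable {

    private transient MyResource resource;

    @Override
    public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
        // called on the member that executes the task, after deserialization
        this.resource = (MyResource) hazelcastInstance.getUserContext().get("myResource");
    }

    @Override
    public String getName() {
        return "Do stuff";
    }

    @Override
    public void run() {
        resource.doAllTheStuff();
    }
}

// At application startup on each member, before submitting the task:
// Config config = new Config();
// config.getUserContext().put("myResource", myResource);
// HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);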

How to use a class instance created by another Jenkins plugin

I'd like to use an instance of a class that another plugin creates.
In particular, I'd like to use the instance of MQConnection that the mq-notifier-plugin creates and maintains.
I've declared this plugin as a dependency in the POM:
<dependency>
    <groupId>com.sonymobile.jenkins.plugins.mq</groupId>
    <artifactId>mq-notifier</artifactId>
    <version>1.2.5</version>
</dependency>
Imported the class:
import com.sonymobile.jenkins.plugins.mq.mqnotifier.MQConnection;
Tried to get the instance and add a message within the workflow step:
..
public static class TestConnectionWorkflowStep extends AbstractSynchronousNonBlockingStepExecution<Void> {

    private static final long serialVersionUID = 1L;

    @StepContextParameter
    private transient Run build;

    @StepContextParameter
    transient TaskListener listener;

    @Override
    protected Void run() throws Exception {
        ..
        // fill in with exchange, routing_key, data, properties
        MQConnection.getInstance().addMessageToQueue(..);
    }
}
It compiles fine. I've also instrumented the MQConnection class to log whenever a message is added.
It seems that none of my build step messages are added to the instance's queue, and the step just silently continues.
And as expected, I do still see messages from the mq-notifier-plugin showing up fine.
I've tried using Jenkins.getInstance().getPlugin(MQConnection.class), but that doesn't work since MQConnection isn't a subclass of Plugin.
How can I access the MQConnection instance from my plugin?
getInstance() likely assumes an instance was already created when the application was started up, and it retrieves that instance. Since you're calling the method from a library, that startup hasn't happened, so there's no instance to return.
Look at the getInstance() code if you can, and also check any mq-notifier application startup or main methods in the library class. See how it instantiates the MQConnection instance, and you'll need to do the same thing.
There's probably some dependency injection going on in the other project.
I'd like to use the instance of MQConnection that the mq-notifier-plugin creates and maintains.
You're either going to have to have the two applications running side-by-side and communicating with each other, or you're going to have to figure out how to instantiate MQConnection yourself.
It seems that none of my build step messages are added to the instance's queue and just silently continues.
Is this running remotely then? If you have a remote MQConnection instance running, then simply calling getInstance will not be enough for the two separate programs to communicate with each other.

RMI Naming.lookup throws NotBoundException

I have some objects registered in my RMI registry. I know the registration worked because LocateRegistry.getRegistry().list() returns two entries like:
0 = "rmi://Mac.local/192.168.1.40:1099/DataService"
1 = "rmi://Mac.local/192.168.1.40:1099/AuthService"
Then, I call:
ServicioAutenticacionInterface authService = (ServicioAutenticacionInterface) Naming.lookup("rmi://Mac.local/192.168.1.40:1099/AuthService");
It throws a NotBoundException.
I should mention that the interfaces are in a package named commons, which is declared as a dependency of the server package that is trying to invoke that lookup.
You passed a URL to Registry.bind()/rebind() instead of just a name.
URLs are passed to Naming.bind()/rebind()/unbind()/lookup(), and returned by Naming.list().
Simple names (such as "AuthService") are passed to Registry.bind()/rebind()/unbind()/lookup().
Whatever you passed to Registry.bind()/rebind() is returned verbatim by Registry.list().
Ergo, as Registry.list() is returning URLs, you must have supplied them via Registry.bind()/rebind().
For proof, try Naming.list("rmi://Mac.local/192.168.1.40:1099"). It will return this:
0 = "rmi://Mac.local/192.168.1.40:1099/rmi://Mac.local/192.168.1.40:1099/DataService"
1 = "rmi://Mac.local/192.168.1.40:1099/rmi://Mac.local/192.168.1.40:1099/AuthService"
which is obviously not what you want.
So you need to either use Naming.bind()/rebind() with the same URL strings, or else remove the URL part of the strings and keep using Registry.bind()/rebind().
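For illustration only (host name, port and the authServiceStub variable below are placeholders), the two consistent styles look roughly like this:

// imports: java.rmi.Naming, java.rmi.registry.LocateRegistry, java.rmi.registry.Registry

// Style 1: plain names with the Registry API
// server side
Registry registry = LocateRegistry.createRegistry(1099);
registry.rebind("AuthService", authServiceStub);
// client side
Registry clientRegistry = LocateRegistry.getRegistry("Mac.local", 1099);
ServicioAutenticacionInterface auth =
        (ServicioAutenticacionInterface) clientRegistry.lookup("AuthService");

// Style 2: URLs with the Naming API on both sides
// (the rebind must still run on the registry's own host)
Naming.rebind("rmi://Mac.local:1099/AuthService", authServiceStub);
ServicioAutenticacionInterface auth2 =
        (ServicioAutenticacionInterface) Naming.lookup("rmi://Mac.local:1099/AuthService");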
java.rmi.NotBoundException:
My RMI-based application was working fine until I introduced another function that uses a service (WatchService); the service had an internal infinite loop, which stalled the whole application.
My thought was that, when the server was started, the binding process may not have completed because of the loop inside the service (the service was started during the binding phase), so when the client came looking for the server stub it could not find it, because it was never fully bound/registered in the first place.
When I removed the function/service everything worked fine again, but since I needed it, I had to start it on a new thread inside the same class as the server stub, like so:
private class FileWatcherThread implements Runnable {

    public FileWatcherThread() {
    }

    @Override
    public void run() {
        startMonitors();
    }
}
Then, somewhere inside your main code, start the thread defined above:
new Thread(new FileWatcherThread()).start();
startMonitors() is the method with the infinite loop; it is defined in the main class, and FileWatcherThread is an inner class of the main server class. It really depends on how you have done your implementation and design, so just take the idea and see whether it suits your problem.

how to create rmi in java [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions concerning problems with code you've written must describe the specific problem — and include valid code to reproduce it — in the question itself. See SSCCE.org for guidance.
Closed 9 years ago.
Hi, I have searched the internet for a long time trying to find something that explains how to start the RMI registry on Windows 7 using cmd. If anyone knows how to do that, please let me know, or point me to a good link. Thanks in advance.
OK, thanks to everyone who answered my question. When I asked it I did not fully understand the RMI system or how it works, but now I have a good idea, so I will summarize it here to give everyone an overview of RMI. If I have made any mistakes, please correct me.
Remote Interface:
We need an interface that extends the Remote interface and defines the methods that we would like to invoke remotely.
note:
Remote is a "marker" interface that identifies interfaces whose methods may be invoked from a non-local virtual machine.
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.Calendar;
public interface CalendarTask extends Remote {
    Calendar getDate() throws RemoteException;
}
The Remote Object:
We need class that create a Remote object's so we crate class object implement the Remote Interface to make the object's that created by this class object remote object's and we link this object's to the RMI System by extends from this class UnicastRemoteObjec so When a class extends from UnicastRemoteObject, it must provide a constructor declaring this constructor calls super(), it activates code in UnicastRemoteObject, which performs the RMI linking and remote object initialization.
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;
import java.util.Calendar;
public class CalendarImpl extends UnicastRemoteObject implements CalendarTask {

    private int counter = 1;

    public CalendarImpl() throws RemoteException {}

    public Calendar getDate() throws RemoteException {
        System.out.print("Method called on server:");
        System.out.println("counter = " + counter++);
        return Calendar.getInstance();
    }
}
Writing the Server:
3.1 The server's job is to accept requests from a client, perform some service, and then send the results back to the client.
3.2 The server must specify an interface that defines the methods available to clients as a service. We did that above in the first step (Remote Interface).
3.3 The server creates the remote object, registers it under some arbitrary name, then waits for remote requests
3.4 so for register The remote object we use java.rmi.registry.LocateRegistry class allows the RMI registry service (provided as part of the JVM) to be started within the code by calling its createRegistry() method.
3.5 The java.rmi.registry.Registry class provides two methods for binding objects to the registry.
• Naming.bind("ArbitraryName", remoteObj);
throws an Exception if an object is already bound under the "ArbitraryName".
• Naming.rebind ("ArbitraryName", remoteObj);
binds the object under the "ArbitraryName" if it does not exist or overwrites the object that is bound.
3.6 The following example acts as a server that creates a CalendarImpl object and makes it available to clients by binding it under the name "TheCalendar".
import java.rmi.Naming;
import java.rmi.registry.LocateRegistry;
public class CalendarServer {
    public static void main(String args[]) {
        System.out.println("Starting server...");
        // Start RMI registry service and bind
        // object to the registry
        try {
            LocateRegistry.createRegistry(1099);
            Naming.rebind("TheCalendar",
                    new CalendarImpl());
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(1);
        }
        System.out.println("Server ready");
    }
}
Writing the Client:
4.1 An RMI client is a program that accesses the services provided by a remote object
4.2 The java.rmi.registry.LocateRegistry class allows the RMI registry service to be located by a client by its getRegistry() method
4.3 The java.rmi.registry.Registry class provides a lookup() method that takes the "ArbitraryName" the remote object was bound to by the server.
Once the client obtains a reference to a remote object, it invokes methods as if the object were local
import java.rmi.registry.*;
import java.util.Calendar;
public class CalendarClient {
    public static void main(String args[]) {
        Calendar c = null;
        CalendarTask remoteObj;
        String host = "localhost";
        if (args.length == 1)
            host = args[0];
        try {
            Registry r =
                    LocateRegistry.getRegistry(host, 1099);
            Object o = r.lookup("TheCalendar");
            remoteObj = (CalendarTask) o;
            c = remoteObj.getDate();
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.printf("%tc", c);
    }
}
The code you have written doesn't start a registry. LocateRegistry.getRegistry() doesn't do that. Check the Javadoc. It assumes the Registry is already running. LocateRegistry.getRegistry() just constructs a Registry stub according to the host and port you provide. It doesn't even do any network operations.
To start a Registry from within your JVM, use LocateRegistry.createRegistry(), as its Javadoc states.
EDIT: There's a lot of misinformation in your edit.
Remote is a "marker" interface that identifies interfaces whose methods may be invoked from a non-local virtual machine.
Only if implemented by an exported remote object whose stub has been transmitted to that VM. The remote interface itself doesn't have any such magical property. All methods defined in a remote interface must be declared to throw RemoteException, although the implementations of these methods generally don't need to be so declared (i.e. unless they perform remote operations themselves: the compiler will tell you).
We need class that create a Remote object's so we crate class object implement the Remote Interface to make the object's that created by this class object
Far too much confusion here. We need a class. The class must implement the remote interface. This is not an 'object' yet: it is a piece of code that must be compiled to a .class file. A class doesn't 'make objects'. An application does that, with the new operator.
we link this object's to the RMI System by extends from this class UnicastRemoteObjec so When a class extends from UnicastRemoteObject, it must provide a constructor declaring this constructor calls super(), it activates code in UnicastRemoteObject, which performs the RMI linking and remote object initialization
There is no 'link' step in RMI. There is an 'export' step. It is performed either by extending UnicastRemoteObject or by calling UnicastRemoteObject.exportObject(). If you don't extend UnicastRemoteObject you don't need the constructor you described.
The server's job is to accept requests from a client, perform some service, and then send the results back to the client.
The server's job is to implement the methods in the remote interface. RMI does all the rest for you.
The server creates the remote object, registers it under some arbitrary name, then waits for remote requests
Or else the server is the remote object and it registers itself.
for register The remote object we use java.rmi.registry.LocateRegistry class allows the RMI registry service (provided as part of the JVM) to be started within the code by calling its createRegistry() method.
Or you can use an external Registry via the rmiregistry command. Or you can use an LDAP server via JNDI.
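Since the original question was specifically about starting the registry from cmd on Windows 7: assuming the JDK's bin directory is on PATH, you can launch rmiregistry in a separate window from a directory where your compiled remote interface classes are visible (or set CLASSPATH first). The paths below are only examples:

cd C:\myapp\classes
start rmiregistry
REM or, to listen on a non-default port:
start rmiregistry 2000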
LocateRegistry.createRegistry(1099);
Naming.rebind("TheCalendar",
new CalendarImpl());
This won't work unless you store the result of createRegistry() into a static variable. And having stored it, you may as well use it to do the bind, instead of using the Naming class. If you don't store it into a static variable it will be garbage-collected and so will the remote object.
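A corrected server along those lines might look like this; it is only a sketch based on the question's code, with a static field keeping the registry reachable and the bind going through the Registry reference instead of Naming:

import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

public class CalendarServer {

    private static Registry registry; // strong reference so the registry is not garbage-collected

    public static void main(String[] args) {
        System.out.println("Starting server...");
        try {
            registry = LocateRegistry.createRegistry(1099);
            registry.rebind("TheCalendar", new CalendarImpl());
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(1);
        }
        System.out.println("Server ready");
    }
}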
The java.rmi.registry.LocateRegistry class allows the RMI registry service to be located by a client by its getRegistry() method
Or you can use the Naming class, see below.
The java.rmi.registry.Registry class provides a lookup() method that takes the "ArbitraryName" the remote object was bound to by the server.
So does the Naming class. It takes an rmi: URL which specifies the host and port and bind-name. You can omit the rmi:// part. If you omit the host it defaults to 'localhost', but this is only useful if the client is running in the same host as the server, which isn't itself very useful. If you omit the port it defaults to 1099.

How do I shutdown and reconfigure an AsyncHttpClient that is using NettyAsyncHttpProvider

I'm constructing an AsyncHttpClient like this:
public AsyncHttpClient getAsyncHttpClient() {
    AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder()
            .setProxyServer(makeProxyServer())
            .setRequestTimeoutInMs((int) Duration.create(ASYNC_HTTP_REQUEST_TIMEOUT_MIN, TimeUnit.MINUTES).toMillis())
            .build();
    return new AsyncHttpClient(new NettyAsyncHttpProvider(config), config);
}
This gets called once at startup, and then the return value is passed around and used in various places. makeProxyServer() is my own function that takes my proxy settings and returns a ProxyServer object. What I need to do is be able to change the proxy server settings and then recreate the AsyncHttpClient object. But I don't know how to shut it down cleanly. A bit of searching leads me to believe that close() isn't graceful. I'm worried about spinning up a whole new executor and set of threads every time the proxy settings change. This won't be often, but my application is very long-running.
I know I can use RequestBuilder.setProxyServer() for each request, but I'd like to have it set in one spot so that all callers of my asyncHttpClient instance obey the system-wide proxy settings without each developer having to remember to do it.
What's the right way to re-configure or teardown and rebuild a Netty-based AsyncHttpClient?
The problem with using AsyncHttpClient.close() is that it shuts down the thread pool executor used by the provider, and then there is no way to re-use the client without re-building it, because, as per the documentation, the executor instance cannot be reused once it is shut down. So there is no way around re-building the client if you go that route (unless you implement your own ExecutorService with different shutdown logic, but that is a long way to go, IMHO).
However, looking into the implementation of NettyAsyncHttpProvider, I can see that it stores the reference to the given AsyncHttpClientConfig instance and calls its getProxyServerSelector() to get the proxy settings for every new NettyAsyncHttpProvider.execute(Request...) invocation (i.e. for every request executed by AsyncHttpClient).
So, if we could make getProxyServerSelector() return a configurable ProxyServerSelector instance, that would do the trick.
Unfortunately, AsyncHttpClientConfig is designed to be a read-only container, instantiated by AsyncHttpClientConfig.Builder.
To overcome this limitation, we would have to hack it using, say, a "wrap/delegate" approach:
Create a new class, derived from AsyncHttpClientConfig. The class should wrap the given separate AsyncHttpClientConfig instance and implement the delegation of the AsyncHttpClientConfig getters to that instance.
To be able to return the proxy selector we want at any given point in time, we make this setting mutable in the wrapper class and expose a setter for it.
Example:
public class MyAsyncHttpClientConfig extends AsyncHttpClientConfig
{
    private final AsyncHttpClientConfig config;
    private ProxyServerSelector proxyServerSelector;

    public MyAsyncHttpClientConfig(AsyncHttpClientConfig config)
    {
        this.config = config;
    }

    @Override
    public int getMaxTotalConnections() { return config.getMaxTotalConnections(); }

    @Override
    public int getMaxConnectionPerHost() { return config.getMaxConnectionPerHost(); }

    // delegate the others but getProxyServerSelector()
    ...

    @Override
    public ProxyServerSelector getProxyServerSelector()
    {
        return proxyServerSelector == null
                ? config.getProxyServerSelector()
                : proxyServerSelector;
    }

    public void setProxyServerSelector(ProxyServerSelector proxyServerSelector)
    {
        this.proxyServerSelector = proxyServerSelector;
    }
}
Now, in your example, wrap your AsyncHttpClientConfig instance with this new wrapper and use it to configure the AsyncHttpClient:
Example:
MyAsyncHttpClientConfig myConfig = new MyAsyncHttpClientConfig(config);
return new AsyncHttpClient(new NettyAsyncHttpProvider(myConfig), myConfig);
Whenever you invoke myConfig.setProxyServerSelector(newSelector), new requests executed by the NettyAsyncHttpProvider instance in your client will use the new proxy server settings.
A few hints/warnings:
This approach relies on the internal implementation of NettyAsyncHttpProvider; therefore make your own judgement about maintainability, your upgrade strategy for future library versions, etc. You can always look at the provider's source code before upgrading to a new version. At this point, I personally think it is unlikely to change enough to invalidate this implementation.
You could get a ProxyServerSelector for a ProxyServer by using com.ning.http.util.ProxyUtils.createProxyServerSelector(proxyServer) - that's exactly what AsyncHttpClientConfig.Builder does (see the short sketch after this list).
The given example has no synchronization logic for accessing proxyServerSelector; you may want to add some as your application logic needs.
Maybe it is a good idea to submit a feature request for AsyncHttpClient to be able to setup a "configuration factory" for the AsyncHttpProvider so all these complications would vanish :-)
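For the ProxyUtils hint above, a minimal usage sketch; the proxy host and port are made up, and myConfig is the MyAsyncHttpClientConfig wrapper from the example:

import com.ning.http.client.ProxyServer;
import com.ning.http.client.ProxyServerSelector;
import com.ning.http.util.ProxyUtils;

ProxyServer proxyServer = new ProxyServer("proxy.example.com", 8080); // hypothetical proxy
ProxyServerSelector selector = ProxyUtils.createProxyServerSelector(proxyServer);
myConfig.setProxyServerSelector(selector);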
You should be holding a RequestHandle instance for all your unfinished requests. When you want to shut down, you can loop through and call isFinished() on all of them until they are all done. Then you know you can safely close it and no pending requests will be killed.
Once it's closed, just build a new one. Don't try to reuse the existing one. If you have references to it around, change those to reference a Factory that will return the current one.
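A rough sketch of such a factory/holder follows; the class and method names are made up and synchronization is kept minimal:

import com.ning.http.client.AsyncHttpClient;
import com.ning.http.client.AsyncHttpClientConfig;
import com.ning.http.client.ProxyServer;
import com.ning.http.client.providers.netty.NettyAsyncHttpProvider;

public class HttpClientHolder {

    private volatile AsyncHttpClient current;

    public HttpClientHolder(ProxyServer proxy) {
        current = build(proxy);
    }

    // Callers always fetch the client through the holder instead of caching their own reference.
    public AsyncHttpClient get() {
        return current;
    }

    // Swap in a client built with the new proxy settings, then close the old one.
    public synchronized void reconfigure(ProxyServer proxy) {
        AsyncHttpClient old = current;
        current = build(proxy);
        old.close(); // ideally after draining outstanding RequestHandles, as described above
    }

    private AsyncHttpClient build(ProxyServer proxy) {
        AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder()
                .setProxyServer(proxy)
                .build();
        return new AsyncHttpClient(new NettyAsyncHttpProvider(config), config);
    }
}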
