I'm constructing an AsyncHttpClient like this:
public AsyncHttpClient getAsyncHttpClient() {
    AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder()
        .setProxyServer(makeProxyServer())
        .setRequestTimeoutInMs((int) Duration.create(ASYNC_HTTP_REQUEST_TIMEOUT_MIN, TimeUnit.MINUTES).toMillis())
        .build();
    return new AsyncHttpClient(new NettyAsyncHttpProvider(config), config);
}
This gets called once at startup, and then the return value is passed around and used in various places. makeProxyServer() is my own function that takes my proxy settings and returns a ProxyServer object. What I need to do is be able to change the proxy server settings and then recreate the AsyncHttpClient object. But I don't know how to shut it down cleanly. A bit of searching leads me to believe that close() isn't graceful. I'm worried about spinning up a whole new executor and set of threads every time the proxy settings change. This won't be often, but my application is very long-running.
I know I can use RequestBuilder.setProxyServer() for each request, but I'd like to have it set in one spot so that all callers of my asyncHttpClient instance obey the system-wide proxy settings without each developer having to remember to do it.
What's the right way to re-configure or teardown and rebuild a Netty-based AsyncHttpClient?
The problem with using AsyncHttpClient.close() is that it shuts down the thread pool executor used by the provider; after that there is no way to reuse the client without rebuilding it, because, per the documentation, an executor instance cannot be reused once it is shut down. So there is no way around rebuilding the client if you go that way (unless you implement your own ExecutorService with different shutdown logic, but that is a long way to go, IMHO).
However, from looking into the implementation of NettyAsyncHttpProvider, I can see that it stores a reference to the given AsyncHttpClientConfig instance and calls its getProxyServerSelector() to get the proxy settings for every new NettyAsyncHttpProvider.execute(Request...) invocation (i.e. for every request executed by the AsyncHttpClient).
So, if we could make getProxyServerSelector() return a configurable instance of ProxyServerSelector, that would do the trick.
Unfortunately, AsyncHttpClientConfig is designed to be a read-only container, instantiated by AsyncHttpClientConfig.Builder.
To overcome this limitation, we have to hack it, using, say, a "wrap/delegate" approach:
Create a new class derived from AsyncHttpClientConfig. The class should wrap a given AsyncHttpClientConfig instance and delegate the AsyncHttpClientConfig getters to that instance.
To be able to return the proxy selector we want at any given point in time, we make this setting mutable in the wrapper class and expose a setter for it.
Example:
public class MyAsyncHttpClientConfig extends AsyncHttpClientConfig
{
    private final AsyncHttpClientConfig config;
    private ProxyServerSelector proxyServerSelector;

    public MyAsyncHttpClientConfig(AsyncHttpClientConfig config)
    {
        this.config = config;
    }

    @Override
    public int getMaxTotalConnections() { return config.getMaxTotalConnections(); }

    @Override
    public int getMaxConnectionPerHost() { return config.getMaxConnectionPerHost(); }

    // delegate the other getters the same way, except getProxyServerSelector()
    ...

    @Override
    public ProxyServerSelector getProxyServerSelector()
    {
        return proxyServerSelector == null
            ? config.getProxyServerSelector()
            : proxyServerSelector;
    }

    public void setProxyServerSelector(ProxyServerSelector proxyServerSelector)
    {
        this.proxyServerSelector = proxyServerSelector;
    }
}
Now, in your example, wrap your AsyncHttpClientConfig instance with this new wrapper and use it to configure the AsyncHttpClient:
Example:
MyAsyncHttpClientConfig myConfig = new MyAsyncHttpClientConfig(config);
return new AsyncHttpClient(new NettyAsyncHttpProvider(myConfig), myConfig);
Whenever you invoke myConfig.setProxyServerSelector(newSelector), new requests executed by the NettyAsyncHttpProvider instance in your client will use the new proxy server settings.
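For example, here is a minimal sketch of switching the proxy at runtime (the host and port are made up for illustration):
// Hypothetical new proxy settings; build a selector from them and swap it in.
ProxyServer newProxy = new ProxyServer("newproxy.example.com", 8080);
myConfig.setProxyServerSelector(ProxyUtils.createProxyServerSelector(newProxy));
// Requests executed from this point on will pick up the new proxy.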
A few hints/warnings:
This approach relies on the internal implementation of NettyAsyncHttpProvider, so make your own judgement about maintainability, your upgrade strategy for future library versions, etc. You can always look at the source code before upgrading to a new version. At this point, I personally think it is unlikely to change enough to invalidate this implementation.
You can get a ProxyServerSelector for a ProxyServer by using com.ning.http.util.ProxyUtils.createProxyServerSelector(proxyServer) - that's exactly what AsyncHttpClientConfig.Builder does (and what the sketch above uses).
The given example has no synchronization logic for accessing proxyServerSelector; you may want to add some, depending on your application's needs.
Maybe it is a good idea to submit a feature request for AsyncHttpClient to be able to set up a "configuration factory" for the AsyncHttpProvider so all these complications would vanish :-)
You should hold a RequestHandle instance for all your unfinished requests. When you want to shut down, you can loop through and call isFinished() on each of them until they are all done. Then you know you can safely close the client and no pending requests will be killed.
Once it's closed, just build a new one. Don't try to reuse the existing one. If you have references to it lying around, change them to reference a factory that returns the current instance.
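A minimal sketch of that factory idea, assuming an AtomicReference-based holder (names are made up; the drain-and-close step described above is only hinted at in a comment):
import java.util.concurrent.atomic.AtomicReference;
import com.ning.http.client.AsyncHttpClient;

public class AsyncHttpClientHolder {
    private final AtomicReference<AsyncHttpClient> current = new AtomicReference<>();

    // Callers always fetch the current client instead of caching their own reference.
    public AsyncHttpClient get() {
        return current.get();
    }

    // On a proxy change: swap in a freshly built client, then close the old one
    // once you have confirmed its pending RequestHandles are all finished.
    public void replace(AsyncHttpClient fresh) {
        AsyncHttpClient old = current.getAndSet(fresh);
        if (old != null) {
            old.close();
        }
    }
}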
I'm currently checking out the following guide: https://developer.android.com/topic/libraries/architecture/guide.html
The NetworkBoundResource class:
// ResultType: Type for the Resource data
// RequestType: Type for the API response
public abstract class NetworkBoundResource<ResultType, RequestType> {
    // Called to save the result of the API response into the database
    @WorkerThread
    protected abstract void saveCallResult(@NonNull RequestType item);

    // Called with the data in the database to decide whether it should be
    // fetched from the network.
    @MainThread
    protected abstract boolean shouldFetch(@Nullable ResultType data);

    // Called to get the cached data from the database
    @NonNull @MainThread
    protected abstract LiveData<ResultType> loadFromDb();

    // Called to create the API call.
    @NonNull @MainThread
    protected abstract LiveData<ApiResponse<RequestType>> createCall();

    // Called when the fetch fails. The child class may want to reset components
    // like rate limiter.
    @MainThread
    protected void onFetchFailed() {
    }

    // returns a LiveData that represents the resource; `result` is a
    // MediatorLiveData field set up in the constructor (omitted in this excerpt)
    public final LiveData<Resource<ResultType>> getAsLiveData() {
        return result;
    }
}
I'm a bit confused here about the use of threads.
Why is @MainThread applied here for network IO?
Also, for saving into the DB, @WorkerThread is applied, whereas @MainThread is used for retrieving results.
Is it bad practice to use a worker thread by default for network IO and local DB interaction?
I'm also checking out the following demo (GithubBrowserSample): https://github.com/googlesamples/android-architecture-components
This confuses me from a threading point of view.
The demo uses the Executors framework and defines a fixed pool with 3 threads for network IO, yet in the demo a worker task is defined for only one call, i.e. the FetchNextSearchPageTask. All other network requests seem to be executed on the main thread.
Can someone clarify the rationale?
It seems you have a few misconceptions.
Generally, it is never OK to make network calls on the main (UI) thread, but unless you have a lot of data it might be OK to fetch data from the DB on the main thread. And this is what the Google example does.
1.
The demo uses executors framework, and defines a fixed pool with 3 threads for networkIO, however in the demo only a worker task is defined for one call, i.e. the FetchNextSearchPageTask.
First of all, since Java 8 you can create simple implementations of certain interfaces (so-called "functional interfaces") using lambda syntax. This is what happens in NetworkBoundResource:
appExecutors.diskIO().execute(() -> {
    saveCallResult(processResponse(response));
    appExecutors.mainThread().execute(() ->
        // we specially request a new live data,
        // otherwise we would immediately get the last cached value,
        // which may not be updated with the latest results received from network.
        result.addSource(loadFromDb(),
            newData -> result.setValue(Resource.success(newData)))
    );
});
At first, a task (processResponse and saveCallResult) is scheduled on a thread provided by the diskIO Executor, and then, from that thread, the rest of the work is scheduled back onto the main thread.
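For readers new to lambdas, the outer lambda above is roughly shorthand for an anonymous Runnable:
// Roughly equivalent anonymous-class form of the outer lambda:
appExecutors.diskIO().execute(new Runnable() {
    @Override
    public void run() {
        saveCallResult(processResponse(response));
        // ...then schedule the rest back onto the main thread, as above
    }
});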
2.
Why is @MainThread applied here for networkIO?
and
All other network requests seem to be executed on the main thread.
This is not so. Only the result wrapper, i.e. LiveData<ApiResponse<RequestType>>, is created on the main thread. The network request is done on a different thread. This is not easy to see because the Retrofit library does all the network-related heavy lifting and nicely hides such implementation details. Still, if you look at the LiveDataCallAdapter that wraps a Retrofit Call into a LiveData, you can see that Call.enqueue is used, which is an asynchronous call (scheduled internally by Retrofit).
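Here is a condensed sketch of the adapt method from the sample's LiveDataCallAdapter (simplified from the repo; the surrounding class and imports are omitted):
@Override
public LiveData<ApiResponse<R>> adapt(final Call<R> call) {
    return new LiveData<ApiResponse<R>>() {
        AtomicBoolean started = new AtomicBoolean(false);

        @Override
        protected void onActive() {
            super.onActive();
            if (started.compareAndSet(false, true)) {
                // The LiveData itself is created on the caller's (main) thread,
                // but enqueue() hands the HTTP work to Retrofit's background threads.
                call.enqueue(new Callback<R>() {
                    @Override
                    public void onResponse(Call<R> call, Response<R> response) {
                        postValue(new ApiResponse<>(response));
                    }

                    @Override
                    public void onFailure(Call<R> call, Throwable throwable) {
                        postValue(new ApiResponse<>(throwable));
                    }
                });
            }
        }
    };
}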
Actually if not for "pagination" feature, the example would not need networkIO Executor at all. "Pagination" is a complicated feature and thus it is implemented using explicit FetchNextSearchPageTask and this is a place where I think Google example is done not very well: FetchNextSearchPageTask doesn't re-use request parsing logic (i.e. processResponse) from RepoRepository but just assumes that it is trivial (which it is now, but who knows about the future...). Also there is no scheduling of the merging job onto the diskIO Executor which is also inconsistent with the rest of the response processing.
I have some objects registered in my RMI registry. I can check that this worked because LocateRegistry.getRegistry().list() returns two entries:
0 = "rmi://Mac.local/192.168.1.40:1099/DataService"
1 = "rmi://Mac.local/192.168.1.40:1099/AuthService"
Then I call
ServicioAutenticacionInterface authService = (ServicioAutenticacionInterface) Naming.lookup("rmi://Mac.local/192.168.1.40:1099/AuthService");
and it throws a NotBoundException.
I should mention that the interfaces are in a package named commons, defined as a dependency of the server package that is trying to invoke the lookup.
You passed a URL to Registry.bind()/rebind() instead of just a name.
URLs are passed to Naming.bind()/rebind()/unbind()/lookup(), and returned by Naming.list().
Simple names (such as "AuthService") are passed to Registry.bind()/rebind()/unbind()/lookup().
Whatever you passed to Registry.bind()/rebind() is returned verbatim by Registry.list().
Ergo, as Registry.list() is returning URLs, you must have supplied them via Registry.bind()/rebind().
For proof, try Naming.list("rmi://Mac.local/192.168.1.40:1099"). It will return this:
0 = "rmi://Mac.local/192.168.1.40:1099/rmi://Mac.local/192.168.1.40:1099/DataService"
1 = "rmi://Mac.local/192.168.1.40:1099/rmi://Mac.local/192.168.1.40:1099/AuthService"
which is obviously not what you want.
So you need to either use Naming.bind()/rebind() with the same URL strings, or else remove the URL part of the strings and keep using Registry.bind()/rebind().
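A minimal sketch of the two consistent options (authServiceStub is a placeholder for your exported remote object):
// Option 1: simple names via the Registry API.
Registry registry = LocateRegistry.getRegistry();
registry.rebind("AuthService", authServiceStub);
ServicioAutenticacionInterface auth =
    (ServicioAutenticacionInterface) registry.lookup("AuthService");

// Option 2: full URLs via the Naming API.
Naming.rebind("rmi://Mac.local:1099/AuthService", authServiceStub);
auth = (ServicioAutenticacionInterface) Naming.lookup("rmi://Mac.local:1099/AuthService");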
java.rmi.NotBoundException:
My RMI-based application was working fine until I introduced another function which utilizes a service (WatchService). The service had an internal infinite loop, and so it would stall the whole application.
My thought was that when the server started, the binding process may not have completed because of the loop inside the service, which was started during the binding phase; so when the client came looking for the server stub, it could not find it because it was never fully bound/registered in the first place.
When I removed the function/service everything worked fine again, but since I needed it, I had to start it on a new thread inside the same class as the server stub, like so:
private class FileWatcherThread implements Runnable {
    public FileWatcherThread() {
    }

    @Override
    public void run() {
        startMonitors();
    }
}
Then somewhere inside your main code, start the thread defined above:
new Thread(new FileWatcherThread()).start();
Here startMonitors() is the method with the infinite loop, defined in the main class, and FileWatcherThread is an inner class of the main server class. It really depends on how you have done your implementation and design; just get the idea and see if it suits your problem.
Can I attach a Java shutdown hook across JVMs?
I mean, can I attach a shutdown hook from my JVM to a WebLogic server running in a different JVM?
The shutdown hook part is in Runtime.
The across JVM part you'll have to implement yourself, because only you know how your JVMs can discover and identify themselves.
It could be as simple as creating a listening socket at JVM1 startup and sending the port number of JVM2 to it. JVM1 would then send a shutdown notification to JVM2 (on that port) from its shutdown hook.
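A minimal sketch of such a hook on the JVM1 side (host and port are made up; JVM2 would listen on the agreed port and shut itself down on receipt):
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class CrossJvmShutdownNotifier {
    // jvm2Host/jvm2Port are whatever JVM2 registered at startup.
    public static void register(String jvm2Host, int jvm2Port) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try (Socket socket = new Socket(jvm2Host, jvm2Port);
                 OutputStream out = socket.getOutputStream()) {
                out.write("SHUTDOWN\n".getBytes(StandardCharsets.UTF_8));
            } catch (IOException e) {
                // Best effort only; the other JVM may already be unreachable.
            }
        }));
    }
}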
The short answer is: you can, but not out of the box, and there are some pitfalls, so please read the section on pitfalls at the end.
A shutdown hook must be a Thread object (see Runtime.addShutdownHook(Thread)) that the JVM can access, so it must be instantiated within that JVM.
The only way I see to do it is to implement a Runnable that is also Serializable, plus some kind of remote service (e.g. RMI) to which you can pass the SerializableRunnable. This service must then create a Thread, pass the SerializableRunnable to that Thread's constructor, and add it as a shutdown hook to the Runtime.
But there is another problem in this case: the SerializableRunnable has no references to objects within the remote service's JVM, and you have to find a way for the SerializableRunnable to obtain them or have them injected. So you have the choice between a service locator and a dependency injection mechanism. I will use the service locator pattern for the following examples.
I would suggest defining an interface like this:
public interface RemoteRunnable extends Runnable, Serializable {
    /**
     * Called after de-serialization from a remote invocation to give the
     * RemoteRunnable a chance to obtain service references of the jvm it has
     * been de-serialized in.
     */
    public void initialize(ServiceLocator sl);
}
The remote service method could then look like this:
public class RemoteShutdownHookService {
    public void addShutdownhook(RemoteRunnable rr) {
        // Since an instance of a RemoteShutdownHookService is an object of the remote
        // jvm, it can provide a mechanism that gives access to objects in that jvm.
        // Either through a service locator
        ServiceLocator sl = ...;
        rr.initialize(sl);
        // or a dependency injection.
        // In case of a dependency injection the initialize method of RemoteRunnable
        // can be omitted.
        // A short spring example:
        //
        // AutowireCapableBeanFactory beanFactory = .....;
        // beanFactory.autowireBean(rr);
        Runtime.getRuntime().addShutdownHook(new Thread(rr));
    }
}
and your RemoteRunnable might look like this:
public class SomeRemoteRunnable implements RemoteRunnable {
    private static final long serialVersionUID = 1L;

    private SomeServiceInterface someService;

    @Override
    public void run() {
        // call someService on shutdown
        someService.doSomething();
    }

    @Override
    public void initialize(ServiceLocator sl) {
        someService = sl.getService(SomeServiceInterface.class);
    }
}
Pitfalls
There is only one problem with this approach that is not obvious: the RemoteRunnable implementation class must be available on the remote service's classpath. Thus you cannot just create a new RemoteRunnable class and pass an instance of it to the remote service; you always have to add it to the remote JVM's classpath.
So this approach only makes sense if the RemoteRunnable implements an algorithm that can be configured through the state of the RemoteRunnable.
If you want to dynamically add arbitrary shutdown hook code to the remote JVM without modifying the remote JVM's classpath, you must use a dynamic language and pass a script to the remote service, e.g. Groovy.
I'm working on a Spring application that downloads data from different APIs. For that purpose I need a Fetcher class that interacts with an API to fetch the needed data. One requirement is that this class has to have a method to start the fetching and a method to stop it. Also, it must download everything asynchronously, because users must be able to interact with a dashboard while data is being fetched.
What is the best way to accomplish this? I've been reading about task executors and the various Spring annotations for scheduling tasks and executing them asynchronously, but these solutions don't seem to solve my problem.
Asynchronous task execution is what you're after, and since Spring 3.0 you can achieve this using annotations directly on the method you want to run asynchronously.
There are two ways of implementing this, depending on whether you are interested in getting a result from the async process:
@Async
public Future<ReturnPOJO> asyncTaskWithReturn() {
    //..
    return new AsyncResult<ReturnPOJO>(yourReturnPOJOInstance);
}
or not:
@Async
public void asyncTaskNoReturn() {
    //..
}
In the former method, the result of your computation, conveyed by the yourReturnPOJOInstance object, is stored in an instance of org.springframework.scheduling.annotation.AsyncResult<V>, which in turn implements java.util.concurrent.Future<V>; the caller can use that Future to retrieve the result of the computation later on.
To activate this functionality in Spring you have to add the following to your XML config file:
<task:annotation-driven />
along with the needed task namespace.
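A hypothetical caller would then look like this (assuming the first method above lives in an injected bean called someService):
// Calling the @Async method returns immediately with a Future handle.
Future<ReturnPOJO> future = someService.asyncTaskWithReturn();
// ... do other work while the task runs in the background ...
ReturnPOJO pojo = future.get(); // blocks until the async method completes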
The simplest way to do this is to use the Thread class. You supply a Runnable object that performs the fetching in its run() method, and when the Thread is started, it invokes run() on a separate thread of execution.
So something like this:
public class Fetcher implements Runnable {
    public void run() {
        //do fetching stuff
    }
}
//in your code
Thread fetchThread = new Thread(new Fetcher());
fetchThread.start();
Now, if you want to be able to cancel, you can do that in a couple of ways. The easiest (albeit most violent and inadvisable) way is to interrupt the thread:
fetchThread.interrupt();
The correct way to do it is to implement logic in your Fetcher class that periodically checks a variable to see whether it should stop doing whatever it's doing.
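A minimal sketch of that, extending the Fetcher above with a volatile stop flag (names are illustrative):
public class Fetcher implements Runnable {
    // volatile so the change made by stop() is visible to the fetching thread
    private volatile boolean stopped = false;

    public void stop() {
        stopped = true;
    }

    public void run() {
        while (!stopped) {
            // fetch the next chunk of data, then loop back and re-check the flag
        }
    }
}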
Edit: To your question about getting Spring to run it automatically: if you want it to run periodically, you'll need a scheduling framework like Quartz. However, if you just want it to run once, you can use the @PostConstruct annotation. The method annotated with @PostConstruct is executed after the bean is created. So you could do something like this:
@Service
public class Fetcher implements Runnable {
    public void run() {
        //do stuff
    }

    @PostConstruct
    public void goDoIt() {
        Thread trd = new Thread(this);
        trd.start();
    }
}
Edit 2: I actually didn't know about this, but check out the @Async discussion in the Spring documentation if you haven't already. It might also be what you want to do.
You might only need certain methods to run on a separate thread rather than the entire class. If so, the @Async annotation is simple and easy to use.
Simply add it to any method you want to run asynchronously; you can also use it on methods with return types, thanks to Java's Future interface.
Check out this page: http://www.baeldung.com/spring-async
I would like to create a pipeline of handlers such as:
public ChannelPipeline getPipeline() throws Exception
{
    return Channels.pipeline(
        new ObjectEncoder(),
        new ObjectDecoder(),
        new AuthenticationServerHandler(),
        new BusinessLogicServerHandler());
}
The key here is that I'd like the AuthenticationServerHandler to be able to pass the login information to the BusinessLogicServerHandler.
I understand that you can use an Attachment; however, that only stores the information for that handler - the other handlers in the pipeline cannot access it. I also noticed there is something called ChannelLocal which might do the trick, but I cannot find any real information on how to use it. All I've seen is people creating a static instance of it, but how do you retrieve and access the info in another handler, assuming that's the correct method?
My question is: how you do pass information between handlers in the same pipeline. In the example above, how do I pass the login credentials from the AuthenticationServerHandler to the BusinessLogicServerHandler?
ChannelLocal is the way to go at the moment. Just create a static instance somewhere and then access it from within your handlers by passing the Channel to the set/get methods. This way you can share state between handlers on the same channel.
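A minimal sketch of that (assuming Netty 3.x; the LoginInfo class and holder are made up for illustration):
// Shared holder: one static ChannelLocal, keyed by Channel.
public final class LoginInfoHolder {
    public static final ChannelLocal<LoginInfo> LOGIN = new ChannelLocal<LoginInfo>();
}

// In AuthenticationServerHandler, once the user is authenticated:
LoginInfoHolder.LOGIN.set(ctx.getChannel(), loginInfo);

// In BusinessLogicServerHandler, later in the same pipeline:
LoginInfo loginInfo = LoginInfoHolder.LOGIN.get(ctx.getChannel());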
I wasn't a fan of the ChannelLocal implementation, given the lack of an internal static map, so what I ended up doing was putting my object in the Channel's attachment for now:
ctx.getChannel().setAttachment(myobj);
Then I make "myobj" basically a context POJO that contains all the information gathered about the request so far.
public class RequestContext {
    private String foo = "";

    public String getFoo() {
        return foo;
    }

    public void setFoo(String foo) {
        this.foo = foo;
    }
}

RequestContext reqCtx = new RequestContext();
reqCtx.setFoo("Bar");
ctx.getChannel().setAttachment(reqCtx);

reqCtx = (RequestContext) ctx.getChannel().getAttachment();
It's not elegant, but it works...
I pass information from one handler to the next by using dedicated handler instances to compose the pipeline for each channel, and by having the handlers reference each other within each pipeline.
The information is then passed the old-fashioned way, very simply, without any problems.
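A minimal sketch of that approach (the constructor argument and setCredentials method are made up for illustration):
public ChannelPipeline getPipeline() throws Exception
{
    // Fresh handler instances per channel, so they can safely hold per-connection state.
    BusinessLogicServerHandler business = new BusinessLogicServerHandler();
    AuthenticationServerHandler auth = new AuthenticationServerHandler(business);
    return Channels.pipeline(
        new ObjectEncoder(),
        new ObjectDecoder(),
        auth,
        business);
}

// Inside AuthenticationServerHandler, once login succeeds:
// business.setCredentials(credentials);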