How is Apache NIO HttpAsyncClient able to wait for a remote response without blocking any thread? Does it have a way to set up a callback with the OS (I doubt it)? Otherwise, does it perform some sort of polling?
EDIT - THIS ANSWER IS WRONG. PLEASE IGNORE AS IT IS INCORRECT.
You did not specify a version, so I cannot point you to source code. But to answer your question, the way that Apache does it is by returning a Future<T>.
Take a look at this link -- https://hc.apache.org/httpcomponents-asyncclient-4.1.x/current/httpasyncclient/apidocs/org/apache/http/nio/client/HttpAsyncClient.html
Notice how the link has nio in the package name. That stands for "non-blocking IO". And nine times out of ten, that is done by doing some work on a new thread.
This operates almost exactly like the CompletableFuture<T> from your first question. Long story short, the library kicks off the process on a new thread (just like CompletableFuture<T>), stores a handle to that thread in the Future<T>, and then lets you use that Future<T> to manage the newly created thread containing your non-blocking task. By doing this, you get to decide exactly when and where the code blocks, potentially giving you the chance to make some significant performance optimizations.
To be more explicit, let's give a pseudocode example. Say I have a method attached to an endpoint. Whenever the endpoint is hit, the method is executed. The method takes a single parameter, userID. I then use that userID to perform two operations: fetch the user's personal info, and fetch the user's suggested content. I need both pieces, and neither request needs to wait for the other to finish before starting. So what I do is something like the following.
public StoreFrontPage visitStorePage(int userID) throws ExecutionException, InterruptedException
{
    // Kick off both fetches; neither call blocks the current thread.
    final Future<UserInfo> userInfoFuture = this.fetchUserInfo(userID);
    final Future<PageSuggestion> recommendedContentFuture = this.fetchRecommendedContent(userID);

    // Block only when both results are actually needed.
    final UserInfo userInfo = userInfoFuture.get();
    final PageSuggestion recommendedContent = recommendedContentFuture.get();

    return new StoreFrontPage(userInfo, recommendedContent);
}
When I call this.fetchUserInfo(userID), my code creates a new thread and starts fetching user info on it, but lets my main thread continue and kick off this.fetchRecommendedContent(userID) in the meantime. The two fetches run in parallel.
However, I need both results in order to create my StoreFrontPage. So, when I decide that I cannot continue any further until I have the results from both fetches, I call Future::get on each of them. In short, that call says "wait for the thread you created to finish what it was doing, then hand back the result as a return value".
And to answer your question more explicitly: no, this tool does not require you to do anything involving callbacks or polling. All it does is give you a Future<T> and let you decide when you need to block the thread to wait on that Future<T> to finish.
EDIT - THIS ANSWER IS WRONG. PLEASE IGNORE AS IT IS INCORRECT.
I'm trying to make a call using Slack's SDK client in Java in order to get a user's ID from their email address. The Slack client returns a CompletableFuture object. I can get the user's ID if I use the get() method, but as far as I understand that is a synchronous call and it will make the application slower. Is there another way to make this call asynchronous?
public static String lookUpUserId(String email) throws ExecutionException, InterruptedException {
    CompletableFuture<UsersLookupByEmailResponse> response =
            slackClient.usersLookupByEmail(r -> r.email(email));
    UsersLookupByEmailResponse data = response.get();
    return data.getUser().getId();
}
I tried using supplyFunc and thenApply like this, but it gives an error saying 'missing return statement' although I return a value. I'm just concerned about the performance and curious if there's a better way to handle this. Thanks
response.thenApply(r->r.getUser().getId());
Since the call returns a CompletableFuture, it is already asynchronous. Yes, the get() method is synchronous. Using it will NOT make the application slower, it will make the application consume more memory. Asynchronous access like
response.thenApply(r->r.getUser().getId());
looks correct.
The reason for the error message is that the method lookUpUserId is declared as returning String and so must have a return statement. If you want to make it asynchronous, declare it as returning CompletableFuture<String> and add a return statement:
return response.thenApply(r->r.getUser().getId());
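Putting the pieces together, a sketch of the fully asynchronous version could look like this (slackClient and the response type are taken from the question; whether your Slack client method declares additional checked exceptions may vary):

```java
public static CompletableFuture<String> lookUpUserId(String email) {
    // No get() here: the caller receives a CompletableFuture and decides
    // when (or whether) to block on it.
    return slackClient.usersLookupByEmail(r -> r.email(email))
            .thenApply(response -> response.getUser().getId());
}
```

The caller can then chain further work with thenAccept / thenCompose, or call join() only at the point where the value is really needed.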
I don't know much specifically about the Slack API, but some information can be found in this answer with regard to housing your function in a class that implements Runnable.
Make your `CompletableFuture` in the constructor and run your gets in `run()`.
In my opinion it is best practice to process all of your API requests off the main thread, but you should know that running a single task on a separate thread on one line and joining that thread back on the next only adds a little overhead without any performance advantage. If you are processing a batch of requests, you should start each on an independent thread with a for loop and join them all together afterwards, as sketched below.
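A rough sketch of that batch pattern, assuming the blocking lookUpUserId(String) method from the question is available in the same class (imports needed: java.util.*, java.util.concurrent.*):

```java
// Hypothetical helper: look up several emails in parallel, one thread per request.
public static Map<String, String> lookUpAll(List<String> emails) throws InterruptedException {
    List<Thread> threads = new ArrayList<>();
    Map<String, String> idsByEmail = new ConcurrentHashMap<>();

    for (String email : emails) {
        Thread t = new Thread(() -> {
            try {
                idsByEmail.put(email, lookUpUserId(email)); // each lookup runs in parallel
            } catch (ExecutionException | InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.start();
        threads.add(t);
    }

    for (Thread t : threads) {
        t.join(); // wait for every lookup to finish before using the results
    }
    return idsByEmail;
}
```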
I'm also noticing that my referenced answer doesn't really cover thread joins for retrieving your results, so you will probably also find this informative. And if you haven't learned about object-oriented programming yet, that's OK! You'll be writing your own classes in no time.
Edit: Ask for code if you need it; it's better if you write it yourself.
Today I did some experiments on AsyncRestTemplate. Below is a piece of sample code:
ListenableFuture<ResponseEntity<MyObject[]>> result
        = asyncRestTemplate.getForEntity(uri, MyObject[].class);
List<MyObject> objects = Arrays.asList(result.get().getBody());
To my surprise, the request was not sent to the uri on the first line (i.e. after calling getForEntity), but only after result.get() was called.
Isn't it a synchronous way of doing stuff?
The whole idea of an async request is that either you do not want to wait for the async task to start/complete, or you want the main thread to do some other work before asking for the result from the Future instance. Internally, AsyncRestTemplate prepares an AsyncRequest and calls its executeAsync method:
AsyncClientHttpRequest request = createAsyncRequest(url, method);
if (requestCallback != null) {
    requestCallback.doWithRequest(request);
}
ListenableFuture<ClientHttpResponse> responseFuture = request.executeAsync();
There are two different implementations: HttpComponentsAsyncClientHttpRequest (which uses the high-performance async support provided by the Apache HttpComponents library) and SimpleBufferingAsyncClientHttpRequest (which uses facilities provided by J2SE classes). In the case of HttpComponentsAsyncClientHttpRequest, internally it has a thread factory (which is not Spring-managed, AFAIK), whereas SimpleBufferingAsyncClientHttpRequest provides for a Spring-managed AsyncListenableTaskExecutor. The point is that in all cases there is an ExecutorService of some kind to run the tasks asynchronously. Of course, as is natural with thread pools, the actual start time of a task is indeterminate, depends on many factors such as load and available CPUs, and should not be relied upon.
When you call future.get() you're essentially turning an asynchronous operation into a synchronous one by waiting for the result.
It doesn't matter exactly when the actual request is performed; the important thing is that since it's asynchronous, you don't need to worry about it unless/until you need the result.
The advantage is obvious when you need to perform other work before processing the result, or when you're not waiting for a result at all.
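For example, instead of blocking with get(), you can register a callback on the ListenableFuture and let the calling thread move on (a sketch reusing the names from the question):

```java
ListenableFuture<ResponseEntity<MyObject[]>> result =
        asyncRestTemplate.getForEntity(uri, MyObject[].class);

result.addCallback(new ListenableFutureCallback<ResponseEntity<MyObject[]>>() {
    @Override
    public void onSuccess(ResponseEntity<MyObject[]> response) {
        // runs when the response arrives, without blocking the calling thread
        List<MyObject> objects = Arrays.asList(response.getBody());
        // ... process objects ...
    }

    @Override
    public void onFailure(Throwable ex) {
        // runs if the request fails
        ex.printStackTrace();
    }
});

// the current thread is free to do other work here
```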
Suppose I need to execute N tasks in the same thread. The tasks may sometimes need some values from an external storage. I have no idea in advance which task may need such a value and when. It is much faster to fetch M values in one go rather than the same M values in M queries to the external storage.
Note that I cannot expect cooperation from the tasks themselves; they can be considered as nothing more than java.lang.Runnable objects.
Now, the ideal procedure, as I see it, would look like this:

1. Execute all tasks in a loop. If a task requests an external value, remember this, suspend the task, and switch to the next one.
2. Fetch the values requested at the previous step, all at once.
3. Remove all completed tasks (suspended ones don't count as completed).
4. If there are still tasks left, go to step 1, but instead of executing a task, continue its execution from the suspended state.
As far as I see, the only way to "suspend" and "resume" something would be to remove its related frames from JVM stack, store them somewhere, and later push them back onto the stack and let JVM continue.
Is there any standard (not involving hacking at lower level than JVM bytecode) way to do this?
Or can you maybe suggest another possible way to achieve this (other than starting N threads or making tasks cooperate in some way)?
It's possible using something like quasar that does stack-slicing via an agent. Some degree of cooperation from the tasks is helpful, but it is possible to use AOP to insert suspension points from outside.
(IMO it's better to be explicit about what's going on (using e.g. Future and ForkJoinPool). If some plain code runs on one thread for a while and is then "magically" suspended and jumps to another thread, this can be very confusing to debug or reason about. With modern languages and libraries the overhead of being explicit about the asynchronicity boundaries should not be overwhelming. If your tasks are written in terms of generic types then it's fairly easy to pass-through something like scalaz Future. But that wouldn't meet your requirements as given).
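For what it's worth, a rough sketch of that explicit style using only the standard library (all names here are made up, and the batched fetch is faked): each task that needs an external value is written as a continuation on a future, and everything runs on a single-threaded executor, so no stack slicing is required.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExplicitAsyncSketch {
    public static void main(String[] args) {
        ExecutorService single = Executors.newSingleThreadExecutor();

        // Hypothetical batched fetch: one future per requested key.
        CompletableFuture<String> valueA = fetchLater("keyA");
        CompletableFuture<String> valueB = fetchLater("keyB");

        // Tasks are written as continuations instead of being suspended mid-stack.
        CompletableFuture<Void> task1 = valueA.thenAcceptAsync(
                v -> System.out.println("task1 got " + v), single);
        CompletableFuture<Void> task2 = valueB.thenAcceptAsync(
                v -> System.out.println("task2 got " + v), single);

        CompletableFuture.allOf(task1, task2).join();
        single.shutdown();
    }

    // Stand-in for the real "fetch M values in one query" machinery.
    private static CompletableFuture<String> fetchLater(String key) {
        return CompletableFuture.supplyAsync(() -> "value-for-" + key);
    }
}
```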
As mentioned, Quasar does exactly that (it usually schedules N fibers on M threads, but you can set M to 1), using bytecode transformations. It even gives each task (AKA "fiber") its own stack trace, so you can dump it and get a complete stack trace without any interference from any other task sharing the thread.
Well, you could try this. You need:

1. A mechanism to save the current state of the task, because when the task returns, its frame is popped from the call stack. Based on the return value (or something similar) you can determine whether it completed or not; since you need to re-execute it from the point where it left off, you must preserve that state information.
2. A request data structure (DS) for each task. Whenever a task wants to request something, it logs the request there. The data structure should support every possible request a task can make.
3. Store these DS in a Map. At the end of the loop you can query them to determine the kind of resource required by each task.
4. Get the resources and put them in the DS, then restart each task from the state where it returned.
5. The task queries the DS and gets its resource. The task should use this DS whenever it wants to use an external resource.

You would need to design the method by which a resource is requested with special consideration, since when you re-execute the task you will call this method yourself so that the task can resume from where it left off. A sketch of this idea follows below.
*DS -> Data Structure
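A minimal sketch of this idea, with all class and method names invented purely for illustration:

```java
import java.util.*;

// What a task reports back after each run: either finished, or waiting on a resource key.
interface ResumableTask {
    // Returns null when the task is complete, or the key of the resource it needs.
    String run(Map<String, Object> resources);
}

class TaskDriver {
    void runAll(List<ResumableTask> tasks) {
        Map<String, Object> resources = new HashMap<>();
        List<ResumableTask> pending = new ArrayList<>(tasks);

        while (!pending.isEmpty()) {
            Set<String> requested = new HashSet<>();
            List<ResumableTask> stillPending = new ArrayList<>();

            for (ResumableTask task : pending) {
                String neededKey = task.run(resources);  // task records where it stopped internally
                if (neededKey != null) {
                    requested.add(neededKey);
                    stillPending.add(task);              // suspended, not completed
                }
            }

            resources.putAll(fetchAllAtOnce(requested)); // one batched query for all requests
            pending = stillPending;
        }
    }

    // Stand-in for the single batched call to the external storage.
    Map<String, Object> fetchAllAtOnce(Set<String> keys) {
        Map<String, Object> result = new HashMap<>();
        for (String key : keys) {
            result.put(key, "value-for-" + key);
        }
        return result;
    }
}
```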
Hope it helps.
Even after reading http://krondo.com/?p=1209 or Does an asynchronous call always create/call a new thread? I am still confused about how to provide asynchronous calls on an inherently single-threaded system. I will explain my understanding so far and point out my doubts.
One of the examples I read described a TCP server providing async processing of requests: a user would call a method, e.g. get(Callback c), and the callback would be invoked some time later. Now, my first issue here: we already have two systems, one server and one client. This is not what I mean, because in fact we have at least two threads, one on the server and one on the client side.
The other example I read was JavaScript, as this is the most prominent example of a single-threaded async system with Node.js. What I cannot get through my head, maybe because I'm thinking in Java terms, is this: if I execute the code below (apologies for the incorrect, probably atrocious syntax):
function foo(){
    read_file(location, callback); //asynchronous call, does not block
    //do many things more here, potentially for hours
}
the call to read_file executes (something) and returns, allowing the rest of my function to execute. Since there is only one thread, i.e. the one that is executing my function, how on earth will that same thread (the one and only one executing my stuff) ever get to read in the bytes from disk?
Basically, it seems to me I am missing some underlying mechanism acting like a round-robin scheduler of some sort, which is inherently single-threaded and might split the tasks into smaller ones, or call into a multithreaded component that would spawn a thread and read the file in.
Thanks in advance for all comments and pointing out my mistakes on the way.
Update: Thanks for all responses. Further good sources that helped me out with this are here:
http://www.html5rocks.com/en/tutorials/async/deferred/
http://lostechies.com/johnteague/2012/11/30/node-js-must-know-concepts-asynchrounous/
http://www.interact-sw.co.uk/iangblog/2004/09/23/threadless (.NET)
http://ejohn.org/blog/how-javascript-timers-work/ (intrinsics of timers)
http://www.mobl-lang.org/283/reducing-the-pain-synchronous-asynchronous-programming/
The real answer is that it depends on what you mean by "single thread".
There are two approaches to multitasking: cooperative and interrupt-driven. Cooperative multitasking, which is what the other StackOverflow item you cited describes, requires that routines explicitly relinquish ownership of the processor so it can do other things. Event-driven systems are often designed this way. The advantage is that it's a lot easier to administer and avoids most of the risks of conflicting access to data, since only one chunk of your code is ever executing at any one time. The disadvantage is that, because only one thing is done at a time, everything has to either be designed to execute fairly quickly or be broken up into chunks that do so (via explicit pauses like a yield() call), or the system will appear to freeze until that event has been fully processed.
The other approach -- threads or processes -- actively takes the processor away from running chunks of code, pausing them while something else is done. This is much more complicated to implement, and requires more care in coding since you now have the risk of simultaneous access to shared data structures, but is much more powerful and -- done right -- much more robust and responsive.
Yes, there is indeed a scheduler involved in either case. In the former version the scheduler is just spinning until an event arrives (delivered from the operating system and/or runtime environment, which is implicitly another thread or process) and dispatches that event before handling the next to arrive.
The way I think of it in JavaScript is that there is a queue which holds events. In the old Java producer/consumer parlance, there is a single consumer thread pulling items off this queue and executing every function registered to receive the current event. Events such as asynchronous calls completing (e.g. AJAX requests), timeouts or mouse events get pushed onto the queue as soon as they happen. The single "consumer" thread pulls them off the queue, locates any interested functions, and then executes them; it cannot get to the next event until it has finished invoking all the functions registered on the current one. Thus if you have a handler that never completes, the queue just fills up: it is said to be "blocked".
The system has more than one thread (it has at least one producer and one consumer), since something generates the events that go on the queue. But as the author of the event handlers, you need to be aware that events are processed on a single thread: if you go into a tight loop, you will lock up the only consumer thread and make the system unresponsive.
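To make that producer/consumer picture concrete in Java terms (purely an illustration of the model, not how any JavaScript engine is actually implemented):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TinyEventLoop {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    // The single "consumer" thread: runs one handler at a time, in order.
    public void run() throws InterruptedException {
        while (true) {
            Runnable handler = queue.take(); // blocks until an event arrives
            handler.run();                   // nothing else runs until this returns
        }
    }

    // Called by "producers" (I/O completion, timers, user input, ...).
    public void post(Runnable handler) {
        queue.add(handler);
    }
}
```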
So in your example:
function foo(){
    read_file(location, function(fileContents) {
        // called with the fileContents when the file has been read
    });
    //do many things more here, potentially for hours
}
If you do as your comment says and execute for potentially hours, the callback which handles fileContents will not fire for hours, even though the file has already been read. As soon as you hit the last } of foo(), the consumer thread is done with this event and can process the next one, at which point it will execute the registered callback with the file contents.
HTH
I'm using JDBC and need to constantly check the database against changing values.
What I currently have is an infinite loop running, with an inner loop iterating over the changing values, and each iteration checking against the database.
public void runInBG() { //this method is called from another thread
    while(true) {
        while(els.hasElements()) {
            Test el = (Test)els.next();
            String sql = "SELECT * FROM Test WHERE id = '" + el.getId() + "'";
            Record r = db.getTestRecord(sql); //this method opens a connection, runs executeQuery etc. and returns a Record object with values
            if(r != null) {
                //do something
            }
        }
    }
}
I think this isn't the best way.
The other way I'm thinking of is the reverse: keep iterating over the database.
UPDATE
Thank you for the feedback regarding timers, but I don't think it will solve my problem.
Once a change occurs in the database I need to process the results almost instantaneously against the changing values ("els" from the example code).
Even if the database does not change it still has to check constantly against the changing values.
UPDATE 2
OK, to anyone interested in the answer: I believe I have the solution now. Basically, the solution is NOT to use the database for this. Load in, update, add, etc. only what's needed from the database to memory.
That way you don't have to open and close the database constantly; you only deal with the database when you make a change to it, reflect those changes back into memory, and otherwise only work with whatever is in memory at the time.
Sure this is more memory intensive but performance is absolute key here.
As to the periodic "timer" answers, I'm sorry but this is not right at all. Nobody has responded with a reason why the use of timers would solve this particular situation.
But thank you again for the feedback, it was still helpful nevertheless.
Another possibility would be using ScheduledThreadPoolExecutor.
You could implement a Runnable containing your logic and register it to the ScheduledExecutorService as follows:
ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(10);
executor.scheduleAtFixedRate(myRunnable, 0, 5, TimeUnit.SECONDS);
The code above creates a ScheduledThreadPoolExecutor with 10 threads in its pool and registers a Runnable that runs every 5 seconds, starting immediately.
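A slightly fuller sketch of the same idea (the body of the Runnable is a placeholder standing in for one pass of the DB check from the question, without the outer while(true)):

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DbPollingScheduler {
    public static void main(String[] args) {
        ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(10);

        Runnable myRunnable = () -> {
            // one pass over the changing values / database check goes here
            System.out.println("checking database...");
        };

        // run immediately, then every 5 seconds
        executor.scheduleAtFixedRate(myRunnable, 0, 5, TimeUnit.SECONDS);
    }
}
```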
To schedule your runnable you could use:
scheduleAtFixedRate
Creates and executes a periodic action that becomes enabled first after the given initial delay, and subsequently with the given period; that is executions will commence after initialDelay then initialDelay+period, then initialDelay + 2 * period, and so on.
scheduleWithFixedDelay
Creates and executes a periodic action that becomes enabled first after the given initial delay, and subsequently with the given delay between the termination of one execution and the commencement of the next.
And here you can see the advantages of ThreadPoolExecutor, so you can decide whether it fits your requirements. I also recommend the question Java Timer vs ExecutorService? in order to make a good decision.
Keeping the while(true) in runInBG() is a bad idea; you had better remove it. Instead, you can have a scheduler/timer (using Timer & TimerTask) which calls runInBG() periodically and checks for updates in the DB.
You could use a Timer:
Timer timer = new Timer("runInBG");
// MyClass must extend TimerTask and put the repeated work in its run() method.
MyClass t = new MyClass();
timer.schedule(t, 0, 2000); // run now, then repeat every 2 seconds
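For completeness, the task class could look roughly like this (MyClass is the name used in the snippet above; the body is a placeholder):

```java
import java.util.TimerTask;

class MyClass extends TimerTask {
    @Override
    public void run() {
        // one round of the periodic work, e.g. checking the database
        System.out.println("checking for updates...");
    }
}
```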
As you said in the comment above, if the application controls the updates and inserts, then you can create a framework which notifies the 'BG' thread or process about changes in the database. Notification can be sent over the network via JMS, intra-VM using the observer pattern, or both.
You can have a generic notification message like the following (it can be a class for local notifications or a text message for remote notifications):
<Notification>
    <Type>update/insert</Type>
    <Entity>
        <Name>Account/Customer</Name>
        <Id>id</Id>
    </Entity>
</Notification>
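For the intra-VM (observer pattern) case, a bare-bones sketch might look like this (all names invented for illustration):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Listener interface for anything interested in database changes.
interface ChangeListener {
    void onChange(String type, String entityName, String id);
}

class ChangeNotifier {
    private final List<ChangeListener> listeners = new CopyOnWriteArrayList<>();

    void register(ChangeListener listener) {
        listeners.add(listener);
    }

    // Called by the code that performs the insert/update, right after it commits.
    void notifyChange(String type, String entityName, String id) {
        for (ChangeListener listener : listeners) {
            listener.onChange(type, entityName, id);
        }
    }
}
```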
To avoid a 'busy loop', I would try to use triggers. H2 also supports a DatabaseEventListener API; that way you wouldn't have to create a trigger for each table.
This may not always work, for example if you use a remote connection.
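If you go the trigger route, a sketch of an H2 trigger class might look roughly like this (please check the H2 documentation for the exact org.h2.api.Trigger interface in your version; the class name here is made up):

```java
import java.sql.Connection;
import java.sql.SQLException;
import org.h2.api.Trigger;

public class TestChangeTrigger implements Trigger {

    @Override
    public void init(Connection conn, String schemaName, String triggerName,
                     String tableName, boolean before, int type) throws SQLException {
        // one-time setup, if any
    }

    @Override
    public void fire(Connection conn, Object[] oldRow, Object[] newRow) throws SQLException {
        // called for each changed row; hand the change to the background logic here
    }

    @Override
    public void close() throws SQLException { }

    @Override
    public void remove() throws SQLException { }
}
```

The trigger is then registered with SQL along the lines of CREATE TRIGGER ... AFTER UPDATE ON Test FOR EACH ROW CALL "TestChangeTrigger" (again, see the H2 documentation for the exact syntax).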