I have a Java/Ninja framework web application that sends requests to third-party APIs via Retrofit.
I have a thread T1, created from a thread pool using the Ninja framework's Schedule annotation (http://www.ninjaframework.org/apidocs/ninja/scheduler/Schedule.html).
T1 polls host1 (example.com) continuously.
When another thread, say T2, starts a request to the same host (example.com), I want T2 to wait until T1 completes its operations, and vice versa.
I checked out http://square.github.io/okhttp/3.x/okhttp/okhttp3/Dispatcher.html, but it looks like maxRequestsPerHost might not give me this per-host serialization.
I tried keeping a HashMap that each thread uses to update a hostStatus flag inside a synchronized block, but adding this to multiple methods seems cumbersome.
I'm not sure which concurrency pattern or locking mechanism fits this case. Is there a way to add the hostIds to a queue, or some other pattern, to implement this behavior?
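One straightforward option is a map of per-host fair locks, so every call to a given host queues up in arrival order. This is only a sketch under the assumption that all outgoing calls go through one chokepoint method; `HostSerializer` and `withHost` are illustrative names, not part of Ninja or OkHttp.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Sketch: serialize outgoing calls per host with one fair lock per host.
class HostSerializer {
    // computeIfAbsent creates each host's lock lazily and race-free.
    private final Map<String, Lock> locks = new ConcurrentHashMap<>();

    <T> T withHost(String host, Supplier<T> call) {
        Lock lock = locks.computeIfAbsent(host, h -> new ReentrantLock(true)); // true = fair
        lock.lock();
        try {
            return call.get(); // only one thread talks to this host at a time
        } finally {
            lock.unlock();
        }
    }
}
```

Both T1's polling and T2's request would wrap their Retrofit call in `withHost("example.com", ...)`, and calls to other hosts proceed in parallel because each host has its own lock.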
Related
I want to build a process flow. I have multiple requests to process in a queue. One thread (call it T1) takes the first request, starts processing it, and then passes it to another thread (from a pool of threads, call it T2) at the point where it has to do some blocking database access. T1 is then free to process another request from the queue. The blocking database access is done by a thread from the T2 pool. After the database operation completes, the T2 thread passes the request to a thread T3, which returns the processed result and then becomes free to return another result produced by T2.
I want to do this to avoid the one-thread-per-request model, because that would bring a lot of context-switching overhead, all threads would eventually block on database access, and CPU resources would be wasted in the meantime. The T1 and T3 thread pools can be of limited size, depending on the number of cores in the CPU.
I thought of the above approach after learning about async servlets: after receiving the request, the servlet does not block the thread; a different thread does the job and returns the response later.
Let me know if the process flow I want to build is feasible in Java, and please point me to some resources on how it can be achieved.
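The T1 → T2 → T3 hand-off described above maps naturally onto CompletableFuture stages running on separate executors. The following is a minimal sketch; the pool sizes, the stage method names, and the fake "database" step are all illustrative assumptions, not a definitive design.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the T1 -> T2 -> T3 hand-off using CompletableFuture stages.
class Pipeline {
    static ExecutorService daemonPool(int size) {
        return Executors.newFixedThreadPool(size, r -> {
            Thread t = new Thread(r);
            t.setDaemon(true); // so this demo does not keep the JVM alive
            return t;
        });
    }

    static final ExecutorService cpuPool = daemonPool(
            Runtime.getRuntime().availableProcessors()); // T1 and T3 stages
    static final ExecutorService dbPool = daemonPool(32); // T2: blocking database work

    static CompletableFuture<String> process(String request) {
        return CompletableFuture
                .supplyAsync(() -> request.trim(), cpuPool)         // T1: initial processing
                .thenApplyAsync(Pipeline::queryDatabase, dbPool)    // T2: blocking I/O
                .thenApplyAsync(rows -> "result:" + rows, cpuPool); // T3: build the response
    }

    static String queryDatabase(String parsed) {
        return parsed.toUpperCase(); // stands in for the blocking JDBC call
    }
}
```

No stage's thread waits on another stage: a T1 thread returns to its pool the moment it hands the request to `dbPool`, which is exactly the behavior the question asks for.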
Splitting a single request seems like a nice idea, but to optimize resource use I would look at using semaphores and have each request handled by a different thread.
Try limiting the number of concurrent requests with a semaphore, and also use semaphores to limit access to resources that can only be used one at a time.
Splitting a single request can be a good idea, but mostly in the scenario where you want to save the data to files to lower the memory usage of the threads.
I will try adding some Java code later if I can find my old projects...
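The semaphore idea above can be sketched as follows: one semaphore caps how many requests are in flight at once, and a single-permit semaphore guards a resource that only one thread may use at a time. The limits and names are illustrative assumptions.

```java
import java.util.concurrent.Semaphore;

// Sketch: bound concurrent handlers and serialize access to one shared resource.
class Limits {
    static final Semaphore requestSlots = new Semaphore(8, true); // at most 8 in flight, FIFO
    static final Semaphore resource = new Semaphore(1, true);     // exclusive resource

    static String handle(String req) throws InterruptedException {
        requestSlots.acquire();
        try {
            resource.acquire();
            try {
                return "done:" + req; // work needing exclusive access goes here
            } finally {
                resource.release();
            }
        } finally {
            requestSlots.release();
        }
    }
}
```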
Should I accept connections and monitor clients on a listener thread and then let workers handle the requests and answer the clients, or should I do everything on one thread?
Neither.
Ideally, for an NIO-based server, you create a thread pool using something like Executors.newFixedThreadPool(), which you will use to perform all the processing for handling your requests.
But, there should be no assignment of requests to specific threads, because the rest of your system should be asynchronous as well. That means that when a request handler needs to perform some lengthy I/O work or similar, instead of blocking the thread and waiting for it to finish, it starts it asynchronously and arranges for processing to continue when the work is finished by submitting a new task to the thread pool. There's no telling which thread will pick up the work at that point, so the processing for a request could end up being spread across many threads.
You should usually coordinate your asynchronous processing using CompletableFuture, in much the same way that Promise is used in Node. Have a look at my answer over here, which tries to explain how to do that: decoupled design for async http request
If your request handling is 100% asynchronous, that is you never wait for anything during request handling and you're on a single-core system, then it might be slightly better to do everything in the same thread.
If you have a multi-core system or you wait on I/O during request processing, then you should use a thread pool instead.
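Here is a minimal sketch of the approach described above: a shared pool handles everything, no request owns a thread, and CompletableFuture stitches the stages together like a Promise chain. The pool size and the fake `fetch` step are illustrative assumptions.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: each stage is a task submitted to the pool; whichever thread is free
// picks it up, so one request's processing may be spread across many threads.
class AsyncHandler {
    static final ExecutorService pool = Executors.newFixedThreadPool(4, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true); // demo only: let the JVM exit
        return t;
    });

    static CompletableFuture<String> handle(String request) {
        return CompletableFuture
                .supplyAsync(() -> fetch(request), pool)          // stands in for async I/O
                .thenApplyAsync(body -> "rendered:" + body, pool); // continuation stage
    }

    static String fetch(String req) { return "body-of-" + req; }
}
```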
During a job meeting, I heard that ThreadLocal is an anti-pattern because newer application servers use a threading technology called "new IO". They told me the problem with ThreadLocal is that an entire thread must wait for the database query to return a response, which is a waste of resources (memory as well as CPU).
The newer threading strategy uses a pool of threads, so when a thread is no longer needed it returns to the pool. From what I heard, this technology is implemented in newer application servers such as JBoss and WebSphere (I'm not sure).
Can I use it locally with Apache Tomcat, for example? (If possible, please point me to documentation on this.)
ThreadLocal is a side character in your story. What you have heard about is asynchronous request processing, which is helped, among other things, by the NIO library.
In this programming paradigm, you don't get a simple method like
Response processRequest(Request req)
Instead you get
void requestReceived(Request req, Response resp)
and within this method you will usually just start the processing by preparing the back-end request and calling its method which will look like
execute(Query q, ResultCallback cb)
and the framework will call your ResultCallback's method resultReady(Result res) which will contain the query result.
The main point here is that the method requestReceived will return immediately, and will not occupy the thread while the back-end request is being processed at the back-end subsystem.
BTW another name for this style of programming is continuation-passing style or CPS. It is because when you call a function, you don't wait for its return value, but rather pass a callback into it which will be called with the function's result, and which implements the continuation of the total request processing.
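The callback shape sketched above can be written out concretely. This is only an illustration of the continuation-passing style; the interface and method names mirror the pseudo-signatures above and are not a real framework API.

```java
// Sketch of continuation-passing style: the caller passes a callback that
// carries the rest of the request processing, instead of waiting for a return value.
class Cps {
    interface ResultCallback {
        void resultReady(String result);
    }

    // Returns immediately; the back-end work runs elsewhere and invokes the callback.
    static void execute(String query, ResultCallback cb) {
        Thread backend = new Thread(() -> cb.resultReady("rows-for:" + query));
        backend.start();
    }
}
```

The key property is that `execute` returns before the result exists, so the calling thread is free the whole time the back-end subsystem is working.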
How ThreadLocal fits into this
If you have followed what I have said above, it should already be clear to you that in this style of request processing, ThreadLocals are a useless concept because the request processing freely jumps from thread to thread, and in a way which is completely outside of your control.
ThreadLocal has basically nothing to do with databases or thread pools/ExecutorServices. ThreadLocal just means that the value stored in it is only visible to the thread that set it. This doesn't cause any blocking; you must be confusing some things there.
ThreadLocal: stores a variable per thread.
"new IO": they most likely meant the java.nio package. It's about reading/writing data without blocking.
Thread pools/ExecutorService: a bunch of threads you can submit Runnables to. You can use ExecutorServices in any Java application, because they are part of the standard library.
For accessing the database you normally use a dedicated connection pool like C3P0, which manages threads and database connections.
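A tiny demonstration of the per-thread visibility point: a value set via ThreadLocal in one thread is invisible to another thread, and no blocking is involved beyond the `join` used here to read the result.

```java
// Minimal demo: a ThreadLocal value is per-thread; another thread sees only the initial value.
class ThreadLocalDemo {
    static final ThreadLocal<String> user = ThreadLocal.withInitial(() -> "nobody");

    static String readInOtherThread() throws InterruptedException {
        user.set("alice");                       // visible only to the current thread
        final String[] seen = new String[1];
        Thread other = new Thread(() -> seen[0] = user.get());
        other.start();
        other.join();                            // wait just so we can inspect the result
        return seen[0];                          // the other thread saw "nobody"
    }
}
```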
I think I misunderstood the subject.
Well, I will explain in detail what I heard.
When using ThreadLocal, if we have, for example, a query to the database or a JMS call, the thread must stay alive for the response to return (suppose that takes 15 minutes, for example). The thread will sit waiting for the DB to return a response, so it's a waste of CPU as well as memory.
The new thread-management technology uses a pool of threads; during the waiting time, the thread will be used to serve another client.
That's what I heard.
To Marko Topolnik: what you have described is asynchronous calls, and it has nothing to do with threads.
ThreadLocals, thread pools, and new IO can cooperate quite well. All you need is to define a thread factory when creating the thread pool, so that each new thread gets the correct thread-local value at the moment of creation. This value can, for example, hold a reference to a java.nio.channels.Selector singleton. Another thread-local variable can hold a reference to the thread pool itself, which simplifies submitting new tasks to the pool.
Asynchronous framework https://github.com/rfqu/df4j uses thread locals intensively this way.
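The thread-factory idea above can be sketched like this: the factory wraps every worker's run loop so the thread-local is seeded before the thread does any work. The "context" value is an illustrative stand-in for something like a Selector reference.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

// Sketch: a ThreadFactory that seeds a ThreadLocal on every pool thread at creation.
class SeededPool {
    static final ThreadLocal<String> context = new ThreadLocal<>();

    static ExecutorService newPool(int size, String contextValue) {
        ThreadFactory factory = workerLoop -> new Thread(() -> {
            context.set(contextValue); // runs once per thread, before any task
            workerLoop.run();          // then hand control to the pool's worker loop
        });
        return Executors.newFixedThreadPool(size, factory);
    }
}
```

Every task executed by the returned pool can then read `SeededPool.context.get()` and find the value the factory installed.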
So I have a long running process that I want to encapsulate as a Runnable and dispatch it in a thread. To be more specific, I have a POST web service that creates a file in the file system but the creation of the file can take a very long time.
In the resource method of my web service, I want to dispatch a thread to do the file creation and then return status 200. I don't think I can just call Thread.join, because that would mean the current thread has to wait for the file-creation thread to finish. Instead, I want to join the file-creation thread to the main thread. Question is, how do I get the main thread in Java?
I am not sure whether I get you right. Here is what I understood:
You want to perform a possibly long-running operation (file creation)
you do not want your service method to block while that task is executed
you want the task executed in a thread that exists outside the boundary/lifetime of the single request.
Am I right so far?
If so, I really recommend you look into the newer concepts in java.util.concurrent. The concepts described there should give you enough information to tackle this.
Basic credo: Don't think in threads, think in tasks.
General Book recommendation: Java Concurrency in Practice by Brian Goetz
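"Think in tasks" here means: submit the file creation as a task to an executor that outlives the request, and return the status immediately. This is a sketch under the assumption that the executor is created at application startup; the method and path are illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: the resource method hands the slow work to a long-lived executor
// and returns at once, so no request thread ever waits on file creation.
class FileJobs {
    static final ExecutorService jobs = Executors.newFixedThreadPool(2, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true); // demo only: let the JVM exit
        return t;
    });

    static String post(String path) {
        jobs.submit(() -> createFile(path)); // runs after this method has returned
        return "200";                        // respond without waiting for the file
    }

    static void createFile(String path) {
        // the slow file creation would go here
    }
}
```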
You will need to process the request asynchronously: a separate thread does the heavy work, while the request-receiving thread stays free to process other requests. Please check out the following articles.
Asynchronous processing in Servlet 3.0
Asynchronous support in Servlet 3.0 spec
Asynchronous Support in Servlet 3.0
When you spawn the file-creation thread, you need to pass it some kind of reference to the parent thread, so it can communicate back (i.e. you provide something to enable a callback).
This could be the actual Thread object (obtained using Thread.currentThread, as someone said in a comment) or some other object that you use to signal when the file-creation thread is done.
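One such signaling object, as a sketch: a CompletableFuture that the file-creation thread completes when it is done. Nothing here needs a reference to the main thread at all; any interested party can watch the future. The names are illustrative.

```java
import java.util.concurrent.CompletableFuture;

// Sketch: instead of joining threads, the worker completes a future to signal "done".
class FileCreation {
    static CompletableFuture<String> createAsync(String path) {
        CompletableFuture<String> done = new CompletableFuture<>();
        new Thread(() -> {
            // the slow file creation would go here
            done.complete(path); // signal completion, carrying the created path
        }).start();
        return done; // the caller can poll, block, or chain callbacks on this
    }
}
```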
I have code called from a servlet that calls out to an external service. Of course, there are no guarantees how long the service will take to return a response. I need to ensure that no more than one call to this service executes at a time, but of course the servlet container can be running concurrent requests to the servlet. I want to guarantee that requests are processed single file, on a first-come, first-served basis. So it is not enough for my call to the external service to be synchronized, because once the current call finishes there would be no guarantee as to which thread gets in to make the next call.
Any ideas?
You could use a fair Lock
Lock lock = new ReentrantLock(true);
A fair lock is granted to waiting threads in the order in which they requested it.
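A usage sketch for the fair-lock approach, with the external call represented by a placeholder method:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: all servlet threads funnel through one fair lock, so calls to the
// external service happen one at a time, in arrival order.
class ExternalServiceGate {
    static final ReentrantLock lock = new ReentrantLock(true); // true = fair (FIFO)

    static String call(String request) {
        lock.lock(); // threads queue here in the order they arrived
        try {
            return "response-to-" + request; // the single-file external call goes here
        } finally {
            lock.unlock();
        }
    }
}
```

The try/finally is important: the lock must be released even if the external call throws.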
You can use single-threaded ExecutorService to submit Callables (which will perform actual request) and wait for Future value to become available.
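A sketch of that single-threaded-executor approach: because the executor has exactly one thread, submitted tasks run strictly one at a time, in submission order, while each caller blocks on its own Future.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: funnel all external calls through one thread via a single-threaded executor.
class SingleFileCalls {
    static final ExecutorService oneThread = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r);
        t.setDaemon(true); // demo only: let the JVM exit
        return t;
    });

    static String callExternalService(String request) throws Exception {
        Callable<String> task = () -> "response-to-" + request; // the real call goes here
        Future<String> future = oneThread.submit(task);
        return future.get(); // caller blocks until its own task has run
    }
}
```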
Many utilities in java.util.concurrent are suitable for this situation; a Semaphore with the fairness setting enabled is another choice:
import java.util.concurrent.Semaphore;

Semaphore sem = new Semaphore(HOW_MANY_PERMITS, true); // true = fair (FIFO) ordering
Create a worker thread in your servlet and enqueue callouts there. Request handling would synchronize on adding a request to the queue, then just sit there waiting for the worker thread to report back.
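The worker-thread idea above, sketched with a BlockingQueue: request threads enqueue a job and wait on its future, while a single worker drains the queue in FIFO order. The job shape and names are illustrative assumptions.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

// Sketch: one worker thread drains a FIFO queue, so callouts are single file.
class CalloutWorker {
    static final class Job {
        final String request;
        final CompletableFuture<String> reply;
        Job(String request, CompletableFuture<String> reply) {
            this.request = request;
            this.reply = reply;
        }
    }

    static final BlockingQueue<Job> queue = new ArrayBlockingQueue<>(100);

    static {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    Job job = queue.take(); // blocks until a request arrives
                    job.reply.complete("response-to-" + job.request); // external call here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // stop draining on shutdown
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    static String call(String request) throws InterruptedException {
        CompletableFuture<String> reply = new CompletableFuture<>();
        queue.put(new Job(request, reply)); // FIFO ordering comes from the queue
        return reply.join();                // sit there waiting for the worker to report back
    }
}
```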