Guarantee thread execution order is first come, first served - Java

I have code called from a servlet that calls out to an external service, and of course there is no guarantee how long the service will take to return a response. I need to ensure that no more than one call to this service executes at a time, but the servlet container may of course be running concurrent requests to the servlet. I want to guarantee that requests are processed single file, on a first come, first served basis. So it is not enough to synchronize my call to the external service, because once the current call finishes there would be no guarantee as to which waiting thread gets in to make the next call.
Any ideas?

You could use a fair Lock:
Lock lock = new ReentrantLock(true);
With fairness enabled, the lock is granted to waiting threads in the order in which they requested it.
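A minimal sketch of wrapping the external call this way (the class name and callExternalService are illustrative placeholders, not from the question):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class FairServiceGate {
    // true = fair: blocked threads acquire the lock in FIFO order
    private final Lock lock = new ReentrantLock(true);

    public String callSingleFile() {
        lock.lock();
        try {
            return callExternalService();
        } finally {
            lock.unlock();
        }
    }

    // Stand-in for the slow external call described in the question
    private String callExternalService() {
        return "response";
    }
}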

You can use a single-threaded ExecutorService to submit Callables (which perform the actual request) and wait for the Future value to become available.
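A sketch of that approach, assuming illustrative names: the single worker thread drains its queue in FIFO order, so requests are served first come, first served.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SerialCaller {
    // One worker thread; queued tasks run one at a time, in submission order
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public String call() throws Exception {
        Future<String> future = executor.submit(this::callExternalService);
        return future.get(); // the servlet thread blocks until its turn completes
    }

    // Stand-in for the slow external call
    private String callExternalService() {
        return "response";
    }
}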

Many utilities in java.util.concurrent are suitable for this situation; a Semaphore with the fairness setting enabled is another choice:
import java.util.concurrent.*;

// one permit serializes the calls; true enables fair (FIFO) ordering
Semaphore sem = new Semaphore(1, true);

Create a worker thread in your servlet and enqueue the callouts there. Request handling synchronizes only on adding a request to the queue, then blocks waiting for the worker thread to report back.
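A rough sketch of that design using a FutureTask-based hand-off (all names here are illustrative):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.FutureTask;
import java.util.concurrent.LinkedBlockingQueue;

public class CalloutWorker {
    // FIFO queue hands callouts to the single worker thread in arrival order
    private final BlockingQueue<FutureTask<String>> queue = new LinkedBlockingQueue<>();

    public CalloutWorker() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    queue.take().run(); // executes one callout at a time
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Called from servlet threads; blocks until this request's callout completes
    public String submit() throws Exception {
        FutureTask<String> task = new FutureTask<>(this::callExternalService);
        queue.put(task);
        return task.get();
    }

    // Stand-in for the slow external call
    private String callExternalService() {
        return "response";
    }
}

Note that this is essentially what a single-threaded ExecutorService does for you, so the previous answer is usually the simpler route.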

Related

Concurrent access to an ExecutorService

Consider the service class below:
// Singleton service
public class InquiryService {
    private final ExecutorService es = Executors.newSingleThreadExecutor();
    private final CustomerService cs = new CustomerServiceImpl();

    public String process() throws Exception {
        // Asynchronous call to get info from CustomerService
        Future<String> result = es.submit(() -> cs.getCustomer());
        // Query database
        // Perform logic et al.
        String customerName = result.get();
        // continue processing.
        return customerName;
    }
}
The service class above has an ExecutorService as a field. If, say, 100 concurrent requests hit the process method, do the remaining (100-1) requests need to wait for thread availability?
How to solve the request wait? One option I could think of is to instantiate, use and shut down the ExecutorService within the process method. However, aren't thread pools meant to be reused?
Another option would be to run it as new Thread(new FutureTask<>(() -> cs.getCustomer())). Which one is the correct way?
Updates:
Based on the comments and answers, the ExecutorService is meant to be reused and frequent creation of new Threads is expensive. Therefore another option is to just run the service call sequentially.
Your service is a singleton, which means only one instance exists throughout the runtime of your application (if implemented right!). So effectively you have one ExecutorService backed by a single thread, equivalent to newFixedThreadPool(1).
does remaining (100-1) requests need to wait for thread availability?
Oh yes, all your other (100-1) requests have to wait, since the first request is already executing in the only thread present in the pool. Since the thread pool is of fixed size, it can never grow to handle the other requests.
How to solve the request wait?
You need more threads in your thread pool to take up your tasks.
One option I could think is, instantiate, use and shutdown
ExecutorService within process method
This is a really bad idea. The time taken to create and destroy a Thread is too high; more on this here. Avoiding that cost is the whole idea of using a ThreadPool.
Another option would be run as new Thread(new FutureTask<>(() ->
{return cs.getCustomer()}))
Constructing a new Thread() has the same cost; read my previous point.
So, what is right!?
One way would be to use Executors.newFixedThreadPool(10), so that only (100-10) requests wait at a time. Is that okay? Or maybe you are looking for newCachedThreadPool!
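A sketch of those two options (the pool size 10 mirrors the numbers above and is otherwise arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Fixed pool: up to 10 getCustomer() calls run in parallel;
// further submissions queue until a worker thread frees up
ExecutorService fixed = Executors.newFixedThreadPool(10);

// Cached pool: reuses idle threads and grows on demand, so no request
// waits for a thread (at the cost of an unbounded thread count)
ExecutorService cached = Executors.newCachedThreadPool();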
Warning: also, please read about the side effects of using ThreadLocal with a ThreadPool, if applicable.

Blocking concurrent requests in Java/Retrofit to same host

I have a Java/Ninja framework web application sending requests to third-party APIs via the Retrofit API.
I have a thread T1, which is created from a thread pool using the Ninja framework's Schedule annotation (http://www.ninjaframework.org/apidocs/ninja/scheduler/Schedule.html).
T1 polls host1 (example.com) continuously.
When another thread, say T2, starts a request to the same host (example.com), I want T2 to wait until T1 completes its operation, and vice versa.
I checked out http://square.github.io/okhttp/3.x/okhttp/okhttp3/Dispatcher.html, but it looks like maxRequestsPerHost might not work for concurrent requests.
I tried keeping a HashMap to track the host status from each thread, updating the status inside a synchronized block, but adding this to multiple methods seems cumbersome.
I'm not sure which concurrency pattern or locking mechanism fits this case. Is there a way to queue the host IDs, or some other pattern, to implement this behavior?
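One possible pattern, not from the question itself: keep one fair lock per host in a ConcurrentHashMap, so threads hitting the same host serialize while requests to different hosts proceed in parallel. A sketch with illustrative names:

import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class HostSerializer {
    // One fair lock per host, created lazily and shared by all threads
    private final ConcurrentMap<String, Lock> hostLocks = new ConcurrentHashMap<>();

    public <T> T withHost(String host, Callable<T> request) throws Exception {
        Lock lock = hostLocks.computeIfAbsent(host, h -> new ReentrantLock(true));
        lock.lock();
        try {
            return request.call(); // e.g. the Retrofit call to this host
        } finally {
            lock.unlock();
        }
    }
}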

Handle a lot of Futures given by asynchronous REST requests

I want to use the Jersey client to create asynchronous REST requests. The API gives me Futures, so, as I understand it, I can invoke get(), and once the request is finished it will return something.
So I am thinking I could store the Futures in a map and have one thread look at them from time to time. Or maybe I should create a new thread every time someone sends an asynchronous request. There is also a requirement that it shouldn't last forever (a timeout).
What do you think?
I often use a List<Future<Void>> to store the futures. As get() blocks, I just cycle through them rather than poll them.
There is also a requirement that it should last forever (a timeout).
I assume you mean it shouldn't last forever. This requires support in the library you are using to make the requests. If they can be interrupted, you can cancel(true) the future, either from your waiting thread or from another ScheduledExecutorService. If they can't be interrupted, you may have to stop() the thread, but only as a last resort.
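A sketch combining both points, cycling through the list and cancelling anything that exceeds a per-request timeout (the 30-second value is just an example):

import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureCycler {
    public static void awaitAll(List<Future<Void>> futures) throws InterruptedException {
        for (Future<Void> f : futures) {
            try {
                f.get(30, TimeUnit.SECONDS); // block on each in turn
            } catch (TimeoutException e) {
                f.cancel(true); // interrupts the request, if the library supports it
            } catch (ExecutionException e) {
                // the request itself failed; log or handle as needed
            }
        }
    }
}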
The javadoc says:
A Future represents the result of an asynchronous computation. Methods
are provided to check if the computation is complete, to wait for its
completion, and to retrieve the result of the computation. The result
can only be retrieved using method get when the computation has
completed, blocking if necessary until it is ready.
Therefore it is up to you to choose which strategy to adopt: it mostly depends on what you want to do with those requests.
You could place those Futures in any iterable structure before going through them. Blocking on each get() may be a good strategy if you can handle each result quickly and don't need to check, while waiting, whether other futures have already returned.

Is there a non-reentrant ReadWriteLock I can use?

I need a ReadWriteLock that is NOT reentrant, because the lock may be released by a different thread than the one that acquired it. (I realized this when I started to get IllegalMonitorStateException intermittently.)
I'm not sure if non-reentrant is the right term. A ReentrantLock allows the thread that currently holds the lock to acquire it again. I do NOT want this behaviour, therefore I'm calling it "non-reentrant".
The context is that I have a socket server using a thread pool. There is NOT a thread per connection. Requests may get handled by different threads. A client connection may need to lock in one request and unlock in another request. Since the requests may be handled by different threads, I need to be able to lock and unlock in different threads.
Assume for the sake of this question that I need to stay with this configuration and that I do really need to lock and unlock in different requests and therefore possibly different threads.
It's a ReadWriteLock because I need to allow multiple "readers" OR an exclusive "writer".
It looks like this could be written using AbstractQueuedSynchronizer, but I'm afraid that if I write it myself I'll make some subtle mistake. I can find various examples of using AbstractQueuedSynchronizer, but not as a ReadWriteLock.
I could take the OpenJDK ReentrantReadWriteLock source and try to remove the reentrant part, but again I'm afraid I wouldn't get it quite right.
I've looked in Guava and Apache Commons but didn't find anything suitable. Apache Commons has RWLockManager which might do what I need but I'm not sure and it seems more complex than I need.
A Semaphore allows different threads to perform the acquire and release of permits. An exclusive write is equivalent to having all of the permits, as the thread waits until all have been released and no additional permits can be acquired by other threads.
final int PERMITS = Integer.MAX_VALUE;
Semaphore semaphore = new Semaphore(PERMITS);

// read: take a single permit, leaving room for other readers
semaphore.acquire(1);
try { ... }
finally {
    semaphore.release(1);
}

// write: take every permit, so the writer runs exclusively
semaphore.acquire(PERMITS);
try { ... }
finally {
    semaphore.release(PERMITS);
}
I know you've already accepted another answer. But I still think that you are going to create quite a nightmare for yourself. Eventually, a client is going to fail to come back and release those permits and you'll begin to wonder why the "writer" never writes.
If I were doing it, I would do it like this:
1. Client issues a request to start a transaction.
2. The initial request creates a task (Runnable/Callable) and places it in an Executor for execution.
3. The initial request also registers that task in a Map by transaction id.
4. Client issues a second request to close the transaction.
5. The close request finds the task by transaction id in the map.
6. The close request calls a method on the task to indicate that it should close (probably a signal on a Condition or, if data needs to be passed, placing an object in a BlockingQueue).
Now, the transaction task would have code like this:
public void run() {
    readWriteLock.readLock().lock();
    try {
        // do stuff for initializing this transaction
        if (condition.await(someDurationAsLong, someTimeUnit)) {
            // do the rest of the transaction stuff
        } else {
            // do some other stuff to back out the transaction
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // treat interruption as a failed transaction
    } finally {
        readWriteLock.readLock().unlock();
    }
}
Not entirely sure what you need, especially why it should be a read-write lock, but if you have tasks that need to be handled by many threads, and you don't want them to be processed/accessed concurrently, I'd actually use a ConcurrentMap (ConcurrentHashMap, etc.).
You can remove the task from the map, or substitute it with a special "lock object" to indicate it's locked. You could return the task with an updated state to the map to let another thread take over; alternatively, you can pass the task directly to the next thread and let that thread return the task to the map instead.
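A sketch of that remove-to-claim idea (names are illustrative): ConcurrentMap.remove() is atomic, so at most one thread can win a given task at a time.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class TaskClaims {
    public static class Task { /* state for the unit of work */ }

    private final ConcurrentMap<String, Task> tasks = new ConcurrentHashMap<>();

    // Atomically claim the task: only one thread gets a non-null result
    public Task claim(String id) {
        return tasks.remove(id);
    }

    // Hand the (possibly updated) task back so another thread can take over
    public void release(String id, Task task) {
        tasks.put(id, task);
    }
}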
They seem to have dropped the ball on this one by deprecating com.sun.corba.se.impl.orbutil.concurrent.Mutex.
I mean, who in their right mind thinks that we won't need non-reentrant locks? Here we are, wasting our time arguing over the definition of reentrant (which can slightly change in meaning per framework, by the way). Yes, I want to tryLock on the same thread; is that such a bad thing? It won't deadlock, because I'll fall through to the else branch. A non-reentrant lock that blocks even on the same thread can be very useful for preventing errors in GUI apps where the user presses the same button rapidly and repeatedly. Been there, done that; Qt was right... again.

Can I join the threads that a ThreadPoolExecutor creates?

I'm using a ThreadPoolExecutor to make it easy to create threads to handle requests, but now there is a requirement to execute the requests in order. I was wondering if I can use the join method to make a thread execute right after a previously launched thread finishes.
I've been looking at the API, but I haven't found a method that returns the Thread objects from the ThreadPoolExecutor.
Can I do that? Or do I need to implement something like my own thread factory to do this?
If you don't want the requests to happen concurrently, you can use java.util.concurrent.Executors.newSingleThreadExecutor() and they'll happen one at a time.
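The executor never exposes its worker Thread objects, so there is nothing to join; but if you need to wait for a task the way join() waits for a thread, submit() returns a Future whose get() blocks until that task is done. A small illustrative example:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OrderedRequests {
    public static void main(String[] args) throws Exception {
        ExecutorService es = Executors.newSingleThreadExecutor();

        // Tasks execute one at a time, in the order they were submitted
        Future<?> first = es.submit(() -> System.out.println("request 1"));
        Future<?> second = es.submit(() -> System.out.println("request 2"));

        first.get();  // blocks like join(), but for the task rather than the thread
        second.get();
        es.shutdown();
    }
}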
