I am working with an executor thread pool that checks whether any object has been inserted into a blocking queue. If an object is in the queue, one thread from the pool wakes up, takes the object from the queue, and sends it to another class for processing.
But I'm confused about using executor threads as shown below. When I submit them inside a for loop, processing is as fast as I expected, but it looks like something is wrong. When I take the executor submission out of the for loop, processing gets slow. Is this logic correct?
Rest Class
@RestController
public class FraudRestController {
    @Autowired
    private CoreApplication core;
    //LOGIC HERE
    ....
    core.addMessageToQueue(rbtran, type);
}
Message Add To Queue
public static void addMessageToQueue(TCPRequestMessage message) throws InterruptedException {
    jobQueue.put(message);
}
Executor Threads To Listen Queue in Core Class
ExecutorService consumers = Executors.newFixedThreadPool(THREAD_SIZE);
//Core Inits in here
@PostConstruct
public void init() {
    //LOGIC
    ...
    //<---THIS BLOCK----->
    for (int i = 0; i < THREAD_SIZE; i++) { //<---- This For Loop
        consumers.submit(() -> {
            while (true)
                sendMessageToServer();
        });
    }
    //<---THIS BLOCK----->
}
Send Message Function
private void sendMessageToServer() throws Exception {
    //LOGIC
    ...
    if (host.isActive()) {
        TCPRequestMessage message = jobQueue.take();
    }
}
This creates a thread pool for you of the size that you pass:
ExecutorService consumers = Executors.newFixedThreadPool(THREAD_SIZE);
This means the pool will have THREAD_SIZE threads waiting on a queue. The queue created is a LinkedBlockingQueue, which makes threads wait on it when it is empty (for takers) or full (for putters).
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
If a task is submitted to the pool while the queue is full, the task won't be accepted. In our case, since we did not specify a capacity, the capacity is Integer.MAX_VALUE, so the queue is effectively unbounded.
If the queue is empty, the threads in the pool wait for a task to be inserted into it.
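As a minimal, standalone illustration of that waiting behaviour (class and variable names here are just for the demo):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(); // unbounded, like the pool's work queue

        Thread consumer = new Thread(() -> {
            try {
                String item = queue.take(); // parks here while the queue is empty
                System.out.println("Consumed: " + item);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        TimeUnit.SECONDS.sleep(1); // the consumer stays blocked during this second
        queue.offer("task-1");     // and wakes up as soon as an element arrives
        consumer.join();
    }
}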
When the ExecutorService's submit method is called, the task is internally placed into the LinkedBlockingQueue via its offer(E e) method.
Based on this, I believe you may be able to redesign what you are implementing.
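For example, one possible redesign, as a rough sketch: drop the hand-rolled jobQueue and the THREAD_SIZE infinite loops and let the executor's own work queue do the buffering. This assumes sendMessageToServer can be changed to take the message as a parameter, which is not how the original code is written:
private final ExecutorService consumers = Executors.newFixedThreadPool(THREAD_SIZE);

public void addMessageToQueue(TCPRequestMessage message) {
    // each incoming message becomes one task; the pool's LinkedBlockingQueue buffers them
    consumers.submit(() -> {
        sendMessageToServer(message); // assumed to accept the message directly
        return null;                  // submitted as a Callable so checked exceptions are allowed
    });
}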
Related
I'm creating a Threadpool as shown below for a job.
public class MoveToCherwellThreadPool {
public static ThreadPoolExecutor cherwellMoveThreadPoolExecutor = null;
private static EMLogger logger = EMLogger.getLogger();
private static final String CLASSNAME = "MoveToCherwellThreadPool";
public static void initiateCherwellMoveThreadPool() {
BlockingQueue<Runnable> q = new LinkedBlockingQueue<Runnable>(100000);
cherwellMoveThreadPoolExecutor = new ThreadPoolExecutor(10,20, 20, TimeUnit.SECONDS, q);
cherwellMoveThreadPoolExecutor.setRejectedExecutionHandler(new RejectedExecutionHandler() {
@Override
public void rejectedExecution(Runnable r,
ThreadPoolExecutor executor) {
logger.logDebug(CLASSNAME,"Rejected task cherwellMoveThreadPoolExecutor Active tasks : " + cherwellMoveThreadPoolExecutor.getActiveCount() + ", " + "cherwellMoveThreadPoolExecutor Completed tasks : " + cherwellMoveThreadPoolExecutor.getCompletedTaskCount()+" Waiting for a second !! ");
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
executor.execute(r);
}
});
}
}
I'm using this during a process running for multiple customers. For each customer, a new thread pool will be initialized and its threads will be running.
Below is the code where I'm using the threadpool.
for (Object[] objects : relationshipList) {
CherwellRelationshipMoveThread relationshipThread = new CherwellRelationshipMoveThread(objects,
this.customerId, sb, credential,mainCIId,moveUniqueId,this.startTime);
CompletableFuture<?> future = CompletableFuture.runAsync(relationshipThread,
MoveToCherwellThreadPool.cherwellMoveThreadPoolExecutor);
crelationshipList.add(future);
}
crelationshipList.forEach(CompletableFuture::join);
These threads will be created for multiple customers. I'm giving an option to terminate this job in the UI. On clicking stop, I need to stop/kill only the threads running for that particular customer; the other customers' threads shouldn't be harmed and should keep running.
When stop is clicked in the UI, I call a service where my code is:
MoveToCherwellThreadPool.cherwellMoveThreadPoolExecutor.shutdownNow();
I'm calling shutdownNow() on the ThreadPoolExecutor.
This kills the threads of all customers. I don't want to kill every customer's process, only the one belonging to the customer whose stop button I click.
This code doesn't maintain any mapping from a tenant to a thread pool; there's only one static reference to a ThreadPoolExecutor. Each time initiateCherwellMoveThreadPool is called, any existing executor is replaced with a new one, and the existing one isn't shut down, so it leaks resources. As a result, tasks from multiple tenants execute in the same thread pool.
This code is also not thread safe. It's possible (if unlikely) that a thread could schedule a task on a newly-created executor, or even shut it down, before setRejectedExecutionHandler is called.
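One way to close that window, if the custom handler is kept, is to use the ThreadPoolExecutor constructor that accepts a RejectedExecutionHandler, so no partially configured executor is ever visible. A sketch reusing the original retry logic (logging omitted):
BlockingQueue<Runnable> q = new LinkedBlockingQueue<Runnable>(100000);
RejectedExecutionHandler retryHandler = (r, executor) -> {
    // same idea as the original handler: wait a second, then resubmit
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    executor.execute(r);
};
ThreadPoolExecutor pool =
        new ThreadPoolExecutor(10, 20, 20, TimeUnit.SECONDS, q, retryHandler);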
If you need a separate executor per tenant, this will need to be implemented. A good option might be to use a ConcurrentHashMap with customerId keys and ThreadPoolExecutor values, for example (logging omitted for brevity):
public class MoveToCherwellThreadPool {
public static ConcurrentMap<String, ThreadPoolExecutor> cherwellMoveThreadPoolExecutors = new ConcurrentHashMap<>();
public static ThreadPoolExecutor getCherwellMoveThreadPool(String customerId) {
return cherwellMoveThreadPoolExecutors.computeIfAbsent(customerId, id -> {
BlockingQueue<Runnable> q = new LinkedBlockingQueue<Runnable>(100000);
ThreadPoolExecutor executor = new ThreadPoolExecutor(10, 20, 20, TimeUnit.SECONDS, q);
executor.setRejectedExecutionHandler(new RejectedExecutionHandler() { /*...*/ });
return executor;
});
}
public static List<Runnable> stopCherwellMoveThreadPool(String customerId) {
if (cherwellMoveThreadPoolExecutors.containsKey(customerId)) {
return cherwellMoveThreadPoolExecutors.get(customerId).shutdownNow();
}
return Collections.emptyList();
}
}
This can be used like this:
CompletableFuture<?> future = CompletableFuture.runAsync(relationshipThread,
MoveToCherwellThreadPool.getCherwellMoveThreadPool(customerId));
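And when the stop button is clicked for a particular customer, only that customer's pool is shut down:
// shuts down only the executor mapped to this customerId
MoveToCherwellThreadPool.stopCherwellMoveThreadPool(customerId);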
It's also important to realise that calling shutdownNow can only attempt to cancel currently executing tasks, and "does not wait for actively executing tasks to terminate":
This implementation cancels tasks via Thread.interrupt(), so any task that fails to respond to interrupts may never terminate.
The code implementing CherwellRelationshipMoveThread isn't shown, so this may or may not be the case.
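For shutdownNow to have any effect on a long-running task, the task has to cooperate with interruption. A purely hypothetical sketch of an interrupt-aware task loop (this is not the actual CherwellRelationshipMoveThread):
// Hypothetical illustration only, not the real CherwellRelationshipMoveThread
class InterruptAwareMoveTask implements Runnable {
    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            // move the next relationship ...
            // Blocking calls in here should propagate InterruptedException
            // (or restore the interrupt flag) so shutdownNow() can cancel the task.
        }
    }
}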
I am reading Java Concurrency in Practice and encountered the following code snippet.
public class TrackingExecutor extends AbstractExecutorService {
private final ExecutorService exec;
private final Set<Runnable> tasksCancelledAtShutdown =
Collections.synchronizedSet(new HashSet<Runnable>());
...
public List<Runnable> getCancelledTasks() {
if (!exec.isTerminated())
throw new IllegalStateException(...);
return new ArrayList<Runnable>(tasksCancelledAtShutdown);
}
public void execute(final Runnable runnable) {
exec.execute(new Runnable() {
public void run() {
try {
runnable.run();
} finally {
if (isShutdown()
&& Thread.currentThread().isInterrupted())
tasksCancelledAtShutdown.add(runnable);
}
}
});
}
// delegate other ExecutorService methods to exec
}
The book says:
TrackingExecutor has an unavoidable race condition that could make it yield false positives: tasks that are identified as cancelled but actually completed. This arises because the thread pool could be shut down between when the last instruction of the task executes and when the pool records the task as complete.
Does it mean the situation below?
One worker thread has finished runnable.run(), but stops just before the finally block.
exec is shut down.
The worker thread then executes the finally block and adds runnable to tasksCancelledAtShutdown. (runnable.run() has already finished, so the task should be recognized as completed, not cancelled at shutdown.)
Do I get it right? If not, where is the race condition? What does "the thread pool could be shut down between when the last instruction of the task executes and when the pool records the task as complete" really mean?
I know that a Java thread cannot be restarted. So when I submit more than one task to newSingleThreadExecutor, how does it perform all the tasks using a single thread?
My understanding is that newSingleThreadExecutor will use at most one thread at a time to process any submitted tasks. I guess the same holds for newFixedThreadPool.
If a Thread cannot be restarted, then to perform n tasks, n threads would have to be spawned. I think newSingleThreadExecutor and newFixedThreadPool make sure that not too many threads are spawned at the same time, unlike what we do without an ExecutorService (where we attach each task to its own thread and start it separately).
Here is code example
class Task implements Runnable {
public void run() {
System.out.println("ThreadID-" + Thread.currentThread().getId());
try {
Thread.sleep(100);
}
catch (InterruptedException e) {
}
}
}
public class SingleThreadExecutorTest {
public static void main(String[] args) {
System.out.println("ThreadID-" + Thread.currentThread().getId());
ExecutorService ex = Executors.newSingleThreadExecutor();
for (int i = 0; i < 10; i++) {
ex.execute(new Task());
}
}
}
The above code always prints the same ThreadID.
If I replace the line below
Executors.newSingleThreadExecutor();
with
ExecutorService ex = Executors.newFixedThreadPool(2);
Then it is again able to perform all the tasks, this time using 2 threads.
Only when I use
Executors.newCachedThreadPool();
I see different Thread IDs.
How does ExecutorService reuse a Thread?
Does it not let the thread reach the dead state?
The ThreadPoolExecutor maintains worker threads, which work roughly like this:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Demo {
    // Simplified stand-in for the pool's internal work queue
    private static final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<>();

    // Stand-in for the worker polling that queue; take() blocks if the queue is
    // empty, so the worker thread will not terminate
    private static Runnable getTaskFromQueue() {
        try {
            return workQueue.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static class Worker implements Runnable {
        @Override
        public void run() {
            Runnable task = getTaskFromQueue();
            while (task != null) {
                task.run();
                task = getTaskFromQueue();
            }
        }
    }

    public static void main(String[] args) {
        Worker worker = new Worker();
        Thread thread = new Thread(worker);
        thread.start();
    }
}
When you submit a task to a ThreadPoolExecutor which has a single Worker thread, the calling thread will put the task into a BlockingQueue under the following conditions:
the single Worker is busy
the BlockingQueue is not full
And when the Worker is free, it will retrieve the next task from this BlockingQueue.
I have to do schoolwork, and I have some code done, but I have some questions:
I must create a boss-workers application in Java.
I have these classes: Main, WorkerThread, BossThread, Job.
Basically what I want to do is that BossThread holds a BlockingQueue and the workers go to it and look for Jobs.
Question 1:
At the moment I start 5 WorkerThreads and 1 BossThread.
Main:
Collection<WorkerThread> workers = new ArrayList<WorkerThread>();
for(int i = 1; i < 5; i++) {
WorkerThread worker = new WorkerThread();
workers.add(worker);
}
BossThread thread = new BossThread(jobs, workers);
thread.run();
BossThread:
private BlockingQueue<Job> queue = new ArrayBlockingQueue<Job>(100);
private Collection<WorkerThread> workers;
public BossThread(Set<Job> jobs, Collection<WorkerThread> workers) {
for(Job job : jobs) {
queue.add(job);
}
for(WorkerThread worker : workers) {
worker.setQueue(queue);
}
this.workers = workers;
}
Is this normal, or should I create the WorkerThreads in my BossThread?
Question 2:
As you see, I am giving the queue to each WorkerThread; is that reasonable, or could I store the queue in only one place?
Question 3:
Must I keep my BossThread running somehow, just to wait in case the user adds more stuff to the queue? And how do I keep the WorkerThreads running so they keep looking for jobs from the queue?
Any overall suggestions or design flaws?
public class WorkerThread implements Runnable {
private BlockingQueue<Job> queue;
public WorkerThread() {
}
public void run() {
try {
queue.take().start();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public void setQueue(BlockingQueue<Job> queue) {
this.queue = queue;
}
}
Firstly, one important mistake I noticed:
BossThread thread = new BossThread(jobs, workers);
thread.run();
Runnables must be passed to a Thread object and threads are started with start, not run. By calling run you get sequential execution on the same thread. So:
Thread thread = new Thread(new BossThread(jobs, workers));
thread.start();
Secondly, unless you absolutely must use BlockingQueue and explicit threads I would instead use ExecutorService. It neatly encapsulates a blocking work queue and a team of workers (whose size you can set). It's basically what you're doing but much simpler to use:
class Job implements Runnable {
public void run() {
// work
}
}
...
// create thread pool with 5 threads and blocking queue
ExecutorService exec = Executors.newFixedThreadPool(5);
// submit some work
for(int i = 0; i < 10; i++) {
exec.submit(new Job());
}
And that's it! All the put and take stuff is handled by the executor automatically.
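If the main thread then needs to wait for everything it submitted to finish, the usual pattern is to shut the executor down and await termination; a sketch continuing the snippet above (the timeout value is arbitrary):
// no more jobs will be submitted
exec.shutdown();
try {
    // wait for the queued jobs to drain
    if (!exec.awaitTermination(1, TimeUnit.MINUTES)) {
        exec.shutdownNow(); // interrupt anything still running
    }
} catch (InterruptedException e) {
    exec.shutdownNow();
    Thread.currentThread().interrupt();
}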
I'm writing a load-testing application in Java, and have a thread pool that executes tasks against the server under test. So to make 1000 jobs and run them in 5 threads I do something like this:
ExecutorService pool = Executors.newFixedThreadPool(5);
List<Runnable> jobs = makeJobs(1000);
for(Runnable job : jobs){
pool.execute(job);
}
However I don't think this approach will scale very well, because I have to make all the 'job' objects ahead of time and have them sitting in memory until they are needed.
I'm looking for a way to have the threads in the pool go to some kind of 'JobFactory' class each time they need a new job, and for the factory to build Runnables on request until the required number of jobs have been run. The factory could maybe start returning 'null' to signal to the threads that there is no more work to do.
I could code something like this up by hand, but it seems like a common enough use-case and was wondering if there was anything in the wonderful but complex 'java.util.concurrent' package that I could use instead?
You can do all the work in the executing threads of the thread pool, using an AtomicInteger to track how many jobs remain to be executed:
int numberOfParties = 5;
AtomicInteger numberOfJobsToExecute = new AtomicInteger(1000);
ExecutorService pool = Executors.newFixedThreadPool(numberOfParties);
for(int i =0; i < numberOfParties; i++){
pool.submit(new Runnable(){
public void run(){
while(numberOfJobsToExecute.decrementAndGet() >= 0){
makeJobs(1).get(0).run();
}
}
});
}
You can also store the returned Futures in a List and call get() on them to await completion (among other mechanisms).
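For example, a sketch building on the code above (get() throws InterruptedException and ExecutionException, which the caller has to handle or declare):
List<Future<?>> futures = new ArrayList<>();
for (int i = 0; i < numberOfParties; i++) {
    futures.add(pool.submit(new Runnable() {
        public void run() {
            while (numberOfJobsToExecute.decrementAndGet() >= 0) {
                makeJobs(1).get(0).run();
            }
        }
    }));
}
for (Future<?> f : futures) {
    f.get(); // returns once that worker's loop has finished
}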
Hrm. You could create a BlockingQueue<Runnable> with a fixed capacity and have each of your worker threads dequeue a Runnable and run it. Then you could have a producer thread which is what puts the jobs into the queue.
Main thread would do something like:
// 100 is the capacity of the queue before blocking
BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(100);
// start the submitter thread
new Thread(new JobSubmitterThread(queue)).start();
// make in a loop or something?
new Thread(new WorkerThread(queue)).start();
new Thread(new WorkerThread(queue)).start();
...
The worker would look something like:
public class WorkerThread implements Runnable {
    private final BlockingQueue<Runnable> queue;
    // flipped by the main thread to shut this worker down
    private volatile boolean shutdown = false;

    public WorkerThread(BlockingQueue<Runnable> queue) {
        this.queue = queue;
    }

    public void shutdown() {
        shutdown = true;
    }

    public void run() {
        // run until the main thread shuts it down via the volatile flag
        while (!shutdown) {
            try {
                Runnable job = queue.take();
                job.run();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // interrupted while waiting: exit
                return;
            }
        }
    }
}
And the job submitter would look something like:
public class JobSubmitterThread implements Runnable {
    private final BlockingQueue<Runnable> queue;

    public JobSubmitterThread(BlockingQueue<Runnable> queue) {
        this.queue = queue;
    }

    public void run() {
        try {
            for (int jobC = 0; jobC < 1000; jobC++) {
                Runnable job = makeJob();
                // this blocks when the queue reaches capacity
                queue.put(job);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
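A common alternative to the volatile shutdown flag, shown here only as a sketch (it is not part of the original answer), is a "poison pill": a marker job that tells a worker that no more work is coming.
import java.util.concurrent.BlockingQueue;

public class PoisonPillShutdown {
    // Marker job: taking it tells a worker that submission is finished.
    static final Runnable POISON_PILL = () -> { };

    // Submitter side: after the real jobs, enqueue one pill per worker.
    static void signalNoMoreJobs(BlockingQueue<Runnable> queue, int workerCount) throws InterruptedException {
        for (int i = 0; i < workerCount; i++) {
            queue.put(POISON_PILL);
        }
    }

    // Worker side: run jobs until the pill is taken.
    static void workerLoop(BlockingQueue<Runnable> queue) throws InterruptedException {
        while (true) {
            Runnable job = queue.take();
            if (job == POISON_PILL) {
                return;
            }
            job.run();
        }
    }
}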