Android Thread Pool to manage multiple Bluetooth handling threads? - java

So I have my Android Bluetooth application that has its host and clients. The problem is that, because I am making multiple connections, I need a thread to handle each connection. That's all milk'n'cookies, so I thought I'd stick all the threads in an array. A little research says a better way to do this is to use a thread pool, but I can't seem to get my head around how that works. Also, is it actually even possible to hold threads in an array?

A thread pool is built around the idea that, since creating threads over and over again is time-consuming, we should try to recycle them as much as possible. Thus, a thread pool is a collection of threads that execute jobs but are not destroyed when a job finishes; instead they "return to the pool" and either take another job or sit idle if there is nothing to do.
Usually the underlying implementation is a thread-safe queue into which the programmer puts jobs, while a set of threads managed by the pool keeps taking work from the queue (not necessarily by busy-spinning).
In Java a thread pool is represented by the ExecutorService interface, whose instances are usually obtained from the Executors factory methods. A pool can be:
fixed - create a thread pool with a fixed number of threads
cached - dynamically creates and destroys threads as needed
single - a pool with a single thread
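For reference, these correspond to factory methods on java.util.concurrent.Executors (a minimal fragment; the pool size of 4 is arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService fixed  = Executors.newFixedThreadPool(4);     // fixed: exactly 4 worker threads
ExecutorService cached = Executors.newCachedThreadPool();      // cached: grows and shrinks on demand
ExecutorService single = Executors.newSingleThreadExecutor();  // single: one worker, tasks run in submission order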
Note that, since thread pool threads operate in the manner described above (i.e. are recycled), in the case of a fixed thread pool it is not recommended to have jobs that do blocking I/O operations, since the threads taking those jobs will be effectively removed from the pool until they finish the job and thus you may have deadlocks.
As for the array of threads, it's as simple as creating any object array:
Thread[] threads = new Thread[10]; // array of 10 threads
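To tie this back to the Bluetooth question: instead of holding the threads in an array, you can hold a single ExecutorService and submit one handler task per accepted connection. A minimal sketch, assuming your per-connection logic is already a Runnable (the class and method names here are made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConnectionManager {
    // A cached pool grows as connections arrive, reuses idle workers,
    // and trims threads that have been idle for 60 seconds.
    private final ExecutorService pool = Executors.newCachedThreadPool();

    // Call this with whatever Runnable you already run per BluetoothSocket.
    public void onNewConnection(Runnable connectionHandler) {
        pool.submit(connectionHandler);   // no manual Thread bookkeeping needed
    }

    public void shutdown() {
        pool.shutdownNow();               // interrupts all connection handlers
    }
}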

Related

Thread pool that also uses the calling thread?

Is there any thread pool implementation that also allows the calling thread to be used for execution?
Some background - I have a service that needs to call lots of dependent services (and do some work with their results). My service is massively parallel and might use up to 1000 threads serving concurrent requests (really, I'm not kidding).
A common pattern for parallel processing is, of course, a shared pool of background threads that is used to farm out the work from the main thread. It also has a fundamental problem of exhaustion: if each of the 1000 service threads submits a long-running request, it's extremely easy to completely exhaust the pool's capacity.
Another classic solution is to use a private thread pool for each of the service threads. It's not very appealing, since I won't be able to make these private pools large enough.
So my idea is to use a special type of a thread pool executor that runs tasks in the calling thread and opportunistically uses the background thread pool to run tasks if it has free capacity. This way I can guarantee that the calling thread will make some progress in any case, even if the background pool is exhausted.
Does anybody know of such thread pool implementation?
Though it isn't very clear from the question, it sounds like the threads are mostly blocked waiting for responses from other services. This isn't a very productive use of those threads, and a large number of threads often causes the scheduler to operate inefficiently.
Alternatively, you can think about using asynchronous sockets with completion handlers. This avoids blocking I/O and instead calls handlers in your code when I/O events occur on the channel.
This ultimately means that you can massively reduce the number of threads in your application, which should improve performance.
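A minimal sketch of that style using java.nio.channels.AsynchronousSocketChannel (the host, port, and handler bodies are placeholders):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

public class AsyncClientSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        // No thread blocks while waiting for the connection or the reply.
        AsynchronousSocketChannel channel = AsynchronousSocketChannel.open();
        channel.connect(new InetSocketAddress("example.com", 80), channel,
                new CompletionHandler<Void, AsynchronousSocketChannel>() {
                    @Override
                    public void completed(Void result, AsynchronousSocketChannel ch) {
                        ByteBuffer buffer = ByteBuffer.allocate(4096);
                        // The read also completes asynchronously; handlers run on a small
                        // internal pool instead of one blocked thread per request.
                        ch.read(buffer, buffer, new CompletionHandler<Integer, ByteBuffer>() {
                            @Override
                            public void completed(Integer bytesRead, ByteBuffer buf) {
                                // process the response here
                            }
                            @Override
                            public void failed(Throwable exc, ByteBuffer buf) {
                                exc.printStackTrace();
                            }
                        });
                    }
                    @Override
                    public void failed(Throwable exc, AsynchronousSocketChannel ch) {
                        exc.printStackTrace();
                    }
                });
        Thread.sleep(5000); // keep this toy example alive long enough for the callbacks
    }
}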
Another approach is to place a task queue between the calling thread(s) and the thread pool. Every request is placed on the queue, and workers process the tasks in the queue in turn. When a task is complete, a notification is sent back to the calling thread.
Using this mechanism, you can always ensure that tasks will eventually be processed.
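For what it's worth, one JDK building block that approximates the behaviour asked about is a ThreadPoolExecutor with a bounded queue and the CallerRunsPolicy rejection handler: when the workers and the queue are both full, the submitting thread runs the task itself, so it always makes progress. A minimal sketch (the pool and queue sizes are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsSketch {
    public static void main(String[] args) {
        ExecutorService pool = new ThreadPoolExecutor(
                8, 8,                                       // fixed number of background workers
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(100),     // bounded backlog
                new ThreadPoolExecutor.CallerRunsPolicy()); // overflow runs on the caller

        for (int i = 0; i < 1000; i++) {
            final int id = i;
            pool.submit(() -> System.out.println(
                    "task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}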

Two threads of a FixedThreadPool are deadlocked. Will the thread pool create two more threads?

I create a fixed thread pool using Executors.newFixedThreadPool(5).
I submit two tasks that deadlock two threads of the above pool, so those two threads will never pick up any more tasks from the work queue. Is the thread pool intelligent enough to identify such a situation and add two more threads?
In my opinion, the answer is no.
But the interviewer disagreed with me. He said that if the same deadlock happens for the remaining three threads, then the thread pool is useless and he would never use such an API.
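For what it's worth, the standard ThreadPoolExecutor does not detect deadlocks or replace deadlocked workers; the two threads are simply lost to the pool while the remaining three keep serving the queue. A minimal sketch that reproduces the scenario (the lock names and task bodies are made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDeadlockSketch {
    public static void main(String[] args) {
        final Object lockA = new Object();
        final Object lockB = new Object();
        ExecutorService pool = Executors.newFixedThreadPool(5);

        // Task 1 takes lockA, then waits for lockB.
        pool.submit(() -> {
            synchronized (lockA) {
                sleep(100);                  // give task 2 time to grab lockB
                synchronized (lockB) { }
            }
        });
        // Task 2 takes lockB, then waits for lockA -- a classic lock-ordering deadlock.
        pool.submit(() -> {
            synchronized (lockB) {
                sleep(100);
                synchronized (lockA) { }
            }
        });

        // These still run, but only on the three workers that are not deadlocked.
        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.submit(() -> System.out.println(
                    "task " + id + " ran on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        // Note: the JVM will not exit, because the two deadlocked (non-daemon)
        // worker threads never finish and are never replaced.
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}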

Are there any disadvantages of using a thread pool?

I know the thread pool is a good thing because it can reuse threads and thus save the cost of creating new threads. But my question is, are there any disadvantages of using a thread pool? In which situation is using a thread pool not as good as using just individual threads?
In which situation is using a thread pool not as good as using just individual threads?
The only time I can think of is when you have a single thread that only needs to do a single task for the life of your program, something like a background thread attached to a permanent cache. That's about the only time I fork a thread directly as opposed to using an ExecutorService. Even then, using Executors.newSingleThreadExecutor() would be fine. The overhead of the thread pool itself is maybe a bit more logic and some memory, but it is very hard to see a pressing downside.
Certainly anytime you need multiple threads to perform tasks, a thread pool is warranted. What the ExecutorService code does is reduce the amount of code you need to write to manage the threads. The improvements in readability and code maintainability are a big win.
A thread pool is suitable only for operations that take a short time to complete. Thread pool threads are not suitable for long-running operations, as that can easily lead to thread starvation.
If you require your thread to have a specific priority, then a thread pool thread is not suitable.
You have tasks that cause the thread to block for long periods of time. The thread pool has a maximum number of threads, so a large number of blocked thread pool threads might prevent tasks from starting.
You've got a bunch of different answers here. I think one reason for that is the question is incomplete. You are asking for "disadvantages of using a thread pool," but you didn't say, disadvantages compared to what?
A thread pool solves a particular problem. There are other problems where "thread" or "threads" is part of the solution, but "thread pool" is not. "Thread pool" usually is the answer, when the question is, how to achieve parallel execution of many, short-lived, CPU-intensive tasks, on a multi-processor system.
Threads are useful, even on a uni-processor, for other purposes. The first question I ask about any long-running thread, for example, is "what does it wait for." Threads are an excellent tool for organizing a program that has to wait for different kinds of event. You would not use a thread pool for that, though.
In addition to Gray's answer.
Another use case is if you are using ThreadLocal state, or using the thread as a key in some kind of hash table or other stateful custom structure. In that case you have to take care to clean up the state when a particular task has finished with the thread, even if the task failed. Otherwise surprises are possible: the next task that runs on a thread carrying leftover state can start behaving incorrectly.
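A minimal sketch of the ThreadLocal case: because pool threads are reused, state left behind by one task leaks into the next unless it is cleared in a finally block (CURRENT_USER and handleRequest are made-up names):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalCleanupSketch {
    // Hypothetical per-request state stored on the worker thread.
    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> handleRequest("alice"));
        pool.submit(() -> handleRequest("bob"));
        pool.shutdown();
    }

    static void handleRequest(String user) {
        CURRENT_USER.set(user);
        try {
            // ... task body that reads CURRENT_USER.get() ...
        } finally {
            // Without this, the next task scheduled on the same pooled thread
            // could see the previous task's user.
            CURRENT_USER.remove();
        }
    }
}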
Thread pools of limited size are dangerous if the tasks running on them exchange information via blocking queues; this may cause thread starvation (see: What is starvation?). A good rule is to never use blocking operations in tasks running on a thread pool.
Plain threads are better when you don't plan to stop using the thread, for instance when it runs an infinite loop. Thread pools are best when you have many tasks that don't all happen at the same time. Especially when the tasks are short, the savings in overhead and the gain in clarity from reusing the same threads are bigger.
It depends on the situation in which you are going to use the thread pool. For example, if your system does not need to perform tasks in parallel, a thread pool would be of no use; it would keep unnecessary threads ready for work that will never come. In such cases you can use a SingleThreadExecutor instead. Check this link if you haven't already; it may clarify things: Thread Pool Pattern

Java: How thread pools map threads to runnables

Trying to wrap my head around Java concurrency and am having a tough time understanding the relationship between thread pools, threads, and the runnable "tasks" they are executing.
If I create a thread pool with, say, 10 threads, then do I have to pass the same task to each thread in the pool, or are the pooled threads literally just task-agnostic "worker drones" available to execute any task?
Either way, how does an Executor/ExecutorService assign the right task to the right thread?
Typically, thread pools are implemented with one producer-consumer queue that all of the pool threads wait on for tasks. The Executor does not have to assign tasks; all it has to do is push them onto the queue. Some thread, a 'task-agnostic worker drone', will pop a task, execute its run() method and, when complete, loop round to wait on the queue again for more work.
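A deliberately simplified sketch of that structure (real pools such as ThreadPoolExecutor add sizing, shutdown and error handling on top of this):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TinyPoolSketch {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public TinyPoolSketch(int nThreads) {
        for (int i = 0; i < nThreads; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        Runnable task = queue.take(); // block until work arrives
                        task.run();                   // run it, then loop back for more
                    }
                } catch (InterruptedException e) {
                    // interrupted -> this worker shuts down
                }
            });
            worker.setDaemon(true); // no shutdown handling in this sketch
            worker.start();
        }
    }

    public void execute(Runnable task) {
        queue.offer(task); // the "executor" only enqueues; any idle worker may take it
    }
}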
If I create a thread pool with, say, 10 threads, then do I have to pass the same task to each thread in the pool, or are the pooled threads literally just task-agnostic "worker drones" available to execute any task?
More or less the latter. Any given task gets assigned to the next available thread.
Either way, how does an Executor/ExecutorService assign the right task to the right thread?
There is no such thing as the "right" thread. The task (i.e. the Runnable) needs to be designed so that it doesn't matter which thread runs it. This is not normally an issue ... assuming that your application properly synchronizes access to / updates of data that is potentially used by more than one thread.
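A quick way to see this in action is to print which worker runs each task (a minimal sketch; the counts are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WhoRunsWhat {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        for (int i = 0; i < 9; i++) {
            final int id = i;
            pool.submit(() -> System.out.println(
                    "task " + id + " ran on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        // The output interleaves pool-1-thread-1/2/3 in no particular pattern:
        // whichever worker is free picks up the next task from the queue.
    }
}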

How to avoid both deadlocks and using too many threads?

Using Executors.newFixedThreadPool(int nThreads) is a nice way to minimize the overhead of creating too many threads, but it may lead to a deadlock when all the threads are waiting for another job which is itself waiting for a free thread from the pool. Sometimes the problem can be solved by using multiple thread pools, but sometimes it can't. I'm looking for something that behaves similarly to newFixedThreadPool, except that when all pooled threads are blocked, the pool should grow despite its predefined bound. Is there something like this?
Actually, the deadlock is not that important here. The real problem is how to manage the number of running threads rather than their total number. This can also be interesting when trying to keep the CPU fully utilized without creating needlessly many threads.
If you have a contention issue, this is a design problem. If you want a quick fix as you described, you will only be curing the symptoms, not the underlying sickness.
You should instead refactor your design to eliminate deadlock using some other means.
It's generally a bad idea to have threads in a pool blocked waiting for other threads in the same thread pool.
I would try to change the design to a non-blocking one. If a thread needs the result of another operation that is being processed by the same executor, I would have it submit a task back to the executor to run after the second operation completes, or place an object into a queue to be picked up later when the other job finishes.
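A minimal sketch of that idea using CompletableFuture, where the dependent work is expressed as a continuation instead of a blocking get() (loadData and process are placeholder methods):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class NonBlockingChainSketch {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Instead of task A blocking on task B's Future (which can starve the pool),
        // schedule A's follow-up work to run when B completes.
        CompletableFuture
                .supplyAsync(() -> loadData(), pool)          // the "second operation"
                .thenApplyAsync(data -> process(data), pool)  // continuation; no pool thread blocks in between
                .thenAccept(System.out::println)
                .join();                                      // only the main thread waits here

        pool.shutdown();
    }

    static String loadData() { return "data"; }                        // placeholder work
    static String process(String data) { return data + " processed"; }
}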
Alternatively you can do what Swing does with modal dialogs and have the thread that is about to block start up a child thread to keep processing requests until the parent thread unblocks. This is tricky to get right, though, and would require you to manually manage the threads, which is a lot less safe than using an Executor.
Executors.newCachedThreadPool(): a cached thread pool will check whether there are any idle threads available. If there are, it will reuse one; if not, it will create a new thread. An idle thread's time to live is 60 seconds, so after 60 seconds of inactivity the extra threads are terminated.
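For reference, a cached pool is roughly equivalent to constructing a ThreadPoolExecutor like this (a minimal fragment mirroring the configuration described in the Executors documentation):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// 0 core threads, an unbounded maximum, and idle threads reclaimed after 60 seconds.
ExecutorService cached = new ThreadPoolExecutor(
        0, Integer.MAX_VALUE,
        60L, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>());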
