I'm testing my server, which has a thread pool for the connections.
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class Test
{
    public static void main(String[] args)
    {
        ThreadPoolExecutor threadPoolExecutorSentMessage = new ThreadPoolExecutor(Runtime.getRuntime().availableProcessors(),
                100,
                5,
                TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        ConnctionListener con = new ConnctionListener() //ignore this, included it for other usage.
        {
            public void onStartSendingMessages()
            {
                while(true)
                {
                    for(int i = 0; i < 50; i++)
                    {
                        threadPoolExecutorSentMessage.execute(new TestT("Message: " + i));
                    }
                }
            }
        };
        con.onStartSendingMessages();
        //new Thread(new MessageConnectionWaiter(con)).start();
    }

    private static class TestT implements Runnable
    {
        private String msg;

        public TestT(String msg)
        {
            this.msg = msg;
        }

        @Override
        public void run()
        {
            System.out.println(msg);
        }
    }
}
It's not the server code, but I'm using this code to test how the threads work.
When I start an unbounded number of tasks (like many connections to my server), there is a problem: it gets stuck and nothing happens. I thought the thread pool blocks new tasks until it has space available for a new thread. Can someone tell me how to handle something like this? I tried to reduce the maximum number of threads, but that didn't fix my problem. I just want the thread pool to keep running tasks, no matter how many are waiting.
It is not the number of threads causing the problem; it is the number of tasks you are adding to the work queue. You are adding tasks in an infinite loop, and the work queue has a capacity: a LinkedBlockingQueue has a maximum capacity of Integer.MAX_VALUE. After you have added that many tasks, the main thread starts waiting for space to open up in the work queue. Only after one of the pool's threads finishes a task and removes it from the queue does space become available, and only then can the main thread add another task.
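If the goal is to keep the producer loop going without letting the queue grow without bound, one option (a sketch of my own, not part of the answer above) is a bounded queue combined with CallerRunsPolicy: when the queue is full, the submitting thread runs the task itself, which throttles it. The queue size of 1000 is an arbitrary choice, and java.util.concurrent.ArrayBlockingQueue needs to be imported as well:

ThreadPoolExecutor threadPoolExecutorSentMessage = new ThreadPoolExecutor(
        Runtime.getRuntime().availableProcessors(),   // core pool size
        100,                                          // maximum pool size
        5, TimeUnit.SECONDS,                          // keep-alive for excess threads
        new ArrayBlockingQueue<Runnable>(1000),       // bounded queue; 1000 is an arbitrary choice
        new ThreadPoolExecutor.CallerRunsPolicy());   // when the queue is full, the caller runs the task itself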
My program looks like this:
Executor executor = Executors.newSingleThreadExecutor();

void work1() {
    while (true) {
        // do heavy work 1
        Object data = null; // result of heavy work 1
        executor.execute(() -> work2(data));
    }
}

void work2(Object data) {
    // do heavy work 2
}
I noticed that when work2 becomes heavy, it affects work1 as well. It gets to the point where there is almost no gain in splitting the process into two threads.
What could be the reasons for this behavior and what tools do I have to find and analyze those problems?
Oh and here are my machine specs:
"while (true) {}" works fast but work2 is heavy and works slow. As a result, the number of tasks waiting for the single thread increases infinitely. So available core memory is exhausted and virtual memory is used, which is much slower. Standard thread pool is not designed to handle large number of tasks. A correct solution is as follows:
class WorkerThread extends Thread {
    final ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(10);

    public void run() {
        try {
            while (true) {
                queue.take().run(); // blocks until a task is available
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // exit when interrupted
        }
    }
}
WorkerThread workerThread = new WorkerThread();
workerThread.start();

void work1() throws InterruptedException {
    while (true) {
        // do heavy work 1
        Object data = null; // result of heavy work 1
        workerThread.queue.put(() -> work2(data)); // blocks while the queue is full (backpressure)
    }
}
Using an ArrayBlockingQueue keeps the number of waiting tasks small.
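One practical note on the sketch above (my addition, not part of the original answer): because take() responds to interruption, the worker can be stopped cleanly once work1 stops producing tasks:

// After work1 has finished producing tasks:
workerThread.interrupt();   // take() throws InterruptedException and run() returns
workerThread.join();        // wait for the worker thread to finish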
My data set is huge, so I thought of dividing it into chunks and using threads to process them asynchronously.
To keep it simple, let's say I have a list and associate each entry with a thread, so the number of threads is equal to the number of elements. Since I am new to threads in Java, I am not sure how the threads run asynchronously. Here is simplified code for better understanding.
import java.util.ArrayList;
import java.util.List;

class MyThread extends Thread {
    String threadName;
    String element;

    public MyThread(String threadName, String element) {
        this.threadName = threadName;
        this.element = element;
    }

    public void run() {
        System.out.println("Run: " + threadName);
        // some processing on the element
    }
}

class TestThread {
    public static void main(String[] args) {
        List<String> mainList = new ArrayList<>(); // assume this is filled with the data to process
        for (int x = 0; x < mainList.size(); x++) {
            MyThread temp = new MyThread("Thread #" + (x + 1), mainList.get(x));
            temp.start();
            System.out.println("Started Thread: " + (x + 1));
        }
    }
}
Does this code execute the threads in an asynchronous manner?
Instead of spawning threads yourself, use an ExecutorService and submit work to it in the form of Runnables.
Each Runnable task should process enough work to justify the overhead of scheduling it, but not so much work that you underutilize the other cores. In other words, you want to properly load balance the work across your cores. One way to do this is to divide the elements evenly across the tasks, so that each task processes roughly mainList.size() / num_threads elements, and you submit num_threads tasks to the ExecutorService.
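A minimal sketch of that idea, assuming the per-element work lives in a hypothetical process(String) method and that a pool sized to the number of cores is acceptable:

int numThreads = Runtime.getRuntime().availableProcessors();
ExecutorService pool = Executors.newFixedThreadPool(numThreads);

int chunkSize = (mainList.size() + numThreads - 1) / numThreads; // ceiling division
for (int start = 0; start < mainList.size(); start += chunkSize) {
    List<String> chunk = mainList.subList(start, Math.min(start + chunkSize, mainList.size()));
    pool.submit(() -> {
        for (String element : chunk) {
            process(element); // hypothetical per-element work
        }
    });
}
pool.shutdown(); // no new tasks accepted; already-submitted chunks run to completion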
I have many operations to run on multiple threads.
It should be possible to pause the calculations, increase or decrease the number of threads, and then resume the calculations.
At the moment I can only increase the number of threads: the tasks already produced by the Future and waiting in the queue are processed the whole time, and according to the docs the threads cannot be reduced until they become idle:
public void setCorePoolSize(int corePoolSize)
Sets the core number of threads. This overrides any value set in the constructor. If the new value is smaller than the current value, excess existing threads will be terminated when they next become idle. If larger, new threads will, if needed, be started to execute any queued tasks.
So my main problem is:
How can I pause the executor so that it does not execute the tasks waiting in the queue?
Class example definition:
import java.util.concurrent.*;

public class Calc {

    private int numberOfThreads;
    private ThreadPoolExecutor pool;
    private Future<?> fut;

    public void setNumberOfThreads(int threads) {
        this.numberOfThreads = threads + 1;
    }

    public void start() {
        if (pool == null) {
            pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(numberOfThreads);
        } else {
            pool.setCorePoolSize(numberOfThreads);
            pool.setMaximumPoolSize(numberOfThreads);
        }
        fut = pool.submit(() -> {
            while (true) {
                pool.execute(this::calculate);
            }
        });
    }

    public void suspendCalculations() {
        fut.cancel(true);
    }

    public void continueCalculations() {
        start();
    }

    private void calculate() {
        // calculation logic, not important
    }
}
Based on my example, let's imagine this situation:
call setNumberOfThreads(5)
call start()
fut will fill the queue with a large number of waiting tasks, e.g. 10000
call suspendCalculations()
call setNumberOfThreads(2)
call continueCalculations()
This way the number of threads cannot be reduced: there are 10000 tasks waiting in the queue, so we have to wait until the queue is empty.
What I want is to wait until the 5 tasks on the 5 threads finish, and for the tasks in the queue (the 10000) not to be handed to threads until I call continueCalculations().
That way I can call setCorePoolSize(2) before continueCalculations(), because the threads will not be processing tasks thanks to suspendCalculations().
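One way to get exactly this pause/resume behaviour is the pattern shown in the ThreadPoolExecutor javadoc: subclass the executor and override beforeExecute() so worker threads block on a condition while the pool is paused; tasks that are already running finish, and queued tasks are not handed out until resume() is called. A sketch (the class and method names are mine; pause() and resume() would back suspendCalculations() and continueCalculations() in the class above):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class PausableThreadPoolExecutor extends ThreadPoolExecutor {
    private boolean isPaused;
    private final ReentrantLock pauseLock = new ReentrantLock();
    private final Condition unpaused = pauseLock.newCondition();

    public PausableThreadPoolExecutor(int nThreads) {
        super(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        pauseLock.lock();
        try {
            while (isPaused) {
                unpaused.await(); // workers wait here before starting the next queued task
            }
        } catch (InterruptedException ie) {
            t.interrupt();
        } finally {
            pauseLock.unlock();
        }
    }

    public void pause() {
        pauseLock.lock();
        try {
            isPaused = true; // running tasks finish; no new queued task is started
        } finally {
            pauseLock.unlock();
        }
    }

    public void resume() {
        pauseLock.lock();
        try {
            isPaused = false;
            unpaused.signalAll(); // let waiting workers continue with queued tasks
        } finally {
            pauseLock.unlock();
        }
    }
}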
I have the code sample:
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class ThreadPoolTest {

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 100; i++) {
            if (test() != 5 * 100) {
                throw new RuntimeException("main");
            }
        }
        test();
    }

    private static long test() throws InterruptedException {
        ExecutorService executorService = Executors.newFixedThreadPool(100);
        CountDownLatch countDownLatch = new CountDownLatch(100 * 5);
        Set<Thread> threads = Collections.synchronizedSet(new HashSet<>());
        AtomicLong atomicLong = new AtomicLong();
        for (int i = 0; i < 5 * 100; i++) {
            Thread.sleep(100);
            executorService.submit(new Runnable() {
                @Override
                public void run() {
                    try {
                        threads.add(Thread.currentThread());
                        atomicLong.incrementAndGet();
                        countDownLatch.countDown();
                        Thread.sleep(1000);
                    } catch (Exception e) {
                        System.out.println(e);
                    }
                }
            });
        }
        executorService.shutdown();
        countDownLatch.await();
        if (threads.size() != 100) {
            throw new RuntimeException("test");
        }
        return atomicLong.get();
    }
}
I deliberately made the application run for a long time,
and watched it in JVisualVM.
In each time gap the thread pool was recreated.
After several minutes I see:
but if I use newCachedThreadPool instead of newFixedThreadPool, I see a constant picture:
Can you explain this behaviour?
P.S.
The problem was that an exception occurred in the code, so the second iteration was never started.
To answer your question, just look here:
private static long test() throws InterruptedException {
    ExecutorService executorService = Executors.newFixedThreadPool(100);
The JVM creates a new ThreadPool during each run of test(), because you tell it to do so.
In other words: if you intend to re-use the same thread pool, then avoid creating and shutting down your instances all the time.
In that sense, the simple fix is: move the creation of that ExecutorService into your main() method and pass the service as an argument to your test() method.
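A sketch of that change, assuming the rest of the class stays as in the question (the pool is created and shut down exactly once, in main()):

public static void main(String[] args) throws InterruptedException {
    // Create the pool once and reuse it for every call to test().
    ExecutorService executorService = Executors.newFixedThreadPool(100);
    for (int i = 0; i < 100; i++) {
        if (test(executorService) != 5 * 100) {
            throw new RuntimeException("main");
        }
    }
    executorService.shutdown(); // shut the pool down once, at the very end
}

// test() now receives the pool instead of creating it and no longer calls shutdown().
// Note: the threads.size() == 100 check is dropped, because with a reused pool idle
// workers are reused, so not every one of the 100 threads is guaranteed to pick up
// a task in every single call.
private static long test(ExecutorService executorService) throws InterruptedException {
    CountDownLatch countDownLatch = new CountDownLatch(100 * 5);
    AtomicLong atomicLong = new AtomicLong();
    for (int i = 0; i < 5 * 100; i++) {
        Thread.sleep(100);
        executorService.submit(() -> {
            try {
                atomicLong.incrementAndGet();
                countDownLatch.countDown();
                Thread.sleep(1000);
            } catch (Exception e) {
                System.out.println(e);
            }
        });
    }
    countDownLatch.await();
    return atomicLong.get();
}

With this change the pool's 100 threads are created once and show up as a constant band in JVisualVM, much like the cached pool did.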
Edit: regarding your last comment on cached vs. fixed threadpool; you probably want to look into this question.
Because you asked it to, in your code? :) Try moving the pool creation code outside of test().
From docs:
newFixedThreadPool
Creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue. At any point, at most nThreads threads will be active processing tasks. If additional tasks are submitted when all threads are active, they will wait in the queue until a thread is available. If any thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks. The threads in the pool will exist until it is explicitly shutdown.
newCachedThreadPool
Creates a thread pool that creates new threads as needed, but will reuse previously constructed threads when they are available. These pools will typically improve the performance of programs that execute many short-lived asynchronous tasks. Calls to execute will reuse previously constructed threads if available. If no existing thread is available, a new thread will be created and added to the pool. Threads that have not been used for sixty seconds are terminated and removed from the cache. Thus, a pool that remains idle for long enough will not consume any resources. Note that pools with similar properties but different details (for example, timeout parameters) may be created using ThreadPoolExecutor constructors.
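For reference, the two factory methods are just differently configured ThreadPoolExecutors (these equivalents are given in the Executors documentation, with nThreads standing for the requested pool size), which explains why the fixed pool keeps all its threads alive while the cached pool's thread count follows the load:

// newFixedThreadPool(nThreads): exactly nThreads workers, an unbounded queue,
// and threads that live until the pool is shut down.
new ThreadPoolExecutor(nThreads, nThreads,
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>());

// newCachedThreadPool(): as many workers as needed, direct hand-off via a SynchronousQueue,
// and idle threads reclaimed after 60 seconds.
new ThreadPoolExecutor(0, Integer.MAX_VALUE,
        60L, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>());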
I am creating new threads every 5 seconds using a ThreadPoolExecutor, and I am making sure the threads get closed. But I am getting:
Exception in thread "Timer-0" java.lang.OutOfMemoryError: unable to create new native thread
Is it happening because I am creating too many threads and not closing them, or because I am creating new ones too frequently?
Can someone please tell me if I am doing anything wrong in the code?
public class API {

    private final MongoStoreExecutor executor = new MongoStoreExecutor(10, 50);
    private final Timer tm = new Timer(); // java.util.Timer driving the TimerTask

    private class MongoStoreExecutor extends ThreadPoolExecutor {
        public MongoStoreExecutor(int queueSize, int maxThreadPoolSize) {
            super(10, maxThreadPoolSize, 30, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(queueSize),
                    new ThreadPoolExecutor.CallerRunsPolicy());
        }
    }

    // alertingPools, DataAccumulation and the other referenced fields are defined elsewhere in the application.
    public TimerTask alertingPoolsData() throws Exception, UnknownHostException {
        TimerTask task = new TimerTask() {
            public void run() {
                try {
                    Object[] pools = alertingPools.values().toArray();
                    List<Future<?>> tasks = new ArrayList<Future<?>>(pools.length);
                    for (Object pool : pools) {
                        tasks.add(executor.submit(new DataAccumulation(timeStartSecData,
                                timeEndSec, pool, jsonArrayResult, dataResult)));
                    }
                    for (Future<?> f : tasks) {
                        f.get(2 * 1000L, TimeUnit.MILLISECONDS);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };
        long interval = 5 * 1000L;
        tm.scheduleAtFixedRate(task, (interval -
                (System.currentTimeMillis() % interval)), interval);
        return task;
    }
}
My hunch has to do with the code that we can't see: I suspect that you're calling alertingPoolsData over and over again somewhere. As a result, I suspect that you are scheduling that TimerTask over and over again. Each one of those component TimerTasks is then repeatedly creating some unknown number of Future<?>s (one for each of the elements of Pools), each of which is going to your MongoStoreExecutor.
If all of the above is the case, you've got a positive first derivative on your thread count curve. As a result, you're going to see a quadratic increase in your thread count over time. You'll start hitting the upper limits of your native threads pretty quickly.
As suggested in the comments, you could easily modify your current implementation. I'd suggest a combination of a ScheduledExecutorService and a ForkJoinPool.
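A rough sketch of that combination (my own, with the names and the 2-second timeout taken from the question; alertingPools, DataAccumulation and the surrounding fields are assumed to exist as in your code):

// One scheduler for the periodic trigger and one worker pool for the actual work;
// both are created once and reused, so the thread count stays bounded.
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
ForkJoinPool workers = ForkJoinPool.commonPool();

scheduler.scheduleAtFixedRate(() -> {
    List<Future<?>> tasks = new ArrayList<>();
    for (Object pool : alertingPools.values()) {
        tasks.add(workers.submit(new DataAccumulation(timeStartSecData,
                timeEndSec, pool, jsonArrayResult, dataResult)));
    }
    for (Future<?> f : tasks) {
        try {
            f.get(2, TimeUnit.SECONDS);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}, 0, 5, TimeUnit.SECONDS);

Because the scheduler and the worker pool are created once, repeatedly scheduling the job no longer multiplies the number of threads.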