How to analyze multi-threading overhead in Java?

My program looks like this:
Executor executor = Executors.newSingleThreadExecutor();

void work1() {
    while (true) {
        // do heavy work 1
        Object data;
        executor.execute(() -> work2(data));
    }
}

void work2(Object data) {
    // do heavy work 2
}
I noticed that when work2 becomes heavy it affects work1 as well. It gets to the point where there is almost no gain in splitting the process into two threads.
What could be the reasons for this behavior, and what tools do I have to find and analyze those problems?
Oh and here are my machine specs:

"while (true) {}" works fast but work2 is heavy and works slow. As a result, the number of tasks waiting for the single thread increases infinitely. So available core memory is exhausted and virtual memory is used, which is much slower. Standard thread pool is not designed to handle large number of tasks. A correct solution is as follows:
class WorkerThread extends Thread {
    ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(10);

    public void run() {
        try {
            while (true) {
                queue.take().run();   // blocks until a task is available
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();   // stop when interrupted
        }
    }
}
WorkerThread workerThread = new WorkerThread();
workerThread.start();

void work1() throws InterruptedException {
    while (true) {
        Object data;
        // do heavy work
        workerThread.queue.put(() -> work2(data));   // blocks while the queue is full
    }
}
Using a bounded ArrayBlockingQueue keeps the number of waiting tasks small: when the queue is full, put() blocks, so work1 is throttled to the pace of work2.
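Alternatively, the standard library can express the same back-pressure idea without a hand-rolled worker thread. A minimal sketch, assuming a single worker and an illustrative queue size of 10; CallerRunsPolicy makes the thread calling execute() run the task itself whenever the queue is full, which naturally slows work1 down to work2's pace:

import java.util.concurrent.Executor;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// One worker thread (like newSingleThreadExecutor), but with a bounded backlog of 10 tasks.
Executor executor = new ThreadPoolExecutor(
        1, 1,
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<>(10),              // bounded work queue
        new ThreadPoolExecutor.CallerRunsPolicy()); // when full, the submitter runs the task itself

work1 and work2 would stay exactly as in the question; only the executor changes.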

Related

Get results of scheduled non-blocking operations in Java

I am trying to do some blocking operations (say, HTTP requests) in a scheduled, non-blocking manner. Let's say I have 10 requests and one request takes 3 seconds, but I don't want to wait the full 3 seconds before sending the next one; I want to wait only 1 second between sends. After all executions are finished, I would like to gather all the results in a list and return them to the user.
Below is a prototype of my scenario (Thread.sleep is used as the blocking operation instead of an HTTP request).
public static List<Integer> getResults(List<Integer> inputs) throws InterruptedException, ExecutionException {
    List<Integer> results = new LinkedList<Integer>();
    Queue<Callable<Integer>> tasks = new LinkedList<Callable<Integer>>();
    List<Future<Integer>> futures = new LinkedList<Future<Integer>>();

    for (Integer input : inputs) {
        Callable<Integer> task = new Callable<Integer>() {
            public Integer call() throws InterruptedException {
                Thread.sleep(3000);
                return input + 1000;
            }
        };
        tasks.add(task);
    }

    ExecutorService es = Executors.newCachedThreadPool();
    ScheduledExecutorService ses = Executors.newScheduledThreadPool(1);
    ses.scheduleAtFixedRate(new Runnable() {
        @Override
        public void run() {
            Callable<Integer> task = tasks.poll();
            if (task == null) {
                ses.shutdown();
                es.shutdown();
                return;
            }
            futures.add(es.submit(task));
        }
    }, 0, 1000, TimeUnit.MILLISECONDS);

    while (true) {
        if (futures.size() == inputs.size()) {
            for (Future<Integer> future : futures) {
                Integer result = future.get();
                results.add(result);
            }
            return results;
        }
    }
}

public static void main(String[] args) throws InterruptedException, ExecutionException {
    List<Integer> results = getResults(new LinkedList<Integer>(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)));
    System.out.println(Arrays.toString(results.toArray()));
}
I am waiting in a while loop until all tasks return a proper result. But the breaking condition is never satisfied and it loops infinitely. Whenever I add an I/O operation such as a logger, or even a breakpoint, it breaks out of the while loop and everything becomes OK.
I am relatively new to Java concurrency and am trying to understand what is happening and whether this is the correct way to do it. I guess the I/O operation triggers something in the thread scheduler and makes it check the collections' sizes.
You need to synchronize your threads. You have two different threads (the main thread and the executor service thread) accessing the futures list, and since LinkedList is not synchronized, these two threads can see two different states of futures.
while (true) {
    synchronized (futures) {
        if (futures.size() == inputs.size()) {
            ...
        }
    }
}
This happens because threads in Java are allowed to cache values (in CPU caches and registers) to improve performance, so each thread can see a different value of a variable until they are synchronized.
This SO question has more information on this.
Also from this answer:
It's all about memory. Threads communicate through shared memory, but when there are multiple CPUs in a system, all trying to access the same memory system, then the memory system becomes a bottleneck. Therefore, the CPUs in a typical multi-CPU computer are allowed to delay, re-order, and cache memory operations in order to speed things up.
That works great when threads are not interacting with one another, but it causes problems when they actually do want to interact: If thread A stores a value into an ordinary variable, Java makes no guarantee about when (or even if) thread B will see the value change.
In order to overcome that problem when it's important, Java gives you certain means of synchronizing threads. That is, getting the threads to agree on the state of the program's memory. The volatile keyword and the synchronized keyword are two means of establishing synchronization between threads.
And finally, the futures list does not appear to update in your code because the main thread is continuously occupied by the infinite while loop. Doing any I/O operation in the while loop gives the CPU enough breathing space to refresh its local cache.
An infinite busy-wait loop like this is generally a bad idea because it is very resource intensive. Adding a small delay before the next iteration makes it a little better (though it is still inefficient).
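One way to avoid the busy-wait (and the visibility question) entirely is to do the one-second staggering on the calling thread and then simply block on the futures. A minimal sketch under that assumption (the class name and the use of Thread.sleep for staggering are illustrative, not the only way to do it):

import java.util.*;
import java.util.concurrent.*;

public class StaggeredRequests {
    public static List<Integer> getResults(List<Integer> inputs) throws Exception {
        ExecutorService es = Executors.newCachedThreadPool();
        List<Future<Integer>> futures = new ArrayList<>();

        for (Integer input : inputs) {
            Future<Integer> f = es.submit(() -> {
                Thread.sleep(3000);          // stands in for the blocking HTTP call
                return input + 1000;
            });
            futures.add(f);
            Thread.sleep(1000);              // stagger submissions by one second
        }

        List<Integer> results = new ArrayList<>();
        for (Future<Integer> future : futures) {
            results.add(future.get());       // blocks until each task completes
        }
        es.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(getResults(Arrays.asList(1, 2, 3, 4, 5)));
    }
}

Since only the main thread ever touches the futures list here, no extra synchronization is needed, and the program ends when the last future completes instead of spinning.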

Java Threadpool handle unlimited queued threads?

I'm testing my server, which has a thread pool for the connections.
public class Test
{
    public static final void main(String[] args)
    {
        ThreadPoolExecutor threadPoolExecutorSentMessage = new ThreadPoolExecutor(
                Runtime.getRuntime().availableProcessors(),
                100,
                5,
                TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());

        ConnctionListener con = new ConnctionListener() // ignore this, included it for other usage.
        {
            public void onStartSendingMessages()
            {
                while (true)
                {
                    for (int i = 0; i < 50; i++)
                    {
                        threadPoolExecutorSentMessage.execute(new TestT("Message: " + i));
                    }
                }
            }
        };
        con.onStartSendingMessages();
        //new Thread(new MessageConnectionWaiter(con)).start();
    }

    private static class TestT implements Runnable
    {
        private String msg;

        public TestT(String msg)
        {
            this.msg = msg;
        }

        @Override
        public void run()
        {
            System.out.println(msg);
        }
    }
}
It's not the server code, but I'm using this code to test how the threads work.
When I start submitting an unlimited number of tasks (like many connections to my server), there is a problem: it gets stuck and nothing happens. I thought the thread pool would block new tasks until it had space available for them. Can someone tell me how to handle something like this? I tried to reduce the maximum number of threads, but it didn't fix my problem. I just want the thread pool to keep running tasks no matter how many tasks are waiting.
The problem is not the number of threads; it is the number of tasks you are adding to the work queue. You are adding tasks in an infinite loop, and the work queue has a capacity: a LinkedBlockingQueue created without an explicit bound can hold up to Integer.MAX_VALUE tasks. Tasks are only removed from the queue as the pool threads finish executing them, and your loop adds new tasks far faster than the pool can print them, so the backlog (and the memory it occupies) keeps growing and the program appears stuck.
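If you want submission to slow down automatically instead of piling up work, one option is a bounded queue together with CallerRunsPolicy, so that the submitting thread runs a task itself whenever the queue is full. A minimal sketch (queue size and task count are illustrative, and a finite loop is used so it can actually finish):

import java.util.concurrent.*;

public class BoundedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        // Bounded work queue of 1000 tasks; CallerRunsPolicy throttles the producer
        // by making it execute a task itself when the queue is full.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                cores, cores, 5, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(1000),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 100_000; i++) {
            final String msg = "Message: " + i;
            pool.execute(() -> System.out.println(msg));   // slows down under back-pressure
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}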

Why cannot `ExecutorService` consistently schedule threads?

I am attempting to reimplement my concurrent code using CyclicBarrier, which is new to me. I can do without it, but I am time-trialling it against my other solution. The problem I have is a deadlock situation with the following code:
// instance variables (fully initialised elsewhere).
private final ExecutorService exec = Executors.newFixedThreadPool(4);
private ArrayList<IListener> listeners = new ArrayList<IListener>();
private int[] playerIds;

private class WorldUpdater {
    final CyclicBarrier barrier1;
    final CyclicBarrier barrier2;
    volatile boolean anyChange;
    List<Callable<Void>> calls = new ArrayList<Callable<Void>>();

    class SyncedCallable implements Callable<Void> {
        final IListener listener;

        private SyncedCallable(IListener listener) {
            this.listener = listener;
        }

        @Override
        public Void call() throws Exception {
            listener.startUpdate();
            if (barrier1.await() == 0) {
                anyChange = processCommons();
            }
            barrier2.await();
            listener.endUpdate(anyChange);
            return null;
        }
    }

    public WorldUpdater(ArrayList<IListener> listeners, int[] playerIds) {
        barrier2 = new CyclicBarrier(listeners.size());
        barrier1 = new CyclicBarrier(listeners.size());
        for (int i : playerIds)
            calls.add(new SyncedCallable(listeners.get(i)));
    }

    void start() {
        try {
            exec.invokeAll(calls);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

void someMethodCalledEveryFrame() {
    // Calls some Fisher-something method that shuffles int[]
    shufflePIDs();
    WorldUpdater updater = new WorldUpdater(listeners, playerIds);
    updater.start();
}
I use the debugger in Android Studio (IntelliJ) to pause execution at this stage. I get multiple threads showing my await calls as the last of my code to be executed:
->Unsafe.park
->LockSupport.park
->AbstractQueuedSynchronizer$ConditionObject.await
->CyclicBarrier.doWait
->CyclicBarrier.await
At least one thread will have this stack:
->Unsafe.park.
->LockSupport.park
->AbstractQueuedSynchronizer$ConditionObject.await
->LinkedBlockingQueue.take
->ThreadPoolExecutor.getTask
->ThreadPoolExecutor.runWorker
->ThreadPoolExecutor$Worker.run
->Thread.run
I notice that the CyclicBarrier plays no part in these latter stray threads.
processCommons is calling exec.invokeAll (on the 3 listeners); I suppose this means I am running out of threads. But many times this doesn't happen, so could someone please clarify why ExecutorService cannot consistently schedule my threads? They have their own stack and program counter, so I would have thought this would not be a problem. I only ever have at most 4 running at once. Can someone help me with the math?
What is the value of listeners.size() when your WorldUpdater is created? If it is more than four, then your threads will never get past the barrier.
Your ExecutorService has exactly four threads. No more, no fewer. The callers of barrier1.await() and barrier2.await() will not get past the barrier until exactly listeners.size() threads are waiting.
My gut reaction is that it would be a mistake for pool threads to use a CyclicBarrier. CyclicBarrier is only useful when you know exactly how many threads will be using it. But when you're using a thread pool, you often do not know the size of the pool. In fact, in a real-world (i.e., commercial) application, if you're using a thread pool, it probably was not created by your code at all. It probably was created somewhere else and passed in to your code as an injected dependency.
I did a little experiment and came up with:
@Override
public Void call() throws Exception {
    System.out.println("startUpdate, Thread:" + Thread.currentThread());
    listener.startUpdate();
    if (barrier1.await() == 0) {
        System.out.println("processCommons, Thread:" + Thread.currentThread());
        anyChange = processCommons();
    }
    barrier2.await();
    System.out.println("endUpdate, Thread:" + Thread.currentThread());
    listener.endUpdate(anyChange);
    return null;
}
This revealed that, when using a pool of 3 with 3 listeners, I will always hang in processCommons, which contains the following:
List<Callable<Void>> calls = new ArrayList<Callable<Void>>();
for (IListener listener : listeners)
    calls.add(new CommonsCallable(listener));
try {
    exec.invokeAll(calls);
} catch (InterruptedException e) {
    e.printStackTrace();
}
With 2 threads waiting at the barrier and the third attempting to create 3 more, I needed one extra thread in the ExecutorService, and the 2 at the barrier could be "recycled" as I was asking in my question. I've got references to 6 threads at this stage when exec is only holding 4. This can run happily for many minutes.
private final ExecutorService exec = Executors.newFixedThreadPool(8);
Should be better, but it was not.
Finally I did breakpoint stepping in intelliJ (thanks ideaC!)
The problem is
if (barrier1.await() == 0) {
    anyChange = processCommons();
}
barrier2.await();
Between the two awaits you may get several suspended threads that haven't actually reached the await. In the case of 3 listeners out of a pool of 4, it only takes one to get "unscheduled" (or whatever) and barrier2 will never get its full complement. But what about when I have a pool of 8? The same behaviour manifests, with all but two of the threads showing the same "limbo" stack:
->Unsafe.park.
->LockSupport.park
->AbstractQueuedSynchronizer$ConditionObject.await
->LinkedBlockingQueue.take
->ThreadPoolExecutor.getTask
->ThreadPoolExecutor.runWorker
->ThreadPoolExecutor$Worker.run
->Thread.run
What can be happening here to disable all 5 threads? I should have taken James Large's advice and avoided crowbarring in this over-elaborate CyclicBarrier. UPDATE: It can now run all night without CyclicBarrier.
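Regarding that update, here is a sketch of one barrier-free structure that preserves the same three phases, assuming the IListener interface and processCommons() from the question (it illustrates the idea rather than reproducing the poster's actual fix). Because invokeAll blocks until every submitted task has completed, no CyclicBarrier is needed and the pool size no longer has to match the listener count:

// assumes java.util.*, java.util.concurrent.*, plus the question's IListener and processCommons()
void updateWorld(List<IListener> listeners, ExecutorService exec) throws InterruptedException {
    List<Callable<Void>> startCalls = new ArrayList<Callable<Void>>();
    for (IListener l : listeners)
        startCalls.add(() -> { l.startUpdate(); return null; });
    exec.invokeAll(startCalls);                  // phase 1: all startUpdate() calls finish here

    final boolean anyChange = processCommons();  // phase 2: runs once, on the calling thread

    List<Callable<Void>> endCalls = new ArrayList<Callable<Void>>();
    for (IListener l : listeners)
        endCalls.add(() -> { l.endUpdate(anyChange); return null; });
    exec.invokeAll(endCalls);                    // phase 3: all endUpdate() calls finish here
}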

Java threads - High cpu utilization?

I have two threads. I am invoking one (the SocketThread) first, and from that one I am invoking another thread (the ProcessThread). My issue is that during execution the CPU usage is 50%. It drops to 0% when I add TimeUnit.NANOSECONDS.sleep(1) in the ProcessThread run method. Is this the right way to fix it? Or is there any advice in general for reducing the CPU utilization?
Below is my code:
public class SocketThread extends Thread {
    private Set<Object> setSocketOutput = new HashSet<Object>(1, 1);
    private BlockingQueue<Set<Object>> bqSocketOutput;
    ProcessThread pThread;

    @Override
    public void run() {
        pThread = new ProcessThread(bqSocketOutput);
        pThread.start();
        for (long i = 0; i <= 30000; i++) {
            System.out.println("SocketThread - Testing" + i);
        }
    }
}

public class ProcessThread extends Thread {

    public ProcessThread(BlockingQueue<Set<Object>> bqTrace) {
        System.out.println("ProcessThread - Constructor");
    }

    @Override
    public void run() {
        System.out.println("ProcessThread - Execution");
        while (true) {
            /*
            try {
                TimeUnit.NANOSECONDS.sleep(1);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
            */
        }
    }
}
You can "reduce the CPU utilization" by sleeping threads, but that means that you are not getting any work done on those threads (or if you are, it's getting done dramatically slower than if you just let the threads run full-out). It's like saying you can reduce fuel consumption in your car by stopping every few miles and turning the engine off.
Typically, a while(true) loop that runs without some sort of blocking thread synchronization (like a wait(), or in your situation a BlockingQueue.take() as Martin-James suggests) is a code smell and indicates code that should be refactored.
If your worker thread waits on the blocking queue by calling take() it should not consume CPU resources.
If it's in a tight loop that does nothing (as the code in the example suggests), then of course it will consume resources. Calling sleep inside a loop as a limiter probably isn't the best idea.
You have two tight loops that will hog the CPU as long as there's work to be done. Sleeping is one way to slow down your application (and decrease CPU utilization), but it's rarely, if ever, the desired result.
If you insist on sleeping, you need to increase your sleep times to be at least 20 milliseconds and tune from there. You can also look into sleeping after a batch of tasks. You'll also need a similar sleep in the SocketThread print loop.
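For ProcessThread specifically, here is a sketch of what a blocking version might look like, built on the BlockingQueue already passed into its constructor (the process() helper is a placeholder for whatever the real thread does with the data):

import java.util.Set;
import java.util.concurrent.BlockingQueue;

public class ProcessThread extends Thread {
    private final BlockingQueue<Set<Object>> queue;

    public ProcessThread(BlockingQueue<Set<Object>> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Set<Object> batch = queue.take();   // sleeps inside take() until work arrives, ~0% CPU
                process(batch);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();     // restore the interrupt flag and exit
        }
    }

    private void process(Set<Object> batch) {
        System.out.println("ProcessThread - processing " + batch.size() + " items");
    }
}

The thread consumes no CPU while the queue is empty, and the SocketThread's print loop should similarly be replaced by real work or a blocking read rather than a sleep.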

Java - Basic Multithreading

I would like to ask a basic question about Java threads. Let's consider a producer-consumer scenario. Say there is one producer and n consumers. Consumers arrive at random times, and once they are served they go away, meaning each consumer runs on its own thread. Should I still use a run-forever condition for the consumer?
public class Consumer extends Thread {
    public void run() {
        while (true) {
        }
    }
}
Won't this keep the thread running forever?
I wouldn't extend Thread; instead I would implement Runnable.
If you want the thread to run forever, I would have it loop forever.
A common alternative is to use
while(!Thread.currentThread().isInterrupted()) {
or
while(!Thread.interrupted()) {
It will, so you might want to do something like
while (beingServed)
{
    // check if the customer is done being served (set beingServed to false)
}
This way you'll escape the loop when it's meant to die.
Why not use a boolean that represents the presence of the Consumer?
public class Consumer extends Thread {
    private volatile boolean present;

    public Consumer() {
        present = true;
    }

    public void run() {
        while (present) {
            // Do Stuff
        }
    }

    public void consumerLeft() {
        present = false;
    }
}
First, you can create a thread for each consumer; after the consumer finishes its job, the run method returns and the thread dies, so there is no need for an infinite loop. However, creating a thread per consumer is not a good idea, since thread creation is quite expensive from a performance point of view; threads are very expensive resources. In addition, I agree with the answers above that it is better to implement Runnable rather than extend Thread; extend Thread only when you wish to customize the thread itself.
I strongly suggest you use a thread pool, with the consumer as the Runnable object that is run by a thread in the pool.
The code should look like this:
public class ConsumerMgr {
    int poolSize = 2;
    int maxPoolSize = 2;
    long keepAliveTime = 10;
    ThreadPoolExecutor threadPool = null;
    final ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(5);

    public ConsumerMgr() {
        threadPool = new ThreadPoolExecutor(poolSize, maxPoolSize,
                keepAliveTime, TimeUnit.SECONDS, queue);
    }

    public void runTask(Runnable task) {
        // System.out.println("Task count.." + threadPool.getTaskCount());
        // System.out.println("Queue Size before assigning the task.." + queue.size());
        threadPool.execute(task);
        // System.out.println("Queue Size after assigning the task.." + queue.size());
        // System.out.println("Pool Size after assigning the task.." + threadPool.getActiveCount());
        // System.out.println("Task count.." + threadPool.getTaskCount());
        System.out.println("Task count.." + queue.size());
    }
}
It is not a good idea to extend Thread (unless you are coding a new kind of thread, i.e. never).
The best approach is to pass a Runnable to the Thread's constructor, like this:
public class Consumer implements Runnable {
    public void run() {
        while (true) {
            // Do something
        }
    }
}

new Thread(new Consumer()).start();
In general, while(true) is OK, but you have to handle being interrupted, either by normal wake or by spurious wakeup. There are many examples out there on the web.
I recommend reading Java Concurrency in Practice.
For the producer-consumer pattern, you are better off using wait() and notify(). See this tutorial. This is far more efficient than a while(true) busy loop.
If you want your threads to process messages until you kill them (or they are killed in some way), then inside the while (true) there should be some synchronized call to your producer thread (or a blocking queue, or queuing system) that blocks until a message becomes available. Once a message is consumed, the loop restarts and waits again.
If you want to manually instantiate a bunch of threads that each pull a message from a producer just once and then die, don't use while (true).
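To tie the answers together, here is a minimal producer/consumer sketch built on a BlockingQueue, using a "poison pill" value to end the consumer cleanly instead of an unconditional while (true) (the class name and pill value are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    private static final String POISON_PILL = "STOP";

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String item = queue.take();           // blocks until an item is available
                    if (POISON_PILL.equals(item)) break;  // clean shutdown, no busy loop
                    System.out.println("Consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        for (int i = 0; i < 5; i++) {
            queue.put("item-" + i);                       // blocks if the queue is full
        }
        queue.put(POISON_PILL);
        consumer.join();
    }
}

The consumer sleeps inside take() while the queue is empty, so it uses no CPU, and it exits as soon as the producer signals that there is nothing more to serve.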
