Response time increases as concurrency increases in Java

There is a standalone Java program I have written which involves one recursive function that is called at least 60 thousand times. When I run this program as a single thread it takes approximately 2 seconds. However, when I spawn 25 parallel threads the response time increases to 10 seconds, where each thread starts at the same time and all of them also end at approximately the same time, after 10-15 seconds.
I ran a profiler (jvisualvm) to check whether any thread was blocked on a lock, but all 25 show in the running state at all times.
Hence my questions are:
1. In concurrency, response time may increase (depending on the cores on the system); however, if the machine on which the test runs is a 6-core machine, shouldn't at least 6 threads finish in about 2 seconds or a little more? Not all of them should complete processing at the same time, i.e. after 10 seconds?
2. How do I detect what is causing the problem in parallel processing?
Appreciate response on the same.
Thank you.

In concurrency, response time may increase (depending on the cores on the system); however, if the machine on which the test runs is a 6-core machine, shouldn't at least 6 threads finish in about 2 seconds or a little more?
In theory this is possible, but it depends entirely on what you are doing. If you are CPU-bound and you have a problem which naturally parallelises, you can get a 6x improvement with 6 cores. If you have a network-IO-bound process, then 100 threads can be 100x faster with just 6 cores. However, if you have a task where the overhead is too high, you can be orders of magnitude slower with multiple threads than with just one.
Not all should complete processing at the same time ie. after 10 seconds?
That is highly unlikely, but possible.
How do i detect what is it that is causing problem in parallel processing?
If using multiple threads is slower than using one thread, I suggest you look at how you are breaking up the problem and make sure you don't have more overhead than useful work.

This is a pretty broad question. So I'm going to give a pretty broad answer.
In concurrency, response time may increase (depending on the cores on the system); however, if the machine on which the test runs is a 6-core machine, shouldn't at least 6 threads finish in about 2 seconds or a little more?
It completely depends on the type of work being done. With N cores, creating N threads involves non-trivial overhead, as does the parent process managing I/O between them. Multiple threads may help or hurt. Multithreading is usually a good bet if you have tasks that perform a lot of off-CPU I/O, or long-running calculations on data that is not shared between threads.
How do i detect what is it that is causing problem in parallel processing?
This can be tricky. Use a good debugger that can pause all threads at a breakpoint. Look for heavily shared resources between threads, excessive or unnecessary synchronization, or long wait times. There are many possible causes.

My response time improved by warming up the threads. A lot of time was spent on thread creation, which will improve when the code runs in an app server (like Tomcat).
Another noticeable effect was that after the first run, the second run was slower, which makes me believe it has to do with the way classes are loaded by the JVM at runtime (apart from thread warm-up).
Thank you all for your inputs.
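The OP's finding that thread creation dominated the run time can be checked by reusing threads through a pool and timing only a warmed-up run. A minimal sketch, where `work` is a hypothetical stand-in for the OP's recursive function:

```java
import java.util.concurrent.*;

public class PoolWarmup {
    // Hypothetical stand-in for the OP's recursive function (Fibonacci-style).
    static long work(int n) {
        return n <= 1 ? 1 : work(n - 1) + work(n - 2);
    }

    static void runAll(ExecutorService pool, int tasks) throws Exception {
        CompletionService<Long> cs = new ExecutorCompletionService<>(pool);
        for (int i = 0; i < tasks; i++) cs.submit(() -> work(30));
        for (int i = 0; i < tasks; i++) cs.take().get();   // wait for all results
    }

    public static void main(String[] args) throws Exception {
        int tasks = 25;
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        // Warm-up run: pays the one-time cost of thread creation and JIT compilation.
        runAll(pool, tasks);

        long t0 = System.nanoTime();
        runAll(pool, tasks);        // measured run reuses the already-created threads
        System.out.println("warm run took "
                + (System.nanoTime() - t0) / 1_000_000 + " ms");

        pool.shutdown();
    }
}
```

Comparing the first and second `runAll` timings separates the per-thread setup cost from the actual computation, which is essentially what an app server's pooled threads do for you.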

Related

Optimal number of threads [duplicate]

Let's say I have a 4-core CPU, and I want to run some process in the minimum amount of time. The process is ideally parallelizable, so I can run chunks of it on an infinite number of threads and each thread takes the same amount of time.
Since I have 4 cores, I don't expect any speedup by running more threads than cores, since a single core is only capable of running a single thread at a given moment. I don't know much about hardware, so this is only a guess.
Is there a benefit to running a parallelizable process on more threads than cores? In other words, will my process finish faster, slower, or in about the same amount of time if I run it using 4000 threads rather than 4 threads?
If your threads don't do I/O, synchronization, etc., and there's nothing else running, 1 thread per core will get you the best performance. However, that is very likely not the case. Adding more threads usually helps, but after some point they cause performance degradation.
Not long ago, I was doing performance testing on a 2 quad-core machine running an ASP.NET application on Mono under a pretty decent load. We played with the minimum and maximum number of threads and in the end we found out that for that particular application in that particular configuration the best throughput was somewhere between 36 and 40 threads. Anything outside those boundaries performed worse. Lesson learned? If I were you, I would test with different number of threads until you find the right number for your application.
One thing for sure: 4k threads will take longer. That's a lot of context switches.
I agree with @Gonzalo's answer. I have a process that doesn't do I/O, and here is what I've found:
Note that all threads work on one array but different ranges (two threads do not access the same index), so the results may differ if they've worked on different arrays.
The 1.86 machine is a macbook air with an SSD. The other mac is an iMac with a normal HDD (I think it's 7200 rpm). The windows machine also has a 7200 rpm HDD.
In this test, the optimal number was equal to the number of cores in the machine.
I know this question is rather old, but things have evolved since 2009.
There are two things to take into account now: the number of cores, and the number of threads that can run within each core.
With Intel processors, the number of threads per core is defined by Hyper-Threading, which is just 2 (when available). But Hyper-Threading shares each core between two threads, so each thread can run at roughly half speed when both are busy (i.e. one pipeline shared between two processes -- this is good when you have more processes, not so good otherwise; more cores are definitely better!). Note that modern CPUs generally have more pipelines to divide the workload, so it's not really divided by two anymore. But Hyper-Threading still shares a lot of the CPU units between the two threads (some call those logical CPUs).
On other processors you may have 2, 4, or even 8 threads. So if you have 8 cores each of which support 8 threads, you could have 64 processes running in parallel without context switching.
"No context switching" is obviously not true if you run with a standard operating system which will do context switching for all sorts of other things out of your control. But that's the main idea. Some OSes let you allocate processors so only your application has access/usage of said processor!
From my own experience: if you have a lot of I/O, multiple threads are good. If you have very heavy memory-intensive work (read source 1, read source 2, fast computation, write) then having more threads doesn't help. Again, this depends on how much data you read/write simultaneously (e.g. if you use SSE 4.2 and read 256-bit values, that stalls all threads in their step). In other words, 1 thread is probably a lot easier to implement and probably nearly as fast, if not actually faster. This will also depend on your process and memory architecture; some advanced servers manage separate memory ranges for separate cores, so separate threads will be faster assuming your data is properly partitioned, which is why, on some architectures, 4 processes will run faster than 1 process with 4 threads.
The answer depends on the complexity of the algorithms used in the program. I came up with a method to calculate the optimal number of threads by making two measurements of processing time, Tn and Tm, for two arbitrary numbers of threads n and m. For linear algorithms, the optimal number of threads is N = sqrt( m*n*(Tm*(n-1) - Tn*(m-1)) / (n*Tn - m*Tm) ).
Please read my article regarding calculations of the optimal number for various algorithms: pavelkazenin.wordpress.com
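The formula above can be evaluated directly. A sketch with hypothetical measurements plugged in (the timing values are illustrative, not from the article):

```java
public class OptimalThreads {
    // Estimate the optimal thread count from two timing measurements
    // tn and tm (seconds) taken with n and m threads, using the formula
    // from the answer above (stated for linear algorithms).
    static double optimal(int n, double tn, int m, double tm) {
        double numerator = m * n * (tm * (n - 1) - tn * (m - 1));
        double denominator = n * tn - m * tm;
        return Math.sqrt(numerator / denominator);
    }

    public static void main(String[] args) {
        // Hypothetical measurements: 2 threads took 6.0 s, 8 threads took 2.5 s.
        System.out.println("estimated optimal thread count: "
                + optimal(2, 6.0, 8, 2.5));
    }
}
```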
The actual performance will depend on how much voluntary yielding each thread will do. For example, if the threads do NO I/O at all and use no system services (i.e. they're 100% cpu-bound) then 1 thread per core is the optimal. If the threads do anything that requires waiting, then you'll have to experiment to determine the optimal number of threads. 4000 threads would incur significant scheduling overhead, so that's probably not optimal either.
I thought I'd add another perspective here. The answer depends on whether the question is assuming weak scaling or strong scaling.
From Wikipedia:
Weak scaling: how the solution time varies with the number of processors for a fixed problem size per processor.
Strong scaling: how the solution time varies with the number of processors for a fixed total problem size.
If the question is assuming weak scaling then @Gonzalo's answer suffices. However, if the question is assuming strong scaling, there's something more to add. In strong scaling you assume a fixed workload size, so if you increase the number of threads, the size of the data that each thread needs to work on decreases. On modern CPUs memory accesses are expensive, and it is preferable to maintain locality by keeping the data in caches. Therefore, the likely optimal number of threads can be found where the dataset of each thread fits in each core's cache (I'm not going into the details of whether it's the L1/L2/L3 cache(s) of the system).
This holds true even when the number of threads exceeds the number of cores. For example assume there's 8 arbitrary unit (or AU) of work in the program which will be executed on a 4 core machine.
Case 1: run with four threads where each thread needs to complete 2AU. Each thread takes 10s to complete (with a lot of cache misses). With four cores the total amount of time will be 10s (10s * 4 threads / 4 cores).
Case 2: run with eight threads where each thread needs to complete 1AU. Each thread takes only 2s (instead of 5s because of the reduced amount of cache misses). With four cores the total amount of time will be 4s (2s * 8 threads / 4 cores).
I've simplified the problem and ignored overheads mentioned in other answers (e.g., context switches), but I hope you get the point that it might be beneficial to have more threads than the available number of cores, depending on the data size you're dealing with.
4000 threads at one time is pretty high.
The answer is yes and no. If you are doing a lot of blocking I/O in each thread, then yes, you could see significant speedups with up to probably 3 or 4 threads per logical core.
If you are not doing a lot of blocking things however, then the extra overhead with threading will just make it slower. So use a profiler and see where the bottlenecks are in each possibly parallel piece. If you are doing heavy computations, then more than 1 thread per CPU won't help. If you are doing a lot of memory transfer, it won't help either. If you are doing a lot of I/O though such as for disk access or internet access, then yes multiple threads will help up to a certain extent, or at the least make the application more responsive.
Benchmark.
I'd start ramping up the number of threads for an application, starting at 1, and then go to something like 100, run three-five trials for each number of threads, and build yourself a graph of operation speed vs. number of threads.
You should find that the four-thread case is optimal, with slight rises in runtime after that, but maybe not. It may be that your application is bandwidth-limited, i.e., the dataset you're loading into memory is huge, you're getting lots of cache misses, etc., such that 2 threads are optimal.
You can't know until you test.
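A minimal sketch of that ramp-up benchmark; `burn` is a placeholder CPU-bound task to be swapped for your real workload, and each thread runs one copy of it, so what you plot is throughput vs. thread count:

```java
import java.util.concurrent.*;

public class ThreadRamp {
    // Placeholder CPU-bound task; replace with your real workload.
    static double burn() {
        double x = 0;
        for (int i = 1; i < 2_000_000; i++) x += Math.sqrt(i);
        return x;
    }

    public static void main(String[] args) throws Exception {
        for (int threads = 1; threads <= 16; threads *= 2) {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            long t0 = System.nanoTime();
            Future<?>[] fs = new Future<?>[threads];
            for (int i = 0; i < threads; i++) fs[i] = pool.submit(ThreadRamp::burn);
            for (Future<?> f : fs) f.get();            // wait for every task
            long ms = (System.nanoTime() - t0) / 1_000_000;
            // tasks completed per unit time is the number to graph
            System.out.println(threads + " threads: " + ms + " ms total");
            pool.shutdown();
        }
    }
}
```

Run each thread count three to five times, as the answer suggests, and average the results before graphing; single runs are noisy because of JIT warm-up and OS scheduling.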
You can find out how many threads are running on your machine with htop or with the ps command, which returns the number of processes on your machine.
You can consult the man page for the ps command:
man ps
If you want to count all users' processes, you can use one of these commands:
ps -aux| wc -l
ps -eLf | wc -l
Counting a particular user's processes:
ps --User root | wc -l
Also, you can use htop:
Installing on Ubuntu or Debian:
sudo apt-get install htop
Installing on Redhat or CentOS:
yum install htop
dnf install htop [On Fedora 22+ releases]
If you want to compile htop from source code, you will find it here.
The ideal is 1 thread per core, as long as none of the threads will block.
One case where this may not be true: there are other threads running on the core, in which case more threads may give your program a bigger slice of the execution time.
One example of lots of threads ("thread pool") vs one per core is that of implementing a web-server in Linux or in Windows.
Since sockets are polled in Linux a lot of threads may increase the likelihood of one of them polling the right socket at the right time - but the overall processing cost will be very high.
In Windows the server will be implemented using I/O Completion Ports - IOCPs - which will make the application event driven: if an I/O completes the OS launches a stand-by thread to process it. When the processing has completed (usually with another I/O operation as in a request-response pair) the thread returns to the IOCP port (queue) to wait for the next completion.
If no I/O has completed there is no processing to be done and no thread is launched.
Indeed, Microsoft recommends no more than one thread per core in IOCP implementations. Any I/O may be attached to the IOCP mechanism. Completions may also be posted by the application itself, if necessary.
Speaking from a computation- and memory-bound point of view (scientific computing), 4000 threads will make the application run really slowly. Part of the problem is the very high overhead of context switching and, most likely, very poor memory locality.
But it also depends on your architecture. From what I've heard, Niagara processors are supposed to be able to handle multiple threads on a single core using some kind of advanced pipelining technique. However, I have no experience with those processors.
Check the CPU and memory utilization and set a threshold value. If the threshold is crossed, don't allow new threads to be created; otherwise, allow it. Hope this makes sense.

How do I compensate for not having a "quiet" machine when benchmarking my Java application?

I run numerical simulations all the time. I can tell if my simulations don't work (i.e., they fail to give acceptable answers), but because I typically run a variable number of these on designated cores running in the background (as I work), looking at clock time tells me less than nothing about how quickly they ran.
I don't want clock time; I want CPU time. None of the articles seems to mention this little aspect. In particular, the recommendation to use a "quiet" machine seems to blur what's being measured.
I don't need a great deal of detail, I just want to know that simulation A runs about 15% faster or slower than simulation B or C, despite the fact that A ran by itself for a while, and then I started B, followed by C. And maybe I played for a little while before retiring, which would run a higher-priority application for part of that time. Don't tell me that ideally I should use a "quiet" machine; my question specifically asks how to do benchmarking without a dedicated machine for this. I also do not wish to kill the efficiency of my applications while measuring how long they take to run; it seems that significant overhead would only be required when a great deal of detail is needed. Am I right?
I want to modify my applications so that when I check whether a batch job succeeds, I can also see how long it took to reach these results in CPU time. Can benchmarking give me the answers I'm looking for? Can I simply use Java 9's benchmarking harness, or do I need something else?
You can measure CPU time instead of wall-clock time from outside the JVM easily enough on most OSes, e.g. time java -jar foo.jar on Unix/Linux, or even perf stat java -jar foo.jar on Linux.
The biggest problem with this is that some workloads have more parallelism than others. Consider this simple example. It's unrealistic, but the math works the same for real programs that alternate between more-parallel and less-parallel phases.
version A is purely serial for 9 minutes, and keeps 8 cores saturated for 1 minute. Wall-clock time = 10 minutes, CPU time = 17 minutes
version B is serial for 1 minute, and keeps all 8 cores busy for 5 minutes. Wall time = 6 minutes, CPU time = 5*8 + 1 = 41 minutes
If you were just looking at CPU time, you wouldn't know which version was stuck on an inherently serial portion of its work. (And this is assuming purely CPU-bound, no I/O waiting.)
For two similar implementations that are both mostly serial, though, CPU time and wall time could give you a reasonable guess.
But modern JVMs like HotSpot use multi-threaded garbage-collection, so even if your own code never starts multiple threads, one version that makes the GC do more work can use more CPU time but still be faster. That might be rare, though.
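As a rough in-JVM complement to external `time`, per-thread CPU time can be sampled with `ThreadMXBean`. Note that this counts only the calling thread, so the GC and JIT threads mentioned above are excluded; a sketch, assuming CPU-time measurement is supported on your platform:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuTimeDemo {
    // CPU time consumed by the calling thread while running r, in milliseconds.
    // Only the current thread is counted -- GC/JIT work is invisible here.
    static long cpuMillis(Runnable r) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long cpu0 = mx.getCurrentThreadCpuTime();   // nanoseconds of CPU time
        r.run();
        return (mx.getCurrentThreadCpuTime() - cpu0) / 1_000_000;
    }

    public static void main(String[] args) {
        long wall0 = System.nanoTime();
        long cpu = cpuMillis(() -> {
            double x = 0;                            // CPU-bound busy work
            for (int i = 1; i < 5_000_000; i++) x += Math.sqrt(i);
        });
        long wall = (System.nanoTime() - wall0) / 1_000_000;
        System.out.println("cpu=" + cpu + " ms, wall=" + wall + " ms");
    }
}
```

For a single-threaded CPU-bound section, cpu and wall should be close; a large gap indicates waiting (I/O, scheduling) or work done on other threads.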
Another confounding factor: contention for memory bandwidth and cache footprint will mean that it takes more CPU time to do the same work, because your code will spend more time waiting for memory.
And with HyperThreading or other SMT cpu architectures (like Ryzen) where one physical core can act as multiple logical cores, having both logical cores active increases total throughput at the cost of lower per-thread performance.
So 1 minute of CPU time on a core where the HT sibling is idle can get more work done than when the other logical core was also active.
With both logical cores active, a modern Skylake or Ryzen might give you somewhere from 50 to 99% of the single-thread performance of having all the execution resources available for a single core, completely dependent on what the code is running on each thread. (If both bottleneck on latency of FP add and multiply with very long loop-carried dependency chains that out-of-order execution can't see past, e.g. both summing very large arrays in order with strict FP, that's the best case for HT. Neither thread will slow the other down, because FP add throughput is 3 to 8x FP add latency.)
But in the worst case, if both tasks slow down a lot from L1d cache misses, HT can even lose throughput by running both at once on the same core, vs. running one and then the other.

difference between 96 threads and 6500 threads?

I recently read in a post that I can run 96 threads. However, when I look at my PC's performance, and posts like this, I see thousands of threads being able to run.
I assume the word thread is being used for two different things here (correct me if I'm wrong and explain). I want to know what the difference between the two "thread"s is. Why is one post saying 96 but another saying 6500?
The answer is: both links "talk" about the same kind of thread.
The major difference is that the first is effectively asking about the number of threads that a given CPU can really execute in parallel.
The other link talks about how many threads can coexist within a certain scope (for example, one JVM).
In other words: the major "idea" behind threads is that most of the time, they are idle! So having 6400 threads can work out, assuming that your workload is such that 99.9% of the time, each thread is just doing nothing (like waiting for something to happen). But of course such a high number is probably not a good idea, unless we are talking about a really huge server with zillions of cores to work with. One has to keep in mind that threads are also a resource, owned by the operating system, and many problems that you solved using "more threads" in the past now have different answers (like using the nio packages and non-blocking I/O instead of having zillions of threads waiting for responses, for example).
Meaning: when you write example code where each thread just computes something (so, if run alone, that thread would consume 100% of the available CPU cycles) - then adding more threads just creates more load on the system.
Typically, a modern-day CPU has c cores, and each core can run t threads in parallel. So you often get something like 4 x 2 threads that can occupy the CPU in parallel. But as soon as your threads spend more time doing nothing (waiting for a disk read or a network request to come back), you can easily create, manage, and utilize hundreds or even thousands of threads.

How to decide the suitable number of threads to create in java?

I have a java application that creates SSL socket with remote hosts. I want to employ threads to hasten the process.
I want the maximum possible utilization that does not affect the program performance. How can I decide the suitable number of threads to use? After running the following line: Runtime.getRuntime().availableProcessors(); I got 4. My processor is an Intel core i7, with 8 GB RAM.
If you have 4 cores, then in theory you should have exactly four worker threads going at any given time for maximum optimization. Unfortunately, what happens in theory never happens in practice. You may have worker threads that, for whatever reason, have significant amounts of downtime. Perhaps they're hitting the web for more data, or reading from a disk, much of which is just waiting and not utilizing the CPU.
Depending on how much waiting you're doing, you'll want to bump up the number. The cost of increasing the number of threads is more context switching and competition for resources. The benefit is that you'll have another thread ready to work in case one of the others decides it has to wait for something.
Your best bet is to set it to something (let's start with 4) and work your way up. Profile your code with each setting, and see if your benchmarks go up or down. You should soon see a pattern and a break-even point.
When it comes to optimization, you can theorize all you want about what should be the fastest, but you won't beat actually running and timing your code to truly answer this question.
As DarthVader said, you can use a ThreadPool (CachedThreadPool). With this construct you don't have to specify a concrete number of threads.
From the oracle site:
The newCachedThreadPool method creates an executor with an expandable thread pool. This executor is suitable for applications that launch many short-lived tasks.
Maybe thats what you are looking for.
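A minimal sketch of the cached-pool approach described in the quote above; the pool creates threads on demand and reuses idle ones, so no thread count needs to be chosen up front:

```java
import java.util.concurrent.*;

public class CachedPoolDemo {
    public static void main(String[] args) throws Exception {
        // newCachedThreadPool grows on demand and reuses idle threads,
        // which suits many short-lived tasks like brief SSL handshakes.
        ExecutorService pool = Executors.newCachedThreadPool();
        Future<Integer> f = pool.submit(() -> 2 + 2);   // a short-lived task
        System.out.println(f.get());                    // prints 4
        pool.shutdown();
    }
}
```

One caveat: because a cached pool is unbounded, a flood of slow tasks can create a very large number of threads; a fixed-size pool is safer when tasks may block for long periods.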
About the number of cores, it is hard to say. You have 4 cores with hyperthreading; you should leave at least one core for your OS. I would say 4-6 threads.

Unable to achieve load testing on EJB subject using threading

I posted this question asking why 100,000 run() calls are faster compared to 100,000 start() calls, and found that despite multithreading, 100,000 start() calls would actually take longer than 100,000 run() calls because of the thread-management overhead.
Actually, I was trying to spawn 100,000 threads to simulate a load on an EJB method I wish to test, and it seems that is not possible this way. Is there a way I could achieve this? Or would I need multiple machines to achieve that load?
Is it true that if I have a quad-core PC, I should only spawn at most 4 threads at a time to prevent too-heavy context switching, since at any one time only 4 threads can run?
If you have 4 cores which support hyper-threading, you can only actually run 8 threads at once. You can start more threads than this, but only 8 can be active at any one time. This is a limitation of the hardware you are using.
I very much doubt you need to run 10K or 100K threads to test any system. Most systems can be saturated with work with just one thread (or a very small number) and I suspect your EJB is no exception.
You cannot test whether a method is thread-safe via brute-force testing. You can only determine this by reading the code.
You might find this article interesting Java: What is the limit to the number of threads you can create?
100,000 is definitely too many. If your thread is CPU-intensive then only 1 thread per CPU/core should be enough. On the other hand, if your thread is not CPU-intensive, you can have more than 1 thread per CPU/core. You should run more tests to find the optimal number of threads.
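As the answers suggest, the usual way to generate such a load without 100,000 OS threads is a small, bounded worker pool fed with many queued tasks. A sketch, where callEjbMethod is a hypothetical placeholder for the bean call under test:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadDriver {
    // Drive `total` calls through a bounded pool of `concurrency` workers.
    // Concurrency -- not total call count -- is what determines the load level.
    static int runLoad(int concurrency, int total) throws InterruptedException {
        AtomicInteger done = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        for (int i = 0; i < total; i++) {
            pool.submit(() -> {
                // callEjbMethod();   // hypothetical: invoke the EJB under test
                done.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed " + runLoad(50, 100_000) + " calls");
    }
}
```

Ramping `concurrency` up while watching throughput and response times finds the saturation point of the EJB; a single machine usually reaches it well before thread counts become a problem.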
