Does downloading with multiple threads actually speed things up? - java

So, I was starting up Minecraft a few days ago and opened up its developer console to see what it was doing while it was updating itself. I noticed one of the lines said the following:
Downloading 32 files. (16 threads)
Now, the first thing that came to mind was: the processor can still only do one thing at a time; all threads do is split their tasks up and share the CPU time between them. So what would the purpose be of downloading multiple files on multiple threads if each thread is still only being run on a single processor?
Then, in the process of deciding whether or not I should ask this question on SO, I remembered that multiple cores can reside on one processor. For example, my processor is quad-core. So you can actually accomplish 4 downloads truly simultaneously. Now that sounds like it makes sense. Except for the fact that there are 16 threads being used for Minecraft's download. So, basically my question is:
Does increasing the number of threads during a download help the speed at all? (Assuming a multi-core processor, and the thread count is less than the core count.)
And
If you increase the number of threads to past the number of cores, does speed still increase? (It sounds to me like the downloads would be max-speed after 4 threads, on a quad-core processor.)

Downloads are network-bound, not CPU-bound. So theoretically, using multiple threads will not make it faster.
On the one hand, if your program downloads using synchronous (blocking) I/O, then multiple threads simply allow the downloads to overlap instead of each one waiting its turn. On the other hand, it is generally more sensible to just use a single thread with asynchronous I/O.
On the gripping hand, asynchronous I/O is trickier to code correctly than synchronous I/O (which is straightforward). So the developers may have just decided to favour ease of programming over pure performance. (Or they may favour compatibility with older Java platforms: real async I/O is only available with NIO2 (which came with Java 7).)
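For illustration, here is a minimal sketch of the blocking, thread-per-download approach (the simpler option those developers probably chose). Everything specific in it is made up: the URLs, the file names, and the pool size of 16 are just placeholders.

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PooledDownloader {
    public static void main(String[] args) throws InterruptedException {
        // Hypothetical file list; a real launcher would read this from a manifest.
        List<String> urls = Arrays.asList(
                "https://example.com/files/a.jar",
                "https://example.com/files/b.jar");

        // 16 threads doing blocking I/O: each thread sits inside read()
        // while its download is in flight, so the waits overlap.
        ExecutorService pool = Executors.newFixedThreadPool(16);
        for (String u : urls) {
            pool.submit(() -> {
                String name = u.substring(u.lastIndexOf('/') + 1);
                try (InputStream in = new URL(u).openStream()) {
                    Files.copy(in, Paths.get(name), StandardCopyOption.REPLACE_EXISTING);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }
}
```

Each task parks inside a blocking read while its bytes are in flight, which is exactly why 16 such threads can still make progress on a 4-core machine.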

When one thread downloads one file, it will spend some time waiting. When one thread downloads N files, one after another, it will spend, on average, N times as much total wait time.
When N threads each download one file, each of those threads will spend some time waiting, but some of those waits will be overlapped (e.g., thread A and thread B are both waiting at the same time.) The end result is that it may take less wall-clock time to get all N of the files.
On the other hand, if the threads are waiting for files from the same server, each thread's individual wait time may be longer.
The question of whether or not there is an overall performance benefit depends on the client, on the server, and on the available network bandwidth. If the network can't carry bytes as fast as the server can pump them out, then multi-threading the client probably won't save any time. If the server is single-threaded, then multi-threading the client definitely won't help. But if the conditions are right (e.g., you have a fast internet connection, and especially if the files are coming from a server farm instead of a single machine), then multi-threading can potentially speed things up.
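If you want to see that overlap for yourself, a rough sketch like the one below (with hypothetical URLs and a throwaway fetch helper) times the same downloads sequentially and then through a small pool:

```java
import java.io.InputStream;
import java.net.URL;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class SequentialVsParallel {
    // Blocking fetch that discards the bytes; enough to measure wait time.
    static void fetch(String u) {
        try (InputStream in = new URL(u).openStream()) {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) { /* discard */ }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> urls = Arrays.asList(          // hypothetical URLs
                "https://example.com/1", "https://example.com/2",
                "https://example.com/3", "https://example.com/4");

        long t0 = System.nanoTime();
        urls.forEach(SequentialVsParallel::fetch);   // the waits add up
        long sequential = System.nanoTime() - t0;

        ExecutorService pool = Executors.newFixedThreadPool(urls.size());
        long t1 = System.nanoTime();
        pool.invokeAll(urls.stream()                 // the waits overlap
                .map(u -> (Callable<Void>) () -> { fetch(u); return null; })
                .collect(Collectors.toList()));
        long parallel = System.nanoTime() - t1;
        pool.shutdown();

        System.out.printf("sequential %d ms, parallel %d ms%n",
                sequential / 1_000_000, parallel / 1_000_000);
    }
}
```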

Normally it will not be faster, but there are always exceptions.
Assuming that you open a new connection for each download thread, it can be faster if:
The network (either your own network or the target system) is limiting the download speed per connection, or
You are downloading from multiple servers, etc.
Or, if the "download" is not a plain download, but involves downloading something and doing some CPU-intensive processing on it.
In such cases you may see the download speed increase when using multiple threads.

Related

Difference between 96 threads and 6500 threads?

I recently read in a post that I can run 96 threads. However, when I look at my PC's performance, and posts like this, I see thousands of threads being able to run.
I assume the word thread is being used for two different things here (correct me if I'm wrong and explain). I want to know what the difference between the two "thread"s is. Why is one post saying 96 but another saying 6500?
The answer is: both posts are "talking" about the same kind of thread.
The major difference is: the first is effectively asking about the number of threads that a given CPU can really execute in parallel.
The other link talks about how many threads can coexist within a certain scope (for example, one JVM).
In other words: the major "idea" behind threads is that ... most of the time, they are idle! So having 6500 threads can work out - assuming that your workload is such that 99.9% of the time, each thread is just doing nothing (like: waiting for something to happen). But of course: such a high number is probably not a good idea, unless we are talking about a really huge server that has zillions of cores to work with. One has to keep in mind that threads are also a resource, owned by the operating system, and many problems that you solved using "more threads" in the past now have different answers (like using the nio packages and non-blocking IO instead of having zillions of threads waiting for responses, for example).
Meaning: when you write example code where each thread just computes something (so, if run alone, that thread would consume 100% of the available CPU cycles) - then adding more threads just creates more load on the system.
Typically, a modern-day CPU has c cores, and each core can run t hardware threads in parallel. So you often get something like 4 x 2 = 8 threads that can occupy the CPU in parallel. But as soon as your threads spend more time doing nothing (waiting for a disk read or a network request to come back), you can easily create, manage, and utilize hundreds or even thousands of threads.
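A toy illustration of that "mostly idle" point, assuming only the standard library (reduce the thread count if your OS refuses to create that many):

```java
public class IdleThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        // How many threads the hardware can actually run at once (c cores x t threads).
        System.out.println("Hardware threads: " + Runtime.getRuntime().availableProcessors());

        // A few thousand threads that are idle 99.9% of the time: a sleeping
        // thread consumes no CPU, only stack space and an OS descriptor.
        Thread[] idle = new Thread[5000];
        for (int i = 0; i < idle.length; i++) {
            idle[i] = new Thread(() -> {
                try {
                    Thread.sleep(60_000);   // "waiting for something to happen"
                } catch (InterruptedException ignored) { }
            });
            idle[i].setDaemon(true);
            idle[i].start();
        }
        System.out.println("Started " + idle.length + " mostly-idle threads");
        Thread.sleep(2000);   // let them sit for a moment before the JVM exits
    }
}
```

The availableProcessors() number is what caps truly parallel, CPU-bound work; the sleeping threads only cost memory and kernel resources.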

What options are available to me when considering the use of blocking sockets?

While leveraging Java's blocking sockets where I intend to both read and write independently, I see that my two options are either to dedicate a separate thread for each operation or to poll on a timeout using setSoTimeout().
Making a choice between the two implementations appears to be a trade-off of memory (threads) versus CPU time (polling).
I see scaling issues that may occur with regard to the scheduling and context switching of many threads, which may outweigh the CPU time spent polling, as well as latency that may occur between reading and writing from a single thread, depending on the size of the packets received. Alternatively, a small pool of threads could be used for scale in combination with polling several sockets, if tuned appropriately.
With the exception of Java's NIO as an alternative, which is outside the scope of this question, am I correctly understanding the options available to me for working with blocking sockets?
First of all, I think you have excluded the only option that will scale; i.e. using NIO.
Neither per-socket threads nor polling will scale.
In the thread case, you will need two threads per socket. (A thread pool doesn't work.) That consumes space for the thread stacks and Thread objects, and kernel resources for native thread descriptors. Then there are secondary effects such as context switching, extra GC tracing, and so on.
In the polling case, you need to make regular syscalls for each socket. Whether you do this with one thread or a small pool of threads, the number of syscalls is the same. If you poll more frequently, the syscall rate increases. If you poll less frequently, your system becomes less responsive.
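For illustration, the polling option might look roughly like the sketch below: one thread sweeping over a handful of blocking sockets with setSoTimeout(), so every read gives up quickly and moves on. The hosts and the 50 ms timeout are made up.

```java
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.util.Arrays;
import java.util.List;

public class PollingLoop {
    public static void main(String[] args) throws IOException {
        // Hypothetical already-connected sockets.
        List<Socket> sockets = Arrays.asList(
                new Socket("example.com", 80),
                new Socket("example.org", 80));

        for (Socket s : sockets) {
            s.setSoTimeout(50);                     // each read gives up after 50 ms
        }

        byte[] buf = new byte[4096];
        while (true) {
            for (Socket s : sockets) {
                try {
                    int n = s.getInputStream().read(buf);   // blocks for at most 50 ms
                    if (n > 0) {
                        // handle the n bytes read from this socket
                    } else if (n == -1) {
                        // peer closed the connection
                    }
                } catch (SocketTimeoutException e) {
                    // nothing arrived within the timeout: move on to the next socket
                }
                // writes could be interleaved here from the same thread
            }
        }
    }
}
```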
AFAIK, there are no other options, given the restrictions that you have set.
Now if you are trying to figure out which of threads and polling is better, the answer will be, "it depends". There are lots of variables:
The amount of spare physical memory and spare CPU cycles.
The number of sockets.
The relative activity of the sockets.
Requirements for responsiveness.
What else is going on in the JVM (e.g. to trigger GCs)
What else is going on outside of the JVM.
Operating system performance characteristics.
These all add up to a complicated scenario which would probably be too difficult to analyze mathematically, too difficult to simulate (accurately) and difficult to measure empirically.
Not quite.
For one, reading with a timeout is no more expensive than reading without one. In either case, the thread goes to sleep after telling the OS to wake it when there is data for it. If you have a timeout, it additionally tells the OS to wake it after the specified delay. No CPU cycles are wasted waiting.
For another, the context-switching overhead is on the order of a couple of thousand CPU cycles, so a few microseconds. The delay in network communication is > 1 ms. Before this overhead brings a server to its knees, you can probably serve thousands of concurrent connections.
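As a sketch of the thread-per-direction option the question describes (with a placeholder endpoint and request), one thread parks in read() while another handles writes:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ReaderWriterThreads {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("example.com", 80);   // placeholder endpoint

        // Reader thread: parks inside read() until data arrives; no CPU is
        // burned while it waits.
        Thread reader = new Thread(() -> {
            byte[] buf = new byte[4096];
            try (InputStream in = socket.getInputStream()) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    // hand the n bytes off to whoever consumes them
                }
            } catch (IOException e) {
                // socket closed or failed; let the thread end
            }
        });

        // Writer thread: in a real program it would drain a queue of outgoing
        // messages; here it just sends one placeholder request.
        Thread writer = new Thread(() -> {
            try {
                OutputStream out = socket.getOutputStream();
                out.write("GET / HTTP/1.0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
                out.flush();
            } catch (IOException e) {
                // ignore for the sketch
            }
        });

        reader.start();
        writer.start();
    }
}
```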

How many threads is it advisable to have running at the same time in Java?

I am new to multithreading in Java; after looking at "Java virtual machine - maximum number of threads" it would appear there isn't a limit to how many threads a Java/Android app can run. However, is there an advisable limit? What I mean by this is: is there a number of threads where, if you go past it, it becomes unwise because you are unable to determine what thread does what at what time? I hope my question makes sense.
There are some advisable limits, but they don't really have anything to do with keeping track of which thread does what.
Most multithreading comes with locking. If you are using central data storage or global mutable state then the more threads you have, the more lock contention you will get. This is app-specific and depends on how much of said state you have and how often threads read and write it.
There are no limits in desktop JVMs by default, but there are OS limits. It should be in the tens of thousands for modern Windows machines, but don't rely on the ability to create much more than that.
Running multiple tasks in parallel is great, but the hardware can only cope with so much. If you are using small threads that get fired up sometimes and spend most of their time idle, that's no biggie (Java servers were written like this for years). However, if your threads are very intensive, making more of them than the number of cores you have is not likely to give you any benefit. (I believe the standard practice is twice the number of cores if you anticipate threads going idle sometimes.)
Threads have a cost to them. Whenever you switch Threads you switch context, and while it isn't that expensive, doing it constantly will hurt performance. It's not a good idea to create a Thread to sum up two integers and write back a result.
If Threads need visibility of each others state, then they are greatly slowed down, since a lot of their writes have to be written back to main memory. Threads are best used for standalone tasks that require little interaction with each other.
TL;DR
Depends on OS and hardware: on servers, creating thousands of threads is fine; on desktop machines you should limit yourself to 50-200 and choose carefully what you do with them.
Note: Android's default and suggested "UI multithread helper" - the AsyncTask - is not actually a thread. It's a task invoked from a ThreadPool, and as such there is no limit or penalty to using it. It has an upper limit on the number of threads it spawns and reuses them rather than creating new ones. Most Android apps should use it instead of spawning their own threads. In general, Thread Pools are fairly widespread and are a great choice unless you are forced into blocking operations.
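As a rough sketch of that advice - one shared, fixed-size pool sized from the hardware rather than hand-spawned threads (the 2x factor is just the rule of thumb mentioned above, not a hard rule):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SharedPool {
    // One shared pool for the whole app, sized from the hardware.
    private static final ExecutorService POOL =
            Executors.newFixedThreadPool(2 * Runtime.getRuntime().availableProcessors());

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            final int id = i;
            POOL.execute(() -> {
                // placeholder work; real tasks would do I/O or computation here
                System.out.println("task " + id + " ran on " + Thread.currentThread().getName());
            });
        }
        POOL.shutdown();
    }
}
```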

How to decide the suitable number of threads to create in java?

I have a java application that creates SSL socket with remote hosts. I want to employ threads to hasten the process.
I want the maximum possible utilization that does not affect the program's performance. How can I decide the suitable number of threads to use? After running the following line: Runtime.getRuntime().availableProcessors(); I got 4. My processor is an Intel Core i7, with 8 GB RAM.
If you have 4 cores, then in theory you should have exactly four worker threads going at any given time for maximum optimization. Unfortunately, what happens in theory never happens in practice. You may have worker threads that, for whatever reason, have significant amounts of downtime. Perhaps they're hitting the web for more data, or reading from a disk, much of which is just waiting and not utilizing the CPU.
Depending on how much waiting you're doing, you'll want to bump up the number. The costs to increasing the number of threads is that you'll have more context switching and competition for resources. The benefits are that you'll have another thread ready to work in case one of the other threads decides it has to break for something.
Your best bet is to set it to something (let's start with 4) and work your way up. Profile your code with each setting, and see if your benchmarks go up or down. You should soon see a pattern and a break-even point.
When it comes to optimization, you can theorize all you want about what should be the fastest, but you won't beat actually running and timing your code to truly answer this question.
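A crude version of that "set it to something and work your way up" loop might look like the sketch below; the task body is a stand-in for your real SSL work, mixing a bit of CPU with a bit of waiting:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizeProbe {
    // Stand-in task: a bit of CPU work plus a bit of simulated waiting.
    static Callable<Long> task() {
        return () -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;   // "CPU part"
            Thread.sleep(20);                               // "I/O wait part"
            return sum;
        };
    }

    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        for (int threads = 1; threads <= cores * 4; threads *= 2) {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            List<Callable<Long>> batch = new ArrayList<>();
            for (int i = 0; i < 64; i++) batch.add(task());

            long t0 = System.nanoTime();
            pool.invokeAll(batch);                          // waits for the whole batch
            long ms = (System.nanoTime() - t0) / 1_000_000;
            System.out.println(threads + " threads -> " + ms + " ms");
            pool.shutdown();
        }
    }
}
```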
As DarthVader said, you can use a ThreadPool (CachedThreadPool). With this construct you don't have to specify a concrete number of threads.
From the Oracle site:
The newCachedThreadPool method creates an executor with an expandable thread pool. This executor is suitable for applications that launch many short-lived tasks.
Maybe that's what you are looking for.
As for the number of cores, it is hard to say. You have 4 hyper-threaded cores; you should leave at least one core for your OS. I would say 4-6 threads.
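For reference, a minimal sketch of the newCachedThreadPool approach mentioned above; the task body is just a placeholder for one of your short-lived SSL jobs:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CachedPoolExample {
    public static void main(String[] args) throws InterruptedException {
        // Grows on demand and reuses threads that have been idle for 60 seconds.
        ExecutorService pool = Executors.newCachedThreadPool();

        for (int i = 0; i < 20; i++) {
            final int id = i;
            pool.execute(() -> {
                // Placeholder for a short-lived task, e.g. one handshake with a remote host.
                System.out.println("task " + id + " on " + Thread.currentThread().getName());
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```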

Is there a way to determine the ideal number of threads? [duplicate]

This question already has answers here: How to find out the optimal amount of threads?
I am writing a web crawler and using threads to download pages.
The first limiting factor for the performance of my program is the bandwidth: I can never download more pages than it allows.
The second thing is what I am interested in. I am using threads to download many pages at the same time, but as I create more threads, more sharing of the processor occurs. Is there some metric/way/class of tests to determine the ideal number of threads, or whether after a certain number the performance doesn't change or even decreases?
We've developed a multithreaded, parallel web crawler. Benchmarking throughput is the best way to get an idea of how the beast will handle its job. For a dedicated Java server, one thread per core is a baseline to start from; then the I/O comes into play and changes things.
Performance does decrease after a certain number of threads. But it depends on the site you crawl too, on the OS you use, etc. Try to find a site with a fairly constant response time to do your first benchmarks (like Google, but use different services).
With slow websites, a higher number of threads tends to compensate for I/O blocking.
Have a look at my answer in this thread
How to find out the optimal amount of threads?
Your example will likely be CPU bound, so you need a way to work out the contention in order to choose the right number of threads for your box and keep them all busy. Profiling will help there, but remember it'll depend on the number of cores (as well as the network latency already mentioned, etc.), so use the runtime to get the number of cores when wiring up your thread pool size.
No quick answer I'm afraid; there will be an element of test, measure, adjust, repeat!
The ideal number of threads should be close to the number of cores (virtual cores) your hardware provides. This is to avoid thread context switching and thread scheduling overhead. If you're doing heavy IO operations with many blocking reads (your thread blocks on a socket read), I suggest you redesign your code to use non-blocking IO APIs. Typically this will involve one "selector" thread that will monitor the activity of thousands of sockets and a small number of worker threads that will do the processing. If your code is in Java, the APIs are NIO. The only blocking call will be when you call selector.select(), and it will only block if there is nothing to be processed on any of the thousands of sockets. Event-driven frameworks such as netty.io use this model and have proven to be very scalable and to make the best use of the hardware resources of the system.
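A bare-bones sketch of that selector pattern is below, with hypothetical hosts; real code would also handle writes that don't complete in one go, partial reads, and connection errors:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        // Register a couple of hypothetical non-blocking connections.
        for (String host : new String[] {"example.com", "example.org"}) {
            SocketChannel ch = SocketChannel.open();
            ch.configureBlocking(false);
            ch.connect(new InetSocketAddress(host, 80));
            ch.register(selector, SelectionKey.OP_CONNECT);
        }

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (!selector.keys().isEmpty()) {
            selector.select(1000);                       // the only blocking call
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                SocketChannel ch = (SocketChannel) key.channel();
                if (key.isConnectable() && ch.finishConnect()) {
                    // send a tiny request so the server has something to answer
                    ch.write(ByteBuffer.wrap("GET / HTTP/1.0\r\n\r\n".getBytes()));
                    key.interestOps(SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    buf.clear();
                    int n = ch.read(buf);
                    if (n == -1) {                       // peer closed: unregister
                        key.cancel();
                        ch.close();
                    } else {
                        // hand the n bytes to a small pool of workers for processing
                    }
                }
            }
        }
    }
}
```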
I'd say use something like Akka to manage the threads for you. Use the Jersey HTTP client library with non-blocking IO, which works with callbacks if I remember correctly. It's possibly the ideal setup for that type of task.
