Multi-threading with Java, How to stop?

I am writing code for my homework and I am not so familiar with writing multi-threaded applications. I have learned how to create a thread and start it. I'd better show the code.
for (int i = 0; i < a.length; i++) {
    download(host, port, a[i]);
    scan.next();
}
My code above connects to a server and opens a.length batches of parallel requests; in other words, on each iteration download opens a[i] connections to fetch the same content. However, I want the download method for i = 0 to complete, i.e. all the threads that download opened to finish, before the next iteration i = 1 starts. I did that with scan.next() to stop it by hand, but obviously that is not a nice solution. How can I do it properly?
Edit:
public static long download(String host, int port) {
    new java.io.File("Folder_" + N).mkdir();
    N--;
    int totalLength = length(host, port);
    long result = 0;
    ArrayList<HTTPThread> list = new ArrayList<HTTPThread>();
    for (int i = 0; i < totalLength; i = i + N + 1) {
        HTTPThread t;
        if (i + N > totalLength) {
            t = (new HTTPThread(host, port, i, totalLength - 1));
        } else {
            t = new HTTPThread(host, port, i, i + N);
        }
        list.add(t);
    }
    for (HTTPThread t : list) {
        t.start();
    }
    return result;
}
And in my HTTPThread:
public void run() {
    init(host, port);
    downloadData(low, high);
    close();
}
Note: our test web server is a modified web server; it accepts a Range: i-j header and the response contains the contents of files i through j.

You will need to call the join() method of the thread that is doing the downloading. This will cause the current thread to wait until the download thread is finished. This is a good post on how to use join.
If you'd like to post your download method, you will probably get a more complete solution.
EDIT:
Ok, so after you start your threads you will need to join them like so:
for (HTTPThread t : list) {
    t.start();
}
try {
    for (HTTPThread t : list) {
        t.join(); // blocks until this thread has finished
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
This will stop the method from returning until all HTTPThreads have completed (note that join() throws the checked InterruptedException, hence the try/catch).

It's probably not a great idea to create an unbounded number of threads to perform an unbounded number of parallel HTTP requests. (Both network sockets and threads are operating-system resources, require some bookkeeping overhead, and are therefore subject to quotas in many operating systems. In addition, the web server you are reading from might not like thousands of concurrent connections, because its network sockets are finite, too!)
You can easily control the number of concurrent connections using an ExecutorService:
List<DownloadTask> tasks = new ArrayList<DownloadTask>();
for (int i = 0; i < length; i++) {
    tasks.add(new DownloadTask(i));
}
ExecutorService executor = Executors.newFixedThreadPool(N);
executor.invokeAll(tasks);
executor.shutdown();
This is both shorter and better than your homegrown concurrency limit, because your limit delays starting the next batch until all threads from the current batch have completed. With an ExecutorService, a new task is begun whenever an old task has completed (as long as there are tasks left). That is, your solution will have between 1 and N concurrent requests until all tasks have been started, whereas the ExecutorService will always run N concurrent requests.
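For completeness, DownloadTask is not shown above; a minimal sketch would implement Callable so it can be passed to invokeAll. The class body below is only an assumption and should do whatever one HTTPThread did:

import java.util.concurrent.Callable;

// Sketch only: the constructor argument matches new DownloadTask(i) above,
// and call() is a placeholder for the work one HTTPThread used to do
// (connect, request its byte range, read the response, close).
class DownloadTask implements Callable<Long> {
    private final int index;

    DownloadTask(int index) {
        this.index = index;
    }

    @Override
    public Long call() throws Exception {
        long bytesRead = 0;
        // ... init(host, port); downloadData(low, high); close(); ...
        return bytesRead;
    }
}

Note that invokeAll blocks until every task has finished (and throws the checked InterruptedException), so by the time executor.shutdown() is reached all downloads are complete.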

Related

REST parallel calls to service - Multithreading in Java

I have a REST API call where the maximum number of results returned by the API is 1000, starting with page=1:
{
    "status": "OK",
    "payload": {
        "EMPList": [],
        "count": 5665
    }
}
So to get the next results I have to change to start page=2 and hit the service again, which again returns only 1000 results.
After the first call I want to make the remaining calls in parallel, collect the results, combine them, and send them back to the calling service in Java. Please suggest something; I am new to Java. I tried using Callable but it's not working.
It seems to me that ideally you should be able to configure the max count to something appropriate for your use case. I'm assuming you aren't able to do that. Here is a simple, lock-less multi-threading scheme that acts as a basic reduction operation for your two network calls:
// online runnable: https://ideone.com/47KsoS
int resultSize = 5;
int[] result = new int[resultSize * 2];

Thread pg1 = new Thread() {
    public void run() {
        System.out.println("Thread 1 Running...");
        // write numbers 1-5 to indexes 0-4
        for (int i = 0; i < resultSize; i++) {
            result[i] = i + 1;
        }
        System.out.println("Thread 1 Exiting...");
    }
};

Thread pg2 = new Thread() {
    public void run() {
        System.out.println("Thread 2 Running");
        // write numbers 6-10 to indexes 5-9
        for (int i = 0; i < resultSize; i++) {
            result[i + resultSize] = i + 1 + resultSize;
        }
        System.out.println("Thread 2 Exiting...");
    }
};

pg1.start();
pg2.start();
// ensure that pg1 execution finishes
pg1.join();
// ensure that pg2 execution finishes
pg2.join();
// print result of reduction operation
System.out.println(Arrays.toString(result));
There is a very important caveat with this implementation, however. Notice that the two threads do NOT overlap in their memory writes. This matters: if you simply changed the int[] result to an ArrayList<Integer>, concurrent writes could corrupt the reduction between the two threads, which is called a race condition (the standard ArrayList implementation in Java is not thread-safe). Since we can guarantee in advance how large the result will be, I would strongly suggest sticking with an array for this multi-threaded implementation, as an ArrayList hides a lot of implementation logic from you that you likely won't fully appreciate until you take a basic data-structures course.
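Since the question specifically mentions Callable, here is an alternative sketch for the paging use case using an ExecutorService and Futures. The fetchPage method, the page size, and the thread count are assumptions standing in for the real REST call; each task returns its own list, so no collection is shared between threads:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class PagedFetch {
    // Hypothetical stand-in for the real REST call that returns one page
    // (up to 1000 records) for the given start page.
    static List<String> fetchPage(int page) {
        List<String> records = new ArrayList<>();
        records.add("record-from-page-" + page);
        return records;
    }

    public static void main(String[] args) throws Exception {
        int totalCount = 5665;   // "count" from the first response
        int pageSize = 1000;
        int pages = (totalCount + pageSize - 1) / pageSize;

        ExecutorService pool = Executors.newFixedThreadPool(pages);
        List<Future<List<String>>> futures = new ArrayList<>();
        for (int page = 1; page <= pages; page++) {
            final int p = page;
            futures.add(pool.submit(() -> fetchPage(p)));
        }
        // No new tasks will be accepted; the submitted pages still run.
        pool.shutdown();

        // Each task builds its own list; only the combining step below
        // touches the shared result, so there is no race condition.
        List<String> combined = new ArrayList<>();
        for (Future<List<String>> f : futures) {
            combined.addAll(f.get()); // blocks until that page is fetched
        }
        System.out.println("Fetched " + combined.size() + " records");
    }
}

Each page is fetched in parallel, and the combining loop runs only on the calling thread, which avoids the race condition described above.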

Performance of ExecutorService multithreading pool

I am using Java's concurrency library ExecutorService to run my tasks. The threshold for writing to the database is 200 QPS; however, this program can only reach 20 QPS with 15 threads. I tried 5, 10, 20, and 30 threads, and they were all even slower than 15 threads. Here is the code:
ExecutorService executor = Executors.newFixedThreadPool(15);
List<Callable<Object>> todos = new ArrayList<>();
for (final int id : ids) {
    todos.add(Executors.callable(() -> {
        try {
            TestObject test = testServiceClient.callRemoteService();
            SaveToDatabase();
        } catch (Exception ex) {}
    }));
}
try {
    executor.invokeAll(todos);
} catch (InterruptedException ex) {}
executor.shutdown();
1) I checked the CPU usage of the Linux server on which this program is running; the usage was 90% and 60% (it has 4 CPUs). The memory usage was only 20%. So the CPU and memory were still fine. The database server's CPU usage was low (around 20%). What could prevent the speed from reaching 200 QPS? Maybe this service call: testServiceClient.callRemoteService()? I checked the server configuration for that call and it allows a high number of calls per second.
2) If the count of ids in ids is more than 50000, is it a good idea to use invokeAll? Should we split it into smaller batches, such as 5000 per batch?
There is nothing in this code which prevents this query rate, except that creating and destroying a thread pool repeatedly is very expensive. I suggest using the Streams API, which is not only simpler but reuses a built-in thread pool:
int[] ids = ....
IntStream.of(ids).parallel()
.forEach(id -> testServiceClient.callRemoteService(id));
Here is a benchmark using a trivial service. The main overhead is the latency in creating the connection.
public static void main(String[] args) throws IOException {
    ServerSocket ss = new ServerSocket(0);
    Thread service = new Thread(() -> {
        try {
            for (; ; ) {
                try (Socket s = ss.accept()) {
                    s.getOutputStream().write(s.getInputStream().read());
                }
            }
        } catch (Throwable t) {
            t.printStackTrace();
        }
    });
    service.setDaemon(true);
    service.start();

    for (int t = 0; t < 5; t++) {
        long start = System.nanoTime();
        int[] ids = new int[5000];
        IntStream.of(ids).parallel().forEach(id -> {
            try {
                Socket s = new Socket("localhost", ss.getLocalPort());
                s.getOutputStream().write(id);
                s.getInputStream().read();
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        long time = System.nanoTime() - start;
        System.out.println("Throughput " + (int) (ids.length * 1e9 / time) + " connects/sec");
    }
}
prints
Throughput 12491 connects/sec
Throughput 13138 connects/sec
Throughput 15148 connects/sec
Throughput 14602 connects/sec
Throughput 15807 connects/sec
Using an ExecutorService would be better, as @grzegorz-piwowarek mentions.
ExecutorService es = Executors.newFixedThreadPool(8);
for (int t = 0; t < 5; t++) {
    long start = System.nanoTime();
    int[] ids = new int[5000];
    List<Future> futures = new ArrayList<>(ids.length);
    for (int id : ids) {
        futures.add(es.submit(() -> {
            try {
                Socket s = new Socket("localhost", ss.getLocalPort());
                s.getOutputStream().write(id);
                s.getInputStream().read();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }));
    }
    // Future.get() throws checked exceptions, so the enclosing method
    // needs to declare or handle them.
    for (Future future : futures) {
        future.get();
    }
    long time = System.nanoTime() - start;
    System.out.println("Throughput " + (int) (ids.length * 1e9 / time) + " connects/sec");
}
es.shutdown();
In this case it produces much the same results.
Why do you restrict yourself to such a low number of threads?
You're missing performance opportunities this way. It seems that your tasks are really not CPU-bound. The network operations (remote service + database query) may take up the majority of time for each task to finish. During these times, where a single task/thread needs to wait for some event (network,...), another thread can use the CPU. The more threads you make available to the system, the more threads may be waiting for their network I/O to complete while still having some threads use the CPU at the same time.
I suggest you drastically ramp up the number of threads for the executor. As you say that both remote servers are rather under-utilized, I assume the host your program runs at is the bottleneck at the moment. Try to increase (double?) the number of threads until either your CPU utilization approaches 100% or memory or the remote side become the bottleneck.
By the way, you shut down the executor, but do you actually wait for the tasks to terminate? How do you measure the "QPS"?
One more thing comes to mind: how are DB connections handled? I.e. how are the SaveToDatabase() calls synchronized? Do all threads share (and compete for) a single connection? Or, worse, does each thread create a new connection to the DB, do its thing, and then close the connection again? This may be a serious bottleneck, because establishing a TCP connection and doing the authentication handshake may take as much time as running a simple SQL statement.
If the count of id in ids is more than 50000, is it a good idea to use invokeAll? Should we split it to smaller batches, such as 5000 each batch?
As @Vaclav Stengl already wrote, the Executors have internal queues in which they enqueue and from which they process the tasks, so there is no need to worry about that. You can also just call submit() for each task as soon as you have created it. This allows the first tasks to start executing while you are still creating/preparing later tasks, which makes sense especially when creating each task takes comparatively long, and won't hurt in other cases. Think of invokeAll as a convenience method for cases where you already have a collection of tasks. If you create the tasks successively yourself and already have access to the ExecutorService that will run them, just submit() them as soon as possible, as in the sketch below.
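A minimal sketch of that submit-as-you-go pattern, assuming a hypothetical process(id) as a placeholder for callRemoteService() plus SaveToDatabase():

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class SubmitAsYouGo {
    // placeholder for callRemoteService() + SaveToDatabase()
    static void process(int id) { }

    public static void main(String[] args) throws Exception {
        int[] ids = new int[50000];
        ExecutorService executor = Executors.newFixedThreadPool(15);
        List<Future<?>> futures = new ArrayList<>();
        for (int id : ids) {
            // each task starts as soon as a worker thread is free,
            // while later tasks are still being created
            futures.add(executor.submit(() -> process(id)));
        }
        for (Future<?> f : futures) {
            f.get(); // wait for every task to finish
        }
        executor.shutdown();
    }
}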
About batch splitting:
ExecutorService has an internal queue for storing tasks. In your case, ExecutorService executor = Executors.newFixedThreadPool(15); has 15 threads, so at most 15 tasks will run concurrently and the others will be stored in the queue. The size of the queue can be parameterized; by default it can grow up to Integer.MAX_VALUE. invokeAll calls execute internally, and execute places tasks into the queue when all threads are busy.
Imho there are two possible reasons why the CPU is not at 100%:
the thread pool is too small, so try to enlarge it
threads are waiting for testServiceClient.callRemoteService() to complete, and meanwhile the CPU is starving
The QPS problem may be due to a bandwidth limit or to transaction execution (which will lock the table or row), so just increasing the pool size will not work. Additionally, you can try the producer-consumer pattern.
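For illustration, here is a rough sketch of that producer-consumer suggestion using a BlockingQueue. The queue size, the poison-pill sentinel, and the empty task body are assumptions, not the poster's actual code:

import java.util.concurrent.*;

public class ProducerConsumerSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(1000);
        int consumers = 15;
        ExecutorService pool = Executors.newFixedThreadPool(consumers);
        Integer poison = -1; // sentinel telling a consumer to stop

        // Consumers: take ids off the queue, call the remote service and
        // save the result, independently of how fast ids are produced.
        for (int i = 0; i < consumers; i++) {
            pool.submit(() -> {
                try {
                    while (true) {
                        Integer id = queue.take();
                        if (id.equals(poison)) {
                            return;
                        }
                        // placeholder for callRemoteService(id) + SaveToDatabase()
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // Producer: feed all ids, then one poison pill per consumer.
        for (int id = 0; id < 50000; id++) {
            queue.put(id);
        }
        for (int i = 0; i < consumers; i++) {
            queue.put(poison);
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}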

Threads not executing in method

I am trying to learn threads by building a web crawler. In the following code, searchHelper(website, keyword) finds all the links on a web page and keeps the links whose URL contains the keyword. The idea is that searchHelper is called once, and then a thread is created for each link found so that the threads act as web crawlers; for example, if the first website contains 5 links, then 5 threads are formed and five crawlers work together. Currently the threads are not working, so I only get the results from the first page. I have been able to get it to work without threads: if I remove the entire for loop and replace it with just the while loop, the crawler works as expected. Any help would be appreciated. Here is the method that creates the threads:
private void search(String website, String keyword)
{
    searchHelper(website, keyword);
    int limit = queue.size();
    Thread[] threads = new Thread[limit];
    for (int i = 0; i < limit; i++)
    {
        threads[i] = new Thread(new Runnable()
        {
            @Override
            public void run()
            {
                while (!queue.isEmpty() && queue.size() <= 10000)
                    searchHelper(queue.poll(), keyword);
            }
        });
        threads[i].start();
    }
    if (results.isEmpty())
        text.append("No results, sorry :(");
    else
    {
        text.append("\nList of results:\n\n");
        for (String x : results)
            text.append(x + "\n");
    }
}
You don't wait for the threads to finish. The last part of your code will probably be reached before even a single thread finishes its work, so no results can be displayed.
Add this right after your for-loop:
for (int i = 0; i < limit; i++)
{
    try {
        threads[i].join(); // wait for this crawler thread to finish
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
This way your main thread will wait for all of the threads to finish execution before accessing the results.

Fibonacci on Java ExecutorService runs faster sequentially than in parallel

I am trying out the executor service in Java, and wrote the following code to run Fibonacci (yes, the massively recursive version, just to stress out the executor service).
Surprisingly, it will run faster if I set the nThreads to 1. It might be related to the fact that the size of each "task" submitted to the executor service is really small. But still it must be the same number also if I set nThreads to 1.
To see if the access to the shared Atomic variables can cause this issue, I commented out the three lines with the comment "see text", and looked at the system monitor to see how long the execution takes. But the results are the same.
Any idea why this is happening?
BTW, I wanted to compare it with the similar implementation with Fork/Join. It turns out to be way slower than the F/J implementation.
public class MainSimpler {
    static int N = 35;
    static AtomicInteger result = new AtomicInteger(0), pendingTasks = new AtomicInteger(1);
    static ExecutorService executor;

    public static void main(String[] args) {
        int nThreads = 2;
        System.out.println("Number of threads = " + nThreads);
        executor = Executors.newFixedThreadPool(nThreads);
        Executable.inQueue = new AtomicInteger(nThreads);
        long before = System.currentTimeMillis();
        System.out.println("Fibonacci " + N + " is ... ");
        executor.submit(new FibSimpler(N));
        waitToFinish();
        System.out.println(result.get());
        long after = System.currentTimeMillis();
        System.out.println("Duration: " + (after - before) + " milliseconds\n");
    }

    private static void waitToFinish() {
        while (0 < pendingTasks.get()) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        executor.shutdown();
    }
}

class FibSimpler implements Runnable {
    int N;

    FibSimpler(int n) { N = n; }

    @Override
    public void run() {
        compute();
        MainSimpler.pendingTasks.decrementAndGet(); // see text
    }

    void compute() {
        int n = N;
        if (n <= 1) {
            MainSimpler.result.addAndGet(n); // see text
            return;
        }
        MainSimpler.executor.submit(new FibSimpler(n - 1));
        MainSimpler.pendingTasks.incrementAndGet(); // see text
        N = n - 2;
        compute(); // similar to the F/J counterpart
    }
}
Runtime (approximately):
1 thread : 11 seconds
2 threads: 19 seconds
4 threads: 19 seconds
Update:
I notice that even if I use one thread inside the executor service, the whole program uses all four cores of my machine (each core around 80% usage on average). This could explain why using more threads inside the executor service slows down the whole process, but then why does this program use 4 cores if only one thread is active inside the executor service?
It might be related to the fact that the size of each "task" submitted to the executor service is really small.
This is certainly the case; as a result you are mainly measuring the overhead of context switching. When nThreads == 1, there is no context switching and thus the performance is better.
But still it must be the same number also if I set nThreads to 1.
I'm guessing you meant 'to higher than 1' here.
You are running into the problem of heavy lock contention. When you have multiple threads, the lock on the result is contended all the time. Threads have to wait for each other before they can update the result and that slows them down. When there is only a single thread, the JVM probably detects that and performs lock elision, meaning it doesn't actually perform any locking at all.
You may get better performance if you don't divide the problem into N tasks, but rather divide it into N/nThreads tasks, which can be handled simultaneously by the threads (assuming you choose nThreads to be at most the number of physical cores/threads available). Each thread then does its own work, calculating its own total and only adding that to a grand total when the thread is done. Even then, for fib(35) I expect the costs of thread management to outweigh the benefits. Perhaps try fib(1000).
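A rough sketch of that "local total per task, single shared update at the end" idea follows; the loop body is a placeholder for real work, not a rewritten Fibonacci:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class PartitionedSum {
    public static void main(String[] args) throws Exception {
        int nThreads = Runtime.getRuntime().availableProcessors();
        ExecutorService executor = Executors.newFixedThreadPool(nThreads);
        AtomicLong grandTotal = new AtomicLong();

        long itemsPerTask = 1_000_000;
        List<Future<?>> futures = new ArrayList<>();
        for (int t = 0; t < nThreads; t++) {
            futures.add(executor.submit(() -> {
                long localTotal = 0;              // no sharing, no contention
                for (long i = 0; i < itemsPerTask; i++) {
                    localTotal += 1;              // placeholder for real work
                }
                grandTotal.addAndGet(localTotal); // one shared update per task
            }));
        }
        for (Future<?> f : futures) {
            f.get();
        }
        executor.shutdown();
        System.out.println(grandTotal.get());
    }
}

The shared AtomicLong is touched only nThreads times instead of once per element, so contention is negligible.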

Possible to catch the OS doing Context Switching on threads?

I would like to know how to catch a thread being interrupted by a context switch in Java.
I have two threads running side by side, both changing the same int up and down, and for the sake of my report I would like to know how to catch the switches and do something (e.g. make a simple System.out call when that happens). Thanks!
As far as an application is concerned context switches are invisible - there is no signal or message sent to the application that notifies it.
The only thing that might tip off an application is timing. For example, if you time a tight loop repeatedly, you might be able to (unreliably) detect a context switch that happens as the loop is executed, due to the longer time required in comparison to executions that were not interrupted. Unfortunately, this would only be possible for compiled languages like C. Languages like Java that make use of a virtual machine make it practically impossible to reliably detect something like this because a loop slowing down might be attributed to any number of reasons, like e.g. the garbage collector acting up.
Moreover, keep in mind that any system call - and especially I/O calls like the ones you'd use to log such an event - very often cause an implicit context switch, which could throw off anything you might want to do.
Why would you want to know something like this anyway? And especially from a Java application?
EDIT:
Well, if you are after creating a synchronization problem, here's my version:
public class Test {
    public static long count = 0;

    public static void main(String[] args) {
        for (int run = 0; run < 5; ++run) {
            Test.count = 0;
            Thread[] threads = new Thread[10];
            for (int i = 0; i < threads.length; ++i) {
                threads[i] = new Thread(new Runnable() {
                    public void run() {
                        // note: the int literal overflows to 1,410,065,408;
                        // use 10L * 1000 * 1000 * 1000 for a true 10 billion
                        for (long i = 0; i < (10 * 1000 * 1000 * 1000); ++i) {
                            Test.count += 1; // unsynchronized read-modify-write
                        }
                    }
                });
            }
            for (int i = 0; i < threads.length; ++i) {
                threads[i].start();
            }
            for (int i = 0; i < threads.length; ++i) {
                try {
                    threads[i].join();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            System.out.println(Test.count);
        }
    }
}
Here's what I got from a single run:
1443685504
1439908180
1461384255
1477413204
1440892041
Record the thread making the modification.
Every time the int is modified, store the thread that made the modification. When it differs from the previous thread, make your System.out call.
Otherwise, you would need an operating system that lets you intercept a context switch, and I don't know of an operating system that allows that.
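A small sketch of that idea, assuming the shared int can be wrapped in a class you control (all names here are made up for illustration):

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical wrapper: every modification records which thread made it and
// prints a line when the writer differs from the previous writer.
class WatchedCounter {
    private final AtomicInteger value = new AtomicInteger();
    private final AtomicReference<Thread> lastWriter = new AtomicReference<>();

    void increment() {
        value.incrementAndGet();
        Thread previous = lastWriter.getAndSet(Thread.currentThread());
        if (previous != null && previous != Thread.currentThread()) {
            System.out.println("Writer changed: " + previous.getName()
                    + " -> " + Thread.currentThread().getName());
        }
    }

    int get() {
        return value.get();
    }
}

Note that this only observes that a different thread wrote in between two writes, which is evidence of interleaving rather than of the context switch itself, and the println may itself trigger a switch.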
I don't think you can listen for events from the thread scheduler (part of the OS core), but you can use a pull strategy (a minimal sketch follows the steps below):
Make a static volatile String field.
In every thread, read this field (very often).
If the field value != "current-thread-name", print "Context is switched to " + thread-name.
Write the thread name to the field.
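Here is a minimal sketch of that pull strategy (the field name and the loop bound are made up; as discussed above, this only approximates thread interleaving, it does not observe the OS scheduler directly):

public class SwitchWatcher {
    // shared field holding the name of the thread that last ran this loop
    static volatile String lastThread = "";

    public static void main(String[] args) {
        Runnable worker = () -> {
            String me = Thread.currentThread().getName();
            for (int i = 0; i < 1_000_000; i++) {
                if (!me.equals(lastThread)) {
                    System.out.println("Context switched to " + me);
                }
                lastThread = me;
            }
        };
        new Thread(worker, "worker-1").start();
        new Thread(worker, "worker-2").start();
    }
}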
