how to multithread in Java - java

I have to multithread a method that runs code in batches of 1000. I need to give these batches to different threads.
Currently I have spawned 3 threads, but all 3 are picking up the 1st batch of 1000.
I want the threads not to pick the same batch, but each to pick a different one.
Please help and give suggestions.

I would use an ExecutorService
int numberOfTasks = ....
int batchSize = 1000;
ExecutorService es = Executors.newFixedThreadPool(3);
for (int i = 0; i < numberOfTasks; i += batchSize) {
    final int start = i;
    final int last = Math.min(i + batchSize, numberOfTasks);
    es.submit(new Runnable() {
        @Override
        public void run() {
            for (int j = start; j < last; j++)
                System.out.println(j); // do something with j
        }
    });
}
es.shutdown();

Put the batches in a BlockingQueue and make your worker threads take batches from the queue.
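A minimal sketch of that pattern, assuming each batch is represented as a List<Integer> and the queue capacity is arbitrary:

import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BatchQueueExample {

    static void process(List<Integer> batch) {
        // do something with the batch
    }

    public static void main(String[] args) {
        // Each queue element is one batch of 1000 items; the capacity is arbitrary here.
        final BlockingQueue<List<Integer>> queue = new ArrayBlockingQueue<>(10);

        // Worker threads block on take() until a batch is available.
        // take() removes the batch, so no two workers can ever receive the same one.
        for (int i = 0; i < 3; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            List<Integer> batch = queue.take();
                            process(batch);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt(); // stop when interrupted
                    }
                }
            }).start();
        }

        // Producer side: queue.put(batch) for each batch (blocks while the queue is full).
    }
}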

Use a lock or a mutex when retrieving the batch. That way, the threads can't access the critical section at the same time and can't accidentally access the same batch.
I'm assuming you're removing a batch once it has been picked by a thread.
EDIT: aioobe and jonas' answers are better, use that. This is an alternative. :)
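For completeness, a minimal sketch of the lock idea above, assuming the batches live in some shared collection (the types here are placeholders):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

class BatchSource {
    private final Deque<List<Integer>> batches = new ArrayDeque<>();

    // Only one thread at a time can be inside this method, so a batch
    // handed out here is removed before any other thread can see it.
    synchronized List<Integer> nextBatch() {
        return batches.poll(); // null once all batches have been taken
    }
}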

You need to synchronize the access to the list of jobs in the batch. ("Synchronize" essentially means "make sure the threads are aware of potential race conditions". In most scenarios this means "let some method be executed by a single thread at a time".)
This is most easily solved using the java.util.concurrent package. Have a look at the various implementations of BlockingQueue, for instance ArrayBlockingQueue or LinkedBlockingQueue.

Related

Java - Run tasks in varying time intervals

This question is for a college assignment.
I want to run a block of code every n*2 seconds (e.g. wait 1 second and run, wait 2 seconds and run, wait 4 seconds and run, etc.), up to 5 times.
I currently have something like this.
int timer = 1000;
int tryCounter = 0;
while (!condition() && tryCounter < 5) {
    doTask();
    Thread.sleep(timer);
    timer *= 2;
    tryCounter++;
}
Although this works, my grade benefits from not using Thread.sleep(). I figured a ScheduledThreadPoolExecutor running at a fixed rate would be one way to go, but I cannot get it to work because the interval is not actually fixed.
This is for a theoretical distributed system with high concurrency requirements, so what matters is high scalability.
I could get away with Thread.sleep() if there were really no benefit or viable alternative, by explaining that in my report. So does anyone have any insight on this?
It is possible to schedule tasks with ScheduledExecutorService combined with some logic. The schedule method lets you specify a delay and a time unit, and you can keep a variable that handles the doubling you are trying to do.
int timer = 1000;
ScheduledExecutorService service = Executors.newSingleThreadScheduledExecutor();
Runnable runnable = new Runnable() {
    public void run() {
        // Move the code you want to run here.
        // (Any condition() check would have to happen inside run(),
        //  since all five executions are scheduled up front.)
    }
};
// Schedule the task 5 times, doubling the delay each time.
// Each delay is measured from now, so the runs land at roughly 1s, 2s, 4s, 8s and 16s.
for (int i = 0; i < 5; i++) {
    service.schedule(runnable, timer, TimeUnit.MILLISECONDS);
    timer *= 2;
}
Moving your code into the runnable and then scheduling it in a for loop where the timer is doubled should accomplish the effect you are going for. Hope that helps!
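Not part of the answer above, but an alternative sketch that matches the asker's doubling waits more literally: the task re-schedules itself with a doubled delay until it has run 5 times (the class, field names and the doTask body are placeholders):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DoublingScheduler {
    private final ScheduledExecutorService service = Executors.newSingleThreadScheduledExecutor();
    private long delayMs = 1000;   // start with a 1 second wait
    private int runs = 0;          // how many times the task has run so far

    void start() {
        service.schedule(this::runOnce, delayMs, TimeUnit.MILLISECONDS);
    }

    private void runOnce() {
        doTask();                  // the actual work
        runs++;
        delayMs *= 2;              // double the wait before the next run
        if (runs < 5) {
            service.schedule(this::runOnce, delayMs, TimeUnit.MILLISECONDS);
        } else {
            service.shutdown();    // done after 5 runs
        }
    }

    private void doTask() {
        System.out.println("run at " + System.currentTimeMillis());
    }
}

Because each new delay is only computed after the previous run finishes, the waits really are 1, 2, 4, 8 and 16 seconds between executions.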

exec multiThread for one process - java

I have a problem like this:
I have an array with 50 elements, and I'd like to run a calculation on each element. To make it faster, how do I divide the work between 5 threads, so that each thread handles and calculates 10 elements, without duplicating work done by the other threads?
Also, the number of threads should be a variable; it might be 5 or 10 or any other number.
I tried something like:
ExecutorService executor = Executors.newCachedThreadPool();
for(int i = 1; i <= 5; i++){ //mycalculate }
but all 5 threads just process the first 10 elements.
Can anyone help me? Please.
(I hope you understand my question; my English is not good.)
Thanks
ExecutorService executor = Executors.newCachedThreadPool();
for (int i = 1; i <= 5; i++) {
    final int taskNo = i;
    executor.submit(new Runnable() {
        public void run() {
            // perform 'mycalculate' for task 'taskNo'
        }
    });
}
(That can be written more neatly using lambdas, but let's stick with the "classic Java" way for now.)
This does not deal with the issue of how to wait for the tasks to finish. For that, you could capture the Future objects that submit returns and call get on each one.
It also doesn't deal with any synchronization that would be necessary if the tasks changed any shared objects.
I'd like to run a calculation on each element, to make it faster ...
If the 'mycalculate' task is lengthy, the tasks don't interfere with each other, and you have multiple cores, this approach should give some speedup.
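To make the slicing concrete, here is a sketch (my own, with placeholder names and a made-up calculate method) of how each task could be given its own range of the array and how the caller can wait for all of them:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SliceExample {
    public static void main(String[] args) throws Exception {
        final double[] data = new double[50];   // the 50 elements to process
        final int nThreads = 5;                 // number of threads is a variable
        final int chunk = (data.length + nThreads - 1) / nThreads; // elements per task, rounded up

        ExecutorService executor = Executors.newFixedThreadPool(nThreads);
        List<Future<?>> futures = new ArrayList<Future<?>>();
        for (int t = 0; t < nThreads; t++) {
            final int from = t * chunk;
            final int to = Math.min(from + chunk, data.length);
            futures.add(executor.submit(new Runnable() {
                public void run() {
                    for (int i = from; i < to; i++) {
                        data[i] = calculate(data[i]); // each task touches only its own slice
                    }
                }
            }));
        }
        for (Future<?> f : futures) {
            f.get(); // wait for every task to finish (and surface any exception)
        }
        executor.shutdown();
    }

    static double calculate(double x) { // placeholder for 'mycalculate'
        return x * x;
    }
}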
Try parallel stream like this.
SomeClass[] array = new SomeClass[50];
// fill array
Stream.of(array)
.parallel()
.forEach(e -> /* calculate */);

How to make writes to array visible to other Threads

I have an input array of primitive type int. I would like to process this array using multiple threads and store the results in an output array of the same type and size. Is the following code correct in terms of memory visibility?
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class ArraySynchronization2
{
final int width = 100;
final int height = 100;
final int[][] img = new int[width][height];
volatile int[][] avg = new int[width][height];
public static void main(String[] args) throws InterruptedException, ExecutionException
{
new ArraySynchronization2().doJob();
}
private void doJob() throws InterruptedException, ExecutionException
{
final int threadNo = 8;
ExecutorService pool = Executors.newFixedThreadPool(threadNo);
final CountDownLatch countDownLatch = new CountDownLatch(width - 2);
for (int x = 1; x < width - 1; x++)
{
final int col = x;
pool.execute(new Runnable()
{
public void run()
{
for (int y = 0; y < height; y++)
{
avg[col][y] = (img[col - 1][y] + img[col][y] + img[col + 1][y]) / 3;
}
// how can I make the writes to the data in avg[][] visible to other threads? is this ok?
avg = avg;
countDownLatch.countDown();
};
});
}
try
{
// Does this make any memory visibility guarantees?
countDownLatch.await();
}
catch (InterruptedException e)
{
e.printStackTrace();
}
// can I read avg here, will the results be correct?
for (int x = 0; x < width; x++)
{
for (int y = 0; y < height; y++)
{
System.out.println(avg[x][y]);
}
}
pool.shutdown();
pool.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
// now I know tasks are completed and results synchronized (after thread death), but what if I plan to reuse the pool?
}
}
I do not want to synchronize on the CountDownLatch. I would like to know how to make the writes to the output array visible to other threads. Imagine I have an array (e.g. an image) that I would like to process; I could do this in multiple separate tasks, each processing a chunk of the input array into the output array, with no inter-dependencies between the writes to the output. After all computations complete, I would like all the results in the output array to be ready to read. How can I achieve such behaviour? I know it is achievable by using submit and Future.get() instead of execute; I'd like to know how to properly implement such a low-level mechanism. Please also refer to the questions raised in the comments near the code.
Hm, just wondering if you actually need a latch. The array itself is a reserved block of memory, with every cell being a dedicated memory address. (By the way, marking it volatile only marks the reference to the array as volatile, not the cells of the array, see here.) So you only need to coordinate access to the cells if multiple threads write to the same cell.
The question is: are you actually doing that? If not, the aim should be to avoid coordinating access where possible, because it comes at a cost.
In your algorithm you operate on rows, so why not parallelize on rows, so that each thread only reads and calculates the values of a row segment of the entire array and ignores the other rows?
i.e.
thread-0 -> rows 0, 8, 15, ...
thread-1 -> rows 1, 9, 16, ...
...
basically this (haven't tested):
for (int n = 0; n < threadNo; n++) { // each n relates to a thread
    final int startRow = n;          // must be (effectively) final to be used in the anonymous class
    pool.execute(new Runnable() {
        public void run() {
            for (int row = startRow; row < height; row += threadNo) { // proceed to the next row for this thread
                for (int col = 1; col < width - 1; col++) {
                    avg[col][row] = (img[col - 1][row] + img[col][row] + img[col + 1][row]) / 3;
                }
            }
        }
    });
}
So they can operate on the entire array without having to synchronize at all. And by putting the loop that prints the result after shutting down the pool, you ensure all calculating threads have finished, and the only thread that has to wait is the main thread.
An alternative to this approach is to create an avg array of size 100/threadNo for each thread, so that each thread write-operates on its own array, and you merge the arrays afterwards into one array with System.arraycopy().
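A rough sketch of that merge idea (my own illustration, not part of the original answer), reusing width, height, threadNo and avg from the question's code and assuming the columns are split into contiguous blocks:

int blockWidth = width / threadNo; // assume width divides evenly, purely for illustration
final int[][][] partial = new int[threadNo][blockWidth][height];

// ... each worker thread t fills partial[t] on its own, with no sharing ...

// Merge after all workers are done: copy each thread's block of columns into avg.
for (int t = 0; t < threadNo; t++) {
    System.arraycopy(partial[t], 0, avg, t * blockWidth, blockWidth);
}

Since the merge happens after the pool has been shut down (or after all the Futures have completed), the usual happens-before guarantee covers the copied data.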
If you intend to reuse the pool, you should use submit instead of execute and call get() on the Futures you get from submit.
Set<Future<?>> futures = new HashSet<>();
for (int n = 0; ...) {
    futures.add(pool.submit(new Runnable() {...}));
}
for (Future<?> f : futures) {
    f.get(); // blocks until the task is completed
}
In case you want to read intermediate states of the array you can either read it directly, if inconsistent data on single cells is acceptable, or use AtomicIntegerArray, as Nicolas Filotto suggested.
-- EDIT --
After the edit, using the width for the latch instead of the original thread number, and all the discussion, I'd like to add a few words.
As @jameslarge pointed out, it's about how to establish a "happens-before" relationship, i.e. how to guarantee that operation A (a write) happens before operation B (a read). Therefore access between two threads needs to be coordinated. There are several options:
volatile keyword - doesn't work on arrays, as it marks only the reference and not the values as volatile
synchronization - pessimistic locking (synchronized modifier or statement)
CAS - optimistic locking, used by quite a few concurrent implementations
Every syncpoint (pessimistic or optimistic) establishes a happens-before relationship. Which one you choose depends on your requirements.
What you would like to achieve is coordination between the read operation of the main thread and the write operations of the worker threads. How you implement it is up to you and your requirements. The CountDownLatch counting down the total number of jobs is one way (by the way, the latch uses a state property which is a volatile int). A CyclicBarrier may also be a construct worth considering, especially if you'd like to read consistent intermediate states. Or a future.get(), or...
It all boils down to the worker threads having to signal that they're done writing, so the reader thread can start reading.
However, beware of using sleep instead of synchronization. Sleep does not establish a happens-before relationship, and using sleep for synchronization is a typical concurrency bug pattern. In the worst case, the sleep ends before any of the work has been done.
What you could use instead of a plain volatile int array is an AtomicIntegerArray. It is meant exactly for your case: it updates array elements atomically, and the updates are visible to all threads.
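A minimal sketch of how the question's averaging loop might look on top of an AtomicIntegerArray (my own illustration; indices are flattened to one dimension because AtomicIntegerArray is one-dimensional, and width, height, img and col come from the question's code):

import java.util.concurrent.atomic.AtomicIntegerArray;

// One-dimensional storage: cell (x, y) lives at index x * height + y.
final AtomicIntegerArray avg = new AtomicIntegerArray(width * height);

// Inside a worker task, for a given column col:
for (int y = 0; y < height; y++) {
    int value = (img[col - 1][y] + img[col][y] + img[col + 1][y]) / 3;
    avg.set(col * height + y, value); // has volatile write semantics, so readers see it
}

// In a reading thread:
int cell = avg.get(col * height + y); // always sees the latest value written with set()

Note that this only gives per-cell visibility; the main thread still needs the latch (or Future.get()) to know that all columns have actually been computed.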

Fibonacci on Java ExecutorService runs faster sequentially than in parallel

I am trying out the executor service in Java, and wrote the following code to run Fibonacci (yes, the massively recursive version, just to stress out the executor service).
Surprisingly, it will run faster if I set the nThreads to 1. It might be related to the fact that the size of each "task" submitted to the executor service is really small. But still it must be the same number also if I set nThreads to 1.
To see if the access to the shared Atomic variables can cause this issue, I commented out the three lines with the comment "see text", and looked at the system monitor to see how long the execution takes. But the results are the same.
Any idea why this is happening?
BTW, I wanted to compare it with the similar implementation with Fork/Join. It turns out to be way slower than the F/J implementation.
public class MainSimpler {
static int N=35;
static AtomicInteger result = new AtomicInteger(0), pendingTasks = new AtomicInteger(1);
static ExecutorService executor;
public static void main(String[] args) {
int nThreads=2;
System.out.println("Number of threads = "+nThreads);
executor = Executors.newFixedThreadPool(nThreads);
Executable.inQueue = new AtomicInteger(nThreads);
long before = System.currentTimeMillis();
System.out.println("Fibonacci "+N+" is ... ");
executor.submit(new FibSimpler(N));
waitToFinish();
System.out.println(result.get());
long after = System.currentTimeMillis();
System.out.println("Duration: " + (after - before) + " milliseconds\n");
}
private static void waitToFinish() {
while (0 < pendingTasks.get()){
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
executor.shutdown();
}
}
class FibSimpler implements Runnable {
int N;
FibSimpler (int n) { N=n; }
@Override
public void run() {
compute();
MainSimpler.pendingTasks.decrementAndGet(); // see text
}
void compute() {
int n = N;
if (n <= 1) {
MainSimpler.result.addAndGet(n); // see text
return;
}
MainSimpler.executor.submit(new FibSimpler(n-1));
MainSimpler.pendingTasks.incrementAndGet(); // see text
N = n-2;
compute(); // similar to the F/J counterpart
}
}
Runtime (approximately):
1 thread : 11 seconds
2 threads: 19 seconds
4 threads: 19 seconds
Update:
I notice that even if I use one thread inside the executor service, the whole program will use all four cores of my machine (each core around 80% usage on average). This could explain why using more threads inside the executor service slows down the whole process, but now, why does this program use 4 cores if only one thread is active inside the executor service??
It might be related to the fact that the size of each "task" submitted
to the executor service is really small.
This is certainly the case, and as a result you are mainly measuring the overhead of context switching. When nThreads == 1, there is no context switching and thus the performance is better.
But still it must be the same number also if I set nThreads to 1.
I'm guessing you meant 'to higher than 1' here.
You are running into the problem of heavy lock contention. When you have multiple threads, the lock on the result is contended all the time. Threads have to wait for each other before they can update the result and that slows them down. When there is only a single thread, the JVM probably detects that and performs lock elision, meaning it doesn't actually perform any locking at all.
You may get better performance if you don't divide the problem into N tasks, but rather divide it into N/nThreads tasks, which can be handled simultaneously by the threads (assuming you choose nThreads to be at most the number of physical cores/threads available). Each thread then does its own work, calculating its own total and only adding that to a grand total when the thread is done. Even then, for fib(35) I expect the costs of thread management to outweigh the benefits. Perhaps try fib(1000).
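To illustrate the "own total per task" idea in its simplest form, here is a sketch (placeholder names, and it sums a range of integers rather than computing Fibonacci) where each task keeps a local total and the main thread adds up the per-task results at the end:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartialTotals {
    public static void main(String[] args) throws Exception {
        final int n = 1_000_000;
        final int nThreads = Runtime.getRuntime().availableProcessors();
        ExecutorService executor = Executors.newFixedThreadPool(nThreads);

        int chunk = (n + nThreads - 1) / nThreads;
        List<Future<Long>> futures = new ArrayList<Future<Long>>();
        for (int t = 0; t < nThreads; t++) {
            final int from = t * chunk + 1;
            final int to = Math.min(from + chunk - 1, n);
            futures.add(executor.submit(new Callable<Long>() {
                @Override
                public Long call() {
                    long local = 0;              // each task keeps its own total...
                    for (int i = from; i <= to; i++) {
                        local += i;
                    }
                    return local;                // ...and hands it back once, at the end
                }
            }));
        }

        long grandTotal = 0;
        for (Future<Long> f : futures) {
            grandTotal += f.get();               // no shared counter is touched during the work
        }
        executor.shutdown();
        System.out.println(grandTotal);          // n*(n+1)/2 = 500000500000
    }
}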

Multi-threading with Java, How to stop?

I am writing code for my homework. I am not very familiar with writing multi-threaded applications; I have learned how to open a thread and start it, but I had better show the code.
for (int i = 0; i < a.length; i++) {
download(host, port, a[i]);
scan.next();
}
My code above connects to a server and opens a.length parallel requests. In other words, download opens a[i] connections to get the same content on each iteration. However, I want the download method for i = 0 to complete, meaning all the threads it has opened have finished, before the next iteration with i = 1 starts. I did it with scan.next() to pause by hand, but obviously that is not a nice solution. How can I do that?
Edit:
public static long download(String host, int port) {
new java.io.File("Folder_" + N).mkdir();
N--;
int totalLength = length(host, port);
long result = 0;
ArrayList<HTTPThread> list = new ArrayList<HTTPThread>();
for (int i = 0; i < totalLength; i = i + N + 1) {
HTTPThread t;
if (i + N > totalLength) {
t = (new HTTPThread(host, port, i, totalLength - 1));
} else {
t = new HTTPThread(host, port, i, i + N);
}
list.add(t);
}
for (HTTPThread t : list) {
t.start();
}
return result;
}
And In my HTTPThread;
public void run() {
init(host, port);
downloadData(low, high);
close();
}
Note: Our test web server is a modified web server; it accepts Range: i-j, and the response contains the contents of files i through j.
You will need to call the join() method of the thread that is doing the downloading. This will cause the current thread to wait until the download thread is finished. This is a good post on how to use join.
If you'd like to post your download method you will probably get a more complete solution
EDIT:
Ok, so after you start your threads you will need to join them like so:
for (HTTPThread t : list) {
t.start();
}
for (HTTPThread t : list) {
t.join();
}
This will stop the method from returning until all HTTPThreads have completed.
It's probably not a great idea to create an unbounded number of threads to do an unbounded number of parallel HTTP requests. (Both network sockets and threads are operating-system resources, require some bookkeeping overhead, and are therefore subject to quotas in many operating systems. In addition, the web server you are reading from might not like thousands of concurrent connections, because its network sockets are finite, too!)
You can easily control the number of concurrent connections using an ExecutorService:
List<DownloadTask> tasks = new ArrayList<DownloadTask>();
for (int i = 0; i < length; i++) {
    tasks.add(new DownloadTask(i));
}
ExecutorService executor = Executors.newFixedThreadPool(N);
executor.invokeAll(tasks); // blocks until all tasks have completed; DownloadTask must implement Callable
executor.shutdown();
This is both shorter and better than your homegrown concurrency limit, because your limit delays starting the next batch until all threads from the current batch have completed. With an ExecutorService, a new task is begun whenever an old task has completed (as long as tasks are left). That is, your solution will have 1 to N concurrent requests until all tasks have been started, whereas the ExecutorService will always have N concurrent requests.
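For illustration, a DownloadTask could look roughly like this; the class and its fields are hypothetical, the point is only that invokeAll requires the tasks to implement Callable:

import java.util.concurrent.Callable;

class DownloadTask implements Callable<Long> {
    private final int rangeIndex; // which Range: i-j block this task fetches

    DownloadTask(int rangeIndex) {
        this.rangeIndex = rangeIndex;
    }

    @Override
    public Long call() throws Exception {
        // open the connection, send the range request, read the body ...
        return 0L; // e.g. the number of bytes downloaded (placeholder)
    }
}

invokeAll then returns a List<Future<Long>> whose get() hands back each task's result once every task has finished.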
