I made a program to count words from individual files,
but how can I modify it so that it gives the total number of words from all files (as ONE value)?
My code looks like this:
public class WordCount implements Runnable
{
public WordCount(String filename)
{
this.filename = filename;
}
public void run()
{
int count = 0;
try
{
Scanner in = new Scanner(new File(filename));
while (in.hasNext())
{
in.next();
count++;
}
System.out.println(filename + ": " + count);
}
catch (FileNotFoundException e)
{
System.out.println(filename + " was not found.");
}
}
private String filename;
}
With a Main-Class:
public class Main
{
public static void main(String args[])
{
for (String filename : args)
{
Runnable tester = new WordCount(filename);
Thread t = new Thread(tester);
t.start();
}
}
}
And how do I avoid race conditions?
Thank you for your help.
A worker thread:
class WordCount extends Thread
{
int count;
@Override
public void run()
{
count = 0;
/* Count the words... */
...
++count;
...
}
}
And a class to use them:
class Main
{
public static void main(String args[]) throws InterruptedException
{
WordCount[] counters = new WordCount[args.length];
for (int idx = 0; idx < args.length; ++idx) {
counters[idx] = new WordCount(args[idx]);
counters[idx].start();
}
int total = 0;
for (WordCount counter : counters) {
counter.join();
total += counter.count;
}
System.out.println("Total: " + total);
}
}
Many hard drives don't do a great job of reading multiple files concurrently. Locality of reference has a big impact on performance.
You can either use a Future to get each file's count and add up all the counts at the end, or use a static variable and increment it in a synchronized manner, i.e. with explicit synchronization or an atomic increment.
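For illustration, a minimal sketch of the Future variant, reusing the same Scanner-based counting as in the question (the class name and pool size here are just placeholders):
import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureWordCount {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> futures = new ArrayList<>();
        for (String filename : args) {
            // Each Callable counts one file and returns the count instead of printing it.
            Callable<Integer> task = () -> {
                int count = 0;
                try (Scanner in = new Scanner(new File(filename))) {
                    while (in.hasNext()) {
                        in.next();
                        count++;
                    }
                } catch (FileNotFoundException e) {
                    System.out.println(filename + " was not found.");
                }
                return count;
            };
            futures.add(pool.submit(task));
        }
        int total = 0;
        for (Future<Integer> f : futures) {
            total += f.get();   // blocks until that file's count is ready
        }
        pool.shutdown();
        System.out.println("Total: " + total);
    }
}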
What if your Runnable took two arguments:
a BlockingQueue<String> or BlockingQueue<File> of input files
an AtomicLong
In a loop, you would get the next String/File from the queue, count its words, and increment the AtomicLong by that amount. Whether the loop is while(!queue.isEmpty()) or while(!done) depends on how you feed files into the queue: if you know all the files from the start, you can use the isEmpty version, but if you're streaming them in from somewhere, you want to use the !done version (and have done be a volatile boolean or AtomicBoolean for memory visibility).
Then you feed these Runnables to an executor, and you should be good to go.
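A rough sketch of that setup for the case where all files are known up front (class names are invented here; poll() is used so the emptiness check and the take are a single step):
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

class QueueWordCounter implements Runnable {
    private final BlockingQueue<File> files;
    private final AtomicLong total;

    QueueWordCounter(BlockingQueue<File> files, AtomicLong total) {
        this.files = files;
        this.total = total;
    }

    @Override
    public void run() {
        File file;
        // All files are known from the start, so drain the queue until it is empty.
        while ((file = files.poll()) != null) {
            long count = 0;
            try (Scanner in = new Scanner(file)) {
                while (in.hasNext()) {
                    in.next();
                    count++;
                }
            } catch (FileNotFoundException e) {
                System.out.println(file + " was not found.");
            }
            total.addAndGet(count);   // one atomic update per file
        }
    }
}

class QueueWordCountMain {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<File> queue = new LinkedBlockingQueue<>();
        for (String name : args) {
            queue.add(new File(name));
        }
        AtomicLong total = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            pool.execute(new QueueWordCounter(queue, total));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        System.out.println("Total: " + total.get());
    }
}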
You can create a listener to get feedback from the threads.
public interface ResultListener {
void result(int words);
}
public class WordCount implements Runnable
{
private String filename;
private ResultListener listener;
public void run()
{
int count = 0;
try
{
Scanner in = new Scanner(new File(filename));
while (in.hasNext())
{
in.next();
count++;
}
listener.result(count);
}
catch (FileNotFoundException e)
{
System.out.println(filename + " was not found.");
}
}
}
You can add a constructor parameter for the listener, just like for your filename.
public class Main
{
private static int totalCount = 0;
private static ResultListener listener = new ResultListener(){
public synchronized void result(int words){
totalCount += words;
}
};
public static void main(String args[])
{
for (String filename : args)
{
Runnable tester = new WordCount(filename, listener);
Thread t = new Thread(tester);
t.start();
}
}
}
You can make the count static so all the threads increment the same counter; to keep the increments safe it should be an AtomicInteger (volatile alone would not make ++ atomic).
public class WordCount implements Runnable
{
private static AtomicInteger count = new AtomicInteger(0); // <-- now all threads increment the same count
private String filename;
public WordCount(String filename)
{
this.filename = filename;
}
public static int getCount()
{
return count.get();
}
public void run()
{
try
{
Scanner in = new Scanner(new File(filename));
while (in.hasNext())
{
in.next();
count.incrementAndGet();
}
System.out.println(filename + ": " + count);
}
catch (FileNotFoundException e)
{
System.out.println(filename + " was not found.");
}
}
}
Update: haven't done java in a while, but the point about making it a private static field still stands... just make it an AtomicInteger.
You could create a Thread pool with a synchronized task queue that would hold all of the files you wish to count the words for.
When your thread pool workers come online, they could ask the task queue for a file to count.
After a worker completes its job, it could notify the main thread of its final count.
The main thread would have a synchronized notify method that would add up all of the worker threads' results.
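A small sketch of what that synchronized notify method might look like (class and method names are made up for illustration):
class TotalCollector {
    private int total = 0;
    private int pending;           // number of worker results still outstanding

    TotalCollector(int pending) {
        this.pending = pending;
    }

    // Workers call this once with their per-file count.
    public synchronized void report(int count) {
        total += count;
        pending--;
        notifyAll();               // wake the main thread so it can re-check its condition
    }

    // The main thread blocks here until every worker has reported.
    public synchronized int awaitTotal() throws InterruptedException {
        while (pending > 0) {
            wait();
        }
        return total;
    }
}
Each worker would call report(count) when it finishes its file, and the main thread would call awaitTotal() to get the grand total.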
Hope this helps.
Or you can have all the threads update a single word count variable. count++ is atomic if count is word-sized (an int should suffice).
EDIT: Turns out count++ is not atomic even on an int, because it is a separate read, increment and write. Look at AtomicInteger and its incrementAndGet method instead: that operation is atomic, and you don't need any other synchronization mechanisms - just store your count in an AtomicInteger.
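A tiny sketch of that (the class and method names are illustrative):
import java.util.concurrent.atomic.AtomicInteger;

class SharedWordCount {
    // incrementAndGet() performs the read-increment-write as one atomic operation,
    // so concurrent threads never lose an update.
    static final AtomicInteger COUNT = new AtomicInteger();

    static void oneMoreWord() {
        COUNT.incrementAndGet();
    }
}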
This solution uses the java.util.concurrent package (Executors and Future) together with Java 8 streams for multithreading.
First, a Callable class is created to process an individual file:
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.stream.Collectors;

public class WordCounter implements Callable<Map<String, Long>> {
Path bookPath;
public WordCounter(Path bookPath) {
this.bookPath = bookPath;
}
@Override
public Map<String, Long> call() throws Exception {
Map<String, Long> wordCount = new HashMap<>();
wordCount = Files.lines(bookPath).flatMap(line -> Arrays.stream(line.trim().split(" ")).parallel())
.map(word -> word.replaceAll("[^a-zA-Z]", "").toLowerCase().trim())
.filter(word -> word.length() > 0)
.map(word -> new SimpleEntry<>(word, 1))
.collect(Collectors.groupingBy(SimpleEntry::getKey, Collectors.counting()));
return wordCount;
}
}
Now we'll create multiple future tasks to process each file, as below:
ExecutorService exes = Executors.newCachedThreadPool();
Map<String, Long> result = new HashMap<>();
Path[] books = new Path[2];
books[0] = Paths.get("C:\\Users\\Documents\\book1.txt");
books[1] = Paths.get("C:\\Users\\Documents\\book2.txt");
FutureTask<Map<String, Long>>[] tasks = new FutureTask[books.length];
for (int i = 0; i < books.length; i++) {
    tasks[i] = new FutureTask<>(new WordCounter(books[i]));
    exes.submit(tasks[i]);
}
for (int i = 0; i < books.length; i++) {
    try {
        Map<String, Long> wordCount = tasks[i].get();
        // merge this file's counts into the combined result
        wordCount.forEach((k, v) -> result.put(k, result.getOrDefault(k, 0L) + v));
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
}
exes.shutdown();
Alternatively, the result map could be replaced with a shared ConcurrentHashMap that the WordCounter tasks update directly.
End result: result.size() gives the number of distinct words across all files.
public class ThreadsDemo {
public static int n = 0;
private static final int NTHREADS = 300;
public static void main(String[] argv) throws InterruptedException {
final CountDownLatch cdl = new CountDownLatch(NTHREADS);
for (int i = 0; i < NTHREADS; i++) {
new Thread(new Runnable() {
public void run() {
// try {
// Thread.sleep(10);
// } catch (InterruptedException e) {
// e.printStackTrace();
// }
n += 1;
cdl.countDown();
}
}).start();
}
cdl.await();
System.out.println("fxxk, n is: " + n);
}
}
Why is the output "n is: 300"? n isn't explicitly synchronized. And if I uncomment the Thread.sleep, the output is "n is: 299" or less.
I changed your code this way:
private static final int NTHREADS = 300;
private static AtomicInteger n = new AtomicInteger();
public static void main(String[] argv) throws InterruptedException {
final CountDownLatch cdl = new CountDownLatch(NTHREADS);
for (int i = 0; i < NTHREADS; i++) {
new Thread(new Runnable() {
public void run() {
n.incrementAndGet();
cdl.countDown();
}
}).start();
}
cdl.await();
System.out.println("fxxk, n is: " + n);
}
You have to deal with race conditions. All 300 threads are modifying n concurrently. For example: if two threads read and increment n at the same time, both write back the same value.
That is the reason why n wasn't always 300: you lose one increment in such a situation, and it can happen zero or many times.
I changed n from int to AtomicInteger which is thread safe. Now everything works as expected.
You better use AtomicInteger.
This question will help you with description and example: Practical uses for AtomicInteger
A static synchronized method locks on the class, not on the object. If you need a static variable to be visible to all threads and not cached thread-locally, declare it volatile.
public class ThreadsDemo {
public static int n = 0;
private static final int NTHREADS = 30;
public static void main(String[] argv) throws InterruptedException {
final CountDownLatch cdl = new CountDownLatch(NTHREADS);
for (int i = 0; i < NTHREADS; i++) {
new Thread(new Runnable() {
public void run() {
for (int j = 0; j < 1000; j++) // run a long time duration
n += 1;
cdl.countDown();
}
}).start();
}
cdl.await();
System.out.println("fxxk, n is: " + n);
}
}
output "n is: 29953"
I think the reason is, the threads run a short time duration, and the jvm don't make a context switch.
Java static field will be synchronized among threads?
No. You should make it volatile or synchronize all access to it, depending on your usage patterns.
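For example, a small sketch of the two options (the class is illustrative, not from the question); volatile is enough for a flag that one thread writes and others read, while a counter needs synchronized access or an AtomicInteger:
class SharedState {
    // volatile guarantees visibility of the latest write, but not atomic read-modify-write
    static volatile boolean done = false;

    // for a counter, synchronize every access (or use AtomicInteger instead)
    private static int counter = 0;

    static synchronized void increment() {
        counter++;
    }

    static synchronized int current() {
        return counter;
    }
}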
There is a class Counter, which contains a set of keys and allows incrementing the value of each key and getting all values. So, the task I'm trying to solve is the same as in Atomically incrementing counters stored in ConcurrentHashMap. The difference is that the set of keys is unbounded, so new keys are added frequently.
In order to reduce memory consumption, I clear values after they are read; this happens in Counter.getAndClear(). Keys are also removed, and this seems to be what breaks things.
One thread increments random keys and another thread gets snapshots of all values and clears them.
The code is below:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.Map;
import java.util.HashMap;
import java.lang.Thread;
class HashMapTest {
private final static int hashMapInitSize = 170;
private final static int maxKeys = 100;
private final static int nIterations = 10_000_000;
private final static int sleepMs = 100;
private static class Counter {
private ConcurrentMap<String, Long> map;
public Counter() {
map = new ConcurrentHashMap<String, Long>(hashMapInitSize);
}
public void increment(String key) {
Long value;
do {
value = map.computeIfAbsent(key, k -> 0L);
} while (!map.replace(key, value, value + 1L));
}
public Map<String, Long> getAndClear() {
Map<String, Long> mapCopy = new HashMap<String, Long>();
for (String key : map.keySet()) {
Long removedValue = map.remove(key);
if (removedValue != null)
mapCopy.put(key, removedValue);
}
return mapCopy;
}
}
// The code below is used for testing
public static void main(String[] args) throws InterruptedException {
Counter counter = new Counter();
Thread thread = new Thread(new Runnable() {
public void run() {
for (int j = 0; j < nIterations; j++) {
int index = ThreadLocalRandom.current().nextInt(maxKeys);
counter.increment(Integer.toString(index));
}
}
}, "incrementThread");
Thread readerThread = new Thread(new Runnable() {
public void run() {
long sum = 0;
boolean isDone = false;
while (!isDone) {
try {
Thread.sleep(sleepMs);
}
catch (InterruptedException e) {
isDone = true;
}
Map<String, Long> map = counter.getAndClear();
for (Map.Entry<String, Long> entry : map.entrySet()) {
Long value = entry.getValue();
sum += value;
}
System.out.println("mapSize: " + map.size());
}
System.out.println("sum: " + sum);
System.out.println("expected: " + nIterations);
}
}, "readerThread");
thread.start();
readerThread.start();
thread.join();
readerThread.interrupt();
readerThread.join();
// Ensure that counter is empty
System.out.println("elements left in map: " + counter.getAndClear().size());
}
}
While testing I have noticed that some increments are lost. I get the following results:
sum: 9993354
expected: 10000000
elements left in map: 0
If you can't reproduce this error (that the sum is less than expected), you can try to increase maxKeys by a few orders of magnitude, decrease hashMapInitSize, or increase nIterations (the latter also increases run time). I have also included the testing code (main method) in case it has any errors.
I suspect that the error happens when the capacity of the ConcurrentHashMap is increased at runtime. On my computer the code appears to work correctly when hashMapInitSize is 170, but fails when hashMapInitSize is 171. I believe that a size of 171 triggers an increase of capacity (128 / 0.75 == 170.66, where 0.75 is the default load factor of the hash map).
So, the question is: am I using remove, replace and computeIfAbsent operations correctly? I assume that they are atomic operations on ConcurrentHashMap based on answers to Use of ConcurrentHashMap eliminates data-visibility troubles?. If so, why are some increments lost?
EDIT:
I think I missed an important detail here: increment() is expected to be called much more frequently than getAndClear(), which is why I try to avoid any explicit locking in increment(). However, I'm going to test the performance of different versions later to see if it is really an issue.
I guess the problem is the use of remove while iterating over the keySet. This is what the JavaDoc says for Map#keySet() (my emphasis):
Returns a Set view of the keys contained in this map. The set is backed by the map, so changes to the map are reflected in the set, and vice-versa. If the map is modified while an iteration over the set is in progress (except through the iterator's own remove operation), the results of the iteration are undefined.
The JavaDoc for ConcurrentHashMap gives further clues:
Similarly, Iterators, Spliterators and Enumerations return elements reflecting the state of the hash table at some point at or since the creation of the iterator/enumeration.
The conclusion is that mutating the map while iterating over the keys is not predictable.
One solution is to create a new map for the getAndClear() operation and just return the old map. The switch has to be protected, and in the example below I used a ReentrantReadWriteLock:
class HashMapTest {
private final static int hashMapInitSize = 170;
private final static int maxKeys = 100;
private final static int nIterations = 10_000_000;
private final static int sleepMs = 100;
private static class Counter {
private ConcurrentMap<String, Long> map;
ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
ReadLock readLock = lock.readLock();
WriteLock writeLock = lock.writeLock();
public Counter() {
map = new ConcurrentHashMap<>(hashMapInitSize);
}
public void increment(String key) {
readLock.lock();
try {
map.merge(key, 1L, Long::sum);
} finally {
readLock.unlock();
}
}
public Map<String, Long> getAndClear() {
ConcurrentMap<String, Long> oldMap;
writeLock.lock();
try {
oldMap = map;
map = new ConcurrentHashMap<>(hashMapInitSize);
} finally {
writeLock.unlock();
}
return oldMap;
}
}
// The code below is used for testing
public static void main(String[] args) throws InterruptedException {
final AtomicBoolean ready = new AtomicBoolean(false);
Counter counter = new Counter();
Thread thread = new Thread(new Runnable() {
public void run() {
for (int j = 0; j < nIterations; j++) {
int index = ThreadLocalRandom.current().nextInt(maxKeys);
counter.increment(Integer.toString(index));
}
}
}, "incrementThread");
Thread readerThread = new Thread(new Runnable() {
public void run() {
long sum = 0;
while (!ready.get()) {
try {
Thread.sleep(sleepMs);
} catch (InterruptedException e) {
//
}
Map<String, Long> map = counter.getAndClear();
for (Map.Entry<String, Long> entry : map.entrySet()) {
Long value = entry.getValue();
sum += value;
}
System.out.println("mapSize: " + map.size());
}
System.out.println("sum: " + sum);
System.out.println("expected: " + nIterations);
}
}, "readerThread");
thread.start();
readerThread.start();
thread.join();
ready.set(true);
readerThread.join();
// Ensure that counter is empty
System.out.println("elements left in map: " + counter.getAndClear().size());
}
}
I've made a class that counts words in given files within the same directory. Seeing as the files are very large, I've decided to count multiple files using multiple threads.
When running the DriverClass as specified below, it gets stuck at thread one.
What am I doing wrong? As I'm iterating over queue.take(), one would expect the parser to wait for something to retrieve and move on. Getting stuck at thread 1 makes me suspect an error when putting into the queue.
Thanks in advance!
DriverClass:
public class WordCountTest {
public static void main(String[] args){
if (args.length<1){
System.out.println("Please specify, atleast, one file");
}
BlockingQueue<Integer> threadQueue = new LinkedBlockingQueue<>();
Runnable r;
Thread t;
for (int i = 0; i<args.length; i++){
r = new WordCount(args[i], threadQueue);
t = new Thread(r);
t.start();
int total = 0;
for (int k = 0; k<args.length; k++){
try {
total += threadQueue.take();
} catch (InterruptedException e){
}
}
System.out.println("Total wordcount: " + total);
}
}
}
WordCountClass:
public class WordCount implements Runnable {
private int myId = 0;
private String _file;
private BlockingQueue<Integer> _queue;
private static int id = 0;
public WordCount(String file, BlockingQueue<Integer> queue){
_queue = queue;
_file = file;
myId = ++id;
}
@Override
public void run() {
System.out.println("Thread " + myId + " running");
try {
_queue.put(countWord(_file));
} catch (InterruptedException e){
}
}
public int countWord(String file){
int count = 0;
try {
Scanner in = new Scanner(new FileReader(file));
while (in.hasNext()){
count++;
in.next();
}
} catch (IOException e){
System.out.println("File," + file + ",not found");
}
return count;
}
}
The problem is that you're using a nested loop, when you should be using two separate loops: one to start the WordCounts, another to collect the results, something like
public class WordCountTest {
public static void main(String[] args) throws InterruptedException {
BlockingQueue<Integer> threadQueue = new LinkedBlockingQueue<>();
ExecutorService executor = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
CountDownLatch latch = new CountDownLatch(args.length);
for (int i = 0; i<args.length; i++){
CompletableFuture.runAsync(new WordCount(args[i], threadQueue), executor)
.thenRunAsync(latch::countDown, executor);
}
latch.await();
int sum = 0;
for(Integer i : threadQueue) {
sum += i;
}
System.out.println("Total wordcount: " + sum);
executor.shutdown();
}
}
Or however you want to implement it, the point being that you shouldn't start collecting results until all of the WordCounts have started.
You are waiting for all the results after only the first thread is started. Perhaps you intended to wait for the results after all the threads have started.
Note: if you create more threads than you have CPUs, it's likely to be slower. I suggest using a fixed thread pool instead.
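For instance, keeping the poster's WordCount class and BlockingQueue unchanged, the two-separate-loops version with a fixed pool might look roughly like this (the pool size is just an example):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class WordCountTest {
    public static void main(String[] args) throws InterruptedException {
        if (args.length < 1) {
            System.out.println("Please specify, at least, one file");
            return;
        }
        BlockingQueue<Integer> threadQueue = new LinkedBlockingQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        // First loop: submit one counting task per file.
        for (String file : args) {
            pool.execute(new WordCount(file, threadQueue));
        }
        // Second loop: take exactly one result per file, blocking until each arrives.
        int total = 0;
        for (int k = 0; k < args.length; k++) {
            total += threadQueue.take();
        }
        pool.shutdown();
        System.out.println("Total wordcount: " + total);
    }
}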
I am trying to generate all permutations of some String in parallel using the algorithm from here
(the difference is that my code also handles Strings containing repeated characters). I am using a SynchronousQueue for thread synchronization. A Generator generates a permutation and a Printer takes it and prints it. My code:
import java.util.ArrayList;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.SynchronousQueue;
public class PermBlockingQueue {
public static int printers = 4;
public static String str = "ABCD";
public static class Generator implements Runnable {
private final BlockingQueue q;
private final String pref;
public Generator(BlockingQueue q, String pref) {
this.q = q;
this.pref = pref;
}
public void permutations(String pref, String str) {
int n = str.length();
if (n == 0) {
try {
q.put(pref);
} catch (InterruptedException ex) { }
} else {
for (int i = 0; i < str.length(); i++) {
if (str.indexOf(str.charAt(i), i + 1) != -1) {
continue;
}
permutations(pref + str.charAt(i), str.substring(0, i) + str.substring(i + 1));
}
}
}
@Override
public void run() {
int k = str.indexOf(pref);
permutations(pref, str.substring(0, k) + str.substring(k + 1));
//Sending messages for printers to quit
try {
for (int x = 0; x < printers; x++) {
q.put("");
}
} catch (InterruptedException ex) {
}
}
}
public static class Printer implements Runnable {
private final BlockingQueue q;
public Printer(BlockingQueue q) {
this.q = q;
}
#Override
public void run() {
try {
String permut;
while (!((permut = (String) q.take()).equals(""))) {
System.out.println(permut);
}
} catch (InterruptedException ex) {
}
}
}
public static void main(String[] args) throws InterruptedException {
ArrayList<Thread> threads = new ArrayList<>();
BlockingQueue q = new SynchronousQueue();
Thread th1 = new Thread(new Generator(q, "A"));
Thread th2 = new Thread(new Generator(q, "B"));
Thread th3 = new Thread(new Generator(q, "C"));
Thread th4 = new Thread(new Generator(q, "D"));
threads.add(th1);
threads.add(th2);
threads.add(th3);
threads.add(th4);
for (int i = 0; i < printers; i++) {
threads.add(new Thread(new Printer(q)));
}
for (Thread th : threads.toArray(new Thread[threads.size()])) {
th.start();
}
for (Thread th : threads.toArray(new Thread[threads.size()])) {
th.join();
}
}
}
In a sequential program the method permutations(String pref, String str) works as expected. As far as synchronization goes, there are no problems as long as I am not using a recursive method. In this program, however, I am using a recursive method and it deadlocks. I am guessing that the Generator blocks itself because multiple values are produced per call, and furthermore nothing gets printed. Also, if I put a print statement in the permutations method before the return statement,
if (n == 0) {
System.out.println(pref);
return pref;
}
I get the expected output, which in case of "ABCD" is (4! = 24 permutations):
ABCD
ABDC
ACBD
....
DBAC
DBCA
DCAB
DCBA
Of course these values are not printed by the Printers, which are waiting for input from the Generators, and the program is deadlocked.
So what exactly happens with the recursive method here? And most importantly, how do I approach this kind of problem?
Thank you.
Edit: I've changed my code a little bit, considering the fact that
q.put(permutations(pref, str.substring(0, k) + str.substring(k + 1)));
didn't really make much sense (correct me if I'm wrong). This is because, in my mind, it creates an ambiguity as to which permutation should be put into the SynchronousQueue, as a single call to permutations yields more than one permutation (at least in my example).
Now I'm putting permutations into the queue inside the permutations method, whereas in the Generators' run() I only call permutations. This gives better results: instead of no permutations being printed, some are printed. This 'some' varies with each execution and the program still deadlocks.
This program in Java creates a list of 15 numbers and creates 3 threads to search for the maximum in a given interval. I want to create another thread that takes those 3 numbers and gets the overall maximum, but I don't know how to get those values into the other thread.
public class apple implements Runnable{
String name;
int time, number, first, last, maximum;
int[] array = {12, 32, 54, 64, 656, 756, 765, 43, 34, 54, 5, 45, 6, 5, 65};
public apple(String s, int f, int l){
name = s;
first = f;
last = l;
maximum = array[0];
}
public void run(){
try{
for(int i = first; i < last; i++ )
{
if(maximum < array[i])
{
maximum = array[i];
}
}
System.out.println("Thread"+ name + "maximum = " + maximum);
}catch(Exception e){}
}
public static void main(String[] args){
Thread t1 = new Thread(new apple("1 ", 0, 5));
Thread t2 = new Thread(new apple("2 ", 5, 10 ));
Thread t3 = new Thread(new apple("3 ", 10, 15));
try{
t1.start();
t2.start();
t3.start();
}catch(Exception e){}
}
}
Here is how ExecutorService and ExecutorCompletionService can solve it:
public class MaxFinder {
private int[] values;
private int threadsCount;
public MaxFinder(int[] values, int threadsCount) {
this.values = values;
this.threadsCount = threadsCount;
}
public int find() throws InterruptedException {
ExecutorService executor = Executors.newFixedThreadPool(threadsCount);
ExecutorCompletionService<Integer> cs = new ExecutorCompletionService<Integer>(executor);
// Split the work
int perThread = values.length / threadsCount;
int from = 0;
for(int i = 0; i < threadsCount - 1; i++) {
cs.submit(new Worker(from, from + perThread));
from += perThread;
}
cs.submit(new Worker(from,values.length));
// Start collecting results as they arrive
int globalMax = values[0];
try {
for(int i = 0; i < threadsCount; i++){
int v = cs.take().get();
if (v > globalMax)
globalMax = v;
}
} catch (ExecutionException e) {
throw new RuntimeException(e);
}
executor.shutdown();
return globalMax;
}
private class Worker implements Callable<Integer> {
private int fromIndex;
private int toIndex;
public Worker(int fromIndex, int toIndex) {
this.fromIndex = fromIndex;
this.toIndex = toIndex;
}
@Override
public Integer call() {
int max = values[fromIndex]; // start from the first value in this worker's range
for(int i = fromIndex; i<toIndex; i++){
if (values[i] > max)
max = values[i];
}
return max;
}
}
}
In this solution, N threads work concurrently, each on its own portion of the array. The caller thread is responsible for gathering the local maximums as they arrive and finding the global maximum. This solution uses some non-trivial concurrency tools from the java.util.concurrent package.
If you prefer a solution that only uses primitive synchronization tools, you should use a synchronized block in the worker threads that sets the maximum in some data member and then notifies the collector thread. The collector thread should sit in a loop, waiting for notification, examining the new number, and updating the global maximum if needed. This "consumer producer" model requires careful synchronization.
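A rough sketch of that wait/notify variant (the class and method names are invented for illustration):
class MaxCollector {
    private int globalMax = Integer.MIN_VALUE;
    private int remainingWorkers;

    MaxCollector(int workers) {
        this.remainingWorkers = workers;
    }

    // Each worker reports its local maximum exactly once.
    synchronized void reportLocalMax(int localMax) {
        if (localMax > globalMax) {
            globalMax = localMax;
        }
        remainingWorkers--;
        notifyAll();    // let the collecting thread re-check its condition
    }

    // The collecting thread waits until every worker has reported, then returns the result.
    synchronized int awaitGlobalMax() throws InterruptedException {
        while (remainingWorkers > 0) {
            wait();
        }
        return globalMax;
    }
}
Each worker would call reportLocalMax(...) at the end of its run(), and the collecting thread would call awaitGlobalMax().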
Based on the code you have, the simplest solution is to keep a reference to each apple instance, join the main thread to each worker thread, and then read the maximum from each instance for comparison purposes. Like so:
apple a1 = new apple("1 ", 0, 5);
apple a2 = new apple("2 ", 5, 10);
apple a3 = new apple("3 ", 10, 15);
Thread t1 = new Thread(a1);
Thread t2 = new Thread(a2);
Thread t3 = new Thread(a3);
int globalMax;
try{
t1.start();
t2.start();
t3.start();
t1.join();
globalMax = a1.maximum;
t2.join();
if (a2.maximum > globalMax) {
globalMax = a2.maximum;
}
t3.join();
if (a3.maximum > globalMax) {
globalMax = a3.maximum;
}
} catch(Exception e){
}
Instead of implementing Runnable, try implementing Callable, which is capable of returning a result. The tutorial given here is a good source for describing how to do this.
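A hedged sketch of the Callable version for this particular problem (the class names are made up):
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class RangeMax implements Callable<Integer> {
    private final int[] array;
    private final int first, last;

    RangeMax(int[] array, int first, int last) {
        this.array = array;
        this.first = first;
        this.last = last;
    }

    @Override
    public Integer call() {
        int max = array[first];
        for (int i = first; i < last; i++) {
            if (array[i] > max) {
                max = array[i];
            }
        }
        return max;   // returned to the Future instead of printed
    }
}

class RangeMaxDemo {
    public static void main(String[] args) throws Exception {
        int[] array = {12, 32, 54, 64, 656, 756, 765, 43, 34, 54, 5, 45, 6, 5, 65};
        ExecutorService pool = Executors.newFixedThreadPool(3);
        Future<Integer> f1 = pool.submit(new RangeMax(array, 0, 5));
        Future<Integer> f2 = pool.submit(new RangeMax(array, 5, 10));
        Future<Integer> f3 = pool.submit(new RangeMax(array, 10, 15));
        // get() blocks until each partial maximum is available
        int globalMax = Math.max(f1.get(), Math.max(f2.get(), f3.get()));
        pool.shutdown();
        System.out.println("Global maximum = " + globalMax);
    }
}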
Another approach to your problem could be to create an object with which each apple instance (not sure why you've called it this) could register its maximum. This new class could be passed into each apple constructor, and the apple could then call a method on it, passing in its own maximum.
For instance:
public class MaximumOfMaximumsFinder implements Runnable {
private List<Integer> maximums = new ArrayList<Integer>();
public void registerSingleMaximum(Integer max) {
maximums.add(max);
}
public void run() {
// use similar logic to find the maximum
}
}
There are several issues around making sure this is coordinated with the other threads; I'll leave those to you, since there are some interesting things to think about.