Parallel Implementation of DFS for undirected graph - java

I have been trying to implement a parallel depth-first search in Java for an undirected graph. I wrote the code below, but it doesn't work properly: it doesn't speed up.
Main method:
package dfsearch_v2;
import java.util.Calendar;
import java.util.Stack;
import java.util.Random;
public class DFSearch_v2 {
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
long ts_b, ts_e;
int el_count=100;
int thread_count = 4;
int vertices[][] = new int[el_count][el_count]; // graph matrix
boolean isVisited[] = new boolean[el_count];
for(int i=0;i<el_count;i++){
for(int j=0;j<el_count;j++){
Random boolNumber = new Random();
boolean edge = boolNumber.nextBoolean();
vertices[i][j]=edge ? 1 : 0;
}
}
DFSTest r[] = new DFSTest[thread_count];
ts_b = Calendar.getInstance().getTimeInMillis();
for(int i = 0; i < thread_count; i++) {
r[i] = new DFSTest(el_count,vertices,isVisited);
r[i].start();
}
for(int i = 0; i < thread_count; i++) {
try {
r[i].join();
} catch (InterruptedException e) {
}
}
ts_e = Calendar.getInstance().getTimeInMillis();
System.out.println("Time "+(ts_e-ts_b));
}
}
Thread implementation:
package dfsearch_v2;
import java.util.Stack;
public class DFSTest extends Thread {
int numberOfNodes;
int adj[][];
boolean isVisit[];
public DFSTest(int numberOfNodes, int adj[][],boolean isVisit[]){
this.numberOfNodes = numberOfNodes;
this.adj=adj;
this.isVisit=isVisit;
}
public void run()
{
int k,i,s=0;
Stack<Integer> st = new Stack<>();
for(k=0; k < numberOfNodes; k++) isVisit[k]=false;
for (k = numberOfNodes - 1; k >= 0; k--) {
st.push(k);
}
DFSearch(st, isVisit);
}
private void DFSearch(Stack<Integer> st,boolean isVisit[]){
synchronized(isVisit){
int i,k;
while (!st.empty()) {
k=st.pop();
if (!isVisit[k]) {
isVisit[k] = true;
System.out.println("Node "+k+" is visit");
for(i=numberOfNodes-1; i>=0; i--)
if(adj[k][i]==1) st.push(i);
}
}
}
}
}
Could anybody please help me? I am really new to parallel programming.
Thanks

If I understand your program correctly, you are locking on the isVisit array which is shared between all threads - this means that you're not going to get any speedup because only one thread is able to make progress. Try using a ConcurrentHashMap or ConcurrentSkipListMap instead.
// shared between all threads
ConcurrentMap<Integer, Boolean> map = new ConcurrentHashMap<>();
public boolean isVisit(Integer integer) {
return map.putIfAbsent(integer, Boolean.TRUE) != null;
}
private void DFSearch(Stack<Integer> st) {
if(!isVisit(st.pop())) {
...
}
}
The concurrent maps use sharding to increase parallelism. Use the putIfAbsent method in isVisit to avoid a data race (you only want the method to return false for one thread).
As for how to divide the work up among multiple threads, use a ConcurrentLinkedQueue of worker threads. When a thread has no more work to perform, it adds itself to the worker thread queue. When a thread has two edges to traverse, it polls the worker thread queue for an available worker thread, and if one is available it assigns one of the edges to the worker thread. When all threads are on the available thread queue then you've traversed the entire list.
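A rough sketch of that hand-off scheme (every name here is made up for illustration, and termination detection, i.e. stopping once all workers sit on the idle queue, is left out):
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.LockSupport;

class Worker extends Thread {
    // nodes this worker still has to expand; a concurrent deque so an idle
    // worker can safely be handed an edge while it is parked
    final ConcurrentLinkedDeque<Integer> work = new ConcurrentLinkedDeque<>();
    final ConcurrentLinkedQueue<Worker> idleWorkers; // shared pool of parked workers
    final ConcurrentMap<Integer, Boolean> visited;   // shared visited set
    final int[][] adj;

    Worker(ConcurrentLinkedQueue<Worker> idleWorkers,
           ConcurrentMap<Integer, Boolean> visited, int[][] adj) {
        this.idleWorkers = idleWorkers;
        this.visited = visited;
        this.adj = adj;
    }

    // true only for the single thread that claims v first
    boolean claim(int v) {
        return visited.putIfAbsent(v, Boolean.TRUE) == null;
    }

    public void run() {
        while (true) { // termination check (all workers idle) omitted
            Integer v = work.pollFirst();
            if (v == null) {          // no local work: park on the idle queue
                idleWorkers.add(this);
                LockSupport.park();
                continue;
            }
            if (!claim(v)) continue;
            for (int u = 0; u < adj[v].length; u++) {
                if (adj[v][u] != 1 || visited.containsKey(u)) continue;
                Worker helper = idleWorkers.poll(); // is anyone idle?
                if (helper != null) {
                    helper.work.addFirst(u);        // hand this edge over
                    LockSupport.unpark(helper);
                } else {
                    work.addFirst(u);               // otherwise keep it ourselves
                }
            }
        }
    }
}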

You shouldn't need to synchronize on isVisit, which is what is destroying your parallelism. Multiple readers/multiple writers for a Boolean array should be quite safe.
If at all possible, you should avoid dependencies between threads. To this end, don't use a shared stack (if this is what your code is doing -- it's unclear).
In your case, the amount of work done per vertex is tiny, so it makes sense to batch up work in each thread and only consider handing work on to other threads once some backlog threshold is reached.
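A minimal sketch of that batching idea (the threshold value and the shared deque are illustrative; the visit logic itself is elided):
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ConcurrentLinkedDeque;

class BatchingWorker implements Runnable {
    static final int BACKLOG_THRESHOLD = 64;        // tune experimentally
    final ConcurrentLinkedDeque<Integer> shared;     // work other threads may take
    final Deque<Integer> local = new ArrayDeque<>(); // private, contention-free

    BatchingWorker(ConcurrentLinkedDeque<Integer> shared) {
        this.shared = shared;
    }

    void pushWork(int node) {
        if (local.size() < BACKLOG_THRESHOLD) {
            local.push(node);   // cheap: no other thread touches this stack
        } else {
            shared.push(node);  // only surplus work is handed to other threads
        }
    }

    public void run() {
        Integer node;
        while ((node = local.poll()) != null || (node = shared.poll()) != null) {
            // visit node here, then call pushWork(...) for each unvisited neighbour
        }
    }
}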

I changed the approach a little. Now it uses one global stack which is shared by all the threads and n local stacks where n is the number of threads. Each thread stores the nodes of its sub-tree in its local stack. Initially the global stack contains the root of the tree and only one thread gets access to it while the other threads are waiting to be woken up by the working thread. The working thread retrieves and processes the root from the global stack, adds one successor to its local stack then pushes the rest of the successors, if they exist, to the global stack to be processed by other threads and wakes up all the waiting threads. All the other threads follow the same approach (i.e. when threads get a node from the global stack they push one successor to their local stack and the rest to the global stack then start accessing their local stack until it becomes empty).
Yet it doesn't speed up. I'd be thankful for any further ideas.
Main method:
package dfsearch_v2;
import java.util.Calendar;
import java.util.Random;
public class DFSearch_v2 {
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
// TODO code application logic here
long ts_b, ts_e;
//number of nodes
int el_count=400;
int thread_count = 8;
int gCounter=0;
int vertices[][] = new int[el_count][el_count]; // graph matrix
boolean isVisited[] = new boolean[el_count];
for(int i=0;i<el_count;i++){
for(int j=0;j<el_count;j++){
Random boolNumber = new Random();
boolean edge = boolNumber.nextBoolean();
vertices[i][j]=edge ? 1 : 0;
}
}
DFSearch2 r[] = new DFSearch2[thread_count];
ts_b = Calendar.getInstance().getTimeInMillis();
for(int i = 0; i < thread_count; i++) {
r[i] = new DFSearch2(el_count,vertices,isVisited,gCounter);
r[i].start();
}
for(int i = 0; i < thread_count; i++) {
try {
r[i].join();
} catch (InterruptedException e) {
}
}
ts_e = Calendar.getInstance().getTimeInMillis();
System.out.println("Time "+(ts_e-ts_b));
}
}
Thread implementation:
package dfsearch_v2;
import java.util.Stack;
public class DFSearch2 extends Thread{
private boolean isVisit[];
private final Stack<Integer> globalStack;
int numberOfNodes;
//traversal is done ?
boolean isDone;
int adj[][];
// count visited nodes
int gCounter;
public DFSearch2(int number_Nodes,int adj[][],boolean isVisit[],int gCounter){
this.numberOfNodes=number_Nodes;
this.isVisit = isVisit;
this.globalStack = new Stack<>();
this.isDone=false;
this.adj=adj;
this.gCounter=gCounter;
this.globalStack.push(number_Nodes-1);
}
public void run(){
// local stack
Stack<Integer> localStack = new Stack<>();
while (!isDone) {
int k;
synchronized(globalStack){
k = globalStack.pop();
//pop until k is not visited
while (isVisit[k]) {
if(globalStack.empty()) {
isDone=true;
return;
}else{
k=globalStack.pop();
}
}
}
// traverse sub-graph with start node k
DFSearchNode(localStack,k);
yield();
if(globalStack.empty()) {
isDone = true;
}
// if gCounter is less than the number of nodes, unvisited nodes are pushed into globalStack
if(isDone&&gCounter<numberOfNodes){
isDone=false;
//unvisited nodes are pushed in globalStack
for (int i = 0; i < isVisit.length; i++) {
if (!isVisit[i]) {
globalStack.push(i);
}
}
}
}
}
synchronized private void DFSearchNode(Stack<Integer> localStack, int k){
localStack.push(k);
while (!localStack.empty()) {
int s=localStack.pop();
if (!isVisit[s]) {
isVisit[s] = true;
gCounter++;
//System.out.println("Node "+s+" is visit");
//the first successor is pushed onto localStack and the others onto globalStack
boolean flag = true; // local or global stack (true -> local; false ->global )
for(int i=numberOfNodes-1; i>=0; i--)
{
//
if(i==s) continue;
//push the other successors onto the global stack
if(adj[s][i]==1&&!flag&&!isVisit[i]){//visited successors are not pushed onto globalStack
globalStack.push(i);
}
//push the first successor onto the local stack
if(adj[s][i]==1&&flag&&!isVisit[i]) //visited successors are not pushed onto localStack
{
localStack.push(i);
flag=false; //only first element is pushed into localStack
}
}
}
}
}
}

Related

Why are my threads not synchronizing?

I am trying to get a grasp on synchronizing threads, but I don't understand the problem I'm encountering.
Can someone please help me diagnose this or, even better, explain how I can diagnose this for myself?
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CyclicBarrier;
public class Controller {
public static void main(String[] args) {
int numThreads = 0;
List<Thread> threads = new ArrayList<>();
if (args.length > 0) {
numThreads = Integer.parseInt(args[0]);
}
else {
System.out.println("No arguments");
System.exit(1);
}
CyclicBarrier barrier = new CyclicBarrier(numThreads);
int arr[][] = new int[10][10];
for (int i = 0; i < numThreads; i++) {
Thread newThread = new Thread(new ThreadableClass(barrier, arr));
threads.add(newThread);
}
for (Thread thread : threads) {
thread.start();
}
}
}
There is a main method (above) which accepts the number of threads I want as a command line argument. And there is a work-flow (below) which I am aiming to have increment all elements in a 2D array and print the array before the next thread has its chance to do the same:
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
public class ThreadableClass implements Runnable {
private CyclicBarrier barrier;
private int arr[][];
public ThreadableClass(CyclicBarrier barrier, int[][] arr) {
this.barrier = barrier;
this.arr = arr;
}
@Override
public void run() {
long threadId = Thread.currentThread().getId();
System.out.println(threadId + " Starting");
for (int i = 0; i < 10; i++) {
changeArray();
try {
barrier.await();
} catch (InterruptedException | BrokenBarrierException e) {
e.printStackTrace();
}
}
}
private synchronized void changeArray() {
for (int i = 0; i < arr.length; i++) {
for (int j = 0; j < arr.length; j++) {
arr[i][j]++;
}
}
printArray();
}
private synchronized void printArray() {
System.out.println(Thread.currentThread().getId() + " is printing: ");
for (int i = 0; i < arr.length; i++) {
for (int j = 0; j < arr.length; j++) {
System.out.print(arr[i][j] + " ");
}
System.out.println();
}
}
}
Imagining the size of the array is 2x2, the expected output would look something like this:
1 1
1 1
2 2
2 2
3 3
3 3
4 4
4 4
...
...
(10 * numThreads)-1 (10 * numThreads)-1
(10 * numThreads)-1 (10 * numThreads)-1
(10 * numThreads) (10 * numThreads)
(10 * numThreads) (10 * numThreads)
Instead, all threads increment the array, and begin printing over one another.
There is nothing surprising about the result. You create n threads. You tell all threads to start. Each thread's run() starts with:
long threadId = Thread.currentThread().getId();
System.out.println(threadId + " Starting");
...changeArray();
going to change that shared array. After writing to the array, you try to sync (on that barrier). It's too late then!
The point is: you have 10 different ThreadableClass instances. Each one is operating on its own! The synchronized keyword ... simply doesn't provide any protection here!
Because: synchronized prevents two different threads calling the same method on the same object. But when you have multiple objects, and your threads are calling that method on those different objects, then there is no locking! What your code does boils down to:
threadA to call changeArray() .. on itself
threadB to call changeArray() .. on itself
threadC to call changeArray() .. on itself
...
In other words: you give n threads access to that shared array. But then you allow those n threads to enter changeArray() at the same time.
One simple fix; change
private synchronized void changeArray() {
to
private void changeArray() {
synchronized(arr) {
In other words: make sure that the n threads have to lock on the same monitor; in that case the shared array.
Alternatively: instead of making changeArray() a method in that ThreadableClass ... create a class
ArrayUpdater {
int arr[] to update
synchronized changeArray() ...
Then create one instance of that class and give that same instance to each of your threads. Now the synchronized method will prevent multiple threads from entering at the same time!
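A fleshed-out version of that sketch might look like this (the names follow the pseudo-code above; an illustration rather than tested code):
class ArrayUpdater {
    private final int[][] arr;

    ArrayUpdater(int[][] arr) {
        this.arr = arr;
    }

    // one monitor (this single shared instance) for all threads
    synchronized void changeArray() {
        for (int i = 0; i < arr.length; i++) {
            for (int j = 0; j < arr[i].length; j++) {
                arr[i][j]++;
            }
        }
        printArray();
    }

    private void printArray() {
        System.out.println(Thread.currentThread().getId() + " is printing: ");
        for (int[] row : arr) {
            for (int v : row) {
                System.out.print(v + " ");
            }
            System.out.println();
        }
    }
}
Create one ArrayUpdater, pass that same instance to every ThreadableClass, and have run() call its changeArray(); now all threads contend on the same monitor.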
Because you are providing a new instance for each thread using new ThreadableClass(barrier, arr), all the threads are using different ThreadableClass objects, so your synchronized methods run in parallel. You need to use a single ThreadableClass object, as shown below:
ThreadableClass threadableClass= new ThreadableClass(barrier, arr);
for (int i = 0; i < numThreads; i++) {
Thread newThread = new Thread(threadableClass);
threads.add(newThread);
}
The important point is synchronization is all about providing access (i.e., key) to an object for a single thread at a time. If you are using a different object for each thread, threads don't wait for the key because each thread has got its own key (like in your example).

java.util.NoSuchElementException when run with Semaphore

I have a Queue containing 10 elements, and I start 100 threads of which 6 may run concurrently, controlled by a Semaphore. When each thread runs, it takes the head element then adds it to the tail. But sometimes I get this exception:
java.util.NoSuchElementException
at java.util.LinkedList.removeFirst(LinkedList.java:270)
at java.util.LinkedList.remove(LinkedList.java:685)
at IBM.SemApp$1.run(SemApp.java:27)
at java.lang.Thread.run(Thread.java:745)
import java.util.LinkedList;
import java.util.Queue;
import java.util.Random;
import java.util.concurrent.Semaphore;
public class SemApp {
public static void main(String[] args) {
Queue queueB = new LinkedList<>();
for (int i = 0; i < 10; i++) {
queueB.add("Object " + i);
}
Runnable limitedCall = new Runnable() {
final Random rand = new Random();
final Semaphore available = new Semaphore(6);
int count = 0;
public void run() {
int time = rand.nextInt(15);
try {
available.acquire();
String A = (String) queueB.remove();
queueB.add(A);
available.release();
count++;
System.out.println(count);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
for (int i = 0; i < 100; i++) {
new Thread(limitedCall).start();
}
}
}
What am I doing wrong?
The problem is that LinkedList is not a thread-safe structure.
Therefore, it should not be shared and modified by multiple concurrent threads as the changes on queueB might not be properly "communicated" to other threads.
Try using a LinkedBlockingQueue instead.
Also, use an AtomicLong for count for the same reason: it is shared in between several threads and you want to avoid race conditions.
The fact that up to six threads may be operating on the queue concurrently means that modifications are not safe.
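Putting both suggestions together, a minimal adjusted version might look roughly like this (the class name is just a placeholder):
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicLong;

public class SemAppSafe {
    public static void main(String[] args) {
        // thread-safe queue and counter replace LinkedList and the plain int
        LinkedBlockingQueue<String> queueB = new LinkedBlockingQueue<>();
        for (int i = 0; i < 10; i++) {
            queueB.add("Object " + i);
        }
        AtomicLong count = new AtomicLong();
        Semaphore available = new Semaphore(6);

        Runnable limitedCall = () -> {
            try {
                available.acquire();
                String head = queueB.take(); // blocks instead of throwing if briefly empty
                queueB.add(head);
                available.release();
                System.out.println(count.incrementAndGet());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        for (int i = 0; i < 100; i++) {
            new Thread(limitedCall).start();
        }
    }
}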

Is my code in a state of deadlock?

When I run my code below it seems to be in a state of deadlock, and I don't know how I can fix it. I am attempting to write a pipeline as a sequence of threads linked together as a buffer, and each thread can read the preceding node in the pipeline and consequently write to the next one. The overall goal is to split a randomly generated ArrayList of data over 10 threads and sort it.
class Buffer{
// x is the current node
private int x;
private boolean item;
private Lock lock = new ReentrantLock();
private Condition full = lock.newCondition();
private Condition empty = lock.newCondition();
public Buffer(){item = false;}
public int read(){
lock.lock();
try{
while(!item)
try{full.await();}
catch(InterruptedException e){}
item = false;
empty.signal();
return x;
}finally{lock.unlock();}
}
public void write(int k){
lock.lock();
try{
while(item)
try{empty.await();}
catch(InterruptedException e){}
x = k; item = true;
full.signal();
}finally{lock.unlock();}
}
}
class Pipeline extends Thread {
private Buffer b;
//private Sorted s;
private ArrayList<Integer> pipe; // array pipeline
private int ub; // upper bounds
private int lb; // lower bounds
public Pipeline(Buffer bf, ArrayList<Integer> p, int u, int l) {
pipe = p;ub = u;lb = l;b = bf;//s = ss;
}
public void run() {
while(lb < ub) {
if(b.read() > pipe.get(lb+1)) {
b.write(pipe.get(lb+1));
}
lb++;
}
if(lb == ub) {
// store sorted array segment
Collections.sort(pipe);
new Sorted(pipe, this.lb, this.ub);
}
}
}
class Sorted {
private volatile ArrayList<Integer> shared;
private int ub;
private int lb;
public Sorted(ArrayList<Integer> s, int u, int l) {
ub = u;lb = l;shared = s;
// merge data to array from given bounds
}
}
class Test1 {
public static void main(String[] args) {
int N = 1000000;
ArrayList<Integer> list = new ArrayList<Integer>();
for(int i=0;i<N;i++) {
int k = (int)(Math.random()*N);
list.add(k);
}
// write to buffer
Buffer b = new Buffer();
b.write(list.get(0));
//Sorted s = new Sorted();
int maxBuffer = 10;
int index[] = new int[maxBuffer+1];
Thread workers[] = new Pipeline[maxBuffer];
// Distribute data evenly over threads
for(int i=0;i<maxBuffer;i++)
index[i] = (i*N) / maxBuffer;
for(int i=0;i<maxBuffer;i++) {
// create instance of Pipeline
workers[i] = new Pipeline(b,list,index[i],index[i+1]);
workers[i].start();
}
// join threads
try {
for(int i=0;i<maxBuffer;i++) {
workers[i].join();
}
} catch(InterruptedException e) {}
boolean sorted = true;
System.out.println();
for(int i=0;i<list.size()-1;i++) {
if(list.get(i) > list.get(i+1)) {
sorted = false;
}
}
System.out.println(sorted);
}
}
When you start the run methods, all threads will block until the first thread hits full.await(). Then, one after the other, all threads will end up hitting full.await(). They will wait for this signal.
However the only place where full.signal occurs is after one of the read methods finishes.
As this code is never reached (because the signal is never fired) you end up with all threads waiting.
In short, only after one read finishes will the writes trigger.
If you reverse the logic (start empty, write to the buffer with the corresponding signal, and then let the threads read), I expect it will work.
Generally speaking, you want to write to a pipeline before reading from it (or there's nothing to read).
I hope I'm not misreading your code, but that's what I see on a first scan.
Your Buffer class is flipping between read and write mode. Each read must be followed by a write, that by a read, and so on.
You write the buffer initially in your main method.
Now one of your threads reaches if(b.read() > pipe.get(lb+1)) { in Pipeline#run. If that condition evaluates to false, then nothing gets written. And since every other thread must still be sitting at that very same if(b.read()), you end up with reading threads that can't make progress. You will either have to write in the else branch or allow multiple reads.
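For instance, the first option could look roughly like this inside Pipeline#run (a sketch only):
int value = b.read();
if (value > pipe.get(lb + 1)) {
    b.write(pipe.get(lb + 1));
} else {
    b.write(value); // write something back so the buffer flips to "full" again and readers can continue
}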

Java lock/concurrency issue when searching array with multiple threads

I am new to Java and trying to write a method that finds the maximum value in a 2D array of longs.
The method searches through each row in a separate thread, and the threads maintain a shared current maximal value. Whenever a thread finds a value larger than its own local maximum, it compares this value with the shared maximum and updates its current local maximum and possibly the shared maximum as appropriate. I need to make sure that appropriate synchronization is implemented so that the result is correct regardless of how the computations interleave.
My code is verbose and messy, but for starters, I have this function:
static long sharedMaxOf2DArray(long[][] arr, int r){
MyRunnableShared[] myRunnables = new MyRunnableShared[r];
for(int row = 0; row < r; row++){
MyRunnableShared rr = new MyRunnableShared(arr, row, r);
Thread t = new Thread(rr);
t.start();
myRunnables[row] = rr;
}
return myRunnables[0].sharedMax; //should be the same as any other one (?)
}
For the adapted runnable, I have this:
public static class MyRunnableShared implements Runnable{
long[][] theArray;
private int row;
private long rowMax;
public long localMax;
public long sharedMax;
private static Lock sharedMaxLock = new ReentrantLock();
MyRunnableShared(long[][] a, int r, int rm){
theArray = a;
row = r;
rowMax = rm;
}
public void run(){
localMax = 0;
for(int i = 0; i < rowMax; i++){
if(theArray[row][i] > localMax){
localMax = theArray[row][i];
sharedMaxLock.lock();
try{
if(localMax > sharedMax)
sharedMax = localMax;
}
finally{
sharedMaxLock.unlock();
}
}
}
}
}
I thought this use of a lock would be a safe way to prevent multiple threads from messing with the sharedMax at a time, but upon testing/comparing with a non-concurrent maximum-finding function on the same input, I found the results to be incorrect. I'm thinking the problem might come from the fact that I just say
...
t.start();
myRunnables[row] = rr;
...
in the sharedMaxOf2DArray function. Perhaps a given thread needs to finish before I put it in the array of myRunnables; otherwise, I will have "captured" the wrong sharedMax? Or is it something else? I'm not sure about the timing of things.
I'm not sure if this is a typo or not, but your Runnable implementation declares sharedMax as an instance variable:
public long sharedMax;
rather than a shared one:
public static long sharedMax;
In the former case, each Runnable gets its own copy and will not "see" the values of others. Changing it to the latter should help. Or, change it to:
public long[] sharedMax; // array of size 1 shared across all threads
and you can now create an array of size one outside the loop and pass it in to each Runnable to use as shared storage.
As an aside: please note that there will be tremendous lock contention since every thread checks the common sharedMax value by holding a lock for every iteration of its loop. This will likely lead to poor performance. You'd have to measure, but I'd surmise that letting each thread find the row maximum and then running a final pass to find the "max of maxes" might actually be comparable or quicker.
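As a sketch of the size-one shared array idea, combined with joining the threads before reading the result (this reuses the question's method name, but it is untested illustration, not the original code):
static long sharedMaxOf2DArray(long[][] arr, int r) throws InterruptedException {
    final long[] sharedMax = {Long.MIN_VALUE}; // single shared slot visible to all threads
    final Object lock = new Object();
    Thread[] threads = new Thread[r];
    for (int row = 0; row < r; row++) {
        final int myRow = row;
        threads[row] = new Thread(() -> {
            long localMax = Long.MIN_VALUE;
            for (long v : arr[myRow]) {
                if (v > localMax) localMax = v;
            }
            synchronized (lock) { // one update per thread, so contention stays low
                if (localMax > sharedMax[0]) sharedMax[0] = localMax;
            }
        });
        threads[row].start();
    }
    for (Thread t : threads) t.join(); // wait before reading the result
    return sharedMax[0];
}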
From JavaDocs:
public interface Callable<V>
A task that returns a result and may throw an exception. Implementors define a single method with no arguments called call.
The Callable interface is similar to Runnable, in that both are designed for classes whose instances are potentially executed by another thread. A Runnable, however, does not return a result and cannot throw a checked exception.
Well, you can use Callable to calculate the result for one 1D array and wait for completion with an ExecutorService. You can then compare each Callable's result to find the maximum. The code may look like this:
Random random = new Random(System.nanoTime());
long[][] myArray = new long[5][5];
for (int i = 0; i < 5; i++) {
myArray[i] = new long[5];
for (int j = 0; j < 5; j++) {
myArray[i][j] = random.nextLong();
}
}
ExecutorService executor = Executors.newFixedThreadPool(myArray.length);
List<Future<Long>> myResults = new ArrayList<>();
// create a callable for each 1d array in the 2d array
for (int i = 0; i < myArray.length; i++) {
Callable<Long> callable = new SearchCallable(myArray[i]);
Future<Long> callResult = executor.submit(callable);
myResults.add(callResult);
}
// This will make the executor accept no new threads
// and finish all existing threads in the queue
executor.shutdown();
// Wait until all threads are finished
while (!executor.isTerminated()) {
}
// now compare the results and fetch the biggest one
long max = 0;
for (Future<Long> future : myResults) {
try {
max = Math.max(max, future.get());
} catch (InterruptedException | ExecutionException e) {
// something bad happened...!
e.printStackTrace();
}
}
System.out.println("The result is " + max);
And your Callable:
public class SearchCallable implements Callable<Long> {
private final long[] mArray;
public SearchCallable(final long[] pArray) {
mArray = pArray;
}
@Override
public Long call() throws Exception {
long max = 0;
for (int i = 0; i < mArray.length; i++) {
max = Math.max(max, mArray[i]);
}
System.out.println("I've got the maximum " + max + ", and you guys?");
return max;
}
}
Your code has serious lock contention and thread safety issues. Even worse, it doesn't actually wait for any of the threads to finish before the return myRunnables[0].sharedMax, which is a really bad race condition. Also, using explicit locking via ReentrantLock or even synchronized blocks is usually the wrong way of doing things unless you're implementing something low level (e.g. your own concurrent data structure).
Here's a version that uses the Future concurrent primitive and an ExecutorService to handle the thread creation. The general idea is:
Submit a number of concurrent jobs to your ExecutorService
Add the Future returned backed from submit(...) to a List
Loop through the list calling get() on each Future and aggregating the result
This version has the added benefit that there is no lock contention (or locking in general) between the worker threads as each just returns back the max for its slice of the array.
import java.util.concurrent.*;
import java.util.*;
public class PMax {
public static long pmax(final long[][] arr, int numThreads) {
ExecutorService pool = Executors.newFixedThreadPool(numThreads);
try {
List<Future<Long>> list = new ArrayList<Future<Long>>();
for(int i=0;i<arr.length;i++) {
// put sub-array in a final so the inner class can see it:
final long[] subArr = arr[i];
list.add(pool.submit(new Callable<Long>() {
public Long call() {
long max = Long.MIN_VALUE;
for(int j=0;j<subArr.length;j++) {
if( subArr[j] > max ) {
max = subArr[j];
}
}
return max;
}
}));
}
// find the max of each slice's max:
long max = Long.MIN_VALUE;
for(Future<Long> future : list) {
long threadMax = future.get();
System.out.println("threadMax: " + threadMax);
if( threadMax > max ) {
max = threadMax;
}
}
return max;
} catch( RuntimeException e ) {
throw e;
} catch( Exception e ) {
throw new RuntimeException(e);
} finally {
pool.shutdown();
}
}
public static void main(String args[]) {
int x = 1000;
int y = 1000;
long max = Long.MIN_VALUE;
long[][] foo = new long[x][y];
for(int i=0;i<x;i++) {
for(int j=0;j<y;j++) {
long r = (long)(Math.random() * 100000000);
if( r > max ) {
// save this to compare against pmax:
max = r;
}
foo[i][j] = r;
}
}
int numThreads = 32;
long pmax = pmax(foo, numThreads);
System.out.println("max: " + max);
System.out.println("pmax: " + pmax);
}
}
Bonus: If you're calling this method repeatedly then it would probably make sense to pull the ExecutorService creation out of the method and have it be reused across calls.
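For example (a sketch; the pool size, class name, and method signature are placeholders):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class PMaxPooled {
    // one pool for the lifetime of the application instead of one per call
    private static final ExecutorService POOL =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    public static long pmax(long[][] arr) throws InterruptedException, ExecutionException {
        List<Future<Long>> futures = new ArrayList<>();
        for (final long[] subArr : arr) {
            futures.add(POOL.submit(() -> {
                long max = Long.MIN_VALUE;
                for (long v : subArr) max = Math.max(max, v);
                return max;
            }));
        }
        long max = Long.MIN_VALUE;
        for (Future<Long> f : futures) max = Math.max(max, f.get());
        return max; // the pool stays alive for the next call
    }
}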
Well, that definitely is an issue, but without more code it is hard to tell whether it is the only one.
There is basically a race condition between the access of thread[0] (and thus the read of sharedMax) and the modification of sharedMax in other threads.
Think about what happens if the scheduler decides not to let any of the new threads run for a while: when you are done creating the threads, you will return the answer without it having been modified even once! (Of course there are other possible scenarios...)
You can overcome it by join()ing all threads before returning an answer.
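Roughly, reusing the question's names (the method would then need to declare throws InterruptedException or catch it):
Thread[] threads = new Thread[r];
for (int row = 0; row < r; row++) {
    MyRunnableShared rr = new MyRunnableShared(arr, row, r);
    threads[row] = new Thread(rr);
    threads[row].start();
    myRunnables[row] = rr;
}
for (Thread t : threads) {
    t.join(); // don't read sharedMax until every worker has finished
}
return myRunnables[0].sharedMax;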

Semaphores: Critical Section with priorities

I'm writing a program in Java that deals with Semaphores for an assignment. I'm still new to the idea of Semaphores and concurrency.
The description of the problem is as follows:
A vector V[] of booleans. V[i] is "True" if Pi needs to use the critical section.
A vector of binary semaphores B[] to block processes from entering their critical section: B[i] will be the semaphore blocking process Pi.
A special scheduler process SCHED is used whenever a blocked process needs to be awakened to use the critical section.
SCHED is blocked by waiting on a special semaphore S
When a process Pi needs to enter the critical section, it sets V[i] to "True", signals the semaphore S and then waits on the semaphore B[i].
Whenever SCHED is unblocked, it selects the process Pi with the smallest index i for which V[i] is "True". Process Pi is then awakened by signaling B[i] and SCHED goes back to sleep by blocking on semaphore S.
When a process Pi leaves the critical section, it signals S.
This is my code:
import java.util.concurrent.Semaphore;
public class Process extends Thread {
static boolean V[];
int i;
static Semaphore B[]; //blocking semaphore
static Semaphore S;
private static int id;
static int N;
static int insist = 0;
public static void process (int i, int n) {
id = i;
N = n;
V = new boolean[N];
}
private void delay () {
try {
sleep (random(500));
}
catch (InterruptedException p) {
}
}
private static int random(int n) {
return (int) Math.round(n * Math.random() - 0.5);
}
private void entryprotocol(int i) {
V[Process.id] = true;
int turn = N;
while (V[Process.id] == true && turn == N) {
System.out.println("P" + Process.id + " is requesting critical section");
signal(S);
}
critical(Process.id);
wait(B[Process.id]);
V[Process.id] = false;
}
private void wait(Semaphore S) {
if (Process.id > 0) {
Process.id--;
} else {
//add Process.id to id.queue and block
wait(B[Process.id]);
}
}
private void signal(Semaphore S) {
if (B[Process.id] != null) {
Sched(Process.id);
} else {
Process.id++; //remove process from queue
critical(Process.id); //wakes up current process
}
}
private void critical(int i) {
System.out.println("P" + Process.id + " is in the critical section");
delay();
exitprotocol(i);
}
private void exitprotocol(int i) {
System.out.println("P" + Process.id + " is leaving the critical section");
V[id] = false;
signal(S);
}
public void Sched(int i) {
if (B[Process.id] == null) {
signal(B[Process.id]);
}
wait(S);
}
public void run() {
for (int i = 0; i < 5; i++) {
Sched(i);
entryprotocol(Process.id);
try {
wait(Process.id);
}
catch (InterruptedException p) {
}
signal(S);
}
}
public static void main (String[] args) {
int N = 5;
Process p[] = new Process[N];
for (int i = 0; i < N; i++) {
p[i] = new Process();
p[i].start();
}
}
}
I believe my logic here is correct, but I'm getting a lot of errors (such as Exception in thread "Thread-1" java.lang.NullPointerException). Can anyone shed some light on what I'm doing wrong and provide me with some help? It's greatly appreciated!
Your NPE is probably due to the fact that you never initialize your Semaphore array - but it's hard to say without a proper stack trace.
Two pieces of advice:
1) You might want to give your class variables more meaningful names than B, N, S, and V. Imagine walking away from this project, revisiting it in 4 months, and having to read through that.
2) Figure out your class model on a whiteboard before writing any code. You have methods that take semaphores with the same name as some of your static fields. What are the relationships of the objects in your program? If you don't know, odds are your program doesn't know either.
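For what it's worth, the scheduling pattern from the problem statement could be sketched along these lines (only an outline with made-up names; visibility of the V[] flags and the per-process entry/exit code would still need care):
import java.util.concurrent.Semaphore;

class Scheduler implements Runnable {
    final boolean[] wants;     // V[i]: true if process i wants the critical section
    final Semaphore[] blocked; // B[i]: process i waits on this before entering
    final Semaphore wake;      // S: the scheduler sleeps on this

    Scheduler(boolean[] wants, Semaphore[] blocked, Semaphore wake) {
        this.wants = wants;
        this.blocked = blocked;
        this.wake = wake;
    }

    public void run() {
        try {
            while (true) {
                wake.acquire(); // signalled on every request and on every exit
                for (int i = 0; i < wants.length; i++) {
                    if (wants[i]) { // smallest index wins
                        wants[i] = false;
                        blocked[i].release(); // wake process i
                        break;
                    }
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
In process i, entry and exit would then be roughly: wants[i] = true; wake.release(); blocked[i].acquire(); to request and wait, followed by the critical section, and wake.release(); when leaving so the scheduler can pick the next process.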
