I have this resource allocator class
public class ResourceAllocator {
ArrayList<Request> queue = new ArrayList<>();
Lock lock = new ReentrantLock();
int maxResources;
int available;
public ResourceAllocator(int max) {
maxResources = max;
available = max;
}
public int getMax() {
return maxResources;
}
public void getResources(Request req) {
lock.lock();
try {
if (req.getRequest() <= available) {
available = available - req.getRequest();
req.allocate();
} else {
queue.add(req);
}
} finally {
lock.unlock();
}
}
public void returnResources(int n) {
lock.lock();
try {
available = available + n;
if (queue.size() > 0) {
Request req = queue.get(0);
while (queue.size() > 0 &&
req.getRequest() <= available) {
available = available - req.getRequest();
req.allocate();
queue.remove(0);
if (queue.size() > 0) {
req = queue.get(0);
}
}
}
} finally {
lock.unlock();
}
}
public int size(){
return queue.size();
}
}
which is called from a thread
public class QThread extends Thread {
Semaphore sem = new Semaphore(0);
ResourceAllocator resources;
int number;
public QThread(ResourceAllocator rs, int n) {
resources = rs;
number = n;
}
public void run() {
int items = (int) (Math.random() * resources.getMax()) + 1;
Request req = new Request(sem, items);
resources.getResources(req);
try {
sem.acquire();
} catch (InterruptedException ex) {
}
System.out.printf("Thread %3d got %3d resources\n", number, items);
try{
Thread.sleep(2000);
}catch(InterruptedException ex){
}
resources.returnResources(items);
System.out.printf("Thread %3d returned %3d resources\n", number,items);
}
}
Everything works fine, apart from the fact that resources are allocated strictly FIFO.
Any ideas how I could change this to allow clients with small requests to proceed before clients with large requests, ideally with bounded overtaking?
You can use a PriorityQueue, which suits your needs best. You can then supply a custom Comparator (useful if you think you may need a different sort order in the future), or have Request implement Comparable, so that the smallest jobs are ordered first and get executed first.
how about using a PriorityQueue where the priority is the inverse of the size of the request?
If you know the size of the job ahead of time, use a PriorityQueue instead of an ArrayList to hold the jobs, and implement Comparable on your Request object such that small jobs are sorted before large ones.
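For example, here is a minimal sketch of that change (the class name is made up, and it assumes the Request class from the question, with getRequest() returning the requested amount and allocate() releasing the waiting thread). Note that a plain priority queue gives smallest-first ordering but not bounded overtaking: a large request can still be overtaken indefinitely while smaller requests keep arriving.
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class SmallestFirstAllocator {
    // Waiting requests ordered by size: the smallest waiting request is always at the head.
    private final PriorityQueue<Request> queue =
            new PriorityQueue<>(Comparator.comparingInt(Request::getRequest));
    private final Lock lock = new ReentrantLock();
    private int available;
    public SmallestFirstAllocator(int max) {
        available = max;
    }
    public void getResources(Request req) {
        lock.lock();
        try {
            if (req.getRequest() <= available) {
                available -= req.getRequest();
                req.allocate();
            } else {
                queue.add(req);                 // parked until enough resources come back
            }
        } finally {
            lock.unlock();
        }
    }
    public void returnResources(int n) {
        lock.lock();
        try {
            available += n;
            // peek()/poll() replace get(0)/remove(0): the smallest request that fits goes first.
            while (!queue.isEmpty() && queue.peek().getRequest() <= available) {
                Request req = queue.poll();
                available -= req.getRequest();
                req.allocate();
            }
        } finally {
            lock.unlock();
        }
    }
}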
I ran across this barrier code and I cannot understand the barrierPost method.
I'm supposed to use this code to solve an exercise where two teams of threads race each other to count to 10000.
I don't understand why the same condition leads to opposite behavior in the two methods.
public class Barrier {
private int currentPosters = 0, totalPosters = 0;
private int passedWaiters = 0, totalWaiters = 1;
/**
* @param totalPosters - Nr of threads required to start the waiting threads
* @param totalWaiters - Nr of threads started later
*/
public Barrier (int totalPosters, int totalWaiters) {
this.totalPosters = totalPosters;
this.totalWaiters = totalWaiters;
}
public synchronized void init(int i) {
totalPosters = i; currentPosters=0;
}
public synchronized void barrierSet(int i) {
totalPosters = i; currentPosters=0;
}
public synchronized void barrierWait() {
boolean interrupted = false;
while (currentPosters != totalPosters) {
try {wait();}
catch (InterruptedException ie) {interrupted=true;}
}
passedWaiters++;
if (passedWaiters == totalWaiters) {
currentPosters = 0; passedWaiters = 0; notifyAll();
}
if (interrupted) Thread.currentThread().interrupt();
}
public synchronized void barrierPost() {
boolean interrupted = false; // In case a poster thread beats barrierWait, keep count of posters.
while (currentPosters == totalPosters) {
try {wait();}
catch (InterruptedException ie) {interrupted=true;}
}
currentPosters++;
if (currentPosters == totalPosters) notifyAll();
if (interrupted) Thread.currentThread().interrupt();
}
}
Can someone help?
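Reading the two loops side by side may help: barrierWait blocks a waiting thread until every poster has arrived (and the last waiter through resets the counters and wakes any blocked posters), whereas barrierPost blocks a poster only while the previous round is still full, i.e. has not yet been consumed by the waiters. A small, hypothetical usage sketch (assuming the non-compiling condition in barrierWait is meant to be currentPosters != totalPosters, as fixed above):
public class BarrierDemo {
    public static void main(String[] args) {
        // Hypothetical usage: 2 posters must both post before the single waiter proceeds.
        Barrier barrier = new Barrier(2, 1);
        Runnable poster = () -> {
            // ... one unit of work ...
            barrier.barrierPost();   // blocks only if the previous round has not been drained yet
        };
        Runnable waiter = () -> {
            barrier.barrierWait();   // blocks until both posters have posted, then resets the round
            System.out.println("both posters arrived");
        };
        new Thread(poster).start();
        new Thread(poster).start();
        new Thread(waiter).start();
    }
}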
I have a requirement where I need to read from a set of blocking queues. The blocking queues are created by the library I am using; my code has to read from them. I don't want to create a reader thread for each of these blocking queues. Rather, I want to poll them for availability of data using a single thread (or perhaps 2-3 threads at most), because some of the queues may have no data for a long time while others get bursts of data. Polling the queues with a small timeout would work, but it is not efficient at all, since it still has to keep looping over all the queues even when some of them have had no data for a long time. Basically, I am looking for a select/epoll-like mechanism (as used on sockets) for blocking queues. Any clue is really appreciated.
Doing that in Go is really easy, though. The code below simulates the same setup with channels and goroutines:
package main
import "fmt"
import "time"
import "math/rand"
func sendMessage(sc chan string) {
var i int
for {
i = rand.Intn(10)
for ; i >= 0 ; i-- {
sc <- fmt.Sprintf("Order number %d",rand.Intn(100))
}
i = 1000 + rand.Intn(32000);
time.Sleep(time.Duration(i) * time.Millisecond)
}
}
func sendNum(c chan int) {
var i int
for {
i = rand.Intn(16);
for ; i >= 0; i-- {
time.Sleep(20 * time.Millisecond)
c <- rand.Intn(65534)
}
i = 1000 + rand.Intn(24000);
time.Sleep(time.Duration(i) * time.Millisecond)
}
}
func main() {
msgchan := make(chan string, 32)
numchan := make(chan int, 32)
i := 0
for ; i < 8 ; i++ {
go sendNum(numchan)
go sendMessage(msgchan)
}
for {
select {
case msg := <- msgchan:
fmt.Printf("Worked on %s\n", msg)
case x := <- numchan:
fmt.Printf("I got %d \n", x)
}
}
}
I suggest you look into using the JCSP library. The equivalent of Go's select is called Alternative. You would only need one consuming thread, which will not need to poll the incoming channels if it switches on them with Alternative. Therefore this would be an efficient way to multiplex the source data.
It will help a lot if you are able to replace the BlockingQueues with JCSP channels. Channels behave essentially the same but provide greater flexibility regarding fan-out or fan-in sharing of channel ends, and in particular the use of channels with Alternative.
For an example of usage, here is a fair multiplexer. This example demonstrates a process that fairly multiplexes traffic from its array of input channels to its single output channel. No input channel will be starved, regardless of the eagerness of its competitors.
import org.jcsp.lang.*;
public class FairPlex implements CSProcess {
private final AltingChannelInput[] in;
private final ChannelOutput out;
public FairPlex (final AltingChannelInput[] in, final ChannelOutput out) {
this.in = in;
this.out = out;
}
public void run () {
final Alternative alt = new Alternative (in);
while (true) {
final int index = alt.fairSelect ();
out.write (in[index].read ());
}
}
}
Note that if priSelect were used above, higher-indexed channels would be starved if lower-indexed channels were continually demanding service. Or instead of fairSelect, select could be used, but then no starvation analysis is possible. The select mechanism should only be used when starvation is not an issue.
Freedom from Deadlock
As with Go, a Java program using channels must be designed not to deadlock. The implementation of low-level concurrency primitives in Java is very hard to get right and you need something dependable. Fortunately, Alternative has been validated by formal analysis, along with the JCSP channels. This makes it a solid reliable choice.
Just to clear up one slight point of confusion: the current JCSP version is 1.1-rc5 in the Maven repos, not what the website says.
Another choice, for Java 6+, is shown below.
A Go-channel-style pool class built on LinkedBlockingDeque:
import java.lang.ref.WeakReference;
import java.util.WeakHashMap;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.atomic.AtomicLong;
class GoChannelPool {
private final static GoChannelPool defaultInstance = newPool();
private final AtomicLong serialNumber = new AtomicLong();
private final WeakHashMap<Long, WeakReference<GoChannel>> channelWeakHashMap = new WeakHashMap<>();
private final LinkedBlockingDeque<GoChannelObject> totalQueue = new LinkedBlockingDeque<>();
public <T> GoChannel<T> newChannel() {
GoChannel<T> channel = new GoChannel<>();
channelWeakHashMap.put(channel.getId(), new WeakReference<GoChannel>(channel));
return channel;
}
public void select(GoSelectConsumer consumer) throws InterruptedException {
consumer.accept(getTotalQueue().take());
}
public int size() {
return getTotalQueue().size();
}
public int getChannelCount() {
return channelWeakHashMap.values().size();
}
private LinkedBlockingDeque<GoChannelObject> getTotalQueue() {
return totalQueue;
}
public static GoChannelPool getDefaultInstance() {
return defaultInstance;
}
public static GoChannelPool newPool() {
return new GoChannelPool();
}
private GoChannelPool() {}
private long getSerialNumber() {
return serialNumber.getAndIncrement();
}
private synchronized void syncTakeAndDispatchObject() throws InterruptedException {
select(new GoSelectConsumer() {
@Override
void accept(GoChannelObject t) {
WeakReference<GoChannel> goChannelWeakReference = channelWeakHashMap.get(t.channel_id);
GoChannel channel = goChannelWeakReference != null ? goChannelWeakReference.get() : null;
if (channel != null) {
channel.offerBuffer(t);
}
}
});
}
class GoChannel<E> {
// Instance
private final long id;
private final LinkedBlockingDeque<GoChannelObject<E>> buffer = new LinkedBlockingDeque<>();
public GoChannel() {
this(getSerialNumber());
}
private GoChannel(long id) {
this.id = id;
}
public long getId() {
return id;
}
public E take() throws InterruptedException {
GoChannelObject object;
while((object = pollBuffer()) == null) {
syncTakeAndDispatchObject();
}
return (E) object.data;
}
public void offer(E object) {
GoChannelObject<E> e = new GoChannelObject();
e.channel_id = getId();
e.data = object;
getTotalQueue().offer(e);
}
protected void offerBuffer(GoChannelObject<E> data) {
buffer.offer(data);
}
protected GoChannelObject<E> pollBuffer() {
return buffer.poll();
}
public int size() {
return buffer.size();
}
@Override
protected void finalize() throws Throwable {
super.finalize();
channelWeakHashMap.remove(getId());
}
}
class GoChannelObject<E> {
long channel_id;
E data;
boolean belongsTo(GoChannel channel) {
return channel != null && channel_id == channel.id;
}
}
abstract static class GoSelectConsumer{
abstract void accept(GoChannelObject t);
}
}
then we can use it in this way:
GoChannelPool pool = GoChannelPool.getDefaultInstance();
final GoChannelPool.GoChannel<Integer> numberCh = pool.newChannel();
final GoChannelPool.GoChannel<String> stringCh = pool.newChannel();
final GoChannelPool.GoChannel<String> otherCh = pool.newChannel();
ExecutorService executorService = Executors.newCachedThreadPool();
int times;
times = 2000;
final CountDownLatch countDownLatch = new CountDownLatch(times * 2);
final AtomicInteger numTimes = new AtomicInteger();
final AtomicInteger strTimes = new AtomicInteger();
final AtomicInteger defaultTimes = new AtomicInteger();
final int finalTimes = times;
executorService.submit(new Runnable() {
@Override
public void run() {
for (int i = 0; i < finalTimes; i++) {
numberCh.offer(i);
try {
Thread.sleep((long) (Math.random() * 10));
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
});
executorService.submit(new Runnable() {
@Override
public void run() {
for (int i = 0; i < finalTimes; i++) {
stringCh.offer("s"+i+"e");
try {
Thread.sleep((long) (Math.random() * 10));
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
});
int otherTimes = 3;
for (int i = 0; i < otherTimes; i++) {
otherCh.offer("a"+i);
}
for (int i = 0; i < times*2 + otherTimes; i++) {
pool.select(new GoChannelPool.GoSelectConsumer() {
@Override
void accept(GoChannelPool.GoChannelObject t) {
// The data order should be randomized.
System.out.println(t.data);
countDownLatch.countDown();
if (t.belongsTo(stringCh)) {
strTimes.incrementAndGet();
return;
}
else if (t.belongsTo(numberCh)) {
numTimes.incrementAndGet();
return;
}
defaultTimes.incrementAndGet();
}
});
}
countDownLatch.await(10, TimeUnit.SECONDS);
/**
The console output of data should be randomized.
numTimes.get() should be 2000
strTimes.get() should be 2000
defaultTimes.get() should be 3
*/
And beware that select works only if the channels belong to the same GoChannelPool; alternatively, just use the default GoChannelPool (although performance will be lower if too many channels share the same pool).
The only way is to replace the standard queues with objects of a more capable class, one which notifies the consumer(s) when a datum is inserted into an empty queue. Such a class can still implement the BlockingQueue interface, so the other side (the producer) sees no difference. The trick is that the put operation should also raise a flag and notify the consumer. The consumer, after polling all the queues, clears the flag and calls Object.wait().
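A minimal sketch of that idea (all names here are made up for illustration, and it assumes you can hand the library these wrapped queues instead of plain LinkedBlockingQueues): each queue shares a monitor with the single consumer, offer() signals that monitor, and the consumer only waits when every queue it just drained really was empty.
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
// A BlockingQueue that signals a shared monitor whenever an element is inserted.
// (Only offer(E) is overridden here; a real wrapper would also cover put/add/timed offer.)
class SignallingQueue<E> extends LinkedBlockingQueue<E> {
    private final Object signal;
    SignallingQueue(Object signal) { this.signal = signal; }
    @Override
    public boolean offer(E e) {
        boolean added = super.offer(e);
        if (added) {
            synchronized (signal) { signal.notifyAll(); }   // wake the multiplexing consumer
        }
        return added;
    }
}
class QueueMultiplexer {
    // Single consumer thread draining many queues without busy-polling.
    static <E> void consume(Object signal, List<SignallingQueue<E>> queues) throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            boolean gotSomething = false;
            for (SignallingQueue<E> q : queues) {
                E item;
                while ((item = q.poll()) != null) {          // drain whatever is available
                    gotSomething = true;
                    System.out.println("consumed " + item);
                }
            }
            if (!gotSomething) {
                synchronized (signal) {
                    if (allEmpty(queues)) {                  // re-check under the monitor: no lost wakeup
                        signal.wait();
                    }
                }
            }
        }
    }
    static boolean allEmpty(List<? extends LinkedBlockingQueue<?>> queues) {
        for (LinkedBlockingQueue<?> q : queues) {
            if (!q.isEmpty()) return false;
        }
        return true;
    }
}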
I remember when I was very new to Java and did not know threads could share the process's memory, I had my threads communicate using (TCP/local) sockets. Perhaps this could also work.
I implemented a buffer for the producer/consumer pattern; however, it seems that the consumer never acquires the lock, so starvation occurs. I can't identify why this happens, since both put() and take() seem to release the lock properly...
I know there is BlockingQueue and other nice implementations, but I want to implement this using wait() and notify() as an exercise.
public class ProducerConsumerRaw {
public static void main(String[] args) {
IntBuffer buffer = new IntBuffer(8);
ConsumerRaw consumer = new ConsumerRaw(buffer);
ProducerRaw producer = new ProducerRaw(buffer);
Thread t1 = new Thread(consumer);
Thread t2 = new Thread(producer);
t1.start();
t2.start();
}
}
class ConsumerRaw implements Runnable{
private final IntBuffer buffer;
public ConsumerRaw(IntBuffer b){
buffer = b;
}
public void run() {
while(!buffer.isEmpty()) {
int i = buffer.take();
System.out.println("Consumer reads "+i); // this print may not be in the order
}
}
}
class ProducerRaw implements Runnable{
private final IntBuffer buffer;
ProducerRaw(IntBuffer b) {
this.buffer = b;
}
public void run(){
for (int i = 0; i < 20; i++) {
int n = (int) (Math.random()*100);
buffer.put(n);
System.out.println("Producer puts "+n);
}
}
}
class IntBuffer{
private final int[] storage;
private volatile int end;
private volatile int start;
public IntBuffer(int size) {
this.storage = new int[size];
end = 0;
start = 0;
}
public void put(int n) { // puts add the END
synchronized(storage) {
boolean full = (start == (end+storage.length+1)%storage.length);
while(full){ // queue is full
try {
storage.notifyAll();
storage.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
this.storage[end] = n;
end = incrementMod(end);
storage.notifyAll();
}
}
public int take(){
synchronized(storage) {
while (end == start) { // empty queue
try {
storage.notifyAll(); // notify waiting producers
storage.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
int index = start;
start = incrementMod(start);
storage.notifyAll(); // notify waiting producers
return this.storage[index];
}
}
private int incrementMod(int index) {
synchronized (storage) {
if (index == storage.length-1) return 0;
else return index+1;
}
}
public boolean isEmpty(){
synchronized (storage) {
return (start == end);
}
}
}
This is at least one problem, in your put method:
boolean full = (start == (end+storage.length+1)%storage.length);
while(full){ // queue is full
// Code that doesn't change full
}
If full is ever initialized as true, how do you expect the loop to end?
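A minimal sketch of the fix for that method, meant as a drop-in replacement for IntBuffer.put from the question: test the condition in the while itself so it is re-evaluated after every wait(). ((end + 1) % storage.length == start is the usual "full" test for a ring buffer that leaves one slot unused, and it is what the original expression reduces to.)
public void put(int n) {
    synchronized (storage) {
        // Re-evaluated on every wake-up, unlike a boolean computed once before the loop.
        while (incrementMod(end) == start) {      // full: writing would catch up with start
            try {
                storage.wait();                   // wait for a consumer to free a slot
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
        storage[end] = n;
        end = incrementMod(end);
        storage.notifyAll();                      // wake any waiting consumer
    }
}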
The other problem is this loop, in the consumer:
while(!buffer.isEmpty()) {
int i = buffer.take();
System.out.println("Consumer reads "+i);
}
You're assuming the producer never lets the buffer get empty - if the consumer starts before the producer, it will stop immediately.
Instead, you want some way of telling the buffer that you've stopped producing. The consumer should keep taking until the queue is empty and it knows no more data will arrive, for example as sketched below.
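One common way to do that (a sketch, not the only option) is a poison pill: the producer puts a sentinel value when it is finished, and the consumer keeps taking until it sees it. This assumes the IntBuffer class from the question and that real data never uses the sentinel value.
class PoisonPillDemo {
    static final int POISON_PILL = -1;        // hypothetical sentinel; real data is 0..99 here
    static void produce(IntBuffer buffer) {
        for (int i = 0; i < 20; i++) {
            buffer.put((int) (Math.random() * 100));
        }
        buffer.put(POISON_PILL);              // tell the consumer there is nothing more to come
    }
    static void consume(IntBuffer buffer) {
        while (true) {
            int i = buffer.take();            // blocks until something is available
            if (i == POISON_PILL) {
                break;                        // producer signalled end of stream
            }
            System.out.println("Consumer reads " + i);
        }
    }
}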
I was attempting to solve a multithreaded problem and I am having difficulty understanding its behavior.
The problem is:
There are two threads which simultaneously consume even and odd numbers. I have to introduce communication between them so that the numbers are consumed in natural order.
here is my code
public class EvenOddDemo {
public static void main(String[] args) {
Number n = new Number();
EvenThread et = new EvenThread(n);
OddThread ot = new OddThread(n);
et.start();
ot.start();
}
}
class EvenThread extends Thread {
private Number number;
public EvenThread(Number number) {
this.number = number;
}
@Override
public void run() {
for(int i=0; i<5; i++) {
System.out.println(number.getEven());
}
}
}
class OddThread extends Thread {
private Number number;
public OddThread(Number number) {
this.number = number;
}
@Override
public void run() {
for(int i=0; i<5; i++) {
System.out.println(number.getOdd());
}
}
}
class Number {
private int currentEven = 0;
private int currentOdd = 1;
private volatile String last = "odd";
public synchronized int getEven() {
if("even".equals(last)) {
try {
wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
int i = currentEven;
last = "even";
currentEven +=2;
notify();
return i;
}
public synchronized int getOdd() {
if("odd".equals(last)) {
try {
wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
int i = currentOdd;
last = "odd";
currentOdd +=2;
notify();
return i;
}
}
and the output is
0
2
1
3
4
5
7
6
8
9
But when I debug the code, it prints the numbers in the correct order, so I am not able to figure out what I am missing. Please help; thanks in advance for your time.
As far as I can see, there is nothing preventing this from happening, explaining why 2 is displayed before 1 in your output:
OddThread          EvenThread
----------         ----------
gets odd
                   gets even
                   prints even
prints odd
The lock therefore needs to be around the whole sequence "get/print".
You'll notice that you are never "two numbers apart" in your output, too.
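A minimal sketch of that suggestion, assuming the Number class from the question: have each thread hold the Number's monitor across both the get and the print, so the other thread cannot slip its print in between (the synchronized methods and this explicit synchronized block use the same monitor, and Java monitors are reentrant, while wait() inside getEven()/getOdd() still releases it).
// In EvenThread.run(); OddThread.run() changes symmetrically with getOdd().
for (int i = 0; i < 5; i++) {
    synchronized (number) {                    // same monitor that getEven()/getOdd() lock
        System.out.println(number.getEven());  // get and print are now one atomic step
    }
}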
notify chooses any waiting thread; per the Javadoc, "the choice is arbitrary and occurs at the discretion of the implementation".
If there are more than two threads waiting, you could be signalling the "wrong" thread.
Also, note that both of your threads could be just finished in get(Even|Odd) with neither waiting, leading to the notify going nowhere depending upon the scheduling.
You need to be more strict to ensure the ordering. Perhaps two locks, even and odd, would be helpful.
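For example, here is a minimal sketch of that stricter hand-off using two semaphores (a substitute for the two locks suggested above, not code from the question): each thread may only print when the other has handed it a permit, so strict alternation is enforced.
import java.util.concurrent.Semaphore;
public class EvenOddSemaphores {
    public static void main(String[] args) {
        Semaphore evenTurn = new Semaphore(1);   // even (0) goes first
        Semaphore oddTurn = new Semaphore(0);
        Thread even = new Thread(() -> {
            for (int n = 0; n < 10; n += 2) {
                try {
                    evenTurn.acquire();          // wait for our turn
                    System.out.println(n);
                    oddTurn.release();           // hand over to the odd thread
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        Thread odd = new Thread(() -> {
            for (int n = 1; n < 10; n += 2) {
                try {
                    oddTurn.acquire();
                    System.out.println(n);
                    evenTurn.release();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        even.start();
        odd.start();
    }
}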
You need to print the number inside the getEven and getOdd methods and only then notify the other thread.
In your version the number is printed after the synchronized method has already notified and returned (releasing the lock), so between the notify/return and the print the other thread can run and print its own number first.
Modified code:
public class ThreadExp {
public static void main(String[] args) {
Number n = new Number();
EvenThread et = new EvenThread(n);
OddThread ot = new OddThread(n);
et.start();
ot.start();
}
}
class EvenThread extends Thread {
private Number number;
public EvenThread(Number number) {
this.number = number;
}
@Override
public void run() {
for (int i = 0; i < 10; i++) {
number.getEven();
}
}
}
class OddThread extends Thread {
private Number number;
public OddThread(Number number) {
this.number = number;
}
@Override
public void run() {
for (int i = 0; i < 10; i++) {
number.getOdd();
}
}
}
class Number {
private int currentEven = 0;
private int currentOdd = 1;
private StringBuilder odd;
private StringBuilder even;
private StringBuilder last;
{
odd = new StringBuilder("odd");
even = new StringBuilder("even");
last = odd;
}
public synchronized void getEven() {
if (last == even) {
try {
//System.out.println("inside if in even--->" +Thread.currentThread());
wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
//System.out.println("out of if in even--> " + Thread.currentThread());
int i = currentEven;
last = even;
currentEven += 2;
System.out.println(i);
notify();
return;
}
public synchronized void getOdd() {
if (last == odd) {
try {
//System.out.println("inside if in odd--->" +Thread.currentThread());
wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
//System.out.println("out of if in odd--> " + Thread.currentThread());
int i = currentOdd;
last = odd;
currentOdd += 2;
System.out.println(i);
notify();
return;
}
}
I'm trying to scan all the files on my Android device. I am using a multithreaded class like this:
public class FileScanner {
// subfolders to explore
private final Queue<File> exploreList = new ConcurrentLinkedQueue<File>();
private long fileCounter = 0;
List<File> listFile = new ArrayList<File>();
public void count() {
fileCounter++;
}
public long getCounter() {
return this.fileCounter;
}
public List<File> getListFile() {
return this.listFile;
}
int[] threads;
public FileScanner(int numberOfThreads) {
threads = new int[numberOfThreads];
for (int i = 0; i < threads.length; i++) {
threads[i] = -1;
}
}
void scan(File file) {
// add the first one to the list
exploreList.add(file);
for (int i = 0; i < threads.length; i++) {
FileExplorer explorer = new FileExplorer(i, this);
Thread t = new Thread(explorer);
t.start();
}
Thread waitToFinish = new Thread(new Runnable() {
@Override
public void run() {
boolean working = true;
while (working) {
working = false;
for (int i = 0; i < threads.length; i++) {
if (threads[i] == -1) {
working = true;
break;
}
}
try {
Thread.sleep(1);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
});
waitToFinish.start();
}
public void done(int id, int counter) {
threads[id] = counter;
}
public boolean isFinished() {
for (int i = 0; i < threads.length; i++) {
if (threads[i] == -1) {
return false;
}
}
return true;
}
class FileExplorer implements Runnable {
public int counter = 0;
public FileScanner owner;
private int id;
public FileExplorer(int id, FileScanner owner) {
this.id = id;
this.owner = owner;
}
@Override
public void run() {
while (!owner.exploreList.isEmpty()) {
// get the first from the list
try {
File file = (File) owner.exploreList.remove();
if (file.exists()) {
if (!file.isDirectory()) {
count();
listFile.add(file);
} else {
// add the files to the queue
File[] arr = file.listFiles();
if (arr != null) {
for (int i = 0; i < arr.length; i++) {
owner.exploreList.add(arr[i]);
}
}
}
}
} catch (Exception e) {
e.printStackTrace();
// silent kill :)
}
try {
Thread.sleep(1);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
owner.done(id, counter);
}
}
}
And I call it in my AsyncTask:
private class FetchResidualAsynctask extends AsyncTask {
FileScanner fileMachine;
@Override
protected void onPreExecute() {
super.onPreExecute();
listResidualFileTemp.clear();
listResidualFileThumbnail.clear();
listResidualAppAds.clear();
listResidualAppLeftOvers.clear();
findAllStorage();
for (int i = 0; i < listStorage.size(); i++) {
fileMachine = new FileScanner(20);
fileMachine.scan(listStorage.get(i));
listFile.addAll(fileMachine.getListFile());
}
}
@Override
protected Void doInBackground(Void... params) {
numberOfFiles = listFile.size();
Log.i("numberOfFiles", "NUmber: " + numberOfFiles);
processindex = 0;
getActivity().runOnUiThread(new Runnable() {
public void run() {
mBtnClean.setText(R.string.btn_rescan);
mBtnClean.setEnabled(false);
txtResidualFile.setText("");
mProgressbar.setVisibility(View.VISIBLE);
mProgressbar.setProgress(0);
mBtnClean.setText(R.string.btn_stop);
mBtnClean.setEnabled(true);
mProgressbar.setMax(numberOfFiles);
}
});
for (int i = 0; i < listFile.size(); i++) {
getFilePath(listFile.get(i));
}
}
The problem is that the returned list of files is inconsistent. As I debugged it, the results are different each time I test: the first run returns a very small number of files (e.g. 160), the next run quite a lot more (1200).
I think fileMachine.scan() hasn't finished yet when execution is forced on to doInBackground().
Can anybody help me on this one?
This looks excessively complicated and full of race conditions. Your main bug is probably that threads are detecting that the queue is empty (and then exiting) before the work is actually done, i.e. at one moment in time the queue becomes momentarily empty (a thread has remove()d the last item), but another thread then adds more entries back to it.
To wait for your workers to complete... you can use Thread.join() or a Semaphore, rather than that complex unsafe polling you've got there.
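For example, here is a minimal sketch of the join-based wait, replacing the polling watcher thread. The pending counter is an added detail (not in the original code): it counts entries that have been queued but not yet fully processed, so a worker only exits when nothing is queued and no other worker is still expanding a directory.
import java.io.File;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
public class JoinedFileScanner {
    private final Queue<File> exploreList = new ConcurrentLinkedQueue<>();
    private final AtomicInteger pending = new AtomicInteger();    // queued but not fully processed
    private final List<File> files = Collections.synchronizedList(new ArrayList<>());
    public List<File> scan(File root, int numberOfThreads) throws InterruptedException {
        pending.incrementAndGet();
        exploreList.add(root);
        List<Thread> workers = new ArrayList<>();
        for (int i = 0; i < numberOfThreads; i++) {
            Thread t = new Thread(this::work);
            workers.add(t);
            t.start();
        }
        for (Thread t : workers) {
            t.join();                        // wait for completion instead of polling a flag array
        }
        return files;
    }
    private void work() {
        while (pending.get() > 0) {          // work remains somewhere, even if the queue is momentarily empty
            File file = exploreList.poll();
            if (file == null) {
                Thread.yield();              // another worker is still expanding a directory
                continue;
            }
            try {
                if (file.isDirectory()) {
                    File[] children = file.listFiles();
                    if (children != null) {
                        for (File child : children) {
                            pending.incrementAndGet();
                            exploreList.add(child);
                        }
                    }
                } else if (file.exists()) {
                    files.add(file);
                }
            } finally {
                pending.decrementAndGet();   // this entry is now fully processed
            }
        }
    }
}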
Are you even sure there's a benefit to parallelizing something like this? I imagine 20 threads all trying to hammer the filesystem simultaneously don't actually get to enjoy a lot of simultaneous execution. It may even be that the filesystem driver serializes all IO requests!
Good question. In general, it's not possible to fire off a bunch of threads and somehow have them "work". Instead, you need to create a pool of threads of a pre-defined size, and parcel a new one out when you have work to do. At some point, a task you want to run on a thread will wait, because there are no threads left. This is expected behavior. To facilitate multiple thread usage, decide on the max number of threads you want in advance, build a threadpool, and only then start doing the work. The training class Sending Operations to Multiple Threads describes this in some detail.
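For instance, a small sketch of that fixed-size-pool approach (the pool size and the dummy task are placeholders, not taken from the original code): build the pool first, submit work to it, and let it queue tasks when all threads are busy.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Decide on the maximum number of threads up front, then hand work to the pool.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 20; i++) {
            final int task = i;
            pool.submit(() -> System.out.println("processing task " + task
                    + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();                       // accept no new tasks; queued tasks still run
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}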