Java ScheduledFuture getDelay returns negative value

I am using ScheduledExecutorService, Semaphore and ScheduledFuture to write a rate-limiting function. Simply put, when a client reaches the limit, the server returns error 429 with the message "please try after %d second".
I use scheduledFuture.getDelay(TimeUnit.SECONDS) to get the value of %d. For the first attempt or two it behaves normally, i.e. it allows access until the limit is reached and then shows how many seconds to wait. After that, getDelay starts returning negative values. Does that mean the ScheduledExecutorService is not working properly?
Here is the snippet:
public RateLimiter(int permits, long durationInMillis){
this.semaphore = new Semaphore(permits);
this.permits = permits;
this.durationInMillis = durationInMillis;
scheduleReplenishment();
}
public boolean allowAccess() {
return semaphore.tryAcquire();
}
public long nextReplenishmentTime() {
return scheduledFuture.getDelay(TimeUnit.SECONDS);
}
public void stop() {
scheduler.shutdownNow();
}
public void scheduleReplenishment() {
scheduledFuture = scheduler.schedule(() -> {
semaphore.release(permits - semaphore.availablePermits());
}, durationInMillis, TimeUnit.MILLISECONDS);
}

Once the task has run, getDelay(TimeUnit) will be negative. To show this, I added two parameters to scheduleReplenishment() and changed nextReplenishmentTime() to printReplenishmentTime().
Note 1: If you create a Future<> and replace one with another, you should take care of the one you discard (cancel it if it is still pending)...
Note 2: If you want to test Future<> and Semaphore, don't release the allocated resources immediately.
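As a minimal sketch of Note 1 (my own illustration, not part of the snippets below): cancel the future you are about to replace.
// Hypothetical replacement pattern: keep a handle on the previous future and
// cancel it (without interrupting a running task) before installing the new one.
final ScheduledFuture<?> previous = scheduledFuture;
scheduledFuture = scheduler.schedule(
        () -> semaphore.release(permits - semaphore.availablePermits()),
        durationInMillis, TimeUnit.MILLISECONDS);
if (previous != null) {
    previous.cancel(false);
}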
private final ConcurrentSkipListMap<String, ScheduledFuture<?>> scheduledFutures
= new ConcurrentSkipListMap<>();
private final AtomicInteger counter = new AtomicInteger();
public void printReplenishmentTime() {
scheduledFutures.forEach((name, f) -> {
final long delay = f.getDelay(TimeUnit.SECONDS);
System.out.println(name + " delay " + delay);
});
}
/**
* Tries to acquire one permit at a time from {@code semaphore},
* then waits {@code waitInMillis}, until all permits are used.
*
* @param waitInMillis how long to wait after successfully using one permit
* @param permits total permits to use, best if permits > 2
*/
public void scheduleReplenishment(final long waitInMillis, final int permits) {
final String name = "future" + counter.getAndIncrement();
scheduledFutures.put(name, scheduler.schedule(() -> {
int permit = permits; // declared outside the try block so the finally clause can release exactly what was acquired
try {
while (0 < permit) {
final boolean ack = semaphore.tryAcquire(1);
System.out.println(name + " " + (ack ? "acquire" : "not acquire")
+ " one, but need " + permit);
if (ack) {
permit--;
}
if (0 < permit) {
try {
Thread.sleep(waitInMillis);
} catch (final InterruptedException e) {
System.out.println(name + " interrupted, exiting...");
return;
}
}
}
System.out.println(name + " done");
} finally {
semaphore.release(permits - permit);
}
// BAD CODE: semaphore.availablePermits() for debugging purposes
// only, maybe 0 release...
// semaphore.release(permits - semaphore.availablePermits());
}, durationInMillis, TimeUnit.MILLISECONDS));
}

scheduler.schedule() is a one-shot call: it runs the task once and never reschedules it. Once the task has run, getDelay() keeps counting past the scheduled time, which is why it returns negative values.
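If nextReplenishmentTime() should keep returning a sensible value, one option (my suggestion, not from the question's code) is to have the replenishment task re-arm itself so there is always a pending future:
// Hypothetical self-rescheduling variant: after replenishing, schedule the next
// run, so scheduledFuture always refers to a pending task and getDelay() stays positive.
public void scheduleReplenishment() {
    scheduledFuture = scheduler.schedule(() -> {
        semaphore.release(permits - semaphore.availablePermits());
        scheduleReplenishment(); // re-arm for the next window
    }, durationInMillis, TimeUnit.MILLISECONDS);
}
There is still a brief window between the task running and the new future being installed where getDelay() can be slightly negative, so callers may want to clamp the returned value at zero.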

Related

Thread Pool per key in Java

Suppose that you have a grid G of n x m cells, where n and m are huge.
Further, suppose that we have numerous tasks, where each task belongs to a single cell in G and should be executed in parallel (in a thread pool or other resource pool).
However, tasks belonging to the same cell must be executed serially, that is, each must wait for the previous task in the same cell to finish.
How can I solve this issue?
I've searched and tried several thread pools (Executors, Thread), but no luck.
Minimum Working Example
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class MWE {
public static void main(String[] args) {
ExecutorService threadPool = Executors.newFixedThreadPool(16);
Random r = new Random();
for (int i = 0; i < 10000; i++) {
int nx = r.nextInt(10);
int ny = r.nextInt(10);
Runnable task = new Runnable() {
public void run() {
try {
System.out.println("Task is running");
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
threadPool.submit(new Thread(task)); // Should use nx,ny here somehow
}
}
}
You can create an array of n single-thread executors with Executors.newFixedThreadPool(1).
Then submit to the corresponding executor by using a hash function.
Ex. threadPool[key % n].submit(task).
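A minimal sketch of that idea (the class and names here are my own illustration, not from the answer):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
// Hypothetical striped executor: one single-thread pool per stripe, chosen by
// hashing the cell key, so tasks that share a key run serially while different
// keys (usually) run in parallel. Keys that collide on a stripe also serialize.
class StripedExecutor {
    private final ExecutorService[] stripes;
    StripedExecutor(int n) {
        stripes = new ExecutorService[n];
        for (int i = 0; i < n; i++) {
            stripes[i] = Executors.newFixedThreadPool(1);
        }
    }
    void execute(int key, Runnable task) {
        stripes[Math.floorMod(key, stripes.length)].execute(task);
    }
    void shutdown() {
        for (ExecutorService s : stripes) {
            s.shutdown();
        }
    }
}
Usage in the MWE would be something like new StripedExecutor(16).execute(ny * 10 + nx, task); the trade-off is that unrelated cells hashing to the same stripe are serialized too.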
A callback mechanism with a synchronized block could work efficiently here.
I have previously answered a similar question here.
There are some limitations (see the linked answer), but it is simple enough to keep track of what is going on (good maintainability).
I have adapted the source code and made it more efficient for your case where most tasks will be executed in parallel
(since n and m are huge), but on occasion must be serial (when a task is for the same point in the grid G).
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.locks.ReentrantLock;
// Adapted from https://stackoverflow.com/a/33113200/3080094
public class GridTaskExecutor {
public static void main(String[] args) {
final int maxTasks = 10_000;
final CountDownLatch tasksDone = new CountDownLatch(maxTasks);
ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(16);
try {
GridTaskExecutor gte = new GridTaskExecutor(executor);
Random r = new Random();
for (int i = 0; i < maxTasks; i++) {
final int nx = r.nextInt(10);
final int ny = r.nextInt(10);
Runnable task = new Runnable() {
public void run() {
try {
// System.out.println("Task " + nx + " / " + ny + " is running");
Thread.sleep(1);
} catch (Exception e) {
e.printStackTrace();
} finally {
tasksDone.countDown();
}
}
};
gte.addTask(task, nx, ny);
}
tasksDone.await();
System.out.println("All tasks done, task points remaining: " + gte.size());
} catch (Exception e) {
e.printStackTrace();
} finally {
executor.shutdownNow();
}
}
private final Executor executor;
private final Map<Long, List<CallbackPointTask>> tasksWaiting = new HashMap<>();
// make lock fair so that adding and removing tasks is balanced.
private final ReentrantLock lock = new ReentrantLock(true);
public GridTaskExecutor(Executor executor) {
this.executor = executor;
}
public void addTask(Runnable r, int x, int y) {
Long point = toPoint(x, y);
CallbackPointTask pr = new CallbackPointTask(point, r);
boolean runNow = false;
lock.lock();
try {
List<CallbackPointTask> pointTasks = tasksWaiting.get(point);
if (pointTasks == null) {
if (tasksWaiting.containsKey(point)) {
pointTasks = new LinkedList<CallbackPointTask>();
pointTasks.add(pr);
tasksWaiting.put(point, pointTasks);
} else {
tasksWaiting.put(point, null);
runNow = true;
}
} else {
pointTasks.add(pr);
}
} finally {
lock.unlock();
}
if (runNow) {
executor.execute(pr);
}
}
private void taskCompleted(Long point) {
lock.lock();
try {
List<CallbackPointTask> pointTasks = tasksWaiting.get(point);
if (pointTasks == null || pointTasks.isEmpty()) {
tasksWaiting.remove(point);
} else {
System.out.println(Arrays.toString(fromPoint(point)) + " executing task " + pointTasks.size());
executor.execute(pointTasks.remove(0));
}
} finally {
lock.unlock();
}
}
// for a general callback-task, see https://stackoverflow.com/a/826283/3080094
private class CallbackPointTask implements Runnable {
final Long point;
final Runnable original;
CallbackPointTask(Long point, Runnable original) {
this.point = point;
this.original = original;
}
@Override
public void run() {
try {
original.run();
} finally {
taskCompleted(point);
}
}
}
/** Amount of points with tasks. */
public int size() {
int l = 0;
lock.lock();
try {
l = tasksWaiting.size();
} finally {
lock.unlock();
}
return l;
}
// https://stackoverflow.com/a/12772968/3080094
public static long toPoint(int x, int y) {
return (((long)x) << 32) | (y & 0xffffffffL);
}
public static int[] fromPoint(long p) {
return new int[] {(int)(p >> 32), (int)p };
}
}
This is where systems like Akka in the Java world make sense. If both X and Y are large, you may want to look at processing them using a message-passing mechanism rather than bunching them up in a huge chain of callbacks and futures. One actor has the list of tasks to be done and is handed a cell; the actor eventually computes the result and persists it. If something fails in an intermediate step, it's not the end of the world.
If I get you right, you want to execute X tasks (X is very big) in Y queues (Y is much smaller than X).
Java 8 has CompletableFuture class, which represents an (asynchronous) computation. Basically, it's Java's implementation of Promise. Here is how you can organize a chain of computations (generic types omitted):
// start the queue with a "completed" task
CompletableFuture queue = CompletableFuture.completedFuture(null);
// append a first task to the queue
queue = queue.thenRunAsync(() -> System.out.println("first task running"));
// append a second task to the queue
queue = queue.thenRunAsync(() -> System.out.println("second task running"));
// ... and so on
When you use thenRunAsync(Runnable), tasks will be executed using a thread pool (there are other possibilities - see the API docs). You can also supply your own thread pool.
You can create Y of such chains (possibly keeping references to them in some table).
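A minimal sketch of keeping one such chain per key in a table (the map-based bookkeeping is my own illustration of the idea, not something prescribed by CompletableFuture):
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
// Hypothetical per-key queues: each key owns the tail of its CompletableFuture
// chain; appending via compute() keeps tasks for the same key sequential while
// different keys run in parallel on the shared pool.
class KeyedChains {
    private final ConcurrentHashMap<Long, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();
    private final ExecutorService pool = Executors.newFixedThreadPool(16);
    void submit(long key, Runnable task) {
        tails.compute(key, (k, tail) ->
            (tail == null ? CompletableFuture.completedFuture((Void) null) : tail)
                .thenRunAsync(task, pool));
    }
}
Note that the map only ever grows in this sketch; in a long-running system you would want to remove a key's entry once its chain has completed.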
This library should do the job: https://github.com/jano7/executor
int maxTasks = 16;
ExecutorService threadPool = Executors.newFixedThreadPool(maxTasks);
KeySequentialBoundedExecutor executor = new KeySequentialBoundedExecutor(maxTasks, threadPool);
Random r = new Random();
for (int i = 0; i < 10000; i++) {
int nx = r.nextInt(10);
int ny = r.nextInt(10);
Runnable task = new Runnable() {
public void run() {
try {
System.out.println("Task is running");
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
executor.execute(new KeyRunnable<>((ny * 10) + nx, task));
}
The Scala example given below demonstrates how keys in a map can be executed in parallel while the values of a key are executed serially. Change it to Java syntax if you want to try it in Java (Scala uses JVM libraries). Basically, chain the tasks' futures to have them execute sequentially.
import java.util.concurrent.{CompletableFuture, ExecutorService, Executors, Future, TimeUnit}
import scala.collection.concurrent.TrieMap
import scala.collection.mutable.ListBuffer
import scala.util.Random
/**
* For a given Key-Value pair with tasks as values, demonstrates sequential execution of tasks
* within a key and parallel execution across keys.
*/
object AsyncThreads {
val cachedPool: ExecutorService = Executors.newCachedThreadPool
var initialData: Map[String, ListBuffer[Int]] = Map()
var processedData: TrieMap[String, ListBuffer[Int]] = TrieMap()
var runningTasks: TrieMap[String, CompletableFuture[Void]] = TrieMap()
/**
* synchronous execution across keys and values
*/
def processSync(key: String, value: Int, initialSleep: Long) = {
Thread.sleep(initialSleep)
if (key.equals("key_0")) {
println(s"${Thread.currentThread().getName} -> sleep: $initialSleep. Inserting key_0 -> $value")
}
processedData.getOrElseUpdate(key, new ListBuffer[Int]).addOne(value)
}
/**
* parallel execution across keys
*/
def processASync(key: String, value: Int, initialSleep: Long) = {
val task: Runnable = () => {
processSync(key, value, initialSleep)
}
// 1. Chain the futures for sequential execution within a key
val prevFuture = runningTasks.getOrElseUpdate(key, CompletableFuture.completedFuture(null))
runningTasks.put(key, prevFuture.thenRunAsync(task, cachedPool))
// 2. Parallel execution across keys and values
// cachedPool.submit(task)
}
def process(key: String, value: Int, initialSleep: Int): Unit = {
//processSync(key, value, initialSleep) // synchronous execution across keys and values
processASync(key, value, initialSleep) // parallel execution across keys
}
def main(args: Array[String]): Unit = {
checkDiff()
0.to(9).map(kIndex => {
var key = "key_" + kIndex
var values = ListBuffer[Int]()
initialData += (key -> values)
1.to(10).map(vIndex => {
values += kIndex * 10 + vIndex
})
})
println(s"before data:$initialData")
initialData.foreach(entry => {
entry._2.foreach(value => {
process(entry._1, value, Random.between(0, 100))
})
})
cachedPool.awaitTermination(5, TimeUnit.SECONDS)
println(s"after data:$processedData")
println("diff: " + (initialData.toSet diff processedData.toSet).toMap)
cachedPool.shutdown()
}
def checkDiff(): Unit = {
var a1: TrieMap[String, List[Int]] = new TrieMap()
a1.put("one", List(1, 2, 3, 4, 5))
a1.put("two", List(11, 12, 13, 14, 15))
var a2: TrieMap[String, List[Int]] = new TrieMap()
a2.put("one", List(2, 1, 3, 4, 5))
a2.put("two", List(11, 12, 13, 14, 15))
println("a1: " + a1)
println("a2: " + a2)
println("check.diff: " + (a1.toSet diff a2.toSet).toMap)
}
}

Java thread not responding to volatile boolean flag

I am new to Java concurrency, and I have run into a very strange problem:
I read from a large file and use several worker threads to work on the input (some complicated string-matching tasks). I use a LinkedBlockingQueue to transmit the data to the worker threads, and a volatile boolean flag in the worker class to respond to the signal when the end of the file is reached.
However, I cannot get the worker threads to stop properly. The CPU usage by this program is almost zero in the end, but the program won't terminate normally.
The simplified code is below. I have removed the real code and replaced it with a simple word counter, but the effect is the same: the worker threads won't terminate after the whole file is processed, even though the boolean flag is set to true in the main thread.
The class with main
public class MultiThreadTestEntry
{
private static String inputFileLocation = "someFile";
private static int numbOfThread = 3;
public static void main(String[] args)
{
int i = 0;
Worker[] workers = new Worker[numbOfThread];
Scanner input = GetIO.getTextInput(inputFileLocation);
String temp = null;
ExecutorService es = Executors.newFixedThreadPool(numbOfThread);
LinkedBlockingQueue<String> dataQueue = new LinkedBlockingQueue<String>(1024);
for(i = 0 ; i < numbOfThread ; i ++)
{
workers[i] = new Worker(dataQueue);
workers[i].setIsDone(false);
es.execute(workers[i]);
}
try
{
while(input.hasNext())
{
temp = input.nextLine().trim();
dataQueue.put(temp);
}
}
catch (InterruptedException e)
{
Thread.currentThread().interrupt();
}
input.close();
for(i = 0 ; i < numbOfThread ; i ++)
{
workers[i].setIsDone(true);
}
es.shutdown();
try
{
es.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
} catch (InterruptedException e)
{
Thread.currentThread().interrupt();
}
}
}
The worker class
public class Worker implements Runnable
{
private LinkedBlockingQueue<String> dataQueue = null;
private volatile boolean isDone = false;
public Worker(LinkedBlockingQueue<String> dataQueue)
{
this.dataQueue = dataQueue;
}
@Override
public void run()
{
String temp = null;
long count = 0;
System.out.println(Thread.currentThread().getName() + " running...");
try
{
while(!isDone || !dataQueue.isEmpty())
{
temp = dataQueue.take();
count = temp.length() + count;
if(count%1000 == 0)
{
System.out.println(Thread.currentThread().getName() + " : " + count);
}
}
System.out.println("Final result: " + Thread.currentThread().getName() + " : " + count);
}
catch (InterruptedException e)
{
}
}
public void setIsDone(boolean isDone)
{
this.isDone = isDone;
}
}
Any suggestions to why this happens?
Thank you very much.
As Dan Getz already said, your worker's take() waits until an element becomes available, but the queue may already be empty.
In your code you check whether the queue is empty, but nothing prevents the other workers from reading and removing an element from the queue after that check.
If the queue contains only one element and t1 and t2 are two threads,
the following could happen:
t2.isEmpty(); // -> false
t1.isEmpty(); // -> false
t2.take(); // now the queue is empty
t1.take(); // waits forever
In this case t1 would wait "forever".
You can avoid this by using poll instead of take and checking whether the result is null:
public void run()
{
String temp = null;
long count = 0;
System.out.println(Thread.currentThread().getName() + " running...");
try
{
while(!isDone || !dataQueue.isEmpty())
{
temp = dataQueue.poll(2, TimeUnit.SECONDS);
if (temp == null)
// re-check if this was really the last element
continue;
count = temp.length() + count;
if(count%1000 == 0)
{
System.out.println(Thread.currentThread().getName() + " : " + count);
}
}
System.out.println("Final result: " + Thread.currentThread().getName() + " : " + count);
}
catch (InterruptedException e)
{
// here it is important to restore the interrupted flag!
Thread.currentThread().interrupt();
}
}

Threading Issues With Fixed Thread Pool and Large Number of Tasks

I'm using a program to run the Collatz Conjecture (http://en.wikipedia.org/wiki/Collatz_conjecture) from mathematics. I've implemented a class that runs the conjecture algorithm (and gives you back the output) and one that creates a fixed thread pool (sized to my number of processors: 8) and accepts Callables which invoke the conjecture algorithm.
I created a HashSet<Callable> for all the numbers between 1 (the input type must be a positive integer) and 400,000. This hangs (seemingly) forever, but lower numbers work out just fine, which is strange. Stranger yet, running it appears to take longer to process these calls than it takes a single thread to process the same amount of information; it also bloats the memory significantly.
For instance, on my computer, the program takes less than a second to perform the algorithm (just one iteration) with 400,000 (the final value), and all the lower values take less time to compute (maybe with the exception of primes, which take longer). I'm running Windows 8.1 with 8 GB RAM and 8 logical processors at 2.2 GHz.
Code:
private static void initThreads() throws InterruptedException {
//Files.createDirectories(SEQUENCER_FOLDER_PATH);
//Files.createFile(SEQUENCER_FILE_PATH);
ExecutorService service = Executors.newFixedThreadPool(8, new ThreadFactory() {
private BigInteger count = BigInteger.ZERO;
@Override
public Thread newThread(Runnable r) {
count = count.add(BigInteger.ONE);
return new Thread(r, "Collatz Sequencer Thread: " + count);
}
});
int finalNumber = 400_000;
final HashSet<Callable<Void>> tasks = new HashSet<>(finalNumber);
for (long l = 1; l <= finalNumber; l++) {
final BigInteger number = BigInteger.valueOf(l);
tasks.add(() -> {
CollatzSequencer sequencer = new CollatzSequencer(new BigInteger(number.toString()));
synchronized (dataSet) {
dataSet.put(number, sequencer.init());
}
return null;
});
}
service.invokeAll(tasks);
Thread dataThread = new Thread(() -> {
while (true) {
synchronized (dataSet) {
if (dataSet.size() == finalNumber) {
System.err.println("Values: \n");
for (CollatzSequencer.FinalSequencerReport data : dataSet.values()) {
System.err.println("Entry: " + data.getInitialValue() + ", " + data.getIterations());
}
System.exit(0);
}
}
}
}, "Collatz Conjecture Data Set Thread");
dataThread.start();
}
Collatz Conjecture Algorithm:
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.collatzsequencer.core;
import java.math.BigInteger;
/**
* A sequencer used for computing the collatz sequence.
*
* @author Sarah Szabo
* @version 1.0
*/
public class CollatzSequencer {
private final BigInteger initialValue;
public CollatzSequencer(BigInteger currentValue) {
if (currentValue == null) {
throw new NullPointerException("Value passed can't be null");
} else if (currentValue.compareTo(new BigInteger("1")) < 0) {
throw new NumberFormatException("The value passed to the constructor must be a natural number.");
}
this.initialValue = currentValue;
}
public FinalSequencerReport init() {
return new FinalSequencerReport(performOperation(new SequencerReport(this.initialValue)), this.initialValue);
}
private SequencerReport performOperation(SequencerReport report) {
if (report.getResult().equals(new BigInteger("1"))) {
return new SequencerReport(report.getResult(), report.getIterations(), report.getSequence().length() > 1
? report.getSequence().substring(0, report.getSequence().length() - 3) : "The sequence starts and ends at 1 <Nothing Done>");
} else if (report.getResult().mod(new BigInteger("2")).equals(new BigInteger("0"))) {
BigInteger value = report.getResult().divide(new BigInteger("2"));
return performOperation(new SequencerReport(value, report.getIterations().add(new BigInteger("1")),
report.getSequence() + " " + report.getResult() + "/2 -> " + value + " ->"));
} else {
BigInteger value = report.getResult().multiply(new BigInteger("3")).add(new BigInteger("1"));
return performOperation(new SequencerReport(value, report.getIterations()
.add(new BigInteger("1")), report.getSequence() + report.getResult() + " * 3 + 1 ->" + value + " ->"));
}
}
public static final class FinalSequencerReport extends SequencerReport {
private final BigInteger initialValue;
private final String finalFormattedString;
public FinalSequencerReport(SequencerReport finalReport, BigInteger initialValue) {
super(finalReport.getResult(), finalReport.getIterations(), finalReport.getSequence());
this.initialValue = initialValue;
this.finalFormattedString = "Initial Value: "
+ getInitialValue() + "\nFinal Value: " + getResult() + "\nIterations: "
+ getIterations() + "\nAlgebraic Sequence:\n" + getSequence();
}
public String getFinalFormattedString() {
return finalFormattedString;
}
public BigInteger getInitialValue() {
return initialValue;
}
}
public static class SequencerReport {
private final BigInteger result, iterations;
private final String sequence;
public SequencerReport(BigInteger result) {
this(result, new BigInteger("0"), "");
}
public SequencerReport(BigInteger result, BigInteger iterations, String sequence) {
this.result = result;
this.iterations = iterations;
this.sequence = sequence;
}
public BigInteger getResult() {
return this.result;
}
public BigInteger getIterations() {
return this.iterations;
}
public String getSequence() {
return this.sequence;
}
}
}
As you said, your code works; the problem is probably just performance. Some things I would try:
Use long instead of BigInteger. BigInteger is very slow.
Instead of mod 2 (or % 2), use & 1. The bitwise AND gives the same result for non-negative values and is much faster (see the sketch after this list).
You are doing way, way too much String manipulation. Override SequencerReport.toString() and have it do the toString calls all at the end, when you're printing the data.
Don't do new ThreadFactory(). Use Guava's ThreadFactoryBuilder.
You should never call new Thread() directly in your code unless you really know what you're doing, which in practice means: don't do it.
Add a wait/notify mechanism for dataThread instead of a busy loop. Call dataSet.notify() when the work is done and dataSet.wait() inside the dataThread body.
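As a rough sketch of the first two suggestions (primitive long arithmetic and & 1 instead of % 2), here is an iterative counter; this is my own rewrite under the assumption that intermediate values fit in a long, not the original code:
// Counts Collatz iterations for n using primitive long arithmetic.
// Assumes intermediate values do not overflow a long.
static long collatzIterations(long n) {
    if (n < 1) {
        throw new IllegalArgumentException("n must be a natural number");
    }
    long iterations = 0;
    while (n != 1) {
        if ((n & 1) == 0) { // even: same test as n % 2 == 0, but cheaper
            n >>= 1;        // n / 2
        } else {
            n = 3 * n + 1;
        }
        iterations++;
    }
    return iterations;
}
Building the algebraic-sequence string can be deferred (or dropped) until you actually print a report; concatenating it on every step is a large part of the cost.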

Many producers and many consumers: making the last living producer kill the consumers

I have a standard producer-consumer problem. Producers put data into a stack (buffer) and consumers take it out.
I would like to have many producers and many consumers.
The problem is that I would like only the last living producer to be able to call b.stop():
for(int i = 0; i < 10; i++){
try{
// sleep((int)(Math.random() * 1));
}catch(Exception e){e.printStackTrace();}
b.put((int) (Math.random()* 10));
System.out.println("i = " + i);
}
b.stop();
Then I call b.stop(), which sets the running field in Buffer to false and calls notifyAll().
And then I get:
i = 9 // number of iteration this is 10th iteration
Consumer 2.: no data to take. I wait. Memory: 0
Consumer 1.: no data to take. I wait. Memory: 0
Consumer 3.: no data to take. I wait. Memory: 0
They should die then, which is why I wrote the stop() method, but it did not work.
The code below runs; please check it:
import java.util.Stack;
public class Buffer {
private static int SIZE = 4;
private int i;//number of elements in buffer
public Stack<Integer> stack;
private volatile boolean running;
public Buffer() {
stack = new Stack<>();
running = true;
i = 0;
}
synchronized public void put(int val){
while (i >= SIZE) {
try {
System.out.println("Buffer full, producer waits");
wait();
} catch (InterruptedException exc) {
exc.printStackTrace();
}
}
stack.push(val);//txt = s;
i++;
System.out.println("Producer inserted " + val + " memory: " + i);
if(i - 1 == 0)
notifyAll();
System.out.println(stack);
}
public synchronized Integer get(Consumer c) {
while (i == 0) {
try {
System.out.println(c + ": no data to take. I wait. Memory: " + i);
wait();
} catch (InterruptedException exc) {
exc.printStackTrace();
}
}
if(running){
int data = stack.pop();
i--;
System.out.println(c+ ": I took: " + data +" memory: " + i);
System.out.println(stack);
if(i + 1 == SIZE){//if the buffer was full so the producer is waiting
notifyAll();
System.out.println(c + "I notified producer about it");
}
return data;}
else
return null;
}
public boolean isEmpty(){
return i == 0;
}
public synchronized void stop(){//I THOUGHT THIS WOULD FIX IT~!!!!!!!!!!!!!!
running = false;
notifyAll();
}
public boolean isRunning(){
return running;
}
}
public class Producer extends Thread {
private Buffer b;
public Producer(Buffer b) {
this.b = b;
}
public void run(){
for(int i = 0; i < 10; i++){
try{
// sleep((int)(Math.random() * 1));
}catch(Exception e){e.printStackTrace();}
b.put((int) (Math.random()* 10));
System.out.println("i = " + i);
}
b.stop();
}
}
public class Consumer extends Thread {
Buffer b;
int nr;
static int NR = 0;
public Consumer(Buffer b) {
this.b = b;
nr = ++NR;
}
public void run() {
Integer i = b.get(this);
while (i != null) {
System.out.println(nr + " I received : " + i);
i = b.get(this);
}
System.out.println("Consumer " + nr + " is dead");
}
public String toString() {
return "Consumer " + nr + ".";
}
}
public class Main {
public static void main(String[] args) {
Buffer b = new Buffer();
Producer p = new Producer(b);
Consumer c1 = new Consumer(b);
Consumer c2 = new Consumer(b);
Consumer c3 = new Consumer(b);
p.start();
c1.start();c2.start();c3.start();
}
}
What you have to realise is that your threads could be waiting in either of two locations:
In the wait loop with i == 0 - in which case notifyAll will kick all of them out. However, if i is still 0 they will go straight back to waiting again.
Waiting for exclusive access to the object (i.e. waiting on a synchronized method) - in which case (once issue 1 above is fixed and the lock is released) they will go straight into the while (i == 0) loop.
I would suggest you change your while (i == 0) loop to while (running && i == 0). This should fix your problem. Since your running flag is (correctly) volatile, all threads should tidily exit.
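A minimal sketch of get() with that change (my adaptation of the question's code; it also drains any remaining elements before returning null, so consumers finish outstanding work):
public synchronized Integer get(Consumer c) {
    while (running && i == 0) { // also leave the wait loop once stop() has been called
        try {
            System.out.println(c + ": no data to take. I wait. Memory: " + i);
            wait();
        } catch (InterruptedException exc) {
            exc.printStackTrace();
        }
    }
    if (i == 0) {
        return null; // stopped and drained: the consumer can die
    }
    int data = stack.pop();
    i--;
    if (i + 1 == SIZE) { // the producer may be waiting on a full buffer
        notifyAll();
    }
    return data;
}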
In your stop method, you set running to false, but your while loop keeps waiting as long as i == 0. Set i to something other than zero and it should fix it.
BTW, I don't understand why you have a running variable and a separate i variable, which is actually the variable that keeps a thread running.
I would rethink your design. Classes should have a coherent set of responsibilities; making a class responsible for both consuming objects off the queue, while also being responsible for shutting down other consumers, seems to be something you'd want to seperate.
In answer to the requirement to make only the last living producer able to call b.stop():
You should add an AtomicInteger to your Buffer containing the number of producers and make each producer call b.start() (which increments it) in its constructor.
That way you can decrement it in b.stop() and only when it has gone to zero should running be set to false.
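A minimal sketch of that counting approach; start() and the AtomicInteger field are hypothetical additions to Buffer, not part of the original code (requires import java.util.concurrent.atomic.AtomicInteger):
// Hypothetical additions to Buffer: track live producers so that only the
// last producer to finish actually stops the consumers.
private final AtomicInteger liveProducers = new AtomicInteger();

public void start() { // each Producer calls this in its constructor
    liveProducers.incrementAndGet();
}

public synchronized void stop() { // each Producer calls this when it is done
    if (liveProducers.decrementAndGet() == 0) {
        running = false;
        notifyAll();
    }
}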

Distributing each thread a Particular Range

I am using ThreadPoolExecutor in my multithreading program. I want each thread to have a particular range of IDs: if ThreadSize is set to 10, Start = 1 and End = 1000, then each thread would get a range of 100 IDs (basically by dividing the end range by the thread size) that it can use without stepping on other threads.
Thread1 will use 1 to 100 (id's)
Thread2 will use 101 to 200 (id's)
Thread3 will use 201 to 300 (id's)
-----
-----
Thread10 will use 901 to 1000
I basically know the logic; it can be like this:
Each thread gets `N = (End - Start + 1) / ThreadSize` numbers.
Thread number `i` gets range `(Start + i*N) - (Start + i*N + N - 1)`.
As I am working with ThreadPoolExecutor for the first time, I am not sure where I should apply this logic in my code so that each thread uses a predefined range of IDs without stepping on other threads. Any suggestions will be appreciated.
public class CommandExecutor {
private List<Command> commands;
ExecutorService executorService;
private static int noOfThreads = 3;
// Singleton
private static CommandExecutor instance;
public static synchronized CommandExecutor getInstance() {
if (instance == null) {
instance = new CommandExecutor();
}
return instance;
}
private CommandExecutor() {
try {
executorService = Executors.newFixedThreadPool(noOfThreads);
} catch(Exception e) {
System.out.println(e);
}
}
// Get the next command to execute based on percentages
private synchronized Command getNextCommandToExecute() {
}
// Runs the next command
public synchronized void runNextCommand() {
// If there are any free threads in the thread pool
if (!(((ThreadPoolExecutor) executorService).getActiveCount() < noOfThreads))
return;
// Get command to execute
Command nextCommand = getNextCommandToExecute();
// Create a runnable wrapping that command
Task nextCommandExecutorRunnable = new Task(nextCommand);
executorService.submit(nextCommandExecutorRunnable); // Submit it for execution
}
// Implementation of runnable (the real unit level command executor)
private static final class Task implements Runnable {
private Command command;
public Task(Command command) {
this.command = command;
}
public void run() {
// Run the command
command.run();
}
}
// A wrapper class that invoked at every certain frequency, asks CommandExecutor to execute next command (if any free threads are available)
private static final class CoreTask implements Runnable {
public void run() {
CommandExecutor commandExecutor = CommandExecutor.getInstance();
commandExecutor.runNextCommand();
}
}
// Main Method
public static void main(String args[]) {
// Scheduling the execution of any command every 10 milli-seconds
Runnable coreTask = new CoreTask();
ScheduledFuture<?> scheduledFuture = Executors.newScheduledThreadPool(1).scheduleWithFixedDelay(coreTask, 0, 10, TimeUnit.MILLISECONDS);
}
}
Whether this is a good idea or not, I will leave for you to decide. But to give you a hand, I wrote a little program that does what you want... in my case I am just summing over the "ids".
Here is the code:
public class Driver {
private static final int N = 5;
public static void main(String args[]) throws InterruptedException, ExecutionException{
int startId = 1;
int endId = 1000;
int range = (1 + endId - startId) / N;
ExecutorService ex = Executors.newFixedThreadPool(N);
List<Future<Integer>> futures = new ArrayList<Future<Integer>>(N);
// submit all the N threads
for (int i = startId; i < endId; i += range) {
futures.add(ex.submit(new SumCallable(i, range+i-1)));
}
// get all the results
int result = 0;
for (int i = 0; i < futures.size(); i++) {
result += futures.get(i).get();
}
System.out.println("Result of summing over everything is : " + result);
}
private static class SumCallable implements Callable<Integer> {
private int from, to, count;
private static int countInstance = 1;
public SumCallable(int from, int to) {
this.from = from;
this.to = to;
this.count = countInstance;
System.out.println("Thread " + countInstance++ + " will use " + from + " to " + to);
}
// example implementation: sums over all integers between from and to, inclusive.
@Override
public Integer call() throws Exception {
int result = 0;
for (int i = from; i <= to; i++) {
result += i;
}
System.out.println("Thread " + count + " got result : " + result);
return result;
}
}
}
which produces the following output (notice that in true multi-thread fashion, you have print statements in random order, as the threads are executed in whatever order the system decides):
Thread 1 will use 1 to 200
Thread 2 will use 201 to 400
Thread 1 got result : 20100
Thread 3 will use 401 to 600
Thread 2 got result : 60100
Thread 4 will use 601 to 800
Thread 3 got result : 100100
Thread 5 will use 801 to 1000
Thread 4 got result : 140100
Thread 5 got result : 180100
Result of summing over everything is : 500500
