Multithreaded programming in Java, using semaphores

I'm learning Java multithreading and I have a problem: I can't understand semaphores. How can I execute threads in a given order? For example, in image 1, the 5th thread starts running only after the 1st and 2nd have finished executing.
[Image 1 and Image 2: diagrams of the intended thread execution order, uploaded for clarity]

Usually in Java you use mutexes (also called monitors), which prevent two or more threads from accessing the code region protected by that mutex.
That code region is defined using the synchronized statement:
synchronized (mutex) {
    // mutual exclusive code begin
    // ...
    // ...
    // mutual exclusive code end
}
where mutex is defined as, e.g.:
Object mutex = new Object();
To prevent a task from being started before others finish, you need more advanced techniques, such as the barriers and latches defined in the java.util.concurrent package.
But first make yourself comfortable with the synchronized statement.
If you think that you will often use multithreading in Java, you might want to read
"Java Concurrency in Practice"
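For the specific ordering in the question (thread 5 starts only after threads 1 and 2 finish), a CountDownLatch from java.util.concurrent is often the most direct tool. A minimal sketch (class and thread names here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class StartAfterOthers {
    static final List<String> order = Collections.synchronizedList(new ArrayList<>());

    public static void main(String[] args) throws InterruptedException {
        // Thread 5 may only start its work after threads 1 and 2 have finished.
        CountDownLatch firstTwoDone = new CountDownLatch(2);

        Runnable worker = () -> {
            order.add(Thread.currentThread().getName());
            firstTwoDone.countDown(); // signal "I am finished"
        };
        Thread t1 = new Thread(worker, "thread-1");
        Thread t2 = new Thread(worker, "thread-2");

        Thread t5 = new Thread(() -> {
            try {
                firstTwoDone.await(); // blocks until the count reaches zero
                order.add(Thread.currentThread().getName());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "thread-5");

        t5.start(); // starting order does not matter ...
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        t5.join();
        System.out.println(order); // ... "thread-5" is always last
    }
}
```

The latch is a one-shot barrier: each countDown() is remembered, so thread 5 proceeds even if it calls await() after both workers have already finished.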

synchronized is used so that each thread will enter that method or that portion of the code one at a time. If you want to let a bounded number of threads in instead of just one, you can build a counting semaphore on top of wait()/notify():
public class CountingSemaphore {
    private int value = 0;
    private int waitCount = 0;
    private int notifyCount = 0;

    public CountingSemaphore(int initial) {
        if (initial > 0) {
            value = initial;
        }
    }

    public synchronized void waitForNotify() {
        if (value <= waitCount) {
            waitCount++;
            try {
                do {
                    wait();
                } while (notifyCount == 0);
            } catch (InterruptedException e) {
                notify();
            } finally {
                waitCount--;
            }
            notifyCount--;
        }
        value--;
    }

    public synchronized void notifyToWakeup() {
        value++;
        if (waitCount > notifyCount) {
            notifyCount++;
            notify();
        }
    }
}
This is an implementation of a counting semaphore. It maintains the counter variables 'value', 'waitCount' and 'notifyCount'. A thread waits if 'value' is no greater than 'waitCount', and keeps waiting until 'notifyCount' becomes non-zero.
You can use Java's built-in counting semaphore, java.util.concurrent.Semaphore. Conceptually, a semaphore maintains a set of permits. Each acquire() blocks if necessary until a permit is available, and then takes it. Each release() adds a permit, potentially releasing a blocking acquirer. However, no actual permit objects are used; the Semaphore just keeps a count of the number available and acts accordingly.
Semaphores are often used to restrict the number of threads that can access some (physical or logical) resource. For example, here is a class that uses a semaphore to control access to a pool of items:
class Pool {
    private static final int MAX_AVAILABLE = 100;
    private final Semaphore available = new Semaphore(MAX_AVAILABLE, true);

    public Object getItem() throws InterruptedException {
        available.acquire();
        return getNextAvailableItem();
    }

    public void putItem(Object x) {
        if (markAsUnused(x))
            available.release();
    }

    // Not a particularly efficient data structure; just for demo
    protected Object[] items = ... whatever kinds of items being managed
    protected boolean[] used = new boolean[MAX_AVAILABLE];

    protected synchronized Object getNextAvailableItem() {
        for (int i = 0; i < MAX_AVAILABLE; ++i) {
            if (!used[i]) {
                used[i] = true;
                return items[i];
            }
        }
        return null; // not reached
    }

    protected synchronized boolean markAsUnused(Object item) {
        for (int i = 0; i < MAX_AVAILABLE; ++i) {
            if (item == items[i]) {
                if (used[i]) {
                    used[i] = false;
                    return true;
                } else
                    return false;
            }
        }
        return false;
    }
}
Before obtaining an item each thread must acquire a permit from the semaphore, guaranteeing that an item is available for use. When the thread has finished with the item it is returned back to the pool and a permit is returned to the semaphore, allowing another thread to acquire that item. Note that no synchronization lock is held when acquire() is called as that would prevent an item from being returned to the pool. The semaphore encapsulates the synchronization needed to restrict access to the pool, separately from any synchronization needed to maintain the consistency of the pool itself.
A semaphore initialized to one, and which is used such that it only has at most one permit available, can serve as a mutual exclusion lock. This is more commonly known as a binary semaphore, because it only has two states: one permit available, or zero permits available. When used in this way, the binary semaphore has the property (unlike many Lock implementations), that the "lock" can be released by a thread other than the owner (as semaphores have no notion of ownership). This can be useful in some specialized contexts, such as deadlock recovery.
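A small sketch of that ownership property, using java.util.concurrent.Semaphore (the class and method names below are illustrative): the "lock" is taken by one thread and released by a different one, which a ReentrantLock would reject with an IllegalMonitorStateException.

```java
import java.util.concurrent.Semaphore;

public class BinarySemaphoreDemo {
    // Returns the number of available permits at the end: expected 1.
    static int demo() throws InterruptedException {
        Semaphore lock = new Semaphore(1); // binary semaphore: at most one permit

        lock.acquire(); // "lock" taken by the main thread

        // Unlike most Lock implementations, another thread may legally
        // release it, because semaphores have no notion of an owner:
        Thread rescuer = new Thread(lock::release);
        rescuer.start();
        rescuer.join();

        lock.acquire(); // succeeds immediately: the rescuer returned the permit
        lock.release();
        return lock.availablePermits();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // 1
    }
}
```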

Related

Terribly slow synchronization

I'm trying to write Game of Life with many threads, 1 cell = 1 thread. It requires synchronization between the threads, so that no thread starts calculating its new state before the other threads have finished reading the previous state. Here is my code:
public class Cell extends Processor {
    private static int count = 0;
    private static Semaphore waitForAll = new Semaphore(0);
    private static Semaphore waiter = new Semaphore(0);
    private IntField isDead;

    public Cell(int n) {
        super(n);
        count++;
    }

    public void initialize() {
        this.algorithmName = Cell.class.getSimpleName();
        isDead = new IntField(0);
        this.addField(isDead, "state");
    }

    public synchronized void step() {
        int size = neighbours.size();
        IntField[] states = new IntField[size];
        int readElementValue = 0;
        IntField readElement;
        sendAll(new IntField(isDead.getDist()));
        Cell.waitForAll.release();
        // here wait until all other threads finish reading
        while (Cell.waitForAll.availablePermits() != Cell.count) {
        }
        // here release the semaphore needed lower down
        Cell.waiter.release();
        for (int i = 0; i < neighbours.size(); i++) {
            readElement = (IntField) reciveMessage(neighbours.get(i));
            states[i] = (IntField) reciveMessage(neighbours.get(i));
        }
        int alive = 0;
        int dead = 0;
        for (IntField ii : states) {
            if (ii.getDist() == 1)
                alive++;
            else
                dead++;
        }
        if (isDead.getDist() == 0) {
            if (alive == 3)
                isDead.setValue(1);
        } else {
            if (alive != 3 && alive != 2)
                isDead.setValue(0);
        }
        try {
            while (Cell.waiter.availablePermits() != Cell.count) {
                // if every thread finished reading we can acquire this semaphore
            }
            Cell.waitForAll.acquire();
            while (Cell.waitForAll.availablePermits() != 0)
                ;
            // here we make sure every thread ends the step at the same moment
            Cell.waiter.acquire();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
The Processor class extends Thread, and in its run method, if I turn the switch on, it calls the step() method. Well, it works nicely for a small number of cells, but when I run about 36 cells it starts to be very slow. How can I repair my synchronization so it would be faster?
Using large numbers of threads tends not to be very efficient, but 36 is not so many that I would expect that in itself to produce a difference that you would characterize as "very slow". I think more likely the problem is inherent in your strategy. In particular, I suspect this busy-wait is problematic:
Cell.waitForAll.release();
// here wait until all other threads finish reading
while (Cell.waitForAll.availablePermits() != Cell.count) {
}
Busy-waiting is always a performance problem because you are tying up the CPU with testing the condition over and over again. This busy-wait is worse than most, because it involves testing the state of a synchronization object, and this not only has extra overhead, but also introduces extra interference among threads.
Instead of busy-waiting, you want to use one of the various methods for making threads suspend execution until a condition is satisfied. It looks like what you've actually done is created a poor-man's version of a CyclicBarrier, so you might consider instead using CyclicBarrier itself. Alternatively, since this is a learning exercise you might benefit from learning how to use Object.wait(), Object.notify(), and Object.notifyAll() -- Java's built-in condition variable implementation.
If you insist on using semaphores, then I think you could do it without the busy-wait. The key to using semaphores is that it is being able to acquire the semaphore (at all) that indicates that the thread can proceed, not the number of available permits. If you maintain a separate variable with which to track how many threads are waiting on a given semaphore at a given point, then each thread reaching that point can determine whether to release all the other threads (and proceed itself) or whether to block by attempting to acquire the semaphore.
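With a CyclicBarrier as suggested, both busy-wait loops collapse into await() calls. Below is a simplified, self-contained sketch of that structure only (it flips dummy cell states instead of running the real Game of Life rules, and all names are made up): every worker snapshots the shared state, waits at the barrier so nobody writes while others are still reading, then writes, then waits again so nobody starts the next read while others are still writing.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierCells {
    // Runs nSteps synchronized steps over nCells worker threads and
    // returns the number of "alive" cells at the end.
    static int runSteps(int nCells, int nSteps) throws InterruptedException {
        int[] grid = new int[nCells];
        CyclicBarrier barrier = new CyclicBarrier(nCells);
        Thread[] workers = new Thread[nCells];
        for (int c = 0; c < nCells; c++) {
            final int cell = c;
            workers[c] = new Thread(() -> {
                try {
                    for (int s = 0; s < nSteps; s++) {
                        int[] snapshot = grid.clone(); // read phase
                        barrier.await();               // everyone finished reading
                        grid[cell] = 1 - snapshot[cell]; // write phase (flip own cell)
                        barrier.await();               // everyone finished writing
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            });
            workers[c].start();
        }
        for (Thread t : workers) t.join();
        int sum = 0;
        for (int v : grid) sum += v;
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runSteps(8, 3)); // an odd number of flips: all cells end up 1
    }
}
```

No writer can touch the grid until the first await() has been passed by all parties, and await() also gives the happens-before guarantees the busy-wait version lacks.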

Are unsynchronized reads (combined with synchronized writes) eventually consistent

I have a use case with many writer threads and a single reader thread. The data being written is an event counter which is being read by a display thread.
The counter only ever increases and the display is intended for humans, so the exact point-in-time value is not critical. For this purpose, I would consider a solution to be correct as long as:
The value seen by the reader thread never decreases.
Reads are eventually consistent. After a certain amount of time without any writes, all reads will return the exact value.
Assuming writers are properly synchronized with each other, is it necessary to synchronize the reader thread with the writers in order to guarantee correctness, as defined above?
A simplified example. Would this be correct, as defined above?
public class Eventual {
    private static class Counter {
        private int count = 0;
        private Lock writeLock = new ReentrantLock();

        // Unsynchronized reads
        public int getCount() {
            return count;
        }

        // Synchronized writes
        public void increment() {
            writeLock.lock();
            try {
                count++;
            } finally {
                writeLock.unlock();
            }
        }
    }

    public static void main(String[] args) {
        List<Thread> contentiousThreads = new ArrayList<>();
        final Counter sharedCounter = new Counter();

        // 5 synchronized writer threads
        for (int i = 0; i < 5; ++i) {
            contentiousThreads.add(new Thread(new Runnable() {
                @Override
                public void run() {
                    for (int i = 0; i < 20_000; ++i) {
                        sharedCounter.increment();
                        safeSleep(1);
                    }
                }
            }));
        }

        // 1 unsynchronized reader thread
        contentiousThreads.add(new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 30; ++i) {
                    // This value should:
                    // + Never decrease
                    // + Reach 100,000 if we are eventually consistent.
                    System.out.println("Count: " + sharedCounter.getCount());
                    safeSleep(1000);
                }
            }
        }));

        contentiousThreads.stream().forEach(t -> t.start());

        // Just cleaning up...
        // For the question, assume readers/writers run indefinitely
        try {
            for (Thread t : contentiousThreads) {
                t.join();
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    private static void safeSleep(int ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            // Don't care about error handling for now.
        }
    }
}
There is no guarantee that the readers would ever see an update to the count. A simple fix is to make count volatile.
As noted in another answer, in your current example the "Final Count" will be correct, because the main thread joins the writer threads (thus establishing a happens-before relationship). However, your reader thread is never guaranteed to see any update to the count.
JTahlborn is correct, +1 from me. I was rushing and misread the question, I was assuming wrongly that the reader thread was the main thread.
The main thread can display the final count correctly due to the happens-before relationship:
All actions in a thread happen-before any other thread successfully returns from a join on that thread.
Once the main thread has joined all the writers, the counter's updated value is visible to it. However, there is no happens-before relationship forcing the reader's view to be updated; you are at the mercy of the JVM implementation. There is no promise in the JLS about values becoming visible after enough time passes; it is left open to the implementation. The counter value could be cached, and the reader could possibly never see any updates whatsoever.
Testing this on one platform gives no assurance of what other platforms will do, so don't think this is OK just because the test passes on your PC. How many of us develop on the same platform we deploy to?
Using volatile on the counter or using AtomicInteger would be good fixes. Using AtomicInteger would also allow removing the lock from the writer threads. Using volatile without locking would be OK only where there is just one writer; when two or more writers are present, the fact that ++ and += are not atomic becomes an issue. Using an Atomic class is the better choice.
(Btw, swallowing the InterruptedException isn't "safe"; it just makes the thread unresponsive to interruption, which happens when your program asks the thread to finish early.)
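As a sketch of the AtomicInteger fix (a simplified, made-up stand-in for the Eventual example above, with the sleeps removed so it runs quickly): incrementAndGet() is an atomic read-modify-write, and get() has volatile read semantics, so the reader always sees a monotonically increasing and eventually exact count without any lock.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class EventualAtomic {
    private static final AtomicInteger count = new AtomicInteger();

    // Spawns nThreads writers, each incrementing perThread times,
    // and returns the final count after all of them have finished.
    static int writeFromManyThreads(int nThreads, int perThread) throws InterruptedException {
        Thread[] writers = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            writers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    count.incrementAndGet(); // atomic read-modify-write, no lock needed
                }
            });
            writers[i].start();
        }
        for (Thread t : writers) t.join();
        return count.get(); // volatile read: always sees the latest value
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(writeFromManyThreads(5, 20_000)); // 100000
    }
}
```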

Why is a semaphore used when using synchronization?

I was reading about semaphores, and in the code example it confused me why a semaphore was used when the code uses synchronization around the method that is ultimately called. Isn't that doing the same thing, i.e. restricting access to 1 thread at a time to perform the mutation?
class Pool {
    private static final int MAX_AVAILABLE = 100;
    private final Semaphore available = new Semaphore(MAX_AVAILABLE, true);

    public Object getItem() throws InterruptedException {
        available.acquire();
        return getNextAvailableItem();
    }

    public void putItem(Object x) {
        if (markAsUnused(x))
            available.release();
    }

    // Not a particularly efficient data structure; just for demo
    protected Object[] items = ... whatever kinds of items being managed
    protected boolean[] used = new boolean[MAX_AVAILABLE];

    protected synchronized Object getNextAvailableItem() {
        for (int i = 0; i < MAX_AVAILABLE; ++i) {
            if (!used[i]) {
                used[i] = true;
                return items[i];
            }
        }
        return null; // not reached
    }

    protected synchronized boolean markAsUnused(Object item) {
        for (int i = 0; i < MAX_AVAILABLE; ++i) {
            if (item == items[i]) {
                if (used[i]) {
                    used[i] = false;
                    return true;
                } else
                    return false;
            }
        }
        return false;
    }
}
I'm referring to the call to getItem() which calls acquire(), and then calls getNextAvailableItem, but that is synchronized anyhow.
What am I missing?
Reference: http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Semaphore.html
The semaphore and the synchronized block are doing two different jobs.
The synchronized keyword is protecting getNextAvailableItem() when it is accessing and mutating the array of items. An operation that would corrupt if it was not restricted to one thread at a time.
The semaphore will allow up to 100 threads through, significantly more than 1. Its purpose in this code sample is to block requests for an object from the pool when the pool is empty, and to then unblock one thread when an object is returned to the pool. Without the semaphore, things would look like they were working until the pool was empty. At that time requesting threads would not block and wait for an object to be returned, but would instead receive null.
A Semaphore gives you a thread-safe counter that blocks acquire when the count would drop below zero; release can be used to undo an acquire.
It guarantees that if a call to acquire succeeds, a free item exists.
In the sample there are loops that look for a free item. Using a Semaphore ensures that none of those loops is entered until there is a free item.
synchronized only guarantees that one thread at a time can execute that section of code.
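A stripped-down sketch of that difference (a hypothetical TinyPool, not the Javadoc example): with only synchronized, an empty pool would hand back null; with the semaphore, callers block until an item is returned, or, using tryAcquire with a timeout, fail fast instead.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class TinyPool {
    private final Semaphore available;
    private int free; // bookkeeping guarded by synchronized, as in the Pool example

    TinyPool(int size) {
        available = new Semaphore(size, true);
        free = size;
    }

    // Returns true if an item was handed out within the timeout,
    // false if the pool stayed empty the whole time.
    boolean tryGetItem(long millis) throws InterruptedException {
        if (!available.tryAcquire(millis, TimeUnit.MILLISECONDS)) {
            return false; // pool empty: fail fast instead of returning null
        }
        synchronized (this) { free--; } // synchronized guards the shared bookkeeping
        return true;
    }

    void putItem() {
        synchronized (this) { free++; }
        available.release(); // unblocks one waiting acquirer, if any
    }
}
```

Exhausting a two-item pool makes the third request fail until something is put back; the semaphore, not the synchronized block, is what provides that blocking behaviour.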

Understanding Multi-Threading in Java

I am learning multithreading in Java. The problem statement is: suppose there is a data structure that can contain millions of Integers, and I want to search for a key in it. I want to use 2 threads so that if either thread finds the key, it sets a shared boolean flag and both threads stop further processing.
Here is what I am trying:
public class Test implements Runnable {
    private List<Integer> list;
    private Boolean value;
    private int key = 27;

    public Test(List<Integer> list, boolean value) {
        this.list = list;
        this.value = value;
    }

    @Override
    public void run() {
        synchronized (value) {
            if (value) {
                Thread.currentThread().interrupt();
            }
            for (int i = 0; i < list.size(); i++) {
                if (list.get(i) == key) {
                    System.out.println("Found by: " + Thread.currentThread().getName());
                    value = true;
                    Thread.currentThread().interrupt();
                }
                System.out.println(Thread.currentThread().getName() + ": " + list.get(i));
            }
        }
    }
}
And main class is:
public class MainClass {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>(101);
        for (int i = 0; i <= 100; i++) {
            list.add(i);
        }
        Boolean value = false;
        Thread t1 = new Thread(new Test(list.subList(0, 49), value));
        t1.setName("Thread 1");
        Thread t2 = new Thread(new Test(list.subList(50, 99), value));
        t2.setName("Thread 2");
        t1.start();
        t2.start();
    }
}
What I am expecting:
Both threads will run concurrently, and when either thread encounters 27, both threads will be interrupted. So thread 1 should not be able to process all of its inputs, and similarly thread 2.
But what is happening:
Both threads are completing the loop, and thread 2 always starts after thread 1 completes.
Please highlight the mistakes; I am still learning threading.
My next practice question will be: access a shared resource one thread at a time.
You are wrapping your whole block of code in a synchronized block on the object value. What this means is that once execution arrives at the synchronized block, the first thread will hold the monitor of object value, and any subsequent threads will block until the monitor is released.
Note how the whole block:
synchronized (value) {
    if (value) {
        Thread.currentThread().interrupt();
    }
    for (int i = 0; i < list.size(); i++) {
        if (list.get(i) == key) {
            System.out.println("Found by: " + Thread.currentThread().getName());
            value = true;
            Thread.currentThread().interrupt();
        }
        System.out.println(Thread.currentThread().getName() + ": " + list.get(i));
    }
}
is wrapped within a synchronized block, meaning that only one thread can run that block at once, contrary to your objective.
In this context, I believe you are misunderstanding the principles behind synchronization and "sharing variables". To clarify:
static - the variable modifier used to make a variable global across objects (i.e. a class variable), such that every object shares the same static variable.
volatile - the variable modifier used to guarantee that reads of a variable always see the most recent write from any thread. Note that you can still access a non-volatile variable from different threads (this is, however, dangerous and can lead to race conditions), and volatile alone does not make compound operations like ++ atomic. Threads have no effect on the scope of variables (unless you use a ThreadLocal).
I would just like to add that you can't put volatile everywhere and expect code to be thread-safe. I suggest you read Oracle's guide on synchronization for a more in-depth review of how to establish thread safety.
In your case, I would remove the synchronization block and declare the shared boolean as:
private static volatile boolean value;
Additionally, the task you are trying to perform right now is something a Fork/Join pool is built for. I suggest reading this part of Oracle's java tutorials to see how a Fork/Join pool is used in a divide-and-conquer approach.
By wrapping the main logic of your thread in a synchronized block, execution of the code in that block becomes mutually exclusive. Thread 1 will enter the block, acquiring a lock on "value", and run the entire loop before releasing the lock and allowing Thread 2 to run.
If you were to wrap only the checking and setting of the flag "value", then both threads should run the code concurrently.
EDIT: As other people have discussed making "value" a static volatile boolean within the Test class, and not using the synchronized block at all, would also work. This is because access to volatile variables occurs as if it were in a synchronized block.
Reference: https://docs.oracle.com/javase/tutorial/essential/concurrency/locksync.html
You should not obtain a lock on the found flag - that will just make sure only one thread can run. Instead make the flag static so it is shared and volatile so it cannot be cached.
Also, you should check the flag more often.
private List<Integer> list;
private int key = 27;
private static volatile boolean found;

public Test(List<Integer> list, boolean value) {
    this.list = list;
    found = value;
}

@Override
public void run() {
    for (int i = 0; i < list.size(); i++) {
        // Has the other thread found it?
        if (found) {
            Thread.currentThread().interrupt();
        }
        if (list.get(i) == key) {
            System.out.println("Found by: " + Thread.currentThread().getName());
            // I found it!
            found = true;
            Thread.currentThread().interrupt();
        }
        System.out.println(Thread.currentThread().getName() + ": " + list.get(i));
    }
}
BTW: Both of your threads start at 0 and walk up the array. I presume this is just for demonstration; you should either have them work from opposite ends or have them walk in random order.
Make the boolean value static so both threads access and edit the same variable; you then don't need to pass it in. Then, as soon as one thread changes it to true, the second thread will also stop, since it is using the same value.
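Putting the advice from these answers together, here is one way the whole exercise might look, using an AtomicBoolean (equivalent here to a static volatile boolean) instead of the synchronized block, and checking the flag on every iteration. Class and method names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

public class ParallelSearch {
    static final AtomicBoolean found = new AtomicBoolean(false);

    // Builds a thread that scans its part of the list, bailing out
    // as soon as the shared flag reports that the key was found.
    static Thread searcher(List<Integer> part, int key) {
        return new Thread(() -> {
            for (int v : part) {
                if (found.get()) return;       // the other thread already found it
                if (v == key) found.set(true); // visible to the other thread immediately
            }
        });
    }

    static boolean search(List<Integer> list, int key) throws InterruptedException {
        found.set(false);
        int mid = list.size() / 2;
        Thread t1 = searcher(list.subList(0, mid), key);
        Thread t2 = searcher(list.subList(mid, list.size()), key);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return found.get();
    }

    public static void main(String[] args) throws InterruptedException {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i <= 100; i++) list.add(i);
        System.out.println(search(list, 27));  // true
        System.out.println(search(list, 500)); // false
    }
}
```

Because there is no lock, both threads genuinely run concurrently, and the flag read at the top of each iteration is what makes the early exit possible.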

Thread Mutual Exclusive Section

Hello, I just had a phone interview and was not able to answer this question; I would like to know the answer. I believe it's advisable to reach out for answers you don't know. Please help me understand the concept.
His question was:
"The synchronized block only allows one thread a time into the mutual exclusive section.
When a thread exits the synchronized block, the synchronized block does not specify
which of the waiting threads will be allowed next into the mutual exclusive section?
Using synchronized and methods available in Object, can you implement first-come,
first-serve mutual exclusive section? One that guarantees that threads are let into
the mutual exclusive section in the order of arrival? "
public class Test {
    public static final Object obj = new Object();

    public void doSomething() {
        synchronized (obj) {
            // mutual exclusive section
        }
    }
}
Here's a simple example:
public class FairLock {
    private int _nextNumber;
    private int _curNumber;

    public synchronized void lock() throws InterruptedException {
        int myNumber = _nextNumber++;
        while (myNumber != _curNumber) {
            wait();
        }
    }

    public synchronized void unlock() {
        _curNumber++;
        notifyAll();
    }
}
You would use it like:
public class Example {
    private final FairLock _lock = new FairLock();

    public void doSomething() {
        _lock.lock();
        try {
            // do something mutually exclusive here ...
        } finally {
            _lock.unlock();
        }
    }
}
(note, this does not handle the situation where a caller to lock() receives an interrupted exception!)
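One common way to close that gap, borrowed from the acquireUninterruptibly pattern in java.util.concurrent.locks, is to keep waiting across interrupts and re-assert the interrupt status afterwards, so an interrupted waiter never abandons its ticket and wedges the queue behind it. A sketch (the class is a self-contained variant, not the answer's exact code):

```java
public class FairLockUninterruptible {
    private int nextNumber;
    private int curNumber;

    // Waits for our ticket even across interrupts, then restores the
    // interrupt flag for the caller to handle after the critical section.
    public synchronized void lock() {
        int myNumber = nextNumber++;
        boolean interrupted = false;
        while (myNumber != curNumber) {
            try {
                wait();
            } catch (InterruptedException e) {
                interrupted = true; // remember it, but keep waiting for our turn
            }
        }
        if (interrupted) {
            Thread.currentThread().interrupt(); // restore interrupt status
        }
    }

    public synchronized void unlock() {
        curNumber++;
        notifyAll();
    }
}
```

The trade-off is that lock() can no longer be aborted by interruption; a fully interruptible fair lock needs a way to retire abandoned tickets, which is considerably more code.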
What they were asking for is a fair mutex.
Create a FIFO queue of lock objects, pushed onto it by threads waiting for the lock, which then wait on them (all of this, except the waiting itself, inside a synchronized block on a separate lock).
Then, when the lock is released, an object is popped off the queue and the thread waiting on it is woken (also synchronized on the same lock used for adding the objects).
You can use ReentrantLock with the fairness parameter set to true. Then the next thread served will be the one that has been waiting the longest, i.e. the one that arrived first.
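A sketch of that approach (the class name is made up): the fair ReentrantLock hands the lock to the longest-waiting thread, at some cost in throughput compared with the default unfair mode.

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairSection {
    // true = fair: the longest-waiting thread acquires the lock next
    private final ReentrantLock lock = new ReentrantLock(true);
    private int counter;

    public void doSomething() {
        lock.lock();
        try {
            counter++; // the mutually exclusive section
        } finally {
            lock.unlock(); // always release, even if the section throws
        }
    }

    public int counter() {
        lock.lock();
        try {
            return counter;
        } finally {
            lock.unlock();
        }
    }
}
```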
Here is my attempt. The idea is to give a ticket number to each thread; threads enter based on the order of their ticket numbers. I am not familiar with Java, so please read my comments:
public class Test {
    public static final Object obj = new Object();
    static volatile int count = 0; // global ticket counter (volatile so updates are visible)
    static volatile int next = 0;  // ticket number currently allowed in

    public void doSomething() {
        int myNumber; // my ticket number
        // The critical section here is small: just pick your ticket number. This guarantees FIFO.
        synchronized (obj) { myNumber = count++; }
        // busy waiting
        while (next != myNumber)
            ;
        // mutual exclusion here ...
        next++; // only the one thread inside the section modifies this global variable
    }
}
The disadvantage of this answer is the busy waiting, which will consume CPU time.
Using only Object's methods and synchronized is, in my view, a little difficult. Maybe by giving each thread a priority you can guarantee ordered access to the critical section.
