I cannot understand why I am getting a deadlock in this simple sample. What is wrong with it?
public class Example {
public static void main(String[] args) {
Object data = null;
new Thread(new Producer(data)).start();
new Thread(new Consumer(data)).start();
}
}
class Producer implements Runnable {
private Object data;
public Producer(Object data) {
this.data = data;
}
@Override
public void run() {
while (true) {
while (data != null) {}
data = new Object();
System.out.println("put");
}
}
}
class Consumer implements Runnable {
private Object data;
public Consumer(Object data) {
this.data = data;
}
@Override
public void run() {
while (true) {
while (data == null) { }
data = null;
System.out.println("get");
}
}
}
There are two problems.
1: You have two separate Runnables that each have their own private internal member named data. Changes made to one aren't visible to the other. If you want to pass data between two threads, you need to store it in a common place where they both access it. You also need to either synchronize around the accesses or make the reference volatile.
2: Your checks seem to be inverted. You probably want to null it when it's not null, and create one when it is null? It's tough to tell what you want it to actually do there! :)
public class Example {
public static volatile Object data;
public static void main(String[] args) {
data = null;
new Thread(new Producer(data)).start();
new Thread(new Consumer(data)).start();
}
}
class Producer implements Runnable {
public Producer(Object data) {
// nothing to store here; both threads share the static Example.data field
}
@Override
public void run() {
while (true) {
while (Example.data == null) {}
Example.data = new Object();
System.out.println("put");
}
}
}
class Consumer implements Runnable {
public Consumer(Object data) {
// nothing to store here; both threads share the static Example.data field
}
@Override
public void run() {
while (true) {
while (Example.data != null) { }
Example.data = null;
System.out.println("get");
}
}
}
(Also, this isn't really what we'd classically define as a deadlock, where two threads can't proceed because each wants a lock the other holds. There are no locks here. This is an example of two infinite loops that just don't do anything.)
Each instance has its own data field.
The consumer never sees the producer's changes.
The consumer and producer have separate data fields, so the consumer will never get any data to consume.
Also, spinlocking a consumer/producer on a field isn't generally a good idea; you're much better off using mutexes or semaphores to signal the availability of data / the possibility to publish. If this is more than an experiment in search of knowledge, you should really read up on how to use those two.
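For illustration, here is a minimal sketch of that idea using java.util.concurrent.Semaphore; the Shared holder class and its method names are made up for this example:

import java.util.concurrent.Semaphore;

class Shared {
    private final Semaphore empty = new Semaphore(1); // one free slot to start with
    private final Semaphore full = new Semaphore(0);  // nothing to consume yet
    private Object data;

    void put(Object value) throws InterruptedException {
        empty.acquire();   // block until the slot is free
        data = value;
        full.release();    // signal that data is available
    }

    Object take() throws InterruptedException {
        full.acquire();    // block until data is available
        Object value = data;
        data = null;
        empty.release();   // signal that the slot is free again
        return value;
    }
}

The producer calls put and the consumer calls take; neither thread spins, both simply block until the other side has done its part.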
When your producer "produces", all it does is point its own data reference at the new object, and the consumer has no way of knowing what happened. What you can do instead is make another class:
class Data {
private Object data = null;
synchronized void set( Object data ){ this.data = data; }
synchronized Object get(){ return data; }
}
Then in your main do
Data data = new Data();
Pass the 'Data' object to the consumer and producer, and use the get/set methods instead of assignment.
That way, both consumer and producer will be pointing to the same Data object and when the producer produces or the consumer consumes, they will be changing the reference in the Data object, which they are sharing.
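For example (just a sketch building on the Data class above, not a full program), the producer and consumer could then look roughly like this:

class Producer implements Runnable {
    private final Data data;
    Producer(Data data) { this.data = data; }
    @Override
    public void run() {
        while (true) {
            if (data.get() == null) {        // slot is empty, so produce
                data.set(new Object());
                System.out.println("put");
            }
        }
    }
}

class Consumer implements Runnable {
    private final Data data;
    Consumer(Data data) { this.data = data; }
    @Override
    public void run() {
        while (true) {
            if (data.get() != null) {        // something to consume
                data.set(null);
                System.out.println("get");
            }
        }
    }
}

Both Runnables receive the same Data instance, so the synchronized get/set calls make the producer's writes visible to the consumer (this still busy-spins, like the original).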
I think this should do what you intend (it is still bad code):
public class Example {
public static void main(String[] args) {
Consumer consumer = new Consumer();
new Thread(new Producer(consumer)).start();
new Thread(consumer).start();
}
}
class Producer implements Runnable {
private final Consumer consumer;
public Producer(Consumer consumer) {
this.consumer = consumer;
}
@Override
public void run() {
while (true) {
while (consumer.data != null) {}
consumer.data = new Object();
System.out.println("put");
try {
Thread.sleep(5);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
class Consumer implements Runnable {
public volatile Object data;
@Override
public void run() {
while (true) {
while (data == null) {}
data = null;
System.out.println("get");
try {
Thread.sleep(5);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
I think you should focus on the basics of Java before you go for advanced topics such as parallel programming, as the main error in your example (separate data fields) is very basic.
Related
I want to synchronize one method or one block based on input parameters.
So I have one API which has two inputs (let's say id1 and id2) of long type (could be primitive or wrapper) in the POST payload, which can be JSON. This API will be called by multiple threads at the same time or at different times randomly.
Now if the first API call has id1=1 and id2=1, and at the same time another API call has id1=1 and id2=1, it should wait for the first API call to finish processing before executing the second call. If the second API call has a different combination of values like id1=1 and id2=2, it should go through parallel without any wait time.
I don't mind creating a service method which the API resource method can call, rather than handling it directly in the API resource method.
I'm using Spring Boot REST Controller APIs.
Edit:
I've already tried using a map as suggested, but it only partially works: it waits for all input values, not just calls with the same input values. Below is my code:
public static void main(String[] args) throws Exception {
ApplicationContext context = SpringApplication.run(Application.class, args);
AccountResource ar = context.getBean(AccountResource.class);
UID uid1 = new UID();
uid1.setFieldId(1);
uid1.setLetterFieldId(1);
UID uid2 = new UID();
uid2.setFieldId(2);
uid2.setLetterFieldId(2);
UID uid3 = new UID();
uid3.setFieldId(1);
uid3.setLetterFieldId(1);
Runnable r1 = new Runnable() {
@Override
public void run() {
while (true) {
ar.test(uid1);
}
}
};
Runnable r2 = new Runnable() {
@Override
public void run() {
while (true) {
ar.test(uid2);
}
}
};
Runnable r3 = new Runnable() {
@Override
public void run() {
while (true) {
ar.test(uid3);
}
}
};
Thread t1 = new Thread(r1);
t1.start();
Thread t2 = new Thread(r2);
t2.start();
Thread t3 = new Thread(r3);
t3.start();
}
@Path("v1/account")
@Service
public class AccountResource {
public void test(UID uid) {
uidFieldValidator.setUid(uid);
Object lock;
synchronized (map) {
lock = map.get(uid);
if (lock == null) {
map.put(uid, (lock = new Object()));
}
synchronized (lock) {
//some operation
}
}
}
}
package com.urman.hibernate.test;
import java.util.Objects;
public class UID {
private long letterFieldId;
private long fieldId;
private String value;
public long getLetterFieldId() {
return letterFieldId;
}
public void setLetterFieldId(long letterFieldId) {
this.letterFieldId = letterFieldId;
}
public long getFieldId() {
return fieldId;
}
public void setFieldId(long fieldId) {
this.fieldId = fieldId;
}
public String getValue() {
return value;
}
public void setValue(String value) {
this.value = value;
}
@Override
public int hashCode() {
return Objects.hash(fieldId, letterFieldId);
}
@Override
public boolean equals(Object obj) {
if (this == obj) {
return true;
}
if (obj == null) {
return false;
}
if (getClass() != obj.getClass()) {
return false;
}
UID other = (UID) obj;
return fieldId == other.fieldId && letterFieldId == other.letterFieldId;
}
}
You need a collection of locks, which you can keep in a map and allocate as required. Here I assume that your id1 and id2 are Strings; adjust as appropriate.
Map<String,Object> lockMap = new HashMap<>();
:
void someMethod(String id1, String id2) {
Object lock;
synchronized (lockMap) {
lock = lockMap.get(id1+id2);
if (lock == null) lockMap.put(id1+id2, (lock = new Object()));
}
synchronized (lock) {
:
}
}
You need a little bit of 'global' synchronization for the map operations, or you could use one of the concurrent implementations. I used the base HashMap for simplicity of implementation.
After you've selected a lock, sync on it.
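If you go with a concurrent implementation, ConcurrentHashMap.computeIfAbsent gives you the get-or-create step atomically without the outer synchronized block; a sketch, again assuming String ids (the separator just avoids accidental collisions such as "1"+"11" vs "11"+"1"):

// import java.util.concurrent.ConcurrentHashMap;
private final ConcurrentHashMap<String, Object> lockMap = new ConcurrentHashMap<>();

void someMethod(String id1, String id2) {
    Object lock = lockMap.computeIfAbsent(id1 + ":" + id2, k -> new Object());
    synchronized (lock) {
        // work for this id combination runs one caller at a time
    }
}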
This is a pseudocode version of my current working code:
public class DataTransformer {
private final boolean async = true;
private final ExecutorService executorService = Executors.newSingleThreadExecutor();
public void modifyAsync(Data data) {
if (async) {
executorService.submit(new Runnable() {
#Override
public void run() {
modify(data);
}
});
} else {
modify(data);
}
}
// This should actually be a variable inside modify(byte[] data)
// But I reuse it to avoid reallocation
// This is no problem in this case
// Because whether or not async is true, only one thread is used
private final byte[] temp = new byte[1024];
private void modify(Data data) {
// Do work using temp
data.setReady(true); // Sets a volatile flag
}
}
Please read the comments. But now I want to use Executors.newFixedThreadPool(10) instead of Executors.newSingleThreadExecutor(). This is easily possible in my case by moving the field temp inside modify(Data data), such that each execution has its own temp array. But that's not what I want to do, because I want to reuse the array if possible. Instead, I want each of the 10 threads to have its own temp array. What's the best way to achieve this?
A static variable is shared between all threads, so you could declare it as static. But if you want each thread to use a different value, then either use a ThreadLocal or a separate object per thread.
With ThreadLocal you could do:
ThreadLocal<byte[]> value = ThreadLocal.withInitial(() -> new byte[1024]);
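Applied to the DataTransformer from the question, that could look roughly like this (just a sketch; modify and Data are taken from the question):

private final ThreadLocal<byte[]> temp =
        ThreadLocal.withInitial(() -> new byte[1024]);

private void modify(Data data) {
    byte[] buffer = temp.get(); // each pool thread lazily creates and then reuses its own array
    // Do work using buffer
    data.setReady(true); // Sets a volatile flag
}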
You could also use an object like this:
import java.util.Arrays;

public class Test {
public static void main(String[] args) {
try {
Test test = new Test();
test.test();
} catch (Exception e) {
e.printStackTrace();
}
}
class Control {
public volatile byte[] temp = "Hello World".getBytes();
}
final Control control = new Control();
class T1 implements Runnable {
@Override
public void run() {
String a = Arrays.toString(control.temp);
System.out.println(a);
}
}
class T2 implements Runnable {
@Override
public void run() {
String a = Arrays.toString(control.temp);
System.out.println(a);
}
}
private void test() {
T1 t1 = new T1();
T2 t2 = new T2();
new Thread(t1).start();
new Thread(t2).start();
}
}
I searched for an answer to this question on SO and Google but couldn't find a proper solution so far.
I'm currently working on a LayerManager in a graph routing problem. The manager is responsible for providing and resetting a fixed set of layers.
I wanted to implement the Consumer-Producer pattern with a blocking list, so that incoming routing requests are blocked as long as no free layer is available. So far I have only found a blocking queue, but since we need random access rather than FIFO or LIFO, a queue doesn't really work. To be a little more precise, something like this should be possible:
/* this should be blocking until a layer becomes available */
public Layer getLayer(){
for ( Layer layer : layers ) {
if ( layer.isUnused() && layer.matches(request) )
return layers.pop(layer);
}
}
Is there any way to achieve this?
What you are looking for is called "Semaphore".
1. Create a Semaphore class
2. Add it as a field to the Layer class
Example
public class Semaphore
{
private boolean signal = false;
public synchronized boolean take()
{
if(this.signal==true)
return false; //already in use
this.signal = true;
this.notify();
return true;
}
public synchronized void release() throws InterruptedException
{
while(!this.signal) wait();
this.signal = false;
}
public boolean isUnused()
{
return !signal ;
}
}
//2.
class Layer
{
Semaphore sem =null;
/*your code*/
/*sem = new Semaphore(); in constructors*/
public boolean take()
{
return this.sem.take();
}
public void release() throws InterruptedException
{
this.sem.release();
}
public Layer getLayer()
{
for ( Layer layer : layers )
{
if ( layer.matches(request) && layer.take())
return layer;
}
return null;
}
}
Synchronized methods handle concurrent access.
3. Loop over getLayer until a layer is returned:
Layer l = null;
while (l == null)
{
l = getLayer();
if (l == null) Thread.sleep(100); // polling interval; adjust as needed
}
// continue
// do not forget to release the layer when you are done
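For reference, the JDK already ships java.util.concurrent.Semaphore, so instead of the hand-written class above each layer could be guarded like this (a rough sketch, not your exact Layer API):

import java.util.concurrent.Semaphore;

class Layer {
    private final Semaphore sem = new Semaphore(1); // one permit: the layer itself

    boolean take() {
        return sem.tryAcquire(); // false if the layer is already in use
    }

    void release() {
        sem.release();
    }
}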
Try to use a Map<String, BlockingQueue<Layer>>. The idea is to hold the free Layers inside a BlockingQueue. Every request has its own queue.
public class LayerQueue {
Map<String, BlockingQueue<Layer>> freeLayers = Collections.synchronizedMap(new HashMap<String, BlockingQueue<Layer>>());
public LayerQueue() {
//init QUEUEs
freeLayers.put("request-1", new ArrayBlockingQueue<Layer>(1)); // one to one...
freeLayers.put("request-2", new ArrayBlockingQueue<Layer>(1));
[...]
}
public void addUnusedLayer(Layer layer, String request) {
BlockingQueue<Layer> freeLayersForRequest = freeLayers.get(request);
freeLayersForRequest.add(layer);
}
public Layer getLayer(String request) {
BlockingQueue<Layer> freeLayersForRequest = freeLayers.get(request);
try {
return freeLayersForRequest.take(); // blocks until a layer becomes available
} catch (InterruptedException e) {
e.printStackTrace();
}
return null;
}
}
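Usage could then look something like this (a sketch; how Layer instances are created is up to you):

LayerQueue layerQueue = new LayerQueue();
// the routing side blocks here until a layer is handed back for "request-1"
Layer layer = layerQueue.getLayer("request-1");
// ... use the layer, then return it to the pool
layerQueue.addUnusedLayer(layer, "request-1");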
I am not quite sure I understand your need correctly, but you could consume a blocking queue and put the results into a list. If an appropriate layer is not found in the list, call wait() and check again when a new item is added to the list from the queue. This sounds like it could work conceptually, even if the code below doesn't get it right (I am quite sure this is not quite properly synchronized)
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Predicate;

public class PredicateBlockingQueue<Product> {
private final List<Product> products = new LinkedList<Product>();
private final BlockingQueue<Product> queue;
private final Thread consumer;
public PredicateBlockingQueue(int capacity) {
queue = new ArrayBlockingQueue<Product>(capacity);
consumer = new Thread() {
@Override
public void run() {
while(!Thread.interrupted()) {
try {
products.add(queue.take());
synchronized(queue) {
queue.notifyAll();
}
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
};
consumer.start();
}
public void put(Product product) throws InterruptedException {
queue.put(product);
}
public Product take(Predicate<Product> predicate) throws InterruptedException {
Product product;
while((product=find(predicate))==null) {
synchronized(queue) {
queue.wait();
}
}
return product;
}
private synchronized Product find(Predicate<Product> predicate) {
Iterator<Product> it = products.iterator();
while(it.hasNext()) {
Product product = it.next();
if(predicate.test(product)) {
it.remove();
return product;
}
}
return null;
}
}
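Usage could look like this (a sketch; the String element type and the predicate are placeholders, and both calls are assumed to run in methods that declare InterruptedException):

PredicateBlockingQueue<String> queue = new PredicateBlockingQueue<>(10);

// producer side
queue.put("layer-1");

// consumer side: blocks until a product matching the predicate arrives
String layer = queue.take(p -> p.startsWith("layer"));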
This article explains "Double-Checked Locking", where the idea is to reduce lock contention. As the article explains, it does not work. See the code sample in the table "(Still) Broken multithreaded version "Double-Checked Locking" idiom".
Now I think I found a variant that should work. The question is whether it is correct. Let's say we have a consumer and a producer that exchange data through a shared queue:
class Producer {
private Queue queue = ...;
private AtomicInteger updateCount;
public void add(Data data) {
synchronized(updateCount) {
queue.add(data);
updateCount.incrementAndGet();
}
}
}
class Consumer {
private AtomicInteger updateCount = new AtomicInteger(0);
private int updateCountSnapshot = updateCount.get();
public void run() {
while(true) {
// do something
if(updateCountSnapshot != updateCount.get()) {
// synchronizing on the same updateCount
// instance the Producer has
synchronized(updateCount) {
Data data = queue.poll()
// mess with data
updateCountSnapshot = updateCount.get();
}
}
}
}
}
The question now is whether you think this approach works. I'm asking to be sure, because tons of things would break if it doesn't ... The idea is to reduce lock contention by only entering the synchronized block in the consumer when updateCount has changed in the meantime.
I suspect you are looking more for a Code Review.
You should consider the following:
This is not double-checked locking.
Your consumer will spin on nothing and eat cpu while no data is arriving.
You use an AtomicInteger as a Semaphore.
A BlockingQueue will do all of this for you.
You haven't properly ensured that updateCount is shared.
You do not have to synchronize on atomics.
Here's a simple Producer/Consumer pair for demonstration.
import java.util.Queue;
import java.util.concurrent.LinkedBlockingQueue;

public class TwoThreads {
public static void main(String args[]) throws InterruptedException {
System.out.println("TwoThreads:Test");
new TwoThreads().test();
}
// The end of the list.
private static final Integer End = -1;
static class Producer implements Runnable {
final Queue<Integer> queue;
public Producer(Queue<Integer> queue) {
this.queue = queue;
}
@Override
public void run() {
try {
for (int i = 0; i < 1000; i++) {
queue.add(i);
Thread.sleep(1);
}
// Finish the queue.
queue.add(End);
} catch (InterruptedException ex) {
// Just exit.
}
}
}
static class Consumer implements Runnable {
final Queue<Integer> queue;
public Consumer(Queue<Integer> queue) {
this.queue = queue;
}
@Override
public void run() {
boolean ended = false;
while (!ended) {
Integer i = queue.poll();
if (i != null) {
ended = i == End;
System.out.println(i);
}
}
}
}
public void test() throws InterruptedException {
Queue<Integer> queue = new LinkedBlockingQueue<>();
Thread pt = new Thread(new Producer(queue));
Thread ct = new Thread(new Consumer(queue));
// Start it all going.
pt.start();
ct.start();
// Wait for it to finish.
pt.join();
ct.join();
}
}
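Since the point above is that a BlockingQueue does the waiting for you, the consumer loop could also block on take() instead of spinning on poll(); a sketch (this assumes the field is declared as BlockingQueue<Integer> rather than Queue<Integer>):

@Override
public void run() {
    try {
        Integer i;
        do {
            i = queue.take(); // blocks until an element is available
            System.out.println(i);
        } while (!i.equals(End));
    } catch (InterruptedException ex) {
        // Just exit.
    }
}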
First of all, I am new to threads and shared variables, so please be kind with me ;-)
I have a class called Routing. This class receives and handles messages. If a message is of type A, the Routing object should pass it to the ASender object, which implements the Runnable interface. If the message is of type B, the Routing class should pass it to the BSender object.
But the ASender and BSender objects have common variables that should be stored in the Routing object.
My idea now is to declare the variables as synchronized/volatile in the Routing object, and the getters/setters as well.
Is this the right way to synchronize the code? Or is something missing?
Edit: Added the basic code idea.
RoutingClass
public class Routing {
private synchronized Hashtable<Long, HashSet<String>> reverseLookup;
private ASender asender;
private BSender bsender;
public Routing() {
//Constructor work to be done here..
reverseLookup = new Hashtable<Long, HashSet<String>>();
}
public void notify(TopicEvent event) {
if (event.getMessage() instanceof AMessage) {
asender = new ASender(this, event.getMessage());
} else if (event.getMessage() instanceof BMessage) {
bsender = new BSender(this, event.getMessage());
}
}
public synchronized void setReverseLookup(long l, HashSet<String> set) {
reverseLookup.put(l, set);
}
public synchronized Hashtable<Long, HashSet<String>> getReverseLookup() {
return reverseLookup;
}
}
ASender Class
public class ASender implements Runnable {
private Routing routing;
private RoutingMessage routingMessage;
public ASender(Routing r, RoutingMessage rm) {
routing = r;
routingMessage = rm;
this.run();
}
public void run() {
handleMessage();
}
private void handleMessage() {
// do some stuff and extract data from the routing message object
routing.setReverseLookup(somethingToSet)
}
}
Some comments:
Hashtable is a thread-safe implementation; you do not need an additional "synchronized" keyword (see this and this for more information).
Avoid coupling: try to work with interfaces, or pass the Hashtable to your senders (see this for more information).
Depending on the number of senders, you might want to use a ConcurrentHashMap; it greatly improves performance. See ConcurrentHashMap and Hashtable in Java and Java theory and practice: Concurrent collections classes.
This would result in something like this:
public interface IRoutingHandling {
void writeMessage(Long key, HashSet<String> value);
}
public class Routing implements IRoutingHandling {
private final Hashtable<Long, HashSet<String>> reverseLookup;
private ASender asender;
private BSender bsender;
public Routing() {
//Constructor work to be done here..
reverseLookup = new Hashtable<Long, HashSet<String>>();
}
public void notify(TopicEvent event) {
if (event.getMessage() instanceof AMessage) {
asender = new ASender(this, event.getMessage());
} else if (event.getMessage() instanceof BMessage) {
bsender = new BSender(this, event.getMessage());
}
}
@Override
public void writeMessage(Long key, HashSet<String> value) {
reverseLookup.put(key, value);
}
}
public class ASender implements Runnable {
private IRoutingHandling _routingHandling;
private RoutingMessage routingMessage;
public ASender(IRoutingHandling r, RoutingMessage rm) {
_routingHandling = r;
routingMessage = rm;
this.run();
}
public void run() {
handleMessage();
}
private void handleMessage() {
// do some stuff and extract data from the routing message object
_routingHandling.writeMessage(somethingToSetAsKey, somethingToSetAsValue);
}
}
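And if you follow the ConcurrentHashMap suggestion above, the storage side could be swapped like this (a sketch of just the relevant part):

import java.util.HashSet;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class Routing implements IRoutingHandling {
    private final Map<Long, HashSet<String>> reverseLookup = new ConcurrentHashMap<>();

    @Override
    public void writeMessage(Long key, HashSet<String> value) {
        reverseLookup.put(key, value);
    }
}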