I need a byte generator that would generate values from Byte.MIN_VALUE to Byte.MAX_VALUE. When it reaches MAX_VALUE, it should start over again from MIN_VALUE.
I have written the code using AtomicInteger (see below); however, the code does not seem to behave properly if accessed concurrently and if made artificially slow with Thread.sleep() (if no sleeping, it runs fine; however, I suspect it is just too fast for concurrency problems to show up).
The code (with some added debug code):
public class ByteGenerator {
private static final int INITIAL_VALUE = Byte.MIN_VALUE-1;
private AtomicInteger counter = new AtomicInteger(INITIAL_VALUE);
private AtomicInteger resetCounter = new AtomicInteger(0);
private boolean isSlow = false;
private long startTime;
public byte nextValue() {
int next = counter.incrementAndGet();
//if (isSlow) slowDown(5);
if (next > Byte.MAX_VALUE) {
synchronized(counter) {
int i = counter.get();
//if value is still larger than max byte value, we reset it
if (i > Byte.MAX_VALUE) {
counter.set(INITIAL_VALUE);
resetCounter.incrementAndGet();
if (isSlow) slowDownAndLog(10, "resetting");
} else {
if (isSlow) slowDownAndLog(1, "missed");
}
next = counter.incrementAndGet();
}
}
return (byte) next;
}
private void slowDown(long millis) {
try {
Thread.sleep(millis);
} catch (InterruptedException e) {
}
}
private void slowDownAndLog(long millis, String msg) {
slowDown(millis);
System.out.println(resetCounter + " "
+ (System.currentTimeMillis()-startTime) + " "
+ Thread.currentThread().getName() + ": " + msg);
}
public void setSlow(boolean isSlow) {
this.isSlow = isSlow;
}
public void setStartTime(long startTime) {
this.startTime = startTime;
}
}
And, the test:
public class ByteGeneratorTest {
@Test
public void testGenerate() throws Exception {
ByteGenerator g = new ByteGenerator();
for (int n = 0; n < 10; n++) {
for (int i = Byte.MIN_VALUE; i <= Byte.MAX_VALUE; i++) {
assertEquals(i, g.nextValue());
}
}
}
@Test
public void testGenerateMultiThreaded() throws Exception {
final ByteGenerator g = new ByteGenerator();
g.setSlow(true);
final AtomicInteger[] counters = new AtomicInteger[Byte.MAX_VALUE-Byte.MIN_VALUE+1];
for (int i = 0; i < counters.length; i++) {
counters[i] = new AtomicInteger(0);
}
Thread[] threads = new Thread[100];
final CountDownLatch latch = new CountDownLatch(threads.length);
for (int i = 0; i < threads.length; i++) {
threads[i] = new Thread(new Runnable() {
public void run() {
try {
for (int i = Byte.MIN_VALUE; i <= Byte.MAX_VALUE; i++) {
byte value = g.nextValue();
counters[value-Byte.MIN_VALUE].incrementAndGet();
}
} finally {
latch.countDown();
}
}
}, "generator-client-" + i);
threads[i].setDaemon(true);
}
g.setStartTime(System.currentTimeMillis());
for (int i = 0; i < threads.length; i++) {
threads[i].start();
}
latch.await();
for (int i = 0; i < counters.length; i++) {
System.out.println("value #" + (i+Byte.MIN_VALUE) + ": " + counters[i].get());
}
//print out the number of hits for each value
for (int i = 0; i < counters.length; i++) {
assertEquals("value #" + (i+Byte.MIN_VALUE), threads.length, counters[i].get());
}
}
}
The result on my 2-core machine is that value #-128 gets 146 hits (all of them should get 100 hits equally as we have 100 threads).
If anyone has any ideas, what's wrong with this code, I'm all ears/eyes.
UPDATE: for those who are in a hurry and do not want to scroll down, the correct (and shortest and most elegant) way to solve this in Java would be like this:
public byte nextValue() {
return (byte) counter.incrementAndGet();
}
Thanks, Heinz!
Initially, Java stored all fields as 4 or 8 byte values, even short and byte. Operations on the fields would simply do bit masking to shrink the bytes. Thus we could very easily do this:
public byte nextValue() {
return (byte) counter.incrementAndGet();
}
Fun little puzzle, thanks Neeme :-)
You make the decision to incrementAndGet() based on an old value of counter.get(). The value of the counter can reach MAX_VALUE again before you do the incrementAndGet() operation on the counter.
if (next > Byte.MAX_VALUE) {
synchronized(counter) {
int i = counter.get(); //here you make sure the counter is not over MAX_VALUE
if (i > Byte.MAX_VALUE) {
counter.set(INITIAL_VALUE);
resetCounter.incrementAndGet();
if (isSlow) slowDownAndLog(10, "resetting");
} else {
if (isSlow) slowDownAndLog(1, "missed"); //the counter can reach MAX_VALUE again if you wait here long enough
}
next = counter.incrementAndGet(); //here you increment and return the counter, which can exceed MAX_VALUE in the meantime
}
}
To make it work, one has to make sure that no decisions are made on stale info. Either reset the counter or return the old value.
public byte nextValue() {
int next = counter.incrementAndGet();
if (next > Byte.MAX_VALUE) {
synchronized(counter) {
next = counter.incrementAndGet();
//if value is still larger than max byte value, we reset it
if (next > Byte.MAX_VALUE) {
counter.set(INITIAL_VALUE + 1);
next = INITIAL_VALUE + 1;
resetCounter.incrementAndGet();
if (isSlow) slowDownAndLog(10, "resetting");
} else {
if (isSlow) slowDownAndLog(1, "missed");
}
}
}
return (byte) next;
}
Your synchronized block contains only the if body. It should wrap the whole method, including the if statement itself. Or just make your nextValue method synchronized. BTW, in this case you do not need atomic variables at all.
I hope this will work for you. Use atomic variables only if you really need the highest-performance code, i.e. if the synchronized statement bothers you. IMHO, in most cases it does not.
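For illustration, a minimal sketch of the synchronized variant described above (an assumed class shape, not the poster's exact code); with the monitor providing both atomicity and visibility, a plain int field is enough:
// Sketch only: synchronized nextValue() without any atomics.
public class SynchronizedByteGenerator {
    private int counter = Byte.MIN_VALUE - 1;

    public synchronized byte nextValue() {
        if (counter >= Byte.MAX_VALUE) {
            counter = Byte.MIN_VALUE;  // wrap around once the top of the range is reached
        } else {
            counter++;
        }
        return (byte) counter;
    }
}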
If I understand you correctly, you care that the results of nextValue are in the range from Byte.MIN_VALUE to Byte.MAX_VALUE and you don't care about the value stored in the counter.
Then you can map integers onto bytes such that your required enumeration behavior is exposed:
private static final int VALUE_RANGE = Byte.MAX_VALUE - Byte.MIN_VALUE + 1;
private final AtomicInteger counter = new AtomicInteger(0);
public byte nextValue() {
return (byte) (counter.incrementAndGet() % VALUE_RANGE + Byte.MIN_VALUE - 1);
}
Beware, this is untested code. But the idea should work.
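For what it's worth, a hypothetical sanity check for the mapping above, assuming the two fields and nextValue() live in the question's ByteGenerator class; one full cycle should produce -128 through 127 in order:
// Hypothetical sanity check (assumes the fields above are in ByteGenerator).
public static void main(String[] args) {
    ByteGenerator g = new ByteGenerator();
    for (int expected = Byte.MIN_VALUE; expected <= Byte.MAX_VALUE; expected++) {
        byte actual = g.nextValue();
        if (actual != expected) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
    System.out.println("first cycle ok, next value: " + g.nextValue()); // -128 again
}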
I coded up the following version of nextValue using compareAndSet which is designed to be used in a non-synchronized block. It passed your unit tests:
Oh, and I introduced new constants for MIN_VALUE and MAX_VALUE but you can ignore those if you prefer.
static final int LOWEST_VALUE = Byte.MIN_VALUE;
static final int HIGHEST_VALUE = Byte.MAX_VALUE;
private AtomicInteger counter = new AtomicInteger(LOWEST_VALUE - 1);
private AtomicInteger resetCounter = new AtomicInteger(0);
public byte nextValue() {
int oldValue;
int newValue;
do {
oldValue = counter.get();
if (oldValue >= HIGHEST_VALUE) {
newValue = LOWEST_VALUE;
resetCounter.incrementAndGet();
if (isSlow) slowDownAndLog(10, "resetting");
} else {
newValue = oldValue + 1;
if (isSlow) slowDownAndLog(1, "missed");
}
} while (!counter.compareAndSet(oldValue, newValue));
return (byte) newValue;
}
compareAndSet() works in conjunction with get() to manage concurrency.
At the start of your critical section, you perform a get() to retrieve the old value. You then perform some function dependent only on the old value to compute a new value. Then you use compareAndSet() to set the new value. If the AtomicInteger is no longer equal to the old value at the time compareAndSet() is executed (because of concurrent activity), it fails and you must start over.
If you have an extreme amount of concurrency and the computation time is long, it is conceivable that the compareAndSet() may fail many times before succeeding, and it may be worth gathering statistics on that if it concerns you.
I'm not suggesting that this is a better or worse approach than a simple synchronized block as others have suggested, but I personally would probably use a synchronized block for simplicity.
EDIT: I'll answer your actual question "Why doesn't mine work?"
Your code has:
int next = counter.incrementAndGet();
if (next > Byte.MAX_VALUE) {
As these two lines are not protected by a synchronized block, multiple threads can execute them concurrently and all obtain values of next > Byte.MAX_VALUE. All of them will then drop through into the synchronized block and set counter back to INITIAL_VALUE (one after another as they wait for each other).
Over the years, there has been a huge amount written about the pitfalls of trying to get a performance tweak by not synchronizing when it doesn't seem necessary. For example, see Double Checked Locking.
Notwithstanding that Heinz Kabutz's answer is the clean one to the specific question, ye olde Java SE 8 [March 2014] added AtomicInteger.updateAndGet (and friends). This leads to a more general solution if circumstances require it:
public class ByteGenerator {
private static final int MIN = Byte.MIN_VALUE;
private static final int MAX = Byte.MAX_VALUE;
private final AtomicInteger counter = new AtomicInteger(MIN);
public byte nextValue() {
return (byte)counter.getAndUpdate(ByteGenerator::update);
}
private static int update(int old) {
return old==MAX ? MIN : old+1;
}
}
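A quick usage sketch (hypothetical, not part of the answer), e.g. a main method added next to the class above, showing the wrap-around:
// Hypothetical usage: one full cycle of 256 values, then the wrap back to Byte.MIN_VALUE.
public static void main(String[] args) {
    ByteGenerator g = new ByteGenerator();
    byte last = 0;
    for (int i = 0; i < 256; i++) {
        last = g.nextValue();                // the first call returns -128
    }
    System.out.println(last);                // 127, the end of the first cycle
    System.out.println(g.nextValue());       // -128 again
}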
Related
I have a scenario where multiple threads hit a race condition on the comparison code.
private volatile int maxValue;
private AtomicInteger currentValue;
public void constructor() {
this.currentValue = new AtomicInteger(getNewValue());
}
public int getNextValue() {
while(true) {
int latestValue = this.currentValue.get();
int nextValue = latestValue + 1;
if(latestValue == maxValue) {//Race condition 1
latestValue = getNewValue();
}
if(currentValue.compareAndSet(latestValue, nextValue)) {//Race condition 2
return latestValue;
}
}
}
private int getNewValue() {
int newValue = getFromDb(); //not idempotent
maxValue = newValue + 10;
return newValue;
}
Questions :
The obvious way to fix this would be to add a synchronized block/method around the if condition. What are other performant ways to fix this using the concurrent API without using any kind of locks?
How can we get rid of the while loop so we can get the next value with no or less thread contention?
Constraints :
The next DB sequences will be in increasing order but not necessarily evenly distributed. So it could be 1, 11, 31, where 21 may have been requested by another node. The requested next value will always be unique. We also need to make sure all the sequences are used, and only once we reach the max for the previous range do we request another starting sequence from the DB, and so on.
Example :
for DB next sequences 1, 11, 31 with an increment of 10, the output sequence should be 1-10, 11-20, 31-40 for 30 requests.
First of all: I would recommend thinking one more time about using synchronized, because:
look at how simple such code is:
private int maxValue;
private int currentValue;
public constructor() {
requestNextValue();
}
public synchronized int getNextValue() {
currentValue += 1;
if (currentValue == maxValue) {
requestNextValue();
}
return currentValue;
}
private void requestNextValue() {
currentValue = getFromDb(); //not idempotent
maxValue = currentValue + 10;
}
locks in Java are actually pretty intelligent and have pretty good performance.
you talk to the DB in your code; the performance cost of that alone can be orders of magnitude higher than the performance cost of locks.
But in general, your race conditions happen because you update maxValue and currentValue independently.
You can combine these 2 values into a single immutable object and then work with the object atomically:
private final AtomicReference<State> stateHolder = new AtomicReference<>(newStateFromDb());
public int getNextValue() {
while (true) {
State oldState = stateHolder.get();
State newState = (oldState.currentValue == oldState.maxValue)
? newStateFromDb()
: new State(oldState.currentValue + 1, oldState.maxValue);
if (stateHolder.compareAndSet(oldState, newState)) {
return newState.currentValue;
}
}
}
private static State newStateFromDb() {
int newValue = getFromDb(); // not idempotent
return new State(newValue, newValue + 10);
}
private static class State {
final int currentValue;
final int maxValue;
State(int currentValue, int maxValue) {
this.currentValue = currentValue;
this.maxValue = maxValue;
}
}
After fixing that you will probably have to solve the following problems next:
how to prevent multiple parallel getFromDb() calls (especially after taking into account that the method is not idempotent)
when one thread performs getFromDb(), how to prevent other threads from busy spinning inside the while(true) loop and consuming all available CPU time
more similar problems
Solving each of these problems will probably make your code more and more complicated.
So, IMHO it is almost never worth it — locks work fine and keep the code simple.
You cannot completely avoid locking with the given constraints: since (1) every value returned by getFromDb() must be used and (2) calling getFromDb() is only allowed once maxValue has been reached, you need to ensure mutual exclusion for calls to getFromDb().
Without either of the constraints (1) or (2) you could resort to optimistic locking though:
Without (1) you could allow multiple threads calling getFromDb() concurrently and choose one of the results dropping all others.
Without (2) you could allow multiple threads calling getFromDb() concurrently and choose one of the results. The other results would be "saved for later".
The obvious way to fix this would be to add a synchronized block around the if condition
That is not going to work. Let me try and explain.
When you hit the condition: if(latestValue == maxValue) { ... }, you want to update both maxValue and currentValue atomically. Something like this:
latestValue = getNewValue();
currentValue.set(latestValue);
getNewValue will get your next starting value from the DB and update maxValue, but at the same time, you want to set currentValue to that new starting one now. Suppose the case:
you first read 1 from the DB. As such maxValue = 11, currentValue = 1.
when you reach the condition if(latestValue == maxValue), you want to go to the DB to get the new starting position (let's say 21), but at the same time you want every thread to now start from 21. So you must also set currentValue.
Now the problem is that if you write to currentValue under a synchronized block, for example:
if(latestValue == maxValue) {
synchronized (lock) {
latestValue = getNewValue();
currentValue.set(latestValue);
}
}
you also need to read under the same lock, otherwise you have a race. Initially I thought I could be a bit smarter and do something like:
if(latestValue == maxValue) {
synchronized (lock) {
if(latestValue == maxValue) {
latestValue = getNewValue();
currentValue.set(latestValue);
} else {
continue;
}
}
}
So that all threads that wait on the lock do not overwrite the previously written maxValue when the lock is released. But that is still a race and will cause problems elsewhere, in a different case, rather trivially. For example:
ThreadA does latestValue = getNewValue();, thus maxValue == 21, but it has not yet done currentValue.set(latestValue);
ThreadB reads int latestValue = this.currentValue.get();, sees 11, and of course if(latestValue == maxValue) will be false, so it can write 12 (nextValue) to currentValue. Which breaks the entire algorithm.
I do not see any other way than to make getNextValue synchronized or otherwise protected by a mutex/spin-lock.
I don't really see a way around synchronizing the DB call - unless calling the DB multiple times is not an issue (i.e. retrieving several "new values").
To remove the need to synchronize the getNextValue method, you could use a BlockingQueue, which removes the need to atomically update 2 variables. And if you really don't want to use the synchronized keyword, you can use a flag to let only one thread call the DB.
It could look like this (looks ok, but not tested):
private final BlockingQueue<Integer> nextValues = new ArrayBlockingQueue<>(10);
private final AtomicBoolean updating = new AtomicBoolean();
public int getNextValue() {
while (true) {
Integer nextValue = nextValues.poll();
if (nextValue != null) return nextValue;
else getNewValues();
}
}
private void getNewValues() {
if (updating.compareAndSet(false, true)) {
//we hold the "lock" to run the update
if (!nextValues.isEmpty()) {
updating.set(false);
throw new IllegalStateException("nextValues should be empty here");
}
try {
int newValue = getFromDb(); //not idempotent
for (int i = 0; i < 10; i++) {
nextValues.add(newValue + i);
}
} finally {
updating.set(false);
}
}
}
But as mentioned in other comments, there is a high chance that the most costly operation here is the DB call, which remains synchronized, so you may as well synchronize everything and keep it simple, with very little difference performance wise.
As getFromDb hits the database, you really want some locking: the other threads should block, not also go to the database or spin. Really, if you are only doing that every 10 iterations, you can probably synchronize the lot. However, that is no fun.
Any reasonable, non-microcontroller platform should support AtomicLong as lock-free. So we can conveniently pack the two ints into one atomic.
private final AtomicLong combinedValue;
public int getNextValue() {
for (;;) {
long combined = combinedValue.get();
int latestValue = (int)combined;
int maxValue = (int)(combined>>32);
int nextValue = latestValue + 1;
long nextCombined = (nextValue & 0xffffffffL) | ((long) maxValue << 32);
if (latestValue == maxValue) {
nextValue();
} else if (combinedValue.compareAndSet(combined, nextCombined)) {
return latestValue;
}
}
}
private synchronized void nextValue() {
// Yup, we need to double check with this locking.
long combined = combinedValue.get();
int latestValue = (int)combined;
int maxValue = (int)(combined>>32);
if (latestValue == maxValue) {
int newValue = getFromDb(); //not idempotent
int newMax = newValue + 10;
long nextCombined = (newValue & 0xffffffffL) | ((long) newMax << 32);
combinedValue.set(nextCombined);
}
}
An alternative with memory allocation would be to lump both values into one object and use AtomicReference. However, we can observe that the value changes more frequently than the maximum, so we can use a slow changing object and a fast offset.
private static record Segment(
int maxValue, AtomicInteger currentValue
) {
}
private volatile Segment segment;
public int getNextValue() {
for (;;) {
Segment segment = this.segment;
int latestValue = segment.currentValue().get();
int nextValue = latestValue + 1;
if (latestValue == segment.maxValue()) {
nextValue();
} else if (segment.currentValue().compareAndSet(
latestValue, nextValue
)) {
return latestValue;
}
}
}
private synchronized void nextValue() {
// Yup, we need to double check with this locking.
Segment segment = this.segment;
int latestValue = segment.currentValue().get();
if (latestValue == segment.maxValue()) {
int newValue = getFromDb(); //not idempotent
int maxValue = newValue + 10;
this.segment = new Segment(maxValue, new AtomicInteger(newValue));
}
}
(Standard disclaimer: code not so much as compiled, tested or thought about much. Records require a quite new, at the time of writing, JDK. Constructors elided.)
What an interesting question. As others have said, you can get around your problem by using the synchronized keyword.
public synchronized int getNextValue() { ... }
But because you didn't want to use that keyword and at the same time want to avoid the race condition, this probably helps. No guarantee though. And please don't ask for explanations, or I'll throw an OutOfBrainException at you.
private volatile int maxValue;
private volatile boolean locked = false; //For clarity.
private AtomicInteger currentValue;
public int getNextValue() {
int latestValue = this.currentValue.get();
int nextValue = latestValue + 1;
if(!locked && latestValue == maxValue) {
locked = true; //Only one thread per time.
latestValue = getNewValue();
currentValue.set(latestValue);
locked = false;
}
while(locked) { latestValue = 0; } //If a thread is running in the previous if statement, we need this to buy some time.
//We also need to reset "latestValue" so that when this thread runs the next loop,
//it will guarantee to call AtomicInteger.get() for the updated value.
while(!currentValue.compareAndSet(latestValue, nextValue)) {
latestValue = this.currentValue.get();
nextValue = latestValue + 1;
}
return nextValue;
}
Or you can use Atomic to fight Atomic.
private AtomicBoolean locked = new AtomicBoolean(false);
public int getNextValue() {
...
if(locked.compareAndSet(false, true)) { //Only one thread per time.
if(latestValue == maxValue) {
latestValue = getNewValue();
currentValue.set(latestValue);
}
locked.set(false);
}
...
I can't think of a way to remove all locking, since the underlying problem is accessing a mutable value from several threads. However, there are several improvements that can be made to the code you provided, basically taking advantage of the fact that when data is read by multiple threads there is no need to lock the reads unless a write has to be done, so using read/write locks will reduce the contention. Only 1 in 10 times will there be a "full" write lock.
So the code could be rewritten like this (leaving bugs aside):
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantReadWriteLock;
public class Counter {
private final ReentrantReadWriteLock reentrantLock = new ReentrantReadWriteLock(true);
private final ReentrantReadWriteLock.ReadLock readLock = reentrantLock.readLock();
private final ReentrantReadWriteLock.WriteLock writeLock = reentrantLock.writeLock();
private AtomicInteger currentValue;
private AtomicInteger maxValue;
public Counter() {
int initialValue = getFromDb();
this.currentValue = new AtomicInteger(initialValue);
this.maxValue = new AtomicInteger(initialValue + 10);
}
public int getNextValue() {
readLock.lock();
while (true){
int nextValue = currentValue.getAndIncrement();
if(nextValue<maxValue.get()){
readLock.unlock();
return nextValue;
}
else {
readLock.unlock();
writeLock.lock();
reload();
readLock.lock();
writeLock.unlock();
}
}
}
private void reload(){
int newValue = getFromDb();
if(newValue>maxValue.get()) {
this.currentValue.set(newValue);
this.maxValue.set(newValue + 10);
}
}
private int getFromDb(){
// your implementation
}
}
What is the business use case you are trying to solve?
Can the following scenario work for you (a rough sketch follows below)?
Create an SQL sequence (based on your database) with the counter requirements in the database;
Fetch counters from the database as a batch, like 50-100 ids;
Once those 50-100 are used at the app level, fetch 100 more values from the DB, and so on.
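Since no code is attached to this suggestion, here is a rough sketch of the batching idea; the sequence name, the SQL, the batch size and the DataSource wiring are all assumptions, not taken from the question.
// Sketch: fetch one sequence value per batch, hand out BATCH_SIZE ids locally.
import java.sql.*;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import javax.sql.DataSource;

public class BatchedSequence {
    private static final int BATCH_SIZE = 100;            // assumed batch size
    private final DataSource dataSource;
    private final BlockingQueue<Long> cache = new ArrayBlockingQueue<>(BATCH_SIZE);

    public BatchedSequence(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public long nextId() throws Exception {
        while (true) {
            Long id = cache.poll();                       // hand out locally cached ids first
            if (id != null) return id;
            refill();                                     // only hit the DB once the batch is used up
        }
    }

    private synchronized void refill() throws Exception {
        if (!cache.isEmpty()) return;                     // another thread already refilled
        try (Connection c = dataSource.getConnection();
             Statement s = c.createStatement();
             // hypothetical sequence, assumed to be created with INCREMENT BY 100 on the DB side
             ResultSet rs = s.executeQuery("SELECT nextval('my_seq')")) {
            rs.next();
            long start = rs.getLong(1);
            for (int i = 0; i < BATCH_SIZE; i++) {
                cache.add(start + i);
            }
        }
    }
}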
Slightly modified version of user15102975's answer with no while-loop and getFromDb() mock impl.
/**
* Lock free sequence counter implementation
*/
public class LockFreeSequenceCounter {
private static final int BATCH_SIZE = 10;
private final AtomicReference<Sequence> currentSequence;
private final ConcurrentLinkedQueue<Integer> databaseSequenceQueue;
public LockFreeSequenceCounter() {
this.currentSequence = new AtomicReference<>(new Sequence(0,0));
this.databaseSequenceQueue = new ConcurrentLinkedQueue<>();
}
/**
* Get next unique id (threadsafe)
*/
public int getNextValue() {
return currentSequence.updateAndGet((old) -> old.next(this)).currentValue;
}
/**
* Immutable class to handle current and max value
*/
private static final class Sequence {
private final int currentValue;
private final int maxValue;
public Sequence(int currentValue, int maxValue) {
this.currentValue = currentValue;
this.maxValue = maxValue;
}
public Sequence next(LockFreeSequenceCounter counter){
return isMaxReached() ? fetchDB(counter) : inc();
}
private boolean isMaxReached(){
return currentValue == maxValue;
}
private Sequence inc(){
return new Sequence(this.currentValue + 1, this.maxValue);
}
private Sequence fetchDB(LockFreeSequenceCounter counter){
counter.databaseSequenceQueue.add(counter.getFromDb());
int newValue = counter.databaseSequenceQueue.poll();
int maxValue = newValue + BATCH_SIZE -1;
return new Sequence(newValue, maxValue);
}
}
/**
* Get unique id from db (mocked)
* return on call #1: 1
* return on call #2: 11
* return on call #3: 31
* Note: this function is not idempotent
*/
private int getFromDb() {
if (dbSequencer.get() == 21){
return dbSequencer.addAndGet(BATCH_SIZE);
} else{
return dbSequencer.getAndAdd(BATCH_SIZE);
}
}
private final AtomicInteger dbSequencer = new AtomicInteger(1);
}
Slightly modified version of Tom Hawtin - tackline's answer and also the suggestion by codeflush.dev in the comments of the question
Code
I have added a working version of code and simulated a basic multithreaded environment.
Disclaimer: Use with your own discretion
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
class Seed {
private static final int MSB = 32;
private final int start;
private final int end;
private final long window;
public Seed(int start, int end) {
this.start = start;
this.end = end;
this.window = (((long) end) << MSB) | start;
}
public Seed(long window) {
this.start = (int) window;
this.end = (int) (window >> MSB);
this.window = window;
}
public int getStart() {
return start;
}
public int getEnd() {
return end;
}
public long getWindow() {
return window;
}
// this will not update the state, will only return the computed value
public long computeNextInWindow() {
return window + 1;
}
}
// a mock external seed service to abstract the seed generation and window logic
class SeedService {
private static final int SEED_INIT = 1;
private static final AtomicInteger SEED = new AtomicInteger(SEED_INIT);
private static final int SEQ_LENGTH = 10;
private static final int JITTER_FACTOR = 5;
private final boolean canAddRandomJitterToSeed;
private final Random random;
public SeedService(boolean canJitterSeed) {
this.canAddRandomJitterToSeed = canJitterSeed;
this.random = new Random();
}
public int getSeqLengthForTest() {
return SEQ_LENGTH;
}
public Seed getDefaultWindow() {
return new Seed(1, 1);
}
public Seed getNextWindow() {
int offset = SEQ_LENGTH;
// trying to simulate multiple machines with interleaved start seed
if (canAddRandomJitterToSeed) {
offset += random.nextInt(JITTER_FACTOR) * SEQ_LENGTH;
}
final int start = SEED.getAndAdd(offset);
return new Seed(start, start + SEQ_LENGTH);
}
// helper to validate generated ids
public boolean validate(List<Integer> ids) {
Collections.sort(ids);
// unique check
if (ids.size() != new HashSet<>(ids).size()) {
return false;
}
for (int startIndex = 0; startIndex < ids.size(); startIndex += SEQ_LENGTH) {
if (!checkSequence(ids, startIndex)) {
return false;
}
}
return true;
}
// checks a sequence
// relies on 'main' methods usage of SEQ_LENGTH
protected boolean checkSequence(List<Integer> ids, int startIndex) {
final int startRange = ids.get(startIndex);
return IntStream.range(startRange, startRange + SEQ_LENGTH).boxed()
.collect(Collectors.toList())
.containsAll(ids.subList(startIndex, startIndex + SEQ_LENGTH));
}
public void shutdown() {
SEED.set(SEED_INIT);
System.out.println("See you soon!!!");
}
}
class SequenceGenerator {
private final SeedService seedService;
private final AtomicLong currentWindow;
public SequenceGenerator(SeedService seedService) {
this.seedService = seedService;
// initialize currentWindow using seedService
// best to initialize to an old window so that every instance of SequenceGenerator
// will lazy load from seedService during the first getNext() call
currentWindow = new AtomicLong(seedService.getDefaultWindow().getWindow());
}
public synchronized boolean requestSeed() {
Seed seed = new Seed(currentWindow.get());
if (seed.getStart() == seed.getEnd()) {
final Seed nextSeed = seedService.getNextWindow();
currentWindow.set(nextSeed.getWindow());
return true;
}
return false;
}
public int getNext() {
while (true) {
// get current window
Seed seed = new Seed(currentWindow.get());
// exhausted and need to seed again
if (seed.getStart() == seed.getEnd()) {
// this will loop at least one more time to return value
requestSeed();
} else if (currentWindow.compareAndSet(seed.getWindow(), seed.computeNextInWindow())) {
// successfully incremented value for next call. so return current value
return seed.getStart();
}
}
}
}
public class SequenceGeneratorTest {
public static void test(boolean canJitterSeed) throws Exception {
// just some random multithreaded invocation
final int EXECUTOR_THREAD_COUNT = 10;
final Random random = new Random();
final int INSTANCES = 500;
final SeedService seedService = new SeedService(canJitterSeed);
final int randomRps = 500;
final int seqLength = seedService.getSeqLengthForTest();
ExecutorService executorService = Executors.newFixedThreadPool(EXECUTOR_THREAD_COUNT);
Callable<List<Integer>> callable = () -> {
final SequenceGenerator generator = new SequenceGenerator(seedService);
int rps = (1 + random.nextInt(randomRps)) * seqLength;
return IntStream.range(0, rps).parallel().mapToObj(i -> generator.getNext())
.collect(Collectors.toList());
};
List<Future<List<Integer>>> futures = IntStream.range(0, INSTANCES).parallel()
.mapToObj(i -> executorService.submit(callable))
.collect(Collectors.toList());
List<Integer> ids = new ArrayList<>();
for (Future<List<Integer>> f : futures) {
ids.addAll(f.get());
}
executorService.shutdown();
// validate generated ids for correctness
if (!seedService.validate(ids)) {
throw new IllegalStateException();
}
seedService.shutdown();
// summary
System.out.println("count: " + ids.size() + ", unique count: " + new HashSet<>(ids).size());
Collections.sort(ids);
System.out.println("min id: " + ids.get(0) + ", max id: " + ids.get(ids.size() - 1));
}
public static void main(String[] args) throws Exception {
test(true);
System.out.println("Note: ids can be interleaved. if continuous sequence is needed, initialize SeedService with canJitterSeed=false");
final String ruler = Collections.nCopies( 50, "-" ).stream().collect( Collectors.joining());
System.out.println(ruler);
test(false);
System.out.println("Thank you!!!");
System.out.println(ruler);
}
}
I have this code, with my own homemade array class, that I want to use to test the speed of some different concurrency tools in Java:
public class LongArrayListUnsafe {
private static final ExecutorService executor
= Executors.newFixedThreadPool(1);
public static void main(String[] args) {
LongArrayList dal1 = new LongArrayList();
int n = 100_000_000;
Timer t = new Timer();
List<Callable<Void>> tasks = new ArrayList<>();
tasks.add(() -> {
for (int i = 0; i <= n; i+=2){
dal1.add(i);
}
return null;
});
tasks.add(() -> {
for (int i = 0; i < n; i++){
dal1.set(i, i + 1);
}
return null;});
tasks.add(() -> {
for (int i = 0; i < n; i++) {
dal1.get(i);
}
return null;});
tasks.add(() -> {
for (int i = n; i < n * 2; i++) {
dal1.add(i + 1);
}
return null;});
try {
executor.invokeAll(tasks);
} catch (InterruptedException exn) {
System.out.println("Interrupted: " + exn);
}
executor.shutdown();
try {
executor.awaitTermination(1000, TimeUnit.MILLISECONDS);
} catch (Exception e){
System.out.println("what?");
}
System.out.println("Using toString(): " + t.check() + " ms");
}
}
class LongArrayList {
// Invariant: 0 <= size <= items.length
private long[] items;
private int size;
public LongArrayList() {
reset();
}
public static LongArrayList withElements(long... initialValues){
LongArrayList list = new LongArrayList();
for (long l : initialValues) list.add( l );
return list;
}
public void reset(){
items = new long[2];
size = 0;
}
// Number of items in the double list
public int size() {
return size;
}
// Return item number i
public long get(int i) {
if (0 <= i && i < size)
return items[i];
else
throw new IndexOutOfBoundsException(String.valueOf(i));
}
// Replace item number i, if any, with x
public long set(int i, long x) {
if (0 <= i && i < size) {
long old = items[i];
items[i] = x;
return old;
} else
throw new IndexOutOfBoundsException(String.valueOf(i));
}
// Add item x to end of list
public LongArrayList add(long x) {
if (size == items.length) {
long[] newItems = new long[items.length * 2];
for (int i=0; i<items.length; i++)
newItems[i] = items[i];
items = newItems;
}
items[size] = x;
size++;
return this;
}
public String toString() {
return Arrays.stream(items, 0,size)
.mapToObj( Long::toString )
.collect(Collectors.joining(", ", "[", "]"));
}
}
public class Timer {
private long start, spent = 0;
public Timer() { play(); }
public double check() { return (System.nanoTime()-start+spent)/1e9; }
public void pause() { spent += System.nanoTime()-start; }
public void play() { start = System.nanoTime(); }
}
The implementation of the LongArrayList class is not so important; it's not thread-safe.
The driver code with the ExecutorService performs a bunch of operations on the array list, with 4 different tasks doing it, each 100_000_000 times.
The problem is that when I give the thread pool more threads (Executors.newFixedThreadPool(2);), it only becomes slower.
For example, for one thread, a typical timing is 1.0366974 ms, but if I run it with 3 threads, the time ramps up to 5.7932714 ms.
What is going on? Why are more threads so much slower?
EDIT:
To boil the issue down, I made this much simpler driver code, which has four tasks that simply add elements:
ExecutorService executor
= Executors.newFixedThreadPool(2);
LongArrayList dal1 = new LongArrayList();
int n = 100_000_00;
Timer t = new Timer();
for (int i = 0; i < 4 ; i++){
executor.execute(new Runnable() {
@Override
public void run() {
for (int j = 0; j < n ; j++)
dal1.add(j);
}
});
}
executor.shutdown();
try {
executor.awaitTermination(1000, TimeUnit.MILLISECONDS);
} catch (Exception e){
System.out.println("what?");
}
System.out.println("Using toString(): " + t.check() + " ms");
Here it still does not seem to matter how many threads I allocate; there is no speedup at all. Could this simply be because of overhead?
There are some problems with your code that make it hard to reason about why the time increases with more threads.
btw
public double check() { return (System.nanoTime()-start+spent)/1e9; }
gives you back seconds not milliseconds, so change this:
System.out.println("Using toString(): " + t.check() + " ms");
to
System.out.println("Using toString(): " + t.check() + "s");
First problem:
LongArrayList dal1 = new LongArrayList();
dal1 is shared among all threads, and those threads are updating that shared variable without any mutual exclusion around it, consequently, leading to race conditions. Moreover, this can also lead to cache invalidation, which can increase your overall execution time.
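As an illustration of what "mutual exclusion around it" could look like (a sketch, not part of this answer): a thin wrapper that guards every operation of the non-thread-safe LongArrayList with a single monitor. Note that this restores correctness, not speed; all tasks then contend on one lock.
// Sketch: every operation goes through one monitor, so the tasks no longer race on the list.
class SynchronizedLongArrayList {
    private final LongArrayList delegate = new LongArrayList();

    public synchronized void add(long x)        { delegate.add(x); }
    public synchronized long set(int i, long x) { return delegate.set(i, x); }
    public synchronized long get(int i)         { return delegate.get(i); }
    public synchronized int size()              { return delegate.size(); }
}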
The other thing is that you may have load balancing problems. You have 4 parallel tasks, but clearly the last one
tasks.add(() -> {
for (int i = n; i < n * 2; i++) {
dal1.add(i + 1);
}
return null;});
is the most computing-intensive task. Even if the 4 tasks run in parallel, without the problems that I have mentioned (i.e., lack of synchronization around the shared data), the last task will dictate the overall execution time.
Not to mention that parallelism does not come for free; it adds overhead (e.g., scheduling the parallel work and so on), which might be high enough that it is not worth parallelizing the code in the first place. In your code, there is at least the overhead of waiting for the tasks to be completed, and also the overhead of shutting down the pool of executors.
Another possibility that would also explain why you are not getting ArrayIndexOutOfBoundsException all over the place is that the first 3 tasks are so small that they are being executed by the same thread. This would again make your overall execution time very dependent on the last task, and on the overhead of executor.shutdown() and executor.awaitTermination. However, even if that is the case, the order of execution of tasks, and which threads will execute them, is typically non-deterministic, and consequently is not something that your application should rely upon. Funnily enough, when I changed your code to immediately execute the tasks (i.e., executor.execute) I got ArrayIndexOutOfBoundsException all over the place.
I'm writing some code to simulate CAS (compare and swap).
Here I have a method cas to simulate the CAS instruction and a method increase that adds 1 to the field count. I start 2 threads, and each thread increments the field count 10000 times.
The problem is that the expected output is 20000, but the actual output is a little bit smaller than 20000. For example 19984, 19992, 19989... It is different every time.
I would very much appreciate it if you could help me.
public class SimulateCAS {
private volatile int count;
private synchronized int cas(int expectation, int newValue) {
int curValue = count;
if (expectation == curValue) {
count = newValue;
}
return curValue;
}
void increase() {
int newValue;
do {
newValue = count + 1; // ①
} while (count != cas(count, newValue)); // ②
}
public static void main(String[] args) throws InterruptedException {
final SimulateCAS demo = new SimulateCAS();
Thread t1 = new Thread(() -> {
for (int i = 0; i < 10000; i++) {
demo.increase();
}
});
Thread t2 = new Thread(() -> {
for (int i = 0; i < 10000; i++) {
demo.increase();
}
});
t1.start();
t2.start();
t1.join();
t2.join();
System.out.println(demo.count);
}
}
The problem is your increase method.
The value of count can be updated at any point between the lines with the comment ① and ②.
Your implementation of increase assumes that this can not happen, and that the count in line ① is the same count as in line ②.
A better implementation of increase would be
void increase() {
int oldValue, newValue;
do {
oldValue = count; // get the current value
newValue = oldValue + 1; // calculate the new value based on the old
} while (oldValue != cas(oldValue, newValue)); // Do a compare and swap - if the oldValue is still the current value, change it to the newValue, otherwise not.
}
Here is your full code with a real CAS, so no locks are needed.
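For reference, a minimal sketch of the same counter written against a real CAS primitive (java.util.concurrent.atomic.AtomicInteger); this is an illustration of the idea, not necessarily the code the answer refers to:
// Sketch: AtomicInteger provides the real CAS, so neither volatile + synchronized
// nor the hand-rolled cas() method is needed.
import java.util.concurrent.atomic.AtomicInteger;

public class RealCas {
    private final AtomicInteger count = new AtomicInteger();

    void increase() {
        int oldValue;
        do {
            oldValue = count.get();                               // read the current value
        } while (!count.compareAndSet(oldValue, oldValue + 1));   // retry if another thread raced us
        // equivalent one-liner: count.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        RealCas demo = new RealCas();
        Runnable task = () -> {
            for (int i = 0; i < 10000; i++) demo.increase();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(demo.count.get()); // always 20000
    }
}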
So I have two AtomicBooleans and I need to check both of them. Something like this:
if (atomicBoolean1.get() == true && atomicBoolean2.get() == false) {
// ...
}
But there is a race condition in between :(
Is there a way to combine two atomic boolean checks into a single one without using synchronization (i.e. synchronized blocks)?
Well I can think of a couple ways but it depends on the functionality you need.
One way is to "cheat" and use an AtomicMarkableReference<Boolean>:
final AtomicMarkableReference<Boolean> twoBooleans = (
new AtomicMarkableReference<Boolean>(true, false)
);
void somewhere() {
boolean b0;
boolean[] b1 = new boolean[1];
b0 = twoBooleans.get(b1);
b0 = false;
b1[0] = true;
twoBooleans.set(b0, b1);
}
But that's kind of a pain and only gets you two values.
So then you can use AtomicInteger with bit flags:
static final int FLAG0 = 1;
static final int FLAG1 = 1 << 1;
final AtomicInteger intFlags = new AtomicInteger(FLAG0);
void somewhere() {
int flags = intFlags.get();
int both = FLAG0 | FLAG1;
if((flags & both) == FLAG0) { // if FLAG0 has a 1 and FLAG1 has a 0
something();
}
flags &= ~FLAG0; // set FLAG0 to 0 (false)
flags |= FLAG1; // set FLAG1 to 1 (true)
intFlags.set(flags);
}
Also kind of a pain but it gets you 32 values. You could probably create a wrapper class around this if you really wanted. For example:
public class AtomicBooleanArray {
private final AtomicInteger intFlags = new AtomicInteger();
public void get(boolean[] arr) {
int flags = intFlags.get();
int f = 1;
for(int i = 0; i < 32; i++) {
arr[i] = (flags & f) != 0;
f <<= 1;
}
}
public void set(boolean[] arr) {
int flags = 0;
int f = 1;
for(int i = 0; i < 32; i++) {
if(arr[i]) {
flags |= f;
}
f <<= 1;
}
intFlags.set(flags);
}
public boolean get(int index) {
return (intFlags.get() & (1 << index)) != 0;
}
public void set(int index, boolean b) {
int f = 1 << index;
int current, updated;
do {
current = intFlags.get();
updated = b ? (current | f) : (current & ~f);
} while(!intFlags.compareAndSet(current, updated));
}
}
That's pretty good. Maybe a set is performed while the array is being copied in get, but the point is you can get or set all 32 atomically. (The compare-and-set do-while loop is majorly ugly, but it's how the atomic classes themselves work for things like getAndAdd.)
AtomicReference seems impractical here. It allows atomic gets and sets but once you have your hands on the internal object you are no longer updating atomically. You'd have to create a brand new object each time.
final AtomicReference<boolean[]> booleanRefs = (
new AtomicReference<boolean[]>(new boolean[] { true, true })
);
void somewhere() {
boolean[] refs = booleanRefs.get();
refs[0] = false; // not atomic!!
boolean[] copy = booleanRefs.get().clone(); // pretty safe
copy[0] = false;
booleanRefs.set(copy);
}
If you want to perform an interim operation on the data atomically (get -> change -> set, without interference) you have to use a lock or synchronization. Personally I would use a lock or synchronization since it's usually the case that the entire update is what you want to hold on to.
** UNSAFE !! **
Don't do this!
This can (possibly) be done with sun.misc.Unsafe. Here's a class that uses Unsafe to write to two halves of a volatile long, cowboy style.
public class UnsafeBooleanPair {
private static final Unsafe UNSAFE;
private static final long[] OFFS = new long[2];
private static final long[] MASKS = new long[] {
-1L >>> 32L, -1L << 32L
};
static {
try {
UNSAFE = getTheUnsafe();
Field pair = UnsafeBooleanPair.class.getDeclaredField("pair");
OFFS[0] = UNSAFE.objectFieldOffset(pair);
OFFS[1] = OFFS[0] + 4L;
} catch(Exception e) {
throw new RuntimeException(e);
}
}
private volatile long pair;
public void set(int ind, boolean val) {
UNSAFE.putIntVolatile(this, OFFS[ind], val ? 1 : 0);
}
public boolean get(int ind) {
return (pair & MASKS[ind]) != 0L;
}
public boolean[] get(boolean[] vals) {
long p = pair;
vals[0] = (p & MASKS[0]) != 0L;
vals[1] = (p & MASKS[1]) != 0L;
return vals;
}
private static Unsafe getTheUnsafe()
throws Exception {
Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
theUnsafe.setAccessible(true);
return (Unsafe)theUnsafe.get(null);
}
}
Importantly, the Javadoc in the Open JDK source for fieldOffset says not to do arithmetic with the offset. However, doing arithmetic with it appears to actually work in that I don't get garbage.
This nets a single volatile read for the entire word, but also (potentially) a volatile write to either half of it. Potentially putByteVolatile could be used to split a long into 8 segments.
I wouldn't recommend that anybody use this (don't use this!) but it's kind of interesting as an oddity.
I can only think of two ways: use the lower two bits of an AtomicInteger or use a spinlock. I think Hotspot can optimize certain locks down to spinlocks on its own.
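For illustration only (not this answer's code), a sketch of the spinlock idea: a single AtomicBoolean acts as the lock, and the combined check of the two flags runs while it is held.
// Sketch: a tiny spinlock built on AtomicBoolean guarding the combined check.
import java.util.concurrent.atomic.AtomicBoolean;

class SpinGuardedFlags {
    private final AtomicBoolean spinLock = new AtomicBoolean(false);
    private boolean flag1 = true;
    private boolean flag2 = false;

    boolean bothConditionsHold() {
        while (!spinLock.compareAndSet(false, true)) {
            Thread.onSpinWait();            // busy-wait hint (Java 9+); an empty loop body also works
        }
        try {
            return flag1 && !flag2;         // both reads happen while the spinlock is held
        } finally {
            spinLock.set(false);
        }
    }
}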
Use a Lock:
Lock l = ...;
l.lock();
try {
// access the resource protected by this lock
} finally {
l.unlock();
}
It's technically not a synchronized block, even though it's a form of synchronization. I think what you're asking for is the very definition of synchronization, so I don't think it's possible to do it "without synchronization".
I am new to Java and trying to write a method that finds the maximum value in a 2D array of longs.
The method searches through each row in a separate thread, and the threads maintain a shared current maximal value. Whenever a thread finds a value larger than its own local maximum, it compares this value with the shared maximum and updates its current local maximum and possibly the shared maximum as appropriate. I need to make sure that appropriate synchronization is implemented so that the result is correct regardless of how the computations interleave.
My code is verbose and messy, but for starters, I have this function:
static long sharedMaxOf2DArray(long[][] arr, int r){
MyRunnableShared[] myRunnables = new MyRunnableShared[r];
for(int row = 0; row < r; row++){
MyRunnableShared rr = new MyRunnableShared(arr, row, r);
Thread t = new Thread(rr);
t.start();
myRunnables[row] = rr;
}
return myRunnables[0].sharedMax; //should be the same as any other one (?)
}
For the adapted runnable, I have this:
public static class MyRunnableShared implements Runnable{
long[][] theArray;
private int row;
private long rowMax;
public long localMax;
public long sharedMax;
private static Lock sharedMaxLock = new ReentrantLock();
MyRunnableShared(long[][] a, int r, int rm){
theArray = a;
row = r;
rowMax = rm;
}
public void run(){
localMax = 0;
for(int i = 0; i < rowMax; i++){
if(theArray[row][i] > localMax){
localMax = theArray[row][i];
sharedMaxLock.lock();
try{
if(localMax > sharedMax)
sharedMax = localMax;
}
finally{
sharedMaxLock.unlock();
}
}
}
}
}
I thought this use of a lock would be a safe way to prevent multiple threads from messing with the sharedMax at a time, but upon testing/comparing with a non-concurrent maximum-finding function on the same input, I found the results to be incorrect. I'm thinking the problem might come from the fact that I just say
...
t.start();
myRunnables[row] = rr;
...
in the sharedMaxOf2DArray function. Perhaps a given thread needs to finish before I put it in the array of myRunnables; otherwise, I will have "captured" the wrong sharedMax? Or is it something else? I'm not sure on the timing of things..
I'm not sure if this is a typo or not, but your Runnable implementation declares sharedMax as an instance variable:
public long sharedMax;
rather than a shared one:
public static long sharedMax;
In the former case, each Runnable gets its own copy and will not "see" the values of others. Changing it to the latter should help. Or, change it to:
public long[] sharedMax; // array of size 1 shared across all threads
and you can now create an array of size one outside the loop and pass it in to each Runnable to use as shared storage.
As an aside: please note that there will be tremendous lock contention since every thread checks the common sharedMax value by holding a lock for every iteration of its loop. This will likely lead to poor performance. You'd have to measure, but I'd surmise that letting each thread find the row maximum and then running a final pass to find the "max of maxes" might actually be comparable or quicker.
From JavaDocs:
public interface Callable
A task that returns a result and may
throw an exception. Implementors define a single method with no
arguments called call.
The Callable interface is similar to Runnable, in that both are
designed for classes whose instances are potentially executed by
another thread. A Runnable, however, does not return a result and
cannot throw a checked exception.
Well, you can use Callable to calculate your result from one 1D array and wait with an ExecutorService for the end. You can then compare the result of each Callable to fetch the maximum. The code may look like this:
Random random = new Random(System.nanoTime());
long[][] myArray = new long[5][5];
for (int i = 0; i < 5; i++) {
myArray[i] = new long[5];
for (int j = 0; j < 5; j++) {
myArray[i][j] = random.nextLong();
}
}
ExecutorService executor = Executors.newFixedThreadPool(myArray.length);
List<Future<Long>> myResults = new ArrayList<>();
// create a callable for each 1d array in the 2d array
for (int i = 0; i < myArray.length; i++) {
Callable<Long> callable = new SearchCallable(myArray[i]);
Future<Long> callResult = executor.submit(callable);
myResults.add(callResult);
}
// This will make the executor accept no new threads
// and finish all existing threads in the queue
executor.shutdown();
// Wait until all threads are finished
while (!executor.isTerminated()) {
}
// now compare the results and fetch the biggest one
long max = 0;
for (Future<Long> future : myResults) {
try {
max = Math.max(max, future.get());
} catch (InterruptedException | ExecutionException e) {
// something bad happened...!
e.printStackTrace();
}
}
System.out.println("The result is " + max);
And your Callable:
public class SearchCallable implements Callable<Long> {
private final long[] mArray;
public SearchCallable(final long[] pArray) {
mArray = pArray;
}
@Override
public Long call() throws Exception {
long max = 0;
for (int i = 0; i < mArray.length; i++) {
max = Math.max(max, mArray[i]);
}
System.out.println("I've got the maximum " + max + ", and you guys?");
return max;
}
}
Your code has serious lock contention and thread safety issues. Even worse, it doesn't actually wait for any of the threads to finish before the return myRunnables[0].sharedMax which is a really bad race condition. Also, using explicit locking via ReentrantLock or even synchronized blocks is usually the wrong way of doing things unless you're implementing something low level (eg your own/new concurrent data structure)
Here's a version that uses the Future concurrent primitive and an ExecutorService to handle the thread creation. The general idea is:
Submit a number of concurrent jobs to your ExecutorService
Add the Future returned backed from submit(...) to a List
Loop through the list calling get() on each Future and aggregating the result
This version has the added benefit that there is no lock contention (or locking in general) between the worker threads as each just returns back the max for its slice of the array.
import java.util.concurrent.*;
import java.util.*;
public class PMax {
public static long pmax(final long[][] arr, int numThreads) {
ExecutorService pool = Executors.newFixedThreadPool(numThreads);
try {
List<Future<Long>> list = new ArrayList<Future<Long>>();
for(int i=0;i<arr.length;i++) {
// put sub-array in a final so the inner class can see it:
final long[] subArr = arr[i];
list.add(pool.submit(new Callable<Long>() {
public Long call() {
long max = Long.MIN_VALUE;
for(int j=0;j<subArr.length;j++) {
if( subArr[j] > max ) {
max = subArr[j];
}
}
return max;
}
}));
}
// find the max of each slice's max:
long max = Long.MIN_VALUE;
for(Future<Long> future : list) {
long threadMax = future.get();
System.out.println("threadMax: " + threadMax);
if( threadMax > max ) {
max = threadMax;
}
}
return max;
} catch( RuntimeException e ) {
throw e;
} catch( Exception e ) {
throw new RuntimeException(e);
} finally {
pool.shutdown();
}
}
public static void main(String args[]) {
int x = 1000;
int y = 1000;
long max = Long.MIN_VALUE;
long[][] foo = new long[x][y];
for(int i=0;i<x;i++) {
for(int j=0;j<y;j++) {
long r = (long)(Math.random() * 100000000);
if( r > max ) {
// save this to compare against pmax:
max = r;
}
foo[i][j] = r;
}
}
int numThreads = 32;
long pmax = pmax(foo, numThreads);
System.out.println("max: " + max);
System.out.println("pmax: " + pmax);
}
}
Bonus: If you're calling this method repeatedly then it would probably make sense to pull the ExecutorService creation out of the method and have it be reused across calls.
Well, that definitely is an issue - but without more code it is hard to understand if it is the only thing.
There is basically a race condition between the access of thread[0] (and this read of sharedMax) and the modification of the sharedMax in other threads.
Think about what happens if the scheduler decides not to let any thread run for now - then when you are done creating the threads, you will return the answer without it having been modified even once! (of course there are other possible scenarios...)
You can overcome it by join()ing all threads before returning an answer.
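A minimal sketch of that fix, keeping the poster's structure; it also assumes sharedMax has been made truly shared (e.g. static or an array of size one), as an earlier answer points out, since joining alone does not fix that part:
// Sketch: start all threads, join them all, and only then read the shared result.
static long sharedMaxOf2DArray(long[][] arr, int r) throws InterruptedException {
    MyRunnableShared[] myRunnables = new MyRunnableShared[r];
    Thread[] threads = new Thread[r];
    for (int row = 0; row < r; row++) {
        myRunnables[row] = new MyRunnableShared(arr, row, r);
        threads[row] = new Thread(myRunnables[row]);
        threads[row].start();
    }
    for (Thread t : threads) {
        t.join();                            // wait for every row scan to finish
    }
    return myRunnables[0].sharedMax;         // only safe to read after all joins
}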