Concurrent tests: test case scenario automation - Java

Task definition: I need to test a custom concurrent collection, or a container that manipulates collections in a concurrent environment. More precisely, I have a read API and a write API, and I need to test whether there are any scenarios in which I can get inconsistent data.
Problem: All concurrent test frameworks (like MultiThreadedTC; see the MultiThreadedTC section of my question) only give you the ability to control the execution order of asynchronous code. In other words, you have to come up with the critical scenarios yourself.
Broad question: Are there frameworks that can take annotations like @SharedResource, @readAPI, @writeAPI and check whether your data will always be consistent? Is that impossible, or have I just leaked a startup idea?
Note: If there is no such framework, but you find the idea attractive, you are welcome to contact me or propose your ideas.
Narrow question: I'm new to concurrency, so can you suggest which scenarios I should test in the code below? (See the PeersContainer class.)
PeersContainer:
public class PeersContainer {
public class DaemonThreadFactory implements ThreadFactory {
private int counter = 1;
private final String prefix = "Daemon";
@Override
public Thread newThread(Runnable r) {
Thread thread = new Thread(r, prefix + "-" + counter);
thread.setDaemon(true);
counter++;
return thread;
}
}
private static class CacheCleaner implements Runnable {
private final Cache<Long, BlockingDeque<Peer>> cache;
public CacheCleaner(Cache<Long, BlockingDeque<Peer>> cache) {
this.cache = cache;
Thread.currentThread().setDaemon(true);
}
@Override
public void run() {
cache.cleanUp();
}
}
private final static int MAX_CACHE_SIZE = 100;
private final static int STRIPES_AMOUNT = 10;
private final static int PEER_ACCESS_TIMEOUT_MIN = 30;
private final static int CACHE_CLEAN_FREQUENCY_MIN = 1;
private final static PeersContainer INSTANCE;
private final Cache<Long, BlockingDeque<Peer>> peers = CacheBuilder.newBuilder()
.maximumSize(MAX_CACHE_SIZE)
.expireAfterWrite(PEER_ACCESS_TIMEOUT_MIN, TimeUnit.MINUTES)
.removalListener(new RemovalListener<Long, BlockingDeque<Peer>>() {
public void onRemoval(RemovalNotification<Long, BlockingDeque<Peer>> removal) {
if (removal.getCause() == RemovalCause.EXPIRED) {
for (Peer peer : removal.getValue()) {
peer.sendLogoutResponse(peer);
}
}
}
})
.build();
private final Striped<Lock> stripes = Striped.lock(STRIPES_AMOUNT);
private final ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(1, new DaemonThreadFactory());
private PeersContainer() {
scheduledExecutorService.schedule(new CacheCleaner(peers), CACHE_CLEAN_FREQUENCY_MIN, TimeUnit.MINUTES);
}
static {
INSTANCE = new PeersContainer();
}
public static PeersContainer getInstance() {
return INSTANCE;
}
private final Cache<Long, UserAuthorities> authToRestore = CacheBuilder.newBuilder()
.maximumSize(MAX_CACHE_SIZE)
.expireAfterWrite(PEER_ACCESS_TIMEOUT_MIN, TimeUnit.MINUTES)
.build();
public Collection<Peer> getPeers(long sessionId) {
return Collections.unmodifiableCollection(peers.getIfPresent(sessionId));
}
public Collection<Peer> getAllPeers() {
BlockingDeque<Peer> result = new LinkedBlockingDeque<Peer>();
for (BlockingDeque<Peer> deque : peers.asMap().values()) {
result.addAll(deque);
}
return Collections.unmodifiableCollection(result);
}
public boolean addPeer(Peer peer) {
long key = peer.getSessionId();
Lock lock = stripes.get(key);
lock.lock();
try {
BlockingDeque<Peer> userPeers = peers.getIfPresent(key);
if (userPeers == null) {
userPeers = new LinkedBlockingDeque<Peer>();
peers.put(key, userPeers);
}
UserAuthorities authorities = restoreSession(key);
if (authorities != null) {
peer.setAuthorities(authorities);
}
return userPeers.offer(peer);
} finally {
lock.unlock();
}
}
public void removePeer(Peer peer) {
long sessionId = peer.getSessionId();
Lock lock = stripes.get(sessionId);
lock.lock();
try {
BlockingDeque<Peer> userPeers = peers.getIfPresent(sessionId);
if (userPeers != null && !userPeers.isEmpty()) {
UserAuthorities authorities = userPeers.getFirst().getAuthorities();
authToRestore.put(sessionId, authorities);
userPeers.remove(peer);
}
} finally {
lock.unlock();
}
}
void removePeers(long sessionId) {
Lock lock = stripes.get(sessionId);
lock.lock();
try {
peers.invalidate(sessionId);
authToRestore.invalidate(sessionId);
} finally {
lock.unlock();
}
}
private UserAuthorities restoreSession(long sessionId) {
BlockingDeque<Peer> activePeers = peers.getIfPresent(sessionId);
return (activePeers != null && !activePeers.isEmpty()) ? activePeers.getFirst().getAuthorities() : authToRestore.getIfPresent(sessionId);
}
public void resetAccessedTimeout(long sessionId) {
Lock lock = stripes.get(sessionId);
lock.lock();
try {
BlockingDeque<Peer> deque = peers.getIfPresent(sessionId);
peers.invalidate(sessionId);
peers.put(sessionId, deque);
} finally {
lock.unlock();
}
}
}
MultiThreadedTC test case sample: [optional section of question]
public class ProducerConsumerTest extends MultithreadedTestCase {
private LinkedTransferQueue<String> queue;
@Override
public void initialize() {
super.initialize();
queue = new LinkedTransferQueue<String>();
}
public void thread1() throws InterruptedException {
String ret = queue.take();
}
public void thread2() throws InterruptedException {
waitForTick(1);
String ret = queue.take();
}
public void thread3() {
waitForTick(1);
waitForTick(2);
queue.put("Event 1");
queue.put("Event 2");
}
@Override
public void finish() {
super.finish();
assertEquals(true, queue.size() == 0);
}
}
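For the narrow question, one concrete scenario worth scripting in this style is addPeer() racing removePeers() on the same session id, followed by a read through getPeers(). Below is a minimal sketch under stated assumptions: Peer is not shown in the question, so its constructor is hypothetical, and the test relies on MultiThreadedTC's convention that every method whose name starts with "thread" runs in its own thread.
import edu.umd.cs.mtc.MultithreadedTestCase;
import edu.umd.cs.mtc.TestFramework;

public class PeersContainerRaceTest extends MultithreadedTestCase {
    private PeersContainer container;
    private Peer peer; // Peer is not shown in the question; assume it carries session id 42L

    @Override
    public void initialize() {
        super.initialize();
        container = PeersContainer.getInstance();
        peer = new Peer(42L); // hypothetical constructor taking a session id
    }

    public void threadWriter() {
        assertTrue(container.addPeer(peer));
    }

    public void threadRemover() {
        waitForTick(1); // runs strictly after threadWriter has completed
        container.removePeers(42L);
    }

    @Override
    public void finish() {
        super.finish();
        // Expected: an empty view. Actual: getPeers(42L) throws NullPointerException,
        // because Cache.getIfPresent() returns null after invalidate() and
        // Collections.unmodifiableCollection(null) blows up -- one inconsistency worth testing for.
        assertTrue(container.getPeers(42L).isEmpty());
    }
}
// Run with: TestFramework.runOnce(new PeersContainerRaceTest());
Variations of the same sketch (getAllPeers() while addPeer() runs, or addPeer() against resetAccessedTimeout()) cover the other read/write combinations.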

This sounds like a job for static analysis, not testing, unless you have time to run multiple trillions of test cases. You pretty much can't test multithreaded behaviour exhaustively: test the behaviour in a single thread, then prove the absence of threading bugs.
Try:
http://www.contemplateltd.com/threadsafe
http://checkthread.org/

Related

Using Java stream forEach() in ScheduledExecutorService freezes

The general idea is to have a Runnable running every 10 seconds in the background to check some data and, if needed, make changes to an object. The ScheduledExecutorService is instantiated in main() and the task is scheduled there. The Runnable task instantiates a Crawler object and starts crawling. Most of the time it runs a couple of times successfully, but once the application has been running and the data changes, one of the crawler's methods fires but never finishes. There is no loop in the code. I also tried debugging, without success. Maybe you will be able to spot where the problem lies.
Main:
public class Main {
public static void main(String[] args) {
DataStock dataStock = DataStock.getInstance();
ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
ses.scheduleAtFixedRate(new EveryFiveSeconds(), 5, 5, TimeUnit.SECONDS);
// below, the task which fails after a couple of runs
ses.scheduleAtFixedRate(new EveryTenSeconds(), 1 , 10, TimeUnit.SECONDS);
dataStock.init();
Menu currentScreen = new UserMenu();
while(currentScreen != null) {
currentScreen = currentScreen.display();
}
}
}
EveryTenSeconds Runnable:
public class EveryTenSeconds implements Runnable {
@Override
public void run() {
Crawler crawler = new Crawler();
crawler.crawl();
}
}
Crawler:
public class Crawler {
private final DataStock dataStock;
public Crawler() {
this.dataStock = DataStock.getInstance();
}
public void crawl() {
checkOutRentables(dataStock.getCarServicesWithOwners().keySet());
checkFinancialBook(dataStock.getPaymentsBook(), dataStock.getCurrentDate());
}
private void checkOutRentables(Set<CarService> carServices) {
System.out.println("Start check...");
carServices.stream()
.flatMap(service -> service.getWarehousesSet().stream())
.filter(rentable -> !rentable.isAvailableForRent())
.forEach(RentableArea::refreshCurrentState);
System.out.println("Checking finished");
}
private void checkFinancialBook(Set<BookEntry> bookEntries, LocalDate currentDate) {
System.out.println("Start second check...");
bookEntries.stream()
.filter(bookEntry -> currentDate.isAfter(bookEntry.getPaymentDeadline()) && !bookEntry.isPaid() && !bookEntry.isNotified())
.forEach(BookEntry::notifyDebtor);
System.out.println("Finished second check..."); //this line never shows in one of runs and the task is never repeated again...
}
}
BookEntry
public class BookEntry {
private final UUID rentableId = UUID.randomUUID();
private final UUID personId;
private final UUID id;
private final BigDecimal amountDue;
private final LocalDate paymentDeadline;
private boolean paid = false;
private boolean notified = false;
public BookEntry(UUID personId, UUID id, BigDecimal amountDue, LocalDate paymentDeadline) {
this.personId = personId;
this.id = id;
this.amountDue = amountDue;
this.paymentDeadline = paymentDeadline;
}
public UUID getRentableId() {
return rentableId;
}
public UUID getPersonId() {
return personId;
}
public UUID getId() {
return id;
}
public BigDecimal getAmountDue() {
return amountDue;
}
public LocalDate getPaymentDeadline() {
return paymentDeadline;
}
public boolean isPaid() {
return paid;
}
public boolean isNotified() {
return notified;
}
public void settlePayment() {
if(!paid) {
paid = true;
}
else {
throw new IllegalStateException("This is already paid man!");
}
}
public void notifyDebtor() {
if(!notified) {
notified = true;
DataStock dataStock = DataStock.getInstance();
Person debtor = dataStock.getPeople().stream()
.filter(person -> person.getId().equals(personId))
.findFirst()
.orElseThrow();
debtor.alert(new TenantAlert(personId, rentableId, dataStock.getCurrentDate(), amountDue));
}
}
}
It seems that the answer is simple: whenever a task scheduled in a ScheduledExecutorService throws an exception, the task is halted and never repeated, and the exception is not surfaced anywhere visible. The easiest way to avoid this situation is to wrap the body of the Runnable's run() method in a try-catch block. Please have a look at this post: ScheduledExecutorService handling exceptions
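A minimal sketch of that fix, applied to the EveryTenSeconds task from the question (logging to stderr is just one way to surface the failure):
public class EveryTenSeconds implements Runnable {
    @Override
    public void run() {
        try {
            Crawler crawler = new Crawler();
            crawler.crawl();
        } catch (Exception e) {
            // Without this catch the exception is captured by the returned Future,
            // never printed, and the periodic task is silently cancelled.
            e.printStackTrace();
        }
    }
}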

Java: Synchronization based on object value

I want to synchronize one method or one block based on input parameters.
So I have one API with two inputs (let's say id1 and id2) of long type (primitive or wrapper) in the POST payload, which can be JSON. This API will be called by multiple threads, at the same time or at different times, randomly.
Now if the first API call has id1=1 and id2=1, and at the same time another API call comes in with id1=1 and id2=1, the second call should wait for the first call to finish processing before it executes. If the second API call has a different combination of values, like id1=1 and id2=2, it should go through in parallel without any wait time.
I don't mind creating a service method that the API resource method can call, rather than handling this directly in the API resource method.
I'm using Spring Boot REST controller APIs.
**Edit**
I've already tried using a map as suggested, but it only partially works: it makes calls wait for all input values, not just for the same input values. Below is my code:
public static void main(String[] args) throws Exception {
ApplicationContext context = SpringApplication.run(Application.class, args);
AccountResource ar = context.getBean(AccountResource.class);
UID uid1 = new UID();
uid1.setFieldId(1);
uid1.setLetterFieldId(1);
UID uid2 = new UID();
uid2.setFieldId(2);
uid2.setLetterFieldId(2);
UID uid3 = new UID();
uid3.setFieldId(1);
uid3.setLetterFieldId(1);
Runnable r1 = new Runnable() {
@Override
public void run() {
while (true) {
ar.test(uid1);
}
}
};
Runnable r2 = new Runnable() {
@Override
public void run() {
while (true) {
ar.test(uid2);
}
}
};
Runnable r3 = new Runnable() {
@Override
public void run() {
while (true) {
ar.test(uid3);
}
}
};
Thread t1 = new Thread(r1);
t1.start();
Thread t2 = new Thread(r2);
t2.start();
Thread t3 = new Thread(r3);
t3.start();
}
#Path("v1/account")
#Service
public class AccountResource {
public void test(UID uid) {
uidFieldValidator.setUid(uid);
Object lock;
synchronized (map) {
lock = map.get(uid);
if (lock == null) {
map.put(uid, (lock = new Object()));
}
synchronized (lock) {
//some operation
}
}
}
}
package com.urman.hibernate.test;
import java.util.Objects;
public class UID {
private long letterFieldId;
private long fieldId;
private String value;
public long getLetterFieldId() {
return letterFieldId;
}
public void setLetterFieldId(long letterFieldId) {
this.letterFieldId = letterFieldId;
}
public long getFieldId() {
return fieldId;
}
public void setFieldId(long fieldId) {
this.fieldId = fieldId;
}
public String getValue() {
return value;
}
public void setValue(String value) {
this.value = value;
}
@Override
public int hashCode() {
return Objects.hash(fieldId, letterFieldId);
}
@Override
public boolean equals(Object obj) {
if (this == obj) {
return true;
}
if (obj == null) {
return false;
}
if (getClass() != obj.getClass()) {
return false;
}
UID other = (UID) obj;
return fieldId == other.fieldId && letterFieldId == other.letterFieldId;
}
}
You need a collection of locks, which you can keep in a map and allocate as required. Here I assume that your id1 and id2 are Strings; adjust as appropriate.
Map<String,Object> lockMap = new HashMap<>();
:
void someMethod(String id1, String id2) {
Object lock;
synchronized (lockMap) {
lock = lockMap.get(id1+id2);
if (lock == null) lockMap.put(id1+id2, (lock = new Object()));
}
synchronized (lock) {
:
}
}
You need a little bit of 'global' synchronization for the map operations, or you could use one of the concurrent implementations. I used the base HashMap for simplicity of implementation.
After you've selected a lock, sync on it.
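As a sketch of the "concurrent implementation" route, and assuming the UID key class from the question (its equals/hashCode already cover fieldId and letterFieldId), ConcurrentHashMap.computeIfAbsent removes the need for the outer synchronized block:
import java.util.concurrent.ConcurrentHashMap;

private final ConcurrentHashMap<UID, Object> lockMap = new ConcurrentHashMap<>();

public void test(UID uid) {
    // atomically get-or-create the lock object for this id combination
    Object lock = lockMap.computeIfAbsent(uid, key -> new Object());
    synchronized (lock) {
        // some operation -- only calls with an equal UID serialize here
    }
}
One trade-off of both variants: lock objects are never removed, so with an unbounded set of id combinations the map grows without bound.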

Reduce the number of AsyncTask in Room DataBase Repository

I am trying to reduce the number of lines of code in my repository.
Currently there is a lot of repetition in my code.
Many of the solutions online only cover inserting into a single table.
I need to call insert() on many tables, and I want to avoid writing the same inner AsyncTask over and over for inserting different data into different tables.
This is the code for the repository class:
public class CharacterRepository {
private UserDao rUserDao;
private CharacterDao rCharacterDao;
private EquipementDao rEquipementDao;
private LiveData<List<UserEntity>> rUserLD;
private LiveData<List<CharacterEntity>> rCharacterLD;
private LiveData<List<EquipmentEntity>> rEquipmentLD;
// Constructor that handles the database and initialise the member variables
CharacterRepository(Application application){
MyDatabase db = MyDatabase.getDatabase(application);
rUserDao = db.userDao();
rCharacterDao = db.characterDao();
rEquipementDao = db.EquipmentDao();
rUserLD = rUserDao.getAllUser();
rCharacterLD = rCharacterDao.getAllChar();
rEquipmentLD = rEquipementDao.getAllEquip();
}
// Wrapper method that returns cached entities as LiveData
public LiveData<List<UserEntity>> getAllUser(){return rUserLD;}
public LiveData<List<CharacterEntity>> getAllChar(){return rCharacterLD;}
public LiveData<List<EquipmentEntity>> getAllEquip(){return rEquipmentLD;}
/*---------------------the start of the problem-------------------*/
//Wrapper method: calling insert on non-UI Thread
public void insert(UserEntity userEntity){new insertUserAsyncTask(rUserDao).execute(userEntity);}
public void insert(CharacterEntity characterEntity){new insertCharacterAsyncTask(rCharacterDao).execute(characterEntity);}
public void insert(EquipmentEntity equipmentEntity){new insertEquipAsyncTask(rEquipementDao).execute(equipmentEntity);}
/*-------------------THIS IS THE PART WHERE I WANT TO REDUCE THE CODE REDUNDANCY; THE BLOCKS BELOW ALL DO THE SAME THING-------------------*/
private static class insertUserAsyncTask extends AsyncTask<UserEntity, Void, Void> {
private UserDao mAsyncTaskDao;
insertUserAsyncTask(UserDao dao) {mAsyncTaskDao = dao;}
@Override
protected Void doInBackground(UserEntity... userEntities) {
mAsyncTaskDao.save(userEntities[0]);
return null;
}
}
private static class insertCharacterAsyncTask extends AsyncTask<CharacterEntity, Void, Void> {
private CharacterDao mAsyncTaskDao;
insertCharacterAsyncTask(CharacterDao dao) {mAsyncTaskDao = dao; }
@Override
protected Void doInBackground(CharacterEntity... characterEntities) {
mAsyncTaskDao.save(characterEntities[0]);
return null;
}
}
private static class insertEquipAsyncTask extends AsyncTask<EquipmentEntity, Void, Void> {
private EquipmentDao mAsyncTaskDao;
insertEquipAsyncTask(EquipmentDao dao) {mAsyncTaskDao = dao;}
@Override
protected Void doInBackground(EquipmentEntity... equipmentEntities) {
mAsyncTaskDao.save(equipmentEntities[0]);
return null;
}
}
}
I still have other insert methods, and I need to call delete and update as well. I do not want the code to be so repetitive.
So, @notTdar came up with this solution:
Have a helper class that wraps a ThreadPoolExecutor.
Call this class to execute all the DAO operations from the Android Room database.
Call cleanResources() in onDestroy.
Call shut() in onPause.
ThreadPoolExecutorHelper.java
public class ThreadPoolExecutorHelper {
private static final String TAG = ThreadPoolExecutorHelper.class.getSimpleName() + " : ";
private static final boolean LOG_DEBUG = false;
private static volatile ThreadPoolExecutorHelper INSTANCE;
private ThreadPoolExecutor mThreadPoolExecutor;
private BlockingQueue<Runnable> mBlockingQueue;
private static final int TASK_QUEUE_SIZE = 12;
// core pool size: number of threads kept alive, whether running or idle
private static final int CORE_POOL_SIZE = 5;
// maximum pool size
private static final int MAX_POOL_SIZE = 5;
// once the core pool size is exceeded, an idle thread waits this long before termination
private static final long KEEP_ALIVE_TIME = 20L;
public static ThreadPoolExecutorHelper getInstance() {
if (LOG_DEBUG) Log.e(TAG, "getInstance: ");
if (INSTANCE == null) {
synchronized (ThreadPoolExecutorHelper.class) {
if (INSTANCE == null) {
INSTANCE = new ThreadPoolExecutorHelper();
}
}
}
return INSTANCE;
}
private ThreadPoolExecutorHelper() {
if (LOG_DEBUG) Log.d(TAG, "ctor: ");
initBlockingQueue();
initThreadPoolExecutor();
}
// submit Runnables
public void submitRunnable(Runnable task) {
if (LOG_DEBUG) Log.d(TAG, "submitRunnable: " + task.getClass().getSimpleName());
//in case the pool was cleaned up earlier, init again if null
initBlockingQueue();
initThreadPoolExecutor();
mThreadPoolExecutor.execute(task);
}
// shut the threadpool
public synchronized void shut() {
if (LOG_DEBUG) Log.d(TAG, "shut: ");
if (mThreadPoolExecutor != null) {
mThreadPoolExecutor.shutdown();
try {
mThreadPoolExecutor.awaitTermination(6000L, TimeUnit.SECONDS);
} catch (InterruptedException e) {
if (LOG_DEBUG) Log.w(TAG, "shut: InterruptedException");
mThreadPoolExecutor.shutdownNow();
}
} else {
Log.e(TAG, "shut: mThreadPoolExecutor instance NULL");
}
}
//clean up
public void cleanResources() {
if (LOG_DEBUG) Log.e(TAG, "cleanResources: ");
if (INSTANCE != null) {
if (mThreadPoolExecutor != null) {
mThreadPoolExecutor = null;
}
if (mBlockingQueue != null) {
mBlockingQueue = null;
}
nullifyHelper();
}
}
private static void nullifyHelper() {
if (INSTANCE != null) {
INSTANCE = null;
}
}
private void initBlockingQueue() {
if (mBlockingQueue == null) {
mBlockingQueue = new LinkedBlockingQueue<>(TASK_QUEUE_SIZE);
}
}
private void initThreadPoolExecutor() {
if (mThreadPoolExecutor == null) {
mThreadPoolExecutor = new ThreadPoolExecutor(CORE_POOL_SIZE, MAX_POOL_SIZE,
KEEP_ALIVE_TIME, TimeUnit.SECONDS, mBlockingQueue);
}
}
}
Add this code in onCreate() (Activity) or onViewCreated() (Fragment).
This will initialise the ThreadPoolExecutorHelper by calling getInstance():
private void initExecutorHelper() {
if (LOG_DEBUG) Log.d(TAG, "initExecutorHelper: ");
if (mExecutorHelper == null) {
mExecutorHelper = ThreadPoolExecutorHelper.getInstance();
}
}
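Correspondingly, a sketch of the teardown steps listed above, assuming a plain Activity that holds the mExecutorHelper field:
@Override
protected void onPause() {
    super.onPause();
    if (mExecutorHelper != null) mExecutorHelper.shut(); // drain and stop the pool
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (mExecutorHelper != null) {
        mExecutorHelper.cleanResources(); // release the singleton and its queue
        mExecutorHelper = null;
    }
}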
This is the insert() method that submits a task to the pool.
You can change it to perform insert, query, or delete tasks on the DAOs in the Room database:
public void insert() {
if (LOG_DEBUG) Log.d(TAG, "requestQREntityList: whatKind= " + whatKind);
mExecutorHelper.submitRunnable(() -> {
if (!Thread.interrupted()) {
//request a list or inset something, write your logic.
} else {
if (LOG_DEBUG) Log.e(TAG, "run: Thread is interrupted");
}
});
}
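Applied back to the repository from the question, each wrapper then shrinks to a one-line lambda. The DAO and entity names below are taken from the question, and the repository is assumed to hold an mExecutorHelper field initialised via ThreadPoolExecutorHelper.getInstance():
public void insert(UserEntity userEntity) {
    mExecutorHelper.submitRunnable(() -> rUserDao.save(userEntity));
}

public void insert(CharacterEntity characterEntity) {
    mExecutorHelper.submitRunnable(() -> rCharacterDao.save(characterEntity));
}

public void insert(EquipmentEntity equipmentEntity) {
    mExecutorHelper.submitRunnable(() -> rEquipementDao.save(equipmentEntity));
}
Delete and update wrappers follow the same pattern, which removes the per-entity AsyncTask subclasses entirely.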

Stop the whole producer and consumer threads and yield the control to main thread

The DefaultRunners are producers and the OrderTaker is a consumer; they both share an OrderQueue.
Currently, I use the variable isDone to indicate whether a game is finished.
Once a round is done, I want the whole thing to repeat again and again. However, in my current implementation it only runs once.
How can I solve this?
public class OrderQueue {
public synchronized void pushOrder(Order order) throws InterruptedException {
if (isDone) {
wait();
} else {
runnersQueue.addLast(order);
notifyAll();
}
}
public void pullOrder() {
try {
if (runnersQueue.size() == 0) {
} else if (isDone) {
wait();
} else {
handleOrder(runnersQueue.pop());
}
} catch (InterruptedException e) {
}
}
In my main class
while(true){
enterYesToStart();
DefaultRunners dfltRunner = new DefaultRunners(queue);
OrderTaker taker = new OrderTaker(queue);
taker.run();
System.out.println("This round is finished"); # never reach to this line
}
Here's the full source code for the example
https://gist.github.com/poc7667/d98e3bf5b3b470fcb51e00d9a0d80931
I've taken a look at your code snippets and the problem is fairly obvious.
The main thread runs the OrderTaker runnable and gets stuck in an eternal loop, because its while statement cannot complete unless it throws an exception. (Note that the same is true for your ThreadRunner runnable.)
This means that the main thread is still pulling orders while the race is already done.
The OrderTaker should exit its while loop once the race is done. There are multiple ways to achieve this, but one way is to use a shared variable.
I took your code and adapted it into a working example.
import java.util.*;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
public class RaceApp {
public static void main(String[] args) throws InterruptedException {
final RaceUpdateManager queue = new RaceUpdateManager();
for (int i = 0; i < 3; i++) {
queue.reset();
List<Thread> threads = Arrays.asList(
new Thread(new Runner("Tortoise", 0, 10, queue)),
new Thread(new Runner("Hare", 90, 100, queue))
);
for (Thread thread : threads) {
thread.start();
}
RaceUpdatesProcessor processor = new RaceUpdatesProcessor(queue);
processor.run();
System.out.println("Game finished");
}
}
private static class RaceUpdateManager {
private static final int TOTAL_DISTANCE = 300;
//thread-safe queue implementation, so no external synchronization is required when adding/removing updates
private final Deque<RaceUpdate> runnersQueue = new ConcurrentLinkedDeque<>();
//lock used to sync changes to runnersRecords and done variables
private final ReadWriteLock raceStatusLock = new ReentrantReadWriteLock();
private final Map<String, Integer> runnersRecords = new HashMap<>();
private volatile boolean raceDone = false;//volatile keyword guarantees visibility of changes to variables across threads
public boolean isRaceDone() {
return raceDone;
}
//updates can by added simultaneously (read lock)
public void register(RaceUpdate raceUpdate) throws InterruptedException {
Lock readLock = raceStatusLock.readLock();
readLock.lock();
try {
if (!raceDone) {
runnersQueue.addLast(raceUpdate);
}//ignore updates when the race is done
} finally {
readLock.unlock();
}
}
//but they need to be processed in order (exclusive write lock)
public void processOldestUpdate() {
Lock writeLock = raceStatusLock.writeLock();
writeLock.lock();
try {
RaceUpdate raceUpdate = runnersQueue.poll();
if (raceUpdate != null) {
handleUpdate(raceUpdate);
}
} finally {
writeLock.unlock();
}
}
private void handleUpdate(RaceUpdate raceUpdate) {
Integer distanceRun = runnersRecords.merge(
raceUpdate.runner, raceUpdate.distanceRunSinceLastUpdate, (total, increment) -> total + increment
);
System.out.printf("%s: %d\n", raceUpdate.runner, distanceRun);
if (distanceRun >= TOTAL_DISTANCE) {
raceDone = true;
System.out.printf("Winner %s\n", raceUpdate.runner);
}
}
public void reset() {
Lock writeLock = raceStatusLock.writeLock();
writeLock.lock();
try {
runnersQueue.clear();
runnersRecords.clear();
raceDone = false;
} finally {
writeLock.unlock();
}
}
}
public static class Runner implements Runnable {
private final String name;
private final int rest;
private final int speed;
private final RaceUpdateManager queue;
private final Random rand = new Random();
public Runner(String name, int rest, int speed, RaceUpdateManager queue) {
this.name = name;
this.rest = rest;
this.speed = speed;
this.queue = queue;
}
@Override
public void run() {
while (!queue.isRaceDone()) {
try {
if (!takeRest()) {
queue.register(new RaceUpdate(this.name, this.speed));
}
Thread.sleep(100);
} catch (InterruptedException e) {
//signal that thread was interrupted and exit method
Thread.currentThread().interrupt();
return;
}
}
}
private boolean takeRest() {
return rand.nextInt(100) < rest;
}
}
public static class RaceUpdatesProcessor implements Runnable {
private final RaceUpdateManager queue;
public RaceUpdatesProcessor(RaceUpdateManager queue) {
this.queue = queue;
}
@Override
public void run() {
while (!queue.isRaceDone()) {
try {
queue.processOldestUpdate();
Thread.sleep(50);
} catch (InterruptedException e) {
//signal that thread was interrupted and exit method
Thread.currentThread().interrupt();
return;
}
}
}
}
public static class RaceUpdate {
public final String runner;
public final int distanceRunSinceLastUpdate;
public RaceUpdate(String runner, int distanceRunSinceLastUpdate) {
this.runner = runner;
this.distanceRunSinceLastUpdate = distanceRunSinceLastUpdate;
}
}
}

Right Approach for a General Purpose Batching Class

I'm looking for a class that will allow me to add items to process and, when the item count reaches the batch size, perform some operation. I would use it something like this:
Batcher<Token> batcher = new Batcher<Token>(500, Executors.newFixedThreadPool(4)) {
public void onFlush(List<Token> tokens) {
rest.notifyBatch(tokens);
}
};
tokens.forEach((t)->batcher.add(t));
batcher.awaitDone();
After #awaitDone returns, I know that all tokens have been notified. The #onFlush callback might do anything; for example, I might want to batch inserts into a database. I would like the #onFlush invocations to be submitted to an Executor.
I came up with a solution for this, but it seems like a lot of code, so my question is: is there a better way I should be doing this? Is there an existing class other than the one I implemented, or a better way to implement this? My solution seems to have a lot of moving pieces.
Here's the code I came up with:
/**
* Simple class to allow the batched processing of items and then to alternatively wait
* for all batches to be completed.
*/
public abstract class Batcher<T> {
private final int batchSize;
private final ArrayBlockingQueue<T> batch;
private final Executor executor;
private final Phaser phaser = new Phaser(1);
private final AtomicInteger processed = new AtomicInteger(0);
public Batcher(int batchSize, Executor executor) {
this.batchSize = batchSize;
this.executor = executor;
this.batch = new ArrayBlockingQueue<>(batchSize);
}
public void add(T item) {
processed.incrementAndGet();
while (!batch.offer(item)) {
flush();
}
}
public void addAll(Iterable<T> items) {
for (T item : items) {
add(item);
}
}
public int getProcessedCount() {
return processed.get();
}
public void flush() {
if (batch.isEmpty())
return;
final List<T> batched = new ArrayList<>(batchSize);
batch.drainTo(batched, batchSize);
if (!batched.isEmpty())
executor.execute(new PhasedRunnable(batched));
}
public abstract void onFlush(List<T> batch);
public void awaitDone() {
flush();
phaser.arriveAndAwaitAdvance();
}
public void awaitDone(long duration, TimeUnit unit) throws TimeoutException {
flush();
try {
phaser.awaitAdvanceInterruptibly(phaser.arrive(), duration, unit);
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
private class PhasedRunnable implements Runnable {
private final List<T> batch;
private PhasedRunnable(List<T> batch) {
this.batch = batch;
phaser.register();
}
@Override
public void run() {
try {
onFlush(batch);
}
finally {
phaser.arrive();
}
}
}
}
A Java 8 solution would be great. Thanks.
What strikes me is that your code doesn't work with more than one thread adding items to a single Batcher instance. If we turn this limitation into the specified use case, there is no need to use specialized concurrent classes internally. So we can accumulate into an ordinary ArrayList and swap this list for a new one when the capacity is exhausted, without the need to copy items. This allows simplifying the code to:
public class Batcher<T> implements Consumer<T> {
private final int batchSize;
private final Executor executor;
private final Consumer<List<T>> actualAction;
private final Phaser phaser = new Phaser(1);
private ArrayList<T> batch;
private int processed;
public Batcher(int batchSize, Executor executor, Consumer<List<T>> c) {
this.batchSize = batchSize;
this.executor = executor;
this.actualAction = c;
this.batch = new ArrayList<>(batchSize);
}
public void accept(T item) {
processed++;
if(batch.size()==batchSize) flush();
batch.add(item);
}
public int getProcessedCount() {
return processed;
}
public void flush() {
List<T> current = batch;
if (batch.isEmpty())
return;
batch = new ArrayList<>(batchSize);
phaser.register();
executor.execute(() -> {
try {
actualAction.accept(current);
}
finally {
phaser.arrive();
}
});
}
public void awaitDone() {
flush();
phaser.arriveAndAwaitAdvance();
}
public void awaitDone(long duration, TimeUnit unit) throws TimeoutException {
flush();
try {
phaser.awaitAdvanceInterruptibly(phaser.arrive(), duration, unit);
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
Regarding Java 8 specific improvements: it uses a Consumer, which allows specifying the final action via a lambda expression without the need to subclass Batcher. Further, the PhasedRunnable is replaced by a lambda expression. As another simplification, Batcher<T> implements Consumer<T>, which removes the need for an addAll method, since every Iterable supports forEach(Consumer<? super T>).
So the use case now looks like:
Batcher<Token> batcher = new Batcher<>(
500, Executors.newFixedThreadPool(4), currTokens -> rest.notifyBatch(currTokens));
tokens.forEach(batcher);
batcher.awaitDone();
