Scheduling tasks with a maximum / minimum duration between tasks - java

Refreshing records from a DB. We either get an explicit notification to refresh, or poll every 60 seconds. No more than one refresh per second.
If a request comes in, it should queue an immediate refresh if one has not happened within one second. Otherwise, it should schedule a refresh for 1 second after the end of the last refresh, unless such a task is already scheduled for that time or sooner.
After one minute without an explicit refresh, the timer should kick in and refresh, in case notifications were not sent.
There may be a large number of notifications coming in (several hundred per second).
Refreshing can be done by a separate single thread.
What's an elegant way to design this?
Here's what I have, but it might lead to too many requests:
private NotificationCenter() {
recordFetchService = Executors.newSingleThreadScheduledExecutor();
recordFetchService.scheduleWithFixedDelay(refreshCommand, minTimeBetweenRefresh, maxTimeBetweenRefresh, TimeUnit.MILLISECONDS);
}
private void queueRefresh() {
// explicit refresh requested. Schedule a refreshCommand to fire immediately, unless that would break our contract
if (pending != null && !pending.isDone() && pending.getDelay(TimeUnit.MILLISECONDS) < minTimeBetweenRefresh) {
// a refresh is already scheduled
} else {
pending = recordFetchService.schedule(refreshCommand, 0L, TimeUnit.MILLISECONDS);
}
}

With "hundreds of notifications per second" an AtomicBoolean comes to mind to switch state exactly once from "doing nothing" to "going to do something" and vice versa. Couple the "going to do something" state with a Semaphore and you have the option to determine the exact moment when "going to do something" takes place.
Below is a (runnable) example implementation/design that combines the AtomicBoolean and Semaphore to refresh data regularly while using notifications. It is probably not the most elegant way, but I do think it accomplishes the goal in a relatively straightforward manner.
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;
public class RefreshTask {
private static final long MIN_WAIT_MS = 100L;
private static final long MAX_WAIT_MS = 1000L;
private ScheduledExecutorService scheduler;
private ExecutorService executor;
private volatile boolean stopping;
private final Semaphore refreshLock = new Semaphore(0);
private final AtomicBoolean refreshing = new AtomicBoolean();
private volatile long lastRefresh;
public void start() {
stopping = false;
refreshing.set(true);
lastRefresh = System.currentTimeMillis();
executor = Executors.newSingleThreadExecutor();
executor.execute(new RefreshLoop());
scheduler = Executors.newSingleThreadScheduledExecutor();
}
public void stop() {
stopping = true;
if (executor != null) {
refreshLock.release();
scheduler.shutdownNow();
executor.shutdownNow();
}
}
/** Trigger a (scheduled) refresh of data. */
public void refresh() {
if (refreshing.compareAndSet(false, true)) {
final long dataAge = System.currentTimeMillis() - lastRefresh;
if (dataAge >= MIN_WAIT_MS) {
refreshLock.release();
// println("Refresh lock released.");
} else {
long waitTime = MIN_WAIT_MS - dataAge;
scheduler.schedule(new RefreshReleaser(), waitTime, TimeUnit.MILLISECONDS);
println("Refresh scheduled in " + waitTime + " ms.");
}
} else {
// println("Refresh already triggered.");
}
}
protected void refreshData() {
// Refresh data from database
println("DATA refresh");
}
class RefreshLoop implements Runnable {
@Override
public void run() {
while (!stopping) {
try {
refreshData();
} catch (Exception e) {
e.printStackTrace();
}
lastRefresh = System.currentTimeMillis();
refreshing.set(false);
try {
if (!refreshLock.tryAcquire(MAX_WAIT_MS, TimeUnit.MILLISECONDS)) {
if (!refreshing.compareAndSet(false, true)) {
// Unlikely state, but can happen if "dataAge" in the refresh-method is around MAX_WAIT_MS.
// Resolve the race-condition by removing the extra permit.
if (refreshLock.tryAcquire()) {
println("Refresh lock race-condition detected, removed additional permit.");
} else {
println("Refresh lock race-condition detected, but no additional permit found.");
}
}
println("Refreshing after max waiting time.");
} // else refreshing already set to true
} catch (InterruptedException ie) {
if (!stopping) {
ie.printStackTrace();
}
}
}
println("Refresh loop stopped.");
}
}
class RefreshReleaser implements Runnable {
@Override
public void run() {
if (refreshing.get()) {
refreshLock.release();
println("Scheduled refresh lock release.");
} else {
println("Programming error, scheduled refresh lock release can only be done in refreshing state.");
}
}
}
/* *** some testing *** */
public static void main(String[] args) {
RefreshTask rt = new RefreshTask();
try {
println("Starting");
rt.start();
Thread.sleep(2 * MIN_WAIT_MS);
println("Triggering refresh");
rt.refresh();
Thread.sleep(MAX_WAIT_MS + (MIN_WAIT_MS / 2));
println("Triggering refresh 2");
rt.refresh();
Thread.sleep(MIN_WAIT_MS);
} catch (Exception e) {
e.printStackTrace();
} finally {
rt.stop();
}
}
public static final long startTime = System.currentTimeMillis();
public static void println(String msg) {
println(System.currentTimeMillis() - startTime, msg);
}
public static void println(long tstamp, String msg) {
System.out.println(String.format("%05d ", tstamp) + msg);
}
}
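For comparison, here is a more compact sketch that coalesces refresh requests with just a ScheduledExecutorService and an AtomicBoolean, using the one-second minimum and one-minute maximum from the question; the class and method names are illustrative, not taken from the question's code:
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: at most one refresh per MIN_WAIT_MS, at least one roughly every MAX_WAIT_MS.
public class CoalescingRefresher {
    private static final long MIN_WAIT_MS = 1_000L;   // min gap between refreshes (from the question)
    private static final long MAX_WAIT_MS = 60_000L;  // max gap without a refresh (from the question)

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final AtomicBoolean refreshPending = new AtomicBoolean(); // true while a refresh is scheduled
    private volatile long lastRefresh = System.currentTimeMillis();

    public void start() {
        // safety net: a periodic refresh in case no notifications arrive for a minute
        scheduler.scheduleWithFixedDelay(this::requestRefresh, MAX_WAIT_MS, MAX_WAIT_MS, TimeUnit.MILLISECONDS);
    }

    /** Called for every notification; cheap enough for hundreds of calls per second. */
    public void onNotification() {
        requestRefresh();
    }

    private void requestRefresh() {
        if (refreshPending.compareAndSet(false, true)) {
            long wait = Math.max(0L, MIN_WAIT_MS - (System.currentTimeMillis() - lastRefresh));
            scheduler.schedule(this::doRefresh, wait, TimeUnit.MILLISECONDS);
        }
        // else: a refresh is already scheduled, this request is coalesced into it
    }

    private void doRefresh() {
        try {
            // refresh the records from the database here
        } finally {
            lastRefresh = System.currentTimeMillis();
            refreshPending.set(false);
        }
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
Every notification flips the AtomicBoolean at most once per pending refresh, so hundreds of notifications per second collapse into a single scheduled task.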

Related

Calculate time taken by worker threads while processing messages

What is the best approach to check how long my worker threads have been processing since they picked up a message, and to log an error message if they exceed a threshold time limit? I presume that needs to be managed in the WorkerManager class.
My WorkerManager kick starts the worker threads
If there are messages from the provider, then the worker thread processes them by calling a service class.
If there are no messages then it goes to sleep for a brief period.
When a worker is processing messages and it takes more than, say, 5 minutes, I want to generate a warning message but still let the worker thread continue processing.
Question
I want to continuously check whether my worker threads have been processing a message for more than 5 minutes; if they exceed the threshold time, I want to log an error message but still let the worker thread continue as is.
WorkerManager Class
public class WorkerManager implements Runnable {
private MyWorker[] workers;
private int workerCount;
private boolean stopRequested;
public WorkerManager(int count){
this.workerCount = count;
}
@Override
public void run(){
stopRequested = false;
boolean managerStarted = false;
while (!stopRequested) {
if(!managerStarted) {
workers = new MyWorker[workerCount];
for (int i = 0; i < workerCount; i++) {
// note: each workers[i] must be assigned a MyWorker instance here before its thread is started
final Thread workerThread = new Thread(workers[i], "Worker-" + (i + 1));
workerThread.start();
}
managerStarted = true;
}
}
}
public void stop(){
stopRequested = true;
}
// Call this on exit
public void cleanUpOnExit() {
for(MyWorker w: workers){
w.setStopRequested();
}
}
}
Worker Class
public class MyWorker implements Runnable {
private final int WAIT_INTERVAL = 200;
private MyService myService;
private MyProvider myProvider;
private boolean stopRequested = false;
public MyWorker(MyService myService, MyProvider myProvider){
this.myService = myService;
this.myProvider = myProvider;
}
public void setStopRequested() {
stopRequested = true;
}
@Override
public void run() {
while (!stopRequested) {
boolean processedMessage = false;
List<Message> messages = myProvider.getPendingMessages();
if (messages.size() != 0) {
AdapterLog.debug("We have " + messages.size() + " messages");
processedMessage = true;
for (Message message : messages) {
processMessage(message);
}
}
if (!(processedMessage || stopRequested)) {
// this is to stop the thread from spinning when there are no messages
try {
Thread.sleep(WAIT_INTERVAL);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
private void processMessage(Message message){
myService.process(message);
}
}
Your WorkerManager needs a way to determine when the last message for each worker was processed, so the workers will need to keep track of the timestamp of the last processed message.
Then your WorkerManager can check the timestamp of each worker and generate the warnings if needed. To check the workers at a given period, you could use a scheduled executor:
ScheduledExecutorService scheduledExecutorService = Executors.newSingleThreadScheduledExecutor();
scheduledExecutorService.scheduleAtFixedRate(this::checkTimeoutProcessingMessages, 5L, 5L, TimeUnit.SECONDS);
And you could check the times getting the timestamp from each worker:
public void checkTimeoutProcessingMessages() {
long thresholdMs = 5 * 60 * 1000L; // 5 minutes, matching the threshold from the question
for (MyWorker worker : workers) {
long lastProcessed = worker.getLastProcessedMessageTimestamp();
long currentTimestamp = System.currentTimeMillis();
if (currentTimestamp - lastProcessed > thresholdMs) {
// warn: this worker has not finished a message within the threshold
}
}
}
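The check above relies on worker.getLastProcessedMessageTimestamp(), which is not shown in the question. A minimal sketch of how MyWorker could expose it (the volatile field and getter are assumptions, not part of the original code):
// Inside MyWorker: track when the last message finished processing.
private volatile long lastProcessedMessageTimestamp = System.currentTimeMillis();

private void processMessage(Message message) {
    myService.process(message);
    // updated after each message so WorkerManager can spot stalled processing
    lastProcessedMessageTimestamp = System.currentTimeMillis();
}

public long getLastProcessedMessageTimestamp() {
    return lastProcessedMessageTimestamp;
}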

Optimising Java code for fast response

I have a multithreaded Java application that uses several threads that are CPU intensive to gather information. Once every few minutes, a result is found that requires handling by another thread of the program. The found result is added to a list and the other relevant thread is notified (using Lock and Condition), after which it handles the found information. I need the time delay for passing this information from thread to thread to be as small as possible. When measuring the time between wake-up and notify using System.currentTimeMillis(), the delay is usually smaller than 5 ms, and most often less than or equal to 1 ms. Sometimes, the delay is larger (10-20ms). Since milliseconds are macro-units when it comes to computers, I would think that a delay that is reliably smaller than 1ms should be possible, and it would benefit my application.
Do you have any idea what the cause of the larger delays can be, or how I can find out where to look? Could it be Garbage Collection? Or are several milliseconds of variation for thread wakeup considered normal?
I am using Java version 1.8.0 on a Linux Ubuntu virtual private server.
An example of the design of the program is attached. Running it does not correctly reproduce the delays observed in my production program; the 'actual' program uses a lot of memory and CPU and only transmits a bit of info once every few minutes. I tried, but failed, to simulate this simply.
Thank you.
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.Condition;
import java.util.List;
import java.util.ArrayList;
import java.util.Random;
public class Example {
public static void main(String[] args) {
startInfoThreads();
startWatcherThread();
}
private static Lock lock = new ReentrantLock();
private static Condition condition = lock.newCondition();
private static List<Long> infoList = new ArrayList<>();
private static void startWatcherThread () {
Thread t = new Thread () {
@Override
public void run () {
while (true) {
// Waiting for results...
try {
lock.lock();
while (infoList.size() == 0) {
try {
condition.await();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
long delta = System.currentTimeMillis() - infoList.remove(0);
if (delta > 0)
System.out.println("Time for waking up: " + delta);
} finally {
lock.unlock();
}
// Do something with info
}
}
};
t.start();
}
private static void startInfoThreads () {
for (int i = 0; i < 14; i++) {
Thread t = new Thread() {
@Override
public void run() {
Random r = new Random();
while (true) {
// Gather info, 'hits' about once every few minutes!
boolean infoRandomlyFound = r.nextInt(100) >= 99;
if (infoRandomlyFound) {
try {
lock.lock();
infoList.add(System.currentTimeMillis());
condition.signal();
} finally {
lock.unlock();
}
}
}
}
};
t.start();
}
}
}
System.currentTimeMillis() can be affected by system clock adjustments and often has a granularity of several milliseconds (historically around 10 ms on some platforms).
To measure elapsed time you should use System.nanoTime(), which is monotonic and has much finer resolution.
It probably will not speed up your process, but using a BlockingQueue would certainly make the code clearer.
Also note the Thread.sleep for when there is no info.
final BlockingQueue<Long> queue = new ArrayBlockingQueue<>(10);
private void startWatcherThread() {
Thread t = new Thread() {
@Override
public void run() {
while (true) {
// Waiting for results...
try {
Long polled = queue.poll(1, TimeUnit.SECONDS);
// Do something with info
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
};
t.start();
}
private void startInfoThreads() {
for (int i = 0; i < 14; i++) {
Thread t = new Thread() {
@Override
public void run() {
Random r = new Random();
while (true) {
// Gather info, 'hits' about once every few minutes!
boolean infoRandomlyFound = r.nextInt(100) >= 99;
if (infoRandomlyFound) {
try {
queue.put(System.currentTimeMillis());
} catch (InterruptedException e) {
e.printStackTrace();
}
} else {
try {
Thread.sleep(1);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
};
t.start();
}
}
private void test() {
startInfoThreads();
startWatcherThread();
}
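Since the advice above recommends System.nanoTime() while the snippet still passes System.currentTimeMillis() through the queue, here is a minimal, self-contained sketch of measuring the hand-off latency with nanoTime (illustrative names; nanoTime values are only meaningful when compared within the same JVM):
import java.util.concurrent.*;

// Minimal latency probe: one producer hands a nanoTime stamp to one consumer via a BlockingQueue.
public class HandoffLatencyProbe {
    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<Long> queue = new ArrayBlockingQueue<>(10);

        Thread consumer = new Thread(() -> {
            try {
                Long enqueuedAtNanos = queue.poll(5, TimeUnit.SECONDS);
                if (enqueuedAtNanos != null) {
                    long delayMicros = (System.nanoTime() - enqueuedAtNanos) / 1_000L;
                    System.out.println("Time for waking up: " + delayMicros + " us");
                }
            } catch (InterruptedException ignored) {
            }
        });
        consumer.start();

        Thread.sleep(100); // give the consumer time to block on the empty queue
        queue.put(System.nanoTime()); // producer side: stamp taken right before the hand-off
        consumer.join();
    }
}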

Trigger ScheduledExecutor with BlockingQueue Java

I'm currently working on a Java application with a scenario where multiple producers add tasks to a queue, and whenever the queue is not empty the tasks should be executed at a predefined rate (using multiple threads to maintain the execution rate). After executing the available tasks, the executor has to wait until tasks are available in the queue again.
I know a BlockingQueue can be used for the triggering part here, and a ScheduledExecutorService to execute tasks at a fixed rate. But I could not find a way to link the abilities of both for my need, so I would be very thankful for any suggestion on how to make this happen.
You need the task queue to be accessible by both the producer and consumer threads. I've written a basic program to demonstrate this, but I'll let you play around with the BlockingQueue API and the ScheduledExecutor as per your needs:
import java.util.concurrent.*;
public class ProducerConsumer {
private static final BlockingQueue<Integer> taskQueue = new LinkedBlockingQueue<>();
public static void main(String[] args) {
ExecutorService consumers = Executors.newFixedThreadPool(3);
consumers.submit(new Consumer());
consumers.submit(new Consumer());
consumers.submit(new Consumer());
ExecutorService producers = Executors.newFixedThreadPool(2);
producers.submit(new Producer(1));
producers.submit(new Producer(2));
}
private static class Producer implements Runnable {
private final int task;
Producer(int task) {
this.task = task;
}
@Override
public void run() {
System.out.println("Adding task: " + task);
taskQueue.add(task); // put is better, since it will block if queue is full
}
}
private static class Consumer implements Runnable {
@Override
public void run() {
try {
Integer task = taskQueue.take(); // block if there is no task available
System.out.println("Executing task: " + task);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
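For instance, one way to link the two is to let a ScheduledExecutorService tick at the desired rate and have each tick drain at most one task from the shared queue. This is only a sketch under the assumption of a one-task-per-second rate; the class and method names are illustrative:
import java.util.concurrent.*;

// Sketch: tasks are consumed at a fixed rate (one per tick), but only when the queue has work.
public class FixedRateQueueConsumer {
    private final BlockingQueue<Runnable> taskQueue = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

    public void start() {
        // tick once per second (assumed rate); poll() returns null immediately when the queue is empty
        scheduler.scheduleAtFixedRate(() -> {
            Runnable task = taskQueue.poll();
            if (task != null) {
                task.run();
            }
        }, 0, 1, TimeUnit.SECONDS);
    }

    public void submit(Runnable task) throws InterruptedException {
        taskQueue.put(task); // put would block only if a bounded queue were used
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
When the queue is empty a tick is a no-op, so the consumer effectively waits until work arrives again without busy-spinning.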
This is the way I could come up with as a solution. It looks a little bit rusty, but I have tested it and the code is working.
package test;
import java.util.concurrent.*;
public class FixedRateConsumer {
private BlockingQueue<String> queue = new ArrayBlockingQueue<>(20);
private ScheduledExecutorService executorService = new ScheduledThreadPoolExecutor(5);
private boolean continueRunning = true;
public void executeInBackGraound() throws InterruptedException, ExecutionException {
while (continueRunning) {
String s = queue.take();
Worker w = new Worker(s);
ScheduledFuture future = executorService.scheduleAtFixedRate(w, 0, 1, TimeUnit.SECONDS);
w.future = future;
try {
if (!future.isDone()) {
future.get();
}
} catch (CancellationException e) {
// Skipping
}
}
}
public void setContinueRunning(boolean state) {
continueRunning = state;
}
public void addConsumableObject(String s) throws InterruptedException {
queue.put(s);
}
private void consumeString(String s) {
System.out.println("Consumed -> " + s + ", ... # -> " + System.currentTimeMillis() + " ms");
}
private class Worker implements Runnable {
String consumableObject;
ScheduledFuture future;
public Worker(String initialConsumableObject) {
this.consumableObject = initialConsumableObject;
}
@Override
public void run() {
try {
if (consumableObject == null) {
consumableObject = queue.take();
}
consumeString(consumableObject);
consumableObject = null;
if (queue.isEmpty()) {
if (future == null) {
while (future == null) {
Thread.sleep(50);
}
}
future.cancel(false);
}
} catch (Exception e) {
System.out.println("Exception : " + e);
}
}
}
}

Google App Engine Modules + HttpServlet with static values;

I am developing an application that delivers notifications to android and iOS devices. I am using basic scaling and have implemented logic (modifying this example) so an appropriate number of workers are active at a given time without using a resident instance.
public class NotificationWorkerServlet extends HttpServlet {
/**
*
*/
private static final long serialVersionUID = 1L;
private static final Logger log = Logger
.getLogger(NotificationWorkerServlet.class.getName());
private static final int MAX_WORKER_COUNT = 5;
private static final int MILLISECONDS_TO_WAIT_WHEN_NO_TASKS_LEASED = 2500;
private static final int TEN_MINUTES = (10 * 60 * 1000);
// Area of concern
private static SyncCounter counter;
/**
* Used to keep number of running workers in sync
*/
private class SyncCounter {
private int c = 0;
public SyncCounter(){
log.info("Sync counter instantiated");
}
public synchronized void increment() {
c++;
log.info("Increment sync counter, workers:" + c);
}
public synchronized void decrement() {
c--;
log.info("Decrement sync counter, workers:" + c);
}
public synchronized int value() {
return c;
}
}
/**
* Call made from module when notification was added to task queue
*/
@Override
protected void doPost(HttpServletRequest req, HttpServletResponse resp)
throws ServletException, IOException {
super.doPost(req, resp);
// Instantiate counter with first call
if(counter == null){
counter = new SyncCounter();
}
log.info("Starting to build workers");
for (int workerNo = counter.value(); workerNo < MAX_WORKER_COUNT; workerNo++) {
log.info("Starting thread for worker: " + workerNo);
// Get the current queue to check its statistics
Queue notificationQueue = QueueFactory
.getQueue("notification-delivery");
if (notificationQueue.fetchStatistics().getNumTasks() > 30 * workerNo) {
counter.increment();
Thread thread = ThreadManager
.createBackgroundThread(new Runnable() {
@Override
public void run() {
try {
doPolling();
} catch (Exception e) {
e.printStackTrace();
}
}
});
thread.start();
} else {
break; // Current number of threads is sufficient.
}
}
resp.setStatus(HttpServletResponse.SC_OK);
}
/**
* poll the task queue and lease the tasks
*
* Wait for up to 10 minutes for tasks to be added to queue before killing
* tasks
*
*/
private void doPolling() {
log.info("Doing polling");
try {
int loopsWithoutProcessedTasks = 0;
Queue notificationQueue = QueueFactory
.getQueue("notification-delivery");
NotificationWorker worker = new NotificationWorker(
notificationQueue);
while (!LifecycleManager.getInstance().isShuttingDown()) {
boolean tasksProcessed = worker.processBatchOfTasks();
ApiProxy.flushLogs();
if (!tasksProcessed) {
log.info("waiting for tasks");
// Wait before trying to lease tasks again.
try {
loopsWithoutProcessedTasks++;
// If worker hasn't had any tasks for 10 minutes, kill it.
if (loopsWithoutProcessedTasks >= (TEN_MINUTES / MILLISECONDS_TO_WAIT_WHEN_NO_TASKS_LEASED)) {
break;
} else {
// Else, wait and try again (to avoid tearing down
// useful Notification Senders)
Thread.sleep(MILLISECONDS_TO_WAIT_WHEN_NO_TASKS_LEASED);
}
} catch (InterruptedException e) {
log.info("Notification worker thread interrupted");
break;
}
} else {
log.info("processed batch of tasks");
loopsWithoutProcessedTasks = 0;
}
}
} catch (Exception e) {
log.warning("Exception caught and handled in notification worker: "
+ e.getLocalizedMessage());
} finally {
counter.decrement();
}
log.info("Instance is shutting down");
}
}
In a controlled testing scenario, it works just fine. However, I know static, mutable values are bad news in servlets where multiple users could potentially be connecting at the same time.
Has anyone done something similar and had issues with pushing multiple notifications to the same device, lost tasks or had idle tasks burning a hole in the bank?
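One way to sidestep the lazily-initialized static SyncCounter would be an eagerly-created AtomicInteger, which removes both the null-check race in doPost and the synchronized methods; this is only a sketch with names of my own choosing:
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Logger;

// Sketch: an eagerly-initialized, lock-free counter of active worker threads.
// Being a class constant, it exists before the first request, so concurrent
// doPost() calls cannot race on a "counter == null" check.
public class WorkerCounter {
    private static final Logger log = Logger.getLogger(WorkerCounter.class.getName());
    private static final AtomicInteger ACTIVE_WORKERS = new AtomicInteger();

    public static int increment() {
        int c = ACTIVE_WORKERS.incrementAndGet();
        log.info("Increment worker counter, workers: " + c);
        return c;
    }

    public static int decrement() {
        int c = ACTIVE_WORKERS.decrementAndGet();
        log.info("Decrement worker counter, workers: " + c);
        return c;
    }

    public static int value() {
        return ACTIVE_WORKERS.get();
    }
}
Note that any static counter is still per JVM, so with basic scaling each App Engine instance only counts its own workers.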

Java thread-safe caching, and return old cache if getting new is in progress

I'm having to dabble with caching and multithreading (thread per request), and I am an absolute beginner in that area, so any help would be appreciated.
My requirements are:
Cache one single large object that has either an interval refresh or a refresh triggered by the user
Because retrieving the object data is very time-consuming, make it thread-safe
When retrieving the object data, return the "old" data until new data is available
Optimize it
From SO and some other user help I have this ATM:
** Edited with Sandeep's and Kayaman's advice **
public enum MyClass
{
INSTANCE;
// caching field
private CachedObject cached = null;
private AtomicLong lastVisistToDB = new AtomicLong();
private long refreshInterval = 1000 * 60 * 5;
private CachedObject createCachedObject()
{
return new CachedObject();
}
public CachedObject getCachedObject()
{
if( ( System.currentTimeMillis() - this.lastVisistToDB.get() ) > this.refreshInterval)
{
synchronized( this ) // note: synchronizing on this.cached would throw a NullPointerException while it is still null
{
if( ( System.currentTimeMillis() - this.lastVisistToDB.get() ) > this.refreshInterval)
{
this.refreshCachedObject();
}
}
}
return this.cached;
}
public void refreshCachedObject()
{
// This is to prevent threads waiting on synchronized from re-refreshing the object
this.lastVisistToDB.set(System.currentTimeMillis());
new Thread()
{
public void run()
{
cached = createCachedObject();
// Update the actual refresh time
lastVisistToDB.set(System.currentTimeMillis());
}
}.start();
}
}
In my opinion my code meets all of the requirements written above (but I'm not sure).
With the code soon going to third-party analysis, I would really appreciate any input on code performance and blind spots.
Thanks for your help.
EDIT: VanOekel's answer IS the solution, because my code (edited with Sandeep's and Kayaman's advice) doesn't account for the impact of a user-triggered refresh() in this multithreading environment.
Instead of DCL as proposed by Sandeep, I'd use the enum Singleton pattern, as it's the best way for lazy-init singletons these days (and looks nicer than DCL).
There's a lot of unnecessary variables and code being used, I'd simplify it a lot.
private static Object cachedObject;
private AtomicLong lastTime = new AtomicLong();
private long refreshPeriod = 1000;
public Object get() {
if(System.currentTimeMillis() - lastTime.get() > refreshPeriod) {
synchronized(this) { // don't lock on cachedObject itself, it may still be null
if(System.currentTimeMillis() - lastTime.get() > refreshPeriod) {
lastTime.set(System.currentTimeMillis()); // This is to prevent threads waiting on synchronized from re-refreshing the object
new Thread() {
public void run() {
cachedObject = refreshObject(); // Get from DB
lastTime.set(System.currentTimeMillis()); // Update the actual refresh time
}
}.start();
}
}
}
return cachedObject;
}
Speedwise that could still be improved a bit, but a lot of unnecessary complexity is reduced. Repeated calls to System.currentTimeMillis() could be removed, as well as setting lastTime twice. But, let's start off with this.
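For illustration, collapsing the repeated System.currentTimeMillis() calls and the duplicate lastTime update could look like this; a sketch only, with a class name and dedicated lock object of my own choosing instead of synchronizing on the possibly-null cachedObject:
import java.util.concurrent.atomic.AtomicLong;

public class SingleObjectCache {
    private volatile Object cachedObject;                 // volatile so readers see the swapped-in object
    private final Object lock = new Object();             // dedicated monitor, never null
    private final AtomicLong lastTime = new AtomicLong();
    private final long refreshPeriod = 1000;

    public Object get() {
        long now = System.currentTimeMillis();            // read the clock once per call
        if (now - lastTime.get() > refreshPeriod) {
            synchronized (lock) {
                if (now - lastTime.get() > refreshPeriod) {
                    lastTime.set(now);                    // set once; also stops waiting threads from re-triggering
                    new Thread(() -> cachedObject = refreshObject()).start();
                }
            }
        }
        return cachedObject;                              // old data is returned until the refresh completes
    }

    private Object refreshObject() {
        return new Object();                              // placeholder for the expensive DB fetch
    }
}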
You should put double-checked locking in getInstance().
Also, you might want to keep just one volatile cache object; in getAndRefreshCashedObject(), and wherever it's refreshed, you could calculate the new data and assign it in a synchronized way to the cache object you have.
This way the code might look smaller, and you don't need to maintain the loadInProgress and oldCached variables.
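For reference, a classic double-checked locking getInstance() with a volatile field looks like this (illustrative names, not taken from the question's code):
public class CachedObjectHolder {
    // volatile is required so other threads never see a half-published reference
    private static volatile CachedObject instance;

    public static CachedObject getInstance() {
        CachedObject local = instance;           // first check, no locking
        if (local == null) {
            synchronized (CachedObjectHolder.class) {
                local = instance;                // second check, under the lock
                if (local == null) {
                    instance = local = new CachedObject();
                }
            }
        }
        return local;
    }
}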
I arrive at a somewhat different solution when taking into account the "random" refresh triggered by a user. Also, I think the first fetch should wait for the cache to be filled (i.e. wait for first cached object to be created). And, finally, there should be some (unit) tests to verify the cache works as intended and is thread-safe.
First the cache implementation:
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;
// http://stackoverflow.com/q/31338509/3080094
public enum DbCachedObject {
INSTANCE;
private final CountDownLatch initLock = new CountDownLatch(1);
private final Object refreshLock = new Object();
private final AtomicReference<CachedObject> cachedInstance = new AtomicReference<CachedObject>();
private final AtomicLong lastUpdate = new AtomicLong();
private volatile boolean refreshing;
private long cachePeriodMs = 1000L; // make this an AtomicLong if it can be updated
public CachedObject get() {
CachedObject o = cachedInstance.get();
if (o == null || isCacheOutdated()) {
updateCache();
if (o == null) {
awaitInit();
o = cachedInstance.get();
}
}
return o;
}
public void refresh() {
updateCache();
}
private boolean isCacheOutdated() {
return (System.currentTimeMillis() - lastUpdate.get() > cachePeriodMs);
}
private void updateCache() {
synchronized (refreshLock) {
// prevent users from refreshing while an update is already in progress
if (refreshing) {
return;
}
refreshing = true;
// prevent other threads from calling this method again
lastUpdate.set(System.currentTimeMillis());
}
new Thread() {
@Override
public void run() {
try {
cachedInstance.set(getFromDb());
// set the 'real' last update time
lastUpdate.set(System.currentTimeMillis());
initLock.countDown();
} finally {
// make sure refreshing is set to false, even in case of error
refreshing = false;
}
}
}.start();
}
private boolean awaitInit() {
boolean initialized = false;
try {
// assume the cache-period is longer than the time it takes to create the cached object
initialized = initLock.await(cachePeriodMs, TimeUnit.MILLISECONDS);
} catch (Exception e) {
e.printStackTrace();
}
return initialized;
}
private CachedObject getFromDb() {
// dummy call, no db is involved
return new CachedObject();
}
public long getCachePeriodMs() {
return cachePeriodMs;
}
}
Second the cached object with a main-method that tests the cache implementation:
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;
public class CachedObject {
private static final AtomicInteger createCount = new AtomicInteger();
static final long createTimeMs = 100L;
private final int instanceNumber = createCount.incrementAndGet();
public CachedObject() {
println("Creating cached object " + instanceNumber);
try {
Thread.sleep(createTimeMs);
} catch (Exception ignored) {}
println("Cached object " + instanceNumber + " created");
}
public int getInstanceNumber() {
return instanceNumber;
}
@Override
public String toString() {
return getClass().getSimpleName() + "-" + getInstanceNumber();
}
private static final long startTime = System.currentTimeMillis();
/**
* Test the use of DbCachedObject.
*/
public static void main(String[] args) {
ThreadPoolExecutor tp = (ThreadPoolExecutor) Executors.newCachedThreadPool();
final int tcount = 2; // number of tasks running in parallel
final long threadStartGracePeriodMs = 50L; // starting runnables takes time
try {
// verify first calls wait for initialization of first cached object
fetchCacheTasks(tp, tcount, createTimeMs + threadStartGracePeriodMs);
// verify immediate return of cached object
CachedObject o = DbCachedObject.INSTANCE.get();
println("Cached: " + o);
// wait for refresh-period
Thread.sleep(DbCachedObject.INSTANCE.getCachePeriodMs() + 1);
// trigger update
o = DbCachedObject.INSTANCE.get();
println("Triggered update for " + o);
// wait for update to complete
Thread.sleep(createTimeMs + 1);
// verify updated cached object is returned
fetchCacheTasks(tp, tcount, threadStartGracePeriodMs);
// trigger update
DbCachedObject.INSTANCE.refresh();
// wait for update to complete
Thread.sleep(createTimeMs + 1);
println("Refreshed: " + DbCachedObject.INSTANCE.get());
} catch (Exception e) {
e.printStackTrace();
} finally {
tp.shutdownNow();
}
}
private static void fetchCacheTasks(ThreadPoolExecutor tp, int tasks, long doneWaitTimeMs) throws Exception {
final CountDownLatch fetchStart = new CountDownLatch(tasks);
final CountDownLatch fetchDone = new CountDownLatch(tasks);
// println("Starting " + tasks + " tasks");
for (int i = 0; i < tasks; i++) {
final int r = i;
tp.execute(new Runnable() {
@Override public void run() {
fetchStart.countDown();
try { fetchStart.await();} catch (Exception ignored) {}
CachedObject o = DbCachedObject.INSTANCE.get();
println("Task " + r + " got " + o);
fetchDone.countDown();
}
});
}
println("Awaiting " + tasks + " tasks");
if (!fetchDone.await(doneWaitTimeMs, TimeUnit.MILLISECONDS)) {
throw new RuntimeException("Fetch cached object tasks incomplete.");
}
}
private static void println(String msg) {
System.out.println((System.currentTimeMillis() - startTime) + " " + msg);
}
}
The tests in the main-method need human eyes to verify the results, but they should provide sufficient input for unit tests. Once the unit tests are more refined, the cache implementation will probably need some finishing touches as well.
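For example, one of those eyeball checks could be turned into an automated test roughly like this (a sketch assuming JUnit 4 is on the classpath):
import static org.junit.Assert.*;
import org.junit.Test;

public class DbCachedObjectTest {

    @Test
    public void firstGetBlocksUntilCacheIsFilled() {
        // the first get() triggers creation of the cached object and waits for it
        CachedObject first = DbCachedObject.INSTANCE.get();
        assertNotNull("get() should not return before the cache is filled", first);

        // a second call within the cache period returns the same instance
        assertSame(first, DbCachedObject.INSTANCE.get());
    }
}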
