How to create a timing watchdog for method execution count protection - java

I want to code something that would notify a listener when the number of calls to a method exceeds a maximum value within a specified time lapse.
Let's say I want to know if a method is called a bit too fast within a sliding 30-second window.
Inside this method I would notify the watchdog, telling it to increment its counter.
And I want to be notified when there are more than 100 calls within the configured time lapse.
So the watchdog would be instantiated like this: new Watchdog(100, 30, TimeUnit.SECONDS, theListener);
I don't really know how to start coding this kind of thing. Any hints would be appreciated.

If I have understood correctly, you need one or several WatchDogs which track whether a maximum number of calls has been reached within a time lapse?
I think this fits the Observer pattern well, in which a Subject (e.g. a program) sends notifications to Observers (e.g. a WatchDog observing how the program behaves).
Here is the program, or subject, being observed by WatchDogs:
public class Subject {
    private List<WatchDog> watchDogs = new ArrayList<>();

    public void add(WatchDog watchDog) {
        watchDogs.add(watchDog);
    }

    public void execute() {
        for (WatchDog watchDog : watchDogs) {
            watchDog.update();
        }
    }
}
Here would be the WatchDog definition:
// Verifies that maxCalls is not reached between lastTimeUpdateWasCalled and
// lastTimeUpdateWasCalled + periodInSeconds
public class WatchDog {
    private Date lastTimeUpdateWasCalled = null;
    private int currentNumberOfCalls = 0;
    private int maxCalls;
    private int periodInSeconds;

    public WatchDog(int maxCalls, int periodInSeconds) {
        this.maxCalls = maxCalls;
        this.periodInSeconds = periodInSeconds;
    }

    public void update() {
        Date now = new Date();
        if (lastTimeUpdateWasCalled == null) {
            this.lastTimeUpdateWasCalled = now;
            this.currentNumberOfCalls = 1;
            return;
        }
        long endOfPeriodMillis = lastTimeUpdateWasCalled.getTime() + this.periodInSeconds * 1000L;
        Date endOfPeriod = new Date(endOfPeriodMillis);
        if (now.before(endOfPeriod)) {
            this.currentNumberOfCalls++;
            if (this.currentNumberOfCalls >= this.maxCalls) {
                System.out.println("Watchdog has detected that " + this.currentNumberOfCalls
                        + " calls have been made within " + this.periodInSeconds + " seconds");
            }
        } else {
            // reinitialization: the period has elapsed, start a new one
            this.currentNumberOfCalls = 1;
            this.lastTimeUpdateWasCalled = now;
        }
    }
}
Here is how you could put it all together:
public class Main {
    public static void main(String[] args) throws Exception {
        Subject s1 = new Subject();
        WatchDog w = new WatchDog(2, 2);
        s1.add(w);
        s1.execute();
        //Thread.sleep(2000);
        s1.execute();
    }
}
More information about the observer pattern here: https://sourcemaking.com/design_patterns/observer
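Note that the WatchDog above counts calls in a fixed window anchored at the first call, not the sliding 30-second window asked about: a burst straddling a period boundary goes undetected. For a true sliding window you can keep the timestamps of recent calls in a queue and discard the ones that have expired. Here is a minimal sketch matching the constructor from the question; the WatchdogListener callback interface is hypothetical, invented for this example:
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.TimeUnit;

public class SlidingWindowWatchdog {

    public interface WatchdogListener {
        void thresholdExceeded(int calls, long windowMillis);
    }

    private final int maxCalls;
    private final long windowMillis;
    private final WatchdogListener listener;
    // timestamps (in millis) of the calls still inside the window
    private final Deque<Long> callTimes = new ArrayDeque<>();

    public SlidingWindowWatchdog(int maxCalls, long period, TimeUnit unit, WatchdogListener listener) {
        this.maxCalls = maxCalls;
        this.windowMillis = unit.toMillis(period);
        this.listener = listener;
    }

    // called from the method being watched
    public synchronized void increment() {
        long now = System.currentTimeMillis();
        callTimes.addLast(now);
        // drop timestamps that have slid out of the window
        while (!callTimes.isEmpty() && now - callTimes.peekFirst() > windowMillis) {
            callTimes.removeFirst();
        }
        if (callTimes.size() > maxCalls) {
            listener.thresholdExceeded(callTimes.size(), windowMillis);
        }
    }
}
It would be instantiated as new SlidingWindowWatchdog(100, 30, TimeUnit.SECONDS, theListener), mirroring the constructor proposed in the question.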

Related

Is there a way to get the Apache Commons FileAlterationMonitor to only alert once for a batch of incoming files?

I am monitoring several (about 15) paths for incoming files using the Apache Commons FileAlterationMonitor. These incoming files can come in batches of anywhere between 1 and 500 files at a time. I have everything set up and the application monitors the folders as expected; I have it set to poll the folders every minute. My issue is that, as expected, the listener I have set up alerts for each incoming file, when all I really need, and want, is to know when a new batch of files comes in. So I would like to receive a single alert as opposed to up to 500 at a time.
Does anyone have any ideas for how to control the number of alerts, or only pick up the first or last notification, or something to that effect? I would like to stick with the FileAlterationMonitor if at all possible because it will be running for long periods, and so far in testing it doesn't seem to put a heavy load on the system or slow the rest of the application down. But I am definitely open to other ideas if what I'm looking for isn't possible with the FileAlterationMonitor.
public class FileMonitor {
    private final String newDirectory;
    private FileAlterationMonitor monitor;
    private final Alerts gui;
    private final String provider;

    public FileMonitor(String d, Alerts g, String pro) throws Exception {
        newDirectory = d;
        gui = g;
        provider = pro;
    }

    public void startMonitor() throws Exception {
        // Directory to monitor
        final File directory = new File(newDirectory);
        // create new observer
        FileAlterationObserver fao = new FileAlterationObserver(directory);
        // add listener to observer
        fao.addListener(new FileAlterationListenerImpl(gui, provider));
        // wait 1 minute between folder polls.
        monitor = new FileAlterationMonitor(60000);
        monitor.addObserver(fao);
        monitor.start();
    }
}
public class FileAlterationListenerImpl implements FileAlterationListener {
    private final Alerts gui;
    private final String provider;
    private final LogFiles monitorLogs;

    public FileAlterationListenerImpl(Alerts g, String pro) {
        gui = g;
        provider = pro;
        monitorLogs = new LogFiles();
    }

    @Override
    public void onStart(final FileAlterationObserver observer) {
        System.out.println("The FileListener has started on: " + observer.getDirectory().getAbsolutePath());
    }

    @Override
    public void onDirectoryCreate(File file) {
    }

    @Override
    public void onDirectoryChange(File file) {
    }

    @Override
    public void onDirectoryDelete(File file) {
    }

    @Override
    public void onFileCreate(File file) {
        try {
            switch (provider) {
                case "Spectrum":
                    gui.alertsAreaAppend("New/Updated schedules available for Spectrum zones!\r\n");
                    monitorLogs.appendNewLogging("New/Updated schedules available for Spectrum zones!\r\n");
                    break;
                case "DirecTV ZTA":
                    gui.alertsAreaAppend("New/Updated schedules available for DirecTV ZTA zones!\r\n");
                    monitorLogs.appendNewLogging("New/Updated schedules available for DirecTV ZTA zones!\r\n");
                    break;
                case "DirecTV RSN":
                    gui.alertsAreaAppend("New/Updated schedules available for DirecTV RSN zones!\r\n");
                    monitorLogs.appendNewLogging("New/Updated schedules available for DirecTV RSN zones!\r\n");
                    break;
                case "Suddenlink":
                    gui.alertsAreaAppend("New/Updated schedules available for Suddenlink zones!\r\n");
                    monitorLogs.appendNewLogging("New/Updated schedules available for Suddenlink zones!\r\n");
                    break;
            }
        } catch (IOException e) {
            // note: exception is silently ignored here
        }
    }

    @Override
    public void onFileChange(File file) {
    }
}
Above are the FileMonitor class and the overridden FileAlterationListener I have so far.
Any suggestions would be greatly appreciated.
Here's a quick and crude implementation:
public class FileAlterationListenerAlterThrottler {
    private static final int DEFAULT_THRESHOLD_MS = 5000;
    private final int thresholdMs;
    private final Map<String, Long> providerLastFileProcessedAt = new HashMap<>();

    public FileAlterationListenerAlterThrottler() {
        this(DEFAULT_THRESHOLD_MS);
    }

    public FileAlterationListenerAlterThrottler(int thresholdMs) {
        this.thresholdMs = thresholdMs;
    }

    public synchronized boolean shouldAlertFor(String provider) {
        long now = System.currentTimeMillis();
        long last = providerLastFileProcessedAt.computeIfAbsent(provider, x -> 0L);
        if (now - last < thresholdMs) {
            return false;
        }
        providerLastFileProcessedAt.put(provider, now);
        return true;
    }
}
And a quicker and cruder driver:
public class Test {
    public static void main(String[] args) throws Exception {
        int myThreshold = 1000;
        FileAlterationListenerAlterThrottler throttler = new FileAlterationListenerAlterThrottler(myThreshold);
        for (int i = 0; i < 3; i++) {
            doIt(throttler);
        }
        Thread.sleep(1500);
        doIt(throttler);
    }

    private static void doIt(FileAlterationListenerAlterThrottler throttler) {
        boolean shouldAlert = throttler.shouldAlertFor("Some Provider");
        System.out.println("Time now: " + System.currentTimeMillis());
        System.out.println("Should alert? " + shouldAlert);
        System.out.println();
    }
}
Yields:
Time now: 1553739126557
Should alert? true
Time now: 1553739126557
Should alert? false
Time now: 1553739126557
Should alert? false
Time now: 1553739128058
Should alert? true
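Note that this throttler alerts on the first file of a batch and suppresses the rest for the threshold period, so a file arriving just after the threshold still triggers a second alert. If you would rather alert once after a batch has gone quiet (the "last notification" option mentioned in the question), a trailing-edge debounce is one alternative. Here is a minimal sketch using a ScheduledExecutorService; the class and method names are illustrative, not part of Commons IO:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class BatchAlertDebouncer {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Map<String, ScheduledFuture<?>> pending = new ConcurrentHashMap<>();
    private final long quietPeriodMs;

    public BatchAlertDebouncer(long quietPeriodMs) {
        this.quietPeriodMs = quietPeriodMs;
    }

    // Call once per onFileCreate; alertAction runs only after no file
    // has arrived for this provider for quietPeriodMs.
    public void fileArrived(String provider, Runnable alertAction) {
        pending.compute(provider, (key, previous) -> {
            if (previous != null) {
                previous.cancel(false); // another file arrived: postpone the alert
            }
            return scheduler.schedule(alertAction, quietPeriodMs, TimeUnit.MILLISECONDS);
        });
    }
}
The trade-off is that the alert arrives only after the quiet period has elapsed, whereas the throttler above alerts immediately on the first file of a batch.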

Is there a notion of virtual time in Apache Flink tests like there is in Reactor and RxJava

In RxJava and Reactor there is a notion of virtual time for testing operators that depend on time. I can't figure out how to do this in Flink. For example, I have put together the following program where I would like to play around with late-arriving events to understand how they are handled. However, I am not able to work out what such a test would look like. Is there a way to combine Flink and Reactor to make the tests better?
public class PlayWithFlink {
    public static void main(String[] args) throws Exception {
        final OutputTag<MyEvent> lateOutputTag = new OutputTag<MyEvent>("late-data"){};
        // TODO understand how BoundedOutOfOrderness is related to allowedLateness
        BoundedOutOfOrdernessTimestampExtractor<MyEvent> eventTimeFunction =
                new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(10)) {
            @Override
            public long extractTimestamp(MyEvent element) {
                return element.getEventTime();
            }
        };
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
        DataStream<MyEvent> events = env.fromCollection(MyEvent.examples())
                .assignTimestampsAndWatermarks(eventTimeFunction);
        AggregateFunction<MyEvent, MyAggregate, MyAggregate> aggregateFn =
                new AggregateFunction<MyEvent, MyAggregate, MyAggregate>() {
            @Override
            public MyAggregate createAccumulator() {
                return new MyAggregate();
            }

            @Override
            public MyAggregate add(MyEvent myEvent, MyAggregate myAggregate) {
                if (myEvent.getTracingId().equals("trace1")) {
                    myAggregate.getTrace1().add(myEvent);
                    return myAggregate;
                }
                myAggregate.getTrace2().add(myEvent);
                return myAggregate;
            }

            @Override
            public MyAggregate getResult(MyAggregate myAggregate) {
                return myAggregate;
            }

            @Override
            public MyAggregate merge(MyAggregate myAggregate, MyAggregate acc1) {
                acc1.getTrace1().addAll(myAggregate.getTrace1());
                acc1.getTrace2().addAll(myAggregate.getTrace2());
                return acc1;
            }
        };
        KeySelector<MyEvent, String> keyFn = new KeySelector<MyEvent, String>() {
            @Override
            public String getKey(MyEvent myEvent) throws Exception {
                return myEvent.getTracingId();
            }
        };
        SingleOutputStreamOperator<MyAggregate> result = events
                .keyBy(keyFn)
                .window(EventTimeSessionWindows.withGap(Time.seconds(10)))
                .allowedLateness(Time.seconds(20))
                .sideOutputLateData(lateOutputTag)
                .aggregate(aggregateFn);
        DataStream<MyEvent> lateStream = result.getSideOutput(lateOutputTag);
        result.print("SessionData");
        lateStream.print("LateData");
        env.execute();
    }
}
class MyEvent {
    private final String tracingId;
    private final Integer count;
    private final long eventTime;

    public MyEvent(String tracingId, Integer count, long eventTime) {
        this.tracingId = tracingId;
        this.count = count;
        this.eventTime = eventTime;
    }

    public String getTracingId() {
        return tracingId;
    }

    public Integer getCount() {
        return count;
    }

    public long getEventTime() {
        return eventTime;
    }

    public static List<MyEvent> examples() {
        long now = System.currentTimeMillis();
        MyEvent e1 = new MyEvent("trace1", 1, now);
        MyEvent e2 = new MyEvent("trace2", 1, now);
        MyEvent e3 = new MyEvent("trace2", 1, now - 1000);
        MyEvent e4 = new MyEvent("trace1", 1, now - 200);
        MyEvent e5 = new MyEvent("trace1", 1, now - 50000);
        return Arrays.asList(e1, e2, e3, e4, e5);
    }

    @Override
    public String toString() {
        return "MyEvent{" +
                "tracingId='" + tracingId + '\'' +
                ", count=" + count +
                ", eventTime=" + eventTime +
                '}';
    }
}

class MyAggregate {
    private final List<MyEvent> trace1 = new ArrayList<>();
    private final List<MyEvent> trace2 = new ArrayList<>();

    public List<MyEvent> getTrace1() {
        return trace1;
    }

    public List<MyEvent> getTrace2() {
        return trace2;
    }

    @Override
    public String toString() {
        return "MyAggregate{" +
                "trace1=" + trace1 +
                ", trace2=" + trace2 +
                '}';
    }
}
The output of running this is:
SessionData:1> MyAggregate{trace1=[], trace2=[MyEvent{tracingId='trace2', count=1, eventTime=1551034666081}, MyEvent{tracingId='trace2', count=1, eventTime=1551034665081}]}
SessionData:3> MyAggregate{trace1=[MyEvent{tracingId='trace1', count=1, eventTime=1551034166081}], trace2=[]}
SessionData:3> MyAggregate{trace1=[MyEvent{tracingId='trace1', count=1, eventTime=1551034666081}, MyEvent{tracingId='trace1', count=1, eventTime=1551034665881}], trace2=[]}
However, I would expect the lateStream to trigger for the e5 event, whose timestamp is 50 seconds earlier than the first event's.
If you modify your watermark assigner to be like this:
AssignerWithPunctuatedWatermarks<MyEvent> eventTimeFunction = new AssignerWithPunctuatedWatermarks<MyEvent>() {
    long maxTs = 0;

    @Override
    public long extractTimestamp(MyEvent myEvent, long l) {
        long ts = myEvent.getEventTime();
        if (ts > maxTs) {
            maxTs = ts;
        }
        return ts;
    }

    @Override
    public Watermark checkAndGetNextWatermark(MyEvent event, long extractedTimestamp) {
        return new Watermark(maxTs - 10000);
    }
};
then you will get the results you expect. I'm not recommending this -- just using it to illustrate what's going on.
What's happening here is that a BoundedOutOfOrdernessTimestampExtractor is a periodic watermark generator that will only insert a watermark into the stream every 200 msec (by default). Because your job completes long before then, the only watermark your job is experiencing is the one the Flink injects at the end of every finite stream (with value MAX_WATERMARK). Lateness is relative to watermarks, and the event that you expected to be late is managing to arrive before that watermark.
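To make this concrete, here is a rough timeline for the example events (timestamps relative to now), assuming the punctuated assigner above, whose watermark trails maxTs by 10 seconds:
e1 (ts = now)         -> maxTs = now, watermark advances to now - 10s
e2 (ts = now)         -> watermark stays at now - 10s
e3 (ts = now - 1s)    -> above the watermark, so on time
e4 (ts = now - 0.2s)  -> above the watermark, so on time
e5 (ts = now - 50s)   -> below the watermark, so routed to the side output
With only the default periodic watermarks, none of these intermediate watermarks are ever emitted before the job finishes, so the first watermark the stream sees is the final MAX_WATERMARK, and e5 arrives ahead of it.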
By switching to punctuated watermarks you can force watermarking to occur more often, or more precisely at specific points in the stream. This is generally unnecessary (and too frequent watermarking causes overhead), but is helpful when you want to have strong control over the sequencing of watermarks.
As for how to write tests, you might take a look at the test harnesses used in Flink's own tests, or at flink-spector.
Update:
The time interval associated with the BoundedOutOfOrdernessTimestampExtractor is a specification of how out-of-order the stream is expected to be. Events that arrive within this bound are not considered late, and event time timers won't fire until this delay has elapsed, thereby giving time for out-of-order events to arrive. allowedLateness only applies to the window API, and describes for how long past the normal window firing time the framework keeps window state so that events can still be added to a window and cause late firings. After this additional interval, window state is cleared and subsequent events are sent to the side output (if configured).
So when you use BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(10)) you are not saying "wait 10 seconds after every event in case earlier events might still arrive". But you are saying that your events should be at most 10 seconds out of order. So if you are processing a live, real-time event stream, this means you will wait for at most 10 seconds in case earlier events arrive. (And if you are processing historic data, then you may be able to process 10 seconds of data in 1 second, or not -- knowing you will wait for n seconds of event time to pass says nothing about how long it will actually take.)
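As a concrete reading of the values used in the question's code: with BoundedOutOfOrdernessTimestampExtractor(Time.seconds(10)) and allowedLateness(Time.seconds(20)), a window covering event times up to t first fires once the watermark (which trails the highest seen timestamp by 10 seconds) passes t; events for that window arriving while the watermark is still below t + 20s trigger late firings; events arriving after that point go to the side output.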
For more on this topic, see Event Time and Watermarks.

Java Future.isDone returning true, even though it shouldn't, which halts program progress

I have a SwingWorker which creates a LinkedBlockingQueue, passes it to another method (PortalDriver, below), and then reads from it in the doInBackground() method. The LinkedBlockingQueue holds Futures for a custom object (and it is definitely filled correctly). As a check, the objects I am creating (via an ExecutorService) have a println(this) at the end of the constructor, so I know when they're being created.
The issue is, judging by the println() calls (both in doInBackground and in the constructor), after a number of successful iterations of the while loop in doInBackground, the line
System.out.println("future.isDone before get(): " + futureListing.isDone());
prints true, even though the constructor hasn't yet printed that the object has been created (this is clear just by counting). The while loop then blocks permanently, even though the rest of the objects are created.
private class ReadSchedWorker extends SwingWorker<Void, Listing> {
    private ChromeOptions options = new ChromeOptions();
    private WebDriver driver;
    private LinkedBlockingQueue<Future<Listing>> listingQueue;
    private LoginFactory loginFactory;
    private int year;
    private int month;
    private int day;

    public ReadSchedWorker(Login mediasiteLogin, Login tmsLogin,
                           int year, int month, int day) {
        System.setProperty("webdriver.chrome.driver", "\\\\private\\Home\\Desktop\\chromedriver.exe");
        listingQueue = new LinkedBlockingQueue<>();
        loginFactory = new LoginFactory(mediasiteLogin, tmsLogin);
        this.year = year;
        this.month = month;
        this.day = day;
    }

    @Override
    protected Void doInBackground() throws Exception {
        this.options.addArguments("--disable-extensions");
        driver = new ChromeDriver(options);
        String portalUsername = loginFactory.getPortalUsername();
        String portalPassword = loginFactory.getPortalPassword();
        PortalDriver portalDriver = new PortalDriver(driver, listingQueue);
        portalDriver.getScheduleElements(portalUsername, portalPassword, year, month, day);
        while (!listingQueue.isEmpty()) {
            System.out.println("beginning");
            Future<Listing> futureListing = listingQueue.take();
            System.out.println("future.isDone before get(): " + futureListing.isDone());
            Listing listing = futureListing.get();
            System.out.println("future.isDone after get(): " + futureListing.isDone());
            System.out.println("before publish");
            publish(listing);
            System.out.println("after publish");
        }
        return null;
    }

    @Override
    protected void process(List<Listing> listings) {
        for (Listing listingItem : listings) {
            listingListModel.addElement(listingItem);
        }
    }
}
Just for completion, here is getScheduleElements, which fills up the LinkedBlockingQueue:
public void getScheduleElements(String username, String password, int year, int month, int day) {
    driver.get("https://rxsecure.umaryland.edu/apps/schedules/view/?type=search&searchtype=resource&id=100&start=" +
            year + "-" + month + "-" + day + "&scope=week");
    PortalLoginPage loginPage = PageFactory.initElements(driver, PortalLoginPage.class);
    PortalScheduleEventsWeekPage scheduleEventsWeekPage = loginPage.login(username, password);
    webElementsQueue = scheduleEventsWeekPage.initEventsQueue();
    // doesn't need to be a queue...
    // this is sequential....
    // could do a parallel stream
    while (webElementsQueue.peek() != null) {
        Callable<Listing> callable = new PortalListingCallable(webElementsQueue.poll());
        Future<Listing> future = executor.submit(callable);
        listingQueue.offer(future);
    }
    executor.shutdown();
}
Edit:
An example of what I mean (the future not being done, but reporting that it is, via the println()s in doInBackground()):
beginning
future.isDone before get(): true // the future says it's done, but the object is only created below
in listing contructor: PHAR580 Pharmacy Law;Specialty Pharmacy 2;Recorded in Mediasite PH S201 2015-04-27T10:00:00.000-04:00 2015-04-27T11:50:00.000-04:00
Most likely you are getting an Exception or Error which you are not seeing. You have to call Future.get() to see it.
In the meantime I suggest you change
Future<Listing> future = executor.submit(callable);
to
Future<Listing> future = executor.submit(new Callable<Listing>() {
    @Override
    public Listing call() throws Exception {
        try {
            return callable.call();
        } catch (Exception e) {
            e.printStackTrace();
            throw e;
        }
    }
});
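Alternatively, the failure can be surfaced where the future is consumed: Future.get() rethrows anything thrown inside the task, wrapped in an ExecutionException. A minimal sketch of that standard java.util.concurrent behaviour, placed where the loop calls get():
try {
    Listing listing = futureListing.get();
} catch (ExecutionException e) {
    // the task itself failed; the original exception is the cause
    e.getCause().printStackTrace();
}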

How to implement fair usage policy of threads in API backend?

I am building a REST API in Java which I will be exposing to the outside world. People invoking the API will have to be registered and will send their user id in the request.
There would be a maximum of, say, 10 concurrent threads available for executing API requests. I am maintaining a queue which holds all the request ids to be serviced (the primary key of the DB entry).
I need to implement a fair usage policy as follows:
If there are more than 10 jobs in the queue (i.e. more than the maximum number of threads), a user is allowed to execute only one request at a time; any other requests submitted by that user remain in the queue and are taken up only once their previous request has completed execution. If there are free threads left even after allotting one thread to each user with queued requests, the remaining threads in the thread pool can be distributed among the remaining requests (even if the user who submitted a request is already holding a thread at that moment).
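To illustrate the policy: with 10 threads and 12 queued requests from user A (8 requests), user B (2) and user C (2), each user is first given one thread, and the 7 remaining threads are then shared among the 9 requests still queued rather than being left idle.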
The current implementation is as follows -
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.Semaphore;

public class APIJobExecutor implements Runnable {
    private static PriorityBlockingQueue<Integer> jobQueue = new PriorityBlockingQueue<Integer>();
    private static ExecutorService jobExecutor = Executors.newCachedThreadPool();
    private static final int MAX_THREADS = 10;
    private static Semaphore sem = new Semaphore(MAX_THREADS, true);

    private APIJobExecutor() {
    }

    public static void addJob(int jobId) {
        if (!jobQueue.contains(jobId)) {
            jobQueue.add(Integer.valueOf(jobId));
        }
    }

    public void run() {
        while (true) {
            try {
                sem.acquire();
            } catch (InterruptedException e1) {
                e1.printStackTrace();
                // unable to acquire lock. retry.
                continue;
            }
            try {
                Integer jobItem = jobQueue.take();
                jobExecutor.submit(new APIJobService(jobItem));
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                sem.release();
            }
        }
    }
}
Edit:
Is there any out-of-the-box Java data structure that gives me this functionality? If not, how do I go about implementing it?
This is a fairly common "quality of service" pattern and can be solved using the bucket idea within a job queue. I do not know of a standard Java implementation and/or data structure for this pattern (maybe the PriorityQueue?), but there should be at least a couple of implementations available (let us know if you find a good one).
I did once create my own implementation and I've tried to de-couple it from the project so that you may modify and use it (add unit-tests!). A couple of notes:
a default-queue is used in case QoS is not needed (e.g. if less than 10 jobs are executing).
the basic idea is to store tasks in lists per QoS-key (e.g. the username), and maintain a separate "who is next" list.
it is intended to be used within a job queue (e.g. part of the APIJobExecutor, not a replacement). Part of the job queue's responsibility is to always call remove(taskId) after a task is executed.
there should be no memory leaks: if there are no tasks/jobs in the queue, all internal maps and lists should be empty.
The code:
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;
import java.util.*;
import org.slf4j.*;

/** A FIFO task queue. */
public class QosTaskQueue<TASKTYPE, TASKIDTYPE> {

    private static final Logger log = LoggerFactory.getLogger(QosTaskQueue.class);

    public static final String EMPTY_STRING = "";

    /** Work tasks queued which have no (relevant) QoS key. */
    private final ConcurrentLinkedQueue<TASKIDTYPE> defaultQ = new ConcurrentLinkedQueue<TASKIDTYPE>();
    private final AtomicInteger taskQSize = new AtomicInteger();
    private final Map<TASKIDTYPE, TASKTYPE> queuedTasks = new ConcurrentHashMap<TASKIDTYPE, TASKTYPE>();

    /** Amount of tasks in queue before "quality of service" distribution kicks in. */
    private int qosThreshold = 10;

    /** Indicates if "quality of service" distribution is in effect. */
    private volatile boolean usingQos;

    /**
     * Lock for all modifications to QoS queues.
     * <br>Must be "fair" to ensure adding does not block polling threads forever and vice versa.
     */
    private final ReentrantLock qosKeyLock = new ReentrantLock(true);

    /*
     * Since all QoS modifications can be done by multiple threads simultaneously,
     * there is never a good time to add or remove a QoS key with associated queue.
     * There is always a chance that a key is added while being removed and vice versa.
     * The simplest solution is to make everything synchronized, which is what qosKeyLock is used for.
     */
    private final Map<String, Queue<TASKIDTYPE>> qosQueues = new HashMap<String, Queue<TASKIDTYPE>>();
    private final Queue<String> qosTurn = new LinkedList<String>();

    public boolean add(TASKTYPE wt, TASKIDTYPE taskId, String qosKey) {
        if (queuedTasks.containsKey(taskId)) {
            throw new IllegalStateException("Task with ID [" + taskId + "] already enqueued.");
        }
        queuedTasks.put(taskId, wt);
        return addToQ(taskId, qosKey);
    }

    public TASKTYPE poll() {
        TASKIDTYPE taskId = pollQos();
        return (taskId == null ? null : queuedTasks.get(taskId));
    }

    /**
     * This method must be called after a task is taken from the queue
     * using {@link #poll()} and executed.
     */
    public TASKTYPE remove(TASKIDTYPE taskId) {
        TASKTYPE wt = queuedTasks.remove(taskId);
        if (wt != null) {
            taskQSize.decrementAndGet();
        }
        return wt;
    }

    private boolean addToQ(TASKIDTYPE taskId, String qosKey) {
        if (qosKey == null || qosKey.equals(EMPTY_STRING) || size() < getQosThreshold()) {
            defaultQ.add(taskId);
        } else {
            addSynced(taskId, qosKey);
        }
        taskQSize.incrementAndGet();
        return true;
    }

    private void addSynced(TASKIDTYPE taskId, String qosKey) {
        qosKeyLock.lock();
        try {
            Queue<TASKIDTYPE> qosQ = qosQueues.get(qosKey);
            if (qosQ == null) {
                if (!isUsingQos()) {
                    // Setup QoS mechanics
                    qosTurn.clear();
                    qosTurn.add(EMPTY_STRING);
                    usingQos = true;
                }
                qosQ = new LinkedList<TASKIDTYPE>();
                qosQ.add(taskId);
                qosQueues.put(qosKey, qosQ);
                qosTurn.add(qosKey);
                log.trace("Created QoS queue for {}", qosKey);
            } else {
                qosQ.add(taskId);
                if (log.isTraceEnabled()) {
                    log.trace("Added task to QoS queue {}, size: " + qosQ.size(), qosKey);
                }
            }
        } finally {
            qosKeyLock.unlock();
        }
    }

    private TASKIDTYPE pollQos() {
        TASKIDTYPE taskId = null;
        qosKeyLock.lock();
        try {
            taskId = pollQosRecursive();
        } finally {
            qosKeyLock.unlock();
        }
        return taskId;
    }

    /**
     * Poll the work task queues according to qosTurn.
     * Recursive in case empty QoS queues are removed or defaultQ is empty.
     * @return
     */
    private TASKIDTYPE pollQosRecursive() {
        if (!isUsingQos()) {
            // QoS might have been disabled before lock was released or by this recursive method.
            return defaultQ.poll();
        }
        String qosKey = qosTurn.poll();
        Queue<TASKIDTYPE> qosQ = (qosKey.equals(EMPTY_STRING) ? defaultQ : qosQueues.get(qosKey));
        TASKIDTYPE taskId = qosQ.poll();
        if (qosQ == defaultQ) {
            // DefaultQ should always be checked, even if it was empty
            qosTurn.add(EMPTY_STRING);
            if (taskId == null) {
                taskId = pollQosRecursive();
            } else {
                log.trace("Removed task from defaultQ.");
            }
        } else {
            if (taskId == null) {
                qosQueues.remove(qosKey);
                if (qosQueues.isEmpty()) {
                    usingQos = false;
                }
                taskId = pollQosRecursive();
            } else {
                qosTurn.add(qosKey);
                if (log.isTraceEnabled()) {
                    log.trace("Removed task from QoS queue {}, size: " + qosQ.size(), qosKey);
                }
            }
        }
        return taskId;
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder(this.getClass().getName());
        sb.append(", size: ").append(size());
        sb.append(", number of QoS queues: ").append(qosQueues.size());
        return sb.toString();
    }

    public boolean containsTaskId(TASKIDTYPE wid) {
        return queuedTasks.containsKey(wid);
    }

    public int size() {
        return taskQSize.get();
    }

    public void setQosThreshold(int size) {
        this.qosThreshold = size;
    }

    public int getQosThreshold() {
        return qosThreshold;
    }

    public boolean isUsingQos() {
        return usingQos;
    }
}
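As a rough usage sketch (hypothetical wiring, not part of the class above): a producer enqueues jobs keyed by the requesting user, and a dispatcher polls in QoS order, calling remove(taskId) after each task has run. In real use the task type would carry its own id; a single hard-coded id is used here for brevity:
public class QosTaskQueueExample {
    public static void main(String[] args) {
        QosTaskQueue<Runnable, Integer> queue = new QosTaskQueue<>();

        // producer side: enqueue a job, keyed by the requesting user
        final int jobId = 42;
        queue.add(() -> System.out.println("running job " + jobId), jobId, "alice");

        // consumer side: a dispatcher loop pulls jobs in QoS order
        Runnable task = queue.poll();
        if (task != null) {
            task.run();          // or hand it to an ExecutorService
            queue.remove(jobId); // must be called after execution (see notes above)
        }
    }
}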

timer gets executed twice.. in java

I am facing a terrible issue.
I have two timers of class java.util.Timer: scheduledReportTimer, which sends daily emails, and scheduleImmediateReportTimer, which sends emails every 10 minutes.
scheduledReportTimer is working perfectly fine.
scheduleImmediateReportTimer runs every 10 minutes, but it seems to run twice per 10-minute interval, or two threads get created for it (I am not sure exactly which), because it calls the email-sending method twice.
I have tested the email-sending logic under lots of conditions and it is correct.
Please help me.
public class ScheduleReportManager implements ServletContextListener {
    private ServletContext application = null;
    private Environment environment = null;
    private Timer scheduledReportTimer = null;         // daily timer
    private Timer scheduleImmediateReportTimer = null; // 10-minute timer
    private ConcurrentHashMap scheduledReportTimerTasks;
    private ConcurrentHashMap scheduledImmediateReportTimerTasks; // 10-minute timer hash map
    ApplicationContext webContext = null;

    public ScheduleReportManager() {
    }

    public void contextInitialized(ServletContextEvent servletContextEvent) {
        try {
            this.application = servletContextEvent.getServletContext();
            webContext = (ApplicationContext) application.getAttribute(WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE);
            // daily timer
            if (application.getAttribute("scheduledReportTimer") == null) {
                application.setAttribute("scheduleReportManager", this);
                this.scheduledReportTimer = new Timer(true);
                setupSchedule(scheduledReportTimer, application);
                application.setAttribute("scheduledReportTimer", scheduledReportTimer);
            }
            // timer for 10 mins
            if (application.getAttribute("scheduleImmediateReportTimer") == null) {
                this.scheduleImmediateReportTimer = new Timer(true);
                setupImmediateSchedule(scheduleImmediateReportTimer, application);
                application.setAttribute("scheduleImmediateReportTimer", scheduleImmediateReportTimer);
            }
            Logger.global.log(Level.INFO, "ScheduledReportTimer: " + application.getServletContextName() + ": Setup completed");
        } catch (Exception e) {
            Logger.global.log(Level.SEVERE, "Setup Report Timer Exception - " + e.toString());
        }
    }

    // timer for 10 mins
    public void setupImmediateSchedule(Timer scheduledImmediateReportTimer, ServletContext application) {
        scheduledImmediateReportTimerTasks = new ConcurrentHashMap();
        ScheduleImmediateReportTimerTask immediateReportTimerTask = new ScheduleImmediateReportTimerTask(application, environment, this);
        scheduledImmediateReportTimerTasks.put(environment.getCode(), immediateReportTimerTask);
        scheduledImmediateReportTimer.schedule(immediateReportTimerTask, 1000);
    }

    // timer for 10 mins
    public void setTimerForImmediateReportExecution(ScheduleImmediateReportTimerTask immediateReportTimerTask) {
        Environment environment = immediateReportTimerTask.getEnvironment();
        ServletContext application = immediateReportTimerTask.getApplication();
        ScheduleImmediateReportTimerTask reportTimerTask = new ScheduleImmediateReportTimerTask(application, environment, this);
        String environmentCode = environment.getCode();
        synchronized (scheduledImmediateReportTimerTasks) {
            scheduledImmediateReportTimerTasks.put(environmentCode, reportTimerTask);
            scheduleImmediateReportTimer.schedule(reportTimerTask, 600000); // set timer for running every 10 mins
        }
    }

    // daily timer
    public void setTimerForNextExecution(ScheduledReportTimerTask timerTask, DateTime nextExecutionDateTime) {
        if (nextExecutionDateTime == null)
            return;
        Environment environment = timerTask.getEnvironment();
        ServletContext application = timerTask.getApplication();
        ScheduledReportTimerTask scheduledReportTimerTask = new ScheduledReportTimerTask(application, environment, this);
        java.util.Date nextScheduleTime = nextExecutionDateTime.getDate();
        String environmentCode = environment.getCode();
        synchronized (scheduledReportTimerTasks) {
            scheduledReportTimerTasks.put(environmentCode, scheduledReportTimerTask);
            Logger.global.log(Level.INFO, "ScheduledReportManager: next execution time is " + nextScheduleTime.toString());
            scheduledReportTimer.schedule(scheduledReportTimerTask, nextScheduleTime);
        }
    }

    // daily timer
    public void setupSchedule(Timer scheduledReportTimer, ServletContext application) throws Exception {
        this.environment = SpringBridgeUtil.retrieveEnvironment(application);
        this.scheduledReportTimerTasks = new ConcurrentHashMap();
        ScheduledReportTimerTask scheduledReportTimerTask = new ScheduledReportTimerTask(application, environment, this);
        scheduledReportTimerTasks.put(environment.getCode(), scheduledReportTimerTask);
        scheduledReportTimer.schedule(scheduledReportTimerTask, 1000);
    }

    public void contextDestroyed(ServletContextEvent servletContextEvent) {
        String contextName = application.getServletContextName();
        if (scheduledReportTimer != null) {
            scheduledReportTimer.cancel();
        }
        if (scheduleImmediateReportTimer != null) {
            scheduleImmediateReportTimer.cancel();
        }
        Logger.global.log(Level.INFO, "scheduledReportTimer: " + contextName
                + ": Tasks cancelled due to context removal");
    }
}

// sends email every 10 mins
public class ScheduleImmediateReportTimerTask extends TimerTask {
    @Override
    public void run() {
        try {
            // send mail
            sendNotifiationEmail();
        } catch (Exception e) {
            e.printStackTrace();
            Logger.global.log(Level.INFO, "immediateScheduledReportTimerTask Exception " + e.toString());
        }
    }
}
I am using JDK 1.7 and Spring MVC 3.5.0.
The classes use java.util.Timer and java.util.TimerTask.
Let me know if this is the correct way, or whether there is another way to execute a scheduled task periodically.
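For reference, here is a sketch of the repeating-schedule alternative: instead of each task scheduling a fresh task instance when it runs (which makes it easy to end up with two live task chains if the re-scheduling method is ever invoked twice), java.util.Timer can repeat a single task by itself via scheduleAtFixedRate. This only illustrates the standard API; it is not a drop-in fix for the code above:
import java.util.Timer;
import java.util.TimerTask;

public class RepeatingReportTimer {
    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer(true); // daemon timer thread
        TimerTask task = new TimerTask() {
            @Override
            public void run() {
                // send the notification email here; the Timer re-runs this
                // same task, so no manual re-scheduling is needed
                System.out.println("sending 10-minute report email");
            }
        };
        // first run after 1 second, then every 10 minutes
        timer.scheduleAtFixedRate(task, 1000, 10 * 60 * 1000L);
        Thread.sleep(5000); // keep the demo JVM alive briefly
    }
}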
