How to benchmark an infinite-loop Java NIO WatchService program

I have an infinite polling loop using java.nio.file.WatchService looking for new files. Inside the loop I have a fixed thread pool executor service to process files concurrently.
As the polling service keeps running, how can I benchmark the time taken for a batch of, say, 10 (or n) files to be processed? I am able to time each file in the Runnable class, but how can I get the batch processing time?

Something like this should work:
// inside the listener for the WatchService
final MyTimer t = new MyTimer(); // records the current time, initialized to 0 tasks
for (Change c : allChanges) {
    t.incrementTaskCount(); // synchronized
    launchConcurrentProcess(c, t);
}

// inside your processor, after processing a change
t.decrementTaskCount(); // also synchronized

// inside MyTimer
public synchronized void decrementTaskCount() {
    totalTasks--;
    // depending on your benchmarking needs, you can do different things here
    // I am printing max time only (= end of last), but min/max/avg may also be nice
    if (totalTasks == 0) {
        System.err.println("The time spent on this batch was "
                + (System.currentTimeMillis() - initialTime));
    }
}
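For completeness, a minimal sketch of what MyTimer itself might look like (the class name and fields are assumptions taken from the snippet above, not an existing API):

// Hypothetical helper class assumed by the snippet above.
class MyTimer {
    private final long initialTime = System.currentTimeMillis(); // batch start time
    private int totalTasks = 0; // number of tasks still in flight

    public synchronized void incrementTaskCount() {
        totalTasks++;
    }

    public synchronized void decrementTaskCount() {
        totalTasks--;
        if (totalTasks == 0) {
            // the last task of the batch just finished
            System.err.println("The time spent on this batch was "
                    + (System.currentTimeMillis() - initialTime));
        }
    }
}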

Related

Load Test using Java - timer task vs scheduler

I need to execute a load test using Java in which one of the test strategies requires x threads to be fired off every y period of time for z minutes, and thereafter a constant totalThread number of threads running for the load test duration (e.g. with a total of 100 threads, start 10 threads at 5-second intervals until all 100 threads have started, and keep all 100 threads running (once a thread has finished execution it should restart) for the specified duration of the test, say one hour).
I have attempted to use a TimerTask, but it seems limiting. Would a thread pool scheduler be a better option? What would be the best approach?
public class MyTask extends TimerTask {
    private int counter = 1;
    private final int maxIterations;

    public MyTask(int maxIterations) {
        this.maxIterations = maxIterations;
    }

    public void run() {
        System.out.println("STARTING THREAD " + counter + " " + new Date());
        // execute test
        counter++;
        if (counter > maxIterations) {
            MyTask.this.cancel();
            return;
        }
    }
}

List<TimerTask> myTaskList = new ArrayList<TimerTask>();
for (int i = 1; i <= threadsPerIteration; i++) {
    TimerTask myTimerTask = new MyTask(numberOfIterations);
    myTaskList.add(myTimerTask);
    timer.schedule(myTimerTask, initialDelayMilli, threadDelayMilli);
}
Thank You
Don't use a TimerTask for each thread. Instead, use a single TimerTask that fires once per interval (with your example numbers, once every 5 seconds).
Each of the first 10 times the TimerTask fires, it spawns off 10 threads. On each subsequent firing, it checks the number of active threads and spawns enough new threads to bring the total back to 100, until the end of your test.
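A rough sketch of that single-TimerTask approach (the class name LoadRamp and the runTest() placeholder are assumptions, not code from the question):

import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadRamp {
    static final int TARGET_THREADS = 100; // total threads to keep running
    static final int STEP = 10;            // threads started per firing during ramp-up
    static final AtomicInteger active = new AtomicInteger(0);

    public static void main(String[] args) {
        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new TimerTask() {
            private int firings = 0;

            public void run() {
                firings++;
                // first 10 firings: add 10 threads each; afterwards: top back up to 100
                int toStart = (firings <= TARGET_THREADS / STEP)
                        ? STEP
                        : TARGET_THREADS - active.get();
                for (int i = 0; i < toStart; i++) {
                    active.incrementAndGet();
                    new Thread(new Runnable() {
                        public void run() {
                            try {
                                runTest(); // hypothetical test body
                            } finally {
                                active.decrementAndGet();
                            }
                        }
                    }).start();
                }
                // call timer.cancel() once the overall test duration has elapsed
            }
        }, 0, 5000); // fire once every 5 seconds
    }

    static void runTest() { /* the actual load-test work goes here */ }
}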
Thanks for the help. I decided to use the thread pool executor together with the TimerTask class as follows:
I used the Executors.newScheduledThreadPool(int x) method to control the number of threads able to run concurrently, together with a timer task that increases the thread pool size every y amount of time:
TimerTask DelayTimerTask = new TimerTask() { // task to increase the thread pool size
    public void run() {
        // MyExecutor must be a ScheduledThreadPoolExecutor so that setCorePoolSize is available
        MyExecutor.setCorePoolSize(i * incrementAmount); // grows the pool by incrementAmount each run
        i++;
    }
};
timer.scheduleAtFixedRate(DelayTimerTask,0,intervalLength);
In this way the number of concurrent threads will increase by incrementAmount every intervalLength milliseconds.
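Assembled into a self-contained example, that ramp-up might look roughly like this (the names incrementAmount and intervalLength follow the snippet above; everything else is an assumption):

import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.ScheduledThreadPoolExecutor;

public class RampUpPool {
    // start with one thread and grow the pool on a fixed schedule
    static final ScheduledThreadPoolExecutor myExecutor = new ScheduledThreadPoolExecutor(1);
    static final int incrementAmount = 10;     // threads added per interval
    static final long intervalLength = 5000L;  // milliseconds between increments
    static int i = 1;

    public static void main(String[] args) {
        Timer timer = new Timer();
        TimerTask delayTimerTask = new TimerTask() { // task to increase the pool size
            public void run() {
                myExecutor.setCorePoolSize(i * incrementAmount);
                i++;
            }
        };
        timer.scheduleAtFixedRate(delayTimerTask, 0, intervalLength);
        // tasks submitted to myExecutor will now ramp up as the core pool size grows
    }
}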

How can I properly block a thread until timeout starts?

I would like to run several tasks in parallel until a certain amount of time has passed. Let us suppose those threads are CPU-heavy and/or may block indefinitely. After the timeout, the threads should be interrupted immediately, and the main thread should continue execution regardless of unfinished or still running tasks.
I've seen a lot of questions asking this, and the answers were always similar, often along the lines of "create thread pool for tasks, start it, join it on timeout"
The problem is between the "start" and "join" parts. As soon as the pool is allowed to run, it may grab CPU and the timeout will not even start until I get it back.
I have tried ExecutorService.invokeAll and found that it did not fully meet the requirements. Example:
long dt = System.nanoTime ();
ExecutorService pool = Executors.newFixedThreadPool (4);
List <Callable <String>> list = new ArrayList <> ();
for (int i = 0; i < 10; i++) {
    list.add (new Callable <String> () {
        @Override
        public String call () throws Exception {
            while (true) {
            }
        }
    });
}
System.out.println ("Start at " + (System.nanoTime () - dt) / 1000000 + "ms");
try {
    pool.invokeAll (list, 3000, TimeUnit.MILLISECONDS);
}
catch (InterruptedException e) {
}
System.out.println ("End at " + (System.nanoTime () - dt) / 1000000 + "ms");
Start at 1ms
End at 3028ms
This (a 27 ms delay) may not seem too bad, but an infinite loop is rather easy to break out of; the actual program easily experiences delays ten times larger. My expectation is that a timeout request is met with very high accuracy even under heavy load (I'm thinking along the lines of a hardware interrupt, which should always work).
This is a major pain in my particular program, as it needs to heed certain timeouts rather accurately (for instance, around 100 ms, if possible better). However, starting the pool often takes as long as 400 ms until I get control back, pushing past the deadline.
I'm a bit confused why this problem is almost never mentioned. Most of the answers I have seen definitely suffer from this. I suppose it may be acceptable usually, but in my case it's not.
Is there a clean and tested way to go ahead with this issue?
Edited to add:
My program involves garbage collection, though not on a large scale. For testing purposes, I rewrote the above example and found that the results are very inconsistent, but on average noticeably worse than before.
long dt = System.nanoTime ();
ExecutorService pool = Executors.newFixedThreadPool (40);
List <Callable <String>> list = new ArrayList <> ();
for (int i = 0; i < 10; i++) {
    list.add (new Callable <String> () {
        @Override
        public String call () throws Exception {
            String s = "";
            while (true) {
                s += Long.toString (System.nanoTime ());
                if (s.length () > 1000000) {
                    s = "";
                }
            }
        }
    });
}
System.out.println ("Start at " + (System.nanoTime () - dt) / 1000000 + "ms");
try {
    pool.invokeAll (list, 1000, TimeUnit.MILLISECONDS);
}
catch (InterruptedException e) {
}
System.out.println ("End at " + (System.nanoTime () - dt) / 1000000 + "ms");
Start at 1ms
End at 1189ms
invokeAll should work just fine. However, it is vital that you write your tasks to properly respond to interrupts. When catching InterruptedException, they should exit immediately. If your code is catching IOException, each such catch-block should be preceded with something like:
} catch (InterruptedIOException e) {
logger.log(Level.FINE, "Interrupted; exiting", e);
return;
}
If you are using Channels, you will want to handle ClosedByInterruptException the same way.
If you perform time-consuming operations that don't catch the above exceptions, you need to check Thread.interrupted() periodically. Obviously, checking more often is better, though there will be a point of diminishing returns. (Meaning, checking it after every single statement in your task probably isn't useful.)
if (Thread.interrupted()) {
logger.fine("Interrupted; exiting");
return;
}
In your example code, your Callable is not checking the interrupt status at all, so my guess is that it never exits. An interrupt does not actually stop a thread; it just signals the thread that it should terminate itself on its own terms.
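For example, the busy loop from the question could be made interrupt-aware like this (a sketch; the original code never checks the flag):

list.add (new Callable <String> () {
    @Override
    public String call () throws Exception {
        while (true) {
            // cooperative cancellation: when invokeAll times out it cancels the task,
            // which interrupts this thread; exit as soon as that happens
            if (Thread.interrupted ()) {
                return null;
            }
            // ... do a small slice of work here ...
        }
    }
});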
Using the VM option -XX:+PrintGCDetails, I found that the GC runs less often, but with far longer pauses than expected. Those pauses happen to coincide with the spikes I experienced.
A mundane and sad explanation for the observed behavior.

How to run a Java program for a specific amount of time?

I want to run my program on a list of files. But a few files take much longer than expected. So I want to kill the thread/process after a timeout period and run the program on the next file. Is there any easy way to do it? I'll be running only one thread at a time.
Edit1:
I am sorry I couldn't make it clear earlier. Here is the for loop code.
for (File file : files)
{
//Perform operations.
}
So a file is a Java program file which can contain many methods. If the number of methods is small, my analysis works fine. If there are many, say 20 methods, it keeps executing and analyzing them for a few hours. So in the latter case, I would like to stop that execution and move on to the next Java file.
And I don't have any constraint of single-threading. If multi-threading works, that's still fine for me. Thanks.
These kinds of things are usually done multi-threaded. For an example, see How to timeout a thread.
As you commented though, you are looking for a single-threaded solution. That is best done by periodically checking if the timeout expired yet. Some thread will have to do the checking, and since you requested it to have only one thread, the checking will have to be somewhere halfway in the code.
Let's say that the majority of the time spent is in a while-loop that reads the file line-by-line. You could do something like this:
long start = System.nanoTime();
while ((line = br.readLine()) != null) {
    if (System.nanoTime() - start > 600E9) { // more than 10 minutes past the start
        throw new Exception("Timeout!");
    }
    ... // process the line
}
Just as a quick example of how you would do this using multi-threading:
First we make a Runnable for processing the File
class ProcessFile implements Runnable {
    File file;

    public ProcessFile(File file) {
        this.file = file;
    }

    public void run() {
        //process the file here
    }
}
Next we actually execute that class as a thread:
class FilesProcessor {
    public void processFiles() {
        //I guess you get the files somewhere in here
        ExecutorService executor = Executors.newSingleThreadExecutor();
        ProcessFile process;
        Future<?> future;
        for (File file : files) {
            process = new ProcessFile(file);
            future = executor.submit(process);
            try {
                future.get(10, TimeUnit.MINUTES);
                System.out.println("completed file");
            } catch (TimeoutException e) {
                System.out.println("file processing timed out");
                future.cancel(true); // interrupt the runaway task so the next file can start
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        }
        executor.shutdownNow();
    }
}
So we iterate through each file and process it. If it takes longer than 10 minutes, we get a TimeoutException and cancel the task, which interrupts the worker thread (the ProcessFile code still has to respond to that interrupt and return). Easy as pie.
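As a sketch of what "responding to the interrupt" could look like inside ProcessFile (the loop body and the extractMethods/analyse helpers are assumptions, not part of the original answer):

public void run() {
    // process the file here, checking the interrupt flag between units of work
    for (String method : extractMethods(file)) { // extractMethods/analyse are hypothetical helpers
        if (Thread.currentThread().isInterrupted()) {
            return; // cancelled after the timeout; stop early
        }
        analyse(method);
    }
}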
I think you have a while or for loop for processing these files; what about reading a timer on each loop iteration?
You can measure how long you are processing data with:
while (....) {
    long start = System.currentTimeMillis();
    // ... the code being measured ...
    long elapsedTime = System.currentTimeMillis() - start;
}
Or maybe start the chronometer before the loop, I'm not sure.
EDIT:
So if you have one loop iteration per file, you have to put some time measurements at specific points in your code, as darius said.
For example:
for each file:
    start = System.currentTimeMillis();
    // do some treatment
    elapsedTime = System.currentTimeMillis() - start;
    // do some more treatment
    elapsedTime = System.currentTimeMillis() - start;
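A concrete sketch of that checkpoint idea (the 10-minute limit and the extractMethods/analyse helpers are assumptions for illustration):

// TimeoutException here is java.util.concurrent.TimeoutException
long timeoutMillis = 10 * 60 * 1000; // give each file at most 10 minutes
for (File file : files) {
    long start = System.currentTimeMillis();
    try {
        for (String method : extractMethods(file)) { // hypothetical helper
            if (System.currentTimeMillis() - start > timeoutMillis) {
                throw new TimeoutException("Gave up on " + file);
            }
            analyse(method); // hypothetical analysis step
        }
    } catch (TimeoutException e) {
        System.out.println(e.getMessage() + "; moving on to the next file");
    }
}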

How to test task performance using multithreading?

I have some exercises, and one of them concerns concurrency. This topic is new to me; however, I spent 6 hours and finally solved my problem. But my knowledge of the corresponding API is poor, so I need advice: is my solution correct, or is there perhaps a more appropriate way?
So, I have to implement the following interface:
public interface PerformanceTester {
    /**
     * Runs a performance test of the given task.
     * @param task which task to do performance tests on
     * @param executionCount how many times the task should be executed in total
     * @param threadPoolSize how many threads to use
     */
    public PerformanceTestResult runPerformanceTest(
            Runnable task,
            int executionCount,
            int threadPoolSize) throws InterruptedException;
}
where PerformanceTestResult contains total time (how long the whole performance test took in total), minimum time (how long the shortest single execution took) and maximum time (how long the longest single execution took).
So, I learned many new things today: thread pools, the Executors, ExecutorService, Future and CompletionService types, etc.
If I had a Callable task, I could do the following:
Return the current time at the end of the call() method.
Create some data structure (a Map, maybe) to store the start time and the Future object returned by fixedThreadPool.submit(task) (do this executionCount times, in a loop);
After execution I could just subtract the start time from the end time for every Future.
(Is this the right way in the case of a Callable task? A sketch of this idea follows below.)
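A minimal sketch of that Callable-based idea (an illustration only; callableTask is assumed to be a Callable<Long> that returns System.currentTimeMillis() at the end of call()):

ExecutorService pool = Executors.newFixedThreadPool(threadPoolSize);
Map<Future<Long>, Long> startTimes = new HashMap<Future<Long>, Long>();

for (int i = 0; i < executionCount; i++) {
    // remember when each task was submitted
    startTimes.put(pool.submit(callableTask), System.currentTimeMillis());
}

for (Map.Entry<Future<Long>, Long> entry : startTimes.entrySet()) {
    try {
        long duration = entry.getKey().get() - entry.getValue(); // end time minus submit time
        // feed duration into total/min/max here
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
}
pool.shutdown();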
But! I only have a Runnable task, so I continued looking. I even created a FutureListener implements Callable<Long> that was supposed to return the time when Future.isDone(), but that seems a little crazy to me (I would have to double the thread count).
So, eventually I noticed the CompletionService type with its interesting method take(), which "retrieves and removes the Future representing the next completed task, waiting if none are yet present", and a very nice example of using ExecutorCompletionService. And here is my solution.
public class PerformanceTesterImpl implements PerformanceTester {
    @Override
    public PerformanceTestResult runPerformanceTest(Runnable task,
            int executionCount, int threadPoolSize) throws InterruptedException {
        long totalTime = 0;
        long[] times = new long[executionCount];
        ExecutorService pool = Executors.newFixedThreadPool(threadPoolSize);

        //create list of executionCount tasks
        ArrayList<Runnable> solvers = new ArrayList<Runnable>();
        for (int i = 0; i < executionCount; i++) {
            solvers.add(task);
        }

        CompletionService<Long> ecs = new ExecutorCompletionService<Long>(pool);

        //submit tasks and save time of execution start
        for (Runnable s : solvers)
            ecs.submit(s, System.currentTimeMillis());

        //take Futures one by one in order of completing
        for (int i = 0; i < executionCount; ++i) {
            long r = 0;
            try {
                //this is saved time of execution start
                r = ecs.take().get();
            } catch (ExecutionException e) {
                e.printStackTrace();
                return null;
            }
            //put into array difference between current time and start time
            times[i] = System.currentTimeMillis() - r;
            //calculate sum in array
            totalTime += times[i];
        }

        pool.shutdown();

        //sort array to define min and max
        Arrays.sort(times);

        PerformanceTestResult performanceTestResult = new PerformanceTestResult(
                totalTime, times[0], times[executionCount - 1]);
        return performanceTestResult;
    }
}
So, what can you say? Thanks for replies.
I would use System.nanoTime() for higher-resolution timings. You might want to ignore the first 10,000 tests to ensure the JVM has warmed up.
I wouldn't bother creating a List of Runnable tasks and adding it to the Executor. I would instead just submit them to the executor directly.
Using Runnable is not a problem, as you get a Future<?> back.
Note: the time the task spends in the queue can make a big difference to the timing. Instead of taking the time from when the task was created, you can have the task time itself and return a Long for the time in nanoseconds. How the timing is done should reflect the use case you have in mind.
A simple way to convert a Runnable task into one which times itself:
final Runnable run = ...
ecs.submit(new Callable<Long>() {
    public Long call() {
        long start = System.nanoTime();
        run.run();
        return System.nanoTime() - start; // duration of this single execution, in nanoseconds
    }
});
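Collecting those per-execution durations could then look something like this (a sketch; it assumes executionCount tasks were submitted to ecs as above, and that the enclosing method declares throws InterruptedException as in the interface):

long totalTime = 0;
long minTime = Long.MAX_VALUE;
long maxTime = Long.MIN_VALUE;
for (int i = 0; i < executionCount; i++) {
    try {
        long duration = ecs.take().get(); // nanoseconds for one execution
        totalTime += duration;
        minTime = Math.min(minTime, duration);
        maxTime = Math.max(maxTime, duration);
    } catch (ExecutionException e) {
        e.printStackTrace(); // a task failed; decide how the result should reflect that
    }
}
// new PerformanceTestResult(totalTime, minTime, maxTime) can then be built from these values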
There are many intricacies when writing performance tests in the JVM. You probably aren't worried about them as this is an exercise, but if you are this question might have more information:
How do I write a correct micro-benchmark in Java?
That said, there don't seem to be any glaring bugs in your code. You might want to ask this on the lower traffic code-review site if you want a full review of your code:
http://codereview.stackexchange.com

Fibonacci on Java ExecutorService runs faster sequentially than in parallel

I am trying out the executor service in Java, and wrote the following code to run Fibonacci (yes, the massively recursive version, just to stress out the executor service).
Surprisingly, it will run faster if I set the nThreads to 1. It might be related to the fact that the size of each "task" submitted to the executor service is really small. But still it must be the same number also if I set nThreads to 1.
To see if the access to the shared Atomic variables can cause this issue, I commented out the three lines with the comment "see text", and looked at the system monitor to see how long the execution takes. But the results are the same.
Any idea why this is happening?
BTW, I wanted to compare it with the similar implementation with Fork/Join. It turns out to be way slower than the F/J implementation.
public class MainSimpler {
    static int N = 35;
    static AtomicInteger result = new AtomicInteger(0), pendingTasks = new AtomicInteger(1);
    static ExecutorService executor;

    public static void main(String[] args) {
        int nThreads = 2;
        System.out.println("Number of threads = " + nThreads);
        executor = Executors.newFixedThreadPool(nThreads);
        Executable.inQueue = new AtomicInteger(nThreads);
        long before = System.currentTimeMillis();
        System.out.println("Fibonacci " + N + " is ... ");
        executor.submit(new FibSimpler(N));
        waitToFinish();
        System.out.println(result.get());
        long after = System.currentTimeMillis();
        System.out.println("Duration: " + (after - before) + " milliseconds\n");
    }

    private static void waitToFinish() {
        while (0 < pendingTasks.get()) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        executor.shutdown();
    }
}

class FibSimpler implements Runnable {
    int N;

    FibSimpler(int n) { N = n; }

    @Override
    public void run() {
        compute();
        MainSimpler.pendingTasks.decrementAndGet(); // see text
    }

    void compute() {
        int n = N;
        if (n <= 1) {
            MainSimpler.result.addAndGet(n); // see text
            return;
        }
        MainSimpler.executor.submit(new FibSimpler(n-1));
        MainSimpler.pendingTasks.incrementAndGet(); // see text
        N = n-2;
        compute(); // similar to the F/J counterpart
    }
}
Runtime (approximately):
1 thread : 11 seconds
2 threads: 19 seconds
4 threads: 19 seconds
Update:
I notice that even if I use one thread inside the executor service, the whole program will use all four cores of my machine (each core around 80% usage on average). This could explain why using more threads inside the executor service slows down the whole process, but now, why does this program use 4 cores if only one thread is active inside the executor service??
It might be related to the fact that the size of each "task" submitted
to the executor service is really small.
This is certainly the case, and as a result you are mainly measuring the overhead of context switching. When nThreads == 1, there is no context switching and thus the performance is better.
But still it must be the same number also if I set nThreads to 1.
I'm guessing you meant 'to higher than 1' here.
You are running into the problem of heavy lock contention. When you have multiple threads, the lock on the result is contended all the time. Threads have to wait for each other before they can update the result and that slows them down. When there is only a single thread, the JVM probably detects that and performs lock elision, meaning it doesn't actually perform any locking at all.
You may get better performance if you don't divide the problem into N tasks, but rather divide it into N/nThreads tasks, which can be handled simultaneously by the threads (assuming you choose nThreads to be at most the number of physical cores/threads available). Each thread then does its own work, calculating its own total and only adding that to a grand total when the thread is done. Even then, for fib(35) I expect the costs of thread management to outweigh the benefits. Perhaps try fib(1000).
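As a generic illustration of that pattern (per-thread partial totals with a single contended update at the end; this sums numbers rather than rewriting the Fibonacci code, so all names here are assumptions):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class PartialTotals {
    public static void main(String[] args) throws InterruptedException {
        final int nThreads = Runtime.getRuntime().availableProcessors();
        final long workItems = 100_000_000L;
        final AtomicLong grandTotal = new AtomicLong(0);

        ExecutorService executor = Executors.newFixedThreadPool(nThreads);
        final long chunk = workItems / nThreads;
        for (int t = 0; t < nThreads; t++) {
            final long from = t * chunk;
            final long to = (t == nThreads - 1) ? workItems : from + chunk;
            executor.submit(new Runnable() {
                public void run() {
                    long localTotal = 0; // no contention while accumulating
                    for (long i = from; i < to; i++) {
                        localTotal += i;
                    }
                    grandTotal.addAndGet(localTotal); // one contended update per thread
                }
            });
        }
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("Total: " + grandTotal.get());
    }
}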
