Accurately counting elapsed time - Java

Consider the following code snippets:
class Time implements Runnable {
    long t = 0L;
    public void run() {
        try {
            while (true) { Thread.sleep(1000); t++; /* show the time */ }
        } catch (InterruptedException e) { /* stop counting when interrupted */ }
    }
}
////
#include <unistd.h>   /* for sleep() */

long long t = 0LL;
void* time_thread(void* a) { /* pthread start routine */
    while (1) { sleep(1); t++; /* show the time */ }
    return NULL;
}
I read in a tutorial that in Java, Thread.sleep(1000) does not sleep for exactly 1 second; it may take longer if the system is busy at the time, because the OS switches back to the thread late.
Questions:
Is this true at all?
Is it the same for native (C/C++) code?
What is an accurate way to count up the seconds in an application?

Others have answered about the accuracy of timing. Unfortunately, there is no GUARANTEED way to sleep for X amount of time, and wake up at exactly X.00000 seconds (or milliseconds, nanoseconds, etc).
For displaying time in seconds, you can just lower the time you are waiting to, say, half a second. Then you won't have the time jump by two seconds from time to time, because half a second isn't going to be extended to more than a second (unless the OS and system you are running on are absolutely overloaded and nothing gets to run when it should - in which case you should fix that problem [get a faster processor, more memory, or whatever it takes], not fiddle with the timing of your application). This works well for relatively long periods of time, such as one second or 1/10th of a second. For higher precision, it won't really work, since we're now entering the "scheduling jitter" zone.
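A minimal sketch of this idea in Java (class name and printing are illustrative): poll at roughly half-second intervals, but derive the displayed value from a stored start time rather than from counting sleeps, so scheduling jitter never accumulates - the same store-the-start-time advice given in later answers below.
public class ElapsedDisplay {
    public static void main(String[] args) throws InterruptedException {
        long startNanos = System.nanoTime();
        while (true) {
            Thread.sleep(500);                                  // wake roughly twice per second
            long seconds = (System.nanoTime() - startNanos) / 1_000_000_000L;
            System.out.println(seconds + " s");                 // "show the time"
        }
    }
}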
If you want very accurate timing, then you will probably need to use a real-time OS, or at least an OS that has "real-time extensions enabled", which allows the OS to be more strict about time (at the cost of "ease of use" for the programmer, and possibly also the OS being less efficient in its handling of processes, because it "switches more often than it needs to" compared to a more "lazy" timing approach).
Note also that the "may take longer" on an idle system is mainly the rounding up of the timer: if the system tick happens every 10ms or 1ms, the timer is set to 1000ms plus whatever is left of the current timer tick, so it may be 1009.999ms or 1000.75ms, for example. The other overheads, which come from scheduling and general OS bookkeeping, should be in the microsecond range if not nanoseconds on any modern system - after all, an OS can do quite a lot of work in a microsecond. A modern x86 CPU will execute around 3 instructions per clock cycle, and a clock cycle takes around 0.3ns, which is roughly 10 instructions per nanosecond (of course, cache misses and the like will worsen this dramatically). If the OS needs more than a few thousand instructions to go from one process to another (fewer still for threads), then there's something quite wrong. A few thousand instructions at 10 instructions per nanosecond is some hundreds of nanoseconds - definitely less than a microsecond. Compare that to the 1ms or 10ms of "jitter" from starting the timer just after the last timer tick.
Naturally, if the CPU is busy running other tasks, this is different - then the time "left to run" on other processes will also influence the time taken to wake up a process.
Of course, in a heavily loaded memory system, the "just woken up" process may not be "ready to run"; it could be swapped out to disk, for example, in which case tens if not hundreds of milliseconds are needed to load it back from disk.

To answer the first two questions: yes, it's true, and yes.
First there is the time between when the timeout expires and when the OS notices it, then there is the time for the OS to reschedule your process, and lastly there's the time from when the process has been "woken up" until it is its turn to run. How long will all this take? There's no way of saying.
And as it's all done on the OS level, it doesn't really matter what language you program in.
As for a more accurate way? There is none. You can use higher-precision timers, but there is no way of avoiding the lag described above.

Yes, it's true that it is not accurate.
It's the same for simple sleep functions in C/C++ and pretty much everything else.
Depending on your system, there may be better functions available,
but:
"What is an accurate way"
A really accurate way does not exist,
unless you have some really expensive special computer with an atomic clock included
(and no usual OS either - and even then, we could argue what "accurate" means).
If busy waiting (high CPU load) is acceptable, look at nanoTime, native usleep, HighPerformanceCounter, or whatever is applicable for your system.
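For instance, a busy-wait on System.nanoTime() in Java might look like the sketch below (names are illustrative); it keeps one core fully loaded while waiting, which is exactly the high-CPU-load trade-off mentioned above.
public class BusyWait {
    // Spin until roughly the requested number of nanoseconds has elapsed.
    static void busySleepNanos(long nanos) {
        long deadline = System.nanoTime() + nanos;
        while (System.nanoTime() < deadline) {
            Thread.onSpinWait();                     // spin-loop hint, Java 9+; remove on older JVMs
        }
    }

    public static void main(String[] args) {
        long before = System.nanoTime();
        busySleepNanos(1_000_000_000L);              // ~1 second, typically much tighter than Thread.sleep(1000)
        System.out.println("waited " + (System.nanoTime() - before) + " ns");
    }
}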

The sleep call tells the system to stop the thread's execution for at least the time period specified as an argument. The system will then resume thread execution when it has a chance (this actually depends on many factors, such as hardware, thread priorities, etc.). To measure the time more or less accurately, you can store the time at the beginning of execution and then calculate the time delta each time it's needed.

The sleep function is not accurate, but if the intent is to display the total number of seconds elapsed, then you should store the current time at the beginning and then display the time difference every now and then.

This is true. Every sleep implementation in any language (C too) will fail to wait exactly 1 second. It has to go through your OS scheduler, and the sleep duration is just a hint - the minimum sleep duration, to be precise. The actual deviation depends on a gigazillion factors.
Trying to figure out the deviation is tricky if you want a very high-resolution clock. In most cases, you'll be off by about 1-5 ms (roughly).
The thing is that the order of magnitude of the error will be the same whatever the sleep duration. If you want something "accurate", you can wait for a longer period and divide the measured time by the number of iterations. For example, when you benchmark, you will prefer this type of implementation because the delta-time increases, decreasing the relative uncertainty:
long t0 = System.nanoTime();                       // get t0
for (int i = 0; i < n; i++) {                      // process n times (n and process() are placeholders)
    process();
}
long t1 = System.nanoTime();                       // get t1
double avg = (t1 - t0) / (double) n;               // compute average time : (t1-t0)/n

Related

calculate CPU Cycle for Java function

I'd like to know how to calculate the CPU cycles used by a function in Java or Python.
In Java I tried:
OperatingSystemMXBean osbean =
        (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
osbean.getProcessCpuTime();   // getProcessCpuTime() is only declared on com.sun.management.OperatingSystemMXBean
But no results.
in Python,
use timeit.default_timer(); it uses the most accurate option for your platform. On Ubuntu, this will use time.time() instead.
timeit.default_timer() - Define a default timer, in a platform-specific manner. On Windows, time.clock() has microsecond granularity, but time.time()'s granularity is 1/60th of a second. On Unix, time.clock() has 1/100th of a second granularity, and time.time() is much more precise. On either platform, default_timer() measures wall-clock time, not CPU time. This means that other processes running on the same computer may interfere with the timing.
in JAVA,
System.currentTimeMillis() will only ever measure wall-clock time, never CPU time.
If you need wall-clock time, then System.nanoTime() is often more precise (and never worse) than currentTimeMillis().
ThreadMXBean.getThreadCpuTime() can help you find out how much CPU time a given thread has used. Use ManagementFactory.getThreadMXBean() to get a ThreadMXBean and Thread.getId() to find the id of the thread you're interested in. Note that this method need not be supported on every JVM!
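A small self-contained sketch of the ThreadMXBean approach (the busy loop is just there to have something to measure; class and variable names are illustrative):
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuTimeDemo {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (!threads.isThreadCpuTimeSupported()) {
            System.out.println("Thread CPU time is not supported on this JVM");
            return;
        }
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {       // burn some CPU so there is something to measure
            sum += i;
        }
        long cpuNanos = threads.getThreadCpuTime(Thread.currentThread().getId());
        System.out.println("CPU time: " + cpuNanos + " ns (checksum " + sum + ")");
    }
}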

Delay by 1 millisecond not working?

I am trying to generate a number using:
System.currentTimeMillis()
I have to generate these numbers sometimes 5 times in a row, which happens so fast that they are the same (but I don't want them to be the same as we are using them as part of a unique field)
I thought I could put a delay in between when each one is generated, which would prevent them from being the same, using:
TimeUnit.MILLISECONDS.sleep(1);
But this still generates the same number. It only seems to generate a new one if I increase it to about 60 and above. I am trying to understand why this is? Thanks
If you want to generate 5 numbers starting at the current time that aren't the same - which as far as I can tell is your only requirement - you can use
long t = System.currentTimeMillis();
long ts[] = { t, t+1, t+2, t+3, t+4 };
Thread.sleep is absurd here. The user (or whoever) should not experience a delay for something that can be computed now.
The Java docs for Thread.sleep() says:
Causes the currently executing thread to sleep (temporarily cease execution) for the
specified number of milliseconds, subject to the precision and accuracy of system
timers and schedulers.
That bit about the "precision and accuracy of system timers and schedulers" is pretty important. I'd say that's why you're not getting anything until you use Thread.sleep(60): those system timers just aren't very accurate.
Now a better question is why are you trying to "generate numbers..."
The JavaDoc for System.currentTimeMillis() says:
"Note that while the unit of time of the return value is a millisecond,
the granularity of the value depends on the underlying operating system
and may be larger. For example, many operating systems measure time in
units of tens of milliseconds."
From this, it is likely that your problem is not Java-related; rather, your operating system is not keeping time at per-millisecond granularity.
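A quick way to see this on your own machine is a small probe like the following (a hedged sketch; the step sizes it prints will vary with the OS and JVM):
public class ClockGranularity {
    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            long before = System.currentTimeMillis();
            long after;
            while ((after = System.currentTimeMillis()) == before) {
                // busy-wait until the reported time actually changes
            }
            System.out.println("clock advanced by " + (after - before) + " ms");
        }
    }
}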
Thread.sleep guarantees that the thread will sleep, but not for exactly how long. It will definitely sleep for at least 1 millisecond, but your OS may not give priority to the JVM, or the JVM may not give priority to the thread, right after that millisecond has passed.

Accuracy of ScheduledExecutorService on normal OS / JVM

I use ScheduledExecutorService.scheduleAtFixedRate to run a daily task, like this:
executor.scheduleAtFixedRate(task, d, 24L * 3600 * 1000, TimeUnit.MILLISECONDS);
(d is the initial delay in milliseconds).
The executor is created by Executors.newSingleThreadScheduledExecutor() and runs multiple tasks, but they are all scheduled a few hours apart, and take at most a few minutes.
I know that ScheduledExecutorService makes no guarantees about accuracy, and I'd need a real-time OS and JVM to get that. That is not a requirement for my task, though.
I noticed that on a Windows 2003 Server, using JDK 1.7.0_03, the task slips by almost 10 seconds per day. That makes about 5 minutes per month, which is acceptable for my application. I'll probably have to implement re-scheduling anyway, because I want the task to run at a specific local time, and so I'll have to take care of DST myself. The service runs for long periods of time - half a year without restart is not that unusual.
Still, I think that an inaccuracy of 10 sec/day is rather high for a mostly idle system, and I wonder if I should be prepared for even worse behavior.
So my question is about your experiences with scheduleAtFixedRate. Are the 10 sec/day normal? Will I get better or worse accuracy in other environments (our customers also use Linux and Solaris servers)? Or are the 10 seconds an indication that something is amiss in our environment?
For a very long-running task, it is not too surprising. Another problem is that the scheduler uses nanoTime(), which is not synchronized with NTP or the like; this can result in drift relative to the wall clock.
One way to avoid this is to reschedule repeatedly, as you suggest. (The repeating tasks actually reschedule themselves, which is why they cannot throw an exception; see below.) You can have a one-shot task that reschedules itself at the end, using the wall-clock time and taking daylight saving time into account - see the sketch below.
BTW: I would make sure you catch any exception, or even Throwable, that is thrown. If you don't, your task will stop, possibly silently (unless you are looking at the Future object returned).
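A minimal sketch of that self-rescheduling, exception-catching approach, assuming Java 8+ for java.time (the 03:00 local run time, class name and task body are hypothetical):
import java.time.Duration;
import java.time.ZonedDateTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DailyTask {
    private static final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();

    static void scheduleNextRun() {
        ZonedDateTime now = ZonedDateTime.now();
        ZonedDateTime next = now.toLocalDate().plusDays(1)
                .atTime(3, 0)                        // desired local run time (hypothetical)
                .atZone(now.getZone());              // the zone rules handle DST transitions
        long delayMs = Duration.between(now, next).toMillis();
        executor.schedule(DailyTask::runTask, delayMs, TimeUnit.MILLISECONDS);
    }

    static void runTask() {
        try {
            // ... the actual daily work goes here ...
        } catch (Throwable t) {
            t.printStackTrace();                     // don't let the task die silently
        } finally {
            scheduleNextRun();                       // reschedule from the wall clock every time
        }
    }

    public static void main(String[] args) {
        scheduleNextRun();
    }
}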
What I do is cheat a little: I have a task which wakes every 1-10 seconds, checks whether it needs to run, and returns if not. The overhead is usually trivial if you don't have thousands of tasks, and it's much simpler to implement.

Java timing: System.nanoTime() better than System.currentTimeMillis(), but does it persist over sleep?

I am trying to implement a timer, it may be used for short (seconds) events, or longer (hours, etc) events.
Ideally it should persist over periods when the CPU is off, for example when the battery has died. If I set the start time and end time using System.currentTimeMillis(), it works in almost all cases, except around things like leap seconds, leap years, daylight saving time changes, etc., or if the user just changes the time (I've verified this). This is on an Android system, btw.
Instead, if I used System.nanoTime(), in addition to potentially being more accurate, it won't have the usual "hard time" issues with time changes, etc. My question is: does System.nanoTime() measure nanoseconds from some arbitrary origin in "hard time"? I'm not sure what the proper term is, but for example: if I call System.nanoTime() at time X, the system is shut off at X+1 hour (dead battery on an Android device, for example), and then started again at X+10 hours, will System.nanoTime() at that point show 10 hours elapsed? Or will it show 1 hour (since the counter nanoTime uses may not be running while the system is off/asleep)?
android.os.SystemClock.elapsedRealtime() - milliseconds since the system was booted, including time spent in sleep state. This should be your best bet.
I don't think you can measure the switched-off time in Android.
For more info, check the Android SystemClock page: http://developer.android.com/reference/android/os/SystemClock.html
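A hedged sketch of that approach (the class name is illustrative); SystemClock.elapsedRealtime() keeps counting through device sleep but resets on reboot, so it cannot account for time while the device is fully powered off:
import android.os.SystemClock;

public class RealtimeStopwatch {
    private final long startMs = SystemClock.elapsedRealtime();

    // Milliseconds since this stopwatch was created, including time the device spent asleep.
    public long elapsedMillis() {
        return SystemClock.elapsedRealtime() - startMs;
    }
}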
It is undefined:
"The value returned represents nanoseconds since some fixed but
arbitrary origin time (perhaps in the future, so values may be
negative). The same origin is used by all invocations of this method
in an instance of a Java virtual machine; other virtual machine
instances are likely to use a different origin."
For simplicity, we'll say when you run it at time X, the origin is X (this is allowed). That means it will return 0 then, and within the VM instance, time will then elapse at the same rate as a normal clock.
When you use Thread.sleep, that doesn't change the VM instance, so it isn't treated specially.
However, after the device is rebooted, you're in a different VM instance. Therefore, X is no longer guaranteed to be the origin.

Big difference in timestamps when running same application multiple times on an emulator

In an Android application I am trying to do a modular exponentiation operation and want to measure the time taken by it. So I have created two timestamps, one just before the operation and the other just after it:
Calendar calendar0 = Calendar.getInstance();
java.util.Date now0 = calendar0.getTime();
java.sql.Timestamp currentTimestamp0 = new java.sql.Timestamp(now0.getTime());
BigInteger en = big.modPow(e, n);
Calendar calendar1 = Calendar.getInstance();
java.util.Date now1 = calendar1.getTime();
java.sql.Timestamp currentTimestamp1 = new java.sql.Timestamp(now1.getTime());
The difference in time reported by these two timestamps varies over a large range for the same inputs when I run the application multiple times. It gives times in the range of 6ns-200ns.
Can someone point out the reason for such results, or something that I am doing wrong?
Well for one thing, you're going about your timing in a very convoluted way. Here's something which gives just as much accuracy, but rather simpler:
long start = System.currentTimeMillis();
BigInteger en = big.modPow(e, n);
long end = System.currentTimeMillis();
Note that java.util.Date only has accuracy to the nearest millisecond, and using that same value and putting it in a java.sql.Timestamp doesn't magically make it more accurate. So any result under a millisecond (6ns-200ns) is obviously spurious - basically anything under a millisecond is 0.
You may be able to get a more accurate reading using System.nanoTime - I don't know whether that's supported on Android. There may be alternative high-precision timers available, too.
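Assuming System.nanoTime is available on the target platform, a sketch of the same measurement (reusing big, e and n from the question's code) would be:
long startNanos = System.nanoTime();
BigInteger en = big.modPow(e, n);
long elapsedNanos = System.nanoTime() - startNanos;   // elapsed wall-clock time, in nanoseconds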
Now as for why operations may actually take very different amounts of recorded time:
The granularity of the system clock used above may well be considerably coarser than 1ms. For example, if it measures nothing smaller than 15ms, you could see some operations supposedly taking 0ms and some taking 15ms, even though they actually take the same amount of time.
There can easily be other factors involved, the most obvious being garbage collection
Do more calculations in the trial so that your expected time is several milliseconds.
Consider doing your output to logcat, recording that on the PC and then processing the logcat to only select trials with no intervening platform messages about garbage collection or routine background operations.
A large part of the reason for the variation is that your application is being run by a virtual machine running under a general-purpose operating system on an emulated computer which is running under a general-purpose operating system on a real computer.
With the exception of your application, every one of the things in that pile (the JVM running your app, Android's Linux OS, the emulator and whatever OS runs it) may use the CPU (real or virtual) to do something else at any time for any duration. Those cycles will take away from your program's execution and add to its wall-clock execution time. The result is completely nondeterministic behavior.
If you count emulator CPU cycles consumed and discount whatever Java does behind the scenes, I have no doubt that the standard deviation in execution times would be a lot lower than what you're seeing for the same inputs. The emulated Android environment isn't the place to be benchmarking algorithms because there are too many variables you can't control.
You can't draw any conclusions from running a single function once. There's so much more going on: you have no control over the underlying Linux operating system taking up cycles, let alone the Java virtual machine, plus all the other programs running. Try running it 1,000 or 10,000 times.
