Java stopwatch program - using System.nanoTime() and TimerTask together

I am writing a mini program in Java to use as a stopwatch, but I am not sure if I am using the right methods in terms of efficiency and accuracy.
From what I have read on Stack Overflow, it appears that System.nanoTime() is the best method to use when measuring elapsed time. Is that right? To what extent is it accurate - to the nearest nanosecond, microsecond, millisecond, etc.?
Also, while my stopwatch is running, I would like it to display the current elapsed time every second. To do this I plan to use a TimerTask and schedule it to report the time (converted to seconds) every second.
Is this the best way? Will this have any effect on the accuracy?
Lastly, with my current design, will this use up much of a computer's resources, e.g. processing time?
PS: Sorry, I can't share much code right now because I've just started designing it. I just did not want to waste time on a design that would be inefficient and make an inaccurate timer.

Yes, you can use java.util.Timer with a TimerTask that runs periodically and updates your view every second. However, I do not think you have to deal with nanoseconds when you actually need a resolution of seconds only. Use the regular System.currentTimeMillis().
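A minimal sketch of that approach (the class and variable names here are illustrative, not from the original question):
import java.util.Timer;
import java.util.TimerTask;

public class StopWatchDemo {
    public static void main(String[] args) {
        final long start = System.currentTimeMillis();
        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new TimerTask() {
            public void run() {
                // derive the display from the start timestamp instead of counting ticks,
                // so scheduling jitter cannot accumulate into the reported value
                long elapsedSeconds = (System.currentTimeMillis() - start) / 1000;
                System.out.println(elapsedSeconds + " s");
            }
        }, 0, 1000); // fire immediately, then every 1000 ms
    }
}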

Related

JavaFX - Using time as a variable for a line chart

I am essentially trying to create a visual representation of the memory being used by my program. I have created a LineChart and set the Y-axis to be memory used and the X-axis to be time. My question is: what is the best way to set up a timer so that incoming data about memory usage can be paired with the current time?
By this I mean I want to start a timer when the window displays and have it continue to count up (possibly with millisecond precision), so I can say that after the program has been running for this long, this is the amount of memory used.
What would be the best resources to use for this task?
The best bet would probably be just to use System.currentTimeMillis(): save it to a variable when you start the count, then call it again later and compare the saved value with the new one to get your elapsed time.
So:
long startTime = System.currentTimeMillis();
// ... do whatever stuff you want to time ...
long timeElapsed = System.currentTimeMillis() - startTime;
One thing to keep in mind with this, though, is that the granularity of currentTimeMillis() is platform dependent. On Unix-based systems you get a minimum granularity of about 1 ms; on Windows it has historically been coarser (commonly around 10-16 ms). So if you need time steps more accurate than that, you might need a different method.
You can use a StopWatch class to measure the time. Please go through the following link:
https://stackoverflow.com/a/8255766/1759128
There are many alternatives in the different answers to that question. You can use any of them!
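If you do not want a library dependency, a hand-rolled version is only a few lines; this is a minimal sketch, not the API of any particular StopWatch class:
public class StopWatch {
    private long startNanos;

    public void start() {
        startNanos = System.nanoTime(); // monotonic, immune to wall-clock changes
    }

    public long elapsedMillis() {
        return (System.nanoTime() - startNanos) / 1_000_000;
    }
}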

Accurately counting elapsed seconds

Consider the following code snippets:
class time implements Runnable {
    long t = 0L;
    public void run() {
        try {
            while (true) {
                Thread.sleep(1000);
                t++; // show the time
            }
        } catch (Throwable e) { }
    }
}

//// C equivalent (pthreads):
long long t = 0L;
void* time(void* a) { // pthread thread start routine
    sleep(1);
    t++; // show the time
    return NULL;
}
I read in some tutorial that in Java Thread.sleep(1000) is not exactly 1 second; it might be more if the system is busy at the time, because the OS then switches back to the thread late.
Questions:
Is this true at all or not?
Is this scenario the same for native (C/C++) code?
What is an accurate way to count up the seconds in an application?
Others have answered about the accuracy of timing. Unfortunately, there is no GUARANTEED way to sleep for X amount of time, and wake up at exactly X.00000 seconds (or milliseconds, nanoseconds, etc).
For displaying time in seconds, you can simply lower the period you wait to, say, half a second. Then the display won't occasionally jump by two seconds, because half a second isn't going to be extended to more than a full second (unless the OS and system you are running on are so absolutely overloaded that nothing runs when it should, in which case you should fix that problem, e.g. get a faster processor or more memory, rather than fiddle with the timing of your application). This works well for relatively long periods, such as one second or 1/10th of a second. For higher precision it won't really work, since we're now entering the "scheduling jitter" zone.
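As a rough sketch of that idea (the names and the half-second period are illustrative), poll twice per second but always derive the displayed value from a start timestamp rather than by counting ticks:
import java.util.Timer;
import java.util.TimerTask;

public class HalfSecondDisplay {
    public static void main(String[] args) {
        final long startNanos = System.nanoTime();
        new Timer().scheduleAtFixedRate(new TimerTask() {
            long lastShown = -1;
            public void run() {
                long seconds = (System.nanoTime() - startNanos) / 1_000_000_000L;
                if (seconds != lastShown) { // repaint only when the second changes
                    lastShown = seconds;
                    System.out.println(seconds + " s");
                }
            }
        }, 0, 500); // poll every half second
    }
}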
If you want very accurate timing, then you will probably need to use a real-time OS, or at least an OS with "real-time extensions" enabled, which allow the OS to be more strict about time (at the cost of ease of use for the programmer, and possibly also the OS being less efficient in its handling of processes, because it "switches more often than it needs to" compared to a lazier timing approach).
Note also that the "may take longer" on an idle system is mainly the rounding up of the timer: if the system tick happens every 10 ms or 1 ms, the timer is set to 1000 ms plus whatever is left of the current timer tick, so it may be 1009.999 ms or 1000.75 ms, for example. The other overhead, which comes from scheduling and general OS bookkeeping, should be in the microsecond if not nanosecond range on any modern system. After all, an OS can do quite a lot of work in a microsecond: a modern x86 CPU can execute about 3 instructions per clock cycle, and at around a 3 GHz clock a cycle takes about 0.3 ns, giving roughly 10 instructions per nanosecond (of course, cache misses and the like will worsen this dramatically). If the OS needs more than a few thousand instructions to go from one process to another (fewer still for threads), then there's something quite wrong. A few thousand instructions at 10 instructions per nanosecond is some hundreds of nanoseconds, definitely less than a microsecond. Compare that to the 1 ms or 10 ms "jitter" of starting the timer just after the last timer tick.
Naturally, if the CPU is busy running other tasks, this is different - then the time "left to run" on other processes will also influence the time taken to wake up a process.
Of course, on a system under heavy memory load, the "just woken up" process may not be ready to run; it could have been swapped out to disk, for example, in which case tens if not hundreds of milliseconds are needed to load it back from disk.
To answer the first two questions: yes, it's true, and yes.
First there is the time between when the timeout expires and when the OS notices it, then there is the time for the OS to reschedule your process, and lastly there's the time from when the process has been "woken up" until it is its turn to run. How long will all this take? There's no way of saying.
And as it's all done on the OS level, it doesn't really matter what language you program in.
As for a more accurate way? There is none. You can use more high-precision timers, but there is no way of avoiding the lag described above.
Yes, it's true that it is not accurate.
It's the same for simple sleep functions in C/C++ and pretty much everything else.
Depending on your system, there could be better functions available,
but:
What is the accurate way
A really accurate way does not exist,
unless you have some really expensive special computer with an atomic clock included
(and no ordinary OS either; and even then, we could argue about what "accurate" means).
If busy waiting (high CPU load) is acceptable, look at nanoTime or native usleep, HighPerformanceCounter, or whatever is applicable for your system.
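For completeness, here is a minimal sketch of such a busy wait built on System.nanoTime() (an illustration of the trade-off, not production code; it burns a full core while waiting):
public class BusyWait {
    // spin until the requested number of nanoseconds has elapsed
    static void busySleepNanos(long nanos) {
        long deadline = System.nanoTime() + nanos;
        while (System.nanoTime() < deadline) {
            // busy loop: trades CPU load for lower wake-up jitter
        }
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        busySleepNanos(1_000_000_000L); // about 1 second
        System.out.println("waited " + (System.nanoTime() - t0) / 1_000_000 + " ms");
    }
}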
The sleep call tells the system to stop the thread's execution for at least the time period specified as the argument. The system will then resume thread execution when it has a chance (it actually depends on many factors, such as hardware, thread priorities, etc.). To more or less accurately measure the time, you can store the time at the beginning of execution and then calculate the time delta each time it's needed.
The sleep function is not accurate, but if the intent is to display the total number of seconds, then you should store the current time at the beginning and then display the time difference every now and then.
This is true. Every sleep implementation in any language (C too) will fail to wait exactly 1 second. It has to go through your OS scheduler, so the sleep duration is just a hint: the minimum sleep duration, to be precise. The actual difference depends on countless factors.
Trying to figure out the deviation is tricky if you want a very high resolution clock. In most cases you'll be off by about 1-5 ms (roughly).
The thing is that the order of magnitude of the error is the same whatever the sleep duration. If you want something "accurate", you can amortize it: do the work many times and divide the total elapsed time by the count. For example, when you benchmark, you will prefer this kind of implementation because the delta time increases, decreasing the relative uncertainty:
long t0 = System.nanoTime();        // get t0
for (int i = 0; i < n; i++) {
    // process once
}
long t1 = System.nanoTime();        // get t1
long averageNanos = (t1 - t0) / n;  // compute average time: (t1-t0)/n

Measuring method time

I want to optimize a method so it runs in less time. I was using System.currentTimeMillis() to calculate how long it took.
However, I just read the System.currentTimeMillis() Javadoc and it says this:
This method shouldn't be used for measuring timeouts or other elapsed
time measurements, as changing the system time can affect the results.
So, if I shouldn't use it to measure the elapsed time, how should I measure it?
Android's native Traceview will help you measure the time and will also give you more information.
Using it is as simple as:
// start tracing to "/sdcard/calc.trace"
Debug.startMethodTracing("calc");
// ...
// stop tracing
Debug.stopMethodTracing();
There is a post with more information on the Android Developers Blog.
Also take @Rajesh J Advani's post into account.
There are a few issues with System.currentTimeMillis():
1. If you are not in control of the system clock, you may be reading the elapsed time wrong.
2. For server code or other long-running Java programs, your code is likely going to be called over a few thousand iterations. By the end of this time, the JVM will have optimized the bytecode to the extent that the time taken is actually a lot less than what you measured during your testing.
3. It doesn't take into account the fact that there might be other processes on your computer, or other threads in the JVM, competing for CPU time.
You can still use the method, but you need to keep the above points in mind. As others have mentioned, a profiler is a much better way of measuring system performance.
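To illustrate the JIT warm-up point above, here is a hypothetical microbenchmark skeleton (the helper name and the idea of fixed warm-up counts are made up for illustration; a profiler or a dedicated benchmarking harness does this more rigorously):
static long averageNanosPerCall(Runnable task, int warmup, int iterations) {
    for (int i = 0; i < warmup; i++) {
        task.run(); // give the JIT a chance to optimize before timing
    }
    long t0 = System.nanoTime();
    for (int i = 0; i < iterations; i++) {
        task.run();
    }
    return (System.nanoTime() - t0) / iterations; // average nanos per call
}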
Welcome to the world of benchmarking.
As others point out, techniques based on timing methods like currentTimeMillis will only tell you elapsed (wall-clock) time, rather than how long the method spent on the CPU.
I'm not aware of a way in Java to isolate the timing of a method to how long it spent on the CPU, so the options are:
1) If the method is long-running (or you run it many times, applying sensible benchmarking rules), use something like the "time" tool on Linux (http://linux.die.net/man/1/time), which will tell you how long the app spent on the CPU (obviously you have to subtract the overhead of application startup, etc.).
2) Use a profiler, as others pointed out. This has dangers such as adding too much overhead when using tracing techniques; if it uses stack sampling, it won't be 100% accurate.
3) I am not sure how feasible this is on Android, but you could get your benchmark running on a quiet multicore system and isolate a core (or ideally a whole socket) so that it can only run your application.
You can use System.nanoTime(), as documented here:
http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/System.html#nanoTime()
As the documentation says:
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.
Hope this will help.
SystemClock.elapsedRealtime()
Quoting the linked page: elapsedRealtime() and elapsedRealtimeNanos() return the time since the system was booted, and include deep sleep. This clock is guaranteed to be monotonic, and continues to tick even when the CPU is in power saving modes, so it is the recommended basis for general-purpose interval timing.
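A minimal usage sketch on Android (assuming android.os.SystemClock is available; the wrapper class name is illustrative):
import android.os.SystemClock;

public class IntervalTimer {
    private long startMillis;

    public void begin() {
        startMillis = SystemClock.elapsedRealtime(); // monotonic, includes deep sleep
    }

    public long elapsedMillis() {
        return SystemClock.elapsedRealtime() - startMillis;
    }
}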

java.util.Timer - inconsistent speed between events?

I am using java.util.Timer for a game that I am programming, to increment the location of a JLabel, but it usually moves much slower than I need it to. Sometimes it runs at the correct speed, but then for no reason the next time I execute the program it will be slow again. I used the following code for the timer.
java.util.Timer bulletTimer = new java.util.Timer();
bulletTimer.schedule(new bulletTimerTask(), 0, 2);
I also tried javax.swing.Timer and had the same problem. Any help would be appreciated.
Edit: it works fine with another timer where I set the delay to 2000 ms.
Since you are moving a JLabel, I would actually continue to use (a single) swing.Timer. The reason for this is that the callback will always happen "on the EDT" and thus it is okay to access Swing components. (If you are using util.Timer then the update should be posted/queued to the EDT, but this is a little more involved.)
Now, bear in mind that util.Timer and swing.Timer do not have guaranteed timings (other than "will be at least X long") and, to this end, it is important to account for the "time delta" (how long since it was since the last time the update occurred).
This is discussed in the article Fix Your Timestep! While the article was written about a simple game-loop and not a timer, the same concept applies. To get a consistent update pattern for a fixed velocity (no acceleration), simply use:
distance_for_dt = speed * delta_time
new_position = old_position + distance_for_dt
This will account for various fluctuations on a given system -- different system load, process contention, CPU power throttle, moon phase, etc. -- as well as make the speed consistent across different computers.
Once you are familiar with the basic position update, more "advanced" discrete formulas can be used for even more accurate positioning, including those that take acceleration into account.
Happy coding.
As BizzyDizzy pointed out, System.nanoTime can be used to compute the time delta. (There are a few subtle issues with System.currentTimeMillis and clock changes.)
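A rough sketch of that delta-time update with a swing.Timer (the field names and the pixels-per-second speed are illustrative):
import javax.swing.JLabel;
import javax.swing.Timer;

public class BulletMover {
    private double position = 0;               // pixels
    private static final double SPEED = 120.0; // pixels per second
    private long lastNanos = System.nanoTime();

    public void start(final JLabel bullet) {
        // aim for ~60 updates per second; the real period will jitter, so measure it
        new Timer(16, e -> {
            long now = System.nanoTime();
            double deltaSeconds = (now - lastNanos) / 1_000_000_000.0;
            lastNanos = now;
            position += SPEED * deltaSeconds;  // distance_for_dt = speed * delta_time
            bullet.setLocation((int) position, bullet.getY());
        }).start();
    }
}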
You can use System.nanoTime(), which came with Java 5 and is the highest-resolution timer in Java. It provides a value in nanoseconds.

How to get time delta independently from system time?

I'm writing a simple timer in Java. It has Start and Stop buttons, along with a text view showing how much time has passed since the timer was started.
It's implemented by storing the initial time from System.currentTimeMillis() and updating the current value each second in a loop.
The problem is that if I change the system time while the timer is running, the whole measurement fails. E.g., if I set the time back one month, the timer shows a negative value, since currentTimeMillis() now returns a smaller value than the initial one.
So, how do I calculate time delta which would be independent from the system time? It would be also great to make this solution cross-platform.
Use:
System.nanoTime()
This should do it. It doesn't take the system time into account and can only be used to measure elapsed time, which is what you want. You need to divide by 1 million to get the elapsed milliseconds. See also the Javadocs.
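A minimal sketch of a timer loop that survives system-time changes (assuming a once-per-second console update; the names are illustrative):
public class MonotonicStopwatch {
    public static void main(String[] args) throws InterruptedException {
        long startNanos = System.nanoTime(); // unaffected by wall-clock adjustments
        while (true) {
            long elapsedMillis = (System.nanoTime() - startNanos) / 1_000_000;
            System.out.println(elapsedMillis / 1000 + " s elapsed");
            Thread.sleep(1000);
        }
    }
}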
System.nanoTime();
From Javadoc:
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.
Use a time web service, i.e. obtain the current time from an external source over the network.
You can install a daemon like NTP; the system time jumping in either direction has a lot of issues and can lead to quite a lot of other problems.
System.nanoTime() does not necessarily depend on the system clock (but it can), so make sure the system time is progressing correctly.
Modifying the system time is a privileged operation, so if someone does that, they should know better.
Here is a bug report, 13 years of age, about the same case: http://bugs.sun.com/view_bug.do?bug_id=4290274
HTH
