Run code every X seconds (Java)

This is not strictly necessary, I am just curious to see what others think. I know it is useless, it's just for fun.
I already know how to do this, and it's fairly simple. I am just trying to figure out a way to do it differently that doesn't require creating new variables that crowd up my class.
Here's how I would do it:
float timePassed = 0f;

public void update() {
    timePassed += deltaTime; // deltaTime is the time elapsed from one update to the next, in seconds (a float)
    if (timePassed >= 5f) {
        // code to be run every 5 seconds
        timePassed -= 5f;
    }
}
What I want to know is if there is a way to do this without the time passed variable. I have a statetime (time since loop started) variable that I use for other things that could be used for this.

If the goal really is to run code every X seconds, my first choice would be to use a java.util.Timer. Another option is to use a ScheduledExecutorService, which adds a couple of enhancements over java.util.Timer (better exception handling, for one).
I tend to avoid javax.swing.Timer, as I prefer to leave the EDT (Event Dispatch Thread) uncluttered.
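For illustration, a minimal ScheduledExecutorService sketch (the printed message and the 5-second period are placeholders, not from the question):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicTask {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Fires every 5 seconds after an initial 5-second delay.
        // Note: if the task throws, subsequent runs are suppressed, so catch exceptions inside the task.
        scheduler.scheduleAtFixedRate(() -> System.out.println("tick"), 5, 5, TimeUnit.SECONDS);
    }
}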
Many people write a "game loop" which is closer to what you have started. A search on "game loop" will probably get you several variants, depending on whether you wish to keep a steady rate or not.
Sometimes, in situations where one doesn't want to continually test and reset, one can combine the two functions via an "AND" operation. For example, if you AND an incrementing integer with 63, the result cycles through the range 0-63. This works well for ranges that are a power of 2.
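A small sketch of that trick (the counter name is hypothetical, and this fires every 64th update rather than on a wall-clock interval):

int updateCounter = 0;

public void update() {
    // (updateCounter & 63) cycles through 0-63, hitting 0 once every 64 updates
    if ((updateCounter & 63) == 0) {
        // code to run every 64th update
    }
    updateCounter++; // integer overflow wraps around harmlessly here
}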
Depending on the structure of your calling code, you might pass in the "statetime" variable as a parameter and test if it is larger than your desired X. If you did this, I assume that a step in the called code will reset "statetime" to zero.
Another idea is to pass in a "startTime" to the update method. Then, your timer will test the difference between currentTimeMillis and startTime to see if X seconds has elapsed or not. Again, the code you call should probably set a new "startTime" as part of the process. The nice thing about this method is that there is no need to increment elapsed time.
As long as I am churning out ideas: one could also create a future "targetTime" variable and test whether currentTimeMillis() - targetTime > 0.
startTime or targetTime can be immutable, which often provides a slight plus, depending on how they are used.
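A minimal sketch of that last variant, assuming a 5-second interval like the question's example (the field and method names are mine):

long targetTime = System.currentTimeMillis() + 5000; // first firing, 5 seconds from now

public void update() {
    if (System.currentTimeMillis() - targetTime > 0) { // the difference test from above
        // code to be run every 5 seconds
        targetTime += 5000; // schedule the next firing; no accumulator to increment
    }
}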

Related

Safe time origin for System.nanoTime()

I am using a variable time_of_last_call as my origin of time; since nanoTime() might give negative values, I can’t use 0 as my origin of time to initialize time_of_last_call. If I initialize time_of_last_call with Long.MIN_VALUE I can have overflow issues. Any suggestions?
EDIT: I think I should just initialize it as:
long time_of_last_call = System.nanoTime() / 1000000L;
// long time_of_last_call = Long.MIN_VALUE;           // could have overflow issues
// long time_of_last_call = Long.MIN_VALUE / 1000000; // could still have overflow issues?

// BEGIN OF CODE WITH A LOOP
// "time_of_last_call" may (or may not) be updated:
if (some_condition)
    time_of_last_call = System.nanoTime() / 1000000L;
if ((System.nanoTime() / 1000000L - time_of_last_call) > 10) {
    // do something
}
// END OF CODE WITH A LOOP
As far as I know, there is no "origin" for nano time. You should only use it to find a difference between two instants in time, in which case the "origin" is irrelevant.
In addition, nanoTime() values are only meaningful within a single JVM instance; they are not guaranteed to be consistent across VM restarts. Therefore, if you persist this value, you can run into some subtle bugs.
From the naming of your variables it looks like all you want to do is to track call times. You most probably don't need the precision and resolution of nano time for this use case, so you'll probably be better off using System.currentTimeMillis() instead.
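A minimal sketch of that advice applied to the question's loop (the 10 ms threshold is carried over from the question; resetting on each firing is an assumption):

long time_of_last_call = System.currentTimeMillis(); // initialize to "now", not to a sentinel

void poll() {
    if (System.currentTimeMillis() - time_of_last_call > 10) {
        // do something
        time_of_last_call = System.currentTimeMillis();
    }
}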

Java Micro-optimization: To cache or not to cache a System.currentTimeMillis() return value?

Simple question, which I've been wondering. Of the following two versions of code, which is better optimized? Assume that the time value resulting from the System.currentTimeMillis() call only needs to be pretty accurate, so caching should only be considered from a performance point of view.
This (with value caching):
long time = System.currentTimeMillis();
for (long timestamp : times) {
    if (time - timestamp > 600000L) {
        // Do something
    }
}
Or this (no caching):
for (long timestamp : times) {
    if (System.currentTimeMillis() - timestamp > 600000L) {
        // Do something
    }
}
I'm assuming System.currentTimeMillis() is already a very optimized and lightweight method call, but let's assume I'll be calling it many, many times in a short period.
How many values must the "times" collection/array contain to justify caching the return value of System.currentTimeMillis() in its own variable?
Is this better to do from a CPU or memory optimization point of view?
A long is basically free. A JVM with a JIT compiler can keep it in a register, and since it's loop-invariant it can even optimize your loop condition to -timestamp > 600000L - time or timestamp < time - 600000L. i.e. the loop condition becomes a trivial compare between the iterator and a loop-invariant constant in a register.
So yes it's obviously more efficient to hoist a function call out of a loop and keep the result in a variable, especially when the optimizer can't do that for you, and especially when the result is a primitive type, not an Object.
Assuming your code is running on a JVM that JITs x86 machine code, System.currentTimeMillis() will probably include at least an rdtsc instruction and some scaling of that result (see footnote 1 below). So the cheapest it can possibly be (on Skylake for example) is a micro-coded 20-uop instruction with a throughput of one per 25 clock cycles (http://agner.org/optimize/).
If your // Do something is simple, like just a few memory accesses that usually hit in cache, or some simpler calculation, or anything else that out-of-order execution can do a good job with, that could be most of the cost of your loop. Unless each loop iteration typically takes multiple microseconds (i.e. time for thousands of instructions on a 4GHz superscalar CPU), hoisting System.currentTimeMillis() out of the loop can probably make a measurable difference. Small vs. huge will depend on how simple your loop body is.
If you can prove that hoisting it out of your loop won't cause correctness problems, then go for it.
Even with it inside your loop, your thread could still sleep for an unbounded length of time between calling it and doing the work for that iteration. But hoisting it out of the loop makes it more likely that you could actually observe this kind of effect in practice; running more iterations "too late".
Footnote 1: On modern x86, the time-stamp counter runs at a fixed rate, so it's useful as a low-overhead timesource, and less useful for cycle-accurate micro-benchmarking. (Use performance counters for that, or disable turbo / power saving so core clock = reference clock.)
IDK if a JVM would actually go to the trouble of implementing its own time function, though. It might just use an OS-provided time function. On Linux, gettimeofday and clock_gettime are implemented in user-space (with code + scale factor data exported by the kernel into user-space memory, in the VDSO region). So glibc's wrapper just calls that, instead of making an actual syscall.
So clock_gettime can be very cheap compared to an actual system call that switches to kernel mode and back. That can take at least 1800 clock cycles on Skylake, on a kernel with Spectre + Meltdown mitigation enabled.
So yes, it's hopefully safe to assume System.currentTimeMillis() is "very optimized and lightweight", but even rdtsc itself is expensive compared to some loop bodies.
In your case, method calls should always be hoisted out of loops.
System.currentTimeMillis() simply reads a value from OS memory, so it is very cheap (a few nanoseconds), as opposed to System.nanoTime(), which involves a call to hardware, and therefore can be orders of magnitude slower.

I am using a lot of variables to count time in my game, is it OK?

In my game I have timer variables for everything that happens, for example a timer counting the seconds until I create and deploy an enemy, and a timer for each enemy to shoot. My point here is that I am using a lot of variables of type long.
long timeToEnemyShoot = System.nanoTime();

while (true) {
    update();
}

public void update() {
    // fire once at least 1000 ms have elapsed since the last shot
    if ((System.nanoTime() - timeToEnemyShoot) / 1000000 >= 1000) {
        enemy.shoot();
        timeToEnemyShoot = System.nanoTime(); // reset so the next shot waits another second
    }
}
And just imagine that there are more than 15 variables like that! I think this is not a good way to manage time.
So, is there a more efficient way?
I think it's generally OK. Alternatively, you could keep a single "milestone" long variable, and specify each exact time as the milestone plus a smaller, less memory-consuming offset. The risks are added complexity (and therefore bugs) and higher computing requirements.
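A rough sketch of that milestone idea under my own naming (the offsets and the elapsed() helper are assumptions, not from the answer):

long milestone = System.nanoTime(); // single shared time origin for all timers

int shootOffsetMs = 1000; // small int offsets instead of one long per event
int spawnOffsetMs = 5000;

boolean elapsed(int offsetMs) {
    return (System.nanoTime() - milestone) / 1000000 >= offsetMs;
}

public void update() {
    if (elapsed(shootOffsetMs)) {
        enemy.shoot();
        shootOffsetMs += 1000; // re-arm relative to the same milestone
    }
    if (elapsed(spawnOffsetMs)) {
        // deploy a new enemy
        spawnOffsetMs += 5000;
    }
}

Note that the int offsets overflow after roughly 24 days of milliseconds, which is part of the complexity risk mentioned above.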
When you declare a variable, memory is allocated for it.
One thing you can do to save yourself a lot of pain in the long run is to make the variables private.
In some cases the type of access will affect performance.
I hope this helps.

Time taken to execute a java method is zero?

I am reading the system time just before the method is invoked and immediately after the method returns, and taking the difference, which should give the time taken by the method to execute.
Code snippet
long start = System.currentTimeMillis();
method();
long end = System.currentTimeMillis();
System.out.println("Time taken for execution is " + (end - start));
The strange thing is that the output is 0. How is this possible?
Chances are it's taking a shorter time than the fairly coarse-grained system clock. (For example, you may find that System.currentTimeMillis() only changes every 10 or 15 milliseconds.)
System.currentTimeMillis is good for finding out the current time, but it's not fine-grained enough for measuring short durations. Instead, you should use System.nanoTime() which uses a high-resolution timer. nanoTime() is not suitable for finding the current time - but it's designed for measuring durations.
Think of it as being the difference between a wall clock and a stopwatch.
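For example, a minimal sketch of the stopwatch approach (method() stands in for whatever is being timed):

long start = System.nanoTime();
method();
long elapsed = System.nanoTime() - start;
System.out.println("Time taken: " + elapsed + " ns (" + (elapsed / 1000000.0) + " ms)");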
use nanoTime()
Because it took less than 1 millisecond?
If you want to get a more meaningful metric, I would suggest calling your method in a loop 1000000 times, timing that, and then dividing by 1000000.
Of course, even then, that might not be representative; the effects on the cache will be different, etc.
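A sketch of that looped measurement, assuming method() is cheap enough to call a million times:

int iterations = 1000000;
long start = System.nanoTime();
for (int i = 0; i < iterations; i++) {
    method(); // a JIT may optimize away a side-effect-free method, so treat this as a rough estimate
}
long total = System.nanoTime() - start;
System.out.println("Average per call: " + (total / iterations) + " ns");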

System.nanoTime() running slow?

One of my friends showed me something he had done, and I was at a serious loss to explain how it could have happened: he was using System.nanoTime() to time something, and it gave the user an update every second to tell how much time had elapsed (it used Thread.sleep(1000) for that part), and it took seemingly forever (something that was waiting for 10 seconds took roughly 3 minutes to finish). We tried using the milli time to see how much time had elapsed: printing the elapsed nanotime once a second, we saw that it was advancing by only roughly 40-50 milliseconds per second.
I checked for bugs relating to System.nanoTime and Java, but it seemed the only things I could find involved the nanotime suddenly greatly increasing and then stopping. I also browsed this blog entry based on something I read in a different question, but that didn't have anything that may cause it.
Obviously this could be worked around for this situation by just using the milli time instead; there are lots of workarounds, but what I'm curious about is whether there's anything other than a hardware issue with the system clock, or at least with whatever the most accurate clock the CPU has (since that's what System.nanoTime seems to use), that could cause it to run consistently slow like this?
long initialNano = System.nanoTime();
long initialMili = System.currentTimeMillis();
// Obviously the code isn't actually doing a while(true),
// but it illustrates the point
while (true) {
    Thread.sleep(1000);
    long currentNano = System.nanoTime();
    long currentMili = System.currentTimeMillis();
    double secondsNano = ((double) (currentNano - initialNano)) / 1000000000D;
    double secondsMili = ((double) (currentMili - initialMili)) / 1000D;
    System.out.println(secondsNano);
    System.out.println(secondsMili);
}
secondsNano will print something along the lines of 0.04, whereas secondsMili will print something very close to 1.
It looks like a bug along these lines has been reported in Sun's bug database, but it was closed as a duplicate, and the link doesn't go to an existing bug. It seems to be very system-specific, so I'm getting more and more sure this is a hardware issue.
... he was using a System.nanotime to cause the program to wait before doing something, and ...
Can you show us some code that demonstrates exactly what he was doing? Was it some strange kind of busy loop, like this:
long t = System.nanoTime() + 1000000000L;
while (System.nanoTime() < t) { /* do nothing */ }
If yes, then that's not the right way to make your program pause for a while. Use Thread.sleep(...) instead to make the program wait for a specified number of milliseconds.
You do realise that the loop you are using doesn't take exactly 1 second to run? Firstly Thread.sleep() isn't guaranteed to be accurate, and the rest of the code in the loop does take some time to execute (Both nanoTime() and currentTimeMillis() actually can be quite slow depending on the underlying implementation). Secondly, System.currentTimeMillis() is not guaranteed to be accurate either (it only updates every 50ms on some operating system and hardware combinations). You also mention it being inaccurate to 40-50ms above and then go on to say 0.004s which is actually only 4ms.
I would recommend you change your System.out.println() to be:
System.out.println(secondsNano - secondsMili);
This way, you'll be able to see how much the two clocks differ on a second-by-second basis. I left it running for about 12 hours on my laptop and it was out by 1.46 seconds (fast, not slow). This shows that there is some drift in the two clocks.
I would think that the currentTimeMillis() method provides a more accurate time over a large period of time, yet nanoTime() has a greater resolution and is good for timing code or providing sub-millisecond timing over short time periods.
I've experienced the same problem. Except in my case, it is more pronounced.
With this simple program:
public class test {
    public static void main(String[] args) {
        while (true) {
            try {
                Thread.sleep(1000);
            }
            catch (InterruptedException e) {
            }
            // OStream is the poster's own logging helper, not a standard class;
            // its output below is prefixed with a wall-clock timestamp and thread name
            OStream.out("\t" + System.currentTimeMillis() + "\t" + nanoTimeMillis());
        }
    }

    static long nanoTimeMillis() {
        return Math.round(System.nanoTime() / 1000000.0);
    }
}
I get the following results:
13:05:16:380 main: 1288199116375 61530042
13:05:16:764 main: 1288199117375 61530438
13:05:17:134 main: 1288199118375 61530808
13:05:17:510 main: 1288199119375 61531183
13:05:17:886 main: 1288199120375 61531559
The nanoTime is showing only ~400ms elapsed for each second.
