java.util.Timer - inconsistent speed between events? - java

I am using java.util.Timer for a game that I am programming, to increment the location of a JLabel, but it usually runs much slower than I need it to. Sometimes it runs at the correct speed, but then for no apparent reason the next time I execute the program it is slow again. I used the following code for the timer.
java.util.Timer bulletTimer= new java.util.Timer();
bulletTimer.schedule(new bulletTimerTask(), 0, 2);
I also tried javax.swing.Timer and had the same problem. Any help would be appreciated.
Edit: it works fine with another timer where I set the delay to 2000 ms.

Since you are moving a JLabel, I would actually continue to use (a single) swing.Timer. The reason for this is that the callback will always happen "on the EDT" and thus it is okay to access Swing components. (If you are using util.Timer then the update should be posted/queued to the EDT, but this is a little more involved.)
Now, bear in mind that util.Timer and swing.Timer do not have guaranteed timings (other than "will be at least X long") and, to this end, it is important to account for the "time delta" (how long it has been since the last update occurred).
This is discussed in the article Fix Your Timestep! While the article was written about a simple game-loop and not a timer, the same concept applies. To get a consistent update pattern for a fixed velocity (no acceleration), simply use:
distance_for_dt = speed * delta_time
new_position = old_position + distance_for_dt
This will account for various fluctuations on a given system -- different system load, process contention, CPU power throttle, moon phase, etc. -- as well as make the speed consistent across different computers.
Once you are familiar with the basic position update, more "advanced" discrete formulas can be used for even more accurate positioning, including those that take acceleration into account.
Happy coding.
As BizzyDizzy pointed out, System.nanoTime can be used to compute the time delta. (There are a few subtle issues with System.currentTimeMillis and clock changes.)
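For instance, a minimal sketch of the idea, assuming a JLabel field called bullet and a fixed horizontal speed in pixels per second (the names are illustrative, not taken from the question):

import javax.swing.JLabel;
import javax.swing.Timer;

public class BulletMover {
    private final JLabel bullet;          // the label being moved
    private final double speedPxPerSec;   // fixed velocity, pixels per second
    private double x;                     // position kept as a double to avoid rounding drift
    private long lastNanos;

    public BulletMover(JLabel bullet, double speedPxPerSec) {
        this.bullet = bullet;
        this.speedPxPerSec = speedPxPerSec;
        this.x = bullet.getX();
        this.lastNanos = System.nanoTime();

        // Fire roughly every 15 ms; the real period is not guaranteed,
        // which is exactly why the actual delta is measured below.
        new Timer(15, e -> {
            long now = System.nanoTime();
            double deltaSeconds = (now - lastNanos) / 1_000_000_000.0;
            lastNanos = now;

            // distance_for_dt = speed * delta_time
            x += speedPxPerSec * deltaSeconds;
            bullet.setLocation((int) Math.round(x), bullet.getY());
        }).start();
    }
}

Because the swing.Timer listener runs on the EDT, it is safe to call setLocation on the label directly from the callback.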

You can use System.nanoTime(), which was introduced in Java 5 and is the highest-resolution timer available in Java. It returns a value in nanoseconds.

Related

How do you program time? [closed]

This might sound like a weird question, but how can you program time without using the API of a programming language? Time is such an abstract concept; how can you write a program without using a predefined time function?
I was thinking it would have to be calculated from a count of processor operations, but if every computer performs at a different speed, how would you write code that tracks time?
Assuming the language the program is written in doesn't matter, how would you do this?
Edit: I should also say, without using the system time, or any pre-generated notion of time from the system.
Typically, time is provided to the language runtime by the OS layer. So, if you're running a C/C++ program compiled in a Windows environment, it asks the Windows OS for the time. If you're running a Java program executing in the JVM on a Linux box, the Java program gets the time from the JVM, which in turn gets it from Linux. If you're running JavaScript in a browser, the JavaScript runtime gets the time from the browser, which gets its time from the OS, and so on...
At the lower levels, I believe the OS bases its time on elapsed clock cycles at the hardware layer, compared against some root time that you set in the BIOS or the OS.
Updated with some more geek-detail:
Going even more abstract, if your computer runs at 1 GHz, that means its CPU changes "state" every one-billionth (10^-9) of a second (the period of a single transition from +voltage to -voltage and back). EVERYTHING in a computer is based on these transitions, so there are hardware timers on the motherboard that make sure these transitions happen at a consistent frequency. Since those hardware timers are so precise, they are the basis for counting time for anything tied to the calendar-time abstraction that we use.
I'm not a hardware expert, but this is my best understanding from computer architecture classes and building basic circuits in school.
Clarifying based on your edit:
A program doesn't inherently "know" anything about how slow or fast it's running, so on its own there is no way to accurately track the passage of time. Some languages can access information like "cycle count" and "processor speed" from the OS, so you could approximate some representation of the passage of time based on that without having to use a time api. But even that is sort of cheating given the constraints of your question.
Simply put, you can't. There's no pure-software way of telling time.
Every computer has a number of different hardware timers. These fire off interrupts once triggered, which is how the processor itself can keep track of time. Without these or some other external source, you cannot keep track of time.
The hardware clock in your motherboard contains a precisely tuned quartz crystal that vibrates at a frequency of 32,768 Hz [2^15] when a precise current is passed through it. The clock counts these vibrations to mark the passage of time. System calls reference the hardware clock for time, and without the hardware clock your PC wouldn't have the faintest idea if, between two arbitrary points in execution, a second, a day, or a year had passed.
This is what the system calls reference, and trying to use anything else is an exercise in futility, because everything else in the computer is designed simply to run as fast as possible based on the voltage it happens to be receiving at the time.
You could try counting CPU clock cycles, but the CPU clock is simply designed to vibrate as fast as possible based on the input voltage and can vary based on load requirements and how stable the voltage your power supply delivers is. This makes it wholly unreliable as a method to measure time because if you get a program to monitor the clock speed in real time you will notice that it usually fluctuates constantly by +/- a few MHz.
Even hardware clocks are unreliable as the voltage applied to the crystal, while tightly controlled, is still variable. If you turned off the NTP services and/or disconnected it from the internet the time would probably drift a few minutes per month or even per week. The NTP servers reference atomic clocks that measure fundamental properties of physics, namely the oscillations of cesium atoms. One second is currently defined as:
the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.
oh... and that's measured by a computer as well.
Without a clock or a reference to the OS you can't measure anything relative to the outside world. You can, of course, measure things internally, knowing that a task is 1/3 of the way done or whatever. But depending on system load, CPU throttling from thermal requirements, other programs running, etc., the last 1/3 might take as long as the first 2/3, or longer. You can apply heuristics to load-balance long-running tasks against themselves (only), so that, for instance, things stay smooth when the number of tasks relative to threads varies, in order to achieve a desired performance characteristic; but the PC has to get its time from somewhere. Really cheap clocks get their time from the fact that mains power is 60 Hz, so every 60 cycles a second goes by. But the actual frequency varies a bit, and tends to drift in a single direction, so clocks like that get out of sync pretty fast, by seconds per day or more. I suppose that with a camera and a hole in a box you could determine when the sun is at a particular position in the sky and tell time that way, but we're getting pretty far afield here.
In Java, you will notice that the easiest way to get the time is
System.currentTimeMillis();
which is implemented as
public static native long currentTimeMillis();
That is a native method, implemented in native code, C in all likelihood. Your computer's CPU has an internal clock that can adjust itself. The native call is an OS call to the hardware to retrieve that value, possibly with some software transformation along the way.
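As a quick illustration of measuring an interval with both standard calls (the 250 ms sleep is just a stand-in for real work):

public class ElapsedTimeDemo {
    public static void main(String[] args) throws InterruptedException {
        long startMillis = System.currentTimeMillis(); // wall-clock time; can jump if the clock is adjusted
        long startNanos = System.nanoTime();           // monotonic; only meaningful as a difference

        Thread.sleep(250); // stand-in for the work being timed

        System.out.println("millis elapsed: " + (System.currentTimeMillis() - startMillis));
        System.out.println("nanos elapsed:  " + (System.nanoTime() - startNanos));
    }
}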
I think you kind of answered your own question in a way: "Time is such an abstract concept." So I would argue it depends on what exactly you're trying to measure. If it's algorithmic efficiency, we don't care about a concrete measure of time, only how much longer the algorithm takes with respect to the number of inputs (big-O notation). If it's how long it takes for the Earth to spin on its axis, or some fraction of that, then obviously you need something external to the computer to tell it when one iteration started and ended; thereafter, ignoring CPU clock drift, the computer should do a good job of telling you the time of day.
It is possible, however, I suspect it would take quite some time.
You could use the probability that your computer will be hit by a cosmic ray.
Reference: Cosmic Rays: what is the probability they will affect a program?
You would need to create a program that manipulates large amounts of data in the computer's memory, thus making it susceptible to cosmic ray intrusion. Such data would become corrupted at a certain point in time.
The program should be able to check the integrity of the data and mark the moment when it becomes partially corrupted. When this happens, the program should also be able to generate another frame of reference, for example how many times a given function runs between two cosmic-ray hits.
Then, these intervals should be recorded in a database and averaged after a few billion/trillion/zillion occurrences, thereby reducing the projected randomness of a cosmic-ray hit.
From that point on, the computer would be able to tell time by an average cosmic-ray hit coefficient.
Of course, this is an oversimplified solution. I am quite certain the hardware would fail during this time, the cosmic rays could hit memory zones holding runnable code instead of raw data, the rate of hits could change due to our solar system's continuous motion through the galaxy, etc.
However, it is indeed possible...

java stop watch program - using System.nanoTime() and TimerTask together

I am writing a mini program in Java to use as a stop watch but I am not sure if I am using the right methods in terms of efficiency and accuracy.
From what I have read on Stack Overflow, it appears that System.nanoTime() is the best method to use when measuring elapsed time. Is that right? To what extent is it accurate, as in, to the nearest nanosecond, microsecond, millisecond, etc.?
Also, while my stop watch is running, I would like it to display the current time elapsed every second. To do this I plan to use a TimerTask and schedule it to report the time (converted to seconds) every second.
Is this the best way? Will this have any effect on the accuracy?
Lastly, with my current design, will this use up much of the computer's resources, e.g. processing time?
PS Sorry can't share much code right now cause I've just started designing it. I just did not want to waste time on a design that would be inefficient and make an inaccurate timer.
Yes, you can use java.util.Timer and a TimerTask that runs periodically and updates your view every second. However, I do not think you have to deal with nanoseconds when you actually only need a resolution of seconds. Use regular System.currentTimeMillis().
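A minimal sketch of that approach, printing the elapsed time once per second (the class and field names are made up for illustration):

import java.util.Timer;
import java.util.TimerTask;

public class StopWatch {
    private final Timer timer = new Timer(true); // daemon thread so it won't keep the JVM alive
    private long startMillis;

    public void start() {
        startMillis = System.currentTimeMillis();
        // Report the elapsed time once per second.
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                long elapsedSeconds = (System.currentTimeMillis() - startMillis) / 1000;
                System.out.println("Elapsed: " + elapsedSeconds + " s");
            }
        }, 0, 1000);
    }

    public void stop() {
        timer.cancel();
        System.out.println("Final: " + (System.currentTimeMillis() - startMillis) + " ms");
    }
}

If the display is a Swing component rather than the console, push the update onto the EDT (for example with SwingUtilities.invokeLater) from inside run().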

How I can use threads to generate terrain based on camera position without bursts of lag?

Here's the deal. My game generates chunks of randomly generated terrain based on the position of the camera (2D isometric). When moving from chunk to chunk, there is always a tiny burst of lag which results in a little jump because the game must wait for the terrain to generate before it moves the camera more. From what I understand of threads, they could be used to have the terrain generate while the main thread moves the camera like normal. I am just not quite sure how to do this. Maybe something like this?
Update() {
    if (cam != prevcam) {
        thread.start();
    }
}
And then after the other thread is done with the generation it suspends itself somehow and restarts next time the camera position has changed. Note that the camera position refers to the chunk that the camera is centered on. If it was centered on block 40,42 then it would be 1,1
You could use threads to solve this problem, and it would probably work, but you're going to introduce a whole set of problems that (a) might still result in lag, and (b) will be much harder to fix.
Don't get me wrong, I love threads, but I wouldn't use them to solve this particular problem, especially in a tight input/logic/render loop in most games.
The lag happens because the amount of work that needs to be done between frame renders takes long enough that you notice it as a dropped frame rate. What you really want to do is find a way to chop your terrain generation into smaller chunks/tasks that DO fit inside of a single frame render, and then spread terrain generation over as many frame renders as necessary to get it done without dropping the frame rate below what the user will notice.
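As a rough sketch of that idea, here is one way to budget generation work per frame; the 2 ms budget and the Runnable work items are my own assumptions, not something from the question:

import java.util.ArrayDeque;
import java.util.Queue;

public class IncrementalGenerator {
    // Work items small enough that each one finishes well inside a frame.
    private final Queue<Runnable> pendingWork = new ArrayDeque<>();

    // Maximum time to spend on generation per frame, in nanoseconds (assumed budget: 2 ms).
    private static final long FRAME_BUDGET_NANOS = 2_000_000;

    public void enqueue(Runnable smallTask) {
        pendingWork.add(smallTask);
    }

    /** Called once per frame from the game loop, on the main thread. */
    public void update() {
        long deadline = System.nanoTime() + FRAME_BUDGET_NANOS;
        while (!pendingWork.isEmpty() && System.nanoTime() < deadline) {
            pendingWork.poll().run();
        }
        // Anything left over simply waits for the next frame.
    }
}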
To go a little deeper, I can see why you might be tempted to use threads to solve this problem. The thing is, threads are best when you've got a job for them to do, and you either don't care so much about how long it takes for them to finish, or you really need two or more pieces of code to run simultaneously. In the case of simultaneously running code, threads aren't always an optimal solution either; sometimes spawning a separate process, and leaving multitask scheduling up to the OS (the scheduler being a piece of code that has been highly optimized by tons of professional engineers) is often a better answer.
By introducing threads into your game loop, the thread's work still needs to finish within a very small, well-defined time window, which means it needs to work in lock step with the rest of the app, which means there's no benefit to running in multiple threads because you're still acting like a single-threaded app.
The one exception to this would be if you really, truly need to take advantage of multiple cores. Just be warned that adding threads, while totally awesome, is going to add unnecessary complexity in most cases.
And after saying all of that, to answer your original question, you would use threads to do this by chopping it up into small chunks of work, and letting the thread work on those chunks as quickly as it could, without taking too much CPU time away from more time-sensitive threads, such as your rendering thread.
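If you do go the thread route, a common shape is a single background worker plus a hand-off queue that the game loop drains each frame. A sketch under those assumptions (the Chunk placeholder is mine):

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ChunkLoader {

    /** Minimal placeholder for whatever a generated chunk actually holds. */
    public static class Chunk {
        final int x, y;
        Chunk(int x, int y) { this.x = x; this.y = y; }
    }

    // A single worker so generation never blocks the render/game thread.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final ConcurrentLinkedQueue<Chunk> ready = new ConcurrentLinkedQueue<>();

    /** Called from the game loop when the camera crosses into a new chunk. */
    public void requestChunk(int chunkX, int chunkY) {
        worker.submit(() -> ready.add(generateChunk(chunkX, chunkY)));
    }

    /** Called once per frame on the main thread; integrates any finished chunks. */
    public void update() {
        Chunk chunk;
        while ((chunk = ready.poll()) != null) {
            // Hand the chunk to the world/render state here, so game state
            // is only ever touched on the main thread.
            System.out.println("Chunk ready: " + chunk.x + ", " + chunk.y);
        }
    }

    private Chunk generateChunk(int chunkX, int chunkY) {
        // Placeholder for the expensive terrain generation.
        return new Chunk(chunkX, chunkY);
    }
}

Remember to shut the worker down (worker.shutdown()) when the game exits.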
I'd like to see more games take advantage of threads. The issue is that most games are essentially simulations where almost everything is tied to a central clock (animation, sound, AI calculations, etc.), and removing that direct connection to the central clock can only be done when the user wouldn't notice it. Examples of this might include:
Pre-calculating random weather (where, only after the weather is calculated, do users see anything), and it doesn't matter (in terms of game play) when the actual weather appears
Path finding, but only if the actors in your game can sometimes take longer to calculate paths than other times.
Since threads don't start and stop execution in a known order, or within a certain time, it means the work might sometimes take 1/10th of a second, but if the CPU is busy it might take a total of 2 seconds, depending on how available the CPU is.
Anyhow - good luck with your game!

How to make the speed (frame rate of your game) the same across different PC?

In our school, it is common to build games as class projects to apply the different concepts we learn in our computer science classes. We developed our games on our own machines and everything seems to work fine: game speed is normal and so on. But when we test our games on the school's computers, or when our professor tests them on his own computer, which, let's say, is much more powerful than the machine we developed on, the speed of the game changes dramatically; in most cases the animation happens much faster than expected. So my question: how do you prevent this kind of problem in game applications? And yes, we use Java. We usually use passive rendering as the rendering technique in most of the applications we build. Thanks in advance!
You shouldn't rely on the speed of rendering for your game logic. Instead, keep track of the time spent since the last logical step in the game to the current one. Then if the time spent has exceeded a certain amount, you execute a game step (in rare cases, where the computer is so slow that two steps should have happened, you may want to come up with a smart solution for making sure the game doesn't lag behind).
This way, game logic is separate from rendering logic, and you don't have to worry about the game changing speeds depending on whether vertical sync is on or off, or if the computer is slower or faster than yours.
Some pseudo-code:
// now() would be whatever function you use to get the current time (in
// microseconds or milliseconds).
int lastStep = now();
// This would be your main loop.
while (true) {
    int curTime = now();
    // Calculate the time spent since the last step.
    int timeSinceLast = curTime - lastStep;
    // Skip the game logic if no step is due yet.
    if (timeSinceLast < TIME_PER_STEP) continue;
    // We can't assume that the loop hits the exact moment when the step
    // should occur. Most likely it has spent slightly more time, so keep the
    // leftover remainder here so the game doesn't drift out of sync.
    lastStep = curTime - (timeSinceLast % TIME_PER_STEP);
    // Move your game forward one step.
}
The best way to achieve portability would be to issue some Thread.sleep() calls with the same delay between drawing each frame. Then the results would be consistent across every system. Since you are using passive rendering, i.e. drawing your animation directly in paint(), calling Thread.sleep() from there may not be the best idea...
Maybe you could just update certain variables your animation depends on every couple of milliseconds?
You can also use javax.swing.Timer to govern the animation rate. There's a trivial example in this answer.
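A rough sketch of that approach (the panel class and the 16 ms period are my own assumptions):

import javax.swing.JPanel;
import javax.swing.Timer;

public class AnimationPanel extends JPanel {
    // Aim for roughly 60 updates per second; swing.Timer fires on the EDT,
    // so it is safe to touch Swing state from the listener.
    private final Timer animationTimer = new Timer(16, e -> {
        stepGameLogic();
        repaint();
    });

    public void startAnimation() {
        animationTimer.start();
    }

    private void stepGameLogic() {
        // Advance positions here, ideally scaled by the real elapsed time
        // as described in the answer above.
    }
}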

What should qualify as a "long running task" to be executed in a SwingWorker thread?

I know how to use SwingWorker threads, but I still don't have precise criteria to decide when to use one or not.
I/O seems obvious, but what about methods operating on potentially large collections?
A criterion could be the actual running time, but what kind of number (in ms) would qualify?
The important thing is how responsive the UI is.
Jef Raskin (of Mac UI fame) said that the maximum delay should be limited to 50 ms. The RISC OS guidelines said 100 ms. Button clicks are about 50 ms, so if you want to act on release you need to act fast, as the user's mental model is generally "click for action". Above 140 ms, not only does it seem unresponsive, but UI responses appear to be disconnected from user actions (see, for instance, O'Reilly's Mind Hacks).
250-350 ms and the (normal) user will think something has gone wrong.
On the other side of things, you need 8 fps (and that includes rendering) to sustain the illusion of animation, for instance when scrolling. And you know how gamers like their fps.
However, I prefer software that more or less works than best possible software that is not available. Having said that, having Opera lock up for a few minutes whilst it hammered the disc in the middle of this edit did not please me.
For me it would be 1 s.
If your processing takes more than that, your UI will freeze. In that situation it is much better to show a busy indicator or a progress bar.
How much time would you like to wait for ANY application you use to become responsive? Say you open your IDE, or MS Word, or any other application. If you pay attention, most of the time while the application is loading a progress bar or some other animation is shown, even when the document/project/whatever is small enough to open in 2 s.
There is no specific number; it is a matter of what the app is supposed to do and how responsive the GUI needs to be. The best approach is to do some testing; no one can answer this for you (though comments may be of great use to you in determining what testing you need to do).
A long running task would be anything long enough for the user to notice glitches or delays in redrawing the UI.
Setting the text of a label is probably not "long running", but just taking a few milliseconds to draw an image into an offscreen bitmap may delay the UI redrawing long enough to be noticeable.
Basically, if you cannot predict how long the processing will take, it is a good idea to put it in a separate thread, as this keeps your application responsive even in extreme cases with bad data etc. Good candidates are (see the sketch after this list):
Doing expensive operations on all fields in a model.
Depending on an external data source or destination. You never know if that might be a slow network drive or similar.
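As a rough sketch of the usual shape (the report-loading scenario and the names in it are invented for illustration):

import java.util.Arrays;
import java.util.List;
import javax.swing.JLabel;
import javax.swing.SwingWorker;

public class ReportLoader {

    /** Kicks the slow work off the EDT and updates the label when it finishes. */
    public static void loadReport(JLabel statusLabel) {
        statusLabel.setText("Loading...");

        new SwingWorker<List<String>, Void>() {
            @Override
            protected List<String> doInBackground() throws Exception {
                // Runs on a worker thread: the safe place for I/O or heavy computation.
                return fetchRowsSomehow();
            }

            @Override
            protected void done() {
                // Runs back on the EDT: the safe place to touch Swing components.
                try {
                    statusLabel.setText("Loaded " + get().size() + " rows");
                } catch (Exception ex) {
                    statusLabel.setText("Failed: " + ex.getMessage());
                }
            }
        }.execute();
    }

    private static List<String> fetchRowsSomehow() {
        // Placeholder for the actual slow work (database, file, network, ...).
        return Arrays.asList("row1", "row2");
    }
}

The key point is that doInBackground() runs off the EDT while done() runs back on it, so the UI stays responsive and the final update is still thread-safe.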
But why not simply make a rule: if it has a loop (any for/while), then it goes in a SwingWorker?
