So I have this custom Timeline animation. (How the node transformation itself is applied is irrelevant here.)
final KeyValue kv1 = new KeyValue(angle, HALF_PI, Interpolator.LINEAR);
final KeyValue kv2 = new KeyValue(angle, -HALF_PI, Interpolator.EASE_BOTH);
final KeyFrame kf1 = new KeyFrame(Duration.millis(0), kv1);
final KeyFrame kf2 = new KeyFrame(Duration.millis(500), kv2);
Timeline animation = new Timeline(kf1, kf2);
animation.setRate(1);
My angle variable is modified in this timeline and used in the transformation of a node. As you can see, in this case the rate is 1, and the duration is 500.
To smooth out the transformation, I'm guessing that the angle should pass through several discrete values, which are set by the Interpolator.
Also, I'm not inclined to use a Task to achieve this, unless the lagginess would be severely less than with the Timeline.
Question:
Which adjustment smooths out the animation?
A. Low rate + High duration
B. High rate + Low duration
C. ??? Other
The rate is not the frame rate. None of these settings will affect the smoothness of the animation. Using a Task won't help: the Animation API is just a high-level mechanism to update properties of your UI on each frame render, in a way that interpolates values with respect to time. If you put work in a Task, you'll have to update the properties on the FX Application Thread anyway, so you're just replicating the code that's provided for you by the Animation API.
Behind the scenes, JavaFX has a thread which renders frames to the graphical system. The target frame rate is (currently) 60fps. At each frame render, the rendering thread has to synchronize with the JavaFX Application thread, so if you're doing too much work on that thread the frame rate will be below 60fps.
I think Animations work as follows: At each frame render, any running animations will update their associated properties by interpolation along the time line between key frames. The basic algorithm is pretty obvious: look at the current time, subtract the start time, divide by the difference between the end and start times, use that value to update the property.
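As a rough sketch of that (illustrative only, not the actual JavaFX internals; the key values and times are taken from the question above):

// Illustrative per-frame update, not the real JavaFX source.
double startValue = HALF_PI, endValue = -HALF_PI;   // the two key values
long startMs = 0, endMs = 500;                      // the key frame times
long nowMs = 200;                                   // time of this frame render

double fraction = (nowMs - startMs) / (double) (endMs - startMs);
fraction = Math.max(0.0, Math.min(1.0, fraction));  // clamp to [0, 1]
// setRate(r) would effectively multiply the elapsed time by r here.

// The Interpolator maps the linear time fraction onto its own curve:
angle.set(Interpolator.EASE_BOTH.interpolate(startValue, endValue, fraction));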
All the rate does is act as a multiplier for the interpolator. So if you set rate = -1, the animation runs backwards. If you set rate = 2, it completes in half the specified duration. If you set rate = -0.5, it completes in twice the specified duration and runs backwards, etc. The frame rate is still going to be whatever the underlying FX implementation can squeeze out of the system, up to a maximum (in the current implementation) of 60 fps.
Smoothness is basically affected by frame rate (again, the only control you have over this is to make sure you're not flooding the FX Application thread with too much work), along with how much motion you get per frame. Very fast animations will involve something moving considerable distances per frame, and will look less smooth than something that is moving a barely-perceptible distance per frame.
Related
I'm building a very simple "game" for Android using OpenGL ES 2.0. The gameplay consists of a moving point which accelerates or decelerates based on user input.
Of course the space covered by the point in a single frame depends on the amount of time elapsed since the last frame, so to calculate this space I multiply the speed of the point by this amount of time.
I also have a camera which moves according to my point, but I flip the x and y axes in order to make the camera follow the point.
My problem is that the point (and the camera) doesn't move smoothly, and I don't get why, since the FPS is always 60, or at least 55 (I check it through an external app).
If I always use the same amount of time between one frame and the next, everything is smooth.
In order to understand what is wrong, I built a very simple FPS counter and log the FPS (or elapsed time) through Log.d. Here I noticed that the values vary between 45 and 80 FPS. Now, I think that if I could lock this value to a maximum of 60 (which is most smartphones' screen refresh rate), the movement would be much smoother.
So my question is: how can I keep my app from drawing a frame before 0.0166 seconds have elapsed since the last frame?
Thanks for reading, and sorry for my English!
Here I noticed that values vary between 45 and 80 FPS.
The graphics subsystem will return a buffer to the application as soon as one is available, so with triple buffering the frame time measured on the CPU can be a little unpredictable as the application is not running tightly locked with the compositor and CPU time can move about due to CPU frequency changes.
In general the display update will be capped at 60 FPS, but because your animation is based on the elapsed frame time you are seeing at the API level (which is decoupled from the actual display update), you are animating each frame as if it were 45 FPS or 80 FPS, which isn't what actually appears on screen.
If you know you are close to 60 FPS, I'd try something like averaging the elapsed time over the last 3 frames, and using that timestep as your animation update rate. This should remove most of the jitter caused by buffer skid, at the expense of a little latency to reacting to large workload changes.
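A minimal sketch of that averaging (names are made up; feed it whatever elapsed time you measure each frame):

// Hypothetical helper: smooths the per-frame timestep by averaging the
// raw elapsed times of the last 3 frames, as suggested above.
public class FrameTimeSmoother {
    private final long[] samples = new long[3];
    private int index = 0, filled = 0;

    /** Feed the raw elapsed nanoseconds for this frame; returns the smoothed value. */
    public long smooth(long elapsedNanos) {
        samples[index] = elapsedNanos;
        index = (index + 1) % samples.length;
        if (filled < samples.length) filled++;

        long sum = 0;
        for (int i = 0; i < filled; i++) sum += samples[i];
        return sum / filled; // use this as your animation timestep
    }
}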
I'm running a game/simulation on JavaFX, and when I started, it seemed reasonable to add an AnimationTimer to perform the tick updates. My Entity objects are composed of a Polygon that holds their shape and position. I added the Polygons to the scene via a Group and everything renders like magic. However, since it's a simulation, I now want to run millions of ticks with no rendering, to advance to the future and see the results. The problem is, since my Entitys' (x,y) positions are inside the Polygons, every time handle() is called in the animation, the screen seems to be updated.
What is the proper way to split the game loop and render, to be able to call render only after some amount of ticks?
I thought of creating my own MyPolygon class to hold the simulation data, and then, when the time comes to draw, creating one Polygon per Entity on the fly, but that also seems like overkill to me (though maybe I'm wrong).
Also, I'm not sure how to change the ticks per second rate on the AnimationTimer. So I'm not sure it is suited for this specific need.
It seems like a very simple design choice, so there has to be a proper way to do it with JavaFX...
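For what it's worth, one common pattern is to keep the simulation state in plain fields, run as many ticks per handle() call as you want, and only write into the scene-graph Polygons once per rendered frame. A sketch, where world.tick() and world.copyStateToPolygons() are hypothetical stand-ins for the simulation and the scene-graph update (note that a very long batch of ticks will still block the FX thread for that frame):

AnimationTimer loop = new AnimationTimer() {
    final int TICKS_PER_FRAME = 100_000;     // tune: how far to advance per frame

    @Override
    public void handle(long now) {
        for (int i = 0; i < TICKS_PER_FRAME; i++) {
            world.tick();                    // pure simulation; touches no Nodes
        }
        world.copyStateToPolygons();         // the scene graph changes only here
    }
};
loop.start();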
I am building a Java2D video game with multiple sprites updating on the screen at once, and was looking for feedback with regard to best way to handle updates via use of Timers.
I was looking for feedback on the best way to handle a Timer design that would update the location of each Sprite. Currently, I have one Timer. When it expires, I update the position of each Sprite. I only ever update a Sprite's location by 1 pixel, to keep motion smooth. If something needs to update more slowly than the rest of the Sprites, I update its position on, say, every 3rd or 5th call of a getImage() call (used to get the current icon image of the sprite).
Now with this approach, all updates are dependent on the main timer, and the Sprites sort of update in relation to each other. So if I wanted to speed up the game, I just update the refresh rate of the main timer.
However, I don't know if this is the best approach. Would it be better to put each object on its own timer, or would that cause other issues? Maybe cause problems for the main paint() method?
Was just looking for feedback on a good design technique for this.
It is possible to keep using one timer while having perfectly smooth animations, despite different animation and movement speeds between different sprites. The way to do it is to change your animation and movement of sprites from a tick-based approach (move x many pixels per update) to a time-based approach (move x many pixels per how much time has elapsed since the last update).
This would mean your Sprite class (or whatever you have) has floating point x and y positions, as well as floating point x and y velocities. To change the speed of a certain sprite, you would change its velocity (in pixels/drawingUnitsEtc per millisecond/nanosecond), and you won't be limited by how fast you can make the timer run.
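A sketch of such a Sprite (field and method names are illustrative):

// Time-based Sprite: position and velocity are floats, and movement
// depends on elapsed time rather than on the number of timer ticks.
public class Sprite {
    float x, y;    // position in pixels, kept as floats
    float vx, vy;  // velocity in pixels per millisecond

    /** Advance this sprite by however many milliseconds have elapsed. */
    public void update(long elapsedMillis) {
        x += vx * elapsedMillis;
        y += vy * elapsedMillis;
    }
}

One timer drives everything: each time it fires, measure the real elapsed time and call update(elapsed) on every sprite.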
However, I don't know if this is the best approach. Would it be better to put each object on its own timer, or would that cause other issues?
Well, if you did use a separate timer per Sprite, each running at a different speed, you would run into overhead problems if the timers ran on their own threads; and if the timers all executed on the same thread, then you would effectively be updating your Sprites based on elapsed time anyway, just with the velocity constant rounded to an integer.
You would also run into the problem of ensuring the Timer fires consistently. With separate timers, imagine two sprites walking next to each other in game, both meant to update every 10 ms, but one of them running at 11 ms due to a laggy timer: eventually one will run into the back of the other and turn around, messing up your level design or some other game mechanic. Or picture two Sprites of the same kind where one ends up an animation frame ahead of the other, even though they were in sync for the first few seconds you saw them. With a single timer that updates all sprites together, you get consistent results.
Assuming you're using java.util.Timer's schedule method, you could use the scheduleAtFixedRate method instead for cleaner-looking code.
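For example (a sketch; the run() body is a placeholder for your update logic):

import java.util.Timer;
import java.util.TimerTask;

Timer timer = new Timer();
timer.scheduleAtFixedRate(new TimerTask() {
    @Override
    public void run() {
        // update every sprite's position here, then trigger a repaint
    }
}, 0, 16); // no initial delay, then fire every 16 ms (~60 updates/sec)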
Use only one instance of Timer and attach all sprites to it. This keeps all sprites running on the same thread and eliminates all forms of concurrent modification; using a new thread for each sprite is way overkill, as the threads would run as fast as possible and max out the computer.
You could also make something along the lines of a SpriteManager class which extends Thread, so that you don't even need a Timer; all sprites would run as fast as possible, but on one thread, so it won't put too much load on the CPU, and it's worth it. Professionally, movement falls under physics, so a game would have a physics thread that handles all updates to everything.
You could get even more detailed in the physics thread by recognising that over the course of gameplay the number of objects on the thread will change, so everything will update less frequently and run slower (even if only by microseconds). To keep everything running smoothly, you can delta-scale. Delta scaling simply takes the time the last frame took as a hint to how fast the thread is running, and scales the speed of objects up or down accordingly. This is how (most) games avoid running slower when the frame rate drops; instead, objects look like they jumped to where they would be at that point in time.
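A sketch of that delta scaling (GameObject and objects are hypothetical names):

// Delta scaling: scale each object's movement by how long the last
// frame actually took, so speed stays constant in real time.
long lastFrameNanos = System.nanoTime();

void physicsStep() {
    long now = System.nanoTime();
    float deltaSeconds = (now - lastFrameNanos) / 1_000_000_000.0f;
    lastFrameNanos = now;

    for (GameObject obj : objects) {
        obj.x += obj.speedX * deltaSeconds; // speeds are in pixels per second
        obj.y += obj.speedY * deltaSeconds;
    }
}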
I created a game using Swing, and it was a bit unreliable, so I started remaking it using Slick2D game engine and I have encountered issues.
The background of the game scrolls across the screen by a certain number of pixels each time the update method is called. This keeps speeding up and slowing down: the background moves very fast, then very slow, and keeps fluctuating.
I have tried multiplying the value that moves the background by delta (which monitors the refresh rate, I think!), but this doesn't give me an exact value I can use to reset the background to the left-hand side (two backgrounds move from right to left; the left-hand one jumps back to the right at -800 pixels).
What is causing this and how do I overcome it?
Thanks
Here's some reading for you (there's a gamedev-specific StackExchange site, BTW):
https://gamedev.stackexchange.com/questions/6825/time-based-movement-vs-frame-rate-based-movement
https://gamedev.stackexchange.com/questions/1589/fixed-time-step-vs-variable-time-step
One of the most important points in these articles is that things move at a certain rate OVER TIME, not over a certain number of frames. Since frame rates can unpredictably change, time-based and frame-based movement don't wind up being equivalent to one another.
And here's some explanation...
So, your computer and OS are multithreaded, and thus, you can never know what's happening outside your app, and what the overall load is on the machine. Because of this, even when you're in full-screen mode you aren't getting exclusive access to the CPU. So, that's one factor to why things speed up and slow down.
The delta's purpose in Slick2D is to allow you to deal with this speed up/slow down, and allow your app to change its frame rate dynamically so that the perceived movement on the screen doesn't change due to the load on your machine. The delta is not the monitor refresh rate (which is constant); the delta is the number of milliseconds that have passed since the last call to update.
So how do you use this delta properly? Let's say your background is supposed to move at a rate of 100px/sec. If the delta (on a given call to update) is 33 milliseconds, then the amount you should move your background on this update is 100*(33/1000.0) = 3.3 pixels. You may wonder what the point is of tracking fractions of a pixel, but stick with me.
First, the reason you have to divide by 1000.0 instead of 1000 is that you want the division to produce a floating point result rather than a truncated integer.
You'll notice that the 2D graphics stuff in Slick2D uses float values to track the placement of things. That's because if the delta tells you to move something by 3.3 pixels, you need to move it by 3.3: not 3, and not 4 pixels. Sub-pixel movement is critical to smoothing out the increase/decrease in frame rates, because the cumulative effect of several sub-pixel movements is that, when the moment is right, all those little movements add up to a whole pixel, keeping the overall movement rate correct and perfectly smooth.
You may think that, since your screen resolves images to whole pixels rather than sub-pixel elements, sub-pixel movement doesn't matter; but if you convert all your movement tracking to floats, you'll see that the effect you're observing largely goes away.
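Putting that together, an update method along these lines (a fragment of a Slick2D BasicGame subclass; the 100 px/sec speed and the -800 px wrap point are taken from the discussion above, everything else is illustrative):

// backgroundX is a float so sub-pixel movement accumulates
// instead of being thrown away.
float backgroundX = 0f;
static final float SPEED_PX_PER_SEC = 100f;

@Override
public void update(GameContainer container, int delta) throws SlickException {
    backgroundX -= SPEED_PX_PER_SEC * (delta / 1000.0f); // delta is in ms
    if (backgroundX <= -800f) {
        backgroundX += 800f; // wrap the scrolling background exactly
    }
}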
I am working on a live wallpaper, so no worries about physics collisions. I just want to have as smooth a frame rate as possible, up to a limit of 30fps to conserve battery.
To do this, at the end of the loop I measure the time since the beginning of that loop. If the frame took less than 33 ms, I use Thread.sleep() to sleep for the number of ms needed to get up to 33.
However, I know that Thread.sleep() is not super accurate, and is likely to sleep longer than I ask for. I don't know by how much.
Is there a different method I can use that will provide a more even rate?
Yes, Thread.sleep() is not super-accurate.
You can try an adaptive strategy: don't just sleep(remaining), but keep a variable long lastDelay; each time you observe too high a frame rate, increase it and call Thread.sleep(lastDelay), and each time you observe too low a frame rate, decrease it. After a second or so your code will find the right number...
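A sketch of that adaptive approach (33 ms targets roughly 30fps; all names are illustrative):

// Adaptive limiter: tune the sleep instead of trusting Thread.sleep().
static final long TARGET_MS = 33;
long lastDelay = TARGET_MS;
long prevFrameStart = System.currentTimeMillis();

void throttle() throws InterruptedException {
    long now = System.currentTimeMillis();
    long actualFrameTime = now - prevFrameStart; // includes the last sleep
    if (actualFrameTime < TARGET_MS) {
        lastDelay++;          // frame rate too high: sleep a bit longer
    } else if (actualFrameTime > TARGET_MS && lastDelay > 0) {
        lastDelay--;          // frame rate too low: sleep a bit less
    }
    prevFrameStart = now;
    Thread.sleep(lastDelay);
}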
By the way, Thread.sleep is not the best way to limit frame rate. Using a Timer is more promising, but you'll have the same problem, since Timer accuracy is likely the same as Thread.sleep()'s.
I'm not 100% sure about this, but have you tried using a Timer (http://developer.android.com/reference/java/util/Timer.html) and TimerTask (http://developer.android.com/reference/java/util/TimerTask.html)? You should be able to use that to schedule your updates.
30fps does not look smooth at all in a canvas animation. One should always try to keep about 60fps and then adjust the speed of sprite movement according to the screen density. Thread.sleep() is accurate enough for wallpaper or 2D game animations; one cannot notice the difference if the fps goes up or down by just a few frames.
Or use so-called frame-rate-independent movement, where:
deltaTime = timeNow - prevFrameTime; // at 60fps this is ~0.016s, assuming times are in seconds
object.x += speedX * deltaTime;