Limit draw processing in a game loop - java

I am using the code structure below, taken from http://www.koonsolo.com/news/dewitters-gameloop/, to set up a game loop that updates at a fixed rate but renders/draws as often as possible.
How would one implement a cap on the drawing FPS so as not to use up all the processing power/battery life, or to limit it for v-syncing?
const int TICKS_PER_SECOND = 60;
const int SKIP_TICKS = 1000000000 / TICKS_PER_SECOND;
const int MAX_FRAMESKIP = 5;

DWORD next_game_tick = GetTickCount();
int loops;
float interpolation;

bool game_is_running = true;
while( game_is_running ) {
    loops = 0;
    while( GetTickCount() > next_game_tick && loops < MAX_FRAMESKIP ) {
        update_game();
        next_game_tick += SKIP_TICKS;
        loops++;
    }

    interpolation = float( GetTickCount() + SKIP_TICKS - next_game_tick )
                    / float( SKIP_TICKS );
    display_game( interpolation );
}

I assume that you are actually doing proper motion interpolation? Otherwise it doesn't make sense to render faster than your game update: you'll just be rendering all the objects again in exactly the same position.
I'd suggest the following:
Put a Thread.sleep(millis) call in to stop the busy-looping. Probably a Thread.sleep(5) is fine, since you are just going to do a quick check for whether you are ready for the next update.
Put a conditional test on the display_game call to see if at least a certain number of milliseconds has elapsed since the last display_game. For example, if you make this 10ms then your frame rate will be limited to 100 FPS (see the sketch below).
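A minimal sketch of both suggestions combined, translated to Java with System.nanoTime(); the gameIsRunning flag and the update_game()/display_game() methods are assumed to exist:

final long SKIP_TICKS = 1_000_000_000L / 60;  // fixed update interval in nanoseconds
final long MIN_FRAME_TIME = 10_000_000L;      // 10 ms between draws = ~100 FPS cap
final int MAX_FRAMESKIP = 5;

long nextGameTick = System.nanoTime();
long lastRender = 0;

while (gameIsRunning) {
    int loops = 0;
    while (System.nanoTime() > nextGameTick && loops < MAX_FRAMESKIP) {
        update_game();
        nextGameTick += SKIP_TICKS;
        loops++;
    }

    long now = System.nanoTime();
    if (now - lastRender >= MIN_FRAME_TIME) {
        float interpolation = (now + SKIP_TICKS - nextGameTick) / (float) SKIP_TICKS;
        display_game(interpolation);
        lastRender = now;
    } else {
        // nothing to draw yet; yield instead of busy-looping
        try { Thread.sleep(1); } catch (InterruptedException ignored) { }
    }
}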
There are also a couple of other things that are a bit unclear in your code:
What is DWORD? Is this really Java? It looks like some funny C/C++ conversion. The normal way to get the current time in Java would be long time = System.nanoTime() or similar.
What graphics framework are you using? If it is Swing, then you need to be careful about what thread you are running on, as you don't want to be blocking the GUI thread.
Finally, you should also consider whether you want to decouple your update loop from the rendering code and have them running on different threads. This is trickier to get right since you may need to lock or take snapshots of certain objects to ensure they don't change while you are rendering them, but it will help your performance and scalability on multi-core machines (which is most of them nowadays!)
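And a very rough sketch of the decoupled, two-thread approach; the WorldSnapshot/Game classes here are purely illustrative, not from the question:

// Illustrative only: the update thread publishes immutable snapshots,
// and the render thread always draws the most recently published one.
final class WorldSnapshot {
    final double playerX, playerY;
    WorldSnapshot(double playerX, double playerY) {
        this.playerX = playerX;
        this.playerY = playerY;
    }
}

class Game {
    private volatile WorldSnapshot latest = new WorldSnapshot(0, 0);
    private double x, y; // mutable state, touched only by the update thread

    void updateLoop() {
        while (true) {
            x += 1.0;                           // game logic goes here
            latest = new WorldSnapshot(x, y);   // publish an immutable copy
            try { Thread.sleep(16); } catch (InterruptedException e) { return; }
        }
    }

    void renderLoop() {
        while (true) {
            WorldSnapshot s = latest;           // volatile read, no locking needed
            // draw using s.playerX, s.playerY ...
            try { Thread.sleep(16); } catch (InterruptedException e) { return; }
        }
    }
}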

I think you can update your display_game to compare the FPS being painted against the desired limit. If it has reached that limit, you can add a wait before drawing again, for example:
Thread.sleep(500); //wait for 500 milliseconds

Related

How can I start a LibGDX particle effect midway through?

Is it possible to start a particle effect midway through? I have tried many variations of updating the particle effect/emitters upon initialisation. None of them seem to work. Has anyone managed to do this before? Thanks a lot!
ParticleEffectPool.PooledEffect effect = particleEffectPool.obtain();
effect.setPosition(posnX, posnY);

float value = 1.5f;
for (ParticleEmitter e : effect.getEmitters()) {
    e.update(value);
    value += 1.5f;
}
The above code doesn't draw all of the particles, but it does seem to update them somewhat. Once the initial effect is over, it resets and then it looks fine.
EDIT: I've found a little bit of a hack by running the following code snippet 5 times upon initialisation of the particle effect. I'm still interested to see if someone has a better solution.
p.getEmitters().get(0).addParticle();
p.update(1);
I assume that all emitters in your ParticleEffect have the same duration:
ParticleEffectPool.PooledEffect effect = particleEffectPool.obtain();
effect.reset();
effect.setPosition(posnX,posnY);
//divide by 1000 to convert from ms to seconds
float effectDuration = effect.getEmitters().first().duration / 1000f;
float skipProgress = 0.5f;
effect.update(skipProgress * effectDuration);
Note that if the emitters have different durations, you would probably want to pick the maximum duration. Also, if your emitters have delays, you should take them into account too.
Update
This approach will not work as expected when some of the effect's properties change over time. If you skip half of its duration, you don't take into account all the changes that happened before; you just start from some intermediate state.
For example, let's say the effect has duration = 10, and its velocity is 100 for the first 4 seconds and 0 after that. If you call effect.update(5), i.e. just skip the first 5 seconds, the particles will have velocity = 0; they just won't "know" that they had to move for the first 4 seconds.
So I guess the only workaround here is to update the effect with small steps in a loop, instead of updating for half of its duration in one call:
ParticleEffectPool.PooledEffect effect = particleEffectPool.obtain();
effect.reset();
effect.setPosition(posnX, posnY);

// divide by 1000 to convert from ms to seconds
float skipDuration = 0.5f * effect.getEmitters().first().duration / 1000f;

// To reduce the number of iterations in the loop you can probably use a
// bigger stepDeltaTime, like 1/10f or more, but that depends on your effect;
// here the standard frame duration is used.
final float stepDeltaTime = 1 / 60f;
while (skipDuration > 0) {
    float dt = skipDuration < stepDeltaTime ? skipDuration : stepDeltaTime;
    effect.update(dt);
    skipDuration -= stepDeltaTime;
}

I am using a lot of variables to count time in my game, is it OK?

In my game I have timer variables for everything that happens, for example a timer counting the seconds until I create and deploy an enemy, and a timer for each enemy to shoot. My point is that I am using a lot of variables of type long.
long timeToEnemyShoot = System.nanoTime();

while (true) {
    update();
}

public void update() {
    if ((System.nanoTime() - timeToEnemyShoot) / 1000000 >= 1000) {
        enemy.shoot();
    }
}
And just imagine that there are more than 15 variables like that!
I don't think this is a good way to manage time.
So, is there a more efficient way?
I think it's generally OK. You could use a single "milestone" long variable and express each exact time as the milestone plus a smaller, less memory-consuming offset. The risks are added complexity (and therefore bugs) and slightly higher computing requirements.
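A minimal sketch of another way to cut down on raw long fields, using a small hypothetical Cooldown helper (one object per recurring event instead of a bare timestamp); nothing here comes from the original answers:

// Hypothetical helper: tracks one recurring interval instead of a raw long field.
final class Cooldown {
    private final long intervalNanos;
    private long last = System.nanoTime();

    Cooldown(long intervalMillis) {
        this.intervalNanos = intervalMillis * 1_000_000L;
    }

    // Returns true (and resets) once the interval has elapsed.
    boolean ready() {
        long now = System.nanoTime();
        if (now - last >= intervalNanos) {
            last = now;
            return true;
        }
        return false;
    }
}

// Usage in update():
// if (enemyShootCooldown.ready()) { enemy.shoot(); }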
When you declare a variable, memory is allocated for it.
One thing you can do to save yourself a lot of pain in the long run is to make the variables private.
In some cases the type of access will affect performance.
I hope this helps.

FPS algorithm error with system time

I'm having some trouble with an FPS algorithm I have tried to implement in my simulator. The general idea is that I want 60 to be the maximum number of tick-render cycles per second. Here is my code:
public void run() {
    x = 0;                               // tick counter, starts at 0
    lastT = System.currentTimeMillis();  // system time in milliseconds

    // tick-render cycle
    while (running) {
        currentT = System.currentTimeMillis();
        deltaT += currentT - lastT;
        lastT = currentT;
        if (deltaT / tPerTick >= 1) {
            tick();
            render();
            deltaT = 0;
        }
    }
    stop(); // stops the thread when running != true
}
The constant 'tPerTick' is defined as follows:
double tPerTick = 1000 / 60;
Throughout my development of this program I thought that this algorithm was working perfectly, it was only when I traced this algorithm to confirm that I found an issue. Every time the loop cycles (iterates? I'm not sure what the correct word is here) the if statement is found to be true and therefore the tick-render cycle is executed. I did some more tracing (to find why this was happening) and found that the values for deltaT are always well over tPerTick, like way way over (in some cases 19 seconds even though this is clearly not the case). Is there an error somewhere in my code? I think that I must be either using System.currentTimeMillis() wrong or am tracing the algorithm incorrectly.
In the actual simulation it seems to be working fine (not sure why). When I draw the graphics I pass 'x' (the tick) in and write the time to the screen as x / 60 seconds.
Answering my own question.
System.currentTimeMillis();
Gets the current system time. If you are going through the algorithm manually in debug mode, 'deltaT' is going to be very large since it will be equal to the time that you take to manually trace through the algorithm.
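As a side note, a common guard against huge deltas (for example after a breakpoint or a long stall) is to clamp the measured delta; a minimal sketch, not part of the original answer:

long currentT = System.currentTimeMillis();
long elapsed = currentT - lastT;
lastT = currentT;

// Clamp the elapsed time so a debugger pause or OS stall doesn't
// cause a burst of catch-up ticks.
final long MAX_DELTA = 250; // ms, an arbitrary cap chosen for illustration
deltaT += Math.min(elapsed, MAX_DELTA);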

Run code every X seconds (Java)

This is not super necessary, I am just curious to see what others think. I know it is useless, it's just for fun.
Now, I know how to do this; it's fairly simple. I am just trying to figure out a way to do it differently that doesn't require creating new variables that crowd up my class.
Here's how I would do it:
float timePassed = 0f;

public void update() {
    // deltatime is the time passed from one update to the next, in seconds (a float)
    timePassed += deltatime;
    if (timePassed >= 5) {
        // code to be run every 5 seconds
        timePassed -= 5f;
    }
}
What I want to know is if there is a way to do this without the time passed variable. I have a statetime (time since loop started) variable that I use for other things that could be used for this.
If the goal really is to run code every X seconds, my first choice would be to use a java.util.Timer. Another option is to use a ScheduledExecutorService, which adds a couple of enhancements over the util.Timer (better exception handling, for one); see the sketch below.
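A minimal sketch of the ScheduledExecutorService approach (the task body is just a placeholder):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

// Run the task every 5 seconds, starting 5 seconds from now.
scheduler.scheduleAtFixedRate(() -> {
    // code to be run every 5 seconds
}, 5, 5, TimeUnit.SECONDS);

// Later, when shutting the game down:
// scheduler.shutdown();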
I tend to avoid the javax.swing.Timer, as I prefer to leave the EDT (event dispatch thread) uncluttered.
Many people write a "game loop" which is closer to what you have started. A search on "game loop" will probably get you several variants, depending on whether you wish to keep a steady rate or not.
Sometimes, in situations where one doesn't want to continually test and reset, one can combine the two functions via an "AND" operation. For example, if you AND an integer counter with 63, you constrain it to the range 0-63 to iterate through. This works well when the range is a power of 2.
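For instance, something like this, where the 64-update period and the frameCount counter are just assumptions for illustration:

frameCount++;
// (frameCount & 63) wraps through 0..63, so this fires once every 64 updates.
if ((frameCount & 63) == 0) {
    // periodic work goes here
}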
Depending on the structure of your calling code, you might pass in the "statetime" variable as a parameter and test if it is larger than your desired X. If you did this, I assume that a step in the called code will reset "statetime" to zero.
Another idea is to pass in a "startTime" to the update method. Then, your timer will test the difference between currentTimeMillis and startTime to see if X seconds has elapsed or not. Again, the code you call should probably set a new "startTime" as part of the process. The nice thing about this method is that there is no need to increment elapsed time.
As long as I am churning out ideas: you could also create a future "targetTime" variable and test whether currentTimeMillis() - targetTime > 0.
startTime or targetTime can be immutable, which often provides a slight plus, depending on how they are used.
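A quick sketch of the targetTime idea; the 5-second period and field names are just illustrative:

long targetTime = System.currentTimeMillis() + 5000; // next time the code should fire

public void update() {
    if (System.currentTimeMillis() - targetTime > 0) {
        // code to be run every 5 seconds
        targetTime += 5000; // schedule the next occurrence
    }
}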

LWJGL 2.9.0 GL20.glUniformMatrix4 causes random stuttering

I am running the renderer in a separate thread at 60 FPS (16 ms).
The following code produces random stuttering:
long testTime = System.nanoTime();
GL20.glUniformMatrix4(
    GL20.glGetUniformLocation(getProgram(), "projectionMatrix"),
    false,
    matrix4fBuffer // holds the projection matrix
);
testTime = System.nanoTime() - testTime;
if (testTime > 1000000) {
    System.out.println("DELAY " + (testTime / 1000000)); // 22-30ms
}
The GL20.glUniformMatrix4 call randomly takes around 22-30 ms (every 10 s, 30 s, 45 s, ...), which causes random slowdown (stuttering). Normally it takes close to 0 ms (a couple of nanoseconds).
I am testing with only one object being rendered (using programmable pipeline - shaders, OpenGL >= 3.3).
Other pieces of this example:
getProgram() // simply returns an integer

// This is called before GL20.glUniformMatrix4
FloatBuffer matrix4fBuffer = BufferUtils.createFloatBuffer(16);
projectionMatrix.store(matrix4fBuffer);
matrix4fBuffer.flip();
Any idea what is happening here?
EDIT:
I forgot to mention that I am running render and update in separate threads. I guess it could be related to thread scheduling?
EDIT:
Okay, I also tested this in a single-threaded environment and the problem persists... I have also found out that other calls to glUniformMatrix4 do not cause problems, e.g.:
long testTime = System.nanoTime();
state.model.store(buffer);
buffer.flip();
GL20.glUniformMatrix4(
    GL20.glGetUniformLocation(shader.getProgram(), "modelMatrix"),
    false,
    buffer
);
testTime = System.nanoTime() - testTime;
if (testTime > 16000000) {
    System.out.println("DELAY MODEL" + (testTime / 1000000));
}
Stop doing this:
GL20.glUniformMatrix4(
GL20.glGetUniformLocation(getProgram(), "projectionMatrix"),
[...]
Uniform locations do not change after you link your program, and querying anything from OpenGL is a great way to kill performance.
This particular Get function is especially expensive because it uses a string to identify the location you are searching for. String comparisons are slow unless optimized into something like a trie, hash table, etc., and the expense grows as you add more potential matches to the set of searched strings. Neither OpenGL nor GLSL defines how this function has to be implemented, but you should assume that your implementation is as stupid as they come if you are concerned about performance.
Keep a GLint handy for your frequently used named uniforms. I would honestly suggest writing a class that encapsulates a GLSL program object, and then subclass this for any specialization. The specialized classes would store all of the uniform locations they need and you would never have to query GL for uniform locations.
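A minimal sketch of that idea with LWJGL 2; the class and field names are illustrative:

import java.nio.FloatBuffer;
import org.lwjgl.opengl.GL20;

// Illustrative wrapper: query the uniform location once after linking,
// then reuse the cached int on every frame.
class ProjectionShader {
    private final int program;
    private final int projectionMatrixLocation;

    ProjectionShader(int linkedProgram) {
        this.program = linkedProgram;
        // One-time lookup, done right after the program is linked.
        this.projectionMatrixLocation =
                GL20.glGetUniformLocation(linkedProgram, "projectionMatrix");
    }

    void setProjectionMatrix(FloatBuffer matrix4fBuffer) {
        GL20.glUseProgram(program);
        // Per-frame call uses the cached location; no string lookup here.
        GL20.glUniformMatrix4(projectionMatrixLocation, false, matrix4fBuffer);
    }
}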
