I am writing a timeline for text:
Usage:
Text text = new Text();
......
group.getChildren().addAll(text);
root.getChildren().addAll(group);
tl.play();
This works fine. If I want to pause and continue the animation, tl.pause(); and tl.play(); can do that.
Now I want to make the animation restart from the beginning, so I use tl.stop(); and tl.playFromStart();. But the effect of this combination is the same as the effect of tl.pause(); and tl.play();.
My question is: why does tl.playFromStart(); not work properly, and how can I restart the animation from the beginning?
How Timelines Work
A Timeline represents a period of time over which an animation is performed. The Timeline comprises a collection of KeyFrames. Each KeyFrame
- must specify a point in time on the Timeline (the Duration object you pass in)
- may optionally also specify a collection of KeyValues, which comprise WritableValues (for example, Propertys) and target values for those WritableValues at that time point
- may optionally specify an action to be performed, in the form of an EventHandler<ActionEvent>
The Timeline has a currentTime property, which (of course) progresses forward as time elapses while the Timeline is playing. pause() will stop the progression of the currentTime, leaving it fixed at its current value. stop() will stop the progression of the currentTime and reset it to zero.
If the Timeline has KeyFrames that specify KeyValues, then as the currentTime changes, the WritableValues specified in the KeyValues will be set to values depending on the currentTime. (Specifically, if the WritableValues are interpolatable, the value will be interpolated between the two adjacent KeyFrames specifying KeyValues for that WritableValue. Otherwise the value will just be set to the value from the "most recent" KeyFrame specifying a value for that WritableValue.)
If the Timeline has KeyFrames that specify actions (EventHandler<ActionEvent>s), then as the currentTime progresses past the time specified by that KeyFrame, the action is invoked.
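For concreteness, here is a minimal sketch of both features in one KeyFrame (the Text node is just a stand-in for your own node):
// assumes the usual javafx.animation and javafx.util.Duration imports
Text text = new Text("Hello");
Timeline tl = new Timeline(
    // one KeyFrame at t = 1s that carries both a KeyValue and an action:
    // the opacity is interpolated from its value at t = 0 down to 0.0,
    // and the handler is invoked every time currentTime passes the 1s mark
    new KeyFrame(Duration.seconds(1),
        e -> System.out.println("reached 1s"),
        new KeyValue(text.opacityProperty(), 0.0)));
tl.setCycleCount(Animation.INDEFINITE);
tl.play();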
Why your code doesn't work with stop() or playFromStart()
In your case, your KeyFrame specifies an action, which adds new transforms to the node's list of transforms. Note that this does not depend on the currentTime at all, except that every time the currentTime reaches 0.04 seconds, a new transform is added (plus whatever the shiftAndScale method, whose implementation you didn't show, does). Thus if you stop() the timeline, the currentTime gets reset to zero, but nothing happens to the node because of this. (Indeed, the currentTime only varies between 0 and 0.04 seconds anyway.)
Other problems with your code
There is a problem with your code, in that you have a memory leak. A Node maintains an ObservableList of Transforms. You are adding to this list (quite frequently), but never removing anything. The Node is quite intelligent: it keeps a hidden matrix which is the net effect of all the transforms; when you add a new transform it stores it in the list and then updates the "net" matrix with a simple matrix multiplication. Hence you won't see any computational performance problems here: it scales fine from that perspective. However, it does store all the individual transforms (because, for example, it supports removing them later), and so if you let this run long enough you will eventually run out of memory.
The one other (maybe minor) issue with your code is that you are doing a lot of floating point arithmetic when you combine all these transforms. Any rounding errors will eventually accumulate. You should try to find a technique that avoids accumulation of rounding errors.
Ways to fix your code
To fix this, you have a couple of options:
If the animation is "naturally cyclical" (meaning it returns to its starting state after some fixed time, like a rotation), then just define the Timeline in terms of that natural duration. Using just your rotation as a simple example, you could do:
double secondsPerCompleteCycle = (360.0 / 0.75) * 0.04 ;
Rotate rotation = new Rotate(0, new Point3D(1, 0, 0));
group.getTransforms().add(rotation);
Timeline timeline = new Timeline(new KeyFrame(Duration.seconds(secondsPerCompleteCycle),
        new KeyValue(rotation.angleProperty(), 360, Interpolator.LINEAR)));
timeline.setCycleCount(Animation.INDEFINITE);
timeline.play();
Now timeline.stop() will set the currentTime to zero, which will have the effect of setting the angle of the rotation back to its initial value (also zero).
If the animation is not naturally repetitive, I would use an (integer-type) counter to keep track of the "current frame" in whatever time units you choose, and then bind the values of the transform to the counter. Using the same example, you could do:
double degreesPerFrame = 0.75 ;
LongProperty frameCount = new SimpleLongProperty();
Rotate rotation = new Rotate(0, new Point3D(1, 0, 0));
group.getTransforms().add(rotation);
rotation.angleProperty().bind(frameCount.multiply(degreesPerFrame));
Timeline timeline = new Timeline(new KeyFrame(Duration.seconds(0.04), e ->
        frameCount.set(frameCount.get() + 1)));
timeline.setCycleCount(Animation.INDEFINITE);
timeline.play();
// to reset to the beginning:
timeline.stop();
frameCount.set(0L);
You could also consider using an AnimationTimer, depending on your exact requirements. I would try one of these techniques first, though.
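For reference, a rough sketch of the AnimationTimer variant, reusing the frameCount property and degreesPerFrame binding from the example above; note that handle() is called once per rendering pulse (roughly 60 times per second) rather than on a fixed 0.04-second tick:
AnimationTimer timer = new AnimationTimer() {
    @Override
    public void handle(long now) { // now is a timestamp in nanoseconds
        frameCount.set(frameCount.get() + 1);
    }
};
timer.start();
// to reset to the beginning:
timer.stop();
frameCount.set(0L);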
In your case the algebra gets quite complex (prohibitively complex, for me at any rate). Each action adds three transforms to the node: a translation, a scale, and a rotation about the x-axis. Their 4x4 matrix representations are:
1 0 0 tx
0 1 0 ty
0 0 1 0
0 0 0 1
for the translation,
sx 0 0 0
0 sy 0 0
0 0 1 0
0 0 0 1
for the scale, and
1 0 0 0
0 cos(t) -sin(t) 0
0 sin(t) cos(t) 0
0 0 0 1
for the rotation.
While it's not too hard to compute the net effect of these three (just multiply them together), computing the net matrix you get from applying these an arbitrary number of times is beyond me (perhaps...). Additionally, the amount you are translating in the x direction is changing, which makes it pretty much impossible.
So the other way to approach this is to define a single transform and apply it to the node, then modify it on each event. This would look like
Affine transform = new Affine() ; // creates identity transform
node.getTransforms().add(transform);
Timeline timeline = new Timeline(new KeyFrame(Duration.seconds(0.04), event -> {
    double shiftX = ... ;
    double shiftY = ... ;
    double scaleX = ... ;
    double scaleY = ... ;
    double angle = 0.75 ;
    Affine change = new Affine();
    change.append(new Translate(shiftX, shiftY));
    change.append(new Scale(scaleX, scaleY));
    change.append(new Rotate(angle, new Point3D(1, 0, 0)));
    transform.append(change);
}));
timeline.setCycleCount(Animation.INDEFINITE);
timeline.play();
As described above, stop() and pause() will have (almost) the same effect. (The only difference is the time until the first new update when you play again: for stop() it will be 0.04 seconds; for pause() it will be less, namely whatever remained until the next update at the moment it was paused.) But to "reset" the animation, you just do
timeline.stop();
transform.setToIdentity(); // resets to beginning
Note that by using this technique, the node only has one transform applied to it; we just update that transform as we progress. Rounding errors still accumulate, but at least the algebra is viable :).
In my model I have 9 different service blocks and each service can produce 9 different features. Each combination has a different delay time and standard deviation. For example, feature 3 needs 5 minutes in service block 8 with a deviation of 0.05, but only needs 3 minutes with a deviation of 0.1 in service block 4.
How can I permanently track the last 5 needed times of each combination and calculate the average (like a moving average)? I want to use the average to let the products decide which service block to choose for the respective feature according to the shortest time, comparing the past times of all of the machines for the respective feature. The product agents already have a parameter for the time entering the service and one calculating the processing time by subtracting the entering time from the time leaving the service block.
Thank you for your support!
I am not sure if I understand what you are asking, but this may be an answer:
To track the last 5 needed times you can use a dataset from the analysis palette, limiting the number of samples to 5.
You will update the dataset using dataset.add(yourTimeVariable); this means you can leave the vertical axis value of the dataset empty.
I assume you would need 1 dataset per feature.
Then you can calculate your moving average by doing:
dataset.getYMean();
If you need 81 datasets, then you can create a collection as an ArrayList with element type DataSet.
And in Main's properties, in On Startup, you can add the following code and it will have the same effect:
for (int i = 0; i < 81; i++) {
    collection.add(new DataSet(5, new DataUpdater_xjal() {
        double _lastUpdateX = Double.NaN;

        @Override
        public void update(DataSet _d) {
            if (time() == _lastUpdateX) { return; }
            _d.add(time(), 0);
            _lastUpdateX = time();
        }

        @Override
        public double getDataXValue() {
            return time();
        }
    }));
}
You will only need to remember which dataset corresponds to which service block and feature, and then you can just do
collection.get(4).getYMean();
and to add a new value to the dataset:
collection.get(2).add(yourTimeVariable);
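To avoid having to remember which index corresponds to which combination, you could compute the index instead. This is just a sketch with hypothetical 0-based variables serviceIndex and featureIndex (0..8), assuming the collection was filled in order by the loop above:
// hypothetical helper: position of the (serviceIndex, featureIndex) combination
// in the 81-element collection, filled service block by service block
int index = serviceIndex * 9 + featureIndex;
// record a new processing time for that combination (same call as above)
collection.get(index).add(yourTimeVariable);
// moving average of the last 5 recorded times for that combination
double movingAverage = collection.get(index).getYMean();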
Is it possible to start a particle effect midway through? I have tried many variations of updating the particle effect/emitters upon initialisation, but none of them seem to work. Has anyone managed to do this before? Thanks a lot!
ParticleEffectPool.PooledEffect effect = particleEffectPool.obtain();
effect.setPosition(posnX,posnY);
float value = 1.5f;
for (ParticleEmitter e : effect.getEmitters()) {
    e.update(value);
    value += 1.5f;
}
The above code doesn't draw all of the particles, but it does seem to update them somewhat. Once the initial effect is over, it resets and then looks fine.
EDIT: I've found a bit of a hack: run the following code snippet 5 times upon initialisation of the particle effect. Still interested to see if someone has a better solution.
p.getEmitters().get(0).addParticle();
p.update(1);
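In loop form, the same hack looks like this (5 iterations as above; the right count presumably depends on the effect):
// the same hack as above, just repeated in a loop
for (int i = 0; i < 5; i++) {
    p.getEmitters().get(0).addParticle();
    p.update(1f);
}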
I assume that all emitters in your ParticleEffect have the same duration:
ParticleEffectPool.PooledEffect effect = particleEffectPool.obtain();
effect.reset();
effect.setPosition(posnX,posnY);
//divide by 1000 to convert from ms to seconds
float effectDuration = effect.getEmitters().first().duration / 1000f;
float skipProgress = 0.5f;
effect.update(skipProgress * effectDuration);
Note that if the emitters have different durations, you would probably want to pick the max duration. Also, if your emitters have delays, you should take them into account too.
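For example, a sketch of picking the longest duration across all emitters (using the same public duration field as above, which is in milliseconds):
float maxDurationMs = 0f;
for (ParticleEmitter e : effect.getEmitters()) {
    maxDurationMs = Math.max(maxDurationMs, e.duration);
}
float effectDuration = maxDurationMs / 1000f;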
Update
This approach will not work as expected in cases where some of the effect's properties change over time. If you skip half of its duration, you don't take into account all the changes that happened before; you just start from some state.
For example, let's say the effect has duration = 10, and its velocity is 100 for the first 4 seconds and 0 after that. If you call effect.update(5), i.e. just skip the first 5 seconds, the particles will have velocity = 0; they just won't "know" that they had to move for the first 4 seconds.
So I guess the only workaround here is to update the effect with small steps in a loop, instead of updating by half of its duration in one call:
ParticleEffectPool.PooledEffect effect = particleEffectPool.obtain();
effect.reset();
effect.setPosition(posnX,posnY);
//divide by 1000 to convert from ms to seconds
float skipDuration = 0.5f * effect.getEmitters().first().duration / 1000f;
//I guess, to reduce the number of iterations in the loop, you can safely use
//a bit bigger stepDeltaTime, like 1 / 10f or bigger, but it depends on your effect;
//here I just use the standard frame duration
final float stepDeltaTime = 1 / 60f;
while (skipDuration > 0) {
    float dt = skipDuration < stepDeltaTime ? skipDuration : stepDeltaTime;
    effect.update(dt);
    skipDuration -= stepDeltaTime;
}
I have a collection of timestamps, e.g. 10:18:07.490, 11:50:18.251, where the first is the start time and the second is the end time for an event. I need to find the range where the maximum number of events are happening within a 24-hour period. These events happen with millisecond precision.
What I am doing is dividing the 24 hours into a millisecond scale, attaching the events to every millisecond, and then finding the range where the most events are happening.
LocalTime start = LocalTime.parse("00:00");
LocalTime end = LocalTime.parse("23:59");
for (LocalTime x = start; x.isBefore(end); x = x.plus(Duration.ofMillis(1))) {
    for (int i = 0; i < startTime.size(); i++) {
        if (startTime.get(i).isBefore(x) && endTime.get(i).isAfter(x)) {
            // add them to list;
        }
    }
}
Certainly this is not a good approach; it takes too much memory. How can I do it in a proper way? Any suggestions?
A solution finding the first period of maximum concurrent events:
If you're willing to use a third party library, this can be implemented "relatively easily" in a SQL style with jOOλ's window functions. The idea is the same as explained in amit's answer:
System.out.println(
Seq.of(tuple(LocalTime.parse("10:18:07.490"), LocalTime.parse("11:50:18.251")),
tuple(LocalTime.parse("09:37:03.100"), LocalTime.parse("16:57:13.938")),
tuple(LocalTime.parse("08:15:11.201"), LocalTime.parse("10:33:17.019")),
tuple(LocalTime.parse("10:37:03.100"), LocalTime.parse("11:00:15.123")),
tuple(LocalTime.parse("11:20:55.037"), LocalTime.parse("14:37:25.188")),
tuple(LocalTime.parse("12:15:00.000"), LocalTime.parse("14:13:11.456")))
.flatMap(t -> Seq.of(tuple(t.v1, 1), tuple(t.v2, -1)))
.sorted(Comparator.comparing(t -> t.v1))
.window(Long.MIN_VALUE, 0)
.map(w -> tuple(
w.value().v1,
w.lead().map(t -> t.v1).orElse(null),
w.sum(t -> t.v2).orElse(0)))
.maxBy(t -> t.v3)
);
The above prints:
Optional[(10:18:07.490, 10:33:17.019, 3)]
So, during the period between 10:18... and 10:33..., there were 3 concurrent events, which is the largest number of events overlapping at any time during the day.
Finding all periods of maximum concurrent events:
Note that there are several periods when there are 3 concurrent events in the sample data. maxBy() returns only the first such period. In order to return all such periods, use maxAllBy() instead (added to jOOλ 0.9.11):
.maxAllBy(t -> t.v3)
.toList()
Yielding then:
[(10:18:07.490, 10:33:17.019, 3),
(10:37:03.100, 11:00:15.123, 3),
(11:20:55.037, 11:50:18.251, 3),
(12:15 , 14:13:11.456, 3)]
Or, as a graphical representation:
3 /-----\ /-----\ /-----\ /-----\
2 /-----/ \-----/ \-----/ \-----/ \-----\
1 -----/ \-----\
0 \--
08:15 09:37 10:18 10:33 10:37 11:00 11:20 11:50 12:15 14:13 14:37 16:57
Explanations:
Here's the original solution again with comments:
// This is your input data
Seq.of(tuple(LocalTime.parse("10:18:07.490"), LocalTime.parse("11:50:18.251")),
tuple(LocalTime.parse("09:37:03.100"), LocalTime.parse("16:57:13.938")),
tuple(LocalTime.parse("08:15:11.201"), LocalTime.parse("10:33:17.019")),
tuple(LocalTime.parse("10:37:03.100"), LocalTime.parse("11:00:15.123")),
tuple(LocalTime.parse("11:20:55.037"), LocalTime.parse("14:37:25.188")),
tuple(LocalTime.parse("12:15:00.000"), LocalTime.parse("14:13:11.456")))
// Flatten "start" and "end" times into a single sequence, with start times being
// accompanied by a "+1" event, and end times by a "-1" event, which can then be summed
.flatMap(t -> Seq.of(tuple(t.v1, 1), tuple(t.v2, -1)))
// Sort the "start" and "end" times according to the time
.sorted(Comparator.comparing(t -> t.v1))
// Create a "window" between the first time and the current time in the sequence
.window(Long.MIN_VALUE, 0)
// Map each time value to a tuple containing
// (1) the time value itself
// (2) the subsequent time value (lead)
// (3) the "running total" of the +1 / -1 values
.map(w -> tuple(
w.value().v1,
w.lead().map(t -> t.v1).orElse(null),
w.sum(t -> t.v2).orElse(0)))
// Now, find the tuple that has the maximum "running total" value
.maxBy(t -> t.v3)
I have written up more about window functions and how to implement them in Java in this blog post.
(disclaimer: I work for the company behind jOOλ)
It can be done significantly better in terms of memory (well, assuming O(n) is considered good for you, and you don't regard 24*60*60*1000 as a tolerable constant):
1. Create a list of items [time, type] (where time is the time, and type is either start or end).
2. Sort the list by time.
3. Iterate the list; when you see a "start", increment a counter, and when you see an "end", decrement it.
4. By storing a "so far seen maximum", you can easily identify the single point where the maximal number of events occurs.
If you want to get the interval containing this point, you can simply find the time where the "first maximum" occurs and where it ends (which is the next [time, type] pair; or, if you allow a start and an end to coincide without being counted as overlapping, just scan linearly from this point until the counter decreases and the time has moved). This can be done only once and does not change the total complexity of the algorithm.
It is really easy to modify this approach to get the interval containing this point; a rough sketch of the sweep is shown below.
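A Java sketch of this sweep, under the assumption that the input is the same pair of parallel lists (startTime, endTime) of LocalTime values from the question, and that an end coinciding with a start is not counted as an overlap:
// assumes java.time.LocalTime, java.util.*
List<long[]> events = new ArrayList<>();   // [nanoOfDay, +1 for start / -1 for end]
for (int i = 0; i < startTime.size(); i++) {
    events.add(new long[] { startTime.get(i).toNanoOfDay(), +1 });
    events.add(new long[] { endTime.get(i).toNanoOfDay(), -1 });
}
// sort by time; at equal times, process ends (-1) before starts (+1)
events.sort(Comparator.<long[]>comparingLong(e -> e[0]).thenComparingLong(e -> e[1]));

int current = 0, best = 0;
long bestStart = 0;
for (long[] e : events) {
    current += (int) e[1];
    if (current > best) {          // a new maximum is reached at this time
        best = current;
        bestStart = e[0];
    }
}
System.out.println(best + " concurrent events, first reached at " + LocalTime.ofNanoOfDay(bestStart));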
I am running the renderer in a separate thread at 60 FPS (16 ms).
The following code produces random stuttering:
long testTime = System.nanoTime();
GL20.glUniformMatrix4(
GL20.glGetUniformLocation(getProgram(), "projectionMatrix"),
false,
matrix4fBuffer // holds projection matrix
);
testTime = System.nanoTime() - testTime;
if (testTime > 1000000) {
System.out.println("DELAY " + (testTime / 1000000) ); // 22-30ms
}
The GL20.glUniformMatrix4 call randomly takes around 22-30 ms (every 10 s, 30 s, 45 s, ...), which causes random slowdowns (stuttering). Normally it takes 0 ms (a couple of nanoseconds).
I am testing with only one object being rendered (using the programmable pipeline - shaders, OpenGL >= 3.3).
Other pieces of this example:
getProgram() // simply returns integer
// This is called before GL20.glUniformMatrix4
FloatBuffer matrix4fBuffer = BufferUtils.createFloatBuffer(16);
projectionMatrix.store(matrix4fBuffer);
matrix4fBuffer.flip();
Any idea what is happening here?
EDIT:
I forgot to mention that I am running render and update in separate threads. I guess it could be related to thread scheduling?
EDIT:
Okay, I also tested this in a single-threaded environment and the problem persists... I have also found out that other calls to glUniformMatrix4 do not cause problems, e.g.:
long testTime = System.nanoTime();
state.model.store(buffer);
buffer.flip();
GL20.glUniformMatrix4(
GL20.glGetUniformLocation(shader.getProgram(), "modelMatrix"),
false,
buffer
);
testTime = System.nanoTime() - testTime;
if (testTime > 16000000) {
System.out.println("DELAY MODEL" + (testTime / 1000000) );
}
Stop doing this:
GL20.glUniformMatrix4(
GL20.glGetUniformLocation(getProgram(), "projectionMatrix"),
[...]
Uniform locations do not change after you link your program, and querying anything from OpenGL is a great way to kill performance.
This Get function is particularly expensive because it uses a string to identify the location you are searching for. String comparisons are slow unless optimized into something like a trie, hash table, etc., and the expense grows as you add more potential matches to the set of searched strings. Neither OpenGL nor GLSL defines how this function has to be implemented, but you should assume that your implementation is as stupid as they come if you are concerned about performance.
Keep a GLint handy for your frequently used named uniforms. I would honestly suggest writing a class that encapsulates a GLSL program object, and then subclass this for any specialization. The specialized classes would store all of the uniform locations they need and you would never have to query GL for uniform locations.
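A minimal sketch of that caching, using the same LWJGL GL20 calls as in the question (the field name is made up):
// query once, right after the program has been linked
private final int projectionMatrixLocation =
        GL20.glGetUniformLocation(getProgram(), "projectionMatrix");

// then, every frame: no string lookup, just the cached location
GL20.glUniformMatrix4(projectionMatrixLocation, false, matrix4fBuffer);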
In short, I'm trying to make a bar (using GWT's wrapper for HTML5 canvas) that will show something reasonable for a given value, no matter what the values of the bottom and top of the chart actually are. I'm assuming the best approach is logarithmic, but I'm completely open to any other solution.
Assumptions:
Our "bar" vertical, measuring 200 pixels high, 35 pixels wide.
We're showing a "site" versus it's parent "region". The units are ones of power (e.g. kW, MW, GW).
The "region" has a range of 1 kW to 55.19 GW. The average value is 27.6 MW.
Approximately 95% of sites within the region are much closer to 1 W than 55 GW, but the top 5% skew the average significantly.
The first site has a value of 12.67 MW. The second site has a value of 192.21 kW.
Obviously the second site wouldn't even register on a linear graph, while the first would register very low.
How can I make this bar more useful? For example, I'd like the top 5% of sites that skew the region's average to represent only a small portion (5%) of the total bar, while the other 95% should represent 95%.
The line in the lower area of the bar is the region average line, while the entire bar represents Minimum (bottom) to Maximum (top).
Current Java code using log10:
// BAR_GRAPH_WIDTH = 36, BAR_GRAPH_HEIGHT = 200
// regionNsp (MW): [min=0.0, max=55192.8, avg=27596.5]
// siteNsp (MW) = 187.18
DrawingArea canvas = new DrawingArea(BAR_GRAPH_WIDTH, BAR_GRAPH_HEIGHT);
Rectangle bgRect = new Rectangle(1, 0, BAR_GRAPH_WIDTH - 1, BAR_GRAPH_HEIGHT); // background bar
bgRect.setFillColor("white");
canvas.add(bgRect);
int graphSize = (int)(BAR_GRAPH_HEIGHT / Math.log10(regionNsp.getMax()));
int siteHeight = (int)Math.log10(siteNsp - regionNsp.getMin()) * graphSize;
Rectangle valueRect = new Rectangle(1, BAR_GRAPH_HEIGHT-siteHeight, 35, siteHeight);
valueRect.setFillColor("lightgreen");
canvas.add(valueRect);
Consider a logarithmic scale with a break for extremely high values that are far beyond any others in the population. For an example of a break in the bars and axis, see: http://tomhopper.files.wordpress.com/2010/08/bar-chart-natural-axis-split1.png
I admit that I don't know much about GWT, so I'm answering on the basis of how I would show your values on a paper-and-pencil graph. That answer is that you've answered your own question - use logs. The range from 1000 to 55200000000 with an average around 27600000 becomes, after taking common (base 10) logs, about 3 to 11, with the average around 7.4.
The caveat is that what you gain in "reasonableness" you do lose in perspective. Take the decibel scale, which is based on common (base 10) logs. The difference between an 80 decibel sound and an 85 decibel sound doesn't seem like a big change, except that the second is about three times more energetic.
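A sketch of that log mapping in code, alongside the question's drawing calls; the names regionMinWatts, regionMaxWatts and siteWatts are placeholders for the same quantities expressed in watts, and this is just the plain log scale (the axis-break idea from the other answer would need an extra piece on top):
// clamp the minimum to at least 1 W, since log10 of 0 is undefined
double minW = Math.max(regionMinWatts, 1.0);
double logMin = Math.log10(minW);                         // ~3 for 1 kW
double logMax = Math.log10(regionMaxWatts);               // ~10.7 for 55.19 GW
double logSite = Math.log10(Math.max(siteWatts, minW));

double fraction = (logSite - logMin) / (logMax - logMin); // 0..1 along the bar
int siteHeight = (int) Math.round(fraction * BAR_GRAPH_HEIGHT);

Rectangle valueRect = new Rectangle(1, BAR_GRAPH_HEIGHT - siteHeight, 35, siteHeight);
valueRect.setFillColor("lightgreen");
canvas.add(valueRect);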