I am playing a MIDI song using a Java Sequencer. The song is designed to be looped continuously, which I can do easily with
sequencer.setLoopCount(Sequencer.LOOP_CONTINUOUSLY)
When played through the internal (soundcard) synthesizer this works fine and (with the addition of a dummy event if necessary) the loop timing is spot on.
However when played through an external (USB or serial) synth there is a noticeable gap in the output at the point where it loops around. This is explained by the fact that there are many setup events at the start of the .mid file that take some time to be sent over the serial line.
What I would like to try is isolating the one-time setup events into their own Sequence which is sent to the device once when the song is loaded but kept out of the main (looped) Sequence.
Is there a simple algorithm (or library function) that can distinguish the two kinds of event?
It would need to provide for:
- Registered parameter changes, which are sent as a group of related messages.
- Program changes: occasionally a channel program change is sent in the middle of a track (and must stay in the looped sequence), but where the same program is kept throughout the song (the majority of cases) the program change should go into the setup sequence. The same applies to tempo changes.
Take a look at javax.sound.midi. A Sequence consists of Tracks; Tracks contain MidiEvents; a MidiEvent is a combination of a timestamp and a MidiMessage.
MidiMessage has subclasses ShortMessage, MetaMessage and SysexMessage.
Most probably filtering out the SysexMessages at tick 0 (MidiEvent.getTick() == 0) will do the trick. If not, try also filtering the MetaMessages at tick 0. Note data, program changes, etc. are sent as ShortMessages; do not filter those.
for (Track track : sequence.getTracks()) {
    // iterate backwards so removals don't shift pending indices
    for (int i = track.size() - 1; i >= 0; i--) {
        MidiEvent event = track.get(i);
        MidiMessage msg = event.getMessage();
        if (event.getTick() == 0 && (msg instanceof SysexMessage || msg instanceof MetaMessage)) {
            track.remove(event);
        }
    }
}
The other part is to create the initialization Sequence. Just create a Sequence with the same divisionType and resolution; one track is enough, and you can add all the events removed from the looping Sequence to that single Track.
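Putting both parts together, a sketch using only javax.sound.midi might look like this (whether Sysex/Meta at tick 0 is the right filter for your files is an assumption to verify; the mandatory end-of-track meta event is deliberately left in place):

```java
import javax.sound.midi.*;

class SetupExtractor {

    // Build a one-track "setup" Sequence with the same divisionType and
    // resolution as the song, and move tick-0 Sysex/Meta events into it.
    static Sequence extractSetup(Sequence song) throws InvalidMidiDataException {
        Sequence setup = new Sequence(song.getDivisionType(), song.getResolution());
        Track setupTrack = setup.createTrack();
        for (Track track : song.getTracks()) {
            // iterate backwards so removals don't shift pending indices
            for (int i = track.size() - 1; i >= 0; i--) {
                MidiEvent event = track.get(i);
                MidiMessage msg = event.getMessage();
                // never move the mandatory end-of-track meta event (type 0x2F)
                boolean isEndOfTrack =
                        msg instanceof MetaMessage meta && meta.getType() == 0x2F;
                if (event.getTick() == 0 && !isEndOfTrack
                        && (msg instanceof SysexMessage || msg instanceof MetaMessage)) {
                    if (track.remove(event)) {
                        setupTrack.add(event);
                    }
                }
            }
        }
        return setup;
    }
}
```

On load you would play the setup Sequence through the device once, then loop the stripped song Sequence as before.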
I want to process multiple events in order of their timestamps, coming into the system via multiple source systems such as MQ, S3 and Kafka.
What approach should be taken to solve the problem?
As soon as an event comes in, the program can't know if another source will send events that should be processed before this one but have not arrived yet. Therefore, you need some waiting period, e.g. 5 minutes, in which events won't be processed so that late events have a chance to cut in front.
There is a trade-off here: making the waiting window larger gives late events a higher chance of being processed in the right order, but also delays event processing.
For implementation, one way is to use a priority queue sorted by minimum timestamp. All event sources write to this queue, and events are consumed only from the top, and only once they are at least x seconds old.
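A minimal single-process sketch of that buffer (the class and field names are made up, and a real deployment would also need the crash-recovery handling discussed below):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Illustrative event type: only the timestamp matters for ordering.
record Event(long timestamp, String source) {}

class DelayedMerger {
    private final PriorityQueue<Event> queue =
            new PriorityQueue<>(Comparator.comparingLong(Event::timestamp));
    private final long waitMillis;

    DelayedMerger(long waitMillis) {
        this.waitMillis = waitMillis;
    }

    // Any source (MQ, S3, Kafka consumer, ...) pushes events here.
    void offer(Event event) {
        queue.add(event);
    }

    // Drain only events older than the waiting window, in timestamp order,
    // so late arrivals still have a chance to cut in front.
    List<Event> poll(long nowMillis) {
        List<Event> ready = new ArrayList<>();
        while (!queue.isEmpty() && queue.peek().timestamp() <= nowMillis - waitMillis) {
            ready.add(queue.poll());
        }
        return ready;
    }
}
```

In production this would be driven by a periodic timer calling poll, and the queue would need to be shared safely between the source threads.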
One possible optimisation for the processing lag: As long as all data sources provide at least one event that is ready for consumption, you can safely process events until one source is empty again. This only works if sources provide their own events in-order. You could implement this by having a counter for each data source of how many events exist in the priority-queue.
Another aspect is what happens to the priority-queue when a node crashes. Using acknowledgements should help here, so that on crash the queue can be rebuilt from unacknowledged events.
With a basic understanding of Akka Classic, I moved to Typed and noticed that the typed version of my code is significantly slower than the classic one.
The task is to aggregate "ticks" (containing an instrument name, a timestamp and a price) per instrument.
In the classic code, I dynamically create one actor for each instrument and keep a Map<Instrument, ActorRef> outside the actor system to delegate the incoming ticks to.
In the typed code, a "parent" was required, so I moved the routing logic with the Map into this parent actor, ending up with two actor classes here (the actual tick actor and the routing parent actor).
Otherwise, the code is pretty much the same, just once implemented via the classic api and once typed.
When testing both logics (primitively) I found that the version using the classic logic took a bit less than 1.5 seconds to process 1,000,000 ticks, while the typed one required a bit more than 3.5 seconds.
The obvious first step was to move the guardian parent (which is also the router) onto its own PinnedDispatcher, so it could run on its own thread, with all the other actors using the default thread pool. This improved performance a good bit, down to around 2.1 seconds for 1,000,000 ticks.
My question is: Does anyone have an idea where the remaining performance (0.6 seconds) might be lost?
Typed runs on top of classic (a typed Behavior<T> is effectively wrapped in a function which casts messages to T; once wrapped, it can then be treated as basically a classic Receive), so it introduces some overhead per-message.
I'm guessing, from the improvement gained by putting the routing parent on a pinned dispatcher, that the typed implementation sent every tick through the parent, so note that you're incurring that overhead twice. Depending on how many Instruments you have relative to the number of ticks, the typed code can be made much more like the classic code by using something like the SpawnProtocol for the parent, so that the code outside the ActorSystem would, at a high level:
- check a local Map<Instrument, ActorRef<Tick>> (or whatever)
- if there's an ActorRef for the instrument in question, send the tick to that ActorRef
- otherwise, ask the parent actor for an ActorRef<Tick> corresponding to the instrument in question; then save the resulting ActorRef in the local Map and send the tick to that ActorRef
This is more like the situation in classic: the number of messages (ignoring internal system messages) is now 1 million plus 2x the number of Instruments, vs. 2 million.
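Ignoring Akka specifics, the caching logic outside the ActorSystem can be sketched like this; TickRouter and askParentToSpawn are made-up names, a plain Function stands in for asking the SpawnProtocol parent, and Ref stands in for ActorRef<Tick>:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch: Ref stands in for ActorRef<Tick>, askParentToSpawn
// for the (resolved) ask to the SpawnProtocol parent.
class TickRouter<Instrument, Ref> {
    private final Map<Instrument, Ref> refs = new ConcurrentHashMap<>();
    private final Function<Instrument, Ref> askParentToSpawn;

    TickRouter(Function<Instrument, Ref> askParentToSpawn) {
        this.askParentToSpawn = askParentToSpawn;
    }

    // Only the first tick per instrument pays the round trip through the
    // parent; every later tick goes straight to the cached ref.
    Ref refFor(Instrument instrument) {
        return refs.computeIfAbsent(instrument, askParentToSpawn);
    }
}
```

The effect is exactly the message-count arithmetic above: the parent is involved once per Instrument, not once per tick.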
I have a set of rules in my BroadcastStream in Apache Flink.
I am able to apply new rules as they come to my stream of events.
But I am not able to figure out how I can implement rules like:
rule 1> alert when count of event a is greater than 5 in a window of 5 mins
rule 2> alert when count of event a is greater than 4 in a window of 15 mins
I am a newbie to flink. I am not able to figure this out.
An application based on flink-sql or flink-cep won't be able to do this, because those libraries can only handle rules that are defined at the time the job is compiled. You would need to start a new job for each new rule, which may not meet your requirements.
If you want to have a single job that can handle a dynamic set of rules that are supplied while the job is running, you'll have to build this yourself. You can use a KeyedBroadcastProcessFunction to do this (which it sounds like you have already begun to experiment with).
Here's a sketch of a possible implementation:
You can use keyed state in the KeyedBroadcastProcessFunction to keep track of the current count in each window. If the rules can be characterized by a time interval and a counting threshold, then you could use MapState, where the keys are the rule IDs, and the values in the map are the current count for that rule. You can have a timer for each rule that fires when each window ends.
As events arrive, you iterate through the rules in the map, incrementing the counter for every relevant rule. And when a timer fires, you find the relevant rules, compare the counters to the thresholds, take the appropriate action, and clear those counters.
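Stripped of the Flink API, the bookkeeping for one key might look like this sketch, where a plain HashMap stands in for MapState, onEvent for processElement, and onWindowEnd for the onTimer callback (all names are illustrative):

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Plain-Java stand-in for the per-key state: a map from rule id to the
// running count in that rule's current window.
class RuleCounters {
    record Rule(String id, long windowMillis, long threshold) {}

    private final Map<String, Long> counts = new HashMap<>();

    // processElement: bump the counter of every rule the event matches.
    void onEvent(Collection<Rule> matchingRules) {
        for (Rule rule : matchingRules) {
            counts.merge(rule.id(), 1L, Long::sum);
        }
    }

    // onTimer at a rule's window end: alert if over threshold, then reset
    // the counter so the next window starts fresh.
    boolean onWindowEnd(Rule rule) {
        long count = counts.getOrDefault(rule.id(), 0L);
        counts.remove(rule.id());
        return count > rule.threshold();
    }
}
```

In the real KeyedBroadcastProcessFunction, windowMillis would be used to register the timer that eventually triggers onWindowEnd.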
Some potential complications to keep in mind:
This implementation requires that you partition your stream with a keyBy, so that you can use MapState and timers.
The broadcast stream can't have timers associated with it, so the timers will have to be managed by the processElement method that's handling the keyed stream.
Flink only allows one timer for a given key and given timestamp. So take care if you must handle the case where two rules would need to be triggered at the same time.
If events can arrive out of order, then you will need to either first sort the stream by timestamp, or allow for having multiple windows open concurrently.
I'm currently working on my master's thesis, which involves using Drools Fusion to process events coming from multiple streams of XML files (so I am 'replaying' each file as a stream). These files describe a football match in which GPS sensors attached to the players monitor their acceleration, velocity and other stuff like player load.
Each XML file contains instances of events stating an ID, start time, end time and code as follows:
<file>
<SESSION_INFO>
<start_time>2015-09-17 19:02:31.31 +100</start_time>
</SESSION_INFO>
<SORT_INFO>
<sort_type>sort order</sort_type>
</SORT_INFO>
<ALL_INSTANCES>
<instance>
<ID>1</ID>
<start>0</start>
<end>1.51</end>
<code>Accel : 0.00 - 2.00</code>
</instance>
<instance>
<ID>2</ID>
<start>1.52</start>
<end>3.01</end>
<code>Accel : -2.00 - 0.00</code>
</instance>
<instance>
<ID>3</ID>
<start>3.02</start>
<end>4.01</end>
<code>Accel : 0.00 - 2.00</code>
</instance>
<instance>
<ID>4</ID>
<start>4.02</start>
<end>4.21</end>
<code>Accel : 2.00 - 4.00</code>
</instance>
</ALL_INSTANCES>
</file>
I have 9 of these files, which all need to be processed concurrently, feeding their events into the engine simultaneously. My current implementation uses a JAXB unmarshaller to feed these events into the stream, but I have no idea how to do it concurrently (i.e. feed in the first event per stream, then the second event per stream, etc.). I was looking into using threads for that part of the implementation, unless there is another tool in Drools I've missed that would help. But I've searched fairly thoroughly and no comprehensive examples exist of processing multiple streams concurrently.
Another question I have is regarding the pseudo-clock. Because I have these 9 different streams with events happening at different times, I cannot advance the time after every insert: each event in each stream happens at a different time, so the events wouldn't line up. The time at which all these streams start is the same. For example, if instance 1 in the XML lasts 1.51 seconds, and another event from another stream lasts, say, 4 seconds, then advancing the clock for each of them independently would put them out of sync with each other.
However, all my time related data exists in each stream. The Kick Off time is 19:02:31, and each event has a timestamp in that stream in seconds after kick off through the 'end' timestamp with the duration of each event of (end timestamp - start timestamp). The processing I need to do with these streams involves taking these acceleration events and correlating them with other streams whenever 2 or more players accelerate at the same rate at roughly the same duration/time interval.
Can anyone give me any pointers or assistance? To summarize: I need a better way of concurrently inserting the streams into the engine, and I need to know whether my implementation/processing requires the pseudo-clock. I am pretty much a beginner in programming, so all I want is to get the system running.
Thanks a lot!
Stu.
You don't need to process the nine XML files concurrently, i.e. distributed across threads. The <instance> elements appear to be sorted by start or end time (which of the two may depend on what needs to be computed during an instance event), and you can process them all in their natural sequence: just determine what is next across the nine streams.
This way, also your issue relating to the pseudo clock ceases to be a problem. You can easily advance the clock to the next instance event once you have determined it.
Without knowing all the details, I think that each <instance> defines two events: the player starts moving and the player stops moving. And you may have to reassess the situation at each of these two events.
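That "what is next" selection is a standard k-way merge. A sketch in plain Java, where Instance is a made-up stand-in for the unmarshalled JAXB type and each file's list is assumed already sorted by start time:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

class InstanceMerger {
    // Stand-in for the unmarshalled <instance> element (illustrative).
    record Instance(double start, double end, String code) {}

    private record Head(Instance value, Iterator<Instance> rest) {}

    // k-way merge: repeatedly emit the earliest head among the remaining
    // streams, then refill the queue from the stream that head came from.
    static List<Instance> merge(List<List<Instance>> files) {
        PriorityQueue<Head> heads = new PriorityQueue<>(
                Comparator.comparingDouble((Head h) -> h.value().start()));
        for (List<Instance> file : files) {
            Iterator<Instance> it = file.iterator();
            if (it.hasNext()) {
                heads.add(new Head(it.next(), it));
            }
        }
        List<Instance> merged = new ArrayList<>();
        while (!heads.isEmpty()) {
            Head head = heads.poll();
            merged.add(head.value());
            if (head.rest().hasNext()) {
                heads.add(new Head(head.rest().next(), head.rest()));
            }
        }
        return merged;
    }
}
```

Instead of materialising the whole merged list, you could insert each instance into the session as it is emitted and advance the pseudo-clock to its start time.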
I'm developing a Java desktop flight simulation. I need to record all the pilot actions as they occur in the cockpit, such as throttle controls, steering, weapon deployment, etc. so that I can view these events at a later time (or stream them live).
I'd like to add a visual replay feature on the playback of the events so I can visually see the cockpit as I move forward and backward in time. There's no problem with the replay as long as I play back the event in chronological order, but the rewind is a little trickier.
How would you implement the rewind feature?
I would use a modified Memento pattern.
The difference would be that I would have the Memento object store a list of all of the pilot actions.
The Memento pattern is typically used for rolling back (undo), however in your case I could see it applying as well. You would need to have the pilot actions be store-able states as well.
You could use a variant of the Command Pattern and have each one of your pilot actions implement an undo operation.
For example if your pilot made the action steer left (simple, i know) the inverse of it would be steer right.
public interface IPilotAction {
    void doAction(CockpitState state);
    void undoAction(CockpitState state);
}

public class ThrottleControl implements IPilotAction {
    private boolean increase;
    private int speedAmount;

    public ThrottleControl(boolean increase, int speedAmount) {
        this.increase = increase;
        this.speedAmount = speedAmount;
    }

    public void doAction(CockpitState state) {
        if (increase) {
            state.speed += speedAmount;
        } else {
            state.speed -= speedAmount;
        }
    }

    public void undoAction(CockpitState state) {
        if (increase) {
            state.speed -= speedAmount;
        } else {
            state.speed += speedAmount;
        }
    }
}
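For example, a tiny rewind driver over such undoable actions might look like this (CockpitState and IPilotAction are repeated in minimal form so the sketch is self-contained; Replay is a made-up name):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal repeats of the types above, so this sketch stands alone.
class CockpitState {
    int speed;
}

interface IPilotAction {
    void doAction(CockpitState state);
    void undoAction(CockpitState state);
}

class Replay {
    private final Deque<IPilotAction> history = new ArrayDeque<>();

    // Play forward: apply the action and remember it.
    void apply(IPilotAction action, CockpitState state) {
        action.doAction(state);
        history.push(action);
    }

    // Rewind one step: undo the most recent action.
    void rewind(CockpitState state) {
        if (!history.isEmpty()) {
            history.pop().undoAction(state);
        }
    }
}
```

Stepping backward through time is then just repeated calls to rewind.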
What you're looking for is actually a blend of the Command and Memento patterns. Every pilot action should be a command that you can log. Every logged command has, if req'd, a memento recording any additional state that (A) is not in the command, and (B) cannot reliably be reconstructed. The "B" is important, there's some of this state in pretty much any non-trivial domain. It needs to be stored to recover an accurate reconstruction.
If you merge these concepts, essentially attaching a memento to each command, you'll have a fully logged series of deterministic events.
I discussed this at more length in a different answer. Don't be afraid to substantially adapt the design patterns to your specific needs. :)
RE Performance Concerns:
If you expect jumping a number of minutes to be a frequent case, and after implementation you show that it's an unworkable performance bottleneck, I would suggest implementing an occasional "snapshot" along with the logging mechanism. Essentially save the entire application state once every few minutes to minimize the amount of log-rolling that you need to perform. You can then access the desired timeframe from the nearest saved state. This is analogous to key frames in animation and media.
Not a direct answer, but check out discussions of implementing undo. Mostly they will be about text editors, but the same principles should apply.
It helps if you prefer immutability. Undoing complex changes is difficult; even automated systems, such as Software Transactional Memory (STM), have performance problems.
Make sure that you've implemented the simulation in such a way that the simulation's "state" is a function. That is, a function of time.
Given an initial state at time T0, you should be able to construct the simulation frame at time Tn for any n. For example, an initial stationary state and no events (yet) might equal the identity function, so Tn == Tn+1.
Given some pilot action event at time Ta, you should be able to construct a frame Ta+n for any n. So you think of events as modifying a function that takes a time value as argument and returns the frame of the simulation for that time.
I would implement the history of events as a Zipper of (time, function) pairs representing the control state of the simulation. The "current" state would be in focus, with a list of future states on the right, and past states on the left. Like so:
([past], present, [future])
Every time the simulation state changes, record a new state function in the future. Running the simulation then becomes a matter of taking functions out of the future list and passing the current time into them. Running it backwards is exactly the same except that you take events out of the past list instead.
So if you're at time Tn and you want to rewind to time Tn-1, look into the past list for the latest state whose time attribute is less than n-1. Pass n-1 into its function attribute, and you have the state of simulation at time Tn-1.
I've implemented a Zipper datastructure in Java, here.
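As a rough illustration of that structure, here is a minimal zipper over (time, function) pairs, with the simulation "frame" reduced to a String purely for illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.DoubleFunction;

// Zipper over (time, state-function) pairs: the focused entry is the
// "present", with past and future entries on either side of it.
class TimelineZipper {
    record Entry(double time, DoubleFunction<String> frame) {}

    private final Deque<Entry> past = new ArrayDeque<>();
    private Entry present;
    private final Deque<Entry> future = new ArrayDeque<>();

    TimelineZipper(Entry initial) {
        this.present = initial;
    }

    // Append a newly recorded state function to the future list.
    void record(Entry next) {
        future.addLast(next);
    }

    // Move focus one step forward: present goes into the past.
    void forward() {
        if (!future.isEmpty()) {
            past.push(present);
            present = future.pollFirst();
        }
    }

    // Move focus one step back: present returns to the future.
    void backward() {
        if (!past.isEmpty()) {
            future.addFirst(present);
            present = past.pop();
        }
    }

    // Evaluate the focused state function at the given time.
    String frameAt(double time) {
        return present.frame().apply(time);
    }
}
```

A full rewind-to-time operation would walk backward until the focused entry's time attribute is below the target, then evaluate its function at that target time.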
You can just store the state at every instant. 1 kB for state (wind speed, object speeds and orientations, control input states) x 30 fps x 20 min is about 36 MB. 1 kB of state would let you record about 16 objects (position / speed / angular speed / orientation / five axes of control or effect).
That may be too much for you, but it will be the easiest to implement. No work at all is needed to recreate state (instant access), and you can interpolate between states pretty easily (for faster or slower playback). For disk space you can just zip it, and that can be done while recording, so that memory is not being hogged during playback.
A quick way to save space would be to paginate the recording file and compress each bin separately, i.e. one zip stream per minute. That way you only have to decompress the current bin, saving a bunch of memory, though it depends on how well your state data zips.
Recording commands and having your classes implement both directions of playback would require a lot of debugging work. Slowing down or speeding up playback would also be more computationally intensive. And the only thing you save on is space.
If that's at a premium, there are other ways to save on it too.