I am trying to build an app in Java with Android Studio. What I want to do is generate an AudioTrack (right now basically a sine wave), but I want it to run as a separate thread (so multiple tracks can be started simultaneously, and the rest of the app is not locked while playing), and I want to be able to stop it on demand (for example by pushing a stop button). However, I also want to be able to change parameters such as frequency while the track is playing and have those changes take effect in more or less real time (a small delay is definitely acceptable).
This must be pretty standard, but I can't get my head around how to do it: if the class generating the audio is a separate class, I seem to lose access to the built-in functions (those that are not native to the Thread class), and I also can't change internal variables (like the frequency) while the audio is being generated.
Below is the code that currently produces the sound; it basically builds a buffer in a loop and then writes it to the audio track. Any ideas how I can do this?
for (int i = 0; i < mSound.length; i++) {
    // amplitude envelope at the beat frequency times a sine carrier at the pitch frequency
    double beatFreq = Math.sin((2.0 * Math.PI * i / (44100 / this.beat)));
    mSound[i] = beatFreq * Math.sin((2.0 * Math.PI * i / (44100 / this.pitch)));
    mBuffer[i] = (short) (mSound[i] * Short.MAX_VALUE);
}
mAudioTrack.setStereoVolume(AudioTrack.getMaxVolume(), AudioTrack.getMaxVolume());
mAudioTrack.play();
mAudioTrack.write(mBuffer, 0, mSound.length);
mAudioTrack.stop();
mAudioTrack.release();
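For reference, here is a minimal sketch of one way to structure this: the generator runs on its own thread, a volatile flag stops it, and a volatile pitch field can be changed from the UI while it plays (the change is picked up on the next buffer). The class and method names (TonePlayer, setPitch, stopPlaying) are just illustrative, not from the code above.

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class TonePlayer implements Runnable {

    private volatile boolean running = true;   // flag checked by the loop; set false to stop
    private volatile double pitch;             // can be changed from the UI thread while playing
    private static final int SAMPLE_RATE = 44100;

    public TonePlayer(double pitch) {
        this.pitch = pitch;
    }

    public void setPitch(double pitch) { this.pitch = pitch; }  // takes effect on the next buffer

    public void stopPlaying() { running = false; }

    @Override
    public void run() {
        int minSize = AudioTrack.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                minSize, AudioTrack.MODE_STREAM);
        track.play();

        short[] buffer = new short[minSize];
        double phase = 0.0;
        while (running) {
            // read the current pitch once per buffer so parameter changes are picked up
            // with at most one buffer of latency
            double increment = 2.0 * Math.PI * pitch / SAMPLE_RATE;
            for (int i = 0; i < buffer.length; i++) {
                buffer[i] = (short) (Math.sin(phase) * Short.MAX_VALUE);
                phase += increment;
            }
            track.write(buffer, 0, buffer.length);  // blocks until the buffer is consumed
        }
        track.stop();
        track.release();
    }
}

Usage would be something like new Thread(new TonePlayer(440)).start(), then setPitch(660) while it plays and stopPlaying() from the stop button; each TonePlayer you start on its own Thread plays independently.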
I'm working on a project where I will have one 24-hour-long sound clip which has different phases based on local daytime (the morning phase has one sound, then transition phases, the evening phase, etc.).
So here is what I have now, and it works.
This is the method that plays the clip (it turns the current local time into microseconds and sets the starting point to match it - if I start the program at 13:35 it will start playing the mid-day phase of the sound at that position), and it works:
void playMusic(String musicLocation) {
    try {
        File musicPath = new File(musicLocation);
        if (musicPath.exists())
        {
            Calendar calendar = Calendar.getInstance();
            // Returns current time in millis
            long timeMilli2 = calendar.getTimeInMillis();
            System.out.println("Time in microseconds using Calendar: " + (timeMilli2 * 1000));

            AudioInputStream audioInput = AudioSystem.getAudioInputStream(musicPath);
            Clip clip = AudioSystem.getClip();
            clip.open(audioInput);
            clip.setMicrosecondPosition(12345678); // example position; computed from the current time in practice
            clip.start();
            clip.loop(Clip.LOOP_CONTINUOUSLY);
            System.out.println(clip.getMicrosecondLength());
            //setFramePosition
            JOptionPane.showMessageDialog(null, "Press OK to stop playing");
        }
        else
        {
            System.out.println("no file");
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}
The main method that just calls this method:
public static void main(String[] args) {
    String filepath = "src/sounds/test_file.wav";
    PyramidMethods pyra = new PyramidMethods();
    pyra.playMusic(filepath);
}
Now, this is pretty simple and straightforward, and it is what I need, but what I wonder is the following: can I add sound effects based on the temperature outside, and if so, how?
What I was thinking is to open a separate thread in main which would regularly check some weather API and, when the temperature changes, add sound effects like echo, distortion or something else based on the change (if it's colder than x it would put an echo effect on the running clip, etc.).
Is this even possible in Java? It's my first time working with sound in Java, so I don't even know the right search terms here. Would someone suggest some other programming language for it?
Thanks in advance for your answers.
That must be a huge file!
Yes, Java works quite well for creating and managing soundscapes.
It is possible to play and hear different Clips at the same time. When you play them, Java automatically creates a Thread for that playback, and most operating systems will mix together all the playing threads. At one time there were Linux systems that only allowed a single output. IDK if that is still a limitation or if you are even targeting Linux systems. Also, there is going to be a ceiling on the total number of sound playbacks that an OS will be able to handle cleanly. (Usually what happens is you get dropouts if you overstress the system in this way.)
To manage the sounds, I'd consider using a util.Timer (not the Swing.Timer), and check the time and date (and weather if you have an API for that) with each iteration before deciding what to do with the constituent cues of your mix. Or maybe use a util.concurrent.ExecutorService. If your GUI is JavaFX, an AnimationTimer is also a reasonable choice.
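For example, a periodic check with java.util.Timer might look roughly like this; fetchOutsideTemperature() and updateSoundscape() are placeholders for your weather API call and your own cue-management logic, not real library methods.

import java.time.LocalTime;
import java.util.Timer;
import java.util.TimerTask;

public class SoundscapeScheduler {

    public void start() {
        Timer timer = new Timer(true);  // daemon thread, dies with the application
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                LocalTime now = LocalTime.now();
                double tempC = fetchOutsideTemperature();
                updateSoundscape(now, tempC);
            }
        }, 0, 60_000);  // re-evaluate once a minute
    }

    private double fetchOutsideTemperature() {
        return 20.0;  // stub: replace with a real weather API lookup
    }

    private void updateSoundscape(LocalTime now, double tempC) {
        // stub: start, stop, or adjust cues based on time of day and temperature
    }
}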
If you do prefer to mix the sound files down to a single output line, this can most easily be done by using a library such as TinySound or AudioCue. With AudioCue (which I wrote) you can both mix down to a single output, and have guaranteed volume, panning and even playback speed management for each sound cue that is part of your "soundscape".
This could help with lowering the total amount of RAM needed to run the program. As I show in a demo, one can take a single cue (e.g. a frog croak) and play it multiple times at different volumes, pans, and speeds to create the illusion of a whole pond of frogs croaking. Thus, a single .wav only a second in length can be used to simulate a .wav that is hours in length.
I think if you want to add effects like echo or distortion, you will have to use a library or write your own. Java supports Processing Audio with Controls, but this is highly dependent upon the OS of the computer being used. Echo and Distortion are not terribly difficult to write though, and could be added to the AudioCue library code if you have incorporated that into your program. (Echo involves adding a time delay, usually using an array to hold sound data until it is time for it to play, and Distortion involves running the PCM sound data through a transform function, such as Math.tanh and a max and min to keep the results within the [-1, 1] range.)
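As a rough illustration of those two ideas (not AudioCue's actual implementation), here is a sketch operating on PCM samples normalized to the range [-1, 1]; the parameter names are illustrative.

public class SimpleEffects {

    // Distortion: apply gain, then push the signal through tanh (soft clipping).
    public static float distort(float sample, float gain) {
        return (float) Math.tanh(sample * gain);
    }

    // Echo: a delay line holds past samples; each output mixes in an attenuated
    // copy of what was heard delayBuffer.length frames ago.
    public static class Echo {
        private final float[] delayBuffer;
        private final float feedback;
        private int pos;

        public Echo(int delayFrames, float feedback) {
            this.delayBuffer = new float[delayFrames];
            this.feedback = feedback;
        }

        public float process(float sample) {
            float delayed = delayBuffer[pos];
            float out = sample + delayed * feedback;
            delayBuffer[pos] = out;                    // store for a future repeat
            pos = (pos + 1) % delayBuffer.length;
            return Math.max(-1f, Math.min(1f, out));   // keep the result within [-1, 1]
        }
    }
}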
For other possible libraries or languages, I believe both the Unity (C#) and Unreal (C++) game engines/environments have an extensive array of audio effects implemented, including 3D handling.
I'm writing Java code to control a fairly simple robot, which should execute the following actions: PID line following, ultrasonic detection and color detection.
As this is my first program in java, I obviously have lots to learn in regards to OOP.
The robot runs on a track where the line is accompanied by colors on the road, which the robot should periodically check for and if found, act differently based on which color it reads.
So the process should run somewhat like the following pseudo (Java) code:
Initialize and calibrate sensors.
while (!Button.ENTER.isDown)
    Run PID-controller
    if (ColorSensorColor == 0 || ColorSensorColor == 2)
        if (color == 0)
            turn left
        if (color == 2)
            turn right
    while (UltraSonicDistance < 30cm)
        free-roll motors
My question, therefore, is: how do I construct two threads that can run the ColorSensor and UltraSonicSensor in parallel with a main thread?
The latest actual code is situated here
Lastly, thanks for all your input - I've scoured the interwebz for good tutorials, but it seems that I have too few braincells to comprehend the mother of all OOP.
/u/evil_burrito on /r/javahelp kindly answered with the following, working suggestion:
First, you might want to consider a real time JVM, if you're not already. If your controller has to run uninterrupted, that might be something to consider. FWIW, RT is not my area of expertise at all.
However, your basic question about threads is easier to answer.
Create an inner class for each of the ultrasonic sensor and the color sensor. Have those classes implement Runnable. In the run method, check the appropriate sensor and update a volatile variable in the outer class.
Use a ScheduledExecutorService to execute a sensor check for each of these classes (create an instance of the inner class and submit it to the executor to be run at intervals of 1ms or 100 microseconds, or whatever).
The main class can just monitor the values of the volatile variables and decide what to do w/o interruption.
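Putting that suggestion together, a rough sketch might look like the following; the sensor-reading and motor methods are stubs standing in for whatever your robot's API (e.g. leJOS) actually provides, and the 10 ms polling interval is just an example.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RobotController {

    private volatile int color;           // last value read by the color-sensor task
    private volatile float distanceCm;    // last value read by the ultrasonic task

    private class ColorTask implements Runnable {
        @Override public void run() { color = readColor(); }
    }

    private class DistanceTask implements Runnable {
        @Override public void run() { distanceCm = readDistance(); }
    }

    public void start() {
        ScheduledExecutorService exec = Executors.newScheduledThreadPool(2);
        exec.scheduleAtFixedRate(new ColorTask(), 0, 10, TimeUnit.MILLISECONDS);
        exec.scheduleAtFixedRate(new DistanceTask(), 0, 10, TimeUnit.MILLISECONDS);

        // Main control loop: the PID controller reads the latest sensor values
        // from the volatile fields without ever blocking on the sensors themselves.
        while (!enterPressed()) {
            runPid();
            if (color == 0) turnLeft();
            else if (color == 2) turnRight();
            while (distanceCm < 30) freeRollMotors();
        }
        exec.shutdownNow();
    }

    // --- stubs for hardware-specific calls ---
    private int readColor() { return 1; }
    private float readDistance() { return 100f; }
    private boolean enterPressed() { return false; }
    private void runPid() {}
    private void turnLeft() {}
    private void turnRight() {}
    private void freeRollMotors() {}
}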
I have a small Android app that I have been working on that logs GPS data to my SD card in a GPX file. I currently have a main Activity that starts a Service to do all the background work. The service is kept in the foreground in the notification bar to make it the least likely thing to be killed by the OS. Currently I am requesting location updates from the service at the maximum frequency to get the most accurate route. The problem I am having is my User Interface is acting slow/strange. Correct me if I am wrong, but what I have concluded is that I have too much going on in the main thread of the app. My next thought is to try and move the service performing the acquiring and logging of data to a separate thread. I am new to Java/Android so the whole topic of interacting with separate threads is hard for me to wrap my head around. Initially in research I came across IntentServices, which are supposed to make threading easier, but from what I read these don’t seem to mix well with the Android location package because they don’t run long enough. I feel like I am running in circles with internet searches on this topic. I desperately need some guidance on how to achieve the following features for my programs service:
Separate thread from Main Thread
Fetching and storing of data must be the least likely thing to be killed by the OS and run indefinitely once started (don’t worry about battery I will have the device plugged in to power while running the app)
Eventually I will need the ability to interact with the User Interface
Thanks for any help you can offer!
This is a common problem that I have spent a lot of time on.
In the launcher or main() (what Android calls an Activity) you do as little as possible (which amounts to saving the refs they give you and maybe setting a few other things while you are there) and do *not* drop into any long-running work.
A Service is exactly what you need, but instead of trying to force it into a "hold on to it" state, what you do is implement checks for nulls and handle them as needed -- making the machine run the way you want here actually means giving up your hold on the main thread and letting it run as fast as is consistent with the application's general constraints.
To do this you can simply write a Service (reading everything available on the topic), then extend that Service and implement Runnable. Construct that class from the Activity and do new Thread(yourClass).start(); in onCreate(), checking whether the thread is already alive (Thread.isAlive()) before starting it again...
The Service will signal completion somewhere - typically through a callback interface.
In Android this is done with something like startActivityForResult(); you then do the UI work in that callback, or otherwise figure out a way for the GUI to get called at some point, check whether the Service is done, and report that in the GUI.
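A bare-bones sketch of that idea (a Service that runs its work on a worker thread, started at most once) might look like this; the loop body is left as a stub, and the running check uses Thread.isAlive(), which is the actual API.

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class LoggingService extends Service implements Runnable {

    private Thread worker;
    private volatile boolean running;

    @Override
    public void onCreate() {
        super.onCreate();
        if (worker == null || !worker.isAlive()) {   // start the worker at most once
            running = true;
            worker = new Thread(this);
            worker.start();
        }
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        return START_STICKY;   // ask the OS to recreate the service if it is killed
    }

    @Override
    public void run() {
        while (running) {
            // acquire and log data off the main thread here
        }
    }

    @Override
    public void onDestroy() {
        running = false;       // lets the worker loop exit
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;           // not a bound service in this sketch
    }
}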
I am playing a MIDI song using a Java Sequencer. The song is designed to be looped continuously, which I can do easily with
sequencer.setLoopCount(Sequencer.LOOP_CONTINUOUSLY)
When played through the internal (soundcard) synthesizer this works fine and (with the addition of a dummy event if necessary) the loop timing is spot on.
However when played through an external (USB or serial) synth there is a noticeable gap in the output at the point where it loops around. This is explained by the fact that there are many setup events at the start of the .mid file that take some time to be sent over the serial line.
What I would like to try is isolating the one-time setup events into their own Sequence which is sent to the device once when the song is loaded but kept out of the main (looped) Sequence.
Is there a simple algorithm (or library function) that can distinguish the two kinds of event?
It would need to provide for:
Registered parameter changes, which are sent as a group of related messages.
Occasionally channel program changes are sent in the middle of a track (and must be part of the looped sequence), but where the same program is kept throughout the song (the majority of cases) the program change should be part of the setup sequence. The same applies to tempo changes.
Take a look at javax.sound.midi. Sequence consists of Tracks. Tracks contain MidiEvents. MidiEvents are a combination of a timestamp and a MidiMessage.
MidiMessage has subclasses ShortMessage, MetaMessage and SysexMessage.
Most probably filtering out SysexMessages at tick 0 (MidiEvent.getTick() == 0) will do the trick. If not, then try also filtering the MetaMessages at tick 0. Note information, program changes, etc. are done via ShortMessages; do not filter those.
for each track in sequence {
    for all midievents in track at tick 0 {
        remove from track if instanceof SysexMessage or MetaMessage
    }
}
The other part is to create the initialization Sequence. Just create Sequence with same divisionType and resolution. One track is enough, you can add all events removed from the looping Sequence to a single Track in the initialization Sequence.
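As a rough translation of that pseudocode into real javax.sound.midi calls (an untested sketch; whether you also move MetaMessages depends on your files, and the end-of-track meta event is deliberately left alone):

import javax.sound.midi.InvalidMidiDataException;
import javax.sound.midi.MetaMessage;
import javax.sound.midi.MidiEvent;
import javax.sound.midi.MidiMessage;
import javax.sound.midi.Sequence;
import javax.sound.midi.SysexMessage;
import javax.sound.midi.Track;
import java.util.ArrayList;
import java.util.List;

public class MidiSetupSplitter {

    // Moves Sysex and Meta events at tick 0 out of the looping Sequence
    // and returns them collected into a single-track setup Sequence.
    public static Sequence extractSetup(Sequence looped) throws InvalidMidiDataException {
        Sequence setup = new Sequence(looped.getDivisionType(), looped.getResolution());
        Track setupTrack = setup.createTrack();

        for (Track track : looped.getTracks()) {
            List<MidiEvent> toMove = new ArrayList<>();
            for (int i = 0; i < track.size(); i++) {
                MidiEvent event = track.get(i);
                if (event.getTick() != 0) {
                    break;   // events in a Track are kept in tick order, so we can stop here
                }
                MidiMessage msg = event.getMessage();
                if (msg instanceof MetaMessage && ((MetaMessage) msg).getType() == 0x2F) {
                    continue;   // leave the end-of-track meta event where it is
                }
                if (msg instanceof SysexMessage || msg instanceof MetaMessage) {
                    toMove.add(event);
                }
            }
            for (MidiEvent event : toMove) {
                track.remove(event);       // take it out of the looping sequence
                setupTrack.add(event);     // and put it in the setup sequence
            }
        }
        return setup;
    }
}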
I'm developing a Java desktop flight simulation. I need to record all the pilot actions as they occur in the cockpit, such as throttle controls, steering, weapon deployment, etc. so that I can view these events at a later time (or stream them live).
I'd like to add a visual replay feature on the playback of the events so I can visually see the cockpit as I move forward and backward in time. There's no problem with the replay as long as I play back the event in chronological order, but the rewind is a little trickier.
How would you implement the rewind feature?
I would use a modified Memento pattern.
The difference would be that I would have the Memento object store a list of all of the pilot actions.
The Memento pattern is typically used for rolling back (undo), however in your case I could see it applying as well. You would need to have the pilot actions be store-able states as well.
You could use a variant of the Command Pattern and have each one of your pilot actions implement an undo operation.
For example, if your pilot made the action "steer left" (simple, I know), the inverse of it would be "steer right".
public interface IPilotAction {
    void doAction(CockpitState state);
    void undoAction(CockpitState state);
}
public class ThrottleControl implements IPilotAction {

    private boolean increase;
    private int speedAmount;

    public ThrottleControl(boolean increase, int speedAmount) {
        this.increase = increase;
        this.speedAmount = speedAmount;
    }

    public void doAction(CockpitState state) {
        if (increase) {
            state.speed += speedAmount;
        } else {
            state.speed -= speedAmount;
        }
    }

    public void undoAction(CockpitState state) {
        if (increase) {
            state.speed -= speedAmount;
        } else {
            state.speed += speedAmount;
        }
    }
}
What you're looking for is actually a blend of the Command and Memento patterns. Every pilot action should be a command that you can log. Every logged command has, if req'd, a memento recording any additional state that (A) is not in the command, and (B) cannot reliably be reconstructed. The "B" is important, there's some of this state in pretty much any non-trivial domain. It needs to be stored to recover an accurate reconstruction.
If you merge these concepts, essentially attaching a memento to each command, you'll have a fully logged series of deterministic events.
I discussed this at more length in a different answer. Don't be afraid to substantially adapt the design patterns to your specific needs. :)
RE Performance Concerns:
If you expect jumping a number of minutes to be a frequent case, and after implementation you show that it's an unworkable performance bottleneck, I would suggest implementing an occasional "snapshot" along with the logging mechanism. Essentially save the entire application state once every few minutes to minimize the amount of log-rolling that you need to perform. You can then access the desired timeframe from the nearest saved state. This is analogous to key frames in animation and media.
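A sketch of what that could look like, reusing the IPilotAction and CockpitState types from the command-pattern answer above; CockpitState.copy() as a deep copy is my own assumption, and the one-minute snapshot interval is just an example.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ReplayLog {

    private static class TimedAction {
        final long timeMillis;
        final IPilotAction action;
        TimedAction(long timeMillis, IPilotAction action) {
            this.timeMillis = timeMillis;
            this.action = action;
        }
    }

    private final List<TimedAction> actions = new ArrayList<>();
    private final TreeMap<Long, CockpitState> snapshots = new TreeMap<>();

    public ReplayLog(CockpitState initial) {
        snapshots.put(0L, initial.copy());   // always have a snapshot to start from
    }

    public void record(long timeMillis, IPilotAction action, CockpitState current) {
        actions.add(new TimedAction(timeMillis, action));
        if (timeMillis - snapshots.lastKey() > 60_000) {
            snapshots.put(timeMillis, current.copy());   // snapshot roughly once a minute
        }
    }

    // Rebuild the state at an arbitrary time: start from the nearest earlier snapshot
    // and replay only the commands logged after it, instead of rolling from the start.
    public CockpitState stateAt(long timeMillis) {
        Map.Entry<Long, CockpitState> entry = snapshots.floorEntry(timeMillis);
        CockpitState state = entry.getValue().copy();
        for (TimedAction ta : actions) {
            if (ta.timeMillis > entry.getKey() && ta.timeMillis <= timeMillis) {
                ta.action.doAction(state);
            }
        }
        return state;
    }
}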
Not a direct answer, but check out discussion of implementing undo. Mostly they will be about text editors, but the same principles should apply.
It helps if you prefer immutability. Undoing complex changes is difficult. Even automated systems have performance problems (Software Transactional Memory, STM).
Make sure that you've implemented the simulation in such a way that the simulation's "state" is a function. That is, a function of time.
Given an initial state at time T0, you should be able to construct the simulation frame at time Tn for any n. For example, an initial stationary state and no events (yet) might equal the identity function, so Tn == Tn+1.
Given some pilot action event at time Ta, you should be able to construct a frame Ta+n for any n. So you think of events as modifying a function that takes a time value as argument and returns the frame of the simulation for that time.
I would implement the history of events as a Zipper of (time, function) pairs representing the control state of the simulation. The "current" state would be in focus, with a list of future states on the right, and past states on the left. Like so:
([past], present, [future])
Every time the simulation state changes, record a new state function in the future. Running the simulation then becomes a matter of taking functions out of the future list and passing the current time into them. Running it backwards is exactly the same except that you take events out of the past list instead.
So if you're at time Tn and you want to rewind to time Tn-1, look into the past list for the latest state whose time attribute is less than n-1. Pass n-1 into its function attribute, and you have the state of simulation at time Tn-1.
I've implemented a Zipper datastructure in Java, here.
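For a rough idea of the shape of that structure, here is a much simplified sketch that stores plain timestamped states rather than functions; "Frame" stands in for whatever your simulation-state type is.

import java.util.ArrayDeque;
import java.util.Deque;

public class TimelineZipper<Frame> {

    private static class Entry<F> {
        final long time;
        final F frame;
        Entry(long time, F frame) { this.time = time; this.frame = frame; }
    }

    private final Deque<Entry<Frame>> past = new ArrayDeque<>();
    private final Deque<Entry<Frame>> future = new ArrayDeque<>();
    private Entry<Frame> present;   // the focus of the zipper

    public TimelineZipper(long time, Frame initial) {
        present = new Entry<>(time, initial);
    }

    // Record a new state; it becomes the focus and the old focus moves into the past.
    public void append(long time, Frame frame) {
        past.push(present);
        present = new Entry<>(time, frame);
        future.clear();   // appending after a rewind discards the old future
    }

    // Move the focus one step back in time, if possible.
    public void rewind() {
        if (!past.isEmpty()) {
            future.push(present);
            present = past.pop();
        }
    }

    // Move the focus one step forward in time, if possible.
    public void forward() {
        if (!future.isEmpty()) {
            past.push(present);
            present = future.pop();
        }
    }

    public long currentTime() { return present.time; }
    public Frame currentFrame() { return present.frame; }
}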
You can just store the state at every instant. 1 KB for state (wind speed, object speeds + orientation, control input states) x 30 fps x 20 min ~ 36 MB. 1 KB of state would let you record about 16 objects (position / speed / angular speed / orientation / and 5 axes of control / effect).
That may be too much for you, but it will be the easiest to implement. No work at all has to be done to recreate state (instant access), and you can interpolate between states pretty easily (for faster / slower playback). For disk space you can just zip it, and that can be done while recording, so that memory is not being hogged while playing.
A quick way to save space would be to paginate the recording file and compress each bin separately, i.e. one zip stream for each minute. That way you would only have to decompress the current bin, saving a bunch of memory, but that depends on how well your state data zips.
Recording commands and having your classes implement multiple directions of playback would require a lot of debugging work. Slowing down / speeding up playback would also be more computationally intensive, and the only thing you save on is space.
If that is at a premium, there are other ways to save on it too.
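If it helps, the per-minute pagination idea can be sketched roughly like this with java.util.zip; the file layout (one entry named per minute) and the StateRecorder name are just illustrative.

import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class StateRecorder implements AutoCloseable {

    private final ZipOutputStream out;
    private int currentMinute = -1;

    public StateRecorder(String path) throws IOException {
        out = new ZipOutputStream(new FileOutputStream(path));
    }

    // frame: the serialized state for one tick (e.g. ~1 KB of positions, speeds, inputs)
    public void write(long timeMillis, byte[] frame) throws IOException {
        int minute = (int) (timeMillis / 60_000);
        if (minute != currentMinute) {
            if (currentMinute >= 0) {
                out.closeEntry();                   // finish the previous minute's bin
            }
            out.putNextEntry(new ZipEntry("minute-" + minute + ".bin"));
            currentMinute = minute;
        }
        out.write(frame);                           // compressed as it is written
    }

    @Override
    public void close() throws IOException {
        if (currentMinute >= 0) {
            out.closeEntry();
        }
        out.close();
    }
}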