WHAT I KNOW...
In LibGDX there are two classes for playing audio: Music and Sound.
When you want to play a short sound (under a minute), it is good practice to use the Sound class, because the sound is loaded entirely into memory.
When you want to play long music (over a minute), it is good practice to use the Music class, because it is not loaded into memory but streamed from disk.
WHAT I DO...
I use the Music class to play background and loading-screen music in my game.
WHAT PROBLEM I HAVE...
The problem is that when I play music through the Music class while reading data from disk (atlases, for example), the music plays with jitter. As I understand it, the problem lies in the streaming, since the jitter only occurs while I'm reading from disk. It seems there is no way to run two fully separate threads for I/O: one for streaming the music and another for everything else, such as file reads and writes. I tried playing the music in a new thread, but nothing changed.
Any ideas?
Thanks.
Related
I'm trying to write a simple "drum machine" type app for Android. Currently I'm prototyping it in Processing.
I'm looking for a way to play audio samples using the Android SDK.
In particular I need :
to play the sample in a separate thread, so that it doesn't hold up the main UI thread
to be able to play multiple samples simultaneously
to be able to play the same sample overlapping with itself (e.g. if a particular sound has a longish decay, to be able to launch a new instance of it while the previous one is still finishing)
NOT to have the overhead of loading the audio file and creating and instantiating a player object each time I play the sample
I've read a bit about AAudio, but I need to support older versions of Android (ideally back to 4, but at least 5).
Ideally I want to do this in Java, without having to fall back to the NDK if I can possibly avoid it.
So I'm thinking I want to load my samples into buffers in memory and have some kind of pool of "player" objects, each of which can iterate through a buffer independently, in its own thread, while sending to a common audio stream.
Any idea what I should be using for this?
Most of what I've found seems to be monolithic "audio player" objects that do everything from loading to playing an audio sample when invoked. I think I'm looking for lower-level components than that, but I'm not sure where to find them.
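That buffer-plus-player-pool design can be sketched in plain Java. Names like Voice and Mixer are my own placeholders, and in a real Android app the mixed block would be written to an AudioTrack; this only shows the mixing structure:

```java
import java.util.ArrayList;
import java.util.List;

// Each Voice reads independently from a preloaded sample buffer;
// the Mixer sums all active voices into one output block.
class Voice {
    final short[] sample; // shared, preloaded sample data
    int pos = 0;          // each voice keeps its own read position

    Voice(short[] sample) { this.sample = sample; }

    boolean done() { return pos >= sample.length; }
}

class Mixer {
    private final List<Voice> voices = new ArrayList<>();

    // Triggering the same sample twice creates two overlapping voices.
    public void trigger(short[] sample) { voices.add(new Voice(sample)); }

    // Fill one output block by summing voices, with simple hard clipping.
    public short[] mixBlock(int frames) {
        short[] out = new short[frames];
        for (int i = 0; i < frames; i++) {
            int acc = 0;
            for (Voice v : voices)
                if (!v.done()) acc += v.sample[v.pos++];
            acc = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, acc));
            out[i] = (short) acc;
        }
        voices.removeIf(Voice::done);
        return out;
    }
}
```

The samples are loaded once, and triggering a voice is just adding a small object to a list, so there is no per-play file loading overhead.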
I am developing a musical piano app for Android. In this app I want to record the sound the user plays by pressing the piano keys. I am using SoundPool to play the piano sounds. For recording, Android gives us two APIs, MediaRecorder and AudioRecord, but for both we have to set a MediaRecorder.AudioSource. I don't want to record from the microphone, because the user's own voice could be picked up and, more importantly, the sound quality suffers in the recording. I then tried reading the bytes of the resource file that is played when a piano key is pressed, appending them to a global byte array each time the user presses a key. But when I play back the global byte array, it plays only one resource file, and only once.
More importantly, I am new to Android development. Kindly guide me.
Although I did not find the answer to my original question, I am now using a recording technique that I want to share with all of you. The approach above is totally the wrong option when you are trying to save music created by your app at run time. For my first version, I simply save the piano key presses along with the system time in an array. When I replay those events, the app plays the same sounds the user recorded. I save these notes in a database so users can reuse them later. This is not a complete solution, because the notes can only be played back inside our app, but it is still a good feature to add. When I find the proper solution, I will share it with you guys.
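For anyone curious, the event-recording idea described above can be sketched like this in plain Java (class and method names here are illustrative, not from any Android API):

```java
import java.util.ArrayList;
import java.util.List;

// Instead of capturing audio, record each key press as (note, time offset)
// and replay the events later through the same SoundPool.
class NoteRecorder {
    public static class NoteEvent {
        public final int note;      // e.g. a piano key index
        public final long offsetMs; // time since recording started
        NoteEvent(int note, long offsetMs) { this.note = note; this.offsetMs = offsetMs; }
    }

    private final List<NoteEvent> events = new ArrayList<>();
    private long startMs = -1;

    public void startRecording(long nowMs) { startMs = nowMs; events.clear(); }

    // Call this from the key-press handler while recording.
    public void onKeyPressed(int note, long nowMs) {
        if (startMs >= 0) events.add(new NoteEvent(note, nowMs - startMs));
    }

    public List<NoteEvent> getEvents() { return events; }
}
```

Playback is then just scheduling each event's sound at its stored offset, and the event list is small enough to store in a database.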
The basic idea is to combine images, a voice track, and background music into a single movie. I am able to do this with images and a voice track (a single audio file), but now I want to add two audio files to the same movie.
Please help.
You need to load the two sound clips into memory as a single unit, then play them.
Refer to http://www.java-gaming.org/index.php?topic=1948.0 for a great example of how to do this.
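For illustration, here is a minimal sketch (my own, not the linked example) of what combining two clips "as a single unit" can mean once both are decoded to 16-bit PCM at the same sample rate: sum them sample by sample with clipping, then play the result as one buffer.

```java
// Mix two 16-bit PCM buffers of possibly different lengths into one,
// clamping the sum to the short range to avoid wrap-around distortion.
class PcmMix {
    public static short[] mix(short[] a, short[] b) {
        int len = Math.max(a.length, b.length);
        short[] out = new short[len];
        for (int i = 0; i < len; i++) {
            int s = (i < a.length ? a[i] : 0) + (i < b.length ? b[i] : 0);
            s = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, s));
            out[i] = (short) s;
        }
        return out;
    }
}
```

The decoding step (compressed file to PCM) is what the linked examples cover; the mixing itself is just this summation.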
I found this solution, which worked for me on Ubuntu 11.10:
http://www.jsresources.org/examples/AudioConcat.html
with source code.
I'm writing an accompaniment application that continuously needs to play specific notes (or even chords). I have a thread running that figures out which note to play, but I have no idea where to begin with the actual playback. I know I can create an AudioTrack and write a sine wave to it, but for this project a simple tone won't cut it. So I'm guessing I either need to use MIDI (can Android do that?) or somehow take a sample and change its pitch on the fly, but I don't know whether that's even possible.
All I can say is to check out pitch shifting (which you seem to have heard of) and SoundPool (which would require some recordings of your own), plus these two links:
Audio Playback Rate in Android
Programmatically increase the pitch of an array of audio samples
The second link seems to have more info.
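To make the playback-rate idea from the first link concrete: playing a sample at rate r is equivalent to resampling it by that factor. A crude linear-interpolation resampler in plain Java (note this shifts pitch and duration together; true pitch shifting without changing duration needs the more involved techniques the links discuss):

```java
// Resample a 16-bit PCM buffer by the given rate using linear interpolation.
// rate > 1.0 raises the pitch (and shortens the sample); rate < 1.0 lowers it.
class Resampler {
    public static short[] resample(short[] in, double rate) {
        int outLen = (int) (in.length / rate);
        short[] out = new short[outLen];
        for (int i = 0; i < outLen; i++) {
            double srcPos = i * rate;          // fractional read position
            int i0 = (int) srcPos;
            int i1 = Math.min(i0 + 1, in.length - 1);
            double frac = srcPos - i0;
            out[i] = (short) ((1 - frac) * in[i0] + frac * in[i1]);
        }
        return out;
    }
}
```

On Android, SoundPool's play() rate parameter and AudioTrack's playback-rate control do essentially this for you at playback time.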
I have made a few simple apps on Android and thought it was time for something a bit more complex. So I thought I'd try something that's already out there, but build it from scratch.
The idea is to create an app that lets the user play piano by pressing virtual keys on the display. But I'm not sure how to go about synthesizing the sound of each note: is it best to store a recording of each note in a file, or is there a more dynamic way of synthesizing notes and chords on the fly?
I have worked with C++ so NDK stuff is also okay.
Thanks for any help.
Sound playback (handing off buffers) pretty much has to be done through the Android Java APIs.
Synthesis could be done in native code or Java, whichever is preferred.
Short (uncompressed) samples could be played back repeatedly, but you probably also want an attack transient. Perhaps you could have an attack, a sustain, and a release, repeating the sustain for as long as the key is held down. Ideally each sample should be an integral number of periods of its fundamental frequency long, so that you don't get a transient when you change from attack to sustain or from sustain to release.
I'm sure you can find code somewhere for an FM or other synthesizer; this you might well want to implement in a native library that hands buffers off to Java code to pass to the audio APIs.
What is too bad is that Android already has an internal MIDI synthesizer, but it apparently lacks a dynamic interface, so it can only play MIDI files.
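To sketch what a minimal FM voice looks like (illustrative plain Java; on Android the returned buffer would be written to an android.media.AudioTrack, which is not shown here): a modulator sine wave modulates the phase of a carrier sine wave.

```java
// Render one block of a simple two-operator FM voice as 16-bit PCM.
// modIndex = 0 degenerates to a plain sine wave at carrierHz.
class FmSynth {
    public static short[] render(double carrierHz, double modHz,
                                 double modIndex, int sampleRate, int frames) {
        short[] out = new short[frames];
        for (int n = 0; n < frames; n++) {
            double t = (double) n / sampleRate;
            double mod = Math.sin(2 * Math.PI * modHz * t);
            double s = Math.sin(2 * Math.PI * carrierHz * t + modIndex * mod);
            out[n] = (short) (s * 0.8 * Short.MAX_VALUE); // headroom below full scale
        }
        return out;
    }
}
```

Varying modIndex and the carrier-to-modulator frequency ratio is what gives FM its range of timbres; a piano-like tone would also need an amplitude envelope on top of this.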
By far the easiest solution would be to record the sound of each note on the piano and play the recording back when the key is pressed. Many professional virtual piano instruments work this way, recording every note on the piano played at multiple velocities. Obviously this can take many gigabytes of disk space, but for a mobile phone app you might get away with a single MP3 recording of each note in an octave.
Actually, algorithmically synthesizing the sound of a piano is very difficult to do, and until fairly recently very few have done it convincingly (Pianoteq is one of the best current implementations).