Playing audio samples - Java

I'm trying to write a simple "drum-machine" type app for Android. Currently I'm prototyping it in Processing.
I'm looking for the right way to play audio samples using the Android SDK.
In particular I need:
to play a sample in a separate thread, so that it doesn't hold up the main UI thread
to be able to play multiple samples simultaneously
to be able to play the same sample simultaneously (e.g. if a particular sound has a longish decay, to be able to launch a new instance of it while the previous one is still finishing)
NOT to have the overhead of loading the audio file and creating and instantiating a player object each time I play the sample
I've read a bit about AAudio, but I need to support older versions of Android (ideally back to 4, but at least 5)
ideally I want to do this in Java, and not have to drop down to the NDK if I can possibly avoid it.
So I'm thinking I want to load my samples into buffers in memory and have some kind of pool of "player" objects, each of which can iterate through a buffer independently, in its own thread, while sending to a common audio stream.
Any idea what I should be using for this?
Most of what I've found seems to be monolithic "audio player" objects which do everything from loading the file to playing it when invoked. I think I'm looking for lower-level components than this, but I'm not sure where to look for them.
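For what it's worth, Android's SoundPool covers most of the requirements listed above: samples are decoded into memory once, mixing happens off the UI thread, and the same sample can overlap itself. A minimal sketch, assuming short uncompressed samples under res/raw (the resource names are placeholders) and using the pre-API-21 constructor for compatibility with older Android versions:

```java
import android.content.Context;
import android.media.AudioManager;
import android.media.SoundPool;

public class DrumKit {

    private final SoundPool soundPool;
    private final int kickId;
    private final int snareId;

    public DrumKit(Context context) {
        // Deprecated constructor, but available on old API levels;
        // on API 21+ you would use SoundPool.Builder instead.
        soundPool = new SoundPool(8 /* max simultaneous streams */,
                AudioManager.STREAM_MUSIC, 0);

        // Samples are decoded once, up front, into memory.
        // R.raw.kick / R.raw.snare are placeholder resource names.
        kickId = soundPool.load(context, R.raw.kick, 1);
        snareId = soundPool.load(context, R.raw.snare, 1);
    }

    public void playKick() {
        // play() is non-blocking; mixing happens off the UI thread, and calling it
        // again before the sample ends overlaps a new instance of the same sound.
        soundPool.play(kickId, 1.0f, 1.0f, 1, 0 /* no loop */, 1.0f /* normal rate */);
    }

    public void playSnare() {
        soundPool.play(snareId, 1.0f, 1.0f, 1, 0, 1.0f);
    }

    public void release() {
        soundPool.release();
    }
}
```

The maxStreams argument fixes the number of voices mixed at once, which is roughly the "pool of players" described in the question.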

Related

java: how to read RTSP stream frame by frame

For performance reasons I ditched the Python/OpenCV/FFmpeg solution and moved on to Java.
But to my surprise, I'm not able to find any solution as complete as what we have in Python. I tried using vlcj, but it mostly offers a command-line-style interface; I can't find any callback mechanism for reading and analysing all the frames.
I also tried using Java sockets, but wasn't able to do anything more than establish a connection with an IP camera streaming H.264 video over RTSP.
Note: this will be running in a server environment, so we don't want to display any frames; we just need to run certain other operations on them.
Please guide me in the right direction.
If you want access to the video frame buffer while media is playing, you have a couple of options.
I'll assume you are using vlcj 4.x+, which is current at the time of writing.
First, you can use an EmbeddedMediaPlayer with a CallbackVideoSurface.
You can use the MediaPlayerFactory to create your video surface.
When you create your video surface, it requires a RenderCallback implementation that you provide.
Create the embedded media player as normal, and invoke mediaPlayer.setVideoSurface() to set your video surface.
It is this render callback implementation class that will be called back by VLC with raw video frame data in the form of a ByteBuffer backed by native memory. You can then do your analysis on the data in this byte buffer.
The second approach is to look instead at the CallbackMediaPlayerComponent class - this class aims to make it very easy for you to get an out-of-the-box working media player and provides a way for you to plug in only the bits you want to customise. In this case you plug in your render callback implementation to do your analysis.
There are examples in the vlcj source code at the github project page that show all of this. One of the examples processes this buffer to dynamically convert the video to greyscale, but obviously you can do anything you want with the frame data.
The method is named "onDisplay()" but you do not have to actually display the video anywhere if you're only interested in performing some analysis.
This is the extent of what vlcj can provide if you want to access the video frame data.
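A rough sketch of the first approach (EmbeddedMediaPlayer plus a callback video surface), written against vlcj 4.x from memory: exact class and method names can differ between 4.x releases (for example, the answer mentions mediaPlayer.setVideoSurface(), while some releases use the fluent mediaPlayer.videoSurface().set(...) shown here), so treat it as an outline rather than copy-paste code. The RTSP URL is a placeholder.

```java
import java.nio.ByteBuffer;

import uk.co.caprica.vlcj.factory.MediaPlayerFactory;
import uk.co.caprica.vlcj.player.base.MediaPlayer;
import uk.co.caprica.vlcj.player.embedded.EmbeddedMediaPlayer;
import uk.co.caprica.vlcj.player.embedded.videosurface.callback.BufferFormat;
import uk.co.caprica.vlcj.player.embedded.videosurface.callback.BufferFormatCallback;
import uk.co.caprica.vlcj.player.embedded.videosurface.callback.RenderCallback;
import uk.co.caprica.vlcj.player.embedded.videosurface.callback.format.RV32BufferFormat;

public class FrameGrabber {

    public static void main(String[] args) throws InterruptedException {
        MediaPlayerFactory factory = new MediaPlayerFactory();
        EmbeddedMediaPlayer mediaPlayer = factory.mediaPlayers().newEmbeddedMediaPlayer();

        // Ask VLC to deliver 32-bit RGB frames at the source resolution.
        BufferFormatCallback bufferFormatCallback = new BufferFormatCallback() {
            @Override
            public BufferFormat getBufferFormat(int sourceWidth, int sourceHeight) {
                return new RV32BufferFormat(sourceWidth, sourceHeight);
            }

            @Override
            public void allocatedBuffers(ByteBuffer[] buffers) {
                // Invoked when the native buffers are (re)allocated; nothing needed here.
            }
        };

        // Called once per decoded frame with pixel data backed by native memory.
        RenderCallback renderCallback = new RenderCallback() {
            @Override
            public void display(MediaPlayer mp, ByteBuffer[] nativeBuffers, BufferFormat bufferFormat) {
                ByteBuffer frame = nativeBuffers[0];
                // Analyse the pixels here; keep this fast, it runs on VLC's rendering thread.
            }
        };

        // Attach the callback surface; some vlcj 4.x releases expose this as
        // mediaPlayer.setVideoSurface(...) instead of the fluent form below.
        mediaPlayer.videoSurface().set(
                factory.videoSurfaces().newVideoSurface(bufferFormatCallback, renderCallback, true));

        mediaPlayer.media().play("rtsp://camera-host/stream"); // placeholder RTSP URL

        Thread.sleep(60_000); // keep the JVM alive while frames arrive
    }
}
```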

LibGDX - Music problems

WHAT I KNOW...
In LibGDX there are two classes for playing music/sounds: Music.java and Sound.java.
When you want to play a short sound (less than a minute), it is good practice to use the Sound class, because the sound is loaded entirely into memory.
When you want to play long music (more than a minute), it is good practice to use the Music class, because it isn't loaded into memory but is streamed from disk instead.
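In code, the distinction described above looks roughly like this (asset paths are placeholders):

```java
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.audio.Music;
import com.badlogic.gdx.audio.Sound;

public class AudioDemo extends ApplicationAdapter {

    private Sound shot;        // short effect, fully decoded into memory
    private Music background;  // long track, streamed from disk while playing

    @Override
    public void create() {
        // Asset paths are placeholders.
        shot = Gdx.audio.newSound(Gdx.files.internal("sfx/shot.wav"));
        background = Gdx.audio.newMusic(Gdx.files.internal("music/background.ogg"));

        shot.play(1.0f);             // cheap to trigger, can overlap itself
        background.setLooping(true);
        background.play();           // streamed, so it competes with other disk I/O
    }

    @Override
    public void dispose() {
        shot.dispose();
        background.dispose();
    }
}
```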
WHAT I DO...
I use the Music class to play background music and loading-screen music in my game.
WHAT PROBLEM I HAVE...
The problem is that when I play music using the Music class and at the same time read some data from disk (atlases, for example), the music plays with jitter. So, as I understand it, the problem is in the streaming, since I only hit it when reading from disk. It seems there is no way to open two fully separate threads for streaming, I mean one for the music and another for everything else, such as reading from or writing to files. I tried playing the music in a new thread, but nothing changed.
Any ideas?
Thanks.

Building a music player for both Android and desktop at the same time

I'm going to build a music player that works on both Android and desktop. It won't be anything special; I'm doing it more to train myself and to find out roughly what problems I might encounter if I want to build a real app/program one day. So, since I'm already rather decent with web technologies, I'll try something else: Java.
My app/program will have to:
be able to read music files and play them (I'm planning on reading the files myself, meaning that I only need to be able to play "raw" sound, WAV or the like)
be able to write to music files (to change tags)
be able to communicate with another instance of the program on another device on the same network (I want to be able to use my phone as a remote control for my PC and my PC as a remote control for my phone)
if possible, show some play/pause buttons on the screen even when it's locked (probably just on Android)
And this is where I need your help: what should I do to write as little "device-specific" code as possible?
It's obvious I can reuse the classes used to encode/decode the various music formats. Finding the files, reading them, writing them, playing raw sound and connecting to the network will be easy to abstract if needed.
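As an illustration of the kind of abstraction meant here (all names below are made up for the example, not an existing library), the device-specific parts can sit behind small interfaces with one implementation per platform, so the core player logic is compiled once:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.List;

// Hypothetical abstraction layer: the shared player code only talks to these
// interfaces; in a real project each top-level type would live in its own file.
interface AudioOutput {
    void open(int sampleRate, int channels);
    void write(short[] pcm, int offset, int length); // raw decoded PCM, e.g. from a WAV file
    void close();
}

interface MusicLibrary {
    List<String> listTracks();                       // paths or URIs, the platform decides
    InputStream openTrack(String id) throws IOException;
}

// Compiled once and shared: the desktop build injects e.g. a
// javax.sound.sampled-based AudioOutput, the Android build an
// android.media.AudioTrack-based one.
public final class PlayerCore {

    private final AudioOutput output;
    private final MusicLibrary library;

    public PlayerCore(AudioOutput output, MusicLibrary library) {
        this.output = output;
        this.library = library;
    }

    public void play(String trackId) throws IOException {
        try (InputStream in = library.openTrack(trackId)) {
            output.open(44100, 2);
            // ... decode 'in' to PCM and feed it to output.write(...) ...
            output.close();
        }
    }
}
```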
But then there is the UI, and it looks like if I don't plan carefully I'll have to build it twice... I've seen libGDX, but they insist quite a lot on the fact that it's for games...
All I need is some way to build a simple UI (a few buttons, the album covers) that works on both the desktop and the phone.
Should I use libGDX, the "normal" libraries (AWT/SWT, Swing, none of which seem to be "compatible" with Android) or something else?
I'd also like to request as few permissions as possible, meaning I'd like a base music player that only requests access to the SD card; features requiring additional permissions would then be added as separate apps/programs or add-ons.
From what I understand, the only way to achieve this is to create a second app and make the user install it. I think I'll manage to make the two apps communicate (with Intents?), but is it really the only solution?
Thank you in advance for your answers.
Maybe you could consider building the app with something such as PhoneGap: http://phonegap.com/ This would let you play to your web-technology strengths and write a very slim layer of device-specific code, if any at all!
As for getting a PhoneGap app to run on the desktop, you could use something like http://ripple.incubator.apache.org/. I know this is slightly different from the Java route you wanted to tackle, but this is the way mobile development is moving, so you may want to get started like this!

How do I play non-sine notes on Android? MIDI?

I'm writing an accompaniment application that continuously needs to play specific notes (or even chords). I have a thread running that figures out which note I need to play, but I have no idea where to begin with the actual playback. I know I can create an AudioTrack and write a sine wave to it, but for this project a simple tone won't cut it. So I'm guessing I either need to use MIDI (can Android do that?) or somehow take a sample and change its pitch on the fly, but I don't know if that's even possible.
All I can say is to check out pitch-shifting (which you seem to have heard of) and SoundPool (which would require some recordings of your own), plus these two links:
Audio Playback Rate in Android
Programmatically increase the pitch of an array of audio samples
The second link seems to have more info.
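To make the playback-rate idea concrete: SoundPool's play() takes a rate argument (clamped to 0.5..2.0), so a single recorded sample can cover roughly an octave in either direction by scaling the rate per semitone. A minimal sketch, assuming one recorded reference note bundled as a raw resource (the resource name is a placeholder):

```java
import android.content.Context;
import android.media.AudioManager;
import android.media.SoundPool;

public class NotePlayer {

    private final SoundPool soundPool;
    private final int sampleId;

    public NotePlayer(Context context) {
        soundPool = new SoundPool(8, AudioManager.STREAM_MUSIC, 0);
        // One recorded reference note (R.raw.piano_c4 is a placeholder resource name).
        sampleId = soundPool.load(context, R.raw.piano_c4, 1);
    }

    /** Plays the sample shifted by the given number of semitones (-12..+12). */
    public void playNote(int semitonesFromReference) {
        // Rate 1.0 = original pitch; each semitone is a factor of 2^(1/12).
        // SoundPool clamps the rate to 0.5..2.0 (one octave down or up).
        float rate = (float) Math.pow(2.0, semitonesFromReference / 12.0);
        soundPool.play(sampleId, 1.0f, 1.0f, 1, 0, rate);
    }
}
```

Note that rate-based shifting also changes the sample's duration, which is why the linked questions discuss proper pitch-shifting for larger intervals.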

How to synthesize piano sounds in Android/Java

I have made a few simple apps on Android and thought it was time for something a bit more complex. So I thought I'd try something that's already out there, but build it from scratch.
The idea is to create an app that lets the user play piano by pressing virtual keys on the display. But I'm not sure how to go about synthesizing the sound of each note: is it best to have a copy of each note stored in a file, or is there a more dynamic way of synthesizing notes and chords on the fly?
I have worked with C++ so NDK stuff is also okay.
Thanks for any help.
Sound playback (handing off buffers) pretty much has to be done through the Android Java APIs.
Synthesis could be done in native code or in Java, whichever is preferred.
Short (uncompressed) samples could be played back repeatedly, but you probably also want an attack transient. Perhaps you could have an attack, a sustain and a release, repeating the sustain for as long as the key is down. Ideally each sample should be an integral number of periods of its fundamental component long, so that you don't get a transient when you change from attack to sustain or from sustain to release.
I'm sure you can find code somewhere for an FM or other synthesizer... this you might well want to implement in a native library that hands off buffers to Java code, which then passes them to the audio APIs.
What is too bad is that Android already has an internal MIDI synthesizer, but it apparently lacks a dynamic interface, so it can only play MIDI files.
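To make the "hand buffers to the audio APIs" idea concrete, here is a minimal Java-only sketch that synthesizes a decaying sine tone and streams it through android.media.AudioTrack. It is only an illustration of the buffer hand-off: a real piano voice would replace the sine with an FM or sampled signal, and it uses the older AudioTrack constructor for brevity. Call it from a worker thread, since write() blocks.

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class ToneSynth {

    private static final int SAMPLE_RATE = 44100;

    /** Synthesizes a decaying sine tone at the given frequency and streams it out. */
    public static void playTone(double frequencyHz, double seconds) {
        int minBuf = AudioTrack.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                minBuf, AudioTrack.MODE_STREAM);
        track.play();

        int totalSamples = (int) (seconds * SAMPLE_RATE);
        short[] buffer = new short[1024];
        int written = 0;
        while (written < totalSamples) {
            int n = Math.min(buffer.length, totalSamples - written);
            for (int i = 0; i < n; i++) {
                double t = (written + i) / (double) SAMPLE_RATE;
                double envelope = Math.exp(-3.0 * t);                 // simple exponential decay
                double sample = Math.sin(2 * Math.PI * frequencyHz * t) * envelope;
                buffer[i] = (short) (sample * Short.MAX_VALUE * 0.8);
            }
            track.write(buffer, 0, n); // blocking hand-off of PCM to the audio system
            written += n;
        }

        track.stop();
        track.release();
    }
}
```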
By far the easiest solution would be to record the sound of each note on the piano and play it back when the key is pressed. Many professional virtual piano instruments work this way, recording every note on the piano at multiple velocities. Obviously this can take many gigabytes of disk space, but for a mobile phone app you might get away with a single MP3 recording of each note in an octave.
Actually synthesizing the sound of a piano algorithmically is very difficult to do, and until fairly recently very few have done it convincingly (Pianoteq is one of the best current implementations).
