I need to extract audio from a live stream on Red5 and stream it separately. On nginx with the RTMP module I would simply relay the stream via ffmpeg without the video data, but I have no idea how to do anything like this (with or without ffmpeg) on Red5.
The first link on Google gave me this:
just register IStreamListeners on your IClientStreams, and then separate AudioData from VideoData in the RTMPEvents
But this doesn't help much; to be honest, it doesn't help at all. What are these IStreamListeners, and how do I register them on an IClientStream?
And, more mysterious still, how do I separate AudioData from VideoData in these RTMPEvents?
This is how you extend RTMPClient and capture the audio or video events:
private class TestClient extends RTMPClient {

    private int audioCounter;

    private int videoCounter;

    // Dispatcher that receives every RTMP event on the client stream;
    // wire it up when you create the stream so dispatchEvent() gets called.
    private IEventDispatcher streamEventDispatcher = new IEventDispatcher() {
        public void dispatchEvent(IEvent event) {
            System.out.println("ClientStream.dispatchEvent(): " + event);
            String evt = event.toString();
            if (evt.indexOf("Audio") >= 0) {
                audioCounter++;
            } else if (evt.indexOf("Video") >= 0) {
                videoCounter++;
            }
        }
    };
}
This simply counts the a/v events, but it will get you part of the way there. I suggest looking through the unit tests in red5; you can learn a lot there.
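To answer the IStreamListener part of the question directly: on the server side you register a listener on the broadcast stream itself. This is only a sketch, assuming Red5's org.red5.server.api.stream interfaces and the AudioData event class; the re-publishing code is left to you:

import org.red5.server.api.stream.IBroadcastStream;
import org.red5.server.api.stream.IStreamListener;
import org.red5.server.api.stream.IStreamPacket;
import org.red5.server.net.rtmp.event.AudioData;

// Keeps only the audio packets of a broadcast stream.
public class AudioOnlyListener implements IStreamListener {
    public void packetReceived(IBroadcastStream stream, IStreamPacket packet) {
        if (packet instanceof AudioData) {
            // hand the audio packet to your own re-publishing/encoding code
        }
        // VideoData packets are simply ignored
    }
}

Register it, e.g. in your ApplicationAdapter's streamBroadcastStart(IBroadcastStream stream):

stream.addStreamListener(new AudioOnlyListener());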
The Audio interface allows setting its volume, but apps usually use the Media or Notifications volume on Android. In a Gluon Mobile app, pressing the volume keys does nothing, while in other apps they change the device's volume.
Tested on Android 8 and 12 using Attach version 4.0.15.
Is there a way to play audio using the device's volume settings and allow the user to adjust the volume from within the device?
There seems to be no proper way to do this. Using the VideoService it is possible to change the device's volume, but only while audio is playing, which for short audio clips means the user has to be ready to press the keys at just the right moment.
The VideoService also does not support playing a single file out of a playlist. The only solution I found was switching the playlist to a single audio clip whenever it needs to be played and isn't already playing:
class MobileNotifier {

    // Assumed shape of the alert type used below (not part of the original snippet).
    enum Alert { SMALL, BIG }

    private static final String SMALL_BEEP_PATH = "/sounds/SmallBeep.wav";
    private static final String BIG_BEEP_PATH = "/sounds/BigBeep.wav";

    VideoService service;

    private MobileNotifier(VideoService service) {
        this.service = service;
        service.getPlaylist().add(SMALL_BEEP_PATH);
    }

    public void play(Alert alert) {
        switch (alert) {
            case SMALL -> {
                if (service.statusProperty().get() != Status.PLAYING || !SMALL_BEEP_PATH.equals(service.getPlaylist().get(0))) {
                    service.stop();
                    service.getPlaylist().set(0, SMALL_BEEP_PATH);
                    service.play();
                }
            }
            case BIG -> {
                if (service.statusProperty().get() != Status.PLAYING || !BIG_BEEP_PATH.equals(service.getPlaylist().get(0))) {
                    service.stop();
                    service.getPlaylist().set(0, BIG_BEEP_PATH);
                    service.play();
                }
            }
        }
    }
}
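For context, a hypothetical usage sketch (my addition, not from the original answer): Services.get is Attach's standard service lookup, and it assumes the private constructor is made reachable, e.g. through a factory:

import com.gluonhq.attach.util.Services;
import com.gluonhq.attach.video.VideoService;

// Look up the VideoService and fire the short beep.
Services.get(VideoService.class).ifPresent(service -> {
    MobileNotifier notifier = new MobileNotifier(service); // assumes constructor access
    notifier.play(MobileNotifier.Alert.SMALL);
});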
Android's MediaPlayer cannot use an InputStream as its data source. What I want is to reduce or eliminate MediaPlayer's prepare time, so I need to cache the stream to a file.
When the target SDK is >= 23, I can use:
mMediaPlayer.setDataSource(new MediaDataSource() {
    @Override
    public int readAt(long position, byte[] buffer, int offset, int size) throws IOException {
        // fill 'buffer' from the cached stream starting at 'position',
        // return the number of bytes read (or -1 at end of stream)
        return 0;
    }

    @Override
    public long getSize() throws IOException {
        // total stream length if known, or -1 if unknown
        return 0;
    }

    @Override
    public void close() throws IOException {
    }
});
But how can I make it work with a lower target SDK?
So far I have researched two approaches:
1: Like NanoHTTPD's stream caching, build a local proxy server, convert the remote URL to a local one, and use setDataSource(localUri).
2: Cache the head of the video to a local file; but a FileDescriptor cannot be read while it is still being written. Is there another data structure that can be read and written at the same time?
Does anyone have a better idea? Could anybody help me?
I recommend you use ExoPlayer instead of Android's MediaPlayer, because MediaPlayer has some issues with streaming video.
If you want to use a library simpler than ExoPlayer you can use ExoMedia instead.
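For illustration, a minimal sketch of starting network playback with ExoPlayer (assuming ExoPlayer 2.12+; the URL is a placeholder). ExoPlayer buffers the stream itself, so no manual file cache is needed just to cut down startup time:

import android.content.Context;
import com.google.android.exoplayer2.MediaItem;
import com.google.android.exoplayer2.SimpleExoPlayer;

public final class StreamPlayback {
    public static SimpleExoPlayer start(Context context) {
        SimpleExoPlayer player = new SimpleExoPlayer.Builder(context).build();
        player.setMediaItem(MediaItem.fromUri("https://example.com/stream.mp4")); // placeholder URL
        player.prepare(); // asynchronous; playback starts once enough is buffered
        player.play();
        return player;    // keep a reference and call release() when done
    }
}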
I am trying to retrieve all MIDI events at the right time from a MIDI file, within an Android app.
The following code works on a standard JVM (on my computer), using the javax.sound.midi API.
Sequencer sequencer = MidiSystem.getSequencer();
Sequence sequence = MidiSystem.getSequence(new File(FILENAME));
sequencer.setSequence(sequence);
sequencer.open();
sequencer.getTransmitter().setReceiver(new Receiver()
{
    @Override
    public void send(MidiMessage message, long timeStamp)
    {
        System.out.println(Arrays.toString(message.getMessage()));
    }

    @Override
    public void close()
    {
    }
});
sequencer.start();
Unfortunately, the javax.sound.* packages are not available on Android. A port for Android is available on GitHub (https://github.com/kshoji/javax.sound.midi-for-Android), but my code sample above doesn't work with it (sequencer.getTransmitter() returns null).
Does anyone know how to do this? I haven't found any suitable library (http://www.midi.org/aboutmidi/android.php) for what I want to do.
Thank you.
Due to the breaking changes in the Android WebRTC client example, I'm looking for a code example that shows how to add and work with a DataChannel on Android. I just need to send "Hello World" via DataChannel between two Android devices. Here's the old code:
https://chromium.googlesource.com/external/webrtc/stable/talk/+/master/examples/android/src/org/appspot/apprtc/AppRTCDemoActivity.java#177
It uses some classes and interfaces which don't exist in the new version anymore.
So how can I add DataChannel support to my Android WebRTC application, and send and receive text through it?
I added a DataChannel in a project with an older version of webrtc. I looked at the most up-to-date classes, and it seems the methods and callbacks are still there, so hopefully it will work for you.
Changes to PeerConnectionClient:
Create DataChannel in createPeerConnectionInternal after isInitiator = false;:
DataChannel.Init dcInit = new DataChannel.Init();
dcInit.id = 1;
dataChannel = pc.createDataChannel("1", dcInit);
dataChannel.registerObserver(new DcObserver());
Changes to onDataChannel:
@Override
public void onDataChannel(final DataChannel dc) {
    Log.d(TAG, "onDataChannel");
    executor.execute(new Runnable() {
        @Override
        public void run() {
            dataChannel = dc;
            String channelName = dataChannel.label();
            dataChannel.registerObserver(new DcObserver());
        }
    });
}
Add the channel observer:
private class DcObserver implements DataChannel.Observer {
    @Override
    public void onMessage(final DataChannel.Buffer buffer) {
        ByteBuffer data = buffer.data;
        byte[] bytes = new byte[data.remaining()];
        data.get(bytes);
        final String command = new String(bytes);
        executor.execute(new Runnable() {
            public void run() {
                events.onReceivedData(command);
            }
        });
    }

    @Override
    public void onStateChange() {
        Log.d(TAG, "DataChannel: onStateChange: " + dataChannel.state());
    }
}
I added an onReceivedData event to the PeerConnectionEvents interface; all the events are implemented in CallActivity, so I handle the data received on the channel from there.
To send data, from CallActivity:
public void sendData(final String data) {
    ByteBuffer buffer = ByteBuffer.wrap(data.getBytes());
    peerConnectionClient.getPCDataChannel().send(new DataChannel.Buffer(buffer, false));
}
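One caveat worth adding (my note, not part of the original answer): send() only succeeds once the channel is open, so it can be worth guarding the call. state() and DataChannel.State come from the same webrtc Java API:

public void sendData(final String data) {
    DataChannel channel = peerConnectionClient.getPCDataChannel();
    if (channel != null && channel.state() == DataChannel.State.OPEN) {
        ByteBuffer buffer = ByteBuffer.wrap(data.getBytes());
        channel.send(new DataChannel.Buffer(buffer, false)); // false = text, not binary
    }
}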
I only took a quick look at the new classes and made minor changes to my code; I hope it will work for you without further changes.
Good luck
I'm sorry, but I have a question about the code from Guy S.
In your code, the following statement appears in both createPeerConnectionInternal() and onDataChannel():
dataChannel.registerObserver(new DcObserver());
I think this may register the observer twice. Is that correct?
I mean, before making a call it creates a dataChannel and registers an observer. Then, if a call comes in, onDataChannel() is called, dataChannel is pointed at dc, and the observer is registered again?
I am customizing Jitsi to play a Wav file when a call is in progress.
I am facing trouble doing it, and would appreciate if you can help me out.
I can switch the data source before the call starts by using a custom AudioFileMediaDevice and swapping it in via CallPeerMediaHandler.
But I am having problems replacing the data source while the call is in progress.
=============================================================
I've tried the following, but couldn't make it work.
1) I tried getting the device's output data source and adding a URLDataSource of the wav file using the addInDataSource method. It didn't work.
DataSource dataSource = device.createOutputDataSource();
DataSource fileDataSource = Manager.createDataSource(new URL("file://resources/sounds/Sample.wav"));
((AudioMixingPushBufferDataSource)dataSource).addInDataSource(fileDataSource);
2) I tried adding a custom capture device and switching to it, but that's not working either:
CaptureDeviceInfo2 fileDevice =
        new CaptureDeviceInfo2("Recorded Audio 1",
                fileDataSource.getLocator(), null, null, null, null);
((MediaServiceImpl) LibJitsi.getMediaService())
        .getDeviceConfiguration().getAudioSystem()
        .setDevice(AudioSystem.DataFlow.CAPTURE, fileDevice, false);
This works for playback, though, not as a capture device.
3) I even tried adding a new audio system with the playback device as the file data source, but that's not working either.
=============================================================
I am new to libjitsi, so I'm having a tough time trying to decode what is happening.
Any directions on how to resolve this would be great.
I managed to play a sound into the call with this code:
public void startPlaying(CallPeer callPeer, DataSource soundDataSource) throws OperationFailedException {
    assert callPeer instanceof CallPeerSipImpl;
    CallPeerSipImpl cp = (CallPeerSipImpl) callPeer;
    AudioMediaStreamImpl audioMediaStream = (AudioMediaStreamImpl) cp.getMediaHandler().getStream(MediaType.AUDIO);
    AudioMediaDeviceSession deviceSession = audioMediaStream.getDeviceSession();
    assert deviceSession != null;
    assert deviceSession.getDevice() instanceof AudioMixerMediaDevice;
    AudioMixerMediaDevice dev = (AudioMixerMediaDevice) deviceSession.getDevice();
    dev.getAudioMixer().addInDataSource(soundDataSource);
}
Note that AudioMixerMediaDevice.getAudioMixer() has private access in libjitsi, so I made it public and recompiled.
I needed to play an audio file during a call, but only on the remote side of the call. So I played around a little with stokito's example and modified it for my needs. In case somebody ever needs it, here is what I did:
private void playAudioFromDataSource(final CallPeerSipImpl callPeer, final DataSource audioDataSource, final MediaDirection direction) {
    final CallPeerMediaHandlerSipImpl mediaHandler = callPeer.getMediaHandler();
    final AudioMediaStreamImpl audioMediaStream = (AudioMediaStreamImpl) mediaHandler.getStream(AUDIO);
    final AudioMediaDeviceSession deviceSession = audioMediaStream.getDeviceSession();
    if (null != deviceSession) {
        if (RECVONLY == direction) {
            // plays audio local only:
            deviceSession.addPlaybackDataSource(audioDataSource);
        } else {
            final AudioMixerMediaDevice mediaDevice = (AudioMixerMediaDevice) deviceSession.getDevice();
            final AudioMixer audioMixer = getAudioMixer(mediaDevice);
            if (null != audioMixer) {
                if (SENDONLY == direction) {
                    // plays audio remote only:
                    audioMixer.getLocalOutDataSource().addInDataSource(audioDataSource);
                } else if (SENDRECV == direction) {
                    // plays audio on both sides of call (local and remote):
                    audioMixer.addInDataSource(audioDataSource);
                }
            }
        }
    }
}

private AudioMixer getAudioMixer(final AudioMixerMediaDevice device) {
    try {
        final Method privateGetAudioMixerMethod = device.getClass().getDeclaredMethod("getAudioMixer");
        privateGetAudioMixerMethod.setAccessible(true);
        final Object audioMixerObject = privateGetAudioMixerMethod.invoke(device, (Object[]) null);
        return (AudioMixer) audioMixerObject;
    } catch (final Exception e) {
        log.error("Could not get AudioMixer", e);
    }
    return null;
}
NOTE: I used reflection to get the private AudioMixer object. I admit it's not the cleanest approach, but it works. :)
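For completeness, a hypothetical usage sketch (my addition; the file path is a placeholder, and Manager/DataSource are the JMF classes already used in the question):

import javax.media.Manager;
import javax.media.protocol.DataSource;
import java.net.URL;

// Build a DataSource for the wav file and play it to the remote side only.
private void playWavToRemote(final CallPeerSipImpl callPeer) throws Exception {
    DataSource wavSource = Manager.createDataSource(new URL("file:///path/to/Sample.wav")); // placeholder path
    playAudioFromDataSource(callPeer, wavSource, MediaDirection.SENDONLY);
}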