I am customizing Jitsi to play a WAV file while a call is in progress.
I am having trouble doing it and would appreciate any help.
I can switch the data source before the call starts by using a custom AudioFileMediaDevice and switching to it in CallPeerMediaHandler.
But I am having problems replacing the data source while the call is in progress.
=============================================================
I've tried the following, but couldn't make it work.
1) I tried getting the device's output data source and adding a URLDataSource for the WAV file via the addInDataSource method. Didn't work.
DataSource dataSource = device.createOutputDataSource();
DataSource fileDataSource = Manager.createDataSource(new URL("file://resources/sounds/Sample.wav"));
((AudioMixingPushBufferDataSource)dataSource).addInDataSource(fileDataSource);
2) I tried adding a custom capture device and switching to it, but that isn't working either:
CaptureDeviceInfo2 fileDevice =
        new CaptureDeviceInfo2("Recorded Audio 1",
                fileDataSource.getLocator(), null, null, null, null);
((MediaServiceImpl) LibJitsi.getMediaService())
        .getDeviceConfiguration().getAudioSystem()
        .setDevice(AudioSystem.DataFlow.CAPTURE, fileDevice, false);
This works for playback, though, not as a capture device.
3) I even tried adding a new audio system with the playback device set to the file data source, but that isn't working either.
=============================================================
I am new to libjitsi, so I'm having a tough time trying to decode what is happening.
Any directions on how to resolve this would be great.
I managed to play a sound into the call with this code:
public void startPlaying(CallPeer callPeer, DataSource soundDataSource) throws OperationFailedException {
    assert callPeer instanceof CallPeerSipImpl;
    CallPeerSipImpl cp = (CallPeerSipImpl) callPeer;
    AudioMediaStreamImpl audioMediaStream = (AudioMediaStreamImpl) cp.getMediaHandler().getStream(MediaType.AUDIO);
    AudioMediaDeviceSession deviceSession = audioMediaStream.getDeviceSession();
    assert deviceSession != null;
    assert deviceSession.getDevice() instanceof AudioMixerMediaDevice;
    AudioMixerMediaDevice dev = (AudioMixerMediaDevice) deviceSession.getDevice();
    dev.getAudioMixer().addInDataSource(soundDataSource);
}
Note that AudioMixerMediaDevice.getAudioMixer() has private access in libjitsi, so I made it public and recompiled.
I needed to play an audio file during a call, but only on the remote side of the call, so I played around a little with stokito's example and modified it for my needs. In case somebody ever needs it, here is what I did:
// AUDIO, RECVONLY, SENDONLY and SENDRECV are static imports of
// MediaType.AUDIO and the MediaDirection constants.
private void playAudioFromDataSource(final CallPeerSipImpl callPeer, final DataSource audioDataSource, final MediaDirection direction) {
    final CallPeerMediaHandlerSipImpl mediaHandler = callPeer.getMediaHandler();
    final AudioMediaStreamImpl audioMediaStream = (AudioMediaStreamImpl) mediaHandler.getStream(AUDIO);
    final AudioMediaDeviceSession deviceSession = audioMediaStream.getDeviceSession();
    if (null != deviceSession) {
        if (RECVONLY == direction) {
            // plays audio locally only:
            deviceSession.addPlaybackDataSource(audioDataSource);
        } else {
            final AudioMixerMediaDevice mediaDevice = (AudioMixerMediaDevice) deviceSession.getDevice();
            final AudioMixer audioMixer = getAudioMixer(mediaDevice);
            if (null != audioMixer) {
                if (SENDONLY == direction) {
                    // plays audio remotely only:
                    audioMixer.getLocalOutDataSource().addInDataSource(audioDataSource);
                } else if (SENDRECV == direction) {
                    // plays audio on both sides of the call (local and remote):
                    audioMixer.addInDataSource(audioDataSource);
                }
            }
        }
    }
}
private AudioMixer getAudioMixer(final AudioMixerMediaDevice device) {
    try {
        final Method privateGetAudioMixerMethod = device.getClass().getDeclaredMethod("getAudioMixer");
        privateGetAudioMixerMethod.setAccessible(true);
        final Object audioMixerObject = privateGetAudioMixerMethod.invoke(device, (Object[]) null);
        return (AudioMixer) audioMixerObject;
    } catch (final Exception e) {
        log.error("Could not get AudioMixer", e);
    }
    return null;
}
NOTE: I used reflection to get the private AudioMixer object. I admit it's not the cleanest approach, but it works. :)
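For example, to play a WAV file to the remote party only, the call might look like this (a minimal sketch; the file path and the connect() call are my additions, not from the code above):
// Build a JMF DataSource for the WAV file and feed it to the remote side only.
DataSource wavSource = Manager.createDataSource(new MediaLocator("file:/path/to/Sample.wav"));
wavSource.connect(); // open the file before handing it to the mixer
playAudioFromDataSource(callPeer, wavSource, MediaDirection.SENDONLY);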
I am building a WebRTC video-call application. At one point I need to make a normal voice recording, so I removed the audio track from the peer connection; after recording I need to add the audio track back to the peer connection, but I can't get it to work.
public void removeAudioTrack() {
    List<RtpSender> senders = new ArrayList<>();
    senders.addAll(peerConnection.getSenders());
    try {
        for (RtpSender sender : senders) {
            if (sender.track() != null) {
                if (sender.track().id().equals(AUDIO_TRACK_ID)) {
                    boolean flag = peerConnection.removeTrack(sender);
                    rtpSender = sender;
                }
            }
        }
    } catch (Exception e) {
        // ignored
    }
}
public void addAudioTrack() {
    localAudioTrack = createAudioTrack();
    mediaStream.addTrack(localAudioTrack);
    audioSender = peerConnection.addTrack(localAudioTrack, mediaStreamLabels);
}
The audio is not received on the other side.
Per the webrtc-pc standard, you cannot remove or add a stream dynamically without re-negotiation. However, you can use replaceTrack to replace the current RTCRtpSender's track with another track, and per the standard this does not require a re-negotiation.
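In the Android WebRTC API the replaceTrack operation is exposed as RtpSender.setTrack. A minimal sketch, assuming the sender and the recreated track come from the code in the question:
import org.webrtc.AudioTrack;
import org.webrtc.RtpSender;

// Instead of removeTrack()/addTrack(), keep the RtpSender and swap its track.
// setTrack(null, ...) silences the sender; setTrack(track, ...) resumes sending.
// Neither call triggers re-negotiation.
public void pauseAudio(RtpSender audioSender) {
    audioSender.setTrack(null, false); // false = don't take ownership of the track
}

public void resumeAudio(RtpSender audioSender, AudioTrack newAudioTrack) {
    audioSender.setTrack(newAudioTrack, false);
}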
I had an issue where Text to Speech would not speak anything. I realised this was because I was attempting to call 'Speak()' before TTS had initialised.
I need to wait until TTS has initialised, so that I can call 'Speak()' successfully. I thought doing something along the lines of this would work:
@Override
public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        mTTSInitialised = true;
    } else {
        Log.e("TTS", "Initialisation Failed!");
    }
}
...
while (!mTTSInitialised) {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
But this fails to initialise at all. Is there a way to do this effectively?
The initialisation of the Text to Speech engine is asynchronous, which is why you realised you have to 'wait' for it to complete before requesting that it process an utterance.
Even when it eventually initialises successfully, it can subsequently be killed by the system, or it can of course fail to initialise, so you always need to be ready to handle a request to speak when the engine isn't prepared.
Add the following helper class:
public class PendingTTS {
    private String pendingUtterance;
    private int pendingQueueType;

    public String getPendingUtterance() {
        return this.pendingUtterance;
    }

    public void setPendingUtterance(@NonNull final String pendingUtterance) {
        this.pendingUtterance = pendingUtterance;
    }

    public int getPendingQueueType() {
        return this.pendingQueueType;
    }

    public void setPendingQueueType(final int pendingQueueType) {
        this.pendingQueueType = pendingQueueType;
    }
}
Assuming you're using an Activity, you need to declare the following variables:
private volatile PendingTTS pendingTTS;
private static final int MAX_INIT_ATTEMPTS = 4;
private volatile int initCount;
and initialise the Text to Speech object in onCreate()
tts = new TextToSpeech(YOURActivity.this, YOURonInitListener);
In your onInitListener you would check if there is any pending speech:
@Override
public void onInit(final int status) {
    switch (status) {
        case TextToSpeech.SUCCESS:
            initCount = 0;
            // Set up tts stuff
            tts.setOnUtteranceProgressListener(YOURprogressListener);
            if (pendingTTS != null) {
                // We have pending speech, process it and check the result
                int speechResult = tts.speak(pendingTTS.getPendingUtterance(),
                        pendingTTS.getPendingQueueType(),
                        null, null); // remaining tts parameters here
                switch (speechResult) {
                    case TextToSpeech.SUCCESS:
                        // Result was successful
                        pendingTTS = null;
                        break;
                    case TextToSpeech.ERROR:
                        // Speech failed
                        // Check if it has repeatedly failed up to the max attempts
                        if (initCount < MAX_INIT_ATTEMPTS) {
                            initCount++;
                            tts = new TextToSpeech(YOURActivity.this, YOURonInitListener);
                        } else {
                            // Totally broken - let the user know it's not working
                        }
                        break;
                }
            } else {
                // there was nothing to process
            }
            break;
        case TextToSpeech.ERROR:
            // Check if it has repeatedly failed up to the max attempts
            if (initCount < MAX_INIT_ATTEMPTS) {
                initCount++;
                tts = new TextToSpeech(YOURActivity.this, YOURonInitListener);
            } else {
                // Totally broken - let the user know it's not working
            }
            break;
    }
}
I've glued the above together from my code, where the speech and initialisation methods are all separated, but I've tried to give you an overview of everything you need to handle.
Elsewhere in your code, when you make a tts.speak(//stuff here) request, you need to check the result as demonstrated above, to make sure it was successful. Again, in my code, this is separated into one single method. If it does fail, you need to set the PendingTTS parameters prior to attempting to initialise again:
pendingTTS = new PendingTTS();
pendingTTS.setPendingQueueType(queueType); // your queue type, e.g. TextToSpeech.QUEUE_FLUSH
pendingTTS.setPendingUtterance(utterance); // your utterance
If it is successful, make sure pendingTTS is set to null.
The overall design is that if the initialisation fails, it will attempt to initialise again, up to the maximum allowed number of attempts. If the speech fails, it will attempt to initialise the engine again, first setting the PendingTTS parameters.
Hope you managed to follow that.
Hmm... busy-waiting like that is not a very good idea.
You can instead try adding the text to the TTS queue and letting it do its work. This snippet can go inside a button click handler, etc.:
tts.speak(toSpeak, TextToSpeech.QUEUE_ADD, null);
Here is a small tutorial that may help.
I have a requirement in my project where video is being recorded and uploaded to the server, but since mobile networks are not reliable, what I initially decided to do was, every 30 seconds:
stop the recorder
reset the recorder state
retrieve the file written to by the recorder and upload it (multipart form data) in a different thread
change the output file of the recorder to a new file based on the hash of the current timestamp
repeat the process
Doing this suits my needs perfectly, as each of the 30-second video files is no more than 1 MB and the upload happens smoothly.
But the problem I am facing is that every time the media recorder stops and starts again there is a delay of about 500 ms, so the video I receive at the server has these 500 ms breaks every 30 seconds, which is really bad for my situation. So I was wondering whether it would be possible to just change the file that the recorder is writing to on the fly.
Relevant code:
GenericCallback onTickListener = new GenericCallback() {
    @Override
    public void execute(Object data) {
        int timeElapsedInSecs = (int) data;
        if (timeElapsedInSecs % pingIntervalInSecs == 0) {
            new API(getActivity().getApplicationContext()).pingServer(objInterviewQuestion.getCurrentAccessToken(),
                    new NetworkCallback() {
                        @Override
                        public void execute(int response_code, Object result) {
                            // TODO: HANDLE callback
                        }
                    });
        }
        if (timeElapsedInSecs % uploadIntervalInSecs == 0 && timeElapsedInSecs < maxTimeInSeconds) {
            if (timeElapsedInSecs / uploadIntervalInSecs >= 1) {
                if (stopAndResetRecorder()) {
                    openConnectionToUploadQueue();
                    uploadQueue.add(
                            new InterviewAnswer(0,
                                    objInterviewQuestion.getQid(),
                                    objInterviewQuestion.getAvf(),
                                    objInterviewQuestion.getNext(),
                                    objInterviewQuestion.getCurrentAccessToken()));
                    objInterviewQuestion.setAvf(MiscHelpers.getOutputMediaFilePath());
                    initializeAndStartRecording();
                }
            }
        }
    }
};
Here is initializeAndStartRecording():
private boolean initializeAndStartRecording() {
    Log.i("INFO", "initializeAndStartRecording");
    if (mCamera != null) {
        try {
            mMediaRecorder = CameraHelpers.initializeRecorder(mCamera,
                    mCameraPreview,
                    desiredVideoWidth,
                    desiredVideoHeight);
            mMediaRecorder.setOutputFile(objInterviewQuestion.getAvf());
            mMediaRecorder.prepare();
            mMediaRecorder.start();
            img_recording.setVisibility(View.VISIBLE);
            is_recording = true;
            return true;
        } catch (Exception ex) {
            MiscHelpers.showMsg(getActivity(),
                    getString(R.string.err_cannot_start_recorder),
                    AppMsg.STYLE_ALERT);
            return false;
        }
    } else {
        MiscHelpers.showMsg(getActivity(), getString(R.string.err_camera_not_available),
                AppMsg.STYLE_ALERT);
        return false;
    }
}
Here is stopAndResetRecorder:
boolean stopAndResetRecorder() {
    boolean success = false;
    try {
        if (mMediaRecorder != null) {
            try {
                // stop recording
                mMediaRecorder.stop();
                mMediaRecorder.reset();
                mMediaRecorder.release();
                mMediaRecorder = null;
                Log.d("MediaRecorder", "Recorder Stopped");
                success = true;
            } catch (Exception ex) {
                if (ex.getMessage() != null && !ex.getMessage().isEmpty()) {
                    Crashlytics.log(Log.ERROR, "Failed to stop MediaRecorder", ex.getMessage());
                    Crashlytics.logException(ex);
                }
                success = false;
            } finally {
                mMediaRecorder = null;
                is_recording = false;
            }
        }
    } catch (Exception ex) {
        success = false;
    }
    Log.d("MediaRecorder", "Success = " + String.valueOf(success));
    return success;
}
You can speed it up slightly by not calling the release() method and all of the rest of the destruction that you do in stopAndResetRecorder() (see the documentation for the MediaRecorder state machine).
You also don't need to call both stop() and reset().
You could instead have an intermediate resetRecorder() function which just performs reset(), and then call initializeAndStartRecording(). When you finish all of your recording, you could then call stopRecorder(), which would perform the destruction of your mMediaRecorder.
As I say, this will save you some time, but whether the extra overhead you currently have of destroying and re-initialising the MediaRecorder is a significant portion of the delay I don't know. Give that a try, and if it doesn't fix your problem, I'd be interested to know how much time it did/didn't save.
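A minimal sketch of that split, reusing the mMediaRecorder and is_recording fields from the question (the division of labour here is my assumption, untested against your CameraHelpers setup):
// reset() stops an active recording and returns the recorder to its idle
// state without destroying the native instance (see the MediaRecorder
// state diagram), so it can be re-configured and started again quickly.
private boolean resetRecorder() {
    try {
        mMediaRecorder.reset(); // implies stop(); no release(), keep the instance
        is_recording = false;
        return true;
    } catch (Exception ex) {
        Log.e("MediaRecorder", "Failed to reset recorder", ex);
        return false;
    }
}

// Call this only once, when all recording is completely finished.
private void stopRecorder() {
    if (mMediaRecorder != null) {
        mMediaRecorder.reset();
        mMediaRecorder.release(); // now free the native resources
        mMediaRecorder = null;
    }
}
Note that initializeAndStartRecording() as posted creates a brand-new recorder via CameraHelpers, so it would also need adjusting to reuse the existing instance for this to pay off.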
It seems to me that setOutputFile calls a native method on the MediaRecorder, so I don't think there's an easy way to write into separate files at the same time.
What about uploading it in one chunk at the end, but allowing the user to do anything after starting the upload process? Then the user wouldn't notice how much time the upload takes, and you can notify them later when the upload succeeds/fails.
[Edit:] Try streaming the upload to the server and letting the server do the chunking into separate files. Here you can find a brief explanation of how to do so.
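For the client side of such a streaming upload, something like the following might do (a rough sketch; the URL, method and buffer size are placeholders, and the server-side chunking is out of scope here):
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Streams the file to the server as it is read, instead of buffering the
// whole body in memory; the server can split the incoming stream into
// separate files on its side.
static int streamUpload(String uploadUrl, String filePath) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(uploadUrl).openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setChunkedStreamingMode(0); // 0 = use the default chunk size
    try (InputStream in = new FileInputStream(filePath);
         OutputStream out = conn.getOutputStream()) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }
    return conn.getResponseCode(); // let the caller check the server's verdict
}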
Apparently MediaRecorder.setOutputFile() also accepts a FileDescriptor.
So, if you were programming at a low level (JNI), you could have represented a process's input stream as a file descriptor and, in turn, had that process write to different files when desired. But that would involve managing that native "router" process from Java.
Unfortunately, on the Java API side, you are out of luck.
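For reference, the FileDescriptor overload itself is easy to use from Java; a minimal sketch (the helper name and file handling are mine, not from the question):
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import android.media.MediaRecorder;

// Hypothetical helper: point a recorder at its next file via a
// FileDescriptor instead of a path.
static FileOutputStream setOutput(MediaRecorder recorder, File outFile) throws IOException {
    FileOutputStream fos = new FileOutputStream(outFile);
    recorder.setOutputFile(fos.getFD()); // the FileDescriptor overload
    return fos; // keep the stream open until after stop(), then close it
}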
I'm using the Upload component of Vaadin (7.1.9). My trouble is that I'm not able to restrict the kinds of files that can be sent to the server with the Upload component, and I haven't found any API for that purpose. The only way I see is discarding files of the wrong type after the upload.
public OutputStream receiveUpload(String filename, String mimeType) {
    if (!checkIfAValidType(filename)) {
        upload.interruptUpload();
    }
    return out;
}
Is this a correct way?
No, it's not the correct way. The fact is, Vaadin provides many useful interfaces that you can use to monitor when the upload has started, been interrupted, finished, or failed. Here is a list:
com.vaadin.ui.Upload.FailedListener;
com.vaadin.ui.Upload.FinishedListener;
com.vaadin.ui.Upload.ProgressListener;
com.vaadin.ui.Upload.Receiver;
com.vaadin.ui.Upload.StartedListener;
Here is a code snippet to give you an example:
@Override
public void uploadStarted(StartedEvent event) {
    System.out.println("***Upload: uploadStarted()");
    String contentType = event.getMIMEType();
    boolean allowed = false;
    for (int i = 0; i < allowedMimeTypes.size(); i++) {
        if (contentType.equalsIgnoreCase(allowedMimeTypes.get(i))) {
            allowed = true;
            break;
        }
    }
    if (allowed) {
        fileNameLabel.setValue(event.getFilename());
        progressBar.setValue(0f);
        progressBar.setVisible(true);
        cancelButton.setVisible(true);
        upload.setEnabled(false);
    } else {
        Notification.show("Error", "\nAllowed MIME: " + allowedMimeTypes, Type.ERROR_MESSAGE);
        upload.interruptUpload();
    }
}
Here, allowedMimeTypes is a list of MIME type strings:
ArrayList<String> allowedMimeTypes = new ArrayList<String>();
allowedMimeTypes.add("image/jpeg");
allowedMimeTypes.add("image/png");
I hope it helps you.
It can be done.
You can add this and it will work (it's all done by HTML5, and most browsers now support the accept attribute). This example is for .csv files:
upload.setButtonCaption("Import");
JavaScript.getCurrent().execute("document.getElementsByClassName('gwt-FileUpload')[0].setAttribute('accept', '.csv')");
I think it's better to throw a custom exception from the Receiver's receiveUpload:
Upload upload = new Upload(null, new Upload.Receiver() {
    @Override
    public OutputStream receiveUpload(String filename, String mimeType) {
        boolean typeSupported = /* do your check */;
        if (!typeSupported) {
            throw new UnsupportedImageTypeException();
        }
        // continue returning the correct stream
    }
});
The exception is just a simple custom exception:
public class UnsupportedImageTypeException extends RuntimeException {
}
Then you simply add a listener for upload failures and check whether the reason is your exception:
upload.addFailedListener(new Upload.FailedListener() {
    @Override
    public void uploadFailed(Upload.FailedEvent event) {
        if (event.getReason() instanceof UnsupportedImageTypeException) {
            // do your stuff, but probably don't log it as an error since it's not a 'real' error;
            // better to show something like a notification to inform your user
        } else {
            LOGGER.error("Upload failed, source={}, component={}", event.getSource(), event.getComponent());
        }
    }
});
public static boolean checkFileType(String mimeTypeToCheck) {
    ArrayList<String> allowedMimeTypes = new ArrayList<>();
    allowedMimeTypes.add("image/jpeg");
    allowedMimeTypes.add("application/pdf");
    allowedMimeTypes.add("application/vnd.openxmlformats-officedocument.wordprocessingml.document");
    allowedMimeTypes.add("image/png");
    allowedMimeTypes.add("application/vnd.openxmlformats-officedocument.presentationml.presentation");
    allowedMimeTypes.add("application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
    for (int i = 0; i < allowedMimeTypes.size(); i++) {
        String temp = allowedMimeTypes.get(i);
        if (temp.equalsIgnoreCase(mimeTypeToCheck)) {
            return true;
        }
    }
    return false;
}
I am working with Vaadin 8, and the Upload class supports this directly:
FileUploader receiver = new FileUploader();
Upload upload = new Upload();
upload.setAcceptMimeTypes("application/json");
upload.setButtonCaption("Open");
upload.setReceiver(receiver);
upload.addSucceededListener(receiver);
FileUploader is the class that I created that handles the upload process. Let me know if you need to see the implementation.
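For anyone curious, here is a minimal sketch of what such a receiver class might look like (this FileUploader is my illustration, not the actual implementation referenced above):
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import com.vaadin.ui.Upload;

// Hypothetical minimal receiver: buffers the upload in memory and
// reacts once it has finished.
public class FileUploader implements Upload.Receiver, Upload.SucceededListener {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    @Override
    public OutputStream receiveUpload(String filename, String mimeType) {
        buffer.reset();
        return buffer; // Vaadin writes the uploaded bytes into this stream
    }

    @Override
    public void uploadSucceeded(Upload.SucceededEvent event) {
        byte[] data = buffer.toByteArray();
        // process the uploaded JSON here
    }
}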
I need to extract audio from a live stream on red5 and stream it separately. On nginx with rtmp module I'd just retranslate this stream via ffmpeg without videodata, but I have no idea how to do anything like this (with or without ffmpeg) on Red5.
The first link on Google gave me this:
just register IStreamListeners on your IClientStreams, and then separate AudioData from VideoData in the RTMPEvents
But this doesn't help much. To be honest, this doesn't help at all. What are these IStreamListeners and how do I register them on IClientStream?
And, what is more mysterious, how do I separate AudioData from VideoData in these RTMPEvents?
This is how you extend RTMPClient and capture the audio and video events:
private class TestClient extends RTMPClient {
    private int audioCounter;
    private int videoCounter;

    // Receives every stream event; the event's string form tells audio from video.
    private final IEventDispatcher streamEventDispatcher = new IEventDispatcher() {
        public void dispatchEvent(IEvent event) {
            System.out.println("ClientStream.dispatchEvent() " + event.toString());
            String evt = event.toString();
            if (evt.indexOf("Audio") >= 0) {
                audioCounter++;
            } else if (evt.indexOf("Video") >= 0) {
                videoCounter++;
            }
        }
    };
}
This simply counts the a/v events, but it will get you part of the way there. I suggest looking through the unit tests in red5; there you can learn a lot.
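To actually hook the dispatcher up, the wiring might look roughly like this, assuming your red5-client version exposes setStreamEventDispatcher and the usual connect/createStream/play sequence (check the unit tests for the exact calls):
import org.red5.server.api.service.IPendingServiceCall;
import org.red5.server.api.service.IPendingServiceCallback;

// Rough sketch: route all stream events into the dispatcher above.
public void connectAndListen() {
    final TestClient client = new TestClient();
    client.setStreamEventDispatcher(client.streamEventDispatcher);
    client.connect("localhost", 1935, "live", new IPendingServiceCallback() {
        public void resultReceived(IPendingServiceCall call) {
            // once connected: client.createStream(...) and then client.play(...)
            // on the stream whose audio you want to tap
        }
    });
}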