I started with https://doc-kurento.readthedocs.io/en/6.13.2/tutorials/java/tutorial-groupcall.html
Currently, in the UI I give the user the option to decide whether they want an audio-only or an audio+video call. Based on that selection the constraints for getUserMedia() are passed, and this works fine as long as all users select the same call type.
But say user 1 selects audio only and user 2 selects audio+video: user 1 receives audio from user 2, while on user 2's end the HTML video element keeps loading.
Findings:
I believe this is an SDP offer issue: the offer from user 1 and the corresponding SDP answer from user 2 do not contain m=video, since user 1 opted for an audio-only call (this part works fine).
But the offer from user 2 and the corresponding SDP answer from user 1 do contain m=video.
So what I want is for user 2 to receive audio from user 1, even though user 2 selected a video call.
Your stream has both audio and video tracks. For some reason the HTML video element doesn't play audio in this case, because it's getting just audio and no video (the other party disabled their video). There are two ways to fix it.
Fixing by manipulating the MediaStream
You can create a MediaStream that has only the audio tracks when the other user has disabled video.
const audioStream = new MediaStream();
// Add the track to the new stream (not back to the original stream).
audioStream.addTrack(originalStream.getAudioTracks()[0]);
// Display audioStream in the video element:
videoElement.srcObject = audioStream;
Fixing by generating the SDP with the right mediaConstraints
You can generate the SDP by passing mediaConstraints as {audio: true, video: false} when creating the WebRtcPeer with kurento-utils. That will get you just the audio track.
I am trying to perform a simple task: select an input device and set the output device.
The use case is as follows: I have 3.5mm jacks, and the user can select the output device (headphones or speaker) from a list.
I can play a sound on a given device (with Clip), and I can control the input device (mute/volume), but I haven't found any way to specify the target line; it's always the system default.
I can get the mixer:
Optional<Mixer.Info> optJackInMixerInfo = Arrays.stream(AudioSystem.getMixerInfo())
        .filter(mixerInfo -> {
            // Filter based on the device name ("Headphones" is just an example).
            return mixerInfo.getName().contains("Headphones");
        })
        .findFirst();
Mixer m = AudioSystem.getMixer(optJackInMixerInfo.orElseThrow(IllegalStateException::new));
// The target lines
Line.Info[] lineInfos = m.getTargetLineInfo();
for (Line.Info lineInfo : lineInfos) {
    Line line = m.getLine(lineInfo); // throws LineUnavailableException
    System.out.println(line);
}
I got only the "master volume control".
How can I select the output device? I would also be happy with changing the system default device.
The naming of TargetDataLine and SourceDataLine looks backwards at first. Output to the local sound system for playback goes through a SourceDataLine, while input into Java, such as a microphone line, uses a TargetDataLine. The names make sense from the mixer's perspective: a SourceDataLine is a source of audio data for the mixer, and a TargetDataLine is a target to which the mixer delivers captured audio.
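For the device-selection problem itself: rather than letting AudioSystem pick the default output, you can ask a specific Mixer for a SourceDataLine. A minimal sketch, assuming a 16-bit stereo PCM format and that the device name contains "Headphones" (both are placeholders you would adapt):

import javax.sound.sampled.*;

public class PickOutputDevice {
    public static void main(String[] args) throws LineUnavailableException {
        // Desired playback format: 44.1 kHz, 16-bit, stereo, signed, little-endian.
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);

        for (Mixer.Info mixerInfo : AudioSystem.getMixerInfo()) {
            Mixer mixer = AudioSystem.getMixer(mixerInfo);
            // "Headphones" is an example; match whatever your device list shows.
            if (mixer.isLineSupported(info) && mixerInfo.getName().contains("Headphones")) {
                SourceDataLine line = (SourceDataLine) mixer.getLine(info);
                line.open(format);
                line.start();
                // Bytes written to `line` now play on this device,
                // not on the system default.
                line.close();
                break;
            }
        }
    }
}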
There is a tutorial, Accessing Audio System Resources, with the specifics.
Most computers have only a limited number of float controls available, with "master volume" being the one most likely to be implemented; you would use this to alter the volume of the output. Another tutorial in the series, Processing Audio with Controls, covers this topic. For myself, I generally convert the audio stream to PCM and handle volume directly (multiply each value by a factor that ranges from 0 to 1), then convert back to a byte stream, rather than rely on controls that may or may not be present.
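A minimal sketch of that manual approach, assuming the stream is 16-bit signed little-endian PCM (the helper name is mine):

public static void applyVolume(byte[] pcm, int length, float volume) {
    // volume ranges from 0.0 (silence) to 1.0 (unchanged).
    for (int i = 0; i + 1 < length; i += 2) {
        // Reassemble the signed 16-bit sample (little-endian).
        int sample = (pcm[i + 1] << 8) | (pcm[i] & 0xFF);
        sample = (int) (sample * volume);
        // Split the scaled sample back into two bytes.
        pcm[i] = (byte) sample;
        pcm[i + 1] = (byte) (sample >> 8);
    }
}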
I am working on an audio-streaming Android app, and I parse a JSON object from a server into a TextView to display "Now Playing" with the song name and artist. So when the play button is clicked, the name and artist of the song being played are displayed to the user. The problem is that I want the view to update automatically whenever the JSON at the URL is updated on the server. I don't want the user to have to press pause and play to refresh the view, and I don't want the user restarting the service each time a new song is playing just to get the song information. How do I go about this?
You can either poll the server at short intervals to check whether the song has changed, or open a socket connection to the server so that the server can initiate communication to the device.
The first approach in its simplest form is very bad practice, as it puts strain on both the device and the server to check often enough.
However, there is a different way to use it, called long polling. With this, you send a request to the server, and the server does not respond immediately but holds the connection open until it has something to say. After getting a reply, a new request is created right away so that no update is delayed, as in the sketch below.
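A rough long-polling loop; the class name and endpoint URL are placeholders, and the server has to cooperate by holding the request open until the track changes:

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class NowPlayingPoller implements Runnable {
    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            try {
                // Placeholder endpoint; the server stalls this request
                // until the song changes (long polling).
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://example.com/now-playing").openConnection();
                conn.setReadTimeout(60000); // generous, since the server stalls on purpose
                InputStream in = conn.getInputStream();
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] chunk = new byte[4096];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    buf.write(chunk, 0, n);
                }
                in.close();
                String json = buf.toString("UTF-8");
                // Parse `json` and post the result to the UI thread here.
            } catch (Exception e) {
                // Timeout or network error: back off briefly, then re-poll.
                try { Thread.sleep(2000); } catch (InterruptedException ie) { return; }
            }
            // The loop immediately issues the next request so no update is missed.
        }
    }

    public void stop() { running = false; }
}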
The best approach is opening a socket connection, but not every server and program supports it.
You can try libraries like SignalR (that one is mostly for .NET, but it's the first that came to my mind) that choose whichever approach is best and take care of holding the connection, reconnecting, etc.
Are you fetching this JSON metadata every time the song is played? If so, that doesn't sound like a good idea. The ideal would be to fetch song metadata when adding a song to the playlist, update it periodically (once every couple of days, perhaps), and save that information into a SQLite database for later retrieval.
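A small sketch of that caching idea on Android; the table layout and class name are made up for illustration:

import android.content.ContentValues;
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class MetadataCache extends SQLiteOpenHelper {
    public MetadataCache(Context ctx) { super(ctx, "metadata.db", null, 1); }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // fetchedAt lets you decide when an entry is stale and worth refreshing.
        db.execSQL("CREATE TABLE songs (id TEXT PRIMARY KEY, title TEXT, artist TEXT, fetchedAt INTEGER)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS songs");
        onCreate(db);
    }

    public void put(String id, String title, String artist) {
        ContentValues v = new ContentValues();
        v.put("id", id);
        v.put("title", title);
        v.put("artist", artist);
        v.put("fetchedAt", System.currentTimeMillis());
        // replace() inserts or overwrites the row for this song id.
        getWritableDatabase().replace("songs", null, v);
    }
}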
I'm creating an app in which multiple devices can connect to each other in a LAN and play songs from each other's device.
Right now, when device A requests a song from device B, B sends the song file back to A over a socket. Once the song finishes downloading, A starts playing it using MediaPlayer.
Now the problem with this is that even on a high-speed LAN it can take a couple of seconds to finish fetching the song, so there's a noticeable delay between selecting a song and its playback actually starting.
I want to find a way to play a song while it's still being downloaded. Here are some possible solutions I found, but wasn't able to explore for certain reasons:
In the MediaPlayer documentation, I noticed a constructor that takes a MediaDataSource implementation allowing me to manually return parts of the media file whenever the MediaPlayer requires it so I could hold off returning a byte until it finishes downloading (if it hasn't finished downloading already), effectively acting like a "buffering" action. It seemed like a solution but unfortunately, my app's minSdk is set to 16 and MediaDataSource is for 23 and above only, so I couldn't try it out.
One option was to use MediaPlayer's setDataSource(FileDescriptor fd, long offset, long length) method. This would allow me to tell the MediaPlayer to only play up to a certain byte of the song. That way, I could wait for more parts of the file (or the entire file) to become available, and then use setNextMediaPlayer() and pass in a new MediaPlayer object that prepares the entire song and is made to seek up to the point where the previous media player object will stop playing so that there's a seamless transition.
But there's another problem with this. I need to be able to calculate the millisecond position that would be reached at that last specified byte of the first incomplete media player object. Otherwise I wouldn't know what position to seek the next media player object to in order to get a seamless transition. This calculation seems impossible for lossy formats.
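For what it's worth, the mechanics of that option would look roughly like the sketch below; handoffMs is exactly the hard-to-compute position described above, so this stays a sketch rather than a working solution:

import android.media.MediaPlayer;
import java.io.File;
import java.io.FileInputStream;

class PartialPlayback {
    // availableBytes = bytes downloaded so far; handoffMs = the position
    // where the first player's data runs out (the hard-to-compute value).
    static void playPartial(File partialFile, long availableBytes, int handoffMs) throws Exception {
        MediaPlayer first = new MediaPlayer();
        FileInputStream fis = new FileInputStream(partialFile);
        // Play only the bytes that have finished downloading so far.
        first.setDataSource(fis.getFD(), 0, availableBytes);
        fis.close(); // per the docs, safe to close once setDataSource() returns
        first.prepare();

        MediaPlayer second = new MediaPlayer();
        // Assumes the download completes before the handoff point is reached.
        second.setDataSource(partialFile.getAbsolutePath());
        second.prepare();
        second.seekTo(handoffMs); // the position that is hard to compute for lossy formats

        first.setNextMediaPlayer(second); // API 16+
        first.start();
    }
}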
I don't really know if this option will work or not; I'm just making assumptions. I noticed that setDataSource() takes a Uri. If that Uri points to a file on the phone, the media player just loads the entire file. But if the Uri points to an audio file on the internet that needs to be streamed, it figures that out on its own and handles all the details of downloading and buffering. So what I want to know is: is it possible to expose the song on device B as a Uri to device A, so that the media player treats it as if it were a song on the internet? All this time I was using sockets and manually copying a file from one device to another, so I have no idea how this would work. Could anyone explain if this is possible?
There's actually a reason why I haven't been exploring ways to "stream" a song from one device to another. It's that I want the song to be cached on device A, so that if I later switch to another song and then back to the previously played song from device B, it shouldn't have to stream it again. It should just play the cached copy.
Finally, I came across ExoPlayer. It seems to provide a large amount of customization, and it seems like I could write custom implementations of its classes to handle all the buffering. But the documentation and examples are few and far too complicated for me. I have no idea how to pull it off. Is this solution too complex for what I want to achieve?
Hope someone can point me in the right direction here.
Ended up using an answer from here:
MediaPlayer stutters at start of mp3 playback
This solution sets up a local server that the MediaPlayer can use to stream the file. The server only sends bytes to the MediaPlayer while those bytes are available. Otherwise, it blocks until more bytes are available. Thus the MediaPlayer buffers it as if it were a file from a web server.
I took the same class provided in that answer and just tweaked it to fit my situation.
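The core idea, sketched below. This is not the full class from the linked answer, the names are mine, and a real MediaPlayer client may also send Range requests that this toy server ignores:

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

class LocalAudioProxy extends Thread {
    private final File partialFile;        // the file still being downloaded
    private final long expectedLength;     // total size, known up front
    private volatile long downloadedBytes; // advanced by the download thread

    LocalAudioProxy(File partialFile, long expectedLength) {
        this.partialFile = partialFile;
        this.expectedLength = expectedLength;
    }

    void onBytesDownloaded(long totalSoFar) {
        downloadedBytes = totalSoFar;
    }

    @Override
    public void run() {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            // Point MediaPlayer at "http://127.0.0.1:" + server.getLocalPort() + "/"
            Socket client = server.accept();
            InputStream file = new FileInputStream(partialFile);
            OutputStream out = client.getOutputStream();
            out.write(("HTTP/1.1 200 OK\r\nContent-Type: audio/mpeg\r\n"
                    + "Content-Length: " + expectedLength + "\r\n\r\n").getBytes("US-ASCII"));
            byte[] buf = new byte[8192];
            long sent = 0;
            while (sent < expectedLength) {
                // Block until the downloader is ahead of what we've served.
                while (downloadedBytes <= sent) {
                    Thread.sleep(50);
                }
                int n = file.read(buf, 0, (int) Math.min(buf.length, downloadedBytes - sent));
                if (n == -1) break;
                out.write(buf, 0, n);
                sent += n;
            }
            out.flush();
            client.close();
            file.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}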
I have a webpage with a video. I have to protect this video from being captured from the browser with video-capturing programs. I think that for this task I need to check the process list or something like that, but to do that I would have to use Java. Could anyone give advice on how to create this kind of program? Thanks!
I think it is not possible to prevent the user from capturing the video. You can make it harder, but you will never prevent a user from capturing the screen of their own computer. Even if you could control the computer's process list (which I guess is impossible, or impossible for most users), you still can't prevent video capture from the computer's video output.
You are asking about something that looks like DRM. History shows that task is unsolvable.
You can try to identify capturing users (in case they drop your video onto a torrent site) with special unique marks that you add to the video for each user. Google: steganography.
So, I went over Java's sound tutorial and did not find it all that helpful.
Anyway, what I understood from the tutorial about recording sound from a mic is this:
Although they do show how to get a target data line and so on, they do not explain how you can actually record sound (or maybe I didn't get it).
My understanding so far has been this:
A Mixer can be your sound card or the sound software drivers that process the sound, whether input or output.
A TargetDataLine is used when you want to get sound into the computer, like saving it to disk.
A Port is where your external devices, like a mic, are connected.
Problems that remain
How do I select the proper mixer? Java's tutorial says you get all the available mixers and query each one to see if it has what you want. That's quite vague for a beginner.
How do I get the port my integrated mic is on? Specifically, how do I get input from it into the mixer?
How do I output this to disk?
Using the AudioSystem.getTargetDataLine(AudioFormat format) method you will get
... a target data line that can be used for recording audio data in the format specified by the AudioFormat object. The returned line will be provided by the default system mixer, or, if not possible, by any other mixer installed in the system that supports a matching TargetDataLine object.
See the accepted answer for Java Sound API - capturing microphone for an example of this.
If you want more control over which data line to use, you can enumerate all the mixers and the data lines they support and pick the one you want. Here is some more information about how you would go about doing that: Java - recording from mixer.
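A short sketch of that enumeration, listing every mixer that can supply a TargetDataLine for a given format:

import javax.sound.sampled.*;

public class ListCaptureMixers {
    public static void main(String[] args) {
        // The capture format you want: 44.1 kHz, 16-bit, mono, signed, little-endian.
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        for (Mixer.Info mixerInfo : AudioSystem.getMixerInfo()) {
            Mixer mixer = AudioSystem.getMixer(mixerInfo);
            if (mixer.isLineSupported(info)) {
                // This mixer can capture in the requested format.
                System.out.println(mixerInfo.getName() + ": " + mixerInfo.getDescription());
                // To use it: (TargetDataLine) mixer.getLine(info)
            }
        }
    }
}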
Once you've obtained the TargetDataLine you should open() it, then call read() repeatedly to obtain data from that line. The byte[] you fill with data on each call to read() can be written to disk, e.g. through a FileOutputStream.
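Putting the steps together, here is a minimal capture sketch. Instead of a manual read() loop plus FileOutputStream, it wraps the line in an AudioInputStream and lets AudioSystem.write() drive the reads, which also produces a valid WAV header:

import javax.sound.sampled.*;
import java.io.File;

public class MicRecorder {
    public static void main(String[] args) throws Exception {
        // 44.1 kHz, 16-bit, mono, signed, little-endian.
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        TargetDataLine line = AudioSystem.getTargetDataLine(format);
        line.open(format);
        line.start();

        // Stop recording from another thread when Enter is pressed;
        // closing the line makes AudioSystem.write() return.
        new Thread(() -> {
            try { System.in.read(); } catch (Exception ignored) { }
            line.stop();
            line.close();
        }).start();

        // Blocks here, reading from the line and writing capture.wav.
        AudioSystem.write(new AudioInputStream(line),
                AudioFileFormat.Type.WAVE, new File("capture.wav"));
    }
}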