I'm trying to play a short piece of mp3 audio (44100 Hz sample rate) with ExoPlayer using its ClippingMediaSource, but it cuts off about 250 milliseconds from the start of the piece, which is too much. Is it possible to make it more precise?
I start ExoPlayer like this:
dataSourceFactory = new DefaultDataSourceFactory(context, Util.getUserAgent(context, "com.example.player"));
Uri uri = Uri.fromFile(new File(path));
MediaSource audioSource = new ExtractorMediaSource.Factory(dataSourceFactory).createMediaSource(uri);
ExoPlayer exoPlayer = ExoPlayerFactory.newSimpleInstance(context, new DefaultTrackSelector());
exoPlayer.setPlayWhenReady(true);
// Clip from 7.225 s to 8.175 s (both positions are in microseconds)
ClippingMediaSource clip = new ClippingMediaSource(audioSource, 7_225_000, 8_175_000);
exoPlayer.prepare(clip);
According to the ExoPlayer documentation, there are two constructors:
ClippingMediaSource(MediaSource mediaSource, long startPositionUs, long endPositionUs)
Creates a new clipping source that wraps the specified source and provides samples between the specified start and end positions.
ClippingMediaSource(MediaSource mediaSource, long startPositionUs, long endPositionUs, boolean enableInitialDiscontinuity)
Try the second constructor with enableInitialDiscontinuity set to false, or adjust startPositionUs and endPositionUs.
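A minimal sketch of that second constructor, reusing audioSource and exoPlayer from the question (the microsecond values are the ones from the original code):

// Same clip window as above, but with the initial discontinuity
// disabled, as suggested here
ClippingMediaSource clip = new ClippingMediaSource(
        audioSource,
        7_225_000,  // startPositionUs
        8_175_000,  // endPositionUs
        false);     // enableInitialDiscontinuity
exoPlayer.prepare(clip);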
For more, check the clipping documentation of ExoPlayer. Hope this will solve your issue.
Related
So I have two URLs, one for audio and one for video. I want to play them together, but I really couldn't find any documentation about this.
I just found the answer. Just build it like below in Kotlin:
val dataSourceFactory: DataSource.Factory = DefaultHttpDataSource.Factory()
val videoSource: MediaSource = ProgressiveMediaSource.Factory(dataSourceFactory)
    .createMediaSource(fromUri(videoInPlayer.videoStreams[0].url))
val audioSource: MediaSource = ProgressiveMediaSource.Factory(dataSourceFactory)
    .createMediaSource(fromUri(videoInPlayer.audioStreams[0].url))
val mergeSource: MediaSource = MergingMediaSource(videoSource, audioSource)
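The merged source can then be set on the player like any single source (for example player.setMediaSource(mergeSource) followed by player.prepare() on ExoPlayer 2.12+, or player.prepare(mergeSource) on older versions). Note that MergingMediaSource expects the merged sources to have matching timelines so the two streams stay in sync.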
I am working on a Java application in Android Studio. I want code to extract Bitmaps from a video. I load the video using its absolute path. The input video is 8 FPS.
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(absolutePath);
I just want to get Bitmaps from the video. Any help would be appreciated.
I tried the following code and it worked:
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    // path of the video of which you want frames
    retriever.setDataSource(absolutePath);
} catch (Exception e) {
    e.printStackTrace();
}
String duration = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
int duration_millisec = Integer.parseInt(duration); // duration in millisec
int duration_second = duration_millisec / 1000;     // millisec to sec
int frames_per_second = 30;                         // no. of frames to retrieve per second
int numeroFrameCaptured = frames_per_second * duration_second;
long frame_us = 1_000_000 / frames_per_second;      // microseconds between frames
Log.d("TAG", "capture==" + numeroFrameCaptured);    // log how many frames will be captured
for (int i = 0; i < numeroFrameCaptured; i++) {
    // setting the time position at which you want to retrieve frames
    MEventsManager.getInstance().inject(MEventsManager.IMAGE,
            retriever.getFrameAtTime(frame_us * i, MediaMetadataRetriever.OPTION_CLOSEST));
}
retriever.release();
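One caveat: OPTION_CLOSEST decodes to the exact requested frame, which gets slow over many frames; if a nearby keyframe is good enough, MediaMetadataRetriever.OPTION_CLOSEST_SYNC is much faster. Also note that frames_per_second here is 30 while the question's video is 8 FPS, so many of the retrieved Bitmaps will be duplicates.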
If you want images from a video then you can use the library FFmpeg, by adding it to your Gradle dependencies:
implementation 'com.arthenica:mobile-ffmpeg-min:4.3.1.LTS'
By using various commands you can convert all frames of the video into images, or you can extract just a single frame as well.
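For example, here is a minimal sketch that dumps one frame per second to numbered PNGs with mobile-ffmpeg (the input and output paths are placeholders):

import com.arthenica.mobileffmpeg.Config;
import com.arthenica.mobileffmpeg.FFmpeg;

// Extract one frame per second from the video into numbered PNG files
int rc = FFmpeg.execute("-i /sdcard/input.mp4 -vf fps=1 /sdcard/frames/frame_%03d.png");
if (rc != Config.RETURN_CODE_SUCCESS) {
    Config.printLastCommandOutput(android.util.Log.INFO); // log why the command failed
}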
For more information, please visit:
https://github.com/tanersener/mobile-ffmpeg
If you only want a frame for displaying in an ImageView, then you can use the Glide/Picasso library.
Hope this will help you!
I am currently trying to reduce the quality of videos and audio before uploading to an online cloud database. Below is the code I have been using to record videos.
recordVideoIntent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 0);
Changing the 0 to 1 in EXTRA_VIDEO_QUALITY will increase the quality and vice versa, but the file is still too large to download if it is a 30-second or longer video.
private void RecordVideoMode() {
    Intent recordVideoIntent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
    // Ensure that there's a camera activity to handle the intent
    if (recordVideoIntent.resolveActivity(getPackageManager()) != null) {
        videoFile = createVideoFile();
        // Continue only if the File was successfully created
        if (videoFile != null) {
            videoURI = FileProvider.getUriForFile(this,
                    "com.example.android.fileprovider",
                    videoFile);
            recordVideoIntent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 0); // 0 = low quality
            recordVideoIntent.putExtra(MediaStore.EXTRA_OUTPUT, videoURI);
            startActivityForResult(recordVideoIntent, REQUEST_VIDEO_CAPTURE);
        }
    }
}
Any help is very much appreciated!
You can go with these two approaches:
1. Encode it to a lower bit rate and/or lower resolution. Have a look here: Is it possible to compress video on Android?
2. Try to zip/compress it. Have a look here: http://www.jondev.net/articles/Zipping_Files_with_Android_%28Programmatically%29
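For the second option, a minimal zip sketch with java.util.zip (file names are placeholders; note that already-compressed video may not shrink much):

import java.io.*;
import java.util.zip.*;

public static void zip(File input, File output) throws IOException {
    try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(output));
         FileInputStream fis = new FileInputStream(input)) {
        zos.putNextEntry(new ZipEntry(input.getName()));
        byte[] buffer = new byte[8192];
        int len;
        while ((len = fis.read(buffer)) != -1) {
            zos.write(buffer, 0, len); // stream the video file into the zip entry
        }
        zos.closeEntry();
    }
}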
I use this code:
mediaPlayer.setDataSource("http://some online radio");
mediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
mediaPlayer.setOnPreparedListener(this);
mediaPlayer.prepareAsync();
and the onPrepared method is:
if (mediaPlayer != null) {
mediaPlayer.start();
}
In general, the problem is: when I run this code, playback does not start right away, but about 10 seconds later. Also, some streams do not start at all. On the emulator it works a little better than on the device, but still. It depends on the specific radio station: some are better, others very bad.
I assume the problem is in the preparation and buffering. It might be possible to make an InputStream from this stream, write it to some temporary file/buffer, and read/play that file in the MediaPlayer, but how to implement this is not yet clear to me. Help, please.
If I just call mp.prepare() and then mp.start(), the result is the same.
On a PC in Chrome, all the radio streams that I tried start playing immediately.
Sorry for my English, thank you.
If somebody gets here for the same reason (just in case), the solution is ExoPlayer.
Something like this:
LoadControl loadControl = new DefaultLoadControl();
bandwidthMeter = new DefaultBandwidthMeter();
extractorsFactory = new DefaultExtractorsFactory();
trackSelectionFactory = new AdaptiveTrackSelection.Factory(bandwidthMeter);
trackSelector = new DefaultTrackSelector(trackSelectionFactory);
dataSourceFactory = new DefaultDataSourceFactory(this,
        Util.getUserAgent(this, "mediaPlayerSample"), bandwidthMeter);
mediaSource = new ExtractorMediaSource(Uri.parse(radioURL), dataSourceFactory, extractorsFactory, null, null);
player = ExoPlayerFactory.newSimpleInstance(this, trackSelector, loadControl);
player.prepare(mediaSource);
player.setPlayWhenReady(true);
But you'd better check the current version of the examples here: https://github.com/google/ExoPlayer
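If the startup delay is still too long, the buffer thresholds can be tuned instead of using the plain DefaultLoadControl above; depending on the ExoPlayer version this is done through the DefaultLoadControl constructor parameters or DefaultLoadControl.Builder.setBufferDurationsMs(...), so check the docs of the version you use.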
It seems like audio bitrate might be the key here: https://stackoverflow.com/a/12850965/3454741. There seems to be no easy way to get around this.
If the streaming does not start at all, it might be due to some error, which you could catch using MediaPlayer.setOnErrorListener().
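A minimal sketch of that listener (the log tag is a placeholder):

mediaPlayer.setOnErrorListener((mp, what, extra) -> {
    // 'what' and 'extra' are the framework's error and implementation-specific codes
    Log.e("RadioPlayer", "MediaPlayer error: what=" + what + ", extra=" + extra);
    return true; // true = handled here, so onCompletion will not be called
});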
TargetDataLine is, for me, so far the easiest way to capture microphone input in Java. I want to encode the audio that I capture together with a video of the screen (in a screen recorder application) so that the user can create a tutorial, slide case, etc.
I use Xuggler to encode the video.
They do have a tutorial on encoding audio with video, but they take their audio from a file. In my case, the audio is live.
To encode the video I use com.xuggle.mediaTool.IMediaWriter. The IMediaWriter object allows me to add a video stream and has an
encodeAudio(int streamIndex, short[] samples, long timeStamp, TimeUnit timeUnit)
method. I can use that if I can get the samples from the TargetDataLine as short[], but it gives me byte[].
So my two questions are:
How can I encode the live audio with video?
How do I maintain the proper timing of the audio packets so that they are encoded at the proper time?
References:
1. JavaDoc for TargetDataLine: http://docs.oracle.com/javase/1.4.2/docs/api/javax/sound/sampled/TargetDataLine.html
2. Xuggler Documentation: http://build.xuggle.com/view/Stable/job/xuggler_jdk5_stable/javadoc/java/api/index.html
Update
My code for capturing video:
public void run() {
    final IRational FRAME_RATE = IRational.make(frameRate, 1);
    final IMediaWriter writer = ToolFactory.makeWriter(completeFileName);
    writer.addVideoStream(0, 0, FRAME_RATE, recordingArea.width, recordingArea.height);
    long startTime = System.nanoTime();
    while (keepCapturing) {
        image = bot.createScreenCapture(recordingArea);
        PointerInfo pointerInfo = MouseInfo.getPointerInfo();
        Point globalPosition = pointerInfo.getLocation();
        int relativeX = globalPosition.x - recordingArea.x;
        int relativeY = globalPosition.y - recordingArea.y;
        BufferedImage bgr = convertToType(image, BufferedImage.TYPE_3BYTE_BGR);
        if (cursor != null) {
            // draw the mouse cursor onto the captured frame
            bgr.getGraphics().drawImage(((ImageIcon) cursor).getImage(), relativeX, relativeY, null);
        }
        try {
            writer.encodeVideo(0, bgr, System.nanoTime() - startTime, TimeUnit.NANOSECONDS);
        } catch (Exception e) {
            writer.close();
            JOptionPane.showMessageDialog(null,
                    "Recording will stop abruptly because " +
                    "an error has occurred", "Error", JOptionPane.ERROR_MESSAGE, null);
        }
        try {
            sleep(sleepTime);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    writer.close();
}
I answered most of that recently under this question: Xuggler encoding and muxing
Code sample:
writer.addVideoStream(videoStreamIndex, 0, videoCodec, width, height);
writer.addAudioStream(audioStreamIndex, 0, audioCodec, channelCount, sampleRate);
while (... have more data ...)
{
BufferedImage videoFrame = ...;
long videoFrameTime = ...; // this is the time to display this frame
writer.encodeVideo(videoStreamIndex, videoFrame, videoFrameTime, DEFAULT_TIME_UNIT);
short[] audioSamples = ...; // the size of this array should be number of samples * channelCount
long audioSamplesTime = ...; // this is the time to play back this bit of audio
writer.encodeAudio(audioStreamIndex, audioSamples, audioSamplesTime, DEFAULT_TIME_UNIT);
}
In the case of TargetDataLine, getMicrosecondPosition() will tell you the time you need for audioSamplesTime. This appears to start from the time the TargetDataLine was opened. You need to figure out how to get a video timestamp referenced to the same clock, which depends on the video device and/or how you capture video. The absolute values do not matter as long as they are both using the same clock. You could subtract the initial value (at start of stream) from both your video and your audio times so that the timestamps match, but that is only a somewhat approximate match (probably close enough in practice).
You need to call encodeVideo and encodeAudio in strictly increasing order of time; you may have to buffer some audio and some video to make sure you can do that. More details here.
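On the byte[] vs. short[] part of the question: a minimal conversion sketch, assuming the TargetDataLine was opened with 16-bit signed PCM (the helper name readSamples is made up for illustration):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.TargetDataLine;

static short[] readSamples(TargetDataLine line, int numBytes) {
    byte[] bytes = new byte[numBytes];
    int read = line.read(bytes, 0, bytes.length); // blocking read from the microphone
    AudioFormat fmt = line.getFormat();
    ByteOrder order = fmt.isBigEndian() ? ByteOrder.BIG_ENDIAN : ByteOrder.LITTLE_ENDIAN;
    short[] samples = new short[read / 2]; // 2 bytes per 16-bit sample
    ByteBuffer.wrap(bytes, 0, read).order(order).asShortBuffer().get(samples);
    return samples;
}

Pair each such chunk with line.getMicrosecondPosition() at the time of the read, as described above, when calling encodeAudio.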