I am working on a Java application in Android Studio. I want code to get Bitmaps from a video. I load the video using its absolute path, and the video's frame rate is 8 FPS.
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(absolutePath);
I just want to extract Bitmaps from the video. Any help would be appreciated.
I tried the following code and it worked:
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    // path of the video you want frames from
    retriever.setDataSource(absolutePath);
} catch (Exception e) {
    e.printStackTrace(); // invalid path or unreadable file
}
String duration = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
int duration_millisec = Integer.parseInt(duration); // duration in milliseconds
int duration_second = duration_millisec / 1000;     // milliseconds to seconds
int frames_per_second = 30; // number of frames to retrieve per second
int numeroFrameCaptured = frames_per_second * duration_second;
long frame_us = 1_000_000L / frames_per_second; // microseconds between frames
Log.d("FrameCapture", "frames to capture == " + numeroFrameCaptured);
for (int i = 0; i < numeroFrameCaptured; i++) {
    // set the time position (in microseconds) at which to retrieve the frame;
    // MEventsManager is the poster's own event bus for handing off each Bitmap
    MEventsManager.getInstance().inject(MEventsManager.IMAGE,
            retriever.getFrameAtTime(frame_us * i, MediaMetadataRetriever.OPTION_CLOSEST));
}
retriever.release();
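Note: since your source video is 8 FPS, frames_per_second = 30 will mostly return duplicate Bitmaps, because OPTION_CLOSEST snaps each request to the nearest actual frame. Matching the source rate (frames_per_second = 8, giving frame_us = 125_000) retrieves each frame exactly once.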
If you want images from a video, you can use the FFmpeg library by adding this Gradle dependency:
implementation 'com.arthenica:mobile-ffmpeg-min:4.3.1.LTS'
Using the appropriate commands you can convert all the frames of a video into images, or extract a single frame. For more information, please visit:
https://github.com/tanersener/mobile-ffmpeg
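For example, a minimal sketch using MobileFFmpeg's execute() call; both paths below are placeholders, not values from your project:

import com.arthenica.mobileffmpeg.Config;
import com.arthenica.mobileffmpeg.FFmpeg;

// Extract one frame per second from the video as numbered JPEGs.
int rc = FFmpeg.execute("-i /sdcard/Movies/input.mp4 -vf fps=1 /sdcard/Movies/frame_%03d.jpg");
if (rc == Config.RETURN_CODE_SUCCESS) {
    // frames were written next to the source video
}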
If you only want a frame to display in an ImageView, you can use the Glide or Picasso library, as sketched below.
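A minimal sketch with Glide 4, assuming the Glide dependency is already set up; frame() takes a position in microseconds, and context, absolutePath, and imageView are placeholders:

import com.bumptech.glide.Glide;
import com.bumptech.glide.request.RequestOptions;

// Ask Glide to decode the frame at the 5th second instead of the first frame.
RequestOptions options = new RequestOptions().frame(5_000_000);
Glide.with(context)
        .asBitmap()
        .load(absolutePath)
        .apply(options)
        .into(imageView);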
Hope this helps!
I am trying to play a short piece of MP3 audio (44100 Hz sample rate) with ExoPlayer using its ClippingMediaSource, but it cuts off about 250 milliseconds from the start of the clip. That is too much; is it possible to make it more precise?
I start ExoPlayer like this:
dataSourceFactory = new DefaultDataSourceFactory(context, Util.getUserAgent(context, "com.example.player"));
Uri uri = Uri.fromFile(new File(path));
MediaSource audioSource = new ExtractorMediaSource.Factory(dataSourceFactory).createMediaSource(uri);
ExoPlayer exoPlayer = ExoPlayerFactory.newSimpleInstance(context, new DefaultTrackSelector());
exoPlayer.setPlayWhenReady(true);
ClippingMediaSource clip = new ClippingMediaSource(audioSource, 7_225_000, 8_175_000);
exoPlayer.prepare(clip);
According to the ExoPlayer documentation, there are two constructors:
ClippingMediaSource(MediaSource mediaSource, long startPositionUs, long endPositionUs)
Creates a new clipping source that wraps the specified source and provides samples between the specified start and end positions.
ClippingMediaSource(MediaSource mediaSource, long startPositionUs, long endPositionUs, boolean enableInitialDiscontinuity)
Try the second constructor with enableInitialDiscontinuity set to false, or adjust startPositionUs and endPositionUs.
For more, check the ClippingMediaSource documentation of ExoPlayer.
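A sketch of the second constructor applied to the code from the question, assuming an ExoPlayer 2.x version that still exposes the enableInitialDiscontinuity overload:

// Wrap the same audio source, but disable the initial discontinuity so the
// player does not drop the first samples after the clip start point.
ClippingMediaSource clip = new ClippingMediaSource(
        audioSource,  // the MediaSource built in the question
        7_225_000L,   // startPositionUs (microseconds)
        8_175_000L,   // endPositionUs (microseconds)
        false);       // enableInitialDiscontinuity
exoPlayer.prepare(clip);

Hope this solves your issue.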
I am unable to get the size of the image. Below is my code.
File root = android.os.Environment.getExternalStorageDirectory();
File dirM = new File(root.getAbsolutePath()+"/Download/images.jpeg");
Long length = dirM.length();
File.length() already gives you the size: it returns the length of the file in bytes (getTotalSpace(), by contrast, would return the size of the whole partition, not of the file). If length() returns 0, the file most likely does not exist at that path, or your app is missing the READ_EXTERNAL_STORAGE permission:
File img = new File(root.getAbsolutePath() + "/Download/images.jpeg");
long imgSize = img.exists() ? img.length() : -1; // size in bytes, -1 if the file is missing
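If you want to display the size, a minimal sketch converting the byte count to a readable string (the variable names are my own):

long bytes = img.length();
String readable = bytes < 1024 ? bytes + " B"
        : bytes < 1024 * 1024 ? String.format("%.1f KB", bytes / 1024.0)
        : String.format("%.1f MB", bytes / (1024.0 * 1024.0));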
Right now I am creating a thumbnail using the method below:
Bitmap thumb = ThumbnailUtils.createVideoThumbnail(path,
MediaStore.Images.Thumbnails.MICRO_KIND);
This method works fine. The problem is that I get the thumbnail from the beginning of the video (around 00:00 or 00:01).
My question is: can I get a thumbnail from a specified position (let's say 00:05)?
Thanks.
Yes, you can, as shown below:
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    retriever.setDataSource(filePath);
    // getFrameAtTime() takes microseconds, so 5_000_000 is the frame at the 5th second
    bitmap = retriever.getFrameAtTime(5_000_000);
} catch (Exception ex) {
    // Assume this is a corrupt video file
} finally {
    retriever.release();
}
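Note that the single-argument getFrameAtTime() returns the closest sync (key) frame by default; if you need the frame nearest the exact position, pass an option explicitly:

bitmap = retriever.getFrameAtTime(5_000_000, MediaMetadataRetriever.OPTION_CLOSEST);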
For more info, check the MediaMetadataRetriever documentation.
FFmpegMediaMetadataRetriever will accomplish what you want and it works with API 7+:
FFmpegMediaMetadataRetriever retriever = new FFmpegMediaMetadataRetriever();
try {
    retriever.setDataSource(filePath);
    // the time is again in microseconds: 5_000_000 = frame at the 5th second
    bitmap = retriever.getFrameAtTime(5_000_000);
} catch (Exception ex) {
    // Assume this is a corrupt video file
} finally {
    retriever.release();
}
I have a JavaCV application using an external camera, but it's not working: the result is a black image from the camera.
I have another project that uses the same code, and it works fine.
I don't understand why it's not working in my new project.
capture = cvCreateCameraCapture(1);
imgCamera = cvQueryFrame(capture);
The code is simple: it captures the image from the external webcam and stores it in an IplImage.
Why does it work in one project and not in the other?
You can iterate through all the cameras attached to your system, find the index of the particular device (whether it is a built-in webcam or an external camera), and use that index in your code. Here is a sample:
int n = com.googlecode.javacv.cpp.videoInputLib.videoInput.listDevices();
for (int i = 0; i < n; i++) {
    String info = com.googlecode.javacv.cpp.videoInputLib.videoInput.getDeviceName(i);
    System.out.println("Camera: " + info + ", device index = " + i);
}
From the output you can see which index belongs to which device; then pass that index here:
capture = cvCreateCameraCapture(deviceIndex);
imgCamera = cvQueryFrame(capture);
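As an alternative, a sketch of JavaCV's higher-level FrameGrabber API, which often handles device selection more reliably than cvCreateCameraCapture (this assumes the same com.googlecode.javacv version as in the question):

import com.googlecode.javacv.OpenCVFrameGrabber;
import com.googlecode.javacv.cpp.opencv_core.IplImage;

// Grab a single frame from the device at the index found above.
static IplImage grabOne(int deviceIndex) throws Exception {
    OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(deviceIndex);
    grabber.start();
    IplImage img = grabber.grab();
    grabber.stop();
    return img;
}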
Hope this helps
TargetDataLine is, for me, so far the easiest way to capture microphone input in Java. I want to encode the audio that I capture together with a video of the screen [in a screen-recorder application] so that the user can create a tutorial, a slidecast, etc.
I use Xuggler to encode the video.
They do have a tutorial on encoding audio with video but they take their audio from a file. In my case, the audio is live.
To encode the video I use com.xuggle.mediaTool.IMediaWriter. The IMediaWriter object allows me to add a video stream and has an
encodeAudio(int streamIndex, short[] samples, long timeStamp, TimeUnit timeUnit)
method. I can use that if I can get the samples from the TargetDataLine as short[], but it returns byte[].
So my two questions are:
How can I encode the live audio with video?
How do I maintain the proper timing of the audio packets so that they are encoded at the proper time?
References:
1. JavaDoc for TargetDataLine: http://docs.oracle.com/javase/1.4.2/docs/api/javax/sound/sampled/TargetDataLine.html
2. Xuggler Documentation: http://build.xuggle.com/view/Stable/job/xuggler_jdk5_stable/javadoc/java/api/index.html
Update
My code for capturing video:
public void run(){
final IRational FRAME_RATE = IRational.make(frameRate, 1);
final IMediaWriter writer = ToolFactory.makeWriter(completeFileName);
writer.addVideoStream(0, 0,FRAME_RATE, recordingArea.width, recordingArea.height);
long startTime = System.nanoTime();
while (keepCapturing) {
image = bot.createScreenCapture(recordingArea);
PointerInfo pointerInfo = MouseInfo.getPointerInfo();
Point globalPosition = pointerInfo.getLocation();
int relativeX = globalPosition.x - recordingArea.x;
int relativeY = globalPosition.y - recordingArea.y;
BufferedImage bgr = convertToType(image,BufferedImage.TYPE_3BYTE_BGR);
if(cursor!=null){
bgr.getGraphics().drawImage(((ImageIcon)cursor).getImage(), relativeX,relativeY,null);
}
try{
writer.encodeVideo(0,bgr,System.nanoTime()-startTime,TimeUnit.NANOSECONDS);
}catch(Exception e){
writer.close();
JOptionPane.showMessageDialog(null,
        "Recording will stop abruptly because " +
        "an error has occurred", "Error", JOptionPane.ERROR_MESSAGE, null);
}
try{
sleep(sleepTime);
}catch(InterruptedException e){
e.printStackTrace();
}
}
writer.close();
}
I answered most of that recently under this question: Xuggler encoding and muxing
Code sample:
writer.addVideoStream(videoStreamIndex, 0, videoCodec, width, height);
writer.addAudioStream(audioStreamIndex, 0, audioCodec, channelCount, sampleRate);
while (... have more data ...)
{
BufferedImage videoFrame = ...;
long videoFrameTime = ...; // this is the time to display this frame
writer.encodeVideo(videoStreamIndex, videoFrame, videoFrameTime, DEFAULT_TIME_UNIT);
short[] audioSamples = ...; // the size of this array should be number of samples * channelCount
long audioSamplesTime = ...; // this is the time to play back this bit of audio
writer.encodeAudio(audioStreamIndex, audioSamples, audioSamplesTime, DEFAULT_TIME_UNIT);
}
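To get short[] from the byte[] that TargetDataLine.read() fills, you can convert through a ByteBuffer using the line's byte order. A minimal sketch, assuming 16-bit PCM samples (the helper name is my own):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import javax.sound.sampled.AudioFormat;

// Convert raw bytes read from a TargetDataLine into the short[] samples
// that IMediaWriter.encodeAudio() expects. Assumes 16-bit PCM audio.
static short[] toShorts(byte[] raw, int bytesRead, AudioFormat format) {
    ByteBuffer buf = ByteBuffer.wrap(raw, 0, bytesRead);
    buf.order(format.isBigEndian() ? ByteOrder.BIG_ENDIAN : ByteOrder.LITTLE_ENDIAN);
    short[] samples = new short[bytesRead / 2];
    buf.asShortBuffer().get(samples);
    return samples;
}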
In the case of TargetDataLine, getMicrosecondPosition() will tell you the time you need for audioSamplesTime. This appears to start from the time the TargetDataLine was opened. You need to figure out how to get a video timestamp referenced to the same clock, which depends on the video device and/or how you capture video. The absolute values do not matter as long as they are both using the same clock. You could subtract the initial value (at start of stream) from both your video and your audio times so that the timestamps match, but that is only a somewhat approximate match (probably close enough in practice).
You need to call encodeVideo and encodeAudio in strictly increasing order of time; you may have to buffer some audio and some video to make sure you can do that. More details here.
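One way to satisfy the strictly-increasing-time requirement is to buffer timestamped chunks and always encode whichever pending chunk is older. A sketch with hypothetical holders (VideoChunk and AudioChunk are my own names, not Xuggler types):

import java.awt.image.BufferedImage;
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.TimeUnit;

// Hypothetical timestamped holders; not part of Xuggler.
class VideoChunk { BufferedImage frame; long timeUs; }
class AudioChunk { short[] samples; long timeUs; }

Queue<VideoChunk> videoQueue = new ArrayDeque<>();
Queue<AudioChunk> audioQueue = new ArrayDeque<>();

// Drain both queues in timestamp order so encodeVideo/encodeAudio
// are always called with non-decreasing times.
while (!videoQueue.isEmpty() && !audioQueue.isEmpty()) {
    if (videoQueue.peek().timeUs <= audioQueue.peek().timeUs) {
        VideoChunk v = videoQueue.poll();
        writer.encodeVideo(videoStreamIndex, v.frame, v.timeUs, TimeUnit.MICROSECONDS);
    } else {
        AudioChunk a = audioQueue.poll();
        writer.encodeAudio(audioStreamIndex, a.samples, a.timeUs, TimeUnit.MICROSECONDS);
    }
}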