Right now I am creating a thumbnail using the method below:
Bitmap thumb = ThumbnailUtils.createVideoThumbnail(path,
        MediaStore.Images.Thumbnails.MICRO_KIND);
This works fine, but the thumbnail is taken from the very beginning of the video (probably from 00:00 or 00:01).
My question is: can I get a thumbnail from a specified position (let's say 00:05)?
Thanks.
Yes, you can, in the following way:
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    retriever.setDataSource(filePath);
    // getFrameAtTime() takes the position in microseconds,
    // so the frame at the 5th second is at 5 * 1000 * 1000 us.
    bitmap = retriever.getFrameAtTime(5000000);
} catch (Exception ex) {
    // Assume this is a corrupt video file
}
For more info, check the MediaMetadataRetriever documentation.
FFmpegMediaMetadataRetriever will accomplish what you want and it works with API 7+:
FFmpegMediaMetadataRetriever retriever = new FFmpegMediaMetadataRetriever();
try {
    retriever.setDataSource(filePath);
    // As above, the position is given in microseconds: 5 seconds = 5 * 1000 * 1000 us.
    bitmap = retriever.getFrameAtTime(5000000);
} catch (Exception ex) {
    // Assume this is a corrupt video file
}
I am trying to make an AirPlay server in Java with this library. I am able to start the server, connect to it, and receive video input. However, the input is in H.264 format, and when I try to decode it with JCodec it always says it needs an SPS/PPS, and I don't know how to create/find these from just a byte[]. This is the onVideo method, which is pretty much copy-pasted from some websites:
@Override
public void onVideo(byte[] video) {
    try {
        videoFileChannel.write(ByteBuffer.wrap(video));
        ByteBuffer bb = ByteBuffer.wrap(video);
        H264Decoder decoder = new H264Decoder();
        decoder.addSps(List.of(ByteBuffer.wrap(video)));
        Picture out = Picture.create(1920, 1088, ColorSpace.YUV420);
        var real = decoder.decodeFrame(bb, out.getData());
        // decoder.decodeFrame prints "[WARN] . (:0): Skipping frame as no SPS/PPS have been seen so far..."
        // in the console and returns null => NullPointerException in the next line
        var img = AWTUtil.toBufferedImage(real.createCompatible());
        // ...
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Edit: I've uploaded a ("working") version to GitHub, but the decoded image is discolored and doesn't update all pixels, so when something is on the screen and the frame changes, that something can still be visible in the image.
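For context, the SPS (NAL type 7) and PPS (NAL type 8) are ordinary NAL units inside the H.264 stream, so one way to find them is to scan the byte stream for start codes. Below is a minimal sketch, assuming the AirPlay payload is Annex B formatted (00 00 01 / 00 00 00 01 start codes); the helper name extractParameterSets is made up for illustration:
// Scans an Annex B byte stream and collects SPS (type 7) and PPS (type 8) NAL units.
public static void extractParameterSets(byte[] data, List<ByteBuffer> sps, List<ByteBuffer> pps) {
    List<Integer> starts = new ArrayList<>();
    for (int i = 0; i + 2 < data.length; i++) {
        if (data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 1) {
            starts.add(i + 3); // first byte of the NAL unit, right after the 00 00 01 start code
        }
    }
    for (int s = 0; s < starts.size(); s++) {
        int begin = starts.get(s);
        if (begin >= data.length) continue;
        // end of this NAL unit = start code of the next one (4-byte start codes leave a stray
        // trailing zero byte here, which is harmless for identifying the unit type)
        int end = (s + 1 < starts.size()) ? starts.get(s + 1) - 3 : data.length;
        int nalType = data[begin] & 0x1F; // lower 5 bits of the NAL header byte
        ByteBuffer nal = ByteBuffer.wrap(data, begin, end - begin);
        if (nalType == 7) sps.add(nal); // Sequence Parameter Set
        if (nalType == 8) pps.add(nal); // Picture Parameter Set
    }
}
The buffers collected this way could then be handed to the decoder before the first frame is decoded (the question's addSps call is the natural place for the SPS); exactly how JCodec expects the PPS to be supplied is an assumption here.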
I am trying to create a custom photo gallery in which the user can store images from the MediaStore photo gallery.
My goal is that the user can only store distinct images in the database; if an image is a duplicate, it will not be stored. So I thought I could use the file path as a sort of ID to check for duplicate images. But every time I restart the app and re-pick the same picture, the Uri/file path is not the same, especially the last part of the file path/Uri.
My (weird) implementation to check for duplicates:
public void checkAndSetPhotoGalleryData() throws IOException {
    RoomDB db = RoomDB.getInstance(this);
    PhotoGalleryData photoGalleryData = new PhotoGalleryData();
    String stringUri = String.valueOf(imageUri);
    String lastPartFile = stringUri.substring(stringUri.lastIndexOf('/') + 1);
    TextView testThis = findViewById(R.id.testName);
    testThis.setText(db_path);

    if (photoGarList.isEmpty()) {
        try {
            InputStream iStream = getContentResolver().openInputStream(imageUri);
            byte[] inputData = getBytes(iStream);
            photoGalleryData.setKey_Value_Quiz(String.valueOf(lastPartFile));
            photoGalleryData.setPhoto(inputData);
            db.questionDao().insert(photoGalleryData);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }
    if (!photoGarList.isEmpty()) {
        for (int i = 0; i < photoGarList.size(); i++) {
            if (photoGarList.get(i).getKey_Value_Quiz().length() > 0
                    && photoGarList.get(i).getKey_Value_Quiz().equals(String.valueOf(lastPartFile))) {
                showLongToast("Are Identical");
            }
            if (photoGarList.get(i).getKey_Value_Quiz().length() > 0
                    && !photoGarList.get(i).getKey_Value_Quiz().equals(String.valueOf(lastPartFile))
                    && i == photoGarList.size() - 1) {
                try {
                    InputStream iStream = getContentResolver().openInputStream(imageUri);
                    byte[] inputData = getBytes(iStream);
                    photoGalleryData.setKey_Value_Quiz(String.valueOf(lastPartFile));
                    photoGalleryData.setPhoto(inputData);
                    db.questionDao().insert(photoGalleryData);
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
Last part of the file path:
// both of these are the same picture:
KeyValue:924270434
KeyValue:239256090
I guess that for security reasons they change the last part every time. So I want to know: is there any constant property of a picture other than its byte[] or Bitmap (whose size is too huge to store at once, so that is not an option)?
You could use a simple pixel-by-pixel comparison of the images.
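As a rough illustration (not part of the original answer), a pixel-by-pixel check on Android could look like the sketch below, assuming both images have already been decoded into reasonably sized Bitmap objects:
// Returns true if both bitmaps have the same dimensions and identical pixel values.
public static boolean samePixels(Bitmap a, Bitmap b) {
    if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
        return false;
    }
    for (int y = 0; y < a.getHeight(); y++) {
        for (int x = 0; x < a.getWidth(); x++) {
            if (a.getPixel(x, y) != b.getPixel(x, y)) {
                return false;
            }
        }
    }
    return true;
}
On API 12 and above, Bitmap.sameAs() performs essentially the same check in a single call.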
Try to use ImageMagick:
ImageMagick
Or you could try Java OpenCV: How to compare two images using Java OpenCV library
I am working on a Java application in Android Studio. I would like some code to get Bitmaps from a video. I load the video using its absolute path. The video input is 8 FPS.
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(absolutePath);
I just want to extract Bitmaps from the video. Any help would be appreciated.
I tried the following code and it worked:
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    // path of the video of which you want frames
    retriever.setDataSource(absolutePath);
} catch (Exception e) {
    e.printStackTrace();
}

String duration = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
int duration_millisec = Integer.parseInt(duration); // duration in milliseconds
int duration_second = duration_millisec / 1000;     // milliseconds to seconds
int frames_per_second = 30;                         // number of frames to retrieve per second
int numeroFrameCaptured = frames_per_second * duration_second;
long frame_us = 1000000 / 30;                       // time step between frames, in microseconds
Log.d("TAG", "numeroFrameCaptured==" + numeroFrameCaptured);
for (int i = 0; i < numeroFrameCaptured; i++) {
    // setting the time position (in microseconds) at which you want to retrieve the frame
    MEventsManager.getInstance().inject(MEventsManager.IMAGE,
            retriever.getFrameAtTime(frame_us * i, MediaMetadataRetriever.OPTION_CLOSEST));
}
retriever.release();
If you want images from a video, you can use the FFmpeg library by adding it to Gradle with:
implementation 'com.arthenica:mobile-ffmpeg-min:4.3.1.LTS'
Using its commands you can convert all frames of a video into images, or you can extract a single frame as well (see the sketch below).
For more information, please visit:
https://github.com/tanersener/mobile-ffmpeg
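For example, a minimal sketch using mobile-ffmpeg's FFmpeg.execute(); the input path and output pattern below are placeholders:
// Extract one frame per second from the video into numbered JPEG files.
int rc = FFmpeg.execute("-i /sdcard/video.mp4 -vf fps=1 /sdcard/frames/frame_%03d.jpg");
if (rc == 0) {
    // frames were written successfully; load one with BitmapFactory.decodeFile() if needed
}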
If you only want a frame to display in an ImageView, you can use the Glide/Picasso library.
Hope this will help you!
I am taking a series of pictures using the Android Camera2 API for real-time pose estimation and environment reconstruction (the SLAM problem). Currently I simply save all of these pictures to my SD card for offline processing.
I set up the processing pipeline following Google's Camera2Basic sample, using a TextureView as well as an ImageReader, both of which are set as target surfaces for a repeating preview request.
mButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        if (mIsShooting) {
            try {
                mCaptureSession.stopRepeating();
                mPreviewRequestBuilder.removeTarget(mImageReader.getSurface());
                mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), mCaptureCallback, mBackgroundHandler);
                mIsShooting = false;
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        } else {
            try {
                mCaptureSession.stopRepeating();
                mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
                mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), mCaptureCallback, mBackgroundHandler);
                mIsShooting = true;
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }
    }
});
The ImageReader is added/removed when the button is pressed. The ImageReader's OnImageAvailableListener is implemented as follows:
private ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image img = reader.acquireLatestImage();
        if (null == img) {
            return;
        }
        if (img.getTimestamp() <= mLatestFrameTime) {
            Log.i(Tag, "disorder detected!");
            img.close(); // release the buffer so the reader does not stall
            return;
        }
        mLatestFrameTime = img.getTimestamp();
        ImageSaver saver = new ImageSaver(img, img.getTimestamp());
        saver.run();
    }
};
I use acquireLatestImage (with the buffer size set to 2) to discard old frames, and I have also checked the images' timestamps to make sure they are monotonically increasing.
The reader does receive images at an acceptable rate (about 25 fps). However, a closer look at the saved image sequence shows they are not always saved in chronological order.
The following pictures come from a long sequence shot by the program (sorry for not being able to post the pictures directly); three frames from the sequence, Image 1, Image 2, and Image 3, illustrate the problem (images omitted).
Such disorder does not occur very often, but it can occur at any time and does not seem to be an initialization problem. I suspect it has something to do with the ImageReader's buffer size, as with a larger buffer fewer "flashbacks" occur. Does anyone have the same problem?
I finally found that the disorder disappears when the ImageReader's format is set to YUV_420_888 in its constructor. Originally I had set this field to JPEG.
Using the JPEG format incurs not only a large processing delay but also the disorder. I guess the conversion from image sensor data to the desired format uses other hardware such as a DSP or GPU, which does not guarantee chronological order.
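For reference, a minimal sketch of the change described; width, height and the listener/handler names are placeholders taken from the surrounding setup:
// YUV_420_888 instead of ImageFormat.JPEG; maxImages = 2 as in the original setup
mImageReader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, /* maxImages */ 2);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);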
Are you using TEMPLATE_STILL_CAPTURE for the capture requests when you enable the ImageReader, or just TEMPLATE_PREVIEW? What devices are you seeing issues with?
If you're using STILL_CAPTURE, make sure you check if the device supports the ENABLE_ZSL flag, and set it to false. When it is set to true (generally the default on devices that support it, for the STILL_CAPTURE template), images may be returned out of order since there's a zero-shutter-lag queue in place within the camera device.
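A minimal sketch of disabling that flag on the still-capture request builder, assuming API 26+ (where CaptureRequest.CONTROL_ENABLE_ZSL is available) and a builder variable named stillCaptureRequestBuilder for illustration:
// Only meaningful for TEMPLATE_STILL_CAPTURE requests on devices that support the key.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
    stillCaptureRequestBuilder.set(CaptureRequest.CONTROL_ENABLE_ZSL, false);
}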
I'm using OpenGL ES to make a game in Android. I got some code from a tutorial and I'm trying to change it to suit my app but I'm having a problem. I want to dynamically get an image resource using a string passed into a function as the resource name. I know usually you use getIdentifier() in this case, but that returns an int and I need an input stream. Is there any way of getting an input stream from a resource dynamically?
Alternatively, is there a better way of doing this?
Code below:
InputStream is = mContext.getResources().openRawResource(R.drawable.<imagename>);
Bitmap bitmap;
try {
bitmap = BitmapFactory.decodeStream(is);
}
finally {
try {
is.close();
}
catch (IOException e) {
e.printStackTrace();
}
}
Yes, you can. Suppose you have images stored in drawable named img1, img2, img3, img4, img5, img6, img7. Then first make an array like:
String[] imgarray={"img1","img2","img3","img4","img5","img6","img7"};
public static String PACKAGE_NAME;
PACKAGE_NAME = getApplicationContext().getPackageName();
Random r = new Random();
int n = r.nextInt(imgarray.length); // note: arrays use .length, not .length()
int resID = getResources().getIdentifier(PACKAGE_NAME + ":drawable/" + imgarray[n], null, null);
imageview.setImageResource(resID);
If you want a Bitmap image, just add the line below:
Bitmap bm = BitmapFactory.decodeResource(getResources(),resID);
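Since openRawResource() also accepts a resource id as an int, the same identifier can be used to get the InputStream the question asks for; a minimal sketch, where "imagename" is a placeholder resource name:
// Resolve the drawable id from its name at runtime, then open it as a stream.
int resId = mContext.getResources().getIdentifier("imagename", "drawable", mContext.getPackageName());
InputStream is = mContext.getResources().openRawResource(resId);
Bitmap bitmap = BitmapFactory.decodeStream(is);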
If you want another way with less coding, see the accepted answer at Other Example.