I am creating a game for Android with Java. I am using the code from http://developer.android.com/training/displaying-bitmaps/load-bitmap.html for resizing Bitmaps, and I am using the .PNG format for all my images. It works really well except that the Bitmap's quality drops quite a lot, even when I scale down from a larger image. Is this solvable?
You can try out this code:
public static Bitmap shrinkBitmap(String p_file, int p_width, int p_height)
{
    // First pass: decode only the image bounds (no pixel allocation).
    BitmapFactory.Options m_bmpFactoryOptions = new BitmapFactory.Options();
    m_bmpFactoryOptions.inJustDecodeBounds = true;
    BitmapFactory.decodeFile(p_file, m_bmpFactoryOptions);

    // Compute how much larger the source is than the requested size.
    int m_heightRatio =
            (int) Math.ceil(m_bmpFactoryOptions.outHeight / (float) p_height);
    int m_widthRatio =
            (int) Math.ceil(m_bmpFactoryOptions.outWidth / (float) p_width);

    // Subsample by the larger ratio so both dimensions fit the target.
    if (m_heightRatio > 1 || m_widthRatio > 1)
    {
        m_bmpFactoryOptions.inSampleSize = Math.max(m_heightRatio, m_widthRatio);
    }

    // Second pass: decode the pixels at the reduced sample size.
    m_bmpFactoryOptions.inJustDecodeBounds = false;
    return BitmapFactory.decodeFile(p_file, m_bmpFactoryOptions);
}
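One thing to note is that inSampleSize subsamples without any filtering, which is often where the visible quality loss on PNGs comes from. As a rough sketch (my own suggestion, not part of the original answer), you can decode near the target size with the method above and then do a single filtered scale to the exact size:

// Decode close to the target with inSampleSize, then do one filtered
// scale to the exact size; the 'true' flag enables bilinear filtering,
// which usually keeps scaled PNGs looking much smoother.
Bitmap rough = shrinkBitmap(file, width, height);
Bitmap smooth = Bitmap.createScaledBitmap(rough, width, height, true);
if (smooth != rough) {
    rough.recycle(); // free the intermediate bitmap
}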
EDITED:
I have tried another way as well.
Can you please let me know how you are scaling your .PNGs, so that I can get an idea?
I am working on a Java application in Android Studio. I want code to get Bitmaps from a video. I load the video using its absolute path. The video's frame rate is 8 FPS.
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(absolutePath);
I just want to extract Bitmaps from the video. Any help would be appreciated.
I tried the following code and it worked:
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    // Path of the video you want frames from
    retriever.setDataSource(absolutePath);
} catch (Exception e) {
    e.printStackTrace(); // the original left this catch block empty
}
String duration = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
int duration_millisec = Integer.parseInt(duration); // duration in milliseconds
int duration_second = duration_millisec / 1000;     // milliseconds to seconds
int frames_per_second = 30;                         // no. of frames to retrieve per second
int numeroFrameCaptured = frames_per_second * duration_second;
long frame_us = 1000000 / frames_per_second;        // microseconds between frames
Log.d("FrameExtract", "number of frames to capture==" + numeroFrameCaptured);
for (int i = 0; i < numeroFrameCaptured; i++)
{
    // Set the time position at which to retrieve a frame
    MEventsManager.getInstance().inject(MEventsManager.IMAGE,
            retriever.getFrameAtTime(frame_us * i, MediaMetadataRetriever.OPTION_CLOSEST));
}
retriever.release();
If you want images from a video, you can also use the FFmpeg library by adding it to your Gradle dependencies with:
implementation 'com.arthenica:mobile-ffmpeg-min:4.3.1.LTS'
Using its commands you can convert all frames of a video into images, or extract a single frame, as sketched below.
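A minimal sketch (the paths here are placeholders of my own, not from the original answer):

// Extract one frame per second from the input video into numbered PNGs.
// FFmpeg.execute(...) is mobile-ffmpeg's entry point; it returns a code,
// where RETURN_CODE_SUCCESS means the command completed successfully.
int rc = FFmpeg.execute("-i " + inputPath + " -r 1 " + outputDir + "/frame_%03d.png");
if (rc == Config.RETURN_CODE_SUCCESS) {
    Log.d("FFmpeg", "Frame extraction finished");
}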
For more information, please visit:
https://github.com/tanersener/mobile-ffmpeg
If you only want a frame to display in an ImageView, you can use the Glide or Picasso library, as in the sketch below.
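A rough sketch with Glide 4 (variable names are placeholders; this assumes the video file is on local storage):

// Glide can decode a single frame from a local video file;
// frame(long) takes the desired timestamp in microseconds.
RequestOptions options = new RequestOptions().frame(5_000_000); // frame at the 5 s mark
Glide.with(context)
     .asBitmap()
     .load(new File(videoPath))
     .apply(options)
     .into(imageView);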
Hope this helps!
I want to find a face in an image from the camera, but the detector can't find any faces. My app takes a photo and saves it to a file.
Below is the code that creates the file, starts the camera, and, in onActivityResult, tries to detect a face and save the file path to the Room database. The path saves correctly and shows up in the RecyclerView as expected, but the face detector doesn't find any faces. How can I fix this?
private fun takePhoto() {
    val takePictureIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
    if (takePictureIntent.resolveActivity(activity?.packageManager!!) != null) {
        val photoFile: File
        try {
            photoFile = createImageFile()
        } catch (e: IOException) {
            error { e }
        }
        val photoURI = FileProvider.getUriForFile(activity?.applicationContext!!,
                "com.nasibov.fakhri.neurelia.fileprovider", photoFile)
        takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, photoURI)
        takePictureIntent.putExtra("android.intent.extras.CAMERA_FACING", 1)
        startActivityForResult(takePictureIntent, PhotoFragment.REQUEST_TAKE_PHOTO)
    }
}

@Suppress("SimpleDateFormat")
private fun createImageFile(): File {
    val date = SimpleDateFormat("yyyyMMdd_HHmmss").format(Date())
    val fileName = "JPEG_$date"
    val filesDir = activity?.getExternalFilesDir(Environment.DIRECTORY_PICTURES)
    val image = File.createTempFile(fileName, ".jpg", filesDir)
    mCurrentImage = image
    return mCurrentImage
}

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    if (requestCode == REQUEST_TAKE_PHOTO && resultCode == Activity.RESULT_OK) {
        val bitmap = BitmapFactory.decodeFile(mCurrentImage.absolutePath)
        val frame = Frame.Builder().setBitmap(bitmap).build()
        val detectedFaces = mFaceDetector.detect(frame)
        mViewModel.savePhoto(mCurrentImage)
    }
}
The Android face detection API tracks faces in photos and videos using landmarks such as the eyes, nose, ears, cheeks, and mouth.
Rather than detecting the individual features first, the API detects the face as a whole and then, if configured, detects the landmarks and classifications. The API can also detect faces at various angles.
https://www.journaldev.com/15629/android-face-detection
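For reference, a minimal sketch of that flow with the Mobile Vision FaceDetector (my own illustration; context, bitmap, and the log tag are placeholders):

// Build a detector, wrap the bitmap in a Frame, and run detection.
FaceDetector detector = new FaceDetector.Builder(context)
        .setTrackingEnabled(false)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .build();
if (detector.isOperational()) {
    Frame frame = new Frame.Builder().setBitmap(bitmap).build();
    SparseArray<Face> faces = detector.detect(frame);
    Log.d("FaceDetect", "faces found: " + faces.size());
}
detector.release(); // always release the detector's native resources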
The Android SDK contains an API for face detection: the android.media.FaceDetector class. This class detects faces in an image. To detect faces, call the findFaces method of the FaceDetector class. findFaces returns the number of detected faces and fills the FaceDetector.Face[] array. Please note that findFaces currently supports only bitmaps in RGB_565 format.
Each instance of the FaceDetector.Face class contains the following information:
Confidence that it’s actually a face – a float value between 0 and 1.
Distance between the eyes – in pixels.
Position (x, y) of the mid-point between the eyes.
Rotations (X, Y, Z).
Unfortunately, it doesn’t contain a framing rectangle that includes the detected face.
Here is sample source code for face detection. It implements a custom View that shows a saved image from the SD card and draws transparent red circles on the detected faces.
class Face_Detection_View extends View {
    private static final int MAX_FACES = 10;
    private static final String IMAGE_FN = "face.jpg";
    private Bitmap background_image;
    private FaceDetector.Face[] faces;
    private int face_count;

    // Preallocate for onDraw(...)
    private PointF tmp_point = new PointF();
    private Paint tmp_paint = new Paint();

    public Face_Detection_View(Context context) {
        super(context);
        // Load an image from the SD card
        updateImage(Environment.getExternalStorageDirectory() + "/" + IMAGE_FN);
    }

    public void updateImage(String image_fn) {
        // Set internal configuration to RGB_565
        BitmapFactory.Options bitmap_options = new BitmapFactory.Options();
        bitmap_options.inPreferredConfig = Bitmap.Config.RGB_565;
        background_image = BitmapFactory.decodeFile(image_fn, bitmap_options);
        FaceDetector face_detector = new FaceDetector(
                background_image.getWidth(), background_image.getHeight(),
                MAX_FACES);
        faces = new FaceDetector.Face[MAX_FACES];
        // The bitmap must be in 565 format (for now).
        face_count = face_detector.findFaces(background_image, faces);
        Log.d("Face_Detection", "Face Count: " + String.valueOf(face_count));
    }

    public void onDraw(Canvas canvas) {
        canvas.drawBitmap(background_image, 0, 0, null);
        for (int i = 0; i < face_count; i++) {
            FaceDetector.Face face = faces[i];
            tmp_paint.setColor(Color.RED);
            tmp_paint.setAlpha(100);
            face.getMidPoint(tmp_point);
            canvas.drawCircle(tmp_point.x, tmp_point.y, face.eyesDistance(),
                    tmp_paint);
        }
    }
}
I found the demo code here: https://github.com/googlesamples/android-vision/blob/master/visionSamples/FaceTracker/app/src/main/java/com/google/android/gms/samples/vision/face/facetracker/FaceTrackerActivity.java
My question is how to take a picture when a face is detected and save it to the device, and after the first picture is taken, the next picture should only be taken 5 seconds after a face is detected again, because we can't save too many pictures to the device.
You have to add a FaceDetectionListener in the Camera API and then call the startFaceDetection() method:
Camera.FaceDetectionListener fDListener = new MyFaceDetectionListener();
mCamera.setFaceDetectionListener(fDListener);
mCamera.startFaceDetection();
Implement Camera.FaceDetectionListener; you receive the detected faces in the onFaceDetection override method:
private class MyFaceDetectionListener
        implements Camera.FaceDetectionListener {
    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        if (faces.length == 0) {
            Log.i(TAG, "No faces detected");
        } else {
            Log.i(TAG, "Faces Detected = " + String.valueOf(faces.length));
            List<Rect> faceRects = new ArrayList<Rect>();
            for (int i = 0; i < faces.length; i++) {
                int left = faces[i].rect.left;
                int right = faces[i].rect.right;
                int top = faces[i].rect.top;
                int bottom = faces[i].rect.bottom;
                Rect uRect = new Rect(left, top, right, bottom);
                faceRects.add(uRect);
            }
            // Add a function to draw the rects on a view/surface/canvas
        }
    }
}
For your case, use new Handler().postDelayed(Runnable, delayMillis) and take the second picture inside the Runnable after 5 seconds (5000 ms), as in the sketch below.
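A rough sketch of that throttling, assuming an existing mCamera and picture callback (both placeholders here):

// Allow at most one capture every 5 seconds after a face is detected.
private boolean canCapture = true;

private void capturePhotoThrottled() {
    if (!canCapture) return;
    canCapture = false;
    mCamera.takePicture(null, null, mPictureCallback); // assumed existing JPEG callback
    new Handler(Looper.getMainLooper()).postDelayed(new Runnable() {
        @Override
        public void run() {
            canCapture = true; // re-arm after 5 seconds
        }
    }, 5000);
}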
Please let me know if you have any queries.
I'm trying to implement DCT code in Android. I'm testing with the code from Convert OpenCv DFT example from C++ to Android, just changing DFT to DCT. There have been changes to the code, thanks to timegalore. Now I'm having problems converting the image back to BGR.
public void transformImage() {
    image = Highgui.imread(imageName, Highgui.CV_LOAD_IMAGE_GRAYSCALE);
    try {
        secondImage = new Mat(image.rows(), image.cols(), CvType.CV_64FC1);
        image.convertTo(secondImage, CvType.CV_64FC1);
        int m = Core.getOptimalDFTSize(image.rows());
        int n = Core.getOptimalDFTSize(image.cols()); // on the border add zero values
        Mat padded = new Mat(new Size(n, m), CvType.CV_64FC1); // expand input image to optimal size
        Imgproc.copyMakeBorder(secondImage, padded, 0, m - secondImage.rows(), 0, n - secondImage.cols(), Imgproc.BORDER_CONSTANT);
        Mat result = new Mat(padded.size(), padded.type());
        Core.dct(padded, result);
        Mat transformedImage = new Mat(padded.size(), padded.type());
        Core.idct(result, transformedImage);
        completedImage = new Mat(image.rows(), image.cols(), CvType.CV_64FC1);
        Imgproc.cvtColor(transformedImage, completedImage, Imgproc.COLOR_GRAY2BGR);
    } catch (Exception e) {
        Log.e("Blargh", e.toString());
    }
}
Now I get this error:
04-09 21:35:52.362: E/cv::error()(23460): OpenCV Error: Assertion failed (depth == CV_8U || depth == CV_16U || depth == CV_32F) in void cv::cvtColor(cv::InputArray, cv::OutputArray, int, int), file /home/reports/ci/slave_desktop/50-SDK/opencv/modules/imgproc/src/color.cpp, line 3642
I am not sure what I should do; please advise. Your help is very much appreciated!
You have this line:
image.convertTo(secondImage, CvType.CV_64FC1);
but then you don't use secondImage again, just image. Try:
Imgproc.copyMakeBorder(secondImage, padded, 0, m - secondImage.rows(), 0, n - secondImage.cols(), Imgproc.BORDER_CONSTANT);
and see how you get on.
Also, DCT works only on real values, not complex ones like DFT, so you don't need to add a second channel to hold a zeroed imaginary part. You can work directly with the padded variable, so:
Mat result = new Mat(padded.size(), padded.type());
then
Core.dct(padded, result);
Also, the original image needs to be single channel, i.e. greyscale. When you call Highgui.imread, the loaded image is multichannel; on my device it is 3-channel in BGR format. You could convert it to greyscale using Imgproc.cvtColor, but it is simpler just to load it as greyscale in the first place:
image = Highgui.imread(imageName, Highgui.CV_LOAD_IMAGE_GRAYSCALE);
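As for the assertion error itself: cvtColor only accepts 8-bit, 16-bit unsigned, or 32-bit float input, so the CV_64FC1 result of idct has to be converted down first. A minimal sketch, reusing the variable names from the question:

// cvtColor asserts depth == CV_8U || CV_16U || CV_32F, so convert
// the 64-bit float idct output back to 8-bit before the colour conversion.
Mat eightBit = new Mat();
transformedImage.convertTo(eightBit, CvType.CV_8UC1);
Imgproc.cvtColor(eightBit, completedImage, Imgproc.COLOR_GRAY2BGR);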
I was practicing some face recognition and detection code using Java with JavaCV on Eclipse Juno. The thing is, I was trying to run the sample code below but I can't get the expected result or output. The sample code is as follows:
import com.googlecode.javacpp.Loader;
import com.googlecode.javacv.*;
import com.googlecode.javacv.cpp.*;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_calib3d.*;
import static com.googlecode.javacv.cpp.opencv_objdetect.*;
public class Demo {
    public static void main(String[] args) throws Exception {
        String classifierName = null;
        if (args.length > 0) {
            classifierName = args[0];
        } else {
            System.err.println("C://opencv/data/haarcascades\"haarcascade_frontalface_alt.xml\".");
            System.exit(1);
        }

        // Preload the opencv_objdetect module to work around a known bug.
        Loader.load(opencv_objdetect.class);

        // We can "cast" Pointer objects by instantiating a new object of the desired class.
        CvHaarClassifierCascade classifier = new CvHaarClassifierCascade(cvLoad(classifierName));
        if (classifier.isNull()) {
            System.err.println("Error loading classifier file \"" + classifierName + "\".");
            System.exit(1);
        }

        // CanvasFrame is a JFrame containing a Canvas component, which is hardware accelerated.
        // It can also switch into full-screen mode when called with a screenNumber.
        CanvasFrame frame = new CanvasFrame("Some Title");

        // OpenCVFrameGrabber uses opencv_highgui, but other more versatile FrameGrabbers
        // include DC1394FrameGrabber, FlyCaptureFrameGrabber, OpenKinectFrameGrabber,
        // PS3EyeFrameGrabber, VideoInputFrameGrabber, and FFmpegFrameGrabber.
        FrameGrabber grabber = new OpenCVFrameGrabber(0);
        grabber.start();

        // FAQ about IplImage:
        // - For custom raw processing of data, getByteBuffer() returns an NIO direct
        //   buffer wrapped around the memory pointed by imageData.
        // - To get a BufferedImage from an IplImage, you may call getBufferedImage().
        // - The createFrom() factory method can construct an IplImage from a BufferedImage.
        // - There are also a few copy*() methods for BufferedImage<->IplImage data transfers.
        IplImage grabbedImage = grabber.grab();
        int width = grabbedImage.width();
        int height = grabbedImage.height();
        IplImage grayImage = IplImage.create(width, height, IPL_DEPTH_8U, 1);
        IplImage rotatedImage = grabbedImage.clone();

        // Let's create some random 3D rotation...
        CvMat randomR = CvMat.create(3, 3), randomAxis = CvMat.create(3, 1);
        // We can easily and efficiently access the elements of CvMat objects
        // with the set of get() and put() methods.
        randomAxis.put((Math.random() - 0.5) / 4, (Math.random() - 0.5) / 4, (Math.random() - 0.5) / 4);
        cvRodrigues2(randomAxis, randomR, null);
        double f = (width + height) / 2.0;
        randomR.put(0, 2, randomR.get(0, 2) * f);
        randomR.put(1, 2, randomR.get(1, 2) * f);
        randomR.put(2, 0, randomR.get(2, 0) / f);
        randomR.put(2, 1, randomR.get(2, 1) / f);
        System.out.println(randomR);

        // Objects allocated with a create*() or clone() factory method are automatically released
        // by the garbage collector, but may still be explicitly released by calling release().
        // You shall NOT call cvReleaseImage(), cvReleaseMemStorage(), etc.
        // on objects allocated this way.
        CvMemStorage storage = CvMemStorage.create();

        // We can allocate native arrays using constructors taking an integer as argument.
        CvPoint hatPoints = new CvPoint(3);

        // Again, FFmpegFrameRecorder also exists as a more versatile alternative.
        FrameRecorder recorder = new OpenCVFrameRecorder("output.avi", width, height);
        recorder.start();

        while (frame.isVisible() && (grabbedImage = grabber.grab()) != null) {
            cvClearMemStorage(storage);

            // Let's try to detect some faces! but we need a grayscale image...
            cvCvtColor(grabbedImage, grayImage, CV_BGR2GRAY);
            CvSeq faces = cvHaarDetectObjects(grayImage, classifier, storage,
                    1.1, 3, CV_HAAR_DO_CANNY_PRUNING);
            int total = faces.total();
            for (int i = 0; i < total; i++) {
                CvRect r = new CvRect(cvGetSeqElem(faces, i));
                int x = r.x(), y = r.y(), w = r.width(), h = r.height();
                cvRectangle(grabbedImage, cvPoint(x, y), cvPoint(x + w, y + h), CvScalar.RED, 1, CV_AA, 0);
                // To access the elements of a native array, use the position() method.
                hatPoints.position(0).x(x - w / 10).y(y - h / 10);
                hatPoints.position(1).x(x + w * 11 / 10).y(y - h / 10);
                hatPoints.position(2).x(x + w / 2).y(y - h / 2);
                cvFillConvexPoly(grabbedImage, hatPoints.position(0), 3, CvScalar.GREEN, CV_AA, 0);
            }

            // Let's find some contours! but first some thresholding...
            cvThreshold(grayImage, grayImage, 64, 255, CV_THRESH_BINARY);

            // To check if an output argument is null we may call either isNull() or equals(null).
            CvSeq contour = new CvSeq(null);
            cvFindContours(grayImage, storage, contour, Loader.sizeof(CvContour.class),
                    CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            while (contour != null && !contour.isNull()) {
                if (contour.elem_size() > 0) {
                    CvSeq points = cvApproxPoly(contour, Loader.sizeof(CvContour.class),
                            storage, CV_POLY_APPROX_DP, cvContourPerimeter(contour) * 0.02, 0);
                    cvDrawContours(grabbedImage, points, CvScalar.BLUE, CvScalar.BLUE, -1, 1, CV_AA);
                }
                contour = contour.h_next();
            }

            cvWarpPerspective(grabbedImage, rotatedImage, randomR);
            frame.showImage(rotatedImage);
            recorder.record(rotatedImage);
        }

        recorder.stop();
        grabber.stop();
        frame.dispose();
    }
}
The output I am getting is a line printed in red, like this:
C://opencv/data/haarcascades"haarcascade_frontalface_alt.xml".
Can anybody show me what I missed?
I am new to image processing, so can anyone please point me to good tutorials and sample source code that could teach me how to master all the built-in functions in JavaCV and their functionality? I am working on my final year project and really need your help on this one.
With lots of respect
Sisay
haarcascade_frontalface_alt.xml is a trained classifier for detecting frontal faces. It is usually present in the opencv_installation_folder/opencv/data/haarcascades folder. You can give the direct path to your classifier instead of taking it from the command line, e.g.:
classifierName = "opencv_installation_folder/opencv/data/haarcascades/haarcascade_frontalface_alt.xml";
That demo expects you to give it the cascade file as an argument; it just stops if it does not get one.
Maybe you want to change the beginning like this:
public class Demo {
    public static void main(String[] args) throws Exception {
        String classifierName = "C:/opencv/data/haarcascades/haarcascade_frontalface_alt.xml";
        if (args.length > 0) {
            classifierName = args[0];
        }
That way it takes the argument from the command line if present; otherwise it uses the default value.