Convert OpenCV DCT to Android - Java

I'm trying to implement DCT in Android. I'm testing it using the code from Convert OpenCv DFT example from C++ to Android, just changing DFT to DCT. The code has already been changed, thanks to timegalore. Now I'm having problems converting the image back to BGR.
public void transformImage() {
    image = Highgui.imread(imageName, Highgui.CV_LOAD_IMAGE_GRAYSCALE);
    try {
        secondImage = new Mat(image.rows(), image.cols(), CvType.CV_64FC1);
        image.convertTo(secondImage, CvType.CV_64FC1);

        int m = Core.getOptimalDFTSize(image.rows());
        int n = Core.getOptimalDFTSize(image.cols()); // on the border add zero values
        Mat padded = new Mat(new Size(n, m), CvType.CV_64FC1); // expand input image to optimal size
        Imgproc.copyMakeBorder(secondImage, padded, 0, m - secondImage.rows(), 0, n - secondImage.cols(), Imgproc.BORDER_CONSTANT);

        Mat result = new Mat(padded.size(), padded.type());
        Core.dct(padded, result);

        Mat transformedImage = new Mat(padded.size(), padded.type());
        Core.idct(result, watermarkedImage);

        completedImage = new Mat(image.rows(), image.cols(), CvType.CV_64FC1);
        Imgproc.cvtColor(transformedImage, completedImage, Imgproc.COLOR_GRAY2BGR);
    } catch (Exception e) {
        Log.e("Blargh", e.toString());
    }
}
Now I have obtained this error:
04-09 21:35:52.362: E/cv::error()(23460): OpenCV Error: Assertion failed (depth == CV_8U || depth == CV_16U || depth == CV_32F) in void cv::cvtColor(cv::InputArray, cv::OutputArray, int, int), file /home/reports/ci/slave_desktop/50-SDK/opencv/modules/imgproc/src/color.cpp, line 3642
I am not sure what I should do; please advise. Your help is very much appreciated!

You have this line:
image.convertTo(secondImage, CvType.CV_64FC1);
but then you don't use secondImage again, just image. Try:
Imgproc.copyMakeBorder(secondImage, padded, 0, m - secondImage.rows(), 0, n - secondImage.cols(), Imgproc.BORDER_CONSTANT);
and see how you get on.
Also, DCT works on real values only, not complex numbers like DFT, so you don't need to add a second channel to zero the imaginary part. You can work directly with the padded variable, so:
Mat result = new Mat(padded.size(), padded.type());
then
Core.dct(padded, result);
Also, the original image needs to be single channel - so a greyscale. When you call Highgui.imread, the image that will be loaded is multichannel - on my device it is 3 channel in BGR format. You can convert it to greyscale using Imgproc.cvtColor, but it would be simpler just to load it as greyscale in the first place:
image = Highgui.imread(imageName, Highgui.CV_LOAD_IMAGE_GRAYSCALE);
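Putting this together, here is a minimal sketch of the whole round trip. Two assumptions beyond the answer above: the assertion message says cvtColor only accepts CV_8U, CV_16U or CV_32F input, so the CV_64F result of idct is converted back to CV_8U before the GRAY2BGR conversion; and since OpenCV's DCT supports only even-size arrays, the padding uses the getOptimalDFTSize((x + 1) / 2) * 2 formula the OpenCV docs recommend for DCT:
public void transformImage() {
    Mat image = Highgui.imread(imageName, Highgui.CV_LOAD_IMAGE_GRAYSCALE);
    try {
        Mat secondImage = new Mat();
        image.convertTo(secondImage, CvType.CV_64FC1);

        // dct() requires even sizes, hence the (x + 1) / 2 * 2 padding
        int m = Core.getOptimalDFTSize((image.rows() + 1) / 2) * 2;
        int n = Core.getOptimalDFTSize((image.cols() + 1) / 2) * 2;
        Mat padded = new Mat(new Size(n, m), CvType.CV_64FC1);
        Imgproc.copyMakeBorder(secondImage, padded, 0, m - secondImage.rows(), 0, n - secondImage.cols(), Imgproc.BORDER_CONSTANT);

        Mat result = new Mat(padded.size(), padded.type());
        Core.dct(padded, result);

        // ... any frequency-domain processing would go here ...

        Mat restored = new Mat(padded.size(), padded.type());
        Core.idct(result, restored);

        // cvtColor asserts depth is 8U, 16U or 32F, so convert down first
        Mat restored8u = new Mat();
        restored.convertTo(restored8u, CvType.CV_8U);

        Mat completedImage = new Mat();
        Imgproc.cvtColor(restored8u, completedImage, Imgproc.COLOR_GRAY2BGR);
    } catch (Exception e) {
        Log.e("DCT", e.toString());
    }
}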

Related

Converting 16 bit depth frame from Intel Realsense D455 to OpenCV Mat in Android Java

I am trying to convert a DepthFrame object that I have obtained from the Intel Realsense D455 camera to an OpenCV Mat object in Java. I can get the target depth of a pixel using DepthFrame.getDistance(x, y), but I am trying to get the whole matrix so that I can get the distance values in meters, similar to the sample code in their GitHub repo, which is in C++.
I can convert any color image obtained from the camera stream (VideoFrame or colorized DepthFrame) to a Mat, since they are 8 bits per channel, using the following function:
public static Mat VideoFrame2Mat(final VideoFrame frame) {
    Mat frameMat = new Mat(frame.getHeight(), frame.getWidth(), CV_8UC3);
    final int bufferSize = (int) (frameMat.total() * frameMat.elemSize());
    byte[] dataBuffer = new byte[bufferSize];
    frame.getData(dataBuffer);
    ByteBuffer.wrap(dataBuffer).order(ByteOrder.LITTLE_ENDIAN).asReadOnlyBuffer().get(dataBuffer);
    frameMat.put(0, 0, dataBuffer);
    return frameMat;
}
However, the un-colorized DepthFrame values are 16 bits per pixel, and the above code gives an error when CV_8UC3 is substituted with CV_16UC1. The error arises because in the Java wrapper of the OpenCV function Mat.put(row, col, data[]), there is a type check that allows only 8-bit Mats to be processed:
// javadoc:Mat::put(row,col,data)
public int put(int row, int col, byte[] data) {
    int t = type();
    if (data == null || data.length % CvType.channels(t) != 0)
        throw new UnsupportedOperationException(
                "Provided data element number (" +
                (data == null ? 0 : data.length) +
                ") should be multiple of the Mat channels count (" +
                CvType.channels(t) + ")");
    if (CvType.depth(t) == CvType.CV_8U || CvType.depth(t) == CvType.CV_8S) {
        return nPutB(nativeObj, row, col, data.length, data);
    }
    throw new UnsupportedOperationException("Mat data type is not compatible: " + t);
}
Therefore I tried to use the Mat constructor that accepts a ByteBuffer and wrote the following method:
public static Mat DepthFrame2Mat(final DepthFrame frame) {
    byte[] dataBuffer = new byte[frame.getDataSize()];
    frame.getData(dataBuffer);
    ByteBuffer buffer = ByteBuffer.wrap(dataBuffer).order(ByteOrder.LITTLE_ENDIAN).asReadOnlyBuffer().get(dataBuffer);
    Log.d(TAG, String.format("DepthFrame2Mat: w: %s h: %s capacity: %s remaining %s framedatasize: %s databufferlen: %s framedepth: %s type: %s ",
            frame.getWidth(), frame.getHeight(), buffer.capacity(), buffer.remaining(), frame.getDataSize(), dataBuffer.length, frame.getBitsPerPixel(), frame.getProfile().getFormat()));
    return new Mat(frame.getHeight(), frame.getWidth(), CV_16UC1, buffer);
}
But now I keep getting the error E/cv::error(): OpenCV(4.5.5) Error: Assertion failed (total() == 0 || data != NULL) in Mat, file /build/master_pack-android/opencv/modules/core/src/matrix.cpp, line 428
Using the log command seen in the function, I am checking whether the data is empty or null, but it is not. Moreover, the DepthFrame bit depth and type seem to be correct, too:
D/CvHelpers: DepthFrame2Mat: w: 640 h: 480 capacity: 614400 remaining 0 framedatasize: 614400 databufferlen: 614400 framedepth: 16 type: Z16
What could be the reason for this error? Is there a better way to handle this conversion?
Note: I have checked SO questions such as this and examples on the web; however, all of them are in C++. I don't want to add JNI support just for creating a Mat.
Even though it is not directly an OpenCV API solution, converting the byte array to a short array in Java seems to work:
public static Mat DepthFrame2Mat(final DepthFrame frame) {
    Mat frameMat = new Mat(frame.getHeight(), frame.getWidth(), CV_16UC1);
    final int bufferSize = (int) (frameMat.total() * frameMat.elemSize());
    byte[] dataBuffer = new byte[bufferSize];
    short[] s = new short[dataBuffer.length / 2];
    frame.getData(dataBuffer);
    ByteBuffer.wrap(dataBuffer).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(s);
    frameMat.put(0, 0, s);
    return frameMat;
}
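A likely explanation for the assertion above, for what it's worth: the Mat(rows, cols, type, ByteBuffer) constructor reads the buffer's native address, and a heap buffer produced by ByteBuffer.wrap() has none, so the native side sees data == NULL. Under that assumption, a direct buffer should also work; a sketch (note the Mat references the buffer's memory rather than copying it, so keep the buffer alive as long as the Mat):
public static Mat DepthFrame2MatDirect(final DepthFrame frame) {
    byte[] dataBuffer = new byte[frame.getDataSize()];
    frame.getData(dataBuffer);
    // A direct buffer has a native address the Mat constructor can use
    ByteBuffer direct = ByteBuffer.allocateDirect(dataBuffer.length)
            .order(ByteOrder.LITTLE_ENDIAN);
    direct.put(dataBuffer);
    direct.rewind();
    return new Mat(frame.getHeight(), frame.getWidth(), CV_16UC1, direct);
}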

Writing ".shape" equivalent in Java

I am rewriting Python code in Java (Android) and I don't know how to reproduce the .shape attribute of an image.
Python:
def getProcessedImage(srcImage):
    image = copy.copy(srcImage)
    print ("image.shape"), image.shape
    r = 600.0 / image.shape[1]
    dim = (600, int(image.shape[0] * r))
    print ("dimension ="), dim
Java:
public Mat getProcessedImage(srcImage) {
    Mat image = new Mat(srcImage);
    srcImage.copyTo(image);
    Object[] truple = new Object[Array.getLength(srcImage)] // Got stuck here and don't even think it's correct
    // Mat processedImage = .......
    return processedImage;
}
The equivalent in Java of Python's shape is srcImage.size().width and srcImage.size().height (equivalently srcImage.cols() and srcImage.rows()); the channel count, shape[2] in Python, comes from srcImage.channels().
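As a rough sketch, the Python snippet above then ports like this (the final resize is an assumption, since the Python excerpt only computes dim without showing how it is used):
public Mat getProcessedImage(Mat srcImage) {
    Mat image = new Mat();
    srcImage.copyTo(image);
    // image.shape[1] is the width (cols), image.shape[0] the height (rows)
    double r = 600.0 / image.cols();
    Size dim = new Size(600, (int) (image.rows() * r));
    Mat processedImage = new Mat();
    Imgproc.resize(image, processedImage, dim); // hypothetical follow-up step
    return processedImage;
}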

Is there any way to make image compression and saving faster on Android?

The situation
I need to show a 200-350 frame animation in my application. The images have a 500x300-ish resolution. If the user wants to share the animation, I have to convert it to video. For the conversion I am using an ffmpeg command:
ffmpeg -y -r 1 -i /sdcard/videokit/pic00%d.jpg -i /sdcard/videokit/in.mp3 -strict experimental -ar 44100 -ac 2 -ab 256k -b 2097152 -ar 22050 -vcodec mpeg4 -b 2097152 -s 320x240 /sdcard/videokit/out.mp4
To convert images to video, ffmpeg wants actual files, not Bitmap or byte[].
Problem
Compressing bitmaps to image files takes too much time. Converting 210 images takes about 1 minute on an average device (HTC One M7). Converting the image files to mp4 takes about 15 seconds on the same device. Altogether the user has to wait about 1.5 minutes.
What I have tried
I changed the compression format from PNG to JPEG (the 1.5 minute result is achieved with JPEG compression at quality=80; with PNG it takes about 2-2.5 minutes) - success.
I tried to find out how to pass byte[] or Bitmap to ffmpeg - no success.
QUESTION
Is there any way (a library, even a native one) to make the saving process faster?
Is there any way to pass byte[] or Bitmap objects (I mean a PNG file decompressed to an Android Bitmap class object) to ffmpeg's video-creating method?
Is there any other working library which will create an mp4 (or any format supported by the main social networks) from byte[] or Bitmap objects in about 30 seconds (for 200 frames)?
You can convert a Bitmap (or byte[]) to YUV format quickly using RenderScript (see https://stackoverflow.com/a/39877029/192373). You can pass these YUV frames to the ffmpeg library (as halfelf suggests), or use the built-in native MediaCodec, which uses dedicated hardware on most devices (but its compression options are less flexible than all-software ffmpeg).
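For reference, a minimal sketch of the MediaCodec route (API 21+). Names and parameters are illustrative assumptions: it presumes the YUV420 frames have already been produced (e.g. by the RenderScript conversion) and elides the output-drain/MediaMuxer side that writes the .mp4 container:
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import java.nio.ByteBuffer;

public class AnimationEncoder {
    public static MediaCodec startEncoder(int width, int height) throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 25);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        encoder.start();
        return encoder;
    }

    public static void queueFrame(MediaCodec encoder, byte[] yuvFrame, long ptsUs) {
        int inIndex = encoder.dequeueInputBuffer(10_000);
        if (inIndex >= 0) {
            ByteBuffer in = encoder.getInputBuffer(inIndex);
            in.clear();
            in.put(yuvFrame); // byte[] from the RenderScript conversion
            encoder.queueInputBuffer(inIndex, 0, yuvFrame.length, ptsUs, 0);
        }
        // The matching dequeueOutputBuffer() results go to a MediaMuxer
        // that writes the .mp4 file.
    }
}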
There are two steps slowing us down: compressing images to PNG/JPG and writing them to disk. Both can be skipped if we code directly against the ffmpeg libs instead of calling the ffmpeg command. (There are other improvements too, such as GPU encoding and multithreading, but those are much more complicated.)
Some approaches to code this:
Only use the C/C++ NDK for Android programming. FFmpeg will happily work, but I guess that's not an option here.
Build it from scratch with Java JNI. Not much experience here; I only know that this can link Java to C/C++ libs.
Use a Java wrapper. Luckily I found javacpp-presets. (There are others too, but this one is good enough and up to date.)
This library includes a good example ported from dranger's famous ffmpeg tutorial, though it is a demuxing one.
We can try to write a muxing one, following ffmpeg's muxing.c example:
import java.io.*;

import org.bytedeco.javacpp.*;
import static org.bytedeco.javacpp.avcodec.*;
import static org.bytedeco.javacpp.avformat.*;
import static org.bytedeco.javacpp.avutil.*;
import static org.bytedeco.javacpp.swscale.*;

public class Muxer {
    public class OutputStream {
        public AVStream Stream;
        public AVCodecContext Ctx;
        public AVFrame Frame;
        public SwsContext SwsCtx;

        public void setStream(AVStream s) {
            this.Stream = s;
        }

        public AVStream getStream() {
            return this.Stream;
        }

        public void setCodecCtx(AVCodecContext c) {
            this.Ctx = c;
        }

        public AVCodecContext getCodecCtx() {
            return this.Ctx;
        }

        public void setFrame(AVFrame f) {
            this.Frame = f;
        }

        public AVFrame getFrame() {
            return this.Frame;
        }

        public OutputStream() {
            Stream = null;
            Ctx = null;
            Frame = null;
            SwsCtx = null;
        }
    }

    public static void main(String[] args) throws IOException {
        Muxer t = new Muxer();
        OutputStream VideoSt = t.new OutputStream();
        AVOutputFormat Fmt = null;
        AVFormatContext FmtCtx = new AVFormatContext(null);
        AVCodec VideoCodec = null;
        AVDictionary Opt = null;
        SwsContext SwsCtx = null;
        AVPacket Pkt = new AVPacket();
        int GotOutput;
        int InLineSize[] = new int[1];
        String FilePath = "/path/xxx.mp4";

        avformat_alloc_output_context2(FmtCtx, null, null, FilePath);
        Fmt = FmtCtx.oformat();

        AVCodec codec = avcodec_find_encoder_by_name("libx264");
        av_format_set_video_codec(FmtCtx, codec);
        VideoCodec = avcodec_find_encoder(Fmt.video_codec());
        VideoSt.setStream(avformat_new_stream(FmtCtx, null));
        AVStream stream = VideoSt.getStream();
        VideoSt.getStream().id(FmtCtx.nb_streams() - 1);
        VideoSt.setCodecCtx(avcodec_alloc_context3(VideoCodec));
        VideoSt.getCodecCtx().codec_id(Fmt.video_codec());
        VideoSt.getCodecCtx().bit_rate(5120000);
        VideoSt.getCodecCtx().width(1920);
        VideoSt.getCodecCtx().height(1080);
        AVRational fps = new AVRational();
        fps.den(25);
        fps.num(1);
        VideoSt.getStream().time_base(fps);
        VideoSt.getCodecCtx().time_base(fps);
        VideoSt.getCodecCtx().gop_size(10);
        VideoSt.getCodecCtx().max_b_frames(1); // allow up to 1 B-frame
        VideoSt.getCodecCtx().pix_fmt(AV_PIX_FMT_YUV420P);
        if ((FmtCtx.oformat().flags() & AVFMT_GLOBALHEADER) != 0)
            VideoSt.getCodecCtx().flags(VideoSt.getCodecCtx().flags() | AV_CODEC_FLAG_GLOBAL_HEADER);
        avcodec_open2(VideoSt.getCodecCtx(), VideoCodec, Opt);

        VideoSt.setFrame(av_frame_alloc());
        VideoSt.getFrame().format(VideoSt.getCodecCtx().pix_fmt());
        VideoSt.getFrame().width(1920);
        VideoSt.getFrame().height(1080);
        av_frame_get_buffer(VideoSt.getFrame(), 32);

        // pts is an unsigned 64-bit value in C, so use at least a long here
        long nextpts = 0;

        av_dump_format(FmtCtx, 0, FilePath, 1);
        avio_open(FmtCtx.pb(), FilePath, AVIO_FLAG_WRITE);
        avformat_write_header(FmtCtx, Opt);

        int[] got_output = { 0 };
        while (still_has_input) {
            // convert or directly copy your byte[] into VideoSt.Frame here.
            // The AVFrame structure has two important data fields:
            // AVFrame.data (uint8_t*[]) and AVFrame.linesize (int[]).
            // data holds the pixel values and linesize is the size of each picture line.
            // For example, if the format is RGB, linesize has 3 valid values
            // equaling image_width * 3, and data points to three arrays of RGB values.
            // But we will likely need swscale() here to convert the pixel format,
            // from RGB to yuv420p (or another yuv-family format).
            Pkt = new AVPacket();
            av_init_packet(Pkt);

            VideoSt.getFrame().pts(nextpts++);
            avcodec_encode_video2(VideoSt.getCodecCtx(), Pkt, VideoSt.getFrame(), got_output);

            av_packet_rescale_ts(Pkt, VideoSt.getCodecCtx().time_base(), VideoSt.getStream().time_base());
            Pkt.stream_index(VideoSt.getStream().index());
            av_interleaved_write_frame(FmtCtx, Pkt);
            av_packet_unref(Pkt);
        }

        // get delayed frames
        for (got_output[0] = 1; got_output[0] != 0;) {
            Pkt = new AVPacket();
            av_init_packet(Pkt);

            avcodec_encode_video2(VideoSt.getCodecCtx(), Pkt, null, got_output);
            if (got_output[0] > 0) {
                av_packet_rescale_ts(Pkt, VideoSt.getCodecCtx().time_base(), VideoSt.getStream().time_base());
                Pkt.stream_index(VideoSt.getStream().index());
                av_interleaved_write_frame(FmtCtx, Pkt);
            }
            av_packet_unref(Pkt);
        }

        // write the trailer before closing; mp4 needs this to be playable
        av_write_trailer(FmtCtx);

        // free C structs
        avcodec_free_context(VideoSt.getCodecCtx());
        av_frame_free(VideoSt.getFrame());
        avio_closep(FmtCtx.pb());
        avformat_free_context(FmtCtx);
    }
}
For porting C code, normally several changes need to be made:
Most of the work is replacing every C struct member access (. and ->) with Java getters/setters.
There are also many C address-of operators (&); just delete them.
Change the C NULL macro and the C++ nullptr pointer to the Java null object.
C code often checks the bool result of an int type in if, for, while; in Java you have to compare it with 0 explicitly.
There may be other API changes, but referring to the javacpp-presets docs will cover them.
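For example, a couple of lines from muxing.c port like this (an illustrative fragment; ctx, codec, and opt stand in for the variables above):
// C:  c->bit_rate = 400000;                   -- struct member write
// C:  if (avcodec_open2(c, codec, &opt) < 0)  -- address-of, int used as bool
static void openCodec(AVCodecContext ctx, AVCodec codec, AVDictionary opt) throws IOException {
    ctx.bit_rate(400000);                      // setter replaces the member write
    if (avcodec_open2(ctx, codec, opt) < 0)    // & dropped, explicit comparison with 0
        throw new IOException("could not open codec");
}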
Note that I omitted all error handling here; it will be needed in real development/production.
Really, I don't want to advertise, but PKZIP and its SDK may be a good solution; they say PKZIP compresses files to 95%.
The Smartcrypt SDK is available in all major programming languages, including C++, Java, and C#, and can be used to encrypt both structured and unstructured data. Changes to existing applications typically consist of two or three lines of code.

Setting the DPI meta-information for a jpeg file in Android

In an Android application that I am writing, I have a document image (jpeg) that is uploaded to a server that recognizes the document and sends me back the relevant details. While all that is well and fine, the code on the server expects me to set the "Image DPI" meta-information, as shown in the file inspector on a Mac (screenshot omitted).
The "Image DPI" shown in that screenshot isn't exactly the right value; I have written a method that calculates the DPI value myself. How do I set this calculated DPI value in the meta-information of my jpeg document? I have been able to set this particular meta-information in the application's iOS counterpart, but in Android, two days of relentless trying have left my errand futile.
I do know about ExifInterface, and I have had no luck with its setAttribute(String key, String value) method. (What should be the key? What should be the value? How do I set the unit? SHOULD I set the unit?)
I have also seen Java-related solutions that suggest the use of the javax.imageio.* package, which is simply unavailable on Android.
Has anyone faced issues like this? How do I go about solving it?
To edit the value, you first need to create a byte[] array that stores the Bitmap.compress() output. Here is a part of my code where I do just that (input being the source Bitmap):
ByteArrayOutputStream uploadImageByteArray = new ByteArrayOutputStream();
input.compress(Bitmap.CompressFormat.JPEG, 100, uploadImageByteArray);
byte[] uploadImageData = uploadImageByteArray.toByteArray();
Based on the JFIF structure, you need to edit indexes 13 through 17 of the byte array: index 13 specifies the density type, indexes 14 and 15 the X resolution, and indexes 16 and 17 the Y resolution. I got the DPI using the following method:
private long getDPIinFloat(int width, int height) {
    return (long) Math.sqrt(width * width + height * height) / 4;
}
After I got the DPI, I had to do some bit manipulation like so (for example, dpiInFloat = 300 = 0x012C gives firstPart = 1 and lastPart = 44):
long firstPart = dpiInFloat >> 8;
if (GlobalState.debugModeOn) {
    Log.d(TAG, "First Part: " + firstPart);
}
long lastPart = dpiInFloat & 0xff;
if (GlobalState.debugModeOn) {
    Log.d(TAG, "Last Part: " + lastPart);
}
And then, manipulate the byte information like so:
uploadImageData[13] = 1;
uploadImageData[14] = (byte) firstPart;
uploadImageData[15] = (byte) lastPart;
uploadImageData[16] = (byte) firstPart;
uploadImageData[17] = (byte) lastPart;
//Upload Image data to the server
This way, I was able to set the DPI information in the metadata.
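For completeness, a hedged alternative, in case the server reads the EXIF resolution tags rather than the JFIF header (an assumption): the androidx ExifInterface exposes these tags directly. The values are EXIF rationals written as strings, and resolution unit 2 means inches:
import androidx.exifinterface.media.ExifInterface;

ExifInterface exif = new ExifInterface(jpegFilePath); // path to the saved jpeg
exif.setAttribute(ExifInterface.TAG_X_RESOLUTION, "300/1"); // 300 dpi as a rational
exif.setAttribute(ExifInterface.TAG_Y_RESOLUTION, "300/1");
exif.setAttribute(ExifInterface.TAG_RESOLUTION_UNIT, "2");  // 2 = inches
exif.saveAttributes();
Note also that the fixed offsets 13-17 above assume the JPEG begins with a standard JFIF APP0 segment, which Bitmap.compress() output normally does.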

I can't get any output from the sample face detection and recognition code using JavaCV on Eclipse (Juno)

I was practicing with some face recognition and detection code using Java and JavaCV on Eclipse Juno. The thing is, I was trying to run the sample code below, but I can't get the expected result or output. The sample code is as follows:
import com.googlecode.javacpp.Loader;
import com.googlecode.javacv.*;
import com.googlecode.javacv.cpp.*;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_calib3d.*;
import static com.googlecode.javacv.cpp.opencv_objdetect.*;

public class Demo {
    public static void main(String[] args) throws Exception {
        String classifierName = null;
        if (args.length > 0) {
            classifierName = args[0];
        } else {
            System.err.println("C://opencv/data/haarcascades\"haarcascade_frontalface_alt.xml\".");
            System.exit(1);
        }

        // Preload the opencv_objdetect module to work around a known bug.
        Loader.load(opencv_objdetect.class);

        // We can "cast" Pointer objects by instantiating a new object of the desired class.
        CvHaarClassifierCascade classifier = new CvHaarClassifierCascade(cvLoad(classifierName));
        if (classifier.isNull()) {
            System.err.println("Error loading classifier file \"" + classifierName + "\".");
            System.exit(1);
        }

        // CanvasFrame is a JFrame containing a Canvas component, which is hardware accelerated.
        // It can also switch into full-screen mode when called with a screenNumber.
        CanvasFrame frame = new CanvasFrame("Some Title");

        // OpenCVFrameGrabber uses opencv_highgui, but other more versatile FrameGrabbers
        // include DC1394FrameGrabber, FlyCaptureFrameGrabber, OpenKinectFrameGrabber,
        // PS3EyeFrameGrabber, VideoInputFrameGrabber, and FFmpegFrameGrabber.
        FrameGrabber grabber = new OpenCVFrameGrabber(0);
        grabber.start();

        // FAQ about IplImage:
        // - For custom raw processing of data, getByteBuffer() returns an NIO direct
        //   buffer wrapped around the memory pointed by imageData.
        // - To get a BufferedImage from an IplImage, you may call getBufferedImage().
        // - The createFrom() factory method can construct an IplImage from a BufferedImage.
        // - There are also a few copy*() methods for BufferedImage<->IplImage data transfers.
        IplImage grabbedImage = grabber.grab();
        int width = grabbedImage.width();
        int height = grabbedImage.height();
        IplImage grayImage = IplImage.create(width, height, IPL_DEPTH_8U, 1);
        IplImage rotatedImage = grabbedImage.clone();

        // Let's create some random 3D rotation...
        CvMat randomR = CvMat.create(3, 3), randomAxis = CvMat.create(3, 1);
        // We can easily and efficiently access the elements of CvMat objects
        // with the set of get() and put() methods.
        randomAxis.put((Math.random() - 0.5) / 4, (Math.random() - 0.5) / 4, (Math.random() - 0.5) / 4);
        cvRodrigues2(randomAxis, randomR, null);
        double f = (width + height) / 2.0;
        randomR.put(0, 2, randomR.get(0, 2) * f);
        randomR.put(1, 2, randomR.get(1, 2) * f);
        randomR.put(2, 0, randomR.get(2, 0) / f);
        randomR.put(2, 1, randomR.get(2, 1) / f);
        System.out.println(randomR);

        // Objects allocated with a create*() or clone() factory method are automatically released
        // by the garbage collector, but may still be explicitly released by calling release().
        // You shall NOT call cvReleaseImage(), cvReleaseMemStorage(), etc.
        // on objects allocated this way.
        CvMemStorage storage = CvMemStorage.create();

        // We can allocate native arrays using constructors taking an integer as argument.
        CvPoint hatPoints = new CvPoint(3);

        // Again, FFmpegFrameRecorder also exists as a more versatile alternative.
        FrameRecorder recorder = new OpenCVFrameRecorder("output.avi", width, height);
        recorder.start();

        while (frame.isVisible() && (grabbedImage = grabber.grab()) != null) {
            cvClearMemStorage(storage);

            // Let's try to detect some faces! but we need a grayscale image...
            cvCvtColor(grabbedImage, grayImage, CV_BGR2GRAY);
            CvSeq faces = cvHaarDetectObjects(grayImage, classifier, storage,
                    1.1, 3, CV_HAAR_DO_CANNY_PRUNING);
            int total = faces.total();
            for (int i = 0; i < total; i++) {
                CvRect r = new CvRect(cvGetSeqElem(faces, i));
                int x = r.x(), y = r.y(), w = r.width(), h = r.height();
                cvRectangle(grabbedImage, cvPoint(x, y), cvPoint(x + w, y + h), CvScalar.RED, 1, CV_AA, 0);
                // To access the elements of a native array, use the position() method.
                hatPoints.position(0).x(x - w / 10).y(y - h / 10);
                hatPoints.position(1).x(x + w * 11 / 10).y(y - h / 10);
                hatPoints.position(2).x(x + w / 2).y(y - h / 2);
                cvFillConvexPoly(grabbedImage, hatPoints.position(0), 3, CvScalar.GREEN, CV_AA, 0);
            }

            // Let's find some contours! but first some thresholding...
            cvThreshold(grayImage, grayImage, 64, 255, CV_THRESH_BINARY);

            // To check if an output argument is null we may call either isNull() or equals(null).
            CvSeq contour = new CvSeq(null);
            cvFindContours(grayImage, storage, contour, Loader.sizeof(CvContour.class),
                    CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            while (contour != null && !contour.isNull()) {
                if (contour.elem_size() > 0) {
                    CvSeq points = cvApproxPoly(contour, Loader.sizeof(CvContour.class),
                            storage, CV_POLY_APPROX_DP, cvContourPerimeter(contour) * 0.02, 0);
                    cvDrawContours(grabbedImage, points, CvScalar.BLUE, CvScalar.BLUE, -1, 1, CV_AA);
                }
                contour = contour.h_next();
            }

            cvWarpPerspective(grabbedImage, rotatedImage, randomR);

            frame.showImage(rotatedImage);
            recorder.record(rotatedImage);
        }
        recorder.stop();
        grabber.stop();
        frame.dispose();
    }
}
The output I am getting is a line printed in red, like:
C://opencv/data/haarcascades"haarcascade_frontalface_alt.xml".
Can anybody show me what I missed?
I am new to image processing, so can anyone point me to good tutorials and sample source code that could teach me how to master all the built-in functions in JavaCV and their functionality? I am working on my final year project and really need your hand on this one.
With lots of respect
Sisay
haarcascade_frontalface_alt.xml is a trained classifier for detecting frontal faces. It is usually present in the opencv_installation_folder/opencv/data/haarcascades folder. You can give the direct path to your classifier instead of taking it from the command line, as in:
classifierName = "opencv_installation_folder/opencv/data/haarcascades/haarcascade_frontalface_alt.xml";
That demo expects you to give it the cascade file as an argument; it just stops if it does not get one.
Maybe you want to change the beginning like this:
public class Demo {
    public static void main(String[] args) throws Exception {
        String classifierName = "C:/opencv/data/haarcascades/haarcascade_frontalface_alt.xml";
        if (args.length > 0) {
            classifierName = args[0];
        }
That way it takes an argument from the command line if present; otherwise it uses the default value.
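If you keep the original argument-only version instead, pass the cascade path as the program argument when you run it, e.g. java Demo C:/opencv/data/haarcascades/haarcascade_frontalface_alt.xml (in Eclipse: Run Configurations > Arguments tab); the exact path is an assumption for a typical Windows OpenCV install.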
