OpenCV library in Tomcat (8.5.32) server unable to execute - Java

I am facing an issue while setting up image-processing code. Despite making all the code changes and trying different approaches, the issue persists.
Libraries used: OpenCV 3.4.2
Jars used: opencv-3.4.2-0, tess4j-3.4.8
Lines added in pom.xml:
<!-- https://mvnrepository.com/artifact/org.openpnp/opencv -->
<dependency>
    <groupId>org.openpnp</groupId>
    <artifactId>opencv</artifactId>
    <version>3.4.2-0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/net.sourceforge.tess4j/tess4j -->
<dependency>
    <groupId>net.sourceforge.tess4j</groupId>
    <artifactId>tess4j</artifactId>
    <version>3.4.8</version>
</dependency>
Steps for OpenCV installation:
Download opencv.exe from the official site.
Run opencv.exe; it will create an opencv folder.
The OpenCV library is then available for use in Eclipse.
Steps for Tesseract installation:
Download the tess4j.zip file from the official link.
Extract the zip folder after the download.
Provide the path to the tess4j folder.
These are the steps we performed for the setup in Eclipse:
We added the native library by providing the path to the OpenCV library in the build path settings.
We downloaded Tesseract for image reading.
We provided the path to Tesseract in the code.
We used System.loadLibrary(Core.NATIVE_LIBRARY_NAME) and OpenCV.loadLocally() to load the library.
Then we exported a WAR for deployment.
There have been no changes or setup in Apache Tomcat.
It seems that to load the libraries in Tomcat we have to provide some setup here:
For the new code we used a static library-loading class (as suggested in solutions on Stack Overflow).
Here System.loadLibrary is not working.
We had to use System.load with a hard-coded path, which results in an internal error.
We call System.load twice in the static class; the first call gives a std "bad allocation" error.
There are two paths in the OpenCV folder.
This is the first one:
System.load("C:\\Users\\Downloads\\opencv\\build\\bin\\opencv_java342.dll");
The second one gives an assertion error, depending on which call is placed first.
This is the second one:
System.load("C:\\User\\Downloads\\opencv\\build\\java\\x64\\opencv_java3412.dll");
The code runs until about midway and then exits; so far execution has never reached the Tesseract part.
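What we are essentially trying to reach is a loader that runs exactly once per JVM and does not depend on a hard-coded DLL path, since, as far as we understand, a JNI library can only be loaded by one classloader per JVM (reloading the web app, or loading it from a second classloader, ends in UnsatisfiedLinkError). A minimal sketch of that idea, assuming the org.openpnp artifact (this is a sketch, not the class we currently deploy):

import nu.pattern.OpenCV;

public class OpenCvLoader {

    private static volatile boolean loaded = false;

    // Load the OpenCV native library exactly once per JVM.
    // loadLocally() extracts the native library bundled in the
    // org.openpnp:opencv jar and calls System.load on it, so no
    // hard-coded Windows path is needed.
    public static synchronized void ensureLoaded() {
        if (loaded) {
            return;
        }
        OpenCV.loadLocally();
        loaded = true;
    }
}

readDataFromImage would then call OpenCvLoader.ensureLoaded() instead of System.loadLibrary / System.load.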
Here is the full code:
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.TimeUnit;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import org.apache.commons.logging.impl.Log4JLogger;
import org.apache.log4j.Logger;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.opencv.core.Core;
import org.opencv.core.CvException;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import net.sourceforge.tess4j.Tesseract;
import nu.pattern.OpenCV;
public class ReadImageBox {
// log4j logger used by the methods below
private static final Logger logger = Logger.getLogger(ReadImageBox.class);
public String readDataFromImage(String imageToReadPath, String tesseractPath)
{
String result = "";
try {
String i = Core.NATIVE_LIBRARY_NAME;
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
logger.info("Img to read "+imageToReadPath);
String imagePath =imageToReadPath; // bufferNameOfImagePath = "";
logger.info(imagePath);
/*
* The class Mat represents an n-dimensional dense numerical single-channel or
* multi-channel array. It can be used to store real or complex-valued vectors
* and matrices, grayscale or color images, voxel volumes, vector fields, point
* clouds, tensors, histograms (though, very high-dimensional histograms may be
* better stored in a SparseMat ).
*/
logger.info("imagepath::"+imagePath);
OpenCV.loadLocally();
logger.info("imagepath::"+imagePath);
//logger.info("Library Information"+Core.getBuildInformation());
logger.info("imagepath::"+imagePath);
Mat source = Imgcodecs.imread(imagePath);
logger.info("Source image "+source);
String directoryPath = imagePath.substring(0,imagePath.lastIndexOf('/'));
logger.info("Going for Image Processing :" + directoryPath);
// calling image processing here to process the data from it
result = updateImage(100,20,10,3,3,2,source, directoryPath,tesseractPath);
logger.info("Data read "+result);
return result;
}
catch (UnsatisfiedLinkError error) {
// Output expected UnsatisfiedLinkErrors.
logger.error(error);
}
catch (Exception exception)
{
logger.error(exception);
}
return result;
}
public static String updateImage(int boxSize, int horizontalRemoval, int verticalRemoval, int gaussianBlur,
int denoisingClosing, int denoisingOpening, Mat source, String tempDirectoryPath,String tesseractPath) throws Exception{
// Tesseract Object
logger.info("Tesseract Path :"+tesseractPath);
Tesseract tesseract = new Tesseract();
tesseract.setDatapath(tesseractPath);
// Creating the empty destination matrix for further processing
Mat grayScaleImage = new Mat();
Mat gaussianBlurImage = new Mat();
Mat thresholdImage = new Mat();
Mat morph = new Mat();
Mat morphAfterOpreation = new Mat();
Mat dilate = new Mat();
Mat hierarchy = new Mat();
logger.info("Image type"+source.type());
// Converting the image to gray scale and saving it in the grayScaleImage matrix
Imgproc.cvtColor(source, grayScaleImage, Imgproc.COLOR_RGB2GRAY);
//Imgproc.cvtColor(source, grayScaleImage, 0);
// Applying Gaussian blur
logger.info("source image "+source);
Imgproc.GaussianBlur(grayScaleImage, gaussianBlurImage, new org.opencv.core.Size(gaussianBlur, gaussianBlur),
0);
// OTSU threshold
Imgproc.threshold(gaussianBlurImage, thresholdImage, 0, 255, Imgproc.THRESH_OTSU | Imgproc.THRESH_BINARY_INV);
logger.info("Threshold image "+gaussianBlur);
// remove the lines of any table inside the invoice
Mat horizontal = thresholdImage.clone();
Mat vertical = thresholdImage.clone();
int horizontal_size = horizontal.cols() / 30;
if(horizontal_size%2==0)
horizontal_size+=1;
// showWaitDestroy("Horizontal Lines Detected", horizontal);
Mat horizontalStructure = Imgproc.getStructuringElement(Imgproc.MORPH_RECT,
new org.opencv.core.Size(horizontal_size, 1));
Imgproc.erode(horizontal, horizontal, horizontalStructure);
Imgproc.dilate(horizontal, horizontal, horizontalStructure);
int vertical_size = vertical.rows() / 30;
if(vertical_size%2==0)
vertical_size+=1;
// Create structure element for extracting vertical lines through morphology
// operations
Mat verticalStructure = Imgproc.getStructuringElement(Imgproc.MORPH_RECT,
new org.opencv.core.Size(1, vertical_size));
// Apply morphology operations
Imgproc.erode(vertical, vertical, verticalStructure);
Imgproc.dilate(vertical, vertical, verticalStructure);
Core.absdiff(thresholdImage, horizontal, thresholdImage);
Core.absdiff(thresholdImage, vertical, thresholdImage);
logger.info("Vertical Structure "+verticalStructure);
Mat newImageFortest = thresholdImage;
logger.info("Threshold image "+thresholdImage);
// applying Closing operation
Imgproc.morphologyEx(thresholdImage, morph, Imgproc.MORPH_CLOSE, Imgproc.getStructuringElement(
Imgproc.MORPH_RECT, new Size(denoisingClosing, denoisingClosing)));
logger.info("Morph image "+morph);
// applying Opening operation
Imgproc.morphologyEx(morph, morphAfterOpreation, Imgproc.MORPH_OPEN, Imgproc.getStructuringElement(
Imgproc.MORPH_RECT, new Size(denoisingOpening, denoisingOpening)));
logger.info("Morph After operation image "+morphAfterOpreation);
// Applying dilation on the threshold image to create bounding box edges
Imgproc.dilate(morphAfterOpreation, dilate,
Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(boxSize, boxSize)));
logger.info("Dilate image "+dilate);
// creating string buffer object
String text = "";
try
{
// finding contours
List<MatOfPoint> contourList = new ArrayList<MatOfPoint>(); // A list to store all the contours
// finding contours
Imgproc.findContours(dilate, contourList, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);
logger.info("Contour List "+contourList);
// Creating a copy of the image
//Mat copyOfImage = source;
Mat copyOfImage = newImageFortest;
logger.info("Copy of Image "+copyOfImage);
// Rectangle for cropping
Rect rectCrop = new Rect();
logger.info("Rectangle Crop New Object "+rectCrop);
// loop through the identified contours and crop them from the image to feed
// into Tesseract-OCR
for (int i = 0; i < contourList.size(); i++) {
// getting bound rectangle
rectCrop = Imgproc.boundingRect(contourList.get(i));
logger.info("Rectangle cropped"+rectCrop);
// cropping Image
Mat croppedImage = copyOfImage.submat(rectCrop.y, rectCrop.y + rectCrop.height, rectCrop.x,
rectCrop.x + rectCrop.width);
// writing cropped image to disk
logger.info("Path to write cropped image "+ tempDirectoryPath);
String writePath = tempDirectoryPath + "/croppedImg.png";
logger.info("writepath"+writePath);
// imagepath = imagepath.
Imgcodecs.imwrite(writePath, croppedImage);
try {
// extracting text from cropped image, goes to the image, extracts text and adds
// them to stringBuffer
logger.info("Exact Path where Image was written with Name "+ writePath);
String textExtracted = (tesseract
.doOCR(new File(writePath)));
// Adding separator
textExtracted = textExtracted + "_SEPERATOR_";
logger.info("Text Extracted "+textExtracted);
textExtracted = textExtracted + "\n";
text = textExtracted + text;
logger.info("Text Extracted Completely"+text);
// System.out.println("Andar Ka Text => " + text.toString());
} catch (Exception exception) {
logger.error(exception);
}
writePath = "";
logger.info("Making write Path empty for next Image "+ writePath);
}
}
catch(CvException ae)
{
logger.error("cv",ae);
}
catch(UnsatisfiedLinkError ae)
{
logger.error("unsatdif",ae);
}
catch(Exception ae)
{
logger.error("general",ae);
}
// converting into string
return text.toUpperCase();
}
// convert Mat to Image for GUI output
public static Image toBufferedImage(Mat m) {
// getting BYTE_GRAY formed image
int type = BufferedImage.TYPE_BYTE_GRAY;
if (m.channels() > 1) {
type = BufferedImage.TYPE_3BYTE_BGR;
}
int bufferSize = m.channels() * m.cols() * m.rows();
byte[] b = new byte[bufferSize];
m.get(0, 0, b); // get all the pixels
// creating buffered Image
BufferedImage image = new BufferedImage(m.cols(), m.rows(), type);
final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
System.arraycopy(b, 0, targetPixels, 0, b.length);
// returning Image
return image;
}
// method to display Mat format images using the GUI
private static void showWaitDestroy(String winname, Mat img) {
HighGui.imshow(winname, img);
HighGui.moveWindow(winname, 500, 0);
HighGui.waitKey(0);
HighGui.destroyWindow(winname);
}
}

Related

Java - extract text from pdf from selected area to txt

The idea is as follows:
the user selects a PDF file, the file is converted into an image, and that image is displayed in the application.
On the image the user can select the positions to read from the PDF file; when the selection is finished, the program reads the original PDF in the background and stores the text in a txt file.
It is important that the image rendered from the PDF file is the same size as the PDF itself.
The following code converts the PDF to an image. I use pdfrenderer-0.9.1.jar.
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import javax.imageio.ImageIO;
import com.sun.pdfview.PDFFile;
import com.sun.pdfview.PDFPage;
public class Pdf2Image {
public static void main(String[] args) {
File file = new File("E:\\invoice-template-1.pdf");
RandomAccessFile raf;
try {
raf = new RandomAccessFile(file, "r");
FileChannel channel = raf.getChannel();
ByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
PDFFile pdffile = new PDFFile(buf);
// draw the first page to an image
int num=pdffile.getNumPages();
for(int i=0;i<num;i++)
{
PDFPage page = pdffile.getPage(i);
//get the width and height for the doc at the default zoom
int width=(int)page.getBBox().getWidth();
int height=(int)page.getBBox().getHeight();
Rectangle rect = new Rectangle(0,0,width,height);
int rotation=page.getRotation();
Rectangle rect1=rect;
if(rotation==90 || rotation==270)
rect1=new Rectangle(0,0,rect.height,rect.width);
//generate the image
BufferedImage img = (BufferedImage)page.getImage(
rect.width, rect.height, //width & height
rect1, // clip rect
null, // null for the ImageObserver
true, // fill background with white
true // block until drawing is done
);
ImageIO.write(img, "png", new File("E:/invoice-template-"+i+".png"));
}
}
catch (FileNotFoundException e1) {
System.err.println(e1.getLocalizedMessage());
} catch (IOException e) {
System.err.println(e.getLocalizedMessage());
}
}
}
The image is then displayed to the user in a JavaFX application, in an ImageView component.
Can you help me get the exact position of the mouse when the user selects the portion of the image from which the text should be read in the PDF file?
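What I have in mind is roughly the following sketch (class and field names are just illustrative, this is not working code from my project): a mouse handler on the ImageView that records the press and release coordinates and scales them back to the PDF page size.

import java.awt.Rectangle;
import javafx.scene.image.ImageView;

public class SelectionHelper {

    private double startX, startY;

    public void install(ImageView imageView, double pdfWidth, double pdfHeight) {
        imageView.setOnMousePressed(e -> {
            startX = e.getX();
            startY = e.getY();
        });
        imageView.setOnMouseReleased(e -> {
            // scale from the displayed size back to the PDF page size
            double scaleX = pdfWidth / imageView.getBoundsInLocal().getWidth();
            double scaleY = pdfHeight / imageView.getBoundsInLocal().getHeight();
            int x = (int) (Math.min(startX, e.getX()) * scaleX);
            int y = (int) (Math.min(startY, e.getY()) * scaleY);
            int w = (int) (Math.abs(e.getX() - startX) * scaleX);
            int h = (int) (Math.abs(e.getY() - startY) * scaleY);
            // this rectangle could later be passed to PDFTextStripperByArea.addRegion(...)
            Rectangle region = new Rectangle(x, y, w, h);
            System.out.println("Selected region: " + region);
        });
    }
}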
With the following code I read the PDF file and get text from a fixed position, but I have to enter the position manually :( . I use pdfbox-1.3.1.jar.
I would like to store the positions the user selects on the picture in a list and then read the text from the PDF file at all of these positions.
File file = new File("E:/invoice-template-1.pdf");
PDDocument document = PDDocument.load(file);
PDFTextStripperByArea stripper = new PDFTextStripperByArea();
stripper.setSortByPosition(true);
Rectangle rect1 = new Rectangle(38, 275, 15, 100);
Rectangle rect2 = new Rectangle(54, 275, 40, 100);
stripper.addRegion("row1column1", rect1);
stripper.addRegion("row1column2", rect2);
List<PDPage> pages = document.getDocumentCatalog().getAllPages();
int j = 0;
for (PDPage page : pages) {
stripper.extractRegions(page);
stripper.setSortByPosition(true);
List<String> regions = stripper.getRegions();
for (String region : regions) {
String text = stripper.getTextForRegion(region);
System.out.println("Region: " + region + " on Page " + j);
System.out.println("\tText: \n" + text);
}
j++;
}
For example, in the invoice below I want to select 4 positions to export the text: when the selection is made on the picture, the dimensions are kept in a list, and then the program goes through the list and exports the text from the PDF file at those positions.

OpenCV 3 (Java Binding) : Apply CLAHE to image

I am trying to use the Java bindings of OpenCV to apply a non-global contrast (histogram) optimization to a (color) PNG image, but I cannot get it to work.
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.File;
import javax.imageio.ImageIO;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.CLAHE;
import org.opencv.imgproc.Imgproc;
public class Main {
public static void main( String[] args ) {
try {
System.loadLibrary( Core.NATIVE_LIBRARY_NAME );
// fetch the png
File input = new File("test.png");
BufferedImage buffImage = ImageIO.read(input);
byte[] data = ((DataBufferByte) buffImage.getRaster().getDataBuffer()).getData();
// build MAT for original image
Mat orgImage = new Mat(buffImage.getHeight(),buffImage.getWidth(), CvType.CV_8UC3);
orgImage.put(0, 0, data);
// transform from BGR to Lab
Mat labImage = new Mat(buffImage.getHeight(), buffImage.getWidth(), CvType.CV_8UC4);
Imgproc.cvtColor(orgImage, labImage, Imgproc.COLOR_BGR2Lab);
// apply CLAHE
CLAHE clahe = Imgproc.createCLAHE();
Mat destImage = new Mat(buffImage.getHeight(),buffImage.getWidth(), CvType.CV_8UC4);
clahe.apply(labImage, destImage);
Imgcodecs.imwrite("test_clahe.png", destImage);
} catch (Exception e) {
System.out.println("Error: " + e.getMessage());
}
}
}
I get the exception:
Error: cv::Exception: C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\imgproc\src\clahe.cpp:354: error: (-215) _src.type() == CV_8UC1 || _src.type() == CV_16UC1 in function `anonymous-namespace'::CLAHE_Impl::apply
I guess I need to work with the individual channels, but I cannot figure out how. The code is inspired by a C++ example, but somehow I fail to extract the corresponding layers (I guess I need only the L channel for clahe.apply()).
This example just splits the Lab image and applies CLAHE to the L channel, which is the intensity channel. So just use this code for Java.
List<Mat> channels = new LinkedList<Mat>();
Core.split(labImage, channels);
CLAHE clahe = Imgproc.createCLAHE();
Mat destImage = new Mat(buffImage.getHeight(), buffImage.getWidth(), CvType.CV_8UC4);
clahe.apply(channels.get(0), destImage);
// copy the equalized L channel back into the channel list before merging
destImage.copyTo(channels.get(0));
Core.merge(channels, labImage);
and finally merge the intensity channel back with the other channels. I haven't changed any parameters, as I don't know what your image looks like, but I guess that isn't the problem. Hope it helps!
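One detail the snippet leaves implicit: after Core.merge the result is still in the Lab color space, so it needs to be converted back to BGR before writing it to disk. A small follow-up sketch (my addition, not part of the original answer):

// convert the merged Lab image back to BGR before saving
Mat resultBgr = new Mat();
Imgproc.cvtColor(labImage, resultBgr, Imgproc.COLOR_Lab2BGR);
Imgcodecs.imwrite("test_clahe.png", resultBgr);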

JavaCL - flip input image

I am using JavaCL and I want to rotate the image and save it back to "out.png".
Unfortunately the line of code:
write_imagef(output, (int2){coords.y, coords.x}, pixel );
(the x and y coordinates are swapped) seems to have no effect. Whatever I do, the result is still the original image! Why is the output not affected?
CopyImagesExample.java
import static java.lang.Math.cos;
import static java.lang.Math.sin;
import static org.bridj.Pointer.allocateFloats;
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteOrder;
import javax.imageio.ImageIO;
import org.bridj.Pointer;
import com.nativelibs4java.opencl.CLBuffer;
import com.nativelibs4java.opencl.CLContext;
import com.nativelibs4java.opencl.CLDevice;
import com.nativelibs4java.opencl.CLEvent;
import com.nativelibs4java.opencl.CLImage2D;
import com.nativelibs4java.opencl.CLKernel;
import com.nativelibs4java.opencl.CLMem.Usage;
import com.nativelibs4java.opencl.CLSampler;
import com.nativelibs4java.opencl.CLSampler.AddressingMode;
import com.nativelibs4java.opencl.CLSampler.FilterMode;
import com.nativelibs4java.opencl.CLProgram;
import com.nativelibs4java.opencl.CLQueue;
import com.nativelibs4java.opencl.JavaCL;
import com.nativelibs4java.util.IOUtils;
public class CopyImagesExample {
public static void main(String[] args) throws IOException {
CLContext context = JavaCL.createBestContext();
CLQueue queue = context.createDefaultQueue();
ByteOrder byteOrder = context.getByteOrder();
CLDevice[] devices = context.getDevices();
System.out.println("Devices count " + context.getDeviceCount());
System.out.println(devices[0].getMaxComputeUnits());
System.out.println(devices[0].getOpenCLVersion());
int n = 1024;
Pointer<Float> aPtr = allocateFloats(n).order(byteOrder);
Pointer<Float> bPtr = allocateFloats(n).order(byteOrder);
Pointer<Float> oPtr = allocateFloats(n).order(byteOrder);
for (int i = 0; i < n; i++) {
aPtr.set(i, (float) cos(i));
bPtr.set(i, (float) sin(i));
}
// Create OpenCL input buffers (using the native memory pointers aPtr
// and bPtr) :
BufferedImage img = ImageIO.read(new FileInputStream("images/lena256.png"));
CLImage2D inputImage = context.createImage2D(Usage.Input, img, false);
// Create an OpenCL output buffer :
CLImage2D outImage = context.createImage2D(Usage.Output, img, false);
CLSampler sampler = context.createSampler(false, AddressingMode.ClampToEdge, FilterMode.Nearest);
// Read the program sources and compile them :
String src = IOUtils.readText(CopyImagesExample.class
.getResource("CopyImage.cl"));
CLProgram program = context.createProgram(devices, src);
// Get and call the kernel :
CLKernel nnKernel = program.createKernel("rotate_image");
nnKernel.setArgs(inputImage, outImage, sampler);
CLEvent addEvt = nnKernel.enqueueNDRange(queue, new int[] { img.getWidth()*img.getHeight() });
BufferedImage outPtr = outImage.read(queue, addEvt); // blocks until
// add_floats
// finished
ImageIO.write(outPtr, "png", new File("out.png"));
}
}
CopyImage.cl
__kernel void rotate_image(
__read_only image2d_t input,
__write_only image2d_t output,
sampler_t sampler)
{
// Store each work-item's unique row and column
int2 coords = (int2){get_global_id(0), get_global_id(1)};
float4 pixel = read_imagef(input, sampler, coords);
write_imagef(output, (int2){coords.y, coords.x}, pixel );
}
Input:
Output:
But the desired output is a rotated image (the x and y coordinates of each pixel are swapped).
Quick fix: replace this code:
CLEvent addEvt = nnKernel.enqueueNDRange(queue, new int[] { img.getWidth()*img.getHeight() });
with:
CLEvent addEvt = nnKernel.enqueueNDRange(queue, new int[] { img.getWidth(), img.getHeight() });
The reason is that in the OpenCL code I am using
get_global_id(0)
and
get_global_id(1)
so I need to enqueue a 2D range.

Determine whether an image is grayscale in Java

I'm using something along the lines of this to do a (naive, apparently) check for the color space of JPEG images:
import java.io.*;
import java.awt.color.*;
import java.awt.image.*;
import javax.imageio.*;
class Test
{
public static void main(String[] args) throws java.lang.Exception
{
File f = new File(args[0]);
if (f.exists())
{
BufferedImage bi = ImageIO.read(f);
ColorSpace cs = bi.getColorModel().getColorSpace();
boolean isGrayscale = cs.getType() == ColorSpace.TYPE_GRAY;
System.out.println(isGrayscale);
}
}
}
Unfortunately this reports false for images that (visually) appear gray-only.
What check would do the right thing?
You can use this code:
File input = new File("inputImage.jpg");
BufferedImage image = ImageIO.read(input);
Raster ras = image.getRaster();
int elem = ras.getNumDataElements();
System.out.println("Number of Elems: " + elem);
If the number of elements is 1, then it's a grayscale image. If it is 3, then it's a color image.
The image looks gray because r = g = b, but it is actually a full-color image with three channels (r, g, b); a real grayscale image has only one channel.
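If the goal is to detect images that merely look gray (r = g = b in every pixel) even though they are stored with three channels, a pixel scan along these lines is one option (a sketch, not tested against your files):

import java.awt.image.BufferedImage;

public class GrayscaleCheck {

    // returns true if every pixel has equal red, green and blue components
    public static boolean looksGray(BufferedImage image) {
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                int rgb = image.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                if (r != g || g != b) {
                    return false;
                }
            }
        }
        return true;
    }
}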

Incorrect number of Contours on javacv?

I have written simple code to retrieve the number of contours in an image, but it always gives an incorrect answer. Can someone please explain this?
import com.googlecode.javacpp.Loader;
import com.googlecode.javacv.CanvasFrame;
import static com.googlecode.javacpp.Loader.*;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_highgui.*;
import java.io.File;
import javax.swing.JFileChooser;
public class TestBeam {
public static void main(String[] args) {
CvMemStorage storage=CvMemStorage.create();
CvSeq squares = new CvContour();
squares = cvCreateSeq(0, sizeof(CvContour.class), sizeof(CvSeq.class), storage);
JFileChooser f=new JFileChooser();
int result=f.showOpenDialog(f);//show dialog box to choose files
File myfile=null;
String path="";
if(result==0){
myfile=f.getSelectedFile();//selected file taken to myfile
path=myfile.getAbsolutePath();//get the path of the file
}
IplImage src = cvLoadImage(path); // here path is the actual path to the image
IplImage grayImage = IplImage.create(src.width(), src.height(), IPL_DEPTH_8U, 1);
cvCvtColor(src, grayImage, CV_RGB2GRAY);
cvThreshold(grayImage, grayImage, 127, 255, CV_THRESH_BINARY);
CvSeq cvSeq=new CvSeq();
CvMemStorage memory=CvMemStorage.create();
cvFindContours(grayImage, memory, cvSeq, Loader.sizeof(CvContour.class), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
System.out.println(cvSeq.elem_size());
CanvasFrame cnvs=new CanvasFrame("Beam");
cnvs.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
cnvs.showImage(src);
//cvShowImage("Final ", src);
}
}
This is the sample image that I used
But the code always returns 8 as output. Can someone please explain this?
cvSeq.elem_size() returns the size of a sequence element in bytes, not the number of contours. That is why the output is 8 every time.
Please refer to the following link for more information:
http://opencv.willowgarage.com/documentation/dynamic_structures.html#cvseq
To find the number of contours you can use the following snippet:
int i = 0;
while(cvSeq != null){
i = i + 1;
cvSeq = cvSeq.h_next();
}
System.out.println(i);
With the parameters you have provided, CV_RETR_EXTERNAL will only return the external contour, which in your image is the image boundary (provided you are not inverting the image). You can use CV_RETR_LIST to get all the contours. Visit the following link for more information on the parameters.
http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#findcontours
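For completeness, switching to CV_RETR_LIST only changes the retrieval-mode argument of the existing call (a sketch based on the code above):

cvFindContours(grayImage, memory, cvSeq, Loader.sizeof(CvContour.class),
        CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);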
