When trying to use the watershed method I got this error: Unsupported format or combination of formats (Only 32-bit, 1-channel output images are supported) in cvWatershed
I think this is because my markers image has 3 channels and a depth of 8, and that I need to convert this 3-channel, 8-bit image to a 32-bit, 1-channel image. My question is: am I right? And how do I do this conversion?
EDIT: Updated code with solutions
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
package kitouch;
import org.OpenNI.*;
import com.googlecode.javacpp.Loader;
import com.googlecode.javacv.*;
import com.googlecode.javacv.cpp.*;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_calib3d.*;
import static com.googlecode.javacv.cpp.opencv_objdetect.*;
import static com.googlecode.javacv.cpp.opencv_highgui.*;
import java.nio.ShortBuffer;
import java.awt.*;
import java.awt.image.*;
/**
*
* @author olivierjanssens
*/
public class watherShedExample {
// CanvasFrame frame5 = new CanvasFrame("Some T");
public static void main(String s[])
{
CanvasFrame frame1 = new CanvasFrame("Foreground");
CanvasFrame frame2 = new CanvasFrame("Dilated");
CanvasFrame frame3 = new CanvasFrame("Background");
CanvasFrame frame4 = new CanvasFrame("Markers");
CanvasFrame frame5 = new CanvasFrame("Watershed");
// Read input image
IplImage image = cvLoadImage("/Users/olivierjanssens/Downloads/images/group.jpg");
IplImage test = cvLoadImage("/Users/olivierjanssens/Downloads/images/binary.bmp");
IplImage binary = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U, 1);
cvCvtColor(test, binary, CV_BGR2GRAY);
// Eliminate noise and smaller objects, repeat erosion 6 times
IplImage fg = cvCreateImage(cvGetSize(binary), binary.depth(), binary.nChannels() /* channels */);
cvErode(binary, fg, null /* 3x3 square */ , 6 /* iterations */);
frame1.showImage(fg);
// Identify image pixels without objects
IplImage bg = cvCreateImage(cvGetSize(binary), binary.depth(), binary.nChannels() /* channels */);
cvDilate(binary, bg, null /* 3x3 square */ , 6 /* iterations */);
frame2.showImage(bg);
cvThreshold(bg, bg, 1 /* threshold */ , 128 /* max value */ , CV_THRESH_BINARY_INV);
frame3.showImage(bg);
// Create marker image
IplImage markers = cvCreateImage(cvGetSize(binary), IPL_DEPTH_8U, binary.nChannels() /* channels */);
cvAdd(fg, bg, markers, null);
frame4.showImage(markers);
/*
* TEST SOLUTION 1
IplImage gray = cvCreateImage(cvGetSize(markers), IPL_DEPTH_8U, 1);
cvCvtColor(markers, gray, CV_BGR2GRAY);
IplImage img32bit1chan = cvCreateImage(cvGetSize(gray), IPL_DEPTH_32F, 1);
double ve;
for (int i = 0; i < gray.width(); i++) // markers width
{
for (int j = 0; j < gray.height(); j++) // markers height
{
ve = cvGetReal2D((IplImage)gray, j, i);
cvSetReal2D((IplImage)img32bit1chan, j, i, ve); // same (row, col) order as the cvGetReal2D above
}
}
*/
//SOLUTION 2
IplImage markers32f = cvCreateImage(cvGetSize(binary), IPL_DEPTH_32F, binary.nChannels());
cvConvertScale(markers, markers32f, 1, 0); // converts from IPL_DEPTH_8U to IPL_DEPTH_32F
cvWatershed(image, markers32f);
frame5.showImage(image);
}
}
A manual conversion would look something like the following (I haven't tested the code):
IplImage gray = cvCreateImage(cvGetSize(markers), IPL_DEPTH_8U, 1);
cvCvtColor(markers, gray, CV_BGR2GRAY);
IplImage img32bit1chan = cvCreateImage(cvGetSize(gray), IPL_DEPTH_32S, 1);
// convert 8-bit 1-channel image to 32-bit 1-channel
cvConvertScale(gray, img32bit1chan , 1/255.);
cvWatershed(image, img32bit1chan);
I'm not familiar with the Java API, but here is how you might go about it:
IplImage image = cvLoadImage("/Users/olivierjanssens/Downloads/images/group.jpg");
// this will force the image to be read as grayscale (i.e. single channel)
// sometimes saving a "binary" bitmap will result in a 3 channel image
IplImage binary = cvLoadImage("/Users/olivierjanssens/Downloads/images/binary.bmp", 0);
...
IplImage markers = cvCreateImage(cvGetSize(binary), IPL_DEPTH_8U, binary.nChannels() /* channels */);
cvAdd(fg, bg, markers, null);
frame4.showImage(markers);
IplImage markers32f = cvCreateImage(cvGetSize(binary), IPL_DEPTH_32F, binary.nChannels());
cvConvertScale(markers, markers32f, 1, 0); // converts from IPL_DEPTH_8U to IPL_DEPTH_32F
cvWatershed(image, markers32f);
See if that works for you.
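In case you later move to the plain org.opencv Java bindings (org.opencv.core.Mat, org.opencv.core.CvType, org.opencv.imgcodecs.Imgcodecs, org.opencv.imgproc.Imgproc) instead of JavaCV, a rough, untested sketch of the same conversion would be the following; in that API the marker image is expected to be 32-bit signed, single channel (CV_32SC1), and the paths below are just placeholders:
Mat image = Imgcodecs.imread("group.jpg");                                   // placeholder path
Mat markers8u = Imgcodecs.imread("markers.png", Imgcodecs.IMREAD_GRAYSCALE); // placeholder 8-bit, 1-channel marker image
Mat markers32s = new Mat();
markers8u.convertTo(markers32s, CvType.CV_32SC1); // 8-bit unsigned -> 32-bit signed, still one channel
Imgproc.watershed(image, markers32s);             // markers32s now holds the segment labels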
I'm facing an issue while setting up image-processing code. In spite of making all the code changes and trying different approaches, the issue persists.
Libraries used: OpenCV 3.4.2
Jars used: opencv-3.4.2-0
tess4j-3.4.8
Lines added to pom.xml:
<!-- https://mvnrepository.com/artifact/org.openpnp/opencv -->
<dependency>
<groupId>org.openpnp</groupId>
<artifactId>opencv</artifactId>
<version>3.4.2-0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/net.sourceforge.tess4j/tess4j -->
<dependency>
<groupId>net.sourceforge.tess4j</groupId>
<artifactId>tess4j</artifactId>
<version>3.4.8</version>
</dependency>
Steps for the OpenCV installation:
Download opencv.exe from the official site
Run opencv.exe; it will create an opencv folder
The OpenCV library is now available for use in Eclipse
Steps for the Tesseract installation:
Download the tess4j.zip file from the official link
Extract the zip folder after downloading
Provide the path to the tess4j folder
We performed the following steps for the setup in Eclipse:
We added the native library by pointing the build path settings to the OpenCV library
We downloaded Tesseract for reading the images
We provided the path to Tesseract in the code
We used System.loadLibrary(Core.NATIVE_LIBRARY_NAME) and OpenCV.loadLocally() to load the library
Then we exported a WAR for deployment
There have been no changes or setup in Apache Tomcat
To load the libraries in Tomcat we had to add some setup:
For the new code we used a static load-library class (as suggested in solutions on Stack Overflow)
Here, System.loadLibrary does not work
We had to use System.load with a hard-coded path, which results in an internal error
We call System.load twice in the static class; the first call gives a std error (bad allocation)
There are two DLL paths in OpenCV.
This is the first one:
System.load("C:\\Users\\Downloads\\opencv\\build\\bin\\opencv_java342.dll");
The second one gives an assertion error, depending on which call is placed first.
This is the second one:
System.load("C:\\User\\Downloads\\opencv\\build\\java\\x64\\opencv_java3412.dll");
The code executes until about midway and then exits; so far it has never reached the Tesseract step.
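For reference, a minimal static loader we are considering instead of the hard-coded System.load calls (only a sketch, assuming the org.openpnp opencv artifact from the pom.xml above is on the classpath):
import nu.pattern.OpenCV;

public class OpenCvLoader {
    static {
        // loadLocally() extracts the native library bundled in the org.openpnp jar
        // and loads it, so no absolute DLL path is needed inside Tomcat
        OpenCV.loadLocally();
    }

    // referencing this class once, e.g. OpenCvLoader.init(), runs the static block
    public static void init() {
    }
}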
Here is the code for the same :
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.TimeUnit;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import org.apache.commons.logging.impl.Log4JLogger;
import org.apache.log4j.Logger;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.opencv.core.Core;
import org.opencv.core.CvException;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import net.sourceforge.tess4j.Tesseract;
import nu.pattern.OpenCV;
public class ReadImageBox {
private static final Logger logger = Logger.getLogger(ReadImageBox.class); // log4j logger used throughout this class
public String readDataFromImage(String imageToReadPath,String tesseractPath)
{
String result = "";
try {
String i = Core.NATIVE_LIBRARY_NAME;
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
logger.info("Img to read "+imageToReadPath);
String imagePath =imageToReadPath; // bufferNameOfImagePath = "";
logger.info(imagePath);
/*
* The class Mat represents an n-dimensional dense numerical single-channel or
* multi-channel array. It can be used to store real or complex-valued vectors
* and matrices, grayscale or color images, voxel volumes, vector fields, point
* clouds, tensors, histograms (though, very high-dimensional histograms may be
* better stored in a SparseMat ).
*/
logger.info("imagepath::"+imagePath);
OpenCV.loadLocally();
logger.info("imagepath::"+imagePath);
//logger.info("Library Information"+Core.getBuildInformation());
logger.info("imagepath::"+imagePath);
Mat source = Imgcodecs.imread(imagePath);
logger.info("Source image "+source);
String directoryPath = imagePath.substring(0,imagePath.lastIndexOf('/'));
logger.info("Going for Image Processing :" + directoryPath);
// calling image processing here to process the data from it
result = updateImage(100,20,10,3,3,2,source, directoryPath,tesseractPath);
logger.info("Data read "+result);
return result;
}
catch (UnsatisfiedLinkError error) {
// Output expected UnsatisfiedLinkErrors.
logger.error(error);
}
catch (Exception exception)
{
logger.error(exception);
}
return result;
}
public static String updateImage(int boxSize, int horizontalRemoval, int verticalRemoval, int gaussianBlur,
int denoisingClosing, int denoisingOpening, Mat source, String tempDirectoryPath,String tesseractPath) throws Exception{
// Tesseract Object
logger.info("Tesseract Path :"+tesseractPath);
Tesseract tesseract = new Tesseract();
tesseract.setDatapath(tesseractPath);
// Creating the empty destination matrix for further processing
Mat grayScaleImage = new Mat();
Mat gaussianBlurImage = new Mat();
Mat thresholdImage = new Mat();
Mat morph = new Mat();
Mat morphAfterOpreation = new Mat();
Mat dilate = new Mat();
Mat hierarchy = new Mat();
logger.info("Image type"+source.type());
// Converting the image to gray scale and saving it in the grayScaleImage matrix
Imgproc.cvtColor(source, grayScaleImage, Imgproc.COLOR_RGB2GRAY);
//Imgproc.cvtColor(source, grayScaleImage, 0);
// Applying Gaussian blur
logger.info("source image "+source);
Imgproc.GaussianBlur(grayScaleImage, gaussianBlurImage, new org.opencv.core.Size(gaussianBlur, gaussianBlur),
0);
// OTSU threshold
Imgproc.threshold(gaussianBlurImage, thresholdImage, 0, 255, Imgproc.THRESH_OTSU | Imgproc.THRESH_BINARY_INV);
logger.info("Threshold image "+gaussianBlur);
// remove the lines of any table inside the invoice
Mat horizontal = thresholdImage.clone();
Mat vertical = thresholdImage.clone();
int horizontal_size = horizontal.cols() / 30;
if(horizontal_size%2==0)
horizontal_size+=1;
// showWaitDestroy("Horizontal Lines Detected", horizontal);
Mat horizontalStructure = Imgproc.getStructuringElement(Imgproc.MORPH_RECT,
new org.opencv.core.Size(horizontal_size, 1));
Imgproc.erode(horizontal, horizontal, horizontalStructure);
Imgproc.dilate(horizontal, horizontal, horizontalStructure);
int vertical_size = vertical.rows() / 30;
if(vertical_size%2==0)
vertical_size+=1;
// Create structure element for extracting vertical lines through morphology
// operations
Mat verticalStructure = Imgproc.getStructuringElement(Imgproc.MORPH_RECT,
new org.opencv.core.Size(1, vertical_size));
// Apply morphology operations
Imgproc.erode(vertical, vertical, verticalStructure);
Imgproc.dilate(vertical, vertical, verticalStructure);
Core.absdiff(thresholdImage, horizontal, thresholdImage);
Core.absdiff(thresholdImage, vertical, thresholdImage);
logger.info("Vertical Structure "+verticalStructure);
Mat newImageFortest = thresholdImage;
logger.info("Threshold image "+thresholdImage);
// applying Closing operation
Imgproc.morphologyEx(thresholdImage, morph, Imgproc.MORPH_CLOSE, Imgproc.getStructuringElement(
Imgproc.MORPH_RECT, new Size(denoisingClosing, denoisingClosing)));
logger.info("Morph image "+morph);
// applying Opening operation
Imgproc.morphologyEx(morph, morphAfterOpreation, Imgproc.MORPH_OPEN, Imgproc.getStructuringElement(
Imgproc.MORPH_RECT, new Size(denoisingOpening, denoisingOpening)));
logger.info("Morph After operation image "+morphAfterOpreation);
// Applying dilation on the threshold image to create bounding box edges
Imgproc.dilate(morphAfterOpreation, dilate,
Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(boxSize, boxSize)));
logger.info("Dilate image "+dilate);
// creating string buffer object
String text = "";
try
{
// finding contours
List<MatOfPoint> contourList = new ArrayList<MatOfPoint>(); // A list to store all the contours
// finding contours
Imgproc.findContours(dilate, contourList, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);
logger.info("Contour List "+contourList);
// Creating a copy of the image
//Mat copyOfImage = source;
Mat copyOfImage = newImageFortest;
logger.info("Copy of Image "+copyOfImage);
// Rectangle for cropping
Rect rectCrop = new Rect();
logger.info("Rectangle Crop New Object "+rectCrop);
// loop through the identified contours and crop them from the image to feed
// into Tesseract-OCR
for (int i = 0; i < contourList.size(); i++) {
// getting bound rectangle
rectCrop = Imgproc.boundingRect(contourList.get(i));
logger.info("Rectangle cropped"+rectCrop);
// cropping Image
Mat croppedImage = copyOfImage.submat(rectCrop.y, rectCrop.y + rectCrop.height, rectCrop.x,
rectCrop.x + rectCrop.width);
// writing cropped image to disk
logger.info("Path to write cropped image "+ tempDirectoryPath);
String writePath = tempDirectoryPath + "/croppedImg.png";
logger.info("writepath"+writePath);
// imagepath = imagepath.
Imgcodecs.imwrite(writePath, croppedImage);
try {
// extracting text from cropped image, goes to the image, extracts text and adds
// them to stringBuffer
logger.info("Exact Path where Image was written with Name "+ writePath);
String textExtracted = (tesseract
.doOCR(new File(writePath)));
// Adding separator
textExtracted = textExtracted + "_SEPERATOR_";
logger.info("Text Extracted "+textExtracted);
textExtracted = textExtracted + "\n";
text = textExtracted + text;
logger.info("Text Extracted Completely"+text);
// System.out.println("Andar Ka Text => " + text.toString());
} catch (Exception exception) {
logger.error(exception);
}
writePath = "";
logger.info("Making write Path empty for next Image "+ writePath);
}
}
catch(CvException ae)
{
logger.error("cv",ae);
}
catch(UnsatisfiedLinkError ae)
{
logger.error("unsatdif",ae);
}
catch(Exception ae)
{
logger.error("general",ae);
}
// converting into string
return text.toUpperCase();
}
// convert Mat to Image for GUI output
public static Image toBufferedImage(Mat m) {
// getting BYTE_GRAY formed image
int type = BufferedImage.TYPE_BYTE_GRAY;
if (m.channels() > 1) {
type = BufferedImage.TYPE_3BYTE_BGR;
}
int bufferSize = m.channels() * m.cols() * m.rows();
byte[] b = new byte[bufferSize];
m.get(0, 0, b); // get all the pixels
// creating buffered Image
BufferedImage image = new BufferedImage(m.cols(), m.rows(), type);
final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
System.arraycopy(b, 0, targetPixels, 0, b.length);
// returning Image
return image;
}
// method to display Mat format images using the GUI
private static void showWaitDestroy(String winname, Mat img) {
HighGui.imshow(winname, img);
HighGui.moveWindow(winname, 500, 0);
HighGui.waitKey(0);
HighGui.destroyWindow(winname);
}
}
I've written a program to modify images.
First, I get the image, and get its drawing context like this:
BufferedImage image;
try {
image = ImageIO.read(inputFile);
} catch (IOException ioe) { /* exception handling ... */ }
Graphics g = image.createGraphics();
And then I modify the image like this:
for (int x = 0; x < image.getWidth(); x++) {
for (int y = 0; y < image.getHeight(); y++) {
g.setColor( /* calculate color ... */ );
g.fillRect(x, y, 1, 1);
}
}
After I've finished modifying the image, I save the image like this:
try {
ImageIO.write(image, "PNG", save.getSelectedFile());
} catch (IOException ioe) { /* exception handling ... */ }
Now most of the time this works just fine.
However, when I tried recoloring this texture
to this
I get this instead:
Inside the debugger, though, the Graphics's color is the shade of pink I want it to be.
The comments seem to suggest that the image the user opens might have some color limitations, and since I'm drawing to the same image I'm reading from, my program has to abide by these limitations. The example image seems to be pretty grayscale-y, and apparently its bit depth is 8 bit. So maybe the pink I'm drawing on it is converted to grayscale, because the image has to stay 8-bit?
As suggested in the comments, the main problem here indeed is the wrong color model. When you load the original image, and print some information about it...
BufferedImage image = ImageIO.read(
new URL("https://i.stack.imgur.com/pSUFR.png"));
System.out.println(image);
it will say
BufferedImage@5419f379: type = 13 IndexColorModel: #pixelBits = 8 numComponents = 3 color space = java.awt.color.ICC_ColorSpace@7dc7cbad transparency = 1 transIndex = -1 has alpha = false isAlphaPre = false ByteInterleavedRaster: width = 128 height = 128 #numDataElements 1 dataOff[0] = 0
The IndexColorModel does not necessarily support all the colors, but only a subset of them. (Basically, the image supports only the colors that it "needs", which allows for a more compact storage).
The solution here is to convert the image into one that has the appropriate color model. A generic method for this is shown in the following example:
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.net.URL;
import javax.imageio.ImageIO;
public class ImageColors
{
public static void main(String[] args) throws IOException
{
BufferedImage image = ImageIO.read(
new URL("https://i.stack.imgur.com/pSUFR.png"));
// This will show that the image has an IndexColorModel.
// This does not necessarily support all colors.
System.out.println(image);
// Convert the image to a generic ARGB image
image = convertToARGB(image);
// Now, the image has a DirectColorModel, supporting all colors
System.out.println(image);
Graphics2D g = image.createGraphics();
g.setColor(Color.PINK);
g.fillRect(50, 50, 50, 50);
g.dispose();
ImageIO.write(image, "PNG", new File("RightColors.png"));
}
public static BufferedImage convertToARGB(BufferedImage image)
{
BufferedImage newImage = new BufferedImage(
image.getWidth(), image.getHeight(),
BufferedImage.TYPE_INT_ARGB);
Graphics2D g = newImage.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();
return newImage;
}
}
I have read through many related questions and other web resources for days, but I just can't find a solution.
I want to scale down very large images (e.g. 1300 x 27000 pixels).
I cannot use more than 1024 MB of heap space in Eclipse.
I'd rather not use an external tool like JMagick, since I want to export a single executable jar to run on other devices. Also, from what I read, I am not sure whether JMagick could even handle scaling such very large images. Does anyone know?
Everything I tried so far results in "OutOfMemoryError: Java heap space"
I tried, e.g., coobird.thumbnailator and awt.Graphics2D, ...
Performance and quality are not the most important factors. Mainly I just want to be sure, that all sizes of images can be scaled down without running out of heap space.
So, is there a way to scale such images, maybe in small chunks so that the full image doesn't need to be loaded? Or any other way to do this?
As a workaround it would also be sufficient if I could just make a thumbnail of a smaller part of the image. But I guess cropping a large image has the same problems as scaling a large image?
Thanks and cheers!
[EDIT:]
With the Thumbnailator
Thumbnails.of(new File(".../20150601161616.png"))
.size(160, 160);
works for the particular picture, but
Thumbnails.of(new File(".../20150601161616.png"))
.size(160, 160)
.toFile(new File(".../20150601161616_t.png"));
runs out of memory.
I've never had to do that; but I would suggest loading the image in tiled pieces, scaling them down, printing the scaled-down version on the new BufferedImage, and then loading the next tile over the first.
Pseudocode (parameters may be a little out of order):
Image finalImage;
Graphics2D g2D = finalImage.createGraphics();
for each yTile:
for each xTile:
Image orig = getImage(path, x, y, xWidth, yWidth);
g2D.drawImage(orig, x * scaleFactor, y * scaleFactor, xWidth * scaleFactor, yWidth * scaleFactor, null);
return finalImage;
Of course you could always do it the dreaded binary way; but this apparently addresses how to load only small chunks of an image:
Draw part of image to screen (without loading all to memory)
It seems that there are already a large number of prebuilt utilities for loading only part of a file.
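For example, here is a rough sketch (untested; the file and the tile rectangle are placeholders) of reading only one region of a large image with ImageIO's ImageReader, so that only that rectangle ends up in the returned BufferedImage:
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class TileReader {
    public static BufferedImage readTile(File file, Rectangle region) throws IOException {
        try (ImageInputStream in = ImageIO.createImageInputStream(file)) {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
            if (!readers.hasNext()) {
                throw new IOException("No ImageReader for " + file);
            }
            ImageReader reader = readers.next();
            reader.setInput(in);
            ImageReadParam param = reader.getDefaultReadParam();
            param.setSourceRegion(region); // decode only this rectangle into the result
            BufferedImage tile = reader.read(0, param);
            reader.dispose();
            return tile;
        }
    }

    public static void main(String[] args) throws IOException {
        // placeholder usage: read a 1300 x 2000 tile starting at the top of the image
        BufferedImage tile = readTile(new File("big.png"), new Rectangle(0, 0, 1300, 2000));
        System.out.println(tile.getWidth() + " x " + tile.getHeight());
    }
}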
I apologize for the somewhat scattered nature of my answer; you actually have me curious about this now and I'll be researching it further tonight. I'll try and make note of what I run into here. Good luck!
With your hints and questions I was able to write a class that actually does what I want. It might not handle every size, but it works for very large images. The performance is quite bad (10-15 seconds for a 1300 x 27000 PNG), but it works for my purposes.
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import net.coobird.thumbnailator.Thumbnails;
public class ImageManager {
private int tileHeight;
private String pathSubImgs;
/**
* @param args
*/
public static void main(String[] args) {
int tileHeightL = 2000;
String imageBasePath = "C:.../screenshots/";
String subImgsFolderName = "subImgs/";
String origImgName = "TestStep_319_20150601161652.png";
String outImgName = origImgName+"scaled.png";
ImageManager imgMngr = new ImageManager(tileHeightL,imageBasePath+subImgsFolderName);
if(imgMngr.scaleDown(imageBasePath+origImgName, imageBasePath+outImgName))
System.out.println("Scaled.");
else
System.out.println("Failed.");
}
/**
* @param origImgPath
* @param outImgPath
* @param tileHeight
* @param pathSubImgs
*/
public ImageManager(int tileHeight,
String pathSubImgs) {
super();
this.tileHeight = tileHeight;
this.pathSubImgs = pathSubImgs;
}
private boolean scaleDown(String origImgPath, String outImgPath){
try {
BufferedImage image = ImageIO.read(new File(origImgPath));
int origH = image.getHeight();
int origW = image.getWidth();
int tileRestHeight;
int yTiles = (int) Math.ceil(origH/tileHeight);
int tyleMod = origH%tileHeight;
for(int tile = 0; tile <= yTiles ; tile++){
if(tile == yTiles)
tileRestHeight = tyleMod;
else
tileRestHeight = tileHeight;
BufferedImage out = image.getSubimage(0, tile * tileHeight, origW, tileRestHeight);
ImageIO.write(out, "png", new File(pathSubImgs + tile + ".png"));
Thumbnails.of(new File(pathSubImgs + tile + ".png"))
.size(400, 400)
.toFile(new File(pathSubImgs + tile + ".png"));
}
image = ImageIO.read(new File(pathSubImgs + 0 + ".png"));
BufferedImage img2;
for(int tile = 1; tile <= yTiles ; tile++){
if(tile == yTiles)
tileRestHeight = tyleMod;
else
tileRestHeight = tileHeight;
img2 = ImageIO.read(new File(pathSubImgs + tile + ".png"));
image = joinBufferedImage(image, img2);
}
ImageIO.write(image, "png", new File(outImgPath));
return true;
} catch (IOException e) {
e.printStackTrace();
return false;
}
}
public static BufferedImage joinBufferedImage(BufferedImage img1,BufferedImage img2) {
//do some calculate first
int height = img1.getHeight()+img2.getHeight();
int width = Math.max(img1.getWidth(),img2.getWidth());
//create a new buffer and draw two image into the new image
BufferedImage newImage = new BufferedImage(width,height, BufferedImage.TYPE_INT_ARGB);
Graphics2D g2 = newImage.createGraphics();
Color oldColor = g2.getColor();
//fill background
g2.setPaint(Color.WHITE);
g2.fillRect(0, 0, width, height);
//draw image
g2.setColor(oldColor);
g2.drawImage(img1, null, 0, 0);
g2.drawImage(img2, null, 0, img1.getHeight());
g2.dispose();
return newImage;
}
}
I'm making my own image processing application that operates entirely on BufferedImage objects.
Now I have stumbled upon code for face detection on a blog at OpenShift.com.
I want to integrate that code into my own GUI application, but I'm facing problems: in the face detector code the image is an instance of IplImage, so I first need to convert my BufferedImage to an IplImage so that the method accepts it.
Please help.
I am leaving the face detector code below.
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_objdetect.*;
import static com.googlecode.javacv.cpp.opencv_highgui.*;
public class FaceDetection{
//Load haar classifier XML file
public static final String XML_FILE =
"C:\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_alt2.xml";
public static void main(String[] args){
//Load image
IplImage img = cvLoadImage("C:\\Users\\The Blue Light\\Desktop\\13.jpg");
detect(img);
}
//Detect for face using classifier XML file
public static void detect(IplImage src){
//Define classifier
CvHaarClassifierCascade cascade = new CvHaarClassifierCascade(cvLoad(XML_FILE));
CvMemStorage storage = CvMemStorage.create();
//Detect objects
CvSeq sign = cvHaarDetectObjects(
src,
cascade,
storage,
1.5,
3,
CV_HAAR_DO_CANNY_PRUNING);
cvClearMemStorage(storage);
int total_Faces = sign.total();
//Draw rectangles around detected objects
for(int i = 0; i < total_Faces; i++){
CvRect r = new CvRect(cvGetSeqElem(sign, i));
cvRectangle (
src,
cvPoint(r.x(), r.y()),
cvPoint(r.width() + r.x(), r.height() + r.y()),
CvScalar.CYAN,
2,
CV_AA,
0);
}
//Display result
cvShowImage("Result", src);
cvWaitKey(0);
}
}
IplImage image = IplImage.createFrom(yourBufferedImage);
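For example (a quick sketch; the file name is a placeholder, and in your GUI you would use the BufferedImage you already have):
BufferedImage buffered = ImageIO.read(new File("someImage.jpg")); // or the BufferedImage from your GUI
IplImage ipl = IplImage.createFrom(buffered);                     // JavaCV helper: BufferedImage -> IplImage
FaceDetection.detect(ipl);                                        // pass it to the detect() method above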
Thanks @Marco13, exactly what I needed.
I'm using something along the lines of this to do a (naive, apparently) check for the color space of JPEG images:
import java.io.*;
import java.awt.color.*;
import java.awt.image.*;
import javax.imageio.*;
class Test
{
public static void main(String[] args) throws java.lang.Exception
{
File f = new File(args[0]);
if (f.exists())
{
BufferedImage bi = ImageIO.read(f);
ColorSpace cs = bi.getColorModel().getColorSpace();
boolean isGrayscale = cs.getType() == ColorSpace.TYPE_GRAY;
System.out.println(isGrayscale);
}
}
}
Unfortunately this reports false for images that (visually) appear gray-only.
What check would do the right thing?
You can use this code:
File input = new File("inputImage.jpg");
BufferedImage image = ImageIO.read(input);
Raster ras = image.getRaster();
int elem = ras.getNumDataElements();
System.out.println("Number of Elems: " + elem);
If getNumDataElements() returns 1, it's a grayscale image; if it returns 3, it's a color image.
The image looks gray because r = g = b, but it is actually a full-color image: it has three channels (r, g, b), whereas a real grayscale image has only one channel.
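If you also want to treat such full-color-but-visually-gray images as grayscale, one option is to scan the pixels and check whether red, green and blue are equal everywhere; a rough sketch (untested):
import java.awt.image.BufferedImage;

public class GrayCheck {
    // true if every pixel has equal red, green and blue components
    public static boolean looksGrayscale(BufferedImage image) {
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                int rgb = image.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                if (r != g || g != b) {
                    return false;
                }
            }
        }
        return true;
    }
}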