I'm using something along the lines of this to do a (naive, apparently) check for the color space of JPEG images:
import java.io.*;
import java.awt.color.*;
import java.awt.image.*;
import javax.imageio.*;
class Test
{
public static void main(String[] args) throws java.lang.Exception
{
File f = new File(args[0]);
if (f.exists())
{
BufferedImage bi = ImageIO.read(f);
ColorSpace cs = bi.getColorModel().getColorSpace();
boolean isGrayscale = cs.getType() == ColorSpace.TYPE_GRAY;
System.out.println(isGrayscale);
}
}
}
Unfortunately this reports false for images that (visually) appear gray-only.
What check would do the right thing?
You can use this code:
File input = new File("inputImage.jpg");
BufferedImage image = ImageIO.read(input);
Raster ras = image.getRaster();
int elem = ras.getNumDataElements();
System.out.println("Number of Elems: " + elem);
If getNumDataElements() returns 1, the image is a single-channel grayscale image. If it returns 3, it is a three-channel color image.
The image looks gray because r = g = b for every pixel, but it is actually a full-color image with three channels (r, g, b); a true grayscale image has only one channel.
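If you want to treat such RGB-encoded but visually gray files as grayscale, one option is to scan the pixels and verify that every pixel has equal red, green, and blue components. A minimal sketch (the method name and approach are my own, not from the answers above):

import java.awt.image.BufferedImage;

// Returns true if every pixel has r == g == b, i.e. the image is visually gray
// even though it is stored with three channels.
static boolean looksGrayscale(BufferedImage image) {
    for (int y = 0; y < image.getHeight(); y++) {
        for (int x = 0; x < image.getWidth(); x++) {
            int rgb = image.getRGB(x, y);
            int r = (rgb >> 16) & 0xFF;
            int g = (rgb >> 8) & 0xFF;
            int b = rgb & 0xFF;
            if (r != g || g != b) {
                return false;
            }
        }
    }
    return true;
}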
I am facing an issue while setting up image processing code. Despite making all the code changes and trying different approaches, the issue persists.
Libraries used: OpenCV 3.4.2
Jars used: opencv-3.4.2-0
tess4j-3.4.8
Lines added in pom.xml
<!-- https://mvnrepository.com/artifact/org.openpnp/opencv -->
<dependency>
<groupId>org.openpnp</groupId>
<artifactId>opencv</artifactId>
<version>3.4.2-0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/net.sourceforge.tess4j/tess4j -->
<dependency>
<groupId>net.sourceforge.tess4j</groupId>
<artifactId>tess4j</artifactId>
<version>3.4.8</version>
</dependency>
Steps for OpenCV installation:
Download opencv.exe from the official site
Run opencv.exe; it will create an opencv folder
The OpenCV library is now available for us to use in Eclipse.
Steps for Tesseract installation:
Download the tess4j.zip file from the official link
Extract the zip after downloading
Provide the path to the tess4j folder
Following are the steps we performed for the setup in Eclipse:
We added the native library by pointing the build path settings to the OpenCV library
We downloaded Tesseract for reading the images.
We provided the path to Tesseract in the code
We used System.loadLibrary(Core.NATIVE_LIBRARY_NAME) and OpenCV.loadLocally() to load the library.
Then we exported a WAR for deployment
There have been no changes or setup in Apache Tomcat
To load the libraries in Tomcat we had to do some setup here:
For the new code we used a static library-loading class (as suggested in solutions on Stack Overflow)
Here System.loadLibrary is not working
We had to use System.load with a hardcoded path, which results in an internal error
We call System.load twice in the static class; the first call gives a std::bad_alloc ("bad allocation") error
There are two DLL paths in the OpenCV build.
This is the first one:
System.load("C:\\Users\\Downloads\\opencv\\build\\bin\\opencv_java342.dll");
The second one gives an assertion error, depending on which call is placed first.
This is the second one:
System.load("C:\\User\\Downloads\\opencv\\build\\java\\x64\\opencv_java3412.dll");
The code executes until about midway and then exits; so far execution has never reached the Tesseract step.
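For reference, a minimal sketch of such a static loader block might look like the following (the class name and path are placeholders, not the exact code we deployed):

class NativeLibraryLoader {
    static {
        // Load the OpenCV JNI DLL exactly once per JVM.
        // Note the doubled backslashes required in Java string literals.
        System.load("C:\\path\\to\\opencv\\build\\java\\x64\\opencv_java342.dll");
    }
}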
Here is the code for the same :
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.TimeUnit;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import org.apache.commons.logging.impl.Log4JLogger;
import org.apache.log4j.Logger;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.opencv.core.Core;
import org.opencv.core.CvException;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import net.sourceforge.tess4j.Tesseract;
import nu.pattern.OpenCV;
public class ReadImageBox {
// Log4j logger used by both the instance and static methods of this class
private static final Logger logger = Logger.getLogger(ReadImageBox.class);
public String readDataFromImage(String imageToReadPath,String tesseractPath)
{
String result = "";
try {
String i = Core.NATIVE_LIBRARY_NAME;
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
logger.info("Img to read "+imageToReadPath);
String imagePath =imageToReadPath; // bufferNameOfImagePath = "";
logger.info(imagePath);
/*
* The class Mat represents an n-dimensional dense numerical single-channel or
* multi-channel array. It can be used to store real or complex-valued vectors
* and matrices, grayscale or color images, voxel volumes, vector fields, point
* clouds, tensors, histograms (though, very high-dimensional histograms may be
* better stored in a SparseMat ).
*/
logger.info("imagepath::"+imagePath);
OpenCV.loadLocally();
logger.info("imagepath::"+imagePath);
//logger.info("Library Information"+Core.getBuildInformation());
logger.info("imagepath::"+imagePath);
Mat source = Imgcodecs.imread(imagePath);
logger.info("Source image "+source);
String directoryPath = imagePath.substring(0,imagePath.lastIndexOf('/'));
logger.info("Going for Image Processing :" + directoryPath);
// calling image processing here to process the data from it
result = updateImage(100,20,10,3,3,2,source, directoryPath,tesseractPath);
logger.info("Data read "+result);
return result;
}
catch (UnsatisfiedLinkError error) {
// Output expected UnsatisfiedLinkErrors.
logger.error(error);
}
catch (Exception exception)
{
logger.error(exception);
}
return result;
}
public static String updateImage(int boxSize, int horizontalRemoval, int verticalRemoval, int gaussianBlur,
int denoisingClosing, int denoisingOpening, Mat source, String tempDirectoryPath,String tesseractPath) throws Exception{
// Tesseract Object
logger.info("Tesseract Path :"+tesseractPath);
Tesseract tesseract = new Tesseract();
tesseract.setDatapath(tesseractPath);
// Creating the empty destination matrix for further processing
Mat grayScaleImage = new Mat();
Mat gaussianBlurImage = new Mat();
Mat thresholdImage = new Mat();
Mat morph = new Mat();
Mat morphAfterOpreation = new Mat();
Mat dilate = new Mat();
Mat hierarchy = new Mat();
logger.info("Image type"+source.type());
// Converting the image to gray scale and saving it in the grayScaleImage matrix
Imgproc.cvtColor(source, grayScaleImage, Imgproc.COLOR_RGB2GRAY);
//Imgproc.cvtColor(source, grayScaleImage, 0);
// Applying Gaussain Blur
logger.info("source image "+source);
Imgproc.GaussianBlur(grayScaleImage, gaussianBlurImage, new org.opencv.core.Size(gaussianBlur, gaussianBlur),
0);
// OTSU threshold
Imgproc.threshold(gaussianBlurImage, thresholdImage, 0, 255, Imgproc.THRESH_OTSU | Imgproc.THRESH_BINARY_INV);
logger.info("Threshold image "+gaussianBlur);
// remove the lines of any table inside the invoice
Mat horizontal = thresholdImage.clone();
Mat vertical = thresholdImage.clone();
int horizontal_size = horizontal.cols() / 30;
if(horizontal_size%2==0)
horizontal_size+=1;
// showWaitDestroy("Horizontal Lines Detected", horizontal);
Mat horizontalStructure = Imgproc.getStructuringElement(Imgproc.MORPH_RECT,
new org.opencv.core.Size(horizontal_size, 1));
Imgproc.erode(horizontal, horizontal, horizontalStructure);
Imgproc.dilate(horizontal, horizontal, horizontalStructure);
int vertical_size = vertical.rows() / 30;
if(vertical_size%2==0)
vertical_size+=1;
// Create structure element for extracting vertical lines through morphology
// operations
Mat verticalStructure = Imgproc.getStructuringElement(Imgproc.MORPH_RECT,
new org.opencv.core.Size(1, vertical_size));
// Apply morphology operations
Imgproc.erode(vertical, vertical, verticalStructure);
Imgproc.dilate(vertical, vertical, verticalStructure);
Core.absdiff(thresholdImage, horizontal, thresholdImage);
Core.absdiff(thresholdImage, vertical, thresholdImage);
logger.info("Vertical Structure "+verticalStructure);
Mat newImageFortest = thresholdImage;
logger.info("Threshold image "+thresholdImage);
// applying Closing operation
Imgproc.morphologyEx(thresholdImage, morph, Imgproc.MORPH_CLOSE, Imgproc.getStructuringElement(
Imgproc.MORPH_RECT, new Size(denoisingClosing, denoisingClosing)));
logger.info("Morph image "+morph);
// applying Opening operation
Imgproc.morphologyEx(morph, morphAfterOpreation, Imgproc.MORPH_OPEN, Imgproc.getStructuringElement(
Imgproc.MORPH_RECT, new Size(denoisingOpening, denoisingOpening)));
logger.info("Morph After operation image "+morphAfterOpreation);
// Applying dilation on the threshold image to create bounding box edges
Imgproc.dilate(morphAfterOpreation, dilate,
Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(boxSize, boxSize)));
logger.info("Dilate image "+dilate);
// creating string buffer object
String text = "";
try
{
// finding contours
List<MatOfPoint> contourList = new ArrayList<MatOfPoint>(); // A list to store all the contours
// finding contours
Imgproc.findContours(dilate, contourList, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);
logger.info("Contour List "+contourList);
// Creating a copy of the image
//Mat copyOfImage = source;
Mat copyOfImage = newImageFortest;
logger.info("Copy of Image "+copyOfImage);
// Rectangle for cropping
Rect rectCrop = new Rect();
logger.info("Rectangle Crop New Object "+rectCrop);
// loop through the identified contours and crop them from the image to feed
// into Tesseract-OCR
for (int i = 0; i < contourList.size(); i++) {
// getting bound rectangle
rectCrop = Imgproc.boundingRect(contourList.get(i));
logger.info("Rectangle cropped"+rectCrop);
// cropping Image
Mat croppedImage = copyOfImage.submat(rectCrop.y, rectCrop.y + rectCrop.height, rectCrop.x,
rectCrop.x + rectCrop.width);
// writing cropped image to disk
logger.info("Path to write cropped image "+ tempDirectoryPath);
String writePath = tempDirectoryPath + "/croppedImg.png";
logger.info("writepath"+writePath);
// imagepath = imagepath.
Imgcodecs.imwrite(writePath, croppedImage);
try {
// extracting text from cropped image, goes to the image, extracts text and adds
// them to stringBuffer
logger.info("Exact Path where Image was written with Name "+ writePath);
String textExtracted = (tesseract
.doOCR(new File(writePath)));
// Adding separator token between extracted blocks
textExtracted = textExtracted + "_SEPERATOR_";
logger.info("Text Extracted "+textExtracted);
textExtracted = textExtracted + "\n";
text = textExtracted + text;
logger.info("Text Extracted Completely"+text);
// System.out.println("Andar Ka Text => " + text.toString());
} catch (Exception exception) {
logger.error(exception);
}
writePath = "";
logger.info("Making write Path empty for next Image "+ writePath);
}
}
catch(CvException ae)
{
logger.error("cv",ae);
}
catch(UnsatisfiedLinkError ae)
{
logger.error("unsatdif",ae);
}
catch(Exception ae)
{
logger.error("general",ae);
}
// converting into string
return text.toUpperCase();
}
// convert Mat to Image for GUI output
public static Image toBufferedImage(Mat m) {
// getting BYTE_GRAY formed image
int type = BufferedImage.TYPE_BYTE_GRAY;
if (m.channels() > 1) {
type = BufferedImage.TYPE_3BYTE_BGR;
}
int bufferSize = m.channels() * m.cols() * m.rows();
byte[] b = new byte[bufferSize];
m.get(0, 0, b); // get all the pixels
// creating buffered Image
BufferedImage image = new BufferedImage(m.cols(), m.rows(), type);
final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
System.arraycopy(b, 0, targetPixels, 0, b.length);
// returning Image
return image;
}
// method to display Mat format images using the GUI
private static void showWaitDestroy(String winname, Mat img) {
HighGui.imshow(winname, img);
HighGui.moveWindow(winname, 500, 0);
HighGui.waitKey(0);
HighGui.destroyWindow(winname);
}
}
Can anyone explain why this happens? I read an image and render it into an output writer. If the source is a color (or black and white) file, it renders fine. However, if the source image is grayscale, all I get is a black box.
Sample files available at https://www.dropbox.com/sh/kyfsh5curobwxrw/AACfWr1NhX8lPUZpzVGWIPQia?dl=0
My pom plugin dependency snippets follow.
<dependency>
<groupId>javax.media</groupId>
<artifactId>jai_core</artifactId>
<version>1.1.3</version>
</dependency>
<dependency>
<groupId>com.sun.media</groupId>
<artifactId>jai_imageio</artifactId>
<version>1.1</version>
</dependency>
A test program. I understand that this bit of code in itself is really of no value, but in reality it is part of a larger suite of operations. This code represents my efforts to narrow down the issue to a small piece of code.
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageInputStream;
import javax.imageio.stream.ImageOutputStream;
import java.awt.*;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
public class GrayScaleImaging {
public static void main(String[] args)
throws IOException {
//works
// final File inputFile = new File("/home/vinayb/Downloads/page1_color.tif");
// final File outputFile = new File("/home/vinayb/Downloads/page1_color_mod.tif");
//doesn't work
final File inputFile = new File("/home/vinayb/Downloads/page1_grayscale.tif");
final File outputFile = new File("/home/vinayb/Downloads/page1_grayscale_mod.tif");
if (outputFile.exists()) {
outputFile.delete();
}
ImageReader imageReader = null;
ImageWriter imageWriter = null;
Graphics2D g = null;
try (final ImageInputStream imageInputStream = ImageIO.createImageInputStream(inputFile);
final ImageOutputStream imageOutputStream = ImageIO.createImageOutputStream(outputFile);) {
//setup reader
imageReader = ImageIO.getImageReaders(imageInputStream).next();
imageReader.setInput(imageInputStream);
//read image
final BufferedImage initialImage = imageReader.read(0);
//prepare graphics for the output
final BufferedImage finalImage = new BufferedImage(initialImage.getWidth(), initialImage.getHeight(), imageType(initialImage));
g = finalImage.createGraphics();
//do something to the image
//doSomething(g)
//draw image
g.drawImage(initialImage, 0, 0, initialImage.getWidth(), initialImage.getHeight(), null);
//setup writer based on reader
imageWriter = ImageIO.getImageWriter(imageReader);
imageWriter.setOutput(imageOutputStream);
//write
imageWriter.write(null, new IIOImage(initialImage, null, imageReader.getImageMetadata(0)), imageWriter.getDefaultWriteParam());
} finally {
//cleanup
if (imageWriter != null) {
imageWriter.dispose();
}
if (imageReader != null) {
imageReader.dispose();
}
if (g != null) {
g.dispose();
}
}
}
private static int imageType(BufferedImage bufferedImage) {
return bufferedImage.getType() == 0 ? BufferedImage.TYPE_INT_ARGB : bufferedImage.getType();
}
}
Okay, I have now fixed my decoder, thanks for the sample file! :-)
Anyway, after some research, I have come to the conclusion that the problem is definitely the sample file, not your code or the library you are using.
The issue with this file is that the TIFF metadata contains PhotometricInterpretation == 3/Palette and ColorMap tags, i.e. the image uses an indexed color model (palette). If the image is read as it should be according to (my understanding of) the spec, using the supplied color map, it comes out all black. If instead I ignore this and read it as grayscale (assuming PhotometricInterpretation == 1/BlackIsZero), it comes out as black text on a white (light gray) background.
Edit:
A better explanation is that the values in the color map are all 8-bit quantities (using only the low 8 bits of each color entry) instead of the full 16 bits they should use... If I detect this while reading and build the palette from only the low 8 bits, the image comes out as intended (as in the Dropbox sample). The file is still invalid according to the spec, but the condition is detectable.
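If you need a programmatic check for files like this, one possible heuristic (my own sketch, not part of the decoder fix described above) is to inspect the palette the decoder produced and flag an image whose entries are all suspiciously dark:

import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;
import java.awt.image.IndexColorModel;

// Returns true if the image uses an indexed color model whose entries are all
// near-black, which can indicate a color map written with 8-bit values where
// the spec expects 16-bit values.
static boolean hasSuspiciouslyDarkPalette(BufferedImage image) {
    ColorModel cm = image.getColorModel();
    if (!(cm instanceof IndexColorModel)) {
        return false;
    }
    IndexColorModel icm = (IndexColorModel) cm;
    int[] rgbs = new int[icm.getMapSize()];
    icm.getRGBs(rgbs);
    for (int rgb : rgbs) {
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        if (r > 1 || g > 1 || b > 1) {
            return false; // at least one entry is brighter than near-black
        }
    }
    return true;
}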
I have written simple code to retrieve the number of contours in an image, but it always gives an incorrect answer. Can someone please explain this?
import com.googlecode.javacpp.Loader;
import com.googlecode.javacv.CanvasFrame;
import static com.googlecode.javacpp.Loader.*;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_highgui.*;
import java.io.File;
import javax.swing.JFileChooser;
public class TestBeam {
public static void main(String[] args) {
CvMemStorage storage=CvMemStorage.create();
CvSeq squares = new CvContour();
squares = cvCreateSeq(0, sizeof(CvContour.class), sizeof(CvSeq.class), storage);
JFileChooser f=new JFileChooser();
int result=f.showOpenDialog(f);//show dialog box to choose files
File myfile=null;
String path="";
if(result==0){
myfile=f.getSelectedFile();//selected file taken to myfile
path=myfile.getAbsolutePath();//get the path of the file
}
IplImage src = cvLoadImage(path);//here path is the actual path to the image
IplImage grayImage = IplImage.create(src.width(), src.height(), IPL_DEPTH_8U, 1);
cvCvtColor(src, grayImage, CV_RGB2GRAY);
cvThreshold(grayImage, grayImage, 127, 255, CV_THRESH_BINARY);
CvSeq cvSeq=new CvSeq();
CvMemStorage memory=CvMemStorage.create();
cvFindContours(grayImage, memory, cvSeq, Loader.sizeof(CvContour.class), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
System.out.println(cvSeq.elem_size());
CanvasFrame cnvs=new CanvasFrame("Beam");
cnvs.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
cnvs.showImage(src);
//cvShowImage("Final ", src);
}
}
This is the sample image that I used.
But the code always returns 8 as the output. Can someone please explain this?
cvSeq.elem_size() returns the size of a sequence element in bytes, not the number of contours. That is why the output is 8 every time.
Please refer to the following link for more information.
http://opencv.willowgarage.com/documentation/dynamic_structures.html#cvseq
To find the number of contours you can use the following snippet:
int i = 0;
while(cvSeq != null){
i = i + 1;
cvSeq = cvSeq.h_next();
}
System.out.println(i);
With the parameters you have provided, CV_RETR_EXTERNAL retrieves only the external contour, which in your image is the image boundary (provided you are not inverting the image). You can use CV_RETR_LIST to get all the contours. Visit the following link for more information on the parameters.
http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#findcontours
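For example, the call could look like this (a sketch based on the variable names in your code, not tested against your image):

// Retrieve every contour, not just the outer boundary
cvFindContours(grayImage, memory, cvSeq, Loader.sizeof(CvContour.class),
        CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
// Count the contours by walking the sequence via h_next
int count = 0;
for (CvSeq seq = cvSeq; seq != null; seq = seq.h_next()) {
    count++;
}
System.out.println(count);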
I'm trying to use RGBImageFilter to produce a simple transparency effect on an image, but during testing I found that all of my filtered images come out blank with a black background. I simplified my filter to a "no operation" filter that just returns the RGB value that was passed in. Even this simplified no-op filter produces a blank image. I've tried importing and exporting both JPG and PNG with the same effect, and various example filters off the internet produce the same all-black image problem. Has anyone encountered this before? The "load status" and the "write status" below both return true, so I know from MediaTracker that the original image loads fully and that the produced image writes successfully. Thanks in advance for any help!
Here is the code:
import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.awt.image.FilteredImageSource;
import java.awt.image.ImageProducer;
import java.awt.image.RGBImageFilter;
import java.io.File;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
class NoOpFilter extends RGBImageFilter {
public NoOpFilter() {
canFilterIndexColorModel = true;
}
public int filterRGB(int x, int y, int rgb) {
return rgb;
}
public static void main(String[] args) throws Exception{
String originalImageLocation = args[0];
String baseFileName = args[1];
String directory = args[2];
ImageIcon originalImage = new ImageIcon(Toolkit.getDefaultToolkit().getImage(originalImageLocation));
NoOpFilter filter2 = new NoOpFilter();
ImageProducer ip = new FilteredImageSource(originalImage.getImage().getSource(), filter2);
Image filteredImage = Toolkit.getDefaultToolkit().createImage(ip);
System.out.println("Load Status: " + ( originalImage.getImageLoadStatus() == java.awt.MediaTracker.COMPLETE));
BufferedImage bim = new BufferedImage( originalImage.getIconWidth(),originalImage.getIconHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D g2 = bim.createGraphics();
g2.drawImage(filteredImage, 0, 0, null);
g2.dispose();
File file = new File(directory + File.separator + baseFileName + ".png");
file.createNewFile();
boolean status = ImageIO.write(bim, "png", file);
System.out.print("write() status: " + status);
}
}
Here is the fix that worked for me. Right after creating the filteredImage above, it was necessary to add the filtered image to a JLabel and the JLabel to a JFrame, forcing it to be rendered to the screen before writing it into my buffer and saving to disk:
...
Image filteredImage = Toolkit.getDefaultToolkit().createImage(ip);
ImageIcon swingImage = new ImageIcon(filteredImage);
JFrame frame = new JFrame();
JLabel label = new JLabel(swingImage);
frame.add(label);
frame.setVisible(true);
frame.dispose();
BufferedImage bim = new BufferedImage( originalImage.getIconWidth(),originalImage.getIconHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D g2 = bim.createGraphics();
g2.drawImage(filteredImage, 0, 0, null);
g2.dispose();
File file = new File(directory + File.separator + baseFileName + ".png");
file.createNewFile();
boolean status = ImageIO.write(bim, "png", file);
System.out.print("write() status: " + status);
}
I don't quite understand yet why this can't be done completely off-screen as I wanted, but it was easy enough to dispose of the JFrame right after opening it. A better solution would not require UI components, since I don't need anything displayed on screen in this case and just want the end product.
Thanks to Hovercraft for pointing me in the right direction.
You have to call prepareImage; then you don't need the JLabel workaround.
Image filteredImage = Toolkit.getDefaultToolkit().createImage(ip);
Toolkit.getDefaultToolkit().prepareImage(filteredImage, -1, -1, null);
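Since image production is asynchronous, if the image still comes out empty you can wait for it explicitly before drawing it into the BufferedImage. A sketch of my own (not part of the answer above), using MediaTracker from java.awt:

// Block until the filtered image has been fully produced before drawing it.
MediaTracker tracker = new MediaTracker(new Container());
tracker.addImage(filteredImage, 0);
try {
    tracker.waitForID(0);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}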
I've been having a problem with my Java program for resizing images. You drop it into a folder and run it, and it creates a new folder with the resized images. It works great on color images, but it has a problem with grayscale: the images are converted, but they become lighter and more washed out, as if someone had messed with the curves or levels. All the input and output files are sRGB color space JPEGs, saved in RGB color mode. I have thousands of 50-megapixel film scans I'm trying to convert down to 15 megapixels or less. Any help or ideas would be most appreciated. The program's full code is below; it's about 130 lines. I have a feeling the problem may be in the toBufferedImage function, but I'm lost as to what it could be.
package jpegresize;
import java.awt.*;
import java.awt.image.*;
import java.util.*;
import java.io.*;
import javax.imageio.*;
import javax.imageio.stream.*;
import javax.swing.*;
public class Main {
public static void main(String[] args) {
System.out.println("JPEGResize running . . .");
int max_side = 4096;
float quality = 0.9f;
if(args.length == 0) System.out.println("No maximum side resolution or compression quality arguments given, using default values.\nUsage: java -jar JPEGResize.jar <maximum side resolution in pixels> <quality 0 to 100 percent>");
if(args.length >= 1) max_side = Integer.parseInt(args[0]);
if(args.length >= 2) quality = Float.parseFloat(args[1]) / 100.0f;
System.out.println("Maximum side resolution: " + max_side);
System.out.println("Compression quality: " + (quality * 100) + "%");
File folder = new File(".");
File[] listOfFiles = folder.listFiles(new JPEGFilter());
for(int i = 0; i < listOfFiles.length; i++) {
System.out.println("Processing " + listOfFiles[i].getName() + " . . .");
resizeFile(listOfFiles[i].getName(), max_side, quality);
System.out.println("Saved /resized/" + listOfFiles[i].getName());
}
System.out.println("Operations complete.");
}
public static void resizeFile(String filename, int max_side, float quality) {
try
{
BufferedImage input_img = ImageIO.read(new File(filename));
double aspect_ratio = ((double)input_img.getWidth()) / ((double)input_img.getHeight());
int width, height;
if(input_img.getWidth() >= input_img.getHeight()) {
width = max_side;
height = (int)(((double)max_side) / aspect_ratio);
}
else {
width = (int)(((double)max_side) * aspect_ratio);
height = max_side;
}
Image scaled_img = input_img.getScaledInstance(width, height, Image.SCALE_SMOOTH);
BufferedImage output_img = toBufferedImage(scaled_img);
Iterator<ImageWriter> iter = ImageIO.getImageWritersByFormatName("jpeg");
ImageWriter writer = iter.next();
ImageWriteParam iwp = writer.getDefaultWriteParam();
iwp.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
iwp.setCompressionQuality(quality);
File doesDirExist = new File("resized/");
if(!doesDirExist.exists())
new File("resized").mkdir();
File file = new File("resized/" + filename);
FileImageOutputStream output = new FileImageOutputStream(file);
writer.setOutput(output);
IIOImage image = new IIOImage(output_img, null, null);
writer.write(null, image, iwp);
writer.dispose();
}
catch (IOException e)
{
e.printStackTrace();
}
}
// This method returns a buffered image with the contents of an image
public static BufferedImage toBufferedImage(Image image) {
if (image instanceof BufferedImage) {
return (BufferedImage)image;
}
// This code ensures that all the pixels in the image are loaded
image = new ImageIcon(image).getImage();
// Create a buffered image with a format that's compatible with the screen
BufferedImage bimage = null;
GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
try {
// Determine the type of transparency of the new buffered image
int transparency = Transparency.OPAQUE;
// Create the buffered image
GraphicsDevice gs = ge.getDefaultScreenDevice();
GraphicsConfiguration gc = gs.getDefaultConfiguration();
bimage = gc.createCompatibleImage(
image.getWidth(null), image.getHeight(null), transparency);
} catch (HeadlessException e) {
// The system does not have a screen
}
if (bimage == null) {
// Create a buffered image using the default color model
int type = BufferedImage.TYPE_INT_RGB;
bimage = new BufferedImage(image.getWidth(null), image.getHeight(null), type);
}
// Copy image to buffered image
Graphics g = bimage.createGraphics();
// Paint the image onto the buffered image
g.drawImage(image, 0, 0, null);
g.dispose();
return bimage;
}
}
class JPEGFilter implements FilenameFilter {
public boolean accept(File dir, String name) {
return (name.toLowerCase().endsWith(".jpg")) || (name.toLowerCase().endsWith(".jpeg"));
}
}
If the JDK's classes and methods are buggy, report the bug to Oracle (oh! I wish I could still say to Sun..).
And while you wait for the next release to fix the bug ;), try some workarounds, such as scaling the image yourself as proposed here.
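A minimal sketch of scaling the image yourself with Graphics2D (my own illustration of the idea; the original linked proposal is not included here):

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

// Scale without getScaledInstance: draw the source into a new BufferedImage
// of the target size using bilinear interpolation.
static BufferedImage scale(BufferedImage src, int width, int height) {
    int type = src.getType() == BufferedImage.TYPE_CUSTOM
            ? BufferedImage.TYPE_INT_RGB : src.getType();
    BufferedImage dst = new BufferedImage(width, height, type);
    Graphics2D g2 = dst.createGraphics();
    g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
            RenderingHints.VALUE_INTERPOLATION_BILINEAR);
    g2.drawImage(src, 0, 0, width, height, null);
    g2.dispose();
    return dst;
}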
Regards,
Stéphane
In your code, you assume JPEGs are encoded in RGB, but that's not always the case. It's also possible to encode an 8-bit grayscale JPEG. So I suggest that when building your BufferedImage you try replacing:
BufferedImage.TYPE_INT_RGB;
by
BufferedImage.TYPE_BYTE_GRAY;
and see if it works for those images.
If so, then you still have to find a way to determine the encoding type so you can pick the right BufferedImage color encoding automatically, but you will be one step closer.
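For example, one possible way to pick the type automatically (a sketch under the assumption that the decoded image's color space reflects the JPEG encoding):

import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;

// Choose the buffered-image type from the input's color space: grayscale
// sources keep a single-channel type, everything else falls back to RGB.
static int bufferedImageType(BufferedImage src) {
    boolean gray = src.getColorModel().getColorSpace().getType() == ColorSpace.TYPE_GRAY;
    return gray ? BufferedImage.TYPE_BYTE_GRAY : BufferedImage.TYPE_INT_RGB;
}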
Regards,
Stéphane