I want to modify and add TIFF tags in existing .tif files using Java. JAI ImageIO crashed because it could not deal with certain tags from TIFF 6.0. Apache Commons Imaging seems to be able to deal with these tags, but I have no idea how to do that. I found a post here that I used as a starting point (How to embed ICC_Profile in TiffOutputSet).
Using the example code creates an image that I can't open because of an LZW error. If I use the Imaging.writeImage(...) methods, the color model changes from 8-bit to 24-bit and the Exif metadata is gone.
What I have done is:
bufferedImage = Imaging.getBufferedImage(srcTiff);
byte[] imageBytes = Imaging.writeImageToBytes(tifFile, imageFormat, optional_params);
exifDirectory = tiffOutputSet.getOrCreateRootDirectory();
...
// Note: the class is named TiffImageWriterLossless
TiffImageWriterLossless losslessWriter = new TiffImageWriterLossless(imageBytes);
os = new FileOutputStream(tmpFile);
os = new BufferedOutputStream(os);
losslessWriter.writeImage(bufferedImage, os, image_params);
Playing around with image_params, like setting the compression or passing the output set as params, results in different issues. But one thing is constant: the destination image is bigger than the source image, even when the source image is 24-bit like the destination image.
How can I get Commons Imaging to work for me?
I can answer why the destination image is bigger than the source: TIFF images have a compression that is not carried over when the image is read into memory. When writing the image back to storage, you must apply the compression explicitly.
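As a minimal sketch of applying the compression explicitly, assuming the Commons Imaging 1.0-alpha parameter names (PARAM_KEY_COMPRESSION and the TIFF_COMPRESSION_* constants; check ImagingConstants and TiffConstants in your version):
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;

import org.apache.commons.imaging.ImagingConstants;
import org.apache.commons.imaging.formats.tiff.constants.TiffConstants;
import org.apache.commons.imaging.formats.tiff.write.TiffImageWriterLossless;

// Re-apply the source compression; it is discarded when the TIFF is decoded into a BufferedImage.
Map<String, Object> params = new HashMap<>();
params.put(ImagingConstants.PARAM_KEY_COMPRESSION, TiffConstants.TIFF_COMPRESSION_LZW);

try (OutputStream os = new BufferedOutputStream(new FileOutputStream(tmpFile))) {
    new TiffImageWriterLossless(imageBytes).writeImage(bufferedImage, os, params);
}
Here tmpFile, imageBytes, and bufferedImage are the variables from the question's code.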
Related
I'm trying to apply the code posted in this post:
How to convert from CMYK to RGB in Java correctly?
The answer from the user named Codo works for me so far, but my source is not a file; it's an object that gets converted into a BufferedImage with
stream = (PRStream)object;
PdfImageObject image = new PdfImageObject(stream);
//this does not work
BufferedImage bi = image.getBufferedImage();
That answer provides a method that returns a BufferedImage from a file, like so:
public BufferedImage readImage(File file) throws IOException, ImageReadException
but I want to use
BufferedImage bi = readImage(image.getBufferedImage());
instead of
File f = new File("/Users/adlib/Documents/projekte/pdf_compress/envirement/eclipse_luna/WORKSPACE/PDFCompression/src/Bild.jpg");
BufferedImage bi = readImage(f);
because I'm extracting all the images from a PDF file using iText.
I messed around with the code (changed File to BufferedImage and added streams) but I just can't get it to work. Using a File as input works fine, but that's not really what I need. What do I need to change to get that code to work with a BufferedImage as input to the readImage() method?
Here is the complete code from that answer:
https://stackoverflow.com/a/12132630/4944643
He uses Sanselan / Apache Commons Imaging.
I'm not sure how iText extracts images, but chances are good it's using ImageIO. If so, you can just install (or depend on, using Maven) the TwelveMonkeys JPEG ImageIO plugin, and
BufferedImage bi = image.getBufferedImage();
...should just work.
The above-mentioned plugin does support CMYK (and Adobe YCCK) JPEGs.
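For reference, the Maven dependency looks something like this (the version shown is an assumption; check the project page for the current release):
<dependency>
    <groupId>com.twelvemonkeys.imageio</groupId>
    <artifactId>imageio-jpeg</artifactId>
    <version>3.9.4</version> <!-- assumed; use the latest release -->
</dependency>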
If iText doesn't use ImageIO, the above won't work (i.e., once you already have a BufferedImage, it's too late to make the correct conversion). You will instead need to get at the bytes of the underlying PDF image (using the getImageAsBytes() method) and use ImageIO (via the TwelveMonkeys JPEG plugin) to decode them:
byte[] imgBytes = image.getImageAsBytes();
BufferedImage bi = ImageIO.read(new ByteArrayInputStream(imgBytes));
I have multiple images with a custom color profile embedded in them and want to convert them to sRGB in order to serve them up to a browser. I have seen code like the following:
BufferedImage image = ImageIO.read(fileIn);
ColorSpace ics = ColorSpace.getInstance(ColorSpace.CS_sRGB);
ColorConvertOp cco = new ColorConvertOp(ics, null);
BufferedImage result = cco.filter(image, null);
ImageIO.write(result, "PNG", fileOut);
where fileIn and fileOut are File objects representing the input and output files, respectively. This works to an extent. The problem is that the resulting image is lighter than the original. If I were to convert the color space in Photoshop, the colors would appear the same. In fact, if I pull up both images in Photoshop, take a screenshot, and sample the colors, they are the same. What is Photoshop doing that the code above isn't, and what can I do to correct the problem?
There are various types of images being converted, including JPEG, PNG, and TIFF. I have tried using TwelveMonkeys to read in JPEG and TIFF images and I still get the same effect, where the image is too light. The conversion process seems worst when applied to an image that didn't have an embedded profile in the first place.
Edit:
I've added some sample images to help explain the problem.
This image is the one with the color profile embedded in it. Viewed in some browsers there won't be a noticeable difference between this one and the next, but viewed in Chrome on Mac OS X and Windows it currently appears darker than it should. This is where my problem originates. I need to convert the image to something that will display correctly in Chrome.
This is an image converted with ImageMagick to the Adobe RGB 1998 color profile, which Chrome appears to be able to display correctly.
This is the image that I converted using the code above and it appears lighter than it should.
(Note that the images above are on imgur so to make them larger, simply remove the "t" from the end of the filename, before the file extension.)
This was my initial solution, which worked, but I didn't like having to use ImageMagick. I have created another answer based on the solution I ended up sticking with.
I gave in and ended up using im4java, which is a wrapper around the ImageMagick command-line tools. When I use the following code to get a BufferedImage, it works really well.
IMOperation op = new IMOperation();
op.addImage(fileIn.getAbsolutePath());
op.profile(colorFileIn.getAbsolutePath());
op.addImage("png:-");
ConvertCmd cmd = new ConvertCmd();
Stream2BufferedImage s2b = new Stream2BufferedImage();
cmd.setOutputConsumer(s2b);
cmd.run(op);
BufferedImage image = s2b.getImage();
I can also use the library to apply a CMYK profile for print when needed. It would be nice if ColorConvertOp did the conversion correctly, but for now, at least, this is my solution. To reach parity with my question, the im4java code that achieves the same effect as the code in the question is:
ConvertCmd cmd = new ConvertCmd();
IMOperation op = new IMOperation();
op.addImage(fileIn.getAbsolutePath());
op.profile(colorFileIn.getAbsolutePath());
op.addImage(fileOut.getAbsolutePath());
cmd.run(op);
where colorFileIn.getAbsolutePath() is the location of the sRGB color profile on the machine. Since im4java uses the command line, it's not as straightforward how to perform operations, but the library is explained in detail here. I originally had issues with ImageMagick not working on my Mac, as explained in the question. I had installed it using brew, but it turns out that on a Mac you have to install it like brew install imagemagick --with-little-cms. After that, ImageMagick worked fine for me.
I found a solution that doesn't require ImageMagick. Basically, Java doesn't respect the embedded color profile when loading the image, so if there is one, it needs to be loaded explicitly. Here is a code snippet of what I did to accomplish this:
private BufferedImage loadBufferedImage(InputStream inputStream) throws IOException, BadElementException {
    byte[] imageBytes = IOUtils.toByteArray(inputStream);
    BufferedImage incorrectImage = ImageIO.read(new ByteArrayInputStream(imageBytes));
    if (incorrectImage.getColorModel() instanceof ComponentColorModel) {
        // Java does not respect the color profile embedded in a component-based image, so if there is a
        // color profile, detected using iText, then create a buffered image with the correct profile.
        Image iTextImage = Image.getInstance(imageBytes);
        com.itextpdf.text.pdf.ICC_Profile iTextProfile = iTextImage.getICCProfile();
        if (iTextProfile == null) {
            // If no profile is present then the image should be processed as is.
            return incorrectImage;
        } else {
            // If a profile is present then create a buffered image with the profile embedded.
            byte[] profileData = iTextProfile.getData();
            ICC_Profile profile = ICC_Profile.getInstance(profileData);
            ICC_ColorSpace ics = new ICC_ColorSpace(profile);
            boolean hasAlpha = incorrectImage.getColorModel().hasAlpha();
            boolean isAlphaPremultiplied = incorrectImage.isAlphaPremultiplied();
            int transparency = incorrectImage.getTransparency();
            int transferType = DataBuffer.TYPE_BYTE;
            ComponentColorModel ccm = new ComponentColorModel(ics, hasAlpha, isAlphaPremultiplied, transparency, transferType);
            return new BufferedImage(ccm, incorrectImage.copyData(null), isAlphaPremultiplied, null);
        }
    } else if (incorrectImage.getColorModel() instanceof IndexColorModel) {
        return incorrectImage;
    } else {
        throw new UnsupportedEncodingException("Unsupported color model type.");
    }
}
This answer does use iText, which is generally used for PDF creation and manipulation, but it happens to process ICC profiles correctly, and since I'm already depending on it for my project, it's a much better choice than ImageMagick.
The code in the question then ends up as follows:
BufferedImage image = loadBufferedImage(new FileInputStream(fileIn));
ColorSpace ics = ColorSpace.getInstance(ColorSpace.CS_sRGB);
ColorConvertOp cco = new ColorConvertOp(ics, null);
BufferedImage result = cco.filter(image, null);
ImageIO.write(result, "PNG", fileOut);
which works great.
I'm trying to use the ZXing library to decode a Data Matrix barcode. Here is my code sample:
BufferedImage bi = img.getBufferedImage();
Hashtable<DecodeHintType, Object> hints = new Hashtable<DecodeHintType, Object>();
hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);
LuminanceSource source = new BufferedImageLuminanceSource(bi);
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
DataMatrixReader dataMatrixReader = new DataMatrixReader();
try {
    Result res = dataMatrixReader.decode(bitmap, hints);
    System.out.println("resultText = " + res.getText());
} catch (Exception e) {
    System.out.println("failed to get resultText");
    e.printStackTrace();
}
I've seen almost the same sample many times across https://stackoverflow.com/ and other sites, but this approach is not working for me in this form.
As a source I'm using images grabbed from an IR camera. Here is an example image:
As you can see, the barcode is almost exactly at the center of the image, as Sean Owen recommended here and here. If I programmatically convert this image to black and white and crop it so that it bounds the barcode with only some white space around it, then ZXing works perfectly with images like this. But the problem is that real barcodes can have small deformations, so my simple algorithm can't crop the image properly. Moreover, the barcode might not be exactly in the center of the image and might have slightly different brightness. I saw threads mentioning OpenCV's ability to locate specific objects in an image, like this one, but they are quite old. Has anything changed since then? And what else should I consider in order to write a 100% reliable Data Matrix detector and decoder for my specific situation?
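As an aside on the cropped case: when the input really is a clean, axis-aligned crop containing only the barcode and its quiet zone, ZXing's standard PURE_BARCODE hint skips the detector entirely; a minimal sketch, reusing the hints map from the code above:
// Only valid for images that contain nothing but the barcode plus a quiet
// zone; it fails on rotated, skewed, or heavily padded inputs.
hints.put(DecodeHintType.PURE_BARCODE, Boolean.TRUE);
Result res = new DataMatrixReader().decode(bitmap, hints);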
For reference, I'm attaching the LuminanceSource and BinaryBitmap images, rendered from the .toString() text output of the corresponding objects:
http://s28.postimg.org/l53sykhx9/Binary_Bitmap.png
and /65z0vlbpl/Luminance_Source.png (at the same domain). They look good and ready for decoding, so what is going wrong with the decoding?
After all, this image and similar ones are recognized and decoded very well by smartphone software, and I just want to achieve the same results.
You need to enable it in the settings, programmatically or manually.
In the class DecodeThread.java (part of the ZXing Android client) you can see the line that enables Data Matrix decoding:
decodeFormats.addAll(DecodeFormatManager.DATA_MATRIX_FORMATS);
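In plain Java, the equivalent is to restrict the decoder to Data Matrix through decode hints; a minimal sketch using the core ZXing API, reusing the BinaryBitmap from the question:
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;

import com.google.zxing.BarcodeFormat;
import com.google.zxing.DecodeHintType;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.Result;

Map<DecodeHintType, Object> hints = new EnumMap<>(DecodeHintType.class);
hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);
// Restrict the search to Data Matrix so the other format detectors are skipped.
hints.put(DecodeHintType.POSSIBLE_FORMATS, EnumSet.of(BarcodeFormat.DATA_MATRIX));
Result res = new MultiFormatReader().decode(bitmap, hints);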
I have a TIFF image stored as a Base64-encoded string in a file. My aim is to create a TIFF file out of it. This is what I am doing:
String base64encodedTiff = IOUtils.toString(new FileInputStream("C:/tiff-attachment.txt"));
byte[] imgBytes = DatatypeConverter.parseBase64Binary(base64encodedTiff);
BufferedImage bufImg = ImageIO.read(new ByteArrayInputStream(imgBytes));
ImageIO.write(bufImg, "tiff", new File("c:/new-darksouls-imageIO-tiff.tiff"));
ImageIO.write() is throwing an IllegalArgumentException because bufImg is null. I don't understand what I am doing wrong here.
On the contrary, if I use IOUtils to write the raw bytes, it works fine:
IOUtils.write(imgBytes, new FileOutputStream("c:/new-darksouls-io-tiff.tiff"));
Please help me understand:
Why ImageIO is throwing the exception
What the right API and approach are for what I am trying to achieve
ImageIO would be useful if, for example, you wanted to convert a PNG to a JPEG. Since you don't need to manipulate the image or convert to another format, don't bother with ImageIO. Just use IOUtils.write() to save the TIFF data verbatim.
ImageIO.read() is returning a null image because it can't read the TIFF file, probably because TIFF isn't one of the standard ImageIO plugin formats (built-in TIFF support was only added in Java 9). The standard supported image formats are listed here:
http://docs.oracle.com/javase/6/docs/api/javax/imageio/package-summary.html
An additional note: the code you posted buffers the entire image in memory. If you're concerned about using memory efficiently, consider using some kind of Base64-decoding input stream (such as Base64InputStream from Apache Commons Codec) to perform the decoding on the fly. That might look like this:
try (FileOutputStream out = new FileOutputStream("c:/new-darksouls-io-tiff.tiff");
     FileInputStream in = new FileInputStream("C:/tiff-attachment.txt");
     Base64InputStream decodedIn = new Base64InputStream(in)) {
    IOUtils.copy(decodedIn, out);
}
I am receiving large CCITT Group 4 compressed TIFF files that need to be written elsewhere as uncompressed TIFF files. I am using the jai_imageio TIFF reader and writer to do this, and it works well as long as the product width * height of the image fits in an int.
Here is the code I am using:
TIFFImageReaderSpi readerSpi= new TIFFImageReaderSpi();
ImageReader imageReader = readerSpi.createReaderInstance();
byte[] data = blobManager.getObjectForIdAndVersion(id, version);
ImageInputStream imageInputStream = ImageIO.createImageInputStream(data);
imageReader.setInput(imageInputStream);
TIFFImageWriterSpi writerSpi = new TIFFImageWriterSpi();
ImageWriter imageWriter = writerSpi.createWriterInstance();
ImageWriteParam imageWriteParam = imageWriter.getDefaultWriteParam();
imageWriteParam.setCompressionMode(ImageWriteParam.MODE_DISABLED);
//bufferFile is created in the constructor
ImageOutputStream imageOutputStream = ImageIO.createImageOutputStream(bufferFile);
imageWriter.setOutput(imageOutputStream);
//Now read the bitmap
BufferedImage bufferedImage = imageReader.read(0);
IIOImage iIOImage = new IIOImage(bufferedImage, null, null);
//and write it
imageWriter.write(null, iIOImage, imageWriteParam);
Unfortunately, the files that I receive are often very large and the BufferedImage cannot be created.
I have been trying to find a way to stream from the ImageReader directly to the ImageWriter but I cannot find out how to do that.
Anybody with a suggestion?
I've had the same issues, and the end result might surprise you:
I ended up calling IrfanView with some command-line options using the Runtime.exec() method. That way I don't have to worry about compression or size; it just works and outputs the correct files into the correct folder for me.
If you are on Linux, you can use ImageMagick or something similar.
You can use TIFF tiles to segment a TIFF into smaller portions ("tiles"). If you control the code creating the big images, JAI allows you to retrieve image content tile-by-tile.
Here is an example of how to create a tiled image with JAI:
ColorModel cm = source.createColorModel();
// SampleModel with the tilesize
SampleModel sm = cm.createCompatibleSampleModel(tileWidth, tileHeight);
TiledImage image = new TiledImage(0, 0, imageWidth, imageHeight, 0, 0, sm, cm);
TIFFEncodeParam tep = new TIFFEncodeParam();
tep.setTileSize(tileWidth, tileHeight); // Set tile size to avoid OOM
tep.setWriteTiled(true);
JAI.create("filestore", image, filepath, "TIFF", tep);
If you can't control the TIFF production, my knowledge of JAI is too limited to be of much help.
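If you can only change the reading side, one way to avoid materializing the whole image is to decode it in horizontal strips with ImageReadParam.setSourceRegion, so only one band is in memory at a time. A minimal sketch (readInStrips is a hypothetical helper, and the matching writer-side stitching is left out):
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;

void readInStrips(ImageReader imageReader, int stripHeight) throws IOException {
    int width = imageReader.getWidth(0);
    int height = imageReader.getHeight(0);
    for (int y = 0; y < height; y += stripHeight) {
        ImageReadParam param = imageReader.getDefaultReadParam();
        // Decode only this horizontal band of image 0.
        param.setSourceRegion(new Rectangle(0, y, width, Math.min(stripHeight, height - y)));
        BufferedImage strip = imageReader.read(0, param);
        // ... hand the strip to the writer here
    }
}
Whether this actually bounds memory depends on the reader and on how the source TIFF is stripped: a single-strip CCITT G4 stream cannot be decoded from an arbitrary offset, so the reader may re-decode from the top for every band.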
Give your Java VM more memory.
If that doesn't work, look at the source code of the TIFF plugin in the JAI source code. You might be able to write your own processor which just decompresses the data structures using a streaming approach (so you'll never have to keep the whole image in memory at any time).
If that also doesn't work, look at JNA, which allows you to call code in a DLL from Java (no C code required; everything is done from pure Java, unlike with Sun's JNI API).