I am searching for an open-source Java library that lets me render single pages of PDFs as JPG or PNG on the server side.
Unfortunately, it must not use any java.awt.* classes other than
java.awt.datatransfer.DataFlavor
java.awt.datatransfer.MimeType
java.awt.datatransfer.Transferable
If there is any way, a little code snippet would be fantastic.
I believe ICEpdf might have what you are looking for.
I used this open-source project a while back to turn uploaded PDFs into images for use in an online catalog.
import java.awt.image.BufferedImage;
import java.io.FileNotFoundException;
import java.io.IOException;

import org.icepdf.core.exceptions.PDFException;
import org.icepdf.core.exceptions.PDFSecurityException;
import org.icepdf.core.pobjects.Document;
import org.icepdf.core.pobjects.Page;
import org.icepdf.core.util.GraphicsRenderingHints;

public byte[][] convert(byte[] pdf, String format) {
    Document document = new Document();
    try {
        document.setByteArray(pdf, 0, pdf.length, null);
    } catch (PDFException ex) {
        System.out.println("Error parsing PDF document " + ex);
    } catch (PDFSecurityException ex) {
        System.out.println("Error encryption not supported " + ex);
    } catch (FileNotFoundException ex) {
        System.out.println("Error file not found " + ex);
    } catch (IOException ex) {
        System.out.println("Error handling PDF document " + ex);
    }
    byte[][] imageArray = new byte[document.getNumberOfPages()][];
    // save page captures to byte arrays
    float scale = 1.75f;
    float rotation = 0f;
    // paint each page's content to an image and capture the image bytes
    for (int i = 0; i < document.getNumberOfPages(); i++) {
        BufferedImage image = (BufferedImage) document.getPageImage(i,
                GraphicsRenderingHints.SCREEN,
                Page.BOUNDARY_CROPBOX, rotation, scale);
        try {
            // get the picture util object (a Seam component specific to my project)
            PictureUtilLocal pum = (PictureUtilLocal) Component.getInstance("pictureUtil");
            // load image into util
            pum.loadBuffered(image);
            // write image in desired format
            imageArray[i] = pum.imageToByteArray(format, 1f);
            System.out.println("\t capturing page " + i);
        } catch (IOException e) {
            e.printStackTrace();
        }
        image.flush();
    }
    // clean up resources
    document.dispose();
    return imageArray;
}
A word of caution, though: I have had trouble with this library throwing a segfault on OpenJDK; it worked fine on Sun's JDK. I am not sure what it would do on GAE. I can't remember which version had the problem, so just be aware.
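If you don't have that Seam-managed picture util component available, plain javax.imageio can do the page-to-bytes step; a minimal sketch of the idea (the encodePage helper is my own, not part of ICEpdf):

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

// Encodes a rendered page into the requested format ("png", "jpg", ...)
static byte[] encodePage(BufferedImage image, String format) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    if (!ImageIO.write(image, format, baos)) {
        throw new IOException("No ImageIO writer found for format: " + format);
    }
    return baos.toByteArray();
}

Note that this (like the ICEpdf call itself) still relies on java.awt.image.BufferedImage, so the strict "datatransfer classes only" restriction from the question is not actually met.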
You can use the Apache PDFBox API for this purpose; the following code converts a PDF into JPGs, page by page.
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Iterator;
import java.util.List;
import javax.imageio.ImageIO;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;

public void convertPDFToJPG(String src, String FolderPath) {
    try {
        File folder1 = new File(FolderPath + "\\");
        // comparePDF is a local helper class; rmdir() just clears the output folder
        comparePDF cmp = new comparePDF();
        cmp.rmdir(folder1);
        // load the pdf file into the document object
        PDDocument doc = PDDocument.load(new FileInputStream(src));
        // get all pages from the document and store them in a list
        List<PDPage> pages = doc.getDocumentCatalog().getAllPages();
        // create an iterator so it is easy to access each page from the list
        Iterator<PDPage> i = pages.iterator();
        int count = 1; // count variable used to separate each image file
        // convert every page of the pdf document to a unique image file
        System.out.println("Please wait...");
        while (i.hasNext()) {
            PDPage page = i.next();
            BufferedImage bi = page.convertToImage();
            ImageIO.write(bi, "jpg", new File(FolderPath + "\\Page" + count + ".jpg"));
            count++;
        }
        System.out.println("Conversion complete");
    } catch (IOException ie) {
        ie.printStackTrace();
    }
}
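Note that getAllPages() and PDPage.convertToImage() only exist in PDFBox 1.x. If you are on PDFBox 2.x, rendering goes through PDFRenderer instead; a rough equivalent of the loop above, under that assumption:

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.rendering.ImageType;
import org.apache.pdfbox.rendering.PDFRenderer;

public void convertPDFToJPG2x(String src, String folderPath) throws IOException {
    try (PDDocument doc = PDDocument.load(new File(src))) {
        PDFRenderer renderer = new PDFRenderer(doc);
        for (int page = 0; page < doc.getNumberOfPages(); page++) {
            // 150 dpi is an arbitrary choice; raise it for higher quality output
            BufferedImage bi = renderer.renderImageWithDPI(page, 150, ImageType.RGB);
            ImageIO.write(bi, "jpg", new File(folderPath, "Page" + (page + 1) + ".jpg"));
        }
    }
}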
I'm trying to convert PDFs, as represented by the org.apache.pdfbox.pdmodel.PDDocument class, to a multipage TIFF with group 4 compression and 300 dpi using the icafe library (https://github.com/dragon66/icafe/). The sample code works for me at 288 dpi but strangely NOT at 300 dpi; the exported TIFF stays completely white. Does anybody have an idea what the issue is here?
The sample pdf which I use in the example is located here: http://www.bergophil.ch/a.pdf
import java.awt.image.BufferedImage;
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import cafe.image.ImageColorType;
import cafe.image.ImageParam;
import cafe.image.options.TIFFOptions;
import cafe.image.tiff.TIFFTweaker;
import cafe.image.tiff.TiffFieldEnum.Compression;
import cafe.io.FileCacheRandomAccessOutputStream;
import cafe.io.RandomAccessOutputStream;
public class Pdf2TiffConverter {
public static void main(String[] args) {
String pdf = "a.pdf";
PDDocument pddoc = null;
try {
pddoc = PDDocument.load(pdf);
} catch (IOException e) {
}
try {
savePdfAsTiff(pddoc);
} catch (IOException e) {
}
}
private static void savePdfAsTiff(PDDocument pdf) throws IOException {
BufferedImage[] images = new BufferedImage[pdf.getNumberOfPages()];
for (int i = 0; i < images.length; i++) {
PDPage page = (PDPage) pdf.getDocumentCatalog().getAllPages()
.get(i);
BufferedImage image;
try {
// image = page.convertToImage(BufferedImage.TYPE_INT_RGB, 288); //works
image = page.convertToImage(BufferedImage.TYPE_INT_RGB, 300); // does not work
images[i] = image;
} catch (IOException e) {
e.printStackTrace();
}
}
FileOutputStream fos = new FileOutputStream("a.tiff");
RandomAccessOutputStream rout = new FileCacheRandomAccessOutputStream(
fos);
ImageParam.ImageParamBuilder builder = ImageParam.getBuilder();
ImageParam[] param = new ImageParam[1];
TIFFOptions tiffOptions = new TIFFOptions();
tiffOptions.setTiffCompression(Compression.CCITTFAX4);
builder.imageOptions(tiffOptions);
builder.colorType(ImageColorType.BILEVEL);
param[0] = builder.build();
TIFFTweaker.writeMultipageTIFF(rout, param, images);
rout.close();
fos.close();
}
}
Or is there another library to write multi-page TIFFs?
EDIT:
Thanks to dragon66 the bug in icafe is now fixed. In the meantime I experimented with other libraries and also with invoking ghostscript. I think ghostscript is very reliable, since it is a widely used tool; on the other hand, I have to rely on the user of my code having a ghostscript installation. The code looks something like this:
/**
 * Converts a given pdf, as specified by its path, to a tiff using group 4 compression
 *
 * @param pdfFilePath The absolute path of the pdf
 * @param tiffFilePath The absolute path of the tiff to be created
 * @param dpi The resolution of the tiff
 * @throws MyException If the conversion fails
 */
private static void convertPdfToTiffGhostscript(String pdfFilePath, String tiffFilePath, int dpi) throws MyException {
// location of gswin64c.exe
String ghostscriptLoc = context.getGhostscriptLoc();
// enclose src and dest. with quotes to avoid problems if the paths contain whitespaces
pdfFilePath = "\"" + pdfFilePath + "\"";
tiffFilePath = "\"" + tiffFilePath + "\"";
logger.debug("invoking ghostscript to convert {} to {}", pdfFilePath, tiffFilePath);
String cmd = ghostscriptLoc + " -dQUIET -dBATCH -o " + tiffFilePath + " -r" + dpi + " -sDEVICE=tiffg4 " + pdfFilePath;
logger.debug("The following command will be invoked: {}", cmd);
int exitVal = 0;
try {
exitVal = Runtime.getRuntime().exec(cmd).waitFor();
} catch (Exception e) {
logger.error("error while converting to tiff using ghostscript", e);
throw new MyException(ErrorMessages.GHOSTSTSCRIPT_ERROR, e);
}
if (exitVal != 0) {
logger.error("error while converting to tiff using ghostscript, exitval is {}", exitVal);
throw new MyException(ErrorMessages.GHOSTSTSCRIPT_ERROR);
}
}
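For illustration, with dpi set to 300 the assembled command line ends up looking like this (paths are placeholders): gswin64c.exe -dQUIET -dBATCH -o "C:\out\report.tif" -r300 -sDEVICE=tiffg4 "C:\in\report.pdf"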
I found that the TIFF produced by ghostscript differs strongly in quality from the TIFF produced by icafe (the group 4 TIFF from ghostscript looks greyscale-like).
It's been a while since the question was asked, and I finally found time, and a wonderful ordered dither matrix, which allows me to give some details on how "icafe" can be used to get similar or better results than calling an external ghostscript executable. Some new features were added to "icafe" recently, such as better quantization and ordered dither algorithms, which are used in the following example code.
The sample pdf I am going to use here is princeCatalogue. Most of the following code is from the OP, with some changes due to the package name change and more ImageParam control settings.
import java.awt.image.BufferedImage;
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import com.icafe4j.image.ImageColorType;
import com.icafe4j.image.ImageParam;
import com.icafe4j.image.options.TIFFOptions;
import com.icafe4j.image.quant.DitherMethod;
import com.icafe4j.image.quant.DitherMatrix;
import com.icafe4j.image.tiff.TIFFTweaker;
import com.icafe4j.image.tiff.TiffFieldEnum.Compression;
import com.icafe4j.io.FileCacheRandomAccessOutputStream;
import com.icafe4j.io.RandomAccessOutputStream;
public class Pdf2TiffConverter {
public static void main(String[] args) {
String pdf = "princecatalogue.pdf";
PDDocument pddoc = null;
try {
pddoc = PDDocument.load(pdf);
} catch (IOException e) {
}
try {
savePdfAsTiff(pddoc);
} catch (IOException e) {
}
}
private static void savePdfAsTiff(PDDocument pdf) throws IOException {
BufferedImage[] images = new BufferedImage[pdf.getNumberOfPages()];
for (int i = 0; i < images.length; i++) {
PDPage page = (PDPage) pdf.getDocumentCatalog().getAllPages()
.get(i);
BufferedImage image;
try {
// image = page.convertToImage(BufferedImage.TYPE_INT_RGB, 288); //works
image = page.convertToImage(BufferedImage.TYPE_INT_RGB, 300); // does not work
images[i] = image;
} catch (IOException e) {
e.printStackTrace();
}
}
FileOutputStream fos = new FileOutputStream("a.tiff");
RandomAccessOutputStream rout = new FileCacheRandomAccessOutputStream(
fos);
ImageParam.ImageParamBuilder builder = ImageParam.getBuilder();
ImageParam[] param = new ImageParam[1];
TIFFOptions tiffOptions = new TIFFOptions();
tiffOptions.setTiffCompression(Compression.CCITTFAX4);
builder.imageOptions(tiffOptions);
builder.colorType(ImageColorType.BILEVEL).ditherMatrix(DitherMatrix.getBayer8x8Diag()).applyDither(true).ditherMethod(DitherMethod.BAYER);
param[0] = builder.build();
TIFFTweaker.writeMultipageTIFF(rout, param, images);
rout.close();
fos.close();
}
}
For ghostscript, I used the command line directly with the same parameters provided by the OP. The screenshots for the first page of the resulting TIFF images are shown below:
The left-hand side shows the output of "ghostscript" and the right-hand side the output of "icafe". It can be seen that, at least in this case, the output from "icafe" is better than the output from "ghostscript".
Using CCITTFAX4 compression, the file size from "ghostscript" is 2.22M and the file size from "icafe" is 2.08M. Neither is very good, given the fact that dither is used while creating the black-and-white output. In fact, a different compression algorithm will create a much smaller file. For example, using LZW, the same output from "icafe" is only 634K, and with DEFLATE compression the output file size goes down to 582K.
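If you want to try those alternatives with the code above, only the compression option needs to change; a sketch, assuming the LZW and DEFLATE constants are exposed on the same TiffFieldEnum.Compression enum used for CCITTFAX4:

TIFFOptions tiffOptions = new TIFFOptions();
// group 4, as in the example above (bilevel output only):
tiffOptions.setTiffCompression(Compression.CCITTFAX4);
// or one of the smaller-file options mentioned above:
// tiffOptions.setTiffCompression(Compression.LZW);
// tiffOptions.setTiffCompression(Compression.DEFLATE);
builder.imageOptions(tiffOptions);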
Here's some code to save a multipage TIFF, which I use with PDFBox. It requires the TIFFUtil class from PDFBox (it isn't public, so you have to make a copy).
void saveAsMultipageTIFF(ArrayList<BufferedImage> bimTab, String filename, int dpi) throws IOException
{
Iterator<ImageWriter> writers = ImageIO.getImageWritersByFormatName("tiff");
ImageWriter imageWriter = writers.next();
ImageOutputStream ios = ImageIO.createImageOutputStream(new File(filename));
imageWriter.setOutput(ios);
imageWriter.prepareWriteSequence(null);
for (BufferedImage image : bimTab)
{
ImageWriteParam param = imageWriter.getDefaultWriteParam();
IIOMetadata metadata = imageWriter.getDefaultImageMetadata(new ImageTypeSpecifier(image), param);
param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
TIFFUtil.setCompressionType(param, image);
TIFFUtil.updateMetadata(metadata, image, dpi);
imageWriter.writeToSequence(new IIOImage(image, null, metadata), param);
}
imageWriter.endWriteSequence();
imageWriter.dispose();
ios.flush();
ios.close();
}
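A possible way to drive this from PDFBox (this usage sketch assumes PDFBox 2.x and its PDFRenderer; with 1.8 you would fill the list from page.convertToImage(...) instead):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.rendering.ImageType;
import org.apache.pdfbox.rendering.PDFRenderer;

void pdfToMultipageTiff(File pdfFile, String tiffFilename, int dpi) throws IOException {
    try (PDDocument doc = PDDocument.load(pdfFile)) {
        PDFRenderer renderer = new PDFRenderer(doc);
        ArrayList<BufferedImage> pages = new ArrayList<>();
        for (int i = 0; i < doc.getNumberOfPages(); i++) {
            // render each page at the requested resolution
            pages.add(renderer.renderImageWithDPI(i, dpi, ImageType.RGB));
        }
        saveAsMultipageTIFF(pages, tiffFilename, dpi);
    }
}

Keep in mind the memory warning below: this keeps every rendered page in memory before writing.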
I experimented on this for myself some time ago by using this code:
https://www.java.net/node/670205 (I used solution 2)
However...
If you create an array with lots of images, your memory consumption
really goes up. So it would probably be better to render an image, then
add it to the tiff file, then render the next page and lose the
reference of the previous one so that the gc can get the space if needed.
Refer to my github code for an implementation with PDFBox.
Since some of the dependencies used by the other solutions to this problem look unmaintained, here is a solution using the latest version of PDFBox (2.0.16):
ByteArrayOutputStream imageBaos = new ByteArrayOutputStream();
ImageOutputStream output = ImageIO.createImageOutputStream(imageBaos);
ImageWriter writer = ImageIO.getImageWritersByFormatName("TIFF").next();
try (final PDDocument document = PDDocument.load(new File("/tmp/tmp.pdf"))) {
PDFRenderer pdfRenderer = new PDFRenderer(document);
int pageCount = document.getNumberOfPages();
BufferedImage[] images = new BufferedImage[pageCount];
// ByteArrayOutputStream[] baosArray = new ByteArrayOutputStream[pageCount];
writer.setOutput(output);
ImageWriteParam params = writer.getDefaultWriteParam();
params.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
// Compression: None, PackBits, ZLib, Deflate, LZW, JPEG and CCITT
// variants allowed
params.setCompressionType("Deflate");
writer.prepareWriteSequence(null);
for (int page = 0; page < pageCount; page++) {
BufferedImage image = pdfRenderer.renderImageWithDPI(page, DPI, ImageType.RGB);
images[page] = image;
IIOMetadata metadata = writer.getDefaultImageMetadata(new ImageTypeSpecifier(image), params);
writer.writeToSequence(new IIOImage(image, null, metadata), params);
// ImageIO.write(image, "tiff", baosArray[page]);
}
System.out.println("imageBaos size: " + imageBaos.size());
// Finished write to output
writer.endWriteSequence();
document.close();
} catch (IOException e) {
e.printStackTrace();
throw new Exception(e);
} finally {
// avoid memory leaks
writer.dispose();
}
Then you can use imageBaos to write to your local file. But if you want to pass your image to a ByteArrayOutputStream and return it to the previous method like me, a few more steps are needed.
After processing is done, the image bytes are available in the ImageOutputStream output object. We need to position the offset to the beginning of output and then read the bytes back and write them to a new ByteArrayOutputStream; a concise way looks like this:
ByteArrayOutputStream bos = new ByteArrayOutputStream();
output.seek(0); // position the offset at the beginning of the ImageOutputStream
long counter = 0;
while (true) {
    try {
        bos.write(output.readByte());
        counter++;
    } catch (EOFException e) {
        System.out.println("End of Image Stream");
        break;
    } catch (IOException e) {
        System.out.println("Error processing the Image Stream");
        break;
    }
}
return bos;
Or you can just call ImageOutputStream.flush() at the end to get the bytes into imageBaos and return them.
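In code, using the variable names from the snippet above, that shorter variant is roughly:

output.flush(); // pushes whatever the ImageOutputStream has buffered down into imageBaos
byte[] tiffBytes = imageBaos.toByteArray();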
Inspired by Yusaku's answer, I made my own version. This converts a multi-page PDF to a byte array.
I used pdfbox 2.0.16 in combination with imageio-tiff 3.4.2.
// PDF-to-TIFF toolbox method
private byte[] bytesToTIFF(@Nonnull byte[] in) {
    int dpi = 300;
    ImageWriter writer = ImageIO.getImageWritersByFormatName("TIFF").next();
    try (ByteArrayOutputStream imageBaos = new ByteArrayOutputStream(255);
         PDDocument document = PDDocument.load(in)) {
        ImageOutputStream ios = ImageIO.createImageOutputStream(imageBaos);
        writer.setOutput(ios);
        writer.prepareWriteSequence(null);
        PDFRenderer pdfRenderer = new PDFRenderer(document);
        ImageWriteParam params = writer.getDefaultWriteParam();
        for (int page = 0; page < document.getNumberOfPages(); page++) {
            BufferedImage image = pdfRenderer.renderImageWithDPI(page, dpi, ImageType.RGB);
            IIOMetadata metadata = writer.getDefaultImageMetadata(new ImageTypeSpecifier(image), params);
            writer.writeToSequence(new IIOImage(image, null, metadata), params);
        }
        LOG.trace("size found: {}", imageBaos.size());
        writer.endWriteSequence();
        ios.flush(); // make sure everything buffered by the ImageOutputStream reaches imageBaos
        writer.reset();
        return imageBaos.toByteArray();
    } catch (Exception ex) {
        LOG.warn("can't instantiate the bytesToTiff method with: PDF", ex);
        return new byte[0]; // or rethrow; the method must return something on failure
    } finally {
        writer.dispose();
    }
}
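Usage is then just a matter of handing it the PDF bytes, for example (file paths are placeholders):

import java.nio.file.Files;
import java.nio.file.Paths;

byte[] pdfBytes = Files.readAllBytes(Paths.get("/tmp/tmp.pdf"));
byte[] tiffBytes = bytesToTIFF(pdfBytes);
Files.write(Paths.get("/tmp/tmp.tiff"), tiffBytes);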
Hello, I'm creating a JavaFX app with iText. I have an HTML editor to write text in, and I want to create a PDF from it. Everything works, but when I have a really long line that is wrapped in the HTML editor, it isn't wrapped in the PDF; it runs off the page. How can I make the text wrap to the page? Here is my code:
PdfWriter writer = null;
try {
writer = new PdfWriter("doc.pdf");
} catch (FileNotFoundException e) {
e.printStackTrace();
}
//Initialize PDF document
PdfDocument pdf = new PdfDocument(writer);
// Initialize document
Document document = new Document(pdf, PageSize.A4);
List<IElement> list = null;
try {
list = HtmlConverter.convertToElements(editor.getHtmlText());
} catch (IOException e) {
e.printStackTrace();
}
// add elements to document
for (IElement p : list) {
document.add((IBlockElement) p);
}
// close document
document.close();
I also want to set the line spacing for this text.
Thanks for your help.
I don't get any errors for the following code:
public class stack_overflow_0008 extends AbstractSupportTicket{
private static String LONG_PIECE_OF_TEXT =
"Once upon a midnight dreary, while I pondered, weak and weary," +
"Over many a quaint and curious volume of forgotten lore—" +
"While I nodded, nearly napping, suddenly there came a tapping," +
"As of some one gently rapping, rapping at my chamber door." +
"Tis some visitor,” I muttered, “tapping at my chamber door—" +
"Only this and nothing more.";
public static void main(String[] args)
{
PdfWriter writer = null;
try {
writer = new PdfWriter(getOutputFile());
} catch (FileNotFoundException e) {
e.printStackTrace();
}
//Initialize PDF document
PdfDocument pdf = new PdfDocument(writer);
// Initialize document
Document document = new Document(pdf, PageSize.A4);
List<IElement> list = null;
try {
list = HtmlConverter.convertToElements("<p>" + LONG_PIECE_OF_TEXT + "</p>");
} catch (IOException e) {
e.printStackTrace();
}
for (IElement p : list) {
document.add((IBlockElement) p);
}
document.close();
}
}
The document is a single (A4) page PDF with one string neatly wrapped.
I think perhaps the content of your string is to blame?
Could you post the HTML you get from this editor object?
Update:
Using the code from this answer on the HTML shared in a new comment to the question, I get the following result:
As you can see, the content is distributed over two lines. No content "falls off the page."
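Regarding the line-spacing part of the question: the elements returned by HtmlConverter.convertToElements are mostly com.itextpdf.layout.element.Paragraph instances, and leading can be set on those before adding them to the document. A sketch, assuming iText 7 / pdfHTML (the 14pt value is arbitrary, and Paragraph needs to be imported from com.itextpdf.layout.element):

for (IElement p : list) {
    if (p instanceof Paragraph) {
        // fixed leading of 14 points; setMultipliedLeading(1.5f) would scale with the font size instead
        ((Paragraph) p).setFixedLeading(14f);
    }
    document.add((IBlockElement) p);
}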
I am currently modifying the logic for generating a PDF report, using the iText package (itext-2.1.7.jar). The report is generated first, then the table of contents (TOC), and then the TOC is inserted into the report file using PdfStamper and its replacePage method:
PdfStamper stamper = new PdfStamper(new PdfReader(finalFile.getAbsolutePath()),
new FileOutputStream(reportFile));
stamper.replacePage(new PdfReader(tocFile.getAbsolutePath()), 1, 2);
However, I wanted the TOC to also have internal links pointing to the right chapters. Using the replacePage() method unfortunately removes all links and annotations, so I tried another way. I used a method I found to merge the report file and the TOC file. It did the trick, meaning that the links were now preserved, but they don't seem to work: the user can click them, but nothing happens. I've placed internal links in other places in the report file, for testing purposes, and they seem to work, but they don't work when placed on the actual TOC. The TOC is generated separately; here's how I've created the links (getLeadingDots is a local method, not pertinent to the problem):
Chunk anchor = new Chunk(idx + ". " + chapterName + getLeadingDots(chapterName, pageNumber) + " " + pageNumber, MainReport.FONT12);
String anchorLocation = idx+chapterName.toUpperCase().replaceAll("\\s","");
anchor.setLocalGoto(anchorLocation);
line = new Paragraph();
line.add(anchor);
line.setLeading(6);
table.addCell(line);
And this is how I've created anchor destinations on the chapters in the report file:
Chunk target = new Chunk(title.toUpperCase(), font);
String anchorDestination = title.toUpperCase().replace(".", "").replaceAll("\\s","");
target.setLocalDestination(anchorDestination);
Paragraph p = new Paragraph(target);
p.setAlignment(Paragraph.ALIGN_CENTER);
doc.add(p);
Finally, here's the method that supposed to merge report file and the TOC:
public static void mergePDFs(String originalFilePath, String fileToInsertPath, String outputFile, int location) {
PdfReader originalFileReader = null;
try {
originalFileReader = new PdfReader(originalFilePath);
} catch (IOException ex) {
System.out.println("ITextHelper.addPDFToPDF(): can't read original file: " + ex);
}
PdfReader fileToAddReader = null;
try {
fileToAddReader = new PdfReader(fileToInsertPath);
} catch (IOException ex) {
System.out.println("ITextHelper.addPDFToPDF(): can't read fileToInsert: " + ex);
}
if (originalFileReader != null && fileToAddReader != null) {
int numberOfOriginalPages = originalFileReader.getNumberOfPages();
Document document = new Document();
PdfCopy copy = null;
try {
copy = new PdfCopy(document, new FileOutputStream(outputFile));
document.open();
for (int i = 1; i <= numberOfOriginalPages; i++) {
if (i == location) {
for (int j = 1; j <= fileToAddReader.getNumberOfPages(); j++) {
copy.addPage(copy.getImportedPage(fileToAddReader, j));
}
}
copy.addPage(copy.getImportedPage(originalFileReader, i));
}
document.close();
} catch (DocumentException | FileNotFoundException ex) {
m_logger.error("ITextHelper.addPDFToPDF(): can't read output location: " + ex);
} catch (IOException ex) {
m_logger.error(Level.SEVERE, ex);
}
}
}
So, the main question is: what happens to the links after both PDF documents are merged, and how can I make them work? I'm not too experienced with iText, so any help is appreciated.
Thanks for your help.
Is there a way to convert an XFA PDF into a set of images (PNG or JPEG) with Apache's PDFBox?
I use version 1.8.6, which is supposed to have XFA support.
The PDF file to convert is a dynamic PDF form (XFA). Converting a static PDF form does not cause any problem;
I was successful with PDFBox 1.8.6 by calling the PDPage.convertToImage() method.
My attempt with an XFA PDF resulted in this image:
Here is the code I used to test the PDF conversion:
public void convertToImages(File sourceFile, File destinationDir){
if (!destinationDir.exists()) {
destinationDir.mkdir();
System.out.println("Folder Created -> "+ destinationDir.getAbsolutePath());
}
if (sourceFile.exists()) {
System.out.println("Images copied to Folder: "+ destinationDir.getName());
PDDocument document = null;
try {
//The classical way to load the PDF document doesn't work here
//document = PDDocument.load(sourceFile);
File scratch = new File(destinationDir, "scratch");
if(scratch.exists())scratch.delete();
document = PDDocument.loadNonSeq(sourceFile, new RandomAccessFile(scratch, "rw"));
//Doesn't seem to have an effect in my case but I keep it ;-)
document.setAllSecurityToBeRemoved(true);
@SuppressWarnings("unchecked")
List<PDPage> list = document.getDocumentCatalog().getAllPages();
System.out.println("Total files to be converted -> "+ list.size());
String fileName = sourceFile.getName();
int pos = fileName.lastIndexOf('.');
fileName = fileName.substring(0, pos);
int pageNumber = 1;
for (PDPage page : list) {
File outputfile = new File(destinationDir, fileName +"_"+ pageNumber +".png");
try {
BufferedImage image = page.convertToImage();
ImageIO.write(image, "png", outputfile);
pageNumber++;
if(outputfile.exists()){
System.out.println("Image Created -> "+ outputfile.getName());
} else {
System.out.println("Image NOT Created -> "+ outputfile.getName());
}
} catch (Exception e) {
System.out.println("Error while creating image file "+ outputfile.getName());
e.printStackTrace();
}
}
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} finally {
if(document != null){
try {
document.close();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
System.out.println("Converted Images are saved at -> "+ destinationDir.getAbsolutePath());
} else {
System.err.println(sourceFile.getName() +" File does not exists");
}
}
Is there anything special to do in that case?
I tried GIMP, even though this is supposed to run on a server, but it doesn't work with dynamic PDFs either.
I have also tried ImageMagick, but it didn't work at all. Since I would be very surprised if it could solve anything, I gave up further investigation.