I have some input PDFs, all with fully embedded fonts, and I want to "shrink" them by creating font subsets. I know there is a way to unembed fonts and embed subset fonts, but the problem is that I don't have the source files of the fonts. I only have the fonts embedded in the source PDFs.
Can someone help me troubleshoot this issue?
ENV: java8, itext7.1.5
Here's a thread on a similar question (about embedding, not subsetting, despite the OP's question): How to subset fonts into an existing PDF file. The following statement is relevant:
If you want to subset it, you'd need to parse all the content streams
in the PDF to find out which glyphs are used. That's NOT a trivial
task.
I wouldn't recommend attempting this in iText unless it's really necessary. It would likely end up buggy unless you have a very complete understanding of the PDF specs. It might be worth pursuing other avenues, such as changing the way the PDFs are created, or using something like Acrobat Distiller that can do this for you.
If you do want to do this in iText, I'm afraid you will likely have to use a PdfCanvasProcessor and some custom operator handlers. You would need to find all text fields, determine which font they use, build a new subset font with the applicable glyphs, and replace the fonts with new subset copies. This is how you would create a copy of the complete font to prepare for subsetting (assuming you don't have copies of the font files):
String encoding = PdfEncodings.WINANSI; // or another encoding if needed for more glyph support
PdfFont completeFont = ...; // get complete font from font dictionary
PdfFont subsetFont = PdfFontFactory.createFont(completeFont.getFontProgram(), encoding, true);
subsetFont.setSubset(true);
When you encounter a Font change operator (Tf), you would need to look up that font in the font dictionary and create a new (or lookup an already created) subset copy of that font to prepare for upcoming text fields. Don't forget to keep the font in a stack so you can pop back to the previous font (look for q and Q operators). And don't forget to check parent forms and page groups for the fonts if they don't exist in the current XObject or page resource dictionary.
When you encounter text (a Tj, TJ, ', or " operator), you would need to decode the text using the complete font, then re-encode it to the new subset font's encoding (unless you know for sure that all your source fonts are ASCII-compatible). Add that text's characters to the subset like this:
subsetFont.addSubsetRange(new int[]{character});
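To keep track of which glyphs the subset needs, you can collect the distinct code points of each string after decoding it with the complete font, and feed them to addSubsetRange one at a time. A minimal pure-Java sketch of that collection step (the iText call it feeds is shown in the comment; class and method names are illustrative):

```java
import java.util.Arrays;

public class SubsetChars {

    /**
     * Collects the distinct Unicode code points of a string that was
     * decoded with the complete font. Each value would then be added
     * to the subset, e.g. subsetFont.addSubsetRange(new int[]{cp}).
     */
    public static int[] usedCodePoints(String decoded) {
        return decoded.codePoints().distinct().sorted().toArray();
    }

    public static void main(String[] args) {
        // prints [72, 101, 108, 111]
        System.out.println(Arrays.toString(usedCodePoints("Hello")));
    }
}
```

You would call this from your Tj/TJ operator handlers, once per decoded string, accumulating the code points per font.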
Related
The PDFlib example search and replace text copies pages and pastes rectangles and text.
Instead of loading a font from my hard disk (like it is done in the example with int font = p.load_font(REPLACEMENT_FONT, "unicode", "");) I'd like to use the original font from the source document.
How can I achieve this?
What I tried is this:
When using int font = 0 (which is equivalent to the value of tet.fontid in line 244), PDFlib throws an exception like this:
com.pdflib.PDFlibException: Option 'font' has bad font handle 0
at com.pdflib.pdflib.PDF_fit_textline(Native Method)
at com.pdflib.pdflib.fit_textline(pdflib.java:1086)
What could work (and what I'm also not able to get to run)
Maybe I could read the fonts in the target document. Reading the fonts in the source document is feasible with (int) lib.pcos_get_number(pdiHandle, "length:fonts");. Trying to read the fonts in the target document with (int) lib.pcos_get_number(outputPdfHandle, "length:fonts"); (with outputPdfHandle = p.begin_document(outfilename, "") from example line 560) throws this exception:
com.pdflib.PDFlibException: Handle parameter or option of type 'PDI document' has bad value 1
at com.pdflib.pdflib.PDF_pcos_get_number(Native Method)
at com.pdflib.pdflib.pcos_get_number(pdflib.java:1539)
It is not possible to use a font from a document imported via PDI to create text in an output document. In theory, the idea of accessing the font data from the input document via pCOS functions sounds attractive: one could think it should be possible to reassemble the font data into, for example, a valid TrueType font that could then be loaded via the PDFlib load_font() function.
But that is not possible for the following reasons:
The font data that is stored in a PDF document is not the complete data that is stored in a TrueType font. Important TrueType tables are missing and cannot be reconstructed from the font data in the PDF file.
A font in a PDF file is almost always a subset that contains only the glyphs actually used in the document. So even if it were possible to use the font data from the input document, you could use only the glyphs from the subset to create new text in an output document.
Also, the fontid value provided by TET cannot be used as a font handle when creating new output via PDFlib. The fontid value is the index into the pCOS pseudo-object array fonts[], and it is entirely unrelated to any handles used to create new output via the PDFlib API.
Is there a way to check whether an OTF font file contains glyphs for the small-caps variant?
Is there a way to do it in Java?
Thanks!
There is, just don't use standard-library Java for it. Use either a Java OpenType font analyser, or do the more sensible thing and consult FreeType2 or HarfBuzz for that information.
Really, anything that lets you check OpenType features will do: check whether the font encodes the smcp feature. If it does, it supports proper small caps as per the OpenType spec; if not, it doesn't, and whatever text engine you're using is going to fake it.
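For the pure-Java route, the check boils down to reading the font's table directory, finding the GSUB table, and scanning its FeatureList for the smcp tag. Below is a minimal sketch for a single-font TTF/OTF, with no TTC handling and no validation; a real implementation should prefer a library such as Apache FontBox or sfntly:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SmallCapsCheck {

    /** True if the font's GSUB table lists the given OpenType feature tag (e.g. "smcp"). */
    public static boolean hasGsubFeature(byte[] font, String featureTag) {
        ByteBuffer buf = ByteBuffer.wrap(font);          // sfnt data is big-endian
        buf.position(4);                                 // skip sfntVersion
        int numTables = buf.getShort() & 0xFFFF;
        buf.position(12);                                // first 16-byte table record
        int gsub = -1;
        for (int i = 0; i < numTables; i++) {
            byte[] tag = new byte[4];
            buf.get(tag);
            buf.getInt();                                // checksum (unused)
            int offset = buf.getInt();
            buf.getInt();                                // length (unused)
            if ("GSUB".equals(new String(tag, StandardCharsets.US_ASCII))) {
                gsub = offset;
            }
        }
        if (gsub < 0) {
            return false;                                // no GSUB table: no OpenType layout features
        }
        // GSUB header: version (4 bytes), scriptListOffset (2), featureListOffset (2), ...
        int featureList = gsub + (((font[gsub + 6] & 0xFF) << 8) | (font[gsub + 7] & 0xFF));
        int featureCount = ((font[featureList] & 0xFF) << 8) | (font[featureList + 1] & 0xFF);
        for (int i = 0; i < featureCount; i++) {
            int rec = featureList + 2 + i * 6;           // FeatureRecord: tag (4) + offset (2)
            if (featureTag.equals(new String(font, rec, 4, StandardCharsets.US_ASCII))) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        byte[] font = Files.readAllBytes(Paths.get(args[0]));
        System.out.println("smcp: " + hasGsubFeature(font, "smcp"));
    }
}
```

Note that a font can also fake small caps elsewhere (e.g. c2sc covers capitals-to-small-caps), so checking smcp answers only the "proper lowercase small caps" question.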
I'm using an Ubuntu PC to create PDFs with iText which are partly in Chinese. To read them I use Evince. So far there have hardly been any problems.
On my PC I tried the following three BaseFonts and they all worked:
bf = BaseFont.createFont("MSungStd-Light", "UniCNS-UCS2-H", BaseFont.NOT_EMBEDDED);
bf = BaseFont.createFont("STSong-Light", "UniGB-UCS2-H", BaseFont.NOT_EMBEDDED);
bf = BaseFont.createFont("MSung-Light","UniCNS-UCS2-H", BaseFont.NOT_EMBEDDED);
Unfortunately, the moment the final PDF is opened on Windows with Acrobat Reader, the document can't be displayed correctly any more.
After I googled the fonts to find a solution, I came across a forum where the problem is explained in an understandable way (there, MSung-Light was used): http://community.jaspersoft.com/questions/531457/chinese-font-cannot-be-seen
You are using a built-in Chinese font in PDF. I'm not sure about the
ability of this font to support both English and Chinese, or mixed
language anyway.
The advantage of using an Acrobat Reader built-in font is that it
produces smaller PDF files, because it relies on those fonts being
available on the client machine that displays the PDF, through the
pre-installed Acrobat Asian Font Pack.
However, using the PDF built-in fonts has some disadvantages that were
discovered through testing on different machines, when we investigated
a similar problem related to a built-in Korean font.
What should I do about it?
It's not so important to be able to copy the Chinese letters. Can iText convert a paragraph to an image? Or are there any better solutions?
You're using a CJK font. CJK fonts are never embedded and they require a font pack when opening such a file in Adobe Reader. Normally, Adobe Reader will ask you if you want to install such a font pack automatically. If it doesn't, you can download the appropriate font pack here.
It seems that you want to avoid having an end user install a font pack. That's understandable to some extent. What is really bad is your suggestion to avoid using a font and to draw the glyphs one by one instead. This is possible with iText (and documented in my book), but it comes with a severe warning: don't do this! Your file will be bloated and the print results risk being awful!
An alternative is to use another font, e.g. arialuni.ttf, YaHei, SimHei,... These fonts contain Chinese glyphs and you can embed a subset of these fonts into your PDF (embedding the whole font would be overkill). See for instance the FontTest example.
If you have a font program such as arialuni.ttf, you can use this code to create a BaseFont object:
BaseFont.createFont("c:/windows/fonts/arialuni.ttf", BaseFont.IDENTITY_H, BaseFont.EMBEDDED);
With this font, you can display Chinese characters that will be visible in any viewer on any OS. If you don't have arialuni.ttf, you need to look for another font and use the FontTest example to check whether Chinese is supported (if you don't see any text after "Chinese:", then Chinese isn't supported).
Extra answer in reply to your comment:
Please forget about iText-Asian as that is a jar you need when you want to use CJK fonts. You explicitly say you don't want to use CJK fonts, so you don't need to use iText-Asian.
If you want to embed the font (as opposed to rely on a font pack), you need to pick a font program that knows how to draw Chinese characters. This immediately makes your question regarding "Can you point me to an example that draws Chinese characters?" void. I could point you to such an example, but you'd still need a font program.
Once you have that font program: why wouldn't you use it the correct way? You shouldn't use that font program to draw your glyphs as images, as that would result in a PDF file with a huge file size and bad resolution (poor quality of the glyphs, because you draw each separate character instead of using the font program in the PDF).
Did you look for a font program yet? There was a similar question about Vietnamese fonts a while ago: Can't export Vietnamese characters to PDF using iText. It took me less than a quarter of an hour to Google for a font that could be used. Why don't you spend a quarter of an hour finding a font that supports Chinese?
Extra answer in reply to your extra comment:
When we refer to CJK, we refer to a specific approach in which fonts aren't embedded, but rely on a font pack being installed on the end user's machine, so that Adobe Reader can use that font. You don't want this, so all your questions about using the iText-Asian jar and MSung-Light and so on are irrelevant.
The Chinese character set is huge and many computers ship without any Chinese fonts (especially in the US), so the answer to your question "Isn't there any way to use a built-in arialuni" is "No, you shouldn't count on that!"
What you say about Vietnamese is irrelevant. A font is a font is a font. You have a character code on one side and a glyph on the other side. The glue that connects one with the other is the encoding. For instance: you have the hexadecimal character code B2E2 and the hexadecimal character code CAD4. If the encoding is GBK, the corresponding glyphs are 测 and 试. Note that to represent the very same characters in Unicode, you'd use the code points U+6D4B and U+8BD5. There is very little difference with other systems. For instance: you have the hexadecimal character code 41 (65 in decimal), and if the encoding is Latin-1, the corresponding glyph is A.
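The byte-to-glyph mapping described above is easy to verify in Java itself: the same four bytes yield 测试 when decoded as GBK, and the resulting Unicode code points are U+6D4B and U+8BD5. A small demonstration (assuming the JRE ships the GBK charset, which standard OpenJDK builds do):

```java
import java.nio.charset.Charset;

public class EncodingDemo {
    public static void main(String[] args) {
        // The GBK character codes B2E2 and CAD4...
        byte[] gbkBytes = {(byte) 0xB2, (byte) 0xE2, (byte) 0xCA, (byte) 0xD4};
        String text = new String(gbkBytes, Charset.forName("GBK"));
        System.out.println(text);                                  // 测试
        // ...map to the Unicode code points U+6D4B and U+8BD5
        System.out.printf("U+%04X U+%04X%n", (int) text.charAt(0), (int) text.charAt(1));
    }
}
```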
I have asked you to search for a font that supports Chinese. I have opened Google and I searched for the keywords "Chinese fonts". I found this page: http://www.freechinesefont.com/ and I picked a font that seemed OK to me: http://www.freechinesefont.com/simplified-hxb-mei-xin-download/
Now I use this code snippet:
import java.io.FileOutputStream;
import java.io.IOException;
import com.itextpdf.text.Document;
import com.itextpdf.text.DocumentException;
import com.itextpdf.text.Font;
import com.itextpdf.text.Paragraph;
import com.itextpdf.text.pdf.BaseFont;
import com.itextpdf.text.pdf.PdfWriter;
public class ChineseTest {
/** Path to the resulting PDF file. */
public static final String DEST = "results/test.pdf";
/** Path to the Chinese font. */
public static final String FONT = "resources/hxb-meixinti.ttf";
/**
 * Creates a PDF file: test.pdf
 * @param args no arguments needed
 */
public static void main(String[] args) throws DocumentException, IOException {
new ChineseTest().createPdf(DEST);
}
/**
 * Creates a PDF document.
 * @param filename the path to the new PDF document
 * @throws DocumentException
 * @throws IOException
 */
public void createPdf(String filename) throws DocumentException, IOException {
// step 1
Document document = new Document();
// step 2
PdfWriter.getInstance(document, new FileOutputStream(filename));
// step 3
document.open();
BaseFont bf = BaseFont.createFont(FONT, BaseFont.IDENTITY_H, BaseFont.EMBEDDED);
Font font = new Font(bf,15);
// step 4
document.add(new Paragraph("\u6d4b\u8bd5", font));
// step 5
document.close();
}
}
The result looks like this on Windows:
How is this different from Vietnamese? The word "test" is displayed correctly in Chinese. A subset of the font is embedded, which keeps the file size low. The text is not embedded as an image, which means the quality of the text is excellent.
Extra answer in answer to your extra comment: In your comment, you claim that the example that uses the file hxb-meixinti.ttf requires the installation of a font. That is incorrect. hxb-meixinti.ttf is merely a file that is read by iText and used to embed the definition of specific glyphs (a subset of the font) into a PDF.
When you write "Related to a Font-Program: Java seems to be able to do it without using external software": Java is able to use fonts because Java uses font files, in just the same way as iText uses font files.
For more info, read Supported Fonts in the Java manual. I quote:
Physical fonts need to be installed in locations known to the Java
runtime environment. The JRE looks in two locations: the lib/fonts
directory within the JRE itself, and the normal font location(s)
defined by the host operating system. If fonts with the same name
exist in both locations, the one in the lib/fonts directory is used.
What I have tried explaining (and what you have been ignoring since the start of this thread) is that iText needs access to a physical font. iText can accept a font from a file or as a byte[], but you need to provide something like a TTF, OTF, TTC, or AFM+PFB. This is no different from how Java works.
In your comment you also say that you want Adobe Reader to accept a byte stream instead of reading a PDF from file. This is not possible. Adobe Reader always requires the presence of the PDF file on disk. Even if the PDF file is served by a browser, the bytes of the PDF are stored as a temporary file. This is inherent to your request that the file needs to be viewed in Adobe Reader.
The rest of your comment is unclear. What do you mean by "If everyone would just upload anything he might need a switch causes difficulties"? Are you talking about downloading instead of uploading? Also: I gave you a solution that doesn't require downloading anything extra on the client side, yet you keep on nagging that no one will install anything for Acrobat.
As for your remark "For BS I got a solution recently", I have no idea what you mean by BS.
I know that Java supports TrueType fonts (.ttf) and that .ttc is an extension of the TrueType format, but I can't find information on whether Java also supports a TrueType Collection (.ttc) being explicitly set as the font on a JLabel, for example.
I made an example, where I successfully load a .ttc file in my application with the following code:
InputStream is = getClass().getResourceAsStream("/resources/simsun.ttc");
Font font = Font.createFont(Font.TRUETYPE_FONT, is);
Font fontBase = font.deriveFont(15f);
field.setFont(fontBase);
The code works well; there are no exceptions related to creating, loading, or setting the .ttc file as a font on Swing components.
My question is: can someone confirm that this works well and that all glyphs from the fonts inside the .ttc are used in components, or are there any disadvantages related to this?
Also, is there any difference if the .ttc is loaded from jar on client machine or it has to be installed in system fonts?
I'm using Windows 7.
First of all, the difference between TTC and TTF is that a TTC can (and usually does) contain multiple fonts, while a TTF has only one font defined. The reason to put multiple fonts into one file is to save space by sharing glyphs (or sub-glyphs). For example, in SimSun and NSimSun most of the glyphs are the same, so storing them together saves a lot of space.
Second, Java supports the TTC font format, but by using Font.createFont() you can only get the first font defined in the TTC file. Currently, there is no way to specify the font index. Take a look at sun.font.FontManager.createFont2D(): when it invokes new TrueTypeFont(), the fontIndex is always zero. Shame!
For your question: if all you need is the first font in the TTC file, then everything will be okay. All the glyphs defined for the first font will be available. But if you expect the second or another font defined in that file, then you hit a block. You cannot even get that font's name using this API.
There is no difference between system-loaded fonts and created fonts. However, since there is no good way to specify the font index, you may have to hack into FontManager and come up with some platform-specific code.
I wrote some code in Java using the PDFBox API that splits a PDF document into its individual pages, looks through the pages for a specific string, and then makes a new PDF from the page with the string on it. My problem is that when the new page is saved, I lose my font. I made a quick Word document to test it, and the default font was Calibri, so when I run the program I get an error box that reads: "Cannot extract the embedded font..." It then replaces the font with some other default.
I have seen a lot of example code that shows how to set the font when you are adding text to a PDF, but nothing that sets the font for the PDF itself.
If anyone is familiar with a way to do this, (or can find documentation/examples), I would greatly appreciate it!
Edit: forgot to include some sample code
if (pageContent.indexOf(findThis) >= 0) {
    PDPage pageToRip = pages.get(i);
    // set the font of pageToRip here
    res.importPage(pageToRip); // res is the new document that will be saved
}
I don't know if that helps any, but I figured I'd include it.
Also, this is what the change looks like when the PDF is written in Calibri and split:
Note: This might be a nonissue, it depends on the font used in the files that will need to be processed. I tried some things besides Calibri and it worked out fine.
From How to extract fonts from a PDF:
You actually cannot extract a font from a PDF, not even if the font is fully embedded. There are two reasons why this is not feasible:
• Most fonts are copyrighted, making it illegal to use an extractor.
• When a font is embedded in a PDF, not all of the font data are included. Obviously the font outline data are included, as well as the font width tables. Other information, such as data about ligatures, is irrelevant within the PDF, so those data do not get enclosed in the PDF.
I am not aware of any font extraction tools, but if you come across one, the above reasons should make it clear that these utilities are to be avoided.