I have an unknown file type uploaded. It can be doc, pdf, xls, etc.
My ultimate goal is to:
1) Determine if there are paragraphs of text in the file (as opposed to, say, a bunch of picture captions or text from a chart or table).
2) If (1) is true and there are paragraphs of text, extract a few sample paragraphs from the file.
I know that I can use a program like Apache Tika to extract the file to a String.
However, I would like to also get the format of the extracted text and determine where there are paragraphs of full, written text (as opposed to captions, etc.).
So I also would like a way to analyze the extracted text. Specifically, I would like a library that can identify full, written paragraphs, as opposed to text that was simply taken from things like photo captions, charts, etc.
While Tika is a rather large library, I would be willing to add it if it can perform the tasks that I need.
However, I cannot find anything in Tika that would allow me to analyze the structure of the text in such a way.
Is there something I missed?
Other than Tika, I am aware of some APIs for analyzing text, specifically Comprehend and Textract, but I still couldn't find anything that can ensure the extraction of full, written paragraphs as I require.
I am looking for any suggestion using the libraries I listed above or others. Again, I'd like to avoid things like photo captions and such and only get text that was part of full, written paragraphs.
Is there any library that can help me with this or will I have to code the logic myself (for detecting paragraphs as well as detecting the difference between full paragraphs and text that was extracted from charts and captions)?
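For reference, the kind of extraction I have in mind looks like this with Tika. This is only a minimal sketch; the paragraph heuristic at the end (length plus sentence count) is purely my own guess, not something Tika provides:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    import org.apache.tika.metadata.Metadata;
    import org.apache.tika.parser.AutoDetectParser;
    import org.apache.tika.parser.ParseContext;
    import org.apache.tika.sax.ToXMLContentHandler;

    public class ParagraphSketch {
        public static void main(String[] args) throws Exception {
            // ToXMLContentHandler keeps Tika's XHTML structure, so
            // paragraphs come back as <p> elements, not one flat String.
            ToXMLContentHandler handler = new ToXMLContentHandler();
            AutoDetectParser parser = new AutoDetectParser();
            try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
                parser.parse(in, handler, new Metadata(), new ParseContext());
            }
            // Crude heuristic (my assumption, NOT a Tika feature): a "real"
            // paragraph is long and has several sentences; captions and
            // chart labels tend to be short, single-sentence fragments.
            for (String chunk : handler.toString().split("</p>")) {
                String text = chunk.replaceAll("<[^>]+>", " ").trim();
                int sentences = text.split("[.!?](\\s|$)").length;
                if (text.length() > 200 && sentences >= 3) {
                    System.out.println(text + "\n---");
                }
            }
        }
    }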
Related
I want to extract the data present inside a PDF file and present it in the format of a CSV/Excel sheet. I learned that this can be done using the Tika library in Java, and I did find the solution for extracting the data as simple text, but I want to know how to store it in an Excel sheet.
If someone has done this type of work before, please help me.
The first part (and the hard one) is to parse the original data and interpret it as a table. Apache Tika will give you an xhtml representation (or call your own handler with SAX events), but it usually won't construct a table for you. From a PDF file, I mean, since PDF isn't a tabular format by itself.
So, you'll have to take the Tika-produced paragraphs, split them, and pass the resulting cells to some csv/xls/xlsx writer.
It might work if you have a fairly regular table in your pdf (one line per table row, clean logical cell separation, etc.), but it will amount to parsing plain text, of course.
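A naive split might look like this; the "two or more spaces between cells" separator is purely an assumption about the layout, not something Tika guarantees:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class NaiveTableSplit {
        // Assumes one text line per table row and cells separated by
        // runs of 2+ spaces. That separator is a guess about the layout.
        static List<List<String>> toRows(String tikaPlainText) {
            List<List<String>> rows = new ArrayList<>();
            for (String line : tikaPlainText.split("\\r?\\n")) {
                if (line.trim().isEmpty()) continue;
                rows.add(Arrays.asList(line.trim().split("\\s{2,}")));
            }
            return rows;
        }
    }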
If that doesn't work, you'll have to take a PDF parser (like Apache PDFBox) and try to interpret its output.
The second part (output) is simple. If csv/ssv/tsv is suitable for you -- use your preferred library to produce it (I can recommend Apache commons-csv).
But take into account that MS Excel requires BOM for UTF-8 and UTF-16 csv to understand that file isn't in one-byte encoding (like CP-1252 etc).
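A minimal commons-csv sketch that writes the BOM by hand (the rows parameter is the table produced in the previous step):

    import java.io.IOException;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    import org.apache.commons.csv.CSVFormat;
    import org.apache.commons.csv.CSVPrinter;

    public class CsvForExcel {
        static void write(Path out, List<List<String>> rows) throws IOException {
            try (Writer w = Files.newBufferedWriter(out, StandardCharsets.UTF_8)) {
                // BOM first, so MS Excel detects UTF-8 instead of CP-1252
                w.write('\uFEFF');
                try (CSVPrinter printer = new CSVPrinter(w, CSVFormat.DEFAULT)) {
                    for (List<String> row : rows) {
                        printer.printRecord(row);
                    }
                }
            }
        }
    }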
If you want Excel xls or xlsx format -- just use Apache POI to write it.
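The xlsx counterpart with POI, again just a sketch:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Sheet;
    import org.apache.poi.xssf.usermodel.XSSFWorkbook;

    public class XlsxForExcel {
        static void write(Path out, List<List<String>> rows) throws IOException {
            try (XSSFWorkbook wb = new XSSFWorkbook();
                 OutputStream os = Files.newOutputStream(out)) {
                Sheet sheet = wb.createSheet("Extracted");
                for (int r = 0; r < rows.size(); r++) {
                    Row row = sheet.createRow(r);
                    for (int c = 0; c < rows.get(r).size(); c++) {
                        row.createCell(c).setCellValue(rows.get(r).get(c));
                    }
                }
                // No BOM games needed: xlsx carries its own encoding
                wb.write(os);
            }
        }
    }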
I have a pdf textbook which has math equations like this:
However, if I attempt a simple text extraction, I get something along the lines of:
V(r) = - 3 - -
2R R2
It is not an image; it is text, but I don't know how to preserve the way it looks and get the actual characters into a text file.
The problem you are running into is a frequently encountered one. PDF essentially doesn't care about structure. It has no notion of a column, paragraph, a line of text or even a word, let alone a mathematical formula with lots of special formatting.
PDF - essentially - is only interested in placing things on a page at a specific location. And that's exactly what it does with your formulas as well, it will use the characters and graphics you need for your formulas and put them somewhere on the page. Without any additional knowledge that you could use afterwards to figure out that these characters and graphics even belong to a formula; let alone reconstruct it while doing text extraction.
Two additional points:
1) If you share an example of such a PDF document, we could have a look if there is some useful information in it that could be used to extract this formula in a more competent way; but the chance is close to zero.
2) You would also have to define what a "useful way" from your point of view is. Formulas don't translate well to plain text files, so you probably need something like MathML to store them in.
Is there a way I can edit the text of a PDF document, like find and replace specific text?
I have a PDF document which contains placeholders for text that I need to identify and be replaced or just delete that text.
I am able to edit the PDF at specific coordinates (x, y), but unable to identify and replace text. All the libraries that I saw create PDFs from scratch and offer only small editing functionality.
Is there any way I can do the editing explained above using iText?
Please advise... thank you!
Example: A PDF document contains the following paragraph. In this paragraph, I need to identify DATE: and FROM: as text and replace them with something else.
The oldest classical Greek and Latin writing had little or no spaces between words or other ones, and could be written in boustrophedon (alternating directions). Over time, text direction (left to right) became standardized, and word dividers and terminal punctuation became common.
**DATE:
FROM:
The first way to divide sentences into groups was the original paragraphos, similar to an underscore at the beginning of the new group
Allow me to copy the intro of chapter 6 of my book:
When I wrote the first book about iText, the publisher didn’t like the
subtitle “Creating and Manipulating PDF.” He didn’t like the word
manipulating because of some of its pejorative meanings. If you consult the dictionary on Yahoo! education, you’ll find the
following definitions:
To influence or manage shrewdly or deviously
To tamper with or falsify for personal gain
Obviously, that’s not what the book is about. The publisher suggested
“Creating and Editing PDF” as a better subtitle. I explained that
PDF isn’t a document format well suited for editing. PDF is an end
product. It’s a display format. It’s not a word processing
format.
In a word processing format, the content is distributed over different
pages when you open the document in an application, not earlier. This
has some disadvantages: if you open the same document in different
applications, you can end up with a different page count. The same
text snippet can be on page X when looked at in Microsoft Word, and
on page Y when viewed in Open Office. That’s exactly the kind of
problem you want to avoid by choosing PDF.
In a PDF document, every character or glyph on a PDF page has its
fixed position, regardless of the application that’s used to view the
document. This is an advantage, but it also comes with a disadvantage.
Suppose you want to replace the word “edit” with the word “manipulate”
in a sentence, you’d have to reflow the text. You’d have to reposition
all the characters that follow that word. Maybe you’d even have to
move a portion of the text to the next page. That’s not trivial, if
not impossible.
If you want to “edit” a PDF, it’s advised that you change the original
source of the document and remake the PDF. If the original document
was written using Microsoft Word, change the Word document, and make
the PDF from the new version of the Word document. Don’t expect any
tool to be able to edit a PDF file the same way you’d edit a Word
document.
This being said, the verb “to manipulate” also means
To move, arrange, operate, or control by the hands or by mechanical means, especially in a skillful manner
That’s exactly what you’re going to do in this chapter. Using iText,
you’re going to manipulate the pages of a PDF file in a skillful
manner. You’re going to treat a PDF document as if it were made of
digital paper.
In your question, you say: "All the libraries that I saw created PDF from scratch and small editing functionality."
Well, that's only normal. It's inherent to the document format you've chosen. Your design that involves "placeholders for text that you need to identify and replace or just delete" is seriously flawed. It suffers from a wrong choice of document format. You should have chosen a format that is suited for editing. PDF isn't such a format.
I have a PDF that contains placeholders like <%DATE_OF_BIRTH%>. I want to be able to read in the PDF and change the placeholder values to text using iText.
So: read in the PDF, use maybe a replaceString() method to change the placeholders, then generate the new PDF.
Is this possible?
Thanks.
The use of placeholders in PDF is very, very limited. Theoretically it can be done, and there are some instances where it would be feasible to do what you say, but because PDF doesn't know much about structure, it's hard:
simply extracting words is difficult, so recognising your placeholders in the PDF would already be difficult in many cases;
replacing text in PDF is a nightmare because PDF files generally don't have a concept of words, lines and paragraphs, hence no nice reflow of text, for example.
Like I said, it could theoretically work under special conditions, but it's not a very good solution.
What would be a better approach depends on your use case:
1) For some forms it may be acceptable to have the complete form as a background image or PDF file and then generate your text as an overlay to that background (filling in the blanks, so to speak). As pointed out by Bruno and mlk in the comments, in this case you can also look into using form fields, which can be filled dynamically (see the sketch after this list).
2) For other forms it may be better to have your template in a structured format such as XML or HTML, do the text replacement in that format and then convert it into PDF.
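For option 1) with form fields, a minimal iText 5 sketch. The field name DATE_OF_BIRTH is hypothetical; it has to exist as an AcroForm text field in the template:

    import java.io.FileOutputStream;

    import com.itextpdf.text.pdf.AcroFields;
    import com.itextpdf.text.pdf.PdfReader;
    import com.itextpdf.text.pdf.PdfStamper;

    public class FillForm {
        public static void main(String[] args) throws Exception {
            PdfReader reader = new PdfReader("template.pdf");
            PdfStamper stamper = new PdfStamper(reader,
                    new FileOutputStream("filled.pdf"));
            AcroFields form = stamper.getAcroFields();
            // "DATE_OF_BIRTH" is a hypothetical field name; it must
            // exist as an AcroForm text field in template.pdf.
            form.setField("DATE_OF_BIRTH", "1980-01-01");
            stamper.setFormFlattening(true); // freeze the result as plain text
            stamper.close();
            reader.close();
        }
    }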
A PDF file can be verified by its magic signature: 25 50 44 46 ("%PDF" in ASCII).
However, I want to detect whether a PDF contains text or image (i.e. whether the PDF contains text that can be searched with ctrl+f OR whether it contains scanned documents)
Is there a way to do this?
Well technically, you could parse the PDF document structure and look for elements that contain text. I imagine this would require a big effort to implement.
So you may want to use a premade PDF package to do the parsing for you (PDFBox, BfoPDF or something similar). Still, I think it will require some effort to implement.
The simplest way that I know of would be to use a package that can extract the plain text for you. Apache Tika can do this. Just feed it the document and see if you get something back.
In any case, it will be hard to classify PDFs that contain both images and text.
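To illustrate the Tika approach, a minimal sketch; the 50-character threshold is an arbitrary guess on my part:

    import java.io.File;

    import org.apache.tika.Tika;

    public class HasTextCheck {
        public static void main(String[] args) throws Exception {
            String text = new Tika().parseToString(new File(args[0])).trim();
            // Arbitrary threshold (my own guess): a purely scanned PDF
            // usually yields no text at all, a "real" one yields plenty.
            if (text.length() > 50) {
                System.out.println("Probably contains searchable text");
            } else {
                System.out.println("Probably scanned images (or empty)");
            }
        }
    }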