This is the way I turned an XML file into Java objects.
I used "xjc" with a valid XML schema and got back some generated *.java files.
I imported them into a separate package in Eclipse.
I am reading the XML file in two ways now.
1) Loading the XML file:
System.out.println("Using FILE approach:");
File f = new File("C:\\test_XML_files\\complex.apx");
JAXBElement felement = (JAXBElement) u.unmarshal(f);
MyObject fmainMyObject = (MyObject) felement.getValue();
2) Using a DOM builder:
System.out.println("Using DOM BUILDER Approach:");
JAXBElement element = (JAXBElement) u.unmarshal(test());
MyObject mainMyObject = (MyObject) element.getValue();
now in method "test()" the code below is included:
public static Node test(){
Document document = parseXmlDom();
return document.getFirstChild();
}
private static Document parseXmlDom() {
Document document = null;
try {
// getting the default implementation of DOM builder
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
DocumentBuilder builder = factory.newDocumentBuilder();
// parsing the XML file
document = builder.parse(new File("C:\\test_XML_files\\MyXML_FILE.apx"));
} catch (Exception e) {
// catching all exceptions
System.out.println();
System.out.println(e.toString());
}
return document;
}
Is this the standard way of turning XML into a Java object?
I tested that I could access the object and everything works fine (so far).
Do you suggest a different approach, or is this one sufficient?
I don't know about a "standard way", but either way looks OK to me. The first looks simpler (less code), so that's the way I'd probably do it, unless there were other factors or requirements.
FWIW, I'd expect that the unmarshal(File) method was implemented to do pretty much what you are doing in your second approach.
However, it is really up to you (and your co-workers) to make judgments about what is "sufficient" for your project.
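For completeness, here is a minimal sketch of how the u unmarshaller used in both snippets would typically be created (the package name "generated" is an assumption; use whatever package xjc put the classes in):
import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.Unmarshaller;
public class JaxbLoad {
    public static void main(String[] args) throws Exception {
        // "generated" is an assumed package name for the xjc output
        JAXBContext ctx = JAXBContext.newInstance("generated");
        Unmarshaller u = ctx.createUnmarshaller();
        // approach 1 from the question: unmarshal straight from the file
        JAXBElement felement = (JAXBElement) u.unmarshal(new File("C:\\test_XML_files\\complex.apx"));
        System.out.println(felement.getValue().getClass());
    }
}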
Related
I have to parse a bunch of XML files in Java that sometimes -- and invalidly -- contain HTML entities such as &mdash;, &gt; and so forth. I understand the correct way of dealing with this is to add suitable entity declarations to the XML file before parsing. However, I can't do that as I have no control over those XML files.
Is there some kind of callback I can override that is invoked whenever the Java XML parser encounters such an entity? I haven't been able to find one in the API.
I'd like to use:
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder parser = dbf.newDocumentBuilder();
Document doc = parser.parse( stream );
I found that I can override resolveEntity in org.xml.sax.helpers.DefaultHandler, but how do I use this with the higher-level API?
Here's a full example:
public class Main {
public static void main( String [] args ) throws Exception {
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder parser = dbf.newDocumentBuilder();
Document doc = parser.parse( new FileInputStream( "test.xml" ));
}
}
with test.xml:
<?xml version="1.0" encoding="UTF-8"?>
<foo>
<bar>Some text&nbsp;&mdash; invalid!</bar>
</foo>
Produces:
[Fatal Error] :3:20: The entity "nbsp" was referenced, but not declared.
Exception in thread "main" org.xml.sax.SAXParseException; lineNumber: 3; columnNumber: 20; The entity "nbsp" was referenced, but not declared.
Update: I have been poking around in the JDK source code with a debugger, and boy, what an amount of spaghetti. I have no idea what the design is there, or whether there is one. Just how many layers of an onion can one layer on top of each other?
The key class seems to be com.sun.org.apache.xerces.internal.impl.XMLEntityManager, but I cannot find any code that either lets me add stuff into it before it gets used, or that attempts to resolve entities without going through that class.
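On the resolveEntity question specifically: the higher-level API does let you install one via DocumentBuilder.setEntityResolver (and DefaultHandler implements EntityResolver, so it can be passed directly). A minimal sketch follows, but note that this callback is only consulted for external entities such as DTD references, so on its own it does not fire for undeclared internal references like &nbsp;:
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder parser = dbf.newDocumentBuilder();
parser.setEntityResolver(new EntityResolver() {
    public InputSource resolveEntity(String publicId, String systemId) {
        // return an empty source so no external resource is fetched
        return new InputSource(new StringReader(""));
    }
});
Document doc = parser.parse(stream);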
I would use a library like Jsoup for this purpose. I tested the code below and it works. I don't know if this helps. It can be downloaded here: http://jsoup.org/download
public static void main(String args[]){
String html = "<?xml version=\"1.0\" encoding=\"UTF-8\"?><foo>" +
"<bar>Some text — invalid!</bar></foo>";
Document doc = Jsoup.parse(html, "", Parser.xmlParser());
for (Element e : doc.select("bar")) {
System.out.println(e);
}
}
Result:
<bar>
Some text — invalid!
</bar>
Loading from a file can be found here:
http://jsoup.org/cookbook/input/load-document-from-file
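If you'd rather skip the link, a minimal sketch of the file-based variant, simply reading the file into a String so the same Jsoup.parse overload shown above can be reused (path and charset are assumptions):
String xml = new String(Files.readAllBytes(Paths.get("test.xml")), StandardCharsets.UTF_8);
Document doc = Jsoup.parse(xml, "", Parser.xmlParser());   // org.jsoup.nodes.Document
System.out.println(doc.select("bar").text());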
Issue - 1: I have to parse a bunch of XML files in Java that sometimes -- and
invalidly -- contain HTML entities such as &mdash;
XML has only five predefined entities. The &mdash; is not among them; it works only in plain HTML or in legacy JSP. So SAX will not help. It can be done using StAX, which has a higher-level iterator-based API. (Collected from this link.)
Issue - 2: I found that I can override resolveEntity in
org.xml.sax.helpers.DefaultHandler, but how do I use this with the
higher-level API?
The Streaming API for XML (StAX) is an API for reading and writing XML documents.
StAX is a pull-parsing model: the application takes control over parsing by pulling (taking) events from the parser.
The core StAX API falls into two categories:
Cursor-based API: the lower-level API, which allows the application to process XML as a stream of tokens, aka events.
Iterator-based API: the higher-level API, which allows the application to process XML as a series of event objects, each of which communicates a piece of the XML structure to the application.
The StAX API supports the notion of not replacing character entity references, by way of the IS_REPLACING_ENTITY_REFERENCES property:
Requires the parser to replace internal entity references with their
replacement text and report them as characters
This can be set on an XMLInputFactory, which is in turn used to construct an XMLEventReader or XMLStreamReader.
However, the API is careful to say that this property is only intended to force the implementation to perform the replacement, rather than forcing it to not replace them.
You may try it; hopefully it will solve your issue. For your case:
Main.java
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.events.EntityReference;
import javax.xml.stream.events.XMLEvent;
public class Main {
public static void main(String[] args) {
XMLInputFactory inputFactory = XMLInputFactory.newInstance();
inputFactory.setProperty(
XMLInputFactory.IS_REPLACING_ENTITY_REFERENCES, false);
XMLEventReader reader;
try {
reader = inputFactory
.createXMLEventReader(new FileInputStream("F://test.xml"));
while (reader.hasNext()) {
XMLEvent event = reader.nextEvent();
if (event.isEntityReference()) {
EntityReference ref = (EntityReference) event;
System.out.println("Entity Reference: " + ref.getName());
}
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (XMLStreamException e) {
e.printStackTrace();
}
}
}
test.xml:
<?xml version="1.0" encoding="UTF-8"?>
<foo>
<bar>Some text&nbsp;&mdash; invalid!</bar>
</foo>
Output:
Entity Reference: nbsp
Entity Reference: mdash
Credit goes to @skaffman.
Related Link:
http://www.journaldev.com/1191/how-to-read-xml-file-in-java-using-java-stax-api
http://www.journaldev.com/1226/java-stax-cursor-based-api-read-xml-example
http://www.vogella.com/tutorials/JavaXML/article.html
Is there a Java XML API that can parse a document without resolving character entities?
UPDATE:
Issue - 3: Is there a way to use StAX to "filter" the entities (replacing them
with something else, for example) and still produce a Document at the
end of the process?
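To answer issue 3 directly before going into the writer details: a minimal sketch (the replacement map is an assumption; extend it with whatever entities you expect) that pulls events with an XMLEventReader configured as above, swaps each EntityReference for plain characters through an XMLEventWriter, and finally parses the rewritten string into a DOM Document:
import java.io.FileInputStream;
import java.io.StringReader;
import java.io.StringWriter;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.stream.*;
import javax.xml.stream.events.EntityReference;
import javax.xml.stream.events.XMLEvent;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
public class EntityFilter {
    public static void main(String[] args) throws Exception {
        XMLInputFactory inputFactory = XMLInputFactory.newInstance();
        inputFactory.setProperty(XMLInputFactory.IS_REPLACING_ENTITY_REFERENCES, false);
        XMLEventFactory eventFactory = XMLEventFactory.newInstance();
        // replacement texts are assumptions - add whatever entities you expect
        Map<String, String> replacements = new HashMap<String, String>();
        replacements.put("nbsp", "\u00A0");
        replacements.put("mdash", "\u2014");
        StringWriter buffer = new StringWriter();
        XMLEventReader reader = inputFactory.createXMLEventReader(new FileInputStream("F://test.xml"));
        XMLEventWriter writer = XMLOutputFactory.newInstance().createXMLEventWriter(buffer);
        while (reader.hasNext()) {
            XMLEvent event = reader.nextEvent();
            if (event.isEntityReference()) {
                String name = ((EntityReference) event).getName();
                String text = replacements.containsKey(name) ? replacements.get(name) : "";
                writer.add(eventFactory.createCharacters(text));   // replace the entity
            } else {
                writer.add(event);                                 // copy everything else as-is
            }
        }
        writer.close();
        reader.close();
        // build a normal DOM Document from the filtered XML
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(buffer.toString())));
        System.out.println(doc.getDocumentElement().getTextContent());
    }
}
The rest of this answer looks at the writer side (XMLStreamWriter) in more detail, which is useful when you want to build the output document yourself.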
To create a new document using the StAX API, you create an XMLStreamWriter, which provides methods to produce XML opening and closing tags, attributes and character content.
Five XMLStreamWriter methods do most of the work (a short sketch follows the list):
xmlsw.writeStartDocument() - initialises an empty document to which elements can be added
xmlsw.writeStartElement(String s) - creates a new element named s
xmlsw.writeAttribute(String name, String value) - adds the attribute name with the corresponding value to the last element produced by a call to writeStartElement. It is possible to add attributes as long as no call to writeStartElement, writeCharacters or writeEndElement has been made.
xmlsw.writeEndElement() - closes the last started element
xmlsw.writeCharacters(String s) - creates a new text node with content s as content of the last started element.
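A tiny sketch of those five calls in sequence, writing to System.out (the element and attribute names are arbitrary), before the fuller StAXExpand example below:
XMLStreamWriter xmlsw = XMLOutputFactory.newInstance().createXMLStreamWriter(System.out);
xmlsw.writeStartDocument();            // <?xml version="1.0" ?>
xmlsw.writeStartElement("foo");        // <foo
xmlsw.writeAttribute("id", "1");       //      id="1">
xmlsw.writeCharacters("Some text");    // Some text
xmlsw.writeEndElement();               // </foo>
xmlsw.writeEndDocument();
xmlsw.flush();
xmlsw.close();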
A fuller sample example:
StAXExpand.java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamWriter;
import java.util.Arrays;
public class StAXExpand {
static XMLStreamWriter xmlsw = null;
public static void main(String[] argv) {
try {
xmlsw = XMLOutputFactory.newInstance()
.createXMLStreamWriter(System.out);
CompactTokenizer tok = new CompactTokenizer(
new FileReader(argv[0]));
String rootName = "dummyRoot";
// ignore everything preceding the word before the first "["
while(!tok.nextToken().equals("[")){
rootName=tok.getToken();
}
// start creating new document
xmlsw.writeStartDocument();
ignorableSpacing(0);
xmlsw.writeStartElement(rootName);
expand(tok,3);
ignorableSpacing(0);
xmlsw.writeEndDocument();
xmlsw.flush();
xmlsw.close();
} catch (XMLStreamException e){
System.out.println(e.getMessage());
} catch (IOException ex) {
System.out.println("IOException"+ex);
ex.printStackTrace();
}
}
public static void expand(CompactTokenizer tok, int indent)
throws IOException,XMLStreamException {
tok.skip("[");
while(tok.getToken().equals("#")) {// add attributes
String attName = tok.nextToken();
tok.nextToken();
xmlsw.writeAttribute(attName,tok.skip("["));
tok.nextToken();
tok.skip("]");
}
boolean lastWasElement=true; // for controlling the output of newlines
while(!tok.getToken().equals("]")){ // process content
String s = tok.getToken().trim();
tok.nextToken();
if(tok.getToken().equals("[")){
if(lastWasElement)ignorableSpacing(indent);
xmlsw.writeStartElement(s);
expand(tok,indent+3);
lastWasElement=true;
} else {
xmlsw.writeCharacters(s);
lastWasElement=false;
}
}
tok.skip("]");
if(lastWasElement)ignorableSpacing(indent-3);
xmlsw.writeEndElement();
}
private static char[] blanks = "\n".toCharArray();
private static void ignorableSpacing(int nb)
throws XMLStreamException {
if(nb>blanks.length){// extend the length of space array
blanks = new char[nb+1];
blanks[0]='\n';
Arrays.fill(blanks,1,blanks.length,' ');
}
xmlsw.writeCharacters(blanks, 0, nb+1);
}
}
CompactTokenizer.java
import java.io.Reader;
import java.io.IOException;
import java.io.StreamTokenizer;
public class CompactTokenizer {
private StreamTokenizer st;
CompactTokenizer(Reader r){
st = new StreamTokenizer(r);
st.resetSyntax(); // remove parsing of numbers...
st.wordChars('\u0000','\u00FF'); // everything is part of a word
// except the following...
st.ordinaryChar('\n');
st.ordinaryChar('[');
st.ordinaryChar(']');
st.ordinaryChar('#');
}
public String nextToken() throws IOException{
st.nextToken();
while(st.ttype=='\n'||
(st.ttype==StreamTokenizer.TT_WORD &&
st.sval.trim().length()==0))
st.nextToken();
return getToken();
}
public String getToken(){
return (st.ttype == StreamTokenizer.TT_WORD) ? st.sval : (""+(char)st.ttype);
}
public String skip(String sym) throws IOException {
if(getToken().equals(sym))
return nextToken();
else
throw new IllegalArgumentException("skip: "+sym+" expected but"+
sym +" found ");
}
}
For more, you can follow these tutorials:
https://docs.oracle.com/javase/tutorial/jaxp/stax/example.html
http://www.ibm.com/developerworks/library/x-tipstx2/index.html
http://www.iro.umontreal.ca/~lapalme/ForestInsteadOfTheTrees/HTML/ch09s03.html
http://staf.sourceforge.net/current/STAXDoc.pdf
Another approach, since you're not using a rigid OXM approach anyway:
You might want to try a less rigid parser such as Jsoup.
This will stop the immediate problems with invalid XML and the like, but it just moves the problem into your code.
Just to throw in a different approach to a solution:
You might wrap your input stream in a stream implementation that replaces the entities with something legal.
While this is certainly a hack, it should be a quick and easy solution (or rather: workaround).
Not as elegant and clean as an XML-framework-internal solution, though.
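A quick sketch of that workaround, reading everything into a String and swapping known entities for numeric character references before the parser ever sees them (the entity list is an assumption; extend it as needed):
String raw = new String(Files.readAllBytes(Paths.get("test.xml")), StandardCharsets.UTF_8);
// numeric character references are always legal, the named HTML entities are not
raw = raw.replace("&nbsp;", "&#160;").replace("&mdash;", "&#8212;");
DocumentBuilder parser = DocumentBuilderFactory.newInstance().newDocumentBuilder();
Document doc = parser.parse(new InputSource(new StringReader(raw)));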
I did something similar yesterday; I needed to add values from an unzipped XML stream to a database.
// imports - I'm not sure if all of them are necessary :)
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.xpath.*;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
// I didn't check this code just now because I'm at work; it should work, but maybe
// you will need to make little changes
InputSource is = new InputSource(new FileInputStream("test.xml"));
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
Document doc = db.parse(is);
XPathFactory xpf = XPathFactory.newInstance();
XPath xpath = xpf.newXPath();
String words= xpath.evaluate("/foo/bar", doc.getDocumentElement());
ParsingHexToChar.parseToChar(words);
// library used: commons-lang3.jar
// method to parse
public static String parseToChar( String words){
String decode= org.apache.commons.lang3.StringEscapeUtils.unescapeHtml4(words);
return decode;
}
Try this, using the org.apache.commons package:
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder parser = dbf.newDocumentBuilder();
InputStream in = new FileInputStream(xmlfile);
String unescapeHtml4 = IOUtils.toString(in);
CharSequenceTranslator obj = new AggregateTranslator(new LookupTranslator(EntityArrays.ISO8859_1_UNESCAPE()),
new LookupTranslator(EntityArrays.HTML40_EXTENDED_UNESCAPE())
);
unescapeHtml4 = obj.translate(unescapeHtml4);
StringReader readerInput= new StringReader(unescapeHtml4);
InputSource is = new InputSource(readerInput);
Document doc = parser.parse(is);
I'm working on an application that exports and imports data from / to a DB. The format of the data extract is XML and I'm using JAXB for the serialization / (un)marshalling. I want some elements to be marshalled as CDATA elements and am using this solution, which sets OutputKeys.CDATA_SECTION_ELEMENTS on the Transformer properties.
So far this was working quite well, but now I have come to a field in the DB that itself contains an XML string, which I would also like to place inside a CDATA element. For some reason the Transformer is now adding unnecessary carriage return characters (\r) at each line end, so that every line of the output ends in \r\r\n.
This is my code:
private static final String IDENT_LENGTH = "3";
private static final String CDATA_XML_ELEMENTS = "text definition note expression mandatoryExpression optionalExpression settingsXml";
public static void marshall(final Object rootObject, final Schema schema, final Writer writer) throws Exception {
final JAXBContext ctx = JAXBContext.newInstance(rootObject.getClass());
final Document document = createDocument();
final Marshaller marshaller = ctx.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
marshaller.setSchema(schema);
marshaller.marshal(rootObject, document);
createTransformer().transform(new DOMSource(document), new StreamResult(writer));
}
private static Document createDocument() throws ParserConfigurationException {
// the DocumentBuilderFactory is actually held in a singleton
final DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();
return builderFactory.newDocumentBuilder().newDocument();
}
private static Transformer createTransformer() throws TransformerConfigurationException, TransformerFactoryConfigurationError {
// the TransformerFactory is actually held in a singleton
final TransformerFactory transformerFactory = TransformerFactory.newInstance();
final Transformer transformer = transformerFactory.newTransformer();
transformer.setOutputProperty(OutputKeys.INDENT, "yes");
transformer.setOutputProperty(OutputKeys.STANDALONE, "yes");
transformer.setOutputProperty(OutputKeys.CDATA_SECTION_ELEMENTS, CDATA_XML_ELEMENTS);
transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", IDENT_LENGTH);
return transformer;
}
I'm passing a FileWriter to the marshall method.
My annotated model class looks like this:
@XmlType
@XmlRootElement
public class DashboardSettings {
@XmlElement
private String settingsXml;
public String getSettingsXml() {
return settingsXml;
}
public void setSettingsXml(final String settingsXml) {
this.settingsXml = settingsXml;
}
}
NOTE:
The XML string coming from the DB has Windows-style line endings, i.e. \r\n. I have the feeling that JAXB currently expects Linux-style input (i.e. only \n) and is therefore adding a \r character because I'm running on a Windows machine. So the question is: what's the best way to solve this? Is there any parameter I can pass to control the line-ending characters when marshalling? Or should I convert the line endings to Linux style prior to marshalling? How will my program behave on different platforms (Windows, Linux, Mac OS)?
I don't necessarily need a platform-independent solution; it's OK if the output is in Windows, Linux or whatever style. What I want to avoid is the combination \r\r\n described above.
I realise this question is pretty old, but I ran into a similar problem, so maybe an answer can help someone else.
It seems to be an issue with CDATA sections. In my case, I was using the createCDATASection method to create them. When the code was running on a Windows machine, an additional CR was added, as in your example.
I've tried a bunch of things to solve this "cleanly", to no avail.
In my project, the XML document was then exported to a string to POST to a Linux server. So once the string was generated, I just removed the CR characters, leaving only the LF:
myXmlString = myXmlString.replaceAll("\\r", "");
It might not be an appropriate solution for the specific question, but once again, it may help you (or someone else) find one.
Note: I'm stuck with Java 7 for this specific project, so it may have been fixed in a more recent version.
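Along the same lines, a minimal sketch of normalising the line endings of the DB string before marshalling, rather than post-processing the output (the variable names are placeholders):
// settingsXmlFromDb stands in for the string read from the database
String normalised = settingsXmlFromDb.replace("\r\n", "\n");
dashboardSettings.setSettingsXml(normalised);   // JAXB then only sees \n line endings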
I am a newbie with XML and XSL, working with legacy platforms...
I am looking for a solution to create XML from XPath expressions. I happened to see the post How to Generate an XML File from a set of XPath Expressions?,
which helped me a lot.
Similar to the request discussed in that post's comments, I am trying to pass the whole XSLT as a string and receive the result back as a string, using Saxon. Receiving the result as a string is no issue. But when passing the XSL as a string, it complains about document(), which is part of <xsl:variable name="vStylesheet" select="document('')"/>. The error is "SXXP0003: Error reported by XML parser: Content is not allowed in prolog."
My basic requirement is that I should be able to pass the XSL (the whole file or the "vPop" portion) as a string and receive the result in another string, without involving any files. That way I can improve performance and make it generic, so that anyone in our shop who does not know how to deal with XML can still generate one...
My java code looks like..
public static String simpleTransform(final String xsltStr) {
String strResult = "";
TransformerFactory tFactory = TransformerFactory.newInstance();
String tempXMLStr = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<OpnXMLTAG>Dummy</OpnXMLTAG>";
try {
StringWriter writer = new StringWriter();
StreamResult result = new StreamResult(writer);
StreamSource XSLSource = new StreamSource(new StringReader(xsltStr));
Transformer transformer = tFactory.newTransformer(XSLSource );
transformer.transform(new StreamSource(new StringReader(tempXMLStr)), result);
strResult = writer.toString();
} catch (Exception e) {
e.printStackTrace();
}
return strResult;
}
And the XSLT string I am passing is the same one from the earlier post.
When you call document(''), it treats '' as a relative URI reference to be resolved against the base URI of the stylesheet. This won't work if the base URI of the stylesheet is unknown, which is the case when you supply it as a StreamSource wrapping a StringReader with no systemId.
In the case of Saxon, document('') actually needs to re-read the stylesheet. It doesn't keep the source file around at run-time just in case it's needed. So you'll need to (a) supply a URI as the systemId property on the StreamSource (any URI will do, it won't actually be read), and (b) supply a URIResolver to resolve the call on document('') and supply the original string.
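A sketch of those two changes applied to the simpleTransform method from the question (the systemId value is arbitrary; it only has to be something the URIResolver you supply recognises):
// additional imports needed: javax.xml.transform.Source, javax.xml.transform.URIResolver
TransformerFactory tFactory = TransformerFactory.newInstance();
StreamSource xslSource = new StreamSource(new StringReader(xsltStr));
xslSource.setSystemId("http://example.org/inline.xsl");      // (a) any URI will do
tFactory.setURIResolver(new URIResolver() {                  // (b) serve document('') from the original string
    public Source resolve(String href, String base) {
        return new StreamSource(new StringReader(xsltStr));
    }
});
Transformer transformer = tFactory.newTransformer(xslSource);
transformer.transform(new StreamSource(new StringReader(tempXMLStr)), result);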
I have a few Word templates, and my requirement is to replace some of the words/placeholders in the document based on user input, using Java. I tried a lot of libraries, including 2-3 versions of docx4j, but nothing worked well; they all just didn't do anything!
I know this question has been asked before, but I have tried all the options I know. So, with which Java library can I "really" replace/edit these templates? My preference goes to the "easy to use / few lines of code" type of library.
I am using Java 8 and my MS Word templates are in MS Word 2007.
Update
This code was written using the code sample provided by SO member Joop Eggen:
public Main() throws URISyntaxException, IOException, ParserConfigurationException, SAXException
{
URI docxUri = new URI("C:/Users/Yohan/Desktop/yohan.docx");
Map<String, String> zipProperties = new HashMap<>();
zipProperties.put("encoding", "UTF-8");
FileSystem zipFS = FileSystems.newFileSystem(docxUri, zipProperties);
Path documentXmlPath = zipFS.getPath("/word/document.xml");
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setNamespaceAware(true);
DocumentBuilder builder = factory.newDocumentBuilder();
Document doc = builder.parse(Files.newInputStream(documentXmlPath));
byte[] content = Files.readAllBytes(documentXmlPath);
String xml = new String(content, StandardCharsets.UTF_8);
//xml = xml.replace("#DATE#", "2014-09-24");
xml = xml.replace("#NAME#", StringEscapeUtils.escapeXml("Sniper"));
content = xml.getBytes(StandardCharsets.UTF_8);
Files.write(documentXmlPath, content);
}
However, this returns the error below:
java.nio.file.ProviderNotFoundException: Provider "C" not found
at java.nio.file.FileSystems.newFileSystem(FileSystems.java:341)
at java.nio.file.FileSystems.newFileSystem(FileSystems.java:276)
For docx (a zip with XML and other files) one may use a Java zip file system together with XML or text processing.
URI docxUri = ... // "jar:file:/C:/... .docx"
Map<String, String> zipProperties = new HashMap<>();
zipProperties.put("encoding", "UTF-8");
try (FileSystem zipFS = FileSystems.newFileSystem(docxUri, zipProperties)) {
Path documentXmlPath = zipFS.getPath("/word/document.xml");
When using XML:
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setNamespaceAware(true);
DocumentBuilder builder = factory.newDocumentBuilder();
Document doc = builder.parse(Files.newInputStream(documentXmlPath));
//Element root = doc.getDocumentElement();
You can then use XPath to find the places, and write the XML back again.
It might even be that you do not need XML but could simply replace placeholders:
byte[] content = Files.readAllBytes(documentXmlPath);
String xml = new String(content, StandardCharsets.UTF_8);
xml = xml.replace("#DATE#", "2014-09-24");
xml = xml.replace("#NAME#", StringEscapeUtils.escapeXml("Sniper")));
...
content = xml.getBytes(StandardCharsets.UTF_8);
Files.delete(documentXmlPath);
Files.write(documentXmlPath, content);
For fast development, rename a copy of the .docx to a name with the .zip file extension, and inspect the files.
Files.write should already apply StandardOpenOption.TRUNCATE_EXISTING, but I have added Files.delete as some error occurred. See comments.
Try Apache POI. POI can work with both doc and docx, but docx is better documented and therefore better supported.
UPD: You can use XDocReport, which uses POI. I also recommend using xlsx for templates because it is more suitable and better documented.
I spent a few days on this issue, until I found that what makes the difference is the try-with-resources on the FileSystem instance, which appears in Joop Eggen's snippet but not in the question's snippet:
try (FileSystem zipFS = FileSystems.newFileSystem(docxUri, zipProperties))
Without such a try-with-resources block, the FileSystem resource is not closed (as explained in the Java tutorial), and the Word document is not modified.
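Putting the two fixes together against the question's snippet (the jar: URI form from Joop Eggen's comment plus the try-with-resources block), a sketch:
// the zip file system provider only handles jar: URIs, hence the
// ProviderNotFoundException for a plain C:/ path
URI docxUri = URI.create("jar:file:/C:/Users/Yohan/Desktop/yohan.docx");
Map<String, String> zipProperties = new HashMap<>();
zipProperties.put("encoding", "UTF-8");
try (FileSystem zipFS = FileSystems.newFileSystem(docxUri, zipProperties)) {
    Path documentXmlPath = zipFS.getPath("/word/document.xml");
    String xml = new String(Files.readAllBytes(documentXmlPath), StandardCharsets.UTF_8);
    xml = xml.replace("#NAME#", "Sniper");
    Files.write(documentXmlPath, xml.getBytes(StandardCharsets.UTF_8));
} // the modified document.xml is only written back into the docx when the FileSystem is closed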
Stepping back a bit, there are about 4 different approaches for editing words/placeholders:
MERGEFIELD or DOCPROPERTY fields (if you are having problems with this in docx4j, then you have probably not set up your input docx correctly)
content control databinding
variable replacement on the document surface (either at the DOM/SAX level, or using a library)
do stuff as XHTML, then import that
Before choosing one, you should decide whether you also need to be able to handle:
repeating data (eg adding table rows)
conditional content (eg entire paragraphs which will either be present or absent)
adding images
If you need these, then MERGEFIELD or DOCPROPERTY fields are probably out (though you can also use IF fields, if you can find a library which supports them). And adding images makes DOM/SAX manipulation, as advocated in one of the other answers, messier and more error-prone.
The other things to consider are:
your authors: how technical are they? What does that imply for the authoring UI?
the "user input" you mention for variable replacement, is this given, or is obtaining it part of the problem you are solving?
Please try this to edit or replace words in the document:
public class UpdateDocument {
public static void main(String[] args) throws IOException {
UpdateDocument obj = new UpdateDocument();
obj.updateDocument(
"c:\\test\\template.docx",
"c:\\test\\output.docx",
"Piyush");
}
private void updateDocument(String input, String output, String name)
throws IOException {
try (XWPFDocument doc = new XWPFDocument(
Files.newInputStream(Paths.get(input)))
) {
List<XWPFParagraph> xwpfParagraphList = doc.getParagraphs();
//Iterate over paragraph list and check for the replaceable text in each paragraph
for (XWPFParagraph xwpfParagraph : xwpfParagraphList) {
for (XWPFRun xwpfRun : xwpfParagraph.getRuns()) {
String docText = xwpfRun.getText(0);
if (docText == null) {
continue; // skip runs without text to avoid a NullPointerException
}
//replacement and setting position
docText = docText.replace("${name}", name);
xwpfRun.setText(docText, 0);
}
}
// save the docs
try (FileOutputStream out = new FileOutputStream(output)) {
doc.write(out);
}
}
}
}
I have this function:
private Node getDOM(String str) throws SearchEngineException {
DOMResult result = new DOMResult();
try {
XMLReader reader = new Parser();
reader.setFeature(Parser.namespacesFeature, false);
reader.setFeature(Parser.namespacePrefixesFeature, false);
Transformer transformer = TransformerFactory.newInstance().newTransformer();
transformer.transform(new SAXSource(reader,new InputSource(new StringReader(str))), result);
} catch (Exception ex) {
throw new SearchEngineException("NukatSearchEngine.getDom: " + ex.getMessage());
}
return result.getNode();
}
It takes a String that contains the HTML document sent by the HTTP server after a POST request, but fails to parse it properly - I only get about four nodes from the entire document. The string itself looks fine - if I print it out and copy-paste it into a text document, I see the page I expected.
When I use an overloaded version of the above method:
private Node getDOM(URL url) throws SearchEngineException {
DOMResult result = new DOMResult();
try {
XMLReader reader = new Parser();
reader.setFeature(Parser.namespacesFeature, false);
reader.setFeature(Parser.namespacePrefixesFeature, false);
Transformer transformer = TransformerFactory.newInstance().newTransformer();
transformer.transform(new SAXSource(reader, new InputSource(url.openStream())), result);
} catch (Exception ex) {
throw new SearchEngineException("NukatSearchEngine.getDom: " + ex.getMessage());
}
return result.getNode();
}
then everything works just fine - I get a proper DOM tree, but I need to somehow retrieve the POST response from the server.
Storing the string in a file and reading it back does not work - still getting the same results.
What could be the problem?
Is it maybe a problem with the xml encoding?
This seems like an encoding problem. In the code example of yours that doesn't work, you're passing the content in as a String, and you get problems with TagSoup parsing the HTML. In the example that works, you're passing the stream into the InputSource constructor. The difference is that when you pass in the stream, the SAX implementation can figure out the encoding from the stream.
If you want to test this you could try these steps:
Stream the html you're parsing through a java.io.InputStreamReader and call getEncoding on it to see what encoding it detects.
In your first example code, call setEncoding on the InputSource passing in the encoding that the inputStreamReader reported.
See if the first example, changed to explicitly set the encoding, parses the html correctly.
There's a discussion of this toward the end of an article on using the SAX InputSource.
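A minimal sketch of steps 1 and 2, using the url and str variables from the question's two getDOM variants:
// step 1: see what encoding the runtime picks up from the raw stream
InputStreamReader probe = new InputStreamReader(url.openStream());
System.out.println(probe.getEncoding());   // e.g. "UTF8" or "Cp1252"
// step 2: make that encoding explicit on the InputSource built from the String
InputSource source = new InputSource(new StringReader(str));
source.setEncoding(probe.getEncoding());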
To get a POST response you first need to make a POST request; new InputSource(url.openStream()) opens a connection and reads the response of a GET request. Check out Sending a POST Request Using a URL.
Other possibilities that might be interesting to check out for doing POST requests and getting the response:
Jersey Web Client
HtmlUnit
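If you'd rather not pull in either library, a bare-bones sketch of doing the POST with HttpURLConnection and feeding the response stream into the same TagSoup/Transformer code that already works (the form parameters are placeholders):
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
try (OutputStream out = conn.getOutputStream()) {
    out.write("query=foo&page=1".getBytes(StandardCharsets.UTF_8));   // placeholder form data
}
DOMResult result = new DOMResult();
XMLReader reader = new Parser();                    // TagSoup, as in the question
reader.setFeature(Parser.namespacesFeature, false);
reader.setFeature(Parser.namespacePrefixesFeature, false);
Transformer transformer = TransformerFactory.newInstance().newTransformer();
transformer.transform(new SAXSource(reader, new InputSource(conn.getInputStream())), result);
Node node = result.getNode();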