Javax DocumentBuilder produces "double-UTF-8'ed" charset encoding - java

I’ve got a Java DOM Document which MyFilter has rewritten. From logging output I know that the contents of the Document are still correct. I am using the following lines to convert theDocument to a List<String> to pass it back through an interface:
Transformer transformer = TransformerFactory.newInstance().newTransformer();
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
transformer.transform(new DOMSource(theDocument), new StreamResult(buffer));
return Arrays.asList(new String(buffer.toByteArray()).split("\r?\n"));
The filter is called from this file copying method using org.apache.commons.io.FileUtils:
List<String> lines = FileUtils.readLines(source, "UTF-8");
if (filters != null) {
    for (final MyFilter filter : filters) {
        lines = filter.filter(lines);
    }
}
FileUtils.writeLines(destination, "UTF-8", lines);
This works perfectly fine on my machine (where I could debug it), but on other machines just running the code, any non-ASCII characters reproducibly get double-UTF-8'ed (e.g., Größe becomes GrÃ¶ÃŸe). The code is executed within a web app running in Tomcat. I am sure the machines are configured differently, but what I want is to get the non-corrupt result on any configuration.
Any ideas what I could be missing?

Once you have the Document object created, you have to read its content.
After that, you can write it to a file using the LSSerializer interface, which the DOM standard provides for this purpose.
By default, the LSSerializer produces an XML document without spaces or line breaks. As a result, the output looks less pretty, but it is actually more suitable for parsing by another program because it is free of unnecessary white space.
If you want white space, you use yet another magic incantation after creating the serializer:
ser.getDomConfig().setParameter("format-pretty-print", true);
The code snippet looks like:
private String getContentFromDocument(Document doc) {
    String content;
    DOMImplementation impl = doc.getImplementation();
    DOMImplementationLS implLS = (DOMImplementationLS) impl.getFeature("LS", "3.0");
    LSSerializer ser = implLS.createLSSerializer();
    ser.getDomConfig().setParameter("format-pretty-print", true);
    content = ser.writeToString(doc);
    return content;
}
And after you have the string content, you can write it to a file, like:
public void writeToXmlFile(String xmlContent) {
    File theDir = new File("./output");
    if (!theDir.exists())
        theDir.mkdir();
    String fileName = "./output/" + this.getClass().getSimpleName() + "_"
            + Calendar.getInstance().getTimeInMillis() + ".xml";
    try (OutputStream stream = new FileOutputStream(new File(fileName))) {
        try (OutputStreamWriter out = new OutputStreamWriter(stream, StandardCharsets.UTF_8)) {
            out.write(xmlContent);
            out.write("\n");
        }
    } catch (IOException ex) {
        System.err.println("Cannot write to file! " + ex.getMessage());
    }
}
BTW:
Have you tried to get the Document object in a slightly simpler way, like:
DocumentBuilderFactory documentFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder builder = documentFactory.newDocumentBuilder();
Document doc = builder.parse(new File(fileName));
You can try this as well. It should be enough for parsing an XML file.

I finally found it: the problem was in the String(byte[]) constructor, which interprets the byte[] relative to the platform's default charset. This should at least have been tagged deprecated. The transformer obviously produces UTF-8 output independent of the platform. Changing the method as below passes the same charset to both:
final String ENCODING = "UTF-8";
Transformer transformer = TransformerFactory.newInstance().newTransformer();
// Tell the transformer to emit UTF-8 explicitly...
transformer.setOutputProperty(OutputKeys.ENCODING, ENCODING);
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
transformer.transform(new DOMSource(theDocument), new StreamResult(buffer));
// ...and decode the bytes with the same charset instead of the platform default.
return Arrays.asList(new String(buffer.toByteArray(), ENCODING).split("\r?\n"));
To get it working, it does not really matter which encoding you choose, as long as both sides use the same one. However, it is good to pick a Unicode charset, as otherwise unmappable characters may get lost. Note that the charset will also be reflected in the XML declaration, so when the List<String> gets saved later, it is important to save it accordingly.
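If the declaration's charset getting out of sync with however the lines are saved later is a concern, one option (a sketch, assuming downstream consumers do not need the XML declaration) is to suppress the declaration entirely:
// Optional: omit the XML declaration so a later save in a different
// charset cannot contradict it (only if consumers don't need it).
transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");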

Related

Java transformer w3c.dom.document to inputstream

My scenario is this:
I have an HTML file which I loaded into a w3c.dom.Document; after loading it as a doc, I parsed through its nodes and made a few changes to their values, but now I need to transform this document into a String, or preferably into an InputStream directly.
And I managed to do so; however, for what I need this HTML for, it must keep some properties of the initial file. For instance (and this is the one thing I'm struggling a lot trying to solve), all tags must be closed.
Say I have a link tag in the header, <link .... />: I NEED the slash (/) at the end. However, after the transformer transforms my doc into an outputStream (which I then proceed to send to an inputStream), all the '/' before the > disappear. All my tags which ended in /> are changed into simple >.
The reason I need this structure is that one of the libraries I'm using (and I'm afraid I can't go looking for another one, especially not at this point) requires all tags to be closed; if not, it throws exceptions everywhere and my program crashes.
Does anyone have any good ideas or solutions for me? This is my first contact with the Transform class, so I might be missing something that could help me.
Thank you all so very much,
Warm regards
Here is some of the code to explain the scenario a little bit:
DocumentBuilderFactory docFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder docBuilder = docFactory.newDocumentBuilder();
org.w3c.dom.Document doc = docBuilder.parse(his); // his = the HTML inputStream
XPath xPath = XPathFactory.newInstance().newXPath();
String expression = "//*[@id='pessoaNome']";
org.w3c.dom.Element pessoaNome = null;
try
{
    pessoaNome = (org.w3c.dom.Element) (Node) xPath.compile(expression).evaluate(doc, XPathConstants.NODE);
}
catch (Exception e)
{
    e.printStackTrace();
}
pessoaNome.setTextContent("The new values for the node");
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
Source xmlSource = new DOMSource(doc);
Result outputTarget = new StreamResult(outputStream);
Transformer transformer = TransformerFactory.newInstance().newTransformer();
transformer.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, "HTML");
transformer.transform(xmlSource, outputTarget);
InputStream is = new ByteArrayInputStream(outputStream.toByteArray()); // At this point outputStream is already all messed up, not just the '/', but this is the only thing causing me problems
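One knob that may be worth trying before switching libraries (a sketch, not verified against this exact document): when the root element is html, the transformer can fall back to HTML serialization rules and drop the self-closing slashes, so forcing the XML output method may preserve them:
// Force XML serialization rules so empty elements keep the "/>" form;
// by default the transformer may switch to HTML rules for an <html> root.
transformer.setOutputProperty(OutputKeys.METHOD, "xml");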
As @Lee pointed out, I changed it to use Jsoup. The code got a lot cleaner; I just had to set up the outputSettings for it to work like a charm. Code below:
org.jsoup.nodes.Document doc = Jsoup.parse(new File(HTML), "UTF-8");
org.jsoup.nodes.Element pessoaNome = doc.getElementById("pessoaNome");
pessoaNome.html("My new html in here");
OutputSettings oSettings = new OutputSettings();
oSettings.syntax(org.jsoup.nodes.Document.OutputSettings.Syntax.xml);
doc.outputSettings(oSettings);
// getBytes() with no argument uses the platform default charset; be explicit instead.
InputStream is = new ByteArrayInputStream(doc.outerHtml().getBytes(StandardCharsets.UTF_8));
Have a look at jTidy, which cleans HTML. There is also jsoup, which is newer and supposedly does the same things, only better.

Converting XML to document in java creates null document

I'm trying to parse XML, downloaded from the web, in Java, following examples from here (Stack Overflow) and other sources.
First I pack the XML into a string:
String xml = getXML(url, logger);
If I print out the XML string at this point:
System.out.println("XML " + xml);
I get a printout of the XML, so I'm assuming there is no fault up to this point.
Then I try to create a document that I can evaluate:
InputSource is= new InputSource(new StringReader(xml));
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setNamespaceAware(true);
DocumentBuilder builder = factory.newDocumentBuilder();
Document doc = builder.parse(is);
If I print out the document here:
System.out.println("Doc: " + doc);
I get:
Doc: [#document: null]
When I later try to evaluate expressions with XPath I get java.lang.NullPointerException, and also when just trying to get the length of the root:
System.out.println("Root length " + rootNode.getLength());
which leads me to believe the document (and later the node) is truly null.
When I try to print out the InputSource or the Node I get e.g.
Input Source: org.xml.sax.InputSource#29453f44
which I don't know how to interpret.
Can anyone see what I've done wrong or suggest a way forward?
Thanks in advance.
You may need another way to render the document as a string.
For JDOM:
public static String toString(final Document document) {
    try {
        final ByteArrayOutputStream out = new ByteArrayOutputStream(1024);
        final XMLOutputter outp = new XMLOutputter();
        outp.output(document, out);
        final String string = out.toString("UTF-8");
        return string;
    }
    catch (final Exception e) {
        throw new IllegalStateException("Cannot stringify document.", e);
    }
}
The output
org.xml.sax.InputSource#29453f44
is simply the class name + the hash code of the instance (as defined in the Object class). It indicates that the class of the instance does not override toString. Likewise, Doc: [#document: null] is just the Document's own toString rendering (node name plus node value, which is null for a document), not evidence that the parse failed.
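A quick way to check whether the parse actually produced a tree (a minimal sketch, reusing the builder and InputSource from the question):
// [#document: null] is normal; inspect the tree itself to verify the parse.
Document doc = builder.parse(is);
System.out.println("Root element: " + doc.getDocumentElement().getNodeName());
System.out.println("Children of root: " + doc.getDocumentElement().getChildNodes().getLength());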

Set xml encoding

I am sending XML to a web service, where I convert the input XML to a string, and now I am having a problem setting its encoding. Here is the code:
Element soapinElement = (Element) streams.getSoapin().getValue().getAny();
Node node = (Node) soapinElement;
Document document = node.getOwnerDocument();
DOMImplementationLS domImplLS = (DOMImplementationLS) document.getImplementation();
LSSerializer serializer = domImplLS.createLSSerializer();
LSOutput output = domImplLS.createLSOutput();
output.setEncoding("UTF-8");
Writer stringWriter = new StringWriter();
output.setCharacterStream(stringWriter);
serializer.write(document, output);
String soapinString = stringWriter.toString();
This code makes a String from the request XML. The problem is that when the request XML is not encoded in UTF-8, it produces unreadable characters inside the XML elements:
<some element>РћР’Р” Р’РћР</some element>
When I send UTF-8 encoded XML there is no problem. So the question is how to set UTF-8 encoding when converting the XML to a String. The default encoding used by the JVM is ISO8859-1.
The setEncoding method says what the encoding actually is, not what you want it to be. The XML library won't convert the characters.
See this question: Meaning of XML encoding
If you want to convert the encoding, that is another question.
I would rethink my whole approach if I were you; generally, XML should be kept as a tree.
But if you really need a string, try this:
final StringWriter sw = new StringWriter();
try {
    TransformerFactory.newInstance().newTransformer().transform(
        new DOMSource(document),
        new StreamResult(sw)
    );
} catch (TransformerException e) {
    throw new RuntimeException(e);
}
// Now you have the XML as a String:
System.out.println(sw.toString());

get node raw text

How do I get a node's value together with its child nodes? For example, I have the following node parsed into a DOM Document instance:
<root>
<ch1>That is a text with <value name="val1">value contents</value></ch1>
</root>
I select the ch1 node using XPath. Now I need to get its contents: everything contained between <ch1> and </ch1>, i.e. That is a text with <value name="val1">value contents</value>.
How can I do it?
I have found the following code snippet that uses a transformation; it gives almost exactly what I want. It is possible to tune the result by changing the output method.
public static String serializeDoc(Node doc) {
    StringWriter outText = new StringWriter();
    StreamResult sr = new StreamResult(outText);
    Properties oprops = new Properties();
    oprops.put(OutputKeys.METHOD, "xml");
    TransformerFactory tf = TransformerFactory.newInstance();
    Transformer t = null;
    try {
        t = tf.newTransformer();
        t.setOutputProperties(oprops);
        t.transform(new DOMSource(doc), sr);
    } catch (Exception e) {
        System.out.println(e);
    }
    return outText.toString();
}
If this is server-side Java (i.e. you do not need to worry about it running on other JVMs) and you are using the Sun/Oracle JDK, you can do the following:
import com.sun.org.apache.xml.internal.serialize.OutputFormat;
import com.sun.org.apache.xml.internal.serialize.XMLSerializer;
...
Node n = ...;
OutputFormat outputFormat = new OutputFormat();
outputFormat.setOmitXMLDeclaration(true);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
XMLSerializer ser = new XMLSerializer(baos, outputFormat);
ser.serialize(n);
System.out.println(new String(baos.toByteArray()));
Remember that your final conversion to a String may need an explicit encoding parameter if the parsed XML DOM has its text nodes in a different encoding than your platform's default one, or you'll get garbage for the unusual characters.
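For example (a sketch; this internal OutputFormat defaults to UTF-8, so decode with the matching charset instead of the platform default):
// Decode with the charset the serializer actually wrote, not the platform default.
String xml = new String(baos.toByteArray(), java.nio.charset.StandardCharsets.UTF_8);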
You could use jOOX to wrap your DOM objects and get many utility functions from it, such as the one you need. In your case, this will produce the result you need (using CSS-style selectors to find <ch1/>):
String xml = $(document).find("ch1").content();
Or with XPath as you did:
String xml = $(document).xpath("//ch1").content();
Internally, jOOX will use a transformer to generate that output, as others have mentioned.
As far as I know, there is no equivalent of innerHTML in Document. DOM is meant to hide the details of the markup from you.
You can probably get the effect you want by going through the children of that node. Suppose for example that you want to copy out the text, but replace each "value" tag with a programmatically supplied value:
HashMap<String, String> values = ...;
StringBuilder str = new StringBuilder();
// Iterate as Node: the children include text nodes as well as elements.
for (Node child = ch1.getFirstChild(); child != null; child = child.getNextSibling()) {
    if (child.getNodeType() == Node.TEXT_NODE) {
        str.append(child.getTextContent());
    } else if (child.getNodeName().equals("value")) {
        str.append(values.get(child.getAttributes().getNamedItem("name").getTextContent()));
    }
}
String output = str.toString();

Writing XML in different character encodings with Java

I am attempting to write an XML library file that can be read back into my program.
The file-writer code is as follows:
XMLBuilder builder = new XMLBuilder();
Document doc = builder.build(bookList);
DOMImplementation impl = doc.getImplementation();
DOMImplementationLS implLS = (DOMImplementationLS) impl.getFeature("LS", "3.0");
LSSerializer ser = implLS.createLSSerializer();
String out = ser.writeToString(doc);
//System.out.println(out);
try {
    FileWriter fstream = new FileWriter(location);
    BufferedWriter outwrite = new BufferedWriter(fstream);
    outwrite.write(out);
    outwrite.close();
} catch (Exception e) {
    // Exception is swallowed here; at minimum, log it in real code.
}
The above code does write an XML document.
However, the XML declaration claims that the file is encoded in UTF-16.
When I read the file back in, I get the error:
"content not allowed in prolog"
This error does not occur when the encoding attribute is manually changed to UTF-8.
I am trying to get the above code to write an XML document encoded in UTF-8, or to successfully parse a UTF-16 file.
The code for parsing it in is:
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
DocumentBuilder loader = factory.newDocumentBuilder();
Document document = loader.parse(filename);
The last line returns the error.
The LSSerializer writeToString method does not allow the serializer to pick an encoding: it always serializes to a UTF-16 string (Java strings are UTF-16) and declares that in the output, which is why the declaration says UTF-16 while FileWriter writes bytes in the platform's default charset.
With the setEncoding method of an instance of LSOutput, LSSerializer's write method can be used to change the encoding. The LSOutput's CharacterStream can be set to the BufferedWriter instance, so that calls from LSSerializer's write will write to the file.
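Putting that together, a minimal sketch (assuming the doc and location variables from the question) that writes the file as UTF-8 with a matching declaration:
import java.io.FileOutputStream;
import java.io.OutputStream;
import org.w3c.dom.Document;
import org.w3c.dom.ls.DOMImplementationLS;
import org.w3c.dom.ls.LSOutput;
import org.w3c.dom.ls.LSSerializer;

public static void writeUtf8(Document doc, String location) throws Exception {
    DOMImplementationLS implLS =
            (DOMImplementationLS) doc.getImplementation().getFeature("LS", "3.0");
    LSSerializer ser = implLS.createLSSerializer();
    LSOutput out = implLS.createLSOutput();
    out.setEncoding("UTF-8"); // the bytes and the XML declaration will both say UTF-8
    try (OutputStream stream = new FileOutputStream(location)) {
        out.setByteStream(stream);
        ser.write(doc, out);
    }
}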
