I am trying to serialize DOM documents with supplementary Unicode characters such as U+1D49C (𝒜, MATHEMATICAL SCRIPT CAPITAL A). Creating a node with such a character is not a problem (I just set the node value to the UTF-16 equivalent, "\uD835\uDC9C"). When serializing, however, Xalan and XSLTC (with a Transformer) and Xerces (with LSSerializer) all escape each surrogate separately, creating invalid character references like "&#55349;&#56476;" instead of the valid "&#119964;". I tried the "normalize-characters" parameter for LSSerializer, but it is not supported. Only Saxon gets it right, writing the character itself without a character reference when the encoding is Unicode.
I cannot use Saxon in practice (among other reasons, I use Java applets and do not want to load another jar), so I am looking for a solution using the default JDK libraries. Is it possible to get valid XML documents serialized from a DOM document with supplementary Unicode characters?
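To make the failure concrete, here is a minimal sketch that reproduces it with the plain JDK Transformer (the element name and class name are made up for illustration):

import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class SupplementaryCharRepro {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("text");
        root.setTextContent("\uD835\uDC9C"); // U+1D49C as a surrogate pair
        doc.appendChild(root);

        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        // with the JDK's XSLTC-based serializer, this prints each surrogate
        // as a separate (invalid) numeric character reference
        System.out.println(out);
    }
}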
[edit] I found someone else who encountered this problem: http://www.dragishak.com/?p=131
[edit2] Actually, it seems to work with LSSerializer when I don't have Xerces on the classpath (the class used is com.sun.org.apache.xml.internal.serialize.DOMSerializerImpl). It does not work with a Transformer and com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl.
Since I didn't see any answer coming, and other people seem to have the same problem, I looked into it further...
To find the origin of the bug, I used the serializer source code from Xalan 2.7.1, which is also used in Xerces.
org.apache.xml.serializer.dom3.LSSerializerImpl uses org.apache.xml.serializer.ToXMLStream, which extends org.apache.xml.serializer.ToStream.
ToStream.characters(final char chars[], final int start, final int length) handles the characters, and does not handle supplementary characters properly. (Note: org.apache.xml.serializer.ToTextStream, which can be used with a Transformer, does a better job in its characters method, but it only handles plain text and ignores all markup; one would think that XML files are text, but for some reason ToXMLStream does not extend ToTextStream.)
org.apache.xalan.transformer.TransformerIdentityImpl also uses org.apache.xml.serializer.ToXMLStream (which is returned by org.apache.xml.serializer.SerializerFactory.getSerializer(Properties format)), so it suffers from the same bug.
ToStream uses org.apache.xml.serializer.CharInfo to check whether a character should be replaced by a String, so the bug could also be fixed there instead of directly in ToStream. CharInfo uses a property file, org.apache.xml.serializer.XMLEntities.properties, with a list of character entities, so changing this file could also be a way to fix the bug, although so far it is designed only for the special XML characters (quot, amp, lt, gt). The only way to make ToXMLStream use a different property file than the one in the package would be to add an org.apache.xml.serializer.XMLEntities.properties file earlier in the classpath, which would not be very clean...
With the default JDK (1.6 and 1.7), TransformerFactory returns a com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl, which uses com.sun.org.apache.xml.internal.serializer.ToXMLStream. In com.sun.org.apache.xml.internal.serializer.ToStream, characters() sometimes calls processDirty(), which calls accumDefaultEscape(), which could handle supplementary characters better, but in practice it does not seem to work (maybe processDirty() is not called for supplementary characters)...
com.sun.org.apache.xml.internal.serialize.DOMSerializerImpl uses com.sun.org.apache.xml.internal.serialize.XMLSerializer, which supports supplementary characters. Strangely enough, XMLSerializer comes from Xerces, and yet it is not used by Xerces when Xalan or XSLTC is on the classpath. This is because org.apache.xerces.dom.CoreDOMImplementationImpl.createLSSerializer uses org.apache.xml.serializer.dom3.LSSerializerImpl when it is available, instead of org.apache.xerces.dom.DOMSerializerImpl. With serializer.jar on the classpath, org.apache.xml.serializer.dom3.LSSerializerImpl is used. Warning: xalan.jar and xsltc.jar both reference serializer.jar in their manifests, so serializer.jar ends up on the classpath if it is in the same directory and either xalan.jar or xsltc.jar is on the classpath! If only xercesImpl.jar and xml-apis.jar are on the classpath, org.apache.xerces.dom.DOMSerializerImpl is used as the LSSerializer, and supplementary characters are properly handled.
CONCLUSION AND WORKAROUND: the bug lies in Apache's org.apache.xml.serializer.ToStream class (renamed com.sun.org.apache.xml.internal.serializer.ToStream inside the JDK). A serializer that handles supplementary characters properly is org.apache.xml.serialize.DOMSerializerImpl (renamed com.sun.org.apache.xml.internal.serialize.DOMSerializerImpl inside the JDK). However, Apache prefers ToStream over DOMSerializerImpl when it is available, so maybe it behaves better in other respects (or maybe it's just a reorganization). On top of that, they went as far as deprecating DOMSerializerImpl in Xerces 2.9.0. Hence the following workarounds, which might have side effects:
when Xerces and Apache's serializer are on the classpath, replace "(doc.getImplementation()).createLSSerializer()" with "new org.apache.xerces.dom.DOMSerializerImpl()"
when Apache's serializer is on the classpath (for instance because of Xalan) but not Xerces, try replacing "(doc.getImplementation()).createLSSerializer()" with "new com.sun.org.apache.xml.internal.serialize.DOMSerializerImpl()" (a fallback is necessary because this class might disappear in the future)
These two workarounds produce a deprecation warning when compiling.
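A combined sketch of the two workarounds, going through reflection so that the code compiles whichever of the classes is present (both class names are the ones discussed above; either may be absent on a given JDK or classpath):

import org.w3c.dom.Document;
import org.w3c.dom.ls.DOMImplementationLS;
import org.w3c.dom.ls.LSSerializer;

public final class SerializerPicker {
    private static final String[] CANDIDATES = {
        "org.apache.xerces.dom.DOMSerializerImpl",                    // workaround 1
        "com.sun.org.apache.xml.internal.serialize.DOMSerializerImpl" // workaround 2
    };

    public static LSSerializer pick(Document doc) {
        for (String name : CANDIDATES) {
            try {
                return (LSSerializer) Class.forName(name).newInstance();
            } catch (Exception e) {
                // class not available here, try the next candidate
            }
        }
        // last resort: whatever LSSerializer the DOM implementation offers
        return ((DOMImplementationLS) doc.getImplementation()
                .getFeature("LS", "3.0")).createLSSerializer();
    }
}

Going through reflection also avoids the deprecation warning at compile time; note that the last-resort branch just returns the default serializer, which may still exhibit the bug.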
I don't have a workaround for XSLT transforms, but this is beyond the scope of the question. I guess one could do a transform to another DOM document and use DOMSerializerImpl to serialize.
Some other workarounds, which might be a better solution for some people:
use Saxon with a Transformer
use XML documents with UTF-16 encoding
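For the UTF-16 workaround, a minimal sketch with the plain JDK Transformer (once the output encoding can represent the character directly, no character reference is needed):

import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;

Transformer t = TransformerFactory.newInstance().newTransformer();
// with UTF-16 output, supplementary characters can be written as-is
t.setOutputProperty(OutputKeys.ENCODING, "UTF-16");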
Here is an example that worked for me. The code is written in Groovy running on Java 7, but you can easily translate it to Java since I've used only Java APIs in the example. If you pass in a DOM document that has supplementary (plane 1) Unicode characters, you will get back a String in which those characters are properly serialized. For example, if the document contains U+1D4C1, MATHEMATICAL SCRIPT SMALL L (see http://www.fileformat.info/info/unicode/char/1d4c1/index.htm), it will be serialized in the returned String as the character 𝓁 itself instead of the invalid surrogate references &#55349;&#56513; (which is what you will get with a Xalan Transformer).
import org.w3c.dom.Document
...
def String writeToStringLS( Document doc ) {
    def domImpl = doc.getImplementation()
    def implLS = domImpl.getFeature("LS", "3.0")
    def lsOutput = implLS.createLSOutput()
    lsOutput.encoding = "UTF-8"
    def bo = new ByteArrayOutputStream()
    def writer = new BufferedWriter( new OutputStreamWriter( bo, "UTF-8" ) )
    lsOutput.characterStream = writer
    def lsWriter = implLS.createLSSerializer()
    lsWriter.write( doc, lsOutput )
    writer.flush()              // make sure everything reaches the byte stream
    return bo.toString("UTF-8") // decode with the same charset we encoded with
}
Related
Currently, I'm working on a feature that involves parsing XML that we receive from another product. I decided to run some tests against some actual customer data, and it looks like the other product is allowing input from users that should be considered invalid. Anyway, I still have to figure out a way to parse it. We're using javax.xml.parsers.DocumentBuilder and I'm getting an error on input that looks like the following.
<xml>
...
<description>Example:Description:<THIS-IS-PART-OF-DESCRIPTION></description>
...
</xml>
As you can tell, the description has what appears to be an invalid tag inside of it (<THIS-IS-PART-OF-DESCRIPTION>). Now, this description tag is known to be a leaf tag and shouldn't have any nested tags inside of it. Regardless, this is still an issue and yields an exception on DocumentBuilder.parse(...).
I know this is invalid XML, but it's predictably invalid. Any ideas on a way to parse such input?
That "XML" is worse than invalid – it's not well-formed; see Well Formed vs Valid XML.
An informal assessment of the predictability of the transgressions does not help. That textual data is not XML. No conformant XML tools or libraries can help you process it.
Options, most desirable first:
Have the provider fix the problem on their end. Demand well-formed XML. (Technically the phrase well-formed XML is redundant but may be useful for emphasis.)
Use a tolerant markup parser to cleanup the problem ahead of parsing as XML:
Standalone: xmlstarlet has robust recovering and repair capabilities (credit: RomanPerekhrest):
xmlstarlet fo -o -R -H -D bad.xml 2>/dev/null
Standalone and C/C++: HTML Tidy works with XML too. Taggle is a port of TagSoup to C++.
Python: Beautiful Soup is Python-based. See notes in the Differences between parsers section. See also answers to this question for more suggestions for dealing with not-well-formed markup in Python, including especially lxml's recover=True option.
See also this answer for how to use codecs.EncodedFile() to cleanup illegal characters.
Java: TagSoup and JSoup focus on HTML. FilterInputStream can be used for preprocessing cleanup.
.NET:
XmlReaderSettings.CheckCharacters can be disabled to get past illegal XML character problems.
#jdweng notes that XmlReaderSettings.ConformanceLevel can be set to ConformanceLevel.Fragment so that XmlReader can read XML Well-Formed Parsed Entities lacking a root element.
#jdweng also reports that XmlReader.ReadToFollowing() can sometimes be used to work around XML syntactical issues, but note the rule-breaking warning in #3 below.
Microsoft.Language.Xml.XMLParser is said to be “error-tolerant”.
Go: Set Decoder.Strict to false as shown in this example by #chuckx.
PHP: See DOMDocument::$recover and libxml_use_internal_errors(true). See nice example here.
Ruby: Nokogiri supports “Gentle Well-Formedness”.
R: See htmlTreeParse() for fault-tolerant markup parsing in R.
Perl: See XML::Liberal, a "super liberal XML parser that parses broken XML."
Process the data as text manually using a text editor or programmatically using character/string functions. Doing this programmatically can range from tricky to impossible, as what appears to be predictable often is not -- rule breaking is rarely bound by rules.
For invalid character errors, use regex to remove/replace invalid characters:
PHP: preg_replace('/[^\x{0009}\x{000a}\x{000d}\x{0020}-\x{D7FF}\x{E000}-\x{FFFD}]+/u', ' ', $s);
Ruby: string.tr("^\u{0009}\u{000a}\u{000d}\u{0020}-\u{D7FF}\u{E000}-\u{FFFD}", ' ')
JavaScript: inputStr.replace(/[^\x09\x0A\x0D\x20-\xFF\x85\xA0-\uD7FF\uE000-\uFDCF\uFDE0-\uFFFD]/gm, '')
For ampersands, use regex to replace matches with &amp; (credit: blhsin, demo):
&(?!(?:#\d+|#x[0-9a-f]+|\w+);)
Note that the above regular expressions won't take comments or CDATA sections into account.
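Since the surrounding questions are Java-based, here is a hedged Java equivalent of the character-stripping regexes above (the variable name is made up; this version also keeps supplementary characters U+10000-U+10FFFF, which the \x{...} syntax requires Java 7+ for):

// Replace characters that are not valid in XML 1.0 documents with a space.
// Same ranges as the PHP/Ruby examples: TAB, LF, CR, U+0020-U+D7FF,
// U+E000-U+FFFD, plus the supplementary planes.
String cleaned = input.replaceAll(
    "[^\\x09\\x0A\\x0D\\x20-\\uD7FF\\uE000-\\uFFFD\\x{10000}-\\x{10FFFF}]",
    " ");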
A standard XML parser will NEVER accept invalid XML, by design.
Your only option is to pre-process the input to remove the "predictably invalid" content, or wrap it in CDATA, prior to parsing it.
The accepted answer is good advice, and contains very useful links.
I'd like to add that this, and many other cases of not-well-formed and/or DTD-invalid XML, can be repaired using SGML, the ISO-standardized superset of HTML and XML. In your case, what works is to declare the bogus THIS-IS-PART-OF-DESCRIPTION element as an SGML empty element and then use e.g. the osx program (part of the OpenSP/OpenJade SGML package) to convert it to XML. For example, if you supply the following to osx
<!DOCTYPE xml [
<!ELEMENT xml - - ANY>
<!ELEMENT description - - ANY>
<!ELEMENT THIS-IS-PART-OF-DESCRIPTION - - EMPTY>
]>
<xml>
<description>blah blah
<THIS-IS-PART-OF-DESCRIPTION>
</description>
</xml>
it will output well-formed XML for further processing with the XML tools of your choice.
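Assuming the document above is saved to a file (the file name here is made up), the invocation is simply:

osx input.sgml > output.xml

osx writes the converted XML to standard output.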
Note, however, that your example snippet has another problem in that element names starting with the letters xml or XML or Xml etc. are reserved in XML, and won't be accepted by conforming XML parsers.
IMO these cases should be solved by using JSoup.
Below is not really an answer for this specific case, but I found it on the web (thanks to inuyasha82 on Coderwall). This bit of code inspired me for another similar problem while dealing with malformed XMLs, so I share it here.
Please do not edit what is below, as it is kept as it appears on the original website.
To be valid, an XML document requires a unique root element declared in the document.
So, for example, valid XML is:
<root>
<element>...</element>
<element>...</element>
</root>
But if you have a document like:
<element>...</element>
<element>...</element>
<element>...</element>
<element>...</element>
This will be considered malformed XML, so many XML parsers just throw an Exception complaining about a missing root element.
This example shows a solution to that problem, successfully parsing the malformed XML above.
Basically, what we will do is add a root element programmatically.
So first of all you have to open the resource that contains your "malformed" XML (e.g. a file):
File file = new File(pathtofile);
Then open a FileInputStream:
FileInputStream fis = new FileInputStream(file);
If we try to parse this stream with any XML library at this point, we will get the malformed-document Exception.
Now we create a list of InputStream objects with three elements:
A ByteArrayInputStream element that contains the string: <root>
Our FileInputStream
A ByteArrayInputStream with the string: </root>
So the code is:
List<InputStream> streams =
Arrays.asList(
new ByteArrayInputStream("<root>".getBytes()),
fis,
new ByteArrayInputStream("</root>".getBytes()));
Now using a SequenceInputStream, we create a container for the List created above:
InputStream cntr =
    new SequenceInputStream(Collections.enumeration(streams));
Now we can use any XML parser library on cntr, and it will be parsed without any problem (checked with the StAX library).
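For completeness, here is the whole approach as one hedged, self-contained sketch (class and method names made up):

import java.io.*;
import java.util.*;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class RootWrappingParse {
    public static Document parseWithoutRoot(File file) throws Exception {
        List<InputStream> streams = Arrays.asList(
            new ByteArrayInputStream("<root>".getBytes("UTF-8")),
            new FileInputStream(file),
            new ByteArrayInputStream("</root>".getBytes("UTF-8")));
        InputStream cntr = new SequenceInputStream(Collections.enumeration(streams));
        // the wrapped stream now has a single root element
        return DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(cntr);
    }
}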
I am experiencing weird behavior with German "Umlaute" (ä, ö, ü, ß) when using Java's equality checks (either directly or indirectly).
Everything works as expected when running, debugging or testing from Eclipse, and input containing "Umlaute" is treated as equal or not as expected.
However, when I build the application using Spring Boot and run it, these equality checks fail for words that contain "Umlaute", e.g. words like "Nationalität".
Input is retrieved from a webpage via Jsoup and content of a table is extracted for some keywords. The encoding of the page is UTF-8 and I have handling in place for Jsoup to convert it if this is not the case.
The encoding of the source files is UTF-8 as well.
Connection connection = Jsoup.connect(url)
.header("accept-language", "de-de, de, en")
.userAgent("Mozilla/5.0")
.timeout(10000)
.method(Method.GET);
Response response = connection.execute();
if(logger.isDebugEnabled())
logger.debug("Encoding of response: " +response.charset());
Document doc;
if(response.charset().equalsIgnoreCase("UTF-8"))
{
logger.debug("Response has expected charset");
doc = Jsoup.parse(response.body(), baseURL);
}
else
{
logger.debug("Response doesn't have exepcted charset and is converted");
doc = Jsoup.parse(new String(response.bodyAsBytes(), "UTF-8"), baseURL);
}
logger.debug("Encoding of document: " +doc.charset());
if(!doc.charset().equals(Charset.forName("UTF-8")))
{
logger.debug("Changing encoding of document from " +doc.charset());
doc.updateMetaCharsetElement(true);
doc.charset(Charset.forName("UTF-8"));
logger.debug("Changed encoding of document to: " +doc.charset());
}
return doc;
Example log output (from deployed app) of reading content.
Encoding of response: utf-8
Response has expected charset
Encoding of document: UTF-8
Example input:
<tr><th>Nationalität:</th> <td> [...] </td> </tr>
Example code that fails for words containing ä, ö, ü or ß but works fine for other words:
Element header = row.select("th").first();
String text = header.ownText();
if("Nationalität:".equals(text))
{
// goes here in eclipse
}
else
{
// and here in deployed spring boot app
}
Is there any difference between running from Eclipse and a built & deployed app that I am missing? Where else could this behavior come from, and how can it be resolved?
As far as I can see this is not (directly) an encoding issue since the input shows "Umlaute" correctly...
Since this is not reproducible when debugging, I am having a hard time figuring out what exactly goes wrong.
Edit: While input looks fine in logs (i.e. diacritics show up correctly) I realized that they don't look correct in the console:
<th>Nationalität:</th>
I am currently using a Normalizer, as suggested by Mirko, like this:
Normalizer.normalize(input, Form.NFC);
(also tried it with NFD).
How do the (Spring Boot) console and the (logback) log output differ?
Diacritics like umlauts can often be represented in two different ways in unicode: As a single-codepoint character or as a composition of two characters. This isn't a problem of the encoding, it can happen in UTF-8, UTF-16, UTF-32 etc.
Java's equals method may not consider composite characters equal to single-codepoint characters, even though they look exactly the same.
Try to have a look at the binary representation of the strings you are comparing, this way you should be able to track down the differences.
You could also use the methods of the "Character" class to iterate through the strings and print out the properties of all the characters. Maybe this helps, too, to figure out differences.
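A small sketch of that inspection idea (Character.getName requires Java 7+; the method name is made up):

// Print every code point, so composed vs. decomposed sequences become visible
static void dumpCodePoints(String s) {
    for (int i = 0; i < s.length(); ) {
        int cp = s.codePointAt(i);
        System.out.printf("U+%04X %s%n", cp, Character.getName(cp));
        i += Character.charCount(cp);
    }
}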
In any case, it could help if you use java.text.Normalizer on both "sides" of the "equals", to normalize the text to, for example, Unicode Normalization Form C. This way, differences like the aforementioned should be straightened out and the strings should compare as expected.
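And a sketch of the normalize-then-compare fix itself (the decomposed literal is built explicitly here to illustrate what the scraped page may contain):

import java.text.Normalizer;
import java.text.Normalizer.Form;

String composed   = "Nationalit\u00E4t:";  // ä as a single code point (NFC)
String decomposed = "Nationalita\u0308t:"; // a + combining diaeresis (NFD)

System.out.println(composed.equals(decomposed)); // false, though they render alike
System.out.println(Normalizer.normalize(composed, Form.NFC)
        .equals(Normalizer.normalize(decomposed, Form.NFC))); // true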
Have you tried printing the keycode to console to see if they actually match when compiled? Maybe Eclipse is handling the charset gracefully but when it's compiled it's down to some Java/System settings?
I think I tracked this down to the build of the standalone app being the culprit.
As described above, when running from Eclipse all is fine, the problem only occurred when I ran the standalone Spring Boot app.
This is being built with Gradle. In my build.gradle I have
compileJava.options.encoding = 'UTF-8'
in order to force UTF-8 to be used for encoding. This should (usually) be enough. However, I also use AspectJ (via the gradle-aspectj plugin), which apparently breaks this behavior (involuntarily?) and results in the default encoding being used instead of the one explicitly defined.
In order to solve this I added
compileAspect {
additionalAjcArgs = ['encoding' : 'UTF-8']
}
to my build.gradle which passes the encoding option on to the ajc compiler. This seems to have fixed the problem for the regular build.
The problem still occurs, however, when tests are run from Gradle. I was not yet able to find out what needs to be done there or why the above configuration is not enough.
This is now tracked in a separate question.
Is there any good parser that can parse an HL7 v2.7 message using Java, other than HAPI? My goal is to convert the message into an XML file.
There is my own open source alternative called HL7X, which works with any HL7v2 version. It converts your HL7 String into an XML String.
Example:
MSH|^~\&|||||20121116122025||ADT^A01|5730224|P|2.5||||||UNICODE UTF-8
EVN|A01|20130120151827
PID||0|123||Name^Firstname^^^^||193106170000|w
PV1||E|
Gets transformed to
<?xml version="1.0" encoding="UTF-8"?>
<HL7X>
<MSH>
<MSH.1>^~\&</MSH.1>
<MSH.6>20121116122025</MSH.6>
<MSH.8>
<MSH.8.1>ADT</MSH.8.1>
<MSH.8.2>A01</MSH.8.2>
</MSH.8>
<MSH.9>5730224</MSH.9>
<MSH.10>P</MSH.10>
<MSH.11>2.5</MSH.11>
<MSH.17>UNICODE UTF-8</MSH.17>
</MSH>
<EVN>
<EVN.1>A01</EVN.1>
<EVN.2>20130120151827</EVN.2>
</EVN>
<PID>
<PID.2>0</PID.2>
<PID.3>123</PID.3>
<PID.5>
<PID.5.1>Name</PID.5.1>
<PID.5.2>Firstname</PID.5.2>
</PID.5>
<PID.7>193106170000</PID.7>
<PID.8>F</PID.8>
</PID>
<PV1>
<PV1.2>E</PV1.2>
</PV1>
</HL7X>
This open source Java software (http://www.dcm4che.org/confluence/display/ee2/Home) can receive various HL7 messages through the MLLP protocol, convert them to XML, run them through an XSLT transformer, and then load them into a database and serve them to DICOM clients as needed. To do this, the code base contains HL7-to-XML code. Just find it, copy/paste it and use it.
At one point I knew where exactly this code is, as I was troubleshooting a message character set problem. At that time I found that the HL7 parser is rather simple-minded and can understand only one character set, provided in the configuration. It does not read/use the character set provided in the messages (MSH-18, Table 0211, Grahame Grieve's encoding tips), nor does it support switching character sets during message decoding (see the chapter "Escape sequences supporting multiple character sets" in the HL7 specification).
So I know the parser code is there. It is in Java. It produces the XML input for the customer-specific XSLT transformation script. It should be quite easy to reuse.
You should be able to find it by yourself; otherwise your question would turn into plain tool-finding, which is off-topic :)
For development I'm using ResourceBundle to read a UTF-8 encoded properties file (I set that in Eclipse's file properties on that file) directly from my resources directory in the IDE (native2ascii is used on the way to production), e.g.:
menu.file.open.label=&Öffnen...
label.btn.add.name=&Hinzufügen
label.btn.remove.name=&Löschen
Since that causes issues with the character encoding when using non-ASCII characters I thought I'd be happy with:
ResourceBundle resourceBundle = ResourceBundle.getBundle("messages", Locale.getDefault());
String value = resourceBundle.getString(key);
value = new String(value.getBytes(), "UTF-8");
Well, it does work nicely for lower-case German umlauts, but not for the upper-case ones, and the ß also doesn't work. Here's the value read with getString(key) and the value after the conversion with new String(value.getBytes(), "UTF-8"):
&LÃ¶schen => &Löschen
&HinzufÃ¼gen => &Hinzufügen
&Ã?ber => &??ber
&SchlieÃ?en => &Schlie??en
&Ã?ffnen... => &??ffnen...
The last three should be:
&Ã?ber => &Über
&SchlieÃ?en => &Schließen
&Ã?ffnen... => &Öffnen...
I guess that I'm not too far away from the truth, but what am I missing here?
Google found something similar, but that remained unanswered.
EDIT: a little more code
The problem is you're calling String.getBytes() without specifying an encoding - which will use the default platform encoding. You're then using the binary result of that operation as if it were in UTF-8.
If you use UTF-8 in both directions, it'll be fine:
// Should be a round-trip
value = new String(value.getBytes("UTF-8"), "UTF-8");
... but if you were trying to use this to read a UTF-8-encoded property file without telling the code performing the initial read which encoding to use, that won't work.
The code you've presented is basically always the wrong approach. Your "Since that causes issues with the character encoding" suggests that you'd already run across an earlier problem - so I'd go back to that, instead of trying to apply a broken fix. If you've already lost data when constructing the ResourceBundle, it's too late to go back later... you need to make sure the ResourceBundle itself is loaded correctly.
Please tell us exactly what problems you had with the ResourceBundle, and we can see if we can fix the root cause.
EDIT: It's not clear how you're running native2ascii. The fix may be as simple as changing to use:
native2ascii -encoding UTF-8 input.properties output.properties
Some notes:
If it is a String it is UTF-16, and if it isn't it is a corrupt string (and too late to fix).
new String(value.getBytes(), "UTF-8"); - this code will (at best) do nothing on a system that uses UTF-8 as the default encoding; otherwise it will corrupt the string.
.properties files must be ISO 8859-1 (the Properties type supports other formats and encodings, but I don't know how you would tell ResourceBundle that; one possibility is sketched after these notes.)
System.out can introduce its own transcoding bugs (the PrintStream encodes UTF-16 strings to the default encoding; the receiving device must decode the bytes using the same encoding.)
I suspect you are trying to fix your problems in the wrong place.
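On the note above that Properties supports other encodings but it's unclear how to tell ResourceBundle: one known way is a custom ResourceBundle.Control (Java 6+) that loads the bundle through a UTF-8 Reader. A sketch, with error handling kept minimal:

import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.Locale;
import java.util.PropertyResourceBundle;
import java.util.ResourceBundle;

// Loads .properties files as UTF-8 instead of the default ISO 8859-1.
public class Utf8Control extends ResourceBundle.Control {
    @Override
    public ResourceBundle newBundle(String baseName, Locale locale,
            String format, ClassLoader loader, boolean reload) throws IOException {
        String resourceName =
            toResourceName(toBundleName(baseName, locale), "properties");
        InputStream stream = loader.getResourceAsStream(resourceName);
        if (stream == null) {
            return null; // lets ResourceBundle try other candidates
        }
        try {
            // PropertyResourceBundle(Reader) decodes with the charset we choose
            return new PropertyResourceBundle(new InputStreamReader(stream, "UTF-8"));
        } finally {
            stream.close();
        }
    }
}

Bundles are then loaded with ResourceBundle.getBundle("messages", Locale.getDefault(), new Utf8Control()), and no byte-level re-decoding of the returned strings is needed.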
You are encoding the text with a different encoding to the one you are decoding with.
Try instead using the same character set for encoding and decoding.
value = new String(value.getBytes("UTF-8"), "UTF-8");
String s = "ßßßßß";
s += s.toUpperCase();
s = new String(s.getBytes("UTF-8"), "UTF-8");
System.out.println(s);
prints
ßßßßßSSSSSSSSSS
Today I was talking to one of my colleagues, and he was pretty much on the same path as the other answers. So I tried to achieve what Jon Skeet had suggested, meaning creating the same file as in production. Since rebuilding the project after each change of a resource is out of the question, and since I hadn't done any of what solved this (and I guess this will be new to some), let me line it out (even if it may be just for personal reference ;) ). In short, this uses Eclipse's project builders.
Create an Ant-style build.xml
<?xml version="1.0" encoding="UTF-8"?>
<project>
<property name="dir.resources" value="src/main/resources" />
<property name="dir.target" value="bin/main" />
<target name="native-to-ascii">
<delete dir="${dir.target}" includes="**/*.properties" />
<native2ascii src="${dir.resources}" dest="${dir.target}" includes="**/*.properties" />
</target>
</project>
Its intention is to delete the properties files in the target directory and use native2ascii to recreate them. The delete is necessary because native2ascii won't overwrite existing files.
In Eclipse go to the project properties and select "Builders", click "New...", pick "Ant Builder" (that's the slightly enhanced editor for run configurations)
In "Main" let "Buildfile" point to the Ant-script, set "Base Directory" to ${project_loc}
In "Refresh" tick "Refresh resources upon completion" and pick "The project containing the selected resource"
In "Targets" click "Set Targets" next to the "Auto Build" and pick native-to-ascii there (note that for some reason I had to do this later again)
This might not be necessary for everybody, but in "JRE" pick a proper execution environment
In "Build Options" tick off "Allocate Console" (however, you may want to keep this ticked on until you see that it's all working)
"Apply", "OK"
I was told that the newly created builder should be somewhere underneath the Java Builder (use Up/Down-button)
In the "Java Build Path" select the source folder with the resources (src/main/resources for me) and add an exclusion for **/*.properties
That should have been it. If you edit a properties-file and save it, it should automatically be converted to ASCII in the output folder. You can try with entering ü, which should end up as \u00fc.
Note that if you have a lot of properties-files, this may take some time. Just don't save after every keypress. :)
I have a program that needs to parse XML that contains character entities. The program itself doesn't need to have them resolved, and the list of them is large and will change, so I want to avoid explicit support for these entities if I can.
Here's a simple example:
<?xml version="1.0" encoding="UTF-8"?>
<xml>Hello there &something;</xml>
Is there a Java XML API that can parse a document successfully without resolving (non-standard) character entities? Ideally it would translate them into a special event or object that could be handled specially, but I'd settle for an option that would silently suppress them.
Answer & Example:
Skaffman gave me the answer: use a StAX parser with IS_REPLACING_ENTITY_REFERENCES set to false.
Here's the code I whipped up to try it out:
import java.io.FileInputStream;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.events.EntityReference;
import javax.xml.stream.events.XMLEvent;

XMLInputFactory inputFactory = XMLInputFactory.newInstance();
inputFactory.setProperty(XMLInputFactory.IS_REPLACING_ENTITY_REFERENCES, false);
XMLEventReader reader = inputFactory.createXMLEventReader(
        new FileInputStream("your file here"));
while (reader.hasNext()) {
    XMLEvent event = reader.nextEvent();
    if (event.isEntityReference()) {
        EntityReference ref = (EntityReference) event;
        System.out.println("Entity Reference: " + ref.getName());
    }
}
For the above XML, it will print "Entity Reference: something".
The StAX API supports the notion of not replacing character entity references, by way of the IS_REPLACING_ENTITY_REFERENCES property:
Requires the parser to replace internal entity references with their replacement text and report them as characters
This can be set on an XMLInputFactory, which is then in turn used to construct an XMLEventReader or XMLStreamReader. However, the API is careful to say that this property is only intended to force the implementation to perform the replacement, rather than to force it to not replace them. Still, it's got to be worth a try.
Works for me only when disabling support of external entities:
XMLInputFactory inputFactory = XMLInputFactory.newInstance();
inputFactory.setProperty(XMLInputFactory.IS_REPLACING_ENTITY_REFERENCES, false);
inputFactory.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES, false);
A SAX parse with an org.xml.sax.EntityResolver might suit your purpose. You could for sure suppress them, and you could probably find a way to leave them unresolved.
This tutorial seems the most relevant: it shows how to resolve entities into strings.
I am not a Java developer, but I "think" the Java XML classes support functionality similar to .NET's for accomplishing this. In .NET, with the XmlReaderSettings class, you set the ProhibitDtd property to false and set the XmlResolver property to null. This causes the parser to ignore externally referenced entities without throwing an exception when they are read. I just did a Google search for "Java ignore entity" and got lots of hits, some of which appear to address this topic. I realize this is not a total answer to your question, but it should point you in a useful direction.