I'm reading an XML file using the default Woodstox EventReader, e.g.:
XMLInputFactory.newInstance().createXMLEventReader(new FileInputStream(fileName));
If an input file happens to have the Unicode NULL character in some textual content, the following Exception/Stacktrace occurs:
WstxUnexpectedCharException.<init>(String, Location, char) line: 17
ValidatingStreamReader(StreamScanner).constructNullCharException() line: 604
ValidatingStreamReader(StreamScanner).throwInvalidSpace(int, boolean) line: 633
ValidatingStreamReader(BasicStreamReader).readTextSecondary(int, boolean) line: 4624
ValidatingStreamReader(BasicStreamReader).finishToken(boolean) line: 3661
ValidatingStreamReader(BasicStreamReader).next() line: 1063
WstxEventReader(Stax2EventReaderImpl).nextEvent() line: 255
I'd like to avoid validating textual content. Setting IS_VALIDATING on the XMLInputFactory does not solve the problem.
After inspecting the source code, it looks like BasicStreamReader's next() refers to the "mValidateText" variable to determine whether to validate or not.
From the Source:
/**
* Flag that indicates that textual content (CDATA, CHARACTERS) is to
* be validated within current element's scope. Enabled if one of
 * validators returns {@link XMLValidator#CONTENT_ALLOW_VALIDATABLE_TEXT},
* and will prevent lazy parsing of text.
*/
protected boolean mValidateText = false;
I can't figure out how to change or set this value via the InputFactory or EventReader. Perhaps I need to direct the InputFactory to not use the ValidatingStreamReader, but instead the TypedStreamReader?
A conformant XML parser is required to reject ill-formed content. You need to fix your (non-)XML, and let the parser do its job.
That is not validation but a basic well-formedness problem. Validation is used with schemas such as DTD, RELAX NG or XML Schema, which can define specific structure or values for textual content. So validation-related settings will not have any effect; validation would only apply once the content is well-formed XML.
What you need to do is pre-process the content to remove or replace the small set of characters that are illegal in XML. This includes the 0 byte.
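A minimal sketch of that pre-processing (the class and method names here are mine, not part of Woodstox; the character ranges are taken from the XML 1.0 Char production):

```java
import java.util.regex.Pattern;

public class XmlCleaner {
    // Characters allowed by XML 1.0: #x9, #xA, #xD, #x20-#xD7FF, #xE000-#xFFFD.
    // Note: this simple version also strips supplementary-plane characters
    // (surrogate pairs); extend the class if you need to keep those.
    private static final Pattern XML_INVALID =
            Pattern.compile("[^\\x09\\x0A\\x0D\\x20-\\uD7FF\\uE000-\\uFFFD]");

    public static String stripInvalidXmlChars(String s) {
        return XML_INVALID.matcher(s).replaceAll("");
    }

    public static void main(String[] args) {
        String dirty = "<a>abc\u0000def</a>"; // contains the NUL character
        System.out.println(stripInvalidXmlChars(dirty)); // <a>abcdef</a>
    }
}
```

Run the input through a filter like this (or an equivalent FilterReader) before handing the stream to the XMLEventReader.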
Related
I have a XML Document where there are nested tags that should not be interpreted as XML tags
For example something like this
<something>cbaabc</something> should be parsed as a plain String "cbaabc" (it should be mentioned that the document has other elements as well that get parsed just fine). Jackson, though, tries to interpret it as an Object and I don't know how to prevent this. I tried using @JacksonXmlText, turning off wrapping, and a custom Deserializer, but I didn't get it to work.
The <a should be translated to &lt;a. This back-and-forth conversion normally happens with every XML API; setting and getting text will use those entities (&lt;, &gt;, &amp;).
Another option is to use an additional CDATA section: <![CDATA[ ... ]]>.
<something><![CDATA[cbaabc]]></something>
If you cannot correct that, and have to live with an already corrupted XML text, you must do your own hack:
Load the wrong XML in a String
Repair the XML
Pass the XML string to jackson
Repairing:
String xml = ...
xml = xml.replaceAll("<(/?a\\b[^>]*)>", "&lt;$1&gt;"); // Links
StringReader in = new StringReader(xml);
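Put together as a runnable sketch (the class name and sample string are illustrative; the final step of feeding the repaired string to Jackson's XmlMapper depends on your POJO, so it is only indicated by a comment):

```java
public class XmlRepair {
    // Escape literal <a ...> and </a> link tags left unescaped inside
    // text content, so the result is well-formed XML again.
    public static String repair(String xml) {
        return xml.replaceAll("<(/?a\\b[^>]*)>", "&lt;$1&gt;");
    }

    public static void main(String[] args) {
        String broken = "<something>see <a href=\"x\">link</a> here</something>";
        System.out.println(repair(broken));
        // <something>see &lt;a href="x"&gt;link&lt;/a&gt; here</something>
        // Then: xmlMapper.readValue(new StringReader(repair(broken)), ...);
    }
}
```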
I have a Java program which process xml files. When transforming xml into another xml file base on certain schema( xsd/xsl) it throws following error.
This error only throws for one xml file which has a xml tag like this.
<abc>xxx yyyy “ggggg vvvv” uuuu</abc>
But after removing or re-type two quotes, it doesn’t throw the error.
Anybody, please assist me to resolve this issue.
java.io.CharConversionException: Character larger than 4 bytes are not supported: byte 0x93 implies a length of more than 4 bytes
at org.apache.xmlbeans.impl.piccolo.xml.UTF8XMLDecoder.decode(UTF8XMLDecoder.java:162)
<?xml version= “1.0’ encoding =“UTF-8” standalone =“yes “?><xyz xml s=“http://pqr.yy”><Header><abc> aaa “cccc” aaaaa vvv</abc></Header></xyz>.
As others have reported in comments, it has failed because the typographical quotation marks are encoded in Windows-1252, not in UTF-8, so the parser hasn't managed to decode them.
The encoding declared in the XML declaration must match the actual encoding used for the characters.
To find out how this error arose, and to prevent it happening again, we would need to know where this (wannabe) XML file came from, and how it was created.
My guess would be that someone used a "smart" editor; Microsoft editors in particular are notorious for changing what you type to what Microsoft think you wanted to type. If you're editing XML by hand it's best to use an XML-aware editor.
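If you cannot fix the producer, one recovery option (a sketch; it assumes the whole file really is Windows-1252) is to decode the bytes with the actual encoding and re-encode them as the UTF-8 the XML declaration promises:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class Transcode {
    // Decode bytes with their real charset, then encode as UTF-8,
    // so the content matches an encoding="UTF-8" declaration.
    public static byte[] toUtf8(byte[] raw, Charset actual) {
        return new String(raw, actual).getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] cp1252 = {(byte) 0x93, 'o', 'k', (byte) 0x94}; // "ok" in curly quotes
        byte[] utf8 = toUtf8(cp1252, Charset.forName("windows-1252"));
        System.out.println(new String(utf8, StandardCharsets.UTF_8)); // “ok”
    }
}
```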
Currently, I'm working on a feature that involves parsing XML that we receive from another product. I decided to run some tests against some actual customer data, and it looks like the other product is allowing input from users that should be considered invalid. Anyways, I still have to try and figure out a way to parse it. We're using javax.xml.parsers.DocumentBuilder and I'm getting an error on input that looks like the following.
<xml>
...
<description>Example:Description:<THIS-IS-PART-OF-DESCRIPTION></description>
...
</xml>
As you can tell, the description has what appears to be an invalid tag inside of it (<THIS-IS-PART-OF-DESCRIPTION>). Now, this description tag is known to be a leaf tag and shouldn't have any nested tags inside of it. Regardless, this is still an issue and yields an exception on DocumentBuilder.parse(...)
I know this is invalid XML, but it's predictably invalid. Any ideas on a way to parse such input?
That "XML" is worse than invalid – it's not well-formed; see Well Formed vs Valid XML.
An informal assessment of the predictability of the transgressions does not help. That textual data is not XML. No conformant XML tools or libraries can help you process it.
Options, most desirable first:
Have the provider fix the problem on their end. Demand well-formed XML. (Technically the phrase well-formed XML is redundant but may be useful for emphasis.)
Use a tolerant markup parser to cleanup the problem ahead of parsing as XML:
Standalone: xmlstarlet has robust recovering and repair capabilities credit: RomanPerekhrest
xmlstarlet fo -o -R -H -D bad.xml 2>/dev/null
Standalone and C/C++: HTML Tidy works with XML too. Taggle is a port of TagSoup to C++.
Python: Beautiful Soup is Python-based. See notes in the Differences between parsers section. See also answers to this question for more suggestions for dealing with not-well-formed markup in Python, including especially lxml's recover=True option. See also this answer for how to use codecs.EncodedFile() to cleanup illegal characters.
Java: TagSoup and JSoup focus on HTML. FilterInputStream can be used for preprocessing cleanup.
.NET:
XmlReaderSettings.CheckCharacters can be disabled to get past illegal XML character problems.
@jdweng notes that XmlReaderSettings.ConformanceLevel can be set to ConformanceLevel.Fragment so that XmlReader can read XML Well-Formed Parsed Entities lacking a root element.
@jdweng also reports that XmlReader.ReadToFollowing() can sometimes be used to work around XML syntactical issues, but note the rule-breaking warning in #3 below.
Microsoft.Language.Xml.XMLParser is said to be “error-tolerant”.
Go: Set Decoder.Strict to false as shown in this example by @chuckx.
PHP: See DOMDocument::$recover and libxml_use_internal_errors(true). See nice example here.
Ruby: Nokogiri supports “Gentle Well-Formedness”.
R: See htmlTreeParse() for fault-tolerant markup parsing in R.
Perl: See XML::Liberal, a "super liberal XML parser that parses broken XML."
Process the data as text manually using a text editor or programmatically using character/string functions. Doing this programmatically can range from tricky to impossible, as what appears to be predictable often is not -- rule breaking is rarely bound by rules.
For invalid character errors, use regex to remove/replace invalid characters:
PHP: preg_replace('/[^\x{0009}\x{000a}\x{000d}\x{0020}-\x{D7FF}\x{E000}-\x{FFFD}]+/u', ' ', $s);
Ruby: string.tr("^\u{0009}\u{000a}\u{000d}\u{0020}-\u{D7FF}\u{E000}-\u{FFFD}", ' ')
JavaScript: inputStr.replace(/[^\x09\x0A\x0D\x20-\xFF\x85\xA0-\uD7FF\uE000-\uFDCF\uFDE0-\uFFFD]/gm, '')
For ampersands, use regex to replace matches with &amp;: credit: blhsin, demo
&(?!(?:#\d+|#x[0-9a-f]+|\w+);)
Note that the above regular expressions won't take comments or CDATA
sections into account.
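For completeness, a Java port of the ampersand fix (a sketch; the pattern is the one quoted above, applied case-insensitively on the hex digits, and it has the same comment/CDATA caveat):

```java
import java.util.regex.Pattern;

public class AmpFixer {
    // Escape a bare '&' that does not already start a character
    // or entity reference (&#10; &#x0A; &amp; etc.).
    private static final Pattern BARE_AMP =
            Pattern.compile("&(?!(?:#\\d+|#x[0-9a-fA-F]+|\\w+);)");

    public static String fix(String s) {
        return BARE_AMP.matcher(s).replaceAll("&amp;");
    }

    public static void main(String[] args) {
        System.out.println(fix("<a>Tom & Jerry &amp; Spike</a>"));
        // <a>Tom &amp; Jerry &amp; Spike</a>
    }
}
```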
A standard XML parser will NEVER accept invalid XML, by design.
Your only option is to pre-process the input to remove the "predictably invalid" content, or wrap it in CDATA, prior to parsing it.
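For the specific input above, that pre-processing could be as small as escaping the one known offender before handing the string to DocumentBuilder (a sketch; the tag name is the one from the question, and the class name is mine):

```java
public class DescriptionFix {
    // Escape the known non-XML tag so the description survives as text.
    public static String escapeBadTag(String xml) {
        return xml.replace("<THIS-IS-PART-OF-DESCRIPTION>",
                           "&lt;THIS-IS-PART-OF-DESCRIPTION&gt;");
    }

    public static void main(String[] args) {
        String in = "<description>Example:<THIS-IS-PART-OF-DESCRIPTION></description>";
        System.out.println(escapeBadTag(in));
        // <description>Example:&lt;THIS-IS-PART-OF-DESCRIPTION&gt;</description>
    }
}
```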
The accepted answer is good advice, and contains very useful links.
I'd like to add that this, and many other cases of not-well-formed and/or DTD-invalid XML, can be repaired using SGML, the ISO-standardized superset of HTML and XML. In your case, what works is to declare the bogus THIS-IS-PART-OF-DESCRIPTION element as an SGML empty element, and then use e.g. the osx program (part of the OpenSP/OpenJade SGML package) to convert it to XML. For example, if you supply the following to osx
<!DOCTYPE xml [
<!ELEMENT xml - - ANY>
<!ELEMENT description - - ANY>
<!ELEMENT THIS-IS-PART-OF-DESCRIPTION - - EMPTY>
]>
<xml>
<description>blah blah
<THIS-IS-PART-OF-DESCRIPTION>
</description>
</xml>
it will output well-formed XML for further processing with the XML tools of your choice.
Note, however, that your example snippet has another problem in that element names starting with the letters xml or XML or Xml etc. are reserved in XML, and won't be accepted by conforming XML parsers.
IMO these cases should be solved by using JSoup.
Below is a not-really-an-answer for this specific case, but I found this on the web (thanks to inuyasha82 on Coderwall). This code bit inspired me for another similar problem while dealing with malformed XMLs, so I share it here.
Please do not edit what is below; it is reproduced as it appears on the original website.
To be valid, the XML format requires a unique root element declared in the document.
So for example a valid xml is:
<root>
<element>...</element>
<element>...</element>
</root>
But if you have a document like:
<element>...</element>
<element>...</element>
<element>...</element>
<element>...</element>
This will be considered malformed XML, so many XML parsers just throw an Exception complaining about a missing root element.
In this example there is a solution for how to solve that problem and successfully parse the malformed XML above.
Basically what we will do is to add programmatically a root element.
So first of all you have to open the resource that contains your "malformed" XML (i.e. a file):
File file = new File(pathtofile);
Then open a FileInputStream:
FileInputStream fis = new FileInputStream(file);
If we try to parse this stream with any XML library at that point we will raise the malformed document Exception.
Now we create a list of InputStream objects with three elements:
A ByteArrayInputStream element that contains the string: <root>
Our FileInputStream
A ByteArrayInputStream with the string: </root>
So the code is:
List<InputStream> streams =
Arrays.asList(
new ByteArrayInputStream("<root>".getBytes()),
fis,
new ByteArrayInputStream("</root>".getBytes()));
Now using a SequenceInputStream, we create a container for the List created above:
InputStream cntr =
new SequenceInputStream(Collections.enumeration(streams));
Now we can use any XML parser library on cntr, and it will be parsed without any problem (checked with the StAX library).
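The steps above, consolidated into one self-contained sketch (class and variable names are mine; the parsing step just counts start elements with the JDK's StAX parser to show the combined stream is well-formed):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class WrapInRoot {
    // Wrap a rootless XML fragment stream in <root>...</root>.
    public static InputStream wrap(InputStream fragment) {
        List<InputStream> streams = Arrays.asList(
                new ByteArrayInputStream("<root>".getBytes(StandardCharsets.UTF_8)),
                fragment,
                new ByteArrayInputStream("</root>".getBytes(StandardCharsets.UTF_8)));
        return new SequenceInputStream(Collections.enumeration(streams));
    }

    // Parse the stream and count START_ELEMENT events.
    public static int countStartElements(InputStream in) throws Exception {
        XMLStreamReader r = XMLInputFactory.newFactory().createXMLStreamReader(in);
        int count = 0;
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT) count++;
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        InputStream fragment = new ByteArrayInputStream(
                "<element>a</element><element>b</element>".getBytes(StandardCharsets.UTF_8));
        System.out.println(countStartElements(wrap(fragment))); // 3 (root + 2 elements)
    }
}
```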
I'm using the Jackson XmlMapper to map an XML into a POJO, but I have the following problem:
My XML looks like this (not the original one, only an example):
<?xml version="1.0" encoding="UTF-8"?>
<result>
<pojo>
<name>test</name>
</pojo>
</result>
The problem is, I don't want to parse the "result" object. I want to parse the pojo as its own object. Can I do this with XmlMapper?
thank you!
Artur
You can do it, but you must write some boilerplate code.
You must create an instance of XMLStreamReader to be able to do customized reading of your XML input. The next() method advances to the next parsing event of the reader. It's a rather tricky method, tied to the internal rules of the reader, so read the documentation to understand its particularities:
From the Javadoc:
int javax.xml.stream.XMLStreamReader.next() throws XMLStreamException
Get next parsing event - a processor may return all contiguous
character data in a single chunk, or it may split it into several
chunks. If the property javax.xml.stream.isCoalescing is set to true
element content must be coalesced and only one CHARACTERS event must
be returned for contiguous element content or CDATA Sections. By
default entity references must be expanded and reported transparently
to the application. An exception will be thrown if an entity reference
cannot be expanded. If element content is empty (i.e. content is "")
then no CHARACTERS event will be reported.
Given the following XML:
<foo><!--description-->content text<![CDATA[<greeting>Hello</greeting>]]>other content</foo>
The behavior of calling next() when being on foo will be:
1- the comment (COMMENT)
2- then the characters section (CHARACTERS)
3- then the CDATA section (another CHARACTERS)
4- then the next characters section (another CHARACTERS)
5- then the END_ELEMENT
NOTE: empty element (such as <tag/>) will be reported with two separate events: START_ELEMENT, END_ELEMENT - This preserves parsing equivalency of empty element to <tag></tag>. This method will throw an IllegalStateException if it is called after hasNext() returns false.
Returns: the integer code corresponding to the current parse event
Let me illustrate the way to proceed with a unit test:
@Test
public void mapXmlToPojo() throws Exception {
XMLInputFactory factory = XMLInputFactory2.newFactory();
InputStream inputFile = MapXmlToPojo.class.getResourceAsStream("pojo.xml");
XMLStreamReader xmlStreamReader = factory.createXMLStreamReader(inputFile);
XmlMapper xmlMapper = new XmlMapper();
xmlStreamReader.next();
xmlStreamReader.next();
Pojo pojo = xmlMapper.readValue(xmlStreamReader, Pojo.class);
Assert.assertEquals("test", pojo.getName());
}
Just to add more on this (in order to make it generic): I had a scenario where I had to extract a specific element and map it to a Java object. In this case we can put a conditional check in the event loop; whenever that tag is encountered, read it out and map it.
I have added DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES to ignore fields that are not needed, in case our POJO has fewer fields to map than what we are getting from the source.
Below is the tested code -
while (xmlStreamReader.hasNext()) {
    if (xmlStreamReader.next() == XMLEvent.START_ELEMENT) {
        QName name = xmlStreamReader.getName();
        if ("specific_name".equalsIgnoreCase(name.getLocalPart())) {
            objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
            result = objectMapper.readValue(xmlStreamReader, Pojo.class);
            break;
        }
    }
}
I'm developing a Java class that converts an HTML string into a FO string via an XSLT document.
Then, the resulting FO string is processed by FOP to create a PDF file.
The problem is that when a special character is found by FOP, I get an error:
(e.g.) The entity "ldquo" was referenced, but not declared.
Now my solution is to replace all these special characters with their Unicode reference.
In this example, "&ldquo;" becomes "&#8220;"
Can I declare those entities in my XSLT file without doing zillions of StringUtils.replaceAll()?
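One way to avoid the string replacements (a sketch; only the two quote entities are shown, and the root element name is illustrative) is to declare the HTML entities you need in an internal DTD subset at the top of the document fed into the pipeline, so the parser can expand them itself:

```xml
<!DOCTYPE html [
  <!ENTITY ldquo "&#8220;">
  <!ENTITY rdquo "&#8221;">
]>
```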
Solved using JTidy with setXmlOut(true)