Prevent XML escaping in RESTful (JAX-RS) - Java

I have a RESTful service (Jersey) that returns a URL with request parameters in one of the tags. Example:
<url>http://abc:9080/testMe.jsp?req1=a&req2=b</url>
(It's part of the response)
When I get the response, I get the following ('&' becomes '&amp;'):
<url>http://abc:9080/testMe.jsp?req1=a&amp;req2=b</url>
I looked it up on Google and found many ways to do it in JAXB, but nothing for JAX-RS. I also tried a lame solution of adding a backslash, with no success.
How can I prevent this from happening in Java 1.6?

There is nothing you should change, since this is how XML works: & is a special character in XML, and any & contained in text is escaped as &amp;.
Your expected result ...=a&req2=b... would not be a well-formed XML document, whereas the result returned by Jersey is well-formed.
When you want to access the url value in the response document, you will need to parse the response with an XML parser (e.g. into a DOM document), and the parsed document will have the url value as you expect.
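For example, here is a minimal sketch, assuming the response body has already been read into a String (the class name is illustrative); parsing with DocumentBuilder gives back the text with &amp; already converted to &:

ReadUrlFromResponse.java

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class ReadUrlFromResponse {
    public static void main(String[] args) throws Exception {
        // Response body as received from Jersey (escaped, well-formed)
        String response = "<url>http://abc:9080/testMe.jsp?req1=a&amp;req2=b</url>";

        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(response)));

        // The parser un-escapes the entity: prints ...?req1=a&req2=b
        System.out.println(doc.getDocumentElement().getTextContent());
    }
}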

Related

How to parse XML with Java

I'm calling a SOAP web service from my Java application.
I get a response and I want to parse it to get the data.
The problem is that the <tranData> field contains a structure with &lt; and &gt; instead of < and >. How can I parse this document to get the data from the <tranData> field?
This is response structure:
<response>
<Portfolio>
<ID>1</ID>
<holder>2</holder>
</Portfolio>
<tranData>&lt;responseOne&gt;&lt;header&gt;&lt;code&gt;1&lt;/code&gt;&lt;/header&gt;&lt;/responseOne&gt;</tranData>
Please remember that this is only an example of the response; the amount of data will be much bigger, so the solution should be fast.
What you show us is the actual document as it is received over the wire, right? So <tranData> contains an XML string that has been escaped to not interfere with the markup of the rest of the containing document.
When you read the content of the <tranData> element, the XML processor will 'unescape' the string and give you the 'original' value:
<responseOne><header><code>1</code></header></responseOne>
What you do with that value is a different story. You can parse it as yet another XML document and retrieve the value of the <code> element, or just pass the string along to some other processing step.
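As a rough sketch (element names taken from the question; the response is assumed to be available as a String, which is not how you would normally read a SOAP response), you could read the already-unescaped <tranData> text and parse it as a second document:

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class TranDataDemo {
    public static void main(String[] args) throws Exception {
        // Response as received over the wire, <tranData> content escaped (shortened)
        String response = "<response><Portfolio><ID>1</ID><holder>2</holder></Portfolio>"
                + "<tranData>&lt;responseOne&gt;&lt;header&gt;&lt;code&gt;1&lt;/code&gt;"
                + "&lt;/header&gt;&lt;/responseOne&gt;</tranData></response>";

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        Document outer = factory.newDocumentBuilder()
                .parse(new InputSource(new StringReader(response)));

        // The parser un-escapes the text content of <tranData>
        String inner = outer.getElementsByTagName("tranData").item(0).getTextContent();

        // Parse the un-escaped string as another document and read <code>
        Document innerDoc = factory.newDocumentBuilder()
                .parse(new InputSource(new StringReader(inner)));
        System.out.println(innerDoc.getElementsByTagName("code").item(0).getTextContent()); // 1
    }
}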

Is there any way to process the rest of my XML file despite a fatal error like SAXParseException? [duplicate]

Currently, I'm working on a feature that involves parsing XML that we receive from another product. I decided to run some tests against some actual customer data, and it looks like the other product is allowing input from users that should be considered invalid. Anyways, I still have to try and figure out a way to parse it. We're using javax.xml.parsers.DocumentBuilder and I'm getting an error on input that looks like the following.
<xml>
...
<description>Example:Description:<THIS-IS-PART-OF-DESCRIPTION></description>
...
</xml>
As you can tell, the description has what appears to be an invalid tag inside of it (<THIS-IS-PART-OF-DESCRIPTION>). Now, this description tag is known to be a leaf tag and shouldn't have any nested tags inside of it. Regardless, this is still an issue and yields an exception on DocumentBuilder.parse(...)
I know this is invalid XML, but it's predictably invalid. Any ideas on a way to parse such input?
That "XML" is worse than invalid – it's not well-formed; see Well Formed vs Valid XML.
An informal assessment of the predictability of the transgressions does not help. That textual data is not XML. No conformant XML tools or libraries can help you process it.
Options, most desirable first:
Have the provider fix the problem on their end. Demand well-formed XML. (Technically the phrase well-formed XML is redundant but may be useful for emphasis.)
Use a tolerant markup parser to cleanup the problem ahead of parsing as XML:
Standalone: xmlstarlet has robust recovering and repair capabilities (credit: RomanPerekhrest):
xmlstarlet fo -o -R -H -D bad.xml 2>/dev/null
Standalone and C/C++: HTML Tidy works with XML too. Taggle is a port of TagSoup to C++.
Python: Beautiful Soup is Python-based. See notes in the Differences between parsers section. See also answers to this question for more suggestions for dealing with not-well-formed markup in Python, including especially lxml's recover=True option.
See also this answer for how to use codecs.EncodedFile() to clean up illegal characters.
Java: TagSoup and JSoup focus on HTML. FilterInputStream can be used for preprocessing cleanup.
.NET:
XmlReaderSettings.CheckCharacters can be disabled to get past illegal XML character problems.
@jdweng notes that XmlReaderSettings.ConformanceLevel can be set to ConformanceLevel.Fragment so that XmlReader can read XML Well-Formed Parsed Entities lacking a root element.
@jdweng also reports that XmlReader.ReadToFollowing() can sometimes be used to work around XML syntactical issues, but note the rule-breaking warning in #3 below.
Microsoft.Language.Xml.XMLParser is said to be “error-tolerant”.
Go: Set Decoder.Strict to false as shown in this example by @chuckx.
PHP: See DOMDocument::$recover and libxml_use_internal_errors(true). See a nice example here.
Ruby: Nokogiri supports “Gentle Well-Formedness”.
R: See htmlTreeParse() for fault-tolerant markup parsing in R.
Perl: See XML::Liberal, a "super liberal XML parser that parses broken XML."
Process the data as text manually using a text editor or programmatically using character/string functions. Doing this programmatically can range from tricky to impossible, as what appears to be predictable often is not -- rule breaking is rarely bound by rules.
For invalid character errors, use regex to remove/replace invalid characters:
PHP: preg_replace('/[^\x{0009}\x{000a}\x{000d}\x{0020}-\x{D7FF}\x{E000}-\x{FFFD}]+/u', ' ', $s);
Ruby: string.tr("^\u{0009}\u{000a}\u{000d}\u{0020}-\u{D7FF}\u{E000}-\u{FFFD}", ' ')
JavaScript: inputStr.replace(/[^\x09\x0A\x0D\x20-\xFF\x85\xA0-\uD7FF\uE000-\uFDCF\uFDE0-\uFFFD]/gm, '')
For ampersands, use regex to replace matches with &amp; (credit: blhsin, demo):
&(?!(?:#\d+|#x[0-9a-f]+|\w+);)
Note that the above regular expressions won't take comments or CDATA sections into account. A Java sketch applying this kind of pre-processing is shown after this list.
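For the Java route (option 3 applied programmatically), here is a minimal sketch; it assumes the input is small enough to hold in memory, and the class and method names are illustrative. It applies the ampersand regex above and blanks out characters that are illegal in XML 1.0 before handing the result to DocumentBuilder:

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XmlPreClean {

    // Escape stray ampersands that are not already part of an entity reference
    // (same pattern as above) and replace characters illegal in XML 1.0 with spaces.
    static String preClean(String raw) {
        String cleaned = raw.replaceAll("&(?!(?:#\\d+|#x[0-9a-fA-F]+|\\w+);)", "&amp;");
        return cleaned.replaceAll("[^\\u0009\\u000A\\u000D\\u0020-\\uD7FF\\uE000-\\uFFFD]", " ");
    }

    public static void main(String[] args) throws Exception {
        String raw = "<description>Books & Pens</description>"; // bare '&' is not well-formed
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(preClean(raw))));
        System.out.println(doc.getDocumentElement().getTextContent()); // Books & Pens
    }
}

As noted above, this kind of regex pre-processing does not account for comments or CDATA sections, and it will not repair structural problems such as the stray tag in the original question.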
A standard XML parser will NEVER accept invalid XML, by design.
Your only option is to pre-process the input to remove the "predictably invalid" content, or wrap it in CDATA, prior to parsing it.
The accepted answer is good advice, and contains very useful links.
I'd like to add that this, and many other cases of not-well-formed and/or DTD-invalid XML, can be repaired using SGML, the ISO-standardized superset of HTML and XML. In your case, what works is to declare the bogus THIS-IS-PART-OF-DESCRIPTION element as an SGML empty element and then use e.g. the osx program (part of the OpenSP/OpenJade SGML package) to convert it to XML. For example, if you supply the following to osx
<!DOCTYPE xml [
<!ELEMENT xml - - ANY>
<!ELEMENT description - - ANY>
<!ELEMENT THIS-IS-PART-OF-DESCRIPTION - - EMPTY>
]>
<xml>
<description>blah blah
<THIS-IS-PART-OF-DESCRIPTION>
</description>
</xml>
it will output well-formed XML for further processing with the XML tools of your choice.
Note, however, that your example snippet has another problem in that element names starting with the letters xml or XML or Xml etc. are reserved in XML, and won't be accepted by conforming XML parsers.
IMO these cases should be solved by using JSoup.
Below is not really an answer for this specific case, but I found this on the web (thanks to inuyasha82 on Coderwall). This bit of code inspired me for another similar problem while dealing with malformed XML, so I share it here.
Please do not edit what is below, as it is as it appears on the original website.
The XML format requires, to be valid, a unique root element declared in the document.
So for example a valid xml is:
<root>
<element>...</element>
<element>...</element>
</root>
But if you have a document like:
<element>...</element>
<element>...</element>
<element>...</element>
<element>...</element>
This will be considered malformed XML, so many XML parsers just throw an Exception complaining that there is no root element.
This example shows how to solve that problem and successfully parse the malformed XML above.
Basically, what we will do is programmatically add a root element.
So first of all, you have to open the resource that contains your "malformed" XML (i.e. a file):
File file = new File(pathtofile);
Then open a FileInputStream:
FileInputStream fis = new FileInputStream(file);
If we try to parse this stream with any XML library at that point, we will get the malformed document Exception.
Now we create a list of InputStream objects with three elements:
A ByteArrayInputStream element that contains the string: <root>
Our FileInputStream
A ByteArrayInputStream with the string: </root>
So the code is:
List<InputStream> streams =
Arrays.asList(
new ByteArrayInputStream("<root>".getBytes()),
fis,
new ByteArrayInputStream("</root>".getBytes()));
Now using a SequenceInputStream, we create a container for the List created above:
InputStream cntr =
new SequenceInputStream(Collections.enumeration(streams));
Now we can use any XML parser library on cntr, and it will be parsed without any problem (checked with the StAX library).

SAAJ: XML in TextNode. Quotes not Escaped

I try to build a SOAP request with SAAJ. All works fine, but now I have to send another XML document inside a SOAP element in the SOAP body.
I tried the following code:
SOAPElement file = service.addChildElement(new QName("nameOfTextNode"));
file.addTextNode(xmlString);
The problem is that most characters are correctly escaped (e.g. '<' -> &lt;), but not single or double quotes. I can't use CDATA or leave the quotes as they are, because I don't have control over the SOAP service, and they can't support CDATA or won't change anything.
When I use another library to escape the String first, it gets escaped twice in the SOAP request.
Does anyone have an idea? Please help.
SAAJ doesn't escape single or double quotes in this case because it's not required. If the service requires that, then it doesn't conform to the XML specification and therefore isn't a SOAP service. Since SAAJ implements SOAP, you won't be able to use it.
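To illustrate, here is a minimal sketch (the element name comes from the question; the text content is made up): addTextNode() escapes '<' and '&' but writes quotes through unchanged, and the result is still well-formed XML:

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPMessage;

public class SaajQuoteDemo {
    public static void main(String[] args) throws Exception {
        SOAPMessage message = MessageFactory.newInstance().createMessage();
        SOAPElement wrapper = message.getSOAPBody()
                .addChildElement(new QName("nameOfTextNode"));

        // In a text node only '<' and '&' must be escaped; quotes are legal as-is.
        wrapper.addTextNode("<doc attr=\"value\"/>");

        message.saveChanges();
        // Typically serialized as ...&lt;doc attr="value"/&gt;... inside the body
        message.writeTo(System.out);
    }
}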

Java WebService adding CDATA field in response

I'm working on a web service where I created the WSDL and generated the Java classes using Apache Axis2.
The problem I'm trying to solve: while creating the web service response, I have to set text with special characters, like "Books & Pens" or "Value is <10>", in some of the fields. I am looking for a way to put just these fields' content in CDATA sections.
Example response my WebService has to send:
<BOOKSHOP>
<ITEM><![CDATA[BOOKS & PENS]]></ITEM>
</BOOKSHOP>
I'm not able to find a way to do this. I have googled but found no solution.
Any help would be really appreciated.
I'm aware of converting these special characters explicitly into &amp; or &lt;, but that does not work for us.
Also, we want to put just the required fields in CDATA, not the entire XML response.
I tried @XmlCDATA, but it works only if my text is XML-structured.

How to use XML sent by an HTML form?

I have an HTML form with a textarea in which I paste some XML, for example:
<network ip_addr="10.0.0.0/8" save_ip="true">
<subnet interf_used="200" name="lan1" />
<subnet interf_used="254" name="lan2" />
</network>
When the user submits the form, that data is sent to the Java server, so in the request I get something like this:
GET /?we=%3Cnetwork+ip_addr%3D%2210.0.0.0%2F8%22+save_ip%3D%22true%22%3E%0D%0A%3Csubnet+interf_used%3D%22200%22+name%3D%22lan1%22+%2F%3E%0D%0A%3Csubnet+interf_used%3D%22254%22+name%3D%22lan2%22+%2F%3E%0D%0A%3C%2Fnetwork%3E HTTP/1.1
How can I use that in my Java application? I need to make some calculations on that data and send back newly generated XML.
This answer shows how to use the URLDecoder/URLEncoder classes to decode and encode URL strings. It should work if you pass the query string to URLDecoder's decode method.
To answer your following question (comment)
First you need to extract this xml based response from the url string. Maybe it's enough to create a substring starting with the first < char.
The String should be fed into an XML parser to create a DOM document. The last easy task would be walking through that document and copying the values to your internal network model.
Do not think about using RegExp to extract the data. Use a parser.
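A minimal sketch of that flow (the parameter value is shortened from the question, and servlet plumbing such as reading the "we" request parameter is omitted): decode the URL-encoded value, parse it into a DOM document, and read the attributes:

import java.io.StringReader;
import java.net.URLDecoder;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class FormXmlReader {
    public static void main(String[] args) throws Exception {
        // Value of the "we" parameter as it appears in the query string (shortened)
        String encoded = "%3Cnetwork+ip_addr%3D%2210.0.0.0%2F8%22+save_ip%3D%22true%22%3E"
                + "%3Csubnet+interf_used%3D%22200%22+name%3D%22lan1%22+%2F%3E%3C%2Fnetwork%3E";
        String xml = URLDecoder.decode(encoded, "UTF-8");

        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(xml)));

        Element network = doc.getDocumentElement();
        System.out.println("network ip_addr = " + network.getAttribute("ip_addr"));

        NodeList subnets = network.getElementsByTagName("subnet");
        for (int i = 0; i < subnets.getLength(); i++) {
            Element subnet = (Element) subnets.item(i);
            System.out.println(subnet.getAttribute("name") + " uses "
                    + subnet.getAttribute("interf_used") + " interfaces");
        }
    }
}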
