Saxon is slow at parsing - Java

I am trying to parse some XML with Saxon so I can run some XPath queries on it, but I have two problems. The first is that Saxon takes a very long time to build a very short XHTML document.
The code is this:
Processor processorInstance = new Processor(false);
processorInstance.setConfigurationProperty(FeatureKeys.DTD_VALIDATION, false);
XPathCompiler XPathCompilerInstance = processorInstance.newXPathCompiler();
XPathCompilerInstance.setBackwardsCompatible(false);
String expressionTitre = "//div[@class='score_global']/preceding-sibling::img[1]";
XPathExecutable XPathExecutableInstance = XPathCompilerInstance.compile(expressionTitre);
XPathSelector selector = XPathExecutableInstance.load();
logger.info("Xpath compiled.");
// Phase 2, load xml document.
DocumentBuilder documentBuilderInstance = processorInstance.newDocumentBuilder();
documentBuilderInstance.setSchemaValidator(null);
documentBuilderInstance.setLineNumbering(false);
documentBuilderInstance.setRetainPSVI(false);
XdmNode context = documentBuilderInstance.build(new File("sample/sample.xml")); // This line takes ages to return.
What I don't understand is that if I do it with SAX, it loads at normal speed :(.
What did I forget to provide in Saxon?
Java 1.6
Saxon 9.1.0.8
The second problem is that it is unable to process accented characters. My XML starts like this:
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
So I removed the xml:lang and lang attributes, but had no better luck :(
Do you have any ideas ?
Thank you !

Well, after much reading, it was simply necessary to define a CatalogResolver and download the XHTML DTDs locally. In the end I dropped Saxon and used plain JAXP/SAXReader instead.
This page http://xml.apache.org/commons/components/resolver/resolver-article.html proved very interesting.
Hope these considerations prove useful to someone :)
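For reference, here is a minimal sketch of the catalog approach (an assumed setup, not the original code), using the Apache xml-commons CatalogResolver with a local catalog file (catalog.xml, a placeholder name) that maps the XHTML DTD identifiers to the downloaded copies:
import org.apache.xml.resolver.tools.CatalogResolver;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.XMLReaderFactory;

public class CatalogParse {
    public static void main(String[] args) throws Exception {
        // Tell the resolver where the local catalog lives (path is a placeholder).
        System.setProperty("xml.catalog.files", "catalog.xml");

        XMLReader reader = XMLReaderFactory.createXMLReader();
        // CatalogResolver implements EntityResolver, so DTD references are served
        // from the local copies instead of being fetched from w3.org.
        reader.setEntityResolver(new CatalogResolver());
        reader.parse(new InputSource("sample/sample.xml"));
    }
}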

OK, I've found out that although I configured Saxon not to validate, it nonetheless tried to resolve the DTD URI, did not manage to find it locally, so it went online and got a 503 from the W3C, which takes a long time to return.
I removed the DTD declaration from my XML, and it worked.
My next step is to make it stop trying to resolve the DTD at all. I am currently reading the Saxon docs and playing with an entity resolver, and it should be OK.
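For illustration, a rough sketch of that approach (an assumed implementation, not the original poster's final code): build the document from a SAXSource whose XMLReader short-circuits every external entity, since the s9api DocumentBuilder accepts any JAXP Source:
import java.io.File;
import java.io.StringReader;
import javax.xml.transform.sax.SAXSource;

import org.xml.sax.EntityResolver;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.XMLReaderFactory;

import net.sf.saxon.s9api.DocumentBuilder;
import net.sf.saxon.s9api.Processor;
import net.sf.saxon.s9api.XdmNode;

public class NoDtdFetch {
    public static void main(String[] args) throws Exception {
        Processor processor = new Processor(false);
        DocumentBuilder builder = processor.newDocumentBuilder();

        XMLReader reader = XMLReaderFactory.createXMLReader();
        // Answer every external entity request (including the XHTML DTD) with an
        // empty document, so nothing is ever fetched from the network.
        reader.setEntityResolver(new EntityResolver() {
            public InputSource resolveEntity(String publicId, String systemId) {
                return new InputSource(new StringReader(""));
            }
        });

        SAXSource source = new SAXSource(reader,
                new InputSource(new File("sample/sample.xml").toURI().toString()));
        XdmNode context = builder.build(source);
        System.out.println("Built: " + context.getBaseURI());
    }
}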


iText dataElements loop performance

Hi, recently I've been working on a project where one of the reporting modules uses the iText library (version 2.0.8). Everything worked fine until the amount of data became huge (around 50,000+ rows). I really need suggestions from the experts on Stack Overflow to improve my code.
My code logic: I write HTML code with all the data contained inside. Once the full HTML code is done, it is stored in a variable called "content"; then I convert the "content" variable into a list of IElement and run a for loop to add each element to the document. I realise this loop is causing bad performance (CPU usage is high) and the report generates very slowly (it even caused a connection timeout).
The following is the part of the code that causes very high CPU usage for the Java process.
// The content String variable contains the HTML code of the report
// (from <head> to <body>, with <table> as the main content structuring the data rows).
// I didn't include it here because the code is huge.
String PDFFileName = "123.pdf";
PdfDocument pdf = new PdfDocument(new PdfWriter(new FileOutputStream(PDFFileName)));
Document document = new Document(pdf);
List<IElement> dataElements = HtmlConverter.convertToElements(content.toString(), converterProperties);
for (IElement element : dataElements) {
    if (element instanceof IBlockElement) {
        document.add((IBlockElement) element);
    }
}
I know the loop is the issue, but I don't know of a better, more efficient way for my case; I hope someone can help me with this! Thank you. Please comment below if you need extra information (sorry, I can't really include all the code since it's very large).
Specification: iText 2.0.8, Java 8, HTML, CSS.
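For what it's worth, one direction to test (a sketch only, not from the original thread, and assuming the iText 7 pdfHTML HtmlConverter that the snippet above appears to use): let the converter write straight into the PdfDocument instead of materialising the whole IElement list first:
import java.io.FileOutputStream;

import com.itextpdf.html2pdf.ConverterProperties;
import com.itextpdf.html2pdf.HtmlConverter;
import com.itextpdf.kernel.pdf.PdfDocument;
import com.itextpdf.kernel.pdf.PdfWriter;

public class DirectConversion {
    public static void main(String[] args) throws Exception {
        // Placeholder HTML; in the real code this is the large "content" string.
        String content = "<html><body><table><tr><td>row</td></tr></table></body></html>";
        ConverterProperties converterProperties = new ConverterProperties();

        PdfDocument pdf = new PdfDocument(new PdfWriter(new FileOutputStream("123.pdf")));
        // Convert the HTML straight into the PDF, with no intermediate element list.
        HtmlConverter.convertToPdf(content, pdf, converterProperties);
    }
}
Whether this helps with the CPU usage would need to be measured; it mainly avoids holding the complete element tree in memory before anything is written out.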

Workaround for XMLSchema not supporting maxOccurs larger than 5000

My problem is with parsing an XSD schema that has elements with maxOccurs larger than 5000 (but not unbounded).
This is actually a known issue in either Xerces (which I'm using, version 2.9.1) or JAXP, as described here: http://bugs.sun.com/view_bug.do;jsessionid=85335466c2c1fc52f0245d20b2e?bug_id=4990915
I already know that if I changed the maxOccurs values in my XSD from numbers larger than 5000 to unbounded, everything would work. Sadly, this is not an option in my case (I cannot meddle with the XSD file).
My question is:
Does someone know some other workaround in Xerces for this issue? Or
Can someone recommend another XML parser that does not have this limitation?
Thanks!
I had the same problem. I used this:
System.setProperty("jdk.xml.maxOccurLimit", "XXXXX");
I have found a solution that doesn't require changing the parser.
There is a FEATURE_SECURE_PROCESSING feature which puts that 5000 limitation on maxOccurs (along with several others).
And here is the document describing the limitations: http://docs.oracle.com/javase/7/docs/technotes/guides/xml/jaxp/JAXP-Compatibility_160.html#JAXP_security
I came across this thread while looking for solutions to this problem when using the xjc command in the console.
For anyone who is using the xjc command to process an XSD, this works for me:
$ xjc -nv foo.xsd
Be aware though:
By default, the XJC binding compiler performs strict validation of the source schema before processing it. Use this option to disable strict schema validation. This does not mean that the binding compiler will not perform any validation, but means that it will perform a less-strict validation.
So if you think your XSD is from a good source, using less strict validation should not be a problem.
If you use the Eclipse IDE with the Dali plugin for JAXB, you may get the aforementioned error in the console.
The error can be avoided if you uncheck 'Use strict validation' on the 'Classes Generator Options' panel when setting the options for JAXB generation from an XSD file.
That panel is the third one, after 'Java Project' and 'Generate classes from Schema'.
Adding the additional argument -nv suggested by @minjun-yu also works. Instead of applying my first suggestion, you can set that argument in the fourth panel, labeled 'Classes Generator Extension Configurations'.
When parsing data to load the JAXB-generated classes, if you are validating against a schema, you may still get the SAXException. As pointed out by @mzywiol and @marioosh, the exception is avoided by setting a special feature when creating the SchemaFactory:
import java.net.URL;
import javax.xml.XMLConstants;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;

SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
// Avoid SAXParseException on maxOccurs > 5000
sf.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, false);
URL xsdURL = TestParse.class.getResource(xsdLocation);
Schema schema = sf.newSchema(xsdURL);
JAXBContext ctx = JAXBContext.newInstance(MyJAXBClass.class.getPackage().getName());
Unmarshaller unmarshaller = ctx.createUnmarshaller();
unmarshaller.setSchema(schema);

Is there a decent, customisable, HTML to Markdown Java API?

I want to save text I scrape from various sources without the HTML tags that are on it, but also keeping as much of the structure as I reasonably can.
Markdown seems to be the solution to this (or possibly MultiMarkdown).
There is a question which offers a suggestion on converting from HTML to Markdown, but I want to specify some specific things:
ALL links (including images) are referenced at the END only (i.e. no inline URLs)
NO embedded HTML (I'm not even 100% sure yet how I'd like to deal with difficult HTML... but it won't be embedded!)
So my question is as stated in the title: Is there a decent, customisable, HTML to Markdown Java API?
You could try adapting HtmlCleaner which provides a workable interface onto the DOM:
HtmlCleaner htmlCleaner = new HtmlCleaner();
TagNode root = htmlCleaner.clean( stream );
Object[] found = root.evaluateXPath( "//div[@id='something']" );
if( found.length > 0 && found[0] instanceof TagNode ) {
    ((TagNode)found[0]).removeFromTree();
}
This would allow you to structure your output stream in any format that you want using a fairly simple API.
There is a great library for JS called Turndown; you can try it online here. It can be partially customized. For example, links can be referenced at the end. And as far as I know there is no embedded HTML, everything is transformed.
I needed it for Java (as in the linked question), so I ported it. The library for Java is called CopyDown; it has the same test suite as Turndown.
To install it with Gradle:
dependencies {
compile 'io.github.furstenheim:copy_down:1.0'
}
Then to use it:
CopyDown converter = new CopyDown();
String myHtml = "<h1>Some title</h1><div>Some html<p>Another paragraph</p></div>";
String markdown = converter.convert(myHtml);
System.out.println(markdown);
> Some title\n==========\n\nSome html\n\nAnother paragraph\n

How to sanitize HTML code in Java to prevent XSS attacks?

I'm looking for a class/util etc. to sanitize HTML code, i.e. remove dangerous tags, attributes and values, to avoid XSS and similar attacks.
I get the HTML code from a rich text editor (e.g. TinyMCE), but it can be sent in a malicious way that bypasses TinyMCE validation ("Data submitted form off-site").
Is there anything as simple to use as InputFilter in PHP? The perfect solution I can imagine works like this (assume the sanitizer is encapsulated in an HtmlSanitizer class):
String unsanitized = "...<...>..."; // some potentially
// dangerous html here on input
HtmlSanitizer sat = new HtmlSanitizer(); // sanitizer util class created
String sanitized = sat.sanitize(unsanitized); // voila - sanitized is safe...
Update: the simpler the solution, the better! A small util class with as few external dependencies on other libraries/frameworks as possible would be best for me.
How about that?
You can try OWASP Java HTML Sanitizer. It is very simple to use.
PolicyFactory policy = new HtmlPolicyBuilder()
.allowElements("a")
.allowUrlProtocols("https")
.allowAttributes("href").onElements("a")
.requireRelNofollowOnLinks()
.build();
String safeHTML = policy.sanitize(untrustedHTML);
Thanks to @Saljack's answer. Just to elaborate more on the OWASP Java HTML Sanitizer: it worked out really well (and quickly) for me. I just added the following to the pom.xml in my Maven project:
<dependency>
<groupId>com.googlecode.owasp-java-html-sanitizer</groupId>
<artifactId>owasp-java-html-sanitizer</artifactId>
<version>20150501.1</version>
</dependency>
Check here for latest release.
Then I added this function for sanitization:
private String sanitizeHTML(String untrustedHTML){
    PolicyFactory policy = new HtmlPolicyBuilder()
        .allowAttributes("src").onElements("img")
        .allowAttributes("href").onElements("a")
        .allowStandardUrlProtocols()
        .allowElements(
            "a", "img"
        ).toFactory();

    return policy.sanitize(untrustedHTML);
}
More tags can be added by extending the comma-delimited parameter list in the allowElements method.
Just add this line before passing the bean off to save the data:
bean.setHtml(sanitizeHTML(bean.getHtml()));
That's it!
For more complex logic, this library is very flexible and it can handle more sophisticated sanitizing implementation.
You could use OWASP ESAPI for Java, which is a security library that is built to do such operations.
Not only does it have encoders for HTML, it also has encoders to perform JavaScript, CSS and URL encoding. Sample uses of ESAPI can be found in the XSS prevention cheatsheet published by OWASP.
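A small sketch of the ESAPI encoding approach (assuming the ESAPI dependency and its ESAPI.properties configuration are on the classpath):
import org.owasp.esapi.ESAPI;

public class EsapiEncodeExample {
    public static void main(String[] args) {
        String untrusted = "<script>alert('xss')</script>";

        // Encode for the context the value will be written into, instead of stripping tags.
        System.out.println(ESAPI.encoder().encodeForHTML(untrusted));
        System.out.println(ESAPI.encoder().encodeForJavaScript(untrusted));
    }
}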
You could use the OWASP AntiSamy project to define a site policy that states what is allowed in user-submitted content. The site policy can be later used to obtain "clean" HTML that is displayed back. You can find a sample TinyMCE policy file on the AntiSamy downloads page.
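A sketch of how AntiSamy is typically driven from such a policy file (the policy file name below is a placeholder; the TinyMCE sample policy from the downloads page would be used the same way):
import org.owasp.validator.html.AntiSamy;
import org.owasp.validator.html.CleanResults;
import org.owasp.validator.html.Policy;

public class AntiSamyExample {
    public static void main(String[] args) throws Exception {
        // Load the site policy describing what user-submitted HTML may contain.
        Policy policy = Policy.getInstance("antisamy-tinymce.xml");

        AntiSamy antiSamy = new AntiSamy();
        CleanResults results = antiSamy.scan("<div onclick=\"evil()\">hello</div>", policy);

        // Clean HTML with disallowed tags and attributes removed.
        System.out.println(results.getCleanHTML());
        System.out.println("Errors: " + results.getNumberOfErrors());
    }
}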
HTML-escaping inputs works very well. But in some cases business rules might require you NOT to escape the HTML. Using a regex is not fit for the task, and it is too hard to come up with a good solution that way.
The best solution I found was to use: http://jsoup.org/cookbook/cleaning-html/whitelist-sanitizer
It builds a DOM tree from the provided input and filters out any element not previously allowed by a Whitelist. The API also has other functions for cleaning up HTML.
It can also be used with javax.validation's @SafeHtml(whitelistType=, additionalTags=)
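A minimal sketch of the jsoup whitelist sanitizer (assuming the org.jsoup dependency; Whitelist is the class name from the jsoup versions contemporary with that cookbook page, later renamed Safelist):
import org.jsoup.Jsoup;
import org.jsoup.safety.Whitelist;

public class JsoupSanitize {
    public static void main(String[] args) {
        String unsafe = "<p>Hello <a href='http://example.com/' onclick='steal()'>link</a></p>";

        // Keep only the tags and attributes permitted by the whitelist.
        String safe = Jsoup.clean(unsafe, Whitelist.basic());

        System.out.println(safe);
    }
}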
Regarding Antisamy, you may want to check this regarding the dependencies:
http://code.google.com/p/owaspantisamy/issues/detail?id=95&can=1&q=redyetidave

Is Scala/Java not respecting w3 "excess dtd traffic" specs?

I'm new to Scala, so I may be off base on this, but I want to know if the problem is my code. Given the Scala file httpparse, simplified to:
object Http {
  import java.io.InputStream;
  import java.net.URL;

  def request(urlString: String): (Boolean, InputStream) =
    try {
      val url = new URL(urlString)
      val body = url.openStream
      (true, body)
    }
    catch {
      case ex: Exception => (false, null)
    }
}

object HTTPParse extends Application {
  import scala.xml._;
  import java.net._;

  def fetchAndParseURL(URL: String) = {
    val (true, body) = Http request(URL)
    val xml = XML.load(body) // <-- Error happens here in .load() method
    "True"
  }
}
Which is run with (the URL doesn't matter, this is just a toy example):
scala> HTTPParse.fetchAndParseURL("http://stackoverflow.com")
The result invariably:
java.io.IOException: Server returned HTTP response code: 503 for URL: http://www.w3.org/TR/html4/strict.dtd
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1187)
at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:973)
at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startEntity(XMLEnti...
I've seen the Stack Overflow thread on this with respect to Java, as well as the W3C's Systems Team blog entry about not trying to access this DTD via the web. I've also isolated the error to the XML.load() method, which is a Scala library method as far as I can tell.
My question: How can I fix this? Is it a by-product of my code (cribbed from Raphael Ferreira's post), a by-product of something Java-specific that I need to address as in the previous thread, or something Scala-specific? Where is this call happening, and is it a bug or a feature? ("Is it me? It's her, right?")
I've bumped into the SAME issue, and I haven't found an elegant solution (I'm thinking of posting the question to the Scala mailing list). Meanwhile, I found a workaround: implement your own SAXParserFactoryImpl so you can set the f.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true); property. The good thing is it doesn't require any code change to the Scala code base (I agree that it should be fixed, though).
First I'm extending the default parser factory:
package mypackage;

import javax.xml.parsers.ParserConfigurationException;
import org.xml.sax.SAXNotRecognizedException;
import org.xml.sax.SAXNotSupportedException;
import com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl;

public class MyXMLParserFactory extends SAXParserFactoryImpl {
    public MyXMLParserFactory() throws SAXNotRecognizedException, SAXNotSupportedException, ParserConfigurationException {
        super();
        // Disable validation and external DTD loading so the parser never calls out to w3.org.
        super.setFeature("http://xml.org/sax/features/validation", false);
        super.setFeature("http://apache.org/xml/features/disallow-doctype-decl", false);
        super.setFeature("http://apache.org/xml/features/nonvalidating/load-dtd-grammar", false);
        super.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
    }
}
Nothing special, I just want the chance to set the property.
(Note that this is plain Java code; most probably you can write the same in Scala too)
And in your Scala code, you need to configure the JVM to use your new factory:
System.setProperty("javax.xml.parsers.SAXParserFactory", "mypackage.MyXMLParserFactory");
Then you can call XML.load without validation
Without addressing the problem itself for now: what do you expect to happen if the function request returns false below?
def fetchAndParseURL(URL:String) = {
val (true, body) = Http request(URL)
What will happen is that an exception will be thrown. You could rewrite it this way, though:
def fetchAndParseURL(URL: String) = (Http request(URL)) match {
  case (true, body) =>
    val xml = XML.load(body)
    "True"
  case _ => "False"
}
Now, to fix the XML parsing problem, we'll disable DTD loading in the parser, as suggested by others:
def fetchAndParseURL(URL: String) = (Http request(URL)) match {
  case (true, body) =>
    val f = javax.xml.parsers.SAXParserFactory.newInstance()
    f.setNamespaceAware(false)
    f.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
    val MyXML = XML.withSAXParser(f.newSAXParser())
    val xml = MyXML.load(body)
    "True"
  case _ => "False"
}
Now, I put that MyXML stuff inside fetchAndParseURL just to keep the structure of the example as unchanged as possible. For actual use, I'd separate it into a top-level object, and make "parser" a def instead of a val, to avoid problems with mutable parsers:
import scala.xml.Elem
import scala.xml.factory.XMLLoader
import javax.xml.parsers.SAXParser
object MyXML extends XMLLoader[Elem] {
  override def parser: SAXParser = {
    val f = javax.xml.parsers.SAXParserFactory.newInstance()
    f.setNamespaceAware(false)
    f.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
    f.newSAXParser()
  }
}
Import the package it is defined in, and you are good to go.
This is a Scala problem. Plain Java has an option to disable loading the DTD:
f.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
There is no equivalent in Scala.
If you want to fix it yourself, check scala/xml/parsing/FactoryAdapter.scala and put the line in here:
def loadXML(source: InputSource): Node = {
  // create parser
  val parser: SAXParser = try {
    val f = SAXParserFactory.newInstance()
    f.setNamespaceAware(false)
    // <-- insert the setFeature line here
    f.newSAXParser()
  } catch {
    case e: Exception =>
      Console.err.println("error: Unable to instantiate parser")
      throw e
  }
GClaramunt's solution worked wonders for me. My Scala conversion is as follows:
package mypackage
import org.xml.sax.{SAXNotRecognizedException, SAXNotSupportedException}
import com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl
import javax.xml.parsers.ParserConfigurationException
@throws(classOf[SAXNotRecognizedException])
@throws(classOf[SAXNotSupportedException])
@throws(classOf[ParserConfigurationException])
class MyXMLParserFactory extends SAXParserFactoryImpl() {
  super.setFeature("http://xml.org/sax/features/validation", false)
  super.setFeature("http://apache.org/xml/features/disallow-doctype-decl", false)
  super.setFeature("http://apache.org/xml/features/nonvalidating/load-dtd-grammar", false)
  super.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)
}
As mentioned in the original post, it is necessary to place the following line somewhere in your code:
System.setProperty("javax.xml.parsers.SAXParserFactory", "mypackage.MyXMLParserFactory")
It works. After some detective work, here are the details as best I can figure them:
Trying to parse a developmental RESTful interface, I build the parser and get the above (rather, a similar) error. I try various parameters to change the XML output, but get the same error. I try to connect to an XML document I quickly whip up (cribbed stupidly from the interface itself) and get the same error. Then I try to connect to anything, just for kicks, and get the same (again, likely only similar) error.
I started questioning whether it was an error with the sources or with the program, so I started searching around, and it looks like an ongoing issue, with many Google and SO hits on the same topic. This, unfortunately, made me focus on the upstream (language) aspects of the error, rather than troubleshooting further downstream at the sources themselves.
Fast forward, and the parser suddenly works on the original XML output. I confirmed that some additional work had been done server-side (just a crazy coincidence?). I don't have either of the earlier XML documents, but I suspect it is related to the document identifiers being changed.
Now, the parser works fine on the RESTful interface, as well as on any well-formed XML I can throw at it. It also fails on all the XHTML DTDs I've tried (e.g. www.w3.org). This is contrary to what @SeanReilly expects, but seems to jibe with what the W3C states.
I'm still new to Scala, so I can't determine whether I have a special or a typical case. Nor can I be assured that this problem won't recur for me in another form down the line. It does seem that pulling XHTML will continue to cause this error unless one uses a solution similar to those @GClaramunt and @J-16 SDiZ have suggested. I'm not really qualified to know whether this is a problem with the language or with my implementation of a solution (likely the latter).
For the immediate timeframe, I suspect the best solution would have been for me to ensure that it was possible to parse that XML source, rather than seeing that others have had the same error and assuming there was a functional problem with the language.
Hope this helps others.
There are two problems with what you are trying to do:
Scala's XML parser is trying to physically retrieve the DTD when it shouldn't. J-16 SDiZ seems to have some advice for this problem.
The Stack Overflow page you are trying to parse isn't XML. It's HTML 4 Strict.
The second problem isn't really possible to fix in your Scala code. Even once you get around the DTD problem, you'll find that the source just isn't valid XML (empty tags aren't closed properly, for example).
You have to either parse the page with something other than an XML parser, or investigate using a utility like tidy to convert the HTML to XML.
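For illustration, a rough sketch of the tidy route using the JTidy library (the library choice and file names here are assumptions; the answer above only mentions tidy generically):
import java.io.FileInputStream;
import java.io.FileOutputStream;

import org.w3c.tidy.Tidy;

public class TidyToXml {
    public static void main(String[] args) throws Exception {
        Tidy tidy = new Tidy();
        tidy.setXHTML(true);          // emit well-formed XHTML instead of HTML
        tidy.setQuiet(true);
        tidy.setShowWarnings(false);

        // Clean up the scraped page so an XML parser can handle it afterwards.
        tidy.parse(new FileInputStream("page.html"), new FileOutputStream("page.xhtml"));
    }
}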
My knowledge of Scala is pretty poor, but couldn't you use ConstructingParser instead?
val xml = new java.io.File("xmlWithDtd.xml")
val parser = scala.xml.parsing.ConstructingParser.fromFile(xml, true)
val doc = parser.document()
println(doc.docElem)
For Scala 2.7.7 I managed to do this with scala.xml.parsing.XhtmlParser.
Setting Xerces switches only works if you are using Xerces. An entity resolver works for any JAXP parser.
There are more generalized entity resolvers out there, but this implementation does the trick when all I'm trying to do is parse valid XHTML:
http://code.google.com/p/java-xhtml-cache-dtds-entityresolver/
It shows how trivial it is to cache the DTDs and forgo the network traffic.
In any case, this is how I fix it. I always forget. I always get the error. I always go fetch this entity resolver. Then I'm back in business.
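A minimal sketch of that kind of resolver (the classpath location of the cached DTDs is a placeholder; the project linked above covers the full set of XHTML DTDs and entity files):
import java.io.InputStream;

import org.xml.sax.EntityResolver;
import org.xml.sax.InputSource;

// Serves the XHTML DTDs from the classpath instead of fetching them from w3.org.
public class LocalDtdResolver implements EntityResolver {
    public InputSource resolveEntity(String publicId, String systemId) {
        if (systemId != null && (systemId.endsWith(".dtd") || systemId.endsWith(".ent"))) {
            String name = systemId.substring(systemId.lastIndexOf('/') + 1);
            InputStream local = getClass().getResourceAsStream("/dtds/" + name);
            if (local != null) {
                return new InputSource(local); // cached copy, no network traffic
            }
        }
        return null; // fall back to default resolution for anything else
    }
}
Register it with reader.setEntityResolver(new LocalDtdResolver()) on whatever XMLReader the application builds.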
