How to generate xpath from xsd? - java

How can I generate XPath expressions from an XSD? The XSD validates an XML document. I am working on a project where I generate a sample XML document from the XSD using Java and then generate XPath expressions from that XML. If there is any way to generate XPath directly from the XSD, please let me know.

This might be of use:
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.Stack;
import javax.xml.parsers.*;
import org.xml.sax.*;
import org.xml.sax.helpers.DefaultHandler;
/**
* SAX handler that creates and prints XPath expressions for each element encountered.
*
* The algorithm is not infallible if elements appear on different levels in the hierarchy.
* For example, the following input:
* - <elemA/>
* - <elemA/>
* - <elemB/>
* - <elemA/>
* - <elemC>
* - <elemB/>
* - </elemC>
*
* will report
*
* //elemA[0]
* //elemA[1]
* //elemB[0]
* //elemA[2]
* //elemC[0]
* //elemC[0]/elemB[1] (this is wrong: should be //elemC[0]/elemB[0] )
*
* It also ignores namespaces, and thus treats <foo:elemA> the same as <bar:elemA>.
*/
public class SAXCreateXPath extends DefaultHandler {
// map of all encountered tags and their running count
private Map<String, Integer> tagCount;
// keep track of the succession of elements
private Stack<String> tags;
// set to the tag name of the recently closed tag
String lastClosedTag;
/**
* Construct the XPath expression
*/
private String getCurrentXPath() {
String str = "//";
boolean first = true;
for (String tag : tags) {
if (first)
str = str + tag;
else
str = str + "/" + tag;
str += "["+tagCount.get(tag)+"]";
first = false;
}
return str;
}
@Override
public void startDocument() throws SAXException {
tags = new Stack<String>();
tagCount = new HashMap<String, Integer>();
}
@Override
public void startElement (String namespaceURI, String localName, String qName, Attributes atts)
throws SAXException
{
boolean isRepeatElement = false;
if (tagCount.get(localName) == null) {
tagCount.put(localName, 0);
} else {
tagCount.put(localName, 1 + tagCount.get(localName));
}
if (lastClosedTag != null) {
// an element was recently closed ...
if (lastClosedTag.equals(localName)) {
// ... and it's the same as the current one
isRepeatElement = true;
} else {
// ... but it's different from the current one, so discard it
tags.pop();
}
}
// if it's not the same element, add the new element and zero count to list
if (! isRepeatElement) {
tags.push(localName);
}
System.out.println(getCurrentXPath());
lastClosedTag = null;
}
@Override
public void endElement (String uri, String localName, String qName) throws SAXException {
// if two tags are closed in succession (without an intermediate opening tag),
// then the information about the deeper nested one is discarded
if (lastClosedTag != null) {
tags.pop();
}
lastClosedTag = localName;
}
public static void main (String[] args) throws Exception {
if (args.length < 1) {
System.err.println("Usage: SAXCreateXPath <file.xml>");
System.exit(1);
}
// Create a JAXP SAXParserFactory and configure it
SAXParserFactory spf = SAXParserFactory.newInstance();
spf.setNamespaceAware(true);
spf.setValidating(false);
// Create a JAXP SAXParser
SAXParser saxParser = spf.newSAXParser();
// Get the encapsulated SAX XMLReader
XMLReader xmlReader = saxParser.getXMLReader();
// Set the ContentHandler of the XMLReader
xmlReader.setContentHandler(new SAXCreateXPath());
String filename = args[0];
String path = new File(filename).getAbsolutePath();
if (File.separatorChar != '/') {
path = path.replace(File.separatorChar, '/');
}
if (!path.startsWith("/")) {
path = "/" + path;
}
// Tell the XMLReader to parse the XML document
xmlReader.parse("file:"+path);
}
}

I've been working on a little library to do just this, though for larger and more complex schemas, there are issues you will need to address on a case-by-case basis (e.g., filters for certain nodes). See https://stackoverflow.com/a/45020739/3096687 for a description of the solution.

There are a number of problems with such tools:
The generated XPath expression is rarely a good one. No such tool will produce meaningful predicates beyond positional information.
There is no tool (to my knowledge) that would generate an XPath expression that selects exactly a set of selected nodes.
Apart from this, such tools used without learning XPath are really harmful -- they support ignorance.
I would recommend serious learning of XPath using books and other resources such as the following:
https://stackoverflow.com/questions/339930/any-good-xslt-tutorial-book-blog-site-online/341589#341589
See the following answer for more information:
Is there an online tester for xPath selectors?

Related

Is it possible to convert XSD to XPath in Java? [duplicate]

This question already has answers here:
How to generate xpath from xsd?
(3 answers)
Closed 7 years ago.
I need to represent all the elements from an XSD schema as XPath expressions. Is there any way to do this? For example, if there are five elements in the XSD schema, I need to display the XPath of each of the five elements separately.
My suggestion is that, in the background, an XML document corresponding to the XSD has to be created and the XPath expressions generated from it. Please confirm whether this approach is correct or suggest other approaches.
Thanks.
M.Sasi kumar

Java XML library that preserves attribute order

I am writing a Java program that reads an XML file, makes some modifications, and writes back the XML.
Using the standard Java XML DOM API, the order of the attributes is not preserved.
That is, if I have an input file such as:
<person first_name="john" last_name="lederrey"/>
I might get an output file as:
<person last_name="lederrey" first_name="john"/>
That's correct, because the XML specification says that attribute order is not significant.
However, my program needs to preserve the order of the attributes, so that a person can easily compare the input and output document with a diff tool.
One solution for that is to process the document with SAX (instead of DOM):
Order of XML attributes after DOM processing
However, this does not work for my case,
because the transformation I need to do in one node might depend on an XPath expression over the whole document.
So, the simplest thing would be to have an XML library very similar to the standard Java DOM library, with the exception that it preserves the attribute order.
Is there such a library?
PS: Please, avoid discussing whether I should preserve the attribute order or not. This is a very interesting discussion, but it is not the point of this question.
Saxon these days offers a serialization option[1] to control the order in which attributes are output. It doesn't retain the input order (because Saxon doesn't know the input order), but it does allow you to control, for example, that the ID attribute always appears first.
And this can be very useful if the XML is going to be hand-edited; XML in which the attributes appear in the "wrong" order can be very disorienting to a human reader or editor.
If you're using this as part of a diff process then you would want to put both files through a process that normalizes the attribute order before comparing them. However, for comparing files my preferred approach is to parse them both and use the XPath deep-equal() function; or to use a specialized tool like DeltaXML.
[1] saxon:attribute-order - see http://www.saxonica.com/documentation/index.html#!extensions/output-extras/serialization-parameters
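For the comparison step, a deep-equal() check can be run from Java through Saxon's s9api. The following is only a minimal sketch under assumptions of my own: a Saxon jar (HE is sufficient for deep-equal()) is on the classpath, and input.xml/output.xml are placeholder file names resolved against the working directory.
import java.io.File;
import net.sf.saxon.s9api.Processor;
import net.sf.saxon.s9api.XPathCompiler;
import net.sf.saxon.s9api.XdmValue;
public class DeepEqualCheck {
    public static void main(String[] args) throws Exception {
        Processor processor = new Processor(false);     // false = no licensed (EE) features required
        XPathCompiler compiler = processor.newXPathCompiler();
        compiler.setBaseURI(new File(".").toURI());     // lets the relative doc() calls resolve
        // deep-equal() ignores attribute order, so two serializations that differ
        // only in attribute order compare as equal
        XdmValue result = compiler.evaluate(
                "deep-equal(doc('input.xml'), doc('output.xml'))", null);
        System.out.println("deep-equal: " + result);
    }
}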
You might also want to try DecentXML, as it can preserve the attribute order, comments and even indentation.
It is very nice if you need to programmatically update an XML file that's also supposed to be human-editable. We use it for one of our configuration tools.
-- edit --
It seems it is no longer available at its original location; try these:
https://github.com/cartermckinnon/decentxml
https://github.com/haroldo-ok/decentxml (unofficial and unmaintained fork; kept here just in case the other forks disappear, too)
https://directory.fsf.org/wiki/DecentXML
Do it twice:
Read the document in using a DOM parser so you have references, a repository, if you will.
Then read it again using SAX. At the point where you need to make the transformation, reference the DOM version to determine what you need, then output what you need in the middle of the SAX stream.
Your best bet would be to use StAX instead of DOM for generating the original document. StAX gives you a lot of fine control over these things and lets you stream output progressively to an output stream instead of holding it all in memory.
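To make that concrete, here is a minimal, hypothetical sketch (not from the answer above) showing that with StAX the attributes are serialized in exactly the order of the writeAttribute() calls; the file name is a placeholder.
import java.io.FileOutputStream;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;
public class StaxAttributeOrder {
    public static void main(String[] args) throws Exception {
        XMLStreamWriter writer = XMLOutputFactory.newFactory()
                .createXMLStreamWriter(new FileOutputStream("person.xml"), "UTF-8");
        writer.writeStartDocument("UTF-8", "1.0");
        writer.writeStartElement("person");
        writer.writeAttribute("first_name", "john");    // emitted first ...
        writer.writeAttribute("last_name", "lederrey"); // ... then second, exactly as written
        writer.writeEndElement();
        writer.writeEndDocument();
        writer.close();
    }
}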
We had similar requirements per Dave's description. A solution that worked was based on Java reflection.
The idea is to set the propOrder for the attributes at runtime. In our case there's an APP_DATA element containing three attributes: app, name, and value. The generated AppData class includes "content" in propOrder and none of the other attributes:
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "AppData", propOrder = {
"content"
})
public class AppData {
@XmlValue
protected String content;
@XmlAttribute(name = "Value", required = true)
protected String value;
@XmlAttribute(name = "Name", required = true)
protected String name;
@XmlAttribute(name = "App", required = true)
protected String app;
...
}
So Java reflection was used as follows to set the order at runtime:
final String[] propOrder = { "app", "name", "value" };
ReflectionUtil.changeAnnotationValue(
AppData.class.getAnnotation(XmlType.class),
"propOrder", propOrder);
final JAXBContext jaxbContext = JAXBContext
.newInstance(ADI.class);
final Marshaller adimarshaller = jaxbContext.createMarshaller();
adimarshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT,
true);
adimarshaller.marshal(new JAXBElement<ADI>(new QName("ADI"),
ADI.class, adi),
new StreamResult(fileOutputStream));
The changeAnnotationValue() was borrowed from this post:
Modify a class definition's annotation string parameter at runtime
Here's the method for your convenience (credit goes to @assylias and @Balder):
/**
* Changes the annotation value for the given key of the given annotation to newValue and returns
* the previous value.
*/
@SuppressWarnings("unchecked")
public static Object changeAnnotationValue(Annotation annotation, String key, Object newValue) {
Object handler = Proxy.getInvocationHandler(annotation);
Field f;
try {
f = handler.getClass().getDeclaredField("memberValues");
} catch (NoSuchFieldException | SecurityException e) {
throw new IllegalStateException(e);
}
f.setAccessible(true);
Map<String, Object> memberValues;
try {
memberValues = (Map<String, Object>) f.get(handler);
} catch (IllegalArgumentException | IllegalAccessException e) {
throw new IllegalStateException(e);
}
Object oldValue = memberValues.get(key);
if (oldValue == null || oldValue.getClass() != newValue.getClass()) {
throw new IllegalArgumentException();
}
memberValues.put(key, newValue);
return oldValue;
}
You can subclass AttributeMap with an AttributeSortedMap and sort the attributes as you need.
The main idea: load the document, recursively copy it to elements that use the sorted attribute map, and serialize using the existing XMLSerializer.
File test.xml
<root>
<person first_name="john1" last_name="lederrey1"/>
<person first_name="john2" last_name="lederrey2"/>
<person first_name="john3" last_name="lederrey3"/>
<person first_name="john4" last_name="lederrey4"/>
</root>
File AttOrderSorter.java
import com.sun.org.apache.xerces.internal.dom.AttrImpl;
import com.sun.org.apache.xerces.internal.dom.AttributeMap;
import com.sun.org.apache.xerces.internal.dom.CoreDocumentImpl;
import com.sun.org.apache.xerces.internal.dom.ElementImpl;
import com.sun.org.apache.xml.internal.serialize.OutputFormat;
import com.sun.org.apache.xml.internal.serialize.XMLSerializer;
import org.w3c.dom.*;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.Writer;
import java.util.List;
import static java.util.Arrays.asList;
public class AttOrderSorter {
private List<String> sortAtts = asList("last_name", "first_name");
public void format(String inFile, String outFile) throws Exception {
DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder builder = dbFactory.newDocumentBuilder();
Document outDocument = builder.newDocument();
try (FileInputStream inputStream = new FileInputStream(inFile)) {
Document document = dbFactory.newDocumentBuilder().parse(inputStream);
Element sourceRoot = document.getDocumentElement();
Element outRoot = outDocument.createElementNS(sourceRoot.getNamespaceURI(), sourceRoot.getTagName());
outDocument.appendChild(outRoot);
copyAtts(sourceRoot.getAttributes(), outRoot);
copyElement(sourceRoot.getChildNodes(), outRoot, outDocument);
}
try (Writer outxml = new FileWriter(new File(outFile))) {
OutputFormat format = new OutputFormat();
format.setLineWidth(0);
format.setIndenting(false);
format.setIndent(2);
XMLSerializer serializer = new XMLSerializer(outxml, format);
serializer.serialize(outDocument);
}
}
private void copyElement(NodeList nodes, Element parent, Document document) {
for (int i = 0; i < nodes.getLength(); i++) {
Node node = nodes.item(i);
if (node.getNodeType() == Node.ELEMENT_NODE) {
Element element = new ElementImpl((CoreDocumentImpl) document, node.getNodeName()) {
@Override
public NamedNodeMap getAttributes() {
return new AttributeSortedMap(this, (AttributeMap) super.getAttributes());
}
};
copyAtts(node.getAttributes(), element);
copyElement(node.getChildNodes(), element, document);
parent.appendChild(element);
}
}
}
private void copyAtts(NamedNodeMap attributes, Element target) {
for (int i = 0; i < attributes.getLength(); i++) {
Node att = attributes.item(i);
target.setAttribute(att.getNodeName(), att.getNodeValue());
}
}
public class AttributeSortedMap extends AttributeMap {
AttributeSortedMap(ElementImpl element, AttributeMap attributes) {
super(element, attributes);
nodes.sort((o1, o2) -> {
AttrImpl att1 = (AttrImpl) o1;
AttrImpl att2 = (AttrImpl) o2;
Integer pos1 = sortAtts.indexOf(att1.getNodeName());
Integer pos2 = sortAtts.indexOf(att2.getNodeName());
if (pos1 > -1 && pos2 > -1) {
return pos1.compareTo(pos2);
} else if (pos1 > -1 || pos2 > -1) {
return pos1 == -1 ? 1 : -1;
}
return att1.getNodeName().compareTo(att2.getNodeName());
});
}
}
public static void main(String[] args) throws Exception {
new AttOrderSorter().format("src/main/resources/test.xml", "src/main/resources/output.xml");
}
}
Result - file output.xml
<?xml version="1.0" encoding="UTF-8"?>
<root>
<person last_name="lederrey1" first_name="john1"/>
<person last_name="lederrey2" first_name="john2"/>
<person last_name="lederrey3" first_name="john3"/>
<person last_name="lederrey4" first_name="john4"/>
</root>
You can't use the DOM, but you can use SAX, or query children using XPath.
See the answer to Order of XML attributes after DOM processing.

Using Jsoup, how can I fetch all the information that resides in each link?

package com.muthu;
import java.io.IOException;
import org.jsoup.Jsoup;
import org.jsoup.helper.Validate;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
import org.jsoup.select.NodeVisitor;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import org.jsoup.nodes.*;
public class TestingTool
{
public static void main(String[] args) throws IOException
{
Validate.isTrue(args.length == 0, "usage: supply url to fetch");
String url = "http://www.stackoverflow.com/";
print("Fetching %s...", url);
Document doc = Jsoup.connect(url).get();
Elements links = doc.select("a[href]");
System.out.println(doc.text());
Elements tags=doc.getElementsByTag("div");
String alls=doc.text();
System.out.println("\n");
for (Element link : links)
{
print(" %s ", link.attr("abs:href"), trim(link.text(), 35));
}
BufferedWriter bw = new BufferedWriter(new FileWriter(new File("C:/tool/linknames.txt")));
for (Element link : links) {
bw.write("Link: "+ link.text().trim());
bw.write(System.getProperty("line.separator"));
}
bw.flush();
bw.close();
}
private static void print(String msg, Object... args) {
System.out.println(String.format(msg, args));
}
private static String trim(String s, int width) {
if (s.length() > width)
return s.substring(0, width-1) + ".";
else
return s;
}
}
If you connect to a URL it will only parse the current page. But you can 1) connect to a URL, 2) parse the information you need, 3) select all further links, 4) connect to them and 5) continue this as long as there are new links.
Considerations:
You need a list (or something similar) where you store the links you've already parsed
You have to decide if you need only links of this page or external ones too
You have to skip pages like "about", "contact" etc.
Edit:
(Note: you have to add some changes / error-handling code)
List<String> visitedUrls = new ArrayList<>(); // Store all links you've already visited
public void visitUrl(String url) throws IOException
{
url = url.toLowerCase(); // now it's case-insensitive
if( !visitedUrls.contains(url) ) // Do this only if not visited yet
{
Document doc = Jsoup.connect(url).get(); // Connect to Url and parse Document
/* ... Select your Data here ... */
Elements nextLinks = doc.select("a[href]"); // Select next links - add more restriction!
for( Element next : nextLinks ) // Iterate over all Links
{
visitUrl(next.absUrl("href")); // Recursive call for all next Links
}
}
}
You have to add more restrictions / checks at the part where next links are selected (maybe you want to skip / ignore some); and some error handling.
Edit 2:
To skip ignored links you can use this:
Create a Set / List / whatever, where you store ignored keywords
Fill it with those keywords
Before you call the visitUrl() method with the new Link to parse, you check if this new Url contains any of the ignored keywords. If it contains at least one it will be skipped.
I modified the example a bit to do so (but it's not tested yet!).
List<String> visitedUrls = new ArrayList<>(); // Store all links you've already visited
Set<String> ignore = new HashSet<>(); // Store all keywords you want ignore
// ...
/*
* Add keywords to the ignorelist. Each link that contains one of this
* words will be skipped.
*
* Do this in eg. constructor, static block or a init method.
*/
ignore.add(".twitter.com");
// ...
public void visitUrl(String url) throws IOException
{
url = url.toLowerCase(); // Now it's case-insensitive
if( !visitedUrls.contains(url) ) // Do this only if not visited yet
{
Document doc = Jsoup.connect(url).get(); // Connect to Url and parse Document
/* ... Select your Data here ... */
Elements nextLinks = doc.select("a[href]"); // Select next links - add more restriction!
for( Element next : nextLinks ) // Iterate over all Links
{
boolean skip = false; // If false: parse the url, if true: skip it
final String href = next.absUrl("href"); // Select the 'href' attribute -> next link to parse
for( String s : ignore ) // Iterate over all ignored keywords - maybe there's a better solution for this
{
if( href.contains(s) ) // If the url contains ignored keywords it will be skipped
{
skip = true;
break;
}
}
if( !skip )
visitUrl(next.absUrl("href")); // Recursive call for all next Links
}
}
}
Parsing the next link is done by this:
final String href = next.absUrl("href");
/* ... */
visitUrl(next.absUrl("href"));
But possibly you should add some more stop-conditions to this part.
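As one possible set of extra stop conditions, here is an untested sketch of my own (not part of the answer above) that adds a depth limit and a same-host check; MAX_DEPTH, the depth parameter and the java.net.URL usage are assumptions, and the same jsoup imports as above are needed.
private static final int MAX_DEPTH = 3;              // assumption: stop after 3 levels
List<String> visitedUrls = new ArrayList<>();        // store all links you've already visited
public void visitUrl(String url, int depth) throws IOException
{
    url = url.toLowerCase();
    if( depth > MAX_DEPTH || visitedUrls.contains(url) )   // stop condition 1: too deep or already seen
    {
        return;
    }
    visitedUrls.add(url);                            // remember the page before following its links
    Document doc = Jsoup.connect(url).get();         // connect to url and parse document
    /* ... select your data here ... */
    String host = new URL(url).getHost();
    for( Element next : doc.select("a[href]") )      // iterate over all links
    {
        String href = next.absUrl("href");
        if( !href.isEmpty() && host.equals(new URL(href).getHost()) )  // stop condition 2: stay on the same host
        {
            visitUrl(href, depth + 1);               // recursive call, one level deeper
        }
    }
}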

How to improve splitting xml file performance

I've seen quite a lot of posts/blogs/articles about splitting an XML file into smaller chunks and decided to create my own because I have some custom requirements. Here is what I mean; consider the following XML:
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<company>
<staff id="1">
<firstname>yong</firstname>
<lastname>mook kim</lastname>
<nickname>mkyong</nickname>
<salary>100000</salary>
</staff>
<staff id="2">
<firstname>yong</firstname>
<lastname>mook kim</lastname>
<nickname>mkyong</nickname>
<salary>100000</salary>
</staff>
<staff id="3">
<firstname>yong</firstname>
<lastname>mook kim</lastname>
<nickname>mkyong</nickname>
<salary>100000</salary>
</staff>
<staff id="4">
<firstname>yong</firstname>
<lastname>mook kim</lastname>
<nickname>mkyong</nickname>
<salary>100000</salary>
</staff>
<staff id="5">
<firstname>yong</firstname>
<lastname>mook kim</lastname>
<salary>100000</salary>
</staff>
</company>
I want to split this XML into n parts, each written to its own file, but each staff element must contain nickname; if it's not there, I don't want that element. So this should produce 4 XML splits, containing staff ids 1 through 4.
Here is my code :
public int split() throws Exception{
BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(inputFilePath)));
String line;
List<String> tempList = null;
while((line=br.readLine())!=null){
if(line.contains("<?xml version=\"1.0\"") || line.contains("<" + rootElement + ">") || line.contains("</" + rootElement + ">")){
continue;
}
if(line.contains("<"+ element +">")){
tempList = new ArrayList<String>();
}
tempList.add(line);
if(line.contains("</"+ element +">")){
if(hasConditions(tempList)){
writeToSplitFile(tempList);
writtenObjectCounter++;
totalCounter++;
}
}
if(writtenObjectCounter == itemsPerFile){
writtenObjectCounter = 0;
fileCounter++;
tempList.clear();
}
}
if(tempList.size() != 0){
writeClosingRootElement();
}
return totalCounter;
}
private void writeToSplitFile(List<String> itemList) throws Exception{
BufferedWriter wr = new BufferedWriter(new FileWriter(outputDirectory + File.separator + "split_" + fileCounter + ".xml", true));
if(writtenObjectCounter == 0){
wr.write("<" + rootElement + ">");
wr.write("\n");
}
for (String string : itemList) {
wr.write(string);
wr.write("\n");
}
if(writtenObjectCounter == itemsPerFile-1)
wr.write("</" + rootElement + ">");
wr.close();
}
private void writeClosingRootElement() throws Exception{
BufferedWriter wr = new BufferedWriter(new FileWriter(outputDirectory + File.separator + "split_" + fileCounter + ".xml", true));
wr.write("</" + rootElement + ">");
wr.close();
}
private boolean hasConditions(List<String> list){
int matchList = 0;
for (String condition : conditionList) {
for (String string : list) {
if(string.contains(condition)){
matchList++;
}
}
}
if(matchList >= conditionList.size()){
return true;
}
return false;
}
I know that opening/closing a stream for each written staff element impacts the performance; I could instead write once per file (which may contain n staff elements). Naturally, the root and split elements are configurable.
Any ideas how I can improve the performance/logic? I'd prefer some code, but good advice can sometimes be better.
Edit:
This XML example is actually a dummy example; the real XML I'm trying to split has about 300-500 different elements under the split element, all appearing in random order, and their number varies. StAX may not be the best solution after all?
Bounty update :
I'm looking for a solution(code) that will:
Be able to split the XML file into n parts with x split elements (in the dummy XML example, staff is the split element).
The content of the split files should be wrapped in the root element from the original file (company in the dummy example).
I'd like to be able to specify a condition that must hold in the split element, i.e. I want only staff elements which have a nickname and want to discard those without one. But I should also be able to run the split without any condition.
The code doesn't necessarily have to improve my solution (which lacks good logic and performance, but works); I'm not happy with just "it works".
I can't find enough examples of StAX for this kind of operation, and the user community is not great either. It doesn't have to be a StAX solution, though.
I'm probably asking too much, but I'm here to learn stuff, and I think I'm offering a good bounty for the solution.
First piece of advice: don't try to write your own XML handling code. Use an XML parser - it's going to be much more reliable and quite possibly faster.
If you use an XML pull parser (e.g. StAX) you should be able to read an element at a time and write it out to disk, never reading the whole document in one go.
Here's my suggestion. It requires a streaming XSLT 3.0 processor, which in practice means it needs Saxon-EE 9.3.
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="3.0">
<xsl:mode streamable="yes"/>
<xsl:template match="/">
<xsl:apply-templates select="company/staff"/>
</xsl:template>
<xsl:template match="staff">
<xsl:variable name="v" as="element(staff)">
<xsl:copy-of select="."/>
</xsl:variable>
<xsl:if test="$v/nickname">
<xsl:result-document href="{@id}.xml">
<xsl:copy-of select="$v"/>
</xsl:result-document>
</xsl:if>
</xsl:template>
</xsl:stylesheet>
In practice, though, unless you have hundreds of megabytes of data, I suspect a non-streaming solution will be quite fast enough, and probably faster than your hand-written Java code, given that your Java code is nothing to get excited about. At any rate, give an XSLT solution a try before you write reams of low-level Java. It's a routine problem, after all.
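If you want to drive the stylesheet from Java, here is a minimal sketch using plain JAXP; it assumes a Saxon jar is on the classpath (Saxon-EE for the streaming mode, as noted above) and uses placeholder file names. The principal output goes to a dummy file so that the relative href in xsl:result-document has a base output URI to resolve against.
import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
public class RunSplit {
    public static void main(String[] args) throws Exception {
        // With Saxon on the classpath, TransformerFactory.newInstance() picks it up
        // through the usual JAXP service lookup.
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer = factory.newTransformer(new StreamSource(new File("split.xsl")));
        // The real output is produced by xsl:result-document ("{@id}.xml" files);
        // the principal result only anchors the base output URI.
        transformer.transform(new StreamSource(new File("company.xml")),
                              new StreamResult(new File("principal-output.xml")));
    }
}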
You could do the following with StAX:
Algorithm
Read and hold onto the root element event.
Read first chunk of XML:
Queue events until condition has been met.
If condition has been met:
Write start document event.
Write out root start element event
Write out split start element event
Write out queued events
Write out remaining events for this section.
If condition was not met then do nothing.
Repeat step 2 with next chunk of XML
Code for Your Use Case
The following code uses StAX APIs to break up the document as outlined in your question:
package forum7408938;
import java.io.*;
import java.util.*;
import javax.xml.namespace.QName;
import javax.xml.stream.*;
import javax.xml.stream.events.*;
public class Demo {
public static void main(String[] args) throws Exception {
Demo demo = new Demo();
demo.split("src/forum7408938/input.xml", "nickname");
//demo.split("src/forum7408938/input.xml", null);
}
private void split(String xmlResource, String condition) throws Exception {
XMLEventFactory xef = XMLEventFactory.newFactory();
XMLInputFactory xif = XMLInputFactory.newInstance();
XMLEventReader xer = xif.createXMLEventReader(new FileReader(xmlResource));
StartElement rootStartElement = xer.nextTag().asStartElement(); // Advance to the root element
StartDocument startDocument = xef.createStartDocument();
EndDocument endDocument = xef.createEndDocument();
XMLOutputFactory xof = XMLOutputFactory.newFactory();
while(xer.hasNext() && !xer.peek().isEndDocument()) {
boolean metCondition;
XMLEvent xmlEvent = xer.nextTag();
if(!xmlEvent.isStartElement()) {
break;
}
// BOUNTY CRITERIA
// Be able to split XML file into n parts with x split elements(from
// the dummy XML example staff is the split element).
StartElement breakStartElement = xmlEvent.asStartElement();
List<XMLEvent> cachedXMLEvents = new ArrayList<XMLEvent>();
// BOUNTY CRITERIA
// I'd like to be able to specify condition that must be in the
// split element i.e. I want only staff which have nickname, I want
// to discard those without nicknames. But be able to also split
// without conditions while running split without conditions.
if(null == condition) {
cachedXMLEvents.add(breakStartElement);
metCondition = true;
} else {
cachedXMLEvents.add(breakStartElement);
xmlEvent = xer.nextEvent();
metCondition = false;
while(!(xmlEvent.isEndElement() && xmlEvent.asEndElement().getName().equals(breakStartElement.getName()))) {
cachedXMLEvents.add(xmlEvent);
if(xmlEvent.isStartElement() && xmlEvent.asStartElement().getName().getLocalPart().equals(condition)) {
metCondition = true;
break;
}
xmlEvent = xer.nextEvent();
}
}
if(metCondition) {
// Create a file for the fragment, the name is derived from the value of the id attribute
FileWriter fileWriter = null;
fileWriter = new FileWriter("src/forum7408938/" + breakStartElement.getAttributeByName(new QName("id")).getValue() + ".xml");
// A StAX XMLEventWriter will be used to write the XML fragment
XMLEventWriter xew = xof.createXMLEventWriter(fileWriter);
xew.add(startDocument);
// BOUNTY CRITERIA
// The content of the split files should be wrapped in the
// root element from the original file(like in the dummy example
// company)
xew.add(rootStartElement);
// Write the XMLEvents that were cached while we were
// checking the fragment to see if it matched our criteria.
for(XMLEvent cachedEvent : cachedXMLEvents) {
xew.add(cachedEvent);
}
// Write the XMLEvents that we still need to parse from this
// fragment
xmlEvent = xer.nextEvent();
while(xer.hasNext() && !(xmlEvent.isEndElement() && xmlEvent.asEndElement().getName().equals(breakStartElement.getName()))) {
xew.add(xmlEvent);
xmlEvent = xer.nextEvent();
}
xew.add(xmlEvent);
// Close everything we opened
xew.add(xef.createEndElement(rootStartElement.getName(), null));
xew.add(endDocument);
fileWriter.close();
}
}
}
}
@Jon Skeet is spot on as usual in his advice. @Blaise Doughan gave you a very basic picture of using StAX (which would be my preferred choice, although you can do basically the same thing with SAX). You seem to be looking for something more explicit, so here's some pseudo code to get you started (based on StAX); a concrete sketch follows the list:
find first "staff" StartElement
set a flag indicating you are in a "staff" element and start tracking the depth (StartElement is +1, EndElement is -1)
now, process the "staff" sub-elements, grab any of the data you care about and put it in a file (or where ever)
keep processing until your depth reaches 0 (when you find the matching "staff" EndElement)
unset the flag indicating you are in a "staff" element
search for the next "staff" StartElement
if found, go to 2. and repeat
if not found, document is complete
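Here is a rough, untested sketch of that pseudo code using XMLStreamReader; the element and file names are placeholders, and what you do with each completed staff fragment is left as a comment.
import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
public class StaffScanner {
    public static void main(String[] args) throws Exception {
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream("company.xml"));
        while (reader.hasNext()) {
            // 1. find the next "staff" StartElement
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "staff".equals(reader.getLocalName())) {
                int depth = 1;                      // 2. we are now inside one staff element
                while (depth > 0) {                 // 3./4. process until the matching end tag
                    int event = reader.next();
                    if (event == XMLStreamConstants.START_ELEMENT) {
                        depth++;
                        // grab any data you care about here, e.g. reader.getLocalName()
                    } else if (event == XMLStreamConstants.END_ELEMENT) {
                        depth--;
                    }
                }
                // 5./6. one staff element fully consumed; write it out, then the outer
                // loop searches for the next one
            }
        }
        reader.close();
    }
}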
EDIT:
wow, i have to say i'm amazed at the number of people willing to do someone else's work for them. i didn't realize SO was basically a free version of rent-a-coder.
@Gandalf StormCrow:
Let me divide your problem into three separate issues:-
i) Reading the XML and simultaneously splitting it in the best possible way
ii) Checking the condition in the split file
iii) If the condition is met, processing that split file.
For i), there are of course multiple solutions: SAX, StAX and other parsers, or, as you mentioned, simply reading with plain Java IO operations and searching for tags.
I believe SAX/StAX/simple Java IO, anything will do. I have taken your example as the base for my solution.
For ii), checking the condition in the split file: you have used the contains() method to check for the existence of nickname. This does not seem the best way: what if your conditions are more complex, e.g. nickname should be present but with length > 5, or salary should be numeric, etc.?
I would use the Java XML validation framework for this, which makes use of an XML schema. Please note that we can cache the Schema object in memory so as to reuse it again and again. This validation framework is pretty fast.
For iii), if the condition is met, process that split file.
You may want to use the Java concurrency APIs to submit async tasks (the ExecutorService class) to achieve parallel execution for faster performance.
So considering the above points, one possible solution can be:
You can create a company.xsd file like this:
<?xml version="1.0" encoding="UTF-8"?>
<schema xmlns="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.example.org/NewXMLSchema"
xmlns:tns="http://www.example.org/NewXMLSchema"
elementFormDefault="unqualified">
<element name="company">
<complexType>
<sequence>
<element name="staff" type="tns:stafftype"/>
</sequence>
</complexType>
</element>
<complexType name="stafftype">
<sequence>
<element name="firstname" type="string" minOccurs="0" />
<element name="lastname" type="string" minOccurs="0" />
<element name="nickname" type="string" minOccurs="1" />
<element name="salary" type="int" minOccurs="0" />
</sequence>
</complexType>
</schema>
Then your Java code would look like this:
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;
public class testXML {
// Lookup a factory for the W3C XML Schema language
static SchemaFactory factory = SchemaFactory
.newInstance("http://www.w3.org/2001/XMLSchema");
// Compile the schema.
static File schemaLocation = new File("company.xsd");
static Schema schema = null;
static {
try {
schema = factory.newSchema(schemaLocation);
} catch (SAXException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
private final ExecutorService pool = Executors.newFixedThreadPool(20);
boolean validate(StringBuffer splitBuffer) {
boolean isValid = false;
Validator validator = schema.newValidator();
try {
validator.validate(new StreamSource(new ByteArrayInputStream(
splitBuffer.toString().getBytes())));
isValid = true;
} catch (SAXException ex) {
System.out.println(ex.getMessage());
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return isValid;
}
void split(BufferedReader br, String rootElementName,
String splitElementName) {
StringBuffer splitBuffer = null;
String line = null;
String startRootElement = "<" + rootElementName + ">";
String endRootElement = "</" + rootElementName + ">";
String startSplitElement = "<" + splitElementName + ">";
String endSplitElement = "</" + splitElementName + ">";
String xmlDeclaration = "<?xml version=\"1.0\"";
boolean startFlag = false, endflag = false;
try {
while ((line = br.readLine()) != null) {
if (line.contains(xmlDeclaration)
|| line.contains(startRootElement)
|| line.contains(endRootElement)) {
continue;
}
if (line.contains(startSplitElement)) {
startFlag = true;
endflag = false;
splitBuffer = new StringBuffer(startRootElement);
splitBuffer.append(line);
} else if (line.contains(endSplitElement)) {
endflag = true;
startFlag = false;
splitBuffer.append(line);
splitBuffer.append(endRootElement);
} else if (startFlag) {
splitBuffer.append(line);
}
if (endflag) {
//process splitBuffer
boolean result = validate(splitBuffer);
if (result) {
//send it to a thread for processing further
//it is async so that main thread can continue for next
pool.submit(new ProcessingHandler(splitBuffer));
}
}
}
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
class ProcessingHandler implements Runnable {
String splitXML = null;
ProcessingHandler(StringBuffer splitXMLBuffer) {
this.splitXML = splitXMLBuffer.toString();
}
@Override
public void run() {
// do like writing to a file etc.
}
}
Have a look at this. It is a slightly reworked sample from xmlpull.org:
http://www.xmlpull.org/v1/download/unpacked/doc/quick_intro.html
The following should do all you need unless you have nested splitting tags like:
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<company>
<staff id="1">
<firstname>yong</firstname>
<lastname>mook kim</lastname>
<nickname>mkyong</nickname>
<salary>100000</salary>
<other>
<staff>
...
</staff>
</other>
</staff>
</company>
To run it in pass-through mode (no filtering), simply pass null as the required tag.
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import org.apache.commons.io.FileUtils;
import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserException;
import org.xmlpull.v1.XmlPullParserFactory;
public class XppSample {
private String rootTag;
private String splitTag;
private String requiredTag;
private int flushThreshold;
private String fileName;
private String rootTagEnd;
private boolean hasRequiredTag = false;
private int flushCount = 0;
private int fileNo = 0;
private String header;
private XmlPullParser xpp;
private StringBuilder nodeBuf = new StringBuilder();
private StringBuilder fileBuf = new StringBuilder();
public XppSample(String fileName, String rootTag, String splitTag, String requiredTag, int flushThreshold) throws XmlPullParserException, FileNotFoundException {
this.rootTag = rootTag;
rootTagEnd = "</" + rootTag + ">";
this.splitTag = splitTag;
this.requiredTag = requiredTag;
this.flushThreshold = flushThreshold;
this.fileName = fileName;
XmlPullParserFactory factory = XmlPullParserFactory.newInstance(System.getProperty(XmlPullParserFactory.PROPERTY_NAME), null);
factory.setNamespaceAware(true);
xpp = factory.newPullParser();
xpp.setInput(new FileReader(fileName));
}
public void processDocument() throws XmlPullParserException, IOException {
int eventType = xpp.getEventType();
do {
if(eventType == XmlPullParser.START_TAG) {
processStartElement(xpp);
} else if(eventType == XmlPullParser.END_TAG) {
processEndElement(xpp);
} else if(eventType == XmlPullParser.TEXT) {
processText(xpp);
}
eventType = xpp.next();
} while (eventType != XmlPullParser.END_DOCUMENT);
saveFile();
}
public void processStartElement(XmlPullParser xpp) {
int holderForStartAndLength[] = new int[2];
String name = xpp.getName();
char ch[] = xpp.getTextCharacters(holderForStartAndLength);
int start = holderForStartAndLength[0];
int length = holderForStartAndLength[1];
if(name.equals(rootTag)) {
int pos = start + length;
header = new String(ch, 0, pos);
} else {
if(requiredTag==null || name.equals(requiredTag)) {
hasRequiredTag = true;
}
nodeBuf.append(xpp.getText());
}
}
public void flushBuffer() throws IOException {
if(hasRequiredTag) {
fileBuf.append(nodeBuf);
if(((++flushCount)%flushThreshold)==0) {
saveFile();
}
}
nodeBuf = new StringBuilder();
hasRequiredTag = false;
}
public void saveFile() throws IOException {
if(fileBuf.length()>0) {
String splitFile = header + fileBuf.toString() + rootTagEnd;
FileUtils.writeStringToFile(new File((fileNo++) + "_" + fileName), splitFile);
fileBuf = new StringBuilder();
}
}
public void processEndElement (XmlPullParser xpp) throws IOException {
String name = xpp.getName();
if(name.equals(rootTag)) {
flushBuffer();
} else {
nodeBuf.append(xpp.getText());
if(name.equals(splitTag)) {
flushBuffer();
}
}
}
public void processText (XmlPullParser xpp) throws XmlPullParserException {
int holderForStartAndLength[] = new int[2];
char ch[] = xpp.getTextCharacters(holderForStartAndLength);
int start = holderForStartAndLength[0];
int length = holderForStartAndLength[1];
String content = new String(ch, start, length);
nodeBuf.append(content);
}
public static void main (String args[]) throws XmlPullParserException, IOException {
//XppSample app = new XppSample("input.xml", "company", "staff", "nickname", 3);
XppSample app = new XppSample("input.xml", "company", "staff", null, 3);
app.processDocument();
}
}
Normally I would suggest using StAX, but it is unclear to me how 'stateful' your real XML is. If simple, then use SAX for ultimate performance; if not-so-simple, use StAX. So you need to:
read bytes from disk
convert them to characters
parse the XML
determine whether to keep XML or throw away (skip out subtree)
write XML
convert characters to bytes
write to disk
Now, it might seem like steps 3-5 are the most resource-intensive, but I would rate them as
Most: 1 + 7
Middle: 2 + 6
Least: 3 + 4 + 5
As operations 1 and 7 are kind of separate from the rest, you should do them in an async way; at least, creating multiple small files is best done in n other threads, if you are familiar with multi-threading. For increased performance, you might also look into the new IO (NIO) stuff in Java.
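As a minimal sketch of that idea (my own; the pool size of 4 and the plain String fragments are assumptions), each finished fragment could be handed to a small thread pool so the parsing thread never waits on disk I/O:
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class AsyncFragmentWriter {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    // called by the parsing thread once a fragment has been assembled
    public void submit(final String fragment, final int index) {
        pool.submit(new Runnable() {
            @Override
            public void run() {
                try {
                    Path target = Paths.get("split_" + index + ".xml");
                    Files.write(target, fragment.getBytes(StandardCharsets.UTF_8));
                } catch (Exception e) {
                    e.printStackTrace();            // real code needs proper error handling
                }
            }
        });
    }
    public void shutdown() throws InterruptedException {
        pool.shutdown();                            // accept no new tasks
        pool.awaitTermination(1, TimeUnit.MINUTES); // wait for pending writes to finish
    }
}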
Now for steps 2 + 3 and 5 + 6 you can go a long way with FasterXML; it really does a lot of the stuff you are looking for, like triggering JVM hot-spot attention in the right places, and it might even support async reading/writing, judging from a quick look through the code.
So then we are left with step 5, and depending on your logic, you should either
a. make an object binding, then decide what to do
b. write XML anyway, hoping for the best, and then throw it away if no 'staff' element is present.
Whatever you do, object reuse is sensible. Note that both alternatives (obviously) require the same amount of parsing (skip out of the subtree ASAP), and for alternative b, a little extra XML is actually not so bad performance-wise; ideally make sure your char buffers are > one unit.
Alternative b is the easiest to implement: simply copy the 'XML event' from your reader to your writer. Example for StAX:
private static void copyEvent(int event, XMLStreamReader reader, XMLStreamWriter writer) throws XMLStreamException {
if (event == XMLStreamConstants.START_ELEMENT) {
String localName = reader.getLocalName();
String namespace = reader.getNamespaceURI();
// TODO check this stuff again before setting in production
if (namespace != null) {
if (writer.getPrefix(namespace) != null) {
writer.writeStartElement(namespace, localName);
} else {
writer.writeStartElement(reader.getPrefix(), localName, namespace);
}
} else {
writer.writeStartElement(localName);
}
// first: namespace definition attributes
if(reader.getNamespaceCount() > 0) {
int namespaces = reader.getNamespaceCount();
for(int i = 0; i < namespaces; i++) {
String namespaceURI = reader.getNamespaceURI(i);
if(writer.getPrefix(namespaceURI) == null) {
String namespacePrefix = reader.getNamespacePrefix(i);
if(namespacePrefix == null) {
writer.writeDefaultNamespace(namespaceURI);
} else {
writer.writeNamespace(namespacePrefix, namespaceURI);
}
}
}
}
int attributes = reader.getAttributeCount();
// the write the rest of the attributes
for (int i = 0; i < attributes; i++) {
String attributeNamespace = reader.getAttributeNamespace(i);
if (attributeNamespace != null && attributeNamespace.length() != 0) {
writer.writeAttribute(attributeNamespace, reader.getAttributeLocalName(i), reader.getAttributeValue(i));
} else {
writer.writeAttribute(reader.getAttributeLocalName(i), reader.getAttributeValue(i));
}
}
} else if (event == XMLStreamConstants.END_ELEMENT) {
writer.writeEndElement();
} else if (event == XMLStreamConstants.CDATA) {
String array = reader.getText();
writer.writeCData(array);
} else if (event == XMLStreamConstants.COMMENT) {
String array = reader.getText();
writer.writeComment(array);
} else if (event == XMLStreamConstants.CHARACTERS) {
String array = reader.getText();
if (array.length() > 0 && !reader.isWhiteSpace()) {
writer.writeCharacters(array);
}
} else if (event == XMLStreamConstants.START_DOCUMENT) {
writer.writeStartDocument();
} else if (event == XMLStreamConstants.END_DOCUMENT) {
writer.writeEndDocument();
}
}
And for a subtree,
private static void copySubTree(XMLStreamReader reader, XMLStreamWriter writer) throws XMLStreamException {
reader.require(XMLStreamConstants.START_ELEMENT, null, null);
copyEvent(XMLStreamConstants.START_ELEMENT, reader, writer);
int level = 1;
do {
int event = reader.next();
if(event == XMLStreamConstants.START_ELEMENT) {
level++;
} else if(event == XMLStreamConstants.END_ELEMENT) {
level--;
}
copyEvent(event, reader, writer);
} while(level > 0);
}
From this you can probably deduce how to skip out to a certain level. In general, for stateful StAX parsing, use the pattern:
private static void parseSubTree(XMLStreamReader reader) throws XMLStreamException {
int level = 1;
do {
int event = reader.next();
if(event == XMLStreamConstants.START_ELEMENT) {
level++;
// do stateful stuff here
// for child logic:
if(reader.getLocalName().equals("Whatever")) {
parseSubTreeForWhatever(reader);
level --; // read from level 1 to 0 in submethod.
}
// alternatively, faster
if(level == 4) {
parseSubTreeForWhateverAtRelativeLevel4(reader);
level --; // read from level 1 to 0 in submethod.
}
} else if(event == XMLStreamConstants.END_ELEMENT) {
level--;
// do stateful stuff here, too
}
} while(level > 0);
}
where at the start of the document you read until the first start element and break (add the writer + copy for your use, of course, as above).
Note that if you do an object binding, these methods should be placed in that object, and equally for the serialization methods.
I am pretty sure you will get tens of MB/s on a modern system, and that should be sufficient. An issue to investigate further is how to use multiple cores for the actual input; if you know for a fact the encoding subset, like non-crazy UTF-8 or ISO-8859, then random access might be possible -> send to different cores.
Have fun, and tell us how it went ;)
Edit: Almost forgot, if you for some reason are the one creating the files in the first place, or you will be reading them after splitting, you will see HUGE performance gains using XML binarization; there exist XML Schema generators which again can go into code generators. (And some XSLT transform libs use code generation too.) And run the JVM with the -server option.
How to make it faster:
Use asynchronous writes, possibly in parallel, might boost your perf if you have RAID-X something disks
Write to an SSD instead of HDD
My suggestion is that SAX, StAX and DOM are not the ideal XML parsers for your problem; the perfect solution is called VTD-XML. There is an article on this subject explaining why DOM, SAX and StAX have all done something very wrong... the code below is the shortest you have to write, yet it performs 10x faster than DOM or SAX. http://www.javaworld.com/javaworld/jw-07-2006/jw-0724-vtdxml.html
Here is a more recent paper entitled Processing XML with Java – A Performance Benchmark: http://recipp.ipp.pt/bitstream/10400.22/1847/1/ART_BrunoOliveira_2013.pdf
import com.ximpleware.*;
import java.io.*;
public class gandalf {
public static void main(String a[]) throws VTDException, Exception{
VTDGen vg = new VTDGen();
if (vg.parseFile("c:\\xml\\gandalf.txt", false)){
VTDNav vn=vg.getNav();
AutoPilot ap = new AutoPilot(vn);
ap.selectXPath("/company/staff[nickname]");
int i=-1;
int count=0;
while((i=ap.evalXPath())!=-1){
vn.dumpFragment("c:\\xml\\staff"+count+".xml");
count++;
}
}
}
}
Here is a DOM-based solution. I have tested this with the XML you provided; it needs to be checked against the actual XML files that you have.
Since this is based on a DOM parser, please remember that it will require a lot of memory depending upon your XML file size. But it's much faster as it's DOM-based.
Algorithm :
Parse the document
Extract the root element name
Get the list of nodes based on the split criteria (using XPath)
For each node, create an empty document with root element name as extracted in step #2
Insert the node in this new document
Check if nodes are to be filtered or not.
If nodes are to be filtered, then check if a specified element is present in the newly created doc.
If the element is not present, don't write to the file.
If the nodes are NOT to be filtered at all, skip the check in step 7 and write the document to the file.
This can be run from the command prompt as follows:
java XMLSplitter xmlFileLocation splitElement filter filterElement
For the XML you mentioned it will be:
java XMLSplitter input.xml staff true nickname
In case you don't want to filter:
java XMLSplitter input.xml staff
Here is the complete Java code:
package com.xml.xpath;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.DOMException;
import org.w3c.dom.DOMImplementation;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
public class XMLSplitter {
DocumentBuilder builder = null;
XPath xpath = null;
Transformer transformer = null;
String filterElement;
String splitElement;
String xmlFileLocation;
boolean filter = true;
public static void main(String[] arg) throws Exception{
XMLSplitter xMLSplitter = null;
if(arg.length < 4){
if(arg.length < 2){
System.out.println("Insufficient arguments !!!");
System.out.println("Usage: XMLSplitter xmlFileLocation splitElement filter filterElement ");
return;
}else{
System.out.println("Filter is off...");
xMLSplitter = new XMLSplitter();
xMLSplitter.init(arg[0],arg[1],false,null);
}
}else{
xMLSplitter = new XMLSplitter();
xMLSplitter.init(arg[0],arg[1],Boolean.parseBoolean(arg[2]),arg[3]);
}
xMLSplitter.start();
}
public void init(String xmlFileLocation, String splitElement, boolean filter, String filterElement )
throws ParserConfigurationException, TransformerConfigurationException{
//Initialize the Document builder
System.out.println("Initializing..");
DocumentBuilderFactory domFactory = DocumentBuilderFactory.newInstance();
domFactory.setNamespaceAware(true);
builder = domFactory.newDocumentBuilder();
//Initialize the transformer
TransformerFactory transformerFactory = TransformerFactory.newInstance();
transformer = transformerFactory.newTransformer();
transformer.setOutputProperty(OutputKeys.METHOD, "xml");
transformer.setOutputProperty(OutputKeys.ENCODING,"UTF-8");
transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "4");
transformer.setOutputProperty(OutputKeys.INDENT, "yes");
//Initialize the xpath
XPathFactory factory = XPathFactory.newInstance();
xpath = factory.newXPath();
this.filterElement = filterElement;
this.splitElement = splitElement;
this.xmlFileLocation = xmlFileLocation;
this.filter = filter;
}
public void start() throws Exception{
//Parser the file
System.out.println("Parsing file.");
Document doc = builder.parse(xmlFileLocation);
//Get the root node name
System.out.println("Getting root element.");
XPathExpression rootElementexpr = xpath.compile("/");
Object rootExprResult = rootElementexpr.evaluate(doc, XPathConstants.NODESET);
NodeList rootNode = (NodeList) rootExprResult;
String rootNodeName = rootNode.item(0).getFirstChild().getNodeName();
//Get the list of split elements
XPathExpression expr = xpath.compile("//"+splitElement);
Object result = expr.evaluate(doc, XPathConstants.NODESET);
NodeList nodes = (NodeList) result;
System.out.println("Total number of split nodes "+nodes.getLength());
for (int i = 0; i < nodes.getLength(); i++) {
//Wrap each node inside root of the parent xml doc
Node sigleNode = wrappInRootElement(rootNodeName,nodes.item(i));
//Get the XML string of the fragment
String xmlFragment = serializeDocument(sigleNode);
//System.out.println(xmlFragment);
//Write the xml fragment in file.
storeInFile(xmlFragment,i);
}
}
private Node wrappInRootElement(String rootNodeName, Node fragmentDoc)
throws XPathExpressionException, ParserConfigurationException, DOMException,
SAXException, IOException, TransformerException{
//Create empty doc with just root node
DOMImplementation domImplementation = builder.getDOMImplementation();
Document doc = domImplementation.createDocument(null,null,null);
Element theDoc = doc.createElement(rootNodeName);
doc.appendChild(theDoc);
//Insert the fragment inside the root node
InputSource inStream = new InputSource();
String xmlString = serializeDocument(fragmentDoc);
inStream.setCharacterStream(new StringReader(xmlString));
Document fr = builder.parse(inStream);
theDoc.appendChild(doc.importNode(fr.getFirstChild(),true));
return doc;
}
private String serializeDocument(Node doc) throws TransformerException, XPathExpressionException{
if(!serializeThisNode(doc)){
return null;
}
DOMSource domSource = new DOMSource(doc);
StringWriter stringWriter = new StringWriter();
StreamResult streamResult = new StreamResult(stringWriter);
transformer.transform(domSource, streamResult);
String xml = stringWriter.toString();
return xml;
}
//Check whether node is to be stored in file or rejected based on input
private boolean serializeThisNode(Node doc) throws XPathExpressionException{
if(!filter){
return true;
}
XPathExpression filterElementexpr = xpath.compile("//"+filterElement);
Object result = filterElementexpr.evaluate(doc, XPathConstants.NODESET);
NodeList nodes = (NodeList) result;
if(nodes.item(0) != null){
return true;
}else{
return false;
}
}
private void storeInFile(String content, int fileIndex) throws IOException{
if(content == null || content.length() == 0){
return;
}
String fileName = splitElement+fileIndex+".xml";
File file = new File(fileName);
if(file.exists()){
System.out.println(" The file "+fileName+" already exists !! cannot create the file with the same name ");
return;
}
FileWriter fileWriter = new FileWriter(file);
fileWriter.write(content);
fileWriter.close();
System.out.println("Generated file "+fileName);
}
}
Let me know if this works for you or if you need any other help regarding this code.

Parse document structure with Java

We need to get a tree-like structure from a given text document using Java. The file type used should be common and open (RTF, ODT, ...). Currently we use Apache Tika to parse plain text from multiple documents.
What file type and API should we use so that we can most reliably get the correct structure parsed? If this is possible with Tika, I would be happy to see any demonstrations.
For example, we should get this kind of data from the given document:
Main Heading
Heading 1
Heading 1.1
Heading 2
Heading 2.2
Main Heading is the title of the paper. The paper has two main headings, Heading 1 and Heading 2, and they each have one subheading. We should also get the content under each heading (paragraph text).
Any help is appreciated.
OpenDocument (.odt) is practically a zip package containing multiple XML files. content.xml contains the actual textual content of the document. We are interested in headings, and they can be found inside text:h tags. Read more about ODT.
I found an implementation for extracting headings from .odt files with QueryPath.
Since the original question was about Java, here it is. First we need to get access to content.xml by using ZipFile. Then we use SAX to parse the XML content out of content.xml. The sample code simply prints out all the headings; for Test3.odt the output looks like this:
Test3.odt
content.xml
3764
1 My New Great Paper
2 Abstract
2 Introduction
2 Content
3 More content
3 Even more
2 Conclusions
Sample code:
public void printHeadingsOfOdtFIle(File odtFile) {
try {
ZipFile zFile = new ZipFile(odtFile);
System.out.println(zFile.getName());
ZipEntry contentFile = zFile.getEntry("content.xml");
System.out.println(contentFile.getName());
System.out.println(contentFile.getSize());
XMLReader xr = XMLReaderFactory.createXMLReader();
OdtDocumentContentHandler handler = new OdtDocumentContentHandler();
xr.setContentHandler(handler);
xr.parse(new InputSource(zFile.getInputStream(contentFile)));
} catch (Exception e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
new OdtDocumentStructureExtractor().printHeadingsOfOdtFIle(new File("Test3.odt"));
}
Relevant parts of used ContentHandler look like this:
@Override
public void startElement(String uri, String localName, String qName, Attributes atts) throws SAXException {
temp = "";
if("text:h".equals(qName)) {
String headingLevel = atts.getValue("text:outline-level");
if(headingLevel != null) {
System.out.print(headingLevel + " ");
}
}
}
@Override
public void characters(char[] ch, int start, int length) throws SAXException {
char[] subArray = new char[length];
System.arraycopy(ch, start, subArray, 0, length);
temp = new String(subArray);
fullText.append(temp);
}
@Override
public void endElement(String uri, String localName, String qName) throws SAXException {
if("text:h".equals(qName)) {
System.out.println(temp);
}
}
