Is it possible to convert XSD to XPath in Java? [duplicate]

This question already has answers here:
How to generate xpath from xsd?
(3 answers)
Closed 7 years ago.
I need to represent all the elements from an XSD schema as XPath expressions. Is there any way to do this? For example, if there are five elements in the XSD schema, I need to display the XPath of each of the five elements separately.
My suggestion is that, behind the scenes, an XML document corresponding to the XSD could be created and the XPaths generated from it. Please suggest a solution along these lines if the approach is correct, or suggest other approaches.
Thanks.
M.Sasi kumar
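For what it's worth, one alternative to generating intermediate XML is to walk the XSD itself with a plain DOM parser and emit a path for every named xs:element declaration. The sketch below is a minimal, hypothetical illustration: it handles only inline, named element declarations (no ref=, named type references, imports, or includes), and the inline schema string is invented for the demo.

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XsdToXPath {
    static final String XSD_NS = "http://www.w3.org/2001/XMLSchema";
    static final List<String> PATHS = new ArrayList<String>();

    // Recursively walk the schema DOM; every named xs:element contributes one path.
    static void walk(Element el, String path) {
        NodeList children = el.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node n = children.item(i);
            if (!(n instanceof Element)) continue;
            Element child = (Element) n;
            if (XSD_NS.equals(child.getNamespaceURI())
                    && "element".equals(child.getLocalName())
                    && child.hasAttribute("name")) {
                String next = path + "/" + child.getAttribute("name");
                PATHS.add(next);
                walk(child, next);
            } else {
                walk(child, path); // descend through complexType/sequence wrappers
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Made-up inline schema standing in for a real .xsd file.
        String xsd = "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
                + "<xs:element name='company'><xs:complexType><xs:sequence>"
                + "<xs:element name='staff'/><xs:element name='address'/>"
                + "</xs:sequence></xs:complexType></xs:element></xs:schema>";
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
                .parse(new InputSource(new StringReader(xsd)));
        walk(doc.getDocumentElement(), "");
        for (String p : PATHS) System.out.println(p);
    }
}
```

Positional predicates and occurrence counts would still have to be layered on top, as in the SAX answer below the question.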

import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.Stack;
import javax.xml.parsers.*;
import org.xml.sax.*;
import org.xml.sax.helpers.DefaultHandler;
/**
* SAX handler that creates and prints XPath expressions for each element encountered.
*
* The algorithm is not infallible when elements appear at different levels of the hierarchy.
* For example, the following input:
* - <elemA/>
* - <elemA/>
* - <elemB/>
* - <elemA/>
* - <elemC>
* - <elemB/>
* - </elemC>
*
* will report
*
* //elemA[0]
* //elemA[1]
* //elemB[0]
* //elemA[2]
* //elemC[0]
* //elemC[0]/elemB[1] (this is wrong: should be //elemC[0]/elemB[0] )
*
* It also ignores namespaces, and thus treats <foo:elemA> the same as <bar:elemA>.
*/
public class SAXCreateXPath extends DefaultHandler {
// map of all encountered tags and their running count
private Map<String, Integer> tagCount;
// keep track of the succession of elements
private Stack<String> tags;
// set to the tag name of the recently closed tag
String lastClosedTag;
/**
* Construct the XPath expression
*/
private String getCurrentXPath() {
String str = "//";
boolean first = true;
for (String tag : tags) {
if (first)
str = str + tag;
else
str = str + "/" + tag;
str += "["+tagCount.get(tag)+"]";
first = false;
}
return str;
}
@Override
public void startDocument() throws SAXException {
tags = new Stack<String>();
tagCount = new HashMap<String, Integer>();
}
@Override
public void startElement (String namespaceURI, String localName, String qName, Attributes atts)
throws SAXException
{
boolean isRepeatElement = false;
if (tagCount.get(localName) == null) {
tagCount.put(localName, 0);
} else {
tagCount.put(localName, 1 + tagCount.get(localName));
}
if (lastClosedTag != null) {
// an element was recently closed ...
if (lastClosedTag.equals(localName)) {
// ... and it's the same as the current one
isRepeatElement = true;
} else {
// ... but it's different from the current one, so discard it
tags.pop();
}
}
// if it's not the same element, add the new element and zero count to list
if (! isRepeatElement) {
tags.push(localName);
}
System.out.println(getCurrentXPath());
lastClosedTag = null;
}
@Override
public void endElement (String uri, String localName, String qName) throws SAXException {
// if two tags are closed in succession (without an intermediate opening tag),
// then the information about the deeper nested one is discarded
if (lastClosedTag != null) {
tags.pop();
}
lastClosedTag = localName;
}
public static void main (String[] args) throws Exception {
if (args.length < 1) {
System.err.println("Usage: SAXCreateXPath <file.xml>");
System.exit(1);
}
// Create a JAXP SAXParserFactory and configure it
SAXParserFactory spf = SAXParserFactory.newInstance();
spf.setNamespaceAware(true);
spf.setValidating(false);
// Create a JAXP SAXParser
SAXParser saxParser = spf.newSAXParser();
// Get the encapsulated SAX XMLReader
XMLReader xmlReader = saxParser.getXMLReader();
// Set the ContentHandler of the XMLReader
xmlReader.setContentHandler(new SAXCreateXPath());
String filename = args[0];
String path = new File(filename).getAbsolutePath();
if (File.separatorChar != '/') {
path = path.replace(File.separatorChar, '/');
}
if (!path.startsWith("/")) {
path = "/" + path;
}
// Tell the XMLReader to parse the XML document
xmlReader.parse("file:"+path);
}
}

Related

Finding javascript code in PDF using Apache PDFBox

My goal is to extract and process any JavaScript code that a PDF document might contain. By opening a PDF in an editor I can see objects like this:
402 0 obj
<</S/JavaScript/JS(\n\r\n /* Set day 25 */\r\n FormRouter_SetCurrentDate\("25"\);\r)>>
endobj
I am trying to use Apache PDFBox to accomplish this but so far with no luck.
This line returns an empty list:
jsObj = doc.getObjectsByType(COSName.JAVA_SCRIPT);
Can anyone give me some direction?
This tool is based on the PrintFields example in PDFBox. It shows the JavaScript fields in forms. I wrote it last year for someone who had problems with the relationships between AcroForm fields (some fields were enabled / disabled depending on the values of other fields). There are still other places where JavaScript can appear.
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package pdfboxpageimageextraction;
import java.io.File;
import java.io.IOException;
import java.util.List;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDDocumentCatalog;
import org.apache.pdfbox.pdmodel.interactive.action.PDAction;
import org.apache.pdfbox.pdmodel.interactive.action.PDActionJavaScript;
import org.apache.pdfbox.pdmodel.interactive.action.PDFormFieldAdditionalActions;
import org.apache.pdfbox.pdmodel.interactive.annotation.PDAnnotationWidget;
import org.apache.pdfbox.pdmodel.interactive.form.PDAcroForm;
import org.apache.pdfbox.pdmodel.interactive.form.PDField;
import org.apache.pdfbox.pdmodel.interactive.form.PDNonTerminalField;
import org.apache.pdfbox.pdmodel.interactive.form.PDTerminalField;
/**
* This example will take a PDF document and print all the fields from the file.
*
* @author Ben Litchfield
*
*/
public class PrintJavaScriptFields
{
/**
* This will print all the fields from the document.
*
* @param pdfDocument The PDF to get the fields from.
*
* @throws IOException If there is an error getting the fields.
*/
public void printFields(PDDocument pdfDocument) throws IOException
{
PDDocumentCatalog docCatalog = pdfDocument.getDocumentCatalog();
PDAcroForm acroForm = docCatalog.getAcroForm();
List<PDField> fields = acroForm.getFields();
//System.out.println(fields.size() + " top-level fields were found on the form");
for (PDField field : fields)
{
processField(field, "|--", field.getPartialName());
}
}
private void processField(PDField field, String sLevel, String sParent) throws IOException
{
String partialName = field.getPartialName();
if (field instanceof PDTerminalField)
{
PDTerminalField termField = (PDTerminalField) field;
PDFormFieldAdditionalActions fieldActions = field.getActions();
if (fieldActions != null)
{
System.out.println(field.getFullyQualifiedName() + ": " + fieldActions.getClass().getSimpleName() + " JS field actions:\n" + fieldActions.getCOSObject());
printPossibleJS(fieldActions.getK());
printPossibleJS(fieldActions.getC());
printPossibleJS(fieldActions.getF());
printPossibleJS(fieldActions.getV());
}
for (PDAnnotationWidget widgetAction : termField.getWidgets())
{
PDAction action = widgetAction.getAction();
if (action instanceof PDActionJavaScript)
{
System.out.println(field.getFullyQualifiedName() + ": " + action.getClass().getSimpleName() + " js widget action:\n" + action.getCOSObject());
printPossibleJS(action);
}
}
}
if (field instanceof PDNonTerminalField)
{
if (!sParent.equals(field.getPartialName()))
{
if (partialName != null)
{
sParent = sParent + "." + partialName;
}
}
//System.out.println(sLevel + sParent);
for (PDField child : ((PDNonTerminalField) field).getChildren())
{
processField(child, "| " + sLevel, sParent);
}
}
else
{
String fieldValue = field.getValueAsString();
StringBuilder outputString = new StringBuilder(sLevel);
outputString.append(sParent);
if (partialName != null)
{
outputString.append(".").append(partialName);
}
outputString.append(" = ").append(fieldValue);
outputString.append(", type=").append(field.getClass().getName());
//System.out.println(outputString);
}
}
private void printPossibleJS(PDAction kAction)
{
if (kAction instanceof PDActionJavaScript)
{
PDActionJavaScript jsAction = (PDActionJavaScript) kAction;
String jsString = jsAction.getAction();
if (!jsString.contains("\n"))
{
// avoid display problems with netbeans
jsString = jsString.replaceAll("\r", "\n").replaceAll("\n\n", "\n");
}
System.out.println(jsString);
System.out.println();
}
}
/**
* This will read a PDF file and print out the form elements. <br />
* see usage() for commandline
*
* @param args command line arguments
*
* @throws IOException If there is an error importing the FDF document.
*/
public static void main(String[] args) throws IOException
{
PDDocument pdf = null;
try
{
pdf = PDDocument.load(new File("XXXX", "YYYYY.pdf"));
PrintJavaScriptFields exporter = new PrintJavaScriptFields();
exporter.printFields(pdf);
}
finally
{
if (pdf != null)
{
pdf.close();
}
}
}
}
As a bonus, here's code to show all COSString objects:
import java.io.File;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.pdfbox.cos.*;
import org.apache.pdfbox.pdmodel.PDDocument;
public class ShowAllCOSStrings
{
static Set<COSString> strings = new HashSet<COSString>();
static void crawl(COSBase base)
{
if (base instanceof COSString)
{
strings.add((COSString)base);
return;
}
if (base instanceof COSDictionary)
{
COSDictionary dict = (COSDictionary) base;
for (COSName key : dict.keySet())
{
crawl(dict.getDictionaryObject(key));
}
return;
}
if (base instanceof COSArray)
{
COSArray ar = (COSArray) base;
for (COSBase item : ar)
{
crawl(item);
}
return;
}
if (base instanceof COSNull ||
base instanceof COSObject ||
base instanceof COSName ||
base instanceof COSNumber ||
base instanceof COSBoolean ||
base == null)
{
return;
}
System.out.println("huh? " + base);
}
public static void main(String[] args) throws IOException
{
PDDocument doc = PDDocument.load(new File("XXX","YYY.pdf"));
for (COSObject obj : doc.getDocument().getObjects())
{
COSBase base = obj.getObject();
//System.out.println(obj + ": " + base);
crawl(base);
}
System.out.println(strings.size() + " strings:");
for (COSString s : strings)
{
String str = s.getString();
if (!str.contains("\n"))
{
// avoid display problems with netbeans
str = str.replaceAll("\r", "\n").replaceAll("\n\n", "\n");
}
System.out.println(str);
}
doc.close();
}
}
However, JavaScript can also be in a stream. See the JS entry under "Additional entries specific to a rendition action" in the PDF spec:
A text string or stream containing a JavaScript script that shall be
executed when the action is triggered.
You can change the code above to catch COSStream objects too; COSStream extends COSDictionary.

How to convert xml node into string without changing order of attributes? [duplicate]

This question already has answers here:
Order of XML attributes after DOM processing
(12 answers)
Closed 8 years ago.
I am trying to convert an XML Node to a String using the following code:
private String nodeToString(final Node node) {
final StringWriter stringWriter = new StringWriter();
try {
final Transformer transformer = TransformerFactory.newInstance().newTransformer();
transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
transformer.setOutputProperty(OutputKeys.INDENT, "no");
transformer.transform(new DOMSource(node), new StreamResult(stringWriter));
} catch (final TransformerException e) {
JOptionPane.showMessageDialog(this, e.getMessage(), "Error", JOptionPane.ERROR_MESSAGE);
}
return stringWriter.toString();
}
My problem is that it writes the attributes of the XML node in alphabetical order. Is there any property I could apply so that the node's attributes are not reordered?
The DOM API does not preserve attribute order:
NamedNodeMaps are not maintained in any particular order
If you have a Node then you have already lost any attribute ordering. Consider this XML:
<?xml version="1.0" encoding="UTF-8"?>
<!-- attrs.xml -->
<attrs
a="a"
z="z"
b="b"
m="m" />
There are no guarantees about the ordering of the output of this application:
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;
public class Attrs {
public static void main(String[] args) throws Exception {
NamedNodeMap attrs = DocumentBuilderFactory.newInstance()
.newDocumentBuilder()
.parse("attrs.xml")
.getElementsByTagName("attrs")
.item(0)
.getAttributes();
for (int i = 0; i < attrs.getLength(); i++) {
Attr attribute = (Attr) attrs.item(i);
System.out.println(attribute.getName() + "=" + attribute.getValue());
}
}
}
If they are alphabetical then that is only an implementation side-effect, not a requirement. If attribute order is significant to you then you are using the wrong tools.
I figured out how to do this: I read the XML file, extracted only the specific node from it as a string, and applied string operations to match my conditions. By doing this I obviously cannot leverage the parser API, but it fulfilled my requirements. Following is my code snippet:
/**
* @param in InputStream of xml file
*/
private String getNodeString(InputStream in) throws IOException {
String nodeString = "";
InputStreamReader is = new InputStreamReader(in);
StringBuilder sb = new StringBuilder();
BufferedReader br = new BufferedReader(is);
String read = br.readLine();
String fileData;
while (read != null) {
//System.out.println(read);
sb.append(read);
read = br.readLine();
}
fileData = sb.toString().trim();
// Start index of node
int start = fileData.indexOf("<" + mSignedNode);
// End index of node, next node name
int end = fileData.indexOf("</Configuration>");
nodeString = fileData.substring(start, end);
return nodeString.trim();
}
The method is quite dirty, but you can pass parameters to find the start index and end index.
Hope this helps someone, rather than just closing their question ;)
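As a hedged sketch of that parameterization, the hard-coded node name and end marker can become arguments; NodeSnippet and extract are invented names, not from the answer above.

```java
public class NodeSnippet {

    // Hypothetical helper generalizing the method above: the node name and the
    // end marker become parameters instead of hard-coded values.
    static String extract(String fileData, String nodeName, String endMarker) {
        int start = fileData.indexOf("<" + nodeName);
        int end = fileData.indexOf(endMarker);
        if (start < 0 || end < 0 || end <= start) {
            return ""; // node or marker not found: nothing to extract
        }
        return fileData.substring(start, end).trim();
    }

    public static void main(String[] args) {
        String xml = "<Configuration><Signed a='1'>data</Signed></Configuration>";
        System.out.println(extract(xml, "Signed", "</Configuration>"));
    }
}
```

It keeps the same caveat as the original: this is plain string matching, so attribute order is preserved but nothing is validated.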

Using Jsoup, how can I fetch each and every information resides in each link?

package com.muthu;
import java.io.IOException;
import org.jsoup.Jsoup;
import org.jsoup.helper.Validate;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
import org.jsoup.select.NodeVisitor;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import org.jsoup.nodes.*;
public class TestingTool
{
public static void main(String[] args) throws IOException
{
Validate.isTrue(args.length == 0, "usage: supply url to fetch");
String url = "http://www.stackoverflow.com/";
print("Fetching %s...", url);
Document doc = Jsoup.connect(url).get();
Elements links = doc.select("a[href]");
System.out.println(doc.text());
Elements tags=doc.getElementsByTag("div");
String alls=doc.text();
System.out.println("\n");
for (Element link : links)
{
print(" %s  (%s)", link.attr("abs:href"), trim(link.text(), 35));
}
BufferedWriter bw = new BufferedWriter(new FileWriter(new File("C:/tool/linknames.txt")));
for (Element link : links) {
bw.write("Link: "+ link.text().trim());
bw.write(System.getProperty("line.separator"));
}
bw.flush();
bw.close();
}
private static void print(String msg, Object... args) {
System.out.println(String.format(msg, args));
}
private static String trim(String s, int width) {
if (s.length() > width)
return s.substring(0, width-1) + ".";
else
return s;
}
}
If you connect to a URL it will only parse the current page. But you can 1) connect to a URL, 2) parse the information you need, 3) select all further links, 4) connect to them, and 5) continue as long as there are new links.
Considerations:
You need a list (?) or something else to store the links you have already parsed
You have to decide if you need only links from this page or external ones too
You have to skip pages like "about", "contact", etc.
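The first consideration, remembering which links were already parsed, can be sketched without Jsoup at all. A Set works better than a List here because add() reports whether the URL was new; the link map below is made-up stand-in data for real pages:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class VisitedSetDemo {
    static Set<String> visited = new HashSet<>();
    static List<String> order = new ArrayList<>();

    // Simulated link graph standing in for real pages (hypothetical data);
    // note the cycle a -> b -> a that the visited set must break.
    static Map<String, List<String>> links = Map.of(
            "a", List.of("b", "c"),
            "b", List.of("a", "c"),
            "c", List.of());

    static void visit(String url) {
        if (!visited.add(url)) return; // Set.add returns false if already present
        order.add(url);                // "parse" the page here
        for (String next : links.get(url)) visit(next);
    }

    public static void main(String[] args) {
        visit("a");
        System.out.println(order); // each page visited exactly once
    }
}
```

In a real crawler the recursion body would be the Jsoup connect/select code from the answer below, but the bookkeeping is the same.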
Edit:
(Note: you have to add some changes / errorhandling code)
List<String> visitedUrls = new ArrayList<>(); // Store all links you've already visited
public void visitUrl(String url) throws IOException
{
url = url.toLowerCase(); // now it's case-insensitive
if( !visitedUrls.contains(url) ) // Do this only if not visited yet
{
Document doc = Jsoup.connect(url).get(); // Connect to Url and parse Document
/* ... Select your Data here ... */
Elements nextLinks = doc.select("a[href]"); // Select next links - add more restriction!
for( Element next : nextLinks ) // Iterate over all Links
{
visitUrl(next.absUrl("href")); // Recursive call for all next Links
}
}
}
You have to add more restrictions / checks where the next links are selected (maybe you want to skip / ignore some), plus some error handling.
Edit 2:
To skip ignored links you can use this:
Create a Set / List / whatever, where you store ignored keywords
Fill it with those keywords
Before you call the visitUrl() method with the new link to parse, check whether the new URL contains any of the ignored keywords. If it contains at least one, it is skipped.
I modified the example a bit to do so (but it's not tested yet!).
List<String> visitedUrls = new ArrayList<>(); // Store all links you've already visited
Set<String> ignore = new HashSet<>(); // Store all keywords you want ignore
// ...
/*
* Add keywords to the ignorelist. Each link that contains one of this
* words will be skipped.
*
* Do this in eg. constructor, static block or a init method.
*/
ignore.add(".twitter.com");
// ...
public void visitUrl(String url) throws IOException
{
url = url.toLowerCase(); // Now it's case-insensitive
if( !visitedUrls.contains(url) ) // Do this only if not visited yet
{
Document doc = Jsoup.connect(url).get(); // Connect to Url and parse Document
/* ... Select your Data here ... */
Elements nextLinks = doc.select("a[href]"); // Select next links - add more restriction!
for( Element next : nextLinks ) // Iterate over all Links
{
boolean skip = false; // If false: parse the url, if true: skip it
final String href = next.absUrl("href"); // Select the 'href' attribute -> next link to parse
for( String s : ignore ) // Iterate over all ignored keywords - maybe there's a better solution for this
{
if( href.contains(s) ) // If the url contains ignored keywords it will be skipped
{
skip = true;
break;
}
}
if( !skip )
visitUrl(next.absUrl("href")); // Recursive call for all next Links
}
}
}
Parsing the next link is done by this:
final String href = next.absUrl("href");
/* ... */
visitUrl(next.absUrl("href"));
But possibly you should add some more stop-conditions to this part.

How to improve splitting xml file performance

I've seen quite a lot of posts/blogs/articles about splitting an XML file into smaller chunks and decided to create my own because I have some custom requirements. Here is what I mean; consider the following XML:
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<company>
<staff id="1">
<firstname>yong</firstname>
<lastname>mook kim</lastname>
<nickname>mkyong</nickname>
<salary>100000</salary>
</staff>
<staff id="2">
<firstname>yong</firstname>
<lastname>mook kim</lastname>
<nickname>mkyong</nickname>
<salary>100000</salary>
</staff>
<staff id="3">
<firstname>yong</firstname>
<lastname>mook kim</lastname>
<nickname>mkyong</nickname>
<salary>100000</salary>
</staff>
<staff id="4">
<firstname>yong</firstname>
<lastname>mook kim</lastname>
<nickname>mkyong</nickname>
<salary>100000</salary>
</staff>
<staff id="5">
<firstname>yong</firstname>
<lastname>mook kim</lastname>
<salary>100000</salary>
</staff>
</company>
I want to split this XML into n parts, with one staff element per file, but each staff element must contain nickname; if it's not there I don't want it. So this should produce 4 XML split files, containing staff ids 1 through 4.
Here is my code :
public int split() throws Exception{
BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(inputFilePath)));
String line;
List<String> tempList = null;
while((line=br.readLine())!=null){
if(line.contains("<?xml version=\"1.0\"") || line.contains("<" + rootElement + ">") || line.contains("</" + rootElement + ">")){
continue;
}
if(line.contains("<"+ element +">")){
tempList = new ArrayList<String>();
}
tempList.add(line);
if(line.contains("</"+ element +">")){
if(hasConditions(tempList)){
writeToSplitFile(tempList);
writtenObjectCounter++;
totalCounter++;
}
}
if(writtenObjectCounter == itemsPerFile){
writtenObjectCounter = 0;
fileCounter++;
tempList.clear();
}
}
if(tempList.size() != 0){
writeClosingRootElement();
}
return totalCounter;
}
private void writeToSplitFile(List<String> itemList) throws Exception{
BufferedWriter wr = new BufferedWriter(new FileWriter(outputDirectory + File.separator + "split_" + fileCounter + ".xml", true));
if(writtenObjectCounter == 0){
wr.write("<" + rootElement + ">");
wr.write("\n");
}
for (String string : itemList) {
wr.write(string);
wr.write("\n");
}
if(writtenObjectCounter == itemsPerFile-1)
wr.write("</" + rootElement + ">");
wr.close();
}
private void writeClosingRootElement() throws Exception{
BufferedWriter wr = new BufferedWriter(new FileWriter(outputDirectory + File.separator + "split_" + fileCounter + ".xml", true));
wr.write("</" + rootElement + ">");
wr.close();
}
private boolean hasConditions(List<String> list){
int matchList = 0;
for (String condition : conditionList) {
for (String string : list) {
if(string.contains(condition)){
matchList++;
}
}
}
if(matchList >= conditionList.size()){
return true;
}
return false;
}
I know that opening/closing the stream for each written staff element impacts performance, as opposed to writing once per file (which may contain n staff elements). Naturally the root and split elements are configurable.
Any ideas how I can improve the performance/logic? I'd prefer some code, but good advice can be better sometimes.
Edit:
This XML example is actually a dummy; the real XML I'm trying to split has about 300-500 different elements under the split element, all appearing in random order, and their number varies. StAX may not be the best solution after all?
Bounty update:
I'm looking for a solution (code) that will:
Be able to split an XML file into n parts with x split elements (in the dummy XML example, staff is the split element).
The content of the split files should be wrapped in the root element from the original file (company in the dummy example).
Allow me to specify a condition that must be met in the split element, i.e. I want only staff that have a nickname and want to discard those without one; but it should also be possible to run the split without any condition.
The code doesn't necessarily have to build on my solution, which lacks good logic and performance but works. Still, I'm not happy with just "it works", and I can't find enough StAX examples for these kinds of operations; the user community is not great either. It doesn't have to be a StAX solution.
I'm probably asking too much, but I'm here to learn, so I'm giving a good bounty for the solution.
First piece of advice: don't try to write your own XML handling code. Use an XML parser - it's going to be much more reliable and quite possibly faster.
If you use an XML pull parser (e.g. StAX) you should be able to read an element at a time and write it out to disk, never reading the whole document in one go.
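A minimal, hypothetical sketch of that element-at-a-time idea, using only the built-in StAX API (attribute and namespace copying is omitted for brevity, and firstStaff is an invented name):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import javax.xml.stream.XMLStreamWriter;

public class StaxCopyDemo {

    // Copy the first <staff> element from the input to a string, one event at
    // a time, so the whole document is never held in memory.
    static String firstStaff(String xml) throws XMLStreamException {
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        StringWriter out = new StringWriter();
        XMLStreamWriter w = XMLOutputFactory.newInstance().createXMLStreamWriter(out);
        boolean inStaff = false;
        int depth = 0;
        while (r.hasNext()) {
            int ev = r.next();
            if (ev == XMLStreamConstants.START_ELEMENT) {
                if (!inStaff && "staff".equals(r.getLocalName())) inStaff = true;
                if (inStaff) { depth++; w.writeStartElement(r.getLocalName()); }
            } else if (ev == XMLStreamConstants.CHARACTERS && inStaff) {
                w.writeCharacters(r.getText());
            } else if (ev == XMLStreamConstants.END_ELEMENT && inStaff) {
                w.writeEndElement();
                if (--depth == 0) break; // the element is complete, stop copying
            }
        }
        w.flush();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<company><staff id='1'><nickname>mk</nickname></staff>"
                + "<staff id='2'/></company>";
        System.out.println(firstStaff(xml));
    }
}
```

A real splitter would loop this per staff element, directing each copy to its own output file, which is essentially what the fuller StAX answer below does with the event API.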
Here's my suggestion. It requires a streaming XSLT 3.0 processor: which means in practice that it needs Saxon-EE 9.3.
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="3.0">
<xsl:mode streamable="yes"/>
<xsl:template match="/">
<xsl:apply-templates select="company/staff"/>
</xsl:template>
<xsl:template match="staff">
<xsl:variable name="v" as="element(staff)">
<xsl:copy-of select="."/>
</xsl:variable>
<xsl:if test="$v/nickname">
<xsl:result-document href="{@id}.xml">
<xsl:copy-of select="$v"/>
</xsl:result-document>
</xsl:if>
</xsl:template>
</xsl:stylesheet>
In practice, though, unless you have hundreds of megabytes of data, I suspect a non-streaming solution will be quite fast enough, and probably faster than your hand-written Java code, given that your Java code is nothing to get excited about. At any rate, give an XSLT solution a try before you write reams of low-level Java. It's a routine problem, after all.
You could do the following with StAX:
Algorithm
Read and hold onto the root element event.
Read first chunk of XML:
Queue events until condition has been met.
If condition has been met:
Write start document event.
Write out root start element event
Write out split start element event
Write out queued events
Write out remaining events for this section.
If condition was not met then do nothing.
Repeat step 2 with next chunk of XML
Code for Your Use Case
The following code uses StAX APIs to break up the document as outlined in your question:
package forum7408938;
import java.io.*;
import java.util.*;
import javax.xml.namespace.QName;
import javax.xml.stream.*;
import javax.xml.stream.events.*;
public class Demo {
public static void main(String[] args) throws Exception {
Demo demo = new Demo();
demo.split("src/forum7408938/input.xml", "nickname");
//demo.split("src/forum7408938/input.xml", null);
}
private void split(String xmlResource, String condition) throws Exception {
XMLEventFactory xef = XMLEventFactory.newFactory();
XMLInputFactory xif = XMLInputFactory.newInstance();
XMLEventReader xer = xif.createXMLEventReader(new FileReader(xmlResource));
StartElement rootStartElement = xer.nextTag().asStartElement(); // Advance to the root (company) element
StartDocument startDocument = xef.createStartDocument();
EndDocument endDocument = xef.createEndDocument();
XMLOutputFactory xof = XMLOutputFactory.newFactory();
while(xer.hasNext() && !xer.peek().isEndDocument()) {
boolean metCondition;
XMLEvent xmlEvent = xer.nextTag();
if(!xmlEvent.isStartElement()) {
break;
}
// BOUNTY CRITERIA
// Be able to split XML file into n parts with x split elements(from
// the dummy XML example staff is the split element).
StartElement breakStartElement = xmlEvent.asStartElement();
List<XMLEvent> cachedXMLEvents = new ArrayList<XMLEvent>();
// BOUNTY CRITERIA
// I'd like to be able to specify condition that must be in the
// split element i.e. I want only staff which have nickname, I want
// to discard those without nicknames. But be able to also split
// without conditions while running split without conditions.
if(null == condition) {
cachedXMLEvents.add(breakStartElement);
metCondition = true;
} else {
cachedXMLEvents.add(breakStartElement);
xmlEvent = xer.nextEvent();
metCondition = false;
while(!(xmlEvent.isEndElement() && xmlEvent.asEndElement().getName().equals(breakStartElement.getName()))) {
cachedXMLEvents.add(xmlEvent);
if(xmlEvent.isStartElement() && xmlEvent.asStartElement().getName().getLocalPart().equals(condition)) {
metCondition = true;
break;
}
xmlEvent = xer.nextEvent();
}
}
if(metCondition) {
// Create a file for the fragment, the name is derived from the value of the id attribute
FileWriter fileWriter = new FileWriter("src/forum7408938/" + breakStartElement.getAttributeByName(new QName("id")).getValue() + ".xml");
// A StAX XMLEventWriter will be used to write the XML fragment
XMLEventWriter xew = xof.createXMLEventWriter(fileWriter);
xew.add(startDocument);
// BOUNTY CRITERIA
// The content of the split files should be wrapped in the
// root element from the original file(like in the dummy example
// company)
xew.add(rootStartElement);
// Write the XMLEvents that were cached while we were
// checking the fragment to see if it matched our criteria.
for(XMLEvent cachedEvent : cachedXMLEvents) {
xew.add(cachedEvent);
}
// Write the XMLEvents that we still need to parse from this
// fragment
xmlEvent = xer.nextEvent();
while(xer.hasNext() && !(xmlEvent.isEndElement() && xmlEvent.asEndElement().getName().equals(breakStartElement.getName()))) {
xew.add(xmlEvent);
xmlEvent = xer.nextEvent();
}
xew.add(xmlEvent);
// Close everything we opened
xew.add(xef.createEndElement(rootStartElement.getName(), null));
xew.add(endDocument);
fileWriter.close();
}
}
}
}
@Jon Skeet is spot on as usual with his advice. @Blaise Doughan gave you a very basic picture of using StAX (which would be my preferred choice, although you can do basically the same thing with SAX). You seem to be looking for something more explicit, so here's some pseudocode to get you started (based on StAX):
find first "staff" StartElement
set a flag indicating you are in a "staff" element and start tracking the depth (StartElement is +1, EndElement is -1)
now, process the "staff" sub-elements, grab any of the data you care about and put it in a file (or wherever)
keep processing until your depth reaches 0 (when you find the matching "staff" EndElement)
unset the flag indicating you are in a "staff" element
search for the next "staff" StartElement
if found, go to 2. and repeat
if not found, document is complete
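The depth-tracking steps above can be sketched with the built-in StAX cursor API; countStaff is an invented name, and a real splitter would write the events out instead of merely counting complete elements:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class DepthTrackDemo {

    // Count complete <staff> elements by tracking nesting depth, as in the
    // pseudocode: +1 per StartElement, -1 per EndElement, done when depth is 0.
    static int countStaff(String xml) throws XMLStreamException {
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        int depth = 0, count = 0;
        boolean inStaff = false;
        while (r.hasNext()) {
            int ev = r.next();
            if (ev == XMLStreamConstants.START_ELEMENT) {
                if (!inStaff && "staff".equals(r.getLocalName())) {
                    inStaff = true;      // step 1/2: found a "staff" StartElement
                    depth = 0;
                }
                if (inStaff) depth++;
            } else if (ev == XMLStreamConstants.END_ELEMENT && inStaff) {
                if (--depth == 0) {      // step 4: matching "staff" EndElement
                    inStaff = false;     // step 5: unset the flag, look for next
                    count++;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<company><staff id='1'><nickname>mk</nickname></staff>"
                + "<staff id='2'/></company>";
        System.out.println(countStaff(xml)); // prints 2
    }
}
```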
EDIT:
wow, i have to say i'm amazed at the number of people willing to do someone else's work for them. i didn't realize SO was basically a free version of rent-a-coder.
@Gandalf StormCrow:
Let me divide your problem into three separate issues:
i) Reading the XML and simultaneously splitting it in the best possible way
ii) Checking the condition in each split file
iii) If the condition is met, processing that split file
For i), there are of course multiple solutions: SAX, StAX and other parsers, or, as you mentioned, simply reading with plain Java IO operations and searching for tags. I believe SAX, StAX or plain Java IO will all do; I have taken your example as the base for my solution.
For ii), you have used the contains() method to check for the existence of nickname. This does not seem the best way: what if your conditions are more complex, e.g. nickname must be present with length > 5, or salary must be numeric?
I would use the Java XML validation framework for this, which makes use of an XML schema. Note that we can cache the Schema object in memory so it is reused again and again; this validation framework is pretty fast.
For iii), you may want to use the Java concurrency APIs to submit async tasks (ExecutorService) to achieve parallel execution for better performance.
Considering the above points, one possible solution is:
You can create a company.xsd file like this:
<?xml version="1.0" encoding="UTF-8"?>
<schema xmlns="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.example.org/NewXMLSchema"
xmlns:tns="http://www.example.org/NewXMLSchema"
elementFormDefault="unqualified">
<element name="company">
<complexType>
<sequence>
<element name="staff" type="tns:stafftype"/>
</sequence>
</complexType>
</element>
<complexType name="stafftype">
<sequence>
<element name="firstname" type="string" minOccurs="0" />
<element name="lastname" type="string" minOccurs="0" />
<element name="nickname" type="string" minOccurs="1" />
<element name="salary" type="int" minOccurs="0" />
</sequence>
</complexType>
</schema>
Then your Java code would look like this:
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;
public class testXML {
// Lookup a factory for the W3C XML Schema language
static SchemaFactory factory = SchemaFactory
.newInstance("http://www.w3.org/2001/XMLSchema");
// Compile the schema.
static File schemaLocation = new File("company.xsd");
static Schema schema = null;
static {
try {
schema = factory.newSchema(schemaLocation);
} catch (SAXException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
private final ExecutorService pool = Executors.newFixedThreadPool(20);
boolean validate(StringBuffer splitBuffer) {
boolean isValid = false;
Validator validator = schema.newValidator();
try {
validator.validate(new StreamSource(new ByteArrayInputStream(
splitBuffer.toString().getBytes())));
isValid = true;
} catch (SAXException ex) {
System.out.println(ex.getMessage());
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return isValid;
}
void split(BufferedReader br, String rootElementName,
String splitElementName) {
StringBuffer splitBuffer = null;
String line = null;
String startRootElement = "<" + rootElementName + ">";
String endRootElement = "</" + rootElementName + ">";
String startSplitElement = "<" + splitElementName + ">";
String endSplitElement = "</" + splitElementName + ">";
String xmlDeclaration = "<?xml version=\"1.0\"";
boolean startFlag = false, endflag = false;
try {
while ((line = br.readLine()) != null) {
if (line.contains(xmlDeclaration)
|| line.contains(startRootElement)
|| line.contains(endRootElement)) {
continue;
}
if (line.contains(startSplitElement)) {
startFlag = true;
endflag = false;
splitBuffer = new StringBuffer(startRootElement);
splitBuffer.append(line);
} else if (line.contains(endSplitElement)) {
endflag = true;
startFlag = false;
splitBuffer.append(line);
splitBuffer.append(endRootElement);
} else if (startFlag) {
splitBuffer.append(line);
}
if (endflag) {
//process splitBuffer
boolean result = validate(splitBuffer);
if (result) {
//send it to a thread for processing further
//it is async so that main thread can continue for next
pool.submit(new ProcessingHandler(splitBuffer));
}
}
}
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
class ProcessingHandler implements Runnable {
String splitXML = null;
ProcessingHandler(StringBuffer splitXMLBuffer) {
this.splitXML = splitXMLBuffer.toString();
}
@Override
public void run() {
// do like writing to a file etc.
}
}
Have a look at this. It is a slightly reworked sample from xmlpull.org:
http://www.xmlpull.org/v1/download/unpacked/doc/quick_intro.html
The following should do all you need unless you have nested splitting tags like:
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<company>
<staff id="1">
<firstname>yong</firstname>
<lastname>mook kim</lastname>
<nickname>mkyong</nickname>
<salary>100000</salary>
<other>
<staff>
...
</staff>
</other>
</staff>
</company>
To run it in pass-through mode, simply pass null as the splitting tag.
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import org.apache.commons.io.FileUtils;
import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserException;
import org.xmlpull.v1.XmlPullParserFactory;
public class XppSample {
private String rootTag;
private String splitTag;
private String requiredTag;
private int flushThreshold;
private String fileName;
private String rootTagEnd;
private boolean hasRequiredTag = false;
private int flushCount = 0;
private int fileNo = 0;
private String header;
private XmlPullParser xpp;
private StringBuilder nodeBuf = new StringBuilder();
private StringBuilder fileBuf = new StringBuilder();
public XppSample(String fileName, String rootTag, String splitTag, String requiredTag, int flushThreshold) throws XmlPullParserException, FileNotFoundException {
this.rootTag = rootTag;
rootTagEnd = "</" + rootTag + ">";
this.splitTag = splitTag;
this.requiredTag = requiredTag;
this.flushThreshold = flushThreshold;
this.fileName = fileName;
XmlPullParserFactory factory = XmlPullParserFactory.newInstance(System.getProperty(XmlPullParserFactory.PROPERTY_NAME), null);
factory.setNamespaceAware(true);
xpp = factory.newPullParser();
xpp.setInput(new FileReader(fileName));
}
public void processDocument() throws XmlPullParserException, IOException {
int eventType = xpp.getEventType();
do {
if(eventType == XmlPullParser.START_TAG) {
processStartElement(xpp);
} else if(eventType == XmlPullParser.END_TAG) {
processEndElement(xpp);
} else if(eventType == XmlPullParser.TEXT) {
processText(xpp);
}
eventType = xpp.next();
} while (eventType != XmlPullParser.END_DOCUMENT);
saveFile();
}
public void processStartElement(XmlPullParser xpp) {
int holderForStartAndLength[] = new int[2];
String name = xpp.getName();
char ch[] = xpp.getTextCharacters(holderForStartAndLength);
int start = holderForStartAndLength[0];
int length = holderForStartAndLength[1];
if(name.equals(rootTag)) {
int pos = start + length;
header = new String(ch, 0, pos);
} else {
if(requiredTag==null || name.equals(requiredTag)) {
hasRequiredTag = true;
}
nodeBuf.append(xpp.getText());
}
}
public void flushBuffer() throws IOException {
if(hasRequiredTag) {
fileBuf.append(nodeBuf);
if(((++flushCount)%flushThreshold)==0) {
saveFile();
}
}
nodeBuf = new StringBuilder();
hasRequiredTag = false;
}
public void saveFile() throws IOException {
if(fileBuf.length()>0) {
String splitFile = header + fileBuf.toString() + rootTagEnd;
FileUtils.writeStringToFile(new File((fileNo++) + "_" + fileName), splitFile);
fileBuf = new StringBuilder();
}
}
public void processEndElement (XmlPullParser xpp) throws IOException {
String name = xpp.getName();
if(name.equals(rootTag)) {
flushBuffer();
} else {
nodeBuf.append(xpp.getText());
if(name.equals(splitTag)) {
flushBuffer();
}
}
}
public void processText (XmlPullParser xpp) throws XmlPullParserException {
int holderForStartAndLength[] = new int[2];
char ch[] = xpp.getTextCharacters(holderForStartAndLength);
int start = holderForStartAndLength[0];
int length = holderForStartAndLength[1];
String content = new String(ch, start, length);
nodeBuf.append(content);
}
public static void main (String args[]) throws XmlPullParserException, IOException {
//XppSample app = new XppSample("input.xml", "company", "staff", "nickname", 3);
XppSample app = new XppSample("input.xml", "company", "staff", null, 3);
app.processDocument();
}
}
Normally I would suggest using StAX, but it is unclear to me how 'stateful' your real XML is. If it is simple, use SAX for ultimate performance; if not so simple, use StAX. So you need to:
read bytes from disk
convert them to characters
parse the XML
determine whether to keep XML or throw away (skip out subtree)
write XML
convert characters to bytes
write to disk
Now, it might seem like steps 3-5 are the most resource-intensive, but I would rate them as
Most: 1 + 7
Middle: 2 + 6
Least: 3 + 4 + 5
As operations 1 and 7 are somewhat separate from the rest, you should do them asynchronously; at the least, creating multiple small files is best done in other threads, if you are familiar with multi-threading. For increased performance, you might also look into the NIO APIs in Java.
Now for steps 2 + 3 and 5 + 6 you can go a long way with FasterXML; it does a lot of what you are looking for, like triggering JVM HotSpot optimization in the right places, and from a quick look through the code it may even support async reading/writing.
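As a minimal sketch of doing operations 1 and 7 asynchronously, a single-writer ExecutorService keeps disk writes off the parsing thread (the class name AsyncSplitWriter is illustrative, not part of any library):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncSplitWriter implements AutoCloseable {
    // A single writer thread keeps disk access sequential
    // while the parsing thread continues with the next fragment.
    private final ExecutorService writer = Executors.newSingleThreadExecutor();

    public void submit(Path target, String xmlFragment) {
        writer.submit(() -> {
            try {
                Files.write(target, xmlFragment.getBytes(StandardCharsets.UTF_8));
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
    }

    @Override
    public void close() throws InterruptedException {
        // Drain any pending writes before shutting down.
        writer.shutdown();
        writer.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

The parsing thread calls submit(path, fragment) and moves on; close() drains the queue at shutdown.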
So then we are left with step 5, and depending on your logic, you should either
a. use object binding, then decide what to do, or
b. write the XML anyway, hoping for the best, and throw it away if no 'staff' element is present.
Whatever you do, object reuse is sensible. Note that both alternatives (obviously) require the same amount of parsing (skip out of the subtree ASAP), and for alternative b, writing a little extra XML is actually not so bad performance-wise; ideally make sure your char buffers are larger than one unit.
Alternative b is the easiest to implement: simply copy the 'XML event' from your reader to your writer. Example for StAX:
private static void copyEvent(int event, XMLStreamReader reader, XMLStreamWriter writer) throws XMLStreamException {
if (event == XMLStreamConstants.START_ELEMENT) {
String localName = reader.getLocalName();
String namespace = reader.getNamespaceURI();
// TODO check this stuff again before setting in production
if (namespace != null) {
if (writer.getPrefix(namespace) != null) {
writer.writeStartElement(namespace, localName);
} else {
writer.writeStartElement(reader.getPrefix(), localName, namespace);
}
} else {
writer.writeStartElement(localName);
}
// first: namespace definition attributes
if(reader.getNamespaceCount() > 0) {
int namespaces = reader.getNamespaceCount();
for(int i = 0; i < namespaces; i++) {
String namespaceURI = reader.getNamespaceURI(i);
if(writer.getPrefix(namespaceURI) == null) {
String namespacePrefix = reader.getNamespacePrefix(i);
if(namespacePrefix == null) {
writer.writeDefaultNamespace(namespaceURI);
} else {
writer.writeNamespace(namespacePrefix, namespaceURI);
}
}
}
}
int attributes = reader.getAttributeCount();
// then write the rest of the attributes
for (int i = 0; i < attributes; i++) {
String attributeNamespace = reader.getAttributeNamespace(i);
if (attributeNamespace != null && attributeNamespace.length() != 0) {
writer.writeAttribute(attributeNamespace, reader.getAttributeLocalName(i), reader.getAttributeValue(i));
} else {
writer.writeAttribute(reader.getAttributeLocalName(i), reader.getAttributeValue(i));
}
}
} else if (event == XMLStreamConstants.END_ELEMENT) {
writer.writeEndElement();
} else if (event == XMLStreamConstants.CDATA) {
String array = reader.getText();
writer.writeCData(array);
} else if (event == XMLStreamConstants.COMMENT) {
String array = reader.getText();
writer.writeComment(array);
} else if (event == XMLStreamConstants.CHARACTERS) {
String array = reader.getText();
if (array.length() > 0 && !reader.isWhiteSpace()) {
writer.writeCharacters(array);
}
} else if (event == XMLStreamConstants.START_DOCUMENT) {
writer.writeStartDocument();
} else if (event == XMLStreamConstants.END_DOCUMENT) {
writer.writeEndDocument();
}
}
And for a subtree,
private static void copySubTree(XMLStreamReader reader, XMLStreamWriter writer) throws XMLStreamException {
reader.require(XMLStreamConstants.START_ELEMENT, null, null);
copyEvent(XMLStreamConstants.START_ELEMENT, reader, writer);
int level = 1;
do {
int event = reader.next();
if(event == XMLStreamConstants.START_ELEMENT) {
level++;
} else if(event == XMLStreamConstants.END_ELEMENT) {
level--;
}
copyEvent(event, reader, writer);
} while(level > 0);
}
From this you can probably deduce how to skip out to a certain level. In general, for stateful StAX parsing, use the pattern:
private static void parseSubTree(XMLStreamReader reader) throws XMLStreamException {
int level = 1;
do {
int event = reader.next();
if(event == XMLStreamConstants.START_ELEMENT) {
level++;
// do stateful stuff here
// for child logic:
if(reader.getLocalName().equals("Whatever")) {
parseSubTreeForWhatever(reader);
level --; // read from level 1 to 0 in submethod.
}
// alternatively, faster
if(level == 4) {
parseSubTreeForWhateverAtRelativeLevel4(reader);
level --; // read from level 1 to 0 in submethod.
}
} else if(event == XMLStreamConstants.END_ELEMENT) {
level--;
// do stateful stuff here, too
}
} while(level > 0);
}
where at the start of the document you read until the first start element and break (add the writer + copy for your use, of course, as above).
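That initial read-until-the-first-start-element step might look like this (a minimal sketch; the helper name toFirstElement is an assumption, not a standard API):

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class StaxSkipToRoot {
    // Advances the reader until it is positioned on the root start element.
    static XMLStreamReader toFirstElement(XMLStreamReader reader) throws XMLStreamException {
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                break;
            }
        }
        return reader;
    }

    public static void main(String[] args) throws XMLStreamException {
        XMLStreamReader r = XMLInputFactory.newInstance().createXMLStreamReader(
                new StringReader("<?xml version=\"1.0\"?><company><staff/></company>"));
        toFirstElement(r);
        System.out.println(r.getLocalName()); // company
    }
}
```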
Note that if you do object binding, these methods should be placed in that object, and likewise for the serialization methods.
I am pretty sure you will get tens of MB/s on a modern system, and that should be sufficient. An issue to investigate further is using multiple cores for the actual input: if you know the encoding subset for a fact, like plain UTF-8 or ISO-8859, then random access might be possible -> send chunks to different cores.
Have fun, and tell us how it went ;)
Edit: Almost forgot: if for some reason you are the one creating the files in the first place, or you will be reading them after splitting, you will see HUGE performance gains using XML binarization; there exist XML Schema generators which in turn can feed code generators. (Some XSLT transform libs use code generation too.) And run the JVM with the -server option.
How to make it faster:
Use asynchronous writes, possibly in parallel; this might boost your performance if you have RAID-X disks
Write to an SSD instead of HDD
My suggestion is that SAX, StAX, and DOM are not the ideal XML parsers for your problem; the perfect solution is called VTD-XML. There is an article on this subject explaining what DOM, SAX, and StAX all got wrong... The code below is the shortest you have to write, yet performs 10x faster than DOM or SAX. http://www.javaworld.com/javaworld/jw-07-2006/jw-0724-vtdxml.html
Here is a recent paper entitled Processing XML with Java – A Performance Benchmark: http://recipp.ipp.pt/bitstream/10400.22/1847/1/ART_BrunoOliveira_2013.pdf
import com.ximpleware.*;
import java.io.*;
public class Gandalf {
public static void main(String a[]) throws Exception {
VTDGen vg = new VTDGen();
if (vg.parseFile("c:\\xml\\gandalf.txt", false)){
VTDNav vn=vg.getNav();
AutoPilot ap = new AutoPilot(vn);
ap.selectXPath("/company/staff[nickname]");
int i=-1;
int count=0;
while((i=ap.evalXPath())!=-1){
vn.dumpFragment("c:\\xml\\staff"+count+".xml");
count++;
}
}
}
}
Here is a DOM-based solution. I have tested it with the XML you provided; it needs to be checked against your actual XML files.
Since this is based on a DOM parser, remember that it will require a lot of memory depending on your XML file size, but it is fast.
Algorithm :
Parse the document
Extract the root element name
Get the list of nodes based on the split criteria (using XPath)
For each node, create an empty document with root element name as extracted in step #2
Insert the node in this new document
Check if nodes are to be filtered or not.
If nodes are to be filtered, then check if a specified element is present in the newly created doc.
If the node is not present, don't write the document to a file.
If the nodes are NOT to be filtered at all, skip the check in step 7 and write the document to the file.
This can be run from command prompt as follows
java XMLSplitter xmlFileLocation splitElement filter filterElement
For the XML you mentioned it will be:
java XMLSplitter input.xml staff true nickname
In case you don't want to filter
java XMLSplitter input.xml staff
Here is the complete Java code:
package com.xml.xpath;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.DOMException;
import org.w3c.dom.DOMImplementation;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
public class XMLSplitter {
DocumentBuilder builder = null;
XPath xpath = null;
Transformer transformer = null;
String filterElement;
String splitElement;
String xmlFileLocation;
boolean filter = true;
public static void main(String[] arg) throws Exception{
XMLSplitter xMLSplitter = null;
if(arg.length < 4){
if(arg.length < 2){
System.out.println("Insufficient arguments !!!");
System.out.println("Usage: XMLSplitter xmlFileLocation splitElement filter filterElement ");
return;
}else{
System.out.println("Filter is off...");
xMLSplitter = new XMLSplitter();
xMLSplitter.init(arg[0],arg[1],false,null);
}
}else{
xMLSplitter = new XMLSplitter();
xMLSplitter.init(arg[0],arg[1],Boolean.parseBoolean(arg[2]),arg[3]);
}
xMLSplitter.start();
}
public void init(String xmlFileLocation, String splitElement, boolean filter, String filterElement )
throws ParserConfigurationException, TransformerConfigurationException{
//Initialize the Document builder
System.out.println("Initializing..");
DocumentBuilderFactory domFactory = DocumentBuilderFactory.newInstance();
domFactory.setNamespaceAware(true);
builder = domFactory.newDocumentBuilder();
//Initialize the transformer
TransformerFactory transformerFactory = TransformerFactory.newInstance();
transformer = transformerFactory.newTransformer();
transformer.setOutputProperty(OutputKeys.METHOD, "xml");
transformer.setOutputProperty(OutputKeys.ENCODING,"UTF-8");
transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "4");
transformer.setOutputProperty(OutputKeys.INDENT, "yes");
//Initialize the xpath
XPathFactory factory = XPathFactory.newInstance();
xpath = factory.newXPath();
this.filterElement = filterElement;
this.splitElement = splitElement;
this.xmlFileLocation = xmlFileLocation;
this.filter = filter;
}
public void start() throws Exception{
//Parser the file
System.out.println("Parsing file.");
Document doc = builder.parse(xmlFileLocation);
//Get the root node name
System.out.println("Getting root element.");
XPathExpression rootElementexpr = xpath.compile("/");
Object rootExprResult = rootElementexpr.evaluate(doc, XPathConstants.NODESET);
NodeList rootNode = (NodeList) rootExprResult;
String rootNodeName = rootNode.item(0).getFirstChild().getNodeName();
//Get the list of split elements
XPathExpression expr = xpath.compile("//"+splitElement);
Object result = expr.evaluate(doc, XPathConstants.NODESET);
NodeList nodes = (NodeList) result;
System.out.println("Total number of split nodes "+nodes.getLength());
for (int i = 0; i < nodes.getLength(); i++) {
//Wrap each node inside root of the parent xml doc
Node singleNode = wrappInRootElement(rootNodeName, nodes.item(i));
//Get the XML string of the fragment
String xmlFragment = serializeDocument(singleNode);
//System.out.println(xmlFragment);
//Write the xml fragment in file.
storeInFile(xmlFragment,i);
}
}
private Node wrappInRootElement(String rootNodeName, Node fragmentDoc)
throws XPathExpressionException, ParserConfigurationException, DOMException,
SAXException, IOException, TransformerException{
//Create empty doc with just root node
DOMImplementation domImplementation = builder.getDOMImplementation();
Document doc = domImplementation.createDocument(null,null,null);
Element theDoc = doc.createElement(rootNodeName);
doc.appendChild(theDoc);
//Insert the fragment inside the root node
InputSource inStream = new InputSource();
String xmlString = serializeDocument(fragmentDoc);
inStream.setCharacterStream(new StringReader(xmlString));
Document fr = builder.parse(inStream);
theDoc.appendChild(doc.importNode(fr.getFirstChild(),true));
return doc;
}
private String serializeDocument(Node doc) throws TransformerException, XPathExpressionException{
if(!serializeThisNode(doc)){
return null;
}
DOMSource domSource = new DOMSource(doc);
StringWriter stringWriter = new StringWriter();
StreamResult streamResult = new StreamResult(stringWriter);
transformer.transform(domSource, streamResult);
String xml = stringWriter.toString();
return xml;
}
//Check whether node is to be stored in file or rejected based on input
private boolean serializeThisNode(Node doc) throws XPathExpressionException{
if(!filter){
return true;
}
XPathExpression filterElementexpr = xpath.compile("//"+filterElement);
Object result = filterElementexpr.evaluate(doc, XPathConstants.NODESET);
NodeList nodes = (NodeList) result;
return nodes.item(0) != null;
}
private void storeInFile(String content, int fileIndex) throws IOException{
if(content == null || content.length() == 0){
return;
}
String fileName = splitElement+fileIndex+".xml";
File file = new File(fileName);
if(file.exists()){
System.out.println("The file "+fileName+" already exists! Cannot create a file with the same name.");
return;
}
FileWriter fileWriter = new FileWriter(file);
fileWriter.write(content);
fileWriter.close();
System.out.println("Generated file "+fileName);
}
}
Let me know if this works for you or if you need any other help with this code.

How to generate xpath from xsd?

How can I generate XPath from an XSD? An XSD validates an XML document. I am working on a project where I generate a sample XML from the XSD using Java and then generate XPath expressions from that XML. If there is any way to generate XPath directly from the XSD, please let me know.
This might be of use:
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.Stack;
import javax.xml.parsers.*;
import org.xml.sax.*;
import org.xml.sax.helpers.DefaultHandler;
/**
* SAX handler that creates and prints XPath expressions for each element encountered.
*
* The algorithm is not infallible, if elements appear on different levels in the hierarchy.
* Something like the following is an example:
* - <elemA/>
* - <elemA/>
* - <elemB/>
* - <elemA/>
* - <elemC>
* - <elemB/>
* - </elemC>
*
* will report
*
* //elemA[0]
* //elemA[1]
* //elemB[0]
* //elemA[2]
* //elemC[0]
* //elemC[0]/elemB[1] (this is wrong: should be //elemC[0]/elemB[0] )
*
* It also ignores namespaces, and thus treats <foo:elemA> the same as <bar:elemA>.
*/
public class SAXCreateXPath extends DefaultHandler {
// map of all encountered tags and their running count
private Map<String, Integer> tagCount;
// keep track of the succession of elements
private Stack<String> tags;
// set to the tag name of the recently closed tag
String lastClosedTag;
/**
* Construct the XPath expression
*/
private String getCurrentXPath() {
String str = "//";
boolean first = true;
for (String tag : tags) {
if (first)
str = str + tag;
else
str = str + "/" + tag;
str += "["+tagCount.get(tag)+"]";
first = false;
}
return str;
}
@Override
public void startDocument() throws SAXException {
tags = new Stack<String>();
tagCount = new HashMap<String, Integer>();
}
@Override
public void startElement (String namespaceURI, String localName, String qName, Attributes atts)
throws SAXException
{
boolean isRepeatElement = false;
if (tagCount.get(localName) == null) {
tagCount.put(localName, 0);
} else {
tagCount.put(localName, 1 + tagCount.get(localName));
}
if (lastClosedTag != null) {
// an element was recently closed ...
if (lastClosedTag.equals(localName)) {
// ... and it's the same as the current one
isRepeatElement = true;
} else {
// ... but it's different from the current one, so discard it
tags.pop();
}
}
// if it's not the same element, add the new element and zero count to list
if (! isRepeatElement) {
tags.push(localName);
}
System.out.println(getCurrentXPath());
lastClosedTag = null;
}
@Override
public void endElement (String uri, String localName, String qName) throws SAXException {
// if two tags are closed in succession (without an intermediate opening tag),
// then the information about the deeper nested one is discarded
if (lastClosedTag != null) {
tags.pop();
}
lastClosedTag = localName;
}
public static void main (String[] args) throws Exception {
if (args.length < 1) {
System.err.println("Usage: SAXCreateXPath <file.xml>");
System.exit(1);
}
// Create a JAXP SAXParserFactory and configure it
SAXParserFactory spf = SAXParserFactory.newInstance();
spf.setNamespaceAware(true);
spf.setValidating(false);
// Create a JAXP SAXParser
SAXParser saxParser = spf.newSAXParser();
// Get the encapsulated SAX XMLReader
XMLReader xmlReader = saxParser.getXMLReader();
// Set the ContentHandler of the XMLReader
xmlReader.setContentHandler(new SAXCreateXPath());
String filename = args[0];
String path = new File(filename).getAbsolutePath();
if (File.separatorChar != '/') {
path = path.replace(File.separatorChar, '/');
}
if (!path.startsWith("/")) {
path = "/" + path;
}
// Tell the XMLReader to parse the XML document
xmlReader.parse("file:"+path);
}
}
I've been working on a little library to do just this, though for larger and more complex schemas, there are issues you will need to address on a case-by-case basis (e.g., filters for certain nodes). See https://stackoverflow.com/a/45020739/3096687 for a description of the solution.
There are a number of problems with such tools:
The generated XPath expression is rarely a good one. No such tool will produce meaningful predicates beyond position information.
There is no tool (to my knowledge) that would generate an XPath expression that selects exactly a set of selected nodes.
Apart from this, such tools used without learning XPath are really harmful -- they encourage ignorance.
I would recommend seriously learning XPath using books and other resources such as the following:
https://stackoverflow.com/questions/339930/any-good-xslt-tutorial-book-blog-site-online/341589#341589
See the following answer for more information:
Is there an online tester for xPath selectors?
