Background
I have a situation where I can get data either in the form of an XML file or as Excel/CSV files. When the data comes in a non-XML format it is divided into several different files/tables, representing different subsections of the XML. The end goal is to validate the data and generate a valid XML file using an existing schema, regardless of the format of the input data.
When receiving an XML file the idea is to unmarshal and validate it. For simple errors automatic fixes will be applied, and in the end a new XML file will be marshalled from the JAXB classes.
Question
In order to generalize as much of the solution as possible, my idea was to generate a JAXB representation of the non-XML data too, and then produce the final XML file from those classes. I have been trying to find a good tutorial or introduction to converting non-XML data to a JAXB representation, but I haven't really been able to find anything useful, which makes me wonder: is this a really bad approach? Are there better suggestions for how to solve this problem? In the majority of cases the files are likely to be non-XML, so I am willing to throw out the current approach if anyone has a better solution that uses some other technology.
I've worked before with univocity parsers. They work well and are simple to use for converting CSV to Java objects, which you can then serialize using JAXB as well.
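For what it's worth, a rough sketch of that CSV-to-JAXB route might look like the following (assuming univocity-parsers 2.x; the Product bean, its fields and the file names are hypothetical placeholders):

import java.io.File;
import java.io.FileReader;
import java.util.List;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;
import com.univocity.parsers.annotations.Parsed;
import com.univocity.parsers.common.processor.BeanListProcessor;
import com.univocity.parsers.csv.CsvParser;
import com.univocity.parsers.csv.CsvParserSettings;

// Hypothetical bean annotated for both univocity (CSV columns) and JAXB (XML elements).
@XmlRootElement(name = "product")
@XmlAccessorType(XmlAccessType.FIELD)
public class Product {
    @Parsed(field = "id")
    private String id;
    @Parsed(field = "title")
    private String title;
    // getters/setters omitted
}

// Read the CSV rows into Product beans...
BeanListProcessor<Product> processor = new BeanListProcessor<>(Product.class);
CsvParserSettings settings = new CsvParserSettings();
settings.setHeaderExtractionEnabled(true);
settings.setProcessor(processor);
new CsvParser(settings).parse(new FileReader("products.csv"));
List<Product> products = processor.getBeans();

// ...and marshal them through the same JAXB classes used for the XML input path.
Marshaller marshaller = JAXBContext.newInstance(Product.class).createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
marshaller.marshal(products.get(0), new File("product.xml"));

The nice part of this setup is that the non-XML path ends up in exactly the same JAXB classes as the XML path, so validation and marshalling to the final file are shared.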
In the thread What’s your favorite “programmer ignorance” pet peeve?, the following answer appears, with a large number of upvotes:
Programmers who build XML using string concatenation.
My question is, why is building XML via string concatenation (such as a StringBuilder in C#) bad?
I've done this several times in the past, as it's sometimes the quickest way for me to get from point A to point B when it comes to the data structures/objects I'm working with. So far, I have come up with a few reasons why this isn't the greatest approach, but is there something I'm overlooking? Why should this be avoided?
Probably the biggest reason I can think of is you need to escape your strings manually, and most new programmers (and even some experienced programmers) will forget this. It will work great for them when they test it, but then "randomly" their apps will fail when someone throws an & symbol in their input somewhere. Ok, I'll buy this, but it's really easy to prevent the problem (SecurityElement.Escape to name one).
When I do this, I usually omit the XML declaration (i.e. <?xml version="1.0"?>). Is this harmful?
Performance penalties? If you stick with proper string concatenation (i.e. StringBuilder), is this anything to be concerned about? Presumably, a class like XmlWriter will also need to do a bit of string manipulation...
There are more elegant ways of generating XML, such as using XmlSerializer to automatically serialize/deserialize your classes. Ok sure, I agree. C# has a ton of useful classes for this, but sometimes I don't want to make a class for something really quick, like writing out a log file or something. Is this just me being lazy? If I am doing something "real", this is my preferred approach for dealing with XML.
You can end up with invalid XML, but you will not find out until you parse it again - and then it is too late. I learned this the hard way.
I think readability, flexibility and scalability are important factors. Consider the following piece of Linq-to-Xml:
XDocument doc = new XDocument(new XDeclaration("1.0", "UTF-8", "yes"),
    new XElement("products", from p in collection
        select new XElement("product",
            new XAttribute("guid", p.ProductId),
            new XAttribute("title", p.Title),
            new XAttribute("version", p.Version))));
Can you find an easier way to do this? I can output it to a browser, save it to a document, add attributes/elements in seconds and so on ... just by adding a couple of lines of code. I can do practically everything with it without much effort.
Actually, I find the biggest problem with string concatenation is not getting it right the first time, but rather keeping it right during code maintenance. All too often, a perfectly-written piece of XML using string concat is updated to meet a new requirement, and string concat code is just too brittle.
As long as the alternatives were XML serialization and XmlDocument, I could see the simplicity argument in favor of string concat. However, ever since XDocument et al., there is just no reason to use string concat to build XML anymore. See Sander's answer for the best way to write XML.
Another benefit of XDocument is that XML is actually a rather complex standard, and most programmers simply do not understand it. I'm currently dealing with a person who sends me "XML", complete with unquoted attribute values, missing end tags, improper case sensitivity, and incorrect escaping. But because IE accepts it (as HTML), it must be right! Sigh... Anyway, the point is that string concatenation lets you write anything, but XDocument will force standards-complying XML.
I wrote a blog entry back in 2006 moaning about XML generated by string concatenation; the simple point is that if an XML document fails to validate (encoding issues, namespace issues and so on) it is not XML and cannot be treated as such.
I have seen multiple problems with XML documents that can be directly attributed to generating XML documents by hand using string concatenation, and nearly always around the correct use of encoding.
Ask yourself this: what character set am I currently encoding my document with ('ascii7', 'ibm850', 'iso-8859-1' etc.)? What will happen if I write a UTF-16 string value into an XML document that has been manually declared as 'ibm850'?
Given the richness of the XML support in .NET with XmlDocument and now especially with XDocument, there would have to be a seriously compelling argument for not using these libraries over basic string concatenation IMHO.
I think the problem is that you aren't looking at the XML file as logical data storage, but as a simple text file where you write strings.
It's obvious that those libraries do string manipulation for you, but reading/writing XML should be something similar to saving data into a database or something logically similar.
If you need trivial XML then it's fine. It's just that the maintainability of string concatenation breaks down when the XML becomes larger or more complex. You pay either at development time or at maintenance time. The choice is always yours - but history suggests that maintenance is always more costly, and thus anything that makes it easier is generally worthwhile.
You need to escape your strings manually. That's right. But is that all? Sure, you can put the XML spec on your desk and double-check every time that you've considered every possible corner-case when you're building an XML string. Or you can use a library that encapsulates this knowledge...
Another point against using string concatenation is that the hierarchical structure of the data is not clear when reading the code. In Sander's LINQ to XML example, for instance, it's clear to what parent element the "product" element belongs, to what element the "title" attribute applies, etc.
As you said, it's just awkward to build XML correctly using string concatenation, especially now that you have LINQ to XML, which allows for simple construction of an XML graph and will get namespaces, etc. correct.
Obviously context and how it is being used matter; for instance, in the logging example string.Format can be perfectly acceptable.
But too often people ignore these alternatives when working with complex XML graphs and just use a StringBuilder.
The main reason is DRY: Don't Repeat Yourself.
If you use string concat to do XML, you will constantly be repeating the functions that keep your string as a valid XML document. All the validation would be repeated, or not present. Better to rely on a class that is written with XML validation included.
I've always found creating XML to be more of a chore than reading it in. I've never gotten the hang of serialization - it never seems to work for my classes - and instead of spending a week trying to get it to work, I can create an XML file using strings in a mere fraction of the time and write it out.
And then I load it in using an XMLReader. If the XML file doesn't read as valid, I go back, find the problem within my saving routines and correct it. But I refuse to perform mission-critical work until I know my save/load tools are solid.
I guess it comes down to programmer preference. There are different ways of doing things, for sure, but for developing/testing/researching/debugging this is fine. However, I would also clean up my code and comment it before handing it off to another programmer.
Because regardless of whether you're using a StringBuilder or XmlNodes to save/read your file, if it is all a gibberish mess, nobody is going to understand how it works.
Maybe it won't ever happen, but what if your environment switches to XML 2.0 someday? Your string-concatenated XML may or may not be valid in the new environment, but XDocument will almost certainly do the right thing.
Okay, that's a reach, but especially if your not-quite-standards-compliant XML doesn't specify an XML version declaration... just saying.
I have two XML files, say abc.xml and 123.xml, which are almost similar; I mean they have the same content, but the second one, i.e. 123.xml, has more content than the first.
I want to read both files using Java and compare whether the content present in abc.xml for each tag is the same as that in 123.xml - something like object comparison.
Please suggest how to read the XML files using Java and start comparing them.
Thanks.
If you just want to compare, then use this:
import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.junit.Assert;
import org.w3c.dom.Document;

// Ignore comments and (where possible) whitespace so only meaningful content is compared.
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setNamespaceAware(true);
dbf.setCoalescing(true);
dbf.setIgnoringElementContentWhitespace(true);
dbf.setIgnoringComments(true);
DocumentBuilder db = dbf.newDocumentBuilder();
Document doc1 = db.parse(new File("file1.xml"));
doc1.normalizeDocument();
Document doc2 = db.parse(new File("file2.xml"));
doc2.normalizeDocument();
Assert.assertTrue(doc1.isEqualNode(doc2));
Otherwise, see this:
http://xmlunit.sourceforge.net/
I would go for XMLUnit.
The features it provides:
The differences between two pieces of XML
The outcome of transforming a piece of XML using XSLT
The evaluation of an XPath expression on a piece of XML
The validity of a piece of XML
Individual nodes in a piece of XML that are exposed by DOM Traversal
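A minimal comparison sketch with XMLUnit 1.x could look like this (the exact setup depends on your XMLUnit version, so treat it as an assumption):

import java.io.FileReader;
import org.custommonkey.xmlunit.Diff;
import org.custommonkey.xmlunit.XMLUnit;

XMLUnit.setIgnoreWhitespace(true);
Diff diff = new Diff(new FileReader("abc.xml"), new FileReader("123.xml"));
System.out.println("Similar? " + diff.similar());
System.out.println("Identical? " + diff.identical());
System.out.println(diff);   // human-readable description of the first difference found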
Good Luck!
I would use JAXB to generate Java objects from the XML files and then compare the Java objects. That would make the handling much easier.
In general, if you know that you have two files with identical structure but slightly different and unordered content, you are going to have to "read" the files to compare the contents.
If you have the XML Schema for your XML files then you could use JAXB to create a set of classes that will represent the specific DOM that is defined by your XML schema. The benefit of this approach is that you will not have to parse the XML file through generic functions for elements and attributes but rather through the actual fields that make sense to your problem.
Of course, to be able to detect the presence of the same entry across both files you are going to have to "match" them through some common field (for example, some ID).
To help you with the duplicates discovery process, you could use a relevant data structure from Java's collections, like a Set (or one of its derivatives).
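A rough sketch of that idea (the Catalog/Entry classes and their getEntries()/getId() accessors are hypothetical; xjc will generate whatever your schema actually defines):

import java.io.File;
import java.util.HashSet;
import java.util.Set;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;

Unmarshaller unmarshaller = JAXBContext.newInstance(Catalog.class).createUnmarshaller();
Catalog first = (Catalog) unmarshaller.unmarshal(new File("abc.xml"));
Catalog second = (Catalog) unmarshaller.unmarshal(new File("123.xml"));

// Collect the identifying field of each file's entries into sets...
Set<String> firstIds = new HashSet<>();
for (Entry e : first.getEntries()) firstIds.add(e.getId());
Set<String> secondIds = new HashSet<>();
for (Entry e : second.getEntries()) secondIds.add(e.getId());

// ...so the set difference shows which entries exist only in 123.xml.
secondIds.removeAll(firstIds);
System.out.println("Entries present only in 123.xml: " + secondIds);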
I hope this helps.
Well, if you just want to compare and display the differences, then you can use Guiffy.
It is a good tool. If you want to do the processing in the backend, then you can use a DOM parser: load both files into two DOM objects and compare them attribute by attribute.
The right approach depends on two factors:
(a) how much control do you want over how the comparison is done? For example, do you need to control whether whitespace is significant, whether comments should be ignored, whether namespace prefixes should be ignored, whether redundant namespace declarations should be ignored, whether the XML declaration should be ignored?
(b) what answer do you want? (i) a boolean: same/different, (ii) a list of differences suitable for a human to process, (iii) a list of differences suitable for an application to process.
The two techniques I use are: (a) convert both files to Canonical XML and then compare strings. This gives very little control and only gives a boolean result. (b) compare the two trees using the XPath 2.0 deep-equal() function or the extended Saxon version saxon:deep-equal(). The Saxon version gives more control over how the comparison is done, and a more detailed report of the differences found (for human reading, not for application use).
If you want to write Java code, you could of course implement your own comparison logic - for example you could find an open source implementation of XPath deep-equal, and modify it to meet your requirements. It's only a hundred or so lines of code.
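If you want to try the deep-equal() route without writing that code yourself, a small sketch using Saxon's s9api could look like this (assuming Saxon-HE is on the classpath and the two files sit in the working directory):

import net.sf.saxon.s9api.*;

public class DeepEqualCheck {
    public static void main(String[] args) throws SaxonApiException {
        Processor processor = new Processor(false);   // Saxon-HE
        XQueryCompiler compiler = processor.newXQueryCompiler();
        // deep-equal() compares the two document trees node by node
        XQueryExecutable query = compiler.compile(
                "deep-equal(doc('abc.xml'), doc('123.xml'))");
        XdmValue result = query.load().evaluate();
        System.out.println("Documents are deep-equal: " + result);
    }
}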
It's a bit overkill, but if your XML has a schema, you can convert it into an EMF metamodel and then use EMF Compare to compare the documents.
Is there a way to look up the line number that a given element is at in an XML file via the W3C DOM API?
My use case for this is that we have 30,000+ maps in kml/xml format. I wrote a unit test that iterates over each file found on the hard drive (about 17GB worth) and tests that it is parseable by our application. When it fails I throw an exception that contains the element instance that was considered "invalid". In order for our mapping department (nobody here knows how to program) to easily track down the typo we would like to log the line number of the element that caused the exception.
Can anybody suggest a way to do this? Please note we are using the W3C DOM API included in the Android 1.6 SDK.
I'm not sure whether the Android API is different, but a normal Java application could catch a SAXParseException when parsing and look at the line number.
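Something along these lines, for example (a sketch; the file name is made up):

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.SAXParseException;

try {
    DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
    builder.parse(new File("map.kml"));
} catch (SAXParseException e) {
    // the parser reports exactly where it gave up
    System.err.println("Error at line " + e.getLineNumber()
            + ", column " + e.getColumnNumber() + ": " + e.getMessage());
} catch (Exception e) {
    e.printStackTrace();   // other I/O or configuration problems
}

Note that this only reports the position of well-formedness errors; for application-level checks you would still need your own bookkeeping or schema validation.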
I may be wrong, but the line number shouldn't be relevant to your XML parser/reader as long as the XML structure itself is valid.
You might try to extrapolate the line number programmatically on the assumption that each node/piece of content must be on a distinct line, but it's going to be tricky.
It looks like you're validating your XML files. That is, you're not interested in whether the documents are syntactically correct ("well-formed"), but if they are semantically valid for your application. The right tool for this would be a validating XML parser, coupled with a dedicated XML scheme. See for example this tutorial on XML validation in Java. Validation errors will usually contain detailed error information, including the line number of problematic elements.
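As a sketch of that approach (the schema and document file names below are placeholders):

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXParseException;

SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
Schema schema = factory.newSchema(new File("maps.xsd"));
Validator validator = schema.newValidator();
try {
    validator.validate(new StreamSource(new File("map.kml")));
} catch (SAXParseException e) {
    System.err.println("Invalid at line " + e.getLineNumber() + ": " + e.getMessage());
}

By default the Validator stops at the first error; register an ErrorHandler on it if you want all problems reported in one pass.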
I am looking for a tool, Java code or a class library/API that can generate an XSD from XML files (something like the xsd.exe utility in the .NET Framework SDK).
These tools can provide a good starting point, but they aren't a substitute for thinking through what the actual schema constraints ought to be. You get the opportunity for two kinds of errors: (1) allowing XML that shouldn't be allowed and (2) disallowing XML that should be ok.
As an example, pretend that you want to infer an XSD from a few thousand patient records that include a 'gender' tag (I used to work on medical records software). The tool would likely encounter 'M' and 'F' as values and might deduce that the element is an enumeration. However, other valid (although rarely used) values are B (both), U (unknown), or N (none). These are rare, of course. So, if you used your derived schema as an input validator, it would perform well until a patient with multiple sex organs was admitted to the hospital.
Conversely, to avoid this error, an XSD generator might not add enumerated type restrictions (I can't remember what these are called in schemas), and your application would work well until it encountered an errant record with gender=X.
So, beware. It's best to use these tools only as a starting point. Also, they tend to produce verbose and redundant schemas because they can't figure out patterns as well as humans.
Check Castor; I think it has the functionality you are looking for. They also provide an Ant task that creates XSD schemas from XML files.
PS: I suggest you add more specific tags in the future. For instance, using xml, xsd and java will increase the chance of getting answers.
You can use the xsd-gen-0.2.0-jar-with-dependencies.jar file to convert XML to XSD.
The command for it is "java -jar xsd-gen-VERSION-jar-with-dependencies.jar /path/to/xml.xml > /path/to/my.xsd".
Try the xsd-gen project from Google.
https://code.google.com/p/xsd-gen/