I have a serious problem getting any reasoner up and running.
Even the examples from the documentation (https://jena.apache.org/documentation/inference/) do not work here.
I transferred the example into a unit test, so that the problem is easier to reproduce.
Is reasoning limited to a certain environment, like a special JDK or so on, or am I getting something wrong?
Thanks
Here is the example code (as a Java unit test):
import static org.junit.Assert.assertNotNull;
import java.io.PrintWriter;
import java.util.Iterator;
import org.junit.Before;
import org.junit.Test;
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.rdf.model.Statement;
import com.hp.hpl.jena.rdf.model.StmtIterator;
import com.hp.hpl.jena.reasoner.Derivation;
import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
import com.hp.hpl.jena.reasoner.rulesys.Rule;
import com.hp.hpl.jena.vocabulary.RDFS;
public class ReasonerTest {
String NS = "urn:x-hp-jena:eg/";
// Build a trivial example data set
Model model = ModelFactory.createDefaultModel();
InfModel inf;
Resource A = model.createResource(NS + "A");
Resource B = model.createResource(NS + "B");
Resource C = model.createResource(NS + "C");
Resource D = model.createResource(NS + "D");
Property p = model.createProperty(NS, "p");
Property q = model.createProperty(NS, "q");
@Before
public void init() {
// Some small examples (subProperty)
model.add(p, RDFS.subPropertyOf, q);
model.createResource(NS + "A").addProperty(p, "foo");
String rules = "[rule1: (?a eg:p ?b) (?b eg:p ?c) -> (?a eg:p ?c)]";
GenericRuleReasoner reasoner = new GenericRuleReasoner(Rule.parseRules(rules));
reasoner.setDerivationLogging(true);
inf = ModelFactory.createInfModel(reasoner, model);
// Derivations
A.addProperty(p, B);
B.addProperty(p, C);
C.addProperty(p, D);
}
@Test
public void subProperty() {
Statement statement = A.getProperty(q);
System.out.println("Statement: " + statement);
assertNotNull(statement);
}
@Test
public void derivations() {
String trace = null;
PrintWriter out = new PrintWriter(System.out);
for (StmtIterator i = inf.listStatements(A, p, D); i.hasNext(); ) {
Statement s = i.nextStatement();
System.out.println("Statement is " + s);
for (Iterator id = inf.getDerivation(s); id.hasNext(); ) {
Derivation deriv = (Derivation) id.next();
deriv.printTrace(out, true);
trace += deriv.toString();
}
}
out.flush();
assertNotNull(trace);
}
@Test
public void listStatements() {
StmtIterator stmtIterator = inf.listStatements();
while(stmtIterator.hasNext()) {
System.out.println(stmtIterator.nextStatement());
}
}
}
The prefix eg: isn't what you think it is:
The eg: prefix in the rules doesn't expand to what you think it does. I modified your rules string to
String rules = "[rule1: (?a eg:p ?b) (?b eg:p ?c) -> (?a eg:p ?c)] [rule2: -> (<urn:ex:a> eg:foo <urn:ex:b>)]";
so that rule2 will always insert the triple urn:ex:a eg:foo urn:ex:b into the graph. Then, the output from your tests includes:
[urn:ex:a, urn:x-hp:eg/foo, urn:ex:b]
[urn:x-hp-jena:eg/C, urn:x-hp-jena:eg/p, urn:x-hp-jena:eg/D]
The first line shows the triple that my rule2 inserted, whereas the second uses the prefix you entered by hand. We see that the eg: prefix is short for urn:x-hp:eg/. If you change your NS string accordingly, with String NS = "urn:x-hp:eg/";, then your derivations test will pass.
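As a side note, instead of renaming NS you could bind the eg: prefix to your own namespace before parsing the rules, since the rule parser resolves qnames through PrintUtil's prefix registry. A minimal sketch, assuming PrintUtil.registerPrefix behaves this way in your Jena version:
import com.hp.hpl.jena.util.PrintUtil;
// Bind eg: to the namespace used by the test resources, then parse as before.
PrintUtil.registerPrefix("eg", "urn:x-hp-jena:eg/");
String rules = "[rule1: (?a eg:p ?b) (?b eg:p ?c) -> (?a eg:p ?c)]";
GenericRuleReasoner reasoner = new GenericRuleReasoner(Rule.parseRules(rules));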
You need to ask the right model
The subProperty test fails for two reasons. First, it's checking in the wrong model.
You're checking with A.getProperty(q):
Statement statement = A.getProperty(q);
System.out.println("Statement: " + statement);
assertNotNull(statement);
A is a resource that you created in the model model, not the model inf, so when you ask for A.getProperty(q), it's actually asking model for the statement, and you won't see the inferences in inf. You can use inModel to get A "in inf" so that getProperty looks in the right model:
Statement statement = A.inModel(inf).getProperty(q);
Alternatively, you could ask inf directly whether it contains a triple of the form A q <something>:
inf.contains( A, q, (RDFNode) null );
Or you could enumerate all such statements:
StmtIterator stmts = inf.listStatements( A, q, (RDFNode) null );
assertTrue( stmts.hasNext() );
while ( stmts.hasNext() ) {
System.out.println( "Statement: "+stmts.next() );
}
You need RDFS reasoning too
Even if you're querying the right model, your inference model still needs to do RDFS reasoning as well as apply your custom rule that makes the property p transitive. To do that, we can pull the rules out of an RDFS reasoner, add your rule to a copy of that list, and then create a custom reasoner with the new list of rules:
// Get an RDFS reasoner
GenericRuleReasoner rdfsReasoner = (GenericRuleReasoner) ReasonerRegistry.getRDFSReasoner();
// Steal its rules, and add one of our own, and create a
// reasoner with these rules
List<Rule> customRules = new ArrayList<>( rdfsReasoner.getRules() );
String customRule = "[rule1: (?a eg:p ?b) (?b eg:p ?c) -> (?a eg:p ?c)]";
customRules.add( Rule.parseRule( customRule ));
Reasoner reasoner = new GenericRuleReasoner( customRules );
The complete result
Here's the modified code, all together for easy copying and pasting. All the tests pass.
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.junit.Before;
import org.junit.Test;
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.RDFNode;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.rdf.model.Statement;
import com.hp.hpl.jena.rdf.model.StmtIterator;
import com.hp.hpl.jena.reasoner.Derivation;
import com.hp.hpl.jena.reasoner.Reasoner;
import com.hp.hpl.jena.reasoner.ReasonerRegistry;
import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
import com.hp.hpl.jena.reasoner.rulesys.Rule;
import com.hp.hpl.jena.vocabulary.RDFS;
public class ReasonerTest {
String NS = "urn:x-hp:eg/";
// Build a trivial example data set
Model model = ModelFactory.createDefaultModel();
InfModel inf;
Resource A = model.createResource(NS + "A");
Resource B = model.createResource(NS + "B");
Resource C = model.createResource(NS + "C");
Resource D = model.createResource(NS + "D");
Property p = model.createProperty(NS, "p");
Property q = model.createProperty(NS, "q");
@Before
public void init() {
// Some small examples (subProperty)
model.add(p, RDFS.subPropertyOf, q);
A.addProperty(p, "foo" );
// Get an RDFS reasoner
GenericRuleReasoner rdfsReasoner = (GenericRuleReasoner) ReasonerRegistry.getRDFSReasoner();
// Steal its rules, and add one of our own, and create a
// reasoner with these rules
List<Rule> customRules = new ArrayList<>( rdfsReasoner.getRules() );
String customRule = "[rule1: (?a eg:p ?b) (?b eg:p ?c) -> (?a eg:p ?c)]";
customRules.add( Rule.parseRule( customRule ));
Reasoner reasoner = new GenericRuleReasoner( customRules );
reasoner.setDerivationLogging(true);
inf = ModelFactory.createInfModel(reasoner, model);
// Derivations
A.addProperty(p, B);
B.addProperty(p, C);
C.addProperty(p, D);
}
@Test
public void subProperty() {
StmtIterator stmts = inf.listStatements( A, q, (RDFNode) null );
assertTrue( stmts.hasNext() );
while ( stmts.hasNext() ) {
System.out.println( "Statement: "+stmts.next() );
}
}
@Test
public void derivations() {
String trace = null;
PrintWriter out = new PrintWriter(System.out);
for (StmtIterator i = inf.listStatements(A, p, D); i.hasNext(); ) {
Statement s = i.nextStatement();
System.out.println("Statement is " + s);
for (Iterator<Derivation> id = inf.getDerivation(s); id.hasNext(); ) {
Derivation deriv = id.next();
deriv.printTrace(out, true);
trace += deriv.toString();
}
}
out.flush();
assertNotNull(trace);
}
@Test
public void listStatements() {
StmtIterator stmtIterator = inf.listStatements();
while(stmtIterator.hasNext()) {
System.out.println(stmtIterator.nextStatement());
}
}
}
Related
I'm trying to write an automated test of an application that basically translates a custom message format into an XML message and sends it out the other end. I've got a good set of input/output message pairs so all I need to do is send the input messages in and listen for the XML message to come out the other end.
When it comes time to compare the actual output to the expected output, I'm running into some problems. My first thought was just to do string comparisons on the expected and actual messages. This doesn't work very well because the example data we have isn't always formatted consistently, and different aliases are often used for the XML namespace (and sometimes namespaces aren't used at all).
I know I can parse both strings and then walk through each element and compare them myself and this wouldn't be too difficult to do, but I get the feeling there's a better way or a library I could leverage.
So, boiled down, the question is:
Given two Java Strings which both contain valid XML how would you go about determining if they are semantically equivalent? Bonus points if you have a way to determine what the differences are.
Sounds like a job for XMLUnit
http://www.xmlunit.org/
https://github.com/xmlunit
Example:
public class SomeTest extends XMLTestCase {
@Test
public void test() {
String xml1 = ...
String xml2 = ...
XMLUnit.setIgnoreWhitespace(true); // ignore whitespace differences
// can also compare xml Documents, InputSources, Readers, Diffs
assertXMLEqual(xml1, xml2); // assertXMLEquals comes from XMLTestCase
}
}
The following will check if the documents are equal using standard JDK libraries.
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setNamespaceAware(true);
dbf.setCoalescing(true);
dbf.setIgnoringElementContentWhitespace(true);
dbf.setIgnoringComments(true);
DocumentBuilder db = dbf.newDocumentBuilder();
Document doc1 = db.parse(new File("file1.xml"));
doc1.normalizeDocument();
Document doc2 = db.parse(new File("file2.xml"));
doc2.normalizeDocument();
Assert.assertTrue(doc1.isEqualNode(doc2));
normalizeDocument() is there to put the DOM into a normal form (merging adjacent text nodes and the like) before comparing; there technically wouldn't be any cycles.
The above code requires the whitespace within elements to be identical, though, because it preserves and evaluates it. The standard XML parser that comes with Java does not allow you to set a feature to produce a canonical version or to understand xml:space; if that is going to be a problem, you may need a replacement XML parser such as Xerces, or use JDOM.
Xom has a Canonicalizer utility which turns your DOMs into a regular form, which you can then stringify and compare. So regardless of whitespace irregularities or attribute ordering, you can get regular, predictable comparisons of your documents.
This works especially well in IDEs that have dedicated visual String comparators, like Eclipse. You get a visual representation of the semantic differences between the documents.
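For illustration, here is a minimal sketch of that approach, assuming XOM (nu.xom) on the classpath; names are as in XOM 1.2, so verify against your version:
import java.io.ByteArrayOutputStream;
import java.io.StringReader;
import nu.xom.Builder;
import nu.xom.Document;
import nu.xom.canonical.Canonicalizer;
public class XomCompare {
    // Canonical XML (C14N) gives a regular form: sorted attributes,
    // expanded empty-element tags, normalized namespace declarations.
    static String canonical(String xml) throws Exception {
        Document doc = new Builder().build(new StringReader(xml));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        new Canonicalizer(out).write(doc);
        return out.toString("UTF-8");
    }
    public static void main(String[] args) throws Exception {
        String a = "<foo b=\"2\" a=\"1\"/>";
        String b = "<foo a=\"1\" b=\"2\"></foo>";
        // true: attribute order and empty-tag syntax no longer matter
        System.out.println(canonical(a).equals(canonical(b)));
    }
}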
The latest version of XMLUnit can help with the job of asserting that two XML documents are equal. XMLUnit.setIgnoreWhitespace() and XMLUnit.setIgnoreAttributeOrder() may also be necessary for the case in question.
See working code for a simple example of XMLUnit use below.
import java.util.List;
import org.custommonkey.xmlunit.DetailedDiff;
import org.custommonkey.xmlunit.XMLUnit;
import org.junit.Assert;
public class TestXml {
public static void main(String[] args) throws Exception {
String result = "<abc attr=\"value1\" title=\"something\"> </abc>";
// will be ok
assertXMLEquals("<abc attr=\"value1\" title=\"something\"></abc>", result);
}
public static void assertXMLEquals(String expectedXML, String actualXML) throws Exception {
XMLUnit.setIgnoreWhitespace(true);
XMLUnit.setIgnoreAttributeOrder(true);
DetailedDiff diff = new DetailedDiff(XMLUnit.compareXML(expectedXML, actualXML));
List<?> allDifferences = diff.getAllDifferences();
Assert.assertEquals("Differences found: "+ diff.toString(), 0, allDifferences.size());
}
}
If using Maven, add this to your pom.xml:
<dependency>
<groupId>xmlunit</groupId>
<artifactId>xmlunit</artifactId>
<version>1.4</version>
</dependency>
Building on Tom's answer, here's an example using XMLUnit v2.
It uses these maven dependencies
<dependency>
<groupId>org.xmlunit</groupId>
<artifactId>xmlunit-core</artifactId>
<version>2.0.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.xmlunit</groupId>
<artifactId>xmlunit-matchers</artifactId>
<version>2.0.0</version>
<scope>test</scope>
</dependency>
...and here's the test code:
import static org.junit.Assert.assertThat;
import static org.xmlunit.matchers.CompareMatcher.isIdenticalTo;
import org.xmlunit.builder.Input;
import org.xmlunit.input.WhitespaceStrippedSource;
public class SomeTest {
@Test
public void test() {
String result = "<root></root>";
String expected = "<root> </root>";
// ignore whitespace differences
// https://github.com/xmlunit/user-guide/wiki/Providing-Input-to-XMLUnit#whitespacestrippedsource
assertThat(result, isIdenticalTo(new WhitespaceStrippedSource(Input.from(expected).build())));
assertThat(result, isIdenticalTo(Input.from(expected).build())); // will fail due to whitespace differences
}
}
The documentation that outlines this is https://github.com/xmlunit/xmlunit#comparing-two-documents
Thanks, I extended this. Try this...
import java.io.ByteArrayInputStream;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NamedNodeMap;
import org.w3c.dom.Node;
public class XmlDiff
{
private boolean nodeTypeDiff = true;
private boolean nodeValueDiff = true;
public boolean diff( String xml1, String xml2, List<String> diffs ) throws Exception
{
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setNamespaceAware(true);
dbf.setCoalescing(true);
dbf.setIgnoringElementContentWhitespace(true);
dbf.setIgnoringComments(true);
DocumentBuilder db = dbf.newDocumentBuilder();
Document doc1 = db.parse(new ByteArrayInputStream(xml1.getBytes()));
Document doc2 = db.parse(new ByteArrayInputStream(xml2.getBytes()));
doc1.normalizeDocument();
doc2.normalizeDocument();
return diff( doc1, doc2, diffs );
}
/**
* Diff 2 nodes and put the diffs in the list
*/
public boolean diff( Node node1, Node node2, List<String> diffs ) throws Exception
{
if( diffNodeExists( node1, node2, diffs ) )
{
return true;
}
if( nodeTypeDiff )
{
diffNodeType(node1, node2, diffs );
}
if( nodeValueDiff )
{
diffNodeValue(node1, node2, diffs );
}
System.out.println(node1.getNodeName() + "/" + node2.getNodeName());
diffAttributes( node1, node2, diffs );
diffNodes( node1, node2, diffs );
return diffs.size() > 0;
}
/**
* Diff the nodes
*/
public boolean diffNodes( Node node1, Node node2, List<String> diffs ) throws Exception
{
//Sort by Name
Map<String,Node> children1 = new LinkedHashMap<String,Node>();
for( Node child1 = node1.getFirstChild(); child1 != null; child1 = child1.getNextSibling() )
{
children1.put( child1.getNodeName(), child1 );
}
//Sort by Name
Map<String,Node> children2 = new LinkedHashMap<String,Node>();
for( Node child2 = node2.getFirstChild(); child2!= null; child2 = child2.getNextSibling() )
{
children2.put( child2.getNodeName(), child2 );
}
//Diff all the children1
for( Node child1 : children1.values() )
{
Node child2 = children2.remove( child1.getNodeName() );
diff( child1, child2, diffs );
}
//Diff all the children2 left over
for( Node child2 : children2.values() )
{
Node child1 = children1.get( child2.getNodeName() );
diff( child1, child2, diffs );
}
return diffs.size() > 0;
}
/**
* Diff the attributes
*/
public boolean diffAttributes( Node node1, Node node2, List<String> diffs ) throws Exception
{
//Sort by Name
NamedNodeMap nodeMap1 = node1.getAttributes();
Map<String,Node> attributes1 = new LinkedHashMap<String,Node>();
for( int index = 0; nodeMap1 != null && index < nodeMap1.getLength(); index++ )
{
attributes1.put( nodeMap1.item(index).getNodeName(), nodeMap1.item(index) );
}
//Sort by Name
NamedNodeMap nodeMap2 = node2.getAttributes();
Map<String,Node> attributes2 = new LinkedHashMap<String,Node>();
for( int index = 0; nodeMap2 != null && index < nodeMap2.getLength(); index++ )
{
attributes2.put( nodeMap2.item(index).getNodeName(), nodeMap2.item(index) );
}
//Diff all the attributes1
for( Node attribute1 : attributes1.values() )
{
Node attribute2 = attributes2.remove( attribute1.getNodeName() );
diff( attribute1, attribute2, diffs );
}
//Diff all the attributes2 left over
for( Node attribute2 : attributes2.values() )
{
Node attribute1 = attributes1.get( attribute2.getNodeName() );
diff( attribute1, attribute2, diffs );
}
return diffs.size() > 0;
}
/**
* Check that the nodes exist
*/
public boolean diffNodeExists( Node node1, Node node2, List<String> diffs ) throws Exception
{
if( node1 == null && node2 == null )
{
// Both nodes are absent: nothing to compare, but stop here to avoid NPEs below
return true;
}
if( node1 == null && node2 != null )
{
diffs.add( getPath(node2) + ":node " + node1 + "!=" + node2.getNodeName() );
return true;
}
if( node1 != null && node2 == null )
{
diffs.add( getPath(node1) + ":node " + node1.getNodeName() + "!=" + node2 );
return true;
}
return false;
}
/**
* Diff the Node Type
*/
public boolean diffNodeType( Node node1, Node node2, List<String> diffs ) throws Exception
{
if( node1.getNodeType() != node2.getNodeType() )
{
diffs.add( getPath(node1) + ":type " + node1.getNodeType() + "!=" + node2.getNodeType() );
return true;
}
return false;
}
/**
* Diff the Node Value
*/
public boolean diffNodeValue( Node node1, Node node2, List<String> diffs ) throws Exception
{
if( node1.getNodeValue() == null && node2.getNodeValue() == null )
{
return false;
}
if( node1.getNodeValue() == null && node2.getNodeValue() != null )
{
diffs.add( getPath(node1) + ":type " + node1 + "!=" + node2.getNodeValue() );
return true;
}
if( node1.getNodeValue() != null && node2.getNodeValue() == null )
{
diffs.add( getPath(node1) + ":type " + node1.getNodeValue() + "!=" + node2 );
return true;
}
if( !node1.getNodeValue().equals( node2.getNodeValue() ) )
{
diffs.add( getPath(node1) + ":type " + node1.getNodeValue() + "!=" + node2.getNodeValue() );
return true;
}
return false;
}
/**
* Get the node path
*/
public String getPath( Node node )
{
StringBuilder path = new StringBuilder();
do
{
path.insert(0, node.getNodeName() );
path.insert( 0, "/" );
}
while( ( node = node.getParentNode() ) != null );
return path.toString();
}
}
AssertJ 1.4+ has specific assertions to compare XML content:
String expectedXml = "<foo />";
String actualXml = "<bar />";
assertThat(actualXml).isXmlEqualTo(expectedXml);
Here is the Documentation
The code below works for me:
String xml1 = ...
String xml2 = ...
XMLUnit.setIgnoreWhitespace(true);
XMLUnit.setIgnoreAttributeOrder(true);
XMLAssert.assertXMLEqual(xml1, xml2);
skaffman seems to be giving a good answer.
Another way is to format the XML using a command-line utility like xmlstarlet (http://xmlstar.sourceforge.net/), format both strings that way, and then use any diff utility (library) to diff the resulting output files. I don't know if this is a good solution when the issues are with namespaces.
I'm using Altova DiffDog which has options to compare XML files structurally (ignoring string data).
This means that (if checking the 'ignore text' option):
<foo a="xxx" b="xxx">xxx</foo>
and
<foo b="yyy" a="yyy">yyy</foo>
are equal in the sense that they have structural equality. This is handy if you have example files that differ in data, but not structure!
I required the same functionality as requested in the main question. As I was not allowed to use any 3rd-party libraries, I created my own solution based on @Archimedes Trajano's solution.
Following is my solution.
import java.io.ByteArrayInputStream;
import java.nio.charset.Charset;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import org.junit.Assert;
import org.w3c.dom.Document;
/**
* Asserts for asserting XML strings.
*/
public final class AssertXml {
private AssertXml() {
}
private static Pattern NAMESPACE_PATTERN = Pattern.compile("xmlns:(ns\\d+)=\"(.*?)\"");
/**
* Asserts that two XML are of identical content (namespace aliases are ignored).
*
* @param expectedXml expected XML
* @param actualXml actual XML
* @throws Exception thrown if XML parsing fails
*/
public static void assertEqualXmls(String expectedXml, String actualXml) throws Exception {
// Find all namespace mappings
Map<String, String> fullnamespace2newAlias = new HashMap<String, String>();
generateNewAliasesForNamespacesFromXml(expectedXml, fullnamespace2newAlias);
generateNewAliasesForNamespacesFromXml(actualXml, fullnamespace2newAlias);
for (Entry<String, String> entry : fullnamespace2newAlias.entrySet()) {
String newAlias = entry.getValue();
String namespace = entry.getKey();
Pattern nsReplacePattern = Pattern.compile("xmlns:(ns\\d+)=\"" + namespace + "\"");
expectedXml = translateNamespaceAliasesToNewAlias(expectedXml, newAlias, nsReplacePattern);
actualXml = translateNamespaceAliasesToNewAlias(actualXml, newAlias, nsReplacePattern);
}
// normalize namespaces according to the given mapping
DocumentBuilder db = initDocumentParserFactory();
Document expectedDocument = db.parse(new ByteArrayInputStream(expectedXml.getBytes(Charset.forName("UTF-8"))));
expectedDocument.normalizeDocument();
Document actualDocument = db.parse(new ByteArrayInputStream(actualXml.getBytes(Charset.forName("UTF-8"))));
actualDocument.normalizeDocument();
if (!expectedDocument.isEqualNode(actualDocument)) {
Assert.assertEquals(expectedXml, actualXml); // just to better visualize the differences, e.g. in Eclipse
}
}
private static DocumentBuilder initDocumentParserFactory() throws ParserConfigurationException {
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setNamespaceAware(false);
dbf.setCoalescing(true);
dbf.setIgnoringElementContentWhitespace(true);
dbf.setIgnoringComments(true);
DocumentBuilder db = dbf.newDocumentBuilder();
return db;
}
private static String translateNamespaceAliasesToNewAlias(String xml, String newAlias, Pattern namespacePattern) {
Matcher nsMatcherExp = namespacePattern.matcher(xml);
if (nsMatcherExp.find()) {
xml = xml.replaceAll(nsMatcherExp.group(1) + "[:]", newAlias + ":");
xml = xml.replaceAll(nsMatcherExp.group(1) + "=", newAlias + "=");
}
return xml;
}
private static void generateNewAliasesForNamespacesFromXml(String xml, Map<String, String> fullnamespace2newAlias) {
Matcher nsMatcher = NAMESPACE_PATTERN.matcher(xml);
while (nsMatcher.find()) {
if (!fullnamespace2newAlias.containsKey(nsMatcher.group(2))) {
fullnamespace2newAlias.put(nsMatcher.group(2), "nsTr" + (fullnamespace2newAlias.size() + 1));
}
}
}
}
It compares two XML strings and takes care of any mismatching namespace mappings by translating them to unique values in both input strings.
It can be fine-tuned, e.g. in how namespaces are translated, but for my requirements it does the job.
This will compare full string XMLs (reformatting them on the way). It makes it easy to work with your IDE (IntelliJ, Eclipse), because you just click and visually see the difference in the XML files.
import org.apache.xml.security.c14n.CanonicalizationException;
import org.apache.xml.security.c14n.Canonicalizer;
import org.apache.xml.security.c14n.InvalidCanonicalizerException;
import org.w3c.dom.Element;
import org.w3c.dom.bootstrap.DOMImplementationRegistry;
import org.w3c.dom.ls.DOMImplementationLS;
import org.w3c.dom.ls.LSSerializer;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.transform.TransformerException;
import java.io.IOException;
import java.io.StringReader;
import static org.apache.xml.security.Init.init;
import static org.junit.Assert.assertEquals;
public class XmlUtils {
static {
init();
}
public static String toCanonicalXml(String xml) throws InvalidCanonicalizerException, ParserConfigurationException, SAXException, CanonicalizationException, IOException {
Canonicalizer canon = Canonicalizer.getInstance(Canonicalizer.ALGO_ID_C14N_OMIT_COMMENTS);
byte[] canonXmlBytes = canon.canonicalize(xml.getBytes());
return new String(canonXmlBytes);
}
public static String prettyFormat(String input) throws TransformerException, ParserConfigurationException, IOException, SAXException, InstantiationException, IllegalAccessException, ClassNotFoundException {
InputSource src = new InputSource(new StringReader(input));
Element document = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(src).getDocumentElement();
Boolean keepDeclaration = input.startsWith("<?xml");
DOMImplementationRegistry registry = DOMImplementationRegistry.newInstance();
DOMImplementationLS impl = (DOMImplementationLS) registry.getDOMImplementation("LS");
LSSerializer writer = impl.createLSSerializer();
writer.getDomConfig().setParameter("format-pretty-print", Boolean.TRUE);
writer.getDomConfig().setParameter("xml-declaration", keepDeclaration);
return writer.writeToString(document);
}
public static void assertXMLEqual(String expected, String actual) throws ParserConfigurationException, IOException, SAXException, CanonicalizationException, InvalidCanonicalizerException, TransformerException, IllegalAccessException, ClassNotFoundException, InstantiationException {
String canonicalExpected = prettyFormat(toCanonicalXml(expected));
String canonicalActual = prettyFormat(toCanonicalXml(actual));
assertEquals(canonicalExpected, canonicalActual);
}
}
I prefer this to XmlUnit because the client code (test code) is cleaner.
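A usage sketch for the class above (the test name and XML literals are made up for illustration):
import org.junit.Test;
public class XmlUtilsTest {
    @Test
    public void semanticallyEqualXmlPasses() throws Exception {
        // Attribute order differs, content is the same:
        // canonicalization evens this out before the string comparison.
        XmlUtils.assertXMLEqual(
                "<foo b=\"2\" a=\"1\"><bar/></foo>",
                "<foo a=\"1\" b=\"2\"><bar/></foo>");
    }
}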
Using XMLUnit 2.x
In the pom.xml
<dependency>
<groupId>org.xmlunit</groupId>
<artifactId>xmlunit-assertj3</artifactId>
<version>2.9.0</version>
</dependency>
Test implementation (using JUnit 5):
import org.junit.jupiter.api.Test;
import org.xmlunit.assertj3.XmlAssert;
public class FooTest {
@Test
public void compareXml() {
//
String xmlContentA = "<foo></foo>";
String xmlContentB = "<foo></foo>";
//
XmlAssert.assertThat(xmlContentA).and(xmlContentB).areSimilar();
}
}
Other methods: areIdentical(), areNotIdentical(), areNotSimilar()
More details (configuration of assertThat(~).and(~) and examples) in this documentation page.
XMLUnit also has (among other features) a DifferenceEvaluator to do more precise comparisons.
XMLUnit website
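For a flavor of what a DifferenceEvaluator looks like in XMLUnit 2, here is a sketch using xmlunit-core; the prefix-ignoring rule is only an illustration, not the library's default behavior:
import org.xmlunit.builder.DiffBuilder;
import org.xmlunit.diff.ComparisonResult;
import org.xmlunit.diff.ComparisonType;
import org.xmlunit.diff.Diff;
import org.xmlunit.diff.DifferenceEvaluator;
import org.xmlunit.diff.DifferenceEvaluators;
public class EvaluatorDemo {
    public static void main(String[] args) {
        // Downgrade namespace-prefix differences to EQUAL, defer to the
        // default evaluator for everything else.
        DifferenceEvaluator ignorePrefixes = (comparison, outcome) ->
                comparison.getType() == ComparisonType.NAMESPACE_PREFIX
                        ? ComparisonResult.EQUAL
                        : outcome;
        Diff diff = DiffBuilder.compare("<a:foo xmlns:a=\"urn:ns\"/>")
                .withTest("<b:foo xmlns:b=\"urn:ns\"/>")
                .withDifferenceEvaluator(DifferenceEvaluators.chain(
                        DifferenceEvaluators.Default, ignorePrefixes))
                .checkForSimilar()
                .build();
        System.out.println("Has differences: " + diff.hasDifferences());
    }
}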
Using JExamXML with a Java application
import com.a7soft.examxml.ExamXML;
import com.a7soft.examxml.Options;
.................
// Reads two XML files into two strings
String s1 = readFile("orders1.xml");
String s2 = readFile("orders.xml");
// Loads options saved in a property file
Options.loadOptions("options");
// Compares two Strings representing XML entities
System.out.println( ExamXML.compareXMLString( s1, s2 ) );
Since you say "semantically equivalent" I assume you mean that you want to do more than just literally verify that the xml outputs are (string) equals, and that you'd want something like
<foo> some stuff here</foo></code>
and
<foo>some stuff here</foo></code>
do read as equivalent. Ultimately it's going to matter how you're defining "semantically equivalent" on whatever object you're reconstituting the message from. Simply build that object from the messages and use a custom equals() to define what you're looking for.
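For example (a sketch with an entirely hypothetical message type; the point is only where the equality decision lives):
// Hypothetical: reconstitute both messages into a domain object and let equals()
// define "semantically equivalent" -- field by field, ignoring formatting.
public final class OrderMessage {
    private final String orderId;
    private final int quantity;
    public OrderMessage(String orderId, int quantity) {
        this.orderId = orderId.trim(); // normalization happens on the way in
        this.quantity = quantity;
    }
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof OrderMessage)) return false;
        OrderMessage other = (OrderMessage) o;
        return orderId.equals(other.orderId) && quantity == other.quantity;
    }
    @Override
    public int hashCode() {
        return 31 * orderId.hashCode() + quantity;
    }
}
In a test you would parse both XML strings into OrderMessage instances with whatever binding you already use, and then a plain assertEquals does the semantic comparison.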
While working on my DAG in Hazelcast Jet, I stumbled into a weird problem. To check for the error I dumbed down my approach completely, and it seems that the edges are not working according to the tutorial.
The code below is almost as simple as it gets. Two vertices (one source, one sink), one edge.
The source reads from a map; the sink should put into a map.
The data.addEntryListener correctly tells me that the map is filled with 100 lists (each with 25 objects at 400 bytes) by another application ... and then nothing. The map fills up, but the DAG doesn't interact with it at all.
Any idea where to look for the problem?
package be.andersch.clusterbench;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.hazelcast.config.Config;
import com.hazelcast.config.SerializerConfig;
import com.hazelcast.core.EntryEvent;
import com.hazelcast.jet.*;
import com.hazelcast.jet.config.JetConfig;
import com.hazelcast.jet.stream.IStreamMap;
import com.hazelcast.map.listener.EntryAddedListener;
import be.andersch.anotherpackage.myObject;
import java.util.List;
import java.util.concurrent.ExecutionException;
import static com.hazelcast.jet.Edge.between;
import static com.hazelcast.jet.Processors.*;
/**
* Created by abernard on 24.03.2017.
*/
public class Analyzer {
private static final ObjectMapper mapper = new ObjectMapper();
private static JetInstance jet;
private static final IStreamMap<Long, List<String>> data;
private static final IStreamMap<Long, List<String>> testmap;
static {
JetConfig config = new JetConfig();
Config hazelConfig = config.getHazelcastConfig();
hazelConfig.getGroupConfig().setName( "name" ).setPassword( "password" );
hazelConfig.getNetworkConfig().getInterfaces().setEnabled( true ).addInterface( "my_IP_range_here" );
hazelConfig.getSerializationConfig().getSerializerConfigs().add(
new SerializerConfig().
setTypeClass(myObject.class).
setImplementation(new OsamKryoSerializer()));
jet = Jet.newJetInstance(config);
data = jet.getMap("data");
testmap = jet.getMap("testmap");
}
public static void main(String[] args) throws ExecutionException, InterruptedException {
DAG dag = new DAG();
Vertex source = dag.newVertex("source", readMap("data"));
Vertex test = dag.newVertex("test", writeMap("testmap"));
dag.edge(between(source, test));
jet.newJob(dag).execute().get();
data.addEntryListener((EntryAddedListener<Long, List<String>>) (EntryEvent<Long, List<String>> entryEvent) -> {
System.out.println("Got data: " + entryEvent.getKey() + " at " + System.currentTimeMillis() + ", Size: " + jet.getHazelcastInstance().getMap("data").size());
}, true);
testmap.addEntryListener((EntryAddedListener<Long, List<String>>) (EntryEvent<Long, List<String>> entryEvent) -> {
System.out.println("Got test: " + entryEvent.getKey() + " at " + System.currentTimeMillis());
}, true);
Runtime.getRuntime().addShutdownHook(new Thread(() -> Jet.shutdownAll()));
}
}
The Jet job is already finished at the line jet.newJob(dag).execute().get(), before you even created the entry listeners. This means that the job runs on an empty map. Maybe your confusion is about the nature of this job: it's a batch job, not an infinite stream processing one. Jet version 0.3 does not yet support infinite stream processing.
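So the fix is mainly one of ordering. A minimal sketch against the Jet 0.3 API from the question (reusing the jet, dag, data, and testmap fields): register the listeners and let the other application fill the map first, and only then submit the batch job, so the source snapshots a non-empty map.
// Sketch: order matters for a batch job. Listeners first, data next, job last.
data.addEntryListener((EntryAddedListener<Long, List<String>>) entryEvent ->
        System.out.println("Got data: " + entryEvent.getKey()), true);
testmap.addEntryListener((EntryAddedListener<Long, List<String>>) entryEvent ->
        System.out.println("Got test: " + entryEvent.getKey()), true);
// ... wait here until the other application has finished filling the "data" map ...
jet.newJob(dag).execute().get(); // the batch source now sees the populated map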
Could anyone help me with the issue below?
I want to pass test cases in QC through Java. I used com4j and reached the test sets, but I am unable to fetch the test cases under the respective test set.
Could anyone please help me with how to pass test cases in QC through com4j?
import com.qc.ClassFactory;
import com.qc.ITDConnection;
import com.qc.ITestLabFolder;
import com.qc.ITestSetFactory;
import com.qc.ITestSetTreeManager;
import com.qc.ITestSetFolder;
import com.qc.IList;
import com.qc.ITSTest;
import com.qc.ITestSet;
import com.qc.ITestFactory;
import com4j.*;
import com4j.stdole.*;
import com4j.tlbimp.*;
import com4j.tlbimp.def.*;
import com4j.tlbimp.driver.*;
import com4j.util.*;
import com4j.COM4J;
import java.util.*;
import com.qc.IRun;
import com.qc.IRunFactory;
public class Qc_Connect {
public static void main(String[] args) {
// TODO Auto-generated method stub
String url="http://abc/qcbin/";
String domain="abc";
String project="xyz";
String username="132222";
String password="Xyz";
String strTestLabPath = "Root\\Test\\";
String strTestSetName = "TestQC";
try{
ITDConnection itd=ClassFactory.createTDConnection();
itd.initConnectionEx(url);
System.out.println("COnnected To QC:"+ itd.connected());
itd.connectProjectEx(domain,project,username,password);
System.out.println("Logged into QC");
//System.out.println("Project_Connected:"+ itd.connected());
ITestSetFactory objTestSetFactory = (itd.testSetFactory()).queryInterface(ITestSetFactory.class);
ITestSetTreeManager objTestSetTreeManager = (itd.testSetTreeManager()).queryInterface(ITestSetTreeManager.class);
ITestSetFolder objTestSetFolder =(objTestSetTreeManager.nodeByPath(strTestLabPath)).queryInterface(ITestSetFolder.class);
IList its1 = objTestSetFolder.findTestSets(strTestSetName, true, null);
//IList ls= objTestSetFolder.findTestSets(strTestSetName, true, null);
System.out.println("No. of Test Set:" + its1.count());
ITestSet tst= (ITestSet) objTestSetFolder.findTestSets(strTestSetName, true, null).queryInterface(ITSTest.class);
System.out.println(tst.name());
//System.out.println( its1.queryInterface(ITestSet.class).name());
/* foreach (ITestSet testSet : its1.queryInterface(ITestSet.class)){
ITestSetFolder tsFolder = (ITestSetFolder)testSet.TestSetFolder;
ITSTestFactory tsTestFactory = (ITSTestFactory)testSet.TSTestFactory;
List tsTestList = tsTestFactory.NewList("");
}*/
/* Com4jObject comObj = (Com4jObject) its1.item(0);
ITestSet tst = comObj.queryInterface(ITestSet.class);
System.out.println("Test Set Name : " + tst.name());
System.out.println("Test Set ID : " + tst.id());
System.out.println("Test Set ID : " + tst.status());
System.out.println("Test Set ID : " );*/
System.out.println(its1.count());
System.out.println("TestSet Present");
Iterator itr = its1.iterator();
System.out.println(itr.hasNext());
while (itr.hasNext())
{
Com4jObject comObj = (Com4jObject) itr.next();
ITestSet sTestSet = comObj.queryInterface(ITestSet.class);
System.out.println(sTestSet.name());
Com4jObject comObj2 = sTestSet.tsTestFactory();
ITestSetFactory test = comObj2.queryInterface(ITestSetFactory.class);
}
// ITSTest tsTest=null;
// tsTest.
//its1.
/* comObj = (Com4jObject) its1.item(1);
ITSTest tst2=comObj.queryInterface(ITSTest.class);*/
// System.out.println( tst2.name());
/* foreach (ITSTest tsTest : tst2)
{
IRun lastRun = (IRun)tsTest.lastRun();
if (lastRun == null)
{
IRunFactory runFactory = (IRunFactory)tsTest.runFactory;
String date = "20160203";
IRun run = (IRun)runFactory.addItem( date);
run.status("Pass");
run.autoPost();
}
}*/
}
catch(Exception e){
e.printStackTrace();
}
}
}
I know the post is quite old. I had to struggle a lot with OTA and Java and couldn't find a complete post for solving the issue.
Now I have running code after a lot of research,
so I thought of sharing my code in case someone is looking for help.
Here is the complete solution:
ITestFactory sTestFactory = (connection.testFactory())
.queryInterface(ITestFactory.class);
ITest iTest1 = (sTestFactory.item(12081)).queryInterface(ITest.class);
System.out.println(iTest1.execDate());
System.out.println(iTest1.name());
ITestSetFactory sTestSetFactory = (connection.testSetFactory())
.queryInterface(ITestSetFactory.class);
ITestSet sTestSet = (sTestSetFactory.item(1402))
.queryInterface(ITestSet.class);
System.out.println(sTestSet.name() + "\n Test Set ID" + sTestSet.id());
IBaseFactory testFactory1 = sTestSet.tsTestFactory().queryInterface(
IBaseFactory.class);
testFactory1.addItem(iTest1);
System.out.println("Test case has been Added");
System.out.println(testFactory1.newList("").count());
IList tsTestlist = testFactory1.newList("");
ITSTest tsTest;
for (int tsTestIndex = 1; tsTestIndex <= tsTestlist.count(); tsTestIndex++) {
Com4jObject comObj = (Com4jObject) tsTestlist.item(tsTestIndex);
tsTest = comObj.queryInterface(ITSTest.class);
if (tsTest.name().equalsIgnoreCase("[3]TC_OTA_API_Test")) {
System.out.println("Hostname" + tsTest.hostName() + "\n"
+ tsTest.name() + "\n" + tsTest.status());
IRun lastRun = (IRun) tsTest.lastRun();
// IRun lastRun = comObjRun.queryInterface(IRun.class);
// don't update test if it may have been modified by someone
// else
if (lastRun == null) {
System.out.println("I am here last Run = Null");
runFactory = tsTest.runFactory().queryInterface(
IRunFactory.class);
System.out.println(runFactory.newList("").count());
String runName = "TestRun_Automated";
Com4jObject comObjRunForThisTS = runFactory
.addItem(runName);
IRun runObjectForThisTS = comObjRunForThisTS
.queryInterface(IRun.class);
runObjectForThisTS.status("Passed");
runObjectForThisTS.post();
runObjectForThisTS.refresh();
}
}
}
Why not build a client to access the REST API instead of passing through the OTA interface?
Once you build a basic client, you can post runs and update their status quite easily.
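A rough sketch of such a client, using only the JDK; every endpoint path and payload field below is a placeholder to verify against your ALM version's REST documentation, not a confirmed API:
import java.io.OutputStream;
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
public class QcRestSketch {
    public static void main(String[] args) throws Exception {
        // Let HttpURLConnection carry the session cookies between calls
        CookieHandler.setDefault(new CookieManager());
        String base = "http://abc/qcbin";
        // 1) Authenticate (HTTP Basic against ALM's authentication point)
        HttpURLConnection auth = (HttpURLConnection)
                new URL(base + "/authentication-point/authenticate").openConnection();
        auth.setRequestProperty("Authorization", "Basic "
                + Base64.getEncoder().encodeToString("user:pass".getBytes(StandardCharsets.UTF_8)));
        System.out.println("auth: " + auth.getResponseCode()); // expect 200
        // 2) Post a run (XML entity; field names are placeholders)
        HttpURLConnection post = (HttpURLConnection)
                new URL(base + "/rest/domains/abc/projects/xyz/runs").openConnection();
        post.setRequestMethod("POST");
        post.setDoOutput(true);
        post.setRequestProperty("Content-Type", "application/xml");
        String body = "<Entity Type=\"run\"><Fields>"
                + "<Field Name=\"status\"><Value>Passed</Value></Field>"
                + "</Fields></Entity>";
        try (OutputStream os = post.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("post: " + post.getResponseCode());
    }
}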
If you use C#/VB.NET, this is easily done. But since you are working in Java, I would suggest providing an interface above the DLLs to deal with the operations. This will be much easier than using com4j.
For a similar query, the following may help you. I would suggest dropping the idea of using com4j and using the solution provided in the thread below, which is proven, fail-safe, and auto-recoverable.
QC API JAR to connect using java
It has always been difficult to use com4j, especially for HP QC/ALM, as the DLLs for QC are faulty and there are memory leak/allocation problems that frequently crash DLL executions on certain platforms.
This question is a bit out of the box, but I need it.
In a list (collection), we can retrieve the nth element with list.get(i);
similarly, is there any method in HBase, using the Java API, where I can get the nth qualifier given the row id and column family name?
NOTE: I have a million qualifiers in a single row in a single column family.
Sorry for being unresponsive; I was busy with something important. Try this for now:
package org.myorg.hbasedemo;
import java.io.IOException;
import java.util.Scanner;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
public class GetNthColumn {
public static void main(String[] args) throws IOException {
Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "TEST");
Get g = new Get(Bytes.toBytes("4"));
Result r = table.get(g);
System.out.println("Enter column index :");
Scanner reader = new Scanner(System.in);
int index = reader.nextInt();
System.out.println("index : " + index);
int count = 0;
for (KeyValue kv : r.raw()) {
if (++count != index)
continue;
System.out.println("Qualifier : "
+ Bytes.toString(kv.getQualifier()));
System.out.println("Value : " + Bytes.toString(kv.getValue()));
break; // found the nth qualifier, no need to scan the rest of the row
}
table.close();
System.out.println("Done.");
}
}
Will let you know if I get a better way to do this.
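A possibly better way, if your HBase version has it: ColumnPaginationFilter lets the server return exactly one column at a given offset, so you don't pull a million qualifiers client-side just to pick the nth. A sketch (same table and row key as above; the column family name is an assumption):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.filter.ColumnPaginationFilter;
import org.apache.hadoop.hbase.util.Bytes;
public class GetNthColumnPaginated {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "TEST");
        int index = 5; // 1-based position of the wanted qualifier
        Get g = new Get(Bytes.toBytes("4"));
        g.addFamily(Bytes.toBytes("cf")); // assumed column family name
        // limit = 1 column, offset = index - 1: only the nth cell comes back
        g.setFilter(new ColumnPaginationFilter(1, index - 1));
        Result r = table.get(g);
        for (KeyValue kv : r.raw()) {
            System.out.println("Qualifier : " + Bytes.toString(kv.getQualifier()));
            System.out.println("Value : " + Bytes.toString(kv.getValue()));
        }
        table.close();
    }
}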
I'm currently testing Apache Mahout Parallel Frequent Pattern Mining. Before using it in the real project, I started with a simple example, just to be sure it works as I expect it to...
I did not find a complete example with code, data, and output.
I currently have a compiling and executing version (see the Java/Scala code below), but the returned frequent patterns contain only one tuple each (see the sample output below).
Is this the intended behavior?
What did I do wrong?
Thanks for your help...
Scala code:
import org.apache.mahout.fpm.pfpgrowth.fpgrowth.FPGrowth
import java.util.HashSet
import org.apache.mahout.common.iterator.StringRecordIterator
import org.apache.mahout.common.iterator.FileLineIterable
import org.apache.mahout.fpm.pfpgrowth.convertors._
import org.apache.mahout.fpm.pfpgrowth.convertors.integer._
import org.apache.mahout.fpm.pfpgrowth.convertors.string._
import org.apache.hadoop.io.SequenceFile.Writer
import org.apache.mahout.fpm.pfpgrowth.convertors.StatusUpdater
import org.apache.hadoop.mapred.OutputCollector
import scala.collection.JavaConversions._
import java.util.{ List => JList }
import org.apache.mahout.common.{ Pair => JPair }
import java.lang.{ Long => JLong }
import org.apache.hadoop.io.{ Text => JText }
val minSupport = 5L
val k: Int = 50
val fps: FPGrowth[String] = new FPGrowth[String]()
val milk = "milk"
val bread = "bread"
val butter = "butter"
val bier = "bier"
val transactionStream: Iterator[JPair[JList[String], JLong]] = Iterator(
new JPair(List(milk, bread), 10L),
new JPair(List(butter), 10L),
new JPair(List(bier), 10L),
new JPair(List(milk, bread, butter), 5L),
new JPair(List(milk, bread, bier), 5L),
new JPair(List(bread), 10L)
)
val frequencies: Collection[JPair[String, JLong]] = fps.generateFList(
transactionStream, minSupport.toInt)
println("freqList :" + frequencies)
var returnableFeatures: Collection[String] = List(
milk, bread, butter, bier)
var output: OutputCollector[String, JList[JPair[JList[String], JLong]]] = (
new OutputCollector[String, JList[JPair[JList[String], JLong]]] {
def collect(x1: String,
x2: JList[JPair[JList[String], JLong]]) = {
println(x1 + ":" +
x2.map(pair => "[" + pair.getFirst.mkString(",") + "] : " +
pair.getSecond).mkString("; "))
}
}
)
val updater: StatusUpdater = new StatusUpdater {
def update(status: String) = println("updater : " + status)
}
fps.generateTopKFrequentPatterns(
transactionStream,
frequencies,
minSupport,
k,
null, //returnableFeatures
output,
updater)
Java code:
import org.apache.mahout.fpm.pfpgrowth.fpgrowth.*;
import java.io.IOException;
import java.util.*;
import org.apache.mahout.common.iterator.*;
import org.apache.mahout.fpm.pfpgrowth.convertors.*;
import org.apache.mahout.fpm.pfpgrowth.convertors.integer.*;
import org.apache.mahout.fpm.pfpgrowth.convertors.string.*;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.mahout.common.*;
import org.apache.hadoop.io.Text;
class FPGrowthDemo {
public static void main(String[] args) {
long minSupport = 1L;
int k = 50;
FPGrowth<String> fps = new FPGrowth<String>();
String milk = "milk";
String bread = "bread";
String butter = "butter";
String bier = "bier";
LinkedList<Pair<List<String>, Long>> data =
new LinkedList<Pair<List<String>, Long>>();
data.add(new Pair(Arrays.asList(milk, bread), 1L));
data.add(new Pair(Arrays.asList(butter), 1L));
data.add(new Pair(Arrays.asList(bier), 1L));
data.add(new Pair(Arrays.asList(milk, bread, butter), 1L));
data.add(new Pair(Arrays.asList(milk, bread, bier), 1L));
data.add(new Pair(Arrays.asList(milk, bread), 1L));
Iterator<Pair<List<String>, Long>> transactions = data.iterator();
Collection<Pair<String, Long>> frequencies = fps.generateFList(
transactions, (int) minSupport);
System.out.println("freqList :" + frequencies);
Collection<String> returnableFeatures =
Arrays.asList(milk, bread, butter, bier);
OutputCollector<String, List<Pair<List<String>, Long>>> output =
new OutputCollector<String, List<Pair<List<String>, Long>>>() {
@Override
public void collect(String x1,
List<Pair<List<String>, Long>> listPair)
throws IOException {
StringBuffer sb = new StringBuffer();
sb.append(x1 + ":");
for (Pair<List<String>, Long> pair : listPair) {
sb.append("[");
String sep = "";
for (String item : pair.getFirst()) {
sb.append(item + sep);
sep = ", ";
}
sb.append("]:" + pair.getSecond());
}
System.out.println(" " + sb.toString());
}
};
StatusUpdater updater = new StatusUpdater() {
public void update(String status){
System.out.println("updater :" + status);
}
};
try {
fps.generateTopKFrequentPatterns(
transactions,
frequencies,
minSupport,
k,
null, //returnableFeatures
output,
updater);
}catch (Exception e){
e.printStackTrace();
}
}
}
Sample output:
freqList :[(bread,4), (milk,4), (bier,2), (butter,2)]
17:48:19,108 INFO ~ Number of unique items 4
17:48:19,109 INFO ~ Number of unique pruned items 4
17:48:19,121 INFO ~ Number of Nodes in the FP Tree: 0
17:48:19,122 INFO ~ Mining FTree Tree for all patterns with 3
updater :FPGrowth Algorithm for a given feature: 3
butter:[butter]:2
17:48:19,130 INFO ~ Found 1 Patterns with Least Support 2
17:48:19,130 INFO ~ Mining FTree Tree for all patterns with 2
updater :FPGrowth Algorithm for a given feature: 2
updater :FPGrowth Algorithm for a given feature: 3
bier:[bier]:2
17:48:19,130 INFO ~ Found 1 Patterns with Least Support 2
17:48:19,130 INFO ~ Mining FTree Tree for all patterns with 1
updater :FPGrowth Algorithm for a given feature: 1
updater :FPGrowth Algorithm for a given feature: 2
updater :FPGrowth Algorithm for a given feature: 3
milk:[milk]:4
17:48:19,131 INFO ~ Found 1 Patterns with Least Support 4
17:48:19,131 INFO ~ Mining FTree Tree for all patterns with 0
updater :FPGrowth Algorithm for a given feature: 0
updater :FPGrowth Algorithm for a given feature: 1
updater :FPGrowth Algorithm for a given feature: 2
updater :FPGrowth Algorithm for a given feature: 3
bread:[bread]:4
17:48:19,131 INFO ~ Found 1 Patterns with Least Support 4
17:48:19,131 INFO ~ Tree Cache: First Level: Cache hits=6 Cache Misses=4
The code is buggy: the iterator over the transactions is consumed first to compute the frequencies, and is then handed again to the FP-Growth algorithm. The problem is that this second pass returns no values, because the iterator has already reached its end...
For reference, here is the corrected Java code:
import org.apache.mahout.fpm.pfpgrowth.fpgrowth.*;
import java.io.IOException;
import java.util.*;
import org.apache.mahout.common.iterator.*;
import org.apache.mahout.fpm.pfpgrowth.convertors.*;
import org.apache.mahout.fpm.pfpgrowth.convertors.integer.*;
import org.apache.mahout.fpm.pfpgrowth.convertors.string.*;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.mahout.common.*;
import org.apache.hadoop.io.Text;
class FPGrowthDemo {
public static void main(String[] args) {
long minSupport = 1L;
int k = 50;
FPGrowth<String> fps = new FPGrowth<String>();
String milk = "milk";
String bread = "bread";
String butter = "butter";
String bier = "bier";
LinkedList<Pair<List<String>, Long>> data =
new LinkedList<Pair<List<String>, Long>>();
data.add(new Pair(Arrays.asList(milk, bread), 1L));
data.add(new Pair(Arrays.asList(butter), 1L));
data.add(new Pair(Arrays.asList(bier), 1L));
data.add(new Pair(Arrays.asList(milk, bread, butter), 1L));
data.add(new Pair(Arrays.asList(milk, bread, bier), 1L));
data.add(new Pair(Arrays.asList(milk, bread), 1L));
// This line is removed...
// Iterator<Pair<List<String>, Long>> transactions = data.iterator();
Collection<Pair<String, Long>> frequencies = fps.generateFList(
data.iterator(), // use an iterator here...
(int) minSupport);
System.out.println("freqList :" + frequencies);
OutputCollector<String, List<Pair<List<String>, Long>>> output =
new OutputCollector<String, List<Pair<List<String>, Long>>>() {
@Override
public void collect(String x1,
List<Pair<List<String>, Long>> listPair)
throws IOException {
StringBuffer sb = new StringBuffer();
sb.append(x1 + ":");
for (Pair<List<String>, Long> pair : listPair) {
sb.append("[");
String sep = "";
for (String item : pair.getFirst()) {
sb.append(item + sep);
sep = ", ";
}
sb.append("]:" + pair.getSecond());
}
System.out.println(" " + sb.toString());
}
};
StatusUpdater updater = new StatusUpdater() {
public void update(String status) {
System.out.println("updater :" + status);
}
};
try {
fps.generateTopKFrequentPatterns(
// changed here (previously : transactions)
data.iterator(), // use a "fresh" iterator
frequencies,
minSupport,
k,
null,
output,
updater);
} catch (Exception e) {
e.printStackTrace();
}
}
}