While working on my DAG in Hazelcast Jet, I stumbled into a weird problem. To isolate the error I simplified my approach completely, and it seems that the edges are not working as the tutorial describes.
The code below is almost as simple as it gets. Two vertices (one source, one sink), one edge.
The source is reading from a map, the sink should put into a map.
The data.addEntryListener correctly tells me that the map is filled with 100 lists (each with 25 objects of 400 bytes) by another application ... and then nothing. The map fills up, but the DAG doesn't interact with it at all.
Any idea where to look for the problem?
package be.andersch.clusterbench;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.hazelcast.config.Config;
import com.hazelcast.config.SerializerConfig;
import com.hazelcast.core.EntryEvent;
import com.hazelcast.jet.*;
import com.hazelcast.jet.config.JetConfig;
import com.hazelcast.jet.stream.IStreamMap;
import com.hazelcast.map.listener.EntryAddedListener;
import be.andersch.anotherpackage.myObject;
import java.util.List;
import java.util.concurrent.ExecutionException;
import static com.hazelcast.jet.Edge.between;
import static com.hazelcast.jet.Processors.*;
/**
* Created by abernard on 24.03.2017.
*/
public class Analyzer {
private static final ObjectMapper mapper = new ObjectMapper();
private static JetInstance jet;
private static final IStreamMap<Long, List<String>> data;
private static final IStreamMap<Long, List<String>> testmap;
static {
JetConfig config = new JetConfig();
Config hazelConfig = config.getHazelcastConfig();
hazelConfig.getGroupConfig().setName( "name" ).setPassword( "password" );
hazelConfig.getNetworkConfig().getInterfaces().setEnabled( true ).addInterface( "my_IP_range_here" );
hazelConfig.getSerializationConfig().getSerializerConfigs().add(
new SerializerConfig().
setTypeClass(myObject.class).
setImplementation(new OsamKryoSerializer()));
jet = Jet.newJetInstance(config);
data = jet.getMap("data");
testmap = jet.getMap("testmap");
}
public static void main(String[] args) throws ExecutionException, InterruptedException {
DAG dag = new DAG();
Vertex source = dag.newVertex("source", readMap("data"));
Vertex test = dag.newVertex("test", writeMap("testmap"));
dag.edge(between(source, test));
jet.newJob(dag).execute().get();
data.addEntryListener((EntryAddedListener<Long, List<String>>) (EntryEvent<Long, List<String>> entryEvent) -> {
System.out.println("Got data: " + entryEvent.getKey() + " at " + System.currentTimeMillis() + ", Size: " + jet.getHazelcastInstance().getMap("data").size());
}, true);
testmap.addEntryListener((EntryAddedListener<Long, List<String>>) (EntryEvent<Long, List<String>> entryEvent) -> {
System.out.println("Got test: " + entryEvent.getKey() + " at " + System.currentTimeMillis());
}, true);
Runtime.getRuntime().addShutdownHook(new Thread(() -> Jet.shutdownAll()));
}
}
The Jet job has already finished at the line jet.newJob(dag).execute().get(), before you even create the entry listeners. This means the job runs on an empty map. Maybe your confusion is about the nature of this job: it's a batch job, not an infinite stream processing one. Jet version 0.3 does not yet support infinite stream processing.
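One way to see output with this batch model, assuming the producer application is the one filling the "data" map, is to register the listeners first (purely for logging), wait until the input is in place, and only then submit the job, so that readMap("data") actually has something to read. Here is a minimal sketch of a reordered main method; the 100-entry threshold and the polling loop are just placeholder stand-ins for whatever completion signal your producer really gives you:

public static void main(String[] args) throws ExecutionException, InterruptedException {
    // Listeners first, purely for visibility into both maps.
    data.addEntryListener((EntryAddedListener<Long, List<String>>) entryEvent ->
            System.out.println("Got data: " + entryEvent.getKey()), true);
    testmap.addEntryListener((EntryAddedListener<Long, List<String>>) entryEvent ->
            System.out.println("Got test: " + entryEvent.getKey()), true);

    // Wait (naively) until the producer has filled the source map.
    // ASSUMPTION: 100 entries is the complete input; replace with your real signal.
    while (data.size() < 100) {
        Thread.sleep(500);
    }

    // Only now build and run the batch job, so readMap("data") sees the entries.
    DAG dag = new DAG();
    Vertex source = dag.newVertex("source", readMap("data"));
    Vertex sink = dag.newVertex("sink", writeMap("testmap"));
    dag.edge(between(source, sink));
    jet.newJob(dag).execute().get();

    Runtime.getRuntime().addShutdownHook(new Thread(Jet::shutdownAll));
}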
This is my "revenue_data.csv" file:
Client,ReportDate,Revenue
C1,2019-1-7,12
C2,2019-1-7,34
C1,2019-1-16,56
C2,2019-1-16,78
C3,2019-1-16,90
And my case class to read the file is:
package com.source.code;
import java.time.LocalDate;
public class RevenueRecorder {
private String clientCode;
private LocalDate reportDate;
private int revenue;
public RevenueRecorder(String clientCode, LocalDate reportDate, int revenue) {
this.clientCode = clientCode;
this.reportDate = reportDate;
this.revenue = revenue;
}
public String getClientCode() {
return clientCode;
}
public LocalDate getReportDate() {
return reportDate;
}
public int getRevenue() {
return revenue;
}
}
I can read the file and group by ReportDate, sum(revenue) in the following manner:
import com.source.code.RevenueRecorder;
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import static java.util.stream.Collectors.groupingBy;
import static java.util.stream.Collectors.summingInt;
public class RevenueRecorderMain {
public static void main(String[] args) throws IOException {
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-M-d");
List<RevenueRecorder> revenueRecords = new ArrayList<>();
Path path = FileSystems.getDefault().getPath("src", "main", "resources",
"data", "revenue_data.csv");
Files.lines(path)
.skip(1)
.map(s -> s.split(","))
.forEach(s ->
{
String clientCode = s[0];
LocalDate reportDate = LocalDate.parse(s[1], formatter);
int revenue = Integer.parseInt(s[2]);
revenueRecords.add(new RevenueRecorder(clientCode, reportDate, revenue));
});
Map<LocalDate, Integer> reportDateRev = revenueRecords.stream()
.collect(groupingBy(RevenueRecorder::getReportDate,
summingInt(RevenueRecorder::getRevenue)));
}
}
My question is how I can group by ReportDate, count(clientCode) and sum(revenue) in Java 8, specifically:
what collection to use instead of the Map
how to group by and collect in this case (and generally for more than two groupingBy's)
I'm trying:
//import org.apache.commons.lang3.tuple.ImmutablePair;
//import org.apache.commons.lang3.tuple.Pair;
Map<LocalDate, Pair<Integer, Integer>> pairedReportDateRev = revenueRecords.stream()
.collect(groupingBy(RevenueRecorder::getReportDate,
new ImmutablePair(summingInt(RevenueRecorder::getRevenue),
groupingBy(RevenueRecorder::getClientCode, Collectors.counting()))));
But I'm getting the IntelliJ red squiggle underneath RevenueRecorder::getReportDate with the hover message 'Non-static method cannot be referenced from a static context'.
Thanks
EDIT
For clarification, here's the corresponding SQL query that I'm trying to get at:
select
reportDate, count(distinct(clientCode)), sum(revenue)
from
revenue_data_table
group by
reportDate
Although your attempt wasn't successful, I think it's closest to what you want to express, so I just followed your code and fixed it. Try this one!
Map<LocalDate, ImmutablePair<Integer, Map<String, Long>>> map = revenueRecords.stream()
.collect(groupingBy(RevenueRecorder::getReportDate,
collectingAndThen(toList(), list -> new ImmutablePair(list.stream().collect(summingInt(RevenueRecorder::getRevenue)),
list.stream().collect(groupingBy(RevenueRecorder::getClientCode, Collectors.counting()))))));
And I borrowed some sample data code from @Lyashko Kirill to test my code; the result is below.
This is my own idea; I hope it helps. ╰( ̄▽ ̄)╭
If you already use Java 12, there is a new collector, Collectors.teeing(), which collects using two independent collectors and then merges their results using the supplied BiFunction. Every element passed to the resulting collector is processed by both downstream collectors, and their results are merged with the specified merge function into the final result. Therefore Collectors.teeing() may be a good fit, since you want counting and summing.
Map<LocalDate, Result> pairedReportDateMRR =
revenueRecords.stream().collect(Collectors.groupingBy(RevenueRecorder::getReportDate,
Collectors.teeing(Collectors.counting(),
Collectors.summingInt(RevenueRecorder::getRevenue), Result::new)));
System.out.println(pairedReportDateMRR);
//output: {2019-01-07={count=2, sum=46}, 2019-01-16={count=3, sum=224}}
For testing purposes I used the following simple static class
static class Result {
private Long count;
private Integer sum;
public Result(Long count, Integer sum) {
this.count = count;
this.sum = sum;
}
@Override
public String toString() {
return "{" + "count=" + count + ", sum=" + sum + '}';
}
}
First of all, you can't produce a Map<LocalDate, Pair<Integer, Integer>>, because you want to do a second grouping, which means that for the same date you may have multiple client codes, each with its own counter.
So if I've got you right, you want to get something like Map<LocalDate, MutablePair<Integer, Map<String, Integer>>>. If that's correct, try this code snippet:
public static void main(String[] args) {
String data = "C1,2019-1-7,12\n" +
"C2,2019-1-7,34\n" +
"C1,2019-1-16,56\n" +
"C2,2019-1-16,78\n" +
"C3,2019-1-16,90";
Stream.of(data.split("\n")).forEach(System.out::println);
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-M-d");
List<RevenueRecorder> revenueRecords = Stream.of(data.split("\n")).map(line -> {
String[] s = line.split(",");
String clientCode = s[0];
LocalDate reportDate = LocalDate.parse(s[1].trim(), formatter);
int revenue = Integer.parseInt(s[2]);
return new RevenueRecorder(clientCode, reportDate, revenue);
}).collect(toList());
Supplier<MutablePair<Integer, Map<String, Integer>>> supplier = () -> MutablePair.of(0, new HashMap<>());
BiConsumer<MutablePair<Integer, Map<String, Integer>>, RevenueRecorder> accumulator = (pair, recorder) -> {
pair.setLeft(pair.getLeft() + recorder.getRevenue());
pair.getRight().merge(recorder.getClientCode(), 1, Integer::sum);
};
BinaryOperator<MutablePair<Integer, Map<String, Integer>>> combiner = (p1, p2) -> {
p1.setLeft(p1.getLeft() + p2.getLeft());
p2.getRight().forEach((key, val) -> p1.getRight().merge(key, val, Integer::sum));
return p1;
};
Map<LocalDate, MutablePair<Integer, Map<String, Integer>>> pairedReportDateMRR = revenueRecords.stream()
.collect(
groupingBy(RevenueRecorder::getReportDate,
Collector.of(supplier, accumulator, combiner))
);
System.out.println(pairedReportDateMRR);
}
I have words with prefix. eg:
city|new york
city|London
travel|yes
...
city|new york
I want to count how many city|new york and city|London there are (which is classic word count). But the reducer output should be a key-value pair like city:{"new york":2, "london":1}. Meaning, for each prefix such as city, I want to aggregate all the strings and their counts.
public void reduce(Text key, Iterable<IntWritable> values,
Context context
) throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
result.set(sum);
// Instead of just result count, I need something like {"city":{"new york" :2, "london":1}}
context.write(key, result);
}
Any ideas?
You can use the cleanup() method of the reducer to achieve this (assuming you have just one reducer). It is called once at the end of the reduce task.
I will explain this for "city" data.
Following is the code:
package com.hadooptests;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
public class Cities {
public static class CityMapper
extends Mapper<LongWritable, Text, Text, IntWritable> {
private Text outKey = new Text();
private IntWritable outValue = new IntWritable(1);
public void map(LongWritable key, Text value, Context context
) throws IOException, InterruptedException {
outKey.set(value);
context.write(outKey, outValue);
}
}
public static class CityReducer
extends Reducer<Text,IntWritable,Text,Text> {
HashMap<String, Integer> cityCount = new HashMap<String, Integer>();
public void reduce(Text key, Iterable<IntWritable> values,
Context context
) throws IOException, InterruptedException {
for (IntWritable val : values) {
String keyStr = key.toString();
if(keyStr.toLowerCase().startsWith("city|")) {
String[] tokens = keyStr.split("\\|");
if(cityCount.containsKey(tokens[1])) {
int count = cityCount.get(tokens[1]);
cityCount.put(tokens[1], ++count);
}
else
cityCount.put(tokens[1], val.get());
}
}
}
@Override
public void cleanup(Context context)
throws IOException,
InterruptedException
{
String output = "{\"city\":{";
Iterator iterator = cityCount.entrySet().iterator();
while(iterator.hasNext())
{
Map.Entry entry = (Map.Entry) iterator.next();
output = output.concat("\"" + entry.getKey() + "\":" + Integer.toString((Integer) entry.getValue()) + ", ");
}
output = output.substring(0, output.length() - 2);
output = output.concat("}}");
context.write(output, "");
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "KeyValue");
job.setJarByClass(Cities.class);
job.setMapperClass(CityMapper.class);
job.setReducerClass(CityReducer.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
FileInputFormat.addInputPath(job, new Path("/in/in.txt"));
FileOutputFormat.setOutputPath(job, new Path("/out/"));
System.exit(job.waitForCompletion(true) ? 0:1);
}
}
Mapper:
It just outputs a count for each key it encounters. For example, if it encounters the record "city|new york", it will output the (key, value) pair ("city|new york", 1).
Reducer:
For each record, it checks whether the key starts with "city|". It splits the key on the pipe ("|") and accumulates the count for each city in a HashMap.
The reducer also overrides the cleanup() method, which is called once the reduce task is over. In that method, the contents of the HashMap are composed into the desired output.
In cleanup(), the composed HashMap contents are written as the output key and an empty string as the output value.
For example, I took the following data as input:
city|new york
city|London
city|new york
city|new york
city|Paris
city|Paris
I got the following output:
{"city":{"London":1, "new york":3, "Paris":2}}
It's simple.
Emit from the mapper using "city" as the output key and the whole record as the output value.
You will then get all city records as a single group in one reducer call, and all travel records as another group.
Count the city and the travel instances using a hash map to drill down to the lower levels.
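Here is a rough sketch of that idea, under the assumption that every input line has the form prefix|value (as in the sample data) and using the same new-API Mapper/Reducer base classes as the Cities example above; the class names are placeholders, not part of the original answer:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class PrefixCount {
    public static class PrefixMapper extends Mapper<LongWritable, Text, Text, Text> {
        private final Text outKey = new Text();
        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // "city|new york" -> key "city", value "city|new york"
            outKey.set(value.toString().split("\\|")[0]);
            context.write(outKey, value);
        }
    }

    public static class PrefixReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // All records sharing a prefix arrive as one group; tally the suffixes.
            Map<String, Integer> counts = new HashMap<>();
            for (Text val : values) {
                counts.merge(val.toString().split("\\|")[1], 1, Integer::sum);
            }
            // e.g. city -> {new york=2, London=1}; format as JSON if needed.
            context.write(key, new Text(counts.toString()));
        }
    }
}

The driver for this variant would set both map and reduce output key/value classes to Text. Since all records for a prefix go to a single reduce() call, no cleanup() trick is needed, though a very large group then streams through one reducer call.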
I'm very new to Selenium and I've been trying to make the test suite gather data from a table. I don't have the slightest clue on how to do this.
Here's the table I am working with:
http://i.imgur.com/vdITVug.jpg
New appointments (dates) are added at random times of the day. I've created a test suite that constantly refreshes this page. The next step would be to save all the dates in the table, then loop to check whether the dates after a refresh differ from the originally stored dates.
If they are different, notify the user.
Here's a theoretical example of what I'm trying to accomplish.
//Navigate to the appointment page
//Store all the current dates from the table
for (until a new appointment pops up)
{
//Refresh the page
// Compare the dates to the stored dates
if (the dates != stored dates)
{
notify the user(me in this case)
}
}
I'm also trying to figure out how I can find the element ID of the table.
Here's a screenshot with some of the html code: http://i.imgur.com/GD4yOp9.png
The statement that is highlighted has the first date stored.
Any advice would be appreciated, thanks!
I tried replicating a similar HTML structure (in fact two of them, one for after the refresh). Here is a quick solution for comparing the HTML tables after a refresh.
The key here is organizing your table data into a Map<String, List<String>>-like data structure.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class CheckTables {
public WebDriver driver;
public static void main(String[] args) throws Exception {
CheckTables objTest = new CheckTables();
objTest.runTest();
}
public void runTest(){
driver = new FirefoxDriver();
driver.navigate().to("file:///D:/00_FX_WorkSpace/X_Hour/RoadTest_1.html");
Map<String, List<String>> objTable_1 = readTable();
System.out.println("TABLE:1" + objTable_1);
//event to refresh the table
driver.navigate().to("file:///D:/00_FX_WorkSpace/X_Hour/RoadTest_2.html");
Map<String, List<String>> objTable_2 = readTable();
System.out.println("TABLE:2" + objTable_2);
compareTables(objTable_1, objTable_2);
}
public Map<String, List<String>> readTable(){
Map<String, List<String>> objTable = new HashMap<>();
List<WebElement> objRows = driver.findElements(By.cssSelector("tr#data"));
for(int iCount=0; iCount<objRows.size(); iCount++){
List<WebElement> objCol = objRows.get(iCount).findElements(By.cssSelector("td.tableTxt"));
List<String> columns = new ArrayList<>();
for(int col=0; col<objCol.size(); col++){
columns.add(objCol.get(col).getText());
}
objTable.put(String.valueOf(iCount), columns);
}
return objTable;
}
public void compareTables(Map<String, List<String>> objTable1, Map<String, List<String>> objTable2){
for(int count=0; count<objTable1.size(); count++){
List<String> objList1 = objTable1.get(String.valueOf(count));
System.out.println(objList1);
List<String> objList2 = objTable2.get(String.valueOf(count));
System.out.println(objList2);
if(objList1.containsAll(objList2)){
System.out.println("Row [" + count + "] is SAME");
}
else{
//notify
System.out.println("Row [" + count + "] has CHANGED");
}
}
}
}
Here are the HTML snippets for RoadTest_1.html and RoadTest_2.html --
https://gist.github.com/anonymous/43c3b1f44817c69bd03d/
I have a serious problem getting any reasoner up and running.
Even the examples from the documentation (https://jena.apache.org/documentation/inference/)
do not work here.
I transferred the example into a unit test so that the problem is easier to reproduce.
Is reasoning limited to a certain environment, like a special JDK, or am I getting something wrong?
Thanks
Here the example code (as java unit test):
import static org.junit.Assert.assertNotNull;
import java.io.PrintWriter;
import java.util.Iterator;
import org.junit.Before;
import org.junit.Test;
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.rdf.model.Statement;
import com.hp.hpl.jena.rdf.model.StmtIterator;
import com.hp.hpl.jena.reasoner.Derivation;
import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
import com.hp.hpl.jena.reasoner.rulesys.Rule;
import com.hp.hpl.jena.vocabulary.RDFS;
public class ReasonerTest {
String NS = "urn:x-hp-jena:eg/";
// Build a trivial example data set
Model model = ModelFactory.createDefaultModel();
InfModel inf;
Resource A = model.createResource(NS + "A");
Resource B = model.createResource(NS + "B");
Resource C = model.createResource(NS + "C");
Resource D = model.createResource(NS + "D");
Property p = model.createProperty(NS, "p");
Property q = model.createProperty(NS, "q");
@Before
public void init() {
// Some small examples (subProperty)
model.add(p, RDFS.subPropertyOf, q);
model.createResource(NS + "A").addProperty(p, "foo");
String rules = "[rule1: (?a eg:p ?b) (?b eg:p ?c) -> (?a eg:p ?c)]";
GenericRuleReasoner reasoner = new GenericRuleReasoner(Rule.parseRules(rules));
reasoner.setDerivationLogging(true);
inf = ModelFactory.createInfModel(reasoner, model);
// Derivations
A.addProperty(p, B);
B.addProperty(p, C);
C.addProperty(p, D);
}
@Test
public void subProperty() {
Statement statement = A.getProperty(q);
System.out.println("Statement: " + statement);
assertNotNull(statement);
}
@Test
public void derivations() {
String trace = null;
PrintWriter out = new PrintWriter(System.out);
for (StmtIterator i = inf.listStatements(A, p, D); i.hasNext(); ) {
Statement s = i.nextStatement();
System.out.println("Statement is " + s);
for (Iterator id = inf.getDerivation(s); id.hasNext(); ) {
Derivation deriv = (Derivation) id.next();
deriv.printTrace(out, true);
trace += deriv.toString();
}
}
out.flush();
assertNotNull(trace);
}
@Test
public void listStatements() {
StmtIterator stmtIterator = inf.listStatements();
while(stmtIterator.hasNext()) {
System.out.println(stmtIterator.nextStatement());
}
}
}
The prefix eg: isn't what you think it is:
The eg: prefix in the rules doesn't expand to what you think it does. I modified your rules string to
String rules = "[rule1: (?a eg:p ?b) (?b eg:p ?c) -> (?a eg:p ?c)] [rule2: -> (<urn:ex:a> eg:foo <urn:ex:b>)]";
so that rule2 will always insert the triple urn:ex:a eg:foo urn:ex:b into the graph. Then, the output from your tests includes:
[urn:ex:a, urn:x-hp:eg/foo, urn:ex:b]
[urn:x-hp-jena:eg/C, urn:x-hp-jena:eg/p, urn:x-hp-jena:eg/D]
The first line shows the triple that my rule2 inserted, whereas the second uses the prefix you entered by hand. We see that the eg: prefix is short for urn:x-hp:eg/. If you change your NS string accordingly, with String NS = "urn:x-hp:eg/";, then your derivations test will pass.
You need to ask the right model
The subProperty test fails for two reasons. First, it's checking in the wrong model.
You're checking with A.getProperty(q):
Statement statement = A.getProperty(q);
System.out.println("Statement: " + statement);
assertNotNull(statement);
A is a resource that you created in the model model, not in the model inf, so when you call A.getProperty(q), it's actually asking model for the statement, and you won't see the inferences in inf. You can use inModel to get A "in inf" so that getProperty looks in the right model:
Statement statement = A.inModel(inf).getProperty(q);
Alternatively, you could also ask inf directly whether it contains a triple of the form A q <something>:
inf.contains( A, q, (RDFNode) null );
Or you could enumerate all such statements:
StmtIterator stmts = inf.listStatements( A, q, (RDFNode) null );
assertTrue( stmts.hasNext() );
while ( stmts.hasNext() ) {
System.out.println( "Statement: "+stmts.next() );
}
You need RDFS reasoning too
Even if you're querying the right model, your inference model still needs to do RDFS reasoning as well as apply your custom rule that makes the property p transitive. To do that, we can pull the rules out of an RDFS reasoner, add your rule to a copy of that list, and then create a custom reasoner with the new list of rules:
// Get an RDFS reasoner
GenericRuleReasoner rdfsReasoner = (GenericRuleReasoner) ReasonerRegistry.getRDFSReasoner();
// Steal its rules, and add one of our own, and create a
// reasoner with these rules
List<Rule> customRules = new ArrayList<>( rdfsReasoner.getRules() );
String customRule = "[rule1: (?a eg:p ?b) (?b eg:p ?c) -> (?a eg:p ?c)]";
customRules.add( Rule.parseRule( customRule ));
Reasoner reasoner = new GenericRuleReasoner( customRules );
The complete result
Here's the modified code, all together for easy copying and pasting. All the tests pass.
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.junit.Before;
import org.junit.Test;
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.RDFNode;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.rdf.model.Statement;
import com.hp.hpl.jena.rdf.model.StmtIterator;
import com.hp.hpl.jena.reasoner.Derivation;
import com.hp.hpl.jena.reasoner.Reasoner;
import com.hp.hpl.jena.reasoner.ReasonerRegistry;
import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
import com.hp.hpl.jena.reasoner.rulesys.Rule;
import com.hp.hpl.jena.vocabulary.RDFS;
public class ReasonerTest {
String NS = "urn:x-hp:eg/";
// Build a trivial example data set
Model model = ModelFactory.createDefaultModel();
InfModel inf;
Resource A = model.createResource(NS + "A");
Resource B = model.createResource(NS + "B");
Resource C = model.createResource(NS + "C");
Resource D = model.createResource(NS + "D");
Property p = model.createProperty(NS, "p");
Property q = model.createProperty(NS, "q");
@Before
public void init() {
// Some small examples (subProperty)
model.add(p, RDFS.subPropertyOf, q);
A.addProperty(p, "foo" );
// Get an RDFS reasoner
GenericRuleReasoner rdfsReasoner = (GenericRuleReasoner) ReasonerRegistry.getRDFSReasoner();
// Steal its rules, and add one of our own, and create a
// reasoner with these rules
List<Rule> customRules = new ArrayList<>( rdfsReasoner.getRules() );
String customRule = "[rule1: (?a eg:p ?b) (?b eg:p ?c) -> (?a eg:p ?c)]";
customRules.add( Rule.parseRule( customRule ));
Reasoner reasoner = new GenericRuleReasoner( customRules );
reasoner.setDerivationLogging(true);
inf = ModelFactory.createInfModel(reasoner, model);
// Derivations
A.addProperty(p, B);
B.addProperty(p, C);
C.addProperty(p, D);
}
@Test
public void subProperty() {
StmtIterator stmts = inf.listStatements( A, q, (RDFNode) null );
assertTrue( stmts.hasNext() );
while ( stmts.hasNext() ) {
System.out.println( "Statement: "+stmts.next() );
}
}
@Test
public void derivations() {
String trace = null;
PrintWriter out = new PrintWriter(System.out);
for (StmtIterator i = inf.listStatements(A, p, D); i.hasNext(); ) {
Statement s = i.nextStatement();
System.out.println("Statement is " + s);
for (Iterator<Derivation> id = inf.getDerivation(s); id.hasNext(); ) {
Derivation deriv = (Derivation) id.next();
deriv.printTrace(out, true);
trace += deriv.toString();
}
}
out.flush();
assertNotNull(trace);
}
@Test
public void listStatements() {
StmtIterator stmtIterator = inf.listStatements();
while(stmtIterator.hasNext()) {
System.out.println(stmtIterator.nextStatement());
}
}
}
This question is a bit out of the box, but I need it.
In a List (collection), we can retrieve the nth element with list.get(i);
similarly, is there any method in the HBase Java API with which I can get the nth qualifier, given the row id and column family name?
NOTE: I have a million qualifiers in a single row in a single column family.
Sorry for being unresponsive; I was busy with something important. Try this for now:
package org.myorg.hbasedemo;
import java.io.IOException;
import java.util.Scanner;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
public class GetNthColumn {
public static void main(String[] args) throws IOException {
Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "TEST");
Get g = new Get(Bytes.toBytes("4"));
Result r = table.get(g);
System.out.println("Enter column index :");
Scanner reader = new Scanner(System.in);
int index = reader.nextInt();
System.out.println("index : " + index);
int count = 0;
for (KeyValue kv : r.raw()) {
if(++count!=index)
continue;
System.out.println("Qualifier : "
+ Bytes.toString(kv.getQualifier()));
System.out.println("Value : " + Bytes.toString(kv.getValue()));
}
table.close();
System.out.println("Done.");
}
}
Will let you know if I get a better way to do this.
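If iterating over a million KeyValues per lookup turns out to be too heavy, one option worth trying (an assumption on my part, not something tested against your table) is HBase's ColumnPaginationFilter, which makes the server return a slice of columns by offset, so only the requested qualifier comes back over the wire. A sketch along the lines of the code above:

package org.myorg.hbasedemo;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.filter.ColumnPaginationFilter;
import org.apache.hadoop.hbase.util.Bytes;
public class GetNthColumnPaged {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "TEST");
        int index = 5; // zero-based position of the qualifier you want
        Get g = new Get(Bytes.toBytes("4"));
        // Ask the server for exactly one column, starting at offset 'index'.
        g.setFilter(new ColumnPaginationFilter(1, index));
        Result r = table.get(g);
        for (KeyValue kv : r.raw()) {
            System.out.println("Qualifier : " + Bytes.toString(kv.getQualifier()));
            System.out.println("Value : " + Bytes.toString(kv.getValue()));
        }
        table.close();
    }
}

ColumnPaginationFilter(limit, offset) takes the number of columns to return and the zero-based offset to start from, so (1, index) asks for just the nth qualifier instead of the whole row.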