Java code in Hadoop

I am running a map-only job in Hadoop. The data set is a set of HTML pages in a single file (returned by a crawler).
The mapper code is written in Java, and I am using JSoup to parse. What I want as my output is a key that holds both the contents of the title tag and the content of a meta tag. Ideally I should get 1592 map output records; I am getting 3184.
The concatenation I attempt with this line of code is not happening:
String MN_Job = (jobT + "\t" + jobsDetail);
What I get instead is each of these separately, hence double the number of outputs. What am I doing wrong here?
public class JobsDataMapper extends Mapper<LongWritable, Text, Text, Text> {
    private Text keytext = new Text();
    private Text valuetext = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        Document doc = Jsoup.parse(line);
        Elements desc = doc.select("head title, meta[name=twitter:description]");
        for (Element jobhtml : desc) {
            Elements title = jobhtml.select("title");
            String jobT = "";
            for (Element titlehtml : title) {
                jobT = titlehtml.text();
            }
            Elements meta = jobhtml.select("meta[name=twitter:description]");
            String jobsDetail = "";
            for (Element metahtml : meta) {
                String content = metahtml.attr("content");
                String content1 = content.replaceAll("\\p{Punct}+", " ");
                jobsDetail = content1.replaceAll(" (?i)a | (?i)able | (?i)about | (?i)across | (?i)after | (?i)all | (?i)almost | (?i)also | (?i)am | (?i)among | (?i)an | (?i)and | (?i)any | (?i)are | (?i)as | (?i)at | (?i)be | (?i)because | (?i)been | (?i)but | (?i)by | (?i)can | (?i)cannot | (?i)could | (?i)dear | (?i)did | (?i)do | (?i)does | (?i)either | (?i)else | (?i)ever | (?i)every | (?i)for | (?i)from | (?i)get | (?i)got | (?i)had | (?i)has | (?i)have | (?i)he | (?i)her | (?i)hers | (?i)him | (?i)his | (?i)how | (?i)however | (?i)i | (?i)if | (?i)in | (?i)into | (?i)is | (?i)it | (?i)its | (?i)just | (?i)least | (?i)let | (?i)like | (?i)likely | (?i)may | (?i)me | (?i)might | (?i)most | (?i)must | (?i)my | (?i)neither | (?i)no | (?i)nor | (?i)not | (?i)nbsp | (?i)of | (?i)off | (?i)often | (?i)on | (?i)only | (?i)or | (?i)other | (?i)our | (?i)own | (?i)rather | (?i)said | (?i)say | (?i)says | (?i)she | (?i)should | (?i)since | (?i)so | (?i)some | (?i)than | (?i)that | (?i)the | (?i)their | (?i)them | (?i)then | (?i)there | (?i)these | (?i)they | (?i)this | (?i)tis | (?i)to | (?i)too | (?i)twas | (?i)us | (?i)wants | (?i)was | (?i)we | (?i)were | (?i)what | (?i)when | (?i)where | (?i)which | (?i)while | (?i)who | (?i)whom | (?i)why | (?i)will | (?i)with | (?i)would | (?i)yet | (?i)you | (?i)your "," ");
            }
            String IT_Job = (jobT + "\t" + jobsDetail);
            keytext.set(IT_Job);
            valuetext.set("JobDetail");
            context.write(keytext, valuetext);
        }
    }
}

Edit: I know what the problem is, but the solution might not be obvious in MapReduce: you might have to write a custom RecordReader. Let me explain the problem.
In your code you read line by line. Then you apply this to the line you read:
Elements desc = doc.select("head title, meta[name=twitter:description]");
But each element in that selection is either a title or a <meta name=twitter:description> tag, never both. So you read one of those and store it; the other one remains blank. At any given time, only one of your two variables, jobT and jobsDetail, has any data. So for the code snippet:
String IT_Job = (jobT + "\t" + jobsDetail);
one time the first part is blank, and the next time the other part is blank. So if you are expecting n records, you get 2n records. Similarly, if you attempt to extract three fields, you should get 3n records. You can test this theory by extracting another field and checking whether you get three times the expected number of records.
If the theory turns out to be correct, you might want to delimit the webpages you extract with a specific delimiter string. Then you want to write a custom RecordReader which will read one html file at a time according to the delimiter and then process the entire html file at once. That way you'll get the title and the meta tags together.
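If you control how the crawler writes the file, a lighter-weight option is to change the input format's record delimiter rather than writing a RecordReader from scratch. A minimal driver sketch, assuming a hypothetical <!--PAGE--> marker is written between pages (textinputformat.record.delimiter is the standard property read by TextInputFormat's line reader in recent Hadoop versions):
Configuration conf = new Configuration();
// One record per crawled page instead of one per line; "<!--PAGE-->" is
// an assumed delimiter that the crawler writes between pages.
conf.set("textinputformat.record.delimiter", "<!--PAGE-->");
Job job = Job.getInstance(conf, "jobs-data");
job.setJarByClass(JobsDataMapper.class);
job.setMapperClass(JobsDataMapper.class);
job.setNumReduceTasks(0); // map-only job
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);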

Just looking at the numbers: 3184 / 2 = 1592.
I think your file is simply duplicated in the input folder. I can't tell for sure, because you have not shown the code that submits the job, but maybe you can verify it with a simple:
bin/hadoop fs -ls /your/input_path
When submitting, either make sure that there is just the single file in there, or just reference the single file in your submission logic.
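For the second option, a one-line sketch (the path and file name are placeholders, assuming the usual mapreduce driver setup):
// Reference the single file directly instead of the whole folder.
FileInputFormat.addInputPath(job, new Path("/your/input_path/crawl_output.html"));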

I made changes to the original code, removing the loops that were not necessary. What was happening in the older code was that when there is a title in the record, it is output, and later, when there is a content, it is output as well. So there are two writes per HTML file.
public class JobsDataMapper extends Mapper<LongWritable, Text, Text, Text> {
    private Text keytext = new Text();
    private Text valuetext = new Text();
    private String jobT = new String();
    private String jobName = new String();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        Document doc = Jsoup.parse(line);
        Elements desc = doc.select("head title, meta[name=twitter:description]");
        for (Element jobhtml : desc) {
            Elements title = jobhtml.select("title");
            String jobTT = title.text();
            jobT = jobTT;
            if (jobT.length() > 0) {
                jobName = jobTT;
            }
            Elements meta = jobhtml.select("meta[name=twitter:description]");
            String jobsDetail = "";
            String content = meta.attr("content");
            String content1 = content.replaceAll("\\p{Punct}+", " ");
            jobsDetail = content1.toLowerCase();
            // Note: the replace must operate on the lowercased string,
            // otherwise the toLowerCase() call above is discarded.
            jobsDetail = jobsDetail.replaceAll(" a| able | about | across | after | all | almost | also | am | among | an | and | any | are | as | at | be| because | been | but | by | can | cannot | could | dear | did | do | does | either | else | ever | every | for | from | get | got | had | has | have | he | her | hers | him | his | how | however | i | if | in | into | is | it | its | just | least | let | like | likely | may | me | might | most | must | my | neither | no | nor | not | nbsp | of | off | often | on | only | or | other | our | own | rather | said | say | says | she | should | since | so | some | than | that | the | their | them | then | there | these | they | this | tis | to | too | twas | us | wants | was | we | were | what | when | where | which | while | who | whom | why | will | with | would | yet | you | your "," ");
            if (jobsDetail.length() > 0) {
                String MN_Job = (jobName + "\t" + jobsDetail);
                keytext.set(MN_Job);
                valuetext.set("JobInIT");
                context.write(keytext, valuetext);
            }
        }
    }
}
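As a shorter variation (my own sketch, not part of the fix above): since every element of desc is either the title or the meta tag, you can drop the combined selection and query the parsed document directly, writing exactly one record per page:
String jobT = doc.select("head title").text();
String content = doc.select("meta[name=twitter:description]").attr("content");
if (!jobT.isEmpty() && !content.isEmpty()) {
    // One write per parsed page, with both fields populated.
    keytext.set(jobT + "\t" + content.replaceAll("\\p{Punct}+", " "));
    valuetext.set("JobDetail");
    context.write(keytext, valuetext);
}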

Related

Deleting an element from the database observing the id order in Room [Java Android]

When I delete an element from the database, my IDs stop going in order. Example:
Before deleting:
| ID | ELEMENT |
| 1  | text1   |
| 2  | text2   |
| 3  | text3   |
After deleting the 2nd element:
| ID | ELEMENT |
| 1  | text1   |
| 3  | text3   |
But I need the IDs to stay in order. Example:
| ID | ELEMENT |
| 1  | text1   |
| 2  | text3   |
I have this deleting method in Dao:
#Query("DELETE FROM database WHERE id = :itemId")
void deleteByItemId(long itemId);
What is the most effective way to solve this problem?
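A minimal sketch of one option, assuming the table really is named database and id is a plain INTEGER PRIMARY KEY (both taken from the question; the helper names below are made up): after the delete, shift every higher id down by one inside a @Transaction. Note that bulk-updating a primary key can raise UNIQUE conflicts, which is why many apps instead keep ids stable and compute a display position at query time.
@Dao
public abstract class ItemDao {
    @Query("DELETE FROM database WHERE id = :itemId")
    public abstract void deleteByItemId(long itemId);

    // Hypothetical helper: close the gap left by the deleted row.
    @Query("UPDATE database SET id = id - 1 WHERE id > :itemId")
    public abstract void closeGapAfter(long itemId);

    @Transaction
    public void deleteAndRenumber(long itemId) {
        deleteByItemId(itemId);
        closeGapAfter(itemId);
    }
}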

ANTLR4 How would I extract python expression variables

Using the following ANTLR grammar: https://github.com/bkiers/python3-parser/blob/master/src/main/antlr4/nl/bigo/pythonparser/Python3.g4 I want to extract the variables from a given expression, let's say:
x.split(y, 3)
or
x + y
I want to get the variables x and y. How would I achieve this?
I tried the following approach, but it seems cumbersome since I must list all built-in Python functions:
Define a Listener interface
const listener = new MyPythonListener()
antlr.tree.ParseTreeWalker.DEFAULT.walk(listener, abstractTree)
Use regex + pattern matching:
const symbolicNames = ['TRUE', 'FALSE', 'NUMEBRS', 'STRING', 'LIST', 'TUPLE', 'DICTIONARY', 'INT', 'LONG', 'FLOAT', 'COMPLEX',
    'BOOL', 'STR', 'INT', 'RANGE', 'NONE', 'LEN']

class MyPythonListener extends Python3Listener {
    variables = []

    enterExpr(ctx) {
        const text = this.getElementText(ctx)
        if (text && this.verifyIsVariable(text)) {
            this.variables.push(text)
        }
    }

    verifyIsVariable(leafText) {
        return !leafText.includes('"') && !leafText.includes('\'') && isNaN(leafText) &&
            !symbolicNames.includes(leafText.toUpperCase()) && leafText.match(/^[0-9a-zA-Z_]+$/)
    }
}
I didn't look too closely at it, but after inspecting the parse tree for the Python code:
def some_method_name(some_param_name):
x.split(y, 3)
it appears that the variable names are children of the atom rule:
atom
: '(' ( yield_expr | testlist_comp )? ')'
| '[' testlist_comp? ']'
| '{' dictorsetmaker? '}'
| NAME
| number
| str+
| '...'
| NONE
| TRUE
| FALSE
;
where NAME is a variable name.
So you could do something like this:
String source = "def some_method_name(some_param_name):\n x.split(y, 3)\n";
Python3Lexer lexer = new Python3Lexer(CharStreams.fromString(source));
Python3Parser parser = new Python3Parser(new CommonTokenStream(lexer));
ParseTreeWalker.DEFAULT.walk(new Python3BaseListener() {
#Override
public void enterAtom(Python3Parser.AtomContext ctx) {
if (ctx.NAME() != null) {
System.out.println(ctx.NAME().getText());
}
}
}, parser.file_input());
which will print:
x
y
and not the method and parameter names (those NAME tokens are matched inside the funcdef and tfpdef rules, not inside atom).
Again: not thoroughly tested, I leave that for you. You can pretty print the parse tree like this:
String source = "def some_method_name(some_param_name):\n x.split(y, 3)\n";
Python3Lexer lexer = new Python3Lexer(CharStreams.fromString(source));
Python3Parser parser = new Python3Parser(new CommonTokenStream(lexer));
System.out.println(new Builder.Tree(source).toStringASCII());
to inspect for yourself where the nodes you're interested in occur in the parse tree. The code above will print:
'- file_input
|- stmt
| '- compound_stmt
| '- funcdef
| |- def
| |- some_method_name
| |- parameters
| | |- (
| | |- typedargslist
| | | '- tfpdef
| | | '- some_param
| | '- )
| |- :
| '- suite
| |- <NEWLINE>
| |- <INDENT>
| |- stmt
| | '- simple_stmt
| | |- small_stmt
| | | '- expr_stmt
| | | '- testlist_star_expr
| | | '- test
| | | '- or_test
| | | '- and_test
| | | '- not_test
| | | '- comparison
| | | '- star_expr
| | | '- expr
| | | '- xor_expr
| | | '- and_expr
| | | '- shift_expr
| | | '- arith_expr
| | | '- term
| | | '- factor
| | | '- power
| | | |- atom
| | | | '- x
| | | |- trailer
| | | | |- .
| | | | '- split
| | | '- trailer
| | | |- (
| | | |- arglist
| | | | |- argument
| | | | | '- test
| | | | | '- or_test
| | | | | '- and_test
| | | | | '- not_test
| | | | | '- comparison
| | | | | '- star_expr
| | | | | '- expr
| | | | | '- xor_expr
| | | | | '- and_expr
| | | | | '- shift_expr
| | | | | '- arith_expr
| | | | | '- term
| | | | | '- factor
| | | | | '- power
| | | | | '- atom
| | | | | '- y
| | | | |- ,
| | | | '- argument
| | | | '- test
| | | | '- or_test
| | | | '- and_test
| | | | '- not_test
| | | | '- comparison
| | | | '- star_expr
| | | | '- expr
| | | | '- xor_expr
| | | | '- and_expr
| | | | '- shift_expr
| | | | '- arith_expr
| | | | '- term
| | | | '- factor
| | | | '- power
| | | | '- atom
| | | | '- number
| | | | '- integer
| | | | '- 3
| | | '- )
| | '- <NEWLINE>
| '- <DEDENT>
'- <EOF>
Note that the Builder.Tree class is not part of the ANTLR library; it resides in the repo you linked to in your question (which is mine): https://github.com/bkiers/python3-parser/blob/master/src/main/java/nl/bigo/pythonparser/Builder.java

Text Block and Println Formatting, Java: How to Remove Extra Lines

This is for a hangman game, and the logic works great: the word fills in correctly, and the hangman gets more and more hanged with each word. However, the "graphics" are a little difficult for the user, as you can see below in the output:
3
[ .----------------.
| .--------------. |
| | _______ | |
| | |_ __ \ | |
| | | |__) | | |
| | | __ / | |
| | _| | \ \_ | |
| | |____| |___| | |
| | | |
| '--------------' |
'----------------' , .----------------.
| .--------------. |
| | ____ ____ | |
| | |_ || _| | |
| | | |__| | | |
| | | __ | | |
| | _| | | |_ | |
| | |____||____| | |
| | | |
| '--------------' |
'----------------' , .----------------.s
| .--------------. |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| '--------------' |
'----------------'\, .----------------.s
| .--------------. |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| '--------------' |
'----------------'\, .----------------.
| .--------------. |
| | ____ | |
| | .' `. | |
| | / .--. \ | |
| | | | | | | |
| | \ `--' / | |
| | `.____.' | |
| | | |
| '--------------' |
'----------------' , .----------------.s
| .--------------. |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| '--------------' |
'----------------'\, .----------------.s
| .--------------. |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| '--------------' |
'----------------'\, .----------------.
| .--------------. |
| | _______ | |
| | |_ __ \ | |
| | | |__) | | |
| | | __ / | |
| | _| | \ \_ | |
| | |____| |___| | |
| | | |
| '--------------' |
'----------------' , .----------------.
| .--------------. |
| | ____ | |
| | .' `. | |
| | / .--. \ | |
| | | | | | | |
| | \ `--' / | |
| | `.____.' | |
| | | |
| '--------------' |
'----------------' , .----------------.s
| .--------------. |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| '--------------' |
'----------------'\]
How can it be made flat?
The three boxes below show the relevant code for how it was created.
if (word.contains(guess)) {
    for (int location = 0; location < word.length(); location++) {
        String letter = String.valueOf(word.charAt(location));
        if (letter.equalsIgnoreCase(guess)) {
            progressOnWord.set(location, guess);
        }
    }
    numberCorrect++;
    p(numberCorrect);
}
The above code fills in an ArrayList, progressOnWord, when there is a correct guess. progressOnWord starts out as "0000000000", with as many zeros as there are characters in the word up for guessing. It fills in with the right letters as they are guessed, for example:
"rh00o00ro0" at the current stage shown above. This is then converted to ASCII: if the String is 0, an ASCII block blank is added to ASCIIform, the ASCII form of progressOnWord. For example, if progressOnWord is rh00o00ro0, then the ASCII block will be what you see above. The blank-setting is indiscriminate: it does not care what location the blank is in, it just sets all 0s to blanks.
for (int j = 0; j < progressOnWord.size(); j++) {
    String wordPlace = progressOnWord.get(j);
    if (wordPlace.equals("0")) {
        ASCIIform.set(j, ASCIIblank);
    }
This statement is discriminating: it makes sure the letter is in the right spot.
    for (int k = 0; k < lowercase.length; k++) {
        String letter = String.valueOf(lowercase[k]);
        String ASCIIletter = ASCIIalphabet[k];
        if (wordPlace.equalsIgnoreCase(letter)) {
            ASCIIform.set(j, ASCIIletter);
        }
    }
}
Note, the two blocks of code are continuous, but split here for commentary. So back to the question: why are the blocks each on a new line, and how can it be fixed? It's getting the blocks from an array of Strings returned by getASCIIalphabet() (condensed below):
public static String[] getASCIIalphabet() {
String[] ASCIIalphabet = {
"""
.----------------.\s
| .--------------. |
| | __ | |
| | / \\ | |
| | / /\\ \\ | |
| | / ____ \\ | |
| | _/ / \\ \\_ | |
| ||____| |____|| |
| | | |
| '--------------' |
'----------------'\s""",
"""
.----------------.\s
| .--------------. |
| | ______ | |
| | |_ _ \\ | |
| | | |_) | | |
| | | __'. | |
| | _| |__) | | |
| | |_______/ | |
| | | |
| '--------------' |
'----------------'\s"""
};
Could the formatting of the ArrayList be the problem? Like if you had
"0",
"1",
"2",
in an ArrayList, would it print the following?
0
1
2
I don't think so.
Is it due to the adding of the blocks? They are added one by one as they are guessed; is that causing the stacking?
Or is it some character in the blocks themselves? In the text blocks in the alphabet array, is it possibly the \s?
To provide more detail regarding Daniel's answer,
StringBuilder.append(char c) and ArrayList.set(int index, String element) are being used.
Here's the code:
for (int j = 0; j < progressOnWord.size(); j++) {
    String wordPlace = progressOnWord.get(j);
    if (wordPlace.equals("0")) {
        ASCIIform.set(j, ASCIIblank);
    }
Above, as 0 is the placeholder, when a 0 is encountered it is replaced with an ASCII block blank:
'----------------'
| .--------------. |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| '--------------' |
'----------------'\,
    for (int k = 0; k < lowercase.length; k++) {
        String letter = String.valueOf(lowercase[k]);
        String ASCIIletter = ASCIIalphabet[k];
        if (wordPlace.equalsIgnoreCase(letter)) {
            ASCIIform.set(j, ASCIIletter);
        }
    }
The code above iterates through an alphabet, within the loop that is iterating through each character in the word. For example, with "rhinoceros": if the progress is rh00o00ro0, it checks if r is 0. No. It then moves on to iterate through a, b, c...z until it finds r. When it finds r in the alphabet, it notes that r is in spot 0 in the progress, notes that r is the 18th letter in the alphabet, and proceeds to set spot 0 in the ASCII word to the 18th String in the ASCII block alphabet:
[ .----------------.
| .--------------. |
| | _______ | |
| | |_ __ \ | |
| | | |__) | | |
| | | __ / | |
| | _| | \ \_ | |
| | |____| |___| | |
| | | |
| '--------------' |
'----------------' ,
It is necessary to use the ArrayList.set(int index, String element) method because character location matters in this game.
In contrast, StringBuilder.append(char c) is used when creating a demo of the ASCII conversion: if the user entered 'rhinoceros', it prints the full word 'rhinoceros' in the ASCII block style you see above, to give the user a taste of the "graphics" style. Here's the code:
for (int i = 0; i < allCaps.length(); i++) {
    char c = allCaps.charAt(i);
    for (int j = 0; j < uppercase.length; j++) {
        char bigChar = uppercase[j];
        String ASCIIblock = getASCIIalphabet()[j];
        if (c == bigChar) ASCIIword.append(ASCIIblock);
    }
}
It's worth noting that StringBuilder.append() and ArrayList.set() produce the same result: stacking. Is there a way to append() or set() "sideways"? The append() and set() methods stack the letters by default, so it seems it must be something with the formatting of the text blocks in the array. Although horizontal would normally be the default, can this willy-nilly appending/setting be forced in the horizontal direction, to result in R, then H, then I, then N...?
I now understand your problem. You are appending whole blocks after drawing each character, which means you cannot go back to the same line to draw another character beside it. Instead, you can save every line of a character in a String array (String[]) in your original ArrayList, so it becomes an ArrayList<String[]>, and then print every first line of each character, then every second line, and so on. Take a look at this code I made to fix the problem:
final int numberOfLines = 11;
for (int i = 0; i < numberOfLines; i++) {
    for (int j = 0; j < asciiForm.size(); j++) {
        String[] word = asciiForm.get(j);
        System.out.print(word[i]);
    }
    System.out.println("\t");
}
Example of how a character would look in the ArrayList:
asciiForm.add(
new String[] {
" .----------------. ",
"| .--------------. |",
"| | _______ | |",
"| | |_ __ \\ | |",
"| | | |__) | | |",
"| | | __ / | |",
"| | _| | \\ \\_ | |",
"| | |____| |___| | |",
"| | | |",
"| '--------------' |",
" '----------------' "
}
);
The output I got (not reproduced here) shows the blocks printed side by side. Note: I did not add all characters to the ArrayList because it was not necessary for me to do so; I only added R and H.
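One more bridging sketch (my own assumption, not from the answer above): if you keep the alphabet as the text blocks shown earlier, you can produce the String[] form the row-by-row printer needs by splitting each block on newlines (assumes java.util.List and java.util.ArrayList are imported):
// Convert each text-block letter into an array of its lines.
List<String[]> asciiForm = new ArrayList<>();
for (String block : getASCIIalphabet()) {
    asciiForm.add(block.split("\n"));
}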

Spark - sample() function duplicating data?

I want to randomly select a subset of my data and then limit it to 200 entries. But after using the sample() function, I'm getting duplicate rows, and I don't know why. Let me show you:
DataFrame df = sqlContext.sql("SELECT * " +
        " FROM temptable" +
        " WHERE conditions");

DataFrame df1 = df.select(df.col("col1"))
        .where(df.col("col1").isNotNull())
        .distinct()
        .orderBy(df.col("col1"));

df1.show();
System.out.println(df1.count());
Up until now, everything is OK. I get the output:
+-----------+
|col1 |
+-----------+
| 10016|
| 10022|
| 100281|
| 10032|
| 100427|
| 100445|
| 10049|
| 10070|
| 10076|
| 10079|
| 10081|
| 10082|
| 100884|
| 10092|
| 10099|
| 10102|
| 10103|
| 101039|
| 101134|
| 101187|
+-----------+
only showing top 20 rows
10512
with 10512 records without duplicates. AND THEN!
df1 = df1.sample(true, 0.5).limit(200);
df1.show();
System.out.println(df1.count());
This returns 200 rows full of duplicates:
+-----------+
|col1 |
+-----------+
| 10022|
| 100445|
| 100445|
| 10049|
| 10079|
| 10079|
| 10081|
| 10081|
| 10082|
| 10092|
| 10102|
| 10102|
| 101039|
| 101134|
| 101134|
| 101134|
| 101345|
| 101345|
| 10140|
| 10141|
+-----------+
only showing top 20 rows
200
Can anyone tell me why? This is driving me crazy. Thank you!
You explicitly ask for a sample with replacement, so there is nothing unexpected about getting duplicates:
public Dataset<T> sample(boolean withReplacement, double fraction)
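If duplicates are unwanted, the minimal fix (a sketch, reusing the df1 frame from the question) is to pass false for withReplacement, so each row can be drawn at most once:
// Sample roughly half the distinct rows without replacement, then cap at 200.
DataFrame sampled = df1.sample(false, 0.5).limit(200);
sampled.show();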

Parsing SPARQL Result into jtable

I'm working on an Apache Jena project. I've got a Fuseki server running on my localhost.
I want to create a Java program for my Fuseki server that shows all the data in the triplestore in a JTable. I just have no idea how to parse the result from my query into a JTable.
My code so far (I left out the part where the window, table, frame, etc. is created):
private void Go() {
    String query = "SELECT ?subject ?predicate ?object \n" +
            "WHERE { \n" +
            "?subject ?predicate ?object }";
    Query sparqlQuery = QueryFactory.create(query, Syntax.syntaxARQ);
    QueryEngineHTTP httpQuery = new QueryEngineHTTP("http://localhost:3030/AnimalDataSet/", sparqlQuery);
    ResultSet results = httpQuery.execSelect();
    // Beware: ResultSetFormatter.asText() consumes the forward-only ResultSet,
    // so the loop below sees no rows unless the results are copied first
    // (e.g. with ResultSetFactory.copyResults).
    System.out.println(ResultSetFormatter.asText(results));
    while (results.hasNext()) {
        QuerySolution solution = results.next();
    }
    httpQuery.close();
}
The sysout prints this, which is the correct data:
-------------------------------------------------------------------------------------------------------------------------------------
| subject | predicate | object |
=====================================================================================================================================
| <urn:animals:data> | <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> | <http://www.w3.org/1999/02/22-rdf-syntax-ns#Seq> |
| <urn:animals:data> | <http://www.w3.org/1999/02/22-rdf-syntax-ns#_1> | <urn:animals:lion> |
| <urn:animals:data> | <http://www.w3.org/1999/02/22-rdf-syntax-ns#_2> | <urn:animals:tarantula> |
| <urn:animals:data> | <http://www.w3.org/1999/02/22-rdf-syntax-ns#_3> | <urn:animals:hippopotamus> |
| <urn:animals:lion> | <http://www.some-ficticious-zoo.com/rdf#name> | "Lion" |
| <urn:animals:lion> | <http://www.some-ficticious-zoo.com/rdf#species> | "Panthera leo" |
| <urn:animals:lion> | <http://www.some-ficticious-zoo.com/rdf#class> | "Mammal" |
| <urn:animals:tarantula> | <http://www.some-ficticious-zoo.com/rdf#name> | "Tarantula" |
| <urn:animals:tarantula> | <http://www.some-ficticious-zoo.com/rdf#species> | "Avicularia avicularia" |
| <urn:animals:tarantula> | <http://www.some-ficticious-zoo.com/rdf#class> | "Arachnid" |
| <urn:animals:hippopotamus> | <http://www.some-ficticious-zoo.com/rdf#name> | "Hippopotamus" |
| <urn:animals:hippopotamus> | <http://www.some-ficticious-zoo.com/rdf#species> | "Hippopotamus amphibius" |
| <urn:animals:hippopotamus> | <http://www.some-ficticious-zoo.com/rdf#class> | "Mammal" |
-------------------------------------------------------------------------------------------------------------------------------------
I really hope someone here knows how to parse the data from the query into a JTable :D
Thanks in advance!
I've done some further research and finally found the solution! It's quite easy, actually.
You simply change the while loop like this:
while (results.hasNext()) {
    QuerySolution sol = results.nextSolution();
    RDFNode object = sol.get("object");
    RDFNode predicate = sol.get("predicate");
    RDFNode subject = sol.get("subject");
    DefaultTableModel model = (DefaultTableModel) table.getModel();
    model.addRow(new Object[]{subject, predicate, object});
}
And that works fine for me!
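A slightly tidier variation (my own sketch, not from the answer): create the DefaultTableModel with its column headers once, fill it in the loop, and install it on the JTable at the end:
// Build the model with headers and zero initial rows.
DefaultTableModel model = new DefaultTableModel(
        new Object[]{"subject", "predicate", "object"}, 0);
while (results.hasNext()) {
    QuerySolution sol = results.nextSolution();
    model.addRow(new Object[]{sol.get("subject"), sol.get("predicate"), sol.get("object")});
}
table.setModel(model);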
For everyone who's interested, I've published my version as it is now to pastebin, with comments:
The link to the full (current) version of my project

Categories

Resources