htmlcleaner parse with tags - java

I'm trying to extract part of a page. I'm using the HtmlCleaner parser, and it removes all tags. Are there settings to keep all HTML tags? Or is there a better way to extract this part of the page using something else?
My code:
static final String XPATH_STATS = "//div[@class='text']/p";

// configure cleaner properties
HtmlCleaner htmlCleaner = new HtmlCleaner();
CleanerProperties props = htmlCleaner.getProperties();
props.setAllowHtmlInsideAttributes(false);
props.setAllowMultiWordAttributes(true);
props.setRecognizeUnicodeChars(true);
props.setOmitComments(true);
props.setTransSpecialEntitiesToNCR(true);

// create URL object
URL url = new URL(BLOG_URL);

// get HTML page root node
TagNode root = htmlCleaner.clean(url);

Object[] statsNode = root.evaluateXPath(XPATH_STATS);
String stats = "";
for (Object tag : statsNode) {
    stats = stats + tag.toString().trim();
}
return stats;
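For the record, HtmlCleaner itself can keep the tags if you serialize the matched nodes instead of calling toString() on them. A minimal sketch using org.htmlcleaner.SimpleHtmlSerializer, assuming the XPath matches TagNode instances:

// serialize matched nodes back to HTML rather than plain text
SimpleHtmlSerializer serializer = new SimpleHtmlSerializer(props);
for (Object tag : statsNode) {
    if (tag instanceof TagNode) {
        stats = stats + serializer.getAsString((TagNode) tag).trim();
    }
}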
Thanks to nikhil.thakkar!
I did this with Jsoup. The code may help someone:
URL url2 = new URL(BLOG_URL);
Document doc2 = Jsoup.parse(url2, 3000); // 3000 ms timeout
Element masthead = doc2.select("div.main_text").first();
String linkOuterH = masthead.outerHtml(); // keeps the HTML tags

You can use the jsoup parser.
More info here: http://jsoup.org/
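A minimal jsoup sketch that mirrors the XPath from the question (the div.text > p selector is an assumption about the page structure):

// select the target paragraphs and keep their markup
Document doc = Jsoup.connect(BLOG_URL).get();
String statsHtml = doc.select("div.text > p").outerHtml();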

Upload documents into Watson's Retrieve & Rank service

I'm implementing a solution using Watson's Retrieve & Rank service.
When I use the tooling interface, I upload my documents and they appear as a list, where I can click on any of them to open up all the titles inside the document (the answer units), as you can see in Picture 1 and Picture 2.
When I try to upload documents via Java, it won't recognize the documents; they get uploaded in parts (answer units as documents), each part as a new document.
How can I upload my documents as entire documents and not only as parts of them?
Here's the code for the upload functions in Java:
public Answers ConvertToUnits(File doc, String collection) throws ParseException, SolrServerException, IOException {
    DC.setUsernameAndPassword(USERNAME, PASSWORD);
    Answers response = DC.convertDocumentToAnswer(doc).execute();
    WatsonProcessing wp = new WatsonProcessing();
    // one Solr document per answer unit
    for (int i = 0; i < response.getAnswerUnits().size(); i++) {
        SolrInputDocument newdoc = new SolrInputDocument();
        String titulo = response.getAnswerUnits().get(i).getTitle();
        newdoc.addField("title", titulo);
        for (int j = 0; j < response.getAnswerUnits().get(i).getContent().size(); j++) {
            String texto = response.getAnswerUnits().get(i).getContent().get(j).getText();
            newdoc.addField("body", texto);
        }
        wp.IndexDocument(newdoc, collection);
    }
    wp.ComitChanges(collection);
    return response;
}

public void IndexDocument(SolrInputDocument newdoc, String collection) throws SolrServerException, IOException {
    UpdateResponse addResponse = solrClient.add(collection, newdoc);
}
You can specify config options in this line:
Answers response = DC.convertDocumentToAnswer(doc).execute();
I think something like this should do the trick:
String configAsString = "{ \"conversion_target\":\"answer_units\", \"answer_units\": { \"selector_tags\": [] } }";
JsonParser jsonParser = new JsonParser();
JsonObject customConfig = jsonParser.parse(configAsString).getAsJsonObject();
Answers response = DC.convertDocumentToAnswer(doc, null, customConfig).execute();
I've not tried it out, so I might not have got the syntax exactly right, but hopefully this will get you on the right track.
Essentially, what I'm trying to do here is use the selector_tags option in the config (see https://www.ibm.com/watson/developercloud/doc/document-conversion/customizing.shtml#htmlau for doc on this) to specify which tags the document should be split on. By specifying an empty list with no tags in, it results in it not being split at all - and coming out in a single answer unit as you want.
(Note that you can do this through the tooling interface, too - by unticking the "Split my documents up into individual answers for me" option when you upload the document)

Jsoup: null result in absUrl (abs:)

I tried to make an image-link downloader with jsoup. I had already made the HTML-downloading part, and when I got to the parsing part I noticed that sometimes the links to images appeared without the main part (the host). So I found the absUrl solution, but for some reason it did not work (it gave me null). Then I tried uri.resolve(), but it gave me an unchanged result. Now I do not know how to solve it. I have attached the part of my code that is responsible for parsing and writing the URLs to a string:
public static String finalcode(String textin) throws Exception {
    String text = source(textin);
    Document doc = Jsoup.parse(text);
    Elements images = doc.getElementsByTag("img");
    StringBuilder src = new StringBuilder();
    // collect the src attribute of every <img>, one per line
    for (Element image : images) {
        String href = image.attr("src");
        src.append(href);
        src.append("\n");
    }
    return src.toString();
}
It looks like you are parsing the HTML from a String, not from a URL. Because of that, jsoup can't know which URL this HTML came from, so it can't create absolute paths.
To set this URL for the Document, parse it with the Jsoup.parse(String html, String baseUri) overload, like
String url = "http://server/pages/document.html";
String text = "<img src='../images/image_name1.jpg'/><img src='../images/image_name2.jpg'/>";
Document doc = Jsoup.parse(text, url);
Elements images = doc.getElementsByTag("img");
for (Element image : images) {
    System.out.println(image.attr("src") + " -> " + image.attr("abs:src"));
}
Output:
../images/image_name1.jpg -> http://server/images/image_name1.jpg
../images/image_name2.jpg -> http://server/images/image_name2.jpg
Another option is to let Jsoup parse the page directly by supplying the URL instead of a String of HTML:
Document doc = Jsoup.connect("http://example.com").get();
This way the Document knows which URL it came from, so it will be able to create absolute paths.
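For completeness, the abs: attribute prefix and the absUrl(...) method are two ways of asking for the same resolved URL; a minimal, self-contained sketch:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class AbsUrlDemo {
    public static void main(String[] args) throws Exception {
        // fetching the page directly gives the Document its base URI
        Document doc = Jsoup.connect("http://example.com").get();
        for (Element img : doc.select("img[src]")) {
            // both lines print the src resolved against the base URI;
            // the result is an empty string if it cannot be resolved
            System.out.println(img.absUrl("src"));
            System.out.println(img.attr("abs:src"));
        }
    }
}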

Retrieving absolute URL from a webpage

I want to extract full links from an HTML file. By full links I mean absolute links. I used Tika for this purpose. Here is my code:
URL url = new URL("http://www.domainname.com/");
InputStream input = url.openStream();
LinkContentHandler linkHandler = new LinkContentHandler();
ContentHandler textHandler = new BodyContentHandler();
ToHTMLContentHandler toHTMLHandler = new ToHTMLContentHandler();
TeeContentHandler teeHandler = new TeeContentHandler(linkHandler, textHandler, toHTMLHandler);
Metadata metadata = new Metadata();
ParseContext parseContext = new ParseContext();
HtmlParser parser = new HtmlParser();
parser.parse(input, teeHandler, metadata, parseContext);
System.out.println("title:\n" + metadata.get("title"));
for (Link link : linkHandler.getLinks()) {
    System.out.println(link.getUri());
}
This gives me relative URLs like /index.html or documents/US/economicreport.html, but the absolute URL in this case is http://domainname.com/index.html.
How can I get all the links correctly, i.e. the full link including the domain name? How can I do that in Java?
If you have stored the base website URL in url, the following should work:
URL url = new URL("http://www.domainname.com/");
String givenUrl = link.getUri(); // the parsed (possibly relative) address
String absoluteUrl;
if (givenUrl.startsWith("/")) {
    absoluteUrl = url + givenUrl; // prepend the site root
} else {
    absoluteUrl = givenUrl;
}
Slightly better than the previous, but only slightly, is
URL targetDocumentUrl = new URL("http://www.domainname.com/content.html");
String parsedUrl = link.getUri();
String absoluteLink = new URL(targetDocumentUrl, parsedUrl).toString();
However, it is still not a good solution, as it runs into problems when the HTML document contains a <base href="/"> tag and the link being parsed is relative and starts with "../".
Of course you can get around this in a number of ways, but they involve a bit of work, such as implementing a ContentHandler. I have to think that for something so basic there must be a simpler way to do this with the Tika LinkContentHandler.
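One way around the "../" problem, sketched under the assumption that you determine the document's effective base yourself (the base href if present, otherwise the document URL), is java.net.URI.resolve(), which normalizes relative segments:

import java.net.URI;
import org.apache.tika.sax.Link;

// the base URI here is a hypothetical example
URI base = new URI("http://www.domainname.com/documents/US/economicreport.html");
for (Link link : linkHandler.getLinks()) {
    // resolve() handles absolute links, "/..." and "../..." uniformly
    URI absolute = base.resolve(link.getUri());
    System.out.println(absolute);
}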

Jsoup href request and to output on file

I made this sample to send a URL query from a Java application. The connection and the query are right, but I'm missing how to get all the href elements from the result and write them to one output file. Does anyone have any guidelines?
Thanks in advance
Document engineSearch = Jsoup.connect("http://ask.com/web?q=" + URLEncoder.encode(query, "UTF-8"))
        .userAgent("Mozilla/5.0 (X11; U; Linux x86_64; en-GB; rv:1.8.1.6) Gecko/20070723 Iceweasel/2.0.0.6 (Debian-2.0.0.6-0etch1)")
        .get();
String title = engineSearch.title();
Elements links = engineSearch.select("a[href]").first().getAllElements(); // this is the part I'm stuck on
String queryEncoding = engineSearch.outputSettings().charset().name();
file = new File(folder.getPath() + "\\" + date + " " + Tag + ".html");
OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream(file), queryEncoding);
writer.write(engineSearch.html());
writer.close();
Here is an example of exactly what you want; I don't have a dev environment handy, but something along these lines should work:
http://jsoup.org/cookbook/extracting-data/attributes-text-html
Document doc = Jsoup.parse(html);
Elements links = doc.select("a[href]");
for (Element link : links) {
    String text = link.text();           // "An example link"
    String linkHref = link.attr("href"); // "http://example.com/", which you can save to file
}
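A minimal, self-contained sketch tying the two steps together: fetch the result page, then write one absolute href per line to an output file (the query string and the links.txt file name are placeholders):

import java.io.PrintWriter;
import java.net.URLEncoder;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class HrefDump {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("http://ask.com/web?q=" + URLEncoder.encode("jsoup", "UTF-8"))
                .userAgent("Mozilla/5.0")
                .get();
        try (PrintWriter out = new PrintWriter("links.txt", "UTF-8")) {
            for (Element link : doc.select("a[href]")) {
                // abs:href resolves each href against the page URL
                out.println(link.attr("abs:href"));
            }
        }
    }
}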

HTML Parser fetch link text

I'm using HTML Parser to fetch links from a web page. I need to store the URL, the link text, and the URL of the parent page containing the link. I have managed to get the link URL as well as the parent URL.
I still need to get the link text, i.e. the text between the <a> tags.
Unfortunately I'm having a hard time figuring it out; any help would be greatly appreciated.
public static List<LinkContainer> findUrls(String resource) {
    String[] tagNames = {"A", "AREA"};
    List<LinkContainer> urls = new ArrayList<LinkContainer>();
    Tag tag;
    String url;
    String sourceUrl;
    try {
        for (String tagName : tagNames) {
            Parser parser = new Parser(resource);
            NodeList nodes = parser.parse(new TagNameFilter(tagName));
            NodeIterator i = nodes.elements();
            while (i.hasMoreNodes()) {
                tag = (Tag) i.nextNode();
                url = tag.getAttribute("href");
                sourceUrl = tag.getPage().getUrl();
                if (RegexUtil.verifyUrl(url)) {
                    urls.add(new LinkContainer(url, null, sourceUrl)); // null is where the link text should go
                }
            }
        }
    } catch (ParserException pe) {
        pe.printStackTrace();
    }
    return urls;
}
Have you tried ((LinkTag) tag).getLinkText()? Personally I prefer an HTML parser which produces XML according to a well-used standard, e.g., Xerces or similar. This is what you get from using, e.g., http://nekohtml.sourceforge.net/.
You would need to check the children of each A tag. If you assume that your A tags only have a single child (the text itself), you can use the getFirstChild() method. This should be an instance of TextNode, and you can call getText() on it to get the link text.
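A sketch of that second suggestion, spliced into the question's loop (it assumes each anchor has a single text child, and uses org.htmlparser.Node and org.htmlparser.nodes.TextNode):

// inside the while loop, after obtaining tag:
String linkText = null;
Node child = tag.getFirstChild();
if (child instanceof TextNode) {
    // getText() returns the raw text between the <a> tags
    linkText = ((TextNode) child).getText();
}
urls.add(new LinkContainer(url, linkText, sourceUrl));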
