jsoup won't extract email, only website - java

I am having trouble with jsoup. It is only extracting website links, not the email link. Here is my code:
try {
    Document doc = Jsoup.connect(url2).get();
    Elements links = doc.select("a[href]");
    for (Element web : links) {
        Log.i("websites/emails/etc.", web.attr("abs:href"));
    }
    Elements links2 = doc.select("link[href]");
    for (Element web : links2) {
        Log.i("websites/emails/etc.", web.attr("abs:href"));
    }
} catch (IOException e) {
    Log.e("websites/emails/etc.", "fetch failed", e);
}
The logcat is only showing the website links.
Here is the inspected page: [screenshot omitted]

Edit --
I missed that you were using Android. I tested on the JVM and your code looked fine; when I re-tested on Android, I saw the same thing you describe. The fix appears to be to remove the abs: qualifier from your attr call:
Log.i("websites/emails/etc.", web.attr("href"));
Original answer, which may apply to other attempts to extract mailto: links.
This is almost certainly intended behavior on the part of the website creator. Because mailto: links are easily scraped by spambot email harvesters, a variety of techniques are used to make the mailto: target non-obvious in the raw HTML. Instead, the addresses are cleverly encoded, or generated dynamically by JavaScript. See here for an example. Safari shows you the element because these techniques are designed to render correctly in the browser, even when the raw HTML looks odd. If you download the file with curl and look at the raw text, there is likely no "mailto:" link there.
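If the mailto: link really is present in the raw HTML, you can also target it directly with an attribute-prefix selector instead of filtering all anchors. A minimal sketch, assuming url2 is the page being fetched and Android's Log for output:
// Select only anchors whose href starts with "mailto:"
Document doc = Jsoup.connect(url2).get();
Elements mailLinks = doc.select("a[href^=mailto:]");
for (Element mail : mailLinks) {
    // mailto: URLs are already absolute, so the plain attribute is enough
    Log.i("emails", mail.attr("href"));
}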

Related

How to get URL of video or audio from a website with jsoup

I'm using jsoup to parse all the HTML from this website: news
I can fetch the title and description by selecting the elements I need, but I can't find the video URL element to select. How can I get the video link with jsoup or another library? Thanks!
Maybe I misunderstood your question, but can't you search for <video> elements using Jsoup?
A <video> element typically has a src attribute.
Maybe try something like this?
// HTML from your webpage
final var html = "this should hold your HTML";
// Deconstruct into element objects
final var document = Jsoup.parse(html);
// Use CSS to select the first <video> element
final var videoElement = document.select("video").first();
// Grab the video's URL by fetching the "src" attribute
final var src = videoElement.attr("src");
Now, I did not thoroughly check the website you linked, but some websites insert videos using JavaScript. If this website inserts the video tag after loading, you might be out of luck, as Jsoup does not run JavaScript; it only works on the initial HTML fetched from the page.
Jsoup is an HTML parser, which is why it only parses the HTML the server sends, not markup generated at runtime.
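One caveat with the snippet above: many pages put the URL on a nested <source> tag rather than on <video> itself, and select("video").first() can return null. A hedged variant covering both cases, with html again standing in for your page's HTML:
final var document = Jsoup.parse(html);
final var videoElement = document.selectFirst("video");
if (videoElement != null) {
    // Prefer a direct src attribute; fall back to a nested <source> tag
    var src = videoElement.attr("src");
    if (src.isEmpty()) {
        final var sourceElement = videoElement.selectFirst("source");
        if (sourceElement != null) {
            src = sourceElement.attr("src");
        }
    }
    System.out.println(src);
}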

Extract the list of URLs obtained during an HTML page render in Java

I want to be able to get the list of all URLs that a browser will issue GET requests for when we try to open a page. For example, if we try to open cnn.com, there are multiple URLs within the first HTTP response which the browser then recursively requests.
I'm not trying to render a page, but I'm trying to obtain a list of all the URLs that are requested when a page is rendered. Doing a simple scan of the HTTP response content wouldn't be sufficient, as there could potentially be images in the CSS which are downloaded. Is there any way I can do this in Java?
My question is similar to this question, but I want to write this in Java.
You can use the Jsoup library to extract all the links from a webpage, e.g.:
Document document = Jsoup.connect("http://google.com").get();
Elements links = document.select("a[href]");
for (Element link : links) {
    System.out.println(link.attr("href"));
}
Here's the documentation.
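Note that anchor hrefs alone won't cover everything a browser fetches. A broader, though still incomplete, sketch that also picks up stylesheets, images, scripts, and iframes declared statically in the HTML (it cannot see requests triggered by CSS or JavaScript, since Jsoup does not render the page):
Document document = Jsoup.connect("http://google.com").get();
// Elements that reference external resources via href or src
for (Element el : document.select("a[href], link[href], img[src], script[src], iframe[src]")) {
    String url = el.hasAttr("href") ? el.absUrl("href") : el.absUrl("src");
    System.out.println(url);
}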

web scraping jsoup java unable to scrape full information

I have some information to scrape from a website. I can scrape part of it, but not all of the information is retrieved; there is a lot of data loss. The following images help to illustrate: [screenshots omitted]
I used Jsoup, connected it to URL and then extracted this particular data using the following code :
Document doc = Jsoup.connect("https://www.awattar.com/tariffs/hourly#").userAgent("Mozilla/17.0").get();
Elements durationCycle = doc.select("g.x.axis g.tick text");
But in the result, I couldn't find any of that information at all. So I printed the whole document fetched from the URL, and it shows the following: [output omitted]
I can see the information when I download the page and read it as an input file, but not when I connect directly to the URL. However, I want to connect to the URL. Any suggestions?
I hope my question is understandable; let me know if it is not clear.
Jsoup limits how much of the response body it reads by default. You should use the maxBodySize parameter:
Document doc = Jsoup.connect("https://www.awattar.com/tariffs/hourly#").userAgent("Mozilla/17.0").maxBodySize(0).get();
"0" is no limit.

How to get navigable links to pages from a site using jsoup?

I am implementing a basic crawler, with the purpose of later using it in a vulnerability scanner. I am using jsoup for connecting to, retrieving, and parsing the HTML document.
I manually supply the base/root of the intended site (www.example.com) and connect.
...
Connection connection = Jsoup.connect(url).userAgent(USER_AGENT);
Document htmlDocument = connection.get();
this.htmlDocument = htmlDocument;
...
Then I retrieve all the links on the page.
...
Elements linksOnPage = htmlDocument.select("a[href]");
...
After this, I loop over the links and try to collect the links to all the pages on the site.
for (Element link : linksOnPage) {
this.links.add(link.absUrl("href"));
}
The problem is as follows: depending on the links I get, some might not be links to new pages, or not even links to pages at all. As an example, I got links like:
https://example.example.com/webmail
http://193.231.21.13
mailto:example.example#exampl.com
What I need some help with is filtering the links so that I get only links to new pages of the same root/base site.
This is easy. Check that absUrl starts with your base URL and does not end with an image, JS, or CSS extension:
if (absUrl.startsWith("http://www.ics.uci.edu/") && !absUrl.matches(".*\\.(bmp|gif|jpg|png|js|css)$")) {
    // here absUrl starts with the domain name and is not an image, JS, or CSS file
}
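For a stricter filter, parsing each URL lets you keep only http(s) links on the same host as the crawl root, which also drops mailto: and javascript: links. A sketch under that assumption; rootHost is a stand-in for your own base site, e.g. "www.example.com":
// uses java.net.URI and java.net.URISyntaxException
static boolean isCrawlablePage(String absUrl, String rootHost) {
    try {
        URI uri = new URI(absUrl);
        String scheme = uri.getScheme();
        // Drop mailto:, javascript:, and anything that is not plain http(s)
        if (scheme == null || !(scheme.equals("http") || scheme.equals("https"))) {
            return false;
        }
        // Keep only links on the crawl root's host
        if (uri.getHost() == null || !uri.getHost().equalsIgnoreCase(rootHost)) {
            return false;
        }
        // Skip obvious non-page resources
        return !absUrl.matches("(?i).*\\.(bmp|gif|jpg|jpeg|png|js|css|pdf|zip)$");
    } catch (URISyntaxException e) {
        return false; // malformed URL, not followable
    }
}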

Parsing shopping websites using jsoup

I have the following code:
doc = Jsoup.connect("http://www.amazon.com/gp/goldbox").userAgent("Mozilla").timeout(5000).get();
Elements hrefs = doc.select("div.a-row.layer");
System.out.println("Results: " + hrefs); // I am trying to print the contents but am not able to see any output.
Problem: I want to display all image src attributes within the div with class name "a-row layer", but I am unable to see any output.
What is the mistake in my query?
I have taken a look at the website and tested it myself. The issue seems to be that the piece of HTML you want to extract (div.a-row.layer) is generated by JavaScript.
Jsoup does not support JavaScript and cannot parse content generated by it. You would need a headless web browser such as HtmlUnit to deal with this.
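A minimal sketch of that approach: let HtmlUnit execute the page's JavaScript, then hand the rendered markup to Jsoup for the usual CSS selection. Package names below are for HtmlUnit 3.x (older releases live under com.gargoylesoftware.htmlunit):
import org.htmlunit.WebClient;
import org.htmlunit.html.HtmlPage;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class GoldboxScraper {
    public static void main(String[] args) throws Exception {
        try (WebClient webClient = new WebClient()) {
            webClient.getOptions().setJavaScriptEnabled(true);
            webClient.getOptions().setThrowExceptionOnScriptError(false);
            // HtmlUnit fetches the page and runs its scripts
            HtmlPage page = webClient.getPage("http://www.amazon.com/gp/goldbox");
            // Hand the rendered HTML to Jsoup for CSS selection
            Document doc = Jsoup.parse(page.asXml());
            System.out.println(doc.select("div.a-row.layer"));
        }
    }
}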
