I parsed a website with Jsoup and extracted its links. Now I want to store just part of each link in an ArrayList, but somehow I cannot store one link at a time.
I tried several String methods, Scanner, and BufferedReader without success.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
public class DatenImportUnternehmen {
public static void main(String[] args) throws IOException {
ArrayList<String> aktien = new ArrayList<String>();
String searchUrl = "https://www.ariva.de/aktiensuche/_result_table.m";
for(int i = 0; i < 1; i++) {
String searchBody = "page=" + Integer.toString(i) +
"&page_size=25&sort=ariva_name&sort_d=asc" +
"&ariva_performance_1_year=_&ariva_performance_3_years=" +
"&ariva_performance_5_years=&index=0&founding_year=&land=0" +
"&industrial_sector=0&sector=0&currency=0" +
"&type_of_share=0&year=_all_years&sales=_&profit_loss=" +
"&sum_assets=&sum_liabilities=&number_of_shares=" +
"&earnings_per_share=&dividend_per_share=&turnover_per_share=" +
"&book_value_per_share=&cashflow_per_share=" +
"&balance_sheet_total_per_share=&number_of_employees=" +
"&turnover_per_employee=_&profit_per_employee=" +
"&kgv=_&kuv=_&kbv=_&dividend_yield=_&return_on_sales=_";
// post request to search URL
Document document =
Jsoup.connect(searchUrl).requestBody(searchBody).post();
// find links in returned HTML
for(Element link:document.select("a[href]")) {
String link1 = link.toString();
String link2 = link1.substring(link1.indexOf('/'));
String link3 = link2.substring(0, link2.indexOf('"'));
aktien.add(link3);
System.out.println(aktien);
}
}
}
}
My output looks like (just a part of it):
[/1-1_drillisch-aktie]
[/1-1_drillisch-aktie, /11_88_0_solutions-aktie]
[/1-1_drillisch-aktie, /11_88_0_solutions-aktie, /1st_red-aktie]
[/1-1_drillisch-aktie, /11_88_0_solutions-aktie, /1st_red-aktie, /21st-
_cent-_fox_b_new-aktie]
[/1-1_drillisch-aktie, /11_88_0_solutions-aktie, /1st_red-aktie, /21st-
_cent-_fox_b_new-aktie, /21st_century_fox-aktie]
[/1-1_drillisch-aktie, /11_88_0_solutions-aktie, /1st_red-aktie, /21st-
_cent-_fox_b_new-aktie, /21st_century_fox-aktie, /2g_energy-aktie]
[/1-1_drillisch-aktie, /11_88_0_solutions-aktie, /1st_red-aktie, /21st-
_cent-_fox_b_new-aktie, /21st_century_fox-aktie, /2g_energy-aktie,
/3i_group-aktie]
[/1-1_drillisch-aktie, /11_88_0_solutions-aktie, /1st_red-aktie, /21st-
_cent-_fox_b_new-aktie, /21st_century_fox-aktie, /2g_energy-aktie,
/3i_group-aktie, /3i_infrastructure-aktie]
What I want to achieve is:
[/1-1_drillisch-aktie]
[/11_88_0_solutions-aktie]
[/1st_red-aktie]
[/21st-_cent-_fox_b_new-aktie]
and so on.
I just don't know what the problem is at this stage.
Your problem is that you are printing the list while adding to it inside the loop.
To resolve the issue, you can print the list outside of the loop to print everything in one go, or you can print link3 (which is what you are adding to the ArrayList) instead of the list inside the loop.
Option 1:
for(Element link:document.select("a[href]")) {
String link1 = link.toString();
String link2 = link1.substring(link1.indexOf('/'));
String link3 = link2.substring(0, link2.indexOf('"'));
aktien.add(link3);
}
System.out.println(aktien);
Option 2:
for(Element link:document.select("a[href]")) {
String link1 = link.toString();
String link2 = link1.substring(link1.indexOf('/'));
String link3 = link2.substring(0, link2.indexOf('"'));
aktien.add(link3);
System.out.println(link3);
}
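As a side note, there is no need for the substring gymnastics on `link.toString()`: Jsoup exposes an element's attribute values directly via `attr("href")`. A minimal sketch, parsing a static HTML snippet for illustration (the class name and sample anchors are made up):

```java
import java.util.ArrayList;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class HrefExtractor {
    // Collect the raw href value of every anchor in the given HTML.
    public static ArrayList<String> extractHrefs(String html) {
        ArrayList<String> hrefs = new ArrayList<String>();
        Document document = Jsoup.parse(html);
        for (Element link : document.select("a[href]")) {
            hrefs.add(link.attr("href")); // no substring slicing needed
        }
        return hrefs;
    }

    public static void main(String[] args) {
        String html = "<a href=\"/1-1_drillisch-aktie\">x</a>"
                + "<a href=\"/1st_red-aktie\">y</a>";
        System.out.println(extractHrefs(html)); // prints [/1-1_drillisch-aktie, /1st_red-aktie]
    }
}
```

With a fetched document the same loop works unchanged; `attr("abs:href")` additionally resolves the link against the page's base URL.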
My code pulls the links and adds them to the HashSet. I want each found link to replace the original link, and the process to repeat until no more new links can be found. The program keeps running, but the link isn't updating, so it gets stuck in an infinite loop doing nothing. How do I get the link to update so the program can repeat until no more links are found?
package downloader;
import java.io.IOException;
import java.net.URL;
import java.util.HashSet;
import java.util.Scanner;
import java.util.Set;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
public class Stage2 {
public static void main(String[] args) throws IOException {
int q = 0;
int w = 0;
HashSet<String> chapters = new HashSet();
String seen = new String("/manga/manabi-ikiru-wa-fuufu-no-tsutome/i1778063/v1/c1");
String source = new String("https://mangapark.net" + seen);
// 0123456789
while( q == w ) {
String source2 = new String(source.substring(21));
String last = new String(source.substring(source.length() - 12));
String last2 = new String(source.substring(source.length() - 1));
chapters.add(seen);
for (String link : findLinks(source)) {
if(link.contains("/manga") && !link.contains(last) && link.contains("/i") && link.contains("/c") && !chapters.contains(link)) {
chapters.add(link);
System.out.println(link);
seen = link;
System.out.print(chapters);
System.out.println(seen);
}
}
}
System.out.print(chapters);
}
private static Set<String> findLinks(String url) throws IOException {
Set<String> links = new HashSet<>();
Document doc = Jsoup.connect(url)
.data("query", "Java")
.userAgent("Mozilla")
.cookie("auth", "token")
.timeout(3000)
.get();
Elements elements = doc.select("a[href]");
for (Element element : elements) {
links.add(element.attr("href"));
}
return links;
}
}
Your program didn't stop because your while condition never changes:
while( q == w )
is always true. I ran your code without the while and got 2 links printed twice(!), and the program stopped.
If you want the links to the other chapters, you have the same problem as me. In the element
Element element = doc.getElementById("sel_book_1");
the links come after the pseudo-element ::before, so they will not be in your Jsoup Document.
Here is my question on this topic:
How can I find a HTML tag with the pseudoElement ::before in jsoup
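More generally, the "replace the link and repeat until nothing new turns up" idea is usually written as a breadth-first crawl over a work queue, so the loop has a natural termination condition. A sketch of that control flow, with the Jsoup-based findLinks(url) stubbed out by a hypothetical in-memory link map so the logic is visible offline:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class CrawlSketch {
    // Hypothetical stand-in for findLinks(url): maps a page to its outgoing links.
    static final Map<String, List<String>> LINKS = Map.of(
            "/c1", List.of("/c2"),
            "/c2", List.of("/c1", "/c3"),
            "/c3", List.of());

    // Visit every reachable page exactly once. The loop terminates because each
    // page is enqueued at most once, guarded by the 'seen' set.
    public static Set<String> crawl(String start) {
        Set<String> seen = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        seen.add(start);
        queue.add(start);
        while (!queue.isEmpty()) {
            String page = queue.poll();
            for (String link : LINKS.getOrDefault(page, List.of())) {
                if (seen.add(link)) { // add() returns false for already-seen links
                    queue.add(link);
                }
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(crawl("/c1")); // all three chapters, each visited once
    }
}
```

In the real program, the body of the loop would call the existing findLinks(page) (with your filtering conditions) instead of consulting the map.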
I want to check and verify that all of the contents in the ArrayList are equal to the value of a String variable. If any value is not equal, its index number should be printed with an error message like (value at index 2 didn't match the value of the expectedName variable).
After I run the code below, it prints the error message for all three indexes; it should print it only for index number 1.
Please note that I'm reading the data from a CSV file, putting it into an ArrayList, and then validating it against the expected data in a String variable.
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
public class ValidateVideoDuration {
private static final String CSV_FILE_PATH = "C:\\Users\\videologs.csv";
public static void main(String[] args) throws IOException {
String expectedVideo1Duration = "00:00:30";
String expectedVideo2Duration = "00:00:10";
String expectedVideo3Duration = "00:00:16";
String actualVideo1Duration = "";
String actualVideo2Duration = "";
String actualVideo3Duration = "";
ArrayList<String> actualVideo1DurationList = new ArrayList<String>();
ArrayList<String> actualVideo2DurationList = new ArrayList<String>();
ArrayList<String> actualVideo3DurationList = new ArrayList<String>();
try (Reader reader = Files.newBufferedReader(Paths.get(CSV_FILE_PATH));
CSVParser csvParser = new CSVParser(reader,
CSVFormat.DEFAULT.withFirstRecordAsHeader().withIgnoreHeaderCase().withTrim());) {
for (CSVRecord csvRecord : csvParser) {
// Accessing values by Header names
actualVideo1Duration = csvRecord.get("Video 1 Duration");
actualVideo1DurationList.add(actualVideo1Duration);
actualVideo2Duration = csvRecord.get("Video 2 Duration");
actualVideo2DurationList.add(actualVideo2Duration);
actualVideo3Duration = csvRecord.get("Video 3 Duration");
actualVideo3DurationList.add(actualVideo3Duration);
}
}
for (int i = 0; i < actualVideo2DurationList.size(); i++) {
if (actualVideo2DurationList.get(i) != expectedVideo2Duration) {
System.out.println("Duration of Video 1 at index number " + Integer.toString(i)
+ " didn't match the expected duration");
}
}
}
}
The data inside my CSV file look like the following:
video 1 duration, video 2 duration, video 3 duration
00:00:30, 00:00:10, 00:00:16
00:00:30, 00:00:15, 00:00:15
00:00:25, 00:00:10, 00:00:16
Don't use == or != for String comparison. == checks the referential equality of two Strings, not the equality of their values. Use the .equals() method instead.
Change your if condition to if (!actualVideo2DurationList.get(i).equals(expectedVideo2Duration))
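A minimal demonstration of why == fails here: values read from a parser are distinct String objects, so reference comparison reports them unequal even when the characters match.

```java
public class StringCompareDemo {
    public static void main(String[] args) {
        String expected = "00:00:10";
        // Simulates a value read from the CSV: same characters, different object.
        String actual = new String("00:00:10");

        System.out.println(actual == expected);      // false: different references
        System.out.println(actual.equals(expected)); // true: same characters
    }
}
```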
Using the code below, I am trying to open a page, go to the mobile section, and sort the items by name order. Now I want to check whether the mobile devices are sorted by name, i.e. alphabetically.
I tried to convert my List below to an ArrayList, but I am not able to check whether the printed elements are in ascending order. Kindly help.
package selflearning;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.Select;
public class Guru99Ecommerce1 {
public static void main(String[] args) throws Exception {
System.setProperty("webdriver.gecko.driver","C:\\geckodriver\\geckodriver.exe");
WebDriver driver = new FirefoxDriver();
driver.get("http://live.guru99.com/index.php/");
String title=driver.getTitle();
String expectedTitle = "Home page";
System.out.println("The title of the webPage is " + title);
expectedTitle.equalsIgnoreCase(title);
System.out.println("Title is verified");
driver.findElement(By.xpath("//a[text()='Mobile']")).click();
String nextTitle = driver.getTitle();
System.out.println("The title of next page" + nextTitle);
String nextExpectedTitle = "pageMobile";
nextExpectedTitle.equalsIgnoreCase(nextTitle);
System.out.println("The next title is verified");
Select s = new Select(driver.findElement(By.xpath("//div[@class='category-products']//div/div[@class='sorter']/div/select[@title='Sort By']")));
s.selectByVisibleText("Name");
List<WebElement> element = driver.findElements(By.xpath("//div[@class='product-info']/h2/a"));
for(WebElement e: element)
{
String str = e.getText();
System.out.println("The items are " + str);
}
List<WebElement> list = new ArrayList<WebElement>(element);
System.out.println("arrangement" + list);
}
}
The easiest way to do this is to just grab the list of products, loop through them, and see if the current product name (a String) is "greater" than the last product name using String#compareToIgnoreCase().
I would write some functions for the common tasks you are likely to repeat for this page.
public static void sortBy(String sortValue)
{
new Select(driver.findElement(By.cssSelector("select[title='Sort By']"))).selectByVisibleText(sortValue);
}
public static List<String> getProductNames()
{
List<String> names = new ArrayList<>();
List<WebElement> products = driver.findElements(By.cssSelector("ul.products-grid h2.product-name"));
for (WebElement product : products)
{
names.add(product.getText());
}
return names;
}
public static boolean isListSorted(List<String> list)
{
String last = list.get(0);
for (int i = 1; i < list.size(); i++)
{
String current = list.get(i);
if (last.compareToIgnoreCase(current) > 0)
{
return false;
}
last = current;
}
return true;
}
NOTE: You should be using JUnit or TestNG for your assertions instead of writing your own, because it makes things much, much easier (and you don't have to write and debug your own, which saves time). The code below uses TestNG; you can see how much shorter and simpler it is when using such a library.
String url = "http://live.guru99.com/index.php";
driver.navigate().to(url);
Assert.assertEquals(driver.getTitle(), "Home page");
driver.findElement(By.xpath("//nav[@id='nav']//a[.='Mobile']")).click();
Assert.assertEquals(driver.getTitle(), "Mobile");
sortBy("Name");
System.out.println(getProductNames());
System.out.println(isListSorted(getProductNames()));
Where getProductNames() returns
[IPHONE, SAMSUNG GALAXY, SONY XPERIA]
Alright, so I finished my Yelp scanner, and everything is running great. What I want to do now is have the program retrieve the URL for each link to each business, go to that page, and scan for whether it contains:
xlink:href="#30x30_bullhorn"></use>
I pretty much have a good idea of how I'm going to go about doing that; however, I can't seem to find a Jsoup method that would retrieve a link's URL. Is there somewhere in the page's HTML that would have the URL? I'm not very proficient with HTML at all, so 90% of what I'm looking at is gibberish. Here's an example link if you want to check out what I'm referring to.
https://www.yelp.com/search?find_loc=nj&start=10 is the main page, and I need to obtain the URL for the page https://www.yelp.com/biz/la-cocina-newark. The orange bullhorn is what I am trying to get it to retrieve. Here's my code, by the way:
import java.util.ArrayList;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
import java.io.IOException;
import java.util.Scanner;
public class YelpScrapper
{
public static void main(String[] args) throws IOException, Exception
{
//Variables
String description;
String location;
int pages;
int parseCount = 0;
Document document;
Scanner keyboard = new Scanner(System.in);
//Perform a Search
System.out.print("Enter a description: ");
description = keyboard.nextLine();
System.out.print("Enter a state: ");
location = keyboard.nextLine();
System.out.print("How many pages should we scan? ");
pages = keyboard.nextInt();
String descString = "find_desc=" + description.replace(' ', '+') + "&";
String locString = "find_loc=" + location.replace(' ', '+') + "&";
int number = 0;
String url = "https://www.yelp.com/search?" + descString + locString + "start=" + number;
ArrayList<String> names = new ArrayList<String>();
ArrayList<String> address = new ArrayList<String>();
ArrayList<String> phone = new ArrayList<String>();
//Fetch Data From Yelp
for (int i = 0 ; i <= pages ; i++)
{
document = Jsoup.connect(url).get();
Elements nameElements = document.select(".indexed-biz-name span");
Elements addressElements = document.select(".secondary-attributes address");
Elements phoneElements = document.select(".biz-phone");
for (Element element : nameElements)
{
names.add(element.text());
}
for (Element element : addressElements)
{
address.add(element.text());
}
for (Element element : phoneElements)
{
phone.add(element.text());
}
for (int index = 0 ; index < 10 ; index++)
{
System.out.println("\nLead " + parseCount);
System.out.println("Company Name: " + names.get(parseCount));
System.out.println("Address: " + address.get(parseCount));
System.out.println("Phone Number: " + phone.get(parseCount));
parseCount = parseCount + 1;
}
number = number + 10;
}
}
}
Learn how to use the Inspect element of Chrome Developer tools, as it makes it incredibly easy to locate elements in the DOM (you said you aren't comfortable with HTML, well you certainly will be after this and using Inspect is a great learning tool). Focusing the inspector on the "View Now" button, you'll get to this:
<a class="ybtn ybtn--primary ybtn--small ybtn-cta" href="…">View Now</a>
You'll have to figure out how to traverse down to this, and childNodes() will be helpful in traversing down. Then you can use getElementsByClass("ybtn ybtn--primary ybtn--small ybtn-cta") to get to that specific class where the link is, and then use the .attr() method of the Element class to get the href: .attr("href");.
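Put together, a hedged sketch of that last step (the class name is the one shown above; the markup here is a stand-in for a fetched page, and in practice you would parse Jsoup.connect(bizUrl).get() instead):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Element;

public class ButtonHref {
    // Extract the href of the call-to-action button from a page's HTML.
    public static String buttonHref(String html) {
        Element button = Jsoup.parse(html).selectFirst("a.ybtn-cta");
        return button == null ? null : button.attr("href");
    }

    public static void main(String[] args) {
        // Stand-in markup; a real run would fetch the business page first.
        String html = "<a class=\"ybtn ybtn--primary ybtn--small ybtn-cta\""
                + " href=\"/biz/la-cocina-newark\">View Now</a>";
        System.out.println(buttonHref(html)); // prints /biz/la-cocina-newark
    }
}
```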
I was wondering if anyone knows how to successfully parse the company name "Alcoa Inc." shown in the URL below. It would be much easier to show a picture but I do not have enough reputation. Any help would be appreciated.
http://www.google.com/finance?q=NYSE%3AAA&ei=LdwVUYC7Fp_YlgPBiAE
This is what I have tried so far using jsoup to parse the div class:
<div class="appbar-snippet-primary">
<span>Alcoa Inc.</span>
</div>
public Elements htmlParser(String url, String element, String elementType, String returnElement) {
    try {
        Document doc = Jsoup.connect(url).get();
        if (returnElement == null) {
            return doc.select(elementType + "." + element);
        } else {
            return doc.select(elementType + "." + element + " " + returnElement);
        }
    } catch (IOException e) {
        return new Elements(); // empty result on failure, so the method compiles
    }
}
public String htmlparseGoogleStocks(String url){
String pr = "pr";
String appbar_center = "appbar-snippet-primary";
String val = "val";
String span = "span";
String div = "div";
String td = "td";
Elements price_data;
Elements title_data;
Elements more_data;
price_data = htmlParser(url, pr, span, null);
title_data = htmlParser(url, appbar_center, div, span);
//more_data = htmlParser(url, val, td, null);
//String stockprice = price_data.text().toString();
String title = title_data.text().toString();
//System.out.println(more_data.text());
return title;
}
Myself, I'd analyze the page of interest's source HTML, and then just use JSoup to extract the information. For instance, using a very small JSoup program like so:
import java.io.IOException;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;
public class GoogleFinance {
public static final String PAGE = "https://www.google.com/finance?q=NASDAQ:XONE";
public static void main(String[] args) throws IOException {
Document doc = Jsoup.connect(PAGE).get();
Elements title = doc.select("title");
System.out.println(title.text());
}
}
You get in return:
ExOne Co: NASDAQ:XONE quotes & news - Google Finance
It doesn't get much easier than that.
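If you specifically want the appbar-snippet-primary span from the question rather than the page title, that selector is a one-liner too. A sketch against the markup quoted in the question (the live page may of course differ):

```java
import org.jsoup.Jsoup;

public class CompanyName {
    // Pull the text of the span inside div.appbar-snippet-primary.
    public static String companyName(String html) {
        return Jsoup.parse(html).select("div.appbar-snippet-primary span").text();
    }

    public static void main(String[] args) {
        // Markup as quoted in the question; a real run would fetch the page first.
        String html = "<div class=\"appbar-snippet-primary\"><span>Alcoa Inc.</span></div>";
        System.out.println(companyName(html)); // prints Alcoa Inc.
    }
}
```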