Hello, I need to collect some <a> elements from a page, all of which have class=my_img. I save these elements in a List, then click the first element of the list, navigate back, and try to click the second element, but Selenium gives me this error:
Exception in thread "main"
org.openqa.selenium.StaleElementReferenceException: Element not found
in the cache - perhaps the page has changed since it was looked up
And this is my code:
List <WebElement> Element = drivers.findElements(By.cssSelector(".my_img"));
System.out.println("Megethos"+Element.size());
System.out.println("Pame stous epomenous \n");
for (int i = 1; i < Element.size(); i++) {
drivers.manage().timeouts().implicitlyWait(35, TimeUnit.SECONDS);
System.out.println(i+" "+Element.size());
System.out.println(i+" "+Element.get(i));
action.click(Element.get(i)).perform();
Thread.sleep(2000);
System.out.println("go back");
drivers.navigate().back();
Thread.sleep(6000);
drivers.navigate().refresh();
Thread.sleep(6000);
}
Your action.click() and/or navigate() calls result in a page load, which invalidates the WebElements in your list. Put your findElements() call inside the loop so the list is re-fetched on every iteration:
List <WebElement> Element = drivers.findElements(By.cssSelector(".my_img"));
for (int i = 1; i < Element.size(); i++) {
Element = drivers.findElements(By.cssSelector(".my_img"));
drivers.manage().timeouts().implicitlyWait(35, TimeUnit.SECONDS);
System.out.println(i+" "+Element.size());
System.out.println(i+" "+Element.get(i));
action.click(Element.get(i)).perform();
Thread.sleep(2000);
System.out.println("go back");
drivers.navigate().back();
Thread.sleep(6000);
drivers.navigate().refresh();
Thread.sleep(6000);
}
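If you would rather avoid the fixed Thread.sleep() calls, the same loop can be written with an explicit wait. This is only a sketch under the same assumptions as your code (the .my_img locator, a drivers WebDriver, and an Actions instance named action):
WebDriverWait wait = new WebDriverWait(drivers, 35);
int count = drivers.findElements(By.cssSelector(".my_img")).size();
for (int i = 1; i < count; i++) {
    // Re-locate the elements after every navigation so none of them are stale
    List<WebElement> elements = wait.until(
            ExpectedConditions.presenceOfAllElementsLocatedBy(By.cssSelector(".my_img")));
    action.click(elements.get(i)).perform();
    drivers.navigate().back();
}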
If the primary purpose is to click on the links and get back to the previous page, it's better to get the "href" attributes of all the "a" elements on the page and navigate to each of them. The approach you've followed will always result in a StaleElementReferenceException, because when you navigate back, the DOM of the original page is rebuilt.
Below is the approach I suggested:
List<WebElement> linkElements = driver.findElements(By.xpath("//a[@class='my_img']"));
System.out.println("The number of links under URL is: "+linkElements.size());
//Getting all the 'href' attributes from the 'a' tag and putting into the String array linkhrefs
String[] linkhrefs = new String[linkElements.size()];
int j = 0;
for (WebElement e : linkElements) {
linkhrefs[j] = e.getAttribute("href");
j++;
}
// test each link
int k=0;
for (String t : linkhrefs) {
try{
if (t != null && !t.isEmpty()) {
System.out.println("Navigating to link number "+(++k)+": '"+t+"'");
driver.navigate().to(t);
String title;
title = driver.getTitle();
System.out.println("title is: "+title);
//Some known errors, if and when, found in the navigated to page.
if((title.contains("You are not authorized to view this page"))||(title.contains("Page not found"))
||(title.contains("503 Service Unavailable"))
||(title.contains("Problem loading page")))
{
System.err.println(t + " the link is not working because title is: "+title);
} else {
System.out.println("\"" + t + "\"" + " is working.");
}
}else{
System.err.println("Link's href is null.");
}
}catch(Throwable e){
System.err.println("Error came while navigating to link: "+t+". Error message: "+e.getMessage());
}
System.out.println("++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++");
}
Below is the code which I am trying to get to work.
//Method to fetch all links from the sitemap container
public void GetAllLinks() {
WebElement pointer = LinksContainer;
String url = "";
List <WebElement> allURLs = pointer.findElements(By.tagName("a"));
System.out.println("Total links on the page: "+ allURLs.size());
for (int i=0; i<allURLs.size(); i++) {
WebElement link = allURLs.get(i);
url = link.getAttribute("href");
OpenAllLinks(url);
}
}
//Method to hit all the fetched URLs
public void OpenAllLinks(String linkURL) {
driver.get(linkURL);
}
I am fetching all the anchor elements from a sitemap page and putting those elements into a list. Then I am getting the URLs from all those elements using getAttribute("href"). The code works fine up to this point.
However, after that I am taking these URLs and passing them one by one into the method OpenAllLinks() to open them with driver.get(). The code works for the first link, but as soon as the first page has loaded, it gives me the stale element exception.
The moment you leave the page where all these links appear, all the web elements in the allURLs list become stale.
What you can do is first extract and save all the links (the strings, not the web elements) into a list, and after that iterate over the list, opening each link.
Like this:
public void GetAllLinks() {
WebElement pointer = LinksContainer;
String url = "";
List <WebElement> allURLs = pointer.findElements(By.tagName("a"));
System.out.println("Total links on the page: "+ allURLs.size());
List<String> links = new ArrayList<>();
for (int i=0; i<allURLs.size(); i++) {
WebElement link = allURLs.get(i);
url = link.getAttribute("href");
links.add(url);
}
for (int i=0; i<links.size(); i++) {
OpenLink(links.get(i));
}
}
//Method to open the fetched URLs
public void OpenLink(String linkURL) {
driver.get(linkURL);
}
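For what it's worth, the same extraction can be written more compactly with Java 8 streams. A minimal sketch, assuming the same LinksContainer element and a java.util.stream.Collectors import:
// Collect all non-empty hrefs in one pass, then visit each one
List<String> links = LinksContainer.findElements(By.tagName("a"))
        .stream()
        .map(a -> a.getAttribute("href"))
        .filter(href -> href != null && !href.isEmpty())
        .collect(Collectors.toList());
links.forEach(driver::get);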
I use this Java code with Selenium to select a table row based on found text:
WebElement tableContainer = driver.findElement(By.xpath("//div[@class='ag-center-cols-container']"));
List<WebElement> list = tableContainer.findElements(By.xpath("./child::*"));
// check for list elements and print all found elements
if(!list.isEmpty())
{
for (WebElement element : list)
{
System.out.println("Found inner WebElement " + element.getText());
}
}
// iterate sub-elements
for ( WebElement element : list )
{
System.out.println("Searching for " + element.getText());
if(element.getText().equals(valueToSelect))
{
element.click();
break; // break here, otherwise the loop continues after the click and we get an exception
}
}
Full code: https://pastebin.com/ANMqY01y
For some reason the table text is not clicked. I don't get an exception. Any idea why it's not working properly?
There are 2 divs matching the xpath //div[@class='ag-center-cols-container'].
The first div does not contain anything, while the second div has child divs.
I would suggest you use:
List<WebElement> list = driver.findElements(By.xpath("//div[@class='ag-center-cols-container']//div"));
Remove this line from your code :
WebElement tableContainer = driver.findElement(By.xpath("//div[@class='ag-center-cols-container']"));
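Put together, the search loop would then look something like this (a sketch, reusing your valueToSelect variable):
// Locate the child divs of the populated container directly
List<WebElement> list = driver.findElements(By.xpath("//div[@class='ag-center-cols-container']//div"));
for (WebElement element : list) {
    if (element.getText().equals(valueToSelect)) {
        element.click();
        break; // stop iterating once clicked; the DOM may change after the click
    }
}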
My objective is to scrape data using Java Selenium. I am able to load the Selenium driver, connect to the website, fetch the first column, then click the next pagination button until it becomes disabled, writing everything to the console. Here is what I have so far:
public static WebDriver driver;
public static void main(String[] args) throws Exception {
System.setProperty("webdriver.chrome.driver", "E:\\eclipse-workspace\\package-name\\src\\working\\selenium\\driver\\chromedriver.exe");
System.setProperty("webdriver.chrome.silentOutput", "true");
driver = new ChromeDriver();
driver.get("https://datatables.net/examples/basic_init/zero_configuration.html");
driver.manage().window().maximize();
compareDisplayedRowCountToActualRowCount();
}
public static void compareDisplayedRowCountToActualRowCount() throws Exception {
try {
Thread.sleep(5000);
List<WebElement> namesElements = driver.findElements(By.cssSelector("#example>tbody>tr>td:nth-child(1)"));
System.out.println("size of names elements : " + namesElements.size());
List<String> names = new ArrayList<String>();
//Adding column1 elements to the list
for (WebElement nameEle : namesElements) {
names.add(nameEle.getText());
}
//Displaying the list elements on console
for (WebElement s : namesElements) {
System.out.println(s.getText());
}
//locating next button
String nextButtonClass = driver.findElement(By.id("example_next")).getAttribute("class");
//traversing through the table until the last button and adding names to the list defined above
while (!nextButtonClass.contains("disabled")) {
driver.findElement(By.id("example_next")).click();
Thread.sleep(1000);
namesElements = driver.findElements(By.cssSelector("#example>tbody>tr>td:nth-child(1)"));
for (WebElement nameEle : namesElements) {
names.add(nameEle.getText());
}
nextButtonClass = driver.findElement(By.id("example_next")).getAttribute("class");
}
//printing the whole list elements
for (String name : names) {
System.out.println(name);
}
//counting the size of the list
int actualCount = names.size();
System.out.println("Total number of names :" + actualCount);
//locating displayed count
String displayedCountString = driver.findElement(By.id("example_info")).getText().split(" ")[5];
int displayedCount = Integer.parseInt(displayedCountString);
System.out.println("Total Number of Displayed Names count:" + displayedCount);
Thread.sleep(1000);
// Actual count calculated Vs Displayed Count
if (actualCount == displayedCount) {
System.out.println("Actual row count = Displayed row Count");
} else {
System.out.println("Actual row count != Displayed row Count");
throw new Exception("Actual row count != Displayed row Count");
}
} catch (Exception e) {
e.printStackTrace();
}
}
I want to:
scrape more than one column, or selected columns, for example the name, office, and age columns on this LINK
then write those columns' data to a CSV file
Update
I tried the following, but it does not run:
for(WebElement trElement : tr_collection){
int col_num=1;
List<WebElement> td_collection = trElement.findElements(
By.xpath("//*[@id=\"example\"]/tbody/tr[rown_num]/td[col_num]")
);
for(WebElement tdElement : td_collection){
rows += tdElement.getText()+"\t";
col_num++;
}
rows = rows + "\n";
row_num++;
}
Scraping:
Usually when I want to gather list elements I select by Xpath instead of CssSelector. The structure of how to access elements through the Xpath is usually clearer, and depends on one or two integer values specifying the element.
So for your example, where you want to find the names, you would find an element by its Xpath, look at the next element in the list's Xpath, and find the differing value:
The first name, 'Airi Satou' is found at the following Xpath:
//*[@id="example"]/tbody/tr[1]/td[1]
Airi's position has the following Xpath:
//*[@id="example"]/tbody/tr[1]/td[2]
You can see that within a row, the Xpath for each piece of information differs in the 'td' index.
The next name in the list, 'Angela Ramos' is found:
//*[@id="example"]/tbody/tr[2]/td[1]
And Angela's position is found:
//*[@id="example"]/tbody/tr[2]/td[2]
You can see that moving between rows is controlled by the 'tr' index.
By iterating over values of 'tr' and 'td' you can get the whole table.
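For illustration, here is a sketch along those lines that collects the name, office, and age cells of every row into a list. The column indexes (td[1], td[3], td[4]) are an assumption based on the example table's layout; adjust them if your table differs:
List<String[]> rows = new ArrayList<>();
int totalRows = driver.findElements(By.xpath("//*[@id=\"example\"]/tbody/tr")).size();
for (int r = 1; r <= totalRows; r++) {
    // Build each cell's Xpath from the row index r and a fixed column index
    String name   = driver.findElement(By.xpath("//*[@id=\"example\"]/tbody/tr[" + r + "]/td[1]")).getText();
    String office = driver.findElement(By.xpath("//*[@id=\"example\"]/tbody/tr[" + r + "]/td[3]")).getText();
    String age    = driver.findElement(By.xpath("//*[@id=\"example\"]/tbody/tr[" + r + "]/td[4]")).getText();
    rows.add(new String[] { name, office, age });
}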
As for writing to a CSV, there are some solid Java libraries for writing CSVs. I think a straightforward example to follow is here:
Java - Writing strings to a CSV file
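For example, with nothing but the standard library, a List<String[]> like the rows built in the sketch above could be written out as follows. This is a naive sketch with no quoting or escaping; prefer a real CSV library once fields can contain commas or quotes:
try (PrintWriter out = new PrintWriter(new FileWriter("names.csv"))) {
    out.println("Name,Office,Age"); // header row
    for (String[] row : rows) {
        out.println(String.join(",", row)); // naive join, no escaping
    }
} catch (IOException e) {
    e.printStackTrace();
}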
UPDATE:
@User169 It looks like you're gathering a list of elements for each row in the table. You want to gather the Xpaths one by one, iterating over the list of WebElements that you found originally. Try this, then add to it so it will get the text and save it to an array.
for (int num_row = 1; num_row < total_rows; num_row++) {
    for (int num_col = 1; num_col < total_col; num_col++) {
        WebElement info = driver.findElement(
                By.xpath("//*[@id=\"example\"]/tbody/tr[" + num_row + "]/td[" + num_col + "]"));
    }
}
I haven't tested it so it may need a few small changes.
I am trying to navigate through Google search results with Selenium WebDriver. I have an interface for the user to enter a word to search for and a site title to choose. If the result is not on the first page, the driver should go to the next page to look for the site, and if it's not there, to the next page, and so on.
Somehow I don't manage to get beyond the second page, and even when I did get to the second page and the right site was there, the driver didn't click on it.
Here is some of the code in Java:
private void setLoopNum(int l){
String getText = urlText.getText();
String getSiteName = linkToChoose.getText();
System.setProperty("webdriver.chrome.driver", "C:\\selenium-2.44.0\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.manage().window().maximize(); //Maximize window
driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
for(int i=0;i<l;i++){
//WebDriver driver = new FirefoxDriver();
driver.get("http://google.com");
//driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
WebElement element1 = driver.findElement(By.name("q"));
element1.sendKeys(getText);
element1.submit();
//driver.manage().timeouts().implicitlyWait(30,TimeUnit.SECONDS); //wait for page to load
//try{
boolean flag = false;
String page_number = "1";
while(! flag){
//get all the search results
List<WebElement> linkElements = driver.findElements(By.xpath("//h3[@class='r']/a"));
for(WebElement eachResult: linkElements){
if(eachResult.getAttribute(getSiteName).equals(getSiteName)){
eachResult.findElement(By.xpath("//a[@href='" + getSiteName + "']")).click();
flag =true;
}else{
driver.findElement(By.xpath("//a[@id='pnnext']/span")).click();
linkElements.clear(); //clean list
break;
} //end else
}
}//end while loop
//}catch(Exception e){
// System.out.println("Error!");
// }
}
driver.quit(); //clear memory
}
Three things that you are missing in your code:
Firstly, in your code you are looking at only the first element in your list.
Secondly, you are passing the site name to getAttribute instead of "href":
if(eachResult.getAttribute(getSiteName).equals(getSiteName)){
it should be:
if(eachResult.getAttribute("href").equals(getSiteName)){
Thirdly, on clicking next, the page is loaded via the Google Ajax API, so the webdriver click won't block execution of your code, and linkElements will be filled with the previous page's links only. To avoid this, let the driver refresh or wait for a certain condition in your code.
Can you try this code:
WebDriverWait wait = new WebDriverWait(driver, 10);
while (!flag) {
// get all the search results
linkElements = wait.until(ExpectedConditions
        .presenceOfAllElementsLocatedBy(By.xpath("//h3[@class='r']/a")));
for (WebElement eachResult : linkElements) {
if (eachResult.getAttribute("href").contains(getSiteName)) {
eachResult.click();
flag = true;
break;
}
}
if (!flag) {
driver.findElement(By.xpath("//a[@id='pnnext']/span[1]")).click();
pageNumber++;
linkElements.clear(); // clean list
wait.until(ExpectedConditions.textToBePresentInElementLocated(
        By.xpath("//td[@class='cur']"), pageNumber + "")); // Checking whether page number changed as expected.
}
}// end while loop
EDIT:
List<WebElement> linkElements = new ArrayList<WebElement>();
ListIterator<WebElement> itr = null;
System.setProperty("webdriver.chrome.driver",
"webdrivers/chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.manage().window().maximize(); // Maximize window
driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
driver.get("http://google.com");
WebElement element1 = driver.findElement(By.name("q"));
WebElement toClick = null;
element1.sendKeys(getText);
element1.submit();
// try{
int pageNumber = 1;
WebDriverWait wait = new WebDriverWait(driver, 10);
boolean flag = false;
while (!flag) {
linkElements = wait.until(ExpectedConditions
        .presenceOfAllElementsLocatedBy(By.xpath("//h3[@class='r']/a")));
itr = linkElements.listIterator(); // re-initializing iterator
while (itr.hasNext()) {
toClick = itr.next();
if (toClick.getAttribute("href").contains(getSiteName)) {
toClick.click();
flag = true;
break;
}
}
if (!flag) {
driver.findElement(By.xpath("//a[@id='pnnext']/span[1]")).click();
pageNumber++;
linkElements.clear(); // clean list
wait.until(ExpectedConditions.textToBePresentInElementLocated(
        By.xpath("//td[@class='cur']"), pageNumber + ""));
}
}
driver.quit(); // clear memory
}
It looks like you're moving to the next page every time ANY of the WebElements in linkElements isn't what you're looking for. This will cause problems, as you need to relocate any elements that are re-rendered.
Give this a shot:
boolean found = false;
int page_number = 1; //If you need this as a string, you can make it one later
while(! found){
//get all the search results
List<WebElement> linkElements = driver.findElements(By.xpath("//h3[#class='r']/a"));
for(WebElement result: linkElements){
if(result.getAttribute("href").equals(getSiteName))
{
result.click();
found=true;
break;
}
}//End of foreach-loop
if(!found){
driver.findElement(By.xpath("//a[#id='pnnext']/span")).click();
page_number++;
}
}//End of while-loop
Also, you'll want some element-finding protection. Say you search for something that has 0 results, or only one page of them (rare though that is). In the first case you're lucky, because driver.findElements() should just return an empty list rather than throwing an exception, and the foreach loop simply won't run; but in both cases the anchor #pnnext won't be present, so driver.findElement() will throw an exception when you search for it. There are several ways to protect against this, such as writing a small wrapper function (IIRC, there is a simple findElement-with-timeout implementation on the Selenium website somewhere). I suggest you pick or write one and start using it instead of the raw Selenium functions.
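For illustration, a minimal version of such a wrapper might look like the sketch below. This is not the Selenium-provided implementation; it simply polls with WebDriverWait and returns null instead of throwing when the element never appears:
public static WebElement findElementWithTimeout(WebDriver driver, By locator, int timeoutSeconds) {
    try {
        // Poll until the element is present or the timeout expires
        return new WebDriverWait(driver, timeoutSeconds)
                .until(ExpectedConditions.presenceOfElementLocated(locator));
    } catch (TimeoutException e) {
        return null; // caller decides how to handle a missing element
    }
}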
The task is to print the name and meaning columns of all pages using the page drop-down.
Try to build a script for the following:
1. Go to http://babynames.merschat.com/index.cgi?function=Search&origin=Sanskrit&gender=f
2. Print the name and meaning columns to syso (System.out).
I was able to print page 1, as it is the default page.
Here is the code:
public class BabyNamesAndMeanings {
WebDriver driver = new FirefoxDriver();
@BeforeClass
public void setUp() {
driver.get("http://babynames.merschat.com/index.cgi?function=Search&origin=Sanskrit&gender=f");
driver.manage().window().maximize();
}
@Test
public void printBabyNamesAndMeaningsOfFirstPage() {
WebElement baby_names = driver.findElement(By
        .xpath("//tbody/tr[7]/td[3]/table[2]/tbody/tr[2]/td[2]/font/table[1]/tbody"));
List<WebElement> names = baby_names.findElements(By.xpath("//tr/td[1]/font/a"));
List<WebElement> meanings = baby_names.findElements(By.xpath("//tr/td[4]/font/a"));
for (int i = 0; i < names.size(); i++) {
System.out.println("Name: " + names.get(i).getText()
+ " Meaning: " + meanings.get(i).getText());
}
}
}
I don't know how to loop through the rest of the options in the drop-down list at the bottom of the page and hit the submit button, so that the names and meanings on all the pages get printed.
There are 100+ pages.
Thanks in advance.
The code below will do your job.
driver.get("http://babynames.merschat.com/index.cgi?function=Search&origin=Sanskrit&gender=f");
List<WebElement> pageOptions = new Select(driver.findElement(By.xpath("//select[@name='page']"))).getOptions();//Get all options in dropdown
ArrayList<String> pageDd = new ArrayList<String>();
for(WebElement eachPage:pageOptions){
pageDd.add(eachPage.getText());//Save text of each option
}
int i=1;
for(String eachVal:pageDd){
new Select(driver.findElement(By.xpath("//select[@name='page']"))).selectByVisibleText(eachVal);//Select page
driver.findElement(By.xpath("//input[@value='Go']")).click();//Click on go
List<WebElement> names = driver.findElements(By.xpath("//a[contains(@title,' meanings and popularity')]"));//Get all names on page
for(WebElement eachName:names){
String name = eachName.getText(); //Get each name's text
WebElement mean = eachName.findElement(By.xpath("./../../..//a[contains(@title,'Names for baby with meanings like ')]"));//Get meaning for that name
String meaning = mean.getText();//Get text of meaning
System.out.println(i+") Name: " +name+ " Meaning: " + meaning);//Print the data
i++;
}
}
Try to understand how the requirement is achieved. If you have any doubts, ask.
Another way to iterate over and select all the drop-down values:
Select dropdown= new Select(WebUIDriver.webDr.findElement(By.xpath("enter xpath")));
int noOfDropDownValues= dropdown.getOptions().size()-1;
for(int i=0;i<noOfDropDownValues;i++){
new Select(WebUIDriver.webDr.findElement(By.xpath("enter xpath"))).selectByValue(String.valueOf(i));
}
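Note that selectByValue(String.valueOf(i)) only works if the option values really are "0", "1", and so on. Here is a sketch of the same loop using selectByIndex, which assumes only positional order and re-finds the <select> on every pass in case the page re-renders (the xpath placeholder is yours to fill in):
Select dropdown = new Select(WebUIDriver.webDr.findElement(By.xpath("enter xpath")));
int optionCount = dropdown.getOptions().size();
for (int i = 0; i < optionCount; i++) {
    // Re-find the select each iteration to avoid a stale reference
    new Select(WebUIDriver.webDr.findElement(By.xpath("enter xpath"))).selectByIndex(i);
}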