Navigating through an array of links in Selenium + Java

I need a script that navigates through online profiles and returns. I have some code that shows me how many online-profile links are on the page:
driver.get("http://mygirlfund.com");
driver.findElement(By.id("email")).sendKeys("somemail");
driver.findElement(By.id("password")).sendKeys("somepass");
driver.findElement(By.id("btn-submit")).submit();
driver.findElement(By.xpath(".//*[@id='btn-2i']/a")).click();
// log in
List<WebElement> allLinks = driver.findElements(By.xpath("//img[@alt='Online Now!']/../..//a"));
// miracle, have found links of all online profiles
System.out.println(allLinks.size());
for (int i = 1; i < allLinks.size(); i++)
{
    for (WebElement link : allLinks)
    {
        link.click();
        driver.navigate().back();
        // here write a message
    }
    i++;
    // navigating through user profiles
}
So I need to click a link and then return to the previous (main) page, but my script only navigates to the first link and returns.

What is the outer for-loop for? Why do you initialise i with 1 (instead of 0)? Why do you increment i twice? The inner loop should be sufficient:
List<WebElement> allLinks = driver.findElements(By.xpath("//img[@alt='Online Now!']/../..//a"));
for (WebElement link : allLinks) {
    link.click();
    driver.navigate().back();
}
Alternatively, you could retrieve the web elements one by one in a for loop like this (but this will throw an exception if there are fewer than 25 links):
for (int i = 0; i < 25; i++) {
    String xpath = "//img[@alt='Online Now!']/../..//a[" + (i + 1) + "]";
    WebElement link = driver.findElement(By.xpath(xpath));
    link.click();
    //....
}

I have discovered that when the page reloads, the sequence of profile links breaks down (the element references become invalid). So the solution was to open each profile link in a new window, do some action, and close it.
As the guys above said, using two loops was a poor decision. This code works perfectly for me:
for (WebElement link : driver.findElements(By.xpath("//img[@alt='Online Now!']/../..//a"))) {
    String originalWindow = driver.getWindowHandle();
    System.out.println("Original handle is: " + originalWindow);
    // open link in new window
    act.contextClick(link).perform();
    act.sendKeys("w").perform();
    Thread.sleep(4000);
    for (String newWindow : driver.getWindowHandles())
    {
        driver.switchTo().window(newWindow);
        System.out.println("NOW THE CURRENT Handle is: " + newWindow);
    }
    Thread.sleep(2000);
    // here write a message
    driver.close();
    driver.switchTo().window(originalWindow);
}
Note:
When I store the found links in a variable and use it in the loop:
List<WebElement> allLinks = driver.findElements(By.xpath("//img[@alt='Online Now!']/../..//a"));
// have found links of all online profiles
System.out.println(allLinks.size());
for (WebElement link : allLinks)
{
    String originalWindow = driver.getWindowHandle();
    System.out.println("Original handle is: " + originalWindow);
    // open link in new window
    act.contextClick(link).perform();
    act.sendKeys("w").perform();
    Thread.sleep(4000);
    // continue handling new window
my script opens just the first found link perpetually.
Maybe it will be useful for someone. Thanks all!

Related

Selenium | Element not interactable error: explored all the options on Stack Overflow

I am trying to get all dropdowns from a web page and select a value from each of them in one go.
I have attached a code snippet which gets all the dropdowns that are bootstrapped and sit under a ul tag on the web page.
I want to access the children (the li elements) of each ul tag and click on any of those children.
I am attaching a screenshot taken from the website.
It always says "element not interactable", even though it is a clickable element.
Please help.
Application screenshot
Code:
List<WebElement> dropDowns = webDriver.findElements(By.xpath("//ul[contains(@class,'dropdown')]"));
try { Thread.sleep(5000); } catch (Exception e) {}
for (WebElement webElement : dropDowns) {
    try {
        List<WebElement> elementList = webElement.findElements(By.xpath("//ul[contains(@class,'dropdown')]//li"));
        for (int i = 0; i < elementList.size(); i++) {
            elementList.get(i).click();
            Thread.sleep(3000);
        }
    }
    catch (Exception e) {
        System.out.println("-----------Error----------");
        continue;
    }
}
try { Thread.sleep(10000); }
catch (Exception e) {}
webDriver.quit();
}
I see the below issues in your code.
You are trying to use the webElement from the dropDowns list, which will throw a StaleElementReferenceException when you use webElement in the for loop.
Your code will perform the operation on the first dropdown every time, as you are not getting the dropdown based on its index.
You mentioned you want to select an item in the list, but you are clicking on each item in the dropdown.
Please try the below logic.
int dropDowns = webDriver.findElements(By.xpath("//ul[contains(@class,'dropdown')]")).size();
try { Thread.sleep(5000); } catch (Exception e) {}
JavascriptExecutor js = (JavascriptExecutor) webDriver;
for (int dropdownIndex = 0; dropdownIndex < dropDowns; dropdownIndex++) {
    WebElement dropdown = webDriver.findElements(By.xpath("//ul[contains(@class,'dropdown')]")).get(dropdownIndex);
    try {
        List<WebElement> elementList = dropdown.findElements(By.xpath(".//li"));
        for (int i = 0; i < elementList.size(); i++) { // not sure if you really want to click each item in the dropdown, hence not modified this part.
            WebElement item = elementList.get(i);
            js.executeScript("arguments[0].click()", item);
            Thread.sleep(3000);
        }
    }
    catch (Exception e) {
        System.out.println("-----------Error----------");
        continue;
    }
}

Locating WebElement using different locators(NoSuchElementException)

I am having a problem locating a WebElement using different locators. In the below HTML tag I tried locating the "Write a review" WebElement with different locators like linkText, xpath, and className, but I still get a NoSuchElementException.
--> url https://www.tripadvisor.in/ --> search for Club Mahindra --> click on Club Mahindra --> click on Write a review.
<a href="/UserReview-g641714-d1156207-Club_Mahindra_Madikeri_Coorg-Madikeri_Kodagu_Coorg_Karnataka.html" target="_blank" class="ui_button primary">Write a review</a>
Locators used
By.xpath("//*[@id='component_12']/div/div[2]/div/div[2]/div/div[1]/a")
By.xpath("//a[@href='/UserReview-g641714-d1156207-Club_Mahindra_Madikeri_Coorg-Madikeri_Kodagu_Coorg_Karnataka.html']")
By.className("ui_button primary")
By.linkText("Write a review")
I am really confused. What am I doing wrong?
I have tried to analyse and implement the same. Below are my findings:
-> The application's waiting time is high, as there are lots of dynamic loads on the page.
-> Proper waits need to be implemented.
-> Check whether all the pages open in the same tab or whether clicking each link redirects to a new tab; if so, we have to switch to that particular window.
-> The below code works like a pro for me.
driver.get("https://www.tripadvisor.in/");
WebDriverWait wait = new WebDriverWait(driver, 120);
WebElement ele1 = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("//*[text()='Where to?']")));
ele1.click();
WebElement ele2 = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("//*[@placeholder='Where to?']")));
ele2.sendKeys("club mahindra, india");
WebElement ele3 = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("//span[contains(text(),'Search for ')]")));
ele3.click();
WebElement ele4 = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("//span[contains(text(),'Club Mahindra Madikeri, Coorg')]")));
ele4.click(); // this click leads to a new tab
Set<String> winHandles = driver.getWindowHandles();
for (String str : winHandles) {
    driver.switchTo().window(str);
}
System.out.println(driver.getTitle());
WebElement ele;
int i = 1;
while (true) {
    try {
        ele = wait.until(ExpectedConditions.elementToBeClickable(By.xpath("//*[text()='Write a review']")));
        break;
    } catch (Exception e) {
        System.out.print(i++);
    }
}
System.out.println();
Actions action = new Actions(driver);
action.moveToElement(ele);
ele.click();
System.out.println("Clicked on the 'Write a review' button");
You can try:
//a[contains(text(),'Write a review')]

Eliminating duplicate links on a webpage and avoiding the stale-element error

I have a list of 20 links and some of them are duplicates. I click the first link, which leads me to the next page, and I download some files from that page.
Page 1
Link 1
Link 2
Link 3
link 1
link 3
link 4
link 2
Link 1 (click) --> (opens) Page 2
Page 2 (click back button browser) --> (goes back to) Page 1
Now I click on Link 2 and repeat the same thing.
System.setProperty("webdriver.chrome.driver", "C:\\chromedriver.exe");
String fileDownloadPath = "C:\\Users\\Public\\Downloads";
// Set properties to suppress popups
Map<String, Object> prefsMap = new HashMap<String, Object>();
prefsMap.put("profile.default_content_settings.popups", 0);
prefsMap.put("download.default_directory", fileDownloadPath);
prefsMap.put("plugins.always_open_pdf_externally", true);
prefsMap.put("safebrowsing.enabled", "false");
// assign driver properties
ChromeOptions option = new ChromeOptions();
option.setExperimentalOption("prefs", prefsMap);
option.addArguments("--test-type");
option.addArguments("--disable-extensions");
option.addArguments("--safebrowsing-disable-download-protection");
option.addArguments("--safebrowsing-disable-extension-blacklist");
WebDriver driver = new ChromeDriver(option);
driver.get("http://www.mywebpage.com/");
List<WebElement> listOfLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')]"));
Thread.sleep(500);
int pageSize = listOfLinks.size();
System.out.println("The number of links in the page is: " + pageSize);
String linkText;
// iterate through all the links on the page
for (int i = 0; i < pageSize; i++)
{
    System.out.println("Clicking on link: " + i);
    try
    {
        linkText = listOfLinks.get(i).getText();
        listOfLinks.get(i).click();
    }
    catch (org.openqa.selenium.StaleElementReferenceException ex)
    {
        listOfLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')]"));
        linkText = listOfLinks.get(i).getText();
        listOfLinks.get(i).click();
    }
    try
    {
        driver.findElement(By.xpath("//span[contains(@title,'download')]")).click();
    }
    catch (org.openqa.selenium.NoSuchElementException ee)
    {
        driver.navigate().back();
        Thread.sleep(300);
        continue;
    }
    Thread.sleep(300);
    driver.navigate().back();
    Thread.sleep(100);
}
The code works fine, clicks on all the links, and downloads the files. Now I need to improve the logic to omit the duplicate links. I tried to filter out the duplicates in the list, but then I am not sure how I should handle the org.openqa.selenium.StaleElementReferenceException. The solution I am looking for is to click on the first occurrence of a link and avoid clicking on it if it re-occurs.
(This is part of a complex logic to download multiple files from a portal that I don't have control over. So please don't come back with questions like why there are duplicate links on the page in the first place.)
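The "click only the first occurrence" behaviour can be prototyped without a browser. The sketch below uses hypothetical link texts in place of `WebElement.getText()` values and collects the indices of first occurrences with a `HashSet`; the answers that follow apply the same idea to the real element list.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FirstOccurrenceDemo {
    // Returns the indices at which each distinct link text first appears,
    // in page order. Later duplicates are skipped.
    static List<Integer> firstOccurrenceIndices(List<String> linkTexts) {
        Set<String> seen = new HashSet<>();
        List<Integer> indices = new ArrayList<>();
        for (int i = 0; i < linkTexts.size(); i++) {
            // Set.add returns false when the text was already seen
            if (seen.add(linkTexts.get(i))) {
                indices.add(i);
            }
        }
        return indices;
    }

    public static void main(String[] args) {
        // Hypothetical link texts mirroring the page layout in the question
        List<String> links = Arrays.asList(
                "Link 1", "Link 2", "Link 3", "Link 1", "Link 3", "Link 4", "Link 2");
        System.out.println(firstOccurrenceIndices(links)); // [0, 1, 2, 5]
    }
}
```

You would then click `listOfLinks.get(i)` only for the indices returned, re-finding the list after each navigation to avoid stale references.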
First, I don't suggest making repeated requests (findElements) to the WebDriver; you will see a lot of performance issues down that path, mainly if you have a lot of links and pages.
Also, if you always do everything in the same tab, you have to wait for two refreshes per iteration (the page with the links and the download page); if you open each link in a new tab, you only need to wait for the refresh of the page where you will download.
My suggestion: distinct the repeated links as @supputuri said and open each link in a NEW tab. This way you don't need to handle staleness, don't need to search the screen for the links every time, and don't need to wait for the links page to refresh on each iteration.
List<WebElement> uniqueLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]"));
for (int i = 0; i < uniqueLinks.size(); i++)
{
    new Actions(driver)
        .keyDown(Keys.CONTROL)
        .click(uniqueLinks.get(i))
        .keyUp(Keys.CONTROL)
        .build()
        .perform();
    // if you want, you can create the array here on this line instead of creating it inside the loop body below.
    driver.switchTo().window(new ArrayList<>(driver.getWindowHandles()).get(1));
    // do your wait stuff.
    driver.findElement(By.xpath("//span[contains(@title,'download')]")).click();
    // do your wait stuff.
    driver.close();
    driver.switchTo().window(new ArrayList<>(driver.getWindowHandles()).get(0));
}
I'm not in a place where I can test my code properly right now; if there are any issues with this code, just comment and I will update the answer, but the idea is right and pretty simple.
First, let's look at the XPath.
Sample HTML:
<!DOCTYPE html>
<html>
<body>
<div>
  <a href='https://google.com'>Google</a>
  <a href='https://yahoo.com'>Yahoo</a>
  <a href='https://google.com'>Google</a>
  <a href='https://msn.com'>MSN</a>
</div>
</body>
</html>
Let's see the XPath that gets the distinct links out of the above.
//a[not(@href = following::a/@href)]
The logic in the XPath is that we make sure the href of each link does not match the href of any following link; if it matches, the link is considered a duplicate and the XPath does not return that element.
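If you want to sanity-check that predicate outside Selenium, you can run it against the sample HTML with the JDK's built-in XPath engine. A minimal sketch (note that this predicate keeps the *last* occurrence of a duplicated href, since only links with a matching *following* href are filtered out):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class XPathDedupDemo {
    public static void main(String[] args) throws Exception {
        // The sample HTML from above, as well-formed XML
        String html = "<html><body><div>"
                + "<a href='https://google.com'>Google</a>"
                + "<a href='https://yahoo.com'>Yahoo</a>"
                + "<a href='https://google.com'>Google</a>"
                + "<a href='https://msn.com'>MSN</a>"
                + "</div></body></html>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(html.getBytes(StandardCharsets.UTF_8)));
        XPath xp = XPathFactory.newInstance().newXPath();
        // Same predicate as the Selenium locator: drop links whose href
        // reappears later in the document
        NodeList unique = (NodeList) xp.evaluate(
                "//a[not(@href = following::a/@href)]", doc, XPathConstants.NODESET);
        for (int i = 0; i < unique.getLength(); i++) {
            System.out.println(unique.item(i).getTextContent());
        }
        // prints Yahoo, Google, MSN: the first Google is the one filtered out
    }
}
```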
Stale Element:
So, now it's time to handle the stale element issue in your code.
The moment you click on Link 1, all the references stored in listOfLinks become invalid, as Selenium assigns new references to the elements each time they load on the page. When you try to access an element through an old reference, you get a StaleElementReferenceException.
Here is the snippet of code that should give you an idea.
List<WebElement> listOfLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]"));
Thread.sleep(500);
pageSize = listOfLinks.size();
System.out.println("The number of links in the page is: " + pageSize);
// iterate through all the links on the page
for (int i = 0; i < pageSize; i++)
{
    // ===> consider adding a step that explicitly waits, using WebDriverWait, for the link element with the "//a[contains(@href,'Link')][not(@href = following::a/@href)]" xpath to be present
    // don't hard-code the sleep
    // ===> added this line
    WebElement link = driver.findElements(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]")).get(i);
    System.out.println("Clicking on link: " + i);
    // ===> updated next 2 lines
    linkText = link.getText();
    link.click();
    // ===> consider adding an explicit wait using WebDriverWait to make sure the span exists before clicking.
    driver.findElement(By.xpath("//span[contains(@title,'download')]")).click();
    // ===> check this answer (https://stackoverflow.com/questions/34548041/selenium-give-file-name-when-downloading/56570364#56570364) to make sure the download is complete before clicking browser back, rather than sleeping for x seconds.
    driver.navigate().back();
    // ===> removed hard-coded wait time (sleep)
}
Edit1:
If you want to open the links in a new window, then use the below logic.
WebDriverWait wait = new WebDriverWait(driver, 20);
wait.until(ExpectedConditions.presenceOfAllElementsLocatedBy(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]")));
List<WebElement> listOfLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]"));
JavascriptExecutor js = (JavascriptExecutor) driver;
for (WebElement link : listOfLinks) {
    // get the href
    String href = link.getAttribute("href");
    // open the link in a new tab
    js.executeScript("window.open('" + href + "')");
    // switch to the new tab
    ArrayList<String> tabs = new ArrayList<String>(driver.getWindowHandles());
    driver.switchTo().window(tabs.get(1));
    // click on download
    // close the new tab
    driver.close();
    // switch to the parent window
    driver.switchTo().window(tabs.get(0));
}
You can do it like this:
Save the index of each element in the list to a Hashtable.
If the Hashtable already contains the key, skip it.
Once done, the Hashtable holds only unique elements, i.e. the first-found ones.
The values of the Hashtable are the indices from listOfLinks.
Hashtable<String, Integer> hs1 = new Hashtable<>();
for (int i = 0; i < listOfLinks.size(); i++) {
    WebElement e = listOfLinks.get(i);
    if (!hs1.containsKey(e.getText())) {
        hs1.put(e.getText(), i);
    }
}
for (int i : hs1.values()) {
    listOfLinks.get(i).click();
}
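One caveat with the Hashtable approach: java.util.Hashtable does not preserve insertion order, so the values may come back out of page order. If the click order matters, a LinkedHashMap keeps keys in first-insertion order. A browser-free sketch with hypothetical link texts standing in for `getText()` values:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderedDedupDemo {
    public static void main(String[] args) {
        // Hypothetical getText() values for the links on the page
        String[] linkTexts = {"Link 1", "Link 2", "Link 1", "Link 3", "Link 2"};
        // LinkedHashMap iterates keys in first-insertion order, so the stored
        // indices come back in page order; Hashtable makes no such guarantee
        Map<String, Integer> firstIndex = new LinkedHashMap<>();
        for (int i = 0; i < linkTexts.length; i++) {
            firstIndex.putIfAbsent(linkTexts[i], i);
        }
        System.out.println(firstIndex.values()); // [0, 1, 3]
    }
}
```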

How to read youtube comments using Selenium?

I'm trying to read youtube video comments using the following code:
FirefoxDriver driver = new FirefoxDriver();
driver.get("https://www.youtube.com/watch?v=JcbBNpYkuW4");
WebElement element = driver.findElementByCssSelector("#watch-discussion");
System.out.println(element.getText()); // this prints: loading..
// scroll down so that comments start to load
driver.executeScript("window.scrollBy(0,500)", "");
Thread.sleep(10000);
element = driver.findElementByCssSelector("#watch-discussion");
System.out.println(element.getText());
Last statement prints an empty string. Why?
It is a little tricky, because all the comments are written inside a separate iframe tag within the watch-discussion element. You will have to switch to that iframe first using driver.switchTo().frame("put ID or name here");, but the iframe id is a random value. After switching to that iframe you can find all the comments in divs that have the class name 'Ct', so you can get those using XPath. See the working code below:
FirefoxDriver driver = new FirefoxDriver();
driver.get("https://www.youtube.com/watch?v=JcbBNpYkuW4");
WebElement element = driver.findElementByCssSelector("#watch-discussion");
System.out.println(element.getText()); // this prints: loading..
// scroll down so that comments start to load
driver.executeScript("window.scrollBy(0,500)", "");
Thread.sleep(20000);
List<WebElement> iframes = driver.findElements(By.xpath("//iframe"));
for (WebElement e : iframes) {
    if (e.getAttribute("id") != null && e.getAttribute("id").startsWith("I0_")) {
        // switch to the iframe which contains the comments
        driver.switchTo().frame(e);
        break;
    }
}
// fetch all comments
List<WebElement> comments = driver.findElements(By.xpath("//div[@class='Ct']"));
for (WebElement e : comments) {
    System.out.println(e.getText());
}
I suggest you try this API, which is very easy and reliable, instead of relying on the XPath of the elements. You cannot rely on XPath for dynamic pages/content anyway.

Search an element in all pages in Selenium WebDriver (Pagination)

I need to search for particular text in a table across all the pages. Say I have to search for the text "xxx", and this text is present in the 5th row of the table on the 3rd page.
I have tried with some code :
List<WebElement> allrows = table.findElements(By.xpath("//div[@id='table']/table/tbody/tr"));
List<WebElement> allpages = driver.findElements(By.xpath("//div[@id='page-navigation']//a"));
System.out.println("Total pages: " + allpages.size());
for (int i = 0; i <= (allpages.size()); i++)
{
    for (int row = 1; row <= allrows.size(); row++)
    {
        System.out.println("Total rows: " + allrows.size());
        String name = driver.findElement(By.xpath("//div[@id='table']/table/tbody/tr[" + row + "]/td[1]")).getText();
        //System.out.println(name);
        System.out.println("Row loop");
        if (name.contains("xxxx"))
        {
            WebElement editbutton = table.findElement(By.xpath("//div[@id='table']/table/tbody/tr[" + row + "]/td[3]"));
            editbutton.click();
            break;
        }
        else
        {
            System.out.println("Element doesn't exist");
        }
        allpages = driver.findElements(By.xpath("//div[@id='page-navigation']//a"));
    }
    allpages = driver.findElements(By.xpath("//div[@id='page-navigation']//a"));
    driver.manage().timeouts().pageLoadTimeout(5, TimeUnit.SECONDS);
    allpages.get(i).click();
}
Sorry, I forgot to describe the error. The code executes properly: it checks for the element "xxx" on each row of every page and clicks on editbutton when it is found.
After that it moves to
allpages.get(i).click(); // code to click on pages
But it is unable to find any pagination, so it displays the error "Element is not clickable at point (893, 731). Other element would receive the click....".
For every page loop you use one table WebElement object, so I assume that after going to the next page you get a StaleElementReferenceException. The solution could be to redefine table on every page loop: move the line List<WebElement> allrows = table.findElements(By.xpath("//div[@id='table']/table/tbody/tr")); inside the outer loop, after for (int i = 0; i <= (allpages.size()); i++), too.
EDIT: And, by the way, at the line allpages.get(i).click() I think you must click the next page link, not the current one, as it seems you do now.
