Search an element in all pages in Selenium WebDriver (Pagination) - java

I need to search for particular text in a table across all pages. Say I have to search for the text "xxx", and it is present in the 5th row of the table on the 3rd page.
Here is the code I have tried:
List<WebElement> allrows = table.findElements(By.xpath("//div[@id='table']/table/tbody/tr"));
List<WebElement> allpages = driver.findElements(By.xpath("//div[@id='page-navigation']//a"));
System.out.println("Total pages: " + allpages.size());
for (int i = 0; i <= (allpages.size()); i++)
{
    for (int row = 1; row <= allrows.size(); row++)
    {
        System.out.println("Total rows: " + allrows.size());
        String name = driver.findElement(By.xpath("//div[@id='table']/table/tbody/tr[" + row + "]/td[1]")).getText();
        //System.out.println(name);
        System.out.println("Row loop");
        if (name.contains("xxxx"))
        {
            WebElement editbutton = table.findElement(By.xpath("//div[@id='table']/table/tbody/tr[" + row + "]/td[3]"));
            editbutton.click();
            break;
        }
        else
        {
            System.out.println("Element doesn't exist");
        }
        allpages = driver.findElements(By.xpath("//div[@id='page-navigation']//a"));
    }
    allpages = driver.findElements(By.xpath("//div[@id='page-navigation']//a"));
    driver.manage().timeouts().pageLoadTimeout(5, TimeUnit.SECONDS);
    allpages.get(i).click();
}
Sorry, I missed describing the error. This code executes properly: it checks for the element "xxx" on each row of every page and clicks on editbutton when it is found.
After that it moves to
allpages.get(i).click(); // code to click through the pages
but it is unable to find any pagination, so it fails with the error "Element is not clickable at point (893, 731). Other element would receive the click...."

For every page loop you use one table WebElement object, so I assume that after going to the next page you get a StaleElementReferenceException. I guess the solution is to re-define table on every page loop: move the line List<WebElement> allrows = table.findElements(By.xpath("//div[@id='table']/table/tbody/tr")); after for(int i=0; i<=(allpages.size()); i++) too.
EDIT: And, by the way, at the line allpages.get(i).click() I think you must click the next page link, not the current one as it seems to be.
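A minimal sketch of that fix, re-locating the rows and the pagination links on every page so no stale references are reused. The locators and the search text come from the question; the next-page handling (clicking link i+1) is an assumption about how this pager works, so adjust it to the real page:

```java
List<WebElement> allpages = driver.findElements(By.xpath("//div[@id='page-navigation']//a"));

search:
for (int i = 0; i < allpages.size(); i++) {
    // Re-locate the rows on every page so we never hold stale references
    List<WebElement> allrows = driver.findElements(By.xpath("//div[@id='table']/table/tbody/tr"));
    for (int row = 1; row <= allrows.size(); row++) {
        String name = driver.findElement(
                By.xpath("//div[@id='table']/table/tbody/tr[" + row + "]/td[1]")).getText();
        if (name.contains("xxx")) {
            driver.findElement(
                    By.xpath("//div[@id='table']/table/tbody/tr[" + row + "]/td[3]")).click();
            break search; // found and clicked the edit button, stop searching
        }
    }
    // Re-locate the pagination links, then click the NEXT page, not the current one
    allpages = driver.findElements(By.xpath("//div[@id='page-navigation']//a"));
    if (i + 1 < allpages.size()) {
        allpages.get(i + 1).click();
    }
}
```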

Related

Pagination code required to fetch all links and perform action?

I need to write Selenium-with-Java code to perform a font and size check on all images, but there is pagination where the default page size is 50. How do I perform the font and size check on each page? I have attached my code, but it checks only the first page.
Scenario:
1. On the parent page only 50 links are displayed. I need to click on each record and perform the font/size check on that page.
2. I then need to click on the next page and perform the same font/size check; after completing it, the script should navigate back to the parent page and click on the other records, and so on. Note that there is pagination on the parent page as well.
List<WebElement> list = driver.findElements(By.xpath("//a[@class='primary-cell-text link']"));
System.out.println(list.size());
//Here pagination code is required ???
ArrayList<String> hrefs = new ArrayList<String>(); //List for storing all href values
for (WebElement var : list) {
    System.out.println(var.getText()); // fetch the text present between the anchor tags
    System.out.println(var.getAttribute("href"));
    hrefs.add(var.getAttribute("href"));
    System.out.println("*************************************");
}
//Navigating to each link
int i = 0;
for (String href : hrefs) {
    driver.navigate().to(href);
    System.out.println((++i) + ": navigated to URL with href: " + href);
    Thread.sleep(3000); // To check if the navigation is done properly.
    System.out.println("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++");
    Thread.sleep(5000);
    //Here pagination code is required ????
    //Below code is to perform the size check
    List<WebElement> block = driver.findElements(By.xpath("//*[text()='High' or text()='Medium' or text()='Low']/preceding-sibling::div"));
    for (int b = 0; b < block.size(); b++) {
        String h = block.get(b).getCssValue("height");
        String w = block.get(b).getCssValue("width");
        System.out.println(h);
        System.out.println(w);
        if (h.equals("16px") && w.equals("16px")) {
            System.out.println("Height/Width is matching");
        } else {
            System.out.println("Height/Width not matching");
        }
    }
    Thread.sleep(3000);
    //High color check
    List<WebElement> high = driver.findElements(By.xpath("//*[text()='High']/preceding-sibling::div"));
    for (int h = 0; h < high.size(); h++) {
        WebElement hvar = driver.findElement(By.xpath("(//*[text()='High']/preceding-sibling::div)[" + (h + 1) + "]"));
        String highColor = hvar.getCssValue("background-color");
        System.out.println(highColor);
        String hexHighColor = Color.fromString(highColor).asHex();
        System.out.println(hexHighColor);
        if (hexHighColor.equals("#e11900")) {
            System.out.println(": High color is matching, i.e. #e11900");
        } else {
            System.out.println(": High color is not matching #e11900");
        }
    }
}
You have to add logic for going to the next page: as on most web pages, you either scroll down further or click the next-page control to paginate. Then, for every page you land on, you can call a separate function that performs your CSS checks on it.
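A rough sketch of that idea. The nextPageXpath locator and the checkCssValuesOnCurrentPage method are placeholders: substitute your page's real "next" control and your existing font/size/color checks:

```java
void checkAllPages(WebDriver driver) throws InterruptedException {
    // Placeholder locator: replace with the real "next page" control of your site
    String nextPageXpath = "//a[@aria-label='Next Page']";
    while (true) {
        checkCssValuesOnCurrentPage(driver); // your existing per-page checks
        List<WebElement> next = driver.findElements(By.xpath(nextPageXpath));
        if (next.isEmpty() || !next.get(0).isEnabled()) {
            break; // no further pages
        }
        next.get(0).click();
        Thread.sleep(2000); // better: an explicit WebDriverWait for the new page's rows
    }
}
```

Using findElements (plural) for the next control avoids a NoSuchElementException on the last page; an empty list simply ends the loop.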

Selenium | "Element not interactable" error: explored all the options on Stack Overflow

I am trying to get all drop-downs from a web page and select a value from each in one go.
I have attached a code snippet which gets all the drop-downs, which are bootstrapped and sit under a ul tag on the web page.
I want to access the children of each ul tag (the li elements under it) and click on any of those children.
It always says "element not interactable" even though it is a clickable element.
Please help.
Code:
List<WebElement> dropDowns = webDriver.findElements(By.xpath("//ul[contains(@class,'dropdown')]"));
try { Thread.sleep(5000); } catch (Exception e) {}
for (WebElement webElement : dropDowns) {
    try {
        List<WebElement> elementList = webElement.findElements(By.xpath("//ul[contains(@class,'dropdown')]//li"));
        for (int i = 0; i < elementList.size(); i++) {
            elementList.get(i).click();
            Thread.sleep(3000);
        }
    } catch (Exception e) {
        System.out.println("-----------Error----------");
        continue;
    }
}
try { Thread.sleep(10000); } catch (Exception e) {}
webDriver.quit();
I see the below issues in your code.
You are trying to reuse the webElement objects from the dropDowns list, which will throw a stale element exception when you use them inside the for loop.
Your code performs the operation on the first dropdown every time, because you are not getting the dropdown by its index.
You mentioned you want to select one item in the list, but you are clicking on each item in the dropdown.
Please try the below logic.
int dropDowns = webDriver.findElements(By.xpath("//ul[contains(@class,'dropdown')]")).size();
try { Thread.sleep(5000); } catch (Exception e) {}
JavascriptExecutor js = (JavascriptExecutor) webDriver;
for (int dropdownIndex = 0; dropdownIndex < dropDowns; dropdownIndex++) {
    WebElement dropdown = webDriver.findElements(By.xpath("//ul[contains(@class,'dropdown')]")).get(dropdownIndex);
    try {
        List<WebElement> elementList = dropdown.findElements(By.xpath(".//li"));
        for (int i = 0; i < elementList.size(); i++) { // not sure if you really want to click each item in the dropdown, hence not modified this part
            WebElement item = elementList.get(i);
            js.executeScript("arguments[0].click()", item);
            Thread.sleep(3000);
        }
    } catch (Exception e) {
        System.out.println("-----------Error----------");
        continue;
    }
}

Eliminating duplicate links on the webpage and avoiding the stale-element error

I have a list of 20 links and some of them are duplicates. I click on the first link, which leads me to the next page, and I download some files from that page.
Page 1
Link 1
Link 2
Link 3
link 1
link 3
link 4
link 2
Link 1 (click) --> (opens) Page 2
Page 2 (click back button browser) --> (goes back to) Page 1
Now I click on Link 2 and repeat the same thing.
System.setProperty("webdriver.chrome.driver", "C:\\chromedriver.exe");
String fileDownloadPath = "C:\\Users\\Public\\Downloads";
//Set properties to suppress popups
Map<String, Object> prefsMap = new HashMap<String, Object>();
prefsMap.put("profile.default_content_settings.popups", 0);
prefsMap.put("download.default_directory", fileDownloadPath);
prefsMap.put("plugins.always_open_pdf_externally", true);
prefsMap.put("safebrowsing.enabled", "false");
//Assign driver properties
ChromeOptions option = new ChromeOptions();
option.setExperimentalOption("prefs", prefsMap);
option.addArguments("--test-type");
option.addArguments("--disable-extensions");
option.addArguments("--safebrowsing-disable-download-protection");
option.addArguments("--safebrowsing-disable-extension-blacklist");
WebDriver driver = new ChromeDriver(option);
driver.get("http://www.mywebpage.com/");
List<WebElement> listOfLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')]"));
Thread.sleep(500);
int pageSize = listOfLinks.size();
String linkText;
System.out.println("The number of links in the page is: " + pageSize);
//Iterate through all the links on the page
for (int i = 0; i < pageSize; i++) {
    System.out.println("Clicking on link: " + i);
    try {
        linkText = listOfLinks.get(i).getText();
        listOfLinks.get(i).click();
    } catch (org.openqa.selenium.StaleElementReferenceException ex) {
        listOfLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')]"));
        linkText = listOfLinks.get(i).getText();
        listOfLinks.get(i).click();
    }
    try {
        driver.findElement(By.xpath("//span[contains(@title,'download')]")).click();
    } catch (org.openqa.selenium.NoSuchElementException ee) {
        driver.navigate().back();
        Thread.sleep(300);
        continue;
    }
    Thread.sleep(300);
    driver.navigate().back();
    Thread.sleep(100);
}
The code works fine: it clicks on all the links and downloads the files. Now I need to improve the logic to omit the duplicate links. I tried to filter the duplicates out of the list, but then I am not sure how I should handle the org.openqa.selenium.StaleElementReferenceException. The solution I am looking for is to click on the first occurrence of a link and avoid clicking on it if it re-occurs.
(This is part of a complex logic to download multiple files from a portal that I don't have control over, so please don't come back with questions like why there are duplicate links on the page in the first place.)
First, I don't suggest making repeated findElements requests to the WebDriver; you will see a lot of performance issues following this path, especially if you have a lot of links and pages.
Also, if you do everything in the same tab, you need to wait for two page loads per link (the page of links and the download page); if you open each link in a new tab instead, you only need to wait for the download page to load.
So my suggestion: de-duplicate the links as @supputuri said and open each link in a NEW tab. This way you don't need to handle staleness, don't need to search the screen for the links every time, and don't need to wait for the links page to reload on each iteration.
List<WebElement> uniqueLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]"));
for (int i = 0; i < uniqueLinks.size(); i++) {
    new Actions(driver)
        .keyDown(Keys.CONTROL)
        .click(uniqueLinks.get(i))
        .keyUp(Keys.CONTROL)
        .build()
        .perform();
    // If you want, you can create the array here instead of creating it inside the call below.
    driver.switchTo().window(new ArrayList<>(driver.getWindowHandles()).get(1));
    //do your wait stuff
    driver.findElement(By.xpath("//span[contains(@title,'download')]")).click();
    //do your wait stuff
    driver.close();
    driver.switchTo().window(new ArrayList<>(driver.getWindowHandles()).get(0));
}
I'm not in a place where I can test this code properly right now; if there are any issues with it, just comment and I will update the answer, but the idea is right and it's pretty simple.
First, let's look at the XPath.
Sample HTML:
<!DOCTYPE html>
<html>
<body>
<div>
<a href='https://google.com'>Google</a>
<a href='https://yahoo.com'>Yahoo</a>
<a href='https://google.com'>Google</a>
<a href='https://msn.com'>MSN</a>
</div>
</body>
</html>
Let's see the XPath that gets the distinct links out of the above:
//a[not(@href = following::a/@href)]
The logic in the XPath is that we make sure the href of a link does not match the href of any following link; if it does match, the link is considered a duplicate and the XPath does not return that element.
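This axis trick can be checked outside Selenium with the JDK's built-in XPath engine on the sample HTML above. One thing worth noting: following:: keeps the last occurrence of each duplicate href; to keep the first occurrence, as the question asks, use preceding::a/@href in the same predicate instead:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class DistinctLinks {

    static final String SAMPLE =
            "<html><body><div>"
          + "<a href='https://google.com'>Google</a>"
          + "<a href='https://yahoo.com'>Yahoo</a>"
          + "<a href='https://google.com'>Google</a>"
          + "<a href='https://msn.com'>MSN</a>"
          + "</div></body></html>";

    // Returns the text of links whose href is not repeated by any LATER link
    static List<String> distinctLinkTexts(String html) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(html.getBytes(StandardCharsets.UTF_8)));
        NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath().evaluate(
                "//a[not(@href = following::a/@href)]", doc, XPathConstants.NODESET);
        List<String> texts = new ArrayList<>();
        for (int i = 0; i < nodes.getLength(); i++) {
            texts.add(nodes.item(i).getTextContent());
        }
        return texts;
    }

    public static void main(String[] args) throws Exception {
        // The first Google is dropped because a later link shares its href
        System.out.println(distinctLinkTexts(SAMPLE)); // [Yahoo, Google, MSN]
    }
}
```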
Stale Element:
So, now it's time to handle the stale element issue in your code.
The moment you click on Link 1, all the references stored in listOfLinks become invalid: Selenium assigns new references to the elements each time they load on the page, and when you try to access an element through an old reference you get the stale element exception.
Here is the snippet of code that should give you an idea.
List<WebElement> listOfLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]"));
Thread.sleep(500);
pageSize = listOfLinks.size();
System.out.println("The number of links in the page is: " + pageSize);
//iterate through all the links on the page
for (int i = 0; i < pageSize; i++) {
    // ===> consider adding an explicit wait with WebDriverWait for the link elements located by the "//a[contains(@href,'Link')][not(@href = following::a/@href)]" xpath to be present
    // don't hard-code the sleep
    // ===> added this line
    WebElement link = driver.findElements(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]")).get(i);
    System.out.println("Clicking on link: " + i);
    // ===> updated next 2 lines
    linkText = link.getText();
    link.click();
    // ===> consider adding an explicit wait using WebDriverWait to make sure the span exists before clicking
    driver.findElement(By.xpath("//span[contains(@title,'download')]")).click();
    // ===> check this answer (https://stackoverflow.com/questions/34548041/selenium-give-file-name-when-downloading/56570364#56570364) to make sure the download is completed before clicking browser back, rather than sleeping for x seconds
    driver.navigate().back();
    // ===> removed hard-coded wait time (sleep)
}
Edit1:
If you want to open the link in the new window then use the below logic.
WebDriverWait wait = new WebDriverWait(driver, 20);
wait.until(ExpectedConditions.presenceOfAllElementsLocatedBy(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]")));
List<WebElement> listOfLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]"));
JavascriptExecutor js = (JavascriptExecutor) driver;
for (WebElement link : listOfLinks) {
    // get the href
    String href = link.getAttribute("href");
    // open the link in a new tab
    js.executeScript("window.open('" + href + "')");
    // switch to the new tab
    ArrayList<String> tabs = new ArrayList<String>(driver.getWindowHandles());
    driver.switchTo().window(tabs.get(1));
    //click on download
    //close the new tab
    driver.close();
    // switch to the parent window
    driver.switchTo().window(tabs.get(0));
}
You can do it like this:
Save the index of each element in the list into a Hashtable, keyed by the link text.
If the Hashtable already contains that text, skip it.
Once done, the Hashtable holds only unique elements, i.e. the first-found ones.
The values of the Hashtable are the indexes into listOfLinks.
Hashtable<String, Integer> hs1 = new Hashtable<>();
for (int i = 0; i < listOfLinks.size(); i++) {
    String text = listOfLinks.get(i).getText();
    if (!hs1.containsKey(text)) {
        hs1.put(text, i);
    }
}
for (int i : hs1.values()) {
    listOfLinks.get(i).click();
}
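A note on that approach: a plain Hashtable does not preserve insertion order, so the clicks may happen out of page order. If you de-duplicate by href instead of link text, a LinkedHashSet gives you first-occurrence order for free. The hrefs below are made-up placeholders standing in for the values collected via getAttribute("href"):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class DedupHrefs {

    // Keeps the FIRST occurrence of each href, in original page order
    static List<String> firstOccurrences(List<String> hrefs) {
        return new ArrayList<>(new LinkedHashSet<>(hrefs));
    }

    public static void main(String[] args) {
        List<String> hrefs = Arrays.asList(
                "link1.html", "link2.html", "link3.html",
                "link1.html", "link3.html", "link4.html", "link2.html");
        System.out.println(firstOccurrences(hrefs));
        // [link1.html, link2.html, link3.html, link4.html]
    }
}
```

In the Selenium flow you would fill the list with link.getAttribute("href") values, then driver.navigate().to(href) for each unique entry, which sidesteps staleness entirely because no WebElement reference outlives a navigation.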

Reading webpage's inspect element data using java

I have a requirement: I am reading from a dynamic web page, and the values I require from the webpage lie within
<td>
tags; this is visible when I inspect the element. So my question is: is it somehow possible to print the data contained in the inspected element using Java?
Using jsoup; here is the cookbook.
Document doc = Jsoup.connect("http://www.whatever.com").get();
ArrayList<String> downServers = new ArrayList<>();
Element table = doc.select("table").get(0);
Elements rows = table.select("tr");
for (int i = 1; i < rows.size(); i++) {
    Element row = rows.get(i);
    Elements cols = row.select("td");
    // Use cols.get(index).text() to get the data from the td element
}
I found the solution to this one; leaving this answer in case anyone gets stuck on this in the future.
Whatever you see inside inspect-element can be tracked down using Selenium.
Here's the code which I used:
WebDriver driver = new ChromeDriver();
driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
driver.manage().window().maximize();
driver.get("http://www.whatever.com");
Thread.sleep(1000);
List<WebElement> frameList = driver.findElements(By.tagName("frame"));
System.out.println(frameList.size());
driver.switchTo().frame(0);
String temp = driver.findElement(By.xpath("/html/body/table/thead/tr/td/div[2]/table/thead/tr[2]/td[2]")).getText();

Navigating through an array of profile links in Selenium + Java

I need a script that will navigate through online profiles and return. I have some code that tells me how many online-profile links are on the page:
driver.get("http://mygirlfund.com");
driver.findElement(By.id("email")).sendKeys("somemail");
driver.findElement(By.id("password")).sendKeys("somepass");
driver.findElement(By.id("btn-submit")).submit();
driver.findElement(By.xpath(".//*[@id='btn-2i']/a")).click();
// log in
List<WebElement> allLinks = driver.findElements(By.xpath("//img[@alt='Online Now!']/../..//a"));
// miracle: have found the links of all online profiles
System.out.println(allLinks.size());
for (int i = 1; i < allLinks.size(); i++)
{
    for (WebElement link : allLinks)
    {
        link.click();
        driver.navigate().back();
        // here write a message
    }
    i++;
    // navigating through user profiles
}
So I need to click on a link, then return to the previous (main) page, but it only navigates to the first link and returns back.
What is the outer for-loop for? Why do you initialise i with 1 (instead of 0)? Why do you increment i twice? The inner loop should be sufficient:
List<WebElement> allLinks = driver.findElements(By.xpath("//img[@alt='Online Now!']/../..//a"));
for (WebElement link : allLinks) {
    link.click();
    driver.navigate().back();
}
Alternatively, you could retrieve the web elements one by one in a for loop like this (but this will throw an exception if there are fewer than 25 links):
for (int i = 0; i < 25; i++) {
    String xpath = "//img[@alt='Online Now!']/../..//a[" + (i + 1) + "]";
    WebElement link = driver.findElement(By.xpath(xpath));
    link.click();
    //....
}
I have discovered that when the webpage refreshes, the sequence of profile links breaks down. So the decision was to open each profile link in a new window, do some actions there, and close it.
As the guys above said, using two loops was a bad decision. This code works perfectly for me:
for (WebElement link : driver.findElements(By.xpath("//img[@alt='Online Now!']/../..//a"))) {
    String originalWindow = driver.getWindowHandle();
    System.out.println("Original handle is: " + originalWindow);
    // open the link in a new window
    act.contextClick(link).perform();
    act.sendKeys("w").perform();
    Thread.sleep(4000);
    for (String newWindow : driver.getWindowHandles())
    {
        driver.switchTo().window(newWindow);
        System.out.println("NOW THE CURRENT handle is: " + newWindow);
    }
    Thread.sleep(2000);
    // here write a message
    driver.close();
    driver.switchTo().window(originalWindow);
}
Note: when I store the found links in a variable and use that in the loop:
List<WebElement> allLinks = driver.findElements(By.xpath("//img[@alt='Online Now!']/../..//a"));
// have found the links of all online profiles
System.out.println(allLinks.size());
for (WebElement link : allLinks)
{
    String originalWindow = driver.getWindowHandle();
    System.out.println("Original handle is: " + originalWindow);
    // open the link in a new window
    act.contextClick(link).perform();
    act.sendKeys("w").perform();
    Thread.sleep(4000);
    // continue handling the new window
my script opens just the first found link perpetually.
Maybe it will be useful for someone. Thanks all!
