Pagination code required to fetch all links and perform action? - java

I need to write Selenium-with-Java code that performs a font and size check on every image, but there is pagination, with a default page size of 50. How do I perform the font and size check on each page? I have attached my code, but it checks only the first page.
Scenario:
1. The parent page displays only 50 links. I need to click each record and perform the font/size check on the page it opens.
2. I then need to click the next page and perform the same font/size check. After completing it, the test should navigate back to the parent page, click the next record, and so on. The parent page has pagination as well.
List<WebElement> list = driver.findElements(By.xpath("//a[@class='primary-cell-text link']"));
System.out.println(list.size());
// Here pagination code is required ???
ArrayList<String> hrefs = new ArrayList<String>(); // list for storing all href values
for (WebElement var : list) {
    System.out.println(var.getText()); // fetch the text present between the anchor tags
    System.out.println(var.getAttribute("href"));
    hrefs.add(var.getAttribute("href"));
    System.out.println("*************************************");
}
// Navigating to each link
int i = 0;
for (String href : hrefs) {
    driver.navigate().to(href);
    System.out.println((++i) + ": navigated to URL with href: " + href);
    Thread.sleep(3000); // to check if the navigation is done properly
    System.out.println("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++");
    Thread.sleep(5000);
    // Here pagination code is required ????
    // Below code is to perform the height/width check
    List<WebElement> block = driver.findElements(By.xpath("//*[text()='High' or text()='Medium' or text()='Low']/preceding-sibling::div"));
    for (int b = 0; b < block.size(); b++) {
        String h = block.get(b).getCssValue("height");
        String w = block.get(b).getCssValue("width");
        System.out.println(h);
        System.out.println(w);
        if (h.equals("16px") && w.equals("16px")) {
            System.out.println("Height/Width is matching");
        } else {
            System.out.println("Height/Width is not matching");
        }
    }
    Thread.sleep(3000);
    // High color check
    List<WebElement> high = driver.findElements(By.xpath("//*[text()='High']/preceding-sibling::div"));
    for (int h = 0; h < high.size(); h++) {
        WebElement hvar = driver.findElement(By.xpath("(//*[text()='High']/preceding-sibling::div)[" + (h + 1) + "]"));
        String highColor = hvar.getCssValue("background-color");
        System.out.println(highColor);
        String hexHighColor = Color.fromString(highColor).asHex();
        System.out.println(hexHighColor);
        if (hexHighColor.equals("#e11900")) {
            System.out.println(": High color is matching, i.e. #e11900");
        } else {
            System.out.println(": High color is not matching #e11900");
        }
    }
}

You have to add logic for moving to the next page: on web pages, pagination is handled either by scrolling further down or by clicking a "next page" control.
Then, for each page you reach, you can call a separate function that runs your CSS checks on it, as in the sketch below.
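A minimal sketch of that structure, assuming the "Next" control is an anchor with the text "Next" (a placeholder locator, not taken from your application) and that your existing height/width/color assertions are extracted into a hypothetical checkFontAndSize helper:
// Sketch only: the "Next" locator below is a placeholder.
while (true) {
    // Collect the record links on the current parent page.
    List<WebElement> records = driver.findElements(By.xpath("//a[@class='primary-cell-text link']"));
    List<String> hrefs = new ArrayList<>();
    for (WebElement record : records) {
        hrefs.add(record.getAttribute("href"));
    }
    // Visit each record and run the checks (your existing code moved into a helper).
    for (String href : hrefs) {
        driver.navigate().to(href);
        checkFontAndSize(driver); // hypothetical helper wrapping your assertions
        driver.navigate().back();
    }
    // Move to the next parent page, or stop when there is no usable "Next" control.
    List<WebElement> next = driver.findElements(By.xpath("//a[text()='Next']")); // placeholder
    if (next.isEmpty() || !next.get(0).isEnabled()) {
        break;
    }
    next.get(0).click();
}
Note that if the parent page resets to its first page after driver.navigate().back(), you would also need to restore the current page (for example, by tracking a page counter) before collecting the next batch of links.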

Related

eliminating duplicate links on the webpage and avoid link is stale error

I have a list of 20 links and some of them are duplicates. I click on the first link, which leads me to the next page, and I download some files from that page.
Page 1
Link 1
Link 2
Link 3
link 1
link 3
link 4
link 2
Link 1 (click) --> (opens) Page 2
Page 2 (click back button browser) --> (goes back to) Page 1
Now I click on Link 2 and repeat the same thing.
System.setProperty("webdriver.chrome.driver", "C:\\chromedriver.exe");
String fileDownloadPath = "C:\\Users\\Public\\Downloads";
// Set properties to suppress popups
Map<String, Object> prefsMap = new HashMap<String, Object>();
prefsMap.put("profile.default_content_settings.popups", 0);
prefsMap.put("download.default_directory", fileDownloadPath);
prefsMap.put("plugins.always_open_pdf_externally", true);
prefsMap.put("safebrowsing.enabled", "false");
// Assign driver properties
ChromeOptions option = new ChromeOptions();
option.setExperimentalOption("prefs", prefsMap);
option.addArguments("--test-type");
option.addArguments("--disable-extensions");
option.addArguments("--safebrowsing-disable-download-protection");
option.addArguments("--safebrowsing-disable-extension-blacklist");
WebDriver driver = new ChromeDriver(option);
driver.get("http://www.mywebpage.com/");
List<WebElement> listOfLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')]"));
Thread.sleep(500);
int pageSize = listOfLinks.size();
System.out.println("The number of links in the page is: " + pageSize);
String linkText;
// Iterate through all the links on the page
for (int i = 0; i < pageSize; i++) {
    System.out.println("Clicking on link: " + i);
    try {
        linkText = listOfLinks.get(i).getText();
        listOfLinks.get(i).click();
    } catch (org.openqa.selenium.StaleElementReferenceException ex) {
        listOfLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')]"));
        linkText = listOfLinks.get(i).getText();
        listOfLinks.get(i).click();
    }
    try {
        driver.findElement(By.xpath("//span[contains(@title,'download')]")).click();
    } catch (org.openqa.selenium.NoSuchElementException ee) {
        driver.navigate().back();
        Thread.sleep(300);
        continue;
    }
    Thread.sleep(300);
    driver.navigate().back();
    Thread.sleep(100);
}
The code is working fine and clicks on all the links and downloads the files. Now I need to improve the logic to omit the duplicate links. I tried to filter out the duplicates in the list, but then I am not sure how I should handle the org.openqa.selenium.StaleElementReferenceException. The solution I am looking for is to click on the first occurrence of a link and avoid clicking on it if it re-occurs.
(This is part of a complex workflow for downloading multiple files from a portal that I don't have control over, so please don't come back with questions like why there are duplicate links on the page in the first place.)
First, I don't suggest making repeated findElements requests to the WebDriver; you will run into a lot of performance issues down that path, especially if you have many links and pages.
Also, if you always do everything in the same tab, you have to wait for two page loads per link (the page of links and the download page); if you open each link in a new tab instead, you only need to wait for the download page to load.
My suggestion: de-duplicate the links as @supputuri said and open each link in a NEW tab. That way you don't need to handle staleness, don't need to search the screen for the links on every iteration, and don't need to wait for the links page to reload each time.
List<WebElement> uniqueLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]"));
for (int i = 0; i < uniqueLinks.size(); i++) {
    new Actions(driver)
            .keyDown(Keys.CONTROL)
            .click(uniqueLinks.get(i))
            .keyUp(Keys.CONTROL)
            .build()
            .perform();
    // If you want, you can create the array here on this line instead of inside the call below.
    driver.switchTo().window(new ArrayList<>(driver.getWindowHandles()).get(1));
    // do your wait stuff
    driver.findElement(By.xpath("//span[contains(@title,'download')]")).click();
    // do your wait stuff
    driver.close();
    driver.switchTo().window(new ArrayList<>(driver.getWindowHandles()).get(0));
}
I'm not in a place where I can test this code properly right now; if there are any issues with it, just comment and I will update the answer, but the idea is right and it's pretty simple.
First, let's look at the XPath.
Sample HTML:
<!DOCTYPE html>
<html>
<body>
<div>
<a href='https://google.com'>Google</a>
<a href='https://yahoo.com'>Yahoo</a>
<a href='https://google.com'>Google</a>
<a href='https://msn.com'>MSN</a>
</div>
</body>
</html>
Here is the XPath to get the distinct links out of the above:
//a[not(@href = following::a/@href)]
The logic in the XPath is that we make sure the href of each link does not match the href of any following link; if it does match, the link is considered a duplicate and the XPath does not return that element.
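To check this against the sample HTML above, a quick sketch that prints the hrefs the XPath returns (assuming driver is already pointed at the sample page):
List<WebElement> distinct = driver.findElements(By.xpath("//a[not(@href = following::a/@href)]"));
for (WebElement a : distinct) {
    // Prints https://yahoo.com/, https://google.com/, https://msn.com/ --
    // note this XPath keeps the last occurrence of a duplicated href.
    System.out.println(a.getAttribute("href"));
}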
Stale Element:
So, now it's time to handle the stale element issue in your code.
The moment you click on Link 1, all the references stored in listOfLinks become invalid: Selenium assigns new references to the elements each time they load on the page, and when you try to access an element through an old reference you get the stale element exception.
Here is the snippet of code that should give you an idea.
List<WebElement> listOfLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]"));
Thread.sleep(500);
pageSize = listOfLinks.size();
System.out.println("The number of links in the page is: " + pageSize);
// Iterate through all the links on the page
for (int i = 0; i < pageSize; i++) {
    // ===> consider adding a step to explicitly wait for the link element with the
    // "//a[contains(@href,'Link')][not(@href = following::a/@href)]" xpath using WebDriverWait;
    // don't hard-code the sleep
    // ===> added this line
    WebElement link = driver.findElements(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]")).get(i);
    System.out.println("Clicking on link: " + i);
    // ===> updated next 2 lines
    linkText = link.getText();
    link.click();
    // ===> consider adding an explicit wait using WebDriverWait to make sure the span exists before clicking
    driver.findElement(By.xpath("//span[contains(@title,'download')]")).click();
    // ===> check this answer (https://stackoverflow.com/questions/34548041/selenium-give-file-name-when-downloading/56570364#56570364) to make sure the download is completed before clicking browser back, rather than sleeping for x seconds
    driver.navigate().back();
    // ===> removed hard-coded wait time (sleep)
}
Edit1:
If you want to open the link in a new window, use the logic below.
WebDriverWait wait = new WebDriverWait(driver, 20);
wait.until(ExpectedConditions.presenceOfAllElementsLocatedBy(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]")));
List<WebElement> listOfLinks = driver.findElements(By.xpath("//a[contains(@href,'Link')][not(@href = following::a/@href)]"));
JavascriptExecutor js = (JavascriptExecutor) driver;
for (WebElement link : listOfLinks) {
    // get the href
    String href = link.getAttribute("href");
    // open the link in a new tab
    js.executeScript("window.open('" + href + "')");
    // switch to the new tab
    ArrayList<String> tabs = new ArrayList<String>(driver.getWindowHandles());
    driver.switchTo().window(tabs.get(1));
    // click on download
    // close the new tab
    driver.close();
    // switch to the parent window
    driver.switchTo().window(tabs.get(0));
}
You can do it like this:
Save the index of each element in the list into a Hashtable, keyed by the link text.
If the Hashtable already contains that text, skip it.
Once done, the Hashtable holds only unique elements, i.e. the first ones found.
The values of the Hashtable are the indexes into listOfLinks.
Hashtable<String, Integer> hs1 = new Hashtable<String, Integer>();
for (int i = 0; i < listOfLinks.size(); i++) {
    WebElement e = listOfLinks.get(i);
    if (!hs1.containsKey(e.getText())) {
        hs1.put(e.getText(), i);
    }
}
for (int i : hs1.values()) {
    listOfLinks.get(i).click();
}

While fetching all links, ignore the logout link and continue navigation in Selenium Java

I am fetching all the links on the page and navigating to each of them.
One of the links is Logout.
How do I skip/ignore the Logout link in the loop?
I want to skip the Logout link and proceed.
List<WebElement> demovar = driver.findElements(By.tagName("a"));
System.out.println(demovar.size());
ArrayList<String> hrefs = new ArrayList<String>(); // list for storing all href values of 'a' tags
for (WebElement var : demovar) {
    System.out.println(var.getText()); // used to get the text between the anchor tags
    System.out.println(var.getAttribute("href"));
    hrefs.add(var.getAttribute("href"));
    System.out.println("*************************************");
}
int logoutlinkIndex = 0;
for (WebElement linkElement : demovar) {
    if (linkElement.getText().equals("Log Out")) {
        logoutlinkIndex = demovar.indexOf(linkElement);
        break;
    }
}
demovar.remove(logoutlinkIndex);
// Navigating to each link
int i = 0;
for (String href : hrefs) {
    driver.navigate().to(href);
    System.out.println((++i) + ": navigated to URL with href: " + href);
    Thread.sleep(5000); // to check if the navigation is happening properly
    System.out.println("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++");
}
If you want to leave the Logout link out of the loop, then instead of creating the list as driver.findElements(By.tagName("a")); you can use, as an alternative:
driver.findElements(By.xpath("//a[not(contains(.,'Log Out'))]"));
Reference
You can find a couple of relevant discussions in:
Protractor Conditional Selector
How to locate the button element using Selenium through Python
What does contains(., 'some text') refers to within xpath used in Selenium
How does dot(.) in xpath to take multiple form in identifying an element and matching a text
A Java approach to removing the "not interesting" link using the Stream.filter() function (note the negated filter, so that every link except Log Out is kept):
List<String> hrefs = driver.findElements(By.tagName("a"))
        .stream()
        .filter(link -> !link.getText().equals("Log Out"))
        .map(link -> link.getAttribute("href"))
        .collect(Collectors.toList());
Using the XPath != operator to collect only the links whose text is not equal to Log Out:
List<String> hrefs = driver.findElements(By.xpath("//a[text() != 'Log Out']"))
        .stream()
        .map(link -> link.getAttribute("href"))
        .collect(Collectors.toList());

Selenium and Java: How do I get all of the text after a WebElement

I am coding a program in Java using WebDriver and am having a little bit of trouble getting the text after the select webElement.
The HTML code for the part of the website that I want is as follows:
<select name="language" id="langSelect" style="width:100px;">
<option value="1" >Français</option>
</select>
</div>
<div id="content">
<div id="Pagination"></div>
<div id="mid">
</div>
</div>
The textbox class renders a search bar and a drop-down menu of languages.
My Java code is currently able to open Chrome using the ChromeDriver and to type into the search bar. I am, however, not able to get the text that results from the entry.
[Screenshot: "avoir" entered in the search bar, with result boxes below]
In the screenshot, I entered "avoir" into the search bar, and I want all of the text inside the boxes that follow, which do not seem to have any ids or names that could be used in an XPath.
Can someone please help me find out how to get and save the text from those fields after the drop-down language menu?
Thank you in advance!
The code I have so far:
// import statements not shown
public class WebScraper {

    public WebScraper() {
    }

    public WebDriver driver = new ChromeDriver();

    public void openTestSite() {
        driver.navigate().to(the URL for the website);
    }

    public void enter(String word) {
        WebElement query_editbox = driver.findElement(By.id("query"));
        query_editbox.sendKeys(word);
        query_editbox.sendKeys(Keys.RETURN);
    }

    public void getText() {
        //List<WebElement> searchResults = driver.findElements(By.xpath("//div[@id='mid']/div"));
        //Writer writer = new BufferedWriter(new OutputStreamWriter(new FileOutputStream("status.txt"), "utf-8"));
        //int[] index = {0};
        WebElement result = driver.findElement(By.id("mid"));
        System.out.println(result.getText());
    }

    public static void main(String[] args) throws IOException {
        System.setProperty("webdriver.chrome.driver", "chromedriver");
        System.out.println("Hello");
        WebScraper webScraper = new WebScraper();
        webScraper.openTestSite();
        webScraper.enter("avoir");
        webScraper.getText();
        System.out.println("Hello");
    }
}
I have laid out three approaches to extracting the text from the result box. Please check all of them and use the one you need.
If you want to extract all of the text, you can find the element of the result box and then get the text from it:
WebElement result=driver.findElement(By.id("mid"));
System.out.println(result.getText());
If you want to extract the text section by section, you can go with the approach below:
List<WebElement> sectionList = driver.findElements(By.xpath("//div[@id='mid']/div"));
int i = 0;
for (WebElement element : sectionList) {
    System.out.println("Section " + i + ":" + element.getText());
    i++;
}
If you want to extract the text from a specific section, you can do it with the approach below:
List<WebElement> sectionList = driver.findElements(By.xpath("//div[@id='mid']/div"));
int i = 0;
// In order to get the Section 3 content
int section = 2;
for (WebElement element : sectionList) {
    if (section == i) {
        System.out.println("Section " + i + ":" + element.getText());
    }
    i++;
}
Edit: to address the follow-up question.
I would suggest using an explicit wait after any action that results in an element being rendered. After making the following modifications to your code, I am getting the result as expected.
In the openTestSite method, I added an explicit wait to ensure the page has loaded after navigating to the URL.
In the enter method, you actually get an autocomplete suggestion after entering the query value, so the value needs to be selected from the autocomplete.
In the getText method, the search result takes more time to appear, so we need to add an explicit wait on one of the dynamically loaded element locators.
Code:
openTestSite Method:
public void openTestSite() {
    //driver.navigate().to(the URL for the website);
    driver.get("https://wonef.fr/try/");
    driver.manage().window().maximize();
    // Explicit wait is added after the page load
    WebDriverWait wait = new WebDriverWait(driver, 20);
    wait.until(ExpectedConditions.titleContains("WoNeF"));
}
enter Method:
public void enter(String word) {
    WebElement query_editbox = driver.findElement(By.id("query"));
    query_editbox.sendKeys(word);
    // Autocomplete is happening even after sending the Enter key,
    // so the value needs to be selected from the autocomplete.
    WebDriverWait wait = new WebDriverWait(driver, 20);
    wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//div[@class='autocomplete']/div")));
    List<WebElement> matchedList = driver.findElements(By.xpath("//div[@class='autocomplete']/div"));
    System.out.println(matchedList.size());
    for (WebElement element : matchedList) {
        if (element.getText().equalsIgnoreCase(word)) {
            element.click();
        }
    }
    //query_editbox.sendKeys(Keys.RETURN);
}
getText Method:
public void getText() {
    WebDriverWait wait = new WebDriverWait(driver, 20);
    wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//div[@id='mid']/div")));
    WebElement result = driver.findElement(By.id("mid"));
    System.out.println(result.getText());
}
I have tested with the above modified code and it is working fine.
In order to inspect the relevant results for your query, a common strategy is to load a list of search results:
List<WebElement> searchResults = driver.findElements(By.xpath("//div[@id='mid']/div"));
Now you can use a stream to iterate over the list and extract the relevant text by getting the text from the child elements of each result:
int[] index = {0};
searchResults.stream().forEach(result -> {
    System.out.println("Printing query result of index: " + index[0]);
    result.findElements(By.xpath(".//*")).stream().forEach(webElement -> {
        try {
            System.out.println(webElement.getText());
        } catch (Exception e) {
            // Do nothing
        }
    });
    index[0]++;
});
And you would get the text of each result printed in the output.

What are the ways of selecting element from auto-suggestion dropdown?

Typing into the input field using sendKeys.
The auto-suggestion comes up. The HTML element is as follows:
<div angucomplete-alt id="ac-{{row.RowId}}" placeholder="Search materials..." maxlength="50" pause="100" selected-object="selectedCon" ng-click="selectedConRow(row)" local-data="materialsConsumables" search-fields="MaterialName" title-field="MaterialName" initial-value="row.materialSelected" minlength="1" input-class="autocomplete" match-class="highlight"></div>
Collecting all the suggestions into a List and selecting one of the elements when it matches my desired string.
The value is selected and populated into the input field.
Filling in all the other mandatory fields.
Clicking "Submit". But an error shows that this input field is empty even though it has a value. (P.S.: Manually it works; there is no problem with the application itself.)
Code:
String producttoSelect = "0007950137 - BSS 500ML GLASS -CDN";
WebElement ConProduct1 = objDriver.findElement(By.xpath("//*[@class=\"con_Material ng-isolate-scope\"]/div/input"));
ConProduct1.sendKeys("0007950137 - BSS 500ML GLASS -CDN");
try {
    Thread.sleep(5000);
} catch (Exception e) {
}
List<WebElement> productList = objDriver.findElements(By.xpath("//*[@class=\"con_Material ng-isolate-scope\"]/div/div"));
for (WebElement optionP : productList) {
    System.out.println(optionP.getText());
    if (optionP.getText().equals(producttoSelect)) {
        System.out.println("Trying to select Product: " + optionP.getText());
        optionP.click();
        break;
    }
}
Instead of Thread.sleep, try using an explicit wait until the list is populated (declaring the WebDriverWait the snippet relies on):
WebDriverWait wait = new WebDriverWait(objDriver, 10);
By aaa = By.xpath("//*[@class=\"con_Material ng-isolate-scope\"]/div/div");
List<WebElement> suggestList = wait.until(ExpectedConditions.visibilityOfAllElementsLocatedBy(aaa));
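From there, the waited-for list can feed your existing selection loop; a sketch reusing the producttoSelect value from your snippet:
for (WebElement optionP : suggestList) {
    if (optionP.getText().equals(producttoSelect)) {
        optionP.click();
        break;
    }
}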

Search an element in all pages in Selenium WebDriver (Pagination)

I need to search for a particular text in a table across all the pages. Say I have to search for a text (e.g. "xxx"), and this text is present in the 5th row of the table on the 3rd page.
I have tried this code:
List<WebElement> allrows = table.findElements(By.xpath("//div[@id='table']/table/tbody/tr"));
List<WebElement> allpages = driver.findElements(By.xpath("//div[@id='page-navigation']//a"));
System.out.println("Total pages :" + allpages.size());
for (int i = 0; i <= (allpages.size()); i++) {
    for (int row = 1; row <= allrows.size(); row++) {
        System.out.println("Total rows :" + allrows.size());
        String name = driver.findElement(By.xpath("//div[@id='table']/table/tbody/tr[" + row + "]/td[1]")).getText();
        //System.out.println(name);
        System.out.println("Row loop");
        if (name.contains("xxxx")) {
            WebElement editbutton = table.findElement(By.xpath("//div[@id='table']/table/tbody/tr[" + row + "]/td[3]"));
            editbutton.click();
            break;
        } else {
            System.out.println("Element doesn't exist");
        }
        allpages = driver.findElements(By.xpath("//div[@id='page-navigation']//a"));
    }
    allpages = driver.findElements(By.xpath("//div[@id='page-navigation']//a"));
    driver.manage().timeouts().pageLoadTimeout(5, TimeUnit.SECONDS);
    allpages.get(i).click();
}
Sorry, I missed describing the error. This code executes properly: it checks each row of every page for the element "xxx" and clicks on editbutton when it is found.
After that it moves on to
allpages.get(i).click(); // code to click on the pages
but it is unable to find any pagination, so it displays the error "Element is not clickable at point (893, 731). Other element would receive the click...."
For every page loop you use a single table WebElement object, so I assume that after going to the next page you get a StaleElementReferenceException. I guess the solution would be to redefine table on every page loop: move the line List<WebElement> allrows = table.findElements(By.xpath("//div[@id='table']/table/tbody/tr")); inside the for (int i = 0; ...) loop too.
EDIT: And, by the way, at the line allpages.get(i).click() I think you must click the next page link, not the current one, as it seems you do now. A sketch of both fixes follows.
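Roughly, the two fixes combined would look like this (a sketch that re-finds the rows and pagination links on every iteration and clicks the following page; it assumes the loop runs inside a method so return can stop the search once the edit button is clicked):
List<WebElement> allpages = driver.findElements(By.xpath("//div[@id='page-navigation']//a"));
for (int i = 0; i < allpages.size(); i++) {
    // Re-find the rows on every page so no stale references are reused.
    List<WebElement> allrows = driver.findElements(By.xpath("//div[@id='table']/table/tbody/tr"));
    for (int row = 1; row <= allrows.size(); row++) {
        String name = driver.findElement(By.xpath("//div[@id='table']/table/tbody/tr[" + row + "]/td[1]")).getText();
        if (name.contains("xxxx")) {
            driver.findElement(By.xpath("//div[@id='table']/table/tbody/tr[" + row + "]/td[3]")).click();
            return; // stop searching once the edit button is clicked
        }
    }
    // Re-find the pagination links too, then click the NEXT page, not the current one.
    allpages = driver.findElements(By.xpath("//div[@id='page-navigation']//a"));
    if (i + 1 < allpages.size()) {
        allpages.get(i + 1).click();
    }
}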
