Java Selenium - navigating with the website's page navigator gets stuck

I am trying to move through website pages using the website's page navigator, but it always stops clicking at page 13.
This is the page navigator:
This is my code:
for (int i = 1; i <= numOfPages; i++) {
    customWebDriver.getWebDriver().findElement(By.className("css-3a6490")).click();
    Thread.sleep(3000);
}
This is the HTML structure:
I have also tried using the element under css-3a6490, css-15a7b5o:
for (int i = 1; i <= numOfPages; i++) {
    customWebDriver.getWebDriver().findElement(By.className("css-15a7b5o")).click();
    Thread.sleep(3000);
}
but that didn't work either.
Does anyone know what the problem is?
Thanks

Try with:
customWebDriver.getWebDriver().findElement(By.xpath("//button[@class='css-19a323y']//following-sibling::button[1]")).click();
Explanation of the XPath: a button with class css-19a323y (which is the currently selected page), then the following sibling button (which is the next available page).
Also allow more time than Thread.sleep(3000); pagination sometimes takes longer to load, so either add more time or, better, use WebDriverWait with ExpectedConditions. Here is an example:
WebDriverWait wait = new WebDriverWait(webDriver, timeoutInSeconds);
wait.until(ExpectedConditions.elementToBeClickable(By.id("yourElementId")));
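Since the sibling locator depends on the class name of the currently selected page button, it can help to build it in one place. A minimal sketch; the helper name is mine, and the hard-coded class `css-19a323y` is the one from the question's HTML, so treat both as assumptions:

```java
// Builds an XPath that selects the button immediately following the
// currently selected page button (identified by its CSS class).
public class NextPageLocator {

    // currentClass is assumed to mark the selected page button
    public static String nextPageXpath(String currentClass) {
        return "//button[@class='" + currentClass + "']/following-sibling::button[1]";
    }

    public static void main(String[] args) {
        System.out.println(nextPageXpath("css-19a323y"));
        // In the Selenium loop you would then do something like (sketch, not verified):
        // wait.until(ExpectedConditions.elementToBeClickable(
        //         By.xpath(nextPageXpath("css-19a323y")))).click();
    }
}
```

The advantage over a fixed class like css-3a6490 is that the locator follows the selection as it moves, instead of always clicking whatever element happens to carry that styling class.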

Related

getPageSource is not locating item - Java Selenium

I am trying to write code where a user types in "Python", hits Enter, and the app "Python tutorial Tutorials Point" is on the 4th page. My logic is to locate this item using getPageSource and keep clicking the Next button. However, I see 2 problems:
1) Pages found: 0 - I am expecting a lot more pages, and the XPath shows 56+.
2) It keeps clicking the Next button and goes past page 6, meaning it did not find the book on page 4.
public class AmazonProductSearchTest {
    public static WebDriver driver;
    public static void main(String[] args) {
        System.setProperty("webdriver.chrome.driver", "C:\\Users\\Downloads\\chromedriver_win32\\chromedriver.exe");
        driver = new ChromeDriver();
        //WebDriverWait wait = new WebDriverWait(driver, 30);
        driver.get("https://www.amazon.com/");
        driver.findElement(By.xpath("//input[@id='twotabsearchtextbox']")).sendKeys("Python");
        driver.findElement(By.xpath("//input[@value='Go']")).click();
        // Implicit wait till page loads
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
        // Scroll down to find pagination
        JavascriptExecutor jse = (JavascriptExecutor) driver;
        jse.executeScript("window.scrollBy(0,5000)", "");
        // Number of pages
        List<WebElement> pagesFound = driver.findElements(By.xpath("//a[contains(@href, '/s?k=python')]"));
        System.out.println("Pages found " + pagesFound.size());
        // Will click on the Next link until we find a specific book - using page source
        while (!driver.getPageSource().equals("Python tutorial Tutorials Point")) {
            driver.findElement(By.xpath("//a[contains(text(),'Next')]")).click();
            if (driver.getPageSource().equals("Python tutorial Tutorials Point")) {
                System.out.println("Found searched item");
                break;
            }
        }
    }
}
I could use some hint/help. Thanks for your time.
pagesFound is referring to the Next button (//a[contains(@href, '/s?k=python')]). I believe pagesFound should use the last (disabled) page number, which reflects the actual number of pages, not the Next button. I copy-pasted the above code and the list was empty.
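A separate issue in the question's loop: getPageSource() returns the entire HTML document, so comparing it to a short title with equals() is always false, which is why the loop never stops at page 4. contains() is the check the loop actually needs. A minimal sketch with a plain string (the fake HTML here is purely illustrative):

```java
// equals() compares the whole page source to the title (always false);
// contains() checks whether the title appears anywhere in the source.
public class PageSourceCheck {

    public static boolean hasItem(String pageSource, String title) {
        return pageSource.contains(title);
    }

    public static void main(String[] args) {
        String fakeSource = "<html><body>Python tutorial Tutorials Point</body></html>";
        System.out.println(fakeSource.equals("Python tutorial Tutorials Point")); // false
        System.out.println(hasItem(fakeSource, "Python tutorial Tutorials Point")); // true
    }
}
```

In the Selenium loop this would mean replacing `driver.getPageSource().equals(...)` with `driver.getPageSource().contains(...)`.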

How to load lazy content on Linkedin search page using selenium

Summary
I am trying to scrape all first connections' profile links of an account on the LinkedIn search page. But since the page loads the rest of the content dynamically (as you scroll down), I cannot get to the 'Next' page button, which is at the end of the page.
Problem description
https://linkedin.com/search/results/people/?facetGeoRegion=["tr%3A0"]&facetNetwork=["F"]&origin=FACETED_SEARCH&page=YOUR_PAGE_NUMBER
I can navigate to the search page using Selenium and the link above. I want to know how many pages there are so I can navigate them all just by changing the page= variable in the link above.
To implement that, I wanted to check for the existence of the Next button. As long as there is a Next button, I would request the next page for scraping. But if you do not scroll down to the bottom of the page - which is where the 'Next' button is - you cannot find the Next button, nor the information about the other profiles, because they are not loaded yet.
Here is how it looks when you do not scroll down and take a screenshot of the whole page using firefox screenshot tool.
How I implemented
I can fix this by hard-coding a scroll-down action into my code and making the driver wait for visibilityOfElementLocated. But I was wondering whether there is any better way than my approach. Also, if the driver somehow cannot find the Next button with this approach, the program exits with exit code 1.
And when I inspect the requests made as I scroll down the page, they are just requests for images and the like, as you can see below. I couldn't figure out how the page loads more profile info as I scroll down.
Source code
Here is how I implemented it in my code. This app is just a simple implementation which is trying to find the Next button on the page.
package com.andreyuhai;
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
public class App
{
    WebDriver driver;
    public static void main( String[] args )
    {
        Bot bot = new Bot("firefox", false, false, 0, 0, null, null, null);
        int pagination = 1;
        bot.get("https://linkedin.com");
        if (bot.attemptLogin("username", "pw")) {
            bot.get("https://www.linkedin.com/" +
                    "search/results/people/?facetGeoRegion=" +
                    "[\"tr%3A0\"]&origin=FACETED_SEARCH&page=" + pagination);
            JavascriptExecutor js = (JavascriptExecutor) bot.driver;
            js.executeScript("scrollBy(0, 2500)");
            WebDriverWait wait = new WebDriverWait(bot.driver, 10);
            wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//button[@class='next']/div[@class='next-text']")));
            WebElement nextButton = bot.driver.findElement(By.xpath("//button[@class='next']/div[@class='next-text']"));
            if (nextButton != null) {
                System.out.println("Next Button found");
                nextButton.click();
            } else {
                System.out.println("Next Button not found");
            }
        }
    }
}
Another tool for this that I wonder about: LinkedIn Spider
There is a Chrome extension called LinkedIn Spider.
It also does exactly what I am trying to achieve, using JavaScript I guess (I am not sure). But when I run this extension on the same search page, it does not do any scrolling down or load pages one by one to extract the data.
So my questions are:
Could you please explain to me how LinkedIn achieves this? I mean, how does it load profile information as I scroll down without making any visible requests? I really don't know about this and would appreciate any source links or explanations.
Do you have any better (i.e., faster) idea for implementing what I am trying to implement?
Could you please explain to me how LinkedIn Spider could work without scrolling down?
I have checked the div structure and the way LinkedIn shows the results. If you hit the URL directly and check with the following XPath: //li[contains(@class,'search-result')] you will find that all the results are already loaded on the page, but LinkedIn shows only 5 results in one go and, on scrolling, shows the next 5. All the results are already loaded on the page and can be found with the mentioned XPath.
Refer to this image which highlights the div structure and results when you find the results on entering the xpath on hitting the url: https://imgur.com/Owu4NPh and
Refer to this image which highlights the div structure and results after scrolling the page to the bottom and then finding the results using the same xpath: https://imgur.com/7WNR830
You can see the result set is the same; however, there is an additional search-result__occlusion-hint part in the <li> tag of the last 5 results, and through this LinkedIn hides the next 5 results, showing only the first 5 on the first go.
Now comes the implementation part. I have checked that the "Next" button appears only once you have scrolled through all the results on the page. So instead of scrolling to fixed coordinates, which can change across screen sizes and windows, you can take the results in a list of WebElements, get its size, and scroll to the last element of that list. If there are 10 results in total, the page will be scrolled to the 10th result; if there are only 4 results, to the 4th. After scrolling, you can check whether the Next button is present by checking the size of the "Next" button element list: if the size is greater than 0, the button is present; otherwise it is not, and you can stop execution there.
To implement this, I have taken a boolean with an initial value of true; the code runs in a loop until that boolean becomes false, which happens when the Next button list size becomes 0.
Please refer to the code below:
public class App
{
    // Declared static so the static helper methods and main can use it
    static WebDriver driver;
    // For initialising the JavaScript executor
    public static Object executeScript(String script, Object... args) {
        JavascriptExecutor exe = (JavascriptExecutor) driver;
        return exe.executeScript(script, args);
    }
    // Method for scrolling to the element
    public static void scrollToElement(WebElement element) {
        executeScript("window.scrollTo(arguments[0],arguments[1])", element.getLocation().x, element.getLocation().y);
    }
    public static void main(String[] args) throws InterruptedException {
        // You can change the driver to bot according to your use case
        driver = new FirefoxDriver();
        // Add your direct URL here and perform the login after that, if necessary
        driver.get(url);
        // Wait for the URL to load completely
        Thread.sleep(10000);
        // Initialising the boolean
        boolean nextButtonPresent = true;
        while (nextButtonPresent) {
            // Fetching the results on the page by the XPath
            List<WebElement> results = driver.findElements(By.xpath("//li[contains(@class,'search-result')]"));
            // Scrolling to the last element in the list
            scrollToElement(results.get(results.size() - 1));
            Thread.sleep(2000);
            // Checking if the Next button is present on the page
            List<WebElement> nextButton = driver.findElements(By.xpath("//button[@class='next']"));
            if (nextButton.size() > 0) {
                // If yes, then clicking on it
                nextButton.get(0).click();
                Thread.sleep(10000);
            } else {
                // Else setting the boolean to false
                nextButtonPresent = false;
                System.out.println("Next button is not present, so ending the script");
            }
        }
    }
}
What I have observed is that the content is already loaded in the page, and it is displayed to us as we scroll down.
But if we load the page manually and inspect the 'Next >' button by its class name 'next', for example like below,
//button[@class='next']
we cannot locate it until we scroll down, because it is not visible. But by using the XPath below, we can get the count of all the profile links, irrespective of whether they are displayed or not:
//h3[contains(@class, 'search-results__total')]/parent::div/ul/li
As you want to fetch all the profile links from the page, the above XPath helps to do that. We get the link count using it, then scroll each element into view one at a time, fetching the profile links along the way, like below:
// Identifying all the profile links
List<WebElement> totalProfileLinks = driver.findElements(By.xpath("//h3[contains(@class, 'search-results__total')]/parent::div/ul/li"));
// Looping for getting the profile link (XPath indexing is 1-based, List indexing is 0-based)
for (int i = 1; i <= totalProfileLinks.size(); i++) {
    // Scrolling so that it will be visible
    ((JavascriptExecutor) driver).executeScript("arguments[0].scrollIntoView(true);", totalProfileLinks.get(i - 1));
    // Fetching the anchor node
    final WebElement link = driver.findElement(By.xpath("(//h3[contains(@class, 'search-results__total')]/parent::div/ul/li//div[contains(@class, 'search-result__info')]//a)[" + i + "]"));
    // Avoiding the StaleElementReferenceException
    new FluentWait<WebDriver>(driver).withTimeout(1, TimeUnit.MINUTES).pollingEvery(1, TimeUnit.SECONDS).ignoring(StaleElementReferenceException.class).until(new Function<WebDriver, WebElement>() {
        public WebElement apply(WebDriver arg0) {
            return link;
        }
    });
    // Fetching and printing the link from the anchor node
    System.out.println(link.getAttribute("href").trim());
}
So, if we want to click the 'Next >' button, first we need to check whether it is present (as we have scrolled while fetching the profile links, the 'Next' button will also be displayed). We can use the `driver.findElements();` method to get the count of matching elements and store them in a List (it returns a List of WebElements), like below:
List<WebElement> nextButton = driver.findElements(By.className("next"));
The benefit of this technique is that the script won't fail even if there are no matching elements; we simply get an empty list.
Then we can use the size() method of the List interface to get the match count, like below:
int size = nextButton.size();
And if the size is more than 0 then the element is present, otherwise it is not. We can check that condition like below:
if (size > 0) {
    nextButton.get(0).click(); // Do some operation, like clicking on it
    System.out.println("=> 'Next >' button is there and clicked on it...");
} else {
    System.out.println("=> 'Next >' button is NOT there...");
}
As the content is loaded and the element is viewable, we use the JavascriptExecutor to locate and click it.
Wrap the above code in a while loop and check for the presence of the 'Next >' button each time after clicking the previous 'Next >' button, like below:
boolean next = true;
while (next) {
    // Checking whether the 'Next >' button is there on the page
    List<WebElement> nextButton = driver.findElements(By.className("next"));
    // If the 'Next >' button is there then clicking on it, otherwise stopping the execution
    if (nextButton.size() > 0) {
        doClickUsingJSE(nextButton.get(0));
        System.out.println("=> 'Next >' button is there and clicked on it...");
    } else {
        next = false;
        System.out.println("=> 'Next >' button is NOT there so stopping the execution...");
    }
    Thread.sleep(1000);
}
The loop breaks when the 'if' condition fails, because 'next' becomes 'false'. Using a FluentWait helps us avoid some exceptions like 'WebDriverException' and 'StaleElementReferenceException', so I have written a separate method that waits for an element while ignoring those exceptions and clicks it once the conditions are satisfied.
Check the code below :
private static void doClickUsingJSE(final WebElement element) {
    // Using the FluentWait to avoid some exceptions like WebDriverException and StaleElementReferenceException
    Wait<WebDriver> wait = new FluentWait<WebDriver>(driver).withTimeout(1, TimeUnit.MINUTES).pollingEvery(1, TimeUnit.SECONDS).ignoring(WebDriverException.class, StaleElementReferenceException.class);
    WebElement waitedElement = wait.until(new Function<WebDriver, WebElement>() {
        public WebElement apply(WebDriver driver) {
            return element;
        }
    });
    wait.until(ExpectedConditions.visibilityOf(waitedElement));
    wait.until(ExpectedConditions.elementToBeClickable(waitedElement));
    // Clicking on the particular element using the JavascriptExecutor
    ((JavascriptExecutor) driver).executeScript("arguments[0].click();", waitedElement);
}
As I mentioned the JavascriptExecutor earlier, I have included its use in the above method as well.
Try the below end-to-end working code:
import java.util.List;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.Keys;
import org.openqa.selenium.StaleElementReferenceException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebDriverException;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.FluentWait;
import org.openqa.selenium.support.ui.Wait;
import com.google.common.base.Function;
public class BasePage
{
// Declaring WebDriver
private static WebDriver driver;
    private static void doClickUsingJSE(final WebElement element) {
        // Using the FluentWait to avoid some exceptions like WebDriverException and StaleElementReferenceException
        Wait<WebDriver> wait = new FluentWait<WebDriver>(driver).withTimeout(1, TimeUnit.MINUTES).pollingEvery(1, TimeUnit.SECONDS).ignoring(WebDriverException.class, StaleElementReferenceException.class);
        WebElement waitedElement = wait.until(new Function<WebDriver, WebElement>() {
            public WebElement apply(WebDriver driver) {
                return element;
            }
        });
        wait.until(ExpectedConditions.visibilityOf(waitedElement));
        wait.until(ExpectedConditions.elementToBeClickable(waitedElement));
        // Clicking on the particular element using the JavascriptExecutor
        ((JavascriptExecutor) driver).executeScript("arguments[0].click();", waitedElement);
    }
    public static void main( String[] args ) throws Exception
    {
        System.setProperty("webdriver.chrome.driver", "C:\\NotBackedUp\\chromedriver.exe");
        // Initializing the Chrome Driver
        driver = new ChromeDriver();
        // Launching the LinkedIn site
        driver.get("https://linkedin.com/search/results/people/?facetGeoRegion=[\"tr%3A0\"]&facetNetwork=[\"F\"]&origin=FACETED_SEARCH&page=YOUR_PAGE_NUMBER");
        // You can skip this part and adapt it in whatever way is convenient for you
        // As there are no connections on my page, I have used it like this
        //------------------------------------------------------------------------------------
        // Switching to the login form - iframe involved
        driver.switchTo().frame(driver.findElement(By.className("authentication-iframe")));
        // Clicking on the Sign In button
        doClickUsingJSE(driver.findElement(By.xpath("//a[text()='Sign in']")));
        // Entering the User Name
        WebElement element = driver.findElement(By.id("username"));
        doClickUsingJSE(element);
        element.sendKeys("something@gmail.com");
        // Entering the Password
        element = driver.findElement(By.id("password"));
        doClickUsingJSE(element);
        element.sendKeys("anything" + Keys.ENTER);
        // Clicking on the People drop down
        Thread.sleep(8000);
        element = driver.findElement(By.xpath("//span[text()='People']"));
        doClickUsingJSE(element);
        // Selecting the All option
        Thread.sleep(2000);
        element = driver.findElement(By.xpath("//ul[@class='list-style-none']/li[1]"));
        element.click();
        // Searching something in the LinkedIn search box
        Thread.sleep(3000);
        element = driver.findElement(By.xpath("//input[@role='combobox']"));
        doClickUsingJSE(element);
        element.sendKeys("a" + Keys.ENTER);
        Thread.sleep(8000);
        //------------------------------------------------------------------------------------
        boolean next = true;
        while (next) {
            // Identifying all the profile links
            List<WebElement> totalProfileLinks = driver.findElements(By.xpath("//h3[contains(@class, 'search-results__total')]/parent::div/ul/li"));
            // Looping for getting the profile link (XPath indexing is 1-based, List indexing is 0-based)
            for (int i = 1; i <= totalProfileLinks.size(); i++) {
                // Scrolling so that it will be visible
                ((JavascriptExecutor) driver).executeScript("arguments[0].scrollIntoView(true);", totalProfileLinks.get(i - 1));
                // Fetching the anchor node
                final WebElement link = driver.findElement(By.xpath("(//h3[contains(@class, 'search-results__total')]/parent::div/ul/li//div[contains(@class, 'search-result__info')]//a)[" + i + "]"));
                // Avoiding the StaleElementReferenceException
                new FluentWait<WebDriver>(driver).withTimeout(1, TimeUnit.MINUTES).pollingEvery(1, TimeUnit.SECONDS).ignoring(StaleElementReferenceException.class).until(new Function<WebDriver, WebElement>() {
                    public WebElement apply(WebDriver arg0) {
                        return link;
                    }
                });
                // Fetching and printing the link from the anchor node
                System.out.println(link.getAttribute("href").trim());
            }
            // Checking whether the 'Next >' button is there on the page
            List<WebElement> nextButton = driver.findElements(By.className("next"));
            // If the 'Next >' button is there then clicking on it, otherwise stopping the execution
            if (nextButton.size() > 0) {
                doClickUsingJSE(nextButton.get(0));
                System.out.println("=> 'Next >' button is there and clicked on it...");
            } else {
                next = false;
                System.out.println("=> 'Next >' button is NOT there so stopping the execution...");
            }
            Thread.sleep(1000);
        }
}
}
I hope it helps... Happy Coding...

Scrolling to a WebElement and clicking on it

I am new to automation and am practicing on the flipkart website.
On the page:
http://www.flipkart.com/mobiles/pr?sid=tyy,4io&otracker=clp_mobiles_CategoryLinksModule_0-2_catergorylinks_11_ViewAll
... when I try to click an element that is not in view of the page by scrolling to it, I get the exception: Element is not clickable
Below is the code:
WebElement mobile = driver.findElement(By.xpath("//a[@title='Apple iPhone 6S (Silver, 128 GB) ']"));
JavascriptExecutor jse = (JavascriptExecutor) driver;
jse.executeScript("arguments[0].scrollIntoView();", mobile);
mobile.click();
I believe this issue occurs because of the fixed header on Flipkart: even though the window scrolls to that particular element, the header covers it, so it cannot be clicked.
Can anyone help resolve this?
You can try it like this.
For the case where you want to click on an element that is not in view of the page (without scrolling manually), try the below:
public static void main(String[] args) throws InterruptedException {
    WebDriver driver = new FirefoxDriver();
    driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
    driver.get(
            "http://www.flipkart.com/mobiles/pr?sid=tyy,4io&otracker=clp_mobiles_CategoryLinksModule_0-2_catergorylinks_11_ViewAll");
    driver.manage().window().maximize();
    // Take everything on the page in a list first
    List<WebElement> completecalContent = driver.findElements(By.xpath("//*[@class='fk-display-block']"));
    System.out.println(completecalContent.size());
    // Printing all elements
    for (int i = 0; i < completecalContent.size(); i++) {
        System.out.println("Print complete Content : " + completecalContent.get(i).getText());
        if (completecalContent.get(i).getText().equals("Apple iPhone 5S (Space Grey, 16 GB)")) {
            // Move to the specific element
            ((JavascriptExecutor) driver).executeScript("arguments[0].scrollIntoView();",
                    completecalContent.get(i));
            // Move slightly up, as the blue header comes into the picture
            ((JavascriptExecutor) driver).executeScript("window.scrollBy(0,-100)");
            // Then click on the element
            completecalContent.get(i).click();
        }
    }
}
For the case where you want to scroll explicitly, update the above code with one of these:
A. If you want to scroll to the bottom of the page:
((JavascriptExecutor) driver).executeScript("window.scrollTo(0, document.body.scrollHeight)");
B. If you want to scroll to a specific element:
WebElement element = driver.findElement(By.xpath("xpath to element"));
((JavascriptExecutor) driver).executeScript("arguments[0].scrollIntoView();", element);
C. If you want to scroll on the basis of coordinates:
((JavascriptExecutor) driver).executeScript("window.scrollBy(0,500)");
Instead of scrolling exactly to the web element, you can try scrolling a little bit down the page, like:
JavascriptExecutor jse = (JavascriptExecutor) driver;
jse.executeScript("scroll(0, 250)"); // the y value '250' can be altered
Alternatively, you can try scrolling to an element sufficiently above the required element, so that the header does not cover it. That is, instead of scrolling to the required WebElement itself, scroll to a WebElement above it.
Thank You,
Murali
Hey, if you are not certain about the element's position on the page, you can find the coordinates at run time and then execute your scroll.
You can get elements co-ordinate by using Point
Point point = element.getLocation();
int xcord = point.getX();
int ycord = point.getY();
You can also get the dimensions of a WebElement, i.e. its height and width, using Dimension.
Once you have the x and y coordinates and the dimensions, you can write code to scroll to those particular coordinates on the page.
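Putting the two ideas together (scroll to the element's coordinates, but compensate for a fixed header), the target y-coordinate is just the element's y minus the header height. A minimal sketch of that arithmetic; the helper name is mine, and the 100px header height is an assumption based on the Flipkart header described in the question:

```java
// Computes the y-coordinate to scroll to so that an element sits below a
// fixed page header instead of underneath it.
public class ScrollTarget {

    // headerHeight is an assumed fixed-header height (e.g. ~100px on Flipkart)
    public static int scrollYFor(int elementY, int headerHeight) {
        int y = elementY - headerHeight;
        return Math.max(y, 0); // never scroll above the top of the page
    }

    public static void main(String[] args) {
        System.out.println(scrollYFor(1500, 100)); // 1400
        System.out.println(scrollYFor(50, 100));   // 0
        // With Selenium (sketch, not verified):
        // Point p = element.getLocation();
        // ((JavascriptExecutor) driver).executeScript(
        //         "window.scrollTo(arguments[0], arguments[1]);",
        //         p.getX(), scrollYFor(p.getY(), 100));
    }
}
```

This is equivalent to the scrollIntoView-then-scrollBy(0,-100) trick in the earlier answer, but done in one scroll.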
Hope it helps!

Scrolling an ajax page completely using selenium webdriver

I am trying to scroll a page completely using this code:
JavascriptExecutor js = (JavascriptExecutor) Browser;
js.executeScript("javascript:window.onload=toBottom();" +
        "function toBottom(){" +
        "window.scrollTo(0,Math.max(document.documentElement.scrollHeight," +
        "document.body.scrollHeight,document.documentElement.clientHeight));" +
        "}");
js.executeScript("window.status = 'fail';");
//Attach the Ajax call back method
js.executeScript( "$(document).ajaxComplete(function() {" + "status = 'success';});");
This code works fine and scrolls the page on the first attempt, but once the page is scrolled down, new data appears on the page and the code fails to scroll again.
So what I need is for someone to help me scroll the page repeatedly until scrolling is complete.
Do I need a loop for this?
Help/suggestions/responses will be appreciated!
I had a page with similar functionality in another question I answered previously. I am not familiar with any generic way to know whether a page has no more elements to load. In my case the page was designed to load 40/80 (I forget the exact count) elements on each scroll. Since in most cases I know an estimated number of scrolls (I am using a test company and know how many elements are present for it in the db), I can estimate the scroll count and did the following to handle that page.
public void ScrollPage(int counter)
{
    const string script =
        @"window.scrollTo(0,Math.max(document.documentElement.scrollHeight,document.body.scrollHeight,document.documentElement.clientHeight));";
    int count = 0;
    while (count != counter)
    {
        IJavaScriptExecutor js = _driver as IJavaScriptExecutor;
        js.ExecuteScript(script);
        Thread.Sleep(500);
        count++;
    }
}
See my other answer here
Java equivalent code:
public void ScrollPage(int counter) throws InterruptedException {
    String script = "window.scrollTo(0,Math.max(document.documentElement.scrollHeight,document.body.scrollHeight,document.documentElement.clientHeight));";
    int count = 0;
    while (count != counter)
    {
        ((JavascriptExecutor) driver).executeScript(script);
        Thread.sleep(500);
        count++;
    }
}
Use
ScrollPage(10);
wherever the scroll is necessary.
So, I would do something like that:
boolean pageEnded = false;
JavascriptExecutor js = (JavascriptExecutor) Browser;
while (!pageEnded) {
    js.executeScript("window.scrollTo(0, document.body.offsetHeight);");
    pageEnded = "complete".equals(js.executeScript("return document.readyState;"));
}
The best way to do this is the following (implemented in Python):
import time

def loadFullPage(timeout):
    reachedbottom = False
    while not reachedbottom:
        # scroll one pane down
        driver.execute_script("window.scrollTo(0,Math.max(document.documentElement.scrollHeight,document.body.scrollHeight,document.documentElement.clientHeight));")
        time.sleep(timeout)
        # check if the bottom is reached
        a = driver.execute_script("return document.documentElement.scrollTop;")
        b = driver.execute_script("return document.documentElement.scrollHeight - document.documentElement.clientHeight;")
        relativeHeight = a / b
        if relativeHeight == 1:
            reachedbottom = True
You have to find an efficient timeout for your internet connection. A timeout of 3 seconds worked well for me.
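The bottom test above divides scrollTop by the remaining scrollable height and checks for exactly 1, which can misfire when browsers report fractional or off-by-a-pixel values. The same check can be done as an integer comparison with a small tolerance. A minimal sketch in Java (the helper name and the 2px tolerance are my own choices, not from the answer):

```java
// "Reached bottom" test: the viewport bottom (scrollTop + clientHeight)
// has caught up with the total scrollable height, within a tolerance.
public class BottomCheck {

    // tolerance absorbs sub-pixel rounding differences between browsers
    public static boolean reachedBottom(long scrollTop, long scrollHeight,
                                        long clientHeight, long tolerance) {
        return scrollTop + clientHeight + tolerance >= scrollHeight;
    }

    public static void main(String[] args) {
        System.out.println(reachedBottom(2000, 3000, 1000, 2)); // true
        System.out.println(reachedBottom(1500, 3000, 1000, 2)); // false
        // With Selenium (sketch, not verified): feed in the values returned by
        // executeScript("return document.documentElement.scrollTop;") and the
        // corresponding scrollHeight / clientHeight queries after each scroll.
    }
}
```

This avoids both the floating-point equality and a possible division by zero when the page is shorter than the viewport (scrollHeight == clientHeight).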

Selenium - complete ajax loading autoscroll to bottom of page

I have a webpage where, when you scroll to the bottom, it loads more results via Ajax. You can do several iterations of this before it completes - a bit like what Facebook does.
I am trying to write a Selenium script to keep going to the end of the page until it completes.
Something like this sort of half completes it. I just don't know how to determine whether the page is at the bottom, so that I can put it inside a loop of some sort.
My Attempt
By selBy = By.tagName("body");
driver.findElement(selBy).sendKeys(Keys.PAGE_DOWN);
System.out.println("Sleeping... wleepy");
Thread.sleep(5000);
driver.findElement(selBy).sendKeys(Keys.PAGE_DOWN);
System.out.println("Sleeping... wleepy1");
Thread.sleep(3000);
//etc...
It could look like this?
hasScroll() isn't a real method; I put it there to demonstrate what I'm trying to achieve.
while (driver.findElement(By.tagName("body")).hasScroll()) {
    driver.findElement(selBy).sendKeys(Keys.PAGE_DOWN);
    System.out.println("Sleeping... wleepy");
    Thread.sleep(2000);
}
Try this approach:
//Get total height
By selBy = By.tagName("body");
int initialHeight = driver.findElement(selBy).getSize().getHeight();
int currentHeight = 0;
while (initialHeight != currentHeight) {
    initialHeight = driver.findElement(selBy).getSize().getHeight();
    //Scroll to bottom
    ((JavascriptExecutor) driver).executeScript("scroll(0," + initialHeight + ");");
    System.out.println("Sleeping... wleepy");
    Thread.sleep(2000);
    currentHeight = driver.findElement(selBy).getSize().getHeight();
}
Hope it helps!
