Selenium implicit wait functionality on page reload - Java

I am writing a test where I look for an element on the page and then reload the page.
Steps:
Open a Firefox browser using Selenium WebDriver (Java). Implicit wait is set to 30 seconds.
Navigate to a web page. Verify that the element with id="elementid" exists.
Now reload the page. Verify that the element with id="elementid" exists.
I would like to know the behaviour in the following use case:
If the reload does not happen properly and the page displayed is still the old page rather than the reloaded one, will Selenium return true when I verify the presence of the element with id="elementid", even though the page instance is the old one? Or will it throw an exception?

There are three options:
the page reload didn't happen between triggering the reload and looking up the element again
the page reload happened (and finished) between triggering the reload and looking up the element again
the page reload happened (but didn't finish) between triggering the reload and looking up the element again
Depending on whether Selenium assigned its internal element ID before or after the page was reloaded, you may or may not get a StaleElementReferenceException.
This is how I'd handle it in my framework:
create a waitForPageToLoad() method for every single Page Object you interact with, and call it before any other method on that page; this method should wait until a certain element (unique to this page) is present (or displayed)
call this method between triggering the page reload and the next element lookup
create wrapper methods for the typical element interactions (click, sendKeys etc.) and handle StaleElementReferenceException at that level (usually with an explicit wait and by re-doing the same interaction; there is no other way to handle this class of exception)
Check how it goes, add verification of a unique page element if possible (if there is an element which is present after the page reload but not before), and add an explicit wait before the second element lookup if necessary.
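A minimal sketch of the two pieces described above. The page class, its unique locator (By.id("login-form")), and the wrapper name clickSafely are all made up for illustration, and the WebDriverWait(driver, seconds) constructor is the Selenium 3 signature (Selenium 4 takes a Duration):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.StaleElementReferenceException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class LoginPage {
    private final WebDriver driver;
    private final WebDriverWait wait;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
        this.wait = new WebDriverWait(driver, 30); // seconds, Selenium 3 signature
    }

    // Wait until an element unique to this page is visible,
    // so callers know the (re)load has finished before touching anything else.
    public LoginPage waitForPageToLoad() {
        wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("login-form")));
        return this;
    }

    // Wrapper interaction: on a stale reference, re-locate and retry once.
    public void clickSafely(By locator) {
        try {
            driver.findElement(locator).click();
        } catch (StaleElementReferenceException e) {
            // The element was redrawn; look it up again and retry.
            wait.until(ExpectedConditions.elementToBeClickable(locator)).click();
        }
    }
}
```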

It doesn't matter if the page is an old one, a new one, or only partially loaded. Selenium will only throw an exception if the element is not present in the DOM of the page; that's how Selenium works.
And it will return true when you verify the presence of the element, if it is in the DOM.
FindElement
This method is affected by the 'implicit wait' times in force at the
time of execution. The findElement(..) invocation will return a
matching row, or try again repeatedly until the configured timeout is
reached.
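For reference, the implicit wait that the quote (and the question) refers to is configured once per driver. A sketch using the Selenium 3 API (Selenium 4 replaced the TimeUnit overload with one that takes a Duration):

```java
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class ImplicitWaitSetup {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        // Every findElement() call will now poll the DOM for up to 30 seconds
        // before giving up with NoSuchElementException.
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
    }
}
```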

It will attempt to search for an element anyway.
In case it suddenly disappears - you get StaleElementReferenceException
In case it is not there - NoSuchElementException
If you think that the fact of the page (not) reloading matters for your test and you really want to know (or take appropriate action), check document.readyState: it is "complete" when the page has loaded and something other than "complete" while it is still loading.
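The readyState check above can be sketched with a JavascriptExecutor wrapped in an explicit wait (class and method names are illustrative; the WebDriverWait(driver, seconds) constructor is the Selenium 3 signature):

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.WebDriverWait;

public class PageLoadWait {
    // Block until document.readyState reports "complete",
    // i.e. the browser considers the page fully loaded.
    public static void waitForPageLoad(WebDriver driver, int timeoutSeconds) {
        new WebDriverWait(driver, timeoutSeconds).until(
            d -> "complete".equals(
                ((JavascriptExecutor) d).executeScript("return document.readyState")));
    }
}
```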

Related

given a webelement, does clicking it cause a page reload?

What I'm doing
I've been making a utility method to help me find and properly wait for WebElements in Selenium. So far it's going well: I have a way to try all kinds of different locators, wait until a WebElement is found, and then wait until the WebElement is displayed/enabled, all with timeouts and nice things like that.
What's the problem then?
The problem is that I sometimes need to find WebElements after pages reload. I've read up on available solutions to the 'staleness' problem and I know how to solve it (using the old WebElement I just found and clicked on, I'll wait till it's stale before I search again), BUT I don't want to have to manually check whether a given WebElement causes a page to reload. I've looked in the WebElement and ExpectedConditions classes to see if there is any method that returns true if a given WebElement causes a page reload. I've searched for it here and on Google and got nothing useful. I wanted to know if it's possible to have something like this:
Boolean causesPageReload = webElement.causesPageReload();
With some imaginary method named causesPageReload that determines whether a WebElement causes a page reload when it is clicked on, submitted, etc. I know that some WebElements just cause JavaScript to run on the page while others reload the page, but if I could programmatically determine whether it reloads the page I could just say:
if (causesPageReload) {
    wait.until(ExpectedConditions.stalenessOf(oldWebElement)); // the WebElement located before the click
}
And that would solve the problem. Is there anything in the underlying HTML or JavaScript, or maybe something already built in, that could provide this information? Sure, I can manually go through the steps myself and see which WebElements under test actually cause a page refresh, but that is subject to change, prone to human error, and time-consuming.
Possible alternatives?
Alternatively, I could just do my staleness check with a timeout of ten seconds: if the click reloads the page, that's fine, and if it doesn't, it still allows 10 seconds for the JavaScript (or whatever) to finish what it's doing. (I was also wondering whether I need to wait for non-page-reloading WebElement clicks as well, but that seems harder due to JavaScript and entails a different question.) I don't know if I would need to wait if the page isn't going to reload. Even if I knew that I did need to wait in the non-reload case, I wouldn't know how to. My current waits just wait for the WebElement to be found, displayed, and enabled, so if clicking on it causes something important (that I need to wait for) without changing those things, I'd need something else; but that calls for another, more in-depth question.
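The "staleness check with a timeout" alternative can be sketched like this (the helper name is made up; the 10-second cap and Selenium 3 WebDriverWait constructor are assumptions you would tune):

```java
import org.openqa.selenium.TimeoutException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ClickHelper {
    // Click, then wait up to 10 s for the clicked element to go stale.
    // If it never does, assume the click did not replace the page and move on.
    public static boolean clickAndWaitForPossibleReload(WebDriver driver, WebElement element) {
        element.click();
        try {
            new WebDriverWait(driver, 10).until(ExpectedConditions.stalenessOf(element));
            return true;  // reference went stale: the page (or element) was replaced
        } catch (TimeoutException e) {
            return false; // element still attached: no reload detected within the timeout
        }
    }
}
```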
TL;DR
I just need to know whether I can programmatically find out which WebElements cause pages to reload, and if I can't, whether there is any need to wait for the non-reloading ones (no need to go in depth on the second case; just tell me how to ask that as a second question later).
Update
I've tried multithreading this and so far I've got something that can (in a timely manner) decide whether a given element, when clicked, changes in the DOM or doesn't. This covers most page-reloading cases but might lead to false positives, since I'm pretty sure there are other instances where element references go stale that don't involve the page reloading. I think the root cause of the problem is that there is no data/flag/hook to grab onto to really tell. I suppose a better hook would lead to a better solution, but I have no idea what that hook would be. On the bright side, I did learn and become familiar with a lot of multithreading, which is good because it's an area I've been weak in. I'm going to research the JavaScript that's been mentioned in answers and see if I can't combine that with my multithreaded approach. Once I have a good hook, all I'd need to change is an ExpectedConditions call on a WebDriverWait object.
Update 2
I found this website:
https://developer.mozilla.org/en-US/docs/Web/Events/DOMContentLoaded
Which details the "load" and "DOMContentLoaded" events: two things in JavaScript that fire when pages are loaded/reloaded. I have already created a threaded application with ExpectedConditions like so:
WebDriverWait wait = new WebDriverWait(driver, timeoutSeconds);
try {
    wait.until(ExpectedConditions.stalenessOf(webElement));
} catch (TimeoutException e) {
    // the element never went stale within the timeout
}
Thus I'm pretty sure I can modify the wait.until line to check for a JavaScript event firing, with a timeout, using a Java-to-JavaScript interface. To use the two languages together I was led to this question:
How can I use JavaScript in Java?
in order to obtain knowledge of how that basically works. I'm going to try to implement this using Nashorn, or maybe some other interface, depending on what's best.
What this potentially means
While this doesn't determine whether a given WebElement causes a page reload BEFORE actually "trying" it, it does determine it just by "trying" the WebElement. And because I used a thread off of main, my check for "no, it didn't reload the page" is effectively just the timeout condition, which can be configured as needed. I don't think it's possible to determine whether a WebElement causes a page reload without actually trying it, but at least we can try it, determine whether it reloads within a timeout period, and then know we have waited long enough to avoid stale reference exceptions when searching for the same or the next element, as long as the next element we're looking for exists on the new page (assuming that the method locating it waits for the element to be displayed and selectable, which I've already done).
This also lets us determine whether a given WebElement was deleted from the page by "trying it": if we combine the JavaScript page-load call with the stale-reference check I already have, then the condition "the JS load event didn't fire (page static) BUT a StaleElementReferenceException was thrown (DOM element changed)" tells us "this element was deleted or changed significantly but the page wasn't reloaded", which is quite useful information. Additionally, elements can now be grouped into three categories:
Doesn't get deleted, can reference again
Deleted but page static
Deleted but page changes
We can store the results beforehand and (as long as the locators remain intact) more easily know whether we must wait after clicking the WebElement. We could even go further: if we know we have the second case, we could retry locating the element and see whether its locators change when we click it, since I think stale reference exceptions can be thrown without all the locators changing. How useful is this? I'm not sure, but I think it's pretty useful stuff; somehow, though, I don't think I'm the first one to look for a solution to this. I will post an answer when I successfully implement and test this, but it will be a while because I need to learn some basic JavaScript and then how to integrate that with my Java.
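One possible "hook" of the kind discussed above: plant a flag on window before the click. A real navigation discards window state, so if the flag is gone afterwards the document was replaced; if it survives, only in-page changes happened. The flag name is invented, and the single post-click check is a simplification (a slow navigation would need polling):

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class ReloadDetector {
    // Returns true if clicking the element caused a full page (re)load.
    public static boolean clickReloadedPage(WebDriver driver, WebElement element) {
        JavascriptExecutor js = (JavascriptExecutor) driver;
        js.executeScript("window.__seleniumNoReload = true;"); // hypothetical flag name
        element.click();
        // NOTE: if the click triggers a slow navigation, poll this check in a
        // wait instead of running it once -- shown unpolled here for brevity.
        Object flag = js.executeScript("return window.__seleniumNoReload;");
        return flag == null; // flag gone => window state was wiped => page reloaded
    }
}
```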
There isn't any way to programmatically find out whether the click will cause a reload.
You need to handle each case separately. For this you can create a main click() method with an overload (or not) and call the appropriate one in each case:
public void clickOnElement(WebElement element, boolean waitForStaleness) {
    element.click();
    if (waitForStaleness) {
        wait.until(ExpectedConditions.stalenessOf(element));
    }
}

public void clickOnElement(WebElement element) {
    clickOnElement(element, false);
}
You can use
WaitElement(pBy, pWait);
and if you need to check that the element is visible before continuing, you can add isDisplayed().
Finally, if that is not working, you can use a plain Java wait:
Thread.sleep(milliseconds)
or
TimeUnit.SECONDS.sleep(seconds);

How to wait until my browser times out using Selenium

So I have performed some timing tests using Selenium:
My page has a simple drop-down list. From this element I select different values, wait until the next page loads, calculate the time this operation took, and then try to load a different value from the drop-down list.
For some values this takes a lot of time (because the page needs to load many things), and sometimes this ends in a white screen. In that case I want to report it as a timeout, so my question is how to recognise that this white screen has appeared, when there is no WebElement I can expect on the white screen.
You can use waitForCondition to wait until a particular JavaScript condition is true. There are also other waitForX convenience methods.
https://www.neustar.biz/blog/selenium-tips-wait-with-waitforcondition
Note: If using jQuery you can wait for "jQuery.active == 0"
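In WebDriver terms, the same jQuery check can be sketched with a JavascriptExecutor inside an explicit wait (helper name is illustrative; this only makes sense on pages that actually load jQuery):

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.WebDriverWait;

public class AjaxWait {
    // Poll until jQuery reports no AJAX requests in flight.
    public static void waitForAjax(WebDriver driver, int timeoutSeconds) {
        new WebDriverWait(driver, timeoutSeconds).until(
            d -> (Boolean) ((JavascriptExecutor) d)
                    .executeScript("return window.jQuery != null && jQuery.active == 0"));
    }
}
```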
wait for an ajax call to complete with Selenium 2 web driver

How and when to implement refreshed(ExpectedCondition<T> condition) of Selenium WebDriver?

I was going through the methods of the ExpectedConditions class and found one: refreshed.
I understand that the method can be used when you get a StaleElementReferenceException and want to retrieve the element again, thereby avoiding the StaleElementReferenceException.
My understanding might not be correct, hence I want to confirm:
When refreshed should be used?
What should be the code for the something part of the following code?
wait.until(ExpectedConditions.refreshed(**something**));
Can someone please explain this with an example?
The refreshed method has been very helpful for me when trying to access a search result that has been newly refreshed. Trying to wait on the search result with just ExpectedConditions.elementToBeClickable(...) throws a StaleElementReferenceException. To work around that, this is the helper method that waits and retries for a max of 30 s for the search element to be refreshed and clickable.
public WebElement waitForElementToBeRefreshedAndClickable(WebDriver driver, By by) {
    return new WebDriverWait(driver, 30)
            .until(ExpectedConditions.refreshed(
                    ExpectedConditions.elementToBeClickable(by)));
}
Then to click on the result after searching:
waitForElementToBeRefreshedAndClickable(driver, By.cssSelector("css_selector_to_search_result_link")).click();
Hope this was helpful for others.
According to the source:
Wrapper for a condition, which allows for elements to update by redrawing.
This works around the problem of conditions which have two parts: find an
element and then check for some condition on it. For these conditions it is
possible that an element is located and then subsequently it is redrawn on
the client. When this happens a {@link StaleElementReferenceException} is
thrown when the second part of the condition is checked.
So basically, this is a method that waits until DOM manipulation of an object is finished.
Typically, when you do driver.findElement, the returned object represents the element as it was at that moment.
When the DOM is manipulated (say, clicking a button adds a class to that element) and you then try to perform an action on that element, it will throw a StaleElementReferenceException, since the WebElement you hold no longer represents the updated element.
You'll use refreshed when you expect DOM manipulation to occur and you want to wait until it's done being manipulated in the DOM.
Example:
<body>
  <button id="myBtn" class="" onmouseover="this.className = 'hovered';"></button>
</body>
// pseudo-code
1. WebElement button = driver.findElement(By.id("myBtn")); // right now, reading the class attribute returns ""
2. button.hoverOver(); // pseudo-method; now the class will be "hovered"
3. wait.until(ExpectedConditions.refreshed(button));
4. button = driver.findElement(By.id("myBtn")); // by this point the DOM manipulation has finished, since we used refreshed
5. button.getAttribute("class"); // will now equal "hovered"
Note that if you perform, say, a button.click() at line #3, it will throw a StaleElementReferenceException, since the DOM has been manipulated at this point.
In my years of using Selenium, I've never had to use this condition, so I believe that it is an "edge case" situation, that you most likely won't even have to worry about using. Hope this helps!

Action class called twice from IE and not firefox

I have an application where an absolutely normal link (like the one below), when clicked, calls the action class twice. And this behaviour happens only in IE. In Firefox, when I click on the same link, it calls the action class only once.
Load
This is an older application and I'm using Struts 1.3 and Tiles.
Any idea why this is happening and/or how to troubleshoot it?
It could be that IE is prefetching the link.
To verify this, log the time when the request is received. Load the page, wait ~10 seconds, and click the link. If the difference between the log entries is ~10 seconds, it's a prefetch.
The idea is that the browser preloads links the user is likely to click, so the result can be served immediately from the browser cache.
HTML5 makes this explicit by defining rel="prefetch". This attribute and value can be set on a, area, and link tags.
Check your page for <link rel="prefetch" href="url" /> or <link rel="next" href="url" /> in the HEAD element. Also, check your A tags for rel attributed as well.
Microsoft claims to officially support this in IE 11.
All of this is intended to make pages appear more responsive to the user. Where this can fall apart is when the page being retrieved is not cacheable, which causes the page to be fetched again when the user clicks the link. You can improve this by taking steps to make sure the result is cacheable: set appropriate cache headers. There is a Private cache-control value for content intended for a single recipient; it is only stored in private caches, typically the user's browser.
Additionally, your page may not be considered cacheable if it does not provide a Content-Length header.
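Since the app is Struts 1.3 (servlet-based), one way to set such headers is shown below; the max-age value is purely illustrative and should match how long the page content stays valid:

```java
import javax.servlet.http.HttpServletResponse;

public class CacheHeaders {
    // Mark the response as privately cacheable for 60 s (illustrative value),
    // so the browser can serve its prefetched copy instead of hitting the
    // server a second time.
    public static void makeCacheable(HttpServletResponse response) {
        response.setHeader("Cache-Control", "private, max-age=60");
        // Content-Length is set automatically when the container buffers the
        // body; set it explicitly if you stream the response.
    }
}
```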

Selenium -- How to handle situation when driver's current url is not updated quickly when navigating through a web application

I am using the PageObject model to develop a test framework, and I have one instance where it takes some time before a new page's url is available.
The scenario is this: The user is creating a new object. They provide the name of the object and click a button to go to the edit page for the new object.
I want to construct an EditPage object using the url which loads when the button is clicked. It takes some time before the driver's current url is updated to the new url for the edit page after the button is clicked.
In Selenium 2 what is a clean, robust way to handle this situation? So far, I have only encountered issues with specific elements not loading on a page, but this is the first time I have encountered a situation in which there is a long wait for the driver's current url to get updated.
I have used FluentWait for components which are slow to load and I have heard about WebDriverWait. My question is, which one is better to use for this situation? Can someone post a simple example of how to deal with a URL which is slow to be available? I want a solution that doesn't require the CreatePage to know anything about elements on the EditPage, so strategies that involve waiting for elements on the EditPage to be visible violate encapsulation principles.
This is a common problem, given that pages take a variable amount of time to load and your page objects need to take that into account. Page objects should have a built-in mechanism to wait for themselves to load within your preferred allotted time, so that if you do throw an exception you know it's either because 1) the requested page did not in fact show up, or 2) it took too long. You can throw a different exception in each case to differentiate the reasons why a test fails.
In your case I think the best thing to do is have your page objects wait a certain amount of time for the URL to match the URL of the page object in question. So as a part of its loading process the page object could call a method like the following:
public PageObject WaitForURL(string url)
{
    Console.WriteLine("WaitForURL : " + url);
    WebDriverWait _waitForURL = new WebDriverWait(_driver, TimeSpan.FromMilliseconds(30000));
    _waitForURL.Until((d) =>
    {
        try
        {
            return d.Url == url;
        }
        catch (Exception)
        {
            return false;
        }
    });
    Console.WriteLine("URL changed to : " + _driver.Url);
    return this;
}
Please note that even after the URL has changed to your desired URL, it doesn't mean the page has finished loading. You say you don't want to wait for a specific element, and I understand that, but you may want to do a simple wait after the above for the body tag to be visible, or make sure that the first time you interact with the page object's elements you start by waiting for the element to be visible.
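Since the question is about the Java bindings, the same idea can be expressed with the built-in ExpectedConditions.urlToBe (or urlContains, for partial matches) instead of a hand-rolled lambda; the EditPage class here is a hypothetical page object, and the 30 s timeout is an assumption:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class EditPage {
    public EditPage(WebDriver driver, String expectedUrl) {
        // Fail construction if the browser never reaches the edit page URL,
        // so the CreatePage needs no knowledge of EditPage's elements.
        new WebDriverWait(driver, 30)
                .until(ExpectedConditions.urlToBe(expectedUrl));
    }
}
```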
