What I'm doing
I've been making a utility method to help me find and properly wait for WebElements in Selenium. So far it's going well: I have a way to try all kinds of different locators, wait until a WebElement is found, and then wait until it is displayed/enabled, all with timeouts and other nice things like that.
What's the problem then?
The problem is that I sometimes need to find WebElements after pages reload. I've read up on available solutions to the 'staleness' problem and I know how to solve it (using the old WebElement I just found and clicked on, I'll wait until it's stale before I search again), BUT I don't want to have to manually check whether a given WebElement causes a page to reload. I've looked through the WebElement and ExpectedConditions classes to see if there is any method that returns true if a given WebElement causes a page reload. I've tried searching about it here and on Google and got nothing useful. I wanted to know if it's possible to have something like this:
Boolean causesPageReload = webElement.causesPageReload();
With some imaginary method named causesPageReload that determines whether a WebElement causes a page reload when it is clicked on, submitted, etc. I know that some WebElements just cause JavaScript to run on the page while others reload the page, but if I could programmatically determine whether it reloads the page I could just say:
if (causesPageReload) {
    wait.until(ExpectedConditions.stalenessOf(oldWebElement)); // the element just clicked
}
And solve the problem. Is there anything in the underlying HTML or JavaScript, or maybe something already built in, that could provide this information? Sure, I could manually go through the steps myself and see which WebElements under test actually cause a page refresh, but that is subject to change, prone to human error, and time consuming.
Possible alternatives?
Alternatively, I could just do my staleness check with a timeout of ten seconds: if the click reloads the page, that's fine, and if it doesn't, it allows ten seconds for the JavaScript (or whatever else) to finish what it's doing. (I was also wondering whether I need to wait after clicks on non-page-reloading WebElements as well, but that seems harder due to JavaScript and belongs in a different question.) I don't know if I would need to wait if the page isn't going to reload, and even if I knew that I did need to wait in the non-reload case, I wouldn't know how to. My current waits just wait for the WebElement to be found, displayed, and enabled, so if clicking on it causes something important (that I need to wait for) without changing those things, I'd need something else; but that requires another, more in-depth question.
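That alternative can be sketched as a helper that treats the staleness timeout itself as the signal. Class and method names here are made up, and note that staleness also fires on in-place DOM rewrites, not only full reloads, so this can produce false positives:

```java
import org.openqa.selenium.TimeoutException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Sketch: click, then give the old element up to timeoutSeconds to go
// stale. A timeout is read as "this click did not reload the page".
public class ReloadProbe {

    public static boolean clickAndDetectReload(WebDriver driver,
                                               WebElement element,
                                               long timeoutSeconds) {
        element.click();
        try {
            new WebDriverWait(driver, timeoutSeconds)
                    .until(ExpectedConditions.stalenessOf(element));
            return true;  // went stale: the DOM replaced this element
        } catch (TimeoutException e) {
            return false; // still fresh after the timeout: assume no reload
        }
    }
}
```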
TL;DR
I just need to know whether I can programmatically find out which WebElements cause pages to reload, and if I can't, whether there is any need to wait for the non-reloading ones (no need to go in depth on the second case; just tell me how to ask that as a second question later).
Update
I've tried multithreading this, and so far I've got something that can decide, in a timely manner, whether a given element changes in the DOM when clicked. This covers most page-reloading cases but might lead to false positives, since I'm fairly sure there are other situations where element references go stale that don't involve the page reloading. I think the root cause of the problem is that there is no data/flag/hook to grab onto to really tell. A better hook would lead to a better solution, but I have no idea what that hook would be. On the bright side, I did learn and become familiar with a lot of multithreading, which is good because it's an area I've been weak in. I'm going to research the JavaScript that's been mentioned in answers and see if I can't combine that with my multithreaded approach. Once I have a good hook, all I'd need to change is an ExpectedConditions call on a WebDriverWait waiting object.
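The thread-plus-timeout pattern described here can be sketched generically with an ExecutorService; in the real code the Callable body would wrap the WebDriverWait staleness check. All names are illustrative:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch of "run the check on another thread, with a timeout as the
// 'nothing happened' signal".
public class TimedCheck {

    // Runs the task and waits at most timeoutMillis for its result.
    // Returns the task's result, or false if the timeout expired first
    // (i.e. "no, the page did not reload within the window").
    public static boolean runWithTimeout(Callable<Boolean> task, long timeoutMillis) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<Boolean> future = executor.submit(task);
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException | InterruptedException | ExecutionException e) {
            return false; // timed out or failed: treat as "did not happen"
        } finally {
            executor.shutdownNow();
        }
    }
}
```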
Update 2
I found this website:
https://developer.mozilla.org/en-US/docs/Web/Events/DOMContentLoaded
Which details the "load" and "DOMContentLoaded" events: two things in JavaScript that fire when pages are loaded/reloaded. I have already created a threaded application with ExpectedConditions like so:
WebDriverWait wait = new WebDriverWait(driver, timeoutSeconds);
try {
    wait.until(ExpectedConditions.stalenessOf(webElement));
} catch (TimeoutException e) {
    // timed out: the element never went stale
}
Thus I'm pretty sure I can modify the wait.until line to check for a JavaScript event firing, with a timeout, using a Java-to-JavaScript interface. To use the two languages together I was led to this question:
How can I use JavaScript in Java?
in order to obtain some knowledge of how that basically works. I'm going to try to implement this using Nashorn, or maybe some other interface, depending on what's best.
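One thing worth noting before reaching for Nashorn: Nashorn runs JavaScript inside the JVM and cannot see the browser's page, whereas WebDriver already ships a bridge that executes JavaScript in the browser itself, JavascriptExecutor. A sketch of one possible "hook" built on it: plant a property on the current window before clicking; a full page load creates a fresh window object, so if the marker is gone afterwards, the page was reloaded. The marker name is made up for this example:

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;

// Sketch: detect a reload by planting a window property before the click
// and checking whether it survived afterwards.
public class ReloadMarker {

    public static void plant(WebDriver driver) {
        ((JavascriptExecutor) driver)
                .executeScript("window.__reloadMarker = true;");
    }

    public static boolean pageWasReloaded(WebDriver driver) {
        Object marker = ((JavascriptExecutor) driver)
                .executeScript("return window.__reloadMarker === true;");
        return !Boolean.TRUE.equals(marker); // marker gone: new document
    }
}
```

Checking immediately after the click can race with a navigation that has not started yet, so in practice pageWasReloaded would be polled inside a wait rather than called once.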
What this potentially means
While this doesn't determine whether a given WebElement causes page reloading BEFORE actually "trying" it, it does determine it just by "trying" the WebElement. And because I used a thread off of main, my check for "no, it didn't reload the page" is effectively just the timeout condition, which can be configured as needed. I don't think it's possible to determine whether a WebElement causes a page reload without actually trying it, but at least we can try it, determine whether it reloads within a timeout period, and then know we have waited long enough to avoid stale reference exceptions when searching for the same or the next element, provided the next element we're looking for exists on the new page (assuming that the method that locates it waits for it to be displayed and selectable, which I've already done).

This also lets us determine whether a given WebElement was deleted from the page by "trying" it: if we combine the JavaScript page-load check with the stale-reference check I already have, then the condition "the JS load event didn't fire (page static) BUT a StaleElementReferenceException was thrown (DOM element changed)" becomes the check for "this element was deleted/changed significantly but the page wasn't reloaded", which is quite useful information. Additionally, these elements can now be grouped into three categories:
Doesn't get deleted, can reference again
Deleted but page static
Deleted but page changes
We can store the results beforehand and (as long as the locators remain intact) more easily know whether we must wait after clicking the WebElement. We could even go further: if we know we have the 2nd case, we could retry locating the element and see whether its locators change when we click it, since I think StaleElementReferenceExceptions can be thrown without all the locators changing. How useful is this? I'm not sure, but I think it's pretty useful stuff, though somehow I doubt I'm the first one to try to find a solution for this. I will post an answer when I successfully implement and test this, but it will be a while because I need to learn some basic JavaScript and then how to integrate it with my Java.
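The three categories above can be captured in a small enum plus a classifier, assuming the two signals already discussed (did the old reference go stale, did a load event fire) are available as booleans. All names here are illustrative:

```java
// Sketch of caching the three post-click categories described above.
public class ClickOutcome {

    public enum Category {
        STABLE,               // element not deleted, safe to reference again
        DELETED_PAGE_STATIC,  // element went stale but no page load fired
        DELETED_PAGE_RELOADED // element went stale and the page reloaded
    }

    public static Category classify(boolean wentStale, boolean loadEventFired) {
        if (!wentStale) {
            return Category.STABLE;
        }
        return loadEventFired ? Category.DELETED_PAGE_RELOADED
                              : Category.DELETED_PAGE_STATIC;
    }
}
```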
There isn't any way to programmatically find out whether a click will cause a reload.
You need to handle each case separately. For this you can create a main click() method with an overload (or not) and call the appropriate one in each case:
public void clickOnElement(WebElement element, boolean waitForStaleness) {
    element.click();
    if (waitForStaleness) {
        wait.until(ExpectedConditions.stalenessOf(element));
    }
}

public void clickOnElement(WebElement element) {
    clickOnElement(element, false);
}
You can use
WaitElement(pBy, pWait);
and if you need the element to be visible before continuing, you can add an isDisplayed() check. Finally, if nothing else works, you can fall back to a plain Java wait:
Thread.sleep(millis);
or
TimeUnit.SECONDS.sleep(seconds);
Related
I wrote automated Selenium-based tests for a web application, and they run perfectly with a fast internet connection, but behave unpredictably on a poorer connection.
The web application is built so that if the response to some action on the page takes longer than 250ms, a loader-wrapper element appears that blocks any kind of user action until the response completes. The loader-wrapper can appear on any request at any point of test execution, so I can't use Selenium's explicit waits, because I don't know when and where it will appear. As a result I receive an exception:
org.openqa.selenium.WebDriverException: Element is not clickable at point (411, 675). Other element would receive the click:(.show-component .loader-wrapper)
Is there any way to set a "global wait" that will stop test execution if the loader-wrapper appears, wait until it disappears, and then continue the test? Or any other idea?
I kind of like your idea of the annotation, but not sure how to implement it.
Another possible approach is to write your own ExpectedCondition "loaderWrapperDisappeared" (or something like that), which would wait for the loader wrapper to be gone, and return the target WebElement so that you could chain a click to it.
You would then use it like this:
(new WebDriverWait(driver, 50))
        .until(loaderWrapperDisappeared(By.id("your div id"))).click();
(Pardon the syntax if that's wrong... I haven't written Java in a few years.)
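One possible implementation of that custom condition, sketched against the Selenium Java API. The loader-wrapper selector is taken from the exception message in the question; the class and method names are made up:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedCondition;
import org.openqa.selenium.support.ui.ExpectedConditions;

// Sketch: only return the target element once the loader wrapper is gone,
// so that a click can be chained onto the wait.
public class CustomConditions {

    public static ExpectedCondition<WebElement> loaderWrapperDisappeared(
            final By target) {
        final By loader = By.cssSelector(".show-component .loader-wrapper");
        return new ExpectedCondition<WebElement>() {
            @Override
            public WebElement apply(WebDriver driver) {
                if (Boolean.TRUE.equals(
                        ExpectedConditions.invisibilityOfElementLocated(loader)
                                .apply(driver))) {
                    return driver.findElement(target);
                }
                return null; // loader still visible: keep waiting
            }
        };
    }
}
```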
In the case of WebDriver, you can use it like this:
WebElement webElement = (new WebDriverWait(driver, 50))
.until(ExpectedConditions.elementToBeClickable(By.id("your div id")));
Here 50 refers to 50 seconds.
For more details, refer to the link below.
https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/support/ui/WebDriverWait.html#WebDriverWait-org.openqa.selenium.WebDriver-long-
If I understand correctly you are looking for invisibilityOfElementLocated.
You can add it as a decorator to your steps...
Hope this helps!
I am currently learning Selenium, and I have learned a lot. One thing the community says is that you need to avoid Thread.sleep as much as possible; Selenium offers implicit and explicit waits instead. Yes, I understand that concept.
Recently I came across a problem: a certain action, going from the login page to another page, fails without a Thread.sleep(1000). Selenium seems to crash, saying it can't find a certain element. I find this behaviour strange. So I was thinking this conflict occurs because the login page first wants to redirect to the main page of the website, and without the Thread.sleep(1000) Selenium tries to go to the second page before that redirect completes. With that being said, is that why Selenium crashes, or do you see any strange use of code in the example below?
// Currently on a webpage
WebElement ui_login_button = wait.until(ExpectedConditions.presenceOfElementLocated(By.id("account-login-button")));
ui_login_button.click();
//After the click it logs in and redirects to a webpage
Thread.sleep(1000); // why sleep here? (without this Selenium crashes)
// Go to second page and perform actions
waitForLoad(driver);
driver.navigate().to(URL + "/mymp/verkopen/index.html");
// -------------------------------------------------------------------
public void waitForLoad(WebDriver driver) {
    ExpectedCondition<Boolean> pageLoadCondition = new ExpectedCondition<Boolean>() {
        public Boolean apply(WebDriver driver) {
            return ((JavascriptExecutor) driver)
                    .executeScript("return document.readyState").equals("complete");
        }
    };
    // WebDriverWait wait = new WebDriverWait(driver, 30);
    wait.until(pageLoadCondition);
}
Sorry for the long explanation; I tried my best to be clear. English is not my native language. Thanks for your help.
Kind regards.
As per your question and the updated comments, it is very much possible that an exception is raised saying the element can't be found on the webpage. Additionally, you mention that putting a sleep in between is not an elegant fix; that's pretty correct, as inducing Thread.sleep(1000); degrades overall test execution performance.
Now, comparing document.readyState to complete in your waitForLoad method was a wiser step. But it may happen that, although the web browser sends document.readyState as complete to Selenium, due to JavaScript and AJAX calls the elements we want to interact with are not yet visible, clickable, or interactable, which in turn may raise the associated exception.
So the solution would be inducing an explicit wait, i.e. WebDriverWait, for the element with which we want to interact, with proper ExpectedConditions set. You can find documentation about explicit waits here.
An Example:
If you want to wait for a button to be clickable the expected code block may be in the following format along with the imports:
import org.openqa.selenium.By;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
// Go to second page and wait for the element
WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.elementToBeClickable(By.id("id_of_the_element")));
//perform actions
driver.navigate().to(URL + "/mymp/verkopen/index.html");
I am guessing that the exception is raised after you've navigated to URL + "/mymp/verkopen/index.html" and started to take some action.
I am speculating that the main issue here is that your waitForLoad() method is not waiting for some Javascript, or other background task, to complete on the page that Login goes to first. So when you navigate to the next page, something is not yet completed, leaving your user authentication in a bad state. Perhaps you need to wait for some AJAX to complete after login before proceeding with your further navigation? Or it would be better to click on a link on that page to trigger the navigation to your target page (as a real user would), rather than directly entering the URL? You might find it helpful to discuss the actual behavior of the web application with developers.
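If the site does use jQuery for those calls (an assumption; it depends on the application), the "wait for AJAX to complete" idea can be sketched with WebDriver's JavascriptExecutor, since jQuery keeps an internal counter of in-flight requests in jQuery.active:

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedCondition;
import org.openqa.selenium.support.ui.WebDriverWait;

// Sketch: wait until no jQuery AJAX requests are pending. Only valid if
// the application issues its requests through jQuery.
public class AjaxWait {

    public static void waitForAjax(WebDriver driver, long timeoutSeconds) {
        new WebDriverWait(driver, timeoutSeconds)
                .until(new ExpectedCondition<Boolean>() {
                    public Boolean apply(WebDriver d) {
                        return (Boolean) ((JavascriptExecutor) d).executeScript(
                                "return window.jQuery != null && jQuery.active === 0");
                    }
                });
    }
}
```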
As DebanjanB has pointed out, once you are on your target page you can then use WebDriverWait for the elements on the page where you are taking actions.
I am using Selenium and Java to write a test. I use ExpectedConditions.elementToBeClickable to click on web elements, but sometimes they are covered by others and as you know ExpectedConditions.elementToBeClickable only checks if the element is enabled and visible, so is there any method to check if it's covered or not?
By the way, I do not want to use code like:
try {
    // click on the element
} catch (Exception e) {
    // it's covered
}
I am looking for something like:
blabla.isCovered();
because sometimes, for example, I want to check whether a whole window is covered or not.
If you don't want to use that sort of code, perhaps you should add additional logic to your checkifClickable method beyond enabled and visible. Although I am betting your real question is more along the lines of "How can I check for conditions prohibiting me from clicking that may not be apparent from information provided by the DOM?"
This is something that makes automation more difficult than people realize, because you must have detailed knowledge of the site. Is it an element that pops up due to logic or a condition? Then the solution is to close it. Is it a resolution or rendering issue? Then resize the window. There is no real cure-all method; you have to tweak the automation logic to suit the site you are working on.
Are you trying not to click at all and to know beforehand?
If you specifically just don't want to use a try block, then implement an assertion that a property changed after you clicked, and set a boolean from that.
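There is no isCovered() in the WebElement API, but the browser itself can answer the question via document.elementFromPoint. A sketch using JavascriptExecutor (the class and method names are made up; elementFromPoint works in viewport coordinates, so the element should be scrolled into view first):

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Sketch of an isCovered-style check: ask the browser which element
// actually sits at the target's center point; if that element is neither
// the target nor one of its descendants, something is covering it.
public class CoverCheck {

    public static boolean isCovered(WebDriver driver, WebElement element) {
        return (Boolean) ((JavascriptExecutor) driver).executeScript(
                "var el = arguments[0];" +
                "var rect = el.getBoundingClientRect();" +
                "var x = rect.left + rect.width / 2;" +
                "var y = rect.top + rect.height / 2;" +
                "var top = document.elementFromPoint(x, y);" +
                "return top !== null && top !== el && !el.contains(top);",
                element);
    }
}
```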
I was going through the methods of the ExpectedConditions class and found one method: refreshed.
I understand that the method can be used when you get a StaleElementReferenceException and want to retrieve the element again, thereby avoiding the StaleElementReferenceException.
My above understanding might not be correct, hence I want to confirm:
When should refreshed be used?
What should go in the something part of the following code?
wait.until(ExpectedConditions.refreshed(something));
Can someone please explain this with an example?
The refreshed method has been very helpful for me when trying to access a search result that has been newly refreshed. Waiting on the search result with just ExpectedConditions.elementToBeClickable(...) returns a StaleElementReferenceException. To work around that, this helper method waits and retries for a maximum of 30s for the search element to be refreshed and clickable:
public WebElement waitForElementToBeRefreshedAndClickable(WebDriver driver, By by) {
    return new WebDriverWait(driver, 30)
            .until(ExpectedConditions.refreshed(
                    ExpectedConditions.elementToBeClickable(by)));
}
Then to click on the result after searching:
waitForElementToBeRefreshedAndClickable(driver, By.cssSelector("css_selector_to_search_result_link")).click();
Hope this was helpful for others.
According to the source:
Wrapper for a condition, which allows for elements to update by redrawing.
This works around the problem of conditions which have two parts: find an
element and then check for some condition on it. For these conditions it is
possible that an element is located and then subsequently it is redrawn on
the client. When this happens a {@link StaleElementReferenceException} is
thrown when the second part of the condition is checked.
So basically, this is a method that waits until a DOM manipulation is finished on an object.
Typically, when you do driver.findElement, the returned object represents the element as it currently is.
When the DOM is manipulated, say clicking a button adds a class to that element, and you then try to perform an action on that element, a StaleElementReferenceException is thrown, since the WebElement you hold no longer represents the updated element.
You'll use refreshed when you expect DOM manipulation to occur, and you want to wait until it's done being manipulated in the DOM.
Example:
<body>
    <button id="myBtn" class="" onmouseover="this.className = 'hovered';"></button>
</body>
// pseudo-code
1. WebElement button = driver.findElement(By.id("myBtn")); // right now, if you read the Class, it will return ""
2. button.hoverOver(); // now the class will be "hovered"
3. wait.until(ExpectedConditions.refreshed(button));
4. button = driver.findElement(By.id("myBtn")); // by this point, the DOM manipulation should have finished since we used refreshed.
5. button.getAttribute("class"); // will now equal "hovered"
Note that if you perform say a button.click() at line #3, it will throw a StaleReferenceException since the DOM has been manipulated at this point.
In my years of using Selenium, I've never had to use this condition, so I believe that it is an "edge case" situation, that you most likely won't even have to worry about using. Hope this helps!
Here's what I do:
selenium.click("link=mylink");
selenium.waitForPageToLoad(60000);
// do something, then navigate to a different page
// (window focus is never changed in-between)
selenium.click("link=mylink");
selenium.waitForPageToLoad(60000);
The link "mylink" does exist, and the first invocation of click() always works. But the second click() sometimes works and sometimes doesn't.
It looks like the click() event is not triggered at all, because the page doesn't even start to load. Unfortunately this behaviour is nondeterministic.
Here's what I already tried:
Set a longer timeout
=> did not help
Wait for an element present after loading one page
=> doesn't work either since the page does not even start to load
For now I ended up invoking click() twice, so:
selenium.click("link=mylink");
selenium.waitForPageToLoad(60000);
// do something, then navigate to a different page
// (window focus is never changed in-between)
selenium.click("link=mylink");
selenium.click("link=mylink");
selenium.waitForPageToLoad(60000);
That works, but it's not a really nice solution. I've also seen a forum where someone suggested writing something like a 'clickAndWaitWithRetry':
try {
    super.click("link=mylink");
    super.waitForPageToLoad(60000);
} catch (SeleniumException e) {
    super.click("link=mylink");
    super.waitForPageToLoad(60000);
}
But I think that is also not a proper solution....
Any ideas/explanations why the click() event is sometimes not triggered?
Sometimes, seemingly randomly, Selenium just doesn't like to click certain anchor tags. I am not sure what causes it, but it happens. In those cases, with a troublesome link, instead of doing
selenium.click(...)
do
selenium.fireEvent(locator, "click");
As others have stated above me, I have specifically had issues with anchor tags that appear as follows:
<a href="javascript:...." >
I've done Selenium for a while, and I have really developed a dislike for waitForPageToLoad(). You might consider always just waiting for the element in question to exist.
I find that this approach seems to resolve most weird issues I run into like this. The other possibility is that some JavaScript is preventing the link from doing anything when clicked the first time. That seems unlikely, but it's worth a double-check.
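The "wait for the element instead of the page" idea can be written as a generic polling helper in plain Java; in Selenium RC terms the condition would wrap a call such as selenium.isElementPresent("link=mylink"):

```java
import java.util.function.BooleanSupplier;

// Sketch: poll a condition until it returns true or the timeout expires.
public class Poller {

    public static boolean waitFor(BooleanSupplier condition,
                                  long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(pollMillis);
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }
}
```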
I just tried WebDriver (Selenium 2.0) and found that WebElement#sendKeys(Keys.ENTER) works.
Try selenium.pause before the selenium.click command. I tried all of the above, but none of them resolved our problem. Finally the magic selenium.pause solved it for me.
Hope this solves your problem as well.
I am running into this issue now also. From my usages of this, it seems like the following is the most consistent:
#browser.click(selector, {:wait_for => :page})
Not exactly sure why that would be. But it seems that if you do:
#browser.click(selector)
[maybe some stuff here too]
#browser.wait_for(:wait_for => :page)
Then you could end up waiting for a page that has already been loaded (i.e. you end up waiting forever).
I dug into the Selenium source code and found this nugget:
def click(locator, options={})
remote_control_command "click", [locator,]
wait_for options
end
...
# Waits for a new page to load.
#
# Selenium constantly keeps track of new pages loading, and sets a
# "newPageLoaded" flag when it first notices a page load. Running
# any other Selenium command after turns the flag to false. Hence,
# if you want to wait for a page to load, you must wait immediately
# after a Selenium command that caused a page-load.
#
# * 'timeout_in_seconds' is a timeout in seconds, after which this
# command will return with an error
def wait_for_page(timeout_in_seconds=nil)
remote_control_command "waitForPageToLoad",
[actual_timeout_in_milliseconds(timeout_in_seconds),]
end
alias_method :wait_for_page_to_load, :wait_for_page
Basically, this is doing the following:
#browser.click(selector)
#browser.wait_for(:wait_for => :page)
However, as the comment states, the first thing necessary is to use the :wait_for command immediately after.
And of course... switching the order puts you into the same wait forever state.
#browser.wait_for(:wait_for => :page)
#browser.click(selector)
Without knowing all the details of Selenium, it seems as though Selenium needs to register the :wait_for trigger when it is passed as an option with click. Otherwise, you could end up waiting forever if you somehow tell Selenium to wait the very instant before :wait_for is called.
Here this one will work:
selenium.waitForPageToLoad("60000");
selenium.click("link= my link");
I had the same problem, with Selenium 1.0.12 and Firefox 5.0; I managed to make the automated tests work this way:
I removed all "AndWait" commands (sometimes they hang the test/browser)
I added a pause before the click
I added a waitForVisible after the click (usually I wait for the next HTML control I want to interact with on the next page)
It goes like this:
waitForVisible OK
pause 1000
click OK
waitForVisible link=Go
pause 1000
click Go
etc...
It seems that "waitForVisible" is triggered too soon, i.e. before the event handlers are plugged into the control (thus clicking on the control has no effect). Waiting one second is enough for the click handlers to be plugged in and activated.
The page has not loaded properly when you are clicking on it. Check for different elements on the page to be sure that the page has loaded.
Also, wait for the link to appear and be visible before you click on it.
Make sure you are increasing the timeout in the correct place. The lines you posted are:
selenium.click("link=mylink");
selenium.waitForPageToLoad(60000);
This wait is for the page that comes back after the click to load. But the problem you describe is that it fails when trying to perform the click itself, so make sure to increase the wait before this one.
selenium.click("link=mylink");
selenium.waitForPageToLoad(60000);
// do something, then navigate to a different page
// (window focus is never changed in-between)
// after the last click in these steps:
selenium.waitForPageToLoad(60000);
// anything else that happened after that
selenium.click("link=mylink");
selenium.waitForPageToLoad(60000);
If you're using FireFox, make sure you're using 3.6 or later.
WaitForPageToLoad uses the JavaScript variable 'readyState', but Firefox only supported this from 3.6 onwards. Earlier versions just don't wait.
(see org.openqa.selenium.internal.seleniumemulation.WaitForPageToLoad)
I am having the same issue :( with Selenium IDE 1.0.10, PHPUnit 3.5, and Selenium RC server 1.0.3.
EDITED:
The culprit seems to be browser FF 3.6.13; after upgrading to FF 3.6.14 all my errors are gone. My tests are working like a charm :).
Selenium IDE 1.0.10
PHPUnit: 3.5.6
Selenium Server:selenium-2.0b1 (selenium-2.0b2 is buggy)
selenium.click("link=Continue to this website (not recommended).");
Thread.sleep(5000);
I've been having the same issue and found that I have to get the text of the link first. I know it's not the ideal way to do it, but I'm fortunate that my links are uniquely named.
C# code:
var text = Selenium.GetText(myLocator);
Selenium.Click("link=" + text);
Try this:
selenium.fireEvent(ID, "keypress");