How to handle dynamic XPaths - Java

I need to select the text that is returned by my search operation.
The XPath differs for every search. These are examples of the XPaths returned on search:
.//*[@id='messageBoxForm']/div/div[1]/div[1]/div/div[1]/div[1]/span/input
.//*[@id='messageBoxForm']/div/div[1]/div[1]/div/div[2]/div/div/div[2]/div[2]/strong

You could place it in a try-catch block: use the first XPath in the try block, catch the NoSuchElementException that Selenium may throw, and then try the other XPath.
Based on the criteria you posted, this should do the job.
WebElement element;
try {
    // first candidate XPath returned by the search
    element = webDriver.findElement(By.xpath("xyz"));
} catch (NoSuchElementException e) {
    // fall back to the second candidate XPath
    element = webDriver.findElement(By.xpath("abc"));
}
// ... do things with your element
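If more than two candidate XPaths can come back from a search, a small helper that walks a list of locators keeps the try/catch blocks from piling up. This is only a sketch: the helper name is made up, and it assumes the usual java.util and org.openqa.selenium imports.

// Sketch: return the first element matched by any of the candidate locators.
WebElement findFirstMatching(WebDriver webDriver, List<By> candidates) {
    for (By by : candidates) {
        List<WebElement> found = webDriver.findElements(by);
        if (!found.isEmpty()) {
            return found.get(0);
        }
    }
    throw new NoSuchElementException("None of the candidate locators matched");
}

// Usage with the two XPaths from the question:
WebElement element = findFirstMatching(webDriver, Arrays.asList(
        By.xpath(".//*[@id='messageBoxForm']/div/div[1]/div[1]/div/div[1]/div[1]/span/input"),
        By.xpath(".//*[@id='messageBoxForm']/div/div[1]/div[1]/div/div[2]/div/div/div[2]/div[2]/strong")));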

Related

Getting an element not found exception from an exception handler

I am getting an element not found exception while trying to locate the element inside a try block. Below is my code:
private boolean isPresent(WebDriver driver, String findElement)
{
    driver.manage().timeouts().implicitlyWait(0, TimeUnit.SECONDS);
    try {
        driver.findElement(By.xpath(findElement));
        return true;
    }
    catch (NoSuchElementException e) {
        return false;
    }
    finally {
        driver.manage().timeouts().implicitlyWait(40, TimeUnit.SECONDS);
    }
}
Instead of relying on findElement with implicit timeouts, use an explicit wait for the element to be present and then do the operation.
For example, this will wait until the element is located, then do what you want with your myDynamicElement:
WebElement myDynamicElement = (new WebDriverWait(driver, 10))
.until(ExpectedConditions.presenceOfElementLocated(By.id("myElement")));
It looks like you are trying to validate whether your element is present or not. For that, use logic along these lines (a sketch follows this list):
A) Inside the try block:
1) Wait for the element to be present.
2) Then use if/else to check whether the element is present and return true or false.
B) Inside the catch block, handle the error.
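A rough sketch of that structure, assuming an explicit wait of up to 10 seconds (the timeout and method name are illustrative):

private boolean isPresentAfterWait(WebDriver driver, By locator) {
    try {
        // Wait for the element to be present...
        new WebDriverWait(driver, 10)
                .until(ExpectedConditions.presenceOfElementLocated(locator));
        return true;
    } catch (TimeoutException e) {
        // ...and handle the failure in the catch instead of letting it bubble up.
        return false;
    }
}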
The better way to do this is to avoid throwing exceptions in the first place.
private boolean isPresent(WebDriver driver, By locator)
{
    return !driver.findElements(locator).isEmpty();
}
Instead of passing the locator as a string and requiring XPath, take a By locator. Now you can pass the method any locator type: id, CSS selector, etc.
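For example (the locators here are only illustrative):

// The same method now works with any locator strategy:
boolean foundById    = isPresent(driver, By.id("messageBoxForm"));
boolean foundByCss   = isPresent(driver, By.cssSelector("#messageBoxForm input"));
boolean foundByXpath = isPresent(driver, By.xpath(".//*[@id='messageBoxForm']//input"));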

how to keep the code running when element is not found

Hi there, my code keeps throwing an error when this element is not found:
driver.findElement(By.xpath("(//span[@class='_soakw coreSpriteLikeHeartOpen'])")).click();
Can anyone help me? I want the rest of the code to keep running even when this element is not found. I've been looking for the answer on the internet the whole day.
You can place a try-catch block around the findElement call. After the catch block, execution of the code will continue.
The findElement method throws a NoSuchElementException when no element is found.
// Set the timeout for searching an element
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
try
{
    // Try to find the element
    driver.findElement(By.xpath("(//span[@class='_soakw coreSpriteLikeHeartOpen'])")).click();
}
catch (NoSuchElementException e)
{
    System.out.println("Element Not Found");
}
// Continue
Let me know if it worked or if you need any more help.
You can do this in the following way too:
List<WebElement> element = driver.findElements(By.xpath("//span[@class='_soakw coreSpriteLikeHeartOpen']"));
if (element.size() > 0)
{
    System.out.println("Found");
}
else
{
    System.out.println("Not Found");
}
Important:
findElement(): throws a NoSuchElementException when it fails to find the element.
findElements(): if the element doesn't exist or is not available on the page, the return value is an empty list.
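If you need this in several places, the findElements() approach folds nicely into a small helper; a sketch (the method name is made up):

// Clicks the first match if it exists and reports whether a click happened.
boolean clickIfPresent(WebDriver driver, By locator) {
    List<WebElement> matches = driver.findElements(locator);
    if (matches.isEmpty()) {
        System.out.println("Not Found");
        return false;
    }
    matches.get(0).click();
    return true;
}

// Usage with the "like" icon from the question:
clickIfPresent(driver, By.xpath("//span[@class='_soakw coreSpriteLikeHeartOpen']"));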

Selenium: how to check whether a drop-down list item is selected

public static WebElement drpdwn_selectMonth() throws Exception {
    try {
        WebElement monthSelector = driver.findElement(By.id("monthID"));
        monthSelector.click();
        driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
        monthSelector = driver.findElement(By.xpath("//*[@id='monthID']/option[2]"));
        monthSelector.click();
        driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
        return monthSelector;
    } catch (Exception e) {
        throw e;
    }
}
How do I do a Boolean check that a value in the drop-down list is selected?
How do I print and get the value selected in the drop-down list?
Given the little detail you provided, it can be done in the following way:
WebElement monthSelector = driver.findElement(By.id("monthID"));
monthSelector.click();
if (monthSelector.isSelected())
{
    Select sel = new Select(driver.findElement(By.id("monthID")));
    sel.selectByVisibleText("Your-dropdown-value");
}
else
{
    System.out.println("Sorry, dropdown not selected yet");
}
Please replace Your-dropdown-value with your actual dropdown value, e.g. "January".
It would also help if you shared your HTML code, in case the above does not work for you.
An HTML snippet would help, but here's my take. If your menu element is a <select> element, you can make use of the Select API.
Once instantiated with your WebElement representing the root locator of the menu, you can use the getAllSelectedOptions() or getFirstSelectedOption() methods to retrieve the text of the selected option(s). From here, you can print the value, or validate the selected option in your assert statement.
This is only a high-level concept, but if you read through the API doc, you should be able to come up with a solution that fits your needs.
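As a rough sketch of that idea, assuming the menu really is a <select> element with the id monthID used in your code:

Select monthSelect = new Select(driver.findElement(By.id("monthID")));

// Boolean check: is any option currently selected?
boolean somethingSelected = !monthSelect.getAllSelectedOptions().isEmpty();

// Print the text of the selected option (guarded, because
// getFirstSelectedOption() throws if nothing is selected).
if (somethingSelected) {
    System.out.println("Selected month: " + monthSelect.getFirstSelectedOption().getText());
} else {
    System.out.println("No month selected yet");
}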

Using JSoup to scrape Google Results

I'm trying to use JSoup to scrape the search results from Google. Currently this is my code.
public class GoogleOptimization {
    public static void main(String args[])
    {
        Document doc;
        try {
            doc = Jsoup.connect("https://www.google.com/search?as_q=&as_epq=%22Yorkshire+Capital%22+&as_oq=fraud+OR+allegations+OR+scam&as_eq=&as_nlo=&as_nhi=&lr=lang_en&cr=countryCA&as_qdr=all&as_sitesearch=&as_occt=any&safe=images&tbs=&as_filetype=&as_rights=").userAgent("Mozilla").ignoreHttpErrors(true).timeout(0).get();
            Elements links = doc.select("what should i put here?");
            for (Element link : links) {
                System.out.println("\n" + link.text());
            }
        }
        catch (IOException e) {
            e.printStackTrace();
        }
    }
}
I'm just trying to get the title of the search results and the snippets below the title. I just don't know what elements to look for in order to scrape these. If anyone has a better method to scrape Google using Java, I would love to know.
Thanks.
Here you go.
public class ScanWebSO
{
    public static void main(String args[])
    {
        Document doc;
        try {
            doc = Jsoup.connect("https://www.google.com/search?as_q=&as_epq=%22Yorkshire+Capital%22+&as_oq=fraud+OR+allegations+OR+scam&as_eq=&as_nlo=&as_nhi=&lr=lang_en&cr=countryCA&as_qdr=all&as_sitesearch=&as_occt=any&safe=images&tbs=&as_filetype=&as_rights=").userAgent("Mozilla").ignoreHttpErrors(true).timeout(0).get();
            Elements links = doc.select("li[class=g]");
            for (Element link : links) {
                Elements titles = link.select("h3[class=r]");
                String title = titles.text();
                Elements bodies = link.select("span[class=st]");
                String body = bodies.text();
                System.out.println("Title: " + title);
                System.out.println("Body: " + body + "\n");
            }
        }
        catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Also, to do this yourself I would suggest using Chrome. Just right-click on whatever you want to scrape and choose Inspect Element; it takes you to the exact spot in the HTML where that element is located. In this case you first want to find the root of all the result listings. When you find it, specify the element and, preferably, a unique attribute to search it by. In this case the root element is
<ol eid="" id="rso">
Below that you will see a bunch of listings that start with
<li class="g">
This is what you want to put into your initial Elements collection. Then, for each element, you find where the title and the body live. In this case, I found the title to be under the
<h3 class="r" style="white-space: normal;">
element, so you search for that element in each listing. The same goes for the body: I found it under the <span class="st"> element, so I searched for that and used the .text() method, which returned all the text under that element. The key is to ALWAYS try to find the element by a distinctive attribute (a class name is ideal). If you only search for something generic like "div", it will search the entire page for ANY div element and return them all, so you will get WAY more results than you want. I hope this explains it well. Let me know if you have any more questions.
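To see why a distinctive class matters, here is a tiny self-contained JSoup example on made-up HTML (the markup is illustrative only; Google's real markup changes regularly, so verify the current class names with Inspect Element):

Document demo = Jsoup.parse(
        "<ol id='rso'>"
      + "<li class='g'><h3 class='r'>First title</h3><span class='st'>First snippet</span></li>"
      + "<li class='g'><h3 class='r'>Second title</h3><span class='st'>Second snippet</span></li>"
      + "</ol>");

// Selecting by the listing class returns exactly the result blocks...
System.out.println(demo.select("li.g").size());   // 2
// ...while selecting by tag alone matches every such tag on the page.
System.out.println(demo.select("span").size());   // 2 here, but far more on a real results page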

How to resolve, Stale element exception? if element is no longer attached to the DOM?

I have a question regarding "Element is no longer attached to the DOM".
I tried different solutions, but they work only intermittently. Please suggest a solution that is permanent.
WebElement getStaleElemById(String id, WebDriver driver) {
    try {
        return driver.findElement(By.id(id));
    } catch (StaleElementReferenceException e) {
        System.out.println("Attempting to recover from StaleElementReferenceException ...");
        return getStaleElemById(id, driver);
    }
}

WebElement getStaleElemByCss(String css, WebDriver driver) {
    try {
        return driver.findElement(By.cssSelector(css));
    } catch (StaleElementReferenceException e) {
        System.out.println("Attempting to recover from StaleElementReferenceException ...");
        return getStaleElemByCss(css, driver);
    } catch (NoSuchElementException ele) {
        System.out.println("Attempting to recover from NoSuchElementException ...");
        return getStaleElemByCss(css, driver);
    }
}
Thanks,
Anu
The problem
The problem you are probably facing is that the method returns the right (and valid!) element, but when you're trying to access it a second later, it is stale and throws.
This usually arises when:
You click something that loads a new page asynchronously or at least changes it.
You immediately (before the page load could finish) search for an element ... and you find it!
The page finally unloads and the new one loads up.
You try to access your previously found element, but now it's stale, even though the new page contains it, too.
The solutions
There are four ways I know of to solve it:
Use proper waits
Use proper waits after every anticipated page load when facing asynchronous pages. Insert an explicit wait after the initial click and wait for the new page / new content to load. Only after that should you search for the element you want. This should be the first thing you do; it will greatly increase the robustness of your tests.
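For example, something along these lines, where the ids are placeholders for whatever your page actually uses:

driver.findElement(By.id("searchButton")).click();   // triggers the asynchronous page change

// Explicit wait: block until the new content is really there.
WebDriverWait wait = new WebDriverWait(driver, 15);
WebElement result = wait.until(
        ExpectedConditions.presenceOfElementLocated(By.id("resultContainer")));

// Only now is it safe to work with the element.
result.click();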
The way you did it
I have been using a variant of your method for two years now (together with the technique above in solution 1) and it absolutely works most of the time and fails only on strange WebDriver bugs. Try to access the found element right after it is found (before returning from the method) via a .isDisplayed() method or something. If it throws, you already know how to search again. If it passes, you have one more (false) assurance.
Use a WebElement that re-finds itself when stale
Write a WebElement decorator that remembers how it was found and re-finds itself when it is accessed and throws. This obviously forces you to use custom findElement() methods that return instances of your decorator (or, better yet, a decorated WebDriver that returns your instances from the usual findElement() and findElements() methods). Do it like this:
public class NeverStaleWebElement implements WebElement {
    private WebElement element;
    private final WebDriver driver;
    private final By foundBy;

    public NeverStaleWebElement(WebElement element, WebDriver driver, By foundBy) {
        this.element = element;
        this.driver = driver;
        this.foundBy = foundBy;
    }

    @Override
    public void click() {
        try {
            element.click();
        } catch (StaleElementReferenceException e) {
            // log exception

            // assumes implicit wait, use custom findElement() methods for custom behaviour
            element = driver.findElement(foundBy);

            // recursion, consider a conditioned loop instead
            click();
        }
    }

    // ... similar for other methods, too
}
Note that while I think the foundBy info should be accessible from generic WebElements to make this easier, the Selenium developers consider trying something like this a mistake and have chosen not to make this information public. It's arguably a bad practice to re-find stale elements, because you're re-finding them implicitly without any mechanism for checking whether it's justified: the re-finding mechanism could find a completely different element and not the same one again. It also fails horribly with findElements() when many elements are found (you either need to disallow re-finding on elements found by findElements(), or remember the index of your element in the returned List).
I think it would be useful sometimes, but it's true that people would then tend to reach for it instead of options 1 and 2, which are much better solutions for the robustness of your tests. Use those first, and only go for this once you're sure you need it.
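If you do go for it, a decorated find method could hand the wrapper out like this (sketch only, names made up):

// Wraps the raw element so later calls can transparently re-find it.
WebElement findNeverStale(WebDriver driver, By by) {
    return new NeverStaleWebElement(driver.findElement(by), driver, by);
}

// Callers use it exactly like a normal WebElement; a stale reference
// inside click() triggers the re-find shown in the class above.
WebElement button = findNeverStale(driver, By.id("submit"));
button.click();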
Use a task queue (that can rerun past tasks)
Implement your whole workflow in a new way!
Make a central queue of jobs to run. Make this queue remember past jobs.
Implement every needed task ("find an element and click it", "find an element and send keys to it", etc.) using the Command pattern. When called, add the task to the central queue, which will then (either synchronously or asynchronously, it doesn't matter) run it.
Annotate every task with @LoadsNewPage, @Reversible etc. as needed.
Most of your tasks will handle their exceptions by themselves, they should be stand-alone.
When the queue would encounter a stale element exception, it would take the last task from the task history and re-run it to try again.
This would obviously take a lot of effort and, if not thought through very well, could backfire. I used a (much more complex and powerful) variant of this for resuming failed tests after I manually fixed the page they were on. Under some conditions (for example, on a StaleElementException), a failure would not end the test right away, but would wait (before finally timing out after 15 seconds), pop up an informative window and give the user the option to manually refresh the page / click the right button / fix the form / whatever. It would then re-run the failed task or even offer to go some steps back in history (e.g. to the last @LoadsNewPage job).
Final nitpicks
All that said, your original solution could use some polishing. You could combine the two methods into one more general method (or at least make them delegate to it to reduce code repetition):
WebElement getStaleElem(By by, WebDriver driver) {
    try {
        return driver.findElement(by);
    } catch (StaleElementReferenceException e) {
        System.out.println("Attempting to recover from StaleElementReferenceException ...");
        return getStaleElem(by, driver);
    } catch (NoSuchElementException ele) {
        System.out.println("Attempting to recover from NoSuchElementException ...");
        return getStaleElem(by, driver);
    }
}
With Java 7, even a single multicatch block would be sufficient:
WebElement getStaleElem(By by, WebDriver driver) {
    try {
        return driver.findElement(by);
    } catch (StaleElementReferenceException | NoSuchElementException e) {
        System.out.println("Attempting to recover from " + e.getClass().getSimpleName() + "...");
        return getStaleElem(by, driver);
    }
}
This way, you can greatly reduce the amount of code you need to maintain.
I solve this by (1) keeping the stale element and polling it until it throws an exception, and then (2) waiting until the element is visible again.
boolean isStillOnOldPage = true;
while (isStillOnOldPage) {
    try {
        theElement.getAttribute("whatever");
    } catch (StaleElementReferenceException e) {
        isStillOnOldPage = false;
    }
}
WebDriverWait wait = new WebDriverWait(driver, 15);
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("theElementId")));
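Selenium also ships this polling as a ready-made condition, ExpectedConditions.stalenessOf, so the manual while loop can be replaced with a second wait:

WebDriverWait wait = new WebDriverWait(driver, 15);

// Wait until the reference from the old page goes stale...
wait.until(ExpectedConditions.stalenessOf(theElement));
// ...then wait for the element on the new page to become visible.
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("theElementId")));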
If you are trying to click a link that takes you to a new page, and after that you navigate back and click other links, then the code below may help you.
public int getNumberOfElementsFound(By by) {
    return driver.findElements(by).size();
}

public WebElement getElementWithIndex(By by, int pos) {
    return driver.findElements(by).get(pos);
}

/** click on each link */
public void getLinks() throws Exception {
    try {
        List<WebElement> componentList = driver.findElements(By.tagName("a"));
        System.out.println(componentList.size());
        for (WebElement component : componentList)
        {
            //click1();
            System.out.println(component.getAttribute("href"));
        }
        int numberOfElementsFound = getNumberOfElementsFound(By.tagName("a"));
        for (int pos = 0; pos < numberOfElementsFound; pos++) {
            if (getElementWithIndex(By.tagName("a"), pos).isDisplayed()) {
                getElementWithIndex(By.tagName("a"), pos).click();
                Thread.sleep(200);
                driver.navigate().back();
                Thread.sleep(200);
            }
        }
    } catch (Exception e) {
        System.out.println("error in getLinks " + e);
    }
}
Solutions to resolve this:
Storing locators to your elements instead of references (Python example; a Java sketch of the same idea follows this list of solutions)
driver = webdriver.Firefox()
driver.get("http://www.github.com")
search_input = lambda: driver.find_element_by_name('q')
search_input().send_keys('hello world\n')
time.sleep(5)
search_input().send_keys('hello frank\n')  # no stale element exception
Leverage hooks in the JS libraries used
# Using Jquery queue to get animation queue length.
animationQueueIs = """
return $.queue( $("#%s")[0], "fx").length;
""" % element_id
wait_until(lambda: self.driver.execute_script(animationQueueIs)==0)
Moving your actions into JavaScript injection
self.driver.execute_script("$(\"li:contains('Narendra')\").click()");
Proactively wait for the element to go stale
# Wait till the element goes stale, this means the list has updated
wait_until(lambda: is_element_stale(old_link_reference))
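The snippets above are Python; the same "store the locator, not the reference" idea in Java could look like this (a sketch, assuming Java 8+ and java.util.function.Supplier):

// Keep a recipe for finding the element rather than a reference to it.
Supplier<WebElement> searchInput = () -> driver.findElement(By.name("q"));

searchInput.get().sendKeys("hello world\n");
// The page may have re-rendered in between; re-finding avoids staleness.
searchInput.get().sendKeys("hello frank\n");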
This is the solution which worked for me.
When does a Stale Element Exception occur?
A stale element exception can happen when the libraries backing those textboxes/buttons/links have changed: the elements are the same, but their references have changed in the website without the locators being affected. The reference we stored in our cache, including the library reference, has become old, or stale, because the page has been refreshed with updated libraries.
WebElement elementName = null;
for (int j = 0; j < 5; j++) {
    try {
        elementName = driver.findElement(By.xpath("somexpath"));
        break;
    } catch (StaleElementReferenceException e) {
        System.out.println("Stale element error, trying :: " + e.getMessage());
    }
}
elementName.sendKeys("xyz");
For Fitnesse you can use:
|start |Smart Web Driver| selenium.properties|
@Fixture(name = "Smart Web Driver")
public class SmartWebDriver extends SlimWebDriver {

    private final static Logger LOG = LoggerFactory.getLogger(SmartWebDriver.class);

    /**
     * Constructs a new SmartWebDriver.
     */
    @Start(name = "Start Smart Web Driver", arguments = {"configuration"}, example = "|start |Smart Web Driver| selenium.properties|")
    public SmartWebDriver(String configuration) {
        super(configuration);
    }

    /**
     * Waits for an element to become invisible (meaning visible and width and height != 0).
     *
     * @param locator the locator to use to find the element.
     */
    @Command(name = "smartWaitForNotVisible", arguments = {"locator"}, example = "|smartWaitForNotVisible; |//path/to/input (of css=, id=, name=, classname=, link=, partiallink=)|")
    public boolean smartWaitForNotVisible(String locator) {
        try {
            waitForNotVisible(locator);
        } catch (StaleElementReferenceException sere) {
            LOG.info("Element with locator '{}' did not become invisible (visible but also width and height != 0), a StaleElementReferenceException occurred, trying to continue...", locator);
        } catch (NoSuchElementException ele) {
            LOG.info("Element with locator '{}' did not become invisible (visible but also width and height != 0), a NoSuchElementException occurred, trying to continue...", locator);
        } catch (AssertionError ae) {
            if (ae.getMessage().contains("No element found")) {
                LOG.info("Element with locator '{}' did not become invisible (visible but also width and height != 0), an AssertionError occurred, trying to continue...", locator);
            } else {
                throw ae;
            }
        }
        return true;
    }
}
Here is a good article about dynamic waiting strategies: https://www.swtestacademy.com/selenium-wait-javascript-angular-ajax/
Your problem is that you are not properly waiting for all the AJAX, jQuery or Angular calls.
Then you end up with a StaleElementException.
If your approach is to use a try-catch mechanism, I think it has a flaw: you shouldn't rely on that structure, as you never know whether the code in the catch clause will actually work.
Selenium gives you the opportunity to make javascript calls.
You can execute the following commands just to check the existence and state of those calls:
"return jQuery.active == 0"
"return angular.element(document).injector().get('$http').pendingRequests.length === 0"
"return document.readyState"
"return angular.element(document).injector() === undefined"
You can do that before any findBy operation so you always work with the latest state of the page.
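A sketch of how those checks might be wired in before a find, assuming Selenium 3+ (where until() accepts a lambda) and a page that actually loads jQuery; the ids are placeholders:

JavascriptExecutor js = (JavascriptExecutor) driver;
WebDriverWait wait = new WebDriverWait(driver, 30);

// Wait for the document itself to finish loading...
wait.until(d -> "complete".equals(js.executeScript("return document.readyState")));
// ...and, if the page uses jQuery, for its pending AJAX calls to drain.
wait.until(d -> (Boolean) js.executeScript("return jQuery.active == 0"));

// Only then look the element up.
WebElement element = driver.findElement(By.id("someId"));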
