XPath issue - detecting an element using Firebug - Java

I have a doubt. Whenever I copy an XPath from Firebug and try to use it in my Selenium script, it does not work, or I get an error saying the element could not be located. But when I write the expression myself and run the same code snippet, it works fine. Why is that? Is there some problem with Firebug, which is supposed to be one of the popular tools?
Suggestions are welcome.

There could be multiple reasons for a Firebug-generated XPath not to work in Selenium.
The two most common are:
the element you are trying to find is not yet present in the DOM - this can happen when the page has not finished loading, or is loaded asynchronously
there are iframe elements on the page; if you need to find an element inside an iframe, you have to switch to it first
Also, don't blindly trust the XPath generated by Firebug - most of the time it will not be the most reliable expression. If possible, rely on id and class attributes, and don't start your XPath expression from the root html element - make it relative (using //). A rough sketch covering both common cases is below.
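A minimal sketch of handling both cases, assuming an existing driver instance and the usual org.openqa.selenium imports; the frame name "paymentFrame" and the button locator are placeholders, not taken from the question:
import org.openqa.selenium.By;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Selenium 3-style explicit wait: give the element up to 10 seconds to appear in the DOM
WebDriverWait wait = new WebDriverWait(driver, 10);

// If the element lives inside an iframe, switch to that frame first ("paymentFrame" is a placeholder name)
driver.switchTo().frame("paymentFrame");
wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//button[@id='submit']"))).click();

// Switch back to the main document afterwards
driver.switchTo().defaultContent();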

The issue here is not with the XPath you are using. First, check where your element is located. The likely reason is that the element is not visible to the driver, and the real question is why it is not visible. Work on that. Also, please provide the HTML in your question; that will make it much easier to resolve the issue quickly.

Related

Selenium WebDriver: is it a professional practice to use XPath?

I am testing a web app using Selenium and Java. I've always avoided XPath like it was a disease. Unfortunately, I got stuck on a stubborn web element buried deep inside a table, with no id or class. I tried everything and even invited my great-great-grandparents, but nay... nothing worked, except XPath... see below.
I tried className, name, cssSelector, etc., e.g.
driver.findElement(By.className("kujes")).click();
This is what worked:
driver.findElement(By.xpath("/html/body/div[7]/div[3]/div/div[2]/div[1]/div[2]/div/div/div/div/div[2]/div/div[1]/div/div/div[6]/div/div[2]/div[2]/div/table/tbody/tr[1]/td[3]")).click();
I do not want anything less than professional in my work.
So, my questions are: is XPath reliable and a good practice?
Is it professional to use XPath?
driver.findElement(By.xpath("/html/body/div[7]/div[3]/div/div[2]/div[1]/div[2]/div/div/div/div/div[2]/div/div[1]/div/div/div[6]/div/div[2]/div[2]/div/table/tbody/tr[1]/td[3]")).click();
The above approach is very bad practice.
Never use indexes in your XPath. It becomes very fragile and will break every time there is even a small change in the target application. Try to ask the developers to add an id to that element; the locator then collapses to a single stable line, as sketched below.
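A hedged example, assuming the developers added id="customerName" to the target cell (a made-up id, not from the question):
// "customerName" is a hypothetical id added by the developers
driver.findElement(By.id("customerName")).click();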
It depends on the case. The ultimate goal is to find a selector which is unique and does not change until a big change happens.
First, try an id or class name that is unique.
Then you can play with CSS selectors to find (examples follow this list):
an element with a given attribute, class name, id, or a combination of them;
an element which is a child of another element;
an element which is the next sibling of another element.
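Rough illustrations of those CSS patterns; the tag names, classes, ids, and attribute values below are placeholders, not taken from the question:
// Attribute, class name, id, and a combination of them
driver.findElement(By.cssSelector("input[name='email']"));
driver.findElement(By.cssSelector("button.submit-btn#save"));

// Child of another element
driver.findElement(By.cssSelector("div.toolbar > button"));

// Next sibling of another element
driver.findElement(By.cssSelector("label[for='email'] + input"));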
You are using an absolute XPath, which is unreadable and prone to change. Using an absolute XPath is completely unprofessional.
driver.findElement(By.xpath("/html/body/div[7]/div[3]/div/div[2]/div[1]/div[2]/div/div/div/div/div[2]/div/div[1]/div/div/div[6]/div/div[2]/div[2]/div/table/tbody/tr[1]/td[3]")).click();
You can use a relative XPath instead:
driver.findElement(By.xpath("//table[@id='somevalue']//td[text()='Name']/preceding-sibling::td")).click();
There are a few cases which are possible only with XPath in Selenium:
finding the parent of an element
finding the preceding sibling of an element
finding an element by its inner text
finding the nth element matching a locator
The above cases are not possible with CSS selectors; XPath is the only straightforward way to find those elements. You can also achieve them indirectly with a jQuery selector and the JavaScript executor. Rough examples of each case follow.
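Hedged sketches of each case; the ids, text values, and index used here are placeholders:
// Parent of an element
driver.findElement(By.xpath("//input[@id='email']/parent::div"));

// Preceding sibling of an element
driver.findElement(By.xpath("//td[text()='Total']/preceding-sibling::td"));

// Element by its inner text
driver.findElement(By.xpath("//button[text()='Save changes']"));

// nth element matching a locator (here the 3rd matching row)
driver.findElement(By.xpath("(//table[@id='orders']//tr)[3]"));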

Which is the best way to locate an element in Selenium WebDriver other than XPath?

The application I'm testing is developing fast, and new features keep being added, requiring changes to the XPaths used in the tests. So Selenium scripts which passed before now fail because the XPaths have changed. Is there any reliable way to locate an element (one which will never change)? FYI, I thought of using IDs, but my application does not have IDs for each and every element, as it is not recommended to add them in the code.
I feel the following is the order of preference for choosing a locator in Selenium:
1. id
2. class name
3. name
4. css
5. xpath
6. link text
7. partial link text
8. tag name
If the DOM structure keeps changing, you can try using functions like text() and contains(). The following link explains the basics of these functions:
http://www.guru99.com/using-contains-sbiling-ancestor-to-find-element-in-selenium.html
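As a rough illustration of those functions (the text and class values are placeholders):
// Match on visible text, which often survives layout changes
driver.findElement(By.xpath("//button[text()='Save']"));

// contains() tolerates extra classes or partial attribute values
driver.findElement(By.xpath("//div[contains(@class, 'alert')]"));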
The following link can be referred to for writing reliable locators:
https://blog.mozilla.org/webqa/2013/09/26/writing-reliable-locators-for-selenium-and-webdriver-tests/
Hope this helps you.
If you cannot impose #id discipline on the interface that keeps changing, one alternative is to use CSS selectors.
Another alternative is to write more robust XPath:
Be smart about using the descendant-or-self axis (//):
rather than /some/long/and/brittle/path/uniquepart, use //uniquepart or //uniquepart/further/path to bypass the parts that are likely to change.
Don't over-specify label matching.
Use case-insensitive contains(), and try to match critical parts of labels that are likely to remain invariant across interface changes (see the sketch below).
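XPath 1.0, which browsers and WebDriver support, has no lower-case() function, so case-insensitive matching is usually done with translate(); a rough sketch, where 'user name' is a placeholder label:
// Case-insensitive contains() via translate(); matches "User Name", "USERNAME", etc.
driver.findElement(By.xpath(
        "//label[contains(translate(., 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'user name')]"));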
One other way I can think of is to load your page elements into the DOM and use DOM element navigation. It is a good practice to have ids on elements, though. If you have to go the XPath way, it is a good practice to split the path, keeping the common prefix separately and appending the leaf parts as needed. In a way, a change in XPath causing the test to fail is a good way of catching the changes.
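A rough sketch of splitting a path like that, where the locator strings are placeholders:
// Keep the stable common prefix in one place (hypothetical locator)
String resultsTable = "//div[@id='results']//table";

// Append the leaf parts where they are needed
driver.findElement(By.xpath(resultsTable + "/tbody/tr[1]/td[3]")).click();
driver.findElement(By.xpath(resultsTable + "//th[text()='Name']")).click();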

I am new to Selenium, using Java (Eclipse), and I am having issues figuring out the XPath when using Firebug

When I use Firebug, the XPath it gives me back is /html/body/div[5]/div[2]/div/div[7]/div/div[4]/div/div[2]/div/ol/li/div/h3/a
I am unclear how to use this in Selenium WebDriver to click on a link.
Thanks!
This is well covered in the documentation.
http://docs.seleniumhq.org/docs/03_webdriver.jsp#selenium-webdriver-api-commands-and-operations
However, because it's a very simple answer, you'd simply do something like:
driver.findElement(By.xpath("//div[5]/div[2]/div/div[7]/div/div[4]/div/div[2]/div/ol/li/div/h3/a")).click();
(Assuming driver is a valid WebDriver instance; the /html/body prefix has been replaced with // to make the expression relative - it isn't needed.)
Always try to simplify your XPaths. Try the XPath in Firebug to see whether it is unique; if not, you'll need to be a little more specific. For example:
"//h3/a"
Don't use XPath unnecessarily; it can cause problems later. If there is no other way to locate that element then, as @Nora said, try to simplify the XPath.
In your case you can use By.linkText or By.partialLinkText:
driver.findElement(By.linkText("linkName")).click();
driver.findElement(By.partialLinkText("partialTextOfLink")).click();
driver.findElement(By.xpath("//a[text()='LinkText']")).click(); //simplified xpath
You can use any of the above if there is no other attribute (id, name, etc.) available for that anchor tag.

Selenium: JUnit4: WebDriver: IE: NoSuchElementException when using IE

I recorded my test in Selenium IDE; it runs fine in Firefox and Chrome. When I run it in IE 9, I keep getting this error. I am new to coding and am having trouble finding an answer to this. If anyone can explain to me in layman's terms what is happening and why this may be the case, I'd appreciate it.
org.openqa.selenium.NoSuchElementException: Unable to find element with css selector == BODY (WARNING: The server did not provide any stacktrace information)
Java Code:
assertTrue(driver.findElement(By.cssSelector("BODY")).getText().matches("^[\\s\\S]*Final[\\s\\S]*$"));
I would suggest printing out driver.getPageSource() before the call and checking whether the HTML on the current page matches your expectations.
If it does match what you'd expect, then I'd print out the text content of all of the elements that match By.cssSelector("BODY") and see if any of them contain the text you'd expect.
This will narrow down where the error is - you might need to find another way of locating this element. A rough sketch of that debugging follows.
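A minimal sketch of that check, assuming an existing driver instance:
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;

// Dump the page source to confirm the expected HTML is actually loaded
System.out.println(driver.getPageSource());

// Print the text of every matching element to see which one contains "Final"
List<WebElement> bodies = driver.findElements(By.cssSelector("body"));
for (WebElement body : bodies) {
    System.out.println(body.getText());
}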
Internally, By uses XPath, and casing matters with Internet Explorer.
Try using
assertTrue(driver.findElement(By.cssSelector("body")).getText().matches("^[\\s\\S]*Final[\\s\\S]*$"));
instead of
assertTrue(driver.findElement(By.cssSelector("BODY")).getText().matches("^[\\s\\S]*Final[\\s\\S]*$"));
I believe that should work for you.
The IE browser doesn't support case-insensitive CSS selectors. Try using
driver.findElement(By.cssSelector("body"));

XPath Select Element Selenium

I am trying to get an element through Selenium with this code:
WebElement a = driver.findElement(By.xpath("//div[@id=':r6']/span/text()"));
Using the same expression in a Firefox plugin the element is found, but in Selenium (Java code) the element is not found. Can someone help me?
The command that you might need is allowNativeXpath - then just use the XPath (either via XPather or after 'inspecting element') to identify your element. Sometimes, though, there's still an issue where Selenium does not 'see' elements described with an XPath while running a script, even though it has no trouble at all when you click the 'Find' button. I usually move the focus up a level and down a level before any commands that Selenium has trouble finding elements for, and it works well thereafter. It's ugly and not at all elegant... but it works.
Selenium uses its own XPath interpreter, and the one native to your browser might be better in some cases.
You can try this instead:
String a = driver.findElement(By.xpath("//div[@id=':r6']/span")).getText();
findElement() has to return an element, so drop the /text() step from the XPath and read the text with getText() instead.
