The first one in the list is easy since you can use findElement. I find the element and need to get information from the preceding and following divs. For the nth element in the list, what is the XPath syntax for moving forward/backwards to the other associated divs?
I have tried various XPath following syntaxes such as:
following-sibling::div
/following-sibling::div
./following-sibling::div
And many others. I just have not found the documentation for the correct syntax.
Preceding:
Selects all nodes that come before the current node.
Following-sibling:
Selects the siblings that come after the context node. Siblings are at the same level as the current node, so this axis finds elements that follow the current node.
Here you can find the correct syntax with examples.
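The sibling axes can also be checked outside the browser. Here is a small sketch using the JDK's built-in XPath engine against a made-up three-div fragment (the markup and the 'current' id are placeholders for illustration only):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import java.io.ByteArrayInputStream;

public class SiblingAxesDemo {
    public static void main(String[] args) throws Exception {
        String html = "<root>"
                + "<div>before</div>"
                + "<div id='current'>current</div>"
                + "<div>after</div>"
                + "</root>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(html.getBytes("UTF-8")));
        XPathFactory xpf = XPathFactory.newInstance();

        // preceding-sibling::div selects sibling divs before the context node
        Node before = (Node) xpf.newXPath().evaluate(
                "//div[@id='current']/preceding-sibling::div",
                doc, XPathConstants.NODE);
        // following-sibling::div selects sibling divs after the context node
        Node after = (Node) xpf.newXPath().evaluate(
                "//div[@id='current']/following-sibling::div",
                doc, XPathConstants.NODE);

        System.out.println(before.getTextContent());
        System.out.println(after.getTextContent());
    }
}
```

The same relative expressions (prefixed with "./") work when evaluated from a Selenium WebElement.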
You can also use the driver.findElements() method to find similar elements. It returns a collection of elements which you can iterate over to get the information.
This answer is entirely dependent on what HTML is displayed on the page you are trying to automate.
Without that context, I can only provide a generic answer, but here's how I would loop through a list of elements and grab information from preceding/following divs:
// locate the elements you want to iterate
List<WebElement> elementsToIterate = driver.findElements(By.xpath(someLocatorHere));
// iterate the elements in a for-each loop
for (WebElement element : elementsToIterate) {
    // get the preceding sibling
    WebElement precedingElement = element.findElement(By.xpath("./preceding-sibling::div"));
    System.out.println(precedingElement.getText());
    // get the following sibling
    WebElement followingElement = element.findElement(By.xpath("./following-sibling::div"));
    System.out.println(followingElement.getText());
}
As I mentioned, this is just a generic solution to give you an idea of how this would work. If you want some assistance with the locator strategy, posting your HTML would be helpful.
If you want to find multiple elements, then please find the below example for your reference. Based on your site, you will need to implement the same kind of logic.
Please refer to the screenshot above, where I am trying to get the labels of the two highlighted buttons on Google's home page.
public static void main(String[] args) throws InterruptedException {
    System.setProperty("webdriver.chrome.driver", "chromedriver.exe");
    WebDriver driver = new ChromeDriver();
    driver.get("https://www.google.com"); // open the URL in the browser
    driver.manage().window().maximize();  // maximize the browser
    List<WebElement> listElements = driver.findElements(By.xpath("//div[@class='FPdoLc VlcLAe']//input"));
    for (WebElement element : listElements) {
        System.out.println("Element Text::" + element.getAttribute("aria-label"));
    }
    driver.close();
}
output:
Element Text::Google Search
Element Text::I'm Feeling Lucky
Since the first one is easy; each elementList entry has a current-time and at least one picture:
for (int i = 1; i < elementList.size(); i++) {
    WebElement nextInList = elementList.get(i);
    WebElement getTime = nextInList.findElement(By.xpath("following::div[contains(@class, 'current-time')]"));
.
.
.
WebElement picture = nextInList.findElement(By.xpath("preceding::a[1]"));
.
}
So, having a WebElement in Selenium, I am trying to get all of its direct children by XPath, including text nodes. I already tried the XPath "*", which works but doesn't give me any text nodes (if there are any). Following XPath's documentation, I then tried several things:
"child::node()" gives me an InvalidSelectorException.
Getting all the element nodes with "*" and then trying to get all the text nodes with "text()": gives me an InvalidSelectorException on the "text()" query.
I tried these XPaths with the XPath Helper extension in Chrome and they work as intended, but they don't seem to work with Selenium (Chrome WebDriver and PhantomJS).
This is my loop that I thought should work:
for(WebElement child : node.findElements(new By.ByXPath("child::node()"))) {
//Do something with child
}
Where could be the problem?
This is what I use.
public String getHiddenText(WebDriver driver, WebElement aWebElement) {
return (String) ((JavascriptExecutor) driver).executeScript("return arguments[0].textContent;", aWebElement);
}
Addressing the crawljax comment:
public String getHiddenText(EmbeddedBrowser browser, WebElement aWebElement) {
WebDriver driver = browser.getDriver();
return (String) ((JavascriptExecutor) driver).executeScript("return arguments[0].textContent;", aWebElement);
}
I find that the best way to learn a new tool is to go to its github repository and search the automated tests for examples of how to use it. Just saying. :-)
xpath = //strong[text()='Review the information below, then click "Cancel this Order."']
Description:
With the above XPath, 2 elements are located in Firefox using FirePath.
I want to assert that there are 2 elements available on the page.
Tried with the below code, but it returns 0; @Locator(as=As.XPATH, use="//strong[text()='Review the information below, then click "Cancel this Order.")
Code:
public List<PageElement> reviewTextElement;
public int count(){
int count= reviewTextElement.size();
return count;
}
The code is returning 0 because the XPath is not identifying the elements. Using the escape sequence \" around "Cancel this Order." should solve the problem, assuming the XPath you have mentioned is otherwise correct.
@Locator(as=As.XPATH, use="//strong[text()='Review the information below, then click \"Cancel this Order.\"']")
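As a quick sanity check, the escaped Java string literal can be printed on its own. The concat() variant shown after it is a general XPath trick for text that mixes both quote types (the "It's \"quoted\" text" sample is hypothetical):

```java
public class XpathQuoteDemo {
    public static void main(String[] args) {
        // The XPath literal uses single quotes, so the inner double quotes
        // only need escaping at the Java string level (\").
        String xpath = "//strong[text()='Review the information below, then click \"Cancel this Order.\"']";
        System.out.println(xpath);

        // concat() handles text that mixes BOTH quote types: split the
        // literal wherever the quote style has to change.
        String mixed = "//strong[text()=concat(\"It's \", '\"quoted\"', ' text')]";
        System.out.println(mixed);
    }
}
```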
I have a curious case where the selenium chrome driver getText() method (java) returns an empty string for some elements, even though it returns a non-empty string for other elements with the same xpath. Here is a bit of the page.
<div __gwt_cell="cell-gwt-uid-223" style="outline-style:none;">
<div>Text_1</div>
<div>Text_2</div>
<div>Text_3</div>
<div>Text_4</div>
<div>Text_5</div>
<div>Text_6</div>
</div>
For each of the inner tags, I can get valid return values from getTagName(), getLocation(), isEnabled(), and isDisplayed(). However, getText() returns an empty string for some of the divs.
Further, I notice that with the Mac Chrome driver, it is consistently 'Text_5' for which getText() returns an empty string. With the Windows Chrome driver, it is consistently 'Text_2'. With the Firefox driver, getText() returns the expected text from all the divs.
Has anyone else had this difficulty?
In my code, I use something like this:
List<WebElement> list = driver.findElements(By.xpath("my xPath here"));
for (WebElement e : list) System.out.println(e.getText());
As suggested below, here is the actual XPath I am using. The page snippet above corresponds to the last two divs.
//*[@class='gwt-DialogBox']//tr[contains(@class,'data-grid-table-row')]//td[contains(@class,'lms-assignment-selection-wizard-cell')]/div/div
Update: The textContent attribute is a better option and is supported across the majority of browsers. The differences are explained in detail in this blog post: innerText vs. textContent
As an alternative, the innerText attribute will return the text content of an element which exists in the DOM.
element.getAttribute("innerText")
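A common pattern is to fall back from getText() to getAttribute("innerText") and then getAttribute("textContent"). Since the real Selenium calls need a live browser, here is only a sketch of the fallback logic itself, with simulated return values (the "Text_5" value is borrowed from the question above):

```java
public class TextFallbackDemo {
    // Returns the first non-empty candidate, mirroring the chain:
    // getText() -> getAttribute("innerText") -> getAttribute("textContent")
    static String firstNonEmpty(String... candidates) {
        for (String c : candidates) {
            if (c != null && !c.isEmpty()) {
                return c;
            }
        }
        return "";
    }

    public static void main(String[] args) {
        // Simulated results: getText() returned "", innerText has the value
        System.out.println(firstNonEmpty("", "Text_5", "Text_5"));
    }
}
```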
The isDisplayed() method can sometimes trip up when the element is not really hidden but merely outside the viewport; getText() returns an empty string for such an element.
You can also bring the element into the viewport by scrolling to it using javascript, as follows:
((JavascriptExecutor) driver).executeScript("arguments[0].scrollIntoView(true);", element);
and then getText() should return the correct value.
Details on the isDisplayed() method can be found in this SO question:
How does Selenium WebDriver's isDisplayed() method work
WebElement.getAttribute("value") should help you.
This is not a solution, so I don't know if it belongs in an answer, but it's too long for a comment and includes links, so I'm putting it an answer.
I have had this issue as well. After doing some digging, it seems that the problem arises when trying to get the text of an element that is not visible on the screen (as @Faiz comments above). This can happen if the element is not scrolled to, or if you scroll down and the element is near the top of the document and no longer visible after the scroll. I see you have a findElements() call that gets a list of elements. At least some are probably not visible; you can check this by trying boolean b = webElement.isDisplayed(); on each element in the list and checking the result. (See here for a very long discussion of this issue that's a year old and still has no resolution.)
Apparently, this is a deliberate design decision (see here); getText() on invisible elements is supposed to return empty. Why they are so firm about this, I don't know. Various workarounds have been suggested, including clicking on the element before getting its text or scrolling to it. (See the above link for example code for the latter.) I can't vouch for these because I haven't tried them, but they're just trying to bring the element into visibility so the text will be available. Not sure how practical that is for your application; it wasn't for mine. For some reason, FirefoxDriver does not have this issue, so that's what I use.
I'm sorry I can't give you a better answer - perhaps if you submit a bug report on the issues page they'll see that many people find it to be a bug rather than a feature and they'll change the functionality.
Good luck!
bsg
EDIT
See this question for a possible workaround. You won't be able to use it exactly as given if isDisplayed returns true, but if you know which element is causing the issue, or if the text is not normally blank and you can set an 'if string is empty' condition to catch it when it happens, you can still try it. It doesn't work for everyone, unfortunately.
NEW UPDATE
I just tried the answer given below and it worked for me. So thanks, Faiz!
for (int count = 0; count < sizeofdd; count++)
{
    String innerHTML = getddvalue.get(count).getAttribute("innerHTML");
}
where:
1. getddvalue is the list of WebElements
2. sizeofdd is the size of getddvalue
element.getAttribute("innerText") worked for me, when getText() was returning empty.
I encountered a similar issue recently.
I had to check that the menu tab "LIFE EVENTS" was present in the scroll box. The problem is that there are many menu tabs and you are required to scroll down to see the rest of the menu tabs. So my initial solution worked fine with the visible menu tabs but not the ones that were out of sight.
I used the xpath below to point selenium to the parent element of the entire scroll box.
@FindBy(xpath = "//div[contains(@class, 'menu-tree')]")
protected WebElement menuTree;
I then created a list of WebElements that I could increment through.
The solution worked if the menu tab was visible and returned true, but if the menu tab was out of sight, it returned false.
public boolean menuTabPresent(String theMenuTab) {
List<WebElement> menuTabs = menuTree.findElements(By.xpath(".//i/following-sibling::span"));
for(WebElement e: menuTabs) {
System.out.println(e.getText());
if(e.getText().contains(theMenuTab)) {
return true;
}
}
return false;
}
I found 2 solutions to the problem which both work equally well.
for(WebElement e: menuTabs) {
scrollElementIntoView(e); //Solution 1
System.out.println(e.getAttribute("textContent")); //Solution 2
if(e.getAttribute("textContent").contains(theMenuTab)) {
return true;
}
}
return false;
Solution 1 calls the method below. It causes the scroll box to physically scroll down while Selenium is running.
protected void scrollElementIntoView(WebElement element) {
((JavascriptExecutor) driver).executeScript("arguments[0].scrollIntoView(true)", element);
}
Solution 2 gets the text content of the element you are pointing to (even for the menu tabs not currently visible), thus doing the job that .getText() was not able to do in this situation.
Mine is Python, but the core logic is similar: fall back through these in order:
webElement.text
webElement.get_attribute("innerText")
webElement.get_attribute("textContent")
Full code:
def getText(curElement):
"""
Get Selenium element text
Args:
curElement (WebElement): selenium web element
Returns:
str
Raises:
"""
# # for debug
# elementHtml = curElement.get_attribute("innerHTML")
# print("elementHtml=%s" % elementHtml)
elementText = curElement.text # sometimes does NOT work
if not elementText:
elementText = curElement.get_attribute("innerText")
if not elementText:
elementText = curElement.get_attribute("textContent")
# print("elementText=%s" % elementText)
return elementText
Call it:
curTitle = getText(h2AElement)
Hope it is useful for you.
If you don't care about isDisplayed or the scrolling position, you can also write
String text = (String) ((JavascriptExecutor) driver).executeScript("return $(arguments[0]).text();", element);
or, without jQuery:
String text = (String) ((JavascriptExecutor) driver).executeScript("return arguments[0].innerText;", element);
Related to getText(), I also had an issue, which I resolved like this:
WebElement errMsg;
errMsg = driver.findElement(By.xpath("//div[#id='mbr-login-error']"));
WebElement parent = driver.findElement(By.xpath("//form[#id='mbr-login-form']"));
List<WebElement> children = parent.findElements(By.tagName("div"));
System.out.println("Size is: "+children.size());
//((JavascriptExecutor)driver).executeScript("arguments[0].scrollIntoView(true);", children);
for(int i = 0;i<children.size();i++)
{
System.out.println(i + " " + children.get(i).getText());
}
int indexErr = children.indexOf(errMsg);
System.out.println("index " + indexErr);
Assert.assertEquals(expected, children.get(indexErr).getText());
None of the above solutions worked for me.
What worked for me:
Add, as a predicate of the XPath, a check that the string length is greater than 0:
String text = wait.until(ExpectedConditions.presenceOfElementLocated(By.xpath("//span[string-length(text()) > 0]"))).getText();
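The effect of the string-length predicate can be reproduced with the JDK's XPath engine on a tiny made-up fragment (no browser required); spans with no text node are filtered out:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;

public class StringLengthPredicateDemo {
    public static void main(String[] args) throws Exception {
        String html = "<body>"
                + "<span></span>"            // empty: filtered out by the predicate
                + "<span>visible text</span>"
                + "</body>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(html.getBytes("UTF-8")));

        // Only spans whose text() has non-zero length match
        NodeList matches = (NodeList) XPathFactory.newInstance().newXPath().evaluate(
                "//span[string-length(text()) > 0]", doc, XPathConstants.NODESET);

        System.out.println(matches.getLength());
        System.out.println(matches.item(0).getTextContent());
    }
}
```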
I am searching for a function to find out whether a link exists in the web page; if the link exists, print something like "link found!".
Example (I want to find the XPath "123"):
driver.findElement(By.xpath("123"));
if (the xpath found the "123") {
    System.out.println("link found");
} else {
    System.out.println("missing link..");
}
Assuming you are using Selenium WebDriver?
From the API:
WebElement findElement(By by)
Find the first WebElement using the
given method. This method is affected by the 'implicit wait' times in
force at the time of execution. The findElement(..) invocation will
return a matching row, or try again repeatedly until the configured
timeout is reached. findElement should not be used to look for
non-present elements, use findElements(By) and assert zero length
response instead.
So your code will probably look something like this:
List<WebElement> allElements = driver.findElements(By.xpath("123"));
if (allElements == null || allElements.isEmpty()) {
    System.out.println("missing link..");
} else {
    System.out.println("link found");
}
Find link by "href" attribute? '//a[#href="http://www.example.com/somepage"]'
Find link by text of link? '//a[normalize-space(.)="Some text link"]'
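Both locators can be verified against a tiny XML fragment with the JDK's built-in XPath engine (the URL and link text are just the placeholder values above):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import java.io.ByteArrayInputStream;

public class LinkLocatorDemo {
    public static void main(String[] args) throws Exception {
        String html = "<body>"
                + "<a href='http://www.example.com/somepage'>  Some text link  </a>"
                + "</body>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(html.getBytes("UTF-8")));
        XPathFactory xpf = XPathFactory.newInstance();

        // Find the link by its href attribute
        Node byHref = (Node) xpf.newXPath().evaluate(
                "//a[@href='http://www.example.com/somepage']", doc, XPathConstants.NODE);
        // Find the link by its visible text
        // (normalize-space trims the surrounding whitespace)
        Node byText = (Node) xpf.newXPath().evaluate(
                "//a[normalize-space(.)='Some text link']", doc, XPathConstants.NODE);

        System.out.println(byHref != null ? "link found by href" : "missing link..");
        System.out.println(byText != null ? "link found by text" : "missing link..");
    }
}
```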