How to use a select list in Selenium? - Java

I'm trying to select an option from a select list in Selenium, using Java with WebDriver-based syntax.
I've got the select list with:
elements = driver.findElements(By.xpath("//form[@action='inquiry/']/p/select[@name='myselect']"));
if (elements.size() == 0) {
return false;
}
if (guests != null) {
//what do I do here?
}
How do I select the option I want from it?

WebElement select = driver.findElement(By.name("myselect"));
Select dropDown = new Select(select);
String selected = dropDown.getFirstSelectedOption().getText();
if(selected.equals(valueToSelect)){
//already selected;
//do stuff
}
List<WebElement> options = dropDown.getOptions();
for(WebElement option : options){
if(option.getText().equals(valueToSelect)) {
option.click(); //select the option here
}
}
If iterating over all the options like this turns out to be slow, consider something like
dropDown.selectByValue(value);
or
dropDown.selectByVisibleText(text);

A little side note which applies to Java:
In my case, when I was writing the test according to the example from @nilesh, I got a strange error saying that the constructor is invalid. My import was:
import org.openqa.jetty.html.Select;
If you happen to have similar errors, you have to correct that import to this:
import org.openqa.selenium.support.ui.Select;
If you use this second import, everything will work.
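For reference, here is a minimal sketch that combines the correct import with the Select API; the "myselect" name is taken from the question, and the option text is a placeholder:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.Select;

// assuming "driver" is an already initialized WebDriver instance
WebElement selectElement = driver.findElement(By.name("myselect"));
Select dropDown = new Select(selectElement);
// either of these selects an option without looping manually
dropDown.selectByVisibleText("Option text as shown on the page");
// dropDown.selectByValue("option-value-attribute");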

List<WebElement> options = driver.findElements(By.xpath("//form[@action='inquiry/']/p/select[@name='myselect']/option[*** your criteria ***]"));
if (!options.isEmpty()) {
options.get(0).click();
}
Find the option directly with XPath, then click it.

Try to do it like this:
//method to select an option from the dropdown by its visible text
public void selectDropDown(String value) {
WebElement findDropDown = driver.findElement(By.id("SelectDropDowm"));
wait.until(ExpectedConditions.visibilityOf(findDropDown));
super.highlightElement(findDropDown);
new Select(findDropDown).selectByVisibleText(value);
}
//method to highlight the element: briefly flashes a yellow border by setting
//the style attribute and then clearing it again
public void highlightElement(WebElement element) {
for (int i = 0; i < 2; i++) {
JavascriptExecutor js = (JavascriptExecutor) this.getDriver();
js.executeScript(
"arguments[0].setAttribute('style', arguments[1]);",
element, "color: yellow; border: 3px solid yellow;");
js.executeScript(
"arguments[0].setAttribute('style', arguments[1]);",
element, "");
}
}

Related

Element is not clickable in Selenium Java (ChromeDriver)

I am getting exceptions when I try to click on Edit Criteria. Can someone please help? Here is my code:
public void selectCriteriaFromWorklistsOptions() {
waitABit(Constants.medium);
for (int i = 0; i < 3; i++) {
try {
WebElement dropDown = getDriver().findElement(By.cssSelector("div.nav-menu__group > ul:nth-child(2) > li:nth-child(1) > button:nth-child(1)"));
dropDown.click();
break;
} catch (Exception e) {
logger.debug((e.getMessage()));
}
}
}
I also tried this:
WebElement element = waitForCondition().until(ExpectedConditions.elementToBeClickable(By.xpath("//button[contains(text(),'Edit criteria')]")));
element.click();
Wait for this element to become clickable.
And try using the following css selector:
.bttn-menu"
Or:
.nav-menu_bttn>.bttn-menu
Try 2:
button[bss-bttn-menu]
Try 3:
It should look like this in Java:
WebDriverWait wait = new WebDriverWait(driver, 10);
WebElement element = wait.until(ExpectedConditions.elementToBeClickable(By.cssSelector(".nav-menu_bttn>button[bss-bttn-menu]")));
Check the number of underscores (_) yourself; there are probably two of them.
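If the button is found but still reported as not clickable, a common fallback (just a sketch; it assumes the selector above is correct once the underscores are fixed) is to scroll it into view and click it through JavaScript:
WebDriverWait wait = new WebDriverWait(driver, 10);
WebElement button = wait.until(ExpectedConditions.presenceOfElementLocated(
        By.cssSelector(".nav-menu__bttn>button[bss-bttn-menu]")));
// scroll the button into view, then click via the JS engine, which avoids
// "element not clickable at point" errors caused by overlapping elements
JavascriptExecutor js = (JavascriptExecutor) driver;
js.executeScript("arguments[0].scrollIntoView(true);", button);
js.executeScript("arguments[0].click();", button);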

Shadow Root - clicking an href nested under several shadow roots

I have a list of links nested inside several shadowRoots. I have already solved the traversal part with this helper:
public WebElement expandRootElement(WebElement element) {
WebElement ele = (WebElement) ((JavascriptExecutor) driver).executeScript("return arguments[0].shadowRoot",element);
return ele;
}
WebElement root5_adminPanel = shadowRoot4_MduiContainerChild2.findElement(By.cssSelector("#layout > border-layout > ng-view > admin-panel"));
WebElement shadowRoot5_AdminPanel= expandRootElement(root5_adminPanel);
WebElement root6_breadCrumb = shadowRoot5_AdminPanel.findElement(By.cssSelector("#layout > border-layout > breadcrumb"));
WebElement shadowRoot6_breadCrumb = expandRootElement(root6_breadCrumb);
WebElement root6_domainPanel = shadowRoot5_AdminPanel.findElement(By.cssSelector("#layout > border-layout > ng-view > gdsr-domain-panel"));
WebElement shadowRoot6_domainPanel = expandRootElement(root6_domainPanel);
WebElement root7_selectDomain = shadowRoot6_domainPanel.findElement(By.cssSelector("#domainContainer > domain-panel-item.ng-binding.last"));
WebElement shadowRoot7_selectDomain = expandRootElement(root7_selectDomain);
When I reach this shadowRoot7, I have a list of items with the same name, so I collect them into a List:
List<WebElement> rows_table = shadowRoot6_domainPanel.findElements(By.cssSelector("#domainContainer > domain-panel-item:nth-child(n)"));
(They are around 45 items)
This will select all of them, in this case all the domain-panel-item rows.
My problem is that each domain-panel-item still contains another shadowRoot (the same path for all of them), and I would like to select an arbitrary item, not just the first or last one; for example, item number 43.
My attempt was the following, but it doesn't work because it doesn't reach the link that I want:
public void clickSelectedDomain(String domain) {
List<WebElement> rows_table = shadowRoot6_domainPanel.findElements(By.cssSelector("#domainContainer > gdsr-domain-panel-item:nth-child(n)"));
int rows_count = rows_table.size();
for (int row=0; row<rows_count; row++) {
if(rows_table.get(row).getAttribute("href").contains(domain)) {
rows_table.get(row).click();
}
}
}
Does anyone have an idea how to fix this?
You solved the problem by calling executeScript() recursively in order to get the nested Shadow DOMs, but you could have called executeScript() just once and walked the Shadow DOMs successively inside it:
((JavascriptExecutor) driver).executeScript(
    "var root1 = document.querySelector('selector string 1').shadowRoot;" +
    "var root2 = root1.querySelector('selector string 2').shadowRoot;" +
    "var root3 = root2.querySelector('selector string 3').shadowRoot;" +
    // ... and so on for the remaining levels
    "return foundElement;");
Anyway, inside the for () {} loop you should expand the final shadowRoot one more time for each item, and then select the <a> element to check its content, as in the sketch below.
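For illustration, here is a minimal sketch of that loop, reusing the expandRootElement() helper and the selectors from the question; the inner "a" selector is an assumption, since the markup inside each item's shadowRoot is not shown:
public void clickSelectedDomain(String domain) {
    List<WebElement> rows = shadowRoot6_domainPanel.findElements(
            By.cssSelector("#domainContainer > domain-panel-item"));
    for (WebElement row : rows) {
        // expand this item's own shadowRoot before looking for the link
        WebElement rowShadowRoot = expandRootElement(row);
        WebElement link = rowShadowRoot.findElement(By.cssSelector("a"));
        String href = link.getAttribute("href");
        if (href != null && href.contains(domain)) {
            link.click();
            return;
        }
    }
}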

I am getting StaleElementReferenceException: element is not attached to the page document

I am getting StaleElementReferenceException: element is not attached to the page document. I went through some of the solutions that are already on Stack Overflow, but they did not work and the code continues to throw the same error. Here is the code that throws the stale reference error:
WebElement table2 = driver.findElement(By.cssSelector("body > div:nth-child(74) > div.sp-palette-container"));
List<WebElement> allrows2 = table2.findElements(By.tagName("div"));
for(WebElement row2: allrows2){
List<WebElement> cells = row2.findElements(By.tagName("span"));
for(WebElement cell:cells){
if (cell.getAttribute("title").equals("rgb(0, 158, 236)")) {
cell.click();
}
}
}
Clicking the found cell leads to some HTML changes on the current page; because of these changes Selenium treats the page after the click as a "new" page (even though it does not actually redirect to another page).
In the next iteration of the loop, the loop still refers to elements that belong to the "previous" page, and this is the root cause of the StaleElementReferenceException.
So you need to find those elements again on the "new" page, so that the references come from the "new" page.
WebElement table2 = driver.findElement(By.cssSelector("body > div:nth-child(74) > div.sp-palette-container"));
List<WebElement> allrows2 = table2.findElements(By.tagName("div"));
int rowSize, cellSize = 0;
rowSize = allrows2.size();
for(int rowIndex=0;rowIndex<rowSize;rowIndex++){
WebElement row2 = allrows2.get(rowIndex);
List<WebElement> cells = row2.findElements(By.tagName("span"));
cellSize = cells.size();
for(int cellIndex=0;cellIndex<cellSize;cellIndex++){
WebElement cell = cells.get(cellIndex);
if (cell.getAttribute("title").equals("rgb(0, 158, 236)")) {
cell.click();
// find cells again on "new" page
cells = row2.findElements(By.tagName("span"));
// find rows again on "new" page
allrows2 = table2.findElements(By.tagName("div"));
}
}
}
If your use case is to click() on the elements whose title is rgb(0, 158, 236), you can use the following code block:
String baseURL = driver.getCurrentUrl();
List<WebElement> total_cells = driver.findElements(By.xpath("//div[#class='sp-palette-container']//div//span"));
int size = total_cells.size();
for(int i=0;i<size;i++)
{
List<WebElement> cells = driver.findElements(By.xpath("//div[#class='sp-palette-container']//div//span"));
if (cells.get(i).getAttribute("title").contains("rgb(0, 158, 236)"))
{
cells.get(i).click();
//do your other tasks
driver.get(baseURL);
}
}
Use a "break" after clicking on the element found. The exception occurs because, after clicking on your element, the loop continues.
WebElement table2 = driver.findElement(By.cssSelector("body > div:nth-child(74) > div.sp-palette-container"));
List<WebElement> allrows2 = table2.findElements(By.tagName("div"));
for(WebElement row2: allrows2){
List<WebElement> cells = row2.findElements(By.tagName("span"));
for(WebElement cell:cells){
if (cell.getAttribute("title").equals("rgb(0, 158, 236)")) {
cell.click();
break;
}
}
}
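As a more general pattern (not taken from any of the answers above): when a click re-renders part of the DOM, a small helper that re-locates the element and retries on staleness keeps the loop code simple. A sketch; the method name and retry count are arbitrary:
import org.openqa.selenium.StaleElementReferenceException;

// Re-locate and click an element, retrying if the reference goes stale.
public static void clickWithRetry(WebDriver driver, By locator, int maxAttempts) {
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
        try {
            driver.findElement(locator).click();
            return; // clicked successfully
        } catch (StaleElementReferenceException e) {
            // the DOM changed between findElement() and click(); look it up again
        }
    }
    throw new RuntimeException("Element kept going stale: " + locator);
}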

Why is this WebElement not returning the full xpath?

I have several WebElements such that executing the following
List<WebElement> customers = driver.findElements(By.xpath("//div[@id='Customers']/table/tbody/tr"));
System.out.println(customers.size());
would print 5.
So then why does the following code
List<WebElement> customers = driver.findElements(By.xpath("//div[@id='Customers']/table/tbody/tr"));
for (WebElement customer : customers) {
if (customer.getText().equals("SQA")) {
WebElement test = customer;
System.out.println(test);
break;
}
}
print xpath: //div[@id='Customers']/table/tbody/tr and fail to include the specific index in the path? The xpath above is useless to me; I'm expecting the location where SQA was found, i.e.:
xpath: //div[@id='Customers']/table/tbody/tr[4]
I think it just prints the locator used to find the element. If you want the index, just change your code to
List<WebElement> customers = driver.findElements(By.xpath("//div[@id='Customers']/table/tbody/tr"));
for (int i = 0; i < customers.size(); i++)
{
if (customers.get(i).getText().equals("SQA"))
{
System.out.println(i);
break;
}
}
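If you actually need the indexed XPath string itself rather than just the index, you can build it from the loop counter; a sketch that keeps the locator from the question (note that XPath positions are 1-based):
List<WebElement> customers = driver.findElements(By.xpath("//div[@id='Customers']/table/tbody/tr"));
for (int i = 0; i < customers.size(); i++) {
    if (customers.get(i).getText().equals("SQA")) {
        // row i in the list corresponds to tr[i + 1] in XPath
        String rowXpath = "//div[@id='Customers']/table/tbody/tr[" + (i + 1) + "]";
        System.out.println("xpath: " + rowXpath);
        break;
    }
}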

How to click a result from a Google search that is not on the first page with Selenium Java

I am trying to navigate through search results from Google with Selenium WebDriver. I have an interface for the user to enter a word to search for and a site title to choose. If the result is not on the first page, the driver should go to the next page to look for the site, and if it is not there, then to the next page, and so on.
Somehow I don't manage to get beyond the second page, and even when I do get to the second page and the right site is there, the driver doesn't click on it.
Here is some of the code in Java:
private void setLoopNum(int l){
String getText = urlText.getText();
String getSiteName = linkToChoose.getText();
System.setProperty("webdriver.chrome.driver", "C:\\selenium-2.44.0\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.manage().window().maximize(); //Maximize window
driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
for(int i=0;i<l;i++){
//WebDriver driver = new FirefoxDriver();
driver.get("http://google.com");
//driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
WebElement element1 = driver.findElement(By.name("q"));
element1.sendKeys(getText);
element1.submit();
//driver.manage().timeouts().implicitlyWait(30,TimeUnit.SECONDS); //wait for page to load
//try{
boolean flag = false;
String page_number = "1";
while(! flag){
//get all the search results
List<WebElement> linkElements = driver.findElements(By.xpath("//h3[@class='r']/a"));
for(WebElement eachResult: linkElements){
if(eachResult.getAttribute(getSiteName).equals(getSiteName)){
eachResult.findElement(By.xpath("//a[@href='" + getSiteName + "']")).click();
flag =true;
}else{
driver.findElement(By.xpath("//a[@id='pnnext']/span")).click();
linkElements.clear(); //clean list
break;
} //end else
}
}//end while loop
//}catch(Exception e){
// System.out.println("Error!");
// }
}
driver.quit(); //clear memory
}
Three things are missing in your code:
Firstly, your code effectively only handles the first matching element: the else branch clicks "next" as soon as a single result does not match.
Secondly, in getAttribute you are passing the link instead of "href":
if(eachResult.getAttribute(getSiteName).equals(getSiteName)){
it should be:
if(eachResult.getAttribute("href").equals(getSiteName)){
Thirdly, on clicking next, the page is loaded via Google's Ajax API. The WebDriver click therefore will not block the execution of your code, and linkElements would still be loaded with the previous page's links only. To avoid this, let the driver refresh or wait for a certain condition in your code.
Can you try this code:
WebDriverWait wait = new WebDriverWait(driver, 10);
while (!flag) {
// get all the search results
linkElements = wait
.until(ExpectedConditions
.presenceOfAllElementsLocatedBy(By
.xpath("//h3[#class='r']/a")));
for (WebElement eachResult : linkElements) {
if (eachResult.getAttribute("href").contains(getSiteName)) {
eachResult.click();
flag = true;
break;
}
}
if (!flag) {
driver.findElement(By.xpath("//a[#id='pnnext']/span[1]"))
.click();
pageNumber++;
linkElements.clear(); // clean list
wait.until(ExpectedConditions
.textToBePresentInElementLocated(
By.xpath("//td[#class='cur']"), pageNumber
+ "")); // Checking whether page number is changed as expected.
}
}// end while loop
EDIT:
List<WebElement> linkElements = new ArrayList<WebElement>();
ListIterator<WebElement> itr = null;
System.setProperty("webdriver.chrome.driver",
"webdrivers/chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.manage().window().maximize(); // Maximize window
driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
driver.get("http://google.com");
WebElement element1 = driver.findElement(By.name("q"));
WebElement toClick = null;
element1.sendKeys(getText);
element1.submit();
// try{
int pageNumber = 1;
WebDriverWait wait = new WebDriverWait(driver, 10);
boolean flag = false;
while (!flag) {
linkElements = wait.until(ExpectedConditions
.presenceOfAllElementsLocatedBy(By
.xpath("//h3[#class='r']/a")));
itr = linkElements.listIterator(); // re-initializing iterator
while (itr.hasNext()) {
toClick = itr.next();
if (toClick.getAttribute("href").contains(getSiteName)) {
toClick.click();
flag = true;
break;
}
}
if (!flag) {
driver.findElement(By.xpath("//a[#id='pnnext']/span[1]"))
.click();
pageNumber++;
linkElements.clear(); // clean list
wait.until(ExpectedConditions.textToBePresentInElementLocated(
By.xpath("//td[#class='cur']"), pageNumber + ""));
}
}
driver.quit(); // clear memory
}
It looks like you're moving to the next page every time ANY of the WebElements in linkElements isn't what you're looking for. This will cause problems, as you need to relocate any elements that are re-rendered.
Give this a shot:
boolean found = false;
int page_number = 1; //If you need this as a string, you can make it one later
while(! found){
//get all the search results
List<WebElement> linkElements = driver.findElements(By.xpath("//h3[@class='r']/a"));
for(WebElement result: linkElements){
if(result.getAttribute("href").equals(getSiteName))
{
result.click();
found=true;
break;
}
}//End of foreach-loop
if(!found){
driver.findElement(By.xpath("//a[#id='pnnext']/span")).click();
page_number++;
}
}//End of while-loop
Also, you'll want to have some element-finding protection. Say that you search for something that has 0 results, or only one page of them (rare though that is). In the first case, you're lucky, because driver.findElements() should just return an empty list rather than throwing some exception, and the foreach loop just won't run, but in both cases, there won't be the anchor #pnnext, which will cause driver.findElement to throw an exception when you search for it. There are several ways to protect against this, such as writing a small wrapper function (IIRC, they have a simple implementation for findelementwithtimeoutwait() written on the Selenium website somewhere). I suggest you pick/write one and start using it, instead of the raw Selenium functions.
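A minimal sketch of such a wrapper, under the assumption that returning null (instead of throwing) is an acceptable way to signal "not found"; the name findElementOrNull is made up for illustration:
// Returns the element if it appears within timeoutSeconds, otherwise null
// (for example when there is no #pnnext anchor because there are no more pages).
public static WebElement findElementOrNull(WebDriver driver, By locator, int timeoutSeconds) {
    try {
        return new WebDriverWait(driver, timeoutSeconds)
                .until(ExpectedConditions.presenceOfElementLocated(locator));
    } catch (org.openqa.selenium.TimeoutException e) {
        return null;
    }
}

// usage:
WebElement next = findElementOrNull(driver, By.id("pnnext"), 5);
if (next == null) {
    // no more result pages; stop paging
}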
