Cannot pick a random search result using Selenium with Java

As a novice to Selenium, I am trying to automate a shopping site with Selenium WebDriver and Java. My scenario: when I search with a keyword and get results, I should be able to pick any one of the results at random. However, I am unable to pick a random search result: either I get a "No such element" error, or, when I try to click the same result every time, the search results vary from run to run. Please help on how to proceed further.
Here is the code:
package newPackage;

import java.util.concurrent.TimeUnit;

import org.openqa.selenium.*;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.FluentWait;
import org.openqa.selenium.support.ui.Wait;

public class flipKart {
    public static void main(String[] args) throws InterruptedException {
        System.setProperty("webdriver.chrome.driver", "C:\\chromedriver.exe");
        WebDriver dr = new ChromeDriver();
        dr.get("http://m.barnesandnoble.com/");
        dr.manage().window().maximize();
        dr.findElement(By.xpath(".//*[@id='search_icon']")).click();
        dr.findElement(By.xpath(".//*[@id='sk_mobContentSearchInput']")).sendKeys("Golden Book");
        dr.findElement(By.xpath(".//*[@id='sk_mobContentSearchInput']")).sendKeys(Keys.ENTER);
        dr.findElement(By.xpath(".//*[@id='skMob_productDetails_prd9780735217034']/div/div")).click();
        dr.findElement(By.xpath(".//*[@id='pdpAddtoBagBtn']")).click();
    }
}

You should write a method that waits for the visibility of the element that needs to be clicked.
You could drop in a quick Thread.sleep() just to check, but an explicit wait is the better approach.
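A minimal sketch of that idea, using the add-to-bag locator from the question. The 10-second timeout is an assumption, and the two-argument WebDriverWait constructor is the Selenium 3 style (Selenium 4 takes a Duration):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitThenClick {
    public static void waitAndClick(WebDriver dr) {
        // Wait until the add-to-bag button is visible before clicking it.
        WebElement button = new WebDriverWait(dr, 10)
                .until(ExpectedConditions.visibilityOfElementLocated(
                        By.xpath(".//*[@id='pdpAddtoBagBtn']")));
        button.click();
    }
}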

Hard to answer with your info, but these tips may help:
If you're getting NoSuchElementException, verify that the CSS selector or XPath you are using is correct. Firefox's Firebug with FireFinder is an excellent tool for this; it will highlight the element your selector points to.
If your selector is correct, make sure you are using findElements(...) and not findElement(...):
the plural version returns a list of WebElements, from which you can then pull a random element to click on (see the sketch after these tips).
Use an intelligent wait to make sure the elements have loaded on the page. Sometimes Selenium will try to interact with elements before they appear. The Selenium API has plenty of methods to help here, but if you're just debugging, a quick Thread.sleep(5000) when you load the page will work.
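A minimal sketch of the findElements-plus-random-pick idea. The [id^='skMob_productDetails_'] locator is an assumption based on the result IDs in the question, and the two-argument WebDriverWait constructor is the Selenium 3 style:

import java.util.List;
import java.util.Random;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class RandomResultPicker {
    public static void clickRandomResult(WebDriver driver) {
        By results = By.cssSelector("[id^='skMob_productDetails_']");

        // Wait until at least one result is present instead of sleeping blindly.
        new WebDriverWait(driver, 10)
                .until(ExpectedConditions.presenceOfAllElementsLocatedBy(results));

        // The plural findElements returns every matching result tile.
        List<WebElement> tiles = driver.findElements(results);

        // Pick one at random and click it.
        tiles.get(new Random().nextInt(tiles.size())).click();
    }
}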

Related

XPath tested and correct, but I receive the error: no such element: Unable to locate element

My XPath is correct, there is no iframe, and I can locate the element in the Chrome console, but my program still fails. I have also used an explicit wait.
no such element: Unable to locate element: {"method":"xpath","selector":"//*[contains(@ng-click,'authenticationCtrl.onSubmitMage()')]"}
I tested my XPath with Try XPath and it works, but when I run my code I still receive the error.
The page object:
package com.orange.pageObject;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.CacheLookup;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class MageReferentiel {

    WebDriver webdriver;

    public MageReferentiel(WebDriver rwebDriver) {
        webdriver = rwebDriver;
        PageFactory.initElements(webdriver, this);
    }

    @FindBy(xpath = "//*[contains(@ng-click,'authenticationCtrl.onSubmitMage()')]")
    @CacheLookup
    WebElement connexion;

    public void clickConnexion() {
        connexion.click();
    }
}
The step definition:
@When("l utilisateur choisi le referentiel")
public void l_utilisateur_choisi_le_referentiel() throws Exception {
    mr.clickConnexion();
    Thread.sleep(3000);
}
I'm just trying to click the button. Thanks.
I agree with @Prophet: it could be that some JS call is changing the state of the button //*[contains(@ng-click,'authenticationCtrl.onSubmitMage()')] to some other state. So what we can do is try a different locator, such as:
//button[@translate='LOGIN']
and see if that works; if it doesn't, try changing it to a CSS selector.
Since ng elements go very well with Protractor (Angular), it would be better to use Protractor in that case; it would be something like element(by.css('[ng-click*="authenticationCtrl.onSubmitMage"]')).click();
I guess the ng-click attribute value is dynamically updated on the page, so when you try to access the element it has already changed and no longer has its initial state.
Instead of the locator you are using, try this XPath:
//button[contains(text(),'Connexion')]
or this
//button[@translate='LOGIN']
The second element with this locator will be
(//button[@translate='LOGIN'])[2]
Looks like the element is not rendering in time. Try using an explicit wait. The following gif shows how it is done using Cucumber:
https://nocodebdd.live/waitime-cucumber
Same been implemented using NoCodeBDD:
https://nocodebdd.live/waittime-nocodebdd
Disclaimer: I am the founder of NoCodeBDD, which aims to let BDD automation be set up in minutes and without code. With NoCodeBDD you can automate the majority of scenarios, and it lets you write your own code for edge cases. I would love to get some feedback on the product from the community. The basic version (https://www.nocodebdd.com/download) is free to use.
The default wait strategy in Selenium is just that the page is loaded.
You've got an Angular page, so after the page is loaded there is a short delay while the JS runs and the element finally appears in the DOM; this delay is causing your script to fail.
Check out the Selenium docs here for wait strategies.
Your options are:
An explicit wait - this needs to be set per element you need to sync on.
I note you say you've used an explicit wait - but where? It's not present in the code you shared, and it might be that you've used the wrong expected condition.
Try something like this:
WebElement button = new WebDriverWait(rwebDriver, Duration.ofSeconds(10))
        .until(ExpectedConditions.elementToBeClickable(
                By.xpath("//*[contains(@ng-click,'authenticationCtrl.onSubmitMage()')]")));
button.click();
Use an implicit wait - you only set this once when you initialise the driver, and it will wait the specified amount of time for all element interactions.
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
There are other reasons that Selenium returns a NoSuchElementException, but synchronisation is the most common one. Give the wait a go and let me know if it is still giving you trouble.
Through discussion in the comments, it turns out the trouble is an iframe.
If you google it - there are lots of answers out there.
With frames, you need to identify it, switch to it, do your action(s) then switch back:
//find and switch - update the By.
driver.switchTo().frame(driver.findElement(By.id("your frame id")));
//actions go here
//back to normal
driver.switchTo().defaultContent();

I find the Register button using XPath, but it shows 2 matching nodes. How can I uniquely identify the Register button?

Snapshot of the DOM: two matching nodes are displayed for the XPath:
.//*[@id='header']/div/div[2]/div/a[2]
You can try this XPath - //div[@id='header'][not(@class)]//div[@class='right-side']/div/a[contains(.,'Register')]
There are two almost identical div containers. The only difference is that the relevant container does not have a class attribute, hence the not() part.
Or you can use an XPath with an index - (//div[@class='right-side']/div/a[contains(.,'Register')])[1]
No need to find a unique identifier for this; you can use findElements and click by index.
driver.findElements(By.xpath("//*[@id='header']/div/div[2]/div/a[2]")).get(index).click();
OR
You can use CSS Selector and locate the relevant element by using :nth-child(index).
In your case:
driver.findElement(By.cssSelector("#header:nth-child(index) a.button.border:nth-child(1)")).click();
There are more ways to locate the element using CSS selectors, and I suggest reading about them.
Also, when you inspect the element in the browser you can choose to copy its CSS selector or XPath; this option will give you a unique locator.
A quick and dirty solution: (//*[@id='header']/div/div[2]/div/a[2])[1] for the first one or (//*[@id='header']/div/div[2]/div/a[2])[2] for the second one. But really you should practice writing more relative XPaths and not just take what the plugins give you.
Don't go for complicated XPaths.
This would work fine: By.xpath("(//A[@href='/Account/Register'])[1]")
I hope the below code helps you.
import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.WebDriverWait;

public class Upmile {
    public static void main(String[] args) throws InterruptedException {
        WebDriver driver = new ChromeDriver();
        driver.manage().window().maximize();
        driver.get("http://upamile.com");
        // This website shows a dialog box at first;
        // we can skip that dialog by clicking on the body.
        driver.findElement(By.tagName("body")).click();
        Thread.sleep(2000);
        driver.findElement(By.xpath("(//A[@href='/Account/Register'])[1]")).click();
        System.out.println("Test ran successfully");
    }
}

Is there any way to automate with Selenium WebDriver without the use of XPath/id/CSS?

I am trying to automate testing with Selenium WebDriver without the need for XPath. The problem I'm facing is that when the site is modified, the XPath changes. Elements (like buttons, drop-downs, etc.) that need some action performed still require an XPath or something else to identify them. And if I want to fetch data (table contents) from the site to validate the execution, I will need lots of XPaths to do so.
Is there a better way to avoid some of these XPaths?
Instead of using XPath, you can map the elements by CSS selectors, like this:
driver.findElement(By.cssSelector("css selector"));
or by ID, like this:
driver.findElement(By.id("coolestWidgetEvah"));
There are many more than these two. See the Selenium documentation.
Steven, you basically have two choices the way I see it. One is to inject your own attributes (e.g. a qa attribute) into your web elements, which will never change (a small sketch follows below). Please see this post on how you can achieve this:
Selenium: Can I set any of the attribute value of a WebElement in Selenium?
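A minimal sketch of that idea, assuming a hypothetical data-qa attribute injected from the test via JavaScript; in a real project the attribute would ideally be added in the application source by the developers:

import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class QaAttributeExample {
    public static void tagAndFind(WebDriver driver) {
        // Tag the element once via JavaScript (hypothetical attribute name)...
        WebElement submit = driver.findElement(By.cssSelector("button[type='submit']"));
        ((JavascriptExecutor) driver)
                .executeScript("arguments[0].setAttribute('data-qa', 'login-submit');", submit);

        // ...then every later lookup can use the stable attribute instead of a brittle XPath.
        driver.findElement(By.cssSelector("[data-qa='login-submit']")).click();
    }
}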
Alternatively you can still use a 'naked' XPath to locate your elements.
By 'naked' I mean generic, so not so specific.
Consider an element nested like this:
div id="username"
  input class="usernameField"
    button type='submit'
So, instead of locating it like this (which is very specific/aggressive):
//div[@id='username']//input[@class='usernameField']//button[@type='submit']
you can use a milder approach, omitting the specific values, like so:
//div[@id]//input[@class]//button[@type]
This is less likely to break upon change. However, beware: you need to be 100% sure that with the second approach you are locating a unique element. In other words, if there is more than one button you might select the wrong one or cause a Selenium exception.
I would recommend this XPath helper add-on for Chrome, which highlights elements on the screen when your XPath is correct and also shows you how many elements match your XPath (i.e. unique or not):
xpath Helper
Hope the above makes sense; don't hesitate to ask if it does not!
Best of luck!
Of course there are other ways without using id/XPath/CSS, and even without sendKeys. The solution is to do it via Sikuli.
Things to do:
You have to download the Sikuli setup jar (sikulixsetup-1.1.0) from https://launchpad.net/sikuli/+download.
Run the Sikuli setup jar, which extracts "sikulixapi" and adds it to the PATH variable.
Add the external jar "sikulixapi" at the project level through Eclipse.
Now take images of the elements where you want to type some text or click.
Use the references to those images in the Selenium Java code to type text and perform clicks.
Here is a simple script that browses to https://www.google.co.in/, moves on to the login page, and enters an email id and password without any XPath or sendKeys.
package SikuliDemo;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.sikuli.script.Pattern;
import org.sikuli.script.Screen;

public class SikuliDemoScript {
    public static void main(String[] args) throws Exception {
        Screen screen = new Screen();
        Pattern image1 = new Pattern("C:\\Utility\\OP_Resources\\Sikuli_op_images\\gmailLogo.png");
        Pattern image2 = new Pattern("C:\\Utility\\OP_Resources\\Sikuli_op_images\\gmailSignIn.png");
        Pattern image3 = new Pattern("C:\\Utility\\OP_Resources\\Sikuli_op_images\\Email.png");
        Pattern image4 = new Pattern("C:\\Utility\\OP_Resources\\Sikuli_op_images\\EmailNext.png");
        Pattern image5 = new Pattern("C:\\Utility\\OP_Resources\\Sikuli_op_images\\Password.png");
        Pattern image6 = new Pattern("C:\\Utility\\OP_Resources\\Sikuli_op_images\\SignIn.png");
        System.setProperty("webdriver.chrome.driver", "C:\\Utility\\BrowserDrivers\\chromedriver.exe");
        WebDriver driver = new ChromeDriver();
        driver.manage().window().maximize();
        driver.get("https://www.google.co.in/");
        screen.wait(image1, 10);
        screen.click(image1);
        screen.wait(image2, 10);
        screen.click(image2);
        screen.type(image3, "selenium");
        screen.click(image4);
        screen.wait(image5, 10);
        screen.type(image5, "admin123");
        screen.click(image6);
        driver.quit();
    }
}
Let me know if this answers your question.

Selenium finding element

I'm trying to find the element but I'm getting an error
This is my code:
driver.get(baseURL);
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
driver.manage().window().maximize();
// String parentHandle = driver.getWindowHandle();
driver.findElement(By.linkText("Create Account")).click();
System.out.println(driver.getCurrentUrl());
// String currentWindow = driver.getWindowHandle();
// driver.switchTo().window(currentWindow);
for (String winHandle : driver.getWindowHandles()) {
    // switch focus of WebDriver to the next found window handle (the newly opened window)
    driver.switchTo().window(winHandle);
}
driver.findElement(By.xpath("//html/body/div/div[1]/div[1]/div/div/form/div[1]/input")).sendKeys("9051902811");
driver.close();
// driver.switchTo().window(parentHandle);
} catch (NoSuchElementException nsee) {
    System.out.println(nsee.toString());
}
System.exit(0);
}
And I am getting the exception:
Unable to locate element: {"method":"xpath","selector":"//html/body/div/div[1]/div[1]/div/div/form/div[1]/input"} Command duration or timeout: 89 milliseconds
Please help...
Based on what you have shown it is hard to say exactly what the issue is. It could be a couple different things.
1) Does the element exist? If you show the html this could be answered very quickly.
Xpaths are very brittle and very much error prone.
Try to use a different selector if at all possible, id and class are much more reliable.
Here is the link to the By class.
driver.findElement(By.id("id")).sendKeys("keys");
driver.findElement(By.className("className")).sendKeys("keys");
Use something more concrete, so that when content is added and the structure changes, it doesn't break the tests; absolute XPaths will certainly make your tests brittle.
2) Is the input box loaded yet?
Selenium will sometimes try to find an element that isn't loaded yet. Explicit waits help solve this issue; you can use them along with ExpectedConditions to wait for an element to be visible, clickable, not visible, among other things. Do not use Thread.sleep unless there is no other choice (and there probably is).
The below code can be used to wait for the element to be visible.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
WebDriverWait wait = new WebDriverWait(driver, seconds);
wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath(xpath)));
You can use this to ensure the element is visible by XPath, but if possible you should consider a different selector, one that won't make your test so brittle. If the element does not have an id/class, you can anchor the XPath to a more reliable selector as well, to reduce some of the brittleness.
I will be happy to provide more info if you provide the HTML.

To identify links regarding the Press Release pages alone

My task is to find the actual press release links on a given page, say http://www.apple.com/pr/ for example.
My tool has to find only the press release links from the above URL, excluding other advertisement links, tab links (or whatever) that are found on that site.
The program below has been developed, and the result it gives is all the links present in the given web page.
How can I modify the program below to find only the press release links from a given URL?
Also, I want the program to be generic, so that it identifies press release links from any press release URL it is given.
import java.io.*;
import java.net.URL;
import java.net.URLConnection;
import java.sql.*;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class linksfind {
    public static void main(String[] args) {
        try {
            URL url = new URL("http://www.apple.com/pr/");
            Document document = Jsoup.parse(url, 1000); // Can also take an URL.
            for (Element element : document.getElementsByTag("a")) {
                System.out.println(element.attr("href"));
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
I don't think there is any definitive way to achieve this. You can build a set of possible keywords like 'press', 'release', 'pr', etc. and match the URLs against those keywords using regex. The correctness of this depends on how comprehensive your set of keywords is.
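A minimal sketch of that keyword-matching idea on top of the question's Jsoup code; the keyword pattern here is only an assumption and would need tuning for other sites:

import java.net.URL;
import java.util.regex.Pattern;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class KeywordLinkFilter {
    // Hypothetical keyword pattern: matches "press", "release" or "/pr/" anywhere in the URL.
    private static final Pattern PRESS_KEYWORDS =
            Pattern.compile("(press|release|/pr/)", Pattern.CASE_INSENSITIVE);

    public static void main(String[] args) throws Exception {
        Document document = Jsoup.parse(new URL("http://www.apple.com/pr/"), 3000);
        for (Element element : document.getElementsByTag("a")) {
            String href = element.attr("href");
            // Keep only the links whose URL contains one of the keywords.
            if (PRESS_KEYWORDS.matcher(href).find()) {
                System.out.println(href);
            }
        }
    }
}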
Look at the site today and cache to a file whatever links you saw. Look at the site tomorrow; any new links are links to news articles, maybe? You'll get incorrect results (once) any time they change the rest of the page around you.
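A minimal sketch of that cache-and-diff idea, assuming a hypothetical seen-links.txt cache file next to the program:

import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class LinkDiff {
    public static void main(String[] args) throws Exception {
        Path cache = Paths.get("seen-links.txt"); // hypothetical cache file

        // Load the links recorded on the previous run (empty on the first run).
        Set<String> seen = Files.exists(cache)
                ? new HashSet<>(Files.readAllLines(cache))
                : new HashSet<>();

        // Collect today's links.
        Document document = Jsoup.parse(new URL("http://www.apple.com/pr/"), 3000);
        Set<String> today = new HashSet<>();
        for (Element element : document.getElementsByTag("a")) {
            today.add(element.attr("href"));
        }

        // Anything we have not seen before is (probably) a new article.
        for (String href : today) {
            if (!seen.contains(href)) {
                System.out.println("New link: " + href);
            }
        }

        // Remember everything for the next run.
        Files.write(cache, today);
    }
}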
You could, you know, just use the RSS feed provided, which is designed to do exactly what you're asking for.
Look at the HTML source code. Open the page in a normal web browser, right-click and choose View Source. You have to find a path in the HTML document tree that uniquely identifies those links.
They are all housed in a <ul class="stories"> element inside a <div id="releases"> element. The appropriate CSS selector would then be "div#releases ul.stories a".
Here's how it should look:
public static void main(String... args) throws Exception {
    URL url = new URL("http://www.apple.com/pr/");
    Document document = Jsoup.parse(url, 3000);

    for (Element element : document.select("div#releases ul.stories a")) {
        System.out.println(element.attr("href"));
    }
}
This yields, as of now, exactly what you want:
/pr/library/2010/07/28safari.html
/pr/library/2010/07/27imac.html
/pr/library/2010/07/27macpro.html
/pr/library/2010/07/27display.html
/pr/library/2010/07/26iphone.html
/pr/library/2010/07/23iphonestatement.html
/pr/library/2010/07/20results.html
/pr/library/2010/07/19ipad.html
/pr/library/2010/07/19alert_results.html
/pr/library/2010/07/02appleletter.html
/pr/library/2010/06/28iphone.html
/pr/library/2010/06/23iphonestatement.html
/pr/library/2010/06/22ipad.html
/pr/library/2010/06/16iphone.html
/pr/library/2010/06/15applestoreapp.html
/pr/library/2010/06/15macmini.html
/pr/library/2010/06/07iphone.html
/pr/library/2010/06/07iads.html
/pr/library/2010/06/07safari.html
To learn more about CSS selectors, read the Jsoup manual and the W3 CSS selector spec.
You need to find some attribute which defines a "press release link". In the case of that site, pointing to "/pr/library/" indicates that it's an Apple press release.
