Selenium no such element exists - java

I am trying to perform a simple checkout flow on my staging website, but I can't seem to find the element. Selenium IDE works, but when I code the same flow in Java I keep getting stuck on the Secure Checkout step.
This is the button element I want to click:
<a class="checkout-anchor click-button display-flex vertical-align-center justify-center" href="javascript:void(0);" onclick="onCheckout()" data-stepid="cartstep04">
<img src="https://release.squareoffnow.com/public/assets/images/checkout/svg/secure.svg" class="secure-pic ls-is-cached lazyloaded" alt="">Secure Checkout</a>
This is the code I have written so far:
package googleTestCases;

import org.openqa.selenium.By;
import org.openqa.selenium.Dimension;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class simpleCartFlow {
    public static void main(String[] args) {
        System.setProperty("webdriver.chrome.driver", "/Users/manavmehta/Desktop/squareoffSeleniumProjects/chromedriver");
        WebDriver driver = new ChromeDriver();
        driver.get("http://release.squareoffnow.com/");
        driver.manage().window().setSize(new Dimension(1440, 789));
        driver.findElement(By.linkText("Products")).click();
        driver.findElement(By.cssSelector(".store-buy-pro-button")).click();
        driver.findElement(By.cssSelector(".pro-twinpack-button")).click();
        driver.findElement(By.cssSelector(".whole-purchase-button")).click();
        driver.findElement(By.cssSelector(".productAvailability > .click-button")).click();
        driver.findElement(By.cssSelector(".giftpackSubmit")).click();
        System.out.println("button not clicked");
        driver.findElement(By.cssSelector(".checkout-anchor")).click();
        System.out.println("button clicked");
        driver.quit();
    }
}
Instead of using a CSS selector I also tried linkText, but it still didn't work.
I keep getting this error

I suspect your issue is due to not waiting long enough for the element to appear. I ran the following successfully using the exact same selectors you had (although with Python instead of Java):
from seleniumbase import BaseCase

class MyTestClass(BaseCase):
    def test_base(self):
        self.open("https://release.squareoffnow.com/")
        self.click_link("Products")
        self.click(".store-buy-pro-button")
        self.click(".pro-twinpack-button")
        self.click(".whole-purchase-button")
        self.click(".productAvailability > .click-button")
        self.click(".giftpackSubmit")
        self.click(".checkout-anchor")
Full disclosure: this particular framework, SeleniumBase, is one that I personally built, and it uses smart-waiting to make sure that elements have fully loaded before taking action. Java's Selenium bindings have something similar: you can explicitly wait for an element to be clickable, so you don't have to sleep for an arbitrary amount of time between steps.
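In the Java bindings, that smart-waiting corresponds to an explicit wait with WebDriverWait and ExpectedConditions. A minimal sketch reusing the question's own selectors (shown with the pre-Selenium-4 constructor that takes a timeout in seconds, matching the era of the question's code; the remaining clicks follow the same pattern):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SimpleCartFlowWithWaits {
    public static void main(String[] args) {
        System.setProperty("webdriver.chrome.driver", "/path/to/chromedriver");
        WebDriver driver = new ChromeDriver();
        // One wait object can be reused for every step; 10 seconds is an
        // arbitrary upper bound, not a fixed sleep.
        WebDriverWait wait = new WebDriverWait(driver, 10);
        try {
            driver.get("http://release.squareoffnow.com/");
            // Block until each element is present, visible, and enabled,
            // then click it, instead of clicking immediately.
            wait.until(ExpectedConditions.elementToBeClickable(By.linkText("Products"))).click();
            wait.until(ExpectedConditions.elementToBeClickable(By.cssSelector(".giftpackSubmit"))).click();
            wait.until(ExpectedConditions.elementToBeClickable(By.cssSelector(".checkout-anchor"))).click();
        } finally {
            driver.quit();
        }
    }
}
```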

Related

Ashot is not taking screenshot of correct element

I am trying to take a screenshot of a table on a webpage. I am providing the same element XPath in the code, but the AShot code is capturing a screenshot of some other location.
I have also tried other screenshot code,
Screenshot screenshot = new AShot().takeScreenshot(driver,driver.findElement(By.xpath(webElementXpath)));
but it was giving me an error, which I was able to fix by reading this link: https://github.com/pazone/ashot/issues/93 and then I used the code below:
WebElement myWebElement = driver.findElement(By.xpath("//center/table/tbody/*"));
Screenshot fpScreenshot = new AShot()
        .coordsProvider(new WebDriverCoordsProvider())
        .takeScreenshot(driver, myWebElement);
ImageIO.write(fpScreenshot.getImage(), "PNG", new File("/Users/sanatkumar/eclipse-workspace/com.ScreenshotUtility/Screenshots/error.png"));
Please help, as this code is giving me a screenshot of some random part of the webpage. I tried to capture other elements as well, but again I did not get the correct screenshot.
Please note my table is not fully visible on the webpage; I have to scroll down manually to view the full table. Do I need to write other code to get a full screenshot of the table?
Also, my website is Angular-based, and I am trying to automate it using Selenium with Java. The reason I am doing this is that in Protractor I didn't find any API like AShot. If anybody knows of one, please let me know.
By adding a shootingStrategy I was able to capture just the form element with the attribute id = "post-form" at the bottom of this page.
From the documentation at https://github.com/pazone/ashot
Different WebDrivers take screenshots differently. Some WebDrivers provide a screenshot of the entire page while others handle the viewport only.
...
There are built-in strategies in ShootingStrategies for different use cases.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import ru.yandex.qatools.ashot.AShot;
import ru.yandex.qatools.ashot.Screenshot;
import ru.yandex.qatools.ashot.shooting.ShootingStrategies;
import javax.imageio.ImageIO;
import java.io.File;

public class Main
{
    public static void main(String[] args) throws Exception
    {
        System.setProperty("webdriver.gecko.driver", "./geckodriver");
        System.setProperty("webdriver.firefox.bin", "/usr/bin/firefox");
        WebDriver driver = new FirefoxDriver();
        driver.get("https://stackoverflow.com/questions/54724963/ashot-is-not-taking-screenshot-of-correct-element");
        Thread.sleep(2000);
        WebElement webElement = driver.findElement(By.id("post-form"));
        Screenshot screenshot = new AShot()
                .shootingStrategy(ShootingStrategies.viewportPasting(100))
                .takeScreenshot(driver, webElement);
        ImageIO.write(screenshot.getImage(), "PNG", new File("/home/dan/ElementScreenshot.png"));
        Thread.sleep(2000);
        driver.quit();
    }
}
Outputs:
This functionality is also possible with Protractor by requiring an NPM module like 'protractor-image-comparison'. If you wanted to capture the related posts in the sidebar, for instance, you could use the following code.
Note: I haven't tested this package with large elements that extend past the browser viewport, so I can't say how it will work on those.
Spec File
describe('simple test', () => {
    it('will save image', async () => {
        await browser.get("https://stackoverflow.com/questions/54724963/ashot-is-not-taking-screenshot-of-correct-element");
        await browser.driver.sleep(10 * 1000);
        let related_questions_sidebar = element(by.className('module sidebar-related'));
        await browser.executeScript('arguments[0].scrollIntoView();', related_questions_sidebar);
        await browser.driver.sleep(3 * 1000);
        // saveElement
        await browser.protractorImageComparison.saveElement(related_questions_sidebar, 'sidebar-image');
    });
});
Conf.js (in your onPrepare):
onPrepare: async () => {
    // await jasmine.getEnv().addReporter(new dbReporter());
    const protractorImageComparison = require('protractor-image-comparison');
    browser.protractorImageComparison = new protractorImageComparison(
        {
            baselineFolder: './screen-compare/baselines/',
            screenshotPath: './screen-compare/screenshots/'
        }
    );
}
Image Saved

How can I extract the text of HTML5 Constraint validation in https://www.phptravels.net/ website using Selenium and Java? [duplicate]

This question already has answers here:
How to handle HTML constraint validation pop-up using Selenium?
(3 answers)
Closed 3 months ago.
I have tried switching to the alert, but it shows a "no such alert found" error.
I have also tried iframes and window handling.
The popup stays for only 1-2 seconds, and I can't use inspect element to get its XPath.
Please check the screenshot attached.
The alert window in https://www.phptravels.net/ to which you are referring is the outcome of the Constraint API's element.setCustomValidity() method.
Note: HTML5 Constraint validation doesn't remove the need for validation on the server side. Even though far fewer invalid form requests are to be expected, invalid ones can still be sent by non-compliant browsers (for instance, browsers without HTML5 and without JavaScript) or by bad guys trying to trick your web application. Therefore, like with HTML4, you need to also validate input constraints on the server side, in a way that is consistent with what is done on the client side.
Solution
To retrieve the text which results out from the element.setCustomValidity() method, you can use the following solution:
Code Block:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class HTML5_input_field_validation_message {
    public static void main(String[] args) {
        System.setProperty("webdriver.gecko.driver", "C:\\Utility\\BrowserDrivers\\geckodriver.exe");
        WebDriver driver = new FirefoxDriver();
        driver.get("https://www.phptravels.net/");
        WebElement checkin = new WebDriverWait(driver, 20).until(ExpectedConditions.elementToBeClickable(By.cssSelector("input.form.input-lg.dpd1[name='checkin']")));
        System.out.println(checkin.getAttribute("validationMessage"));
    }
}
Console Output:
Please fill out this field.
Alternatively, we can perform mouse actions and move the pointer onto the element; once the pointer is hovering over the element, we can read the tooltip text:
Get the locator values of the element and of the tooltip text.
Move the mouse pointer to the element using its locator.
Create a WebElement object for the tooltip text and get its text.
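The hover approach can be sketched as follows. Note that both locators here are illustrative assumptions (the check-in field from the site, and a generic tooltip container); you would substitute the real locators for your page:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.interactions.Actions;

public class TooltipTextSketch {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://www.phptravels.net/");
            // 1. Locate the element that triggers the tooltip (assumed locator).
            WebElement target = driver.findElement(By.cssSelector("input[name='checkin']"));
            // 2. Move the mouse pointer onto it so the tooltip gets rendered.
            new Actions(driver).moveToElement(target).perform();
            // 3. Locate the tooltip element and read its text (assumed locator).
            WebElement tooltip = driver.findElement(By.xpath("//div[@class='tooltip']"));
            System.out.println(tooltip.getText());
        } finally {
            driver.quit();
        }
    }
}
```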

PhantomJS can't retrieve a website's entire html / can't find web elements

I'm trying to use PhantomJS for headless browser testing, and I noticed that simple commands like driver.findElement(By.id("")) were returning an element-not-found exception. I did manage to find the source of the problem: I did a driver.getPageSource(), and noticed that PhantomJS was not retrieving or "seeing" the complete page HTML.
The code below is what I'm trying to run. I am trying to find the search box on the Google home page. Viewing the HTML in the browser, you can see that the id of the search box is "lst-ib". However, upon doing getPageSource(), "lst-ib" is missing from the result. This isn't a huge deal, because I can still access the element by name. But on other web pages, entire chunks of HTML are missing, which results in whole elements being completely omitted from getPageSource(). This makes testing those elements impossible.
import static org.junit.Assert.*;

import java.io.File;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.phantomjs.PhantomJSDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriverService;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class AccessMessageDataRetentionSettings
{
    public WebDriver driver;

    @Before
    public void setup()
    {
        File f = new File("My path to the phantomjs executable");
        System.setProperty("phantomjs.binary.path", f.getAbsolutePath());
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setJavascriptEnabled(true);
        //caps.setCapability("takesScreenshot", false);
        caps.setCapability(PhantomJSDriverService.PHANTOMJS_CLI_ARGS, new String[] {"--ssl-protocol=any", "--ignore-ssl-errors=true", "--web-security=false"});
        driver = new PhantomJSDriver(caps);
        driver.get("http://www.google.com");
        //((JavascriptExecutor) driver).executeScript("var s=window.document.createElement('script'); s.src='path to my javascript file'; window.document.head.appendChild(s);");
        //WebDriverWait wait = new WebDriverWait(driver, 10);
        //driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
        //wait.until(ExpectedConditions.invisibilityOfElementLocated(By.id("lst-ib")));
    }

    @Test
    public void test()
    {
        System.out.println(driver.getPageSource());
        //driver.findElement(By.xpath("//input[@id = 'lst-ib']"));
        driver.findElement(By.id("lst-ib"));
    }

    @After
    public void afterTest()
    {
        driver.quit();
    }
}
Things I've tried: Declaring the webdriver as a PhantomJSDriver, setting different combinations of DesiredCapabilities (including setting the ssl-protocol to tlsv1), executing a javascript shim suggested from https://github.com/facebook/react/issues/945 via javascriptExecutor (which doesn't seem to be doing anything), and trying the various waits available in Selenium.
Is PhantomJS just not compatible with modern websites, or am I completely missing something?
PhantomJS is a headless browser, so many things that you can handle in Firefox, IE, and Chrome will be impossible in it. For example, from its documentation:
Unsupported Features
Support for plugins (such as Flash) was dropped a long time ago. The primary reasons:
Pure headless (no X11) makes it impossible to have windowed plugin
Issues and bugs are hard to debug, due to the proprietary nature of such plugins (binary blobs)
The following features, due to the nature of PhantomJS, are irrelevant:
WebGL would require an OpenGL-capable system. Since the goal of PhantomJS is to become 100% headless and self-contained, this is not acceptable. Using OpenGL emulation via Mesa can overcome the limitation, but then the performance would degrade.
Video and Audio would require shipping a variety of different codecs.
CSS 3-D needs a perspective-correct implementation of texture mapping. It can’t be implemented without a penalty in performance.
Thanks for the answers, guys. PhantomJS ended up being way too outdated and buggy to use. I ended up using Docker containers running Firefox/Chrome images, then ran my tests on those with a RemoteWebDriver. Even though that means they run in actual browsers inside the containers, it was "headless" on my end, and that was good enough for my purposes.
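For completeness, Chrome's built-in headless mode is the usual replacement for PhantomJS today. A minimal sketch, assuming a local chromedriver configured via the webdriver.chrome.driver property:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class HeadlessChromeSketch {
    public static void main(String[] args) {
        System.setProperty("webdriver.chrome.driver", "/path/to/chromedriver");
        // Headless Chrome renders pages with the real Blink engine, so
        // getPageSource() matches what a normal desktop browser sees,
        // avoiding PhantomJS's missing-HTML problem.
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless");
        WebDriver driver = new ChromeDriver(options);
        try {
            driver.get("http://www.google.com");
            System.out.println(driver.getPageSource());
        } finally {
            driver.quit();
        }
    }
}
```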

Unable to click on Facebook "setting" link through selenium webdriver with java

My Java
package com.palash.healthcare;

import java.util.List;
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

public class Login {
    @Test
    @Parameters({"URL","USERNAME","PASSWORD"})
    public static void logindata(String url, String Username, String Password)
    {
        WebDriver driver = new FirefoxDriver();
        driver.get(url);
        driver.manage().window().maximize();
        driver.findElement(By.id("email")).sendKeys(Username);
        driver.findElement(By.id("pass")).sendKeys(Password);
        driver.findElement(By.id("u_0_v")).click();
        driver.findElement(By.id("userNavigationLabel")).click();
        //driver.manage().timeouts().implicitlyWait(2, TimeUnit.SECONDS);
        //driver.findElement(By.xpath("//span[@class='_54nh'][text()='Settings']")).click();
        List<WebElement> All_List = driver.findElements(By.xpath("//ul[@class='_54nf']"));
        for(WebElement li : All_List)
        {
            System.out.println(li.getText());
            if(li.getText().equalsIgnoreCase("Settings"));
            li.click();
        }
    }
}
I am writing a script for the Facebook Settings link in Selenium WebDriver with Java, but I am unable to click on the Settings link; I have tried the code above. Can anybody help? As for the HTML, the Facebook Settings link is right above the "Logout" button.
Not too sure what's going on with that list. I don't think you'll need that but correct me if I'm wrong.
The web app I do work for is really wonky and sometimes you have to do some weird stuff. Try something like:
Actions actions = new Actions(driver);
WebElement settings = driver.findElement(By.xpath("//span[@class='_54nh'][text()='Settings']"));
actions.moveToElement(settings).build().perform();
settings.click();
This kinda breaks the .click() down into smaller steps.
Took me a long while to get .click() commands down. They behave differently on different web applications.
Let me know if that works.
Have you considered just navigating to the settings page after you log in? It's https://www.facebook.com/settings.
driver.findElement(By.id("userNavigationLabel")).click();
Thread.sleep(3000);
driver.findElement(By.xpath("//span[@class='_54nh'][text()='Settings']")).click();
Got the answer: I just needed to reconstruct the XPath.
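For illustration of what "reconstructing" an XPath can mean here (the sturdier expression below is an assumption, not the asker's actual fix): matching one class token plus whitespace-normalized text is typically more resilient than an exact @class comparison, because real pages often add extra class tokens and whitespace. This is checkable with the JDK's built-in XPath engine against a small fragment:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XPathRobustness {
    public static void main(String[] args) throws Exception {
        // A fragment resembling the menu markup: note the extra class token
        // and the surrounding whitespace in the text node.
        String html = "<ul class='_54nf'>"
                + "<li><span class='_54nh extra'> Settings </span></li>"
                + "<li><span class='_54nh'>Logout</span></li>"
                + "</ul>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(html)));
        XPathFactory xpf = XPathFactory.newInstance();

        // Exact-match expression from the question: matches nothing here,
        // because @class is "_54nh extra" and text() is " Settings ".
        NodeList exact = (NodeList) xpf.newXPath().evaluate(
                "//span[@class='_54nh'][text()='Settings']",
                doc, XPathConstants.NODESET);

        // Reconstructed expression: one class token + normalized text.
        NodeList robust = (NodeList) xpf.newXPath().evaluate(
                "//span[contains(concat(' ', @class, ' '), ' _54nh ')"
                        + " and normalize-space()='Settings']",
                doc, XPathConstants.NODESET);

        System.out.println(exact.getLength());  // 0
        System.out.println(robust.getLength()); // 1
    }
}
```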

Java Performing Actions on a Website

Yesterday I posted this: Retrieving Data in Java. I'm curious whether it is possible to make a Java program run while a web browser is open and then have it do things on a website. If I have Facebook open in a browser, could it type the current time into the status box and then click Post? Or, say I make the program take input from the user (perhaps using Scanner?): based on that input, could it load Google, type the input into the search bar, and then click Search?
You can do this by using Selenium:
Selenium automates browsers. That's it. What you do with that power is
entirely up to you. Primarily it is for automating web applications
for testing purposes, but is certainly not limited to just that.
Boring web-based administration tasks can (and should!) also be
automated as well.
This is an example from the documentation page which searches for the term "Cheese" on Google:
package org.openqa.selenium.example;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedCondition;
import org.openqa.selenium.support.ui.WebDriverWait;

public class Selenium2Example {
    public static void main(String[] args) {
        // Create a new instance of the Firefox driver
        // Notice that the remainder of the code relies on the interface,
        // not the implementation.
        WebDriver driver = new FirefoxDriver();

        // And now use this to visit Google
        driver.get("http://www.google.com");
        // Alternatively the same thing can be done like this
        // driver.navigate().to("http://www.google.com");

        // Find the text input element by its name
        WebElement element = driver.findElement(By.name("q"));

        // Enter something to search for
        element.sendKeys("Cheese!");

        // Now submit the form. WebDriver will find the form for us from the element
        element.submit();

        // Check the title of the page
        System.out.println("Page title is: " + driver.getTitle());

        // Google's search is rendered dynamically with JavaScript.
        // Wait for the page to load, timeout after 10 seconds
        (new WebDriverWait(driver, 10)).until(new ExpectedCondition<Boolean>() {
            public Boolean apply(WebDriver d) {
                return d.getTitle().toLowerCase().startsWith("cheese!");
            }
        });

        // Should see: "cheese! - Google Search"
        System.out.println("Page title is: " + driver.getTitle());

        // Close the browser
        driver.quit();
    }
}
