Create a web screenshot with a headless browser in Java

I need to implement a function that takes a screenshot of a web page in a Java backend project. I found that using a headless browser is a good approach, but none of the tools I tried (like JBrowserDriver and aShot) perform well on long pages or pages with many images. I also found that Firefox has a built-in function that can take a screenshot for me. Is there a Java API for this function in headless mode, or is there another way to get better screenshot performance? Thanks a lot.
Here is my code to get a screenshot:
package screenshot;

import com.machinepublishers.jbrowserdriver.JBrowserDriver;
import com.machinepublishers.jbrowserdriver.Settings;
import com.machinepublishers.jbrowserdriver.Timezone;
import org.openqa.selenium.Dimension;
import org.openqa.selenium.OutputType;
import ru.yandex.qatools.ashot.AShot;
import ru.yandex.qatools.ashot.Screenshot;
import ru.yandex.qatools.ashot.shooting.ShootingStrategies;

import javax.imageio.ImageIO;
import java.io.*;

public class JbrowserTest {

    // Prepend a scheme if the URL does not already have one.
    public String checkUrl(String str) {
        if (str.startsWith("http://") || str.startsWith("https://")) {
            return str;
        }
        return "http://" + str;
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        // You can optionally pass a Settings object here,
        // constructed using Settings.Builder
        JBrowserDriver driver = new JBrowserDriver(Settings.builder()
                .timezone(Timezone.ASIA_SHANGHAI)
                .screen(new Dimension(1920, 1080))
                .build());
        String url3 = "http://www.google.com";

        // This will block for the page load and any
        // associated AJAX requests
        driver.get(url3);
        driver.manage().window().maximize();

        // You can get the status code, unlike with other Selenium drivers.
        // It blocks for AJAX requests and page loads after clicks
        // and keyboard events.
        System.out.println(driver.getStatusCode());

        // Returns the page source in its current state, including
        // any DOM updates that occurred after page load
        String string2 = new String(driver.getPageSource().getBytes("utf-8"), "gb2312");
        System.out.println(string2);

        Screenshot screenshot2 = new AShot()
                .shootingStrategy(ShootingStrategies.viewportPasting(100))
                .takeScreenshot(driver);
        try {
            ImageIO.write(screenshot2.getImage(), "PNG",
                    new File("/Users/*******/Desktop/test2.png"));

            byte[] screenshot = driver.getScreenshotAs(OutputType.BYTES);
            System.out.println("the bytes" + screenshot.length);
            String filePath = "/Users/*******/Desktop/test.png";
            File file = new File(filePath);
            FileOutputStream fw = new FileOutputStream(file);
            fw.write(screenshot);
            fw.close();
        } catch (Exception ex) {
            System.out.println("error" + ex);
        }

        // Close the browser. Allows this thread to terminate.
        driver.quit();
    }
}

You don't specify your "performance requirement" exactly. An easy way to take screenshots is by using Selenium with ChromeDriver in headless mode:
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

private void loadWebpage() throws IOException {
    // Init driver
    ChromeOptions options = new ChromeOptions();
    options.setHeadless(true);
    options.addArguments("--window-size=1200,600", "--log-level=3");
    WebDriver driver = new ChromeDriver(options);

    // Load your website & wait until loaded with WebDriverWait

    takeScreenshot(driver, new File("outputFile.png"));
}

public static void takeScreenshot(WebDriver driver, File screenshotFile) throws IOException {
    File scrFile = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
    Files.copy(scrFile.toPath(), screenshotFile.toPath(), StandardCopyOption.REPLACE_EXISTING);
}
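Since the question specifically mentions Firefox's built-in screenshot feature: if you can move to Selenium 4 with geckodriver, FirefoxDriver exposes a full-page screenshot command that renders the whole document in one pass rather than stitching viewports. A minimal sketch, assuming Selenium 4 on the classpath and geckodriver on the PATH (the URL and output path are placeholders):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;

public class FirefoxFullPageScreenshot {
    public static void main(String[] args) throws IOException {
        FirefoxOptions options = new FirefoxOptions();
        options.addArguments("-headless");                 // run Firefox headless

        FirefoxDriver driver = new FirefoxDriver(options); // concrete type is needed for getFullPageScreenshotAs
        try {
            driver.get("https://www.example.com");
            // Firefox renders the whole document in one pass instead of scrolling and stitching.
            File shot = driver.getFullPageScreenshotAs(OutputType.FILE);
            Files.copy(shot.toPath(), new File("fullpage.png").toPath(),
                    StandardCopyOption.REPLACE_EXISTING);
        } finally {
            driver.quit();
        }
    }
}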

Related

Taking screenshots from multiple URLs with aShot and Selenium

I am trying to automate a test case where I have to take a screenshot of a particular screen that exists on several different websites. Specifically, I am testing whether a particular checkbox is aligned correctly. Below is my script; I am using aShot to take the screenshots. The script logs into the three systems and clicks on the link I want it to click, but I end up with only a single screenshot from the last URL instead of a screenshot from every URL. Please help me understand how I can make aShot take a screenshot for every website instead of what it is doing right now. Essentially all the steps are iterated except taking the screenshot, and I want the script to iterate through the screenshots as well.
import java.io.File;
import java.io.IOException;

import javax.imageio.ImageIO;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.*;

import ru.yandex.qatools.ashot.AShot;
import ru.yandex.qatools.ashot.Screenshot;
import ru.yandex.qatools.ashot.shooting.ShootingStrategies;

public class checkboxAlignment {

    String driverPath = "C:\\Users\\xxx\\Desktop\\Work\\chromedriver.exe";
    public WebDriver driver;
    public String expected = null;
    public String actual = null;

    @BeforeTest
    public void launchBrowser() {
        System.out.println("launching chrome browser");
        System.setProperty("webdriver.chrome.driver", driverPath);
        driver = new ChromeDriver();
    }

    @Test(dataProvider = "URLprovider")
    private void notePrice(String url) throws IOException {
        driver.get(url);
        System.out.println(driver.getCurrentUrl());
        WebElement email = driver.findElement(By.xpath("//input[@id='Email']"));
        WebElement password = driver.findElement(By.xpath("//input[@id='PWD']"));
        WebElement submit = driver.findElement(By.xpath("//button[@type='submit']"));
        email.sendKeys("xxx@xxx.com");
        password.sendKeys("xxx");
        submit.click();
        System.out.println(driver.getTitle());
        driver.manage().window().maximize();

        // click on the PI tab
        driver.findElement(By.id("IDpi")).click();

        // This does not iterate; only one screenshot is taken by aShot
        Screenshot fpScreenshot = new AShot()
                .shootingStrategy(ShootingStrategies.viewportPasting(1000))
                .takeScreenshot(driver);
        ImageIO.write(fpScreenshot.getImage(), "PNG",
                new File("C://Users//dir//eclipse-workspace//someDir//screenshots//checkbox.jpg"));
    }

    @DataProvider(name = "URLprovider")
    private Object[][] getURLs() {
        return new Object[][] {
                { "http://www.someURL.com/A" },
                { "http://www.someurl.com/B" },
                { "http://www.someurl.com/C" }
        };
    }
}
You are saving all the screenshots to the same file, checkbox.jpg. That is why your previous screenshots are replaced by the last one. Give the file a different name for every screenshot, and save the screenshots with a .png extension, since that is the actual file type.
Try this for saving the image (the current URL is sanitized so that it forms a valid file name):
ImageIO.write(fpScreenshot.getImage(), "PNG",
        new File("C://Users//dir//eclipse-workspace//someDir//screenshots//checkbox-"
                + driver.getCurrentUrl().replaceAll("[^a-zA-Z0-9]", "_") + ".png"));
I'm doing something like this:
@Step("Capture the page for the storage")
protected void capturePageToVault(String pageName, String url, int scrollTime) throws IOException {
    open(url);
    expected = capturePage(scrollTime);
    ImageIO.write(expected.getImage(), "png", expectedImg(pageName));
    attach = new FileInputStream(expectedImg(pageName));
    Allure.addAttachment("Exemplar", "image/png", attach, ".png");
    attach.close();
}
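For context, capturePage and expectedImg are the answerer's own helpers and are not shown above. A rough sketch of what they might look like, assuming aShot for the capture, a WebDriver field named driver, and a local screenshots directory (all of these are illustrative, not the answerer's actual code):

// Hypothetical helpers matching the fragment above; names and paths are illustrative.
protected Screenshot capturePage(int scrollTime) {
    // Scroll-and-stitch the full page; scrollTime is the per-viewport scroll timeout in ms.
    return new AShot()
            .shootingStrategy(ShootingStrategies.viewportPasting(scrollTime))
            .takeScreenshot(driver);
}

protected File expectedImg(String pageName) {
    // Where the baseline image for this page is stored.
    return new File("screenshots/expected/" + pageName + ".png");
}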

Selenium WebDriver throwing TimeoutException while invoking getScreenshotAs()

This is my code.
public static void test1() throws IOException {
    System.setProperty("webdriver.chrome.driver", "data/chromedriver.exe");
    drive = new ChromeDriver();
    drive.manage().timeouts().pageLoadTimeout(30, TimeUnit.SECONDS);
    try {
        drive.get("http://youtube.com");
    } catch (TimeoutException e) {
        printSS();
    }
}

public static void printSS() throws IOException {
    String path = "logs/ss/";
    File scrFile = ((TakesScreenshot) drive).getScreenshotAs(OutputType.FILE);
    FileUtils.copyFile(scrFile, new File(path + "asdasdas" + ".jpg"));
}
Every time driver.get() throws a TimeoutException, I want to take a screenshot of the browser.
But when the TimeoutException is thrown, getScreenshotAs() in printSS() doesn't take a screenshot, because it throws another TimeoutException.
Why does getScreenshotAs() throw a TimeoutException, and how can I take a screenshot of the browser?
P.S.: Increasing the pageLoadTimeout is not the answer I want.
While working with Selenium 3.x, ChromeDriver 2.36 and Chrome 65.x, you need to mention the relative path of the location (with respect to your project) where you intend to store the screenshot.
I took your code and made a few minor modifications as follows:
Declared the driver as a static WebDriver instance and added the @Test annotation.
Reduced pageLoadTimeout to 2 seconds to purposely raise the TimeoutException.
Changed the location of String path to a sub-directory within the project scope as follows:
String path = "./ScreenShots/";
Added a log:
System.out.println("Screenshot Taken");
Here is the code block:
package captureScreenShot;

import java.io.File;
import java.io.IOException;
import java.util.concurrent.TimeUnit;

import org.apache.commons.io.FileUtils;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.TimeoutException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.Test;

public class q49319748_captureScreenshot {

    public static WebDriver drive;

    @Test
    public static void test1() throws IOException {
        System.setProperty("webdriver.chrome.driver", "C:\\Utility\\BrowserDrivers\\chromedriver.exe");
        drive = new ChromeDriver();
        drive.manage().timeouts().pageLoadTimeout(2, TimeUnit.SECONDS);
        try {
            drive.get("http://youtube.com");
        } catch (TimeoutException e) {
            printSS();
        }
    }

    public static void printSS() throws IOException {
        String path = "./ScreenShots/";
        File scrFile = ((TakesScreenshot) drive).getScreenshotAs(OutputType.FILE);
        FileUtils.copyFile(scrFile, new File(path + "asdasdas" + ".jpg"));
        System.out.println("Screenshot Taken");
    }
}
Console Output :
[TestNG] Running:
C:\Users\username\AppData\Local\Temp\testng-eclipse--153679036\testng-customsuite.xml
Starting ChromeDriver 2.36.540470 (e522d04694c7ebea4ba8821272dbef4f9b818c91) on port 42798
Only local connections are allowed.
Mar 16, 2018 5:37:59 PM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Detected dialect: OSS
Screenshot Taken
PASSED: test1
Reference
You can find a detailed discussion in How to take screenshot with Selenium WebDriver
The problem is that while Selenium waits for the page to finish loading, it cannot accept any other command. That is why it also throws a TimeoutException from the exception handler when you try to take the screenshot.
The only option I see is to take the screenshot not through Selenium but by other means that capture the entire desktop. I've written such a thing in C#, and I'm pretty sure you can find a way to do it in Java too.
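On the Java side, one way to capture the whole desktop without going through Selenium is java.awt.Robot. A minimal sketch (the output file name is a placeholder; it needs a visible desktop session, so it won't help with headless runs):

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;

import javax.imageio.ImageIO;

public class DesktopScreenshot {
    public static void main(String[] args) throws Exception {
        // Capture everything currently visible on the primary screen.
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        BufferedImage capture = new Robot().createScreenCapture(screen);
        ImageIO.write(capture, "png", new File("desktop.png"));
    }
}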

Iterate through all links of a website using Selenium

I'm new to Selenium and I would like to download all the pdf, ppt(x) and doc(x) files from a website. I have written the following code, but I'm not sure how to follow the inner links:
import java.io.*;
import java.util.ArrayList;
import java.util.List;

import org.apache.commons.io.FileUtils;
import org.openqa.selenium.By;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class WebScraper {

    String loginPage = "https://blablah/login";
    static String userName = "11";
    static String password = "11";
    static String mainPage = "https://blahblah";
    public WebDriver driver = new FirefoxDriver();
    ArrayList<String> visitedLinks = new ArrayList<>();

    public static void main(String[] args) throws IOException {
        System.setProperty("webdriver.gecko.driver", "E:\\geckodriver.exe");
        WebScraper webScraper = new WebScraper();
        webScraper.openTestSite();
        webScraper.login(userName, password);
        webScraper.getText(mainPage);
        webScraper.saveScreenshot();
        webScraper.closeBrowser();
    }

    /**
     * Open the test website.
     */
    public void openTestSite() {
        driver.navigate().to(loginPage);
    }

    /**
     * @param username
     * @param Password Logs into the website by entering the provided username and password
     */
    public void login(String username, String Password) {
        WebElement userName_editbox = driver.findElement(By.id("IDToken1"));
        WebElement password_editbox = driver.findElement(By.id("IDToken2"));
        WebElement submit_button = driver.findElement(By.name("Login.Submit"));
        userName_editbox.sendKeys(username);
        password_editbox.sendKeys(Password);
        submit_button.click();
    }

    /**
     * Grabs the status text and saves it into a status.txt file
     *
     * @throws IOException
     */
    public void getText(String website) throws IOException {
        driver.navigate().to(website);
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        List<WebElement> allLinks = driver.findElements(By.tagName("a"));
        System.out.println("Total no of links Available: " + allLinks.size());
        for (int i = 0; i < allLinks.size(); i++) {
            String fileAddress = allLinks.get(i).getAttribute("href");
            System.out.println(allLinks.get(i).getAttribute("href"));
            if (fileAddress.contains("download")) {
                driver.get(fileAddress);
            } else {
                // getText(allLinks.get(i).getAttribute("href"));
            }
        }
    }

    /**
     * Saves the screenshot
     *
     * @throws IOException
     */
    public void saveScreenshot() throws IOException {
        File scrFile = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        FileUtils.copyFile(scrFile, new File("screenshot.png"));
    }

    public void closeBrowser() {
        driver.close();
    }
}
I have an if clause which checks whether the current link is a downloadable file (an address containing the word "download"). If it is, I download it; if not, what should I do? That part is my problem. I tried to implement a recursive function to retrieve the nested links and repeat the steps for each of them, but without success.
In the meantime, the first link found when giving https://blahblah as the input is https://blahblah/#, which refers to the same page as https://blahblah. That can also cause a problem, but currently I'm stuck on the other issue, namely the implementation of the recursive function. Could you please help me?
You are not far off. To answer your question: grab all the links into a list of elements, then iterate, click and wait. In C# it looks something like this:
IList<IWebElement> listOfLinks = _driver.FindElements(By.XPath("//a"));
foreach (var link in listOfLinks)
{
    if (link.GetAttribute("href").Contains("download"))
    {
        link.Click();
        WaitForSecs(); //Thread.Sleep(1000)
    }
}
JAVA
List<WebElement> listOfLinks = webDriver.findElements(By.xpath("//a"));
for (WebElement link : listOfLinks) {
    if (link.getAttribute("href").contains("download")) {
        link.click();
        //WaitForSecs(); //Thread.Sleep(1000)
    }
}
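The question also asks about recursing into nested links rather than only scanning the top page. A rough sketch of a depth-limited recursive crawl with a visited set, collecting the href values first so recursion doesn't invalidate the WebElements (the class name, driver field and depth limit are illustrative):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class LinkCrawler {

    private final WebDriver driver;
    private final Set<String> visited = new HashSet<>();

    public LinkCrawler(WebDriver driver) {
        this.driver = driver;
    }

    /** Visits url, downloads matching links, and recurses into the rest up to maxDepth. */
    public void crawl(String url, int maxDepth) {
        if (maxDepth < 0 || !visited.add(url)) {
            return; // already seen, or too deep
        }
        driver.get(url);

        // Copy the hrefs first so recursion does not invalidate the WebElements.
        List<String> hrefs = new ArrayList<>();
        for (WebElement link : driver.findElements(By.tagName("a"))) {
            String href = link.getAttribute("href");
            if (href != null && !href.endsWith("#")) {
                hrefs.add(href);
            }
        }

        for (String href : hrefs) {
            if (href.contains("download")) {
                driver.get(href);          // triggers the file download
            } else {
                crawl(href, maxDepth - 1); // follow the inner link
            }
        }
    }
}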
One option is to embed Groovy in your Java code if you want to search depth-first. When HTTPBuilder parses the page, it gives you an XML-like document tree, and you can traverse it as deep as you like using GPath in Groovy. Your test.groovy looks like this:
@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7')
import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.Method.GET
import static groovyx.net.http.ContentType.JSON
import groovy.json.*
import org.cyberneko.html.parsers.SAXParser
import groovy.util.XmlSlurper
import groovy.json.JsonSlurper

urlValue = "http://yoururl.com"
def http = new HTTPBuilder(urlValue)
// parses the page and provides an XML tree; it even handles malformed HTML
def parsedText = http.get([:])
// number of <a> tags; "**" traverses depth-first
aCount = parsedText."**".findAll { it.name() == 'a' }.size()
Then you just call test.groovy from Java like this:
static void runWithGroovyShell() throws Exception {
    new GroovyShell().parse(new File("test.groovy")).invokeMethod("hello_world", null);
}
More info on parsing HTML with Groovy.
Addition: when you evaluate Groovy within Java and want to access Groovy variables from the Java environment, use Groovy bindings; have a look here.
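A rough sketch of what that binding access can look like from Java, reusing the variable names from the test.groovy above (evaluate is used here so the script body runs directly; this is an illustration, not the linked article's code):

import java.io.File;

import groovy.lang.Binding;
import groovy.lang.GroovyShell;

public class RunGroovyWithBinding {
    public static void main(String[] args) throws Exception {
        Binding binding = new Binding();
        binding.setVariable("urlValue", "http://yoururl.com"); // push a Java value into the script

        GroovyShell shell = new GroovyShell(binding);
        shell.evaluate(new File("test.groovy"));               // runs the script body

        // Read back a variable the script assigned (aCount in the example above).
        Object aCount = binding.getVariable("aCount");
        System.out.println("Number of <a> tags: " + aCount);
    }
}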

Selenium get .har file

I have a two-page application:
/login
/profile
I want to get a .har file for the page /profile.
When I go to the page /login, a cookie is created with the key connect.sid and the value "example value". This cookie is not yet active.
I added the cookie with an active connect.sid:
WebDriver webDriver = getDriver();
webDriver.get(LOGIN_PAGE);
webDriver.manage().addCookie(connectsSId);
This does not work, because after the page loads /login creates new cookies.
I also tried this code:
WebDriver webDriver = getDriver();
webDriver.get(PROFILE_PAGE);
webDriver.manage().deleteAllCookies();
webDriver.manage().addCookie(connectsSId);
This does not work either. The cookies were added, but it seems to be too late.
WebDriver webDriver = getDriver();
LoginPage loginPage = new LoginPage(getDriver());
LandingPage landingPage = loginPage.login();
landingPage.openProfilePage();
This code created a .har file for the page /login.
For some reason, the file is only created after the first call to a page. I cannot solve this problem.
Use PhantomJS with BrowserMob Proxy. PhantomJS handles JavaScript-enabled pages for us, and the following code works for HTTPS web addresses, too.
Place phantomjs.exe in the C drive and you will get the HAR-Information.har file in the C drive as well.
Make sure you DO NOT put a '/' at the end of the URL, like
driver.get("https://www.google.co.in/")
It should be
driver.get("https://www.google.co.in");
Otherwise, it won't work.
package makemyhar;

import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;

import net.lightbody.bmp.BrowserMobProxy;
import net.lightbody.bmp.BrowserMobProxyServer;
import net.lightbody.bmp.core.har.Har;
import net.lightbody.bmp.proxy.CaptureType;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriverService;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;

public class MakeMyHAR {
    public static void main(String[] args) throws IOException, InterruptedException {
        // BrowserMobProxy
        BrowserMobProxy server = new BrowserMobProxyServer();
        server.start(0);
        server.setHarCaptureTypes(CaptureType.getAllContentCaptureTypes());
        server.enableHarCaptureTypes(CaptureType.REQUEST_CONTENT, CaptureType.RESPONSE_CONTENT);
        server.newHar("Google");

        // PHANTOMJS_CLI_ARGS
        ArrayList<String> cliArgsCap = new ArrayList<>();
        cliArgsCap.add("--proxy=localhost:" + server.getPort());
        cliArgsCap.add("--ignore-ssl-errors=yes");

        // DesiredCapabilities
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability(CapabilityType.ACCEPT_SSL_CERTS, true);
        capabilities.setCapability(CapabilityType.SUPPORTS_JAVASCRIPT, true);
        capabilities.setCapability(PhantomJSDriverService.PHANTOMJS_CLI_ARGS, cliArgsCap);
        capabilities.setCapability(PhantomJSDriverService.PHANTOMJS_EXECUTABLE_PATH_PROPERTY, "C:\\phantomjs.exe");

        // WebDriver
        WebDriver driver = new PhantomJSDriver(capabilities);
        driver.get("https://www.google.co.in");

        // HAR
        Har har = server.getHar();
        FileOutputStream fos = new FileOutputStream("C:\\HAR-Information.har");
        har.writeTo(fos);
        server.stop();
        driver.close();
    }
}
Set these Firefox DevTools preferences on your profile in your Selenium code:
profile.setPreference("devtools.netmonitor.har.enableAutoExportToFile", true);
profile.setPreference("devtools.netmonitor.har.defaultLogDir", String.valueOf(dir));
profile.setPreference("devtools.netmonitor.har.defaultFileName", "network-log-file-%Y-%m-%d-%H-%M-%S");
and then open the Network Monitor with a keyboard shortcut:
Actions keyAction = new Actions(driver);
keyAction.keyDown(Keys.LEFT_CONTROL).keyDown(Keys.LEFT_SHIFT).sendKeys("q").keyUp(Keys.LEFT_CONTROL).keyUp(Keys.LEFT_SHIFT).perform();
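The profile object in that fragment isn't shown. A minimal sketch of wiring the same preferences into a FirefoxDriver via Selenium's FirefoxOptions (the output directory is a placeholder, and whether the export flushes before quit() depends on your Firefox version, so treat this as a starting point):

import java.nio.file.Paths;

import org.openqa.selenium.Keys;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.interactions.Actions;

public class FirefoxHarAutoExport {
    public static void main(String[] args) {
        String dir = Paths.get("har-logs").toAbsolutePath().toString(); // placeholder output directory

        FirefoxOptions options = new FirefoxOptions();
        options.addPreference("devtools.netmonitor.har.enableAutoExportToFile", true);
        options.addPreference("devtools.netmonitor.har.defaultLogDir", dir);
        options.addPreference("devtools.netmonitor.har.defaultFileName", "network-log-file-%Y-%m-%d-%H-%M-%S");

        FirefoxDriver driver = new FirefoxDriver(options);
        try {
            driver.get("https://www.example.com");

            // Open the Network Monitor so the HAR exporter is active (Ctrl+Shift+Q, as in the snippet above).
            new Actions(driver)
                    .keyDown(Keys.LEFT_CONTROL).keyDown(Keys.LEFT_SHIFT)
                    .sendKeys("q")
                    .keyUp(Keys.LEFT_CONTROL).keyUp(Keys.LEFT_SHIFT)
                    .perform();

            // Navigate to the pages whose traffic you want exported before quitting.
        } finally {
            driver.quit();
        }
    }
}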
You can use BrowserMob Proxy to capture all the request and response data.
See here
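A rough sketch of that approach with ChromeDriver instead of the PhantomJS example above, assuming the browsermob-core dependency is on the classpath (the URL and output path are placeholders):

import java.io.File;

import net.lightbody.bmp.BrowserMobProxy;
import net.lightbody.bmp.BrowserMobProxyServer;
import net.lightbody.bmp.client.ClientUtil;
import net.lightbody.bmp.core.har.Har;
import net.lightbody.bmp.proxy.CaptureType;

import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.CapabilityType;

public class ChromeHarCapture {
    public static void main(String[] args) throws Exception {
        // Start the capturing proxy on a random free port.
        BrowserMobProxy proxy = new BrowserMobProxyServer();
        proxy.start(0);
        proxy.enableHarCaptureTypes(CaptureType.REQUEST_CONTENT, CaptureType.RESPONSE_CONTENT);

        // Point Chrome at the proxy.
        Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);
        ChromeOptions options = new ChromeOptions();
        options.setCapability(CapabilityType.PROXY, seleniumProxy);

        WebDriver driver = new ChromeDriver(options);
        try {
            proxy.newHar("profile");                    // start a fresh HAR before navigating
            driver.get("https://www.example.com/profile");

            Har har = proxy.getHar();
            har.writeTo(new File("profile.har"));
        } finally {
            driver.quit();
            proxy.stop();
        }
    }
}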
I also tried to get the HAR file using a proxy like BrowserMob Proxy, and I did a lot of research because the file I received was always empty. What I did instead was enable the browser performance log.
Note that this will only work with ChromeDriver.
This is my driver class (in Python):
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium import webdriver
from lib.config import config


class Driver:
    global performance_log

    capabilities = DesiredCapabilities.CHROME
    capabilities['loggingPrefs'] = {'performance': 'ALL'}

    chrome_options = webdriver.ChromeOptions()
    chrome_options.add_argument('--no-sandbox')
    chrome_options.add_argument('--disable-dev-shm-usage')
    chrome_options.add_argument('--headless')

    mobile_emulation = {"deviceName": "Nexus 5"}
    if config.Env().is_mobile():
        chrome_options.add_experimental_option(
            "mobileEmulation", mobile_emulation)
    else:
        pass

    chrome_options.add_experimental_option(
        'perfLoggingPrefs', {"enablePage": True})

    def __init__(self):
        self.instance = webdriver.Chrome(
            executable_path='/usr/local/bin/chromedriver', options=self.chrome_options)

    def navigate(self, url):
        if isinstance(url, str):
            self.instance.get(url)
            self.performance_log = self.instance.get_log('performance')
        else:
            raise TypeError("URL must be a string.")
The amount of information in the output is huge, so you'll have to filter the raw data and keep only the network request and response objects.
import json
import secrets


def digest_log_data(performance_log):
    # write all raw data to a file
    with open('data.json', 'w', encoding='utf-8') as outfile:
        json.dump(performance_log, outfile)
    # open the file and read it with encoding='utf-8'
    with open('data.json', encoding='utf-8') as data_file:
        data = json.loads(data_file.read())
    return data


def digest_raw_data(data, mongo_object={}):
    for idx, val in enumerate(data):
        data_object = json.loads(data[idx]['message'])
        if (data_object['message']['method'] == 'Network.responseReceived') or \
                (data_object['message']['method'] == 'Network.requestWillBeSent'):
            mongo_object[secrets.token_hex(30)] = data_object
        else:
            pass
We chose to push this data into a MongoDB, which is later analysed by an ETL and pushed into a Redshift database to create statistics.
I hope this is what you are looking for.
The way I'm running the script is:
import codecs
from pprint import pprint
import urllib

from lib import mongo_client
from lib.test_data import test_data as data
from jsonpath_ng.ext import parse
from IPython import embed
from lib.output_data import process_output_data as output_data
from lib.config import config
from lib import driver

browser = driver.Driver()

# get the list of urls which we need to navigate
urls = data.url_list()

for url in urls:
    browser.navigate(config.Env().base_url() + url)
    print('Visiting ' + url)
    # get performance log
    performance_log = browser.performance_log
    # digest the performance log
    data = output_data.digest_log_data(performance_log)
    # initiate an empty dict
    mongo_object = {}
    # prepare the data for the mongo document
    output_data.digest_raw_data(data, mongo_object)
    # load data into the mongo db
    mongo_client.populate_mongo(mongo_object)

browser.instance.quit()
My main source was this post, which I adjusted to my needs:
https://www.reddit.com/r/Python/comments/97m9iq/headless_browsers_export_to_har/
Thanks
You can do it in the simplest way with Selenide + Java + JS.
Import java.nio.file.Files and java.nio.file.Paths in your class, then create this function:
public static void getHar() throws IOException {
    open("http://you-task.com");
    String scriptGetInfo = "performance.setResourceTimingBufferSize(1000000);" +
            "return performance.getEntriesByType('resource').map(JSON.stringify).join('\\n')";
    String har = executeJavaScript(scriptGetInfo);
    Files.write(Paths.get("log.har"), har.getBytes());
}
It saves log.har in the root of your project. Just call this function wherever you want to save the HAR file. Note that, strictly speaking, it writes the browser's Resource Timing entries as JSON lines rather than a full HAR document.

Unable to start Internet Explorer or Chrome in Selenium WebDriver (Java)

I am trying to start up an IE instance using WebDriver. I can't figure out why I'm receiving these errors; my code appears to be identical to every example I can find on the web.
I'm using Java and TestNG.
Here is the code:
import java.io.File;

import org.openqa.selenium.ie.InternetExplorerDriver;
import org.openqa.selenium.WebDriver;

public class Tests {
    File file = new File("C:\\selenium\\IEDriverServer.exe");
    System.setProperty("webdriver.ie.driver", file.getAbsolutePath());
    WebDriver driver = new InternetExplorerDriver();
}
The following errors are displayed; all of them are on the "System.setProperty" line.
Multiple markers at this line
- Syntax error on token ""webdriver.ie.driver"", invalid
FormalParameterList
- Syntax error on token(s), misplaced construct(s)
- Syntax error on tokens, FormalParameter expected instead
Please note that I have the exact same problem if I try to use Chrome with this code:
File file = new File("C:/selenium/chromedriver.exe");
System.setProperty("webdriver.chrome.driver", file.getAbsolutePath());
WebDriver driver = new ChromeDriver();
You are running your code directly inside the class body instead of inside a method. Convert it to something like:
import java.io.File;

import org.openqa.selenium.ie.InternetExplorerDriver;
import org.openqa.selenium.WebDriver;

public class Tests {
    public static void main(String[] args) { // <-- you need a method!
        File file = new File("C:\\selenium\\IEDriverServer.exe");
        System.setProperty("webdriver.ie.driver", file.getAbsolutePath());
        WebDriver driver = new InternetExplorerDriver();
    }
}
Try this:
I'm using "mvn test" to launch the test process, so the path of the IE driver may differ:
File file = new File("classes/tools/IEDriverServer.exe");
Use the IE driver with capabilities:
DesiredCapabilities caps = DesiredCapabilities.internetExplorer();
System.setProperty("webdriver.ie.driver", file.getAbsolutePath());
caps.setCapability("ignoreZoomSetting", true);
caps.setCapability("nativeEvents", false);
WebDriver driver = new InternetExplorerDriver(caps);
It may help you :)
Actually, on an updated Eclipse version, you might have to use @SuppressWarnings:
package Login;

import java.io.File;

import org.openqa.selenium.ie.InternetExplorerDriver;
import org.openqa.selenium.WebDriver;

public class Login {
    public static void main(String[] args) {
        File file = new File("C:\\Users\\IEDRiverServer.exe");
        System.setProperty("webdriver.ie.driver", file.getAbsolutePath());
        @SuppressWarnings("unused")
        WebDriver driver = new InternetExplorerDriver();
    }
}
Simple example:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

public class IE {

    /**
     * @param args
     */
    public static void main(String[] args) {
        System.setProperty("webdriver.ie.driver",
                "D:\\Sathish\\soft\\SELENIUM\\LatestDownloads\\selenium\\IEDriverServer.exe");
        WebDriver driver = new InternetExplorerDriver();
        driver.get("http://www.google.com");
        driver.findElement(By.id("gbqfq")).sendKeys("abc");
        driver.close();
    }
}
Follow the process below.
import org.openqa.selenium.ie.InternetExplorerDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

if (browserName.equalsIgnoreCase("InternetExplorer")) {
    DesiredCapabilities caps = DesiredCapabilities.internetExplorer();
    System.setProperty("webdriver.ie.driver", "drivers/IEDriverServer.exe");
    caps.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true);
    caps.setCapability("nativeEvents", false);
    browser = new InternetExplorerDriver(caps);
}
Afterwards, in IE, from the Tools menu (or the gear icon in the toolbar in later versions), select "Internet options." Go to the Security tab. At the bottom of the dialog for each zone you should see a check box labeled "Enable Protected Mode." Set that check box to the same value, either checked or unchecked, for every zone.
I have applied the same thing at my end, and it works fine.
