With the script below I validate the links on a page. Here is the twist: I also need to click each link and validate the links on that page, but exclude any links that were already validated on the first page. I can get as far as clicking a link and validating the links on that page, but I don't know what code to use to exclude the ones that were already validated. Please help if you can. Thanks.
package siteLinks;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.Iterator;
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
public class LinksValidation {
private static WebDriver driver = null;
public static void main(String[] args) {
// TODO Auto-generated method stub
String homePage = "http://www.safeway.com/Shopstores/Site-Map.page";
String url = "http://www.safeway.com/Shopstores/Site-Map.page";
HttpURLConnection huc = null;
int respCode = 200;
System.setProperty("webdriver.chrome.driver", "C:\\Users\\aaarb00\\Desktop\\Quotients\\lib\\chromedriver.exe");
driver = new ChromeDriver();
driver.manage().window().maximize();
driver.get(homePage);
List<WebElement> links = driver.findElements(By.tagName("a"));
Iterator<WebElement> it = links.iterator();
while(it.hasNext()){
url = it.next().getAttribute("href");
System.out.println(url);
if(url == null || url.isEmpty()){
System.out.println("URL is either not configured for anchor tag or it is empty");
continue;
}
try {
huc = (HttpURLConnection)(new URL(url).openConnection());
huc.setRequestMethod("HEAD");
huc.connect();
respCode = huc.getResponseCode();
if(respCode >= 400){
System.out.println(url+" is a broken link");
}
else{
System.out.println(url+" is a valid link");
}
} catch (MalformedURLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
driver.quit();
}
}
You can store the links you've already visited in an ArrayList and check whether that ArrayList contains the link already.
ArrayList<String> visitedLinks = new ArrayList<String>();
List<WebElement> elements = driver.findElements(By.tagName("a"));
for(WebElement element : elements) {
if(visitedLinks.contains(element.getAttribute("href"))) {
System.out.println("Link already checked. Not checking.");
} else {
visitedLinks.add(element.getAttribute("href"));
// Your link checking code
}
}
I'm not sure how you're collecting the pages whose links you check for a 200 OK response, but you should define the URL of each page whose links you want to check and then loop through those URLs. Otherwise you're likely to leave the site you're checking and wander out onto the wider internet, and your test will probably never finish. A sketch combining both ideas follows.
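Putting the two together, here is a minimal sketch (assuming the same driver setup as in the script above, a hypothetical checkLink(String) helper that wraps the HEAD-request code, and imports for java.util.Set, HashSet and ArrayList):

// Validate links on the home page, then on each linked page, skipping any URL
// that has already been validated. checkLink(url) is a hypothetical helper that
// wraps the HttpURLConnection HEAD-request code from the question.
Set<String> validated = new HashSet<String>();
driver.get(homePage);
List<String> firstLevelPages = new ArrayList<String>();
for (WebElement a : driver.findElements(By.tagName("a"))) {
    String href = a.getAttribute("href");
    if (href == null || href.isEmpty()) continue;
    if (validated.add(href)) {      // add() returns false if the URL was seen before
        checkLink(href);            // your existing HEAD-request validation
    }
    if (href.startsWith("http://www.safeway.com")) {
        firstLevelPages.add(href);  // only follow links that stay on the site
    }
}
for (String pageUrl : firstLevelPages) {
    driver.get(pageUrl);
    for (WebElement a : driver.findElements(By.tagName("a"))) {
        String href = a.getAttribute("href");
        if (href == null || href.isEmpty()) continue;
        if (validated.add(href)) {  // anything validated on the first page is skipped here
            checkLink(href);
        }
    }
}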
Related
I have the code below to fetch the pages inside a given URL, but I am not sure how to display them in a tree-like structure.
public class BasicWebCrawler {
private HashSet<String> links;
public BasicWebCrawler() {
links = new HashSet<String>();
}
public void getPageLinks(String URL) {
//4. Check if you have already crawled the URLs
//(we are intentionally not checking for duplicate content in this example)
if (!links.contains(URL)) {
try {
//4. (i) If not add it to the index
if (links.add(URL)) {
System.out.println(URL);
}
//2. Fetch the HTML code
Document document = Jsoup.connect(URL).get();
//3. Parse the HTML to extract links to other URLs
Elements linksOnPage = document.select("a[href^=\"" +URL+ "\"]");
//5. For each extracted URL... go back to Step 4.
for (Element page : linksOnPage) {
getPageLinks(page.attr("abs:href"));
}
} catch (IOException e) {
System.err.println("For '" + URL + "': " + e.getMessage());
}
}
}
public static void main(String[] args) {
//1. Pick a URL from the frontier
new BasicWebCrawler().getPageLinks("https://www.wikipedia.com/");
}
}
Okay, I think I managed to do what you asked. The recursion finishes when every link on the site has been checked or a page has no links, but on the open internet that never really happens; it's funny where you can end up from one site just by following the first unchecked link. (A same-domain restriction, sketched after the code, keeps it bounded.)
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
import java.io.IOException;
import java.util.HashSet;
public class BasicWebCrawler {
private HashSet<String> links;
public BasicWebCrawler() {
links = new HashSet<String>();
}
public void getPageLinks(String URL, int level) {
//4. Check if you have already crawled the URLs
//(we are intentionally not checking for duplicate content in this example)
if (!links.contains(URL)) {
try {
//4. (i) If not add it to the index
if (links.add(URL)) {
for(int i = 0; i < level; i++) {
System.out.print("-");
}
System.out.println(URL);
}
//2. Fetch the HTML code
Document document = Jsoup.connect(URL).get();
//3. Parse the HTML to extract links to other URLs
Elements linksOnPage = document.select("a[href]");
//5. For each extracted URL... go back to Step 4.
for (Element page : linksOnPage) {
getPageLinks(page.attr("abs:href"), level + 1);
}
} catch (IOException e) {
System.err.println("For '" + URL + "': " + e.getMessage());
}
}
}
public static void main(String[] args) {
//1. Pick a URL from the frontier
new BasicWebCrawler().getPageLinks("http://mysmallwebpage.com/", 0);
}
}
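If you want the recursion to stay bounded in practice, one option (an assumption on my part, reusing the prefix idea from the first snippet's "a[href^=...]" selector) is to only follow links that start with the site's base URL, for example by keeping the start URL in a field and changing the loop inside getPageLinks to:

// Only recurse into links that stay on the starting site.
// 'baseUrl' is assumed to hold the URL passed to the first call.
for (Element page : linksOnPage) {
    String next = page.attr("abs:href");
    if (next.startsWith(baseUrl)) {   // skip external links entirely
        getPageLinks(next, level + 1);
    }
}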
I am writing a web crawler program using the Jsoup library. (Sorry, I can't post all of my code because it's too long.) I need to crawl only URLs that can lead me to new links, and skip URLs that end with image files, PDF, RAR or ZIP files. I only want to crawl URLs ending with .html, .htm, .jsp, .php, .asp, etc.
I have two questions regarding this:
1- How can I prevent the program from reading the unneeded URLs (images, PDFs or RARs)?
2- How can I improve this class so it doesn't waste time loading the whole URL content into memory and then parsing URLs out of it?
This is my code:
import org.jsoup.Connection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.io.Writer;
import java.math.BigInteger;
import java.util.Formatter;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.security.*;
import java.nio.file.Path;
import java.nio.file.Paths;
public class HTMLParser {
private static final int READ_TIMEOUT_IN_MILLISSECS = (int) TimeUnit.MILLISECONDS.convert(30, TimeUnit.SECONDS);
private static HashMap <String, Integer> filecounter = new HashMap<> ();
public static List<LinkNodeLight> parse(LinkNode inputLink){
List<LinkNodeLight> outputLinks = new LinkedList<>();
try {
inputLink.setIpAdress(IpFromUrl.getIp(inputLink.getUrl()));
String url = inputLink.getUrl();
if (inputLink.getIpAdress() != null) {
url = url.replace(URLWeight.getHostName(url), inputLink.getIpAdress()); // replace() returns a new String, so the result must be assigned
}
Document parsedResults = Jsoup
.connect(url)
.timeout(READ_TIMEOUT_IN_MILLISSECS)
.userAgent("Mozilla/5.0 (Windows; U; WindowsNT 5.1; en-US; rv1.8.1.6) Gecko/20070725 Firefox/2.0.0.6")
.get();
inputLink.setSize(parsedResults.html().length());
/* IP address moved here in order to speed up the process */
inputLink.setStatus(LinkNodeStatus.OK);
inputLink.setDomain(URLWeight.getDomainName(inputLink.getUrl()));
if (true) {
/* save the file to the html */
String filename = parsedResults.title();//digestBig.toString(16) + ".html";
if (filename.length() > 24) {
filename = filename.substring(0, 24);
}
filename = filename.replaceAll("[^\\w\\d\\s]", "").trim();
filename = filename.replaceAll("\\s+", " ");
if (!filecounter.containsKey(filename)) {
filecounter.put(filename, 1);
} else {
Integer tmp = filecounter.remove(filename);
filecounter.put(filename, tmp + 1);
}
filename = filename + "-" + (filecounter.get(filename)).toString() + ".html";
filename = Paths.get("downloads", filename).toString();
inputLink.setFileName(filename);
/* use md5 of url as file name */
try (PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter(filename)))) {
out.println("<!--" + inputLink.getUrl() + "-->");
out.print(parsedResults.html());
out.flush();
out.close();
} catch (IOException e) {
e.printStackTrace();
}
}
String tag;
Elements tagElements;
List<LinkNode> result;
tag = "a[href";
tagElements = parsedResults.select(tag);
result = toLinkNodeObject(inputLink, tagElements, tag);
outputLinks.addAll(result);
tag = "area[href";
tagElements = parsedResults.select(tag);
result = toLinkNodeObject(inputLink, tagElements, tag);
outputLinks.addAll(result);
} catch (IOException e) {
inputLink.setParseException(e);
inputLink.setStatus(LinkNodeStatus.ERROR);
}
return outputLinks;
}
static List<LinkNode> toLinkNodeObject(LinkNode parentLink, Elements tagElements, String tag) {
List<LinkNode> links = new LinkedList<>();
for (Element element : tagElements) {
if(isFragmentRef(element)){
continue;
}
String absoluteRef = String.format("abs:%s", tag.contains("[") ? tag.substring(tag.indexOf("[") + 1, tag.length()) : "href");
String url = element.attr(absoluteRef);
if(url!=null && url.trim().length()>0) {
LinkNode link = new LinkNode(url);
link.setTag(element.tagName());
link.setParentLink(parentLink);
links.add(link);
}
}
return links;
}
static boolean isFragmentRef(Element element){
String href = element.attr("href");
return href!=null && (href.trim().startsWith("#") || href.startsWith("mailto:"));
}
}
To add another option to Pshemo's solution for your first question: you can match each URL against a regex in the method "static List toLinkNodeObject" so that unwanted links never even get added to the list, maybe something like
"(?i)https?://.+(?<!\\.(?:pdf|rar|zip))"
and match your URL against that regex. This will speed up the program too, because you won't be queuing those links for parsing.
String url = element.attr(absoluteRef);
if(url!=null && url.trim().length()>0
&& url.matches("(?i)https?://.+(?<!\\.(?:pdf|rar|zip))")) {
LinkNode link = new LinkNode(url);
link.setTag(element.tagName());
link.setParentLink(parentLink);
links.add(link);
}
To speed up the class as a whole, it would help to multithread the downloading and parsing so that several threads fetch and validate pages at the same time; a rough sketch follows.
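A minimal sketch of that idea (the URLs, thread count, and the call into HTMLParser are placeholders, not a tuned implementation):

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelFetcher {
    public static void main(String[] args) throws InterruptedException {
        // placeholder URLs; in practice these would come from the crawl frontier
        List<String> urls = Arrays.asList("http://example.com/a", "http://example.com/b");
        ExecutorService pool = Executors.newFixedThreadPool(8); // tune the thread count
        for (final String url : urls) {
            pool.submit(new Runnable() {
                public void run() {
                    // each task downloads and parses one page independently,
                    // e.g. by calling HTMLParser.parse(new LinkNode(url))
                    System.out.println("fetching " + url);
                }
            });
        }
        pool.shutdown();                             // no new tasks after this point
        pool.awaitTermination(10, TimeUnit.MINUTES); // wait for the workers to finish
    }
}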
I need to write code that will get all the links in a website recursively. Since I'm new to this, here is what I've got so far:
List<WebElement> no = driver.findElements(By.tagName("a"));
nooflinks = no.size();
for (WebElement pagelink : no)
{
String linktext = pagelink.getText();
link = pagelink.getAttribute("href");
}
Now what I need to do is: if the list finds a link in the same domain, it should get all the links from that URL, then return to the previous loop and resume from the next link. This should continue until the last URL in the whole website has been found. For example, the home page is the base URL and it has 5 URLs to other pages; after getting the first of the 5 URLs, the loop should collect all the links of that first URL, return to the home page, and resume from the second URL. If the second URL has sub-URLs of its own, the loop should collect links for those first, finish the second URL, and only then go back to the home page and resume from the third URL.
Can anybody help me out here?
I saw this post recently. I don't know if you are still looking for a solution to this problem, but in case you are, I thought this might be useful:
import java.io.IOException;
import java.net.MalformedURLException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
import java.util.Iterator;
public class URLReading {
public static void main(String[] args) {
try {
String url = "https://abidsukumaran.wordpress.com/";
HashMap<String, String> h = new HashMap<>();
Document doc = Jsoup.connect(url).get();
// Page Title
String title = doc.title();
//System.out.println("title: " + title);
// Links in page
Elements links = doc.select("a[href]");
List<String> url_array = new ArrayList<String>();
int i=0;
url_array.add(url);
String root = url;
h.put(url, title);
Iterator<String> keySetIterator = h.keySet().iterator();
while((i<=h.size())){
try{
url = url_array.get(i).toString();
doc = Jsoup.connect(url).get();
title = doc.title();
links = doc.select("a[href]");
for (Element link : links) {
String res= h.putIfAbsent(link.attr("href"), link.text());
if (res==null){
url_array.add(link.attr("href"));
System.out.println("\nURL: " + link.attr("href"));
System.out.println("CONTENT: " + link.text());
}
}
}catch(Exception e){
System.out.println("\n"+e);
}
i++;
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
You can use Set and HashSet. You could try something like this:
Set<String> getLinksFromSite(int Level, Set<String> Links) {
if (Level < 5) {
Set<String> locallinks = new HashSet<String>();
for (String link : Links) {
Set<String> new_links = extractLinksFromPage(link); // placeholder: fetch the page at 'link' and return the links found on it
locallinks.addAll(getLinksFromSite(Level+1, new_links));
}
return locallinks;
} else {
return Links;
}
}
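For example (the names here are only illustrative), the recursion could be started with a set containing just the start URL:

Set<String> start = new HashSet<String>();
start.add("http://example.com/");                  // hypothetical start URL
Set<String> allLinks = getLinksFromSite(0, start); // begin at recursion level 0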
I would think the following idiom would be useful in this context:
Set<String> visited = new HashSet<>();
Deque<String> unvisited = new LinkedList<>();
unvisited.add(startingURL);
while (!unvisited.isEmpty()) {
String current = unvisited.poll();
visited.add(current);
for /* each link in current */ {
if (!visited.contains(link.url()))
unvisited.add(link.url());
}
}
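Filled in with Jsoup, a minimal sketch of that idiom (assuming you only want to follow links on the same site; the start URL is a placeholder and error handling is kept minimal):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class SiteCrawler {
    public static void main(String[] args) {
        String startingURL = "https://example.com/";   // hypothetical start URL
        Set<String> visited = new HashSet<String>();
        Deque<String> unvisited = new ArrayDeque<String>();
        unvisited.add(startingURL);
        while (!unvisited.isEmpty()) {
            String current = unvisited.poll();
            if (!visited.add(current)) {
                continue;                              // already crawled this URL
            }
            System.out.println(current);
            try {
                Document doc = Jsoup.connect(current).get();
                for (Element a : doc.select("a[href]")) {
                    String url = a.attr("abs:href");
                    // stay on the same site and skip anything already seen
                    if (url.startsWith(startingURL) && !visited.contains(url)) {
                        unvisited.add(url);
                    }
                }
            } catch (Exception e) {
                System.err.println("Could not fetch " + current + ": " + e.getMessage());
            }
        }
    }
}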
What I am attempting to do is:
Log in to a website in order to retrieve data that can only be accessed while logged in.
The website I need to login to is https://www.indemed.com.
I think that this is a two part program, part 1 being logging in, while part 2 is getting the information. When I run the login part of my program and then attempt to manually log in it says my account is in use, which I take to mean it is correctly logging in.
However when I try to get the price it is not there (if not logged in prices will not show up, but everything else will be there).
My questions are: Is there a problem with how I am combining my login method and my retrieving method? Is the problem just with my login method? Is the problem just with my retrieving method? Why doesn't this work? Most importantly, how can I fix this?
Here is what I have attempted so far:
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;
public class IndependenceMedical {
public IndependenceMedical(){
login();
}
private void login() {
URL URLObj;
URLConnection connect;
try {
// Establish a URL and open a connection to it. Set it to output mode.
URLObj = new URL("https://www.indemed.com/Action/Login/LoginAction.cfm?Refer=/index.cfm");
connect = URLObj.openConnection();
System.out.println(connect.toString());
connect.setDoOutput(true);
// Create a buffered writer to the URLConnection's output stream and write our forms parameters.
BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(connect.getOutputStream()));
writer.write("AccountNumber=12345&UserName=myUserName&Password=myPassword&Login=Login");
writer.close();
// Now establish a buffered reader to read the URLConnection's input stream.
BufferedReader reader = new BufferedReader(new InputStreamReader(connect.getInputStream()));
String lineRead = "";
// Read all available lines of data from the URL and print them to screen.
while ((lineRead = reader.readLine()) != null) {
System.out.println(lineRead);
}
reader.close();
}
catch (MalformedURLException ex) {
System.out.println("The URL specified was unable to be parsed or uses an invalid protocol. Please try again.");
System.exit(1);
}
catch (Exception ex) {
System.out.println(ex.getMessage() + "\nAn exception occurred.");
System.exit(1);
}
}
public Document getDoc(String itemNumber){
try {
return Jsoup.connect("https://www.indemed.com/Catalog/SearchResults.cfm?source=advancedSearch&psku=" + itemNumber + "&keyword=&PHCPCS=&PClassID=&ManufacturerID=&Search.x=41&Search.y=9").get();
}
catch (IOException e) {}
return null;
}
public String getPrice(Document doc){
try{
Elements stuff = doc.select("#tr_51187955");
stuff = stuff.select("div.product-price");
String newStuff = stuff.toString();
newStuff = newStuff.substring(newStuff.indexOf("$")); // throws exception because "$" is not in the String.
newStuff = newStuff.substring(0, newStuff.indexOf(" "));
return newStuff;
}
catch (Exception arg0){
return "";
}
}
public static void main(String[] args){
IndependenceMedical test = new IndependenceMedical();
Document doc = test.getDoc("187955");
System.out.println("\n\n\n\n\n\n\n\n\n\n"); //to separate the return lines
System.out.println(test.getPrice(doc));
}
}
Due to character restrictions, and because I don't know which parts are important, I can't show the output; if requested, I will try to provide whatever output is needed.
Sorry for being so wordy; I'm just trying to make sure the question is clear.
Lastly, I have thoroughly looked through other login questions and, although there are examples of how to log in, I can't seem to find how to do anything after logging in (I'm sure someone has talked about it, but I haven't been able to find it).
Thanks in advance to anyone that can help me with this.
EDIT:
Although this question is similar to "Parse HTML source after login with Java",
I'm not parsing the redirected page; I need access to all the pages the login grants access to.
Jsoup provides methods for handling logins.
Try the code below after you've filled in the username, password and account number.
import java.io.IOException;
import java.net.MalformedURLException;
import java.util.Map;
import org.jsoup.Connection;
import org.jsoup.Connection.Method;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;
public class IndependenceMedical {
private Map<String, String> loginCookies;
public IndependenceMedical() {
login();
}
private void login() {
try {
Connection.Response res = Jsoup.connect("https://www.indemed.com/Action/Login/LoginAction.cfm?refer=MyAccount&qs=")
.data("UserName", "myUserName")
.data("Password", "myPassword")
.data("AccountNumber", "myAccountNumber")
.method(Method.POST)
.execute();
loginCookies = res.cookies();
} catch (MalformedURLException ex) {
System.out.println("The URL specified was unable to be parsed or uses an invalid protocol. Please try again.");
System.exit(1);
} catch (Exception ex) {
System.out.println(ex.getMessage() + "\nAn exception occurred.");
System.exit(1);
}
}
public Document getDoc(String itemNumber){
try {
return Jsoup.connect("https://www.indemed.com/Catalog/SearchResults.cfm?source=advancedSearch&psku=" + itemNumber + "&keyword=&PHCPCS=&PClassID=&ManufacturerID=&Search.x=41&Search.y=9")
.cookies(loginCookies)
.get();
} catch (IOException e) {}
return null;
}
public String getPrice(Document doc){
try {
Elements stuff = doc.select("#tr_51187955");
stuff = stuff.select("div.product-price");
String newStuff = stuff.toString();
newStuff = newStuff.substring(newStuff.indexOf("$")); // throws exception because "$" is not in the String.
newStuff = newStuff.substring(0, newStuff.indexOf(" "));
return newStuff;
} catch (Exception arg0) {
return "";
}
}
public static void main(String[] args){
IndependenceMedical test = new IndependenceMedical();
Document doc = test.getDoc("187955");
System.out.println("\n\n\n\n\n\n\n\n\n\n"); //to separate the return lines
System.out.println(test.getPrice(doc));
}
}
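If the price still doesn't show up after this, one extra thing worth checking is whether the login POST actually succeeded before reusing the cookies. A rough check inside login(), right after execute() (the exact condition depends on the site), could be:

// Sanity check: a failed login usually shows up as a non-200 status or no session cookies.
if (res.statusCode() != 200 || res.cookies().isEmpty()) {
    System.out.println("Login may have failed; got HTTP " + res.statusCode());
}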
I'm working with Selenium WebDriver in Java and I'm getting the error "The system cannot find the path specified".
Code:
package test;
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.ResourceBundle;
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.interactions.Actions;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.Select;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.testng.Assert;
import org.testng.Reporter;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;
import org.apache.log4j.Logger;
import org.apache.log4j.xml.DOMConfigurator;
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;
public class OEPR_DefaultTab{
private static Logger Log = Logger.getLogger(OEPR_DefaultTab.class.getName());
private WebDriver driver;
private StringBuffer verificationErrors = new StringBuffer();
Properties p= new Properties();
public Selenium selenium;
@BeforeTest
public void Login() throws Exception {
driver = new FirefoxDriver();
try {
p.load(new FileInputStream("C:/Login.txt"));
} catch (Exception e) {
e.getMessage();
}
String url=p.getProperty("url");
DOMConfigurator.configure("src/log4j.xml");
Log.info("______________________________________________________________");
Log.info("Initializing Selenium...");
selenium = new DefaultSelenium("localhost", 4444, "*firefox",url);
Thread.sleep(5000);
Log.info("Selenium instance started");
try {
p.load(new FileInputStream("C:/Login.txt"));
} catch (Exception e) {
e.getMessage();
}
Log.info("Accessing Stored uid,pwd from the stored text file");
String uid=p.getProperty("loginUsername");
String pwd=p.getProperty("loginPassword");
Log.info("Retrieved uid pwd from the text file");
try
{
driver.get("https://10.4.16.159/login");
}
catch(Exception e)
{
Reporter.log("network server is slow..check internet connection");
Log.info("Unable to open the website");
throw new Error("network server is slow..check internet connection");
}
performLogin(uid,pwd);
}
public void performLogin(String uid,String pwd) throws Exception
{
Log.info("Sign in to the OneReports website");
Thread.sleep(5000);
Log.info("Enter Username");
driver.findElement(By.id("loginUsername")).sendKeys(uid);
Log.info("Enter Password");
driver.findElement(By.id("loginPassword")).sendKeys(pwd);
//submit
Log.info("Submitting login details");
waitforElement(driver, 120, "//*[@id='submit']");
driver.findElement(By.id("submit")).submit();
Thread.sleep(6000);
Actions actions = new Actions(driver);
Log.info("Clicking on Reports link");
if(existsElement("reports")==true){
WebElement menuHoverLink = driver.findElement(By.id("reports"));
actions.moveToElement(menuHoverLink).perform();
Thread.sleep(6000);
}
else{
Log.info("element not present");
System.out.println("element not present -- so it entered the else loop");
}
Log.info("Clicking on Extranet link");
if(existsElement("extranet")==true){
WebElement menuHoverLink = driver.findElement(By.id("extranet"));
actions.moveToElement(menuHoverLink).perform();
Thread.sleep(6000);
}
else{
Log.info("element not present");
System.out.println("element not present -- so it entered the else loop");
}
Log.info("Clicking on PR link");
if(existsElement("ext-pr")==true){
WebElement menuHoverLink = driver.findElement(By.id("ext-pr"));
actions.moveToElement(menuHoverLink).perform();
Thread.sleep(6000);
}
else{
Log.info("element not present");
System.out.println("element not present -- so it entered the else loop");
}
Log.info("Clicking on Overview and Evolution PR link");
if(existsElement("ext-pr-backlog-evolution")==true){
JavascriptExecutor executor = (JavascriptExecutor)driver;
executor.executeScript("arguments[0].click();", driver.findElement(By.id("ext-pr-backlog-evolution") ));
//executor.executeScript("document.getElementById('ext-pr-backlog-evolution').style.display='block';");
//driver.findElement(By.id("ext-pr-backlog-evolution")).click();
// WebElement menuHoverLink = driver.findElement(By.id("ext-pr-backlog-evolution"));
//actions.moveToElement(menuHoverLink).perform();
Thread.sleep(6000);
}
else{
Log.info("element not present");
System.out.println("element not present -- so it entered the else loop");
}
}
//Filter selection-1
@Test()
public void Filterselection_1() throws Exception{
BufferedReader in = new BufferedReader(new FileReader("C:/FilerSection/visualization.txt")); // Here I'm getting the error
String line;
line = in.readLine();
in.close();
String[] expectedDropDownItemsInArray = line.split("=")[1].split(",");
// Create expected list :: This will contain expected drop-down values
ArrayList<String> expectedDropDownItems = new ArrayList<String>();
for(int i=0; i<expectedDropDownItemsInArray.length; i++)
expectedDropDownItems.add(expectedDropDownItemsInArray[i]);
// Create a webelement for the drop-down
WebElement visualizationDropDownElement = driver.findElement(By.id("visualizationId"));
// Instantiate Select class with the drop-down webelement
Select visualizationDropDown = new Select(visualizationDropDownElement);
// Retrieve all drop-down values and store in actual list
List<WebElement> valuesUnderVisualizationDropDown = visualizationDropDown.getOptions();
ArrayList<String> actualDropDownItems = new ArrayList<String>();
for(WebElement value : valuesUnderVisualizationDropDown){
actualDropDownItems.add(value.getText());
}
// Compare expected and actual list
for (int i = 0; i < actualDropDownItems.size(); i++) {
if (!expectedDropDownItems.get(i).equals(actualDropDownItems.get(i)))
System.out.println("Drop-down values are NOT in correct order");
}
}
private boolean existsElement(String id) {
try {
driver.findElement(By.id(id));
} catch (Exception e) {
System.out.println("id is not present ");
return false;
}
return true;
}
private void waitforElement(WebDriver driver2, int i, String string) {
// TODO Auto-generated method stub
}
@AfterTest
public void tearDown() throws Exception {
Log.info("Stopping Selenium...");
Log.info("______________________________________________________________");
driver.quit();
String verificationErrorString = verificationErrors.toString();
if (!"".equals(verificationErrorString)) {
Assert.fail(verificationErrorString);
}
}
}
Please check the code and give me a solution.
The scenario is the one described in "How to compare the drop down options is matching with the UI options in Selenium WebDriver?"
That is the scenario I'm trying to script; please check that link as well.
If that's the exact text you are seeing, this typically isn't a code issue; it means you need to update your PATH environment variable with the directory where Java was installed.
Replace
BufferedReader in = new BufferedReader(new FileReader("C:/FilerSection/visualization.txt"));
with
BufferedReader in = new BufferedReader(new FileReader("C:\\FilerSection\\visualization.txt"));
This should help.