Cannot submit a website form through Selenium - java

This is the second post on Stack Overflow on my quest to access this godforsaken website: https://portal.mcpsmd.org/guardian/home.html
import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

public class WebAccessor {
    public static void main(String[] args) {
        WebDriver driver = new HtmlUnitDriver();
        driver.get("https://portal.mcpsmd.org/public/");
        System.out.println(driver.getCurrentUrl());

        // Find the username and password fields by their ids
        WebElement username = driver.findElement(By.id("fieldAccount"));
        WebElement password = driver.findElement(By.id("fieldPassword"));

        // Enter the credentials
        username.sendKeys("");
        password.sendKeys("");

        WebElement submitBtn = driver.findElement(By.id("btn-enter"));
        submitBtn.click();
        System.out.println(driver.getCurrentUrl());
        driver.quit();
    }
}
This code is tested and works on Facebook.
I am sure that my button is being pressed, because when I click submit the site URL changes from
https://portal.mcpsmd.org/public/
to
https://portal.mcpsmd.org/guardian/home.html
When I type in a username and password (the actual credentials cannot be disclosed for obvious reasons), the site actually tacks on another 20 or so characters to the end of the password field. (You can see this by typing in any random username and password and clicking submit.)
This has led me to believe there is some sort of front-end encryption going on. Is there any feasible way to log in?
Many thanks in advance.

Due to the lack of credentials, my answer is just a guess.
But I think you should redirect after login, with a little tweak to avoid exceptions, like this:
import java.io.IOException;
import java.net.MalformedURLException;

import com.gargoylesoftware.htmlunit.BrowserVersion;
import com.gargoylesoftware.htmlunit.FailingHttpStatusCodeException;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class WebAccessor {
    public static void main(String[] args) {
        WebClient WEB_CLIENT = new WebClient(BrowserVersion.CHROME);
        WEB_CLIENT.getCookieManager().setCookiesEnabled(true);
        HtmlPage loginPage;
        try {
            loginPage = WEB_CLIENT.getPage("https://portal.mcpsmd.org/public/");
            HtmlForm loginForm = loginPage.getFirstByXPath("//form[@id='LoginForm']");
            loginForm.getInputByName("account").setValueAttribute("YOURUSERNAME");
            loginForm.getInputByName("pw").setValueAttribute("YOURPASSWORD");
            loginForm.getElementsByTagName("button").get(0).click();
            HtmlPage landing = WEB_CLIENT.getPage("https://portal.mcpsmd.org/guardian/home.html#/termGrades");
            System.out.println(landing.getTitleText());
        } catch (FailingHttpStatusCodeException e) {
            // ignored so a failed request does not abort the run
            //e.printStackTrace();
        } catch (MalformedURLException e) {
            // ignored so a failed request does not abort the run
            //e.printStackTrace();
        } catch (IOException e) {
            // ignored so a failed request does not abort the run
            //e.printStackTrace();
        }
    }
}
My output is: Student and Parent Sign In. But if you set the correct credentials, it should be OK.

Related

Selenium - best way to write error checking in Java?

I'm VERY new to Java/Selenium. What I am trying to achieve is to test whether our website comes back up after patching. What needs to happen is:
1 - open Chrome
2 - open the URL
3 - log in
4 - download a PDF
I'm trying to catch an error every time my bot hits a roadblock. At the moment I have the code below.
What is the best way to handle it in the code when a bad-password prompt pops up on the website?
Thanks, and sorry if this is such an easy issue.
package webdriver;

import java.util.concurrent.TimeUnit;
import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.interactions.Actions;
import java.io.*;
import java.lang.*;
import java.util.*;

public class launchbrowser {

    public static WebDriver driver = null;

    public static void main(String[] args) throws IOException {
        File file1 = new File("out.txt");
        FileWriter fw = new FileWriter(file1);
        PrintWriter pw = new PrintWriter(fw);

        try {
            //Adds driver ver - dont remove
            System.setProperty("webdriver.chrome.driver", ".\\drivers\\chromedriver78.exe");
            driver = new ChromeDriver();
            driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
        }
        //Catch the Error
        catch (Exception e) {
            pw.println("unable to launch browser");
        }

        try {
            //open webpage and maximizes the window
            driver.navigate().to("****Webpage****");
            driver.manage().window().maximize();
        }
        //Catch the Error
        catch (Exception e) {
            pw.println("Unable to open Webpage");
        }

        try {
            //locate Member Number\email
            WebElement Username = driver.findElement(By.id("Login.Member number or email"));
            Username.sendKeys("****test username****");
        }
        //Catch the Error
        catch (Exception e) {
            pw.println("Unable to enter username");
        }

        try {
            //locate password
            WebElement Password = driver.findElement(By.id("Login.Password"));
            Password.sendKeys("****test password****");
        }
        //Catch the Error
        catch (Exception e) {
            pw.println("Unable to enter password");
        }

        //Click Login
        WebElement Enter = driver.findElement(By.id("Login.Password"));
        Enter.sendKeys(Keys.RETURN);

        try {
            //Download a statement
            WebElement Transactions = driver.findElement(By.id("TransactionSummary"));
            Transactions.click();
            WebElement PDF = driver.findElement(By.linkText("Go"));
            PDF.click();
        }
        catch (Exception e) {
            pw.println("Unable to download statement");
        }

        pw.close();
    }
}
There are two ways to achieve this, depending on what you want to test:
1. Find your pop-up element using an explicit wait. If it doesn't show up, the explicit wait will throw an exception, which means your password might be valid (see the sketch after this list).
2. Determine first whether your password is valid or not, and then decide whether you need to look for the pop-up element.
The first option assumes your validation already works perfectly; the second one checks that the validation actually works properly.
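For the first option, here is a minimal sketch of what the explicit wait could look like, using the driver and the pw writer from your code. The locator By.cssSelector(".error-message") and the 10-second timeout are placeholders I made up; replace them with whatever the bad-password prompt on your site actually uses (on Selenium 4 the constructor takes Duration.ofSeconds(10) instead of a long):

import org.openqa.selenium.TimeoutException;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// ...after clicking Login...
try {
    // Placeholder locator - point this at the real bad-password prompt element
    WebDriverWait wait = new WebDriverWait(driver, 10);
    wait.until(ExpectedConditions.visibilityOfElementLocated(By.cssSelector(".error-message")));
    pw.println("Bad password prompt appeared - login failed");
} catch (TimeoutException e) {
    // The prompt never showed up within the timeout, so the login presumably went through
    pw.println("No bad password prompt - login appears to have succeeded");
}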

How to pass more than one web element to a page using PhantomJS

I am trying to use PhantomJS for testing. I have one login page with the obvious two parameters, username and password. The same code works with the Google URL, where I have to pass only one element and then call element.submit(); but on my login page I want to pass two elements. How do I achieve that?
Here is my code:
import java.io.File;
import java.io.IOException;

import org.apache.maven.shared.utils.io.FileUtils;
import org.openqa.selenium.By;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.phantomjs.PhantomJSDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.testng.annotations.Test;

public class PhantomExample {

    public PhantomExample() {
        System.out.println("this is constructor");
    }

    @Test
    public void verify() {
        File src = new File("C:\\Users\\Admin\\Downloads\\Compressed\\phantomjs-2.1.1-windows\\bin\\phantomjs.exe");
        DesiredCapabilities phantomjsCap = new DesiredCapabilities();
        phantomjsCap.setJavascriptEnabled(true);
        phantomjsCap.setCapability("phantomjs.binary.path", src.getAbsolutePath());
        System.out.println("inside the verify method");
        System.setProperty("phantomjs.binary.path", src.getAbsolutePath());
        WebDriver driver = new PhantomJSDriver(phantomjsCap);

        driver.get("http://localhost:8080/MyApp/Login");
        System.out.println(driver.getPageSource());

        WebElement el = driver.findElement(By.name("username"));
        WebElement elp = driver.findElement(By.name("password"));
        el.sendKeys("username");
        elp.sendKeys("0");
        el.submit();
        elp.submit();

        System.out.println(driver.getTitle());
        System.out.println(driver.getCurrentUrl());
        System.out.println(driver.getPageSource());

        File Ss = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        try {
            FileUtils.copyFile(Ss, new File("d:/sample.jpg"));
        }
        catch (IOException e) {
            System.out.println(e.getMessage());
        }
        driver.quit();
    }
}
Here I created two separate elements and am expecting to get the response, but when I run it in debug mode it does not execute past the line el.submit();
I am quite sure that I am doing this the wrong way, but can someone tell me the right approach and explain how to get the response object which the server would send after login?
Calling the submit() method on an input field submits the whole form (with all input values), so the following line can be deleted:
elp.submit();
If the form has a submit button, then after executing el.submit() login will be done.
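Putting it together with the code from the question, the flow would look something like this (the element names are the ones from the original post):

WebElement el = driver.findElement(By.name("username"));
WebElement elp = driver.findElement(By.name("password"));
el.sendKeys("username");
elp.sendKeys("0");

// submit() on any field inside the form submits the whole form,
// so the password value is sent along with the username
el.submit();

// After the submit, the driver is already on the server's response page,
// so these calls reflect the post-login state
System.out.println(driver.getTitle());
System.out.println(driver.getCurrentUrl());
System.out.println(driver.getPageSource());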

sendKeys() not working in Selenium WebDriver

Until yesterday the code below was working fine, but now I am facing a problem.
The code opens a Firefox browser and then loads facebook.com, but it is not sending the email and password to the browser, i.e. sendKeys() is not working.
I verified the ids of both the email and password textboxes, which are correct, yet the code is not working.
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.WebDriver;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;

public class Webdriver2 {

    WebDriver driver;
    JavascriptExecutor jse;

    public void invokeBrowser() {
        try {
            System.setProperty("webdriver.gecko.driver", "C:\\geckodriver-v0.19.0-win64\\geckodriver.exe");
            driver = new FirefoxDriver();
            driver.manage().deleteAllCookies();
            driver.manage().window().maximize();
            driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
            driver.manage().timeouts().pageLoadTimeout(30, TimeUnit.SECONDS);
            driver.get("https://www.facebook.com/");
            search();
        }
        catch (Exception e) {
            // TODO: handle exception
            e.printStackTrace();
        }
    }

    public void search() {
        try {
            driver.findElement(By.id("email")).sendKeys("example@gmail.com");
            Thread.sleep(4000);
            driver.findElement(By.id("pass")).sendKeys("password");
            Thread.sleep(4000);
            driver.findElement(By.id("u_0_2")).click();
            Thread.sleep(4000);
            /*driver.findElement(By.name("q")).sendKeys("spit mumbai");
            Thread.sleep(4000);
            driver.findElement(By.xpath(" //button[@aria-label='Search' and @data-testid='facebar_search_button'] ")).click();*/
        }
        catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        // TODO Auto-generated method stub
        Webdriver2 w = new Webdriver2();
        w.invokeBrowser();
    }
}
Try clicking on the two textboxes before performing the sendKeys:
driver.findElement(By.id("email")).click();
driver.findElement(By.id("email")).sendKeys("example@gmail.com");
The textboxes probably need focus.
We need to take care of a couple of things here, as follows:
The Email or Phone field is within an <input> tag, so we need to take that into account while selecting the locator.
The Password field is also within an <input> tag, so we need to take that into account while selecting the locator.
If you observe closely, the id of the Log In button is dynamic and it is also within an <input> tag, so we need to consider this factor as well.
Here is a minimal sample code block using cssSelector to access the URL https://www.facebook.com/, fill in Email or Phone and Password, and finally click the Log In button:
package demo;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class Facebook_Login_CSS_FF {

    public static void main(String[] args) {
        System.setProperty("webdriver.gecko.driver", "C:\\Utility\\BrowserDrivers\\geckodriver.exe");
        WebDriver driver = new FirefoxDriver();
        driver.get("https://www.facebook.com/");
        driver.findElement(By.cssSelector("input#email")).sendKeys("Ryusei");
        driver.findElement(By.cssSelector("input[name='pass']")).sendKeys("Nakamura");
        driver.findElement(By.cssSelector("input[value='Log In']")).click();
    }
}

Scraping from website that needs authentication using Java

I found this code on an old forum and I am trying to get it to work, but I am getting this error:
File: /net/home/f13/dlschnettler/Desktop/javaScraper/RedditClient.java [line: 46]
Error: cannot access org.w3c.dom.ElementTraversal
class file for org.w3c.dom.ElementTraversal not found
Here's the code:
import java.io.IOException;
import java.net.MalformedURLException;

import com.gargoylesoftware.htmlunit.BrowserVersion;
import com.gargoylesoftware.htmlunit.FailingHttpStatusCodeException;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class RedditClient {

    //Create a new WebClient with any BrowserVersion. WebClient belongs to the
    //HtmlUnit library.
    private final WebClient WEB_CLIENT = new WebClient(BrowserVersion.CHROME);

    //This is pretty self explanatory, these are your Reddit credentials.
    private final String username;
    private final String password;

    //Our constructor. Sets our username and password and does some client config.
    RedditClient(String username, String password) {
        this.username = username;
        this.password = password;
        //Retrieves our WebClient's cookie manager and enables cookies.
        //This is what allows us to view pages that require login.
        //If this were set to false, the login session wouldn't persist.
        WEB_CLIENT.getCookieManager().setCookiesEnabled(true);
    }

    public void login() {
        //This is the URL where we log in, easy.
        String loginURL = "https://www.reddit.com/login";
        try {
            //Okay, bear with me here. This part is simple but it can be tricky
            //to understand at first. Reference the login form above and follow
            //along.
            //Create an HtmlPage and get the login page.
            HtmlPage loginPage = WEB_CLIENT.getPage(loginURL);
            //Create an HtmlForm by locating the form that pertains to logging in.
            //"//form[@id='login-form']" means "Hey, look for a <form> tag with the
            //id attribute 'login-form'". Sound familiar?
            //<form id="login-form" method="post" ...
            HtmlForm loginForm = loginPage.getFirstByXPath("//form[@id='login-form']");
            //This is where we modify the form. The getInputByName method looks
            //for an <input> tag with some name attribute. For example, user or passwd.
            //If we take a look at the form, it all makes sense.
            //<input value="" name="user" id="user_login" ...
            //After we locate the input tag, we set the value to what belongs.
            //So we're saying, "Find the <input> tags with the names "user" and "passwd"
            //and throw in our username and password in the text fields."
            loginForm.getInputByName("user").setValueAttribute(username);
            loginForm.getInputByName("passwd").setValueAttribute(password);
            //<button type="submit" class="c-btn c-btn-primary c-pull-right" ...
            //Okay, you may have noticed the button has no name. What the line
            //below does is locate all of the <button>s in the login form and
            //clicks the first and only one. (.get(0)) This is something that
            //you can do if you come across inputs without names, ids, etc.
            loginForm.getElementsByTagName("button").get(0).click();
        } catch (FailingHttpStatusCodeException e) {
            e.printStackTrace();
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public String get(String URL) {
        try {
            //All this method does is return the HTML response for some URL.
            //We'll call this after we log in!
            return WEB_CLIENT.getPage(URL).getWebResponse().getContentAsString();
        } catch (FailingHttpStatusCodeException e) {
            e.printStackTrace();
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }
}
import org.jsoup.Jsoup;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class Main {

    public static void main(String[] args) {
        //Create a new RedditClient and log us in!
        RedditClient client = new RedditClient("hutsboR", "MyPassword!");
        client.login();
        //Let's scrape our messages, information behind a login.
        //https://www.reddit.com/message/messages/ is the URL where messages are located.
        String page = client.get("https://www.reddit.com/message/messages/");
        //"div.md" selects all divs with the class name "md", that's where message
        //bodies are stored. You'll find "<div class="md">" before each message.
        Elements messages = Jsoup.parse(page).select("div.md");
        //For each message in messages, let's print out the message and a new line.
        for (Element message : messages) {
            System.out.println(message.text() + "\n");
        }
    }
}
Not really sure how to fix it since I'm not very familiar with scraping in the first place.
Try adding xml-apis to your classpath, for example as shown below.
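If the project uses Maven, that would be a dependency along these lines (the version here is only an example; use whichever xml-apis release fits your build):

<dependency>
    <groupId>xml-apis</groupId>
    <artifactId>xml-apis</artifactId>
    <version>1.4.01</version>
</dependency>

Otherwise, download the xml-apis jar and place it next to the other HtmlUnit jars on the classpath.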

Web scraping with HtmlUnit: issue with Apache lang3

UPDATE: I ended up using ghost.py but would appreciate a response.
I have been using straight Java/Apache httpd and NIO to crawl most pages recently, but came across what I expected to be a simple issue that apparently is not. I am trying to use HtmlUnit to crawl a page, but every time I run the code below I get the error shown after the code, telling me a jar is missing. Unfortunately, I could not find my answer here, as there is a weird part to this question.
So, here is the weird part. I have the jar (lang3), it is up to date, and it contains a method StringUtils.startsWithIgnoreCase(String string, String prefix) that works. I would really like to avoid Selenium, as I need to crawl (if sampling tells me correctly) about 1000 pages on the same site over several months.
Is there a particular version I need? All I saw was the note to update to 3.1, which I have. Is there a method of installation that works?
Thanks.
The code I am running is:
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;

import com.gargoylesoftware.htmlunit.BrowserVersion;
import com.gargoylesoftware.htmlunit.FailingHttpStatusCodeException;
import com.gargoylesoftware.htmlunit.Page;
import com.gargoylesoftware.htmlunit.RefreshHandler;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlAnchor;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import com.gargoylesoftware.htmlunit.html.HtmlTable;
import com.gargoylesoftware.htmlunit.html.HtmlTableRow;

public class crawl {

    public crawl() {
        //TODO Constructor
        crawl_page();
    }

    public void crawl_page() {
        //TODO control the crawling
        WebClient webClient = new WebClient(BrowserVersion.FIREFOX_10);
        webClient.setRefreshHandler(new RefreshHandler() {
            public void handleRefresh(Page page, URL url, int arg) throws IOException {
                System.out.println("handleRefresh");
            }
        });

        //the url for CA's Megan's law sex off
        String url = "http://www.myurl.com"; //not my url
        HtmlPage page;
        try {
            page = (HtmlPage) webClient.getPage(url);
            HtmlForm form = page.getFormByName("_ctl0");
            form.getInputByName("cbAgree").setChecked(true);
            page = form.getButtonByName("Continue").click();
            System.out.println(page.asText());
        } catch (FailingHttpStatusCodeException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (MalformedURLException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
The error is:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.commons.lang3.StringUtils.startsWithIgnoreCase(Ljava/lang/CharSequence;Ljava/lang/CharSequence;)Z
at com.gargoylesoftware.htmlunit.util.URLCreator$URLCreatorStandard.toUrlUnsafeClassic(URLCreator.java:66)
at com.gargoylesoftware.htmlunit.util.UrlUtils.toUrlUnsafe(UrlUtils.java:193)
at com.gargoylesoftware.htmlunit.util.UrlUtils.toUrlSafe(UrlUtils.java:171)
at com.gargoylesoftware.htmlunit.WebClient.<clinit>(WebClient.java:159)
at ca__soc.crawl.crawl_page(crawl.java:34)
at ca__soc.crawl.<init>(crawl.java:24)
at ca__soc.us_ca_ca_soc.main(us_ca_ca_soc.java:17)
According to the documentation:
Since: 2.4, 3.0 Changed signature from startsWithIgnoreCase(String, String) to startsWithIgnoreCase(CharSequence, CharSequence)
So you probably have two similar jars on your classpath. One way to check is shown below.
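If this is a Maven build, one way to confirm that (a suggestion on top of the original answer, not something verified against this project) is to print the dependency tree and look for more than one commons-lang entry:

mvn dependency:tree

If two versions of commons-lang3 (or an old commons-lang next to lang3) show up in the output, exclude or remove the stale one so that only the version HtmlUnit expects stays on the classpath.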
