I'm trying to fill in multiple forms that come one after another. All of the forms get filled quickly and without errors because I make sure to add
WebDriverWait wait = new WebDriverWait(driver, 20);
wait.until(ExpectedConditions.elementToBeClickable(By.xpath("")));
before doing anything on a new page, so I know I'm on the correct page.
On the last form, I encounter this error:
Exception in thread "main" org.openqa.selenium.NoSuchElementException: Unable to locate element: //*[@id="formtovalidate"]/fieldset[1]/div/label/input
For documentation on this error, please visit: https://www.seleniumhq.org/exceptions/no_such_element.html
So I checked in the browser by taking a screenshot: the browser is on the correct page with the correct form. I also checked the XPath values and even tried other attributes, but nothing seemed to work.
So I went ahead and printed out the page source, which showed a totally different page (not the previous page). I also noticed that this page flashed for a second before the final form appeared.
I also tried driver.navigate().refresh(), but that didn't work. I kept searching but found nothing. I also changed browsers; that did nothing either.
This is the method I'm trying to execute:
private void method() {
    WebDriverWait wait = new WebDriverWait(driver, 20);
    wait.until(ExpectedConditions.elementToBeClickable(By.xpath("//*[@id=\"formtovalidate\"]/fieldset[1]/div/label/input")));
    driver.findElement(By.xpath("//*[@id=\"formtovalidate\"]/fieldset[1]/div/label/input")).sendKeys(email);
}
Update
Here's the form screenshot:
Here are the execution results:
Code:
String body_text = driver.findElement(By.tagName("body")).getText();
System.out.println(body_text);
Result: the form, but as plain text
Code:
String body_innerHTML = driver.findElement(By.tagName("body")).getAttribute("innerHTML");
System.out.println(body_innerHTML);
Result: A different page :(
<zendesk-ticketing-form base-url="https://www.runescape.com/a=870/c=K0aO9WO69EI" css-cachebust="129" sitekey="6Lcsv3oUAAAAAGFhlKrkRb029OHio098bbeyi_Hv" grecaptcha="" has-valid-session="true" weblogin-url="https://secure.runescape.com/m=weblogin/a=870/c=K0aO9WO69EI/loginform?mod=www&ssl=1&dest=zendesk/support-form?form=360000065898">
<div class="x-display-none ie-error-display" data-js-ie-error="">
<section class="c-article">
<div class="c-article__content">
<h1>Error: Unsupported Browser</h1>
<p>
We do not support your web browser. Please use a supported web browser by choosing one below.
<br>
FireFox
<br>
Chrome
</p>
</div>
</section>
</div>
Code:
String pagesource = driver.getPageSource();
System.out.println(pagesource);
Result: same as the previous one; a different page.
Firefox Page Source: https://pastebin.com/Kv15V2SK
Firefox Inspect Element of the page screenshot: http://prntscr.com/qvi6hc
This is weird, as the page source is different from the form!
I haven't had time to fully solve your problem, but if you want to tackle it on your own, search Google for "Shadow Root, Selenium". I have had this kind of error before. What I know is that you cannot directly reach an element that sits inside a shadow root, which is why you are not getting the source code inside it.
What you need to do is go through the elements step by step: you have to expand the shadow root first.
Here is a shadow-root expand function:
public static WebElement expand_shadow_element(WebElement element) {
    WebElement shadowRoot = (WebElement) ((JavascriptExecutor) driver).executeScript("return arguments[0].shadowRoot", element);
    return shadowRoot;
}
For now, you can think of this function as the shadow-DOM equivalent of driver.switchTo().frame().
After some research you will understand the shadow root better.
I hope I understood the problem correctly.
Try this function; if you can't get it working, I will help you later on. Good luck.
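As a rough example of how the function above might be applied to the <zendesk-ticketing-form> element from your page source (the inner CSS selector is an assumption; inspect the shadow tree for the real one, and note that newer Selenium versions expose getShadowRoot() directly and that behaviour varies by browser/driver version):
// Locate the shadow host, expand its shadow root, then search inside it.
// Note: XPath does not work inside a shadow root, so use a CSS selector.
WebElement host = driver.findElement(By.tagName("zendesk-ticketing-form"));
WebElement shadowRoot = expand_shadow_element(host);
WebElement emailInput = shadowRoot.findElement(By.cssSelector("input[type='email']")); // hypothetical selector
emailInput.sendKeys(email);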
The page source from the <body> tag, containing the <zendesk-ticketing-form> element and the "Error: Unsupported Browser" block (the same HTML quoted above),
implies that the WebDriver-driven browsing context was detected as a bot and the navigation was blocked due to the presence of reCAPTCHA.
There are different approaches to solving a captcha / reCAPTCHA. You can find a couple of relevant discussions in:
How to bypass Google captcha with Selenium
Selenium webdriver: Modifying navigator.webdriver flag to prevent selenium detection
Update
From your comments it is now clear that you want to fill in the fields within the form:
At this point it is worth mentioning that you have been redirected to this page for one of the following reasons:
Your email ID / user ID is banned / blocked from accessing the site.
Your email ID / user ID is black-listed from accessing the site.
You have used a bot to access/scrape the site, which may have violated the T&C.
Solution
It would be tough to propose a solution to automatically fill in the fields, as the elements on the BAN APPEAL REQUEST page are presumably protected by Invisible reCAPTCHA, and you may have to programmatically invoke the challenge.
As others have suggested, it appears RuneScape's website has detected that you're using a bot to interact with their site. It doesn't matter that you solved the captcha manually, as they can still detect automated behavior quite easily without one (and no, the navigator.webdriver flag is not their only way to detect this).
The captcha is meant to prevent automated interaction with their site, which means they don't want you using Selenium/WebDriver to interact with it. You should respect this, especially as it seems you want your account unbanned (going by the pasted snippets and screenshots), so trying to do exactly what they don't want won't win you any favors.
There is a button “Apply Now” on http://yearup.org.
Apply Now
So far I tried:
//*[#class='button']") (but there are 11 occurrences. Using [0] doesn't help
//*[@class='button']//*[text()='Apply Now']
"html/body/div/section[2]/header/section/ul/li/ul/div[2]/a"
.findElement(By.linkText("Apply Now"));
None of them worked for me.
The following error is obtained:
org.openqa.selenium.ElementNotInteractableException: Cannot click on element
What makes it a bit harder is that there is a duplicate of the same tag on the same page.
There are two solutions:
//div[@class='large-9 columns large-centered center-absolute']/a[@class='button' and contains(text(),'Apply Now')]
or
(//a[contains(text(),'Apply Now')])[2]
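For example, either locator can be combined with an explicit wait for clickability, which also helps against the ElementNotInteractableException (a sketch; the 10-second timeout is arbitrary):
WebDriverWait wait = new WebDriverWait(driver, 10);
WebElement applyNow = wait.until(ExpectedConditions.elementToBeClickable(
        By.xpath("(//a[contains(text(),'Apply Now')])[2]")));
applyNow.click();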
This will work:
driver.findElement(By.xpath(".//*[#href='http://www.yearup.org/seize-opportunity/']")).click();
If you are only trying to locate it and not click the element, then use this:
driver.findElement(By.xpath(".//*[#href='http://www.yearup.org/seize-opportunity/']"));
And further, if you are trying to travel to that page and you don't need to actually click on the element to travel there, just do:
driver.get("http://www.yearup.org/seize-opportunity/");
I know that in the HTML it shows up as "Apply Now", but if you use
driver.findElement(By.linkText("APPLY NOW")).click();
it works. I just tested it.
I am using this code to verify whether an error message is present on the page.
List<WebElement> field_required = driver.findElements(
    By.xpath("//*[@id='tab1']/fieldset/div/div/*[text()='This field is required']")
);
and checking if field_required.size() > 0
The error message should appear only if I leave a field blank and click submit. I noticed that even before I click submit, field_required.size() is greater than zero.
Hence I am guessing that my validation is not done by this code. Please suggest another approach that would work; I am not sure why I am getting incorrect results here.
It seems like the element is already present in the DOM but invisible to the end user.
Could you please modify your code as below and try?
field_required.size() && field_required.isDisplayed()
Here is the link for the isDisplayed() method.
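If several matches can exist, a slightly more robust variant (a sketch, assuming Java 8+) checks whether any of them is visible:
// True only if at least one "This field is required" message is actually visible.
boolean errorShown = field_required.stream().anyMatch(WebElement::isDisplayed);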
What I need to do is browse to a webpage, log in, then browse to another page on that site that requires you to be logged in, so it needs to save cookies. After that, I need to click an element on that page, fill out the form, and get the message that the webpage returns to me. The reason I need to actually go to the page and click the button, as opposed to navigating directly to the link, is that you are assigned a session ID every time you log in and click the link, and it's always different. The button looks like this; it's not a normal href link:
<span id=":tv" idlink="" class="sA" tabindex="0" role="link">Next</span>
Anyway, what would be the easiest way to do this? Thanks.
Update:
After trying HtmlUnit and other headless-browser libraries, it doesn't seem to work with anything "headless". Another thing I recently found out about this page is that all the HTML is in some weird format... it's all inside a script tag. Here is a sample:
"?ui\x3d2\x26view\x3dss\x26mset\x3dmain\x26ver\x3d-68igm85d1771\x26am\x3d!Zsl-0RZ-XLv0BO3aNKsL0sgMg3nH10t5WrPgJSU8CYS-KNWlyrLmiW3HvC5ykER_n_5dDw\x26fri"],"http://example.com/?ctx\x3d%67mail\x26hl\x3den",,0,"Gmail","Gmail",[["us","c130f0854ca2c2bb",[["n"],["m","New features!"],["u"],["k","0"],["p","1000:500000,10,200000,5,100000,3,75000,2,0,1"],["h","https://survey.googleratings.com/wix/p1679258.aspx?l\x3d1033"],["at","query,5,contacts,5,adv,5,cf,5,default,20"],["v","https://www.youtube.com/embed/Ra8HG6MkOXY?showinfo\x3d0"],
When I inspect the button, the HTML code I posted above for it comes up, but it does not appear when doing View Source. Basically, what I am going to need to do is use some sort of GUI, have the user navigate to the link, and then have the program fill out the info. Does anyone know how I can do this? Thanks.
Have a look at the 5 Minute Getting Started Guide for Selenium: http://code.google.com/p/selenium/wiki/GettingStarted
On the login page, look at the form's HTML to see the URL it posts to and the URL parameters. Then request that URL with the same parameters filled in with the correct info, and make sure to save all the cookie headers to send to the second page. Then use an HTML parser to find your link. There are several HTML parsers available on SourceForge, and you could even try Java's built-in XML parsers, though if the site has even a tiny HTML mistake they will glitch.
EDIT: I didn't notice that it is not a normal link. In that case you will need to look at the site's JavaScript to see where the link leads. If the link requires JavaScript to run, it gets more complicated. Java is not able to execute browser JavaScript by itself, but I found a library called DJ Native Swing, which includes a web-browser class you can add to JFrames. It uses your native browser to render and to run JavaScript.
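A rough sketch of the request-level approach described above, using only the JDK's HttpURLConnection (the URLs and form field names are made up; substitute the ones from the site's actual login form):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.List;
import java.util.Map;

public class LoginWithCookies {
    public static void main(String[] args) throws Exception {
        // 1. POST the login form (hypothetical URL and field names).
        URL loginUrl = new URL("https://example.com/login");
        HttpURLConnection login = (HttpURLConnection) loginUrl.openConnection();
        login.setRequestMethod("POST");
        login.setDoOutput(true);
        String body = "username=" + URLEncoder.encode("myUser", "UTF-8")
                + "&password=" + URLEncoder.encode("myPass", "UTF-8");
        try (OutputStream out = login.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }

        // 2. Collect the session cookies from the response headers.
        StringBuilder cookies = new StringBuilder();
        Map<String, List<String>> headers = login.getHeaderFields();
        List<String> setCookies = headers.get("Set-Cookie");
        if (setCookies != null) {
            for (String cookie : setCookies) {
                if (cookies.length() > 0) {
                    cookies.append("; ");
                }
                cookies.append(cookie.split(";", 2)[0]); // keep only name=value
            }
        }

        // 3. Request the protected page, sending the cookies back.
        HttpURLConnection page = (HttpURLConnection)
                new URL("https://example.com/protected-form").openConnection();
        page.setRequestProperty("Cookie", cookies.toString());
        System.out.println("Second request status: " + page.getResponseCode());
        // The response body could now be fed to an HTML parser to find the link.
    }
}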
This should be possible in Selenium as others have noted.
I have used Selenium to log in, then crawl a site and discover every permutation of values for every form on the site (30+ forms). These values are later used to fill and submit the form with a specific permutation of values. The site was very JS/jQuery heavy, and I used Selenium's built-in support for a JavaScript executor, CSS selectors, and XPath to accomplish this.
I implemented HtmlUnit and HttpUnit as faster alternatives, but found they were not as reliable as Selenium given the JS semantics of the site I was crawling.
It's hard to give you code for this because your Selenium implementation will be quite page-specific, and I can't look at the page you're coding against to figure out what's going on with that button script junk. However, I have included some possibly relevant Selenium (Java) snippets:
WebElement element = driver.findElement(By.id(value)); // find an element on the page
List<WebElement> buttons = parent.findElements(By.xpath("./tr/td/button")); // find child elements
button.click();
element.submit(); // submit the enclosing form
element.sendKeys(text); // enter text in an input
String elementText = (String) ((JavascriptExecutor) driver).executeScript("return arguments[0].innerText || arguments[0].textContent", element); // interact with a Selenium element via JS
If you are coding similar functions on different pages, then PageObjects behind interfaces can help.
The link Anew posted is a good starting point and good ol' StackOverflow has answers to just about any Selenium problem ever.
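As a rough illustration of the Page Object idea mentioned above (the page class and its locators are hypothetical):
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A minimal Page Object: the locators and interactions for one page live in one class.
public class LoginPage {
    private final WebDriver driver;
    private final By userField = By.id("username");                        // hypothetical locator
    private final By passField = By.id("password");                        // hypothetical locator
    private final By loginButton = By.cssSelector("button[type='submit']"); // hypothetical locator

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void loginAs(String user, String pass) {
        driver.findElement(userField).sendKeys(user);
        driver.findElement(passField).sendKeys(pass);
        driver.findElement(loginButton).click();
    }
}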
Instead of trying to browse around programmatically, try executing the login request, save the cookies, and then set them on the next request to the form post.
HtmlUnit is pretty bad at processing JavaScript; its Rhino JS engine often produces errors (actually, no errors is very much the exception). I would advise using Selenium, which is basically a framework for driving browsers (Chrome- and Firefox-based).
For your question, the following code would do the job:
selenium.open(myurl);
selenium.click("id=:tv");
You then have to wait for the page to load
selenium.waitForPageToLoad(someTime);
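Note that this is the old Selenium RC API; in current WebDriver (Java) terms the rough equivalent would be something like:
// Rough WebDriver equivalent of the RC calls above (Selenium 3 style constructor).
driver.get(myurl);
new WebDriverWait(driver, 20)
        .until(ExpectedConditions.elementToBeClickable(By.id(":tv")))
        .click();
// Instead of waitForPageToLoad, wait explicitly for an element that only exists on the next page.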
I would recommend htmlunit any day. It's a great library.
First, check out their web page (http://htmlunit.sourceforge.net/) to get HtmlUnit up and running. Make sure you use the latest snapshot (2.12 at the time of writing).
Try these settings to ignore pretty much any obstacle:
WebClient webClient = new WebClient(BrowserVersion.FIREFOX_17);
webClient.getOptions().setRedirectEnabled(true);
webClient.getOptions().setCssEnabled(false);
webClient.getOptions().setThrowExceptionOnScriptError(false);
webClient.getOptions().setThrowExceptionOnFailingStatusCode(false);
webClient.getOptions().setUseInsecureSSL(true);
webClient.getOptions().setJavaScriptEnabled(true);
webClient.getCookieManager().setCookiesEnabled(true);
Then, when fetching your page, make sure you wait for background JavaScript before doing anything with the page, like posting a login form:
//Get Page
HtmlPage page1 = webClient.getPage("https://login-url/");
//Wait for background Javascript
webClient.waitForBackgroundJavaScript(10000);
//Get first form on page
HtmlForm form = page1.getForms().get(0);
//Get login input fields using input field name
HtmlTextInput userName = form.getInputByName("UserName");
HtmlPasswordInput password = form.getInputByName("Password");
//Set input values
userName.setValueAttribute("MyUserName");
password.setValueAttribute("MyPassword");
//Find the first button in form using name, id or xpath
HtmlElement button = (HtmlElement) form.getFirstByXPath("//button");
//Post by clicking the button; cast the result (the post-login page) to a new HtmlPage and repeat what you did with page1, or do something else :)
HtmlPage page2 = (HtmlPage) button.click();
//Profit
System.out.println(page2.asXml());
I hope this basic example will help you!
I am currently trying to map the highlighted line shown below in the screen grab.
I've looked at w3schools and searched here on SO, but I can't seem to get my code right. My Selenium script keeps erroring out with "cannot identify element".
Currently, I am doing something like:
selenium.click("xpath=//table[#id='resultTable']/tbody/tr[#class='level3']/td[#id='resultTable_0_0_1_ob']/span/a[#class='linkOnly']");
I have also tried this:
selenium.click("xpath=//table[#id='resultTable']/tbody[1]/tr[6]/td[2]/span[1]/a[1]");
and this:
selenium.click("xpath=//table[#id='resultTable']/tbody/tr[6]/td[2]/span/a");
Am I doing it right and just need to put in a delay, or am I doing this completely wrong?
EDIT:
Here is the code snippet as requested. Thanks for the pointer; I didn't notice that there were multiple classes with the same name! Hmm, but for some reason the other two XPaths that I wrote don't work.
In this code snippet, I expanded the table so you can better see how it is set up. Again, sorry for the image size, but Ctrl+scroll up should enlarge the picture.
I'd prefer using CSS selectors, as they work faster (I'm currently using WebDriver + Java), so the solution to your problem would be something like:
// Set the implicit wait before locating the element.
driver.manage().timeouts().implicitlyWait(3, TimeUnit.SECONDS);
String cssSelector = "tr[class='level3']>td[id='resultTable_0_0_1_ob']>span>a[class='linkOnly']";
driver.findElement(By.cssSelector(cssSelector)).click();
Since you have the id 'resultTable_0_0_1_ob', that id should help in finding a unique element on the page.
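For instance, assuming that id really is unique, you could anchor on it and drill down to the link inside the cell (a sketch):
WebElement cell = driver.findElement(By.id("resultTable_0_0_1_ob"));
cell.findElement(By.cssSelector("span > a.linkOnly")).click();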
A second way to solve your problem: if the framework your site is implemented on supports jQuery, you can easily use jQuery:
JavascriptExecutor js = (JavascriptExecutor) driver;
StringBuilder stringBuilder = new StringBuilder();
stringBuilder.append("var x = $(\"tr[class='level3']>td[id='resultTable_0_0_1_ob']>span>a[class='linkOnly']\");");
stringBuilder.append("x.click();");
js.executeScript(stringBuilder.toString());
And don't forget to verify your XPaths and CSS selectors in FirePath (a Firebug extension) in Firefox; see the picture below.
Hope this helps you.
xpath=//table[@id='resultTable']/tbody/tr[@class='level3']/td[@id='resultTable_0_0_1_ob']/span/a[@class='linkOnly']
The corresponding CSS is css=#resultTable > tbody > tr.level3 > #resultTable_0_0_1_ob > span > .linkOnly
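In WebDriver (Java) the css= prefix is dropped and the selector is passed directly, roughly like this:
driver.findElement(By.cssSelector(
        "#resultTable > tbody > tr.level3 > #resultTable_0_0_1_ob > span > .linkOnly")).click();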