Selenium Execute JavaScript Exception in IE9 - java

I'm struggling with Selenium on Internet Explorer 9 when executing JavaScript. Certain scripts work, such as
return arguments[0].innerHTML or return arguments[0].children[0].innerHTML. But if I use executeScript("return arguments[0].childNodes.length;", getElement())
or
List<WebElement> children = executeScript("return arguments[0].children;", getElement()) it fails with a JavaScript error inside IE.
Can someone point me to the problem and explain why this is happening?

Related

I'm trying to get page source code using Selenium, but I got empty page

I'm trying to get a page's source code using Selenium; the code is standard boilerplate. It works for baidu.com and example.com, but when it comes to the URL I actually need, I get an empty page, and the source shows nothing but empty tags like the following code. Is there anything I'm missing?
I tried adding more options parameters, but it didn't seem to help.
WebDriver driver;
System.setProperty("webdriver.chrome.driver", "E:\\applications\\ChromeDriver\\chromedriver_win32 (2)\\chromedriver.exe");
// Instantiate a WebDriver object to launch the Chrome browser
driver = new ChromeDriver();
driver.manage().timeouts().implicitlyWait(2, TimeUnit.SECONDS);
driver.get("http://rd.huangpuqu.sh.cn/website/html/shprd/shprd_tpxw/List/list_0.htm");
String pageSource = driver.getPageSource();
String title = driver.getTitle();
System.out.println("==========="+title+"==============");
System.out.println(Jsoup.parse(pageSource));
I expect the parsed page source of the URL so that I can get the info I need, but I'm stuck here.
I could reproduce the issue with this website when using ChromeDriver. What I found is that there is JavaScript on the page that detects you are using ChromeDriver and blocks the request with an HTTP 400 error code.
Now, Firefox is working as expected with the following code:
FirefoxDriver driver = new FirefoxDriver();
driver.get("http://rd.huangpuqu.sh.cn/website/html/shprd/shprd_tpxw/List/list_0.htm");
Thread.sleep(5000);
String pageSource = driver.getPageSource();
String title = driver.getTitle();
System.out.println("==========="+title+"==============");
System.out.println(Jsoup.parse(pageSource));
driver.quit();
I used just a sleep for 5 seconds, which worked. The best practice is to wait for a specific element on your page; check this for reference - How to wait until an element is present in Selenium?
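A fixed sleep works but wastes time and can still be flaky on slow runs; WebDriverWait is essentially a polling loop with a deadline. A rough, framework-free sketch of that idea in plain Java (my own illustration, not Selenium's actual implementation):

```java
import java.time.Duration;
import java.util.function.BooleanSupplier;

public class Poller {
    /**
     * Polls the condition every `interval` until it returns true or the
     * deadline passes. Returns true if the condition was met in time.
     */
    public static boolean waitUntil(BooleanSupplier condition,
                                    Duration timeout, Duration interval)
            throws InterruptedException {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (!condition.getAsBoolean()) {
            if (System.nanoTime() >= deadline) {
                return false;
            }
            Thread.sleep(interval.toMillis());
        }
        return true;
    }
}
```

With Selenium, the condition would wrap something like a check that `driver.findElements(by)` is non-empty; WebDriverWait with ExpectedConditions does the same thing with better error reporting.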
Firefox browser version: 67.0.1, geckodriver: 0.24.0, Selenium version: 3.141.59
First of all, it's almost certainly a compatibility problem. Selenium has been through a lot of development, so there are plenty of version-compatibility issues.
Here is how I finally dealt with it.
I chose the Firefox browser, version 67.0 (64-bit), because Chrome responds with a blank result as @Adi Ohana mentioned.
I used Selenium 3.x. To do so, I added the following to pom.xml:
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-server</artifactId>
<version>3.141.59</version> <!-- this version context matters -->
</dependency>
Note that it's the selenium-server artifact you need to add to your pom.xml; otherwise you may get unexpected errors.
With that done, you need the proper driver. The driver for Firefox is called geckodriver; I use v0.24.0. It's a .exe file rather than a .jar, so you can point to it from Java code like this:
System.setProperty("webdriver.gecko.driver", "E:\\applications\\GeckoDriver-v0.24.0-win64\\geckodriver.exe"); // the second parameter is the location of geckodriver.exe on your machine
Then send a request for the URL. Since the body content is loaded by a separate AJAX request, you need to wait a couple of seconds for Selenium to pick it up:
Thread.sleep(5000); // the easiest way, though maybe not the best
Conclusion: I get the original source code as expected, but I haven't worked out why ChromeDriver can't work as expected. I may leave that for further digging.
To sum things up:
Firefox 67.0
geckodriver v0.24.0 (specified in Java code)
Selenium 3.x (added in pom.xml)
Thanks everyone, this has been really helpful. I like this community.
PS: I'm new to Stack Overflow and still learning the ropes...

Selenium WebElement times out on most commands

I've run into a problem that has baffled me while attempting to use Selenium 3.4 on jre 1.8 in JUnit. After successfully grabbing a WebElement, attempting to perform the click(), isDisplayed(), sendKeys(), and clear() functions all cause the driver connection to timeout before they can complete. I've wound up creating the following code:
@Test
public void canLogIn(){
WebDriver driver = new HtmlUnitDriver();
driver.get("http://"+ip+"/login/loginpage.html");
WebElement username = driver.findElement(By.id("username_div"));
System.out.println("Username to string: "+username.toString());
/*Thread.sleep(6000);*/
if(!username.isEnabled()) fail();
if(!username.isDisplayed()) fail();
username.click();
username.clear();
username.sendKeys("manager");
...
So far, the code has timed out on username.isDisplayed(), username.click(), username.clear(), and username.sendKeys() when the other calls were commented out. However, username.toString() works and shows the correct element, and the code has yet to hang on username.isEnabled(). Thread.sleep() was used to test whether allowing the page to load would eliminate the issue, but to no avail. I have also tried executing these commands using Selenium's JavascriptExecutor, to no avail. I am well and truly stumped at this point, and any assistance you could give me would be greatly appreciated.
Is the username element visible? Perhaps you can try adding this after opening the page:
WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("username_div")));
What exception are you getting if any?

How do I check if the selenium selector was successful?

I am currently developing automated UI tests with Appium for a website.
I run my tests on many devices on TestObject, and there are some problems I'm trying to solve.
My sample code is this:
WebElement lexiconCollapsible = mDriver.findElement(By.xpath("//*[@id='1014']/a"));
assertNotNull(lexiconCollapsible);
ScrollHelper.scrollToElement(mDriver,lexiconCollapsible);
Thread.sleep(1000);
lexiconCollapsible.click();
This is working for many devices but not for all of them.
On some I get the following error code:
org.openqa.selenium.InvalidSelectorException: Argument was an invalid selector (e.g. XPath/CSS). (WARNING: The server did not provide any stacktrace information)
The exception is thrown at the position where I want to click the element, so the object is not null.
So my question is:
Has anybody found a solution to check whether the device is capable of finding the object? Is there something like an isObjectFound method for this?
I tried with css selector, id, etc. too but the results are the same.
From Selenium Docs,
exception selenium.common.exceptions.InvalidSelectorException(msg=None, screen=None, stacktrace=None)[source]
Thrown when the selector which is used to find an element does not return a WebElement. Currently this only happens when the selector is an xpath expression and it is either syntactically invalid (i.e. it is not a xpath expression) or the expression does not select WebElements (e.g. “count(//input)”).
So it looks like your selector is incorrect, and since this only happens with XPath, you should try a CSS selector.
Try,
mDriver.findElement(By.cssSelector("[id='1014'] a")).click(); // click() returns void, so don't assign it; a CSS ID selector can't start with a digit, hence the attribute form
Regarding your isObjectFound method, it looks like the WebElement is found when you did findElement, you are getting an exception only on click(). I suggest you switch to CSS selectors to avoid this exception.
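One caveat worth spelling out: a literal CSS ID selector cannot start with a digit, so `#1014 a` is itself invalid CSS and can trigger the same InvalidSelectorException. An attribute selector (`[id='1014'] a`) sidesteps this, or the leading digit can be hex-escaped. A small illustrative helper (my own sketch, not a Selenium API):

```java
public class CssIds {
    /**
     * Builds an ID selector for the given id, hex-escaping a leading digit
     * as the CSS syntax rules require (e.g. "1014" -> "#\31 014").
     */
    public static String idSelector(String id) {
        char first = id.charAt(0);
        if (Character.isDigit(first)) {
            // A CSS identifier cannot begin with a digit; escape the first
            // character as its hex code point followed by a space.
            return "#\\" + Integer.toHexString(first) + " " + id.substring(1);
        }
        return "#" + id;
    }
}
```

In practice, `By.cssSelector("[id='1014'] a")` is usually the more readable choice.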
Answer provided by JeffC.
It was a problem with the timeout and wait time in between the functions.
Utility.implicitlyWaitForElementPresent(mDriver, By.xpath("//*[@id='1014']/a"));
WebElement lexiconCollapsible = mDriver.findElement(By.xpath("//*[@id='1014']/a"));
assertNotNull(lexiconCollapsible);
ScrollHelper.scrollToElement(mDriver,lexiconCollapsible);
Thread.sleep(1000);
lexiconCollapsible.click();
It's working far better with xpath.
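The Utility.implicitlyWaitForElementPresent helper above is the asker's own class and isn't shown; the general shape of such a helper is a retry loop that swallows the lookup exception until a deadline. A generic, framework-free sketch of that pattern (a hypothetical helper, not the actual Utility class):

```java
import java.time.Duration;
import java.util.concurrent.Callable;

public class Retry {
    /**
     * Retries the action until it succeeds (returns without throwing) or
     * the deadline passes; rethrows the last failure on timeout.
     */
    public static <T> T until(Callable<T> action, Duration timeout,
                              Duration interval) throws Exception {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (true) {
            try {
                return action.call();
            } catch (Exception e) {
                if (System.nanoTime() >= deadline) {
                    throw e;
                }
            }
            Thread.sleep(interval.toMillis());
        }
    }
}
```

With Selenium, the Callable would wrap the `mDriver.findElement(...)` call, turning a transient NoSuchElementException into a bounded wait.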

:first / :first-child doesn't seem to work with Selenium 2

While migrating from Selenium 1 to Selenium 2 I am running into a problem.
I have the structural equivalent of the following:
<ul id="documentType">
<li>first</li>
<li>second</li>
<li>third</li>
</ul>
Previously I would in Selenium 1 use the following css selector to find the first anchor link:
#documentType li:first-child a
This would work great; however, when I switch to Selenium 2 and try to use the equivalent, I get element not found. The following does work, but is less precise than I would like.
#documentType li a
I have tried but could not get to work the following:
#documentType li:first a
For greater detail I'm using HtmlUnitDriver with the following code:
driver.findElementByCssSelector("#documentType li a");
Any help on getting the equivalent of the original selector working would be greatly appreciated!
I be confused :)
EDIT: Phill Sacre brought up a good point: I'm directly using HtmlUnitDriver, which could be the source of the problem since it's a pure-Java implementation. I do this specifically to deal with a nasty Ajax problem: how to know when Ajax is done running. You can do this with the following code:
protected void waitForAjaxToComplete() {
    long result = (Long) driver.executeScript("return jQuery.active;");
    while (result != 0) {
        result = (Long) driver.executeScript("return jQuery.active;");
    }
}
This is obviously advantageous over waiting for an element to appear, which can be very inaccurate. I wish the WebDriver interface itself exposed the executeScript method, which would resolve this problem.
Further, I noted that by default HtmlUnitDriver uses a Java-based implementation to parse the supplied CSS selector, and I'm guessing this is the source of the problem. The parser is com.steadystate.css.parser.SACParserCSS21, which may not properly support the :first and :first-child qualifiers.
What seems to make this OK is that HtmlUnitDriver appears to return the first matching element by default. Its sister method findElementsByCssSelector returns an ordered list.
As a result, while this appears to be a bug, I may have answered my own question by learning how HtmlUnitDriver operates.
Which browser were you doing your Selenium 1 testing on? Looking in the Selenium documentation, the HtmlUnit driver is a pure-Java solution (i.e. it doesn't run in a browser, obviously).
Now I think there are some differences in selectors between browsers, so if you were using the Firefox browser before it may be worth using the FirefoxDriver to run your Selenium 2 tests?
Similar to Phill's answer: HtmlUnit parses the page differently than Firefox does. I would start debugging by running the same test with the Firefox driver and seeing if it passes there. If it does, the next step is to get the HTML from the HtmlUnit driver (the command is driver.getPageSource()) and compare it with the HTML you see through a real browser (Firefox, Chrome, ...). Sometimes you'll find tags that were inserted by the real browser's parser but not by HtmlUnit's, or vice versa. You can then either correct your selector, use a different driver, or go tell the developers to fix their HTML, because that's what usually causes these problems :-)
I had the same problem with this selector (using Spock/Geb). This has been resolved in the new version of Geb which is 0.9.2 (which has the changes to the HTML unit driver too). Now $('tr:first-child a') works fine with HTMLUnitDriver
I can provide the following CSS selectors:
1. css=#documentType > li > a -- First Link
2. css=#documentType > li+li > a -- Second Link
3. css=#documentType > li+li+li > a -- Third Link
Alternatively, you can try:
1.css=#documentType > li:nth-child(2) > a -- Second Link
2.css=#documentType > li:nth-child(3) > a -- Third Link

How to execute random Javascript code on a web page?

I'm using HtmlUnit to test some pages, and I'd like to know how I can execute some JavaScript code in the context of the current page. I'm aware that the docs say I'd better emulate the behavior of a user on a page, but it isn't working this way :( (I have a div which has an onclick property; I call its click method but nothing happens). So I've done some googling and tried:
JavaScriptEngine jse = webClient.getJavaScriptEngine();
jse.execute(page, what here?);
Seems like I have to instantiate the script first, but I've found no info on how to do it (right). Could someone share a code snippet showing how to make webclient instance execute the needed code?
You need to call executeJavaScript() on the page, not on webClient.
Example:
WebClient webClient = new WebClient(BrowserVersion.FIREFOX_3);
webClient.setJavaScriptEnabled(true);
HtmlPage page = webClient.getPage("http://www.google.com/ncr");
ScriptResult scriptResult = page.executeJavaScript("document.title");
System.out.println(scriptResult.getJavaScriptResult());
prints "Google". (I'm sure you'll have some more exciting code to put in there.)
I don't know the JavaScriptEngine you're quoting and maybe it's not the answer you want, but this sounds like a perfect case for Selenium IDE.
Selenium IDE is a Firefox add-on that records clicks, typing, and other actions to make a test, which you can play back in the browser.
In TestPlan using the HTMLUnit backend the google example is:
GotoURL http://www.google.com/ncr
set %Title% as evalJavaScript document.title
Notice %Title%
