I have a print button in my application. This print button opens a PDF in a new window. At the top of this window is a Print icon that can be clicked to open Chrome's print preview, where one can select the "Save as PDF" option to save the PDF file.
I would like to test this flow in selenium.
--- I click the Print button on my webpage. This opens a new Chrome window with the PDF
driver.findElement(By.cssSelector("ID")).click();
--- I switch to the newly opened Chrome window
driver.switchTo().window(driver.getWindowHandles().toArray()[1].toString());
I can see in the logs that Selenium is indeed switching to the new window.
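Incidentally, getWindowHandles() returns a Set, so indexing into its array form is not guaranteed to give the newly opened window. Here is a more defensive sketch of the same switch (it assumes exactly one extra window is opened; the 500 ms polling interval is a guess):

// Hedged sketch: remember the original handle, poll briefly until a second
// handle shows up, then switch to whichever handle is not the original.
String originalWindow = driver.getWindowHandle();
for (int i = 0; i < 10 && driver.getWindowHandles().size() < 2; i++) {
    Thread.sleep(500); // crude poll; a WebDriverWait would also work here
}
for (String handle : driver.getWindowHandles()) {
    if (!handle.equals(originalWindow)) {
        driver.switchTo().window(handle);
        break;
    }
}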
--- I try to get an element in the newly opened Chrome window
driver.findElement(By.cssSelector("viewer-pdf-toolbar"))
This is where the issue is. Selenium is unable to interact with this newly opened window and can't find this element. When I inspect the newly opened window, I see the following DOM structure -
<body>
  <viewer-pdf-toolbar id="toolbar"></viewer-pdf-toolbar>
  <div id="sizer" style="width: 735px; height: 1092px;"></div>
  <viewer-password-screen id="password-screen"></viewer-password-screen>
  ...
</body>
The element I'm looking for does exist in the DOM and I can access it via the Console, but I can't access it using Selenium. You can view this DOM by opening any PDF file in the Chrome browser and inspecting the window.
Any ideas why this is happening?
Note: When the new window loads, it shows "Printing, Please wait" for a second or two and then the PDF loads. Not sure if this information is relevant to the problem here.
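Given that note, it may be worth ruling out a plain timing issue first. A minimal explicit-wait sketch (it assumes the toolbar ever becomes reachable from this window handle, which is exactly what is in question, and uses WebDriverWait/ExpectedConditions from org.openqa.selenium.support.ui):

// Wait up to 10 seconds for the viewer toolbar before giving up, in case the
// "Printing, Please wait" page is still showing when findElement runs.
new WebDriverWait(driver, 10).until(
        ExpectedConditions.presenceOfElementLocated(By.cssSelector("viewer-pdf-toolbar")));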
Additional Information -
In debug mode, after the driver has switched to the new PDF window, I evaluated driver.getCurrentUrl() - the URL is the correct one, as displayed in the address bar of the preview window. This means that my driver is on the correct window.
But when I execute driver.getPageSource() I get the following output:
<html>
<head>
</head>
<body style="height: 100%; width: 100%; overflow: hidden; margin:0px; background-color: rgb(82, 86, 89);">
<embed name="XXXXXXD474B71424EC89403FB4FA75CF" style="position:absolute; left: 0; top: 0;" width="100%" height="100%" src="about:blank" type="application/pdf" internalid="XXXXXXD474B71424EC89403FB4FA75CF"></body>
</html>
So the driver sees this as the page source, whereas when I inspect the same window I see a different DOM. Why is this happening? Is Chrome replacing the <embed> tag with the final DOM? How can Selenium be made aware of this change?
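For what it's worth, one quick diagnostic is to compare what the driver reports with what the page's own JavaScript can see. A rough sketch (both may still show only the outer <embed> document if the PDF viewer runs in a separate plugin/extension context that this window handle does not expose):

// Print what WebDriver is attached to versus what in-page JS can see.
System.out.println("URL:    " + driver.getCurrentUrl());
System.out.println("Source: " + driver.getPageSource());
Object hasToolbar = ((JavascriptExecutor) driver)
        .executeScript("return document.querySelector('viewer-pdf-toolbar') != null;");
System.out.println("Toolbar visible to in-page JS: " + hasToolbar);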
Please help!
Related
I want to show a WebPage on another WebPage using InlineFrame.
I initialized it like this:
Wicket/ Java:
InlineFrame choosenTestcaseInlineFrame =
        new InlineFrame("inlineFrame", AuthenticationPage.class);

public WhatToDoPage() {
    Form whatToDoForm = configureWhatToDoForm();
    add(whatToDoForm);
    add(choosenTestcaseInlineFrame.setOutputMarkupId(true));
    add(choosenTestcaseInlineFrame);
}
HTML:
<iframe wicket:id="inlineFrame" style="margin-left: 200px; height: 500px; width: 1000px"></iframe>
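For reference, a minimal self-contained sketch of what the page class might look like, assuming the names from the snippet above (WhatToDoPage, AuthenticationPage, configureWhatToDoForm() all come from the asker's project); note the frame component is added to the page only once here:

import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.form.Form;
import org.apache.wicket.markup.html.link.InlineFrame;

public class WhatToDoPage extends WebPage {

    public WhatToDoPage() {
        // configureWhatToDoForm() is the asker's own method
        Form<?> whatToDoForm = configureWhatToDoForm();
        add(whatToDoForm);

        InlineFrame choosenTestcaseInlineFrame =
                new InlineFrame("inlineFrame", AuthenticationPage.class);
        choosenTestcaseInlineFrame.setOutputMarkupId(true);
        add(choosenTestcaseInlineFrame); // added to the page exactly once
    }
}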
The problem is that the InlineFrame seems to refuse to show the content.
Here is a screenshot:
I don't know if Chrome has a specific option to allow iframes to be displayed, but you might find some clues to solve the problem here:
iframe refuses to display
Below is the HTML:
<div class="textarea-wrapper">
  <div class="textarea" tabindex="0" role="textbox" aria-label="Message text input box"
       placeholder="Type a message..." spellcheck="true" data-role="message-text-input"
       ng-keydown="preventReturns($event)" ng-click="handleOneToOneMessage($event)"
       ng-keyup="removeWhitespace();handleOneToOneMessage($event)" ng-blur="onBlur()"
       ng-paste="parsePlainText()" ondrop="return false;" contenteditable="true"/>
</div>
The command I am using to send keys to that textarea:
mesg_box = browser.find_elements_by_css_selector(".textarea")
time.sleep(3)
browser.execute_script("arguments[0].click();", mesg_box[0])
mesg_box[0].send_keys("hiii")
Here mesg_box returns 2 items, and they are found fine, but I am not able to click on the input field, maybe because there is only a div and not an actual input element.
The test case passes fine if the browser is in focus.
The above code works fine for Chrome (even when minimized) but fails for Firefox if Firefox is not in focus, i.e. if Firefox is minimized.
Please let me know if there is any solution to send keys to the text area even when Firefox is minimized.
Following is the HTML of the page. How do I get the XPath, or is there any other way to automate this using Java? In fact, we should click on this "Continue" button.
<regform-button>
<button ng-disabled="activityIndicator" ng-click="validate()" type="button">
<div template="api-loader" ng-http-loader="">
<div class="http-loader__wrapper ng-hide" ng-show="showLoader" ng-include="template">
<span class="api-loader"></span>
</div>
</div>
<ng-transclude></ng-transclude>
</button>
</regform-button>
If you are using Google Chrome, right click on the button and select Inspect in the popup. The HTML will open in a Developer Tools pane. Right click on the element in the Developer Tools pane, hover over Copy, and select Copy XPath.
Here is the XPath to the form on the URL.
//*[@id="main-frame"]/div[1]/div[2]/div/div[1]/div[1]/form/regform-steps/ng-transclude/regform-step[1]/ng-form/ng-transclude/regform-button/button
WebDriver driver = new ChromeDriver();
driver.get("https://uk.match.com/unlogged/landing/2016/06/02/hpv-belowthefold-3steps-geo-psc-bowling?klid=6740");
//fill in fields
WebElement element = driver.findElement(By.xpath("//*[@id=\"main-frame\"]/div[1]/div[2]/div/div[1]/div[1]/form/regform-steps/ng-transclude/regform-step[1]/ng-form/ng-transclude/regform-button/button"));
element.click();
driver.findElement(By.xpath("//regform-button/button")).click();
Try opening the console in Google Chrome and typing the following:
document.evaluate("//regform-button/button", document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue.click()
It works, provided you've specified the required fields. Also be sure to wait a while (2 seconds to be safe) after filling in the required fields, as there might be an additional load after setting them.
Edit: The code is JavaScript. I just took the XPath given by the previous answers and used it to locate the element through JS. I'm saying that "//regform-button/button" should work.
Once you have selected the required values from the drop-downs appearing before the Continue button, you can click the Continue button using the following statement:
driver.findElement(By.xpath("//span[text()='Continue']")).click();
The following XPath worked for me, but I don't know why it is different for each browser.
For Firefox:
driver.findElement(By.xpath("//*[@id='main-frame']/div[1]/div[2]/div/div[1]/div[1]/form/regform-steps/ng-transclude/regform-step[1]/ng-form/ng-transclude/regform-button/button")).click();
For Chrome:
driver.findElement(By.xpath("//regform-button/button")).click();
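One possible way to avoid the per-browser difference is to skip the long absolute path entirely and wait for the short relative XPath to become clickable. A hedged sketch (using WebDriverWait/ExpectedConditions from org.openqa.selenium.support.ui; the 10-second timeout is a guess):

// Wait for the Continue button via the short relative XPath, which does not
// depend on the exact div nesting that differs between browser-generated paths.
WebDriverWait wait = new WebDriverWait(driver, 10);
WebElement continueButton = wait.until(
        ExpectedConditions.elementToBeClickable(By.xpath("//regform-button/button")));
continueButton.click();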
Taking this Instagram link as an example, https://www.instagram.com/janelaverdegarden/, I want to load all the images by scrolling to the bottom of the page. To do that manually, I have to click the LOAD MORE button, scroll down to the bottom of the page, wait for the page to load, scroll up a bit, then scroll down to the bottom and wait for the page to load again, until no more images are left.
However, I want to load the page automatically using Selenium. Initially, I thought that once the page was fully loaded with all the images, the rbSensor element would disappear, allowing me to detect that the page had reached the bottom with no more images to load. However, it remains there. So is there any other way to check whether the page is fully loaded with all the images?
<div data-reactid=".0.1.0.1:$mostRecentSection/=10">
<div class="_nljxa" data-reactid=".0.1.0.1:$mostRecentSection/=10.0">
<div class="ResponsiveBlock" data-reactid=".0.1.0.1:$mostRecentSection/=10.1">
<div class="rbSensor" data-reactid=".0.1.0.1:$mostRecentSection/=10.1.$sensor">
<iframe class="rbSensorFrame" data-reactid=".0.1.0.1:$mostRecentSection/=10.1.$sensor.0"/>
</div>
</div>
</div>
Code: since I use rbSensor to load the page, even when the page is fully loaded with all the images, the element keeps getting clicked because it is still in the HTML.
while (driver.findElements(By.className("rbSensor")).size() > 0) {
    jse.executeScript("window.scrollTo(0,-300)"); // scroll up a bit
    // click on the button if found
    driver.findElement(By.className("rbSensor")).click();
    Thread.sleep(2000); // pause for 2 seconds
}
Using the answer below, from "Page scroll up or down in Selenium WebDriver (Selenium 2) using java", is not what I want and will not work in this case.
jse.executeScript("window.scrollBy(0,250)", "");
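An alternative way to tell that all images are loaded, instead of relying on rbSensor, is to compare document.body.scrollHeight before and after each scroll and stop once it no longer grows. A rough sketch (jse is the question's JavascriptExecutor, and the 2-second pause is a guess):

// Keep scrolling to the bottom; when the page height no longer grows after a
// scroll-and-wait cycle, assume no more images are being appended.
long lastHeight = ((Number) jse.executeScript("return document.body.scrollHeight;")).longValue();
while (true) {
    jse.executeScript("window.scrollTo(0, document.body.scrollHeight);");
    Thread.sleep(2000); // give the next batch of images time to load
    long newHeight = ((Number) jse.executeScript("return document.body.scrollHeight;")).longValue();
    if (newHeight == lastHeight) {
        break; // height stopped changing, so the page is fully loaded
    }
    lastHeight = newHeight;
}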
Try using a different class name, className("_oidfu"), which has an href attribute.
A scrolling answer can be found here: Page scroll up or down in Selenium WebDriver (Selenium 2) using java
I have a dropdown control in the web application with the following HTML code:
<input class="dynamic-list-widget-input ui-widget ui-widget-content" title="" autocomplete="off" aria-invalid="false">
I've tried accessing it using Selenium WebDriver through XPath (relative/absolute) and cssSelector, but to no avail; I got the following exception:
org.openqa.selenium.NoSuchElementException: Unable to locate element: {"method":"xpath","selector":".//*[#id='With-Attachment']/div[2]/div[2]/div[5]/div[1]/span/input"}
Command duration or timeout: 10.12 seconds
It looks very likely that you have the wrong XPath for that input element.
If you are using Chrome, you can do as follows:
Right click on the element in the web page.
Click "Inspect element" in the pop-up menu.
Right click the highlighted HTML code on the right, then click "Copy XPath".
You will get the XPath of that element; compare it with the one in your code.
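If the copied XPath still fails, a hedged alternative is to target the input by its own classes from the posted HTML and give it an explicit wait in case the widget renders late (class names are taken from the question; WebDriverWait/ExpectedConditions as above, and the typed value is hypothetical):

// Locate the dropdown's input by its CSS classes rather than a brittle
// absolute XPath, waiting up to 10 seconds for it to appear in the DOM.
WebDriverWait wait = new WebDriverWait(driver, 10);
WebElement listInput = wait.until(ExpectedConditions.presenceOfElementLocated(
        By.cssSelector("input.dynamic-list-widget-input.ui-widget-content")));
listInput.sendKeys("text to filter the list"); // hypothetical value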