Extract text from a popup using Selenium - java

Consider:
I need a way to get the text from a popup using Selenium/core Java, so that I can compare it with the expected data.
Is there a way to extract the text from the popup?
A screenshot is attached.

Start by inspecting the element that has the focus.
This is possible in all common browsers via the developer tools, or you can use the SeleniumIDE plugin for Firefox to get information about the page and build locators.
Often a framework (like Bootstrap) is used to ensure a consistent layout, so a CSS locator might look like:
WebElement popupBody = driver.findElement(By.cssSelector("div.modal-dialog div.content div.body"));
Note that in this case Bootstrap would not call the class 'popup' but 'modal-dialog'. The locator might furthermore vary depending on the inner structure. As mentioned, inspect the element (or share the HTML code so we can suggest a concrete locator).
This gives you a normal Selenium WebElement from which you can get the text:
// use the element
String text = popupBody.getText();
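A minimal sketch of the comparison step the question asks about, assuming JUnit (or any assertion library) and the locator suggested above; the expected string is a placeholder:

import static org.junit.Assert.assertEquals;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// 'driver' is an already initialised WebDriver; adjust locator and expected text to your page
WebElement popupBody = driver.findElement(By.cssSelector("div.modal-dialog div.content div.body"));
assertEquals("Your expected popup text here", popupBody.getText());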

Related

How to get all the texts displayed on a Web Page using Robot Framework?

I'm using Robot Framework to automate tests; it uses the Selenium2 Library and gives the opportunity to extend it with many libraries (Java, Python, AngularJS, etc.).
Here's my question.
Is there a way to get all the texts displayed on a page?
I can get any specific text by the element locator, but currently I need to write a function which gets all the texts displayed on the page.
Does anyone know a way? Or a hint how to get things going?
You can do that by getting the text content of the <body> tag:
${text}=    Get Text    //body
Log    ${text}    # a very long string, with newlines as delimiters between the different tags' texts
${text as list}=    Split To Lines    ${text}
Log    ${text as list}    # a list, each member being a different tag's text
Another way to do it (which does not work with Selenium, as explained below) is to go after each element with a locator like //body//*, producing webelements with Get Webelements.
But when you call Get Text on each produced webelement, it returns its own text plus the text of all its children, thus duplicating the data. That could be done in pure xpath/xslt (with text(), . and normalize-space()), but regretfully not through webdriver/selenium (it always expects a node as an argument).
The purpose of that detour from the answer was to present the outcome of two minutes of research :), and to get feedback from anyone who might have actually accomplished it with Get Text on each element of the page.
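For reference, the plain-Selenium Java equivalent of the body-text approach might look like this sketch (the helper method name is mine):

import java.util.Arrays;
import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// returns all visible text on the page, one entry per line,
// mirroring "Get Text //body" followed by "Split To Lines"
static List<String> allVisibleText(WebDriver driver) {
    String bodyText = driver.findElement(By.tagName("body")).getText();
    return Arrays.asList(bodyText.split("\\r?\\n"));
}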

Which is the best way to locate an element in selenium webdriver other than XPath?

The application I'm testing is developing fast, and new features keep being added, requiring changes to the XPaths used in the tests. So Selenium scripts that were successful before now fail because the XPaths have changed. Is there any reliable way to locate an element (one which will never change)? FYI, I thought of using ids, but my application does not have ids for each and every element, as it is not recommended to put ids in the code.
I feel the following is the order of preference for choosing a locator in Selenium (a short Java illustration follows the list):
1. id
2. class name
3. name
4. css
5. xpath
6. link text
7. partial link text
8. tag name
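For illustration, these strategies in Java look like the following; all locator values are made-up placeholders:

import org.openqa.selenium.By;

// assuming 'driver' is an initialised WebDriver; every value below is a placeholder
driver.findElement(By.id("loginButton"));
driver.findElement(By.className("btn-primary"));
driver.findElement(By.name("username"));
driver.findElement(By.cssSelector("form#login input[type='submit']"));
driver.findElement(By.xpath("//form[@id='login']//input[@type='submit']"));
driver.findElement(By.linkText("Forgot password?"));
driver.findElement(By.partialLinkText("Forgot"));
driver.findElement(By.tagName("button"));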
In case of a changing DOM structure you can try using functions like text() and contains(). The following link explains the basics of these functions:
http://www.guru99.com/using-contains-sbiling-ancestor-to-find-element-in-selenium.html
The following link can be referred to for writing reliable locators:
https://blog.mozilla.org/webqa/2013/09/26/writing-reliable-locators-for-selenium-and-webdriver-tests/
Hope this helps you.
If you cannot impose #id discipline on the interface that keeps changing, one alternative is to use CSS selectors.
Another alternative is to write more robust XPath:
Be smart about using the descendant-or-self axis (//):
Rather than /some/long/and/brittle/path/uniquepart, use //uniquepart or //uniquepart/further/path to bypass the parts that are likely to change.
Don't overspecify label matching.
Use case-insensitive contains(), and try to match the critical parts of labels that are likely to remain invariant across interface changes.
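As a concrete sketch in Java of the difference (the element and attribute names below are made up; the point is the shape of the locators, not the exact markup):

import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;

// assuming 'driver' is an initialised WebDriver

// brittle: breaks as soon as any ancestor in the chain changes
// WebElement save = driver.findElement(By.xpath("/html/body/div[2]/div/form/div[5]/button"));

// more robust: anchor on a unique, meaningful part of the DOM
WebElement save = driver.findElement(
        By.xpath("//form[@name='settings']//button[contains(., 'Save')]"));

// CSS alternative, matching a stable attribute instead of a position
WebElement cancel = driver.findElement(By.cssSelector("form[name='settings'] button.cancel"));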
One other way I can think of is to load your page elements into the DOM and use DOM element navigation. It is good practice to have ids on elements, though. If you have to go the XPath way, it is good practice to split the path, keeping the common part separate and adding the leaf elements as needed. In a way, a changed XPath causing a test to fail is a good indication that the test is catching changes.

Get Xpath (or element) when page has randomly generated ID's HTML Unit

So I am using HtmlUnit to click an item on a webpage. I usually use XPath to select my items, but this page gives every element a randomly generated ID and class. I usually use Google Chrome to get the XPath of an element, but it gives me something like this: //*[@id=":og"] where :og is the randomly generated ID. I know that sometimes Chrome gives me an XPath without any IDs or classes, like this: /html/body/table/tbody/tr[2]/td/table/tbody/tr[3]/td/form/table[2]/tbody/tr/td/input[2] Is it possible to get an XPath that does not rely on IDs or classes in a case like this?
Thanks.
In order to construct shorter XPaths, or alternative ones based on tags only, you can use plugins that let you do just that. Personally I favor the Selenium IDE in Firefox, but in Chrome you can use things like XPath Helper. There are others you can explore by searching the Chrome Web Store.
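A rough HtmlUnit sketch of what such a tag/attribute-based XPath could look like (the URL and the form/input details are placeholders for your page, and a reasonably recent HtmlUnit version is assumed):

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlInput;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

static void clickWithoutRelyingOnIds() throws Exception {
    try (WebClient client = new WebClient()) {
        HtmlPage page = client.getPage("http://example.com/your-page"); // placeholder URL
        // anchor on the tag structure and a stable attribute instead of the random id/class
        HtmlInput submit = page.getFirstByXPath("//form//input[@type='submit']");
        submit.click();
    }
}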

Parse javascript generated content using Java

http://support.xbox.com/en-us/contact-us uses javascript to create some lists. I want to be able to parse these lists for their text. So for the above page I want to return the following:
Billing and Subscriptions
Xbox 360
Xbox LIVE
Kinect
Apps
Games
I was trying to use JSoup for a while before noticing it was generated using javascript. I have no idea how to go about parsing a page for its javascript generated content.
Where do I begin?
You'll want to use an HTML+JavaScript library like Cobra. It'll parse the DOM elements in the HTML as well as apply any DOM changes caused by JavaScript.
You could always fetch the whole page and then split it into strings (on newlines, etc.), look for the string containing the information, then take the string you want and pull the pieces out of it. That is the dirty way of doing it; I'm not sure if there is a clean way.
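A very rough sketch of that "dirty" approach; the marker string is a placeholder and would need to match whatever actually surrounds the text on the page:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// download the raw page and scan it line by line for a known marker
public class CrudeScrape {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://support.xbox.com/en-us/contact-us");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (line.contains("NavigationElements")) { // placeholder marker
                    System.out.println(line); // pull the pieces you need out of this string
                }
            }
        }
    }
}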
I don't think that text is generated by javascript... If I disable javascript, those options can still be found inside the HTML at this location (a jQuery selector, just because it was easier to hand-write than figuring out the XPath without javascript enabled :)):
'div#ShellNavigationBar ul.NavigationElements li ul li a'
Regardless, in direct answer to your query, you'd have to evaluate the javascript within the scope of the document, which I expect would be rather complex in Java. You'd have more luck identifying the javascript file that generates the relevant content and parsing that directly.
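If the text really is in the static HTML, a JSoup sketch using that selector might look like this (untested against the live page, which may have changed since):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class XboxSupportCategories {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("http://support.xbox.com/en-us/contact-us").get();
        // same selector as above, translated directly into JSoup's CSS support
        for (Element link : doc.select("div#ShellNavigationBar ul.NavigationElements li ul li a")) {
            System.out.println(link.text()); // Billing and Subscriptions, Xbox 360, ...
        }
    }
}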

GWT id element is changing every time in selenium

selenium.click("gwt-uid-204"); // this is recorded from Selenium IDE
I am clicking a check box in my (GWT) Java application. The gwt-uid changes every time, and when the id changes my element is not found. The regular expression approach is not working for me and I am not sure what I am doing wrong. Thanks for your help.
selenium.click("gwt-uid-[0-9]");
I am using selenium 1.0.3, Java
Many GWT widgets come with ensureDebugId (a method on UIObject) that allows you to explicitly set ids on elements for testing and debugging purposes. You also need to inherit the module
<inherits name="com.google.gwt.user.Debug"/>
to make it work. The advantage of this is that you can remove the trace from the production deployment by removing the inherited module during the prod-mode compile, so there won't be any code changes needed to remove the unnecessary ids.
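A minimal sketch of how that could be wired up (the widget and id names are made up, and the application and test code are shown together for brevity; GWT renders debug ids with a "gwt-debug-" prefix in the DOM, which the test locator reflects):

import com.google.gwt.user.client.ui.CheckBox;
import com.google.gwt.user.client.ui.RootPanel;

// in the GWT application code
CheckBox termsCheckBox = new CheckBox("I agree");
termsCheckBox.ensureDebugId("termsCheckBox"); // rendered as id="gwt-debug-termsCheckBox"
RootPanel.get().add(termsCheckBox);

// in the Selenium 1.x test, the id is now stable
selenium.click("gwt-debug-termsCheckBox");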
You can do it in 3-4 ways.
Check this link:
3 ways of dealing with GWT dynamic element Ids
which talks about 3 different ways of assigning a static id to your GWT elements.
Also,
you can write a custom javascript method which fetches all the ids dynamically; you can then process those ids for your selenium actions.
There are two possible solutions. The first is to tell Selenium that you are using a regex by saying regex:gwt-uid-[0-9]. As you have it there, it is looking for an element whose name or id is that literal string.
The other solution is to turn on static ids, which I discuss in http://element34.ca/blog/google-web-toolkit-and-id.
-adam
Assuming you have dynamic IDs, as you have presented, first realize that Selenium's click method takes a locator argument. A simple approach is to specify a locator that finds an ID starting with your constant "gwt-uid-" prefix. You can use any of these locators as the argument to your click method, depending on your preference of technologies:
== XPath ==
//input[starts-with(@id, 'gwt-uid-')]
== CSS ==
css=input[id^='gwt-uid-']
== DOM ==
dom=for each (e in document.getElementsByTagName('input')) if (e.id && (e.id.substr(0, 'gwt-uid-'.length) === 'gwt-uid-')) e
Footnote 1: I have not used GWT, so my examples above assume that it still puts a check box in an <input> element; adjust as needed.
Footnote 2: Selenium does offer regular expression support, as Adam intimated, but there are two issues with it in this case: (1) the prefix is "regexp:" rather than "regex:". (2) Selenium's click method does not support the regexp prefix at all! (My empirical evidence suggests that locators do not use regular expressions in Selenium, only text matching arguments do.)
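For example, with the Selenium 1.x API the asker is already using, the XPath and CSS locators above can be passed straight to click (the element is still assumed to be an <input>):

// XPath locator: match any input whose id starts with the constant prefix
selenium.click("//input[starts-with(@id, 'gwt-uid-')]");

// or the CSS equivalent
selenium.click("css=input[id^='gwt-uid-']");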
You can also use the Firebug add-on to remove the GWT UID.
1. Right click where the GWT UID is and select "Inspect Element with Firebug".
2. Click on the code where the GWT UID is and, when the Firebug window appears, delete the id attribute.
3. After removing the id, right click one more time and copy the XPath.
4. Add an extra (/) and paste the XPath into the target.
This may also help.
