I am working on a project for a client who is upgrading to Siebel Open UI. With that upgrade, they also want to start implementing Selenium. The problem we are facing, or will face once we implement it, is that with each build the IDs/names of HTML elements in Siebel change. Because we are talking about a lot of views and applets, it is not practical to change the code manually each time.
What is a good solution for this problem? One solution that was offered is a correlation table where we keep track of changes to the IDs.
XPath is also not a good option in this case because of the complicated structure of the views and applets.
I would suggest that you look into CSS selectors. They are faster and less brittle than XPath. For IDs/names that are dynamic, there is typically at least some portion of the ID that is static.
For instance,
<a id="somestatictext_12345">...
where "12345" is some dynamically generated number. In this case you can use a CSS selector like
driver.findElement(By.cssSelector("[id^='somestatictext']"));
Examples
"[id^='somestatictext']" - ID begins with "somestatictext"
"[id$='somestatictext']" - ID ends with "somestatictext"
"[id*='somestatictext']" - ID contains "somestatictext"
For more info, take a look at this CSS Selector reference.
Related
I am currently working with Robot Framework and using Selenium2Library to test a web application. I'm working on a form and dealing with dynamic elements: an editable text area and a drop-down list.
I really hope someone would be able to guide me on how I can do this. An example of what I am doing is,
[Example element code]
input id="textfield-1237-inputEl" class="x-form-field x-form-text x-form-text-default x-form-focus x-field-form-focus x-field-default-form-focus"
data-ref="inputEl" size="1" name="textfield-1237-inputEl"
maxlength="200" role="textbox" aria-hidden="false" aria-disabled="false"
aria-readonly="false" aria-invalid="false" aria-required="false" autocomplete="off" data-componentid="textfield-1237" type="text"
Any information on this would be much appreciated. Thanks!
There are many types of identifiers available. If the values are dynamic, you can use an XPath locator to find the element; an ID can only be used for static values.
In the above case you can use an XPath such as
xpath=.//*[contains(@type,'text')]
because the type attribute is static. It won't change.
When trying to handle dynamic IDs, and elements which don't have easy unique identifiers, the best way to work around this is to use XPath.
An XPath is basically the location of the element within the HTML. It is a good way to get around the problem of not having IDs readily available (my application has no IDs anywhere I can use, so I have no choice but to use XPaths).
XPaths are really powerful if used correctly. If not, they are really brittle and can be a nightmare to maintain. I'll give you an example of potential XPaths you may have to use:
Select From List By Label xpath=(//select)[2] DropDownItem1
You said that you have a drop-down. Here is a potential "look-alike" of what you would use. The XPath here is basically saying: find the second drop-down that appears anywhere on the entire HTML page.
XPaths will take a while to get your head around, especially if you have had the luxury of using IDs. The tools I use to locate and debug XPaths are:
Firebug
Selenium IDE
I mainly use Selenium IDE now, as it is a nice tool which basically lets you select an element within the HTML and it will give you its ID, CSS path, XPath, DOM locator, etc. Not only that, when you come to write more complex XPaths, there is a "Find" tool which shows you visually where your XPath is pointing to (or isn't, if it's wrong).
Something which really helped me was This. It is really useful and has a lot of examples for you to work against.
If you have any problems, just reply and I'll try to help.
More Examples:
Click Element //span[contains(text(), 'Submit')]
Input Text xpath=(//textarea)[3] Some Random Text!
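If you later need the same locators in plain Selenium WebDriver (Java), the idea carries over directly. A rough sketch, assuming the ExtJS-style input from the question and a drop-down like the one above (variable names are only illustrative):
// Match the input by the static parts of its dynamic ID (prefix and suffix)
WebElement textField = driver.findElement(
        By.xpath("//input[starts-with(@id,'textfield-') and contains(@id,'-inputEl')]"));
// The second <select> anywhere on the page, as in the Robot Framework example above
WebElement dropDown = driver.findElement(By.xpath("(//select)[2]"));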
As with the other answers, I propose that you use XPath.
XPath can point you to the element by identifying its relationship with the other elements around it. So my suggestion is to find a static element that you can use as your starting point.
For example:
starting point has static id:
xpath=//td[@id='startingPoint']/following-sibling::select[1]
starting point has no id but has static text (usually the label of the field):
xpath=//td[contains(text(),'Field Label')]/following-sibling::select[1]
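In Selenium WebDriver (Java), the label-anchored version might be used like this (a sketch; the 'Field Label' text and the option value are hypothetical):
// Locate the <select> that follows the <td> containing the static label text
// (Select comes from org.openqa.selenium.support.ui.Select)
WebElement dropDown = driver.findElement(
        By.xpath("//td[contains(text(),'Field Label')]/following-sibling::select[1]"));
new Select(dropDown).selectByVisibleText("Some option");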
If you could give us an idea of what the element is, we could provide better examples.
What I did was alter the XPath, for example:
//*[@id="cec9efb093e14182b361766c26fd1919"]/section/div[1]/ticket/div/div/input
I took out the dynamically generated ID, cec9efb093e14182b361766c26fd1919, and replaced it with an autoId that I set on the parent element where the ID was being generated. It's a cheap fix, but it works as long as only one such parent element is generated.
So the parent element has the attribute autoid="container" added to it, and I referenced it as //*[@autoid="container"]/section/div[1]/ticket/div/div/input in the Robot code.
I am developing an application to test whether an HTML page is responsive or not. Right now, I am assuming that using media queries is the only way to make an HTML page responsive.
But I am using very crude logic to test it: I parse the HTML file and check it for the presence of a media query statement. If one is present, I declare the page responsive; otherwise, non-responsive.
Is there any other way I can go about it?
Is there any other test I can perform before declaring it as responsive or non-responsive?
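For reference, a minimal sketch of the crude check described above (the file name is a placeholder, and this ignores external stylesheets, which would need to be fetched and checked separately):
import java.nio.file.Files;
import java.nio.file.Paths;

String htmlSource = new String(Files.readAllBytes(Paths.get("page.html")));
// Very naive: treat the page as "responsive" if a media query appears anywhere in the markup or inline styles
boolean looksResponsive = htmlSource.contains("@media");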
Check if they are using hard-coded px values instead of % or em. Maybe see if the text is too small or links are too close together.
At the end of the day it won't be a great way to check responsiveness, since there are so many factors.
According to Ethan Marcotte's seminal article that introduced Responsive Web Design (http://alistapart.com/article/responsive-web-design), a responsive page will use media queries, flexible grid layouts and responsive text.
But, even if a page has these elements, it doesn't mean that it is using them correctly. A responsive page is not one that simply uses media queries.
I'm not sure that the ability to programmatically determine if a page is built responsively is even a viable goal. You can check for ingredients, but that won't tell you if the right recipe was followed.
Also, why have you tagged this question with Java?
I'm trying to make a desktop app in Java, as a side project, to track changes made to a webpage and to monitor when my professors add content to their webpages. I did a bit of research, and my current approach is to use the Jsoup library to retrieve the webpage, run it through a hashing algorithm, and then compare the current hash value with a previous hash value.
Is this a recommended approach? I'm open to suggestions and ideas, since before I did any research I had no clue how to start, nor what Jsoup was.
One potential problem with your hashing method: if the page contains any dynamically generated content that changes on each refresh, as many modern websites do, your program will report that the page is constantly changing. Hashing the whole page will only work if the site does not employ any of this dynamic content (ads, hit counter, social media, etc.).
What specifically are you looking for that has changed? Perhaps new assignments being posted? You likely do not want to monitor the entire page for changes anyway. Therefore, you should use an HTML parser -- this is where Jsoup comes in.
First, parse the page into a Document object:
Document doc = Jsoup.parse(htmlString);
You can now perform a number of methods on the Document object to traverse the HTML Nodes. (See Jsoup docs on DOM navigation methods)
For instance, say there is a table on the site, and each row of the table represents a different assignment. The following code would get the table by its ID, and then each of its rows by selecting the table's tr tags.
Element assignTbl = doc.getElementById("assignmentTable");
Elements tblRows = assignTbl.getElementsByTag("tr");
for (Element tblRow : tblRows) {
    // Inspect or store each row's HTML here, e.g. to compare it with a previously saved copy
    String rowHtml = tblRow.html();
}
You will need to somehow view the webpage's source code (such as Inspect Element in Google Chrome) to figure out the page's structure and design your code accordingly. This way, not only would the algorithm be more reliable, but you could take it much further, such as extracting the details of the assignment that has changed. (If you would like assistance, please edit your question with the target page's HTML.)
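Putting the pieces together, a minimal sketch of the fetch-select-hash idea (the URL, the table ID, and the way the previous hash is stored are all hypothetical placeholders):
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class PageWatcher {
    public static void main(String[] args) throws Exception {
        // Fetch and parse the page (placeholder URL)
        Document doc = Jsoup.connect("https://example.edu/course/assignments").get();

        // Hash only the part of the page you care about, not the whole document
        Element assignTbl = doc.getElementById("assignmentTable");
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(assignTbl.html().getBytes(StandardCharsets.UTF_8));
        String currentHash = Base64.getEncoder().encodeToString(digest);

        // Compare with the hash saved on the previous run (persistence not shown)
        String previousHash = loadPreviousHash();
        if (!currentHash.equals(previousHash)) {
            System.out.println("The assignment table has changed.");
        }
    }

    // Placeholder: read the last stored hash from a file or database
    private static String loadPreviousHash() {
        return "";
    }
}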
The application I'm testing is developing fast, and new features keep being added, requiring changes to the test XPaths. So Selenium scripts which were successful before now fail because the XPaths have changed. Is there any reliable way to locate an element (one that will never change)? FYI, I thought of using IDs, but my application does not have IDs on each and every element, as adding IDs in the code is not recommended.
I feel the following is the order of preference for choosing a locator in Selenium:
1. ID
2. Class name
3. Name
4. CSS
5. XPath
6. Link text
7. Partial link text
8. Tag name
If the DOM structure keeps changing, you can try using functions like text() and contains(). The following link explains the basics of these functions:
http://www.guru99.com/using-contains-sbiling-ancestor-to-find-element-in-selenium.html
The following link can be referred to for writing reliable locators:
https://blog.mozilla.org/webqa/2013/09/26/writing-reliable-locators-for-selenium-and-webdriver-tests/
Hope this helps you.
If you cannot impose #id discipline on the interface that keeps changing, one alternative is to use CSS selectors.
Another alternative is to write more robust XPath:
Be smart about using the descendant-or-self axis (//):
Rather than /some/long/and/brittle/path/uniquepart, use //uniquepart or //uniquepart/further/path to bypass the parts that are likely to change.
Don't overspecify label matching.
Use contains() (with translate() if case-insensitivity is needed), and try to match critical parts of labels that are likely to remain invariant across interface changes (see the sketch below).
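As a quick illustration in Selenium WebDriver (Java), here is a brittle absolute locator next to a more robust one (the element structure and the 'Email' label are made up for the example):
// Brittle: every intermediate element is baked into the locator
WebElement brittle = driver.findElement(
        By.xpath("/html/body/div[3]/div[2]/form/table/tbody/tr[5]/td[2]/input"));

// More robust: anchor on a stable label fragment and use // to skip the layout in between
WebElement robust = driver.findElement(
        By.xpath("//label[contains(text(),'Email')]/following-sibling::input[1]"));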
One other way I can think of is to load your page elements into the DOM and use DOM element navigation. It is good practice to have IDs on elements, though. If you have to use XPath, it is good practice to split the path, keeping the common part separate and appending the leaf elements as needed. In a way, a changed XPath causing the test to fail is a good indication that you are catching the changes.
I am using Selenium, Java and TestNG for automation. I am using IDs to identify elements, but everybody says that IDs may change and that relying on them is a very brittle way to test. Can anyone tell me how to use part of an ID, or any other approach that will not affect my automation even if an ID changes at some point? Thanks in advance.
On the contrary...
A well-built application will always have unique IDs on the page, and those are the least likely thing to change.
Unfortunately, you will run into IDs that are dynamic, or even duplicated.
Where I work, our IDs are generated by Apache Tapestry and turn out like these:
<input id="someID_124905830" />
<input id="submit_0" />
But these are easy to handle using parent-child hierarchies, or a partial match like input[id^='submit_'], as in the sketch below.
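A sketch in Selenium WebDriver (Java) of both approaches against the Tapestry-style IDs above (the form container is a made-up example):
// Partial match: ignore the generated numeric suffix
WebElement submitBtn = driver.findElement(By.cssSelector("input[id^='submit_']"));

// Parent-child hierarchy: anchor on a stable container instead of the input's own ID
WebElement someField = driver.findElement(By.cssSelector("form#loginForm input[id^='someID_']"));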
In short: the statement is invalid.
"Everybody says that IDs may change and it's very brittle."
My question for you is: who is "everybody"? Because "everybody" I talk to, and I'm sure the majority of the web development community, would disagree.
There are many more ways to locate an element in Selenium other than ID, such as XPath, CSS, DOM, link text, name, etc. However, working with XPath, and relative XPath in particular, will give you confidence with it.
You can Google it, or see link1, link2 or link3.