Tests becoming very long with configurations Selenium - java

It has been a while since I have been here, and I am just trying to re-familiarize myself with a test automation framework I have been working on. Maybe a stupid question, but I am going to throw it out there anyway as I think aloud.
Because I have introduced a config file which contains the path to an Excel file (which contains test data) and implemented a basic Excel reader to extract this data for testing, I am finding that a great deal of each test is taken up by all this setup.
For instance:
create an instance of a ReadPropertyFile class
create an object of an ExcellDataConfig class and pass it the location of the Excel file from the config file
set the test case id for this test so it can scan the Excel file for where to start reading the data from the sheet (the sheet contains markers)
get the row/col locations from the sheet of all the interesting data I need for my test, e.g. username/password or some other data
open the browser
in the case of running a test for multiple users, set up a for loop that iterates through the Excel sheet, logs in, and then does the actual test.
That is a lot of configuration. Is there a simpler way?
I have a separate TestBase class which contains the login logic, and I thought to somehow move this user login info there, but I am not sure if that is such a good idea.
I just don't want to get bogged down duplicating work. Does anyone have any high-level suggestions?

Here is a compilable (but not fully coded) quick-and-dirty example of how you could design a base class for Selenium test classes. It follows the DRY principle (Don't Repeat Yourself).
The base class defines login/logout methods which are called before/after test execution of derived test classes.
Data is read from a JSON file (based on javax.json) and used for locating elements (using the keys) and entering data (using the values). You can easily expand the code to support other elements or location strategies (CSS, XPath).
Note that this example is not performance-optimised, but it is quick enough for a start, and you could adapt it to your needs (e.g. eager data loading in a static context).
package myproject;

import java.io.*;
import java.util.*;

import javax.json.Json;
import javax.json.stream.JsonParser;
import javax.json.stream.JsonParser.Event;

import org.junit.*;
import org.openqa.selenium.*;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.*;

public class MyProjectBaseTest {

    protected static WebDriver driver;

    @Before
    public void before() {
        driver = new FirefoxDriver(); // assign the shared field, do not shadow it with a local variable
        driver.get("http://myapp");
        login();
    }

    @After
    public void after() {
        logout();
    }

    private void login() {
        Map<String, String> data = readData("/path/to/testdata/login.json");
        for (String key : data.keySet()) {
            WebDriverWait wait = new WebDriverWait(driver, 20L);
            wait.until(ExpectedConditions.visibilityOfElementLocated(By.id(key)));
            final WebElement we = driver.findElement(By.id(key));
            if ("input".equals(we.getTagName())) {
                we.clear();
                we.sendKeys(data.get(key));
            }
            // else if ("button".equals(we.getTagName())) ...
        }
    }

    private void logout() {
        // logout code ...
    }

    private Map<String, String> readData(String filename) {
        Map<String, String> data = new HashMap<>();
        String key = null;
        // try-with-resources closes the stream automatically
        try (InputStream is = new FileInputStream(filename)) {
            JsonParser parser = Json.createParser(is);
            while (parser.hasNext()) {
                Event e = parser.next();
                if (e == Event.KEY_NAME) {
                    key = parser.getString();
                }
                if (e == Event.VALUE_STRING) {
                    data.put(key, parser.getString());
                }
            }
            parser.close();
        }
        catch (IOException e) {
            // error handling
        }
        return data;
    }
}
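For example, a derived test class could look like the sketch below (MyFirstTest and the asserted element id are made-up examples); the base class opens the browser and logs in before each test method runs:

package myproject;

import org.junit.Test;
import org.openqa.selenium.By;

import static org.junit.Assert.assertTrue;

public class MyFirstTest extends MyProjectBaseTest {

    @Test
    public void dashboardIsVisibleAfterLogin() {
        // MyProjectBaseTest has already opened the browser and logged in via @Before
        assertTrue(driver.findElement(By.id("dashboard")).isDisplayed()); // hypothetical element id
    }
}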

All this "setup work", you've described actually is pretty common stuff and it is how the AAA pattern really works:
a pattern for arranging and formatting code in UnitTest methods
For advanced Fixture usage you could utilize the most suitable for your case xUnit Setup pattern.
I totally agree with @Würgspaß's comment. What he is describing is called an Object Map, and I've used it heavily in the past 3 years with great success across multiple automation projects.
I don't see any usage of a specific framework in your scenario, so I would suggest that you pick a mature one, like TestNG in combination with Cucumber JVM. The latter provides context injection, so you always get clean step definition objects that can share context/state during the scenario run, and you will be able to do all the heavy setup just once and share it between all the tests. I/O operations are expensive and may cause issues in more complex cases, e.g. parallel execution of your tests.
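For illustration, a minimal sketch of Cucumber JVM context injection (assuming the cucumber-java and cucumber-picocontainer dependencies on the classpath; the classes, each in its own file, and the step text are made up):

import io.cucumber.java.en.Given;
import org.openqa.selenium.WebDriver;

// Shared state: Cucumber creates one instance per scenario and injects the
// same instance into every step definition class that asks for it.
public class TestContext {
    public WebDriver driver;
}

public class LoginSteps {

    private final TestContext context;

    public LoginSteps(TestContext context) { // constructor injection
        this.context = context;
    }

    @Given("the user is logged in")
    public void theUserIsLoggedIn() {
        // reuse the shared driver instead of setting one up in every step class
        context.driver.get("http://myapp/login"); // hypothetical URL
    }
}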
As for the design of your code, you may find some of Selenium's test design considerations very useful, like the CallWrappers.


Can I get the field value as a String in a custom TokenFilter in Apache Solr?

I need to write a custom LemmaTokenFilter, which replaces and indexes words in their lemmatized (base) form. The problem is that I get the base forms from an external API, meaning I need to call the API, send my text, parse the response, and send it as a Map<String, String> to my LemmaTokenFilter. The map contains pairs of <originalWord, baseFormOfWord>. However, I cannot figure out how to access the full value of the text field which is being processed by the TokenFilters.
One idea is to go through the tokenStream one token at a time when the LemmaTokenFilter is created by the LemmaTokenFilterFactory. However, I would need to watch out not to edit anything in the tokenStream and to somehow reset the current token (since I would need to call the incrementToken() method on it to get all the tokens). Most importantly, this seems unnecessary, since the field value is already there somewhere, and I don't want to spend time trying to piece it together again from the tokens. This implementation would probably be too slow.
Another idea would be to process every token separately, but calling an external API with only one word and then parsing the response is definitely too inefficient.
I have found something on using the ResourceLoaderAware interface, but I don't really understand how I could use it to my advantage. I could probably save the map to a text file before every indexing run, but writing to a file, then opening and reading it before every document is indexed, seems too slow as well.
So the best way would be to just pass the value of the field as a String to the constructor of LemmaTokenFilter, but I don't know how to access it from the create() method of the LemmaTokenFilterFactory.
I could not find any help by googling, so any ideas are welcome.
Here's what I have so far:
// imports assuming a pre-9 Lucene package layout; adjust to your version
import java.io.IOException;
import java.util.Map;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.ResourceLoader;
import org.apache.lucene.analysis.util.ResourceLoaderAware;
import org.apache.lucene.analysis.util.TokenFilterFactory;

public final class LemmaTokenFilter extends TokenFilter {

    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final Map<String, String> lemmaMap;

    protected LemmaTokenFilter(TokenStream input, Map<String, String> lemmaMap) {
        super(input);
        this.lemmaMap = lemmaMap;
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (input.incrementToken()) {
            String term = termAtt.toString();
            String lemma;
            if ((lemma = lemmaMap.get(term)) != null) {
                // replace the current token text with its base form
                termAtt.setEmpty();
                termAtt.copyBuffer(lemma.toCharArray(), 0, lemma.length());
            }
            return true;
        } else {
            return false;
        }
    }
}

public class LemmaTokenFilterFactory extends TokenFilterFactory implements ResourceLoaderAware {

    public LemmaTokenFilterFactory(Map<String, String> args) {
        super(args);
        if (!args.isEmpty()) {
            throw new IllegalArgumentException("Unknown parameters: " + args);
        }
    }

    @Override
    public TokenStream create(TokenStream input) {
        return new LemmaTokenFilter(input, getLemmaMap(getFieldValue(input)));
    }

    private String getFieldValue(TokenStream input) {
        // TODO: how?
        return "Šach je desková hra pro dva hráče, v dnešní soutěžní podobě zároveň považovaná i za odvětví sportu.";
    }

    private Map<String, String> getLemmaMap(String data) {
        return UdPipeService.getLemma(data);
    }

    @Override
    public void inform(ResourceLoader loader) throws IOException {
    }
}
1. API-based approach:
You can create an analysis chain with the custom lemmatizer on top. To design this lemmatizer, I guess you can look at the implementation of the Keyword Tokenizer,
so that you can read everything that is in the input and then call your API;
replace all the tokens in the input text with those from the API response (see the sketch after these steps);
after that, later in the analysis chain, use the standard or whitespace tokenizer to tokenize your data.
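Here is a rough sketch of steps 2 and 3, assuming a KeywordTokenizer upstream so that the single token carries the entire field value (UdPipeService is the client from the question; the whitespace split is a naive placeholder):

import java.io.IOException;
import java.util.Map;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public final class WholeValueLemmaFilter extends TokenFilter {

    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

    public WholeValueLemmaFilter(TokenStream input) {
        super(input);
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (!input.incrementToken()) {
            return false;
        }
        String wholeValue = termAtt.toString(); // the entire field value, one API call
        Map<String, String> lemmas = UdPipeService.getLemma(wholeValue);
        StringBuilder replaced = new StringBuilder();
        for (String word : wholeValue.split("\\s+")) { // naive split, for illustration only
            if (replaced.length() > 0) {
                replaced.append(' ');
            }
            replaced.append(lemmas.getOrDefault(word, word));
        }
        termAtt.setEmpty().append(replaced.toString());
        return true; // later components can split the rewritten text (step 4)
    }
}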
2. File-based approach
It follows the same steps, except that instead of calling the API it can use a hashmap built from the files mentioned while defining the TokenStream.
Now coming to ResourceLoaderAware:
It is required when you need to indicate to your TokenStream that a resource has changed; its inform method takes care of that. For reference, you can look into StemmerOverrideFilter.
Keyword Tokenizer: emits the entire input as a single token.
So I think I found the answer, or actually two answers.
One would be to write my client application in such a way that incoming requests are processed first: the field value is sent to the external API, and the response is stored in some global variable which can then be accessed from the custom TokenFilters.
Another would be to use a custom UpdateRequestProcessor, which allows us to modify the content of the incoming document, calling the external API and again saving the response so that it is somehow globally accessible from the custom TokenFilters. Here Erik Hatcher talks about the use of the ScriptUpdateProcessor, which I believe can be used in my case too.
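A rough sketch of the UpdateRequestProcessor idea (the field name and the LemmaCache global store are assumptions; error handling and the factory registration in solrconfig.xml are omitted):

import java.io.IOException;

import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

public class LemmaUpdateProcessorFactory extends UpdateRequestProcessorFactory {

    @Override
    public UpdateRequestProcessor getInstance(SolrQueryRequest req,
                                              SolrQueryResponse rsp,
                                              UpdateRequestProcessor next) {
        return new UpdateRequestProcessor(next) {
            @Override
            public void processAdd(AddUpdateCommand cmd) throws IOException {
                SolrInputDocument doc = cmd.getSolrInputDocument();
                String text = (String) doc.getFieldValue("text"); // hypothetical field name
                // one API call per document; the TokenFilter later reads from this store
                LemmaCache.put(text, UdPipeService.getLemma(text)); // LemmaCache is a hypothetical global store
                super.processAdd(cmd);
            }
        };
    }
}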
Hope this helps anyone stumbling upon a similar problem; I had a hard time finding a solution to this (I could not find any similar threads on SO).

Unit tests for file parsing and translation

How do I write a unit test for the following method, so that it is input-file agnostic?
It seems that the reading and the translation into business objects are distinct responsibilities which need to be separated.
That would allow the business translation to be testable.
Any suggestions are welcome.
public Map<Header, Record> createTradeFeedRecords(String tradeFile, String config) throws Exception {
    Map<Header, Record> feedRecordMap = new LinkedHashMap<>();
    try (BufferedReader reader = new BufferedReader(new FileReader(tradeFile))) {
        for (String line; (line = reader.readLine()) != null; ) {
            // skip blank lines and comments
            if (line.trim().isEmpty() || line.startsWith("#")) {
                continue;
            }
            Record record = recordParser.extractTradeFeedRecord(line, config);
            feedRecordMap.put(record.getHeader(), record);
        }
    } catch (Exception e) {
        e.printStackTrace();
        throw e;
    }
    return feedRecordMap;
}
You could use JUnit's TemporaryFolder rule (or, if using JUnit 5, its equivalent extension) to create an input file for your test(s). You would then provide the path to this file in your tradeFile argument, and your test would operate on the file you created. On completion of the test, JUnit discards the temporary folder, thereby adhering to the test principle of self-containment.
This is, I think, the approach which most closely mirrors the actual behaviour of the createTradeFeedRecords method; a sketch follows.
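A minimal sketch of this approach (TradeFeedParser and the record line format are made-up assumptions):

import java.io.File;
import java.nio.file.Files;
import java.util.Arrays;
import java.util.Map;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

import static org.junit.Assert.assertEquals;

public class TradeFeedParserFileTest {

    @Rule
    public TemporaryFolder tmp = new TemporaryFolder(); // deleted after each test

    @Test
    public void readsRecordsFromFile() throws Exception {
        File tradeFile = tmp.newFile("trades.txt");
        Files.write(tradeFile.toPath(),
                Arrays.asList("# a comment line", "TRADE1|100|EUR")); // hypothetical format

        TradeFeedParser parser = new TradeFeedParser(); // hypothetical class under test
        Map<Header, Record> records = parser.createTradeFeedRecords(tradeFile.getAbsolutePath(), "config");

        assertEquals(1, records.size()); // blank/comment lines are skipped
    }
}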
However, if you really don't want to play around with the file system in your tests, or indeed if you just want to achieve this ...
It seems that the reading and translation into business objects are distinct responsibilities - which need to be separate.
... then you could extract the new FileReader(tradeFile) call behind an interface. Something like this, perhaps:
public interface TradeReader {
    // throws IOException so that implementations such as FileReader can be used
    Reader read(String input) throws IOException;
}
The 'normal' implementation of this would be:
public class FileTradeReader implements TradeReader {
    @Override
    public Reader read(String input) throws IOException {
        return new FileReader(input);
    }
}
You could then provide an implementation of this for use in your test case:
public class StubTradeReader implements TradeReader {
    @Override
    public Reader read(String input) {
        return new StringReader(input);
    }
}
In your tests you would then inject the class-under-test (i.e. the class which contains createTradeFeedRecords) with an instance of StubTradeReader. In this way, the createTradeFeedRecords method invoked within your tests would act upon whatever input you provided when creating the StubTradeReader and your tests would have no interaction with the file system.
You could also test the TradeReader separately (using the temporary folder approach outlined above, perhaps) thereby achieving the goal of separating reading and translating and testing both independently.
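A minimal sketch of such a test, assuming a hypothetical TradeFeedParser class that accepts a TradeReader in its constructor and uses tradeReader.read(tradeFile) in place of new FileReader(tradeFile):

import java.util.Map;

import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class TradeFeedParserStubTest {

    @Test
    public void translatesLinesIntoRecords() throws Exception {
        // the "file" content is passed directly; no file system involved
        String input = "# a comment line\nTRADE1|100|EUR\n"; // hypothetical format

        TradeFeedParser parser = new TradeFeedParser(new StubTradeReader());
        Map<Header, Record> records = parser.createTradeFeedRecords(input, "config");

        assertEquals(1, records.size()); // only the real record line is translated
    }
}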

Concurrent inserting to DB

I made a parser based on Jsoup. It handles a page with pagination; such a page contains, for example, 100 links to be parsed. I created a main loop that goes over the pagination, and I need to run async tasks to parse each of the 100 items on each page. As I understand it, Jsoup does not support async request handling. After handling each item I need to save it to the DB. I want to avoid errors during inserts into the DB's table (if threads use the same id for different items at the same time, if that is possible). What could you suggest?
Could I use a simple Thread instance to parse each item:
public class ItemParser extends Thread {

    private String url;
    private MySpringDataJpaRepository repo;

    public ItemParser(String url, MySpringDataJpaRepository repoReference) {
        this.url = url;
        this.repo = repoReference;
    }

    @Override
    public void run() {
        final MyItem item = jsoupParseItem();
        repo.save(item);
    }
}
And run it like this:
public class Parser {

    @Autowired
    private MySpringDataJpaRepository repoReference; // <-- SINGLETON

    public static void main(String[] args) {
        int pages = 10000;
        for (int i = 0; i < pages; i++) {
            Document currentPage = Jsoup.parse();
            List<String> links = currentPage.extractLinks(); // contains 100 links to be parsed on each for-loop iteration
            links.forEach(link -> new ItemParser(link, repoReference).start());
        }
    }
}
I know that this code does not compile; I just want to show you my idea.
Or maybe it's better to use Spring Batch?
What is the best practice to solve this?
What do you think?
If you use row-level locking you should be fine. It might save problems to have each insert be its own transaction, but this has implications given the whole notion of a transaction as a unit of work (i.e. if a single insert fails, do you want the whole run to fail and roll back?).
Also, if you use UUIDs or db-generated ids, you won't have any collision issues.
As for how to structure the code, I'd look at using a Runnable for each task and a thread pool executor; with too many threads, the system loses efficiency trying to manage them all. I notice you're using Spring, so take a look at https://docs.spring.io/spring/docs/current/spring-framework-reference/html/scheduling.html
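A minimal sketch of the thread-pool idea (PooledParser is a made-up name, parseItem stands in for your Jsoup parsing, and the pool size is an arbitrary example):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PooledParser {

    private final MySpringDataJpaRepository repo;

    public PooledParser(MySpringDataJpaRepository repo) {
        this.repo = repo;
    }

    public void parseAll(List<String> links) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(10); // bounded number of workers
        for (String link : links) {
            pool.submit(() -> {
                MyItem item = parseItem(link); // one Runnable per item
                repo.save(item);               // Spring Data JPA handles the insert
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS); // wait for outstanding tasks
    }

    private MyItem parseItem(String link) {
        // Jsoup parsing of a single item page goes here
        return new MyItem();
    }
}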

Unit testing ElasticSearch search result converter

In our project I have written a small class which is designed to take the result of an ElasticSearch query containing a named aggregation and return information about each of the buckets in the result in a neutral format, suitable for passing on to our UI.
public class AggsToSimpleChartBasicConverter {

    private SearchResponse searchResponse;
    private String aggregationName;

    private static final Logger logger = LoggerFactory.getLogger(AggsToSimpleChartBasicConverter.class);

    public AggsToSimpleChartBasicConverter(SearchResponse searchResponse, String aggregationName) {
        this.searchResponse = searchResponse;
        this.aggregationName = aggregationName;
    }

    public void setChartData(SimpleChartData chart,
                             BucketExtractors.BucketNameExtractor keyExtractor,
                             BucketExtractors.BucketValueExtractor valueExtractor) {
        Aggregations aggregations = searchResponse.getAggregations();
        Terms termsAggregation = aggregations.get(aggregationName);
        if (termsAggregation != null) {
            for (Terms.Bucket bucket : termsAggregation.getBuckets()) {
                chart.add(keyExtractor.extractKey(bucket),
                        Long.parseLong(valueExtractor.extractValue(bucket).toString()));
            }
        } else {
            logger.warn("Aggregation " + aggregationName + " could not be found");
        }
    }
}
I want to write a unit test for this class by calling setChartData() and performing some assertions against the object passed in, since the mechanics of it are reasonably simple. However, in order to do so I need to construct an instance of org.elasticsearch.action.search.SearchResponse containing some test data, which is required by my class's constructor.
I looked at implementing a solution similar to this existing question, but the process for adding aggregation data to the result is more involved and requires the use of private internal classes which would likely change in a future version, even if I could get it to work initially.
I reviewed the ElasticSearch docs on unit testing, and there is a mention of a class org.elasticsearch.test.ESTestCase (source), but there is no guidance on how to use it, and I'm not convinced it is intended for this scenario.
How can I easily unit test this class in a manner which is not likely to break in future ES releases?
Note, I do not want to have to start up an instance of ElasticSearch, embedded or otherwise, since that is overkill for this simple unit test and would significantly slow down execution.
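For illustration, this is roughly the shape of test I am aiming for, using Mockito mocks in place of a real response (the aggregation name and stubbed values are made up, and mocking the BucketExtractors types is an assumption):

import java.util.Collections;

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.search.aggregations.Aggregations;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.junit.Test;

import static org.mockito.Mockito.*;

public class AggsToSimpleChartBasicConverterTest {

    @Test
    public void setChartDataAddsOneEntryPerBucket() {
        SearchResponse response = mock(SearchResponse.class);
        Aggregations aggregations = mock(Aggregations.class);
        Terms terms = mock(Terms.class);
        Terms.Bucket bucket = mock(Terms.Bucket.class);

        when(response.getAggregations()).thenReturn(aggregations);
        doReturn(terms).when(aggregations).get("myAgg");
        // doReturn sidesteps generics trouble with List<? extends Terms.Bucket>
        doReturn(Collections.singletonList(bucket)).when(terms).getBuckets();

        BucketExtractors.BucketNameExtractor keyExtractor = mock(BucketExtractors.BucketNameExtractor.class);
        BucketExtractors.BucketValueExtractor valueExtractor = mock(BucketExtractors.BucketValueExtractor.class);
        doReturn("key1").when(keyExtractor).extractKey(bucket);
        doReturn("42").when(valueExtractor).extractValue(bucket);

        SimpleChartData chart = mock(SimpleChartData.class);
        new AggsToSimpleChartBasicConverter(response, "myAgg").setChartData(chart, keyExtractor, valueExtractor);

        verify(chart).add("key1", 42L);
    }
}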

Manage selenium test project

I have general questions about managing a Selenium web project. An example is below; my question is how to manage those test cases? (There are only 3 in the example; the real number of test cases is more than 1000.)
Is creating a class per sub-feature good, e.g. a class for login so that all the tests related to login live under that class?
Are there conventions for writing test cases and managing them?
Thank you all.
I create a class with tests like:
public class LoginTests { // enclosing class added so the snippet is well-formed

    @Test // Test1
    public void logInFailedTest() {
        GridTest gridTest = new GridTest();
        WebDriver webDriver = gridTest.getWebDriver();
        String url = gridTest.getUrl();
        LoginPage logIn = new LoginPage(webDriver, url);
        String userName = "user";
        String pass = "pass";
        logIn.login(userName, pass);
        WebElement errorMsg = webDriver.findElement(By.className("dijitToasterContent"));
        String actual = errorMsg.getAttribute("innerHTML");
        String expected = "Incorrect user name or password. Please try again.";
        assertEquals(expected, actual);
        webDriver.close();
    }

    @Test
    public void loginSuccess() {
        GridTest gridTest = new GridTest();
        String url = gridTest.getUrl();
        WebDriver webDriver = gridTest.getWebDriver();
        LoginPage logIn = new LoginPage(webDriver, url);
        String userName = "user";
        String pass = "pass";
        logIn.login(userName, pass);
        String actual = webDriver.getCurrentUrl();
        String expected = url + "#lastmile/";
        // webDriver.close();
        webDriver.quit();
        assertEquals(expected, actual);
    }

    @Test
    public void accountLock() {
        GridTest gridTest = new GridTest();
        String url = gridTest.getUrl();
        WebDriver webDriver = gridTest.getWebDriver();
        LoginPage logIn = new LoginPage(webDriver, url);
        String userName = "user";
        String pass = "wrong";
        for (int i = 0; i < 11; i++) {
            logIn.login(userName, pass);
            logIn.clearFileduNamePass();
        }
        WebElement msg = webDriver.findElement(By.id("dijit__TemplatedMixin_0")); // block message
        String actual = msg.getAttribute("innerHTML");
        int splitIndex = actual.indexOf(".<");
        actual = actual.substring(0, splitIndex);
        String expected = "Your account has been locked";
        webDriver.quit();
        assertEquals(expected, actual);
    }
}
Yes, what you've done is good. That way all login-related operations go into one class, so if there is any change we can easily manage it.
Object Maintenance
You can go with the Page Object Model (POM), as it is a widely used and easily manageable approach. It is about managing your objects, more like maintaining an object repository.
As you can observe, all we are doing is finding elements and filling in values for those elements.
This is a small script, and script maintenance looks easy. But with time the test suite will grow, and as you add more and more lines to your code, things become tough.
The chief problem with script maintenance is that if 10 different scripts use the same page element, any change in that element means you need to change all 10 scripts. This is time-consuming and error-prone.
A better approach to script maintenance is to create a separate class file which finds web elements, fills them in, or verifies them. This class can be reused by all the scripts using that element. In future, if there is a change in the web element, we need to make the change in just 1 class file and not in 10 different scripts.
This approach is called the Page Object Model (POM). It helps make code more readable, maintainable, and reusable; a minimal sketch follows.
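A minimal page object sketch for the login page used in your tests (the input locators are made-up examples; the error toast locator is taken from your script):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void login(String userName, String password) {
        driver.findElement(By.id("username")).sendKeys(userName); // hypothetical locator
        driver.findElement(By.id("password")).sendKeys(password); // hypothetical locator
        driver.findElement(By.id("loginButton")).click();         // hypothetical locator
    }

    public String errorMessage() {
        // if this element changes, only this class needs updating
        return driver.findElement(By.className("dijitToasterContent")).getAttribute("innerHTML");
    }
}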
Test Data Maintenance
The next thing you have to consider is the test data used to run the test cases with different sets of data, i.e. a data-driven approach.
As with POM, you can create a factory class which gives you a set of data whenever required, so that when you want to change/modify the data you can simply go to the factory and change it there.
For example, you create a class named LoginData which has functions like getValidCredentials and getRandomCredentials to get your data. If your application requires a random email id for each run, then you can simply modify the getValidCredentials part alone; a sketch is below.
It will help you a lot when your application runs mainly on forms or user data.
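A minimal sketch of such a factory class (the credential values are placeholders):

public class LoginData {

    public static Credentials getValidCredentials() {
        return new Credentials("user", "pass"); // replace with your real test data source
    }

    public static Credentials getRandomCredentials() {
        // e.g. a random email id per run
        return new Credentials("user" + System.currentTimeMillis() + "@example.com", "pass");
    }

    // simple value holder for a username/password pair
    public static class Credentials {
        public final String userName;
        public final String password;

        public Credentials(String userName, String password) {
            this.userName = userName;
            this.password = password;
        }
    }
}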
Reusable Components
The third thing is the reusability of what you've created. You can reuse the valid login for other scenarios as well.
