Flow of automation tests with if/else logic in Java

In automating the passing of levels in a game, I have several user groups. All of them log in to the game through a method that accepts a username and password, but each group receives a different number of unlocked levels (each user has different levels available in the game).
In the process of testing the passing of all the levels, I want to determine during the test at the end of each level if the user has the next one unlocked, and either continue the test or finish it successfully.
I've read about if/else implementations with Selenium and I am currently using a method like this:
public boolean isElementExisting(WebElement element) {
    try {
        wait.until(ExpectedConditions.elementToBeClickable(element));
    } catch (Exception e) {
        return false;
    }
    return true;
}
and an if statement like this in the test:
if (!isElementExisting(level3Button)) {
    driver.quit();
}
// rest of the test follows here
When driver.quit() is called, the test automatically fails. The desired behavior is for the test to pass inside the if statement (what could be used instead of driver.quit()?).
I could just fit all the code for testing the further levels into separate nested if/else statements, but that would be cumbersome. I am currently looking for a more practical solution, such as passing the test at a certain point.

A test should be static in the sense that it should have a known outcome.
As a result, the way the test is structured and written should follow that logic.
Given what was described above, I would write a test something like this:
login.asUser(username, password);
// additional logic in here
assertTrue(page.userHasLevelUnlocked("level3"));
Then the method:
public boolean userHasLevelUnlocked(String level) {
    switch (level) {
        case "level3":
            return isElementExisting(level3Button);
        default:
            return false;
    }
}
or something along those lines
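A self-contained sketch of that idea, with the Selenium page lookup replaced by a hypothetical map of user groups to unlocked levels (the group and level names here are made up for illustration, not the original page objects):

```java
import java.util.Map;
import java.util.Set;

public class LevelAccess {
    // Hypothetical mapping of user group -> unlocked levels; in the real test
    // this lookup would be the page check, e.g. isElementExisting(level3Button).
    private static final Map<String, Set<String>> UNLOCKED = Map.of(
            "basicUser", Set.of("level1", "level2"),
            "premiumUser", Set.of("level1", "level2", "level3"));

    public static boolean userHasLevelUnlocked(String userGroup, String level) {
        return UNLOCKED.getOrDefault(userGroup, Set.of()).contains(level);
    }

    public static void main(String[] args) {
        // Mirrors the suggested test shape: assertTrue(page.userHasLevelUnlocked("level3"));
        System.out.println(userHasLevelUnlocked("premiumUser", "level3")); // true
        System.out.println(userHasLevelUnlocked("basicUser", "level3"));   // false
    }
}
```

The point of the shape is that the test asserts a known expectation per user group, instead of branching at runtime.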

Thank you for your answer. I understand the concept of a static test, with the caveat that a test should not have a "known" outcome so much as an "expected" outcome to be matched, in the sense that it tests something to verify its functionality.
The switch case is a valid scenario; frankly, though, I don't see what happens after the assertion fails in the posted example (the test will fail as well).
The solution I implemented is to determine if the user has the next level unlocked at the end of the previous one with a method similar to this:
public void isElementExistingAlternateResult(WebElement element) {
    boolean isElementFound = true;
    try {
        wait.until(ExpectedConditions.elementToBeClickable(element));
    } catch (Exception e) {
        isElementFound = false;
    }
    if (isElementFound) {
        System.out.println("test is continued...");
    } else {
        Reporter.getCurrentTestResult().setStatus(ITestResult.SUCCESS);
        System.out.println("next level not unlocked.");
    }
}
That way, only if the next available level is not found, the test determines this in real time and stops and passes at that exact point. Note that this alters the result of an otherwise failed test case via the TestNG Reporter class in:
Reporter.getCurrentTestResult().setStatus(ITestResult.SUCCESS);
The downside - this makes the test unable to verify the functionality of having a different number of levels unlocked for different users: regardless of how many levels are unlocked, it will test them and pass. But that's something better off left unautomated.
The upside - it's super simple and works great for a test case of about 500 steps (it makes only a few of them "dynamic").

Related

Is it a bad practice to test the flow of logic by log statements?

I have logic like this that I want to test:
public void doSomething(int num) {
    var list = service.method1(num);
    if (!list.isEmpty()) {
        // Flow 1
        LOG.info("List exists for {}", num);
        doAnotherThing(num);
    } else {
        // Flow 2
        LOG.info("No list found for {}", num);
    }
}

public void doAnotherThing(int num) {
    Optional<Foo> optionalFoo = anotherService.get(num);
    optionalFoo.ifPresentOrElse(
            foo -> {
                if (!foo.type().equals("no")) {
                    // Flow 3
                    anotherService.filter(foo.getFilter());
                } else {
                    // Flow 4
                    LOG.info("Foo is type {} - skipping", foo.type());
                }
            },
            // Flow 5
            () -> LOG.info("No foo found for {} - skipping", num));
}
For each test that exercises a different flow, my first thought was to use Mockito.verify() to see whether the collaborators were called or not. So to test Flow 1, I would verify that anotherService.get() was called inside doAnotherThing(), and to test Flow 2, I would verify that anotherService.get() was never called. This would have been fine except for Flow 4 and Flow 5: they both invoke anotherService.get() once but nothing else.
Because of that, I've created a class to capture logs in tests. It checks whether certain messages were logged, and from that I can see which flow was taken. But I wanted to ask: is this bad practice? I would combine this with verify() so that flows that can be distinguished by verify() take precedence.
One downside is that the tests would rely on the log messages being correct, which makes them a bit unstable. To account for that, I thought about extracting some of these log messages into a protected static variable that the tests can also use, so the message stays the same between the methods and their respective tests. This way, only the flow would be tested.
If the answer is that it is a bad practice, I would appreciate any tips on how to test out Flow 4 and Flow 5.
Log statements are usually not part of the logic to test but just a tool for ops. They should be adjusted to optimize operations (not too much information, not too little), so that you can quickly find out if and where something went wrong. The exact text, the log levels, and the number of log statements should not be considered stable things to base your tests on; otherwise it becomes harder to change the logging concept later.
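Flow 4 and Flow 5 can usually be told apart without log assertions, because the test itself decides which flow runs through stubbing; verifying the next collaborator call is then enough. A sketch using a hand-rolled fake (standing in for the Mockito mock; the Foo record and service shape are simplified from the question, and getFilter() becomes the record accessor filter()):

```java
import java.util.Optional;

public class FlowTestSketch {
    record Foo(String type, String filter) {}

    // Hand-rolled fake standing in for a Mockito mock of anotherService.
    static class FakeAnotherService {
        Optional<Foo> toReturn = Optional.empty();
        int filterCalls = 0;
        Optional<Foo> get(int num) { return toReturn; }
        void filter(String f) { filterCalls++; }
    }

    // Simplified copy of doAnotherThing() from the question (logging omitted).
    static void doAnotherThing(FakeAnotherService svc, int num) {
        svc.get(num).ifPresentOrElse(
                foo -> { if (!foo.type().equals("no")) svc.filter(foo.filter()); },
                () -> { /* Flow 5: only the log statement runs here */ });
    }

    public static void main(String[] args) {
        // Flow 4: foo present but of type "no" -> filter() must never be called.
        FakeAnotherService flow4 = new FakeAnotherService();
        flow4.toReturn = Optional.of(new Foo("no", "f1"));
        doAnotherThing(flow4, 1);
        System.out.println(flow4.filterCalls); // 0

        // Flow 5: no foo at all -> filter() must never be called either.
        // The stub (not a log assertion) is what distinguishes 4 from 5.
        FakeAnotherService flow5 = new FakeAnotherService();
        doAnotherThing(flow5, 1);
        System.out.println(flow5.filterCalls); // 0
    }
}
```

Since each test arranges exactly one flow, asserting "filter() was not called" is unambiguous per test, even though the observable behavior of Flow 4 and Flow 5 is identical.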

Cucumber Scenarios to be run in Sequential Order

I have a few concerns regarding the Cucumber framework:
1. I have a single feature file (steps are dependent on each other) and I want to run all the scenarios in order; by default they are running in random order.
2. How do I run a single feature file multiple times?
I put some tags on it and tried to run, but no luck.
@Given("Get abc Token")
public void get_abc_Token(io.cucumber.datatable.DataTable dataTable) throws URISyntaxException {
    DataTable data = dataTable.transpose();
    String tkn = given()
            .formParam("parm1", data.column(0).get(1))
            .formParam("parm2", data.column(1).get(1))
            .formParam("parm3", data.column(2).get(1))
            .when()
            .post(new URI(testurl) + "/abcapi")
            .asString();
    jp = new JsonPath(tkn);
    Token = jp.getString("access_token");
    if (Token == null) {
        Assert.assertTrue(false, "Token is NULL");
    }
}

@Given("Get above token")
public void get_abovetoken(io.cucumber.datatable.DataTable dataTable) throws URISyntaxException {
    System.out.println("Token is " + Token);
}
}
So in the above steps I am getting a token in one step and trying to print it in another step, but I got null instead of the actual value, because my steps are running in random order.
Please note I am running the TestRunner via a testng.xml file.
Cucumber, and testing tools in general, are designed to run each test/scenario as a completely independent thing. Linking scenarios together is a terrible anti-pattern; don't do it.
Instead, learn to write scenarios properly. Scenarios and feature files should have no programming in them at all. Programming needs to be pushed down into the step definitions.
Any scenario, no matter how complicated, can be written in three steps if you really want to. Your Given can set up any amount of state, your When deals with what you are doing, and your Then can check any number of conditions.
You do this by pushing all the detail down out of the scenario and into the step definitions. You improve this further by having the step definitions call helper methods that do all the work.
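A plain-Java sketch of "step definitions call helper methods that do all the work" (the class and method names are hypothetical, and the token request is simulated so the sketch is self-contained; the real helper would hold the RestAssured call from the question). State such as the token lives in one world/helper object owned by a single scenario, instead of leaking across scenarios:

```java
// Hypothetical "world" object owned by one scenario; each step definition
// delegates to it, so no state leaks between scenarios.
public class ApiWorld {
    private String token;

    // Backing for the Given step: all the request detail lives here.
    // Simulated token retrieval; the real code would POST to the token endpoint.
    public void obtainToken(String user, String secret) {
        this.token = "token-for-" + user;
    }

    // Used by later steps of the SAME scenario.
    public String token() {
        if (token == null) {
            throw new IllegalStateException("Given step has not run yet");
        }
        return token;
    }
}
```

With this shape, "get a token, then use it" is one scenario with two thin steps, rather than two scenarios that depend on execution order.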

Reversible resource modification in Java

So, there is a huge Java test framework project working with various hardware components. The problem is: the @AfterMethod can't dynamically decide which resources to set back to their corresponding original state/value after an exception/hardware failure etc. in the @BeforeMethod. This can endanger subsequent test cases that rely on the same hardware element states (false negatives mostly).
Accordingly, I would like to reverse all the object modifications that happened in the @BeforeMethod before the error was encountered. This way I could make other tests more robust (less chance of getting false negatives).
Having a defined atomic state for every suite is not an option (in my opinion): it's too much hassle, it would require tremendous code modification, and setting the atomic state for every object could take far more time than it should.
Any suggestions? Do you know any good testing guideline/pattern for this kind of problem?
Edit:
TestClass1 {
    @BeforeMethod
    method() {
        resource1.setfoo("foo");
        resource1.setbar("bar");
        ...
        resource7.setfoo("bar"); // -> hw error occurs, testmethod1 is not run
        ...
    }
    testmethod1() {
        foo.bar();
    }
}
TestClass2 {
    testmethod2() {
        assertTrue(resource1.doSomething()); /* fails because some combination of
            the resource modifications that happened in the previous @BeforeMethod
            in TestClass1 changed the hardware operation in some way. */
    }
}
There is no need to handle the atomic state, as long as there is a definite way to reverse the state of the resources. For example, adding a counter and a try/catch block in the @BeforeMethod may suffice:
@BeforeMethod
void method() {
    int setupStep = 0;
    try {
        resource1.setfoo("foo");
        setupStep++; // 1
        resource1.setbar("bar");
        setupStep++; // 2
        ...
        resource7.setfoo("bar"); // -> hw error occurs, testmethod1 is not run
        setupStep++; // 99
        ...
    } catch (Exception e) {
        switch (setupStep) {
            case 99:
                resource7.setfoo("bar_orig");
                ...
            case 2:
                resource1.setbar("bar_orig");
            case 1:
                resource1.setfoo("foo_orig");
            default:
                // Failed on first step
        }
        throw e; // Make sure the set-up method fails
    }
}
Notice the use of the fall-through behavior of the switch block.
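An alternative to the step counter is to record an undo action after each successful modification and replay them in reverse order on failure, which scales without a growing switch. A minimal sketch (the step/rollback API is an assumption for illustration, not an existing framework):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ReversibleSetup {
    // Undo actions for the steps that succeeded, most recent first.
    private final Deque<Runnable> undoStack = new ArrayDeque<>();

    // Runs one set-up step; registers its undo only if the step succeeded.
    public void step(Runnable apply, Runnable undo) {
        apply.run();          // may throw, e.g. on a hardware error
        undoStack.push(undo);
    }

    // Reverts every successful step in reverse order (last change first).
    public void rollback() {
        while (!undoStack.isEmpty()) {
            undoStack.pop().run();
        }
    }
}
```

In the @BeforeMethod, each resource change would go through something like step(() -> resource1.setfoo("foo"), () -> resource1.setfoo("foo_orig")); the catch block then just calls rollback() and rethrows.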

Is it bad practice to have boolean setters to see whether they executed successfully?

Is it bad practice to write my setters returning booleans to check whether the value was set correctly?
For example, the following code will set whether my frame is always on top and will return true if it is successfully set to what I decided. Is it bad practice to do this, or should I just leave it with no return type?
public boolean setAlwaysOnTop(boolean alwaysOnTop) {
    frame.setAlwaysOnTop(alwaysOnTop);
    return frame.isAlwaysOnTop() == alwaysOnTop;
}
Bear in mind in this example I also have a getter for when I want to check the value when not attempting to set it:
public boolean isAlwaysOnTop() {
    return frame.isAlwaysOnTop();
}
Thanks in advance. If you would like any more information please feel free to ask and I will provide it.
Edit:
It's just that I was wondering whether it would be useful, because I could do:
if (setAlwaysOnTop(true))
    // do this
instead of just using a void like this:
setAlwaysOnTop(true);
// do this
Adding extra conditions to check that the API works is not a good idea in general. frame.setAlwaysOnTop must do its work. If you don't trust the API, cover it with unit tests. Only if the tests fail should you think about a workaround: find a working version of the software, report a bug, or fix the issue if you have access to the code base.
Such an additional check makes sense only if you know about an existing problem that cannot be fixed right now. In that case I would throw a custom exception from setAlwaysOnTop (because you know it is an exceptional case), log the error, and perform your sanity actions.
public void setAlwaysOnTop(boolean alwaysOnTop) throws UIModificationException {
    frame.setAlwaysOnTop(alwaysOnTop);
    // due to existing bug ... is not updated for all cases
    if (frame.isAlwaysOnTop() != alwaysOnTop) {
        throw new UIModificationException("Unable to change 'always on top' property");
    }
}
Client code
try {
    setAlwaysOnTop(true);
} catch (Exception e) {
    log.warn("Could not update always on top", e);
    // do some stuff
}
You should only write the code that is required, not extra code (which is bad for maintenance).
public boolean setAlwaysOnTop(boolean alwaysOnTop) {
    frame.setAlwaysOnTop(alwaysOnTop);
    return frame.isAlwaysOnTop() == alwaysOnTop;
}
Why do you need to check frame.isAlwaysOnTop() == alwaysOnTop when you are doing frame.setAlwaysOnTop(alwaysOnTop)? The code will obviously set it. To make sure you have written the right code, write JUnit tests; your production code should not contain anything you don't need in production just to clear your doubts.
So, to answer your question: yes, it's a bad practice.

How do I run each browser driver based on an enum list value?

I am using the code found in the first answer on this page: Click Here
I am able to run this successfully and choose the browser by changing the USED_DRIVER environment line for a number of different browsers.
I was wondering if it is possible to run a test so that it runs through each case once before finishing, i.e. so that it has been tested on each of the selected browsers. I have had a go at using for and if, but haven't been very successful.
Example Test
driver.get("calc.php");
driver.findElement(By.name("firstnumber")).sendKeys("2");
Thread.sleep(500);
driver.findElement(By.name("secondnumber")).sendKeys("2");
Thread.sleep(500);
driver.findElement(By.name("Calculate")).click();
Thread.sleep(500);
driver.findElement(By.name("save")).click();
Thread.sleep(500);
I believe what you are asking is how to run a single test multiple times, once for each browser.
There are different ways to do this. I'll start with the simplest (but hardest to maintain in the future, so make sure you understand each choice before choosing):
Solution 1: The simplest way would be to put a for loop around your test. You will have a List of different WebDrivers that the tests will run on. It would look something like this:
WebDriver[] drivers = new WebDriver[]{firefoxDriver, chromeDriver};
for (WebDriver driver : drivers) {
    // ...test goes here...
}
The problem with this method is that each test you run will have to have that for loop, and they all will create their own drivers.
Solution 2: You could have a central method call each of your tests. It would look something like this:
public void runTests() {
    // ...create your drivers here (and the array)...
    for (WebDriver driver : drivers) {
        runFirstTest(driver);
        runSecondTest(driver);
    }
}

public void runFirstTest(WebDriver driver) {
    // ...code using driver goes here...
}
This solves the problem of having a for loop and creating driver instances in every test, but now, whenever you write a new test, you have to add it to this for loop.
Solution 3: Another option is to use a testing framework. The two most popular are TestNG and JUnit. I'm going to assume all of your tests are in the same class; if you have multiple classes, you will want only one class to have the @DataProvider.
@DataProvider(name = "drivers")
public Object[][] provideDrivers() {
    // ...create drivers here...
    return new Object[][]{{firefoxDriver}, {chromeDriver}, ....};
}

@Test(dataProvider = "drivers")
public void runTest(WebDriver driver) {
    // ...do stuff with driver here...
}
This solution will run every method that has @Test(dataProvider = "...") once for every driver you pass in. More information is here.
If you have questions, feel free to comment. I will respond.
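To tie this back to the enum in the linked answer: iterating the enum's values() gives one driver per constant, which is exactly the list a @DataProvider would return. A self-contained sketch (the enum constants and string "drivers" are stand-ins; a real factory would return new FirefoxDriver(), new ChromeDriver(), etc.):

```java
import java.util.ArrayList;
import java.util.List;

public class BrowserMatrix {
    enum Browser { FIREFOX, CHROME }

    // Stand-in for a WebDriver factory; real code would return
    // new FirefoxDriver(), new ChromeDriver(), ...
    static String createDriver(Browser browser) {
        return browser.name().toLowerCase() + "-driver";
    }

    // One "driver" per enum constant, in declaration order -- the same
    // data a TestNG @DataProvider would feed to each @Test method.
    static List<String> driversForAllBrowsers() {
        List<String> drivers = new ArrayList<>();
        for (Browser browser : Browser.values()) {
            drivers.add(createDriver(browser));
        }
        return drivers;
    }
}
```

Adding a browser then means adding one enum constant and one factory branch; no test needs to change.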
