So, there is a huge Java test framework project working with various hardware components. The problem is: the @AfterMethod can't dynamically decide which resources to set back to their original state/value after an exception, hardware failure, etc. in the @BeforeMethod. This can endanger subsequent test cases that rely on the same hardware element states (mostly through false negatives).
In other words, I would like to reverse all the object modifications that happened in the @BeforeMethod before the error was encountered. This way I could make other tests more robust (less chance of getting false negatives).
Defining an atomic state for every suite is not an option (in my opinion): it is too much hassle, would require tremendous code modification, and setting the atomic state for every object could take far more time than it should.
Any suggestions? Do you know any good testing guideline/pattern for this kind of problem?
Edit:
class TestClass1 {
    @BeforeMethod
    void method() {
        resource1.setfoo("foo");
        resource1.setbar("bar");
        ...
        resource7.setfoo("bar"); // -> hw error occurs, testmethod1 is not run
        ...
    }
    void testmethod1() {
        foo.bar();
    }
}
class TestClass2 {
    void testmethod2() {
        assertTrue(resource1.doSomething()); /* fails because some combination of
            the resource modifications that happened in the previous @BeforeMethod
            in TestClass1 changed the hardware operation in some way */
    }
}
There is no need to maintain an atomic state, as long as there is a definite way to reverse the state of the resources. For example, adding a counter and a try/catch block in the @BeforeMethod may suffice:
@BeforeMethod
void method() throws Exception {
    int setupStep = 0;
    try {
        resource1.setfoo("foo");
        setupStep++; // 1
        resource1.setbar("bar");
        setupStep++; // 2
        ...
        resource7.setfoo("bar"); // -> hw error occurs, testmethod1 is not run
        setupStep++; // 99
        ...
    } catch (Exception e) {
        switch (setupStep) {
            case 99:
                resource7.setfoo("bar_orig");
            ...
            case 2:
                resource1.setbar("bar_orig");
            case 1:
                resource1.setfoo("foo_orig");
            default:
                // failed on the first step; nothing to undo
        }
        throw e; // make sure the set-up method fails
    }
}
Notice the use of the switch block's fall-through behavior: each case undoes its own step and then falls through to undo every earlier step.
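A pattern that scales better than a hand-maintained switch is an undo stack: each setup step registers its own inverse right after it succeeds, and the catch block unwinds whatever was registered. A minimal sketch, reusing the hypothetical resource objects and "_orig" values from above (and assuming the setters throw unchecked exceptions; wrap the lambdas otherwise):

import java.util.ArrayDeque;
import java.util.Deque;

private final Deque<Runnable> undoStack = new ArrayDeque<>();

@BeforeMethod
void method() throws Exception {
    undoStack.clear();
    try {
        resource1.setfoo("foo");
        undoStack.push(() -> resource1.setfoo("foo_orig"));
        resource1.setbar("bar");
        undoStack.push(() -> resource1.setbar("bar_orig"));
        // ...
        resource7.setfoo("bar"); // hw error occurs here
        undoStack.push(() -> resource7.setfoo("bar_orig"));
    } catch (Exception e) {
        // Unwind only the steps that actually completed, newest first.
        while (!undoStack.isEmpty()) {
            undoStack.pop().run();
        }
        throw e; // make sure the set-up method still fails
    }
}

Each undo action sits right next to the step it reverses, so adding or reordering setup steps no longer requires renumbering a counter.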
Related
I have some logic that does something like this, which I want to test:
public void doSomething(int num) {
    var list = service.method1(num);
    if (!list.isEmpty()) {
        // Flow 1
        LOG.info("List exists for {}", num);
        doAnotherThing(num);
    } else {
        // Flow 2
        LOG.info("No list found for {}", num);
    }
}

public void doAnotherThing(int num) {
    Optional<Foo> optionalFoo = anotherService.get(num);
    optionalFoo.ifPresentOrElse(
        foo -> {
            if (!foo.type().equals("no")) {
                // Flow 3
                anotherService.filter(foo.getFilter());
            } else {
                // Flow 4
                LOG.info("Foo is type {} - skipping", foo.type());
            }
        },
        // Flow 5
        () -> LOG.info("No foo found for {} - skipping", num));
}
For each test exercising a different flow, my first thought was to use Mockito.verify() to check whether the collaborators were called. To test Flow 1, I would verify that anotherService.get() was called inside doAnotherThing(); to test Flow 2, I would verify that anotherService.get() was never called. This would have been fine except for Flow 4 and Flow 5: they both invoke anotherService.get() exactly once and nothing else.
Because of that, I created a class to capture logs in tests. It checks whether certain messages were logged, which tells me which flow was taken. But I wanted to ask: is this bad practice? I would combine it with verify(), giving verify() precedence on the flows it can reach.
One downside is that the tests would rely on the log messages being correct, which makes them a bit brittle. To account for that, I thought about extracting some of these log messages into protected static variables that the tests can also use, so the message stays the same between the methods and their tests. That way, only the flow itself would be tested.
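For example, a rough sketch of that idea (the log-capture helper capturedMessages() is made up for illustration, and I assume it records the raw message template rather than the formatted string):

// In the class under test: the message text lives in one shared constant.
protected static final String NO_LIST_FOUND = "No list found for {}";
...
LOG.info(NO_LIST_FOUND, num); // Flow 2

// In the test: assert against the same constant instead of a copied string.
assertTrue(capturedMessages().contains(NO_LIST_FOUND));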
If the answer is that it is a bad practice, I would appreciate any tips on how to test out Flow 4 and Flow 5.
Log statements are usually not part of the logic under test, but a tool for ops. They should be tuned to optimize operations (not too much information, not too little), so that if something goes wrong you can quickly find out what and where. The exact text, the log levels, and the number of log statements should not be treated as something stable to base your tests on; otherwise it becomes harder to change the logging concept.
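For what it's worth, Flow 4 and Flow 5 can be separated without looking at logs at all: the branch taken is controlled by what the mocked anotherService returns, so each test arranges a different return value and asserts the same observable outcome, namely that filter() is never invoked. A sketch, assuming Mockito with a mocked anotherService, an underTest instance, and a hypothetical fooOfType() fixture helper:

import static org.mockito.Mockito.*;
import java.util.Optional;

// Flow 4: a foo of type "no" is found, so filter() must be skipped.
when(anotherService.get(7)).thenReturn(Optional.of(fooOfType("no")));
underTest.doAnotherThing(7);
verify(anotherService).get(7);
verify(anotherService, never()).filter(any());

// Flow 5: no foo at all, so filter() must also be skipped.
when(anotherService.get(8)).thenReturn(Optional.empty());
underTest.doAnotherThing(8);
verify(anotherService).get(8);
verify(anotherService, never()).filter(any());

The two flows differ only in the arranged state, not in the verification; the distinct log lines are a side effect, not the behavior under test.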
I have 3 methods that are called from the front end. You can only call a function once the previous function has been called, but you don't necessarily need to call all of them. So you may call just f1, or f1 -> f2, or f1 -> f2 -> f3.
My problem is that on the front end you can click on a function before the previous one has even stopped running. I need each function to finish before the next one starts.
What I'm doing at the moment, which works, is pausing execution until the previous function ends, but I'd like a nicer answer:
f1 {
    ready1 = false
    ...
    ready1 = true
}
f2 {
    ready2 = false
    while (!ready1) { Thread.sleep(250); }
    ...
    ready2 = true
}
f3 {
    while (!ready2) { Thread.sleep(250); }
    ...
}
Is there an easy way to do this?
It sounds like you're using a web framework, so maybe mention which one; it may have built-in tools for this. Failing that, one option is to use the built-in Java concurrency tools.
import java.util.concurrent.CountDownLatch;

class NoLookingBack {
    private final CountDownLatch latchB = new CountDownLatch(1);

    public void methodA() {
        // do work.
        latchB.countDown();
    }

    public void methodB() {
        try {
            latchB.await();
        } catch (InterruptedException e) {
            // do something, or declare that this method throws.
            Thread.currentThread().interrupt();
            return;
        }
        // do methodB's work only after methodA has finished.
    }
}
I think this shows how more methods could be included by adding more latches as necessary.
Your example is flawed, and so is this solution: if methodA fails, methodB will block forever. The latch gives you the option of a timeout value, after which you can return a response that indicates failure.
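For example, a sketch using the CountDownLatch.await(long, TimeUnit) overload (the 30-second limit is arbitrary):

import java.util.concurrent.TimeUnit;

public void methodB() throws InterruptedException {
    // Wait a bounded time for methodA; a timeout means methodA failed or
    // never ran, so report an error instead of blocking forever.
    if (!latchB.await(30, TimeUnit.SECONDS)) {
        throw new IllegalStateException("methodA did not complete in time");
    }
    // ... methodB's work ...
}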
In automating the passing of levels in a game, I have several user groups, all of which log in to the game with a method that accepts a username and password, but which have different numbers of unlocked levels (each user has different levels available in the game).
While testing the passing of all the levels, I want to determine at the end of each level whether the user has the next one unlocked, and either continue the test or finish it successfully.
I've read about if/else implementations with Selenium, and I am currently using a method like this:
public boolean isElementExisting(WebElement element) {
    try {
        wait.until(ExpectedConditions.elementToBeClickable(element));
    } catch (Exception e) {
        return false;
    }
    return true;
}
and using if logic like this in the test:
if (!isElementExisting(level3Button)) {
    driver.quit();
}
The rest of the tests follow from there.
When driver.quit() is called, the test automatically fails. The desired behavior I am searching for is for the test to pass inside the if statement. What could be used instead of driver.quit()?
I could fit all the code for testing the further levels into separate nested if/else statements, but that would be troublesome; I am looking for a more practical solution, like passing the test at a certain point.
A test should be static in the sense that it should have a known outcome.
As a result, the way the test is structured and written should follow that logic.
Given what was described above, I would write a test something like this:
login.asUser(username, password);
// additional logic in here
assertTrue(page.userHasLevelUnlocked("level3"));
then the method
public boolean userHasLevelUnlocked(String level) {
    switch (level) {
        case "level3":
            return isElementExisting(level3Button);
        default:
            return false;
    }
}
or something along those lines
Thank you for your answer. I understand the concept of a static test, with the addition that a test should not have a "known" outcome so much as an "expected" outcome that should be matched, in the sense that it tests something to verify its functionality.
The switch case is a valid scenario; frankly, though, I don't see what happens after the assertion fails in the posted example (the test will fail as well).
The solution I implemented is to determine whether the user has the next level unlocked at the end of the previous one, with a method similar to this:
public void isElementExistingAlternateResult(WebElement element) {
    boolean isElementFound = true;
    try {
        wait.until(ExpectedConditions.elementToBeClickable(element));
    } catch (Exception e) {
        isElementFound = false;
    }
    if (isElementFound) {
        System.out.println("test is continued...");
    } else {
        Reporter.getCurrentTestResult().setStatus(ITestResult.SUCCESS);
        System.out.println("next level not unlocked.");
    }
}
That way, if the next level is not unlocked, the test detects this in real time and stops, passing at that exact point. Note that this alters the result of an otherwise failed test case via the TestNG Reporter class in:
Reporter.getCurrentTestResult().setStatus(ITestResult.SUCCESS);
The downside: this makes the test unable to verify that different users have different numbers of levels unlocked, since regardless of the number of unlocked levels it will test them and pass; but that is better left unautomated.
The upside: it's super simple and works great for a test case of about 500 steps (it makes only a few of them "dynamic").
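As an aside: if marking the truncated run as passed feels misleading, TestNG also lets a running test mark itself as skipped by throwing org.testng.SkipException. A sketch of how that could look here (nextLevelButton is a hypothetical WebElement):

import org.testng.SkipException;

if (!isElementExisting(nextLevelButton)) {
    // Ends the test immediately and reports it as skipped, not failed,
    // which keeps the report honest about how far the run actually got.
    throw new SkipException("Next level not unlocked - stopping here.");
}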
Please show me where I'm missing something.
I have a cache built by CacheBuilder inside a DataPool. DataPool is a singleton object whose instance various threads can get and act on. Right now I have a single thread which produces data and adds it into the said cache.
To show the relevant part of the code:
private InputDataPool() {
    cache = CacheBuilder.newBuilder().expireAfterWrite(1000, TimeUnit.NANOSECONDS).removalListener(
        new RemovalListener() {
            {
                logger.debug("Removal Listener created");
            }
            public void onRemoval(RemovalNotification notification) {
                System.out.println("Going to remove data from InputDataPool");
                logger.info("Following data is being removed:" + notification.getKey());
                if (notification.getCause() == RemovalCause.EXPIRED) {
                    logger.fatal("This data expired:" + notification.getKey());
                } else {
                    logger.fatal("This data didn't expire but was evicted intentionally:" + notification.getKey());
                }
            }
        }
    ).build(new CacheLoader() {
        @Override
        public Object load(Object key) throws Exception {
            logger.info("Following data being loaded:" + (Integer) key);
            Integer uniqueId = (Integer) key;
            return InputDataPool.getInstance().getAndRemoveDataFromPool(uniqueId);
        }
    });
}
public static InputDataPool getInstance() {
    if (clsInputDataPool == null) {
        synchronized (InputDataPool.class) {
            if (clsInputDataPool == null) {
                clsInputDataPool = new InputDataPool();
            }
        }
    }
    return clsInputDataPool;
}
From the said thread the call being made is as simple as
while (true) {
    inputDataPool.insertDataIntoPool(inputDataPacket);
    // call some logic which comes with inputDataPacket and sleep for 2 seconds.
}
and where inputDataPool.insertDataIntoPool is like
void insertDataIntoPool(InputDataPacket inputDataPacket) {
    cache.get(inputDataPacket.getId());
}
Now the question: the element in the cache is supposed to expire after 1000 nanoseconds. So when insertDataIntoPool is called a second time, the data inserted the first time should be evicted, since the call happens 2 seconds after its insertion, and the removal listener should then be called.
But this is not happening. I looked into the cache stats and evictionCount is always zero, no matter how many times cache.get(id) is called.
But importantly, if I extend insertDataIntoPool like this:
void insertDataIntoPool(InputDataPacket inputDataPacket) {
    cache.get(inputDataPacket.getId());
    try {
        Thread.sleep(2000);
    } catch (InterruptedException ex) {
        ex.printStackTrace();
    }
    cache.get(inputDataPacket.getId());
}
then the eviction takes place as expected, with the removal listener being called.
Right now I'm clueless as to where I'm missing something. Please help me see it, if you see something.
P.S. Please ignore any typos. Also, no checks are being made and no generics have been used, as this is all just at the stage of testing the CacheBuilder functionality.
Thanks
As explained in the javadoc and in the user guide, there is no thread that makes sure entries are removed from the cache as soon as the delay has elapsed. Instead, entries are removed during write operations, and occasionally during read operations if writes are rare. This allows for high throughput and low latency. And of course, not every write operation causes a cleanup:
Caches built with CacheBuilder do not perform cleanup and evict values "automatically," or instantly after a value expires, or anything of the sort. Instead, it performs small amounts of maintenance during write operations, or during occasional read operations if writes are rare.

The reason for this is as follows: if we wanted to perform Cache maintenance continuously, we would need to create a thread, and its operations would be competing with user operations for shared locks. Additionally, some environments restrict the creation of threads, which would make CacheBuilder unusable in that environment.
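In other words, with one write every 2 seconds the maintenance that would evict the expired entry simply hasn't run yet at the moment you check. If you need the eviction (and the removal listener) to happen promptly in a test, one option is to trigger the maintenance yourself with Cache.cleanUp(), sketched here against the insertDataIntoPool from the question (exception handling elided, as in the original):

void insertDataIntoPool(InputDataPacket inputDataPacket) throws Exception {
    // Process pending maintenance first: anything that expired since the
    // last call is evicted now, and the removal listener fires.
    cache.cleanUp();
    cache.get(inputDataPacket.getId());
}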
I had the same issue, and I found this in Guava's documentation for CacheBuilder.removalListener:
Warning: after invoking this method, do not continue to use this cache builder reference; instead use the reference this method returns. At runtime, these point to the same instance, but only the returned reference has the correct generic type information so as to ensure type safety. For best results, use the standard method-chaining idiom illustrated in the class documentation above, configuring a builder and building your cache in a single statement. Failure to heed this advice can result in a ClassCastException being thrown by a cache operation at some undefined point in the future.
So by changing your code to use the builder reference returned after adding the removalListener, this problem can be resolved:
CacheBuilder builder = CacheBuilder.newBuilder().expireAfterWrite(1000, TimeUnit.NANOSECONDS).removalListener(
    new RemovalListener() {
        {
            logger.debug("Removal Listener created");
        }
        public void onRemoval(RemovalNotification notification) {
            System.out.println("Going to remove data from InputDataPool");
            logger.info("Following data is being removed:" + notification.getKey());
            if (notification.getCause() == RemovalCause.EXPIRED) {
                logger.fatal("This data expired:" + notification.getKey());
            } else {
                logger.fatal("This data didn't expire but was evicted intentionally:" + notification.getKey());
            }
        }
    }
);
cache = builder.build(new CacheLoader() {
    @Override
    public Object load(Object key) throws Exception {
        logger.info("Following data being loaded:" + (Integer) key);
        Integer uniqueId = (Integer) key;
        return InputDataPool.getInstance().getAndRemoveDataFromPool(uniqueId);
    }
});
This resolves the problem. It is kind of weird, but I guess it is what it is :)
OK, I know the difference between commit and rollback and what these operations are supposed to do.
However, I am not certain what to do in cases where I can achieve the same behavior by using commit(), rollback(), or doing nothing.
I am working on an application which communicates with an SQLite database. For instance, let's say I have the following code, which executes a query without writing to the DB:
try {
    doSomeQuery();
    // b) success
} catch (SQLException e) {
    // a) failed (because of exception)
}
Or even more interesting, consider the following code, which deletes a single row:
try {
    if (deleteById(2)) {
        // a) delete successful (1 row deleted)
    } else {
        // b) delete unsuccessful (0 rows deleted, no errors)
    }
} catch (SQLException e) {
    // c) delete failed (possibly due to a constraint violation in the DB)
}
Observe that, from a semantic standpoint, doing commit or rollback in cases b) and c) results in the same behavior.
Generally, there are several things you can do inside each case (a, b, c):
commit
rollback
do nothing
Are there any guidelines or performance benefits of choosing a particular operation? What is the right way?
Note: Assume that auto-commit is disabled.
If it is just a select, I wouldn't open a transaction at all, and then there is nothing to do in any case. Presumably you already know whether there is an update/insert, since you have already passed the parameters.
The case where you intend to do a manipulation is more interesting. When the delete succeeds, it is clear that you want to commit; if there is an exception, you should roll back to keep the DB consistent, since something failed and there is not a lot you can do. If the delete failed because there was nothing to delete, I'd commit, for three reasons:
1. Semantically, it seems more correct, since the operation was technically successful and performed as specified.
2. It is more future-proof: if somebody adds more code to the transaction, they won't be surprised that it rolls back because one delete just didn't do anything (they would expect the transaction to be rolled back on exception).
3. When there is an operation to do, commit is faster, although in this case I don't think it matters.
Any non-trivial application will have operations requiring multiple SQL statements to complete. Any failure happening after the first SQL statement and before the last SQL statement will cause data to be inconsistent.
Transactions are designed to make multiple-statement operations as atomic as the single-statement operations you are currently working with.
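To make that concrete, here is a minimal JDBC sketch (the table and column names are invented for illustration): two statements that must take effect together or not at all.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

void transferItem(Connection conn, int itemId, int fromBox, int toBox) throws SQLException {
    conn.setAutoCommit(false);
    try (PreparedStatement remove = conn.prepareStatement(
             "DELETE FROM box_items WHERE box_id = ? AND item_id = ?");
         PreparedStatement add = conn.prepareStatement(
             "INSERT INTO box_items (box_id, item_id) VALUES (?, ?)")) {
        remove.setInt(1, fromBox);
        remove.setInt(2, itemId);
        remove.executeUpdate();
        add.setInt(1, toBox);
        add.setInt(2, itemId);
        add.executeUpdate();
        conn.commit(); // both statements succeeded: publish them atomically
    } catch (SQLException e) {
        conn.rollback(); // a failure in between leaves no half-done transfer
        throw e;
    }
}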
I've asked myself the exact same question. In the end I went for a solution where I always committed successful transactions and always rolled back unsuccessful transactions, regardless of whether it had any effect. This simplified a lot of code and made it clearer and easier to read.
It did not cause any major performance problems in the application I worked on, which used NHibernate + SQLite on .NET. Your mileage may vary.
As others stated in their answers, it's not a matter of performance (for the equivalent cases you described), which I believe is negligible, but a matter of maintainability, and this is ever so important!
For your code to be nicely maintainable, I suggest always committing at the very bottom of your try block and always closing your Connection in your finally block, no matter what. In the finally block you should also roll back if there are uncommitted transactions (meaning that you didn't reach the commit at the end of the try block).
This example shows what I believe is the best practice (maintainability-wise):
public boolean example()
{
    Connection conn;
    ...
    try
    {
        ...
        //Do your SQL magic (that can throw exceptions)
        ...
        conn.commit();
        return true;
    }
    catch (...)
    {
        ...
        return false;
    }
    finally
    {
        //Close statements, connections, etc.
        ...
        closeConn(conn);
    }
}
public static void closeConn(Connection conn)
{
    try
    {
        if (conn != null && !conn.isClosed())
        {
            if (!conn.getAutoCommit())
                conn.rollback(); //Uncommitted work means we never reached the commit() (there have been problems)
            conn.close();
        }
    }
    catch (SQLException e)
    {
        //We are already cleaning up; log and move on
    }
}
What you should do depends upon the code being called: does it return a flag for you to test against, or does it exclusively throw exceptions when something goes wrong?
The API throws exceptions but also returns a boolean (true|false):
This situation occurs a lot, and it makes it difficult for the calling code to handle both conditions, as you pointed out in your question. One thing you can do in this situation is:
// Initialize a flag we can test against later
boolean queryStatus = false;
try {
    if (deleteById(2)) {
        queryStatus = true;
    } else {
        // We can do a few things here:
        // skip it, because queryStatus was initialized as false, so
        // nothing changes - or throw an exception to be caught
        throw new SQLException("Delete failed");
    }
} catch (SQLException e) {
    // This could also be skipped, because queryStatus was initialized as
    // false, but you may want to do some logging
    LOG.error(e.getMessage());
}
// Because we have a converged flag which covers both cases (the API
// returning a boolean, or throwing an exception) we can test the value
// and decide whether to roll back or commit
if (queryStatus) {
    commit();
} else {
    rollback();
}
The API exclusively throws exceptions (no return value):
No problem. We can assume that if no exception was caught, the operation completed without error, and we can add the rollback()/commit() within the try/catch block.
try {
    deleteById(2);
    // Because the API dev had a good grasp of error handling using
    // exceptions, we can also make some other calls
    updateById(7);
    insertByName("Chargers rule");
    // No exception thrown from the API calls above? Sweet, let's commit
    commit();
} catch (SQLException e) {
    // Oops, an exception was thrown from the API, let's roll back
    rollback();
}
The API does not throw any exceptions and only returns a boolean (true|false):
This goes back to old-school error handling/checking:
if (deleteById(2)) {
    commit();
} else {
    rollback();
}
If you have multiple queries making up the transaction, you can borrow the single-flag idea from scenario #1:
boolean queryStatus = true;
if (!deleteById(2)) {
    queryStatus = false;
}
if (!updateById(7)) {
    queryStatus = false;
}
...
if (queryStatus) {
    commit();
} else {
    rollback();
}
"Note: Assume that auto-commit is disabled."
Once you disable auto-commit, you are telling the RDBMS that you are taking control of commits from now until auto-commit is re-enabled, so IMO, it's good practice to either rollback or commit the transaction versus leaving any queries in limbo.
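A compact sketch of that discipline (plain JDBC; the actual work in the try block is elided):

conn.setAutoCommit(false); // take control: statements now accumulate in a transaction
try {
    // ... execute statements ...
    conn.commit();
} catch (SQLException e) {
    conn.rollback(); // never leave the work in limbo
    throw e;
} finally {
    conn.setAutoCommit(true); // hand control back to the driver
}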