I am studying for the 1Z0-851 Oracle Java SE 6 Certification and I came across this question:
I marked the first alternative as the correct one and failed! The alternative was "All of the assert statements are used appropriately", but the answer key says that the first one, assert(x > 0);, is inappropriate. The question is: why?
The correct answer is this
Appropriate and inappropriate use of assertions
You can place an assertion at any location that you don't expect to be reached normally.
Assertions can be used to validate the parameters passed to a private method. However,
assertions should not be used to validate parameters passed to public methods because a
public method must check its arguments regardless of whether assertions are enabled or
not. However, you can test postconditions with assertions in both public and non-public
methods. Also, assertions should not change the state of a program in any manner.
Src: http://www.freejavaguide.com/java-scjp-part1.pdf
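The rules quoted above can be illustrated with a small sketch. The class and method names below (sqrtFloor, halve) are invented for illustration; they are not from the exam question.

```java
// A minimal sketch of the quoted rules about assertion usage.
public class AssertionUsage {

    // Public method: validate arguments explicitly, because assertions
    // may be (and by default are) disabled at runtime.
    public static int sqrtFloor(int x) {
        if (x < 0) {
            throw new IllegalArgumentException("x must be non-negative: " + x);
        }
        int result = (int) Math.sqrt(x);
        // Testing a postcondition with an assertion is fine, even in a public method.
        assert (long) result * result <= x && (long) (result + 1) * (result + 1) > x;
        return result;
    }

    // Private method: we control all callers, so an assertion is an
    // appropriate way to document and check the precondition.
    private static int halve(int evenValue) {
        assert evenValue % 2 == 0 : "caller must pass an even value";
        return evenValue / 2;
    }

    public static void main(String[] args) {
        System.out.println(sqrtFloor(10)); // prints 3
        System.out.println(halve(8));      // prints 4
        try {
            sqrtFloor(-1); // rejected even with assertions disabled
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Run with java -ea to enable the assert checks; the explicit IllegalArgumentException in the public method fires regardless of that flag.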
Line 12 is redundant.
If you remove it, the assertion on line 15 will cover the case where x <= 0.
To be honest, it's a strangely worded question, but that is all I can see. I am not sure what is meant by "appropriately".
If you read just the first assert statement -- which, given its position, should be interpreted as a precondition -- it implies that the function should work properly with any positive int value, which is not true. Therefore, that assertion is misleading.
Starting with go2, the assert is easy to understand.
The method does nothing, it just asserts your expectation, that x < 0.
The go method, on the other hand, has a switch.
It is good practice to assert false in the default clause if you absolutely do not expect your program to reach it, i.e., under normal circumstances one of the cases must match.
The only case on the switch expects x to be exactly 2.
So, to sum up: you don't expect x to be greater than 0, as the first assertion says; you expect x to be 2 and nothing else. Thus, the assertion is not used appropriately.
However, as Jeff noted, the case has no break, which means the default clause will always execute, leading to assert false in every scenario.
Conclusion: The go method should always result in an error, making assert false properly used, while assert x > 0 isn't correct at all.
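The question's original code is not reproduced in this thread. Based on the description in the answers (a precondition assert, a switch on x with a fall-through case 2, assert false in the default clause, and a private go2 method), a plausible reconstruction -- not the exact exam code -- might look like this:

```java
// Plausible reconstruction of the exam question's code, inferred from the
// answers above; not the verbatim original.
public class AssertTest {

    public void go(int x) {
        assert x > 0; // the "inappropriate" assertion under discussion
        switch (x) {
            case 2:
                x = 3; // no break, so execution falls through to default
            default:
                assert false; // reached for every input because of the fall-through
        }
    }

    private void go2(int x) {
        assert x < 0; // fine: a precondition check in a private method
    }
}
```

With assertions enabled (java -ea), go always ends in an AssertionError; with them disabled (the default), it silently does nothing.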
Related
While testing my code, I've encountered something I can't interpret. Examining the code coverage with EclEmma, I found the header of a for-loop highlighted in yellow with the message "1 of 2 branches missing".
The code line is as follows:
for (int i = maxIdx; i >= 0; i--) {
The body of the loop is highlighted as covered (and is actually executed), as are the preceding and following statements, and the method works fine under all possible conditions. As far as I can tell, the headers of other for-loops are highlighted in yellow with the same message only when the body of the loop has never executed.
What is the meaning of this message? What branch is missing?
Here is how a for loop of the form
for (ForInit; ForCondition; ForUpdate)
Body
is executed:
1. ForInit is executed.
2. ForCondition is evaluated:
- when false, Body is not executed and execution continues after the loop
- when true, Body is executed, then ForUpdate, and execution continues from step 2
"2 branches" corresponds to the two options above for ForCondition.
"1 of 2 branches missing" means that only one of these options ever happened: either the first, or the second.
In the absence of a complete example that includes the body of your loop, it is hard to answer your additional questions:
But it's strange -- why then are other loops that always executed at least once green?
However, given that the Body of your loop was executed, it is possible that there is an exit from the loop inside the Body, before ForCondition ever evaluates to false.
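A hypothetical example of such a loop (the method name and inputs are invented for illustration): the only way out of the loop below, when a negative value is present, is the early return in the body. If every test input contains a negative number, the header's "condition false" branch is never taken, and a tool like EclEmma reports "1 of 2 branches missed" on the loop header.

```java
// Loop whose body contains an early exit, so the loop condition may
// never evaluate to false during a test run.
public class BranchCoverageExample {

    static int indexOfFirstNegative(int[] values) {
        for (int i = 0; i < values.length; i++) {
            if (values[i] < 0) {
                return i; // early exit: the loop condition never becomes false
            }
        }
        return -1; // reached only when the loop condition evaluates to false
    }

    public static void main(String[] args) {
        // If every test input contains a negative number, the
        // "condition false" branch of the loop header is never exercised.
        System.out.println(indexOfFirstNegative(new int[]{3, -1, 4})); // prints 1
        System.out.println(indexOfFirstNegative(new int[]{1, 2}));    // prints -1
    }
}
```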
For example, using the latest Eclipse IDE for Java as of today (version 2018-12), which comes with EclEmma 3.1.1:
And maybe there are no such exits in your other loops:
This can also explain
Running this code with an empty StringBuilder paints it green.
and
Adding an artificially created situation with an empty StringBuilder (that's impossible in reality) colors the loop in green.
because of the added case in which ForCondition evaluates to false before the Body executes:
I'm guessing the missing branches refer to the condition i >= 0. Since i is initialized with a positive maxIdx (according to the comments), you should probably also add test cases for maxIdx of 0 and a negative maxIdx.
Note that since maxIdx is the length of a StringBuilder (according to the comments), this may not be possible, and you'd have to either live with the missing branch, or "artificially" refactor your code so that you can pass a negative maxIdx.
Is there any difference in putting the most probable condition in the if, else-if, or else branch?
Ex :
int[] a = {2, 4, 6, 9, 10, 0, 30, 0, 31, 66};
int firstCase = 0, secondCase = 0, thirdCase = 0;
for (int i = 0; i < 10; i++) {
    int m = a[i] % 5;
    if (m < 3) {
        firstCase++;
    } else if (m == 3) {
        secondCase++;
    } else {
        thirdCase++;
    }
}
What is the difference in execution time with the input
int[] a = {3,6,8,7,0,0,0,0,0,0};
Is there any difference in putting the condition most likely to be true in the if, else-if, or else branch?
Actually, the answer with Java is that "it depends".
You see, when you run Java code, the JVM starts out using the interpreter while gathering statistics. One of the statistics that may be recorded is which of the paths in a branch instruction is taken most often. These statistics can then be used by the JIT compiler to influence code reordering, where this does not alter the compiled code's semantics.
So if you were to execute your code, with two different datasets (i.e. "mostly zero" and "mostly non-zero"), it is possible that the JIT compiler would compile the code differently.
Whether it can actually make this optimization depends on whether it can figure out that the reordering is valid. For example, can it deduce that the conditions being tested are mutually exclusive?
So how does this affect the complexity? Well... let's do the sums for your simplified example, assuming that the JIT compiler doesn't do anything "smart". And assume that we are not just dealing with arrays of length 10 (which would render the discussion of complexity moot).
Consider this:
For each zero, the loop does one test and one increment - say 2 operations.
For each non-zero element, the loop does two tests and one increment - say 3 operations.
So that is roughly 2*N operations for N elements when all are zero, versus 3*N operations when all are non-zero. But both are O(N)... so the Big O complexity is not affected.
(OK I left some stuff out ... but you get the picture. One of the cases is going to be faster, but the complexity is not affected.)
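The 2*N versus 3*N estimate can be made concrete with a toy cost model. This is a sketch of the simplified zero/non-zero case only; it deliberately ignores the JIT and real hardware costs:

```java
// Toy cost model for the simplified example above: a zero element costs
// one test plus one increment (2 ops); a non-zero element costs two tests
// plus one increment (3 ops).
public class BranchCost {

    static long countOperations(int[] a) {
        long ops = 0;
        for (int value : a) {
            ops++;         // first test: value == 0
            if (value == 0) {
                ops++;     // increment for the zero case
            } else {
                ops++;     // second test (the else-if in the original code)
                ops++;     // increment for the non-zero case
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        int[] allZero = new int[10];
        int[] allNonZero = new int[10];
        java.util.Arrays.fill(allNonZero, 7);
        System.out.println(countOperations(allZero));    // 2 * 10 = 20
        System.out.println(countOperations(allNonZero)); // 3 * 10 = 30
    }
}
```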
There's a bit more to this than you're being told.
1. 'if' versus 'else': if a condition and its converse are not equally likely, you should handle the more likely condition in the 'else' block, not the 'if' block. The 'if' block requires a conditional jump which isn't taken plus a final branch around the 'else' block; the 'else' block requires a conditional branch which is taken and no final branch at all.
2. 'if' versus 'else if' versus 'else': obviously you should handle the most common case in the 'if' block, to avoid the second test. The same considerations as in (1) determine that, as between the final 'else if' and the final 'else', the more common case should be handled in the final 'else' block.
Having said all that, unless the tests are non-trivial, or the contents of all these blocks are utterly trivial, it is rather unlikely that any of this will make a discernible difference.
There is no difference if you only have an if-else, since the condition is always evaluated and it does not matter whether it is almost always true or false. However, if you have an if inside the else part (an else if), it is much better to put the condition most likely to be true in the first if. That way, most of the time you won't need to evaluate the condition inside the else at all, which improves performance.
If the condition that is usually true is in the if, execution time will be lowest, because only the first condition is checked.
If it is in the else-if, execution time will be higher than in the first scenario but lower than in the last.
If it is in the else, execution time will be highest, because the first two conditions are checked before it is reached.
Sure it is.
if ... else if ... checks are evaluated in the order in which they are coded. So, if you place the most likely condition at the end of the chain, the code will run slightly slower.
But it all depends on how the conditions are built (how complex they are).
The most likely condition should go in the if, then the else if, and so on.
It's good to put the most common condition at the very first level, so that it is handled first, in the least time.
If you put the most frequent condition in the middle (else if) or at the end (else), it takes longer to reach that condition statement, because every preceding condition must be checked first.
Let's say that I have a lot of nested loops (3-4 levels) in a Java method, and each of these loops can contain some if-else blocks. How can I check that all of these are working properly? I am looking for a logical way to test this, rather than a brute-force approach like substituting illegal values.
EDIT:
Can you also suggest some good testing books for beginners ?
The way I've always been taught basic testing is to handle around edge cases as much as possible.
For example, if you are checking the condition that variable i is between 0 and 10, if (i > 0 && i < 10), what I would naturally test is a few values that make the condition true, preferably near the edges; then the edge values themselves, which are a combination of true and false; and finally cases that are way out of bounds. With the aforementioned condition, I'd test 1, 5, 9, then 0, 10, -1, 11, and finally an extremely large integer, both positive and negative.
This somewhat goes against "not substituting illegal values", but I feel that you have to do that in order to ensure that your conditions fail properly.
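That edge-case recipe can be written down as a tiny self-checking example. Here inRange is a made-up stand-in for the condition under test:

```java
// Self-checking sketch of the edge-case testing recipe described above.
public class RangeEdgeCases {

    static boolean inRange(int i) {
        return i > 0 && i < 10;
    }

    public static void main(String[] args) {
        // inside the range, near the edges
        System.out.println(inRange(1) && inRange(5) && inRange(9));   // true
        // on the edges: both fail for a strict comparison
        System.out.println(inRange(0) || inRange(10));                // false
        // just outside and far outside the range
        System.out.println(inRange(-1) || inRange(11)
                || inRange(Integer.MAX_VALUE) || inRange(Integer.MIN_VALUE)); // false
    }
}
```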
EMMA is a code coverage tool. You run your unit tests under EMMA and it will produce an HTML report with colorized source code showing which lines were reached and which were not. Based on that, you can add tests to make sure you're testing all the various branches.
Each if/then in your code contains a boolean sub-expression, as does the expression a loop uses to decide whether to enter/rerun the loop. Predicate coverage should give you a good idea of how thorough your tests are.
Wikipedia explains predicate coverage
Condition coverage (or predicate coverage) - Has each boolean sub-expression evaluated both to true and false? This does not necessarily imply decision coverage.
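The distinction can be seen in a small hypothetical example (the method and its parameters are invented for illustration): the first two calls below achieve decision coverage, because the whole expression evaluates to both true and false, but not condition coverage, because the isStaff sub-expression never evaluates to true.

```java
// Decision coverage without condition coverage, and the extra call
// needed to close the gap.
public class PredicateCoverageExample {

    static boolean canEnter(boolean hasTicket, boolean isStaff) {
        return hasTicket || isStaff;
    }

    public static void main(String[] args) {
        // Decision coverage: the whole expression is both true and false...
        System.out.println(canEnter(true, false));  // prints true
        System.out.println(canEnter(false, false)); // prints false
        // ...but isStaff never evaluated to true above. This call makes
        // each boolean sub-expression take both values:
        System.out.println(canEnter(false, true));  // prints true
    }
}
```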
I believe that using the debugger is the easiest way to find the mistake. You can find a full explanation of debugging at this link: http://www.ibm.com/developerworks/library/os-ecbug/.
Also, you can use this link for practising: http://webster.cs.washington.edu:8080/practiceit/.
For example, find input that will go through each of those loops with some values. Then find input that will go through each branch of the ifs. Then find input that will exercise the loops with large, small, or illegal values.
Set up some input data and the expected output, doing the calculations yourself.
Then create a class that checks whether the output values match the ones you calculated separately.
Example:
input: an array {3, 4, 5, 6}
output (sum of odd numbers): 8
class TestClass {
    // Test case: here you keep changing the array (extreme values, null, etc.)
    public void test1() {
        int[] anArray = new int[4];
        anArray[0] = 3;
        anArray[1] = 4;
        anArray[2] = 5;
        anArray[3] = 6;
        int s = Calculator.oddSum(anArray); // was oddSum(x); x was undefined
        if (s == 8)
            System.out.println("Passed");
        else
            System.out.println("Failed");
    }

    public static void main(String[] args) {
        TestClass t = new TestClass();
        t.test1();
    }
}
What do you normally write when you're testing for the return value of indexOf?
if (str.indexOf("a") < 0)
vs
if (str.indexOf("a") == -1)
Would one method be preferred over the other?
I'm actually posing this question for any function in any language that returns -1 on error.
I normally prefer the < 0 approach, because if the function is extended to return -2 on some other case, the code would still work.
However, I notice that the == -1 approach is more commonly used. Is there a reason why?
I try to implement the general principle that tests for "error conditions" should be as wide as possible. Hence I would use < 0 rather than == -1.
This is a principle I was taught during classes in formal methods during my CS degree.
On a simple if it doesn't matter too much, but on loops it's important to detect any "out of range" condition to ensure that the loop is terminated, and not to assume that the loop termination value will be hit exactly.
Take for example this:
i = 0;
while (i < 10) {
++i;
// something else increments i
}
v.s.
i = 0;
while (i != 10) {
++i;
// something else increments i
}
The latter case could fail - the former case won't.
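A runnable sketch of why the < test is safer (the method names are invented; the guard counter exists only to keep the demonstration finite when the != version fails to terminate):

```java
// Demonstrates loop termination with < versus != when the loop variable
// can step past the boundary value.
public class LoopTermination {

    // Counts iterations until i < 10 stops the loop (guard caps runaway loops).
    static int runWithLessThan(int step) {
        int i = 0, iterations = 0;
        while (i < 10 && iterations < 1000) {
            i += step;
            iterations++;
        }
        return iterations;
    }

    // Counts iterations until i != 10 stops the loop (guard caps runaway loops).
    static int runWithNotEqual(int step) {
        int i = 0, iterations = 0;
        while (i != 10 && iterations < 1000) {
            i += step;
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        // With a step of 3, i goes 0, 3, 6, 9, 12, ... and never equals 10.
        System.out.println(runWithLessThan(3));  // prints 4: stops once i reaches 12
        System.out.println(runWithNotEqual(3));  // prints 1000: only the guard stops it
    }
}
```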
I would also prefer the < 0 approach. The reason the == -1 approach is widely used is that, according to the Java documentation, the function will indeed return -1 when the index is missing, and the "extended function" case will never happen.
In almost all cases it is advisable to stick to the Javadoc as closely as possible.
See Java Doc:
http://docs.oracle.com/javase/6/docs/api/java/lang/String.html#indexOf%28int%29
... according to it, -1 will be returned.
So I would always check for -1. If you have a specific value to check, it's safer to check for that specific value, to protect against future code changes in the Java API.
For example, if you get a -2, that means something is seriously wrong in your JVM; it doesn't mean "not found". It would be better for the code to proceed and cause an Exception/Error.
According to the Javadoc, indexOf is always supposed to return -1 if the character does not occur in the sequence:
http://docs.oracle.com/javase/6/docs/api/java/lang/String.html#indexOf%28int%29
As Sandeep Nair points out, the Javadoc explains it, but I want to comment a bit.
Both would work fine, and I wouldn't say one is "better" than the other. The reason many people write == -1 is that there is a general convention that search methods return -1 if the thing cannot be found. We cannot use 0, since non-negative values are used as indices in arrays, etc.
So, it's a matter of opinion. As long as the convention stays, it doesn't matter.
I'd use == -1, simply because I want to check for what I'm expecting.
I'm not expecting -2; if I got -2, that would probably mean I have some other issue. Checking for exactly what you expect is better than checking for whatever you might get.
When I look at the examples in the Assert class JavaDoc
assertThat("Help! Integers don't work", 0, is(1)); // fails:
// failure message:
// Help! Integers don't work
// expected: is <1>
// got value: <0>
assertThat("Zero is one", 0, is(not(1))); // passes
I don't see a big advantage over, let's say, assertEquals(0, 1).
Maybe it's nice for the messages when the constructs get more complicated, but do you see more advantages? Readability?
There's no big advantage for those cases where an assertFoo exists that exactly matches your intent. In those cases they behave almost the same.
But when you come to checks that are somewhat more complex, then the advantage becomes more visible:
val foo = List.of("someValue");
assertTrue(foo.contains("someValue") && foo.contains("anotherValue"));
Expected: is <true>
but: was <false>
vs.
val foo = List.of("someValue");
assertThat(foo, containsInAnyOrder("someValue", "anotherValue"));
Expected: iterable with items ["someValue", "anotherValue"] in any order
but: no item matches: "anotherValue" in ["someValue"]
One can discuss which one of those is easier to read, but once the assert fails, you'll get a good error message from assertThat, but only a very minimal amount of information from assertTrue.
The JUnit release notes for version 4.4 (where it was introduced) state four advantages:
More readable and typeable: this syntax allows you to think in terms of subject, verb, object (assert "x is 3") rather than assertEquals, which uses verb, object, subject (assert "equals 3 x")
Combinations: any matcher statement s can be negated (not(s)), combined (either(s).or(t)), mapped to a collection (each(s)), or used in custom combinations (afterFiveSeconds(s))
Readable failure messages. (...)
Custom Matchers. By implementing the Matcher interface yourself, you can get all of the above benefits for your own custom assertions.
More detailed argumentation from the guy who created the new syntax : here.
Basically for increasing the readability of the code.
Besides Hamcrest, you can also use the FEST assertions.
They have a few advantages over Hamcrest, such as:
they are more readable:
assertEquals(123, actual); // reads "assert equals 123 is actual"
vs.
assertThat(actual).isEqualTo(123); // reads "assert that actual is equal to 123"
they are discoverable (autocompletion works in any IDE).
Some examples
import static org.fest.assertions.api.Assertions.*;
// common assertions
assertThat(yoda).isInstanceOf(Jedi.class);
assertThat(frodo.getName()).isEqualTo("Frodo");
assertThat(frodo).isNotEqualTo(sauron);
assertThat(frodo).isIn(fellowshipOfTheRing);
assertThat(sauron).isNotIn(fellowshipOfTheRing);
// String specific assertions
assertThat(frodo.getName()).startsWith("Fro").endsWith("do")
.isEqualToIgnoringCase("frodo");
// collection specific assertions
assertThat(fellowshipOfTheRing).hasSize(9)
.contains(frodo, sam)
.excludes(sauron);
// map specific assertions (One ring and elves ring bearers initialized before)
assertThat(ringBearers).hasSize(4)
.includes(entry(Ring.oneRing, frodo), entry(Ring.nenya, galadriel))
.excludes(entry(Ring.oneRing, aragorn));
Update, October 17th, 2016: FEST is no longer active; use AssertJ instead.
A very basic justification is that it is hard to mess up the new syntax.
Suppose that a particular value, foo, should be 1 after a test.
assertEquals(1, foo);
--OR--
assertThat(foo, is(1));
With the first approach, it is very easy to forget the correct order, and type it backwards. Then rather than saying that the test failed because it expected 1 and got 2, the message is backwards. Not a problem when the test passes, but can lead to confusion when the test fails.
With the second version, it is almost impossible to make this mistake.
Example:
assertThat(5 , allOf(greaterThan(1),lessThan(3)));
// java.lang.AssertionError:
// Expected: (a value greater than <1> and a value less than <3>)
// got: <5>
assertTrue("Number not between 1 and 3!", 1 < 5 && 5 < 3);
// java.lang.AssertionError: Number not between 1 and 3!
you can make your tests more specific
you get a more detailed exception if a test fails
it is easier to read the test
By the way, you can write message text in assertXXX too...
assertThat(frodo.getName()).isEqualTo("Frodo");
It is close to natural language.
Easier to read, easier to analyze code.
Programmers spend more time analyzing code than writing new code, so if code is easy to analyze, developers will be more productive.
P.S. Code should read like a well-written book: self-documenting code.
There are advantages to assertThat over assertEquals:
1) more readable
2) more information on failure
3) compile-time rather than run-time errors
4) flexibility in writing test conditions
5) portable - if you are using Hamcrest, you can use JUnit or TestNG as the underlying framework