Campbell Ritchie wrote: I am afraid you didn't have 100% coverage; an incomplete test is worthless.
Well, that seems harsh. From my perspective, it's nearly impossible to get truly complete coverage, yet there's still value in each test case you
do have. Don't let the quest for perfection get in the way of making useful progress.
I agree with Campbell's other comments, and all of Stephan's.
Jose, it sounds like maybe a tool reported your code coverage as 100%. And yet, as you've found, that doesn't necessarily mean it's really 100%. This depends in part on what definition of code coverage is used, and what tool is used to measure it. 100% might mean that 100% of methods have been executed at least once, or 100% of lines have been executed at least once. But there are different things that can happen
within a line, too.
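For example, consider a one-liner like this (a hypothetical sketch, not Jose's actual code; the name `message` is just for illustration):

```java
public class OneLineBranch {
    // One line of code, but two distinct behaviors hide inside it:
    // the ternary takes a different branch when message is null.
    static String describe(String message) {
        return (message == null) ? "no message" : message.trim();
    }

    public static void main(String[] args) {
        // A test exercising only one argument would still mark the line "covered"
        System.out.println(describe(null));      // null branch
        System.out.println(describe("  hi  "));  // non-null branch
    }
}
```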
That will behave differently if message is null than if it isn't. Did you test both? Or:
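Take a single line that chains calls, where each result decides what runs next (again a hypothetical sketch; `pick` and `Handler` are made-up names):

```java
public class ChainExample {
    interface Handler {
        String handle();
    }

    // The value pick() returns determines which handle() implementation
    // the next call in the chain actually dispatches to.
    static Handler pick(int code) {
        if (code == 0) {
            return () -> "ok";
        }
        return () -> "error " + code;
    }

    public static void main(String[] args) {
        // One line each, but pick(...).handle() runs different code per input
        System.out.println(pick(0).handle());
        System.out.println(pick(404).handle());
    }
}
```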
That will call different methods, depending on the results of each method called. Did you test all possible variations?
You can use something like JaCoCo, which measures instruction-level coverage: at the bytecode level, how many of the bytecode instructions were actually executed? That's pretty good. Even then, though, things can be missed. What happens if a method call throws an exception? Or if a mathematical operation throws an ArithmeticException? These possibilities are generally
not covered by most code coverage tools, as far as I know, in part because in principle any method call could throw an unchecked exception at any time, for unknown reasons. Tools (and humans) can't really cover all the things that
might theoretically happen. Generally humans have to make their best guesses about things that
could happen, based on what they understand about how the code is used. And when you find out you missed something, add another test (or group of tests) to help fill the remaining gaps.
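To make the exception case concrete, here's a small sketch (hypothetical code, not from the thread) of a line that coverage tools will report as covered after one happy-path test, even though its ArithmeticException path was never exercised:

```java
public class HiddenExceptionPath {
    // A single successful call marks this line as covered,
    // but integer division also has a failure behavior:
    // it throws ArithmeticException when b == 0.
    static int ratio(int a, int b) {
        return a / b;
    }

    public static void main(String[] args) {
        System.out.println(ratio(10, 2)); // covers the line...
        try {
            ratio(10, 0);                 // ...but this path needs its own test
        } catch (ArithmeticException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```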