Hello rangers,
one thing I'm still often wondering about when it comes to
testing is how to decide between better code and design practices vs. easier testability. In particular, I'd like to hear your opinions regarding
Java's "final" keyword.
Static analysis tools and lots of advice and best practices for good application design suggest making methods (or classes) final if you don't explicitly intend them to be overridden in subclasses.
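To make the design argument concrete, here is a minimal sketch (class and method names are hypothetical) of using final to document and enforce that a method is not meant to be overridden:

```java
// Hypothetical example: 'final' protects an invariant-preserving method
// from being accidentally overridden in a subclass.
class Account {
    private long balanceInCents;

    Account(long initialBalanceInCents) {
        this.balanceInCents = initialBalanceInCents;
    }

    // final: subclasses must not change how deposits are validated and recorded
    final void deposit(long amountInCents) {
        if (amountInCents < 0) {
            throw new IllegalArgumentException("amount must be non-negative");
        }
        balanceInCents += amountInCents;
    }

    long balanceInCents() {
        return balanceInCents;
    }
}
```

A subclass that tried to override deposit() would simply fail to compile, so the rule is enforced by the compiler rather than by code review discipline.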
I'm currently working on a project where inheritance is primarily used to combine big classes into giant classes, without thinking about the real purpose of inheritance. So I can really see the value of using the final keyword to enforce, for example, the open/closed principle or the Liskov substitution principle, and to prevent others from accidentally overriding methods incorrectly and violating those principles.
On the other hand, from a testing perspective it's quite handy to have non-final methods and classes. It's very convenient to be able to override a method for testing purposes, which would not be possible with final classes or methods. Of course I know there are frameworks like JMockit which allow you to circumvent almost any hurdle, including final, private or static modifiers. JMockit in particular is very powerful and easy to use, at least in my opinion. Still, needing a framework that manipulates a class's bytecode just to make it testable feels more like a hack. This makes it very tempting to leave out final modifiers, or to loosen access modifiers, so that your code is easier to test without the need for special frameworks.
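For readers who haven't used it, the subclass-and-override technique I mean looks roughly like this (all names are hypothetical; the point is only that it stops working the moment fetchQuote() is declared final):

```java
// Hypothetical production class with an external dependency.
class PriceService {
    // Non-final, so a test can override it. Declaring this method final
    // would make the stubbing below impossible without bytecode tricks.
    String fetchQuote(String symbol) {
        // imagine a real network call here
        throw new UnsupportedOperationException("no network in tests");
    }

    boolean isExpensive(String symbol) {
        return Double.parseDouble(fetchQuote(symbol)) > 100.0;
    }
}

// Test-only subclass that replaces the dependency with a canned answer.
class StubPriceService extends PriceService {
    @Override
    String fetchQuote(String symbol) {
        return "250.0";
    }
}
```

The test then exercises isExpensive() through the stub, never touching the network.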
A third alternative I recently read about is to provide separate test-specific implementations for critical classes of your API, solely intended to be used in tests. This idea combines the best of both worlds, but it obviously comes at a price: you have to maintain additional classes just to make your code testable.
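One common shape for this idea (a sketch with hypothetical names, not the only way to do it) is to depend on a small interface: the production class can then stay final, and the test-specific implementation lives alongside the tests:

```java
// Hypothetical sketch: depend on an interface so the production class
// can stay final while tests use a separate fake implementation.
interface Clock {
    long nowMillis();
}

final class SystemClock implements Clock { // production; safely final
    public long nowMillis() {
        return System.currentTimeMillis();
    }
}

final class FakeClock implements Clock {   // test-specific implementation
    private long now;
    FakeClock(long start) { now = start; }
    public long nowMillis() { return now; }
    void advance(long millis) { now += millis; }
}

final class SessionToken {
    private final long createdAt;
    private final Clock clock;

    SessionToken(Clock clock) {
        this.clock = clock;
        this.createdAt = clock.nowMillis();
    }

    boolean isExpired(long ttlMillis) {
        return clock.nowMillis() - createdAt > ttlMillis;
    }
}
```

The extra maintenance cost is the FakeClock class itself, but in exchange nothing in production code needs to give up final or widen its access modifiers.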
Maybe this discussion wouldn't be necessary if everyone on the team worked in a very, very disciplined way, but even that doesn't completely prevent mistakes that could be hard to fix later. So personally I would tend not to compromise code quality just for the sake of easier testability. Of course it may depend on other factors like the application, the team, etc., but I'd still love to hear your opinions on this topic. What is your practical experience with this?
Thanks for your answers!
Marco