Thanks. That is how we estimate currently; however, as our team gains more experience writing tests, we are beginning to cover more than functional testing alone (e.g., performance, error recovery, coding standards). How do we know when we're done?
Kevin Sprague wrote:Welcome Lisa and Janet! My question has two parts: 1) During iteration planning, are there any "rules of thumb" for estimating the time that should be allocated to developing test cases? Some have suggested tying it to the number of functions built during the iteration; however, it seems that approach would not take into account implementation decisions that may affect how and what we test. 2) What are your thoughts on using test coverage tools? Thanks very much!
Co-author, with Janet Gregory: Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009) http://lisacrispin.com
Kevin Sprague wrote:Your reply has touched on the heart of the matter. Our client is pushing back on the extra testing time we've started including in the iteration planning. You have reminded me of a key Agile principle which is easy to lose sight of: Simplicity. Why look ahead to a "potential" performance issue? Have the Courage to deal with it when the customer places it on the iteration backlog. Thanks!
Co-author, with Janet Gregory: Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009) http://lisacrispin.com
Lisa Crispin wrote:Test coverage tools are useful, but of course, they only measure coverage of the code your team remembered to write! They won't catch missed functionality.
-- Lisa
When we say "test coverage" we almost always mean "code coverage", which, as you point out, is inherently "white box". Customer tests, on the other hand, are inherently "black box", missed functionality and all. To me that means coverage needs to be understood differently, probably in terms of the input space rather than the code. Does your book treat the question of what "coverage" means in the context of customer tests?
As with code coverage, it would seem necessary to devise some way to tame the combinatorial explosion of possible inputs. Unlike code paths, these seem to arise in two dimensions: combinations of values within each operation, and sequential combinations of operations. One approach I have encountered that seems to address the former is Orthogonal Array Testing. I haven't seen anything addressing the latter. Does the book have any recommendations in these areas?
<sidebar>
Orthogonal Array Testing essentially does for input values what Basis Path Testing does for code paths. Basis Path Testing covers a subset of paths such that all untested paths are combinations of tested paths. Orthogonal Array Testing exploits the fact that most value-interaction bugs involve pairwise interactions to define a minimal set of test cases that covers all the pairwise combinations of input values.
</sidebar>
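To make the sidebar's idea concrete, here is a minimal sketch of pairwise coverage. This is not a true orthogonal array (those are balanced as well as covering); it is just a greedy cover that illustrates why pairwise needs far fewer cases than the full cross-product. The parameter names and values are hypothetical.

```python
from itertools import combinations, product

def pairwise_cover(parameters):
    """Greedily pick full test cases until every pair of values drawn
    from any two different parameters appears in at least one case.
    (A real orthogonal array is also balanced; this sketch only
    demonstrates the pairwise-coverage idea.)"""
    names = list(parameters)
    all_cases = list(product(*(parameters[n] for n in names)))
    # Build the set of (param-index, value) pairs that must be covered.
    uncovered = set()
    for (i, n1), (j, n2) in combinations(list(enumerate(names)), 2):
        for v1 in parameters[n1]:
            for v2 in parameters[n2]:
                uncovered.add(((i, v1), (j, v2)))
    chosen = []
    while uncovered:
        # Pick the candidate case covering the most still-uncovered pairs.
        def gain(case):
            return sum(((i, case[i]), (j, case[j])) in uncovered
                       for i, j in combinations(range(len(case)), 2))
        best = max(all_cases, key=gain)
        chosen.append(dict(zip(names, best)))
        for i, j in combinations(range(len(best)), 2):
            uncovered.discard(((i, best[i]), (j, best[j])))
    return chosen

params = {"payment": ["credit", "cash", "gift"],
          "channel": ["web", "store"],
          "customer": ["new", "returning"]}
cases = pairwise_cover(params)
# The full cross-product is 3 * 2 * 2 = 12 cases; the pairwise
# cover needs noticeably fewer while still hitting every value pair.
print(len(cases))
```

The "no cash refund for a credit purchase" kind of bug mentioned later in this thread is exactly the binary interaction this technique targets; bugs requiring three-way interactions would need a higher-strength cover.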
On a different but related note, in automating customer tests, what can you tell us about managing test fixtures (DB state, etc.)? That's where my current team is hung up.
Thanks,
Lance
The application I work on has a complex data structure, and for our GUI smoke tests, it's not feasible for the tests to set up all their own data, so we also use a "canonical data" approach where the build process first refreshes the test schema with "seed" data before running the suite of tests. This is a pain because the tests have to be run in a particular order.
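The "canonical data" refresh described above can be sketched in a few lines. This uses an in-memory sqlite3 database as a stand-in for the real test schema, and the table and seed rows are entirely hypothetical; the point is only the shape of the step: wipe the schema, reload known seed data, then run the suite.

```python
import sqlite3

# Hypothetical seed rows standing in for the real canonical data set.
SEED_STUDENTS = [
    (1, "Ada Lovelace", "FALL2009"),
    (2, "Alan Turing", "FALL2009"),
]

def refresh_test_schema(conn):
    """Drop and reload the canonical seed data so every suite run
    starts from the same known state."""
    conn.execute("DROP TABLE IF EXISTS students")
    conn.execute(
        "CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, term TEXT)"
    )
    conn.executemany("INSERT INTO students VALUES (?, ?, ?)", SEED_STUDENTS)
    conn.commit()

conn = sqlite3.connect(":memory:")
refresh_test_schema(conn)
count = conn.execute("SELECT COUNT(*) FROM students").fetchone()[0]
print(count)  # 2
```

The ordering pain described above comes from tests mutating this shared state; one common mitigation is to run each test (or each small group) inside a transaction that is rolled back afterwards, so the seed data survives in its original form and test order stops mattering.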
Mike Farnham wrote:
The application I work on has a complex data structure, and for our GUI smoke tests, it's not feasible for the tests to set up all their own data, so we also use a "canonical data" approach where the build process first refreshes the test schema with "seed" data before running the suite of tests. This is a pain because the tests have to be run in a particular order.
So, is this "canonical data" stored in a database schema entirely outside the path to production?
We have dev, test, qa, and prod environments.
The code migrates from dev to test to qa and finally to production.
Each environment has its own schema.
I would be interested to know whether the "canonical data" you are talking about resides in a separate schema. That might be a big help to our situation, at least for the data we actually maintain. Our biggest challenge, however, is the data we get from other schemas that we do not maintain, plus the fact that our data is cyclical in nature. Do you have any suggestions for testing data that has a cyclical nature? (I work for a university, and our applications primarily deal with data for the current semester.)
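One common way to tame cyclical "current semester" data in tests is to make the current term a function of an injected date rather than a hidden read of the system clock, so tests can pin any point in the cycle, including the rollover edges. A minimal sketch, with entirely hypothetical term cutoffs:

```python
from datetime import date

def current_term(today: date) -> str:
    """Map a date to a term code. The cutoff months here are
    hypothetical; the point is that 'today' is a parameter the
    tests control, not a hidden clock read inside the function."""
    if 1 <= today.month <= 5:
        return f"SPRING{today.year}"
    if 6 <= today.month <= 7:
        return f"SUMMER{today.year}"
    return f"FALL{today.year}"

# Tests can now exercise any point in the cycle deterministically,
# including the boundary dates where cyclical bugs tend to hide:
print(current_term(date(2009, 5, 31)))   # SPRING2009
print(current_term(date(2009, 6, 1)))    # SUMMER2009
print(current_term(date(2009, 12, 31)))  # FALL2009
```

Production code passes the real current date in at the boundary; everything beneath that boundary becomes testable for last semester, this semester, and the rollover between them without waiting for the calendar.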
Co-author, with Janet Gregory: Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009) http://lisacrispin.com
Lisa Crispin wrote:
Because we drive coding with our business-facing tests, coverage is, in a way, built in. The tests are there first, and the code has to be written to make them pass....
-- Lisa
Lance Zant wrote:
Lisa Crispin wrote:
Because we drive coding with our business-facing tests, coverage is, in a way, built in. The tests are there first, and the code has to be written to make them pass....
-- Lisa
The question I was trying to get to is "coverage of what?" in the case of business-facing tests. Writing them first is great, but that seems orthogonal to the question of how many are enough (or better, which ones are needed). The goal is to cover the requirements. Using tests to document requirements might turn the question back to the customer/product owner: if there's no test where x=1 && y=-1, you can argue that there's no "requirement" to handle that condition. If you can make that work, I'd love to know how you do so.
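One way to make "coverage of what?" concrete for business-facing tests is to partition each input into equivalence classes and check which combinations of classes have at least one test, turning the x/y question above into a checklist. A minimal sketch; the class names and the "tested" set are hypothetical:

```python
from itertools import product

# Partition each input into equivalence classes (hypothetical,
# following the x=1 && y=-1 example in the post above).
X_CLASSES = ["negative", "zero", "positive"]
Y_CLASSES = ["negative", "zero", "positive"]

# Class combinations exercised by the tests actually written so far.
tested = {("positive", "positive"), ("zero", "positive")}

missing = [combo for combo in product(X_CLASSES, Y_CLASSES)
           if combo not in tested]
print(len(missing))  # 7 of the 9 combinations have no test yet
```

The "missing" list is a talking point, not a verdict: the product owner can explicitly mark each untested combination as "out of scope" or "needs a test", which records the decision that would otherwise surface later as the blame game.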
In my experience, tests identified by business product owners tend to be indicative rather than exhaustive. They tend to come up with a sunny-day case and stop there. Prodded for error cases, they give me a couple of obvious missing or bad values. A second round of prodding may or may not produce a couple of interaction exceptions (no cash refund for a credit purchase), but it definitely begins to raise the frustration level. ("I just need it to work, dammit!") Unfortunately, when a subtle interaction bug arises, the fact that there was no test for that combination is cold comfort, and the blame game begins. ("Of COURSE we need to process payments against canceled orders!")
So the question is: how do you assess the adequacy of your business-facing tests, if not by some kind of coverage of the possible input combinations and sequences? If the answer is "heuristically", fair enough. The follow-up in that case is whether any of the heuristics generalize across projects and domains, and how you get the business types to really engage with them.
thanks again,
Lance
Lance Zant wrote:
In my experience, tests identified by business product owners tend to be indicative rather than exhaustive. They tend to come up with a sunny-day case and stop there. Prodded for error cases, they give me a couple of obvious missing or bad values. A second round of prodding may or may not produce a couple of interaction exceptions (no cash refund for a credit purchase), but it definitely begins to raise the frustration level. ("I just need it to work, dammit!") Unfortunately, when a subtle interaction bug arises, the fact that there was no test for that combination is cold comfort, and the blame game begins. ("Of COURSE we need to process payments against canceled orders!")
So the question is: how do you assess the adequacy of your business-facing tests, if not by some kind of coverage of the possible input combinations and sequences? If the answer is "heuristically", fair enough. The follow-up in that case is whether any of the heuristics generalize across projects and domains, and how you get the business types to really engage with them.