luri ron wrote:I have tried to adopt some techniques from agile development methods such as Scrum, Extreme Programming, and RUP. One thing I found challenging is that the requirement-gathering time described in these methods is too short. In Extreme Programming and Scrum, requirement gathering is done through story cards or use cases within a day or two, but in reality requirement gathering takes much longer. I just want to see if anyone with agile experience has any feedback on this.
Mourouganandame Arunachalam wrote:Hi,
Being a subset of RUP, what does AUP lack when compared to RUP?
Mourougan
Ilja Preuss wrote:
Lisa Crispin wrote:
IMO, Agile is mainly about values and principles. If you're committed to delivering high-quality software and the best possible business value, and you're always trying to improve the way you work, that's agile in my book. Being "agile" would mean working closely with your customer, and using good practices to produce what the customer needs.
Uh, while I agree that Agile is defined by values and principles, I'd also say that it's a bit more specific than "just" what you indicate above. I think it's very well defined by the Agile Manifesto. William Petri has a very good blog post on this topic: http://agilefocus.com/2009/02/agile-versus-agile/ (well, at least I like it... ;)
Jeff Langr wrote:
Lisa Crispin wrote:... how to go about identifying your tool requirements, researching and evaluating tools.
I'd be interested in that list of criteria. From my standpoint, two of the more important considerations for agile testing would seem to be accessibility and ability for the tests to be reasonably self-documenting. I can think of a number of other things. What do you feel is most important to consider?
Thanks,
Jeff
Mike Farnham wrote:So with regard to the book "Agile Testing",
please name some automated testing tools.
Also, is Agile Testing applicable to software development without regards to language,
as long as an Automated Testing tool is available?
Jeanne Boyarsky wrote:Lisa has a cool photo in her JavaRanch profile. What animals are these?
palla sridhar wrote:Hello Lisa and Janet!
Thanks for the offer. But I got a few questions.
Can it be used for legacy systems like Mainframes?
Mike Farnham wrote:Thanks Janet (and Lisa) for your replies and insight.
I did find the article on InformIT.
I wonder if there is an anti-pattern or syndrome. "Too busy to test",
or "Too busy to write tests"?
Mourouganandame Arunachalam wrote:Hi,
Is it possible to apply the Agile testing process to an on-going project, or can it only be applied to new projects?
Mourougan
Joe Deluca wrote:Lisa and Janet,
I am a programming student interested in agile methodologies. Is there any way to implement agile concepts in single person projects?
Would the content provided in your book be suitable/learnable for someone such as myself who is new in the agile field?
Cheers,
Joe
Lance Zant wrote:
Lisa Crispin wrote:
Because we drive coding with our business-facing tests, coverage is, in a way, built in. The tests are there first, and the code has to be written to make them pass....
-- Lisa
The question I was trying to get to is "coverage of what?" in the case of business-facing tests. Writing them first is great, but that seems orthogonal to the question of how many are enough (or better, which ones are needed). The goal is to cover the requirements. Using tests to document requirements might turn the question back to the customer/product owner. If there's no test where x=1 && y=-1, you can argue that there's no "requirement" to handle that condition. If you can make that work, I'd love to know how you do so.
In my experience, tests identified by business product owners tend to be indicative rather than exhaustive. They tend to come up with a sunny-day case and stop there. Prodded for error cases, they give me a couple of obvious missing or bad values. A second round of prodding may or may not produce a couple of interaction exceptions (no cash refund for a credit purchase), but it definitely begins to raise the frustration level. ("I just need it to work, dammit!") Unfortunately, when a subtle interaction bug arises, the fact that there was no test for that combination is cold comfort, and the blame game begins. ("Of COURSE we need to process payments against canceled orders!")
So the question is, how do you assess the adequacy of your business-facing tests, if it's not based on some kind of coverage of the possible input combinations and sequences? If the answer is "heuristically", fair enough. The follow-up in that case is whether any of the heuristics are general across projects and domains, and how do you get the business types to really engage with them?
thanks again,
Lance
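One way to make the "coverage of what?" question concrete is to enumerate the input combinations mechanically and then review the list with the product owner, so a missing case like x=1 && y=-1 shows up as data rather than as a post-mortem surprise. This is only a hypothetical sketch (the names and values are invented, not from anyone's actual project):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: enumerate all pairs of representative boundary
// values so gaps in the business-facing tests become visible up front.
public class CombinationCoverage {

    // Returns every (x, y) pair drawn from the given representative values.
    public static List<int[]> allPairs(int[] xs, int[] ys) {
        List<int[]> pairs = new ArrayList<>();
        for (int x : xs) {
            for (int y : ys) {
                pairs.add(new int[] { x, y });
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        int[] xs = { -1, 0, 1 };   // representative boundary values for x
        int[] ys = { -1, 0, 1 };   // representative boundary values for y
        List<int[]> pairs = allPairs(xs, ys);
        System.out.println(pairs.size() + " combinations to review with the product owner");
    }
}
```

Even a list this small gives the business side something concrete to prune ("that combination can't happen") or confirm, which is less frustrating than open-ended prodding for error cases.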
Ilja Preuss wrote:
Lisa Crispin wrote:
Here's an example: Our application manages retirement plans. When people contribute to their retirement account, withdraw money, or change their investments, we have to do trades through a trading partner (they actually do the buys and sells of the mutual funds). We had a story where for certain mutual funds, we had to mark trades as "new money" if they were the result of new contributions to the account. In other words, if someone merely switched from fund AAAAX to BBBBX, that wasn't new money, but if they sent in money to buy new positions in fund BBBBX, that's "new money".
This wasn't anything that our plan administrators could see - it was a new field in the trade file sent to the trading partner, based on a value in a new column in the database. So there was nothing to demonstrate to our internal users.
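The "new money" rule quoted above could be expressed as something like the following sketch; the class, enum, and method names are invented for illustration and are not taken from Lisa's actual system:

```java
// Hypothetical sketch of the "new money" rule: a trade is flagged as new
// money only when it buys a position funded by a fresh contribution,
// not when the investor merely switches from one fund to another.
public class TradeClassifier {

    enum Source { FUND_SWITCH, NEW_CONTRIBUTION }

    public static boolean isNewMoney(Source source) {
        return source == Source.NEW_CONTRIBUTION;
    }

    public static void main(String[] args) {
        // Switching from fund AAAAX to BBBBX: not new money.
        System.out.println(isNewMoney(Source.FUND_SWITCH));
        // Sending in money to buy new positions in BBBBX: new money.
        System.out.println(isNewMoney(Source.NEW_CONTRIBUTION));
    }
}
```

In the story as described, the result of this classification would end up as a field in the trade file sent to the trading partner, backed by a new database column.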
Are you saying that none of your internal users cared about fulfilling this requirement? Who came up with it? How did you know you had to do it? Just curious...
Ilja Preuss wrote:
Lisa Crispin wrote:
My team could probably get along ok without testers. But it would mean that the programmers spend a lot more time eliciting examples and requirements from the customers, and thinking about them from multiple viewpoints.
Mhh, somehow that sounds like a good thing to me... ;)
Mike Farnham wrote:
The application I work on has a complex data structure, and for our GUI smoke tests, it's not feasible for the tests to set up all their own data, so we also use a "canonical data" approach where the build process first refreshes the test schema with "seed" data before running the suite of tests. This is a pain because the tests have to be run in a particular order.
So, is this "canonical data" stored in a database schema entirely outside the path to production?
We have dev, test, qa, and prod environments.
The code migrates from dev to test to qa and finally to production.
Each environment has its own schema.
I would be interested to know if the "canonical data" you are talking about resides
in a separate schema.
This might be a big help to our situation,
at least for the data we actually maintain.
Our biggest challenge however is the data we get from other schemas
that we do not maintain. Plus, the fact that our data is cyclical in nature.
Do you have any suggestions for testing data that has a cyclical nature?
(I work for a University and our applications primarily deal with data for the current semester.)
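The seed-data refresh described above can be sketched roughly as follows. This is a toy illustration with invented names: a `Map` stands in for the test schema, where in practice the seed data would be a version-controlled SQL script or dump loaded into a dedicated schema outside the path to production. Pinning cyclical values (like the current semester) to a fixed term in the seed data is one common way to keep such tests stable:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a "canonical data" refresh: before the test
// suite runs, drop whatever the last run left behind and reload known
// seed rows, so every run starts from the same state.
public class SeedDataRefresher {

    // Seed rows for the test schema. Cyclical data (the semester) is
    // pinned to a fixed value so tests don't drift with the calendar.
    public static Map<String, String> seedData() {
        Map<String, String> rows = new LinkedHashMap<>();
        rows.put("semester", "FALL_2009");
        rows.put("plan.AAAAX", "open");
        return rows;
    }

    // Clear the schema and reload the seed rows.
    public static Map<String, String> refresh(Map<String, String> schema) {
        schema.clear();
        schema.putAll(seedData());
        return schema;
    }

    public static void main(String[] args) {
        Map<String, String> testSchema = new LinkedHashMap<>();
        testSchema.put("semester", "SPRING_2010"); // leftover from a previous run
        refresh(testSchema);
        System.out.println(testSchema.get("semester")); // prints FALL_2009
    }
}
```

For data owned by other schemas, the same idea suggests snapshotting a known-good copy of that external data into the test schema as part of the refresh, rather than testing against the live, moving source.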
Ilja Preuss wrote:
Lisa Crispin wrote:In fact, when we don't do any "visual" stories in a sprint, and the stories delivered don't change the way the business people work, we don't bother with a sprint review.
Can you give an example on such a story? I'm a bit puzzled - if it doesn't affect the users of the system, why would you want to implement it at all? Thanks!