Tests for documentation?

 
Ilja Preuss
author

Originally posted by Mark Herschberg:
Well, let's say performance is an issue. In my example you choose an array over a vector for performance reasons. Now performance is a requirement and there are specific tests for it, but those tests will not delve into this level.


Why wouldn't I write a test that delves into this level instead of writing the comment?
What would happen if I had neither the test nor the comment?
 
Frank Carver
Sheriff
Both tests and comments need thought. I can't escape the feeling, though, that somehow tests are definitive where comments are only indicative.
It's certainly possible to write unit tests which are too constraining and make too many assumptions about the implementation, just as it's possible to write comments which just describe the "what", rather than the "why". However, well written unit tests should allow refactoring "under the hood" to any implementation which satisfies the test "requirements".
If a project has a performance requirement, then this needs to be tested just as any feature requirement. If, during development, an area of code is recognized as a performance bottleneck, there should be performance unit tests around that area, so that any future change which pushes the performance outside acceptable bounds is caught.
Although a comment can quickly and succinctly express the design choice for using an array rather than a list, it can't help if (for example) a developer codes a search loop along the array without breaking out when it has found its target.
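To make that concrete, here is a minimal sketch of the kind of localized performance unit test I mean (JUnit 3 style; ResultCache and the 50 ms budget are invented purely for illustration):

import junit.framework.TestCase;

public class ResultCachePerformanceTest extends TestCase {

    // Illustrative budget only; a real bound would come from the
    // project's performance requirement.
    private static final long MAX_MILLIS = 50;

    public void testLookupStaysWithinBudget() {
        ResultCache cache = new ResultCache(10000); // hypothetical class under test
        long start = System.currentTimeMillis();
        for (int i = 0; i < 1000; i++) {
            cache.lookup(i);
        }
        long elapsed = System.currentTimeMillis() - start;
        assertTrue("lookup too slow: " + elapsed + " ms", elapsed < MAX_MILLIS);
    }
}

Any future change that pushes the lookup outside the bound fails the build, whereas a comment can only hope to be read.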
 
Mark Herschberg
Author

Originally posted by Ilja Preuss:

I think that he would remember very fast once he sits down with the partner to browse through the code to explain it.


My experience (both my own and that of others) suggests otherwise.

Originally posted by Ilja Preuss:

Well, I don't know your paradigm very well, so it may be true.


Well, I believe we are all working in the same paradigm. Perhaps XP is a paradigm shift; that is what I meant when I said maybe I'm missing something. As of yet, I'm not convinced about XP.

Originally posted by Ilja Preuss:

I don't think so - you will have a hard time getting me to write inline documentation, but I willingly write tests over tests. That is because I personally gain so much from the tests - they make my development life much easier *at the time I write them*. (http://c2.com/cgi/wiki?TestInfected)


Exactly! You've found benefits to tests; I've found benefits to documentation. My point is, just as a developer, upon realizing the value of tests, can change his behavior to always write them and keep them up-to-date, so, too, can a developer keep comments accurate and up-to-date.

--Mark
 
Ilja Preuss
author
This forum software is driving me mad - anytime I "reply with quotes", it strips all but the very last paragraphs...
OK, feeling better now...

Originally posted by Mark Herschberg:
[quoting Ilja:] I think that he would remember very fast once he sits down with the partner to browse through the code to explain it.
My experience (both my own and that of others) suggests otherwise.


Then he certainly needs either more comments or better code...


As of yet, I'm not convinced about XP.


I see. Are you curious enough to try a practice, say test-driven development, on a small test project?


Exactly! You've found benefits to tests; I've found benefits to documentation. My point is, just as a developer, upon realizing the value of tests, can change his behavior to always write them and keep them up-to-date, so, too, can a developer keep comments accurate and up-to-date.


I think it is easier with tests if you do test-driven development:
In TDD, you only write production code when you have a failing test. You write a single tiny test for the smallest code requirement you can think of that possibly doesn't exist yet. You watch the test fail, write just enough code to let it pass, then refactor a little bit. Then you are ready to think about the next test.
This way you have a very motivating cycle of test-code-refactor which makes it rather unlikely that you 'forget' a test.
I wouldn't know how to get this feedback loop with comments instead of tests.
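A rough illustration of the cycle (the Account class and its behaviour are invented for the example, not taken from any real project):

import junit.framework.TestCase;

// Step 1: write the smallest test you can think of for behaviour
// that doesn't exist yet, and watch it fail.
public class AccountTest extends TestCase {
    public void testNewAccountHasZeroBalance() {
        Account account = new Account();
        assertEquals(0, account.getBalance());
    }
}

// Step 2: write just enough production code to make the test pass,
// then refactor a little and move on to the next test.
class Account {
    public int getBalance() {
        return 0; // the simplest thing that could possibly work
    }
}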
 
Mark Herschberg
Author

Originally posted by Ilja Preuss:

Why wouldn't I write a test that delves into this level instead of writing the comment?


And test what? Both the array and the vector satisfy the functional requirements. Both will pass a functional test. The difference is performance. You can create a measurable timing test; however, the results may be hard to read. Let's suppose this method makes a call to another method which does the actual DB access, perhaps with some pre- and post-data modification. As the DB and the other code change, the raw performance numbers for this method will vary so much that it will be hard to measure. What test would you write?

Originally posted by Ilja Preuss:

What would happen if I had neither the test nor the comment?



Someone comes along later and changes the code, not understanding the underlying motivations for why it was implemented as it was. If the constraint hasn't changed (but others have, which is why the code is being modified), then in meeting one new constraint, we break an old one again. And the old one is soft, it's performance, so it's not inherently clear just from looking at the code how much effect this code has on it.

Originally posted by Ilja Preuss:

I see. Are you curious enough to try a practice, say test-driven development, on a small test project?


Yes.

I still think that 4 months after writing some code, even with tests, you're likely to forget some of the subtleties of it.

--Mark
 
Reid M. Pinchback
Ranch Hand
Ilja, I hadn't answered a question you had on another thread, and the answer is relevant here. You indicated you didn't know what I meant by an in-situ performance test during the CM discussions.
By an in-situ performance test I meant an integration evaluation that can only be performed by setting up a real system with all the interconnections to other required services. These situations are typically more complex than can be handled by unit tests. An example would be a web site for an online brokerage; the companies in that industry compete very aggressively on the time it takes customers to perform various operations, and the performance measurements take place in areas both within and external to your code.

You could try to write a unit test to determine the performance of those operations, but I doubt you'd do it with something like JUnit. The integration and data prep issues are way too complicated. There are QA automation tools that are usually used for that kind of thing. Even if you had such a unit test, as a developer you wouldn't have access to the physical resources to execute it. Setting up those tests can take man-days or man-weeks of work, and involves millions of dollars' worth of equipment.
So, I agree that you would want automated tests for your acceptance, but in situations like the one I described those test materials and the execution of them are very disconnected from the activity of the developers. It wouldn't hurt at all for a developer to have supplemental information that explained some of the whys and wherefores of how the design was approached in order to achieve the desired performance levels across the entire integrated environment.
I think it would be good to have a growing body of small unit tests that kept an eye on localized performance issues, but the physical resources required to give the developer a realistic test of performance cost a huge amount - more than enough for you and your family to retire on.
I will say that in following this thread I'm becoming more interested in the idea of TDD; I can see pushing it a lot farther than I'd considered previously.
[ July 11, 2002: Message edited by: Reid M. Pinchback ]
 
Ilja Preuss
author

Originally posted by Mark Herschberg:
And test what? [...] What test would you write?


If I thought the performance of this specific algorithm would be critical, I would write a performance test for this specific algorithm. And I would refactor it so that it doesn't directly access the database, so that I can test it with mocked-up data.
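A minimal sketch of what I have in mind (RecordSearcher and all the numbers are invented; the point is that the mocked-up data keeps the database out of the measurement):

import junit.framework.TestCase;
import java.util.ArrayList;
import java.util.List;

public class SearchPerformanceTest extends TestCase {

    private static final long MAX_MILLIS = 200; // illustrative budget

    public void testSearchOverMockedUpData() {
        // Build the data in memory instead of hitting the database,
        // so the test measures only the algorithm in question.
        List records = new ArrayList();
        for (int i = 0; i < 100000; i++) {
            records.add("record-" + i);
        }
        RecordSearcher searcher = new RecordSearcher(records); // hypothetical class

        long start = System.currentTimeMillis();
        searcher.find("record-99999");
        long elapsed = System.currentTimeMillis() - start;

        assertTrue("search took " + elapsed + " ms", elapsed < MAX_MILLIS);
    }
}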


Someone comes along later and changes the code, not understanding the underlying motivations for why it was implemented as it was. If the constraint hasn't changed (but others have, which is why the code is being modified), then in meeting one new constraint, we break an old one again. And the old one is soft, it's performance, so it's not inherently clear just from looking at the code how much effect this code has on it.


So, either the change would in fact be valid, or it would break the constraint, which would break at least one test. Am I missing something?


I see. Are you curious enough to try a practice, say test-driven development, on a small test project?
Yes.


So, can we support you in any way? Would you be interested in a TDD experiment in this forum?


I still think that 4 months after writing some code, even with tests, you're likely to forget some of the subtleties of it.


The Blood Ball project I used for code examples above has been in development for quite some time (it has been on SourceForge since March, IIRC, but we started it about a year before that). We work on it rather sporadically, most often just a few days a month.
It currently has 6000 LOC in 123 classes, each with 7+ methods on average. I certainly don't remember all of its subtleties, but I have an overview of its architecture and a rough sense of where to look for details.
Sometimes I get some of the subtleties wrong. Most often one of the 197 tests tells me immediately (some areas aren't yet tested, such as the GUI). Sometimes I just behaved stupidly, but most often I find a way to improve the code, so that I probably won't make the same mistake again.
From my current experience, I don't see why that shouldn't scale to bigger projects.
 
Mark Herschberg
Author
I'll be in AZ Friday through Monday, so I may be a little silent in this thread for the next few days.

Originally posted by Ilja Preuss:

So, either the change would in fact be valid, or it would break the constraint, which would break at least one test. Am I missing something?


Performance isn't something that works one day and doesn't the next. There's usually not some magic method which sucks up all the time, such that if you fix it, everything runs faster. Sometimes you do get lucky like that, although usually it's because someone wrote some wildly inappropriate code. I've gotten lucky once or twice, and the code was inappropriate. But by and large your code runs slowly because in dozens, scores, or even hundreds of places, things are a little off. Your objects are a little bigger than they need to be, you're taking unnecessary steps, doing unnecessary work, or simply not being as efficient as you could be.
Let's consider the change above. You change from an array to a vector (or whatever it was I initially described). Now what exactly is the test? It basically records whether an operation completed in X amount of time, where X is a very clearly defined number (BTW, memory works just as well as a constraint). Well, before the change, the test ran in time Y, less than X; now it runs in time Y + D1, also less than X. Next change, Y + D1 + D2, and so forth. After N changes it runs in time Y + D1 + ... + DN < X. Change N+1, however, puts you over the X time budget. Oops, well, I guess that change was inappropriate. But the problem isn't in that one change, it's in all those changes together. In fact, maybe of all the changes you need to make, change N+1 makes more sense than change 2, but it's too late: it was change N+1 which broke the code, so that's where you start and focus your investigation.
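To put hypothetical numbers on it: say the budget X is 100 ms, the baseline Y is 60 ms, and each of ten small changes adds roughly 5 ms. A timing test like the sketch below (OrderService and all the numbers are invented for the example) keeps passing at 65 ms, 70 ms, 75 ms... and fails only at the change that finally crosses 100 ms, even though the real cause is spread across all the earlier changes.

import junit.framework.TestCase;

public class OperationBudgetTest extends TestCase {

    private static final long BUDGET_MILLIS = 100; // the hypothetical X

    public void testOperationWithinTimeBudget() {
        OrderService service = new OrderService(); // hypothetical class
        long start = System.currentTimeMillis();
        service.performOperation();
        long elapsed = System.currentTimeMillis() - start;

        // The assertion only sees the total; it cannot tell which of the
        // accumulated changes ate up the slack.
        assertTrue("over budget: " + elapsed + " ms", elapsed < BUDGET_MILLIS);
    }
}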
Are you familiar with soft requirements, i.e. "-ilities?" (Plug: I spend about 10 pages in my book discussing them ;-) Examples include: Security, Reliability, Availability, Scalability, Manageability, Portability, Testability, and so on. These requirements are met in a very non-local fashion, with bits and pieces of the implementation scattered across the system. Examined locally, some code used for one of these requirements may not make much sense. Unit testing is local. As such, it cannot adequately express the overall motivation for the necessary code. (If I am short selling unit tests, please correct me.)

--Mark
PS Ilja, do you have an example of code which is undocumented but sufficiently unit tested, such that I, as an outsider, can understand it simply by looking at the code and tests?
 
Ilja Preuss
author

Originally posted by Mark Herschberg:
I'll be in AZ Friday through Monday, so I may be a little silent in this thread for the next few days.


And I'll have a visitor over the weekend, so the whole thread might get a little bit silent...

Are you familiar with soft requirements, i.e. "-ilities?" (Plug: I spend about 10 pages in my book discussing them ;-)


Please forgive my ignorance - which book would that be? (Amazon doesn't seem to know about a book associated with you... )


Unit testing is local. As such, it cannot adequately express the overall motivation for the necessary code. (If I am short selling unit tests, please correct me.)


You are right, you will need other tests besides the unit ones. (The current trend in XP is to speak of "Programmer Tests" and "Customer Tests" because of this - the usage of the term "unit tests" was causing more and more confusion in the XP community.)
But the comments you are speaking about seem just as local to me. As you argued yourself, the array isn't essential to fulfilling the performance requirement. Possibly it would be advisable to use the slower Vector instead and streamline the algorithm, or even streamline another part of the code. It might even be that I know a way to make the algorithm as fast or faster just *by* using a Vector...
The problem with a comment is that it doesn't tell me about the possible alternatives. A performance test does.
In my experience, almost every time I find this type of comment in the code, it doesn't help me understand the true concern anyway. Most often, when I ask around, I find out that either it is probably outdated or nobody really knows the real reason. Sometimes the code and comment haven't been touched for years, kept purely out of caution.
That doesn't mean I would never like to have an indicator of a performance requirement directly in the code. But if I did, I would try to put it *directly* into the code - what keeps me from naming a method/class/whatever "veryFastXxx"?


PS Ilja, do you have an example of code which is undocumented but sufficiently unit tested, such that I, as an outsider, can understand it simply by looking at the code and tests?


Well, you can look at http://www.xp123.com/xplor/xp0201/index.shtml; there are some example implementations at the bottom. (I haven't put my own solution online yet, but I will if you are interested.) Of course, it might also be interesting to do the experiment yourself, so think hard before taking a look at the solutions.
The only other code I can offer is the aforementioned Blood Ball project. It is much more complex, but also not in as good a state. Nevertheless, I would guess that you could understand it rather well, especially the well-tested model part, if you knew the rules of the implemented game.
If you are interested, you can take a look at a newer version of the rules at http://www.games-workshop.com/Warhammerworld/bloodbowl/l_rule_bbok_a.htm to get a rough feeling of what the game is about. To look at the source, you could either use the aforementioned link to directly browse the CVS repository, or go to http://sourceforge.net/projects/bloodball to download the latest source release.
I would be very interested in hearing about your opinion on the code.
[ July 12, 2002: Message edited by: Ilja Preuss ]
 
Leverager of our synergies
Speaking of performance, I wonder if you have heard about "aspect-oriented programming" (here is the paper, the most performance-related among other papers on the subject) and what you think about it.
 
Ranch Hand
Nice to hear from you, Steve. It has been many months since your presence was felt in this forum, in fact in the Ranch. You seem to be very busy.
shailesh.
 
Frank Carver
Sheriff
Um, I'm a little puzzled here. Is this the right thread for this message? Who is Steve?
 
Ilja Preuss
author

Originally posted by Reid M. Pinchback:
So, I agree that you would want automated tests for your acceptance, but in situations like the one I described those test materials and the execution of them are very disconnected from the activity of the developers. It wouldn't hurt at all for a developer to have supplemental information that explained some of the whys and wherefores of how the design was approached in order to achieve the desired performance levels across the entire integrated environment.
I think it would be good to have a growing body of small unit tests that kept an eye on localized performance issues, but the physical resources required to give the developer a realistic test of performance cost a huge amount - more than enough for you and your family to retire on.


I understand that it might be very costly to set up a whole-system test, so you would want to run it rather seldom. OTOH, one of the outputs of such a test should be a specific performance requirement for the software side of the system. I think it would be very worthwhile to capture such a requirement in less costly automated system tests (where the more costly components might get mocked up, for example). Do you disagree?


I will say that in following this thread I'm becoming more interested in the idea of TDD; I can see pushing it a lot farther than I'd considered previously.


 
Reid M. Pinchback
Ranch Hand

Originally posted by Ilja Preuss:

OTOH, one of the outputs of such a test should be a specific performance requirement for the software side of the system. I think it would be very worthwhile to capture such a requirement in less costly automated system tests (where the more costly components might get mocked up, for example). Do you disagree?


Nice in theory, doesn't always work in practice. I was on a large distributed systems project that tried to do exactly that, and the system ended up over a year late, largely because of performance problems.
I suspect you are working under the assumption that the performance of a complete system is something like a simple sum of the individual components, and largely independent of the hardware environment (making appropriate adjustments for CPU or network bandwidth). In most small-to-medium-scale systems that is true. It isn't necessarily true for large asynchronously-connected and (somewhat) parallelized systems. Something like a minor (to a developer) tweak in a messaging protocol can entirely tank the global performance (I've seen it happen, luckily I wasn't the tweaker).
Sometimes you can't predict bottleneck and harmonic effects readily without having a duplicate environment. You do create such environments, but in the brokerage world they aren't cheap, and hence they become a highly constrained resource (you have to share access to them). They do help you to identify the more stable and unstable parts related to performance. For the stable areas you could create the kinds of tests you describe, but the unstable areas get tuned and declared 'hands off'. When something significant changes in the environment (hardware, O/S, network topology) you have to retest to make sure you are still ok. If the universe is in a good mood, the stable areas stay stable. Sometimes the universe isn't in a good mood (e.g. hotspot problems on Solaris prior to JDK 1.4). Then you end up with code hacks that have nothing to do with functionality or unit tests for performance. You end up with comments in the code that say "JVM for JDK 1.3 is broken, changed to work around bug".
Another thing that makes the simulation approach difficult is that you may be simulating things for which you have inadequate information. You consider yourself lucky if you have decent API documentation, but you will have little or no information about performance behaviour for various kinds of transactions or loading situations. You could evolve to a point where you reverse-engineered that knowledge; then I think it becomes more feasible to create the simulations you describe. If you are on a team that lives with the system over its lifecycle (e.g. in-house development, as opposed to one-shot work done by a consulting firm), this would be a good thing. You couldn't do TDD on the performance issues for your early product releases (because you didn't have the necessary knowledge at that time), but you could evolve to a point where you could do a lot more of it as you learned about performance issues.
[ July 16, 2002: Message edited by: Reid M. Pinchback ]
 
Ilja Preuss
author

Originally posted by Reid M. Pinchback:
Nice in theory, doesn't always work in practice. I was on a large distributed systems project that tried to do exactly that, and the system ended up over a year late, largely because of performance problems.


Please explain how more local inline comments would have helped.


I suspect you are working under the assumption that the performance of a complete system is something like a simple sum of the individual components, and largely independent of the hardware environment (making appropriate adjustments for CPU or network bandwidth).


No, I am not assuming that. I am assuming that to optimize the performance of the system by tweaking the software, we need to know the requirements the software has to fulfill to get better performance. And that if we know these requirements, we might be able to express them as tests.

Something like a minor (to a developer) tweak in a messaging protocol can entirely tank the global performance (I've seen it happen, luckily I wasn't the tweaker).


If we aren't aware of such problems, certainly no comment can help us. If we *are* aware of such problems, can't we write automatic tests preventing the inappropriate tweaking of the messaging protocol? What are the performance-relevant properties of the protocol?


Sometimes you can't predict bottleneck and harmonic effects readily without having a duplicate environment. You do create such environments, but in the brokerage world they aren't cheap, and hence they become a highly constrained resource (you have to share access to them). They do help you to identify the more stable and unstable parts related to performance.


What exactly do you mean by stable/instable in this regard?

Then you end up with code hacks that have nothing to do with functionality or unit tests for performance. You end up with comments in the code that say "JVM for JDK 1.3 is broken, changed to work around bug".


I think it would be a good idea to isolate this workaround code as well as possible from the rest of the code (because it will likely change for different reasons). If we are able, for example, to extract it into its own method, I might be inclined to name that method something like "JVM1_3BugWorkaround".
Of course, if there is more information (like a link into Sun's bug database), I certainly would like to add a comment! I am not totally opposed to writing comments in code - it's just that for most comments I see, there seem to be better alternatives. I still think that most code can be written in a way that it hardly needs any comments...
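Something along these lines, sketched with invented names (ReportGenerator is purely illustrative):

public class ReportGenerator { // hypothetical class, for illustration only

    public void generate() {
        jvm1_3BugWorkaround();
        // ... normal generation logic continues here ...
    }

    // Isolated in its own method so it is easy to find and easy to delete
    // once the broken JVM is no longer supported. A comment with a link
    // into Sun's bug database would go right here.
    private void jvm1_3BugWorkaround() {
        // workaround code (omitted)
    }
}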


Another thing that makes the simulation approach difficult is that you may be simulating things for which you have inadequate information. You consider yourself lucky if you have decent API documentation, but you will have little or no information about performance behaviour for various kinds of transactions or loading situations.


Again, how would comments help in that situation?
 
Reid M. Pinchback
Ranch Hand
You were claiming that you could write tests for these things, and in particular you've been espousing TDD, which requires that you know how to write the test before you write the code. My point was that sometimes you can't economically write such tests, particularly when you don't have detailed information a priori to allow complex test rigging to be written. It is not "agile" to spend millions of dollars writing test rigs for complex asynchronous environments when you could just execute the tests in the real but duplicated hardware environment.
Unless it is your intention to withhold current information from future developers, there is nothing wrong with adding some comments to the code to explain things for which it wasn't feasible to write such test rigs. I do not see how it is "agile" to be so unwilling to use normal human written communications to provide information to developers. Why spend months writing complex test rigs for the equivalent of a few lines or paragraphs of comments? I'd invite you to find out from your accounting department the average hourly cost for employees. I don't see TDD being a universal panacea, particularly in situations where the cost of providing the tests swamps the cost of writing and documenting code and doing normal QA of the system. That to me is not being agile.
Additionally, while I agree that documentation that is not in sync with the code can be misrepresentative, the very same thing is true of tests. The whole point of TDD, as I understand it, is that developers can routinely execute those tests to ensure the correctness of their work. THAT is a gross misrepresentation when the tests are executed in a development environment that does not reflect the realities of the deployment environment. I hate to think of the months of time I've lost to bullet-headed developers who have insisted, incorrectly, that code was correct because it tested OK on their workstation. Test success tells you nothing when the test conditions are outside the control of the code you are writing. The whole world does not revolve around software. Other things happen outside of software, like hardware and people.
[ July 17, 2002: Message edited by: Reid M. Pinchback ]
 
Ilja Preuss
author

Originally posted by Reid M. Pinchback:
You were claiming that you could write tests for these things, and in particular you've been espousing TDD, which requires that you know how to write the test before you write the code. My point was that sometimes you can't economically write such tests, particularly when you don't have detailed information a priori to allow complex test rigging to be written.


Which type of information do you need for writing the test, but not for making the test pass?

It is not "agile" to spend millions of dollars writing test rigs for complex asynchronous environments when you could just execute the tests in the real but duplicated hardware environment.


IMO, it wouldn't be agile to run a critical test less than daily. Possibly you are on a project which couldn't possibly be agile, though I don't think that is a generally reasonable way to bet.
I also have some problems imagining a project where tests needed to cost millions - except perhaps projects costing billions in total.
Don't get me wrong - there certainly is software that is hard to test. And it probably wouldn't pay back to write complex test rigs for it. But almost always you don't have to - you can instead make the software easier to test by refactoring the software itself. Such an approach pays back manyfold, as the most important property of a testable system is a very decoupled design.
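A small sketch of what I mean by refactoring for testability (all names invented): hide the expensive collaborator behind an interface, so that tests can substitute a cheap in-memory fake for the real, costly environment.

interface PriceFeed {
    double quote(String symbol);
}

class FakePriceFeed implements PriceFeed {
    public double quote(String symbol) {
        return 42.0; // canned value - no network, no hardware
    }
}

public class PortfolioValuer {
    private final PriceFeed feed;

    public PortfolioValuer(PriceFeed feed) {
        this.feed = feed; // real feed in production, fake in tests
    }

    public double value(String symbol, int shares) {
        return feed.quote(symbol) * shares;
    }
}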


Unless it is your intention to withhold current information from future developers, there is nothing wrong with adding some comments to the code to explain things for which it wasn't feasible to write such test rigs. I do not see how it is "agile" to be so unwilling to use normal human written communications to provide information to developers.


I am not unwilling to use "normal" written communication - when putting it directly into code is impossible. I am certainly unwilling to easily give up finding a way to put information directly into code, though.
BTW, Ron Jeffries just started a series of articles on expressing ideas in code: http://www.xprogramming.com/xpmag/acsExpressIdeas.htm


Why spend months writing complex test rigs for the equivalent of a few lines or paragraphs of comments?


In my experience, every minute I spend writing tests saves me a bunch of minutes debugging. Additionally, tests seem to have a much more positive impact on design than comments.

I'd invite you to find out from your accounting department the average hourly cost for employees.


You seem to believe that testing *costs* time, whereas I think that it *saves* time. I wonder why that is the case.

I don't see TDD being a universal panacea


You are right, it isn't. OTOH I also don't see much value in dismissing it easily.

particularly in situations where the cost of providing the tests swamps the cost of writing and documenting code and doing normal QA of the system. That to me is not being agile.


If "doing normal QA" means running tests only late in the development, that certainly isn't agile at all, however reasonable it might be.

Additionally, while I agree that documentation that is not in sync with the code can be misrepresentative, the very same thing is true of tests. The whole point of TDD, as I understand it, is that developers can routinely execute those tests to ensure the correctness of their work. THAT is a gross misrepresentation when the tests are executed in a development environment that does not reflect the realities of the deployment environment. I hate to think of the months of time I've lost to bullet-headed developers who have insisted, incorrectly, that code was correct because it tested OK on their workstation.


What was the problem with convincing them that the tests were wrong/incomplete? How did you know that the code was incorrect?
(BTW, waiting months before deploying the system isn't agile either... )
Regards, Ilja
[ July 23, 2002: Message edited by: Ilja Preuss ]
 