some problems with XP ...

 
Ilja Preuss

Originally posted by Reid M. Pinchback:
Even Cockburn, who you like to quote, admits that the amount of process oversight required for project work depends on the scale and risks of the project.


OTOH, Crystal Clear, his method for low to medium risk projects with up to six developers, is even less "process oriented" than XP or Scrum. Ron Jeffries likes to summarize it as "come together in peace and release software monthly"...
 
Frank Carver
Reid wrote: It gets back to the notion of independence (or lack of it) between teams. There is nothing wrong with XP per se *if* they augment it with something: an understanding of infrastructure and acceptance of some reasonable corporate standards. Without that, you can end up with a large collection of deliverables from small projects that are so idiosyncratic that they don't co-exist. You can end up with something that feels like a legacy code burden in a very short time.
Ilja wrote: Typically it is the responsibility of a project manager to keep the external stakeholders up to date about the project, to communicate their needs to the team, and to make it their concern. How this happens is outside the scope of XP - I don't know whether another agile process touches this issue.
I guess what we have here is a difference in understanding how XP (in particular) proposes to solve certain problems. One of the issues not brought out very fully in the original XP book is something Kent Beck probably just does without thinking. I try to communicate this by saying something like "there are no 'soft' requirements. If you want it, decide how much you want it, and tell us how we'll know when you've got it". Ron Jeffries probably has a stronger and more pithy way of saying that if there is no acceptance test for something it just doesn't exist.
XP has an explicit mechanism for managing things like interacting with other corporate projects and processes, but it's not very obvious, because it is the same mechanism used for managing everything else. If something is important to a "customer", there should be a user story for it, so that it can be prioritized and the team can cost it. It might well be important that a project should be able to share data with PeopleSoft, or be available in some CORBA or DCE repository, or not claim domain names reserved by other projects, or be delivered with a document conforming to some corporate standard, or whatever. But until there is a story to base priorities and estimates on, the team (and often, the customer) doesn't know how important it is. To an XP team there are no black-and-white decisions, everything has a relative importance and cost.
Agile advocates tend to look with a raised eyebrow at any process which mandates something "just because". This is often an indication of a process which doesn't understand and adapt itself. It's also common in organizations with a lot of internal politics for these kind of mandatory requirements to be used as a way of justifying the existence of a person, team or department which doesn't really contribute to "the bottom line".
Can you give me an example of such a soft requirement which is (a) really needed by the organization, (b) is always more valuable than (for example) delivering a usable increment of functionality to a customer, and (c) for which it is impossible to formulate an acceptance test? If you can, I'll concede the point.
 
Reid M. Pinchback

Originally posted by Ilja Preuss:
So, if I take the step back and see that some of my work is process work - what should I do about it???


Standardize it and determine what is required to be able to retain the process. Document the required inputs, the steps involved, the skills necessary, and the outputs that will be generated. Support management efforts to collect data on the process so that resources can be allocated effectively (and help managers do one of their jobs - provide some proof of the productivity of their staff during the typical annual budget wars). Do what you can to make other forms of work be done in a way that allows the process to exist.


Maybe. OTOH, maybe they just use a form of requirements management you aren't used to. Perhaps they even think of you as being in fear of their form of requirements management?


When somebody can't adequately tell me the requirements they are working to, and what the source of those requirements is, when work can't be effectively divided and parallelized between groups, yes... I fear the consequences of that form of requirements management.


I assume as rude as it sounds to me - you probably agree that there are many projects that are much more complex than a simple subscription website but where a single failure won't cause the loss of life or ruin a whole corporation.


Hmmm... first time I've heard somebody claim I was rude because I agreed with them. Particularly when the point I was making was straight out of the project complexity/risk continuum described by Cockburn. Guess I just assumed that by providing the two extreme points, a smart developer (e.g. you) would connect a line between them and draw the obvious conclusions. Congrats; looks like you did.


Rudeness objection - I don't find it very constructive to assume that this "obfuscation" is done mischievously.


You really do seem to have a bee in your bonnet about rudeness, and the mischievous claim is all your own. Maybe the word 'debate' translates into something more derogatory in your native tongue than it is in English. I could mention another debating technique that this reminds me of, but then no doubt I'd be rude again.


Are you sure that your definition of "process" is the only valid one?


If I had invented it all by myself, you would have a point. However, I didn't; this is old hat in the process improvement/process management world (a big part of my job, apparently not something that drives yours, so terminology expectations will likely differ... and that was exactly the point I was trying to make).
At the risk of being "rude", how the heck do you ever expect people to agree on anything if they base their discussions on different semantic concepts? A very fundamental aspect of communications is to ensure that the participants are dealing with the same semantic concepts; when two parties use the same words but are talking about different things, it is virtually impossible to resolve differences via debate. The first step is to clarify the underlying semantics so that all the issues are on the table. Doing so is not rude. Not doing so would render any debate pointless and any apparent conclusions highly suspect.
 
Reid M. Pinchback

Originally posted by Frank Carver:
I try to communicate this by saying something like "there are no 'soft' requirements. If you want it, decide how much you want it, and tell us how we'll know when you've got it". Ron Jeffries probably has a stronger and more pithy way of saying that if there is no acceptance test for something it just doesn't exist.


I strongly agree with the sentiment, with I guess some qualification. The acceptance test may need to be anecdotal as opposed to mechanical (maybe this is what you mean by a 'soft' requirement?). I can think of three examples right off the top of my head:
1) Security issues (particularly for 3rd-party systems brought in-house - hence no source code to examine). Since you can't prove a negative, you can't prove that a system can't be broken into, hence you can't automate an acceptance test to prove it can't be broken into. You can clearly describe the kinds of investigation that you want performed on the resulting system before certifying it as acceptable. Automating the investigation could be cost-prohibitive.
2) Java threading issues. With multiple JVMs and multiple possible schedulers, the sequence of thread scheduling is effectively non-deterministic. Without getting into a lot of very complex algebra for modelling concurrent systems, it can be very difficult to automate tests to prove how the system will perform. Again, you can't prove a negative, so you can't automate a test to prove the impossibility of deadlock. You can choose algorithms known to eliminate deadlock (see the sketch after this list), but you can't really test to see if the algorithm was implemented correctly. You can describe a QA process that makes reasonable efforts to test the system, but as with security this is basically a good-faith attempt to reduce the probability of problems, not completely eliminate them.
3) Usability and brand identity. Hard requirements describe functionality, but usability and branding issues are subjective and you have to try and assess what the 'typical' reaction would be of your customer base. Again, you can describe the investigations you want performed to assess usability and then feed that information back to the developers, but you can't automate it all up-front. This is one area where organizations sometimes create standards (e.g. for look and feel, accessibility) that should be adhered to, because delayed feedback is more expensive than factoring known issues into the design up front. The standards evolve based on the experiences of previous projects (or I suppose even from legal battles, in the case of ADA issues, but I'm not aware of any software-based ADA litigation).
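To make the threading point concrete, here is a minimal sketch of one such deadlock-eliminating algorithm - acquiring locks in a single global order. The Transfer and Account classes are hypothetical, purely for illustration; a test can exercise this method, but cannot prove that every caller in a real system respects the ordering rule.

    public class Transfer {

        static class Account {
            final int id;       // global ordering key
            int balance;
            Account(int id, int balance) { this.id = id; this.balance = balance; }
        }

        // Always lock the account with the smaller id first. If every thread
        // follows this rule, a cycle of waiting threads - a deadlock - cannot
        // form, because locks are only ever acquired in ascending id order.
        static void transfer(Account from, Account to, int amount) {
            Account first = (from.id < to.id) ? from : to;
            Account second = (from.id < to.id) ? to : from;
            synchronized (first) {
                synchronized (second) {
                    from.balance -= amount;
                    to.balance += amount;
                }
            }
        }
    }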


Can you give me an example of such a soft requirement which is (a) really needed by the organization, (b) is always more valuable than (for example) delivering a usable increment of functionality to a customer, and (c) for which it is impossible to formulate an acceptance test? If you can, I'll concede the point.


If by 'soft' you mean something that is infeasible or impossible to automate, then I can do that easily.
In regulated environments (e.g. software done in support of drug development), you have the perfect example. Drug development regulations (e.g. CNIL, FDA) are very stiff. You side-step your documented development process at extreme peril, and the regs determine some of the characteristics of that process and some requirements for the deliverables (e.g. auditability and security).
You can entirely tank a $10-$100 million clinical trial with a software tool that collects or reports on data. The tool could have high functionality, but if you did the work in a way that violated regulations (these regs, like the security examples above, aren't readily subject to automated testing), your entire drug application could be tossed out or delayed (and the delays and the impact on your stock are at least as expensive as the clinical trials themselves).
Having automated tests is part of helping to substantiate that you have met certain regulatory requirements (mostly verification, a bit on validation), but others (e.g. FDA regs requiring recorded and signed meeting minutes for the development team) would be infeasible to automate.
Not all organizations face these specific challenges, but many face analogous ones. A lot of NASA subcontract work has to go through the IV&V facility, which I doubt you'll be able to do without process documentation and process logging. There are similar acceptance regs that impact banking and trading systems. Telecommunications in the US has its own suite of auditing issues. Medical devices, like drugs, are subject to heavy regulation (the company that created the pacemaker - something of high value - found that out the hard way, and I believe some of the founders did jail time). There are probably many more examples that I've never even heard of.
On the purely technical side, data architecture is another example. I've seen uncoordinated data designs across multiple projects result in a rapid explosion of almost-duplicate data in an organization that can't be unified. It is a common problem in older organizations and is the typical reason for creating data warehouses (somebody gets stuck with the ugly job of reverse-engineering the data designs of the legacy systems in the hope of unifying disparate renderings of data). Any non-trivial company can have hundreds of millions of dollars of value in its data, but that value is largely inaccessible because of problems in how the data was represented. An added problem is explosive growth in demand on backup systems (to deal with all the duplicate and aging data). Next comes demand for high-throughput networks just to keep the backup cycle from grinding all other network-dependent work to a halt.
 
Frank Carver
Reid wrote: I strongly agree with the sentiment, with I guess some qualification. The acceptance test may need to be anecdotal as opposed to mechanical...
I'll buy this, for most of the kinds of cases you describe, but only where all cost-effective possibilities of automation have been exhausted. In particular I think that both the security case and the multithreaded case would benefit from automation of known error cases.
It's not that up-front automated testing in development ever proves the product works 100% - that's both philosophically and practically impossible. The benefit is that it frees the developers from the worry that any given set of changes will cause known faults to (re)appear. The more the behaviour of the system is described as automated tests, the more bold and confident the developers can be in refactoring, improving and optimizing the system to meet new requirements.
So I'd hope to see (for example) automated port-scan and buffer-overrun tests as part of a security test suite, and tests added for specific known exploits just as tests are added for faults discovered "in the field". Similarly the multithreaded case would greatly benefit from basic concurrent load tests and specific tests for known issues as they are discovered.
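As an illustration, a port-scan check of this kind can be a very small piece of Java. The host name and allowed-port list below are hypothetical, and a real suite would first drive the application into each piece of functionality, since some products only bind certain ports on demand:

    import java.io.IOException;
    import java.net.Socket;

    public class PortScanCheck {

        static final String HOST = "test-server.example.com"; // hypothetical test host
        static final int[] ALLOWED = { 80, 443 };              // ports the spec permits

        public static void main(String[] args) {
            for (int port = 1; port <= 1024; port++) {
                if (isOpen(HOST, port) && !isAllowed(port)) {
                    System.err.println("FAIL: unexpected open port " + port);
                }
            }
        }

        // A connection succeeds only if something is listening on the port.
        // Note: this uses the OS default connect timeout.
        static boolean isOpen(String host, int port) {
            try {
                Socket s = new Socket(host, port);
                s.close();
                return true;
            } catch (IOException e) {
                return false;
            }
        }

        static boolean isAllowed(int port) {
            for (int i = 0; i < ALLOWED.length; i++) {
                if (ALLOWED[i] == port) return true;
            }
            return false;
        }
    }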
I wouldn't be happy with a blanket statement that such problems are "too hard to test, so we won't do any testing". The growth of automated partial test suites for such tricky areas is one of the ways in which a smart business can learn and grow.
As an aside, I had a similar sort of discussion at lunch today. A colleague was talking about the large and expensive "valuing diversity" training program recently implemented across the company (it's a big company, many thousands of employees). I asked my usual "how do you know if/when it is done" question, and got the answer that we probably won't know until either it is tested in court with some sort of unfair dismissal case or it pays off in very wishy-washy recruiting and corporate performance terms.
Oh well. Glad I'm not working on it.
 
Reid M. Pinchback

Originally posted by Frank Carver:
I'll buy this, for most of the kinds of cases you describe, but only where all cost-effective possibilities of automation have been exhausted.
...
So I'd hope to see (for example) automated port-scan and buffer-overrun tests as part of a security test suite, and tests added for specific known exploits just as tests are added for faults discovered "in the field".


Agreed. Random but not completely irrelevant aside: I once had to work on a security evaluation for a product that the vendor swore on a stack of bibles only used a couple of well-defined ports. It wasn't true, and a port scan probably wouldn't have caught it - certain ports only became bound and in use when particular pieces of functionality were invoked. The problem was that the product was going to be used in a relatively hostile networking environment. Pain-in-the-backside.


I wouldn't be happy with a blanket statement that such problems are "too hard to test, so we won't do any testing".


Agreed, that would just be lazy. Too bad you couldn't automate a test for lazy.


I asked my usual "how do you know if/when it is done" question, and got the answer that we probably won't know until either it is tested in court with some sort of unfair dismissal case or it pays off in very wishy-washy recruiting and corporate performance terms.


The "when done" question is a very important one in projects. I've noticed that lawyers dodge that question. It no doubt would reduce their billable hours. :roll:
 
Ilja Preuss

Originally posted by Reid M. Pinchback:
Security issues (particularly for 3rd-party systems brought in-house - hence no source code to examine). Since you can't prove a negative, you can't prove that a system can't be broken into, hence you can't automate an acceptance test to prove it can't be broken into. You can clearly describe the kinds of investigation that you want performed on the resulting system before certifying it as acceptable. Automating the investigation could be cost-prohibitive.


I totally agree. That is why an XP team wouldn't accept a requirement like "can't be broken into" - after all, a requirement you can't know whether you've met is a useless requirement. An XP team would work with the customer to extract the "hard" requirements from the "soft" one - for example "locks an account after three successive failed logins".
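As an illustration, the acceptance test for that hard requirement could be as small as the following JUnit sketch (the Account class and its methods are hypothetical):

    import junit.framework.TestCase;

    public class AccountLockoutTest extends TestCase {

        public void testAccountLocksAfterThreeFailedLogins() {
            // Account is a hypothetical class under test.
            Account account = new Account("ilja", "secret");
            for (int i = 0; i < 3; i++) {
                assertTrue("login with a wrong password must fail", !account.login("wrong"));
            }
            // After three successive failures the account must be locked:
            // even the correct password is now rejected.
            assertTrue("account should be locked", account.isLocked());
            assertTrue("login must be rejected while locked", !account.login("secret"));
        }
    }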


Java threading issues. [...] You can choose algorithms known to eliminate deadlock, but you can't really test to see if the algorithm was implemented correctly.


Why? Aren't these algorithms deterministic?

Usability and brand identity. Hard requirements describe functionality, but usability and branding issues are subjective and you have to try and assess what the 'typical' reaction would be of your customer base.


Yes - this is in fact outside the scope of XP. Cockburn has an example in his ASD book where he augments XP with Usage Centered Design. Also, Interaction Design has raised some interest in the XP community lately.
Of course, as you also said (if I understand you correctly), once you have decided these issues you can (and should) again extract hard requirements from them - though the look of the GUI will most often need a "manual acceptance test"...


I'm assuming that by 'soft' you mean something that is infeasible or impossible to automate, then I can do that easily.
In regulated environments (e.g. software done in support of drug development), you have the perfect example. Drug development regulations (e.g. CNIL, FDA) are very stiff. You side-step your documented development process at extreme peril, and the regs determine some of the characteristics of that process and some requirements for the deliverables (e.g. auditability and security).


Yes, requirements on the process are certainly hard to test automatically.


On the purely technical side, data architecture is another example. I've seen uncoordinated data designs across multiple projects result in rapid explosion of almost-duplicate data in an organization that can't be unified.


I am not sure about this - once you have identified the duplicate data, wouldn't it be possible to test that the projects are using the same data storage?
 
Ilja Preuss

Originally posted by Reid M. Pinchback:
So, if I take the step back and see that some of my work is process work - what should I do about it???
Standardize it and determine what is required to be able to retain the process. Document the required inputs, the steps involved, skills necessary, the outputs that will be generated.


In which ways would that be valuable?
(Notice that I don't question whether there would be any value in doing that; I am just interested in where *you* see the value.)

Support management efforts to collect data on the process so that resources can be allocated effectively (and help managers do one of their jobs - provide some proof of the productivity of their staff during the typical annual budget wars). Do what you can to make other forms of work be done in a way that allow the process to exist.


Luckily, I have never yet worked in an organization with annual budget wars.
I am not sure that I understand the other points. IIRC, you mentioned configuration management as one part of process work. Perhaps you could use this example to elaborate on what data you would suggest collecting, and on which other forms of work would have to be adjusted to allow the process to exist? Thanks again!


If I had invented it all by myself, you would have a point. However I didn't, this is old hat in the process improvement/process management world (a big part of my job, apparently not something that drives yours, so terminology expectations will likely differ... and that was exactly the point I was trying to make).


It seemed to me that you tried to make the point that your definition of "process" was the only valid one and that agile advocates were using a wrong definition. I apologize if I got that wrong...


I assume as rude as it sounds to me - you probably agree [...]
Hmmm... first time I've heard somebody claim I was rude because I agreed with them.


Duh, that sentence was missing something - it was meant to be "I assume this wasn't meant as rude as it sounds to me ..."

Particularly when the point I was making was straight out of the project complexity/risk continuum described by Cockburn. Guess I just assumed that by providing the two extreme points, a smart developer (e.g. you) would connect a line between them and draw the obvious conclusions. Congrats; looks like you did.


Guess I assumed that you *might* have wanted to imply that projects needing nothing more than an agile process can't be much more than menial gewgaws. :roll:
Hell, written communication is so - incomplete! (Well, that is probably why agile processes don't want to rely on it too much... )


Maybe the word 'debate' translates into something more derogatory in your native tongue than it is in English.


I wasn't objecting to the word 'debate', but to your suggestion that agile advocates might mischievously 'obfuscate' the debate.


At the risk of being "rude", how the heck do you ever expect people to agree on anything if they base their discussions on different semantic concepts? [...] The first step is to clarify the underlying semantics so that all the issues are on the table. Doing so is not rude. Not doing so would render any debate pointless and any apparent conclusions highly suspect.


Yes, you are right, of course. Again, I wasn't objecting to your effort of clarifying what you mean by a process.
 
Frank Carver
Ilja wrote: Luckily, I have never yet worked in an organization with annual budget wars.
Ah, this helps to clarify why there might be a difference of understanding. I (and I'm pretty sure Reid) have worked on a lot of projects in large organizations where the goals of most of the individuals can be quite different from the goals of the organization as a whole. Where I work at the moment, for example, it makes no perceptible difference to the company turnover, profit or share value whether each individual does his or her job well or badly. On the other hand it makes a lot of difference to personal satisfaction, annual pay rises and bonuses if people play the game of internal politics well. "Empire Building" - getting to be in charge of as many people and as much budget as possible; "Blame Avoidance" - it doesn't matter whether something succeeds or fails, as long as you don't get blamed for failure; "Brown Nosing" - associating with people with the power to improve your own position, and so on.
In an organization such as this, crazy things can happen. I'll tell you about the project I'm currently working on one day, once I'm out of it.
Ilja wrote: I am not sure that I understand the other points. IIRC, you mentioned configuration management as one part of process work. Perhaps you could use this example to elaborate on what data you would suggest collecting, and on which other forms of work would have to be adjusted to allow the process to exist? Thanks again!
Imagine a situation where team A produces some software (say project a). It's well factored, lean, mean and does its job. In a spirit of inter-team cooperation, another team (B) asks to use one of the components (call it a1) in their project (b). As far as B is concerned, a1 is now their property, so they start to modify it and refactor it into b. And guess what, Team C wants it too, and clones the version from team B's repository.
Then something changes - maybe team A discovers a security loophole or memory leak in a1, or perhaps some company or government business assumptions embodied in a1 need to be changed. How does the organization as a whole ensure that all the projects keep up with all the requirements? Aargh. Configuration Management nightmare.
So the almost smart organization decides that all this copying and parallel updating is bad. So all code goes into a single repository, and if you re-use something you automatically get the latest. Works great for security fixes and underlying rules changes, but if team A refactors the interface to work better with project a, it can suddenly break b and c. Then b changes, and it breaks a and c. Aargh. Configuration Management nightmare.
So it starts again, and again. This kind of stuff is why configuration management in a large organization is not a trivial problem.
And the information to gather? To decide which combination of approaches to use, it's vital to understand both what is and what could be shared, re-used and/or modified by the separate projects and teams within the organization.
For me, the idea of embodying requirements in the form of automatic tests is a powerful tool for getting out of this trap. Instead of trying to mandate a process for keeping all the actual reused or reimplemented source code in sync, consider the potentially simpler process of making all requirements testable and all tests available to all projects. Imagine an assumption in module a1 changes and impacts the way a and b but not c use the module; all the teams get the new tests, and work to schedule and implement them using their normal planning process. For A and B the changes might be high priority work, whereas for C the new tests might never get scheduled and implemented. The choice of teams A and B whether to copy or re-implement to pass the tests can now confidently be made as a purely local decision.
 
Ilja Preuss

Originally posted by Frank Carver:
Where I work at the moment, for example, it makes no perceptible difference to the company turnover, profit or share value whether each individual does his or her job well or badly. On the other hand it makes a lot of difference to personal satisfaction, annual pay rises and bonuses if people play the game of internal politics well. "Empire Building" - getting to be in charge of as many people and as much budget as possible; "Blame Avoidance" - it doesn't matter whether something succeeds or fails, as long as you don't get blamed for failure; "Brown Nosing" - associating with people with the power to improve your own position, and so on.


Accept my deep sympathy. I don't think that agile processes should make allowances for such "hostile environments", though. I don't think that they would be agile anymore if they tried to.

[a good configuration management example]


That isn't hard for me to imagine...
The company I work for is developing two more or less orthogonal product lines (telecommunication and information management software). Nevertheless, both make use of a common library (containing general utilities and the like). A similar structure exists between subprojects.
The solution to the "configuration management nightmare" seems to be the two XP practices Continuous Integration and Relentless Testing. I recently installed a Cruise Control server, which looks for changes in CVS and starts a new complete build and test cycle of all projects. This way, half an hour after you committed your changes, you know whether you broke anything in other projects.
We don't have enough tests for this to work reliably, but it's getting better.
If you think of this example as being process work, I don't think it is ignored by agile advocates at all - at least not by the ones I know of.
 
Matthew Phillips
I have been quietly following the thread. It has a lot of really good information for a newbie like me. I read an article today describing some problems with XP. Since that is the topic, I thought I would post it and get some thoughts from the heavyweights here.
 
Ilja Preuss

Originally posted by Matthew Phillips:
I have been quietly following the thread. It has a lot of really good information for a newbie like me.



So, what do you think so far?

I read an article today relating the problems with XP. Since that is the topic, I thought I would post it and get some thoughts from the heavyweights here.


If you are also interested in some thoughts of an "extreme one":
1)
- XP teams don't skip analysis and design. In fact they never stop doing that.
- IMHO it's not less risky to spend much time on delivering mock screen shots, storyboards etc. instead of a working system. That doesn't mean that an XPer never does these things.
2)
- IME, test driven design is much more effective and less risky than big up-front modeling
3)
- XP teams can't "goof off for six months", as they are supposed to deliver a working system every two weeks. The Onsite-Customer has very intimate knowledge of what was accomplished in that time.
4)
- most of the developers who tried it seem to like pair programming. Experiences of those who tried seem to suggest that pair programming is in fact more effective on nearly any task.
5)
- see http://c2.com/cgi/wiki?CthreeProjectTerminated for probably more accurate information about why the project failed
6)
- Oh, yeah... :roll:
7)
- I don't see how writing things down would be less risky and more beneficial than steering the project...
8)
- Well, to some degree they are right with this one - if you want to "play to not lose", XP probably isn't for you. OTOH, because of the massive feedback XP provides, you will detect very fast if something went wrong, so the risk might not be *that* high.
And nobody says that it would be sufficient to "put all the programmers in the same room" - that would only be one practice (part of "one team", which recently replaced "onsite customer").
 
Reid M. Pinchback
As you can tell from previous posts, I'm not 100% sold on XP. Thanks to Matthew for the article pointer; it is good to gather whatever insights we can. That said, I can't say I found the anti-XP article particularly persuasive. Some of the rationales were too nebulous; I think much better arguments could have been made by the author.
Item #5 is a perfect case in point. Yes, you have a risk with somebody leaving, but that isn't the strongest reason I can think of for having written requirements. I know of several:
1) up-front resource estimation (it is easy to turn good requirements into a pretty accurate estimate of the resources required to create the deliverables on a given schedule).
2) parallelism (QA and documentation prep work can begin at about the same time the work starts on the code-based deliverables).
3) validation (in some industries you have to be able to prove that an application was designed and implemented to do exactly what the written requirements contained, no more and no less).
4) buy-vs-build (if you get requirements in dribs and drabs, you may piecewise build something that you could have bought or re-used for a lot less).
5) scale (related to parallelism; teams with many dozens or hundreds of people working on a very large project need non-verbal mechanisms to help keep the work in sync and help the developers identify the portions of the information related to their responsibilities).
6) mounting lag (you can track the increase in new issues and elimination of old issues relative to the original project requirements and schedule to see if you'll hit your target date - done properly you'll know if you can hit your date after 10-20% of the work is done)
I could go on, but you get the idea. The author really didn't go to enough effort to provide concrete issues.
Every interesting methodology has its strengths (or it wouldn't have many supporters). It also has its weaknesses, which is why I tend to be skeptical about any methodology that is espoused too much. I didn't get the impression that the author really bothered to think about the strengths and weaknesses sufficiently to help the reader understand when they should, or shouldn't use XP, or how they could improve on the original concept. It seemed mostly like an attempt to justify an I-don't-like-it position.
 
Matthew Phillips

Originally posted by Ilja Preuss:
So, what do you think so far?


I don't want to make the concept of a process sound unimportant, but it is the end result that matters. As has been brought up, not every process is right for every team or project. I do very much like the idea of pair programming to bring an inexperienced programmer (such as myself) up to speed.
 
Reid M. Pinchback

Originally posted by Ilja Preuss:
IIRC, you mentioned configuration management as one part of process work. Perhaps you could use this example to elaborate on what data you would suggest collecting, and on which other forms of work would have to be adjusted to allow the process to exist? Thanks again!


Originally posted by Frank Carver:
...
Aargh. Configuration Management nightmare.


Originally posted by Ilja Preuss:
This way, half an hour after you committed your changes, you know whether you broke anything in other projects.


Ok, I think that gets the major thread points in place. Anybody who has done a lot of CM stuff, apologies in advance, you'll probably want to skip over my about-to-be-fragmentary tidbits.
Frank's example is one scenario of one slice of CM responsibility. Shared application source is a thorny piece, but it isn't by any means the most complex one. Ilja's response is a reasonable mechanism for reducing that particular problem in one particular pattern of usage: when there is one source branch and the interface functionality is the acceptance criterion for the source code that was changed. It doesn't work so well with multiple branches, and it doesn't work if the acceptance criteria include measuring in-situ performance levels (a common requirement for trading systems).
CM, in its entirety, is about ensuring the organization's ability to generate deliverables based upon intellectual property. It is a simple phrase with a lot of implications:
- Capture source and source versions.
- Capture/allow variant system configurations.
- Enable separation of mainstream and maintenance development workflow.
- Capture tool (e.g. library) dependencies.
- Manage builds and releases.
- Establish intellectual property rights.
- Support independent verification and validation.
- Capture process documents and related work logs.
- Establish training requirements and log training participation relative to the process.
There is more I'm missing (particularly in the QA interface), but that gives you the gist of it. Not all organizations need all of this, but some subset is generally necessary for any non-trivial full development lifecycle. Lots of studies have been done about the costs of failures in various stages of development, requirements always being the most significant. I haven't seen studies about it, but from my experience I think CM-based failures might cost more than requirements failures.
As an example of things that need to change to support a CM process:
1) Requiring developers to check in source code
2) Requiring developers to use the same versions of libraries that are used in the build process (if changes are needed, update the libraries used by the build)
3) Designing systems into cleanly factored components and eliminating spurious dependencies. A lot of lifecycle problems have their roots right here, including CM shared source management difficulties.
4) Standardizing on tools where possible. That doesn't necessarily mean that everybody should use the same IDE, although that can be an attractive solution (you can create pretty streamlined CM processes around a single shared development environment). You want to make sure that you've checked in everything that matters for creating your product, and that more than one person can use that material once it has been checked in.
 
Frank Carver
Ilja wrote: The solution to the "configuration management nightmare" seems to be the two XP practices Continuous Integration and Relentless Testing. I recently installed a Cruise Control server, which looks for changes in CVS and starts a new complete build and test cycle of all projects. This way, half an hour after you committed your changes, you know whether you broke anything in other projects.
This is a reasonable approach, but again only suits small, close-knit organizations. Where I work, having any version control at all on a project is nice, but useful version control for developers is very unusual. There is no way that separate projects would ever submit to using the same version control software. In this environment a shared repository accessible by a single Cruise Control is a pipe dream.
Reid wrote: As an example of things that need to change to support a CM process: ...
In general I agree with you, but I'd like to point out that sometimes the CM process or the CM tools also need to change and learn to adapt to the working practices of the business as a whole.
As a concrete example, the place where I work now uses PVCS (which I have learned to dislike, by the way) as its primary repository, and it is a requirement that every third-party library, tool and config file is booked in to ensure repeatability. A fine and noble goal, except that the project treats source code in the same way: a single 80MB tar.gz file is booked in whenever a new release of the software is delivered. The process has become so focussed on repeatable delivery from the CM tool to test that it completely misses the point of being usable during development.
For our own sanity, developers have to set up alternative methods of keeping track of changes during development, and, yes, they are all different.
I have many more CM horror stories, roughly one per project. It is this sort of mad situation which leads me to think that the fundamental "Extreme" approach of reconsidering all aspects of process in the harsh light of genuine business value, has a lot to offer. I'm not claiming that a large or mature business won't need some or all the CM features described, but that everything should be understood in terms of its importance to the business, right now, and not just taken as unconditionally mandatory.
What's the point of spending time setting up a repeatable CM process for a project which is cancelled or only delivers once?
It shouldn't happen, but in my experience CM all too often turns into superstition, just like those tape backups that carry on being made even after they can no longer be used to restore from.
 
Reid M. Pinchback
Agreed, you absolutely have to suit the CM process to the needs of the business and the projects. Those needs, unfortunately, sometimes conflict with each other, which adds a particular challenge. You want the developers to buy into it, but it always adds some extra work (you don't get something for nothing). The trick really is to find some way to gain value in areas that the developers can react to, even if the underlying motivation is more driven by non-development business issues.
One thing that makes it particularly difficult is that you really, really need to know what you are doing when you create and roll out CM processes. On the one hand you have to develop a very strong understanding of what kind of work is being done. On the other hand, CM process design is primarily an engineering problem, consensus approaches don't work if the parties involved can't be bothered to understand the issues.
You can get a pretty good idea of how effectively the CM processes are working by seeing how many people are required to run them, compared to the amount of development work being done. With badly crafted and run processes, the ratio of CM to development staff seems to be 1:10, and you have big delays in getting anything released. With good ones, the staff ratio is about 1:100, with negligible delay.
 
Reid M. Pinchback

Originally posted by Frank Carver:
In this environment a shared repository accessible by a single Cruise Control is a pipe dream.


I've been on one project that was large enough to have distinctly different sub-teams, and they made an approach like this work. They didn't use Cruise Control; they simply had a nightly automated Ant build that would re-run the unit tests and create a set of jars for developers to download and use the next morning. The release engineer would read the build report and contact the appropriate developers if any tests failed. This approach probably wouldn't suit projects with dramatically different objectives that had odd little bits of shared code dependencies. It did work for keeping sub-project integration issues under control, without getting too much in anybody's way.
 
Reid M. Pinchback

Originally posted by Ilja Preuss:
I am not sure about this - once you have identified the duplicate data, wouldn't it be possible to test that the projects are using the same data storage?


I think you are working from a mental model of persistent data that is quite a bit different from what I'm talking about. Examples:
- multiple ways of identifying people, each with different weaknesses and rules about duplication or reuse of ids
- multiple ways of describing events - some based on free text, some based on contractual activities, some based on absolute time, some based on relative time
- multiple ways of identifying departments in an organization (in big companies, the "what is a department" question can be amazingly tough to answer)
- multiple ways of constructing primary keys (and no, not just with respect to composite and non-composite keys)
- differences in the use, or lack of use, of metadata
- data values that, at face value, are the same, but have different business interpretations
- data values that appear different but that have the same business interpretations in some circumstances
That is just the high-level overview. It isn't a case of testing to see if two programs were built to look at the same set of pre-existing data. The problem is with the ongoing creation of data elements that didn't exist previously (e.g. by two teams who were working at the same time, or who for other reasons didn't know about each other's work).
Ultimately it is a matter of coordinating the design of the data elements for the two programs so that you are in a position to assess the similarities and differences. If nobody is responsible for managing the cumulative impact of the data designs, you have a classic example of a tragedy-of-the-commons problem, where nobody owns the job of managing the resource (data and knowledge in this case) that everybody depends upon and yet gradually damages through their use of it.
As a side note, sometimes you can uncover very serious problems in the entire scoping of the project right up front just by producing a conceptual data model - something which is essentially an integration document that identifies key data concepts that are known to be fundamental to the project, relative to their use in other systems. Proposed system connections may not be as feasible as assumed in initial verbal discussions. The key data concepts form a protocol between connected systems, and if the systems don't have data notions that can be rendered compatible, they can't be integrated (or at least, it will take a lot longer than the project schedule may have made allowances for).


You can choose algorithms known to eliminate deadlock, but you can't really test to see if the algorithm was implemented correctly.
...
Why? Aren't these algorithms deterministic?


I suspect you are confusing algorithm proof, program correctness, and program testing. Testing is a one-sided activity, and doesn't help as much in threading as it does in non-threaded situations.
Proving algorithm correctness is possible (strictly speaking something doesn't warrant the term 'algorithm' if such a proof of correctness and termination couldn't be constructed), but that has little to do with tests. That doesn't mean proof is easy; it is particularly difficult in concurrent systems.
Even if you have an algorithm that you know is supposed to work (e.g. serialized acquisition of resource locks), you are left with the problem of determining if the program does in fact implement the algorithm correctly. That is hard enough to do in deterministic situations (e.g. design-by-contract approaches and reduction systems). In non-deterministic situations like threading, unless your application has access to execution graph information, I'm pretty sure you have something that is equivalent to the halting problem... you don't know if the delays are natural or the result of a deadlock.
That still leaves the issue of testing. Testing will tell you if the test detected a bug; it won't prove the absence of one. Since threaded applications are essentially non-deterministic, you usually have no way of constructing a test with sufficient coverage. You can create some tools to help with that, if you are willing to expend the effort (there is a JUnit extension that is trying to evolve in this direction). Not something you'd want to do yourself unless you really needed it (e.g. simulators for realtime systems).
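For what it's worth, a minimal sketch of such a "detect, don't prove" test: hammer a shared object from many threads and fail on a lost update, or on a timeout that might indicate a deadlock. The Counter class here is hypothetical, and a clean run only means that no bug surfaced under this particular scheduling:

    public class CounterStressTest {

        // A trivially synchronized counter, standing in for the real
        // concurrent class under test.
        static class Counter {
            private int value;
            public synchronized void increment() { value++; }
            public synchronized int value() { return value; }
        }

        public static void main(String[] args) throws InterruptedException {
            final Counter counter = new Counter();
            final int threads = 20;
            final int iterations = 10000;

            Thread[] workers = new Thread[threads];
            for (int i = 0; i < threads; i++) {
                workers[i] = new Thread(new Runnable() {
                    public void run() {
                        for (int j = 0; j < iterations; j++) {
                            counter.increment();
                        }
                    }
                });
                workers[i].start();
            }
            for (int i = 0; i < threads; i++) {
                workers[i].join(60 * 1000); // generous timeout per worker
                if (workers[i].isAlive()) {
                    throw new RuntimeException("possible deadlock: worker " + i + " still running");
                }
            }
            if (counter.value() != threads * iterations) {
                throw new RuntimeException("lost updates: expected "
                        + (threads * iterations) + ", got " + counter.value());
            }
            System.out.println("no defect detected (which proves nothing)");
        }
    }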
PS: If you want an example of the difficulty in associating algorithm proof and program correctness, you might find it interesting to do a literature search on quicksort. Apparently there are very few correct implementations of the algorithm; there are implementations that manage to sort data, but not with all the properties and performance characteristics of the published algorithm. There is also a volume of literature on attempts to create software approaches to a thread-safe lock instead of using hardware - again, very difficult to get the algorithms and implementations correct.
 
Ilja Preuss

Originally posted by Matthew Phillips:
I read an article today relating the problems with XP.


A thread about this article has been started on the XP mailing list; it might interest you:
http://groups.yahoo.com/group/extremeprogramming/message/54944
 
Ilja Preuss

Originally posted by Reid M. Pinchback:
Ultimately it is a matter of coordinating the design of the data elements for the two programs so that you are in a position to assess the similarities and differences. If nobody is responsible for managing the cumulative impact of the data designs, you have a classic example of a tragedy-of-the-commons problem, where nobody owns the job of managing the resource (data and knowledge in this case) that everybody depends upon and yet gradually damages through their use of it.


OK, I see your point - it is probably hard to test for the fact that all the duplicated data was identified.
Do you agree that it shouldn't be that hard to test that already identified common data wasn't duplicated?


That still leaves the issue of testing. Testing will tell you if the test detected a bug; it won't prove the absence of one.


Yes; of course this is equally true for deterministic applications.


Since threaded applications are essentially non-deterministic, you usually have no way of constructing a test with sufficient coverage.


I think I was asking about testing the algorithm, not the whole application. So, wouldn't that be possible?
 
Ilja Preuss

Originally posted by Frank Carver:
Where I work, having any version control at all on a project is nice, but useful version control for developers is very unusual.


And why is this the case?

There is no way that separate projects would ever submit to using the same version control software. In this environment a shared repository accessible by a single Cruise Control is a pipe dream.


AFAIK, Cruise Control can work with several different repositories without much trouble. Of course, you might want to use more than one instance of Cruise Control to keep build times low. Also, Cruise Control is just one example of an integration server - there are several others, targeted at slightly different environments.
 
Ilja Preuss

Originally posted by Reid M. Pinchback:
Proposed system connections may not be as feasible as assumed in initial verbal discussions. The key data concepts form a protocol between connected systems, and if the systems don't have data notions that can be rendered compatible, they can't be integrated (or at least, it will take a lot longer than the project schedule may have made allowances for).


Well, if this connection is critical to the customer, it probably also has high business value, so an XP team would implement it as one of the first stories. If they are unsure how well the connection will work out (remember, they have to estimate the story upfront, so they need to have some sense of its difficulty), they might want to implement a spike solution or even produce a rough conceptual data model.
Also, stories from the XP mailing list seem to suggest that many XP teams are rather well trained in keeping the data format flexible. I don't have experience with such issues myself, though.
 
Ilja Preuss

Originally posted by Reid M. Pinchback:
It doesn't work so well with multiple branches [...]


Agreed - in fact, IME the administration of multiple (hell, even two) branches is a PITA anyway. So I would work very hard to find other solutions to whatever problem I was trying to solve by branching.

[...] it doesn't work if the acceptance criteria include measuring in-situ performance levels (a common requirement for trading systems).


This I don't understand. Can you please elaborate?

I haven't seen studies about it, but from my experience I think CM-based failures might cost more than requirements failures.


And why do you think that is the case?

As an example of things that need to change to support a CM process:
1) Requiring developers to check in source code
2) Requiring developers to use the same versions of libraries that are used in the build process (if changes are needed, update the libraries used by the build)


I think these are no-brainers, aren't they?


3) Designing systems into cleanly factored components and eliminating spurious dependencies. A lot of lifecycle problems have their roots right here, including CM shared source management difficulties.


Yeah. Of course you should do this anyway - many other problems are related to this, too. Sadly there are astoundingly many development teams which don't give this any thought... <sigh>


4) Standardizing on tools where possible. That doesn't necessarily mean that everybody should use the same IDE, although that can be an attractive solution (you can create pretty streamlined CM processes around a single shared development environment). You want to make sure that you've checked in everything that matters for creating your product, and that more than one person can use that material once it has been checked in.


I totally agree with all of this. What I still don't get is what essential difference you see between process work and project work - or why you seem to think that these issues are neglected by agile proponents.
 
Ilja Preuss

Originally posted by Reid M. Pinchback:
[...] that isn't the strongest reason I can think of for having written requirements. I know of several:
1) up-front resource estimation (it is easy to turn good requirements into a pretty accurate estimate of the resources required to create the deliverables on a given schedule).


Mhh, most XP teams seem to find that they can create a reasonable estimate based on User Stories.


2) parallelism (QA and documentation prep work can begin at about the same time the work starts on the code-based deliverables).


This is how XP teams work. In fact, XP requires that the acceptance tests are implemented by the end of the iteration their requirements are scheduled for. A User Story is not considered complete until all its accompanying Acceptance Tests pass.
That doesn't imply that the requirements need to be written down in any intermediate form, though.


3) validation (in some industries you have to be able to prove that an application was designed and implemented to do exactly what the written requirements contained, no more and no less).


Certainly, if it is required, you have to do it. Whether it really helps the development team is another question.


4) buy-vs-build (if you get requirements in dribs and drabs, you may piecewise build something that you could have bought or re-used for a lot less).


XP doesn't suggest getting the requirements "in dribs and drabs". And just because something doesn't get written down doesn't mean that the development team doesn't know about it.


5) scale (related to parallelism; teams with many dozens or hundreds of people working on a very large project need non-verbal mechanisms to help keep the work in sync and help the developers identify the portions of the information related to their responsibilities).


Certainly, although you may reduce this need by forming reasonably encapsulated subteams.


6) mounting lag (you can track the increase in new issues and elimination of old issues relative to the original project requirements and schedule to see if you'll hit your target date - done properly you'll know if you can hit your date after 10-20% of the work is done)


Again, the XP planning process seems to only need User Stories to do this.


Every interesting methodology has its strengths (or it wouldn't have many supporters). It also has its weaknesses, which is why I tend to be skeptical about any methodology that is espoused too much.


You are certainly well advised to be skeptical.

I didn't get the impression that the author really bothered to think about the strengths and weaknesses sufficiently to help the reader understand when they should, or shouldn't use XP, or how they could improve on the original concept. It seemed mostly like an attempt to justify an I-don't-like-it position.


That aligns with my impression.
Additionally, though I find it interesting to read the reasoning of the uninitiated, I think that to really give helpful advice about when to use XP, you should have used it yourself for a reasonable time.
 
Ilja Preuss

Originally posted by Matthew Phillips:
I read an article today relating the problems with XP...


The accompanying forum thread contains some interesting thoughts on the article, especially at the end of the thread.
 