James Shore

Recent posts by James Shore

Thanks, everyone! I had a great time. Feel free to drop by my website for my blog and more of my writing. I'll also be speaking in Seattle on Nov 8th and in the Bay Area on Dec 11 & 12th. If you're in the area, I'd love to see you there. (Event details are on my site.)

Cheers,
Jim
Hi Christophe,

Yes, there's quite a bit of strategy behind release planning; we devote 12 pages to that topic alone. I have to go to bed (and this is my last day here, sorry), but briefly: you don't release until you have enough to make the release valuable. Doing that is a matter of focusing on what makes your software most valuable. For example, a word processor seems like the type of product that couldn't possibly be delivered in small releases. It would take years of work before it could even achieve parity with Microsoft Word, let alone compete with it.

Yet Writely (the online word processor) shipped their first version in two weeks and was acquired by Google ten months later, forming the basis for Google Docs. They did this by focusing on what set them apart (online collaboration, simplicity, instant access from any computer) rather than on achieving feature parity.
[ November 02, 2007: Message edited by: James Shore ]
Hi Vinayagam,

The opposite is actually true, in my experience. Many shops use manual regression testing, which is incredibly slow and expensive. Ratios of 1 tester to 2 developers, or even 1:1 or greater, are common.

XP, in contrast, has automated regression tests that are created by the programmers. The regression testing burden is much smaller, which reduces the number of testers you need and also frees the testers up to contribute in more sophisticated ways. I've seen ratios of 1:4, 1:6, and lower.

See my post about how testers fit in and my post about staffing ratios for more.
Hi Christophe,

Think of documents as a form of communication. Now compare them to other forms of communication, such as speaking face-to-face at a whiteboard, or over email, or by phone. What are their strengths? What are their weaknesses?

Now consider that a perfectly-executed agile method is going to put all of the key players in the same room full-time for the length of the project. All communication can be accomplished with face-to-face conversation at a whiteboard.

In this environment, what project communication is still best done with documents? Those are the documents you create in an agile project.

(Or you could just read the "documentation" section of our book. For a discussion of how testing works specifically, see my earlier post.)
[ November 02, 2007: Message edited by: James Shore ]
Hi Vinayagam,

The correct answer to your question is "what are you trying to accomplish?" but I'm tired so I'll just refer you to my recent scheduling post.
Hi Jeff,

We devote an entire chapter (70 pages) to planning-related practices and cover the material thoroughly.

There are a lot of different components to scheduling. First, you have a choice between "scopeboxed" schedules or "timeboxed" schedules. Scopeboxed schedules release when all of the planned features are complete, adjusting the release date to match. Timeboxed schedules release on a particular date, adjusting features to fit.

We recommend timeboxed schedules in the book for two reasons. First, they're a lot less risky. Second, they force you to make hard tradeoff decisions. These decisions are a good way of weeding out the less valuable requirements that invariably get added to any software project. You only ship the most valuable software, and you do it on a specific, predictable date.

So in a way, scheduling is easy in the agile world. Pick a date. Ship your software on that date. Done.

Okay, it's not quite that easy.

To make this work, you have to develop software in such a way that it's always technically ready to release. (Our "Releasing" and "Developing" chapters talk about this topic.) Many people develop software by technology: they develop the database layer for all of the features, then the UI layer, then the business layer, and so forth. This means that the software can't be released until all of the layers and all of the features are done.

Agile methods develop by feature: they develop the database, UI, and business layer for one feature, then for another, and so forth. This is a little harder technically but it means that the software can be released after each feature. Most agile methods work in iterations that are one to four weeks long. They are also timeboxed: in an XP project, for example, you are always finished and ready to release on, say, Wednesday morning.

Now, being technically ready to release on Wednesday morning doesn't mean that you'll actually release. The other component to this is managing your priorities so that when your release date comes you have something worth shipping.

There are several aspects to this planning effort. First, you have to have a good overall understanding of what you're building, why it's valuable, and what success means for your project. Our "Vision" practice addresses this need.

Second, you need to determine what the "minimum marketable features" of your product are. A minimum marketable feature (MMF) is something that has value to the marketplace. The "marketplace" can be paying customers or internal users depending on what kind of software you're developing, but either way, an MMF provides value. MMFs are also small ("minimal"), which is important, because small features are easier to finish quickly, which reduces the risk that you'll throw away partially-done work on the ship date.

MMFs in turn are broken down into stories, which are bits of MMFs that represent recognizable progress to your business experts... the people XP calls "on-site customers". It's important that stories be customer-valued because that's how you demonstrate progress that everybody can understand.

Stories can be estimated extremely accurately. Well, sort of. This is actually a simplification. The estimates are not at all accurate, but when combined with slack and velocity, they lead to very consistent and predictable iteration schedules.

So a naive approach would be to identify your vision, brainstorm MMFs and stories, estimate them all, and build a schedule based on those estimates. In a perfect world, that schedule would be accurate.

I'm tempted to leave it at that, but in fact it's not that simple. Reality is messy. First, even if you did brainstorm all of those MMFs and stories--and you shouldn't (see below)--your consistent iteration schedule won't always come true. People get sick, there are unexpected events that disrupt your work, and so on. These risks reduce the amount of work you can finish before the deadline. We have a section on risk management that tells you how to predict schedules in the face of risks. We provide some simple arithmetic that will give you a probability curve for how much work you'll get done. In other words, you'll have a 10% chance of getting X amount of work done, a 50% chance of getting Y done, and a 90% chance of getting Z done.

With these numbers, you can then say: "We will ship on this date no matter what. On that date, we will almost certainly have Z features done. We will try to get Y features done, but it's a stretch. It's almost impossible for us to have more than X features done." Then you actually do ship exactly on your ship date and you nearly always have features done in the range that you promised. And, as each iteration completes, you get more information that tightens and improves your predictions.
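
To make that arithmetic concrete, here's a minimal sketch in Java. The risk multipliers below are placeholders I've invented for illustration, not the book's published numbers; the shape of the calculation is the point.

```java
// Sketch of a risk-adjusted schedule forecast: divide the work you'd finish
// at your measured velocity by a risk multiplier. Bigger multipliers give
// higher-confidence (more pessimistic) forecasts.
public class ScheduleForecast {
    public static void main(String[] args) {
        double velocity = 14.0;      // story points per iteration, measured
        int iterationsLeft = 10;     // iterations remaining before the ship date

        double[] confidence  = { 0.10, 0.50, 0.90 };   // chance of hitting the forecast
        double[] multipliers = { 1.0,  1.4,  1.8  };   // invented illustrative values

        for (int i = 0; i < confidence.length; i++) {
            double points = velocity * iterationsLeft / multipliers[i];
            System.out.printf("%2.0f%% chance of finishing at least %.0f story points%n",
                    confidence[i] * 100, points);
        }
    }
}
```

With a velocity of 14 and ten iterations left, that prints roughly 140, 100, and 78 points at the 10%, 50%, and 90% levels: the X, Y, and Z I mentioned above.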

This is already far beyond what you see in most projects, but it's still rather simplistic. The big problem with everything I've told you so far is that it's static. It assumes that your plan is made at the beginning of the project and never changes. But in fact, your initial plan is your worst plan, because it's made at the point when you have the least information about how people will react to your software. You can add a lot more value if you change your plan as you learn new things.

One of the big advantages of agile development is that it allows you to change your plan. This leads to a technique called "adaptive planning" that is very cool, but I don't have time to go into right now. The nutshell version is that you actually seek out opportunities to learn new things and take your plan in unexpected directions. You do this because it allows you to increase the value of your software. For example, when my wife and I traveled to Europe a few years ago, we used adaptive planning. When we were in Italy, we discovered that our plan to go to Turkey would take too long, so we went to Prague and Switzerland instead and had some of our most memorable experiences. We increased the value of our trip because we weren't locked into an up-front plan.

There's much, much more to this, including a technique of tiering your planning horizons to reduce planning costs, but you'll have to refer to the book for the details.
Hi Shiang,

You're describing a common problem that's a result of using sequential phases in short iterations. XP's approach of using simultaneous phases works better; see my post on how QA fits in for more.

I strongly recommend against separating a QA sprint from the development sprint; you'll end up with bottlenecks and rework. The point of the sprint is to have known-good, completed, ready-to-ship code at the end of the sprint. This reduces risk and improves quality. Separating QA out will lead to all sorts of problems: backlogs of incomplete work, unexpected rework, large bug queues, and more.

Learn how to use XP-style simultaneous phases instead. It works much better.
[ November 01, 2007: Message edited by: James Shore ]
Hi Peter,

The issue of paying for infrastructure is an important one; often there's a way to proceed that will work and is cheaper in the short run but sacrifices maintainability. This leads to technical debt, of course.

The XP position (and mine as well) is that technical debt is so costly that we simply do not tolerate it at any time. Part of the balance of responsibilities is that, although business experts are responsible for priorities, technical experts (programmers) are responsible for costs and estimates. As programmers, we have a professional responsibility to deliver maintainable systems, except in those very rare instances where the software is truly not going to be maintained or developed for more than a month or so. (And when does that ever happen, really?)

As a result, I don't give business people a choice about technical infrastructure; I simply include it in my estimates as part of the cost of doing business. When challenged, I explain in my best non-confrontational "you hired me to make these decisions" manner: "This story seems more expensive than usual because it requires us to upgrade the database, which is necessary to achieve the performance you've asked for."

Incremental design/architecture is crucial to this work because it allows you to split the technical infrastructure work into small pieces that are spread (more or less) evenly over all of the stories. As a result, the customers don't see big technical infrastructure hits and are less likely to be spooked. From their perspective, every single story is focused entirely on delivering value.

Learning how to split infrastructure so evenly and incrementally takes time, but it's a valuable skill that's well worth learning.

Regarding the third-party code, there are ways to make this work and still be agile. A good resource is Eric Evans' discussion of "strategic design" (which covers multiple ways of interfacing multiple teams) in his book Domain-Driven Design. I think it's chapter 14.
Hi Jammy,

Productivity is very hard to study formally: first, you have to have a valid definition of "productivity," which is surprisingly hard in software; next, you have to somehow control for all of the variables, like programmer productivity and experience; and finally, you have to get a bunch of companies to agree to a long-running experiment, which nobody wants to pay for.

Nobody does that, so we're stuck with little studies conducted on students (with no real experience) or industry professionals who take a day off (and solve toy problems). Net result? You can't prove anything about the effectiveness of anything.

(I'm exaggerating a bit here. Not much.)

So we're left with anecdotes, which don't prove anything either. Anecdotes like Ilja's say that agile processes are more productive and lead to higher quality than other processes. That's a common story. However, there are also stories of people who tried it and failed.

The best way to know if it will work for your team is to try it. One thing we did in our book is to identify the common characteristics of teams that struggle. We coalesced these into our "Is XP Right For Us?" section. I think that a team that changes itself to meet those conditions and then applies our advice carefully and rigorously will be faster and deliver higher quality. In many cases, I think it will be dramatically faster and higher quality. That's been my experience in applying these techniques in reality.

But I can't promise that, and you'll definitely go slower at first while you learn.
Hi Vinayagam,

The milestones you've defined are phase-based milestones that make little sense in an agile environment. Remember, analysis, design, coding, and testing happen every iteration in an agile project. (And the iterations are less than a month long.) In an XP project, they actually happen simultaneously.

The milestones you've described are specific to phase-based models. Agile teams use different milestones that are based on features delivered to customers.
Hi Christophe,

From your response, it sounds like you're assuming that analysis, design, coding, and testing have to happen in sequential order. On XP projects, that isn't true; they actually all happen at once, all the time. So testers always have something to do. However, it doesn't always look like traditional testing; much of what testers do on an XP team is help the team prevent bugs. This involves them much more deeply in the overall production of software. Compared to regular testers, very little of their time is actually spent executing tests.

I talked more about the role of testers in my post on where QA fits in.
Hi JD,

1- XP uses simultaneous phases, so the proper answer would be "100% requirements, 100% design, 100% coding, and 100% testing."

In the book, we recommend that you have two people focused on requirements ("on-site customers") for every three programmers. We also recommend one tester for every 4-6 programmers. That leads to the following ratios:
  • 36% of the team is dedicated to requirements full-time.
  • 54% of the team is dedicated to designing, coding, and programmer testing full-time.
  • 10% of the team is dedicated to testing full-time.

The ratios are just rules of thumb and must be modified for your specific situation. (To see where the percentages come from: a team with six programmers would have four on-site customers and one tester, eleven people in all, so 4/11 ≈ 36%, 6/11 ≈ 54%, and 1/11 ≈ 10%.)

2- The book is written for teams as small as five people (four programmers and a product owner). You can go smaller, but some practices would need to be changed.

3- You can always modify your approach to agile development. However, you're correct that communication is the key to success (in any project!). Agile methods prefer face-to-face communication over document-based communication. It's one of the reasons they're successful. If your customer isn't the collaborative type, you'd need to figure out some way to improve communication.
Hi Peter,

One of the challenges of Scrum is that it doesn't include technical practices, and yet the Analyze-Design-Code-Test SDLC doesn't work well in a two-to-four-week cycle. Something else is needed instead.

This is one of the reasons I like XP (and part of why we focused on XP in the book): it includes specific technical practices to address this problem. Now, this is a big subject, so I'm only able to give an overview here. I'll refer you to the book for more.

Many people squeeze the Analyze-Design-Code-Test sequence of phases into Scrum's two-to-four-week Sprints. As you're discovering, this doesn't work all that well--it leads to technical debt and often causes testing bottlenecks as well. XP provides a solution, but it's probably quite different from what you're used to.

In XP, we prefer not to do any work that doesn't provide customer value. So there should be no technical infrastructure effort that doesn't support current or past stories (aka deliverables)... and stories must be customer-valued, so technical infrastructure stories aren't allowed either.

This requires that all technical infrastructure be built incrementally, alongside stories. To achieve this, XP uses simultaneous phases rather than sequential phases. Analysis, design, coding, and testing are performed simultaneously.

The driver for this effort is test-driven development (TDD). Test-driven development mixes design, coding, and testing into a single activity. Briefly, it works in very small, repeated steps (each about 30 seconds long), as follows; there's a sketch of one increment after the list:
  • Decide on the next, very small increment of functionality
  • Define behavior and define interface; write test that checks that behavior and interface; run test and watch it fail (about five lines of test code)
  • Implement interface and behavior; run test and watch it pass (about five lines of production code)
  • Review design, identify improvements, and use refactoring to implement those improvements; make sure tests continue to pass after each micro-refactoring.
  • Repeat with next small increment
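
To make the rhythm concrete, here's one hypothetical increment in Java with JUnit. Every name here is invented for illustration; it's a sketch of the cycle, not an excerpt from the book.

```java
// Step 2: write a failing test first (about five lines).
import junit.framework.TestCase;

public class RomanNumeralsTest extends TestCase {
    public void testConvertsOneToI() {
        assertEquals("I", RomanNumerals.convert(1));  // fails: convert() doesn't exist yet
    }
}
```

```java
// Step 3: just enough production code to pass (separate file, about five lines).
public class RomanNumerals {
    public static String convert(int arabic) {
        return "I";  // simplest thing that passes; the next test (2 -> "II") forces generalization
    }
}
```

Step 4 is where the design work happens: with the test green, you look at what the code has become and refactor before starting the next increment.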



Now here's the trick. Much of the design improvement comes during the "refactor" step. I call it reflective design: it's a process of reviewing existing code, extrapolating its design, coming up with improvements, and then refactoring to implement those improvements. The design actually comes after the code, which is why it's reflective.

Designing after coding sounds ridiculous, I'm sure. From a phase-based perspective, it would be. But remember that design and coding actually happen simultaneously. It's not a case of spending days coding, then days more fixing the design. It's actually a matter of creating a few lines of code that provably solve a problem, then critiquing that code and how it fits into the overall design of the system and making immediate improvements, which are also verified by your tests.

This probably seems impossibly low-level from an architecture point of view... and it is. But I had to introduce the basic concept of reflective design before I could talk about how design and architecture work in XP.

Test-driven development alone isn't enough. Constantly reviewing your code and refactoring leads to good method- and class-level design, but it doesn't address the larger question of properly partitioning class responsibilities and relationships, or the even larger question of overarching architectural patterns.

Continuously throughout this process, the programmer should also be performing reflective design at the package and architectural level. Just as refactoring can eliminate duplication, clarify concepts, and improve design at the method and class level, it can solve problems at the higher levels as well. And, if the design is kept simple and the design work is done continuously, the cost of these changes is low.

Thinking at multiple levels simultaneously is difficult, which is one of the reasons pair programming is valuable in XP. While one person types and thinks about the tactical issues in the current class, the other person is thinking ahead to the next set of tests and considering how the changes the pair is making influence the overall design. Is there an opportunity to improve relationships or responsibilities? Is the current architecture sufficient to solve the problems the pair is seeing?

There's one more element: risk-driven architecture. Architectural issues are often cross-cutting and difficult to change. For example, if your code doesn't support internationalization, then adding it after the fact could be difficult.

Risk-driven architecture looks ahead to potentially risky architectural problems (such as internationalization) and directs refactoring efforts to make those risks go away, without actually implementing the speculative feature. In other words, if your customers don't want any localization yet, you wouldn't internationalize yet. However, you might improve your design so that there was only one class to internationalize when a localization story finally did come along.
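
As a hypothetical sketch of that kind of design improvement (all names invented): funnel every user-facing string through one class now, without building any localization machinery.

```java
// No locales, no resource bundles yet -- but if a localization story ever
// arrives, this is the only class that has to change.
public class Messages {
    public static String outOfStock(String product) {
        return "Sorry, " + product + " is out of stock.";
    }

    public static String orderConfirmed(int orderNumber) {
        return "Your order #" + orderNumber + " has been confirmed.";
    }
}
```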

I'm condensing 45 pages of material into a few paragraphs here, so I'm afraid this still may not sound feasible. I guess you'll have to buy the book if you're still not convinced.

To answer your specific questions:

- Enterprise strategy: If you have an overall architecture or strategy that you have to plug into, it's a constraint on your development process that team members should keep in mind as they work. When you discuss design issues as a team, this should be part of those discussions; when you do test-driven development, this should influence your decisions.

- Delivering new infrastructure: This is largely incompatible with the idea that all deliverables should be valued by your on-site customers (the business experts on the team). You can engage in some tortuous logic that says that, since the infrastructure is important to the company, it's customer-valued, but that usually doesn't fool anybody. Often in this situation there's a schism between what the business experts care about and what the technical organization cares about... perhaps because the technical organization has not yet grokked delivering technical infrastructure incrementally.

You can deliver technical infrastructure incrementally using reflective, continuous, incremental design as I described above. However, if people don't believe it's possible, they'll want the technical infrastructure to be developed all at once. I would push back against this, personally, but that's because I know how to do it incrementally. If I weren't confident in my ability to deliver infrastructure incrementally, I might ask for it to be a "technical story"... but expect a lot of flak (and rightfully so) from your business experts. Introducing technical stories and insisting that they come first upsets the balance of power in agile planning: estimates are supposed to come from developers, priorities are supposed to come from business experts.

We can talk about this further if you like. There's a lot of context here.

- Ilities: There are a lot of different kinds of ilities. Scalability, stability, and performance...ility are customer-facing and can be scheduled with stories. "Flexibility" and "maintainability" are more abstract. In XP, however, flexibility and maintainability are built into the process. You're constantly improving your design, which constantly improves flexibility and maintainability.

One thing that isn't done in XP is speculative flexibility. In other words, you don't code in hooks or plug-in points for things you don't currently need. That's because, in XP, you're constantly improving the design. Change actually gets easier over time, not more difficult, so adding code now is more expensive than adding it later, when you actually need it. Also, speculative flexibility is sometimes wrong, and when it is, it's often difficult to fix because too many things depend on it.

This idea--no speculative generality--can be difficult for people to swallow. One of the things that separates good designers from poor designers is their ability to see and implement abstractions. You're still allowed to see the abstractions; we just ask that you delay implementing them until there's an actual need. Sometimes waiting will lead to new information that allows you to see simpler and more powerful abstractions. (Going through this process--seeing how my abstractions were simpler and more powerful when I waited to introduce them--is what convinced me that incremental/evolutionary design was feasible.)
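
As a hypothetical illustration (the names and rate are invented): write exactly what today's stories need, and extract the abstraction only when a second real case appears.

```java
// Only one tax jurisdiction exists in today's stories, so the code says
// exactly that: no TaxStrategy interface, no plug-in registry.
public class TaxCalculator {
    private static final double LOCAL_RATE = 0.0825;  // invented example rate

    public double taxFor(double subtotal) {
        return subtotal * LOCAL_RATE;
    }
}
// When a second jurisdiction becomes a real story, extracting an interface
// is a small refactoring -- and it's shaped by two real cases, not one guess.
```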

- Third-party solutions: I'd have to know more about your situation to comment on this. In general, most third-party code can be isolated behind an interface (or interfaces) that you can refactor.
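
As a minimal sketch of that isolation (the vendor API, com.vendor.mail, is invented for this example):

```java
// The interface the rest of the codebase depends on:
public interface MailGateway {
    void send(String to, String subject, String body);
}
```

```java
// The only class that knows the vendor exists (separate file; the vendor
// API below is invented):
public class VendorMailGateway implements MailGateway {
    private final com.vendor.mail.MailClient client = new com.vendor.mail.MailClient();

    public void send(String to, String subject, String body) {
        client.deliver(to, subject, body);  // vendor call isolated behind our interface
    }
}
```

Because everything else depends only on MailGateway, you're free to refactor the interface and swap the vendor without touching the rest of the system.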
Hi R.M.,

Good question. The short answer is yes, you can remove any of XP's practices (including pair programming) and still be successful. The trick is that you have to understand what the practice provides and why it's important. Then, rather than removing the practice, replace it with other practices that fill the same needs.

I think the best way to gain this level of expertise is to do XP "by the book" for several months and see how everything fits together. In our book, we help readers along this path by providing a discussion of alternatives for each practice. Here's what we say about pair programming:

Alternatives

Pairing is a very powerful tool. It reduces defects, improves design quality, shares knowledge amongst team members, supports self-discipline, and reduces distractions, all without sacrificing productivity. If you cannot pair program, you need alternatives.

Formal code inspections can reduce defects, improve quality, and support self-discipline. However, my experience is that programmers have trouble including inspections in their schedules, even when they're in favor of them. Pairing is easier to do consistently, and it provides feedback much more quickly than scheduled inspections. If you're going to use inspections in place of pairing, add some sort of support mechanism to help them take place.

Inspections alone are unlikely to share knowledge as thoroughly as collective code ownership requires. If you cannot pair program, consider avoiding collective code ownership, at least at first.

If you'd still like to have collective code ownership, you need an alternative mechanism for sharing knowledge about the state of the codebase. I've formed regular study groups in which programmers meet daily for a timeboxed half-hour to review and discuss the design.

I'm not aware of any other tool that helps reduce distractions as well as pair programming does. However, I find that I succumb to more frequent distractions when I'm tired. In the absence of pairing, put more emphasis on energized work.

[ October 31, 2007: Message edited by: James Shore ]