
Michael Feathers

since Aug 24, 2004

Recent posts by Michael Feathers

Originally posted by Bert Bates:
My project is right at the "alpha" stage. Just a little bit more proof of concept and it will be time to do some refactoring. I've been having debates with my local OO guru about the size of classes. From a pure OO perspective, it seems like class size shouldn't be a concern, and one should worry only about concepts like cohesion and coupling. However, from a maintenance perspective I find that it's often easier to deal with a few smaller classes that work together than with one humongous class. If for no other reason than that I can have each class in its own window and see lots of stuff at once.

So, what do all you OO gurus do about this dilemma?

One of the things I think is neat is that when you look at the internal dependency structure of classes, often there's obvious partitioning. Sometimes it's worth doing an extract class and at other times it isn't.

I've posted some Feature Diagrams that I generated recently with a little tool I wrote.

The diagrams are of FIT, which is a very nice set of small elegant classes. I'll post some large awkward ones in a little while.

Originally posted by Ilja Preuss:

And to finally answer your actual question: in my experience, by far the best thing you can do is *not* "leaving space" for a potentially upcoming requirement, for at least two reasons:

- investing resources into features that are needed *now* gives better return on investment than investing them into making the design more complex now with the assumption that it might (or might not!) pay back later, and

- it's quite likely, in my experience, that when the requirement actually comes, it will not be exactly what we expected, and the system will already have changed in ways we didn't expect, so the "space" we left for it is unlikely to be very useful.

Instead of "leaving space" now, I'd rather invest into getting really good at refactoring (including refactoring the database) - *that* really pays back manifold!

I agree. If you want maximum flexibility, the only space to leave is whitespace. That is the most flexible space of all.

Michael Feathers
The important thing to note is that in many old code bases you'll never get all of the coverage you want. It would be nice to have it, and it's good to work for it, but you can still refactor when your code isn't at 90% or 100%. When you refactor you need to have coverage for the areas you are about to change. You have to know what piece of code you'll be changing and figure out where to detect any behavioral changes that could result.

Coverage is great, but it's good to know that sometimes you can use a laser if you don't have a floodlight.

Michael Feathers

Originally posted by Ilja Preuss:
And even if the instance is expensive to create, I'd very likely *not* use a singleton. Instead I might think about handing the instance around or something. There also might be an abstraction missing.

I agree. I'll also point out that instantiation may be much faster than you think. On JVMs, the allocation of a single object is insanely fast compared to heap allocation in C or C++.

Michael Feathers

Originally posted by Jac Jones:

I understand Design by Contract as espoused by Meyer, unit testing, and defensive programming (DP) as cited by Joshua Bloch in Effective Java, Item #23. But I am interested in opinions on how the three compare.

The differences between DBC and DP that I can think of are:
1) DBC is contractual whereas defensive programming isn't
2) DBC is more oriented to polymorphism and the substitutability of objects.
Any others?

a) Does one need to do DP if there is adequate unit test coverage?
b) Do you need to do DBC if you have adequate unit test coverage?

Any other opinions are appreciated!



The most important thing, the thing that is never stated, is that application coding and API coding are different. Josh Bloch is writing from an API designer's point of view in Effective Java. I agree that checking the parameters of public methods makes sense in an API, but in application code, contracts can and do suffice.
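A small sketch of the distinction being drawn, with invented names (the post itself contains no code): a public API entry point defends itself with explicit checks, while internal application code can state its contract with an assertion and trust its callers.

```java
public class DivisionExamples {
    // API-style defensive programming: validate parameters of a public
    // method, because you cannot trust unknown callers.
    public static int safeDivide(int numerator, int denominator) {
        if (denominator == 0) {
            throw new IllegalArgumentException("denominator must be non-zero");
        }
        return numerator / denominator;
    }

    // Application-style contract: callers inside the application are
    // trusted to honor the precondition, which is stated as an assertion
    // (checked only when assertions are enabled with -ea).
    static int internalDivide(int numerator, int denominator) {
        assert denominator != 0 : "caller violated contract";
        return numerator / denominator;
    }
}
```

The API version pays the cost of a runtime check on every call; the contract version documents the obligation and catches violations during development without burdening production callers.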

Michael Feathers
I do it all the time in constructors. It saves making up silly names for parameters when all you are going to do is assign them to a field.

If you use some sort of naming wart like an 'f' or underscore prefix for fields then there isn't much need to use 'this' in this way.
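A minimal illustration of the constructor style being described, with a hypothetical class:

```java
// Using 'this' to disambiguate fields from identically named parameters,
// instead of inventing awkward parameter names like "aName" or "theName".
public class Customer {
    private final String name;
    private final String email;

    public Customer(String name, String email) {
        this.name = name;    // field 'name' = parameter 'name'
        this.email = email;  // field 'email' = parameter 'email'
    }

    public String getName() { return name; }
    public String getEmail() { return email; }
}
```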
I think that the reason that people use sequence diagrams more often is because it is easier to lay out objects in a line than it is to lay them out in 2D without regretting where you've placed them.

Personally, I like collaboration diagrams a little better. They tend to emphasize the "players" more than the algorithm.

Originally posted by Michael Feathers:
I'd vote off Template Method. To me, it's not a pattern, it's just the typical result when you remove duplication in an inheritance hierarchy.

Originally posted by Ilja Preuss:
How does that conflict with the definition of a Pattern - being a common (typical!) solution to a common design problem (i.e. removing duplication in an inheritance hierarchy)?

It seems that we (the community) swing back and forth on the question of what a pattern is. Originally, patterns were seen as the description of a problem and the core of a solution, rather than just the solution. So, for instance, the problem for Template Method is: how do we get ourselves into a position where we can just subclass and override details to get variation. But, that is not the same problem as: I'm changing code over and over again in this hierarchy, how do I fix it? The common bit is the end point, but if the end point is what determines the pattern then there are other issues. People can arrive at the same code structure when they apply Bridge as they would if they applied Strategy, for instance.

I don't lose sleep over these things. I don't mind the fuzziness, but it does feel weird to say to myself that I am applying Template Method when I'm just removing bits of duplication. At the end, the template method (the thing, not the pattern) just sits there like a surprise. I didn't have to write a template method, I just removed duplication and a method became one whether I noticed it or not.
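A hypothetical sketch of what that looks like: after the duplicated steps in two sibling classes are pulled up into the superclass, the method that remains is a template method, whether anyone set out to write one or not. All names here are invented for illustration.

```java
// The skeleton that is left behind after removing duplication from a
// hierarchy: render() is now a template method.
abstract class Report {
    // Fixed sequence of steps; subclasses vary only the details.
    public final String render() {
        return header() + body() + footer();
    }

    protected String header() { return "=== Report ===\n"; }
    protected String footer() { return "--- end ---\n"; }

    // The detail each subclass supplies.
    protected abstract String body();
}

class SalesReport extends Report {
    @Override
    protected String body() { return "sales: 42\n"; }
}
```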

I find it to be quite useful to have a name for it: "Gee, I always forget to call super when I override this method in subclasses." "Why don't you use Template Method instead?" "Well, good question. Why didn't I think of it???"

True, that is very valuable. No qualms there.
[ December 05, 2004: Message edited by: Michael Feathers ]

Originally posted by Ilja Preuss:

Well, it doesn't hurt to have a rough idea of what the design might look like when starting TDD (actually many descriptions of TDD start with a short design session). After all, to start to write your first test, you at least need to have a rough idea of what kind of class you will test, and what behaviour you will expect from it, don't you?

I agree and I often do that, but I find that the most valuable state of mind to be in when I TDD is the state of allowing myself to be surprised. If the tests lead to a different design, it's important to consider it and not to feel constrained by initial design concepts.

Originally posted by Nate Schutta:
After reading Martin Fowler's OOPSLA wrap-up, I couldn't help but wonder: what pattern would you vote off the island?

I'd vote off Template Method. To me, it's not a pattern, it's just the typical result when you remove duplication in an inheritance hierarchy.

Originally posted by Naina Si:
To my limited knowledge, we can use a singleton for logging and database connections. Where else can we use this pattern?

You are better off, in most cases, not using the singleton pattern at all. Singletons are essentially global variables; use them too often and your system can turn into a rat's nest.
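A small sketch of the alternative being suggested, with invented names: rather than letting every class reach out to a global instance, hand the dependency in explicitly, so callers (and tests) control which instance is used.

```java
// A classic singleton: any code anywhere can reach it, which is what
// makes it effectively a global variable.
class Logger {
    private static final Logger INSTANCE = new Logger();
    private final StringBuilder buffer = new StringBuilder();

    Logger() {}  // package-private here so the alternative below can
                 // create its own instance; a strict singleton would
                 // make this private.

    static Logger getInstance() { return INSTANCE; }

    void log(String msg) { buffer.append(msg).append('\n'); }
    String contents() { return buffer.toString(); }
}

// The alternative: the dependency is passed in, so nothing is global
// and a test can supply any Logger it likes.
class OrderService {
    private final Logger logger;

    OrderService(Logger logger) { this.logger = logger; }

    void placeOrder(String item) { logger.log("ordered " + item); }
}
```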

Originally posted by Pradeep Bhat:
What is Seam Model which is mentioned in your book ?

Essentially, it is a way of looking at software that lets you see where you can replace behavior when you test.

Here's an example:

If I have a Java method like this:
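The code snippet from the original post did not survive the archive. Here is a hedged reconstruction of the kind of method being described (Account, BillingProcessor, and applyPayment are invented names):

```java
// The dependency on Account is hard-wired: the method creates the
// Account itself, so a test has no way to keep it away from the
// database that Account talks to.
class Account {
    private int balance;  // stands in for a database-backed value

    void deposit(int amount) { balance += amount; }

    int balance() { return balance; }
}

class BillingProcessor {
    void applyPayment(int amount) {
        Account a = new Account();  // hard-coded: no seam here
        a.deposit(amount);
    }
}
```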

and I want to test it without running the methods on account (because it talks directly to a database) then I'm in a bit of trouble. In this code, I'm not, because each call to account is now a seam, a place where I can replace behavior:
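The second snippet from the post is also missing; a hedged reconstruction (again with invented names): the Account is now passed in as a parameter, so every call on it becomes an object seam.

```java
// The Account is handed in rather than created internally, so each
// call on 'a' is an object seam: a test can supply a FakeAccount and
// the real database is never touched.
class Account {
    void deposit(int amount) {
        // imagine: writes directly to a database
    }
}

class FakeAccount extends Account {
    int deposited;  // records calls instead of hitting the database

    @Override
    void deposit(int amount) { deposited += amount; }
}

class BillingProcessor {
    void applyPayment(Account a, int amount) {
        a.deposit(amount);  // behavior replaceable through 'a'
    }
}
```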

How can I replace behavior? I can pass in a FakeAccount rather than a real Account.

In the first piece of code, those calls were effectively hard-coded; I couldn't have other methods executed when the method called a.deposit, etc. In the second example, I can.

This seam, the object seam, is pretty common and easy to use. There are a bunch of others: text seams, link seams, etc. When you start to see your code in terms of seams, you get a good sense of where you can instrument your code for tests.


Originally posted by Jeanne Boyarsky:
One of the techniques is to create a seam to alter behavior without changing code. In the book, link seams (changing the files the linker uses) are an example. It seems like aspects would be the equivalent in Java.

Is it a problem effectively changing the compilation for testing? If you aren't testing what gets deployed, can't bugs still get through? (I'm not that comfortable with Aspect Oriented Programming and its effects on testing yet.) Or is this more similar to mocks?

[edited to fix typo]

[ November 16, 2004: Message edited by: Jeanne Boyarsky ]

Great question. It is important to have higher level tests of the full application, but the kind of testing that I'm concentrating on is really unit testing, testing pieces in isolation. And, if you have to introduce seams to do that you do get a very tangible benefit. You are able to examine a class or a function by itself and feel confident that you've changed it the way you expected.

Re: Aspects, yes, that is one approach, but OO itself gives us seams to exploit. The discussion on 'object seams' gets into that a bit.

Originally posted by Alvin chew:
Michael, would this book be recommended as a college textbook to enhance students' software design skills, and is there any material like PowerPoint slides or a lecturer instruction handbook provided?

Unfortunately, no. I thought about doing that sort of thing, and it may still happen, but I think that the approach I took depends strongly on people having had a bit of experience in nasty code and feeling that pain concretely. I know I just didn't have that when I was a student. I could be wrong, but I just don't know if people would be in a position to connect with it. On the other hand, if anyone feels that it would be appropriate to hand the book to people as they graduate, to look at in about six months' time after working a bit, I'm all for that.