Jeff Langr

author
since May 14, 2003

Recent posts by Jeff Langr

Hi Henry--

Can you send me the email addresses of John Czukkermann and Atul Khot? I've gotten the addresses of the other two winners (Wil and Roel) and forwarded them on to my editor at PragProg.

Regards,
Jeff
8 years ago
If you've not received the book yet, please send me an email (jeff@langrsoft.com) and I'll lean on the publisher.

thanks,
Jeff
8 years ago
Hi Roel--

Indeed that's a possibility, though in practice unlikely. I'm struggling to think of a case where I've seen something like that happen (though I'm sure it has happened to me at some point).

There are lots of folks expounding on this in various places if you search; for example, http://butunclebob.com/ArticleS.UncleBob.TheDangerOfMockObjects

I think there's a balance to be struck. When I've tried mocking *every* collaborator call, I found I didn't enjoy it--it created too much pain when I wanted to refactor.

Regards,
Jeff
8 years ago
Thanks Henry--

MoMA was definitely worth the visit. We did the Guggenheim last visit, so next time it'll have to be the Met. (We grew up in DC, so always had a wealth of great museums to visit, but had never seen much in NY.)

Jeff
8 years ago
I'm about to head over to the NY MoMA (yay!) for the day, but wanted to thank the JavaRanch folks for setting up this promo this week for Pragmatic Unit Testing in Java.

Thanks to everyone who participated in the forum! I enjoyed the questions--all good--and will be happy to answer additional ones about the book, or about unit testing and/or TDD, here or privately.

Regards,
Jeff
8 years ago
I've seen Watir in use in a couple places, and it was reasonably effective.

If you're doing Selenium, I'd recommend Capybara if you're just getting started.

Jeff
8 years ago

Roel De Nijs wrote:A simple example: assume you'll have a utility method isEmpty(String s) which returns true if s is null, empty or whitespace; false otherwise. And you need to test this method: Will you write tests for empty type values like null, "" and " "? Or do you mock the isEmpty call returning true if you pass "ty0" and false with all other values?



Hi Roel--

Right--if you assume that isEmpty is already tested (I wrote unit tests for it elsewhere), the job in testing the create method is to verify the two possible paths based on the two possible outcomes of the if conditional. As you suggest, you could mock the static (if you were using, for example, PowerMock). You could also set the value of type to a value that answers false for the conditional for one test, and to a value that answers true for a second test. Those two tests, perhaps named:

createReturnsNullWhenTypeIsEmpty
createReturnsAnimalWhenTypeNotEmpty

are sufficient, since isEmpty is already tested. Any more tests around how type might be "empty" would be redundant.

I think I lean toward using mocks if the tool is right (probably Mockito--i.e. a tool where I can clearly express the stubbing in a single line) and it's not a static (I don't like the extra complexity of bringing in PowerMock). Otherwise, I'm ok with passing a representative value. The test names are the same either way.
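The non-mocking variant of those two tests can be sketched in plain Java. This is a hedged illustration only--the Animal, create, and isEmpty definitions below are hypothetical stand-ins, not the book's or Roel's actual code:

```java
// Sketch: two tests covering the two paths through create(), assuming
// isEmpty() is already tested elsewhere. All names here are hypothetical.
public class AnimalFactoryTest {
    // Hypothetical production code under test.
    static class Animal {
        final String type;
        Animal(String type) { this.type = type; }
    }

    public static boolean isEmpty(String s) {
        return s == null || s.trim().isEmpty();
    }

    public static Animal create(String type) {
        return isEmpty(type) ? null : new Animal(type);
    }

    // One test per outcome of the if conditional--no need to re-test
    // every way a string can be "empty"; isEmpty() already covers that.
    public static void createReturnsNullWhenTypeIsEmpty() {
        if (create("   ") != null)
            throw new AssertionError("expected null for empty type");
    }

    public static void createReturnsAnimalWhenTypeNotEmpty() {
        if (create("cat") == null)
            throw new AssertionError("expected an Animal for non-empty type");
    }

    public static void main(String[] args) {
        createReturnsNullWhenTypeIsEmpty();
        createReturnsAnimalWhenTypeNotEmpty();
        System.out.println("both tests pass");
    }
}
```

Either representative value ("   " or "cat") exercises its branch; any further "empty" variants would be redundant with the isEmpty tests.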

Jeff
8 years ago
Greetings Yvette--

This "Agile in a Flash" card overviews the test smells:

http://agileinaflash.blogspot.com/2011/11/test-abstraction-smells.html

It links to a PragPub article where Tim Ottinger and I take a difficult test method, and whittle it down into something reasonable.

I thought it was effective to see the distinction between a test whose maintenance value is low, and one that can act as reasonable documentation of class capabilities. So I ended up repeating the concept in the book--i.e. you'll see another example of cleansing a test.

If you google about, you'll note that there are probably quite a few differing collections of "test smells." The book xUnit Test Patterns, for example, offers its own list. I don't think there's a canonical list, and I fear I'm muddying the waters a bit :-), but I've found it useful to strive to stomp out these smells.

Regards,
Jeff
9 years ago
Hi Atul,

Even painstakingly careful analysis of multi-threaded code can fail to reveal insidious concurrency defects, which often don't surface until well into production use.

The best approach is to use well-tested libraries (e.g. Lea's Java 5 concurrency library) and minimize the amount of code that requires concurrent execution. Think "separation of concerns" between threading needs and business (application) logic.

There's a small threading example in the book that covers how to factor your code to this end. Also, if you take a look at Modern C++ Programming with Test-Driven Development, I demonstrate how you might test-drive threading needs.

http://blog.narmnevis.com/2011/03/implementing-active-object-pattern-with-concurrent-java-1/ -- one demonstration of using the Java concurrency API to implement active object.
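The "separation of concerns" idea above can be sketched with the standard java.util.concurrent library: a single-threaded executor serializes all access to the state, so the business logic itself never needs synchronization. The EventRecorder class and its names are my own hypothetical illustration, not code from the book or the linked article:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Active-object sketch: callers submit work to a single worker thread,
// so the business logic (the increment) runs free of data races.
public class EventRecorder {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private int count = 0;  // touched only by the worker thread

    // Threading concern lives here; business logic stays plain.
    public Future<Integer> record() {
        return worker.submit(() -> ++count);
    }

    public void shutdown() throws InterruptedException {
        worker.shutdown();
        worker.awaitTermination(1, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        EventRecorder recorder = new EventRecorder();
        for (int i = 0; i < 100; i++)
            recorder.record();                  // may be called from any thread
        Future<Integer> last = recorder.record();
        System.out.println(last.get());         // prints 101
        recorder.shutdown();
    }
}
```

Because the state is confined to one thread, the increment logic can be unit-tested directly, without any concurrency in the tests.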

Regards,
Jeff
9 years ago
Hi Atul,

More often than not, logging is an ad hoc exercise, and I suspect most teams over-log. Of course, when you have a sticky production problem, it always seems like you've under-logged.

My other concern about logging is that I've seen enough unfortunate cases where the logging code itself had defects, and that sort of thing is rarely tested on any level (automated or manual, unit or integrated).

Everyone I've seen (myself included) inserts logging statements without tests, and most of the time that's probably OK. I still find a need to test some of the more interesting and/or critical areas of the system where I've inserted logging, particularly when there's a bit of logic involved. But yeah, even for me that's fairly rare.
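When the logging does carry logic worth testing, java.util.logging makes it possible to capture log records in memory and assert on them. A hedged sketch--the timeRequest method and its threshold are hypothetical, for illustration only:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Logger;
import java.util.logging.LogRecord;

// Sketch: unit-test conditional logging by attaching an in-memory
// Handler and asserting on the captured records.
public class LoggingTest {
    // Hypothetical code under test: warns only on slow requests.
    static final Logger log = Logger.getLogger("app");
    static void timeRequest(long millis) {
        if (millis > 500) log.warning("slow request: " + millis + "ms");
    }

    // Collects every published record for later assertions.
    static class CapturingHandler extends Handler {
        final List<LogRecord> records = new ArrayList<>();
        @Override public void publish(LogRecord record) { records.add(record); }
        @Override public void flush() {}
        @Override public void close() {}
    }

    public static void main(String[] args) {
        CapturingHandler handler = new CapturingHandler();
        log.addHandler(handler);
        log.setUseParentHandlers(false);  // keep console output quiet

        timeRequest(100);   // fast: should not log
        timeRequest(900);   // slow: should log one warning

        if (handler.records.size() != 1)
            throw new AssertionError("expected exactly one log record");
        if (!handler.records.get(0).getMessage().contains("900"))
            throw new AssertionError("unexpected log message");
        System.out.println("logging behavior verified");
    }
}
```

The same capture-and-assert approach works with log4j or logback via their appender interfaces.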

Jeff
9 years ago
Thank you for ordering the book, Chan! I'll try to hang out at CodeRanch a bit after the promo finishes.
9 years ago
Thanks for having us Jeanne!
9 years ago
Thanks John.

aha! Feathers' book Working Effectively with Legacy Code, though a little dated, is still good fodder for getting legacy code under control. The Mikado Method is also a great disciplined approach for larger attempts at cleaning up legacy code.

"If it ain't broke, don't fix it" is how most systems got to be so bad. Having fast unit tests can help a lot here.

Regards,
Jeff
9 years ago
Hi Tomas,

If you're writing code that's clear, well-designed, accommodates new features easily, and doesn't exhibit any defects, you might not need unit tests. Unfortunately, most systems I've seen aren't like that. They're bloated, with lots of duplicate code, hard-to-understand code, inflexible design, and so on. They get that way because of the adage "if it ain't broke, don't fix it." We get our code to work, and ship it. When we add new code, we seek to add it in a minimally invasive way, which often isn't the best way to design it (e.g. it's easier to copy a long method and make a small change to get your stuff working, rather than do the right thing and factor the common code to a single spot). We fear touching code already proven to work, because why would we risk breaking something that's not what we're working on? As such, the natural progression of a system is to degrade in quality.

The larger hope for unit testing is that it allows you to make changes while minimizing the fear of breaking things already working. Regular cleanup of the code can help stave off the degradation.

I did no unit testing for ~18 years prior to learning about it. I didn't ship a lot of defects, though of course I did ship some, and they were often dumb mistakes that a simple unit test might've caught.

You can survive without unit tests. But it's likely your systems will degrade faster than mine. I wouldn't have spent another 15 years doing unit testing/TDD if I didn't gain a number of significant benefits.

As far as "better understanding the system," there's a lot of emphasis in the book on creating tests that clearly document the behavior they are verifying. Test names are important, as is the structure and readability of tests. I often will pull up just the list of test names for a class, and quickly get a good understanding of what features it exposes, and what the developers intended with its design. And as long as the tests are passing, I know they honestly depict how the system currently behaves (unlike comments).

Regards,
Jeff
9 years ago