This week's book giveaway is in the Reactive Programming forum.
We're giving away four copies of Reactive Streams in Java: Concurrency with RxJava, Reactor, and Akka Streams and have Adam Davis on-line!
See this thread for details.

J. B. Rainsberger

author
since Aug 05, 2004

Recent posts by J. B. Rainsberger

Hello, authors.

First, congratulations on completing your book. I appreciate how much work you've put into it and I hope it's a tremendous success.

I have a question about Wicket and TDD/evolutionary design. I am a TDD practitioner and I get the best results when I let my designs evolve. What are three impediments--big or small--you think I would encounter if I test-drove a Wicket-based application? I ask because I frequently give a half-day tutorial on TDD and Enterprise Code. I'd like to introduce some more modern frameworks, platforms, and tools, and I'd like to know what I'm likely to encounter if I tried to introduce Wicket into the mix.

Thank you for your time.
Hello, there. I have only seen a little of GWT, and I am intrigued by the notion of being able to test-drive JavaScript with JUnit. Are the authors TDD or BDD practitioners? What practical advice or warnings do they have about trying to test-drive GWT applications?
12 years ago
GWT
I'm curious about the relationship between EJB 3 and the test-driven development community, or even the lightweight Java community, in general. It looks like EJB 3 was influenced to a degree by the ideas coming out of those communities, so if the authors have any story they'd care to share about that, I'd like to read it, positive or negative.

Also, I'm curious about the odd style of dependency injection that EJB 3 appears to have adopted. I admit that I've only looked at it briefly, but it appears not to have what I consider the true benefit of dependency injection, and that is an easy way to substitute test doubles for EJB dependencies to make testing easier. I can see that the annotation-style injection removes the need for duplicating all the JNDI lookup code, but is that all it does?
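To illustrate what I mean by the true benefit -- substituting test doubles -- here is a minimal plain-Java sketch. The class names are entirely hypothetical, and it deliberately uses plain constructor injection rather than EJB 3's annotation style, just to show the payoff I'm looking for:

```java
// Hypothetical sketch: when a dependency arrives through the constructor,
// a test can hand in a fake instead of the real collaborator.

interface MailSender {
    void send(String to, String body);
}

class RegistrationService {
    private final MailSender mailSender;

    RegistrationService(MailSender mailSender) {
        this.mailSender = mailSender;
    }

    String register(String email) {
        mailSender.send(email, "Welcome!");
        return "registered:" + email;
    }
}

public class Demo {
    public static void main(String[] args) {
        // In a test, substitute a recording fake for the real SMTP sender.
        final StringBuilder log = new StringBuilder();
        MailSender fake = (to, body) -> log.append(to).append("|").append(body);
        RegistrationService service = new RegistrationService(fake);
        System.out.println(service.register("a@example.com")); // registered:a@example.com
        System.out.println(log);                               // a@example.com|Welcome!
    }
}
```

The question, then, is whether annotation-driven injection gives me an equally convenient seam for slipping in that fake, or whether it only saves the JNDI boilerplate.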

Thanks. After years of telling people how to work around EJB, it's nice to see the potential for not having to give out that advice any more.

Take care.
I have tried a couple of times to learn about AJAX, but have to admit that I haven't seen a good exposition of the subject yet, so I continue to struggle. One of my underlying fears has to do with JavaScript, asynchronous communication and TDD.

Does anyone here have any experience to share about test-driving AJAX-based GUIs? I've tried a little with Ruby on Rails' AJAX support, but I haven't got very far at all. JavaScript has historically been a nuisance to TDD practitioners, so I'm worried that if I embrace AJAX I'll have to leave test-driving my GUIs behind....

Originally posted by Reza Rawassizadeh:
OK, so for my usage (I just need to know it, not work with it in depth), I prefer to read my lovely "Thinking in Java".
After I read 115 pages of JUnit Recipes, I didn't find anything about teardown(), so I don't find it an interesting book (from my point of view).



This is by far the strangest complaint I've seen about the book. Reza, if you are specifically interested in tearDown(), then did you look in the index to see where I use tearDown()?

You might also consider that if I don't talk about tearDown() in the first 115 pages, then perhaps it's because I don't use tearDown() very much at all.
13 years ago
First, let me welcome the authors.

Second, I don't know whether this is more of a Hibernate question than it is a Spring question, but I'll ask it just the same. From what I understand, one can use Spring to provide container-managed transactions when using Hibernate. I am certainly interested in this feature, as well as its impact on testing. Can you recommend a tutorial or overview? I've read the relevant Hibernate documentation and didn't find it terribly clear.

Take care.
Welcome to the authors!

It's no secret that my primary interest in any software package, framework or platform has to do with designing for testability. Do you have an opinion as to how well SWT/JFace mesh with, say, a test-driven development approach? In particular, is it any easier to test SWT/JFace-based UIs in isolation from the rest of the application than, say, Swing/AWT? If so, can you give an example?

Thanks.
14 years ago

Originally posted by Don Kiddick:

I don't see much advantage of FitNesse over unit testing, then, apart from the communication aspect of the wiki. Or am I missing something?



Yes: customers don't know how to write code, but they do know how to express the results they want in tables. That's the value Fit brings beyond just programmer testing.
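To make that concrete, here is a hand-rolled plain-Java sketch of the idea -- this is not Fit's real API (actual fixtures extend classes such as fit.ColumnFixture), and every name below is hypothetical. A customer fills in a table of inputs and expected outputs, and a thin fixture runs each row against the production code:

```java
// Hand-rolled sketch of the Fit idea: customers describe expected results
// as table rows; a fixture checks each row against the production code.
public class DiscountTableDemo {
    // Production code under test (hypothetical business rule).
    static int discountPercent(int orderTotal) {
        return orderTotal >= 1000 ? 10 : 0;
    }

    public static void main(String[] args) {
        // The "customer table": order total | expected discount percent.
        int[][] table = {
            {500, 0},
            {1000, 10},
            {2500, 10},
        };
        // The fixture: run each row and report pass/fail, Fit-style.
        for (int[] row : table) {
            int actual = discountPercent(row[0]);
            String verdict = (actual == row[1]) ? "pass" : "fail (got " + actual + ")";
            System.out.println(row[0] + " -> expected " + row[1] + ": " + verdict);
        }
    }
}
```

The customer never writes Java; they only fill in the table. The programmers write the thin fixture once.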
15 years ago


Let's suppose that it's necessary to validate a student's school registration.


validateXXX is a part of the whole validateRegistration. The methods don't have a real meaning by themselves, but they are complex enough to deserve to be tested.
I wouldn't like to expose them JUST because I want to test them.
What do you suggest I do?

Regards,


Here is what I have done in the past, and will do again.

Validating input (whether semantically or syntactically, the approach is the same) is based on a sequence of rules. Each rule defines an aspect of the input that makes it correct or incorrect. Generally speaking, input is valid if and only if the input passes all the validation rules involved.

I can reuse ShortCircuitRuleset all over the place, but I only need to test it once for an arbitrary set of rules that might pass or fail. It might require six tests. It is very easy (by comparison) to test each part of the validation rule in isolation. By testing each part of the rule in isolation, you will find it easier to make sure you cover more and better failure conditions.

Future refactorings, if appropriate, are easier: you could potentially externalize the validation rules as data in files. For syntactic validation, I have in the past made these rules simple regular-expression matches stored outside the system in easy-to-change text files. I could even put them directly into web page templates, next to the widget the validation rules belong to!

This is an example of taking private methods that really don't want to be private and changing the design in such a way to make the whole thing easier to test and better suited for change. The duplication in all those private methods wasn't obvious before, and now it is: each method looks at data, decides whether it's valid and registers some kind of failure message if it isn't. Now that duplicate behavior is explicit and easy to see. We have even leveraged it by using the Composite pattern!
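Since the original code isn't reproduced in this thread, here is a minimal sketch of what such a short-circuiting, Composite-style rule set might look like. Apart from the name ShortCircuitRuleset, every detail below is my own illustration:

```java
import java.util.Arrays;
import java.util.List;

// One validation rule: examines the input and returns a failure message,
// or null if the input passes.
interface ValidationRule {
    String check(String input);
}

// The Composite: runs its rules in order and stops at the first failure.
class ShortCircuitRuleset implements ValidationRule {
    private final List<ValidationRule> rules;

    ShortCircuitRuleset(ValidationRule... rules) {
        this.rules = Arrays.asList(rules);
    }

    @Override
    public String check(String input) {
        for (ValidationRule rule : rules) {
            String failure = rule.check(input);
            if (failure != null) {
                return failure; // short-circuit on the first failing rule
            }
        }
        return null; // all rules passed
    }
}

public class RegistrationValidationDemo {
    public static void main(String[] args) {
        // Each simple rule is trivial to test in isolation.
        ValidationRule notEmpty = s -> s.isEmpty() ? "must not be empty" : null;
        ValidationRule digitsOnly = s -> s.matches("\\d+") ? null : "must be digits";

        ValidationRule studentId = new ShortCircuitRuleset(notEmpty, digitsOnly);
        System.out.println(studentId.check(""));      // must not be empty
        System.out.println(studentId.check("abc"));   // must be digits
        System.out.println(studentId.check("12345")); // null
    }
}
```

Notice that ShortCircuitRuleset itself needs testing only once, while each small rule gets its own focused tests -- exactly the separation described above.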

Take care.
15 years ago
I'm not saying that that won't work. I'm merely pointing out that when I think of refactoring, I don't think of writing new tests; instead, I think about changing code so that it still passes the existing tests. When I'm writing new tests, I assume it's to add new features. The generally accepted view of things is that one either refactors, or adds new features, but never both simultaneously.
15 years ago

Originally posted by Rickard Johansson:
I have heard people say that the argument for only testing the public interface of a unit is that you will be able to refactor the internal structure of the unit without having to change the tests. I guess this comes from bad experience in projects where tests discouraged people from making code changes, since they would also have to rewrite the tests (i.e., more work).

I myself find this a bit strange since a solid testbed should give developers the courage to refactor code and be confident they did not break anything.



Originally I did not want to reply to this, because I didn't want to stretch out this discussion; however, I feel it's important to point something out about Rickard's objection.

Let's describe what happens when I have to change a test because I refactored.

When I start, my codebase is GREEN (all tests pass). Now I want to refactor some internal implementation detail inside a private method that has tests. My first instinct is to change the private method, then recompile. Since it is merely an implementation detail, I expect the codebase to remain GREEN; however, because I have tests for this implementation detail, there is a good chance that either the tests no longer compile, or they compile but fail. This is because I'm changing behavior upon which the tests depend.

So now I have two choices, neither of which I particularly like:

  • Test-drive a new private method from scratch, then change the production code to invoke the new method, then remove the old method. This is among the weakest kinds of refactorings: Replace Algorithm. It requires having two sets of code that both pass semantically equivalent but syntactically different tests -- that is, the tests are written differently (APIs are different), but mean the same thing. Do you think we might get that wrong? This sounds like a lot more effort than I expected when I started.
  • Change the production code, then change the tests to match. Hm. Do we really want to surrender the value that comes from writing the tests first? The more we pursue this option, the further we slide back into testing our code after we write it, and I know I'm less productive when I do that.


When refactoring, the ideal rhythm is to change production code without changing tests. That means the design needs to be flexible enough to minimize the number of tests that fail when the production code changes. One of the ways we achieve this is a combination of interface-rich designs and testing entirely through the public API. It works very well, and the alternatives are less desirable.

That's why I recommend it. As always, your mileage may vary.
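To make the "public API only" rhythm concrete, here is a minimal Java sketch; all the names are my own illustration, not from any of the posts above. The test exercises only the public method, so the private helper can be rewritten freely without any test changing:

```java
// Sketch: tests depend only on the public API, so the private helper
// remains a true implementation detail, free to change during refactoring.
class PriceCalculator {
    // Public API: the only thing tests should mention.
    int totalInCents(int unitPriceInCents, int quantity) {
        return applyBulkDiscount(unitPriceInCents * quantity, quantity);
    }

    // Implementation detail: rewrite it as a loop, a table lookup, whatever --
    // the public-API checks below keep passing either way.
    private int applyBulkDiscount(int subtotal, int quantity) {
        return quantity >= 10 ? subtotal * 90 / 100 : subtotal;
    }
}

public class PublicApiTest {
    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();
        // These checks mention only totalInCents(), never the private helper.
        check(calc.totalInCents(100, 5) == 500, "no discount under 10 units");
        check(calc.totalInCents(100, 10) == 900, "10% discount at 10 units");
        System.out.println("public-API tests pass");
    }

    static void check(boolean ok, String message) {
        if (!ok) throw new AssertionError(message);
    }
}
```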
15 years ago
Either read this, or recipe 2.8 of the book JUnit Recipes.
[ October 08, 2004: Message edited by: J. B. Rainsberger ]
15 years ago
First, Frank, let me thank you for taking so much time to respond to my question. Let me see what there is to say here...


1) Agile stresses testing-first, where a unit/functional test is built before an engineer writes the application software. UGOT says "test first" is a good idea, but it is optional. UGOT is not nearly as militant as Agile sometimes comes across.



Do please note that not all Agile Software Development approaches advocate "Testing First". Some, such as Scrum, say little about technical practices; while others, such as FDD, advocate continual testing without recommending that tests precede code. It is XP in particular that essentially requires it (although I prefer to say that XP recommends it strongly).

That said, I can't argue with the perception that Agile can come across as militant with respect to Testing First. Part of that is that XP is the most well-known Agile school, and as a result it is easy to believe that Agile == XP.


2) UGOT provides a decision maker with actionable knowledge to decide when a service is ready for production. UGOT provides:
- Functional tests to check for regression and successful operation.
- Scalability testing for capacity planning. Scalability is stated as a function of throughput measured at the consumer/client in requests per second as the number of concurrent consumers/clients increases. This shows that the system has enough capacity to serve forecasted levels of users. (See http://www.pushtotest.com/Docs/howto/onscalability2.html to see how I have been plotting this for our customers. I'm open to feedback/criticism.)
- Performance testing to make sure the service meets user expectations for response times. A three-tall-mocha-lattes-a-day person isn't going to wait more than 5 seconds for their email client to check for new messages.



Customer Testing certainly addresses the first point, by encouraging stakeholders to describe features by means of examples we turn into tests.

Regarding performance and scalability testing -- and I love that you know and emphasize the difference between the two -- XP advocates doing both as part of both Programmer and Customer Testing. In fact, the XP practice "Ten-Minute Build" indirectly addresses the performance issue, because if the tests run slowly, then perhaps the production code runs slowly. XP also advocates turning performance and scalability requirements into User Stories, from which tests must follow to ensure those requirements are met.


3) Bret described "coaching tests", a way of thinking about Acceptance tests that turn user stories into tests. UGOT identifies archetypal users by their behavior when using the service. By understanding the archetypal users' behavior, we can then write test agent code that drives the service as the archetypal user would.



Yes, Coaching Tests, or Specification by Example, or Example-Driven Development, or whatever we decide to call it, is an integral part of the way Customer Testing is practiced among XP practitioners. As for the archetypal user technique, I have just attended a workshop at IBM's CASCon that discussed the intersection of Agile and User-Centered Design. As you know, using Personas is a well-known UCD technique, and as the Agile and UCD communities learn to work together, UCD techniques will become a more explicit part of the way Agile is practiced.


Java Testing and Design shows the reason why building archetypes is important to a business, how to go about doing it, and then how to repurpose them between developers, QA technicians, and IT managers.



Do you make specific references to this as a UCD technique, or even to other UCD techniques? (Your reference to Alan Cooper here suggests that you do.)

Thanks again for your patience in responding. I look forward to the book.
[ October 07, 2004: Message edited by: J. B. Rainsberger ]
15 years ago

Java Testing and Design puts forth User Goal Oriented Testing (UGOT). UGOT is a testing methodology that models the behavior of archetypal users of a service. In the book I show how to understand these behaviors and how to turn them into test agent code. The resulting code does a functional test of a service. The functional test is intelligent, since the test requires multiple calls to the service to accomplish the user's goals. Rather than testing every function of the service, the UGOT method tests the service against the goals of the archetypal users. This has worked really well with GM, Sun, and others. The book covers three case studies.



Forgive me, Frank, but could you give me a taste of the difference between UGOT and Customer Testing (or Acceptance Testing) the way the Extreme Programming community generally understands the term? Based on the description you give here, I cannot distinguish the two. What I see is that you focus on Customer Testing, which attempts to show that the required features in a system are present, with (perhaps!) less to say on Programmer Testing, which attempts to show that the code we wrote behaves the way we expected it to.
15 years ago
Gian Franco:


Why, in your opinion, are Tiger's new features described by Schildt undermining the distinctive characteristics of Java?

Bear in mind I don't know C++, but I'm trying to understand your preoccupation.



If I understand correctly, C++ added templates purely for the purpose of generic algorithms and parameterized types. Instead, they ended up creating a complete Turing machine, which has made C++ a very complicated language. If the same thing happens to Java, then someone will have to invent another, simpler language in 2007 to take its place, and that language will probably look a lot like Java did in 1997! What would be the gain?
15 years ago