For Mr. Cohen: Testing Java vs. testing .NET

 
Ernest Friedman-Hill
author and iconoclast
Posts: 24207
One thing that's interesting to me about your book's writeup is that while the title is "Java Testing and Design," the "Java" part apparently refers more to the testing technology than to the tested technology: you show how to test not only J2EE apps but .NET apps and perhaps other technologies as well.

My question: tell us what's similar, and what's different, about client-side testing of J2EE Web apps and .NET Web apps? Does the back end technology matter at all to you when your testing hat is on? Should it?
 
Corey McGlone
Ranch Hand
Posts: 3271

Originally posted by Ernest Friedman-Hill:
One thing that's interesting to me about your book's writeup is that while the title is "Java Testing and Design," the "Java" part apparently refers more to the testing technology than to the tested technology: you show how to test not only J2EE apps but .NET apps and perhaps other technologies as well.

My question: tell us what's similar, and what's different, about client-side testing of J2EE Web apps and .NET Web apps? Does the back end technology matter at all to you when your testing hat is on? Should it?




I'm certainly interested to see Frank's reply, but my take on this would be that it should matter...and it shouldn't.

As far as I'm concerned, this is kinda like comparing white-box testing to black-box testing, and you should really do both.

In one way, I think you should be able to ignore what technology is used to implement an application so that you can test the functionality of that application at face value. Regardless of what was used to create an application, it should still perform the same functions and those functions certainly need to be tested. That would seem to fall right in line with black-box testing.

On the other hand, I think it's only fitting that you do pay attention to the technology that was used to write the application so that you can ensure that every execution path is tested (or at least try to). That would line up nicely with white-box testing. It would be great if the black-box tests covered all possible execution paths but, unless you've got a lot of black-box tests lined up, you're probably going to need to use some white-box tests.

So, like I said, I look forward to Frank's reply, because I'm one who believes you should do both - ignore the technology and pay attention to the technology - just not at the same time.
 
Alexandru Popescu
Ranch Hand
Posts: 995
Hi!

First of all, welcome to Frank.
Then back to business :-). I do not consider myself an advanced test designer, but I am thinking that testing a Web application on the client side is in fact testing HTML, JavaScript, etc. That being said, I would go further and say that possibly any web application can be tested (in a way) by the same means/tools.
What is interesting to me is: what is the set of tests that may be performed _just_ on the client side of a web application?

./pope
 
author
Posts: 11962

Originally posted by Ali Pope:
testing a Web application client side is in fact testing html, javascript, etc. This being said, I would go further and say that possible any web application can be tested (in a way) by the same means/tools.


Note that Frank's book talks about testing both web applications and other network-centric applications that are based on protocols such as HTTP, SOAP, XML-RPC, etc.
 
Frank Cohen
author
Posts: 18
Hi Guys: There is a world of difference between testing services built with Java and .NET. For starters, most .NET developers do not understand the impact that SOAP encoding styles, XML handling methods, and coupling techniques have on functionality, scalability, and performance. Visual Studio makes these choices for the developer.

In the Java world, software developers and QA technicians are still arguing over whether Eclipse is better than NetBeans. :-) Java developers are expected to pick their favorite tools, techniques, and designs. For example, given a choice between JAXB, JAXP, Xerces, and JDOM, which is most appropriate for a service design and which will run fastest in production? Java developers have many choices to make, and each choice has a huge impact on scalability and performance.
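To make the point concrete, here is a minimal, illustrative sketch (not from the book) that times the same document parsed two ways through the standard JAXP API; the payroll.xml file name is just a placeholder:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;

public class ParserChoiceTimer {
    public static void main(String[] args) throws Exception {
        File xml = new File("payroll.xml"); // placeholder sample document

        // DOM: builds the whole tree in memory (convenient, but heavier)
        long start = System.currentTimeMillis();
        DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(xml);
        System.out.println("DOM parse: " + (System.currentTimeMillis() - start) + " ms");

        // SAX: streams events without building a tree (lighter, more work for the caller)
        start = System.currentTimeMillis();
        SAXParserFactory.newInstance().newSAXParser().parse(xml, new DefaultHandler());
        System.out.println("SAX parse: " + (System.currentTimeMillis() - start) + " ms");
    }
}

Run that kind of comparison against the documents your service actually exchanges and the "right" parser choice quickly stops being a matter of taste.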

Java Testing and Design is the first book to look at these issues and provide a methodology and framework to understand scalability and performance in a service environment. For example, Chapter 14 shows a huge problem in building scalable SOAP-based Web Services when the developer chooses SOAP RPC-encoding. I'll post a separate reply about this topic.

-Frank
 
Frank Cohen
author
Posts: 18


In one way, I think you should be able to ignore what technology is used to implement an application so that you can test the functionality of that application at face value. Regardless of what was used to create an application, it should still perform the same functions and those functions certainly need to be tested. That would seem to fall right in-line with black-box testing.

On the other hand, I think it's only fitting that you do pay attention to the technology that was used to write the application so that you can ensure that every execution path is tested (or at least try to). That would line up nicely with white-box testing. It would be great if the black-box tests covered all possible execution paths but, unless you've got a lot of black-box tests lined up, you're probably going to need to use some white-box tests



You've hit upon a topic that is a central thesis of Java Testing and Design: Black box and white box testing do not produce actionable knowledge in a service environment. Black box testing assumes that the "box" is self-contained: you are the operator of the box, and when you make a sequence of requests the box is supposed to respond in the same way. That's just not how a service works. When you make a request to the service, its functionality and speed-to-perform-the-request depend on what other requests are being served concurrently. Black-box testing is fine to achieve a simple regression test to make sure the box still performs functions correctly, but you cannot take the response and extrapolate that the functionality will be correct for subsequent requests.

White box testing has a similar and deadly problem in a service environment: In my experience there is little to no chance of achieving 100% coverage in a set of test cases. With services you not only have to test the consumer/client-facing request interface but also the back-end system. This leads to test matrices that have 20,000 or more test cases. For example, in a test of Web Services for General Motors we looked at a service that offered multiple encoding styles, multiple levels of functions, and different hardware configurations. In all there were 18,000 possible test cases. With each test case taking 30 minutes to set up, run, and tear down, the entire suite would take more than a year to run. Few enterprises are able to test all of the test cases.
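(To spell out the arithmetic: 18,000 test cases × 30 minutes each is 9,000 hours, or roughly 375 days of non-stop, around-the-clock execution.)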

Java Testing and Design puts forth User Goal Oriented Testing (UGOT). UGOT is a testing methodology that models the behavior of archetypal users of a service. In the book I show how to understand these behaviors and how to turn them into test agent code. The resulting code does a functional test of a service. The functional test is intelligent since the test requires multiple calls to the service to accomplish the user's goals. Rather than testing every function of the service, the UGOT method tests the service against the goals of the archetypal users. This has worked really well with GM, Sun, and others. The book covers three case studies. Take a look at my blog (http://www.pushtotest.com) for additional case studies that happened after I turned in the manuscript.
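To give a rough idea of what a test agent looks like, here is a bare-bones sketch in plain Java; the URLs, parameters, and class name are invented for illustration, and the agents in the book are written with TestMaker and Jython rather than this exact code:

import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical test agent: drives the service the way one archetypal user would,
// making the several calls needed to accomplish that user's goal.
public class DataEntryClerkAgent {
    private static final String BASE = "http://example.com/payroll"; // placeholder URL

    public boolean runGoal() throws Exception {
        // Step 1: sign in
        if (get(BASE + "/login?user=irene") != 200) return false;
        // Step 2: look up an employee's withholding record
        if (get(BASE + "/employee/4711/withholding") != 200) return false;
        // Step 3: update the number of dependents
        if (get(BASE + "/employee/4711/withholding?dependents=2") != 200) return false;
        return true; // the whole goal was accomplished end to end
    }

    private int get(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        int code = conn.getResponseCode();
        conn.disconnect();
        return code;
    }
}

The pass/fail question is not "did each function return the right value" but "could Irene get her job done."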

Anyway, I hope this gets the discussion going here. I'm open to any and all of your new ideas.

-Frank
 
author
Posts: 87

Java Testing and Design puts forth User Goal Oriented Testing (UGOT.) UGOT is a testing methodology that models the behavior of archetypal users of a service. In the book I show how to understand these behaviors and how to turn them into test agent code. The resulting code does a functional test of a service. The functional test is intelligent since the test requires multiple calls to the service to accomplish the user's goals. Rather than testing every function of the service, the UGOT method tests the service against the goals of the archetypal users. This has worked really well with GM, Sun, and others. The book covers three case studies.



Forgive me, Frank, but could you give me a taste of the difference between UGOT and Customer Testing (or Acceptance Testing) the way the Extreme Programming community generally understands the term? Based on the description you give here, I cannot distinguish the two. What I see is that you focus on Customer Testing, which attempts to show that the required features in a system are present, with (perhaps!) less to say on Programmer Testing, which attempts to show that the code we wrote behaves the way we expected it to.
 
Frank Cohen
author
Posts: 18
Hi J.B.: I'm not an expert at Agile methods, so please take my reply with a grain of salt. I would like to learn from you about Agile Customer Testing (and Acceptance Testing) so feel free to reply.

Brett Pettichord introduced me to Agile Testing a couple of years ago. It seemed very well thought-out and something that would challenge engineers to think about testing first. UGOT is different from Agile Testing in these ways:

1) Agile stresses testing-first, where a unit/functional test is built before an engineer writes the application software. UGOT says "test first" is a good idea but it is optional. UGOT is not nearly as militant as Agile sometimes comes across.

2) UGOT provides a decision maker with actionable knowledge to decide when a service is ready for production. UGOT provides:
- Functional test to check for regression and successful operation.
- Scalability testing for capacity planning. Scalability is stated as a function of throughput measured at the consumer/client in requests-per-second as the number of concurrent consumers/clients increases. This shows that the system has enough capacity to serve forecasted levels of users. (See http://www.pushtotest.com/Docs/howto/onscalability2.html to see how I have been plotting this for our customers. I'm open to feedback/criticism.)
- Performance testing to make sure the service meets user expectations for response times. A three-tall-mocha-lattes-a-day person isn't going to wait more than 5 seconds for their email client to check for new messages.

3) Brett described "coaching tests, a way of thinking about Acceptance tests" that turn user stories into tests. UGOT identifies archetypal users by their behavior when using the service. By understanding the archetypal user's behavior, we can then write test agent code that drives the service as the archetypal user would. For example, the following are two archetypes for an upcoming test of a payroll service:

Payroll Data Entry Clerk (Irene)
Irene works for a paper supply company in Milwaukee, Wisconsin. She is 26, engaged to be married next spring, and an accomplished long-distance runner. Irene is responsible for managing the company and employee information for the company payroll. On a daily basis she gets questions from employees and contractors about their tax withholding. For example, how many dependents do I have on my W2? The company runs payroll every two weeks. Irene's work effort greatly increases as the next payroll date approaches.

Payroll Approver (Maggie)
Maggie works for the same paper supply company as Irene. She manages the in-house bookkeeping staff of 3 clerks, including Irene. Maggie has been with the company for the past 12 years as the financial services manager. She has two children, her husband is a small business owner, and she loves to travel. Maggie is the company's final check that payroll information is correct and that the calculated taxes in each payroll are correct. Maggie routinely updates company and employee information.

The user archetypes provide a common, down-to-earth way of discussing typical usage of a service among developers, QA techs, and IT managers. UGOT turns the archetypes into functional tests. This can be done in any framework: JUnit, LoadRunner, JMeter, Java, Jython, etc. (Of course, my preference is Jython and Java in TestMaker.)

With the right framework these functional tests are run multiple times and concurrently to test the system for scalability and performance. The same functional tests can be run periodically as a Quality of Service monitor.
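For illustration only, here is a rough sketch of that idea in plain Java, reusing the hypothetical DataEntryClerkAgent from the earlier sketch; it runs one agent per simulated user in parallel and reports throughput. Repeating the run with a growing user count gives the throughput-versus-concurrent-users curve I described above:

// Rough sketch: run one functional test agent per simulated user in parallel
// and report throughput in successful goals per second.
public class ScalabilityRun {
    public static void main(String[] args) throws Exception {
        int concurrentUsers = 25;            // vary this across runs: 1, 5, 25, 50, ...
        final int goalsPerUser = 20;         // goals each simulated user completes
        final int[] completed = new int[1];  // shared success counter

        long start = System.currentTimeMillis();
        Thread[] users = new Thread[concurrentUsers];
        for (int i = 0; i < concurrentUsers; i++) {
            users[i] = new Thread(new Runnable() {
                public void run() {
                    DataEntryClerkAgent agent = new DataEntryClerkAgent();
                    for (int g = 0; g < goalsPerUser; g++) {
                        try {
                            if (agent.runGoal()) {
                                synchronized (completed) { completed[0]++; }
                            }
                        } catch (Exception e) { /* count as a failed goal */ }
                    }
                }
            });
            users[i].start();
        }
        for (int i = 0; i < concurrentUsers; i++) users[i].join();

        double seconds = (System.currentTimeMillis() - start) / 1000.0;
        System.out.println(concurrentUsers + " users: " + (completed[0] / seconds) + " goals/second");
    }
}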

Java Testing and Design shows why building archetypes is important to a business, how to go about doing it, and then how to repurpose them among developers, QA technicians, and IT managers.

-Frank
 
Ranch Hand
Posts: 63
Hi Frank,

Your sig is broken - try putting a space between your link and the comma. The book sounds interesting, BTW. I'd like to flip through it before buying, but my local B&N doesn't have a copy. Bummer.

Josh
 
Corey McGlone
Ranch Hand
Posts: 3271
Frank,

Would I be correct in saying that you generally identify these "archetypes" during system requirements analysis? During that period, you're probably asking what the system needs to do and, in order to determine that, you're going to want to know who will use the system and in what ways. It would seem to me that these archetypes would naturally fall out of that process.

At that point, you could use the archetypes all the way through the development process to design the system in the beginning and test the system in the end.

Corey
 
Frank Cohen
author
Posts: 18

At that point, you could use the archetypes all the way through the development process to design the system in the beginning and test the system in the end.



Yes, archetypes help the process from start to finish. Alan Cooper introduced me to Goal-Directed Software Design back in 1997 when I was the principal architect of the Sun Community Server. His mantra is to tell software developers to stop thinking about solving every possible use of their software and instead to concentrate on solving the needs of 4-5 users. I found that the technique could be extended to software testing, and so I coined the phrase User Goal Oriented Testing (UGOT).

I was sitting in a meeting of two companies that were considering partnering with one another. The more established company asked how they could be assured that the service would always be available and offer high quality. I described to them the 5 user archetypes. They had built their application under a "whatever-is-needed-now" philosophy, so having them talk about the user archetypes opened up a whole new level of considerations that pertained to both the ongoing maintenance of their software and the method to test the service. Defining archetypes was a "win" all around.

The really cool thing about using archetypes today is the new interest software developers have in unit testing, and how those tests can implement the archetypes' behavior and be shared to do scalability, load, regression, and quality-of-service monitoring testing. This is the start of a new "golden age" of testing.

-Frank
 
J. B. Rainsberger
author
Posts: 87
First, Frank, let me thank you for taking so much time to respond to my question. Let me see what there is to say here...


1) Agile stresses testing-first where a unit/functional test is built before and engineer writes the application software. UGOT says "test first" is a good idea but it is optional. UGOT is not nearly as militant as Agile sometimes comes across.



Do please note that not all Agile Software Development approaches advocate "Testing First". Some, such as Scrum, say little about technical practices, while others, such as FDD, advocate continual testing without recommending that tests precede code. It is XP in particular that essentially requires it (although I prefer to say that XP recommends it strongly).

That said, I can't argue with the perception that Agile can come across as militant with respect to Testing First. Part of that is that XP is the most well-known Agile school, and as a result it is easy to believe that Agile == XP.


2) UGOT provides a decision maker with actionable knowledge to decide when a service is ready for production. UGOT provides:
- Functional test to check for regression and successful operation.
- Scalability testing for capacity planning. Scalability is stated as a function of throughput measured at the consumer/client in requests-per-second as the number of concurrent consumers/clients increases. This shows that the system has enough capacity to serve forecasted levels of users. (See http://www.pushtotest.com/Docs/howto/onscalability2.html to see how I have been plotting this for our customers. I'm open to feedback/criticism.)
- Performance testing to make sure the service meets user expectations for response times. A 3 tall mocha latte's a day person isn't going to wait more than 5 seconds for their email client to check for new messages.



Customer Testing certainly addresses the first point, by encouraging stakeholders to describe features by means of examples we turn into tests.

Regarding performance and scalability testing -- and I love that you know and emphasize the difference between the two -- XP advocates doing both as part of both Programmer and Customer Testing. In fact, the XP practice "Ten Minute Build" indirectly addresses the performance issue, because if the tests run slowly, then perhaps the production code runs slowly. XP also advocates turning performance and scalability requirements into User Stories, from which tests must follow to ensure those requirements are met.


3) Bret described "coaching tests, a way of thinking about Acceptance tests" that turn user stories into tests. UGOT identifies archetypal users by their behavior when using the service. By understanding the archetypal users behavior, we can then write test agent code that drives the service, as the archetypal user will.



Yes, Coaching Tests, or Specification by Example, or Example-Driven Development, or whatever we decide to call it, is an integral part of the way Customer Testing is practiced among XP practitioners. As for the archetypal user technique, I have just attended a workshop at IBM's CASCon that discussed the intersection of Agile and User-Centered Design. As you know, using Personas is a well-known UCD technique, and as the Agile and UCD communities learn to work together, UCD techniques will become a more explicit part of the way Agile is practiced.


Java Testing and Design shows the reason why building archetypes is important to a business, how to go about doing it, and then how to repurpose them between developers, QA technicians, and IT managers.



Do you make specific references to this as a UCD technique, or even to other UCD techniques? (Your reference to Alan Cooper here suggests that you do.)

Thanks again for your patience in responding. I look forward to the book.
[ October 07, 2004: Message edited by: J. B. Rainsberger ]
 
Alexandru Popescu
Ranch Hand
Posts: 995
I would like to say that this thread is by far one of the most interesting reads on testing I have come across. I would like to add and ask a few more things.


When you make a request to the service, it's functionality and speed-to-perform-the-request depends on what other requests are being served concurrently. Black-box testing is fine to achieve a simple regression test to make sure the box still performs functions correctly, but you cannot take the response and extrapolate that the functional will be correct for subsequent requests.



I do agree that unit testing does not cover/guarantee the overall success (as in optimization problems: the sum of local maxima doesn't give the global maximum). But I am thinking that targeting only the general testing will not guarantee the local testing. (I am not sure I've expressed what I am thinking well.) Even if the general result is good, that doesn't assure that the intermediate results are correct, so the general result may be a coincidence. In conclusion, I think both methods are needed.
Another thing that raises questions is whether testing includes/supposes performance analysis. Until now, I have used a different approach: testing the correctness of the solution and doing the performance analysis afterwards. I believe these steps represent different points in the development lifecycle. Doing performance analysis before being sure about the correctness of the system doesn't really help. Once you are sure about the correctness of the solution, you can identify the hot spots of the problem and focus on performance issues.

Java Testing and Design puts forth User Goal Oriented Testing (UGOT.) UGOT is a testing methodology that models the behavior of archetypal users of a service.



This makes me think that the design of tests is driven by full use cases. While I agree that the overall goal of the system is to offer a solution for the use cases (and so these represent the real tests), designing tests only at this level may be unproductive in a way. For small projects it may seem a good solution, but for projects in evolving domains it may mean unnecessary work. I believe that designing fine-grained tests and grouping them according to the use cases serves long-lived/evolving systems better.
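To show what I mean, here is a rough JUnit sketch (the class and method names are invented examples, not from any of the projects discussed here): the tests stay fine-grained, one small rule each, and a suite named after the use case merely groups them.

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

// Fine-grained tests: each method checks one small rule of the system on its own.
public class WithholdingCalculationTest extends TestCase {
    public void testSingleDependentReducesWithholding() {
        // ... assert on one small, isolated rule ...
    }
    public void testZeroDependentsUsesBaseRate() {
        // ... another small rule, tested by itself ...
    }
}

// Grouping by use case: the use-case level "test" is just a composition of the
// fine-grained tests, so it evolves with them instead of duplicating them.
class RunPayrollUseCaseSuite {
    public static Test suite() {
        TestSuite suite = new TestSuite("Use case: run the biweekly payroll");
        suite.addTestSuite(WithholdingCalculationTest.class);
        // suite.addTestSuite(PayrollApprovalTest.class); // more fine-grained tests for the same use case
        return suite;
    }
}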

I thank you for taking the time to read the above. I hope I didn't get it completely wrong.
 