
How to design using unit tests

 
John Vorwald
Ranch Hand
Posts: 139
I'm not very good at applying unit testing early in the process: I tend to write software from an algorithm/procedural point of view, and then write the unit tests after the software to check that the results and state modifications are correct. But I want to get better at using unit testing earlier.

I need to write a class to solve a problem of optimizing a model using a tree structure. Part of the problem is storing information in the tree, such as model configuration, results, and ranking. The ranking is used to determine which branches of the tree, or models, are allowed to propagate to the next generation. Every generation of the tree performs a finite list of perturbations on the model structure.

Another part of the problem is the two interfaces. One is with a user interface, which allows the next generation to start and provides feedback on progress. The second is with the class that evaluates the model to produce the results/scores used for ranking.

Finally, evaluating a single model takes some time, on the order of half an hour, so evaluating one generation requires (N models propagating) times (N model perturbations) evaluations, which is on the order of 200 model variations, or roughly 100 hours. It is desirable that the models be evaluated concurrently to reduce the time required per generation. Ideally, the models could even be evaluated on different computers, but I don't know how to do that...
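The concurrent-evaluation part can at least be sketched on a single machine with a standard thread pool; a minimal illustration (all names here are hypothetical, and the half-hour evaluation is faked with a trivial stand-in) might look like this. Distributing across computers would need something extra, such as RMI or a job queue:

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: evaluate one generation's model variations concurrently with a
// fixed-size thread pool. evaluate() is a stand-in for the real half-hour run.
public class GenerationEvaluator {
    public static void main(String[] args) throws Exception {
        List<String> variations = List.of("model-A", "model-B", "model-C");
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Submit every variation; each runs on a pool thread in parallel.
        List<Future<Double>> scores = new ArrayList<>();
        for (String model : variations) {
            scores.add(pool.submit(() -> evaluate(model)));
        }
        // Collect scores; get() blocks until that evaluation finishes.
        for (int i = 0; i < variations.size(); i++) {
            System.out.println(variations.get(i) + " -> " + scores.get(i).get());
        }
        pool.shutdown();
    }

    // Hypothetical stand-in for the real model evaluation.
    static double evaluate(String model) {
        return model.length() * 10.0;
    }
}
```

With `newFixedThreadPool(4)`, four half-hour evaluations run at once, so a 200-variation generation would drop from ~100 hours to ~25 on a 4-core box.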

OK, I'm looking for suggestions on how to write a set of unit tests that the code needs to pass before writing the code. Otherwise, I will work on getting the code to function first, and afterwards write the unit tests to monitor results and state changes.
 
Winston Gutkowski
Bartender
Posts: 10575
John Vorwald wrote:OK, I'm looking for suggestions on how to write a set of unit tests that the code needs to pass before writing the code. Otherwise, I will work on getting the code to function first, and afterwards write the unit tests to monitor results and state changes.

Testing is a bit of a black art, and good testers are, in my opinion, grossly undervalued and underpaid. I freely admit to not knowing enough about it myself, but from the little I do know I understand that there are three basic types of test outside the exhaustive:
Smoke tests - where you pick specific combinations of values that you know should pass or fail (these will usually have something to do with the rules of the system being tested; and are often chosen in pairs: one that passes and one that fails) and ensure that they behave as expected.
Random tests - where you pick values at random and ensure that they behave as expected.
Boundary tests - where you analyse the limits of possible values that can be supplied and test your code on either side of those limits. These are also often called 'corner' or 'edge' tests, and there may be many combinations of them, but the idea is that the test must pass inside the boundaries and fail outside in all cases.
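The boundary idea in particular is easy to show in miniature. Here is a sketch (the validator and its 0..100 rule are invented for illustration, not from the thread) where each limit is tested on both sides:

```java
// Boundary-test sketch for a hypothetical validator: a percentage is
// valid if and only if 0 <= p <= 100, so we probe both sides of each limit.
public class BoundaryTestSketch {
    static boolean isValidPercentage(int p) {
        return p >= 0 && p <= 100;
    }

    public static void main(String[] args) {
        // Just inside and just outside the lower boundary...
        if (!isValidPercentage(0))   throw new AssertionError("0 should pass");
        if (isValidPercentage(-1))   throw new AssertionError("-1 should fail");
        // ...and the same pair around the upper boundary.
        if (!isValidPercentage(100)) throw new AssertionError("100 should pass");
        if (isValidPercentage(101))  throw new AssertionError("101 should fail");
        System.out.println("boundary tests passed");
    }
}
```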

If need be (and once you've got past the first three), you may also want to add some performance tests to make sure that the code is working as close to optimally as possible without impairing readability or good coding practice. Obviously, for something like this, you'll have to have some idea of what "optimally" means: e.g., whether the function should run in O(n) or O(log n) (or possibly even O(1)) time.

The above is just a very basic guide and probably misses out a whole raft of stuff. There's also quite a lot to know about the order in which you test, so as to avoid wasting time.

I'd also make sure you have a decent testing framework. I use JUnit myself, but I'm sure there are many others out there. Certainly, it should be able to store tests for re-use; and you may even want to keep some sort of test diary.

Unfortunately, beyond getting a good book on the subject (and you may well get some recommendations from the chaps here; otherwise Google is your friend), I can't offer much more advice.

But good luck.

Winston
 
John Vorwald
Ranch Hand
Posts: 139
Those are some interesting, useful thoughts, Winston: having categories of test based on dimensionality/space coverage and/or timing. I typically generate smoke or random tests, and haven't given much thought to boundary limits. I also haven't written tests for timing, but when I write the computationally intensive portions of the code, I keep different implementations with timing built in... Timing is hard because the issue may not be at a lower level, like solving some portion of the problem, but at a higher level, like solving it in a minimal number of attempts.

Last week, I couldn't figure out how to write a test case for a class, so I ran the class and stored the results, to use as expected results in future tests. That at least serves to catch regressions.
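That stored-results approach is sometimes called golden-master or characterization testing. A minimal sketch (the `rankScores` function and its baseline values are invented for illustration) is simply pinning a verified output as a constant:

```java
import java.util.Arrays;

// Golden-master sketch: "baseline" was captured from an earlier, trusted run
// of the code under test; the test now guards against the output drifting.
public class GoldenMasterSketch {
    // Hypothetical code under test: orders a set of model scores.
    static int[] rankScores(int[] scores) {
        int[] sorted = scores.clone();
        Arrays.sort(sorted);
        return sorted;
    }

    public static void main(String[] args) {
        int[] baseline = {1, 3, 5, 9};  // stored output from the verified run
        int[] actual = rankScores(new int[]{9, 3, 5, 1});
        if (!Arrays.equals(baseline, actual))
            throw new AssertionError("regression: output differs from stored baseline");
        System.out.println("output matches stored baseline");
    }
}
```

This doesn't prove the original run was correct, but it does flag any later change in behavior.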

For the current class, I'm trying to write the test case first, but I'm struggling to come up with reference results. While I can work some out by hand or in Excel, I'm trying to code a reference solution into the test suite. I can see it will be somewhat trivial, but useful, to take the reference solver from the test code and use it in the class being developed.

It just seems that more of my test cases are looking like re-implementations of the class under test, just to have results for comparison.
 
Winston Gutkowski
Bartender
Posts: 10575
John Vorwald wrote:It just seems that more of my test cases are looking like re-implementation of the class under test, in order to have results for comparison.

Yes, it often seems like that, doesn't it?

One thing you can do is use it as an exercise in modularity: make sure that, as far as possible, your code is written so that what it should be doing (the test) is what it actually does, so that both are, in fact, largely running the same code. Indeed, a good test suite can often point out places where code should be broken down a bit further (or maybe slightly differently) than it currently is.
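One common way to make the "re-implementation" pattern pay off is to keep the naive version as an explicit reference and check the real version against it on random inputs. A sketch, with an invented `maxScore` computation standing in for the real ranking logic:

```java
import java.util.Random;

// Sketch: an obviously-correct reference implementation is kept in the test
// suite and compared against the optimized version on many random inputs.
public class ReferenceVsOptimized {
    // Reference version: trivially correct, no cleverness.
    static int maxScoreReference(int[] scores) {
        int best = scores[0];
        for (int s : scores) if (s > best) best = s;
        return best;
    }

    // Version under test (here just a stand-in for something faster/fancier).
    static int maxScoreFast(int[] scores) {
        return java.util.Arrays.stream(scores).max().getAsInt();
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);  // fixed seed, so failures are reproducible
        for (int trial = 0; trial < 100; trial++) {
            int[] data = rnd.ints(20, -1000, 1000).toArray();
            if (maxScoreReference(data) != maxScoreFast(data))
                throw new AssertionError("implementations disagree on trial " + trial);
        }
        System.out.println("100 random trials agree");
    }
}
```

The duplication is then deliberate: the reference encodes the specification, the other version encodes the engineering.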

However, as I say, I'm no expert, and this stuff is more in the realm of TDD, which I haven't had a lot of exposure to. I expect there are others here that have though.

Winston
 
Matthew Brown
Bartender
Posts: 4568
I'm not a testing expert either, but what I'd be trying to do is break the problem down into smaller steps, and write unit tests for each of those steps. If you can break it down enough, the tests should be simple enough that you can work out what they ought to be doing. And you'd have the added advantage that if a test fails, you've got a much better idea of where the problem is.
 
Brian Burress
Ranch Hand
Posts: 131
John,

You are asking a question on a topic that is very near and dear to my heart. I also have to concur with Winston's comments about testing being undervalued. Everyone in IT is underpaid, so I will not make a distinction for testers there ;)

I'll assume you are using JUnit as your test framework, so I'll suggest the approach I have been following for at least two years. It is more or less adapted from Test-Driven Development and Agile concepts.

Use the JUnit class(es) to document your requirements. If you have a requirements document, refer to it and reference it in the JUnit Javadocs. If you do not have a requirements document, I suggest using the JUnit Javadocs as your requirements. If you are fortunate enough to have a team that can think outside the box, using the JUnit Javadocs as the requirements doc eliminates maintaining a formal requirements document, which can remove some overhead.

Minimally, there should be one test method per requirement. If you want to keep tests very concise, break the tests up according to the various paths implied by the requirement. Be very descriptive in the method Javadoc: state the purpose of the test, the inputs/data values to use, and the expected results. Code each unwritten test with a "fail" statement and verbiage like "not implemented", so that you can execute the tests, get an accurate feel for development progress from the pass/fail ratio, and quickly see which failures are due to tests not yet being written versus the application not being written (or not working).
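That fail-first convention can be sketched in plain Java (with JUnit you would use `@Test` methods and `fail("not implemented")` instead; the requirements and test names below are invented for John's tree-optimization problem):

```java
// Sketch of one-test-method-per-requirement with "not implemented" failures.
// A tiny runner stands in for JUnit's pass/fail report.
public class RequirementTests {
    /** REQ-1: ranking must propagate only the top-scoring models. (Implemented.) */
    static void testTopModelPropagation() {
        // Real assertions against the ranking code would go here.
    }

    /** REQ-2: each generation applies every perturbation exactly once. (Not started.) */
    static void testPerturbationCoverage() {
        throw new AssertionError("not implemented");
    }

    public static void main(String[] args) {
        Runnable[] tests = { RequirementTests::testTopModelPropagation,
                             RequirementTests::testPerturbationCoverage };
        int passed = 0, failed = 0;
        for (Runnable t : tests) {
            try { t.run(); passed++; }
            catch (AssertionError e) { failed++; System.out.println("FAIL: " + e.getMessage()); }
        }
        System.out.println(passed + " passed, " + failed + " failed");
    }
}
```

Running this reports "1 passed, 1 failed", so the failure list doubles as a progress report, just as Brian describes.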

Another aspect to consider is that the requirements based approach will be looking at the application more from a user point of view. If you are writing a component based solution, then the tests should definitely work to validate that the integrated components accurately solve the problem/requirements.

If you are writing components, the above still applies, but you need to write requirements for the behavior of the component and apply the same approach. At some point you will shift from writing requirements from a "user view" to writing them from a more technical view, in that your component may do more than is required for the immediate project.

You can continue breaking the solution down to the class level, for what I'll call "unit" tests for a class. I have generally stopped the formal documentation at this point; while the unit tests are part of an automated solution, I look at the requirements-based tests essentially as the set of acceptance tests showing that the code works as needed.

Organize your JUnit classes as you see appropriate. For the initial review, a few classes may be enough. As you schedule the work and determine release strategies, you will likely want to break the tests up into classes and/or packages organized by release, so you have easier visibility into the progress of a release. For tests "far" in the future, no one needs to be concerned if they fail, but you do get a parking lot or to-do list as a by-product, which allows developers with free time to work ahead.

I think you also touched on a point earlier, which is that the application/components need to be organized in a way that facilitates testing. This is true to some extent, but if you approach the solution architecture in an object-oriented way and have provisions for swapping the implementation through configuration, then you have natural points to plug in different options for easier testing (API test stubs, default data, etc.).
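For John's problem, the natural swap point is the model evaluator: code against an interface, and tests inject a fast stub instead of the half-hour real run. A sketch, with all names invented for illustration:

```java
// Sketch: the optimizer depends on an interface, so a test can inject a
// deterministic stub in place of the slow real evaluation.
public class StubInjectionSketch {
    interface ModelEvaluator {
        double evaluate(String modelConfig);  // real version would take ~30 min
    }

    static class StubEvaluator implements ModelEvaluator {
        public double evaluate(String modelConfig) {
            return modelConfig.length();      // instant, predictable fake score
        }
    }

    // Code under test: picks the best-scoring model configuration.
    static String bestOf(String[] configs, ModelEvaluator eval) {
        String best = configs[0];
        for (String c : configs)
            if (eval.evaluate(c) > eval.evaluate(best)) best = c;
        return best;
    }

    public static void main(String[] args) {
        // With the stub, the "best" model is simply the longest string.
        String best = bestOf(new String[]{"ab", "abcd", "abc"}, new StubEvaluator());
        if (!best.equals("abcd")) throw new AssertionError("expected abcd, got " + best);
        System.out.println("selection logic verified with stub: " + best);
    }
}
```

The selection logic gets exercised in milliseconds; only a separate, slower suite needs the real evaluator.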

I am glossing over some points of the process, such as needing to have your build processes set up to execute everything. At the risk of stating the obvious, here too you have a lot of flexibility. You can go for an end-to-end automated solution that scripts pulling the code base from version control, building the app, deploying it, running the tests, and producing a "versioned" set of results, so you can look at the build history and see when a certain test "broke", etc. Or you could keep it as simple as executing things manually when you need them run.

I'll stop the diatribe at the moment. Hopefully this helps give some ideas you can consider adopting for your situation. If there are some aspects that you would like to explore further, just shout.

 