Unit Tests, when and how often?

 
Omar Sharaki
Greenhorn
Posts: 14
Hi all,

I'm the test manager on my university project where we're developing a small software product in Java. I'm currently in the middle of writing up a test document where I'm detailing our test strategy and so on.
My big question right now, however, is WHEN. When are tests supposed to be run? We're not doing TDD, which means tests will be written once some code has already been written. Taking unit tests as an example, how often should they be run? Should a unit test for a component (say, a method) be run as soon as that component is implemented, or should I wait for the whole class to be implemented before running the test cases for its components? I'd also be curious how this applies to integration tests, but for simplicity's sake let's focus on unit tests for now.
Looking forward to hearing your answers!
 
Junilu Lacar
Sheriff
Posts: 13701
Ideally, as often as you can: on every check-in of code into version control, with a dedicated automated process watching the repository. Continuous integration (CI) tools like Jenkins do exactly that. Developers should be encouraged to check their code in as often as possible; on the most disciplined teams, that means every few minutes, after they've run all their unit tests and seen them still pass.

Your approach smells of the attitude of "Tests are a way to verify correctness of the program."  Note, however, that a great computer scientist, Edsger W. Dijkstra, once wrote that "Program testing can only show the presence of bugs, but never their absence."

If you keep that in mind and take it to heart, you'll find yourself getting pushed towards the attitude needed to do Test-Driven Development. People are seldom ready or able to do TDD without proper training and practice, however, so you're between a rock and a hard place.

I'm not saying TDD is the only way to use unit tests effectively, but I can say from experience that it's a far better way to get the benefits of unit tests than writing and running them after the fact, as many traditional software development shops still do.
 
Junilu Lacar
Sheriff
Posts: 13701
Also, when you tell developers to write and run unit tests AFTER they have written the code, the tests tend to be written with a bias toward making them pass. That is diametrically opposite to the bias needed to write good, meaningful tests. Sorry if I keep jumping to TDD when you asked to focus on unit testing, but to me the two are very tightly bound together in a good development process. Writing a test with the goal of seeing it fail first puts your mind in the right frame of reference: it makes you ask, "How can I write this test so that it fails if something is wrong with the code?"

With after-the-fact testing, developers often just want to get this "requirement from management" over and done with so they can move on to their next programming task. So, their mindset is, "How can I write this test so that it passes and I can move on to the other things I need to do?"
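
To make the two mindsets concrete, here's a small JUnit 4 sketch. The PriceCalculator class and its 10%-discount rule are invented just for illustration; the point is the difference between a test that merely re-checks the happy path and one written to expose a likely mistake.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical class under test: a 10% discount on orders of 100.00 or more.
    static class PriceCalculator {
        double priceFor(double orderTotal) {
            return orderTotal >= 100.0 ? orderTotal * 0.9 : orderTotal;
        }
    }

    // Written "to pass": it re-checks the one case the developer already tried
    // by hand, so it says very little about whether the code is actually right.
    @Test
    public void discountIsApplied() {
        assertEquals(90.0, new PriceCalculator().priceFor(100.0), 0.001);
    }

    // Written to fail if something is wrong: it pins down the boundary where an
    // off-by-one mistake (> instead of >=) would actually show up.
    @Test
    public void noDiscountJustBelowTheThreshold() {
        assertEquals(99.99, new PriceCalculator().priceFor(99.99), 0.001);
    }
}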
 
Omar Sharaki
Greenhorn
Posts: 14
Thanks for the insights, guys. It's not the first time I've heard the benefits of TDD mentioned. However, given our lack of time, the absence of an in-depth specification, and most of all our lack of experience, I don't think TDD is really an option for us.

After speaking with our chief programmer, I've decided that unit tests will be run for a class's components whenever a developer pushes their version to the server. The test results will then be returned to the developer, who, if any tests fail, will make the necessary adjustments and submit the new version to be re-tested. Does that sound sensible? Furthermore, I'm wondering how often I should get the developers to review their code. One idea would be once, right before each release, including releases that happen after the code is adjusted following a failed test.
 
Junilu Lacar
Sheriff
Posts: 13701

Omar Sharaki wrote:Should a unit test for a component (say, a method) be run as soon as that component is implemented, or should I wait for the whole class to be implemented before running the test cases for its components?


This is a great question, and it at least hints at some awareness of what TDD brings to the table. Designs have to be fluid. Programmers seldom write perfect code right off the bat. There's always something wrong, something they didn't foresee, that they couldn't have foreseen. I often tell the programmers I work with, "Admit it, you're just programmers! You're not a bunch of fortune tellers who can see the future and correct mistakes before you make them. You have to make mistakes, see them, see what problems they create, and then do something to fix those problems." You can't tell whether you've messed up until you run your tests and see them fail (remember what E. W. Dijkstra said?). Fancy UML diagrams on the whiteboard, or worse, in Visio, won't generate compiler errors or fail JUnit tests. Only code that you have written (or haven't yet written) can make a JUnit test fail.

So, what's the best way to validate your detailed designs and see what kind of gaps you're missing? Writing tests that serve as examples of how your code is supposed to be used! Do the tests make good examples? Do the tests suck because your code is difficult to use? Do the tests make sense or do they make no sense at all because your code's API makes no sense?

These are the kinds of questions, and the conversations they drive, that writing tests up front and often can give you. Those conversations very seldom happen when a developer just sits down and "bangs out code" until they proudly announce that they're done.
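
As a rough sketch of what "tests as examples" can look like (the ShoppingCart and Item classes here are hypothetical, included only so the example is self-contained):

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class ShoppingCartTest {

    // Minimal hypothetical API, just enough to make the example compile.
    static class Item {
        final String name;
        final double price;
        Item(String name, double price) { this.name = name; this.price = price; }
    }

    static class ShoppingCart {
        private final List<Item> items = new ArrayList<>();
        void add(Item item) { items.add(item); }
        double total() { return items.stream().mapToDouble(i -> i.price).sum(); }
    }

    // The test reads like documentation: create a cart, add items, ask for the
    // total. If a test like this is awkward to write, the API is probably awkward.
    @Test
    public void totalReflectsEveryItemAdded() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(new Item("book", 12.50));
        cart.add(new Item("pen", 2.00));
        assertEquals(14.50, cart.total(), 0.001);
    }
}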
 
Omar Sharaki
Greenhorn
Posts: 14
I feel that you're still trying to get me to lean toward TDD. Am I right? Also, do you have any comments on my review suggestion in the previous post?
 
Junilu Lacar
Sheriff
Posts: 13701

Omar Sharaki wrote:Thanks for the insights, guys. It's not the first time I've heard the benefits of TDD mentioned. However, given our lack of time, the absence of an in-depth specification, and most of all our lack of experience, I don't think TDD is really an option for us.


Yes, that's the common reason for not doing TDD. It's also what keeps you from getting anywhere closer to being able to do it. Ah well.

Omar Sharaki wrote:After speaking with our chief programmer, I've decided that unit tests will be run for a class's components whenever a developer pushes their version to the server. The test results will then be returned to the developer, who, if any tests fail, will make the necessary adjustments and submit the new version to be re-tested. Does that sound sensible? Furthermore, I'm wondering how often I should get the developers to review their code. One idea would be once, right before each release, including releases that happen after the code is adjusted following a failed test.


Well, the more often the better. A failing test is a signal to do a code/design review. If you only do a code review right before a release, then what's the point? What are you going to do if you find a problem, cancel the release? The code review basically becomes a formality, a rubber-stamping session to give the higher-ups a (false) sense of security that their development team has done its "due diligence".

Not to keep pushing you toward TDD, since that seems to be off the table, but I want to help you understand what you're missing: with TDD, you're looking to have a failing test every 2-3 minutes. That's for skilled and experienced TDD practitioners; for less experienced ones, a 5-10 minute interval is ambitious but achievable. Imagine how many problems in your design you can head off early by reviewing the code every 5-10 minutes. Believe me, there are design problems that do not make themselves obvious in nice UML diagrams or in Word documents that describe a system's design in excruciating detail. Only running code and failing tests will show you the real flaws in your code's design.

So, if you wait until just before each release to do code reviews, you're raising the risk of problems slipping past you and getting into production. I'd bet that's what you'll see with the process you have in mind. Even with TDD, some teams still see problems slip by. In my experience, however, those problems are far fewer and less consequential than the ones that slip by in development processes with much longer feedback cycles, like the one you're considering.
 
Giovanni Montano
Ranch Hand
Posts: 497

Junilu Lacar wrote:Also, when you tell developers to write and run unit tests AFTER they have written the code, the tests tend to be written with a bias toward making them pass. That is diametrically opposite to the bias needed to write good, meaningful tests. Sorry if I keep jumping to TDD when you asked to focus on unit testing, but to me the two are very tightly bound together in a good development process. Writing a test with the goal of seeing it fail first puts your mind in the right frame of reference: it makes you ask, "How can I write this test so that it fails if something is wrong with the code?"

With after-the-fact testing, developers often just want to get this "requirement from management" over and done with so they can move on to their next programming task. So, their mindset is, "How can I write this test so that it passes and I can move on to the other things I need to do?"


Well, for the sake of discussion, couldn't it be argued that when you're faced with unreadable legacy code, you would want to write JUnit tests precisely to understand better what the code does?
 
Giovanni Montano
Ranch Hand
Posts: 497

Junilu Lacar wrote:Imagine how many problems in your design you can head off early by reviewing the code every 5-10 minutes


Wow, this is convincing indeed. Junilu, regarding the "how often", could you please share something about refactoring and JUnit tests? I have a big ball of mud, full of dependencies instantiated with the new keyword. I'm thinking of mocking and eliminating most of them with Dagger and factory patterns, but should the existing JUnit tests still pass once I refactor, or will I have to change them manually because they refer to other classes?
If that's the scenario, I'm wondering what people do when they have hundreds of tests that fail just because one class changed. Do you have any recommendations? I also thought of creating a Git branch just for the refactoring; is that a good practice? My plan is to merge the branch only if it works well, committing each successive step to the branch rather than to master, so that I can run git reset --hard HEAD whenever I start to stage mistakes.
Please advise me. I bought the first three FANTASTIC books you recommended, but I still need some orientation. Thank you.
 
Junilu Lacar
Sheriff
Posts: 13701
Giovanni, sorry for the late response, I just noticed this.

I don't know if you got it, too, but "Working Effectively with Legacy Code" by Michael Feathers is the most important book you'll have within reach when attempting to refactor legacy code, which the book defines as "any code that has no unit tests," including code that you may have written just a few seconds ago.

Another symptom of code that is too tightly coupled is tests that break when the design is refactored. Ideally, tests should depend on the production code's API, not its internals. Mocking has a tendency to tie your tests to the internal implementation; in that sense, excessive mocking can itself be a code smell. However, mock objects can play an essential role in teasing apart and loosening a dependency. They can act as a "shim" or wedge between two tightly coupled classes that you want to make less so. You just have to expect a few breaking tests to fix every time you drive the wedge in deeper and separate the two dependent classes further.

So, in refactoring a big ball of mud, you need to prepare yourself to expect things to get worse before they get better. Be careful about excessive mocking and especially watch out for mocks that know too much about the internal workings of a method. Prefer using mocks to simulate interactions at the class API level.
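
A rough sketch of what mocking at the class API level can look like, using Mockito. The ReportService and InvoiceRepository names are made up, and the interface is assumed to have already been extracted so the dependency can be passed in rather than created with new:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class ReportServiceTest {

    // Hypothetical collaborator that used to be hard-wired with 'new'.
    interface InvoiceRepository {
        List<Double> amountsFor(String customerId);
    }

    // After extracting the interface, the dependency is injected, so a mock can
    // act as the "shim" between the two formerly tightly coupled classes.
    static class ReportService {
        private final InvoiceRepository repository;
        ReportService(InvoiceRepository repository) { this.repository = repository; }
        double totalFor(String customerId) {
            return repository.amountsFor(customerId).stream()
                    .mapToDouble(Double::doubleValue).sum();
        }
    }

    // The mock stands in at the API boundary; the test knows nothing about how
    // the real repository fetches its data, so refactoring its internals won't
    // break this test.
    @Test
    public void totalIsSumOfInvoiceAmounts() {
        InvoiceRepository repository = mock(InvoiceRepository.class);
        when(repository.amountsFor("42")).thenReturn(Arrays.asList(10.0, 2.5));

        assertEquals(12.5, new ReportService(repository).totalFor("42"), 0.001);
    }
}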

Hope this helps.
 
Giovanni Montano
Ranch Hand
Posts: 497

Junilu Lacar wrote:Giovanni, sorry for the late response, I just noticed this.

I don't know if you got it, too, but "Working Effectively with Legacy Code" by Michael Feathers is the most important book you'll have within reach when attempting to refactor legacy code, which the book defines as "any code that has no unit tests," including code that you may have written just a few seconds ago.

Another symptom of code that is too tightly coupled is tests that break when the design is refactored. Ideally, tests should depend on the production code's API, not its internals. Mocking has a tendency to tie your tests to the internal implementation; in that sense, excessive mocking can itself be a code smell. However, mock objects can play an essential role in teasing apart and loosening a dependency. They can act as a "shim" or wedge between two tightly coupled classes that you want to make less so. You just have to expect a few breaking tests to fix every time you drive the wedge in deeper and separate the two dependent classes further.

So, in refactoring a big ball of mud, you need to prepare yourself to expect things to get worse before they get better. Be careful about excessive mocking and especially watch out for mocks that know too much about the internal workings of a method. Prefer using mocks to simulate interactions at the class API level.

Hope this helps.


No, don't be sorry, please. Your previous replies have already been a game changer for me. I've almost finished reading Clean Code, and now I understand why the lead developer said my code was not good.
I also have Working Effectively with Legacy Code and The Four Rules of Simple Design; so far I've only gone through the index of both of them.

As far as I know, I now have to get into the Law of Demeter, which looks like an esoteric statement but is a really valuable guide for writing testable code. Interesting that excessive mocking can be a smell; I intuitively see your point. If I always need to resort to mocks, it means I haven't worked enough on dependencies, separation of concerns, and all the SOLID principles.

I reckon I'm going to extract classes using the Singleton and Abstract Factory patterns. The JUnit tests I'm going to build shouldn't fail by definition, since they test the logic at a really granular level.

If I understood well, I could formulate our own law, the "Junilu index": given n refactorings, the closer the value of 1 - [(failed tests) * (number of Mockito tests or mocks)] / [(number of JUnit tests) * (number of refactoring actions done) * (coverage percentage) * 100] gets to 1, the better the tests are.
 
Anton Golovin
Ranch Hand
Posts: 531

Omar Sharaki wrote:Hi all,

I'm the test manager on my university project where we're developing a small software product in Java. I'm currently in the middle of writing up a test document where I'm detailing our test strategy and so on.
My big question right now, however, is WHEN. When are tests supposed to be run? We're not doing TDD, which means tests will be written once some code has already been written. Taking unit tests as an example, how often should they be run? Should a unit test for a component (say, a method) be run as soon as that component is implemented, or should I wait for the whole class to be implemented before running the test cases for its components? I'd also be curious how this applies to integration tests, but for simplicity's sake let's focus on unit tests for now.
Looking forward to hearing your answers!



Hi, Omar,

There are code coverage tools you should look into; they will tell you how much of your code is exercised by your unit tests. Some code is more important than other code, too, and that code definitely needs to be covered.

Best to write a unit test around the time a method is written (or even before if possible).

Set up a continuous integration server with your source control and have it run the unit tests on every check in and alert you via email if a run fails.

With best regards,

Anton.
 
Giovanni Montano
Ranch Hand
Posts: 497

Anton Golovin wrote:

There are code coverage tools you should look into; they will tell you how much of your code is exercised by your unit tests. Some code is more important than other code, too, and that code definitely needs to be covered.

Best to write a unit test around the time a method is written (or even before if possible).

Set up a continuous integration server with your source control and have it run the unit tests on every check in and alert you via email if a run fails.



Anton.


For coverage, I suggest JaCoCo: http://www.eclemma.org/jacoco/
For a CI server: https://jenkins.io/2.0/

But I would focus on JaCoCo first.
 