Optimizing Java: Question about testing in a noisy environment

 
Ranch Hand
Posts: 73
I hope you don't mind, but I'm going to be a little greedy today and ask a second question in addition to my previous post.

I've been working on some very CPU-intensive applications (image processing, computational geometry, etc.). One thing I've found in working on performance issues is that one little experiment can undo a vast amount of theorizing. So every time I look at a way to improve the performance of my code, I try to run it through realistic timing tests to see whether my ideas actually work. I've lost track of the number of times that some idea I was sure would improve performance turned out to do just the opposite.

The thing is, as valuable as timing tests are, Windows is a terribly noisy testing environment. I never seem to know when something is going to kick off in the background while I'm running a test. I try to compensate for that by running multiple repetitions of the same test (and discarding the initial runs, during which class loading and JIT compilation get involved). Even so, I routinely see timing values that bounce around by 20 or 30 percent (most of my tests take a few seconds to run). Does your book address ways of better isolating system performance issues when conducting timing tests?
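
Just to make it concrete, the sort of hand-rolled harness I mean looks roughly like the sketch below; the workload and the repetition counts are only stand-ins for my real image-processing code.

    // Rough sketch of the kind of hand-rolled timing test I run:
    // repeat the work, discard the first few runs as warm-up, and
    // keep a checksum so the work can't be optimized away.
    public class CrudeTimingHarness {

        public static void main(String[] args) {
            final int warmUpRuns = 5;     // discarded: class loading and JIT still settling
            final int measuredRuns = 20;  // kept for analysis

            long checksum = 0;
            for (int run = 0; run < warmUpRuns + measuredRuns; run++) {
                long start = System.nanoTime();
                checksum += doWork();
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                if (run >= warmUpRuns) {
                    System.out.println("run " + run + ": " + elapsedMs + " ms");
                }
            }
            System.out.println("checksum (ignored): " + checksum);
        }

        // Stand-in for the real CPU-intensive work.
        private static long doWork() {
            long acc = 0;
            for (int i = 0; i < 20_000_000; i++) {
                acc += (long) Math.sqrt(i);
            }
            return acc;
        }
    }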
 
author
Greenhorn
Posts: 14
Hi Gary,

If you are looking to test small amounts of code (outside the context of an application), you are heading down the path of writing a microbenchmark. There are many perils to doing this, especially if you are hand-rolling your own test harness. One example is that the Just In Time compiler kicks in at 10,000 iterations by default, so if you haven't got the right warm-up you won't see the true profile. There's also the issue, which I've seen many times, where the benchmarks people write are simply optimised away because the author does nothing with the result. You could log the results, but that's not a true performance measure.

A benchmark always gives you a number, the issue is being able to trust that number.
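
As a purely illustrative example, a harness like the following compiles, runs and prints a time, but the number is hard to trust: the result of the work is never used, so once the method is hot the JIT is free to treat the loop body as dead code.

    // Illustrative only: a naive microbenchmark whose result is discarded.
    public class MisleadingBenchmark {

        public static void main(String[] args) {
            long start = System.nanoTime();
            for (int i = 0; i < 10_000_000; i++) {
                compute(i); // return value never used - a candidate for dead-code elimination
            }
            long elapsed = System.nanoTime() - start;
            System.out.println("elapsed ns: " + elapsed); // a number, but not one you can trust
        }

        private static double compute(int i) {
            return Math.sin(i) * Math.cos(i);
        }
    }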

If you are going to do microbenchmarking you really should be using JMH, which is written by the people who develop the JVM itself. It will handle things like warm-up for you and prevent unwanted optimisations from happening in your benchmark (e.g. the test itself being removed).
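
To give a flavour of it - this is just a sketch, not a listing from the book - a minimal JMH benchmark looks something like this, with JMH handling the warm-up and the Blackhole preventing the result from being eliminated:

    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Fork;
    import org.openjdk.jmh.annotations.Measurement;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.annotations.Warmup;
    import org.openjdk.jmh.infra.Blackhole;

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 5)        // JMH runs and discards these for you
    @Measurement(iterations = 10)
    @Fork(1)
    @State(Scope.Benchmark)
    public class StringHashBenchmark {

        private final String input = "a representative input string";

        @Benchmark
        public void hashChars(Blackhole bh) {
            // Recompute a simple hash over the characters so there is real work
            // on every call, then hand it to the Blackhole so the JIT cannot
            // eliminate it as dead code.
            int h = 0;
            for (int i = 0; i < input.length(); i++) {
                h = 31 * h + input.charAt(i);
            }
            bh.consume(h);
        }
    }

You'd normally generate the project from the JMH Maven archetype and run the built benchmarks jar, rather than timing anything by hand.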

Even JMH, though, won't necessarily help you prove a speed-up in your target application, as the optimisations applied at runtime in the full application can be very different from the results obtained from JMH.

No real answers here, just some cautions though.

Jim
 
author
Ranch Hand
Posts: 51
There's a quote from Richard Feynman in the book:

The first principle is that you must not fool yourself – and you are the easiest person to fool.



One of the big traps in performance tuning is confirmation bias: we go looking for evidence of the causes we think are behind our problems, and we tend to cherry-pick data if we're not careful.

So, we need to take a step back, not second guess, and let the data speak. Part of that is recognising that we operate in noisy environments.

20-30% noise in an environment is, if anything, quite low.

Consider this bit of code:
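
Something along these lines - a rough sketch of the idea rather than the exact listing from the book:

    // Time hashing strings of increasing length and print the raw figures.
    public class HashTimingNoise {

        public static void main(String[] args) {
            long sink = 0; // keep a dependency on the results so the work isn't optimised away
            System.out.println("length,nanos");
            for (int length = 10_000; length <= 1_000_000; length += 10_000) {
                String s = buildString(length);
                long start = System.nanoTime();
                sink += s.hashCode(); // first call on a fresh String, so the hash is really computed
                long elapsed = System.nanoTime() - start;
                System.out.println(length + "," + elapsed);
            }
            System.out.println("# sink = " + sink);
        }

        private static String buildString(int length) {
            StringBuilder sb = new StringBuilder(length);
            for (int i = 0; i < length; i++) {
                sb.append((char) ('a' + (i % 26)));
            }
            return sb.toString();
        }
    }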



All it's really doing is showing that the time taken to hash a string is proportional to its length - that is, O(n) in the length of the string - but what it actually shows us is a measure of how noisy our execution environment is.

Grab the data points and analyse their distribution against string length, and you'll see how much random error there is - easily 20-30% on some measurements on most real systems, I bet.

My conclusion from this is that, unless we're squeezing out the last few glistening drops of performance, we have to concentrate on the big points first and take the noise as a fact of life. If a performance effect can't be made to rise above the noise, then we shouldn't regard it as a real effect - that's what Feynman would have done in physics, after all.
 
Gary W. Lucas
Ranch Hand
Posts: 73
Jim,

Thanks! I'd never heard of JMH, and I am very grateful for the suggestion.

It's funny you should mention the compiler optimizing away the test logic.  I've gotten burned by that a few times.  A test would do all sorts of interesting work, but because the application didn't collect the computation results, the optimizer just removed the test logic from the code... which made it run much faster, of course, but didn't provide me with much information. So I've gotten into the habit of making sure that my test collects (and prints) a data result even if it's just a dummy value I don't care about.

Gary
 