
Please welcome Jack Shirazi, author of "Java Performance Tuning"

Jack Shirazi's book "Java Performance Tuning" has recently been published by O'Reilly.
Jack has graciously offered to hang out here and talk a bit with us about performance and getting the most out of the Java VM.
My first question doesn't have as much to do with coding techniques as with the new 1.3 VM. I saw some benchmarks and the 1.3 VM was kicking butt on the 1.2.2 VM. What's up with that?
*chuckle* Choosing your benchmarks carefully so that your VM kicks butt is always good.
The 1.3 VM uses a HotSpot engine, whereas 1.2.2 uses a pure JIT. In case you missed what HotSpot technology means, the basic idea is that the VM profiles the code while it's running, then only generates native code for those bits of the app that are bottlenecked. The VM does this by running the app in interpreted mode with an internal profiler running at the same time. The app profile is constantly monitored, and if some code (a method or loop) stays too long at the top of the execution stack (the "hot spots" in the app), the VM generates native code for that method/loop and swaps the interpreted bytecode for the native code. In HotSpot 1.0, the VM had to wait until a method completed before the swap could happen, but in HotSpot 2.0 (which is the engine used in 1.3) the swap can happen while a method/loop is running.
Unlike the server-side HotSpot VMs (called HotSpot 1.0 and 2.0), the 1.3 VM is tuned for client-side running, which basically means "don't hang about as long before generating native code, and don't apply as many optimizations when the native code is generated, so that the code isn't held up as long." If you have a long-running process, you are probably better off using the server-side HotSpot, since the VM can take advantage of the longer running time.
The upshot is that the 1.3 VM effectively acts as if it has a low-level performance tuning expert running inside it. He can speed up the code in the bits that need speeding up the most, but he only ever applies a limited set of optimizations. The result is that some things run quite a bit faster - I've seen double the speed for some tasks.
On occasion the VM can get it wrong, but not often. However, 1.1.6 and later 1.1.x JIT VMs can outperform 1.2 and 1.3 VMs for some tasks because those VMs have different task loads. In addition, people are pretty clever, and manual optimizations can often outperform the HotSpot optimizations. I have an article at http://java.oreilly.com/news/javaperf_0900.html
which runs through a basic tuning exercise on running a query against a collection. The article shows how HotSpot VMs start by outperforming, but can end up lagging after manual optimizations are applied.
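To give a flavor of the kind of manual optimization the article discusses (this is my own illustrative sketch, not the article's actual code, and the method names are made up): take a query that counts collection elements above a threshold. The obvious version calls into the collection on every iteration; a hand-tuned version hoists the size lookup and copies the data into a primitive array first, avoiding per-element method calls and unwrapping inside the hot loop.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of manual tuning on a collection query.
// Both methods return the same answer; the tuned one trades an
// up-front copy for a cheaper inner loop over primitives.
public class QueryTuning {
    // Straightforward version: query the collection on every iteration.
    static int countAboveNaive(List<Integer> values, int threshold) {
        int count = 0;
        for (int i = 0; i < values.size(); i++) {
            if (values.get(i) > threshold) {
                count++;
            }
        }
        return count;
    }

    // Hand-tuned version: hoist the size, copy once into a primitive
    // array, then loop with no method calls or unwrapping in the body.
    static int countAboveTuned(List<Integer> values, int threshold) {
        int size = values.size();
        int[] raw = new int[size];
        for (int i = 0; i < size; i++) {
            raw[i] = values.get(i);
        }
        int count = 0;
        for (int i = 0; i < size; i++) {
            if (raw[i] > threshold) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            data.add(i);
        }
        System.out.println(countAboveNaive(data, 49)); // 50
        System.out.println(countAboveTuned(data, 49)); // 50
    }
}
```

Whether the tuned version actually wins depends on the VM - a HotSpot engine may make the naive loop just as fast on its own, which is exactly the point of the article's before-and-after comparison.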