Here's the summary that will go into my newsletter.
Different versions of the Sun JVM support different optimization flags. Some flags may allow you to configure the garbage collector's generational spaces.
Configure heap space using -Xms and -Xmx [-ms and -mx for 1.1.x JVMs] to size the JVM heap for improved performance.
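A minimal launch line might look like this (MyServer is a hypothetical class name, and the heap sizes are only example values to start tuning from):

```sh
# 1.2/1.3 flag syntax; use -ms64m -mx256m on 1.1.x JVMs
java -Xms64m -Xmx256m MyServer
```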
If the JVM supports configuring the garbage collector's generational spaces (-Xgenconfig in 1.2.2; -XX:NewSize, -XX:MaxNewSize, -XX:SurvivorRatio in 1.3), then you can improve performance by specifying generation spaces more appropriate for your application [start with a configuration based on the ratios of short-lived to medium-lived to long-lived objects, then test multiple configurations to determine the optimal one].
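As a hypothetical starting point on a 1.3 JVM (the sizes here are invented; the right values depend entirely on your application's object lifetime ratios):

```sh
# Fix the young generation at 32MB with an 8:1 eden-to-survivor ratio (1.3 flags)
java -XX:NewSize=32m -XX:MaxNewSize=32m -XX:SurvivorRatio=8 -Xms128m -Xmx256m MyApp
```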
The 1.3 JVM appears to be faster when run with the -server flag.
The -Xoptimize flag seems to improve performance on those 1.2.x JVMs that support it.
For this forum, I should add that I would not bother to tune the generational garbage collector spaces in any system except either a benchmark where the customer needed every last microsecond, or a really stable deployed system, i.e. one that needs no more development. The generational garbage collector will probably change with every JVM (vendor and version), so you could easily end up with a suboptimal system when changing JVMs. GC activity also depends on object lifetimes, so you could just as easily end up with a suboptimal system if you tuned the GC spaces and then altered the application code.

On the other hand, I always recommend tuning the heap space, since it is much more closely aligned with the application's characteristics than the garbage collector is. Heap tuning optimizations tend to survive across JVM changes, and often application changes too, unless you dramatically change the lifecycle of many objects. You should still verify the heap configuration each time you change the JVM or the application code. My site lists several suggestions from Sun, IBM and HP about how to tune the heap (search for -Xmx on the tips page). --Jack Shirazi http://www.JavaPerformanceTuning.com/
For this forum, I should add that I would not bother to tune the generational garbage collector spaces in any system except...
Why not? Neither the GC nor the heap implementation is specified, but most JVMs store objects in so similar a manner that setting the heap size to some value often holds true from one JVM to the next. Why would the generational spaces be any different? At the very least, you could come up with settings for a common JVM, such as Sun's, and note that other JVMs will differ, right? What exactly do you mean by "the GC activity is dependent on object lives"? I understand the idea of generational spaces, but not much about their implementation. Thanks.
Thanks for pointing out the article. Have you seen the latest issue of the Java Report? It's all related to performance tuning. I'm thinking of writing to some of the authors to see if we can get permission to hyperlink to their articles from JavaRanch. -Peter
After compilation/interpretation, garbage collection is the next most important area affecting performance as far as JVM engineers are concerned. Consequently, the GC algorithm is usually altered with most version changes. If you start tweaking the GC spaces, you need to know exactly what the GC algorithm does and which effects are most useful to look out for.

Analysis of OO systems shows that the number of objects generated is roughly inversely proportional to the length of their lives, which is why focusing on short-lived objects pays off. Generational garbage collectors use multiple spaces, copying objects from one to another, so that the GC can always focus on objects in inverse proportion to how long they live. Longer-lived objects move to successively older spaces; short-lived objects are reclaimed as soon as possible. This also gains efficiency from only needing to GC a smaller set of objects (not all spaces are GC'd on each pass). Two-space generational GC is only one of many generational GC algorithms, and more spaces are likely to appear in future versions of JVMs. The 1.2 and 1.3 versions seem to have different enough GC algorithms that you probably couldn't tune them both with the same parameters (even ignoring that the flags are completely different).

Heap tuning is different. It is just two numbers, the minimum and maximum sizes for the heap. That is easy to guess at for any application, and even easier to test across a variety of parameters. Moving to a different JVM usually won't invalidate such a gross tuning measure.

If you are tuning both heap and GC spaces, note that they interact. Generally, a larger heap means that GC kicks in later and less often, but that when it does kick in there is more work to do (so longer pauses in JVM processing). This means that each change to the heap parameters requires a complete re-evaluation of the GC space parameters. If you have huge amounts of free time, sure, go ahead and play with GC spaces.
But personally I would expect that you'd get a bigger boost from spending your time on tuning code. --Jack Shirazi http://www.JavaPerformanceTuning.com/
Jack, we had a discussion with some performance tuning hardware people (HP servers), and they told us that if we're running Java we'll always get the best performance when we set -Xms equal to -Xmx. Is this a true statement? Thanks, -Peter
There are several recommendations that I know of. HP suggests that the start heap should be set the same as the max heap; another HP recommendation is that the initial heap should be about the size needed for the maximum number of live objects, with the max heap 3-4 times this. Sun suggests that the start heap should be half of the max heap; elsewhere they have suggested that the start should be between 1/10th and 1/4 of the max heap. They recommended setting the start and max heaps to be different in 1.1.x. IBM suggested that, as a starting point for a server based on a single JVM, you consider setting the maximum heap size to 1/4 of the total physical memory on the server and the start heap to 1/2 of the maximum heap.

IBM points out that, in general, increasing the size of the Java heap improves throughput up to the point where the heap no longer resides in physical memory. Once the heap begins swapping to disk, Java performance suffers drastically. Therefore, the max heap should be set small enough to keep the heap within physical memory. Also, large heaps can take several seconds to fill up, so garbage collection occurs less frequently, but pause times due to GC increase.

I don't really believe that anyone has general peer-reviewed tests to back up these suggestions, but there are some rationales. Assuming you've worked out or tuned the JVM so that you know what the max heap should be, growing the JVM memory can be considered pure overhead, requiring multiple system calls and resulting in segmented system memory allocation. If you figure that you are going to reach the max heap anyway, then there is a good argument for simply starting out at the max heap (the HP suggestion), avoiding the growth overhead and getting memory which is less segmented. However, this can mean longer pauses when garbage collection kicks in, so the system load might not be smoothed out as much as you'd want.
But a generational garbage collector will not necessarily suffer from this problem, as it specifically tries to smooth out the GC load, and can do so.

An alternative view is that there is this lovely garbage collection system in the JVM which will grow the JVM to just as big as needed and no more, so why not let it do its job? This way, although there is some overhead in growing the JVM, you end up using the minimum resources, and the GC gets to optimize what it does best, i.e. handling and maintaining memory. With this argument, you set the start heap to 1MB and the max as high as is reasonable.

A combination of these two rationales might lead you to the Sun recommendations: assuming that the max heap is an overestimate of the ultimate JVM size, half the max is probably a good starting point to minimize memory allocation and memory segmentation overheads, while still giving the GC room to do its stuff of optimizing memory usage.

All vendors recommend using verbosegc to help determine the optimum size that minimizes overall GC, so all the recommendations should really be seen as suggested starting points. Take your pick, come up with other rationales, or test your system. For testing, remember that the goal is to minimize the GC overhead, so you need to measure the total amount of time that GC takes (verbosegc is probably the only way, and it produces non-standardized output) and play with the min/max heap to minimize total GC time. --Jack Shirazi http://www.JavaPerformanceTuning.com/
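A sketch of that test loop (MyServer and the sizes are hypothetical; on 1.2+ JVMs the flag is spelled -verbose:gc, on 1.1.x it is -verbosegc):

```sh
# Capture GC activity for one candidate heap configuration, then
# repeat with different -Xms/-Xmx values and compare total GC time.
java -verbose:gc -Xms32m -Xmx128m MyServer > gc-32-128.log 2>&1
```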
[This message has been edited by Jack Shirazi (edited February 18, 2001).]