Jack Shirazi

Author
since Oct 26, 2000

Recent posts by Jack Shirazi

The 12 hour weblog analysis program exactly illustrates what I'm trying to say. Here are two scenarios:
1. You want hourly stats. So you set yourself a target of sub-hourly analysis processing time. Performance tune and get there or give up. No need for outside numbers.
2. You want hourly stats. But you haven't got a clue that any kind of speedup is possible. So you give up. Then you find out that someone else analyses their logs in 10 minutes. This tells you that performance tuning your app is possible. Now you adjust your expectations and perf tune (or hire someone to do it). But it was not the performance tuning process that benefitted from that knowledge, it was your business targets that benefitted.
This is what I meant by the information being useful for setting performance targets, which is a business-level activity, but not for performance tuning. The 10 minutes (or sub-60 minutes) target does not help the performance tuning process in any way. It is a constraint on the performance tuning: you need to carry on performance tuning until you reach the target. But 10 minutes doesn't tell you where the bottlenecks are in your app, nor what kinds of techniques will help you speed up the app.
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago


a 10x speedup could be functionally useful (as in the DB example), whereas a 1% speedup might not be useful to the same degree


Once a day, I run an analysis on my weblogs. It takes about 10 minutes. I run it in the background. I have not the slightest doubt that I could spend a day or two tuning the program and get it down to 1 minute. What would that gain me? Nothing functional. A couple of days wasted. I don't wait for a run to finish, I get on with other things (deleting junk mail, replying to other mail). A 10x speedup is functionally useless in this context. It would not be performance tuning, it would be playing around. This 10x speedup is worth nothing to me. I wouldn't pay a penny for it if someone offered to do it.
Does it help you to know that my weblog analysis program takes 10 minutes? Even if you were writing your own weblog analysis program?
It can take me one month to complete development of a training course. If I can improve that performance by 3%, I gain a day. It may not sound like much, but I'm desperate for all the time I can get. In some situations, a day of my time could be very valuable to me, worth paying for (because someone else is willing to pay me more).
It may well help someone to know that it takes me a month to develop a course. They can offer me a service to help me, which I might pay for. Or they can try and build a rival course, and knowing how long it takes means they know their lead, or lag. Or, if they hire someone to write a course, they have my benchmark to know what may be reasonable.
What if everyone else takes one week to develop a course? That benchmark is misleading someone.
The usefulness of a speedup doesn't depend on the degree of speedup; it depends on the context the speedup is delivered in. The usefulness of a benchmark is very dubious. It may be of some help as product development information. It may be completely misleading. And it could be more expensive to find out it is misleading than to ignore it in the first place.
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago
But this isn't performance tuning, this is adding value. Identifying that their performance targets could be improved to add value to their system.
Your basic point, that if you know what is possible from other projects then you can improve the system, is a value-adding proposition. As such, you are right. The kind of figures you are talking about are not particularly useful to the performance tuner. They are useful for management, for marketing, for product development, for specifying the business case. Not application development.
So now I don't know whether to concede that you are right - the information is useful; or insist that you are wrong - the information is not particularly useful for the performance tuning process. I guess since the information is useful I must concede you are right.
You won't find that kind of information in many places. Specifically because it is useful to rival product development. I can't even report on my improvements made for my own customers most of the time because they don't want to give out that kind of information. And this is not profile information (your very first post), it is performance target specifications.
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago
If 150 hours was adequate performance, it doesn't matter even if it could have been done in 2 seconds. If 150 hours is inadequate performance, then they must have had a target time that the calculation needed to be done in. What difference what is achievable? Your business requirements should drive target performance times, not some arbitrary target of what can be achieved by other applications.
The fact that you didn't know how long things would take (wireless system) is one thing. That is not uncommon. That's what performance testing is for. Not having targets under different loads is another. Marketing's job is to do the customer research to find out what was acceptable to users. Leaving it to engineers to set times with no input from sales/marketing just means that you get them coming back saying "that's too slow".
Here are some figures for you:
User interface:
Response time is the primary performance target.
Primary guideline: make sure you set the users' expectation of how long any task will take. In the absence of a guide from the app, the user expects task time to be more or less instantaneous (so is always disappointed).
The UI should remain responsive at all times, even when user-initiated activity is occurring. For any activity which will take more than a tenth of a second, status should be displayed. For less than half a second, status messages are sufficient, but beyond that a status bar is preferred.
In the absence of expectation, if an activity will take more than one second, user patience begins to run out. Over two seconds, the chance of abandoning the activity starts to increase. By eight seconds, the chances are very high that the user will abandon the task (various studies, including IBM studies and more recent web page interaction studies).
Another study shows that the user's memory of the "average" response time is actually the response time that corresponds to approximately the 90th percentile, i.e. the response time which is higher than 90% of the response times encountered for the task.
Sound streams and video streams are different. Users identify stalls and gaps in streams very easily. Lowering the resolution is better than losing time segments. I don't have hard figures in this area.
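Those feedback thresholds can be captured in a small decision helper. This is only an illustrative sketch; the class and method names are mine, not from any standard API, and the cutoffs are the ones quoted above.

```java
// Sketch of the UI feedback guidelines above; names are illustrative.
public class FeedbackGuideline {
    // Given an estimated task duration in milliseconds, suggest the
    // kind of progress feedback the UI should display.
    public static String feedbackFor(long estimatedMillis) {
        if (estimatedMillis <= 100) {
            return "none";           // under a tenth of a second: feels instantaneous
        } else if (estimatedMillis <= 500) {
            return "status message"; // under half a second: a message is sufficient
        } else {
            return "progress bar";   // beyond that: a status bar is preferred
        }
    }

    public static void main(String[] args) {
        System.out.println(feedbackFor(50));    // none
        System.out.println(feedbackFor(300));   // status message
        System.out.println(feedbackFor(2000));  // progress bar
    }
}
```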
Server systems
Throughput (number of requests served per second/minute); transactions rates (number of transactions completed per second/minute); response time; and concurrency levels (the number of simultaneous requests being handled by the server) are the primary performance targets.
Good performance (achievable currently with J2EE) has sub-second response times and hundreds of (e-commerce) transactions per second. Servlets running on an average single server configuration machine can serve tens of dynamically built pages per second.
Near real-time systems (e.g. telco systems):
If a response will take longer than 200 ms, that needs to be signalled to the caller. No response in 500 ms is essentially a lost request, which means that round-trip response times should be 500 ms or less. http://www.ietf.org/rfc/rfc2543.txt
What other systems do you want to talk about?
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago
I've been to many customers who had no performance targets in place. It has never taken them more than two days to come up with basic response time, throughput, transaction rate, and concurrency level targets, and they knew before I needed to explain that those were the statistics that needed to be specified. You'd have to be pretty far out of it to be developing a J2EE app and not know that these are the statistics you need to target. Mostly they just needed someone to tell them to write down target numbers, and use them (yes, I get hired to state the obvious. It is so much cheaper to buy and read my book). They don't usually have too much trouble coming up with target numbers.
I have seen several game projects described. In every case the developers knew exactly what performance targets they were going for, frame rates and polygons. Every J2SE GUI application I've heard of knew that they needed a responsive GUI. That meant no frozen screens. Their problem wasn't in specifying the performance targets, it was in achieving them. They also knew that they needed to target response times for user initiated activity.
Which development project have you been on or know of where they could not specify performance targets? I certainly agree that too many projects do not have those targets. But in my experience that has always been because they just didn't consider performance until the customer or director came along and said "My God, that's slow".
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago
Well, I'm afraid that I'm pretty much out of time here. I'll try to follow one or two threads if I can, but if I can't, it's been fun. Some great discussions.
Good luck all of you on winning a copy, and I wish you all success in your future careers.
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago
BTW, I don't advocate (and never said) waiting until the project is "fully" stable before tuning. I said wait until components are functionally stable. Components of most applications are functionally stable when the specified functionality is working, but not fully QA'ed. At this stage the component still needs to have all its potential runtime paths tested and any bugs fixed, but the primary runtime paths are working and so can be profiled. This usually comes at the unit testing stage. That's way before the app is in pre-production. Then you have integration testing stages where you can exercise full application paths for the components you have, and often the pre-production QA phase runs performance testing in parallel.
And a good number of projects work out that the best time to do the implementation performance testing is at pre-production, and do schedule that. Performance testing unstable apps and unstable components will give you nothing at all but wasted resources more than half the time. Which is the kind of ratio that makes it very expensive.
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago
The auto manufacturer example, with layout being design and inspector numbers being implementation, is a nice one. And if he "profiled" his production line in the latter case, he would quickly identify that the major bottleneck is waiting for the inspector to inspect each step. It may be complex to profile a production line, but it is not that complex for a Java app.
Let's look again at the example of that auto manufacturer. Your way to help performance tuning is to look for the benchmark. In this case other auto manufacturers run their assembly lines 3 times faster, so he knows he has to improve. My way is to performance tune to targets. Well, he needs initial targets. Of course in this case that would be the rival manufacturer's production rate. So he starts with the target "throughput", and keeps repeating profile/fix until he gets there. In terms of Java applications, I always emphasize that you must have targets before you start tuning, otherwise you have no idea how much tuning you need to get done. So this doesn't separate us.
But my understanding is that you aren't really talking about throughput and response time targets. Every application should have these targets before you start tuning, or you're just blowing in the wind. My understanding is that you are talking about lower level benchmarks such as the types of objects and their frequency, lifetimes, etc. That's much lower level than the production line throughput. I think that's more equivalent to, say, statistics on the individual workers on the production line. Productivity statistics.
So our primary goal is the target throughput. A secondary goal is to optimize individual productivity levels. In the absence of specific time targets, you can use an ROI (return on investment) target. For performance tuning, this works by identifying inefficiencies and determining whether fixing them pays for itself. In tuning terms, this goes something like: it is almost always worth fixing bottlenecks taking 10% of application time; it is rarely worth fixing bottlenecks taking 0.1% of application time. The return on halving the time of a 10% bottleneck is a speedup of 5% of the app. The return on halving the time of a 0.1% bottleneck is a speedup of 0.05% of the app.
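The arithmetic in that ROI rule of thumb is just the Amdahl's law calculation. A minimal sketch (the class and method names are mine, purely for illustration):

```java
public class TuningRoi {
    // Overall saving from speeding up one part of an application:
    // if a bottleneck takes 'fraction' of total time and you make that
    // part 'factor' times faster, the time saved, as a fraction of the
    // whole application, is fraction * (1 - 1/factor).
    public static double overallSaving(double fraction, double factor) {
        return fraction * (1.0 - 1.0 / factor);
    }

    public static void main(String[] args) {
        // Halving a 10% bottleneck saves 5% of total application time.
        System.out.println(overallSaving(0.10, 2.0));
        // Halving a 0.1% bottleneck saves only 0.05%.
        System.out.println(overallSaving(0.001, 2.0));
    }
}
```

This is also why the profile matters more than outside benchmarks: the payoff is determined by the bottleneck's share of *your* application's time, not by anyone else's numbers.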
So your way says: I want individual productivity levels for workers at other factories. But I would say that just because they work at other factories, that doesn't mean their productivity levels are at all relevant. You might be lucky, and they are. Or you might get totally misleading benchmark productivity levels, and set targets that are too low, or impossible to achieve.
Instead, I say ignore the other factories' individual productivity levels. Profile your factory and find the bottlenecks, looking for people waiting around (method execution bottlenecks) and for piles of discarded waste or unusable material (the closest analogy I could think of for object creation bottlenecks). You have reached your targets when the primary targets are reached in any case, but assuming your bonus depends on how much you beat that primary target, the secondary target will give the best route to the highest bonus. Basing your targets on low-level stats from other factories is misleading. Looking for benchmarks from low-level statistics of other apps is misleading.
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago
Online samples are available, see this discussion
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago
Recreating instances is fine. In most cases with the latest JVMs you get better performance than pooling. There are no performance problems with globally accessible objects; global access is a maintenance or security problem. Huge code bases are not necessarily more likely to be a performance problem (except for the overhead of extra classloading, so startup may need tuning).
The example you give of the auto manufacturer is a design bottleneck. Profilers don't tell you anything about design bottlenecks. One of the reasons patterns are popular is because they can guide you to efficient designs.
These are implementation and design issues you are describing. Design should be addressed at an early stage with performance as a focus. Patterns are the closest you'll come to benchmarks for efficient designs. Implementation should focus on functionality until stable. Implementation performance should not be considered until components are functionally stable; then profiling will get you your improvements.
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago
I can see what you're saying. But I still don't see the usefulness of profiling, other than essentially more tutorials.
Method profiling: You are looking for the methods which take the longest time. Ideally CPU time, since elapsed time includes waits on monitors and I/O. And you want the time for execution within the method code and not calls to external code. Try to speed up the method or change the app to avoid using the method (or use it less often). If one of the methods high up in the profile is the garbage collector executing, then you know there is also an object creation problem.
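The CPU-time-versus-elapsed-time distinction can be observed directly with the standard java.lang.management API. This is just a sketch to show the difference, not a replacement for a real profiler; the class name and structure are mine (and getCurrentThreadCpuTime can return -1 on JVMs that don't support thread CPU timing):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuVsElapsed {
    // Returns {elapsedNanos, cpuNanos} for a run that waits, then computes.
    static long[] measure() throws InterruptedException {
        ThreadMXBean tm = ManagementFactory.getThreadMXBean();
        long cpuStart = tm.getCurrentThreadCpuTime();  // CPU ns for this thread
        long wallStart = System.nanoTime();            // elapsed ns

        Thread.sleep(200);           // waiting: elapsed time grows, CPU time barely does
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) sum += i; // computing: both grow
        if (sum < 0) throw new IllegalStateException(); // keep the loop from being optimized away

        long cpuUsed = tm.getCurrentThreadCpuTime() - cpuStart;
        long wallUsed = System.nanoTime() - wallStart;
        return new long[] { wallUsed, cpuUsed };
    }

    public static void main(String[] args) throws InterruptedException {
        long[] t = measure();
        // Elapsed time includes the sleep; CPU time mostly does not, which is
        // why CPU time is preferred when hunting for genuinely hot methods.
        System.out.println("elapsed ms: " + t[0] / 1_000_000);
        System.out.println("cpu ms:     " + t[1] / 1_000_000);
    }
}
```

A method that looks expensive on elapsed time but cheap on CPU time is blocked on monitors or I/O, which calls for a different fix than a compute-bound method.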
Memory profiling: Find which objects are created and dumped the most. Memory snapshots help. Track down the methods which create those objects. See if you can reduce the amount of object creation.
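A classic instance of the object-creation bottleneck a memory profile turns up is String concatenation in a loop: each concatenation creates new intermediate objects, where a reused StringBuilder grows one buffer. A small sketch (method names are mine, for illustration):

```java
public class StringCreation {
    // Creates new String and intermediate objects on every iteration;
    // a memory profile shows these as high-churn temporary objects.
    static String concatInLoop(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + i + ",";   // each + allocates fresh intermediates
        }
        return s;
    }

    // Reuses one growing buffer; far fewer temporary objects created.
    static String buildInLoop(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concatInLoop(3)); // 0,1,2,
        // Both produce identical output; only the allocation behavior differs.
        System.out.println(concatInLoop(1000).equals(buildInLoop(1000)));
    }
}
```

The point of the profile is to find *which* methods are churning objects like concatInLoop does; the fix is usually local once you know where to look.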
Benchmark Application 1 says Object Y was created the most, and could be reused to eliminate a bottleneck.
Benchmark Application 2 says Strings were used the most and could be improved by converting the String creation process in method X to do ...
How do these benchmark applications help? As tutorials, certainly they give you experience in finding and getting rid of bottlenecks. I do this in my book a lot. But as benchmarks to compare your application against, I can't see their usefulness unless your application happens to do very similar things.
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago
Peter:
I didn't have optimization enabled for the 1.1.8 compile.
Kalpesh:
Any good memory profiler should tell which are the big objects in your application.
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago
This is not a design patterns book, but I do briefly cover those design patterns that are optimizing design patterns.
I wrote the book because I had a great deal of material I had gathered from performance tuning projects over many years. And I decided to start writing about that material, so that other people could have one reference point for all these techniques that were available for performance tuning applications. And to make some extra money of course.
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago
It's an issue of resources. If you are performance testing unstable work, you will start tuning parts of the code that will never otherwise need tuning.
And when do you stop? With functional testing, you stop fixing when the test doesn't produce an error. How do you know when to stop tuning? For the whole application, you should have performance targets, but what are you going to do, give yourself a target for every piece of functional code you produce? MethodX does X and takes 124 nanoseconds. MethodY does Y and takes 15 milliseconds. That's fun.
Look for design bottlenecks. Look for architecture bottlenecks. Specify performance targets and profile stable units looking for implementation bottlenecks using those specifications as benchmarks.
Stability is very different from performance. If a path followed once or twice by the application fails, the application breaks. But if a path followed once or twice by the application is slow, mostly you don't even notice.
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago
There are different things to look for at different stages. It's important to look for performance issues at the design stage, especially those of shared resources. At implementation, you shouldn't performance test until after the units are stable, since functional changes will invalidate any tests you do.
I wrote an article covering the basics of what to focus on and when: Performance Planning For Managers
--Jack Shirazi
JavaPerformanceTuning.com
18 years ago