
Is Software Performance Going to Matter Again?

 
Warren Dew
blacksmith
Ranch Hand
In the last five or ten years, the improvement in speed of microprocessors has far outstripped the ability of programmers to write software using all that speed. Very little desktop software today comes anywhere near needing the full speed of today's faster personal computers. Even on servers, only the most heavily loaded and performance intensive applications push the limits of today's machines - it's usually cheaper to buy a faster machine than to invest in more than a small amount of programmer effort optimizing the software.

Two developments in the past year, though, cast doubt on the expectation that microprocessors will continue to get faster at the historical rate.

First, IBM, which had its flagship PPC 970 processor running at 2 GHz a year ago on a 0.13 micron process, expected the switch to 0.09 microns to increase their clock speeds to about 3 GHz by now, since clock speeds usually scale inversely with the size of the chip. Instead, the clock speeds have only moved up to 2.5 GHz, and IBM is having trouble producing even those chips in bulk.

In the meantime, Intel has also made the move to 0.09 microns. Since the Pentium was running at over 3 GHz on the 0.13 micron process, they expected the move to 0.09 microns to yield clock speeds well over 4 GHz. Instead, they stalled out at 3.6 GHz and announced Thursday that they were giving up on achieving 4 GHz.

Both of these failures seem to relate to a shift in the limiting factors for chip speeds as the process size goes down. At larger sizes, the power requirements fall as the process size goes down because there are fewer electrons that have to be pushed around for shorter distances. At the process sizes now being reached, though, this effect is being outstripped by the opposite effect from coupling capacitance: as the size goes down, the electrical signals in adjacent wires interfere with each other more because they are closer together, requiring more power to push the electrons through. Since a smaller chip has less surface area for cooling, this can limit the chip's speed.
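
For reference, the standard first-order formula for CMOS switching power - textbook material, not anything specific to IBM's or Intel's processes - is roughly

P_dynamic ≈ α · C · V² · f

where α is the switching activity, C the total switched capacitance (which includes the coupling capacitance between neighboring wires), V the supply voltage, and f the clock frequency. If C stops falling as the wires get closer together, then raising f directly raises the power that has to be dissipated from an ever smaller die.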

This may mean that going to even smaller processes - 0.065 or 0.045 microns - will not yield any increase in speed. Hardware may quit getting faster. Will that mean that it will become more important again to write clean, tight code and optimize it?
 
Francis Siu
Ranch Hand
hi Warren Dew
"Will that mean that it will become more important again to write clean, tight code and optimize it?"
Yes.
In my experience, we develop a lot of application programs that run on notebooks or even on Pocket PCs. The CPUs in those devices are pretty slow, and video or audio transmission, decoding, and compression need a lot of computing power - otherwise the quality suffers. So optimization techniques are important in that area.

Some countries are also introducing electronic tourist guides, and many tourists like to use such a device to work out an optimized tour. Time is money: they want to enjoy the trip after business, so the application should display the best path as soon as possible.

In another respect, some real-time systems require a lot of CPU resources - stock exchanges, gambling, online games, and so on - so writing clean, tight code and optimizing it are necessary there.

On the other hand, for a lazy student or a rich boss (as the user), optimization is unnecessary.
[ October 17, 2004: Message edited by: siu chung man ]
 
William Brogden
Author and all-around good cowpoke
Rancher
Three different takes on coding:

It is always important to write clean, tight, bug-free code - but sometimes it is even more important to satisfy the marketing droids.

It is always important to write clean, tight, bug-free code - but the level of abstraction left assembler behind some time ago.

It is always important to write clean, tight, bug-free code - but if you don't get product out the door the whole enterprise dies and you are back on the street.

(My clean-tight-code credentials: using a Forth-type language, my son and I wrote an 8-line BBS that ran under DOS 5 in the 640K memory space, with email, discussion groups, chat, and text databases - it would run for months.)
Bill
 
fred rosenberger
lowercase baba
Bartender
I'm not sure the need for 'clean, tight and optimized' code ever went away. My company writes software for a specialized market. Our customers don't have tons of money to buy new machines every 6 months because their last purchase won't run our new code... In fact, some of our customers are still running, and we still support, some old DEC/VAX mainframes.

My desktop is fairly new and up to date, but there are times when I'm using its terminal emulator to code on those mainframes.
 
William Brogden
Author and all-around good cowpoke
Rancher
It occurs to me that the recent rush to dump MSIE in favor of Firefox and/or Opera represents a vote for fast, compact, and clean. (Over 5 million downloads of Firefox!)
Bill
 
Stefan Wagner
Ranch Hand
  • Report post to moderator
5 million downloads in NYC yesterday, or globally so far?

It's not too impressive. I downloaded it twice, and others get it with their Linux CD without downloading anything at all...

On servers, the time to run a service under high load is probably what matters more, while on the fat consumer client it's often the startup speed that makes me unhappy - for 30 seconds the machine is working at full speed.

There should be a way of saving that time before execution, or more clever lazy initialization code.
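A minimal sketch of what I mean by lazy initialization, in Java - the class and the table are invented purely for illustration:

import java.util.HashMap;
import java.util.Map;

public class LazyStartupDemo {

    // Stand-in for something slow that a program might otherwise build at startup.
    static Map<String, String> buildBigTable() {
        Map<String, String> table = new HashMap<String, String>();
        for (int i = 0; i < 100000; i++) {
            table.put("key" + i, "value" + i);
        }
        return table;
    }

    // Holder idiom: the JVM only initializes this nested class, and therefore
    // only builds the table, the first time getTable() is called.
    private static class TableHolder {
        static final Map<String, String> TABLE = buildBigTable();
    }

    public static Map<String, String> getTable() {
        return TableHolder.TABLE;
    }

    public static void main(String[] args) {
        System.out.println("started quickly - table not built yet");
        System.out.println("first lookup: " + getTable().get("key42"));
    }
}

It doesn't make the work any cheaper; it just keeps it off the startup path until somebody actually needs the result.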

I'm still amazed at how vendors shout about the speed of their machines and then sell them with too little memory. Stable suspend-to-RAM and suspend-to-disk, together with much more memory, would improve my environment far more than a much faster CPU.

And clean code is easier to maintain, which reduces its cost in the medium term and leaves room to improve the performance when it is needed.

Performance always matters - whether it's porting to a cell phone or running 2000 jobs an hour instead of 1500.
But performance isn't the most important thing.
To me it's the aesthetics of the code and the design; for some it's the aesthetics of the user interface; for some it's the price.
 
Warren Dew
blacksmith
Ranch Hand
fred rosenberger:

I'm not sure the need for 'clean, tight and optimized' code ever went away.

Clean has certainly always been useful. Tight? 64 kB used to be a lot of memory; I have a hard time thinking of anything that requires megabytes to run in as very tight. And while there are specialized applications where optimized code is still useful, I don't think most Java applications qualify.

For example, does anyone worry about the fact that Java chars are Unicode, and thus take up twice as much space as is actually necessary for English and most other Western languages? Or about using ints where shorts would do? Those are factor-of-two differences - it used to be that a factor of two was a lot; now we readily sacrifice it if it will make the code cleaner.
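
To put that factor of two in concrete terms, here is a throwaway Java sketch - nothing more than an illustration:

import java.io.UnsupportedEncodingException;

public class TwoTimesDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String text = "Plain English text needs only one byte per character.";

        char[] asChars = text.toCharArray();        // 2 bytes per element
        byte[] asBytes = text.getBytes("US-ASCII"); // 1 byte per element

        System.out.println("as char[]: " + asChars.length + " elements, about "
                + (asChars.length * 2) + " bytes of data");
        System.out.println("as byte[]: " + asBytes.length + " elements, about "
                + asBytes.length + " bytes of data");

        // Same story with primitives: an int is 32 bits, a short only 16,
        // yet most code reaches for int even when short would do.
        int[] wide = new int[1000];       // roughly 4000 bytes of data
        short[] narrow = new short[1000]; // roughly 2000 bytes of data
        System.out.println(wide.length + " ints take about twice the space of "
                + narrow.length + " shorts");
    }
}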

How about doing a right shift instead of dividing by two? That can save a factor of ten in speed, but I bet not too many people even consider it any more. I'm not advocating it, by the way - that's the kind of optimization that a good compiler ought to do for you - but I think we are often similarly profligate with CPU cycles in other ways nowadays when it will result in getting the program written faster.
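
And the shift-versus-divide substitution itself, in Java - again purely an illustration, and note that the two are only equivalent when the value cannot be negative:

public class ShiftDemo {
    public static void main(String[] args) {
        int n = 1234;
        int byDivision = n / 2;   // what you would naturally write
        int byShift    = n >> 1;  // the hand-optimized form
        System.out.println(byDivision + " == " + byShift);

        // Careful with negatives: -7 / 2 is -3, but -7 >> 1 is -4,
        // so the substitution is only safe for values known to be non-negative.
    }
}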

And indeed, that makes perfect sense when you consider that programmer time costs a lot more than machine time, rather than the opposite that used to be the case.

The question is, will the equation change if Moore's law has ceased to have effect?
 
William Brogden
Author and all-around good cowpoke
Rancher
It is well known (or should be) that you can save major memory AND CPU cycles if you know your text data is ASCII and can use byte[] instead of String and char[].
Dov Bulka, in his Java Performance and Scalability, Volume 1 book, shows lots of real-world timing tests related to String operations, including some impressive savings from using byte[].
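A rough sketch of the kind of comparison he describes - not Bulka's actual benchmark, just an improvised one, so don't read too much into the crude millisecond timings:

public class ByteVsStringScan {
    public static void main(String[] args) throws Exception {
        // Build a chunk of ASCII text both as a String and as a byte[].
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < 200000; i++) {
            sb.append("the quick brown fox ");
        }
        String asString = sb.toString();
        byte[] asBytes = asString.getBytes("US-ASCII");

        // Count spaces character by character in the String...
        long t0 = System.currentTimeMillis();
        int stringSpaces = 0;
        for (int i = 0; i < asString.length(); i++) {
            if (asString.charAt(i) == ' ') stringSpaces++;
        }
        long t1 = System.currentTimeMillis();

        // ...and byte by byte in the byte[], which also holds half the data.
        int byteSpaces = 0;
        for (int i = 0; i < asBytes.length; i++) {
            if (asBytes[i] == ' ') byteSpaces++;
        }
        long t2 = System.currentTimeMillis();

        System.out.println("String scan: " + stringSpaces + " spaces, " + (t1 - t0) + " ms");
        System.out.println("byte[] scan: " + byteSpaces + " spaces, " + (t2 - t1) + " ms");
    }
}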
Naturally, the world of embedded Java is where you would expect to find the greatest reward for tight code.
Intel's recent decision to de-emphasize raw GHz in favor of multi-core CPU design will hopefully increase interest in efficient use of multithreading, even in single user applications. This is not exactly a failure of Moore's law, but a change in attitude.
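As a toy illustration of that point, here is a sketch that splits an embarrassingly parallel loop across two threads - the sort of structure a dual-core CPU rewards even in a single user application:

public class TwoThreadDemo {

    // Stand-in for a CPU-bound chunk of work.
    static double sumRoots(int from, int to) {
        double sum = 0;
        for (int i = from; i < to; i++) {
            sum += Math.sqrt(i);
        }
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        final int n = 10000000;
        final double[] partial = new double[2];

        // Split the range in half and let each half run on its own thread;
        // on a dual-core CPU the two halves can execute in parallel.
        Thread first = new Thread(new Runnable() {
            public void run() { partial[0] = sumRoots(0, n / 2); }
        });
        Thread second = new Thread(new Runnable() {
            public void run() { partial[1] = sumRoots(n / 2, n); }
        });

        first.start();
        second.start();
        first.join();
        second.join();

        System.out.println("total = " + (partial[0] + partial[1]));
    }
}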

Speaking of Threads and conventional thinking, I recently had my attention called to "SEDA" - see this research report.
 