
Division by bit shift

 
Drew Lane
Ranch Hand
Posts: 296
I read that performance can be improved by using bit shifting in place of division.

Is it possible to do this with denominators other than 2?

For example, say I want to divide 2 by 3 (i.e., 2/3).

What about 3/4?

How would I do this and would I get the same performance benefits?

Thanks,

Drew
 
Tony Morris
Ranch Hand
Posts: 1608
Broad, sweeping statements about performance are almost always false, and this one is no exception. Quite often, extrapolations from benchmarks are also false, because of the implicit changes introduced when you extrapolate.

If you are attempting to increase performance, I strongly suggest you use a profiler first. You will almost certainly find that your arithmetic is negligible, many orders of magnitude cheaper than the real bottlenecks.
 
Ilja Preuss
author
Sheriff
Posts: 14112
Originally posted by Drew Lane:
I read that performance can be improved by using bit shifting in place of division.


That *might* be true at the machine language level, depending on the platform. A good compiler will do this optimization automatically where appropriate, so it's best not to worry about it, but to worry about making your code communicate intent.


Is it possible to do this with denominators other than 2?


No, not in architectures that use the binary system to represent numbers.

Remember how in the decimal system you can divide by powers of ten simply by moving the decimal point? That's fully equivalent to a bit shift.
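To make the equivalence concrete, here is a minimal sketch in Java: right-shifting by k divides a non-negative int by 2^k, just as moving the decimal point divides by a power of ten. (One caveat worth knowing: for negative odd values, `>>` and `/` round differently.)

```java
// Sketch: n >> k equals n / 2^k for non-negative ints.
public class ShiftDivide {
    public static void main(String[] args) {
        int n = 40;
        System.out.println(n >> 1);  // 20, same as n / 2
        System.out.println(n >> 3);  // 5,  same as n / 8

        // Caveat: rounding differs for negative odd values.
        // Integer division rounds toward zero; arithmetic shift
        // rounds toward negative infinity.
        System.out.println(-3 / 2);   // -1
        System.out.println(-3 >> 1);  // -2
    }
}
```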

Moving to our Performance forum...
 
Henry Wong
author
Sheriff
Posts: 22853
Originally posted by Ilja Preuss:
That *might* be true at the machine language level, depending on the platform. A good compiler will do this optimization automatically where appropriate, so it's best not to worry about it, but to worry about making your code communicate intent.


Agreed... This *used* to be true at the machine level, but not any more. For example, the Intel chip used to take 30+ cycles to perform one division, while a shift took one cycle. That meant it could be faster to perform division with a combination of shifts, adds, and subtracts. Even then, not all combinations were worth it.

Today, the processor has so much internal memory that division is performed with lookup tables. I don't think shifting can beat the division operator for anything but the simplest cases (like dividing by 2). And even then, you may only save a cycle.
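For what it's worth, the shift-and-add style replacement described above survives today as the "magic number" multiply-and-shift that optimizing compilers emit for constant divisors. A hedged sketch for dividing by 3 (the constant 0xAAAAAAAB, i.e. ceil(2^33 / 3), is the standard reciprocal used for this trick; this is an illustration, not something you should hand-write):

```java
// Sketch: constant division by 3 via multiply-high and shift,
// the kind of strength reduction a compiler may emit automatically.
// 0xAAAAAAABL == ceil(2^33 / 3), so (n * magic) >>> 33 == n / 3
// for every non-negative 32-bit n.
public class MagicDivide {
    static int divideBy3(int n) {
        return (int) (((n & 0xFFFFFFFFL) * 0xAAAAAAABL) >>> 33);
    }

    public static void main(String[] args) {
        System.out.println(divideBy3(9));    // 3
        System.out.println(divideBy3(100));  // 33
    }
}
```

The point is not to use this in source code, but that the compiler already does it, which is why writing plain `n / 3` and letting the optimizer work is the right call.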

Henry
 