
Big O notation & Recursion

 
Tom Cameron
Ranch Hand
Posts: 33
Hi, it's me again. I've been thinking about recursion (I'm new to it), and specifically about Big O notation for recursive methods. So, my big question: are recursive methods generally a lot slower than regular iterative methods? Recursion sometimes seems easier to implement, but it makes the computer repeat itself several times.

Let's take an easy example to clear my mind up: to reverse a string, I would normally write a for loop with charAt and walk through the whole thing, which is O(n). But if I used recursion instead, always printing the character at one end and then cutting that same character off the string recursively, wouldn't that also be O(n)? Sometimes I suspect recursion has more "complexity" than that. If I'm wrong, please correct me; I'm just thinking out loud.
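
Here's roughly what I mean, as a sketch (the method names are mine). Both versions touch each character exactly once, so both run in O(n) time; the recursive one also uses O(n) stack space. In the recursive version I pass an index instead of actually chopping the string, because substring would copy characters on every call:

public class Reverse {

    // Iterative: walk the string backwards with charAt.
    static String reverseIterative(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = s.length() - 1; i >= 0; i--) {
            sb.append(s.charAt(i));
        }
        return sb.toString();
    }

    // Recursive: print the last character, then recurse on the prefix.
    // Passing an index keeps each call O(1), so the whole run is O(n).
    static void printReversed(String s, int end) {
        if (end == 0) {
            return;                      // base case: nothing left to print
        }
        System.out.print(s.charAt(end - 1));
        printReversed(s, end - 1);
    }

    public static void main(String[] args) {
        System.out.println(reverseIterative("recursion")); // noisrucer
        printReversed("recursion", "recursion".length());  // noisrucer
        System.out.println();
    }
}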
 
Alberto Caraz
Greenhorn
Posts: 18
There is some overhead associated with recursion, since a new stack frame has to be created for every method invocation. A recursive solution may therefore be less efficient than an iterative one as far as constant factors are concerned.

However, a recursive method need not have a different order of growth from its iterative counterpart. The exception is when there are repeating subproblems: a naive recursion may end up computing the same solution several times. For example, consider the Fibonacci numbers:

F(0) = F(1) = 1
F(n) = F(n-1) + F(n-2)

If you compute F(10) straight from this definition, you end up computing F(8) twice: once because F(10) = F(9) + F(8), and once more while computing F(9) = F(8) + F(7). Each of those F(8) calls then repeats work of its own, so the naive recursion is actually exponential. Using memoization (keeping a table of already-solved Fibonacci numbers) brings the algorithm back to the same asymptotic performance as the iterative counterpart, with the same caveat about stack frame allocation.
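
In Java, the two versions might look something like this (just a sketch; the names are mine):

import java.util.HashMap;
import java.util.Map;

public class Fib {

    // Naive translation of the definition: the same subproblems are
    // recomputed over and over, so the running time is exponential in n.
    static long fibNaive(int n) {
        if (n <= 1) {
            return 1;                        // F(0) = F(1) = 1
        }
        return fibNaive(n - 1) + fibNaive(n - 2);
    }

    // Memoized version: each F(k) is computed once and cached in a table,
    // so the running time drops to O(n), matching the iterative solution.
    static final Map<Integer, Long> memo = new HashMap<>();

    static long fibMemo(int n) {
        if (n <= 1) {
            return 1;
        }
        Long cached = memo.get(n);
        if (cached != null) {
            return cached;                   // already solved: reuse it
        }
        long result = fibMemo(n - 1) + fibMemo(n - 2);
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fibNaive(10)); // 89
        System.out.println(fibMemo(10));  // 89
    }
}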

So my conclusion is: the idea that recursive methods are inherently less efficient is mostly a myth, especially as far as asymptotic analysis is concerned. However, if you care about fine-tuning your algorithm, consider dynamic programming when a recursive algorithm has repeating subproblems, and prefer plain recursion when not all possible subproblems need to be solved in order to find a solution.
 
Tom Cameron
Ranch Hand
Posts: 33
Interesting... so I guess it depends on the problem. I'm not really sure what you mean by asymptotic analysis, though. Thanks for the speedy reply.
 
Alberto Caraz
Greenhorn
Posts: 18
Asymptotic analysis means analysis with tools like big O. It is called asymptotic because saying f = O(g) means that c*g will 'eventually' be an upper bound on f, for some constant c. So if you have code that runs in f(n) = O(n) time and code that runs in g(n) = O(2^n) time, it can still happen that f(n) > g(n) for some small values of n (i.e. the exponential code is faster than the linear code there). What asymptotic analysis guarantees you is that at SOME POINT your linear code becomes, and stays, faster than the exponential one.
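
For instance (the constants here are made up purely to illustrate the point), suppose the linear code costs f(n) = 100n operations and the exponential code costs g(n) = 2^n:

public class Crossover {
    public static void main(String[] args) {
        for (int n = 1; n <= 12; n++) {
            long f = 100L * n;   // hypothetical linear cost
            long g = 1L << n;    // hypothetical exponential cost, 2^n
            System.out.printf("n=%2d  f(n)=%5d  g(n)=%5d  ->  %s%n",
                    n, f, g, g < f ? "exponential is cheaper" : "linear is cheaper");
        }
    }
}

Up to n = 9 the exponential code wins; from n = 10 onward the linear code wins and keeps winning forever. That crossover point is all that big O promises you.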
 