Tail Calls vs memoization

 
Ranch Hand
Posts: 782
With regard to the Fibonacci function as discussed here, can I characterize memoization as an optimization to overcome the JVM's lack of support for tail call optimization? I'm a bit hazy on both concepts. Perhaps someone can elaborate.

Thanks

Pho
 
Garrett Rowe
Ranch Hand
Posts: 1296
Someone may correct me on this, but I believe memoization and tail call optimization are orthogonal concepts.

Memoization is an optimization that lets a program compute the result of a function for a given set of arguments only once. In a referentially transparent function (or method), calls with the same arguments always yield the same result, so once a result has been computed, every later invocation with equal arguments can simply return the previously computed value.
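
A hand-rolled sketch of the idea (not any particular library's API; names here are illustrative), applied to fib:

import scala.collection.mutable

object MemoFib {
  // Cache of previously computed results, keyed by argument.
  private val cache = mutable.Map[Int, BigInt]()

  // Each fib(n) is computed at most once; later calls with the same
  // argument are answered straight from the cache.
  def fib(n: Int): BigInt =
    cache.getOrElseUpdate(n, if (n < 2) BigInt(n) else fib(n - 1) + fib(n - 2))
}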

Tail call optimization, on the other hand, is a "trick" that minimizes the size of the call stack: if the return value of a method is simply a call to another method, the current stack frame can be reused for that call.

The Scala compiler can optimize tail recursion where the last action of a non-overridable method is a call to the same method with different arguments. I believe that instead of reusing stack frames as in general tail call optimization, the compiler rewrites this kind of tail recursion into a loop.

Tail-recursive calls to overridable methods (non-final methods declared in an open class) are not tail call optimized, and tail calls to other methods are not optimized either. Both of these situations could in theory run in constant stack space, but the JVM is not able to do that.

Here is an example of a fib function that can be optimized by the Scala compiler.
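
Along these lines, for instance (an accumulator-passing version whose only recursive call is in tail position; names are illustrative):

object Fib {
  def fib(n: Int): BigInt = {
    // The recursive call to loop is the last thing it does, and a
    // local method cannot be overridden, so the compiler can turn it
    // into a loop: constant stack space regardless of n.
    def loop(i: Int, prev: BigInt, curr: BigInt): BigInt =
      if (i == 0) prev else loop(i - 1, curr, prev + curr)
    loop(n, BigInt(0), BigInt(1))
  }
}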

[ July 04, 2008: Message edited by: Garrett Rowe ]
 
Garrett Rowe
Ranch Hand
Posts: 1296
There is a memoization library for Scala in the Scalaz library, written by Tony Morris. One of the examples that ships with the distribution is a fib function that uses memoization.
 
Garrett Rowe
Ranch Hand
Posts: 1296
After some experimentation, I see that tail-recursive methods declared in objects are also tail call optimized, presumably because objects cannot be subclassed, so methods declared in them are effectively final. For example, this overflows the call stack:
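
Something of this shape (illustrative names; a plain tail-recursive method in an open class):

class Spinner {
  // Not final and not in an object, so a subclass could override it;
  // the compiler leaves the recursion as real method calls and the
  // stack eventually overflows.
  def spin(n: Long): Long = spin(n + 1)
}

object OverflowTest {
  def main(args: Array[String]): Unit = new Spinner().spin(0)
}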


But this results in an infinite loop:
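
For instance, the same method moved into an object (illustrative):

object ObjectSpinner {
  // An object cannot be subclassed, so this method cannot be
  // overridden; the compiler rewrites the tail call as a loop and it
  // spins forever in constant stack space.
  def spin(n: Long): Long = spin(n + 1)

  def main(args: Array[String]): Unit = spin(0)
}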


This also results in an infinite loop:
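
For instance, the method left in a class but marked final (illustrative):

class FinalSpinner {
  // final rules out overriding, which is enough for the compiler to
  // turn the tail recursion into a loop.
  final def spin(n: Long): Long = spin(n + 1)
}

object LoopTest {
  def main(args: Array[String]): Unit = new FinalSpinner().spin(0)
}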
 
Marc Peabody
pie sneak
Posts: 4727
In the above code examples, infinite loop = tail call optimization. Without tail call optimization, the stack would eventually grow too large and a StackOverflowError would be thrown.

For further proof, here is a tweak that purposely throws a RuntimeException so we can see the resulting stack trace:
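
Roughly like this (illustrative; the method name and the ten-million cutoff match the description below):

object StackTraceTest {
  // Same tail-recursive shape as above, but it bails out with a
  // RuntimeException after ten million calls so the stack trace can
  // be inspected.
  def overflowStack(n: Int): Int =
    if (n == 10000000) throw new RuntimeException("reached call #" + n)
    else overflowStack(n + 1)

  def main(args: Array[String]): Unit = overflowStack(0)
}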

and the result:
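
Something of this shape (illustrative; file and line details will vary):

Exception in thread "main" java.lang.RuntimeException: reached call #10000000
        at StackTraceTest$.overflowStack(...)
        at StackTraceTest$.main(...)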

Here, only a single call to overflowStack() appears in the stack trace, even though the exception was thrown on recursive call #10000000.

I could have sworn I had seen something like this in POJ (Plain Old Java) in the past as well, but my quick little tests couldn't confirm it.
[ July 07, 2008: Message edited by: Marc Peabody ]
 
Garrett Rowe
Ranch Hand
Posts: 1296
Marc Peabody: I could have sworn I had seen something like this in POJ (Plain Old Java) in the past as well but my quick little tests couldn't confirm it.

I don't think the Sun compiler does this kind of optimization, but I believe there is an IBM Java compiler that can.
 
Marc Peabody
pie sneak
Posts: 4727

Originally posted by Garrett Rowe:
I don't think the Sun compiler does this kind of optimization, but I believe there is an IBM Java compiler that can.


That would explain it. I've used RAD a lot in the past.
 
Ranch Hand
Posts: 376
I would love to see tail call optimization added to the JVM!
 