
Thread: In what case is recursion more efficient?

  1. #1

In what case is recursion more efficient?

I'm curious to know how exactly recursion is more efficient. Take functions like these:

Given lists A = [1, 2, 3, 4, 5] and B = [5, 4, 3, 2, 1], calling vector_dot should give the int 1*5 + 2*4 + 3*3 + 4*2 + 5*1 = 35.

python Code:
def vector_dot(a, b):
    if len(a) == 1:
        return a[0] * b[0]
    else:
        return (a[0] * b[0]) + vector_dot(a[1:], b[1:])

def vector_dot(a, b):
    result = 0
    for x in range(len(a)):  # obviously a and b have to be the same length
        result += a[x] * b[x]
    return result

does it really matter which one you use? You end up getting the same answer, and it's about the same number of lines.
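For what it's worth, there's also a third way to write this in Python that sidesteps the whole question; a sketch using only built-ins (not from the thread):

```python
def vector_dot(a, b):
    # zip pairs up the elements of a and b; sum adds the products.
    # No recursion, no index bookkeeping, no extra stack frames.
    return sum(x * y for x, y in zip(a, b))

print(vector_dot([1, 2, 3, 4, 5], [5, 4, 3, 2, 1]))  # -> 35
```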

  2. #2


Iteration is more memory efficient.
If your lists are extremely long, your first function will raise a run-time error: RecursionError, maximum recursion depth exceeded.
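To see that failure mode concretely, here's a sketch using the OP's recursive version (copied as-is) on a list longer than the interpreter's frame limit:

```python
import sys

def vector_dot(a, b):
    # the OP's recursive version: one stack frame per element.
    if len(a) == 1:
        return a[0] * b[0]
    return a[0] * b[0] + vector_dot(a[1:], b[1:])

# a list comfortably longer than the interpreter's recursion limit
n = sys.getrecursionlimit() * 2
big = list(range(n))
try:
    vector_dot(big, big)
except RecursionError as e:
    print(e)  # "maximum recursion depth exceeded"
```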

Recursion can sometimes be much easier to write, though, e.g. given the factorial definition
    f(0) = 1
    f(n) = n * f(n-1)

    to translate that into code:
    Code:
    def factorial(n):
      if n == 0:
        return 1
      return n * factorial(n - 1)
It's like a direct translation; you don't even have to think.

  3. #3


Recursion is never more efficient than iteration, except in "maybe" ONE (or a rare few) case(s).

    First you need to know what recursion does at the lowest level to determine speed.


First, all functions have something called a prologue and an epilogue, and they look like this (I'll use Pascal with pseudo-assembly to make it easier to explain):
Simba Code:
function foo(): Integer; //name/address of your function.

//prologue:
push ebp     //save the caller's base pointer on the stack.
mov ebp, esp //point ebp at the top of the new stack frame.

//any local variables you create are now created on the stack
//(typically by subtracting their total size from esp).

//epilogue:
mov esp, ebp //discard the locals by restoring the stack pointer.
pop ebp      //restore the caller's base pointer.
ret


    This is how functions automatically clean up your local variables. Without the stack frame, for every local variable, you'd have to call "init" and "free" to create and destroy variables. Because of the stack frame, this sort of stuff is automatic (automatic storage).


When you use recursion, it does this:

Simba Code:
function foo(): Integer; //function address.

push ebp
mov ebp, esp

//do your calculations here

call foo //on some condition, call the function again. call pushes a
         //return address and re-runs the prologue, so every recursive
         //call builds a brand new stack frame on top of the old one.

//this only runs once the recursive chain is broken and each call returns.
mov esp, ebp
pop ebp
ret

Because of that call instruction, the stack is pushed down again and again and again (more and more prologues are created) until the function is finished (you break the recursive chain). Then it is popped and popped and popped again and again until it is back to its original state (epilogues).


If it is pushed down too much (prologues pile up), you get a stack overflow. Think of it as a dirty while loop that creates brand new variables (allocates memory) every time it loops and only frees them at the end of the function, one frame at a time.

    The iterative approach need only create one variable (for the for-loop counter).
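You can actually watch the frames pile up from inside Python; a small sketch (depth_probe is my name, not from the thread):

```python
import inspect

def depth_probe(n):
    # len(inspect.stack()) counts the live stack frames;
    # every recursive call adds exactly one more.
    if n == 0:
        return len(inspect.stack())
    return depth_probe(n - 1)

print(depth_probe(10) - depth_probe(0))  # -> 10 extra frames for 10 extra calls
```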

    ------------------------

So now comes the question: since it does so much work on each call, how can recursion ever be more efficient than (or even equivalent to) an iterative approach?

The answer is called "Tail Call Optimisation" (TCO), which applies to tail recursion.

Tail recursion is when the recursive call is the very last thing your function does, so it returns the result of that call directly. Instead of pushing the stack down more and more, the compiler can reuse the previous stack frame over and over until the function is finished, essentially allowing unbounded recursion without a stack overflow. It's as if the recursion never occurred.

That is the only time (or one of the only times) recursion is as fast as, or faster than, an optimised iterative approach, because the recursion boils down to a simple jump instruction. Otherwise, the answer is "almost" never.


    NOTE: Only a select few languages implement tail call optimisation.

Riwu's example is not quite tail recursion: the multiplication by n happens after the recursive call returns, so there is still work pending in each frame. Rewritten with an accumulator it becomes tail recursive, and then it only helps if the language actually performs the optimisation.
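A tail-recursive sketch of the factorial, using an accumulator parameter (acc is my addition, not from the thread) so that the recursive call really is the last operation. Note that CPython does not implement TCO, so in Python this still grows the stack:

```python
def factorial(n, acc=1):
    # the recursive call is the LAST thing that happens: there is no
    # pending multiplication waiting in this frame, so a TCO-capable
    # language could reuse the frame instead of pushing a new one.
    if n == 0:
        return acc
    return factorial(n - 1, acc * n)

print(factorial(5))  # -> 120
```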

    If you are in doubt, always choose the iterative approach!
    Last edited by Brandon; 02-27-2015 at 04:45 AM.
