Recursion is never more efficient than iteration, except in maybe ONE (or a rare few) case(s).
To understand why, you first need to know what recursion does at the lowest level.
All functions have something called a prologue and an epilogue, and they look like this.
I'll use Pascal to make it easier to explain:
Simba Code:
function foo(): Integer; //name/address of your function.
//prologue:
push ebp //save the caller's base pointer on the stack.
mov ebp, esp //point the base pointer at the current stack frame.
//any local variables you create are now created on the stack.
//epilogue:
mov esp, ebp //restore the stack pointer, discarding all local variables at once.
pop ebp //restore the caller's base pointer.
ret
This is how functions automatically clean up your local variables. Without the stack frame, you'd have to call "init" and "free" by hand for every local variable to create and destroy it. Because of the stack frame, all of this happens for free (automatic storage).
When you use recursion, it does this:
Simba Code:
function foo(): Integer; //function address.
push ebp
mov ebp, esp
//do your calculations here
call foo //the recursive call: pushes a return address and re-enters the prologue above.
//the epilogue below only runs once ALL the recursive calls above it have returned.
mov esp, ebp
pop ebp
ret
Because of that call instruction, the stack is pushed down again and again and again (more and more prologues run) until the recursion bottoms out (you break the recursive chain). Then frame after frame is popped again and again until the stack is back to its original state (the epilogues run).
If it is pushed down too far (prologues pile up), you get a stack overflow. Think of it as a dirty while loop that allocates brand new variables (memory) every time it loops and only frees them at the end of the function (one frame at a time).
The iterative approach only needs to create one variable (the for-loop counter).
------------------------
So now comes the question: since recursion does so much work on each call, how can it ever be more efficient than, or even equivalent to, an iterative approach?
The answer is called "Tail Call Optimisation" (TCO), which compilers can apply to tail-recursive functions.
Tail recursion is when the recursive call is the very last thing your function does, so its result is returned directly with nothing left to compute afterwards. Instead of pushing the stack down further and further, the compiler reuses the current stack frame over and over until the function is finished, essentially allowing unbounded recursion without a stack overflow. It's as if the recursion never occurred.
That is the only time (or one of the only times) recursion is as fast as, or faster than, an optimised iterative approach, because the recursion compiles down to a simple jump instruction. Otherwise, the answer is "almost" never.
NOTE: Only a select few languages (and compilers) guarantee tail call optimisation.
Riwu's example returns the result of the recursive call multiplied by n, which is not quite a tail call: the multiplication still happens after the call returns, so the frame has to stay alive. To get true tail recursion you fold that multiplication into an accumulator parameter, and even then it only helps if the language performs the optimisation.
If you are in doubt, always choose the iterative approach!