
Can anyone help me understand how this is different from calculating something iteratively?

I was reading SICP the other day, where they introduce the idea that a process is fundamentally iterative if you only ever need a few variables to keep track of the state of the computation.

So tail recursion is iterative, and this looks the same. So why is it called recursion?

I'm a bit confused about this.



Edit: Not "a few variables", I meant a single fixed set of variables.


With recursion, we can define something in terms of itself. For example, this is recursive:

    f(n) = f(n - 1) ++ f(n - 1)
Here I'm using "++" to denote list concatenation. Unfortunately this 'diverges', i.e. trying to calculate f(x) for any x will keep going forever. There are two ways around this. The first is generally just called "recursion", and it involves fixing a particular "base case", which cuts off the calculation before it becomes infinite and hence causes our function to terminate.

We can do this as follows:

    f(0) = [NULL]
Where "[NULL]" is a list containing NULL. Now we can calculate f(x) for any positive x:

    f(1) = f(0) ++ f(0)
         = [NULL] ++ [NULL]
         = [NULL, NULL]

    f(2) = f(1) ++ f(1)
         = [NULL, NULL] ++ [NULL, NULL]
         = [NULL, NULL, NULL, NULL]
And so on. This kind of code can consume a hell of a lot of resources, though, so we may prefer to write only a special kind of recursive function, called tail-recursive. Here's a tail-recursive function:

    g(n) = g(n-1)
    g(0) = [0]
This "g" function uses recursion in the same way as "f", but notice that we can switch from calculating g(n) straight to g(n-1), ie. we only transform "n", not g(...). This means we can calculate g in constant space, since we just transform n again and again. Compare this to "f", where we need to calculate f(n-1) separately, then combine the results. Also notice that we don't know anything about the value of "g(x)" until we've finished going down the chain and hit "g(0)".

The other way of preventing divergence is corecursion, which the linked article describes. Here, we must generate a little bit of output before we're allowed to recurse. For example:

    h(n) = cons(n, h(n+1))
This will not terminate, but it will give us some output; specifically:

    h(0) = cons(0, cons(1, cons(2, cons(3, ...))))
We can't "finish" calculating this list, but we don't need to in order to use its elements. We say this function "coterminates", and that it generates "codata".

One interesting fact about corecursion is that we can turn any diverging function into a coterminating one:

    i(n) = Later(i(n + 1))
This will give an infinite codata structure "Later(Later(Later(Later(...))))". We're effectively breaking down our calculation into steps; after each step, if we're finished we can just give back the result. If we're not finished, we give back a "Later" containing the result of the rest of the calculation.

This forms a really nice structure called the Partial Monad :)
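For the curious, here's a minimal Haskell sketch of that structure (my own spelling of it, with constructors "Now" and "Later"; it also goes by the names "delay monad" and "partiality monad"):

    -- Either a finished result, or one more step of pending calculation.
    data Partial a = Now a | Later (Partial a)

    instance Functor Partial where
      fmap f (Now x)   = Now (f x)
      fmap f (Later p) = Later (fmap f p)

    instance Applicative Partial where
      pure = Now
      Now f   <*> p = fmap f p
      Later q <*> p = Later (q <*> p)

    -- Binding just pushes the "Later"s outward, one step at a time.
    instance Monad Partial where
      Now x   >>= f = f x
      Later p >>= f = Later (p >>= f)

    -- The diverging "i" from above: infinitely many Laters, never a Now.
    i :: Int -> Partial a
    i n = Later (i (n + 1))

    -- Run a computation for at most k steps, peeling off Laters.
    runFor :: Int -> Partial a -> Maybe a
    runFor _ (Now x)   = Just x
    runFor 0 (Later _) = Nothing
    runFor k (Later p) = runFor (k - 1) p

This gives us a budgeted way to poke at a possibly-diverging calculation: runFor 100 (i 0) is Nothing, since i never produces a Now.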


Thanks for the explanation.



