#### [SOLVED] What is a Y-combinator?

A Y-combinator is a computer science concept from the “functional” side of things. Most programmers don't know much at all about combinators, if they've even heard of them.

• What is a Y-combinator?
• How do combinators work?
• What are they good for?
• Are they useful in procedural languages?

## Anonymous recursion

A fixed-point combinator is a higher-order function `fix` that by definition satisfies the equivalence

``````forall f.  fix f  =  f (fix f)
``````

`fix f` represents a solution `x` to the fixed-point equation

``````               x  =  f x
``````

The factorial of a natural number can be defined by

``````fact 0 = 1
fact n = n * fact (n - 1)
``````

Using `fix`, arbitrary constructive proofs over general/μ-recursive functions can be derived without named self-reference.

``````fact n = (fix fact') n
``````

where

``````fact' rec n = if n == 0
              then 1
              else n * rec (n - 1)
``````

such that

``````   fact 3
=  (fix fact') 3
=  fact' (fix fact') 3
=  if 3 == 0 then 1 else 3 * (fix fact') (3 - 1)
=  3 * (fix fact') 2
=  3 * fact' (fix fact') 2
=  3 * if 2 == 0 then 1 else 2 * (fix fact') (2 - 1)
=  3 * 2 * (fix fact') 1
=  3 * 2 * fact' (fix fact') 1
=  3 * 2 * if 1 == 0 then 1 else 1 * (fix fact') (1 - 1)
=  3 * 2 * 1 * (fix fact') 0
=  3 * 2 * 1 * fact' (fix fact') 0
=  3 * 2 * 1 * if 0 == 0 then 1 else 0 * (fix fact') (0 - 1)
=  3 * 2 * 1 * 1
=  6
``````

This formal proof that

``````fact 3  =  6
``````

methodically uses the fixed-point combinator equivalence for rewrites

``````fix fact'  ->  fact' (fix fact')
``````
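The defining equation `fix f = f (fix f)` can be transcribed almost literally into a strict language. The following JavaScript is a sketch of my own (the names `fix` and `factStep` are mine, not from the answer above); since JavaScript is eager, the recursive occurrence is eta-expanded behind `(...args)` so that `fix f` is not forced before it is needed:

```javascript
// fix f = f (fix f), eta-expanded for a strict language.
const fix = f => (...args) => f(fix(f))(...args);

// The non-recursive step function, mirroring fact' above:
// it receives "the rest of the recursion" as the parameter rec.
const factStep = rec => n => n === 0 ? 1 : n * rec(n - 1);

const fact = fix(factStep);
console.log(fact(3)); // 6
```

Note that this `fix` is itself defined by name, so it is a fixed-point *function* rather than a combinator; the `Y` section below shows how to eliminate even that self-reference.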

## Lambda calculus

The untyped lambda calculus formalism consists of a context-free grammar

``````E ::= v        Variable
   |  λ v. E   Abstraction
   |  E E      Application
``````

where `v` ranges over variables, together with the beta and eta reduction rules

``````(λ x. B) E  ->  B[x := E]                                Beta
λ x. E x    ->  E          if x doesn’t occur free in E  Eta
``````

Beta reduction substitutes all free occurrences of the variable `x` in the abstraction (“function”) body `B` by the expression (“argument”) `E`. Eta reduction eliminates redundant abstraction. It is sometimes omitted from the formalism. An irreducible expression, to which no reduction rule applies, is in normal or canonical form.
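Both rules have direct counterparts in everyday code. A small JavaScript illustration of my own (assuming arrow functions as stand-ins for abstractions):

```javascript
// Beta: applying an abstraction substitutes the argument into the body.
// (λ x. x * 2) 21  ->  21 * 2
const beta = (x => x * 2)(21); // 42

// Eta: λ x. f x is equivalent to f when x is not free in f.
const f = n => n + 1;
const etaExpanded = x => f(x); // behaves identically to f

console.log(beta, etaExpanded(1) === f(1)); // 42 true
```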

``````λ x y. E
``````

is shorthand for

``````λ x. λ y. E
``````

(abstraction multiarity),

``````E F G
``````

is shorthand for

``````(E F) G
``````

(application left-associativity),

``````λ x. x
``````

and

``````λ y. y
``````

are alpha-equivalent.

Abstraction and application are the only two “language primitives” of the lambda calculus, but they allow encoding of arbitrarily complex data and operations.

The Church numerals are an encoding of the natural numbers similar to the Peano-axiomatic naturals.

``````0     =  λ f x. x                 No application
1     =  λ f x. f x               One application
2     =  λ f x. f (f x)           Twofold
3     =  λ f x. f (f (f x))       Threefold
. . .

SUCC  =  λ n f x. f (n f x)       Successor
ADD   =  λ n m f x. n f (m f x)   Addition
MULT  =  λ n m f x. n (m f) x     Multiplication
. . .
``````
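The Church encodings above transcribe directly into arrow functions. A sketch of my own (the decoder `toInt`, which counts applications, is my addition):

```javascript
// Church numerals: a number n is "apply f n times to x".
const ZERO = f => x => x;
const SUCC = n => f => x => f(n(f)(x));
const ADD  = n => m => f => x => n(f)(m(f)(x));
const MULT = n => m => f => x => n(m(f))(x);

// Decode a Church numeral by counting applications.
const toInt = n => n(k => k + 1)(0);

const ONE   = SUCC(ZERO);
const TWO   = SUCC(ONE);
const THREE = SUCC(TWO);

console.log(toInt(ADD(ONE)(TWO)));    // 3
console.log(toInt(MULT(TWO)(THREE))); // 6
```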

A formal proof that

``````1 + 2  =  3
``````

using the rewrite rule of beta reduction:

``````   ADD                      1            2
=  (λ n m f x. n f (m f x)) (λ g y. g y) (λ h z. h (h z))
=  (λ m f x. (λ g y. g y) f (m f x)) (λ h z. h (h z))
=  (λ m f x. (λ y. f y) (m f x)) (λ h z. h (h z))
=  (λ m f x. f (m f x)) (λ h z. h (h z))
=  λ f x. f ((λ h z. h (h z)) f x)
=  λ f x. f ((λ z. f (f z)) x)
=  λ f x. f (f (f x))                                       Normal form
=  3
``````

## Combinators

In lambda calculus, combinators are abstractions that contain no free variables. Most simply: `I`, the identity combinator

``````λ x. x
``````

isomorphic to the identity function

``````id x = x
``````

Such combinators are the primitive operators of combinator calculi like the SKI system.

``````S  =  λ x y z. x z (y z)
K  =  λ x y. x
I  =  λ x. x
``````
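The SKI primitives are equally easy to play with in code. A sketch of my own, including the classic check that `S K K` behaves as the identity:

```javascript
const S = x => y => z => x(z)(y(z));
const K = x => y => x;
const I = x => x;

// S K K a  =  K a (K a)  =  a, so S K K acts as I.
console.log(S(K)(K)(42)); // 42
console.log(I(42));       // 42
```

This is why `I` is redundant in the SK basis: it can always be rebuilt from `S` and `K`.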

Beta reduction is not strongly normalizing; not all reducible expressions, “redexes”, converge to normal form under beta reduction. A simple example is divergent application of the omega `ω` combinator

``````λ x. x x
``````

to itself:

``````   (λ x. x x) (λ y. y y)
=  (λ y. y y) (λ y. y y)
. . .
=  _|_                     Bottom
``````

Reduction of leftmost subexpressions (“heads”) is prioritized. Applicative order normalizes arguments before substitution, normal order does not. The two strategies are analogous to eager evaluation, e.g. C, and lazy evaluation, e.g. Haskell.

``````   K          (I a)        (ω ω)
=  (λ k l. k) ((λ i. i) a) ((λ x. x x) (λ y. y y))
``````

diverges under eager applicative-order beta reduction

``````=  (λ k l. k) a ((λ x. x x) (λ y. y y))
=  (λ l. a) ((λ x. x x) (λ y. y y))
=  (λ l. a) ((λ y. y y) (λ y. y y))
. . .
=  _|_
``````

since in strict semantics

``````forall f.  f _|_  =  _|_
``````

but converges under lazy normal-order beta reduction

``````=  (λ l. ((λ i. i) a)) ((λ x. x x) (λ y. y y))
=  (λ l. a) ((λ x. x x) (λ y. y y))
=  a
``````

If an expression has a normal form, normal-order beta reduction will find it.
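The `K (I a) (ω ω)` example can be simulated in a strict language by wrapping arguments in thunks, which is how lazy (normal-order) evaluation is usually emulated. A sketch of my own (the names are mine); note the divergent term is only ever built as a thunk, never forced:

```javascript
const omega = x => x(x); // ω; omega(omega) would overflow the stack

// A "lazy" K: arguments arrive as thunks, and only the kept one is forced.
const lazyK = k => _l => k();

// Under strict evaluation, K a (ω ω) would diverge while evaluating ω ω.
// Under this thunked (normal-order-like) scheme, the divergent argument
// is discarded unevaluated:
const result = lazyK(() => "a")(() => omega(omega));
console.log(result); // "a"
```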

## Y

The essential property of the `Y` fixed-point combinator

``````λ f. (λ x. f (x x)) (λ x. f (x x))
``````

is given by

``````   Y g
=  (λ f. (λ x. f (x x)) (λ x. f (x x))) g
=  (λ x. g (x x)) (λ x. g (x x))           =  Y g
=  g ((λ x. g (x x)) (λ x. g (x x)))       =  g (Y g)
=  g (g ((λ x. g (x x)) (λ x. g (x x))))   =  g (g (Y g))
. . .                                      . . .
``````

The equivalence

``````Y g  =  g (Y g)
``````

is isomorphic to

``````fix f  =  f (fix f)
``````

The untyped lambda calculus can encode arbitrary constructive proofs over general/μ-recursive functions.

`````` FACT  =  λ n. Y FACT' n
FACT'  =  λ rec n. if n == 0 then 1 else n * rec (n - 1)

FACT 3
=  (λ n. Y FACT' n) 3
=  Y FACT' 3
=  FACT' (Y FACT') 3
=  if 3 == 0 then 1 else 3 * (Y FACT') (3 - 1)
=  3 * (Y FACT') (3 - 1)
=  3 * FACT' (Y FACT') 2
=  3 * if 2 == 0 then 1 else 2 * (Y FACT') (2 - 1)
=  3 * 2 * (Y FACT') 1
=  3 * 2 * FACT' (Y FACT') 1
=  3 * 2 * if 1 == 0 then 1 else 1 * (Y FACT') (1 - 1)
=  3 * 2 * 1 * (Y FACT') 0
=  3 * 2 * 1 * FACT' (Y FACT') 0
=  3 * 2 * 1 * if 0 == 0 then 1 else 0 * (Y FACT') (0 - 1)
=  3 * 2 * 1 * 1
=  6
``````

(Multiplication is delayed until the recursion bottoms out; by confluence, the result is the same.)
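The `FACT` derivation can be run in an eager language by using a strict-safe, eta-expanded variant of `Y` (the `Z` combinator listed among the other fixed-point combinators below). A JavaScript sketch of my own:

```javascript
// Z = λ f. (λ x. f (λ v. x x v)) (λ x. f (λ v. x x v))
const Z = f => (x => f(v => x(x)(v)))(x => f(v => x(x)(v)));

// FACT' = λ rec n. if n == 0 then 1 else n * rec (n - 1)
const FACT_STEP = rec => n => n === 0 ? 1 : n * rec(n - 1);

const FACT = Z(FACT_STEP);
console.log(FACT(3)); // 6
```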

For Church's untyped lambda calculus, it has been shown that there exists a recursively enumerable infinity of fixed-point combinators besides `Y`.

`````` X  =  λ f. (λ x. x x) (λ x. f (x x))
Y'  =  (λ x y. x y x) (λ y x. y (x y x))
Z  =  λ f. (λ x. f (λ v. x x v)) (λ x. f (λ v. x x v))
Θ  =  (λ x y. y (x x y)) (λ x y. y (x x y))
. . .
``````
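Any of these satisfies the same fixed-point equivalence. As a spot check, here is Turing's `Θ` in JavaScript; the eta-expansion (`v => ... (v)`) is my addition, needed because JavaScript evaluates eagerly:

```javascript
// Θ = (λ x y. y (x x y)) (λ x y. y (x x y)), eta-expanded for strictness.
const Theta = (x => y => y(v => x(x)(y)(v)))
              (x => y => y(v => x(x)(y)(v)));

const fact = Theta(rec => n => n === 0 ? 1 : n * rec(n - 1));
console.log(fact(5)); // 120
```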

Normal-order beta reduction makes the unextended untyped lambda calculus a Turing-complete rewrite system.

In Haskell, the fixed-point combinator can be elegantly implemented

``````fix :: forall t. (t -> t) -> t
fix f = f (fix f)
``````

Haskell’s laziness can normalize an expression to a finite result before all subexpressions have been evaluated.

``````primes :: Integral t => [t]
primes = sieve [2 ..]
  where
    sieve = fix (\ rec (p : ns) ->
              p : rec [n | n <- ns
                         , n `rem` p /= 0])
``````

#### @Jared Smith 2017-10-06 12:48:38

While I appreciate the thoroughness of the answer, it is in no way approachable to a programmer with little formal math background after the first line break.

#### @user6428287 2017-10-06 14:31:02

@jared-smith The answer is meant to tell a supplementary Wonkaian story about the CS/math notions behind the Y combinator. I think that, probably, the best possible analogies to familiar concepts have been drawn already by other answerers. Personally, I have always liked to be confronted with the true origin, the radical novelty of an idea, over a nice analogy. I find most broad analogies inappropriate and confusing.

#### @MaiaVictor 2017-12-12 03:43:24

Hello, identity combinator `λ x . x`, how are you today?

#### @Student 2020-05-12 00:41:20

I like this answer the most. It just cleared all of my questions!

#### @user8554766 2018-03-22 09:11:58

Here are answers to the original questions, compiled from the article (which is TOTALLY worth reading) mentioned in the answer by Nicholas Mancuso, as well as from other answers:

What is a Y-combinator?

A Y-combinator is a "functional" (a higher-order function, that is, a function that operates on other functions) that takes a single argument, which is a function that isn't recursive, and returns a version of that function which is recursive.

Somewhat recursive =), but here is a more in-depth definition:

• A combinator is just a lambda expression with no free variables.
• A free variable is a variable that is not a bound variable.
• A bound variable is a variable contained inside the body of a lambda expression that has that variable name as one of its arguments.

Another way to think about this is that a combinator is a lambda expression in which you can replace the name of the combinator with its definition everywhere it appears and have everything still work (you would get into an infinite loop if the combinator contained a reference to itself inside the lambda body).

Y-combinator is a fixed-point combinator.

• A fixed point of a function is an element of the function's domain that is mapped to itself by the function.
• That is to say, `c` is a fixed point of the function `f(x)` if `f(c) = c`.
• This means `f(f(...f(c)...)) = f^n(c) = c`.
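The fixed-point definition is easy to check numerically. A small illustration of my own (not from the article): repeatedly applying `Math.cos` converges to its fixed point, the value `c` with `cos(c) = c` (approximately 0.739):

```javascript
// Iterate cos until it settles on its fixed point.
let c = 1;
for (let i = 0; i < 100; i++) {
    c = Math.cos(c);
}
// At the fixed point, applying the function changes nothing.
console.log(Math.abs(Math.cos(c) - c) < 1e-6); // true
```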

How do combinators work?

Examples below assume strong + dynamic typing:

Lazy (normal-order) Y-combinator:
This definition applies to languages with lazy (also: deferred, call-by-need) evaluation — evaluation strategy which delays the evaluation of an expression until its value is needed.

``````Y = λf.(λx.f(x x)) (λx.f(x x)) = λf.(λx.(x x)) (λx.f(x x))
``````

What this means is that, for a given function `f` (which is a non-recursive function), the corresponding recursive function can be obtained first by computing `λx.f(x x)`, and then applying this lambda expression to itself.

Strict (applicative-order) Y-combinator:
This definition applies to languages with strict (also: eager, greedy) evaluation — evaluation strategy in which an expression is evaluated as soon as it is bound to a variable.

``````Y = λf.(λx.f(λy.((x x) y))) (λx.f(λy.((x x) y))) = λf.(λx.(x x)) (λx.f(λy.((x x) y)))
``````

It is the same as the lazy one in nature; it just has extra `λ` wrappers to delay evaluation of the lambda's body. I've asked another question, somewhat related to this topic.
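The difference between the two forms is observable in any strict language. A JavaScript sketch of my own: the lazy-form `Y` overflows the stack because `x x` is evaluated eagerly, while the strict-form `Z` works because the extra `λ` delays that evaluation:

```javascript
// Lazy form: diverges under eager evaluation, since x(x) is forced at once.
const Y = f => (x => f(x(x)))(x => f(x(x)));

let failed = false;
try {
    Y(rec => n => n === 0 ? 1 : n * rec(n - 1));
} catch (e) {
    failed = e instanceof RangeError; // "Maximum call stack size exceeded"
}

// Strict form: x(x) is hidden behind an extra lambda (y => ...).
const Z = f => (x => f(y => x(x)(y)))(x => f(y => x(x)(y)));
const fact = Z(rec => n => n === 0 ? 1 : n * rec(n - 1));

console.log(failed, fact(5)); // true 120
```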

What are they good for?

~~Stolen~~ Borrowed from the answer by Chris Ammerman: the Y-combinator generalizes recursion, abstracting its implementation and thereby separating it from the actual work of the function in question.

Even though the Y-combinator has some practical applications, it is mainly a theoretical concept, understanding of which will expand your overall vision and will likely improve your analytical and developer skills.

Are they useful in procedural languages?

As stated by Mike Vanier: it is possible to define a Y combinator in many statically typed languages, but (at least in the examples I've seen) such definitions usually require some non-obvious type hackery, because the Y combinator itself doesn't have a straightforward static type. That's beyond the scope of this article, so I won't mention it further.

And as mentioned by Chris Ammerman: most procedural languages have static typing.

So the answer to this one: not really.

#### @Dapeng Li 2017-07-11 13:46:00

As a newbie to combinators, I found Mike Vanier's article (thanks Nicholas Mancuso) to be really helpful. I would like to write a summary, besides documenting my understanding; if it could be of help to some others, I would be very glad.

### From Crappy to Less Crappy

Using factorial as an example, we use the following `almost-factorial` function to calculate factorial of number `x`:

``````def almost-factorial f x = if iszero x
                           then 1
                           else * x (f (- x 1))
``````

In the pseudo-code above, `almost-factorial` takes in function `f` and number `x` (`almost-factorial` is curried, so it can be seen as taking in function `f` and returning a 1-arity function).

When `almost-factorial` calculates factorial for `x`, it delegates the calculation of factorial for `x - 1` to function `f` and accumulates that result with `x` (in this case, it multiplies the factorial of `x - 1` by `x`).

`almost-factorial` can be seen as taking in a crappy version of the factorial function (which can only calculate up to number `x - 1`) and returning a less-crappy version of factorial (which calculates up to number `x`). As in this form:

``````almost-factorial crappy-f = less-crappy-f
``````

If we repeatedly pass the less-crappy version of factorial back to `almost-factorial`, we will eventually get our desired factorial function `f`, at which point it can be considered as:

``````almost-factorial f = f
``````
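The "repeatedly pass it back" idea can be made concrete. My own transcription to JavaScript: each application of `almostFactorial` turns a function that works up to `x - 1` into one that works up to `x`, starting from a maximally crappy function that works nowhere:

```javascript
const almostFactorial = f => x => x === 0 ? 1 : x * f(x - 1);

// The crappiest possible starting point: works for no input at all.
const crappy = _x => { throw new Error("cannot recurse any deeper"); };

// Ten improvement rounds yield a function correct for x <= 9.
let f = crappy;
for (let i = 0; i < 10; i++) {
    f = almostFactorial(f); // less and less crappy
}

console.log(f(5)); // 120
```

`Y` is what lets us take this process to its limit without a fixed iteration count.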

### Fix-point

The fact that `almost-factorial f = f` means `f` is the fix-point of function `almost-factorial`.

This was a really interesting way of seeing the relationships of the functions above and it was an aha moment for me. (please read Mike's post on fix-point if you haven't)

### Three functions

To generalize, we have a non-recursive function `fn` (like our almost-factorial), we have its fix-point function `fr` (like our f), then what `Y` does is when you give `Y` `fn`, `Y` returns the fix-point function of `fn`.

So in summary (simplified by assuming `fr` takes only one parameter; `x` degenerates to `x - 1`, `x - 2`... in recursion):

• We define the core calculations as `fn`: `def fn fr x = ...accumulate x with result from (fr (- x 1))`, this is the almost-useful function - although we cannot use `fn` directly on `x`, it will be useful very soon. This non-recursive `fn` uses a function `fr` to calculate its result
• `fn fr = fr`: `fr` is the fix-point of `fn`; `fr` is the useful function, and we can use `fr` on `x` to get our result
• `Y fn = fr`, `Y` returns the fix-point of a function, `Y` turns our almost-useful function `fn` into useful `fr`

### Deriving `Y` (not included)

I will skip the derivation of `Y` and go to understanding `Y`. Mike Vanier's post has a lot of details.

### The form of `Y`

`Y` is defined as (in lambda calculus format):

``````Y f = λs.(f (s s)) λs.(f (s s))
``````

If we substitute the right-hand function for the variable `s` in the left-hand function, we get

``````Y f = λs.(f (s s)) λs.(f (s s))
=> f (λs.(f (s s)) λs.(f (s s)))
=> f (Y f)
``````

So indeed, the result of `(Y f)` is the fix-point of `f`.

### Why does `(Y f)` work?

Depending on the signature of `f`, `(Y f)` can be a function of any arity; to simplify, let's assume `(Y f)` takes only one parameter, like our factorial function.

``````def fn fr x = accumulate x (fr (- x 1))
``````

since `fn fr = fr`, we continue

``````=> accumulate x (fn fr (- x 1))
=> accumulate x (accumulate (- x 1) (fr (- x 2)))
=> accumulate x (accumulate (- x 1) (accumulate (- x 2) ... (fn fr 1)))
``````

The recursive calculation terminates when the innermost `(fn fr 1)` hits the base case, in which `fn` doesn't use `fr` in the calculation.

Looking at `Y` again:

``````fr = Y fn = λs.(fn (s s)) λs.(fn (s s))
=> fn (λs.(fn (s s)) λs.(fn (s s)))
``````

So

``````fr x = Y fn x = fn (λs.(fn (s s)) λs.(fn (s s))) x
``````

To me, the magical parts of this setup are:

• `fn` and `fr` interdepend on each other: `fr` 'wraps' `fn` inside, every time `fr` is used to calculate `x`, it 'spawns' ('lifts'?) an `fn` and delegates the calculation to that `fn` (passing in itself `fr` and `x`); on the other hand, `fn` depends on `fr` and uses `fr` to calculate result of a smaller problem `x-1`.
• At the time `fr` is used to define `fn` (when `fn` uses `fr` in its operations), the real `fr` is not yet defined.
• It's `fn` which defines the real business logic. Based on `fn`, `Y` creates `fr` - a helper function in a specific form - to facilitate the calculation for `fn` in a recursive manner.

This is how I understand `Y` at the moment; I hope it helps.

BTW, I also found the book An Introduction to Functional Programming Through Lambda Calculus very good. I'm only part way through it, and the fact that I couldn't get my head around `Y` in the book led me to this post.

#### @Tires 2017-05-02 16:04:20

The `this` operator can simplify your life:

``````var Y = function(f) {
    return (function(g) {
        return g(g);
    })(function(h) {
        return function() {
            return f.apply(h(h), arguments);
        };
    });
};
``````

Then you avoid the extra function:

``````var fac = Y(function(n) {
    return n == 0 ? 1 : n * this(n - 1);
});
``````

Finally, you call `fac(5)`.

#### @zumalifeguard 2016-07-23 21:09:24

I think the best way to answer this is to pick a language, like JavaScript:

``````function factorial(num)
{
    // If the number is less than 0, reject it.
    if (num < 0) {
        return -1;
    }
    // If the number is 0, its factorial is 1.
    else if (num == 0) {
        return 1;
    }
    // Otherwise, call this recursive procedure again.
    else {
        return (num * factorial(num - 1));
    }
}
``````

Now rewrite it so that it doesn't use the name of the function inside the function, but still calls it recursively.

The only place the function name `factorial` should be seen is at the call site.

Hint: you can't use names of functions, but you can use names of parameters.

Work the problem. Don't look it up. Once you solve it, you will understand what problem the y-combinator solves.

#### @Noctis Skytower 2016-10-28 19:25:53

Are you sure it does not create more problems than it solves?

#### @zumalifeguard 2016-10-29 05:06:36

Noctis, can you clarify your question? Are you asking whether the concept of a y-combinator itself creates more problems than it solves, or are you talking specifically about my choice to demonstrate using JavaScript, or my specific implementation, or my recommendation to learn it by discovering it yourself as I described?

#### @El Zorko 2015-06-07 10:02:24

For programmers who haven't encountered functional programming in depth, and don't care to start now, but are mildly curious:

The Y combinator is a formula which lets you implement recursion in a situation where functions can't have names but can be passed around as arguments, used as return values, and defined within other functions.

It works by passing the function to itself as an argument, so it can call itself.

It's part of the lambda calculus, which is really maths but is effectively a programming language, and is pretty fundamental to computer science and especially to functional programming.

The day to day practical value of the Y combinator is limited, since programming languages tend to let you name functions.

In case you need to identify it in a police lineup, it looks like this:

Y = λf.(λx.f (x x)) (λx.f (x x))

You can usually spot it because of the repeated `(λx.f (x x))`.

The `λ` symbols are the Greek letter lambda, which gives the lambda calculus its name, and there's a lot of `(λx.t)` style terms because that's what the lambda calculus looks like.

#### @Will Ness 2018-08-27 10:57:10

this should be the accepted answer. BTW, with `U x = x x`, `Y = U . (. U)` (abusing Haskell-like notation). IOW, with proper combinators, `Y = BU(CBU)`. Thus, `Yf = U (f . U) = (f . U) (f . U) = f (U (f . U)) = f ((f . U) (f . U))`.

#### @z5h 2009-07-03 03:05:25

I have written a sort of "idiots guide" to the Y-Combinator in both Clojure and Scheme in order to help myself come to grips with it. They are influenced by material in "The Little Schemer".

In Scheme: https://gist.github.com/z5h/238891

or Clojure: https://gist.github.com/z5h/5102747

Both tutorials are code interspersed with comments and should be cut & pastable into your favourite editor.

#### @Wayne 2011-07-15 23:05:33

I wonder if there's any use in attempting to build this from the ground up. Let's see. Here's a basic, recursive factorial function:

``````function factorial(n) {
    return n == 0 ? 1 : n * factorial(n - 1);
}
``````

Let's refactor and create a new function called `fact` that returns an anonymous factorial-computing function instead of performing the calculation itself:

``````function fact() {
    return function(n) {
        return n == 0 ? 1 : n * fact()(n - 1);
    };
}

var factorial = fact();
``````

That's a little weird, but there's nothing wrong with it. We're just generating a new factorial function at each step.

The recursion at this stage is still fairly explicit. The `fact` function needs to be aware of its own name. Let's parameterize the recursive call:

``````function fact(recurse) {
    return function(n) {
        return n == 0 ? 1 : n * recurse(n - 1);
    };
}

function recurser(x) {
    return fact(recurser)(x);
}

var factorial = fact(recurser);
``````

That's great, but `recurser` still needs to know its own name. Let's parameterize that, too:

``````function recurser(f) {
    return fact(function(x) {
        return f(f)(x);
    });
}

var factorial = recurser(recurser);
``````

Now, instead of calling `recurser(recurser)` directly, let's create a wrapper function that returns its result:

``````function Y() {
    return (function(f) {
        return f(f);
    })(recurser);
}

var factorial = Y();
``````

We can now get rid of the `recurser` name altogether; it's just an argument to Y's inner function, which can be replaced with the function itself:

``````function Y() {
    return (function(f) {
        return f(f);
    })(function(f) {
        return fact(function(x) {
            return f(f)(x);
        });
    });
}

var factorial = Y();
``````

The only external name still referenced is `fact`, but it should be clear by now that that's easily parameterized, too, creating the complete, generic, solution:

``````function Y(le) {
    return (function(f) {
        return f(f);
    })(function(f) {
        return le(function(x) {
            return f(f)(x);
        });
    });
}

var factorial = Y(function(recurse) {
    return function(n) {
        return n == 0 ? 1 : n * recurse(n - 1);
    };
});
``````

#### @Pops 2012-05-02 06:08:44

A similar explanation in JavaScript: igstan.ro/posts/…

#### @Mörre 2015-12-17 15:25:28

You lost me when you introduced the function `recurser`. Not the slightest idea what it's doing, or why.

#### @Wayne 2015-12-17 19:00:48

We're trying to build up to a generic recursive solution for functions that aren't explicitly recursive. The `recurser` function is the first step toward this goal, because it gives us a recursive version of `fact` that never references itself by name.

#### @neevek 2018-03-23 09:34:29

@WayneBurkett, Can I rewrite the Y combinator like this: `function Y(recurse) { return recurse(recurse); } let factorial = Y(creator => value => { return value == 0 ? 1 : value * creator(creator)(value - 1); });`. And this is how I digest it (not sure if it is correct): by not explicitly referencing the function (not allowed in a combinator), we can use two partially applied/curried functions (a creator function and the calculating function), with which we can create lambda/anonymous functions that achieve recursion without the need for a name for the calculating function?

#### @Chris Ammerman 2008-09-18 16:15:48

A Y-combinator is a "functional" (a function that operates on other functions) that enables recursion, when you can't refer to the function from within itself. In computer-science theory, it generalizes recursion, abstracting its implementation, and thereby separating it from the actual work of the function in question. The benefit of not needing a compile-time name for the recursive function is sort of a bonus. =)

This is applicable in languages that support lambda functions. The expression-based nature of lambdas usually means that they cannot refer to themselves by name. And working around this by declaring a variable, referring to it, then assigning the lambda to it to complete the self-reference loop is brittle. The lambda variable can be copied and the original variable re-assigned, which breaks the self-reference.

Y-combinators are cumbersome to implement, and often to use, in statically typed languages (which procedural languages often are), because usually typing restrictions require the number of arguments for the function in question to be known at compile time. This means that a y-combinator must be written for each argument count one needs to use.

Below is an example of the usage and working of a Y-combinator in C#.

Using a Y-combinator involves an "unusual" way of constructing a recursive function. First you must write your function as a piece of code that calls a pre-existing function, rather than itself:

``````// Factorial, if func does the same thing as this bit of code...
x == 0 ? 1: x * func(x - 1);
``````

Then you turn that into a function that takes a function to call, and returns a function that does so. This is called a functional, because it takes one function, and performs an operation with it that results in another function.

``````// A function that creates a factorial, but only if you pass in
// a function that does what the inner function is doing.
Func<Func<Double, Double>, Func<Double, Double>> fact =
    (recurs) =>
        (x) =>
            x == 0 ? 1 : x * recurs(x - 1);
``````

Now you have a function that takes a function, and returns another function that sort of looks like a factorial, but instead of calling itself, it calls the argument passed into the outer function. How do you make this the factorial? Pass the inner function to itself. The Y-Combinator does that, by being a function with a permanent name, which can introduce the recursion.

``````// One-argument Y-Combinator.
public static Func<T, TResult> Y<T, TResult>(Func<Func<T, TResult>, Func<T, TResult>> F)
{
    return
        t =>         // A function that...
            F(       // Calls the factorial creator, passing in...
                Y(F) // The result of this same Y-combinator function call...
                     // (Here is where the recursion is introduced.)
            )
            (t);     // And passes the argument into the work function.
}
``````

Rather than the factorial calling itself, what happens is that the factorial calls the factorial generator (returned by the recursive call to Y-Combinator). And depending on the current value of t the function returned from the generator will either call the generator again, with t - 1, or just return 1, terminating the recursion.

It's complicated and cryptic, but it all shakes out at run-time, and the key to its working is "deferred execution", and the breaking up of the recursion to span two functions. The inner F is passed as an argument, to be called in the next iteration, only if necessary.

#### @Brian Henk 2011-07-15 23:55:37

Why oh why did you have to call it 'Y' and the parameter 'F'! They just get lost in the type arguments!

#### @Peaker 2011-07-17 14:35:45

In Haskell, you can abstract recursion with `fix :: (a -> a) -> a`, and the `a` can in turn be a function of as many arguments as you'd like. This means that static typing doesn't really make this cumbersome.

#### @GrantJ 2011-07-18 00:02:06

According to Mike Vanier's description, your definition for Y is actually not a combinator because it's recursive. Under "Eliminating (most) explicit recursion (lazy version)" he has the lazy Scheme equivalent of your C# code but explains in point 2: "It is not a combinator, because the Y in the body of the definition is a free variable which is only bound once the definition is complete..." I think the cool thing about Y-combinators is that they produce recursion by evaluating the fixed-point of a function. In this way, they don't need explicit recursion.

#### @Chris Ammerman 2011-07-18 12:40:55

@GrantJ You make a good point. It's been a couple years since I posted this answer. Skimming Vanier's post now I see that I've written Y, but not a Y-Combinator. I'll read his post again soon and see if I can post a correction. My gut is warning me that the strict static typing of C# may prevent it in the end, but I'll see what I can do.

#### @Wayne 2012-05-02 15:10:44

I have never seen the word "functional" used as a noun in this way. Can you point to any other source that uses the term similarly?

#### @Tony 2013-01-19 22:20:40

Here is another great explanation that can complement this very great one: blogs.msdn.com/b/wesdyer/archive/2007/02/02/…

#### @Benjol 2014-04-28 04:50:23

If I'm understanding this answer to my question correctly, you can't do a Y-combinator in C# because it wants to evaluate parameters before calling functions.

#### @YoTengoUnLCD 2016-05-18 03:26:07

@WayneBurkett It's a pretty common practice in mathematics.

#### @oldrinb 2018-02-24 23:47:38

@YoTengoUnLCD: not quite -- functionals are those that map from function spaces to some field of scalars, not just general-purpose maps of functions to functions.

#### @Dathan 2018-07-13 16:48:38

I think Mads Torgersen correctly implements the Y combinator in C# at blogs.msdn.microsoft.com/madst/2007/05/11/…

#### @Jørgen Fogh 2011-07-16 18:55:25

Most of the answers above describe what the Y-combinator is but not what it is for.

Fixed-point combinators are used to show that the lambda calculus is Turing complete. This is a very important result in the theory of computation and provides a theoretical foundation for functional programming.

Studying fixed point combinators has also helped me really understand functional programming. I have never found any use for them in actual programming though.

#### @Jon Davis 2013-06-05 23:00:54

A Y-Combinator is another name for a flux capacitor.

#### @Will Ness 2013-06-07 04:18:59

very funny. :) young(er) ones might not recognize the reference though.

#### @user3917838 2015-12-09 01:58:55

haha! Yep, the young one (me) can still understand...

#### @Saw Thinkar Nay Htoo 2018-06-17 13:11:52

I thought it was real and I ended up here. youtube.com/watch?v=HyWqxkaQpPw Recent Article futurism.com/scientists-made-real-life-flux-capacitor

#### @mike 2020-01-22 15:13:32

I think this answer could be especially confusing for non-English speakers. One might devote quite some time to understanding this claim before (or never) realizing that it is a humorous popular-culture reference. (I like it, I just would feel bad if I had answered this and discovered that someone learning was discouraged by it.)

#### @Andrew 2011-07-16 03:37:43

The y-combinator implements anonymous recursion. So instead of

``````function fib( n ){ if( n<=1 ) return n; else return fib(n-1)+fib(n-2) }
``````

you can do

``````function ( fib, n ){ if( n<=1 ) return n; else return fib(n-1)+fib(n-2) }
``````

of course, the y-combinator only works in call-by-name languages. If you want to use this in any normal call-by-value language, then you will need the related z-combinator (the y-combinator will diverge/infinite-loop).

#### @Quelklef 2019-01-06 23:17:42

The Y combinator can work with pass-by-value and lazy evaluation.

#### @xgMz 2010-12-27 05:28:32

Here is a JavaScript implementation of the Y-Combinator and the Factorial function (from Douglas Crockford's article, available at: http://javascript.crockford.com/little.html).

``````function Y(le) {
    return (function (f) {
        return f(f);
    }(function (f) {
        return le(function (x) {
            return f(f)(x);
        });
    }));
}

var factorial = Y(function (fac) {
    return function (n) {
        return n <= 2 ? n : n * fac(n - 1);
    };
});

var number120 = factorial(5);
``````

#### @Zach 2008-09-18 15:27:32

y-combinator in JavaScript:

``````var Y = function(f) {
    return (function(g) {
        return g(g);
    })(function(h) {
        return function() {
            return f(h(h)).apply(null, arguments);
        };
    });
};

var factorial = Y(function(recurse) {
    return function(x) {
        return x == 0 ? 1 : x * recurse(x - 1);
    };
});

factorial(5)  // -> 120
``````

Edit: I learn a lot from looking at code, but this one is a bit tough to swallow without some background - sorry about that. With some general knowledge presented by other answers, you can begin to pick apart what is happening.

The Y function is the "y-combinator". Now take a look at the `var factorial` line where Y is used. Notice you pass a function to it that has a parameter (in this example, `recurse`) that is also used later on in the inner function. The parameter name basically becomes the name of the inner function, allowing it to perform a recursive call (since it uses `recurse()` in its definition). The y-combinator performs the magic of associating the otherwise anonymous inner function with the parameter name of the function passed to Y.

For the full explanation of how Y does the magic, check out the linked article (not by me, by the way). #### @xitrium 2010-09-30 01:09:04

JavaScript doesn't need a Y-combinator for anonymous recursion because you can access the current function with `arguments.callee` (see en.wikipedia.org/wiki/…) #### @dave1010 2011-05-16 09:52:25

`arguments.callee` is not allowed in Strict Mode: developer.mozilla.org/en/JavaScript/… #### @Esailija 2012-08-07 16:50:10

You can still give any function a name, and if it's a function expression then that name is only known inside the function itself. `(function fact(n){ return n <= 1? 1 : n * fact(n-1); })(5)` #### @VoronoiPotato 2016-08-01 17:12:22

except in IE. kangax.github.io/nfe #### @btilly 2011-07-15 21:39:20

I've lifted this from http://www.mail-archive.com/[email protected]/msg02716.html which is an explanation I wrote several years ago.

I'll use JavaScript in this example, but many other languages will work as well.

Our goal is to be able to write a recursive function of 1 variable using only functions of 1 variable and no assignments, no defining things by name, etc. (Why this is our goal is another question; let's just take it as the challenge we're given.) Seems impossible, huh? As an example, let's implement factorial.

Well, step 1 is to say that we could do this easily if we cheated a little. Using functions of 2 variables and assignment, we can at least avoid having to use assignment to set up the recursion.

``````// Here's the function that we want to recurse.
X = function (recurse, n) {
    if (0 == n)
        return 1;
    else
        return n * recurse(recurse, n - 1);
};

// This will get X to recurse.
Y = function (builder, n) {
    return builder(builder, n);
};

// Here it is in action.
Y(X, 5);
``````

Now let's see if we can cheat less. Well firstly we're using assignment, but we don't need to. We can just write X and Y inline.

``````// No assignment this time. (Note the wrapping parentheses,
// which make the immediately invoked function expression valid JavaScript.)
(function (builder, n) {
    return builder(builder, n);
})(
    function (recurse, n) {
        if (0 == n)
            return 1;
        else
            return n * recurse(recurse, n - 1);
    },
    5
);
``````

But we're using functions of 2 variables to get a function of 1 variable. Can we fix that? Well a smart guy by the name of Haskell Curry has a neat trick, if you have good higher order functions then you only need functions of 1 variable. The proof is that you can get from functions of 2 (or more in the general case) variables to 1 variable with a purely mechanical text transformation like this:

``````// Original
F = function (i, j) {
    ...
};
F(i, j);

// Transformed
F = function (i) {
    return function (j) {
        ...
    };
};
F(i)(j);
``````

where ... remains exactly the same. (This trick is called "currying" after its inventor. The language Haskell is also named for Haskell Curry. File that under useless trivia.) Now just apply this transformation everywhere and we get our final version.
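As a concrete instance of the mechanical rewrite (the `add` example is my own illustration, not from the original post):

```javascript
// Original: one function of 2 variables.
var add = function (i, j) {
    return i + j;
};
add(2, 3); // -> 5

// Transformed: a function of 1 variable returning
// another function of 1 variable.
var addCurried = function (i) {
    return function (j) {
        return i + j;
    };
};
addCurried(2)(3); // -> 5
```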

``````// The dreaded Y-combinator in action!
// (Again wrapped in parentheses so the immediately invoked
// function expression is valid JavaScript.)
(function (builder) {
    return function (n) {
        return builder(builder)(n);
    };
})(function (recurse) {
    return function (n) {
        if (0 == n)
            return 1;
        else
            return n * recurse(recurse)(n - 1);
    };
})(5);
``````

Feel free to try it: alert() the return value, tie it to a button, whatever. That code calculates factorials recursively, without using assignment, declarations, or functions of 2 variables. (But trying to trace how it works is likely to make your head spin. And handing it over slightly reformatted, without the derivation, will result in code that is sure to baffle and confuse.)
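For instance, here is the same anonymous template with the factorial body swapped out for Fibonacci (my variation, not part of the original derivation):

```javascript
// The same fixed-point template, recursing on Fibonacci instead.
var result = (function (builder) {
    return function (n) {
        return builder(builder)(n);
    };
})(function (recurse) {
    return function (n) {
        if (n <= 1)
            return n;
        else
            return recurse(recurse)(n - 1) + recurse(recurse)(n - 2);
    };
})(7);

result; // -> 13
```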

You can replace the 4 lines that recursively define factorial with any other recursive function that you want. #### @v7d8dpo4 2016-12-29 07:15:47

Nice explanation. Why did you write `function (n) { return builder(builder)(n);}` instead of `builder(builder)`? #### @btilly 2016-12-30 09:20:58

@v7d8dpo4 Because I was turning a function of 2 variables into a higher order function of one variable using currying. #### @TheChetan 2017-05-31 04:59:01

Is this the reason we need closures? #### @btilly 2017-05-31 16:06:15

@TheChetan Closures let us tie customized behavior behind a call to an anonymous function. It is just another abstraction technique. #### @Ales Hakl 2011-07-16 02:19:22

Other answers provide a pretty concise answer to this, but omit one important fact: you don't need to implement the fixed-point combinator in any practical language in this convoluted way, and doing so serves no practical purpose (except "look, I know what the Y-combinator is"). It's an important theoretical concept, but of little practical value. #### @Nicholas Mancuso 2008-09-18 15:23:03

If you're ready for a long read, Mike Vanier has a great explanation. Long story short, it allows you to implement recursion in a language that doesn't necessarily support it natively. #### @Martijn Pieters 2015-08-05 13:16:53

It is slightly more than a link though; it is a link with a very brief summary. A longer summary would be appreciated. #### @Yavar 2015-10-23 15:38:07

It is just a link BUT it cannot get better than this. This answer deserves (add1 votes) with no base case condition to exit; aka infinite recursion. #### @Jørgen Fogh 2016-01-15 18:24:08

@Andre MacFie: I didn't comment on the effort, I commented on the quality. In general, the policy on Stack Overflow is that answers should be self-contained, with links to more information. #### @toraritte 2018-11-07 16:50:49

@galdre is right: it is a great link, but it is just a link. It has also been mentioned in 3 other answers below, though only as a supporting document, as they are all good explanations on their own. This answer also doesn't even attempt to answer the OP's questions. #### @Thomas Wagner 2008-09-18 15:24:58

A fixed-point combinator (or fixed-point operator) is a higher-order function that computes a fixed point of other functions. This operation is relevant in programming language theory because it allows the implementation of recursion in the form of a rewrite rule, without explicit support from the language's runtime engine. (Source: Wikipedia)
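That rewrite rule can be expressed almost directly in JavaScript. The following `fix` is my own sketch: it cheats by using a named function, and adds an eta-expanding wrapper so call-by-value evaluation doesn't diverge:

```javascript
// fix(f) behaves like f(fix(f)); wrapping the recursive call in a
// function delays evaluation until the argument actually arrives.
function fix(f) {
    return function (n) {
        return f(fix(f))(n);
    };
}

var fact = fix(function (recurse) {
    return function (n) {
        return n === 0 ? 1 : n * recurse(n - 1);
    };
});

fact(5); // -> 120
```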
