Road to the Y Combinator, Part V
In Part IV, we discovered the Y Combinator, a function which allows writing recursive functions in our functional "lambda" style.
In this chapter, we'll examine the significance of fixpoints for higher order functions, and refactor the Y Combinator into a popular form.
Functions and Fixpoints
A fixpoint P of a function F is a value such that P = F(P). In other words, if you pass a value to a function and get the same thing back, that value is a fixpoint of the function. Functions can have 0, 1, or even many fixpoints.
The following function has NaN as a fixpoint:
(n) => n + 1
Try it in the Node REPL:
> ((n) => n + 1)(1)
2
> ((n) => n + 1)(NaN)
NaN
The Identity function always returns what it is given, so everything is a fixpoint for Identity:
(x) => x
The following function has no fixpoint. It always returns something different from its argument:
(x) => x === 0 ? 1 : 0;
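To see why, give it a name (toggle is my label here, not the article's) and check a few inputs — nothing ever maps to itself:

```javascript
// 0 maps to 1, and every other value maps to 0,
// so no input is ever returned unchanged.
const toggle = (x) => x === 0 ? 1 : 0;

console.log(toggle(0));  // 1
console.log(toggle(1));  // 0
console.log(toggle(42)); // 0
```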
Iterating Towards the Fixpoint
What happens when you take a value from a function, and put that back into itself?
Dividing by Two
Consider the following function:
(x) => x / 2;
It takes a value, and returns the value divided by two. If we started with 1, we'd get 0.5, then 0.25, and so on. The values get closer and closer to the fixpoint 0.
Try out the Fixpoint Finder below. x starts out at 1, and each time gets divided by 2. The graph shows the current value of x, as well as the difference between the last two values of x.
If the delta ever reached 0, that would mean that the last value to come out of our "divide by two" function (x => x / 2) would be the same as the value which was passed into the function. While the values converge toward 0, it may require a silly number of button presses to actually reach that fixpoint.
[Interactive: (x) => x / 2 Fixpoint Finder]
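If you'd rather not press the button a thousand times, a small loop (my sketch, not the article's widget) simulates it. In IEEE 754 doubles, repeated halving does eventually underflow to an exact 0:

```javascript
// Halve x until it literally reaches 0 (floating point underflow),
// with a cap so the loop can't run away.
let x = 1;
let steps = 0;
while (x !== 0 && steps < 2000) {
  x = x / 2;
  ++steps;
}
console.log(x, steps); // reaches 0 after roughly a thousand halvings
```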
Cosine
Ok, here's another one; this time we're kicking the tires on the Cosine function. x starts out at 0, and each time gets transformed by a trip through Math.cos. The graph shows the current value of x, as well as the difference between the last two values of x. It takes about 100 clicks to reach the fixpoint:
[Interactive: Math.cos Fixpoint Finder]
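The same experiment in loop form (again my sketch): iterate Math.cos until two successive values are identical.

```javascript
// Iterate Math.cos starting from 0 until the value stops changing,
// with a cap so the loop is guaranteed to terminate.
let x = 0;
let prev = NaN;
let steps = 0;
while (x !== prev && steps < 1000) {
  prev = x;
  x = Math.cos(x);
  ++steps;
}
console.log(x, steps); // settles at the Dottie number, ≈0.7390851332151607
```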
So some functions converge towards a fixpoint. Some of those functions will never reach their fixpoint, while others reach theirs quickly. And still other functions don't converge toward their fixpoint at all.
Fixpoints and Higher Order Functions
If you're unfamiliar with the term "higher order function," it is simply a function that operates on functions. Just like we can make a function that operates on numbers, for example to add sales tax, we can make a function that takes a function as input, and returns a different function as output.
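For instance, sticking with the sales-tax analogy (the names here are mine, not the article's):

```javascript
// A higher-order function: takes a tax rate (a number) and
// returns a new pricing function (a function).
const withSalesTax = (rate) => (price) => price * (1 + rate);

const withTwentyPercent = withSalesTax(0.2);
console.log(withTwentyPercent(100)); // 120
```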
Remember that factImprover (a higher order function) is a function which takes a partial definition of Factorial, and improves it by one step... What happens when we apply an improver to a function that is already a complete solution in terms of that improver?
Let's apply our improver one more time for kicks:
const factImprover = (partial) => n => n === 0n ? 1n : n * partial( n - 1n );
const y = (improver) => ( gen => gen(gen))( gen => improver( v => gen( gen )(v) ));
let fact = y(factImprover);
fact = factImprover(fact);
What does fact do now? Interestingly enough, the same thing as before:
> fact(5n)
120n
Since our new version of fact is, mathematically speaking, the same as the old one, we have made a new discovery: higher order functions can have fixpoints too. fact is a fixpoint of the factImprover function, since when factImprover receives it, it returns (for our purposes) the same thing.
The End of All Things
Finally, we are ready to finish our journey. Let's take another look at this y thing:
const y = (improver) => ( gen => gen(gen))( gen => improver( v => gen( gen )(v) ));
Now it is obvious that y calculates the fixpoint of an improver function. In other words, if we can create a function which improves a recursive solution by a single step, y can complete the solution.
Let's write a Fibonacci improver and confirm this! Recall that the Nth Fibonacci number is defined to be:
- 0 if N is 0
- 1 if N is 1
- Fibonacci of N-2, plus Fibonacci of N-1, otherwise
In the "improver style," this yields:
const fibImprover = (partial) => n => n < 2n ? n : partial( n - 2n ) + partial( n - 1n );
let fib = y(fibImprover);
A test:
for (let i = 0; i < 10; ++i) {
  console.log(`i: ${ i }, fib(i): ${ fib(BigInt(i)) }.`);
}
...shows this works as well:
i: 0, fib(i): 0.
i: 1, fib(i): 1.
i: 2, fib(i): 1.
i: 3, fib(i): 2.
i: 4, fib(i): 3.
i: 5, fib(i): 5.
i: 6, fib(i): 8.
i: 7, fib(i): 13.
i: 8, fib(i): 21.
i: 9, fib(i): 34.
(Remember, our improvers only deal with BigInts at this point, so we have to convert the value of i to a BigInt before passing it in; alternatively, we could implement fibImprover with regular Numbers.)
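As a sketch of that Number-based alternative (fibImproverNum is my name for it, using the same y as before):

```javascript
const y = (improver) => ( gen => gen(gen))( gen => improver( v => gen( gen )(v) ));

// Same improver as above, but over plain Numbers instead of BigInts.
const fibImproverNum = (partial) => n => n < 2 ? n : partial( n - 2 ) + partial( n - 1 );
const fibNum = y(fibImproverNum);

console.log(fibNum(9)); // 34
```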
If you have seen the Y Combinator before, you may have seen something different. Just like Pokémon, it comes in many forms. The Applicative Order Y Combinator is the one you might see in applicative order languages like JS. Still, we have a ways to go to make ours look like the version on Wikipedia.
First, let's rename improver to f:
const y = (f) => ( gen => gen(gen))( gen => f( v => gen( gen )(v) ));
Next, let's rename gen to x, and clean up the formatting:
const y = (f) => ( x => x(x))( x => f(v => x(x)(v)));
Almost there... First, we recall that f is an improver function, and since applying it any number of extra times is a no-op, let's wrap x(x) in a call to f:
const y = (f) => ( x => f(x(x)))( x => f(v => x(x)(v)));
Last of all, recalling that a function wrapper is a no-op (except for laziness), we can wrap the argument to f in the same way, achieving a stunning symmetry:
const y = (f) => ( x => f(v => x(x)(v)))( x => f(v => x(x)(v)));
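A quick sanity check that this final form still computes fixpoints, reusing the factImprover from earlier:

```javascript
// The Applicative Order Y Combinator, in its final symmetric form.
const y = (f) => ( x => f(v => x(x)(v)))( x => f(v => x(x)(v)));
const factImprover = (partial) => n => n === 0n ? 1n : n * partial( n - 1n );

const fact = y(factImprover);
console.log(fact(5n)); // 120n
```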
Conclusion
What was all this for? Should anyone be writing apps this way? Probably not. Is it worth this work to learn something? Certainly. If you made it this far, like me you are probably amazed at the power of functions. It is incredible what can be achieved with just one primitive construct, despite all the other things out there that we didn't use.
It appears there is an inverse relationship between syntax and freedom. One sign of engineering competence is the practice of subsetting; we limit ourselves in exchange for flexibility.
While I enjoy functional programming styles, I would normally use assignment. Yet by refraining from it, one gains newfound powers.
Greatness beckons. What will you give up to become more?