Why should I learn lambda calculus?
There are a few reasons to learn lambda calculus, the main ones I can think of are:
It embodies some of the most important concepts of functional programming
For example, pure functions, unary functions, currying.
The internals of many functional programming languages such as Haskell, are heavily based on lambda calculus
Understanding lambda calculus will help you to understand these languages.
It is interesting
Lambda calculus is an interesting area of mathematics, and is relatively accessible to those with a minimal maths background.
What is lambda calculus?
Lambda calculus was invented by the mathematician Alonzo Church in the 1930s, and is what is known as a ‘computational model’. By that, I mean that it is a system which can be used to encode and compute algorithmic problems.
The computational model most of us are familiar with is the Turing machine. While lambda calculus is rather different from the Turing machine in its approach to computation, the two are formally equivalent - ie. any problem that can be solved using a Turing machine can be solved using lambda calculus, and vice versa.
That is to say, lambda calculus can be used to solve any problem that can be computed using a Turing machine (so anything that anyone has ever programmed with a computer)!
Lambda calculus basics
Lambda calculus is very minimalistic in its rules/axioms. It is made up of just 3 basic components, or lambda terms:
Variable - a named token used in a function, which will be replaced by concrete arguments when the function is applied.
Abstraction - a function, made up of a head and a body separated by a '.' (dot).
- The head is a lambda (λ) followed by a variable name.
- The body is an expression.
Application - a function invocation.
An expression is a variable, an abstraction, an application, or any combination of these.
In addition, parentheses are used in lambda calculus to indicate the order of evaluation.
This is all we get in lambda calculus! It is quite amazing that using these 3 lambda terms we can compute pretty much anything!
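To make these 3 lambda terms concrete, we can sketch them as plain JavaScript objects. This representation (and the tag names in it) is my own illustration, not part of lambda calculus itself:

```javascript
// Each of the 3 lambda terms as a plain object.
const variable = name => ({ type: 'variable', name });
const abstraction = (param, body) => ({ type: 'abstraction', param, body });
const application = (fn, arg) => ({ type: 'application', fn, arg });

// An abstraction whose head names the variable x and whose body is that variable.
const identityTerm = abstraction('x', variable('x'));

// An application of that abstraction to the variable y.
const applied = application(identityTerm, variable('y'));
```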
It is worth noting the absence of any of the constructs of basic arithmetic, such as numbers, addition, subtraction, multiplication, division etc - these are not within the basic axioms of lambda calculus.
Numbers and arithmetic operations can be defined in terms of lambda calculus via Church Encoding (which is outside the scope of this article). Rather than working from first principles every time we need them, we can choose to use numbers and basic arithmetic operators for convenience.
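As a small taste of Church Encoding (the full details really are outside the scope of this article), a number n can be represented as a function which applies another function n times. A minimal JavaScript sketch - the names zero, succ, add and toNumber are my own:

```javascript
// A Church numeral n applies the function f to x, n times.
const zero = f => x => x;                      // apply f zero times
const succ = n => f => x => f(n(f)(x));        // apply f one more time than n does
const add = m => n => f => x => m(f)(n(f)(x)); // apply f m + n times

// Convert a Church numeral to a JavaScript number so we can inspect it.
const toNumber = n => n(x => x + 1)(0);

toNumber(succ(succ(zero))); // 2
toNumber(add(succ(zero))(succ(succ(zero)))); // 3
```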
A simple example - the identity function
The following function is known as the identity function, it takes a value and returns that same value.
```javascript
const identity = x => x;

identity(3); // 3
```
We can write the same function using lambda calculus as follows:

λx.x

In the above:

- x is a variable.
- λx.x is an abstraction.
- λx is the head of the abstraction.
- x is the body of the abstraction.
To perform an application, ie. apply the function to a concrete argument, we use the following syntax:

(λx.x) 3

In the above, the syntax indicates that we are going to substitute all instances of x in the function with 3:

(λx.x) 3 = 3

The above process is known as beta reduction, which I will talk more about now.
Consider the following JavaScript function:

```javascript
const f = (x, y) => x(y);

f(z => z + 1, 3); // 4
```
f takes 2 arguments - a function x and a value y - and calls x with y as an argument.
In lambda calculus, the process of applying concrete arguments to an abstraction, and reducing the resulting expression is known as beta reduction.
f, when written with lambda calculus, is:

λxy.x y

This time, we have an application in the body of the abstraction: x y, this means 'call x with y as an argument'. We can pass arguments to the abstraction and beta reduce it as follows:

(λxy.x y)(λz.z + 1) 3
= (λy.(λz.z + 1) y) 3
= (λz.z + 1) 3
= 3 + 1
= 4
Beta normal form is the most simplified form of an expression, ie. when the terms in an expression cannot be beta reduced any further. In the above example, 4 is the beta normal form of the application.
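We can mirror each step of such a beta reduction in JavaScript by manually substituting arguments. The constants below are my own illustration, using a curried version of the f from earlier:

```javascript
// Step 0: apply both arguments to the curried function.
const step0 = (x => y => x(y))(z => z + 1)(3);
// Step 1: substitute x with the function z => z + 1.
const step1 = (y => (z => z + 1)(y))(3);
// Step 2: substitute y with 3.
const step2 = (z => z + 1)(3);
// Step 3: the beta normal form of the application.
const step3 = 3 + 1;

// Every step evaluates to the same value: 4.
```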
Alpha equivalence

In JavaScript, renaming the variables used within a function has no effect on its behaviour:

```javascript
// The following are equivalent.
const myFunc1 = (x, y) => x * x;
const myFunc2 = (y, x) => y * y;
```
Similarly, in lambda calculus renaming variables within a function also has no effect. We say that abstractions which differ only in their variable names are alpha equivalent.
The following is an example of 2 abstractions which are alpha equivalent:

λxy.x * x
λyx.y * y
Bound and free variables
In JavaScript, functions can make use of both their own parameters and variables defined outside of them:

```javascript
const z = 5;

// z, in the context of our function, is a free variable.
const f = (x, y) => z + (x * y);

f(2, 3); // 11 === z + 6
```
Similarly, lambda calculus abstractions can make use of both variables which are defined in the head of the abstraction, and other, ‘unknown’ variables which are not. In lambda calculus:
- Bound variables are variables used in an abstraction which appear in the head of the abstraction.
- Free variables are variables used in an abstraction which do not appear in the head; their value is taken to be unknown within the context of the abstraction.
Here is the equivalent of our JavaScript function f expressed in lambda calculus - x and y are bound variables, and z is a free variable:

λxy.z + (x * y)

And now applying arguments to this function and beta reducing it:

(λxy.z + (x * y)) 2 3
= (λy.z + (2 * y)) 3
= z + (2 * 3)
= z + 6

In JavaScript we were able to compute a numerical result because we knew the value of z, whereas in the lambda calculus, we just evaluated the function as far as we could without knowing z.

Even though it is not a numerical result, z + 6 is the beta normal form of this application.
Unary functions and currying
Up until now we have been making a simplification, and viewing lambda functions as functions which take one or more arguments and return a single value.
This is not actually correct.
Every lambda function takes a single argument and returns a single result; this type of function is known as a unary function.
The abstraction λxyz.x * y * z is actually shorthand for:

λx.λy.λz.x * y * z

This is a function with a single parameter x, which when invoked returns another function with a single parameter y, which when invoked returns another function with a single parameter z, which when invoked returns the value of x * y * z.
```javascript
const f = x => y => z => x * y * z;
```
We can invoke the above function either by passing all of its arguments and immediately obtaining the final result, or, we can first create a more specific function by passing only some arguments, and then call this function later to get the final result.
```javascript
// Call our function with 3 arguments.
f(2)(3)(4); // 24

// Call our function with 2 arguments to create a more specialised function.
const multiplyBySix = f(2)(3);

multiplyBySix(4); // 24
```
In lambda calculus, all functions are curried unary functions, despite the fact that we often use the aforementioned shorthand which makes them appear as if they take multiple arguments.
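In JavaScript we can automate this translation from a multi-argument function to a chain of unary functions. Here is a minimal sketch - the helper name curry2 is my own, though utility libraries offer more general versions:

```javascript
// Turn a two-argument function into a chain of unary functions.
const curry2 = f => x => y => f(x, y);

const multiply = (x, y) => x * y;
const curriedMultiply = curry2(multiply);

curriedMultiply(2)(3); // 6

// Partial application: fix the first argument now, supply the second later.
const double = curriedMultiply(2);
double(10); // 20
```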
Divergence

```javascript
const f = x => x(x);

f(f);
```
This is a little confusing. We define a function f, which takes a function x and calls it with itself as an argument. On the next line, we call f with itself, which will recursively continue to call f with itself, and mayhem ensues.
If you execute the above code, you will get a max call stack size exceeded error. Conceptually we can think of what happens as something like:
```javascript
f(f(f(f(f(f(...)))))); // and so on, until we reach the maximum call stack size.
```
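We can observe this failure directly. In V8-based engines such as Node.js the error raised is a RangeError, though the exact error type is an engine detail rather than part of the language specification:

```javascript
const f = x => x(x);

let caught = null;
try {
  f(f); // recurses until the call stack is exhausted
} catch (e) {
  caught = e;
}

caught instanceof RangeError; // true in Node.js
```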
In lambda calculus, we say that a lambda function which cannot be reduced to beta normal form diverges. Here is the equivalent of the above expressed with lambda calculus:

(λx.x x)(λx.x x)

If we try to beta reduce this we get stuck in an infinite loop:

(λx.x x)(λx.x x)
= (λx.x x)(λx.x x)
= (λx.x x)(λx.x x)
= ... and so on, forever.
Associativity and precedence
Let's start by looking at associativity and precedence in JavaScript. Consider the following function:

```javascript
const f1 = x => y => z => x / y / z / 2 + 7;

f1(1)(2)(3); // 7.083333333333333
```
How do we know the order in which the body of f1 is evaluated? We know because of JavaScript's precedence and associativity rules:

- Division (/) has higher precedence than addition (+), so we divide first.
- Division is left associative, meaning that when there are consecutive division operators, we group operations to the left.

Knowing this, we can figure out where the implicit parentheses go:

```javascript
const f1 = x => y => z => x / y / z / 2 + 7;
const f2 = x => y => z => (x / y / z / 2) + 7;     // Division has higher precedence than addition.
const f3 = x => y => z => (((x / y) / z) / 2) + 7; // Division is left associative.

f1(1)(2)(3); // 7.083333333333333
f2(1)(2)(3); // 7.083333333333333
f3(1)(2)(3); // 7.083333333333333
```
Now let's take a look at precedence and associativity in lambda calculus. This is the part of lambda calculus that I most struggled to get my head around.
There are 3 rules to remember:
Application has higher precedence than abstraction.

ie. We group applications before we group abstractions:

λx.x y = λx.(x y)

In the above, we place parentheses in order to group the application of x to y. This is in contrast to the result we might get if we tried to group abstractions first, which would be the incorrect way:

λx.x y ≠ (λx.x) y
Application is left associative, meaning that we group terms to the left:

x y z = (x y) z
Abstraction is right associative, meaning that we group terms to the right:

λx.λy.x y = λx.(λy.(x y))
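JavaScript's arrow functions mirror this last rule: the arrow is right associative, so nested unary functions need no parentheses. A quick check:

```javascript
// These two definitions are identical - the arrow groups to the right.
const implicit = x => y => x(y);
const explicit = x => (y => x(y));

implicit(n => n + 1)(41); // 42
explicit(n => n + 1)(41); // 42
```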
In addition, the following were really helpful for writing this article, and I very much recommend them as further reading: