A circuit-like notation for lambda calculus (2015) (csvoss.com)
105 points by apsec112 on Aug 17, 2020 | 42 comments


Author here. Thanks for the appreciation!

Since this piece was written, I have discovered another notation by John Tromp which I consider to be the most breathtakingly beautiful:

- https://tromp.github.io/cl/diagrams.html

Here used by Paul Crowley to describe Graham's number:

- https://mindsarentmagic.org/2020/02/19/a-picture-of-grahams-...

- https://mindsarentmagic.org/2020/02/24/some-more-numbers-as-...


Really beautiful idea, and thanks for the link to John Tromp's notation. I will use your notation in my course.

I am sure, being from where you are, you have heard of "Henderson Diagrams" from SICP. They are also a great visual tool for functional programming.

From the Russian side of the cosmos, Drakon is a nice flowchart language (not a circuit diagram language), but I have never used it: https://en.wikipedia.org/wiki/DRAKON


I've been thinking about ways to feed algorithms into a neural network. This looks perfect! Just give the images to a convolutional NN.


Why not feed it a text?


Does this actually give you any insight into lambda calculus?


Have you looked at the similarity between Church numerals and the Peano axioms? [0]

[0] https://en.wikipedia.org/wiki/Peano_axioms


Noteworthy: Frege developed the first "visual" (2-dimensional) way of doing logic with his Concept Script[1]. Linear logic also uses a similar "tree-like" structure for proofs[2].

I was going to also mention Alligator Eggs, but that already made it in the post :)

[1] https://www.math.uwaterloo.ca/~snburris/htdocs/scav/frege/fr...

[2] https://plato.stanford.edu/entries/logic-linear/


If you get a chance to read it, the Begriffsschrift (literally “concept writing”) in which Frege explains the system and uses it is a fascinating read. Last time I checked it’s pretty hard to find a copy—mostly university libraries have it.


Yep, it's incredibly ahead of its time (written in the late 1800s)! Had to read parts of it for a course on (you guessed it) Lambda Calculus :)



I do love diagrams, I love thinking about some problems in terms of diagrams, eg Feynman diagrams, some enumerative problems, knot theory etc. It can be kind of deep. I think they are good for when you can do some graphical manipulation to prove a statement, or help gain extra insight. Usually there is some monoidal category floating around.

Perhaps one area this could be useful is program simplification? It would be nice if I could take a more complicated thing like a Haskell program or function and then do some graphical manipulation to see how to rewrite it. Kind of reminds me of pointfree too. Sometimes that can lead to interesting insights, where you spot something more general or simpler. You are just focusing on the functions (==diagrams) rather than the strands leading in.


Back in architecture (as in physical buildings, not software) school, our professors always encouraged us to translate our ideas into diagrams to clarify our thoughts. This is a kind of designing-and-thinking feedback loop popularized by the Bauhaus.


I'm surprised no one has pointed out LabVIEW[0]. It's a graphical programming language in which it would be very easy to implement the concepts described here. It is proprietary, but worth a look if you're interested in applying this.

Full disclosure, I used to work for National Instruments, hence my familiarity with it.

[0]: https://en.m.wikipedia.org/wiki/LabVIEW


The notation is presented somewhat confusingly; the definition given for +:

  \x. \y. (\s. \z. x s (y s z))
Isn't different from

  \x. \y. \s. \z. x s (y s z)
And usually syntax sugar is assumed such that you can just say:

  \xysz.xs(ysz)
Which is much easier to read.
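
For JavaScript readers, the same equivalence shows up with nested arrow functions; this little sketch is mine, not from the article:

    // \x. \y. (\s. \z. x s (y s z)) and \x. \y. \s. \z. x s (y s z)
    // describe the same curried function; the extra parentheses change nothing.
    const plus = x => y => s => z => x(s)(y(s)(z));

Applied to two Church numerals, it returns their sum, exactly as in the article's definition.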

Personally, I find the textual, symbolic representation much easier to reason about; my eyes just glaze over the circuit diagrams.


Reinventing string diagrams and other notation for symmetric monoidal categories.

Graphicallinearalgebra.net is a good resource.


That is quite fascinating, including the fact that much of it seems to have been invented in the 1600s. Why didn't they teach me this stuff in school?

Does diagrammatic Linear Algebra show us that in fact Lambda Calculus and Linear Algebra are really very related?


Yeah, my first thought was "you know, I think I've seen this before, in my category theory course in grad school."


There's a graphical notation you can get from string diagrams for a "reflexive object" (an inscrutable reference that will help if you know how to deal with 2-categories graphically already: https://ncatlab.org/nlab/show/reflexive+object)

The main difference is that instead of a lambda abstraction box, you make the input branch off the output. That is, you have lambda nodes with two outputs and one input. One output is the lambda expression, one output is the expression's argument, and the input is the value of the evaluated function.

I've been wanting to understand non-standard diagrams -- might they be continuations or some other kind of computation?


Also good: To Dissect a Mockingbird [1]

1. http://dkeenan.com/Lambda/


So how can we put this to practical use to describe (functional) programs written in practical programming languages like say Lisp or JavaScript or even Haskell?

I assume that would require some additional notation for built-in concepts like numbers, for instance, and for built-in named library functions?



An excellent article, I really enjoyed this visual take on the λ-calculus. Knowing this would have made my Programming Language Theory course much smoother back when I was a CS undergrad.



Nice. Now we'll never be sure if buried wall decorations are aliens or future us.


I got lost when the author explained zero and one.

Some folks just can't teach. Like, at all.

I write software daily, and I have for 24 years. I design my own CPUs and play with logic gates in FPGAs all the time.

I understand this stuff.

The author of this article lost me when explaining zero and one...


> The actual details of how zero and successor should be implemented are left as someone else’s problem — we can survive without them. All we care about is that our numbers do the right thing, given whatever zero and successor someone may provide.

Zero is left as an implementation detail (it can actually be quite literally anything -- as long as it's not the successor). One is simply the successor function applied to zero. From http://www.cs.unc.edu/~stotts/723/Lambda/church.html --

     Natural numbers are non-negative. Given a successor function, next,
     which adds one, we can define the natural numbers in terms of zero and next:

         1 = (next 0)
         2 = (next 1)
           = (next (next 0))
         3 = (next 2)
           = (next (next (next 0)))

     and so on. Therefore a number n will be that number of
     successors of zero. Just as we adopted the convention TRUE = first,
     and FALSE = second, we adopt the following convention:

          zero  = Lf.Lx.x
          one   = Lf.Lx.(f x)
          two   = Lf.Lx.(f (f x))
          three = Lf.Lx.(f (f (f x)))
          four  = Lf.Lx.(f (f (f (f x))))


The encoding of natural numbers in lambda calculus can be mysterious at first glance. I'm surprised the author didn't spend more time on it. No need to be so hostile about it though.

Essentially, we represent a number as a function that calls another function that many times on a base value. So for `zero = λs. λz. z`, we can see that if we gave it a function `s` and a value `z`, it would apply `s` zero times to `z`. For `one = λs. λz. s z`, clearly `s` is applied once to `z`. I think you can see the rest of the pattern. It's really as simple as that.

We can then define the successor function as `suc = λn. λs. λz. s (n s z)` (I'm not sure why the author doesn't write this out. Instead he confuses the successor function with the `s` argument). We can see `suc` behaves as expected with examples. `suc one = (λn. λs. λz. s (n s z)) one = λs. λz. s (one s z) = λs. λz. s ((λs. λz. s z) s z) = λs. λz. s (s z) = two`. All we need is our definition of `zero` and `suc`, and we can define any natural number.

Now go back and look at the definition of `plus` and it should make a lot more sense.
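
If it helps to see it run, here's a minimal JavaScript sketch of the same definitions (the names `suc`, `plus`, and `toInt` are mine):

    // Church numerals: a number n applies a function s to z exactly n times
    const zero = s => z => z;
    const suc  = n => s => z => s(n(s)(z));        // one extra application of s
    const plus = m => n => s => z => m(s)(n(s)(z));

    // Count the applications to get back a native number
    const toInt = n => n(x => x + 1)(0);

    const one = suc(zero);
    const two = suc(one);
    console.log(toInt(two));             // 2
    console.log(toInt(plus(two)(two)));  // 4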


Check out the explanation of church numerals and the successor function on the lambda calculus wikipedia.

Church numerals are quite beautiful. We use the symbols 1, 2, 3, 4, etc., but there's nothing intrinsic to those symbols that denotes what number they refer to.

Church numerals, on the other hand, do encode the actual numbers. A church numeral is simply a function that takes in two other functions as arguments, call them a and b, and applies a to b N number of times, where N is the numeral you're encoding.

So (using JS lambdas to demonstrate):

0 = (a, b) => b // applies a to b 0 times

1 = (a, b) => a(b) // applies a to b 1 time

2 = (a, b) => a(a(b)) // and so on...
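
If you paste those into a browser console, you can watch the encoding do its job (this check is mine, not part of the comment above):

    const two = (a, b) => a(a(b));
    console.log(two(n => n + 1, 0));  // 2: 'a' was applied to 'b' twice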


Also check out Scott encoding - for numerals, each number is constructed by embedding its predecessor (with 0 as per Church):

  0 = \f \x x
  N+1 = \f \x f N
To interrogate such a numeral, you supply a function that accepts the predecessor numeral, and a constant for zero. Recursion is needed to effect a fold.

On the face of it heavier than Church, but allows the recursive definition of infinity. Lists and more complex structures can be similarly defined, again with the possibility of recursively defined infinite components.

Then you can do fun stuff like 'taking' infinite elements from a (possibly infinite) list, adding infinity to any number to get infinity, multiplying likewise. Division by zero naturally returns infinity by construction, no convention necessary. 'dropping' infinity elements from an infinite list does not terminate however - you can never reach the first element of that result.
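
Here's a rough JavaScript sketch of that encoding, following the `0` and `N+1` definitions above (the names `succ`, `isZero`, `pred`, and `toInt` are my own, and `toInt` shows the recursion needed for a fold):

    // Scott numerals: a number is a two-way case analysis.
    // The first argument handles "successor of p", the second handles "zero".
    const zero = f => x => x;
    const succ = n => f => x => f(n);

    const isZero = n => n(p => false)(true);
    const pred   = n => n(p => p)(zero);

    // Unlike Church numerals, folding a Scott numeral requires recursion
    const toInt = n => n(p => 1 + toInt(p))(0);

    console.log(toInt(succ(succ(zero))));  // 2
    console.log(isZero(zero));             // true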


Some folks can't teach but I do not think that this applies to the author of this post. Rather I think that the issue is that some folks just can't learn.

> The author of this article lost me when explaining zero and one...

If you are interested in learning, I would suggest pointing out the exact thing that confuses you.


I didn't say that to ask for help. I said it to indicate that the author is speaking to people who already understand the subject, and is not speaking to people who do not understand the subject, despite the appearance that the article is indeed trying to teach the subject.

It is written to be read by people who already know the subject. It confuses me that this isn't readily apparent to everyone, when it is so obvious to me, someone who just can't learn.


i guess it depends on how familiar you are with the lambda calculus and the various "tricks" for representing data as lambdas.

from a quick look at the relevant section, it seems to be using [Scott-encoded natural numbers](https://fermat.github.io/document/talks/lam.pdf). i think describing it as "leaving S and Z up to some implementation" sells it short! the (quite beautiful) idea is roughly that you represent a number by a [fold/reduce]-like function that'll iterate the function you pass it (`s`) the appropriate amount of times, starting from an initial value (`z`).


Church numerals are essentially like the unary notation.

1. 0 : Think of 'y' as a blank blackboard in (\ x (\ y . y )), because when you apply it to two arguments, you get a "blank blackboard" - (\ x (\ y . y )) x y -> y

2. 1 is a tick mark on the blank blackboard (\ x (\ y . x y )). Applying it to two arguments, (\ x (\ y . x y )) x y, you get x y

3. 2 is a tick mark on a blackboard with a tick already on it : (\ x (\ y . x (x y))) x y yields x (x y)

4. 3 is (\ x (\ y . x (x (x y))))

and so on. The count of "tick marks" is essentially the number.

Calculation in this notation is also fun to understand with the "tick marks on blackboard" analogy.
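
To make the analogy concrete, here's a tiny JavaScript version (mine) in which the "tick mark" is literally a character appended to the "blackboard":

    // 2 = (\ x (\ y . x (x y))), written as a curried JS function
    const two = x => y => x(x(y));

    console.log(two(s => s + "|")(""));  // "||" -- two tick marks
    console.log(two(n => n + 1)(0));     // 2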


What's x represent?

What's y represent?

What's the backslash represent?

What's the period represent?

BASIC questions for things no one thinks to teach. Ever.

People who teach this only know how to teach it to people who already know it. Us lower-class ignoramuses can do without, I guess.

Thanks for trying, though, seriously.


No need to be aggressive, I am not being condescending.

x and y are variables.

\ is a notation people use because you cannot type lambdas in ASCII.

For the basic rules regarding application, you need to understand substitution rules.
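
As a rough JavaScript parallel (my example, not the parent's): applying a lambda just substitutes the argument for the bound variable.

    // (\ x (\ y . x y)) f v  reduces to  f v
    const term = x => y => x(y);
    const f = n => n * 2;
    console.log(term(f)(10));  // 20, i.e. f applied to 10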


We can translate it quite easily into Javascript, if you're familiar with that:

> What's x represent?

In JS this would be written as the variable 'x'

> What's y represent?

In JS this would be written as the variable 'y'

> What's the backslash represent?

In JS this would be written as the keyword 'function'

> What's the period represent?

In JS this would be the keyword 'return', or as '=>' using function arrow notation.

For example, '\ x (\ y . y )' (AKA zero) in JS would be:

    function(x) {
      return function(y) {
        return y;
      };
    }
Or, with arrow notation, as simply 'x => (y => y)'

The number one would be 'x => (y => x(y))'

The number two would be 'x => (y => x(x(y)))'

And so on: we represent the number N as functions taking two arguments, which apply the first argument to the second argument N times.

Lambda calculus functions can only take one argument at a time, hence we need two nested functions in order to take two arguments. Javascript functions can take more than one argument, so we could do this instead '(x, y) => x(x(y))'

The argument names are arbitrary; here they're 'x' and 'y', but we could use 'foo' and 'bar', 'hello' and 'world', or whatever.

If you want a more intuitive idea about "how" these work, and how to use them, then you can think of each number N as a loop performing N iterations. The first argument is the 'loop body' or 'stepping function', the second argument is the initial value/accumulator.

For example, if we want to add two numbers A and B, which are represented in this way, we can make a new number (AKA function taking two arguments AKA loop) which runs the A loop, with the result of running the B loop as its initial value (using the same 'step' each time), e.g.

    \x. \y. A x (B x y)
Or, in JS notation and more descriptive variable names:

    add = A => B => step => init => A(step)(B(step)(init))
We can multiply A and B by looping: start with zero (AKA init), then add B on to it A times:

    zero  = step => init => init

    times = A => B => step => init => A(add(B))(zero)(step)(init)
Notice that 'times' doesn't use its 'step' and 'init' arguments directly, they're just passed straight into the result. That's just busy-work, so we can simplify it away (this is called "eta reduction"):

    times = A => B => A(add(B))(zero)
We can pass in anything we like as our 'step' and 'init' arguments. Here's how to recover native JS numbers:

    churchToInt = A => A(x => x + 1)(0)
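
As a quick sanity check, reusing the 'add', 'times', 'zero' and 'churchToInt' definitions above (this snippet is mine):

    one   = step => init => step(init)
    two   = add(one)(one)
    three = add(one)(two)

    churchToInt(two)                // 2
    churchToInt(times(two)(three))  // 6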
Encodings of other data structures work in a similar way:

- We write a function which takes in one function for each 'case' ('successor' or 'zero', if we're encoding numbers using Peano's axioms; 'true' or 'false' if we're encoding booleans; 'nil' or 'cons' if we're encoding a list; etc.)

- We return the argument corresponding to the value we've encoded (e.g. an encoding of zero returns the argument corresponding to zero; an encoding of true returns the argument corresponding to true; etc.)

- If we have a composite data structure, we first apply that argument to each of our nested values (e.g. a 'successor' value contains another number (its predecessor), so the argument corresponding to successor gets called with that value)

Some examples:

    // Booleans have two cases, neither contain any sub-values
    true  = T => F => T
    false = T => F => F

    // This only has one case, but it contains three sub-expressions (it's a triplet)
    // This is polymorphic in the type of 'red', 'green' and 'blue', but they could be
    // e.g. numbers
    colour = red => green => blue => x => x(red)(green)(blue)


    // A 4-tuple. If A, B, C and D are booleans then it's a nibble
    nibble = A => B => C => D => x => x(A)(B)(C)(D)

    // A pair. If HIGH and LOW are nibbles then it's a byte
    byte = HIGH => LOW => x => x(HIGH)(LOW)

    // Binary trees with values in the leaves 
    leaf = VALUE => node => leaf => leaf(VALUE)
    node = LEFT => RIGHT => node => leaf => node(LEFT(node)(leaf))(RIGHT(node)(leaf))

    // Some values, encoded using the above
    emptyNibble = nibble(false)(false)(false)(false)
    fullNibble  = nibble(true)(true)(true)(true)

    emptyByte = byte(emptyNibble)(emptyNibble)
    fullByte  = byte(fullNibble)(fullNibble)

    red   = colour(fullByte)(emptyByte)(emptyByte)
    white = colour(fullByte)(fullByte)(fullByte)


You're trying so hard but it isn't sticking for me.

Maybe I am too stupid. Not seeking pity! Being honest with myself.


It's just data modelling, except instead of trying to fit our data into some arrangement of pointers (in a C-like language), or some combination of numbers/strings/lists/dicts (in a scripting language), or some algebraic datastructure (in an ML/Haskell-like language), instead we're trying to fit our data into some arrangement of anonymous functions.

I picked JS syntax since it's familiar to many people. If you're more comfortable with some other language, maybe try translating those examples into that language (the language must support first-class/anonymous functions; trying this with e.g. function pointers in C would be even more horrible!)

Then just play around with these functions a little; get a feel for how they behave; e.g. you could call them with strings like "foo", "bar", etc. and see what they return. Maybe write some input/output tables to see what's happening.

Then imagine that your language only has functions; nothing else. Try to find an alternative to if/else which only uses functions (hint: the 'true' and 'false' definitions above will help!). You've already seen how to write finite loops (that's how numbers are represented); try to write an infinite loop (this is easier if you're giving names to functions, like I did with 'add', 'times', etc. above). Then try writing an anonymous infinite loop, only using anonymous functions (hint: look at the URL of this site).

This is largely an academic exercise, but it can bring a deeper appreciation/understanding of what we can do with functions. Smalltalk famously uses a similar approach to this for branching (myBool ifTrue: x else: y) and looping (myNum timesRepeat: x); Smalltalk had a huge influence on OOP and scripting languages.


> Some folks just can't teach. Like, at all.

This is insulting. Is there a specific part of the zero/one explanation that you found confusing?


This talk by David Beazley (Lambda Calculus from the Ground Up) might be useful: https://youtu.be/pkCLMl0e_0k

Zero and one come in around the 35 minute mark (after booleans).


David Beazley talks are like albums or fine meals and should be consumed from beginning to end.


that's math, not computers. Math looks strange. Especially the everyday things.



