Applied Category Theory Course (ucr.edu)
161 points by Schiphol on Sept 27, 2023 | 93 comments


I'm writing a book [1] about "bona fide" applications of mathematics to practical programming problems, where bona fide is my own personal definition [2]. I've had my eye on claims of applications of category theory for many years, and none of them have quite fit the bill (sorry, John, if you're reading this). When they talk about applications to databases, it seems quite unrealistic or lacking in sufficient detail to evaluate. And when I talk to people who are actually on the front lines trying to use category theory to solve some problem, they tell me it ended up being completely useless.

I'm open to having my mind changed. The one application that's come the closest is UMAP, [3] though the last time I checked, a few years ago, the jury was still out on whether the structure it produces in applied contexts like biology is spurious.

So what specific applications do you know of category theory to practical problems? For the purpose of this post "application" means used by someone in a production software setting to solve a problem not related to category theory. Someone who doesn't care about category theory should care about the problem being solved.

And, to be sure, category theory is fine and dandy in its applications to other areas of math, as a unifying language for homology and whatnot. I'm talking about the kind of claims category theory folks sometimes make, like "category theory is ushering in a programming revolution and you need to learn it to stay relevant".

[1]: http://pmfpbook.org/

[2]: https://buttondown.email/j2kun/archive/whats-in-production/

[3]: https://umap-learn.readthedocs.io/en/latest/


It really depends on what your idea of applied is, but Conal Elliott's work is really interesting.

His paper on Compiling to Categories [1] pitches programming language semantics as a cartesian closed category [2], which allows for really cool stuff by mapping typical evaluation semantics to alternatives, like building a computational graph visualization, a pretty printer, or imbuing the original program with automatic differentiation capabilities, purely through the categorical interpretation of the same program. Essentially, if you can formulate a desired output of the program as a cartesian closed category, you can do it quite easily.

[1]: http://conal.net/papers/compiling-to-categories/

[2]: A cartesian category is a category with an operation that lets you pair things together into a new object and get the original parts back out with eliminators. Think constructing a tuple and having the ability to get the first and second elements out of the tuple. A cartesian closed category adds in the ability to model (partial) application of a function to its arguments via "apply, curry, and uncurry." See https://en.wikipedia.org/wiki/Cartesian_closed_category
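
For concreteness, here is a rough Haskell sketch of those operations (my own illustration, not the exact type classes from the paper): the cartesian part gives you pairing and the two projections, and the closed part gives you curry, uncurry, and apply. For simplicity it hard-codes Haskell's (->) as the exponential object.

    import Control.Category (Category)

    -- Cartesian: pair two arrows, plus the projections that take the
    -- parts back out (think tuples and fst/snd).
    class Category k => Cartesian k where
      pair :: k a b -> k a c -> k a (b, c)
      exl  :: k (a, b) a
      exr  :: k (a, b) b

    -- Closed: currying, uncurrying, and application.
    class Cartesian k => CartesianClosed k where
      curryK   :: k (a, b) c -> k a (b -> c)
      uncurryK :: k a (b -> c) -> k (a, b) c
      applyK   :: k (b -> c, b) c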


This does not appear to pass the criterion I gave:

> used by someone in a production software setting to solve a problem not related to category theory

If this stuff or its derivative work is used in production in a mainstream compiler (GHC, perhaps?), then I would see it differently.


there was a startup that used this to compile functional programs into circuits

not sure if it went anywhere, but it even included self modifying fpga-like circuits iirc


I think the important idea from math for programming, and pretty much everything else, is that of a function that operates on functions. You can build on that important idea in different general ways, for example category theory, or type theory.

There is a much simpler way of making this idea the foundation of pretty much everything, though; see my report "Logic is Algebra" [1]. I've simplified matters further since then. For example, instead of the somewhat unfamiliar notation

(lambda x. D P[x])

you now write

lambda(D, x. P[x])

(Maybe I should even go as far as just writing "lambda(D, x => P[x])", as that probably triggers even more neurons, especially in programmers.)

lambda here is nothing special, just another operator you can introduce yourself, much like "forall", "exists", "sum", etc.

So category theory is, in my opinion, a pretty complicated framework to think in, pushing a certain view of things, and I think there is a more general, yet simpler framework. And I think that will be of use to everyone doing anything vaguely math related, and that certainly includes programmers.

[1] https://arxiv.org/abs/2304.00358

(The above report also includes lots of theory, and looks therefore more complicated than it really is. I am working on something new that leaves out the theory, uses more familiar syntax, and emphasises applications)


Programming is applied algebra; the choice is not between using it and not using it, but between understanding what you are doing and being ignorant about it. Relevance is in the eye of the beholder. In a typical business the incentive is to have good enough code ready "yesterday", and it will be thrown away tomorrow, as the code and whatever it does are only a means to generate profit and not a goal in itself, and spending time on making it profound is a waste. Wrt production examples, anything written in Haskell uses categorical constructs explicitly.


Programming is also applied physics. But understanding the kinetic forces behind typing is no guide to improved programming.

To say that learning a field is useful is to say that knowing these abstractions is useful.

Much of Haskell's "category theory" is a reuse of terms. Replace Monoid by Appendable, Monad by Composable, Functor by Runnable, etc. and most programming languages are "category theory".

This says little about whether knowing the field of category theory will be useful to programming. Indeed, I think there's a good argument to say Haskell programming is much harmed by it.

If its authors hadn't used their knowledge of category theory, but rather named the abstractions in more ordinary terms, the language would be more useful to more people.


> Replace Monoid by Appendable, Monad by Composable, Functor by Runnable, etc. and most programming languages are "category theory".

I'm not sure an "Appendable" Int or IO type, or a "Runnable" / "Composable" List type is any less confusing to Joe Programmer. These descriptions do not articulate the same things implied by the Monoid, Functor, and Monad laws. Some of these aren't even correct (e.g. not all Functors are "Runnable", and Monadic types do not necessarily compose directly, only via Kleisli composition).
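
For what it's worth, a minimal Haskell sketch of the Kleisli point, using two made-up Maybe-valued steps: monadic values don't compose directly, but Kleisli arrows (functions a -> m b) do, and the monad laws are exactly what make that composition associative with return as its identity.

    import Control.Monad ((>=>))

    -- Two Kleisli arrows for Maybe: each step may fail.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    -- Kleisli composition: run one step, feed its result to the next,
    -- short-circuiting on Nothing.
    divThenHalve :: Int -> Int -> Maybe Int
    divThenHalve x = safeDiv x >=> halve

    -- divThenHalve 12 3 == Just 2
    -- divThenHalve 12 0 == Nothing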

> This says little about whether knowing the field of category theory will be useful to programming. Indeed, I think there's a good argument to say Haskell programming is much harmed by it.

> If its authors hadn't used their knowledge of category theory, but rather named the abstractions in more ordinary terms, the language would be more useful to more people.

I think this is really the least of Haskell's problems. Mystifying/poorly written monad tutorials and conflation with IO (or confusion about how IO works in the Haskell runtime) are more responsible for confusing new programmers than the concept itself. I have no background in mathematics above undergrad calc 2 and I was able to understand them just fine.


I take it all Functors are runnable if you see (1) map as generalised function application and (2) functors as lifted map.
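
To make that reading concrete, a tiny Haskell illustration: fmap is ordinary function application lifted to work inside a structure, whatever the structure happens to be.

    -- Plain application:      (+1) 2                == 3
    -- Lifted over a list:     fmap (+1) [1, 2, 3]   == [2, 3, 4]
    -- Lifted over Maybe:      fmap (+1) (Just 2)    == Just 3
    -- Lifted over a function: fmap (+1) (* 10) $ 4  == 41
    liftedSucc :: Functor f => f Int -> f Int
    liftedSucc = fmap (+ 1)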

The concepts, at the level of abstraction I just used there, are confusing. Thinking this abstractly about programs actively impairs programming in many cases.

This level of 'unification of syntax via abstraction' is relevant only really to language designers.


> If its authors hadn't used their knowledge of category theory, but rather named the abstractions in more ordinary terms, the language would be more useful to more people.

The language would be just as useful, but it might _feel_ more approachable. Depending on the terms that replaced those it might be more confusing, or the opposite, than it is now.


Programming is applied algebra just like tying your shoes is applied knot theory :)


IMO, knot theory is not as closely related to shoe-tying as programming is to algebra...


It’s a bit tough. On one hand, John Baez is a great messenger because he’s such a fantastic communicator. But on the other hand, he’s a pretty horrible spokesperson for ACT because the man did theoretical physics and higher category theory for 30 years and has now decided to do “applied” work. It’s pretty evident in this course: he’s just setting up the frameworks people talk about using but isn’t going for any actual results that demonstrate the claimed application. I don’t think he’s ever actually done any applied work before he decided to revolutionize the field.

I once watched John talk down to some sort of research electrical engineer at a conference because his diagrammatic calculus didn’t line up with John’s approach, when anyone with a lick of sense would want to understand why their approach didn’t suffice and how these changes addressed its shortcomings - this person was a subject matter expert and Baez condescended to him about his own field! I think his attitude has, unfortunately, rubbed off on a lot of the people in his orbit.


All "applied category theory" books seem to be like this: they start by talking about how great the applications will be, but then instead of getting to the applications, they never stop developing the category theory for its own sake. A lot of math does similar things, to be fair, but I also don't count those as applied.


So, just to offer a few counterpoints: there is Tom Leinster’s work on diversity/entropy [1], where people working in mathematical ecology have been happy with his contributions. I’ve also seen the Penn robotics group start to use categorical machinery to nail down concepts like bisimulation of dynamical systems and develop a ‘type theory’ for motion plans that has its semantics in a category of dynamical systems [2].

[1] https://arxiv.org/abs/2012.02113
[2] http://www.tac.mta.ca/tac/volumes/35/45/35-45.pdf

I can’t stress enough that the Koditschek group are serious people who are using this stuff because it addresses problems they’ve been grappling with.


Semi-related: I read the first page of your second reference and really can't make any sense of it, despite having what I would have thought to be enough knowledge to at least orient:

"We aim to construct a physically-grounded compositional framework for hybrid system synthesis, particularly targeted at applications in robotics. Compositionality lies at the heart of language in general [WHM12] and its formalization underlies much of computer science in particular [Lee90]. However the behavioral modularization of physical synthesis for digital computing that arguably ushered in the information technology revolution [MC80] has proven much harder to achieve in analog computing technology [Mea89]. There are fundamental reasons for this challenge to become more severe in machines intended to perform mechanical work on their environments [Whi96]."

Since you know enough about this field to cite it, does it appear to you that this stuff could be written in a more comprehensible way, without losing expressive power? Or is this level of ornament necessary to say whatever it's saying?


It’s basically just saying that notions of compositionality from software don’t really translate over to analogue computers, and by extension robots that interact with their environment. It seems like they’re being careful to make atomic statements that are supported by the individual papers they’re citing. They may have been dealing with some particularly obstinate reviewers and adopted a defensive writing style (it can be a tough habit to kick; I’ve definitely given overly precise seminar talks because someone brought up size issues the week before and I just didn’t want to deal with any pedantry).


And then the difficult part becomes: how could I verify that this is used in practical robotics products? There are a handful of citations to [2], many of which are more category theory. One non-category-theory citation seems to say, "there is this formalization that's still a work in progress, and go read that paper if you're interested"; it's hard to tell as an outsider how much they use it.

Unless I can prove it is used, it doesn't meet my bar. Maybe it's too new, but that means it still has to prove itself. Good luck to them!


I would say that at that point you’re probably just digging in your heels - that’s a major group that does serious work.

You can also look at Aaron Ames’s PhD thesis and how that work in hybrid dynamical systems, driven by category theory, has shaped his current output (which is 20-50 papers/year as a full professor at Caltech).


I completely agree. I have been looking for a long while and I haven’t found any examples where using CT made sense. There seems to be a lot of “look at me, I am a mathematician!” usage of CT to obfuscate what are really very simple ideas that could otherwise have been explained much more simply, without requiring the reader to know an obscure subset of CT.

So your basic idea happens to be a Co-Variant Non-Infinitoid Subspace Category? Seriously who cares?


I would care if, e.g., some prior approaches did it poorly, and the category theory helped clarify it among the engineers who then agreed it was a good way to think about their problem.


> And when I talk to people who are actually on the front lines that try to use category theory to solve some problem, they tell me that it ended up being completely useless. I'm open to having my mind changed.

I aim to provide an example of applied category theory in computer science. Your view may differ depending on the time frame involved; however, I maintain that even the most recent findings can rapidly transform into practical programming concepts.

The concept of 'monad' was initially introduced by Roger Godement in 1958, known as the 'standard construction'. The term 'monad' itself came into use later, in 1967, credited to Jean Bénabou. The connection between monads and functional programming was first established in 1989 by computer scientist Eugenio Moggi, who used monads in semantics to interpret the lambda calculus, a computational model that employs variable binding and substitution for function abstraction and application. The formal incorporation of monads into Haskell occurred in 1992, through Philip Wadler.

Since then, monads have increasingly been integrated into mainstream programming languages, including JavaScript. For instance, JavaScript Promises can be viewed as a kind of monad. They encapsulate a value and allow its transformation with the .then method, which can be equated to the 'bind' operation in the realm of monads. Furthermore, a new Promise can be generated from any value using Promise.resolve, an operation analogous to the 'return' operation in monads. Promises have a very practical application and are used in programming by millions of developers every day.
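
To spell out that analogy in code, a minimal Haskell sketch (fetchAge is a made-up stand-in for an async lookup, and the Promise names in the comments are just the informal correspondence described above; Promises also bake in failure handling and eager memoization, so it isn't an exact equivalence):

    -- return ~ Promise.resolve : wrap a plain value.
    -- (>>=)  ~ .then           : feed the wrapped value to a function
    --                            that may itself return a wrapped value.
    fetchAge :: String -> Maybe Int
    fetchAge "alice" = Just 42
    fetchAge _       = Nothing

    example :: Maybe String
    example = return "alice" >>= fetchAge >>= \age -> return (show (age + 1))
    -- example == Just "43"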

The concept of a monad is one example that is well-understood; however, there are numerous other such ideas. One of these is the Functor, often referred to as "mappable" in mainstream programming languages. Research in this field is ongoing, with concepts like applicative functors, introduced by Conor McBride and Ross Paterson in 2008, already gaining popularity in languages such as Haskell.


I understand those concepts well, but I find the causal chain of events questionable, especially when the story is told by category theorists. For example, I was once told by a prominent category theorist that certain aspects of the API design for C++ futures were informed by category theory. When I spoke to the people who actually wrote the RFCs, they said category theory had no part in their thinking.

This is what it seems category theorists tend to do: they build a framework, express existing ideas within that framework, and then claim that the existing work is an application of category theory.

I think that sort of retcon CAN be done. Many math topics like Fourier Analysis have taken practical results, built a math theory behind it, and then converted the entire field to the theory as standard domain knowledge. But the evidence that category theorists have done this, even in the realm of programming language APIs, seems slim.

Even if you agree that the APIs align and the people designing the API know their thing is secretly a monad, still calling that a "success" of category theory is rather weak. By comparison, just because something has addition, units, and inverses, it doesn't mean that group theory is being applied in a meaningful way. The nontrivial results about groups that DO seem to lead to "real" applications are related to the structure of particular groups, as used in cryptography. But I have seen little similar in the way of category theory. Porting concepts from category theory to Haskell is nice, maybe, but it's not clear to me that doing so isn't creating as many problems as it's solving (e.g., lenses solve a problem that no other language has, and most Haskell programs seem to me to be completely unreadable by anyone except the author, despite me being no stranger to Haskell).


> ... but I find the causal chain of events questionable ... When I spoke to the people who actually wrote the RFCs, they said category theory had no part in their thinking.

I think you missed one level of indirection in my argument. I'm not claiming that language designers employ Category Theory explicitly when designing new language features (although this is indeed the case with Haskell, PureScript, and similar languages). Rather, my argument is that Category Theory features typically debut in these purely functional languages, and then you will see on the Wikipedia page of mainstream languages something like "Influenced by Haskell" (e.g. Rust, +20 others). Category Theory is very abstract by nature, so I think it is only natural that there is some kind of indirect step through e.g. Haskell where people can see it in action first, before such features land in mainstream programming languages. The language designers would then say, 'Well, I have been influenced by Haskell (and not by Category Theory)', although the concept behind it is truly based on CT.

Many programmers don't keep math books at their workstation. However, they constantly utilize mathematical concepts in their coding. Often, they refer to programming language tutorials where these mathematical ideas are already implemented, for example, in Python or C++. This is as opposed to learning the concepts directly from a book on Algebra, Geometry, or Trigonometry. However, that still qualifies as applied mathematics. Hence, I see no reason why it shouldn't count as applied CT if you borrow concepts from Haskell that are based on CT.


> Often, they refer to programming language tutorials where these mathematical ideas are already implemented, for example, in Python or C++.

The difference is that _someone_ is actually implementing the features they're relying on, and those people know the math, understand its importance, and deliberately apply it. I know because I dig through their git commits so I can interview them about their work for my book.

Your argument puts a lot of faith in Haskell. Let me take one path in your example to make this concrete. Rust was influenced by Haskell; how, specifically? [1] claims it was mainly in "typeclasses, type families." Picking one randomly, the GHC user guide [2] describes type families as originating from three specific papers, [3], [4], [5], none of which mention category theory (nor does the much longer Haskell Wiki [6]). It seems to suggest, what is obvious to me, that not _all_ features of Haskell are related to category theory.

If every time I trace claims of applied CT I find big holes, how could I possibly believe that so many great ideas that programmers use every day are secretly deliberately based on CT?

There is a deeper discussion to be had here, along the lines of "does it count as an application of math if none of the people doing the applying think about the math." I think the answer is no. Math is a modeling language and if nobody models their problems with it then theorems do nothing. And so, if all of the category theory is lost in the translation from some (perhaps itself far removed) language feature of Haskell and later inspired language design, then that tells me category theory is the _wrong_ modeling language for that design problem. If it were good, and if the resulting theorems were essential and useful, then the practitioners would adopt it. That is a failure of CT.

[1]: https://doc.rust-lang.org/reference/influences.html
[2]: https://downloads.haskell.org/ghc/latest/docs/users_guide/ex...
[3]: https://www.microsoft.com/en-us/research/wp-content/uploads/...
[4]: https://www.microsoft.com/en-us/research/wp-content/uploads/...
[5]: https://www.microsoft.com/en-us/research/wp-content/uploads/...
[6]: https://wiki.haskell.org/GHC/Type_families


> describes type families as originating from three specific papers, [3], [4], [5], none of which mention category theory (nor does the much longer Haskell Wiki [6]). It seems to suggest, what is obvious to me, that not _all_ features of Haskell are related to category theory.

Since you've mentioned it, one of the inventors of type families is Simon Peyton Jones. He is quite renowned within the Haskell community as he is also a significant contributor to the design of the Haskell language. Now, let's examine what he has to say about Category Theory.

> "I say “surprising” because anything with as exotic a name as “monad” — derived from category theory, one of the most abstract branches of mathematics — is unlikely to be very useful to red-blooded programmers. But one of the joys of functional programming is the way in which apparently exotic theory can have a direct and practical application, and the monadic story is a good example."

https://gist.github.com/dtchepak/3163428

In his own words, "apparently exotic theory [Category Theory] can have a direct and practical application." That indicates you hold a differing view from Simon; fair enough. However, I believe that few people on this planet have a deeper understanding of Haskell than he does, which is why I hold his opinion in high regard.

So, what about the other major language designers of Haskell?

Firstly, there's Philip Wadler. For a clearer understanding of his work, you can refer to his presentation titled "Category Theory for the Working Hacker" (https://www.infoq.com/presentations/category-theory-proposit...). The title is quite self-explanatory.

Then there's John Hughes, who is renowned for "Generalising monads to arrows", known as Hughes Arrows. His work has a profound correlation with Category Theory, a fact that John Hughes is fully aware of.

In my view, these examples provide sufficient evidence to affirm that Haskell's language designers are well versed in Category Theory concepts. These concepts, to varying degrees, have informed the design of Haskell. Of course, discussing individual features such as type families can become contentious when determining the extent of Category Theory's influence. For instance, it could be argued that these features are more closely related to Type Theory. However, many argue that Type Theory is the internal language of Category Theory.

> There is a deeper discussion to be had here, along the lines of "does it count as an application of math if none of the people doing the applying think about the math." I think the answer is no

I'd like to introduce you to the topic known as the Curry-Howard-Lambek correspondence, which posits a three-way isomorphism between types in programming languages, propositions in logic, and objects within a Cartesian closed category. You may already be familiar with the term "proofs-as-programs". It's important to note that the type system in Haskell isn't sufficient for theorem proving, but it becomes capable of constructing elementary proofs if you enable features like DataKinds, GADTs, PolyKinds, ScopedTypeVariables, TypeApplications, TypeFamilies, and TypeOperators. In contrast, I've also worked with Coq, which undeniably possesses the strength to prove intriguing theorems. What's particularly notable about Coq is its capacity to output Haskell programs based on its proofs, resulting in a Haskell program that lacks the strong guarantees of Coq. In this sense, you could imagine the resultant Haskell program as a "proof sketch" (which is usually correct if it compiles).
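
For a concrete taste of what those elementary type-level proofs look like, here is a standard textbook-style example (my own phrasing, not tied to any particular library): with DataKinds, GADTs, and TypeFamilies, the type of append below is checked by the compiler as a statement that lengths add.

    {-# LANGUAGE DataKinds, GADTs, TypeFamilies, KindSignatures #-}

    -- Type-level naturals and length-indexed vectors.
    data Nat = Z | S Nat

    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    type family Add (m :: Nat) (n :: Nat) :: Nat where
      Add 'Z     n = n
      Add ('S m) n = 'S (Add m n)

    -- The signature is a proposition ("lengths add"); the accepted
    -- definition is its proof.
    append :: Vec m a -> Vec n a -> Vec (Add m n) a
    append VNil         ys = ys
    append (VCons x xs) ys = VCons x (append xs ys)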

I don't presume that you would refute this isomorphism. However, it's possible that you again have a differing perspective on what isomorphism actually means. To me, it's pretty much equivalence.


I'm aware the Haskell designers like category theory, and so yes, the question boils down to the degree of influence of category theory on particular language features.

Still, to your last point, the existence of an equivalence doesn't automatically count as an application of all sides of the equivalence. Coming up with a post-hoc math explanation for some practical idea doesn't make the practical idea an application of the math. The practitioners have to adopt that framing or rely on theorems proved via the math that they didn't otherwise rely on. It would be like saying that all mathematical theorems are an "application" of the typed lambda calculus because of the Curry–Howard correspondence. It's a disingenuous and fruitless way to think about what it means to apply an idea to solve a problem.

Automated theorem verification may use category theory (does it?), and I'd love to talk to the main contributors to Lean, which appears to be the system that's gaining the most traction among average mathematicians, to discuss that.


> I'm aware the Haskell designers like category theory, and so yes, the question boils down to the degree of influence of category theory on particular language features.

So, if only a segment of Haskell's language design is based on Category Theory (CT), and only a fraction of popular programming languages are based on Haskell, then this percentage, possibly in the single digits for a given mainstream language, is too minor for you to consider it a valid application of CT. That's a valid perspective; I'll leave it at that.

> Still, to your last point, the existence of an equivalence doesn't automatically count as an application of all sides of the equivalence. ... The practitioners have to adopt that framing or rely on theorems proved via the math ...

I really don't agree with your definition of application. If I have an equivalence between three different concepts - types, propositions, and objects within CCC, then, in my viewpoint, you don't need to explicitly consider your application of a concept in the other two fields for it to be recognized as an application. To me, that's akin to saying that you can only do math in a certain way, exactly how mathematicians portray it in their textbooks. If you don't know that because you learned math more indirectly, like from programming blogs with code snippets that encapsulate the same definitions, then, according to that view, you won't be doing applied math.

To me, it simply boils down to a different perspective on the term application, one that isn't specifically about CT.

> Automated theorem verification may use category theory (does it?), and I'd love to talk to the main contributors to Lean

I have listened to some podcasts with the main contributors and creators of Lean. Lean (its kernel) is based on homotopy type theory (a flavor of type theory – specifically intensional dependent type theory).

> most traction among average mathematicians

Yes. Many math graduate students are using Lean to apply concepts from textbook math, such as HoTT, which has a strong connection to Category Theory. Basically, you can't read the HoTT book without knowing CT, and you can then apply it in Lean. I think this would even satisfy your definition of applied math.


> lenses solve a problem that no other language has

Indeed other languages don't have problems dealing with immutable collections since almost none have good immutable collections. All I can say is that, when working with other languages, I wish they did have that problem!

> most Haskell programs seem to me to be completely unreadable by anyone except the author

Interesting. That's the opposite of my experience. Haskell is pretty much the only language where I expect to be able to read code written by others ("others" including "myself a year ago").


Even where they lack the problem I find myself wanting lenses (or other optics) every couple months at a minimum.
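
A hand-rolled sketch of what an optic buys you on immutable data, using the van Laarhoven encoding (the real lens library generalizes this a lot, but the core is this small): one first-class value that can both read and update a nested field, and that composes with ordinary (.).

    {-# LANGUAGE RankNTypes #-}
    import Data.Functor.Const (Const (..))
    import Data.Functor.Identity (Identity (..))

    -- A simple lens from a structure s to a focus a.
    type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

    _1 :: Lens (a, b) a
    _1 f (x, y) = fmap (\x' -> (x', y)) (f x)

    _2 :: Lens (a, b) b
    _2 f (x, y) = fmap (\y' -> (x, y')) (f y)

    view :: Lens s a -> s -> a
    view l = getConst . l Const

    over :: Lens s a -> (a -> a) -> s -> s
    over l f = runIdentity . l (Identity . f)

    -- view (_1 . _2) ((1, "hi"), True)           == "hi"
    -- over (_1 . _2) (++ "!") ((1, "hi"), True)  == ((1, "hi!"), True)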


I would appreciate if someone wrote here their opinion on Matriarch [1], an application of category theory to biology/physics via software.

[1] https://web.mit.edu/matriarch/


My first approach is to look for references to the project on GitHub. Since this project is not itself on GitHub, the next best option is to search the URL for code comments, docs, etc.

https://github.com/search?type=code&q=web.mit.edu%2Fmatriarc...

It has one result, which appears to be someone demonstrating their visualization library using a model from Matriarch as an example.

Would love to hear from users of the software.


Loved your previous book. Will the new book, "Practical Math for Programmers", be released this year?


Sorry, but no. I've got too much going on. But I am at ~200 pages of a draft


I mostly agree. Category theory is typically 1-2 steps separated from what programmers do.

Category theory is useful for thinking about X, X is super useful for modelling and designing Y.

X in {Petri net, relational algebra, feedback diagrams, type system, ... } and Y in {communication protocol, database, programming language, ... }


Being 2 steps separated is no problem for me. Can you point to any specific production software systems, for which the author of those systems says they used CT for designing stuff, and that it helped in a meaningful way? The only one I have heard of is UMAP, and it was closer to "plain ol' category theory" rather than Petri nets or any of the stuff the applied CT folks do.


This paper seems to be pretty applied, but I've only skimmed the intro:

https://arxiv.org/pdf/1504.05625.pdf

That starts to make sense to me as a sort of generalized framework for Feynman-diagram-like things. Explains why particle physicists would be drawn to category theory as well.

Feels like there's still probably a hell of a lot of dots to connect there though.


Sorry, off-topic: can you recommend some (applied) math books for self study people can read before your book is published?

I realize the question is somewhat strange and the answer depends on what people work on. Let's say it is not ML/AI.

I've enjoyed e.g. 'Calculus' by G. Simmons and Niven's 'The Mathematics of Choice' even though I didn't apply any of that at work.


I can't say I've read any that I could recommend. I have heard nice things about the Princeton Companion to Applied Mathematics (TOC: https://assets.press.princeton.edu/chapters/c10592.pdf) but I haven't read it personally, and it seems to be more of a reference text with less stuff relevant to programming specifically.


Developing conceptual maps/diagrams, ubiquitous language, and then implementing such a system is topos theory:

We’re starting with categorical maps and then transporting that model into a type theory, while maintaining the semantics.

Effective patterns in doing so are “applied category theory”.


OK, so name a concrete, production system that followed this pattern.


Two that I know of, when I worked there:

- Amazon Device Econ modeling platform, for econometrics

- Amazon Tax calculation platform, for things like VAT

But more broadly, that’s how you develop software: you’re theory building.

1. Develop language and rules in that language.

2. Give them semantic meaning in a diagram and set of example cases.

3. Develop a corresponding type theory for that.

4. Implement that synthetic domain theory in a general type theory — eg, a programming language.

You don’t have to know that it’s topos theory to do those steps, but each is better when you’re explicit about them — and knowing how to guide the first step to get good results at the last step is definitely applied topos theory.


Are you implying that the people who worked on those two platforms had no idea what topos theory is (but they were "applying" it nonetheless)? Or that they explicitly modeled their stuff in topos theory, and that improved upon the former way they had been doing it without topos theory? I don't count the former as an application of CT, but I would count the latter.

References to primary sources would be appreciated, or feel free to email me at mathintersectprogramming@gmail.com with the name or contact information of someone who directly worked on those systems that I could talk to.


You might want to check the most recent work on epidemiological modeling. Baez blogs extensively about it.


> When they talk about applications to databases it seems quite unrealistic

I read a paper that claimed that the category of polynomial-like things ("Poly") was somehow useful for database migrations! Without a concrete example, that's just an absurd statement to make. If you don't explicitly connect the dots between SQL schemas and polynomials, then you're just hand-waving. It's the equivalent of: "I have a marvellous proof but alas it is too long to fit into this margin."



This paper is actually interesting and talks about real-world schemas. I particularly like the concept of distinguishable nulls that define a sane equality operator, unlike the traditional SQL NULL.

The paper I was referring to used the category Poly, and literally just stated that Poly could be used for database migrations without elaborating further.


I don't know the paper you are talking about but the author of this one (David Spivak) does a ton of research on Polynomial Functors and he has a whole book on them: https://topos.site/poly-book.pdf

Poly is an incredibly rich category which can be used to describe a ton of stuff such as finite state machines, neural networks, game interactions, wiring diagrams and dynamical systems more generally. I wouldn't be surprised at all if Poly can describe database migrations.
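
As one concrete (and heavily simplified) illustration of the state-machine case, my own sketch rather than anything from the book: the polynomial p(s) = o × s^i packages "emit an output, then wait for an input", and a Moore machine is just a start state plus a coalgebra s -> p(s).

    {-# LANGUAGE ExistentialQuantification #-}

    -- The polynomial functor  p(s) = o × s^i  as a plain Haskell type.
    data P i o s = P o (i -> s)

    -- A Moore machine: a hidden state type, a start state, and a step.
    data Moore i o = forall s. Moore s (s -> P i o s)

    -- Example: a toggle that reports its current state.
    toggle :: Moore Bool Bool
    toggle = Moore False step
      where
        step s = P s (\pressed -> if pressed then not s else s)

    -- Feed a list of inputs, collecting the outputs along the way.
    run :: Moore i o -> [i] -> [o]
    run (Moore s0 step) = go s0
      where
        go s []       = let P o _ = step s in [o]
        go s (i : is) = let P o next = step s in o : go (next i) is

    -- run toggle [True, True, False] == [False, True, False, False]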


> can be used to describe a ton of stuff such as

One thing that bothers me about applied category theory is that this (being able to describe stuff) always seems to be sufficient for people to claim it is applied. But if the people doing the applying don't find that framing particularly useful, then to me it does not count.


We've used Poly to prove results about functorial data migration.


Hey you might be interested in this work-in-progress type theory for Poly: https://github.com/toposInstitute/polytt


That might be a case of the author being really used to a certain crowd and writing for them. The work by Spivak on databases is well-known in ACT so the connection comes pretty easily. Also, in fairness, it's pretty easy to Google your way around that claim.


This agrees with my experience when I was a young mathematician looking for math that is useful.


I ended up there too.

Nice for theoretical work, bad for practical work.

Functor is useful though.


I’ve seen many “applied” category theory posts and books, but it’s important to remember that pure mathematicians’ definition of “applied” is way different from, and more boring than, what you’d think. I have yet to see some insight that CT gave us about a real-world application that wouldn’t have been discovered without CT. Most of CT to me seems like definitions on top of definitions.


Honestly, category theory is more useful for programming language designers than for programmers in general.

Here are some of the more interesting things it's good for:

1. The lambda calculus is the basis for most functional programming and for closures. The lambda calculus is also tightly linked to Cartesian closed categories. And a huge number of structures in math are either Cartesian closed categories, or at least topoi, which are closely related. This connection can be used to quickly sketch out designs for exotic programming languages.

2. Many simpler categorical ideas map cleanly to programming languages. Functors are parameterized types with `map` functions. Natural transformations correspond to certain kinds of tidy polymorphic functions between parameterized types. Even monads are basically just nested parameterized types that support `flatten`. And so on.
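
A quick illustration of those correspondences (the bindings are just examples; fmap, listToMaybe, and join are standard library functions):

    import Control.Monad (join)
    import Data.Maybe (listToMaybe)

    -- Functor: a parameterized type with a `map`.
    doubled :: [Int]
    doubled = fmap (* 2) [1, 2, 3]      -- [2, 4, 6]

    -- Natural transformation: a polymorphic function between two
    -- parameterized types that never inspects the elements.
    firstOne :: [a] -> Maybe a
    firstOne = listToMaybe

    -- Monad, in the `flatten` presentation: join collapses nesting.
    flattened :: [Int]
    flattened = join [[1, 2], [3]]      -- [1, 2, 3]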

Now, most of this stuff is pretty shallow category theory. It's mostly definitions. But they're definitions that allow you to draw interesting connections between functional programming, parameterized types, and distant branches of math.

Used right, you can design some pretty unusual languages that fit together cleanly.


> Honestly, category theory is more useful for programming language designers than for programmers in general.

Probably true. However... I wouldn't take that comment too far. IMO if more programmers took interest in the subject, they might begin to see complexity/simplicity they didn't recognize before, and make different language choices, or take a different approach to compositionality.


I agree, especially for programming language semantics or abstract compilation.

Also it is very useful for theoretical computer scientists.


> Most of CT to me seems like definitions on top of definitions.

More like definitions of definitions :)


My excerpts for the HN crowd:

>I'm not a programmer. I apologize for this flaw.

> this is not a course on "categories in computer science". Nonetheless, because category theory is about understanding and organizing abstract data structures, everything I say will be relevant in some way to computing! Furthermore, several chapters in Seven Sketches explicitly discuss databases, and type systems, and other aspects of computer science.


Great content, too bad some of the links are broken now.


Some things in Haskell seem so laboriously interwoven with Category Theory. Take, for example, representable functors for memoization. Pages of Category Theory to do something dead simple. In comparison, memoization in JavaScript is just throwing things into a {}.



How much time do you guys really need for such a course? I have them stacking up and have a list of a few dozen books and courses to go over.

I am beginning to think I'm doing this wrong, or simply don't know how to prioritize, a common modern problem of course.


If you’re spending more than a few months on a topic you might be doing it wrong. I find that following a slightly stressful fast-paced schedule is actually kind of important for effective learning.


I'm working through a similar course with friends. We take two hours a week, and occasionally do homework. We're about halfway since we started early this year. I feel it's fine, no need to rush. The upside of taking it slow is that it stays with you for longer, I think. If you rush it and find no immediate application afterwards, you risk forgetting everything.


Long term retention requires spaced repetition of course. But I think the stressful pace does help not get bogged down on a first pass of the material.

It's impossible to cover everything about a topic in a short amount of time, but when you try, you at least develop a map of the terrain.


Is there a PDF version?


Very cool topic list.


I thought this said Joan Baez and was very intrigued.


Me too actually. She's got a new film coming out if that's any consolation.

https://www.imdb.com/title/tt26594061/


She is his cousin.


And her father got John into physics.


But not lyrics.


Surely Baez's even in American English (yeah, I know it's a Spanish name).

British English almost always adds 's for the singular possessive, even when the name ends in s, so Dickens's and St. James's.

However, American English often uses the plural possessive suffix ' for names ending in s. One Spanish name example I have seen is Bezos' for something of Jeff's (his billions, or his wives, I forget which).

But Baez' makes no sense to me on any side of the Atlantic.



It's fine & I've used that. Given a name (say Jones), the possessive is formed with a "'s" suffix (Jones's), but by convention you can elide the final "s" to avoid duplication (so: Jones'). I'm happy that can be done with a "z", hence Baez' is fine.


Surely applied category theory is an oxymoron


The attitude that a serious academic subject has no practical application is pretty anti-intellectual: it's a good heuristic to say that it's probably wrong.


Why is everyone here so obsessed with category theory? It has basically no useful theorems.


I think it's mainly software folks who feel mathematical when learning it. Mathematicians primarily use it as a linkage and general tool in some context. I feel category theory needs something to latch it on to. Most software-based applications seem a little too contrived, unless I have missed something interesting.


If CT is used to connect different parts of math and software is basically math proofs, then it makes sense programmers find CT interesting.

It lets you connect your software to math concepts.


> software is basically math proofs

But it's not.


Not all software, but by the Curry-Howard correspondence you can arrive at programs as proofs.


Why would I get down voted for this comment?


Because many people (myself included) find it A) interesting, and/or; B) useful.


Please explain how you find it useful


Odd take on a post called "applications of category theory"

I mean, topology didn't have so many applications 100 years ago, but now that shit's everywhere. Math is the ultimate R&D. It's literally the language of causality. Imho it can't not be useful.


I have looked into applications of topology a lot and "everywhere" seems like a stretch, though perhaps you mean it in a particular sense that differs from mine. Could you give some examples of what you think of as the most useful practical applications of topology?


I believe it's used in cosmology to depict the "shape" of the universe


It is used everywhere. ML/AI would be a good example.


Give me a specific example. Maybe a link to some code, or a design doc, where topology comes into it.

If you're thinking about "topological data analysis" and "persistent homology" or similar, I have been down that road already and consider it a dead end (nobody uses it). See: https://buttondown.email/j2kun/archive/whats-in-production/


I think you're confused about what "applications" means to mathematicians



