This does not seem very informed, starting from the false premise in the question up to the random concepts listed in the answers.
Calculus is actually important for Computer Science; it's actually important for everything. It's where you learn how to handle the exponential function and the natural logarithm, how to do approximations and bounds, how to handle infinite series, etc., and those things then appear all over the place. Unlike most things listed, it's something that you can expect to encounter almost regardless of what domain you are interested in.
I mean, the guy asks what should replace Calculus, and then the first answer includes "Asymptotics" and "basic limits, sequences and series", so: actually calculus. In general I cringe a little every time I hear that Computer Science people should focus on "discrete math", because without tools from analysis you can only solve the most trivial discrete problems. And yes, calculus by itself is hardly ever directly applicable in CS, yet you still have to learn it; tough luck. What I think is not stressed enough is that applying math is hard, and you need to learn a lot of it before you have enough tools to tackle problems anywhere close to real-world complexity.
The top answer also lists random concepts. I am currently learning probability, for applications in machine learning. "Discrete spaces, Bayes theorem and expected values" you can learn in a day, while "Markov's, Chebyshev's and Chernoff's inequalities" are mostly useful only for further theoretical work, as is "the law of large numbers". What will really be useful depends a lot on the application: if you are a theoretical computer scientist, mastery of generating functions and transforms will be valuable, and that is one of those instances where discrete problems are solved via tools from calculus/analysis. For machine learning you need to know everything about the normal distribution by heart, which means you have to know everything about the exponential function by heart, so again back to calculus. Notions from information theory are useful too, but of course none of the ones he listed. The comment "This is a must for modern programmers." just sounds comic.
>Calculus is actually important for Computer Science; it's actually important for everything
This.
If you take a look at the MIT course "Mathematics for Computer Science"[1] you'll see that the only prerequisite for learning the math for cs is ... calculus!
I was a TA for that course last semester. I think the actual reason calculus is a prerequisite is just so that students will have some mathematical maturity beforehand. We didn't really teach any concepts that actually used calculus (that I remember).
That said, I totally agree that calculus is good to know for a CS student.
Oh, you do use calculus, at least in the notes, and they touch on exactly the topics I mentioned, so it's a good illustration. See for example the chapters about generating functions and about sums and asymptotics:
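As a small illustration of the calculus-flavored asymptotics those chapters cover, consider the harmonic numbers H_n, whose classical approximation ln(n) + γ comes straight from calculus (a Python sketch, my own example, not from the course notes):

```python
import math

def harmonic(n):
    """Compute H_n = 1 + 1/2 + ... + 1/n directly."""
    return sum(1.0 / k for k in range(1, n + 1))

# Calculus gives the asymptotic H_n ~ ln(n) + gamma, with error about 1/(2n),
# where gamma is the Euler-Mascheroni constant.
gamma = 0.5772156649015329
for n in (10, 1000, 100000):
    exact = harmonic(n)
    approx = math.log(n) + gamma
    print(n, exact, approx, exact - approx)
```

The direct sum is Θ(n) work, while the calculus approximation is O(1); that trade is exactly the kind of thing that shows up in algorithm analysis.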
> Calculus is actually important for Computer Science; it's actually important for everything. It's where you learn how to handle the exponential function and the natural logarithm, how to do approximations and bounds, how to handle infinite series, etc., and those things then appear all over the place.
It's still interesting to think about which branches of math are actually applicable to programming itself.
People tend to talk about programming and math as very strongly related, and of course there is the obvious relationship that "some computer programs do particular kinds of math" like you're talking about here.
But there is no (intuitive) overlap between writing, say, a web application and doing algebra or calculus computations on paper. There are, however, some connections; a couple of simple ones off the top of my head:
- You write a recursive program the same way you write an inductive proof
- Abstract algebra and category theory are likely relevant, especially for metaprogramming. My math education hasn't included this, so I can't say much more.
- Linear algebra is just ridiculously important
- Statistics for machine learning. Also for figuring out how to combine data in a meaningful way. There are also a lot of people asking statistical questions directly, and writing programs is how you get those kinds of answers in a reasonable timeframe.
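The parallel in the first bullet is worth making concrete: the base case and inductive step of a proof map directly onto the base case and recursive call of a function (a small Python sketch of my own):

```python
def sum_to(n: int) -> int:
    """Sum 0 + 1 + ... + n, structured exactly like an inductive proof."""
    if n == 0:
        # Base case: the claim holds for n = 0.
        return 0
    # Inductive step: assume sum_to(n - 1) is correct, extend the result to n.
    return sum_to(n - 1) + n

# The "theorem" being proved is the invariant sum_to(n) == n * (n + 1) // 2.
print(sum_to(10))  # → 55
```

Checking that the recursive case preserves the invariant is literally verifying the inductive step.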
> Abstract algebra and category theory are likely relevant, especially for metaprogramming.
In general, the whole "oh yeah CS people should know some category theory and abstract algebra" is pretty hilarious.
First, it's a bit like saying "oh yeah CS people need to know the undergraduate basics and also the generalization that most mathematicians don't encounter until a couple years into grad school."
Second, most people who say this really mean "a conceptual grasp on different types of morphisms is useful". But that's like saying you need calculus in order to drive a car; or, in the case of categories, it's like saying you need two semesters of real analysis in order to drive a car.
Why not just say "knowing about different sorts of mappings is pretty useful in functional programming"? Knowing how this generalizes to more abstract mathematical objects is totally unnecessary.
Well, frankly, you can get along without knowing the "Gang of Four" design patterns and still write Java. By the same token, you don't need to know about iterables and comprehensions to write Python, smart pointers to write C++, graph theory to use a graph database, macros to write Lisp, etc.
By analogy, you don't need to know abstract algebra and category theory to write Haskell. But as in the other cases, knowing helps.
"Different sorts of mappings" is NOT synonymous with "category theory". Not even close. Heck, Euclid knew about "different sorts of mappings".
Most everything in Gamma et al is arguably useful for everyday programming in Java. Maybe 5-10 pages of MacLane is useful for everyday programming in functional languages.
Unless by "Category Theory" you mean "5-10 pages of MacLane", Category Theory -- on the whole -- is a horrendously inefficient way of teaching about "different sorts of mappings useful in functional programming."
Unless you want to use functional programming as an environment for doing pure mathematics, there's no reason to actually study Category Theory.
I've really never needed an abstraction for semigroups, monoids, meet-semilattices, monads, comonads, arrows or catamorphisms in Clojure, Common Lisp, Scheme, or Hy.
These concepts become more relevant when I program Haskell, Agda, Isabelle/HOL, or Coq.
I'd say a stronger analogy can be made between reading MacLane's Categories for the Working Mathematician and reading Hoyte's Let Over Lambda; you really only need to read a little bit of either book to get the core concepts. That being said, depending on what sort of functional programming you're doing, a strong background in category theory or meta-programming can be enabling (or not).
> when I program Haskell, Agda, Isabelle/HOL, or Coq.
That's fair. Although Haskell is a bit of an odd man out in that list, both in terms of its nature and in terms of its typical use case.
> That being said, depending on what sort of functional programming you're doing, a strong background in category theory or meta-programming can be enabling (or not).
This is where the analogy between the two books breaks down. When you're using a functional programming language as a proof assistant, category theory can be helpful. But this is far less common than meta-programming.
Yes. But my more important point is that you can teach about these mappings in isolation, in the context of functional programming. No Category Theory needed.
You can teach about "different sorts of mappings" in just about any setting. In fact, that's kind of the whole reason Category Theory exists. So why teach the general result when all you care about is its application to functional programming?
The importance of structure preserving mappings shows up in many other places besides functional programming. I do believe context is important, but having multiple contexts is even better.
As someone currently teaching themselves linear algebra (via Strang), I'm curious as to why you believe linear algebra is ridiculously important. I see its applications to graph programming and obviously to cryptography, but I've done a lot of work in both of those subjects and never strictly needed a background in linear algebra to be effective.
Linear algebra is insanely important; I would consider it the most important area of numerical mathematics to know. Any sort of mathematical modeling you do is going to involve linear algebra at some point.
I have argued that almost all numerical mathematics can, in some form, be modeled as a linear algebra problem. Google's original PageRank algorithm is linear algebra. Remember the Netflix challenge? All linear algebra. Optimisation? Linear algebra. Want to do any engineering of a system? Behind the scenes it is all linear algebra, since every numerical technique for solving PDEs that I am aware of can be thought of in a linear algebra sense. Fluid modeling is all linear algebra. A lot of the machine learning I've seen is just linear algebra. I would also say that almost every simulation running on the world's supercomputers involves linear algebra.
Do you absolutely need linear algebra to do these things? No, just like I don't need to understand how my car works to drive it. But understanding how these systems operate can really help you use them in a more logical way.
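To make the PageRank remark concrete, here is a minimal power-iteration sketch in Python (the three-page link matrix and damping factor are my own toy illustration, not Google's actual setup):

```python
# Power iteration for PageRank on a tiny three-page web.
# link[i][j] = probability of moving from page j to page i (columns sum to 1).
link = [
    [0.0, 0.5, 0.5],
    [0.5, 0.0, 0.5],
    [0.5, 0.5, 0.0],
]
d = 0.85                # damping factor
n = len(link)
rank = [1.0 / n] * n    # start from the uniform distribution

# Repeatedly apply the (damped) link matrix; the rank vector converges to
# the dominant eigenvector of the "Google matrix".
for _ in range(100):
    rank = [(1 - d) / n + d * sum(link[i][j] * rank[j] for j in range(n))
            for i in range(n)]

print(rank)  # this graph is symmetric, so each page converges to 1/3
```

The whole algorithm is just "multiply a vector by a matrix until it stops changing", i.e. computing a dominant eigenvector, which is why people call PageRank a linear algebra problem.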
`ridiculously` might be hyperbole, but the two biggest applications I can think of are computer graphics and machine learning, and you can't so much as sneeze at either without running into linear algebra.
I have some sense of the linear algebra implicated in machine learning (I lived for many years with a comp. neuroscience PhD), but I have no visibility into graphics. So with my ignorance pinned to my lapel: the linear algebra involved in computer graphics is pretty simple, right? Just knowing how to manipulate vectors and matrices? Not a lot of eigenvalues, or for that matter orthogonalization?
I'm asking not to rebut but because I hope to prompt the sort of "sell us on linear algebra" statement that will make me study harder. :)
Much of computer graphics operates in projective space (http://en.wikipedia.org/wiki/Projective_space), so it's a little bit more than cookie-cutter linear algebra. This is done so that translations in 3D space — which aren't linear transformations — can be represented as 4D linear transformations in projective space via homogeneous coordinates (http://en.wikipedia.org/wiki/Homogeneous_coordinates).
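As a concrete illustration (the names and numbers are mine), here is how a 3D translation, which is not a linear map on R^3, becomes an honest matrix multiplication once points are lifted to homogeneous coordinates:

```python
def translation_matrix(tx, ty, tz):
    """4x4 homogeneous matrix translating by (tx, ty, tz)."""
    return [
        [1, 0, 0, tx],
        [0, 1, 0, ty],
        [0, 0, 1, tz],
        [0, 0, 0, 1],
    ]

def apply(m, p):
    """Apply a 4x4 matrix to a 3D point via homogeneous coordinates."""
    x, y, z = p
    v = (x, y, z, 1)  # lift to projective space with w = 1
    out = [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]
    # Project back to 3D by dividing through by w.
    return (out[0] / out[3], out[1] / out[3], out[2] / out[3])

print(apply(translation_matrix(1, 2, 3), (0, 0, 0)))  # → (1.0, 2.0, 3.0)
```

Because translations, rotations, scales and perspective projection are now all 4x4 matrices, an entire transform pipeline composes into a single matrix product, which is what graphics APIs exploit.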
Still, many folks use the APIs without really grokking that, so in practice it can be a bit cookie-cutter. I think of it similarly to how people can use crypto APIs without, say, really understanding what's going on under the hood.
BTW, projective space is also intimately related to elliptic curves (as you may or may not know — not implying anything!). So that darn linear algebra is lurking all over the place.
Likewise, any time you're talking about fields (even finite fields), vector spaces and linear algebra are right around the corner.
I'm familiar with projective coordinates for elliptic curves, but the funny thing about curves is that, for high-speed software, the math is tricky enough that you don't "need" to grok it: there's an "explicit formulas database" that you can just copy from:
Curves are what made algebra 'click' for me, getting me from my high school understanding of "algebra is math about unknown variables" to "algebra is about sets of related objects with operators that have identities and inverses".
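That "sets of related objects with operators that have identities and inverses" view can be made concrete with a toy curve. The sketch below (mine, using the small textbook curve y^2 = x^3 + 2x + 2 over F_17) shows the curve's points, plus a point at infinity acting as the identity, forming a group:

```python
# Toy elliptic curve group: y^2 = x^3 + 2x + 2 over F_17.
p, a = 17, 2
INF = None  # the point at infinity: the group's identity element

def add(P, Q):
    """Group law: chord-and-tangent point addition."""
    if P is INF:
        return Q
    if Q is INF:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF  # P + (-P) = identity, so every point has an inverse
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

G = (5, 1)        # a generator; it has order 19, so 19*G is the identity
print(add(G, G))  # → (6, 3)
```

Exactly the "operator with identity and inverses" structure from high-school algebra, just over a stranger set of objects; requires Python 3.8+ for the modular inverse via `pow(x, -1, p)`.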
Actually, although number theory touches a lot of "conventional" crypto (some of the design rationale for AES, polynomial MACs), most of workhorse cryptography in normal applications is not especially number-theoretic, and has more to do with information theory and statistics.
The belief that number theory is essential for cryptography is due to its role in public-key cryptography. But even if you're comfortable with number theory, new applications of public-key cryptography are tremendously difficult to get right, and require subject-matter specific expertise.
Well there is the Curry-Howard correspondence. Though I don't know if this correspondence is intuitive in the general case - I guess what this correspondence means is that programs can be translated to some computational model, which can then be translated to some formal logic, and then back again (but with these kinds of high-level facts, I'm bound to have misunderstood something, somewhere). Though this connection might be only intuitive for some kinds of languages - like statically typed functional languages.
I think it's amazing that programming can leverage and express so many mathematical facts - from implementing a binary tree that is enforced to always be balanced by the type system in Haskell, to using linear types to safely use and free OS resources in ATS.
I certainly agree. I'm trying to re-learn advanced calculus and analysis from a rigorous standpoint, as I think it is crucial for developing deep knowledge in probability theory, among other things.
Slightly tangential but, while there are many lovely books for linear algebra (like Halmos, Axler or Hoffman & Kunze), as a newcomer I don't find analysis literature so exciting. The standard, Rudin, is really synthetic Bourbaki-style. I like short and precise books, but I found it really removes most intuition. Any good books you happen to like? Perhaps Pugh or Zorich?
Among the many texts, almost the only one I really liked and learnt from is Courant's "Differential and Integral Calculus", together with its newer edition, "Introduction to Calculus and Analysis". It doesn't follow the typical modern division of topics; instead its two volumes treat single-variable calculus, multi-variable calculus and real analysis as one long sequence. The writing style is very pleasant, the exposition very intuitive, and it includes a lot of physics applications. Hardy's "A Course of Pure Mathematics" is great too, though it is much more, well, pure; still, it stays relatively intuitive, and the clarity with which he writes is unparalleled: many things I first really understood from this book. Those are old texts, though, and the notation and details of exposition differ here and there from modern standards.
I have the book by Pugh, but that one is pure^2 even as far as analysis texts go; the problems are difficult and there are no solutions, so I think it would work only for people deeply in love with absolutely pure mathematics, and most likely only in an academic setting, while I am interested in applications and self-study. Among modern texts, given your interests, I would look at "Understanding Analysis" by Abbott and "Measure, Integral and Probability" by Capinski, both pleasant to read and together providing a not-too-steep path toward measure-theoretic probability.
Thanks for taking the time to write such a thorough reply. I find Abbott a bit imprecise sometimes, but Courant is a fantastic book. Could you also mention some of your favorite math references, in particular those that deal with probability theory and statistics?
I think I am ready to tackle Rudin, but I don't really like it. I'm not alone there. Arnol'd is said to have called it Bourbakian propaganda. That's a bit extreme, but I certainly dislike books that strive to remove intuition: