Why Don't Computer Scientists Learn Math? (2016) (lamport.azurewebsites.net)
64 points by 0xCMP on March 24, 2017 | 111 comments


In Lamport's defence: (1) He's Leslie Lamport, so his comments about academic Computer Science should be at least considered before they are dismissed. (2) His audience (and the target of his critique) was not an arbitrary set of self-taught programmers, hackers, and venture-capital enthusiasts. It was mathematicians and computer scientists who went to a Leslie Lamport talk. The ability to read basic set theory notation isn't exactly a weird thing to expect in that environment. (3) The notation he is using is not at all complex or obscure in academic CS. You would literally have trouble reading even rudimentary papers and summaries of basic theorems in CS if you couldn't follow that notation. He's pointing out that he's seeing a lot of people who claim to be CS people who have the sort of disability that a "physicist" who doesn't know what a matrix or determinant is would have. Without those rudiments, they just can't follow the discussion. If you can't follow the discussion, you can't contribute to it. CS != Programming.


He is correct in the sense that computer scientists should be able to understand that. I'm one, or at least my diploma says so, even though I'm as far from academia nowadays as it gets.

So, I watched the lecture on YouTube. Speaking as someone who was glossing over the mathematical notation and essentially deferring to the speaker for correctness, I could not do it in the time between when he called attention to it and when he asked for hands up. I wouldn't have put my hand up, had I been in the audience then.

Had he asked "are you able to understand this notation", then I'd put my hands up. Scanning to see if anything unfamiliar is there takes hardly any time at all. Actually reading and being confident that you understand what it means and all the implications takes time, for someone who doesn't do that on a daily basis.

Using a very silly example: e = mc^2. Can you read it? Certainly. Do you understand it?


It's not the mathematical concepts they're struggling with in this example, it's the notation. As someone who majored in math while an undergrad, even I would have difficulty deciphering it were it not for an elective course in Symbolic Logic.


> It's not the mathematical concepts they're struggling with in this example, it's the notation.

Being a mathematician, programmer and psychologist, I can hardly believe that you can separate math notation from math itself. To be fluent in math one needs three "languages" that the mind can use in parallel to think about a math problem: math notation, English (or any other natural language), and visual mental imagery. If any of these mental languages is unavailable to a person, or if they are unable to translate from one to the others, then they will struggle with math.


Meh, it's like saying that somebody who doesn't know how to read doesn't struggle with concepts, but with understanding letters...

That is basic; I haven't touched math in 15 years and still know it.


Meh, any competent programmer can understand sets.

I got the meaning presented in the article because I've practiced the notation presented, but wouldn't have figured it out instantly otherwise.

We have different programming language syntaxes for a reason.


These are not programmers though, but rather computer scientists, a big difference


Exactly. This is basically just a for loop in a symbolic notation.

Math would be an entirely more enjoyable subject if it were communicated in more practical terms that can be executed on a CPU.


Well, not really a for-loop, it's the definition of a set.

There are so many interesting objects in math that cannot be executed on a computer. You could define the set of permutations of the naturals, for example, which is an infinitely large set; there's no algorithm you can execute which could produce such a set on your computer. Math would be severely limited if it could only deal with things that were computable on your workstation.


You could define the function to generate arbitrary sections of the set. It's not like you're limited to actually generating the entire set, in order to work with it in a practical sense.
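For instance, a rough Python sketch (made-up names, purely illustrative) that only ever materialises a finite section of the set, lazily:

    from itertools import islice, permutations

    def permutation_section(k):
        # Lazily yield the permutations of 1..k -- a finite "section"
        # of the larger object, never the whole thing at once.
        yield from permutations(range(1, k + 1))

    # Peek at the first few elements without building the full section:
    print(list(islice(permutation_section(4), 3)))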


True... in my haste I mistook set for sigma (my other pet peeve where symbols make CS translation unnecessarily difficult).


Sigma isn't a for loop, either. You can sum over infinite sequences. A for loop -- or any computation for that matter -- implies a certain dynamic process, with a certain computational complexity. A mathematical expression has no dynamics; it just describes an object. So Σ 1/(2^n) doesn't mean "sum up all 2^-n" -- i.e., it is not a process -- but simply, "the number equal to the sum of all 2^-n". Of course, summation of infinite sequences requires its own definition using limits.

You can have math that is entirely computational -- it's called constructive math -- but the result is a very different math. For example, in constructive math, all functions over the reals are continuous, and not all subsets of finite sets are finite.
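To make the process-versus-denotation point concrete, here is a toy Python sketch (purely illustrative): the loop below is a process producing ever-better approximations, while the sigma expression simply denotes the number 2 that those approximations approach.

    # Partial sums of 1/2^n: a computation with a cost and a dynamic,
    # unlike the sigma expression, which just names the limit (2).
    def partial_sum(k):
        return sum(2.0 ** -n for n in range(k + 1))

    for k in (5, 10, 20, 50):
        print(k, partial_sum(k))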


Sigma and power aren't for loops, but they do represent the melding of iteration and operation; the key is your definition of the sigma above. Silly computers are always trying to 'do' things, and often math can simplify problems too big to 'do' by fiddling with representation. So I do agree with you, I just think you're being cruel to the for loop example :-)


Perhaps I am being a bit cruel, but that's just to point out the fundamental difference between what an "operation" means in computer science and what it means in classical math (constructive math is a different story).

Classical math is concerned with relations between "preexisting" objects. The statement 4 + 5 = 9 does not mean that the numbers 4 and 5 are added by some algorithm to construct the number 9, but that the three numbers are related via a ternary relation. The statement 9 - 5 = 4 is an equivalent statement in classical math, expressing the very same relation, but means something radically different in computer programs. I think it's important for computer scientists to understand this difference.

Jean-Yves Girard, the inventor of System F (well known in functional programming), discusses this difference in Proofs and Types [1], in the very first section, called "Sense, Denotation and Semantics".

[1]: http://www.paultaylor.eu/stable/prot.pdf


But... come on. Sigma is basically a for-loop if you're translating Math symbolism into code.

The edge case of summing infinite sequences has no practical application in CS.


> The edge case of summing infinite sequences has no practical application in CS.

The number 2 is \Sigma_{n=0}^{\infty}1/2^n. So you're "using" sigma every time you use the number 2, which may be practical in some programs.

The fact that you can't write a program that sums an infinite sequence using an infinite number of computational steps doesn't mean uncomputable objects have no practical application. In fact, the formalism Lamport talked about in his lecture makes common use of them, and you do, too, every time you use floating point arithmetic. When you use floating point numbers, it's very convenient to think of them not as the very complicated objects that they are, but as a real number (an uncomputable object) with some error term; in fact, that's how floating point is thought of in the design of many numerical algorithms. In other words, objects that are directly representable on a computer are often conveniently thought of as approximations of uncomputable objects. If you can't write down what the non-computable object is in a language designed to assist in reasoning about how algorithms work -- which is the subject of Lamport's talk -- you're making life much harder for yourself.

Another problem of thinking of summation as a for-loop is that it makes you think of the definition as an algorithm, which it isn't. For example 4 * 5 = \Sigma_{i=1}^{5}4, but both of them are just different representations of the number 20. In a program it may make a big difference if you're writing 20, 4 * 5, or `for(i in 0..4) sum+=4`. In mathematical notation, all three are the same. It's not like one uses a cheap multiplication operation and the other an expensive for-loop.


This. It's clearly just the notation that's the issue. I believe CS has notation better figured out than the math world in general, since there's way more emphasis on good abstractions and overall language design (due to the different use case), while mathematicians seem to be much more focused on the underlying concepts, with the notation being an afterthought.


While there indeed is some ugly notation out there, I usually find that most of it is pretty straightforward once you understand the underlying concepts.

It usually only seems like bad notation if I am misunderstanding things.


Really? It's not that different from the notation I am used to.

  {f ∈ [1..N ⟶ 1..N]| ∀ y ∈ 1..N: ∃ x ∈ 1..N: f(x)=y}
would be the notation I am used to. I believe it's called set comprehension (or set-builder notation) in English.


While I can't comment on this situation in particular (a room of computer science researchers who couldn't understand basic notation), I can offer an anecdote about the lack of mathematical knowledge in industry.

Yesterday one of my coworkers submitted code for an upsert to code review. His logic for calculating the diff set was extremely complicated, subtly wrong, and filled with comments. To me the idea was incredibly simple, because I only needed to think of the upserts in terms of set notation. So in review, I wrote the upsert out in set notation, and then offered code to implement the idea.

My coworker was taken aback at how simple this set based implementation was, despite it being very basic set mathematics. While this was just an anecdote, I find that a lack of basic mathematical fluency peppers codebases with unnecessary complexity and myriads of edge cases that could easily be tamed with a slight application of mathematics.
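To give a rough idea (hypothetical names, not the actual code from that review): with both sides keyed by record id, the diff set falls straight out of a few set operations.

    def upsert_plan(existing, incoming):
        # existing, incoming: dicts mapping record id -> record
        existing_ids, incoming_ids = set(existing), set(incoming)
        to_insert = incoming_ids - existing_ids
        to_delete = existing_ids - incoming_ids
        to_update = {k for k in incoming_ids & existing_ids
                     if incoming[k] != existing[k]}
        return to_insert, to_update, to_delete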


Is it a web startup by any chance?

This sort of thing doesn't happen in finance, or generally speaking in places that require a hard degree and, in extreme cases, test math aptitude.


We're a web shop, but not a startup. I don't mean this to knock on my coworker: he loves learning and when presented with a new idea, takes his time to digest it and really apply it to his thinking.


I was really interested in learning math. Was. While I continue to be interested in math, I'm discouraged by the total lack of good, available instructional material on something as basic and essential as notation, including set theory notation. There is a ton of great material out there for total beginners and for people who have received formal instruction in advanced math. The in-betweeners get a bit shafted.

I dunno if things have changed since I last pursued this, but 5 years ago it was absurd.

CS, on the other hand, has a lot more material readily available for self-study. I find the subject itself also lends itself to being more accessible. Furthermore, unlike math, the practitioners of CS related fields seem to be concerned with readable notation.

So, mathematicians: accessibility is key! Math is fascinating, but inaccessible even to a large part of the intelligentsia.


Pick up any typical math text and you can find an index of notation in the front matter or appendices.

All these complaints about mathematical notation seem really uninspired to me. It's like if I went around complaining that programming shouldn't require all this horribly baroque textual input.

You may or may not have a point, but either way it's certainly not one that's helping you at all.

If you find yourself frustrated at a seemingly nasty piece of notation, often this is a (helpful!) signal that you're not fully grokking things. Make use of that confused feeling to dig deeper.

Admittedly, compared to programming there are considerably fewer online resources for hacking together some maths knowledge. However, in book form there absolutely are tons of excellent materials!

Pick a subject, Google around for text recommendations, and then go raid your local university's library. There are even pretty good IRC channels for various math subjects! Try hitting up #math on freenode.


I think you just proved his point. Yes, one must invest time and energy to gain fluency, but making the whole process more accessible will always make it easier and welcome more people.


I bought this book [1] a couple years ago to help with the notation, and it's awesome. I've been able to walk through papers that I never would have understood without this rosetta stone.

[1] https://www.amazon.com/Mathematical-Notation-Guide-Engineers...


Thank you! I got to the comments to find exactly something like this.

About the post, though, it would be way more constructive if the author proposed a way for people to learn what they're lacking instead of just complaining about it.


Thank you! I looked for something similar a while ago and came up empty. Whenever I asked math friends the answer was always "there's too much variation so a book couldn't tell you everything", which is probably true but even common things would help.

Someday I'd love to see a similar thing that's simply an operator-to-function index, where you can read in code/pseudocode what an operator does on a (bounded, for ease of reading) datatype.


A tool that parses equations in CS papers and outputs pseudocode is an amazing idea!


Thanks, I'll be checking this book out ASAP


I learned all of that while I got my CompSci major. Had I filled out some form I would have had a math minor upon graduation. But I could hardly tell you what any of that meant 15 years later, because it never comes up in anything I do.


Same thing for me. I took an extensive array of mathematics and physics courses while pursuing an engineering degree. More than 10 years as a practicing software engineer and I remember very little of it today, since most of it never comes up in my daily work. Back at graduation, I could have discussed a lot more maths intelligently than I can today.


Phew. I thought I was the only one that got a degree only to not remember it a few years later. I find that when I relearn the topics it does come a lot easier. The memory might be hard to retrieve but it's in there somewhere ...


I look after a part of product development in a large ecommerce company. I have at least one valid use case for university level math per week. And I hate myself for having let my math skills slide in the 6 years since I left physics. That's because I regularly run into problems that I know I could solve properly some years ago and struggle to formalise now.


Sure, but how much time did he give them to respond? The CS people maybe need a few seconds to "load" first-order logic into memory and may have been thinking about it before he gave them time to put their hands up.

He could have instead asked:

"Raise your left hand if you can interpret this formula, and raise your right hand once you've determined that you wouldn't be able to do it without consulting a resource."

After a set timeout he'd have a better sense of where people stood.

That sounds complicated though. Maybe people just prefer simple consensus algorithms, even with imperfect results.


Provide this legend with the formula. Now how many people understand it?

∈ = is an element of

∀ = for all

∃ = there exists
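And spelled out further in code terms (a rough Python paraphrase; f and N here are just example stand-ins, not anything from the talk):

    N = 3
    f = {1: 2, 2: 3, 3: 1}   # an example function from 1..N to 1..N

    # ∀ y ∈ 1..N: ∃ x ∈ 1..N: f[x] = y  --  every y is hit by some x
    print(all(any(f[x] == y for x in range(1, N + 1))
              for y in range(1, N + 1)))   # True: this f is onto, i.e. a permutation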


Exactly. This is a nomenclature issue. When this is spelled out it is easily understood. Math is not the issue here. Should CS students have an intro to abstract algebra? Probably. Do CS students (in those programs where CS isn't conflated with IT) take 3+ years of math courses? Yes.


His response would be that anyone who's studied even a little college math wouldn't need that legend. I learned that notation in High School.


Anyone who doesn't use that nomenclature on a regular basis is bound to need a refresher legend. The concepts are easily remembered (and used in all kinds of programming). But which direction the ∈ points is less easily remembered.


I learned the details of photosynthesis, the Krebs cycle, and a lot of other basic biochemistry in high school, but I don't remember much of that at all.

I did a fair bit of set work in college, but it took me about a minute to dig some of that up to read and understand the relationship that's being described. I don't think I've worked with sets using actual mathematical notation in over 10 years. It wouldn't have surprised me if I couldn't read it (although I would've found it somewhat distressing).

The weird part is his audience: active students and researchers within the field of CS. They're the ones that I would've expected to be most likely to understand what it said.


I took three semesters of math in college and don't remember any of those symbols.


If you took three semesters of post-secondary maths and never encountered "∈", you should ask for your money back.


I don't know whether I encountered it. Perhaps I just forgot. Either way, it hasn't been a notation used in real life.


Yep, exactly. I have a CS degree and I'm rusty on my symbols, but knowing them it's obvious.


What does the colon mean?


As others have said, "such that." It's worth noting that some people use a vertical line instead to mean the same thing.


I believe 'such that'


such that.


such that


"I had already explained that [1..N ⟶ 1..N] is the set of functions that map the set 1..N of integers from 1 through N into itself"

As a programmer, I would read that as if he is defining a set of function objects. And the domain and codomain of those function objects must be integers in the range 1..N.

Come to think about it... Isn't the size of the set exactly the number of permutations from 1..N? In other words N^N? If so, an audience of computer scientists would probably have understood the following better:

    import itertools
    N = 3  # pick some N
    # all N-tuples over range(N)
    list(itertools.product(range(N), repeat=N))


> If so, an audience of computer scientists would probably have understood the following better

I'm a very experienced programmer (>25 years) and I don't know the language you're using in your notation (Python maybe?). The kind of mathematical notation Lamport is using is much more universal (at least after he explains its particular peculiarities). Also, reading your notation, I assume that you're describing a list, while he's describing a set.


Yes, it is Python. Python has become a lingua franca in the programming world, and you'd be well-advised to learn it. It creates a list but that is inconsequential. Here is the fixed code (to actually generate permutations) and examples:

    >>> from itertools import *
    >>> [s for s in product(range(2), repeat=2) if len(set(s))==2]
    [(0, 1), (1, 0)]
    >>> [s for s in product(range(3), repeat=3) if len(set(s))==3]
    [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
    >>> len([s for s in product(range(4), repeat=4) if len(set(s))==4])
    24
    >>> len([s for s in product(range(5), repeat=5) if len(set(s))==5])
    120


> Python has become a lingua franca in the programming world

I don't think so. Probably depends on your section of the industry. In mine, C, Java and Matlab are all better known than Python.

> you'd be well-advised to learn it

I did; a few times, actually. I just keep forgetting because I never get an opportunity to use it. Once you've used well over 10 languages, you don't even try to maintain your skills as that would be a waste of time. You just relearn the language next time you need it, especially as popular languages come and go. Standard mathematical notation, however, has been with us, pretty much unchanged, for about 100 years now.

In any event, you can't express in Python nearly everything you can express in standard mathematical notation, unless Python has gained some features that allow it to express uncomputable objects since last time I used it. Does the itertools library support infinite sequences? How about uncountable sets?


The number of permutations of 1..N is N! ("N factorial"). The total number of functions from 1..N to 1..N is N^N.
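A quick brute-force sanity check in Python for small N (just to illustrate the distinction):

    from itertools import product
    from math import factorial

    N = 4
    functions = list(product(range(1, N + 1), repeat=N))   # all maps 1..N -> 1..N
    bijections = [f for f in functions if len(set(f)) == N]
    print(len(functions) == N ** N, len(bijections) == factorial(N))   # True True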


At least they don't rediscover calculus and get citations for it.

https://fliptomato.wordpress.com/2007/03/19/medical-research...


I sigh every time someone tries to explain a complex concept using a formula in that manner.

In the name of efficiency it is kind of being exclusionary, which I kind of resent.

I love math but it is not the only way to express complex concepts.

I depend more on concepts from statistics than on higher-level mathematics courses.

Is this data nominal, ordinal, interval or ratio? OK, let's work with that.


Richard P. Feynman (who else!) had a scathing critique of this: http://calteches.library.caltech.edu/2362/1/feynman.pdf

To quote him, in reference to new mathematics of the post-Sputnik era,

"In regard to this question of words, there is also in the new mathematics books a great deal of talk about the value of precise language - such things as that one must be very careful to distinguish a number from a numeral and, in general, a symbol from the object that it represents. The real problem in speech is not precise language. The problem is clear language. The desire is to have the idea clearly communicated to the other person. It is only necessary to be precise when there is some doubt as to the meaning of a phrase, and then the precision should be put in the place where the doubt exists."


Thanks for sharing.


How would you use statistics to express the set of permutations on a set of integers? Not saying it isn't possible, I'm just asking how you would approach that.


You would describe a Poisson process... define a permutation such that the set of them approaches a Poisson distribution. The result is a randomized, generalized sieve algorithm (also known as Gorosort). You end up on a journey through the combinatorial underpinnings of statistics.

It is a terribly complex description. Welcome to statistics. Permutation is a basic combinatorial concept that is assumed not to need explanation.


Then you get to define what those concepts mean. It is not that hard actually...


I'm pretty sure that "proof techniques and logic; induction; sets, functions, and relations; etc." are a prerequisite for understanding Turing Machines and NP-Completeness.

That they probably forgot all of it immediately after the final is indicative that they're interested in a career in software engineering, not research.


I think we should look at which operation T we apply to the set of all computer scientists. If T is an operation that restricts the set to computer scientists in the field of theoretical computer science (TCS), then the opposite seems to hold, as mentioned in this post, which talks about TCS math being more rigorous than applied math: https://windowsontheory.org/2014/10/12/applied-mathematician...

If, loosely speaking, T maps you to 'average computer scientist', then the situation is different. And so on.

So whether Lamport's observations hold water depends on the type of T: on which subset or group of computer scientists, or which metrics, you are looking at.


How come so many are complaining about this being a notation issue? It is as standard a notation as I've ever seen for expressing this. It's perfectly okay to be an engineer/developer and not be able to read that on the spot. But if you're a graduate student or young faculty member doing research in computer science and give that as the explanation, I think that's just being defensive.

When I took CompSci 101 15 years ago, basic mathematical notation like this was necessary to pass.


I suspect that some of them were just intimidated. Lamport is a well-known researcher in his field, and is giving a recorded presentation. Even a young postdoc might hesitate to suggest they fully understand something, only to have Lamport single them out and correct them on camera. In a different context, like a non-recorded seminar, you might see a more confident response.


That's very weird. Located near Heidelberg is Karlsruhe, where I am a CS bachelor student (the university is KIT). This is very basic and we are absolutely required to understand it without thinking twice.


> Remember, these were specially selected, bright young researchers from around the world.

[Emphasis by me]


In quite a few institutions CompSci is effectively math; undergrad CompSci and mathematics degrees may only vary in a few courses.

When I did mine we didn't have a single "programming" course in the degree; you were expected to learn the language a course used on your own.

Math and physics were at the undergrad level of their respective BSc degrees, and the coverage was nearly the same.

As far as notation goes, it was covered in one of the first three "101" courses you take; your first program was effectively handwritten in this manner.


I think the problem here really is that permutations are defined as bijections, which is rather unintuitive. Even though I learned that in my undergraduate curriculum, I have already forgotten it again.

Now think of it as a sequence and you get something like this: {(aᵢ), i ∈ 1..N | aᵢ ∈ 1..N, ∀ i ≠ j : aᵢ ≠ aⱼ}. Probably already easier to understand.

Or leave it out entirely. Formal mathematical definitions don't make sense when they are harder to understand than words and when you do not actually use them later on.


I can read this formula and understand it, I just need a little bit more time than the mathematicians. I'm pretty sure that was the case for most computer scientists there.


These [1] are the current ACM recommendations for CS among related fields.

In all of them, math plays a major role. But is math notation necessary to understand/communicate math in an ORAL way? I understand this is critical to write a paper in a terse way, but orally?!

[1] http://www.acm.org/education/curricula-recommendations


One area I would argue benefits greatly from this type of formal treatment is the specification of business rules in software. It can be quite insightful to formally spell out requirements from the business, and then apply converse/inverse/etc. analysis to drive out missing cases.

Admittedly, most of us can do this in our head quickly (for easy cases), but I find the formal evidence lends itself well to more complex scenarios.
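A toy illustration (the rules here are entirely made up): exhaustively enumerating the condition space mechanically surfaces the cases the informal spec never mentioned.

    from itertools import product

    # Hypothetical business rules over two boolean conditions
    rules = {
        (True, True):  "expedite",
        (True, False): "standard",
        # (False, *) left unspecified by the informal requirements
    }

    uncovered = [case for case in product([True, False], repeat=2)
                 if case not in rules]
    print(uncovered)   # [(False, True), (False, False)]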


There's kind of a bias here. Does the author mean "computer scientists in America"?

In France, I definitely learned mathematical notation. Actually, I learned ∈, ∀ and ∃ in high school. And I had set theory courses in the first year of my master's degree (I mean first year after high school), among other mathematical courses.


Totally readable to this lowly CSE graduate. That said, the notation is overly verbose and specific; if you could have defined it in less rigorous notation (unless you truly needed it), more people would have understood and engaged with your presentation. Mathematics is a language like many others, where the fundamental goal is to communicate ideas.


> the notation is overly verbose and specific

That's what formal notation is like. The advantage of a formal notation is that it is both succinct and fully precise, and can, therefore, be used to perform formal proofs, possibly using a mechanical proof checker. Formal proofs are especially important in computer science, where theorems about programs are not mathematically deep but do have a lot of details that can be easily overlooked when reasoning informally. Lamport's talk was precisely about that: formal reasoning about algorithms. In that context, the ideas must not only be communicated so that they are intuitively or roughly understood -- as is good enough for math -- but must be made absolutely precise.


As was part of the comment: unless "needed". Otherwise I've fallen into the trap of breaking out the most precise notation from the depths of the annals of mathematics to write slick-looking, ultra-concise pseudocode in LaTeX (algorithm2e) for submissions to IEEE and ACM journals, and almost every time I get one or two reviewers saying that the notation is needlessly complex.


Yes, but Lamport's talk was about formal specification and verification, and we're not talking some arcane stuff here: set membership and first order logic. CS graduates should know how to read that.


It's not about the math, it's about the notation. Mathematical notation is archaic in many ways. We keep using notation that is millennia old in some cases and centuries old in almost every other case.

To make things more difficult, mathematicians vehemently oppose any change in notation, or any use of easier notation to convey the same concept.


It's just the notation that they might not have recognized immediately; it's not the math.


How can you get beyond first year in undergrad maths without knowing that notation though?

BTW as an anecdote I recognised the symbols but didn't derive the correct meaning, as you can see elsewhere in this thread.


If they don't use the notation quite often, they are likely to forget it. I certainly learnt all those notations and passed the math exams, yet I don't remember some of them now.

Maybe related: even with 15 years of programming, I tend to forget certain syntax while programming, but that doesn't mean I don't know programming.


In {f ∈ [1..N ⟶ 1..N] : ∀ y ∈ 1..N : ∃ x ∈ 1..N : f[x]=y}

What does the colon : mean?

It used to be that:

| = given that

, = and

but, I have never seen a : used in math.


It is TLA+, not quite typical math notation. Lambda and list comprehension notations are more common.


Same as |. Like many things in math, there is more than one widely accepted notation.


Just syntax. Some leave them out, some choose colons; I've seen dots in formal systems (∀y.∀x.(x=y)), etc.


Can't we also say that mathematicians don't learn computer science? I don't understand what the point is.


The foundations of CS, ML, cryptography, algorithms, etc. are expressed in mathematical terms. If you are unable to understand them... well. Programming is not computer science.


RWTH Aachen Computer Science Master student here and I know how to read it and what it means.


There was a discussion on this a few months back if I remember correctly.


"Why aren't people experts outside of their field?"


No, I'm going to agree that that is 100% within the field of academic computer science. I literally covered all the relevant notation in my freshman year.

What I'm wondering is whether or not there was some other factor going on, because I'm trained as a computer scientist and found nothing particularly objectionable about the formula, other than the f[] application notation. (And as a polyglot programmer, I've long since made my peace with that sort of notation mutation.) And I am by no means well-practiced in that sort of thing; I've been out of school for 14 years now, and only dabble on the side in this sort of thing now. The "forall y there exists an x such that" pattern in the middle is an extremely common recurring pattern, and what surrounds it on either side is also extremely simple.


Did you do it in a few seconds while reading from a slide and listening to a lecturer talk about math? I did the same thing as you and it took me more than 10 seconds to parse through the notation. If someone put this up on a slide during a talk and asked if I 'understood it', the answer would be no. Doesn't mean I'm incapable of understanding it, just that it uses muscles I don't flex very often.


Well, as I was trying to allude to, yeah, I did understand it pretty quickly because it uses a lot of common patterns.

Possibly I'm an extreme outlier, because when I say that I try to keep up with the field a bit, I really do. I really do watch YouTube videos of presentations full of math significantly more complicated than that every so often.

But still, I would also stand by my wondering if there was something else going on here, because it still seems that grad students in school at the time really should have followed that. When I was in grad school I am quite confident I knew several other students who would have understood that just fine, and I went to "just" Michigan State, not MIT or Berkeley.


'Outside their field' was poor wording on my part. I meant more that it's not the sort of thing that most CS engineers see day to day. People forget things they don't use often.


> CS

> engineers

I think it's important to realize that these are two different things. One is a formal research science, the other deals with practical problem-solving and implementations.

Your typical software engineer likely has a CS degree, but CS researchers and software engineers are two separate populations. Sometimes the same person will do both, but usually not at the same time in their life or for the same organization.

edit: for example, you don't even need a computer to learn computer science fundamentals. A notebook or deck of playing cards will do fine.


As I write this, thearn4's post is fading into the grey, but it's true. That's why I qualified my post with "trained as a computer scientist". I have a Master's degree in the field, and I try to keep up with it to some extent, but what I am now is an engineer. Degree or no, I can not currently say "I am a Computer Scientist" with a straight face.


He wasn't talking to software engineers who may have been out of school for a while and had time to forget this stuff, though; it looks like he was talking to a mix of graduate and undergraduate students and some faculty, and I would think stuff like this would be covered during freshman/junior year for sure?

TBH I personally don't always raise hands to such questions though.


This is such a notation issue. If I were to show an implementation of permutations from 1..N in eg Brainfuck (to choose an extreme example), there's no way the mathematicians would get it. Why don't mathematicians learn math??


This issue comes up frequently when I try to read CS papers which explain their concepts using math notation. It's much, much easier to reverse-engineer the concept from a working example written in some notation I can actually read, such as... any programming language, even a programming language I've never formally learned. The math notation is literally Greek to me (if you'll pardon the pun) and does more to obscure than communicate meaning.


I'm not sure it's just notation; it may also be a matter of understanding.

In computer programming, "exists" is a matter of checking all the possibilities and finding one, and therefore is restricted to finite sets (and, realistically speaking, quite small ones).

On the other hand, in mathematics there is no such restriction. Existence is just an assumption: if there is at least one, then we go further with that assumption, no matter whether we are talking about finite sets, countably infinite sets, or uncountably infinite ones.

I've been working as a computer programmer for quite a long time, and I also find it very annoying to see people around me thinking only in finite terms when they have to solve real problems.


>{f ∈ [1..N ⟶ 1..N] : ∀ y ∈ 1..N : ∃ x ∈ 1..N : f[x]=y}

is not expert-level material.


Didn't read the article, but is that supposed to show the properties of a function that is both "1:1" and "onto"? I.e., the set of all inputs is the same as the set of possible outputs, so for any x in the set, f(x) will return a result that is also in the set?

Didn't major or minor in mathematics but I took a few papers. Time to the read the article and collect my prize or look ignorant under my real name on the web.

EDIT: I was wrong! Though in my defense, had I been given the context from TFA, I think I would have got it.


But isn't CS one of the formal sciences (together with mathematics, statistics, etc.)? Familiarity with mathematical notation should be pretty foundational for active researchers in the field I would think.


It just doesn't come up very much. The computers abstract it enough that you rarely need to actually look at notation like that. Unless you're doing some real close to the metal work, which the people who kept their hands raised probably were.


You're mixing up programming with computer science. The former is a task that does not necessarily need any math (e.g. web development), the latter is literally math (e.g. category algebra).

Edit: clarification


It doesn't come up because it is too abstract. About the only languages I can think of that will accept existential statements are theorem provers. My favourite is Isabelle/HOL.

In fact, the notation used by the lecturer is sloppy. The numbers 1..N do not make a rigorous domain definition (it is unknown whether they are real or natural).


I'm not sure I buy your claim that mathematical notation is "close to the metal".


Paraphrasing the wise words of my uncle Rick, the rancher. "It is fucking hard to know everything"


The two misnamed sort algorithms shown are bogosort and bozosort. Must be Mr. Lamport's idea of a joke.


Engineers learn math...


> "people don't understand my notation"

> They don't know math

ok, let's move on to the next thread.



