Sigma isn't a for loop, either. You can sum over infinite sequences. A for loop -- or any computation for that matter -- implies a certain dynamic process, with a certain computational complexity. A mathematical expression has no dynamics; it just describes an object. So Σ 1/(2^n) doesn't mean "sum up all 2^-n" -- i.e., it is not a process -- but simply, "the number equal to the sum of all 2^-n". Of course, summation of infinite sequences requires its own definition using limits.
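To make the contrast concrete, here's a minimal sketch (in Python, purely illustrative): a program can only ever compute *partial* sums, and the mathematical expression denotes the limit those partial sums approach, not the looping process itself. The helper name `partial_sum` is mine, not anything standard:

```python
# A program can only compute finitely many terms of sum_{n>=0} 2**-n.
# The math expression denotes the *limit* of these partial sums (2),
# not the process of adding them up.
def partial_sum(N):
    """Sum of 2**-n for n = 0..N inclusive."""
    return sum(2.0 ** -n for n in range(N + 1))

print(partial_sum(10))  # 1.9990234375 -- exactly 2 - 2**-10
print(partial_sum(50))  # very close to 2, but never *is* 2
```

No finite N makes the loop produce 2; the equality Σ 2^-n = 2 is a statement about the limit, which is exactly the extra definitional machinery mentioned above.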
You can have math that is entirely computational -- it's called constructive math -- but the result is a very different math. For example, in constructive math, all functions over the reals are continuous, and not all subsets of finite sets are finite.
Sigma and power aren't for loops, but they do represent the melding of iteration and operation - the key is your definition of the sigma above - silly computers are always trying to 'do' things, and often math can simplify problems too big to 'do' by fiddling with representation - so I do agree with you, I just think you're being cruel to the for loop example :-)
Perhaps I am being a bit cruel, but that's just to point out the fundamental difference between what an "operation" means in computer science and what it means in classical math (constructive math is a different story).
Classical math is concerned with relations between "preexisting" objects. The statement 4 + 5 = 9 does not mean that the numbers 4 and 5 are added by some algorithm to construct the number 9, but that the three numbers are related via a ternary relation. The statement 9 - 5 = 4 is an equivalent statement in classical math, expressing the very same relation, but means something radically different in computer programs. I think it's important for computer scientists to understand this difference.
Jean-Yves Girard, the inventor of System F (of functional-programming fame), discusses this difference in Proofs and Types[1], in the very first section, called "Sense, Denotation and Semantics".
> The edge case of summing infinite sequences has no practical application in CS.
The number 2 is \Sigma_{n=0}^{\infty}1/2^n. So you're "using" sigma every time you use the number 2, which may be practical in some programs.
The fact that you can't write a program that sums an infinite sequence using an infinite number of computational steps doesn't mean uncomputable objects have no practical application. In fact, the formalism Lamport talked about in his lecture makes common use of them, and you do, too, every time you use floating-point arithmetic. When you use floating-point numbers, it's very convenient to think of them not as the very complicated objects that they are, but as real numbers (uncomputable objects) with some error term; in fact, that's how floating point is thought of in the design of many numerical algorithms. In other words, objects that are directly representable on a computer are often conveniently thought of as approximations of uncomputable objects. If you can't write what the non-computable object is in a language designed to assist in reasoning about how algorithms work -- which is the subject of Lamport's talk -- you're making life much harder for yourself.
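A tiny illustration of the "real number plus error term" view (Python, double precision; the bound below is a loose illustrative one, not a tight analysis):

```python
import sys

# IEEE-754 doubles: each rounded operation has relative error at most
# eps/2, so we reason about 0.1 + 0.2 as "the real number 0.3, plus a
# small error term" rather than as the bit patterns actually involved.
eps = sys.float_info.epsilon  # 2**-52 for doubles

x = 0.1 + 0.2
print(x == 0.3)                        # False -- not the real number 0.3
print(abs(x - 0.3) <= 2 * eps * 0.3)   # True  -- within a tiny relative error
```

That's the sense in which every numerical program "uses" uncomputable reals: the correctness argument is stated in terms of them.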
Another problem with thinking of summation as a for-loop is that it makes you think of the definition as an algorithm, which it isn't. For example, 4 * 5 = \Sigma_{i=1}^{5}4, but both are just different representations of the number 20. In a program it may make a big difference whether you write 20, 4 * 5, or `for(i in 0..4) sum+=4`. In mathematical notation, all three are the same. It's not as if one uses a cheap multiplication operation and the other an expensive for-loop.
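To spell that out in code (a throwaway sketch): as *values* the three are indistinguishable, even though as *programs* they have different costs:

```python
# Three program texts, one mathematical value.
a = 20                      # a literal
b = 4 * 5                   # one multiplication
total = 0
for i in range(5):          # five additions
    total += 4

print(a == b == total)      # True -- the denotations coincide
```

The distinction the program forces on you (constant vs. multiply vs. loop) simply doesn't exist in the mathematical notation.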