It's a bit odd (pardon the pun) that the words Taylor series [0] appear neither in the blog post nor so far in the comments, since that's the origin of the terms odd and even in this context [1]: the power series of odd functions have non-zero coefficients only on the odd powers of the variable, and those of even functions only on the even powers.
And thus the Taylor-series representation of a function makes it immediately obvious that the function can be decomposed into odd and even sub-functions. But Taylor-series representations exist only for smooth functions, and generally converge only in parts of their domains, while the even-odd decomposition applies to all real-valued functions defined on all real inputs. It's this algebraic generality of the even-odd decomposition that's most remarkable.
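To make that concrete, here's a small illustration (my own sketch, in Python with numpy assumed, not anything from the post): applied to a polynomial, the decomposition f_e(x) = (f(x) + f(-x))/2, f_o(x) = (f(x) - f(-x))/2 just splits the coefficients by parity of degree, which is the Taylor-series picture in miniature.

    import numpy as np
    from numpy.polynomial.polynomial import polyval

    coef = np.array([1.0, 2.0, 3.0, 4.0])             # f(x) = 1 + 2x + 3x^2 + 4x^3
    coef_neg = coef * (-1.0) ** np.arange(len(coef))  # coefficients of f(-x)

    even = (coef + coef_neg) / 2   # [1, 0, 3, 0]  ->  1 + 3x^2
    odd  = (coef - coef_neg) / 2   # [0, 2, 0, 4]  ->  2x + 4x^3

    # Spot-check against the pointwise definition at one input.
    x = 0.7
    assert np.isclose(polyval(x, even), (polyval(x, coef) + polyval(-x, coef)) / 2)
    assert np.isclose(polyval(x, odd),  (polyval(x, coef) - polyval(-x, coef)) / 2)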
I got curious about the etymology of this, and it's not clear to me that Wikipedia is right on this one. According to [1], the earliest use of "even" and "odd" for functions goes back to Euler [2]. It's been a long time since high school Latin, but it doesn't look like he has Taylor series in mind here. He certainly calls out the functions f(x) = x^n for various n as even or odd, and notes that their sums work in the ways you'd expect, but he also talks about ratios of those functions, which he wouldn't need to do if he were assuming smoothness and could just expand out the Taylor series of the ratio. It's unclear, but it looks to me like he's using "even" and "odd" to draw an algebraic analogy.
I did not mean to imply that Wikipedia was my source for the etymology of even and odd as classifications of functions. (I hadn't really meant to make an etymological point at all, and I added the link only so that readers unfamiliar with the concept of Taylor series might be enticed to click through and learn something.)
I don't have any references for you right now, but I would be surprised if the usage of the terms doesn't pre-date your citation of Euler by at least several decades. The basic technique for deriving Taylor series is known to date to at least 1671, to the correspondence between James Gregory and John Collins; it also shows up in letters from de Moivre to Johann Bernoulli in 1694 and from Leibniz to Bernoulli in 1708. Newton had a geometric means of deriving the coefficients of power series when he was writing the Principia (published in 1687), as demonstrated by the proof of his tenth proposition, and he included a description of the algebraic technique in an early draft of his Quadrature of Curves (but removed it before publication in 1706). So Taylor series were broadly known to Europe's prominent corresponding mathematicians in the first decades of the eighteenth century, and to some of them decades earlier.
And the basic concept of power series, as well as the power-series representations of many common functions, were already widely known in the latter decades of the seventeenth century. Like I said, I don't have a reference for you at the moment, but I have always heard that the terms odd and even derive from the power-series representations of functions, and it's difficult to imagine that no one before Euler had noted that the power series of some functions involve only odd or only even powers—although one may more readily imagine that someone noted it while neglecting to name it.
Anyway, Taylor series and the more basic concept of power series certainly were known to Euler when he wrote the paper you cited. Since the power series of polynomial functions are the polynomials themselves, you're right that Euler would not have had power series in mind in the context of this paper. (But, no, Euler would not have been thinking in terms of smoothness: mathematicians up until the analytic enlightenment of the nineteenth century played fairly fast and loose with derivatives. The point you made about ratios of functions is off-base, I think: the paper is about reciprocal solutions to polynomial equations.)
The question is simply whether your citation is evidence that Euler first coined the terms for the concept in the context of this paper (i.e., at that time and referring specifically to polynomials), or whether the concept and terminology already existed. If not, when was it generalized to refer to functions other than polynomials of finite order? I don't know. Perhaps nobody bothered to give the concept a name until Euler in 1727, and perhaps nobody even considered the concept with regard to a more general class of functions until even later. (I have a source book around here somewhere that might say something about the matter. ...)
In any event, if you asked a sample of mathematicians today why functions are called odd or even, I'd bet that many of them would point to power-series representations. That's what I meant to point out in my earlier comment.
> I would be surprised if the usage of the terms doesn't pre-date your citation of Euler by at least several decades
The citation is pretty suggestive, in that the text goes "functions, which I call even, which have this property...". That doesn't mean the terminology is original to Euler, but it does mean it can't have been established at the time he wrote.
> if you asked a sample of mathematicians today why functions are called odd or even, I'd bet that many of them would point to power-series representations
Eh. The terms apply just as much to nondifferentiable functions. I would have just said that it comes from the properties of polynomials with terms of all even or all odd degree. I've never thought of infinite polynomials as having any special place in the categorization of functions as even or odd; the concept is generally introduced with finite ones like f(x) = x^2. Years later, when you learn about Taylor series, it gets pointed out that the series for sine and cosine have terms of only odd or only even degree, and -- look at that -- they conform to the definition of odd and even functions. (I'm not making a claim about the origin of the concept, but I am making a claim about how it's viewed today.)
Yes, that's more or less what I meant, though from what I understand of pre-modern mathematics, I still think it's likely that power-series representations were the inspiration for applying the terms odd and even to functions. ...
But is it even clear that Euler's usage of "functiones pares" etymologically corresponds with our "even functions"? In studying classical Latin for four years, I never encountered a usage of "par" that would be translated as "even" in the numerical sense, but admittedly I know next to nothing about mathematical Latin of the eighteenth century. Reading Euler's words, if I were not aware of the modern English usage of even to describe the functions he named, I imagine I might translate his phrase as "equal functions". That translation seems to capture the idea that such functions have the property that f(x) equals f(-x), or that the functions' values are the same on both sides of the y-axis.
Particularly since Euler did not name what we now call odd functions, it does not seem clear to me that his Latin usage is etymologically related to our modern terms at all. ... Do you know whether par and impar were the Latin terms used in Euler's era to refer to even and odd integers?
(To counter my own objection, yes, the English even, of Germanic origin, does have several meanings that are close to the main meaning of Latin par as equal, e.g., in "even odds" or "an even split". Presumably that's the origin of the mathematical meaning of even in English: an even number is one that can be split into two equal numbers whose sum is the original.)
> But is it even clear that Euler's usage of "functiones pares" etymologically corresponds with our "even functions"?
Well, the etymology of "even" doesn't trace to the Latin word "par", or any cognate or ancestor of it. Is it clear that Euler's usage of "pares" is the same sense as our "even functions"? Pretty clear.
Yep, I looked it up too, and indeed par and impar seem to have been the standard Latin terms for even and odd in the mathematics of the era. See, e.g., Clavius's 1574 translation of Euclid's Elements [0].
You can think of the even and odd parts of a function as decomposing the function into its +/-1 eigencomponents with respect to the operator f(x) -> f(-x).
You can think of the exponential Fourier series of a function as a way to decompose the function into its {..., -2i, -i, 0, i, 2i, ...} eigencomponents with respect to the operator f(x) -> f'(x), since the basis functions satisfy (e^(inx))' = in e^(inx).
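Here's a quick numerical check of the first claim (my own sketch; Python with numpy assumed). On a grid symmetric about 0, the reflection operator is just a reversal of the sample vector, and since R^2 = I its spectral projections (I + R)/2 and (I - R)/2 are exactly the even and odd parts.

    import numpy as np

    x = np.linspace(-3, 3, 61)        # grid symmetric about 0
    n = len(x)
    R = np.eye(n)[::-1]               # (R f)(x) = f(-x): reverse the samples

    f = np.exp(x)                     # a function with no particular symmetry
    f_even = (np.eye(n) + R) / 2 @ f  # projection onto the +1 eigenspace
    f_odd  = (np.eye(n) - R) / 2 @ f  # projection onto the -1 eigenspace

    assert np.allclose(f_even, np.cosh(x))   # the even part of exp is cosh
    assert np.allclose(f_odd,  np.sinh(x))   # the odd part of exp is sinh
    assert np.allclose(R @ f_even,  f_even)  # +1 eigencomponent
    assert np.allclose(R @ f_odd,  -f_odd)   # -1 eigencomponent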
I like this from a representation theory perspective. The cyclic group of order two, C_2, is the set {1,x} with x^2=1 (it's reasonable to just use {1,-1} with the group operation being real-number multiplication). Let V be the set of continuous functions R->R, and then let's define a linear representation where phi_1 is the identity operator on V and phi_x is the operator you describe, phi_x(f)(c)=f(-c). (The only two symmetries of a line which fix the origin are the identity and flipping, which is what this representation is representing.)
The group C_2 is known to have exactly two irreducible representations (the positive and the negative representations), so V decomposes into (at most) two subrepresentations (that is, there are two subspaces of V which are closed under the group action). Using the characters of C_2, we get two projection operators: (\phi_1+\phi_x)/2 and (\phi_1-\phi_x)/2. Examining what these do, they decompose a function into the even and odd parts, respectively!
This idea can be extended to the circle group for the Fourier transform.
Representations are able to capture a bit more than eigencomponents. Where an eigencomponent requires that the action be strictly scaling, components from a representation can have more complicated actions. For instance, if you have a representation of the dihedral group of symmetries of a triangle, then there will be projections which will give you 1) the +1 eigencomponent, 2) the -1 eigencomponent associated with flipping the triangle over, and 3) the 2-dimensional component which faithfully represents the symmetries of the triangle (i.e., the one already mentioned when describing the dihedral group).
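Here's a sketch of those projections in code (my own construction, Python with numpy; I use the regular representation, i.e. the group acting on functions on itself, so that all three components show up with nonzero dimension):

    import numpy as np
    from itertools import permutations

    # The symmetries of the triangle as the 6 permutations of the vertices
    # {0,1,2}: the 3-cycles are rotations, the transpositions are flips.
    G = list(permutations(range(3)))
    ROTATIONS = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}

    def compose(g, h):                 # (g o h)(i) = g[h[i]]
        return tuple(g[h[i]] for i in range(3))

    # Regular representation: (rho(g) f)(h) = f(g^-1 h), as 6x6 permutation
    # matrices acting on functions on the group.
    def rho(g):
        M = np.zeros((6, 6))
        for j, h in enumerate(G):
            M[G.index(compose(g, h)), j] = 1
        return M

    # Dimensions and characters of the three irreducibles.
    irreducibles = [
        (1, lambda g: 1,                                    "trivial (+1)"),
        (1, lambda g: 1 if g in ROTATIONS else -1,          "sign (-1 on flips)"),
        (2, lambda g: sum(g[i] == i for i in range(3)) - 1, "standard (2-dim)"),
    ]

    # Projection onto each component: P_k = (d_k/|G|) sum_g chi_k(g) rho(g^-1).
    # These characters are real, and rho(g^-1) = rho(g).T for permutation matrices.
    for d, chi, name in irreducibles:
        P = d / 6 * sum(chi(g) * rho(g).T for g in G)
        print(name, "component dimension:", round(np.trace(P)))
        # prints 1, 1, and 4 (the 2-dim irrep appears twice in the regular rep)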
It's funny that you mention this; I was about to type a similar comment! Thinking of the Fourier transform in terms of group theory seems like it would make it more complicated, but it actually makes the fundamental underlying concept simpler to understand.
One can perform the Fourier transform over an arbitrary compact group G, commutative or not; the inversion formula is f(g) = ∑ dᵏ⋅tr(f̂ᵏ⋅ρᵏ(g)), where the sum runs over k = 0, 1, 2, ..., and k indexes the unitary irreducible representations ρᵏ of G. dᵏ is the dimension of the kth representation. f̂ᵏ is the (matrix) Fourier coefficient for the kth irreducible representation, computed as f̂ᵏ = ∫ f(g)⋅ρᵏ(g⁻¹) dμ(g), where μ is the Haar measure on G, normalized so that ∫ dμ(g) = 1. Note that since the representations are unitary, ρᵏ(g⁻¹) = ρᵏ(g)ᴴ.
For commutative groups, all of the ρᵏ are one-dimensional, and so the sums and integrals are over scalar values. As you mention, for the circle group the above expression reduces to the "conventional" equation for the Fourier transform.
One can think of the group Fourier transform as decomposing a (generally nonlinear) function on a group into a linear combination of orthonormal functions, such that cutting off the sum at the kth term provides the best MSE approximation to the function, i.e., f̂ᵏ = argmin ∫ [tr(ĉᵏ⋅ρᵏ(g)) - f(g)]² dμ(g), where the minimization is over ĉᵏ.
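For a finite group the normalized Haar measure just puts mass 1/|G| on each element, so all of this can be checked numerically. Here's a toy verification (my own sketch, Python with numpy) of the inversion formula on the smallest non-commutative example, the symmetry group of the triangle:

    import numpy as np

    # D_3 realized as 2x2 orthogonal matrices: 3 rotations and 3 reflections.
    def rot(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, -s], [s, c]])

    def refl(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, s], [s, -c]])

    G = [rot(2*np.pi*k/3) for k in range(3)] + [refl(2*np.pi*k/3) for k in range(3)]

    # Its unitary irreducibles: trivial, sign (the determinant), and the
    # matrices themselves (the 2-dim standard representation).
    irreps = [(1, lambda g: np.array([[1.0]])),
              (1, lambda g: np.array([[np.linalg.det(g)]])),
              (2, lambda g: g)]

    f = np.random.default_rng(0).normal(size=6)   # an arbitrary function on G

    # Fourier coefficients: fhat_k = (1/|G|) sum_g f(g) rho_k(g^-1); the
    # matrices here are orthogonal, so rho_k(g^-1) = rho_k(g).T.
    fhat = [sum(f[i] * rho(G[i].T) for i in range(6)) / 6 for _, rho in irreps]

    # Inversion: f(g) = sum_k d_k * tr(fhat_k @ rho_k(g)).
    for i in range(6):
        recon = sum(d * np.trace(fh @ rho(G[i]))
                    for (d, rho), fh in zip(irreps, fhat))
        assert np.isclose(recon, f[i])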
The matrix problem is of course a special case of the same problem, where the domain is the matrix indices, labelled with (0,0) as the middle of the matrix.
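In numpy terms (my own sketch, assuming an odd-sized matrix so the center is a single entry), negating both center-relative indices is just a flip of both axes, so the same averaging trick is a one-liner:

    import numpy as np

    M = np.arange(9.0).reshape(3, 3)
    M_even = (M + M[::-1, ::-1]) / 2   # invariant under a 180-degree flip
    M_odd  = (M - M[::-1, ::-1]) / 2   # negated by a 180-degree flip
    assert np.allclose(M_even + M_odd, M)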
This can be rather difficult for general data where an explicit equation isn't obvious (even though it'll work quite often as pointed out by johncolanduoni below).
[I think it's neat that] for sufficiently smooth and periodic data, a Fourier transform will do exactly this (decompose a function into its even and odd parts)!
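A quick numerical illustration of that point (my own sketch, using numpy's FFT): on a uniform periodic grid, the real part of the DFT carries the cosine (even) series and the imaginary part the sine (odd) series, so the parity decomposition falls out of the transform directly.

    import numpy as np

    N = 64
    x = 2 * np.pi * np.arange(N) / N
    y = np.exp(np.sin(x) + np.cos(x))        # arbitrary smooth periodic data

    F = np.fft.rfft(y)
    y_even = np.fft.irfft(F.real, n=N)       # cosine-series (even) part
    y_odd  = np.fft.irfft(1j * F.imag, n=N)  # sine-series (odd) part

    # Matches the pointwise definition, with f(-x) read off at index -j mod N.
    flip = -np.arange(N) % N
    assert np.allclose(y_even, (y + y[flip]) / 2)
    assert np.allclose(y_odd,  (y - y[flip]) / 2)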
I'm confused. If you have a table of data where one of the columns varies from -A to A, what is the difficulty in calculating the odd and even parts by just adding (resp. subtracting) the values at x and -x and dividing by two? Even if you don't have a precisely symmetric span of x values you can use simple interpolation as long as your data points are reasonably dense.
Fourier analysis seems to be overkill in this case, unless I'm missing something.
Your algorithm had a bit of a typo--you want to subtract (resp. add) to calculate the odd and even parts of a function.
If you don't have a nearly symmetric span of x values, you may need to do extrapolation to obtain one, which may be difficult.
I brought up Fourier analysis not as a means to replace the decomposition described in the blog, but to connect it. I think it's neat that Fourier transformations can be viewed as a parity decomposition.
If your data are over [a,b], where a < 0 and b > 0, you can do the decomposition mentioned in the article over [-c,c], where c = min(|a|,|b|), so you don't need to extrapolate.
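A minimal sketch of that, with interpolation for when the sample points aren't mirror-symmetric (my own illustration, numpy assumed, on made-up data):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(-2.0, 2.5, 2000))  # dense, asymmetric span [a, b]
    y = np.exp(x)                              # stand-in for measured data

    c = min(-x[0], x[-1])                      # largest symmetric span [-c, c]
    xs = np.linspace(-c, c, 101)
    f_pos = np.interp(xs, x, y)                # f(x) on the symmetric grid
    f_neg = np.interp(-xs, x, y)               # f(-x), via interpolation

    f_even = (f_pos + f_neg) / 2               # approximately cosh here
    f_odd  = (f_pos - f_neg) / 2               # approximately sinh here
    assert np.allclose(f_even, np.cosh(xs), atol=1e-2)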
The odd/even parts of a function are unique, so no.
To see why they're unique, say you have two pairs of odd/even functions fo1/fe1 and fo2/fe2 that each sum to the same function. Subtract fo1 from both sides:
fo1 + fe1 = fo2 + fe2
fe1 = fo2 - fo1 + fe2
Since fe1 is even, fo2 - fo1 + fe2 must also be even. The function fo2 - fo1 is the difference of two odd functions, so it is itself an odd function. And the only way the sum of an odd function (fo2 - fo1) and an even function (fe2) can be even is if the odd function is everywhere zero [1]. In other words, fo2 - fo1 = 0, which means fo2 = fo1. Substituting this into the overall equation, that means fe1 = fe2 as well.
Alternatively, rearrange the original equation to fo1 - fo2 = fe2 - fe1. The left side is an odd function and the right side is an even function, so the common value must be both odd and even. But only the zero function is both odd and even, QED.
0. https://en.wikipedia.org/wiki/Taylor_series
1. The Wikipedia article on odd and even functions points out the decomposition described in the submitted post: https://en.wikipedia.org/wiki/Even_and_odd_functions