*Most middle and high schools nowadays introduce the terms "even" and "odd" to describe functions without really explaining why they're named that way. Without that explanation, the definitions are harder to recall, and the reason we might find these functions interesting goes missing.*

*Note: I write this post to present how I explore this topic with my students, tutoring or otherwise. Questions are posed as I ask and answer them for myself, but they work well as questions for students in a lecture or assignment setting. I typically would not cover more than one or two sections at a time with one student.* \(\newcommand{\comp}{\circ}
\newcommand{\comment}[1]{\textcolor{lightskyblue}{\text{#1}}}\)

## Definition of Even/Odd Functions

Given a real function (\(f: \mathbb{R}\to\mathbb{R}\),!\(f\) is a real function.)(function-notation) it may be even, odd, neither, or both, depending on which of the following criteria it meets for every possible real input (\(x\in\mathbb{R}\):!\(x\) is real.)(element-notation)

(start function-notation)

When I use special notation like this, the words before it should mean the same thing. In this case, the symbol "\(\mathbb{R}\)" represents the set of real numbers, so that "\(f:\mathbb{R}\to\mathbb{R}\)" means "\(f\) is a function with real inputs and outputs" or "\(f\) is a real function".

(stop function-notation)

(start element-notation)

When I use special notation like this, the words before it should mean the same thing. In this case, the symbol "\(\mathbb{R}\)" represents the set of real numbers, so that "\(x\in\mathbb{R}\)" means "\(x\) is in the set of real numbers" or "\(x\) is a real number". Saying "real input" beforehand lets you know that \(x\) represents an input value.

(stop element-notation)

\[ \begin{aligned} \text{Even:}&\qquad f(-x) = f(x)&\text{ for every }x\in\mathbb{R} \\ \text{Odd:}&\qquad f(-x) = -f(x)&\text{ for every }x\in\mathbb{R} \end{aligned} \]
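If you'd like to poke at these definitions computationally, here's a quick Python sketch. (The helper names `is_even` and `is_odd` are mine, and checking a handful of sample points is evidence, not proof.)

```python
# Numerically test the parity of a function by sampling a few points.
# This is a sanity check, not a proof: we only try finitely many inputs.

def is_even(f, samples=range(1, 6)):
    return all(f(-x) == f(x) for x in samples)

def is_odd(f, samples=range(1, 6)):
    return all(f(-x) == -f(x) for x in samples)

print(is_even(lambda x: x**2))   # True:  x^2 is even
print(is_odd(lambda x: x**3))    # True:  x^3 is odd
print(is_even(lambda x: x + 1))  # False: x + 1 is neither
print(is_odd(lambda x: x + 1))   # False
```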

"Can a function be both even and odd?" you may ask. (Answer?)(both-even-odd)

(start both-even-odd)

Yes, if \(f\) is both even and odd, then \(f(-x)=f(x)=-f(x)\) for every \(x\). Since \(0\) is the only number that is its own negative, \(f\) must be the zero function. So there is exactly one really boring function that is both even and odd.

(stop both-even-odd)

Now, why are we using the words "even" and "odd" here? There must be some sort of similarity between the way these functions behave and the way even and odd numbers behave. Before we can notice the similarity, though, we have to make sure we know how the numbers act.

## "Even" and "Odd" Numbers

The two most familiar behaviors of those numbers are how they add and how they multiply. That is, adding an even to any number preserves its parity (the property of being even or odd), while adding an odd to any number swaps its parity; multiplying an even by any number results in an even, while multiplying an odd by any number preserves parity. To summarize this in a table, I'll represent "even" and "odd" with my favorite even and odd numbers, \(0\) and \(1\):

\[ \begin{array}{c|c|c|} + & 0 & 1 \\ \hline 0 & 0 & 1 \\ \hline 1 & 1 & 0 \\ \hline \end{array} \qquad \begin{array}{c|c|c|} × & 0 & 1 \\ \hline 0 & 0 & 0 \\ \hline 1 & 0 & 1 \\ \hline \end{array} \]
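Those tables are nothing more than arithmetic modulo \(2\), which we can verify exhaustively over a small range of integers. (The `parity` helper below is my own name for "reduce mod 2".)

```python
# 0 stands for "even", 1 for "odd"; the parity of an integer n is n % 2.
# (Python's % returns 0 or 1 even for negative n, which is what we want.)
def parity(n):
    return n % 2

for m in range(-5, 6):
    for n in range(-5, 6):
        # Addition table: parity of a sum is the mod-2 sum of parities.
        assert parity(m + n) == (parity(m) + parity(n)) % 2
        # Multiplication table: parity of a product is the product of parities.
        assert parity(m * n) == parity(m) * parity(n)

print("both tables verified for -5..5")
```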

## Addition for the Functions

So now, let's see whether the same happens for even and odd functions, starting with addition. The most natural definition for adding functions is to add the outputs at each input to get the new function's output, i.e., \((f+g)(x) = f(x) + g(x)\). This is called *pointwise addition*, since we add the functions' values point by point. So let's see if the behavior of this pointwise addition follows the behavior of the numbers:

- Suppose \(f\) and \(g\) are even, so that \(f(-x)=f(x)\) and \(g(-x)=g(x)\) for every \(x\). Then their sum is indeed even:

\[ \begin{aligned} (f+g)(-x) &= f(-x) + g(-x) && \comment{definition of pointwise addition} \\ &= f(x) + g(x) && \comment{since $f$ and $g$ are even} \\ &= (f+g)(x) && \comment{definition of pointwise addition} \end{aligned} \]

- Suppose \(f\) and \(g\) are odd, so that \(f(-x)=-f(x)\) and \(g(-x)=-g(x)\) for every \(x\). Then their sum is... still odd, actually:

\[ \begin{aligned} (f+g)(-x) &= f(-x) + g(-x) && \comment{definition of pointwise addition} \\ &= -f(x) - g(x) && \comment{since $f$ and $g$ are odd} \\ &= -(f+g)(x) && \comment{definition of pointwise addition} \end{aligned} \]
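Both of these closure facts are easy to spot-check numerically. (Again a sampled sketch with made-up example functions, not a proof.)

```python
# Spot-check: even + even stays even, odd + odd stays odd.
f = lambda x: x**2       # even
g = lambda x: x**4 + 1   # even
u = lambda x: x          # odd
v = lambda x: x**3       # odd

even_sum = lambda x: f(x) + g(x)
odd_sum = lambda x: u(x) + v(x)

assert all(even_sum(-x) == even_sum(x) for x in range(1, 6))
assert all(odd_sum(-x) == -odd_sum(x) for x in range(1, 6))
print("even + even is even; odd + odd is odd (on samples)")
```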

So far, it looks like we might have some weird swapped behavior, where addition for the functions acts like multiplication for the numbers. But actually, when we assume \(f\) is even while \(g\) is odd, we can't seem to prove much of anything. After hitting a roadblock like this, it's always best to try some examples. So, what are the simplest functions I can think of? Linear functions, of the form \(f(x) = mx + b\). Are any of them even or odd? Some indeed are, and we can fully characterize which are which:

- If the linear function \(f(x) = mx + b\) is even, then \(f(-x) = f(x)\) for all \(x\). Plugging the function into that equation, we have that

\[ \begin{aligned} m(-x) + b &= m(x) + b \\ 0 &= m(x) - m(-x) && \comment{subtract the left from both sides}\\ 0 &= m(x+x) && \comment{distribution backwards}\\ 0 &= 2mx \end{aligned} \]

and for that to be true for all \(x\), it must be true for \(x\ne 0\), and we must have \(m = 0\), implying that \(f\) is constant. We can also go backwards, showing that every constant function \(f(x) = b\) is even, since \(f(-x)=b=f(x)\) for every \(x\).

- If the linear function \(f(x) = mx + b\) is odd, then \(f(-x) = -f(x)\) for all \(x\). Plugging the function into that equation, we have that
\[\begin{aligned}
m(-x) + b &= -\big(m(x) + b\big) \\
-mx + b &= -mx - b \\
b &= -b && \comment{add $mx$ to both sides}\\
2b &= 0
\end{aligned}\]
so \(f\) has \(y\)-intercept \(0\), i.e., the line passes through the origin. We can also go backwards, showing that every linear function passing through the origin \(f(x)=mx\) is odd, since \(f(-x)=m(-x)=-(mx)=-f(x)\) for every \(x\).

**So a linear function is even if and only if it is constant, and odd if and only if it passes through the origin.** These examples seem pretty good, since they quickly illustrate what we've already found: a sum of constant functions is still constant, and a sum of linear functions passing through the origin still passes through the origin. What do they say about adding an odd and an even? Well, the sum of a constant function and a linear passing through the origin is... any line you want. Completely inconclusive. Addition of functions is certainly not what we want.
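One concrete instance of that inconclusiveness: take any constant (even) plus any line through the origin (odd) with both pieces nonzero, and the result fails both tests. (A sketch with arbitrary sample values of \(m\) and \(b\).)

```python
# An even function (the constant b) plus an odd one (the line m*x)
# gives m*x + b, which for nonzero m and b is neither even nor odd.
m, b = 2, 3
h = lambda x: m * x + b

x = 1
print(h(-x) == h(x))    # False: not even
print(h(-x) == -h(x))   # False: not odd
```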

Note that even though it may at first feel as though we wasted a lot of time going down the rabbit hole of pointwise addition, we have actually gained a lot. We know that both kinds of functions are closed^{2} to addition, and we have a really good class of examples to use later. And if you thought it strange that every linear function is the sum of two linear functions, one even, one odd, you can pursue that further and discover that the same is true for larger and larger classes of functions, and that the choice of the even and odd functions is always unique to their sum.
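That even/odd split generalizes cleanly: any real function decomposes uniquely as \(f_{\text{even}}(x) = \tfrac{f(x)+f(-x)}{2}\) plus \(f_{\text{odd}}(x) = \tfrac{f(x)-f(-x)}{2}\). Here's a small Python sketch of it (the function names are mine):

```python
# Split a function into its even and odd parts:
#   f_even(x) = (f(x) + f(-x)) / 2,   f_odd(x) = (f(x) - f(-x)) / 2.
def even_part(f):
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    return lambda x: (f(x) - f(-x)) / 2

f = lambda x: 3 * x + 5          # a general line
fe, fo = even_part(f), odd_part(f)

print(fe(2), fo(2))              # 5.0 6.0 -> the constant 5 and the line 3x
assert all(fe(x) + fo(x) == f(x) for x in range(-5, 6))
```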

## Multiplication for the Functions

Since addition didn't match anything, discouragement sets in, and we may not want to just dive in to study pointwise multiplication of general functions. So let's start small, and just look at our linear function examples.

- The product of two constant functions? Still constant.
- The product of a constant and a linear passing through origin? Still linear passing through the origin.
- The product of two linear functions passing through origin? Quadratic. Crap.

Well this is awkward. To summarize, for pointwise multiplication, we have:

\[
\begin{array}{c|c|c|}
& 0 & 1 \\ \hline
0 & 0 & 1 \\ \hline
1 & 1 & ? \\ \hline
\end{array}
\]
which is pretty close to how the addition for numbers table behaved, but that question mark should represent an even function. But wait! Let's look at that case more carefully: The product of \(f_1(x) = m_1 x\) and \(f_2(x) = m_2 x\) is

\[ \begin{aligned}
(f_1\cdot f_2)(x) &= f_1(x)\cdot f_2(x)
\\ &= (m_1 x)(m_2 x)
\\ &= c x^2\quad\text{for}\quad c=m_1 m_2
\end{aligned}\]
and that result is even: \(c(-x)^2 = cx^2\). So this *does* match the addition of numbers behavior. Then we just have to prove it for general functions. It's not hard, and basically follows the same steps as the general addition work, so I won't write all that out here. However, after you've messed with that, you might appreciate seeing it proved in a different way, so I'll do that instead.

**Proof using exponent rules.** I've been using \(0\) and \(1\) to represent even and odd things already; let's actually assign those numbers to the respective functions. That is, define that a function has parity \(p=0\) or \(p=1\) (is even or odd) if and only if \(f(-x)=(-1)^pf(x)\) for every \(x\). Then the parity of the product of two functions \(f,g\) with respective parities \(a,b\) is given by the numerical parity of the sum \(a+b\):

\[ \begin{aligned}
(f\cdot g)(-x) &= f(-x)\cdot g(-x)
\\ &= \big[(-1)^af(x)\big]\big[(-1)^bg(x)\big] && \comment{$f,g$ have parities $a,b$}
\\ &= \big[(-1)^a (-1)^b\big] \big[f(x)g(x)\big] && \comment{commuting}
\\ &= (-1)^{a+b}(f\cdot g)(x) && \comment{exponent property}
\end{aligned} \]
The advantage of this proof is that it directly shows the parity link between addition of numbers and multiplication of functions, with a simple reason: products of powers is addition of exponents.
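We can mirror that exponent-based definition directly in code: a function has parity \(p\) exactly when \(f(-x) = (-1)^p f(x)\), and the product's parity comes out as the mod-2 sum. (A sampled check, with my own helper name `has_parity`.)

```python
# f has parity p iff f(-x) == (-1)**p * f(x); check on sample points.
def has_parity(f, p, samples=range(1, 6)):
    return all(f(-x) == (-1)**p * f(x) for x in samples)

f = lambda x: x**2   # parity 0 (even)
g = lambda x: x**3   # parity 1 (odd)

prod = lambda x: f(x) * g(x)        # x**5
assert has_parity(prod, (0 + 1) % 2)   # even * odd -> odd

odd2 = lambda x: x**5
prod2 = lambda x: g(x) * odd2(x)    # x**8
assert has_parity(prod2, (1 + 1) % 2)  # odd * odd -> even
print("product parity matches (a + b) % 2 on samples")
```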

## Something Else for the Functions...

So at this point, we have shown that part of the behavior of even/odd numbers carries over to even/odd functions, and we would like the other part to carry over in some way. Sadly, our well of pointwise arithmetic is destined to run dry: exponentiation gives no inspiration. If you don't believe me, consider it with some linear examples, such as between the even functions \(f_1(x)=1\), \(f_2(x)=2\) and the odd function \(g(x)=x\). Be sure to compare taking exponents in both directions, since it's not a commutative operation.
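Here's what that exponentiation experiment looks like with two of those examples. (A sketch; the point is only that no consistent parity pattern emerges.)

```python
# Pointwise exponentiation with the linear examples: no pattern.
g = lambda x: x        # odd
f2 = lambda x: 2       # even

h = lambda x: f2(x) ** g(x)   # 2**x : even base, odd exponent
k = lambda x: g(x) ** f2(x)   # x**2 : odd base, even exponent

print(h(-1), h(1))            # 0.5 2 -> neither even nor odd
print(k(-2) == k(2))          # True -> even
```

One direction gives a function that is neither even nor odd, the other gives an even one, so exponentiation doesn't respect parity in any uniform way.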

Where do we turn to now? There is another natural operation between functions, one more fundamental than the arithmetic ones because it makes sense even when the functions don't act on numbers. It is function composition, the operation where you plug the outputs from one function in as the inputs to another: \((f\comp g)(x) = f(g(x))\).

Let's study composition with our linear examples:

- The composition of a constant function with anything gives that same constant function.
- The composition of anything with a constant function gives a constant function, essentially evaluating the other one at a particular point.
- The composition of two linear functions passing through the origin gives a new line with the product of their slopes as its slope, still passing through the origin.

Indeed, that matches the behavior of multiplication for numbers. As with the last section, I'll leave verifying that this generalizes for all functions to the reader, providing a more insightful proof for you to read afterwards.

**Proof using exponent rules.** The main oddity in the composition case is that we want to handle when the function is given an input like \((-1)^px\), not just \(-x\). Note that the only cases are either when \(p\) is odd and \((-1)^px=-x\), or when \(p\) is even and \((-1)^px=x\). So, let's extend the definition for functions with parity we used before to handle both cases at once: A function \(f\) has parity \(p\) if and only if \(f((-1)^nx)=(-1)^{np}f(x)\) for all \(x\) and integer \(n\). (This works because it's the same as before when \(n\) is odd, and when \(n\) is even, \(np\) is even as well, and thus the powers of \((-1)\) on both sides are equal to \(1\).) Then, wielding this final definition, the parity of the composition of two functions \(f,g\) with respective parities \(a,b\) is given by the numerical parity of the product \(ab\):

\[ \begin{aligned} (f\comp g)(-x) &= f(g(-x)) && \comment{definition of composition} \\&= f((-1)^bg(x)) && \comment{since the parity of $g$ is $b$} \\&= (-1)^{ab}f(g(x)) && \comment{since the parity of $f$ is $a$} \\&= (-1)^{ab}(f\comp g)(x) && \comment{definition of composition} \end{aligned} \]
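And a final numerical spot-check of that result, reusing the exponent-style parity test from before: the parity of a composition is the product of the parities.

```python
# f has parity p iff f(-x) == (-1)**p * f(x); check on sample points.
def has_parity(f, p, samples=range(1, 6)):
    return all(f(-x) == (-1)**p * f(x) for x in samples)

f = lambda x: x**2   # parity a = 0
g = lambda x: x**3   # parity b = 1

comp = lambda x: f(g(x))      # x**6, should have parity 0 * 1 = 0
assert has_parity(comp, 0)

odd_comp = lambda x: g(g(x))  # x**9, should have parity 1 * 1 = 1
assert has_parity(odd_comp, 1)
print("composition parity matches a * b on samples")
```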

## A Final Note on Parity

In this post, I probably introduced a number of readers to the term "parity". Since it's one of my favorite terms to use and abuse (another being "orthogonal"), I'll just tack on some concluding information about it. Its official definition is:

par·i·ty

noun

- The state or condition of being equal.
- (math) The fact of being even or odd.

As it happens, there are many specialized usages↗ of the word, from simple extensions off the first definition as with the usage in law, to the many usages in math and physics which are more closely related to everything discussed in this post. In fact, the physics usage is exactly that of the one discussed here for functions, but instead for spatial functions, and then put into different terms to match their linear algebra-focused mentality.

To understand the other mathematical usages and perhaps even make up some of your own, there are two focuses:

- the way abstract algebra looks at evens and odds, thinking of them as the field with two elements↗
- the way combinatorics looks at parity, thinking of it as just a true/false property that some things interact with in a very predictable way

Both concepts of parity are useful in surprising contexts. The former I've been discussing at length, so I'll give it a rest. The latter gives rise to examples such as the chess board usage mentioned on Wikipedia↗.

#### Footnotes

A set is *closed* to an operation if operating on anything in the set has a result, and that result is in the set. ↩