Representations of univariate polynomials

In this note, we will go over the two main representations of univariate polynomials: the "monomial" and "evaluation" representations. The monomial representation is the traditional representation that most people are familiar with. Specifically, a polynomial in monomial representation is written as:

\[ p(x) = a_{n-1}x^{n-1} + \dots + a_2 x^2 + a_1 x + a_0 = \sum_{i=0}^{n-1} a_i x^i \]

We call it the "monomial representation" because the coefficients \(a_0, \dots, a_{n-1}\) multiply their respective monomials \(x^0, \dots, x^{n-1}\). There is another very useful representation that we'll call the "evaluation representation". However, before we can talk more deeply about it, and about how both representations are two sides of the same coin, we first need to brush up on vector spaces.

Brief review of vector spaces

[Figure: two diagrams of the same vector space; the left shows the basis vectors \(u_0, u_1\), the right shows the basis vectors \(v_0, v_1\), and the same vector \(w\) is drawn in both.]

Consider the left and right diagrams. Both represent the same vector space, and each depicts 3 vectors: \(u_0, u_1, w\) in the left diagram, and \(v_0, v_1, w\) in the right diagram. Crucially, \(w\) is the same vector in both diagrams.

The set \(U = \{u_0, u_1\}\) forms a basis for the vector space, meaning that every vector in the vector space can be written as a linear combination of \(u_0\) and \(u_1\). Specifically, \(w\) can be written as

\[ w = 1 \times u_0 + 3 \times u_1 \]

Note that the coefficients (or "coordinates") \([1, 3]\) in terms of the basis \(U\) encode \(w\); i.e. if I give you the coefficients \([1, 3]\) and the basis \(U\), then you have all the information you need to reconstruct \(w\). Hence, we will write \(w = [1, 3]_U\) to mean "\(w\) has the coefficients \(1\) and \(3\) in the \(U\) basis".

Similarly, \(V = \{v_0, v_1\}\) is another basis of the same vector space. Then, \(w\) can also be written as

\[ w = 1.5 \times v_0 - 0.5 \times v_1, \]

and we write \(w = [1.5, -0.5]_V\).

We then have 2 different sets of coefficients for \(w\): \(w = [1, 3]_U = [1.5, -0.5]_V\). We can call \([1, 3]_U\) and \([1.5, -0.5]_V\) equivalent representations of \(w\).

This gets us to the core principle of this section: coefficients without a basis are meaningless. In this example, the vectors are the literal arrows in the diagram, and a basis is a pair of arrows. Once you identify which arrows you will use to build a coordinate system, you can refer to every vector in the vector space by its coordinates.

This is a crucial concept to understand: in the rest of the article, our vector space will not be a set of arrows, but the set of all univariate polynomials of degree at most \(n-1\). Our monomial and evaluation representations will each be coordinates of the same polynomial, but expressed in a different basis!

Finally, as alluded to above, it is also really important to understand that a vector space is not just a set of arrows in a diagram. In fact, a vector space over a field \(\mathbb{F}\) is any set \(V\) where

  • an addition operation is defined between any two elements of \(V\)
  • a scalar multiplication operation is defined between any element of \(\mathbb{F}\) and any element of \(V\)
    • e.g. \(5 \times v\) needs to have a well-defined meaning, such as "scaling the arrow's length by 5"
  • addition and scalar multiplication behave largely as you'd expect
    • addition is commutative, associative, etc
    • scalar multiplication distributes over addition, e.g. \(5 \times (u + v) = 5 \times u + 5 \times v\)

Refer to the Wikipedia entry for a proper definition of a vector space.

Vector space of polynomials

One very important thing to realize is that the set of all univariate polynomials of degree at most \(n-1\) forms a vector space over some field \(\mathbb{F}\). (We say "at most \(n-1\)" because the set must be closed under addition, and adding two polynomials of degree \(n-1\) can cancel the leading coefficients and yield a polynomial of lower degree.) To reiterate, a vector space is a set augmented with an addition and a scalar multiplication operation. Let's see how these two are defined for the set of all polynomials of degree at most \(n-1\).

Let \(f\) and \(g\) be two polynomials of degree at most \(n-1\); that is, let

\[ \begin{align} f(x) &= \sum_{i=0}^{n-1} a_i x^i \\ g(x) &= \sum_{i=0}^{n-1} b_i x^i \\ \end{align} \]

Addition is defined point-wise. We denote by \(f + g\) the polynomial that is the sum of \(f\) and \(g\), defined as

\[ \begin{align} (f + g)(x) &= f(x) + g(x) \\ &= \sum_{i=0}^{n-1} a_i x^i + \sum_{i=0}^{n-1} b_i x^i \\ &= \sum_{i=0}^{n-1} (a_i x^i + b_i x^i) \\ &= \sum_{i=0}^{n-1} (a_i + b_i) x^i \end{align} \]

Hence, we just showed that \(f+g\) is also a polynomial of degree at most \(n-1\), where its \(i\)th coefficient is \(a_i + b_i\).

Next, we show how scalar multiplication is defined. Let \(s \in \mathbb{F}\) be some field element, and \(f\) be the same polynomial defined earlier. Then, \(s \cdot f\) is a polynomial defined as

\[ \begin{align} (s \cdot f)(x) &= s \cdot f(x) \\ &= s \sum_{i=0}^{n-1} a_i x^i \\ &= \sum_{i=0}^{n-1} s a_i x^i \end{align} \]

Hence, \(s \cdot f\) is also a polynomial of degree at most \(n-1\), where its \(i\)th coefficient is \(s a_i\).
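
To make these definitions concrete, here is a minimal sketch of both operations acting on lists of monomial coefficients. It uses f64 as a stand-in for a field type (a real implementation would use a finite field), and assumes both coefficient lists have the same length \(n\):

// Point-wise addition: (f + g) has i-th coefficient a_i + b_i.
fn poly_add(f: &[f64], g: &[f64]) -> Vec<f64> {
    f.iter().zip(g).map(|(a, b)| a + b).collect()
}

// Scalar multiplication: (s * f) has i-th coefficient s * a_i.
fn poly_scale(s: f64, f: &[f64]) -> Vec<f64> {
    f.iter().map(|a| s * a).collect()
}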

To properly show that all univariate polynomials of degree at most \(n-1\) form a vector space, we'd have to show that all the properties of addition and scalar multiplication hold. We leave this as an exercise.

Before we continue, let's take a step back and appreciate what we just showed. At this point, you should be convinced that all polynomials of degree at most \(n-1\) form a vector space. Hence, each of those polynomials is a vector, in the same way that the arrows in the diagram were vectors. Most importantly, all derived results about vector spaces will apply to polynomials. So we will be able to talk about a basis: a set of polynomials which can be linearly combined to yield every other polynomial of degree at most \(n-1\). And then we will be able to represent polynomials by their coordinates in a given basis. This is exactly what our monomial and evaluation representations will be: coordinates in 2 different well-defined bases.

Monomial representation

As stated earlier, the monomial representation of a polynomial is based on the equation

\[ p(x) = \sum_{i=0}^{n-1} a_i x^i \]

If we apply what was just discussed in the previous section, you'll notice that \(M = \{x^0, x^1, x^2, \dots, x^{n-1}\}\) forms a basis of the vector space of all polynomials of degree at most \(n-1\)! Every polynomial can then be referred to by its coordinates \([a_0, a_1, \dots, a_{n-1}]_M\) in this basis.

Concretely, in a computer, a valid way to store any polynomial \(p\) is to store its coefficients \([a_0, a_1, \dots, a_{n-1}]_M\) in a list. To evaluate the polynomial at \(x\), simply apply the above equation: \(p(x) = \sum_{i=0}^{n-1} a_i x^i\). In practice, we use Horner's method, which is simply an efficient way of computing the same equation.
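
As an illustration, here is a minimal sketch of Horner's method, again using f64 as a stand-in field type:

// Evaluates p(x) = a_0 + a_1*x + ... + a_{n-1}*x^{n-1} given the
// coefficient list [a_0, a_1, ..., a_{n-1}], via Horner's method:
// p(x) = a_0 + x*(a_1 + x*(a_2 + ... + x*a_{n-1})).
fn evaluate_monomial(coefficients: &[f64], x: f64) -> f64 {
    coefficients.iter().rev().fold(0.0, |acc, &a| acc * x + a)
}

fn main() {
    // p(x) = 1 + 2x + 3x^2, so p(2) = 1 + 4 + 12 = 17
    assert_eq!(evaluate_monomial(&[1.0, 2.0, 3.0], 2.0), 17.0);
}

This performs \(n-1\) multiplications, instead of the roughly \(2n\) needed to compute each power of \(x\) separately.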

Evaluation representation

As mentioned previously, the evaluation representation of polynomials of degree at most \(n-1\) will be a set of coordinates in a basis other than \(M\). That basis is based on the "Lagrange interpolating polynomial".

Given a set of points \(D = \{(x_0, y_0), (x_1, y_1), \dots, (x_{n-1}, y_{n-1})\}\) with distinct \(x_i\), the Lagrange interpolating polynomial \(L\) is the unique polynomial of degree at most \(n-1\) which passes through all of these points. It is defined as:

\[ L(x) = \sum_{i=0}^{n-1} y_i \cdot l_i(x), \]

where

\[ l_i(x) = \frac{(x - x_0) \cdots (x - x_{i-1}) (x - x_{i+1}) \cdots (x - x_{n-1})}{(x_i - x_0) \cdots (x_i - x_{i-1}) (x_i - x_{i+1}) \cdots (x_i - x_{n-1})} \]
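
For concreteness, take \(n = 2\) with the (arbitrarily chosen) points \(D = \{(0, 1), (1, 3)\}\). Then

\[ \begin{align} l_0(x) &= \frac{x - 1}{0 - 1} = 1 - x \\ l_1(x) &= \frac{x - 0}{1 - 0} = x \\ L(x) &= 1 \cdot (1 - x) + 3 \cdot x = 1 + 2x, \end{align} \]

and indeed \(L(0) = 1\) and \(L(1) = 3\).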

You should convince yourself from the definition that \(l_i(x_i) = 1\), and that for every other interpolation point \(x_j\) (i.e. \(j \neq i\)), \(l_i(x_j) = 0\). You should also convince yourself that \(L(x_i) = y_i\) for all \(i\), and therefore that \(L\) does indeed pass through all the points in \(D\).

Now, notice that each \(l_i\) is a polynomial of degree \(n-1\) (since it is a product of \(n-1\) terms of degree \(1\)), and that \(L\) is a linear combination of these \(l_i\) polynomials. Moreover, the \(l_i\) are linearly independent, since evaluating any linear combination \(\sum_i c_i \, l_i\) at \(x_j\) yields exactly \(c_j\). In other words, \(E = \{l_0, l_1, \dots, l_{n-1}\}\) forms a basis for the vector space of polynomials of degree at most \(n-1\), and \([y_0, y_1, \dots, y_{n-1}]_E\) are the coordinates of \(L\) in that basis! We call \([y_0, y_1, \dots, y_{n-1}]_E\) the evaluation representation of the polynomial.

As with the monomial representation, to represent a polynomial in a computer, we could alternatively store the list \([y_0, y_1, \dots, y_{n-1}]_E\). To evaluate the polynomial at a point \(x\), however, we'd need to use \(\sum_{i=0}^{n-1} y_i \cdot l_i(x)\). In practice, we'd use barycentric evaluation to evaluate a polynomial stored in the evaluation representation, which is nothing more than an efficient way of computing \(\sum_{i=0}^{n-1} y_i \cdot l_i(x)\).
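
For illustration, here is a minimal sketch of the direct (non-barycentric) evaluation formula, in the same f64 stand-in style as before. `points` holds the \((x_i, y_i)\) pairs, and the \(x_i\) are assumed distinct:

// Evaluates L(x) = sum_i y_i * l_i(x), where
// l_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j).
fn evaluate_lagrange(points: &[(f64, f64)], x: f64) -> f64 {
    points
        .iter()
        .enumerate()
        .map(|(i, &(x_i, y_i))| {
            let l_i: f64 = points
                .iter()
                .enumerate()
                .filter(|&(j, _)| j != i)
                .map(|(_, &(x_j, _))| (x - x_j) / (x_i - x_j))
                .product();
            y_i * l_i
        })
        .sum()
}

Note that this costs \(O(n^2)\) field operations per evaluation; barycentric evaluation brings the per-evaluation cost down to \(O(n)\) after precomputing the denominators once.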

Monomial vs evaluation representation

So far we have established that every polynomial can be described in 2 different ways: as coefficients \([a_0, a_1, \dots, a_{n-1}]_M\) in the \(M\) basis, or as coefficients \([y_0, y_1, \dots, y_{n-1}]_E\) in the \(E\) basis. This is very important to understand, and easy to be confused about: every polynomial can be represented in either representation, the same way that the arrow \(w\) in the diagram could be represented by 2 different sets of coordinates. For example, in Rust, we could represent a polynomial as

// The field type (left abstract here)
type Field = ...;

enum Polynomial {
    // coefficients [a_0, ..., a_{n-1}] in the M basis
    Monomial(Vec<Field>),
    // coordinates [y_0, ..., y_{n-1}] in the E basis
    Evaluation(Vec<Field>),
}

impl Polynomial {
    fn evaluate(&self, x: Field) -> Field {
        match self {
            // e.g. Horner's method on the monomial coefficients
            Self::Monomial(coefficients) => /* use "monomial formula" */,
            // e.g. barycentric evaluation on the evaluations
            Self::Evaluation(evaluations) => /* use "evaluation formula" */,
        }
    }
}

To really drive the point home, we also know how to convert any polynomial from one representation to the other. In linear algebra, this corresponds to multiplying the coordinates by a change of basis matrix. In the specific case of polynomials, this matrix is called the Vandermonde matrix. The specifics are out of scope for this note.
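
That said, one direction of the conversion is easy to sketch: multiplying the monomial coefficients by the Vandermonde matrix \(V\), where \(V_{ij} = x_i^j\), is nothing more than evaluating the polynomial at each \(x_i\). Reusing the evaluate_monomial helper sketched earlier:

// Monomial -> evaluation change of basis over the domain [x_0, ..., x_{n-1}].
// Row i of the Vandermonde matrix dotted with [a_0, ..., a_{n-1}] is p(x_i).
fn monomial_to_evaluation(coefficients: &[f64], domain: &[f64]) -> Vec<f64> {
    domain
        .iter()
        .map(|&x_i| evaluate_monomial(coefficients, x_i))
        .collect()
}

The other direction (evaluation to monomial) amounts to multiplying by the inverse of the Vandermonde matrix.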

A note on polynomial interpolation

The problem of "polynomial interpolation" is often stated as "finding the coefficients of a polynomial that passes through a set of points \(D = \{(x_0, y_0), (x_1, y_1), \dots, (x_{n-1}, y_{n-1})\}\)". In the terminology developed in this note, we would rephrase this as "finding the coefficients in the basis \(M\)".

Note though that the problem is greatly simplified if all we want is any representation of the interpolating polynomial. Then we'd just use the evaluation representation, where our coefficients \([y_0, y_1, \dots, y_{n-1}]_E\) come out of the box directly from the dataset, with no computation needed whatsoever.

If, however, we do need the monomial representation of the interpolating polynomial, then we run a change of basis on the coefficients \([y_0, y_1, \dots, y_{n-1}]_E\), implemented either as multiplying the coefficients by the inverse Vandermonde matrix or, when the \(x_i\) are chosen appropriately (e.g. as roots of unity), as an equivalent but much faster (inverse) Fast Fourier Transform.

The "extension" view of polynomial interpolation

We end the note with an alternative terminology that is also sometimes used when discussing polynomial interpolation. Recall that there is a single polynomial of degree at most \(n-1\) which passes through (or "interpolates") all points in some dataset \(D = \{(x_0, y_0), (x_1, y_1), \dots, (x_{n-1}, y_{n-1})\}\).

Alternatively, we can model \(D\) with a function \(f: \{x_0, \dots, x_{n-1}\} \rightarrow \mathbb{F}\). That is, \(f\) maps every point in the set \(\{x_0, \dots, x_{n-1}\}\) (which itself is a subset of our field \(\mathbb{F}\)) to an element of \(\mathbb{F}\). Specifically, reusing the variable names from \(D\), we would have

\[ \begin{align} f(x_0) &= y_0 \\ f(x_1) &= y_1 \\ &\vdots \\ f(x_{n-1}) &= y_{n-1}. \end{align} \]

Before moving on, convince yourself that \(f\) and \(D\) are equivalent ways to talk about the same set of pairs of points. Under this interpretation, the interpolating polynomial, which we call \(\tilde{f}\), is the unique polynomial of degree at most \(n-1\) such that

\[f(x_i) = \tilde{f}(x_i) \quad \forall i \in \{0, \dots, n-1\}\]

The crucial difference however is that the domain of \(\tilde{f}\) is the entirety of \(\mathbb{F}\), and fully contains the smaller domain of \(f\). Hence, we say that \(\tilde{f}\) extends \(f\) (see the sketch after this list), exactly because:

  • The domain of \(f\) is a strict subset of the domain of \(\tilde{f}\)
  • \(\tilde{f}\) maps every point in the domain of \(f\) to the same element in the codomain
    • or in math, \(f(x_i) = \tilde{f}(x_i) \quad \forall i \in \{0, \dots, n-1\}\)
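
As a small illustration, here is what this looks like in code, reusing the evaluate_lagrange sketch from earlier as our \(\tilde{f}\):

fn main() {
    // f is only defined on {0, 1, 2}: f(0) = 1, f(1) = 3, f(2) = 9
    let d = [(0.0, 1.0), (1.0, 3.0), (2.0, 9.0)];
    // f~ agrees with f everywhere on f's domain...
    // (exact equality holds here since all intermediate values are
    // exactly representable as floats)
    for &(x_i, y_i) in &d {
        assert_eq!(evaluate_lagrange(&d, x_i), y_i);
    }
    // ...but f~ is also defined outside of it.
    let y = evaluate_lagrange(&d, 5.0);
    println!("f~(5) = {y}");
}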

Note: for those familiar with the concept of an "extension field", the term "extension" is used to express a similar idea. That is, \(E\) is an extension field of \(F\) exactly when

  • \(F\) is a subset of \(E\) (for example, \(\mathbb{R}\) is a subfield of its extension field \(\mathbb{C}\))
  • addition, multiplication and every other field property of \(F\) is preserved in \(E\), e.g.
    • if \(c = a + b\) for \(a, b, c \in F\), then it is also true that \(c = a + b\) when \(a, b, c\) are viewed as elements of \(E\)
    • if \(c = a \cdot b\) for \(a, b, c \in F\), then it is also true that \(c = a \cdot b\) when \(a, b, c\) are viewed as elements of \(E\)
    • (and so on for every other property of fields)