5 Dimensions of vector spaces

In this chapter, we’ll prove two of the key theorems of this module – the Fundamental Inequality and the Main Theorem on Bases. This chapter is quite abstract (and quite difficult), but we’ll get back to more concrete computations later!

5.1 Growing and shrinking sets

We’ve seen that if \(\mathcal{S}\) is a generating set in \(V,\) and we modify \(\mathcal{S}\) by putting some more elements into it, then it’s still a generating set. On the other hand, if we take some elements out, it might not be generating any more. Linear independence, on the other hand, has the opposite behaviour: if \(\mathcal{S}\) is LI, and we take some elements out, then the modified set is still LI; but if we put some more elements in, it might not be LI any more.

The next two lemmas give criteria for when we can shrink a generating set without breaking the generating property, or grow an LI set without breaking the LI property. These will be crucial in the next section.

Lemma 5.1 • Shrinking generating sets

Let \(V\) be a \(\mathbb{K}\)-vector space and \(\mathcal{S}\subset V\) a generating set. If \(v_0 \in \mathcal{S}\) satisfies \(v_0 \in \operatorname{span}(\mathcal{S}\setminus\{v_0\}),\) then \(\mathcal{S}\setminus\{v_0\}\) is a generating set.

Proof. Since \(v_0 \in \operatorname{span}(\mathcal{S}\setminus\{v_0\}),\) there exist vectors \(v_1,\ldots,v_n \in \mathcal{S}\) with \(v_i\neq v_0\) and scalars \(s_1,\ldots,s_n\) so that \(v_0=s_1v_1+\cdots+s_n v_n.\)

Suppose we are given \(v \in V.\) Since \(\mathcal{S}\) is a generating set, there exist vectors \(w_1,\ldots,w_k \in \mathcal{S}\) and scalars \(t_1,\ldots,t_k\) so that \(v=t_1w_1+\cdots+t_kw_k.\) If \(\{w_1,\ldots,w_k\}\) does not contain \(v_0,\) then \(v \in \operatorname{span}(\mathcal{S}\setminus\{v_0\}),\) so assume that \(v_0 \in \{w_1,\ldots,w_k\}.\) After possibly relabelling the elements of \(\{w_1,\ldots,w_k\}\) we can assume that \(v_0=w_1.\) Hence we have \[v=t_1\left(s_1v_1+\cdots+s_n v_n\right)+t_2w_2+\cdots+t_kw_k\] with \(v_0 \neq v_i\) for \(1\leqslant i\leqslant n\) and \(v_0 \neq w_j\) for \(2\leqslant j\leqslant k.\) It follows that \(v \in \operatorname{span}(\mathcal{S}\setminus\{v_0\}),\) as claimed.
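
For row vectors, the hypothesis \(v_0 \in \operatorname{span}(\mathcal{S}\setminus\{v_0\})\) and the conclusion of the lemma can both be checked by computing ranks. Here is a quick sanity check in Python using the sympy library (an illustration of the lemma, not part of the proof; the vectors are my own choices):

```python
from sympy import Matrix

# Three generators of a subspace of R_3, stacked as rows; the third
# row equals the sum of the first two, so it lies in the span of the others.
S = Matrix([[1, 0, 1],
            [0, 1, 1],
            [1, 1, 2]])
S_reduced = Matrix([[1, 0, 1],
                    [0, 1, 1]])

# Removing the redundant row leaves the row space (the span) unchanged:
# both matrices have the same rank.
assert S.rank() == S_reduced.rank() == 2
```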

Lemma 5.2 • Growing LI sets

Let \(V\) be a \(\mathbb{K}\)-vector space, \(\mathcal{S}\subset V\) linearly independent and \(v_0 \in V.\) If \(v_0 \notin \operatorname{span}(\mathcal{S}),\) then \(\mathcal{S}\cup \{v_0\}\) is linearly independent.

Proof. Let \(\mathcal{T}\) be a finite subset of \(\mathcal{S}\cup\{v_0\}.\) If \(v_0 \notin \mathcal{T},\) then \(\mathcal{T}\) is linearly independent, as \(\mathcal{S}\) is linearly independent. So suppose \(v_0 \in \mathcal{T}.\) There exist distinct elements \(v_1,\ldots,v_n\) of \(\mathcal{S}\) so that \(\mathcal{T}=\{v_0,v_1,\ldots,v_n\}.\) Suppose \(s_0v_0+s_1v_1+\cdots+s_nv_n=0_V\) for some scalars \(s_0,s_1,\ldots,s_n \in \mathbb{K}.\) If \(s_0 \neq 0,\) then we can write \[v_0=-\sum_{i=1}^n \frac{s_i}{s_0}v_i,\] contradicting the assumption that \(v_0 \notin \operatorname{span}(\mathcal{S}).\) Hence we must have \(s_0=0.\) Since \(s_0=0\) it follows that \(s_1v_1+\cdots+s_nv_n=0_V\) so that \(s_1=\cdots=s_n=0\) by the linear independence of \(\mathcal{S}.\) We conclude that \(\mathcal{S}\cup\{v_0\}\) is linearly independent.
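
Again for row vectors, the hypothesis \(v_0 \notin \operatorname{span}(\mathcal{S})\) is detected by a rank jump. A sketch in Python with sympy (purely illustrative; the vectors are my own choices):

```python
from sympy import Matrix

# S = {e1, e2} is linearly independent in R_3.
S = Matrix([[1, 0, 0],
            [0, 1, 0]])
v0 = Matrix([[0, 0, 1]])

# v0 lies outside span(S) exactly when appending it as a row raises
# the rank; rank equal to the number of rows means the grown set is LI.
grown = S.col_join(v0)
assert grown.rank() == S.rank() + 1
assert grown.rank() == grown.rows   # 3 rows, rank 3: still LI
```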

5.2 The fundamental inequality

We’ll now prove the following important lemma, which is going to be the key to understanding how finite-dimensional vector spaces “work”:

Lemma 5.3 • Fundamental inequality

Let \(V\) be a vector space, and suppose \(V\) has a finite generating set \(W.\) Then any linearly independent set \(U\) in \(V\) is finite and satisfies \(|U| \le |W|.\)
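
For example, \(\mathbb{K}^3\) is generated by the three standard basis vectors, so the lemma says that no linearly independent set in \(\mathbb{K}^3\) can have more than \(3\) elements. This is easy to confirm computationally; a sketch in Python with sympy (illustrative only):

```python
from sympy import Matrix
import random

# Any 4 vectors in R^3, stacked as rows of a 4x3 matrix, have rank
# at most 3, so they can never be linearly independent.
random.seed(0)
U = Matrix(4, 3, lambda i, j: random.randint(-5, 5))
assert U.rank() <= 3
assert U.rank() < U.rows   # rank < number of rows: the rows are dependent
```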

The proof of this lemma (everything from here to the end of 5.2) is non-examinable; but you should definitely make sure you are aware of the statement!

We’ll deduce the Fundamental Inequality from the following stronger statement:

Lemma 5.4 • The Steinitz Exchange Lemma

Suppose \(V\) is a vector space and \(U,\) \(W\) subsets of \(V\) such that

  • \(W\) is finite,

  • \(U\) is linearly independent,

  • \(W\) spans \(V.\)

Then \(|U| \le |W|\) (so \(U\) is also finite); and there is a subset \(W' \subseteq W,\) with \(|W'| = |W| - |U|\) and \(W' \cap U = \varnothing,\) such that \(U \cup W'\) also spans \(V.\)

In other words, we can exchange some of the elements of \(W\) for the elements of \(U,\) without breaking the generating property.

Proof. Let us first prove the result assuming \(U\) is finite. Let \(m = |U|\) and \(n = |W|.\) We will argue by induction on \(m.\) If \(m = 0\) then the statement is trivial (take \(W' = W\)); so we can assume the statement holds for \(m - 1.\) Write \(U = \{ u_1, \dots, u_m\}.\) Applying the inductive hypothesis to the linearly independent set \(\{u_1, \dots, u_{m-1}\}\) and to \(W,\) we find that \(m-1 \le n\) and that we may exchange \(m-1\) elements of \(W\) for \(u_1, \dots, u_{m-1}\); so, replacing \(W\) by the resulting spanning set, we can suppose that \(W = \{u_1, \dots, u_{m-1}, w_m, \dots, w_n\}\) for some vectors \(w_m, \dots, w_n\) of the original generating set. (We haven’t yet excluded the possibility that \(n = m-1,\) in which case this just means that \(W = \{ u_1, \dots, u_{m-1}\}.\))

We first deal with a silly special case: if \(u_m \in \{w_m, \dots, w_n\},\) then we can assume \(u_m = w_m\) by relabelling; then \(W' = \{w_{m+1}, \dots, w_n\}\) works, since \(U \cup W'\) is equal to \(W\) and we already know that \(W\) spans \(V.\) So we can suppose that \(u_m\) is not one of the \(w_i.\)

Since \(W\) spans \(V,\) and \(u_m \in V,\) we know that \(u_m\) can be written as a linear combination of \(W\): \[u_m = \sum_{i=1}^{m-1} s_i u_i + \sum_{i = m}^n t_i w_i, \qquad s_i, t_i \in \mathbb{K}.\] If all the \(t_i\) were zero, then \(u_m\) would be a linear combination of \(u_1, \dots, u_{m-1},\) contradicting the linear independence of \(U\); so there must be some \(i\) with \(m \le i \le n\) such that \(t_i \ne 0.\) (This shows in particular that \(n\) must be at least \(m.\)) Reordering the \(w_i\) if necessary, we can suppose \(t_m \ne 0.\) Our goal will be to show that \(\{u_1, \dots, u_m, w_{m+1}, \dots, w_n\}\) spans \(V.\)

We rearrange the equality above into \[w_m = \frac{1}{t_m} \left(u_m - \sum_{i = 1}^{m-1} s_i u_i - \sum_{j = m + 1}^n t_j w_j\right).\] Since we’re not in the ‘silly special case’, the sum on the right doesn’t involve \(w_m.\) So we’ve shown that \(w_m\) is in the span of \(T \setminus \{w_m\},\) where \(T = \{u_1, \dots, u_m, w_m, \dots, w_n\}.\) But \(T\) spans \(V,\) since it contains \(W.\) So we can apply the “shrinking generating sets” lemma to conclude that \(T \setminus \{w_m\} = \{u_1, \dots, u_m, w_{m+1}, \dots, w_n\}\) spans \(V.\) Taking \(W' = \{w_{m+1}, \dots, w_n\},\) a subset of the original \(W\) of size \(n - m\) disjoint from \(U,\) completes the induction step.

This completes the proof for finite \(U.\) If \(U\) is infinite, then we can find a subset \(U' \subset U\) of size \(n + 1,\) where \(n = |W|\); but \(U'\) is linearly independent (because it’s contained in \(U\)) and hence applying the lemma to \(U'\) and \(W\) gives a contradiction. So the lemma holds for all \(U.\)
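
A concrete instance of the exchange, checked in Python with sympy (the vectors are my own illustrative choices): take \(W\) to be the standard basis of \(\mathbb{R}_3\) and \(U = \{(1, 1, 0)\}.\)

```python
from sympy import Matrix

# u = (1, 1, 0) has a nonzero coefficient on e1, so we may exchange e1
# for u: W' = {e2, e3}, and U ∪ W' should still span R_3.
exchanged = Matrix([[1, 1, 0],   # u
                    [0, 1, 0],   # e2
                    [0, 0, 1]])  # e3
assert exchanged.rank() == 3     # full rank: spans the whole of R_3
```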

Remark 5.5

Notice that the Fundamental Inequality is just one part of the Steinitz Exchange Lemma, but our proof of the Fundamental Inequality for \(|U| = m\) depends on knowing the whole of the Exchange Lemma for \(|U| = m - 1.\) It is possible to give a self-contained proof of the Fundamental Inequality (without proving the rest of Steinitz at the same time), but it needs a different method, and it’s quite fiddly. This is an example of a curious paradox: sometimes a stronger (but more specific) theorem can be easier to prove than a weaker one!

5.3 Bases of vector spaces

Definition

Definition 5.6 • Basis

Let \(V\) be a vector space. A subset \(\mathcal{S}\subset V\) which is a generating set of \(V\) and also linearly independent is called a basis of \(V.\)

Example 5.7

Thinking of a field \(\mathbb{K}\) as a \(\mathbb{K}\)-vector space, the set \(\{1_{\mathbb{K}}\}\) is linearly independent, since \(1_{\mathbb{K}}\neq 0_{\mathbb{K}}.\) Example 4.32 implies that \(\{1_{\mathbb{K}}\}\) is a basis of \(\mathbb{K}.\)

Example 5.8

Clearly, the standard basis \(\{\vec{e}_1,\ldots,\vec{e}_n\}\) of \(\mathbb{K}^n\) is linearly independent since \[s_1\vec{e}_1+\cdots+s_n\vec{e}_n=\begin{pmatrix} s_1 \\ \vdots \\ s_n\end{pmatrix}=0_{\mathbb{K}^{n}}=\begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix} \qquad \iff \qquad s_1=\dots=s_n=0.\] It follows together with Example 4.35 that the standard basis of \(\mathbb{K}^n\) is indeed a basis in the sense of Definition 5.6.

Example 5.9

The matrices \(\mathbf{E}_{k,l} \in M_{m,n}(\mathbb{K})\) for \(1\leqslant k \leqslant m\) and \(1\leqslant l \leqslant n\) are linearly independent. Indeed, suppose we have scalars \(s_{kl} \in \mathbb{K}\) such that \[\sum_{k=1}^m\sum_{l=1}^ns_{kl}\mathbf{E}_{k,l}=\begin{pmatrix}s_{11} & \cdots & s_{1n} \\ \vdots & \ddots & \vdots\\ s_{m1} & \cdots & s_{mn}\end{pmatrix}=\mathbf{0}_{m,n}=\begin{pmatrix}0 & \cdots & 0 \\ \vdots & \ddots & \vdots\\ 0 & \cdots & 0\end{pmatrix};\] then \(s_{kl}=0\) for all \(1\leqslant k \leqslant m\) and all \(1\leqslant l \leqslant n.\) It follows together with Example 4.36 that \(\{\mathbf{E}_{k,l}\}_{1\leqslant k\leqslant m, 1\leqslant l \leqslant n}\) is a basis of \(M_{m,n}(\mathbb{K}).\) We refer to \(\{\mathbf{E}_{k,l}\}_{1\leqslant k\leqslant m, 1\leqslant l \leqslant n}\) as the standard basis of \(M_{m,n}(\mathbb{K})\).

The Main Theorem on Bases

We’ll now come to one of the central theorems of this course:

Theorem 5.10 • Main Theorem on Bases

Let \(V\) be a \(\mathbb{K}\)-vector space.

  1. Any subset \(\mathcal{S}\subset V\) generating \(V\) admits a subset \(\mathcal{T}\subset \mathcal{S}\) that is a basis of \(V.\)

  2. Any subset \(\mathcal{S}\subset V\) that is linearly independent in \(V\) is contained in a subset \(\mathcal{T}\subset V\) that is a basis of \(V.\)

  3. If \(\mathcal{S}_1,\mathcal{S}_2\) are bases of \(V,\) then there exists a bijective map \(f : \mathcal{S}_1 \to \mathcal{S}_2.\)

  4. If \(V\) is finite dimensional, then any basis of \(V\) is a finite set and the number of elements in the basis is independent of the choice of the basis.

We will only prove Theorem 5.10 for finite dimensional vector spaces. The proof will use the Fundamental Inequality.

Proof of Theorem 5.10. We restrict to the case where \(V\) is finite dimensional. Hence there exists an integer \(n\geqslant 0\) so that \(V\) has a generating set \(\mathcal{S}_0\) with \(n\) elements.

(i) (Slogan: A maximal LI set is a basis) Let \(\mathcal{S}\subset V\) be a subset generating \(V\) (we don’t assume that \(\mathcal{S}\) is finite). We consider the set \(\mathcal{X} \subset \mathbb{N}\) consisting of those integers \(d\geqslant 0\) for which there exists a linearly independent subset \(\mathcal{T} \subset \mathcal{S}\) with \(d\) elements. Since \(\emptyset \subset \mathcal{S},\) we have \(0 \in \mathcal{X} ,\) so \(\mathcal{X}\) is non-empty. Furthermore, \(\mathcal{X}\) is a finite set, as it cannot contain any integer greater than \(n,\) by the Fundamental Inequality.

Let \(m \in \mathcal{X}\) be the largest integer and \(\mathcal{T}\subset \mathcal{S}\) an LI set with \(m\) elements. We want to argue that \(\mathcal{T}\) is a basis of \(V.\) Suppose \(\mathcal{T}\) is not a basis of \(V.\) Then there exists an element \(v_0 \in \mathcal{S}\) with \(v_0 \notin \operatorname{span}(\mathcal{T})\): if no such element existed, we would have \(\mathcal{S}\subset \operatorname{span}(\mathcal{T})\) and hence \(V=\operatorname{span}(\mathcal{S})\subset \operatorname{span}(\mathcal{T}),\) so \(\mathcal{T}\) would be a generating set; being also linearly independent, it would be a basis of \(V,\) contradicting our assumption. Applying Lemma 5.2 (the growing-LI-sets lemma), we conclude that \(\hat{\mathcal{T}}=\{v_0\}\cup \mathcal{T}\subset \mathcal{S}\) is linearly independent. Since \(\hat{\mathcal{T}}\) has \(m+1\) elements, we have \(m+1 \in \mathcal{X},\) contradicting the fact that \(m\) is the largest integer in \(\mathcal{X}.\) It follows that \(\mathcal{T}\) must be a basis of \(V.\)

(ii) (Slogan: A minimal generating set is a basis) Let \(\mathcal{S}\subset V\) be a subset that is linearly independent in \(V.\) Note that \(\mathcal{S}\) is finite, by the Fundamental Inequality. Let \(\hat{\mathcal{X}}\) denote the set consisting of those integers \(d\geqslant 0\) for which there exists a subset \(\mathcal{T} \subset V\) with \(d\) elements, which contains \(\mathcal{S}\) and which is a generating set of \(V.\) Notice that \(\mathcal{S}\cup \mathcal{S}_0\) is such a set, hence \(\hat{\mathcal{X}}\) is not empty. Let \(m\) denote the smallest element of \(\hat{\mathcal{X}}\) and \(\mathcal{T}\) be a generating subset of \(V\) containing \(\mathcal{S}\) and with \(m\) elements. We want to argue that \(\mathcal{T}\) is a basis for \(V.\) By assumption, \(\mathcal{T}\) generates \(V,\) hence we need to check that \(\mathcal{T}\) is linearly independent in \(V.\) Suppose \(\mathcal{T}\) is linearly dependent, and write \(\mathcal{T}=\{v_1,\ldots,v_m\}\) for distinct elements of \(V\); since \(\mathcal{S}\subset \mathcal{T},\) we can relabel so that \(\mathcal{S}=\{v_1,\ldots,v_k\}\) for some \(k\leqslant m.\) Since \(\mathcal{T}\) is linearly dependent, we have scalars \(s_1,\ldots,s_m,\) not all zero, so that \[s_1v_1+\cdots+s_m v_m=0_V.\] There must exist a scalar \(s_i\) with \(i>k\) such that \(s_i\neq 0,\) since otherwise \(\mathcal{S}\) would be linearly dependent. After possibly relabelling the vectors, we can assume that \(s_{k+1}\neq 0,\) so that \[\tag{5.1} v_{k+1}=-\frac{1}{s_{k+1}}\left(s_1v_1+\cdots+s_kv_k+s_{k+2}v_{k+2}+\cdots+s_mv_m\right).\] Let \(\hat{\mathcal{T}}=\{v_1,\ldots,v_k,v_{k+2},\ldots,v_m\}.\) Then \(\mathcal{S}\subset \hat{\mathcal{T}}\) and (5.1) shows that \(v_{k+1} \in \operatorname{span}(\hat{\mathcal{T}}).\) Lemma 5.1 (the shrinking-generating-sets lemma) shows that \(\hat{\mathcal{T}}\) generates \(V,\) contains \(\mathcal{S}\) and has \(m-1\) elements, contradicting the minimality of \(m.\) Hence \(\mathcal{T}\) is linearly independent, and therefore a basis of \(V.\)

(iii) Note first that any basis of \(V\) is linearly independent, hence finite by the Fundamental Inequality (applied with the finite generating set \(\mathcal{S}_0\)). Suppose \(\mathcal{S}_1\) is a basis of \(V\) with \(n_1\) elements and \(\mathcal{S}_2\) is a basis of \(V\) with \(n_2\) elements. Since \(\mathcal{S}_2\) is linearly independent and \(\mathcal{S}_1\) generates \(V,\) the Fundamental Inequality implies that \(n_2\leqslant n_1.\) Exchanging the roles of \(\mathcal{S}_1\) and \(\mathcal{S}_2,\) we likewise conclude that \(n_2\geqslant n_1.\) It follows that \(n_1=n_2\) and hence there exists a bijective mapping from \(\mathcal{S}_1\) to \(\mathcal{S}_2,\) as these are finite sets with the same number of elements.

(iv) is an immediate consequence of (iii).

Consequences of the Main Theorem

Corollary 5.11

Every \(\mathbb{K}\)-vector space \(V\) admits at least one basis.

Proof. Since \(V\) is a generating set for \(V,\) we can apply (i) from Theorem 5.10 to \(\mathcal{S}=V\) to obtain a basis of \(V.\)

Remark 5.12

We’ve seen that \(\mathbb{R}\) is a \(\mathbb{Q}\)-vector space. It is impossible to explicitly write down a basis of this vector space, even though the corollary says that they exist!

5.4 Dimensions of vector spaces

We can now, at last, make sense of the idea of dimension of a vector space: we know that all vector spaces have bases, and any two bases are the same size; so we can make the following definition:

Definition 5.13

The dimension of a finite dimensional \(\mathbb{K}\)-vector space \(V,\) denoted by \(\dim(V)\) or \(\dim_{\mathbb{K}}(V),\) is the number of elements of any basis of \(V.\)

Example 5.14

  1. The zero vector space \(\{0\}\) has the empty set as a basis and hence is \(0\)-dimensional. Conversely, if \(V\) is a zero-dimensional space, then the empty set is a basis of \(V,\) so we must have \(V = \{0\}.\)

  2. A field \(\mathbb{K}\) – thought of as a \(\mathbb{K}\)-vector space – has \(\{1_{\mathbb{K}}\}\) as a basis and hence is \(1\)-dimensional.

  3. The vector space \(\mathbb{K}^n\) has \(\{\vec{e}_1,\ldots,\vec{e}_n\}\) as a basis and hence is \(n\)-dimensional.

  4. The vector space \(M_{m,n}(\mathbb{K})\) has \(\mathbf{E}_{k,l}\) for \(1\leqslant k \leqslant m\) and \(1\leqslant l \leqslant n\) as a basis, hence it is \(mn\)-dimensional.
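
These dimension counts can be double-checked mechanically; a sketch in Python with sympy (illustrative only):

```python
from sympy import Matrix, eye, zeros

# dim K^4 = 4: the standard basis rows form the identity, full rank.
assert eye(4).rank() == 4

# dim M_{2,3} = 6: flatten each E_{k,l} into a row of length 6 and
# stack; the resulting 6 rows are linearly independent.
rows = []
for k in range(2):
    for l in range(3):
        E = zeros(2, 3)
        E[k, l] = 1
        rows.append(list(E))   # row-major flattening of E_{k,l}
assert Matrix(rows).rank() == 6
```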

Lemma 5.15

Let \(V\) be a finite-dimensional \(\mathbb{K}\)-vector space, and \(U\) a subspace of \(V.\) Then \(U\) is also finite-dimensional and we have \[0 \le \dim(U) \le \dim(V).\] Furthermore, \(\dim(U) = 0\) iff \(U = \{0_V\},\) and \(\dim(U) = \dim(V)\) iff \(U = V.\)

Proof. Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(U\subset V\) a subspace. Let’s consider the set \(\mathcal{X} = \{ m \in \mathbb{N}:\) there exists an LI set in \(U\) with \(m\) elements\(\}.\) Obviously \(0 \in \mathcal{X},\) since \(\varnothing\) is LI. On the other hand, \(\mathcal{X}\) can’t contain any integer larger than the dimension of \(V,\) since an LI set in \(U\) is a fortiori an LI subset of \(V.\)

So \(\mathcal{X}\) must have a largest element, say \(d.\) Let us choose an LI subset \(\mathcal{S}\subset U\) whose size is \(d.\) We will show that \(\mathcal{S}\) generates \(U\); this shows that \(U\) is finite-dimensional, since \(\mathcal{S}\) is a finite generating set.

Let \(u \in U\) be arbitrary. If \(u \notin \operatorname{span}(\mathcal{S}),\) then we can apply the growing-LI-sets lemma to see that \(\mathcal{S}\cup \{u\}\) is an LI set. Since this set has size \(d + 1,\) and \(d\) is maximal, this is a contradiction. Thus \(u \in \operatorname{span}(\mathcal{S})\) as required.

Since this set \(\mathcal{S}\) is by definition LI, it is a basis of \(U,\) so \(d = \dim U.\) Using the Fundamental Inequality on \(\mathcal{S}\) and a basis of \(V,\) we find that \(\dim(U) \le \dim(V)\); and if \(\dim(U) = \dim(V),\) then the Exchange Lemma tells us that \(\mathcal{S}\) generates \(V,\) so \(U = \operatorname{span}(\mathcal{S}) = V.\) On the other hand, if \(d = 0,\) then \(\mathcal{S}\) is the empty set, so \(U = \operatorname{span}(\varnothing) = \{0_V\}.\)

Remark 5.16

This proof can be done much more quickly if we allow ourselves to apply the Main Theorem on Bases to conclude that \(U\) has a basis: if \(\mathcal{B}\) is any basis of \(U,\) then \(\mathcal{B}\) is in particular an LI set in \(V\) and hence it has size \(\le \dim(V)\) by the Fundamental Inequality. The problem is that we have only proved the Main Theorem on Bases for finite-dimensional spaces, and we don’t know a priori that \(U\) is finite-dimensional. So we have to work a bit harder.

5.5 Computing with subspaces

The above theory applies to subspaces of any abstract vector space; we’ll now specialise to the concrete case of row and column vectors, and see how to compute with bases of subspaces in practice. As usual, this will reduce to computing a RREF (the “Swiss army knife” of linear algebra calculations).

We’ll first investigate the case of subspaces of \(\mathbb{K}_n\) (recall that the index \(n\) “downstairs” means row vectors, not column vectors).

Proposition 5.17

Let \(\vec{\alpha}_1, \dots, \vec{\alpha}_r\) be vectors in \(\mathbb{K}_n,\) and let \(U = \operatorname{span}(\vec{\alpha}_1, \dots, \vec{\alpha}_r).\)

Let \(\mathbf{A}\) be the matrix with the \(\vec{\alpha}_i\) as rows, and let \(\mathbf{B}\) be the RREF of \(\mathbf{A},\) with rows \(\vec{\beta}_1, \dots, \vec{\beta}_r.\) If \(h\) is the number of nonzero rows of \(\mathbf{B},\) then \(h = \dim U,\) the vectors \(\vec{\beta}_1, \dots, \vec{\beta}_h\) are a basis of \(U,\) and \(\vec{\beta}_{h + 1} = \dots = \vec{\beta}_r = 0.\)

Proof. Since \(\mathbf{B}\) is left-equivalent to \(\mathbf{A},\) every row of \(\mathbf{B}\) is a linear combination of rows of \(\mathbf{A},\) and vice versa. Hence the span of the \(\beta\)’s is equal to the span of the \(\alpha\)’s. By the definition of RREF, all the non-zero rows of \(\mathbf{B}\) come before all the zero rows. So it suffices to show that the non-zero rows of \(\mathbf{B}\) are linearly independent.

Let \(s_1, \dots, s_h\) be scalars such that \(\sum s_i \vec{\beta}_i = 0.\) For \(1 \le j \le h,\) let \(p_j\) be the index of the leading entry of \(\vec{\beta}_j.\) Then \([\vec{\beta}_j]_{p_j} = 1,\) and \([\vec{\beta}_i]_{p_j} = 0\) for \(i \ne j\) by the definition of row echelon form. Hence we have \[0 = [\mathbf{0}]_{p_j} = \sum_i s_i[\vec{\beta}_i]_{p_j} = \sum_i s_i \delta_{ij} = s_j,\] so \(s_j = 0.\) Since \(j\) was arbitrary, this shows that \(s_1 = \dots = s_h = 0\) and hence \(\{\vec{\beta}_1, \dots, \vec{\beta}_h\}\) is an LI set.
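
In practice this computation is a single call to a row-reduction routine. A sketch in Python with sympy, whose `Matrix.rref` returns the RREF together with the pivot columns (the generators are my own illustrative choices):

```python
from sympy import Matrix

# Generators of U as rows; the second is twice the first, so redundant.
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [0, 1, 1]])
B, pivots = A.rref()

h = len(pivots)            # number of nonzero rows of the RREF
assert h == 2              # so dim U = 2
assert B.row(0) == Matrix([[1, 0, 1]])   # the RREF basis of U
assert B.row(1) == Matrix([[0, 1, 1]])
assert B.row(2) == Matrix([[0, 0, 0]])   # remaining rows are zero
```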

Of course, a vector space can have many different bases, so sometimes it can be hard to recognise when two subspaces are actually the same. The uniqueness part of RREF allows us to solve this too:

Proposition 5.18

Let \(U\) be a subspace of \(\mathbb{K}_n.\) Then there is a unique basis \(\{\vec{\beta}_1, \dots, \vec{\beta}_h\}\) of \(U\) (and a unique ordering of those basis vectors) such that the matrix with those vectors as rows is in RREF; and this basis, the RREF basis of \(U,\) can be computed explicitly starting from any generating set of \(U.\)

Proof. We’ve seen that such a basis exists (and is computable), so we need to show uniqueness. But any two bases of \(U\) give matrices which are left-equivalent; hence there cannot be more than one such matrix which is in RREF.

Thus, if we are given generating sets for two subspaces \(U, U'\) and we want to know if \(U = U',\) we just compute the RREF bases of each, and check whether they’re the same.
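
For instance (a sketch in Python with sympy; the generating sets are illustrative):

```python
from sympy import Matrix

# Two different generating sets, each stacked as rows.
U1 = Matrix([[1, 1, 0],
             [0, 1, 1]])
U2 = Matrix([[1, 0, -1],
             [2, 3, 1]])

# The subspaces coincide iff the RREF bases (the nonzero rows of the
# RREFs) agree; here both reduce to {(1, 0, -1), (0, 1, 1)}.
assert U1.rref()[0] == U2.rref()[0]
```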

Remark 5.19

As a special case, we can check if a given vector \(\vec{\xi}\) lies in \(U\) or not, since \(\vec{\xi} \in U\) iff the subspace \(U' = \operatorname{span}(\{\)generators of \(U\} \cup \{\vec{\xi}\})\) is equal to \(U.\) Concretely, we just form the matrix \(\begin{pmatrix} \mathbf{B}\\ \hline \vec{\xi}\end{pmatrix},\) where \(\mathbf{B}\) is the RREF basis matrix of \(U,\) and echelonize that; if the echelon form is \(\begin{pmatrix} \mathbf{B}\\ \hline 0\end{pmatrix},\) then \(\vec{\xi} \in U,\) and otherwise not.
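
Equivalently, \(\vec{\xi} \in U\) iff appending \(\vec{\xi}\) as an extra row does not increase the rank. A sketch in Python with sympy (the subspace here is an illustrative choice):

```python
from sympy import Matrix

# RREF basis matrix of U = span((1, 0, 1), (0, 1, 1)) in R_3.
B = Matrix([[1, 0, 1],
            [0, 1, 1]])

def in_subspace(xi):
    # xi lies in the row space of B iff stacking it under B leaves
    # the rank unchanged.
    return B.col_join(xi).rank() == B.rank()

assert in_subspace(Matrix([[2, 3, 5]]))       # = 2*(1,0,1) + 3*(0,1,1)
assert not in_subspace(Matrix([[1, 0, 0]]))
```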

Of course, if we want to compute with subspaces of \(\mathbb{K}^n\) instead, we can just transpose all the matrices and formulate our problem in terms of \(\mathbb{K}_n.\)

Example 5.20

Let’s revisit Example 4.15: “Is \(\begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} \in \mathbb{R}^3\) a linear combination of \(\left\{\left(\begin{smallmatrix} 3 \\ -1 \\ 0\end{smallmatrix}\right), \left(\begin{smallmatrix} 0 \\ 1 \\-1 \end{smallmatrix}\right)\right\}\)?”

This is equivalent to asking: is \(\vec{\gamma} = \left(1\quad 2 \quad 1\right)\in \mathbb{R}_3\) in the subspace generated by \(\vec{\alpha} = \left(3\quad -1 \quad 0\right)\) and \(\vec{\beta} = \left( 0 \quad 1 \quad -1\right)\)?

We compute that the echelon form of the matrix \(\left(\begin{array}{rrr} 3 & -1 & 0 \\ 0 & 1 & -1 \\ \hline 1 & 2 & 1 \end{array}\right)\) is the \(3 \times 3\) identity matrix. So \(\operatorname{span}(\vec{\alpha}, \vec{\beta}, \vec{\gamma})\) is 3-dimensional – it’s the whole of \(\mathbb{R}_3\) – whereas \(\operatorname{span}(\vec{\alpha}, \vec{\beta})\) must have dimension \(\le 2.\) Thus \(\vec{\gamma}\) is not a linear combination of the other two.
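
The row reduction in this example can be reproduced in Python with sympy:

```python
from sympy import Matrix, eye

M = Matrix([[3, -1, 0],    # alpha
            [0, 1, -1],    # beta
            [1, 2, 1]])    # gamma
assert M.rref()[0] == eye(3)   # span(alpha, beta, gamma) is all of R_3

# whereas alpha and beta alone span only a 2-dimensional subspace:
assert Matrix([[3, -1, 0], [0, 1, -1]]).rank() == 2
```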

Remark 5.21

Note that this is a genuinely different method from the previous computation: previously we applied elementary row operations to \(\left(\begin{smallmatrix} 3 & 0 & 1 \\ -1 & 1 & 2 \\ 0 & -1 & 1 \end{smallmatrix}\right),\) and now we’re applying elementary row operations to its transpose (or elementary column operations to the original matrix). Both methods are equally valid.


Exercises

Exercise 5.22 • hard!

Show that if \(V\) is not finite-dimensional, then there exists an infinite linearly independent set in \(V.\)

[In this exercise you may only use theorems proved in the course, so you may not use the fact that the Main Theorem on Bases holds for infinite-dimensional spaces.]

Solution

We build up an infinite linearly independent set \(\{x_0, x_1, \dots \}\) inductively, as follows. Since \(V\) clearly cannot be zero, we let \(x_0\) be any non-zero vector in \(V.\) Now, suppose that we have chosen an LI set \(\{x_0, \dots, x_n\}.\) Since \(V\) is not finite-dimensional, this set cannot be a generating set, so there exists some vector not in the span of \(x_0, \dots, x_n\); let \(x_{n+1}\) be a choice of such a vector. Then, by the Growing LI Sets lemma, \(\{x_0, \dots, x_{n+1}\}\) is still linearly independent. Since we never run out of vectors to add, the result is an infinite linearly independent set.

[If this proof seems strange to you, then your instincts are good: this is an example of an argument which needs the Axiom of Choice, which you saw briefly in the Statistics and Discrete Structures module. This axiom is what we need in order to make sense of the idea that “we can go on making choices forever”. In fact the axiom of choice is indispensable here: the Swiss mathematician Hans Läuchli showed that it is impossible to prove this exercise without using the Axiom of Choice.]



MCQ 5.1

If \(S\) is a linearly dependent subset of \(\mathbb{K}^n,\) so is every subset of \(\mathbb{K}^n\) that contains \(S.\)

  • True
  • False

MCQ 5.2

Every subset of a linearly independent set is linearly independent.

  • True
  • False

MCQ 5.3

If \(\mathcal S,\mathcal T\) are generating sets of \(V\) with finitely many elements, then \(|\mathcal S| = |\mathcal T|.\)

  • True
  • False

MCQ 5.4

If \(\mathcal S,\mathcal T\) are bases of \(V\) with finitely many elements, then \(|\mathcal S| = |\mathcal T|.\)

  • True
  • False

MCQ 5.5

A set of vectors \(v_1,\dots,v_n \in V\) is linearly independent if its span is \(n\)-dimensional.

  • True
  • False

MCQ 5.6

The dimension of a subspace \(U\) of \(\mathbb{K}^n\) is equal to the number of vectors in a basis for it.

  • True
  • False

MCQ 5.7

Any linearly independent set of \(n\) vectors in \(\mathbb{K}^n\) is a basis for \(\mathbb{K}^n.\)

  • True
  • False

MCQ 5.8

If a subset of \(\mathbb{K}^n\) spans \(\mathbb{K}^n,\) it is linearly independent.

  • True
  • False
