6 Endomorphisms
6.1 Sums, direct sums and complements
In this chapter we study linear mappings from a vector space to itself.
A linear map \(g : V \to V\) from a \(\mathbb{K}\)-vector space \(V\) to itself is called an endomorphism. An endomorphism that is also an isomorphism is called an automorphism.
Before we develop the theory of endomorphisms, we introduce some notions for subspaces.
Let \(V\) be a \(\mathbb{K}\)-vector space, \(n \in \mathbb{N}\) and \(U_1,\ldots,U_n\) be vector subspaces of \(V.\) The set \[\sum_{i=1}^n U_i=U_1+U_2+\cdots +U_n=\{v \in V \,|\, v=u_1+u_2+\cdots + u_n \text{ for some } u_i \in U_i,\; i=1,\ldots,n\}\] is called the sum of the subspaces \(U_i\).
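For instance, in \(V=\mathbb{R}^3\) with \(U_1=\operatorname{span}\{\vec{e}_1,\vec{e}_2\}\) and \(U_2=\operatorname{span}\{\vec{e}_2,\vec{e}_3\}\) we have \[U_1+U_2=\operatorname{span}\{\vec{e}_1,\vec{e}_2,\vec{e}_3\}=\mathbb{R}^3.\] Note that the decomposition \(v=u_1+u_2\) need not be unique: here, for example, \(\vec{e}_2=\vec{e}_2+0_V=0_V+\vec{e}_2.\)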
Let \(V\) be a \(\mathbb{K}\)-vector space, \(n\geqslant 2\) a natural number and \(U_1,\ldots,U_n\) vector subspaces of \(V.\) Then the intersection \[U^{\prime}=\bigcap_{j=1}^n U_j=\left\{v \in V\,|\, v \in U_j\;\text{for all}\; j=1,\ldots,n\right\}\] is a vector subspace of \(V\) as well.
The sum of the subspaces \(U_i\subset V,\) \(i=1,\ldots,n,\) is again a vector subspace.
Notice that \(U_1+\cdots+ U_n\) is the smallest vector subspace of \(V\) containing all vector subspaces \(U_i,\) \(i=1,\ldots,n.\)
If each vector in the sum is in a unique way the sum of vectors from the subspaces, we say the subspaces are in direct sum:
Let \(V\) be a \(\mathbb{K}\)-vector space, \(n \in \mathbb{N}\) and \(U_1,\ldots,U_n\) be vector subspaces of \(V.\) The subspaces \(U_1,\ldots,U_n\) are said to be in direct sum if each vector \(w \in W=U_1+\cdots+U_n\) is in a unique way the sum of vectors \(v_i \in U_i\) for \(1\leqslant i\leqslant n.\) That is, if \(w=v_1+v_2+\cdots+v_n=v^{\prime}_1+v^{\prime}_2+\cdots+v^{\prime}_n\) for vectors \(v_i,v^{\prime}_i \in U_i,\) then \(v_i=v^{\prime}_i\) for all \(1\leqslant i \leqslant n.\) We write \[\bigoplus_{i=1}^n U_i\] in case the subspaces \(U_1,\ldots,U_n\) are in direct sum.
Let \(n \in \mathbb{N}\) and \(V=\mathbb{K}^n\) as well as \(U_i=\operatorname{span}\{\vec{e}_i\},\) where \(\{\vec{e}_1,\ldots,\vec{e}_n\}\) denotes the standard basis of \(\mathbb{K}^n.\) Then \(\mathbb{K}^n=\bigoplus_{i=1}^n U_i.\)
Two subspaces \(U_1,U_2\) of \(V\) are in direct sum if and only if \(U_1\cap U_2=\{0_V\}.\) Indeed, suppose \(U_1\cap U_2=\{0_V\}\) and consider \(w=v_1+v_2=v_1^{\prime}+v_2^{\prime}\) with \(v_i,v_i^{\prime} \in U_i\) for \(i=1,2.\) We then have \(v_1-v_1^{\prime}=v_2^{\prime}-v_2 \in U_2,\) since \(U_2\) is a subspace. Since \(U_1\) is a subspace as well, we also have \(v_1-v_1^{\prime} \in U_1.\) Since \(v_1-v_1^{\prime}\) lies in both \(U_1\) and \(U_2,\) we must have \(v_1-v_1^{\prime}=0_V=v_2^{\prime}-v_2.\) Conversely, suppose \(U_1,U_2\) are in direct sum and let \(w \in U_1\cap U_2.\) We can write \(w=w+0_V=0_V+w,\) since \(w \in U_1\) and \(w \in U_2.\) Since \(U_1,U_2\) are in direct sum, we must have \(w=0_V,\) hence \(U_1\cap U_2=\{0_V\}.\)
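As an illustration of this criterion, take \(V=\mathbb{R}^2,\) \(U_1=\operatorname{span}\{\vec{e}_1\}\) and \(U_2=\operatorname{span}\{\vec{e}_1+\vec{e}_2\}.\) A vector in \(U_1\cap U_2\) is of the form \(s\vec{e}_1=t(\vec{e}_1+\vec{e}_2)\) for scalars \(s,t;\) comparing second entries forces \(t=0\) and then \(s=0.\) Hence \(U_1\cap U_2=\{0_V\}\) and \(U_1,U_2\) are in direct sum; indeed \[\begin{pmatrix} a \\ b \end{pmatrix}=(a-b)\,\vec{e}_1+b\,(\vec{e}_1+\vec{e}_2)\] is the unique decomposition of a vector of \(\mathbb{R}^2.\)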
Observe that if the subspaces \(U_1,\ldots,U_n\) are in direct sum and \(v_i \in U_i\) with \(v_i \neq 0_V\) for \(1\leqslant i\leqslant n,\) then the vectors \(\{v_1,\ldots,v_n\}\) are linearly independent. Indeed, if \(s_1,\ldots,s_n\) are scalars such that \[s_1v_1+s_2v_2+\cdots+s_n v_n=0_V=0_V+0_V+\cdots+0_V,\] then \(s_iv_i=0_V\) for all \(1\leqslant i\leqslant n.\) By assumption \(v_i\neq 0_V\) and hence \(s_i=0\) for all \(1\leqslant i\leqslant n.\)
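A word of caution: for \(n\geqslant 3\) it is not enough that the subspaces intersect pairwise in \(\{0_V\}.\) In \(\mathbb{R}^2,\) the subspaces \(U_1=\operatorname{span}\{\vec{e}_1\},\) \(U_2=\operatorname{span}\{\vec{e}_2\}\) and \(U_3=\operatorname{span}\{\vec{e}_1+\vec{e}_2\}\) satisfy \(U_i\cap U_j=\{0_V\}\) for \(i\neq j,\) but \[\vec{e}_1+\vec{e}_2=\vec{e}_1+\vec{e}_2+0_V=0_V+0_V+(\vec{e}_1+\vec{e}_2)\] gives two different decompositions, so \(U_1,U_2,U_3\) are not in direct sum. This is consistent with the observation above, since \(\vec{e}_1,\vec{e}_2,\vec{e}_1+\vec{e}_2\) are linearly dependent.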
Let \(n \in \mathbb{N},\) \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(U_1,\ldots,U_n\) be subspaces of \(V.\) Let \(\mathbf{b}_i\) be an ordered basis of \(U_i\) for \(1\leqslant i\leqslant n.\) Then we have:
The tuple of vectors obtained by listing all the vectors of the bases \(\mathbf{b}_i\) is a basis of \(V\) if and only if \(V=\bigoplus_{i=1}^n U_i.\)
\(\dim(U_1+\cdots+U_n)\leqslant \dim(U_1)+\cdots+\dim (U_n)\) with equality if and only if the subspaces \(U_1,\ldots,U_n\) are in direct sum.
Proof. This is left as an exercise.
Let \(V\) be a \(\mathbb{K}\)-vector space and \(U\subset V\) a subspace. A subspace \(U^{\prime}\) of \(V\) such that \(V=U\oplus U^{\prime}\) is called a complement to \(U\).
Notice that a complement need not be unique. Consider \(V=\mathbb{R}^2\) and \(U=\operatorname{span}\{\vec{e}_1\}.\) Let \(\vec{v} \in V.\) Then the subspace \(U^{\prime}=\operatorname{span}\{\vec{v}\}\) is a complement to \(U,\) provided \(\vec{e}_1,\vec{v}\) are linearly independent.
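Concretely, both \[U^{\prime}=\operatorname{span}\{\vec{e}_2\} \qquad\text{and}\qquad U^{\prime\prime}=\operatorname{span}\{\vec{e}_1+\vec{e}_2\}\] are complements to \(U=\operatorname{span}\{\vec{e}_1\}\) in \(\mathbb{R}^2,\) since in both cases the two spanning vectors are linearly independent, so that \(U\oplus U^{\prime}=\mathbb{R}^2=U\oplus U^{\prime\prime}.\)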
Let \(U\) be a subspace of a finite dimensional \(\mathbb{K}\)-vector space \(V.\) Then there exists a subspace \(U^{\prime}\) so that \(V=U\oplus U^{\prime}.\)
For the proof we recall the following theorem about bases. Let \(V\) be a \(\mathbb{K}\)-vector space. Then:
Any subset \(\mathcal{S}\subset V\) generating \(V\) admits a subset \(\mathcal{T}\subset \mathcal{S}\) that is a basis of \(V.\)
Any subset \(\mathcal{S}\subset V\) that is linearly independent in \(V\) is contained in a subset \(\mathcal{T}\subset V\) that is a basis of \(V.\)
If \(\mathcal{S}_1,\mathcal{S}_2\) are bases of \(V,\) then there exists a bijective map \(f : \mathcal{S}_1 \to \mathcal{S}_2.\)
If \(V\) is finite dimensional, then any basis of \(V\) is a finite set and the number of elements in the basis is independent of the choice of the basis.
Proof (of the existence of a complement). Let \(\mathcal{S}\) be a basis of \(U.\) Since \(\mathcal{S}\) is linearly independent in \(V,\) it is contained in a basis \(\mathcal{T}\) of \(V.\) Set \(U^{\prime}=\operatorname{span}(\mathcal{T}\setminus\mathcal{S}).\) Then \(U+U^{\prime}=V,\) since \(\mathcal{T}\) generates \(V.\) Moreover, the sum is direct: a vector \(w \in U\cap U^{\prime}\) can be written both as a linear combination of vectors in \(\mathcal{S}\) and as a linear combination of vectors in \(\mathcal{T}\setminus\mathcal{S};\) subtracting the two expressions and using the linear independence of \(\mathcal{T}\) forces all coefficients to vanish, so \(w=0_V.\)
The dimension of a sum of two subspaces equals the sum of the dimensions of the subspaces minus the dimension of the intersection:
Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(U_1,U_2\) subspaces of \(V.\) Then we have \[\dim(U_1+U_2)=\dim(U_1)+\dim(U_2)-\dim(U_1\cap U_2).\]
Proof. Set \(r=\dim(U_1\cap U_2),\) \(m=\dim(U_1)\) and \(n=\dim(U_2).\) Choose a basis \(\{u_1,\ldots,u_r\}\) of \(U_1\cap U_2.\) Since a linearly independent set can be extended to a basis, we may extend it to a basis \(\mathcal{S}_1=\{u_1,\ldots,u_r,v_1,\ldots,v_{m-r}\}\) of \(U_1\) as well as to a basis \(\mathcal{S}_2=\{u_1,\ldots,u_r,w_1,\ldots,w_{n-r}\}\) of \(U_2.\)
Now consider the set \(\mathcal{S}=\{u_1,\ldots,u_r,v_1,\ldots,v_{m-r},w_1,\ldots,w_{n-r}\}\) consisting of \(r+m-r+n-r=n+m-r\) vectors. If this set is a basis of \(U_1+U_2,\) then the claim follows, since then \(\dim(U_1+U_2)=n+m-r=\dim(U_1)+\dim(U_2)-\dim(U_1\cap U_2).\)
We first show that \(\mathcal{S}\) generates \(U_1+U_2.\) Let \(y \in U_1+U_2\) so that \(y=x_1+x_2\) for vectors \(x_1 \in U_1\) and \(x_2 \in U_2.\) Since \(\mathcal{S}_1\) is a basis of \(U_1,\) we can write \(x_1\) as a linear combination of elements of \(\mathcal{S}_1.\) Likewise we can write \(x_2\) as a linear combination of elements of \(\mathcal{S}_2.\) It follows that \(\mathcal{S}\) generates \(U_1+U_2.\)
We need to show that \(\mathcal{S}\) is linearly independent. So suppose we have scalars \(s_1,\ldots,s_r,\) \(t_1,\ldots,t_{m-r},\) and \(r_{1},\ldots,r_{n-r},\) so that \[\underbrace{s_1u_1+\cdots+s_r u_r}_{=u} +\underbrace{t_1v_1+\cdots+t_{m-r}v_{m-r}}_{=v}+\underbrace{r_1w_1+\cdots+r_{n-r}w_{n-r}}_{=w}=0_V.\] Equivalently, \(w=-u-v\) so that \(w \in U_1.\) Since \(w\) is a linear combination of elements of \(\mathcal{S}_2,\) we also have \(w \in U_2.\) Therefore, \(w \in U_1\cap U_2\) and there exist scalars \(\hat{s}_1,\ldots,\hat{s}_r\) such that \[w=\underbrace{\hat{s}_1u_1+\cdots+\hat{s}_r u_r}_{=\hat{u}}.\] This is equivalent to \(w-\hat{u}=0_V,\) or written out \[r_1w_1+\cdots+r_{n-r}w_{n-r}-\hat{s}_1u_1-\cdots-\hat{s}_r u_r=0_V.\] Since the vectors \(\{u_1,\ldots,u_r,w_1,\ldots,w_{n-r}\}\) are linearly independent, we conclude that \(r_1=\cdots=r_{n-r}=\hat{s}_1=\cdots=\hat{s}_r=0.\) It follows that \(w=0_V\) and hence \(u+v=0_V.\) Again, since \(\{u_1,\ldots,u_r,v_1,\ldots,v_{m-r}\}\) are linearly independent, we conclude that \(s_1=\cdots=s_r=t_1=\cdots=t_{m-r}=0\) and we are done.
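As a quick check of the formula, consider two distinct planes through the origin in \(\mathbb{R}^3,\) say \(U_1=\operatorname{span}\{\vec{e}_1,\vec{e}_2\}\) and \(U_2=\operatorname{span}\{\vec{e}_2,\vec{e}_3\}.\) Their intersection is the line \(\operatorname{span}\{\vec{e}_2\},\) and indeed \[\dim(U_1+U_2)=\dim(U_1)+\dim(U_2)-\dim(U_1\cap U_2)=2+2-1=3=\dim(\mathbb{R}^3).\]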
6.2 Invariants of endomorphisms
We begin by recalling the change of basis formula (Theorem 3.106): Let \(V,W\) be finite dimensional \(\mathbb{K}\)-vector spaces and \(\mathbf{b},\mathbf{b}^{\prime}\) ordered bases of \(V\) and \(\mathbf{c},\mathbf{c}^{\prime}\) ordered bases of \(W.\) Let \(g : V \to W\) be a linear map. Then we have \[\mathbf{M}(g,\mathbf{b}^{\prime},\mathbf{c}^{\prime})=\mathbf{C}(\mathbf{c},\mathbf{c}^{\prime})\mathbf{M}(g,\mathbf{b},\mathbf{c})\mathbf{C}(\mathbf{b}^{\prime},\mathbf{b}).\] In particular, for a linear map \(g : V \to V\) we have \[\mathbf{M}(g,\mathbf{b}^{\prime},\mathbf{b}^{\prime})=\mathbf{C}\,\mathbf{M}(g,\mathbf{b},\mathbf{b})\,\mathbf{C}^{-1},\] where we write \(\mathbf{C}=\mathbf{C}(\mathbf{b},\mathbf{b}^{\prime}).\)
Let \(n \in \mathbb{N}\) and \(\mathbf{A},\mathbf{A}^{\prime} \in M_{n,n}(\mathbb{K}).\) The matrices \(\mathbf{A}\) and \(\mathbf{A}^{\prime}\) are called similar or conjugate over \(\mathbb{K}\) if there exists an invertible matrix \(\mathbf{C}\in M_{n,n}(\mathbb{K})\) such that \[\mathbf{A}^{\prime} =\mathbf{C}\mathbf{A}\mathbf{C}^{-1}.\]
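For example, over \(\mathbb{K}=\mathbb{R}\) the matrices \[\mathbf{A}=\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} \qquad\text{and}\qquad \mathbf{A}^{\prime}=\begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix}\] are similar: taking \(\mathbf{C}=\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\) with \(\mathbf{C}^{-1}=\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix},\) one checks that \(\mathbf{C}\mathbf{A}\mathbf{C}^{-1}=\mathbf{A}^{\prime}.\)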
Similarity of matrices over \(\mathbb{K}\) is an equivalence relation:
Let \(n \in \mathbb{N}\) and \(\mathbf{A},\mathbf{B},\mathbf{X}\in M_{n,n}(\mathbb{K}).\) Then we have
(i) \(\mathbf{A}\) is similar to itself;
(ii) if \(\mathbf{A}\) is similar to \(\mathbf{B},\) then \(\mathbf{B}\) is similar to \(\mathbf{A}\);
(iii) if \(\mathbf{A}\) is similar to \(\mathbf{B}\) and \(\mathbf{B}\) is similar to \(\mathbf{X},\) then \(\mathbf{A}\) is also similar to \(\mathbf{X}.\)
Proof. (i) We take \(\mathbf{C}=\mathbf{1}_{n}.\)
(ii) Suppose \(\mathbf{A}\) is similar to \(\mathbf{B}\) so that \(\mathbf{B}=\mathbf{C}\mathbf{A}\mathbf{C}^{-1}\) for some invertible matrix \(\mathbf{C}\in M_{n,n}(\mathbb{K}).\) Multiplying with \(\mathbf{C}^{-1}\) from the left and \(\mathbf{C}\) from the right, we get \[\mathbf{C}^{-1}\mathbf{B}\mathbf{C}=\mathbf{C}^{-1}\mathbf{C}\mathbf{A}\mathbf{C}^{-1}\mathbf{C}=\mathbf{A},\] so that the similarity follows for the choice \(\hat{\mathbf{C}}=\mathbf{C}^{-1}.\)
(iii) We have \(\mathbf{B}=\mathbf{C}\mathbf{A}\mathbf{C}^{-1}\) and \(\mathbf{X}=\mathbf{D}\mathbf{B}\mathbf{D}^{-1}\) for invertible matrices \(\mathbf{C},\mathbf{D}.\) Then we get \[\mathbf{X}=\mathbf{D}\mathbf{C}\mathbf{A}\mathbf{C}^{-1}\mathbf{D}^{-1},\] so that the similarity follows for the choice \(\hat{\mathbf{C}}=\mathbf{D}\mathbf{C}.\)
Because of (ii) in particular, one can simply say that two matrices \(\mathbf{A}\) and \(\mathbf{B}\) are similar, without specifying an order.
Theorem 3.106 shows that \(\mathbf{A}\) and \(\mathbf{B}\) are similar if and only if there exists an endomorphism \(g\) of \(\mathbb{K}^n\) such that \(\mathbf{A}\) and \(\mathbf{B}\) represent \(g\) with respect to two ordered bases of \(\mathbb{K}^n.\) This allows us to associate to an endomorphism scalars that do not depend on the chosen basis. For the determinant, recall the following two facts:
For matrices \(\mathbf{A},\mathbf{B}\in M_{n,n}(\mathbb{K})\) we have \[\det(\mathbf{A}\mathbf{B})=\det(\mathbf{A})\det(\mathbf{B}).\]
A matrix \(\mathbf{A}\in M_{n,n}(\mathbb{K})\) is invertible if and only if \(\det(\mathbf{A})\neq 0.\) Moreover, in the case where \(\mathbf{A}\) is invertible, we have \[\det\left(\mathbf{A}^{-1}\right)=\frac{1}{\det \mathbf{A}}.\]
Combining these two facts, for every invertible matrix \(\mathbf{C}\in M_{n,n}(\mathbb{K})\) we obtain \[\det\left(\mathbf{C}\mathbf{A}\mathbf{C}^{-1}\right)=\det(\mathbf{C})\det(\mathbf{A})\det\left(\mathbf{C}^{-1}\right)=\det(\mathbf{A}),\] so similar matrices have the same determinant. The following definition therefore makes sense: Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(g : V \to V\) an endomorphism. The determinant of \(g\) is defined as \(\det(g)=\det(\mathbf{M}(g,\mathbf{b},\mathbf{b})),\) where \(\mathbf{b}\) is any ordered basis of \(V.\)
Another example of a scalar that we can associate to an endomorphism is the so-called trace. As with the determinant, we first define the trace for matrices. Luckily, the trace is much simpler to define:
Let \(n \in \mathbb{N}\) and \(\mathbf{A}\in M_{n,n}(\mathbb{K}).\) The sum \(\sum_{i=1}^n [\mathbf{A}]_{ii}\) of its diagonal entries is called the trace of \(\mathbf{A}\) and denoted by \(\operatorname{Tr}(\mathbf{A})\) or \(\operatorname{Tr}\mathbf{A}.\)
For all \(n \in \mathbb{N}\) we have \(\operatorname{Tr}(\mathbf{1}_{n})=n.\) For \[\mathbf{A}=\begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 3 \end{pmatrix}\] we have \(\operatorname{Tr}(\mathbf{A})=2+2+3=7.\)
The trace of a product of square matrices is independent of the order of multiplication:
Let \(n \in \mathbb{N}\) and \(\mathbf{A},\mathbf{B}\in M_{n,n}(\mathbb{K}).\) Then we have \[\operatorname{Tr}(\mathbf{A}\mathbf{B})=\operatorname{Tr}(\mathbf{B}\mathbf{A}).\]
Proof. Let \(\mathbf{A}=(A_{ij})_{1\leqslant i,j\leqslant n}\) and \(\mathbf{B}=(B_{ij})_{1\leqslant i,j\leqslant n}.\) Then \[[\mathbf{A}\mathbf{B}]_{ij}=\sum_{k=1}^n A_{ik}B_{kj} \qquad \text{and}\qquad [\mathbf{B}\mathbf{A}]_{kj}=\sum_{i=1}^n B_{ki}A_{ij},\] so that \[\operatorname{Tr}(\mathbf{A}\mathbf{B})=\sum_{i=1}^n\sum_{k=1}^n A_{ik}B_{ki}=\sum_{k=1}^n\sum_{i=1}^n B_{ki}A_{ik}=\operatorname{Tr}(\mathbf{B}\mathbf{A}).\]
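For instance, with \[\mathbf{A}=\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\qquad\text{and}\qquad \mathbf{B}=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\] we get \(\mathbf{A}\mathbf{B}=\begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix}\) and \(\mathbf{B}\mathbf{A}=\begin{pmatrix} 3 & 4 \\ 1 & 2 \end{pmatrix},\) so \(\operatorname{Tr}(\mathbf{A}\mathbf{B})=5=\operatorname{Tr}(\mathbf{B}\mathbf{A}),\) even though \(\mathbf{A}\mathbf{B}\neq\mathbf{B}\mathbf{A}.\) Note that in general \(\operatorname{Tr}(\mathbf{A}\mathbf{B})\neq\operatorname{Tr}(\mathbf{A})\operatorname{Tr}(\mathbf{B});\) here \(\operatorname{Tr}(\mathbf{A})\operatorname{Tr}(\mathbf{B})=5\cdot 0=0.\)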
Using the previous proposition, we obtain \[\tag{6.2} \operatorname{Tr}\left(\mathbf{C}\mathbf{A}\mathbf{C}^{-1}\right)=\operatorname{Tr}\left(\mathbf{A}\mathbf{C}^{-1}\mathbf{C}\right)=\operatorname{Tr}(\mathbf{A}).\] As for the determinant, the following definition thus makes sense:
Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(g : V \to V\) an endomorphism. The trace of \(g\) is defined as \(\operatorname{Tr}(g)=\operatorname{Tr}(\mathbf{M}(g,\mathbf{b},\mathbf{b})),\) where \(\mathbf{b}\) is any ordered basis of \(V;\) by (6.2), this does not depend on the choice of \(\mathbf{b}.\)
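To illustrate the independence of the basis, consider the endomorphism \(g : \mathbb{R}^2 \to \mathbb{R}^2\) with \(\mathbf{M}(g,\mathbf{b},\mathbf{b})=\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\) with respect to the standard basis \(\mathbf{b}=(\vec{e}_1,\vec{e}_2).\) With respect to the ordered basis \(\mathbf{b}^{\prime}=(\vec{e}_1+\vec{e}_2,\,\vec{e}_1-\vec{e}_2)\) one computes \(\mathbf{M}(g,\mathbf{b}^{\prime},\mathbf{b}^{\prime})=\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix},\) and indeed both matrices have trace \(4\) (and determinant \(3\)).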
The trace and determinant of endomorphisms behave nicely with respect to composition of maps:
Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space. Then, for all endomorphisms \(f,g :V \to V\) we have
\(\operatorname{Tr}(f\circ g)=\operatorname{Tr}(g\circ f)\);
\(\det(f\circ g)=\det(f)\det(g).\)
Proof. Recall that the matrix representing a composition of linear maps is the product of the representing matrices: for linear maps \(g_1 : V_1 \to V_2\) and \(g_2 : V_2 \to V_3\) between finite dimensional \(\mathbb{K}\)-vector spaces with ordered bases \(\mathbf{b}_i\) of \(V_i\) we have \[\mathbf{M}(g_2\circ g_1,\mathbf{b}_1,\mathbf{b}_3)=\mathbf{M}(g_2,\mathbf{b}_2,\mathbf{b}_3)\mathbf{M}(g_1,\mathbf{b}_1,\mathbf{b}_2).\] Now let \(\mathbf{b}\) be an ordered basis of \(V\) and set \(\mathbf{A}=\mathbf{M}(f,\mathbf{b},\mathbf{b})\) and \(\mathbf{B}=\mathbf{M}(g,\mathbf{b},\mathbf{b}),\) so that \(\mathbf{M}(f\circ g,\mathbf{b},\mathbf{b})=\mathbf{A}\mathbf{B}\) and \(\mathbf{M}(g\circ f,\mathbf{b},\mathbf{b})=\mathbf{B}\mathbf{A}.\) Then \[\operatorname{Tr}(f\circ g)=\operatorname{Tr}(\mathbf{A}\mathbf{B})=\operatorname{Tr}(\mathbf{B}\mathbf{A})=\operatorname{Tr}(g\circ f)\] by the previous proposition, and \[\det(f\circ g)=\det(\mathbf{A}\mathbf{B})=\det(\mathbf{A})\det(\mathbf{B})=\det(f)\det(g)\] by the multiplicativity of the determinant.
We also have:
Let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space and \(g : V \to V\) an endomorphism. Then the following statements are equivalent:
\(g\) is injective;
\(g\) is surjective;
\(g\) is bijective;
\(\det(g) \neq 0.\)
Proof. Recall that a linear map \(f : V \to W\) between finite dimensional \(\mathbb{K}\)-vector spaces with \(\dim(V)=\dim(W)\) is injective if and only if it is surjective, if and only if it is bijective. Applying this with \(W=V\) yields the equivalence of the first three statements. For the last statement, recall that \(g\) is bijective if and only if \(\mathbf{M}(g,\mathbf{b},\mathbf{b})\) is invertible for an ordered basis \(\mathbf{b}\) of \(V,\) and that a square matrix is invertible if and only if its determinant is non-zero. Hence \(g\) is bijective if and only if \(\det(g)=\det(\mathbf{M}(g,\mathbf{b},\mathbf{b}))\neq 0.\)
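For example, the projection \(g : \mathbb{R}^2 \to \mathbb{R}^2,\) \(\vec{v}\mapsto \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\vec{v},\) has \(\det(g)=0;\) accordingly it is neither injective (its kernel is \(\operatorname{span}\{\vec{e}_2\}\)) nor surjective (its image is \(\operatorname{span}\{\vec{e}_1\}\)).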
The assumption that \(V\) is finite dimensional is essential here. To see this, recall the vector space of sequences. A mapping \(x : \mathbb{N} \to \mathbb{K}\) from the natural numbers into a field \(\mathbb{K}\) is called a sequence in \(\mathbb{K}\) (or simply a sequence, when \(\mathbb{K}\) is clear from the context). It is common to write \(x_n\) instead of \(x(n)\) for \(n \in \mathbb{N}\) and to denote a sequence by \((x_n)_{n \in \mathbb{N}}=(x_1,x_2,x_3,\ldots).\) We write \(\mathbb{K}^{\infty}\) for the set of sequences in \(\mathbb{K}.\) For instance, taking \(\mathbb{K}=\mathbb{R},\) we may consider the sequence \[\left(\frac{1}{n}\right)_{n \in \mathbb{N}}=\left(1,\frac{1}{2},\frac{1}{3},\frac{1}{4},\frac{1}{5},\ldots\right)\] or the sequence \[\left(\sqrt{n}\right)_{n \in \mathbb{N}}=\left(1,\sqrt{2},\sqrt{3},2,\sqrt{5},\ldots\right).\] If we equip \(\mathbb{K}^{\infty}\) with the zero vector given by the zero sequence \((0,0,0,0,0,\ldots),\) addition given by \((x_n)_{n \in \mathbb{N}}+(y_n)_{n\in \mathbb{N}}=(x_n+y_n)_{n \in \mathbb{N}}\) and scalar multiplication given by \(s\cdot(x_n)_{n \in \mathbb{N}}=(sx_n)_{n \in \mathbb{N}}\) for \(s\in \mathbb{K},\) then \(\mathbb{K}^{\infty}\) is a \(\mathbb{K}\)-vector space. The shift map \[g : \mathbb{K}^{\infty} \to \mathbb{K}^{\infty},\qquad (x_1,x_2,x_3,\ldots)\mapsto (0,x_1,x_2,\ldots)\] is an endomorphism of \(\mathbb{K}^{\infty}\) that is injective but not surjective, so the equivalence of the first three statements above fails for infinite dimensional vector spaces.