9.2 Symmetric bilinear forms
We now restrict to the case \(\mathbb{K}=\mathbb{R}.\) The familiar notion of perpendicular vectors generalises to arbitrary symmetric bilinear forms in the following sense:
Let \(V\) be an \(\mathbb{R}\)-vector space equipped with a symmetric bilinear form \(\langle\cdot{,}\cdot\rangle.\) Two vectors \(v_1,v_2 \in V\) are called orthogonal with respect to \(\langle\cdot{,}\cdot\rangle\) if \(\langle v_1,v_2\rangle=0.\) We write \(v_1\perp v_2\) if the vectors \(v_1,v_2 \in V\) are orthogonal. A subset \(S\subset V\) is called orthogonal with respect to \(\langle\cdot{,}\cdot\rangle\) if all pairs of distinct vectors of \(S\) are orthogonal with respect to \(\langle\cdot{,}\cdot\rangle.\) A basis of \(V\) which is also an orthogonal subset is called an orthogonal basis.
Perpendicular vectors in \(\mathbb{R}^n\) are orthogonal with respect to the standard scalar product defined by the rule (9.1).
Example 9.7 continued: We equip \(\mathbb{R}^2\) with the symmetric bilinear form \(\langle\cdot{,}\cdot\rangle_\mathbf{A}\) arising via the rule (9.2) from the matrix \[\mathbf{A}=\begin{pmatrix} 5 & 1 \\ 1 & 5 \end{pmatrix}.\] As we computed in Example 9.7, the vectors \(\vec{v}_1=\vec{e}_1+\vec{e}_2\) and \(\vec{v}_2=\vec{e}_2-\vec{e}_1\) satisfy \(\langle \vec{v}_1,\vec{v}_2\rangle_\mathbf{A}=0\) and hence are orthogonal with respect to \(\langle\cdot{,}\cdot\rangle_\mathbf{A}.\)
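For readers who want to double-check such computations with the computer, here is a minimal NumPy sketch (the variable names are our own choices) evaluating \(\vec{v}_1^{T}\mathbf{A}\vec{v}_2\):

```python
import numpy as np

A = np.array([[5, 1], [1, 5]])   # matrix of the bilinear form <.,.>_A
v1 = np.array([1, 1])            # coordinates of e1 + e2
v2 = np.array([-1, 1])           # coordinates of e2 - e1

print(v1 @ A @ v2)               # prints 0, so v1 and v2 are orthogonal
```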
Example 9.2 (vi) continued: Let \(f_1 \in V=\mathsf{C}([-1,1],\mathbb{R})\) be the function \(x \mapsto x\) and \(f_3 \in V\) the function \(x\mapsto \frac{1}{2}(5x^3-3x).\) Then \[\langle f_1,f_3\rangle=\int_{-1}^1 x\,\frac{1}{2}(5x^3-3x)\,\mathrm{d}x=\left.\frac{1}{2}\left(x^5-x^3\right)\right|_{-1}^1=0,\] so that \(f_1\) and \(f_3\) are orthogonal with respect to \(\langle\cdot{,}\cdot\rangle.\)
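The integral can likewise be checked symbolically; a short sketch assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
f1 = x
f3 = sp.Rational(1, 2) * (5 * x**3 - 3 * x)

# <f1, f3> = integral of f1*f3 over [-1, 1]
print(sp.integrate(f1 * f3, (x, -1, 1)))   # prints 0, so f1 and f3 are orthogonal
```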
Let \(V\) be an \(\mathbb{R}\)-vector space equipped with a symmetric bilinear form \(\langle\cdot{,}\cdot\rangle.\) A subset \(\mathcal{S}\subset V\) is called orthonormal with respect to \(\langle\cdot{,}\cdot\rangle\) if \(\mathcal{S}\) is orthogonal with respect to \(\langle\cdot{,}\cdot\rangle\) and if for all vectors \(v \in \mathcal{S}\) we have \(\langle v,v\rangle=1.\) A basis of \(V\) which is also an orthonormal subset is called an orthonormal basis.
Often when \(\langle\cdot{,}\cdot\rangle\) is clear from the context we will simply speak of orthogonal or orthonormal vectors without explicitly mentioning \(\langle\cdot{,}\cdot\rangle.\)
Notice that an ordered basis \(\mathbf{b}\) of \(V\) is orthonormal with respect to \(\langle\cdot{,}\cdot\rangle\) if and only if \[\mathbf{M}(\langle\cdot{,}\cdot\rangle,\mathbf{b})=\mathbf{1}_{n},\] where \(n=\dim V.\)
The standard basis \(\{\vec{e}_1,\ldots,\vec{e}_n\}\) of \(\mathbb{R}^n\) satisfies \[\vec{e}_i\cdot \vec{e}_j=\delta_{ij}\] and hence is an orthonormal basis with respect to the standard scalar product on \(\mathbb{R}^n.\)
Example 9.2 (vi) continued: Let \(\mathcal{S}=\{f_1,f_2,f_3\}\subset \mathsf{C}([-1,1],\mathbb{R})\) be the subset defined by the functions \[f_1 : x \mapsto \sqrt{\frac{3}{2}}x, \qquad f_2 : x \mapsto \frac{1}{2}\sqrt{\frac{5}{2}}(3x^2-1),\qquad f_3 : x \mapsto \frac{1}{2}\sqrt{\frac{7}{2}}(5x^3-3x).\] Then \(\mathcal{S}\) is orthonormal with respect to \(\langle\cdot{,}\cdot\rangle,\) as can be verified by direct computation.
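The direct computation amounts to evaluating the nine integrals \(\langle f_i,f_j\rangle.\) A SymPy sketch that assembles them into a Gram matrix:

```python
import sympy as sp

x = sp.symbols('x')
f = [sp.sqrt(sp.Rational(3, 2)) * x,
     sp.Rational(1, 2) * sp.sqrt(sp.Rational(5, 2)) * (3 * x**2 - 1),
     sp.Rational(1, 2) * sp.sqrt(sp.Rational(7, 2)) * (5 * x**3 - 3 * x)]

# Gram matrix with entries <f_i, f_j>; orthonormality means it is the identity
gram = sp.Matrix(3, 3, lambda i, j: sp.integrate(f[i] * f[j], (x, -1, 1)))
print(gram)   # Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```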
Given a subspace \(U\subset V,\) its orthogonal subspace consists of all vectors in \(V\) that are orthogonal to all vectors of \(U.\)
Let \(V\) be an \(\mathbb{R}\)-vector space equipped with a symmetric bilinear form \(\langle\cdot{,}\cdot\rangle\) and \(U\subset V\) a subspace. The set \[U^{\perp}=\left\{v \in V \,|\, \langle v,u\rangle=0\;\;\forall u \in U\right\}\] is called the orthogonal subspace to \(U\).
It is common to write \(\langle v,U\rangle=0\) instead of \(\langle v,u\rangle=0\;\forall u \in U.\)
Notice that the orthogonal subspace is indeed a subspace. The bilinearity of \(\langle\cdot{,}\cdot\rangle\) implies that \(\langle 0_V,u\rangle=0\) for all \(u \in U,\) hence \(0_V \in U^{\perp}\) and \(U^{\perp}\) is non-empty. Moreover, if \(v_1,v_2 \in U^{\perp},\) then we have for all \(u \in U\) and all \(s_1,s_2 \in \mathbb{R}\) \[\langle s_1v_1+s_2 v_2,u\rangle=s_1\langle v_1,u\rangle+s_2\langle v_2,u\rangle=0,\] where we use the bilinearity of \(\langle\cdot{,}\cdot\rangle\) and that \(v_1,v_2 \in U^{\perp}.\) By Definition 3.21 it follows that \(U^{\perp}\) is indeed a subspace.
Notice also that a symmetric bilinear form \(\langle\cdot{,}\cdot\rangle\) on \(V\) is non-degenerate if and only if \(V^{\perp}=\{0_V\}.\)
Let \(\mathbb{R}^3\) be equipped with the standard scalar product. If \(U\) is a line through the origin in \(\mathbb{R}^3,\) then \(U^{\perp}\) consists of the plane through the origin that is perpendicular to \(U,\) see Figure 9.1.
Example 9.2 (iv) continued: Equip \(M_{n,n}(\mathbb{R})\) with the trace form \(\langle \mathbf{A},\mathbf{B}\rangle=\operatorname{Tr}(\mathbf{A}\mathbf{B})\) from (9.3) and let \(U=\left\{s\mathbf{1}_{n} \,|\, s\in \mathbb{R}\right\}.\) Then \[U^{\perp}=\left\{ \mathbf{A}\in M_{n,n}(\mathbb{R}) \,|\, \operatorname{Tr}(\mathbf{A}s\mathbf{1}_{n})=0\;\;\forall s\in \mathbb{R}\right\}.\] Since \(\operatorname{Tr}(\mathbf{A}s\mathbf{1}_{n})=s\operatorname{Tr}(\mathbf{A}\mathbf{1}_{n})=s\operatorname{Tr}(\mathbf{A}),\) we conclude that the orthogonal subspace to \(U\) consists of the matrices whose trace is zero: \[U^{\perp}=\left\{\mathbf{A}\in M_{n,n}(\mathbb{R}) \,|\, \operatorname{Tr}(\mathbf{A})=0\right\}.\]
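A quick numerical sanity check of this description of \(U^{\perp}\) (the matrix and the scalar below are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A -= (np.trace(A) / 3) * np.eye(3)      # project A onto the traceless matrices

s = 2.7                                 # an arbitrary element s*1_n of U
print(np.trace(A @ (s * np.eye(3))))    # <A, s*1_n> = s*Tr(A) = 0 up to rounding
```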
Recall that every \(\mathbb{K}\)-vector space \(V\) admits at least one basis. For vector spaces equipped with a symmetric bilinear form, one can ask for more:
Let \(V\) be a finite dimensional \(\mathbb{R}\)-vector space equipped with a symmetric bilinear form \(\langle\cdot{,}\cdot\rangle.\) Then \(V\) admits an orthogonal basis with respect to \(\langle\cdot{,}\cdot\rangle.\)
For the proof of this theorem we will need two preparatory lemmas. Let \(V\) be an \(\mathbb{R}\)-vector space and \(\langle\cdot{,}\cdot\rangle\) a symmetric bilinear form on \(V.\) Suppose there exist vectors \(v_1,v_2 \in V\) such that \(\langle v_1,v_2\rangle \neq 0.\) Then there exists a vector \(v \in V\) with \(\langle v,v\rangle \neq 0.\)
Proof. If \(\langle v_1,v_1\rangle\neq 0\) or \(\langle v_2,v_2\rangle\neq 0\) we are done, hence assume \(\langle v_1,v_1\rangle=\langle v_2,v_2\rangle=0.\) Let \(v=v_1+v_2,\) then we obtain \[\langle v,v\rangle=\langle v_1+v_2,v_1+v_2\rangle=\langle v_1,v_1\rangle +2 \langle v_1,v_2\rangle +\langle v_2,v_2\rangle=2\langle v_1,v_2\rangle.\] By assumption we have \(\langle v_1,v_2\rangle \neq 0\) and hence also \(\langle v,v\rangle \neq 0.\)
Let \(V\) be a finite dimensional \(\mathbb{R}\)-vector space equipped with a symmetric bilinear form \(\langle\cdot{,}\cdot\rangle.\) Suppose \(v \in V\) satisfies \(\langle v,v\rangle\neq 0.\) Then \(V=U\oplus U^{\perp},\) where \(U=\{sv \,|\, s\in \mathbb{R}\}.\)
Recall that two subspaces \(U_1,U_2\) of \(V\) are in direct sum if and only if \(U_1\cap U_2=\{0_V\}.\) Indeed, suppose \(U_1\cap U_2=\{0_V\}\) and consider \(w=v_1+v_2=v_1^{\prime}+v_2^{\prime}\) with \(v_i,v_i^{\prime} \in U_i\) for \(i=1,2.\) We then have \(v_1-v_1^{\prime}=v_2^{\prime}-v_2 \in U_2,\) since \(U_2\) is a subspace. Since \(U_1\) is a subspace as well, we also have \(v_1-v_1^{\prime} \in U_1.\) Since \(v_1-v_1^{\prime}\) lies both in \(U_1\) and \(U_2,\) we must have \(v_1-v_1^{\prime}=0_V=v_2^{\prime}-v_2.\) Conversely, suppose \(U_1,U_2\) are in direct sum and let \(w \in (U_1\cap U_2).\) We can write \(w=w+0_V=0_V+w,\) since \(w \in U_1\) and \(w \in U_2.\) Since \(U_1,U_2\) are in direct sum, we must have \(w=0_V,\) hence \(U_1\cap U_2=\{0_V\}.\)
Observe that if the subspaces \(U_1,\ldots,U_n\) are in direct sum and \(v_i \in U_i\) with \(v_i \neq 0_V\) for \(1\leqslant i\leqslant n,\) then the vectors \(\{v_1,\ldots,v_n\}\) are linearly independent. Indeed, if \(s_1,\ldots,s_n\) are scalars such that \[s_1v_1+s_2v_2+\cdots+s_n v_n=0_V=0_V+0_V+\cdots+0_V,\] then \(s_iv_i=0_V\) for all \(1\leqslant i\leqslant n.\) By assumption \(v_i\neq 0_V\) and hence \(s_i=0\) for all \(1\leqslant i\leqslant n.\)
Proof. We first show that \(U\cap U^{\perp}=\{0_V\}.\) Suppose \(u \in U\cap U^{\perp}.\) Since \(u \in U\) we have \(u=sv\) for some scalar \(s.\) Since \(u \in U^{\perp}\) we must also have \(0=\langle u,v\rangle=s\langle v,v\rangle.\) Since \(\langle v,v\rangle \neq 0,\) this implies \(s=0\) and hence \(u=0_V.\)
We next show that \(U+U^{\perp}=V.\) Let \(w \in V.\) We want to write \(w=sv+\hat{v}\) for some scalar \(s\) and some vector \(\hat{v}\) satisfying \(\langle\hat{v},v\rangle=0.\) Since \(\hat{v}=w-sv,\) this condition becomes \[0=\langle v,w-sv\rangle=\langle v,w\rangle-s\langle v,v\rangle,\] and since \(\langle v,v\rangle \neq 0,\) this gives \(s=\frac{\langle v,w\rangle}{\langle v,v\rangle}.\) Taking \[\hat{v}=w-\frac{\langle v,w\rangle}{\langle v,v\rangle} v\] thus gives the desired decomposition \(w=sv+\hat{v}.\)
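The formulas for \(s\) and \(\hat{v}\) are explicit and easy to try out. A NumPy sketch for the form \(\langle\cdot{,}\cdot\rangle_\mathbf{A}\) of Example 9.7, with an arbitrarily chosen \(w\):

```python
import numpy as np

A = np.array([[5.0, 1.0], [1.0, 5.0]])   # bilinear form from Example 9.7
v = np.array([1.0, 1.0])                 # <v, v>_A = 12, nonzero
w = np.array([3.0, -1.0])                # vector to decompose as s*v + v_hat

s = (v @ A @ w) / (v @ A @ v)            # s = <v, w> / <v, v>
v_hat = w - s * v                        # the component in U^perp

print(v @ A @ v_hat)                     # 0.0: v_hat is orthogonal to v
print(s * v + v_hat)                     # [ 3. -1.]: recovers w
```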
We now turn to the proof of the theorem. Recall first that for every subspace \(U\) of a finite dimensional \(\mathbb{K}\)-vector space \(V\) there exists a subspace \(U^{\prime}\) so that \(V=U\oplus U^{\prime}.\) In particular, we may choose a subspace \(U\) with \(V=V^{\perp}\oplus U.\) Every vector of \(V^{\perp}\) is orthogonal to all of \(V,\) and the restriction of \(\langle\cdot{,}\cdot\rangle\) to \(U\) is non-degenerate: if \(u \in U\) satisfies \(\langle u,u^{\prime}\rangle=0\) for all \(u^{\prime} \in U,\) then also \(\langle u,V^{\perp}\rangle=0,\) hence \(\langle u,V\rangle=0,\) so that \(u \in V^{\perp}\cap U=\{0_V\}.\) Combining any basis of \(V^{\perp}\) with an orthogonal basis of \(U\) therefore yields an orthogonal basis of \(V.\) It is thus sufficient to prove the existence of an orthogonal basis for the case when \(\langle\cdot{,}\cdot\rangle\) is non-degenerate.
Let us therefore assume that \(\langle\cdot{,}\cdot\rangle\) is non-degenerate on \(V.\) We prove the statement by induction on the dimension of the vector space. If \(\dim V=0\) there is nothing to show, which anchors the induction. We argue next that if every \((n-1)\)-dimensional \(\mathbb{R}\)-vector space equipped with a non-degenerate symmetric bilinear form admits an orthogonal basis, then so does every \(n\)-dimensional \(\mathbb{R}\)-vector space equipped with a non-degenerate symmetric bilinear form.
Suppose then that \(n\geqslant 1\) and that the claim holds in dimension \(n-1.\) Since \(\langle\cdot{,}\cdot\rangle\) is non-degenerate and \(V\neq\{0_V\},\) there exist vectors \(v_1,v_2 \in V\) with \(\langle v_1,v_2\rangle\neq 0\) (pick any \(v_1\neq 0_V;\) since \(V^{\perp}=\{0_V\},\) there is some \(v_2\) with \(\langle v_1,v_2\rangle\neq 0\)). By the first lemma above there hence exists a vector \(v \in V\) with \(\langle v,v\rangle\neq 0,\) and the second lemma gives \(V=U\oplus U^{\perp}\) for \(U=\{sv \,|\, s\in \mathbb{R}\}.\) Recall the dimension formula: for subspaces \(U_1,U_2\) of a finite dimensional \(\mathbb{K}\)-vector space we have \[\dim(U_1+U_2)=\dim(U_1)+\dim(U_2)-\dim(U_1\cap U_2).\] Applied to \(U_1=U\) and \(U_2=U^{\perp},\) it yields \(\dim U^{\perp}=n-1.\) Moreover, the restriction of \(\langle\cdot{,}\cdot\rangle\) to \(U^{\perp}\) is again non-degenerate: if \(w \in U^{\perp}\) satisfies \(\langle w,u\rangle=0\) for all \(u \in U^{\perp},\) then, since also \(\langle w,U\rangle=0\) and \(V=U+U^{\perp},\) we obtain \(\langle w,V\rangle=0,\) and non-degeneracy on \(V\) forces \(w=0_V.\) By the induction hypothesis \(U^{\perp}\) admits an orthogonal basis \(\{w_1,\ldots,w_{n-1}\},\) and since \(\langle w_i,v\rangle=0\) for all \(i,\) the set \(\{w_1,\ldots,w_{n-1},v\}\) is an orthogonal basis of \(V.\) This completes the induction and the proof of the theorem.
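The proof just given is constructive: repeatedly pick a vector with nonzero square (using the first lemma if necessary) and project the remaining vectors into its orthogonal subspace. Purely as an illustration, here is a NumPy sketch of this procedure; the tolerance `tol` replaces the exact tests of the proof and is our own ad hoc choice:

```python
import numpy as np

def orthogonal_basis(A, tol=1e-12):
    """Return C whose columns are an orthogonal basis for <x, y> = x^T A y,
    so that C^T A C is diagonal; A is assumed symmetric."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.eye(n)
    form = lambda u, w: u @ A @ w
    for k in range(n):
        # First lemma: look for a remaining basis vector with nonzero square ...
        piv = next((j for j in range(k, n)
                    if abs(form(C[:, j], C[:, j])) > tol), None)
        if piv is None:
            # ... or manufacture one as v_i + v_j with <v_i, v_j> != 0.
            pair = next(((i, j) for i in range(k, n) for j in range(i + 1, n)
                         if abs(form(C[:, i], C[:, j])) > tol), None)
            if pair is None:
                break                        # the form vanishes on the rest
            i, j = pair
            C[:, i] = C[:, i] + C[:, j]      # now <v_i, v_i> = 2<v_i, v_j> != 0
            piv = i
        C[:, [k, piv]] = C[:, [piv, k]]      # move the pivot into position k
        vk, nk = C[:, k].copy(), form(C[:, k], C[:, k])
        # Second lemma: replace each later v_j by its component in U^perp.
        for j in range(k + 1, n):
            C[:, j] = C[:, j] - (form(vk, C[:, j]) / nk) * vk
    return C

A = np.array([[5.0, 1.0], [1.0, 5.0]])
C = orthogonal_basis(A)
print(np.round(C.T @ A @ C, 10))             # a diagonal matrix
```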
We also have the following practical criterion:
Let \(V\) be a finite dimensional \(\mathbb{R}\)-vector space equipped with a symmetric bilinear form \(\langle\cdot{,}\cdot\rangle.\) Furthermore, let \(U\subset V\) be a subspace and \(\{u_1,\ldots,u_k\}\) be a basis of \(U.\) Then the following two statements are equivalent:
a vector \(v \in V\) is an element of \(U^{\perp}\);
for \(1\leqslant i \leqslant k\) we have \(\langle v,u_i\rangle=0.\)
Proof. Exercise.
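For \(\mathbb{R}^n\) with the form \(\langle\cdot{,}\cdot\rangle_\mathbf{A},\) the lemma says that \(U^{\perp}\) is the null space of the \(k\times n\) matrix whose rows are \(u_i^T\mathbf{A}.\) A small SciPy sketch, reusing the data of Example 9.7:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[5.0, 1.0], [1.0, 5.0]])   # Gram matrix of <.,.>_A on R^2
U = np.array([[1.0], [1.0]])             # basis of U as columns: here e1 + e2

# v lies in U^perp iff <u_i, v>_A = 0 for every basis vector u_i of U,
# i.e. iff (U^T A) v = 0, so U^perp is a null space:
print(null_space(U.T @ A))               # spans the line through e2 - e1
```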
Let \(V\) be a finite dimensional \(\mathbb{R}\)-vector space and \(\langle\cdot{,}\cdot\rangle\) a symmetric bilinear form on \(V.\) Suppose \(U\subset V\) is a subspace such that the restriction of \(\langle\cdot{,}\cdot\rangle\) to \(U\) is non-degenerate. Then \(U\) and \(U^{\perp}\) are in direct sum and we have \[V=U\oplus U^{\perp}.\]
The following gives a practical way of checking non-degeneracy. Recall first that the determinant of a triangular matrix is the product of its diagonal entries: if \(\mathbf{A}=(A_{ij})_{1\leqslant i,j\leqslant n} \in M_{n,n}(\mathbb{K})\) is upper triangular, so that \(A_{ij}=0\) for \(i>j,\) then \[\tag{5.6} \det(\mathbf{A})=\prod_{i=1}^n A_{ii}=A_{11}A_{22}\cdots A_{nn}.\] Recall also that a bilinear form \(\langle\cdot{,}\cdot\rangle\) on a finite dimensional \(\mathbb{K}\)-vector space \(V\) with ordered basis \(\mathbf{b}\) is non-degenerate if and only if \(\det \mathbf{M}(\langle\cdot{,}\cdot\rangle,\mathbf{b})\neq 0.\) In particular, if \(\mathbf{b}=(v_1,\ldots,v_n)\) is an orthogonal basis of \(V,\) then \(\mathbf{M}(\langle\cdot{,}\cdot\rangle,\mathbf{b})\) is diagonal, and \(\langle\cdot{,}\cdot\rangle\) is non-degenerate if and only if \(\langle v_i,v_i\rangle\neq 0\) for \(1\leqslant i\leqslant n.\)
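For instance, for the matrix of Example 9.7 and a degenerate example of our own choosing:

```python
import numpy as np

A = np.array([[5.0, 1.0], [1.0, 5.0]])
print(np.linalg.det(A))     # 24.0, nonzero: <.,.>_A is non-degenerate

B = np.array([[1.0, 1.0], [1.0, 1.0]])
print(np.linalg.det(B))     # 0.0: <.,.>_B is degenerate; (1, -1) lies in V^perp
```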
In the case where the restriction of a symmetric bilinear form to a subspace \(U\) is non-degenerate, we have seen that \(U^{\perp}\) is a complement to \(U.\) The subspace \(U^{\perp}\) is called the orthogonal complement of \(U\).
The process of scaling a vector \(v\) so that \(\langle v,v\rangle\) equals some specific value – typically \(1\) – is known as normalising the vector.
By definition, the matrix representation of a symmetric bilinear form \(\langle\cdot{,}\cdot\rangle\) with respect to an ordered orthogonal basis \(\mathbf{b}=(v_1,\ldots,v_n)\) of \(V\) is diagonal. Notice that if we define \[v^{\prime}_i=\left\{\begin{array}{cc} v_i, & \langle v_i,v_i\rangle =0\\ \frac{v_i}{\sqrt{|\langle v_i,v_i\rangle|}}, & \langle v_i,v_i\rangle \neq 0\end{array}\right.\] for \(1\leqslant i\leqslant n,\) then \(\mathbf{b}^{\prime}=(v^{\prime}_1,\ldots,v^{\prime}_n)\) is also an ordered basis of \(V\) and either \(\langle v^{\prime}_i,v^{\prime}_i\rangle=0\) or \[\langle v^{\prime}_i,v^{\prime}_i\rangle=\left\langle \frac{v_i}{\sqrt{|\langle v_i,v_i\rangle|}},\frac{v_i}{\sqrt{|\langle v_i,v_i\rangle|}}\right\rangle=\frac{\langle v_i,v_i\rangle}{|\langle v_i,v_i\rangle|}=\pm 1.\] Therefore, the matrix representation of \(\langle\cdot{,}\cdot\rangle\) with respect to \(\mathbf{b}^{\prime}\) is diagonal as well and the diagonal entries are elements of the set \(\{-1,0,1\}.\)
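Continuing Example 9.7 as an illustration: the basis \(\mathbf{b}=(\vec{e}_1+\vec{e}_2,\vec{e}_2-\vec{e}_1)\) is orthogonal with \(\langle v_1,v_1\rangle_\mathbf{A}=12\) and \(\langle v_2,v_2\rangle_\mathbf{A}=8,\) so normalising produces diagonal entries \(+1.\) A NumPy sketch, which assumes (as holds here) that all diagonal entries are positive:

```python
import numpy as np

A = np.array([[5.0, 1.0], [1.0, 5.0]])
C = np.array([[1.0, -1.0], [1.0, 1.0]])      # columns: e1+e2 and e2-e1
D = C.T @ A @ C                              # diag(12, 8)

N = C @ np.diag(1.0 / np.sqrt(np.diag(D)))   # divide each column by sqrt(<v,v>)
print(N.T @ A @ N)                           # the 2x2 identity matrix
```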
To express this in matrix terms, recall Proposition 9.6: let \(V\) be a finite dimensional \(\mathbb{K}\)-vector space, \(\mathbf{b}=(v_1,\ldots,v_n)\) an ordered basis of \(V\) with associated linear coordinate system \(\boldsymbol{\beta}: V \to \mathbb{K}^n\) and \(\langle\cdot{,}\cdot\rangle\) a bilinear form on \(V.\) Then
for all \(w_1,w_2 \in V\) we have \[\langle w_1,w_2\rangle=(\boldsymbol{\beta}(w_1))^T\mathbf{M}(\langle\cdot{,}\cdot\rangle,\mathbf{b})\boldsymbol{\beta}(w_2).\]
\(\langle\cdot{,}\cdot\rangle\) is symmetric if and only if \(\mathbf{M}(\langle\cdot{,}\cdot\rangle,\mathbf{b})\) is symmetric;
if \(\mathbf{b}^{\prime}\) is another ordered basis of \(V,\) then \[\mathbf{M}(\langle\cdot{,}\cdot\rangle,\mathbf{b}^{\prime})=\mathbf{C}^T\mathbf{M}(\langle\cdot{,}\cdot\rangle,\mathbf{b})\mathbf{C},\] where \(\mathbf{C}=\mathbf{C}(\mathbf{b}^{\prime},\mathbf{b})\) denotes the change of basis matrix, see Definition 3.103.
Notice that by definition \[\mathbf{C}(\mathbf{b},\mathbf{b}^{\prime})=\mathbf{M}(\mathrm{Id}_V,\mathbf{b},\mathbf{b}^{\prime}).\] Since the identity map \(\mathrm{Id}_V : V \to V\) is bijective with inverse \((\mathrm{Id}_V)^{-1}=\mathrm{Id}_V,\) Proposition 3.101 implies that the change of basis matrix \(\mathbf{C}(\mathbf{b},\mathbf{b}^{\prime})\) is invertible with inverse \[\mathbf{C}(\mathbf{b},\mathbf{b}^{\prime})^{-1}=\mathbf{C}(\mathbf{b}^{\prime},\mathbf{b}).\]
Now let \(\langle\cdot{,}\cdot\rangle\) be a symmetric bilinear form on \(\mathbb{R}^n\) with matrix representation \(\mathbf{A}=\mathbf{M}(\langle\cdot{,}\cdot\rangle,\mathbf{e}).\) Choosing an ordered orthogonal basis, normalising as above and reordering the basis vectors so that the positive diagonal entries come first, followed by the negative ones, we obtain a change of basis matrix \(\mathbf{C}\in \mathrm{GL}(n,\mathbb{R})\) and integers \(p,q,s\geqslant 0\) with \(p+q+s=n\) such that \[\tag{9.5} \mathbf{C}^T\mathbf{A}\mathbf{C}=\operatorname{diag}(\underbrace{1,\ldots,1}_{p},\underbrace{-1,\ldots,-1}_{q},\underbrace{0,\ldots,0}_{s}).\] Sylvester’s law of inertia states that the numbers \(p\) and \(q\) in (9.5) (and hence also \(s\)) are uniquely determined by the bilinear form \(\langle\cdot{,}\cdot\rangle.\) That is, they do not depend on the choice of matrix \(\mathbf{C}\in \mathrm{GL}(n,\mathbb{R})\) such that \(\mathbf{C}^T\mathbf{A}\mathbf{C}\) is diagonal with diagonal entries from the set \(\{-1,0,1\}.\) We will not prove this fact, but a proof can be found in most textbooks on Linear Algebra.
The pair \((p,q)\) is known as the signature of the bilinear form \(\langle\cdot{,}\cdot\rangle\).
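For a symmetric matrix \(\mathbf{A},\) the signature can be computed from the signs of the eigenvalues of \(\mathbf{A}:\) the spectral theorem (which we do not prove here) provides an orthogonal matrix \(\mathbf{Q}\) with \(\mathbf{Q}^T\mathbf{A}\mathbf{Q}\) diagonal, and rescaling the columns of \(\mathbf{Q}\) as above replaces each diagonal entry by its sign, so by Sylvester’s law \(p\) and \(q\) count the positive and negative eigenvalues. A NumPy sketch, with a tolerance of our own choosing:

```python
import numpy as np

def signature(A, tol=1e-10):
    """Signature (p, q) of the symmetric bilinear form <x, y> = x^T A y."""
    eigs = np.linalg.eigvalsh(A)           # real eigenvalues of symmetric A
    p = int(np.sum(eigs > tol))            # number of +1 entries in (9.5)
    q = int(np.sum(eigs < -tol))           # number of -1 entries in (9.5)
    return p, q

print(signature(np.array([[5.0, 1.0], [1.0, 5.0]])))    # (2, 0)
print(signature(np.array([[1.0, 0.0], [0.0, -1.0]])))   # (1, 1)
```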