
4. Notions from Linear Algebra and Bra-Ket Notation

Rainer Dick, Department of Physics and Engineering Physics, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
The Schrödinger equation (1.14) is linear in the wave function  $$\psi (\boldsymbol{x},t)$$ . This implies that for any set of solutions  $$\psi _{1}(\boldsymbol{x},t)$$ ,  $$\psi _{2}(\boldsymbol{x},t),\ldots$$ , any linear combination  $$\psi (\boldsymbol{x},t) = C_{1}\psi _{1}(\boldsymbol{x},t) + C_{2}\psi _{2}(\boldsymbol{x},t)+\ldots$$ with complex coefficients  $$C_{n}$$ is also a solution. The set of solutions of equation (1.14) for fixed potential V will therefore have the structure of a complex vector space, and we can think of the wave function  $$\psi (\boldsymbol{x},t)$$ as a particular vector in this vector space. Furthermore, we can map this vector bijectively into different, but equivalent representations where the wave function depends on different variables. An example of this is Fourier transformation (2.5) into a wave function which depends on a wave vector  $$\boldsymbol{k}$$ ,
 $$\displaystyle{\psi (\boldsymbol{k},t) = \frac{1} {\sqrt{2\pi }^{3}}\int \!d^{3}\boldsymbol{x}\,\exp \!\left (-\mathrm{i}\boldsymbol{k} \cdot \boldsymbol{ x}\right )\psi (\boldsymbol{x},t).}$$
We have already noticed that this is sloppy notation from the mathematical point of view. We should denote the Fourier transformed function with  $$\tilde{\psi }(\boldsymbol{k},t)$$ to make it clear that  $$\tilde{\psi }(\boldsymbol{k},t)$$ and  $$\psi (\boldsymbol{x},t)$$ have different dependencies on their arguments (or stated differently, to make it clear that  $$\psi (\boldsymbol{k},t)$$ and  $$\psi (\boldsymbol{x},t)$$ are really different functions). However, there is a reason for the notation in equations (2.4, 2.5). We can switch back and forth between  $$\psi (\boldsymbol{x},t)$$ and  $$\psi (\boldsymbol{k},t)$$ using Fourier transformation. This implies that any property of a particle that can be calculated from the wave function  $$\psi (\boldsymbol{x},t)$$ in  $$\boldsymbol{x}$$ space can also be calculated from the wave function  $$\psi (\boldsymbol{k},t)$$ in  $$\boldsymbol{k}$$ space. Therefore, following Dirac, we no longer think of  $$\psi (\boldsymbol{x},t)$$ as a wave function of a particle, but rather think more abstractly of ψ(t) as a time-dependent quantum state, with particular representations of the quantum state ψ(t) given by the wave functions  $$\psi (\boldsymbol{x},t)$$ or  $$\psi (\boldsymbol{k},t)$$ . There are infinitely many more possibilities to represent the quantum state ψ(t) through functions. For example, we could perform a Fourier transformation only with respect to the y variable and represent ψ(t) through the wave function  $$\psi (x,k_{y},z,t)$$ , or we could perform an invertible transformation to completely different independent variables. In 1939, Paul Dirac introduced a notation in quantum mechanics which emphasizes the vector space and representation aspects of quantum states in a very elegant and suggestive manner. This notation is Dirac’s bra-ket notation, and it is ubiquitous in advanced modern quantum mechanics. It is worthwhile to use bra-ket notation from the start, and it is most easily explained in the framework of linear algebra.

4.1 Notions from linear algebra

The mathematical structure of quantum mechanics resembles linear algebra in many respects, and many notions from linear algebra are very useful in the investigation of quantum systems. Bra-ket notation makes the linear algebra aspects of quantum mechanics particularly visible and easy to use. Therefore we will first introduce a few notions of linear algebra in standard notation, and then rewrite everything in bra-ket notation.
Tensor products
Suppose  $$\mathcal{V}$$ is an N-dimensional real vector space with a Cartesian basis1  $$\hat{\boldsymbol{e}}_{a}$$ , 1 ≤ a ≤ N,  $$\hat{\boldsymbol{e}}_{a}^{\mathrm{T}} \cdot \hat{\boldsymbol{ e}}_{b} =\delta _{ab}$$ . Furthermore, assume that  $$u^{a}$$ ,  $$v^{a}$$ are the Cartesian components of the two vectors  $$\boldsymbol{u}$$ and  $$\boldsymbol{v}$$ ,
 $$\displaystyle{\boldsymbol{u} =\sum _{ a=1}^{N}u^{a}\hat{\boldsymbol{e}}_{ a} \equiv u^{a}\hat{\boldsymbol{e}}_{ a}.}$$
Here we use summation convention: Whenever an index appears twice in a multiplicative term, it is automatically summed over its full range of values. We will continue to use this convention throughout the remainder of the book.
The tensor product  $$\underline{\boldsymbol{M}} =\boldsymbol{ u} \otimes \boldsymbol{ v}^{\mathrm{T}}$$ of the two vectors yields an N × N matrix with components  $$M^{ab} = u^{a}v^{b}$$ in the Cartesian basis:
 $$\displaystyle{ \underline{\boldsymbol{M}} =\boldsymbol{ u} \otimes \boldsymbol{ v}^{\mathrm{T}} = u^{a}v^{b}\hat{\boldsymbol{e}}_{ a} \otimes \hat{\boldsymbol{ e}}_{b}^{\mathrm{T}}. }$$
(4.1)
Tensor products appear naturally in basic linear algebra e.g. in the following simple problem: Suppose  $$\boldsymbol{u} = u^{a}\hat{\boldsymbol{e}}_{a}$$ and  $$\boldsymbol{w} = w^{a}\hat{\boldsymbol{e}}_{a}$$ are two vectors in an N-dimensional vector space, and we would like to calculate the part  $$\boldsymbol{w}_{\|}$$ of the vector  $$\boldsymbol{w}$$ that is parallel to  $$\boldsymbol{u}$$ . The unit vector in the direction of  $$\boldsymbol{u}$$ is  $$\hat{\boldsymbol{u}} =\boldsymbol{ u}/\vert \boldsymbol{u}\vert$$ , and we have
 $$\displaystyle{ \boldsymbol{w}_{\|} =\hat{\boldsymbol{ u}}\vert \boldsymbol{w}\vert \cos (\boldsymbol{u},\boldsymbol{w}), }$$
(4.2)
where  $$\cos (\boldsymbol{u},\boldsymbol{w}) =\hat{\boldsymbol{ u}} \cdot \hat{\boldsymbol{ w}}$$ is the cosine of the angle between  $$\boldsymbol{u}$$ and  $$\boldsymbol{w}$$ . Substituting the expression for  $$\cos (\boldsymbol{u},\boldsymbol{w})$$ into (4.2) yields
 $$\displaystyle\begin{array}{rcl} \boldsymbol{w}_{\|}& =& \hat{\boldsymbol{u}}(\hat{\boldsymbol{u}} \cdot \boldsymbol{ w}) =\hat{ u}^{a}\hat{u}^{b}w^{c}\hat{\boldsymbol{e}}_{ a}(\hat{\boldsymbol{e}}_{b}^{\mathrm{T}} \cdot \hat{\boldsymbol{ e}}_{ c}) =\hat{ u}^{a}\hat{u}^{b}w^{c}(\hat{\boldsymbol{e}}_{ a} \otimes \hat{\boldsymbol{ e}}_{b}^{\mathrm{T}}) \cdot \hat{\boldsymbol{ e}}_{ c} \\ & =& (\hat{\boldsymbol{u}} \otimes \hat{\boldsymbol{ u}}^{T}) \cdot \boldsymbol{ w}, {}\end{array}$$
(4.3)
i.e. the tensor product  $$\underline{P}_{\|} =\hat{\boldsymbol{ u}} \otimes \hat{\boldsymbol{ u}}^{T}$$ is the projector onto the direction of the vector  $$\boldsymbol{u}$$ .
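For readers who like to check such identities numerically, the following minimal Python/NumPy sketch (with arbitrarily chosen example vectors; it is an illustration, not part of the formal development) verifies that  $$\underline{P}_{\|}$$ is idempotent and reproduces  $$\boldsymbol{w}_{\|}$$ .

import numpy as np

# Arbitrarily chosen example vectors in N = 3 dimensions.
u = np.array([1.0, 2.0, 2.0])
w = np.array([3.0, 0.0, 4.0])

u_hat = u / np.linalg.norm(u)        # unit vector in the direction of u
P = np.outer(u_hat, u_hat)           # projector P = u_hat (x) u_hat^T

w_parallel = P @ w                   # eq. (4.3): part of w parallel to u

assert np.allclose(P @ P, P)                         # projectors are idempotent
assert np.allclose(w_parallel, u_hat * (u_hat @ w))  # matches u_hat (u_hat . w)
print(w_parallel)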
The matrix  $$\underline{\boldsymbol{M}}$$ is called a 2nd rank tensor due to its transformation properties under linear transformations of the vectors appearing in the product.
Suppose we perform a transformation of the Cartesian basis vectors  $$\hat{\boldsymbol{e}}_{a}$$ to a new set  $$\hat{\boldsymbol{e}}'_{i}$$ of basis vectors,
 $$\displaystyle{ \hat{\boldsymbol{e}}_{a} \rightarrow \hat{\boldsymbol{ e}}'_{i} =\hat{\boldsymbol{ e}}_{a}R^{a}_{ i}, }$$
(4.4)
subject to the constraint that the new basis vectors also provide a Cartesian basis,
 $$\displaystyle{ \hat{\boldsymbol{e}}'_{i} \cdot \hat{\boldsymbol{ e}}'_{j} =\delta _{ab}R^{a}_{ i}R^{b}_{ j} = R^{a}_{ i}R_{aj} =\delta _{ij}. }$$
(4.5)
Linear transformations which map Cartesian bases into Cartesian bases are denoted as rotations.
We defined  $$R_{aj} \equiv \delta _{ab}R^{b}_{j}$$ in equation (4.5), i.e. numerically  $$R_{aj} = R^{a}_{j}$$ . Equation (4.5) is in matrix notation
 $$\displaystyle{ \underline{R}^{T} \cdot \underline{ R} =\underline{ 1}, }$$
(4.6)
i.e.  $$\underline{R}^{T} =\underline{ R}^{-1}$$ .
However, a change of basis in our vector space does nothing to the vector  $$\boldsymbol{v}$$ , except that the vector will have different components with respect to the new basis vectors,
 $$\displaystyle{ \boldsymbol{v} =\hat{\boldsymbol{ e}}_{a}v^{a} =\hat{\boldsymbol{ e}}'_{ i}v'^{i} =\hat{\boldsymbol{ e}}_{ a}R^{a}_{ i}v'^{i}. }$$
(4.7)
Equations (4.7) and (4.5) and the uniqueness of the decomposition of a vector with respect to a set of basis vectors imply
 $$\displaystyle{ v^{a} = R^{a}_{ i}v'^{i},\quad v'^{i} = (R^{-1})^{i}_{ a}v^{a} = (R^{T})^{i}_{ a}v^{a} = v^{a}R_{ a}^{i}. }$$
(4.8)
This is the passive interpretation of transformations: The transformation changes the reference frame, but not the physical objects (here: vectors). Therefore the expansion coefficients of the physical objects change inversely (or contravariantly) to the transformation of the reference frame. We will often use the passive interpretation for symmetry transformations of quantum systems.
The transformation laws (4.4) and (4.8) define first rank tensors, because the transformation laws are linear (or first order) in the transformation matrices R or R −1.
The tensor product  $$\underline{\boldsymbol{M}} =\boldsymbol{ u} \otimes \boldsymbol{ v}^{\mathrm{T}} = u^{a}v^{b}\hat{\boldsymbol{e}}_{a} \otimes \hat{\boldsymbol{ e}}_{b}^{\mathrm{T}}$$ then defines a second rank tensor, because the components and the basis transform quadratically (or in second order) with the transformation matrices R or R −1,
 $$\displaystyle\begin{array}{rcl} M'^{ij} = u'^{i}v'^{j} = (R^{-1})^{i}_{ a}(R^{-1})^{j}_{ b}u^{a}v^{b} = (R^{-1})^{i}_{ a}(R^{-1})^{j}_{ b}M^{ab},& &{}\end{array}$$
(4.9)
 $$\displaystyle\begin{array}{rcl} \hat{\boldsymbol{e}}'_{i} \otimes \hat{\boldsymbol{ e}}'_{j}{}^{\mathrm{T}} =\hat{\boldsymbol{ e}}_{ a} \otimes \hat{\boldsymbol{ e}}_{b}^{\mathrm{T}}R^{a}_{ i}R^{b}_{ j}.& &{}\end{array}$$
(4.10)
The concept immediately generalizes to n-th order tensors.
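A short numerical sketch (again with an arbitrarily chosen rotation; a hedged illustration rather than a result from the text) confirms that the components of a second rank tensor transform with two factors of  $$R^{-1} = R^{\mathrm{T}}$$ , as stated in (4.9).

import numpy as np

theta = 0.3                                        # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # satisfies R^T R = 1, eq. (4.6)

u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])
M = np.outer(u, v)                                 # M^{ab} = u^a v^b

u_new = R.T @ u                                    # contravariant components, eq. (4.8)
v_new = R.T @ v
M_new = R.T @ M @ R                                # two factors of R^{-1}, eq. (4.9)

assert np.allclose(M_new, np.outer(u_new, v_new))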
Writing the tensor product explicitly as  $$\boldsymbol{u} \otimes \boldsymbol{ v}^{\mathrm{T}}$$ reminds us that the a-th row of  $$\underline{\boldsymbol{M}}$$ is just the row vector  $$u^{a}\boldsymbol{v}^{\mathrm{T}}$$ , while the b-th column is just the column vector  $$\boldsymbol{u}v^{b}$$ . However, usually one simply writes  $$\boldsymbol{u} \otimes \boldsymbol{ v}$$ for the tensor product, just as one writes  $$\boldsymbol{u} \cdot \boldsymbol{ v}$$ instead of  $$\boldsymbol{u}^{\mathrm{T}} \cdot \boldsymbol{ v}$$ for the scalar product.
Dual bases
We will now complicate things a little further by generalizing to more general sets of basis vectors which may not be orthonormal. Strictly speaking this is overkill for the purposes of quantum mechanics, because the bases which we will use in the infinite-dimensional vector spaces of quantum mechanics are still mutually orthogonal, just like Euclidean basis vectors in finite-dimensional vector spaces. However, sometimes it is useful to learn things in a more general setting to acquire a proper understanding, and besides, non-orthonormal basis vectors are useful in solid state physics (as explained in an example below) and unavoidable in curved spaces.
Let  $$\boldsymbol{a}_{i}$$ , 1 ≤ i ≤ N, be another basis of the vector space  $$\mathcal{V}$$ . Generically this basis will not be orthonormal:  $$\boldsymbol{a}_{i} \cdot \boldsymbol{ a}_{j}\neq \delta _{ij}$$ . The corresponding dual basis with basis vectors  $$\boldsymbol{a}^{i}$$ is defined through the requirements
 $$\displaystyle{ \boldsymbol{a}^{i} \cdot \boldsymbol{ a}_{ j} =\delta ^{i}_{ j}. }$$
(4.11)
Apparently a basis is self-dual ( $$\boldsymbol{a}^{i} =\boldsymbol{ a}_{i}$$ ) if and only if it is orthonormal (i.e. Cartesian).
For the explicit construction of the dual basis, we observe that the scalar products of the N vectors  $$\boldsymbol{a}_{i}$$ define a symmetric N × N matrix
 $$\displaystyle{g_{ij} =\boldsymbol{ a}_{i} \cdot \boldsymbol{ a}_{j}.}$$
This matrix is not degenerate, because otherwise it would have at least one vanishing eigenvalue, i.e. there would exist N numbers  $$X^{i}$$ (not all vanishing) such that  $$g_{ij}X^{j} = 0$$ . This would imply the existence of a non-vanishing vector  $$\boldsymbol{X} = X^{i}\boldsymbol{a}_{i}$$ with vanishing length, in contradiction to the positive definiteness of the scalar product in a real Euclidean vector space,
 $$\displaystyle{\boldsymbol{X}^{2} = X^{i}X^{j}\boldsymbol{a}_{ i} \cdot \boldsymbol{ a}_{j} = X^{i}g_{ ij}X^{j} = 0.}$$
The matrix  $$g_{ij}$$ is therefore invertible, and we denote the inverse matrix by  $$g^{ij}$$ ,
 $$\displaystyle{g^{ij}g_{ jk} =\delta ^{i}_{ k}.}$$
The inverse matrix can be used to construct the dual basis vectors as
 $$\displaystyle{ \boldsymbol{a}^{i} = g^{ij}\boldsymbol{a}_{ j}. }$$
(4.12)
The condition for dual basis vectors is readily verified,
 $$\displaystyle{\boldsymbol{a}^{i} \cdot \boldsymbol{ a}_{ k} = g^{ij}\boldsymbol{a}_{ j} \cdot \boldsymbol{ a}_{k} = g^{ij}g_{ jk} =\delta ^{i}_{ k}.}$$
As an example of the construction of a dual basis, consider Figure 4.1. The vectors  $$\boldsymbol{a}_{1}$$ and  $$\boldsymbol{a}_{2}$$ provide a basis. The angle between  $$\boldsymbol{a}_{1}$$ and  $$\boldsymbol{a}_{2}$$ is π∕4, and their lengths are  $$\vert \boldsymbol{a}_{1}\vert = 2$$ and  $$\vert \boldsymbol{a}_{2}\vert = \sqrt{2}$$ .
Fig. 4.1
The blue vectors are the basis vectors  $$\boldsymbol{a}_{i}$$ . The red vectors are the dual basis vectors  $$\boldsymbol{a}^{i}$$
The matrix  $$g_{ij}$$ therefore has the following components in this basis,
 $$\displaystyle{\underline{g} = \left (\begin{array}{rr} g_{11} & \,g_{12} \\ g_{21} & \,g_{22}\\ \end{array} \right ) = \left (\begin{array}{rr} \boldsymbol{a}_{1} \cdot \boldsymbol{ a}_{1} & \,\boldsymbol{a}_{1} \cdot \boldsymbol{ a}_{2} \\ \boldsymbol{a}_{2} \cdot \boldsymbol{ a}_{1} & \,\boldsymbol{a}_{2} \cdot \boldsymbol{ a}_{2}\\ \end{array} \right ) = \left (\begin{array}{rr} 4&\,2\\ 2 &\,2\\ \end{array} \right ).}$$
The inverse matrix is then
 $$\displaystyle{\underline{g}^{-1} = \left (\begin{array}{rr} g^{11} & \,g^{12} \\ g^{21} & \,g^{22}\\ \end{array} \right ) = \frac{1} {2}\left (\begin{array}{rr} 1&\,\, - 1\\ \, - 1 & \,2\\ \end{array} \right ).}$$
This yields with (4.12) the dual basis vectors
 $$\displaystyle{\boldsymbol{a}^{1} = \frac{1} {2}\boldsymbol{a}_{1} -\frac{1} {2}\boldsymbol{a}_{2},\quad \boldsymbol{a}^{2} = -\,\frac{1} {2}\boldsymbol{a}_{1} +\boldsymbol{ a}_{2}.}$$
These equations determine the vectors  $$\boldsymbol{a}^{i}$$ shown in Figure 4.1.
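The example of Figure 4.1 is easily reproduced numerically. The following Python/NumPy sketch (our own illustration; the coordinates chosen for  $$\boldsymbol{a}_{1}$$ and  $$\boldsymbol{a}_{2}$$ are one possible realization of the stated lengths and angle) constructs the dual basis from the inverse of  $$g_{ij}$$ and verifies the duality condition (4.11).

import numpy as np

# A realization of Figure 4.1: |a1| = 2, |a2| = sqrt(2), angle pi/4 between them.
a1 = np.array([2.0, 0.0])
a2 = np.array([1.0, 1.0])
A = np.array([a1, a2])          # rows are the basis vectors a_i

g = A @ A.T                     # Gram matrix g_ij = a_i . a_j = [[4, 2], [2, 2]]
g_inv = np.linalg.inv(g)        # g^{ij}
A_dual = g_inv @ A              # dual basis a^i = g^{ij} a_j, eq. (4.12)

assert np.allclose(A_dual @ A.T, np.eye(2))   # duality condition a^i . a_j = delta^i_j
print(A_dual)                   # rows: a^1 = a1/2 - a2/2, a^2 = a2 - a1/2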
Decomposition of the identity
Equation (4.11) implies that the decomposition of a vector  $$\boldsymbol{v} \in \mathcal{V}$$ with respect to the basis  $$\boldsymbol{a}_{i}$$ can be written as (note summation convention)
 $$\displaystyle{ \boldsymbol{v} =\boldsymbol{ a}_{i}(\boldsymbol{a}^{i} \cdot \boldsymbol{ v}), }$$
(4.13)
i.e. the projection of  $$\boldsymbol{v}$$ onto the i-th basis vector  $$\boldsymbol{a}_{i}$$ (the component  $$v^{i}$$ in standard notation) is given through scalar multiplication with the dual basis vector  $$\boldsymbol{a}^{i}$$ :
 $$\displaystyle{v^{i} =\boldsymbol{ a}^{i} \cdot \boldsymbol{ v}.}$$
The right hand side of equation (4.13) contains three vectors in each summand, and brackets have been employed to emphasize that the scalar product is between the two rightmost vectors in each term. Another way to make that clear is to write the combination of the two leftmost vectors in each term as a tensor product:
 $$\displaystyle{\boldsymbol{v} =\boldsymbol{ a}_{i} \otimes \boldsymbol{ a}^{i} \cdot \boldsymbol{ v}.}$$
If we first evaluate all the tensor products and sum over i, we have for every vector  $$\boldsymbol{v} \in \mathcal{V}$$
 $$\displaystyle{\boldsymbol{v} = (\boldsymbol{a}_{i} \otimes \boldsymbol{ a}^{i}) \cdot \boldsymbol{ v},}$$
which makes it clear that the sum of tensor products in this equation adds up to the identity matrix,
 $$\displaystyle{ \boldsymbol{a}_{i} \otimes \boldsymbol{ a}^{i} =\underline{ 1}. }$$
(4.14)
This is the statement that every vector can be uniquely decomposed in terms of the basis  $$\boldsymbol{a}_{i}$$ , and therefore this is a basic example of a completeness relation.
Note that we can just as well expand  $$\boldsymbol{v}$$ with respect to the dual basis:
 $$\displaystyle{\boldsymbol{v} = v_{i}\boldsymbol{a}^{i} =\boldsymbol{ a}^{i}(\boldsymbol{a}_{ i} \cdot \boldsymbol{ v}) = (\boldsymbol{a}^{i} \otimes \boldsymbol{ a}_{ i}) \cdot \boldsymbol{ v},}$$
and therefore we also have the dual completeness relation
 $$\displaystyle{ \boldsymbol{a}^{i} \otimes \boldsymbol{ a}_{ i} =\underline{ 1}. }$$
(4.15)
We could also have inferred this from transposition of equation (4.14).
Linear transformations of vectors can be written in terms of matrices,
 $$\displaystyle{\boldsymbol{v}' =\underline{\boldsymbol{ A}} \cdot \boldsymbol{ v}.}$$
If we insert the decompositions with respect to the basis  $$\boldsymbol{a}_{i}$$ ,
 $$\displaystyle{\boldsymbol{v}' =\boldsymbol{ a}_{i} \otimes \boldsymbol{ a}^{i} \cdot \boldsymbol{ v}' =\boldsymbol{ a}_{ i} \otimes \boldsymbol{ a}^{i} \cdot \underline{\boldsymbol{ A}} \cdot \boldsymbol{ a}_{ j} \otimes \boldsymbol{ a}^{j} \cdot \boldsymbol{ v},}$$
we find the equation in components  $$v'^{i} = A^{i}_{j}v^{j}$$ , with the matrix elements of the operator A,
 $$\displaystyle{ A^{i}_{ j} =\boldsymbol{ a}^{i} \cdot \underline{\boldsymbol{ A}} \cdot \boldsymbol{ a}_{ j}. }$$
(4.16)
Using (4.14), we can also infer that
 $$\displaystyle{ \underline{\boldsymbol{A}} =\boldsymbol{ a}_{i} \otimes \boldsymbol{ a}^{i} \cdot \underline{\boldsymbol{ A}} \cdot \boldsymbol{ a}_{ j} \otimes \boldsymbol{ a}^{j} = A^{i}_{ j}\boldsymbol{a}_{i} \otimes \boldsymbol{ a}^{j}. }$$
(4.17)
An application of dual bases in solid state physics: The Laue conditions for elastic scattering off a crystal
Non-orthonormal bases and the corresponding dual bases play an important role in solid state physics. Assume e.g. that  $$\boldsymbol{a}_{i}$$ , 1 ≤ i ≤ 3, are the three fundamental translation vectors of a three-dimensional lattice L. They generate the lattice according to
 $$\displaystyle{\boldsymbol{\ell}=\boldsymbol{ a}_{i}m^{i},\quad m^{i} \in \mathbb{Z}.}$$
In three dimensions one can easily construct the dual basis vectors using cross products:
 $$\displaystyle{ \boldsymbol{a}^{i} =\epsilon ^{ijk} \frac{\boldsymbol{a}_{j} \times \boldsymbol{ a}_{k}} {2\boldsymbol{a}_{1} \cdot (\boldsymbol{a}_{2} \times \boldsymbol{ a}_{3})} = \frac{1} {2V }\epsilon ^{ijk}\boldsymbol{a}_{ j} \times \boldsymbol{ a}_{k}, }$$
(4.18)
where  $$V =\boldsymbol{ a}_{1} \cdot (\boldsymbol{a}_{2} \times \boldsymbol{ a}_{3})$$ is the volume of the lattice cell spanned by the basis vectors  $$\boldsymbol{a}_{i}$$ .
The vectors  $$\boldsymbol{a}^{i}$$ , 1 ≤ i ≤ 3, generate the dual lattice or reciprocal lattice  $$\tilde{L}$$ according to
 $$\displaystyle{\tilde{\boldsymbol{\ell}}= n_{i}\boldsymbol{a}^{i},\quad n_{ i} \in \mathbb{Z},}$$
and the volume of a cell in the dual lattice is
 $$\displaystyle{ \tilde{V } =\boldsymbol{ a}^{1} \cdot (\boldsymbol{a}^{2} \times \boldsymbol{ a}^{3}) = \frac{1} {V }. }$$
(4.19)
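In concrete cases the reciprocal basis is quickly computed. A minimal Python/NumPy sketch (with hypothetical primitive vectors) implements equations (4.18) and (4.19):

import numpy as np

# Hypothetical primitive vectors of a sheared three-dimensional lattice.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.5, 1.0, 0.0])
a3 = np.array([0.0, 0.3, 1.2])

V = a1 @ np.cross(a2, a3)           # cell volume V = a1 . (a2 x a3)

b1 = np.cross(a2, a3) / V           # dual basis from eq. (4.18): a^1 = (a2 x a3)/V,
b2 = np.cross(a3, a1) / V           # and cyclic permutations
b3 = np.cross(a1, a2) / V

B = np.array([b1, b2, b3])
assert np.allclose(B @ np.array([a1, a2, a3]).T, np.eye(3))  # a^i . a_j = delta^i_j
assert np.isclose(b1 @ np.cross(b2, b3), 1.0 / V)            # dual cell volume (4.19)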
Max von Laue derived in 1912 the conditions for constructive interference in the coherent elastic scattering off a regular array of scattering centers. If the directions of the incident and scattered waves of wavelength λ are  $$\hat{\boldsymbol{e}}_{\boldsymbol{k}}$$ and  $$\hat{\boldsymbol{e}}_{\boldsymbol{k}}'$$ , as shown in Figure 4.2, the condition for constructive interference from all scattering centers along a line generated by  $$\boldsymbol{a}_{i}$$ is
 $$\displaystyle{ \vert \boldsymbol{a}_{i}\vert \left (\cos \alpha '-\cos \alpha \right ) = \left (\hat{\boldsymbol{e}}_{\boldsymbol{k}}' -\hat{\boldsymbol{ e}}_{\boldsymbol{k}}\right ) \cdot \boldsymbol{ a}_{i} = n_{i}\lambda, }$$
(4.20)
with integer numbers  $$n_{i}$$ .
Fig. 4.2
The Laue equation (4.20) is the condition for constructive interference between scattering centers along the line generated by the primitive basis vector  $$\boldsymbol{a}_{i}$$
In terms of the wavevector shift
 $$\displaystyle{\Delta \boldsymbol{k} =\boldsymbol{ k}' -\boldsymbol{ k} = \frac{2\pi } {\lambda } \left (\hat{\boldsymbol{e}}_{\boldsymbol{k}}' -\hat{\boldsymbol{ e}}_{\boldsymbol{k}}\right )}$$
equation (4.20) can be written more neatly as
 $$\displaystyle{ \Delta \boldsymbol{k} \cdot \boldsymbol{ a}_{i} = 2\pi n_{i}. }$$
(4.21)
If we want to have constructive interference from all scattering centers in the crystal, this condition must hold for all three values of i. In the case of surface scattering, equation (4.21) need only hold for the two vectors  $$\boldsymbol{a}_{1}$$ and  $$\boldsymbol{a}_{2}$$ which generate the lattice structure of the scattering centers on the surface.
In 1913 W.L. Bragg observed that for scattering from a bulk crystal the equations (4.21) are equivalent to constructive interference from specular reflection off sets of equidistant parallel planes in the crystal, and that the Laue conditions can be reduced to the Bragg equation in this case. However, for scattering from one- or two-dimensional crystals2 and for the Ewald construction one still has to use the Laue conditions.
If we study scattering off a three-dimensional crystal, we know that the three dual basis vectors  $$\boldsymbol{a}^{i}$$ span the whole three-dimensional space. Like any three-dimensional vector, the wavevector shift can then be expanded in terms of the dual basis vectors according to
 $$\displaystyle{\Delta \boldsymbol{k} =\boldsymbol{ a}^{i}(\boldsymbol{a}_{ i} \cdot \Delta \boldsymbol{k}),}$$
and substitution of equation (4.21) yields
 $$\displaystyle{\Delta \boldsymbol{k} = 2\pi n_{i}\boldsymbol{a}^{i},}$$
i.e. the condition for constructive interference from coherent elastic scattering off a three-dimensional crystal is equivalent to the statement that  $$\Delta \boldsymbol{k}/(2\pi )$$ is a vector in the dual lattice  $$\tilde{L}$$ . Furthermore, energy conservation in the elastic scattering implies  $$\vert \boldsymbol{p}'\vert = \vert \boldsymbol{p}\vert$$ ,
 $$\displaystyle{ \Delta \boldsymbol{k}^{2} + 2\boldsymbol{k} \cdot \Delta \boldsymbol{k} = 0. }$$
(4.22)
Equations (4.21) and (4.22) together lead to the Ewald construction for the momenta of elastically scattered beams (see Figure 4.3): Draw the dual lattice and multiply all distances by a factor 2π. Then draw the vector  $$-\boldsymbol{k}$$ from one (arbitrary) point of this rescaled dual lattice. Draw a sphere of radius  $$\vert \boldsymbol{k}\vert$$ around the endpoint of  $$-\boldsymbol{k}$$ . Any point in the rescaled dual lattice which lies on this sphere corresponds to the  $$\boldsymbol{k}'$$ vector of an elastically scattered beam;  $$\boldsymbol{k}'$$ points from the endpoint of  $$-\boldsymbol{k}$$ (the center of the sphere) to the rescaled dual lattice point on the sphere.
Fig. 4.3
The Ewald construction of the wave vectors of elastically scattered beams. The points correspond to the reciprocal lattice stretched by the factor 2π
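The Ewald construction translates directly into a small search over the rescaled dual lattice. The following Python sketch (for a hypothetical simple cubic lattice and an arbitrarily chosen incident wave vector) enumerates rescaled dual lattice vectors  $$\Delta \boldsymbol{k} = 2\pi n_{i}\boldsymbol{a}^{i}$$ and keeps those satisfying the elastic condition (4.22):

import numpy as np
from itertools import product

# Simple cubic lattice with spacing a; its dual lattice is cubic with spacing 1/a,
# so the rescaled dual lattice has spacing 2*pi/a.
a = 1.0
k = np.array([3.5, 0.0, 0.0]) * 2 * np.pi / a    # hypothetical incident wave vector

scattered = []
for n in product(range(-8, 9), repeat=3):        # candidate integers n_i
    dk = 2 * np.pi * np.array(n) / a             # Delta k in the rescaled dual lattice
    # Elastic condition (4.22): Delta k^2 + 2 k . Delta k = 0, i.e. |k'| = |k|.
    if np.any(n) and np.isclose(dk @ dk + 2 * (k @ dk), 0.0):
        scattered.append(k + dk)                 # k' of an elastically scattered beam

print(len(scattered), "elastically scattered beams found")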
We have already noticed that for scattering off a planar array of scattering centers, equation (4.21) need only hold for the two vectors  $$\boldsymbol{a}_{1}$$ and  $$\boldsymbol{a}_{2}$$ which generate the lattice structure of the scattering centers on the surface. And if we have only a linear array of scattering centers, equation (4.21) need only hold for the vector  $$\boldsymbol{a}_{1}$$ which generates the linear array. In those two cases the wavevector shift can be decomposed into components orthogonal and parallel to the scattering surface or line, and the Laue conditions then imply that the parallel component is a vector in the rescaled dual lattice,
 $$\displaystyle{\Delta \boldsymbol{k} = \Delta \boldsymbol{k}_{\perp } + \Delta \boldsymbol{k}_{\|} = \Delta \boldsymbol{k}_{\perp } +\boldsymbol{ a}^{i}(\boldsymbol{a}_{ i} \cdot \Delta \boldsymbol{k}) = \Delta \boldsymbol{k}_{\perp } + 2\pi n_{i}\boldsymbol{a}^{i}.}$$
The rescaled dual lattice is also important in the umklapp processes in phonon-phonon or electron-phonon scattering in crystals. Lattices can only support oscillations with wavelengths larger than certain minimal wavelengths, which are determined by the crystal structure. As a result momentum conservation in phonon-phonon or electron-phonon scattering involves the rescaled dual lattice,
 $$\displaystyle{\sum \boldsymbol{k}_{in} -\sum \boldsymbol{ k}_{out} \in 2\pi \times \tilde{ L},}$$
see textbooks on solid state physics.
Bra-ket notation in linear algebra
The translation of the previous notions in linear algebra into bra-ket notation starts with the notion of a ket vector for a vector,  $$\boldsymbol{v} = \vert v\rangle$$ , and a bra vector for a transposed vector3,  $$\boldsymbol{v}^{\mathrm{T}} =\langle v\vert$$ . The tensor product is
 $$\displaystyle{\boldsymbol{u} \otimes \boldsymbol{ v}^{\mathrm{T}} = \vert u\rangle \langle v\vert,}$$
and the scalar product is
 $$\displaystyle{\boldsymbol{u}^{\mathrm{T}} \cdot \boldsymbol{ v} =\langle u\vert v\rangle.}$$
The appearance of the brackets on the right hand side motivated the designation “bra vector” for a transposed vector and “ket vector” for a vector.
The decomposition of a vector with respect to the basis  $$\vert a_{i}\rangle$$ , using the dual basis  $$\vert a^{i}\rangle$$ , is
 $$\displaystyle{\vert v\rangle = \vert a_{i}\rangle \langle a^{i}\vert v\rangle,}$$
and corresponds to the decomposition of unity
 $$\displaystyle{\vert a_{i}\rangle \langle a^{i}\vert =\underline{ 1}.}$$
A linear operator maps vectors | v〉 into vectors | v′〉, | v′〉 = A | v〉. This reads in components
 $$\displaystyle{\langle a^{i}\vert v'\rangle =\langle a^{i}\vert A\vert v\rangle =\langle a^{i}\vert A\vert a_{ j}\rangle \langle a^{j}\vert v\rangle,}$$
where
 $$\displaystyle{A^{i}_{ j} \equiv \langle a^{i}\vert A\vert a_{ j}\rangle }$$
are the matrix elements of the linear operator A. There is no real advantage in using bra-ket notation in the linear algebra of finite-dimensional vector spaces, but it turns out to be very useful in quantum mechanics.

4.2 Bra-ket notation in quantum mechanics

We can represent a state as a probability amplitude in  $$\boldsymbol{x}$$ -space or in  $$\boldsymbol{k}$$ -space, and we can switch between both representations through Fourier transformation. The state itself is apparently independent of which representation we choose, just like a vector is independent of the particular basis in which we expand the vector. In Chapter 7 we will derive a wave function  $$\psi _{1s}(\boldsymbol{x},t)$$ for the relative motion of the proton and the electron in the lowest energy state of a hydrogen atom. However, it does not matter whether we use the wave function  $$\psi _{1s}(\boldsymbol{x},t)$$ in  $$\boldsymbol{x}$$ -space or the Fourier transformed wave function  $$\psi _{1s}(\boldsymbol{k},t)$$ in  $$\boldsymbol{k}$$ -space to calculate observables for the ground state of the hydrogen atom. All information about the state can be retrieved from each of the two wave functions. We can also contemplate more exotic possibilities like writing the  $$\psi _{1s}$$ state as a linear combination of the oscillator eigenstates that we will encounter in Chapter 6. There are infinitely many possibilities to write down wave functions for one and the same quantum state, and all possibilities are equivalent. Therefore wave functions are only particular representations of a state, just like the components  $$\langle a^{i}\vert v\rangle$$ of a vector  $$\vert v\rangle$$ in an N-dimensional vector space provide only a representation of the vector with respect to a particular basis  $$\vert a_{i}\rangle$$ , 1 ≤ i ≤ N.
This motivates the following adaptation of bra-ket notation: The (generically time-dependent) state of a quantum system is | ψ(t)〉, and the  $$\boldsymbol{x}$$ -representation is just the specification of | ψ(t)〉 in terms of its projection on a particular basis,
 $$\displaystyle{\psi (\boldsymbol{x},t) =\langle \boldsymbol{ x}\vert \psi (t)\rangle,}$$
where the “basis” is given by the non-enumerable set of “x-eigenkets”:
 $$\displaystyle{ \mathbf{x}\vert \boldsymbol{x}\rangle =\boldsymbol{ x}\vert \boldsymbol{x}\rangle. }$$
(4.23)
Here x is the operator, or rather a vector of operators x = (x, y, z), and  $$\boldsymbol{x} = (x,y,z)$$ is the corresponding vector of eigenvalues.
In advanced quantum mechanics, the operators for location or momentum of a particle and their eigenvalues are sometimes not explicitly distinguished in notation, but for the experienced reader it is always clear from the context whether e.g.  $$\boldsymbol{x}$$ refers to the operator or the eigenvalue. We will denote the operators x and p for location and momentum and their Cartesian components with upright notation, x = (x, y, z), p = (p x , p y , p z ), while their eigenvalue vectors and Cartesian eigenvalues are written in cursive notation,  $$\boldsymbol{x} = (x,y,z)$$ and  $$\boldsymbol{p} = \hslash \boldsymbol{k} = (p_{x},p_{y},p_{z})$$ . However, this becomes very clumsy for non-Cartesian components of the operators x and p, but once we are at the stage where we have to use e.g. both location operators and their eigenvalues in polar coordinates, you will have so much practice with bra-ket notation that you will infer from the context whether e.g. r refers to the operator  $$r = \sqrt{\mathrm{x}^{2 } +\mathrm{ y}^{2 } +\mathrm{ z}^{2}}$$ or to the eigenvalue  $$r = \sqrt{x^{2 } + y^{2 } + z^{2}}$$ . Some physical quantities have different symbols for the related operator and its eigenvalues, e.g. H for the energy operator and E for its eigenvalues,
 $$\displaystyle{H\vert E\rangle = E\vert E\rangle,}$$
so that in these cases the use of standard cursive mathematical notation for the operators and the eigenvalues cannot cause confusion.
Expectation values of observables are often written in terms of the operator or the observable, e.g.  $$\langle x\rangle \equiv \langle \mathrm{ x}\rangle$$ ,  $$\langle E\rangle \equiv \langle H\rangle$$ etc., but explicit matrix elements of operators should always explicitly use the operator, e.g. 〈ψ | x | ψ〉, 〈ψ | H | ψ〉.
The “momentum-eigenkets” provide another basis of quantum states of a particle,
 $$\displaystyle{ \mathbf{p}\vert \boldsymbol{k}\rangle = \hslash \boldsymbol{k}\vert \boldsymbol{k}\rangle, }$$
(4.24)
and the change of basis looks like the corresponding equation in linear algebra: If we have two sets of basis vectors  $$\vert a_{i}\rangle$$ ,  $$\vert b_{a}\rangle$$ , then the components of a vector  $$\vert v\rangle$$ with respect to the new basis  $$\vert b_{a}\rangle$$ are related to the  $$\vert a_{i}\rangle$$ -components via (just insert  $$\vert v\rangle = \vert a_{i}\rangle \langle a^{i}\vert v\rangle$$ )
 $$\displaystyle{\langle b^{a}\vert v\rangle =\langle b^{a}\vert a_{ i}\rangle \langle a^{i}\vert v\rangle,}$$
i.e. the transformation matrix  $$T^{a}_{i} =\langle b^{a}\vert a_{i}\rangle$$ is just given by the components of the old basis vectors in the new basis.
The corresponding equation in quantum mechanics for the  $$\vert \boldsymbol{x}\rangle$$ and  $$\vert \boldsymbol{k}\rangle$$ bases is
 $$\displaystyle{\langle \boldsymbol{x}\vert \psi (t)\rangle =\int \! d^{3}\boldsymbol{k}\,\langle \boldsymbol{x}\vert \boldsymbol{k}\rangle \langle \boldsymbol{k}\vert \psi (t)\rangle = \frac{1} {\sqrt{2\pi }^{3}}\int \!d^{3}\boldsymbol{k}\,\exp (\mathrm{i}\boldsymbol{k} \cdot \boldsymbol{ x})\langle \boldsymbol{k}\vert \psi (t)\rangle,}$$
which tells us that the expansion coefficients of the vectors  $$\vert \boldsymbol{k}\rangle$$ with respect to the  $$\vert \boldsymbol{x}\rangle$$ -basis are just
 $$\displaystyle{ \langle \boldsymbol{x}\vert \boldsymbol{k}\rangle = \frac{1} {\sqrt{2\pi }^{3}}\exp (\mathrm{i}\boldsymbol{k} \cdot \boldsymbol{ x}). }$$
(4.25)
The Fourier decomposition of the δ-function implies that these bases are self-dual, e.g.
 $$\displaystyle{\langle \boldsymbol{x}\vert \boldsymbol{x}'\rangle =\int \! d^{3}\boldsymbol{k}\,\langle \boldsymbol{x}\vert \boldsymbol{k}\rangle \langle \boldsymbol{k}\vert \boldsymbol{x}'\rangle = \frac{1} {(2\pi )^{3}}\int \!d^{3}\boldsymbol{k}\,\exp [\mathrm{i}\boldsymbol{k} \cdot (\boldsymbol{x} -\boldsymbol{ x}')] =\delta (\boldsymbol{x} -\boldsymbol{ x}').}$$
The scalar product of two states can be written in terms of  $$\vert \boldsymbol{x}\rangle$$ -components or  $$\vert \boldsymbol{k}\rangle$$ -components
 $$\displaystyle\begin{array}{rcl} \langle \varphi (t)\vert \psi (t)\rangle & =& \int \!d^{3}\boldsymbol{x}\,\langle \varphi (t)\vert \boldsymbol{x}\rangle \langle \boldsymbol{x}\vert \psi (t)\rangle =\int \! d^{3}\boldsymbol{x}\,\varphi ^{+}(\boldsymbol{x},t)\psi (\boldsymbol{x},t) {}\\ & =& \int \!d^{3}\boldsymbol{k}\,\langle \varphi (t)\vert \boldsymbol{k}\rangle \langle \boldsymbol{k}\vert \psi (t)\rangle =\int \! d^{3}\boldsymbol{k}\,\varphi ^{+}(\boldsymbol{k},t)\psi (\boldsymbol{k},t). {}\\ \end{array}$$
To get some practice with bra-ket notation let us derive the  $$\boldsymbol{x}$$ -representation of the momentum operator. We know equation (4.24) and we want to find out what the  $$\boldsymbol{x}$$ -components of the state  $$\mathbf{p}\vert \psi (t)\rangle$$ are. We can accomplish this by inserting the decomposition
 $$\displaystyle{\vert \psi (t)\rangle =\int \! d^{3}\boldsymbol{k}\,\vert \boldsymbol{k}\rangle \langle \boldsymbol{k}\vert \psi (t)\rangle }$$
into  $$\langle \boldsymbol{x}\vert \mathbf{p}\vert \psi (t)\rangle$$ ,
 $$\displaystyle{ \langle \boldsymbol{x}\vert \mathbf{p}\vert \psi (t)\rangle =\int \! d^{3}\boldsymbol{k}\,\langle \boldsymbol{x}\vert \mathbf{p}\vert \boldsymbol{k}\rangle \langle \boldsymbol{k}\vert \psi (t)\rangle =\int \! d^{3}\boldsymbol{k}\,\hslash \boldsymbol{k}\langle \boldsymbol{x}\vert \boldsymbol{k}\rangle \langle \boldsymbol{k}\vert \psi (t)\rangle. }$$
(4.26)
However, equation (4.25) implies
 $$\displaystyle{\hslash \boldsymbol{k}\langle \boldsymbol{x}\vert \boldsymbol{k}\rangle = \frac{\hslash } {\mathrm{i}} \boldsymbol{\nabla }\langle \boldsymbol{x}\vert \boldsymbol{k}\rangle,}$$
and substitution into equation (4.26) yields
 $$\displaystyle{ \langle \boldsymbol{x}\vert \mathbf{p}\vert \psi (t)\rangle = \frac{\hslash } {\mathrm{i}} \boldsymbol{\nabla }\int \!d^{3}\boldsymbol{k}\,\langle \boldsymbol{x}\vert \boldsymbol{k}\rangle \langle \boldsymbol{k}\vert \psi (t)\rangle = \frac{\hslash } {\mathrm{i}} \boldsymbol{\nabla }\langle \boldsymbol{x}\vert \psi (t)\rangle. }$$
(4.27)
This equation yields in particular the matrix elements of the momentum operator in the  $$\vert \boldsymbol{x}\rangle$$ -basis,
 $$\displaystyle{\langle \boldsymbol{x}\vert \mathbf{p}\vert \boldsymbol{x}'\rangle = \frac{\hslash } {\mathrm{i}} \boldsymbol{\nabla }\delta (\boldsymbol{x} -\boldsymbol{ x}').}$$
Equation (4.27) means that the  $$\boldsymbol{x}$$ -expansion coefficients  $$\langle \boldsymbol{x}\vert \mathbf{p}\vert \psi (t)\rangle$$ of the new state p | ψ(t)〉 can be calculated from the expansion coefficients  $$\langle \boldsymbol{x}\vert \psi (t)\rangle$$ of the old state | ψ(t)〉 through application of  $$-\mathrm{i}\hslash \boldsymbol{\nabla }$$ . In sloppy terminology this is the statement “the  $$\boldsymbol{x}$$ -representation of the momentum operator is  $$-\mathrm{i}\hslash \boldsymbol{\nabla }$$ ”, but the proper statement is equation (4.27),
 $$\displaystyle{\langle \boldsymbol{x}\vert \mathbf{p}\vert \psi (t)\rangle = \frac{\hslash } {\mathrm{i}} \boldsymbol{\nabla }\langle \boldsymbol{x}\vert \psi (t)\rangle.}$$
The quantum operator p acts on the quantum state | ψ(t)〉, the differential operator  $$-\mathrm{i}\hslash \boldsymbol{\nabla }$$ acts on the expansion coefficients  $$\langle \boldsymbol{x}\vert \psi (t)\rangle$$ of the state | ψ(t)〉.
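This distinction is easy to make concrete numerically. The following Python/NumPy sketch (with an arbitrarily chosen wave packet on a finite grid, using discrete Fourier transforms as a stand-in for (2.4, 2.5)) computes  $$\langle \boldsymbol{x}\vert \mathbf{p}\vert \psi (t)\rangle$$ once by differentiating  $$\langle \boldsymbol{x}\vert \psi (t)\rangle$$ as in (4.27), and once by multiplying  $$\langle \boldsymbol{k}\vert \psi (t)\rangle$$ with  $$\hslash \boldsymbol{k}$$ as in (4.26):

import numpy as np

hbar = 1.0                                       # units with hbar = 1
x = np.linspace(-20.0, 20.0, 1024, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) * np.exp(1j * 0.7 * x)   # arbitrary wave packet <x|psi>

# Route 1: eq. (4.27), (hbar/i) d/dx acting on <x|psi>.
p_psi_grad = (hbar / 1j) * np.gradient(psi, dx)

# Route 2: eq. (4.26), multiply <k|psi> by hbar*k, then transform back.
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
p_psi_fft = np.fft.ifft(hbar * k * np.fft.fft(psi))

print(np.max(np.abs(p_psi_grad - p_psi_fft)))    # small, up to finite-difference error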
The corresponding statement in linear algebra is that a linear transformation A transforms a vector | v〉 according to
 $$\displaystyle{\vert v\rangle \,\, \rightarrow \,\,\vert v'\rangle = A\vert v\rangle,\quad }$$
and the transformation in a particular basis reads
 $$\displaystyle{\langle a^{i}\vert v'\rangle =\langle a^{i}\vert A\vert v\rangle =\langle a^{i}\vert A\vert a_{ j}\rangle \langle a^{j}\vert v\rangle.}$$
The operator A acts on the vector, and its representation 〈a i  | A | a j 〉 in a particular basis acts on the components of the vector in that basis.
Bra-ket notation requires a proper understanding of the distinction between quantum operators (like p) and operators that act on expansion coefficients of quantum states in a particular basis (like  $$-\mathrm{i}\hslash \boldsymbol{\nabla }$$ ). Bra-ket notation appears in virtually every equation of advanced quantum mechanics and quantum field theory. It provides in many respects the most useful notation for recognizing the elegance and power of quantum theory.
Equations equivalent to equations (4.23, 4.24, 4.27) are contained in
 $$\displaystyle\begin{array}{rcl} \mathbf{x}& =& \int \!d^{3}\boldsymbol{x}\,\vert \boldsymbol{x}\rangle \boldsymbol{x}\langle \boldsymbol{x}\vert =\int \! d^{3}\boldsymbol{k}\,\vert \boldsymbol{k}\rangle \mathrm{i} \frac{\partial } {\partial \boldsymbol{k}}\langle \boldsymbol{k}\vert,{}\end{array}$$
(4.28)
 $$\displaystyle\begin{array}{rcl} \mathbf{p}& =& \int \!d^{3}\boldsymbol{k}\,\vert \boldsymbol{k}\rangle \hslash \boldsymbol{k}\langle \boldsymbol{k}\vert =\int \! d^{3}\boldsymbol{x}\,\vert \boldsymbol{x}\rangle \frac{\hslash } {\mathrm{i}} \frac{\partial } {\partial \boldsymbol{x}}\langle \boldsymbol{x}\vert.{}\end{array}$$
(4.29)
Here we used the very convenient notation  $$\boldsymbol{\nabla }\equiv \partial /\partial \boldsymbol{x}$$ for the del operator in  $$\boldsymbol{x}$$ space, and  $$\partial /\partial \boldsymbol{k}$$ for the del operator in  $$\boldsymbol{k}$$ space. One often encounters several copies of several vector spaces in an equation, and this notation is extremely useful to distinguish the different del operators in the different vector spaces.
Functions of operators are operators again. Important examples are the operators V (x) for the potential energy of a particle. The eigenkets of x are also eigenkets of V (x),
 $$\displaystyle{V (\mathbf{x})\vert \boldsymbol{x}\rangle = V (\boldsymbol{x})\vert \boldsymbol{x}\rangle,}$$
and the matrix elements in  $$\boldsymbol{x}$$ representation are
 $$\displaystyle{\langle \boldsymbol{x}\vert V (\mathbf{x})\vert \boldsymbol{x}'\rangle = V (\boldsymbol{x}')\delta (\boldsymbol{x} -\boldsymbol{ x}').}$$
The single particle Schrödinger equation (1.14) reads in representation-free notation
 $$\displaystyle{ \mathrm{i}\hslash \frac{d} {dt}\vert \psi (t)\rangle = H\vert \psi (t)\rangle = \frac{\mathbf{p}^{2}} {2m}\vert \psi (t)\rangle + V (\mathbf{x})\vert \psi (t)\rangle. }$$
(4.30)
We recover the  $$\boldsymbol{x}$$ representation already used in (1.14) through projection on  $$\langle \boldsymbol{x}\vert$$ and substitution of
 $$\displaystyle\begin{array}{rcl} 1& =& \int \!d^{3}\boldsymbol{x}'\,\vert \boldsymbol{x}'\rangle \langle \boldsymbol{x}'\vert, {}\\ \mathrm{i}\hslash \frac{\partial } {\partial t}\langle \boldsymbol{x}\vert \psi (t)\rangle & =& -\,\frac{\hslash ^{2}} {2m}\Delta \langle \boldsymbol{x}\vert \psi (t)\rangle + V (\boldsymbol{x})\langle \boldsymbol{x}\vert \psi (t)\rangle. {}\\ \end{array}$$
The definition of adjoint operators in representation-free bra-ket notation is
 $$\displaystyle{ \langle \varphi \vert A\vert \psi \rangle =\langle \psi \vert A^{+}\vert \varphi \rangle ^{+}. }$$
(4.31)
This implies in particular that the “bra vector”  $$\langle \Psi \vert$$ adjoint to the “ket vector”  $$\vert \Psi \rangle = A\vert \psi \rangle$$ satisfies
 $$\displaystyle{ \langle \Psi \vert =\langle \psi \vert A^{+}. }$$
(4.32)
This is an intuitive equation which can be motivated e.g. from matrix algebra of complex finite-dimensional vector spaces. However, it deserves a formal derivation. We have for any third state | ϕ〉 the relation
 $$\displaystyle{\langle \Psi \vert \phi \rangle = (\langle \phi \vert \Psi \rangle )^{+} = (\langle \phi \vert A\vert \psi \rangle )^{+} =\langle \psi \vert A^{+}\vert \phi \rangle,}$$
where we used the defining property of adjoint operators in the last equation. Since this equation holds for every state | ϕ〉, the operator equation (4.32) follows: Projection4 onto the state  $$\vert \Psi \rangle = A\vert \psi \rangle$$ is equivalent to action of the operator A + followed by projection onto the state | ψ〉.
Self-adjoint operators (e.g.  $$\mathrm{p}^{+} =\mathrm{ p}$$ ) have real expectation values and in particular real eigenvalues:
 $$\displaystyle{\langle \psi \vert \mathrm{p}\vert \psi \rangle =\langle \psi \vert \mathrm{p}^{+}\vert \psi \rangle ^{+} =\langle \psi \vert \mathrm{p}\vert \psi \rangle ^{+}.}$$
Observables are therefore described by self-adjoint operators in quantum mechanics.
Unitary operators ( $$U^{+} = U^{-1}$$ ) do not change the norm of a state: Substitution of | ψ〉 = U | φ〉 into 〈ψ | ψ〉 yields
 $$\displaystyle{\langle \psi \vert \psi \rangle =\langle \psi \vert U\vert \varphi \rangle =\langle \varphi \vert U^{+}\vert \psi \rangle ^{+} =\langle \varphi \vert U^{+}U\vert \varphi \rangle ^{+} =\langle \varphi \vert \varphi \rangle ^{+} =\langle \varphi \vert \varphi \rangle.}$$
Time evolution and symmetry transformations of quantum systems are described by unitary operators.
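As a numerical illustration (a minimal sketch in which a randomly generated self-adjoint matrix stands in for a Hamiltonian), one can check that  $$U =\exp (-\mathrm{i}H)$$ is unitary and preserves the norm:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                  # random self-adjoint "Hamiltonian"

U = expm(-1j * H)                         # time evolution with t/hbar = 1
assert np.allclose(U.conj().T @ U, np.eye(4))        # U^+ U = 1

phi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi = U @ phi
assert np.isclose(np.vdot(psi, psi).real, np.vdot(phi, phi).real)  # norm preserved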

4.3 The adjoint Schrödinger equation and the virial theorem

We consider a matrix element
 $$\displaystyle{ \langle \psi (t)\vert A(t')\vert \phi (t')\rangle = (\langle \phi (t')\vert A^{+}(t')\vert \psi (t)\rangle )^{+}. }$$
(4.33)
We assume that | ψ(t)〉 satisfies the Schrödinger equation
 $$\displaystyle{\mathrm{i}\hslash \frac{d} {dt}\vert \psi (t)\rangle = H\vert \psi (t)\rangle,}$$
while A(t′) and | ϕ(t′)〉 are an arbitrary operator and state, respectively. We have artificially taken the state  $$\vert \Phi (t')\rangle = A(t')\vert \phi (t')\rangle$$ at another time t′, because we are particularly interested in the time-dependence of the matrix element 〈ψ(t) | A(t′) | ϕ(t′)〉 which arises from the time-dependence of | ψ(t)〉.
Equation (4.33), the Schrödinger equation, and hermiticity of H imply
 $$\displaystyle\begin{array}{rcl} & & \!\!\!\! \frac{d} {dt}\langle \psi (t)\vert A(t')\vert \phi (t')\rangle = \left (\langle \phi (t')\vert A^{+}(t') \frac{d} {dt}\vert \psi (t)\rangle \right )^{+} {}\\ & & = \left ( \frac{1} {\mathrm{i}\hslash }\langle \phi (t')\vert A^{+}(t')H\vert \psi (t)\rangle \right )^{+} = \frac{\mathrm{i}} {\hslash }\langle \psi (t)\vert HA(t')\vert \phi (t')\rangle. {}\\ \end{array}$$
Since this holds for every operator A(t′) and state | ϕ(t′)〉, we have an operator equation
 $$\displaystyle{ \left ( \frac{d} {dt}\langle \psi (t)\vert \right ) = \frac{\mathrm{i}} {\hslash }\langle \psi (t)\vert H. }$$
(4.34)
With the brackets on the left hand side, this equation also holds for projection on time-dependent states of the form A(t) | ϕ(t)〉: Projection of any state  $$\vert \Phi (t)\rangle$$ on  $$(d\langle \psi (t)\vert /dt)$$ is equivalent to action of H on  $$\vert \Phi (t)\rangle$$ followed by projection of  $$H\vert \Phi (t)\rangle$$ on  $$(\mathrm{i}/\hslash )\langle \psi (t)\vert$$ ,
 $$\displaystyle\begin{array}{rcl} \frac{d} {dt}\langle \psi (t)\vert A(t)\vert \phi (t)\rangle & =& \frac{\mathrm{i}} {\hslash }\langle \psi (t)\vert HA(t)\vert \phi (t)\rangle +\langle \psi (t)\vert \frac{dA(t)} {dt} \vert \phi (t)\rangle {}\\ & & +\,\langle \psi (t)\vert A(t) \frac{d} {dt}\vert \phi (t)\rangle. {}\\ \end{array}$$
In particular, if | ϕ(t)〉 also satisfies the Schrödinger equation, we have
 $$\displaystyle{ \frac{d} {dt}\langle \psi (t)\vert A(t)\vert \phi (t)\rangle = \frac{\mathrm{i}} {\hslash }\langle \psi (t)\vert [H,A(t)]\vert \phi (t)\rangle +\langle \psi (t)\vert \frac{dA(t)} {dt} \vert \phi (t)\rangle. }$$
(4.35)
The operator equation (4.34) is the adjoint Schrödinger equation. In general it is an operator equation, but it reduces to the complex conjugate of the Schrödinger equation if it is projected onto x eigenkets,
 $$\displaystyle\begin{array}{rcl} \frac{d} {dt}\langle \psi (t)\vert \boldsymbol{x}\rangle & =& \frac{\mathrm{i}} {\hslash }\int \!d^{3}\boldsymbol{x}'\,\langle \psi (t)\vert \boldsymbol{x}'\rangle \left (-\frac{\hslash ^{2}} {2m} \frac{\partial ^{2}} {\partial \boldsymbol{x}'^{2}} + V (\boldsymbol{x}')\right )\delta (\boldsymbol{x}' -\boldsymbol{ x}) {}\\ & =& \frac{\mathrm{i}} {\hslash }\left (-\frac{\hslash ^{2}} {2m} \frac{\partial ^{2}} {\partial \boldsymbol{x}^{2}} + V (\boldsymbol{x})\right )\langle \psi (t)\vert \boldsymbol{x}\rangle. {}\\ \end{array}$$
The result (4.35) for the time-dependence of matrix elements appears in many different settings in quantum mechanics, but one application that we will address now concerns the particular choice of the virial operator x ⋅ p for the operator A. In classical mechanics, Newton’s equation and  $$m\dot{\boldsymbol{x}} =\boldsymbol{ p}$$ imply that the time derivative of the virial  $$\boldsymbol{x} \cdot \boldsymbol{ p}$$ is
 $$\displaystyle{ \frac{d} {dt}\boldsymbol{x} \cdot \boldsymbol{ p} = \frac{\boldsymbol{p}^{2}} {m} -\boldsymbol{ x} \cdot \boldsymbol{\nabla }V (\boldsymbol{x}).}$$
Application of the time averaging operation  $$\lim _{T\rightarrow \infty }T^{-1}\int _{0}^{T}\!dt\ldots$$ on both sides of this equation then yields the classical virial theorem for the time average  $$\langle K\rangle _{T}$$ of the kinetic energy  $$K =\boldsymbol{ p}^{2}/2m$$ ,
 $$\displaystyle{ 2\langle K\rangle _{T} =\langle \boldsymbol{ x} \cdot \boldsymbol{\nabla }V (\boldsymbol{x})\rangle _{T}. }$$
(4.36)
The equation (4.35) applied to A = x ⋅ p implies that the same relation holds for all matrix elements of the operators  $$K = \mathbf{p}^{2}/2m$$ and  $$\mathbf{x} \cdot \boldsymbol{\nabla }V (\mathbf{x})$$ . We have
 $$\displaystyle{ \frac{\mathrm{i}} {\hslash }[H,\mathbf{x} \cdot \mathbf{p}] = \frac{\mathbf{p}^{2}} {m} -\mathbf{x} \cdot \frac{\partial } {\partial \mathbf{x}}V (\mathbf{x}),}$$
and therefore
 $$\displaystyle{ \frac{d} {dt}\langle \psi (t)\vert \mathbf{x} \cdot \mathbf{p}\vert \phi (t)\rangle = 2\langle \psi (t)\vert K\vert \phi (t)\rangle -\langle \psi (t)\vert \mathbf{x} \cdot \frac{\partial } {\partial \mathbf{x}}V (\mathbf{x})\vert \phi (t)\rangle. }$$
(4.37)
Time averaging then yields a quantum analog of the classical virial theorem,
 $$\displaystyle{ 2\langle \psi (t)\vert K\vert \phi (t)\rangle _{T} =\langle \psi (t)\vert \mathbf{x} \cdot \frac{\partial } {\partial \mathbf{x}}V (\mathbf{x})\vert \phi (t)\rangle _{T}. }$$
(4.38)
However, if | ψ(t)〉 and | ϕ(t)〉 are energy eigenstates,
 $$\displaystyle{\vert \psi (t)\rangle = \vert \psi \rangle \exp (-\mathrm{i}E_{\psi }t/\hslash ),\quad \vert \phi (t)\rangle = \vert \phi \rangle \exp (-\mathrm{i}E_{\phi }t/\hslash ),}$$
then equation (4.37) yields
 $$\displaystyle{ \frac{\mathrm{i}} {\hslash }(E_{\psi } - E_{\phi })\langle \psi \vert \mathbf{x} \cdot \mathbf{p}\vert \phi \rangle = 2\langle \psi \vert K\vert \phi \rangle -\langle \psi \vert \mathbf{x} \cdot \frac{\partial } {\partial \mathbf{x}}V (\mathbf{x})\vert \phi \rangle. }$$
(4.39)
In this case, the classical time averaging cannot yield anything interesting, but if we assume that our energy eigenstates are degenerate normalizable states,
 $$\displaystyle{E_{\psi } = E_{\phi },\quad \langle \psi \vert \psi \rangle =\langle \phi \vert \phi \rangle = 1,}$$
then we find the quantum virial theorem for matrix elements of degenerate normalizable energy eigenstates5,
 $$\displaystyle{ 2\langle \psi \vert K\vert \phi \rangle =\langle \psi \vert \mathbf{x} \cdot \frac{\partial } {\partial \mathbf{x}}V (\mathbf{x})\vert \phi \rangle. }$$
(4.40)
Furthermore, if  $$V (\boldsymbol{x})$$ is homogeneous of order ν,
 $$\displaystyle{V (a\boldsymbol{x}) = a^{\nu }V (\boldsymbol{x}),}$$
then
 $$\displaystyle{\boldsymbol{x} \cdot \boldsymbol{\nabla }V (\boldsymbol{x}) =\nu V (\boldsymbol{x})}$$
and
 $$\displaystyle{ 2\langle \psi \vert K\vert \phi \rangle =\nu \langle \psi \vert V \vert \phi \rangle. }$$
(4.41)
The relations (4.40) and (4.41) hold in particular for the expectation values of normalizable energy eigenstates. Special cases for the appearance of physically relevant homogeneous potential functions include harmonic oscillators, ν = 2, and the three-dimensional Coulomb potential,  $$\nu = -1$$ . We will discuss harmonic oscillators and the Coulomb problem in Chapters 6 and 7, respectively. Equation (4.41) has also profound implications for hypothetical physics in higher dimensions, see Problem 20.5.
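The virial theorem is also easy to test numerically. The sketch below (our own finite-difference discretization with  $$\hslash = m =\omega = 1$$ ; grid size and range are arbitrary choices) checks equation (4.41) for the harmonic oscillator ground state, where ν = 2 implies 〈K〉 = 〈V 〉:

import numpy as np

# Finite-difference Hamiltonian for V(x) = x^2/2 (harmonic, nu = 2), hbar = m = 1.
n = 1000
x = np.linspace(-8.0, 8.0, n)
dx = x[1] - x[0]

# Kinetic energy K = p^2/2m from the standard three-point Laplacian.
K = (-np.eye(n, k=1) + 2 * np.eye(n) - np.eye(n, k=-1)) / (2 * dx**2)
V = np.diag(x**2 / 2)

E, states = np.linalg.eigh(K + V)
psi = states[:, 0]                        # ground state, E[0] close to 1/2

print(2 * (psi @ K @ psi), 2 * (psi @ V @ psi))   # eq. (4.41): both close to E[0]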

4.4 Problems

4.1. We consider again the rotation (4.4) of a Cartesian basis,
 $$\displaystyle{\hat{\boldsymbol{e}}_{a} \rightarrow \hat{\boldsymbol{ e}}'_{i} =\hat{\boldsymbol{ e}}_{a}R^{a}_{ i},}$$
but this time we insist on keeping the expansion coefficients  $$v^{a}$$ of the vector  $$\boldsymbol{v} = v^{a}\hat{\boldsymbol{e}}_{a}$$ . Rotation of the basis with fixed expansion coefficients  $$\{v^{1},\ldots,v^{N}\}$$ will therefore generate a new vector
 $$\displaystyle{\boldsymbol{v} \rightarrow \boldsymbol{ v}' \equiv v^{i}\hat{\boldsymbol{e}}'_{ i}.}$$
This is the active interpretation of transformations, because the change of the reference frame is accompanied by a change of the physical objects.
In the active interpretation, transformations of the expansion coefficients are defined by the condition that the transformed expansion coefficients describe the expansion of the new vector  $$\boldsymbol{v}'$$ with respect to the old basis  $$\hat{\boldsymbol{e}}_{a}$$ ,
 $$\displaystyle{ \boldsymbol{v}' \equiv v^{i}\hat{\boldsymbol{e}}'_{ i} = v'^{a}\hat{\boldsymbol{e}}_{ a}. }$$
(4.42)
How are the new expansion coefficients  $$v'^{a}$$ related to the old expansion coefficients  $$v^{i}$$ for an active transformation?
In the active interpretation, rotations are special in that they preserve the lengths of vectors and the angles between vectors.
Equation (4.42) implies that we can describe an active transformation either through a transformation of the basis with fixed expansion coefficients, or equivalently through a transformation of the expansion coefficients with a fixed basis. This is very different from the passive transformation, where a transformation of the basis is always accompanied by a compensating contragredient transformation of the expansion coefficients.
4.2. Two basis vectors  $$\boldsymbol{a}_{1}$$ and  $$\boldsymbol{a}_{2}$$ have length one and the angle between the vectors is π∕3. Construct the dual basis.
4.3. Nickel atoms form a regular triangular array with an interatomic distance of 2.49 Å on the surface of a Nickel crystal. Particles with momentum  $$p = h/\lambda$$ are incident on the crystal. Which conditions for coherent elastic scattering off the Nickel surface do we get for orthogonal incidence of the particle beam? Which conditions for coherent elastic scattering do we get for grazing incidence in the plane of the surface?
4.4. Suppose  $$V (\boldsymbol{x})$$ is an analytic function of  $$\boldsymbol{x}$$ . Write down the  $$\boldsymbol{k}$$ -representation of the time-dependent and time-independent Schrödinger equations. Why is the  $$\boldsymbol{x}$$ -representation usually preferred for solving the Schrödinger equation?
4.5. Sometimes we seem to violate the symmetric conventions (2.4, 2.5) in the Fourier transformations of the Green’s functions that we will encounter later on. We will see that the asymmetric split of powers of 2π that we will encounter in these cases is actually a consequence of the symmetric conventions (2.4, 2.5) for the Fourier transformation of wave functions.
Suppose that the operator G has translation invariant matrix elements,
 $$\displaystyle{\langle \boldsymbol{x}\vert G\vert \boldsymbol{x}'\rangle = G(\boldsymbol{x} -\boldsymbol{ x}').}$$
Show that the Fourier transformed matrix elements  $$\langle \boldsymbol{k}\vert G\vert \boldsymbol{k}'\rangle$$ satisfy  $$\langle \boldsymbol{k}\vert G\vert \boldsymbol{k}'\rangle = G(\boldsymbol{k})\delta (\boldsymbol{k} -\boldsymbol{ k}')$$ with
 $$\displaystyle\begin{array}{rcl} G(\boldsymbol{k})& =& \int \!d^{3}\boldsymbol{x}\,G(\boldsymbol{x})\exp (-\mathrm{i}\boldsymbol{k} \cdot \boldsymbol{ x}), \\ G(\boldsymbol{x})& =& \frac{1} {(2\pi )^{3}}\int \!d^{3}\boldsymbol{k}\,G(\boldsymbol{k})\exp (\mathrm{i}\boldsymbol{k} \cdot \boldsymbol{ x}).{}\end{array}$$
(4.43)
4.6. Suppose that the Hamilton operator depends on a real parameter λ, H = H(λ). This parameter dependence will influence the energy eigenvalues and eigenstates of the Hamiltonian,
 $$\displaystyle{H(\lambda )\vert \psi _{n}(\lambda )\rangle = E_{n}(\lambda )\vert \psi _{n}(\lambda )\rangle.}$$
Use  $$\langle \psi _{m}(\lambda )\vert \psi _{n}(\lambda )\rangle =\delta _{mn}$$ (this could also be a δ function normalization) to show that6
 $$\displaystyle\begin{array}{rcl} \delta _{mn}\frac{dE_{n}(\lambda )} {d\lambda } & =& \langle \psi _{m}(\lambda )\vert \frac{dH(\lambda )} {d\lambda } \vert \psi _{n}(\lambda )\rangle \\ & & +\left [E_{m}(\lambda ) - E_{n}(\lambda )\right ]\langle \psi _{m}(\lambda )\vert \frac{d} {d\lambda }\vert \psi _{n}(\lambda )\rangle.{}\end{array}$$
(4.44)
For m = n discrete this is known as the Hellmann-Feynman theorem7 [15]. The theorem is important for the calculation of forces in molecules.
4.7. We consider particles of mass m which are bound in a potential  $$V (\boldsymbol{x})$$ . The potential does not depend on m. How do the energy levels of the bound states change if we increase the mass of the particles?
The eigenstates for different energies will usually have different momentum uncertainties  $$\Delta p$$ . Do the energy levels with large or small  $$\Delta p$$ change more rapidly with mass?
4.8. Show that the free propagator (3.32, 3.33) is the x representation of the one-dimensional free time evolution operator,
 $$\displaystyle{U(t) =\exp \! \left (-\mathrm{i}\frac{t -\mathrm{ i}\epsilon } {2m\hslash }\mathrm{p}^{2}\right ),\quad U(x - x',t) =\langle x\vert U(t)\vert x'\rangle.}$$
Here a small negative imaginary part was added to the time variable to ensure convergence of a Gaussian integral.
Show also that the free time-evolution operator in three dimensions satisfies
 $$\displaystyle\begin{array}{rcl} U(\boldsymbol{x} -\boldsymbol{ x}',t)& =& \langle \boldsymbol{x}\vert \exp \!\left (-\mathrm{i}\frac{t -\mathrm{ i}\epsilon } {2m\hslash }\mathbf{p}^{2}\right )\vert \boldsymbol{x}'\rangle \\ & =& \sqrt{ \frac{m} {2\pi \mathrm{i}\hslash (t -\mathrm{ i}\epsilon )}}^{3}\exp \!\left (\mathrm{i} \frac{m} {2\hslash (t -\mathrm{ i}\epsilon )}(\boldsymbol{x} -\boldsymbol{ x}')^{2}\right ).{}\end{array}$$
(4.45)
For later reference we also note that this implies the formula
 $$\displaystyle\begin{array}{rcl} \exp \!\left (\mathrm{i\hslash }\frac{t -\mathrm{ i}\epsilon } {2m} \frac{\partial ^{2}} {\partial \boldsymbol{x}^{2}}\right )\delta (\boldsymbol{x} -\boldsymbol{ x}')& =& \sqrt{ \frac{m} {2\pi \mathrm{i}\hslash (t -\mathrm{ i}\epsilon )}}^{3} \\ & & \times \exp \!\left (\mathrm{i} \frac{m} {2\hslash (t -\mathrm{ i}\epsilon )}(\boldsymbol{x} -\boldsymbol{ x}')^{2}\right ).{}\end{array}$$
(4.46)
4.9. Apply equation (4.39) in the case  $$V (\boldsymbol{x}) = 0$$ to plane wave states. Show that in this case the left hand side does not vanish in the limit  $$E(\boldsymbol{k}) \rightarrow E(\boldsymbol{k}')$$ . Indeed, the equation remains correct in this case only because the left hand side does not vanish.
4.10. Use the calculation of  $$\boldsymbol{p}$$ or  $$\boldsymbol{x}$$ expectation values in the wave vector representation and in the momentum representation of the state | ψ〉 to show that momentum and wave vector eigenstates in d spatial dimensions are related according to  $$\vert \boldsymbol{p}\rangle = \vert \boldsymbol{k}\rangle /\hslash ^{d/2}$$ . Does this comply with proper δ function normalization of the two bases?
Bibliography
15.
H. Hellmann, Einführung in die Quantenchemie (Deuticke, Leipzig, 1937)
Footnotes
1
We write scalar products of vectors initially as  $$\boldsymbol{u}^{\mathrm{T}} \cdot \boldsymbol{ v}$$ to be consistent with proper tensor product notation used in (4.1), but we will switch soon to the shorter notations  $$\boldsymbol{u} \cdot \boldsymbol{ v}$$ ,  $$\boldsymbol{u} \otimes \boldsymbol{ v}$$ for scalar products and tensor products.
 
2
For scattering off two-dimensional crystals the Laue conditions can be recast in simpler forms in special cases. E.g. for orthogonal incidence a plane grating equation can be derived from the Laue conditions, or if the momentum transfer  $$\Delta \boldsymbol{k}$$ is in the plane of the crystal a two-dimensional Bragg equation can be derived.
 
3
In the case of a complex finite-dimensional vector space, the “bra vector” would actually be the transpose complex conjugate vector,  $$\langle v\vert =\boldsymbol{ v}^{+} =\boldsymbol{ v}^{{\ast}\mathrm{T}}$$ .
 
4
Strictly speaking, we can think of multiplication of a state | ϕ〉 with  $$\langle \Psi \vert$$ as projecting onto a component parallel to  $$\vert \Psi \rangle$$ only if  $$\vert \Psi \rangle$$ is normalized. It is convenient, though, to denote multiplication with  $$\langle \Psi \vert$$ as projection, although in the general case this will only be proportional to the coefficient of the  $$\vert \Psi \rangle$$ component in | ϕ〉.
 
5
Normalizability is important for the correctness of equation (4.40), because for states in an energy continuum the left hand side of equation (4.39) may not vanish in the degenerate limit  $$E_{\psi }\rightarrow E_{\phi }$$ , see Problem 9.
 
6
P. Güttinger, Diplomarbeit, ETH Zürich, Z. Phys. 73, 169 (1932). Exceptionally, there is no summation convention used in equation (4.44).
 
7
R.P. Feynman, Phys. Rev. 56, 340 (1939).