The Schrödinger equation (1.14) is linear in the wave function ψ(x, t). This implies that for any set of solutions ψ_1(x, t), ψ_2(x, t), …, any linear combination

ψ(x, t) = Σ_n C_n ψ_n(x, t)

with complex coefficients C_n is also a solution. The set of solutions of equation (1.14) for fixed potential V will therefore have the structure of a complex vector space, and we can think of the wave function ψ(x, t) as a particular vector in this vector space. Furthermore, we can map this vector bijectively into different, but equivalent representations where the wave function depends on different variables. An example of this is Fourier transformation (2.5) into a wave function which depends on a wave vector k,

ψ(k, t) = (2π)^{-3/2} ∫ d³x ψ(x, t) exp(−i k ⋅ x).
We have already noticed that this is sloppy notation from the mathematical point of view. We should denote the Fourier transformed function with ψ̃(k, t) to make it clear that ψ(x, t) and ψ̃(k, t) have different dependencies on their arguments (or stated differently, to make it clear that ψ and ψ̃ are really different functions). However, there is a reason for the notation in equations (2.4, 2.5). We can switch back and forth between ψ(x, t) and ψ(k, t) using Fourier transformation. This implies that any property of a particle that can be calculated from the wave function ψ(x, t) in x space can also be calculated from the wave function ψ(k, t) in k space. Therefore, following Dirac, we nowadays do not think any more of ψ(x, t) as a wave function of a particle, but we rather think more abstractly of ψ(t) as a time-dependent quantum state, with particular representations of the quantum state ψ(t) given by the wave functions ψ(x, t) or ψ(k, t). There are infinitely more possibilities to represent the quantum state ψ(t) through functions. For example, we could perform a Fourier transformation only with respect to the y variable and represent ψ(t) through the wave function ψ(x, k_y, z, t), or we could perform an invertible transformation to completely different independent variables.
1939, Paul Dirac introduced a notation in quantum mechanics which
emphasizes the vector space and representation aspects of quantum
states in a very elegant and suggestive manner. This notation is
Dirac’s bra-ket notation, and it is ubiquitous in advanced modern
quantum mechanics. It is worthwhile to use bra-ket notation from
the start, and it is most easily explained in the framework of
linear algebra.
4.1 Notions from linear algebra
The mathematical structure of quantum mechanics
resembles linear algebra in many respects, and many notions from
linear algebra are very useful in the investigation of quantum
systems. Bra-ket notation makes the linear algebra aspects of
quantum mechanics particularly visible and easy to use. Therefore
we will first introduce a few notions of linear algebra in standard
notation, and then rewrite everything in bra-ket notation.
Tensor products

Suppose V is an N-dimensional real vector space with a Cartesian basis¹ ê_a, 1 ≤ a ≤ N, ê_a ⋅ ê_b = δ_ab. Furthermore, assume that u_a, v_a are Cartesian components of the two vectors u and v, u = ê_a u_a and v = ê_a v_a. Here we use summation convention: Whenever an index appears twice in a multiplicative term, it is automatically summed over its full range of values. We will continue to use this convention throughout the remainder of the book.

The tensor product u ⊗ vᵀ of the two vectors yields an N × N matrix with components M_ab = u_a v_b in the Cartesian basis:

M = u ⊗ vᵀ, M_ab = u_a v_b.  (4.1)
Tensor products appear naturally in basic linear algebra, e.g. in the following simple problem: Suppose u and v are two vectors in an N-dimensional vector space, and we would like to calculate the part v_∥ of the vector v that is parallel to u. The unit vector in the direction of u is û = u∕|u|, and we have

v_∥ = û (û ⋅ v) = û |v| cos θ,  (4.2)

where cos θ = û ⋅ v∕|v| is the cosine of the angle θ between u and v. Substituting the expression for û into (4.2) yields

v_∥ = (u ⊗ u∕u²) ⋅ v,  (4.3)

i.e. the tensor product u ⊗ u∕u² is the projector onto the direction of the vector u.

The matrix M = u ⊗ vᵀ is called a 2nd rank tensor due to its transformation properties under linear transformations of the vectors appearing in the product.
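To see the projector (4.3) at work numerically, here is a minimal NumPy sketch (the example vectors are hypothetical) that builds u ⊗ u∕u² as an outer product and checks it against the unit-vector formula (4.2):

```python
import numpy as np

# Hypothetical example vectors in N = 3 dimensions.
u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])

# Tensor product u (x) u, divided by u^2, gives the projector of (4.3).
P_u = np.outer(u, u) / np.dot(u, u)

# Parallel part of v, once via the projector and once via the unit vector.
v_par_projector = P_u @ v
u_hat = u / np.linalg.norm(u)
v_par_direct = u_hat * np.dot(u_hat, v)

print(np.allclose(v_par_projector, v_par_direct))  # True
print(np.allclose(P_u @ P_u, P_u))                 # projectors satisfy P^2 = P
```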
Suppose we perform a transformation of the Cartesian basis vectors ê_a to a new set of basis vectors ê′_j,

ê′_j = ê_a R_aj,  (4.4)

subject to the constraint that the new basis vectors also provide a Cartesian basis,

ê′_i ⋅ ê′_j = R_ai ê_a ⋅ ê_b R_bj = (Rᵀ)_ia R_aj = δ_ij.  (4.5)

Linear transformations which map Cartesian bases into Cartesian bases are denoted as rotations. We defined (Rᵀ)_ia = R_ai in equation (4.5). Equation (4.5) is in matrix notation

Rᵀ ⋅ R = 1,  (4.6)

i.e. Rᵀ = R^{−1}.
However, a change of basis in our vector space does nothing to the vector v, except that the vector will have different components with respect to the new basis vectors,

v = ê_a v_a = ê′_j v′_j.  (4.7)

Equations (4.7) and (4.5) and the uniqueness of the decomposition of a vector with respect to a set of basis vectors imply

v′_j = (R^{−1})_ja v_a = v_a R_aj.  (4.8)
This is the passive interpretation of
transformations: The transformation changes the reference frame,
but not the physical objects (here: vectors). Therefore the
expansion coefficients of the physical objects change inversely (or contravariantly) to the
transformation of the reference frame. We will often use the
passive interpretation for symmetry transformations of quantum
systems.
The transformation laws (4.4) and (4.8) define first rank tensors, because the transformation laws are linear (or first order) in the transformation matrices R or R^{−1}. The tensor product M = u ⊗ v then defines a second rank tensor, because the components and the basis transform quadratically (or in second order) with the transformation matrices R or R^{−1},

M′_ij = u′_i v′_j = (R^{−1})_ia (R^{−1})_jb u_a v_b = (R^{−1})_ia (R^{−1})_jb M_ab,  (4.9)

ê′_i ⊗ ê′_j = R_ai R_bj ê_a ⊗ ê_b.  (4.10)

The concept immediately generalizes to n-th order tensors.
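The passive transformation laws are easy to verify numerically. The following sketch, with a hypothetical rotation angle in N = 2 dimensions, checks that the components transform contravariantly as in (4.8) while the vector itself stays unchanged, and that second rank tensor components transform according to (4.9):

```python
import numpy as np

theta = 0.3  # hypothetical rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

e = np.eye(2)            # Cartesian basis vectors as columns
e_new = e @ R            # e'_j = e_a R_aj, equation (4.4)

v = np.array([1.0, 2.0])
v_new = R.T @ v          # v'_j = (R^{-1})_ja v_a = v_a R_aj, equation (4.8)

# The vector itself is unchanged: e_a v_a = e'_j v'_j.
print(np.allclose(e @ v, e_new @ v_new))                # True

# Second rank tensor components transform quadratically, equation (4.9).
u = np.array([3.0, -1.0])
M = np.outer(u, v)
M_new = R.T @ M @ R      # M'_ij = (R^{-1})_ia (R^{-1})_jb M_ab
print(np.allclose(M_new, np.outer(R.T @ u, R.T @ v)))   # True
```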
Writing the tensor product explicitly as u ⊗ vᵀ reminds us that the a-th row of u ⊗ vᵀ is just the row vector u_a vᵀ, while the b-th column is just the column vector u v_b. However, usually one simply writes u ⊗ v for the tensor product, just as one writes u ⋅ v instead of uᵀ ⋅ v for the scalar product.
Dual bases
We will now complicate things a little further by
generalizing to more general sets of basis vectors which may not be
orthonormal. Strictly speaking this is overkill for the purposes of
quantum mechanics, because the bases which we will use in the infinite-dimensional state spaces of quantum mechanics are still mutually
orthogonal, just like Euclidean basis vectors in finite-dimensional
vector spaces. However, sometimes it is useful to learn things in a
more general setting to acquire a proper understanding, and
besides, non-orthonormal basis vectors are useful in solid state
physics (as explained in an example below) and unavoidable in
curved spaces.
Let a_i, 1 ≤ i ≤ N, be another basis of the vector space V. Generically this basis will not be orthonormal: a_i ⋅ a_j ≠ δ_ij. The corresponding dual basis with basis vectors a^i is defined through the requirements

a^i ⋅ a_j = δ^i_j.  (4.11)

Apparently a basis is self-dual (a^i = a_i) if and only if it is orthonormal (i.e. Cartesian).
For the explicit construction of the dual basis, we observe that the scalar products of the N vectors a_i define a symmetric N × N matrix

g_ij = a_i ⋅ a_j.

This matrix is not degenerate, because otherwise it would have at least one vanishing eigenvalue, i.e. there would exist N numbers X^i (not all vanishing) such that g_ij X^j = 0. This would imply existence of a non-vanishing vector X = a_i X^i with vanishing length,

X² = X^i a_i ⋅ a_j X^j = X^i g_ij X^j = 0.

The matrix g_ij is therefore invertible, and we denote the inverse matrix with g^ij,

g^ij g_jk = δ^i_k.

The inverse matrix can be used to construct the dual basis vectors as

a^i = g^ij a_j.  (4.12)

The condition (4.11) for dual basis vectors is readily verified,

a^i ⋅ a_k = g^ij a_j ⋅ a_k = g^ij g_jk = δ^i_k.
For an example for the construction of a dual basis, consider Figure 4.1. The vectors a_1 and a_2 provide a basis. The angle between a_1 and a_2 is π∕4 radian; their lengths |a_1| and |a_2| are shown in Figure 4.1.
Fig. 4.1 The blue vectors are the basis vectors a_1, a_2. The red vectors are the dual basis vectors a^1, a^2
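A small numerical sketch of the construction (4.12); the lengths |a_1| = 1 and |a_2| = √2 below are assumed stand-ins for the values drawn in Figure 4.1, not taken from the figure itself:

```python
import numpy as np

# Hypothetical stand-ins for the Figure 4.1 basis: angle pi/4 between
# a1 and a2; the lengths below are assumed values for illustration.
a1 = np.array([1.0, 0.0])
a2 = np.sqrt(2.0) * np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
A = np.column_stack([a1, a2])        # basis vectors as columns

g = A.T @ A                          # Gram matrix g_ij = a_i . a_j
g_inv = np.linalg.inv(g)             # inverse matrix g^ij
A_dual = A @ g_inv                   # dual vectors a^i = g^ij a_j as columns

# Defining property (4.11): a^i . a_j = delta^i_j.
print(np.allclose(A_dual.T @ A, np.eye(2)))  # True
```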
Decomposition of the identity

Equation (4.11) implies that the decomposition of a vector v with respect to the basis a_i can be written as (note summation convention)

v = a_i v^i = a_i (a^i ⋅ v),  (4.13)

i.e. the projection of v onto the i-th basis vector (the component v^i in standard notation) is given through scalar multiplication with the dual basis vector a^i:

v^i = a^i ⋅ v.

The right hand side of equation (4.13) contains three vectors in each summand, and brackets have been employed to emphasize that the scalar product is between the two rightmost vectors in each term. Another way to make that clear is to write the combination of the two leftmost vectors in each term as a tensor product:

v = (a_i ⊗ a^i) ⋅ v.

If we first evaluate all the tensor products and sum over i, we have the same equation for every vector v, which makes it clear that the sum of tensor products adds up to the identity matrix,

a_i ⊗ a^i = 1.  (4.14)

This is the statement that every vector can be uniquely decomposed in terms of the basis a_i, and therefore this is a basic example of a completeness relation.
Note that we can just as well expand v with respect to the dual basis:

v = a^i v_i = a^i (a_i ⋅ v),

and therefore we also have the dual completeness relation

a^i ⊗ a_i = 1.  (4.15)

We could also have inferred this from transposition of equation (4.14).
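The completeness relations (4.14) and (4.15) can be checked numerically for any invertible (generically non-orthonormal) basis; a minimal sketch with a random basis:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
A = rng.normal(size=(N, N))          # columns a_i: a random, generically
                                     # non-orthonormal (but complete) basis
g_inv = np.linalg.inv(A.T @ A)       # inverse Gram matrix g^ij
A_dual = A @ g_inv                   # columns a^i

# Completeness relations (4.14) and (4.15) as sums of tensor products.
one = sum(np.outer(A[:, i], A_dual[:, i]) for i in range(N))
print(np.allclose(one, np.eye(N)))    # a_i (x) a^i = 1
print(np.allclose(one.T, np.eye(N)))  # a^i (x) a_i = 1, by transposition
```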
Linear transformations of vectors can be written in terms of matrices,

v′ = A ⋅ v.  (4.16)

If we insert the decompositions with respect to the basis a_i, v = a_j v^j and v′ = a_i v′^i, we find the equation in components v′^i = A^i_j v^j, with the matrix elements of the operator A,

A^i_j = a^i ⋅ A ⋅ a_j.  (4.17)

Using (4.14), we can also infer that

A = (a_i ⊗ a^i) ⋅ A ⋅ (a_j ⊗ a^j) = A^i_j a_i ⊗ a^j.
An application of dual bases in solid state physics: The Laue conditions for elastic scattering off a crystal

Non-orthonormal bases and the corresponding dual bases play an important role in solid state physics. Assume e.g. that a_i, 1 ≤ i ≤ 3, are the three fundamental translation vectors of a three-dimensional lattice L. They generate the lattice according to

ℓ = n^i a_i, n^i ∈ ℤ.

In three dimensions one can easily construct the dual basis vectors using cross products:

a^1 = (a_2 × a_3)∕V, a^2 = (a_3 × a_1)∕V, a^3 = (a_1 × a_2)∕V,  (4.18)

where

V = a_1 ⋅ (a_2 × a_3)

is the volume of the lattice cell spanned by the basis vectors a_i.

The vectors a^i, 1 ≤ i ≤ 3, generate the dual lattice or reciprocal lattice L̃ according to

ℓ̃ = n_i a^i, n_i ∈ ℤ,

and the volume of a cell in the dual lattice is

Ṽ = a^1 ⋅ (a^2 × a^3) = 1∕V.  (4.19)
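A short numerical check of equations (4.18) and (4.19), with hypothetical primitive translation vectors:

```python
import numpy as np

# Hypothetical primitive vectors of a (sheared) three-dimensional lattice.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.5, 1.0, 0.0])
a3 = np.array([0.0, 0.3, 1.2])

V = np.dot(a1, np.cross(a2, a3))      # cell volume V = a1 . (a2 x a3)
b1 = np.cross(a2, a3) / V             # dual basis via (4.18)
b2 = np.cross(a3, a1) / V
b3 = np.cross(a1, a2) / V

B = np.array([b1, b2, b3])
Amat = np.array([a1, a2, a3])
print(np.allclose(B @ Amat.T, np.eye(3)))                  # a^i . a_j = delta^i_j
print(np.isclose(np.dot(b1, np.cross(b2, b3)), 1.0 / V))   # dual cell volume (4.19)
```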
Max von Laue derived in 1912 the conditions for constructive interference in the coherent elastic scattering off a regular array of scattering centers. If the directions of the incident and scattered waves of wavelength λ are k̂ and k̂′, as shown in Figure 4.2, the condition for constructive interference from all scattering centers along a line generated by a_i is

a_i ⋅ (k̂′ − k̂) = n_i λ,  (4.20)

with integer numbers n_i.
Fig. 4.2 The Laue equation (4.20) is the condition for constructive interference between scattering centers along the line generated by the primitive basis vector a_i
If we want to have constructive interference from all scattering centers in the crystal this condition must hold for all three values of i. In terms of the wave vectors k = (2π∕λ)k̂ and k′ = (2π∕λ)k̂′, the Laue conditions read

a_i ⋅ (k′ − k) = 2π n_i, 1 ≤ i ≤ 3.  (4.21)

In case of surface scattering equation (4.21) must only hold for the two vectors a_1 and a_2 which generate the lattice structure of the scattering centers on the surface.
In 1913 W.L. Bragg observed that for scattering
from a bulk crystal equations (4.21) are equivalent to
constructive interference from specular reflection from sets of
equidistant parallel planes in the crystal, and that the Laue
conditions can be reduced to the Bragg equation in this case.
However, for scattering from one- or two-dimensional crystals² and for the
Ewald construction one still has to use the Laue conditions.
If we study scattering off a three-dimensional crystal, we know that the three dual basis vectors a^i span the whole three-dimensional space. Like any three-dimensional vector, the wavevector shift Δk = k′ − k can then be expanded in terms of the dual basis vectors according to

Δk = a^i (a_i ⋅ Δk),

and substitution of equation (4.21) yields

Δk = 2π n_i a^i,

i.e. the condition for constructive interference from coherent elastic scattering off a three-dimensional crystal is equivalent to the statement that Δk∕2π is a vector in the dual lattice L̃. Furthermore, energy conservation in the elastic scattering implies |k′| = |k|,

k′² = k².  (4.22)
Equations (4.21) and (4.22) together lead to the Ewald construction for the momenta of elastically scattered beams (see Figure 4.3): Draw the dual lattice and multiply all distances by a factor 2π. Then draw the vector k such that it ends on one (arbitrary) point of this rescaled dual lattice. Draw a sphere of radius k = |k| around the starting point of k. Any point in the rescaled dual lattice which lies on this sphere corresponds to the wave vector k′ of an elastically scattered beam; k′ points from the starting point of k (the center of the sphere) to the rescaled dual lattice point on the sphere.
Fig. 4.3 The Ewald construction of the wave vectors of elastically scattered beams. The points correspond to the reciprocal lattice stretched with the factor 2π
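The Ewald construction can also be carried out numerically. The following sketch uses a hypothetical two-dimensional square lattice and a tolerance-based search for rescaled dual lattice points on the sphere:

```python
import numpy as np

# Hypothetical 2D square lattice with lattice constant a; its rescaled dual
# lattice is (2*pi/a) times integer pairs.
a = 1.0
k = np.array([2 * np.pi / a, 0.0])   # incident wave vector, drawn so that its
                                     # tip sits at the dual lattice origin
center = -k                          # starting point of k = center of the sphere
radius = np.linalg.norm(k)

hits = []
for n1 in range(-4, 5):
    for n2 in range(-4, 5):
        G = 2 * np.pi / a * np.array([n1, n2])    # rescaled dual lattice point
        if np.isclose(np.linalg.norm(G - center), radius):
            hits.append(G - center)               # k' points from the center to G

for k_prime in hits:
    dk = k_prime - k
    # Each wavevector shift is a rescaled dual lattice vector, equation (4.21).
    print(dk / (2 * np.pi / a))      # integer pairs
```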
We have already noticed that for scattering off a planar array of scattering centers, equation (4.21) must only hold for the two vectors a_1 and a_2 which generate the lattice structure of the scattering centers on the surface. And if we have only a linear array of scattering centers, equation (4.21) must only hold for the vector a_1 which generates the linear array. In those two cases the wavevector shift Δk can be decomposed into components orthogonal and parallel to the scattering surface or line, and the Laue conditions then imply that the parallel component is a vector in the rescaled dual lattice,

Δk_∥ ∈ 2πL̃.
The rescaled dual lattice is also important in
the umklapp processes in
phonon-phonon or electron-phonon scattering in crystals. Lattices
can only support oscillations with wavelengths larger than certain
minimal wavelengths, which are determined by the crystal structure.
As a result, momentum conservation in phonon-phonon or electron-phonon scattering holds only up to addition of vectors from the rescaled dual lattice 2πL̃; see textbooks on solid state physics.
Bra-ket notation in linear algebra
The translation of the previous notions in linear algebra into bra-ket notation starts with the notion of a ket vector for a vector, v → | v〉, and a bra vector for a transposed vector³, vᵀ → 〈v | . The tensor product is

u ⊗ vᵀ → | u〉〈v | ,

and the scalar product is

uᵀ ⋅ v → 〈u | v〉.

The appearance of the brackets 〈u | v〉 on the right hand side motivated the designation “bra vector” for a transposed vector and “ket vector” for a vector.

The decomposition of a vector in the basis | a_i〉, using the dual basis | a^i〉, is

| v〉 = | a_i〉〈a^i | v〉,

and corresponds to the decomposition of unity

| a_i〉〈a^i | = 1.

A linear operator maps vectors | v〉 into vectors | v′〉, | v′〉 = A | v〉. This reads in components

〈a^i | v′〉 = 〈a^i | A | a_j〉〈a^j | v〉,

where

A^i_j = 〈a^i | A | a_j〉

are the matrix elements of the linear operator A. There is no real advantage in using bra-ket notation in the linear algebra of finite-dimensional vector spaces, but it turns out to be very useful in quantum mechanics.
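In a finite-dimensional complex vector space the bra-ket operations are just matrix operations; a minimal NumPy sketch with hypothetical vectors:

```python
import numpy as np

# Kets as column vectors, bras as conjugate-transposed rows (footnote 3).
ket_u = np.array([[1.0 + 1.0j], [0.5j]])      # |u>
ket_v = np.array([[2.0], [1.0 - 1.0j]])       # |v>
bra_u = ket_u.conj().T                        # <u|

print(bra_u @ ket_v)                 # scalar product <u|v> (a 1x1 matrix)
print(ket_u @ ket_v.conj().T)        # tensor product |u><v|

# Matrix element <u|A|v> of a linear operator A.
A = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
print(bra_u @ A @ ket_v)
```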
4.2 Bra-ket notation in quantum mechanics
We can represent a state as a probability amplitude ψ(x, t) in x-space or as a probability amplitude ψ(k, t) in k-space, and we can switch between both representations through Fourier transformation. The state itself is apparently independent of which representation we choose, just like a vector is independent of the particular basis in which we expand the vector. In Chapter 7 we will derive a wave function ψ₁ₛ(x) for the relative motion of the proton and the electron in the lowest energy state of a hydrogen atom. However, it does not matter whether we use the wave function ψ₁ₛ(x) in x-space or the Fourier transformed wave function ψ₁ₛ(k) in k-space to calculate observables for the ground state of the hydrogen atom. All information on the state can be retrieved from each of the two wave functions. We can also contemplate more exotic possibilities like writing the ψ₁ₛ state as a linear combination of the oscillator eigenstates that we will encounter in Chapter 6. There are infinitely many possibilities to write down wave functions for one and the same quantum state, and all possibilities are equivalent. Therefore wave functions are only particular representations of a state, just like the components 〈a^i | v〉 of a vector | v〉 in an N-dimensional vector space provide only a representation of the vector with respect to a particular basis | a_i〉, 1 ≤ i ≤ N.
This motivates the following adaptation of bra-ket notation: The (generically time-dependent) state of a quantum system is | ψ(t)〉, and the x-representation is just the specification of | ψ(t)〉 in terms of its projection on a particular basis,

ψ(x, t) = 〈x | ψ(t)〉,  (4.23)

where the “basis” is given by the non-enumerable set of “x-eigenkets”:

x | x〉 = x | x〉.

Here x is the operator, or rather a vector of operators x = (x, y, z), and x is the corresponding vector of eigenvalues.
In advanced quantum mechanics, the operators for location or momentum of a particle and their eigenvalues are sometimes not explicitly distinguished in notation, but for the experienced reader it is always clear from the context whether e.g. x refers to the operator or the eigenvalue. We will denote the operators x and p for location and momentum and their Cartesian components with upright notation, x = (x, y, z), p = (p_x, p_y, p_z), while their eigenvalue vectors and Cartesian eigenvalues are written in cursive notation, x = (x, y, z) and p = (p_x, p_y, p_z). However, this becomes very clumsy for non-Cartesian components of the operators x and p, but once we are at the stage where we have to use e.g. both location operators and their eigenvalues in polar coordinates, you will have so much practice with bra-ket notation that you will infer from the context whether e.g. r refers to the operator r = |x| or to the eigenvalue r = |x|. Some physical quantities have different symbols for the related operator and its eigenvalues, e.g. H for the energy operator and E for its eigenvalues,

H | ψ_E〉 = E | ψ_E〉,

so that in these cases the use of standard cursive mathematical notation for the operators and the eigenvalues cannot cause confusion.
Expectation values of observables are often written in terms of the operator or the observable, e.g. 〈x〉, 〈p〉, 〈E〉 etc., but explicit matrix elements of operators should always explicitly use the operator, e.g. 〈ψ | x | ψ〉, 〈ψ | H | ψ〉.
The “momentum-eigenkets” provide another basis of quantum states of a particle,

p | k〉 = ℏk | k〉,  (4.24)

and the change of basis looks like the corresponding equation in linear algebra: If we have two sets of basis vectors | a_i〉, | b_a〉, then the components of a vector | v〉 with respect to the new basis | b_a〉 are related to the | a_i〉-components via (just insert | v〉 = | a_i〉〈a^i | v〉)

〈b^a | v〉 = 〈b^a | a_i〉〈a^i | v〉,

i.e. the transformation matrix T^a_i = 〈b^a | a_i〉 is just given by the components of the old basis vectors in the new basis.
The corresponding equation in quantum mechanics for the x and k bases is

〈x | ψ(t)〉 = ∫ d³k 〈x | k〉〈k | ψ(t)〉,

which tells us that the expansion coefficients of the vectors | k〉 with respect to the x-basis are just

〈x | k〉 = (2π)^{-3/2} exp(i k ⋅ x).  (4.25)

The Fourier decomposition of the δ-function implies that these bases are self-dual, e.g.

〈x | x′〉 = ∫ d³k 〈x | k〉〈k | x′〉 = (2π)^{-3} ∫ d³k exp[i k ⋅ (x − x′)] = δ(x − x′).

The scalar product of two states can be written in terms of x-components or k-components,

〈φ | ψ〉 = ∫ d³x 〈φ | x〉〈x | ψ〉 = ∫ d³k 〈φ | k〉〈k | ψ〉.
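A discrete analog of the last equation can be checked with the unitary fast Fourier transform, which here plays the role of the change of basis between x-components and k-components (norm="ortho" is the discrete stand-in assumption); a sketch with random coefficient vectors:

```python
import numpy as np

# Discrete stand-in for the x- and k-representations: the unitary FFT
# (norm="ortho") plays the role of the change of basis <k|x>.
rng = np.random.default_rng(0)
n = 256
psi_x = rng.normal(size=n) + 1j * rng.normal(size=n)
phi_x = rng.normal(size=n) + 1j * rng.normal(size=n)

psi_k = np.fft.fft(psi_x, norm="ortho")
phi_k = np.fft.fft(phi_x, norm="ortho")

# <phi|psi> computed from x-components and from k-components agree.
print(np.allclose(np.vdot(phi_x, psi_x), np.vdot(phi_k, psi_k)))  # True
```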
To get some practice with bra-ket notation let us derive the x-representation of the momentum operator. We know equation (4.24) and we want to find out what the x-components 〈x | p | ψ(t)〉 of the state p | ψ(t)〉 are. We can accomplish this by inserting the decomposition

| ψ(t)〉 = ∫ d³k | k〉〈k | ψ(t)〉

into p | ψ(t)〉,

〈x | p | ψ(t)〉 = ∫ d³k 〈x | p | k〉〈k | ψ(t)〉 = ∫ d³k ℏk 〈x | k〉〈k | ψ(t)〉.  (4.26)

However, equation (4.25) implies

ℏk 〈x | k〉 = −iℏ∇〈x | k〉,

and substitution into equation (4.26) yields

〈x | p | ψ(t)〉 = −iℏ∇ ∫ d³k 〈x | k〉〈k | ψ(t)〉 = −iℏ∇〈x | ψ(t)〉.  (4.27)

This equation yields in particular the matrix elements of the momentum operator in the x-basis,

〈x | p | x′〉 = −iℏ∇δ(x − x′).

Equation (4.27) means that the x-expansion coefficients 〈x | p | ψ(t)〉 of the new state p | ψ(t)〉 can be calculated from the expansion coefficients 〈x | ψ(t)〉 of the old state | ψ(t)〉 through application of −iℏ∇. In sloppy terminology this is the statement “the x-representation of the momentum operator is −iℏ∇”, but the proper statement is equation (4.27). The quantum operator p acts on the quantum state | ψ(t)〉, the differential operator −iℏ∇ acts on the expansion coefficients 〈x | ψ(t)〉 of the state | ψ(t)〉.
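The content of equation (4.27) can be illustrated numerically: multiplying the k-expansion coefficients by k (with ℏ = 1 in this sketch) and transforming back to the x-basis reproduces −i∂∕∂x acting on the x-expansion coefficients. A sketch on a periodic grid with a hypothetical wave packet:

```python
import numpy as np

# Periodic grid; hbar = 1 for this sketch.
n, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

psi_x = np.exp(-x**2) * np.exp(2j * x)   # smooth wave packet <x|psi>
psi_k = np.fft.fft(psi_x)                # k-representation (up to normalization)

# p|psi> in the k-basis is multiplication by (hbar) k; transform back ...
p_psi_x = np.fft.ifft(k * psi_k)

# ... and compare with -i d/dx acting on the x-representation, eq. (4.27).
dpsi_dx = np.gradient(psi_x, x)
print(np.max(np.abs(p_psi_x - (-1j) * dpsi_dx)))  # small finite-difference error
```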
The corresponding statement in linear algebra is that a linear transformation A transforms a vector | v〉 according to

| v′〉 = A | v〉,

and the transformation in a particular basis reads

〈a^i | v′〉 = 〈a^i | A | a_j〉〈a^j | v〉.

The operator A acts on the vector, and its representation 〈a^i | A | a_j〉 in a particular basis acts on the components of the vector in that basis.

Bra-ket notation requires a proper understanding of the distinction between quantum operators (like p) and operators that act on expansion coefficients of quantum states in a particular basis (like −iℏ∇). Bra-ket notation appears in virtually every equation of advanced quantum mechanics and quantum field theory. It provides in many respects the most useful notation for recognizing the elegance and power of quantum theory.
Here we used the very convenient notation ∇ ≡ ∂∕∂x for the del operator in x space, and ∇_k ≡ ∂∕∂k for the del operator in k space. One often encounters several copies of several vector spaces in an equation, and this notation is extremely useful to distinguish the different del operators in the different vector spaces.
Functions of operators are operators again. An important example are the operators V(x) for the potential energy of a particle. The eigenkets of x are also eigenkets of V(x),

V(x) | x〉 = V(x) | x〉,  (4.28)

and the matrix elements in x representation are

〈x | V(x) | x′〉 = V(x) δ(x − x′).  (4.29)

The single particle Schrödinger equation (1.14) is in representation-free notation

iℏ (d∕dt) | ψ(t)〉 = H | ψ(t)〉 = [(p²∕2m) + V(x)] | ψ(t)〉.  (4.30)

We recover the x representation already used in (1.14) through projection on 〈x | and substitution of the decomposition 1 = ∫ d³x′ | x′〉〈x′ | ,

iℏ (∂∕∂t) 〈x | ψ(t)〉 = ∫ d³x′ 〈x | H | x′〉〈x′ | ψ(t)〉 = [−(ℏ²∕2m)Δ + V(x)] 〈x | ψ(t)〉.
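On a discrete grid the matrix elements 〈x | H | x′〉 become an ordinary matrix: the Laplacian is tridiagonal and the potential is diagonal. A finite-difference sketch with a hypothetical harmonic potential and ℏ = m = 1 (the lowest eigenvalues approximate the oscillator spectrum discussed in Chapter 6):

```python
import numpy as np

# Finite-difference matrix elements <x|H|x'> on a grid (hbar = m = 1),
# with a hypothetical harmonic potential V(x) = x^2 / 2.
n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

lap = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

# Lowest eigenvalues approximate the oscillator spectrum 0.5, 1.5, 2.5, ...
E = np.linalg.eigvalsh(H)
print(E[:3])
```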
The definition of adjoint operators in representation-free bra-ket notation is

〈φ | A⁺ | ψ〉 = 〈ψ | A | φ〉*.  (4.31)

This implies in particular that the “bra vector” (A | ψ〉)⁺ adjoint to the “ket vector” A | ψ〉 satisfies

(A | ψ〉)⁺ = 〈ψ | A⁺.  (4.32)

This is an intuitive equation which can be motivated e.g. from matrix algebra of complex finite-dimensional vector spaces. However, it deserves a formal derivation. We have for any third state | ϕ〉 the relation

(A | ψ〉)⁺ | ϕ〉 = 〈ϕ | A | ψ〉* = 〈ψ | A⁺ | ϕ〉,

where we used the defining property of adjoint operators in the last equation. Since this equation holds for every state | ϕ〉, the operator equation (4.32) follows: Projection⁴ onto the state A | ψ〉 is equivalent to action of the operator A⁺ followed by projection onto the state | ψ〉.
Self-adjoint operators (A⁺ = A, e.g. x, p and H) have real expectation values and in particular real eigenvalues:

〈ψ | A | ψ〉* = 〈ψ | A⁺ | ψ〉 = 〈ψ | A | ψ〉.

Observables are therefore described by self-adjoint operators in quantum mechanics.

Unitary operators (U⁺ = U^{−1}) do not change the norm of a state: Substitution of | ψ〉 = U | φ〉 into 〈ψ | ψ〉 yields

〈ψ | ψ〉 = 〈φ | U⁺U | φ〉 = 〈φ | φ〉.

Time evolution and symmetry transformations of quantum systems are described by unitary operators.
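All three statements, the defining property (4.31), reality of expectation values of self-adjoint operators, and norm preservation by unitary operators, reduce to matrix identities in finite dimensions; a NumPy sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A_dag = A.conj().T                       # adjoint = conjugate transpose

psi = rng.normal(size=3) + 1j * rng.normal(size=3)
phi = rng.normal(size=3) + 1j * rng.normal(size=3)

# Defining property (4.31): <phi|A^+|psi> = <psi|A|phi>*.
print(np.isclose(np.vdot(phi, A_dag @ psi), np.conj(np.vdot(psi, A @ phi))))

# A self-adjoint operator has real expectation values ...
H = A + A_dag
print(np.isclose(np.vdot(psi, H @ psi).imag, 0.0))

# ... and a unitary operator preserves the norm.
U = np.linalg.qr(A)[0]
print(np.isclose(np.linalg.norm(U @ psi), np.linalg.norm(psi)))
```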
4.3 The adjoint Schrödinger equation and the virial theorem
We consider a matrix element

〈ψ(t) | A(t′) | ϕ(t′)〉.  (4.33)

We assume that | ψ(t)〉 satisfies the Schrödinger equation

iℏ (d∕dt) | ψ(t)〉 = H | ψ(t)〉,

while A(t′) and | ϕ(t′)〉 are an arbitrary operator and state, respectively. We have artificially taken the state at another time t′, because we are particularly interested in the time-dependence of the matrix element 〈ψ(t) | A(t′) | ϕ(t′)〉 which arises from the time-dependence of | ψ(t)〉.

Equation (4.33), the Schrödinger equation, and hermiticity of H imply

(d∕dt) 〈ψ(t) | A(t′) | ϕ(t′)〉 = (i∕ℏ) 〈ψ(t) | H A(t′) | ϕ(t′)〉.

Since this holds for every operator A(t′) and state | ϕ(t′)〉, we have an operator equation

(d∕dt) 〈ψ(t) | = (i∕ℏ) 〈ψ(t) | H.  (4.34)

With the brackets on the left hand side, this equation also holds for projection on time-dependent states of the form A(t) | ϕ(t)〉: Projection of any state A(t) | ϕ(t)〉 on (d〈ψ(t) | ∕dt) is equivalent to action of H on A(t) | ϕ(t)〉 followed by projection of H A(t) | ϕ(t)〉 on (i∕ℏ)〈ψ(t) | . In particular, if | ϕ(t)〉 also satisfies the Schrödinger equation, we have

(d∕dt) 〈ψ(t) | A(t) | ϕ(t)〉 = 〈ψ(t) | (∂A(t)∕∂t) | ϕ(t)〉 + (i∕ℏ) 〈ψ(t) | [H, A(t)] | ϕ(t)〉.  (4.35)
The operator equation (4.34) is the adjoint Schrödinger equation. In general it is an operator equation, but it reduces to the complex conjugate of the Schrödinger equation if it is projected onto x eigenkets,

−iℏ (∂∕∂t) 〈ψ(t) | x〉 = 〈ψ(t) | H | x〉 = [−(ℏ²∕2m)Δ + V(x)] ψ*(x, t).

The result (4.35) for the time-dependence of matrix elements appears in many different settings in quantum mechanics, but one application that we will address now concerns the particular choice of the virial operator x ⋅ p for the operator A. In classical mechanics, Newton's equation dp∕dt = −∇V(x) and dx∕dt = p∕m imply that the time derivative of the virial is

(d∕dt)(x ⋅ p) = (p²∕m) − x ⋅ ∇V(x) = 2K − x ⋅ ∇V(x).

Application of the time averaging operation lim_{T→∞} (1∕T) ∫₀^T dt … on both sides of this equation (for bounded motion, the time average of the total derivative on the left hand side vanishes) then yields the classical virial theorem for the time average 〈K〉_T of the kinetic energy K = p²∕2m,

2〈K〉_T = 〈x ⋅ ∇V(x)〉_T.  (4.36)
The equation (4.35) applied to A = x ⋅ p implies that the same relation holds for all matrix elements of the operators 2K = p²∕m and x ⋅ ∇V(x). We have

(i∕ℏ)[H, x ⋅ p] = (i∕ℏ)[(p²∕2m) + V(x), x ⋅ p] = (p²∕m) − x ⋅ ∇V(x),

and therefore

(d∕dt) 〈ψ(t) | x ⋅ p | ϕ(t)〉 = 2〈ψ(t) | K | ϕ(t)〉 − 〈ψ(t) | x ⋅ ∇V(x) | ϕ(t)〉.  (4.37)

Time averaging then yields a quantum analog of the classical virial theorem,

2〈K〉_T = 〈x ⋅ ∇V(x)〉_T.  (4.38)
However, if | ψ(t)〉 and | ϕ(t)〉 are energy eigenstates, H | ψ〉 = E_ψ | ψ〉 and H | ϕ〉 = E_ϕ | ϕ〉, then equation (4.37) yields

(i∕ℏ)(E_ψ − E_ϕ) 〈ψ | x ⋅ p | ϕ〉 = 2〈ψ | K | ϕ〉 − 〈ψ | x ⋅ ∇V(x) | ϕ〉.  (4.39)

In this case, the classical time averaging cannot yield anything interesting, but if we assume that our energy eigenstates are degenerate normalizable states,

E_ψ = E_ϕ,

then we find the quantum virial theorem for matrix elements of degenerate normalizable energy eigenstates⁵,

2〈ψ | K | ϕ〉 = 〈ψ | x ⋅ ∇V(x) | ϕ〉.  (4.40)

Furthermore, if V(x) is homogeneous of order ν,

x ⋅ ∇V(x) = ν V(x),

then

2〈ψ | K | ϕ〉 = ν 〈ψ | V(x) | ϕ〉.  (4.41)
The relations (4.40) and (4.41) hold in particular for the expectation values of normalizable energy eigenstates. Special cases for the appearance of physically relevant homogeneous potential functions include harmonic oscillators, ν = 2, and the three-dimensional Coulomb potential, ν = −1. We will discuss harmonic oscillators and the Coulomb problem in Chapters 6 and 7, respectively. Equation (4.41) also has profound implications for hypothetical physics in higher dimensions, see Problem 20.5.
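Equation (4.41) with ν = 2 can be tested numerically for the harmonic oscillator ground state, reusing a finite-difference Hamiltonian with ℏ = m = 1:

```python
import numpy as np

# Check 2<K> = nu <V> (equation (4.41)) for the harmonic oscillator, nu = 2,
# using a finite-difference Hamiltonian with hbar = m = 1.
n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
K = -0.5 * lap
V = np.diag(0.5 * x**2)

E, psi = np.linalg.eigh(K + V)
psi0 = psi[:, 0]                    # normalized ground state on the grid

K_exp = psi0 @ K @ psi0
V_exp = psi0 @ V @ psi0
print(2 * K_exp, 2 * V_exp)         # both close to E[0] = 0.5
```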
4.4 Problems
4.1. We consider again the rotation (4.4) of a Cartesian basis, but this time we insist on keeping the expansion coefficients v_a of the vector v. Rotation of the basis with fixed expansion coefficients {v_1, …, v_N} will therefore generate a new vector

v′ = ê′_a v_a.

This is the active interpretation of transformations, because the change of the reference frame is accompanied by a change of the physical objects.

In the active interpretation, transformations of the expansion coefficients are defined by the condition that the transformed expansion coefficients describe the expansion of the new vector v′ with respect to the old basis ê_a,

v′ = ê′_a v_a = ê_a v′_a.  (4.42)

How are the new expansion coefficients v′_a related to the old expansion coefficients v_a for an active transformation?
In the active interpretation, rotations are special in that they preserve the lengths of vectors and the angles between vectors.
Equation (4.42) implies that we can describe an active
transformation either through a transformation of the basis with
fixed expansion coefficients, or equivalently through a
transformation of the expansion coefficients with a fixed basis.
This is very different from the passive transformation, where a
transformation of the basis is always accompanied by a compensating
contragredient transformation of the expansion coefficients.
4.2. Two basis vectors a_1 and a_2 have length one and the angle between the vectors is π∕3. Construct the dual basis.
4.3. Nickel atoms form a regular triangular array with an interatomic distance of 2.49 Å on the surface of a Nickel crystal. Particles with momentum p = ℏk are incident on the crystal. Which conditions for coherent elastic scattering off the Nickel surface do we get for orthogonal incidence of the particle beam? Which conditions for coherent elastic scattering do we get for grazing incidence in the plane of the surface?
4.4. Suppose V(x) is an analytic function of x. Write down the k-representation of the time-dependent and time-independent Schrödinger equations. Why is the x-representation usually preferred for solving the Schrödinger equation?
4.5. Sometimes we seem to violate the
symmetric conventions (2.4, 2.5) in the Fourier
transformations of the Green’s functions that we will encounter
later on. We will see that the asymmetric split of powers of
2π that we will encounter
in these cases is actually a consequence of the symmetric
conventions (2.4, 2.5) for the Fourier transformation of
wave functions.
Suppose that the operator G has translation invariant matrix elements,

〈x | G | x′〉 = G(x − x′).

Show that the Fourier transformed matrix elements satisfy

〈k | G | k′〉 = G(k) δ(k − k′),

with

G(k) = ∫ d³x G(x) exp(−i k ⋅ x).  (4.43)
4.6. Suppose that the Hamilton operator depends on a real parameter λ, H = H(λ). This parameter dependence will influence the energy eigenvalues and eigenstates of the Hamiltonian,

H(λ) | ψ_n(λ)〉 = E_n(λ) | ψ_n(λ)〉.

Use 〈ψ_m(λ) | ψ_n(λ)〉 = δ_mn (this could also be a δ function normalization), to show that⁶

〈ψ_m(λ) | (∂H(λ)∕∂λ) | ψ_n(λ)〉 = δ_mn (∂E_n(λ)∕∂λ) + [E_n(λ) − E_m(λ)] 〈ψ_m(λ) | ∂ψ_n(λ)∕∂λ〉.  (4.44)

For m = n discrete this is known as the Hellmann-Feynman theorem⁷ [15]. The theorem is important for the calculation of forces in molecules.
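A numerical check of the m = n case of (4.44) for a hypothetical Hermitian matrix family H(λ) = H₀ + λH₁, comparing a finite-difference derivative of the ground state energy with the Hellmann-Feynman matrix element:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.normal(size=(5, 5))
H0 = B + B.T                        # hypothetical Hermitian H(lambda) = H0 + lambda*H1
C = rng.normal(size=(5, 5))
H1 = C + C.T                        # dH/dlambda

def ground_energy(lam):
    return np.linalg.eigvalsh(H0 + lam * H1)[0]

lam, eps = 0.7, 1e-6
dE_numeric = (ground_energy(lam + eps) - ground_energy(lam - eps)) / (2 * eps)

E, psi = np.linalg.eigh(H0 + lam * H1)
dE_hellmann_feynman = psi[:, 0] @ H1 @ psi[:, 0]   # <psi_0|dH/dlambda|psi_0>
print(dE_numeric, dE_hellmann_feynman)             # agree to finite-difference accuracy
```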
4.7. We consider particles of mass m which are bound in a potential V(x). The potential does not depend on m. How do the energy levels of the bound states change if we increase the mass of the particles?

The eigenstates for different energies will usually have different momentum uncertainties Δp. Do the energy levels with large or small Δp change more rapidly with mass?
4.8. Show that the free propagator (3.32, 3.33) is the x representation of the one-dimensional free time evolution operator,

〈x | exp(−ip²t∕2mℏ) | x′〉 = √(m∕2πiℏt) exp[im(x − x′)²∕2ℏt].

Here a small negative imaginary part was added to the time variable to ensure convergence of a Gaussian integral.

Show also that the free time-evolution operator in three dimensions satisfies

〈x | exp(−ip²t∕2mℏ) | x′〉 = (m∕2πiℏt)^{3∕2} exp[im(x − x′)²∕2ℏt].  (4.45)

For later reference we also note that this implies the formula

(2π)^{-3} ∫ d³k exp[i k ⋅ (x − x′) − iℏk²t∕2m] = (m∕2πiℏt)^{3∕2} exp[im(x − x′)²∕2ℏt].  (4.46)
4.9. Apply equation (4.39) in the case V = 0 to plane wave states. Show that in this case the left hand side does not vanish in the limit of equal energies, E_ψ → E_ϕ. Indeed, the equation remains correct in this case only because the left hand side does not vanish.
4.10. Use the calculation of x or p expectation values in the wave vector representation and in the momentum representation of the state | ψ〉 to show that momentum and wave vector eigenstates in d spatial dimensions are related according to | p〉 = ℏ^{−d∕2} | k〉. Does this comply with proper δ function normalization of the two bases?
Bibliography
15.
H. Hellmann, Einführung in die Quantenchemie
(Deuticke, Leipzig, 1937)
Footnotes
1
We write scalar products of vectors initially as uᵀ ⋅ v to be consistent with proper tensor product notation used in (4.1), but we will switch soon to the shorter notations u ⋅ v, u ⊗ v for scalar products and tensor products.
2
For scattering off two-dimensional crystals the
Laue conditions can be recast in simpler forms in special cases.
E.g. for orthogonal incidence a plane grating equation can be
derived from the Laue conditions, or if the momentum transfer ℏΔk is in the plane of the
crystal a two-dimensional Bragg equation can be derived.
3
In the case of a complex finite-dimensional vector space, the “bra vector” would actually be the transposed complex conjugate vector, v⁺ = (v*)ᵀ → 〈v | .
4
Strictly speaking, we can think of multiplication of a state | ϕ〉 with 〈ψ | as projecting onto a component parallel to | ψ〉 only if | ψ〉 is normalized. It is convenient, though, to denote multiplication with 〈ψ | as projection, although in the general case this will only be proportional to the coefficient of the | ψ〉 component in | ϕ〉.