Solution
Manual
for
Manifolds, Tensors, and Forms
Paul Renteln
Department of Physics
California State University
San Bernardino, CA 92407
and
Department of Mathematics
California Institute of Technology
Pasadena, CA 91125
prenteln@csusb.edu
Contents
1 Linear algebra page 1
2 Multilinear algebra 20
3 Differentiation on manifolds 33
4 Homotopy and de Rham cohomology 65
5 Elementary homology theory 77
6 Integration on manifolds 84
7 Vector bundles 90
8 Geometric manifolds 97
9 The degree of a smooth map 151
Appendix D Riemann normal coordinates 154
Appendix F Frobenius’ theorem 156
Appendix G The topology of electrical circuits 157
Appendix H Intrinsic and extrinsic curvature 158
1
Linear algebra
1.1 We have
0 = c1(1, 1) + c2(2, 1) = (c1 + 2c2, c1 + c2)
⇒ c2 = −c1 ⇒ c1 − 2c1 = 0 ⇒ c1 = 0 ⇒ c2 = 0,
so (1, 1) and (2, 1) are linearly independent. On the other hand,
0 = c1(1, 1) + c2(2, 2) = (c1 + 2c2, c1 + 2c2)
can be solved by choosing c1 = 2 and c2 = −1, so (1, 1) and (2, 2) are
linearly dependent (because not both of c1 and c2 are zero).
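As a quick numerical sanity check (not part of the original solution; NumPy is assumed), linear independence of a set of vectors is equivalent to the matrix having them as columns attaining full column rank:

```python
import numpy as np

# Columns are the vectors under test; full column rank <=> linear independence.
A = np.array([[1, 2],
              [1, 1]])    # columns (1,1) and (2,1)
B = np.array([[1, 2],
              [1, 2]])    # columns (1,1) and (2,2)

independent = np.linalg.matrix_rank(A) == 2   # (1,1) and (2,1) independent
dependent = np.linalg.matrix_rank(B) < 2      # (1,1) and (2,2) dependent

# The explicit relation from the text: 2*(1,1) + (-1)*(2,2) = 0.
relation = 2 * B[:, 0] - B[:, 1]
```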
1.2 Subtracting gives
$$0 = \sum_i v_i e_i - \sum_i v_i' e_i = \sum_i (v_i - v_i')\, e_i.$$
But the $e_i$'s are a basis for V, so they are linearly independent, which implies
$v_i - v_i' = 0$.
1.3 Let V = U ⊕ W, and let $E := \{e_i\}_{i=1}^n$ be a basis for U and $F := \{f_j\}_{j=1}^m$ a
basis for W. Define a collection of vectors $G := \{g_k\}_{k=1}^{n+m}$ where $g_i = e_i$ for
1 ≤ i ≤ n and $g_{n+i} = f_i$ for 1 ≤ i ≤ m. Then the claim follows if we can
show G is a basis for V. To that end, assume
$$0 = \sum_{i=1}^{n+m} c_i g_i = \sum_{i=1}^{n} c_i e_i + \sum_{i=1}^{m} c_{n+i} f_i.$$
The first sum in the rightmost expression lives in U and the second sum lives
in W, so by the uniqueness property of direct sums, each sum must vanish
by itself. But then by the linear independence of E and F, all the constants
ci must vanish. Therefore G is linearly independent. Moreover, every vector
v ∈ V is of the form v = u + w for some u ∈ U and w ∈ W, each of which
can be written as a linear combination of the gi ’s. Hence the gi ’s form a basis
for V.
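The construction can be illustrated numerically (an illustrative choice of subspaces of R³, NumPy assumed; not part of the text): concatenating bases of two complementary subspaces yields a basis of the whole space.

```python
import numpy as np

# U = span of the first two columns, W = span of the last; U (+) W = R^3.
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])            # columns: basis of U
F = np.array([[1.0],
              [1.0],
              [1.0]])                 # column: basis of W

G = np.hstack([E, F])                 # the combined collection {g_k}
rank_G = np.linalg.matrix_rank(G)     # equals 3 = dim V, so G is a basis
```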
1.4 Let S be any linearly independent set of vectors with |S| < n. The claim is
that we can always find a vector v ∈ V so that S ∪{v} is linearly independent.
If not, consider the sum
$$cv + \sum_{i=1}^{|S|} c_i s_i = 0,$$
where $s_i \in S$ and not all of the coefficients are zero. We cannot have c = 0,
because S is linearly independent. Therefore v lies in the span of S, which
says that dim V = |S| < n, a contradiction.
1.5 Let S, T : V → W be two linear maps, and let {ei } be a basis for V.
Assume $Se_i = Te_i$ for all i, and that $v = \sum_i a_i e_i$. Then
$$Sv = \sum_i a_i\, Se_i = \sum_i a_i\, Te_i = Tv.$$
1.6 Let v1, v2 ∈ ker T. Then T (av1 + bv2) = aT v1 + bT v2 = 0, so ker T is
closed under linear combinations. Moreover ker T contains the zero vector of
V. All the other vector space properties are easily seen to follow, so ker T is a
subspace of V. Similarly, let w1,w2 ∈ im T and consider aw1 + bw2. There
exist v1, v2 ∈ V such that T v1 = w1 and T v2 = w2, so T (av1 + bv2) =
aT v1 + bT v2 = aw1 + bw2, which shows that im T is closed under linear
combinations. Moreover, im T contains the zero vector, so im T is a subspace
of W.
1.7 For any two vectors v1 and v2 we have
T v1 = T v2 ⇒ T (v1 − v2) = 0 ⇒ v1 − v2 = 0 ⇒ v1 = v2.
Assume the kernel of T consists only of the zero vector. Then for any two
vectors v1 and v2, T (v1 − v2) = 0 implies v1 − v2 = 0, which is equivalent
to saying that T v1 = T v2 implies v1 = v2, namely that T is injective. The
converse follows similarly.
1.8 Let V and W be two vector spaces of the same dimension, and choose a basis
{ei } for V and a basis { fi } for W. Let T : V → W be the map that sends ei to
fi, extended by linearity. Then the claim is that T is an isomorphism. Let
$v = \sum_i a_i e_i$ be a vector in V. If v ∈ ker T, then $0 = Tv = \sum_i a_i\, Te_i = \sum_i a_i f_i$.
By linear independence, all the $a_i$'s vanish, which means that the kernel of
T consists only of the zero vector, and hence by Exercise 1.7, T is injective.
Also, if $w = \sum_i a_i f_i$, then $w = \sum_i a_i\, Te_i = T\big(\sum_i a_i e_i\big)$, which shows that T
is also surjective.
is also surjective.
1.9 a. Let v ∈ V and define w := π(v) and u := (1 − π)(v). Then π(u) =
(π − π²)(v) = 0, so v = w + u with w ∈ im π and u ∈ ker π. Now
suppose x ∈ ker π ∩ im π. Then there is a y ∈ V such that x = π(y). But
then 0 = π(x) = π²(y) = π(y) = x.
b. Let { fi } be a basis for W, and complete it to a basis of V by adding a linearly
independent set of vectors {gj }. Let U be the subspace of V spanned
by the gj 's. With these choices, any vector v ∈ V can be written uniquely
as v = w + u, where w ∈ W and u ∈ U. Define a linear map π : V → V
by π(v) = w. Obviously π(w) = w, so π² = π.
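Both parts can be checked on a concrete idempotent operator (an illustrative 2 × 2 example with NumPy, not part of the text):

```python
import numpy as np

# Projection of R^2 onto the x-axis along the line y = x: pi^2 = pi.
P = np.array([[1.0, -1.0],
              [0.0,  0.0]])

idempotent = np.allclose(P @ P, P)

v = np.array([3.0, 1.0])
w = P @ v                        # the piece in im(pi)
u = (np.eye(2) - P) @ v          # the piece in ker(pi)
decomposes = np.allclose(w + u, v) and np.allclose(P @ u, 0)
```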
1.10 Clearly, T0 = 0, so T⁻¹0 = 0. Let $Tv_1 = v_1'$ and $Tv_2 = v_2'$. Then
$$aT^{-1}v_1' + bT^{-1}v_2' = av_1 + bv_2 = (T^{-1}T)(av_1 + bv_2) = T^{-1}(av_1' + bv_2'),$$
which shows that T⁻¹ is linear.
1.11 The identity map I : V → V is clearly an automorphism. If S ∈ Aut V
then S−1S = SS−1 = I . Finally, if S, T ∈ Aut V, then ST is invertible,
with inverse $(ST)^{-1} = T^{-1}S^{-1}$. (Check.) This implies that ST ∈ Aut V.
(Associativity is automatic.)
1.12 By exactness, the kernel of ϕ1 is the image of ϕ0. But the image of ϕ0 consists
only of the zero vector (as its domain consists only of the zero vector). Hence
the kernel of ϕ1 is trivial, so by Exercise 1.7, ϕ1 must be injective. Again by
exactness, the kernel of ϕ3 is the image of ϕ2. But ϕ3 maps everything to zero,
so V3 = ker ϕ3, and hence V3 = im ϕ2, which says that ϕ2 is surjective. The
converse follows by reversing the preceding steps. As for the last assertion, ϕ
is both injective and surjective, so it is an isomorphism.
1.13 If T is injective then ker T = 0, so by the rank/nullity theorem rk T =
dim V = dimW, which shows that T is surjective as well.
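The rank/nullity bookkeeping can be seen on a sample matrix (illustrative, NumPy assumed; not part of the text): dim V = rk T + dim ker T, so a trivial kernel forces full rank.

```python
import numpy as np

# A 3x3 matrix with a one-dimensional kernel (third row = first + second):
T = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [1.0, 3.0, 7.0]])

rank = np.linalg.matrix_rank(T)     # 2
nullity = T.shape[1] - rank         # 1; rank + nullity = dim V = 3
```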
1.14 The rank of a linear map is the dimension of its image. There is no way
that the image of ST can be larger than that of either S or T individually,
because the dimension of the image of a map cannot exceed the dimension of
its domain.
1.15 If $v' \in [v]$ then $v' = v + u$ for some u ∈ U. By linearity $\varphi(v') = \varphi(v) + w$
for some w ∈ W, so $[\varphi(v')] = [\varphi(v) + w] = [\varphi(v)]$.
1.16 Pick a basis {ei } for V. Then
$$\sum_i (ST)_{ij}\, e_i = (ST)e_j = S\Big(\sum_k T_{kj}\, e_k\Big) = \sum_k T_{kj}\, Se_k = \sum_{ik} T_{kj} S_{ik}\, e_i.$$
Hence
$$(ST)_{ij} = \sum_k S_{ik} T_{kj},$$
which shows that the operator product ST is represented by the product of the
matrices representing S and T.
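The computation is easy to confirm numerically (random matrices, NumPy assumed; not part of the text): the j th column of the matrix representing ST is S applied to the j th column of T.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

# (ST)e_j = S(T e_j): build the composite's matrix column by column.
cols = np.stack([S @ T[:, j] for j in range(3)], axis=1)
matches_product = np.allclose(cols, S @ T)
```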
1.17 The easiest way to see this is just to observe that the identity automorphism
I is represented by the identity matrix I (in any basis). Suppose the operator
T⁻¹ is represented by the matrix U in some basis. Then by the results of
Exercise 1.16, the operator TT⁻¹ is represented by the matrix product TU,
where T is the matrix representing T. But TT⁻¹ = I, so TU = I, which shows
that U is the matrix inverse of T.
1.18 Choose a basis {ei } for V. Then by definition,
$$Te_j = \sum_i T_{ij}\, e_i.$$
It follows that Tej is represented by the j th column of T, so the maximum
number of linearly independent vectors in the image of T is precisely
the maximum number of linearly independent columns of T.
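A small illustration (NumPy assumed, not part of the text): a matrix whose fourth column is the sum of the first two has only three independent columns, and its rank is 3.

```python
import numpy as np

T = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 5.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])   # column 4 = column 1 + column 2

rank = np.linalg.matrix_rank(T)         # dimension of the image of T
```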
1.19 Suppose $\sum_i c_i \theta_i = 0$. By linearity of the dual pairing,
$$0 = \Big\langle e_j, \sum_i c_i \theta_i \Big\rangle = \sum_i c_i \langle e_j, \theta_i \rangle = \sum_i c_i \delta_{ij} = c_j,$$
so the $\theta_j$'s are linearly independent.
Now let f ∈ V∗. Define $f(e_j) =: a_j$ and introduce a linear functional
$g := \sum_i a_i \theta_i$. Then
$$g(e_j) = \langle g, e_j \rangle = \sum_i a_i \delta_{ij} = a_j,$$
so f = g (two linear functionals that agree on a basis agree everywhere).
Hence the $\theta_j$'s span.
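In coordinates on Rⁿ the dual basis is concrete (an illustrative check with NumPy, not part of the text): if the basis vectors $e_j$ are the columns of a matrix E, the dual basis functionals $\theta_i$ are the rows of E⁻¹, and the pairing matrix $\theta_i(e_j)$ is the identity.

```python
import numpy as np

E = np.array([[2.0, 1.0],
              [1.0, 1.0]])      # columns: e_1, e_2
Theta = np.linalg.inv(E)        # rows: theta_1, theta_2

pairings = Theta @ E            # entry (i, j) is theta_i(e_j)
is_dual = np.allclose(pairings, np.eye(2))
```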
1.20 Suppose f (v) = 0 for all v. Let $f = \sum_i f_i \theta_i$ and v = e j. Then
$f(v) = f(e_j) = f_j = 0$. This is true for all j, so f = 0. The other proof is similar.
1.21 Let w ∈ W and θ1, θ2 ∈ AnnW. Then
(aθ1 + bθ2)(w) = aθ1(w) + bθ2(w) = 0,
so AnnW is closed under linear combinations. Moreover, the zero functional
(which sends every vector to zero) is clearly in AnnW, so AnnW is a
subspace of V∗.
Conversely, let U∗ ⊆ V∗ be a subspace of V∗, and define
W := {v ∈ V : f (v) = 0, for all f ∈ U∗}.
If f ∈ U∗ then f (v) = 0 for all v ∈ W, so f ∈ AnnW. It therefore suffices
to prove that dimU∗ = dim AnnW. Let $\{f_i\}$ be a basis for U∗, complete it to
a basis of V∗, and let $\{e_i\}$ be the corresponding dual basis of V, satisfying
$f_i(e_j) = \delta_{ij}$. Obviously $e_j \in W$ for $j > \dim U^*$, and these $e_j$'s span W
(if $v = \sum_j c_j e_j \in W$ then $c_i = f_i(v) = 0$ for $i \le \dim U^*$). Thus dimW =
dim V − dimU∗. On the other hand, let {wi } be a basis for W and complete
it to a basis for V: $\{w_1, \dots, w_{\dim W}, e_{\dim W + 1}, \dots, e_{\dim V}\}$. Let $\{u_i\}$ be a basis
for AnnW. Each $u_i$ vanishes on the $w_j$'s, so the $u_i$'s lie in the span of the
functionals dual to the $e_j$'s, and conversely each such dual functional
annihilates W. So dim AnnW = dim V − dimW.
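The dimension count dim AnnW = dim V − dimW can be checked in coordinates (illustrative, NumPy assumed): writing a functional as a row vector a, membership in AnnW means a @ W = 0 for a matrix W whose columns span W.

```python
import numpy as np

W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])     # columns span a 2-dimensional W inside R^4

n = W.shape[0]
dim_W = np.linalg.matrix_rank(W)
# a @ W = 0 <=> W.T @ a.T = 0, so Ann W is the null space of W.T:
dim_AnnW = n - np.linalg.matrix_rank(W.T)
```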
1.22 a. The map is well defined, because if $[v'] = [v]$ then $v' = v + w$ for some
w ∈ W, so $\varphi(f)([v']) = f(v') = f(v + w) = f(v) + f(w) = f(v) = \varphi(f)([v])$.
Moreover, if $\varphi(f) = \varphi(g)$ then for any v ∈ V, $0 = \varphi(f - g)([v]) = (f - g)(v)$,
so f = g. But the proof of Exercise 1.21 shows that
$\dim \mathrm{Ann}\,W = \dim(V/W) = \dim(V/W)^*$, so ϕ is an isomorphism.
b. Suppose $[g] = [f]$ in $V^*/\mathrm{Ann}\,W$. Then $g = f + h$ for some $h \in \mathrm{Ann}\,W$.
So $\pi^*([g])(v) = g(\pi(v)) = f(\pi(v)) + h(\pi(v)) = f(\pi(v)) = \pi^*([f])(v)$.
Moreover, if $\pi^*([f]) = \pi^*([g])$ then $f(\pi(v)) = g(\pi(v))$ or
$(f - g)(\pi(v)) = 0$, so f = g when restricted to W. Dimension counting
shows that $\pi^*$ is an isomorphism.
1.23 Let g be the standard inner product on Cn and let u = (u1, . . . , un), v =
(v1, . . . , vn) and w = (w1, . . . , wn). Then
$$g(u, av + bw) = \sum_i \bar{u}_i (a v_i + b w_i) = a \sum_i \bar{u}_i v_i + b \sum_i \bar{u}_i w_i = a\, g(u, v) + b\, g(u, w).$$
Also,
$$g(v, u) = \sum_i \bar{v}_i u_i = \overline{\sum_i \bar{u}_i v_i} = \overline{g(u, v)}.$$
Assume g(u, v) = 0 for all v. Let v run through all the vectors v(i ) =
(0, . . . , 1, . . . , 0), where the ‘1’ is in the i th place. Plugging into the definition
of g gives ui = 0 for all i, so u = 0. Thus g is indeed an inner product.
The same proof works equally well for the Euclidean and Lorentzian inner
products.
Again consider the standard inner product on Cn. Then
$$g(u, u) = \sum_i \bar{u}_i u_i = \sum_i |u_i|^2 \ge 0,$$
because the modulus squared of a complex number is always nonnegative, so
g is nonnegative definite. Moreover, the only way we could have g(u, u) = 0
is if each ui were zero, in which case we would have u = 0. Thus g is
positive definite. The same proof applies in the Euclidean case, but fails in
the Lorentzian case because then
$$g(u, u) = -u_0^2 + \sum_{i=1}^{n-1} u_i^2,$$
and it could happen that g(u, u) = 0 but u ≠ 0. (For example, let u =
(1, 1, 0, . . . , 0).)
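The Lorentzian counterexample is easy to exhibit in coordinates (illustrative, NumPy assumed; not part of the text):

```python
import numpy as np

# Lorentzian inner product on R^4: g(u, v) = -u_0 v_0 + u_1 v_1 + u_2 v_2 + u_3 v_3.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def g(u, v):
    return u @ eta @ v

u = np.array([1.0, 1.0, 0.0, 0.0])   # the null vector from the text
null_but_nonzero = g(u, u) == 0.0 and np.any(u != 0)
```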
1.24 We have
$$(A^*(af + bg))(v) = (af + bg)(Av) = a\, f(Av) + b\, g(Av) = a(A^*f)(v) + b(A^*g)(v) = (aA^*f + bA^*g)(v),$$
so A∗ is linear. (The other axioms are just as straightforward.)
1.25 We have
$$\langle A^* e^*_j, e_i \rangle = \sum_k (A^*)_{kj} \langle e^*_k, e_i \rangle = \sum_k (A^*)_{kj}\, \delta_{ki} = (A^*)_{ij},$$
while
$$\langle e^*_j, A e_i \rangle = \sum_k \langle e^*_j, A_{ki}\, e_k \rangle = \sum_k A_{ki}\, \delta_{jk} = A_{ji},$$
so the matrix representing A∗ is just the transpose of the matrix
representing A.
1.26 We have
$$\langle A^\dagger e_j, e_i \rangle = \Big\langle \sum_k (A^\dagger)_{kj}\, e_k, e_i \Big\rangle = \sum_k \overline{(A^\dagger)_{kj}}\, \delta_{ki} = \overline{(A^\dagger)_{ij}},$$
while
$$\langle e_j, A e_i \rangle = \Big\langle e_j, \sum_k A_{ki}\, e_k \Big\rangle = \sum_k A_{ki}\, \delta_{jk} = A_{ji},$$
which gives
$$(A^\dagger)_{ij} = \overline{A_{ji}}.$$
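Numerically (random complex matrices, NumPy assumed; `np.vdot` conjugates its first argument, matching the inner product of Exercise 1.23), the conjugate transpose indeed satisfies the defining property of the adjoint:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

Adag = A.conj().T     # conjugate transpose

# <A^dagger u, v> = <u, A v> for the Hermitian inner product:
adjoint_ok = np.isclose(np.vdot(Adag @ u, v), np.vdot(u, A @ v))
```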
1.27 Let $w = \sum_i a_i v_i$ (where not all the $a_i$'s vanish) and suppose $\sum_i c_i v_i + cw = 0$.
The latter equation may be solved by choosing c = 1 and $c_i = -a_i$, so the
set $\{v_1, \dots, v_n, w\}$ is linearly dependent. Conversely, suppose $\{v_1, \dots, v_n, w\}$
is linearly dependent. Then the equation $\sum_i c_i v_i + cw = 0$ has a nontrivial
solution $(c, c_1, \dots, c_n)$. We must have c ≠ 0, else the set $\{v_i\}$ would not be
linearly independent. But then $w = -\sum_i (c_i/c)\, v_i$.
1.28 Obviously, the monomials span V, so we need only check linear independence.
Assume
$$c_0 + c_1 x + c_2 x^2 + c_3 x^3 = 0.$$
The zero on the right side represents the zero vector, namely the polynomial
that is zero for all values of x. In other words, this equation must hold for all
values of x. In particular, it must hold for x = 0. Plugging in gives c0 = 0.
Next let x = 1 and x = −1, giving c1 + c2 + c3 = 0 and −c1 + c2 −
c3 = 0. Adding and subtracting the latter two equations gives c2 = 0 and
c1 +c3 = 0. Finally, choose x = 2 to get 2c1 +8c3 = 0. Combining this with
c1 + c3 = 0 gives c1 = c3 = 0.
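The evaluation argument is a Vandermonde system in disguise (illustrative check with NumPy, not part of the text): evaluating at the four distinct points x = 0, 1, −1, 2 gives an invertible linear system, forcing all the c's to vanish.

```python
import numpy as np

xs = np.array([0.0, 1.0, -1.0, 2.0])
# Rows are (1, x, x^2, x^3) at each evaluation point:
V = np.vander(xs, 4, increasing=True)

forces_zero = np.linalg.matrix_rank(V) == 4   # only solution of V c = 0 is c = 0
```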
1.29 We must show exactness at each