Linear Algebra Lecture 10



1. Four Fundamental Subspaces

Four subspaces

- Column space $C(A)$
- Null space $N(A)$
- Row space = all combinations of the rows of $A$ = all combinations of the columns of $A^T$ = $C(A^T)$
- Null space of $A^T$ = the left null space of $A$ = $N(A^T)$

When $A$ is $m \times n$: $C(A)$ is in $\mathbb{R}^m$, $N(A)$ is in $\mathbb{R}^n$, $C(A^T)$ is in $\mathbb{R}^n$, and $N(A^T)$ is in $\mathbb{R}^m$.
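As a quick check of where these vectors live, here is a minimal sketch (assuming SymPy is available) using the $3 \times 4$ example matrix introduced below, so $m = 3$ and $n = 4$:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])

print([v.shape for v in A.columnspace()])    # vectors in C(A) have m = 3 components
print([v.shape for v in A.nullspace()])      # vectors in N(A) have n = 4 components
print([v.shape for v in A.T.columnspace()])  # vectors in C(A^T) have n = 4 components
print([v.shape for v in A.T.nullspace()])    # vectors in N(A^T) have m = 3 components
```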


Basis of the row space $C(A^T)$

$$A = \begin{bmatrix} 1 & 2 & 3 & 1 \\ 1 & 1 & 2 & 1 \\ 1 & 2 & 3 & 1 \end{bmatrix} \;\rightarrow\; \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} I & F \\ 0 & 0 \end{bmatrix} = R$$

The column space changes when we do row reduction: the column space of $R$ is not the column space of $A$, so $C(R) \neq C(A)$; they are different column spaces.

The row space of $A$ and the row space of $R$ are both all combinations of the same rows, so a basis for the row space of $R$ is also a basis for the row space of the original $A$.

For the row space of $A$, or of $R$, a basis is the first $r$ (rank) rows of $R$. It is the best basis. Just as the columns of the identity matrix are the best basis for $\mathbb{R}^n$, the rows of $R$ are the best basis for the row space, best in the sense of being as clean as I can make it.
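To reproduce this reduction, here is a minimal sketch assuming SymPy; `Matrix.rref()` returns $R$ together with the pivot columns, and `rowspace()` returns a basis for the row space:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])

R, pivots = A.rref()   # reduced row echelon form and the pivot columns
print(R)               # Matrix([[1, 0, 1, 1], [0, 1, 1, 0], [0, 0, 0, 0]])
print(pivots)          # (0, 1), so the rank r = 2

# Row operations preserve the row space: these two bases span the same subspace.
print(A.rowspace())
print(R.rowspace())    # the first r rows of R

# The column spaces differ: every vector in C(R) has third component 0,
# while the columns of A do not, so C(R) != C(A).
print(A.col(0), R.col(0))
```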

Null space of A transpose

$N(A^T)$ contains vectors, call them $y$: if $A^T y = 0$, then $y$ is in the null space of $A$ transpose.

Take the transpose on both sides: $A^T y = 0 \;\rightarrow\; y^T A = 0^T$. Now I have a row vector, $y$ transpose, multiplying $A$ from the left; that is why it is called the left null space.

Basis of left null space

Simplifying $A$ to $R$ should have revealed the left null space too. Going from $A$ to $R$ took some steps, and I'm interested in what those steps were.

This is Gauss-Jordan, where you tack on the identity matrix: $[\,A_{m \times n}\;\; I_{m \times m}\,]$. Then do the reduced row echelon form of this augmented matrix: $\mathrm{rref}\big([\,A_{m \times n}\;\; I_{m \times m}\,]\big) \rightarrow [\,R_{m \times n}\;\; E_{m \times m}\,]$.

$E$ is just going to contain a record of what we did: we did whatever it took to get $A$ to become $R$, and at the same time we were doing the same operations to the identity matrix.

So we started with the identity matrix. All of this row reduction amounted to multiplying on the left by some matrix, a series of elementary matrices that all together gave us one matrix, and that matrix is $E$.

$$E \,[\, A_{m \times n} \;\; I_{m \times m} \,] = [\, R_{m \times n} \;\; E_{m \times m} \,]$$

$$EA = R$$

When $A$ was square and invertible, $EA = I$, so $E$ was $A^{-1}$. Now $A$ is rectangular and has no inverse. Still, follow Gauss-Jordan to get $E$:

$$\left[\begin{array}{cccc|ccc} 1 & 2 & 3 & 1 & 1 & 0 & 0 \\ 1 & 1 & 2 & 1 & 0 & 1 & 0 \\ 1 & 2 & 3 & 1 & 0 & 0 & 1 \end{array}\right] \;\rightarrow\; \left[\begin{array}{cccc|ccc} 1 & 0 & 1 & 1 & -1 & 2 & 0 \\ 0 & 1 & 1 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 & 1 \end{array}\right]$$

$$E = \begin{bmatrix} -1 & 2 & 0 \\ 1 & -1 & 0 \\ -1 & 0 & 1 \end{bmatrix}$$
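To see where this $E$ comes from mechanically, here is a minimal sketch (assuming NumPy) that applies one possible sequence of elimination steps to the augmented block $[A \;\, I]$ and then verifies $EA = R$:

```python
import numpy as np

A = np.array([[1, 2, 3, 1],
              [1, 1, 2, 1],
              [1, 2, 3, 1]])
M = np.hstack([A, np.eye(3)])  # the augmented block [A  I]

# Row operations that take A to R, applied to the whole augmented block.
M[1] -= M[0]        # row2 := row2 - row1
M[2] -= M[0]        # row3 := row3 - row1
M[1] *= -1          # row2 := -row2          (make the second pivot +1)
M[0] -= 2 * M[1]    # row1 := row1 - 2*row2  (clear above the second pivot)

R, E = M[:, :4], M[:, 4:]
print(E)                        # [[-1, 2, 0], [1, -1, 0], [-1, 0, 1]]
print(np.allclose(E @ A, R))    # True: EA = R
```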

The dimension of the left null space is supposed to be mr m − r . There is one combination of those three rows that produces the zero row. If I am looking for the left null space, I am looking for combinations of rows that give the zero row.
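In this example $m - r = 3 - 2 = 1$, and that one combination is exactly the last row of $E$: since the first and third rows of $A$ are identical,

$$-1 \cdot (\text{row } 1) + 0 \cdot (\text{row } 2) + 1 \cdot (\text{row } 3) = 0,$$

so $y = (-1, 0, 1)^T$ spans the left null space $N(A^T)$.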


Basis and dimension of the four subspaces

| Subspace | Basis | Dimension |
|---|---|---|
| $C(A)$ | pivot columns | $r$ |
| $N(A)$ | special solutions | $n - r$ |
| $C(A^T)$ | first $r$ rows of $R$ | $r$ |
| $N(A^T)$ | last $m - r$ rows of $E$ | $m - r$ |

The row space and the null space are in $\mathbb{R}^n$, and their dimensions add to $n$. The column space and the left null space are in $\mathbb{R}^m$, and their dimensions add to $m$.
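Here is a minimal sketch, assuming SymPy, confirming this dimension count for the example matrix:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])
m, n = A.shape   # m = 3, n = 4
r = A.rank()     # r = 2

# Row space and null space live in R^n: r + (n - r) = n
print(len(A.rowspace()) + len(A.nullspace()) == n)        # True
# Column space and left null space live in R^m: r + (m - r) = m
print(len(A.columnspace()) + len(A.T.nullspace()) == m)   # True
```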

