Notes on Strang Lecture - 3
Matrix multiplication
For matrices $A$ and $B$, the product $C = AB$ can be computed and interpreted in multiple ways.
Element-wise computation
The entry $c_{ij}$ equals the dot product of the $i$-th row of $A$ and the $j$-th column of $B$:

$$c_{ij} = \sum_{k} a_{ik} b_{kj}$$
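As a minimal sketch of the entry formula (plain Python lists rather than a library such as NumPy; the function name `matmul` is just illustrative):

```python
# Element-wise matrix product: c[i][j] = dot(row i of A, column j of B).
# A is m x n, B is n x p; matrices are represented as lists of lists.

def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```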
Column and row perspectives
Column perspective: each column of $C$ is a linear combination of the columns of $A$, where the coefficients come from the corresponding column of $B$.
Row perspective: each row of $C$ is a linear combination of the rows of $B$, where the coefficients come from the corresponding row of $A$.
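The column perspective can be sketched directly: build $AB$ one column at a time, where column $j$ of $AB$ is $A$ applied to column $j$ of $B$ (names here are illustrative, not from the lecture):

```python
# Column perspective: column j of AB equals A times column j of B,
# i.e. a linear combination of A's columns with coefficients from B's column j.

def matvec(A, x):
    # A times a column vector x
    return [sum(a_ik * x_k for a_ik, x_k in zip(row, x)) for row in A]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]

# Compute each column of AB, then reassemble the matrix row by row.
cols = [matvec(A, [B[k][j] for k in range(len(B))]) for j in range(len(B[0]))]
AB = [[cols[j][i] for j in range(len(cols))] for i in range(len(A))]
print(AB)  # [[19, 22], [43, 50]]
```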
Sum of rank-one matrices
Another important interpretation: $AB$ is the sum of rank-one matrices, each formed as the outer product of the $k$-th column of $A$ and the $k$-th row of $B$:

$$AB = \sum_{k=1}^{n} a_k b_k^{T}$$

where $a_k$ denotes the $k$-th column of $A$ and $b_k^{T}$ denotes the $k$-th row of $B$.
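This decomposition can be verified with a short sketch that accumulates the outer products one at a time (pure-Python lists, illustrative only):

```python
# AB as a sum of rank-one matrices: sum over k of (column k of A)(row k of B).

def outer(u, v):
    # Outer product of column vector u and row vector v.
    return [[ui * vj for vj in v] for ui in u]

def add(M, N):
    return [[m + n for m, n in zip(rm, rn)] for rm, rn in zip(M, N)]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]

n = len(B)  # inner dimension
AB = [[0] * len(B[0]) for _ in A]
for k in range(n):
    col_k = [A[i][k] for i in range(len(A))]  # k-th column of A
    row_k = B[k]                              # k-th row of B
    AB = add(AB, outer(col_k, row_k))
print(AB)  # [[19, 22], [43, 50]] -- matches the usual product
```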
Inverse matrices
For a square matrix $A$, the inverse matrix $A^{-1}$ satisfies:

$$A^{-1} A = I$$

and equivalently:

$$A A^{-1} = I$$
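For the $2 \times 2$ case there is a closed-form inverse, which makes the defining property easy to check numerically (a sketch; `inverse_2x2` is an illustrative name, not part of the lecture):

```python
# For A = [[a, b], [c, d]] with det = ad - bc != 0, the inverse is
# (1/det) * [[d, -b], [-c, a]]; then Ainv @ A = A @ Ainv = I.

def inverse_2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "matrix is singular"
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
Ainv = inverse_2x2(A)
print(matmul2(Ainv, A))  # [[1.0, 0.0], [0.0, 1.0]]
```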
Existence conditions
The inverse exists if and only if there does not exist a non-zero vector $x$ such that $Ax = 0$. To see this, note that if such an $x$ exists and $A^{-1}$ existed, we would have:

$$x = A^{-1} A x = A^{-1} 0 = 0$$

which contradicts the assumption that $x \neq 0$.
Similarly, $A^{-1}$ exists if and only if there is no non-zero row vector $y$ such that $yA = 0$.
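A concrete instance of the existence condition: a singular matrix has a non-zero null vector, so by the argument above it cannot have an inverse. A small sketch (the specific matrix and vector are illustrative):

```python
# The matrix below is singular (second row is twice the first),
# and x = [2, -1] is a non-zero vector with Ax = 0, so A has no inverse.

A = [[1, 2],
     [2, 4]]   # rank one: rows are linearly dependent
x = [2, -1]    # a non-zero vector in the null space

Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
print(Ax)  # [0, 0]
```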
Computing the inverse
Suppose $A$ is invertible. One method to find $A^{-1}$ is to perform Gauss-Jordan elimination on the augmented matrix $[A \mid I]$, applying both forward and backward elimination steps to transform it into $[I \mid A^{-1}]$.
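The procedure can be sketched as follows, assuming a small dense matrix; the partial-pivoting step and the singularity tolerance are practical additions not discussed in the lecture:

```python
# Gauss-Jordan inversion: augment A with I, reduce [A | I] to [I | A^{-1}].
# Illustrative sketch with partial pivoting; raises ValueError if A is singular.

def gauss_jordan_inverse(A):
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: pick the largest-magnitude entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Normalize the pivot row, then eliminate above and below
        # (the forward and backward steps combined).
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # The right half now holds A^{-1}.
    return [row[n:] for row in M]

A = [[1.0, 2.0], [3.0, 4.0]]
Ainv = gauss_jordan_inverse(A)
print([[round(x, 9) for x in row] for row in Ainv])  # [[-2.0, 1.0], [1.5, -0.5]]
```

On a singular input such as `[[1, 2], [2, 4]]` the pivot search finds only (near-)zero entries in some column, which is exactly the non-zero-null-vector obstruction from the existence condition above.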