Before diving into the actual material, it's important to find the core motivation behind learning
linear algebra.
Lots of us (myself included) took linear algebra courses in college, but we never really understood
how it's useful, or why we needed to learn it in the first place.
Most likely, our remaining knowledge of linear algebra has been reduced to numerical operations on
two or three vectors, e.g. how to compute a determinant with the diagonal rule, or the dot product of
two vectors... This is pure sadness.
After watching Terence Tao's masterclass, I had some sort of epiphany: mathematics should be a
method that guides you to transform your way of thinking, to find a narrative or a story for your
problem.
There are several great examples from Terence's class:
- Kepler measuring the volume of wine in a barrel -> finding a local maximum;
- Measuring the salt content of the sea -> sampling a population (and why the size of the population matters less than the size of the sample);
- Transforming the game of making 15 into tic-tac-toe via a magic square:
$\begin{bmatrix}
2 & 7 & 6 \\
9 & 5 & 1 \\
4 & 3 & 8 \\
\end{bmatrix}$
- The counterfeit coin problem (12 coins, one balance scale, at most 3 weighings) -> linear algebra -> compressed sensing
Resources
Gilbert Strang, the god of Linear Algebra. His course reignited my desire to learn math.
The Essence of Linear Algebra series is pure magic; words can't describe how helpful it is.
Projection:
$$P^2 = P, \quad P^T = P, \quad P = A(A^TA)^{-1}A^T, \quad p = A\hat{x} = A(A^TA)^{-1}A^Tb$$
Why do we need projection to begin with?
Since $Ax = b$ might not have a solution, we instead solve $A\hat{x} = p$, where $p$ is the projection
of $b$ onto the column space of $A$.
If $b$ is in the column space, $Pb = b$.
If $b \perp$ the column space, $Pb = 0$ ($b$ is in the null space of $A^T$).
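A minimal numpy sketch of these formulas (the matrix and vector are my own example; it assumes $A$ has independent columns, so $A^TA$ is invertible):

```python
import numpy as np

# A tall matrix with independent columns; Ax = b has no exact solution.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Projection matrix P = A (A^T A)^{-1} A^T
P = A @ np.linalg.inv(A.T @ A) @ A.T

p = P @ b                                  # projection of b onto the column space
x_hat = np.linalg.solve(A.T @ A, A.T @ b)  # least-squares solution of Ax = b

assert np.allclose(P @ P, P)               # P^2 = P
assert np.allclose(P.T, P)                 # P^T = P
assert np.allclose(A @ x_hat, p)           # A x_hat = P b
```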
Determinant:
- $Det(I) = 1$
- Det is linear in each row (the other rows held fixed): $Det\begin{bmatrix} a + a' \\ b \end{bmatrix} = Det\begin{bmatrix} a \\ b \end{bmatrix} + Det\begin{bmatrix} a' \\ b \end{bmatrix}$
- $Det(2A) = 2^nDet(A)$, $Det(AB) = Det(A)Det(B)$
- $Det(A) = Det(A^T)$, so $Det(LU) = Det(L^T)Det(U^T)$
- $Det(A^{-1}) = \frac{1}{Det(A)}$, i.e. $Det(A^{-1})Det(A) = 1$
- Exchanging two rows reverses the sign of Det
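These properties are easy to sanity-check numerically (a quick sketch with random matrices of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
det = np.linalg.det

assert np.isclose(det(np.eye(n)), 1.0)          # Det(I) = 1
assert np.isclose(det(2 * A), 2**n * det(A))    # Det(2A) = 2^n Det(A)
assert np.isclose(det(A @ B), det(A) * det(B))  # Det(AB) = Det(A) Det(B)
assert np.isclose(det(A), det(A.T))             # Det(A) = Det(A^T)
assert np.isclose(det(np.linalg.inv(A)) * det(A), 1.0)

# Exchanging two rows reverses the sign of the determinant.
A_swapped = A[[1, 0, 2, 3], :]
assert np.isclose(det(A_swapped), -det(A))
```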
Eigenvalues and eigenvectors:
Intuition: multiplying by $A$ doesn't change an eigenvector's direction; it just scales it.
$A \in \mathbb{R}^{n\times n}, x \in \mathbb{C}^n, \lambda \in \mathbb{C}$
At most, you can have $n$ distinct eigenvalues and $n$ linearly independent eigenvectors.
$Ax = \lambda x$
$AX = X\Lambda \Leftrightarrow A = X\Lambda X^{-1}$ (if $X$ invertible), where
$\Lambda = \begin{bmatrix}
\lambda_{1} & & & \\
& \lambda_{2} & & \\
& & \ddots & \\
& & & \lambda_{n}
\end{bmatrix}$
$A^2 = X\Lambda X^{-1}X\Lambda X^{-1} = X\Lambda^{2}X^{-1}$
What can we say about $A^k$ as $k \rightarrow \infty$? Since $A^k = X\Lambda^kX^{-1}$:
As long as one eigenvalue has $|\lambda| > 1$, $A^k$ blows up to $\infty$.
vs. if every eigenvalue has $|\lambda| < 1$, $A^k \rightarrow 0$.
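A small numpy sketch of the diagonalization (the matrix is my own example, a column-stochastic matrix with eigenvalues $1$ and $0.7$):

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.1, 0.8]])

lam, X = np.linalg.eig(A)  # columns of X are eigenvectors
Lam = np.diag(lam)

# A = X Lam X^{-1}, hence A^k = X Lam^k X^{-1}
assert np.allclose(A, X @ Lam @ np.linalg.inv(X))

k = 50
A_k = X @ np.diag(lam**k) @ np.linalg.inv(X)
assert np.allclose(A_k, np.linalg.matrix_power(A, k))

# The |lambda| < 1 component dies out while the lambda = 1 component survives,
# so A^k converges to a rank-1 steady-state matrix rather than 0 or infinity.
```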
Fourier
FFT in action, the best video about the FFT out there.
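The practical punchline (a sketch of my own): convolution becomes pointwise multiplication in the frequency domain, so an $O(n^2)$ operation drops to $O(n\log n)$:

```python
import numpy as np

# Multiply two polynomials by convolving their coefficient vectors.
a = np.array([1.0, 2.0, 3.0])  # 1 + 2x + 3x^2
b = np.array([4.0, 5.0])       # 4 + 5x

n = len(a) + len(b) - 1        # length of the product's coefficient vector

# O(n log n): FFT both, multiply pointwise, inverse FFT.
fast = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)

# O(n^2) reference: direct convolution.
slow = np.convolve(a, b)

assert np.allclose(fast, slow)  # both give [4, 13, 22, 15]
```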
Hardware Acceleration of Linear Algebra: how to break the $O(n^3)$ matrix multiplication down into
memory-loading cost and register cost, and speed each one up.
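A minimal sketch of the tiling idea behind that breakdown (pure numpy, with an arbitrary block size; real kernels tune tiles to cache and register capacities):

```python
import numpy as np

def blocked_matmul(A, B, tile=32):
    # Tiled matrix multiply: work on small blocks so each tile of A and B
    # stays in fast memory (cache/registers) while it is reused many times.
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                # Each block product reuses the loaded tiles many times,
                # amortizing the cost of moving them from slow memory.
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C

A = np.random.rand(100, 80)
B = np.random.rand(80, 60)
assert np.allclose(blocked_matmul(A, B), A @ B)
```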