In 2D, when $\|x\|=1$, what is the shape of the curve?
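For the 2-norm the curve is a circle; for the 1-norm it is a diamond, and for the $\infty$-norm a square. A minimal sketch that traces these unit "circles" (matplotlib assumed, as with the plots elsewhere in this notebook):

import numpy as np
import matplotlib.pyplot as plt

# Trace the unit "circle" {x : ||x||_p = 1} for a few values of p
theta = np.linspace(0, 2*np.pi, 400)
d = np.stack([np.cos(theta), np.sin(theta)])    # directions around the origin
for p in [1, 2, np.inf]:
    r = 1.0 / np.linalg.norm(d, ord=p, axis=0)  # rescale each direction to unit p-norm
    plt.plot(r * d[0], r * d[1], label=f'p={p}')
plt.gca().set_aspect('equal')
plt.legend()
plt.show()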
A simple extension of vector norms: think of an $m\times n$ matrix as an $mn$-dimensional vector.
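For example, the Frobenius norm of a matrix is exactly the vector 2-norm of its flattened entries; a quick check:

# Frobenius norm of M equals the 2-norm of M viewed as a flat vector
M = np.random.randn(3, 4)
print(np.linalg.norm(M, 'fro'), np.linalg.norm(M.ravel(), 2))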
Contour plots of $f(x)=x^TAx$, starting with

$$ A=\begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix} $$

A = np.array([
    [1,0],
    [0,4]])
pltPSD(A, with_eig=False, if3d=False)  # contour plot of f(x) = x^T A x
pltPSD(A, with_eig=False, if3d=True)   # 3D surface plot of the same function
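pltPSD is a plotting helper defined earlier in the notebook. As a rough sketch of what its 2D branch might do (an assumption, not the actual definition), it evaluates $f(x)=x^TAx$ on a grid and draws the level sets:

import matplotlib.pyplot as plt

def pltPSD_sketch(A, lim=2.0):
    # Hypothetical stand-in for pltPSD(..., if3d=False): contours of f(x) = x^T A x
    xs = np.linspace(-lim, lim, 200)
    X1, X2 = np.meshgrid(xs, xs)
    F = A[0, 0]*X1**2 + (A[0, 1] + A[1, 0])*X1*X2 + A[1, 1]*X2**2
    plt.contour(X1, X2, F, levels=15)
    plt.gca().set_aspect('equal')
    plt.show()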
X = np.array([[3, 1], [1, 3]])
Matrix(X)
eigenvals, eigenvecs = np.linalg.eig(X)  # columns of eigenvecs are the eigenvectors of X
Matrix(eigenvecs)
newX = eigenvecs.dot(np.diag(eigenvals)).dot(eigenvecs.T)  # reconstruct X = Q Lambda Q^T
Matrix(newX)
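A quick sanity check that the eigendecomposition reproduces $X$ up to floating point:

np.allclose(X, newX)  # True: X = Q Lambda Q^T up to round-off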
X = np.random.randn(5,10)
A = X.dot(X.T) # For fun, let's look at A = X * X^T
eigenvals, eigvecs = np.linalg.eig(A) # Compute eigenvalues (and eigenvectors) of A
sum_of_eigs = sum(eigenvals) # Sum the eigenvalues
trace_of_A = A.trace() # Look at the trace
(sum_of_eigs, trace_of_A) # Are they the same?
# We'll use the same matrix A as before
prod_of_eigs = np.prod(eigenvals) # Multiply the eigenvalues
determinant = np.linalg.det(A) # Look at the determinant
(prod_of_eigs, determinant) # Are they the same?
A = np.array([
[1,1],
[1,2]])
pltPSD(A, with_eig=True, if3d=False)
A = np.array([[4, 4], [-3, 3]])
Matrix(A)
U, Sigma_diags, Vt = np.linalg.svd(A) # Note: numpy returns V^T (here Vt), not V
Matrix(np.diag(Sigma_diags)) # Numpy's SVD returns only the singular values; here is the full Sigma
Matrix(U.round(4)), Matrix(Vt.round(4)) # Rounded for clarity
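Sanity check: since numpy returns $V^\top$ directly, multiplying the three factors back together should reproduce $A$:

np.allclose(A, U.dot(np.diag(Sigma_diags)).dot(Vt))  # True: A = U Sigma V^T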
For simplicity, let's assume everything is real; then the conjugate transpose is just the transpose.
$$\text{SVD:} \quad A = U \Sigma V^\top$$

Also, $\|A\|_2 = \sigma_{\max}(A)$ and $\|A\|_F = \left(\sum_{i=1}^n \sigma_i^2\right)^{1/2}$.
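Both identities are easy to verify numerically on the $A$ from the SVD example above:

# ||A||_2 is the largest singular value; ||A||_F is the root-sum-of-squares of all of them
print(np.linalg.norm(A, 2), Sigma_diags.max())
print(np.linalg.norm(A, 'fro'), np.sqrt((Sigma_diags**2).sum()))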
Consider two sets of linear equations:

$$ \begin{bmatrix} 3 & 1 \\ -3.0001 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 4 \\ 4.0001 \end{bmatrix} $$

$$ \begin{bmatrix} 3 & 1 \\ -2.9999 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 4 \\ 4.0002 \end{bmatrix} $$

How different are their solutions?
A1=np.array([
[3,1],
[-3.0001,1]])
b1=np.array([4,4.0001])
A2=np.array([
[3,1],
[-2.9999,1]])
b2=np.array([4,4.0002])
x1=np.linalg.solve(A1,b1)
x2=np.linalg.solve(A2,b2)
print(x1)
print(x2)
[-1.66663889e-05 4.00005000e+00]
[-3.33338889e-05 4.00010000e+00]
Consider another two sets of linear equations; note the signs:

$$ \begin{bmatrix} 3 & 1 \\ 3.0001 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 4 \\ 4.0001 \end{bmatrix} $$

$$ \begin{bmatrix} 3 & 1 \\ 2.9999 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 4 \\ 4.0002 \end{bmatrix} $$

How different are their solutions?
B1=np.array([
[3,1],
[3.0001,1]])
b1=np.array([4,4.0001])
B2=np.array([
[3,1],
[2.9999,1]])
b2=np.array([4,4.0002])
y1=np.linalg.solve(B1,b1)
y2=np.linalg.solve(B2,b2)
print(y1)
print(y2)
[1. 1.]
[-2. 10.]
A perturbation of order $10^{-4}$ in the data moved the solution from $(1, 1)$ to $(-2, 10)$. The condition number of a matrix, $\kappa(A)$, measures how much the solution $x$ can change with respect to a change in $b$:
$$ \kappa(A) = \|A\|\,\|A^{-1}\| = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)} $$

print(np.linalg.cond(A1))
print(np.linalg.cond(B1))
3.0000500009374806
200006.00009591616
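The formula can be checked directly against np.linalg.cond by computing the singular values of B1:

# kappa(B1) as the ratio of extreme singular values
s = np.linalg.svd(B1, compute_uv=False)
print(s.max() / s.min(), np.linalg.cond(B1))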
Special algorithms are needed to solve linear systems with ill-conditioned matrices!
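One standard remedy, sketched here with an ad hoc choice of $\lambda$ (Tikhonov regularization is one option among several), is to solve $(A^\top A + \lambda I)x = A^\top b$, trading a little bias for numerical stability:

# Tikhonov-regularized solve for the ill-conditioned system B1 x = b1
lam = 1e-6  # regularization strength, chosen ad hoc for illustration
x_reg = np.linalg.solve(B1.T.dot(B1) + lam * np.eye(2), B1.T.dot(b1))
print(x_reg)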