Basic

Transpose: $(A^T)_{ij} = A_{ji}$.

We use upper case letters (e.g. $A$) for matrices and lower case letters (e.g. $x$) for vectors.

Distributive, associative & commutative: $A(B + C) = AB + AC$, $A(BC) = (AB)C$. Matrix multiplication is not commutative in general ($AB \ne BA$), but the dot product of two vectors is: $x^T y = y^T x$.

Element-wise (Hadamard) product: $(A \odot B)_{ij} = A_{ij} B_{ij}$.
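As a quick check of these rules, here is a small NumPy sketch (the matrices are illustrative, not taken from the text):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
C = np.array([[2., 0.], [0., 2.]])

print(A.T)                                        # transpose: (A^T)_ij = A_ji
print(np.allclose(A @ (B + C), A @ B + A @ C))    # distributive
print(np.allclose(A @ (B @ C), (A @ B) @ C))      # associative
print(np.allclose(A @ B, B @ A))                  # generally False: not commutative
print(A * B)                                      # element-wise (Hadamard) product
```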

Norms

A norm measures the size of a vector.

L1-norm (Manhattan distance): $\lVert x \rVert_1 = \sum_i \lvert x_i \rvert$

L2-norm (Euclidean distance): $\lVert x \rVert_2 = \sqrt{\sum_i x_i^2}$. It is commonly used in deep learning and its notation is often simplified to $\lVert x \rVert$. However, the L2-norm may not penalize near-zero parameters enough to push them to 0. Hence, the L1-norm is preferable if sparsity of the model's parameters is important.

A unit vector is a vector with $\lVert x \rVert_2 = 1$.

Lp-norm: $\lVert x \rVert_p = \left( \sum_i \lvert x_i \rvert^p \right)^{1/p}$

$L^\infty$-norm (max norm): $\lVert x \rVert_\infty = \max_i \lvert x_i \rvert$

Frobenius norm: it measures the size of a matrix: $\lVert A \rVert_F = \sqrt{\sum_{i,j} A_{ij}^2}$

Sometimes, we count the number of non-zero elements: add 1 for every non-zero element.
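All of these norms are available in NumPy; a minimal sketch with illustrative values:

```python
import numpy as np

x = np.array([3., -4., 0.])
A = np.array([[1., 2.], [3., 4.]])

l1   = np.linalg.norm(x, 1)                 # L1 norm: sum of |x_i|        -> 7.0
l2   = np.linalg.norm(x)                    # L2 norm: sqrt(sum of x_i^2)  -> 5.0
linf = np.linalg.norm(x, np.inf)            # max norm: largest |x_i|      -> 4.0
lp   = np.sum(np.abs(x) ** 3) ** (1 / 3)    # generic Lp norm with p = 3
fro  = np.linalg.norm(A, 'fro')             # Frobenius norm of a matrix
l0   = np.count_nonzero(x)                  # count of non-zero elements   -> 2
```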

Diagonal matrix

A diagonal matrix is a matrix whose non-diagonal elements are all zero. We form a square diagonal matrix $D = \operatorname{diag}(v)$ by placing the elements of a vector $v$ on the diagonal of the matrix.

Provided $v$ has no zero element, we form the inverse $\operatorname{diag}(v)^{-1}$ by replacing each diagonal element $v_i$ with its reciprocal $1/v_i$.

Machine learning may approximate solutions with diagonal matrices because finding the inverse or performing matrix multiplication with them is easy: multiplying by $\operatorname{diag}(v)$ just scales each element, $\operatorname{diag}(v)\,x = v \odot x$. A non-square diagonal matrix does not have an inverse matrix.
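A small NumPy sketch of these two facts (illustrative values):

```python
import numpy as np

v = np.array([2., 5., 0.5])
D = np.diag(v)                       # square diagonal matrix built from a vector
x = np.array([1., 2., 3.])

# multiplying by a diagonal matrix is just element-wise scaling
print(np.allclose(D @ x, v * x))

# the inverse replaces each diagonal element by its reciprocal
print(np.allclose(np.linalg.inv(D), np.diag(1.0 / v)))
```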

Symmetric matrix

A symmetric matrix is a matrix with $A = A^T$.

In machine learning, many equations that relate elements $i$ and $j$ are not directional ($A_{ij} = A_{ji}$). For example, the distance between 2 points is not directional. Hence, many matrices in machine learning are symmetric. The inverse of a symmetric matrix is also symmetric. Any real symmetric matrix can be decomposed into eigenvalues and eigenvectors, which is very desirable in matrix factorization. A symmetric matrix can easily be decomposed with orthogonal matrices, $A = Q \Lambda Q^T$, where the inverse of the orthogonal matrix $Q$ equals its transpose.

Inverse matrix

Properties: $A^{-1}A = AA^{-1} = I$, $(AB)^{-1} = B^{-1}A^{-1}$, $(A^{-1})^T = (A^T)^{-1}$.

Solving a linear equation with an inverse matrix: $Ax = b \;\Rightarrow\; x = A^{-1}b$.

$Ax$ is equivalent to a linear combination of the column vectors $A_{:,i}$ of $A$ weighted by the vector elements $x_i$: $Ax = \sum_i x_i A_{:,i}$.

The span of a set of vectors is the set of points reachable by linear combinations of those vectors. The column vectors of $A$ span the column space of $A$. A square matrix with linearly dependent columns is known as singular. If $A$ is singular, its inverse does not exist.

In machine learning, we rarely use the inverse matrix to solve a linear equation. Computing $A^{-1}$ is often not numerically stable: small input errors get amplified in the output. In addition, $A$ is often a sparse matrix while its inverse generally is not, which takes too much memory.
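A minimal NumPy comparison of the two approaches (a random, illustrative system, assuming a well-conditioned $A$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)

# preferred: a factorization-based solver, no explicit inverse
x1 = np.linalg.solve(A, b)

# discouraged: form A^-1 explicitly, then multiply
x2 = np.linalg.inv(A) @ b

print(np.allclose(x1, x2))   # same answer here, but solve() is cheaper
                             # and more numerically stable
```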

Orthogonal matrix

A set of vectors is orthonormal if and only if the inner product of any two distinct vectors is zero and every vector has unit norm. An orthogonal matrix is a square matrix whose rows (and columns) are mutually orthonormal, i.e. the dot product of any two distinct row vectors (or column vectors) is 0 and each has norm 1.

For an orthogonal matrix, there is one important property: its inverse is its transpose, which is very easy to compute: $A^T A = A A^T = I$, so $A^{-1} = A^T$.

Also, orthogonal matrices do not amplify errors, which is very desirable.

The result $Ax$ has the same L2-norm as $x$: $\lVert Ax \rVert_2^2 = x^T A^T A x = x^T x = \lVert x \rVert_2^2$. If we multiply $x$ with orthogonal matrices, the errors present in $x$ will not be amplified by the multiplication, i.e. it is more numerically stable.

Decomposing a matrix into smaller components helps us solve some problems faster and with better numerical stability. We can use SVD to decompose the matrix $A$ into orthogonal matrices and solve the linear equation $Ax = b$ with the result, as $x = V D^+ U^T b$, where $D^+$ takes the reciprocal of the non-zero elements of $D$ and then transposes the result. A sketch follows below.
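A minimal NumPy sketch of this procedure (the helper name `svd_solve` and the example system are illustrative):

```python
import numpy as np

def svd_solve(A, b):
    """Solve A x = b via the SVD A = U D V^T, using x = V D^+ U^T b."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > 1e-12, 1.0 / s, 0.0)   # reciprocal of non-zero singular values
    # D^+ is diagonal, so applying it is just element-wise scaling
    return Vt.T @ (s_inv * (U.T @ b))

A = np.array([[3., 1.], [1., 2.]])
b = np.array([9., 8.])
print(svd_solve(A, b))           # ~ [2., 3.]
print(np.linalg.solve(A, b))     # same result
```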

Quadratic form equation in matrix form

A quadratic form equation contains terms $x_i^2$ and $x_i x_j$.

The matrix form of a quadratic equation with 2 variables:

$$a x_1^2 + 2b x_1 x_2 + c x_2^2 = \begin{pmatrix} x_1 & x_2 \end{pmatrix} \begin{pmatrix} a & b \\ b & c \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = x^T A x$$

With 3 variables:

$$x^T A x = \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & a_{22} & a_{23} \\ a_{13} & a_{23} & a_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \sum_{i,j} a_{ij} x_i x_j$$

Eigenvector & eigenvalue

A scalar $\lambda$ and a non-zero vector $v$ are an eigenvalue and eigenvector of $A$ respectively if: $Av = \lambda v$

Properties:

  • An $n \times n$ matrix has at most $n$ distinct eigenvalues and $n$ linearly independent eigenvectors.
  • A matrix is singular if and only if any of its eigenvalues is 0.

Example: find the eigenvalues and eigenvectors of a $3 \times 3$ matrix $A$.

Finding the eigenvalues

Find the eigenvalues of $A$:

To do so, solve the characteristic equation $\det(A - \lambda I) = 0$, which gives a cubic polynomial in $\lambda$.

The possible factors of 16 are 1, 2, 4, 8 and 16, so the candidate integer roots are ±1, ±2, ±4, ±8, ±16.

Substituting these candidates for $\lambda$, one of them makes the polynomial evaluate to 0, so the corresponding linear term is a factor and the cubic can be factored.

Solving for the remaining roots, the eigenvalues are -4, 2 and -2.

Finding the eigenvectors

Do row reduction to solve the linear system $(A - \lambda I)v = 0$.

Append a column of zeros to form the augmented matrix:

Perform a row operation:

Perform row subtraction/multiplication:

After many more reductions:

So, for this eigenvalue $\lambda$, the eigenvector is:
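In practice, a library performs both steps; a NumPy sketch with a different, purely illustrative matrix (not the one from the worked example):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

eigvals, eigvecs = np.linalg.eig(A)      # columns of eigvecs are eigenvectors
print(eigvals)                           # -> 5. and 2. (order may vary)

# verify A v = lambda v for each eigenpair
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))   # True, True
```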

Eigendecomposition

Matrix decomposition decomposes a matrix into special types of matrices for easier manipulation when solving problems like linear equations. However, eigendecomposition is only defined for square matrices.

Say $A$ has $n$ eigenvalues $\lambda_1, \ldots, \lambda_n$. We concatenate all the values to form a column vector $\lambda = [\lambda_1, \ldots, \lambda_n]^T$. $A$ also has $n$ eigenvectors $v_1, \ldots, v_n$. We compose a matrix $V$ with $v_i$ as the $i$-th column of the matrix: $V = [v_1, \ldots, v_n]$. The eigendecomposition of $A$ is: $A = V \operatorname{diag}(\lambda) V^{-1}$

Not every $A$ has an eigendecomposition. But in deep learning, we often deal with real symmetric matrices. Real symmetric matrices are always eigendecomposable and the equation can be further simplified to: $A = Q \Lambda Q^T$

where $Q$ is an orthogonal matrix composed of the eigenvectors of $A$, and $\Lambda$ is a diagonal matrix. The diagonal element $\Lambda_{ii}$ is the eigenvalue of the corresponding eigenvector in column $i$ of $Q$. We do not specify the order of the eigenvectors, so different orderings of $Q$ create different $\Lambda$. By convention, we re-arrange the columns of $Q$ in descending order of their eigenvalues $\lambda_i$. This lets us decompose $A$ in a more deterministic way.

In the eigendecomposition, we decompose the matrix $A$ into eigenvectors scaled by their eigenvalues. Therefore, any vector pointing in the same direction as an eigenvector $v_i$ is scaled by $A$ by the corresponding eigenvalue $\lambda_i$.

For a quadratic form $f(x) = x^T A x$, if $x$ is a unit vector equal to an eigenvector $v_i$, then $f(x)$ equals the eigenvalue $\lambda_i$. Therefore, the maximum and minimum of $f$ over unit vectors are the maximum and minimum of the eigenvalues.
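A NumPy sketch of the symmetric case (illustrative matrix; note that `eigh` returns eigenvalues in ascending order rather than the descending convention mentioned above):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])                 # a real symmetric matrix

lam, Q = np.linalg.eigh(A)               # ascending eigenvalues: [1., 3.]
Lam = np.diag(lam)

print(np.allclose(A, Q @ Lam @ Q.T))     # A = Q Lambda Q^T
print(np.allclose(Q.T @ Q, np.eye(2)))   # Q is orthogonal

# for a unit eigenvector x, x^T A x equals its eigenvalue
x = Q[:, 1]                              # eigenvector of the largest eigenvalue
print(x @ A @ x)                         # ~ 3.0, the largest eigenvalue
```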

Singular value decomposition (SVD)

SVD factorizes a matrix into singular vectors and singular values: $A = U D V^T$. Every real matrix has an SVD, which is not true for the eigendecomposition.

  • $A$ is an m×n matrix. (It does not need to be a square matrix, unlike eigendecomposition.)
  • Left-singular vectors: $U$ is an m×m orthogonal matrix (the eigenvectors of $AA^T$).
  • Right-singular vectors: $V$ is an n×n orthogonal matrix (the eigenvectors of $A^TA$).
  • Singular values: $D$ is an m×n diagonal matrix (its diagonal holds the square roots of the eigenvalues of $A^TA$ and $AA^T$).

SVD is a powerful but expensive matrix factorization method. In numerical linear algebra, many problems are solved by expressing the matrix in this form.
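A NumPy sketch with an illustrative non-square matrix, checking the relations above:

```python
import numpy as np

A = np.array([[1., 0., 2.],
              [0., 3., 0.]])             # a 2x3 (non-square) matrix

U, s, Vt = np.linalg.svd(A)              # A = U @ D @ Vt
D = np.zeros(A.shape)
np.fill_diagonal(D, s)

print(np.allclose(A, U @ D @ Vt))        # True

# singular values are square roots of the eigenvalues of A A^T (and A^T A)
print(np.allclose(s**2, np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]))
```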

Positive definite or negative definite matrix

If all eigenvalues of $A$ are:

  • positive: the matrix is positive definite.
  • positive or zero: positive semi-definite.
  • negative: the matrix is negative definite.

If a matrix is positive semi-definite, $x^T A x \ge 0$ for all $x$. If a matrix is positive definite and $x^T A x = 0$, it implies $x = 0$.

Positive definite or negative definite matrices help us solve optimization problems. A quadratic form $x^T A x$ with a positive definite matrix $A$ is always positive for non-zero $x$ and the function is convex, i.e. it guarantees the existence of a global minimum. This allows us to use the Hessian matrix to solve the optimization problem. Similar arguments hold for negative definite matrices.
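A minimal NumPy check of positive definiteness via the eigenvalues (illustrative matrix):

```python
import numpy as np

A = np.array([[ 2., -1.],
              [-1.,  2.]])               # symmetric

eigvals = np.linalg.eigvalsh(A)          # -> [1., 3.]
print(np.all(eigvals > 0))               # True: A is positive definite

# equivalently, x^T A x > 0 for any non-zero x
x = np.array([0.3, -0.7])
print(x @ A @ x > 0)                     # True
```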

Trace

Trace is the sum of all diagonal elements: $\operatorname{Tr}(A) = \sum_i A_{ii}$

We can rewrite some operations using the trace to get rid of the summation (like the summation in the Frobenius norm): $\lVert A \rVert_F = \sqrt{\operatorname{Tr}(A A^T)}$

Other properties: $\operatorname{Tr}(A) = \operatorname{Tr}(A^T)$, $\operatorname{Tr}(AB) = \operatorname{Tr}(BA)$, $\operatorname{Tr}(ABC) = \operatorname{Tr}(BCA) = \operatorname{Tr}(CAB)$.

Derivative of matrix

For a vector $x$: $\dfrac{\partial a^T x}{\partial x} = a$, $\dfrac{\partial x^T A x}{\partial x} = (A + A^T)x$.

For a matrix $X$: $\dfrac{\partial \operatorname{Tr}(AX)}{\partial X} = A^T$, $\dfrac{\partial \lVert X \rVert_F^2}{\partial X} = 2X$.

Example: optimize the mean squared error of a linear regression.

Given: $m$ is the size of the dataset, together with the inputs, the targets and the model parameters.
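A sketch of the standard derivation under this setup, assuming the notation $X$ for the matrix of inputs (one example per row), $y$ for the targets and $w$ for the parameters:

$$\mathrm{MSE} = \frac{1}{m} \lVert Xw - y \rVert_2^2$$

$$\nabla_w \mathrm{MSE} = \frac{2}{m} X^T (Xw - y) = 0 \;\Rightarrow\; X^T X w = X^T y \;\Rightarrow\; w = (X^T X)^{-1} X^T y$$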

Principal Component Analysis (PCA)

PCA encodes an n-dimensional input space into an m-dimensional output space with $m < n$. We want to minimize the amount of information lost during the reduction, i.e. minimize the difference between the input and its reconstruction.

Let $f$ and $g$ be the encoder and the decoder: $c = f(x)$ and $x \approx g(f(x)) = g(c)$.

We apply a linear transformation $D$ to decode $c$: $g(c) = Dc$. We constrain $D$ to be a matrix whose columns are orthogonal to each other and have unit norm (i.e. $D^T D = I$).

PCA uses the L2-norm to minimize the reconstruction error: $c^* = \arg\min_c \lVert x - g(c) \rVert_2$

Compute the squared L2-norm: $\lVert x - g(c) \rVert_2^2 = (x - g(c))^T (x - g(c)) = x^T x - 2 x^T g(c) + g(c)^T g(c)$

Optimize over $c$ (the term $x^T x$ does not depend on $c$ and can be dropped, and $g(c)^T g(c) = c^T D^T D c = c^T c$): $c^* = \arg\min_c \, (-2 x^T D c + c^T c)$

To optimize it, set the gradient with respect to $c$ to zero: $\nabla_c(-2 x^T D c + c^T c) = -2 D^T x + 2c = 0 \;\Rightarrow\; c = D^T x$

So the optimal encode and decode scheme is $f(x) = D^T x$, with reconstruction $r(x) = g(f(x)) = D D^T x$.

To find the optimal transformation $D$, we optimize over all datapoints $x^{(i)}$: $D^* = \arg\min_D \sum_i \lVert x^{(i)} - D D^T x^{(i)} \rVert_2^2$ subject to $D^T D = I$.

Let's consider $m = 1$, so $D$ is just a single vector $d$.

$d^* = \arg\min_d \lVert X - X d d^T \rVert_F^2$ subject to $d^T d = 1$, where $X$ contains all datapoints as rows.

This equation can be solved using eigendecomposition. The optimal $d$ is the eigenvector of $X^T X$ that has the largest eigenvalue. For $m > 1$, the matrix $D$ is given by the $m$ eigenvectors corresponding to the $m$ largest eigenvalues.
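A minimal NumPy sketch of this recipe (random, illustrative data; `m` here is the target dimension, matching the PCA notation above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))        # 200 datapoints, n = 5 features
X = X - X.mean(axis=0)                   # center the data
m = 2                                    # target dimension

# eigendecomposition of X^T X; eigh returns ascending eigenvalues
eigvals, eigvecs = np.linalg.eigh(X.T @ X)
D = eigvecs[:, -m:][:, ::-1]             # m eigenvectors with the largest eigenvalues

C = X @ D                                # encode: c = D^T x for each row
X_rec = C @ D.T                          # decode: reconstruction D D^T x
print(np.linalg.norm(X - X_rec))         # reconstruction error
```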

Moore-Penrose Pseudoinverse

For a linear equation $Ax = y$, the solution is $x = A^{+} y$.

$A^{+}$ is the pseudoinverse of the matrix $A$. We do not call it $A^{-1}$ because an inverse matrix is only defined for a square matrix.

$A^{+}$ is computed as (if it exists): $A^{+} = V D^{+} U^T$

where $U$, $D$ and $V$ come from the SVD of $A$ ($A = U D V^T$). The pseudoinverse $D^{+}$ of the diagonal matrix $D$ is computed by taking the reciprocal of all its non-zero elements and then taking the transpose of the resulting matrix.
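A minimal NumPy sketch (illustrative rectangular system):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])                 # 3x2, no ordinary inverse
y = np.array([1., 2., 2.])

A_pinv = np.linalg.pinv(A)               # computed internally via the SVD
x = A_pinv @ y                           # least-squares solution of A x = y

print(np.allclose(x, np.linalg.lstsq(A, y, rcond=None)[0]))   # True
```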

Determinant

The determinant of a matrix is the product of all its eigenvalues: $\det(A) = \prod_i \lambda_i$. If its absolute value is greater than 1, the transformation expands the output space; if it is between 0 and 1, it shrinks the space.