Linear regression

A linear regression model predicts $\hat{y} = w^T x$. We transpose $w$ because, by convention, we express all vectors as column vectors here.

Mean square error (MSE)

$$J(w) = \frac{1}{N}\sum_{i=1}^{N} \big(w^T x_i - y_i\big)^2$$

where $x_i$ and $y_i$ are the input features and the true value for the $i$-th datapoint in the training dataset.

MSE is popular because it is easy to compute and it has a smooth, differentiable gradient. To optimize $w$, we take its derivative and set it to zero.

Mean square error (MSE) is actually the (squared) L2-norm of the residual:

$$J(w) = \frac{1}{N}\,\|Xw - y\|_2^2$$

Since the L2-norm is so common, we often drop its subscript and simply write $\|Xw - y\|^2$.
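As a quick illustration, here is a minimal NumPy sketch (the toy data and variable names are our own, not from the text) showing that the MSE computed as an average of squared errors matches the squared-L2-norm form:

```python
import numpy as np

# Toy data (our own): one datapoint per row of X, true values in y.
X = np.array([[1.0, 2.0], [2.0, 0.5], [3.0, 1.0]])
y = np.array([2.0, 1.5, 3.0])
w = np.array([0.5, 0.4])

residual = X @ w - y

# MSE as an average of squared errors ...
mse_sum = np.mean(residual ** 2)

# ... and as the squared L2-norm of the residual vector.
mse_norm = np.linalg.norm(residual) ** 2 / len(y)

assert np.isclose(mse_sum, mse_norm)
```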

Adding a bias

To absorb the bias into $w$, transform $x$ to $\tilde{x} = (1, x_1, \dots, x_d)^T$ so that the first component of $w$ acts as the bias term.

Optimize MSE cost

Here we derive a generic formula to optimize $w$.

Setting the gradient of $J(w)$ to zero to optimize $w$:

$$X^T(Xw - y) = 0$$

This gives the normal equation, which we can solve for $w$ using linear algebra:

$$X^T X\, w = X^T y$$

We will adopt the following notation, meaning that $w$ is computed by linear algebra (by solving a linear system) using $X^T X$ and $X^T y$.

$w$ can therefore be solved by:

$$w = (X^T X)^{-1} X^T y$$

Notice that the solution for $w$ is not necessarily unique (for example, when $X^T X$ is not invertible).
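As a sketch of the closed-form solution (toy numbers of our own; we also prepend a column of ones so the bias from the previous section is absorbed into $w$):

```python
import numpy as np

# Toy training data (our own): 5 datapoints, 1 feature.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# Prepend a column of ones so w[0] acts as the bias.
X = np.column_stack([np.ones_like(x), x])

# Solve the normal equation X^T X w = X^T y.
# Solving the linear system is preferred over forming the inverse explicitly.
w = np.linalg.solve(X.T @ X, X.T @ y)

print(w)      # approximately [bias, slope]
print(X @ w)  # predictions on the training data
```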

MSE is also vulnerable to outliers. With an outlier, our model shifts from the blue line to the red line, even though the blue line models the training dataset better once the outlier is removed.

L2 regularization with mean square error (MSE)

To avoid overfitting, we add the L2-norm of $w$ as a regularization term to the cost function (with the MSE computed in the previous section). Constant scaling factors do not change the minimizer, so we drop them for convenience:

$$J(w) = \frac{1}{2}\,\|Xw - y\|^2 + \frac{\lambda}{2}\,\|w\|^2$$

Optimize $J(w)$ by setting its gradient to zero:

$$X^T(Xw - y) + \lambda w = 0$$

With L2 regularization and MSE, the closed-form solution for $w$ is:

$$w = (X^T X + \lambda I)^{-1} X^T y$$

Let’s visualize the solution. In the diagram below, the origin is where the regularization cost is 0, i.e. all $w_i = 0$. The center of the ellipse is where the MSE is at its minimum. The optimal solution for $w$ is where a concentric circle (constant regularization cost) meets an ellipse (constant MSE).

This is also called ridge regression.
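A minimal sketch of the ridge closed form above (the data and the $\lambda$ values are our own):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam * I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy example (our own numbers); the first column of X is the bias feature.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.9, 2.1, 2.9, 4.2])

print(ridge_fit(X, y, lam=0.0))   # ordinary least squares
print(ridge_fit(X, y, lam=10.0))  # heavier regularization shrinks the weights
```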

Regression tree

Linear regression models a linear relationship between input and output. We can combine a decision tree with linear regression to create a non-linear model.

Change of basis

Non-linearity can also be produced by a change of basis. For example, to model the following quadratic relation:

$$y = w_0 + w_1 x + w_2 x^2$$

we can transform $x$ to

$$z = \begin{bmatrix} 1 \\ x \\ x^2 \end{bmatrix}$$


and then apply linear regression on $z$:

$$\hat{y} = w^T z$$

To apply a quadratic function with 2 features $x_1$ and $x_2$,

we transform $x$ to $z$ with:

$$z = \begin{bmatrix} 1 & x_1 & x_2 & x_1^2 & x_2^2 & x_1 x_2 \end{bmatrix}^T$$

Note: we are not restricted to polynomial functions: any function, including exponentials, logarithms, and trigonometric functions, can be applied to the basis.
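A minimal sketch of the change of basis for 2 features (the basis order and the toy data are our own choices):

```python
import numpy as np

def quadratic_basis(X):
    """Map 2 features (x1, x2) to [1, x1, x2, x1^2, x2^2, x1*x2]."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

# Toy data with a quadratic relationship (our own numbers).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(50, 2))
y = 1.0 + 2.0 * X[:, 0] ** 2 - 3.0 * X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=50)

Z = quadratic_basis(X)                 # change of basis
w = np.linalg.solve(Z.T @ Z, Z.T @ y)  # ordinary linear regression on Z
print(np.round(w, 2))                  # roughly [1, 0, 0, 2, 0, -3]
```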

L1-norm as a cost function (Robust regression) or regularization

We are going to explore more cost functions besides MSE. MSE has issues with outliers: it exaggerates the cost by squaring the error, so MSE tries harder to fit an outlier into the model. An L1-norm cost function gives the same incentive to reduce the error from an outlier as from a normal datapoint, and hence is less vulnerable to outliers. Nevertheless, its derivative at 0 is not smooth, which sometimes impacts the effectiveness of gradient descent in that region.

Besides using the L1-norm as the error cost, it can also be added to the cost function as a regularization term. The optimal solution under L1 regularization usually pushes some of the $w_i$ to be exactly 0.

The L1-norm as regularization promotes sparsity in $w$, which can be desirable to avoid overfitting, in particular when we have too many input features. Alternatively, the L1-norm can be used for feature selection by eliminating features with $w_i = 0$.
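To see the sparsity effect, here is a small sketch using scikit-learn's Lasso (L1) and Ridge (L2) estimators; the library choice, toy data, and alpha value are our own, not from the text:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Toy data (our own): only features 0 and 3 actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=100)

l1 = Lasso(alpha=0.1).fit(X, y)   # L1 regularization
l2 = Ridge(alpha=0.1).fit(X, y)   # L2 regularization, for comparison

print(np.round(l1.coef_, 2))      # irrelevant coefficients are driven to (exactly) 0
print(np.round(l2.coef_, 2))      # coefficients are small but generally non-zero
```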

Huber loss

Huber loss smooths the cost function as the error approaches 0 so it is differentiable everywhere: it behaves like MSE for small errors and like the L1-norm for large errors:

$$L_\delta(e) = \begin{cases} \frac{1}{2}e^2 & \text{if } |e| \le \delta \\ \delta\big(|e| - \frac{1}{2}\delta\big) & \text{otherwise} \end{cases}$$

The blue curve is the MSE, the red is the L1-norm, and the green is the Huber loss, which smooths out the transition near $e = 0$.
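A small sketch of the piecewise Huber loss above (delta and the sample errors are our own):

```python
import numpy as np

def huber(residual, delta=1.0):
    """Huber loss: quadratic (MSE-like) for |e| <= delta, linear (L1-like) beyond."""
    abs_r = np.abs(residual)
    quadratic = 0.5 * residual ** 2
    linear = delta * (abs_r - 0.5 * delta)
    return np.where(abs_r <= delta, quadratic, linear)

errors = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(huber(errors))  # small errors behave like 0.5*e^2, large ones like delta*|e|
```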

L0-norm as regularization

We can also use the L0-norm, which counts the number of non-zero weights, as the regularization term. The L0-norm favors sparsity in $w$ even more than the L1-norm.

For example, if $w$ has exactly 3 non-zero components, its L0-norm regularization is 3.

Since training data is not unlimited, even if the true model has $w_i = 0$, the MSE cost is unlikely to bottom out exactly at $w_i = 0$. When we add L0 regularization, we add a constant to the MSE cost everywhere except at $w_i = 0$, which has 0 regularization cost. The total cost shifts upward to the green line, with the exception of the green dot, which remains unchanged.

Without regularization, the optimized $w_i$ is at the orange dot. With L2 regularization, we may shift the orange dot slightly towards zero. However, with L0 regularization, we can move the optimal point all the way to the green dot, subject to the value of $\lambda$.

Margin loss (Hinge loss)

Let’s consider the problem of creating a boundary to separate the green and red dots below. With linear regression and an MSE cost function, we create the red line to divide the green dots from the red dots. If $w^T x \ge 0$, the point is classified as green. As shown, 3 green dots will be mis-classified.

MSE cannot provide the optimal solution (the blue line) because it mis-calculates the cost. A point far on the correct side of the boundary is already classified correctly, yet it still contributes to the cost function. Hence, for classification problems, we need a cost function that does not penalize a point when a good decision is made.

For example, for the green dots (label $y_i = +1$), the penalty should be 0 when $w^T x_i \ge 1$, and we start counting a penalty when $w^T x_i < 1$.

The mathematical equation for the hinge loss is:

$$\ell(x_i, y_i) = \max\big(0,\; 1 - y_i\, w^T x_i\big), \quad y_i \in \{-1, +1\}$$
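A minimal sketch of the hinge loss with labels in {-1, +1} (the toy points and weights are our own):

```python
import numpy as np

def hinge_loss(w, X, y):
    """Hinge loss per datapoint: max(0, 1 - y_i * (w^T x_i)), with y_i in {-1, +1}."""
    margins = y * (X @ w)
    return np.maximum(0.0, 1.0 - margins)

X = np.array([[2.0, 1.0], [0.2, 0.1], [-1.5, -2.0]])
y = np.array([1, 1, -1])
w = np.array([1.0, 1.0])

print(hinge_loss(w, X, y))  # a point with margin >= 1 contributes exactly 0
```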

Support vector machine (SVM)

Hinge loss with L2 regularization is the SVM objective:

$$J(w) = \sum_{i=1}^{N} \max\big(0,\; 1 - y_i\, w^T x_i\big) + \frac{\lambda}{2}\,\|w\|^2$$

The datapoints closest to the gray lines are called support vectors. The optimal solution for SVM maximizes the margin between the support vectors, i.e. SVM creates the largest margin separating both classes.

We add a penalty to the cost function for points where $y_i\, w^T x_i < 1$, i.e. points that fall within the maximum margin.

The cost function is defined as the amount of constraint violation.

To have the lowest cost, we want the margin constraints $y_i\, w^T x_i \ge 1$ satisfied with the smallest $\|w\|$ (assume both classes behave the same way). The cost is reduced to:

$$\min_w\; \frac{\lambda}{2}\,\|w\|^2 \quad \text{subject to} \quad y_i\, w^T x_i \ge 1$$

The dot product of $w$ and $x_i$ is proportional to the projection of $x_i$ onto $w$, i.e. the green line.

For the support vector to satisfy $w^T x_i = 1$ with the longest projection (longest green line), the directions of the optimal $w$ and of the support vector should align. If you draw 2 parallel boundaries passing through the support vectors, like the figure above on the right, the distance between them is $\frac{2}{\|w\|}$. Hence the optimal $w$ (the one with the smallest norm) provides the maximum margin for the support vectors. Therefore SVM indeed maximizes the margin.

With regularization, we can prevent overfitting to outliers:

With different levels of regularization controlled by $\lambda$, we can obtain very different optimal solutions:
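As a sketch (not the text's own recipe), a linear SVM can be trained by subgradient descent on the hinge-loss-plus-L2 objective; the data, learning rate, and $\lambda$ below are our own choices:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.1, lr=0.01, epochs=500):
    """Minimize sum(max(0, 1 - y_i * w^T x_i)) + lam/2 * ||w||^2 by subgradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)
        violating = margins < 1                      # points inside the margin
        # Subgradient: -y_i * x_i for margin violators, plus the L2 term.
        grad = -(y[violating, None] * X[violating]).sum(axis=0) + lam * w
        w -= lr * grad
    return w

# Toy linearly separable data (our own); the last column acts as the bias feature.
X = np.array([[2.0, 2.0, 1.0], [1.5, 2.5, 1.0], [-2.0, -1.0, 1.0], [-1.0, -2.5, 1.0]])
y = np.array([1, 1, -1, -1])

w = train_linear_svm(X, y)
print(np.sign(X @ w))  # should match y
```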

Maximum likelihood estimation

Given a model parameterized by $w$ and a datapoint $x_i$, the probability that the model predicts the true value $y_i$ is:

$$p(y_i \mid x_i, w)$$

The probability of matching the predictions for all training datapoints (the likelihood) becomes:

$$p(y \mid X, w) = \prod_{i=1}^{N} p(y_i \mid x_i, w)$$

In training, we want to find the $w$ that has the highest $p(y \mid X, w)$, i.e. we want the maximum likelihood given our observed training data:

$$\hat{w} = \arg\max_w\; p(y \mid X, w)$$

Negative log likelihood (NLL)

The logarithm is monotonic, so maximizing a probability is equivalent to maximizing its log; negating the log flips the direction of the optimization. Hence, maximizing a probability function is the same as minimizing the negative log of that function:

$$\arg\max_w\; p(y \mid X, w) = \arg\min_w\; -\log p(y \mid X, w)$$

In later sections, the NLL is very handy as a cost function because the log cancels the exponential functions used in the classifiers.

Linear regression with Gaussian distribution

We want to optimize $w$ for a linear regression model under the assumption that $y_i$ is Gaussian distributed around the prediction $w^T x_i$ with variance $\sigma^2$:

$$y_i \sim \mathcal{N}(w^T x_i,\; \sigma^2)$$


Optimize $w$ using the log likelihood $\log p(y \mid X, w)$. The Gaussian log likelihood reduces to the (negative) MSE, so the optimum is again:

$$w = (X^T X)^{-1} X^T y$$

The inverse of $X^T X$ may not be well conditioned. We can add a term $\lambda I$ to improve the solution:

$$w = (X^T X + \lambda I)^{-1} X^T y$$


In machine learning, we often use mean square error (MSE) with L2-regularization as our cost function. In fact, the L2-regularization is not a random choice. Here, we show that solving the problem below leads us to the same solution as above:

$$\min_w\; \|Xw - y\|^2 + \lambda\,\|w\|^2$$

As a side note, we can rewrite the regularization as an optimization constraint in which $\|w\|^2$ needs to be smaller than some threshold $t$.
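As a worked check (our own algebra, matching the claim above), setting the gradient of the regularized MSE to zero recovers exactly the regularized closed-form solution:

```latex
\begin{aligned}
J(w) &= \|Xw - y\|^2 + \lambda \|w\|^2 \\
\nabla_w J(w) &= 2X^T(Xw - y) + 2\lambda w = 0 \\
(X^T X + \lambda I)\, w &= X^T y \\
w &= (X^T X + \lambda I)^{-1} X^T y
\end{aligned}
```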

Bayesian linear regression

To optimize a linear regression, we can also use Bayesian inference. Based on a prior belief on how $w$ may be distributed, $p(w)$, we compute the posterior $p(w \mid X, y)$ from the likelihood $p(y \mid X, w)$ using Bayes’ theorem:

$$p(w \mid X, y) = \frac{p(y \mid X, w)\, p(w)}{p(y \mid X)}$$

Comparison between MLE linear regression and Bayesian linear regression

Source: Nando de Freitas, UBC machine learning class.

Logistic regression

Logistic function (sigmoid function)

A logistic function is defined as:

$$\sigma(z) = \frac{1}{1 + e^{-z}}$$

Logistic loss

In logistic regression, we compute the probability that a datapoint belongs to a class as:

$$p(y_i = 1 \mid x_i, w) = \sigma(w^T x_i) = \frac{1}{1 + e^{-w^T x_i}}$$

Here we use the negative log likelihood (NLL) as our cost function:

$$J(w) = \sum_{i=1}^{N} \log\!\big(1 + e^{-y_i\, w^T x_i}\big), \quad y_i \in \{-1, +1\}$$

The logistic loss is sometimes viewed as a smooth version of the hinge loss.

Logistic loss with L2 regularization:

$$J(w) = \sum_{i=1}^{N} \log\!\big(1 + e^{-y_i\, w^T x_i}\big) + \frac{\lambda}{2}\,\|w\|^2$$
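A minimal sketch of fitting the regularized logistic loss with plain gradient descent (the toy data, learning rate, and $\lambda$ are our own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lam=0.01, lr=0.1, epochs=1000):
    """Minimize sum(log(1 + exp(-y_i * w^T x_i))) + lam/2 * ||w||^2, with y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        # Gradient of the logistic loss plus the L2 term.
        grad = -(y * sigmoid(-y * (X @ w))) @ X + lam * w
        w -= lr * grad
    return w

# Toy data (our own); the last column acts as the bias feature.
X = np.array([[1.0, 2.0, 1.0], [2.0, 1.5, 1.0], [-1.0, -1.0, 1.0], [-2.0, -0.5, 1.0]])
y = np.array([1, 1, -1, -1])

w = train_logistic(X, y)
print(sigmoid(X @ w))  # close to 1 for the +1 class, close to 0 for the -1 class
```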

Maximum a posteriori (MAP)

We have used maximum likelihood estimation as our cost function to find the optimized $w$.

Alternatively, we can use maximum a posteriori (MAP) estimation to find the optimized $w$.

In this section, we go through the process of how some cost functions are defined using MAP.

Using Bayes’ theorem:

$$p(w \mid y) = \frac{p(y \mid w)\, p(w)}{p(y)} \propto p(y \mid w)\, p(w)$$

Take the negative log:

$$-\log p(w \mid y) = -\log p(y \mid w) - \log p(w) + \text{const}$$

The total cost for all training data is:

$$J(w) = -\sum_{i=1}^{N} \log p(y_i \mid x_i, w) - \log p(w)$$

Compute the prior with the assumption that it has a zero-mean Gaussian distribution with variance $\frac{1}{\lambda}$:

$$w \sim \mathcal{N}\Big(0,\; \tfrac{1}{\lambda} I\Big) \quad\Rightarrow\quad -\log p(w) = \frac{\lambda}{2}\,\|w\|^2 + \text{const}$$

If the likelihood is also Gaussian distributed, $y_i \sim \mathcal{N}(w^T x_i, \sigma^2)$, then:

$$-\log p(y \mid X, w) = \frac{1}{2\sigma^2}\sum_{i=1}^{N}(w^T x_i - y_i)^2 + \text{const}$$

So for a Gaussian prior and a Gaussian likelihood, the cost function is:

$$J(w) = \frac{1}{2\sigma^2}\sum_{i=1}^{N}(w^T x_i - y_i)^2 + \frac{\lambda}{2}\,\|w\|^2$$

which is the same as the MSE with L2-regularization.

If the likelihood is instead computed from a logistic function, the corresponding cost function is the regularized logistic loss:

$$J(w) = \sum_{i=1}^{N} \log\!\big(1 + e^{-y_i\, w^T x_i}\big) + \frac{\lambda}{2}\,\|w\|^2$$

Softmax classifier

For many classification problems, we categorize an input into one of many classes. For example, we can classify an image into one of 100 possible object classes. We use a softmax classifier to compute K probabilities, one per class, for an input image (the probabilities sum to 1).

The network computes K scores $z_j$ per image. The probability that an image belongs to class $j$ is:

$$p_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}$$

To avoid the numerical stability problems caused by exponentiating large values, we subtract the maximum from the inputs. Adding or subtracting a constant from all the inputs produces the same probabilities in softmax.

Softmax is the most commonly used classifier.

The softmax cost function is defined as the NLL:

$$J(w) = -\sum_{i=1}^{N} \log p_{y_i}$$

where $p_{y_i}$ is the predicted probability of the true class of datapoint $i$.
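A minimal NumPy sketch of a numerically stable softmax and its NLL cost (the helper names and scores are our own):

```python
import numpy as np

def softmax(scores):
    """Numerically stable softmax: subtract the max before exponentiating."""
    shifted = scores - np.max(scores, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

def nll(scores, true_class):
    """Softmax cost (negative log likelihood) for a single example."""
    return -np.log(softmax(scores)[true_class])

scores = np.array([1000.0, 1001.0, 999.0])  # a naive exp() here would overflow
probs = softmax(scores)
print(probs, probs.sum())                   # probabilities still sum to 1
print(nll(scores, true_class=1))
```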

Non-linear decision boundary

In the classification problem discussed before, classes are separable by a line or a plane. How can we handle non-linear decision boundary?

We change our basis to a quadratic function:

After the change, we plot the data again. It is easy to see that the new coordinates represent the (squared) distance from the center of the cluster, and therefore the classes can now be divided by a plane.

Other polynomial bases:

Recall that the optimized $w$ for linear regression using MSE + L2 regularization is:

$$w = (X^T X + \lambda I)^{-1} X^T y$$

To make a prediction after the change of basis, write $Z = \Phi(X)$ for the transformed training data and $\tilde{Z} = \Phi(\tilde{X})$ for the transformed test data, so $w = (Z^T Z + \lambda I)^{-1} Z^T y$. Using the identity $(Z^T Z + \lambda I)^{-1} Z^T = Z^T (Z Z^T + \lambda I)^{-1}$, the prediction becomes:

$$\hat{y} = \tilde{Z}\, w = \tilde{K}\,(K + \lambda I)^{-1}\, y$$

where the Gram matrix $K$ is defined as

$$K = Z Z^T$$

which contains the inner products between all training examples, and

$$\tilde{K} = \tilde{Z} Z^T$$

contains the inner products between the test examples and the training examples.

Consider a degree 2 basis with 2 features $x_1$ and $x_2$.

We change the basis to a quadratic function:

$$\phi(x) = \big(x_1^2,\; \sqrt{2}\,x_1 x_2,\; x_2^2\big)$$

and $K$ can be computed directly from the original features:

$$K_{ij} = \phi(x_i)^T \phi(x_j) = (x_i^T x_j)^2$$

Hence, we can compute $K$ from $X$ directly, without ever forming the transformed features $\phi(x)$ (the kernel trick).
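A tiny sketch verifying the identity above for the degree-2 basis (the sqrt(2) factor on the cross term, our choice, is what makes it exact):

```python
import numpy as np

def phi(x):
    """Explicit degree-2 basis for 2 features: (x1^2, sqrt(2)*x1*x2, x2^2)."""
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2.0) * x1 * x2, x2**2])

def poly2_kernel(a, b):
    """Degree-2 polynomial kernel computed from the original features only."""
    return (a @ b) ** 2

a = np.array([1.0, 2.0])
b = np.array([0.5, -1.0])

print(phi(a) @ phi(b))     # inner product in the transformed space
print(poly2_kernel(a, b))  # same value, without ever forming phi
```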

Parametric model vs non-parametric model

A parametric model captures a model with equations and parameters. For example, a linear regression model is represented by a linear equation parameterized by $w$.

After a model is built, we make predictions directly from $w$ and we do not need to keep the training data. If the model is too simple, our predictions will suffer regardless of how big the training data is.

A non-parametric model uses the similarity between the training data and the testing data to make predictions. The training data becomes the parameters of the model. The K-nearest neighbors classifier is a typical non-parametric model: we locate the K nearest neighbors in the training dataset and make predictions based on them. There are few assumptions on the model, and the predictions improve with the size of the training dataset. A non-parametric model, however, can be expensive to evaluate and optimize if the training dataset is very large.


Generic form

The generic form of linear regression with a kernel is:

$$\hat{y} = \tilde{K}\,(K + \lambda I)^{-1} y, \qquad K_{ij} = k(x_i, x_j)$$

where $X$ contains all training datapoints.

There are many kernel functions. For example, using a feature function $\phi$ to extract features:

$$k(x_i, x_j) = \phi(x_i)^T \phi(x_j)$$

Or a Gaussian function to measure the similarity between the training datapoints and the input:

$$k(x_i, x_j) = \exp\!\Big(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\Big)$$

A kernel transforms the input non-linearly before applying linear regression. Therefore, we can take advantage of the convexity of the linear regression cost function to learn efficiently, yet still obtain a non-linear boundary.


In the example below, we use a Gaussian function as a kernel:


We use the training outputs $y_i$ as the weights instead of learning those parameters:

For example, if we have 3 training datapoints with outputs $y_1, y_2, y_3$, then the prediction at $x$ is $\hat{y}(x) = \sum_{i=1}^{3} y_i\, k(x, x_i)$.

In a classification problem:

Radial basis functions (RBF)

A radial basis function (RBF) model is a non-parametric model that uses a Gaussian function as the kernel while learning the weights $w$. In the first step, we build a matrix $K$ using all training datapoints $x_1, \dots, x_N$:

$$K_{ij} = k(x_i, x_j)$$

where $k$ is a kernel function that measures the similarity between 2 datapoints based on a Gaussian:

$$k(x_i, x_j) = \exp\!\Big(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\Big)$$

and $w$ is learned from $K$ and the true values $y$ by (regularized) linear regression, treating $K$ as the design matrix:

$$w = (K^T K + \lambda I)^{-1} K^T y$$

To make predictions for new datapoints $\tilde{x}$:

we compute $\tilde{K}$, with $\tilde{K}_{ij} = k(\tilde{x}_i, x_j)$.

The output is calculated with:

$$\hat{y} = \tilde{K}\, w$$

If we have only one testing datapoint $\tilde{x}$, the equation simplifies to:

$$\hat{y} = \sum_{i=1}^{N} w_i\, k(\tilde{x}, x_i)$$

For example, with 3 training datapoints, each creates a bell curve indicating how much it contributes to the final output (blue dots) at a query point.

With more datapoints, we can build complex functions out of these Gaussian bumps.
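A minimal sketch of RBF regression as described above (the toy 1-D data, $\sigma$, and $\lambda$ are our own choices; $K$ is treated as the design matrix of a regularized least-squares fit):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=0.5):
    """K[i, j] = exp(-||A_i - B_j||^2 / (2 * sigma^2))."""
    sq_dist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

# Toy 1-D training data (our own numbers).
X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([0.0, 1.0, 0.5, -0.5])

K = gaussian_kernel(X_train, X_train)
lam = 1e-3
# One learned weight per training datapoint.
w = np.linalg.solve(K.T @ K + lam * np.eye(len(K)), K.T @ y_train)

# Predictions at new points: y_hat = K_test @ w.
X_test = np.array([[0.5], [1.5], [2.5]])
print(gaussian_kernel(X_test, X_train) @ w)
```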

SVM with kernels

We reuse a Gaussian kernel:

Instead of the MSE cost, we train the model with an SVM (hinge loss) cost function:

Choice of $\lambda$ and $\sigma$:

  • Larger $\lambda$ means higher regularization, i.e. we want to reduce overfitting. This reduces the variance but increases the bias.
  • Larger $\sigma$ means we take neighbors farther away into consideration. Again, this reduces variance but increases bias. (See the sketch after this list.)
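A small sketch using scikit-learn's RBF-kernel SVM (our own toy data; note that in scikit-learn's parameterization, C plays roughly the role of $1/\lambda$ and gamma roughly the role of $1/(2\sigma^2)$):

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-class data in a center-vs-ring layout (our own construction).
rng = np.random.default_rng(0)
inner = rng.normal(scale=0.5, size=(50, 2))
outer = rng.normal(size=(50, 2))
outer = 3.0 * outer / np.linalg.norm(outer, axis=1, keepdims=True)
X = np.vstack([inner, outer])
y = np.array([0] * 50 + [1] * 50)

# Smaller C (more regularization) and smaller gamma (wider Gaussian) both
# reduce variance at the price of more bias.
clf = SVC(kernel="rbf", C=1.0, gamma=0.5).fit(X, y)
print(clf.score(X, y))  # training accuracy
```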

When should we use a linear model (logistic regression or SVM with a linear kernel) versus an SVM with a Gaussian kernel?

Note: N is the number of features of $x$ and m is the number of training datapoints.

N (large) & m (small): many input features and vulnerable to overfitting with a small amount of data → linear model
N (large) & m (large): training data is too large to use kernels → linear model
N (small) & m (intermediate): use a non-linear kernel to get a more complex model → non-linear kernel
N (small) & m (large): training data is too large to use kernels; add/create new features instead → linear model