Probability mass function & probability density function

A probability mass function (PMF) is the probability distribution for discrete variables. For example, the probability of rolling a fair die and getting a 2 is

$$P(\text{x} = 2) = \frac{1}{6}$$

It uses an upper-case notation $P(\text{x} = x)$. The plain-text form $\text{x}$ represents the variable and the italic form $x$ represents the variable’s value. We use the bold form $\mathbf{x}$ if the variable is a vector. We often shorten both notations to $P(x)$ and $P(\mathbf{x})$, which include the variable’s values only. $x \sim P(\text{x})$ means sampling $x$ from the probability distribution of $\text{x}$.

A probability density function (PDF) is the probability distribution for continuous variables, written with a lower-case notation $p(x)$. For example, $p(x)$ may model the time to complete a task, and the probability of finishing between $t_1$ and $t_2$ seconds is $\int_{t_1}^{t_2} p(x)\,dx$. The total probability over all possibilities equals 1:

$$\int p(x)\,dx = 1$$

$\text{x}$ is called a random variable: its values are the outcomes of a random process, like rolling a die five times. For example, $P(\text{x})$ holds the probability of each outcome after five rolls.

Conditional probability

The conditional probability of $A$ given $B$ is defined as

$$P(A \mid B) = \frac{P(A, B)}{P(B)}$$

Apply the last equation:

$$P(A, B) = P(A \mid B)\,P(B)$$

Chain rule

$$P(A, B, C) = P(A \mid B, C)\,P(B \mid C)\,P(C)$$

Marginal probability

Marginal probability is the probability of a subset of variables. It is often calculated by summing over all the possibilities of the other variables. For example, with discrete variables $x$ and $y$, we sum over all possibilities of $y$:

$$P(x) = \sum_y P(x, y)$$

or, for continuous variables,

$$p(x) = \int p(x, y)\,dy$$
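As a concrete sketch, marginalizing a small joint table over one variable (the table values are made up for illustration):

```python
import numpy as np

# Hypothetical joint distribution P(x, y) for two discrete variables:
# rows index x (3 values), columns index y (2 values).
joint = np.array([[0.10, 0.20],
                  [0.25, 0.15],
                  [0.05, 0.25]])

# Marginal P(x): sum over all possibilities of y (the columns).
p_x = joint.sum(axis=1)   # [0.30, 0.40, 0.30]

# Marginal P(y): sum over all possibilities of x (the rows).
p_y = joint.sum(axis=0)   # [0.40, 0.60]

print(p_x, p_y)
```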
Given 2 events $A$ and $B$ are independent:

$$P(A, B) = P(A)\,P(B)$$

Bayes’ theorem

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

Applied to a belief $\theta$ and observed data $X$:

$$P(\theta \mid X) = \frac{P(X \mid \theta)\,P(\theta)}{P(X)}$$


In Bayes’ theorem,

$P(\theta)$ is called the prior, which quantifies our belief $\theta$. We all start learning probability using a frequentist approach: we calculate a probability as $\frac{\text{count of occurrences}}{\text{total number of trials}}$. For a fair coin, the chance of getting a tail is 0.5. But if the total number of trials is small, the calculated value is unlikely to be accurate. In Bayesian inference, we quantify all possibilities of getting a tail to deal with this uncertainty. We want to find the probability of all the probabilities: $P(\theta)$, where $\theta$ is the chance of getting a tail.

For example, $P(\theta = 0.6)$ means: what is the probability that the coin has a 0.6 chance of getting a tail? Of course, it is much lower than $P(\theta = 0.5)$ if the coin is fair. We can use previous knowledge (including previous data) or an assumption to define the prior at the beginning, and re-adjust it with Bayes’ theorem as evidence is observed.

$P(X \mid \theta)$ is the likelihood of the observed data given the belief (say, the likelihood of observing 2 tails in the next 2 trials). For example, if $\theta = 0.6$, the likelihood of seeing 2 tails is $0.6 \times 0.6 = 0.36$.

The posterior $P(\theta \mid X)$ is the updated belief using Bayes’ equation above after taking the observed data into account. Since we see more tails in our evidence, the peak of the posterior is shifted to the right, but not all the way to $\theta = 1$ as the frequentist approach may calculate. As suspected, we are dealing with a whole distribution of probabilities rather than one single value. However, with the beta function, this can be done easily.

When the next round of sample data is available, we can apply Bayes’ theorem again with the prior replaced by the last posterior. Bayes’ theorem works better than a simple frequency calculation in particular when the sampling error is high because the sample size is small at the beginning.

As the naming indicates, the observed data is also called the evidence and the belief is also called the hypothesis.

Bayes’ theorem can handle multiple variables and provides a better mathematical foundation than the frequentist approach. This section introduces the key terminology. In the later section on the beta function, we will detail the implementation and its advantages.

Naive Bayes’ theorem

Naive Bayes’ theorem assumes the input features $x_i$ and $x_j$ are conditionally independent given $y$, i.e.

$$P(y \mid x_1, \dots, x_n) = \frac{P(y)\,P(x_1, \dots, x_n \mid y)}{P(x_1, \dots, x_n)} = \frac{P(y)\prod_i P(x_i \mid y)}{P(x_1, \dots, x_n)}$$

We often ignore the marginal probability (the denominator) in Naive Bayes’ theorem because it is a constant: the probability of the evidence is unrelated to $y$. We usually compare the calculated values across classes rather than finding the absolute values. In particular, $P(x_1, \dots, x_n)$ may be too hard to find in some problems.

In detecting e-mail spam, $y$ indicates whether the e-mail is spam. $x_i$ represents the presence of the $i$-th word in our vocabulary. $P(x_i \mid y)$ represents the chance that word $i$ is present in a spam e-mail. We can collect the spam e-mails marked by users and calculate this probability easily. By trial and error, we can set a threshold on the calculated value before we mark an e-mail as spam.
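A minimal sketch of this scoring, with made-up word probabilities and prior (all the numbers are illustrative, not from real mail data). The marginal is dropped and log-probabilities are compared, as described above:

```python
import math

# Hypothetical per-word probabilities estimated from user-marked mail:
# P(word present | spam) and P(word present | not spam).
p_word_given_spam = {"free": 0.60, "winner": 0.40, "meeting": 0.05}
p_word_given_ham  = {"free": 0.10, "winner": 0.02, "meeting": 0.30}
p_spam = 0.3  # assumed prior P(y = spam)

def spam_score(words):
    """Compare log P(y) + sum log P(x_i | y) for spam vs. ham.
    The marginal P(x_1, ..., x_n) is dropped: it is constant across classes."""
    log_spam = math.log(p_spam)
    log_ham = math.log(1 - p_spam)
    for w in words:
        if w in p_word_given_spam:
            log_spam += math.log(p_word_given_spam[w])
            log_ham += math.log(p_word_given_ham[w])
    return log_spam - log_ham  # positive => more likely spam

print(spam_score(["free", "winner"]))  # positive: likely spam
print(spam_score(["meeting"]))         # negative: likely ham
```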


Expectation

The definition of expectation for discrete variables:

$$\mathbb{E}_{x \sim P}[f(x)] = \sum_x P(x)\,f(x)$$

This can be shortened as $\mathbb{E}[f(x)]$.

For continuous variables:

$$\mathbb{E}_{x \sim p}[f(x)] = \int p(x)\,f(x)\,dx$$


Variance and covariance

Variance measures how the values of $f(x)$ spread around the expectation:

$$\text{Var}(f(x)) = \mathbb{E}\big[(f(x) - \mathbb{E}[f(x)])^2\big]$$

Covariance measures how two variables are related:

$$\text{Cov}(f(x), g(y)) = \mathbb{E}\big[(f(x) - \mathbb{E}[f(x)])\,(g(y) - \mathbb{E}[g(y)])\big]$$

If the covariance is high, the data tend to take on relatively high (or low) values simultaneously. If it is negative, they tend to take opposite values simultaneously. If it is zero, they are linearly uncorrelated (which does not necessarily imply independence).
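A quick numerical illustration of these three cases with synthetic data (the particular constructions of each $y$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10000)

y_pos = 2 * x + rng.normal(scale=0.1, size=10000)   # moves with x
y_neg = -2 * x + rng.normal(scale=0.1, size=10000)  # moves against x
y_ind = rng.normal(size=10000)                      # unrelated to x

print(np.cov(x, y_pos)[0, 1])  # large positive
print(np.cov(x, y_neg)[0, 1])  # large negative
print(np.cov(x, y_ind)[0, 1])  # near zero
```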

Gaussian distribution/Normal distribution

$$\mathcal{N}(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

$\mathcal{N}(x; 0, 1)$ is called the standard normal distribution.

The PDF of a multivariate Gaussian distribution is defined as:

$$\mathcal{N}(\mathbf{x}; \boldsymbol{\mu}, \Sigma) = \frac{1}{\sqrt{(2\pi)^n \det(\Sigma)}}\,\exp\!\left(-\tfrac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu})\right)$$

Bernoulli distributions

$$P(\text{x} = 1) = \phi, \qquad P(\text{x} = 0) = 1 - \phi$$

(Plot source: Wikipedia)

Binomial distributions

$$P(\text{x} = k) = \binom{n}{k}\,p^k (1-p)^{n-k}$$

(Plot source: Wikipedia)

The Gaussian distribution is the limiting case for the binomial distribution with:

$$\mu = np, \qquad \sigma^2 = np(1-p)$$

Poisson Distribution

Assuming a rare event with an event rate $\lambda$, the probability of observing $x$ events within an interval is:

$$P(x) = \frac{\lambda^x e^{-\lambda}}{x!}$$

Example: if there are 2 earthquakes per 10 years, what is the chance of having 3 earthquakes in 20 years? The event rate for a 20-year interval is $\lambda = 4$, so

$$P(3) = \frac{4^3 e^{-4}}{3!} \approx 0.195$$
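The earthquake example can be checked numerically; the assumption is that the event rate scales with the interval, so 2 earthquakes per 10 years gives $\lambda = 4$ over 20 years:

```python
import math

# Poisson probability of observing x events given event rate lam.
def poisson_pmf(x, lam):
    return lam**x * math.exp(-lam) / math.factorial(x)

# 2 earthquakes per 10 years -> rate 4 for a 20-year interval
# (assumed scaling of the rate with the interval length).
p = poisson_pmf(3, 4.0)
print(round(p, 4))  # ~0.1954
```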

Beta distribution

The definition of a beta distribution is:

$$\text{Beta}(\theta; a, b) = \frac{\theta^{a-1}(1-\theta)^{b-1}}{B(a, b)}$$

where $a$ and $b$ are parameters of the beta distribution and $B(a, b)$ is the beta function.

For integer values of $a$ and $b$, the beta function is defined as:

$$B(a, b) = \frac{(a-1)!\,(b-1)!}{(a+b-1)!}$$

For continuous values, the beta function is:

$$B(a, b) = \frac{\Gamma(a)\,\Gamma(b)}{\Gamma(a+b)} = \int_0^1 t^{a-1}(1-t)^{b-1}\,dt$$

For our discussion, let $\theta$ be the infection rate of the flu. With Bayes’ theorem, we study the probabilities of different infection rates rather than just finding the most likely infection rate. The prior $p(\theta)$ is the belief on the probabilities of different infection rates. $p(\theta = 0.3) = 0.6$ means the probability that the infection rate equals 0.3 is 0.6. If we know nothing about this flu, we use a uniform probability distribution for $p(\theta)$ in Bayes’ theorem and assume any infection rate is equally likely.

A uniform distribution maximizes the entropy (randomness) to reflect the highest uncertainty in our belief.

We use the beta distribution to model our belief. We set $a = 1$, $b = 1$ for a uniform probability distribution: $\text{Beta}(\theta; 1, 1) = 1$ for all $\theta$ in $[0, 1]$.

Different values of $a$ and $b$ result in different probability distributions. For $a > b$, the probability peaks towards $\theta = 1$. For $a < b$, the probability peaks towards $\theta = 0$.

For example, we can start with some prior information about the infection rate of the flu. The mode of the beta distribution is $\frac{a-1}{a+b-2}$ (for $a, b > 1$), so we can choose $a$ and $b$ to place the peak around 0.35; larger values of $a + b$ give a narrower, more confident prior.
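A small sketch of these shapes, evaluating the beta PDF directly (the choice $a = 8$, $b = 14$ is just one hypothetical pair whose mode is 0.35):

```python
import math

def beta_pdf(theta, a, b):
    """Beta(theta; a, b) with B(a, b) = gamma(a) * gamma(b) / gamma(a + b)."""
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return theta**(a - 1) * (1 - theta)**(b - 1) / B

# a = b = 1: uniform -- every infection rate is equally likely.
print(beta_pdf(0.2, 1, 1), beta_pdf(0.9, 1, 1))  # both 1.0

# a > b pushes the peak towards 1; a < b pushes it towards 0.
print(beta_pdf(0.9, 5, 2) > beta_pdf(0.1, 5, 2))  # True

# The mode (a - 1) / (a + b - 2) is 0.35 for the hypothetical a=8, b=14.
mode = (8 - 1) / (8 + 14 - 2)
print(mode)  # 0.35
```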

We model the likelihood of our observed data given a specific infection rate with a binomial distribution. For example, the probability of seeing 4 infections ($x = 4$) out of $N = 10$ samples given the infection rate $\theta$ is:

$$P(x \mid \theta) = \binom{N}{x}\,\theta^{x}(1-\theta)^{N-x}$$

Let’s apply Bayes’ theorem to calculate the posterior $p(\theta \mid x)$: the probability distribution for $\theta$ given the observed data. We usually remove constant terms from the equations because we can always re-normalize the result later if needed.

Using Bayes’ theorem:

$$p(\theta \mid x) \propto P(x \mid \theta)\,p(\theta) \propto \theta^{x}(1-\theta)^{N-x}\,\theta^{a-1}(1-\theta)^{b-1} = \theta^{x+a-1}(1-\theta)^{N-x+b-1}$$

i.e. the posterior is $\text{Beta}(\theta;\, a + x,\, b + N - x)$.

We start with a beta distribution for the prior and end with another beta distribution as the posterior. A prior is called a conjugate prior if the posterior belongs to the same class of functions as the prior. This makes calculating the posterior much easier.

We start with the uniformly distributed prior, assuming we have no prior knowledge of the infection rate: $\theta$ is equally likely to take any value in $[0, 1]$.

For the first set of samples, we have 3 infections out of a sample of 10 ($x = 3$, $N = 10$). The posterior will be $\text{Beta}(\theta; 4, 8)$, which has a peak at 0.3. Even assuming no prior knowledge (a uniform distribution), Bayes’ theorem arrives at a posterior that peaks at the maximum likelihood estimate (0.3) from the first sample data.

Just for discussion, we can instead start with a biased prior that peaks at 100% infection. The same observed sample ($x = 3$, $N = 10$) will then produce a posterior with a peak at 0.62.

Let’s say a second set of samples comes in one week later. We can update the posterior again, with the prior replaced by the last posterior.

With each update, the posterior’s peak moves closer to the maximum likelihood estimate and corrects the previous bias.

When we enter a new flu season, our sampling size for the new flu strain is small. The sampling error can be large if we just use this small sample to compute the infection rate. Instead, we use prior knowledge (say, the last 12 months of data) to compute a prior for the infection rate. Then we use Bayes’ theorem with the prior and the likelihood to compute the posterior probability. When the data size is small, the posterior relies more on the prior, but as the sampling size increases, it re-adjusts itself to the new sample data. Hence, Bayes’ theorem can give better predictions.
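The updating scheme above can be sketched with the beta-binomial conjugacy; the first batch matches the example in the text, while the second batch’s counts are hypothetical:

```python
# Sequential Bayesian updating with the beta-binomial conjugacy:
# the posterior Beta(a + x, b + N - x) becomes the prior for the next batch.
def update(a, b, x, N):
    return a + x, b + (N - x)

# Start with no prior knowledge: the uniform prior Beta(1, 1).
a, b = 1, 1

# First batch: 3 infections out of 10 samples.
a, b = update(a, b, x=3, N=10)   # Beta(4, 8)
peak = (a - 1) / (a + b - 2)
print(a, b, peak)                # 4 8 0.3

# Second batch a week later (hypothetical counts): 6 out of 10.
a, b = update(a, b, x=6, N=10)   # Beta(10, 12)
print((a - 1) / (a + b - 2))     # 0.45
```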

Given datapoints $D$, we can compute the probability of a new observation $x$ by integrating the probability of each possible $\theta$ with the probability of $x$ given that $\theta$, i.e. the expected value $\mathbb{E}_{\theta \sim p(\theta \mid D)}[P(x \mid \theta)]$:

$$P(x \mid D) = \int P(x \mid \theta)\,p(\theta \mid D)\,d\theta$$

Dirac distribution

The Dirac distribution models a distribution whose mass is sharply located at a single point $\mu$:

$$p(x) = \delta(x - \mu)$$

Exponential and Laplace distribution

Both the exponential and the Laplace distribution have a sharp point at 0, which is sometimes useful in machine learning.

Exponential distribution:

$$p(x; \lambda) = \lambda\,e^{-\lambda x} \quad \text{for } x \ge 0 \text{, and } 0 \text{ otherwise}$$

(Plot source: Wikipedia)

Laplace distribution:

$$\text{Laplace}(x; \mu, \gamma) = \frac{1}{2\gamma}\,\exp\!\left(-\frac{|x - \mu|}{\gamma}\right)$$


Calculate bias and variances of a Gaussian distribution estimator

Gaussian equation:

$$\mathcal{N}(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

A common estimator for the Gaussian mean parameter from $N$ sampled datapoints:

$$\hat{\mu} = \frac{1}{N}\sum_{i=1}^{N} x_i$$

Estimate the bias of the estimator:

$$\text{bias}(\hat{\mu}) = \mathbb{E}[\hat{\mu}] - \mu = \mathbb{E}\Big[\frac{1}{N}\sum_{i=1}^{N} x_i\Big] - \mu = \frac{1}{N}\sum_{i=1}^{N}\mathbb{E}[x_i] - \mu = \mu - \mu = 0$$

Hence, our estimator for the Gaussian mean parameter has zero bias.

Let’s consider the following estimator for the Gaussian variance parameter:

$$\hat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i - \hat{\mu})^2$$

Estimate the bias of the estimator:

$$\text{bias}(\hat{\sigma}^2) = \mathbb{E}[\hat{\sigma}^2] - \sigma^2$$

Calculate $\mathbb{E}[\hat{\sigma}^2]$ first:

$$\mathbb{E}[\hat{\sigma}^2] = \mathbb{E}\Big[\frac{1}{N}\sum_{i=1}^{N}(x_i - \hat{\mu})^2\Big] = \frac{N-1}{N}\sigma^2$$

so $\text{bias}(\hat{\sigma}^2) = \frac{N-1}{N}\sigma^2 - \sigma^2 = -\frac{\sigma^2}{N} \ne 0$.

Hence, this estimator is biased. Intuitively, $\hat{\mu}$ sometimes over-estimates and sometimes under-estimates $\mu$, but it always sits at the center of the sampled data, so the squared deviations measured from $\hat{\mu}$ are, on average, smaller than those measured from the true $\mu$. The estimator therefore under-estimates $\sigma^2$. The correct (unbiased) estimator for $\sigma^2$ is:

$$\hat{\sigma}^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \hat{\mu})^2$$

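A quick simulation illustrates the bias (the sample size $N = 5$ and the Gaussian parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, N, trials = 0.0, 1.0, 5, 200000

# Draw many independent samples of size N from N(mu, sigma2).
samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, N))
mean_hat = samples.mean(axis=1, keepdims=True)

# Biased estimator: divide by N. Its expectation is (N-1)/N * sigma^2.
var_biased = ((samples - mean_hat) ** 2).sum(axis=1) / N
# Unbiased estimator: divide by N - 1.
var_unbiased = ((samples - mean_hat) ** 2).sum(axis=1) / (N - 1)

print(var_biased.mean())    # ~0.8 = (N-1)/N * sigma^2
print(var_unbiased.mean())  # ~1.0
```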

Calculate bias and variances of a Bernoulli Distribution estimator

Bernoulli distribution:

$$P(\text{x} = x; \phi) = \phi^{x}(1-\phi)^{1-x}, \quad x \in \{0, 1\}$$

A common estimator for $\phi$ from $N$ samples will be:

$$\hat{\phi} = \frac{1}{N}\sum_{i=1}^{N} x_i$$

Find the bias:

$$\text{bias}(\hat{\phi}) = \mathbb{E}[\hat{\phi}] - \phi = \frac{1}{N}\sum_{i=1}^{N}\mathbb{E}[x_i] - \phi = \frac{1}{N}\sum_{i=1}^{N}\phi - \phi = 0$$

i.e. our estimator has no bias.

The variance of $\hat{\phi}$ drops as $N$ increases:

$$\text{Var}(\hat{\phi}) = \text{Var}\Big(\frac{1}{N}\sum_{i=1}^{N} x_i\Big) = \frac{1}{N^2}\sum_{i=1}^{N}\text{Var}(x_i) = \frac{\phi(1-\phi)}{N}$$
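A short simulation confirming that the estimator’s variance shrinks like $\phi(1-\phi)/N$ (the value $\phi = 0.3$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = 0.3

# phi_hat is the mean of N Bernoulli draws; its variance is phi(1-phi)/N,
# so it shrinks as N grows.
for N in (10, 100, 1000):
    estimates = rng.binomial(N, phi, size=100000) / N
    print(N, round(estimates.var(), 5), round(phi * (1 - phi) / N, 5))
```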