Candy Note

Gamma Distribution

Gamma Distribution \[\begin{aligned} \mathrm{Gam}(\lambda \vert a,b) &= \frac{1}{\Gamma(a)} b^a \lambda^{a-1}\exp(-b\lambda) \end{aligned}\] To verify the integral over $\lambda$ is 1 ...
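
A quick numerical check of that normalisation (a minimal sketch, not from the post; the shape and rate values $a=2$, $b=3$ are arbitrary test values):

```python
# Check that Gam(lambda | a, b) = b^a / Gamma(a) * lambda^(a-1) * exp(-b*lambda)
# integrates to 1 over lambda in (0, inf).
import numpy as np
from scipy.special import gamma as gamma_fn
from scipy.integrate import quad

def gam_pdf(lam, a, b):
    return (b ** a) * lam ** (a - 1) * np.exp(-b * lam) / gamma_fn(a)

total, _ = quad(gam_pdf, 0, np.inf, args=(2.0, 3.0))
print(total)  # ~1.0
```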

Dirichlet Distribution

Dirichlet Distribution We now introduce a prior distribution for the parameters $\boldsymbol{\mu} = \{\mu_1, \mu_2, \cdots, \mu_K\}$ of the multinomial distribution. By inspection...
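
A minimal sketch of this prior using scipy.stats.dirichlet; the concentration values are arbitrary placeholders, not taken from the post:

```python
# Dirichlet samples live on the probability simplex: each draw of
# mu = {mu_1, ..., mu_K} is non-negative and sums to 1.
import numpy as np
from scipy.stats import dirichlet

alpha = np.array([2.0, 3.0, 4.0])              # hypothetical concentration parameters
samples = dirichlet.rvs(alpha, size=5, random_state=0)
print(samples.sum(axis=1))                     # each row sums to 1
print(dirichlet.pdf([0.2, 0.3, 0.5], alpha))   # density at one point on the simplex
```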

Beta Distribution

Beta Distribution Now we introduce a prior distribution $p(\mu)$ over the parameter $\mu$ in the Bernoulli and Binomial distributions. The beta distribution as the prior distribution...
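
A minimal sketch checking the Beta density numerically, assuming the standard form $\mathrm{Beta}(\mu \vert a,b) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \mu^{a-1}(1-\mu)^{b-1}$; the values $a=2$, $b=5$ are arbitrary:

```python
# Verify normalisation and the known mean E[mu] = a / (a + b) by integration.
from scipy.special import gamma as G
from scipy.integrate import quad

a, b = 2.0, 5.0

def beta_pdf(mu):
    return G(a + b) / (G(a) * G(b)) * mu ** (a - 1) * (1 - mu) ** (b - 1)

print(quad(beta_pdf, 0, 1)[0])                      # normalisation: ~1.0
print(quad(lambda mu: mu * beta_pdf(mu), 0, 1)[0])  # mean: a/(a+b) = 2/7
```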

Bernoulli and Binomial Distribution

Bernoulli Distribution The Bernoulli distribution is the discrete probability distribution of a random variable which takes the value 1 with probability $\mu$ and the value 0 with probability...
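
A minimal sketch of the Bernoulli pmf and its relation to the Binomial via scipy.stats; $\mu$ and the trial counts below are arbitrary test values:

```python
# Bernoulli assigns mu to x=1 and 1-mu to x=0; Binomial with N=1 coincides with it.
from scipy.stats import bernoulli, binom

mu = 0.3
print(bernoulli.pmf(1, mu), bernoulli.pmf(0, mu))  # mu and 1 - mu
print(binom.pmf(1, 1, mu))                         # equals bernoulli.pmf(1, mu)
print(binom.pmf(4, 10, mu))                        # P(4 successes in 10 trials)
```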

A Quick Look at Discrete Distributions

Bernoulli distribution $x\sim \mathrm{Bern}(x\vert \mu)$ \[\begin{aligned} p(x) = \mathrm{Bern}(x\vert \mu) &= \mu^x(1-\mu)^{1-x} \\ &= \mu^{\mathbb{I}(x=1)}(1-\mu)^{\mathbb{I}(x=0)} \\ \\ \mathbb...
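
A minimal sketch confirming the two forms above agree; $\mu = 0.7$ is an arbitrary test value:

```python
# Compare mu^x (1-mu)^(1-x) with the indicator form; Python bools act as 0/1 exponents.
mu = 0.7
for x in (0, 1):
    power_form = mu ** x * (1 - mu) ** (1 - x)
    indicator_form = mu ** (x == 1) * (1 - mu) ** (x == 0)
    assert power_form == indicator_form
    print(x, power_form)
```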

A Quick Look at Continuous Distributions

Uniform distribution $x\sim \mathcal{U}(x\vert a,b)$, $x\in [a,b]$ \[\begin{aligned} p(x) = \mathcal{U}(x\vert a,b) = \frac {1}{b-a} \end{aligned}\] Univariate Gaussian distribution ...
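
A minimal sketch checking that density against scipy.stats.uniform, which is parameterised by loc $=a$ and scale $=b-a$; the endpoints are arbitrary test values:

```python
# The Uniform(a, b) density is the constant 1/(b - a) on [a, b].
from scipy.stats import uniform

a, b = 2.0, 5.0
u = uniform(loc=a, scale=b - a)
print(u.pdf(3.0), 1 / (b - a))  # both 1/3
print(u.cdf(b) - u.cdf(a))      # total mass on [a, b] is 1
```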

Probability Theory

Probability Theory Sum Rule \(p(X) = \sum_Y p(X,Y)\) Product Rule \(p(X,Y) = p(Y \vert X)p(X)\) Bayes’ Theorem \(\begin{aligned} p(Y\vert X) = \frac {p(X \vert Y)p(...
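
A minimal sketch exercising all three rules on a small made-up joint table $p(X,Y)$:

```python
# Sum rule marginalises, product rule factorises, Bayes inverts the conditional.
import numpy as np

p_xy = np.array([[0.10, 0.20],   # rows: X = 0, 1
                 [0.30, 0.40]])  # cols: Y = 0, 1
p_x = p_xy.sum(axis=1)                    # sum rule: p(X) = sum_Y p(X, Y)
p_y = p_xy.sum(axis=0)
p_y_given_x = p_xy / p_x[:, None]         # from the product rule p(X,Y) = p(Y|X)p(X)
p_x_given_y = p_y_given_x * p_x[:, None] / p_y[None, :]   # Bayes' theorem
print(np.allclose(p_x_given_y * p_y[None, :], p_xy))      # True
```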

Conjugate Priors

Bernoulli Distribution The Bernoulli distribution is the discrete probability distribution of a random variable which takes the value 1 with probability $\mu$ and the value 0 with probability...
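
A minimal sketch of the Beta-Bernoulli conjugate update this post builds on: with a $\mathrm{Beta}(a,b)$ prior on $\mu$ and $m$ ones, $l$ zeros observed, the posterior is $\mathrm{Beta}(a+m, b+l)$. The hyperparameters and data below are arbitrary:

```python
# Conjugacy: the Beta prior and Bernoulli likelihood give a Beta posterior,
# so updating is just adding counts to the hyperparameters.
from scipy.stats import beta, bernoulli

a, b = 2.0, 2.0                                      # hypothetical prior hyperparameters
data = bernoulli.rvs(0.7, size=20, random_state=0)   # simulated coin flips
m = data.sum()
l = len(data) - m
posterior = beta(a + m, b + l)
print(posterior.mean())                              # (a + m) / (a + b + m + l)
```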

Bayes' Theorem for Gaussian Variables

Bayes’ Theorem for Gaussian Variables Given the distributions $p(\boldsymbol{x})$ and $p(\boldsymbol{y} \vert \boldsymbol{x})$, both of which are Gaussian, like this: \[\begin{...
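
A minimal sketch of the standard linear-Gaussian results (the excerpt is truncated, so the exact setup is assumed): with $p(\boldsymbol{x}) = \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Lambda}^{-1})$ and $p(\boldsymbol{y}\vert\boldsymbol{x}) = \mathcal{N}(\boldsymbol{A}\boldsymbol{x}+\boldsymbol{b}, \boldsymbol{L}^{-1})$, the marginal and posterior moments come out in closed form; all numbers below are arbitrary:

```python
# Marginal: p(y) = N(A mu + b, L^-1 + A Lam^-1 A^T)
# Posterior: p(x|y) = N(Sigma (A^T L (y - b) + Lam mu), Sigma), Sigma = (Lam + A^T L A)^-1
import numpy as np

mu = np.array([0.0, 1.0])
Lam = np.eye(2) * 2.0                  # prior precision
A = np.array([[1.0, 0.5]])
b = np.array([0.3])
L = np.array([[4.0]])                  # likelihood precision

y_mean = A @ mu + b
y_cov = np.linalg.inv(L) + A @ np.linalg.inv(Lam) @ A.T

y = np.array([1.2])                    # a hypothetical observation
Sigma = np.linalg.inv(Lam + A.T @ L @ A)
x_mean = Sigma @ (A.T @ L @ (y - b) + Lam @ mu)
print(y_mean, y_cov, x_mean, sep="\n")
```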

Information Theory

Information Theory Information Content or Self-Information \(\begin{aligned} h(x) &= -\log_2 p(x) \\ h(x) &= -\ln p(x) \end{aligned}\) Information Entropy: The average ...
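
A minimal sketch computing self-information and entropy in both bits ($\log_2$) and nats ($\ln$) for a made-up distribution:

```python
# Entropy is the expected self-information: H = E[h(x)] = -sum_x p(x) log p(x).
import numpy as np

p = np.array([0.5, 0.25, 0.25])
h_bits = -np.log2(p)             # self-information of each outcome, in bits
H_bits = np.sum(p * h_bits)      # entropy: 1.5 bits
H_nats = -np.sum(p * np.log(p))  # the same entropy measured in nats
print(h_bits, H_bits, H_nats)
```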