Difficulty: Normal
Everybody loves the Gaussian distribution (or if you prefer, the Normal distribution) to bits. It’s easy to see why: The Gaussian distribution is easy to handle, there are tons of closed-form results, and if you get a large enough sample size, everything turns Gaussian!
Because we love Gaussians so much, we love to sample from them. But how do you sample from a multivariate Gaussian? Its density doesn't look like something whose CDF you can easily invert, as you would for inverse transform sampling:
\[\mathcal{N}(\mu, \Sigma) = \frac{1}{\sqrt{(2\pi)^d|\Sigma|}}\,e^{-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)}\]

In fact, it looks pretty difficult. So I asked myself, what really happens when you invoke mvrnorm(mu, Sigma)?
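For contrast, the univariate case is easy, because the inverse CDF of \(\mathcal{N}(0,1)\) ships with every stats library. A minimal sketch in R, using the built-in qnorm as that inverse CDF:

```r
# Inverse transform sampling in one dimension: if U ~ Uniform(0, 1),
# then qnorm(U), the inverse CDF of N(0, 1) applied to U, is N(0, 1)
u <- runif(10000)
z <- qnorm(u)
c(mean(z), sd(z))  # should be close to 0 and 1
```

No such ready-made inverse exists for the multivariate density above.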
Thankfully, because the Gaussian is so old and well-studied, there exist really good tutorial papers on this kind of stuff. Essentially, sampling from a multivariate Gaussian is done in three stages:
- Generate \(n\) independent random variables \(u_i\) (uniform variates are the usual choice). The central limit theorem then tells us that their sum, suitably shifted and scaled, is approximately a sample from \(\mathcal{N}(0,1)\) (see the sketch after this list).
- Repeat step 1 \(d\) times to get a \(d\)-dimensional vector \(x\). This is a sample from \(\mathcal{N}(0,I_d)\).
- Now for the tricky part: use the eigenvectors and eigenvalues of the covariance matrix \(\Sigma\) of the distribution we wish to sample from to transform the \(\mathcal{N}(0,I_d)\) sample into a \(\mathcal{N}(\mu,\Sigma)\) sample.
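To make steps 1 and 2 concrete, here is a minimal sketch in R (the helper name clt_normal is mine). The sum of twelve Uniform(0, 1) variates has mean 6 and variance \(12 \times \frac{1}{12} = 1\), so subtracting 6 gives an approximately standard normal variable:

```r
# Step 1: approximate N(0, 1) via the CLT -- the sum of 12 Uniform(0, 1)
# variates has mean 6 and variance 1, so subtracting 6 standardises it
clt_normal <- function() sum(runif(12)) - 6

# Step 2: stack d such draws into an (approximate) sample from N(0, I_d)
d <- 3
x <- replicate(d, clt_normal())
```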
The last step bears some explanation: if \(\Phi\) is the matrix whose columns are the normalised eigenvectors of \(\Sigma\), and \(\Delta\) is a diagonal matrix with the corresponding eigenvalues along its diagonal, so that \(\Sigma = \Phi\Delta\Phi^T\), then:
\[y = \Phi\Delta^{\frac{1}{2}}x + \mu\]
is a sample from \(\mathcal{N}(\mu,\Sigma)\): since \(x\) has identity covariance, \(y\) has covariance \(\Phi\Delta^{\frac{1}{2}}(\Phi\Delta^{\frac{1}{2}})^T = \Phi\Delta\Phi^T = \Sigma\).
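Putting the three stages together, here is a minimal sketch in R (the function name rmvn_eigen is mine, and I use rnorm for steps 1 and 2 rather than the CLT approximation):

```r
# Sample one draw from N(mu, Sigma) via the eigendecomposition of Sigma
rmvn_eigen <- function(mu, Sigma) {
  d <- length(mu)
  e <- eigen(Sigma, symmetric = TRUE)  # Phi = e$vectors, Delta = diag(e$values)
  x <- rnorm(d)                        # steps 1-2: a draw from N(0, I_d)
  # Step 3: y = Phi Delta^(1/2) x + mu
  as.vector(e$vectors %*% (sqrt(e$values) * x)) + mu
}

# Example: a correlated two-dimensional Gaussian
mu    <- c(1, -1)
Sigma <- matrix(c(2, 0.8, 0.8, 1), nrow = 2)
y     <- rmvn_eigen(mu, Sigma)
```

The sample covariance of many such draws should approach \(\Sigma\). As far as I can tell, this is essentially what MASS::mvrnorm does under the hood, though many implementations use a Cholesky factor of \(\Sigma\) instead of the eigendecomposition, which is cheaper to compute.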
The above is taken from this excellent review by István Hernádvölgyi.