Introduction to Gradient Flows in the 2-Wasserstein Space
Gradient flows are a classical tool in the analysis of PDEs, and recently various gradient flows have been studied in the machine learning literature. This article is an introduction to the concept of gradient flows in the 2-Wasserstein space.
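As a taste of the topic, the best-known example (due to Jordan, Kinderlehrer, and Otto, 1998) is that the Fokker–Planck equation is the gradient flow, in the 2-Wasserstein metric, of a free-energy functional. Here \(\rho_t\) is a probability density and \(V\) a potential:

```latex
\partial_t \rho_t
\;=\; \nabla \cdot (\rho_t \nabla V) + \Delta \rho_t
\;=\; \nabla \cdot \Big( \rho_t \, \nabla \tfrac{\delta \mathcal{F}}{\delta \rho}(\rho_t) \Big),
\qquad
\mathcal{F}(\rho) \;=\; \int V \rho \, dx + \int \rho \log \rho \, dx .
```

The second equality follows from \(\frac{\delta \mathcal{F}}{\delta \rho} = V + \log \rho + 1\), so that \(\rho \, \nabla \frac{\delta \mathcal{F}}{\delta \rho} = \rho \nabla V + \nabla \rho\).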
Monte Carlo Sampling using Langevin Dynamics
Langevin Monte Carlo is a class of Markov chain Monte Carlo (MCMC) algorithms that generate samples from a probability distribution of interest by simulating the Langevin equation. This post explores the basics of Langevin Monte Carlo.
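As a minimal sketch of the idea, the unadjusted Langevin algorithm discretizes the Langevin equation with step size \(\epsilon\): each update moves along the score \(\nabla \log p(x)\) and adds Gaussian noise scaled by \(\sqrt{2\epsilon}\). The function and parameter names below are illustrative, not from the post:

```python
import numpy as np

def langevin_sampler(grad_log_p, x0, step_size=1e-2, n_steps=20000, seed=0):
    """Unadjusted Langevin algorithm (ULA):
    x_{k+1} = x_k + eps * grad_log_p(x_k) + sqrt(2 * eps) * xi_k,
    with xi_k ~ N(0, I). No Metropolis correction is applied, so the
    chain targets p only approximately (bias -> 0 as eps -> 0).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps,) + x.shape)
    for k in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step_size * grad_log_p(x) + np.sqrt(2.0 * step_size) * noise
        samples[k] = x
    return samples

# Target: standard normal, whose score is grad log p(x) = -x.
samples = langevin_sampler(lambda x: -x, x0=np.array([3.0]))
tail = samples[5000:]  # discard burn-in
print(tail.mean(), tail.std())  # should approach 0 and 1
```

Note that the chain is correlated, so the effective sample size is much smaller than the number of steps; in practice one discards a burn-in period and may thin the chain.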
1-Wasserstein distance: Kantorovich–Rubinstein duality
The 1-Wasserstein distance is a popular integral probability metric. In this post, the dual form of the 1-Wasserstein distance is derived from its primal form.
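For reference, the two forms the post connects are the primal optimal-transport problem and its Kantorovich–Rubinstein dual:

```latex
W_1(\mu, \nu)
\;=\; \inf_{\gamma \in \Pi(\mu, \nu)} \int \|x - y\| \, d\gamma(x, y)
\;=\; \sup_{\|f\|_{\mathrm{Lip}} \le 1} \left( \int f \, d\mu - \int f \, d\nu \right),
```

where \(\Pi(\mu, \nu)\) is the set of couplings with marginals \(\mu\) and \(\nu\), and the supremum ranges over 1-Lipschitz functions \(f\).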
Normalizing Flows: Planar and Radial Flows
A normalizing flow transforms a simple probability distribution into a complex one by applying a sequence of invertible functions to samples from the simple distribution. This post explores two simple flows introduced by Rezende and Mohamed (2015): the planar flow and the radial flow.
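As a sketch of the first of the two, the planar flow from Rezende and Mohamed (2015) is \(f(z) = z + u\,h(w^\top z + b)\) with \(h = \tanh\); its log-det-Jacobian is \(\log|1 + u^\top \psi(z)|\) with \(\psi(z) = h'(w^\top z + b)\, w\). The function below is an illustrative NumPy version, not code from the post:

```python
import numpy as np

def planar_flow(z, u, w, b):
    """Planar flow f(z) = z + u * tanh(w^T z + b), applied row-wise to a
    (batch, dim) array z. Returns (f(z), log |det Jacobian|).
    For invertibility, u must satisfy w^T u >= -1.
    """
    lin = z @ w + b                              # (batch,)
    f = z + np.outer(np.tanh(lin), u)            # (batch, dim)
    psi = np.outer(1.0 - np.tanh(lin) ** 2, w)   # h'(lin) * w, (batch, dim)
    log_det = np.log(np.abs(1.0 + psi @ u))      # (batch,)
    return f, log_det

# Push standard-normal samples through one planar layer. By the change of
# variables formula, log q(f(z)) = log q0(z) - log_det.
rng = np.random.default_rng(0)
z = rng.standard_normal((1000, 2))
f, log_det = planar_flow(z, u=np.array([1.0, 0.0]),
                         w=np.array([2.0, 0.0]), b=0.0)
```

Stacking several such layers, each with its own \((u, w, b)\), and summing the log-det terms yields the tractable log-density that makes flows useful for variational inference.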
Implicit Reparameterization Gradients
Backpropagation through a stochastic node is an important problem in deep learning. Implicit reparameterization gradients go beyond the standard reparameterization trick to enable efficient gradient computation in this setting.
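The core identity (from Figurnov, Mohamed, and Mnih, 2018) can be stated briefly. If \(z \sim q_\phi\) has CDF \(F(z; \phi)\), then \(F(z; \phi) = u\) with \(u\) independent of \(\phi\), and implicitly differentiating this equation gives the gradient of the sample without inverting \(F\):

```latex
\nabla_\phi z
\;=\; - \left( \frac{\partial F(z; \phi)}{\partial z} \right)^{-1} \nabla_\phi F(z; \phi)
\;=\; - \frac{\nabla_\phi F(z; \phi)}{q_\phi(z)} .
```

As a sanity check, for a Gaussian with \(F(z; \mu, \sigma) = \Phi\!\big((z - \mu)/\sigma\big)\) this recovers \(\partial z / \partial \mu = 1\), matching the explicit reparameterization \(z = \mu + \sigma \varepsilon\).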