
The Randomized Midpoint Method for Log-Concave Sampling
Sampling from log-concave distributions is a well-studied problem with many applications in statistics and machine learning. We study distributions of the form p^* ∝ exp(−f(x)), where f : R^d → R has an L-Lipschitz gradient and is m-strongly convex. In our paper, we propose a Markov chain Monte Carlo (MCMC) algorithm based on the underdamped Langevin diffusion (ULD). It achieves ϵ·D error (in 2-Wasserstein distance) in Õ(κ^(7/6)/ϵ^(1/3) + κ/ϵ^(2/3)) steps, where D := √(d/m) is the effective diameter of the problem and κ := L/m is the condition number. Our algorithm is significantly faster than the previously best known algorithm for this problem, which requires Õ(κ^(3/2)/ϵ) steps. Moreover, our algorithm can be easily parallelized to require only O(κ log(1/ϵ)) parallel steps. To solve the sampling problem, we propose a new framework for discretizing stochastic differential equations. We apply this framework to discretize and simulate ULD, which converges to the target distribution p^*. The framework can be used to solve not only the log-concave sampling problem, but any problem that involves simulating (stochastic) differential equations.
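To make the randomized-midpoint idea concrete, the sketch below applies it to the simpler *overdamped* Langevin dynamics dX = −∇f(X) dt + √2 dB_t: each step draws a uniformly random point α in the step interval, takes a cheap Euler predictor to that point, and then uses the gradient evaluated there for the full step. This is an illustrative simplification, not the paper's algorithm: the paper applies the idea to the underdamped diffusion with exactly integrated Gaussian increments, whereas here the noise is split into the Brownian segment up to the midpoint and a fresh remainder. The function names and the Gaussian test target are choices made for this example.

```python
import numpy as np

def randomized_midpoint_langevin(grad_f, x0, step, n_steps, rng):
    """Simplified randomized-midpoint discretization of overdamped
    Langevin dynamics dX = -grad_f(X) dt + sqrt(2) dB_t.

    Illustrative only: the paper's scheme works on the underdamped
    diffusion and integrates its Gaussian part exactly; this sketch
    keeps just the random-midpoint gradient evaluation."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        alpha = rng.uniform()                    # random point in (0, 1)
        z1 = rng.standard_normal(x.shape)        # Brownian increment up to alpha*step
        z2 = rng.standard_normal(x.shape)        # fresh increment for the remainder
        # Euler predictor to the random intermediate time alpha * step
        x_mid = x - alpha * step * grad_f(x) + np.sqrt(2 * alpha * step) * z1
        # Full step driven by the gradient at the random midpoint;
        # reuse z1 so the Brownian path up to the midpoint is shared
        x = (x - step * grad_f(x_mid)
             + np.sqrt(2 * alpha * step) * z1
             + np.sqrt(2 * (1 - alpha) * step) * z2)
    return x

# Sample from a 1-D standard Gaussian target, f(x) = x^2 / 2, grad_f(x) = x
rng = np.random.default_rng(0)
samples = np.array([
    randomized_midpoint_langevin(lambda x: x, np.zeros(1), 0.1, 200, rng)[0]
    for _ in range(2000)
])
print(samples.mean(), samples.var())  # both should be close to N(0, 1) moments
```

Evaluating the gradient at a *uniformly random* point of the step interval, rather than at its left endpoint, is what removes the leading-order bias of the Euler scheme in expectation; this is the source of the improved ϵ-dependence in the paper's bound.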