| field | value |
|---|---|
| title | Random Coordinate Underdamped Langevin Monte Carlo |
| abstract | The Underdamped Langevin Monte Carlo (ULMC) is a popular Markov chain Monte Carlo sampling method. It requires the computation of the full gradient of the log-density at each iteration, an expensive operation if the dimension of the problem is high. We propose a sampling method called Random Coordinate ULMC (RC-ULMC), which selects a single coordinate at each iteration to be updated and leaves the other coordinates untouched. We investigate the computational complexity of RC-ULMC and compare it with the classical ULMC for strongly log-concave probability distributions. We show that RC-ULMC is always cheaper than the classical ULMC, with a significant cost reduction when the problem is highly skewed and high dimensional. Our complexity bound for RC-ULMC is also tight in terms of dimension dependence. |
| layout | inproceedings |
| series | Proceedings of Machine Learning Research |
| publisher | PMLR |
| issn | 2640-3498 |
| id | ding21b |
| month | 0 |
| tex_title | Random Coordinate Underdamped Langevin Monte Carlo |
| firstpage | 2701 |
| lastpage | 2709 |
| page | 2701-2709 |
| order | 2701 |
| cycles | false |
| bibtex_author | Ding, Zhiyan and Li, Qin and Lu, Jianfeng and Wright, Stephen |
| author | |
| date | 2021-03-18 |
| address | |
| container-title | Proceedings of The 24th International Conference on Artificial Intelligence and Statistics |
| volume | 130 |
| genre | inproceedings |
| issued | |
| extras | |
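The abstract describes the key mechanism of RC-ULMC: each iteration picks one coordinate, evaluates only that partial derivative of the log-density, and updates only that position/velocity pair. Below is a minimal illustrative sketch of such a single-coordinate underdamped Langevin step. It is not the paper's exact scheme: the `rc_ulmc_step` and `partial_grad` names are hypothetical, and a plain Euler-Maruyama discretization of the underdamped dynamics is assumed for simplicity, whereas the paper's discretization and step-size choices may differ.

```python
import numpy as np

def rc_ulmc_step(x, v, partial_grad, h=1e-2, gamma=1.0, rng=None):
    """One Random Coordinate ULMC step (illustrative sketch, not the
    paper's exact discretization).

    Underdamped Langevin dynamics:
        dX = V dt,  dV = -gamma * V dt + grad log p(X) dt + sqrt(2*gamma) dB.
    Here only one uniformly chosen coordinate i is advanced, so each
    iteration costs a single partial derivative instead of a full
    O(d) gradient. `partial_grad(x, i)` is a hypothetical helper that
    returns d/dx_i of log p(x).
    """
    rng = rng or np.random.default_rng()
    d = x.shape[0]
    i = rng.integers(d)                      # pick one coordinate uniformly
    g = partial_grad(x, i)                   # single partial derivative
    x[i] += h * v[i]                         # move position, coordinate i only
    # Euler-Maruyama velocity update: friction + force + noise, coordinate i only
    v[i] += -gamma * h * v[i] + h * g + np.sqrt(2.0 * gamma * h) * rng.standard_normal()
    return x, v

# Usage sketch: a skewed diagonal Gaussian target, log p(x) = -sum(D * x**2) / 2,
# where the per-coordinate gradient -D[i] * x[i] is cheap to evaluate.
D = np.array([1.0, 100.0])                   # diagonal precision, condition number 100
pgrad = lambda x, i: -D[i] * x[i]
x, v = np.ones(2), np.zeros(2)
for _ in range(10_000):
    x, v = rc_ulmc_step(x, v, pgrad, h=1e-2)
```

The skewed target in the usage example mirrors the abstract's claim that the cost advantage of RC-ULMC is largest for highly skewed, high-dimensional problems: coordinates with small curvature tolerate larger effective steps, while a full-gradient method must pay for every coordinate at every iteration.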