
Zoubin Ghahramani

Yee Whye Teh edited this page Jun 9, 2015 · 1 revision

Scaling up Bayesian inference

I will first outline several reasons why I think Bayesian models are still highly relevant in the age of Big Data. Much of the recent work in our group has focused on scalability, so I will then give very brief pointers to a few directions we have been pursuing. These include (1) submodular optimisation for approximate inference (with Colorado Reed), (2) subsampling-based MCMC (with Yutian Chen), (3) sparse stochastic variational inference for Gaussian process classification (with James Hensman and Alex Matthews) and (4) the rational allocation of computational resources (with James Lloyd).
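To give a flavour of direction (2), here is a minimal sketch of the general idea behind subsampling-based MCMC: at each Metropolis-Hastings step, the full-data log-likelihood ratio is replaced by a scaled estimate computed on a random minibatch, trading exactness for a per-iteration cost independent of the dataset size. This is a generic illustration of the idea, not the specific algorithm developed with Yutian Chen; the model (a Gaussian with unknown mean and a flat prior) and all parameter choices are assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: N observations from a Gaussian with unknown mean.
N = 10_000
data = rng.normal(loc=2.0, scale=1.0, size=N)

def log_lik(theta, x):
    # Gaussian log-likelihood with unit variance (constants dropped).
    return -0.5 * (x - theta) ** 2

def subsampled_mh(data, n_iter=2000, batch=1000, step=0.05, theta0=0.0):
    """Approximate Metropolis-Hastings: the log-likelihood ratio is
    estimated on a random minibatch and scaled up to the full data size.
    A flat prior is assumed, so the prior term cancels in the ratio."""
    n = len(data)
    theta = theta0
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        idx = rng.choice(n, size=batch, replace=False)
        # Scaled minibatch estimate of the full log-likelihood ratio.
        ratio = (n / batch) * np.sum(
            log_lik(prop, data[idx]) - log_lik(theta, data[idx])
        )
        if np.log(rng.uniform()) < ratio:
            theta = prop
        samples.append(theta)
    return np.array(samples)

samples = subsampled_mh(data)
posterior_mean = samples[1000:].mean()  # discard burn-in
```

The minibatch estimate injects noise into the accept/reject decision, so the chain targets the posterior only approximately; the work pointed to above is precisely about controlling that approximation error while keeping the computational savings.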
