In this presentation I will introduce a novel class of estimators for Monte Carlo integration that leverage gradient information to deliver the improved performance demanded by contemporary statistical applications. The proposed estimators, called "control functionals," achieve faster-than-root-n convergence and often require orders of magnitude fewer simulations than existing approaches to reach a fixed level of precision. We focus on a particular sub-class of estimators that admit an elegant analytic form and study their properties both theoretically and empirically. Results are presented on Bayes-Hermite quadrature, hierarchical Gaussian process models and non-linear ordinary differential equation models, where in each case our estimators are shown to offer state-of-the-art performance.
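To give a flavour of how gradient information can reduce Monte Carlo error, the sketch below illustrates a classical parametric Stein-type control variate, a simpler relative of the control functionals discussed in the talk, not the method itself. For a density p with score function s(x) = d/dx log p(x) and any smooth test function φ, the quantity φ'(x) + φ(x)s(x) has zero expectation under mild conditions, so it can be subtracted from the integrand to cancel variance. Here p is a standard normal, the integrand f(x) = x², and the choice φ(x) = x is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)

f = x**2        # integrand; true value of E[f] is 1
score = -x      # d/dx log p(x) for p = N(0, 1)

# Stein-type control variate: u = phi'(x) + phi(x) * score(x),
# with the illustrative choice phi(x) = x, so u = 1 - x**2 and E[u] = 0.
u = 1.0 + x * score

# Estimate the variance-minimising coefficient from the same sample
# (ddof=1 in both cov and var so the ratio is internally consistent).
c = -np.cov(f, u)[0, 1] / np.var(u, ddof=1)

est_plain = f.mean()            # ordinary Monte Carlo estimate
est_cv = (f + c * u).mean()     # gradient-based control-variate estimate
```

For this toy problem the control variate is exact (f + u is constant), so `est_cv` recovers the true value 1 essentially to machine precision, while the plain estimate carries the usual O(n^{-1/2}) error. The control functionals in the talk construct the analogue of φ nonparametrically rather than fixing it in advance.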
This work is joint with Chris Oates and Nicolas Chopin.