In this talk we will consider Bayesian optimization of an expensive-to-evaluate black-box objective function where we additionally have access to cheaper approximations of the objective, e.g., numerical simulations that employ models of the true objective function of varying complexity. Such approximations arise in applications from robotics, reinforcement learning, engineering, and the natural sciences, and are subject to an unknown bias because they come from simulations with model discrepancy, i.e., simulations whose internal models deviate from reality.

We present an algorithm that provides a rigorous mathematical treatment of the uncertainties arising from model discrepancy and noisy observations. Its optimization decisions rely on a value-of-information analysis and maximize the predicted benefit per unit cost.
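To make the "benefit per unit cost" idea concrete, here is a minimal sketch of a cost-aware acquisition rule. It uses a generic expected-improvement criterion divided by evaluation cost over hypothetical candidate (point, fidelity) pairs; the talk's actual value-of-information analysis additionally accounts for model discrepancy and is more involved. All numbers below are illustrative assumptions, not from the talk.

```python
import numpy as np
from math import erf, sqrt

def expected_improvement(mu, sigma, best):
    # Expected improvement for minimization under a Gaussian
    # posterior N(mu, sigma^2), given the best value seen so far.
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + np.array([erf(zi / sqrt(2.0)) for zi in z]))
    phi = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return (best - mu) * Phi + sigma * phi

# Hypothetical posterior over four candidate (point, fidelity) pairs:
# the last two are cheap low-fidelity evaluations.
mu    = np.array([0.8, 0.5, 0.6, 0.4])   # posterior means of the objective
sigma = np.array([0.2, 0.3, 0.1, 0.25])  # posterior standard deviations
cost  = np.array([1.0, 1.0, 0.1, 0.1])   # evaluation cost of each candidate
best  = 0.7                              # best objective value observed so far

# Pick the candidate maximizing predicted benefit per unit cost.
score = expected_improvement(mu, sigma, best) / cost
print(int(np.argmax(score)))  # → 3 (cheap candidate with low mean)
```

Dividing the predicted benefit by cost is what lets the algorithm prefer many cheap approximate evaluations over a single expensive one when the information gained per dollar is higher.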

Moreover, we consider the common scenario of facing a series of related optimization tasks, e.g., each instantiated by modifying the problem specification or by introducing new data. We show how to adapt the above approach to significantly reduce the overall optimization cost in this case.

Based on joint work with Peter Frazier and Jialei Wang.