Hybrid Batch Bayesian Optimization

Title: Hybrid Batch Bayesian Optimization
Publication Type: Conference Paper
Year of Publication: 2012
Authors: Azimi, J., A. Jalali, and X. Z. Fern
Conference Name: International Conference on Machine Learning (ICML 2012)
Date Published: 07/2012
Conference Location: Edinburgh, Scotland

Bayesian Optimization (BO) aims to optimize an unknown non-convex/concave function that is costly to evaluate. We are interested in application scenarios where concurrent function evaluations are possible. In such settings, BO can either evaluate the function sequentially, one input at a time, waiting for each output before making the next selection, or evaluate the function at a batch of multiple inputs at once. These two settings are commonly referred to as the sequential and batch settings of Bayesian Optimization. In general, the sequential setting yields better optimization performance, since each function evaluation is selected with more information, whereas the batch setting has an advantage in total experimental time (the number of iterations). In this work, our goal is to combine the strengths of both settings. Specifically, we systematically analyze Bayesian optimization using a Gaussian process as the posterior estimator and provide a hybrid algorithm that, based on the current state, dynamically switches between a sequential policy and a batch policy with variable batch sizes. We provide theoretical justification for our algorithm and present experimental results on eight benchmark BO problems. The results show that our method achieves substantial speedup (up to 78%) compared to a pure sequential policy, without any significant loss in performance.
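To make the sequential-vs-batch trade-off concrete, the sketch below implements a toy hybrid loop in plain NumPy: a Gaussian-process posterior, a UCB acquisition, and a switching rule that selects a single point when posterior uncertainty is high and a "kriging believer" batch (hallucinating the GP mean at pending points) when it is low. The kernel, acquisition, objective, and the variance-threshold switching criterion here are illustrative stand-ins chosen for brevity, not the paper's actual algorithm or theoretical criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy objective to maximize; true optimum at x = 0.6 with value 0
    return -(x - 0.6) ** 2

def rbf(A, B, ls=0.2):
    # Squared-exponential kernel on 1-D inputs, unit signal variance
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-6):
    # Standard noiseless-GP regression via a Cholesky solve
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - (v ** 2).sum(0), 1e-12, None)  # k(x, x) = 1 for RBF
    return mu, var

def hybrid_bo(f, n_evals=20, var_thresh=0.2, batch=4, kappa=2.0):
    pool = np.linspace(0.0, 1.0, 201)          # discretized candidate set
    X = list(rng.choice(pool, size=3, replace=False))
    y = [f(x) for x in X]
    iters = 0
    while len(X) < n_evals:
        mu, var = gp_posterior(np.array(X), np.array(y), pool)
        # Illustrative switch: go sequential while uncertain, batch once confident
        k = 1 if var.max() > var_thresh else batch
        Xh, yh = list(X), list(y)
        picks = []
        for _ in range(min(k, n_evals - len(X))):
            mu, var = gp_posterior(np.array(Xh), np.array(yh), pool)
            acq = mu + kappa * np.sqrt(var)     # UCB acquisition
            idx = int(np.argmax(acq))
            picks.append(pool[idx])
            Xh.append(pool[idx])
            yh.append(mu[idx])                  # kriging believer: hallucinate the mean
        for x in picks:                         # the batch is evaluated concurrently
            X.append(x)
            y.append(f(x))
        iters += 1
    return max(y), iters

best, iters = hybrid_bo(f)
```

Each pass through the outer loop is one round of concurrent experiments, so `iters` plays the role of total experimental time: whenever the batch branch fires, the same evaluation budget is spent in fewer rounds, which is the speedup the hybrid policy trades against the information advantage of purely sequential selection.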