Bayesian Optimization: Black-box Optimization and Beyond
Bayesian optimization has emerged as an exciting subfield of machine learning that is concerned with the global optimization of expensive, noisy, black-box functions using probabilistic methods. Systems implementing Bayesian optimization techniques have been successfully used to solve difficult problems in a diverse set of applications. Many recent advances in the methodologies and theory underlying Bayesian optimization have extended the framework to new applications and provided greater insights into the behaviour of these algorithms. Bayesian optimization is now increasingly being used in industrial settings, providing new and interesting challenges that require new algorithms and theoretical insights.
Classically, Bayesian optimization has been used purely for expensive single-objective black-box optimization. However, with the increased complexity of tasks and applications, this paradigm is proving too restrictive. Hence, this year’s theme for the workshop will be “black-box optimization and beyond”. Among the recent trends that push beyond classical BO are:
- Adapting BO to not-so-expensive evaluations.
- “Opening the black box”: moving away from viewing the model as a way of simply fitting a response surface, and towards modelling for the purpose of discovering and understanding the underlying process. For instance, this so-called grey-box modelling approach could be valuable in robotic applications for optimizing the controller while simultaneously providing insight into the mechanical properties of the robotic system.
- “Meta-learning”, where a higher level of learning is used on top of BO in order to control the optimization process and make it more efficient. Examples of such meta-learning include learning-curve prediction, freeze-thaw Bayesian optimization, online batch selection, and multi-task and multi-fidelity learning.
- Multi-objective optimization, where multiple conflicting objectives are considered rather than a single one (e.g., prediction accuracy vs. training time).
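To make the classical setting that these trends extend concrete, the canonical BO loop fits a probabilistic surrogate (typically a Gaussian process) to the evaluations seen so far, then picks the next point by maximizing an acquisition function such as expected improvement. The sketch below is purely illustrative and not tied to any workshop paper; the RBF kernel, length-scale, initial design, and grid-based acquisition maximization are all simplifying assumptions for a 1-D toy problem.

```python
import math
import numpy as np

def rbf_kernel(a, b, length_scale=0.2):
    """Squared-exponential kernel matrix between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    """Posterior mean and std of a zero-mean, unit-variance GP surrogate."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k = rbf_kernel(x_obs, x_query)
    alpha = np.linalg.solve(K, k)
    mean = alpha.T @ y_obs
    var = 1.0 - np.sum(k * alpha, axis=0)
    return mean, np.sqrt(np.clip(var, 1e-12, None))

# Standard normal CDF via math.erf, vectorized over numpy arrays.
_norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def expected_improvement(mean, std, best):
    """EI for minimization: expected amount by which we beat the incumbent."""
    z = (best - mean) / std
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mean) * _norm_cdf(z) + std * pdf

def bayes_opt(f, n_iters=15):
    """Minimize f on [0, 1]: fit GP, maximize EI, evaluate, repeat."""
    grid = np.linspace(0.0, 1.0, 201)     # candidate set (toy setting)
    x_obs = np.array([0.0, 0.5, 1.0])     # small initial design
    y_obs = f(x_obs)
    for _ in range(n_iters):
        mean, std = gp_posterior(x_obs, y_obs, grid)
        x_next = grid[np.argmax(expected_improvement(mean, std, y_obs.min()))]
        x_obs = np.append(x_obs, x_next)
        y_obs = np.append(y_obs, f(x_next))
    i = np.argmin(y_obs)
    return x_obs[i], y_obs[i]

# Toy "expensive" objective with its minimum at x = 0.65.
best_x, best_y = bayes_opt(lambda x: (x - 0.65) ** 2)
```

Each of the trends above modifies some piece of this loop: grey-box methods change the surrogate, meta-learning controls the outer loop, and multi-objective methods replace the scalar incumbent with a Pareto front.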
The target audience for this workshop consists of both industrial and academic practitioners of Bayesian optimization as well as researchers working on theoretical and practical advances in probabilistic optimization. We expect that this pairing of theoretical and applied knowledge will lead to an interesting exchange of ideas and stimulate an open discussion about the long term goals and challenges of the Bayesian optimization community.
A further goal of this workshop is to encourage collaboration between the diverse set of researchers involved in Bayesian optimization. This includes not only interchange between industrial and academic researchers, but also between the many different subfields of machine learning which make use of Bayesian optimization or its components. We are also reaching out to the wider optimization and engineering communities for involvement.
We would like to thank our program committee for their great help in reviewing submissions:
Cedric Archambeau, Emile Contal, Daniel Hernandez-Lobato, David Duvenaud, Katharina Eggensperger, Favour Nyikosa, Matthias Feurer, Roman Garnett, Ian Dewancker, John-Alexander Assael, Rodolphe Jenatton, Jan H Metzen, Jose Miguel Hernandez-Lobato, James Wilson, Aaron Klein, Kevin Swersky, Marc Deisenroth, Mike McCourt, Mickaël Binois, Matt Hoffman, Philipp Hennig, Ruben Martinez-Cantin, Stefan Falkner, Supratik Paul, Takayuki Osa, Filipe Veiga, Zhenwen Dai, Zi Wang, Ziyu Wang.
Below are the papers accepted for the 2016 workshop. For papers accepted at previous workshops, see the pages of those workshops.
- hyperSPACE: Automated Optimization of Complex Processing Pipelines for pySPACE
Torben Hansing, Mario Michael Krell, Frank Kirchner
- Predictive Entropy Search for Multi-objective Bayesian Optimization with Constraints
Eduardo Garrido-Merchán, Daniel Hernandez-Lobato
- Multi-objective Optimization with Unbounded Solution Sets
Oswin Krause, Tobias Glasmachers, Christian Igel
- Quantifying mismatch in Bayesian optimization
Eric Schulz, Maarten Speekenbrink, Jose Miguel Hernandez-Lobato, Zoubin Ghahramani, Samuel Gershman
- Preemptive Termination of Suggestions during Sequential Kriging Optimization of a Brain Activity Reconstruction Simulation
Mike McCourt, Ian Dewancker, Salvatore Ganci
- Advancing Bayesian Optimization: The Mixed-Global-Local (MGL) Kernel and Length-Scale Cool Down
Kim Wabersich, Marc Toussaint
- Hypervolume-based Multi-objective Bayesian Optimization with Student-t Processes
Joachim Van der Herten, Ivo Couckuyt, Tom Dhaene
- Factored Contextual Policy Search with Bayesian Optimization
Peter Karkus, Andras Kupcsik, David Hsu, Wee Sun Lee
- Infinite dimensions optimistic optimisation with applications on physical systems
Muhammad Kasim, Peter Norreys
- A Physically-grounded and Data-efficient Approach to Motion Prediction using Black-box Optimization
Shaojun Zhu, Abdeslam Boularias
- A Simple Recursive Algorithm for calculating Expected Hypervolume Improvement
Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh
- High Dimensional Bayesian Optimization with Elastic Gaussian Process
Cheng Li, Santu Rana, Sunil Gupta, Vu Nguyen, Svetha Venkatesh
- Hybrid Repeat/Multi-point Sampling for Highly Volatile Objective Functions
Brett Israelsen, Nisar Ahmed
- Bayesian Optimisation for solving Continuous State-Action-Observation POMDPs
Philippe Morere, Roman Marchant, Fabio Ramos
- Multiple Recommendation for Bayesian optimization via Multi-Scale Search
Tinu Theckel Joy, Santu Rana, Sunil Gupta, Svetha Venkatesh
- Safety-Aware Robot Damage Recovery Using Constrained Bayesian Optimization and Simulated Priors
Vaios Papaspyros, Konstantinos Chatzilygeroudis, Vassilis Vassiliades, Jean-Baptiste Mouret
- Learning Optimal Interventions
Jonas Mueller, David Reshef, George Du, Tommi Jaakkola
- Bayesian Optimisation with Pairwise Preferential Returns
Javier Gonzalez, Zhenwen Dai, Andreas Damianou, Neil Lawrence
- Designing Neural Network Hardware Accelerators with Decoupled Objective Evaluations
Jose Miguel Hernandez-Lobato, Michael A. Gelbart, Brandon Reagen, Robert Adolf, Daniel Hernandez-Lobato, Paul N. Whatmough, David Brooks, Gu-Yeon Wei, Ryan P. Adams
- Distributed Thompson Sampling for Large-scale Accelerated Exploration of Chemical Space
Jose Miguel Hernandez-Lobato, Edward Pyzer-Knapp, Alan Aspuru-Guzik, Ryan Adams
- Tuning the Scheduling of Distributed Stochastic Gradient Descent with Bayesian Optimization
Valentin Dalibard, Michael Schaarschmidt, Eiko Yoneki
- Do we need “Harmless” Bayesian Optimization and “First-Order” Bayesian Optimization?
Mohamed Osama Ahmed, Bobak Shahriari, Mark Schmidt
- Think Globally, Act Locally: a Local Strategy for Bayesian Optimization
Vu Nguyen, Sunil Gupta, Santu Rana, Cheng Li, Svetha Venkatesh
- Bayesian Optimization with shape constraints
Michael Jauch, Victor Pena
- Efficient nonmyopic active search
Shali Jiang, Gustavo Malkomes, Geoff Converse, Alyssa Shofner, Benjamin Moseley, Roman Garnett