Title: Adaptive Experiment Design with Temporal Logic Specifications
Abstract: Many robot scenarios assess robustness against a task specification. If the controller or environment is composed of "black-box" components with unknown dynamics, assessing the system via formal verification becomes difficult. If the space of environments is large relative to the cost of each experiment, assessing robustness via exhaustive testing is often infeasible as well.
This talk discusses how to choose experiment inputs that give the greatest insight into system performance under a limited budget. By combining smooth metrics for temporal logic with techniques from adaptive experimental design, our method selects inputs by incrementally constructing a surrogate model of specification robustness, then proposing the next experiment in regions where the model shows high prediction error or uncertainty.
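To make the loop concrete, here is a minimal sketch of the idea, not the talk's actual method: a smooth (log-sum-exp) robustness score for an "always above threshold" specification, a simple Gaussian-process surrogate over a one-dimensional input space, and an acquisition rule that queries wherever predictive uncertainty is highest. The experiment function, kernel, and all parameters are illustrative assumptions.

```python
import numpy as np

def smooth_robustness(signal, threshold, beta=10.0):
    # Smooth stand-in for STL robustness of "always (signal > threshold)":
    # a soft minimum (negative log-sum-exp) of the margins over time.
    margins = signal - threshold
    return -np.log(np.sum(np.exp(-beta * margins))) / beta

def gp_posterior(X_train, y_train, X_query, length_scale=0.5, noise=1e-4):
    # Standard GP regression with an RBF kernel over scalar inputs.
    def k(A, B):
        return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / length_scale**2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = k(X_train, X_query)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    cov = k(X_query, X_query) - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

def run_experiment(gain):
    # Hypothetical black-box system: a first-order step response whose
    # rise speed depends on a single "gain" input.
    t = np.linspace(0.0, 5.0, 100)
    signal = 1.0 - np.exp(-gain * t)
    return smooth_robustness(signal, threshold=0.2)

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 3.0, size=3)            # small initial design
y = np.array([run_experiment(x) for x in X])
candidates = np.linspace(0.1, 3.0, 200)

for _ in range(5):                           # limited experiment budget
    mean, std = gp_posterior(X, y, candidates)
    x_next = candidates[np.argmax(std)]      # query most-uncertain input
    X = np.append(X, x_next)
    y = np.append(y, run_experiment(x_next))
```

After the loop, the surrogate's mean gives a cheap map of robustness over the input space; the talk's method additionally uses prediction error, not just posterior variance, when ranking candidate experiments.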
Our experiments show that this adaptive experimental design technique yields sample-efficient descriptions of system robustness. Further, we show how the model built during the design process can be used to assess the behaviour of a data-driven control system under domain shift.
Given sufficient time, I may also discuss the issues with assessing real-world tasks when experiments are conducted in simulation (and potential ways to address these issues).