Verified Neural Network Training for Safe Human-Robot Interaction
Led by Jacques Fleuriot and Ram Ramamoorthy with Mark Chevallier (Postdoctoral Researcher)
Robotic movement in the home and other human-centred environments requires safe, constrained behaviour. Neural networks are powerful tools for low-level perception in such settings, but whether they subsequently implement the intended behaviour is usually difficult to verify, let alone guarantee. We want assurance that a neural network can be taught constraints such as "never go past this threshold" or "avoid this region until it is clear" and will actually obey them when deployed, e.g. in a robotic assistant. To this end, we propose a neurosymbolic AI approach that tightly integrates rigorous logical reasoning with learning to formally constrain the training of neural networks. We will build a theorem-proving framework that supports the formal representation and proof, and then the faithful code generation, of user-specified spatio-temporal constraints, which can then be used for learning in scenarios involving human-robot interaction. Our ultimate aim is a machine-certified, neurosymbolic approach that ensures safe robotics in complex human-centred environments.
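To give a flavour of the idea, the sketch below shows constrained training in miniature: a small PyTorch network is trained with an extra loss term derived from a soft, differentiable reading of the temporal constraint "always stay below a threshold". The network architecture, data, threshold, and penalty weight are all hypothetical placeholders, and the hand-written penalty stands in for what the project would generate formally; this is an illustration of the general technique, not the project's framework.

```python
import torch
import torch.nn as nn

# Hypothetical setup: a policy network maps a 4-dimensional state to a
# predicted trajectory of T one-dimensional positions.
T, THRESHOLD = 20, 1.0

policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, T))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def always_below(traj, threshold):
    # Soft semantics for the temporal constraint G(x_t < threshold):
    # each step contributes a penalty of max(0, x_t - threshold), so the
    # term is zero when every step stays at or below the threshold and
    # provides a gradient whenever the constraint is violated.
    return torch.relu(traj - threshold).mean()

# Toy training data: random states paired with demonstration trajectories.
states = torch.randn(256, 4)
demos = torch.randn(256, T) * 0.5

for epoch in range(100):
    traj = policy(states)
    task_loss = nn.functional.mse_loss(traj, demos)  # imitate the demos
    constraint_loss = always_below(traj, THRESHOLD)  # obey G(x < c)
    loss = task_loss + 10.0 * constraint_loss        # weighted combination
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the approach described above, such a penalty would not be hand-written: the differentiable semantics of the constraint would be formally defined and proved correct in a theorem prover and then code-generated, so that the loss term used in training is itself machine-certified (cf. Chevallier, Whyte, and Fleuriot, 2022, below).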
- Craig Innes and Ram Ramamoorthy. Elaborating on Learned Demonstrations with Temporal Logic Specifications. In Proceedings of Robotics: Science and Systems XVI, 2020.
- Mark Chevallier, Matthew Whyte, and Jacques D. Fleuriot. Constrained Training of Neural Networks via Theorem Proving. arXiv:2207.03880, 2022.