Asking Your Self-Driving Car to Explain its Decisions

Led by Stefano V. Albrecht, Chris Lucas, and Shay Cohen with Cheng Wang (Postdoctoral Researcher)


Trusting an autonomous vehicle to make the right decisions depends on our ability to understand its reasoning processes. We envision a future in which a human user can ask the autonomous vehicle, via natural language, to explain its decisions in intuitive terms. To this end, this interdisciplinary project will combine techniques from AI planning and prediction, natural language processing, and human cognitive science to develop a natural language interface through which human users can ask for explanations of the vehicle's executed maneuvers and receive counterfactual explanations extracted from the vehicle's planning process. The project will build on the recent Interpretable Goal-based Prediction and Planning (IGP2) system, which integrates prediction and planning for autonomous vehicles to generate plans that are explainable by means of rationality principles.
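To give a flavour of what a counterfactual (contrastive) explanation of a planning decision might look like, the sketch below compares an executed maneuver against a rejected alternative and phrases the difference in intuitive terms. This is a minimal illustration, not IGP2's actual method: the `Maneuver` fields, the linear `reward` function, and its weights are all hypothetical assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    # Hypothetical summary of a candidate maneuver from the planner.
    name: str
    time_to_goal: float    # estimated seconds to reach the goal
    collision_risk: float  # estimated risk in [0, 1]

def reward(m: Maneuver, w_time: float = 1.0, w_risk: float = 100.0) -> float:
    # Illustrative linear reward: penalise travel time and collision risk.
    # The weights are arbitrary assumptions for this sketch.
    return -(w_time * m.time_to_goal + w_risk * m.collision_risk)

def explain(executed: Maneuver, alternative: Maneuver) -> str:
    # Contrastive question: why 'executed' rather than 'alternative'?
    if reward(executed) < reward(alternative):
        return f"'{alternative.name}' would in fact have scored higher."
    reasons = []
    dt = alternative.time_to_goal - executed.time_to_goal
    if dt > 0:
        reasons.append(f"it reaches the goal {dt:.1f}s sooner")
    if alternative.collision_risk > executed.collision_risk:
        reasons.append("it carries lower collision risk")
    if not reasons:
        reasons.append("it scores higher overall")
    return (f"The vehicle chose '{executed.name}' over "
            f"'{alternative.name}' because " + " and ".join(reasons) + ".")

# Example: the planner executed a lane change instead of staying in lane.
change = Maneuver("change lane left", time_to_goal=10.0, collision_risk=0.02)
stay = Maneuver("stay in lane", time_to_goal=13.0, collision_risk=0.01)
print(explain(change, stay))
# → The vehicle chose 'change lane left' over 'stay in lane' because it reaches the goal 3.0s sooner.
```

In a full system the rejected alternatives and their costs would come from the planner's own search tree, and the template-based sentence would instead be produced by the natural language interface.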
