Asking Your Self-Driving Car to Explain its Decisions
Trusting an autonomous vehicle to make the right decisions.
Led by Stefano V. Albrecht, Chris Lucas, and Shay Cohen with Cheng Wang (Postdoctoral Researcher)
Trusting an autonomous vehicle to make the right decisions depends on our ability to understand its reasoning processes. We envision a future in which a human user can ask the autonomous vehicle, in natural language, to explain its decisions in intuitive terms. To this end, this interdisciplinary project will combine techniques from AI planning and prediction, natural language processing, and human cognitive science to develop a natural language interface through which human users can ask for explanations of the vehicle's executed maneuvers and receive counterfactual explanations extracted from the vehicle's planning process. The project will build on the recent Interpretable Goal-based Prediction and Planning (IGP2) system, which integrates prediction and planning for autonomous vehicles to generate plans that are explainable by means of rationality principles.
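As a rough illustration of the idea, the sketch below shows one way counterfactual explanations could be extracted from a cost-based planner: the executed maneuver is contrasted with each rejected alternative, citing the reason (higher cost or a violated constraint) it was rejected. All names here (`Maneuver`, `choose`, `explain`) are hypothetical and not part of the IGP2 codebase; this is a minimal sketch of the general technique, not the project's implementation.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A candidate maneuver with the planner's predicted cost and whether it
    satisfied the planner's constraints (illustrative fields, not the IGP2 API)."""
    name: str
    cost: float
    feasible: bool

def choose(maneuvers):
    """Pick the cheapest feasible maneuver, as a rational planner would."""
    return min((m for m in maneuvers if m.feasible), key=lambda m: m.cost)

def explain(chosen, maneuvers):
    """Counterfactual-style explanation: contrast the executed maneuver with
    each rejected alternative, stating why the alternative was rejected."""
    reasons = []
    for m in maneuvers:
        if m is chosen:
            continue
        if not m.feasible:
            reasons.append(f"'{m.name}' would have violated a safety constraint")
        else:
            reasons.append(f"'{m.name}' would have cost {m.cost - chosen.cost:.1f} more")
    return f"I chose '{chosen.name}' because " + " and ".join(reasons) + "."

if __name__ == "__main__":
    maneuvers = [
        Maneuver("change-lane-left", cost=5.0, feasible=True),
        Maneuver("keep-lane", cost=7.5, feasible=True),
        Maneuver("change-lane-right", cost=4.0, feasible=False),
    ]
    print(explain(choose(maneuvers), maneuvers))
    # I chose 'change-lane-left' because 'keep-lane' would have cost 2.5 more
    # and 'change-lane-right' would have violated a safety constraint.
```

Because the explanation is read off the same cost and constraint information the planner used to make the decision, it is faithful to the reasoning process by construction; the natural language interface envisioned by the project would then surface such contrasts in response to user questions.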
Publications
- Stefano V. Albrecht, Cillian Brewitt, John Wilhelm, Balint Gyevnar, Francisco Eiras, Mihai Dobre, Subramanian Ramamoorthy. "Interpretable Goal-based Prediction and Planning for Autonomous Driving." arxiv.org/abs/2002.02277, 2021.
- Josiah P. Hanna, Arrasy Rahman, Elliot Fosong, Francisco Eiras, Mihai Dobre, John Redford, Subramanian Ramamoorthy, Stefano V. Albrecht. "Interpretable Goal Recognition in the Presence of Occluded Factors for Autonomous Vehicles." arxiv.org/abs/2108.02530, 2021.
- Cillian Brewitt, Balint Gyevnar, Samuel Garcin, Stefano V. Albrecht. "GRIT: Fast, Interpretable, and Verifiable Goal Recognition with Learned Decision Trees for Autonomous Driving." arxiv.org/abs/2103.06113, 2021.