BECAREFUL: Building Embodied Conversational Agent Reliability by Exerting Friction through Uncertain Language

This project aims to enhance decision-making mechanisms for conversational embodied AI agents by reducing user over-reliance on possible misinformation from AI systems (e.g., due to AI hallucinations, AI sycophancy, or misunderstanding of the user in low-bandwidth or unreliable communication situations).
Better Slow than Sorry: Introducing Positive Friction for Reliable Dialogue Systems
Towards Preventing Overreliance on Task-Oriented Conversational AI Through Accountability Modeling
Accounting for Sycophancy in Language Model Uncertainty Estimation
ReSpAct: Harmonizing Reasoning, Speaking, and Acting Towards Building Large Language Model-Based Conversational AI Agents
This project is supported by the U.S. Defense Advanced Research Projects Agency (DARPA) Friction for Accountability in Conversational Transactions (FACT) program.