Modern virtual personal assistants provide a convenient interface for completing daily tasks via voice commands. An important consideration for these assistants is the ability to recover from automatic speech recognition (ASR) and natural language understanding (NLU) errors. I present our recent work on learning robust dialog policies that recover from these errors. To this end, we developed a user simulator that interacts with the assistant through voice commands in realistic scenarios with noisy audio, and used it to learn dialog policies through deep reinforcement learning. We show that dialogs generated by our simulator are indistinguishable from human-generated dialogs, as judged by human evaluators. Furthermore, preliminary experimental results show that the policies learned in noisy environments achieve the same execution success rate with fewer dialog turns than fixed, rule-based policies.
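To make the idea concrete, the following is a minimal, hypothetical sketch of reinforcement learning for a dialog policy under NLU noise. It is not the system described in the talk: the states, actions, reward values, and simulator dynamics are all illustrative assumptions, and tabular Q-learning stands in for the deep RL used in the actual work. The policy must learn when it is worth spending an extra turn to confirm a noisy hypothesis versus executing it directly.

```python
import random

# Illustrative assumption: a toy dialog MDP, not the talk's actual system.
# The assistant observes a discretized NLU confidence level and chooses a
# dialog act; executing on a low-confidence hypothesis usually fails, while
# confirming costs a turn but resolves the ambiguity.

random.seed(0)

STATES = ["low_conf", "high_conf"]   # discretized NLU confidence
ACTIONS = ["confirm", "execute"]     # dialog acts

def step(state, action):
    """Simulated user/environment: returns (reward, episode_done)."""
    if action == "execute":
        # Assumed success probabilities for executing the current hypothesis.
        success = random.random() < (0.9 if state == "high_conf" else 0.3)
        return (10.0 if success else -10.0), True
    # Confirmation costs a turn; the next state is high-confidence.
    return -2.0, False

def train(episodes=8000, alpha=0.05, gamma=0.95, eps=0.1):
    """Tabular Q-learning with an epsilon-greedy behavior policy."""
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state, done = random.choice(STATES), False
        while not done:
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: Q[(state, x)]))
            r, done = step(state, a)
            nxt = "high_conf"  # a confirmation turn resolves the ambiguity
            target = r if done else r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
            Q[(state, a)] += alpha * (target - Q[(state, a)])
            state = nxt
    return Q

Q = train()
# The learned policy confirms when confidence is low and executes when high,
# trading a small per-turn cost against the risk of a failed execution.
```

Even in this toy setting, the learned policy reproduces the qualitative behavior the abstract describes: it asks clarification questions only when the expected cost of an ASR/NLU error outweighs the cost of an extra dialog turn.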
Alborz Geramifard is currently a machine learning manager at Amazon, leading the conversational AI team in Alexa Boston. He has led the development of more than a dozen NLU models for Alexa. Before joining Amazon, he was a postdoctoral associate at MIT's Laboratory for Information and Decision Systems. Alborz received his PhD from MIT in 2012, working on representation learning and safe exploration in large-scale, sensitive sequential decision-making problems. He completed his MSc at the University of Alberta in 2008, where he worked on data-efficient online reinforcement learning techniques. His research interests lie in machine learning with a focus on reinforcement learning, natural language understanding, planning, and brain and cognitive sciences. Alborz was a recipient of the NSERC Postgraduate Scholarship (2010-2012).