Abstract: Julian Hough, 03 December 2018

FLUID: Improving Fluidity in Human-Robot Interaction with Speech Interfaces

Collaborating with robots is becoming part of everyday life, as is speech technology. Robots are increasingly used in manufacturing and medical domains, and there is burgeoning interest in using them in social care. All of these interaction domains can be enhanced with appropriate speech interfaces. A key element in making speech-based interaction with a user efficient and natural is the robot's ability to interact fluidly.

Fluid aspects of interaction include seamless transitions from human speech to robotic action and back again, permitting appropriate overlap between speech and action. Most state-of-the-art robots that process speech lack such fluid interaction capabilities: they react slowly to speech commands, introduce severe delays, and recover slowly from communication errors. While the manipulation and motion capabilities of robots and the accuracy of speech recognition have improved vastly, the deployment of real-world robots with speech understanding is stifled by sub-optimal models of human-robot interaction (HRI) that do not allow fluid communication.
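
To make the notion of fluidity concrete, the following is a minimal sketch of one common ingredient, incremental speech processing: the robot begins acting on partial speech recognition hypotheses rather than waiting for the end of the utterance, and revokes an action if a later hypothesis overturns the earlier one. This is an illustration only, not code from the FLUID project; the names PartialHypothesis and action_for and the toy command mapping are all hypothetical.

    # Minimal sketch (hypothetical; not code from the FLUID project): an
    # incremental speech-to-action loop in which the robot may begin acting
    # on partial speech recognition hypotheses before the utterance is
    # complete, and revokes an action if a later hypothesis overturns it.
    from dataclasses import dataclass

    @dataclass
    class PartialHypothesis:
        words: tuple      # words recognised so far
        is_final: bool    # True once the recogniser commits to this hypothesis

    def action_for(words):
        """Map a (possibly partial) command to a robot action, or None."""
        if "stop" in words:
            return "halt_motors"
        if "pick" in words and "up" in words:
            return "grasp_object"
        return None

    def incremental_loop(hypotheses):
        """Consume a stream of partial hypotheses; act early, revoke on revision."""
        committed = None
        for hyp in hypotheses:
            action = action_for(hyp.words)
            if action and action != committed:
                if committed:
                    print(f"revoke: {committed}")  # earlier action no longer supported
                print(f"execute: {action} (on partial input: {not hyp.is_final})")
                committed = action

    # The robot starts grasping before the utterance is final, then revokes
    # and halts when the recogniser revises its hypothesis to "stop".
    incremental_loop([
        PartialHypothesis(("pick",), False),
        PartialHypothesis(("pick", "up"), False),
        PartialHypothesis(("stop",), True),
    ])

The point of the sketch is only the control flow that overlapping speech and action requires: a real system would replace the toy command mapping with a natural language understanding component and the print calls with motor commands.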

The proposed FLUID project will build tools to investigate and improve fluidity in HRI with speech interfaces by enhancing existing computational models with empirical data from experiments, and by implementing these models in simulations in a Virtual Reality (VR) environment using novel interactive evaluation methods with users. In this talk I outline the motivation for, and the program of, the research I propose.