Abstract
Human-robot collaboration (HRC) stands at the forefront of scientific innovation and technological advancement. By combining the cognitive acuity and dexterity of human agents with the endurance and efficiency of robotic agents, HRC systems can tackle complex tasks seamlessly. Realizing this ambitious goal requires sophisticated models that capture the intricacies of humans and robots navigating shared workspaces, as well as intelligent algorithms that leverage these models to learn behavioral policies for guiding the agents during task execution. To this end, we develop novel multiagent models and inverse reinforcement learning (IRL) algorithms for a range of HRC problems. We begin by addressing a key challenge in robotics: learning a near-optimal policy from imperfect and incomplete input data. We achieve this by generalizing the well-known maximum-a-posteriori Bayesian IRL technique to sum out the occluded portions of each demonstrated trajectory, and then extending it with an observation model to account for perception noise. Subsequently, we tackle three key challenges in HRC: (1) decentralized collaboration with sparse interactions, for which we develop a decentralized adversarial IRL technique (Dec-AIRL); (2) open HRC, in which agents may enter and exit the task as needed, for which we propose a new open HRC model (Open-DecMDP) and formulate a novel IRL method (Open-Dec-AIRL) that enables open collaboration; and (3) type-based collaboration that models dynamic agent types, for which we propose a new type-based mixed-observability model (Type-Based-DecMDP) and a corresponding IRL technique (Type-Based-Dec-AIRL) that learns a type-specific reward function and its corresponding policies, one for each agent. In this dissertation, we present these challenges through realistic HRC domains and demonstrate that our methods significantly improve upon the existing state of the art.
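
To make the occlusion-handling idea concrete, here is a minimal sketch, with notation assumed for illustration rather than taken from the dissertation. If a demonstrated trajectory $X$ splits into an observed portion $Y$ and an occluded portion $Z$, a maximum-a-posteriori Bayesian IRL objective that sums out the occluded portion takes the form
$$
R^{\mathrm{MAP}} \;=\; \arg\max_{R} \; \log\!\Big( \sum_{Z} \Pr(Y, Z \mid R) \Big) \;+\; \log \Pr(R),
$$
and an observation-model extension for perception noise would further marginalize the true trajectory given noisy observations $O$, e.g., $\Pr(O \mid R) = \sum_{X} \Pr(O \mid X)\,\Pr(X \mid R)$.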