Abstract

Robots deployed into many real-world scenarios are expected to face situations that their designers could not anticipate. Machine learning is an effective tool for extending the capabilities of these robots by allowing them to adapt their behavior to the situation in which they find themselves. Most machine learning techniques are applicable to learning either static elements in an environment or elements with simple dynamics. We wish to address the problem of learning the behavior of other intelligent agents that the robot may encounter. To this end, we extend a well-known Inverse Reinforcement Learning (IRL) algorithm, Maximum Entropy IRL, to address challenges expected to be encountered by autonomous robots during learning. These include: occlusion of the observed agent's state space due to limits of the learner's sensors or objects in the environment, the presence of multiple agents who interact, and partial knowledge of other agents' dynamics. Our contributions are investigated with experiments using simulated and real-world robots. These experiments include learning a fruit sorting task from human demonstrations and autonomously penetrating a perimeter patrol. Our work takes several important steps toward deploying IRL alongside other machine learning methods for use by autonomous robots.
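To make the baseline concrete, the following is a minimal sketch of standard Maximum Entropy IRL (the algorithm the thesis extends, per Ziebart et al.), not of the thesis's occlusion- or multi-agent-aware variants. The toy chain MDP, its parameters, and all names here are illustrative assumptions: with one-hot state features, the learned weight vector `w` is itself the per-state reward, and gradient ascent matches expected state-visitation counts under the soft-optimal policy to the counts observed in demonstrations.

```python
import math

# Toy Maximum Entropy IRL on a 5-state chain MDP (illustrative only).
# Demonstrations walk rightward from state 0 to state 4; learning should
# therefore assign higher reward to states on the right.

N, T = 5, 4               # states 0..4; demonstration horizon
ACTS = (-1, +1)           # step left / step right, clipped at the ends

def nxt(s, a):
    """Deterministic transition: move by a, clipped to [0, N-1]."""
    return min(max(s + a, 0), N - 1)

def soft_policy(w):
    """Soft (log-sum-exp) value iteration; returns softmax policy pi[s][a]."""
    V = [0.0] * N
    for _ in range(T):
        Q = [[w[nxt(s, a)] + V[nxt(s, a)] for a in ACTS] for s in range(N)]
        V = [math.log(sum(math.exp(q) for q in Qs)) for Qs in Q]
    return [[math.exp(q - V[s]) for q in Q[s]] for s in range(N)]

def expected_visits(pi):
    """Forward pass: expected state-visitation counts from start state 0."""
    d = [1.0] + [0.0] * (N - 1)
    visits = d[:]
    for _ in range(T):
        nd = [0.0] * N
        for s in range(N):
            for ai, a in enumerate(ACTS):
                nd[nxt(s, a)] += d[s] * pi[s][ai]
        d = nd
        visits = [v + x for v, x in zip(visits, d)]
    return visits

# One demonstration [0, 1, 2, 3, 4]: each state is visited exactly once.
mu_demo = [1.0] * N

# Gradient ascent on the MaxEnt log-likelihood:
# grad = (demonstrated counts) - (expected counts under current policy).
w = [0.0] * N
for _ in range(200):
    grad = [m - e for m, e in zip(mu_demo, expected_visits(soft_policy(w)))]
    w = [wi + 0.1 * g for wi, g in zip(w, grad)]
```

After training, `w` ranks the demonstrator's goal state highest and the induced policy prefers moving right, which is exactly the feature-matching behavior the thesis builds on before handling occlusion, interacting agents, and unknown dynamics.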
