Robot Learning for Persistent Autonomy
Post date: May 29, 2016 3:46:57 PM
Petar Kormushev, Seyed Reza Ahmadzadeh
Reference:
Petar Kormushev, Seyed Reza Ahmadzadeh, “Robot Learning for Persistent Autonomy”, Chapter in
Handling Uncertainty and Networked Structure in Robot Control (Lucian Busoniu, Levente Tamás, eds.),
Springer International Publishing, pp. 3-28, 2015.
Bibtex Entry:
@INBOOK{kormushev2015chapterrobot,
  AUTHOR    = {Kormushev, Petar and Ahmadzadeh, Seyed Reza},
  EDITOR    = {Busoniu, Lucian and Tam{\'a}s, Levente},
  TITLE     = {Robot Learning for Persistent Autonomy},
  BOOKTITLE = {Handling Uncertainty and Networked Structure in Robot Control},
  YEAR      = {2015},
  PUBLISHER = {Springer International Publishing},
  ADDRESS   = {Cham},
  PAGES     = {3--28},
  ISBN      = {978-3-319-26327-4},
  DOI       = {10.1007/978-3-319-26327-4_1},
  URL       = {http://dx.doi.org/10.1007/978-3-319-26327-4_1}
}
DOI: 10.1007/978-3-319-26327-4_1
Abstract:
Autonomous robots are not very good at being autonomous. They work well in structured environments,
but fail quickly in the real world when facing uncertainty and dynamically changing conditions. In this
chapter, we describe robot learning approaches that help to elevate robot autonomy to the next
level, the so-called 'persistent autonomy'. For a robot to be 'persistently autonomous' means to be
able to perform missions over extended time periods (e.g. days or months) in dynamic, uncertain
environments without the need for human assistance. In particular, persistent autonomy is extremely
important for robots in difficult-to-reach environments such as underwater, rescue, and space
robotics. There are many facets of persistent autonomy, such as: coping with uncertainty, reacting
to changing conditions, disturbance rejection, fault tolerance, energy efficiency and so on. This
chapter presents a collection of robot learning approaches that address many of these facets.
Experiments with robot manipulators and autonomous underwater vehicles demonstrate the usefulness
of these learning approaches in real-world scenarios.