Visuospatial Skill Learning for Object Reconfiguration Tasks
Post date: May 29, 2016 5:37:51 PM
Seyed Reza Ahmadzadeh, Petar Kormushev, Darwin G. Caldwell
Reference:
Seyed Reza Ahmadzadeh, Petar Kormushev, Darwin G. Caldwell, "Visuospatial Skill Learning for Object
Reconfiguration Tasks", In Proc. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS 2013),
Tokyo, Japan, pp. 685-691, 3-8 Nov. 2013.
Bibtex Entry:
@INPROCEEDINGS{ahmadzadeh2013visuospatial,
  TITLE        = {Visuospatial Skill Learning for Object Reconfiguration Tasks},
  AUTHOR       = {Ahmadzadeh, Seyed Reza and Kormushev, Petar and Caldwell, Darwin G.},
  BOOKTITLE    = {Intelligent Robots and Systems ({IROS}), {IEEE/RSJ} International Conference on},
  PAGES        = {685--691},
  YEAR         = {2013},
  MONTH        = {November},
  ADDRESS      = {Tokyo, Japan},
  ORGANIZATION = {IEEE},
  DOI          = {10.1109/IROS.2013.6696425}
}
DOI: 10.1109/IROS.2013.6696425
Abstract:
We present a novel robot learning approach based on visual perception that allows a robot to
acquire new skills by observing a demonstration from a tutor. Unlike most existing learning-from-demonstration
approaches, which focus on the demonstrated trajectories, our approach focuses on
achieving a desired goal configuration of objects relative to one another. It relies on
visual perception to capture the object's context for each demonstrated action. This
context is the basis of the visuospatial representation and implicitly encodes the relative
positioning of the object with respect to multiple other objects simultaneously. The proposed
approach can learn and generalize multi-operation skills from a single demonstration
while requiring minimal a priori knowledge about the environment. The learned skills comprise a
sequence of operations that aim to achieve the desired goal configuration using the given objects.
We illustrate the capabilities of our approach with three object reconfiguration tasks on a
Barrett WAM robot.
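To make the core idea concrete, here is a minimal sketch (not the paper's actual implementation; all function names and the 2-D setup are hypothetical) of encoding an object's demonstrated goal position as offsets relative to multiple other objects at once, then reproducing that relative configuration in a new scene:

```python
# Hypothetical illustration of a relative-configuration representation:
# an object's goal position is stored as its offset from each landmark
# object, and reproduced in a new scene by averaging the placements
# implied by the demonstrated offsets. This is a simplified sketch, not
# the visuospatial skill learning method from the paper.

def relative_offsets(target_pos, landmark_positions):
    """Encode target_pos as an offset from each landmark object."""
    tx, ty = target_pos
    return [(tx - lx, ty - ly) for lx, ly in landmark_positions]

def reproduce(offsets, new_landmarks):
    """Place the object in a new scene: average over the positions
    suggested by each landmark plus its demonstrated offset."""
    xs = [lx + dx for (dx, _), (lx, _) in zip(offsets, new_landmarks)]
    ys = [ly + dy for (_, dy), (_, ly) in zip(offsets, new_landmarks)]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Demonstration: object placed at (2, 1) among landmarks at (0, 0) and (4, 0).
offsets = relative_offsets((2.0, 1.0), [(0.0, 0.0), (4.0, 0.0)])

# New scene: both landmarks shifted by (1, 2); the goal shifts accordingly.
goal = reproduce(offsets, [(1.0, 2.0), (5.0, 2.0)])
print(goal)  # (3.0, 3.0)
```

Because the goal is defined relative to several objects simultaneously rather than as an absolute trajectory endpoint, the same demonstrated configuration generalizes when the scene is rearranged.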