
Visuospatial Skill Learning

posted May 29, 2016, 8:44 AM by Reza A   [ updated Jun 23, 2017, 8:50 PM ]
Seyed Reza Ahmadzadeh, Petar Kormushev

Reference:
Seyed Reza Ahmadzadeh, Petar Kormushev, “Visuospatial Skill Learning”, Chapter in Handling
Uncertainty and Networked Structure in Robot Control (Lucian Busoniu, Levente Tamás, eds.),
Springer International Publishing, pp. 75-99, 2015.
Bibtex Entry:
@INBOOK{ahmadzadeh2015chaptervsl,
  AUTHOR    = {Ahmadzadeh, Seyed Reza and Kormushev, Petar},
  EDITOR    = {Busoniu, Lucian and Tam{\'a}s, Levente},
  TITLE     = {Visuospatial Skill Learning},
  BOOKTITLE = {Handling Uncertainty and Networked Structure in Robot Control},
  YEAR      = {2015},
  PUBLISHER = {Springer International Publishing},
  ADDRESS   = {Cham},
  PAGES     = {75--99},
  ISBN      = {978-3-319-26327-4},
  DOI       = {10.1007/978-3-319-26327-4_4},
  URL       = {http://dx.doi.org/10.1007/978-3-319-26327-4_4}
}
DOI:
10.1007/978-3-319-26327-4_4
Abstract:
This chapter introduces Visuospatial Skill Learning (VSL), a novel interactive robot
learning approach. VSL is based on visual perception and allows a robot to acquire new skills by
observing a single demonstration while interacting with a tutor. The focus of VSL is placed on
achieving a desired goal configuration of objects relative to one another. For each demonstrated
action, VSL captures the object's context. This context is the basis of the visuospatial
representation and implicitly encodes the position of the object relative to multiple other
objects simultaneously. VSL is capable of learning and generalizing multi-operation skills from a
single demonstration, while requiring minimal a priori knowledge about the environment. Different
capabilities of VSL, such as learning and generalization of object reconfiguration, classification,
and turn-taking interaction, are illustrated through both simulation and real-world experiments.
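To give a rough intuition for the idea of matching a captured object context in a new scene, the sketch below searches a scene image for the best match of a stored context patch using a simple sum-of-squared-differences score. This is only an illustrative toy, not the chapter's implementation; the function name, the SSD matching criterion, and the synthetic grayscale arrays are all assumptions made for the example.

```python
import numpy as np

def find_context_match(scene, context):
    """Slide the stored context patch over the scene and return the
    top-left (row, col) of the best match, i.e. the position with the
    minimum sum of squared differences (SSD)."""
    H, W = scene.shape
    h, w = context.shape
    best_score, best_pos = float("inf"), (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            ssd = np.sum((scene[y:y + h, x:x + w] - context) ** 2)
            if ssd < best_score:
                best_score, best_pos = ssd, (y, x)
    return best_pos

# Synthetic demo: embed a 2x2 "object context" at (3, 4) in a 10x10 scene,
# then recover its position by matching.
scene = np.zeros((10, 10))
context = np.array([[1.0, 2.0], [3.0, 4.0]])
scene[3:5, 4:6] = context
print(find_context_match(scene, context))  # (3, 4)
```

In practice a robust matcher (e.g. normalized cross-correlation) would be used instead of raw SSD, but the brute-force search above is enough to show how a captured context can localize an object's goal position in a new observation.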
