Visuospatial Skill Learning for Robots

Post date: Jun 06, 2017

S. Reza Ahmadzadeh, Fulvio Mastrogiovanni, Petar Kormushev

Reference:

S. Reza Ahmadzadeh, Fulvio Mastrogiovanni, Petar Kormushev, "Visuospatial Skill Learning for Robots",

arXiv preprint arXiv:1706.00989, 2017.

Bibtex Entry:

@ARTICLE{ahmadzadeh2017visuospatial,
  TITLE   = {Visuospatial Skill Learning for Robots},
  AUTHOR  = {Ahmadzadeh, S. Reza and Mastrogiovanni, Fulvio and Kormushev, Petar},
  JOURNAL = {arXiv preprint arXiv:1706.00989},
  YEAR    = {2017},
  PAGES   = {1--24},
  MONTH   = {June},
}

Abstract:

A novel skill learning approach is proposed that allows a robot to acquire human-like visuospatial skills for object manipulation tasks. Visuospatial skills are attained by observing spatial relationships among objects through demonstrations. The proposed Visuospatial Skill Learning (VSL) is a goal-based approach that focuses on achieving a desired goal configuration of objects relative to one another while maintaining the sequence of operations. VSL is capable of learning and generalizing multi-operation skills from a single demonstration, while requiring minimal prior knowledge about the objects and the environment. In contrast to many existing approaches, VSL offers simplicity, efficiency, and user-friendly human-robot interaction. We also show that VSL can be easily extended to 3D object manipulation tasks, simply by employing point cloud processing techniques. In addition, a robot learning framework, VSL-SP, is proposed by integrating VSL, Imitation Learning, and a conventional planning method. In VSL-SP, the sequence of performed actions is learned using VSL, while the sensorimotor skills are learned using a conventional trajectory-based learning approach. Such integration easily extends robot capabilities to novel situations, even for users without programming ability. In VSL-SP, the internal planner of VSL is integrated with an existing action-level symbolic planner. The symbolic representation of the task is updated using the underlying constraints of the task and the symbolic predicates extracted by VSL. Therefore, the planner maintains a generalized representation of each skill as a reusable action, which can be used in planning and performed independently during the learning phase. The proposed approach is validated through several real-world experiments.
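To make the goal-based idea more concrete, below is a minimal, hypothetical Python sketch. It assumes a simplified world in which object poses are already known, whereas the actual VSL method operates on raw visual observations (image or point-cloud patches) captured before and after each demonstrated operation; the function names, the reference-object convention, and the toy scene are illustrative only and are not taken from the paper.

import numpy as np

# Simplified, hypothetical sketch of the goal-based idea behind VSL:
# during the demonstration we record, for each operation, the pose of the
# manipulated object relative to a reference object after the operation;
# during reproduction we replay the same sequence of relative goal
# configurations starting from a new initial arrangement of the objects.

def record_demonstration(operations):
    """operations: list of (moved_object, reference_object, poses_before, poses_after).
    Store the relative goal pose of the moved object w.r.t. the reference."""
    skill = []
    for moved, ref, _before, after in operations:
        rel_goal = after[moved] - after[ref]   # desired relative configuration
        skill.append((moved, ref, rel_goal))
    return skill

def reproduce(skill, poses):
    """Apply the learned sequence of relative goal configurations to a new
    initial arrangement (poses: dict of object name -> np.array position)."""
    plan = []
    poses = {k: v.copy() for k, v in poses.items()}
    for moved, ref, rel_goal in skill:
        target = poses[ref] + rel_goal         # goal pose in the new scene
        plan.append((moved, poses[moved].copy(), target))
        poses[moved] = target                  # assume the pick-and-place succeeds
    return plan

# Toy demonstration: place 'cube' next to 'base', then put 'lid' on 'cube'.
demo = [
    ("cube", "base",
     {"cube": np.array([0.4, 0.1]), "base": np.array([0.0, 0.0]), "lid": np.array([0.6, 0.3])},
     {"cube": np.array([0.1, 0.0]), "base": np.array([0.0, 0.0]), "lid": np.array([0.6, 0.3])}),
    ("lid", "cube",
     {"cube": np.array([0.1, 0.0]), "base": np.array([0.0, 0.0]), "lid": np.array([0.6, 0.3])},
     {"cube": np.array([0.1, 0.0]), "base": np.array([0.0, 0.0]), "lid": np.array([0.1, 0.05])}),
]

skill = record_demonstration(demo)
new_scene = {"cube": np.array([0.5, 0.4]), "base": np.array([0.2, 0.2]), "lid": np.array([0.7, 0.6])}
for moved, start, goal in reproduce(skill, new_scene):
    print(f"pick {moved} at {start}, place at {goal}")

Because only the relative goal configurations and their order are stored, the same single demonstration generalizes to a scene where the objects start in different absolute positions, which is the behavior the abstract describes as goal-based, multi-operation generalization.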