
Learning Symbolic Representations of Actions from Human Demonstrations

Seyed Reza Ahmadzadeh, Ali Paikan, Fulvio Mastrogiovanni, Lorenzo Natale, Petar Kormushev, Darwin G. Caldwell

Reference:
Seyed Reza Ahmadzadeh, Ali Paikan, Fulvio Mastrogiovanni, Lorenzo Natale, Petar Kormushev,
Darwin G. Caldwell, "Learning Symbolic Representations of Actions from Human Demonstrations",
in Proc. IEEE International Conference on Robotics and Automation (ICRA 2015), Seattle, WA,
USA, pp. 3801–3808, 26–30 May 2015.
Bibtex Entry:
@INPROCEEDINGS{ahmadzadeh2015learning,
  TITLE        = {Learning Symbolic Representations of Actions from Human Demonstrations},
  AUTHOR       = {Ahmadzadeh, Seyed Reza and Paikan, Ali and Mastrogiovanni, Fulvio and Natale, Lorenzo and Kormushev, Petar and Caldwell, Darwin G.},
  BOOKTITLE    = {Robotics and Automation ({ICRA}), {IEEE} International Conference on},
  PAGES        = {3801--3808},
  YEAR         = {2015},
  MONTH        = {May},
  ADDRESS      = {Seattle, Washington, USA},
  ORGANIZATION = {{IEEE}},
  DOI          = {10.1109/ICRA.2015.7139728}
}
DOI:
10.1109/ICRA.2015.7139728
Abstract:
In this paper, a robot learning approach is proposed that integrates Visuospatial Skill Learning,
Imitation Learning, and conventional planning methods. In our approach, the sensorimotor skills
(i.e., actions) are learned through a learning-from-demonstration strategy, and the sequence of
performed actions is learned from demonstrations using Visuospatial Skill Learning. A standard
action-level planner is then used to represent each skill in a discrete, symbolic form. The
Visuospatial Skill Learning module identifies the underlying constraints of the task and extracts
symbolic predicates (i.e., action preconditions and effects), updating the planner representation
while the skills are being learned. The planner therefore maintains a generalized representation
of each skill as a reusable action, which can be planned and performed independently during the
learning phase. Preliminary experimental results on the iCub robot are presented.
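
To give readers a concrete picture of the symbolic predicates mentioned above, the short Python sketch below shows one simple way action preconditions and effects could be derived from a demonstration: by comparing the sets of ground predicates observed before and after the action. This is an illustrative assumption for this page, not the method described in the paper; the predicate names and the pick-and-place scenario are hypothetical.

    # Sketch: derive a STRIPS-like action model from symbolic world states
    # observed around one demonstrated action (hypothetical example, not
    # the paper's implementation).

    def extract_action_model(before, after):
        """`before` and `after` are sets of ground predicates, e.g.
        {("clear", "A"), ("on", "A", "B")}."""
        preconditions = set(before)   # naive: everything true beforehand
        add_effects = after - before  # predicates the action made true
        del_effects = before - after  # predicates the action made false
        return preconditions, add_effects, del_effects

    # Hypothetical demonstration: stacking block A onto block B.
    before = {("on", "A", "table"), ("on", "B", "table"),
              ("clear", "A"), ("clear", "B")}
    after = {("on", "A", "B"), ("on", "B", "table"), ("clear", "A")}

    pre, add, delete = extract_action_model(before, after)
    print("PRE:", sorted(pre))
    print("ADD:", sorted(add))     # [("on", "A", "B")]
    print("DEL:", sorted(delete))  # [("clear", "B"), ("on", "A", "table")]

In STRIPS-style planning, these three sets specify a reusable action schema that a standard action-level planner can compose with other actions, which is the sense in which each learned skill becomes independently plannable.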
