Journal Papers

Visuospatial Skill Learning for Robots

posted Jun 5, 2017, 7:03 PM by Reza A   [ updated Jun 23, 2017, 9:07 PM ]

S. Reza Ahmadzadeh, Fulvio Mastrogiovanni, Petar Kormushev

Reference:
S. Reza Ahmadzadeh, Fulvio Mastrogiovanni, Petar Kormushev, "Visuospatial Skill Learning for Robots",
arXiv preprint arXiv:1706.00989, 2017.
Bibtex Entry:
@ARTICLE{ahmadzadeh2017visuospatial,
  TITLE   = {Visuospatial Skill Learning for Robots},
  AUTHOR  = {Ahmadzadeh, S. Reza and Mastrogiovanni, Fulvio and Kormushev, Petar},
  JOURNAL = {arXiv preprint arXiv:1706.00989},
  PAGES   = {1--24},
  MONTH   = {June},
  YEAR    = {2017}
}
Abstract:
A novel skill learning approach is proposed that allows a robot to acquire human-like visuospatial
skills for object manipulation tasks. Visuospatial skills are attained by observing spatial
relationships among objects through demonstrations. The proposed Visuospatial Skill Learning (VSL)
is a goal-based approach that focuses on achieving a desired goal configuration of objects relative
to one another while maintaining the sequence of operations. VSL is capable of learning and
generalizing multi-operation skills from a single demonstration while requiring minimal prior
knowledge about the objects and the environment. In contrast to many existing approaches, VSL
offers simplicity, efficiency, and user-friendly human-robot interaction. We also show that VSL
can easily be extended to 3D object manipulation tasks simply by employing point cloud
processing techniques. In addition, a robot learning framework, VSL-SP, is proposed by
integrating VSL, imitation learning, and a conventional planning method. In VSL-SP, the
sequence of performed actions is learned using VSL, while the sensorimotor skills are learned
using a conventional trajectory-based learning approach. Such integration easily extends robot
capabilities to novel situations, even for users without programming ability. In VSL-SP, the
internal planner of VSL is integrated with an existing action-level symbolic planner. Using the
underlying constraints of the task and the symbolic predicates extracted by VSL, the symbolic
representation of the task is updated. The planner therefore maintains a generalized
representation of each skill as a reusable action, which can be used in planning and performed
independently during the learning phase.
The proposed approach is validated through several real-world experiments.
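
Code Sketch:
As a rough illustration of the goal-based idea (not the paper's actual image-based implementation), the sketch below records each demonstrated operation as the goal position of the moved object relative to a reference object, then replays the sequence in a new scene. All names, and the reduction of object poses to 2-D positions, are assumptions made for brevity.

import numpy as np

def learn_vsl_operations(demo_frames):
    """Learn one relative goal configuration per demonstrated operation.

    demo_frames: list of (pre, post) pairs, where pre and post map object
    names to 2-D position arrays captured before and after each operation.
    """
    operations = []
    for pre, post in demo_frames:
        # The moved object is the one whose position changed during the operation.
        moved = next(k for k in pre if not np.allclose(pre[k], post[k]))
        # Pick any other object as the spatial reference (a hypothetical choice;
        # VSL itself infers the relevant spatial relationships from observation).
        ref = next(k for k in pre if k != moved)
        # Storing the goal relative to the reference lets the skill generalize
        # to scenes where the objects start in different absolute positions.
        operations.append((moved, ref, post[moved] - post[ref]))
    return operations

def reproduce(operations, current_poses):
    """Replay the learned operations in order on a new scene, yielding
    (object, absolute goal position) pairs for a motion layer to execute."""
    for moved, ref, rel_goal in operations:
        yield moved, current_poses[ref] + rel_goal

For example, after a single demonstration that places a cup beside a plate, reproduce() computes a new goal for the cup from the plate's position in the current scene, preserving the demonstrated order of operations.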


Towards Autonomous Robotic Valve Turning

posted May 29, 2016, 9:08 AM by Reza A   [ updated Jun 26, 2017, 1:43 PM ]

Arnau Carrera, Seyed Reza Ahmadzadeh, Arash Ajoudani, Petar Kormushev, Marc Carreras, Darwin G. Caldwell

Reference:
Arnau Carrera, S. Reza Ahmadzadeh, Arash Ajoudani, Petar Kormushev, Marc Carreras, Darwin G.
Caldwell, "Towards Autonomous Robotic Valve Turning", Cybernetics and Information Technologies,
vol. 12, no. 3, pp. 17-26, 2012.
Bibtex Entry:
@ARTICLE{carrera2012towards,
  TITLE   = {Towards Autonomous Robotic Valve Turning},
  AUTHOR  = {Carrera, Arnau and Ahmadzadeh, Seyed Reza and Ajoudani, Arash and Kormushev, Petar and Carreras, Marc and Caldwell, Darwin G.},
  JOURNAL = {Cybernetics and Information Technologies},
  VOLUME  = {12},
  NUMBER  = {3},
  PAGES   = {17--26},
  YEAR    = {2012},
  DOI     = {10.2478/cait-2012-0018}
}
DOI:
10.2478/cait-2012-0018
Abstract:
This paper describes an autonomous robotic intervention task: learning the skill of grasping
and turning a valve. To address this challenge, a set of techniques is proposed, each one
performing a specific subtask and exchanging information with the others in a
Hardware-in-the-Loop (HIL) simulation. To improve the estimate of the valve position, an
Extended Kalman Filter is designed. To learn the trajectory the robotic arm should follow, an
imitation learning approach is used. In addition, to perform the task safely, a fuzzy system
is developed that generates appropriate decisions. Although the resulting skill is intended
for an Autonomous Underwater Vehicle, as a first step the approach has been tested in a
laboratory environment with an available robot and sensor.
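
Code Sketch:
The abstract mentions an Extended Kalman Filter for improving the valve position estimate. Below is a minimal sketch of one EKF iteration for a static 2-D valve observed through a hypothetical range-bearing sensor; the paper's actual state vector and measurement equations are not reproduced here, so the dimensions and models are assumptions.

import numpy as np

def ekf_step(x, P, z, Q, R):
    """One EKF predict/correct cycle.

    x : (2,) estimated valve position [px, py]
    P : (2, 2) estimate covariance
    z : (2,) measurement [range, bearing] taken from the origin
    Q, R : process and measurement noise covariances
    """
    # Predict: the valve is assumed static, so the motion model is identity.
    x_pred = x
    P_pred = P + Q

    # Nonlinear measurement model h(x) = [range, bearing].
    px, py = x_pred
    rng = np.hypot(px, py)
    h = np.array([rng, np.arctan2(py, px)])

    # Jacobian of h evaluated at the predicted state.
    H = np.array([[ px / rng,     py / rng],
                  [-py / rng**2,  px / rng**2]])

    # Standard EKF correction, wrapping the bearing innovation to [-pi, pi).
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    innovation = z - h
    innovation[1] = (innovation[1] + np.pi) % (2 * np.pi) - np.pi
    x_new = x_pred + K @ innovation
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

Repeated measurements shrink P, so the arm's target pose converges toward the true valve position even when individual sensor readings are noisy.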

