Target reaching by using visual information and Q-learning controllers

Year: 2000

Authors: Distante C., Anglani A., Taurisano F.

Authors' Affiliation: Dipartimento di Ingegneria dell'Innovazione, via Arnesano, Università di Lecce, 73100 Lecce, Italy; Signal and Image Processing Institute IESI – CNR, via Amendola 166/5, 70126 Bari, Italy

Abstract: This paper presents a solution to a manipulation control problem: target identification and grasping. The proposed controller is designed for a real platform combined with a monocular vision system. Its objective is to learn an optimal policy for reaching and grasping a spherical object of known size, placed at random in the environment. To accomplish this, the task is treated as a reinforcement learning problem in which the controller learns the situation-action mapping by trial and error. The optimal policy is found with the Q-learning algorithm, a model-free reinforcement learning technique that rewards actions moving the arm closer to the target. The vision system uses geometrical computation to simplify segmentation of the moving target (a spherical object) and estimates the target parameters. To speed up learning, the knowledge acquired in simulation was ported to the real platform, a PUMA 560 industrial robot manipulator. Experimental results demonstrate the effectiveness of the adaptive controller, which does not require an explicit global target position but instead relies on direct perception of the environment.
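The abstract's core technique is the model-free Q-learning update with rewards for moving closer to the target. As a hedged illustration only — not the paper's actual controller, which operates in a visually perceived state space on a PUMA 560 — here is a minimal tabular Q-learning sketch on a hypothetical one-dimensional reaching task, where the arm moves left or right along a line of discrete positions until it reaches the target cell:

```python
import random

def train_q_learning(size=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy 1-D reaching task (illustrative only).

    States are positions 0..size-1; the target sits at position size-1.
    Actions move the 'arm' one step left (-1) or right (+1).
    """
    rng = random.Random(seed)
    actions = (-1, +1)
    target = size - 1
    Q = {(s, a): 0.0 for s in range(size) for a in actions}
    for _ in range(episodes):
        s = rng.randrange(size)  # target placed/arm started at random, as in the paper
        while s != target:
            # epsilon-greedy trial-and-error action selection
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), size - 1)
            # reward reaching the target; small step cost otherwise
            r = 1.0 if s2 == target else -0.1
            # model-free Q-learning update
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
            s = s2
    return Q

if __name__ == "__main__":
    Q = train_q_learning()
    # Follow the learned greedy policy from the far end of the line.
    s, target = 0, 4
    for _ in range(10):
        if s == target:
            break
        a = max((-1, +1), key=lambda act: Q[(s, act)])
        s = min(max(s + a, 0), 4)
    print("reached target:", s == target)
```

All names, state encodings, and reward values here are illustrative assumptions; the paper's learned policy maps visual observations of the sphere to arm actions rather than grid positions.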

Journal/Review: AUTONOMOUS ROBOTS

Volume: 9 (1)      Pages from: 41  to: 50

KeyWords: reinforcement learning; behavior based; robotic manipulator; visual servoing; Hough transform
DOI: 10.1023/A:1008972101435

ImpactFactor: 0.672
Citations: 13
data from “WEB OF SCIENCE” (Thomson Reuters) updated as of: 2024-11-10