International Journal of Computers, Communications & Control Vol. I (2006), No. 2, pp. 85-94

Coordinated Control of Mobile Robots Based on Artificial Vision

Carlos M. Soria, Ricardo Carelli, Rafael Kelly, Juan M. Ibarra Zannatha

Abstract: This work presents a control strategy for the coordination of multiple robots, based on artificial vision to measure the relative posture between them, in order to reach and maintain a specified formation. Given a leader robot that moves along an unknown trajectory with unknown velocity, a controller is designed to keep the follower robots behind the leader at a specified distance, using visual information about the position of the leader. The control system is proved to be asymptotically stable at the equilibrium point, which corresponds to the accomplishment of the navigation objective. Experimental results with two robots, a leader and a follower, are included to show the performance of the vision-based control system.

Keywords: Mobile Robots, Coordinated Robots, Vision Based Control, Artificial Vision.

1 Introduction

In recent years, efforts have been made to give autonomy to single mobile robots by using different sensors, actuators and advanced control algorithms. This has mainly been motivated by the need to carry out complex tasks autonomously, as demanded by service and production applications. Robots have thus become highly sophisticated systems. In some applications, a valid alternative (or even the mandatory solution) is the use of multiple simple robots which, operating in a coordinated way, can carry out complex tasks ([1]; [2]; [7]). This alternative offers additional advantages in terms of flexibility in operating the group of robots, and failure tolerance due to the redundancy of available mobile robots [6].

As the number of robots increases, the task of controlling the system becomes more complex. Control strategies can be classified as centralized or decentralized. In a centralized control system, all planning and control functions reside in a single control unit; each mobile robot has only a few simple sensors, its actuators, and a communication link to the control unit. Every motion and every conflict between robots is resolved by the control unit. The system is therefore vulnerable to any failure of that unit. With the decentralized control approach, on the other hand, each robot is equipped with multiple sensors and its own controller, thus becoming capable of perceiving the environment and taking its own control actions ([3]; [5]). The function of the control center, if one is available, is to assign tasks to each robot and to govern the information flow in the system. The coordinated control of robots allows a team of robots to perform missions that are not easy, or not viable, for a single robot.

The guidance of a mobile robot requires its localization in the environment [4]. Precise localization is needed when multiple robots share a common environment. Odometric sensors, sonar, gyros, lasers and vision, as well as their fusion, are commonly used for robot localization and environment modeling. Vision sensors are increasingly applied because of their ever-growing capability to capture information. This work proposes a strategy based on vision information captured with single cameras mounted on the follower robots, in order to obtain the relative posture of a leader robot and to guide a specified robot formation.
The controller design is then carried out using nonlinear control theory, and the overall control system is proved to be asymptotically stable.

The paper is organized as follows. The concept of robot formation is briefly discussed in Section 2 and the robot kinematic model in Section 3. The visual measurement of the posture of a leader robot is presented in Section 4. The controller design and the stability analysis are given in Section 5. Experimental results are discussed in Section 6. Finally, Section 7 presents some concluding comments.

2 Robot Formation

In this work the following formation strategy is considered. One of the robots is defined as the leader, and it moves along a trajectory that is unknown to the other, follower, robots. The leader has a pattern mounted on its back, which is observed by the follower robots to obtain information about their posture relative to the leader. This information is used to control their position so as to reach the specified formation. The visual information is obtained by a forward-looking camera mounted on each follower robot. Figure 1 shows typical triangle and convoy configurations that can be defined for three robots using this strategy.

Figure 1: a) Triangle formation. b) Convoy formation.

3 Mobile Robot Model

This work considers a unicycle-like mobile robot, which is described by the following kinematic equations:

$$\dot{x} = v \cos\varphi, \qquad \dot{y} = v \sin\varphi, \qquad \dot{\varphi} = \omega \qquad (1)$$

where (x, y) are the Cartesian coordinates of the robot position, ϕ is the robot heading or orientation angle, and v and ω are the robot's linear and angular velocities. The non-holonomic constraint for model (1) is

$$\dot{y} \cos\varphi - \dot{x} \sin\varphi = 0 \qquad (2)$$

which states that the velocity is tangent to any feasible trajectory of the robot. The reference point of the robot is taken as the midpoint between the two driven wheels (Fig. 2). With v1 and v2 denoting the linear speeds of the left and right wheels respectively, the linear and angular velocities of the robot can be expressed as v = (v1 + v2)/2 and ω = (v1 − v2)/L, where L is the distance between the two driven wheels.

Figure 2: Geometric description of the mobile robot.
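As a concrete illustration of model (1), the following Python sketch (not part of the original paper; the wheel speeds, wheel separation and step size are assumed values chosen only for the example) integrates the unicycle kinematics with a forward-Euler step:

```python
import math

def unicycle_step(x, y, phi, v1, v2, L, dt):
    """One forward-Euler step of the unicycle model (1).

    v1, v2: left/right wheel speeds [m/s]; L: distance between wheels [m].
    The velocities follow the paper's convention: v = (v1 + v2)/2,
    w = (v1 - v2)/L.
    """
    v = (v1 + v2) / 2.0
    w = (v1 - v2) / L
    x += v * math.cos(phi) * dt
    y += v * math.sin(phi) * dt
    phi += w * dt
    return x, y, phi

# Example: constant, slightly different wheel speeds trace a circular arc.
x, y, phi = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, phi = unicycle_step(x, y, phi, v1=0.45, v2=0.55, L=0.33, dt=0.1)
```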
4 Visual Measurement of the Leader Posture

The control objective is for the follower robots to track the leader robot, which moves with unknown motion in the working area. Each follower robot has been equipped with a forward-looking fixed vision camera. This camera captures the image of a pattern mounted on the leading vehicle, featuring four marks on a square of known side length E [m]. To ease the calculations, and without loss of generality, the horizontal median of this square is placed at the height of the camera image center. The pattern projected on the camera image appears with a perspective distortion, as represented in Fig. 3. The positions of the pattern's marks on the image, expressed in pixels, are (x_i, y_i) with i = A, B, C, D. These variables are taken as the image features.

Figure 3: Image of the pattern's marks.

From these image features, as measured by the vision system, it is possible to compute the posture of the leading (target) vehicle, (x_T, z_T, ϕ_T), expressed in a coordinate system (X_c, Z_c) attached to the camera. Fig. 4 shows a diagram of the horizontal projection of the vision system, with the posture of the leader in the camera coordinates of a follower robot, from which the following expressions are obtained:

$$x_T = \frac{x_R + x_L}{2}, \qquad z_T = \frac{z_R + z_L}{2}, \qquad \varphi_T = \cos^{-1}\!\left(\frac{x_R - x_L}{E}\right) \qquad (3)$$

By resorting to the inverse-perspective model, the quantities appearing in (3) can be computed as functions of the measurements supplied by the vision system (Fig. 4):

$$x_L = \frac{z_L - f}{f}\,x_A, \qquad x_R = \frac{z_R - f}{f}\,x_B, \qquad z_L = f\left(\frac{E}{h_L} + 1\right), \qquad z_R = f\left(\frac{E}{h_R} + 1\right) \qquad (4)$$

Using the variables from (3) and (4), and assuming that the camera is mounted at the robot's center, the posture (ϕ, θ, d) of the follower robot relative to the leader, defined as in Figs. 4 and 5, can then be calculated by the following expressions:

$$\phi = \tan^{-1}\!\left(\frac{z_R - z_L}{x_R - x_L}\right), \qquad \varphi = \tan^{-1}\!\left(\frac{x_T}{z_T}\right), \qquad \theta = \varphi + \phi, \qquad d = \sqrt{x_T^2 + z_T^2} \qquad (5)$$

Figure 4: Location of the leader with respect to the camera.

Figure 5: Relative position between the leader and a follower robot.

5 Visual Servo Control of the Mobile Robot

5.1 Controller

Figure 6 defines the control structure used in this work, where H represents the relationship between the variables defining the relative posture of the follower and the leader, and the image features ξ.

Figure 6: Control structure.

The control objective is defined as follows: given a leader robot that moves along an unknown trajectory with unknown velocity, make the follower robot keep a desired distance d_d to the leader while pointing towards it (that is, ϕ_d = 0), using only visual information (Fig. 5). More precisely, the control objective can be expressed as

$$\lim_{t\to\infty} e(t) = \lim_{t\to\infty} (d_d - d) = 0, \qquad \lim_{t\to\infty} \tilde{\varphi}(t) = \lim_{t\to\infty} (\varphi_d - \varphi) = 0 \qquad (6)$$

The evolution of the posture of the follower robot relative to the leader is given by the time derivatives of the two error variables. The rate of change of the distance error is the difference between the projections of the leader's velocity and the follower's velocity on the line connecting both vehicles:

$$\dot{e} = -v_T \cos\theta + v \cos\tilde{\varphi} \qquad (7)$$

Likewise, the rate of change of the angle error has three terms: the angular velocity of the follower robot, and the rotational effects of the linear velocities of both robots:

$$\dot{\tilde{\varphi}} = \omega + \frac{v_T \sin\theta}{d} + \frac{v \sin\tilde{\varphi}}{d} \qquad (8)$$

For the system dynamics expressed by (7) and (8), the following nonlinear controller is proposed to achieve the control objective (6):

$$v = \frac{1}{\cos\tilde{\varphi}}\left(v_T \cos\theta - f(e)\right), \qquad \omega = -f(\tilde{\varphi}) - \frac{v_T \sin\theta}{d} - \frac{v \sin\tilde{\varphi}}{d} \qquad (9)$$

In (9), f(e), f(ϕ̃) ∈ Ω, where Ω is the set of functions

$$\Omega = \left\{ f : \mathbb{R} \to \mathbb{R} \mid f(0) = 0 \ \text{and} \ x f(x) > 0 \ \forall x \neq 0 \right\}$$

In particular, the functions f(e) = k_e tanh(λ_e e) and f(ϕ̃) = k_ϕ̃ tanh(λ_ϕ̃ ϕ̃) are considered; they prevent the control actions from saturating. The variables θ, ϕ and d used by this controller are calculated from the image captured by the vision system, as in (5). Combining (7), (8) and (9), the closed-loop system is obtained:

$$\dot{e} = -f(e), \qquad \dot{\tilde{\varphi}} = -f(\tilde{\varphi}) \qquad (10)$$
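To make the measurement and control pipeline of Sections 4 and 5 concrete, the following Python sketch (not part of the original paper) evaluates the relative posture (3), (5) and the control law (9). It assumes that the camera-frame coordinates (x_L, z_L) and (x_R, z_R) of the pattern edges have already been recovered through the inverse-perspective relations (4); the function names are illustrative, and the gain values are those reported in Section 6.

```python
import math

def relative_posture(xL, zL, xR, zR):
    """Relative posture of the leader in the camera frame, eqs. (3) and (5).

    (xL, zL), (xR, zR): camera-frame coordinates of the pattern's left and
    right edges, obtained from the image features via eq. (4).
    """
    xT = (xR + xL) / 2.0                      # eq. (3)
    zT = (zR + zL) / 2.0
    phi_pat = math.atan2(zR - zL, xR - xL)    # pattern angle phi, eq. (5)
    varphi = math.atan2(xT, zT)               # bearing to the leader
    theta = varphi + phi_pat
    d = math.hypot(xT, zT)                    # distance to the leader
    return varphi, theta, d

def control_law(d, varphi, theta, vT, dd=0.50,
                ke=200.0, lam_e=0.005, kphi=10.0, lam_phi=0.1):
    """Nonlinear control law (9) with saturating gains f(x) = k*tanh(lambda*x).

    Assumes d > 0 and |phi_err| < pi/2, as required by the divisions below.
    Gain values are those used in the experiments of Section 6.
    """
    e = dd - d                  # distance error
    phi_err = -varphi           # angle error, with desired varphi_d = 0
    f_e = ke * math.tanh(lam_e * e)
    f_phi = kphi * math.tanh(lam_phi * phi_err)
    v = (vT * math.cos(theta) - f_e) / math.cos(phi_err)
    w = -f_phi - vT * math.sin(theta) / d - v * math.sin(phi_err) / d
    return v, w
```

The tanh saturation keeps |f(e)| ≤ k_e and |f(ϕ̃)| ≤ k_ϕ̃ no matter how large the errors become, which is what bounds the commanded velocities.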
5.2 Stability Analysis

Consider the system (10), which has a single equilibrium point at the origin. The Lyapunov candidate function

$$V = \frac{e^2}{2} + \frac{\tilde{\varphi}^2}{2} \qquad (11)$$

has a time derivative along the system trajectories given by

$$\dot{V} = -e\,f(e) - \tilde{\varphi}\,f(\tilde{\varphi}) \qquad (12)$$

Since f ∈ Ω, the derivative (12) is negative everywhere except at the origin, and the asymptotic stability of the equilibrium follows: e(t) → 0 and ϕ̃(t) → 0 as t → ∞.

It should be noted that the controller (9) requires the linear velocity v_T of the leader vehicle, which must be estimated from the available visual information. Approximating the derivative of the position error in (7) by the discrete difference between successive positions, and considering a 0.1 s sampling period, the leader velocity can be estimated as

$$\hat{v}_T = \frac{(d_k - d_{k-1})/0.1 + v\cos\tilde{\varphi}}{\cos\theta}$$
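A minimal sketch of this estimator follows (assuming the 0.1 s sampling period used in the experiments; the function and variable names are illustrative):

```python
import math

def estimate_leader_velocity(d_k, d_km1, v, phi_err, theta, Ts=0.1):
    """Estimate of the leader's linear velocity v_T, obtained by solving
    eq. (7) for v_T with the distance rate approximated by a backward
    difference over the sampling period Ts."""
    d_dot = (d_k - d_km1) / Ts
    return (d_dot + v * math.cos(phi_err)) / math.cos(theta)
```

Since the backward difference amplifies measurement noise, in practice some filtering of the measured distance d_k would typically precede this computation.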
6 Experiments

To evaluate the performance of the proposed coordination control algorithm, experiments were carried out with two Pioneer 2DX mobile robots (Fig. 7). Each robot has its own control system. The vision system includes a PXC200 frame grabber that captures the images from a SONY EVI-D30 camera mounted on the follower robot. These images are transmitted from the follower robot to a Pentium II 400 MHz PC, which processes the images and calculates the corresponding control actions. Figure 8(a) shows the image captured by the camera; this image is processed to obtain the image shown in Fig. 8(b). From this image, the centroids of the four projected marks of the pattern are calculated and used to compute the variables (θ, ϕ, d) needed by the controller. Finally, the computed control actions are sent by a transmitter to the follower robot.

For the experiments, the following parameter values were used: k_e = 200, λ_e = 0.005, k_ϕ̃ = 10 and λ_ϕ̃ = 0.1. The follower robot has to follow the leader while keeping a desired distance d_d = 0.50 m with ϕ_d = 0. Figure 9 shows the evolution of the distance between the robots, and Figure 10 the evolution of the angles ϕ and θ; from these figures, the accomplishment of the control objective can be verified. Fig. 11 depicts the control actions calculated and sent to the follower robot. The estimate of the leader robot's velocity is shown in Fig. 12, and the trajectory of the follower robot is depicted in Fig. 13.

Figure 7: Robots used in the experiment.

Figure 8: (a) Image captured by the robot's camera. (b) Processed image used to find the centroid of each mark.

Figure 9: Evolution of the distance to the target.

Figure 10: Evolution of angles ϕ and θ.

Figure 11: Calculated control actions.

Figure 12: Estimated velocity of the target.

Figure 13: Trajectory followed by the follower robot.

7 Conclusions

In this work, a nonlinear vision-based controller for the coordinated motion of mobile robots following a leader has been presented. The controller has been designed with state-dependent control gains that avoid saturation of the control actions. Using the Lyapunov method, the resulting control system has been proven to be asymptotically stable. The experiments demonstrate that the proposed control system accomplishes the control objective with good performance, leading to a specified formation in which the follower robots track the leader robot using only visual information.

8 Acknowledgment

This work was partially supported by ANPCyT and CONICET, Argentina. The authors thank the Science and Technology for Development Program (CYTED) for promoting the research cooperation between their groups.

References

[1] R. Alami, Multi-robot cooperation in the MARTHA project, IEEE Robotics and Automation Magazine, Vol. 5, No. 1, pp. 36-47, 1998.

[2] R. C. Arkin, Integrating behavioural, perceptual and world knowledge in reactive navigation, Journal of Robotics and Autonomous Systems, Vol. 6, No. 1, pp. 36-47, 1990.

[3] H. Asama, Operation of cooperative multiple robots using communication in a decentralised robotic system, Proc. Conf. From Perception to Action, Switzerland, 5-7 Sept., 1994.

[4] J. Borenstein, Navigating Mobile Robots - Systems and Techniques, A K Peters, Wellesley, MA, USA, 1996.

[5] R. A. Brooks, A robust layered control system for a mobile robot, IEEE Journal of Robotics and Automation, Vol. 2, No. 1, pp. 14-23, 1986.

[6] Y. Ishida, Functional complement by co-operation of multiple autonomous robots, Proc. IEEE Int. Conf. on Robotics and Automation, pp. 2476-2481, 1994.

[7] M. J. Mataric, Learning in Multi-Robot Systems, Lecture Notes in Artificial Intelligence - Adaptation and Learning in Multi-Agent Systems, Vol. 10, 1996.

Carlos M. Soria, Ricardo Carelli
Universidad Nacional de San Juan
Instituto de Automática
Av. San Martín Oeste 1109
5400 San Juan, Argentina
E-mail: csoria@inaut.unsj.edu.ar, rcarelli@inaut.unsj.edu.ar

Rafael Kelly
Centro de Investigación Científica y de Educación Superior de Ensenada
Ensenada, Baja California, 22800, México
E-mail: rkelly@cicese.mx

Juan M. Ibarra Zannatha
Laboratorio de Robótica del Departamento de Control Automático
Centro de Investigación y de Estudios Avanzados
Av. IPN No. 2508, Lindavista, 07360 México, DF
E-mail: jibarra@ctrl.cinvestav.mx