Design and Implementation of a Fully Autonomous UAV's Navigator Based on Omni-directional Vision System

Seyed Mohammadreza Kasaei
Department of Computer Engineering, University of Isfahan, Isfahan, 81746, Iran
Mrezamk2005@yahoo.com

Kamal Jamshidi
Department of Computer Engineering, University of Isfahan, Faculty of Engineering, Isfahan, 81746, Iran
Jamshidi@ui.ac.ir

Seyed Amir Hassan Monajemi
Department of Computer Engineering, University of Isfahan, Faculty of Engineering, Isfahan, 81746, Iran

Abstract
Unmanned Aerial Vehicles (UAVs) are the subject of increasing interest in many applications, and have seen more widespread use in military, scientific, and civilian sectors in recent years. Autonomy is one of the major advantages of these vehicles, and it is therefore necessary to develop particular sensors that provide efficient navigation functions. In our system, the helicopter is stabilized with visual information fed through the control loop. Omni-directional vision can be a useful sensor for this purpose, either as the only sensor or as a complementary one. In this paper, we propose a novel method for path planning on a UAV based on an electrical potential, using an omni-directional vision system for navigation and path planning.

Keywords: UAV, micro helicopter, omni-directional vision, path planning, visual servoing.

1. Introduction
Unmanned helicopters are widely used for aerial work, since they perform particular behaviors such as vertical takeoff/landing, hovering and sideslip. Moreover, micro helicopters can be used in narrow spaces, and much effort has been devoted to the development and control of micro helicopters [1]. Computer vision plays an important role in the automatic control of unmanned helicopters [2, 3]. For example, cameras are useful for sensing the position of a vehicle relative to the world coordinate frame.
Vision-based control is also powerful in take-offs and landings, since vision provides the precise position relative to a desired landing point. Automatic control approaches for unmanned helicopters do not require skilled operators, and much effort has been devoted to the development of autonomous control systems for unmanned helicopters (see for example [5, 6]). Visual servoing is an approach to controlling the motion of a helicopter using information fed back from a camera mounted on it. Because of their tremendous potential applications in various areas, including environmental monitoring and anti-terrorism, unmanned small helicopters have been extensively studied in robotics and control in recent years. However, research progress in the dynamic control of small helicopters is limited by their highly coupled non-linear dynamics and the existence of various uncertainties. Many researchers have studied controller design based on a linearized or simplified model, but controllers developed under linearized models cannot guarantee dynamic stability rigorously. Another line of work applies modern non-linear control theory to helicopter control, because small helicopters are good test beds for sophisticated control techniques due to their small size and highly coupled dynamics [4].

BRAIN. Broad Research in Artificial Intelligence and Neuroscience, Volume 2, Issue 4, December 2011, ISSN 2067-3957 (online), ISSN 2068-0473 (print)

Figure 1. The use of visual servo control for helicopter stabilization

Visual feedback control systems can be divided into two main classes: position-based control and image-based control [9], [11]. Position-based control estimates the pose of the controlled object with respect to the camera in the task space (Cartesian space). The reference signal is defined in the task space, and the control input is computed using the errors between the pose of the controlled object and the reference.
This class of systems requires 3D reconstruction of the controlled object by fusing image features, and position-based control may therefore become sensitive to camera calibration errors. On the other hand, image-based control defines the reference signal in the image space and uses the image Jacobian to compute the control input. The image Jacobian, say J, gives a transformation from the tangent space of the generalized coordinate space to the tangent space of the image space. In this paper, we propose a novel method for navigation and path planning on a UAV based on an electrical potential and an omni-directional vision system. We first describe the vision system configuration, then focus on our approaches to image processing and feature extraction, and then on trajectory planning. Finally, we conclude the paper.

2. Vision system configuration
Omni-directional vision systems can provide images with a 360-degree field of view. For this reason, they are useful in robotic applications such as navigation, teleoperation and visual servoing. Omni-directional vision systems are devices that combine reflective elements (catoptrics) and refractive systems (dioptrics) to form a projection onto the image plane of the camera. They can be classified as central or non-central catadioptric cameras according to the single effective viewpoint criterion. Baker and Nayar [14,15] define the configurations that satisfy the single-viewpoint constraint, finding that a central catadioptric system can be built by combining a perspective camera with a hyperbolic, elliptical or planar mirror, or by using an orthographic camera with a parabolic mirror. Geyer and Daniilidis [16,17] proposed a unified model for the projective geometry induced by central catadioptric systems, showing that these projections are isomorphic to a projective mapping from a sphere (centered on the effective viewpoint) to a plane with the projection center on the axis perpendicular to the plane.
A modified version of this unified model is presented by Barreto and Araujo in [18,19], where the mapping between points in the 3D world and points in the catadioptric image plane is split into three steps:
1. A linear function maps the world into an oriented projective plane.
2. A non-linear function transforms points between two oriented projective planes.
3. A collineation function depending on the mirror parameters and the camera calibration matrix (intrinsic parameters).

Table 1. Parameters ψ and ξ for central catadioptric systems (d is the distance between the foci and 4p is the latus rectum)

      Parabolic   Hyperbolic           Elliptical           Planar
ξ     1           d/√(d²+4p²)          d/√(d²+4p²)          0
ψ     1+2p        (d+2p)/√(d²+4p²)     (d−2p)/√(d²+4p²)     1

Fig. 2 shows the general unit-sphere projection used for modelling catadioptric systems. Consider a point X = (x, y, z) in space (visible to the catadioptric system), with Cartesian coordinates in the catadioptric reference frame (focus). This point is mapped onto a point Xs on the unit sphere centered on the effective viewpoint by Eq. (1):

Xs = X / ‖X‖ = (x, y, z) / √(x² + y² + z²)    (1)

Figure 2. Catadioptric projection modelled by the unit sphere

To each projective point Xs corresponds a projective point in a coordinate system with its origin at the camera projection center. This projection is a non-linear mapping between two projective planes and is defined by Eq. (2):

x̄ = (xs / (zs + ξ), ys / (zs + ξ), 1)    (2)

where (xs, ys, zs) are the coordinates of Xs and the parameter ξ is given in Table 1.

Since the beginnings of UAV research, map building has been one of the problems most addressed by researchers. Several researchers have used omni-directional vision for robot navigation and map building. Because of the wide field of view of omni-directional sensors, the robot does not need to look around by moving parts (cameras or mirrors).
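As an illustration, the three-step mapping of Eqs. (1) and (2) can be sketched as follows. This is a minimal sketch, not our implementation: the collineation is reduced to a plain pinhole step, and the mirror parameter xi, focal length f and principal point (cx, cy) are illustrative values rather than calibration data.

```python
import math

# Sketch of the unified catadioptric projection (unit-sphere model).
# xi, f, cx, cy are illustrative values, not calibration data.

def project_catadioptric(X, xi, f, cx, cy):
    """Project a 3D point X = (x, y, z), expressed in the effective-viewpoint
    frame, onto the catadioptric image plane."""
    x, y, z = X
    # Step 1: map the point onto the unit sphere centred on the
    # effective viewpoint (Eq. 1): Xs = X / ||X||.
    n = math.sqrt(x * x + y * y + z * z)
    xs, ys, zs = x / n, y / n, z / n
    # Step 2: non-linear mapping between the two oriented projective planes;
    # the projection centre is displaced by xi along the sphere axis (Eq. 2).
    mx, my = xs / (zs + xi), ys / (zs + xi)
    # Step 3: collineation -- here reduced to the pinhole intrinsics
    # (focal length f, principal point (cx, cy)).
    return f * mx + cx, f * my + cy

# A point on the mirror axis projects to the principal point:
print(project_catadioptric((0.0, 0.0, 2.0), xi=1.0, f=400.0, cx=320.0, cy=240.0))
```

Note that xi = 1 corresponds to the parabolic case in Table 1; for a hyperbolic mirror xi would be computed from d and p.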
The global view offered by omni-directional vision is especially suitable for highly dynamic environments. Our omni-directional vision system consists of a hyperbolic mirror, a USB color digital camera (Logitech C905) and a regulation device.

3. Image processing and feature extraction
Color space: We initially used the HSI color space for segmentation, but in practice we found some limitations, especially in the I and H parameters. Searching for a more suitable color space, we concluded that we should combine the parameters of different color spaces and introduce a new one [7]. In this way we formed a color space named H'SY, where H' is taken from CIELab, S from HSI and Y from YIQ. We have obtained satisfactory results with this new color space. The reason for this selection is that the component Y in YIQ represents the conversion of a color image into a monochrome image; therefore, compared with I in HSI, which is simply the average of R, G and B, the component Y is a better means of measuring the pixel intensity in an image. The component H' in CIELab is defined as follows:

H' = tan⁻¹(b*/a*)    (3)

where a* denotes relative redness-greenness and b* relative yellowness-blueness. The reason for selecting H' is that it has been reported to be a good measure for detecting regions matching a given color.

Manual segmentation training: The output of the camera in this system is in the RGB color space. In the training stage, we put the UAV in the field and run a program which converts the RGB values of each pixel into H'SY components as described above. To obtain a relatively good segmentation, we find by trial and error the minimum and maximum values of each of the parameters H', S and Y such that the interval in between covers, for example, almost all tones of red seen in the specific object. Doing this, we put the object and the UAV in different locations on the field and update the interval range for red. The same procedure is done for the remaining colors (other objects).
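The per-pixel RGB-to-H'SY conversion described above can be sketched as follows. This is a minimal sketch assuming sRGB input and a D65 white point, which are assumptions on our part; it uses the standard CIELab hue angle for H' (Eq. 3), HSI saturation for S, and the YIQ luma for Y.

```python
import math

# Sketch of the H'SY conversion: H' = CIELab hue angle atan2(b*, a*),
# S = HSI saturation, Y = YIQ luma.  sRGB/D65 constants are standard
# published values; the camera's actual RGB working space is assumed.

def _srgb_to_lab_ab(r, g, b):
    """Return (a*, b*) of an sRGB pixel (components in [0, 1])."""
    def lin(c):  # inverse sRGB gamma
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # sRGB -> XYZ (D65 white point)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):
        return t ** (1.0 / 3.0) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 500.0 * (fx - fy), 200.0 * (fy - fz)

def rgb_to_hsy(r, g, b):
    """Convert one RGB pixel (components in [0, 255]) to (H', S, Y)."""
    rn, gn, bn = r / 255.0, g / 255.0, b / 255.0
    a_star, b_star = _srgb_to_lab_ab(rn, gn, bn)
    h_prime = math.atan2(b_star, a_star)                 # Eq. (3), in radians
    s = 0.0 if r + g + b == 0 else 1.0 - 3.0 * min(rn, gn, bn) / (rn + gn + bn)
    y = 0.299 * rn + 0.587 * gn + 0.114 * bn             # YIQ luma
    return h_prime, s, y

print(rgb_to_hsy(255, 0, 0))   # pure red: fully saturated, Y = 0.299
```

In the training stage, the interval [min, max] of each of the three components would then be collected over the labelled pixels of each object.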
At the end of the training stage a LUT (Look-Up Table) is constructed which can assign a color code to any RGB value (each object has a specific color code).

Auto segmentation training: Manual segmentation training was hard, very time consuming and reliant on the operator's accuracy, so we developed a color calibration algorithm using an SVM (Support Vector Machine). In this subsection, we present a color recognition algorithm based on the SVM. The SVM is a classification algorithm with high generality, since it computes a separating hyperplane that maximizes the margin between classes (Fig. 3) [8]. In our algorithm, the SVM is trained on the H'SY values of the classes and the mean of the H'SY values of the obtained image. After training, the obtained image is binarized by setting the maximum and minimum values in the distribution of each class as thresholds.

Figure 3. The principle of SVM.

For instance, the image in Fig. 4 was binarized manually (by a person experienced in color adjustment) and by using the SVM, resulting in Fig. 5.

Figure 4. Images binarized by hand.

Figure 5. Images binarized by SVM.

To evaluate the performance of the developed algorithm, we compared ten different images obtained from our experimental field. We consider the similarity between the manually and SVM-binarized images as the recognition rate of the SVM. The results are shown in Table 2. The mean recognition rate is over 90 percent for each class. Converting the result of the SVM classification into the binarized image takes about 15 seconds.

Table 2. Experiment result.
Color                 Red     Black   Green   White
Recognition rate (%)  98      96.05   94.28   98.57

Object detection algorithm: The image processing algorithm first filters the image using the LUT built during segmentation training, then finds the contiguous regions with either a breadth-first or a depth-first search, and finally extracts object positions by looking them up in an image-to-ground map table. The algorithm used to find objects is optimized to process the maximum number of frames. First, it scans the pixels with a certain step size; when it finds a desired pixel and detects the corresponding object, it saves the object's coordinates so that in the next frame the search can start near the same point. We are also evaluating new methods for finding some kinds of objects based on pattern recognition, to reduce the effect of color changes on the algorithm.

4. Trajectory
Since the motion trajectory of the UAV is divided into several intermediate points that the UAV should reach one by one in sequence, the output obtained after the execution of the AI will be a set of position and velocity vectors. The task of the trajectory module is thus to guide the UAV through the obstacles to reach the destination. The routine used for this purpose is the potential field method (an alternative new method, which models the UAV's motion through obstacles in the same way as the flowing of a bulk of water around obstacles, is also in progress) [5]. In this method, different electrical charges are assigned to the UAV, the obstacles and the destination. Then, by calculating the potential field of this system of charges, a path is suggested for the UAV. At a higher level, predictions can be used to anticipate the positions of the obstacles and make better decisions in order to reach the desired vector. In our path-planning algorithm, an artificial potential field is set up in the space; that is, each point in the space is assigned a scalar value. The value at the goal point is set to 0 and the value of the potential at all other points is positive.
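A minimal sketch of such an artificial potential field on a discrete grid is given below. The particular choices here (a distance-to-goal attractive term, an inverse-distance repulsive term, and the weight k_rep) are illustrative assumptions, not the exact force definitions used on the UAV; the planner follows the steepest descent by always moving to the cheapest neighbouring cell.

```python
# Sketch of a grid potential field: each cell gets a goal term that grows
# with distance to the goal plus an obstacle term that grows as the nearest
# obstacle gets closer.  Weights and terms are illustrative choices.

def build_potential(w, h, goal, obstacles, k_rep=2.0):
    """Scalar potential per cell: 0 at the goal, positive elsewhere."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    field = {}
    for x in range(w):
        for y in range(h):
            p = (x, y)
            u_goal = dist(p, goal)                                   # attraction
            d_obs = min((dist(p, o) for o in obstacles), default=float("inf"))
            u_obs = float("inf") if d_obs == 0 else k_rep / d_obs    # repulsion
            field[p] = u_goal + u_obs
    field[goal] = 0.0          # the goal is fixed at potential zero
    return field

def descend(field, start, goal, w, h, max_steps=500):
    """Follow the steepest slope: always move to the cheapest neighbour."""
    path, p = [start], start
    for _ in range(max_steps):
        if p == goal:
            break
        nbrs = [(p[0] + dx, p[1] + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)
                and 0 <= p[0] + dx < w and 0 <= p[1] + dy < h]
        p = min(nbrs, key=lambda q: field[q])
        path.append(p)
    return path

field = build_potential(10, 10, goal=(9, 9), obstacles=[(5, 5)])
path = descend(field, (0, 0), (9, 9), 10, 10)
print(path[0], path[-1])     # the path starts at (0, 0) and ends at the goal
```

On this toy grid the path bends around the obstacle cell at (5, 5) before reaching the goal.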
The potential at each point has two contributions: a goal force that causes the potential to increase with path distance from the goal, and an obstacle force that increases in inverse proportion to the distance to the nearest obstacle boundary (Figures 6 and 7).

Figure 6. Goal force

In other words, the potential is lowest at the goal, large at points far from the goal, and large at points next to obstacles:

U(q) = U_goal(q) + U_obs(q)    (4)

If the potential is suitably defined, then if a UAV starts at any point in the space and always moves in the direction of the steepest negative potential slope, it will move towards the goal while avoiding obstacles. The numerical potential field path planner is guaranteed to produce a path even if the start or goal is placed in an obstacle. If there is no possible way to get from the start to the goal without passing through an obstacle, the path planner will generate a path through the obstacle, although if there is any alternative the path will take it instead. For this reason, it is important to make sure that some feasible path exists, although there are ways around this restriction, such as returning an error if the potential at the start point is too high. The path is found by moving to the neighboring square with the lowest potential, starting at any point in the space and stopping when the goal is reached.

Figure 7. Obstacle force (repulsive potential) and goal force

Figure 8. Potential at every point; it is highest in the obstacles and lowest at the goal.

5. Conclusion
We have presented a system for the fully autonomous navigation of a UAV based on an omni-directional vision system and image processing.
We explained the vision system configuration and the image processing and feature extraction methods, and finally suggested an algorithm based on a potential field for the navigation of a UAV.

Figure 9. Trajectory algorithm simulation.

References
[1] S. Bouabdallah, M. Becker, and R. Siegwart, "Autonomous miniature flying robots," IEEE Robotics and Automation Magazine, vol. 14, no. 3, pp. 88-98, 2007.
[2] B. Ludington, E. Johnson, and G. Vachtsevanos, "Augmenting UAV autonomy: vision-based navigation and target tracking for unmanned aerial vehicles," IEEE Robotics and Automation Magazine, vol. 13, no. 3, pp. 63-71, 2006.
[3] L. Merino, J. Wiklund, F. Caballero, A. Moe, J. Dios, P. Forssén, K. Nordberg, and A. Ollero, "Vision-based multi-UAV position estimation: Localization based on blob features for exploration missions," IEEE Robotics and Automation Magazine, vol. 13, no. 3, pp. 53-62, 2006.
[4] Yun-Hui Liu, Yipin Chen, and Hesheng Wang, "Adaptive visual servoing of autonomous helicopters," SICE Annual Conference, The University of Electro-Communications, Japan, August 20-22, 2008.
[5] S. Hamidreza Kasaei, S. Mohammadreza Kasaei, S. Alireza Kasaei, S. Amir Hassan Monadjemi, and Mohsen Taheri, "Modeling and implementation of a fully autonomous soccer robot based on omni-directional vision system," Industrial Robot: An International Journal, vol. 37, no. 3, 2010.
[6] L. Merino, J. Wiklund, F. Caballero, A. Moe, J. Dios, P. Forssén, K. Nordberg, and A. Ollero, "Vision-based multi-UAV position estimation: Localization based on blob features for exploration missions," IEEE Robotics and Automation Magazine, vol. 13, no. 3, pp.
53-62, 2006.
[7] M. Jamzad, B. S. Sajad, V. S. Mirrokni, M. Kazemi, H. Chitsaz, A. Heidarnoori, M. T. Hajiaghai, and E. Chiniforooshan, "A fast vision system for middle size robots in RoboCup," RoboCup 2001: Robot Soccer World Cup V, ISBN 978-3-540-43912-7, 2002.
[8] Amir A. F. Nassiraei, Shuichi Ishida, Noriyuki Shinpuku, Miyuki Hayashi, Naoya Hirao, Kazunori Fujimoto, Kazutaka Fukuda, Kazutomo Takanaka, Ivan Godler, Kazuo Ishii, and Hiroyuki Miyamoto, "Hibikino-Musashi Team Description Paper," in Proc. RoboCup, team descriptions of middle size robots, 2011.
[9] K. Hashimoto, "A review on vision-based control of robot manipulators," Advanced Robotics, vol. 17, no. 10, pp. 969-991, 2003.
[10] OpenGL Architecture Review Board, "OpenGL 2.0 Specification," available at http://www.opengl.org.
[11] S. Hutchinson, G. D. Hager, and P. I. Corke, "A tutorial on visual servo control," IEEE Trans. on Robotics and Automation, vol. 12, no. 5, pp. 651-670, 1996.
[12] M. J. Schulte and J. E. Stine, "Approximating elementary functions with symmetric bipartite tables," IEEE Trans. Computers, vol. 48, no. 8, pp. 842-847, 1999.
[13] M. J. Schulte and E. E. Swartzlander, "Hardware designs for exactly rounded elementary functions," IEEE Trans. Computers, vol. 43, no. 8, pp. 964-973, 1994.
[14] S. Nayar and S. Baker, "A theory of catadioptric image formation," Tech. Report CUCS-015-97, Department of Computer Science, Columbia University, 1997.
[15] S. Baker and S. K. Nayar, "A theory of single-viewpoint catadioptric image formation," International Journal of Computer Vision, vol. 35, no. 2, pp. 1-22, 1999.
[16] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical applications," in ECCV, vol. 2, 2000, pp. 445-461.
[17] C. Geyer and K. Daniilidis, "Catadioptric projective geometry," International Journal of Computer Vision, vol. 43, pp. 223-243, 2001.
[18] J. A. Barreto and H. Araujo, "Issues on the geometry of central catadioptric image formation," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 2, 2001, pp. II-422-II-427, doi:10.1109/CVPR.2001.990992.
[19] J. A. Barreto and H. Araujo, "Geometric properties of central catadioptric line images," in ECCV '02: Proceedings of the 7th European Conference on Computer Vision, Part IV, Springer-Verlag, London, UK, 2002, pp. 237-251.